| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
996,630 | https://en.wikipedia.org/wiki/Senpai%20and%20k%C5%8Dhai | Senpai and kōhai are Japanese terms used to describe an informal hierarchical interpersonal relationship found in organizations, associations, clubs, businesses, and schools in Japan and in expressions of Japanese culture worldwide. The senpai (先輩, "senior") and kōhai (後輩, "junior") relationship has its roots in Confucianism, but has developed a distinctive Japanese style. The term senpai can also be considered one of the Japanese honorifics.
Concept
The relationship is an interdependent one, as a senpai requires a kōhai and vice versa, and it establishes a bond determined by the date of entry into an organization. Senpai refers to the member of higher experience, hierarchy, level, or age in the organization who offers assistance, friendship, and counsel to a new or inexperienced member, known as the kōhai, who must demonstrate gratitude, respect, and occasionally personal loyalty. The kōhai defers to the senpai's seniority and experience, and speaks to the senpai using honorific language. At the same time, the senpai acts as a friend. This relation is similar to the interpersonal relation between tutor and pupil in Eastern culture, but differs in that the senpai and kōhai must work in the same organization.
The relation originates in Confucian teaching, as well as the morals and ethics that have arrived in Japan from ancient China and have spread throughout various aspects of Japanese philosophy. The senpai–kōhai relation is a vertical hierarchy (like a father–son relation) that emphasizes respect for authority, for the chain of command, and for one's elders, eliminating all forms of internal competition and reinforcing the unity of the organization.
Over time this mechanism has allowed the transfer of experience and knowledge, as well as the expansion of acquaintances and the building of institutional memory. It also allows the development of beneficial experiences for both, as the kōhai benefits from the senpai's knowledge and the senpai gains new experiences from the kōhai while developing a sense of responsibility. This comradeship does not imply friendship; a senpai and kōhai may become friends, but such is not an expectation.
The Korean terms seonbae and hubae are written with the same Chinese characters and indicate a similar senior–junior relationship. Both the Japanese and Korean terms are based on the Chinese honorifics xianbei (先輩/先辈) and houbei (後輩/后辈), written in the same Chinese characters.
A similar concept exists in the Chinese-speaking world, though the terms vary depending on the context. In business, the terms are usually qiánbèi (前輩/前辈) for seniors and hòubèi (後輩/后辈) for juniors. For students, the term is usually xuézhǎng/xuéjiě (學長/姐, more common in Taiwan) or shīxiōng/shījiě (师兄/姐, mainland China) for male and female senpai, respectively, and xuédì/xuémèi (學弟/妹, Taiwan) or shīdì/shīmèi (师弟/妹, mainland China) for male and female kōhai, respectively. The student terms are also used in the Taiwanese military and police system, though the existence of this seniority system in parallel to the ranks has been criticized.
History
The senpai–kōhai system is deeply rooted in Japanese history. Three elements have had a significant impact on its development: Confucianism, the traditional Japanese family system, and the Civil Code of 1898.
Confucianism arrived from China between the 6th and 9th centuries, but the derived line of thought that brought about deep social changes in Japan was Neo-Confucianism, which became the official doctrine of the Tokugawa shogunate (1603–1867). The precepts of loyalty and filial piety dominated Japanese life at the time, as the respect for elders and the ancestor worship that Chinese Confucianism taught were well accepted, and these influences spread throughout daily life. Like other Chinese influences, the Japanese adopted these ideas selectively and in their own manner, so that the "loyalty" of Confucianism was taken as loyalty to a feudal lord or the Emperor.
The Japanese family system (家, ie) was also regulated by Confucian codes of conduct and had an influence on the establishment of the senpai–kōhai relation. In this family system the father, as male head, had absolute power over the family, and the eldest son inherited the family property. The father had power because he was the one to receive an education and was seen to have superior ethical knowledge. Since reverence for superiors was considered a virtue in Japanese society, the wife and children had to obey him. Under the hereditary system, only the eldest son could receive his father's possessions; neither the eldest daughter nor the younger children received anything from him.
The last factor influencing the senpai–kōhai system was the Civil Code of 1898, which strengthened the rules of seniority privilege and reinforced the traditional family system, giving clear definitions of hierarchical values within the family. This was called koshusei (戸主制, "family-head system"), in which the head of the household had the right to command his family and the eldest son inherited that position. These statutes were abolished in 1947, after the surrender of Japan at the end of World War II. The ideals nevertheless remained during the following years as a psychological influence in Japanese society.
Terminology
The seniority rules are reflected in various grammatical rules in the Japanese language. A person who speaks respectfully to a superior uses honorific language (敬語, keigo), which is divided into three categories:
(, "respectful language"): Used to denote respect towards a superior with or of whom one speaks, including the actions, objects, characteristics, and people related to this person.
(, "humble language"): In contrast to sonkeigo, with kenjōgo the speaker shows respect to a superior by lowering or deprecating him or herself.
(, "polite language"): Differs from the other two in that the deference is afforded only to the person being addressed, rather than those being spoken about. Use of the verb desu ("to be") and the verb ending -masu are examples of teineigo.
Sonkeigo and kenjōgo have expressions (verbs, nouns, and special prefixes) particular to the type of language; for example, the ordinary Japanese verb for "to do" is suru, but in sonkeigo is nasaru and in kenjōgo is itasu.
Another rule in the hierarchical relation is the use of honorific suffixes of address. A senpai addresses a kōhai with the suffix -kun after the kōhai's given name or surname, regardless of whether the kōhai is male or female. A kōhai similarly addresses a senpai with the suffix -senpai or -san; it is extremely unusual for a kōhai to refer to a senpai with the suffix -sama, which indicates the highest level of respect to the person spoken to.
Prevalence
One place where the senpai–kōhai relation applies to its greatest extent in Japan is in schools. For example, in junior and senior high schools (especially in school clubs), third-year students (who are the oldest) demonstrate great power as senpai. It is common in school sports clubs for new kōhai to have to perform basic tasks such as retrieving balls, cleaning playing fields, taking care of equipment, and even washing elder students' clothes. They must also bow to or salute their senpai when congratulated, and senpai may punish kōhai or treat them severely.
The main reason for these humble actions is the belief that team members can become good players only if they are submissive, obedient, and follow the orders of the trainer or captain, and will thus become humble, responsible, and cooperative citizens in the future. Relations in Japanese schools also place a stronger emphasis on age than on the abilities of students. The rules of superiority between a senpai and a kōhai are analogous to the teacher–student relation, in which the age and experience of the teacher must be respected and never questioned.
The senpai–kōhai relation is weaker in universities, as students of a variety of ages attend the same classes; students show respect to older members primarily through polite language (teineigo). Vertical seniority rules nevertheless prevail among teachers based on academic rank and experience.
The senpai–kōhai system also prevails in Japanese businesses. The social environment in Japanese businesses is regulated by two standards: the system of superiority and the system of permanent employment. The status, salary, and position of employees depend heavily on seniority, and veteran employees generally hold the highest positions and receive higher salaries than their subordinates. Until the turn of the 21st century, employment was guaranteed for life, and thus such employees did not have to worry about losing their positions.
The senpai–kōhai relation is a cornerstone of interpersonal relations within the Japanese business world; for example, at meetings the lower-level employee should sit in the seat closest to the door, called shimoza (下座, "lower seat"), while the senior employee (sometimes the boss) sits next to an important guest in a position called kamiza (上座, "upper seat"). During meetings, most employees do not give their opinions, but simply listen and concur with their superiors, although they may express opinions with the prior consent of the employees of greater rank and influence in the company.
Outside Japan, the senpai–kōhai relation is often found in the teaching of Japanese martial arts, though misunderstandings arise due to lack of historical knowledge, and as the vertical social hierarchy of Japan does not exist in cultures such as those in the West.
Issues
Despite the senpai–kōhai relation's deep roots in Japanese society, there have been changes since the end of the 20th century in academic and business organizations. Kōhai no longer show as much respect to the experience of their senpai, the relation has become more superficial, and the age factor has begun to lose importance. The student body has diversified with Japanese students who have spent a large part of their lives overseas and have returned to Japan, as well as foreign students without a mentality rooted in the Japanese hierarchical system.
The collapse of the economic bubble in the early 1990s caused a high level of unemployment, including the laying off of high-ranked employees. Since then, companies have begun to consider employees' skills rather than age or length of service, and many long-serving employees lost their positions because they could not fulfill expectations. Gradually many companies have had to restructure their salary and promotion systems, and seniority has thus lost some of its influence in Japanese society.
Attitudes towards the senpai–kōhai system vary from appreciation for tradition and the benefits of a good senpai–kōhai relationship, to reluctant acquiescence, to antipathy. Those who criticize the system find it arbitrary and unfair, complain that senpai are often pushy, and argue that the system produces students who are shy or afraid of standing out from the group. For example, some kōhai fear that if they outperform their senpai in an activity, the senpai will lose face, for which the kōhai must apologize. In some cases, the relation is open to violence and bullying. Most Japanese people, even those who criticize it, accept the senpai–kōhai system as a common-sense aspect of society, straying from which would inevitably have negative social consequences.
See also
Etiquette in Japan
Honne and tatemae
Japanese honorifics
Oyabun and kobun
Sensei
References
Works cited
Confucianism in Japan
Dichotomies
Etiquette
Japanese honorifics
Japanese values
Japanese words and phrases | Senpai and kōhai | [
"Biology"
] | 2,467 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
996,678 | https://en.wikipedia.org/wiki/MPU-401 | The MPU-401, where MPU stands for MIDI Processing Unit, is an important but now obsolete interface for connecting MIDI-equipped electronic music hardware to personal computers. It was designed by Roland Corporation, which also co-authored the MIDI standard.
Design
Released around 1984, the original MPU-401 was an external breakout box providing MIDI IN/MIDI OUT/MIDI THRU/TAPE IN/TAPE OUT/MIDI SYNC connectors, for use with a separately-sold interface card/cartridge ("MPU-401 interface kit") inserted into a computer system. For this setup, the following "interface kits" were made:
MIF-APL: For the Apple II
MIF-C64: For the Commodore 64
MIF-FM7: For the Fujitsu FM-7
MIF-IPC: For the IBM PC/IBM XT. It turned out not to work reliably with 286 and faster processors. Early versions of the PCB were silk-screened "IF-MIDI/IBM".
MIF-IPC-A: For the IBM AT, works with PC and XT as well.
Xanadu MUSICOM IFM-PC: For the IBM PC / IBM XT / IBM AT. This was a third-party MIDI card incorporating the MIF-IPC(-A) plus additional functionality, coupled with the OEM Roland MPU-401 breakout box. It also had a mini audio jack on the PCB.
MIF-PC8: For the NEC PC-88
MIF-PC98: For the NEC PC-98
MIF-X1: For the Sharp X1
MIF-AMG: For the Amiga, from Musicsoft
In 2014 hobbyists built clones of the MIF-IPC-A card for PCs.
Variants
Later, Roland would put most of the electronics originally found in the breakout box onto the interface card itself, thus reducing the size of the breakout box. Products released in this manner:
MPU-401N: an external interface, specifically designed for use with the NEC PC-98 series notebook computers. This breakout-box unit features a special COMPUTER IN port for direct connection to the computer's 110-pin expansion bus, and a METRONOME OUT connector was added. Released in Japan only.
MPU-IPC: for the IBM PC/IBM XT/IBM AT and compatibles (8 bit ISA). It had a 25-pin female connector for the breakout box, even though only nine pins were used, and only seven were functionally different: both 5V and ground use two pins each.
MPU-IPC-T: for the IBM PC/IBM XT/IBM AT and compatibles (8-bit ISA). The MIDI SYNC connector was removed from this Taiwanese-manufactured model, and the previously hardcoded I/O address and IRQ could be set to different values with jumpers. The break-out box has three DIN connectors for MIDI (1xIN and 2xOUT) plus three 3.5mm mini jack connectors (TAPE IN, TAPE OUT and METRONOME OUT).
MPU-IMC: for the IBM PS/2's Micro Channel architecture bus. In earlier models both the I/O address and the IRQ were hardcoded, with the IRQ fixed at IRQ 2 (causing serious problems with the hard disk, which also uses that IRQ); in later models the IRQ could be set with a jumper. It had a 9-pin female connector for the breakout box. Due to the incompatibility of IRQ 2/9 (and potentially I/O addresses) between the MPU-IMC and IBM PS/2 MCA models, certain games will not work with the MPU-401.
S-MPU/AT (Super MPU): for the IBM AT and compatibles (16-bit ISA). It had a Mini-DIN female connector for the breakout box. The MIDI SYNC, TAPE IN, TAPE OUT, and METRONOME OUT connectors were removed, but a second MIDI IN connector was added. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR (it does not take up conventional memory).
S-MPU-IIAT (Super MPU II): for the IBM or compatible Plug and Play PC computers (16 bit ISA). It had a Mini-DIN female connector for the breakout box with two MIDI In connectors and two MIDI Out connectors. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR (it does not take up precious conventional memory).
S-MPU/FMT: For FM Towns
LAPC-I: for the IBM PC and compatibles. Includes the Roland CM-32L sound source. A breakout box for this card, the MCB-1, was sold separately.
LAPC-N: for the NEC PC-98. Includes the Roland CM-32LN sound source. A breakout box for this card, the MCB-2, was sold separately.
RAP-10: for the IBM AT and compatibles (16-bit ISA). General MIDI sound source only. MPU-401 UART mode only. A breakout box for this card, the MCB-10, was sold separately.
SCP-55: for the IBM and compatible laptops (PCMCIA). Includes the Roland SC-55 sound source. A breakout box for this card, the MCB-3, was sold separately. MPU-401 UART mode only.
Still later, Roland would get rid of the breakout box completely and put all connectors on the back of the interface card itself. Products released in this manner:
MPU-APL: for the Apple II. Single-card combination of the MIF-APL interface and MPU-401, featuring MIDI IN, OUT, and SYNC connectors.
MPU-401AT: for IBM AT and "100% compatibles". Includes a connector for Wavetable daughterboards.
MPU-PC98: for the NEC PC-98
MPU-PC98II: for the NEC PC-98
S-MPU/PC (Super MPU PC-98): for the NEC PC-98
S-MPU/2N (Super MPU II N): for the NEC PC-98
SCC-1: for the IBM PC and compatibles. Includes the Roland SC-55 sound source.
GPPC-N & GPPC-NA: for the NEC PC-98. Includes the Roland SC-55 sound source.
Clones
By the late 1980s other manufacturers of PCBs developed intelligent MPU-401 clones. Some of these, like Voyetra, were equipped with Roland chips whereas most had reverse-engineered ROMs (Midiman / Music Quest).
Examples:
Midiman MM-401 (8BIT, non Roland chip set, also sold as part of the Midiman PC Desktop Music Kit)
Midi System, Inc. MDR-401, non Roland chip set
Computer Music Supply CMS-401 (8BIT, non Roland chip set)
Music Quest PC MIDI Card / MQX-16s / MQX-32m (8 & 16BIT, non Roland chip set)
Voyetra V-400x / OP-400x (V-4000, V4001, 8BIT, Roland chip set)
MIDI LAND DX-401 (non Roland chipset) & MD-401 (non Roland chipset)
Data Soft DS-401 (non Roland chipset)
In 2015 hobbyists developed a clone of the 8BIT Music Quest PC MIDI Card. In 2017/2018 hobbyists developed a revision of this clone that includes a wavetable header, analogous to the Roland MPU-401AT.
Modes
The MPU-401 can work in two modes: normal mode and UART mode. "Normal mode" provides the host system with an 8-track sequencer, MIDI clock output, SYNC 24 signal output, tape sync, and a metronome; as a result of these features, it is often called "intelligent mode". In contrast, UART mode reduces the MPU-401 to simply relaying incoming and outgoing MIDI data bytes.
As computers became more powerful, the features offered in "intelligent mode" became obsolete, since implementing them in the host system's software was more efficient and dedicated hardware was no longer required. As a result, UART mode became the dominant mode of operation. Early UART-only cards were still advertised as MPU-401 compatible.
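To make the two modes concrete, the sketch below shows how DOS-era software conventionally switched an MPU-401 (or compatible) into UART mode. It is a minimal sketch, assuming the card sits at the conventional default I/O addresses (data port 0x330, command/status port 0x331) and that the compiler provides inp()/outp() as DOS compilers did; real cards could be jumpered elsewhere, as the variants listed above show.

```c
#include <conio.h>   /* inp()/outp() on DOS compilers (an assumption) */

#define MPU_DATA    0x330   /* default data port: MIDI bytes in/out      */
#define MPU_STATUS  0x331   /* read: status register, write: commands    */
#define MPU_DRR     0x40    /* bit 6 clear -> ready to accept a byte     */
#define MPU_DSR     0x80    /* bit 7 clear -> a received byte is waiting */

#define CMD_RESET      0xFF
#define CMD_UART_MODE  0x3F
#define MPU_ACK        0xFE

/* Send a command and wait for the 0xFE acknowledge byte. */
static int mpu_command(unsigned char cmd)
{
    unsigned int t;
    for (t = 0; inp(MPU_STATUS) & MPU_DRR; t++)  /* wait until writable */
        if (t == 0xFFF0) return -1;
    outp(MPU_STATUS, cmd);
    for (t = 0; inp(MPU_STATUS) & MPU_DSR; t++)  /* wait for a reply    */
        if (t == 0xFFF0) return -1;
    return (inp(MPU_DATA) == MPU_ACK) ? 0 : -1;
}

int mpu_enter_uart_mode(void)
{
    mpu_command(CMD_RESET);            /* reset first; whether a reset
                                          ACKs can vary between clones  */
    return mpu_command(CMD_UART_MODE); /* 0x3F: plain byte-relay mode   */
}
```

Once the 0x3F command is acknowledged, every byte written to the data port is relayed straight to MIDI OUT and received bytes appear at the same port, which is all UART mode does; this is also the behavior that UART-only cards emulate.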
SoftMPU
In the mid-2010s, a hobbyist software interface, SoftMPU, was written that upgrades UART (non-intelligent) MPU-401 interfaces to an intelligent MPU-401 interface; however, it only works under MS-DOS. It also does not work with all games: early Sierra games in particular, such as Jones in the Fast Lane, will not work with SoftMPU.
HardMPU
In 2015, a PCB (HardMPU) was developed that incorporates SoftMPU as logic on hardware (so that the PC's CPU does not have to process intelligent MIDI). Currently HardMPU only supports playback and not recording.
Contemporary interfaces
Physical MIDI connections are increasingly being replaced by the USB interface, with USB-to-MIDI converters used to drive musical peripherals that do not yet have their own USB ports. Often, peripherals are able to accept MIDI input through USB and convert it for the traditional DIN connectors. While MPU-401 support is no longer included in Windows Vista, a driver is available on Windows Update. As of 2011, the interface was still supported by Linux and Mac OS X.
References
External links
'Card Times' - Sound on Sound magazine, Nov 1996
SoftMPU
Louis Ohland's PS/2 Archives
Computer hardware standards
MIDI
Obsolete technologies
Music sequencers | MPU-401 | [
"Technology",
"Engineering"
] | 2,061 | [
"Computer standards",
"Computer hardware standards",
"Music sequencers",
"Automation"
] |
16,042,253 | https://en.wikipedia.org/wiki/R.%20Graham%20Cooks | Robert Graham Cooks is the Henry Bohn Hass Distinguished Professor of Chemistry in the Aston Laboratories for Mass Spectrometry at Purdue University. He is an ISI Highly Cited Chemist, with over 1,000 publications and an H-index of 150.
Education
Cooks received bachelor of science and master of science degrees from the University of Natal in South Africa in 1961 and 1963, respectively. He received a Ph.D. from the University of Natal in 1965 and a second Ph.D. from Cambridge University in 1967, where he worked with Peter Sykes. He then did post-doctoral work at Cambridge with Dudley Williams.
Career
Cooks became an Assistant Professor at Kansas State University from 1968 to 1971. In 1971, he took a position at Purdue University. He became a Professor of Chemistry in 1980 and was appointed the Henry Bohn Hass Distinguished Professor in 1990.
Cooks was co-editor of the Annual Review of Analytical Chemistry from 2013 to 2017.
Select research interests
Research in Cooks' laboratory (the Aston Laboratories) has contributed to a diverse assortment of areas within mass spectrometry, ranging from fundamental research to instrument and method development to applications. Cooks' research interests over the course of his career have included the study of gas-phase ion chemistry, tandem mass spectrometry, angle-resolved mass spectrometry and energy-resolved mass spectrometry (ERMS); dissociation processes, including collision-induced dissociation (CID), surface-induced dissociation (SID), and photodissociation (PD); and desorption processes, including secondary ion mass spectrometry (SIMS), laser desorption ionization (LD) and desorption electrospray ionization (DESI).
His research has ranged through areas from preparative mass spectrometry, ionization techniques, and quadrupole ion traps (QITs) and related technologies, to as far afield as abiogenesis (also known as "the origin of life") via homochirality.
Awards and fellowships
1984 ACS Analytical Division's Chemical Instrumentation Award
1985 Thomson Medal for International Service to Mass Spectrometry
1990 and 1995 NSF Special Creativity Award
1991 Frank H. Field & Joe Franklin Award, (ACS Award for Mass Spectrometry)
1997 Fisher Award (ACS Award for Analytical Chemistry)
2006 Distinguished Contribution in Mass Spectrometry Award
2008 Robert Boyle Prize for Analytical Science
2012 F.A. Cotton Medal for Excellence in Chemical Research of the American Chemical Society
2013 Dreyfus Prize in the Chemical Sciences
2014 ACS Nobel Laureate Signature Award for Graduate Education in Chemistry, shared with graduate student Livia S. Eberlin
2015 Member, National Academy of Sciences
2017 Aston Medal, British Mass Spectrometry Society
See also
Desorption electrospray ionization
MIKES
Orbitrap
References
External links
Aston Labs
Living people
21st-century American chemists
Mass spectrometrists
Purdue University faculty
Year of birth missing (living people)
Thomson Medal recipients
Annual Reviews (publisher) editors | R. Graham Cooks | [
"Physics",
"Chemistry"
] | 622 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,042,518 | https://en.wikipedia.org/wiki/Michael%20T.%20Bowers | Michael T. Bowers (born 1939) is an American mass spectrometrist, a professor in the department of chemistry and biochemistry at the University of California, Santa Barbara faculty.
Career
He studied at Gonzaga University, Spokane, Washington, earning his B.S. in 1962, and then earned a Ph.D. from the University of Illinois (with W.H. Flygare) in 1966.
He worked at the Jet Propulsion Laboratory in California for 2 years before joining UC Santa Barbara in 1968, where he was appointed full professor in 1976.
The Bowers group uses mass spectrometry and ion mobility spectrometry to study gaseous species and determine their structure, reaction dynamics, and mechanisms.
Awards
Fellow of the American Chemical Society (ACS)
1987 Elected Fellow of the American Physical Society "for outstanding contributions both theoretically and experimentally on the Mechanism and Dynamics of Ion-Molecule Reactions"
1994 Guggenheim Fellowship
1994 Fellow of the American Association for the Advancement of Science
1996 Frank H. Field and Joe L. Franklin Award of the American Chemical Society
1997 Thomson Medal of the International Mass Spectrometry Foundation
2004 Distinguished Contribution in Mass Spectrometry Award
See also
Gas phase ion chemistry
References
External links
Bowers Group Page
Vita
1939 births
Living people
Gonzaga University alumni
University of Illinois alumni
21st-century American chemists
Mass spectrometrists
University of California, Santa Barbara faculty
Fellows of the American Chemical Society
Fellows of the American Physical Society
Thomson Medal recipients
Fellows of the American Association for the Advancement of Science | Michael T. Bowers | [
"Physics",
"Chemistry"
] | 305 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,042,722 | https://en.wikipedia.org/wiki/Fred%20McLafferty | Fred Warren McLafferty (May 11, 1923 − December 26, 2021) was an American chemist known for his work in mass spectrometry. He is best known for the McLafferty rearrangement reaction that was observed with mass spectrometry. With Roland Gohlke, he pioneered the technique of gas chromatography–mass spectrometry. He is also known for electron-capture dissociation, a method of fragmenting gas-phase ions.
Early life and education
Fred McLafferty was born in Evanston, Illinois in 1923, but attended grade school in Omaha, Nebraska, graduating from Omaha North High School in 1940. The urgent requirements of World War II accelerated his undergraduate studies at the University of Nebraska; he obtained his B.S. degree in 1943 and thereafter entered the US armed forces. He served in western Europe during the invasion of Germany and was awarded the Combat Infantryman Badge, a Purple Heart, five Bronze Star Medals, and a Presidential Unit Citation.
He returned to the University of Nebraska in late 1945 and completed his M.S. degree in 1947. He then worked under William Miller at Cornell University, where he earned his Ph.D. in 1950, followed by a postdoctoral researcher position at the University of Iowa with R.L. Shriner.
Dow Chemical
He took a position at Dow Chemical in Midland, Michigan in 1950 and was in charge of mass spectrometry and gas chromatography from 1950 to 1956. In 1953-1956, he started collecting reference mass spectra whenever the instruments were not in use.
In 1956, he became the Director of Dow's Eastern Research Lab in Framingham, Massachusetts. During this time, he developed the first GC/MS instruments and analyzed the company's reference collection of spectra, which he himself had founded. This allowed him to work out techniques for determining the structure of organic molecules by mass spectrometry, most notably in the discovery of what is now known as the McLafferty rearrangement.
Academic career
From 1964 to 1968, he was Professor of Chemistry at Purdue University. In 1968, he returned to his alma mater, Cornell University, to become the Peter J. W. Debye Professor of Chemistry. He was elected to the United States National Academy of Sciences in 1982. While at Cornell, McLafferty assembled one of the first comprehensive databases of mass spectra and pioneered artificial intelligence techniques to interpret GC/MS results. His PBM and STIRS programs have seen widespread use, saving hours of time-consuming work otherwise required to analyze GC/MS results manually.
Personal life and death
McLafferty died in Ithaca, New York, on December 26, 2021, at the age of 98.
Honors and awards
1971 ACS Award in Chemical Instrumentation
1981 ACS Award in Analytical Chemistry
1984 William H. Nichols Medal
1985 Oesper Award
1985 J. J. Thomson Gold Medal by International Mass Spectrometry Society
1987 Pittsburgh Analytical Chemistry Award
1989 Field and Franklin Award for Mass Spectrometry
1989 University of Naples Gold Medal
1992 Robert Boyle Gold Medal by the Royal Society of Chemistry
1996 Chemical Pioneer Award from the American Institute of Chemists
1997 Bijvoet Medal of the Bijvoet Center for Biomolecular Research.
1999 J. Heyrovsky Medal by the Czech Academy of Sciences
2000 G. Natta Gold Medal by Italian Chemical Society
2001 Torbern Bergman Medal by the Swedish Chemical Society
2003 John B. Fenn Distinguished Contribution in Mass Spectrometry by the American Society for Mass Spectrometry (ASMS)
2004 Lavoisier Medal by the French Chemical Society
2006 Pehr Edman Award by the International Association for Protein Structure
2015 Nakanishi Prize from the American Chemical Society
2019 American Chemical Society designated a National Historic Chemical Landmark in Midland, MI for the demonstration of the first operating GC-MS by Fred McLafferty and Roland Gohlke.
References
Bibliography
External links
A Conversation with Fred W. McLafferty 2006, 90 minute video, for Cornell University.
1923 births
2021 deaths
21st-century American chemists
Mass spectrometrists
Purdue University faculty
Cornell University alumni
Cornell University faculty
Members of the United States National Academy of Sciences
Dow Chemical Company employees
Bijvoet Medal recipients
Thomson Medal recipients
Omaha North High School alumni
People from Evanston, Illinois
United States Army personnel of World War II | Fred McLafferty | [
"Physics",
"Chemistry"
] | 891 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,043,047 | https://en.wikipedia.org/wiki/Jesse%20L.%20Beauchamp | Jesse L. Beauchamp (born 1942) is the Charles and Mary Ferkel Professor of Chemistry at the California Institute of Technology.
Early life and education
1964 B.S. California Institute of Technology
1967 Ph.D. Harvard University
Research interests
Development of novel mass spectrometric techniques in biochemistry.
Awards
In 1978 he received the ACS Award in Pure Chemistry from the American Chemical Society and in 1981 was elected to the National Academy of Sciences. In 1999 he received the Peter Debye Award in Physical Chemistry from the American Chemical Society and was again honored in 2003 with the Field and Franklin Award in Mass Spectrometry. In 2007 he received the Distinguished Contribution Award from the American Society for Mass Spectrometry for the original development and chemical applications of ion cyclotron resonance spectroscopy.
Former students
Charles A. Wight – President of Weber State University
Frances Houle (1979) – Director of JCAP North
Terry B. McMahon (1974) – Professor of chemistry at the University of Waterloo
Peter B. Armentrout (1980) – Professor of chemistry at the University of Utah
David Dearden (1989) – Chemistry and Biochemistry department chair at BYU
Elaine Marzluff (1995) – Chemistry department chair at Grinnell College
References
External links
Beauchamp Research Group at Caltech
CCE website
21st-century American chemists
Mass spectrometrists
Living people
California Institute of Technology faculty
1942 births
Members of the United States National Academy of Sciences
Harvard University alumni
California Institute of Technology alumni | Jesse L. Beauchamp | [
"Physics",
"Chemistry"
] | 303 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,044,423 | https://en.wikipedia.org/wiki/Limp%20binding | Limp binding is a bookbinding method in which the book has flexible cloth, leather, vellum, or (rarely) paper sides. When the sides of the book are made of vellum, the bookbinding method is also known as limp vellum.
The cover is made with a single piece of vellum or alternative material, folded around the textblock, the front and back covers being folded double. The quires are sewn onto sewing supports such as cords or alum-tawed thongs and the tips of the sewing supports would be laced into the cover. The thongs could also be used at the fore edge of the covers to create a closure or tie.
In limp binding the covering material is not stiffened by thick boards, although paste-downs, if used, provide some stiffness; some limp bindings are only adhered to the back of the book. Some limp vellum bindings had yapp edges that flop over to protect the textblock.
Usage
Limp vellum bindings for commonplace books were being produced at least as early as the 14th century and probably earlier, but the style did not become common until the 16th and 17th centuries. Its usage subsequently declined until "revived by the private presses near the end of the 19th century". From about 1775 to 1825, limp leather was commonly used for pocket books, but by the 1880s limp bindings came to be largely restricted to devotional books, diaries, and sentimental verse, sometimes with yapp edges. Yapp edges are bent edges on a limp binding projecting beyond the textblock to reduce damage. They are often found in editions of the Bible.
References
Bibliography
External links
an online exhibit of the form with an essay on its history
from the University of Texas at Austin School of Information
Bookbinding
Book design
Hides (skin) | Limp binding | [
"Engineering"
] | 365 | [
"Book design",
"Design"
] |
16,045,000 | https://en.wikipedia.org/wiki/Lesaffre | Lesaffre is a French yeast manufacturer, and the world's largest producer.
History
The company was founded by Louis Lesaffre, the co-founder of Bonduelle, in the mid-19th century.
One of its subsidiaries, Bio Springer, was founded by Baron Max de Springer in 1872 in Maisons-Alfort.
In 2004, it formed a North American joint-venture with Archer Daniels Midland, known as Red Star Yeast.
In 2006, the company's exit from the malt business, conducted through its subsidiary International Malting Company (IMC), then the fifth-largest maltster in the world, created tension among the family shareholders. IMC was acquired outright by ADM.
In 2007, it was the world's largest producer of yeast. In 2011, it bought the factory of "Voronezh Yeast" LLC in Voronezh.
After the foundation of the Lesaffre Advanced Fermentations (LEAF) subsidiary, the Swiss biofuel start-up Butalco, founded by Eckhard Boles and Gunter Festel, was acquired in July 2014. With this acquisition, Lesaffre entered the market for second generation, waste-based bioethanol and biobutanol.
In 2014, it had an annual turnover of 1.5 billion euros, 7,700 employees, and 80 subsidiaries in various countries. The company is not listed on the stock exchange; its capital is shared among 400 shareholders from the founders' family, whose professional fortune is estimated at 3 billion euros.
In 2018, the group took control of Tunisian Rayen Food Industries, which specializes in the production of baker's yeast, and of a Serbian Alltech plant specializing in yeast extracts.
In 2021, it was ranked 8th on FoodTalks' list of Top 30 Global Probiotic Food Ingredient Companies. According to current data, the company generates annual sales of €2 billion with more than 10,000 employees and 80 subsidiaries in 50 countries.
In 2024, it acquired the control of Brazilian yeast products manufacturer Biorigin from Zilor group.
References
Food and drink companies of France
French brands
Companies based in Hauts-de-France
Yeasts | Lesaffre | [
"Biology"
] | 444 | [
"Yeasts",
"Fungi"
] |
16,045,004 | https://en.wikipedia.org/wiki/Gamma%20%28satellite%29 | Gamma was a Soviet gamma ray telescope. It was launched on 11 July 1990 into an orbit around Earth with a height of 375 km and an inclination of 51.6 degrees. It lasted for around 2 years. On board the mission were three telescopes, all of which could be pointed at the same source. The project was a joint Soviet-French project.
Background
The Gamma-1 telescope was the main telescope. It consisted of 2 scintillation counters and a gas Cerenkov counter. With an effective area of around , it operated in the energy range of 50 MeV to 6 GeV. At 100 MeV it initially had an angular resolution of 1.5 degrees, with a field of view of 5 degrees and an energy resolution of 12%. A Telezvezda star tracker increased the pointing position accuracy of the Gamma-1 telescope to 2 arcminutes by tracking stars up to an apparent magnitude of 5 within its 6 by 6 degree field of view. However, due to the failure of power to a spark chamber, for most of the mission the resolution was around 10 degrees.
The telescope was conceived in 1965, as part of the Soviet Cloud Space Station, which evolved into the Multi-module Orbital Complex (MOK).
When work on Gamma finally began in 1972, it was intended to create a Gamma observatory, the first space station module for MOK, the first modular space station in the Salyut programme.
For this, it was designed to add the scientific instruments of the observatory to a spacecraft derived from the Progress spacecraft (the Progress in turn being a Soyuz derivative), and this spacecraft would dock to a MOK space station.
However, in 1974, at the time it became a joint venture with France, the MOK space station project was canceled, and in February 1976, the Soviet space program was reconfigured.
By the time production of the telescope was authorized on 16 February 1979, the plans for the Soviet space station modules had evolved to use the Functional Cargo Block of the TKS spacecraft instead, with the Kvant-1 Roentgen observatory eventually becoming the first such module for Mir. As a result of these changes, the Gamma observatory was redesigned as the free-flying Gamma satellite.
When the telescope was authorized in 1979, it was planned to be launched in 1984, but the actual launch was delayed until 1990.
Operation
The Disk-M telescope operated in the energy range 20 keV – 5 MeV. It consisted of sodium iodide scintillation crystals and had an angular resolution of 25 arcminutes. However, it stopped working shortly after the mission was launched.
Finally, the Pulsar X-2 telescope had 30-arcminute resolution and a 10° × 10° field of view, and operated in the energy range 2–25 keV.
Observations included studies of the Vela Pulsar, the Galactic Center, Cygnus X-1, Hercules X-1 and the Crab Nebula. The telescopes also measured the Sun during peak solar activity.
See also
Aelita (spacecraft)
References
External links
Gamma on the Internet Encyclopedia of Science
Gamma at Astronautix.com
Gamma-ray telescopes
Space telescopes
Spacecraft launched in 1990
Soviet space observatories
France–Soviet Union relations | Gamma (satellite) | [
"Astronomy"
] | 663 | [
"Space telescopes",
"Soviet space observatories"
] |
16,045,065 | https://en.wikipedia.org/wiki/Small%20Astronomy%20Satellite%202 | The Small Astronomy Satellite 2, also known also as SAS-2, SAS B or Explorer 48, was a NASA gamma ray telescope. It was launched on 15 November 1972 into the low Earth orbit with a periapsis of 443 km and an apoapsis of 632 km. It completed its observations on 8 June 1973.
Mission
SAS 2 was the second in the series of small spacecraft designed to extend astronomical studies in the X-ray, gamma-ray, ultraviolet, visible, and infrared regions. The primary objective of SAS-B was to measure the spatial and energy distribution of primary galactic and extragalactic gamma radiation with energies between 20 and 300 MeV. The instrumentation consisted principally of a guard scintillation detector, an upper and a lower spark chamber, and a charged particle telescope.
Launch
The spacecraft was launched on 15 November 1972 from the San Marco platform off the coast of Kenya, Africa, into a nearly equatorial initial orbit with an apogee of about 632 km, a perigee of about 443 km, an orbital inclination of 1.90°, and an orbital period of 95.40 minutes. The orbiting spacecraft was cylindrical in shape. Four solar paddles were used to recharge a 6 amp-h, eight-cell, nickel–cadmium battery and provide power to the spacecraft and telescope experiment. The spacecraft was spin-stabilized by an internal wheel, and a magnetically torqued commandable control system was used to point the spin axis of the spacecraft to any point of the sky within approximately 1°. The experiment axis lay along this axis, allowing the telescope to look at any selected region of the sky with its ±30° acceptance aperture. The nominal spin rate was 1/12 rpm. Data were taken at 1000 bps and could be recorded on an onboard tape recorder and simultaneously transmitted in real time. The recorded data were transmitted once per orbit, which required approximately 5 minutes.
Experiment
The telescope experiment was initially turned on 20 November 1972 and by 27 November 1972, the spacecraft became fully operational. The low-voltage power supply for the experiment failed on 8 June 1973. No useful scientific data were obtained after that date. With the exception of a slightly degraded star sensor, the spacecraft control section performed in an excellent manner.
SAS-2 first detected Geminga, a pulsar believed to be the remnant of a supernova that exploded 300,000 years ago.
Gamma-Ray Telescope
The instrument consisted of two spark-chamber assemblies, four plastic scintillation counters, four Cherenkov counters, and an anticoincidence scintillation counter dome, assembled to form a telescope. The spark chamber assembly consisted of 16-wire spark-chamber modules with a magnetic core readout system. Sandwiched between these two assemblies was a plane of plastic scintillator formed by the four scintillation counters. Thin tungsten plates were interleaved between the spark chamber modules, which had an active area of 640 cm². These plates provided the material for the gamma ray to convert into an electron-positron pair and provided a means of determining the energy of these particles by measuring their coulomb scattering. The spark chamber modules revealed the position and direction of the particles; from this information, the energy and direction of the gamma ray were determined. The scintillation counters and the four directional Cherenkov counters that were placed below the second spark chamber assembly constituted four independent counter coincidence systems. The single-piece plastic scintillator dome surrounded the whole assembly except at the bottom to discriminate against charged particles. The threshold of the instrument was about 30 MeV, and energies up to about 200 MeV could be measured along with the integral flux above 200 MeV. The angular resolution of the telescope varied as a function of energy and arrival direction from 1.5° to 5°. During the lifetime of the experiment, from 15 November 1972 to 8 June 1973, approximately 55% of the celestial sphere, including most of the galactic plane, was surveyed.
See also
Small Astronomy Satellite 1
Small Astronomy Satellite 3
Notes
References
1972 in spaceflight
Satellites formerly orbiting Earth
Explorers Program
Gamma-ray telescopes
Space telescopes | Small Astronomy Satellite 2 | [
"Astronomy"
] | 851 | [
"Space telescopes"
] |
16,045,068 | https://en.wikipedia.org/wiki/Heavy%20meromyosin | Heavy meromyosin (HMM) is the larger of the two fragments obtained from the muscle protein myosin II following limited proteolysis by trypsin or chymotrypsin. HMM contains two domains S-1 and S-2, S-1 contains is the globular head that can bind to actin while the S-2 domain projects at and angle from light meromyosin (LMM) connecting the two meromyosin fragments.
HMM is used to determine the polarity of actin filaments by decorating them with HMM then viewing them under the electron microscope.
References
Motor proteins | Heavy meromyosin | [
"Chemistry",
"Biology"
] | 132 | [
"Biotechnology stubs",
"Motor proteins",
"Biochemistry stubs",
"Molecular machines",
"Biochemistry"
] |
16,046,286 | https://en.wikipedia.org/wiki/Microvoid%20coalescence | Microvoid coalescence (MVC) is a high energy microscopic fracture mechanism observed in the majority of metallic alloys and in some engineering plastics.
Fracture process
MVC proceeds in three stages: nucleation, growth, and coalescence of microvoids. The nucleation of microvoids can be caused by particle cracking or interfacial failure between precipitate particles and the matrix. Additionally, microvoids often form at grain boundaries or inclusions within the material. Microvoids grow during plastic flow of the matrix, and microvoids coalesce when adjacent microvoids link together or the material between microvoids experiences necking. Microvoid coalescence leads to fracture. Void growth rates can be predicted assuming continuum plasticity using the Rice-Tracey model:
$$\frac{dR}{R} = \alpha \exp\left(\frac{3\sigma_m}{2\sigma_y}\right) d\bar{\varepsilon}^p$$

where $R$ is the void (particle) size, $\alpha$ is a constant typically equal to 0.283 (but dependent upon the stress triaxiality), $\sigma_y$ is the yield stress, $\sigma_m$ is the mean stress, and $\bar{\varepsilon}^p$ is the equivalent von Mises plastic strain. Growth is amplified exponentially by the stress triaxiality $T = \sigma_m / \sigma_y$, which enters through the exponential factor.
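As a numerical illustration (not drawn from the article), the short program below integrates this growth law for a constant stress state and compares the result with the closed-form solution that holds when the triaxiality is constant. The triaxiality of 1.0, the strain step, and the 30% final strain are illustrative assumptions, not material data.

```c
/* Explicit-Euler integration of the Rice–Tracey growth law
 *   dR/R = alpha * exp(1.5 * sigma_m/sigma_y) * d(eps_p)
 * for a constant stress state; all numbers are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double alpha = 0.283; /* Rice–Tracey constant                */
    const double T     = 1.0;   /* assumed triaxiality sigma_m/sigma_y */
    const double R0    = 1.0;   /* initial void size (arbitrary units) */
    const double deps  = 1e-4;  /* plastic strain increment            */
    double R = R0, eps = 0.0;

    while (eps < 0.30) {        /* integrate to 30% plastic strain     */
        R   += alpha * exp(1.5 * T) * R * deps;
        eps += deps;
    }
    /* closed form for constant T: R = R0 * exp(alpha*exp(1.5*T)*eps)  */
    printf("numeric R/R0 = %.4f, analytic R/R0 = %.4f\n",
           R / R0, exp(alpha * exp(1.5 * T) * eps));
    return 0;
}
```

The comparison makes the model's key qualitative point visible: because the triaxiality sits inside an exponential, raising $T$ from 1.0 to 2.0 multiplies the growth rate by roughly 4.5.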
Fracture surface morphologies
MVC can result in three distinct fracture morphologies based on the type of loading at failure. Tensile loading results in equiaxed dimples, which are spherical depressions a few micrometres in diameter that coalesce normal to the loading axis. Shear stresses will result elongated dimples, which are parabolic depressions that coalesce in planes of maximum shear stress. The depressions point back to the crack origin, and shear influenced failure will produce depressions that point in opposite directions on opposing fracture surfaces. Combined tension and bending will also produce the elongated dimple morphology, but the directions of the depressions will be in the same direction on both fracture surfaces.
References
Fracture mechanics
Materials degradation | Microvoid coalescence | [
"Materials_science",
"Engineering"
] | 365 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
16,046,999 | https://en.wikipedia.org/wiki/BCK%20algebra | In mathematics, BCI and BCK algebras are algebraic structures in universal algebra, which were introduced by Y. Imai, K. Iséki and S. Tanaka in 1966, that describe fragments of the propositional calculus involving implication known as BCI and BCK logics.
Definition
BCI algebra
An algebra (in the sense of universal algebra) of type (2, 0) is called a BCI-algebra if, for any x, y, z in it, it satisfies the following conditions. (Informally, we may read 0 as "truth" and x * y as "x implies y".)
BCI-1: ((x * y) * (x * z)) * (z * y) = 0
BCI-2: (x * (x * y)) * y = 0
BCI-3: x * x = 0
BCI-4: x * y = 0 and y * x = 0 imply x = y
BCI-5: x * 0 = 0 implies x = 0
BCK algebra
A BCI-algebra is called a BCK-algebra if it satisfies the following condition:
BCK-1: 0 * x = 0
A partial order can then be defined as x ≤ y iff x * y = 0.
A BCK-algebra is said to be commutative if it satisfies: x * (x * y) = y * (y * x).
In a commutative BCK-algebra x * (x * y) = x ∧ y is the greatest lower bound of x and y under the partial order ≤.
A BCK-algebra is said to be bounded if it has a largest element, usually denoted by 1. In a bounded commutative BCK-algebra the least upper bound of two elements satisfies x ∨ y = 1 * ((1 * x) ∧ (1 * y)); that makes it a distributive lattice.
Examples
Every abelian group is a BCI-algebra, with * defined as group subtraction and 0 defined as the group identity.
The subsets of a set form a BCK-algebra, where A*B is the difference A\B (the elements in A but not in B), and 0 is the empty set; a brute-force verification of this example appears after this list.
A Boolean algebra is a BCK algebra if A*B is defined to be A∧¬B (A does not imply B).
The bounded commutative BCK-algebras are precisely the MV-algebras.
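As a sanity check on the subsets example above, the program below verifies BCI-1 through BCI-5 and BCK-1 for the eight subsets of a three-element set, encoding each subset as a bitmask and the difference A\B as A & ~B. The choice of C, of a three-element base set, and of the helper name star are illustrative assumptions.

```c
/* Brute-force check that subsets of {a,b,c} with x*y = x\y and
 * 0 = empty set satisfy the BCI/BCK axioms listed above. */
#include <stdio.h>

#define N 8                                       /* 2^3 subsets as bitmasks */
static int star(int x, int y) { return x & ~y; }  /* set difference x\y      */

int main(void)
{
    int x, y, z, ok = 1;
    for (x = 0; x < N; x++) {
        if (star(x, x) != 0) ok = 0;                 /* BCI-3 */
        if (star(x, 0) == 0 && x != 0) ok = 0;       /* BCI-5 */
        if (star(0, x) != 0) ok = 0;                 /* BCK-1 */
        for (y = 0; y < N; y++) {
            if (star(star(x, star(x, y)), y)) ok = 0;           /* BCI-2 */
            if (star(x, y) == 0 && star(y, x) == 0 && x != y)
                ok = 0;                                         /* BCI-4 */
            for (z = 0; z < N; z++)
                if (star(star(star(x, y), star(x, z)), star(z, y)))
                    ok = 0;                                     /* BCI-1 */
        }
    }
    printf(ok ? "all BCK axioms hold\n" : "axiom violated\n");
    return 0;
}
```

The same loop structure can confirm that the induced partial order (x ≤ y iff x * y = 0) coincides with the subset relation, or can test the Boolean-algebra example, since A ∧ ¬B is the same bitmask expression.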
References
Y. Huang, BCI-algebra, Science Press, Beijing, 2006.
Algebraic structures
Universal algebra | BCK algebra | [
"Mathematics"
] | 442 | [
"Mathematical structures",
"Mathematical objects",
"Universal algebra",
"Fields of abstract algebra",
"Algebraic structures"
] |
2,185,671 | https://en.wikipedia.org/wiki/Media%20richness%20theory | Media richness theory (MRT), sometimes referred to as information richness theory, is a framework used to describe a communication medium's ability to reproduce the information sent over it. It was introduced by Richard L. Daft and Robert H. Lengel in 1986 as an extension of information processing theory. MRT is used to rank and evaluate the richness of certain communication media, such as phone calls, video conferencing, and email. For example, a phone call cannot reproduce visual social cues such as gestures which makes it a less rich communication media than video conferencing, which affords the transmission of gestures and body language. Based on contingency theory and information processing theory, MRT theorizes that richer, personal communication media are generally more effective for communicating equivocal issues in contrast with leaner, less rich media.
Background
Media richness theory was introduced in 1986 by Richard L. Daft and Robert H. Lengel. Leaning on information processing theory for its theoretical foundation, MRT was originally developed to describe and evaluate communication media within organizations. In presenting media richness theory, Daft and Lengel sought to help organizations cope with communication challenges, such as unclear or confusing messages, or conflicting interpretations of messages.
Other communication scholars have tested the theory in order to improve it, and more recently media richness theory has been retroactively adapted to include newer communication media, such as video telephony, online conferencing, and online coursework. Although media richness theory relates to media use, rather than media choice, empirical studies of the theory have often studied what medium a manager would choose to communicate over, and not the effects of media use (media adequacy).
Since its introduction, media richness theory has been applied to contexts outside of organizational and business communication (See "Application" section).
Theory
Information richness is defined by Daft and Lengel as "the ability of information to change understanding within a time interval".
Media richness theory states that all communication media vary in their ability to enable users to communicate and to change understanding. The degree of this ability is known as a medium's "richness." MRT places all communication media on a continuous scale based on their ability to adequately communicate a complex message. Media that can efficiently overcome different frames of reference and clarify ambiguous issues are considered to be richer whereas communications media that require more time to convey understanding are deemed less rich.
A primary driver in selecting a communication medium for a particular message is to reduce the equivocality, or possible misinterpretations, of a message. If a message is equivocal, it is unclear and thus more difficult for the receiver to decode. The more equivocal a message, the more cues and data needed to interpret it correctly. For example, a simple message intended to arrange a meeting time and place could be communicated in a short email, but a more detailed message about a person's work performance and expectations would be better communicated through face-to-face interaction.
The theory includes a framework with axes going from low to high equivocality and low to high uncertainty. Low equivocality and low uncertainty represents a clear, well-defined situation; high equivocality and high uncertainty indicates ambiguous events that need clarification by managers. Daft and Lengel also stress that message clarity may be compromised when multiple departments are communicating with each other, as departments may be trained in different skill sets or have conflicting communication norms.
Determining media richness
In their 1988 article regarding media richness theory, Daft and Lengel state, "The more learning that can be pumped through a medium, the richer the medium." Media richness is a function of characteristics including the following:
Ability to handle multiple information cues simultaneously
Ability to facilitate rapid feedback
Ability to establish a personal focus
Ability to utilize natural language
Selecting an appropriate medium
Media richness theory predicts that managers will choose the mode of communication based on aligning the equivocality of the message to the richness of the medium. In other words, communication channels will be selected based on how communicative they are. However, often other factors, such as the resources available to the communicator, come into play. Daft and Lengel's prediction assumes that managers are most concentrated on task efficiency (that is, achieving the communicative goal as efficiently as possible) and does not take into consideration other factors, such as relationship growth and maintenance. Subsequent researchers have pointed out that attitudes towards a medium may not accurately predict a person's likelihood of using that medium over others, as media usage is not always voluntary. If an organization's norms and resources support one medium, it may be difficult for a manager to choose another form to communicate his or her message.
Social presence refers to the degree to which a medium permits communicators to experience others as being psychologically present or the degree to which a medium is perceived to convey the actual presence of the communicating participants. Tasks that involve interpersonal skills, such as resolving disagreements or negotiation, demand high social presence, whereas tasks such as exchanging routine information require less social presence. Therefore, face-to-face media like group meetings are more appropriate for performing tasks that require high social presence; media such as email and written letters are more appropriate for tasks that require low social presence.
Another model that is related to media richness theory as an alternative, particularly in selecting an appropriate medium, is the social influence model. How we perceive media, in this case to decide where a medium falls on the richness scale, depends on "perceptions of media characteristics that are socially created," reflecting social forces and social norms at play in the current environment and the context that determines the needed use. Each organization is different in the goal that is trying to be reached and the missions that are trying to be completed. Thus, with different organizational cultures and environments, the way each organization perceives a medium is different and as a result, the way each organization uses media and deems media as more or less rich will vary.
Communicators also consider how personal a message is when determining the appropriate media for communication. In general, richer media are more personal as they include nonverbal and verbal cues, body language, inflection, and gestures that signal a person's reaction to a message. Rich media can promote a closer relationship between a manager and subordinate. The sentiment of the message may also have an influence on the medium chosen. Managers may want to communicate negative messages in person or via a richer media, even if the equivocality of the message is not high, in order to facilitate better relationships with subordinates. On the other hand, sending a negative message over a leaner medium would weaken the immediate blame on the message sender and prevent them from observing the reaction of the receiver.
As current business models change, allowing more employees to work outside the office, organizations must rethink their reliance on face-to-face communication, and the fear of leaner channels must be overcome. In this context, managers must decide through trial and error which medium is best suited to various situations, namely an employee who works from the office versus an employee who works outside it. Business is conducted on a global scale; in order to save money and cut back on travel time, organizations must adopt new media in order to keep business functions up to date.
Concurrency
In April 1993, Valacich et al. suggested that in light of new media, concurrency be included as an additional characteristic to determine a medium's richness. They define environmental concurrency to represent "the communication capacity of the environment to support distinct communication episodes, without detracting from any other episodes that may be occurring simultaneously between the same or different individuals." Furthermore, they explain that while this idea of concurrency could be applied to the media described in Daft and Lengel's original theories, new media provide a greater opportunity for concurrency than ever before.
Applications
Industries
Organizational and business communications
Media richness theory was originally conceived in an organizational communication setting to better understand interaction within companies. MRT is used to determine the "best" medium for an individual or organization to communicate a message. For example, organizations may find that important decisions need to be discussed in face-to-face interactions; using email would not be an adequate channel.
From an organizational perspective, high level personnel may require verbal media to help solve many of their problems. Entry-level positions with clear, unambiguous tasks may be fulfilled with written media forms. From an individual perspective, though, people prefer oral communication because the abundant communicative cues afford more accurate and efficient interpretation of the message.
An information-processing perspective of organizations also highlights the important role of communication. This perspective suggests that organizations gather information from their environment, process this information, and then act on it. As environmental complexity, turbulence, and information load increase, organizational communication increases. The organization's effectiveness in processing information becomes paramount when the business environment is complex and fraught with rapid change.
Today, companies use technologies such as video conferencing, which enable participants to see each other even when in separate locations. This technology affords organizations the opportunity to have richer communication than via traditional conference calls which only provide audio cues to the participants involved.
Media sensitivity and job performance
Daft and Lengel also assert that not all executives or managers in organizations demonstrate the same skill in making effective media choices for communications. High performing executives or managers tend to be more "sensitive" to richness requirements in media selection than low performing managers. In other words, competent executives select rich media for non-routine messages and lean media for routine messages.
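This matching rule lends itself to a simple illustration. The sketch below is a hypothetical heuristic, not an instrument from Daft and Lengel's work; the medium labels, the 0-to-1 equivocality scale, and the function name are all assumptions made for the example:

```python
# Hypothetical sketch of the MRT matching rule: lean media for routine
# (low-equivocality) messages, rich media for non-routine ones.
# The scale, labels, and mapping are illustrative assumptions only.

MEDIA_BY_RICHNESS = [
    "written letter",    # leanest
    "email",
    "telephone",
    "video conference",
    "face-to-face",      # richest
]

def select_medium(equivocality: float) -> str:
    """Map message equivocality (0 = routine, 1 = highly ambiguous)
    to a medium of roughly matching richness."""
    if not 0.0 <= equivocality <= 1.0:
        raise ValueError("equivocality must be between 0 and 1")
    index = round(equivocality * (len(MEDIA_BY_RICHNESS) - 1))
    return MEDIA_BY_RICHNESS[index]

print(select_medium(0.1))  # routine status update  -> written letter
print(select_medium(0.9))  # ambiguous negotiation  -> face-to-face
```

On this toy model, a "high-performing" manager in Daft and Lengel's sense is simply one whose actual choices track such a matching function closely.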
From the consensus and satisfaction perspectives, groups with a communication medium which is too lean for their task seem to experience more difficulties than groups with a communication medium which is too rich for their task. Additionally, face-to-face groups achieved higher consensus change, higher decision satisfaction and higher decision scheme satisfaction than dispersed groups.
Job seeking and recruitment
In a job recruitment context, face-to-face interactions with company representatives, such as at career fairs, should be perceived by applicants as rich media. Career fairs allow instant feedback in the form of questions and answers, permit multiple cues including verbal messages and body gestures, and can be tailored to each job seeker's interests and questions.
In comparison, static messages like information on a company's website or posts on an electronic bulletin board can be defined as leaner media: they are not customized to the individual needs of job seekers, their feedback is asynchronous and, since they are primarily text-based, there is no opportunity for verbal inflection or body gestures. This interaction between job seekers and potential employers affects how candidates process information about the organization, and the interactions a candidate has with a potential employer via lean and rich media shape the job seeker's beliefs. Some employers have started using more vivid tools, such as videos, animations, and virtual agents, to answer questions about recruitment. Counterintuitively, more interactive elements like the US Army's virtual agent, Sgt. Starr, have been shown to hinder information transfer for ambiguous or complex messages such as a company's values or mission.
Virtual teams and teleworking
Many organizations are distributed globally, with employees on a single team located in many different time zones. To facilitate productive cooperation and team dynamics, organizations benefit from considering the technology tools provided for coworking and communication. Workman, Kahnweiler, and Bommer (2003) found that an ideal teleworking design would feature a variety of media, ranging from lean to rich, from which workers can choose whatever is most suitable for their working style and the task at hand. Further, different jobs may require different types of media: jobs that are concrete and structured, such as planning, administration, or operations, may be sustained with lean media options, while software design and development, which inherently involve far more uncertainty and negotiation, are best supported by richer media channels.
A 2009 study exploring the dynamics of virtual teams showed that the use of richer media in virtual work environments decreased perceived social loafing, or the feeling that a group member's individual contributions are not noticed or valued.
Corporate social responsibility
The concept of corporate social responsibility (CSR), which originally gained prominence in the 1960s, describes a company's self-regulation in complying with ethical and moral standards. Public companies often describe their CSR efforts as an aspect of marketing campaigns in order to appeal to customers. Sat and Selemat (2014) found that customers were more affected by such messages when they were communicated through rich channels rather than lean ones.
Media
While media richness theory's application to new media has been contested (see "Criticism"), it is still used heuristically as a basis for studies examining new media.
Websites and hypertext
Websites can vary in their richness. In a study examining representations of the former Yugoslavia on the World Wide Web, Jackson and Purcell proposed that hypertext plays a role in determining the richness of individual websites. They developed a framework of criteria in which the use of hypertext on a website can be evaluated in terms of media richness characteristics as set forth by Daft and Lengel in their original theoretical literature. Furthermore, in their 2004 article, Simon and Peppas examined product websites' richness in terms of multimedia use. They classified "rich media sites" as those that included text, pictures, sounds and video clips, while the "lean media sites" contained only text. In their study, they created four sites (two rich and two lean) to describe two products (one simple, one complex). They found that most users, regardless of the complexity of the product, preferred the websites that provided richer media.
Rich media on websites also have the potential to stimulate action in the physical world. Lu, Kim, Dou and Kumar (2014) demonstrated that a website with 3D views of a fitness center was more successful in creating students' intention to visit the gym than a website with static 2D images.
Instant messaging and texting
Media richness theory implies that a sender should select a medium of appropriate richness to communicate the desired message or fulfill a specific task. Senders that use less-rich communication media must consider the limitations of that medium in the dimensions of feedback, multiple cues, message tailoring, and emotions. Take, for example, the relative difficulty of determining whether a modern text message is serious or sarcastic in tone: the leanness of the text prevents the transmission of tone and facial expression, which would otherwise be useful in detecting sarcasm. However, results from a study conducted by Anandarajan et al. on Generation Y's use of instant messaging conclude that "the more users recognize IM as a rich communication medium, the more likely they believe this medium is useful for socialization." Though Generation Y users consider texting a rich medium, there is additional evidence that easily accessible and non-intrusive media (e.g., texting, Twitter) were more likely to be used for sharing positive than negative events, while intrusive and rich media (e.g., phone calls) were more likely to be used for sharing negative than positive events. Additionally, in order to better understand teenagers' use of MSN (later called Microsoft Messenger service), Sheer examined the effect of both media richness and communication control. Among other findings, Sheer's study demonstrated that "rich features, such as webcam and MSN Spaces seemingly facilitated the increase of acquaintances, new friends, opposite-sex friends, and, thus, the total number of friends."
E-mail
In recent years, as the general population has become more e-mail savvy, the lines have blurred somewhat between face-to-face and e-mail communication. E-mail is now thought of as a verbal tool, with its capacity to enable immediate feedback, leverage natural language, and embed emotion via acronyms and emoticons.
However, e-mail has a downside: volume overload. Emails often carry large quantities of unnecessary information that is not essential to the job, along with spam, and filtering through this junk requires additional time. The time required to manage email can itself cause information overload: faced with excess email, people may feel they will miss information because of the sheer volume of content they receive, and some individuals find this volume a barrier to responding swiftly.
Email does have the capacity to transmit more informational content than other channels such as voicemail. The perception of email as a rich platform varies among users, however, and this perception shapes how an individual uses the channel. For some, the choice of content differs: those who regard email as a rich channel may include images or videos, whereas others use only text. This perception also affects the choice of linguistic features; those who treat email as akin to an oral channel type differently from those who treat it as a written channel.
Parents favor using email to communicate about matters involving their child's academic status. When communicating with teachers, parents prefer this more asynchronous form of messaging because it lets them state their concerns about their children clearly, which in turn creates a clear communication channel between teacher and parent.
Email and virtual teamwork
When virtual teams were tasked with completing projects, many directed their attention toward email, even though it ranks lower in richness. Emails, although low in richness, provide rehearsability and reprocessability (the ability to review messages before sending them, and to reread them for better understanding) that media higher in richness may not.
Video conferencing
Video conference software has been used not only by virtual teams in large companies, but also for classrooms and casual conversations. Software or video conferencing systems (VCS) such as Skype and Google Hangouts allow for more visual cues than audio-only conversations. Research suggests that VCS sit somewhere between the telephone and face-to-face meetings in terms of media richness. Even though video conferencing does not have the same richness as face-to-face conversation, one study of video conferencing found that richer content-presentation types were positively correlated with higher concentration levels but showed mixed results when correlated with perceived usefulness.
Facebook
The social media platform Facebook has been found to be even richer than email. Facebook business pages offer features such as immediate feedback from customers, links to additional webpages, customization for current and potential customers, and language variety. Businesses have been able to use this platform to connect successfully with their customers, though potentially at the cost of decreased quality and access.
Other applications
Relational communication
Kashian and Walther (2018) find that asynchronous communication is a better medium than face-to-face communication for reducing conflict between people who hold generally positive opinions and attributions of their partners. The authors credit relational intimacy and the attendant positive attributions made by the partners as a potential reason for overcoming asynchronous communication's alleged shortcomings as espoused by media synchronicity theory, which “[contends] that synchronous media are best for convergent conflict communication” (2018, p. 7, citing Dennis et al., 2008). The authors conclude, “asynchronous CMC is a beneficial medium for online conflict among satisfied couples” (2018, p. 19).
In another study, Koutamanis et al. (2013) suggest that adolescents’ engagement through instant messaging may actually serve to improve their respective abilities to enter into in-person relationships in the real world. The authors focus on textual communication via electronic means. Although the written word is generally considered to be one of the leanest forms of communication regardless of how it is delivered according to the media richness theory, this study illustrates how texting may enhance adolescents’ ability to later succeed with face-to-face interactions that come after a certain amount of interaction through textual correspondence.
In a 2016 article, Lisiecka et al. point out that, although it has been generally accepted that “media other than face-to-face are considered an obstacle rather than an equally effective means of information transfer” (2016, p. 13), their results suggest that computer-mediated communication “has become similarly natural and intuitive as face-to-face contacts” (2016, p. 13).
Tong and Walther (2015) argue that, contrary to predictions attributed to early computer-mediated communication theories like media richness (Daft & Lengel, 1986) and media naturalness (Kock, 2004), nonverbal communication may not be “essential to the behavioral transfer and perceptual interpretation of expectancies” (2015, p. 204). They further suggest that face-to-face communication may negatively skew people's interactions if the participants' first impressions are influenced by biases that are responsive to visual cues.
Deception
Deception, in the context of communication theory, occurs when the sender knowingly conveys a false message to the receiver. According to Buller and Burgoon, "deception occurs when communicators control the information contained in their messages to convey a meaning that departs from the truth as they know it." This idea is central to interpersonal deception theory. Additional research has analyzed the relationship between media richness and the communication of deceptive messages. Richer media, especially those that transmit nonverbal cues such as tone of voice, facial expression, or gestures, show lower incidences of deceptive messages than lean media. By using a richer medium, interlocutors develop stronger affective bonds, which mitigates the likelihood that one speaker will try to deceive another. When honesty is not considered the best policy, leaner media such as e-mail allow a stronger possibility of deception.
Distance education and e-books
In evaluating students' satisfaction with distance courses, Sheppherd and Martz concluded that a course's use of media-rich technology affected how students evaluated its quality; courses that utilized tools such as "discussion forums, document sharing areas, and web casting" were viewed more favorably. Lai and Chang (2011) used media richness as a variable in their study of user attitudes toward e-books, noting that the potential for rich media content, such as embedded hyperlinks and other multimedia additions, offers users a different reading experience than a printed book. Further research by Lan and Sie (2010) found that, within the category of text-based communication channels, there are significant differences that should shape an instructor's choice of technology. They studied the use of SMS, email, and RSS and found that SMS is suitable for fast delivery, email affords greater content richness, and RSS is the ideal format for content presentation on front-end mobile devices.
E-books and e-learning are becoming recurrent tools in the academic landscape. One of the key characteristics of e-learning is its capability to integrate different media, such as text, picture, audio, animation and video to create multimedia instructional material. Media selection in e-learning can be a critical issue because of the increased costs of developing non-textual e-learning materials. Learners can benefit from the use of richer media in courses that contain equivocal and complex content; however, learners achieve no significant benefit in either learner score or learner satisfaction from the use of richer media in courses containing low equivocal (numeric) content.
Nursing
The transition from analog to digital record keeping in medical care has been at the forefront of medical innovation. Castro, Favela, and Gracia-Pena studied the effects of different media (face-to-face, telephone, and videoconferencing) on nursing consultations in emergency calls. They found that while there were no differences in efficacy between media, richer media did facilitate faster consultations and resolutions. Videoconferencing may, however, result in less eye contact than if the nurse were face to face with the patient.
Finding physicians and healthcare providers
Although interpersonal communication is a key ingredient in medical encounters, and physician communication is one of the most important qualities patients cite when selecting a new doctor, healthcare organizations do little through their online biographies to help prospective patients understand how a new doctor would communicate with them in future encounters: most provide only "lean" text biographies, and very few provide "richer" video introductions. Video introductions offer patients the chance to see how a physician might actually interact within a consultation. Perrault and Silk tested what effects richer video introductions of doctors might have on patients in this decision-making phase. They found that when participants were exposed to a richer video introduction of a physician, uncertainty was reduced to a greater extent than when they were exposed only to a lean, text-based biography. Participants were also more likely to want to visit the doctor who provided the richer video introduction rather than the leaner text biography.
Civic engagement
Media used online have also been shown to stimulate civic engagement. Leveraging the Internet to facilitate public deliberation has proven to be a successful and cost-effective way to engage large numbers of citizens, and studies have shown that mixed-modality media (both rich and lean) can be useful in citizen education and engagement. Through the creation of new social networks and various online platforms, media allow many more opportunities for "greater visibility and community building potential of cultural citizenship's previous 'ephemeral' practices." The explosion of creativity on the internet can be linked to formal institutions such as government and education, allowing for a broader participation base, stronger engagement of citizens, and access to a wider range of insight and knowledge.
Gender and media richness theory
Studies have been conducted to determine which media allow different genders to become more productive in the workplace. In Gender Differences in the Effects of Media Richness, the researchers found that women tend to work better with nonverbal communication than men. In general, women decode nonverbal cues more easily, owing to their tendency to be expressive more frequently than men.
Researchers found that men are more likely to appreciate task-oriented projects, whereas women prefer social-oriented activities. Men were found to make quicker decisions, while women facilitate conversations in greater depth in order to fully understand what is being discussed.
Criticism
Scope of the theory
Media richness theory has been criticized for what many researchers see as its deterministic nature. Markus argues that social pressures can influence media use much more strongly than richness, and in ways that are inconsistent with MRT's key tenets. It has also been noted that media richness theory should not assume that attitudes toward using a richer medium in a situation are simply the opposite of those toward using a leaner one. In fact, media choice is complex: even if a rich medium is considered the "best" way to communicate a message, a leaner medium may still be able to communicate it, and for some tasks the type of medium used will make no difference to the accuracy of the communicated message.
In selecting a medium for message transmission, an individual will also consider his or her comfort with the possible options. If an individual is uncomfortable or unfamiliar with using an email system to distribute a message, and views learning to send an email as more time-consuming and inefficient than simply holding a group meeting, he or she may choose the richer medium over the more efficient one. This behavioral outcome, though irrational, is a reflection of previously established experience.
Cultural and social limitations
Ngwenyama and Lee show that cultural and social background influence individuals' media choices in ways that are incompatible with predictions based on media richness theory; their paper received the Paper of the Year Award from the journal MIS Quarterly. Ngwenyama and Lee are not alone in critiquing the limitations of media richness theory, particularly with regard to cultural and individual characteristics. Research by Ook Lee demonstrated that in a Confucian virtual work environment where showing respect is essential, a communication channel's ability to convey cultural protocol is more important than its richness. In 2009, Gerritsen's study concluded that in business contexts, culture does play a role in determining the receiver's preference of medium, perhaps in terms of a specific culture's threshold for uncertainty avoidance.
Additionally, Dennis, Kinney, and Hung found that in terms of the actual performance of equivocal tasks, the richness of a medium has the most notable effect on teams composed entirely of females. On the other hand, "matching richness to task equivocality did not improve decision quality, time, consensus, or communication satisfaction for all-male or mixed-gender teams." Individually speaking, Barkhi demonstrated that communication mode and cognitive style can play a role in media preference and selection, suggesting that even in situations with identical messages and intentions, the "best" media selection can vary from person to person.
Application to new media
Additionally, because media richness theory was developed before the widespread use of the internet, which introduced media such as email, chat rooms, instant messaging, and smartphones, some have questioned its ability to accurately predict which media new users may choose. Several studies have examined media choice when the options include so-called "new media" such as voice mail and email. Blau, Weiser, and Eshet-Alkalai studied the differences and similarities of perceived and actual outcomes for students who take the same class either online or in a traditional classroom setting. The authors conclude that face-to-face classroom settings are not superior to online classrooms. Further, they suggest that a “high level of medium naturalness might hinder the understanding of a very complicated type of knowledge”, which is the opposite of what media richness theory predicts. El-Shinnaway and Markus hypothesized that, based on media richness theory, individuals would choose to communicate messages over the richer medium of voice mail rather than via email, but found that even when sending more equivocal messages, the leaner medium of email was used. It has also been argued that, given the expanded capabilities of new media, media richness theory's unidimensional approach to categorizing communication media is no longer sufficient to capture all the dimensions in which media types can vary.
Related theories
Media naturalness
Several new theories have been developed based on Daft and Lengel's original framework. Kock (2004) argues that human non-lexical communication methods and apparatus, such as facial expressions, gestures, and body language, have evolved over millions of years and, as such, must be important to the naturalness of communication between people. Media naturalness theory hypothesizes that because face-to-face communication is the most "natural" method of communication, we should want our other communication methods to resemble it as closely as possible. While media richness theory places media on a scale from low to high richness with face-to-face communication at the top, media naturalness theory places face-to-face communication at the middle of its scale and states that the further one gets from face-to-face (toward either more or less richness), the more cognitive processing is required to comprehend a message.
Media compensation
The 2011 media compensation theory by Hantula, Kock, D'Arcy, and DeRosa refines Kock's media naturalness theory. The authors explain that the media compensation theory has been developed to specifically address two paradoxes:
Virtual communication, work, collaboration, and teams are largely successful (sometimes even more so than face-to-face equivalents), which conflicts with Kock's media naturalness theory; and,
"The human species evolved in small groups using communications modalities in constrained areas, yet use electronic communication media to allow large groups to work together effectively across time and space” (Hantula et al., 2011, p. 358).
The authors grapple with how humans “who have not changed much in many millennia” (Hantula et al., 2011, p. 358) are able to successfully embrace and employ lean media, such as texting, considering their assumption that human evolution has progressed down a path toward, and adeptness for, face-to-face communication, and conclude that elements of the media naturalness theory can coexist with Carlson and Zmud's channel expansion theory.
Media synchronicity
To help explain media richness and its application to new media, media synchronicity theory was proposed. Media richness is also related to adaptive structuration theory and social information processing theory, which, instead of focusing on the objective physical attributes of media, shift toward the social construction of media. Media synchronicity theory, however, is a theory of communication performance that does not investigate why people choose which media to use. Synchronicity describes the ability of a medium to create the sense that all participants are concurrently engaged in the communication event. Media with high degrees of synchronicity, such as face-to-face meetings, offer participants the opportunity to communicate in real time, immediately observe the reactions and responses of others, and easily determine whether co-participants are fully engaged in the conversation.
Media synchronicity theory states that every communication interaction is composed of two processes: conveyance and convergence. The processes are necessary for completing tasks. Conveyance is about the transmission of new information, while convergence is about reaching an agreement. For tasks that require convergence, media with high degrees of synchronicity, such as face-to-face meetings and video conferences, offer participants the opportunity to communicate at the same time, and develop interpersonal reactions to reach an agreement through discussion. For tasks that require conveyance, media with low degrees of synchronicity such as e-mail and SMS texts allows participants to receive information regardless of geographical dispersion and time zone and have more time to process new information without the necessity to debate with others.
Media synchronicity theory also states that each medium has a set of capabilities: transmission velocity, parallelism, symbol sets, rehearsability, and reprocessability. The transmission velocity of a message refers to how quickly the recipient receives it from the sender; parallelism refers to the number of messages that can be conveyed at the same time; symbol sets are the number of ways recipients can interpret the message, such as verbal and visual cues; rehearsability is the extent to which the sender can revise and edit messages before sending them; and reprocessability is the degree to which recipients can retrieve and re-interpret messages. According to the theory, choosing a medium whose capabilities match the information transmission and processing requirements makes communication more effective.
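As a rough data-structure sketch of these five capabilities (the numeric ratings below are invented for illustration and are not values given by the theory's authors):

```python
from dataclasses import dataclass

@dataclass
class MediumCapabilities:
    """The five capability dimensions named by media synchronicity theory.
    All ratings are hypothetical 0-1 scores, for illustration only."""
    transmission_velocity: float  # how quickly a message reaches the recipient
    parallelism: float            # how many messages can flow at once
    symbol_sets: float            # variety of cues (verbal, visual, ...)
    rehearsability: float         # ability to revise/edit before sending
    reprocessability: float       # ability to retrieve and re-read messages

face_to_face = MediumCapabilities(1.0, 0.2, 1.0, 0.1, 0.1)
email        = MediumCapabilities(0.3, 0.9, 0.4, 1.0, 1.0)

def suits_convergence(m: MediumCapabilities) -> bool:
    # Convergence (reaching agreement) favors fast, cue-rich media,
    # i.e. high synchronicity; conveyance tolerates the opposite profile.
    return m.transmission_velocity > 0.7 and m.symbol_sets > 0.7

print(suits_convergence(face_to_face))  # True  -> suited to convergence
print(suits_convergence(email))         # False -> better for conveyance
```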
Applications
The applications of media synchronicity theory include negotiations, virtual team collaboration, and communication during disasters. During negotiations, if both communicators are familiar with each other and know the subprocesses needed to complete tasks, the need for synchronicity is lower. Negotiations can be more effective if the group has discussed the requirements through media with low synchronicity before synchronous meetings, as the group can remove uncertainties before reaching convergence. The rehearsability of the medium also has a positive impact on satisfaction: positive messages transmitted through asynchronous text-based electronically mediated negotiations (TBEM) yield higher negotiator satisfaction than face-to-face (FTF) negotiations. Virtual collaborations are considered activities that require convergence, and groups given a 3D space providing rooms for synchronous discussion show higher task performance than groups using text-based chat; if mobile-enabled discussion is also provided during the collaboration, its high parallelism and reprocessability can improve user experience and task performance. During natural disasters, the purpose of risk communication is to educate people about the situation, so conveyance processes are required and a medium with relatively low synchronicity is preferred. Crisis communication, on the other hand, has the purpose of sharing individuals' understandings, which requires convergence, so a medium with relatively high synchronicity is preferred. A single social platform can have different sets of capabilities depending on its features, and people can manipulate symbol sets, such as hashtags and the number of words in a post, to maximize the effectiveness of communication. Synchronous channels are helpful in urgent situations, especially in more vulnerable areas, while asynchronous channels are useful for governments and utility service providers in their attempts to amplify crisis management messages and expand the reach of information related to evacuation and recovery.
Channel expansion
Channel expansion theory was proposed by Carlson and Zmud (1999) to explain inconsistencies found in several empirical studies, whose results showed that managers would employ "leaner" media for tasks of high equivocality. Channel expansion theory suggests that an individual's media choice has much to do with that individual's experience with the medium itself, with the communicator, and with the topic. Thus it is possible that an individual's experience using a certain lean medium will prompt that individual to use it for equivocal tasks. For example, a study by Kahai, Carroll, and Jestice (2007) showed that participants' familiarity with instant messaging led them to perceive that medium as richer than the virtual world known as Second Life. Participants' lack of experience with the objectively richer virtual world may have affected their perception when compared with the more familiar medium of instant messaging.
However, the theory does not suggest that knowledge-building experiences will necessarily equalize differences in richness, whether objective or perceptual, across different media. Put another way, knowledge-building experiences may be positively related to perceptions of the richness of email, but this does not necessarily mean that email will be viewed as richer than another medium, such as face-to-face interaction.
See also
Communication theory
Emotions in virtual communication
Hyperpersonal model
Multicommunicating
Social identity model of deindividuation effects (SIDE)
Telecommuting
Theories of technology
References
Further reading
Daft, R.L. & Lengel, R.H. (1984). Information richness: a new approach to managerial behavior and organizational design. In: Cummings, L.L. & Staw, B.M. (Eds.), Research in organizational behavior 6, (191-233). Homewood, IL: JAI Press.
Daft, R.L., Lengel, R.H., & Trevino, L.K. (1987). Message equivocality, media selection, and manager performance: Implications for information systems. MIS Quarterly, September, 355–366.
Mass media technology | Media richness theory | [
"Technology"
] | 8,041 | [
"Information and communications technology",
"Mass media technology"
] |
2,185,680 | https://en.wikipedia.org/wiki/Multi-configuration%20time-dependent%20Hartree | Multi-configuration time-dependent Hartree (MCTDH) is a general algorithm to solve the time-dependent Schrödinger equation for multidimensional dynamical systems consisting of distinguishable particles. MCTDH can thus determine the quantal motion of the nuclei of a molecular system evolving on one or several coupled electronic potential energy surfaces. MCTDH by its very nature is an approximate method. However, it can be made as accurate as any competing method, but its numerical efficiency deteriorates with growing accuracy.
MCTDH is designed for multi-dimensional problems, in particular for problems that are difficult or even impossible to attack in a conventional way. There is no or only little gain when treating systems with less than three degrees of freedom by MCTDH. MCTDH will in general be best suited for systems with 4 to 12 degrees of freedom. Because of hardware limitations it may in general not be possible to treat much larger systems. For a certain class of problems, however, one can go much further. The MCTDH program package has recently been generalised to enable the propagation of density operators.
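For reference, the standard MCTDH ansatz from the literature expands the wavefunction for f degrees of freedom in time-dependent single-particle functions; the sketch below uses the conventional notation (expansion coefficients A and single-particle functions φ, both of which are propagated in time):

```latex
\Psi(q_1,\ldots,q_f,t)
  = \sum_{j_1=1}^{n_1}\cdots\sum_{j_f=1}^{n_f}
    A_{j_1 \ldots j_f}(t)\,
    \prod_{\kappa=1}^{f} \varphi_{j_\kappa}^{(\kappa)}(q_\kappa, t)
```

Because both the coefficients and the basis functions adapt in time, a comparatively small number of configurations can suffice; accuracy is systematically improved by increasing the numbers n_κ, at growing numerical cost, which matches the efficiency trade-off described above.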
References
External links
The Heidelberg MCTDH Homepage
Quantum chemistry
Scattering | Multi-configuration time-dependent Hartree | [
"Physics",
"Chemistry",
"Materials_science"
] | 241 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Scattering stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Scattering",
"Particle physics",
" molecular",
"Nuclear physics",
"Atomic",
"Condensed matter physics",
"Physical chemistry stubs",
" and optica... |
2,185,977 | https://en.wikipedia.org/wiki/Nanotomography | Nanotomography, much like its related modalities tomography and microtomography, uses x-rays to create cross-sections from a 3D object that can later be used to recreate a virtual model without destroying the original, a form of nondestructive testing. The term nano is used to indicate that the pixel sizes of the cross-sections are in the nanometer range.
Nano-CT beamlines have been built at 3rd generation synchrotron radiation facilities, including the Advanced Photon Source of Argonne National Laboratory, SPring-8, and the ESRF, from the early 2000s. They have been applied to a wide variety of three-dimensional visualization studies, such as those of comet samples returned by the Stardust mission, mechanical degradation in lithium-ion batteries, and neuron deformation in schizophrenic brains.
Although a lot of research is done to create nano-CT scanners, currently there are only a few available commercially. The SkyScan-2011 has a range of about 150 to 250 nanometers per pixel with a resolution of 400 nm and a field of view (FOV) of 200 micrometers. The Xradia nanoXCT has a spatial resolution of better than 50 nm and a FOV of 16 micrometers.
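As a back-of-the-envelope reading of these specifications (taking, as an assumption, a pixel size of 200 nm from the SkyScan-2011's stated 150-250 nm range), the number of pixels spanning the field of view follows directly:

```latex
\frac{\text{FOV}}{\text{pixel size}}
  = \frac{200\ \mu\text{m}}{200\ \text{nm}}
  = \frac{200\,000\ \text{nm}}{200\ \text{nm}}
  \approx 1000\ \text{pixels}
```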
At Ghent University, the UGCT team developed a nano-CT scanner based on commercially available components. The UGCT facility is an open nano-CT facility giving access to scientists from universities, institutes, and industry.
References
Medical imaging
Microscopes | Nanotomography | [
"Chemistry",
"Technology",
"Engineering"
] | 309 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
2,186,043 | https://en.wikipedia.org/wiki/Pituitary%20stalk | The pituitary stalk, also known as the infundibular stalk, infundibulum, or Fenderson's funnel, is the connection between the hypothalamus and the posterior pituitary, the posterior lobe of the pituitary gland. The floor of the third ventricle is prolonged downward as a funnel-shaped recess—the infundibular recess—into the infundibulum, where the apex of the pituitary is attached.
It passes through the dura mater of the diaphragma sellae as it carries axons from the magnocellular neurosecretory cells of the hypothalamus down to the posterior pituitary where they release their neurohypophysial hormones, oxytocin and vasopressin, into the blood.
Damage to the pituitary stalk blocks the release of antidiuretic hormone, resulting in polydipsia (excessive water intake) and polyuria (excessive urination, central diabetes insipidus).
The diameter of the pituitary stalk is 3.3 mm at the level of the optic chiasm and 1.9 mm at the pituitary gland insertion site.
See also
Pituitary stalk interruption syndrome
Additional images
References
Endocrine system
Hypothalamus | Pituitary stalk | [
"Biology"
] | 277 | [
"Organ systems",
"Endocrine system"
] |
2,186,113 | https://en.wikipedia.org/wiki/Okadaic%20acid | Okadaic acid, C44H68O13, is a toxin produced by several species of dinoflagellates, and is known to accumulate in both marine sponges and shellfish. One of the primary causes of diarrhetic shellfish poisoning, okadaic acid is a potent inhibitor of specific protein phosphatases and is known to have a variety of negative effects on cells. A polyketide, polyether derivative of a C38 fatty acid, okadaic acid and other members of its family have shed light on many biological processes, with respect both to dinoflagellate polyketide synthesis and to the role of protein phosphatases in cell growth.
History
As early as 1961, reports of gastrointestinal disorders following the consumption of cooked mussels appeared in both the Netherlands and Los Lagos. Attempts were made to determine the source of the symptoms; however, they failed to elucidate the true culprit, instead implicating a species of microplanktonic dinoflagellates. In the summers of the late 1970s, a series of food poisoning outbreaks in Japan led to the discovery of a new type of shellfish poisoning. Named for its most prominent symptoms, the new diarrhetic shellfish poisoning (DSP) affected only the northern portion of Honshu during 1976, but by 1977 large cities such as Tokyo and Yokohama were affected. Research into the shellfish consumed in the affected regions showed that a fat-soluble toxin was responsible for the 164 documented cases, and this toxin was traced to mussels and scallops harvested in the Miyagi prefecture. In northeastern Japan, a legend had existed that shellfish can be poisonous during the season of paulownia flowers. Studies following this outbreak showed that the toxicity of these mussels and scallops appeared and increased during June and July, and all but disappeared between August and October.
Elsewhere in Japan, in 1975 the Fujisawa pharmaceutical company observed that the extract of a black sponge, Halichondria okadai, was a potent cytotoxin, which was dubbed Halichondrine-A. In 1981, the structure of one such toxin, okadaic acid, was determined after it was extracted both from the black sponge in Japan, Halichondria okadai, for which it was named, and from a sponge in the Florida Keys, Halichondria melanodocia. Okadaic acid sparked research both for its cytotoxicity and for being the first reported marine ionophore.
One of the toxic culprits of DSP, dinophysistoxin-1 (DTX-1), named for one of the organisms implicated in its production, Dinophysis fortii, was compared to and shown to be very chemically similar to okadaic acid several years later, and okadaic acid itself was implicated in DSP around the same time. Since its initial discovery, reports of DSP have spread throughout the world, and are especially concentrated in Japan, South America and Europe.
Synthesis
Derivatives
Okadaic acid (OA) and its derivatives, the dinophysistoxins (DTX), are members of a group of molecules called polyketides. The complex structure of these molecules includes multiple spiroketals along with fused ether rings.
Biosynthesis
Being polyketides, the okadaic acid family of molecules are synthesized by dinoflagellates via polyketide synthase (PKS). However unlike the majority of polyketides, the dinoflagellate group of polyketides undergo a variety of unusual modifications. Okadaic acid and its derivatives are some of the most well studied of these polyketides, and research on these molecules via isotopic labeling has helped to elucidate some of those modifications.
Okadaic acid is formed from a starter unit of glycolate, found at carbons 37 and 38, and all subsequent carbons in the chain are derived from acetate. Because polyketide synthesis is similar to fatty acid synthesis, during chain extension the molecule may undergo reduction of the ketone, dehydration, and reduction of the olefin. Failure to perform one or more of these three steps, combined with several unusual reactions, is what allows the formation of okadaic acid's functionality. Carbon deletion and addition at the alpha and beta positions comprise the other transformations present in the okadaic acid biosynthesis.
Carbon deletion occurs by way of a Favorskii rearrangement and subsequent decarboxylation. Attack of a ketone in the growing chain by enzyme-bound acetates, and subsequent decarboxylation/dehydration results in an olefin replacing the ketone, in both alpha and beta alkylation. After this the olefin can isomerize to more thermodynamically stable positions, or can be activated for cyclizations, in order to produce the natural product.
Laboratory syntheses
To date, several studies have been performed toward the synthesis of okadaic acid and its derivatives. Three total syntheses of okadaic acid have been achieved, along with many more formal syntheses and several total syntheses of the other dinophysistoxins. The first total synthesis of okadaic acid was completed in 1986 by Isobe et al., just five years after the molecule's structure was elucidated. The next two were completed in 1997 and 1998 by the Forsyth and Ley groups, respectively.
In Isobe's synthesis, the molecule was broken into three pieces, along the C14-C15 and C27-C28 bonds. This formed fragments A, B, and C, which were synthesized separately, after which the B and C fragments were combined and then joined to the A fragment. This synthesis contained 106 steps, with a longest linear sequence of 54 steps. The precursors to all three fragments were glucose derivatives obtained from the chiral pool. Spiroketals were obtained from precursor ketone diols, and were therefore formed thermally in acid.
Like Isobe's synthesis, the Forsyth synthesis sought to reduce the number of steps and to increase the potential for designing analogues late in the synthesis. To do this, Forsyth et al. designed the synthesis to allow for structural changes and installation of important functional groups before large pieces were joined. The resulting synthesis proceeded in 3% overall yield, with 26 steps in the longest linear sequence. As above, spiroketalization was performed thermodynamically with the introduction of acid.
Ley's synthesis of okadaic acid is most unlike its predecessors, although it still contains similar motifs. Like the others, this synthesis divided okadaic acid into three components along the acyclic segments. However, designed to display new techniques developed in their group, Ley's synthesis included forming the spiroketals using (diphenylphosphineoxide)-tetrahydrofuran and (phenylsulfonyl)-tetrahydropyrans, allowing for more mild conditions. Similar to those above, a portion of the stereochemistry in the molecule was set by starting materials obtained from the chiral pool, in this case mannose.
Biology
Mechanism of action
Okadaic acid (OA) and its relatives are known to strongly inhibit protein phosphatases, specifically serine/threonine phosphatases. Furthermore, of the four such phosphatases, okadaic acid and its relatives specifically target protein phosphatase 1 (PP1) and protein phosphatase 2A (PP2A), to the exclusion of the other two, with dissociation constants of 150 nM and 30 pM, respectively. Because of this, this class of molecules has been used to study the action of these phosphatases in cells. Once OA binds to the phosphatase protein(s), it causes hyperphosphorylation of specific proteins within the afflicted cell, which in turn reduces control over sodium secretion and over the solute permeability of the cell. The affinity between okadaic acid derivatives and PP2A has been tested; the only derivative with a lower dissociation constant, and therefore higher affinity, was DTX-1, which has been shown to be 1.6 times stronger. Furthermore, for the purpose of determining the toxicity of mixtures of different okadaic acid derivatives, inhibitory equivalency factors for the relatives of okadaic acid have been studied: in wild-type PP2A, the inhibitory equivalency relative to okadaic acid was 0.9 for DTX-1 and 0.6 for DTX-2.
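A minimal sketch of how such equivalency factors could be applied to estimate the overall potency of a mixture; the factors 1.0 (OA, by definition), 0.9 (DTX-1), and 0.6 (DTX-2) are those reported above for wild-type PP2A, while the sample concentrations and function name are invented for the example:

```python
# Hypothetical okadaic-acid-equivalent calculation for a toxin mixture,
# using the wild-type PP2A inhibitory equivalency factors quoted above.

EQUIVALENCY_FACTORS = {"OA": 1.0, "DTX-1": 0.9, "DTX-2": 0.6}

def oa_equivalents(concentrations_ug_per_kg: dict[str, float]) -> float:
    """Return the total okadaic acid equivalents (µg/kg) of a mixture."""
    return sum(EQUIVALENCY_FACTORS[toxin] * conc
               for toxin, conc in concentrations_ug_per_kg.items())

sample = {"OA": 50.0, "DTX-1": 100.0, "DTX-2": 40.0}  # invented tissue levels
print(oa_equivalents(sample))  # 50*1.0 + 100*0.9 + 40*0.6 = 164.0 µg/kg
```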
Toxicology
The main route of exposure to DSP from okadaic acid and its relatives is through the consumption of shellfish. It was initially shown that the toxic agents responsible for DSP tend to be most concentrated in the hepatopancreas, followed by the gills for certain shellfish. The symptoms for diarrhetic shellfish poisoning include intense diarrhea and severe abdominal pains, and rarely nausea and vomiting, and they tend to occur anytime between 30 minutes and at most 12 hours after consuming toxic shellfish. It has been estimated that it takes roughly 40 μg of okadaic acid to trigger diarrhea in adult humans.
Medical uses
Because of its inhibitory effects on phosphatases, okadaic acid has shown promise for numerous potential medical uses. At the time of its initial discovery, okadaic acid, specifically the crude source extract, showed potent inhibition of cancer cells, so early interest in the family of molecules centered on that feature. However, it was shown that the more cytotoxic component of H. okadai was actually a separate family of compounds, the Halichondrines, and research into the cytotoxicity of okadaic acid consequently decreased. Nevertheless, the unique action of okadaic acid on cells maintained biological interest in the molecule. Okadaic acid has been shown to have neurotoxic, immunotoxic, and embryotoxic effects. Furthermore, in two-stage carcinogenesis of mouse skin, the molecule and its relatives have been shown to have tumor-promoting effects. Because of this, the effects of okadaic acid on Alzheimer's disease, AIDS, diabetes, and other human diseases have been studied.
See also
Canadian Reference Materials
Brevetoxin
Ciguatoxin
Domoic acid
Saxitoxin
Tetrodotoxin
Rubratoxin
References
External links
Carboxylic acids
Laxatives
Phycotoxins
Polyketides
Polyether toxins
Spiro compounds
Oxygen heterocycles
Phosphatase inhibitors | Okadaic acid | [
"Chemistry"
] | 2,270 | [
"Biomolecules by chemical classification",
"Natural products",
"Toxins by chemical classification",
"Polyether toxins",
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Polyketides",
"Spiro compounds"
] |
2,186,122 | https://en.wikipedia.org/wiki/Diminazene | Diminazene (INN; also known as diminazen) is an anti-infective medication for animals that is sold under a variety of brand names. It is effective against certain protozoa such as Babesia, Trypanosoma, and Cytauxzoon. The drug may also be effective against certain bacteria including Brucella and Streptococcus.
Chemically it is a di-amidine and it is formulated as its aceturate salt, diminazene aceturate.
The mechanism is not well understood; the drug probably inhibits DNA replication, but it also has affinity for RNA.
Side effects
Acute side effects include vomiting, diarrhea, and hypotension (low blood pressure). Diminazen can harm the liver, kidneys and brain, which is potentially life-threatening; camels are especially susceptible to these effects.
Resistance
The Gibe River Valley in southwest Ethiopia showed universal resistance between July 1989 and February 1993. This likely indicates a permanent loss of efficacy in this area against the tested target, T. congolense isolated from Boran cattle.
References
Amidines
Antiprotozoal agents
Veterinary drugs
Triazenes | Diminazene | [
"Chemistry",
"Biology"
] | 242 | [
"Antiprotozoal agents",
"Amidines",
"Functional groups",
"Biocides",
"Bases (chemistry)"
] |
2,186,198 | https://en.wikipedia.org/wiki/Parc%20de%20la%20Villette | The Parc de la Villette () is the third-largest park in Paris, in area, located at the northeastern edge of the city in the 19th arrondissement. The park houses one of the largest concentrations of cultural venues in Paris, including the Cité des Sciences et de l'Industrie (City of Science and Industry, Europe's largest science museum), three major concert venues, and the prestigious Conservatoire de Paris.
Parc de la Villette is served by Paris Métro stations Corentin Cariou on Line 7 and Porte de Pantin on Line 5.
History
The park was designed by Bernard Tschumi, a French architect of Swiss origin, who built it from 1984 to 1987 in partnership with Colin Fournier, on the site of the huge Parisian abattoirs (slaughterhouses) and the national wholesale meat market, as part of an urban redevelopment project. The slaughterhouses, built in 1867 on the instructions of Napoléon III, had been cleared away and relocated in 1974. Tschumi won a major design competition in 1982–83 for the park as part of the Grands Projets of François Mitterrand, and sought the opinions of the deconstructionist philosopher Jacques Derrida in the preparation of his design proposal.
Since the creation of the park, museums, concert halls, and theatres have been designed by several noted contemporary architects, including Christian de Portzamparc, Adrien Fainsilber, Philippe Chaix, Jean-Paul Morel, Gérard Chamayou, and Tschumi himself.
Park attractions
The park houses museums, concert halls, live performance stages, and theatres, as well as playgrounds for children, and thirty-five architectural follies. These include:
Cité des Sciences et de l'Industrie (City of Science and Industry), the largest science museum in Europe; also home of Vill'Up, a shopping centre opened in November 2016 featuring the world's largest indoor pulsed-air free-fall flight simulator, 14 m high, and several cinemas (IMAX, 4DX and dynamic);
La Géode, an IMAX theatre inside a geodesic dome;
Cité de la musique (City of Music), a museum of historical musical instruments with a concert hall, also home of the Conservatoire de Paris;
Philharmonie de Paris, a new symphony hall with 2,400 seats for orchestral works, jazz, and world music, designed by Jean Nouvel and open since January 2015;
Grande halle de la Villette, a historic cast iron and glass former abattoir that now hosts fairs, festive cultural events, and other programming;
Le Zénith, a concert arena with 6,300 seats for rock and pop music;
L'Argonaute, a 50 m long decommissioned military submarine;
Cabaret Sauvage, a flexible small concert stage with 600 to 1,200 seats, designed by Méziane Azaïche in 1997;
Le Trabendo, a contemporary venue for pop, rock, folk music, and jazz with 700 seats;
Théâtre Paris-Villette, a small actors' theatre and acting workshop with 211 seats;
Le Hall de la Chanson (at Pavillon du Charolais), theatre dedicated to French song with 140 seats
WIP Villette, "Work In Progress–Maison de la Villette," a space dedicated to Hip-Hop culture, social theatre, art work initiatives, and cultural democracy;
Espace Chapiteaux, a permanent tented space for contemporary circus, where resident and touring companies perform;
Pavillon Paul-Delouvrier, a chic contemporary event space for conferences, workshops, and social events designed by Oscar Tusquets;
Centre équestre de la Villette, equestrian center with numerous year-round events.
Cinéma en plein air, an outdoor movie theatre, site of an annual film festival;
Le TARMAC (former Théâtre de l'Est Parisien), venue for world performance art and dance companies touring from "La Francophonie", has moved to 159 avenue Gambetta in the 20th arrondissement.
Tourism
Since its completion in 1987, the Parc de la Villette has become a popular attraction for Paris residents and international travelers alike. An estimated 10 million people visit the park each year to take part in an array of cultural activities. With its collection of museums, theatres, architectural follies, themed gardens, and open spaces for exploration and activity, the park has created an area that relates to both adults and children.
Designed by Bernard Tschumi, the park is meant to be a place inspired by the post-modernist architectural ideas of deconstructivism. Tschumi's design was in partial response to the philosophies of Jacques Derrida, acting as an architectural experiment in space (through a reflection on Plato's Khôra), form, and how those relate a person's ability to recognize and interact. According to Tschumi, the intention of the park was to create space for activity and interaction, rather than adopt the conventional park mantra of ordered relaxation and self-indulgence. The vast expanse of the park allows for visitors to walk about the site with a sense of freedom and opportunity for exploration and discovery.
The design of the park is organized into a series of points, lines, and surfaces. These categories of spatial relation and formulation are used in Tschumi's design to act as a means of deconstructing the traditional views of how a park is conventionally meant to exist.
Activities
The Parc de la Villette offers activities that engage people of all ages and cultural backgrounds. The park is a contemporary melting pot of cultural expression where local artists and musicians produce exhibits and performances. On the periphery of the park lies the Cité des Sciences et de l'Industrie, the largest science museum in Europe, along with a convention center and an IMAX theatre; the park acts as a connection between these exterior functions. Concerts are scheduled year-round, hosting local and mainstream musicians. Dividing the park is the Canal de l'Ourcq, which offers boat tours that transport visitors around the park and to other sites in Paris. Festivals are common in the park, along with artist conventions and shows by performers.
The Parc de la Villette hosts an annual open-air film festival. In 2010 the festival's theme was "To Be 20" ("Avoir 20 ans") and featured films about youth and self-discovery around the age of 20. In 2010 films were shown by American filmmakers Woody Allen and Sofia Coppola as well as French and international filmmakers.
Gardens
The Parc de la Villette has a collection of ten themed gardens that attract a large number of the park's visitors. Each garden is created with a different representation of architectural deconstructionism and tries to create space through playfully sculptural and clever means. While some of the gardens are minimalist in design, others are clearly constructed with children in mind.
The "Jardin du Dragon" (The Garden of the Dragon) is home to a large sculptural steel dragon that has an 80-foot slide for children to play on.
The "Jardin de Bambou" (Bamboo Garden) at the Parc de la Villette was designed by Alexandre Chemetoff, winner of the Grand Prix de l'urbanisme (2000).
The "Jardin de la Treille" (Trellis Garden), designed by Gilles Vexlard and Laurence Vacherot, features vines and creepers climbing along a roof trellis, with 90 small fountains designed so that visitors hear only their murmur among the grape vines.
Seven sculptures de visées (Sculptures Bachelard) by Jean-Max Albert are installed around the gardens, and an anamorphic reflection is displayed in a small pool.
The gardens range in function; where some gardens are meant for active engagement, others exist to play off of curiosity and investigation or merely allow for relaxation.
Follies
Probably the most iconic pieces of the park, the follies act as architectural representations of deconstruction. In architecture, a folly (in French, folie) is a building constructed primarily for decoration, but suggesting by its appearance some other purpose, or so extravagant that it transcends the normal range of garden ornaments or other class of building to which it belongs. Architecturally, the follies are meant to act as points of reference that help visitors gain a sense of direction and navigate throughout the space. Twenty-six follies, made of metal and painted bright red, are placed on a grid and offer a distinct organization to the park. Each is identified by a name and a code letter-number.
While the follies are meant to exist in a deconstructive vacuum without historical relation, many have found connections between the steel structures and the previous buildings that were part of the old industrial fabric of the area. Today, the follies remain as cues to organization and direction for park visitors. Some of them house restaurants, information centers, and other functions associated with the park's needs.
Architectural deconstructivism and the park
There have been many criticisms of the innovative design of the park since its original completion. To some, the park shows little concern for the human scale of park functions, and its vast open space seems to challenge the expectations that visitors may have of an urban park. Bernard Tschumi designed the Parc de la Villette with the intention of creating a space that exists in a vacuum, something without historical precedent. The park strives to strip away the signage and conventional representations that have infiltrated architectural design and to allow for the existence of a "non-place." This non-place, envisioned by Tschumi, is for him the most appropriate example of space, providing a truly honest relationship between the subject and the object.
Visitors view and react to the plan, landscaping, and sculptural pieces without the ability to cross-reference them with previous works of historical architecture. The design of the park capitalizes on the innate qualities that are illustrated within architectural deconstructivism. By allowing visitors to experience the architecture of the park within this constructed vacuum, the time, recognitions, and activities that take place in that space begin to acquire a more vivid and authentic nature. The park is not acting as a spectacle; it is not an example of traditional park design such as New York City's Central Park. The Parc de la Villette strives to act as merely a frame for other cultural interaction.
The park embodies anti-tourism, not allowing visitors to breeze through the site and pick and choose the sights they want to see. Upon arrival in the park, visitors are thrust into a world that is not defined by conventional architectural relationships. The frame of the park, due to its roots in deconstructivism, tries to change and react to the functions that it holds within.
See also
List of tourist attractions in Paris
World Architecture Survey
References
External links
Parc de la Villette website
Galinsky: Parc de la Villette
Archidose: Parc de la Villette
Review essay on Parc de la Villette
Images and Links Resource collection
Follies Parc de la Villette 3D model of two of the Follies
19th arrondissement of Paris
Villette, Parc de la
Deconstructivism
Landscape architecture
Bernard Tschumi buildings | Parc de la Villette | [
"Engineering"
] | 2,344 | [
"Landscape architecture",
"Architecture"
] |
2,186,296 | https://en.wikipedia.org/wiki/Welin%20breech%20block | The Welin breech block was a revolutionary stepped, interrupted thread design for locking artillery breeches, invented by Axel Welin in 1889 or 1890. Shortly after, Vickers acquired the British patents. Welin breech blocks provide obturation for artillery pieces which use separate loading bagged charges and projectiles. In this system the projectile is loaded first and then followed by cloth bags of propellant.
Design
The breech block screw incorporates multiple threaded "steppings" of progressively larger radius and a gap step occupying each circular section. A three step breech block screw's circular area would nominally be divided into quarters, with each quarter containing three threaded sections of progressively increasing height and a gap step for insertion.
Each step engages with its matching thread cut in the gun breech when inserted and rotated. A gap in the thread steps was still necessary for the insertion of the largest step before rotation, so for an n-step design the area of the breech secured by threads in the block is n/(n+1) of the circumference (three-quarters in the three-step example above).
This was a major improvement on previous, non-stepped designs such as the de Bange system. A de Bange-system breech loader uses a single thread step and can only engage half of the block's circumferential threads with the breech, necessitating a long screw to achieve a strong lock. Efficiency is gained with multiple thread step heights on an interrupted screw because the smaller thread steps may be inserted into the breech along the radial vector of any larger step, whereas a single-step interrupted screw cannot be inserted wherever threads of the breech lie on the same radial vector as threads of the screw. Thread lock area is directly related to the strength of the munitions which may be safely fired, and thus large munitions with single-step screw breeches needed unreasonably long breech screws. These in turn required more time and much greater room to extract and move aside to gain access to the gun's chamber for cleaning and reloading. The engagement of threads around much more of the circumference of the Welin block allowed it to be shorter for the same total engagement area and strength; it required less than 90 degrees of rotation to lock the threads, making operation faster than previous designs and possible in much tighter spaces. It was also simpler and more secure.
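As a rough numeric illustration (my own sketch, not from the source; the function name is hypothetical), the engaged fraction of an idealized stepped interrupted screw follows directly from the sector layout described above:

```python
def engaged_fraction(steps: int) -> float:
    """Fraction of the screw circumference carrying threads, assuming each
    sector holds `steps` threaded steps plus one gap step of equal width."""
    return steps / (steps + 1)

# A single-step (de Bange-style) screw engages half its circumference;
# a three-step (Welin-style) screw engages three-quarters.
print(engaged_fraction(1))  # 0.5
print(engaged_fraction(3))  # 0.75
```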
The Welin breech was a single-motion screw, allowing it to be operated much faster than previous interrupted-thread breeches, and it became very common on British and American large-calibre naval artillery and also on larger field artillery.
Though the US Navy was offered the design a year or two after its invention, it declined, and the American Bethlehem Steel spent the next five years trying to circumvent Welin's patent before having to buy it through Vickers.
See also
Rifled breech loader
Notes and references
External links
YouTube video showing Welin breech mechanism
Royal New Zealand Artillery Association, Breech Mechanisms
Artillery components
Firearm actions | Welin breech block | [
"Technology"
] | 592 | [
"Artillery components",
"Components"
] |
2,186,444 | https://en.wikipedia.org/wiki/Laser-hybrid%20welding | Laser-hybrid welding is a type of welding process that combines the principles of laser beam welding and arc welding.
The combination of laser light and an electrical arc into an amalgamated welding process has existed since the 1970s, but has only recently been used in industrial applications. There are three main types of hybrid welding process, depending on the arc used: TIG, plasma arc or MIG augmented laser welding. While TIG-augmented laser welding was the first to be researched, MIG is the first to go into industry and is commonly known as hybrid laser welding.
Whereas in the early days laser sources still had to prove their suitability for industrial use, today they are standard equipment in many manufacturing enterprises.
The combination of laser welding with another weld process is called a "hybrid welding process". This means that a laser beam and an electrical arc act simultaneously in one welding zone, influencing and supporting each other.
Laser
Laser welding requires not only high laser power but also a high-quality beam to obtain the desired "deep-weld effect". The resulting higher beam quality can be exploited either to obtain a smaller focus diameter or a larger focal distance. A variety of laser types are used for this process, in particular Nd:YAG, whose light can be transmitted via a water-cooled glass fiber; the beam is projected onto the workpiece by collimating and focusing optics. Carbon dioxide lasers can also be used, with the beam transmitted via lenses or mirrors.
Laser-hybrid process
For welding metallic objects, the laser beam is focused to obtain intensities of more than 1 MW/cm². When the laser beam hits the surface of the material, the spot is heated to vaporization temperature, and a vapor cavity is formed in the weld metal by the escaping metal vapor. This is known as a keyhole. The distinguishing feature of the weld seam is its high depth-to-width ratio. The energy-flow density of the freely burning arc, by comparison, is slightly more than 100 kW/cm². Unlike a dual process, where two separate weld processes act in succession, hybrid welding may be viewed as a combination of both weld processes acting simultaneously in one and the same process zone. Depending on the kind of arc or laser process used, and on the process parameters, the two systems influence each other in different ways.
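For orientation, here is a back-of-the-envelope sketch (my own, not from the article; the power and spot values are hypothetical) of the mean intensity of a focused beam, which shows why a small focus diameter matters for reaching the roughly 1 MW/cm² keyhole regime:

```python
import math

def intensity_mw_per_cm2(power_w: float, focus_diameter_mm: float) -> float:
    """Mean intensity of a beam focused to a circular spot, in MW/cm²."""
    radius_cm = (focus_diameter_mm / 10.0) / 2.0
    spot_area_cm2 = math.pi * radius_cm ** 2
    return power_w / spot_area_cm2 / 1e6

# A hypothetical 4 kW beam focused to a 0.6 mm diameter spot:
print(round(intensity_mw_per_cm2(4000, 0.6), 2))  # ≈ 1.41 MW/cm²
```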
The combination of the laser process and the arc process results in an increase in both weld penetration depth and welding speed (as compared to each process alone). The metal vapor escaping from the vapor cavity acts upon the arc plasma. Absorption of the laser radiation in the processing plasma remains negligible. Depending on the ratio of the two power inputs, the character of the overall process may be mainly determined either by the laser or by the arc.
Absorption of the laser radiation is substantially influenced by the temperature of the workpiece surface. Before the laser welding process can start, the initial reflectance must be overcome, especially on aluminum surfaces. This can be achieved by preheating the material. In the hybrid process, the arc heats the metal, helping the laser beam to couple in. After the vaporisation temperature has been reached, the vapor cavity is formed, and nearly all radiation energy can be put into the workpiece. The energy required for this is thus determined by the temperature-dependent absorption and by the amount of energy lost by conduction into the rest of the workpiece. In laser-hybrid welding, using MIG, vaporisation takes place not only from the surface of the workpiece but also from the filler wire, so that more metal vapor is available to facilitate the absorption of the laser radiation.
Fatigue behavior
Over the years a great deal of research has been done to understand fatigue behavior, particularly for new techniques like laser-hybrid welding, but knowledge is still limited. Laser-hybrid welding is an advanced welding technology that creates narrow deep welds and offers greater freedom to control the weld surface geometry. Therefore, fatigue analysis and life prediction of hybrid weld joints has become more important and is the subject of ongoing research.
References
See also
List of laser articles
Welding
Welding
Welding | Laser-hybrid welding | [
"Engineering"
] | 835 | [
"Welding",
"Mechanical engineering"
] |
2,186,500 | https://en.wikipedia.org/wiki/Azimilide | Azimilide is a class ΙΙΙ antiarrhythmic drug (used to control abnormal heart rhythms). The agents from this heterogeneous group have an effect on the repolarization, they prolong the duration of the action potential and the refractory period. Also they slow down the spontaneous discharge frequency of automatic pacemakers by depressing the slope of diastolic depolarization. They shift the threshold towards zero or hyperpolarize the membrane potential. Although each agent has its own properties and will have thus a different function.
Heart potential
Azimilide dihydrochloride is a chlorophenylfuranyl compound which slows repolarization of the heart and prolongs the QT interval of the electrocardiogram. Prolongation of atrial or ventricular repolarization can provide an anti-arrhythmic benefit in patients with heart rhythm disturbances, and this has been the primary interest in the clinical development of azimilide. In rare cases, excessive prolongation of ventricular repolarization by azimilide can result in a predisposition towards severe ventricular arrhythmias. Most recent clinical trials have investigated the use of azimilide in reducing the frequency and severity of arrhythmias in patients with implanted cardiac pacemaker-defibrillators, where rare pro-arrhythmic events are rescued by the device.
The ion currents
The action of azimilide is directed at the different currents present in atrial and ventricular cardiac myocytes. It principally blocks IKr and IKs, with much weaker effects on INa, ICa, INCX and IK.Ach. IKr (rapid) and IKs (slow) are delayed rectifier potassium currents, responsible for repolarizing cardiac myocytes towards the end of the cardiac action potential. A somewhat higher concentration of azimilide is needed to block the IKs current. Both blockades result in an increase of the QT interval and a prolongation of atrial and ventricular refractory periods.
Azimilide blocks hERG channels (which encode the IKr current) with an affinity comparable to that with which KvLQT1/minK channels (which encode the IKs current) are blocked. This block exhibits reverse use-dependence, i.e. the channel-blocking effect wanes at faster pulsing rates of the cell. A possible explanation is an interaction of azimilide with K+ close to its binding site in the ion channel. However, there is also a voltage-dependent agonist effect. This is a dual effect: low-voltage depolarizations near the activation threshold increase the current amplitude, while higher depolarizing voltages suppress it. The effect originates outside the cell membrane and does not depend on G-proteins or kinase activity inside the cell. Azimilide binds to the extracellular domain of the hERG channel; this propagates a conformational change and inhibits the current. The change makes the activation gate open more easily upon low-voltage depolarization. Azimilide has two separate binding sites in the hERG channel, one for its antagonist function and the other for the agonist function.
Pharmacology
Azimilide has been studied for its anti-arrhythmic effects: it converts and maintains sinus rhythm in patients with atrial arrhythmias, and it reduces the frequency and severity of ventricular arrhythmias in patients with implanted cardioverter-defibrillators. Azimilide's most important adverse effect is torsades de pointes, a form of ventricular tachycardia.
Pharmacokinetics
The drug is administered orally and is completely absorbed. It shows no or only very minor interactions with other drugs and is eventually cleared by the kidney. Peak blood concentration is observed seven hours after administration of azimilide. The metabolic clearance is mediated through several pathways:
10% is found unchanged in the blood
30% is cleared by cleavage
25% by CYP 1A1 pathway
25% by CYP 3A4
F-1292, the major metabolite of azimilide, is formed by cleavage of the azomethine bond. Unlike desmethyl azimilide, azimilide N-oxide and azimilide carboxylate, F-1292 has no cardiovascular activity, while those three minor metabolites have class III antiarrhythmic activity. They make up only about 10% of azimilide in the blood, so their contribution is not measurable.
References
Antiarrhythmic agents
Furans
HERG blocker
Hydrazones
4-Chlorophenyl compounds
4-Methylpiperazin-1-yl compounds
Ureas
Hydantoins | Azimilide | [
"Chemistry"
] | 1,026 | [
"Organic compounds",
"Hydrazones",
"Functional groups",
"Ureas"
] |
2,186,554 | https://en.wikipedia.org/wiki/Dinaric%20calcareous%20block%20fir%20forest | The dinaric calcareous silver fir forests are an endemic vegetation type of the littoral Dinaric Alps, located in the Dinaric Mountains mixed forests ecoregion in Southeastern Europe. Pure stands of dinaric calcareous silver fir (Abies alba) forests appear on limestone escarpments in the montane zones of Orjen, Velebit, Biokovo and Prenj. As an endemic and rare vegetation type of the Dinarides, they need protection.
Structure
Dinaric calcareous silver fir forests have an open structure that is environmentally sensitive. As storms of the bora and scirocco type are common in the coastal Dinaric mountains, wind plays a great role in forming the highly labile structure of these pure silver fir communities. Silver firs can reach considerable heights on limestone, and large trunk diameters have been observed.
Distribution
Dinaric calcareous silver fir forests are dispersed in smaller patches across the hyperkarstic littoral karst mountain environments of the Dinarides. Prominent are those on Velebit and Orjen, appearing on bare limestone escarpments in the montane life zone. The abundance of precipitation on these coastal mountains, up to 5000 mm annually, together with the dry soil conditions restricts these pure silver fir forests to the rainiest and most humid spots of the Dinarides.
Ecology
Silver fir is a constituent of montane central European forests. As a species rare in dry Mediterranean climates, the silver fir's presence on Mt. Orjen is restricted to humid northern slopes. A marked difference in the fir's vegetation patterns is seen here, with a common cause in soil formation. High soil-water content in terrae fuscae on glacial superstratum leads to beech-fir forests, whereas dry initial rendzinas on glacio-karstic substrate support xeric Dinaric calcareous silver fir forests. The latter endemic community, rich in submediterranean species, has evolutionary parallels with Bosnian pine communities.
Floristic composition
Dinaric calcareous silver fir forests are among the most species-rich montane ecosystems in the Dinaric Alps.
Mixed deciduous-silver fir peony forests with Paeonia daurica Andrews have the most species-rich composition found so far on Orjen (Abies alba, Corylus colurna, Fraxinus excelsior, Fagus sylvatica, Acer intermedium, Tilia cordata, Acer pseudoplatanus, Pinus heldreichii).
Syntaxonomic chart of mixed deciduous-silver fir-peony forest at Orjen
Plant list
Typical plants of the often dry, basic Kalkomelasol soil biotope:
C. OREOHERZOGIO-ABIETALIA Fuk. 1969
a) O r e o h e r z o g i o-A b i e t i o n Ht. emend. Fuk.
1. Oreoherzogio-Abietetum Fuk.
References
Pavle Cikovac: Sociology and ecology of silver fir forests on Mt. Orjen - Montenegro. LMU Munich 2002, Department of Geography
Dinaric Mountains mixed forests
Lists of biota of Europe
Lists of plants
Flora of Southeastern Europe
Temperate coniferous forests
Forests of Croatia | Dinaric calcareous block fir forest | [
"Biology"
] | 691 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
2,186,581 | https://en.wikipedia.org/wiki/Palm%20Universal%20Connector | The Universal Connector was a standard port fitted to the bottom of many Palm PDAs from 2001 to 2004 and on units from other manufacturers that licensed Palm technology, including Garmin.
Out of the box, it is used to connect to the sync-and-charge cradle, allowing the Palm to communicate with a desktop PC and receive power for charging. A range of accessories was also available for the Universal Connector, including folding keyboards, external battery packs, wired and wireless modems, and many more.
The Universal Connector cradles were the first synchronization device that used USB to communicate with the host computer, in addition to the older serial port standard.
Some Palm devices manufactured between 2001 and 2004 did not use the Universal Connector. For instance, the Tungsten E had a mini-USB connector.
The Universal Connector was superseded by the Palm Multi-Connector for the final devices released by Palm; the newer standard added stereo audio output and mono microphone input.
Palm Models fitted with the Universal Connector
m125, m130
m500, m505, m515
Palm i705
Zire 71
Tungsten T, T2, T3, C, W
Garmin Models fitted with the Universal Connector
Garmin iQue 3600, 3200
External links
Palm Universal connector pinout
Palm OS devices | Palm Universal Connector | [
"Technology"
] | 258 | [
"Mobile computer stubs",
"Computing stubs",
"Mobile technology stubs",
"Computer hardware stubs"
] |
2,186,783 | https://en.wikipedia.org/wiki/Adams%27%20catalyst | Adams' catalyst, also known as platinum dioxide, is usually represented as platinum(IV) oxide hydrate, PtO2•H2O. It is a catalyst for hydrogenation and hydrogenolysis in organic synthesis. This dark brown powder is commercially available. The oxide itself is not an active catalyst, but it becomes active after exposure to hydrogen whereupon it converts to platinum black, which is responsible for reactions.
Preparation
Adams' catalyst is prepared from chloroplatinic acid H2PtCl6 or ammonium chloroplatinate, (NH4)2PtCl6, by fusion with sodium nitrate. The first published preparation was reported by V. Voorhees and Roger Adams. The procedure involves first preparing a platinum nitrate which is then heated to expel nitrogen oxides.
H2PtCl6 + 6 NaNO3 → Pt(NO3)4 + 6 NaCl (aq) + 2 HNO3
Pt(NO3)4 → PtO2 + 4 NO2 + O2
The resulting brown cake is washed with water to free it from nitrates. The catalyst can either be used as is or dried and stored in a desiccator for later use. Platinum can be recovered from spent catalyst by conversion to ammonium chloroplatinate using aqua regia followed by ammonia.
Uses
Adams' catalyst is used for many applications. It has been shown to be valuable for hydrogenation, hydrogenolysis, dehydrogenation, and oxidation reactions. During the reaction, platinum metal (platinum black) is formed, which has been cited as the active catalyst. Hydrogenation occurs with syn stereochemistry when used on an alkyne, resulting in a cis-alkene. Some of the most important transformations include the hydrogenation of ketones to alcohols or ethers (the latter product forming in the presence of alcohols and acids) and the reduction of nitro compounds to amines. However, reductions of alkenes can be performed with Adams' catalyst in the presence of nitro groups without reducing the nitro group. When reducing nitro compounds to amines, platinum catalysts are preferred over palladium catalysts to minimize hydrogenolysis. The catalyst is also used for the hydrogenolysis of phenyl phosphate esters, a reaction that does not occur with palladium catalysts. The pH of the solvent significantly affects the reaction course, and reactions of the catalyst are often enhanced by conducting the reduction in neat acetic acid, or in solutions of acetic acid in other solvents.
Development
Before development of Adams' catalyst, organic reductions were carried out using colloidal platinum or platinum black. The colloidal catalysts were more active but posed difficulties in isolating reaction products. This led to more widespread use of platinum black. In Adams' own words:
"...Several of the problems I assigned my students involved catalytic reduction. For this purpose we were using as a catalyst platinum black made by the generally accepted best method known at the time. The students had much trouble with the catalyst they obtained in that frequently it proved to be inactive even though prepared by the same detailed procedure which resulted occasionally in an active product. I therefore initiated a research to find conditions for preparing this catalyst with uniform activity."
Safety
Little precaution is necessary with the oxide but, after exposure to H2, the resulting platinum black can be pyrophoric. Therefore, it should not be allowed to dry and all exposure to oxygen should be minimized.
See also
Platinum on carbon
Platinum black
Rhodium-platinum oxide
Palladium on carbon
References
External links
Platinum compounds: platinum dioxide - WebElements.com
Platinum(IV) compounds
Hydrogenation catalysts
Transition metal oxides
Transition metal dichalcogenides | Adams' catalyst | [
"Chemistry"
] | 768 | [
"Hydrogenation catalysts",
"Hydrogenation"
] |
2,186,907 | https://en.wikipedia.org/wiki/Vigipirate | Vigipirate () is France's national security alert system. Created in 1978 through interministerial sessions and falling within the responsibilities of the prime minister, it has since been updated three times: in 1995 (following a terror bombing campaign), 2000 and 2004.
Details
Until 2014 the system defined four levels of threats represented by five colors: white, yellow, orange, red, scarlet. The levels called for specific security measures, including increased police or police/military mixed patrols in subways, train stations and other vulnerable locations.
In February 2014 the levels were simplified to 'vigilance' (or surveillance) and 'attack alert'. In December 2016, they were reorganized in three levels: 'vigilance', 'heightened security/risk of attack' and 'attack emergency'.
The name "Vigipirate" is an acronym of ("surveillance and protection of facilities against the risk of terrorist bombing attacks")
Levels of alert (to 2014)
Levels of alert (2014-2016)
Levels of alert (from 2016)
History of alert levels
See also
States of emergency in France
Opération Sentinelle
UK Threat Levels, used in the United Kingdom from 2006
BIKINI state, previously used in the United Kingdom
Homeland Security Advisory System (United States)
References
Law enforcement in France
Alert measurement systems
Emergency management in France
1978 introductions
Valéry Giscard d'Estaing | Vigipirate | [
"Technology"
] | 281 | [
"Warning systems",
"Alert measurement systems"
] |
2,186,936 | https://en.wikipedia.org/wiki/Crimp%20%28joining%29 | Crimping is a method of joining two or more pieces of metal or other ductile material by deforming one or both of them to hold the other. The bend or deformity is called the crimp. Crimping tools are used to create crimps.
Crimping is used extensively in metalworking, including to contain bullets in cartridge cases, for electrical connections, and for securing lids on metal food cans. Because it can be a cold-working technique, crimping can also be used to form a strong bond between the workpiece and a non-metallic component. It is also used to connect two pieces of food dough.
Tools
A crimping tool or crimp tool is used to create crimps. Crimping tools range in size from small handheld devices, to benchtop machines used for industrial purposes, to large fully-automatic wire processing machines for high-volume production.
For electrical crimps, a wide variety of crimping tools exist, generally designed for a specific type and size of terminal. Handheld tools (sometimes called crimping pliers) are common. These often use a ratcheting mechanism to ensure that sufficient crimping force has been applied. Apart from handheld tools, crimping tools also include sophisticated electrically powered hydraulic types and battery-operated tools covering the entire range of conductor sizes and types, designed for mass-production operations.
Electrical crimp
An electrical crimp is a type of solderless electrical connection which uses physical pressure to join the contacts. Crimp connectors are typically used to terminate stranded wire. Stripped wire is inserted through the correctly sized opening of the connector, and a crimper is used to tightly squeeze the opening against the wire. Depending on the type of connector used, it may be attached to a metal plate by a separate screw or bolt, or it may simply be screwed on, using the connector itself to make the attachment, as with an F connector.
Characteristics
The benefits of crimping over soldering and wire wrapping include:
A well-engineered and well-executed crimp is designed to be gas-tight, which prevents oxygen and moisture from reaching the metals (which are often different metals) and causing corrosion
Because no alloy is used (as in solder) the joint is mechanically stronger
Crimped connections can be used for cables of both small and large cross-sections, whereas only small cross-section wires can be used with wire wrapping
Crimping is normally performed by first inserting the terminal into the crimp tool. The terminal must be placed into the appropriately sized crimp barrel. The wire is then inserted into the terminal with the end of the wire flush with the exit of the terminal to maximize cross-sectional contact. Finally, the handles of the crimp tool are used to compress and reshape the terminal until it is cold-welded onto the wire.
The resulting connection may appear loose at the edges of the terminal, but this is desirable so as to not have sharp edges that could cut the outer strands of the wire. If executed properly, the middle of the crimp will be swaged or cold-formed.
More specialized crimp connectors are also used, for example as signal connectors on coaxial cables in applications at high radio frequencies (VHF, UHF). These often require specialised crimping tools to form the proper crimp.
Crimped contacts are permanent (i.e. the connectors and wire ends cannot be reused).
Theory
Crimp-on connectors are attached by inserting the stripped end of a stranded wire into a portion of the connector, which is then mechanically deformed by compressing (crimping) it tightly around the wire. The crimping is usually accomplished with a special crimping tool, such as crimping pliers. A key idea behind crimped connectors is that the finished connection should be gas-tight.
Effective crimp connections deform the metal of the connector past its yield point so that the compressed wire causes tension in the surrounding connector, and these forces counter each other to create a high degree of static friction which holds the cable in place. Due to the elastic nature of the metal in crimped connections, they are highly resistant to vibration and thermal shock.
Two main classes of wire crimps exist:
Closed barrel crimps have a cylindrical opening for a wire, and the crimping tool deforms the originally circular cross section of the terminal into some other shape. This method of crimping is less resilient to vibration.
Open barrel crimps have "ears" of metal that are shaped like a V or U, and the crimping tool bends and folds them over the wire prior to swaging the wire to the terminal. Open-barrel terminals are claimed to be easier to automate because they avoid the need to funnel stranded wire into the narrow opening of a barrel terminal.
In addition to their shape, crimped connectors can also be characterized by their insulation (insulated or non-insulated), and whether they crimp onto the conductor(s) of a wire (wire crimp) or its insulation (insulation crimp).
Shapes
C crimp
D crimp
F crimp (a.k.a. B crimp)
O crimp
W crimp
Overlap/OVL crimp
Oval (confined) crimp
Four-Mandrel crimp
Mandrel (crescent) crimp
Mandrel crimp-narrow (indented)
Hexagonal crimp
Mandrel (indent) crimp
Square crimp
Trapezoidal crimp
Trapezoidal indent crimp
Trapezoidal crimp front
Tyco crimp
Western crimp
Applications
Crimped connections are common alternatives to soldered connections. There are complex considerations for determining which method is appropriate; crimp connections are sometimes preferred for these reasons:
Easier, cheaper, or faster to reproduce reliably in large-scale production
Fewer dangerous or harmful processes involved in termination (soldered connections require aggressive cleaning, high heat, and possibly toxic solders)
Potentially superior mechanical characteristics due to strain relief and lack of solder wicking
Crimped connectors fulfill numerous uses, including termination of wires to screw terminals, blade terminals, ring/spade terminals, wire splices, or various combinations of these. A tube-shaped connector with two crimps for splicing wires in-line is called a butt splice connector.
Single-wire crimp terminals include:
Blade or quick disconnect (e.g., Faston or Lucar)
Bullet (e.g. Shur-Plug)
Butt splice
Flag tongue
Rectangular tongue
Hook tongue
Spade tongue (flanged, short spring, long spring)
Ring tongue (slotted, offset)
Multiple stud
Packard 56
Pin (SAE/J928)
Wire pin
Crimping is also a common technique to join wires to a multipin connector, such as in Molex connectors or modular connectors.
Circular connectors using crimp contacts can be classified as rear release or front release, referring to the side of the connector where the pins are anchored:
Front release contacts are released from the front (contact side) of the connector, and removed from the rear. The removal tool engages with the front portion of the contact and pushes it through to the back of the connector.
Rear release contacts are released and removed from the rear (wire side) of the connector. The removal tool releases the contacts from the rear and pulls the contact out of the retainer.
Crimp connections are used typically to attach RF connectors, such as BNC connectors, to coaxial cables quickly, as an alternative to soldered connections. Typically the male connector is crimp-fitted to a cable, and the female attached, often using soldered connections, to a panel on equipment. A special power or manual tool is used to fit the connector. Wire strippers which strip outer jacket, shield braid, and inner insulation to the correct lengths in one operation are used to prepare the cable for crimping.
Quality
A crimped connection will only be reliable if a number of criteria are met:
All strands have been deformed enough to cold-flow into the terminal body
The compression force is not too light, nor too strong
The connector body is not overly deformed
Wires must be in solid working condition, cannot have scrapes, nicks, severing or other damages
Insulation should not show any signs of pinching, pulling, fraying, discoloration, or charring
Large voids are not left inside the crimp (caused by not enough wire inside the connector)
The wire should have as many strands as possible, so that a few damaged or uninserted strands will not adversely affect the crimp density, and thus degrade the electrical and mechanical properties of the connection.
Micrographs of the crimped connections can be prepared to illustrate good and bad crimps for training and quality-assurance purposes. The assembled connection is cut in cross-section, polished, and washed in nitric acid to dissolve any copper dust that may be filling voids and giving a false indication of a good crimp.
Terminal insulation colors
Other uses
Crimping is most extensively used in metalworking. Crimping is commonly used to fix bullets in their cartridge cases, for rapid but lasting electrical connections, for securing lids on metal food cans, and for many other applications.
Bullets
Canning
Jewelry
In jewelry manufacture, crimp beads, or crimp tubes, are used to make secure joints in fine wire, such as in clasps or tie loops. A crimped lead (or other soft metal) seal is attached to lock wires used to secure fasteners in aircraft, to provide visual evidence of tampering when securing a utility meter, or as a seal on cargo containers.
Plumbing
In plumbing, there is a trend in some jurisdictions towards the use of crimped fittings to join metallic pipes, replacing the traditional soldering or "sweating" of joints. This trend is driven in part by increased restrictions or bans of processes involving open flames, which may now require costly special permits.
Sheet metal
When joining segments of tubular sheet metal pipe, such as for smoke pipes for wood stoves, downspouts for rain gutters, or for installation of ventilation ducting, one end of a tube is treated with a crimping tool to make a slip joint into the next section of duct. The joint will not be liquid-tight but will be adequate for conveying low pressure fluids. Crimp joints may be arranged to prevent accumulation of dirt.
Food
Crimping is often used around the edges of pies and filled pasta like ravioli to seal the insides by connecting the top and bottom dough layers. This can be done with fingers, a fork, or a crimping tool. A jagging iron, also known as a crimping wheel, or jagger, consists of a handle and a wheel with a wavy pattern. There are also crimping tongs.
History
The technique of soldering wires has remained common for at least a century, however crimp terminals came into use in the middle of the 20th century. In 1953, AMP Incorporated (now TE Connectivity) introduced crimp barrel terminals, and in 1957 Cannon Brothers experimented with machined contacts integrating crimp barrels. During the 1960s, several standards for crimp connectors were published, including MS3191-1, MS3191-4 and MIL-T-22520. In 2010, the predominant standard for crimp connectors changed to MIL-DTL-22520.
See also
Pliers
References
External links
Fabrication (metal)
Jewellery components
Joining | Crimp (joining) | [
"Technology"
] | 2,438 | [
"Jewellery components",
"Components"
] |
2,186,993 | https://en.wikipedia.org/wiki/Bioprocessor | A bioprocessor is a miniaturized bioreactor capable of culturing mammalian, insect and microbial cells. Bioprocessors are capable of mimicking performance of large-scale bioreactors, hence making them ideal for laboratory scale experimentation of cell culture processes. Bioprocessors are also used for concentrating bioparticles (such as cells) in bioanalytical systems. Microfluidic processes such as electrophoresis can be implemented by bioprocessors to aid in DNA isolation and purification.
References
Biochemical engineering
Biotechnology | Bioprocessor | [
"Chemistry",
"Engineering",
"Biology"
] | 118 | [
"Biological engineering",
"Chemical engineering",
"Biotechnology stubs",
"Biochemical engineering",
"Biotechnology",
"nan",
"Biochemistry"
] |
2,187,120 | https://en.wikipedia.org/wiki/TK82C | TK82C was a Sinclair ZX81 clone made by Microdigital Eletrônica Ltda., a computer company located in Brazil.
General information
The TK82C had a Zilog Z80A processor running at 3.25 MHz, 2 KB of SRAM and 8 KB of EPROM holding the BASIC interpreter. The letter C stands for "Científico", or "Scientific" in English.
The keyboard was made of layers of conductive (membrane) material and followed the Sinclair layout. The video output was sent via an RF modulator to a TV set tuned to VHF channel 3, and featured black characters on a white background. The maximum resolution was 64 x 44 pixels, based on semigraphic characters useful for games and basic images (see ZX81 character set).
The TK82C included the SLOW function, which permitted the video to be shown during processing (the prior version, the TK82, a Sinclair ZX80 clone, ran only in fast mode, so the image was not shown during processing). In reality, the SLOW function was implemented by an add-on board that was factory-mounted over the main board.
Although a ZX81 clone, the TK82C did not have the Ferranti ULA chip used in the former. Instead it was manufactured with a dozen TTL integrated circuits, which resulted in relatively high power consumption; the computer's case would become quite hot after some minutes of operation.
Data Storage
Data storage was done on audio cassette tapes at 300 bits per second, and large programs could take up to 6 minutes to load. Audio cables were supplied with the computer for connection to a regular tape recorder.
As the data encoding was entirely done by software, some hacks were made available to allow much faster transfers. Hi-fi recorders were required in order to use the higher speeds with any reliability.
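As a rough arithmetic sketch (my own, not from the article; the function name is hypothetical), the load time at 300 bit/s can be estimated from the program size, consistent with the "up to 6 minutes" figure quoted above for large programs:

```python
def load_time_minutes(program_kb: float, bits_per_second: int = 300) -> float:
    """Approximate cassette load time, ignoring leader tones and gaps."""
    total_bits = program_kb * 1024 * 8
    return total_bits / bits_per_second / 60

# A hypothetical program filling the 16 KB RAM expansion:
print(round(load_time_minutes(16.0), 1))  # ≈ 7.3 minutes
```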
Accessories
A 16 KB DRAM expansion was made available and, despite being optional, became a standard item. Soon after, a 48 KB expansion was also released, but due to pricing and the problematic data storage in cassettes, it never sold well.
The TK82C featured a DIN connector for a joystick (in reality, it was wired to the keyboard matrix); Microdigital then marketed an Atari 2600 joystick, accordingly retrofitted to match the DIN connector.
A small printer, in fact a ZX Printer clone, was long announced by Microdigital but never released.
Compatibility and Legal Issues
All software designed for the ZX81 could run on the TK82C with no problems, and vice versa. It was therefore not uncommon to find software distributed in Brazil that was nothing more than an illegitimate copy of a product for the ZX81. However, given the TK82C's popularity, a great deal of original software was developed in Brazil as well.
In 1983, Sinclair Research sued Microdigital over copyright violation because of the unauthorized cloning of its product. Due to political trends from that time, the Brazilian court in charge of the case sided with Microdigital.
Later Products
The TK82C was replaced by the TK83 (which used a ULA-like chip, as in the original ZX81) and by the TK85 (a 16 KB RAM version with a case similar to the ZX Spectrum's), more robust and with a better design.
Microdigital later produced the TK90X and TK95, which were clones of the ZX Spectrum.
Trivia
TK82C is also a designation for a copier from Kyocera.
References
External links
Microdigital TK82 (archived version)
Microdigital Eletrônica
Computer-related introductions in 1983
Goods manufactured in Brazil
Products introduced in 1982
Z80
Sinclair ZX81 clones | TK82C | [
"Technology"
] | 822 | [
"Computing stubs",
"Computer hardware stubs"
] |
2,187,251 | https://en.wikipedia.org/wiki/Decade%20%28log%20scale%29 | One decade (symbol dec) is a unit for measuring ratios on a logarithmic scale, with one decade corresponding to a ratio of 10 between two numbers.
Example: Scientific notation
When a real number like 0.007 is denoted alternatively by 7.0 × 10^−3, the number is said to be represented in scientific notation. More generally, to write a number in the form a × 10^b, where 1 ≤ a < 10 and b is an integer, is to express it in scientific notation; a is called the significand or mantissa, and b is its exponent. The numbers so expressible with an exponent equal to b span a single decade, from 10^b to 10^(b+1).
Frequency measurement
Decades are especially useful when describing the frequency response of electronic systems, such as audio amplifiers and filters.
Calculations
The factor-of-ten in a decade can be in either direction: so one decade up from 100 Hz is 1000 Hz, and one decade down is 10 Hz. The factor-of-ten is what is important, not the unit used, so 3.14 rad/s is one decade down from 31.4 rad/s.
To determine the number of decades between two frequencies (f1 and f2), use the logarithm of the ratio of the two values:
log10(f2 / f1) decades
or, using natural logarithms:
ln(f2 / f1) / ln(10) decades
How many decades is it from 15 rad/s to 150,000 rad/s?
log10(150,000 / 15) = 4 decades
How many decades is it from 3.2 GHz to 4.7 MHz?
log10(4.7 × 10^6 / 3.2 × 10^9) ≈ −2.83 decades
How many decades is one octave?
One octave is a factor of 2, so log10(2) ≈ 0.301 decades per octave (a decade is a just major third plus three octaves: 10/1 = 5/4 × 2^3)
To find out what frequency is a certain number of decades from the original frequency, multiply by appropriate powers of 10:
What is 3 decades down from 220 Hz?
220 × 10^−3 = 0.22 Hz
What is 1.5 decades up from 10 Hz?
10 × 10^1.5 ≈ 316.23 Hz
To find out the size of a step for a certain number of frequencies per decade, raise 10 to the power of the inverse of the number of steps:
What is the step size for 30 steps per decade?
10^(1/30) ≈ 1.079775, or each step is 7.9775% larger than the last.
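A minimal sketch of these calculations in Python (my own; the function names are hypothetical):

```python
import math

def decades_between(f1: float, f2: float) -> float:
    """Number of decades from f1 to f2 (negative when f2 < f1)."""
    return math.log10(f2 / f1)

def frequency_decades_away(f: float, n: float) -> float:
    """Frequency n decades away from f (negative n goes down)."""
    return f * 10 ** n

print(decades_between(15, 150_000))     # 4.0
print(frequency_decades_away(220, -3))  # ≈ 0.22
print(10 ** (1 / 30))                   # ≈ 1.0798 (step size for 30 steps/decade)
```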
Graphical representation and analysis
Decades on a logarithmic scale, rather than unit steps (steps of 1) or another linear scale, are commonly used on the horizontal axis when representing the frequency response of electronic circuits in graphical form, such as in Bode plots, since depicting large frequency ranges on a linear scale is often not practical. For example, an audio amplifier will usually have a frequency band ranging from 20 Hz to 20 kHz, and representing the entire band using a decade log scale is very convenient. Typically the graph for such a representation would begin at 1 Hz (10^0) and go up to perhaps 100 kHz (10^5), to comfortably include the full audio band on standard-sized graph paper. In the same distance on a linear scale, with 10 as the major step size, one might only get from 0 to 50.
Electronic frequency responses are often described in terms of "per decade". The example Bode plot shows a slope of −20 dB/decade in the stopband, which means that for every factor-of-ten increase in frequency (going from 10 rad/s to 100 rad/s in the figure), the gain decreases by 20 dB.
See also
Slide rule
One-third octave
Frequency level
Octave
Savart
Order of magnitude
References
Charts
Units of level | Decade (log scale) | [
"Physics",
"Mathematics"
] | 724 | [
"Physical quantities",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Units of measurement"
] |
2,187,308 | https://en.wikipedia.org/wiki/DBFS | Decibels relative to full scale (dBFS or dB FS) is a unit of measurement for amplitude levels in digital systems, such as pulse-code modulation (PCM), which have a defined maximum peak level. The unit is similar to the units dBov and decibels relative to overload (dBO).
The level of 0dBFS is assigned to the maximum possible digital level. For example, a signal that reaches 50% of the maximum level has a level of −6dBFS, which is 6dB below full scale. Conventions differ for root mean square (RMS) measurements, but all peak measurements smaller than the maximum are negative levels.
A digital signal that does not contain any samples at 0dBFS can still clip when converted to analog form due to the signal reconstruction process interpolating between samples. This can be prevented by careful digital-to-analog converter circuit design. Measurements of the true inter-sample peak levels are notated as dBTP or dB TP ("decibels true peak").
RMS levels
Since a peak measurement is not useful for qualifying the noise performance of a system, or measuring the loudness of an audio recording, for instance, RMS measurements are often used instead.
A potential for ambiguity exists when assigning a level on the dBFS scale to a waveform rather than to a specific amplitude, because some engineers follow the mathematical definition of RMS, which for sinusoidal signals is 3dB below the peak value, while others choose the reference level so that RMS and peak measurements of a sine wave produce the same result.
The unit dB FS or dBFS is defined in AES Standard AES17-1998, IEC 61606, and ITU-T Recs. P.381 and P.382, such that the RMS value of a full-scale sine wave is designated 0dB FS. This means a full-scale square wave would have an RMS value of +3dB FS. This convention is used in Wolfson and Cirrus Logic digital microphone specs, etc.
The unit dBov is defined in the ITU-T G.100.1 telephony standard such that the RMS value of a full-scale square wave is designated 0dBov. All possible dBov measurements are negative numbers, and a sine wave cannot exist at a larger RMS value than −3 dBov without clipping. This unit can be applied to both analog and digital systems. This convention is the basis for the ITU's LUFS loudness unit, and is also used in Sound Forge and Euphonix meters, and Analog Devices digital microphone specs (though referred to as "dBFS").
Dynamic range
The measured dynamic range (DR) of a digital system is the ratio of the full-scale signal level to the RMS noise floor. The theoretical minimum noise floor is caused by quantization noise. This is usually modeled as a uniform random fluctuation between −½ LSB and +½ LSB. (Only certain signals produce uniform random fluctuations, so this model is typically, but not always, accurate.)
As the dynamic range is measured relative to the RMS level of a full scale sine wave, the dynamic range and the level of this quantization noise in dBFS can both be estimated with the same formula (though with reversed sign):
DR = 20 × log10(2^n × √(3/2)) dB ≈ (6.02 n + 1.76) dB
The value of n equals the resolution of the system in bits, or the resolution of the system minus 1 bit (the measurement error). For example, a 16-bit system has a theoretical minimum noise floor of −98.09 dBFS relative to a full-scale sine wave:
20 × log10(2^16 × √(3/2)) ≈ 98.09 dB
In any real converter, dither is added to the signal before sampling. This removes the effects of non-uniform quantization error, but increases the minimum noise floor.
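A minimal sketch of the formula above (my own; the function name is hypothetical):

```python
import math

def quantization_noise_floor_dbfs(bits: int) -> float:
    """Theoretical noise floor of an ideal converter, in dBFS, using the
    convention that a full-scale sine wave is 0 dBFS (AES17)."""
    return -20 * math.log10(2 ** bits * math.sqrt(3 / 2))

print(round(quantization_noise_floor_dbfs(16), 2))  # -98.09
print(round(quantization_noise_floor_dbfs(24), 2))  # -146.26
```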
History
The phrase "dB below full scale" has appeared in print since the 1950s, and the term "dBFS" has been used since 1977.
Although the decibel (dB) is permitted for use alongside units of the International System of Units (SI), the dBFS is not.
Analog levels
dBFS is not defined for analog levels, according to standard AES-6id-2006. No single standard converts between digital and analog levels, mostly due to the differing capabilities of different equipment. The amount of oversampling also affects the conversion, with values that are too low having significant error. The conversion level is chosen as the best compromise for the typical headroom and signal-to-noise levels of the equipment in question. Examples (a small conversion sketch follows the list):
EBU R68 is used in most European countries, specifying +18dBu at 0dBFS.
In Europe, the EBU recommend that −18dBFS equates to the alignment level.
UK broadcasters, alignment level is taken as 0dBu (PPM4 or −4VU)
The American SMPTE standard defines −20dBFS as the alignment level.
European and UK calibration for post and film is −18dBFS = 0VU.
US installations use +24dBu for 0dBFS.
American and Australian Post: −20dBFS = 0VU = +4dBu.
In Japan, France, and some other countries, converters may be calibrated for +22dBu at 0dBFS.
BBC specification: −18dBFS = PPM"4" = 0dBu
German ARD and studio PPM: +6dBu = −10 (−9)dBFS; +16 (+15)dBu = 0dBFS. No VU.
Belgium VRT: 0 dB (VRT ref.) = +6dBu; −9dBFS = 0 dB (VRT ref.); 0dBFS = +15dBu.
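Under any one of these calibrations, converting a digital level to the corresponding analog level is a simple offset. A hedged sketch (my own; the function name is hypothetical):

```python
def dbfs_to_dbu(level_dbfs: float, dbu_at_full_scale: float) -> float:
    """Analog level (dBu) for a digital level (dBFS) under a calibration
    defined by the analog level corresponding to 0 dBFS."""
    return level_dbfs + dbu_at_full_scale

# EBU R68 (+18 dBu at 0 dBFS): the -18 dBFS alignment level maps to 0 dBu.
print(dbfs_to_dbu(-18, 18))  # 0.0
# US installations (+24 dBu at 0 dBFS): -20 dBFS maps to +4 dBu.
print(dbfs_to_dbu(-20, 24))  # 4.0
```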
See also
Audio bit depth
Bit rate
Full scale
References
External links
AES Pro Audio Reference definition of dBFS
dBFS – Sweetwater glossary
Digital audio
Logarithmic scales of measurement | DBFS | [
"Physics",
"Mathematics"
] | 1,223 | [
"Quantity",
"Logarithmic scales of measurement",
"Physical quantities"
] |
2,187,814 | https://en.wikipedia.org/wiki/Universal%20Satellites%20Automatic%20Location%20System | Universal Satellites Automatic Location System (USALS), also known (unofficially) as DiSEqC 1.3, Go X or Go to XX is a satellite dish motor protocol that automatically creates a list of available satellite positions in a motorised satellite dish setup. It is used in conjunction with the DiSEqC 1.2 protocol. It was developed by STAB, an Italian motor manufacturer, who still make the majority of USALS compatible motors.
Software on the satellite receiver (or external positioner) calculates the positions of all available satellites from an initial location (input by the user), given as the site's latitude and longitude. Calculated positions can differ by ±0.1 degrees from the true offset. The adjustment is made automatically and does not require prior technical knowledge.
Compared to DiSEqC 1.2, it is not necessary to manually search for and store every known satellite position. Pointing to one known satellite position (for example 19.2°E) is enough; this position acts as the central point, and the USALS system then calculates the positions of the visible satellites relative to it.
Receivers are aligned to the satellite position closest to due south of their location in the northern hemisphere, or closest to due north in the southern hemisphere.
As it is not an open standard, for a receiver to carry the USALS logo it must undergo a certification test by STAB's laboratories. If successful, the manufacturer can include a USALS settings entry in its own menu, as well as place the logo on the front of the unit. However, a large number of manufacturers of both receivers and motors provide compatible modes which have not received certification, leading to the use of unofficial terms.
USALS is a program, not a communication protocol. It calculates the dish's angular position from the dish's latitude/longitude and the position of the satellite in geostationary orbit, and then sends the angular position to the positioner using the DiSEqC 1.2 protocol. The calculation itself is straightforward geometry.
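As an illustration of that geometry, here is a sketch using the standard look-angle formulas for a geostationary satellite (my own simplified version, not USALS's certified implementation; it assumes a spherical Earth and a northern-hemisphere site):

```python
import math

EARTH_RADIUS_KM = 6378.137
GEO_ORBIT_RADIUS_KM = 42164.0  # distance of a geostationary satellite from Earth's center

def look_angles(site_lat_deg: float, site_lon_deg: float, sat_lon_deg: float):
    """Azimuth and elevation (degrees) from a site to a geostationary satellite."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(sat_lon_deg - site_lon_deg)
    r = EARTH_RADIUS_KM / GEO_ORBIT_RADIUS_KM
    # Cosine of the central angle between the site and the sub-satellite point.
    cos_gamma = math.cos(lat) * math.cos(dlon)
    elevation = math.degrees(math.atan2(cos_gamma - r, math.sqrt(1.0 - cos_gamma ** 2)))
    # Azimuth measured clockwise from true north; valid for northern-hemisphere sites.
    azimuth = 180.0 - math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))
    return azimuth % 360.0, elevation

# A hypothetical site at 48°N, 11°E pointing at the 19.2°E orbital position:
az, el = look_angles(48.0, 11.0, 19.2)
print(round(az, 1), round(el, 1))  # ≈ 169.0 34.3
```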
See also
Antenna tracking system
Automatic Tracking Satellite Dish
Satellite finder
DiSEqC = Digital Satellite Equipment Control
Duo LNB
Monoblock LNB
SAT>IP ip based approach
References
External links
STAB Italy official website
What is DiSEqC?
Satellite broadcasting
Television technology | Universal Satellites Automatic Location System | [
"Technology",
"Engineering"
] | 455 | [
"Information and communications technology",
"Telecommunications engineering",
"Television technology",
"Satellite broadcasting"
] |
2,187,847 | https://en.wikipedia.org/wiki/Complex%20Lie%20group | In geometry, a complex Lie group is a Lie group over the complex numbers; i.e., it is a complex-analytic manifold that is also a group in such a way is holomorphic. Basic examples are , the general linear groups over the complex numbers. A connected compact complex Lie group is precisely a complex torus (not to be confused with the complex Lie group ). Any finite group may be given the structure of a complex Lie group. A complex semisimple Lie group is a linear algebraic group.
The Lie algebra of a complex Lie group is a complex Lie algebra.
Examples
A finite-dimensional vector space over the complex numbers (in particular, a complex Lie algebra) is a complex Lie group in an obvious way.
A connected compact complex Lie group A of dimension g is of the form ℂ^g/L, a complex torus, where L is a discrete subgroup of rank 2g. Indeed, its Lie algebra can be shown to be abelian, and then the exponential map exp: Lie(A) → A is a surjective morphism of complex Lie groups, showing A is of the form described.
The exponential map exp: ℂ → ℂ*, z ↦ e^z, is an example of a surjective homomorphism of complex Lie groups that does not come from a morphism of algebraic groups. Since ℂ* = GL(1, ℂ), this is also an example of a representation of a complex Lie group that is not algebraic.
Let X be a compact complex manifold. Then, analogous to the real case, the automorphism group Aut(X) is a complex Lie group whose Lie algebra is the space Γ(X, TX) of holomorphic vector fields on X.
Let K be a connected compact Lie group. Then there exists a unique connected complex Lie group G such that (i) Lie(G) = Lie(K) ⊗ ℂ, and (ii) K is a maximal compact subgroup of G. It is called the complexification of K. For example, GL(n, ℂ) is the complexification of the unitary group U(n). If K is acting on a compact Kähler manifold X, then the action of K extends to that of G.
Linear algebraic group associated to a complex semisimple Lie group
Let G be a complex semisimple Lie group. Then G admits a natural structure of a linear algebraic group as follows: let A be the ring of holomorphic functions f on G such that the translates of f under G span a finite-dimensional vector space inside the ring of holomorphic functions on G (here G acts by left translation: (g · f)(h) = f(g⁻¹h)). Then Spec A is the linear algebraic group that, when viewed as a complex manifold, is the original G. More concretely, choose a faithful representation ρ: G → GL(V) of G. Then ρ(G) is Zariski-closed in GL(V).
References
Lie groups
Manifolds | Complex Lie group | [
"Mathematics"
] | 510 | [
"Lie groups",
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Algebraic structures",
"Geometry",
"Geometry stubs",
"Manifolds"
] |
2,187,853 | https://en.wikipedia.org/wiki/Gutta | A gutta (Latin pl. guttae, "drops") is a small water-repelling, cone-shaped projection used near the top of the architrave of the Doric order in classical architecture. At the top of the architrave blocks, a row of six guttae below the narrow projection of the taenia (fillet) formed an element called a regula. A regula was aligned under each triglyph of the Doric frieze. In addition, the underside of the projecting geison above the frieze had rectangular protrusions termed mutules that each had three rows of six guttae. These mutules were aligned above each triglyph and each metope.
It is thought that the guttae were a skeuomorphic representation of the pegs used in the construction of the wooden structures that preceded the familiar Greek architecture in stone. However, they have some functionality: water drips off them, away from the edge of the building.
Outside the Doric
In the strict tradition of classical architecture, a set of guttae always goes with a triglyph above (and vice versa), and the pair of features is found only in entablatures using the Doric order. In Renaissance and later architecture these strict conventions are sometimes abandoned, and guttae and triglyphs, alone or together, may be used somewhat randomly as ornaments. The Doric order of the Villa Lante al Gianicolo in Rome, an early work of Giulio Romano (1520–21), has a narrow "simplified entablature" with guttae but no triglyphs. The stone fireplace in the Oval Office has Ionic columns at the sides, but the decorative wreath in the centre of the lintel has sets of guttae below (only five to a set). The Baroque Černín Palace in Prague (1660s) has triglyphs and guttae as ornaments at the top of arches, in a facade using an eclectic Ionic order.
Gallery
Notes
References
Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series,
Architectural elements
Columns and entablature | Gutta | [
"Technology",
"Engineering"
] | 455 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
2,187,854 | https://en.wikipedia.org/wiki/Geochron | Geochron, Inc. is an American company founded in 1965 by James Kilburg, an inventor from Luxembourg. It is also the name of their flagship product, the Geochron Global Time Indicator. The Geochron was the first world clock to display day and night on a world map, showing the sinh "bell curve" of light and darkness. The Geochron employs an intricate analog clockwork mechanism for its display, that shows the month, date, day of the week, hours and minutes, the areas of the world currently experiencing day and night, and the meridian passage of the sun. The main display is dominated by a world map, with time zones prominently indicated. At the top of the map are arrows corresponding to each time zone. As each day progresses, the map is scrolled from left to right by gear mechanisms, and the arrows for each time zone shift their positions relative to a stationary band fixed at the top that has a horizontal series of numbers representing hours. The viewer may read the time by seeing what number the time zone's arrow is currently pointing to. The map is backlit, and a mechanism behind the map defines well-lit and shaded areas that are also stationary relative to the movement of the map. In this way, as time progresses, different areas are shown to be experiencing daytime and night. The center of the lit area lines up with the 12 noon on the stationary time strip. There is also a day-and-month readout below the map, and a minutes readout above.
Each Geochron is assembled upon demand, with prices starting at above $1,500. President Ronald Reagan presented a Geochron to Mikhail Gorbachev in 1985 as an "example of American ingenuity." In the mid-1980s, the company was selling about 75 clocks per month, increasing to around 200 per month during the holiday season. It had 16 employees in 1987.
The Hubble Space Telescope control center at Goddard Space Flight Center uses a Geochron in its day-to-day operations. The European Space Agency displays a Geochron in their command center. The Smithsonian Institution has called the Geochron the "last significant contribution in timekeeping." The world clock was featured in motion pictures such as The Hunt for Red October, Patriot Games, Clear and Present Danger and Three Days of the Condor.
Founder James Kilburg died in 1985. His son, James M. Kilburg, had purchased one-third of the company from his father a short time earlier. Bob Williamson acquired the other two-thirds of the company, and he and the younger Kilburg became partners in managing the business.
After many years in Redwood City, California, in 2007 Geochron Enterprises was sold and moved to Oregon City, Oregon, and became Geochron, Inc. It was sold again in 2015, to Patrick Bolan. The Geochron World Clock has been updated under new management to include new mapsets, lighting options, and new magnetic stepper motors. Geochron World Clocks are still built and restored by hand and manufactured at a small machine shop in Oregon City. In September 2019, the company announced that it was preparing to move from Oregon City to Estacada, Oregon.
In 2018, Geochron released an electronic version of its mechanical clock, optimized for 4K resolution displays. It includes many features that were unavailable prior to the internet, including satellite tracking and demographic layers above different mapsets. All Geochron mapsets use the Mercator projection. As of October 2019, the company continues to sell the mechanical version, in addition to the digital version.
References
External links
Official Website
See also
List of satellite pass predictors
Clocks
Clock brands
Manufacturing companies based in Oregon
American companies established in 1965
Manufacturing companies established in 1965 | Geochron | [
"Physics",
"Technology",
"Engineering"
] | 776 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
2,188,372 | https://en.wikipedia.org/wiki/Wobble%20frequency | Optical discs, with the exception of DVD-RAM, have their data encoded on a single spiral, or a groove, which covers the surface of the disc. In the case of recordable media, this spiral contains a slight sinusoidal deviation from a perfect spiral. The period of this sine curve corresponds to the wobble frequency. The wobble frequency is commonly used as a synchronization source to achieve constant linear velocity while writing a disc, but has other uses as well depending on the type of disc. The frequencies quoted all assume that the disc is being written at the '1x' speed; they scale proportionally higher for faster writing speeds (see the sketch after the list below).
CD-R and CD-RW discs use a frequency modulated wobble of 22.05 kHz to encode information, such as the Absolute Time in Pregroove (ATIP), into the groove.
DVD-R and DVD-RW have a constant wobble frequency of 140.6 kHz relying on data 'pits' beside the groove to convey information (Land pre-pit).
DVD+R and DVD+RW have a constant wobble frequency of 817.4 kHz, but encode their addressing information by periodically inverting the phase of the wobble signal (bi-phase modulation) to encode an exact address of the location on the spiral track (Address in Pregroove). The practical upshot of this arrangement is that the recording drive can navigate to an exact location on the DVD+R(W) disc whereas it cannot do so with the DVD-R(W).
BD-R and BD-RE discs utilise Address in Pregroove.
HD DVD-R and HD DVD-RW use the land pre-pit system of the DVD-R(W).
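Since the wobble scales linearly with write speed, the 1x figures above translate directly to any speed. A minimal sketch in Python follows; the media labels and function name are chosen here for illustration and are not part of any disc standard.

```python
# 1x wobble frequencies in kHz, as quoted in the text above.
BASE_WOBBLE_KHZ = {
    "CD-R/RW": 22.05,    # FM-modulated; carries ATIP
    "DVD-R/RW": 140.6,   # constant; addressing via land pre-pits
    "DVD+R/RW": 817.4,   # phase-modulated; carries Address in Pregroove
}

def wobble_khz(media: str, speed: float) -> float:
    """Wobble frequency in kHz for a given medium at an Nx write speed."""
    return BASE_WOBBLE_KHZ[media] * speed

print(wobble_khz("DVD+R/RW", 8))  # 6539.2 kHz at 8x
```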
References
Compact disc
DVD
Blu-ray Disc
Optical computer storage media | Wobble frequency | [
"Technology"
] | 384 | [
"Computing stubs",
"Computer hardware stubs"
] |
2,188,477 | https://en.wikipedia.org/wiki/DEA%20list%20of%20chemicals | The United States Drug Enforcement Administration (DEA) maintains lists regarding the classification of illicit drugs (see DEA Schedules). It also maintains List I of chemicals and List II of chemicals, which contain chemicals that are used to manufacture the controlled substances/illicit drugs. The list is designated within the Controlled Substances Act but can be modified by the U.S. Attorney General as illegal manufacturing practices change.
Although the list is controlled by the Attorney General, the list is considered a DEA list because the DEA publishes and enforces the list.
Suppliers of these products are subject to regulation and control measures:
List I chemicals
These chemicals are designated as those that are used in the manufacture of the controlled substances and are important to the manufacture of the substances:
List II chemicals
These chemicals are designated as those that are used in the manufacture of controlled substances:
Special Surveillance List
Chemicals
All listed chemicals as specified in 21 CFR 1310.02 (a) or (b). This includes supplements which contain a listed chemical, regardless of their dosage form or packaging and regardless of whether the chemical mixture, drug product or dietary supplement is exempt from regulatory controls. For each chemical, its illicit manufacturing use is given in parentheses. Some Special Surveillance List chemicals do not have an exclusive manufacturing use for a specific illicit drug but rather have a broad range of uses in both legitimate and illicit manufacturing operations.
Equipment
The equipment list:
Hydrogenators
Tableting machines, including punches and dies
Encapsulating machines
22-liter heating mantles
References
External links
DEA Controlled Substance Schedules
See also
Drug precursors
European law on drug precursors
Combat Methamphetamine Epidemic Act of 2005
Chemical Diversion and Trafficking Act
Drug Enforcement Administration
Chemistry-related lists
Drug control law in the United States
Regulation of chemicals
Regulation in the United States | DEA list of chemicals | [
"Chemistry"
] | 355 | [
"nan"
] |
2,188,689 | https://en.wikipedia.org/wiki/Weather%20front | A weather front is a boundary separating air masses for which several characteristics differ, such as air density, wind, temperature, and humidity. Disturbed and unstable weather due to these differences often arises along the boundary. For instance, cold fronts can bring bands of thunderstorms and cumulonimbus precipitation or be preceded by squall lines, while warm fronts are usually preceded by stratiform precipitation and fog. In summer, subtler humidity gradients known as dry lines can trigger severe weather. Some fronts produce no precipitation and little cloudiness, although there is invariably a wind shift.
Cold fronts generally move from west to east, whereas warm fronts move poleward, although any direction is possible. Occluded fronts are a hybrid merge of the two, and stationary fronts are stalled in their motion. Cold fronts and cold occlusions move faster than warm fronts and warm occlusions because the dense air behind them can lift as well as push the warmer air. In addition to atmospheric conditions, mountains and bodies of water can affect the movement and properties of fronts. When the density contrast has diminished between the air masses, for instance after flowing out over a uniformly warm ocean, the front can degenerate into a mere line which separates regions of differing wind velocity, known as a shear line. This is most common over the open ocean.
Bergeron classification of air masses
The Bergeron classification is the most widely accepted form of air mass classification. Fronts separate air masses of different types or origins, and are located along troughs of lower pressure. Air mass classifications are indicated by three letters (a decoding sketch follows this list):
The first letter describes its moisture properties, with
c used for continental air masses (dry) and
m used for maritime air masses (moist).
The second letter describes the thermal characteristic of its source region:
T for Tropical,
P for Polar,
A for Arctic or Antarctic,
M for Monsoon,
E for Equatorial, and
S for Superior air (dry air formed by significant upward lift in the atmosphere).
The third letter designates the stability of the atmosphere; it is labeled:
k if the air mass is colder than the ground below it.
w if the air mass is warmer than the ground below it.
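As a quick aid to reading these codes, here is a minimal decoding sketch in Python; the function name and descriptive wording are invented for illustration and are not part of the Bergeron scheme itself.

```python
# Minimal sketch: expand a Bergeron air-mass code such as "mPk".
MOISTURE = {"c": "continental (dry)", "m": "maritime (moist)"}
SOURCE = {"T": "Tropical", "P": "Polar", "A": "Arctic/Antarctic",
          "M": "Monsoon", "E": "Equatorial", "S": "Superior (dry, lifted aloft)"}
STABILITY = {"k": "colder than the ground below (unstable)",
             "w": "warmer than the ground below (stable)"}

def decode_bergeron(code: str) -> str:
    """Translate a two- or three-letter Bergeron code into plain English."""
    parts = [MOISTURE.get(code[0], "?"), SOURCE.get(code[1], "?")]
    if len(code) > 2:  # the stability letter is optional
        parts.append(STABILITY.get(code[2], "?"))
    return ", ".join(parts)

print(decode_bergeron("mPk"))
# -> maritime (moist), Polar, colder than the ground below (unstable)
```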
Surface weather analysis
A surface weather analysis is a special type of weather map which provides a top view of weather elements over a geographical area at a specified time based on information from ground-based weather stations. Weather maps are created by detecting, plotting and tracing the values of relevant quantities such as sea-level pressure, temperature, and cloud cover onto a geographical map to help find synoptic scale features such as weather fronts. Surface weather analyses have special symbols which show frontal systems, cloud cover, precipitation, or other important information. For example, an H may represent a high pressure area, implying fair or clear weather. An L on the other hand may represent low pressure, which frequently accompanies precipitation and storms. Low pressure also creates surface winds deriving from high pressure zones and vice versa. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also to depict the present weather at various locations on the weather map. In addition, areas of precipitation help determine the frontal type and location.
Types
There are two different terms used within meteorology to describe weather around a frontal zone. The term "anafront" describes boundaries which show instability, meaning air rises rapidly along and over the boundary to cause significant weather changes and heavy precipitation. A "katafront" is weaker, bringing smaller changes in temperature and moisture, as well as limited rainfall.
Cold front
A cold front is located along and on the bounds of the warm side of a tightly packed temperature gradient. On surface analysis charts, this temperature gradient is visible in isotherms and can sometimes also be identified using isobars since cold fronts often align with a surface trough. On weather maps, the surface position of the cold front is marked by a blue line with triangles pointing in the direction where cold air travels and it is placed at the leading edge of the cooler air mass. Cold fronts often bring rain, and sometimes heavy thunderstorms as well. Cold fronts can produce sharper and more intense changes in weather and move at a rate that is up to twice as fast as warm fronts, since cold air is more dense than warm air, lifting as well as pushing the warm air preceding the boundary. The lifting motion often creates a narrow line of showers and thunderstorms if enough humidity is present as the lifted moist warm air condenses. The concept of colder, dense air "wedging" under the less dense warmer air is too simplistic, as the upward motion is really part of a maintenance process for geostrophic balance on the rotating Earth in response to frontogenesis.
Warm front
Warm fronts are at the leading edge of a homogeneous advancing warm air mass, which is located on the equatorward edge of the gradient in isotherms, and lie within broader troughs of low pressure than cold fronts. A warm front moves more slowly than the cold front which usually follows because cold air is denser and harder to lift from the Earth's surface.
This also forces temperature differences across warm fronts to be broader in scale. Clouds appearing ahead of the warm front are mostly stratiform, and rainfall more gradually increases as the front approaches. Fog can also occur preceding a warm frontal passage. Clearing and warming is usually rapid after frontal passage. If the warm air mass is unstable, thunderstorms may be embedded among the stratiform clouds ahead of the front, and after frontal passage thundershowers may still continue. On weather maps, the surface location of a warm front is marked with a red line of semicircles pointing in the direction the air mass is travelling.
Occluded front
An occluded front is formed when a cold front overtakes a warm front, and usually forms around mature low-pressure areas, including cyclones. The cold and warm fronts curve naturally poleward into the point of occlusion, which is also known as the triple point. It lies within a sharp trough, but the air mass behind the boundary can be either warm or cold. In a cold occlusion, the air mass overtaking the warm front is cooler than the cold air mass receding from the warm front and plows under both air masses. In a warm occlusion, the cold air mass overtaking the warm front is warmer than the cold air mass receding from the warm front and rides over the colder air while lifting the warm air.
A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is also associated with a drying of the air mass. Within the occlusion of the front, a circulation of air brings warm air upward and sends drafts of cold air downward, or vice versa depending on the type of occlusion the front is experiencing. Precipitations and clouds are associated with the trowal, the projection on the Earth's surface of the tongue of warm air aloft formed during the occlusion process of the depression or storm.
Occluded fronts are indicated on a weather map by a purple line with alternating half-circles and triangles pointing in direction of travel. The trowal is indicated by a series of blue and red junction lines.
Warm sector
The warm sector is a near-surface air mass in between the warm front and the cold front, usually found on the equatorward side of an extratropical cyclone. With its warm and humid characteristics, this air is susceptive to convective instability and can sustain thunderstorms, especially if lifted by the advancing cold front.
Stationary front
A stationary front is a non-moving (or stalled) boundary between two air masses, neither of which is strong enough to replace the other. They tend to remain essentially in the same area for extended periods of time, especially when winds blow parallel to the boundary; they usually move in waves but not persistently. There is normally a broad temperature gradient behind the boundary with more widely spaced isotherm packing.
A wide variety of weather can be found along a stationary front, but usually clouds and prolonged precipitation are found there. Stationary fronts either dissipate after several days or devolve into shear lines, but they can transform into a cold or warm front if the conditions aloft change. Stationary fronts are marked on weather maps with alternating red half-circles and blue spikes pointing opposite to each other, indicating no significant movement.
When stationary fronts become smaller in scale and stabilize in temperature, degenerating to a narrow zone where wind direction changes significantly over a relatively short distance, they become known as shearlines. A shearline is depicted as a line of red dots and dashes. Stationary fronts may bring light snow or rain for a long period of time.
Dry line
A similar phenomenon to a weather front is the dry line, which is the boundary between air masses with significant moisture differences instead of temperature. When the westerlies increase on the north side of surface highs, areas of lowered pressure will form downwind of north–south oriented mountain chains, leading to the formation of a lee trough. Near the surface during daylight hours, warm moist air is denser than dry air of greater temperature, and thus the warm moist air wedges under the drier air like a cold front. At higher altitudes, the warm moist air is less dense than the cooler dry air and the boundary slope reverses. In the vicinity of the reversal aloft, severe weather is possible, especially when an occlusion or triple point is formed with a cold front. A weaker form of the dry line seen more commonly is the lee trough, which displays weaker differences in moisture. When moisture pools along the boundary during the warm season, it can be the focus of diurnal thunderstorms.
The dry line may occur anywhere on earth in regions intermediate between desert areas and warm seas. The southern plains west of the Mississippi River in the United States are a particularly favored location. The dry line normally moves eastward during the day and westward at night. A dry line is depicted on National Weather Service (NWS) surface analyses as an orange line with scallops facing into the moist sector. Dry lines are one of the few surface fronts where the pips indicated do not necessarily reflect the direction of motion.
Squall line
Organized areas of thunderstorm activity not only reinforce pre-existing frontal zones, but can outrun actively existing cold fronts in a pattern where the upper level jet splits apart into two streams, with the resultant Mesoscale Convective System (MCS) forming at the point of the upper level split in the wind pattern running southeast into the warm sector parallel to low-level thickness lines. When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge of the significant wind shift and pressure rise. Even weaker and less organized areas of thunderstorms lead to locally cooler air and higher pressures, and outflow boundaries exist ahead of this type of activity, which can act as foci for additional thunderstorm activity later in the day.
These features are often depicted in the warm season across the United States on surface analyses and lie within surface troughs. If outflow boundaries or squall lines form over arid regions, a haboob may result. Squall lines are depicted on NWS surface analyses as an alternating pattern of two red dots and a dash labelled SQLN or squall line, while outflow boundaries are depicted as troughs with a label of outflow boundary.
Precipitation produced
Fronts are the principal cause of significant weather. Convective precipitation (showers, thundershowers, heavy rain and related unstable weather) is caused by air being lifted and condensing into clouds by the movement of the cold front or cold occlusion under a mass of warmer, moist air. If the temperature differences of the two air masses involved are large and the turbulence is extreme because of wind shear and the presence of a strong jet stream, "roll clouds" and tornadoes may occur.
In the warm season, lee troughs, breezes, outflow boundaries and occlusions can lead to convection if enough moisture is available. Orographic precipitation is precipitation created through the lifting action of air due to air masses moving over terrain such as mountains and hills, which is most common behind cold fronts that move into mountainous areas. It may sometimes occur in advance of warm fronts moving northward to the east of mountainous terrain. However, precipitation along warm fronts is relatively steady, as in light rain or drizzle. Fog, sometimes extensive and dense, often occurs in pre-warm-frontal areas. However, not all fronts produce precipitation or even clouds, because moisture must be present in the air mass which is being lifted.
Movement
Fronts are generally guided by winds aloft, but do not move as quickly. Cold fronts and occluded fronts in the Northern Hemisphere usually travel from the northwest to southeast, while warm fronts move more poleward with time. In the Northern Hemisphere a warm front moves from southwest to northeast. In the Southern Hemisphere, the reverse is true; a cold or occluded front usually moves from southwest to northeast, and a warm front moves from northwest to southeast. Movement is largely caused by the pressure gradient force (horizontal differences in atmospheric pressure) and the Coriolis effect, which is caused by Earth's spinning about its axis. Frontal zones can be slowed by geographic features like mountains and large bodies of warm water.
See also
Anticyclonic storm
Atmosphere of Earth
Atmospheric circulation
Cold front
Cyclogenesis
Extratropical cyclone
Hadley cell
Norwegian cyclone model
Surface weather analysis
Trough (meteorology)
Warm front
References
Further reading
External links
Meteorological phenomena
Synoptic meteorology and weather | Weather front | [
"Physics"
] | 2,824 | [
"Meteorological phenomena",
"Physical phenomena",
"Earth phenomena"
] |
2,188,925 | https://en.wikipedia.org/wiki/Obsidian%20hydration%20dating | Obsidian hydration dating (OHD) is a geochemical method of determining the age, in either absolute or relative terms, of an artifact made of obsidian.
Obsidian is a volcanic glass that was used by prehistoric people as a raw material in the manufacture of stone tools such as projectile points, knives, or other cutting tools through knapping, or breaking off pieces in a controlled manner, such as pressure flaking.
Obsidian exhibits the property of mineral hydration and absorbs water when exposed to air at a well-defined rate. When an unworked nodule of obsidian is initially fractured, there is typically less than 1% water present. Over time, water slowly diffuses into the artifact forming a narrow "band", "rim", or "rind" that can be seen and measured with many different techniques such as a high-power microscope with 40–80 power magnification, depth profiling with SIMS (secondary ion mass spectrometry), and IR-PAS (infrared photoacoustic spectroscopy). In order to use obsidian hydration for absolute dating, the conditions that the sample has been exposed to and its origin must be understood or compared to samples of a known age (e.g. as a result of radiocarbon dating of associated materials).
History
Obsidian hydration dating was introduced in 1960 by Irving Friedman and Robert Smith of the U.S. Geological Survey. Their initial work focused on obsidians from archaeological sites in western North America.
The use of Secondary ion mass spectrometry (SIMS) in the measurement of obsidian hydration dating was introduced by two independent research teams in 2002.
Today the technique is applied extensively by archaeologists to date prehistoric sites in California and the Great Basin of North America. It has also been applied in South America, the Middle East, the Pacific Islands (including New Zealand), and the Mediterranean Basin.
Techniques
Conventional procedure
To measure the hydration band, a small slice of material is typically cut from an artifact. This sample is ground down to about 30 micrometers thick and mounted on a petrographic slide (this is called a thin section). The hydration rind is then measured under a high-power microscope outfitted with some method for measuring distance, typically in tenths of micrometers. The technician measures the microscopic amount of water absorbed on freshly broken surfaces. The principle behind obsidian hydration dating is simple: the longer the artifact surface has been exposed, the thicker the hydration band will be.
Secondary ion mass spectrometry (SIMS) procedure
In case of measuring the hydration rim using the depth profiling ability of the secondary ion mass spectrometry technique, the sample is mounted on a holder without any preparation or cutting. This method of measurement is non-destructive.
There are two general SIMS modes: static mode and dynamic mode, depending on the primary ion current density, and three different types of mass spectrometers: magnetic sector, quadrupole and time-of-flight (TOF).
Any mass-spectrometer can work in static mode (very low ion current, a top mono-atomic layer analysis), and dynamic mode (a high ion current density, in-depth analysis).
Although relatively infrequent, the use of SIMS in obsidian surface investigations has produced great progress in OHD dating. SIMS instruments fall into four categories according to their operation: static, dynamic, quadrupole, and time-of-flight (TOF). In essence it is a technique with a large resolution on a plethora of chemical elements and molecular structures in an essentially non-destructive manner. An approach to OHD with a completely new rationale suggests that refinement of the technique is possible in a manner which improves both its accuracy and precision and potentially expands the utility by generating reliable chronological data. Anovitz et al. presented a model which relied solely on compositionally-dependent diffusion, following numerical solutions (finite difference (FD) or finite element) elaborating on the H+ profile acquired by SIMS. A test of the model followed, using results from Mount 65, Chalco in Mexico, by Riciputi et al. This technique used numerical calculation to model the formation of the entire diffusion profile as a function of time and fitted the derived curve to the hydrogen profile. The FD equations are based on a number of assumptions about the behavior of water as it diffused into the glass and characteristic points of the SIMS H+ diffusion profile.
In Rhodes, Greece, under the direction of Ioannis Liritzis, a dating approach was developed that is based on modeling the S-like hydrogen profile by SIMS, following Fick's diffusion law, and an understanding of the surface saturation layer. In fact, the saturation layer forms on the surface up to a certain depth depending on factors that include the kinetics of the diffusion mechanism for the water molecules, the specific chemical structure of obsidian, as well as the external conditions affecting diffusion (temperature, relative humidity, and pressure). Together these factors result in the formation of an approximately constant boundary concentration value in the external surface layer. Using the end product of diffusion, a phenomenological model has been developed, based on certain initial and boundary conditions and appropriate physicochemical mechanisms, that expresses the H2O concentration versus depth profile as a diffusion/time equation.
This latest advance, the novel secondary ion mass spectrometry–surface saturation (SIMS-SS), thus, involves modelling the hydrogen concentration profile of the surface versus depth, whereas the age determination is reached via equations describing the diffusion process, while topographical effects have been confirmed and monitored through atomic force microscopy.
Limitations
Several factors complicate simple correlation of obsidian hydration band thickness with absolute age. Temperature is known to speed up the hydration process. Thus, artifacts exposed to higher temperatures, for example by being at lower elevation, seem to hydrate faster. As well, obsidian chemistry, including the intrinsic water content, seems to affect the rate of hydration. Once an archeologist can control for the geochemical signature of the obsidian (e.g., the "source") and temperature (usually approximated using an "effective hydration temperature" or EHT coefficient), he or she may be able to date the artifact using the obsidian hydration technique. Water vapor pressure may also affect the rate of obsidian hydration.
The reliability of the method based on Friedman's empirical age equation (x² = kt, where x is the thickness of the hydration rim, k is the diffusion coefficient, and t is the time) has been questioned on several grounds regarding the temperature dependence, the square-root-of-time dependence, and the determination of the diffusion rate per sample and per site, despite a number of successful applications of the procedure. The SIMS-SS age calculation procedure is separated into two major steps. The first step concerns the calculation of a 3rd-order fitting polynomial of the SIMS profile (eq. 1). The second stage regards the determination of the saturation layer, i.e. its depth and concentration. The whole computing process is embedded in stand-alone software created in the Matlab (version 7.0.1) package, with a graphical user interface, executable under Windows XP. Thus, the SIMS-SS age equation in years before present is given in eq. 2:
Eq. 1 Fitting polynomial of the SIMS profile
Eq. 2 The SIMS-SS age equation in years before present
where Ci is the intrinsic concentration of water, Cs is the saturation concentration, dC/dx is the concentration gradient at depth x = 0, k is derived from a family of Crank's theoretical diffusion curves, and Ds,eff is an effective diffusion coefficient (eq. 3) which relates the inverse gradient of the fit polynomial to well-dated samples:
Ds,eff = aDs + b/(10²²·Ds) = 8.051×10⁻⁶·Ds + 0.999/(10²²·Ds), Eq. 3
where Ds = (1/(dC/dx))×10⁻¹¹, assuming a constant flux taken as unity. Eq. (2) and the assumption of unity remain matters of further investigation.
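Friedman's empirical relation above reduces to a one-line computation once a rim thickness and a hydration rate are in hand. The following is a minimal sketch in Python; the rim thickness and rate constant are illustrative values, not data from this article.

```python
def friedman_age(rim_um: float, k_um2_per_ka: float) -> float:
    """Friedman's x^2 = k*t rearranged as t = x^2 / k.

    rim_um: hydration rim thickness in micrometers.
    k_um2_per_ka: hydration rate in square micrometers per 1,000 years.
    Returns the age in thousands of years (ka).
    """
    return rim_um ** 2 / k_um2_per_ka

# Illustrative only: a 4 um rim with k = 10 um^2/ka gives 1.6 ka.
print(friedman_age(4.0, 10.0))
```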
Several commercial companies and university laboratories provide obsidian hydration services.
See also
Dating methodology (archaeology)
References
Citations
General references
External links
National Park Service page describing Obsidian hydration
Geochemistry
Dating methodologies in archaeology
American inventions
hydration dating | Obsidian hydration dating | [
"Chemistry"
] | 1,732 | [
"nan"
] |
2,188,998 | https://en.wikipedia.org/wiki/Phycomyces%20blakesleeanus | Phycomyces blakesleeanus is a filamentous fungus in the order Mucorales of the phylum Zygomycota or subphylum Mucoromycotina. The spore-bearing sporangiophores of Phycomyces are very sensitive to different environmental signals including light, gravity, wind, chemicals, and adjacent objects. They exhibit phototropic growth, and most Phycomyces research has focused on sporangiophore photobiology, such as phototropism and photomecism ('light growth response'). Metabolic, developmental, and photoresponse mutants have been isolated, some of which have been genetically mapped. At least ten different genes (named madA through madJ) are required for phototropism. The madA gene encodes a protein related to the White Collar-1 class of photoreceptors that are present in other fungi, while madB encodes a protein related to the White Collar-2 protein, which physically binds to White Collar-1 to participate in the responses to light.
Phycomyces also exhibits an avoidance response, in which the growing sporangiophore avoids solid objects in its path, bending away from them without touching them, and then continuing to grow upward again. This response is believed to result from an unidentified "avoidance gas" that is emitted by the growing zone of the sporangiophore. This gas would concentrate in the airspace between the Phycomyces and the object. This higher concentration would be detected by the side of the sporangiophore's growing zone, which would grow faster, causing the sporangiophore to bend away.
Phycomyces blakesleeanus became the primary organism of research of the Nobel laureate Max Delbrück starting in the 1950s when Delbrück decided to switch from research on bacteriophage and bacteria to P. blakesleeanus.
A genetic linkage map was developed for P. blakesleeanus. This genetic map was constructed from 121 progeny of a cross between two wild-type isolates and involved 134 markers. The markers were mostly PCR-based restriction fragment length polymorphisms. Zygospores are the sexual structures of P. blakesleeanus in which the diploid zygote is formed and meiosis is presumed to take place. The data from this cross provided supporting evidence for meiosis during zygospore development.
References
External links
Phycomyces at Zygomycetes.org
Phycomyces blakesleeanus genome sequencing project (for strain NRRL1555)
Phycomyces strains at the FGSC
Zygomycota
Fungi described in 1925
Fungus species | Phycomyces blakesleeanus | [
"Biology"
] | 571 | [
"Fungi",
"Fungus species"
] |
2,189,362 | https://en.wikipedia.org/wiki/Phosphorus%20tribromide%20%28data%20page%29 | This page provides supplementary chemical data on phosphorus tribromide.
Material Safety Data Sheet
External MSDS sheets:
Fisher MSDS
Aldrich MSDS
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Phosphorus tribromide (data page) | [
"Chemistry"
] | 49 | [
"Chemical data pages",
"nan"
] |
2,189,384 | https://en.wikipedia.org/wiki/Drinking%20bird | Drinking birds, also known as dunking birds, drinky birds, water birds, or dipping birds are toy heat engines that mimic the motions of a bird drinking from a water source. They are sometimes incorrectly considered examples of a perpetual motion device.
Construction and materials
A drinking bird consists of two glass bulbs joined by a glass tube (the bird's neck/body). The tube extends nearly all the way into the bottom bulb, and attaches to the top bulb but does not extend into it.
The space inside the bird contains a fluid, usually colored for visibility. (This dye might fade when exposed to light, with the rate depending on the dye/color).
The fluid is typically dichloromethane (DCM), also known as methylene chloride.
Earlier versions contained trichlorofluoromethane.
Miles V. Sullivan's 1945 patent suggested ether, alcohol, carbon tetrachloride, or chloroform.
Air is removed from the apparatus during manufacture, so the space inside the body is filled by vapor evaporated from the fluid. The upper bulb has a "beak" attached which, along with the head, is covered in a felt-like material. The bird is typically decorated with paper eyes, a plastic top hat, and one or more tail feathers. The whole device pivots on a crosspiece attached to the body.
Heat engine steps
The drinking bird is a heat engine that exploits a temperature difference to convert heat energy to a pressure difference within the device, and performs mechanical work. Like all heat engines, the drinking bird works through a thermodynamic cycle. The initial state of the system is a bird with a wet head oriented vertically.
The process operates as follows:
The water evaporates from the felt on the head.
Evaporation lowers the temperature of the glass head (heat of vaporization).
The temperature decrease causes some of the dichloromethane vapor in the head to condense.
The lower temperature and condensation together cause the pressure to drop in the head (governed by equations of state).
The higher vapor pressure in the warmer base pushes the liquid up the neck.
As the liquid rises, the bird becomes top heavy and tips over.
When the bird tips over, the bottom end of the neck tube rises above the surface of the liquid in the bottom bulb.
A bubble of warm vapor rises up the tube through this gap, displacing liquid as it goes.
Liquid flows back to the bottom bulb (the toy is designed so that, when it has tipped over, the tilt of the neck allows this). Pressure equalizes between the top and bottom bulbs.
The weight of the liquid in the bottom bulb restores the bird to its vertical position.
The liquid in the bottom bulb is heated by ambient air, which is at a temperature slightly higher than the temperature of the bird's head.
If a glass of water is placed so that the beak dips into it on its descent, the bird will continue to absorb water and the cycle will continue as long as there is enough water in the glass to keep the head wet. However, the bird will continue to dip even without a source of water, as long as the head is wet, or as long as a temperature differential is maintained between the head and body. This differential can be generated without evaporative cooling in the head; for instance, a heat source directed at the bottom bulb will create a pressure differential between top and bottom that will drive the engine. The ultimate source of energy is the temperature gradient between the toy's head and base; the toy is not a perpetual motion machine.
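To give a sense of scale for the pressure difference that lifts the liquid (steps 4–6 above), here is a minimal back-of-the-envelope sketch in Python; the neck height is an assumption and the dichloromethane property values are approximate literature figures, not data from this article.

```python
# Estimate the head-body temperature difference needed to raise the
# liquid up the neck: hydrostatic pressure vs. the Clausius-Clapeyron
# slope of dichloromethane's vapor-pressure curve.
R = 8.314          # J/(mol*K), gas constant
T = 293.0          # K, assumed room temperature
P_VAP = 47_000.0   # Pa, approx. DCM vapor pressure at 20 degrees C
DH_VAP = 28_800.0  # J/mol, approx. DCM enthalpy of vaporization
RHO = 1_330.0      # kg/m^3, approx. DCM liquid density
G = 9.81           # m/s^2
NECK = 0.10        # m, assumed height the liquid column must climb

dp_needed = RHO * G * NECK                   # ~1.3 kPa to support the column
dp_per_kelvin = P_VAP * DH_VAP / (R * T**2)  # ~1.9 kPa/K near 20 degrees C
print(f"head only needs to be ~{dp_needed / dp_per_kelvin:.2f} K cooler than the base")
```

Under these assumptions the required difference comes out to well under one kelvin, consistent with the toy running on nothing more than evaporative cooling of the head.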
Physical and chemical principles
The drinking bird is an exhibition of several physical laws and is therefore a staple of basic chemistry and physics education. These include:
The dichloromethane, with a low boiling point (39.6 °C under standard pressure p0 = 10⁵ Pa – as the drinking bird is first evacuated, partially filled and sealed, the pressure and thus the boiling point in the drinking bird will be different), gives the heat engine the ability to extract motion from low temperatures. The drinking bird is a heat engine that works at room temperature.
The combined gas law, which establishes a proportional relationship between temperature and pressure exerted by a gas in a constant volume.
The ideal gas law, which establishes a proportional relationship between number of gas particles and pressure in a constant volume.
The Maxwell–Boltzmann distribution, which establishes that molecules in a given space at a given temperature vary in energy level, and therefore can exist in multiple phases (solid/liquid/gas) at a single temperature.
Heat of vaporization (or condensation), which establishes that substances absorb (or give off) heat when changing state at a constant temperature.
Torque and center of mass.
Capillary action of the wicking felt.
Wet-bulb temperature: The temperature difference between the head and body depends on the relative humidity of the air.
The operation of the bird is also affected by relative humidity.
By using a water-ethanol mixture instead of water, the effect of different rates of evaporation can be demonstrated.
By considering the difference between the wet and dry bulb temperatures, it is possible to develop a mathematical expression to calculate the maximum work that can be produced from a given amount of water "drunk". Such analysis is based on the definition of the Carnot heat engine efficiency and the psychrometric concepts.
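As a rough illustration of that wet-bulb/dry-bulb analysis, the sketch below computes the Carnot bound for a pair of assumed temperatures; all numbers are illustrative, since the text gives none, and for such a small temperature difference the distinction between heat absorbed and heat rejected is neglected.

```python
# Carnot bound on the work obtainable per gram of water "drunk".
T_DRY = 298.15   # K, assumed ambient (dry-bulb) temperature at the base
T_WET = 293.15   # K, assumed evaporatively cooled (wet-bulb) head temperature
L_VAP = 2_444.0  # J/g, approx. latent heat of vaporization of water near 25 C

eta = 1.0 - T_WET / T_DRY   # Carnot efficiency between the two reservoirs
max_work = eta * L_VAP      # upper bound on work per gram evaporated
print(f"eta = {eta:.4f}, max work ~ {max_work:.0f} J per gram of water")
```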
The drinking bird may also be considered to be an entropy engine driven by the difference of the entropy of liquid water and the entropy of water vapor dispersed in air, that is, the sum of the entropy of evaporation of pure water plus the entropy of dilution of water vapor in air. The evaporation of water is an endothermic process requiring the input of thermal energy or a positive enthalpy flow from the environment. Since a spontaneous process requires a negative change in Gibbs free energy, the positive enthalpy has to be overcome by the large entropy increase.
History
By the 1760s (or earlier) German artisans had invented a so-called "pulse hammer" (Pulshammer). In 1767 Benjamin Franklin visited Germany, saw a pulse hammer, and in 1768, improved it. Franklin's pulse hammer consisted of two glass bulbs connected by a U-shaped tube; one of the bulbs was partially filled with water in equilibrium with its vapor. Holding the partially filled bulb in one's hand would cause the water to flow into the empty bulb. In 1872, the Italian physicist and engineer Enrico Bernardi combined three Franklin tubes to build a simple heat motor that was powered by evaporation in a way similar to the drinking bird.
In 1881 Israel L. Landis got a patent for a similar oscillating motor.
A year later (1882), the Iske brothers got a patent for a similar motor.
Unlike the drinking bird, the lower tank was heated and the upper tank just air-cooled in this engine. Other than that, it used the same principle. The Iske brothers during that time got various patents on a related engine which is now known as the Minto wheel.
A Chinese drinking bird toy dating back to the 1910s–1930s, named the insatiable birdie, is described in Yakov Perelman's Physics for Entertainment. The book explained the "insatiable" mechanism: "Since the headtube's temperature becomes lower than that of the tail reservoir, this causes a drop in the pressure of the saturated vapours in the head-tube ..." It was said in Shanghai, China, that when Albert Einstein and his wife, Elsa, arrived in Shanghai in 1922, they were fascinated by the Chinese "insatiable birdie" toy.
In addition, the Japanese professor of toys, Takao Sakai, from Tohoku University, also introduced this Chinese toy.
Arthur M. Hillery got a US patent in 1945; he suggested the use of acetone as the working fluid.
It was again patented in the US by Miles V. Sullivan in 1946.
He was a Ph.D. inventor-scientist at Bell Labs in Murray Hill, NJ, USA. Robert T. Plate got a US design patent in 1947 that cites Arthur M. Hillery's patent.
Notable usage in popular culture
The drinking bird has been used in many fictional contexts.
Drinking birds have been featured as plot elements in the 1951 Merrie Melodies cartoon Putty Tat Trouble and the 1968 science fiction thriller The Power. In S4E11 of the comedy series Arrested Development, a delusional character hears the voice of God speaking through a drinking bird.
In Australian contemporary playwright John Romeril's play The Floating World, drinking birds are a symbolic prop which represent the progression of Les' insanity.
In season 7, episode 7 of the animated sitcom The Simpsons, titled "King-Size Homer", Homer uses a drinking bird to press the Y key on his nuclear control computer, eventually leading to a nuclear meltdown. The bird returns two seasons later in the episode "Das Bus".
Alternative design
In 2003 an alternative mechanism was devised by Nadine Abraham and Peter Palffy-Muhoray of Ohio, USA, that utilizes capillary action combined with evaporation to produce motion, but has no volatile working fluid. Their paper, "A Dunking Bird of the Second Kind", was submitted to the American Journal of Physics and published in June 2004. It describes a mechanism which, while similar to the original drinking bird, operates without a temperature difference. Instead it utilizes a combination of capillary action, gravitational potential difference and the evaporation of water to power the device.
This bird works as follows: it is balanced such that, when dry, it tips into a head-down position. The bird is placed next to a water source such that this position brings its beak into contact with water. Water is then lifted into the beak by capillary action (the authors used a triangular sponge) and carried by capillary action past the fulcrum to a larger sponge reservoir which they fashioned to resemble wings. When enough water has been absorbed by the reservoir, the now-heavy bottom causes the bird to tip into a head-up position. With the beak out of the water, eventually enough water evaporates from the sponge that the original balance is restored and the head tips down again. Although a small drop in temperature may occur due to evaporative cooling, this does not contribute to the motion of the bird. The device operates relatively slowly with 7 hours 22 minutes being the average cycle time measured.
See also
Minto wheel - a heat engine consisting of a set of sealed chambers with volatile fluid inside just as in the drinking bird
Cryophorus - a glass container with two bulbs containing liquid water and water vapor. It is used in physics courses to demonstrate rapid freezing by evaporation
Heat pipe - a heat-transfer device that employs phase transition to transfer heat between two solid interfaces.
Thermodynamics - the branch of physics concerned with heat and temperature and their relation to energy and work
References
External links
1940s toys
Birds in popular culture
Educational toys
Novelty items
Thermodynamics
Bird
Water toys
Articles containing video clips | Drinking bird | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,278 | [
"Thermodynamics",
"Dynamical systems"
] |
2,189,529 | https://en.wikipedia.org/wiki/Western%20Latin%20character%20sets%20%28computing%29 | Several 8-bit character sets (encodings) were designed for binary representation of common Western European languages (Italian, Spanish, Portuguese, French, German, Dutch, English, Danish, Swedish, Norwegian, and Icelandic), which use the Latin alphabet, a few additional letters, letters with precomposed diacritics, some punctuation, and various symbols (including some Greek letters). These character sets also happen to support many other languages such as Malay, Swahili, and Classical Latin.
This material is technically obsolete, having been functionally replaced by Unicode. However, it continues to have historical interest.
Summary
The ISO-8859 series of 8-bit character sets encodes all Latin character sets used in Europe, although the reuse of the same code points across sets caused some difficulty (including mojibake, or garbled characters, and communication issues). The arrival of Unicode, with a unique code point for every glyph, resolved these issues.
ISO/IEC 8859-1 or Latin-1 is the most widely used; it also defines the first 256 code points in Unicode.
ISO/IEC 8859-15 modifies ISO-8859-1 to fully support Estonian, Finnish and French and add the euro sign.
Windows-1252 is a superset of ISO-8859-1 that includes the printable characters from ISO/IEC 8859-15 and popular punctuation such as curved quotation marks (also known as smart quotes, such as in Microsoft Word settings and similar programs). It is common for web page tools on Windows to use Windows-1252 but label the web page as using ISO-8859-1; this has been addressed in HTML5, which mandates that pages labeled as ISO-8859-1 must be interpreted as Windows-1252.
IBM CP437, being intended for English only, has very little in the way of accented letters (particularly uppercase) but has far more graphics characters than the other IBM code pages listed here and also some mathematical and Greek characters that are useful as technical symbols.
IBM CP850 has all the printable characters that ISO-8859-1 has (albeit arranged differently) and still manages to have enough graphics characters to build a usable text-mode user interface.
IBM CP858 differs from CP850 only by one character — a dotless i (ı), rarely used outside Turkey and with no uppercase equivalent provided, was replaced by the euro currency sign (€).
IBM CP859 contains all the printable characters that ISO/IEC 8859-15 has, so unlike CP850 it supports the euro sign, Estonian, Finnish and French.
IBM code pages 037, 500, and 1047 are EBCDIC encodings that include all of the ISO-8859-1 characters.
The Mac OS Roman character set (often referred to as MacRoman and known by the IANA as simply MACINTOSH) has most, but not all, of the same characters as ISO/IEC 8859-1 but in a very different arrangement; and it also adds many technical and mathematical characters (though it lacks the important multiplication sign, ) and more diacritics. Older Macintosh web browsers were known to munge the few characters that were in ISO/IEC 8859-1 but not their native Macintosh character set when editing text from Web sites. Conversely, in Web material prepared on an older Macintosh, many characters were displayed incorrectly when read by other operating systems. The Macintosh Latin encoding, a modification of Mac OS Roman to support ISO/IEC 8859-1, was created by the creators of Kermit (protocol) to solve this problem.
History
The earlier seven-bit U.S. American Standard Code for Information Interchange ('ASCII') encoding has characters sufficient to properly represent only a few languages such as English, Latin, Malay and Swahili. It is missing some letters and letter-diacritic combinations used in other Latin-alphabet languages. However, since there was no other choice on most US-supplied computer platforms, use of ASCII was unavoidable except where there was a strong national computing industry. There was the ISO 646 group of encodings which replaced some of the symbols in ASCII with local characters, but space was very limited, and some of the symbols replaced were quite common in things like programming languages.
Most computers internally used eight-bit bytes but communication (seen as inherently unreliable) used seven data bits plus one parity bit. In time, it became common to use all eight bits for data, creating space for another 128 characters. In the early days most of these were system specific, but gradually the ISO/IEC 8859 standards emerged to provide some cross-platform similarity to enable information interchange.
Towards the end of the 20th century, as storage and memory costs fell, the issues associated with multiple meanings of a given eight-bit code (there are seven ISO-Latin code sets alone) have ceased to be justified. All major operating systems have moved to Unicode as their main internal representation. However, as Windows did not support the UTF-8 method of encoding Unicode (preferring UTF-16), many applications continued to be restricted to these legacy character sets.
The euro sign
The introduction of the euro and its associated euro sign () introduced significant pressure on computer systems developers to support this new symbol, and most 8-bit character sets had to be adapted in some way.
Apple with MacRoman and Sun Microsystems with Solaris OS simply replaced the generic currency sign (). This caused difficulty in some places because organisations had found other uses for its code point, such as the company logo.
ISO introduced a further variant of ISO 8859, ISO 8859-15, which replaced the generic currency sign with the euro sign as well as making some other replacements of symbols with letters with diacritics. ISO 8859-15 never received widespread adoption.
With Windows-1252, Microsoft placed the euro sign in a gap (position 80hex) in the existing C1 control codes, a decision that other vendors considered counter-architectural.
Whilst these decisions had limited effect for documents that were only used within a single computer (or at least within a single vendor's "digital ecosystem"), it meant that documents containing a euro sign would fail to render as expected when interchanged between ecosystems.
All of these issues have been resolved as operating systems have been upgraded to support Unicode as standard, which encodes the euro sign at U+20AC (decimal 8364).
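The differing euro placements described above can be verified directly with standard library codecs; here is a minimal Python sketch (cp1252, iso8859_15 and latin_1 are the standard Python aliases for these character sets).

```python
euro = "\u20ac"  # U+20AC EURO SIGN

# Windows-1252 placed the euro in the C1 control range, at 0x80.
assert euro.encode("cp1252") == b"\x80"

# ISO/IEC 8859-15 replaced the generic currency sign, so the euro sits at 0xA4.
assert euro.encode("iso8859_15") == b"\xa4"

# In ISO/IEC 8859-1, 0xA4 is still the generic currency sign.
assert b"\xa4".decode("latin_1") == "\u00a4"

# The classic interchange failure: Windows-1252 bytes read as ISO-8859-1
# yield U+0080, an invisible control character rather than a euro sign.
assert b"\x80".decode("latin_1") == "\u0080"
```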
Comparison table
Code points up to U+007F are not shown in this table, as they are directly mapped in all character sets listed here. The ASCII coding standard defines the original specification for the mapping of characters 0–127.
The table is arranged by Unicode code point. Character sets are referred to here by their IANA names in upper case.
The mappings for the IBM code pages are from the Unicode site supplied by Microsoft. The Unicode Consortium's document has links to sources giving the differences between IBM's and Microsoft's mappings for these code pages.
IBM437 and IBM850 defined printable characters for the control code ranges. While these could not be used when printing text through DOS, as they would be trapped before reaching the screen, they could be used by applications that used screen memory directly.
Macintosh has an Apple logo at 0xF0, and translates it to U+F8FF in the Private Use Area for Unicode.
Notes
References
Character sets
Articles with unsupported Private Use Area characters
History of computing | Western Latin character sets (computing) | [
"Technology"
] | 1,553 | [
"Computers",
"History of computing"
] |
2,189,647 | https://en.wikipedia.org/wiki/List%20of%20nuclear%20weapons%20tests | Nuclear weapons testing is the act of experimentally and deliberately firing one or more nuclear devices in a controlled manner pursuant to a military, scientific or technological goal. This has been done on test sites on land or waters owned, controlled or leased from the owners by one of the eight nuclear nations: the United States, the Soviet Union, the United Kingdom, France, China, India, Pakistan and North Korea, or has been done on or over ocean sites far from territorial waters. There have been 2,121 tests conducted since the first in July 1945, involving 2,476 nuclear devices. As of 1993, worldwide, 520 atmospheric nuclear explosions (including eight underwater) have been conducted with a total yield of 545 megatons (Mt): 217 Mt from pure fission and 328 Mt from bombs using fusion, while the estimated number of underground nuclear tests conducted in the period from 1957 to 1992 is 1,352 explosions with a total yield of 90 Mt. Declared testing has largely ceased since the 1996 Comprehensive Nuclear-Test-Ban Treaty: the last US test was the 1992 Julin Divider, France and China tested until 1996, India and Pakistan tested in 1998, and since 2006 North Korea has been the only country to conduct declared tests.
Very few unknown tests are suspected at this time, the Vela incident being the most prominent. Israel is the only country suspected of having nuclear weapons but not confirmed to have ever tested any.
The following are considered nuclear tests:
Single nuclear devices fired in deep horizontal tunnels (drifts) or in vertical shafts, in shallow shafts ("cratering"), underwater, on barges or vessels on the water, on land, in towers, carried by balloons, shot from cannons, dropped from airplanes with or without parachutes, and shot into a ballistic trajectory, into high atmosphere or into near space on rockets. Since 1963 the great majority have been underground due to the Partial Test Ban Treaty.
Salvo tests, in which several devices are fired simultaneously, as defined by international treaties.
The two nuclear bombs dropped in combat over Japan in 1945. While the primary purpose of these two detonations was military and not experimental, observations were made and the tables would be incomplete without them.
Nuclear safety tests, in which the nuclear yield was intended to be zero and which failed to some extent if a nuclear yield was detected. There have been failures, and therefore they are included in the lists, as well as the successes.
Fizzles, in which the expected yield was not reached.
Tests intended but not completed because of vehicle or other support failures that destroyed the device.
Tests that were emplaced and could not be fired for various reasons. Usually, the devices were ultimately destroyed by later conventional or nuclear explosions.
Not included as nuclear tests:
Misfires which were corrected and later fired as intended.
Hydro-nuclear or subcritical testing in which the normal fuel material for a nuclear device is below the amount necessary to sustain a chain reaction. The line here is finely drawn, but, among other things, subcritical testing is not prohibited by the Comprehensive Nuclear Test Ban Treaty, while safety tests are.
Tests by country
The table in this section summarizes all worldwide nuclear testing (including the two bombs dropped in combat which were not tests). The country names are links to summary articles for each country, which may in turn be used to drill down to test series articles which contain details on every known nuclear explosion and test. The notes attached to various table cells detail how the numbers therein are arrived at.
Known tests
In the following subsections, a selection of significant tests (by no means exhaustive) is listed, representative of the testing effort in each nuclear country.
United States
The standard official list of tests for American devices is arguably the United States Department of Energy DoE-209 document. The United States conducted around 1,054 nuclear tests (by official count) between 1945 and 1992, including 216 atmospheric, underwater, and space tests. Some significant tests conducted by the United States include:
The Trinity test on 16 July 1945, near Socorro, New Mexico, was the first-ever test of a nuclear weapon (yield of around 20 kilotons).
The Operation Crossroads series in July 1946, at Bikini Atoll in the Marshall Islands, was the first postwar test series and one of the largest military operations in U.S. history.
The Operation Greenhouse shots of May 1951, at Enewetak Atoll in the Marshall Islands, included the first boosted fission weapon test (named Item) and a scientific test (named George) which proved the feasibility of thermonuclear weapons.
The Ivy Mike shot of 1 November 1952, at Enewetak Atoll, was the first full test of a Teller-Ulam design staged hydrogen bomb, with a yield of 10 megatons. This was not a deployable weapon. With its full cryogenic equipment it weighed about 82 tons.
The Castle Bravo shot of 1 March 1954, at Bikini Atoll, was the first test of a deployable (solid fuel) thermonuclear weapon, and also (accidentally) the largest weapon ever tested by the United States (15 megatons). It was also the single largest U.S. radiological accident in connection with nuclear testing. The unanticipated yield, and a change in the weather, resulted in nuclear fallout spreading eastward onto the inhabited Rongelap and Rongerik atolls, which were soon evacuated. Many of the Marshall Islands natives have since suffered from birth defects and have received some compensation from the federal government of the United States. A Japanese fishing boat, the Daigo Fukuryū Maru, also came into contact with the fallout, which caused many of the crew to grow ill; one eventually died. The crew's exposure was referenced in the film Godzilla as a criticism of American nuclear tests in the Pacific.
The Operation Plumbbob series of May–October 1957 is considered the biggest, longest, and most controversial test series that occurred within the continental United States. Rainier Mesa, Frenchman Flat, and Yucca Flat were all used for the 29 different atmospheric explosions.
Shot Argus I of Operation Argus, on 27 August 1958, was the first detonation of a nuclear weapon in outer space: a 1.7-kiloton warhead detonated at an altitude of 200 kilometers over the South Atlantic Ocean during a series of high-altitude nuclear explosions.
Shot Frigate Bird of Operation Dominic, on 6 May 1962, was the only U.S. test of an operational ballistic missile with a live nuclear warhead (yield of 600 kilotons), at Johnston Atoll in the Pacific. In general, missile systems were tested without live warheads, and warheads were tested separately, for safety reasons. In the early 1960s there were mounting questions about how the systems would behave under combat conditions (when they were "mated", in military parlance), and this test was meant to dispel these concerns. However, the warhead had to be somewhat modified before its use, and the missile was only an SLBM (not an ICBM), so by itself the test did not satisfy all concerns.
Shot Sedan of Operation Storax, on 6 July 1962 (yield of 104 kilotons), was an attempt to show the feasibility of using nuclear weapons for civilian, peaceful purposes as part of Operation Plowshare. In this instance, an explosion crater 1,280 feet in diameter and 320 feet deep, morphologically similar to an impact crater, was created at the Nevada Test Site.
Shot Divider of Operation Julin on 23 September 1992, at the Nevada Test Site, was the last U.S. nuclear test. Described as a "test to ensure safety of deterrent forces", the series was interrupted by the beginning of negotiations over the Comprehensive Nuclear-Test-Ban Treaty.
Soviet Union
After the fall of the USSR, the American government (as a member of the International Science and Technology Center consortium) hired a number of top scientists in Sarov (aka Arzamas-16, the Soviet equivalent of Los Alamos and thus sometimes called "Los Arzamas") to draft a number of documents about the history of the Soviet atomic program. One of these documents was the definitive list of Soviet nuclear tests. Unlike the American tests, most of the Soviet tests have no code names, so they are known by their test numbers from this document. Some list compilers have detected discrepancies in that list: one device was abandoned in its emplacement in a tunnel at Semipalatinsk when the Soviets withdrew from Kazakhstan, and one list includes 13 other tests which apparently failed to produce any yield. The source for the latter, the well-respected Russian Strategic Nuclear Forces, confirms 11 of the 13; those 11 are in the Wikipedia lists.
The Soviet Union conducted 715 nuclear tests (by the official count) between 1949 and 1990, including 219 atmospheric, underwater, and space tests. Most of them took place at the Semipalatinsk Test Site in Kazakhstan and the Northern Test Site at Novaya Zemlya. Additional industrial tests were conducted at various locations in Russia and Kazakhstan, while a small number of tests were conducted in Ukraine, Uzbekistan, and Turkmenistan.
In addition, a large-scale military exercise was conducted by the Soviet Army to explore the possibility of defensive and offensive warfare operations on the nuclear battlefield. The exercise, under the code name Snezhok (Snowball), involved the detonation of a nuclear bomb twice as powerful as the one used at Nagasaki, with approximately 45,000 soldiers moving through the epicenter immediately after the blast. The exercise was conducted on September 14, 1954, under the command of Marshal Georgy Zhukov, to the north of Totskoye village in Orenburg Oblast, Russia.
Some significant Soviet tests include:
Operation First Lightning/RDS-1 (known as Joe 1 in the West), August 29, 1949: first Soviet nuclear test.
RDS-6s (known as Joe 4 in the West), August 12, 1953: first Soviet thermonuclear test using a sloyka (layer cake) design. The design proved to be unscalable into megaton yields, but it was air-deployable.
RDS-37, November 22, 1955: first Soviet multi-megaton, true hydrogen bomb test, using Andrei Sakharov's "third idea", essentially a re-invention of the Teller-Ulam design.
Tsar Bomba, October 30, 1961: largest nuclear weapon ever detonated, with a design yield of 100 Mt, de-rated to 50 Mt for the test drop.
Chagan, January 15, 1965: large cratering experiment as part of Nuclear Explosions for the National Economy program, which created an artificial lake.
The last Soviet test took place on October 24, 1990. After the dissolution of the USSR in 1991, Ukraine and Russia inherited the USSR's nuclear stockpile (Ukraine later handed its weapons over to Russia), while Kazakhstan inherited the Semipalatinsk nuclear test area, as well as the Baikonur Cosmodrome, the Sary Shagan missile/radar test area, and three ballistic missile fields. Semipalatinsk included at least one unexploded device, later blown up with conventional explosives by a combined US–Kazakh team. No testing has occurred in the former territory of the USSR since its dissolution.
United Kingdom
The United Kingdom has conducted 45 tests (12 in Australian territory, including 3 in the Montebello Islands of Western Australia and 9 in mainland South Australia (7 at Maralinga and 2 at Emu Field); 9 in the Line Islands of the central Pacific (3 at Malden Island and 6 at Kiritimati/Christmas Island); and 24 in the U.S. as part of joint test series). Often excluded from British totals are the 31 safety tests of Operation Vixen in Maralinga. British test series include:
Operation Hurricane, October 3, 1952 (UK's first atomic bomb)
Operation Totem, 1953
Operation Mosaic, 1956
Operation Buffalo, 1956
Operation Antler, 1957
Operation Grapple, 1957–1958 (Included the UK's first hydrogen bomb, Grapple X/Round C)
Last test: Julin Bristol, November 26, 1991, vertical shaft.
Atmospheric tests involving nuclear material but conventional explosions:
Operation Kittens, 1953–1961 (initiator tests using conventional explosive)
Operation Rats, 1956–1960 (conventional explosions to study dispersal of uranium)
Operation Tims, 1955–1963 (conventional explosions for tamper, plutonium compression trials)
Operation Vixen, 1959–1963 (effects of accidental fire or explosion on nuclear weapons)
France
France conducted 210 nuclear tests between February 13, 1960 and January 27, 1996. Four were tested at Reggane, French Algeria, 13 at In Ekker, Algeria and the rest at Moruroa and Fangataufa Atolls in French Polynesia. Often skipped in lists are the 5 safety tests at Adrar Tikertine in Algeria.
Operation Gerboise bleue, February 13, 1960 (France's first atomic bomb), and three more: Reggane, Algeria; in the atmosphere. The final test is reputed to have been intended more to keep the weapon out of the hands of generals rebelling against French colonial rule than for testing purposes.
Operation Agathe, November 7, 1961 and 12 more: In Ekker, Algeria; underground
Operation Aldébaran, July 2, 1966 and 45 more: Moruroa and Fangataufa; in the atmosphere;
Canopus first hydrogen bomb: August 24, 1968 (Fangataufa)
Operation Achille June 5, 1975 and 146 more: Moruroa and Fangataufa; underground
Operation Xouthos last test: January 27, 1996 (Fangataufa)
China
The foremost list of Chinese tests, compiled by the Federation of American Scientists, skips two Chinese tests listed by others. The People's Republic of China conducted 45 tests (23 atmospheric and 22 underground, all conducted at the Lop Nur Nuclear Weapons Test Base in Malan, Xinjiang).
596 First test – October 16, 1964
Film footage of the 1966 tests has since become publicly available.
Test No. 6, First hydrogen bomb test – June 17, 1967
CHIC-16, 200 kt-1 Mt atmospheric test – June 17, 1974
#21, Largest hydrogen bomb tested by China (4 megatons) - November 17, 1976
#29, Last atmospheric test – October 16, 1980. This is to date the last atmospheric nuclear test by any country.
#45, Last test – July 29, 1996, underground.
India
India announced that it had conducted a test of a single device in 1974, near Pakistan's eastern border, under the codename Operation Smiling Buddha. Twenty-four years later, India publicly announced five further nuclear tests, on May 11 and May 13, 1998. The official number of Indian nuclear tests is six, conducted under two different code names and at different times.
May 18, 1974: Operation Smiling Buddha (type: implosion, plutonium, underground). One underground test in a horizontal shaft around 107 m long beneath the Indian Army Pokhran Test Range (IAPTR) in the Thar Desert, near Pakistan's eastern border. The Indian Meteorological Department and the Atomic Energy Commission announced the yield of the weapon as 12 kt, while Western sources estimated it at around 2–12 kt. The official claim was dismissed by the Bulletin of the Atomic Scientists, and the yield was later reported to be 8 kt.
May 11, 1998: Operation Shakti (type: implosion, 3 uranium and 2 plutonium devices, all underground). The Atomic Energy Commission (AEC) of India and the Defence Research and Development Organisation (DRDO) simultaneously conducted a test of three nuclear devices at the Indian Army Pokhran Test Range (IAPTR) on May 11, 1998. Two days later, on May 13, the AEC and DRDO carried out a test of two further nuclear devices, detonated simultaneously. During this operation, India's AEC claimed to have tested a three-stage thermonuclear device (Teller-Ulam design), but the yield of the tests was significantly lower than that expected from thermonuclear devices. The claimed yields remain questioned by Western and Indian scholars; the combined yield has been estimated at about 45 kt, attributed to a scaled-down version of a 200 kt design.
Pakistan
Pakistan conducted 6 official tests, under 2 different code names, in the final week of May 1998. From 1983 to 1994, around 24 nuclear cold tests were carried out by Pakistan; these remained unannounced and classified until 2000. In May 1998, Pakistan responded publicly to India's tests by detonating 6 nuclear devices.
March 11, 1983: Kirana-I (type: implosion, non-fissioned (plutonium) and underground). The 24 underground cold tests of nuclear devices were performed near the Sargodha Air Force Base.
May 28, 1998: Chagai-I (type: implosion, HEU and underground). One underground horizontal-shaft tunnel test (inside a granite mountain) of boosted fission devices at Koh Kambaran in the Ras Koh Hills in Chagai District of Balochistan Province. The announced yield of the five devices was a total of 40–45 kilotonnes, with the largest having a yield of approximately 30–45 kilotonnes. An independent assessment, however, put the test yield at no more than 12 kt and the maximum yield of a single device at only 9 kt, as opposed to the 35 kt claimed by Pakistani authorities. According to the Bulletin of the Atomic Scientists, the maximum yield was only 2–10 kt, as opposed to the claimed 35 kt, and the total yield of all the tests was no more than 8–15 kt.
May 30, 1998: Chagai-II (type: implosion, plutonium device and underground). One underground vertical-shaft tunnel test of a miniaturized fission device with an announced yield of approximately 18–20 kilotonnes, carried out in the Kharan Desert in Kharan District, Balochistan Province. An independent assessment put the yield of this test at only 4–6 kt; some Western seismologists put the figure at a mere 2 kt.
North Korea
On October 9, 2006, North Korea announced that it had conducted a nuclear test in North Hamgyong Province on the northeast coast at 10:36 AM (11:30 AEST). South Korea reported a magnitude 3.58 earthquake, and a magnitude 4.2 tremor was detected 386 km (240 mi) north of P'yongyang. The low estimates of the test's yield (potentially less than a kiloton) have led to speculation as to whether it was a fizzle (an unsuccessful test) or not a genuine nuclear test at all.
On May 25, 2009, North Korea announced that it had conducted a second nuclear test. A tremor, with magnitude reports ranging from 4.7 to 5.3, was detected at Mantapsan, 375 km (233 mi) northeast of P'yongyang and within a few kilometers of the 2006 test location. While yield estimates remain uncertain, with reports ranging from 3 to 20 kilotons, the stronger tremor indicates a significantly larger yield than the 2006 test.
On 12 February 2013, North Korean state media announced that the country had conducted an underground nuclear test, its third in seven years. A tremor exhibiting a nuclear-bomb signature, with an initial magnitude of 4.9 (later revised to 5.1), was detected by both the Comprehensive Nuclear-Test-Ban Treaty Organization Preparatory Commission (CTBTO) and the United States Geological Survey (USGS). The tremor occurred at 11:57 local time (02:57 UTC), and the USGS said the hypocenter of the event was only one kilometer deep. South Korea's defense ministry said the reading indicated a blast of six to seven kilotons. However, some experts estimate the yield at up to 15 kt, since the test site's geology is not well understood. In comparison, the atomic (fission) bombs dropped by the Enola Gay on Hiroshima (Little Boy, a gun-type atomic bomb) and on Nagasaki by Bockscar (Fat Man, an implosion-type atomic bomb) had blast yields equivalent to 15 and 21 kilotons of TNT, respectively.
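Seismic yield estimates of this kind typically rest on an empirical magnitude-yield relation of the form

\[ m_b = a + b \log_{10} Y, \]

where \(Y\) is the yield in kilotons and the constants \(a\) and \(b\) depend strongly on the geology of the test site, which is why estimates for the poorly characterized Punggye-ri site vary so widely. As a purely illustrative check (these constants are assumptions, not measured values for this site), taking \(a = 4.45\) and \(b = 0.75\) with the revised magnitude \(m_b = 5.1\) gives \(Y = 10^{(5.1 - 4.45)/0.75} \approx 7\) kt, the same order as the South Korean six-to-seven-kiloton reading.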
On January 5, 2015, North Korean TV news anchors announced that they had successfully tested a miniaturized atomic bomb, about 8 km (5 mi) from the Punggye-ri nuclear site where a test was conducted in 2013.
On January 6, 2016, North Korea announced that it conducted a successful test of a hydrogen bomb. The seismic event, at a magnitude of 5.1, occurred 19 kilometers (12 miles) east-northeast of Sungjibaegam.
On September 9, 2016, North Korea announced another successful nuclear weapon test at the Punggye-ri Test Site. This is the first warhead the state claims to be able to mount to a missile or long-range rocket previously tested in June 2016. Estimates for the explosive yield range from 20 to 30 kt and coincided with a 5.3 magnitude earthquake in the region.
On September 3, 2017, North Korea successfully detonated its first weapon self-designated as a hydrogen bomb. Initial yield estimates place it at 100 kt. Reports indicate that the test blast caused a magnitude 6.3 earthquake, and possibly resulted in a cave-in at the test site.
Alleged tests
There have been a number of significant alleged, disputed or unacknowledged accounts of countries testing nuclear explosives. Their status is either not certain or entirely disputed by most mainstream experts.
China
On April 15, 2020, the Wall Street Journal published details of a US State Department report on activity during 2019 at China's Lop Nur test site, alleging supercritical experiments could have occurred in an absence of effective monitoring.
Israel
Israel was alleged by a Bundeswehr report to have conducted an underground test in 1963. Historian Taysir Nashif reported a zero-yield implosion test in 1966. Scientists from Israel participated in the earliest French nuclear tests before de Gaulle cut off further cooperation.
North Korea
On September 9, 2004, South Korean media reported that there had been a large explosion at the Chinese/North Korean border. This explosion left a crater visible by satellite and precipitated a large (3-km diameter) mushroom cloud. The United States and South Korea quickly downplayed this, explaining it as a forest fire that had nothing to do with the DPRK's nuclear weapons program.
Pakistan
Because Pakistan's nuclear program was conducted under extreme secrecy, it raised concerns in the Soviet Union and in India, which suspected after the 1974 Indian test that Pakistan would inevitably push its own program further. The pro-Soviet newspaper The Patriot reported in 1983 that "Pakistan has exploded a nuclear device in the range of 20 to 50 kilotons", but the report was widely dismissed by Western diplomats, who pointed out that The Patriot had engaged in spreading disinformation on several previous occasions. In 1983, India and the Soviet Union both investigated the alleged secret tests but, for lack of any scientific data, these claims were widely dismissed.
In their book, The Nuclear Express, authors Thomas Reed and Danny Stillman also allege that the People's Republic of China allowed Pakistan to detonate a nuclear weapon at its Lop Nur test site in 1990, eight years before Pakistan held its first official weapons test.
However, senior scientist Abdul Qadeer Khan strongly rejected the claim in May 1998. According to Khan, because of its sensitivity, no country allows another country to use its test site to explode devices. Such an agreement has existed only between the United States and the United Kingdom, under the 1958 US–UK Mutual Defense Agreement, which among other things allows Britain access to the American Nevada National Security Site for testing. Dr. Samar Mubarakmand, another senior scientist, confirmed Dr. Khan's statement and acknowledged that cold tests were carried out, under the codename Kirana-I, at a test site built by the Corps of Engineers under the guidance of the PAEC.
The UK did, however, conduct nuclear tests in Australia in the 1950s.
Russia
The Yekaterinburg fireball of November 14, 2014, is alleged by some to have been a nuclear test in space, which would not have been detected by the CTBTO because the CTBTO has no autonomous means of monitoring space nuclear tests (i.e., satellites) and thus relies on information that member states choose to provide. The fireball occurred a few days before a conference in Yekaterinburg on the theme of air/missile defense. The claim, however, is disputed, as the Russian Ministry of Emergency Situations stated that it was an "on-ground" explosion. The Siberian Times, a local newspaper, noted that "the light was not accompanied by any sound".
Vela incident
The Vela incident was an unidentified double flash of light detected by a partly functional, decommissioned American Vela satellite on September 22, 1979, in the Indian Ocean (near the Prince Edward Islands off Antarctica). The sensors that could have recorded proof of a nuclear test were not functioning on this satellite, but it is possible that the flash was produced by a nuclear device. One popular theory, favored in the diary of then-sitting American President Jimmy Carter, is that it resulted from a covert joint South African and Israeli nuclear test of an advanced, highly miniaturized, artillery-shell-sized Israeli device, detectable by the satellite's optical sensor only because of a break in the cloud cover of a typhoon. Analysis of the South African nuclear program later showed that only six of the crudest and heaviest designs, weighing well over 340 kg, had been built when South Africa finally declared and disarmed its nuclear arsenal, while the 1986 Vanunu leaks, analyzed by nuclear weapon miniaturization pioneer Ted Taylor, revealed very sophisticated miniaturized Israeli designs among the evidence presented. Also suspected were France testing a neutron bomb near its Kerguelen Islands territory, the Soviet Union making a prohibited atmospheric test, and India or Pakistan conducting initial proof-of-concept tests of early weaponized nuclear bombs.
Tests of live warheads on rockets
Missiles and nuclear warheads have usually been tested separately because testing them together is considered highly dangerous; they are certainly the most extreme type of live fire exercise. The only US live test of an operational missile was the following:
Frigate Bird: on May 6, 1962, a UGM-27 Polaris A-2 missile with a live 600 kt W47 warhead was launched from the USS Ethan Allen; it flew downrange, re-entered the atmosphere, and detonated at altitude over the South Pacific.
Other live US tests in which the nuclear explosive was delivered by rocket include:
The July 19, 1957 test Plumbbob/John fired a small yield nuclear weapon on an AIR-2 Genie air-to-air rocket from a jet fighter.
On August 1, 1958, a Redstone rocket launched the Teak nuclear test, which detonated at high altitude. On August 12, 1958, Redstone #CC51 launched the Orange nuclear test, also detonated at high altitude. Both were part of Operation Hardtack I and had yields of 3.75 Mt.
Operation Argus: three tests above the South Atlantic Ocean, August 27, August 30, and September 6, 1958
On July 9, 1962, a Thor missile launched a Mk 4 reentry vehicle containing a W49 thermonuclear warhead to an altitude of 248 miles (400 km). The warhead detonated with a yield of 1.45 Mt. This was the Starfish Prime event of the Dominic-Fishbowl nuclear test operation.
In the Dominic-Fishbowl series in 1962: Checkmate, Bluegill, Kingfish and Tightrope
The USA also conducted two live weapons tests involving nuclear artillery:
Test of the M65 atomic cannon using the W9 artillery shell during the Upshot-Knothole Grable test on May 25, 1953.
Test of the Davy Crockett recoilless gun during Little Feller I test on July 17, 1962.
The USA also conducted one live weapons test involving a missile launched nuclear depth charge:
Test of the RUR-5 ASROC during the Dominic-Swordfish test on May 11, 1962.
The Soviet Union tested nuclear explosives on rockets as part of their development of a localized anti-ballistic missile system in the 1960s. Some of the Soviet nuclear tests with warheads delivered by rocket include:
Baikal (USSR Test #25, February 2, 1956, at Aralsk) – one test, with an R-5M rocket launched from Kapustin Yar.
ZUR-215 (#34, January 19, 1957, at Kapustin Yar) – one test, with a rocket launch from Kapustin Yar.
(#82 and 83, early November 1958) two tests, done after declared cease-fire for test moratorium negotiations, from Kapustin Yar.
Groza (#88, September 6, 1961, at Kapustin Yar) – one test, with a rocket launch from Kapustin Yar.
Grom (#115, October 6, 1961, at Kapustin Yar) – one test, with a rocket launch from Kapustin Yar.
Volga (#106 and 108, September 20–22, 1961, at Novaya Zemlya) – two tests, with R-11M rockets launched from Rogachevo.
Roza (#94 and 99, September 12–16, 1961, at Novaya Zemlya) – two tests, with R-12 rockets launched from Vorkuta.
Raduga (#121, October 20, 1961, at Novaya Zemlya) – one test, with an R-13 rocket launch.
Tyulpan (#164, September 8, 1962, at Novaya Zemlya) – one test, with an R-14 rocket launched from Chita.
Operation K (1961 and 1962, at Sary-Shagan) – five tests, at high altitude, with rockets launched from Kapustin Yar.
The Soviet Union also conducted three live nuclear torpedo tests including:
Test of the T-5 torpedo on September 21, 1955 at Novaya Zemlya.
Test of the T-5 torpedo on October 10, 1957 at Novaya Zemlya.
Test of the T-5 torpedo on October 23, 1961 at Novaya Zemlya.
The People's Republic of China conducted CHIC-4 with a Dongfeng-2 rocket launch on October 25, 1966. The warhead exploded with a yield of 12 kt.
Most powerful tests
The following is a list of the most powerful nuclear weapon tests. All tests on the first chart were multi-stage thermonuclear weapons.
See also
Andrei Sakharov
Edward Teller
High explosive nuclear effects testing
Historical nuclear weapons stockpiles and nuclear tests by country
International Day against Nuclear Tests
J. Robert Oppenheimer
Largest artificial non-nuclear explosions
List of nuclear weapon test locations
List of nuclear weapons tests of China
Lists of nuclear disasters and radioactive incidents
Novaya Zemlya
Nuclear fallout
Nuclear Test Ban
Soviet atomic bomb project
Stanislaw Ulam
References
External links
United States Nuclear Tests July 1945 through September 1992
Australian Government — Geoscience Australia — database of nuclear explosions since 1945
Video archive of nuclear weapon testing
Nuclear Proliferation Archive
History-related lists
Nuclear technology-related lists | List of nuclear weapons tests | [
"Technology"
] | 6,460 | [
"Environmental impact of nuclear power",
"Nuclear weapons testing"
] |
2,189,901 | https://en.wikipedia.org/wiki/Microstructure | Microstructure is the very small scale structure of a material, defined as the structure of a prepared surface of material as revealed by an optical microscope above 25× magnification. The microstructure of a material (such as metals, polymers, ceramics or composites) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behaviour or wear resistance. These properties in turn govern the application of these materials in industrial practice.
Microstructure at scales smaller than can be viewed with optical microscopes is often called nanostructure, while the structure in which individual atoms are arranged is known as crystal structure. The nanostructure of biological specimens is referred to as ultrastructure. A microstructure's influence on the mechanical and physical properties of a material is primarily governed by the different defects present or absent in the structure. These defects can take many forms, but the primary ones are pores. While pores play a very important role in defining a material's characteristics, so does the material's composition. In fact, for many materials different phases can exist at the same time. These phases have different properties and, if managed correctly, can prevent fracture of the material.
Methods
The concept of microstructure is observable in macrostructural features in commonplace objects. Galvanized steel, such as the casing of a lamp post or road divider, exhibits a non-uniformly colored patchwork of interlocking polygons of different shades of grey or silver. Each polygon is a single crystal of zinc adhering to the surface of the steel beneath. Zinc and lead are two common metals which form large crystals (grains) visible to the naked eye. The atoms in each grain are organized into one of seven 3D stacking arrangements, or crystal lattices (cubic, tetragonal, hexagonal, monoclinic, triclinic, rhombohedral, and orthorhombic). The direction of alignment of these lattices differs between adjacent crystals, leading to variance in the reflectivity of each presented face of the interlocked grains on the galvanized surface. The average grain size can be controlled by processing conditions and composition, and most alloys consist of much smaller grains not visible to the naked eye; this increases the strength of the material (see Hall-Petch strengthening, quantified below).
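The grain-size strengthening referred to above is commonly quantified by the Hall-Petch relation,

\[ \sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}, \]

where \(\sigma_y\) is the yield stress, \(\sigma_0\) is a friction stress resisting dislocation motion, \(k_y\) is a material-specific strengthening coefficient, and \(d\) is the average grain diameter. Halving the grain size therefore increases the strengthening term by a factor of \(\sqrt{2}\).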
Microstructure characterizations
To quantify microstructural features, both morphological features and material properties must be characterized. Image processing is a robust technique for the determination of morphological features such as volume fraction, inclusion morphology, and void and crystal orientations. To acquire micrographs, optical as well as electron microscopy are commonly used; a sketch of the volume-fraction measurement is given below.
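As a minimal illustrative sketch of this measurement (the fixed threshold and the function name are assumptions; a real workflow would choose the threshold from the image histogram, for example with Otsu's method):

import numpy as np

def phase_volume_fraction(micrograph, threshold=128.0):
    """Estimate the area fraction of the bright phase in a grayscale
    micrograph (a 2-D NumPy array) by simple thresholding."""
    binary = micrograph >= threshold  # True where the bright phase lies
    return float(binary.mean())       # fraction of pixels in that phase

By the Delesse principle of stereology, the area fraction measured on a random planar section estimates the volume fraction in three dimensions, which is what makes such a 2-D measurement meaningful.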
To determine material properties, nanoindentation is a robust technique for measuring properties at the micron and submicron scales, for which conventional testing is not feasible. Conventional mechanical tests, such as tensile testing or dynamic mechanical analysis (DMA), return only macroscopic properties without any indication of microstructural variation. Nanoindentation, however, can be used to determine local microstructural properties of homogeneous as well as heterogeneous materials. Microstructures can also be characterized using high-order statistical models, through which a set of complicated statistical properties is extracted from the images. These properties can then be used to produce various other stochastic models.
Microstructure generation
Microstructure generation is also known as stochastic microstructure reconstruction.
Computer-simulated microstructures are generated to replicate the microstructural features of actual microstructures. Such microstructures are referred to as synthetic microstructures. Synthetic microstructures are used to investigate which microstructural features are important for a given property. To ensure statistical equivalence between generated and actual microstructures, microstructures are modified after generation to match the statistics of an actual microstructure. Such a procedure enables the generation of a theoretically infinite number of computer-simulated microstructures that are statistically the same (have the same statistics) but stochastically different (have different configurations). A sketch of one simple generation scheme follows.
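A minimal sketch of one common generation scheme, thresholding a smoothed Gaussian random field so that the binary image hits a target volume fraction (the correlation length, the Gaussian-field model, and the SciPy dependency are all assumptions; serious reconstructions also match higher-order statistics such as two-point correlation functions):

import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_two_phase(shape=(256, 256), target_vf=0.3,
                        correlation_length=8.0, seed=0):
    """Generate a two-phase synthetic microstructure by thresholding a
    smoothed Gaussian random field at the quantile that yields the
    requested volume fraction."""
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal(shape), correlation_length)
    level = np.quantile(field, 1.0 - target_vf)  # threshold for target_vf
    return field >= level  # boolean array; True marks the minority phase

Calling this with different seeds yields microstructures that share the imposed statistics but differ in configuration: exactly the "statistically the same but stochastically different" property described above.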
Influence of pores and composition
A pore in a microstructure, unless desired, is a disadvantage for the properties. In nearly all materials, a pore is the starting point for rupture; it is the initiation point for cracks. Furthermore, a pore is usually quite hard to get rid of: the techniques described later involve high-temperature processing, yet even these processes can sometimes make a pore bigger. Pores with a large coordination number (surrounded by many particles) tend to grow during thermal processing, because the thermal energy is converted into a driving force for particle growth, which in turn induces growth of the pore, as the high coordination number inhibits growth toward the pore.
For many materials, it can be seen from their phase diagram that multiple phases can exist at the same time. These phases might exhibit different crystal structures, and thus different mechanical properties, as well as different microstructures (grain size, orientation). This can also improve some mechanical properties, since crack deflection can occur, pushing ultimate breakdown further by creating a more tortuous crack path in the coarser microstructure.
Improvement techniques
In some cases, simply changing the way the material is processed can influence the microstructure. An example is the titanium alloy TiAl6V4, whose microstructure and mechanical properties are enhanced using selective laser melting (SLM), a 3D printing technique that melts metal powder particles together with a high-powered laser. Other conventional techniques for improving the microstructure are thermal processes, which rely on the principle that an increase in temperature induces the reduction or annihilation of pores. Hot isostatic pressing (HIP) is a manufacturing process used to reduce the porosity of metals and increase the density of many ceramic materials. This improves the material's mechanical properties and workability.
The HIP process exposes the material to an isostatic gas pressure and a high temperature in a sealed vessel. The gas used is most often argon; it must be chemically inert so that no reaction occurs between it and the sample. The pressure is achieved simply by heating the hermetically sealed vessel, although some systems also pump in gas to reach the required pressure level. The pressure applied to the material is equal from all directions (hence the term "isostatic"). When castings are treated with HIP, the simultaneous application of heat and pressure eliminates internal voids and microporosity through a combination of plastic deformation, creep, and diffusion bonding; this process improves the fatigue resistance of the component.
See also
References
External links
Materials science
Metallurgy | Microstructure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,447 | [
"Metallurgy",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,189,949 | https://en.wikipedia.org/wiki/Corner%20solution | In mathematics and economics, a corner solution is a special solution to an agent's maximization problem in which the quantity of one of the arguments in the maximized function is zero. In non-technical terms, a corner solution is when the chooser is either unwilling or unable to make a trade-off between goods.
In economics
In the context of economics, a corner solution is best characterised as arising when the highest attainable indifference curve is not tangential to the budget line; in this scenario the consumer puts the entire budget into purchasing as much of one good as possible and none of the other. When the slope of the indifference curve is greater (in absolute value) than the slope of the budget line, the consumer is willing to give up more of good y for a unit of good x than the market requires. Thus, with good x on the horizontal axis and good y on the vertical axis, if the slope of the indifference curve is strictly greater than the slope of the budget line everywhere in the domain, that is, if

\[ \left| \frac{MU_x}{MU_y} \right| > \frac{p_x}{p_y} \]

for all feasible bundles, then the result will be a corner solution intersecting the x-axis. The converse is also true for a corner solution resulting from an intercept through the y-axis.
Examples
Real-world examples of a corner solution occur when someone says "I wouldn't buy that at any price", "Why would I buy X when Y is cheaper?", or "I will do X no matter the cost". This could be for any number of reasons, e.g. a bad brand experience, loyalty to a specific brand, or the existence of a cheaper version of the same good.
Another example is "zero-tolerance" policies, such as a parent who is unwilling to expose their children to any risk, no matter how small and no matter what the benefits of the activity might be. "Nothing is more important than my child's safety" is a corner solution in its refusal to admit there might be trade-offs. The term "corner solution" is sometimes used by economists in a more colloquial fashion to refer to these sorts of situations.
Another situation in which a corner solution may arise is when the two goods in question are perfect substitutes. The word "corner" refers to the fact that if one graphs the maximization problem, the optimal point will occur at the "corner" created by the budget constraint and one axis.
In mathematics
A corner solution is an instance where the "best" solution (i.e. maximizing profit, or utility, or whatever value is sought) is achieved based not on the market-efficient maximization of related quantities, but rather based on brute-force boundary conditions. Such a solution lacks mathematical elegance, and most examples are characterized by externally forced conditions (such as "variables x and y cannot be negative") that put the actual local extrema outside the permitted values.
Another technical way to state it is that a corner solution is a solution to a minimization or maximization problem where the non-corner (interior) solution is infeasible, that is, not in the domain; instead, the solution is a corner solution on an axis, where either x or y equals zero. For instance, from the economics example above, if maximal utility would be achieved at quantities (x, y) = (−2, 5), but utility is maximized subject to the constraints x ≥ 0 and y ≥ 0 (one cannot consume a negative quantity of goods), as is usually the case, then the actual solution to the problem is a corner solution with x = 0.
In consumer theory
The more usual solution will lie in the non-zero interior at the point of tangency between the objective function and the constraint. For example, in consumer theory the objective function is the indifference-curve map (the utility function) of the consumer. The budget line is the constraint. In the usual case, constrained utility is maximized on the budget constraint with strictly positive quantities consumed of both goods. For a corner solution, however, utility is maximized at a point on one axis where the budget constraint intersects the highest attainable indifference curve at zero consumption for one good with all income used for the other good. Furthermore, a range of lower prices for the good with initial zero consumption may leave quantity demanded unchanged at zero, rather than increasing it as in the more usual case.
Calculation
To find a corner solution graphically, one shifts the indifference curve in the direction that increases utility. If a tangency point between the indifference curve and the budget line is reached within the domain, the solution is an interior one, not a corner solution. If no tangency point exists within the domain, the utility-maximising indifference curve for the given budget constraint intersects either the x- or the y-axis (depending on whether the slope of the indifference curve is strictly greater or strictly less than the slope of the budget constraint); this is a corner solution.
To solve for a corner solution mathematically, the Lagrangian method must be applied with the non-negativity constraints x ≥ 0 and y ≥ 0, as in the worked example below.
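As a minimal worked example (with assumed linear utility, i.e., perfect substitutes), consider maximizing \(U(x, y) = ax + by\) subject to \(p_x x + p_y y = m\), \(x \ge 0\), \(y \ge 0\). The Lagrangian with non-negativity multipliers is

\[ \mathcal{L} = ax + by + \lambda (m - p_x x - p_y y) + \mu_x x + \mu_y y, \]

with first-order conditions \(a - \lambda p_x + \mu_x = 0\) and \(b - \lambda p_y + \mu_y = 0\). If \(a/p_x > b/p_y\), the solution is the corner \(x^* = m/p_x\), \(y^* = 0\): setting \(\lambda = a/p_x\) and \(\mu_x = 0\) gives \(\mu_y = \lambda p_y - b = (a/p_x) p_y - b > 0\), so by complementary slackness the non-negativity constraint on \(y\) binds, exactly as the Kuhn-Tucker conditions require.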
See also
Indifference curve: Assumptions section
Interior solution (optimization)
References
Mathematical optimization
Utility
Microeconomics
Consumer theory | Corner solution | [
"Mathematics"
] | 1,029 | [
"Mathematical optimization",
"Mathematical analysis"
] |
2,189,975 | https://en.wikipedia.org/wiki/Comfort%20noise | Comfort noise (or comfort tone) is synthetic background noise used in radio and wireless communications to fill the artificial silence in a transmission resulting from voice activity detection or from the audio clarity of modern digital lines.
Some modern telephone systems (such as wireless and VoIP) use voice activity detection (VAD), a form of squelching where low volume levels are ignored by the transmitting device. In digital audio transmissions, this saves bandwidth of the communications channel by transmitting nothing when the source volume is under a certain threshold, leaving only louder sounds (such as the speaker's voice) to be sent. However, improvements in background noise reduction technologies can occasionally result in the complete removal of all noise. Although maximizing call quality is of primary importance, exhaustive removal of noise may not properly simulate the typical behavior of terminals on the PSTN system.
Issues with silence
The result of receiving total silence, especially for a prolonged period, has a number of unwanted effects on the listener, including the following:
the listener may believe that the transmission has been lost, and therefore hang up prematurely
the speech may sound "choppy" (see noise gate) and difficult to understand
the sudden change in sound level can be jarring to the listener.
To counteract these effects, comfort noise is added, usually on the receiving end in wireless or VoIP systems, to fill in the silent portions of transmissions with artificial noise.
Noise
Generated comfort noise is at a low but audible volume level, and can vary based on the average volume level of received signals to minimize jarring transitions.
In many VoIP products, users may control how VAD and comfort noise are configured, or disable the feature entirely.
As part of the RTP audio video profile, RFC 3389 defines a standard for distributing comfort noise information in VoIP systems.
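As a rough receiver-side illustration (this is not the RFC 3389 payload format itself; the frame size, threshold, and smoothing constant are arbitrary assumptions), the following sketch fills VAD-suppressed frames with white noise scaled to a recently observed background level:

import numpy as np

SILENCE_RMS = 100.0  # VAD threshold on 16-bit samples (assumed)

def fill_comfort_noise(frames, rng=None):
    """Replace near-silent 16-bit PCM frames (NumPy arrays) with low-level
    white noise whose RMS tracks the measured background level."""
    rng = rng or np.random.default_rng(0)
    level = SILENCE_RMS
    out = []
    for frame in frames:
        rms = np.sqrt(np.mean(np.square(frame.astype(np.float64))))
        if rms < SILENCE_RMS:
            # Exponentially smooth the level so transitions stay gentle.
            level = 0.9 * level + 0.1 * max(rms, 1.0)
            frame = rng.normal(0.0, level, size=len(frame))
        out.append(np.clip(frame, -32768, 32767).astype(np.int16))
    return out

In a real VoIP stack the transmitter instead sends an occasional RFC 3389 comfort-noise payload describing the noise level (and optionally its spectral shape), and the receiver synthesizes the noise locally from that description.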
Examples
Many radio stations broadcast birdsong, city-traffic or other atmospheric comfort noise during periods of deliberate silence. For example, in the UK, silence is observed on Remembrance Sunday, and London's quiet city ambiance is used. This is to reassure the listener that the station is on-air, but primarily to prevent silence detection systems at transmitters from automatically starting backup tapes of music (designed to be broadcast in the case of transmission link failure).
During the siege of Leningrad, the beat of a metronome was used as comfort noise on the Leningrad radio network, indicating that the network was still functioning.
Related concepts
A similar concept is that of sidetone, the effect of sound that is picked up by a telephone's mouthpiece and introduced (at low level) into the earpiece of the same handset, acting as feedback.
See also
Ambient noise
Talkspurt
Discontinuous transmission (DTX)
Presence (sound recording)
Autonomous sensory meridian response
Sound masking
References
Noise
Radio technology
Voice over IP
Mobile telecommunications | Comfort noise | [
"Technology",
"Engineering"
] | 597 | [
"Information and communications technology",
"Mobile telecommunications",
"Telecommunications engineering",
"Radio technology"
] |
2,189,987 | https://en.wikipedia.org/wiki/Complete%20market | In economics, a complete market (aka Arrow-Debreu market or complete system of markets) is a market with two conditions:
Negligible transaction costs and therefore also perfect information,
Every asset in every possible state of the world has a price.
In such a market, the complete set of possible bets on future states of the world can be constructed with existing assets without friction. Here, goods are state-contingent; that is, a good includes the time and state of the world in which it is consumed. For instance, an umbrella tomorrow if it rains is a distinct good from an umbrella tomorrow if it is clear. The study of complete markets is central to state-preference theory. The theory can be traced to the work of Kenneth Arrow (1964), Gérard Debreu (1959), Arrow & Debreu (1954) and Lionel McKenzie (1954). Arrow and Debreu were awarded the Nobel Memorial Prize in Economics (Arrow in 1972, Debreu in 1983), largely for their work in developing the theory of complete markets and applying it to the problem of general equilibrium.
States of the world
A state of the world is a complete specification of the values of all relevant variables over the relevant time horizon. A state-contingent claim, or state claim, is a contract whose future payoffs depend on future states of the world. For example, suppose you can bet on the outcome of a coin toss. If you guess the outcome correctly, you will win one dollar, and otherwise you will lose one dollar. A bet on heads is a state claim, with payoff of one dollar if heads is the outcome, and payoff of negative one dollar if tails is the outcome. "Heads" and "tails" are the states of the world in this example. A state-contingent claim can be represented as a payoff vector with one element for each state of the world, e.g. (payoff if heads, payoff if tails). So a bet on heads can be represented as ($1, −$1) and a bet on tails can be represented as (−$1, $1). Notice that by placing one bet on heads and one bet on tails, you have a state-contingent claim of ($0, $0); that is, the payoff is the same regardless of which state of the world occurs.
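To make completeness concrete (the risk-free bond below is an assumed extra asset, not part of the example above): write each asset as its payoff vector over the states (heads, tails) and check whether the vectors span the whole state space. The heads bet is \((1, -1)\) and the tails bet is \((-1, 1) = -(1, -1)\), so the two bets are linearly dependent and can only construct claims proportional to \((1, -1)\): that market is incomplete. Adding a risk-free bond paying \((1, 1)\) raises the rank of the payoff matrix to 2, the number of states, and any claim \((c_H, c_T)\) becomes attainable, since

\[ \frac{c_H - c_T}{2}\,(1, -1) + \frac{c_H + c_T}{2}\,(1, 1) = (c_H, c_T). \]

In general, a market with \(S\) states is complete exactly when the payoff matrix of its assets has rank \(S\).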
Dynamically-complete market
In order for a market to be complete, it must be possible to instantaneously enter into any position regarding any future state of the market.
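A standard one-period binomial sketch (all symbols here are assumed for illustration) shows the idea: a stock worth \(S_u\) in the up state and \(S_d\) in the down state, together with a bond paying \(1 + r\) per unit invested, replicates any claim \((c_u, c_d)\) by solving

\[ \theta S_u + \beta (1 + r) = c_u, \qquad \theta S_d + \beta (1 + r) = c_d, \]

giving \(\theta = (c_u - c_d)/(S_u - S_d)\) and \(\beta = (c_u - \theta S_u)/(1 + r)\). Rebalancing such a two-asset portfolio at every node of a multi-period tree is what makes a market with only a few long-lived assets dynamically complete.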
See also
Incomplete markets
References
Further reading
Mark D. Flood (1991), "An Introduction to Complete Markets", Federal Reserve Bank of St. Louis, Review, March/April 1991
Mathematical finance | Complete market | [
"Mathematics"
] | 558 | [
"Applied mathematics",
"Mathematical finance"
] |
2,189,989 | https://en.wikipedia.org/wiki/Sodium%20pareth%20sulfate | Sodium alketh sulfate, known prior to being renamed in 2021 as sodium pareth sulfate, and also as sodium polyoxyethylene alkyl ether sulfate, is a surfactant found in some detergent products such as hand or body washes, but not as commonly as other chemicals such as sodium laureth sulfate (SLES). It is the sodium salt of a sulfated polyethylene glycol ether.
In February 2021, the Personal Care Products Council announced a revision of INCI nomenclature that included replacement of the term "pareth" with "alketh" in all INCI names, affecting hundreds of INCI names.
It is produced similarly to SLES starting from fatty alcohols with 10 to 16 carbon atoms.
External links
Ethers
Organic sodium salts
Anionic surfactants
Sulfate esters | Sodium pareth sulfate | [
"Chemistry"
] | 172 | [
"Functional groups",
"Salts",
"Organic compounds",
"Organic sodium salts",
"Ethers"
] |
2,190,069 | https://en.wikipedia.org/wiki/Plasmodium%20vivax | Plasmodium vivax is a protozoal parasite and a human pathogen. This parasite is the most frequent and widely distributed cause of recurring malaria. Although it is less virulent than Plasmodium falciparum, the deadliest of the five human malaria parasites, P. vivax malaria infections can lead to severe disease and death, often due to splenomegaly (a pathologically enlarged spleen). P. vivax is carried by the female Anopheles mosquito; the males do not bite.
Health
Epidemiology
Plasmodium vivax is found mainly in Asia, Latin America, and in some parts of Africa. P. vivax is believed to have originated in Asia, but recent studies have shown that wild chimpanzees and gorillas throughout central Africa are endemically infected with parasites that are closely related to human P. vivax. These findings indicate that human P. vivax is of African origin. Plasmodium vivax accounts for 65% of malaria cases in Asia and South America. Unlike Plasmodium falciparum, Plasmodium vivax is capable of undergoing sporogonic development in the mosquito at lower temperatures. It has been estimated that 2.5 billion people are at risk of infection with this organism.
Although the Americas contribute 22% of the global area at risk, high endemic areas are generally sparsely populated and the region contributes only 6% to the total population at risk. In Africa, the widespread lack of the Duffy antigen in the population has ensured that stable transmission is constrained to Madagascar and parts of the Horn of Africa. It contributes 3.5% of the global population at risk. Central Asia is responsible for 82% of the global population at risk with high endemic areas coinciding with dense populations particularly in India and Myanmar. South East Asia has areas of high endemicity in Indonesia and Papua New Guinea and overall contributes 9% of the global population at risk.
P. vivax is carried by at least 71 mosquito species. Many vivax vectors thrive in temperate climates—as far north as Finland. Some prefer to bite outdoors or during the daytime, hampering the effectiveness of indoor insecticide and bed nets. Several key vector species have yet to be grown in the lab for closer study, and insecticide resistance is unquantified.
Clinical presentation
Pathogenesis results from the rupture of infected red blood cells, leading to fever. Infected red blood cells may also stick to each other and walls of capillaries. Vessels plug up and deprive tissues of oxygen. Infection may also cause the spleen to enlarge.
Unlike P. falciparum, P. vivax can populate the bloodstream, even before a patient shows symptoms, with sexual-stage parasites—the form ingested by mosquitoes before biting the next victim. Consequently, prompt treatment of symptomatic patients does not necessarily help stop an outbreak, as it does with falciparum malaria, in which fevers occur as sexual stages develop. Even when symptoms appear, because the disease is usually not immediately fatal, the parasite continues to multiply.
Plasmodium vivax can cause a more unusual form of malaria with atypical symptoms. It has been known to debut with hiccups, loss of taste, lack of fever, pain while swallowing, cough and urinary discomfort.
The parasite can lie dormant in the liver for days to years, causing no symptoms and remaining undetectable in blood tests. It does so by forming hypnozoites (a name derived from "sleeping organisms"), small stages that nestle inside individual liver cells. Hypnozoites allow the parasite to survive in more temperate zones, where mosquitoes bite during only part of the year.
A single infectious bite can trigger six or more relapses a year, leaving patients more vulnerable to other diseases. Other infectious diseases, including falciparum malaria, appear to trigger relapses.
Serious complications
Serious complications of malaria include dormant liver-stage parasites and organ failure, such as acute kidney failure. Further complications include impairment of consciousness, neurological abnormalities, hypoglycemia, low blood pressure caused by cardiovascular collapse, clinical jaundice, other vital-organ dysfunction, and coagulation defects. The most serious complication is, ultimately, death.
Prevention
The main way to prevent malaria is vector control, which takes three main forms: (1) insecticide-treated mosquito nets, (2) indoor residual spraying, and (3) antimalarial drugs. Long-lasting insecticidal nets (LLINs) are the preferred method of control because they are the most cost-effective, and the WHO is currently strategizing how to ensure that nets are properly maintained to protect people at risk. The second option, indoor residual spraying, has been proven effective if at least 80% of homes are sprayed, but it remains effective for only 3–6 months. A drawback to both methods is that mosquito resistance to these insecticides has risen, and national malaria control efforts are undergoing rapid changes to ensure that people are given the most effective methods of vector control. Lastly, antimalarial drugs can be used to prevent infection from developing into clinical disease, although resistance to antimalarial medicines has also increased.
In 2015 the World Health Organization (WHO) drew up a plan to address vivax malaria, as part of their Global Technical Strategy for Malaria.
Diagnosis
P. vivax and P. ovale samples that have been sitting in EDTA for more than 30 minutes before the blood film is made will look very similar in appearance to P. malariae,[source needed] which is an important reason to warn the laboratory immediately when the blood sample is drawn, so that they can process the sample as soon as it arrives. Blood films are preferably made within 30 minutes of the blood draw, and must certainly be made within an hour. Diagnosis can also be made with rapid antibody-based strip tests.
Treatment
Chloroquine remains the treatment of choice for vivax malaria, except in Indonesia's Irian Jaya (Western New Guinea) region and the geographically contiguous Papua New Guinea, where chloroquine resistance is common (up to 20% resistance). Chloroquine resistance is an increasing problem in other parts of the world, such as Korea and India.
When chloroquine resistance is common or when chloroquine is contraindicated, then artesunate is the drug of choice, except in the U.S., where it is not approved for use. Where an artemisinin-based combination therapy has been adopted as the first-line treatment for P. falciparum malaria, it may also be used for P. vivax malaria in combination with primaquine for radical cure. An exception is artesunate plus sulfadoxine-pyrimethamine (AS+SP), which is not effective against P. vivax in many places. Mefloquine is a good alternative and in some countries is more readily available. Atovaquone-proguanil is an effective alternative in patients unable to tolerate chloroquine. Quinine may be used to treat vivax malaria but is associated with inferior outcomes.
32–100% of patients will relapse following successful treatment of P. vivax infection if a radical cure (inactivation of liver stages) is not given.
Eradication of the liver stages is achieved by giving primaquine but patients with glucose-6-phosphate dehydrogenase deficiency are at risk for haemolysis. G6PD-testing is therefore very important, both in endemic areas and in travelers. At least a 14-day course of primaquine is required for the radical treatment of P. vivax malaria.
The idea that primaquine kills parasites in the liver is the traditional assumption. However, it has been suggested that primaquine might, to a currently unknown extent, also inactivate noncirculating, extrahepatic merozoites (clarity in this regard is expected to be forthcoming soon).
Tafenoquine
In 2013 a Phase IIb trial was completed that studied a single-dose alternative drug named tafenoquine. It is an 8-aminoquinoline, of the same family as primaquine, developed by researchers at the Walter Reed Army Institute of Research in the 1970s and tested in safety trials. It languished, however, until the push for malaria elimination sparked new interest in primaquine alternatives.
Among patients who received a 600-mg dose, 91% were relapse-free after 6 months. Among patients who received primaquine, 24% relapsed within 6 months. "The data are absolutely spectacular," Wells says. Ideally, he says, researchers will be able to combine the safety data from the Army's earlier trials with the new study in a submission to the U.S. Food and Drug Administration for approval. Like primaquine, tafenoquine causes hemolysis in people who are G6PD deficient.
In 2013 researchers produced cultured human "microlivers" that supported liver stages of both P. falciparum and P. vivax and may have also created hypnozoites.
Eradication
Mass-treating populations with primaquine can kill the hypnozoites, although those with G6PD deficiency must be exempted. However, the standard regimen requires a daily pill for 14 days across an asymptomatic population.
Korea
P. vivax is the only indigenous malaria parasite on the Korean peninsula. In the years following the Korean War (1950–53), malaria eradication campaigns successfully reduced the number of new cases of the disease in North Korea and South Korea. In 1979, World Health Organization declared the Korean peninsula vivax malaria-free, but the disease unexpectedly re-emerged in the late 1990s and persists today. Several factors contributed to the re-emergence of the disease, including a reduced emphasis on malaria control after 1979, floods and famine in North Korea, the emergence of drug resistance, and possibly global warming. Most cases are identified along the Korean Demilitarized Zone. As such, vivax malaria offers the two Koreas an opportunity to work together on an important health problem that affects both countries.
Drug targets
Given that drugs targeting the various life stages of the parasite can have undesirable side effects, it is desirable to design drug molecules targeting specific proteins/enzymes that are essential for the parasite's survival or that can compromise the fitness of the organism. Enzymes in the purine salvage pathway have been favorite targets to this end. However, given the high degree of conservation in purine metabolism between the parasite and its host, there could be potential cross-reactivity, making it difficult to design selective drugs against the parasite. To overcome this, recent efforts have focused on deducing the function of orphan hypothetical proteins whose functions are unknown. Since many of these hypothetical proteins have roles in secondary metabolism, targeting them would be beneficial from two perspectives: specificity, and reducing the virulence of the pathogen with no or minimal undesirable cross-reactivity.
Biology
Life cycle
Like all malaria parasites, P. vivax has a complex life cycle. It infects a definitive insect host, where sexual reproduction occurs, and an intermediate vertebrate host, where asexual amplification occurs. In P. vivax, the definitive hosts are Anopheles mosquitoes (also known as the vector), while humans are the intermediate asexual hosts. During its life cycle, P. vivax assumes various different physical forms (see below).
Asexual forms:
Sporozoite: Transfers infection from mosquito to human
Immature trophozoites (ring or signet-ring shaped), about one-third of the diameter of an RBC.
Mature trophozoites: Very irregular and delicate (described as amoeboid); many pseudopodial processes seen. The presence of fine grains of brown pigment (malarial pigment) or hematin is probably derived from the haemoglobin of the infected red blood cell.
Schizonts (also called meronts): As large as a normal red cell; thus the parasitized corpuscle becomes distended and larger than normal. There are about sixteen merozoites.
Sexual forms:
Gametocytes: Round. P. vivax gametocytes are commonly found in human peripheral blood at about the end of the first week of parasitemia.
Gametes: Formed from gametocytes in mosquitoes.
Zygote: Formed from combination of gametes
Oocyst: Contains zygote, develops into sporozoites
Human infection
P. vivax human infection occurs when an infected mosquito feeds on a human. During feeding, the mosquito injects saliva, along with sporozoites, through the skin. A proportion of these sporozoites reach the liver. There they enter hepatic cells, on which they feed, and reproduce asexually, as described in the next section. This process gives rise to thousands of merozoites (plasmodial daughter cells) in the body.
The incubation period of human infection usually ranges from ten to seventeen days and sometimes up to a year. Persistent liver stages allow relapse up to five years after the elimination of red blood cell stages and clinical cure.
Liver stage
The P. vivax sporozoite enters a hepatocyte and begins its exoerythrocytic schizogony stage. This is characterized by multiple rounds of nuclear division without cellular segmentation. After several nuclear divisions, the parasite cell will segment, and merozoites are formed.
There are situations where some of the sporozoites do not immediately start to grow and divide after entering the hepatocyte, but remain in a dormant, hypnozoite stage for weeks or months. The duration of latency is thought to be variable from one hypnozoite to another and the factors that will eventually trigger growth are not known; this might explain how a single infection can be responsible for a series of waves of parasitaemia or "relapses". It has been assumed that different strains of P. vivax have their own characteristic relapse pattern and timing.
However, such recurrent parasitemia is probably over-attributed to hypnozoite activation. Two newly recognized, non-hypnozoite sources probably contributing to recurrent peripheral P. vivax parasitemia are erythrocytic forms in the bone marrow and the spleen. Between 2018 and 2021, it was reported that vast numbers of non-circulating, non-hypnozoite parasites occur unobtrusively in the tissues of P. vivax-infected people, with only a small proportion of the total parasite biomass present in the peripheral bloodstream. These findings support a view, proposed in 2011 but largely discounted by malariologists until 2018, that an unknown percentage of P. vivax recurrences are recrudescences (arising from non-circulating or sequestered merozoites) rather than relapses (which have a hypnozoite source). The recent bone marrow and spleen findings did not give rise to this theory; they merely confirm its likely validity.
Erythrocytic cycle
P. vivax preferentially penetrates young red blood cells (reticulocytes), unlike Plasmodium falciparum, which can invade erythrocytes of all ages. To achieve this, merozoites have two proteins at their apical pole (PvRBP-1 and PvRBP-2). The parasite uses the Duffy blood group antigen (Fy6) to penetrate red blood cells. This antigen does not occur in the majority of humans in West Africa [phenotype Fy(a-b-)]; as a result, P. vivax occurs less frequently in West Africa.
The parasitised red blood cell is up to twice as large as a normal red cell and Schüffner's dots (also known as Schüffner's stippling or Schüffner's granules) are seen on the infected cell's surface. Schüffner's dots have a spotted appearance, varying in color from light pink to red, to red-yellow, as coloured with Romanovsky stains. The parasite within it is often wildly irregular in shape (described as "amoeboid"). Schizonts of P. vivax have up to twenty merozoites within them. It is rare to see cells with more than one parasite within them. Merozoites will only attach to immature blood cells (reticulocytes) and therefore it is unusual to see more than 3% of all circulating erythrocytes parasitised.
Unusual erythrocytic forms were detected in a few cases of an outbreak in Brazil.
Mosquito stage
The parasite's life cycle in mosquitoes includes all stages of sexual reproduction:
Infection and gametogenesis
Microgametes
Macrogametes
Fertilization
Ookinete
Oocyst
Sporogony
Mosquito infection and gamete formation
When a female Anopheles mosquito bites an infected person, gametocytes and other stages of the parasite are transferred to the mosquito's stomach.
Gametocytes ultimately develop into gametes, a process known as gametogony.
Microgametocytes become very active, and their nuclei undergo fission (amitosis) to give 6–8 daughter nuclei each, which become arranged at the periphery. The cytoplasm develops long, thin flagella-like projections, and a nucleus then enters each of these extensions. These cytoplasmic extensions later break off as mature male gametes (microgametes). This process of formation of flagella-like microgametes, or male gametes, is known as exflagellation.
Macrogametocytes show very little change. They develop a cone of reception at one side and mature into macrogametes (female gametes).
Fertilization
Male gametes move actively in the stomach of the mosquito in search of female gametes, which they enter through the cone of reception. The complete fusion of the two gametes results in the formation of a zygote. This fusion of two dissimilar gametes is known as anisogamy.
The zygote remains inactive for some time, but it soon elongates, becomes vermiform (worm-like), and becomes motile. It is now known as an ookinete. The pointed end of the ookinete penetrates the stomach wall, and the parasite comes to lie beneath the stomach's outer epithelial layer. There the zygote becomes spherical and develops a cyst wall around itself, derived partly from the stomach tissues and partly produced by the zygote itself. At this stage, the parasite is known as an oocyst. The oocyst absorbs nourishment and grows in size. Oocysts protrude from the surface of the stomach, giving it a blistered appearance; in a highly infected mosquito, as many as 1,000 oocysts may be seen.
Sporogony
The oocyst nucleus divides repeatedly to form a large number of daughter nuclei. At the same time, the cytoplasm develops large vacuoles and forms numerous cytoplasmic masses. These cytoplasmic masses then elongate, and a daughter nucleus migrates into each mass. The resulting sickle-shaped bodies are known as sporozoites. This phase of asexual multiplication is known as sporogony and is completed in about 10–21 days. The oocyst then bursts, and sporozoites are released into the body cavity of the mosquito, eventually reaching its salivary glands via the hemolymph. The mosquito now becomes infectious. The salivary glands of a single infected mosquito may contain as many as 200,000 sporozoites.
When the mosquito bites a healthy person, thousands of sporozoites are injected into the blood along with the saliva and the cycle starts again.
Taxonomy
P. vivax can be divided into two clades: one that appears to have origins in the Old World and a second that originated in the New World. The distinction can be made based on the structure of the A and S forms of the rRNA. A rearrangement of these genes appears to have occurred in the New World strains. It appears that a gene conversion occurred in an Old World strain and this strain gave rise to the New World strains. The timing of this event has yet to be established.
At present, both types of P. vivax circulate in the Americas. The monkey parasite – Plasmodium simium – is related to the Old World strains rather than to the New World strains.
A specific name – Plasmodium collinsi – has been proposed for the New World strains, but this suggestion has not been accepted to date.
Miscellaneous
It has been suggested that P. vivax has horizontally acquired genetic material from humans.
Plasmodium vivax has no characteristic Gram stain reaction (negative vs. positive) and may appear as either.
There is evidence that P. vivax is itself infected by viruses.
Therapeutic use
P. vivax was used between 1917 and the 1940s for malariotherapy: deliberately inducing very high fevers to combat certain diseases, such as tertiary syphilis. Julius Wagner-Jauregg, who introduced the technique in 1917, received the 1927 Nobel Prize in Physiology or Medicine for this discovery. However, the technique was dangerous, killing about 15% of patients, so it is no longer in use.
See also
List of parasites (human)
Apicomplexan life cycle
Gametocyte
Host (biology)
References
External links
Malaria Atlas Project
vivax
Parasites of humans
Malaria
Protozoal diseases | Plasmodium vivax | [
"Biology"
] | 4,654 | [
"Parasites of humans",
"Humans and other species"
] |
2,190,139 | https://en.wikipedia.org/wiki/Bearer-Independent%20Call%20Control | The Bearer-Independent Call Control (BICC) is a signaling protocol based on N-ISUP that is used for supporting narrowband Integrated Services Digital Network (ISDN) service over a broadband backbone network. BICC is designed to interwork with existing transport technologies. BICC is specified in ITU-T recommendation Q.1901.
BICC signaling messages are nearly identical to those in ISDN User Part (ISUP); the main difference being that the narrowband circuit identification code (CIC) has been modified. The BICC architecture consists of interconnected serving nodes that provide the call service function and the bearer control function. The call service function uses BICC signaling for call setup and may also interwork with ISUP. The bearer control function receives directives from the call service function via BICC Bearer Control Protocol (ITU-T recommendation Q.1950) and is responsible for setup and teardown of bearer paths on a set of physical transport links. Transport links are most commonly Asynchronous Transfer Mode (ATM) or Internet Protocol (IP).
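To make the division of labour concrete, here is a toy sketch of the call/bearer separation described above; all class and method names are hypothetical illustrations, not the actual Q.1901/Q.1950 message sets or primitives.

```python
class BearerControlFunction:
    """Toy stand-in for the bearer control function: it owns a transport
    technology (e.g. ATM or IP) and sets up/tears down bearer paths."""

    def __init__(self, transport: str):
        self.transport = transport
        self.paths: dict[str, str] = {}

    def setup_path(self, call_id: str, remote_node: str) -> str:
        # In real BICC this step would be driven by Q.1950 directives.
        path = f"{self.transport} bearer to {remote_node}"
        self.paths[call_id] = path
        return path

    def release_path(self, call_id: str) -> None:
        self.paths.pop(call_id, None)


class CallServiceFunction:
    """Toy stand-in for the call service function: it handles call
    signaling (ISUP-like messages with peer serving nodes) and directs
    the bearer control function, staying independent of the transport."""

    def __init__(self, bearer: BearerControlFunction):
        self.bearer = bearer

    def setup_call(self, call_id: str, remote_node: str) -> str:
        # Exchange of BICC call-control messages would happen here.
        return self.bearer.setup_path(call_id, remote_node)


# The same call logic runs unchanged over different transports, which is
# the sense in which the call control is "bearer-independent":
for transport in ("ATM", "IP"):
    csf = CallServiceFunction(BearerControlFunction(transport))
    print(csf.setup_call("call-1", "peer-serving-node"))
```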
According to the ITU, the completion of the BICC protocols is a historic step toward broadband multimedia networks because it enables the seamless migration from circuit-switched TDM networks to high-capacity broadband multimedia networks.
The Third-Generation Partnership Project (3GPP) has included BICC CS 2 in the Universal Mobile Telecommunications System (UMTS) release 4.
References
ITU-T Recommendation Q.1901 : Bearer Independent Call Control protocol
ITU-T Recommendation Q.1902.1 : Bearer Independent Call Control protocol (Capability Set 2): Functional description
ITU-T Recommendation Q.1950 : Bearer independent call bearer control protocol
ITU-T Press Release : Agreement on BICC protocols: a historic step for evolution towards next-generation server-based networks
3GPP TS 29.205 : Application of Q.1900 series to Bearer Independent CS Network architecture; Stage 3
Network protocols | Bearer-Independent Call Control | [
"Technology"
] | 398 | [
"Computing stubs",
"Computer network stubs"
] |
2,190,194 | https://en.wikipedia.org/wiki/Architecture%20of%20Hong%20Kong | The architecture of Hong Kong features great emphasis on contemporary architecture, especially Modernism, Postmodernism, Functionalism, etc. Due to the lack of available land, few historical buildings remain in the urban areas of Hong Kong. Therefore, Hong Kong has become a centre for modern architecture as older buildings are cleared away to make space for newer, larger buildings. It has more buildings above 35m (or 100m) and more skyscrapers above 150m than any other city. Hong Kong's skyline is often considered to be the best in the world, with the mountains and Victoria Harbour complementing the skyscrapers.
Pre-sinicisation architecture
Hong Kong was already inhabited in the days of the Nanyue kingdom, and the Baiyue peoples of the area demonstrated some sophistication in architecture. An example is the Lei Cheng Uk Han Tomb.
Local and Lingnan architecture
Prior to the British settlement of Hong Kong in 1841, architecture in Hong Kong was predominantly Cantonese. With the majority of the population being fishers at the mercy of typhoons and pirates, numerous Tin Hau temples were dedicated to their patron Goddess Mazu. Likewise farmers built fortified villages to defend themselves from bandits.
After the British established the entrepôt of Victoria City (now Central and Western District on Hong Kong Island), the local population increased substantially, and as a result Tong Lau (tenements common in Southern China, especially Lingnan) began to appear. These were three-to-four-storey buildings, tightly packed in city blocks, combining Southern Chinese and European architectural elements. The ground floors were typically shops, with apartments and small balconies upstairs. These buildings had stairs but no elevators, and sometimes no toilet facilities. Tong Lau remained the mainstay of Hong Kong architecture until at least World War II; a number of these buildings survive to this day, albeit often in a derelict state.
Hong Kong walled villages
Pang uk
Classical Lingnan architecture in Hong Kong
Tong laus in Hong Kong
British architecture
Meanwhile, the British introduced Victorian and Edwardian architectural styles from the mid-19th century onwards. Notable surviving examples include the Legislative Council Building, the Central Police Station and Murray House. One building that has since been demolished was the Hong Kong Club Building; it was built atop a smaller structure designed in Italian Renaissance Revival style in 1897. The building was the subject of a bitter heritage conservation struggle in the late 1970s, which ultimately failed to save the building.
The first buildings in Hong Kong to be classified as high-rises were constructed between June 1904 and December 1905. The development consisted of five major buildings, each five to six storeys high. The structures were raised by Hongkong Land under Catchick Paul Chater and James Johnstone Keswick.
Most high-rise buildings built afterwards were for business purposes. The first true skyscraper in Hong Kong was built for HongkongBank in 1935 and was also the first building in Hong Kong to have air conditioning; it has since been replaced by the HSBC Main Building of 1985. Likewise, the few examples of 1930s Streamline Moderne and Bauhaus architecture in Hong Kong, such as the Central Market and the Wan Chai Market, face imminent demolition despite protests from heritage conservation groups.
In the residential sector, multi-story buildings did not appear until the Buildings Ordinance 1955 lifted the height limit of residential buildings. This change was necessitated by the massive influx of refugees into Hong Kong after the conclusion of the Chinese Communist Revolution in 1949, and the subsequent Shek Kip Mei slum fire in 1953.
Public housing estates, originally seven-storeys high with notoriously cramped conditions, public bathrooms and no kitchens, were hastily built to accommodate the homeless; meanwhile private apartments, still tightly packed into city blocks like the Tong Lau of old, had grown to over 20 stories high by the mid-1960s.
The private housing estate began in 1965 with Mei Foo Sun Chuen. The first major private construction came from Swire Properties in 1972 with the development of the middle-class estate of Taikoo Shing. With little space wasted on statues or landmarks that consumed unnecessary real estate, Taikoo Shing's design became the new standard.
Gallery
Contemporary architecture
In the late 1990s, the primary demand for high-end buildings was in and around Central. The buildings of Central comprise the skyline along the coast of the Victoria Harbour, a famous tourist attraction in Hong Kong. But until Kai Tak Airport closed in 1998, strict height restrictions were in force in Kowloon so that aeroplanes could come in to land. These restrictions have now been lifted and many new skyscrapers in Kowloon have been constructed, including the International Commerce Centre at the West Kowloon reclamation, which has been the tallest building in Hong Kong since its completion in 2010.
Many commercial and residential towers built in the past two decades are among the tallest in the world, including Highcliff, The Arch, and The Harbourside, and more towers are under construction, such as One Island East. At present, Hong Kong has the world's biggest skyline, with a total of 7,681 skyscrapers, placing it ahead of even New York City, despite New York being larger in area. Most of these were built in the past two decades.
Many skyscrapers in Hong Kong feature holes in them called "dragon gates". Local folklore claims that such holes are for dragons to pass through, though some such holes are created to fulfil air ventilation requirements.
Hong Kong's best-known building is probably I. M. Pei's Bank of China Tower. The building attracted heated controversy from the moment its design was released to the public, and the controversy continued for years after the building's completion in 1990. The building was said to cast negative feng shui energy into the heart of Hong Kong because of its sharp angles. One rumour even went so far as to say that the negative energy was concentrated on Government House as a Chinese plot to foil any decisions taken there. The two white aerials on top of the building were also deemed inauspicious, as two sticks of incense are burned for the dead.
One of the largest construction projects in Hong Kong has been the new Hong Kong International Airport on Chek Lap Kok near Lantau, which was the most extensive single civil engineering project ever undertaken. Designed by Sir Norman Foster, the huge land reclamation project is linked to the centre of Hong Kong by the Lantau Link, which features three new major bridges: the world's sixth largest suspension bridge, Tsing Ma, which was built in 1997, connecting the islands of Tsing Yi and Ma Wan; the world's longest cable-stayed bridge carrying both road and railway traffic, Kap Shui Mun, which links Ma Wan and Lantau; and the world's first major 4-span cable-stayed bridge, Ting Kau, which connects Tsing Yi and the mainland New Territories.
Recent trends
In recent years, new architecture in Hong Kong has tended to focus on providing more public green space, combining environmentally friendly concepts with cultural exchange and aiming to improve the quality of life of the city's people. Besides green space, old, unused spaces have been redeveloped into cultural hubs that nurture creativity and innovation. Architects have also explored more energy-efficient designs.
West Kowloon Cultural District
Located at the headland of Kowloon, the West Kowloon Waterfront Promenade is a quiet haven in the busy city, with a boardwalk surrounded on all sides by Hong Kong's iconic waterfront scenes. The promenade includes an area for cultural exchange, where live music is played during the weekends, and cycling and jogging paths give residents a harbour view while they exercise.
PMQ
PMQ is a design hub that utilises old, unused spaces to create platforms for a variety of start-ups to showcase their innovations and products to the public. After two years of renovation, the former police married quarters on Aberdeen Street, Central, reopened as PMQ.
Although the studio spaces are small (about 450 sq ft), the hub is a venue well suited to fostering a community. Spacious open-air corridors in front of each unit are used for exhibitions and pop-up events, and there are a co-working space and units for overseas designers-in-residence. PMQ's entrepreneurial focus has been presented as a strong opportunity for young Hong Kong designers to become successful, since the hierarchical nature of most local companies can stifle innovation.
Hong Kong Science Park
The Hong Kong Science Park is a project set up to promote high-end technology and the exchange of innovative ideas. The development is a key infrastructure project supporting Hong Kong's advancement as a regional hub for high-tech innovation. The Hong Kong Science Park is located at Tolo Harbour and comprises three phases. The Phase I site is divided into three zones: Core, Corporate and Campus. The Core Zone is centrally located and consists of communal and recreational facilities, meeting and conference rooms, exhibition halls, shops, dining areas, and office space for small companies. The Corporate Zone is located along the waterfront and is reserved for large corporations that wish to operate in a building solely owned by them. The Campus Zone is situated by the Tolo Highway and is designed to accommodate medium-sized companies in multi-tenant buildings.
Gallery
See also
List of tallest buildings in Hong Kong
Housing in Hong Kong
Heritage conservation in Hong Kong
Hong Kong Institute of Architects
Kowloon Walled City
List of buildings and structures in Hong Kong
List of cities with most skyscrapers
List of the oldest buildings and structures in Hong Kong
List of lost buildings and structures in Hong Kong
References
External links
Dr Howard M Scott "Colonial Architecture in Hong Kong"
Culture of Hong Kong
Hong Kong | Architecture of Hong Kong | [
"Engineering"
] | 1,992 | [
"Architecture by city",
"Architecture"
] |
2,190,535 | https://en.wikipedia.org/wiki/Matrix-assisted%20laser%20desorption/ionization | In mass spectrometry, matrix-assisted laser desorption/ionization (MALDI) is an ionization technique that uses a laser energy-absorbing matrix to create ions from large molecules with minimal fragmentation. It has been applied to the analysis of biomolecules (biopolymers such as DNA, proteins, peptides and carbohydrates) and various organic molecules (such as polymers, dendrimers and other macromolecules), which tend to be fragile and fragment when ionized by more conventional ionization methods. It is similar in character to electrospray ionization (ESI) in that both techniques are relatively soft (low fragmentation) ways of obtaining ions of large molecules in the gas phase, though MALDI typically produces far fewer multi-charged ions.
MALDI methodology is a three-step process. First, the sample is mixed with a suitable matrix material and applied to a metal plate. Second, a pulsed laser irradiates the sample, triggering ablation and desorption of the sample and matrix material. Finally, the analyte molecules are ionized by being protonated or deprotonated in the hot plume of ablated gases, and then they can be accelerated into whichever mass spectrometer is used to analyse them.
History
The term matrix-assisted laser desorption ionization (MALDI) was coined in 1985 by Franz Hillenkamp, Michael Karas and their colleagues. These researchers found that the amino acid alanine could be ionized more easily if it was mixed with the amino acid tryptophan and irradiated with a pulsed 266 nm laser. The tryptophan was absorbing the laser energy and helping to ionize the non-absorbing alanine. Peptides up to the 2843 Da peptide melittin could be ionized when mixed with this kind of "matrix". The breakthrough for large molecule laser desorption ionization came in 1987 when Koichi Tanaka of Shimadzu Corporation and his co-workers used what they called the "ultra fine metal plus liquid matrix method" that combined 30 nm cobalt particles in glycerol with a 337 nm nitrogen laser for ionization. Using this laser and matrix combination, Tanaka was able to ionize biomolecules as large as the 34,472 Da protein carboxypeptidase-A. Tanaka received one-quarter of the 2002 Nobel Prize in Chemistry for demonstrating that, with the proper combination of laser wavelength and matrix, a protein can be ionized. Karas and Hillenkamp were subsequently able to ionize the 67 kDa protein albumin using a nicotinic acid matrix and a 266 nm laser. Further improvements were realized through the use of a 355 nm laser and the cinnamic acid derivatives ferulic acid, caffeic acid and sinapinic acid as the matrix. The availability of small and relatively inexpensive nitrogen lasers operating at 337 nm wavelength and the first commercial instruments introduced in the early 1990s brought MALDI to an increasing number of researchers. Today, mostly organic matrices are used for MALDI mass spectrometry.
Matrix
The matrix consists of crystallized molecules, of which the three most commonly used are sinapinic acid, α-cyano-4-hydroxycinnamic acid (α-CHCA, alpha-cyano or alpha-matrix) and 2,5-dihydroxybenzoic acid (DHB). A solution of one of these molecules is made, often in a mixture of highly purified water and an organic solvent such as acetonitrile (ACN) or ethanol. A counter-ion source such as trifluoroacetic acid (TFA) is usually added to generate the [M+H]+ ions. A good example of a matrix solution would be 20 mg/mL sinapinic acid in ACN:water:TFA (50:50:0.1).
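For concreteness, the sketch below turns the example recipe into the simple arithmetic of preparing a batch; the chosen batch volume and the reading of 50:50:0.1 as parts by volume are illustrative assumptions.

```python
def matrix_solution(volume_ml: float, conc_mg_per_ml: float = 20.0) -> dict:
    """Amounts for a sinapinic acid matrix solution in ACN:water:TFA
    (50:50:0.1, read here as parts by volume), per the recipe above."""
    parts = 50 + 50 + 0.1
    return {
        "sinapinic acid (mg)": conc_mg_per_ml * volume_ml,
        "acetonitrile (mL)": volume_ml * 50 / parts,
        "water (mL)": volume_ml * 50 / parts,
        "TFA (mL)": volume_ml * 0.1 / parts,
    }

print(matrix_solution(1.0))  # a 1 mL batch of the example matrix solution
```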
The identification of suitable matrix compounds is determined to some extent by trial and error, but they are chosen based on some specific molecular design considerations. They are of fairly low molecular weight (to allow easy vaporization), but large enough (with a low enough vapor pressure) not to evaporate during sample preparation or while standing in the mass spectrometer. They are often acidic, and therefore act as a proton source to encourage ionization of the analyte; basic matrices have also been reported. They have a strong optical absorption in either the UV or IR range, so that they rapidly and efficiently absorb the laser irradiation; this efficiency is commonly associated with chemical structures incorporating several conjugated double bonds, as seen in the structure of cinnamic acid. They are functionalized with polar groups, allowing their use in aqueous solutions, and they typically contain a chromophore.
The matrix solution is mixed with the analyte (e.g. protein-sample). A mixture of water and organic solvent allows both hydrophobic and water-soluble (hydrophilic) molecules to dissolve into the solution. This solution is spotted onto a MALDI plate (usually a metal plate designed for this purpose). The solvents vaporize, leaving only the recrystallized matrix, but now with analyte molecules embedded into MALDI crystals. The matrix and the analyte are said to be co-crystallized. Co-crystallization is a key issue in selecting a proper matrix to obtain a good quality mass spectrum of the analyte of interest.
In analysis of biological systems, inorganic salts, which are also part of protein extracts, interfere with the ionization process. The salts can be removed by solid phase extraction or by washing the dried-droplet MALDI spots with cold water. Both methods can also remove other substances from the sample. The matrix-protein mixture is not homogeneous because the polarity difference leads to a separation of the two substances during co-crystallization. The spot diameter of the target is much larger than that of the laser, which makes it necessary to make many laser shots at different places of the target, to get the statistical average of the substance concentration within the target spot.
The matrix can be used to tune the instrument to ionize the sample in different ways. As mentioned above, acid-base-like reactions are often utilized to ionize the sample; however, molecules with conjugated pi systems, such as naphthalene-like compounds, can also serve as an electron acceptor and thus as a matrix for MALDI/TOF. This is particularly useful in studying molecules that also possess conjugated pi systems. The most widely used application for these matrices is studying porphyrin-like compounds such as chlorophyll. These matrices have been shown to give better ionization patterns that do not result in odd fragmentation or complete loss of side chains. It has also been suggested that conjugated porphyrin-like molecules can serve as a matrix and cleave themselves, eliminating the need for a separate matrix compound.
Instrumentation
There are several variations of the MALDI technology, and comparable instruments are today produced for very different purposes, from academic and analytical to industrial and high-throughput. The mass spectrometry field has expanded into requiring ultrahigh-resolution mass spectrometry, such as FT-ICR instruments, as well as more high-throughput instruments. As many MALDI MS instruments can be bought with an interchangeable ionization source (electrospray ionization, MALDI, atmospheric pressure ionization, etc.), the technologies often overlap, and in many cases any soft ionization method could potentially be used. For more variations of soft ionization methods see: Soft laser desorption or Ion source.
Laser
MALDI techniques typically employ the use of UV lasers such as nitrogen lasers (337 nm) and frequency-tripled and quadrupled Nd:YAG lasers (355 nm and 266 nm respectively).
Infrared laser wavelengths used for infrared MALDI include the 2.94 μm Er:YAG laser, mid-IR optical parametric oscillator, and 10.6 μm carbon dioxide laser. Although not as common, infrared lasers are used due to their softer mode of ionization. IR-MALDI also has the advantage of greater material removal (useful for biological samples), less low-mass interference, and compatibility with other matrix-free laser desorption mass spectrometry methods.
Time of flight
The type of a mass spectrometer most widely used with MALDI is the time-of-flight mass spectrometer (TOF), mainly due to its large mass range. The TOF measurement procedure is also ideally suited to the MALDI ionization process since the pulsed laser takes individual 'shots' rather than working in continuous operation. MALDI-TOF instruments are often equipped with a reflectron (an "ion mirror") that reflects ions using an electric field. This increases the ion flight path, thereby increasing time of flight between ions of different m/z and increasing resolution. Modern commercial reflectron TOF instruments reach a resolving power m/Δm of 50,000 FWHM (full-width half-maximum, Δm defined as the peak width at 50% of peak height) or more.
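Time of flight relates to m/z through simple kinematics. A brief worked sketch (assuming an idealized linear analyzer with accelerating voltage V, elementary charge e, charge number z, and field-free drift length L, and neglecting the reflectron's higher-order corrections):

$$ zeV = \tfrac{1}{2} m v^{2} \quad\Longrightarrow\quad t = \frac{L}{v} = L\,\sqrt{\frac{m}{2\,zeV}} \;\propto\; \sqrt{m/z} $$

Heavier ions of the same charge therefore arrive later, and recorded arrival times are converted into a mass spectrum via this square-root relation.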
MALDI has been coupled with IMS-TOF MS to identify phosphorylated and non-phosphorylated peptides.
MALDI-FT-ICR MS has been demonstrated to be a useful technique where high resolution MALDI-MS measurements are desired.
Atmospheric pressure
Atmospheric pressure (AP) matrix-assisted laser desorption/ionization (MALDI) is an ionization technique (ion source) that, in contrast to vacuum MALDI, operates in a normal atmospheric environment. The main difference between vacuum MALDI and AP-MALDI is the pressure at which the ions are created. In vacuum MALDI, ions are typically produced at 10 mTorr or less, while in AP-MALDI ions are formed at atmospheric pressure. In the past, the main disadvantage of the AP-MALDI technique compared to conventional vacuum MALDI was its limited sensitivity; however, ions can be transferred into the mass spectrometer with high efficiency, and attomole detection limits have been reported. AP-MALDI is used in mass spectrometry (MS) in a variety of applications ranging from proteomics to drug discovery. Popular topics addressed by AP-MALDI mass spectrometry include proteomics and the mass analysis of DNA, RNA, PNA, lipids, oligosaccharides, phosphopeptides, bacteria, small molecules and synthetic polymers, similar applications to those available on vacuum MALDI instruments. The AP-MALDI ion source is easily coupled to an ion trap mass spectrometer or any other MS system equipped with an electrospray ionization (ESI) or nanoESI source.
MALDI with ionization at reduced pressure is known to produce mainly singly-charged ions (see "Ionization mechanism" below). In contrast, ionization at atmospheric pressure can generate highly charged analytes, as was first shown for infrared and later also for nitrogen lasers. Multiple charging of analytes is of great importance because it allows high-molecular-weight compounds such as proteins to be measured in instruments that provide only limited m/z detection ranges, such as quadrupoles. Besides the pressure, the composition of the matrix is important to achieve this effect.
Aerosol
In aerosol mass spectrometry, one of the ionization techniques consists of firing a laser at individual droplets. These systems are called single-particle mass spectrometers (SPMS). The sample may optionally be mixed with a MALDI matrix prior to aerosolization.
Ionization mechanism
The laser is fired at the matrix crystals in the dried-droplet spot. The matrix absorbs the laser energy and it is thought that primarily the matrix is desorbed and ionized (by addition of a proton) by this event. The hot plume produced during ablation contains many species: neutral and ionized matrix molecules, protonated and deprotonated matrix molecules, matrix clusters and nanodroplets. Ablated species may participate in the ionization of analyte, though the mechanism of MALDI is still debated. The matrix is then thought to transfer protons to the analyte molecules (e.g., protein molecules), thus charging the analyte. An ion observed after this process will consist of the initial neutral molecule [M] with ions added or removed. This is called a quasimolecular ion, for example [M+H]+ in the case of an added proton, [M+Na]+ in the case of an added sodium ion, or [M-H]− in the case of a removed proton. MALDI is capable of creating singly charged ions or multiply charged ions ([M+nH]n+) depending on the nature of the matrix, the laser intensity, and/or the voltage used. Note that these are all even-electron species. Ion signals of radical cations (photoionized molecules) can be observed, e.g., in the case of matrix molecules and other organic molecules.
The gas-phase proton transfer model of UV laser MALDI, implemented as the coupled physical and chemical dynamics (CPCD) model, postulates primary and secondary processes leading to ionization. Primary processes involve initial charge separation through absorption of photons by the matrix and pooling of the energy to form matrix ion pairs. Primary ion formation occurs through absorption of a UV photon to create excited-state molecules by
S0 + hν → S1
S1 + S1 → S0 + Sn
S1 + Sn → M+ + M−
where S0 is the ground electronic state, S1 the first electronic excited state, and Sn is a higher electronic excited state. The product ions can be proton transfer or electron transfer ion pairs, indicated by M+ and M− above. Secondary processes involve ion-molecule reactions to form analyte ions.
The lucky survivor model (cluster ionization mechanism) postulates that analyte molecules are incorporated in the matrix maintaining the charge state from solution. Ion formation occurs through charge separation upon fragmentation of laser ablated clusters. Ions that are not neutralized by recombination with photoelectrons or counter ions are the so-called lucky survivors.
The thermal model postulates that the high temperature facilitates proton transfer between matrix and analyte in the melted matrix liquid. The ion-to-neutral ratio is an important parameter for justifying the theoretical model, and a mistaken citation of the ion-to-neutral ratio can result in an erroneous determination of the ionization mechanism. The model quantitatively predicts the increase in total ion intensity as a function of the concentration and proton affinity of the analytes, and the ion-to-neutral ratio as a function of the laser fluence. This model also suggests that metal ion adducts (e.g., [M+Na]+ or [M+K]+) are mainly generated from the thermally induced dissolution of salt.
The matrix-assisted ionization (MAI) method uses matrix preparation similar to MALDI but does not require laser ablation to produce analyte ions of volatile or nonvolatile compounds. Simply exposing the matrix with analyte to the vacuum of the mass spectrometer creates ions with nearly identical charge states to electrospray ionization. It has been suggested that there is likely mechanistic commonality between this process and MALDI.
Ion yield is typically estimated to range from 10⁻⁴ to 10⁻⁷, with some experiments hinting at even lower yields of 10⁻⁹. The issue of low ion yields was addressed shortly after the introduction of MALDI by various attempts, including post-ionization using a second laser. Most of these attempts showed only limited success, with small signal increases. This might be attributed to the fact that axial time-of-flight instruments were used, which operate at source-region pressures of 10⁻⁵ to 10⁻⁶, resulting in rapid plume expansion with particle velocities of up to 1000 m/s. In 2015, successful laser post-ionization was reported using a modified MALDI source operated at an elevated pressure of ~3 mbar, coupled to an orthogonal time-of-flight mass analyzer and employing a wavelength-tunable post-ionization laser operated at wavelengths from 260 nm to 280 nm, below the two-photon ionization threshold of the matrices used; this raised the ion yields of several lipids and small molecules by up to three orders of magnitude. This approach, called MALDI-2 after the second laser and the second MALDI-like ionization process, was afterwards adopted for other mass spectrometers, all equipped with sources operating in the low-mbar range.
Applications
Biochemistry
In proteomics, MALDI is used for the rapid identification of proteins isolated by using gel electrophoresis: SDS-PAGE, size exclusion chromatography, affinity chromatography, strong/weak ion exchange, isotope coded protein labeling (ICPL), and two-dimensional gel electrophoresis. Peptide mass fingerprinting is the most popular analytical application of MALDI-TOF mass spectrometers. MALDI TOF/TOF mass spectrometers are used to reveal amino acid sequence of peptides using post-source decay or high energy collision-induced dissociation (further use see mass spectrometry).
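The arithmetic underlying peptide mass fingerprinting is straightforward; a minimal sketch follows, computing the monoisotopic [M+H]+ value of a peptide from standard residue masses. The example sequence and the matching workflow noted in the comments are illustrative assumptions, not a description of any particular software package.

```python
# Monoisotopic amino acid residue masses in daltons (standard values).
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.010565   # mass of H2O added to the residue sum
PROTON = 1.007276   # mass of the proton picked up on ionization

def mh_plus(peptide: str) -> float:
    """Monoisotopic [M+H]+ m/z of a singly protonated peptide."""
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER + PROTON

# Example: a hypothetical tryptic peptide; in fingerprinting, the measured
# values are matched against a database of in-silico protein digests.
print(f"{mh_plus('SAMPLER'):.4f}")
```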
MALDI-TOF has been used to characterise post-translational modifications; for example, it has been widely applied to study protein methylation and demethylation. However, care must be taken when studying post-translational modifications by MALDI-TOF. For example, loss of sialic acid has been reported when dihydroxybenzoic acid (DHB) is used as a matrix for MALDI MS analysis of glycosylated peptides. Using sinapinic acid, 4-HCCA and DHB as matrices, S. Martin studied the loss of sialic acid in glycosylated peptides by metastable decay in MALDI/TOF in linear mode and reflector mode. A group at Shimadzu Corporation derivatized the sialic acid by an amidation reaction to improve detection sensitivity, and also demonstrated that ionic liquid matrices reduce the loss of sialic acid during MALDI/TOF MS analysis of sialylated oligosaccharides. THAP, DHAP, and a mixture of 2-aza-2-thiothymine and phenylhydrazine have been identified as matrices that can minimize loss of sialic acid during MALDI MS analysis of glycosylated peptides. It has been reported that loss of some post-translational modifications can be reduced if IR MALDI is used instead of UV MALDI.
Besides proteins, MALDI-TOF has also been applied to study lipids; for example, it has been used to study the catalytic reactions of phospholipases. Oligonucleotides have also been characterised by MALDI-TOF: in molecular biology, a mixture of 5-methoxysalicylic acid and spermine can be used as a matrix for oligonucleotide analysis in MALDI mass spectrometry, for instance after oligonucleotide synthesis.
Organic chemistry
Some synthetic macromolecules, such as catenanes and rotaxanes, dendrimers and hyperbranched polymers, and other assemblies, have molecular weights extending into the thousands or tens of thousands, where most ionization techniques have difficulty producing molecular ions. MALDI is a simple and fast analytical method that can allow chemists to rapidly analyze the results of such syntheses and verify their results.
Polymers
In polymer chemistry, MALDI can be used to determine the molar mass distribution. Polymers with polydispersity greater than 1.2 are difficult to characterize with MALDI due to the signal intensity discrimination against higher mass oligomers.
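The molar mass distribution reduces to weighted averages over the oligomer peak list. Below is a minimal sketch assuming peak intensity is proportional to the number of chains of each mass, a simplification given the intensity discrimination against higher-mass oligomers just noted.

```python
def molar_mass_averages(peaks):
    """Number-average (Mn), weight-average (Mw) molar mass, and
    polydispersity index from a MALDI peak list [(mass_i, intensity_i), ...],
    treating intensity as proportional to the number of chains N_i."""
    n_total = sum(i for _, i in peaks)
    mn = sum(m * i for m, i in peaks) / n_total
    mw = sum(m * m * i for m, i in peaks) / sum(m * i for m, i in peaks)
    return mn, mw, mw / mn

# Illustrative oligomer series (masses in Da, arbitrary intensities):
print(molar_mass_averages([(1000, 5), (1100, 10), (1200, 8), (1300, 3)]))
```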
A good matrix for polymers is dithranol or AgTFA. The sample must first be mixed with dithranol and the AgTFA added afterwards; otherwise the sample will precipitate out of solution.
Microbiology
MALDI-TOF spectra are often used for the identification of microorganisms such as bacteria or fungi. A portion of a colony of the microbe in question is placed onto the sample target and overlaid with matrix. The mass spectra of the expressed proteins are analyzed by dedicated software and compared with stored profiles for species determination, in what is known as biotyping. It offers benefits over other immunological or biochemical procedures and has become a common method for species identification in clinical microbiological laboratories. Benefits of high-resolution MALDI-MS performed on a Fourier transform ion cyclotron resonance mass spectrometer (also known as FT-MS) have been demonstrated for typing and subtyping viruses through single ion detection, known as proteotyping, with a particular focus on influenza viruses.
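Commercial biotyping software uses proprietary scoring, but the comparison step can be illustrated with a simple binned cosine similarity between peak lists; the bin width, m/z range, and threshold-based reporting noted in the comments are illustrative assumptions, not any vendor's actual algorithm.

```python
import numpy as np

def binned_vector(peaks, mz_min=2000.0, mz_max=20000.0, bin_da=5.0):
    """Collapse a peak list [(m/z, intensity), ...] into a fixed-size,
    normalized intensity vector so spectra of different lengths compare."""
    n_bins = int((mz_max - mz_min) / bin_da)
    vec = np.zeros(n_bins)
    for mz, intensity in peaks:
        if mz_min <= mz < mz_max:
            vec[int((mz - mz_min) / bin_da)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def similarity(peaks_a, peaks_b):
    """Cosine similarity between two binned spectra (1.0 = identical)."""
    return float(binned_vector(peaks_a) @ binned_vector(peaks_b))

# An unknown isolate would be scored against every reference spectrum in a
# library; the best-scoring species above some threshold is then reported.
```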
One main advantage over other microbiological identification methods is its ability to rapidly and reliably identify, at low cost, a wide variety of microorganisms directly from the selective medium used to isolate them. The absence of a need to purify the suspect or "presumptive" colony allows for much faster turn-around times. For example, it has been demonstrated that MALDI-TOF can be used to detect bacteria directly from blood cultures.
Another advantage is the potential to predict the antibiotic susceptibility of bacteria. A single mass spectral peak can predict methicillin resistance of Staphylococcus aureus. MALDI can also detect carbapenemases of carbapenem-resistant bacteria, including Acinetobacter baumannii and Klebsiella pneumoniae. However, most proteins that mediate antibiotic resistance are larger than MALDI-TOF's 2,000–20,000 Da range for protein peak interpretation, and only occasionally, as in the 2011 Klebsiella pneumoniae carbapenemase (KPC) outbreak at the NIH, can a correlation be made between a peak and a resistance-conferring protein.
Parasitology
MALDI-TOF spectra have been used for the detection and identification of various parasites such as trypanosomatids, Leishmania and Plasmodium. In addition to these unicellular parasites, MALDI/TOF can be used for the identification of parasitic insects such as lice, as well as cercariae, the free-swimming stage of trematodes.
Medicine
MALDI-TOF spectra are often utilized in tandem with other analysis and spectroscopy techniques in the diagnosis of diseases. MALDI/TOF is a diagnostic tool with much potential because it allows the rapid identification of proteins and changes to proteins without the cost or computing power of sequencing, or the skill or time needed to solve a crystal structure in X-ray crystallography.
One example of this is necrotizing enterocolitis (NEC), a devastating disease that affects the bowels of premature infants. The symptoms of NEC are very similar to those of sepsis, and many infants die awaiting diagnosis and treatment. MALDI/TOF was used to identify bacteria present in the fecal matter of NEC-positive infants. This study focused on characterizing the fecal microbiota associated with NEC and did not address the mechanism of disease. There is hope that a similar technique could be used as a quick diagnostic tool that would not require sequencing.
Another example of the diagnostic power of MALDI/TOF is in the area of cancer. Pancreatic cancer remains one of the deadliest and most difficult to diagnose cancers. Impaired cellular signaling due to mutations in membrane proteins has long been suspected to contribute to pancreatic cancer. MALDI/TOF has been used to identify a membrane protein associated with pancreatic cancer, and may eventually serve as an early detection technique.
MALDI/TOF can also potentially be used to guide treatment as well as diagnosis. MALDI/TOF serves as a method for determining the drug resistance of bacteria, especially to β-lactams (the penicillin family). MALDI/TOF detects the presence of carbapenemases, which indicates drug resistance to standard antibiotics. It is predicted that this could serve as a method for identifying a bacterium as drug-resistant in as little as three hours. This technique could help physicians decide whether to prescribe more aggressive antibiotics initially.
Detection of protein complexes
Following initial observations that some peptide-peptide complexes could survive MALDI deposition and ionization, studies of large protein complexes using MALDI-MS have been reported.
Small molecules
While MALDI is a common technique for large macromolecules, it is often possible to also analyze small molecules with masses below 1000 Da. The problem with small molecules is matrix effects: signal interference, detector saturation, or suppression of the analyte signal is possible, since the matrices often consist of small molecules themselves. The choice of matrix is highly dependent on the molecules to be analyzed.
MALDI-imaging mass spectrometry
Because MALDI is a soft ionization source, it can be used on a wide variety of biomolecules. This has led to new applications such as MALDI-imaging mass spectrometry, a technique that allows imaging of the spatial distribution of biomolecules.
See also
MALDI imaging
Matrix (mass spectrometry)
PEGylation
Peptide mass fingerprinting
References
Bibliography
External links
Primer on Matrix-Assisted Laser Desorption Ionization (MALDI) National High Magnetic Field Laboratory
Ion source
Biochemistry methods | Matrix-assisted laser desorption/ionization | [
"Physics",
"Chemistry",
"Biology"
] | 5,354 | [
"Biochemistry methods",
"Spectrum (physical sciences)",
"Ion source",
"Mass spectrometry",
"Biochemistry"
] |
9,362,854 | https://en.wikipedia.org/wiki/Cesare%20Cremonini%20%28philosopher%29 | Cesare Cremonini (; 22 December 1550 – 19 July 1631), sometimes Cesare Cremonino, was an Italian professor of natural philosophy, working rationalism (against revelation) and Aristotelian materialism (against the dualist immortality of the soul) inside scholasticism. His Latinized name was Cæsar Cremoninus or Cæsar Cremonius.
Considered one of the greatest philosophers of his time, patronized by Alfonso II d'Este, Duke of Ferrara, corresponding with kings and princes who owned his portrait, and paid twice the salary of Galileo Galilei, he is now mostly remembered as an infamous side actor in the Galileo affair: one of the two scholars who refused to look through Galileo's telescope.
Biography
Cesare Cremonini was born in Cento in the then Papal States. He was a professor of natural philosophy for about 60 years:
From 1573 to 1590, at the University of Ferrara. Starting at a very young age and considered a great talent, he obtained the patronage of Alfonso II d'Este, Duke of Ferrara (to whom he would dedicate his first major book in 1596). The jealousies aroused by this protection eventually led him to accept a position outside his native province.
From 1591 until his death, at the University of Padua, then under the rule of the Republic of Venice (succeeding Jacopo Zabarella), in a chair of natural philosophy and a chair of medicine.
He taught the doctrines of Aristotle, especially as interpreted by Alexander of Aphrodisias and Averroes.
He was so popular in his time that most kings and princes had his portrait and corresponded with him, sometimes consulting him about private and public affairs. At Padua, his salary was twice that of Galileo. He was especially popular among the French intellectuals who called him "le Cremonin" (the Cremonin); even a remote writer such as Jean-Louis Guez de Balzac mentioned him as "le grand Cremonin" (the great Cremonin) in his Lettres.
Metaphysical views
Following up on the controversy opened in 1516 by Pietro Pomponazzi and continued by Jacopo Zabarella (his predecessors in the chair), Cremonini too taught that reason alone cannot demonstrate the immortality of the soul – his absolute adherence to Aristotle implying that he believed in the mortality of the soul. After a paper he wrote about the Jesuits, and public statements he made in favor of lay teachers, the Jesuits in Venice accused him of materialism, then relayed their grievances to Rome. He was prosecuted in 1604 by the Inquisition for atheism and the Averroist heresy of "double truth", and ordered to refute his own claims; as was his manner, Cremonini gently refused to retract, sheltering behind Aristotle's authority. Because Padua was then under tolerant Venetian rule, he was kept out of reach of a full trial.
Beyond his teachings, Cremonini's personal motto was "Intus ut libet, foris ut moris est" (Latin for "In private think what you wish, in public behave as is the custom"), which was taken by humanists as meaning that a scientific thinker could hold one set of opinions as a philosopher and another as a Christian; it was also adopted by European libertines (and brought back to France by his student and confidant Gabriel Naudé). He arranged for his tombstone to be engraved with "Cæsar Cremoninus hic totus jacet" (Latin for "Here lies all of Cremonini"), implying that no soul survived.
His student Naudé (who had been his confidant for three months) described most of his Italian teachers as "atheists", and especially Cremonini as a "déniaisé" ("one who has been wised up, unfoolish, devirginized", the libertines' word for unbelievers); he added, to his friends (in translation): "The Cremonin, Professor of Philosophy in Padua, confessed to a few choice Friends of his that he believed neither in God, nor in Devil, nor in the immortality of the soul: yet he was careful that his manservant was a good Catholic, for fear, he said, should he believe in nothing, that he may one morning cut my throat in my bed". Later, Pierre Bayle pointed out that Cremonini did not believe in the immortality of the soul (in the "Crémonin" article of his Historical and Critical Dictionary). Gottfried Leibniz, in his 1710 Theodicy, dealing with the Averroists, who "declared that man's soul is, according to philosophy, mortal, while they protested their acquiescence in Christian theology, which declares the soul's immortality", says "that very sect of the Averroists survived as a school. It is thought that Caesar Cremoninus, a philosopher famous in his time, was one of its mainstays". Pierre Larousse, in his opinionated Grand dictionnaire universel du XIXe siècle, stated that Cremonini was not a Christian.
Cremonini and Galileo
At Padua Cremonini was both a rival and a friend of his colleague Galileo. When Galileo announced that he had discovered mountains on the Moon in 1610, he offered Cremonini the chance to observe the evidence through a telescope. Cremonini refused even to look through the telescope and insisted that Aristotle had definitely proved the Moon could only be a perfect sphere. When Galileo decided to move to Tuscany that year, Cremonini warned him that it would bring him under the Inquisition's jurisdiction. Indeed, the next year the Inquisition reviewed Cremonini's case for evidence against Galileo. Years later, in his book Dialogue Concerning the Two Chief World Systems, Galileo would include the character Simplicio - the name was not casually chosen - a dogmatic Aristotelian philosopher who was partly based on Cremonini.
Death and legacy
When Cremonini died in 1631 during the Paduan outbreak of the Italian Plague of 1629-1631, more than 400 students were working with him. His previous students included, alphabetically:
Theophilos Corydalleus, graduated 1613, a Greek philosopher, had some influence in the Greek-speaking world during the 17th and 18th centuries, founded Corydalism
William Harvey, graduated 1602, an English doctor who was the first to correctly describe the circulation of the blood
Joachim Jung, graduated 1619, a German mathematician and naturalist popularized by John Ray
Ioannis Kottounios, an eminent Greek scholar and his successor to the chair of philosophy at Padua
Justus Lipsius, a philosopher of the Spanish Netherlands
Gabriel Naudé, in 1625–27, a French scholar and Cardinal Mazarin's librarian
Guy Patin, a French doctor, headmaster of the School of Medicine in Paris
Antonio Rocco, an Italian philosophy teacher and libertine writer
Corfitz Ulfeldt, in 1628–29, a famous Danish statesman and traitor
Flemming Ulfeldt, also in 1628–29, a Danish statesman and military leader, younger brother of Corfitz
He was buried in the Benedictine monastery of St. Justina of Padua (to which he also willed his possessions). His name has been given to several streets ("via Cesare Cremonini" in Cento, "via Cesare Cremonino" in Padua) and an institute ("Istituto Magistrale Cesare Cremonini" in Cento).
Bibliography
Concise bibliography
Below are his main books (many of them including separate treatises), listing only their most usual abridged titles:
1596: Explanatio proœmii librorum Aristotelis De physico auditu
1605: De formis elementorum
1611: De Anima (student transcript of a Cremonini lecture)
1613: Disputatio de cœlo
1616: De quinta cœli substantia (second series of De cœlo)
1626: De calido innato (reprinted in 1634)
1627: De origine et principatu membrorum
163?: De semine (printed or reprinted in 1634)
--- Posthumous:
1634: De calido innato et semine (expanding 1626 with 163?)
1644: De sensibus et facultate appetitiva
1663: Dialectica
(Not included are poems and other personal texts.)
Extended bibliography
Below are his main books (with usual short titles, original full titles, and indication of some variants or misspellings commonly found in literature). As was the practice of the time, many of them are made of opuscules, separate treatises grouped in a single binding. (Please note that Latin title spelling can vary depending on their grammatical position in a sentence, such as a "tractatus" becoming a "tractatum" in the accusative case when inside a longer title.)
1596: Explanatio proœmii librorum Aristotelis De physico auditu [1+20+22+43+1 folios] (Explanatio proœmii librorum Aristotelis De physico auditu cum introductione ad naturalem Aristotelis philosophiam, continente tractatum de pædia, descriptionemque universæ naturalis Aristoteliæ philosophiæ, quibus adjuncta est præfatio in libros De physico auditu. Ad serenissimum principem Alphonsum II Estensem Ferrariæ ducem augustissimum) also ("Explanatio proœmii librorum Aristotelis De physico auditu, et in eosdem Præfatio, una cum Tractatu de Pædia, seu, Introductione ad philosophiam naturalem Aristotelis.") (ed. Melchiorre Novello as "Melchiorem Novellum") – Padua: Novellum
"Tractatus de pædia" alias "De pædia Aristotelis" or sometimes "De pœdia Aristotelis" (also as "Descriptio universæ naturalis Aristoteliæ philosophiæ", or erroneously "Diatyposis universæ naturalis aristotelicæ philosophiæ")
"Introductio ad naturalem Aristotelis philosophiam" (sometimes "Introductio ad naturalem Aristotelis philosophiam")
"Explanatio proœmii librorum Aristotelis De physico auditu" (sometimes "Explanatio proœmii librorum De physico auditu")
1605: De formis elementorum (Disputatio De formis quatuor corporum simplicium quæ vocantur elementa) – Venice
1611: De Anima (De Anima lectiones 31, opiniones antiquorum de anima lect. 17) – student transcript of a Cremonini lecture
1613: Disputatio de cœlo (Disputatio de cœlo : in tres partes divisa, de natura cœli, de motu cœli, de motoribus cœli abstractis. Adjecta est Apologia dictorum Aristotelis, de via lactea, et de facie in orbe lunæ) – Venice: Thomam Balionum
"De cœlo"
"De natura cœli"
"De motu cœli"
"De motoribus cœli abstractis"
"De via lactea"
"De facie in orbe lunæ"
1616: De quinta cœli substantia (Apologia dictorum Aristotelis, de quinta cœli substantia adversus Xenarcum, Joannem Grammaticum, et alios) – Venice: Meiettum (second series of De cœlo)
1626: De calido innato (Apologia dictorum Aristotelis De calido innato adversus Galenum) – Venice: Deuchiniana (reprinted in 1634)
1627: De origine et principatu membrorum (Apologia dictorum Aristotelis De origine et Principatu membrorum adversus Galenum) – Venice: Hieronymum Piutum
"De origine"
"De principatu membrorum"
163?: De semine (Expositio in digressionem Averrhois de semine contra Galenum pro Aristotele) – (printed or reprinted in 1634)
--- Posthumous:
1634: De calido innato et semine (Tractatus de calido innato, et semine, pro Aristotele adversus Galenum) – Leiden: Elzevir (Lugduni-Batavorum) (expanding 1626 with 163?)
"De calido innato"
"De semine" (Apologia dictorum Aristotelis De Semine)
1644: De sensibus et facultate appetitiva (Tractatus tres : primus est de sensibus externis, secundus de sensibus internis, tertius de facultate appetitiva. Opuscula haec revidit Troylus Lancetta auctoris discipulus, et adnotatiotes confecit in margine) also (Tractatus III : de sensibus externis, de sensibus internis, de facultate appetitiva) (ed. Troilo Lancetta, as "Troilus Lancetta" or "Troilo de Lancettis"), Venice: Guerilios
"De sensibus externis"
"De sensibus internis"
"De facultate appetitiva"
1663: Dialectica (Dialectica, Logica sive dialectica) (ed. Troilo Lancetta, as "Troilus Lancetta" or "Troilo de Lancettis") (sometimes "Dialecticum opus posthumum") – Venice: Guerilios
(Poems and other personal texts not included here.)
References
Sources
Dictionaries and encyclopedias
Pierre Bayle: "Crémonin, César". In: Dictionnaire historique et critique, vol. 5, 1820, pp. 320–323
John Gorton: A General Biographical Dictionary, London: Henry G. Bohn, 1828, new edition 1851, page 146, article "Cremonini, Cæsar" online
Adolphe Franck: Dictionnaire des sciences philosophiques, volume 1, Paris: Hachette, 1844, pp. 598–599, article "Crémonini, César" (in French) online
Ferdinand Hoefer : Nouvelle biographie générale, volume XII, Paris: Firmin-Didot, 1855, second edition 1857, pp. 416–419, article "Cremonini, César" (in French) online
Pierre Larousse: Grand dictionnaire universel du XIXe siècle, volume 5, Paris: 1869, page 489, article "Crémonini, César" (in French) online (PDF or TIFF plugin required)
Marie-Nicolas Bouillet, Alexis Chassang (ed.): Dictionnaire universel d'histoire et de géographie, 26th edition, Paris: Hachette, 1878, page 474, article "Cremonini, César" (in French) online (PDF or TIFF plugin required)
Werner Ziegenfuss: Philosophen-lexikon: Handwörterbuch der Philosophie nach Personen, Walter de Gruyter, 1950, , page 208, article "Cremoninus, Caesar (Cesare Cremonini)"
Various: Encyclopædia Universalis, CD-ROM edition: 1996, article "Cremonini, C." (in French)
Herbert Jaumann: Handbuch Gelehrtenkultur der Frühen Neuzeit, Walter de Gruyter, 2004, , page 203, article "Cremonini, Cesare"
Filosofico.net: Indice alfabetico dei filosofi, article "Cesare Cremonino" (in Italian) online : picture and profile
Philosophy Institute at the University of Düsseldorf: Philosophengalerie, article "Caesar Cremoninus (Cesare Cremonini)" (in German) online : another picture, bibliography, literature
Philosophy
Léopold Mabilleau: Étude historique sur la philosophie de la Renaissance en Italie, Paris: Hachette, 1881
J.-Roger Charbonnel: La pensée italienne au XVIe siècle et le courant libertin, Paris: Champion, 1919
David Wootton: "Unbelief in Early Modern Europe", History Workshop Journal, No. 20, 1985, pages 83–101 : Averroes, Pomponazzi, Cremonini
Cremonini and Galileo
Evan R. Soulé, Jr.: "The Energy Machine of Joseph Newman", Discover Magazine, May 1987, online version : telescope incident account
Thomas Lessl: "The Galileo Legend", New Oxford Review, June 2000, pp. 27–33, online at CatholicEducation.org : telescope incident note
Paul Newall: "The Galileo Affair", 2005, online at Galilean-Library.org : telescope incident note (with typo "Cremoni")
W.R. Laird: "Venetischer Aristotelismus im Ende der aristotelischen Welt: Aspekte der Welt und des Denkens des Cesare Cremonini (1550–1631)(Review)" in Renaissance Quarterly, 1999, online excerpt at Amazon.com or excerpt at FindArticles.com
Stephen Mason: "Galileo's Scientific Discoveries, Cosmological Confrontations, and the Aftermath", in History of Science, volume 40, December 2002, pp. 382–383 (article pp. 6–7), PDF version online : salary, advices to Galileo
Galileo Galilei, Andrea Frova, Mariapiera Marenzana: Thus Spoke Galileo, Oxford University Press, 2006 (translated from a 1998 book), , page 9 : Inquisition
External links
Cesare Cremonino site (in Italian) including detailed biography, bibliography, literature.
Heinrich C. Kuhn: Cesare Cremonini: volti e maschere di un filosofo scomodo per tre secoli e mezzo (in Italian) 1999 conference about "the masks of Cremonini: Blind Man, Libertine Atheist, Rational Rigorist, and more"
Texts of Cremonini
Cæsar Cremoninus – Disputatio de cœlo (1613), online scans (Javascript required)
Free books by Cremonini (Google Books)
1550 births
1631 deaths
Galileo affair
Natural philosophers
Scholastic philosophers
Aristotelian philosophers
Latin commentators on Aristotle
People from Cento
17th-century deaths from plague (disease)
16th-century Italian philosophers
16th-century Italian male writers
17th-century Italian philosophers | Cesare Cremonini (philosopher) | [
"Astronomy"
] | 4,108 | [
"Astronomical controversies",
"Galileo affair"
] |
9,363,081 | https://en.wikipedia.org/wiki/Centre%20for%20Energy%2C%20Petroleum%20and%20Mineral%20Law%20and%20Policy | The Centre for Energy, Petroleum and Mineral Law and Policy (CEPMLP) is a graduate school at the University of Dundee, Scotland, United Kingdom, focused on the fields of international business transactions, energy law and policy, mining and the use of natural resources.
It is affiliated with, but not part of, the University of Dundee School of Law.
The CEPMLP is part of the University of Dundee's School of Social Sciences and is based in the Carnegie Building on the Geddes Quadrangle of the University's main campus.
The CEPMLP adopts an interdisciplinary approach to teaching, research and consultancy providing perspective on how governments, business and communities operate.
Master's degrees
The CEPMLP offers a wide range of programmes, from taught master's degrees delivered both full-time on site and by distance learning, to research degrees and executive leadership programmes.
Academic credentials
Current RAE (UK Research Assessment Exercise) rating of 5.
Awarded the highest rank available by the UK Quality Assurance Agency for Higher Education (2002) for the taught postgraduate programme.
Queen's Award for Enterprise in International Trade, 2004
Doctoral programme with more than 40 PhD students, and a faculty of renowned international experts.
Strategic alliances with partner institutions
Washington College of Law at the American University, Washington, DC, USA.
Institut français du pétrole, Paris, France.
References
The Quality Assurance Agency for Higher Education, (2002), "Academic review: Subject Review; Law, University of Dundee", https://web.archive.org/web/20071008134635/http://www.qaa.ac.uk/reviews/reports/subjectlevel/sr060_02.pdf, accessed 02-10-2007
LLM Guide Master's of Law Programmes World Wide, http://www.llm-guide.com/board/6332, accessed 28-09-2007
Oil Voice Forum, https://web.archive.org/web/20071018042804/http://forum.oilvoice.com/topic.asp?TOPIC_ID=764, accessed 28-09-2007
Cresswell, J., (2006), "Scots energy and mineral law centre a hidden jewel", Press & Journal, Aberdeen.
Department of Trade and Industry (Scotland), (2006), "Dundee University receives Royal Award Visit", http://www.gnn.gov.uk/content/detail.asp?NewsAreaID=2&ReleaseID=229917, accessed 02-10-2007.
The Queen's Awards for Enterprise, 2004 Winners, https://web.archive.org/web/20070927064335/http://www.queensawards.org.uk/business/Winners/2004.html, accessed 09-10-2007.
External links
The Centre for Energy, Petroleum and Mineral Law and Policy
University of Dundee
Energy in Scotland
Petroleum engineering schools | Centre for Energy, Petroleum and Mineral Law and Policy | [
"Engineering"
] | 622 | [
"Petroleum engineering",
"Petroleum engineering schools",
"Engineering universities and colleges"
] |
9,363,315 | https://en.wikipedia.org/wiki/Australian%20Aboriginal%20astronomy | Australian Aboriginal astronomy has been passed down orally, through ceremonies, and in artwork of many kinds. The astronomical systems passed down in this way show a depth of understanding of the movement of celestial objects which allowed their use as a practical means for creating calendars and for navigating across the continent and waters of Australia. There is a diversity of astronomical traditions in Australia, each with its own particular expression of cosmology. However, there appear to be common themes and systems between the groups. Due to the long history of Australian Aboriginal astronomy, the Aboriginal peoples have been described as the "world's first astronomers" on several occasions.
Many of the constellations were given names based on their shapes, just as in traditional Western astronomy, such as the Pleiades, Orion and the Milky Way, while others, such as the Emu in the Sky, describe the dark patches rather than the star-lit points. Contemporary Indigenous Australian art often references astronomical subjects and their related lore, such as the Seven Sisters.
Records of Aboriginal astronomy
One of the earliest written records of Aboriginal astronomy was made by William Edward Stanbridge, an Englishman who emigrated to Australia in 1841 and befriended the local Boorong people.
Interpreting the sky
Emu in the sky
A constellation used almost everywhere in Australian Aboriginal culture is the "Emu in the Sky", which consists of dark nebulae (opaque clouds of dust and gas in outer space) that are visible against the (centre and other sectors of the) Milky Way background. The Emu's head is the very dark Coalsack nebula, next to the Southern Cross; the body and legs are the extension of the Great Rift trailing out to Scorpius.
In Ku-ring-gai Chase National Park, north of Sydney, are extensive rock engravings of the Guringai people who lived there, including representations of the creator-hero Daramulan and his emu-wife. An engraving near the Elvina Track shows an emu in the same pose and orientation as the Emu in the Sky constellation.
To the Wardaman, however, the Coalsack is the head of a lawman.
Bruce Pascoe's book Dark Emu takes its title from one of the Aboriginal names for the constellation, known as Gugurmin to the Wiradjuri people.
In May 2020, the Royal Australian Mint launched a limited edition commemorative one-dollar coin, the first in its "Star Dreaming" series celebrating Indigenous Australians' astronomy.
Canoe in Orion
The Yolŋu people of northern Australia say that the constellation of Orion, which they call Julpan (or Djulpan), is a canoe. They tell the story of three brothers who went fishing, and one of them ate a sawfish that was forbidden under their law. Seeing this, the Sun-woman, Walu, made a waterspout that carried him and his two brothers and their canoe up into the sky. The three stars in a line at the constellation's centre, which form Orion's Belt in Western mythology, are the three brothers; the Orion Nebula above them is the forbidden fish; and the bright stars Betelgeuse and Rigel are the bow and stern of the canoe. This is an example of astronomical legends underpinning the ethical and social codes that people use on Earth.
Seven Sisters
The Pleiades constellation figures in the Dreamings and songlines of several Aboriginal Australian peoples, usually referred to as the seven sisters. The story has been described as "one of the most defining and predominant meta-narratives chronicled in ancient mainland Australia"; it describes a male ancestral being (with names including Wati Nyiru, Yurlu and others) who pursues seven sisters across the middle of the Australian continent from west to east, where the sisters turn into stars. Told by a number of peoples across the country, using varying names for the characters, it starts in Martu country in the Pilbara region of Western Australia (specifically, Roebourne), and travels across the lands of the Ngaanyatjarra (WA) to the Anangu Pitjantjatjara Yankunytjatjara (APY) lands of South Australia, where the Pitjantjatjara and Yankunytjatjara peoples live. The story also takes in Warlpiri lands in the Tanami Desert of the Northern Territory.
The Yamatji people of the Wajarri language group, of the Murchison region in Western Australia, call the sisters Nyarluwarri. When the constellation is close to the horizon as the sun is setting, the people know that it is the right time to harvest emu eggs, and they also use the brightness of the stars to predict seasonal rainfall.
In the Kimberley region of Western Australia, the eagle hawk chases the seven sisters up into the sky, where they become the star cluster and he becomes the Southern Cross.
In the Western Desert cultural bloc in central Australia, they are said to be seven sisters fleeing from the unwelcome attentions of a man represented by some of the stars in Orion, the hunter. In these stories, the man is called Nyiru or Nirunja, and the Seven Sisters is a songline known as Kungkarangkalpa. The seven sisters story often features in the artwork of the region, such as the 2017 painting by Tjungkara Ken, Kaylene Whiskey's 2018 work "Seven Sistas", and the large-scale installation by the Tjanpi Desert Weavers commissioned as a feature of the National Gallery of Australia's 2020 Know My Name exhibition. The Museum of Contemporary Art Australia in Sydney holds a 2013 work by the Tjanpi Desert Weavers called Minyma Punu Kungkarangkalpa (Seven Sisters Tree Women). In March 2013, senior desert dancers from the APY Lands (South Australia), in a collaboration with the Australian National University's ARC Linkage project mounted by artistic director Wesley Enoch, performed Kungkarangkalpa: The Seven Sisters Songline on the shores of Lake Burley Griffin in Canberra.
In the Warlpiri version of the story, the Napaljarri sisters are often represented carrying a man called Wardilyka, who is in love with the women. But the morning star, Jukurra-jukurra, a man from a different skin group who is also in love with the sisters, chases them across the sky. Each night they launch themselves into the sky, and each night he follows them. This story is known as the Napaljarri-warnu Jukurrpa.
The people of the country around Lake Eyre in South Australia tell how the ancestral male is prevented from capturing one of the seven sisters by a great flood.
The Wirangu people of the west coast of South Australia have a creation story embodied in a songline of great significance based on the Pleiades. In the story, the hunter (the Orion constellation) is named Tgilby. Tgilby, after falling in love with the seven sisters, known as Yugarilya, chases them out of the sky, onto and across the earth. He chases them as the Yugarilya chase a snake, Dyunu.
The Boonwurrung people of the Kulin nation of Victoria tell the Karatgurk story, which tells of how a crow robbed the seven sisters of their secret of how to make fire, thus bringing the skill to the people on earth.
In another story, told by peoples of New South Wales, the seven sisters are beautiful women known as the Maya-Mayi, two of whom are kidnapped by a warrior, Warrumma, or Warunna. They eventually escape by climbing a pine tree that continually grows up into the sky where they join their other sisters.
In 2017, a major exhibition entitled Songlines: Tracking the Seven Sisters was mounted at the National Museum of Australia, afterwards travelling to Berlin (2022) and Paris (2023).
In September 2020, the Royal Australian Mint issued the second commemorative one-dollar coin in its "Star Dreaming" series celebrating Indigenous Australians' astronomy (see Emu in the sky above).
The Milky Way
The Kaurna people of the Adelaide Plains of South Australia called the (centre and other sectors of the) Milky Way wodliparri in the Kaurna language, meaning "house river". They believed that Karrawirra Parri (the River Torrens) was a reflection of wodliparri.
The Yolŋu people believe that when they die, they are taken by a mystical canoe, Larrpan, to the spirit-island Baralku in the sky, where their camp-fires can be seen burning along the edge of the great river of the Milky Way. The canoe is sent back to Earth as a shooting star, letting their family on Earth know that they have arrived safely in the spirit-land. Aboriginals also thought that god was the canoe.
The Boorong people see in the Southern Cross a possum in a tree.
Sun and Moon
Many traditions have stories of a female Sun and a male Moon.
The Yolŋu say that Walu, the Sun-woman, lights a small fire each morning, which we see as the dawn. She paints herself with red ochre, some of which spills onto the clouds, creating the sunrise. She then lights a torch and carries it across the sky from east to west, creating daylight. At the end of her journey, as she descends from the sky, some of her ochre paint again rubs off onto the clouds, creating the sunset. She then puts out her torch, and throughout the night travels underground back to her starting camp in the east. Other Aboriginal peoples of the Northern Territory call her Wuriupranili. Other stories about the Sun involve Wala, Yhi, and Gnowee.
The Yolŋu tell that Ngalindi, the Moon-man, was once young and slim (the waxing Moon), but grew fat and lazy (the full Moon). His wives chopped bits off him with their axes (the waning Moon); to escape them he climbed a tall tree towards the Sun, but died from the wounds (the new Moon). After remaining dead for three days, he rose again to repeat the cycle, and continues doing so to this day. The Kuwema people in the Northern Territory say that he grows fat at each full Moon by devouring the spirits of those who disobey the tribal laws. Another story, from the Aboriginal peoples of Cape York, involves the making of a giant boomerang that is thrown into the sky and becomes the Moon.
A story from Southern Victoria concerns a beautiful woman who is forced to live by herself in the sky after a number of scandalous affairs.
The Yolŋu also associated the Moon with the tides.
Eclipses
The Warlpiri people explain a solar eclipse as being the Sun-woman being hidden by the Moon-man as he makes love to her. This explanation is shared by other groups, such as the Wirangu.
In the Ku-ring-gai Chase National Park there are a number of engravings showing a crescent shape, with sharp horns pointing down, and below it a drawing of a man in front of a woman. While the crescent shape has been assumed by most researchers to represent a boomerang, some argue that it is more easily interpreted as a solar eclipse, with the mythical man-and-woman explanation depicted below it.
Venus
The rising of Venus marks an important ceremony of the Yolŋu, who call the planet Barnumbirr ("Morning Star" and "Evening Star"). They gather after sunset to await its rising. As she rises in the early hours before dawn, the Yolŋu say that she draws behind her a rope of light attached to the island of Baralku on Earth, and along this rope, with the aid of a richly decorated "Morning Star Pole", the people are able to communicate with their dead loved ones, showing that they still love and remember them.
Jupiter
The Dja Dja Wurrung call Jupiter "Bunjil's campfire". The planet features in the Dja Dja Wurrung Aboriginal Clans Corporation logo, as a symbol of the Creator Spirit.
Eta Carinae
In 2010, astronomers Duane Hamacher and David Frew from Macquarie University in Sydney showed that the Boorong Aboriginal people of northwestern Victoria, Australia, witnessed the outburst of Eta Carinae in the 1840s and incorporated it into their oral traditions as Collowgulloric War, the wife of War (Canopus, the Crow). This is the only definitive indigenous record of Eta Carinae's outburst identified in the literature to date.
Astronomical calendars
Aboriginal calendars tend to differ from European calendars: many groups in northern Australia use a calendar with six seasons, and some groups mark the seasons by the stars which are visible during them. For the Pitjantjatjara, for example, the rising of the Pleiades at dawn (in May) marks the start of winter.
It is not known to what extent Aboriginal people were interested in the precise motion of the Sun, Moon, planets or stars. However, it is likely that some of the stone arrangements in Victoria, such as Wurdi Youang near Little River, were used to predict and confirm the equinoxes and/or solstices. The arrangement is aligned with the setting sun at the solstices and equinoxes, but its age is unknown.
There are rock engravings by the Nganguraku people at Ngaut Ngaut which, according to oral tradition, represent lunar cycles. Most of their culture (including their language) has been lost because of the banning of such things by Christian missionaries before 1913.
Stories enrich a custom-linked calendar whereby the heliacal rising or setting of stars or constellations indicates to Aboriginal Australians when it is time to move to a new place and/or look for a new food source. For example, the Boorong people in Victoria know that when the Malleefowl (Lyra) disappears in October, to "sit with the Sun", it is time to start gathering her eggs on Earth. Other groups know that when Orion first appears in the sky, the dingo puppies are about to be born. When Scorpius appears, the Yolŋu know that the Macassan fisherman would soon arrive to fish for trepang.
In contemporary culture
A great deal of contemporary Aboriginal art has an astronomical theme, reflecting the astronomical elements of the artists' cultures. Prominent examples are Gulumbu Yunupingu, Bill Yidumduma Harney, and Nami Maymuru, all of whom have won awards or been finalists in the Telstra Indigenous Art Awards. In 2009 an exhibition of Indigenous Astronomical Art from WA, named "Ilgarijiri", was launched at AIATSIS in Canberra in conjunction with a Symposium on Aboriginal Astronomy.
Other contemporary painters include the daughters of the late Clifford Possum Tjapaltjarri, who have the seven sisters as one of their Dreamings. Gabriella Possum and Michelle Possum paint the Seven Sisters Dreaming in their paintings. They inherited this Dreaming through their maternal line.
See also
Australian Aboriginal Astronomy Project
Archaeoastronomy
Indigenous Australian art
List of archaeoastronomical sites by country
Pleiades in folklore and literature
References
Further reading
ABC Message Stick program on Aboriginal Astronomy
The Emu in the Sky story at Questacon
ABC Radio National Artworks piece on "The First Astronomers"
Cairns, H. & Yidumduma Harney, B. (2003). Dark Sparklers: Yidumduma's Aboriginal Astronomy. Hugh Cairns, Sydney.
Fredrick, S. (2008). The Sky of Knowledge: A Study of the Ethnoastronomy of the Aboriginal People of Australia. Master of Philosophy Thesis. Department of Archaeology and Ancient History, University of Leicester, UK.
Fuller, R.S.; Hamacher, D.W. & Norris, R.P. (2013). Astronomical Orientations of Bora Ceremonial Grounds in Southeast Australia. Australian Archaeology, No. 77, pp. 30–37.
Hamacher, D.W. (2013). "Aurorae in Australian Aboriginal Traditions". Journal of Astronomical History & Heritage, Vol. 16(2), pp. 207–219.
Hamacher, D.W. (2012). On the Astronomical Knowledge and Traditions of Aboriginal Australians. Doctor of Philosophy Thesis. Department of Indigenous Studies, Macquarie University, Sydney, Australia.
Hamacher, D.W. & Norris, R.P. (2011). Bridging the Gap through Australian Cultural Astronomy. In Archaeoastronomy & Ethnoastronomy: building bridges between cultures, edited by C. Ruggles. Cambridge University Press, pp. 282–290.
Haynes, R.F., et al. (1996). Dreaming the Stars. In Explorers of the Southern Sky, edited by R. Haynes. Cambridge University Press, pp. 7–20.
Johnson, D. (1998). Night skies of Aboriginal Australia: a Noctuary. University of Sydney Press.
Morieson, J. (1996). The Night Sky of the Boorong. Master of Arts Thesis, Australian Centre, University of Melbourne.
Morieson, J. (2003). The Astronomy of the Boorong. World Archaeological Congress, June 2003.
Norris, R.P. & Hamacher, D.W. (2013). Australian Aboriginal Astronomy: An Overview. In Handbook of Cultural Astronomy, edited by C. Ruggles. Springer, in press.
Norris, R.P. & Hamacher, D.W. (2009). The Astronomy of Aboriginal Australia. In The Role of Astronomy in Society and Culture, edited by D. Valls-Gabaud & A. Boksenberg. Cambridge University Press, pp. 39–47.
Norris, R.P. & Norris, P.M. (2008). Emu Dreaming: An Introduction to Aboriginal Astronomy. Emu Dreaming, Sydney.
Norris, R. P., (2016)
External links
Website created by Kokatha artist Darryl Milika, designer of the Yerrakartarta art installation in Adelaide.
Australian Aboriginal mythology
Archaeoastronomy
Astronomy in Australia | Australian Aboriginal astronomy | [
"Astronomy"
] | 3,800 | [
"Archaeoastronomy",
"Astronomical sub-disciplines"
] |
9,363,374 | https://en.wikipedia.org/wiki/Mobile%20communications%20over%20IP | MoIP, or mobile communications over Internet Protocol, is the mobilization of peer-to-peer communications, including chat and talk, using Internet Protocol over standard mobile communications channels including 3G, GPRS, Wi-Fi and WiMAX. Unlike mobile VoIP, MoIP is not a VoIP program made accessible from mobile phones or a switchboard application using VoIP in the background. It is rather a native mobile application on users’ handsets, used to conduct talk and chat with the internet connection as its primary channel.
How MoIP (mobile) works
MoIP applications typically work without any proprietary hardware, are enhanced with real-time contact availability (presence), and save users money by utilizing free Wi-Fi internet access or fixed internet data plans instead of GSM (talk) minutes. They are completely mobile-centric, designed and optimized specifically for the mobile-handset environment rather than the PC.
References
External links
ZDNet: Mobile VoIP means business
White paper: V.150 Modem over IP
Voice over IP
Wireless networking | Mobile communications over IP | [
"Technology",
"Engineering"
] | 215 | [
"Wireless networking",
"Computer networks engineering"
] |
9,363,637 | https://en.wikipedia.org/wiki/Equivalent%20dumping%20coefficient | An equivalent dumping coefficient is a mathematical coefficient used in the calculation of the energy dissipated when a structure moves. As a civil engineering term, it defines the percentage of the energy of a cycle of oscillation that is absorbed (converted to heat by friction) for the structure or sub-structure under analysis. Usually it is assumed that the equivalent dumping coefficient is linear, which is to say invariant with respect to oscillatory amplitude. Modern seismic studies have shown this not to be a satisfactory assumption for larger civic structures, and have developed amplitude- and frequency-dependent functions for the equivalent dumping coefficient.
When a building moves, the materials it is made from absorb a fraction of the kinetic energy (this is especially true of concrete), due primarily to friction and to viscous or elastomeric resistance, which convert motion or kinetic energy to heat.
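In structural dynamics this idea is commonly made quantitative through the equivalent viscous damping coefficient; the relations below are a minimal sketch using the standard textbook definitions, assumed here for illustration rather than stated by this article (here $E_d$ is the energy dissipated per cycle, $X$ the displacement amplitude, $\omega$ the circular frequency, $k$ the stiffness, and $E_s$ the peak strain energy):

```latex
% Equivalent viscous damping (standard structural-dynamics relations, assumed
% for illustration): a dashpot c_eq dissipating the same energy per cycle as
% the real structure, and the corresponding damping ratio at resonance.
c_{eq} = \frac{E_d}{\pi\, \omega\, X^{2}},
\qquad
\zeta_{eq} = \frac{E_d}{4 \pi E_{s}}, \quad E_{s} = \tfrac{1}{2} k X^{2}
```

Measuring $E_d$ at several amplitudes then yields the amplitude dependence that the seismic studies mentioned above account for.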
References
Energy (physics) | Equivalent dumping coefficient | [
"Physics",
"Mathematics"
] | 170 | [
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities"
] |
9,363,980 | https://en.wikipedia.org/wiki/Western%20Union%20splice | The Western Union splice or lineman splice is a method of joining electrical cable, developed in the nineteenth century during the introduction of the telegraph and named for the Western Union telegraph company. This method can be used where the cable may be subject to loading stress. The wrapping pattern design causes the join to tighten as the conductors pull against each other.
History
In 1915, Practical Electric Wiring described it as being "by far the most widely used splice" in practical electrical wiring work. NASA included the splice in its technical standard Workmanship Standard for Crimping, Interconnecting Cables, Harnesses, and Wiring, first produced in 1998.
Construction
The 1915 textbook Practical Electric Wiring describes the construction of the Western Union splice in its short tie and long tie forms. The short tie splice is formed after stripping the insulation from a pair of wires for several inches each and crossing the wires left over right, as shown in figure part A; then a hooked cross (figure part B) is formed by holding the crossing point of the two wires, pulling the right wire tip toward the worker and pushing the left wire tip away, leaving the tips oriented vertically as shown. The wires are then held with pliers to the left of their crossing point while the right splice is formed by continuing to wind the wire tip away from the worker, creating 5–6 twists snug against the core wire and against each preceding twist. NASA recommends the twists be "tight, with no gaps between adjacent turns". The wires are then again held with pliers, but on the first-made twist, to the right of the crossing point, and the left splice is then formed by winding the remaining wire tip toward the worker for a comparable 5–6 snug twists. The splice wire ends are then trimmed as needed, and the splice may then be soldered, and/or covered (e.g., with a heat-shrunk tube of insulation).
Practical Electric Wiring described the splice as having two variations, the "short tie" (figure part D) and "long tie" (figure parts E or F), the latter having a "twist between wrappings [that] allows a better chance for solder to pass in between the wires". The book suggested the long tie variant was more suited to splices where soldering was intended. However, this was not backed up by NASA's testing.
Testing
The NASA tests included soldering, and were performed to an organizational standard operating procedure (NASA-STD-8739.3) for a solder termination, which includes a number of specific requirements, including "proper insulation spacing"; tight wrapping; trimming of wire ends to prevent protrusions through the solder; and over-sleeving with a transparent or translucent heat shrink seal to cover the completed splice and all exposed metal.
NASA found both the short and long tie variants to be strong when soldered. The splices were examined in tensile strength ("pull") tests on 16 and 22 American wire gauge wire; even the short tie variation of the Western Union splice performed well after soldering. The test splices never failed at the splice (instead breaking outside of the splice area), leaving NASA to conclude that "the solder connection at the splice was as strong or stronger than the un-spliced wires".
See also
T-splice
Rat-tail splice
References
Telecommunications equipment
Electrical wiring
Splices
Western Union | Western Union splice | [
"Physics",
"Engineering"
] | 714 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
9,364,565 | https://en.wikipedia.org/wiki/Technora | Technora is an aramid that is useful for a variety of applications that require high strength or chemical resistance. It is a brand name of the company Teijin Aramid.
Technora was used on January 25, 2004 to suspend the NASA Mars rover Opportunity from its parachute during descent.
It was also later used by NASA as one of the materials, combined with nylon and Kevlar, making up the parachute used to perform a braking manoeuvre during atmospheric entry of the rover Perseverance, which landed on Mars on February 18, 2021.
Production
Technora is produced by condensation polymerization of terephthaloyl chloride (TCl) with a mixture of p-phenylenediamine (PPD) and 3,4'-diaminodiphenylether (3,4'-ODA). The polymer is closely related to Teijin Aramid's Twaron and DuPont's Kevlar. Technora is derived from two different diamines, 3,4'-ODA and PPD, whereas Twaron is derived from PPD alone. Because only one amide solvent is used in this very straightforward procedure, spinning can be completed immediately after polymer synthesis.
Physical properties
Technora has a better strength-to-weight ratio than steel. It also has fire-resistant properties, which can be beneficial.
Major industrial uses
Automotive and other industries:
Turbo hoses
high pressure hoses
Timing and V-belts
mechanical rubber goods reinforcement
Linear tension
Optical fiber cables (OFC)
Ram air parachute suspension lines
ropes, wire ropes and cables
Umbilical cables
Electrical mechanical cable (EMC)
Windsurfing sails
Hangglider sails
Drumheads
Personal protective equipment
Poi (performance art)
See also
Vectran
References
Synthetic fibers
Materials
Organic polymers
Brand name materials
Cables | Technora | [
"Physics",
"Chemistry"
] | 376 | [
"Organic polymers",
"Synthetic fibers",
"Synthetic materials",
"Organic compounds",
"Materials",
"Matter"
] |
9,365,039 | https://en.wikipedia.org/wiki/Crabtree%20effect | The Crabtree effect, named after the English biochemist Herbert Grace Crabtree, describes the phenomenon whereby the yeast Saccharomyces cerevisiae produces ethanol (alcohol) in aerobic conditions at high external glucose concentrations, rather than producing biomass via the tricarboxylic acid (TCA) cycle, the usual aerobic process in most yeasts, e.g. Kluyveromyces spp. This phenomenon is observed in most species of the Saccharomyces, Schizosaccharomyces, Debaryomyces, Brettanomyces, Torulopsis, Nematospora, and Nadsonia genera. Increasing concentrations of glucose accelerate glycolysis (the breakdown of glucose), which results in the production of appreciable amounts of ATP through substrate-level phosphorylation. This reduces the need for oxidative phosphorylation via the TCA cycle and the electron transport chain, and therefore decreases oxygen consumption. The phenomenon is believed to have evolved as a competition mechanism (due to the antiseptic nature of ethanol) around the time when the first fruits on Earth fell from the trees. The Crabtree effect works by repressing respiration via the fermentation pathway, in a substrate-dependent manner.
Ethanol formation in Crabtree-positive yeasts under strictly aerobic conditions was first thought to be caused by the inability of these organisms to increase their rate of respiration above a certain value. This critical value, above which alcoholic fermentation occurs, depends on the strain and the culture conditions. More recent evidence demonstrates that the occurrence of alcoholic fermentation might not be primarily due to a limited respiratory capacity, but could instead be caused by a limit on the cellular Gibbs energy dissipation rate.
For S. cerevisiae in aerobic conditions, glucose concentrations below 150 mg/L did not result in ethanol production. Above this value, ethanol was formed at rates that increased up to a glucose concentration of 1000 mg/L. Thus, above 150 mg/L glucose the organism exhibited a Crabtree effect.
It was the study of tumor cells that led to the discovery of the Crabtree effect. Tumor cells have a similar metabolism, the Warburg effect, in which they favor glycolysis over the oxidative phosphorylation pathway.
References
Further reading
Biochemical reactions | Crabtree effect | [
"Chemistry",
"Biology"
] | 490 | [
"Biochemistry",
"Biochemical reactions"
] |
9,365,585 | https://en.wikipedia.org/wiki/NONMEM | NONMEM is a non-linear mixed-effects modeling software package developed by Stuart L. Beal and Lewis B. Sheiner in the late 1970s at University of California, San Francisco, and expanded by Robert Bauer at Icon PLC. Its name is an acronym for nonlinear mixed effects modeling but it is especially powerful in the context of population pharmacokinetics, pharmacometrics, and PK/PD models.
NONMEM models are written in NMTRAN, a dedicated model specification language that is translated into FORTRAN, compiled on the fly and executed by a command-line script. Results are presented as text output files, including tables. There are multiple interfaces that assist modelers with housekeeping of files, tracking of model development, goodness-of-fit evaluations and graphical output, such as PsN, Xpose and Wings for NONMEM. The current version of NONMEM is 7.5.
Model estimation
NONMEM estimates its models according to the principles of maximum likelihood estimation. Nonlinear mixed-effects models generally do not have closed-form solutions, and therefore specific estimation methods are applied: linearization methods such as first-order (FO), first-order conditional estimation (FOCE) or the Laplacian approximation (LAPL); approximation methods such as iterative two-stage (ITS), importance sampling (IMP) and stochastic approximation expectation maximization (SAEM); or direct sampling.
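The quantity all of these methods approximate is the marginal likelihood of the fixed effects, in which the subject-level random effects are integrated out; written below in generic mixed-effects notation (an assumption of this illustration, not NONMEM's own notation), with data $y_i$ for subject $i$, fixed effects $\theta$, random effects $\eta_i$ and residual variance $\sigma^2$:

```latex
% Marginal likelihood targeted by FO/FOCE/LAPL/SAEM-type methods
% (generic mixed-effects notation, assumed for illustration).
L(\theta, \Omega, \sigma^{2})
  = \prod_{i=1}^{N} \int p\!\left(y_{i} \mid \eta_{i};\, \theta, \sigma^{2}\right)
      \, p\!\left(\eta_{i} \mid \Omega\right)\, d\eta_{i},
\qquad \eta_{i} \sim \mathcal{N}(0, \Omega)
```

FO linearizes the model around $\eta_i = 0$, FOCE around each subject's empirical Bayes estimate of $\eta_i$, while IMP and SAEM approximate the integral by sampling.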
References
External links
Product site
NONMEM UsersNet Archive
Numerical software
Pharmacodynamics
Pharmacokinetics | NONMEM | [
"Chemistry",
"Mathematics"
] | 319 | [
"Pharmacology",
"Pharmacokinetics",
"Pharmacodynamics",
"Medicinal chemistry stubs",
"Numerical software",
"Pharmacology stubs",
"Mathematical software"
] |
9,366,097 | https://en.wikipedia.org/wiki/Truncation%20error | In numerical analysis and scientific computing, truncation error is an error caused by approximating a mathematical process.
Examples
Infinite series
A summation series for $e^x$ is given by an infinite series such as
$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$
In reality, we can only use a finite number of these terms, as it would take an infinite amount of computational time to make use of all of them. So let's suppose we use only the first three terms of the series; then
$e^x \approx 1 + x + \frac{x^2}{2!}$
In this case, the truncation error is
$\frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$
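A quick numerical check of this truncation (a minimal Python sketch; the choice of x and of three terms is arbitrary):

```python
import math

def exp_partial_sum(x, n_terms):
    """Sum of the first n_terms terms of the Taylor series for e**x."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

x = 1.0
approx = exp_partial_sum(x, 3)   # 1 + x + x^2/2!
print(approx)                    # 2.5
print(math.exp(x) - approx)      # truncation error, about 0.2183
```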
Example A:
Given the following infinite series, find the truncation error for $x = 0.75$ if only the first three terms of the series are used:
$S = 1 + x + x^2 + x^3 + \cdots, \qquad |x| < 1.$
Solution
Using only the first three terms of the series gives
$S_3 = 1 + 0.75 + 0.75^2 = 2.3125.$
The sum of an infinite geometrical series
$S = a + ar + ar^2 + ar^3 + \cdots, \qquad |r| < 1,$
is given by
$S = \frac{a}{1-r}.$
For our series, $a = 1$ and $r = 0.75$, to give
$S = \frac{1}{1-0.75} = 4.$
The truncation error hence is
$4 - 2.3125 = 1.6875.$
Differentiation
The definition of the exact first derivative of the function $f(x)$ is given by
$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$
However, if we are calculating the derivative numerically, $h$ has to be finite. The error caused by choosing $h$ to be finite is a truncation error in the mathematical process of differentiation.
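To illustrate how this truncation error shrinks with the step size, here is a minimal Python sketch (the test function, point and step sizes are choices of this illustration, not from the text):

```python
import math

def forward_difference(f, x, h):
    """Forward-difference approximation of f'(x) with a finite step h."""
    return (f(x + h) - f(x)) / h

# f(x) = e^x has derivative e^x, so the exact value at x = 1 is e.
exact = math.exp(1.0)
for h in [0.1, 0.05, 0.025]:
    approx = forward_difference(math.exp, 1.0, h)
    # The error scales like O(h): halving h roughly halves the error.
    print(h, exact - approx)
```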
Example A:
Find the truncation error in calculating the first derivative of $f(x) = 5x^3$ at $x = 7$ using a step size of $h = 0.25$.
Solution:
The first derivative of $f(x) = 5x^3$ is
$f'(x) = 15x^2,$
and at $x = 7$,
$f'(7) = 735.$
The approximate value is given by
$f'(7) \approx \frac{f(7.25) - f(7)}{0.25} = \frac{5(7.25)^3 - 5(7)^3}{0.25} = 761.5625.$
The truncation error hence is
$735 - 761.5625 = -26.5625.$
Integration
The definition of the exact integral of a function $f(x)$ from $a$ to $b$ is given as follows.
Let $f(x)$ be a function defined on a closed interval $[a,b]$ of the real numbers, $\mathbb{R}$, and
$P = \left\{ [x_0, x_1], [x_1, x_2], \dots, [x_{n-1}, x_n] \right\}$
be a partition of $I = [a,b]$, where $a = x_0 < x_1 < x_2 < \cdots < x_n = b$. Then
$\int_a^b f(x)\, dx = \lim_{\max \Delta x_i \to 0} \sum_{i=1}^{n} f(x_i^*)\, \Delta x_i,$
where $\Delta x_i = x_i - x_{i-1}$ and $x_i^* \in [x_{i-1}, x_i]$.
This implies that we are finding the area under the curve using infinite rectangles. However, if we are calculating the integral numerically, we can only use a finite number of rectangles. The error caused by choosing a finite number of rectangles as opposed to an infinite number of them is a truncation error in the mathematical process of integration.
Example A.
For the integral
$\int_3^9 x^2\, dx,$
find the truncation error if a two-segment left-hand Riemann sum is used with equal width of segments.
Solution
We have the exact value as
$\int_3^9 x^2\, dx = \left[\frac{x^3}{3}\right]_3^9 = \frac{9^3 - 3^3}{3} = 234.$
Using two rectangles of equal width to approximate the area (see Figure 2) under the curve, the approximate value of the integral is
$\int_3^9 x^2\, dx \approx f(3)(6-3) + f(6)(9-6) = (3^2)(3) + (6^2)(3) = 135.$
The truncation error hence is
$234 - 135 = 99.$
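The same computation can be written as a short program; a minimal Python sketch of the example above:

```python
def left_riemann_sum(f, a, b, n):
    """Left-hand Riemann sum of f over [a, b] with n equal-width segments."""
    width = (b - a) / n
    return sum(f(a + i * width) for i in range(n)) * width

f = lambda x: x ** 2
exact = (9 ** 3 - 3 ** 3) / 3          # antiderivative x^3/3 evaluated at 9 and 3
approx = left_riemann_sum(f, 3, 9, 2)  # two segments: heights f(3) and f(6)
print(exact, approx, exact - approx)   # 234.0 135.0 99.0
```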
Occasionally, by mistake, round-off error (the consequence of using finite precision floating point numbers on computers) is also called truncation error, especially if the number is rounded by chopping. That is not the correct use of "truncation error"; however, calling it truncating a number may be acceptable.
Addition
Truncation error can cause $(A + B) + C \neq A + (B + C)$ within a computer when $A = 10^{25}$, $B = -10^{25}$, $C = 1$, because $(A + B) + C = (0) + C = 1$ (like it should), while $A + (B + C) = A + (-10^{25}) = 0$. Here, $A + (B + C)$ has a truncation error equal to 1. This truncation error occurs because computers do not store the least significant digits of an extremely large integer.
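This loss of associativity is easy to reproduce with double-precision floating point; a minimal Python sketch (the particular values are chosen only to trigger the effect):

```python
A, B, C = 1e25, -1e25, 1.0

left = (A + B) + C   # A + B cancels exactly to 0.0, then + 1.0 gives 1.0
right = A + (B + C)  # B + C rounds back to -1e25 (the 1.0 is absorbed), giving 0.0

print(left, right)   # 1.0 0.0 -- floating-point addition is not associative
```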
See also
Quantization error
References
.
Numerical analysis | Truncation error | [
"Mathematics"
] | 565 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
9,366,660 | https://en.wikipedia.org/wiki/Callose | Callose is a plant polysaccharide. Its production at various places within a plant is due to glucan synthase-like (GLS) genes. It is produced to act as a temporary cell wall in response to stimuli such as stress or damage. Callose is composed of glucose residues linked together through β-1,3-linkages, and is termed a β-glucan. It is thought to be manufactured at the cell wall by callose synthases and is degraded by β-1,3-glucanases. Callose is very important for the permeability of plasmodesmata (Pd) in plants; this permeability is regulated by plasmodesmatal callose (PDC). PDC is made by callose synthases and broken down by β-1,3-glucanases (BGs). The amount of callose built up at the plasmodesmatal neck, determined by the interplay of callose synthases (CalSs) and β-1,3-glucanases, determines the conductivity of the plasmodesmata.
Formation and function
Callose is laid down at plasmodesmata, at the cell plate during cytokinesis, and during pollen development. The endothecium contains callose, which makes it thicker. Callose is produced in response to wounding, infection by pathogens, aluminium, and abscisic acid. When plant tissue is wounded, it is repaired by the deposition of callose at the plasmodesmata and cell wall; this process happens within minutes after damage. Even though callose is not a constitutive component of the plant cell wall, it is involved in the plant's defense mechanism. Deposits often appear on the sieve plates at the end of the growing season. Callose also forms immediately around the developing meiocytes and tetrads of sexually reproducing angiosperms, but is not found in related apomictic taxa. Callose deposition at the cell wall has been suggested as an early marker for direct somatic embryogenesis from cortical and epidermal cells of Cichorium hybrids. Temporary callose walls are also thought to be a barrier between a cell and its environment while the cell undergoes a genetic reprogramming that allows it to differentiate; callose walls can be found, for example, around nucellar embryos during nucellar embryony.
See also
Curdlan
References
Polysaccharides | Callose | [
"Chemistry"
] | 534 | [
"Carbohydrates",
"Polysaccharides"
] |
9,366,901 | https://en.wikipedia.org/wiki/I-Chen%20Wu | I-Chen Wu () is a professor at Department of Computer Science, National Chiao Tung University. He received his B.S. in Electronic Engineering from National Taiwan University (NTU), M.S. in computer science from NTU, and Ph.D. in computer science from Carnegie-Mellon University, in 1982, 1984 and 1993, respectively.
Wu invented a new game, named Connect6, a variation of the five-in-a-row game, and presented it at the 11th Advances in Computer Games Conference (ACG'11) in 2005. The game-tree complexity of this game is quite high, close to that of Chinese chess. Since its presentation in 2005, Connect6 has been a tournament item at the Computer Olympiad. He wrote a program, named NCTU6, which won the gold medal in the 2006 tournament. To date, there have been at least four game websites supporting this game, at least 10 web forums for it (in Traditional Chinese, Simplified Chinese, English, Spanish and multi-lingual), hundreds of thousands of games played over the Internet, several josekis (opening moves) and tsumegos (puzzle-like problems) developed, and one human Connect6 open tournament, held in summer 2006.
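Connect6 is played like five-in-a-row except that, after Black's single opening stone, players place two stones per turn and need a line of six or more to win; that winning condition is simple to check in code. Below is a minimal Python sketch, where the board encoding and function names are assumptions of this illustration, not taken from NCTU6:

```python
# Minimal Connect6 win check: a player wins with six or more of their
# stones in a row horizontally, vertically, or diagonally.
# The board encoding (dict of (x, y) -> 'B'/'W') is an assumption of this sketch.

def wins(board, player, last_move):
    x, y = last_move
    for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]:
        count = 1
        for sign in (1, -1):             # walk both directions from the stone
            step = 1
            while board.get((x + sign * step * dx, y + sign * step * dy)) == player:
                count += 1
                step += 1
        if count >= 6:
            return True
    return False

board = {(i, 0): 'B' for i in range(6)}  # six black stones in a row
print(wins(board, 'B', (3, 0)))          # True
```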
Wu also developed a game platform over the Internet and actively participated in its software development, leading a team that built the major software components and framework in both clients and servers. On the client side, the team led by him developed a portable AWT/Swing architecture for Java game development, which has been used in some game companies in Taiwan, including Sina Inc., Hinet, and ThinkNewIdea Inc.
References
External links
Homepage of I-Chen Wu
Homepage of Connect6
Chess Programming Kiwi -- I-Chen Wu
Academic staff of the National Chiao Tung University
Affiliated Senior High School of National Taiwan Normal University alumni
Artificial intelligence researchers
Carnegie Mellon University alumni
Living people
National Taiwan University alumni
Taiwanese computer scientists
Year of birth missing (living people) | I-Chen Wu | [
"Technology"
] | 401 | [
"Computing stubs",
"Computer specialist stubs"
] |
9,367,435 | https://en.wikipedia.org/wiki/Variable-range%20hopping | Variable-range hopping is a model used to describe carrier transport in a disordered semiconductor or in an amorphous solid by hopping over an extended temperature range. It has a characteristic temperature dependence of
$\sigma = \sigma_0 e^{-(T_0/T)^{\beta}}$
where $\sigma$ is the conductivity and $\beta$ is a parameter dependent on the model under consideration.
Mott variable-range hopping
The Mott variable-range hopping describes low-temperature conduction in strongly disordered systems with localized charge-carrier states and has a characteristic temperature dependence of
$\sigma = \sigma_0 e^{-(T_0/T)^{1/4}}$
for three-dimensional conductance (with $\beta = 1/4$), and is generalized to $d$ dimensions:
$\sigma = \sigma_0 e^{-(T_0/T)^{1/(d+1)}}.$
Hopping conduction at low temperatures is of great interest because of the savings the semiconductor industry could achieve if they were able to replace single-crystal devices with glass layers.
Derivation
The original Mott paper introduced a simplifying assumption that the hopping energy depends inversely on the cube of the hopping distance (in the three-dimensional case). Later it was shown that this assumption was unnecessary, and this proof is followed here. In the original paper, the hopping probability at a given temperature was seen to depend on two parameters, R the spatial separation of the sites, and W, their energy separation. Apsley and Hughes noted that in a truly amorphous system, these variables are random and independent and so can be combined into a single parameter, the range between two sites, which determines the probability of hopping between them.
Mott showed that the probability of hopping between two states of spatial separation $R$ and energy separation $W$ has the form:
$P \sim \exp\left[-2\alpha R - \frac{W}{kT}\right]$
where $\alpha^{-1}$ is the attenuation length for a hydrogen-like localised wave-function. This assumes that hopping to a state with a higher energy is the rate limiting process.
We now define the range between two states, $\mathcal{R} = 2\alpha R + W/kT$, so that $P \sim e^{-\mathcal{R}}$. The states may be regarded as points in a four-dimensional random array (three spatial coordinates and one energy coordinate), with the "distance" between them given by the range $\mathcal{R}$.
Conduction is the result of many series of hops through this four-dimensional array and, as short-range hops are favoured, it is the average nearest-neighbour "distance" between states which determines the overall conductivity. Thus the conductivity has the form
$\sigma \sim e^{-\bar{\mathcal{R}}_{nn}}$
where $\bar{\mathcal{R}}_{nn}$ is the average nearest-neighbour range. The problem is therefore to calculate this quantity.
The first step is to obtain $\mathcal{N}(\mathcal{R})$, the total number of states within a range $\mathcal{R}$ of some initial state at the Fermi level. For $d$ dimensions, and under particular assumptions, this turns out to be
$\mathcal{N}(\mathcal{R}) = K \mathcal{R}^{d+1}$
where $K \propto N kT/\alpha^{d}$, with $N$ the density of states at the Fermi level.
The particular assumptions are simply that the energy separation $W$ is well less than the band-width and the spatial separation $R$ comfortably bigger than the interatomic spacing.
Then the probability that a state with range $\mathcal{R}$ is the nearest neighbour in the four-dimensional space (or in general the $(d+1)$-dimensional space) is
$P_{nn}(\mathcal{R}) = \frac{\partial \mathcal{N}(\mathcal{R})}{\partial \mathcal{R}} \exp\left[-\mathcal{N}(\mathcal{R})\right],$
the nearest-neighbour distribution.
For the $d$-dimensional case, the mean nearest-neighbour range is then
$\bar{\mathcal{R}}_{nn} = \int_0^{\infty} \mathcal{R}\,(d+1) K \mathcal{R}^{d}\, e^{-K\mathcal{R}^{d+1}}\, d\mathcal{R}.$
This can be evaluated by making the simple substitution $t = K\mathcal{R}^{d+1}$ into the gamma function, $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt$.
After some algebra this gives
$\bar{\mathcal{R}}_{nn} = \frac{\Gamma\left(\frac{d+2}{d+1}\right)}{K^{\frac{1}{d+1}}}$
and hence that
$\sigma \propto \exp\left[-\left(\frac{T_0}{T}\right)^{\frac{1}{d+1}}\right].$
Non-constant density of states
When the density of states is not constant (odd power law N(E)), the Mott conductivity is also recovered, as shown in this article.
Efros–Shklovskii variable-range hopping
The Efros–Shklovskii (ES) variable-range hopping is a conduction model which accounts for the Coulomb gap, a small jump in the density of states near the Fermi level due to interactions between localized electrons. It was named after Alexei L. Efros and Boris Shklovskii who proposed it in 1975.
The consideration of the Coulomb gap changes the temperature dependence to
$\sigma = \sigma_0 e^{-(T_{ES}/T)^{1/2}}$
for all dimensions (i.e. $\beta = 1/2$).
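Experimentally, the Mott and ES regimes are often distinguished by testing which exponent best linearizes $\ln\sigma$ against $T^{-\beta}$. A minimal Python sketch of that check (the synthetic data and parameter values are assumptions of this illustration):

```python
import numpy as np

# Synthetic 3D Mott-law data: sigma = sigma0 * exp(-(T0/T)**(1/4)).
# sigma0 = 1 and T0 = 300 K are assumed values for this illustration.
T = np.linspace(5.0, 50.0, 40)
sigma = 1.0 * np.exp(-(300.0 / T) ** 0.25)

# ln(sigma) is linear in T**(-beta) only for the correct exponent:
# beta = 1/4 (Mott, d = 3) versus beta = 1/2 (Efros-Shklovskii).
for beta in (0.25, 0.5):
    x = T ** (-beta)
    slope, intercept = np.polyfit(x, np.log(sigma), 1)
    rms = np.sqrt(np.mean((np.log(sigma) - (slope * x + intercept)) ** 2))
    print(beta, rms)   # the Mott exponent fits this data essentially exactly
```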
See also
Mobility edge
Notes
Electrical phenomena
Electrical resistance and conductance | Variable-range hopping | [
"Physics",
"Mathematics"
] | 760 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electrical phenomena",
"Wikipedia categories named after physical quantities",
"Electrical resistance and conductance"
] |
9,367,526 | https://en.wikipedia.org/wiki/EcoProfit | ECOPROFIT (in full Ecological Project for Integrated Environmental Protection, in German Ökoprofit) was developed in 1991 in Graz, Austria by the Environmental Office of the City of Graz and Graz University of Technology.
ECOPROFIT is a cooperative approach between the regional authority and local companies with the goal of reducing costs for waste, raw materials, water, and energy. Reductions in these areas also reduce the environmental impact of the businesses. The model addresses production companies as well as hospitals, hotels, service companies and tradespeople.
Important elements of ECOPROFIT are workshops in cleaner production and individual consulting by experienced consultants. After the first year the companies are audited (legal compliance, environmental performance, environmental programme) and receive an official award from the city. A number of companies at the same time go for certification according to ISO 14001.
Additionally, most of the companies join the so-called ECOPROFIT CLUB. In regular workshop meetings they exchange experience and update their knowledge of environmental law and of new organisational and technical developments. The companies also receive support from consultants in identifying and implementing new measures.
The ECOPROFIT approach, as a model of cooperation between the community and regional companies, is used in 19 countries on 4 continents:
Austria (Graz, Vienna, Vorarlberg, Klagenfurt),
Germany (Munich, Berlin, Hamburg, Dortmund, Aachen, and 60 more cities),
Slovenia (Ljubljana, Maribor),
Italy (Modena),
Hungary (Pécs),
India (Gurgaon),
Colombia (Bucaramanga, Medellín),
Korea (Incheon, Busan),
China (Panzhihua) and others.
More than 5,000 companies worldwide participate in ECOPROFIT projects, most of them in Austria and Germany.
In Graz, a city with 260,000 inhabitants, approximately 2 million euros are now saved annually. Results include (as of 2002, first-year savings): 100,000 m³ of water, 2 GWh of electricity, 0.5 GWh of process heat, and 700 tons of solid waste. This makes ECOPROFIT an important means of reducing the industrial contribution to global warming. In the German projects, more than 100,000 tons of carbon dioxide are saved annually.
References
External links
ECOPROFIT Platform
CPC Austria GmbH - ECOPROFIT Training and Distribution Center
Stenum GmbH - International ECOPROFIT Trainer / Consulter
City of Graz - Regional ECOPROFIT homepage
Environmental mitigation
Environmental economics | EcoProfit | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 524 | [
"Environmental economics",
"Environmental mitigation",
"Environmental social science",
"Environmental engineering"
] |
9,367,832 | https://en.wikipedia.org/wiki/Sulfamide | Sulfamide (IUPAC name: sulfuric diamide) is a compound with the chemical formula and structure . Sulfamide is produced by the reaction of sulfuryl chloride with ammonia. Sulfamide was first prepared in 1838 by the French chemist Henri Victor Regnault.
Sulfamide functional group
In organic chemistry, the term sulfamide may also refer to the functional group which consists of at least one organic group attached to a nitrogen atom of sulfamide.
Symmetric sulfamides can be prepared directly from amines, sulfur dioxide gas and an oxidant:
In this example, the reactants are aniline, triethylamine (Et3N, where Et = ethyl group), and iodine. Sulfur dioxide is believed to be activated through a series of intermediates formed with the amine.
The sulfamide functional group is an increasingly common structural feature used in medicinal chemistry.
See also
Sulfamic acid
Sulfonamide
References
Sulfuryl compounds
Inorganic nitrogen compounds | Sulfamide | [
"Chemistry"
] | 200 | [
"Sulfuryl compounds",
"Functional groups",
"Inorganic compounds",
"Inorganic nitrogen compounds"
] |
9,368,062 | https://en.wikipedia.org/wiki/Magnetic%20tweezers | Magnetic tweezers (MT) are scientific instruments for the manipulation and characterization of biomolecules or polymers. These apparatus exert forces and torques on individual molecules or groups of molecules. They can be used to measure the tensile strength of, or the force generated by, molecules.
Most commonly magnetic tweezers are used to study mechanical properties of biological macromolecules like DNA or proteins in single-molecule experiments. Other applications are the rheology of soft matter, and studies of force-regulated processes in living cells. Forces are typically on the order of pico- to nanonewtons (pN to nN). Due to their simple architecture, magnetic tweezers are a popular biophysical tool.
In experiments, the molecule of interest is attached to a magnetic microparticle. The magnetic tweezer is equipped with magnets that are used to manipulate the magnetic particles whose position is measured with the help of video microscopy.
Construction principle and physics of magnetic tweezers
A magnetic tweezers apparatus consists of magnetic micro-particles, which can be manipulated with the help of an external magnetic field. The position of the magnetic particles is then determined with a microscope objective and a camera.
Magnetic particles
Magnetic particles for the operation in magnetic tweezers come with a wide range of properties and have to be chosen according to the intended application. Two basic types of magnetic particles are described in the following paragraphs; however there are also others like magnetic nanoparticles in ferrofluids, which allow experiments inside a cell.
Superparamagnetic beads
Superparamagnetic beads are commercially available with a number of different characteristics. The most common is the use of spherical particles of a diameter in the micrometer range. They consist of a porous latex matrix in which magnetic nanoparticles have been embedded. Latex is auto-fluorescent and may therefore be advantageous for the imaging of their position. Irregular shaped particles present a larger surface and hence a higher probability to bind to the molecules to be studied. The coating of the microbeads may also contain ligands able to attach the molecules of interest. For example, the coating may contain streptavidin which couples strongly to biotin, which itself may be bound to the molecules of interest.
When exposed to an external magnetic field, these microbeads become magnetized. The induced magnetic moment $\vec{m}$ is proportional to a weak external magnetic field $\vec{B}$:
$\vec{m} = \frac{V \chi}{\mu_0} \vec{B}$
where $\mu_0$ is the vacuum permeability. It is also proportional to the volume $V$ of the microspheres, which stems from the fact that the number of magnetic nanoparticles scales with the size of the bead. The magnetic susceptibility $\chi$ is assumed to be scalar in this first estimation and may be calculated by $\chi = 3\frac{\mu_r - 1}{\mu_r + 2}$, where $\mu_r$ is the relative permeability. In a strong external field, the induced magnetic moment saturates at a material dependent value $\vec{m}_{sat}$. The force experienced by a microbead can be derived from the potential $U = -\frac{1}{2}\vec{m}\cdot\vec{B}$ of this magnetic moment in an outer magnetic field:
$\vec{F} = -\nabla U = \frac{V\chi}{2\mu_0}\nabla B^2$
The outer magnetic field can be evaluated numerically with the help of finite element analysis or by simply measuring the magnetic field with the help of a Hall effect sensor. Theoretically it would be possible to calculate the force on the beads with these formulae; however the results are not very reliable due to uncertainties of the involved variables, but they allow estimating the order of magnitude and help to better understand the system. More accurate numerical values can be obtained considering the Brownian motion of the beads.
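As an order-of-magnitude illustration of the formulas above, a minimal Python sketch (the bead and field parameters are assumed values for illustration, not taken from the text):

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7           # vacuum permeability [T*m/A]

# Assumed illustrative parameters for a superparamagnetic bead:
radius = 0.5e-6                  # bead radius [m] (1 um diameter)
V = 4 / 3 * np.pi * radius**3    # bead volume [m^3]
chi = 0.4                        # effective susceptibility (dimensionless)

# Field decaying with distance z from the magnets; B0 and the decay
# length are assumptions chosen to mimic a typical permanent-magnet pair.
B = lambda z: 0.3 * np.exp(-z / 3e-3)        # [T]

# F = V*chi/(2*mu0) * d(B^2)/dz, with the gradient taken numerically
z, dz = 1e-3, 1e-6
dB2_dz = (B(z + dz)**2 - B(z - dz)**2) / (2 * dz)
F = V * chi / (2 * mu0) * abs(dB2_dz)
print(f"force ~ {F * 1e12:.1f} pN")          # piconewton range, as in the text
```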
Due to anisotropies in the stochastic distribution of the nanoparticles within the microbead, the magnetic moment is not perfectly aligned with the outer magnetic field, i.e. the magnetic susceptibility tensor cannot be reduced to a scalar. For this reason, the beads are also subjected to a torque which tries to align $\vec{m}$ and $\vec{B}$:
$\vec{\tau} = \vec{m} \times \vec{B}$
The torques generated by this method are typically much greater than $10^3\,\mathrm{pN\,nm}$, which is more than necessary to twist the molecules of interest.
Ferromagnetic nanowires
The use of ferromagnetic nanowires for the operation of magnetic tweezers enlarges their experimental application range. The length of these wires typically is in the order of tens of nanometers up to tens of micrometers, which is much larger than their diameter. In comparison with superparamagnetic beads, they allow the application of much larger forces and torques. In addition to that, they present a remnant magnetic moment. This allows the operation in weak magnetic field strengths. It is possible to produce nanowires with surface segments that present different chemical properties, which allows controlling the position where the studied molecules can bind to the wire.
Magnets
To be able to exert torques on the microbeads, at least two magnets are necessary, but many other configurations have been realized, ranging from only one magnet that only pulls the magnetic microbeads to a system of six electromagnets that allows full control of the 3-dimensional position and rotation via a digital feedback loop. The magnetic field strength decreases roughly exponentially with the distance from the axis linking the two magnets, on a typical scale of about the width of the gap between the magnets. Since this scale is rather large in comparison to the distances the microbead moves in an experiment, the force acting on it may be treated as constant. Therefore, magnetic tweezers are passive force clamps due to the nature of their construction, in contrast to optical tweezers, although they may also be used as position clamps when combined with a feedback loop. The field strength may be increased by sharpening the pole face of the magnet, which, however, also diminishes the area where the field may be considered constant. An iron ring connecting the outer poles of the magnets may help to reduce stray fields. Magnetic tweezers can be operated with both permanent magnets and electromagnets. The two techniques have their specific advantages.
Permanent magnets
Permanent magnets of magnetic tweezers are usually made of rare earth materials, like neodymium, and can reach field strengths exceeding 1.3 Tesla. The force on the beads may be controlled by moving the magnets along the vertical axis. Moving them up decreases the field strength at the position of the bead and vice versa. Torques on the magnetic beads may be exerted by turning the magnets around the vertical axis to change the direction of the field. Both the size of the magnets and their spacing are on the order of millimeters.
Electromagnets
The use of electromagnets in magnetic tweezers has the advantage that the field strength and direction can be changed just by adjusting the amplitude and the phase of the current for the magnets. For this reason, the magnets do not need to be moved, which allows faster control of the system and reduces mechanical noise. In order to increase the maximum field strength, a core of a soft paramagnetic material with high saturation and low remanence may be added to the solenoid. In any case, however, the typical field strengths are much lower compared to those of permanent magnets of comparable size. Additionally, using electromagnets requires high currents that produce heat that may necessitate a cooling system.
Bead tracking system
The displacement of the magnetic beads corresponds to the response of the system to the imposed magnetic field and hence needs to be precisely measured: In a typical set-up, the experimental volume is illuminated from the top so that the beads produce diffraction rings in the focal plane of an objective which is placed under the tethering surface. The diffraction pattern is then recorded by a CCD camera. The image can be analyzed in real time by a computer. The detection of the position in the plane of the tethering surface is not complicated since it corresponds to the center of the diffraction rings. The precision can be up to a few nanometers. For the position along the vertical axis, the diffraction pattern needs to be compared to reference images, which show the diffraction pattern of the considered bead at a number of known distances from the focal plane. These calibration images are obtained by keeping a bead fixed while displacing the objective, i.e. the focal plane, with the help of piezoelectric elements by known distances. With the help of interpolation, the resolution can reach a precision of up to 10 nm along this axis. The obtained coordinates may be used as input for a digital feedback loop that controls the magnetic field strength, for example, in order to keep the bead at a certain position.
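As an illustration of this look-up principle for the vertical coordinate, the following sketch matches a measured radial diffraction profile against a pre-recorded calibration stack and refines the best match by parabolic interpolation. It is a minimal sketch only; real trackers first extract the radial profile from the ring pattern and use more robust similarity measures, and all names here are hypothetical.

```python
import numpy as np

def z_from_calibration(profile, calib_profiles, calib_z):
    """Estimate bead height by matching a radial diffraction profile
    against a calibration stack recorded at known focal displacements.

    profile        -- 1D radial intensity profile of the current image
    calib_profiles -- array (n_z, n_r) of reference profiles
    calib_z        -- array (n_z,) of known z positions of the references
    """
    # Sum of squared differences against every reference profile
    errors = np.sum((calib_profiles - profile)**2, axis=1)
    i = np.argmin(errors)

    # Parabolic interpolation around the minimum for sub-step resolution
    if 0 < i < len(calib_z) - 1:
        e0, e1, e2 = errors[i - 1], errors[i], errors[i + 1]
        denom = e0 - 2 * e1 + e2
        shift = 0.5 * (e0 - e2) / denom if denom != 0 else 0.0
        return calib_z[i] + shift * (calib_z[i + 1] - calib_z[i])
    return calib_z[i]
```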
Non-magnetic beads are usually also added to the sample as a reference to provide a background displacement vector. They have a different diameter from the magnetic beads so that they are optically distinguishable. This is necessary to detect potential drift of the fluid. For example, if the density of magnetic particles is too high, they may drag the surrounding viscous fluid with them. The displacement vector of a magnetic bead can be determined by subtracting its initial position vector and this background displacement vector from its current position.
Force calibration
The determination of the force that is exerted by the magnetic field on the magnetic beads can be calculated considering thermal fluctuations of the bead in the horizontal plane: The problem is rotationally symmetric with respect to the vertical axis; hereafter one arbitrarily picked direction in the symmetry plane is called $x$. The analysis is the same for the direction orthogonal to the $x$-direction and may be used to increase precision. If the bead leaves its equilibrium position on the $x$-axis by $\delta x$ due to thermal fluctuations, it will be subjected to a restoring force $F_x$ that increases linearly with $\delta x$ in the first order approximation. Considering only absolute values of the involved vectors it is geometrically clear that the proportionality constant is the force exerted by the magnets $F$ over the length $l$ of the molecule that keeps the bead anchored to the tethering surface:

$$F_x = \frac{F}{l}\,\delta x.$$
The equipartition theorem states that the mean energy that is stored in this "spring" is equal to $\frac{1}{2}k_BT$ per degree of freedom. Since only one direction is considered here, the potential energy of the system reads:

$$\langle E_p\rangle = \frac{1}{2}\frac{F}{l}\langle\delta x^2\rangle = \frac{1}{2}k_BT.$$

From this, a first estimate for the force acting on the bead can be deduced:

$$F = \frac{k_BT\,l}{\langle\delta x^2\rangle}.$$
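In practice this estimate reduces to a few lines of analysis on the recorded trajectory. A minimal Python sketch, assuming drift-corrected positions in metres and a known tether length; the synthetic trajectory below stands in for real data.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def force_equipartition(x, tether_length, temperature=298.0):
    """First-order force estimate F = k_B*T*l / <dx^2> from in-plane
    fluctuations x (m) of a tethered bead; tether_length l in metres."""
    dx = x - np.mean(x)      # fluctuations about the mean position
    var = np.mean(dx**2)     # <dx^2>
    return K_B * temperature * tether_length / var

# Example with synthetic data: 10 nm RMS fluctuations, 1 um tether
rng = np.random.default_rng(0)
x = 10e-9 * rng.standard_normal(10_000)
print(f"{force_equipartition(x, 1e-6) * 1e12:.2f} pN")
```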
For a more accurate calibration, however, an analysis in Fourier space is necessary. The power spectrum density $P(\omega)$ of the position of the bead is experimentally available. A theoretical expression for this spectrum is derived in the following, which can then be fitted to the experimental curve in order to obtain the force exerted by the magnets on the bead as a fitting parameter. By definition this spectrum is the squared modulus of the Fourier transform of the position $\hat x(\omega)$ over the spectral bandwidth $\Delta f$:

$$P(\omega) = \frac{\left|\hat x(\omega)\right|^2}{\Delta f}$$
$\hat x(\omega)$ can be obtained considering the equation of motion for a bead of mass $m$:

$$m\frac{d^2x}{dt^2} = -6\pi R\eta\,\frac{dx}{dt} - \frac{F}{l}\,x + f(t)$$

The term $6\pi R\eta\,\frac{dx}{dt}$ corresponds to the Stokes friction force for a spherical particle of radius $R$ in a medium of viscosity $\eta$, and $\frac{F}{l}x$ is the restoring force which is opposed to the stochastic force $f(t)$ due to the Brownian motion. Here, one may neglect the inertial term $m\frac{d^2x}{dt^2}$, because the system is in a regime of very low Reynolds number.
The equation of motion can be Fourier transformed inserting the driving force and the position in Fourier space:

$$\hat f(\omega) - 6\pi R\eta\, i\omega\,\hat x(\omega) - \frac{F}{l}\,\hat x(\omega) = 0$$

This leads to:

$$\hat x(\omega) = \frac{\hat f(\omega)}{\frac{F}{l} + 6\pi R\eta\, i\omega}.$$
The power spectral density of the stochastic force $f(t)$ can be derived by using the equipartition theorem and the fact that Brownian collisions are completely uncorrelated:

$$\frac{\left|\hat f(\omega)\right|^2}{\Delta f} = 4k_BT\cdot 6\pi R\eta$$

This corresponds to the fluctuation-dissipation theorem. With that expression, it is possible to give a theoretical expression for the power spectrum:

$$P(\omega) = \frac{4k_BT\cdot 6\pi R\eta}{\left(\frac{F}{l}\right)^2 + \left(6\pi R\eta\right)^2\omega^2}$$

The only unknown in this expression, $F$, can be determined by fitting this expression to the experimental power spectrum. For more accurate results, one may subtract the effect due to finite camera integration time from the experimental spectrum before doing the fit.
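A sketch of such a fit is given below, with the stiffness $k = F/l$ as the single fitting parameter of the Lorentzian derived above. The frequency grid, noise level, and bead parameters are illustrative placeholders for real data, and spectral normalization conventions (one-sided vs. two-sided) are glossed over.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B, T = 1.380649e-23, 298.0         # J/K, K
gamma = 6 * np.pi * 0.5e-6 * 1e-3    # Stokes drag 6*pi*R*eta: R=0.5 um, water

def lorentzian_psd(f, k):
    """Theoretical spectrum with stiffness k = F/l as the only unknown."""
    omega = 2 * np.pi * f
    return 4 * K_B * T * gamma / (k**2 + (gamma * omega)**2)

# freq, psd_exp: measured spectrum (synthetic placeholders for real data)
freq = np.logspace(0, 3, 200)
psd_exp = lorentzian_psd(freq, 1e-6) * (1 + 0.05 * np.random.randn(200))

k_fit, _ = curve_fit(lorentzian_psd, freq, psd_exp, p0=[1e-6])
tether_length = 1e-6                  # m, assumed known
print(f"F = {k_fit[0] * tether_length * 1e12:.2f} pN")
```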
Another force calibration method is to use the viscous drag of the microbeads: the microbeads are pulled through the viscous medium while their position is recorded. Since the Reynolds number for the system is very low, it is possible to apply Stokes law to calculate the friction force, which is in equilibrium with the force exerted by the magnets:

$$F = 6\pi R\eta\, v.$$

The velocity $v$ can be determined from the recorded position values. The force obtained via this formula can then be related to a given configuration of the magnets, which may serve as a calibration.
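A corresponding sketch for this drag-based calibration, assuming positions recorded at a fixed frame rate; the bead radius and the viscosity of water are assumed values.

```python
import numpy as np

def force_from_drag(x, dt, radius=0.5e-6, viscosity=1e-3):
    """Stokes-drag force estimate F = 6*pi*R*eta*v for a bead pulled
    through the medium; x are recorded positions (m), dt the frame time (s)."""
    v = np.mean(np.gradient(x, dt))   # mean velocity from the trajectory
    return 6 * np.pi * radius * viscosity * abs(v)

# Example: a bead moving at 10 um/s recorded at 60 frames per second
t = np.arange(0, 1, 1/60)
x = 10e-6 * t
print(f"{force_from_drag(x, 1/60) * 1e12:.3f} pN")
```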
Typical experimental set-up
This section gives an example of an experiment carried out by Strick, Allemand and Croquette with the help of magnetic tweezers. A double-stranded DNA molecule is fixed with multiple binding sites on one end to a glass surface and on the other to a magnetic microbead, which can be manipulated in a magnetic tweezers apparatus. By turning the magnets, torsional stress can be applied to the DNA molecule. Rotations in the sense of the DNA helix are counted positively and vice versa. While twisting, the magnetic tweezers also allow stretching the DNA molecule. This way, torsion extension curves may be recorded at different stretching forces. For low forces (less than about 0.5 pN), the DNA forms supercoils, so-called plectonemes, which decrease the extension of the DNA molecule quite symmetrically for positive and negative twists. Augmenting the pulling force already increases the extension for zero imposed torsion. Positive twists lead again to plectoneme formation that reduces the extension. Negative twist, however, does not change the extension of the DNA molecule much. This can be interpreted as the separation of the two strands, which corresponds to the denaturation of the molecule. In the high force regime, the extension is nearly independent of the applied torsional stress. The interpretation is the appearance of local regions of highly overwound DNA. An important parameter of this experiment is also the ionic strength of the solution, which affects the critical values of the applied pulling force that separate the three force regimes.
History and development
Applying magnetic theory to the study of biology is a biophysical technique that started to appear in Germany in the early 1920s. Possibly the first demonstration was published by Alfred Heilbronn in 1922; his work looked at the viscosity of protoplasts. The following year, Freundlich and Seifriz explored rheology in echinoderm eggs. Both studies involved inserting magnetic particles into cells and observing the resulting movement in a magnetic field gradient.
In 1949 at Cambridge University, Francis Crick and Arthur Hughes demonstrated a novel use of the technique, calling it "The Magnetic Particle Method." The idea, which originally came from Dr. Honor Fell, was that tiny magnetic beads, phagocytosed by whole cells grown in culture, could be manipulated by an external magnetic field. The tissue culture was allowed to grow in the presence of the magnetic material, and cells that contained a magnetic particle could be seen with a high power microscope. As the magnetic particle was moved through the cell by a magnetic field, measurements about the physical properties of the cytoplasm were made. Although some of their methods and measurements were self-admittedly crude, their work demonstrated the usefulness of magnetic field particle manipulation and paved the way for further developments of this technique. The magnetic particle phagocytosis method continued to be used for many years to research cytoplasm rheology and other physical properties in whole cells.
An innovation in the 1990s led to an expansion of the technique's usefulness in a way that was similar to the then-emerging optical tweezers method. Chemically linking an individual DNA molecule between a magnetic bead and a glass slide allowed researchers to manipulate a single DNA molecule with an external magnetic field. Upon application of torsional forces to the molecule, deviations from free-form movement could be measured against theoretical standard force curves or Brownian motion analysis. This provided insight into structural and mechanical properties of DNA, such as elasticity.
Magnetic tweezers as an experimental technique has become exceptionally diverse in use and application. More recently, even more novel methods have been discovered or proposed. Since 2002, the potential for experiments involving many tethering molecules and parallel magnetic beads has been explored, shedding light on interaction mechanics, especially in the case of DNA-binding proteins. A technique was published in 2005 that involved coating a magnetic bead with a molecular receptor and the glass slide with its ligand. This allows for a unique look at receptor-ligand dissociation force. In 2007, a new method for magnetically manipulating whole cells was developed by Kollmannsberger and Fabry. The technique involves attaching beads to the extracellular matrix and manipulating the cell from the outside of the membrane to look at structural elasticity. This method continues to be used as a means of studying rheology, as well as cellular structural proteins. A study that appeared in 2013 used magnetic tweezers to mechanically measure the unwinding and rewinding of a single neuronal SNARE complex by tethering the entire complex between a magnetic bead and the slide, and then using the applied magnetic field force to pull the complex apart.
Biological applications
Magnetic tweezer rheology
Magnetic tweezers can be used to measure mechanical properties such as rheology, the study of matter flow and elasticity, in whole cells. The phagocytosis method previously described is useful for capturing a magnetic bead inside a cell. Measuring the movement of the beads inside the cell in response to manipulation from the external magnetic field yields information on the physical environment inside the cell and internal media rheology: viscosity of the cytoplasm, rigidity of internal structure, and ease of particle flow.
A whole cell may also be magnetically manipulated by attaching a magnetic bead to the extracellular matrix via fibronectin-coated magnetic beads. Fibronectin is a protein that will bind to extracellular membrane proteins. This technique allows for measurements of cell stiffness and provides insights into the functioning of structural proteins. In one such experimental setup, devised by Bonakdar and Schilling, et al. (2015) for studying the structural protein plectin in mouse cells, stiffness was measured as proportional to bead position in response to external magnetic manipulation.
Single-molecule experiments
Magnetic tweezers as a single-molecule method is decidedly the most common use in recent years. Through the single-molecule method, magnetic tweezers provide a close look into the physical and mechanical properties of biological macromolecules. Similar to other single-molecule methods, such as optical tweezers, this method provides a way to isolate and manipulate an individual molecule free from the influences of surrounding molecules. Here, the magnetic bead is attached to a tethering surface by the molecule of interest. DNA or RNA may be tethered in either single-stranded or double-stranded form, or entire structural motifs can be tethered, such as DNA Holliday junctions, DNA hairpins, or entire nucleosomes and chromatin. By acting upon the magnetic bead with the magnetic field, different types of torsional force can be applied to study intra-DNA interactions, as well as interactions with topoisomerases or histones in chromosomes.
Single-complex studies
Magnetic tweezers go beyond the capabilities of other single-molecule methods, however, in that interactions between and within complexes can also be observed. This has allowed recent advances in understanding more about DNA-binding proteins, receptor-ligand interactions, and restriction enzyme cleavage. A more recent application of magnetic tweezers is seen in single-complex studies. With the help of DNA as the tethering agent, an entire molecular complex may be attached between the bead and the tethering surface. In exactly the same way as with pulling a DNA hairpin apart by applying a force to the magnetic bead, an entire complex can be pulled apart and force required for the dissociation can be measured. This is also similar to the method of pulling apart receptor-ligand interactions with magnetic tweezers to measure dissociation force.
Comparison to other techniques
This section compares the features of magnetic tweezers with those of the most important other single-molecule experimental methods: optical tweezers and atomic force microscopy. The magnetic interaction is highly specific to the superparamagnetic microbeads used. The magnetic field has practically no effect on the sample. Optical tweezers have the problem that the laser beam may also interact with other particles of the biological sample due to contrasts in the refractive index. In addition, the laser may cause photodamage and sample heating. In the case of atomic force microscopy, it may also be hard to discriminate the interaction of the tip with the studied molecule from other nonspecific interactions.
Due to the low trap stiffness, the range of forces accessible with magnetic tweezers is lower in comparison with the two other techniques. The possibility to exert torque with magnetic tweezers is not unique: optical tweezers may also offer this feature when operated with birefringent microbeads in combination with a circularly polarized laser beam.
Another advantage of magnetic tweezers is that it is easy to carry out in parallel many single molecule measurements.
An important drawback of magnetic tweezers is the low temporal and spatial resolution due to the data acquisition via video-microscopy. However, with the addition of a high-speed camera, the temporal and spatial resolution has been demonstrated to reach the Angstrom-level.
References
Further reading
Biophysics
Measuring instruments
Particle traps | Magnetic tweezers | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 4,427 | [
"Molecular physics",
"Applied and interdisciplinary physics",
"Measuring instruments",
"Particle traps",
"Biophysics"
] |
9,368,078 | https://en.wikipedia.org/wiki/Hood%20mould | In architecture, a hood mould, hood, label mould (from Latin , lip), drip mould or dripstone is an external moulded projection from a wall over an opening to throw off rainwater, historically often in form of a pediment. This moulding can be terminated at the side by ornamentation called a label stop.
The hood mould was introduced into architecture in the Romanesque period, though it became much more common in the Gothic period. Later, with the increase in rectangular windows, hood moulds became more prevalent in domestic architecture.
Styles of hood moulding
References
Architectural elements | Hood mould | [
"Technology",
"Engineering"
] | 124 | [
"Building engineering",
"Architectural elements",
"Architecture stubs",
"Components",
"Architecture"
] |
9,368,110 | https://en.wikipedia.org/wiki/Logarithmically%20concave%20measure | In mathematics, a Borel measure μ on n-dimensional Euclidean space is called logarithmically concave (or log-concave for short) if, for any compact subsets A and B of and 0 < λ < 1, one has
where λ A + (1 − λ) B denotes the Minkowski sum of λ A and (1 − λ) B.
Examples
The Brunn–Minkowski inequality asserts that the Lebesgue measure is log-concave. The restriction of the Lebesgue measure to any convex set is also log-concave.
By a theorem of Borell, a probability measure on R^d is log-concave if and only if it has a density with respect to the Lebesgue measure on some affine hyperplane, and this density is a logarithmically concave function. Thus, any Gaussian measure is log-concave.
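As a concrete instance of the Gaussian case, one can verify directly that the standard Gaussian density is logarithmically concave (written here for identity covariance; the general case is the same with $\Sigma^{-1}$ in place of $I_n$):

```latex
f(x) = (2\pi)^{-n/2} \exp\!\left(-\tfrac{1}{2}\lVert x\rVert^2\right),
\qquad
\log f(x) = -\tfrac{n}{2}\log(2\pi) - \tfrac{1}{2}\lVert x\rVert^2,
\qquad
\nabla^2 \log f(x) = -I_n \preceq 0,
```

so $\log f$ is concave, and by the characterization above the corresponding Gaussian measure is log-concave.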
The Prékopa–Leindler inequality shows that a convolution of log-concave measures is log-concave.
See also
Convex measure, a generalisation of this concept
Logarithmically concave function
References
Measures (measure theory) | Logarithmically concave measure | [
"Physics",
"Mathematics"
] | 243 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
9,368,121 | https://en.wikipedia.org/wiki/Douglas%20Northcott | Douglas Geoffrey Northcott, FRS (31 December 1916 – 8 April 2005) was a British mathematician who worked on ideal theory.
Early life and career
Northcott was born Douglas Geoffrey Robertson in Kensington on 31 December 1916 to Clara Freda (née Behl) (1894–1958) and her first husband Geoffrey Douglas Spence Robertson (1894–1978). His mother remarried in 1919 to Arthur Hugh Kynaston Northcott (1887–1952). In 1935, he legally adopted his step-father's surname.
He was educated in London, then at Christ's Hospital and St John's College, Cambridge, where he started research under the supervision of G.H. Hardy.
His work was interrupted by active service during World War II. Captured at Singapore, he survived his time as a prisoner of war in Japan, and returned to Cambridge at the end of the war.
Back at Cambridge, he published his dissertation "Abstract Tauberian theorems with applications to power series and Hilbert series". He then turned to algebra under the influence of Emil Artin, whom he had met while visiting Princeton University. He became a Research Fellow of St John's College in 1948.
In 1949, he proved an important result in the theory of heights, namely that there are only finitely many algebraic numbers of bounded degree and bounded height. In analogy to this result, a set of algebraic numbers is said to satisfy the Northcott property if there are only finitely many elements of bounded height.
In 1952, he moved to the Town Trust Chair of Pure Mathematics at Sheffield University. He remained at Sheffield until his retirement in 1982, also serving as Head of Department and Dean of Pure Science.
In 1954, Douglas Northcott and David Rees introduced in a joint paper the Northcott-Rees theory of reductions and integral closures, which has subsequently been influential in commutative algebra.
Awards
Northcott was awarded the London Mathematical Society Junior Berwick Prize in 1953 and served as LMS Vice-President during 1968-69. He was elected Fellow of the Royal Society in 1961.
Family life
In 1949, at Cambridge, Northcott married Rose Hilda Austin (1917-1992); they had two daughters, Anne Patricia (born 1950) and Pamela Rose (1952-1992).
Publications
Northcott, D. G. Multilinear algebra. Cambridge University Press, Cambridge, 1984.
Northcott, D. G. A first course of homological algebra. Reprint of 1973 edition. Cambridge University Press, Cambridge-New York, 1980.
Northcott, D. G. Affine sets and affine groups. London Mathematical Society Lecture Note Series, 39. Cambridge University Press, Cambridge-New York, 1980.
Northcott, D. G. Finite free resolutions. Cambridge Tracts in Mathematics, No. 71. Cambridge University Press, Cambridge-New York-Melbourne, 1976.
Northcott, D. G. Lessons on rings, modules and multiplicities. Cambridge University Press, London 1968
Northcott, D. G. An introduction to homological algebra. Cambridge University Press, New York 1960
Northcott, D. G. Ideal theory. Cambridge Tracts in Mathematics and Mathematical Physics, No. 42. Cambridge, at the University Press, 1953.
References
20th-century British mathematicians
People educated at Christ's Hospital
Academics of the University of Sheffield
World War II prisoners of war held by Japan
Alumni of St John's College, Cambridge
Fellows of St John's College, Cambridge
Fellows of the Royal Society
People from Kensington
Algebraists
1916 births
2005 deaths | Douglas Northcott | [
"Mathematics"
] | 712 | [
"Algebra",
"Algebraists"
] |
9,368,946 | https://en.wikipedia.org/wiki/Neutral%20axis | The neutral axis is an axis in the cross section of a beam (a member resisting bending) or shaft along which there are no longitudinal stresses or strains.
Theory
If the section is symmetric, isotropic and is not curved before a bend occurs, then the neutral axis is at the geometric centroid of a beam or shaft. All fibers on one side of the neutral axis are in a state of tension, while those on the opposite side are in compression.
Since the beam is undergoing uniform bending, a plane on the beam remains plane. That is:

$$\gamma_{xy} = \gamma_{zx} = \tau_{xy} = \tau_{xz} = 0$$

where $\gamma$ is the shear strain and $\tau$ is the shear stress.
There is a compressive (negative) strain at the top of the beam, and a tensile (positive) strain at the bottom of the beam. Therefore by the Intermediate Value Theorem, there must be some point in between the top and the bottom that has no strain, since the strain in a beam is a continuous function.
Let L be the original length of the beam (span)
ε(y) is the strain as a function of coordinate on the face of the beam.
σ(y) is the stress as a function of coordinate on the face of the beam.
ρ is the radius of curvature of the beam at its neutral axis.
θ is the bend angle
Since the bending is uniform and pure, the strain at a distance y from the neutral axis (which itself has the inherent property of having no strain) is therefore:

$$\epsilon_x(y) = \frac{L(y) - L}{L} = \frac{(\rho - y)\theta - \rho\theta}{\rho\theta} = \frac{-y}{\rho}$$

Therefore the longitudinal normal strain $\epsilon_x$ varies linearly with the distance y from the neutral surface. Denoting $\epsilon_{max}$ as the maximum strain in the beam (at a distance c from the neutral axis), it becomes clear that:

$$\epsilon_{max} = \frac{c}{\rho}$$

Therefore, we can solve for ρ, and find that:

$$\rho = \frac{c}{\epsilon_{max}}$$

Substituting this back into the original expression, we find that:

$$\epsilon_x(y) = -\frac{y}{c}\,\epsilon_{max}$$
Due to Hooke's law, the stress in the beam is proportional to the strain by E, the modulus of elasticity:

$$\sigma_x(y) = E\,\epsilon_x(y)$$

Therefore:

$$\sigma_x(y) = -\frac{y}{c}\,\sigma_{max}$$
From statics, a moment (i.e. pure bending) consists of equal and opposite forces. Therefore, the total amount of force across the cross section must be 0:

$$\int \sigma_x\, dA = 0$$

Therefore:

$$\int -\frac{y}{c}\,\sigma_{max}\, dA = 0$$

Since y denotes the distance from the neutral axis to any point on the face, it is the only variable that changes with respect to dA. Therefore:

$$\int y\, dA = 0$$

Therefore the first moment of the cross section about its neutral axis must be zero. Therefore the neutral axis lies on the centroid of the cross section.
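For a cross section built from simple shapes, this condition locates the neutral axis directly as the area-weighted centroid. A minimal sketch for a hypothetical T-section (all dimensions are illustrative):

```python
# Locate the neutral axis of a composite section as the centroid:
# y_bar = sum(A_i * y_i) / sum(A_i), so the first moment about it is zero.

# Hypothetical T-section: 100x20 mm flange on top of a 20x80 mm web
parts = [
    # (width mm, height mm, centroid height above the base mm)
    (100, 20, 80 + 10),   # flange
    (20, 80, 40),         # web
]

areas = [w * h for w, h, _ in parts]
first_moments = [a * y for a, (_, _, y) in zip(areas, parts)]

y_bar = sum(first_moments) / sum(areas)
print(f"Neutral axis at {y_bar:.1f} mm above the base")
```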
Note that the neutral axis does not change in length when under bending. It may seem counterintuitive at first, but this is because there are no bending stresses in the neutral axis. However, there are shear stresses (τ) in the neutral axis, zero in the middle of the span but increasing towards the supports, as can be seen in this function (Jourawski's formula):

$$\tau = \frac{TQ}{wI}$$

where

T = shear force
Q = first moment of area of the section above/below the neutral axis
w = width of the beam
I = second moment of area of the beam
This definition is suitable for the so-called long beams, i.e. its length is much larger than the other two dimensions.
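For a rectangular cross section the formula can be evaluated in closed form, since Q(y) = (w/2)(h²/4 − y²) and I = wh³/12; the shear stress then peaks at the neutral axis at 1.5·T/A. A short sketch with illustrative numbers:

```python
def shear_stress_rect(T, w, h, y):
    """Jourawski shear stress tau = T*Q/(w*I) in a rectangular beam
    of width w and height h, at height y from the neutral axis."""
    I = w * h**3 / 12                # second moment of area
    Q = (w / 2) * (h**2 / 4 - y**2)  # first moment of the area beyond y
    return T * Q / (w * I)

# Illustrative: 10 kN shear force on a 100 mm x 200 mm section (SI units)
T, w, h = 10e3, 0.1, 0.2
print(shear_stress_rect(T, w, h, 0.0))   # maximum, at the neutral axis
print(1.5 * T / (w * h))                 # equals 1.5*T/A, as expected
```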
Arches
Arches also have a neutral axis if they are made of stone; stone is an inelastic medium and has little strength in tension. Therefore, as the loading on the arch changes, the neutral axis moves; if the neutral axis leaves the stonework, then the arch will fail.
This theory (also known as the thrust line method) was proposed by Thomas Young and developed by Isambard Kingdom Brunel.
Practical applications
Building trades workers should have at least a basic understanding of the concept of neutral axis, to avoid cutting openings to route wires, pipes, or ducts in locations which may dangerously compromise the strength of structural elements of a building. Building codes usually specify rules and guidelines which may be followed for routine work, but special situations and designs may need the services of a structural engineer to assure safety.
See also
Neutral plane
Second moment of inertia
References
Boilermaking
Solid mechanics | Neutral axis | [
"Physics",
"Chemistry"
] | 802 | [
"Metallurgical processes",
"Solid mechanics",
"Boilermaking",
"Mechanics"
] |
9,369,090 | https://en.wikipedia.org/wiki/Department%20of%20Physics%20and%20Astronomy%2C%20University%20of%20Manchester | The Department of Physics and Astronomy at the University of Manchester is one of the largest and most active physics departments in the UK, taking around 330 new undergraduates and 50 postgraduates each year, and employing more than 80 members of academic staff and over 100 research fellows and associates. The department is based on two sites: the Schuster Laboratory on Brunswick Street and the Jodrell Bank Centre for Astrophysics in Cheshire, international headquarters of the Square Kilometre Array (SKA).
According to the Academic Ranking of World Universities, the department is the 9th best physics department in the world and the best in Europe. It is ranked 2nd in the UK by Grade Point Average (GPA) according to the Research Excellence Framework (REF) in 2021, behind only the University of Sheffield. The University has a long history of physics dating back to 1874, which includes 12 Nobel Prize laureates, most recently Andre Geim and Konstantin Novoselov, who were awarded the Nobel Prize in Physics in 2010 for their discovery of graphene.
Research groups
The Department of Physics and Astronomy comprises eight research groups:
Astronomy and Astrophysics
Biological Physics
Condensed Matter Physics
Nonlinear Dynamics and Liquid Crystal Physics
Photon Physics
Particle Physics
Nuclear Physics
Theoretical Physics
Research in the department of Physics has been funded by the Particle Physics and Astronomy Research Council (PPARC), the Science and Technology Facilities Council (STFC) and the Royal Society.
Notable faculty
The department employs 53 Professors, including Emeritus Professors.
Teresa Anderson, Professor of Physics and co-founder of the Bluedot Festival
Philippa Browning, Professor of Astrophysics
Brian Cox, Professor of Particle Physics, working on the ATLAS experiment at the Large Hadron Collider
Philip Diamond, Professor of Photon Physics and Director General of the Square Kilometre Array (SKA)
Wendy Flavell, Vice Dean for Research and a Professor of Surface Physics
Jeffrey Forshaw, Professor of Particle Physics and co-author of The Quantum Universe
Sir Andre Geim, Regius Professor & Royal Society Research Professor
Sir Konstantin Novoselov, Langworthy Professor of Physics
Tim O'Brien, Professor of Astrophysics
Terry Wyatt, Professor of Particle Physics
Notable alumni and former staff
Sarah Bridle, Professor of Food, Climate and Society at the University of York
Neil Burgess, University College London
Tamsin Edwards, King's College London
Yvonne Elsworth, University of Birmingham
Danielle George, Professor of Radiofrequency Engineering
History
The department has origins dating back to 1874 when Balfour Stewart was appointed the first Langworthy Professor of Physics at Owens College, Manchester. Stewart was the first to identify an electrified atmospheric layer (now known as the ionosphere) which could distort the Earth's magnetic field. The theory of the ionosphere was postulated by Carl Friedrich Gauss in 1839; Stewart published the first experimental confirmation of the theory in 1878. Since then, the department has hosted many award-winning scientists including:
Hans Bethe, awarded the Nobel Prize in Physics in 1967
Patrick Blackett, Baron Blackett, awarded the Nobel Prize in Physics in 1948
Niels Bohr, awarded the Nobel Prize in Physics in 1922
Sir William Lawrence Bragg, discovered Bragg's law and awarded the Nobel Prize in Physics in 1915
Sir James Chadwick, awarded the Nobel Prize in Physics in 1935
Sir John Cockcroft, awarded the Nobel Prize in Physics in 1951
Rod Davies, Professor of Radio Astronomy
Richard Davis, Professor of Astrophysics
Samuel Devons
Brian Flowers, Baron Flowers
Sir Francis Graham-Smith, Astronomer Royal from 1982 to 1990
Henry Hall, who built the first dilution refrigerator
Sir Bernard Lovell, creator of the Lovell Telescope at the Jodrell Bank Observatory
Henry Moseley, creator of Moseley's law
Nevill Francis Mott, awarded the Nobel Prize in Physics in 1977
Ernest Rutherford, awarded the Nobel Prize in Chemistry in 1908 for splitting the atom
Sir Arthur Schuster
Balfour Stewart, first Langworthy Professor of Physics
Sir Joseph John "J. J." Thomson, studied Physics at Owens College, Manchester aged 14, went on to run the Cavendish Laboratory in Cambridge and was awarded the 1906 Nobel Prize in Physics.
In 2004, the two separate departments of Physics at the Victoria University of Manchester and the University of Manchester Institute of Science and Technology (UMIST) were merged to form the current Department of Physics and Astronomy at the University of Manchester. The department was known as the School of Physics and Astronomy until a 2019 reshuffle.
Emeritus professors
The department is also home to several Emeritus Scientists, pursuing their research interests after their formal retirement including:
Alexander Donnachie, Research Professor
Andrew Lyne, Emeritus Professor and co-discoverer of the binary pulsar
Robin Marshall, Professor of Physics and Biology
Michael Moore, Emeritus Professor of Theoretical Physics
References
Physics
Astronomy education
Physics departments in the United Kingdom
Astronomy in the United Kingdom
Professional education in Manchester | Department of Physics and Astronomy, University of Manchester | [
"Astronomy"
] | 966 | [
"Astronomy education"
] |
9,369,098 | https://en.wikipedia.org/wiki/Laboratory%20oven | Laboratory ovens are a common piece of equipment that can be found in electronics, materials processing, forensic, and research laboratories. These ovens generally provide pinpoint temperature control and uniform temperatures throughout the heating process. The following applications are some of the common uses for laboratory ovens: annealing, die-bond curing, drying or dehydrating, Polyimide baking, sterilizing, evaporating. Typical sizes are from one cubic foot to . Some ovens can reach temperatures that are higher than 300 degrees Celsius. These temperatures are then applied from all sides of the oven to provide constant heat to sample.
Laboratory ovens can be used in numerous different applications and configurations, including clean rooms, forced convection, horizontal airflow, inert atmosphere, natural convection, and pass through.
There are many types of laboratory ovens that are used throughout laboratories. Standard digital ovens are mainly used for drying and heating processes while providing temperature control and safety. Heavy duty ovens are used more in industrial laboratories and provide testing and drying for biological samples. High temperature ovens are custom built and have an additional insulation lining, which is needed because their temperatures can reach up to 500 degrees Celsius. Other forms of the laboratory oven include vacuum ovens, forced air convection ovens, and gravity convection ovens.
Forensic labs use vacuum ovens that have been configured in specific ways to assist in developing fingerprints. Gravity convection ovens are used for biological purposes such as removing biological contaminants from samples. Along with forced-air ovens, they are also used in environmental studies to dry out samples that have been taken. These samples are weighed before and after to calculate the amount of moisture in the sample.
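The moisture calculation mentioned above is simple gravimetry; a minimal sketch of the wet-basis percentage (the sample masses are hypothetical):

```python
def moisture_content_percent(mass_wet, mass_dry):
    """Wet-basis moisture content from masses before and after oven drying."""
    return (mass_wet - mass_dry) / mass_wet * 100

print(f"{moisture_content_percent(52.4, 41.8):.1f}% moisture")  # example values
```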
Laboratory oven safety
Laboratory ovens contain many components and procedures that can be harmful to the user. Proper procedure and safety practices can help reduce the number of injuries and oven malfunctions when using laboratory ovens. Before the oven is used, check to make sure that it is still in good working condition. All temperature sensing devices need to be operational and should shut off the oven if temperatures exceed their limits. If the oven is not operational, it must be unplugged and labeled with the statement "Defective Equipment" on its surface.
Potential hazards that can be faced when using laboratory ovens are fire hazards, health hazards, and burn hazards. Plastic materials that can't withstand the temperatures of the oven will melt and ignite, which can cause a fire to start in the oven and room. Checking materials before continuing with experiments will help prevent potential fires. If some items are placed in the oven and haven't been cleaned properly, the heat will cause the residue of past experiments to become airborne. Properly cleaning and washing material before use is a great way to remove this hazard. Avoid touching hot surfaces on the oven when it is being used; not doing so will result in the user being severely burned. The equipment needed while using the oven includes the following: lab coat, eye/face protection, and heat resistant gloves. Rubber sleeve protectors and aprons will also be helpful in using the ovens. If the proper safety guidelines and equipment are used, the chance of problems occurring is much lower.
References
Convection
Laboratory equipment | Laboratory oven | [
"Physics",
"Chemistry"
] | 673 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Thermodynamics"
] |
9,369,367 | https://en.wikipedia.org/wiki/Piggybacking%20%28Internet%20access%29 | Piggybacking on Internet access is the practice of establishing a wireless Internet connection by using another subscriber's wireless Internet access service without the subscriber's explicit permission or knowledge. It is a legally and ethically controversial practice, with laws that vary by jurisdiction around the world. While completely outlawed or regulated in some places, it is permitted in others.
A customer of a business providing hotspot service, such as a hotel or café, is generally not considered to be piggybacking, though non-customers or those outside the premises who are simply in reach may be. Many such locations provide wireless Internet access as a free or paid-for courtesy to their patrons or simply to draw people to the area. Others near the premises may be able to gain access.
Piggybacking is distinct from wardriving, which involves only the logging or mapping of the existence of access points.
Background
Piggybacking has become a widespread practice in the 21st century due to the advent of wireless Internet connections and wireless access points. Computer users who either do not have their own connections or who are outside the range of their own might find someone else's by wardriving or luck and use that one.
However, those residing near a hotspot or another residence with the service have been found to have the ability to piggyback off such connections without patronizing these businesses, which has led to more controversy. While some may be in reach from their own home or nearby, others may be able to do so from the parking lot of such an establishment, from another business that generally tolerates the user's presence, or from the public domain. Others, especially those living in apartments or town houses, may find themselves able to use a neighbour's connection.
Wi-Fi hotspots, unsecured and secured, have been recorded to some degree with GPS-coordinates. Some sites host searchable databases or maps of the locations of user-submitted access points. The activity of finding and mapping locations has also been crowdsourced by many smartphone apps.
Long range antennas can be hooked up to laptop computers with an external antenna jack, which allows a user to pick up a signal from as far as several kilometers away. Since unsecured wireless signals can be found readily in most urban areas, laptop owners may find free or open connections almost anywhere. While 2.4 and 5 GHz antennas are commercially available and easily purchased from many online vendors, they are also relatively easy to make. Laptops and tablets that lack external antenna jacks can rely on external Wi-Fi network cards, many requiring only USB, which the laptop can itself easily provide from its own battery.
Reasons
There are many reasons why Internet users desire to piggyback on other's networks.
For some, the cost of Internet service is a factor. Many computer owners who cannot afford a monthly subscription to an Internet service, who only use it occasionally, or who otherwise wish to save money and avoid paying, will routinely piggyback from a neighbour or a nearby business, or visit a location providing this service without being a paying customer. If the business is large and frequented by many people, this may go largely unnoticed.
Yet other piggybackers are regular subscribers to their own service, but are away from home when they wish to gain Internet access and do not have their own connection available at all or at an agreeable cost.
Often, a user will access a network completely by accident, as the network access points and computer's wireless cards and software are designed to connect easily by default. This is common when away from home or when the user's own network is not behaving correctly. Such users are often unaware that they are piggybacking, and the subscriber has not noticed. Regardless, piggybacking is difficult to detect unless the user can be viewed by others using a computer under suspicious circumstances.
Less often, it is used as a means of hiding illegal activities, such as downloading child pornography or engaging in identity theft. This is one main reason for controversy.
Network owners leave their networks unsecured for a variety of reasons. They may desire to share their Internet access with their neighbours or the general public or may be intimidated by the knowledge and effort required to secure their network while making it available to their own devices. Some wireless networking devices may not support the latest security mechanisms, and users must therefore leave their network unsecured. For example, the Nintendo DS and Nintendo DS Lite can only access wireless routers using the discredited WEP standard, however, the Nintendo DSi and Nintendo 3DS both support WPA encryption. Given the rarity of such cases where hosts have been held liable for the activities of piggybackers, they may be unaware or unconcerned about the risks they incur by not securing their network, or of a need for an option to protect their network.
Some jurisdictions have laws requiring residential subscribers to secure their networks (e.g., in France "négligence caractérisée" in HADOPI). Even where not required by law, landlords might request that tenants secure their networks as a condition of their lease.
Legality
Views
Views on the ethics of piggybacking vary widely. Many support the practice by stating that it is harmless and benefits the piggybacker at no expense to others, but others criticize it with terms like "leeching," "mooching," or "freeloading." Different analogies are made in public discussions to relate the practice to more familiar situations. Advocates compare the practice to the following:
sitting behind other passengers on a train and reading their newspaper over their shoulder.
enjoying the music a neighbour is playing in one's backyard.
using a drinking fountain.
sitting in a chair put in a public place.
reading from the light of a porch light or streetlamp.
accepting an invitation to a party since unprotected wireless routers can be interpreted as being open to use.
borrowing a cup of sugar.
Opponents to piggybacking compare the practice to the following:
entering a home just because the door is unlocked.
hanging on the outside of a bus to obtain a free ride.
connecting one's own wire to a neighbour's house to obtain free cable TV service when the neighbour is a subscriber.
The piggybacker uses the connection paid for by another without sharing the cost. That is especially common in an apartment building in which many residents live within the normal range of a single wireless connection. Some residents can gain free Internet access while others pay. Many ISPs charge monthly rates, however, and so there is no difference in cost to the network owner.
Excessive piggybacking may slow the host's connection, with the host typically unaware of the reason for the reduction of speed. That is more of a problem if many persons are engaging in this practice, such as in an apartment or near a business.
Piggybackers may engage in illegal activity such as identity theft or child pornography without much of a trail to their own identity. That leaves network owners subject to investigation for crimes of which they are unaware. While persons engaging in piggybacking are generally honest citizens, a smaller number are breaking the law in that manner and so avoid identification by investigators. That, in particular, has led to some anti-piggybacking laws.
Some access points, when the factory default settings are used, are configured to provide wireless access to all who request it. Some commentators argue that those who set up access points without enabling security measures are offering their connection to the community. Many people intentionally leave their networks open to allow neighbours casual access, with some joining wireless community networks to share bandwidth freely. It has largely become good etiquette to leave access points open for others to use, just as someone expects to find open access points while on the road.
Jeffrey L. Seglin, an ethicist for the New York Times, recommends notifying network owners if they are identifiable, but he says there is nothing inherently wrong with accessing an open network and using the connection. "The responsibility for deciding whether others should be able to tap into a given access belongs squarely on the shoulders of those setting up the original connection."
Similarly, Randy Cohen, the author of The Ethicist column for The New York Times Magazine and National Public Radio, says that one should attempt to contact the owner of a regularly used network and offer to contribute to the cost. However, he points out that network owners can easily password protect their networks and quotes the attorney Mike Godwin to conclude that open networks likely represent indifference on the part of the network owner and so accessing them is morally acceptable, if it is not abused.
The policy analyst Timothy B. Lee (not to be confused with Tim Berners-Lee) writes in the International Herald Tribune that the ubiquity of open wireless points is something to celebrate. He says that borrowing a neighbour's Wi-Fi is like sharing a cup of sugar, and leaving a network open is just being a good neighbour.
Techdirt blogger Mike Masnick responded to an article in Time Magazine to express his disagreement with why a man was arrested for piggybacking a cafe's wireless medium. The man had been charged with breaking Title 18, Part 1, Chapter 47, of the United States Code, which states and includes anyone who "intentionally accesses a computer without authorization or exceeds authorized access." The writer himself is not sure what that title really means or how it applies to contemporary society since the code was established regarding computers and their networks during the Cold War era.
In the technical legality of the matter, Masnick believes the code was not broken because the access point owner did not secure the device specifically for authorized users. Therefore the device was implicitly placed into a status of "authorized." Lev Grossman, with Time Magazine, is on the side of most specialists and consumers, who believe the fault, if there is any, is mostly that of the network's host or owner.
An analogy commonly used in this arena of debate equates wireless signal piggybacking with entering a house with an open door. Both are supposed to be equatable, but the analogy is tricky, as it does not take into account unique differences regarding the two items in reference, which ultimately leave the analogy flawed.
The key to the flaw in the analogy is that with an unprotected access point, the default status is for all users to be authorized. An access point is an active device that initiates the announcement of its services and, if set up securely, allows or denies authorization to its visitors.
A house door, on the other hand, has physical attributes that distinguish access to the house as authorized or unauthorized by its owner. Even with an open house door, it is plain whether one has been invited to that house by its owner and if entrance will be authorized or denied. A house owner's door is passive but has an owner who knows the risks of leaving their door open and house unprotected in the absence of the gate keeping presence. Equally, wireless access point owners should be aware that security risks exist when they leave their network unprotected. In that scenario, the owner has made the decision to allow the gatekeeper or access point to authorize all who attempt to connect because the gatekeeper was not told whom not to let in.
Prevention
Laws do not have the physical ability to prevent such action from occurring, and piggybacking may be practiced with negligible detection.
The owner of any wireless connection has the ability to block access from outsiders by engaging wireless LAN security measures. Not all owners do so, and some security measures are more effective than others. As with physical security, choice is a matter of trade-offs involving the value of what is being protected, the probability of its being taken, and the cost of protection. An operator merely concerned with the possibility of ignorant strangers leeching Internet access may be less willing to pay a high cost in money and convenience than one who is protecting valuable secrets from experienced and studious thieves. More security-conscious network operators may choose from a variety of security measures to limit access to their wireless network, including:
Hobbyists, computer professionals and others can apply Wired Equivalent Privacy (WEP) to many access points without cumbersome setup, but it offers little in the way of practical security against similarly studious piggybackers. It is cryptographically very weak, so an access key can easily be cracked. Its use is often discouraged in favor of other more robust security measures, but many users feel that any security is better than none or are unaware of any other. In practice, this may simply mean that nearby non-WEP networks are more accessible targets. WEP is sometimes known to slow down network traffic in the sense that the WEP implementation causes extra packets to be transmitted across the network. Some claim that "Wired Equivalent Privacy" is a misnomer, but it generally fits because wired networks are not particularly secure either.
Wi-Fi Protected Access (WPA), as well as WPA2 and EAP are more secure than WEP. As of May 2013, 44.3 percent of all wireless networks surveyed by WiGLE use WPA or WPA2.
MAC address authentication in combination with discretionary DHCP server settings allow a user to set up an "allowed MAC address" list. Under this type of security, the access point will only give an IP Address to computers whose MAC address is on the list. Thus, the network administrator would obtain the valid MAC addresses from each of the potential clients in their network. Disadvantages to this method include the additional setup. This method does not prevent eavesdropping traffic sent over the air (there is no encryption involved). Methods to defeat this type of security include MAC address spoofing, detailed on the MAC address page, whereby network traffic is observed, valid MACs are collected, and then used to obtain DHCP leases. It is also often possible to configure IP for a computer manually, ignoring DHCP, if sufficient information about the network is known (perhaps from observed network traffic).
IP security (IPsec) can be used to encrypt traffic between network nodes, reducing or eliminating the amount of plain text information transmitted over the air. This security method addresses privacy concerns of wireless users, as it becomes much more difficult to observe their wireless activity. Difficulty of setting up IPsec is related to the brand of access point being used. Some access points may not offer IPsec at all, while others may require firmware updates before IPsec options are available. Methods to defeat this type of security are computationally intensive to the extent that they are infeasible using readily-available hardware, or they rely on social engineering to obtain information (keys, etc.) about the IPsec installation.
VPN options such as tunnel-mode IPSec or OpenVPN can be difficult to set up, but often provide the most flexible, extendable security, and as such are recommended for larger networks with many users.
Wireless intrusion detection systems can be used to detect the presence of rogue access points which expose a network to security breaches. Such systems are particularly of interest to large organizations with many employees.
Flash a 3rd party firmware such as OpenWrt, Tomato or DD-WRT with support for RADIUS.
Honeypot (computing) involves setting up a computer on a network just to see who comes along and does something on the open access point.
Disabling SSID broadcasts has been recommended in the past as a security measure, although it only hides networks superficially. MAC addresses of routers are still broadcast, and can be detected using special means. But worse, a device that once connected to a hidden SSID will continuously transmit probe requests for this SSID and is vulnerable to the Evil Twin attack. Therefore, SSID hiding can no longer be considered a security measure.
Alternatives
There are several alternatives to piggybacking. Internet access is available on many data plans for smartphones and PDAs. Although it may have browsing limitations compared with Internet access from traditional Internet service providers for desktop or laptop computers, the Internet can be accessed anywhere there is an adequately strong data signal. Some mobile phone service providers offer mobile Internet service to other devices via a data connection from the mobile phone. Also known as tethering, one can interface to their phone either wirelessly using Bluetooth or Wi-Fi or wired via cable allowing access to the Internet anywhere there is a cell network signal.
Many jurisdictions have been experimenting with statewide, province-wide, county-wide or municipal wireless network access. On September 20, 2005, Google WiFi was announced as a municipal wireless mesh network in Mountain View, California. Baltimore County, Maryland provides free Wi-Fi access at government offices, libraries, and county facilities. This service was first provided in May 2007 in the central business district of the county seat, Towson, and gradually expanded throughout the remainder of the county. When the service was expanded to more public areas in 2014, Baltimore's acting chief technology officer, L. Jerome Mullen, remarked, "Projects like this are just the beginning of the opportunities that remain as we strengthen and expand the City's fiber optic network. We are building digital city infrastructure, and the possibilities are endless." In New York City, the Department of Parks and Recreation provides free Wi-Fi in parks across the city. BAI Communications was contracted by municipal public transportation authorities to install free Wi-Fi in underground subway stations in Toronto, Canada and in all 279 Manhattan, Queens, and Bronx underground subway stations in New York City. On January 8, 2013, Google and the Chelsea Improvement Company, a local public advocacy group, announced that they would install free Wi-Fi in the New York City neighborhood of Chelsea. New York Senator Chuck Schumer said at the press conference, "It's not very expensive at all—just a smidgeon of what Sandy cost. The mayor and I said maybe we could get this done for all of New York. We look forward to the day when all of New York has free Wi-Fi." On November 17, 2014, the mayor of New York City, Bill de Blasio, announced LinkNYC, an infrastructure project to create a free, encrypted, gigabit wireless network to cover New York City by replacing the city's payphones with Wi-Fi hotspots and web browser kiosks where free phone calls could also be made. These pilot programs may result in similar services being launched and interconnected nationwide.
Free Internet access hotspots have also been opened by a wide range of organisations. Companies sell hardware and network management services to establish hotspots. Other hotspot-based efforts have been launched with the intention of providing global, low-cost or free Internet access. Fon is a wireless router vendor which allows owners of its routers to share Internet access with other owners of Fon routers. Users who do not own a Fon router can also connect at a small price. Guifi.net is a free, open, international telecommunications community network organized and expanded by individuals, companies and administrations. On November 27, 2012, the Electronic Frontier Foundation and a coalition of nine other groups launched OpenWireless.org, an Internet activism project which seeks to increase Internet access by encouraging individuals and organisations to configure their wireless routers to offer a separate public wireless guest network or to open their network completely.
See also
Evil twin phishing
Exposed terminal problem
Fixed Wireless Data
Hidden terminal problem
IEEE 802.11
Legality of piggybacking
Local area network
Wardriving
Wireless network
References
External links
Local area networks
Wireless networking
Wireless access points | Piggybacking (Internet access) | [
"Technology",
"Engineering"
] | 4,045 | [
"Wireless networking",
"Computer networks engineering"
] |
9,371,968 | https://en.wikipedia.org/wiki/Roskamp%20Institute | The Roskamp Institute, was co-founded by Robert and Diane Roskamp, and Fiona Crawford and Michael Mullan in Sarasota, Florida in 2003. It is a nonprofit biomedical research facility specializing neurological research including Alzheimer's disease, traumatic brain injury, Gulf War syndrome, and posttraumatic stress disorder. It also operates an onsite neurology clinic. The institute is focused on finding the causes and treatments for neuropsychiatric and neurodegenerative diseases.
The institute's lead researchers, Michael Mullan and Fiona Crawford, were members of a team of scientists who discovered the first genetic errors causing Alzheimer's disease in 1991 in the APP gene in early onset familial cases. Mullan and Crawford also discovered the Swedish mutation which has been incorporated into transgenic mice which are widely used to understand the disease and test new treatments.
The institute is particularly focused on translational research that can lead to novel drug or other therapeutic interventions in neurodegenerative disorders. In this regard, Institute scientists discovered that certain members of a class of drugs called dihydropyridines [DHPs] can lower the levels of amyloid beta in the brains of transgenic models of the disease and decided to take one of them, nilvadipine, forward into clinical trials for Alzheimer's disease. This work was conducted by Archer Pharmaceuticals, a for-profit spin off of the institute, headed by Mullan. In partnership with colleagues at Trinity College, Dublin led by Brian Lawlor, Archer and Institute scientists conducted an open label phase I/II trial of nilvadipine in mild to moderate Alzheimer's disease subjects. More recently, in collaboration with multiple partners at academic institutes in Europe, and again led by Lawlor, Archer and Roskamp Institute scientists partnered to conduct a phase III clinical trial of nilvadipine in mild to moderate Alzheimer's disease.
The institute is currently housed in a scientific research facility in Sarasota, Florida. The institute facility contains mass spectrometry, pathology, microscopy, certified GMO testing, and chemistry labs. The organization employs more than 50 scientists, technicians, clinicians, and other research staff.
The neurology clinic, headed by neurologist Andy Keegan, offers free memory screening as well as conducts various clinical trials simultaneously for neurological disorders.
References
External links
RoskampInstitute.org
Biomedical research foundations
Medical and health foundations in the United States | Roskamp Institute | [
"Engineering",
"Biology"
] | 501 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
9,372,609 | https://en.wikipedia.org/wiki/Environmental%20testing | Environmental testing is the measurement of the performance of equipment under specified environmental conditions. This can include the following:
High and low extreme temperatures
Temperature cycling
Sand and dust exposure
Salt spray
High and low humidity
Wet environments
Deep water submersion
Corrosive material exposure
Algae and microbial exposure
Shock and vibrations, including gun fire
High and low pressure
Pressure cycling
Electromagnetic interference
Such tests are most commonly performed on equipment used in military, maritime, aeronautical and space applications.
Standards
Environmental test standards include:
MIL-STD-810
MIL-HDBK-2036
IEC 60068
IEC 60945
RTCA DO-160
MIL-STD-461
See also
Environmental stress screening
Environmental test chambers
Direct Field Acoustic Testing
Technischer Überwachungsverein
References
External links
Testing
Reliability engineering | Environmental testing | [
"Engineering"
] | 160 | [
"Environmental testing",
"Systems engineering",
"Reliability engineering"
] |
9,372,872 | https://en.wikipedia.org/wiki/Nafovanny | Nafovanny in Vietnam is the largest captive-breeding primate facility in the world. It supplies crab-eating macaques (Macaca fascicularis) to animal testing laboratories.
Background
Nafovanny was set up in 1994 by Vanny Chain Technology, a Hong Kong company, according to Reuters.
Criticism of the project was expressed by Dr. John Wedderburn, a former member of the RSPCA's ruling council: "It is terrible, terrible. There is no end to the ingenuity of man when it comes to making money and being cruel." Daniel Chen, a director of Vanny Chain Technology responded "We have not got a problem with that because what we are doing is very humane and it is for the welfare of human beings."
Location and size
Nafovanny consists of two main farms in Long Thanh, Vietnam. According to the British Union for the Abolition of Vivisection (BUAV), the facility also maintains satellite breeding farms on the Cambodian border, in which the BUAV alleges wild monkeys may also be held. The existence of these farms is not referenced in the company's brochure, according to the BUAV.
The British Home Office has said it has no knowledge of Nafovanny satellite farms. However, it also said that "This decision was premised upon the contractual arrangements having been entered into in good-faith whilst Nafovanny was still considered to be an acceptable source, and animals already selected for onward supply to the UK. The view was taken that proper provision would be made for the welfare of these animals prior to their being shipped to the UK – and that this could be verified from the records and findings on arrival."
Customers
The British government approved Nafovanny to export primates to British laboratories in 1999. The British Animal Scientific Procedures Inspectorate visited Nafovanny in March 2005 and identified "shortcomings in animal accommodation and care", but since then, the government has "received assurances and evidence that significant improvements have been made".
According to Viet Nam News, 3,000 Nafovanny macaques were exported to the U.S. for testing purposes in 2000. The International Primate Protection League reported that Nafovanny exported 1,440 macaques to the United States in 2013.
See also
Animal testing on non-human primates
International trade in primates
Notes
Further reading
"Animal Research: Primates, Hansard, July 25, 2006.
Agricultural organizations based in Vietnam
Animal testing
Primate trade
1994 establishments in Vietnam | Nafovanny | [
"Chemistry"
] | 524 | [
"Animal testing"
] |
9,373,147 | https://en.wikipedia.org/wiki/Utahdactylus | Utahdactylus was a genus of extinct reptile from the Kimmeridgian-Tithonian-age Upper Jurassic Morrison Formation of Utah, United States. Based on DM 002/CEUM 32588 (an incomplete skeleton described as including a fragment of the skull, a cervical vertebra, three back vertebrae, and a caudal vertebra, ribs, a scapula, coracoid, and limb bones), Czerkas and Mickelson (2002) identified it as a "rhamphorhynchoid" pterosaur. Bennett (2007) later concluded that it has no diagnostic features of the Pterosauria, and cannot be positively identified beyond being an indeterminate diapsid. More recent work on newly prepared material, however, seems to confirm once again that Utahdactylus was a pterosaur.
History
The genus was named and described in 2002 by Stephen Czerkas and Debra Mickelson. The type species is Utahdactylus kateae. The genus name is derived from Utah and Greek daktylos, "finger". The specific name means "for Kate", referring to Kate Mickelson.
The holotype consists of some disarticulated bone fragments preserved on several chalkstone blocks. It is housed in the Dinosaur Museum, run by Czerkas himself.
The specimen was first described as a pterosaur, with a long tail and an estimated wingspan of 1.20 meters (3.94 feet). The authors considered it to be a "rhamphorhynchoid", i.e. a basal pterosaur, due to its long tail and large but not elongate cervical vertebrae, but without the typical groove in its forelimb bones. It was regarded as a "rhamphorhynchoid" based on an unprepared specimen in the most recent review of Morrison pterosaurs.
In 2007, pterosaur specialist Chris Bennett published a redescription wherein he disagreed with Czerkas' and Mickelson's conclusions. He found several of the bone identifications and interpretations to be mistaken, such as the skull bone (interpreted here as just a bone fragment of unknown origin), elongate tail vertebra (the presumed elongated extensions were ribs), humerus (unknown), and the orientation of the bone described as a scapulacoracoid (the scapula and coracoid parts had been confused). He could not locate other bones seen as impressions, and found no evidence to suggest that the identifiable bones came from a pterosaur. In fact, he found the general quality of the bone texture to differ from that of pterosaur bones. He concluded by classifying it as Diapsida incertae sedis, and a dubious name, adding an exhortation not to name pterosaurs from material lacking unequivocal pterosaur characters.
Bennett's conclusion was rejected by Czerkas & Ford (2018), who affirmed the validity of initial interpretation of Utahdactylus as a pterosaur. The authors reported that the cervical of the holotype specimen was completely prepared out of the matrix along with the scapulacoracoid, a sacrum, and part of a mandible. The study of the glenoid indicated that it had a pterosaur saddle-shaped articulation for the humerus. In addition, the authors reported a parallel alignment of the dentaries, indicating that they formed a mandibular symphysis. Czerkas & Ford interpreted the presence of the elongate mandibular symphysis and spoon-like expansion of the anterior end of the jaws as aligning Utahdactylus with pterodactyloids, and specifically within Ctenochasmatidae. The authors interpreted Utahdactylus as the first known gnathosaur from North America.
See also
List of pterosaur genera
Timeline of pterosaur research
References
External links
Utahdactylus in The Pterosauria
Pterosaurs
Nomina dubia
Late Jurassic pterosaurs of North America
Morrison fauna
Fossil taxa described in 2002 | Utahdactylus | [
"Biology"
] | 865 | [
"Biological hypotheses",
"Nomina dubia",
"Controversial taxa"
] |
9,373,204 | https://en.wikipedia.org/wiki/General%20set%20theory | General set theory (GST) is George Boolos's (1998) name for a fragment of the axiomatic set theory Z. GST is sufficient for all mathematics not requiring infinite sets, and is the weakest known set theory whose theorems include the Peano axioms.
Ontology
The ontology of GST is identical to that of ZFC, and hence is thoroughly canonical. GST features a single primitive ontological notion, that of set, and a single ontological assumption, namely that all individuals in the universe of discourse (hence all mathematical objects) are sets. There is a single primitive binary relation, set membership; that set a is a member of set b is written a ∈ b (usually read "a is an element of b").
Axioms
The symbolic axioms below are from Boolos (1998: 196), and govern how sets behave and interact.
As with Z, the background logic for GST is first order logic with identity. Indeed, GST is the fragment of Z obtained by omitting the axioms Union, Power Set, Elementary Sets (essentially Pairing) and Infinity and then taking a theorem of Z, Adjunction, as an axiom.
The natural language versions of the axioms are intended to aid the intuition.
1) Axiom of Extensionality: The sets x and y are the same set if they have the same members.

∀x∀y[∀z(z ∈ x ↔ z ∈ y) → x = y]
The converse of this axiom follows from the substitution property of equality.
2) Axiom Schema of Specification (or Separation or Restricted Comprehension): If z is a set and φ is any property which may be satisfied by all, some, or no elements of z, then there exists a subset y of z containing just those elements x in z which satisfy the property φ. The restriction to z is necessary to avoid Russell's paradox and its variants. More formally, let φ(x) be any formula in the language of GST in which x may occur freely and y does not. Then all instances of the following schema are axioms:

∃y∀x[x ∈ y ↔ (x ∈ z ∧ φ(x))]
3) Axiom of Adjunction: If x and y are sets, then there exists a set w, the adjunction of x and y, whose members are just y and the members of x.

∃w∀z[z ∈ w ↔ (z ∈ x ∨ z = y)]
Adjunction refers to an elementary operation on two sets, and has no bearing on the use of that term elsewhere in mathematics, including in category theory.
ST is GST with the axiom schema of specification replaced by the axiom of empty set:

∃y∀x ¬(x ∈ y)
Discussion
Metamathematics
Note that Specification is an axiom schema. The theory given by these axioms is not finitely axiomatizable. Montague (1961) showed that ZFC is not finitely axiomatizable, and his argument carries over to GST. Hence any axiomatization of GST must include at least one axiom schema.
With its simple axioms, GST is also immune to the three great antinomies of naïve set theory: Russell's, Burali-Forti's, and Cantor's.
GST is interpretable in relation algebra because no part of any GST axiom lies in the scope of more than three quantifiers. This is the necessary and sufficient condition given in Tarski and Givant (1987).
Peano arithmetic
Setting φ(x) in Separation to x≠x, and assuming that the domain is nonempty, assures the existence of the empty set. Adjunction implies that if x is a set, then so is S(x) = x ∪ {x}. Given Adjunction, the usual construction of the successor ordinals from the empty set can proceed, one in which the natural numbers are defined as ∅, S∅, SS∅, …. See Peano's axioms.
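As a worked sketch of this construction (the standard von Neumann numerals; the abbreviation S(x) = x ∪ {x} is ours, not quoted from Boolos):

```latex
\begin{align*}
0   &:= \varnothing \\
1   &:= S(0) = 0 \cup \{0\} = \{\varnothing\} \\
2   &:= S(1) = 1 \cup \{1\} = \{\varnothing, \{\varnothing\}\} \\
n+1 &:= S(n) = n \cup \{n\} \quad \text{(an instance of Adjunction with } x = y = n \text{)}
\end{align*}
```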
GST is mutually interpretable with Peano arithmetic (thus it has the same proof-theoretic strength as PA).
The most remarkable fact about ST (and hence GST) is that these tiny fragments of set theory give rise to such rich metamathematics. While ST is a small fragment of the well-known canonical set theories ZFC and NBG, ST interprets Robinson arithmetic (Q), so that ST inherits the nontrivial metamathematics of Q. For example, ST is essentially undecidable because Q is, and every consistent theory whose theorems include the ST axioms is also essentially undecidable. This includes GST and every axiomatic set theory worth thinking about, assuming these are consistent. In fact, the undecidability of ST implies the undecidability of first-order logic with a single binary predicate letter.
Q is also incomplete in the sense of Gödel's incompleteness theorem. Any axiomatizable theory, such as ST and GST, whose theorems include the Q axioms is likewise incomplete. Moreover, the consistency of GST cannot be proved within GST itself, unless GST is in fact inconsistent.
Infinite sets
Given any model M of ZFC, the collection of hereditarily finite sets in M will satisfy the GST axioms. Therefore, GST cannot prove the existence of even a countable infinite set, that is, of a set whose cardinality is ℵ₀. Even if GST did afford a countably infinite set, GST could not prove the existence of a set whose cardinality is ℵ₁, because GST lacks the axiom of power set. Hence GST cannot ground analysis and geometry, and is too weak to serve as a foundation for mathematics.
History
Boolos was interested in GST only as a fragment of Z that is just powerful enough to interpret Peano arithmetic. He never lingered over GST, only mentioning it briefly in several papers discussing the systems of Frege's Grundlagen and Grundgesetze, and how they could be modified to eliminate Russell's paradox. The system Aξ'[δ0] in Tarski and Givant (1987: 223) is essentially GST with an axiom schema of induction replacing Specification, and with the existence of an empty set explicitly assumed.
GST is called STZ in Burgess (2005), p. 223. Burgess's theory ST is GST with Empty Set replacing the axiom schema of specification. That the letters "ST" also appear in "GST" is a coincidence.
Footnotes
References
George Boolos (1999) Logic, Logic, and Logic. Harvard Univ. Press.
Burgess, John, 2005. Fixing Frege. Princeton Univ. Press.
Collins, George E., and Daniel, J. D. (1970). "On the interpretability of arithmetic in set theory". Notre Dame Journal of Formal Logic, 11 (4): 477–483.
Richard Montague (1961) "Semantical closure and non-finite axiomatizability" in Infinistic Methods. Warsaw: 45-69.
Alfred Tarski, Andrzej Mostowski, and Raphael Robinson (1953) Undecidable Theories. North Holland.
Tarski, A., and Givant, Steven (1987) A Formalization of Set Theory without Variables. Providence RI: AMS Colloquium Publications, v. 41.
External links
Stanford Encyclopedia of Philosophy: Set Theory—by Thomas Jech.
Systems of set theory
Z notation | General set theory | [
"Mathematics"
] | 1,497 | [
"Z notation"
] |
9,373,360 | https://en.wikipedia.org/wiki/Personnel%20Reliability%20Program | The Personnel Reliability Program (PRP) is a United States Department of Defense security, medical and psychological evaluation program, designed to permit only the most trustworthy individuals to have access to nuclear weapons (NPRP), chemical weapons (CPRP), and biological weapons (BPRP).
The program was first instituted for nuclear weapons during the Cold War; it was later extended to the realm of chemical and biological workers. Among its goals are the following (quoting from DOD Directive 5210.42):
The Department of Defense shall support the national security of the United States by maintaining an effective nuclear deterrent while protecting the public health, safety, and environment. For that reason, nuclear-weapons require special consideration because of their policy implications and military importance, their destructive power, and the political consequences of an accident or an unauthorized act. The safety, security, control, and effectiveness of nuclear weapons are of paramount importance to the security of the United States.
Nuclear weapons shall not be subject to loss, theft, sabotage, unauthorized use, unauthorized destruction, unauthorized disablement, jettison, or accidental damage.
Only those personnel who have demonstrated the highest degree of individual reliability for allegiance, trustworthiness, conduct, behavior, and responsibility shall be allowed to perform duties associated with nuclear weapons, and they shall be continuously evaluated for adherence to PRP standards.
The PRP evaluates many aspects of the individual's work life and home life. Any disruption of these, or severe deviation from an established norm would be cause to deny access. The denial might be temporary or permanent. However, the policy does explicitly state,
The denial of eligibility or the revocation of certification for assignment to PRP positions is neither a punitive measure nor the basis for disciplinary action. The failure of an individual to be certified for assignment to PRP duties does not necessarily reflect unfavorably on the individual's suitability for assignment to other duties.
In certain instances, officers and enlisted personnel certified under PRP have been punished for information that also disqualified them from the program. The suspension of an individual from the program, or indeed their permanent removal, does not in itself represent a punitive measure.
External links
Nuclear warfare
Nuclear weapons | Personnel Reliability Program | [
"Chemistry"
] | 447 | [
"Radioactivity",
"Nuclear warfare"
] |
9,374,505 | https://en.wikipedia.org/wiki/Schild%20equation | In pharmacology, Schild regression analysis, based upon the Schild equation, both named for Heinz Otto Schild, are tools for studying the effects of agonists and antagonists on the response caused by the receptor or on ligand-receptor binding.
Concept
Dose-response curves can be constructed to describe response or ligand-receptor complex formation as a function of the ligand concentration. Antagonists make it harder to form these complexes by inhibiting interactions of the ligand with its receptor. This is seen as a change in the dose response curve: typically a rightward shift or a lowered maximum. A reversible competitive antagonist should cause a rightward shift in the dose response curve, such that the new curve is parallel to the old one and the maximum is unchanged. This is because reversible competitive antagonists are surmountable antagonists. The magnitude of the rightward shift can be quantified with the dose ratio, r. The dose ratio r is the ratio of the dose of agonist required for half maximal response with the antagonist present divided by the agonist required for half maximal response without antagonist ("control"). In other words, the ratio of the EC50s of the inhibited and un-inhibited curves. Thus, r represents both the strength of an antagonist and the concentration of the antagonist that was applied. An equation derived from the Gaddum equation can be used to relate r to [B], as follows:

r = 1 + [B]/K_B
where
r is the dose ratio
[B] is the concentration of the antagonist
K_B is the equilibrium constant of the binding of the antagonist to the receptor
A Schild plot is a double logarithmic plot, typically with log(r − 1) as the ordinate and log[B] as the abscissa. This is done by taking the base-10 logarithm of both sides of the previous equation after subtracting 1:

log(r − 1) = log[B] − log K_B
This equation is linear with respect to log[B], allowing for easy construction of graphs without computations. This was particularly valuable before the use of computers in pharmacology became widespread. The y-intercept of the equation represents the negative logarithm of K_B and can be used to quantify the strength of the antagonist.
These experiments must be carried out on a very wide range (therefore the logarithmic scale) as the mechanisms differ over a large scale, such as at high concentration of drug.
The fitting of the Schild plot to observed data points can be done with regression analysis.
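As an illustrative sketch of such a regression (the concentrations and dose ratios below are hypothetical, and the least-squares fit is a generic implementation, not code from the pharmacology literature):

```rust
// Fit log10(r - 1) = m*log10([B]) + c by ordinary least squares.
// For a simple competitive antagonist, m ≈ 1 and the x-intercept of the
// fitted line estimates log10(K_B), so pA2 = -x-intercept ≈ -log10(K_B).
fn main() {
    // (antagonist concentration [B] in mol/L, measured dose ratio r) — hypothetical data
    let data = [(1e-9, 2.1), (1e-8, 11.5), (1e-7, 98.0)];

    // Transform to Schild-plot coordinates: x = log10([B]), y = log10(r - 1).
    let pts: Vec<(f64, f64)> = data
        .iter()
        .map(|&(b, r)| (b.log10(), (r - 1.0).log10()))
        .collect();

    // Ordinary least squares for y = m*x + c.
    let n = pts.len() as f64;
    let sx: f64 = pts.iter().map(|p| p.0).sum();
    let sy: f64 = pts.iter().map(|p| p.1).sum();
    let sxx: f64 = pts.iter().map(|p| p.0 * p.0).sum();
    let sxy: f64 = pts.iter().map(|p| p.0 * p.1).sum();
    let m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let c = (sy - m * sx) / n;

    // x-intercept = -c/m, so pA2 = c/m; here it comes out near 9 (K_B ≈ 1e-9 M).
    let pa2 = c / m;
    println!("slope = {m:.2}, pA2 = {pa2:.2}");
}
```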
Schild regression for ligand binding
Although most experiments use cellular response as a measure of the effect, the effect is, in essence, a result of the binding kinetics; so, in order to illustrate the mechanism, ligand binding is used. A ligand A will bind to a receptor R according to an equilibrium dissociation constant K_d:

A + R ⇌ AR,  with K_d = k₋₁/k₁
Although the equilibrium constant is more meaningful, texts often mention its inverse, the affinity constant (Kaff = k1/k−1): A better binding means an increase of binding affinity.
The equation for simple ligand binding to a single homogeneous receptor is

[AR] = [R]_T[A] / ([A] + K_d)
This is the Hill-Langmuir equation, which is practically the Hill equation described for the agonist binding. In chemistry, this relationship is called the Langmuir equation, which describes the adsorption of molecules onto sites of a surface (see adsorption).
[R]_T is the total number of binding sites, and when the equation is plotted it is the horizontal asymptote to which the plot tends; more binding sites will be occupied as the ligand concentration increases, but there will never be 100% occupancy. The binding affinity K_d is the concentration needed to occupy 50% of the sites; the lower this value is, the easier it is for the ligand to occupy the binding site.
The binding of the ligand to the receptor at equilibrium follows the same kinetics as an enzyme at steady-state (Michaelis–Menten equation) without the conversion of the bound substrate to product.
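For reference, a brief derivation of the occupancy equation above, assuming the mass-action scheme already described (with K_d = k₋₁/k₁ and total receptor concentration [R]_T):

```latex
% At equilibrium, association and dissociation rates balance:
k_1 [A][R] = k_{-1}[AR]
\;\Longrightarrow\;
[AR] = \frac{[A][R]}{K_d}, \qquad K_d = \frac{k_{-1}}{k_1}.
% Substituting the conservation relation [R] = [R]_T - [AR] and solving for [AR]:
[AR] = \frac{[R]_T\,[A]}{[A] + K_d}.
```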
Agonists and antagonists can have various effects on ligand binding. They can change the maximum number of binding sites, the affinity of the ligand to the receptor, both effects together or even more bizarre effects when the system being studied is more intact, such as in tissue samples. (Tissue absorption, desensitization, and other non equilibrium steady-state can be a problem.)
A surmountable drug changes the binding affinity:
competitive ligand: the apparent K_d is increased by the factor (1 + [B]/K_B)
cooperative allosteric ligand:
A nonsurmountable drug changes the maximum binding:
noncompetitive binding: the maximum binding [R]_T is divided by the factor (1 + [B]/K_B)
irreversible binding
The Schild regression also can reveal if there are more than one type of receptor and it can show if the experiment was done wrong as the system has not reached equilibrium.
Radioligand binding assays
The first radio-receptor assay (RRA) was done in 1970 by Lefkowitz et al., using a radiolabeled hormone to determine the binding affinity for its receptor.
A radio-receptor assay requires the separation of the bound from the free ligand. This is done by filtration, centrifugation or dialysis.
A method that does not require separation is the scintillation proximity assay, which relies on the fact that β-rays from 3H travel extremely short distances. The receptors are bound to beads coated with a polyhydroxy scintillator, so only bound ligands are detected.
Today, the fluorescence method is preferred to radioactive materials due to a much lower cost, lower hazard, and the possibility of multiplexing the reactions in a high-throughput manner. One problem is that fluorescent-labeled ligands have to bear a bulky fluorophore that may hinder ligand binding. Therefore, the fluorophore used, the length of the linker, and its position must be carefully selected.
An example is by using FRET, where the ligand's fluorophore transfers its energy to the fluorophore of an antibody raised against the receptor.
Other detection methods such as surface plasmon resonance do not even require fluorophores.
See also
Dose-response relationship
References
Further reading
External links
curvefit.com - Dose-response curves in the presence of antagonists, for a clear explanation.
Pharmacology articles needing expert attention
Pharmacodynamics
Biochemistry methods | Schild equation | [
"Chemistry",
"Biology"
] | 1,273 | [
"Biochemistry methods",
"Pharmacology",
"Pharmacodynamics",
"Biochemistry"
] |
9,375,496 | https://en.wikipedia.org/wiki/Nepenthes%20%C3%97%20mirabilata | Nepenthes × mirabilata (; a blend of mirabilis and alata) is a natural hybrid involving N. alata and N. mirabilis.
Nepenthes × mirabilata was mentioned as a natural hybrid in Guide to Nepenthes Hybrids (1995). The hybrid is restricted to Mindanao, the Philippines, the only location where the parent species overlap.
References
McPherson, S.R. & V.B. Amoroso 2011. Field Guide to the Pitcher Plants of the Philippines. Redfern Natural History Productions, Poole.
CP Database: Nepenthes × mirabilata
Carnivorous plants of Asia
mirabilata
Nomina nuda
Flora of Mindanao
Plants described in 1995 | Nepenthes × mirabilata | [
"Biology"
] | 142 | [
"Biological hypotheses",
"Nomina nuda",
"Controversial taxa"
] |
9,375,579 | https://en.wikipedia.org/wiki/Nepenthes%20%C3%97%20tsangoya | Nepenthes × tsangoya (; after Peter Tsang) is a tropical pitcher plant. It reportedly represents the complex natural hybrid (N. alata × N. merrilliana) × N. mirabilis.
Nepenthes × tsangoya was mentioned as a natural hybrid in Guide to Nepenthes Hybrids (1995). The known ranges of the parent species only overlap in Mindanao, the Philippines.
References
CP Database: Nepenthes × tsangoya
Carnivorous plants of Asia
tsangoya
Nomina nuda
Flora of Mindanao | Nepenthes × tsangoya | [
"Biology"
] | 111 | [
"Biological hypotheses",
"Nomina nuda",
"Controversial taxa"
] |
14,513,019 | https://en.wikipedia.org/wiki/Comparison%20of%20programming%20languages%20%28basic%20instructions%29 | This article compares a large number of programming languages by tabulating their data types, their expression, statement, and declaration syntax, and some common operating-system interfaces.
Conventions of this article
Generally, var is how variable names or other non-literal values to be interpreted by the reader are represented. The rest is literal code. Guillemets (« and ») enclose optional sections. An indented line indicates a necessary (whitespace) indentation.
Note that the tables are not sorted lexicographically ascending by programming language name by default, and that some languages have entries in some tables but not others.
Type identifiers
Integers
The standard constants int shorts and int lengths can be used to determine how many shorts and longs can be usefully prefixed to short int and long int. The actual sizes of short int, int, and long int are available as the constants short max int, max int, and long max int etc.
Commonly used for characters.
The ALGOL 68, C and C++ languages do not specify the exact width of the integer types short, int, long, and (C99, C++11) long long, so they are implementation-dependent. In C and C++ short, long, and long long types are required to be at least 16, 32, and 64 bits wide, respectively, but can be more. The int type is required to be at least as wide as short and at most as wide as long, and is typically the width of the word size on the processor of the machine (i.e. on a 32-bit machine it is often 32 bits wide; on 64-bit machines it is sometimes 64 bits wide). C99 and C++11 also define the [u]intN_t exact-width types in the stdint.h header. See C syntax#Integral types for more information. In addition the types size_t and ptrdiff_t are defined in relation to the address size to hold unsigned and signed integers sufficiently large to handle array indices and the difference between pointers.
Perl 5 does not have distinct types. Integers, floating point numbers, strings, etc. are all considered "scalars".
PHP has two arbitrary-precision libraries. The BCMath library just uses strings as datatype. The GMP library uses an internal "resource" type.
The value of n is provided by the SELECTED_INT_KIND intrinsic function.
ALGOL 68G's runtime option --precision "number" can set precision for long long ints to the required "number" significant digits. The standard constants long long int width and long long max int can be used to determine actual precision.
COBOL allows the specification of a required precision and will automatically select an available type capable of representing the specified precision. "PIC S9999", for example, would require a signed variable of four decimal digits precision. If specified as a binary field, this would select a 16-bit signed type on most platforms.
Smalltalk automatically chooses an appropriate representation for integral numbers. Typically, two representations are present, one for integers fitting the native word size minus any tag bit (SmallInteger) and one supporting arbitrary sized integers (LargePositiveInteger and LargeNegativeInteger). Arithmetic operations support polymorphic arguments and return the result in the most appropriate compact representation.
Ada range types are checked for boundary violations at run-time (as well as at compile-time for static expressions). Run-time boundary violations raise a "constraint error" exception. Ranges are not restricted to powers of two. Commonly predefined Integer subtypes are: Positive (range 1 .. Integer'Last) and Natural (range 0 .. Integer'Last). Short_Short_Integer (8 bits), Short_Integer (16 bits) and Long_Integer (64 bits) are also commonly predefined, but not required by the Ada standard. Runtime checks can be disabled if performance is more important than integrity checks.
Ada modulo types implement modulo arithmetic in all operations, i.e. no range violations are possible. Modulos are not restricted to powers of two.
Commonly used for characters like Java's char.
int in PHP has the same width as long type in C has on that system.
Erlang is dynamically typed. The type identifiers are usually used to specify types of record fields and the argument and return types of functions.
When it exceeds one word.
Floating point
The standard constants real shorts and real lengths can be used to determine how many shorts and longs can be usefully prefixed to short real and long real. The actual sizes of short real, real, and long real are available as the constants short max real, max real and long max real etc. With the constants short small real, small real and long small real available for each type's machine epsilon.
declarations of single precision often are not honored
The value of n is provided by the SELECTED_REAL_KIND intrinsic function.
ALGOL 68G's runtime option --precision "number" can set precision for long long reals to the required "number" significant digits. The standard constants long long real width and long long max real can be used to determine actual precision.
These IEEE floating-point types will be introduced in the next COBOL standard.
Same size as double on many implementations.
Swift supports 80-bit extended precision floating point type, equivalent to long double in C languages.
Complex numbers
The value of n is provided by the SELECTED_REAL_KIND intrinsic function.
Generic type which can be instantiated with any base floating point type.
Other variable types
specifically, strings of arbitrary length and automatically managed.
This language represents a boolean as an integer where false is represented as a value of zero and true by a non-zero value.
All values evaluate to either true or false. Everything in TrueClass evaluates to true and everything in FalseClass evaluates to false.
This language does not have a separate character type. Characters are represented as strings of length 1.
Enumerations in this language are algebraic types with only nullary constructors
The value of n is provided by the SELECTED_INT_KIND intrinsic function.
Derived types
Array
In most expressions (except the sizeof and & operators), values of array types in C are automatically converted to a pointer of its first argument. See C syntax#Arrays for further details of syntax and pointer operations.
The C-like type x[] works in Java; however, type[] x is the preferred form of array declaration.
Subranges are used to define the bounds of the array.
JavaScript's array are a special kind of object.
The DEPENDING ON clause in COBOL does not create a true variable length array and will always allocate the maximum size of the array.
Other types
Only classes are supported.
structs in C++ are actually classes, but have default public visibility and are also POD objects. C++11 extended this further, to make classes act identically to POD objects in many more cases.
pair only
Although Perl doesn't have records, its type system allows different data types in an array, so "hashes" (associative arrays) that don't have a variable index are effectively the same as records.
Enumerations in this language are algebraic types with only nullary constructors
Variable and constant declarations
Pascal has declaration blocks. See functions.
Types are just regular objects, so you can just assign them.
In Perl, the "my" keyword scopes the variable into the block.
Technically, this does not declare name to be a mutable variable—in ML, all names can only be bound once; rather, it declares name to point to a "reference" data structure, which is a simple mutable cell. The data structure can then be read and written to using the ! and := operators, respectively.
If no initial value is given, an invalid value is automatically assigned (which will trigger a run-time exception if it used before a valid value has been assigned). While this behaviour can be suppressed it is recommended in the interest of predictability. If no invalid value can be found for a type (for example in case of an unconstraint integer type), a valid, yet predictable value is chosen instead.
In Rust, if no initial value is given to a let or let mut variable and it is never assigned to later, there is an "unused variable" warning. If no value is provided for a const or static or static mut variable, there is an error. There is a "non-upper-case globals" warning for non-uppercase const variables. After it is defined, a static mut variable can only be assigned to in an unsafe block or function.
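A minimal runnable sketch of these Rust declaration forms (the identifiers and values are illustrative only):

```rust
// `const` and `static` items require explicit types; by convention their names
// are upper case (lower-case names trigger the non_upper_case_globals lint).
const MAX_RETRIES: u32 = 3;
static GREETING: &str = "hello";

fn main() {
    let x = 1; // immutable binding; a second assignment to `x` would not compile
    let mut y = 2; // mutable binding
    y += x;
    let _ignored = 0; // a leading underscore suppresses the "unused variable" warning
    println!("{GREETING}: y = {y}, max retries = {MAX_RETRIES}");
}
```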
Control flow
Conditional statements
A single instruction can be written on the same line following the colon. Multiple instructions are grouped together in a block which starts on a newline (The indentation is required). The conditional expression syntax does not follow this rule.
This is pattern matching and is similar to select case but not the same. It is usually used to deconstruct algebraic data types.
In languages of the Pascal family, the semicolon is not part of the statement. It is a separator between statements, not a terminator.
END-IF may be used instead of the period at the end.
In Rust, the comma (,) at the end of a match arm can be omitted after the last match arm, or after any match arm in which the expression is a block (ends in possibly empty matching brackets {}).
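A brief illustration of this rule (a sketch; the function and values are hypothetical):

```rust
fn describe(n: i32) -> &'static str {
    match n {
        0 => "zero",       // expression arm: comma required before the next arm
        1..=9 => {
            "single digit" // block arm: the trailing comma may be omitted
        }
        _ => "many"        // last arm: the trailing comma may be omitted
    }
}

fn main() {
    println!("{}", describe(7)); // prints "single digit"
}
```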
Loop statements
"step n" is used to change the loop interval. If "step" is omitted, then the loop interval is 1.
This implements the universal quantifier ("for all" or "∀") as well as the existential quantifier ("there exists" or "∃").
THRU may be used instead of THROUGH.
«IS» GREATER «THAN» may be used instead of >.
Type of set expression must implement trait std::iter::IntoIterator.
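For example (a minimal sketch; the collections are arbitrary), any value whose type implements std::iter::IntoIterator can appear in that position:

```rust
fn main() {
    // Vec<i32> implements IntoIterator, so it can be used directly in `for`.
    let v = vec![10, 20, 30];
    for x in v {
        println!("{x}");
    }
    // Ranges implement IntoIterator as well.
    for i in 0..3 {
        println!("i = {i}");
    }
}
```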
Exceptions
Common Lisp allows with-simple-restart, restart-case and restart-bind to define restarts for use with invoke-restart. Unhandled conditions may cause the implementation to show a restarts menu to the user before unwinding the stack.
Uncaught exceptions are propagated to the innermost dynamically enclosing execution. Exceptions are not propagated across tasks (unless these tasks are currently synchronised in a rendezvous).
Other control flow statements
Pascal has declaration blocks. See functions.
label must be a number between 1 and 99999.
Functions
See reflective programming for calling and declaring functions by strings.
Pascal requires "forward;" for forward declarations.
Eiffel allows the specification of an application's root class and feature.
In Fortran, function/subroutine parameters are called arguments (since PARAMETER is a language keyword); the CALL keyword is required for subroutines.
Instead of using "foo", a string variable may be used instead containing the same value.
Type conversions
Where string is a signed decimal number:
JavaScript only uses floating point numbers so there are some technicalities.
Perl doesn't have separate types. Strings and numbers are interchangeable.
NUMVAL-C or NUMVAL-F may be used instead of NUMVAL.
str::parse is available to convert a string to any type that has an implementation of the std::str::FromStr trait. Both str::parse and FromStr::from_str return a Result that contains the specified type if there is no error. The turbofish (::<_>) on str::parse can be omitted if the type can be inferred from context.
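A small sketch of both forms, with the Result handled explicitly in the last case (the values are illustrative):

```rust
fn main() {
    // Turbofish form: the target type is given explicitly.
    let a = "42".parse::<i32>().unwrap();

    // Inferred form: the annotation on the binding supplies the type,
    // so the turbofish can be omitted.
    let b: f64 = "3.14".parse().unwrap();

    // `parse` returns a Result, so failures can be handled rather than unwrapped.
    match "not a number".parse::<i32>() {
        Ok(n) => println!("parsed {n}"),
        Err(e) => println!("parse failed: {e}"),
    }
    println!("a = {a}, b = {b}");
}
```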
Standard stream I/O
ALGOL 68 additionally has the "unformatted" transput routines: read, write, get, and put.
gets(x) and fgets(x, length, stdin) read unformatted text from stdin. Use of gets is not recommended, and it was removed from the C standard in C11.
puts(x) and fputs(x, stdout) write unformatted text to stdout.
fputs(x, stderr) writes unformatted text to stderr
are defined in the module.
Reading command-line arguments
In Rust, std::env::args and std::env::args_os return iterators, std::env::Args and std::env::ArgsOs respectively. Args converts each argument to a String and it panics if it reaches an argument that cannot be converted to UTF-8. ArgsOs returns a non-lossy representation of the raw strings from the operating system (std::ffi::OsString), which can be invalid UTF-8.
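A minimal sketch of both iterators (the output format is illustrative):

```rust
use std::env;

fn main() {
    // args() yields the program name first, then each argument;
    // it panics if an argument is not valid UTF-8.
    for (i, arg) in env::args().enumerate() {
        println!("argv[{i}] = {arg}");
    }

    // args_os() yields OsString values and never panics on invalid UTF-8.
    let raw: Vec<std::ffi::OsString> = env::args_os().collect();
    println!("{} raw argument(s)", raw.len());
}
```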
In Visual Basic, command-line arguments are not separated. Separating them requires a split function Split(string).
The COBOL standard includes no means to access command-line arguments, but common compiler extensions to access them include defining parameters for the main program or using ACCEPT statements.
Execution of commands
Fortran 2008 or newer.
References
Programming constructs
Basic instructions | Comparison of programming languages (basic instructions) | [
"Technology"
] | 2,774 | [
"Programming language comparisons",
"Computing comparisons"
] |
14,513,043 | https://en.wikipedia.org/wiki/Eve%20of%20Destruction%20%28film%29 | Eve of Destruction is a 1991 American science fiction action thriller film about a nuclear-armed prototype android named EVE that runs amok while being field-tested by the military in a big city. The film stars Gregory Hines as Col. Jim McQuade and Dutch actress Renée Soutendijk (in her first U.S. film) in the dual roles of the robot's creator, Dr. Eve Simmons, and the robot EVE herself.
Plot
EVE VIII is a military android created to look and sound exactly like her creator, Dr. Eve Simmons. When the robot is damaged during a bank robbery, it accesses memories it was programmed with by her creator. The memories used, though, are dark and tragic ones.
The robot is also programmed as a killing machine if anyone tries to stop her mission. Colonel Jim McQuade is tasked with eliminating the unstoppable machine. With the help of Dr. Simmons, he tries to outthink the intelligent and emotional robotic doppelgänger.
Cast
Gregory Hines as Col. Jim McQuade
Renée Soutendijk as Dr. Eve Simmons/EVE VIII
Kurt Fuller as Bill Schneider
Michael Greene as General Curtis
John M. Jackson as Peter Arnold
George P. Wilbur as Trooper
Kevin McCarthy as William Simmons (uncredited)
Reception
The film received negative reviews from critics, having a 20% "rotten" score on RottenTomatoes.com. Vincent Canby gave a negative review in The New York Times, calling the film "an undistinguished, barely functional action-melodrama."
Box office
The movie opened with $2.5 million. It finished its run with a total of $5,451,119 against a $13 million budget, making it a box-office bomb.
Home media
Eve of Destruction was released on VHS on August 8, 1991, by New Line Home Video.
Also, MGM Home Entertainment released Eve of Destruction on DVD on July 15, 2003.
References
External links
1991 films
1990s science fiction action films
American chase films
American science fiction action films
1990s chase films
Fictional cyborgs
Films about computing
Orion Pictures films
New Line Cinema films
Techno-thriller films
Interscope Communications films
Films about androids
Films scored by Philippe Sarde
1990s English-language films
1990s American films
English-language science fiction action films
1991 science fiction films
English-language action thriller films | Eve of Destruction (film) | [
"Technology"
] | 473 | [
"Works about computing",
"Films about computing"
] |
14,513,559 | https://en.wikipedia.org/wiki/Environmental%20Research | Environmental Research is a peer-reviewed environmental science and environmental health journal published by Elsevier. The editor in chief is Jose L. Domingo.
The journal's 2020 impact factor of 6.498 placed it 16th out of 203 journals in the category Public, Environmental, and Occupational Health; the 2021 impact factor increased to 8.431.
References
External links
Environmental science journals
Elsevier academic journals
Academic journals established in 1967
Environmental health journals | Environmental Research | [
"Environmental_science"
] | 89 | [
"Environmental science journals",
"Environmental health journals",
"Environmental science journal stubs"
] |
14,515,477 | https://en.wikipedia.org/wiki/GPR161 | G-protein coupled receptor 161 is a protein that in humans is encoded by the GPR161 gene.
References
Further reading
G protein-coupled receptors | GPR161 | [
"Chemistry"
] | 32 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,515,747 | https://en.wikipedia.org/wiki/Puppis%20A | Puppis A (Pup A) is a supernova remnant (SNR) about 100 light-years in diameter and roughly 6500–7000 light-years distant. Its apparent angular diameter is about 1 degree. The light of the supernova explosion reached Earth approximately 3700 years ago. Although it overlaps the Vela Supernova Remnant, it is four times more distant.
A hypervelocity neutron star known as the Cosmic Cannonball has been found in this SNR.
Puppis X-1
Puppis X-1 (Puppis A) was discovered by a Skylark flight in October 1971, viewed for 1 min with an accuracy ≥ 2 arcsec, probably at 1M 0821-426, with Puppis A (RA 08h 23m 08.16s Dec -42° 41′ 41.40″) as the likely visual counterpart.
Puppis A is one of the brightest X-ray sources in the X-ray sky. Its X-ray designation is 2U 0821-42.
Gallery
References
"Puppis A: Chandra Reveals Cloud Disrupted By Supernova Shock", Chandra: NASA/CXC/GSFC/U.Hwang et al.; ROSAT: NASA/GSFC/S.Snowden et al.,
Simbad
See also
List of supernova remnants
Supernova remnants
Puppis
Astronomical X-ray sources | Puppis A | [
"Astronomy"
] | 282 | [
"Puppis",
"Astronomy stubs",
"Constellations",
"Nebula stubs",
"Astronomical X-ray sources",
"Astronomical objects"
] |
14,517,273 | https://en.wikipedia.org/wiki/GPR56 | G protein-coupled receptor 56 also known as TM7XN1 is a protein encoded by the ADGRG1 gene. GPR56 is a member of the adhesion GPCR family.
Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
GPR56 is expressed in liver, muscle, tendon, neural, and cytotoxic lymphoid cells in human as well as in hematopoietic precursor, muscle, and developing neural cells in the mouse.
GPR56 has been shown to have numerous role in cell guidance/adhesion as exemplified by its roles in tumour inhibition and neuron development. More recently it has been shown to be a marker for cytotoxic T cells and a subgroup of Natural killer cells.
Ligands
GPR56 binds transglutaminase 2 to suppress tumor metastasis and binds collagen III to regulate cortical development and lamination.
Signaling
GPR56 couples to Gαq/11 protein upon association with the tetraspanins CD9 and CD81. Forced GPR56 expression activates NF-kB, PAI-1, and TCF transcriptional response elements. The splicing of GPR56 induces tumorigenic responses as a result of activating the transcription of genes, such as COX2, iNOS, and VEGF. GPR56 couples to the Gα12/13 protein and activates RhoA and the mammalian target of rapamycin (mTOR) pathway upon ligand binding. Lack of the N-terminal fragment (NTF) of GPR56 causes stronger RhoA signaling and β-arrestin accumulation, leading to extensive ubiquitination of the C-terminal fragment (CTF). Finally, GPR56 suppresses PKCα activation to regulate angiogenesis.
Function
Studies in the hematopoietic system disclosed that during endothelial to hematopoietic stem cell transition, Gpr56 is a transcriptional target of the heptad complex of hematopoietic transcription factors, and is required for hematopoietic cluster formation. Recently, two studies showed that GPR56, is a cell autonomous regulator of oligodendrocyte development through Gα12/13 proteins and Rho activation. Della Chiesa et al. demonstrate that GPR56 is expressed on CD56dull natural killer (NK) cells. Lin and Hamann's group show all human cytotoxic lymphocytes, including CD56dull NK cells and CD27–CD45RA+ effector-type CD8+ T cells, express GPR56.
Clinical significance
GPR56 was the first adhesion GPCR causally linked to a disease. Loss-of-function mutations in GPR56 cause a severe cortical malformation known as bilateral frontoparietal polymicrogyria (BFPP). Investigating the pathological mechanism of disease-associated GPR56 mutations in BFPP has provided mechanistic insights into the functioning of adhesion GPCRs. Researchers demonstrated that disease-associated GPR56 mutations cause BFPP via multiple mechanisms. Li et al. demonstrated that GPR56 regulates pial basement membrane (BM) organization during cortical development. Disruption of the Gpr56 gene in mice leads to neuronal malformation in the cerebral cortex, resulting in four critical pathological morphologies: defective pial BM, abnormally localized radial glial endfeet, malpositioned Cajal-Retzius cells, and overmigrated neurons. Furthermore, the interaction of GPR56 and collagen III inhibits neural migration to regulate lamination of the cerebral cortex. In addition to GPR56, the α3β1 integrin is also involved in pial BM maintenance. A study of Itga3 (α3 integrin)/Gpr56 double knockout mice showed increased neuronal overmigration compared to Gpr56 single knockout mice, indicating cooperation of GPR56 and α3β1 integrin in modulation of the development of the cerebral cortex. More recently, the Walsh laboratory showed that alternative splicing of GPR56 regulates regional cerebral cortical patterning.
In depression patients, blood GPR56 mRNA expression increases only in responders and not non-responders to serotonin-norepinephrine reuptake inhibitor treatment. Furthermore, GPR56 was down-regulated in the prefrontal cortex of individuals with depression that died by suicide.
Outside the nervous system, GPR56 has been linked to muscle function and male fertility. The expression of GPR56 is upregulated during early differentiation of human myoblasts. Investigation of Gpr56 knockout mice and BFPP patients showed that GPR56 is required for in vitro myoblast fusion via signaling of serum response factor (SRF) and nuclear factor of activated T-cell (NFAT), but is not essential for muscle development in vivo. Additionally, GPR56 is a transcriptional target of peroxisome proliferator-activated receptor gamma coactivator 1-alpha 4 and regulates overload-induced muscle hypertrophy through Gα12/13 and mTOR signaling. In addition, studies of knockout mice revealed that GPR56 is involved in testis development and male fertility. In melanocytic cells, GPR56 gene expression may be regulated by MITF.
Mutations in GPR56 cause the brain developmental disorder BFPP, characterized by disordered cortical lamination in frontal cortex. Mice lacking expression of GPR56 develop a comparable phenotype. Furthermore, loss of GPR56 leads to reduced fertility in male mice, resulting from a defect in seminiferous tubule development. GPR56 is expressed in glioblastoma/astrocytoma as well as in esophageal squamous cell, breast, colon, non-small cell lung, ovarian, and pancreatic carcinoma. GPR56 was shown to localize together with α-actinin at the leading edge of membrane filopodia in glioblastoma cells, suggesting a role in cell adhesion/migration. In addition, recombinant GPR56-NTF protein interacts with glioma cells to inhibit cellular adhesion. Inactivation of Von Hippel-Lindau (VHL) tumor-suppressor gene and hypoxia suppressed GPR56 in a renal cell carcinoma cell line, but hypoxia influenced GPR56 expression in breast or bladder cancer cell lines. GPR56 is a target gene for vezatin, an adherens junctions transmembrane protein, which is a tumor suppressor in gastric cancer. Xu et al. used an in vivo metastatic model of human melanoma to show that GPR56 is downregulated in highly metastatic cells. Later, by ectopic expression and RNA interference they confirmed that GPR56 inhibits melanoma tumor growth and metastasis. Silenced expression of GPR56 in HeLa cells enhanced apoptosis and anoikis, but suppressed anchorage-independent growth and cell adhesion. High ecotropic viral integration site-1 acute myeloid leukemia (EVI1-high AML) expresses GPR56 that was found to be a transcriptional target of EVI1. Silencing expression of GPR56 decreases adhesion, cell growth and induces apoptosis through reduced RhoA signaling. GPR56 suppresses the angiogenesis and melanoma growth through inhibition of vascular endothelial growth factor (VEGF) via PKCα signaling pathway. Furthermore, GPR56 expression was found to be negatively correlated with the malignancy of melanomas in human patients.
References
External links
Adhesion GPCR consortium
GeneReviews/NIH/NCBI/UW entry on Polymicrogyria Overview
G protein-coupled receptors | GPR56 | [
"Chemistry"
] | 1,705 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,309 | https://en.wikipedia.org/wiki/GPR83 | Probable G-protein coupled receptor 83 is a protein that in humans is encoded by the GPR83 gene.
References
Further reading
G protein-coupled receptors | GPR83 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,351 | https://en.wikipedia.org/wiki/GPR162 | Probable G-protein coupled receptor 162 is a protein that in humans is encoded by the GPR162 gene.
This gene was identified upon genomic analysis of a gene-dense region at human chromosome 12p13. It appears to be mainly expressed in the brain; however, its function is not known. Alternatively spliced transcript variants encoding different isoforms have been identified.
References
Further reading
G protein-coupled receptors | GPR162 | [
"Chemistry"
] | 86 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,378 | https://en.wikipedia.org/wiki/GPR85 | Probable G-protein coupled receptor 85 is a protein that in humans is encoded by the GPR85 gene.
See also
SREB
References
Further reading
G protein-coupled receptors | GPR85 | [
"Chemistry"
] | 36 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,415 | https://en.wikipedia.org/wiki/GPR139 | G-protein coupled receptor 139 (GPR139) is a protein that in humans is encoded by the GPR139 gene. Research has shown that mice with loss of GPR139 experience schizophrenia-like symptomatology that is rescued by the dopamine receptor antagonist haloperidol and the μ-opioid receptor antagonist naltrexone.
Ligands
Agonists
Zelatriazin (TAK-041, NBI-1065846) is a potent and selective GPR139 agonist that was in clinical trials to gauge its efficacy for treating psychiatric conditions such as major depressive disorder and the negative symptoms of schizophrenia, but was later dropped from development.
Antagonists
JNJ-3792165
References
Further reading
G protein-coupled receptors | GPR139 | [
"Chemistry"
] | 165 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,434 | https://en.wikipedia.org/wiki/GPR151 | Probable G-protein coupled receptor 151 is a protein that in humans is encoded by the GPR151 gene.
References
Further reading
G protein-coupled receptors | GPR151 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,466 | https://en.wikipedia.org/wiki/Free%20fatty%20acid%20receptor%204 | Free Fatty acid receptor 4 (FFAR4), also termed G-protein coupled receptor 120 (GPR120), is a protein that in humans is encoded (i.e., its formation is directed) by the FFAR4 gene. This gene is located on the long (i.e. "q") arm of chromosome 10 at position 23.33 (position notated as 10q23.33). G protein-coupled receptors (also termed GPRs or GPCRs) reside on their parent cells' surface membranes, bind any one of the specific set of ligands that they recognize, and thereby are activated to trigger certain responses in their parent cells. FFAR4 is a rhodopsin-like GPR in the broad family of GPRs which in humans are encoded by more than 800 different genes. It is also a member of a small family of structurally and functionally related GPRs that include at least three other free fatty acid receptors (FFARs) viz., FFAR1 (also termed GPR40), FFAR2 (also termed GPR43), and FFAR3 (also termed GPR41). These four FFARs bind and thereby are activated by certain fatty acids.
FFAR4 protein is expressed in a wide range of cell types. Studies conducted primarily on human and rodent cultured cells and in animals (mostly rodents) suggest that FFAR4 acts in these cells to regulate many normal bodily functions such as food preferences, food consumption, food tastes, body weight, blood sugar (i.e., glucose) levels, inflammation, atherosclerosis, and bone remodeling. Studies also suggest that the stimulation or suppression of FFAR4 alters the development and progression of several types of cancers. In consequence, agents that activate or inhibit FFAR4 may be useful for treating excessive fatty food consumption, obesity, type 2 diabetes, pathological inflammatory reactions, atherosclerosis, atherosclerosis-induced cardiovascular disease, repair of damaged bones, osteoporosis. and some cancers. These findings have made FFAR4 a potentially attractive therapeutic biological target for treating these disorders and therefore lead to the development of drugs directed at regulating FFAR4's activities.
Certain fatty acids, including in particular the omega-3 fatty acids, docosahexaenoic and eicosapentaenoic acids, have been taken in diets and supplements to prevent or treat the diseases and tissue injuries that recent studies suggest are associated with abnormalities in FFAR4's functions. It is now known that these fatty acids activate FFAR4. While dietary and supplemental omega-3 fatty acids have had little or only marginal therapeutic effects on these disorders (see health effects of omega-3 fatty acid supplementation), many drugs have been found that are more potent and selective in activating FFAR4 than the omega-3 fatty acids and one drug is a potent inhibitor of FFAR4. This raised a possibility that the drugs may be more effective in treating these disorders and prompted initial studies testing the effectiveness of them in disorders targeted by the omega-3 fatty acids. These studies, which are mostly preclinical studies on cultured cells or animal models of disease with only a few preliminary clinical studies, are reviewed here.
FFAR genes
The genes for FFAR1, FFAR2, and FFAR3 are located close to each other on the short (i.e., "p") arm of chromosome 19 at position 13.12 (location notated as 19p13.12); the FFAR4 gene is located on the "q" (i.e., long) arm of chromosome 10 at position 23.33 (location notated as 10q23.33). Humans express a long FFAR4 protein isoform consisting of 377 amino acids and a short splice variant protein isoform consisting of 361 amino acids. However, rodents, non-human primates, and other studied animals express only the short protein. The two isoforms operate through different cell-stimulating pathways to elicit different responses. Furthermore, humans express the long FFAR4 protein only in their colon and colon cancer tissues. The consequences of these differences for the studies reported here have not been determined.
Activators and inhibitors of the free fatty acid receptors
FFARs are activated by certain straight-chain fatty acids. FFAR2 and FFAR3 are activated by short-chain fatty acids, i.e., fatty acid chains consisting of 2 to 5 carbon atoms, mainly acetic, butyric, and propionic acids. FFAR1 and FFAR4 are activated by 1) medium-chain fatty acids, i.e., fatty acids consisting of 6-12 carbon atoms such as capric and lauric acids; 2) long-chain saturated fatty acids consisting of 13 to 21 carbon atoms such as myristic and stearic acids; 3) monounsaturated fatty acids such as oleic and palmitoleic acids; and 4) polyunsaturated fatty acids such as the omega-3 fatty acids alpha-linolenic, eicosatrienoic, eicosapentaenoic, and docosahexaenoic acids or omega-6 fatty acids such as linoleic, gamma-linolenic, dihomo-gamma-linolenic, arachidonic, and docosatetraenoic acids. Docosahexaenoic and eicosapentaenoic acid are commonly regarded as the main dietary fatty acids that activate FFAR4. Since all of the FFAR1- and FFAR4-activating fatty acids have similar potencies in activating FFAR4 and FFAR1 and have FFAR-independent means of influencing cells, it can be difficult to determine if their actions involve FFAR4, FFAR1, both FFARs, or FFAR-independent mechanisms.
The drugs which stimulate (i.e., are agonists of) FFAR4 include: GW9508 (the first discovered and most studied FFAR agonist is about 60-fold more potent in activating FFAR1 than FFAR4; it is often used to implicate FFAR4 functions in cells that naturally or after manipulation express no or very low FFAR1 levels); TUG-891 (almost 300-fold more potent in activating FFAR4 than FFAR1 in human cells but only modestly more potent on FFAR4 than FFAR1 in mouse cells); TUG-1197 (activates FFAR4 but not FFAR1); metabolex 36 (about 100-fold more potent in activating FFAR4 than FFAR1); GSK137647A (about 50-fold more potent in activating FFAR4 than FFAR1); compound A (Merck & Co.) and compound B (CymaBay Therapeutics) (both potently activate FFAR4 with negligible effects on FFAR1); and GPR120 III (2,000-fold more active on FFAR4 than FFAR1). AH-7614 acts as a negative allosteric modulator to inhibit FFAR4; it is >100-fold more potent in inhibiting FFAR4 than FFAR1 and the only currently available FFAR4 antagonist that inhibits FFAR4. Most of the studies reported to date have examined the effects of two FFAR4 agonists, GW9508 and TUG-891, that have been available far longer than the other listed drugs.
Cells and tissues expressing FFAR4
FFAR4 is expressed in a wide variety of tissues and cell types, but its highest levels of expression are in certain intestinal cells (i.e., enteroendocrine K and I cells), taste bud cells, fat cells, respiratory epithelium cells in the lung (i.e., club cells, also termed Clara cells), and macrophages. It is less strongly expressed in other cell types including: various immune cells besides macrophages; cells in brain, heart, and liver tissues; skeletal muscle cells; blood vessel endothelial cells; enteroendocrine L cells of the gastrointestinal tract; delta cells in the islets of the pancreas; cells involved in bone development and remodeling; some cells in the arcuate nucleus of the hypothalamus and in the nucleus accumbens; and some types of cancer cells. However, the cells and tissues that express FFAR4 can differ between animal and human studies, and many of these studies have measured FFAR4 messenger RNA (mRNA) but not the FFAR4 protein that this mRNA directs cells to make. The significance of these issues requires study.
FFAR4 functions and activities
Fat tissue development and thermogenesis
The two forms of fat cells, i.e., white and brown fat cells, develop from precursor stem cells. Brown fat cells promote thermogenesis, i.e., the generation of body heat. Studies have reported that: 1) FFAR4 levels rose in the fat tissues of mice exposed to cold; 2) TUG‐891 and GW9508 stimulated 3T3-L1 mouse stem-like cells to mature into fat cells; 3) mice lacking a functional Ffar4 gene (i.e., Ffar4 knockout mice; Ffar4 is the mouse equivalent of the human FFAR4 gene) had fewer brown fat cells in their subcutaneous adipose (i.e., fat) tissues in response to cold exposure, were cold intolerant, and had poor survival rates in cold temperatures; 4) GW9508 stimulated increases in the brown fat tissue of normal mice but in Ffar4 knockout mice stimulated histological (i.e., microscopic) changes in fat tissue suggesting that thermogenesis was impaired; 5) TUG-891 stimulated cultured mouse fat cells to oxidize fatty acids (this oxidation underlies the development of body heat in the non-shivering form of thermogenesis); and 6) FFAR1 has not yet been reported to be expressed in the fat tissue of mice or humans. These studies suggest that FFAR4 contributes to the proliferation of brown fat cells and to thermogenesis in mice. Studies are needed to determine if FFAR4 has a similar role in humans.
Obesity
Two rodent studies suggested that FFAR4 functions to limit excessive weight gain: FFAR4-deficient mice developed obesity, and mice treated with the FFAR4 agonist TUG-891 lost fat tissue. FFAR4 might play a similar obesity-suppressing role in humans. One study found that FFAR4 mRNA and protein levels were lower in the visceral fat tissues (i.e., fat around internal organs) of obese than of lean individuals. However, another study found that the expression of FFAR4 mRNA was higher in the subcutaneous and omental fat tissues of obese than of lean individuals. Similarly, one study reported that Europeans carrying a single-nucleotide variant of the FFAR4 gene encoding a dysfunctional FFAR4 protein (the variant protein has the amino acid histidine rather than arginine at position 270 and is notated as p.R270H) had an increased risk of becoming obese. However, this relation was not found in later studies on Danish and European populations. It is possible that the loss of FFAR4 expression or activity contributes to obesity but is by itself insufficient to promote it. Other studies have implicated activated FFAR1 in having anti-obesity effects in cultured cells, animal models, and possibly humans (see FFAR1 and obesity). For example, Ffar1 gene knockout mice (i.e., mice made to lack Ffar1 genes) became obese when fed a low-fat diet, while control mice became obese only when fed a high-fat diet.
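The p.R270H designation follows the standard HGVS-style protein notation (reference residue, position, substituted residue). A small illustrative parser, assuming that convention, is sketched below; the function and table names are chosen here for illustration.

```python
import re

# One-letter to full amino acid names for the two residues mentioned above.
AMINO_ACIDS = {"R": "arginine", "H": "histidine"}

def parse_protein_variant(notation: str) -> dict:
    """Parse a simple HGVS-style protein substitution such as 'p.R270H'
    into its reference residue, position, and substituted residue."""
    match = re.fullmatch(r"p\.([A-Z])(\d+)([A-Z])", notation)
    if match is None:
        raise ValueError(f"unrecognized variant notation: {notation!r}")
    ref, pos, alt = match.groups()
    return {"reference": AMINO_ACIDS.get(ref, ref),
            "position": int(pos),
            "substituted": AMINO_ACIDS.get(alt, alt)}

print(parse_protein_variant("p.R270H"))
# -> {'reference': 'arginine', 'position': 270, 'substituted': 'histidine'}
```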
Type 2 diabetes
The following studies suggest that FFAR4 regulates blood glucose levels and that FFAR4 agonists may be useful for treating individuals with type 2 diabetes. 1) The FFAR4 agonist GSK137647A and docosahexaenoic acid stimulated the release of insulin from cultured mouse and rat pancreatic islets (sites of insulin production and storage) and improved post-feeding hyperglycemia in diabetic mice. 2) The FFAR4 agonist TUG-891 stimulated cultured mouse fat cells to take up glucose and lowered fasting and post-feeding blood glucose levels in diabetic rats; it also stimulated insulin secretion and lowered blood glucose levels in mice. 3) The FFAR4 agonist compound A (Merck & Co.) and, with greater efficacy, a dual agonist of FFAR1 and FFAR4, DFL23916, improved blood insulin and glucose levels in mice challenged with a glucose tolerance test. 4) Fatty acid activators of FFAR4 promoted the release of glucagon-like peptide-1 and gastric inhibitory peptide (both of which stimulate insulin secretion) and reduced the secretion of ghrelin (which stimulates the drive to eat) in mice. 5) Downregulation (i.e., forced reduction in the cellular levels) of FFAR4 impaired insulin's actions by reducing the levels of the glucose transporter GLUT4 and insulin receptor substrate in 3T3-L1 mouse fat cells. 6) FFAR4-deficient mice developed glucose intolerance (a potential form of prediabetes) when fed a high-fat diet. 7) A diet rich in omega-3 fatty acids improved insulin sensitivity and glucose uptake in muscle and liver tissues in normal but not FFAR4-deficient mice. 8) FFAR4 levels in pancreatic islets are higher in individuals with higher insulin and lower HbA1c levels (HbA1c levels rise with blood glucose levels averaged over the preceding 3 months). And 9) individuals carrying the FFAR4 gene variant p.R270H (which codes for a hypoactive FFAR4) who regularly consumed low-fat diets had an increased incidence of developing type 2 diabetes; this association did not occur in p.R270H carriers who regularly consumed high-fat diets.
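The HbA1c measurements mentioned in point 8 are commonly converted to an estimated average glucose with the ADAG study regression eAG (mg/dL) = 28.7 × HbA1c (%) − 46.7; this formula is from the general diabetes literature, not the studies cited here. A minimal sketch:

```python
def estimated_average_glucose_mg_dl(hba1c_percent: float) -> float:
    """Estimated average glucose (eAG) from HbA1c using the widely cited
    ADAG study regression: eAG (mg/dL) = 28.7 * HbA1c(%) - 46.7."""
    return 28.7 * hba1c_percent - 46.7

# An HbA1c of 7% corresponds to roughly 154 mg/dL average glucose
# over the preceding ~3 months.
print(round(estimated_average_glucose_mg_dl(7.0)))  # -> 154
```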
In a phase II clinical trial (NCT02444910, https://www.clinicaltrials.gov), nine adults with previously untreated insulin-resistant type 2 diabetes were treated orally with increasing doses of KDT501 (an isohumulone derivative that is a relatively weak FFAR4 activator and partial agonist of PPARγ) for up to 29 days. After treatment, the participants had significantly lower blood plasma triglyceride and TNF-α levels and higher levels of adiponectin, a regulator of blood glucose levels. However, there were no significant changes in these individuals' oral glucose tolerance test results or measurements of insulin sensitivity. Further studies, including ones using more potent and selectively acting FFAR4 agonists, are needed to determine their effectiveness in regulating blood glucose levels and treating type 2 diabetes. Two separate studies have reported that the selective FFAR1 agonists MK‐8666 and TAK-875 greatly improved blood glucose levels in type 2 diabetic patients but also appeared to cause unacceptable liver damage (see FFAR1 and type 2 diabetes). These studies have been regarded as proof that FFAR1 contributes to regulating glucose levels in patients with type 2 diabetes and is therefore a target for treating these patients with FFAR1 agonists that lack significant adverse effects such as hepatotoxicity. Recent preclinical studies are examining other FFAR1 agonists for their liver and other toxicities.
Taste
Human and rodent taste buds and other areas of their tongues contain cells that express taste receptors which detect the five taste perception elements, viz., saltiness, sourness, bitterness, sweetness, and umami. One well-studied site bearing these receptor-expressing cells is the taste buds of the tongue's circumvallate papillae in mice and humans. TUG-891 stimulated cultured mouse and human taste bud cells to mobilize several cell activation pathways. Furthermore: 1) application of TUG-891 to the tongues of mice caused alterations in their blood levels of cholecystokinin (one of its functions is to mediate satiety) and adipokines (i.e., signaling proteins secreted by fat tissues); 2) dietary fatty acids that activate FFAR4 altered the taste of, and preferences for, fats in rats; 3) FFAR4-deficient mice were less likely to consume fatty meals; 4) the injection of the FFAR4 agonist GPR120 III into the arcuate nucleus and nucleus accumbens brain areas of mice reduced their food intake and suppressed the rewarding effects of high-fat and high-sugar foods; and 5) TUG-891 enhanced humans' fatty orosensation (i.e., the sensation of fat perceived in the mouth) when added to FFAR4-activating dietary fats but not when added to fat-free mineral oil. The latter finding suggests that in humans FFAR4 agonists enhance the sensation of fats but do not by themselves directly evoke it. However, one study found that mice with non-functional Ffar4 genes retained their preferences for oily solutions and long-chain fatty acids. Follow-up studies are needed to confirm the functional roles of FFAR4 in taste perceptions and preferences.
Inflammation
FFAR4 is expressed by a variety of cell types involved in inflammation such as macrophages, dendritic cells, eosinophils, neutrophils, and T cells. FFAR4 activators inhibited: human eosinophils from secreting the pro-inflammatory cytokine interleukin-4; mouse RAW 264.7 and peritoneal macrophages from secreting the pro-inflammatory cytokines tumor necrosis factor-α (i.e., TNF-α) and interleukin-6; bone marrow-derived mouse dendritic cells from secreting the pro-inflammatory cytokines monocyte chemoattractant protein 1, TNF-α, interleukin-6, interleukin-12 subunit alpha, and interleukin-12 subunit beta; and mouse helper and cytotoxic T cells from releasing the pro-inflammatory cytokines interferon gamma, interleukin-17, interleukin-2, and TNF-α. These findings suggest that FFAR4 acts to suppress inflammation, a view supported by the following studies. FFAR4-deficient mice have increased levels of inflammation in their fat tissues. Furthermore, FFAR4 agonist drugs and/or omega-3 fatty acids reduced: 1) the chronic inflammation that develops in the fat and liver tissues of db/db mice; 2) cyclophosphamide-induced interstitial cystitis (i.e., urinary bladder inflammation) in rats; 3) the liver inflammation which follows transient blockage of the liver's blood supply in mice; 4) chronic sleep deprivation-induced inflammation of visceral fat tissues in mice; 5) diet-induced inflammation in the islets of the pancreas in mice (this reduction did not occur in mice lacking a functional Ffar4 gene); 6) 2,4-dinitrochlorobenzene-induced mouse contact dermatitis (this reduction did not occur in Ffar4 gene-deficient mice); 7) dextran sodium sulfate-induced colitis in mice; and 8) brain inflammation caused by the reduction of blood flow in experimentally induced cerebral infarction in mice.
The short-term (i.e., less than 29 days) phase II clinical trial (NCT02444910, https://www.clinicaltrials.gov) found that nine diabetic adults treated with the FFAR4 agonist KDT501 developed higher plasma levels of adiponectin. Biopsied specimens of these individuals' subcutaneous fat tissues obtained up to 3 days after the end of KDT501 treatment released greater amounts of adiponectin than biopsies obtained before KDT501 treatment. Adiponectin has various anti-inflammatory actions.
Atherosclerosis and cardiovascular disease
Arterial atherosclerosis is initiated by damage to these blood vessels' endothelial cells, i.e., the single layers of cells which face the blood. This damage opens a passage for circulating low-density lipoproteins to enter the vessel and move to its innermost layer, the tunica intima, where they are metabolized to oxidized low-density lipoproteins (i.e., oxLDL). Circulating monocytes attach to the damaged endothelium, move to the tunica intima, ingest the oxLDL, and differentiate into M1 macrophages, i.e., macrophages that promote inflammation. These M1 cells continue to ingest oxLDL and may eventually become cholesterol-laden foam cells that promote the development of atheromatous plaques, i.e., hardened accumulations of macrophages, lipids, calcium, and fibrous connective tissue. Over time, the plaques may grow to sizes that narrow or occlude the arteries in which they reside, causing peripheral artery disease, hypertension, coronary artery diseases, and heart damage. The following studies suggest that the suppression of vascular inflammation by FFAR4 agonists reduces the development of atherosclerosis and its associated disorders. 1) The FFAR1/FFAR4-activating drug GW9508 stimulated cultured human THP-1 macrophage foam cells and RAW 264.7 mouse macrophages to secrete their cholesterol and reduce their levels of cholesteryl esters. 2) In cell cultures, GW9508 and TUG-891 inhibited THP-1 monocytes from attaching to human aortic endothelial cells. 3) Long-term administration of GW9508 or TUG-891 to APOE−/− mice (these mice develop atherosclerosis due to lack of the apolipoprotein E gene) converted M1 macrophages to inflammation-suppressing M2 macrophages and resulted in less vascular inflammation and smaller atherosclerotic plaques (TUG-891's actions were reversed by the FFAR4 antagonist AH-7614). 4) Following constriction of their aortas (using an experimental procedure termed transverse aortic constriction which forces the heart to beat against excessively high blood pressures), the hearts of FFAR4-deficient male but not female mice had pathologically thickened ventricle walls which contracted dysfunctionally compared to mice with normal FFAR4 levels. 5) Cardiac tissue FFAR4 levels were lower in humans with congestive heart failure. And 6) compared to women, men carrying the defective p.R270H FFAR4 gene variant had several cardiac abnormalities including larger left ventricle masses, larger left ventricle diameters as measured at the end of diastole, increased maximum left atrium sizes, and a trend toward somewhat lower minimum cardiac ejection fractions. Many but not all clinical trials have found that dietary regimens enriched with eicosapentaenoic and docosahexaenoic acids lower the risk of coronary artery disease, congestive heart failure, and sudden death due to cardiac disease. Further studies are needed to determine whether the therapeutic effects of omega-3 fatty acids in mice and humans involve FFAR4 activation and whether potent, selectively acting FFAR4 drugs are more effective than omega-3 fatty acids in preventing and/or treating these and the other cited atherosclerosis-associated disorders.
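The ejection fractions mentioned in point 6 are defined by the standard formula EF = (EDV − ESV) / EDV × 100, where EDV and ESV are the end-diastolic and end-systolic ventricular volumes; this definition is from general cardiology, not the cited studies. A minimal sketch with illustrative volumes:

```python
def ejection_fraction_percent(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction: the percentage of the
    end-diastolic volume (EDV) ejected per beat; ESV is the
    end-systolic volume. EF = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100

# Illustrative volumes only: 120 mL at end-diastole and 50 mL at end-systole
# give an EF of about 58%, within the commonly quoted normal range.
print(round(ejection_fraction_percent(120, 50)))  # -> 58
```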
Cancer
FFAR4 has been detected in various types of cultured human cancer cells and found to promote or inhibit their proliferation, migration, survival, and/or resistance to anti-cancer drugs, with the direction of the effect depending on the type of cancer cell and the response examined. Studies have reported that: 1) GW9508 (which activates FFAR1 but at higher concentrations also activates FFAR4) stimulated migration of human SW480 and HCT116 colon cancer cells (since neither cell line expressed FFAR1, GW9508 appeared to stimulate this migration by activating FFAR4); 2) GW9508 inhibited the migration and proliferation of human A375 and G361 melanoma cells but was less effective on FFAR4 knockdown A375 cells (i.e., cells that have been forced to express low levels of FFAR4; this result suggests that GW9508 acted through both FFAR4 and FFAR1 in these two melanoma cell types); 3) GW9508 stimulated the migration and invasiveness of human MG-63 bone osteosarcoma cells but this stimulation did not occur in FFAR4 gene knockdown cells and therefore appeared to involve FFAR4 activation; 4) the FFAR4 agonist TUG-891 reduced the ability of docosahexaenoic and eicosapentaenoic acids to stimulate the proliferation of human DU145 and PC-3 prostate cancer cells (this result suggests that activated FFAR4 inhibited these cells' proliferation); and 5) the effects of FFAR4 and FFAR1 gene knockdowns and TUG-891 treatment in PANC-1 human pancreas cancer cells suggested that FFAR4 stimulated and FFAR1 inhibited these cells' motility, invasiveness, and formation of colonies in cell culture assays. Activated FFAR1 also stimulates or inhibits the malignant behaviors of various cancers, including some of those discussed here (see FFAR1 and cancer). Finally, one study reported that individuals carrying a single-nucleotide polymorphism (SNP) variant allele of the FFAR4 gene (described as 9469C>A, in which adenine replaces cytosine at position 9469 of the gene's nucleic acid sequence) had increased family histories and personal risks of developing lung cancer. Further animal model, clinical, and gene studies are needed to define the roles of FFAR4 and FFAR1 in these and other cancers.
Breast cancer
Breast cancer studies on FFAR4 have been more extensive than those on other cancers. Cell culture studies showed that knockdown of FFAR4 levels in cultured human MCF-7, MDA-MB-231, and SKBR3 breast cancer cells slowed their proliferation and increased their death by apoptosis, and that GW9508 and TUG-891 inhibited the proliferation and migration of MCF-7 and MDA-MB-231 cells. Animal studies found that FFAR4 gene knockdown MDA-MB-231 cells transplanted into mice formed more rapidly growing and larger tumors than those formed by normal MDA-MB-231 cell transplants, and that GW9508-treated mice transplanted with FFAR4 gene knockdown MDA-MB-231 cells had more lung metastases than mice transplanted with normal MDA-MB-231 cells. These findings suggest that FFAR4 and FFAR1 contribute to inhibiting breast tumor growth but that FFAR1, not FFAR4, inhibits these cells' metastasis. Clinical observation studies reported that: 1) FFAR4 was expressed in patients' breast cancers but not in the normal epithelium lining their breasts' ducts and lobules; 2) the proportions of five fatty acids which activate FFAR4 and FFAR1 (viz., stearic, dihomo-gamma-linolenic, docosatetraenoic, docosapentaenoic, and docosahexaenoic acids) were higher in patients' cancerous than in adjacent normal breast tissues; 3) patients with ER(+) breast cancer (i.e., breast cancers containing cells that express estrogen receptors) had higher cancer tissue levels of FFAR4 than patients with estrogen receptor negative, i.e., ER(-), breast cancer; 4) among all ER(+) breast cancer patients who were treated with tamoxifen (a selective estrogen receptor modulator commonly used to treat breast cancer), those with high cancer cell FFAR4 levels had a significantly lower 10-year recurrence-free survival rate (percentage of individuals disease-free 10 years after diagnosis) and lower 10-year breast cancer-specific survival rate (percentage of individuals alive 10 years after diagnosis) than those with lower FFAR4 expression levels or ER(-) breast cancer patients; and 5) individuals with higher cancer tissue FFAR4 levels who had a luminal A, luminal B HER2(-), or luminal B HER2(+) breast cancer subtype (see breast cancer subtypes) had worse prognoses than individuals with low FFAR4 levels in these respective cancer subtypes (individuals with non-luminal HER2(+) or triple-negative breast cancer subtypes did not show this relationship). These clinical findings suggest that one or more of the five FFAR4/FFAR1-activating fatty acids in breast cancer tissues contributes to this cancer's development and/or progression; that high levels of FFAR4 in ER(+) breast cancers confer resistance to tamoxifen therapy and thereby reduce survival; and that high levels of FFAR4 are associated with poorer survival in certain breast cancer subtypes. Studies are needed to determine if a high FFAR4 level can be a clinically useful marker for predicting the severity and prognosis of breast cancers, a contraindication to using tamoxifen to treat breast cancers, and a target for treating ER(+) breast cancers with, e.g., an FFAR4 inhibitor.
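The 10-year recurrence-free and cancer-specific survival rates cited in point 4 are typically estimated with the Kaplan–Meier method, which accounts for patients censored before 10 years of follow-up. The sketch below is a minimal implementation on toy data (not from any cited study) and assumes distinct event times.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator (distinct event times assumed).
    `times`: follow-up in years; `events`: 1 = event (e.g., recurrence),
    0 = censored. Returns (time, surviving fraction) at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, survival, curve = len(times), 1.0, []
    for i in order:
        if events[i] == 1:
            # At each event, multiply by the fraction of at-risk patients
            # who did not have the event at that time.
            survival *= (at_risk - 1) / at_risk
            curve.append((times[i], survival))
        at_risk -= 1  # both events and censorings leave the risk set
    return curve

# Toy follow-up data for 8 hypothetical patients.
times  = [2.0, 3.5, 5.0, 6.0, 8.0, 10.0, 10.0, 10.0]
events = [1,   0,   1,   1,   0,   0,    0,    0]
for t, s in kaplan_meier(times, events):
    print(f"{t:>5.1f} years: {s:.1%} recurrence-free")
```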
Bone remodeling
Osteoclasts resorb bone tissue in a physiological process needed to maintain, repair, and remodel bones. Osteoclasts develop by a process of cellular differentiation termed osteoclastogenesis from cells in the mononuclear phagocyte system. In a mouse bone marrow culture model of bone resorption, GW9508 inhibited osteoclast activity by reducing the differentiation of cells into osteoclasts as well as the survival and function of the osteoclasts. Since FFAR4 expression was 100-fold higher than FFAR1 expression in the osteoclasts, and knockdown of FFAR4 in the osteoclasts blocked GW9508's effects, these studies suggest that activated FFAR4 functions to block osteoclast-mediated bone resorption. In support of this view, Ffar4 gene knockdown mice developed osteoarthritis more rapidly than control mice in a model of knee osteoarthritis; docosahexaenoic acid inhibited the expression of inflammatory factors in cultured human chondrocytes; and the levels of FFAR4 protein in the osteoarthritic and/or nearby fat tissue of humans with osteoarthritis were higher than in humans with non-osteoarthritic bone disease. These studies suggest that FFAR4 inhibits bone resorption and that FFAR4 agonists may prove useful for treating excessive bone resorption, i.e., osteoporosis.
See also
Free fatty acid receptor
Free fatty acid receptor 1
G protein-coupled receptors | Free fatty acid receptor 4 | ["Chemistry"] | 6,688 | ["G protein-coupled receptors", "Signal transduction"] |
14,517,481 | https://en.wikipedia.org/wiki/GPR142 | Probable G-protein coupled receptor 142 is a protein that in humans is encoded by the GPR142 gene.
GPR142 is a member of the rhodopsin family of G protein-coupled receptors (GPCRs) (Fredriksson et al., 2003). [supplied by OMIM]
G protein-coupled receptors | GPR142 | ["Chemistry"] | 74 | ["G protein-coupled receptors", "Signal transduction"] |
14,517,495 | https://en.wikipedia.org/wiki/GPR148 | G protein-coupled receptor 148, also known as GPR148, is a human orphan receptor from the GPCR superfamily. It is expressed primarily in the nervous system and testis. It may be implicated in prostate cancer.
G protein-coupled receptors | GPR148 | ["Chemistry"] | 54 | ["G protein-coupled receptors", "Signal transduction"] |