id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
13,522,081 | https://en.wikipedia.org/wiki/AACS%20LA | AACS LA (Advanced Access Content System Licensing Administrator) is the body that develops and licenses the AACS copy-protection system used on the HD DVD and Blu-ray Disc high-definition optical disc formats.
History
The AACS LA consortium was founded in 2004 by eight companies: Intel, Microsoft, Panasonic, IBM, Sony, Toshiba, Warner Bros. and The Walt Disney Company. The AACS standard was delayed twice, first by development issues and then by concerns raised by an important member of the Blu-ray group. At the request of Toshiba, an interim standard was published which did not include some features, such as managed copy. On July 5, 2009, the AACS1 license went online.
See also
AACS encryption key controversy
Advanced Access Content System
References
External links
Technology consortia
Advanced Access Content System | AACS LA | [
"Technology"
] | 178 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,522,147 | https://en.wikipedia.org/wiki/Clinical%20coder | A clinical coder—also known as clinical coding officer, diagnostic coder, medical coder, or nosologist—is a health information professional whose main duties are to analyse clinical statements and assign standardized codes using a classification system. The health data produced are an integral part of health information management, and are used by local and national governments, private healthcare organizations and international agencies for various purposes, including medical and health services research, epidemiological studies, health resource allocation, case mix management, public health programming, medical billing, and public education.
For example, a clinical coder may use a set of published codes on medical diagnoses and procedures, such as the International Classification of Diseases (ICD), the Healthcare Common Procedure Coding System (HCPCS), and Current Procedural Terminology (CPT) for reporting to the health insurance provider of the recipient of the care. The use of standard codes allows insurance providers to map equivalencies across different service providers who may use different terminologies or abbreviations in their written claims forms, and to justify reimbursement of fees and expenses. The codes may cover topics related to diagnoses, procedures, pharmaceuticals or topography. The medical notes may also be divided into specialities, for example cardiology, gastroenterology, nephrology, neurology, pulmonology or orthopedic care. There are also specialist manuals for oncology known as ICD-O (International Classification of Diseases for Oncology) or "O Codes", which are also used by tumor registrars (who work with cancer registries), as well as dental codes for dentistry procedures known as "D codes" for further specifications.
A clinical coder therefore requires a good knowledge of medical terminology, anatomy and physiology, a basic knowledge of clinical procedures and diseases and injuries and other conditions, medical illustrations, clinical documentation (such as medical or surgical reports and patient charts), legal and ethical aspects of health information, health data standards, classification conventions, and computer- or paper-based data management, usually as obtained through formal education and/or on-the-job training.
In practice
The basic task of a clinical coder is to classify medical and health care concepts using a standardised classification. Inpatient episodes, mortality events, outpatient episodes, general practitioner visits and population health studies can all be coded.
Clinical coding has three key phases: a) abstraction; b) assignment; and c) review.
Abstraction
The abstraction phase involves reading the entire record of the health encounter and analysing the information to determine what condition(s) the patient had, what caused it and how it was treated. The information comes from a variety of sources within the medical record, such as clinical notes, laboratory and radiology results, and operation notes.
Assignment
The assignment phase has two parts: finding the appropriate code(s) from the classification for the abstraction; and entering the code into the system being used to collect the coded data.
Review
Reviewing the code set produced from the assignment phase is very important. A clinical coder must ask themselves, "Does this code set fairly represent what happened to this patient in this health encounter at this facility?" By doing this, clinical coders are checking that they have covered everything that they must, without using extraneous codes. For health encounters that are funded through a case mix mechanism, the clinical coder will also review the diagnosis-related group (DRG) to ensure that it fairly represents the health encounter.
Competency levels
Clinical coders may have different competency levels depending on the specific tasks and employment setting.
Entry-level / trainee coder
An entry-level coder has completed (or nearly completed) an introductory training program in using clinical classifications. Depending on the country, this program may be in the form of a certificate, or even a degree, which has to be earned before the trainee is allowed to start coding. All trainee coders will have some form of continuous, on-the-job training, often being overseen by a more senior coder.
Intermediate-level coder
An intermediate-level coder has acquired the skills necessary to code many cases independently. Coders at this level are also able to code cases with incomplete information. They have a good understanding of anatomy and physiology along with disease processes. Intermediate-level coders have their work audited periodically by an advanced coder.
Advanced-level / senior coder
Advanced-level and senior coders are authorized to code all cases including the most complex. Advanced coders will usually be credentialed and will have several years of experience. An advanced coder is also able to train entry-level coders.
Nosologist
A nosologist understands the principles that underpin the classification. Nosologists consult nationally and internationally to resolve issues in the classification and are viewed as experts who can not only code, but also design and deliver education and assist in the development of the classification and the rules for using it.
Nosologists are usually expert in more than one classification, including morbidity, mortality and case mix. In some countries the term nosologist is used as a catch-all term for all levels.
Classification types
Clinical coders may use many different classifications, which fall into two main groupings: statistical classifications and nomenclatures.
Statistical classification
A statistical classification, such as ICD-10 or DSM-5, will bring together similar clinical concepts, and group them into one category. This allows the number of categories to be limited so that the classification does not become too big, but still allows statistical analysis. An example of this is in ICD-10 at code I47.1. The code title (or rubric) is Supraventricular tachycardia. However, there are several other clinical concepts that are also classified here. Amongst them are paroxysmal atrial tachycardia, paroxysmal junctional tachycardia, auricular tachycardia and nodal tachycardia.
Nomenclature
With a nomenclature, for example SNOMED CT, there is a separate listing and code for every clinical concept. So, in the tachycardia example above, each type and clinical term for tachycardia would have its own code listed. This makes nomenclatures unwieldy for compiling health statistics.
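The contrast between the two groupings can be sketched in a few lines of code. The ICD-10 category I47.1 and its included terms are taken from the tachycardia example above; the nomenclature-style concept IDs below are invented placeholders for illustration, not real SNOMED CT identifiers.

```python
# Statistical classification: several clinical concepts map to ONE category.
# Terms and the I47.1 rubric are from the ICD-10 example above.
icd10_index = {
    "supraventricular tachycardia": "I47.1",
    "paroxysmal atrial tachycardia": "I47.1",
    "paroxysmal junctional tachycardia": "I47.1",
    "auricular tachycardia": "I47.1",
    "nodal tachycardia": "I47.1",
}

# Nomenclature: every clinical concept gets its OWN code.
# (Placeholder IDs, not real SNOMED CT concept identifiers.)
nomenclature_index = {
    term: f"CONCEPT-{i:04d}" for i, term in enumerate(icd10_index, start=1)
}

# The classification collapses five concepts into one category, which is
# what keeps the number of categories small enough for statistical use.
assert len(set(icd10_index.values())) == 1
assert len(set(nomenclature_index.values())) == len(nomenclature_index)
```

The many-to-one mapping is what makes a statistical classification compact enough for tabulation, and the one-to-one mapping is what makes a nomenclature precise but unwieldy for statistics.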
Qualification and professional association
In some countries, clinical coders may seek voluntary certification or accreditation through assessments conducted by professional associations, health authorities or, in some instances, universities. The options available to the coder will depend on the country, and, occasionally, even between states within a country.
Professional bodies that provide certification for clinical coders may also represent other health information management professionals.
Australia
Clinical Coders' Society of Australia (CCSA)
Health Information Management Association of Australia (HIMAA)
Canada
Canadian Health Information Management Association (CHIMA)
Saudi Arabia
Saudi Health Information Management Association (SHIMA)
United Kingdom
Clinical coders start as trainees, and there are no conversion courses for coders immigrating to the United Kingdom.
The National Clinical Coding Qualification (NCCQ) is an exam for experienced coders, and is recognised by the four health agencies of the UK. Institute of Health Records and Information Management (IHRIM) are the awarding body.
England
In England, a novice coder will complete the national standards course written by NHS Digital within six months of being in post. They will then start working towards the NCCQ.
Three years after passing the NCCQ, two further professional qualifications are made available to the coder in the form of NHS Digital's clinical coding auditor and trainer programmes.
Scotland
In 2015, National Services Scotland, in collaboration with Health Boards, launched the Certificate of Technical Competence (CTC) in Clinical Coding (Scotland). Awarded by the Institute of Health Records & Information Management (IHRIM), the aims of the certificate include supporting staff new to clinical coding, and providing a standardised framework of clinical coding training across NHS Scotland.
The NCCQ is a recognized coding qualification in Scotland.
Wales
The NCCQ is a recognized coding qualification by NHS Wales.
Northern Ireland
Health and Social Care in Northern Ireland recognizes the NCCQ as a coding qualification.
United States
In the United States, the typical qualification for an entry-level medical coder is completion of a diploma or certificate or, where offered, an associate degree. The diploma, certificate, or degree will usually include an Internet-based and/or in-person internship at some form of medical office or facility. Some form of on-the-job training is also usually provided in the first months on the job, until the coder can earn an intermediate or advanced level of certification and accumulate time on the job. For further academic training, a baccalaureate or master's degree in medical information technology, or a related field, can be earned by those who wish to advance to a supervisory or academic role. A nosologist (medical coding expert) in the U.S. will usually be certified by either AHIMA or the AAPC (often both) at their highest level of certification, hold a specialty inpatient and/or outpatient certification (pediatrics, obstetrics/gynecology, gerontology, and oncology are among those offered by AHIMA and/or the AAPC), have at least 3–5 years of intermediate experience beyond entry-level certification and employment, and often hold an associate, bachelor's, or graduate degree.
There are several associations that medical coders in the United States may join, including:
AAPC (formerly American Academy of Professional Coders)
American Board of Health Care Professionals (ABHCP)
American Health Information Management Association (AHIMA)
Some medical coders elect to be certified by more than one society.
The AAPC offers the following entry-level certifications in the U.S.: the Certified Professional Coder (CPC), which tests on most areas of medical coding, as well as the Certified Inpatient Coder (CIC) and Certified Outpatient Coder (COC). Both the CPC and COC have apprentice designations (CPC-A and COC-A, respectively) for those who pass the certification exams but do not have two years of on-the-job experience. There is no apprentice designation available for the CIC. After completing two years of on-the-job experience, the apprentice credential holder can request to have the apprentice designation removed from their credential. There are also further specialist coding certifications, for example, the CHONC credential for those who specialize in hematology and oncology coding and the CASCC credential for those who specialize in ambulatory surgery center coding.
The other main organization is American Health Information Management Association (AHIMA) which offers the Certified Coding Specialist (CCS), Certified Coding Specialist-Physician-based (CCS-P), and the entry-level Certified Coding Associate (CCA).
Some U.S. states now mandate or at least strongly encourage certification from either AAPC or AHIMA or a degree from a college to be employed. Some states have registries of medical coders, though these can be voluntary listings. This trend was accelerated in part by the passage of HIPAA and the Affordable Care Act and similar changes in other Western countries, many of which use the ICD-10 for diagnostic medical coding. The change to more regulation and training has also been driven by the need to create accurate, detailed, and secure medical records (especially patient charts, bills, and claim form submissions) that can be recorded efficiently in an electronic era of medical records where they need to be carefully shared between different providers or institutions of care. This was encouraged and later required by legislation and institutional policy.
See also
Clinical medicine
Current Procedural Terminology
Diagnosis-related group
Diagnostic and Statistical Manual of Mental Disorders (DSM)
Health informatics
International Classification of Diseases (ICD)
ICD-11
ICD-10
Medical diagnosis
Pathology Messaging Implementation Project
WHO Family of International Classifications
References
External links
WHO Family of International Classifications
Health informatics
Health care occupations
Medical classification | Clinical coder | [
"Biology"
] | 2,468 | [
"Health informatics",
"Medical technology"
] |
13,522,189 | https://en.wikipedia.org/wiki/%C5%8Cno%20Benkichi | was a Japanese photographer and inventor. He is known for making Karakuri puppets.
Life and career
Ōno Benkichi was born in Kyoto in 1801. At the age of 20, he moved to Nagasaki to study Western medicine and science. After studying weaponry and mathematics on Tsushima Island, he returned to Kyoto and married. In 1831, he moved to Ohno (now in Ishikawa Prefecture), where his wife was born, and lived there for the rest of his life. He died in 1870.
Ōno was one of the first Japanese to experiment with photography. His first photograph dates back to the 1850s. Ōno designed various devices, including cameras, lighters, clocks, and telescopes. One of his inventions was designed to generate static electricity and was used in medicine. One of his famous mechanisms is called the "Tea-serving boy".
Legacy
Karakuri Memorial Museum is dedicated to Ōno's inventions and life, and features a display of the Karakuri puppets he made.
References
Japanese photographers
Japanese inventors
1801 births
1870 deaths
Date of death missing
Date of birth unknown
Karakuri | Ōno Benkichi | [
"Physics",
"Technology"
] | 227 | [
"Physical systems",
"Machines",
"Karakuri"
] |
13,522,371 | https://en.wikipedia.org/wiki/Iron%20oxide%20adsorption | Iron oxide adsorption is a water treatment process that is used to remove arsenic from drinking water. Arsenic is a common natural contaminant of well water and is highly carcinogenic. Iron oxide adsorption treatment for arsenic in groundwater is a commonly practiced removal process which involves the chemical treatment of arsenic species such that they adsorb onto iron oxides and create larger particles that may be filtered out of the water stream.
The addition of ferric chloride, FeCl3, to well water immediately after the well at the influent to the treatment plant creates ferric hydroxide, Fe(OH)3, and hydrochloric acid, HCl.
3H2O + FeCl3 → Fe(OH)3 + 3HCl
Fe(OH)3 in water is a strong adsorbent of arsenate, As(V), provided that the pH is low. HCl lowers the pH, assuring arsenic adsorption, and the dissociated chlorine oxidizes iron in solution from Fe+2 to Fe+3, which may then bond with hydroxide ions, OH−, creating more adsorbent.
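The mass balance implied by the hydrolysis reaction above can be checked with a quick stoichiometry sketch. The 10 mg/L ferric chloride dose used here is an arbitrary illustration, not a recommended design value.

```python
# Stoichiometry of  3 H2O + FeCl3 -> Fe(OH)3 + 3 HCl
# (1 mol of adsorbent and 3 mol of acid per mol of ferric chloride).
MW_FECL3 = 162.2   # g/mol, FeCl3
MW_FEOH3 = 106.9   # g/mol, Fe(OH)3
MW_HCL = 36.46     # g/mol, HCl

def hydrolysis_products(fecl3_mg_per_l):
    """Return (Fe(OH)3, HCl) formed in mg/L for a given FeCl3 dose."""
    mmol = fecl3_mg_per_l / MW_FECL3      # mmol/L of FeCl3 dosed
    feoh3 = mmol * MW_FEOH3               # 1:1 molar ratio
    hcl = 3 * mmol * MW_HCL               # 1:3 molar ratio
    return feoh3, hcl

# Illustrative 10 mg/L dose: roughly 6.6 mg/L of ferric hydroxide
# adsorbent and 6.7 mg/L of hydrochloric acid are produced.
feoh3, hcl = hydrolysis_products(10.0)
```

This back-of-the-envelope check shows why dosing ferric chloride simultaneously supplies adsorbent and depresses pH, both of which favor arsenate adsorption.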
This adjustment also lowers the pH of the well water, decreasing alkalinity and allowing more cationic species, such as Fe(+) or As(+), to exist freely within the flow. Low pH also decreases the solubility of some iron and arsenic species and increases the adsorptive reactivity of arsenate, As(V).
Additional oxidation of Fe+2 to Fe+3, also referred to as iron(II) and iron(III), is induced by the addition of sodium hypochlorite, NaOCl, at the well head. NaOCl is usually added for disinfection, although in this case it also serves two objectives: maintaining a distribution-system free chlorine residual of 1 mg/L, and oxidizing aqueous As(III) to As(V) and aqueous iron Fe+2 to Fe+3, which will bond with hydroxide for further adsorption.
The filter media usually consists of anthracite, iron-manganese oxidizing sand, and garnet sand over support gravel.
References
Water treatment | Iron oxide adsorption | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 476 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
10,981,939 | https://en.wikipedia.org/wiki/Small%20science | Small science refers (in contrast to big science) to science performed in a smaller scale, such as by individuals, small teams or within community projects.
Bodies which fund research, such as the National Science Foundation, DARPA, and the EU with its Framework programs, have a tendency to fund larger-scale research projects. Reasons include the idea that ambitious research needs significant resources for its execution, and the reduction of administrative and overhead costs on the funding body's side. However, small science, whose data are often local and not easily shared, is still funded by these bodies in many areas, such as chemistry and biology.
The importance of Small Science
Small Science helps define the goals and directions of large scale scientific projects. In turn, results of large scale projects are often best synthesized and interpreted by the long-term efforts of the Small Science community. In addition, because Small Science is typically done at universities, it provides students and young researchers with an integral involvement in defining and solving scientific problems. Hence, small science can be seen as an important factor for bringing together science and society.
According to James M. Caruthers, a professor of chemical engineering at Purdue University, quoted in The Chronicle of Higher Education, data from Big Science is highly organized on the front end, where researchers define it before it even starts rolling off the machines, making it easier to handle, understand, and archive. Small Science data is "horribly heterogeneous," and far more vast. In time, Small Science will generate two to three times more data than Big Science.
The American Geophysical Union stresses the importance of small science in a position statement.
Examples of Small Science results with high impact
Many historical examples show that results of Small Science can have enormous impacts:
Galois theory, one of the foundational theories of abstract algebra was developed by Évariste Galois within just weeks.
Albert Einstein developed his theory of special relativity as a hobby while working full-time in a patent office.
Robert Goddard invented the liquid-propelled and multi-stage rockets largely on his own. These breakthroughs led to the German V-2 and the Apollo program's Saturn V rockets.
See also
Citizen science
Independent scientist
References
History of science | Small science | [
"Technology"
] | 434 | [
"History of science",
"History of science and technology"
] |
10,982,594 | https://en.wikipedia.org/wiki/ELMER%20guidelines | ELMER (Easier and more Efficient Reporting), is a comprehensive set of principles and specifications for the design of Internet-based forms.
The Norwegian Ministry of Trade and Industry has decided that the most recent version, ELMER 2, shall be the common guidelines for user interfaces in Norwegian public forms for enterprises on the Internet. All public forms in Norway were to be based on the ELMER guidelines by the end of 2008.
The Norwegian authorities emphasize that simplification of public forms is important to improve communication between the users and the public sector. The idea is expressed like this in the preface of the guidelines:
"The proceeding transition to electronic reporting may be an important simplification measure for the respondents, but only if the Internet-based solutions are felt to be more user friendly than the old paper versions. By applying good pedagogical principles, electronic forms may also ensure a better understanding of the task, better validation of the data before submission, and by that even better response quality and more efficient processing by the relevant authority."
The objective of the ELMER guidelines is to meet the challenges of Internet design and pedagogics that are particular to web forms. WCAG requirements and W3C conventions have not been baked into the ELMER guidelines, but the guidelines shall not contain recommendations which are incompatible with these. In other words, ELMER is not the only set of guidelines to consider when designing electronic forms, but it is the only one that concentrates on the form itself.
Background
The first ELMER Project
During the summer of 2000, an interdisciplinary reference group on electronic reporting initiated the ELMER project. The project followed six enterprises over the period of one year in order to map out their reporting duties, and test simple solutions for electronic reporting based on familiar technology.
Among other things, the ELMER project presented an example for design of a complex web form. First and foremost the example demonstrated that the use of simple Internet technology opens up for new pedagogical opportunities which may make reporting to governmental authorities a lot easier.
The ELMER 2 Project
In 2005 the ELMER 2 project developed the example towards a comprehensive set of principles and specifications for the design of Internet-based forms. Business organizations, governmental bodies, usability experts and form developers were invited to submit suggestions and to take part in debates during the process. Open workshops were held, and a number of authorities and experts wrote, read and commented contributions posted on the project discussion forum.
ELMER 2 has evolved through co-operation between enterprises and experts. The participants have experiences from designing forms for several agencies and suppliers, from the reception and use of the same forms by the inquirer, and from usability testing of ELMER1-based and other electronic forms in varied user groups.
The guidelines
The Norwegian Ministry of Trade and Industry submitted the draft for public hearing in the autumn of 2005. The approved ELMER 2 guidelines were published in October 2006.
The guidelines are being administered by the Brønnøysund Register Centre.
References
Norwegian Ministry of Trade and Industry. "Forenkling for næringslivet", 2007-01-15. Retrieved on April 30, 2007.
External links
The current ELMER guidelines (PDF - 1 MB)
The ELMER website
The Norwegian Ministry of Trade and Industry
Human–computer interaction
Usability
User interfaces | ELMER guidelines | [
"Technology",
"Engineering"
] | 651 | [
"User interfaces",
"Interfaces",
"Human–machine interaction",
"Human–computer interaction"
] |
10,984,862 | https://en.wikipedia.org/wiki/Synchrotron%20Radiation%20Center | The Synchrotron Radiation Center (SRC), located in Stoughton, Wisconsin and operated by the University of Wisconsin–Madison, was a national synchrotron light source research facility, operating the Aladdin storage ring. From 1968 to 1987 SRC was the home of Tantalus, the first storage ring dedicated to the production of synchrotron radiation.
History
The Road to SRC: 1953–1968
Fifteen universities formed the Midwest Universities Research Association (MURA) in 1953 to promote and design a high-energy proton synchrotron to be built in the Midwest. With the intent of constructing a large accelerator, MURA purchased a suitable area of land with an underlying flat limestone base near Stoughton, Wisconsin, not far from the Madison campus of the University of Wisconsin.
MURA's first accelerator was a 45 MeV synchrotron, built in a concrete underground "vault", mostly for radiation protection purposes. A small electron storage ring, operating at 240 MeV, was designed by Ed Rowe and collaborators as a test facility to study high currents, and construction of this ring started in 1965. However, in 1963 President Johnson had decided that the next large accelerator facility would not be built at the MURA site, but in Batavia, Illinois; this became Fermilab. In 1967 MURA dissolved with the storage ring incomplete and with no further funding. The researchers, feeling teased by fate (and their government backers), named the machine after the mythological figure Tantalus, famed for his eternal punishment to stand beneath a fruit tree with the fruit ever eluding his grasp.
In 1966 a subcommittee of the National Research Council, which had been investigating the properties of synchrotron radiation from the 240 MeV ring, recommended it be completed as a tool for spectroscopy. A successful proposal was made to the US Air Force Office of Scientific Research, and the ring was completed in 1968—the first storage ring dedicated to the production of synchrotron radiation.
With the demise of MURA, a new entity was created to run the facility: the Synchrotron Radiation Center (SRC), administered by the University of Wisconsin.
Tantalus: 1968–1987
Tantalus had a modest circumference and, with an energy of 240 MeV, a critical energy of slightly under 50 eV. It achieved its first stored beam in March 1968. Initial operations were very difficult, with only about 5 hours per week of usable beam and currents of less than 1 mA. Initial users came from three groups, who took turns using their commercial monochromators on the one available beamline. On August 7, 1968, this first dedicated storage-ring-based synchrotron radiation facility produced its first data when Ulrich Gerhardt of the University of Chicago carried out simultaneous reflection and absorption measurements on CdS over the wavelength range 1100–2700 Å.
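The quoted critical energy can be sanity-checked with the standard bending-magnet formula E_c [keV] ≈ 2.218 E³ [GeV] / ρ [m]. The bending radius used below is an assumed illustrative value, not one given in the source.

```python
# Practical bending-magnet formula for the critical photon energy
# of synchrotron radiation: E_c [keV] = 2.218 * E[GeV]^3 / rho[m].
def critical_energy_kev(beam_energy_gev, bend_radius_m):
    return 2.218 * beam_energy_gev**3 / bend_radius_m

# Tantalus beam energy from the article; the 0.64 m bending radius
# is an assumed value chosen for illustration.
ec_ev = 1000 * critical_energy_kev(0.240, 0.64)
# With these inputs the result comes out slightly under 50 eV,
# consistent with the figure quoted for Tantalus.
```

A critical energy of a few tens of eV places the bulk of the emitted spectrum in the vacuum ultraviolet, which matches the spectroscopy program described below.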
In 1972 the building was enlarged to accommodate new beamlines, and by 1973 there were ten ports and beam currents of up to about 50 mA. A 40 MeV microtron was installed as a new injector in 1974, replacing the original MURA accelerator that had been used until that point, and within a year currents exceeded 150 mA, with typically over 30 hours of beam per week. A stored beam of 260 mA was achieved in 1977. In October 1974 the National Science Foundation took over funding from the Air Force.
Initial monochromators were commercial instruments with drawbacks for use at a synchrotron. SRC started a program of instrument development, both to take advantage of the unique properties of synchrotron radiation and to make beamlines available to users without their own instruments. Such users became known as "general users", while groups with their own beamlines became known as Participating Research Teams (PRTs). This model has become widely used at other facilities, where PRTs are also denoted Collaborating Access Teams (CATs) and Collaborating Research Groups (CRGs). PRTs have been used extensively by US scientists at US facilities but by 2010 were somewhat out of favor. The CRG in Europe, however, remains an important and successful means of flexible access.
For two decades Tantalus produced hundreds of experiments and was a testing ground for many synchrotron techniques still in use. Current synchrotron facilities can be very large, while Tantalus was not, and its small building, even after the 1972 expansion, was crowded with equipment and researchers. Users worked in very close quarters, and this proximity, combined with the relative isolation of the facility, made cross-fertilization of ideas unavoidable. The atmosphere was open, friendly, and informal, although not particularly comfortable physically. The heating system in one washroom did not work, so, to avoid frozen pipes, users simply left the door wide open. After someone posted a sign alerting users to the policy, an international contest began, with each person translating the message into their own language. A copy of this sign was included in an NSF funding request as evidence of Tantalus's growing international impact.
Research during those early years was dominated by optical spectroscopy. In 1971 an IBM research group produced the first photoelectron spectra using Tantalus, a milestone in the development of photoemission spectroscopy as a research tool. The tunability of the radiation allowed researchers to disentangle a material's ground-state electronic properties. In the mid-1970s the increasing beam current from the ring gave intensity levels sufficient for angle-resolved photoemission spectroscopy, with a joint Bell Labs–Montana State University group conducting the earliest experiments. As an experimental technique, angle-resolved photoemission developed rapidly and had an important conceptual impact on condensed-matter physics. Gas-phase spectroscopy was another successful field at SRC, starting from early absorption studies of noble gases.
With the new Aladdin storage ring operating, Tantalus was officially decommissioned in 1987, although it was run for six weeks in the summer of 1988 for experiments in atomic and molecular fluorescence. The storage ring was disassembled in 1995, and half the ring, the RF cavity and one of the original beamlines are now in storage at the Smithsonian Institution.
Aladdin, the early years: 1976–1986
In 1976 SRC submitted a proposal to the NSF for a 750 MeV storage ring as an intense source of VUV and soft x-ray radiation to an energy greater than 1 keV. This proposed ring was named Aladdin. Funding for the new ring was obtained from the NSF, the State of Wisconsin, and the Wisconsin Alumni Research Foundation (WARF). The final design was a 1 GeV ring with four straight sections, and construction of some components started in 1978. A new building to house the facility started construction in April 1979. The initial target date for first stored beam was October 1980.
The construction phase of Aladdin ended in 1981, but by late 1984 SRC had been unable to complete the commissioning of the facility, with a maximum stored current of 2.5 mA, too little to provide useful light intensities. Accelerator experts reviewing the project recommended the addition of a booster synchrotron. In May 1985, after a review by L. Edward Temple of the Department of Energy, which recommended still another study period while difficulties were ironed out, NSF director Erich Bloch decided not only against the upgrade, but also against continued funding for Aladdin operations. SRC was kept running with existing NSF funding for Tantalus and funds from WARF. The University of Wisconsin made it clear it would only continue funding Aladdin until June 1986, a situation characterized on campus as the Perils of Pauline. Concurrent with these events, the technical issue limiting the machine performance had been solved, and three months after the decision to withdraw NSF funding, currents of 40 mA had been achieved. By July 1986 this had risen to over 150 mA, and NSF funding was restored.
Closing
National Science Foundation funding stopped in 2011. The University of Wisconsin provided funding to keep the facility operating until June 2013, while new funding was sought. The biggest budget cutbacks were in education, outreach and support for outside users. By January 2012 the facility had lost about one-third of its staff to retirements and layoffs. In February 2014 the facility director announced that the center would be closing. The final beam run was completed March 7, 2014, after which the process of dismantling and disposing of the equipment began.
SRC history project
A project, completed in 2011, collected oral histories and historical documents related to SRC. These were deposited in the archives of the University of Wisconsin–Madison, and digitized copies of some of the material are available online.
G. J. Lapeyre award
In 1973 the vault that held Tantalus was being enlarged, and during a facility picnic a rainstorm hit and caused the vault to start to flood. Jerry Lapeyre of Montana State University used the lab's tractor to build earthworks to divert the water. His efforts led then-director Rowe to create the annual G. J. Lapeyre award to be awarded to "one who met and overcame the greatest obstacle in the pursuit of their research". The trophy had an octagonal base representing Tantalus, with a beer can from the lab picnic which preceded the flood, topped by a concrete "raindrop".
Technical description
Beamlines
References
External links
Official website
SRC history project digital archive
Synchrotron radiation facilities
Research institutes in Wisconsin
University of Wisconsin–Madison | Synchrotron Radiation Center | [
"Materials_science"
] | 1,979 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
10,985,546 | https://en.wikipedia.org/wiki/RCA%20tape%20cartridge | The RCA tape cartridge (labeled the RCA Sound Tape Cartridge) is a magnetic tape audio format that was designed to offer stereo quarter-inch reel-to-reel tape recording quality in a convenient format for the consumer market. It was introduced in 1958, following four years of development. This timing coincided with the launch of the stereophonic phonograph record.
The main advantage of the RCA tape cartridge over reel-to-reel machines is convenience. The user is not required to handle unruly tape ends and thread the tape through the machine before use, making the medium of magnetic tape more friendly to casual users. In addition, since the cartridge carries both supply and take-up reels, the cartridge does not have to be rewound before the tape is removed from the machine and stored. Because of these conveniences, the RCA tape cartridge system did see some success in schools, particularly in student language learning labs.
The same design concept would later be used in the more successful Compact Cassette, introduced by Philips in 1962.
Similar to the Compact Cassette, the RCA cartridges are reversible so that either side can be played. An auto-reverse mechanism in some models allows the tape to run continuously. Like the 8-track tape and Stereo-Pak, the tape runs at a standard speed of 3.75 inches per second (IPS). This is double the speed of the Compact Cassette and half of the top speed of consumer reel-to-reel tape recorders, which usually offer both 3.75 IPS and 7.5 IPS speeds. Such consumer reel-to-reel machines are capable of superior audio performance, but only at the faster speed.
The RCA tape cartridge format offers four discrete audio tracks that provide a typical playtime of 30 minutes of stereo sound per side, or double that for monophonic sound. Some models can also play and record at 1.875 IPS, doubling playing time with a significant reduction in sound quality. This speed produced too low a sound quality for music on these machines, but was acceptable for voice recording.
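The playing-time figures above can be checked with a little arithmetic. The tape length per side below is derived from the stated speed and playtime; it is not a figure given in the text.

```python
# Quick check of the playing-time arithmetic above. The tape length per side
# is derived from the stated speed and playtime; it is not given in the text.
STANDARD_IPS = 3.75    # standard speed, inches per second
HALF_IPS = 1.875       # optional half speed
SIDE_MINUTES = 30      # stereo playtime per side at standard speed

tape_inches = STANDARD_IPS * SIDE_MINUTES * 60   # tape consumed per side
tape_feet = tape_inches / 12

# The same length of tape at half speed plays for twice as long:
half_speed_minutes = tape_inches / HALF_IPS / 60

print(f"tape per side: {tape_feet} ft")                  # 562.5 ft
print(f"half-speed playtime: {half_speed_minutes} min")  # 60.0 min
```

This is consistent with the halved speed doubling the playtime, as the text states.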
With two interleaved stereo pairs, the track format and speed of the RCA tape cartridge is the same as that of consumer reel-to-reel stereo tape recorders, which run at 3.75 IPS. It is possible to dismantle the cartridge, spool the tape onto an open reel, and play it on such a machine. In fact, RCA offered an adapter for their Sound Tape Cartridge machines to enable them to both play back and record traditional reels of tape up to 5 inches in reel diameter.
Unlike the Compact Cassette, the RCA tape cartridge incorporates a brake to prevent the tape hubs from moving when the cartridge is not in a player. Small slot windows extend from the tape hubs toward the outside of the cartridge so that the amount of tape visible on each spool can be seen.
Despite its convenience, the RCA tape cartridge was not much of a success. RCA was slow to produce machines for the home market. They were also slow to license pre-recorded music tapes for home playback. Cost was also an issue, with a single cartridge costing US$4.50 in 1960 ($ with inflation today) compared to a 1,200 foot (365 m) reel of tape, which cost $3.50 ($ today). The format was advertised nationally by RCA as late as fall 1964 and was continued into model year 1965 production.
The physical tape width and speed of the tape and even the size of the RCA tape cartridge is similar to, though incompatible with, Sony's Elcaset system, introduced in 1976. That system also failed to achieve much market acceptance and was soon withdrawn.
References
External links
Revolutionary New Triumph in Tape, an RCA promotional film
Audio Recording History
Image of a prerecorded cartridge by Perez Prado
Techmoan: RetroTech: RCA Victor Tape Cartridge - A trailblazing failure, YouTube 22 September 2016
Audio storage
Tape recording
Discontinued media formats
RCA brands
1958 in music
1958 in technology
Products introduced in 1958 | RCA tape cartridge | [
"Technology"
] | 816 | [
"Recording devices",
"Tape recording"
] |
10,985,744 | https://en.wikipedia.org/wiki/Rotational%20diffusion | Rotational diffusion is the rotational movement which acts upon any object such as particles, molecules, atoms when present in a fluid, by random changes in their orientations.
Although the directions and intensities of these changes are statistically random, they do not arise randomly and are instead the result of interactions between particles. One example occurs in colloids, where relatively large insoluble particles are suspended in a greater amount of fluid. The changes in orientation occur from collisions between the particle and the many molecules forming the fluid surrounding the particle, which each transfer kinetic energy to the particle, and as such can be considered random due to the varied speeds and amounts of fluid molecules incident on each individual particle at any given time.
As the analogue of translational diffusion, which determines the particle's position in space, rotational diffusion randomises the orientation of any particle it acts on.
Anything in a solution will experience rotational diffusion, from the microscopic scale where individual atoms may have an effect on each other, to the macroscopic scale.
Applications
Rotational diffusion has multiple applications in chemistry and physics, and is heavily involved in many biology-based fields. For example, protein-protein interaction is a vital step in the communication of biological signals. In order to communicate, the proteins must both come into contact with each other and be facing the appropriate way to interact with each other's binding site, which relies on the proteins' ability to rotate.
As an example concerning physics, rotational Brownian motion in astronomy can be used to explain the orientations of the orbital planes of binary stars, as well as the seemingly random spin axes of supermassive black holes.
The random re-orientation of molecules (or larger systems) is an important process for many biophysical probes. Due to the equipartition theorem, larger molecules re-orient more slowly than do smaller objects and, hence, measurements of the rotational diffusion constants can give insight into the overall mass and its distribution within an object. Quantitatively, the mean square of the angular velocity about each of an object's principal axes is inversely proportional to its moment of inertia about that axis. Therefore, there should be three rotational diffusion constants - the eigenvalues of the rotational diffusion tensor - resulting in five rotational time constants. If two eigenvalues of the diffusion tensor are equal, the particle diffuses as a spheroid with two unique diffusion rates and three time constants. And if all eigenvalues are the same, the particle diffuses as a sphere with one time constant. The diffusion tensor may be determined from the Perrin friction factors, in analogy with the Einstein relation of translational diffusion, but this approach is often inaccurate and direct measurement is required.
The rotational diffusion tensor may be determined experimentally through fluorescence anisotropy, flow birefringence, dielectric spectroscopy, NMR relaxation and other biophysical methods sensitive to picosecond or slower rotational processes. In some techniques such as fluorescence it may be very difficult to characterize the full diffusion tensor, for example measuring two diffusion rates can sometimes be possible when there is a great difference between them, e.g., for very long, thin ellipsoids such as certain viruses. This is however not the case of the extremely sensitive, atomic resolution technique of NMR relaxation that can be used to fully determine the rotational diffusion tensor to very high precision. Rotational diffusion of macromolecules in complex biological fluids (i.e., cytoplasm) is slow enough to be measurable by techniques with microsecond time resolution, i.e. fluorescence correlation spectroscopy.
Relation to translational diffusion
thumb|The standard translational model of Brownian motion
Much like translational diffusion in which particles in one area of high concentration slowly spread position through random walks until they are near-equally distributed over the entire space, in rotational diffusion, over long periods of time the directions which these particles face will spread until they follow a completely random distribution with a near-equal amount facing in all directions. As impacts from surrounding particles rarely, if ever, occur directly in the centre of mass of a 'target' particle, each impact will occur off-centre and as such it is important to note that the same collisions that cause translational diffusion cause rotational diffusion as some of the impact energy is transferred to translational kinetic energy and some is transferred into torque.
Rotational version of Fick's law
A rotational version of Fick's law of diffusion can be defined. Let each rotating molecule be associated with a unit vector u; for example, u might represent the orientation of an electric or magnetic dipole moment. Let f(θ, φ, t) represent the probability density distribution for the orientation of u at time t. Here, θ and φ represent the spherical angles, with θ being the polar angle between u and the z-axis and φ being the azimuthal angle of u in the x-y plane.
The rotational version of Fick's law states
$\frac{\partial f}{\partial t} = D_\mathrm{rot} \left[ \frac{1}{\sin\theta} \frac{\partial}{\partial\theta} \left( \sin\theta \, \frac{\partial f}{\partial\theta} \right) + \frac{1}{\sin^2\theta} \frac{\partial^2 f}{\partial\phi^2} \right].$
This partial differential equation (PDE) may be solved by expanding f(θ, φ, t) in spherical harmonics for which the mathematical identity holds
$\frac{1}{\sin\theta} \frac{\partial}{\partial\theta} \left( \sin\theta \, \frac{\partial Y_l^m}{\partial\theta} \right) + \frac{1}{\sin^2\theta} \frac{\partial^2 Y_l^m}{\partial\phi^2} = -l(l+1) \, Y_l^m.$
Thus, the solution of the PDE may be written
$f(\theta, \phi, t) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} C_{lm} \, Y_l^m(\theta, \phi) \, e^{-t/\tau_l},$
where Clm are constants fitted to the initial distribution and the time constants equal
$\tau_l = \frac{1}{D_\mathrm{rot} \, l(l+1)}.$
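The relaxation times of the spherical-harmonic components, $\tau_l = 1/(D_\mathrm{rot}\, l(l+1))$, can be tabulated numerically. This is a minimal sketch; the value of D_rot below is an arbitrary illustrative number, not one taken from the text.

```python
# Sketch of the relaxation times tau_l = 1 / (D_rot * l * (l + 1)) for the
# spherical-harmonic components of f. The value of d_rot below is an
# arbitrary illustrative number, not taken from the text.
def rotational_relaxation_time(l, d_rot):
    """Decay time of the degree-l spherical-harmonic component."""
    if l == 0:
        return float("inf")   # the isotropic (l = 0) component never decays
    return 1.0 / (d_rot * l * (l + 1))

d_rot = 1.0e7  # rad^2/s, hypothetical
for l in range(1, 4):
    print(l, rotational_relaxation_time(l, d_rot))
```

Higher-degree components relax faster; the l = 1 time constant is the one probed by dipole-reorientation experiments.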
Two-dimensional rotational diffusion
A sphere rotating around a fixed axis will rotate in two dimensions only and can be viewed from above the fixed axis as a circle. In this example, a sphere which is fixed on the vertical axis rotates around that axis only, meaning that the particle can have a θ value of 0 through 360 degrees, or 2π radians, before having a net rotation of 0 again.
These directions can be placed onto a graph which covers the entirety of the possible positions for the face to be at relative to the starting point, through 2π radians, starting with -π radians through 0 to π radians. Assuming all particles begin with single orientation of 0, the first measurement of directions taken will resemble a delta function at 0 as all particles will be at their starting, or 0th, position and therefore create an infinitely steep single line. Over time, the increasing amount of measurements taken will cause a spread in results; the initial measurements will see a thin peak form on the graph as the particle can only move slightly in a short time. Then as more time passes, the chance for the molecule to rotate further from its starting point increases which widens the peak, until enough time has passed that the measurements will be evenly distributed across all possible directions.
The distribution of orientations will reach a point where they become uniform as they all randomly disperse to be nearly equal in all directions. This can be visualized in two ways.
For a single particle with multiple measurements taken over time. A particle which has an area designated as its face pointing in the starting orientation, starting at a time t0 will begin with an orientation distribution resembling a single line as it is the only measurement. Each successive measurement at time greater than t0 will widen the peak as the particle will have had more time to rotate away from the starting position.
For multiple particles measured once long after the first measurement. The same case can be made with a large number of molecules, all starting at their respective 0th orientation. Assuming enough time has passed to be much greater than t0, the molecules may have fully rotated if the forces acting on them require, and a single measurement shows they are near-to-evenly distributed.
Basic equations
For rotational diffusion about a single axis, the mean-square angular deviation in time is
$\langle \theta^2 \rangle = 2 D_r t,$
where $D_r$ is the rotational diffusion coefficient (in units of radians²/s).
The angular drift velocity $\omega_\mathrm{drift}$ in response to an external torque $\Gamma_\theta$ (assuming that the flow stays non-turbulent and that inertial effects can be neglected) is given by
$\omega_\mathrm{drift} = \frac{\Gamma_\theta}{f_r},$
where $f_r$ is the frictional drag coefficient. The relationship between the rotational diffusion coefficient and the rotational frictional drag coefficient is given by the Einstein relation (or Einstein–Smoluchowski relation):
$D_r = \frac{k_B T}{f_r},$
where $k_B$ is the Boltzmann constant and $T$ is the absolute temperature. These relationships are in complete analogy to translational diffusion.
The rotational frictional drag coefficient for a sphere of radius $a$ is
$f_r = 8 \pi \eta a^3,$
where $\eta$ is the dynamic (or shear) viscosity.
The rotational diffusion of spheres, such as nanoparticles, may deviate from what is expected when in complex environments, such as in polymer solutions or gels. This deviation can be explained by the formation of a depletion layer around the nanoparticle.
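The Stokes drag and the Einstein relation described above combine into a quick numerical estimate of the rotational diffusion coefficient of a sphere. This is a sketch; the 100 nm radius is an example value chosen here, not one given in the text.

```python
# Sketch combining the Stokes rotational drag f_r = 8*pi*eta*a^3 with the
# Einstein relation D_r = k_B * T / f_r for a sphere in water. The 100 nm
# radius is an example value chosen here, not one given in the text.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
ETA_WATER = 8.9e-4   # viscosity of water at 25 °C, Pa*s
T = 298.15           # absolute temperature, K

def rotational_drag(radius, eta=ETA_WATER):
    """Stokes frictional drag coefficient for rotation of a sphere."""
    return 8.0 * math.pi * eta * radius**3

def rotational_diffusion_coefficient(radius, temperature=T, eta=ETA_WATER):
    """Einstein relation: D_r = k_B * T / f_r."""
    return K_B * temperature / rotational_drag(radius, eta)

d_r = rotational_diffusion_coefficient(100e-9)
print(f"D_r for a 100 nm sphere in water: {d_r:.0f} rad^2/s")  # ~184 rad^2/s
```

The strong a³ dependence of the drag is why small molecules reorient in picoseconds while large colloids take milliseconds or longer.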
Langevin dynamics
Collisions with the surrounding fluid molecules will create a fluctuating torque on the sphere due to the varied speeds, numbers, and directions of impact. When trying to rotate a sphere via an externally applied torque, there will be a systematic drag resistance to rotation. With these two facts combined, it is possible to write the Langevin-like equation:
$\frac{dL}{dt} = I \frac{d^2\theta}{dt^2} = -\zeta_r \frac{d\theta}{dt} + T_B(t)$
Where:
L is the angular momentum.
$T$ is torque.
I is the moment of inertia about the rotation axis.
t is the time.
t0 is the start time.
θ is the angle between the orientation at t0 and any time after, t.
ζr is the rotational friction coefficient.
TB(t) is the fluctuating Brownian torque at time t.
The overall torque on the particle will be the difference between
$-\zeta_r \frac{d\theta}{dt}$ and $T_B(t)$.
This equation is the rotational version of Newton's second law of motion. For example, in standard translational terms, a rocket will experience a boosting force from the engine while simultaneously experiencing a resistive force from the air it is travelling through. The same can be said for an object which is rotating.
Due to the random nature of the rotation of the particle, the average Brownian torque is equal in both directions of rotation, symbolised as:
$\langle T_B(t) \rangle = 0.$
This means the equation can be averaged to get:
$\frac{d \langle L \rangle}{dt} = -\frac{\zeta_r}{I} \langle L \rangle$
Which is to say that the first derivative with respect to time of the average angular momentum is equal to the negative of the rotational friction coefficient divided by the moment of inertia, all multiplied by the average of the angular momentum.
As $\frac{d \langle L \rangle}{dt}$ is the rate of change of angular momentum over time, and is equal to a negative coefficient multiplied by $\langle L \rangle$ itself, this shows that the angular momentum is decreasing over time, or decaying, with a decay time of:
$\tau_r = \frac{I}{\zeta_r}.$
For a sphere of mass m, uniform density ρ and radius a, the moment of inertia is:
$I = \frac{2}{5} m a^2 = \frac{8}{15} \pi \rho a^5.$
As mentioned above, the rotational drag is given by the Stokes friction for rotation:
$\zeta_r = 8 \pi \eta a^3.$
Combining all of the equations and formulae from above, we get
$\tau_r = \frac{I}{\zeta_r} = \frac{m}{20 \pi \eta a} = \frac{\rho a^2}{15 \eta},$
where:
$\tau_r$ is the momentum relaxation time,
η is the viscosity of the liquid the sphere is in.
Example: Spherical particle in water
Let's say there is a virus which can be modelled as a perfect sphere with the following conditions:
Radius (a) of 100 nanometres, a = 10−7m.
Density: ρ = 1500 kg m−3
Orientation originally facing in a direction denoted by π.
Suspended in water.
Water has a viscosity of η = 8.9 × 10−4 Pa·s at 25 °C
Assume uniform mass and density throughout the particle
First, the mass of the virus particle can be calculated:
$m = \frac{4}{3} \pi \rho a^3 = \frac{4}{3} \pi (1500\ \mathrm{kg\ m^{-3}}) (10^{-7}\ \mathrm{m})^3 \approx 6.3 \times 10^{-18}\ \mathrm{kg}$
From this, we now know all the variables to calculate moment of inertia:
$I = \frac{2}{5} m a^2 \approx 2.5 \times 10^{-32}\ \mathrm{kg\ m^2}$
Simultaneous to this, we can also calculate the rotational drag:
$\zeta_r = 8 \pi \eta a^3 = 8 \pi (8.9 \times 10^{-4}\ \mathrm{Pa\ s}) (10^{-7}\ \mathrm{m})^3 \approx 2.2 \times 10^{-23}\ \mathrm{Pa\ s\ m^3}$
Combining these equations we get:
$\tau_r = \frac{I}{\zeta_r} \approx \frac{2.5 \times 10^{-32}\ \mathrm{kg\ m^2}}{2.2 \times 10^{-23}\ \mathrm{Pa\ s\ m^3}} \approx 1.1 \times 10^{-9}\ \mathrm{s}$
As the SI units for pascal are kg⋅m−1⋅s−2, the units in the answer reduce to seconds.
For this example, the decay time of the virus is in the order of nanoseconds.
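The worked example above can be checked numerically; this short script reproduces the mass, moment of inertia, rotational drag, and momentum relaxation time from the stated inputs (100 nm radius, density 1500 kg/m³, water at η = 8.9 × 10⁻⁴ Pa·s).

```python
# Numerical check of the worked example above: a 100 nm sphere of density
# 1500 kg/m^3 suspended in water with eta = 8.9e-4 Pa*s.
import math

a = 1e-7       # radius, m
rho = 1500.0   # density, kg/m^3
eta = 8.9e-4   # viscosity, Pa*s

m = (4.0 / 3.0) * math.pi * rho * a**3   # mass of the sphere
I = (2.0 / 5.0) * m * a**2               # moment of inertia
zeta_r = 8.0 * math.pi * eta * a**3      # Stokes rotational drag
tau = I / zeta_r                         # momentum relaxation time

print(f"m      = {m:.2e} kg")       # ~6.3e-18 kg
print(f"I      = {I:.2e} kg m^2")   # ~2.5e-32 kg m^2
print(f"zeta_r = {zeta_r:.2e}")     # ~2.2e-23 kg m^2 / s
print(f"tau    = {tau:.2e} s")      # ~1.1e-9 s, i.e. nanoseconds
```

Note that tau collapses to ρa²/(15η), so the relaxation time scales with the square of the radius.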
Smoluchowski description of rotation
To write the Smoluchowski equation for a particle rotating in two dimensions, we introduce a probability density P(θ, t) to find the vector u at an angle θ and time t.
This can be done by writing a continuity equation:
$\frac{\partial P(\theta, t)}{\partial t} = -\frac{\partial j(\theta, t)}{\partial \theta},$
where the current can be written as:
$j(\theta, t) = -D_r \frac{\partial P(\theta, t)}{\partial \theta},$
which can be combined to give the rotational diffusion equation:
$\frac{\partial P(\theta, t)}{\partial t} = D_r \frac{\partial^2 P(\theta, t)}{\partial \theta^2}$
We can express the current in terms of an angular velocity ω, which is the result of the Brownian torque TB acting through a rotational mobility μr, with the equation:
$j = \omega P, \qquad \omega = \mu_r T_B,$
where:
$\mu_r = 1 / \zeta_r.$
The only difference between rotational and translational diffusion in this case is that in rotational diffusion we have periodicity in the angle θ. As the particle is modelled as a sphere rotating in two dimensions, the space the particle can occupy is compact and finite, as the particle can rotate a distance of 2π before returning to its original position.
We can create a conditional probability density, which is the probability of finding the vector u at the angle θ and time t, given that it was at angle θ0 at time t = 0. This is written as $P(\theta, t \mid \theta_0)$.
The solution to this equation can be found through a Fourier series:
$P(\theta, t \mid \theta_0) = \frac{1}{2\pi} \sum_{m=-\infty}^{\infty} e^{i m (\theta - \theta_0)} e^{-D_r m^2 t} = \frac{1}{2\pi} \, \vartheta_3\!\left( \frac{\theta - \theta_0}{2},\, e^{-D_r t} \right),$
where $\vartheta_3$ is the Jacobi theta function of the third kind.
By using the Jacobi imaginary transformation of the theta function, the conditional probability density function can be written as a wrapped Gaussian:
$P(\theta, t \mid \theta_0) = \frac{1}{\sqrt{4 \pi D_r t}} \sum_{n=-\infty}^{\infty} e^{-(\theta - \theta_0 + 2 \pi n)^2 / (4 D_r t)}.$
For short times after the starting point, where t ≈ t0 and θ ≈ θ0, the formula becomes:
$P(\theta, t \mid \theta_0) \approx \frac{1}{\sqrt{4 \pi D_r t}} \, e^{-(\theta - \theta_0)^2 / (4 D_r t)}.$
The omitted terms are exponentially small and make little enough difference to be neglected here. This means that at short times the conditional probability looks similar to translational diffusion, as both show extremely small perturbations near t0. However, at long times, t ≫ t0, the behaviour of rotational diffusion is different from translational diffusion:
The main difference between rotational diffusion and translational diffusion is that rotational diffusion has a periodicity of 2π, meaning that the two angles θ and θ + 2π are identical. This is because a circle can rotate entirely once before being at the same angle as it was in the beginning, meaning that all the possible orientations can be mapped within a space of 2π. This is opposed to translational diffusion, which has no such periodicity.
The conditional probability of having the angle be θ is approximately $\frac{1}{2\pi}$.
This is because over long periods of time, the particle has had time to rotate throughout the entire range of possible angles, and as such the angle θ could be any value between θ0 and θ0 + 2π. At large enough times, the probability is near-evenly distributed over each angle.
This can be proven by integrating the probability over all possible angles. As the angles span a range of 2π, each with probability density $\frac{1}{2\pi}$, the total probability integrates to 1, which means there is a certainty of finding the angle at some point on the circle.
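Both limits discussed above can be illustrated numerically with the Fourier-series solution of the diffusion equation on a circle: the density relaxes to the uniform value 1/(2π) at long times and remains normalised at all times. The values of D_r and t below are illustrative choices, not from the text.

```python
# Sketch of the limits discussed above, using the Fourier-series solution of
# the diffusion equation on a circle: P relaxes to the uniform density
# 1/(2*pi) at long times and stays normalised at all times. The values of
# d_r and t are illustrative.
import math

def conditional_density(theta, t, d_r, theta0=0.0, m_max=200):
    """P(theta, t | theta0) for free diffusion on a circle."""
    total = 1.0
    for m in range(1, m_max + 1):
        total += 2.0 * math.cos(m * (theta - theta0)) * math.exp(-d_r * m * m * t)
    return total / (2.0 * math.pi)

d_r = 1.0
uniform = 1.0 / (2.0 * math.pi)

# Long-time limit: the density is uniform regardless of angle.
print(conditional_density(0.3, t=10.0, d_r=d_r))  # ~0.159 = 1/(2*pi)

# Normalisation: simple quadrature over the full circle at a short time.
n = 1000
total = sum(conditional_density(2 * math.pi * k / n, 0.05, d_r)
            for k in range(n)) * (2 * math.pi / n)
print(total)  # ~1.0
```

At short times only the near-Gaussian peak around θ0 contributes, matching the translational-diffusion-like behaviour described above.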
See also
Diffusion equation
Perrin friction factors
Rotational correlation time
False diffusion
References
Further reading
Diffusion
Rotation | Rotational diffusion | [
"Physics",
"Chemistry"
] | 2,924 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Classical mechanics",
"Rotation",
"Motion (physics)"
] |
10,986,233 | https://en.wikipedia.org/wiki/Constructive%20Approximation | Constructive Approximation is "an international mathematics journal dedicated to Approximations, expansions, and related research in: computation, function theory, functional analysis, interpolation spaces and interpolation of operators, numerical analysis, space of functions, special functions, and applications."
References
External links
Constructive Approximation web site
Mathematics journals
Approximation theory
English-language journals
Academic journals established in 1985
Springer Science+Business Media academic journals
Bimonthly journals | Constructive Approximation | [
"Mathematics"
] | 87 | [
"Approximation theory",
"Mathematical relations",
"Approximations"
] |
10,986,430 | https://en.wikipedia.org/wiki/Journal%20of%20Approximation%20Theory | The Journal of Approximation Theory is "devoted to advances in pure and applied approximation theory and related areas."
References
External links
Journal of Approximation Theory web site
Journal of Approximation Theory home page at Elsevier
Approximation theory journals
Approximation theory
Academic journals established in 1968
Elsevier academic journals
English-language journals
Monthly journals | Journal of Approximation Theory | [
"Mathematics"
] | 61 | [
"Approximation theory",
"Mathematical relations",
"Approximations"
] |
10,986,798 | https://en.wikipedia.org/wiki/Molecular%20symmetry | In chemistry, molecular symmetry describes the symmetry present in molecules and the classification of these molecules according to their symmetry. Molecular symmetry is a fundamental concept in chemistry, as it can be used to predict or explain many of a molecule's chemical properties, such as whether or not it has a dipole moment, as well as its allowed spectroscopic transitions. To do this it is necessary to use group theory. This involves classifying the states of the molecule using the irreducible representations
from the character table of the symmetry group of the molecule. Symmetry is useful in the study of molecular orbitals, with applications to the Hückel method, to ligand field theory, and to the Woodward-Hoffmann rules. Many university level textbooks on physical chemistry, quantum chemistry, spectroscopy and inorganic chemistry discuss symmetry. Another framework on a larger scale is the use of crystal systems to describe crystallographic symmetry in bulk materials.
There are many techniques for determining the symmetry of a given molecule, including X-ray crystallography and various forms of spectroscopy. Spectroscopic notation is based on symmetry considerations.
Point group symmetry concepts
Elements
The point group symmetry of a molecule is defined by the presence or absence of 5 types of symmetry element.
Symmetry axis: an axis around which a rotation by 360°/n results in a molecule indistinguishable from the original. This is also called an n-fold rotational axis and abbreviated Cn. Examples are the C2 axis in water and the C3 axis in ammonia. A molecule can have more than one symmetry axis; the one with the highest n is called the principal axis, and by convention is aligned with the z-axis in a Cartesian coordinate system.
Plane of symmetry: a plane of reflection through which an identical copy of the original molecule is generated. This is also called a mirror plane and abbreviated σ (sigma = Greek "s", from the German 'Spiegel' meaning mirror). Water has two of them: one in the plane of the molecule itself and one perpendicular to it. A symmetry plane parallel with the principal axis is dubbed vertical (σv) and one perpendicular to it horizontal (σh). A third type of symmetry plane exists: If a vertical symmetry plane additionally bisects the angle between two 2-fold rotation axes perpendicular to the principal axis, the plane is dubbed dihedral (σd). A symmetry plane can also be identified by its Cartesian orientation, e.g., (xz) or (yz).
Center of symmetry or inversion center, abbreviated i. A molecule has a center of symmetry when, for any atom in the molecule, an identical atom exists diametrically opposite this center an equal distance from it. In other words, a molecule has a center of symmetry when the points (x,y,z) and (−x,−y,−z) of the molecule always look identical. For example, whenever there is an oxygen atom in some point (x,y,z), then there also has to be an oxygen atom in the point (−x,−y,−z). There may or may not be an atom at the inversion center itself. An inversion center is a special case of having a rotation-reflection axis about an angle of 180° through the center. Examples are xenon tetrafluoride (a square planar molecule), where the inversion center is at the Xe atom, and benzene () where the inversion center is at the center of the ring.
Rotation-reflection axis: an axis around which a rotation by 360°/n, followed by a reflection in a plane perpendicular to it, leaves the molecule unchanged. Also called an n-fold improper rotation axis, it is abbreviated Sn. Examples are present in tetrahedral silicon tetrafluoride, with three S4 axes, and the staggered conformation of ethane with one S6 axis. An S1 axis corresponds to a mirror plane σ and an S2 axis is an inversion center i. A molecule which has no Sn axis for any value of n is a chiral molecule.
Identity, abbreviated to E, from the German 'Einheit' meaning unity. This symmetry element simply consists of no change: every molecule has this symmetry element, which is equivalent to a C1 proper rotation. It must be included in the list of symmetry elements so that they form a mathematical group, whose definition requires inclusion of the identity element. It is so called because it is analogous to multiplying by one (unity).
Operations
The five symmetry elements have associated with them five types of symmetry operation, which leave the geometry of the molecule indistinguishable from the starting geometry. They are sometimes distinguished from symmetry elements by a caret or circumflex. Thus, Ĉn is the rotation of a molecule around an axis and Ê is the identity operation. A symmetry element can have more than one symmetry operation associated with it. For example, the C4 axis of the square xenon tetrafluoride (XeF4) molecule is associated with two Ĉ4 rotations in opposite directions (90° and 270°), a Ĉ2 rotation (180°) and Ĉ1 (0° or 360°). Because Ĉ1 is equivalent to Ê, Ŝ1 to σ and Ŝ2 to î, all symmetry operations can be classified as either proper or improper rotations.
For linear molecules, either clockwise or counterclockwise rotation about the molecular axis by any angle Φ is a symmetry operation.
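The operation algebra described above (Ĉ4 applied twice giving Ĉ2, and Ŝ2 being the inversion î) can be verified with 3×3 Cartesian matrices. This sketch is not from the text; it simply represents each operation as its action on (x, y, z).

```python
# Sketch (not from the text) of the operation algebra using 3x3 Cartesian
# matrices: two successive C4 rotations about z give C2, and S2 (a C2
# rotation followed by reflection in the perpendicular plane) equals the
# inversion i.
import math

def c_n(n):
    """Proper rotation by 360/n degrees about the z-axis."""
    a = 2.0 * math.pi / n
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

SIGMA_XY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]    # sigma_h
INVERSION = [[-1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]  # i

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(3) for j in range(3))

print(close(matmul(c_n(4), c_n(4)), c_n(2)))        # True: C4 * C4 = C2
print(close(matmul(SIGMA_XY, c_n(2)), INVERSION))   # True: S2 = i
```

The same machinery extends to any point group, since every operation is either a proper or an improper rotation.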
Symmetry groups
Groups
The symmetry operations of a molecule (or other object) form a group. In mathematics, a group is a set with a binary operation that satisfies the four properties listed below.
In a symmetry group, the group elements are the symmetry operations (not the symmetry elements), and the binary combination consists of applying first one symmetry operation and then the other. An example is the sequence of a C4 rotation about the z-axis and a reflection in the xy-plane, denoted σ(xy)C4. By convention the order of operations is from right to left.
A symmetry group obeys the defining properties of any group.
closure property:
This means that the group is closed so that combining two elements produces no new elements. Symmetry operations have this property because a sequence of two operations will produce a third state indistinguishable from the second and therefore from the first, so that the net effect on the molecule is still a symmetry operation. This may be illustrated by means of a table. For example, with the point group C3, there are three symmetry operations: rotation by 120°, C3, rotation by 240°, C32 and rotation by 360°, which is equivalent to identity, E.
{| class="wikitable"
|+ Point group C3 Multiplication table
|-
! !!E || C3 || C32
|-
!E
| E|| C3 || C32
|-
!C3
|C3||C32||E
|-
!C32
|C32||E||C3
|-
|}
This table also illustrates the following properties
Associative property:
existence of identity property:
existence of inverse element:
The order of a group is the number of elements in the group. For groups of small orders, the group properties can be easily verified by considering its composition table, a table whose rows and columns correspond to elements of the group and whose entries correspond to their products.
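The C3 composition table above can be verified mechanically by representing each operation as its rotation angle and composing modulo 360°. This is a sketch added here for illustration, not part of the original text.

```python
# Sketch verifying the C3 composition table above by adding rotation angles
# modulo 360 degrees; closure, identity, and inverses all follow directly.
elements = {"E": 0, "C3": 120, "C3^2": 240}
by_angle = {v: k for k, v in elements.items()}

def compose(a, b):
    """Apply operation b, then operation a; rotation angles add mod 360."""
    return by_angle[(elements[a] + elements[b]) % 360]

print(compose("C3", "C3"))     # C3^2
print(compose("C3", "C3^2"))   # E  (C3 and C3^2 are mutual inverses)
print(all(compose(a, b) in elements
          for a in elements for b in elements))  # True (closure)
```

Because composition here reduces to modular addition, C3 is isomorphic to the cyclic group of order 3.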
Point groups
The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the molecule. For example, a C2 rotation followed by a σv reflection is seen to be a σv' symmetry operation: σv*C2 = σv'. ("Operation A followed by B to form C" is written BA = C). Moreover, the set of all symmetry operations (including this composition operation) obeys all the properties of a group, given above. So (S,*) is a group, where S is the set of all symmetry operations of some molecule, and * denotes the composition (repeated application) of symmetry operations.
This group is called the point group of that molecule, because the set of symmetry operations leave at least one point fixed (though for some symmetries an entire axis or an entire plane remains fixed). In other words, a point group is a group that summarises all symmetry operations that all molecules in that category have. The symmetry of a crystal, by contrast, is described by a space group of symmetry operations, which includes translations in space.
Examples of point groups
Assigning each molecule a point group classifies molecules into categories with similar symmetry properties. For example, PCl3, POF3, XeO3, and NH3 all share identical symmetry operations. They all can undergo the identity operation E, two different C3 rotation operations, and three different σv plane reflections without altering their identities, so they are placed in one point group, C3v, with order 6. Similarly, water (H2O) and hydrogen sulfide (H2S) also share identical symmetry operations. They both undergo the identity operation E, one C2 rotation, and two σv reflections without altering their identities, so they are both placed in one point group, C2v, with order 4. This classification system helps scientists to study molecules more efficiently, since chemically related molecules in the same point group tend to exhibit similar bonding schemes, molecular bonding diagrams, and spectroscopic properties.
Point group symmetry describes the symmetry of a molecule when fixed at its equilibrium configuration in a particular electronic state. It does not allow for tunneling between minima nor for the change in shape that can come about from the centrifugal distortion effects of molecular rotation.
Common point groups
The following table lists many of the point groups applicable to molecules, labelled using the Schoenflies notation, which is common in chemistry and molecular spectroscopy. The descriptions include common shapes of molecules, which can be explained by the VSEPR model. In each row, the descriptions and examples have no higher symmetries, meaning that the named point group captures all of the point symmetries.
Representations and their characters
A set of matrices that multiply together in a way that mimics the multiplication table of the elements of a group is called a representation of the group. For example, for the C2v point group, the following three matrices are part of a representation of the group:
$C_2: \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \sigma_v(xz): \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \sigma_v'(yz): \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
This point group only contains four operations and the 3-dimensional representations above provide matrices for three of the four operations. Only the identity operation E remains but this matrix just contains 1's on the leading diagonal (top left to bottom right) and 0's elsewhere. Although an infinite number of such representations exist, the irreducible representations (or "irreps") of the group are all that are needed as all other representations of the group can be described as a direct sum of the irreducible representations. The first step in finding the irreps making up a given representation is to sum up the values of the leading diagonals for each matrix so, taking the identity matrix first then the matrices in the order above, one obtains (3, -1, 1, 1). These values are the traces or characters of the four matrices. Asymmetric point groups such as C2v only have 1-dimensional irreps so the character of an irrep is exactly the same as the irrep itself and the following table can be interpreted as irreps or characters.
Looking again at the characters obtained for the 3D representation above (3, -1, 1, 1), we only need simple arithmetic to break this down into irreps. The character 3 under E means the dimensions of the irreps must sum to three, and the character -1 under C2 means there must be one A and two B irreps, so the only combination that adds up to the characters derived is
A1 + B1 + B2
In fact, this result could have been deduced by simply looking at the 3D representation itself: the three irreps are obvious in the three diagonal positions. Robert Mulliken was the first to publish character tables in English (1933), and E. Bright Wilson used them in 1934 to predict the symmetry of vibrational normal modes. For this reason, the notation used to label irreps in the above table is called Mulliken notation; for asymmetric groups it consists of letters A and B with subscripts 1 and 2 as above, and subscripts g and u as in the C2h example below. (Subscript 3 also appears in D2.) The irreducible representations are those matrix representations in which the matrices are in their most diagonal form possible, which for asymmetric groups means totally diagonal. One further thing to note about the irrep/character table above is the appearance of polar and axial base vector symbols on the right-hand side. This tells us that, for example, the Cartesian base vector x transforms as irrep B1 under the operations of this group. The same collection of product base vectors is used for all asymmetric groups, but symmetric and spherical groups use different sets of product base vectors.
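The arithmetic above is a special case of the standard reduction formula n_i = (1/h) Σ_R χ(R) χ_i(R), where h is the group order. A minimal sketch for C2v, with characters taken from the table above:

```python
# Character table of C2v; operations ordered E, C2, sigma_v(xz), sigma_v'(yz)
c2v = {
    "A1": (1, 1, 1, 1),
    "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1),
    "B2": (1, -1, -1, 1),
}

def reduce_rep(chars, table, h=4):
    """Reduction formula n_i = (1/h) * sum_R chi(R) * chi_i(R)."""
    return {irrep: sum(c * ci for c, ci in zip(chars, row)) // h
            for irrep, row in table.items()}

# Characters of the 3-D representation discussed above
print(reduce_rep((3, -1, 1, 1), c2v))
# → {'A1': 1, 'A2': 0, 'B1': 1, 'B2': 1}, i.e. A1 + B1 + B2
```

The integer division is safe here because the multiplicities of irreps in a true representation are always whole numbers.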
Point group C2h has the operations and transformation matrices shown in the following diagram
In summary, for any group, its character table gives a tabulation (for the classes of the group) of the characters (the sum of the diagonal elements) of the matrices of all the irreducible representations of the group. As the number of irreducible representations equals the number of classes, the character table is square.
The representations are labeled according to a set of conventions:
A, when rotation around the principal axis is symmetrical
B, when rotation around the principal axis is asymmetrical
E and T are doubly and triply degenerate representations, respectively
when the point group has an inversion center, the subscript g (gerade or even) signals no change in sign, and the subscript u (ungerade or uneven) a change in sign, with respect to inversion.
with point groups C∞v and D∞h the symbols are borrowed from angular momentum description: Σ, Π, Δ.
The tables also capture information about how the Cartesian basis vectors, rotations about them, and quadratic functions of them transform by the symmetry operations of the group, by noting which irreducible representation transforms in the same way. These indications are conventionally on the righthand side of the tables. This information is useful because chemically important orbitals (in particular p and d orbitals) have the same symmetries as these entities.
Atomic orbital symmetry
Consider the example of water (H2O), which has the C2v symmetry described above. The 2px orbital of oxygen has B1 symmetry (as in the fourth row of the character table above, with x in the sixth column). It is oriented perpendicular to the plane of the molecule and switches sign with a C2 and a σv'(yz) operation, but remains unchanged with the other two operations (obviously, the character for the identity operation is always +1). This orbital's character set is thus {1, −1, 1, −1}, corresponding to the B1 irreducible representation. Likewise, the 2pz orbital is seen to have the symmetry of the A1 irreducible representation (i.e.: none of the symmetry operations change it), the 2py orbital B2, and the 3dxy orbital A2. These assignments and others are noted in the rightmost two columns of the table.
Historical background
All of the group operations described above and the symbols for crystallographic point groups themselves were first published by Arthur Schoenflies in 1891 but the groups had been applied by other researchers to the external morphology of crystals much earlier in the 19th century.
In 1914 Max von Laue published the results of experiments using x-ray diffraction to elucidate the internal structures of crystals producing a limited version of a table of "Laue classes" shown to the right. When adapted for molecular work this table first divides point groups into three kinds: asymmetric, symmetric and spherical tops. These are categories related to the angular momentum of molecules, having respectively 3, 2 and 1 distinct values of angular momentum. A further sub-division into systems is defined by the rotational group G in the leftmost column then into rows of Laue classes. Every point group in a Laue class has exactly the same abstract group structure except the centred group in the rightmost column which is the direct product of the rotational group with inversion. It follows that all groups in a Laue class have the same order except the centred group which is twice that of the others. Laue found that x-ray diffraction was unable to distinguish between point groups of a Laue class.
Hans Bethe used characters of point group operations in his study of ligand field theory in 1929, and Eugene Wigner used group theory to explain the selection rules of atomic spectroscopy. The first character tables were compiled by László Tisza (1933), in connection to vibrational spectra. It is important to note that, since all the point groups of a Laue class have the same abstract structure, they also have exactly the same irreducible representations and character tables. As in x-ray crystallography many properties in molecular work are decided by the Laue class.
The complete set of 32 crystallographic point groups was published in 1936 by Rosenthal and Murphy.
The molecular symmetry group
One can determine the symmetry operations of the point group for a particular molecule by considering the geometrical symmetry of its molecular model. However, when one uses a point group to classify molecular states, the operations in it are not to be interpreted in the same way. Instead the operations are interpreted as rotating and/or reflecting the vibronic (vibration-electronic) coordinates and these operations commute with the vibronic Hamiltonian. They are "symmetry operations" for that vibronic Hamiltonian. The point group is used to classify by symmetry the vibronic eigenstates of a rigid molecule. The symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, can be achieved through the use of the appropriate permutation-inversion group (called the molecular symmetry group), as introduced by Longuet-Higgins.
Symmetry of vibrational modes
Each normal mode of molecular vibration has a symmetry which forms a basis for one irreducible representation of the molecular symmetry group. For example, the water molecule has three normal modes of vibration: symmetric stretch in which the two O-H bond lengths vary in phase with each other, asymmetric stretch in which they vary out of phase, and bending in which the bond angle varies. The molecular symmetry of water is C2v with four irreducible representations A1, A2, B1 and B2. The symmetric stretching and the bending modes have symmetry A1, while the asymmetric mode has symmetry B2. The overall symmetry of the three vibrational modes is therefore Γvib = 2A1 + B2.
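The result Γvib = 2A1 + B2 can be checked by reducing the characters of the full 3N Cartesian representation of H2O, which are (9, −1, 1, 3) for (E, C2, σv(xz), σv′(yz)) with the molecule in the yz plane, and then removing the translations and rotations. A sketch, using the C2v character table given earlier:

```python
# Character table of C2v; operations ordered E, C2, sigma_v(xz), sigma_v'(yz)
c2v = {"A1": (1, 1, 1, 1), "A2": (1, 1, -1, -1),
       "B1": (1, -1, 1, -1), "B2": (1, -1, -1, 1)}

def reduce_rep(chars):
    # group order h = 4; each class holds one operation
    return {irrep: sum(c * ci for c, ci in zip(chars, row)) // 4
            for irrep, row in c2v.items()}

# 3N Cartesian representation of H2O (molecule in the yz plane)
total = reduce_rep((9, -1, 1, 3))
trans = {"A1": 1, "A2": 0, "B1": 1, "B2": 1}   # z, x, y
rot   = {"A1": 0, "A2": 1, "B1": 1, "B2": 1}   # Rz, Ry, Rx
vib = {k: total[k] - trans[k] - rot[k] for k in total}
print(vib)  # → {'A1': 2, 'A2': 0, 'B1': 0, 'B2': 1}, i.e. 2A1 + B2
```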
Vibrational modes of ammonia
The molecular symmetry of ammonia (NH3) is C3v, with symmetry operations E, C3 and σv. For N = 4 atoms, the number of vibrational modes for a non-linear molecule is 3N - 6 = 6, due to the relative motion of the nitrogen atom and the three hydrogen atoms. In one mode, all three hydrogen atoms travel symmetrically along the N-H bonds, either towards the nitrogen atom or away from it. This mode is known as the symmetric stretch (ν₁) and reflects the symmetry in the N-H bond stretching. Of the three vibrational modes, this one has the highest frequency.
In the bending (ν₂) vibration, the nitrogen atom stays on the axis of symmetry, while the three hydrogen atoms move in different directions from one another, leading to changes in the bond angles. The hydrogen atoms move like an umbrella, so this mode is often referred to as the "umbrella mode".
There is also an asymmetric stretch mode (ν₃) in which one hydrogen atom approaches the nitrogen atom while the other two hydrogens move away.
The total number of degrees of freedom for each symmetry species (or irreducible representation) can be determined. Ammonia has four atoms, and each atom is associated with three vector components. The symmetry group C3v for NH3 has the three symmetry species A1, A2 and E. The modes of vibration include the vibrational, rotational and translational modes.
Total modes = 3A1 + A2 + 4E. This is a total of 12 modes because each E corresponds to 2 degenerate modes (at the same energy).
Rotational modes = A2 + E (3 modes)
Translational modes = A1 + E (3 modes)
Vibrational modes = Total modes - Rotational modes - Translational modes = 3A1 + A2 + 4E - A2 - E - A1 - E = 2A1 + 2E (6 modes).
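The same bookkeeping can be reproduced with the reduction formula for C3v, which, unlike C2v, needs the class sizes (1, 2, 3) for the classes E, 2C3, 3σv. The 3N Cartesian characters of NH3 are (12, 0, 2):

```python
# C3v: classes E, 2C3, 3sigma_v; class sizes (1, 2, 3); group order h = 6
c3v = {"A1": (1, 1, 1), "A2": (1, 1, -1), "E": (2, -1, 0)}
SIZES, H = (1, 2, 3), 6

def reduce_rep(chars):
    # n_i = (1/h) * sum over classes of (class size) * chi(R) * chi_i(R)
    return {irrep: sum(g * c * ci for g, c, ci in zip(SIZES, chars, row)) // H
            for irrep, row in c3v.items()}

total = reduce_rep((12, 0, 2))               # all 3N = 12 Cartesian modes
trans = {"A1": 1, "A2": 0, "E": 1}           # z -> A1, (x, y) -> E
rot   = {"A1": 0, "A2": 1, "E": 1}           # Rz -> A2, (Rx, Ry) -> E
vib = {k: total[k] - trans[k] - rot[k] for k in total}
print(total, vib)
# total = {'A1': 3, 'A2': 1, 'E': 4}; vib = {'A1': 2, 'A2': 0, 'E': 2}
```

Counting E twice (it is doubly degenerate) recovers the 6 vibrational and 12 total modes quoted above.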
More examples of vibrational symmetry
W(CO)6 has octahedral geometry. The irreducible representation for the C-O stretching vibration is A1g + Eg + T1u. Of these, only T1u is IR active.
B2H6 (diborane) has D2h molecular symmetry. The terminal B-H stretching vibrations which are active in IR are B2u and B3u.
fac-Mo(CO)3(CH3CH2CN)3 has C3v geometry. The irreducible representation for the C-O stretching vibration is A1 + E, both of which are IR active.
Symmetry of molecular orbitals
Each molecular orbital also has the symmetry of one irreducible representation. For example, ethylene (C2H4) has symmetry group D2h, and its highest occupied molecular orbital (HOMO) is the bonding pi orbital which forms a basis for its irreducible representation B1u.
Molecular rotation and molecular nonrigidity
As discussed above in the section The molecular symmetry group, point groups are useful for classifying the vibrational and electronic states of rigid molecules (sometimes called semi-rigid molecules) which undergo only small oscillations about a single equilibrium geometry. Longuet-Higgins introduced the molecular symmetry group (a more general type of symmetry group) suitable not only for classifying the vibrational and electronic states of rigid molecules but also for classifying their rotational and nuclear spin states. Further, such groups can be used to classify the states of non-rigid (or fluxional) molecules that tunnel between equivalent geometries and to allow for the distorting effects of molecular rotation. The symmetry operations in the molecular symmetry group are so-called 'feasible' permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two, so that the group is sometimes called a "permutation-inversion group".
Examples of molecular nonrigidity abound. For example, ethane (C2H6) has three equivalent staggered conformations. Tunneling between the conformations occurs at ordinary temperatures by internal rotation of one methyl group relative to the other. This is not a rotation of the entire molecule about the C3 axis, although each conformation has D3d symmetry, as in the table above. Similarly, ammonia (NH3) has two equivalent pyramidal (C3v) conformations which are interconverted by the process known as nitrogen inversion.
Additionally, the methane (CH4) and H3+ molecules have highly symmetric equilibrium structures with Td and D3h point group symmetries respectively; they lack permanent electric dipole moments but they do have very weak pure rotation spectra because of rotational centrifugal distortion.
Sometimes it is necessary to consider together electronic states having different point group symmetries at equilibrium. For example, in its ground (N) electronic state the ethylene molecule C2H4
has D2h point group symmetry whereas in the excited (V) state it has
D2d symmetry. To treat these two states together it is necessary to
allow torsion and to use the double group of the molecular symmetry group
G16.
See also
Character table
Crystallographic point group
Point groups in three dimensions
Symmetry of diatomic molecules
Symmetry in quantum mechanics
References
External links
The molecular symmetry group @ The University of Western Ontario
Point group symmetry @ Newcastle University
Molecular symmetry @ Imperial College London
Molecular Point Group Symmetry Tables
Character tables for point groups for chemistry
Molecular Symmetry Online @ The Open University of Israel
An internet lecture course on molecular symmetry @ Bergische Universitaet
DECOR – Symmetry @ The Cambridge Crystallographic Data Centre
Symmetry
Theoretical chemistry | Molecular symmetry | [
"Physics",
"Chemistry",
"Mathematics"
] | 5,067 | [
"Theoretical chemistry",
"nan",
"Geometry",
"Symmetry"
] |
10,987,890 | https://en.wikipedia.org/wiki/Emodin | Emodin (6-methyl-1,3,8-trihydroxyanthraquinone) is an organic compound. Classified as an anthraquinone, it can be isolated from rhubarb, buckthorn, and Japanese knotweed (Reynoutria japonica syn. Polygonum cuspidatum). Emodin is particularly abundant in the roots of the Chinese rhubarb (Rheum palmatum), knotweed and knotgrass (Polygonum cuspidatum and Polygonum multiflorum) as well as Hawaii ‘au‘auko‘i cassia seeds or coffee weed (Semen cassia). It is specifically isolated from Rheum palmatum L. It is also produced by many species of fungi, including members of the genera Aspergillus, Pyrenochaeta, and Pestalotiopsis, inter alia. The common name is derived from Rheum emodi, a taxonomic synonym of Rheum australe (Himalayan rhubarb), and synonyms include emodol, frangula emodin, rheum emodin, 3-methyl-1,6,8-trihydroxyanthraquinone, Schüttgelb (Schuttgelb), and Persian Berry Lake.
Pharmacology
Emodin is an active component of several plants used in traditional Chinese medicine (TCM) such as Rheum palmatum, Polygonum cuspidatum, and Polygonum multiflorum. It has various actions including laxative, anticancer, antibacterial and antiinflammatory effects, and has also been identified as having potential antiviral activity against coronaviruses such as SARS-CoV-2, being one of the major active components of the antiviral TCM formulation Lianhua Qingwen.
Emodin has been shown to inhibit the ion channel of protein 3a, which could play a role in the release of the virus from infected cells.
List of species
The following plant species are known to produce emodin:
Acalypha australis
Cassia occidentalis
Cassia siamea
Frangula alnus
Glossostemon bruguieri
Kalimeris indica
Polygonum hypoleucum
Reynoutria japonica (syn. Fallopia japonica) (syn. Polygonum cuspidatum)
Rhamnus alnifolia, the alderleaf buckthorn
Rhamnus cathartica, the common buckthorn
Rheum palmatum
Rumex nepalensis
Senna obtusifolia (syn. Cassia obtusifolia)
Thielavia subthermophila
Ventilago madraspatana
Emodin also occurs in variable amounts in members of the crustose lichen genus Catenarina.
Compendial status
British Pharmacopoeia
List of compounds with carbon number 15
References
Trihydroxyanthraquinones
Virucides
Resorcinols
3-Hydroxypropenals within hydroxyquinones | Emodin | [
"Biology"
] | 644 | [
"Virucides",
"Biocides"
] |
10,987,985 | https://en.wikipedia.org/wiki/Exact%20category | In mathematics, specifically in category theory, an exact category is a category equipped with short exact sequences. The concept is due to Daniel Quillen and is designed to encapsulate the properties of short exact sequences in abelian categories without requiring that morphisms actually possess kernels and cokernels, which is necessary for the usual definition of such a sequence.
Definition
An exact category E is an additive category possessing a class E of "short exact sequences": triples of objects connected by arrows
satisfying the following axioms inspired by the properties of short exact sequences in an abelian category:
E is closed under isomorphisms and contains the canonical ("split exact") sequences:
Suppose occurs as the second arrow of a sequence in E (it is an admissible epimorphism) and is any arrow in E. Then their pullback exists and its projection to is also an admissible epimorphism. Dually, if occurs as the first arrow of a sequence in E (it is an admissible monomorphism) and is any arrow, then their pushout exists and its coprojection from is also an admissible monomorphism. (We say that the admissible epimorphisms are "stable under pullback", resp. the admissible monomorphisms are "stable under pushout".);
Admissible monomorphisms are kernels of their corresponding admissible epimorphisms, and dually. The composition of two admissible monomorphisms is admissible (likewise admissible epimorphisms);
Suppose is a map in E which admits a kernel in E, and suppose is any map such that the composition is an admissible epimorphism. Then so is Dually, if admits a cokernel and is such that is an admissible monomorphism, then so is
Admissible monomorphisms and admissible epimorphisms are conventionally denoted by tailed and double-headed arrows, respectively. These axioms are not minimal; in fact, the last one has been shown to be redundant.
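In the notation standard for exact categories (a conventional rendering), an admissible monomorphism is drawn with a tailed arrow and an admissible epimorphism with a double-headed one, so a short exact sequence appears as:

```latex
M' \rightarrowtail M \twoheadrightarrow M''
```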
One can speak of an exact functor between exact categories exactly as in the case of exact functors of abelian categories: an exact functor from an exact category D to another one E is an additive functor such that if
is exact in D, then
is exact in E. If D is a subcategory of E, it is an exact subcategory if the inclusion functor is fully faithful and exact.
Motivation
Exact categories come from abelian categories in the following way. Suppose A is abelian and let E be any strictly full additive subcategory which is closed under taking extensions in the sense that given an exact sequence
in A, then if are in E, so is . We can take the class E to be simply the sequences in E which are exact in A; that is,
is in E iff
is exact in A. Then E is an exact category in the above sense. We verify the axioms:
E is closed under isomorphisms and contains the split exact sequences: these are true by definition, since in an abelian category, any sequence isomorphic to an exact one is also exact, and since the split sequences are always exact in A.
Admissible epimorphisms (respectively, admissible monomorphisms) are stable under pullbacks (resp. pushouts): given an exact sequence of objects in E,
and a map with in E, one verifies that the following sequence is also exact; since E is stable under extensions, this means that is in E:
Every admissible monomorphism is the kernel of its corresponding admissible epimorphism, and vice versa: this is true as morphisms in A, and E is a full subcategory.
If admits a kernel in E and if is such that is an admissible epimorphism, then so is .
Conversely, if E is any exact category, we can take A to be the category of left-exact functors from E into the category of abelian groups, which is itself abelian and in which E is a natural subcategory (via the Yoneda embedding, since Hom is left exact), stable under extensions, and in which a sequence is in E if and only if it is exact in A.
Examples
Any abelian category is exact in the obvious way, according to the construction of #Motivation.
A less trivial example is the category Abtf of torsion-free abelian groups, which is a strictly full subcategory of the (abelian) category Ab of all abelian groups. It is closed under extensions: if
is a short exact sequence of abelian groups in which are torsion-free, then is seen to be torsion-free by the following argument: if is a torsion element, then its image in is zero, since is torsion-free. Thus lies in the kernel of the map to , which is , but that is also torsion-free, so . By the construction of #Motivation, Abtf is an exact category; some examples of exact sequences in it are:
where the last example is inspired by de Rham cohomology ( and are the closed and exact differential forms on the circle group); in particular, it is known that the cohomology group is isomorphic to the real numbers. This category is not abelian.
The following example is in some sense complementary to the above. Let Abt be the category of abelian groups with torsion (and also the zero group). This is additive and a strictly full subcategory of Ab again. It is even easier to see that it is stable under extensions: if
is an exact sequence in which have torsion, then naturally has all the torsion elements of . Thus it is an exact category.
References
Additive categories
Homological algebra | Exact category | [
"Mathematics"
] | 1,204 | [
"Mathematical structures",
"Additive categories",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
10,988,372 | https://en.wikipedia.org/wiki/Statistical%20interference | When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much.
This technique can be used for geometric dimensioning of mechanical parts, determining when an applied load exceeds the strength of a structure, and in many other situations. This type of analysis can also be used to estimate the probability of failure or the failure rate.
Dimensional interference
Mechanical parts are usually designed to fit precisely together. For example, if a shaft is designed to have a "sliding fit" in a hole, the shaft must be a little smaller than the hole. (Traditional tolerances may suggest that all dimensions fall within those intended tolerances. A process capability study of actual production, however, may reveal normal distributions with long tails.) Both the shaft and hole sizes will usually form normal distributions with some average (arithmetic mean) and standard deviation.
With two such normal distributions, a distribution of interference can be calculated. The derived distribution will also be normal, and its average will be equal to the difference between the means of the two base distributions. The variance of the derived distribution will be the sum of the variances of the two base distributions.
This derived distribution can be used to determine how often the difference in dimensions will be less than zero (i.e., the shaft cannot fit in the hole), how often the difference will be less than the required sliding gap (the shaft fits, but too tightly), and how often the difference will be greater than the maximum acceptable gap (the shaft fits, but not tightly enough).
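These three frequencies can be computed directly from the derived normal distribution. A sketch with hypothetical dimensions (all in mm) and an assumed minimum sliding gap of 0.005:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

shaft_mu, shaft_sd = 9.98, 0.01   # hypothetical shaft dimension
hole_mu, hole_sd = 10.00, 0.01    # hypothetical hole dimension

# clearance = hole - shaft: its mean is the difference of the means,
# its variance the sum of the variances
gap_mu = hole_mu - shaft_mu
gap_sd = math.sqrt(shaft_sd**2 + hole_sd**2)

p_no_fit = normal_cdf(0.0, gap_mu, gap_sd)                   # shaft cannot fit
p_too_tight = normal_cdf(0.005, gap_mu, gap_sd) - p_no_fit   # fits, but too tightly
print(round(p_no_fit, 4))  # → 0.0786
```

With these numbers the clearance is normal with mean 0.02 and standard deviation 0.01·√2, so about 7.9% of pairings would not fit at all.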
Physical property interference
Physical properties and the conditions of use are also inherently variable. For example, the applied load (stress) on a mechanical part may vary. The measured strength of that part (tensile strength, etc.) may also be variable. The part will break when the stress exceeds the strength.
With two normal distributions, the statistical interference may be calculated as above. (This problem is also workable for transformed units such as the log-normal distribution). With other distributions, or combinations of different distributions, a Monte Carlo method or simulation is often the most practical way to quantify the effects of statistical interference.
See also
Interference fit
Interval estimation
Joint probability distribution
Probabilistic design
Process capability
Reliability engineering
Specification
Tolerance (engineering)
References
Paul H. Garthwaite, Byron Jones, Ian T. Jolliffe (2002) Statistical Inference.
Haugen, (1980) Probabilistic mechanical design, Wiley.
Statistical theory
Survival analysis
Reliability engineering
Probability theory
Applied probability | Statistical interference | [
"Mathematics",
"Engineering"
] | 519 | [
"Applied mathematics",
"Systems engineering",
"Reliability engineering",
"Applied probability"
] |
10,988,544 | https://en.wikipedia.org/wiki/WCF%20Data%20Services | WCF Data Services (formerly ADO.NET Data Services, codename "Astoria") is a platform for what Microsoft calls Data Services. It is actually a combination of the runtime and a web service through which the services are exposed. It also includes the Data Services Toolkit which lets Astoria Data Services be created from within ASP.NET itself. The Astoria project was announced at MIX 2007, and the first developer preview was made available on April 30, 2007. The first CTP was made available as a part of the ASP.NET 3.5 Extensions Preview. The final version was released as part of Service Pack 1 of the .NET Framework 3.5 on August 11, 2008. The name change from ADO.NET Data Services to WCF data Services was announced at the 2009 PDC.
Overview
WCF Data Services exposes data, represented as Entity Data Model (EDM) objects, via web services accessed over HTTP. The data can be addressed using a REST-like URI. The data service, when accessed via the HTTP GET method with such a URI, will return the data. The web service can be configured to return the data in either plain XML, JSON or RDF+XML. In the initial release, formats like RSS and ATOM are not supported, though they may be in the future. In addition, using other HTTP methods like PUT, POST or DELETE, the data can be updated as well. POST can be used to create new entities, PUT for updating an entity, and DELETE for deleting an entity.
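The verb-to-operation mapping can be sketched as a small helper that builds the corresponding HTTP requests (a hypothetical illustration; the service root and `MusicCollection` follow the URI examples in this article):

```python
from urllib.request import Request

SERVICE = "http://dataserver/service.svc"  # hypothetical service root

def entity_request(entity_set, method="GET", key=None, body=None):
    """Build an HTTP request for a data-service entity set.

    GET reads, POST creates, PUT updates, and DELETE removes an entity."""
    url = f"{SERVICE}/{entity_set}"
    if key is not None:
        url += f"[{key}]"  # address a specific entity instance
    req = Request(url, data=body, method=method)
    req.add_header("Accept", "application/json")
    return req

req = entity_request("MusicCollection", key="SomeArtist")
# req.full_url → "http://dataserver/service.svc/MusicCollection[SomeArtist]"
```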
Description
Windows Communication Foundation (WCF) comes to the rescue where plain web services fall short, e.g., when support for other protocols or duplex communication is needed. With WCF, we can define our service once and then configure it in such a way that it can be used via HTTP, TCP, IPC, and even message queues. We can consume WCF services using server-side scripts (ASP.NET), JavaScript Object Notation (JSON), and even REST (Representational State Transfer).
Understanding the basics
When we say that a WCF service can be used to communicate using different protocols and from different kinds of applications, we will need to understand how we can achieve this. If we want to use a WCF service from an application, then we have three major questions:
1. Where is the WCF service located, from a client's perspective?
2. How can a client access the service (protocols and message formats)?
3. What functionality does the service provide to its clients?
Once we have the answers to these three questions, creating and consuming the WCF service will be a lot easier for us. The WCF service has the concept of endpoints: a WCF service provides endpoints which client applications can use to communicate with it. The answers to the above questions are what is known as the ABC of WCF services, and they are in fact the main components of a WCF service. So let's tackle each question one by one.
Address: Like a web service, a WCF service provides a URI which can be used by clients to get to the service. This URI is called the Address of the WCF service. This solves the first problem of "where to locate the WCF service?" for us.
Binding: Once we are able to locate the WCF service, one should think about how to communicate with the service (protocol wise). The binding is what defines how the WCF service handles the communication. It could also define other communication parameters like message encoding, etc. This will solve the second problem of "how to communicate with the WCF service?" for us.
Contract: Now the only question one is left with is about the functionalities that a WCF service provides. The contract is what defines the public data and interfaces that WCF service provides to the clients.
The URIs representing the data will contain the physical location of the service, as well as the service name. It will also need to specify an EDM Entity-Set or a specific entity instance, as in respectively
http://dataserver/service.svc/MusicCollection
or
http://dataserver/service.svc/MusicCollection[SomeArtist]
The former will list all entities in the collection set, whereas the latter will list only the entity indexed by SomeArtist.
The URIs can also specify a traversal of a relationship in the Entity Data Model. For example,
http://dataserver/service.svc/MusicCollection[SomeSong]/Genre
traverses the relationship Genre (in SQL parlance, joins with the Genre table) and retrieves all instances of Genre that are associated with the entity SomeSong. Simple predicates can also be specified in the URI, like
http://dataserver/service.svc/MusicCollection[SomeArtist]/ReleaseDate[Year eq 2006]
will fetch the items that are indexed by SomeArtist and had their release in 2006. Filtering and partition information can also be encoded in the URL as
http://dataserver/service.svc/MusicCollection?$orderby=ReleaseDate&$skip=100&$top=50
Although the presence of the skip and top keywords indicates paging support, in Data Services version 1 there is no method of determining the number of records available, and it is thus impossible to determine how many pages there may be. The OData 2.0 spec adds support for the $count path segment (to return just a count of entities) and $inlineCount (to retrieve a page's worth of entities and a total count without a separate round-trip).
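The query options shown above compose mechanically; a small (hypothetical) helper that builds such paged query URIs:

```python
from urllib.parse import urlencode

def odata_query(service_root, entity_set, orderby=None, skip=None, top=None):
    """Build a query URI with ordering and paging options (hypothetical helper)."""
    params = {}
    if orderby is not None:
        params["$orderby"] = orderby
    if skip is not None:
        params["$skip"] = str(skip)
    if top is not None:
        params["$top"] = str(top)
    # safe="$" keeps the option prefixes from being percent-encoded
    query = urlencode(params, safe="$")
    return f"{service_root}/{entity_set}" + (f"?{query}" if query else "")

url = odata_query("http://dataserver/service.svc", "MusicCollection",
                  orderby="ReleaseDate", skip=100, top=50)
# url matches the paging example shown earlier in this article
```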
References
External links
ADO.NET Data Services Framework (formerly "Project Astoria")
Using Microsoft ADO.NET Data Services
ASP.NET 3.5 Extensions Preview
ADO.NET Data Services (Project Astoria) Team Blog
Access Cloud Data with Astoria: ENT News Online
Data management
Web services
ADO.NET Data Access technologies
.NET | WCF Data Services | [
"Technology"
] | 1,292 | [
"Data management",
"Data"
] |
10,989,135 | https://en.wikipedia.org/wiki/Electroanalytical%20methods | Electroanalytical methods are a class of techniques in analytical chemistry which study an analyte by measuring the potential (volts) and/or current (amperes) in an electrochemical cell containing the analyte. These methods can be broken down into several categories depending on which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), amperometry (electric current is the analytical signal), coulometry (charge passed during a certain time is recorded).
Potentiometry
Potentiometry passively measures the potential of a solution between two electrodes, affecting the solution very little in the process. One electrode is called the reference electrode and has a constant potential, while the other one is an indicator electrode whose potential changes with the sample's composition. Therefore, the difference in potential between the two electrodes gives an assessment of the sample's composition. In fact, since the potentiometric measurement is a non-destructive measurement, assuming that the electrode is in equilibrium with the solution, we are measuring the solution's potential.
Potentiometry usually uses indicator electrodes made selectively sensitive to the ion of interest, such as fluoride in fluoride selective electrodes, so that the potential solely depends on the activity of this ion of interest.
The time that takes the electrode to establish equilibrium with the solution will affect the sensitivity or accuracy of the measurement. In aquatic environments, platinum is often used due to its high electron transfer kinetics, although an electrode made from several metals can be used in order to enhance the electron transfer kinetics. The most common potentiometric electrode is by far the glass-membrane electrode used in a pH meter.
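The statement that the measured potential depends on the activity of the ion of interest is usually quantified with the Nernst equation; a sketch (hypothetical standard potential and activities) reproducing the familiar ≈59 mV-per-decade response of a monovalent-ion electrode at 25 °C:

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_potential(e_standard, n, activity, temp_k=298.15):
    """Electrode potential E = E0 + (R*T)/(n*F) * ln(activity)."""
    return e_standard + (R * temp_k / (n * F)) * math.log(activity)

# a tenfold change in ion activity for n = 1 shifts E by about 59.2 mV at 25 C
delta = nernst_potential(0.0, 1, 1e-6) - nernst_potential(0.0, 1, 1e-7)
print(round(delta * 1000, 1))  # → 59.2 (mV)
```

This per-decade slope is why glass-electrode pH meters are calibrated in steps of roughly 59 mV per pH unit at room temperature.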
A variant of potentiometry is chronopotentiometry, which consists of applying a constant current and measuring the potential as a function of time. The technique was introduced by Weber.
Amperometry
Amperometry denotes the class of electrochemical techniques in which a current is measured as a function of an independent variable, typically time (in chronoamperometry) or electrode potential (in voltammetry). Chronoamperometry is the technique in which the current is measured, at a fixed potential, at different times after the start of polarisation. Chronoamperometry is typically carried out in unstirred solution at a fixed electrode, i.e., under experimental conditions that avoid convection as a mass-transfer mechanism to the electrode. Voltammetry, on the other hand, is a subclass of amperometry in which the current is measured while varying the potential applied to the electrode. The different voltammetric techniques are defined by the waveform that describes how the potential is varied as a function of time.
Chronoamperometry
In chronoamperometry, a sudden potential step is applied at the working electrode and the current is measured as a function of time. Since this is not an exhaustive method, microelectrodes are used and the measurement time is usually very short, typically 20 ms to 1 s, so as not to consume the analyte.
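For a planar electrode under purely diffusional mass transfer, the current decay after the potential step follows the Cottrell equation. The sketch below is illustrative only; the electrode area, concentration, and diffusion coefficient are invented example values.

```python
import math

F = 96485.33212  # Faraday constant, C/mol

def cottrell_current(n, area_cm2, conc_mol_cm3, diff_cm2_s, t_s):
    """Diffusion-limited current (A) after a potential step,
    from the Cottrell equation: i = n*F*A*C*sqrt(D/(pi*t))."""
    return n * F * area_cm2 * conc_mol_cm3 * math.sqrt(diff_cm2_s / (math.pi * t_s))

# Hypothetical step experiment: 1-electron couple, 0.02 cm^2 electrode,
# 1 mM analyte (1e-6 mol/cm^3), D = 1e-5 cm^2/s.
i_20ms = cottrell_current(1, 0.02, 1e-6, 1e-5, 0.020)
i_80ms = cottrell_current(1, 0.02, 1e-6, 1e-5, 0.080)
print(i_20ms / i_80ms)  # about 2.0: current falls as 1/sqrt(t)
```

The 1/sqrt(t) decay is why the short 20 ms to 1 s window mentioned above already captures most of the useful signal.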
Voltammetry
Voltammetry consists of applying a constant and/or varying potential at an electrode's surface and measuring the resulting current with a three-electrode system. This method can reveal the reduction potential of an analyte and its electrochemical reactivity. In practical terms it is non-destructive, since only a very small amount of the analyte is consumed at the two-dimensional surface of the working and auxiliary electrodes. In practice, the analyte solution is usually disposed of, since it is difficult to separate the analyte from the bulk electrolyte, and the experiment requires only a small amount of analyte. A normal experiment may involve 1–10 mL of solution with an analyte concentration between 1 and 10 mmol/L. More advanced voltammetric techniques can work with microliter volumes and down to nanomolar concentrations. Chemically modified electrodes are employed for the analysis of organic and inorganic samples.
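One widely used relation in voltammetry is the Randles–Ševčík equation for the peak current of a reversible couple at 25 °C. The sketch below is not from the source and uses invented electrode parameters; it shows the characteristic square-root dependence of peak current on scan rate.

```python
import math

def randles_sevcik_ip(n, area_cm2, conc_mol_cm3, diff_cm2_s, scan_v_s):
    """Peak current (A) for a reversible couple at 25 degrees C, from the
    Randles-Sevcik equation: ip = 2.69e5 * n^(3/2) * A * C * sqrt(D*v),
    with A in cm^2, C in mol/cm^3, D in cm^2/s, and v in V/s."""
    return 2.69e5 * n**1.5 * area_cm2 * conc_mol_cm3 * math.sqrt(diff_cm2_s * scan_v_s)

# Hypothetical cyclic voltammogram: quadrupling the scan rate
# roughly doubles the peak current (ip scales with sqrt(v)).
ip_slow = randles_sevcik_ip(1, 0.07, 1e-6, 7e-6, 0.05)
ip_fast = randles_sevcik_ip(1, 0.07, 1e-6, 7e-6, 0.20)
print(ip_fast / ip_slow)  # about 2.0
```

This scaling is one routine diagnostic for a diffusion-controlled, reversible electrode process.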
Polarography
Polarography is a subclass of voltammetry that uses a dropping mercury electrode as the working electrode.
Coulometry
Coulometry uses an applied current or potential to convert an analyte completely from one oxidation state to another. In these experiments, the total charge passed is measured directly or indirectly to determine the number of electrons transferred. Knowing the number of electrons passed can indicate the concentration of the analyte or, when the concentration is known, the number of electrons transferred in the redox reaction. Typical forms of coulometry include bulk electrolysis, also known as potentiostatic or controlled-potential coulometry, as well as a variety of coulometric titrations.
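The conversion from total charge to amount of analyte follows Faraday's law of electrolysis. A minimal sketch (the charge value below is an invented example):

```python
F = 96485.33212  # Faraday constant, C/mol

def moles_from_charge(charge_c, n_electrons):
    """Faraday's law: moles of analyte converted by a total charge Q (C)
    in an exhaustive electrolysis, with n electrons per molecule."""
    return charge_c / (n_electrons * F)

# Hypothetical bulk electrolysis: about 19.3 C passed for a 2-electron
# reduction corresponds to roughly 0.1 mmol of analyte.
moles = moles_from_charge(19.297, 2)
print(round(moles * 1000, 4))  # 0.1 (mmol)
```

Conversely, when the concentration is known, the same relation solved for n gives the number of electrons involved in the redox reaction, as the text notes.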
References
Bibliography | Electroanalytical methods | [
"Chemistry"
] | 1,000 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
10,989,188 | https://en.wikipedia.org/wiki/Retinalophototroph | A retinalophototroph is one of two different types of phototrophs, and are named for retinal-binding proteins (microbial rhodopsins) they utilize for cell signaling and converting light into energy. Like all phototrophs, retinalophototrophs absorb photons to initiate their cellular processes. In contrast with chlorophototrophs, retinalophototrophs do not use chlorophyll or an electron transport chain to power their chemical reactions. This means retinalophototrophs are incapable of traditional carbon fixation, a fundamental photosynthetic process that transforms inorganic carbon (carbon contained in molecular compounds like carbon dioxide) into organic compounds. For this reason, experts consider them to be less efficient than their chlorophyll-using counterparts, chlorophototrophs.
Energy conversion
Retinalophototrophs achieve adequate energy conversion via a proton-motive force. In retinalophototrophs, proton-motive force is generated from rhodopsin-like proteins, primarily bacteriorhodopsin and proteorhodopsin, acting as proton pumps along a cellular membrane.
To capture the photons needed to activate the proton pump, retinalophototrophs employ organic pigments known as carotenoids, namely beta-carotenoids. The beta-carotenoids present in retinalophototrophs are unusual candidates for energy conversion, but they possess the high vitamin-A activity necessary for the formation of retinaldehyde, or retinal. Retinal, a chromophore molecule configured from vitamin A, is formed when bonds between carotenoids are disrupted in a process called cleavage. Due to its acute light sensitivity, retinal is well suited to driving the proton-motive force, and it imparts a unique purple coloration to retinalophototrophs. Once retinal absorbs enough light, it isomerizes, forcing a conformational (i.e., structural) change in the rhodopsin-like proteins. Upon activation, these proteins act as a gateway, allowing passage of ions to create an electrochemical gradient between the interior and exterior of the cellular membrane. Protons pumped outwards across the membrane then diffuse back into the cell through ATP synthase proteins on the cell's surface; their passage drives the synthesis of ATP from ADP and inorganic phosphate, providing energy for retinalophototrophic self-sustenance and proliferation.
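The proton-motive force described above is conventionally expressed as the sum of an electrical term and a pH-gradient term. The sketch below is not from the source; it evaluates the standard bioenergetics formula with invented membrane values, and sign conventions vary between texts.

```python
R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def proton_motive_force(delta_psi_mv, ph_in, ph_out, temp_k=298.15):
    """Proton-motive force (mV) in one common convention:
    dp = dpsi - (2.303*R*T/F) * (pH_in - pH_out)."""
    z = 2.303 * R * temp_k / F * 1000.0  # about 59.2 mV per pH unit at 25 C
    return delta_psi_mv - z * (ph_in - ph_out)

# Hypothetical membrane: -150 mV potential (inside negative) plus a
# one-unit pH gradient (inside more alkaline) gives about -209 mV total.
print(round(proton_motive_force(-150.0, 7.5, 6.5), 1))  # -209.2
```

Both components arise from the outward proton pumping described above, and both contribute to the energy available to ATP synthase.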
Interaction with carbon
Many, if not all, retinalophototrophs are photoheterotrophs: although sufficient ATP is produced by light, they cannot subsist on light and inorganic substances alone, because they cannot produce the organic materials they need from CO2 alone. This category includes retinalophototrophs that perform anaplerotic fixation, such as a flavobacterium that can use pyruvate and CO2 to make malate. This ability does, however, help "stretch" limited supplies of carbon.
Taxonomy
Retinalophototrophs are found across all domains of life but predominantly in the Bacteria and Archaea. Scientists believe the general ecological abundance of retinalophototrophs correlates with horizontal gene transfer, since only two genes are required for retinalophototrophy to occur: essentially, one gene for retinal-binding protein synthesis (bop) and one for retinal chromophore synthesis (blh).
Interactions with environment
Despite their apparent simplicity, retinalophototrophs boast versatile ion usage that allows them to exist in relatively extreme environments. For instance, retinalophototrophs can thrive at depths of over 200 meters where, despite a lack of inorganic carbon, sufficient light as well as sodium, hydrogen, or chloride concentrations create conditions capable of supporting their vital metabolic processes. Studies have also shown that sodium and hydrogen ions correlate directly with retinalophototrophs' nutrient uptake and ATP synthesis, while chloride drives processes responsible for osmotic equilibrium. Even though retinalophototrophs are widespread, research has shown they can occupy narrow niches as well: depending on their proximity to the ocean's surface, retinalophototrophs have evolved to absorb light better within specific wavelength ranges. Most importantly, the prevalence of retinalophototrophs as primary producers contributes substantially to the bottom-up mechanics of marine environments and, consequently, to the success of fauna and flora worldwide.
Although retinalophototrophs are less efficient at converting light than chlorophototrophs, their simplicity makes them the preferred system in a large number of environments. For example, because retinalophototrophy requires no iron in the reaction center, these organisms are well adapted to the iron-poor ocean environment. At high light levels, they are also more efficient in terms of protein investment per unit of energy output, owing to the small size of the retinal-protein system.
References
Photosynthesis
Trophic ecology
Microbial growth and nutrition
Biology terminology | Retinalophototroph | [
"Chemistry",
"Biology"
] | 1,067 | [
"Biochemistry",
"Photosynthesis"
] |
10,989,485 | https://en.wikipedia.org/wiki/Restricted%20partial%20quotients | In mathematics, and more particularly in the analytic theory of regular continued fractions, an infinite regular continued fraction x is said to be restricted, or composed of restricted partial quotients, if the sequence of denominators of its partial quotients is bounded; that is
and there is some positive integer M such that all the (integral) partial denominators ai are less than or equal to M.
Periodic continued fractions
A regular periodic continued fraction consists of a finite initial block of partial denominators followed by a repeating block; if
ζ = [a0; a1, …, ak, ak+1, …, ak+m, ak+1, …, ak+m, …],
where the block ak+1, …, ak+m repeats indefinitely, then ζ is a quadratic irrational number, and its representation as a regular continued fraction is periodic. Clearly any regular periodic continued fraction consists of restricted partial quotients, since none of the partial denominators can be greater than the largest of a0 through ak+m. Historically, mathematicians studied periodic continued fractions before considering the more general concept of restricted partial quotients.
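The periodicity can be observed numerically. The short sketch below (not part of the source) expands sqrt(n) using the classical integer recurrence; the repeating, bounded partial denominators are immediately visible.

```python
import math

def sqrt_cf(n, terms=10):
    """First `terms` partial denominators of the continued fraction of
    sqrt(n), for n a non-square positive integer, via the classical
    integer recurrence; the expansion is always eventually periodic."""
    a0 = math.isqrt(n)
    m, d, a = 0, 1, a0
    out = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

# sqrt(2) = [1; 2, 2, 2, ...] and sqrt(7) = [2; 1, 1, 1, 4, ...]:
# every partial denominator is bounded, as the text notes.
print(sqrt_cf(2, 5))   # [1, 2, 2, 2, 2]
print(sqrt_cf(7, 6))   # [2, 1, 1, 1, 4, 1]
```

Since the recurrence can only visit finitely many (m, d) pairs, the expansion must repeat, which is the elementary reason the partial denominators of a quadratic irrational are restricted.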
Restricted CFs and the Cantor set
The Cantor set is a set C of measure zero from which a complete interval of real numbers can be constructed by simple addition – that is, any real number from the interval can be expressed as the sum of exactly two elements of the set C. The usual proof of the existence of the Cantor set is based on the idea of punching a "hole" in the middle of an interval, then punching holes in the remaining sub-intervals, and repeating this process ad infinitum.
The process of adding one more partial quotient to a finite continued fraction is in many ways analogous to this process of "punching a hole" in an interval of real numbers. The size of the "hole" is inversely proportional to the next partial denominator chosen – if the next partial denominator is 1, the gap between successive convergents is maximized.
To make the following theorems precise we will consider CF(M), the set of restricted continued fractions whose values lie in the open interval (0, 1) and whose partial denominators are bounded by a positive integer M – that is,
CF(M) = {[0; a1, a2, a3, …] : 1 ≤ ai ≤ M for all i ≥ 1}.
By making an argument parallel to the one used to construct the Cantor set two interesting results can be obtained.
If M ≥ 4, then any real number in an interval can be constructed as the sum of two elements from CF(M), where the interval is given by
A simple argument shows that holds when M ≥ 4, and this in turn implies that if M ≥ 4, every real number can be represented in the form n + CF1 + CF2, where n is an integer, and CF1 and CF2 are elements of CF(M).
Zaremba's conjecture
Zaremba has conjectured the existence of an absolute constant A such that the rationals with partial quotients restricted by A contain at least one fraction for every (positive integer) denominator. The choice A = 5 is compatible with the numerical evidence. Further conjectures reduce that value in the case of all sufficiently large denominators. Jean Bourgain and Alex Kontorovich have shown that A can be chosen so that the conclusion holds for a set of denominators of density 1.
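Zaremba's conjecture is easy to probe numerically. The sketch below (function names are my own, not from the source) searches, for each small denominator q, for a numerator whose continued fraction has all partial quotients at most 5.

```python
def cf_of_fraction(p, q):
    """Partial quotients of p/q via the Euclidean algorithm."""
    out = []
    while q:
        out.append(p // q)
        p, q = q, p % q
    return out

def zaremba_witness(q, bound=5):
    """Return a numerator p in [1, q-1] such that every partial
    quotient of p/q is <= bound, or None if no such p exists."""
    for p in range(1, q):
        if all(a <= bound for a in cf_of_fraction(p, q)):
            return p
    return None

# Numerically check Zaremba's conjecture with A = 5 for small denominators.
witnesses = {q: zaremba_witness(q) for q in range(2, 200)}
print(all(w is not None for w in witnesses.values()))  # True for this range
```

For example, 5/6 = [0; 1, 5] serves as a witness for q = 6, even though 1/6 = [0; 6] does not qualify.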
See also
Markov spectrum
References
Continued fractions
Diophantine approximation | Restricted partial quotients | [
"Mathematics"
] | 663 | [
"Continued fractions",
"Mathematical relations",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
10,989,910 | https://en.wikipedia.org/wiki/3rd%20Space%20Vest | The ForceWear Vest is a haptic suit that was unveiled at the Game Developers Conference in San Francisco in March 2007. The vest was mentioned in several articles about next-generation gaming accessories. The vest was released in November 2007, and reviews of the product have been generally favorable.
The vest uses eight trademarked "contact points" that simulate gunfire, body slams or G-forces associated with race car driving. It is unique because unlike traditional force feedback accessories, the vest is directional, so that action taking place outside the players' field of view can also be felt. A player hit by gunfire from behind will actually feel the shot in his back while he may not be otherwise aware of this using standard visual display cues.
Gaming reporter Charlie Demerjian of The Inquirer said, "If they can keep the price reasonable and have a few good games, this has a chance of becoming a useful gaming accessory." Players currently have three ways to use the vest: playing games with direct integration, such as TN Games' own 3rd Space Incursion; using the 3rd Space game drivers while playing a game (drivers currently in Beta 2); or installing specially made mods for a game. The vest works with many games, including Call of Duty 2: 3rd Space Edition, 3rd Space Incursion, Half-Life 2: Episodes 1 & 2, Crysis, Enemy Territory Quake Wars, Clive Barker's Jericho, Unreal Tournament 3, F.E.A.R., Medal of Honor: Airborne, Quake 4 and Doom 3.
References
External links
3rd Space Vest
FPS Vest
Video game accessories | 3rd Space Vest | [
"Technology"
] | 328 | [
"Video game accessories",
"Components"
] |
10,990,040 | https://en.wikipedia.org/wiki/Luzin%20space | In mathematics, a Luzin space (or Lusin space), named for N. N. Luzin, is an uncountable topological T1 space without isolated points in which every nowhere-dense subset is countable. There are many minor variations of this definition in use: the T1 condition can be replaced by T2 or T3, and some authors allow a countable or even arbitrary number of isolated points.
The existence of a Luzin space is independent of the axioms of ZFC. showed that the continuum hypothesis implies that a Luzin space exists.
showed that assuming Martin's axiom and the negation of the continuum hypothesis, there are no Hausdorff Luzin spaces.
In real analysis
In real analysis and descriptive set theory, a Luzin set (or Lusin set), is defined as an uncountable subset of the reals such that every uncountable subset of is nonmeager; that is, of second Baire category. Equivalently, is an uncountable set of reals that meets every first category set in only countably many points. Luzin proved that, if the continuum hypothesis holds, then every nonmeager set has a Luzin subset. Obvious properties of a Luzin set are that it must be nonmeager (otherwise the set itself is an uncountable meager subset) and of measure zero, because every set of positive measure contains a meager set that also has positive measure, and is therefore uncountable. A weakly Luzin set is an uncountable subset of a real vector space such that for any uncountable subset the set of directions between different elements of the subset is dense in the sphere of directions.
The measure-category duality provides a measure analogue of Luzin sets – sets of positive outer measure, every uncountable subset of which has positive outer measure. These sets are called Sierpiński sets, after Wacław Sierpiński. Sierpiński sets are weakly Luzin sets but are not Luzin sets.
Example of a Luzin set
Choose a collection of 2^ℵ0 meager subsets of R such that every meager subset is contained in one of them. By the continuum hypothesis, it is possible to enumerate them as Sα for countable ordinals α. For each countable ordinal β choose a real number xβ that is not in any of the sets Sα for α < β, which is possible as the union of these sets is meager and so is not the whole of R. Then the uncountable set X of all these real numbers xβ has only a countable number of elements in each set Sα, so it is a Luzin set.
More complicated variations of this construction produce examples of Luzin sets that are subgroups, subfields or real-closed subfields of the real numbers.
References
Paper mentioning Luzin spaces
Properties of topological spaces
Descriptive set theory | Luzin space | [
"Mathematics"
] | 609 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
10,990,251 | https://en.wikipedia.org/wiki/%C3%89milien%20Dumas | Jean Louis George Émilien Dumas (16 october 1804 – 21 September 1870) was a French scholar, palaeontologist, and geologist.
Biography
Born to a Protestant family of the bourgeoisie in Gard, Émilien Dumas was immersed from his early childhood in an atmosphere of learning and erudition. His father, a former merchant involved in agriculture, was an educated man. The native flora of Gard provided him with his first field of study. From 1815 to 1824, he studied at Morges, Switzerland, then at Basel, where his passion for the natural sciences matured. He returned to his homeland in 1824 following the death of his mother.
Embarking on a career in the sciences, he went to Paris and studied at the Collège de France, the Ecole des Mines de Paris and the Muséum national d'histoire naturelle, and with Georges Cuvier, Étienne Geoffroy Saint-Hilaire, and Adrien-Henri de Jussieu.
His education in the natural sciences was well rounded, and he threw himself with equal passion into Zoology, Mineralogy, and Botany, as well as engaging in the contemporary debate over Lamarckism.
In 1828, he returned to Sommières, where he married Pauline Borel, a wealthy heiress from Orange and daughter of a silk manufacturer. The same year, he unveiled a rich paleontological dig site at Pondres (Gard), whose human and animal remains fueled Lamarckist arguments, particularly in the field of archaeozoology.
He surveyed his region with great patience and tenacity over a period of 20 years, to produce a geological map of the département of Gard. During a long voyage in the 1860s he studied the geography of southern Europe. As an avid collector, he cultivated his curiosity throughout his life, and the Natural History Museum at Nîmes now preserves a large part of his numerous collections spanning the fields of Greek antiquities, botany, and geology.
The missing piece in this portrait of the "Explorer of Gard" is his taste for theater and acting. He was a willing participant as well as observer, which was considered by his contemporaries as incompatible with his role as a scientist.
He died on September 21, 1870, in Ax-sur-Ariège.
Works
Émilien Dumas, , 1876
Bibliography
Édouard Dumas, Émilien Dumas et l'empreinte de Sommières, Lacour-Ollé, 1993.
« Émilien Dumas, l'explorateur du Gard », Catalogue de l'exposition organisée à l'occasion du bicentenaire de sa naissance, Musée d'Histoire naturelle de Nîmes.
External links
An article by the Sommières Association (French)
The text of his treatise on the geology of Gard (French)
1804 births
1870 deaths
French geologists
French paleontologists
Lamarckism | Émilien Dumas | [
"Biology"
] | 593 | [
"Non-Darwinian evolution",
"Biology theories",
"Obsolete biology theories",
"Lamarckism"
] |
10,990,255 | https://en.wikipedia.org/wiki/Oocyte%20cryopreservation | Oocyte cryopreservation is a procedure to preserve a woman's eggs (oocytes). The technique is often used to delay pregnancy. At the time pregnancy is desired, the eggs can be thawed, fertilized, and transferred to the uterus as embryos. Many studies have suggested infertility problems as germ cell deterioration related to aging. The procedure's success rate varies according to the woman's age (with higher odds of success in younger women), health, and genetic factors. The first human birth of oocyte cryopreservation was reported in 1986.
Indications
Women diagnosed with cancer who have not yet begun chemotherapy or radiotherapy can benefit from oocyte cryopreservation, since both treatments are toxic to oocytes and reduce the number of viable eggs. Egg-freezing may be used in such cases to preserve eggs before treatment begins.
Those undergoing treatment with assisted reproductive technologies who do not consider embryo freezing an option often look towards oocyte cryopreservation as an alternative.
Women who would like to preserve their future ability to have children often use oocyte cryopreservation to freeze their eggs, allowing them to have children later in life.
Women with a family history of early menopause may have an interest in fertility preservation to preserve viable eggs that could deteriorate at an earlier onset.
Those with ovarian diseases such as Polycystic Ovary Syndrome could opt for this method.
Oocyte cryopreservation is one of many options for individuals undergoing IVF. In some cases, patients may prefer oocyte cryopreservation over freezing embryos, which is otherwise the primary procedure.
Method
The egg retrieval process for oocyte cryopreservation is the same as that for in vitro fertilization (IVF). This includes one to several weeks of hormone injections that stimulate ovaries to ripen multiple eggs. When the eggs are mature, final maturation induction is performed. The eggs are subsequently removed from the body by transvaginal oocyte retrieval. The procedure is usually conducted under sedation. The eggs are immediately frozen.
The egg is the largest cell in the human body and contains a large amount of water. When the egg is frozen, the ice crystals that form can destroy the integrity of the cell. To prevent this, the egg must be dehydrated before freezing. This is done using cryoprotectants which replace most of the water within the cell and inhibit the formation of ice crystals.
Eggs (oocytes) are frozen using either a controlled rate, a slow-cooling method, or a newer flash-freezing process known as vitrification. Vitrification is much faster but requires higher concentrations of cryoprotectants to be added. The result of vitrification is a solid glass-like cell, free of ice crystals. Vitrification has been developed and successfully applied in IVF treatment with the first live birth following the vitrification of oocytes achieved in 1999. Vitrification eliminates ice formation inside and outside of oocytes on cooling, during cryostorage, and as the oocytes warm. Vitrification is associated with higher survival rates and enhanced development compared to slow-cooling when applied to oocytes in metaphase II. Vitrification has also become the method of choice for pronuclear oocytes, although prospective randomized controlled trials are still lacking.
During the freezing process, the zona pellucida, or shell of the egg, can be modified preventing fertilization. Thus, when eggs are thawed and pregnancy is desired, a fertilization procedure known as ICSI (Intracytoplasmic Sperm Injection) is performed by an embryologist whereby sperm is injected directly into the egg with a needle rather than allowing sperm to penetrate naturally by placing it around the egg in a dish.
Immature oocytes have been grown until maturation in vitro, but it is not yet clinically available.
Success rates
Early work showed a lower percentage of successful transferred cycles for frozen oocytes compared with fresh oocytes (approximately 30% versus 50%); however, more recent studies show that "fertilization and pregnancy rates are similar to IVF/ICSI (in vitro fertilization/intracytoplasmic sperm injection) with fresh oocytes when [both] vitrified and warmed oocytes are used as part of IVF/ICSI". These studies were completed mostly in young patients.
In a 2013 meta-analysis of more than 2,200 cycles using frozen eggs, scientists found the probability of having a live birth after three cycles was 31.5% for women who froze their eggs at age 25, 25.9% at age 30, 19.3% at age 35, and 14.8% at age 40.
Studies have shown that the rate of birth defects and chromosomal defects when using cryopreserved oocytes is consistent with that of natural conception.
Recent modifications in the protocol regarding cryoprotectant composition, temperature, and storage methods have had a large impact on the technology, and while it is still considered an experimental procedure, it is quickly becoming an option for women. Slow freezing traditionally has been the most commonly used method to cryopreserve oocytes and is the method that has resulted in the most babies born from frozen oocytes worldwide. Ultra-rapid freezing or vitrification represents a potential alternative freezing method.
In the fall of 2009, The American Society for Reproductive Medicine (ASRM) issued an opinion on oocyte cryopreservation concluding that the science holds "great promise for applications in oocyte donation and fertility preservation" because recent laboratory modifications have resulted in improved oocyte survival, fertilization, and pregnancy rates from frozen-thawed oocytes in IVF. The ASRM noted that from the limited research performed to date, there does not appear to be an increase in chromosomal abnormalities, birth defects, or developmental deficits in the children born from cryopreserved oocytes. The ASRM recommended that pending further research, oocyte cryopreservation should be introduced into clinical practice on an investigational basis and under the guidance of an Institutional Review Board (IRB). As with any new technology, safety and efficacy must be evaluated and demonstrated through continued research.
In October 2012, the ASRM lifted the experimental label from the technology for women with a medical need, citing success rates in live births, among other findings. However, they also warned against using it only to delay child-bearing.
In 2014, a Cochrane systematic review was published. It compared vitrification (the newest technology) versus slow freezing (the oldest one). Key results of that review showed that the clinical pregnancy rate was almost 4 times higher in the oocyte vitrification group than in the slow-freezing group, with moderate quality of evidence.
Immature oocytes have been grown until maturation in vitro at a 10% survival rate, but no experiment has been performed to fertilize such oocytes.
Cost
The cost of the egg-freezing procedure (without embryo transfer) in the United States, the United Kingdom, and other European countries varies between $5,000 and $12,000. The cost of egg storage can vary from $100 to more than $1,000. Provincial health programs do not cover social egg freezing. Furthermore, no provinces provide funding for IVF after social egg freezing.
Medical tourism may have lower costs than performing egg freezing in high-cost countries like the US. Some well-established medical tourism and IVF countries such as the Czech Republic, Ukraine, Greece and Cyprus offer egg freezing at competitive prices. It is a lower-cost alternative to typical US options for egg freezing. Spain and the Czech Republic are popular destinations for this treatment.
Iranian insurance started to pay insurance incentives for women freezing their eggs in 2024.
History
Cryopreservation itself has always played a central role in assisted reproductive technology. With the first cryopreservation of sperm in 1953 and of embryos twenty five years later, these techniques have become routine. Dr. Christopher Chen of Singapore reported the world's first pregnancy in 1986 using previously frozen oocytes. This report stood alone for several years followed by studies reporting success rates using frozen eggs to be much lower than those of traditional in vitro fertilization (IVF) techniques using fresh oocytes. Providing the lead to a new direction in cryobiology, Dr. Lilia Kuleshova was the first scientist to achieve vitrification of human oocytes that resulted in a live birth in 1999. Articles published in the journal Fertility and Sterility reported that pregnancy rates using frozen oocytes that were comparable to those of cryopreserved embryos and even fresh embryos.
Elective oocyte cryopreservation
Elective oocyte cryopreservation, also known as social egg freezing, is non-essential egg freezing to preserve fertility for delayed child-bearing when natural conception becomes more problematic. The frequency of this procedure has steadily increased since October 2012 when the American Society for Reproductive Medicine (ASRM) lifted the 'experimental' label from the process. There was a spike in interest in 2014 when global corporations Apple and Meta Platforms announced they were going to pay for the procedure of egg freezing as a benefit for their female employees. This announcement was controversial as some women found it empowering and practical, while others viewed the message these companies were sending to women trying to have a successful long-term career and a family as harmful and alienating. A string of "egg-freezing parties" hosted by third-party companies have also helped popularize the concept among young women.
Social science research suggests that women use elective egg freezing to disentangle their search for a romantic partner from their plans to have children.
In 2016, then US Secretary of Defense Ash Carter announced that the Department of Defense would cover the cost of freezing sperm or eggs through a pilot program for active duty service members, to preserve their ability to start a family even if they sustain certain combat injuries.
There are still warnings for women using this technology to fall pregnant at an older age as the risk of pregnancy complications increases with a mother's age. However, studies have shown that the risk of congenital abnormalities in babies born from frozen oocytes is not increased further when compared to naturally conceived babies.
Risks
The risks associated with egg freezing relate to the administration of medications to stimulate the ovaries and the procedure of egg collection.
The main risk associated with the administration of medications to stimulate the ovaries is ovarian hyperstimulation syndrome (OHSS). This is a transient syndrome in which there is increased permeability of the blood vessels, resulting in fluid loss from the vessels into the surrounding tissues. In most cases, the syndrome is mild, with symptoms such as abdominal bloating, mild discomfort, and nausea. In moderate OHSS there is increased abdominal bloating resulting in pain and vomiting. Reduced urine output may occur. Severe OHSS is serious with even further bloating so that the abdomen appears very distended, and thirst and dehydration occur with minimal urine output. There may be shortness of breath and there is an increased risk of DVT and/or pulmonary embolism. Kidney and liver function can be compromised. Hospitalization under specialist care is indicated. There is no treatment for OHSS, supportive care until the symptoms naturally resolve is required. If an hCG trigger has been used with no embryo transfer, OHSS usually resolves in 7–10 days. If an embryo transfer has occurred and pregnancy results, the symptoms may persist for several weeks. Doctors reduce the likelihood of OHSS occurring by decreasing the doses of gonadotropins (FSH) administered, using a GnRH agonist trigger (instead of an hCG trigger), and freezing all embryos for transfer rather than conducting a fresh embryo transfer.
Risks associated with the egg collection procedure relate to bleeding and infection. The collection procedure involves passing a needle through the wall of the vagina into the stimulated, highly vascular ovaries. A small amount of bleeding is inevitable. In rare cases, there is excessive bleeding into the abdomen requiring surgery. Women undergoing the procedure must advise their specialist of all medications they are using, including herbal supplements, so the specialist can assess whether any of these will affect the ability of the blood to clot. Concerning infection, provided the woman does not have additional risk factors (a suppressed immune system, use of immuno-suppressive medications, or large ovarian endometriomas), the risk of infection is very low.
One additional risk of the ovaries being temporarily increased in size is ovarian torsion. Ovarian torsion occurs when an enlarged ovary twists around on itself, cutting off its blood supply. The condition is excruciatingly painful and requires urgent surgery to prevent the ischemic loss of the ovary.
See also
Egg donation
Semen cryopreservation
In vitro fertilization
References
External links
How egg freezing works, Human Fertilisation and Embryology Authority
National Cancer Institute – Sexuality and Reproductive Issues
Mature oocyte cryopreservation: a guideline American Society for Reproductive Medicine (PDF)
American Society for Reproductive Medicine
World Association of Reproductive Medicine
Assisted reproductive technology
Cryopreservation
Human embryology | Oocyte cryopreservation | [
"Chemistry",
"Biology"
] | 2,758 | [
"Cryopreservation",
"Cryobiology",
"Assisted reproductive technology",
"Medical technology"
] |
10,991,325 | https://en.wikipedia.org/wiki/Hoofdletters%2C%20Tweeling-%20en%20Meerlingdruk | Hoofdletters, Tweeling- en Meerlingdruk was a Dutch book published in 1958. In the book, author Dr. George van den Bergh made several propositions for a more economical arrangement of type in books. The book was featured in Herbert Spencer's Typographica (Old Series, number 16, 1959) in and Eye magazine (no. 47, vol. 12, Spring 2003). In Rick Poynor's Typographica he translates the Dutch title as "Capitals, twin- and multi-print."
There were three principles in van den Bergh's proposals. The first was that printing in all caps (Hoofdletters in Dutch means uppercase letters) would save the space wasted by the ascenders and descenders of lowercase letters. The second principle involved double printing texts that could be screened by overlaying sheets that masked every other line of text. The third principle involved double printing texts in red and green: the reader could then read through red or green "spectacles" that filtered out one text.
Erik Kindel, author of the 2003 Eye article, sums up with a contemporary evaluation of the book.
Notes
1958 non-fiction books
Communication design
Dutch non-fiction books
Graphic design | Hoofdletters, Tweeling- en Meerlingdruk | [
"Engineering"
] | 259 | [
"Design",
"Communication design"
] |
10,991,523 | https://en.wikipedia.org/wiki/Total%20human%20ecosystem | Total human ecosystem (THE) is an ecocentric concept initially proposed by ecology professors Zeev Naveh and Arthur S. Lieberman in 1994.
History of the concept
Naveh and Lieberman proposed a holistic, ecocentric concept of the total human ecosystem in order to study anthropocene ecology and improve land use planning and environmental management within an integrated and interdisciplinary approach. In Naveh's words, the total human ecosystem is "the highest co-evolutionary ecological entity on earth with landscapes as its concrete three-dimensional ‘Gestalt’ systems, forming the spatial and functional matrix for all organisms". This concept (or meta-concept) integrates human systems (the technosphere, but also in the conceptual space of human noosphere) and natural systems (the geophysical eco-space of the Earth biosphere).
Zev Naveh (1919-2011), the major contributor to this concept, was a professor in landscape ecology at the Technion, Israel Institute of Technology, Haifa. Until 1965 he worked as a range and pasture specialist in Israel and Tanzania. His research at the Technion was devoted to human impacts on Mediterranean landscapes, fire ecology and dynamic conservation management, and the introduction of drought resistant plants for multi-beneficial landscape restoration and beautification.
Almo Farina, who also developed the concept from 2000 onwards, is also a professor of ecology at the Urbino University, Faculty of Environmental Sciences, in Italy.
Concepts and epistemology
The interaction and co-evolution of human and natural ecosystems are the driving forces for the current Earth system. The total human ecosystem meta-conceptional approach aims to integrate the bio- and geo-centric approaches, derived from the natural sciences, and the approaches derived from the social sciences and the humanities in order to prevent further environmental degradation and drive natural and human systems towards a sustainable future.
A natural ecosystem within this concept is solar energy powered, self-organizing and self-creating. The human ecosystem is fossil energy powered by high input and throughput, and can be divided into two sub-ecosystems: urban-industrial and agro-industrial. The ecosystem is realised in space as an ecotope and the system of ecotopes is the landscape: natural, semi-natural, urban-industrial are the tangible, three-dimensional physical systems. These form the total human ecosystem. The total human ecosystem also consists of the domain of information, perceptions (in landscape ecology this is the ecofield concept), knowledge, feeling and consciousness, enabling human (but also biological) self-awareness.
A special case of landscapes inside of the total human ecosystem are the cultural landscapes in which the relationships between human activity (as an effective, ecology-based, land or sea stewardship) have created ecological, socioeconomic and cultural patterns and feedback mechanisms that preserve biological and cultural diversity and maintain or even improve the ecosystem's resilience and resistance.
See also
Human ecosystem
Landscape ecology
Environmental geography
Ecosystem
Sustainability
References
Farina, A., 2006. Principles and Methods in Landscape Ecology: Towards a Science of the Landscape, Springer, Dordrecht, 412 p.
Ecology
Ecosystems | Total human ecosystem | [
"Biology"
] | 636 | [
"Ecology",
"Symbiosis",
"Ecosystems"
] |
10,991,941 | https://en.wikipedia.org/wiki/Omphalotus%20nidiformis | Omphalotus nidiformis, or ghost fungus, is a gilled basidiomycete mushroom most notable for its bioluminescent properties. It is known to be found primarily in southern Australia and Tasmania, but was reported from India in 2012 and 2018. The fan or funnel shaped fruit bodies are up to across, with cream-coloured caps overlain with shades of orange, brown, purple, or bluish-black. The white or cream gills run down the length of the stipe, which is up to long and tapers in thickness to the base. The fungus is both saprotrophic and parasitic, and its fruit bodies are generally found growing in overlapping clusters on a wide variety of dead or dying trees.
First described scientifically in 1844, the fungus has been known by several names in its taxonomic history. It was assigned its current name by Orson K. Miller, Jr. in 1994. Its epithet name is derived from the Latin nidus "nest", hence 'nest shaped'. Similar in appearance to the common edible oyster mushroom, it was previously considered a member of the same genus, Pleurotus, and described under the former names Pleurotus nidiformis or Pleurotus lampas. Unlike oyster mushrooms, O. nidiformis is poisonous; while not lethal, its consumption leads to severe cramps and vomiting. The toxic properties of the mushroom are attributed to compounds called illudins. O. nidiformis is one of several species in the cosmopolitan genus Omphalotus, all of which have bioluminescent properties.
Taxonomy and naming
The ghost fungus was initially described in 1844 by English naturalist Miles Joseph Berkeley as Agaricus nidiformis. Berkeley felt it was related to Agaricus ostreatus (now Pleurotus ostreatus) but remarked it was a "far more magnificent species". Material was originally collected by Scottish naturalist James Drummond in 1841 on Banksia wood along the Swan River. He wrote "when this fungus was laid on a newspaper, it emitted by night a phosphorescent light, enabling us to read the words around it; and it continued to do so for several nights with gradually decreasing intensity as the plant dried up." More material collected from near the base of a "sickly but living" shrub (Grevillea drummondii) was named as Agaricus lampas by Berkeley. He noted both were phosphorescent and closely related species. Tasmanian botanist Ronald Campbell Gunn collected material in October 1845 from that state, which Berkeley felt differed from previous collections in having more demarcated and less decurrent gills and a shorter stipe, and named it Agaricus phosphorus in 1848. Italian mycologist Pier Andrea Saccardo placed all three named taxa in the genus Pleurotus in 1887. These names have been synonymised with O. nidiformis, although the name Pleurotus lampas persisted in some texts, including the 1934–35 monograph of Australian fungi by John Burton Cleland. In reviewing the published literature, Victorian botanical liaison officer Jim Willis was aware of Rolf Singer's placing of Pleurotus olearius into the genus Omphalotus, but stopped short of transferring the ghost fungus across, even though he conceded it was wrongly placed in Pleurotus. Investigating the species in 1994, Orson K. Miller, Jr. gave the ghost fungus its current binomial name when he transferred it to the genus Omphalotus with other bioluminescent mushrooms.
The specific epithet nidiformis is derived from the Latin terms nīdus 'nest' and forma 'shape' or 'form', hence 'nest shaped'. Lampas is derived from the Greek lampas/λαμπας 'torch'. Common names include ghost fungus and Australian glow fungus. Drummond reported that the local Aboriginal people were fearful when shown the luminescent fungus and called out chinga, a local word for spirit; Drummond himself likened it to a will-o'-the-wisp. On the Springbrook Plateau in southeastern Queensland, the local Kombumerri people believed the lights to be ancestors and gave the area a wide berth out of respect.
Several Omphalotus species with similar bioluminescent properties occur worldwide, all of which are presumed poisonous. The best known are the North American jack o'lantern mushroom (O. olearius) and the tsukiyotake (O. japonicus (Kawam.) Kirchm. & O.K. Mill. (formerly known as Lampteromyces japonicus (Kawam.) Sing.), found in Japan and eastern Asia. A 2004 molecular study shows the ghost fungus to be most closely related to the western jack o'lantern mushroom (O. olivascens), which is abundant in Southern and Central California. Miller notes that the colours and shades of the ghost fungus most closely resemble this species.
Laboratory breeding experiments with it and other Omphalotus species have revealed a low level of compatibility (ability to breed and produce fertile hybrids), suggesting it is genetically distinct and has been isolated for a long time. It is particularly poorly compatible with O. illudens, the authors of the study suggesting the separation may have been as long ago as the Late Carboniferous separation of Gondwana from Laurasia but conceding the lack of any fossil record makes it impossible to know whether the genus even existed at the time.
Variation
Miller noted there appeared to be two colour forms reported across its range, namely a more cream-coloured form with darker shades of brown and grey in its cap that darkens with age, and a more wholly brownish form with paler edges and darker centre to its cap. He found the cream-coloured form to be strongly luminescent—the brightest of any fungus in the genus—with the cap, stipe and gills all glowing. The brown form was generally fainter, with its luminescence restricted to the gills. However, some strongly luminescent wholly brown-coloured mushrooms were recorded, and laboratory experiments showed all interbred freely and produced fertile offspring, leading Miller to conclude that these were phenotypic variants of a single taxon.
Description
The fruit bodies of the ghost fungus can be found on dead or diseased wood. They may be first seen at night as a pale whitish glow at the base of trees in a eucalypt forest. The cap is very variable in colour, sometimes cream though often tinted with orange, brownish, greyish, purple or even bluish-black shades. The margin is lighter, generally cream, though brown forms have tan or brown edges. The centre generally has several darker shades, and younger specimens are often darker. Growing up to in diameter it is funnel-shaped or fan-shaped in appearance with inrolled margins. The cream-white gills are decurrent and often drip with moisture. They are up to deep, somewhat distant to closely spaced, and have a smooth edge until they erode in maturity. The stipe may be central to lateral in its attachment to the cap and is up to long and tapers to the base. The thin flesh is generally creamy white in colour, but can have reddish tones near the base of the stipe. There is no distinctive smell or taste. The spore print is white.
The spores are roughly elliptical, or, less commonly, somewhat spherical, and have dimensions of 7.5–9.5 by 5–7 μm. They are thin-walled, inamyloid, and have a smooth surface. Each features a prominent hilar appendage. The basidia (spore-bearing cells), measuring 32–42 by 6–9 μm, are club-shaped and four-spored, with sterigmata up to 7 μm long. Cheilocystidia (cystidia found on the gill edges) are abundant, and measure 15–40 by 3–6 μm; no pleurocystida (cystidia on the gill faces) are present. The cap cuticle comprises a thin layer of 3–6 μm-wide hyphae that are interwoven either loosely or tightly. All hyphae of O. nidiformis have clamp connections.
The bioluminescence of O. nidiformis fruit bodies is best seen in low-light conditions when the viewer's eyes have become dark-adapted. The gills are the most luminescent part of the fungus, emitting a greenish light that fades with age. Although the intensity of the luminescence is variable, William Henry Harvey once reported that it was bright enough to read a watch face by. It is not known if the mycelium is also luminescent.
Omphalotus nidiformis may be confused with the edible brown oyster mushroom (Pleurotus australis), which is brown and does not glow in the dark. Confusion with another edible lookalike, Pleurotus ostreatus, common in the Northern Hemisphere and cultivated commercially, has been the source for at least one case of poisoning reported in the literature.
Distribution and habitat
Omphalotus nidiformis occurs in two disjunct ranges in southern Australia. In southwest Western Australia, it has been recorded from Perth and the Avon wheatbelt southwest to Augusta and east along the southern coastline to Esperance. In the southeast of the continent, it is found from eastern South Australia, where it has been recorded from Mount Gambier and the Fleurieu Peninsula, the Mount Lofty Ranges around Adelaide, the Murraylands, and north to the Flinders Ranges and from Lincoln National Park at the apex of the Eyre Peninsula, through to southeast Queensland. It also occurs in Tasmania. It can be found in eucalypt and pine forests, in habitats as diverse as the arid scrubland of Wyperfeld National Park and subalpine areas of Mount Buffalo National Park, as well as in urban parks and gardens. Fruit bodies can be numerous and occur in overlapping clusters on dead wood. Outside Australia, it has been recorded from Norfolk Island. In 2012, it was reported for the first time from Kerala, India, where it was discovered growing on a coconut tree stump.
Ecology
A saprobe or parasite, O. nidiformis is nonspecific in its needs and is compatible with a wide variety of hosts. It has been recorded on native Banksia (including B. attenuata and B. menziesii), Hakea, Acacia, Nuytsia floribunda and various Myrtaceae, including Agonis flexuosa and Melaleuca species, and especially Eucalyptus, as well as Nothofagus, Casuarina species and Allocasuarina fraseriana, and even introduced trees such as Pinus or Platanus species. It plays an important role in breaking down wood and recycling nutrients into the soil.
Omphalotus species cause a white rot by breaking down lignin in their tree hosts. The fungus infiltrates the heartwood of the tree via a breach in its bark, either by a branch falling, damage from insects or mistletoe, or by mechanical damage from logging. O. nidiformis has been implicated in the heartwood rot of several species of eucalypt around Australia, including marri (Corymbia calophylla) in southwest Western Australia, in spotted gum (C. maculata) and messmate (Eucalyptus obliqua) in New South Wales, and in blackbutt (E. pilularis), Sydney blue gum (E. saligna), red stringybark (E. macrorhyncha) and Forth River peppermint (E. radiata) in Victoria.
The US Department of Agriculture considers there is a moderate to high risk of O. nidiformis being accidentally introduced to the United States in untreated eucalyptus woodchips from Australia. Nearly a century ago, Cleland and Edwin Cheel suggested that even though the fungus was of "no great economic importance", "it would be advisable to destroy it by burning wherever found."
Several species of Tapeigaster flies have been collected from the fruit bodies, including T. cinctipes, T. annulipes, and T. nigricornis; the latter species uses the fruit bodies as a host to rear its young. Fruit bodies in Springbrook National Park have been observed to attract nocturnal insects such as beetles, native cockroaches and crickets (white-kneed cricket (Papuastus spp.) and thorny cricket), as well as giant rainforest snails (Hedleyella falconeri) and red triangle slugs (Triboniophorus graeffei), which voraciously consume the fungus.
Biochemistry
Omphalotus nidiformis is not edible. Although reputedly mild tasting, eating it will result in vomiting which generally occurs 30 minutes to two hours after consumption and lasts for several hours. There is no diarrhea and patients recover without lasting ill-effects. Its toxicity was first mentioned by Anthony M. Young in his 1982 guidebook Common Australian Fungi. The toxic ingredient of many species of Omphalotus is a sesquiterpene compound known as illudin S. This, along with illudin M and a co-metabolite illudosin, have been identified in O. nidiformis. The two illudins are common to the genus Omphalotus and not found in any other basidiomycete mushroom. An additional three compounds unique to O. nidiformis have been identified and named illudins F, G and H.
Irofulven, a compound derived from illudin S, is undergoing phase II clinical trials as a possible therapy for various types of cancer. Fruit body extracts have antioxidant and free radical scavenging properties, which may be attributed to the presence of phenolic compounds.
See also
List of bioluminescent fungi
References
External links
Bioluminescent fungi
Fungi described in 1844
Fungi native to Australia
Fungi of Asia
nidiformis
Poisonous fungi
Taxa named by Miles Joseph Berkeley
Fungus species | Omphalotus nidiformis | [
"Biology",
"Environmental_science"
] | 2,933 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
10,992,237 | https://en.wikipedia.org/wiki/G418 | G418 (geneticin) is an aminoglycoside antibiotic similar in structure to gentamicin B1. It is produced by Micromonospora rhodorangea. G418 blocks polypeptide synthesis by inhibiting the elongation step in both prokaryotic and eukaryotic cells. Resistance to G418 is conferred by the neo gene from Tn5 encoding an aminoglycoside 3'-phosphotransferase, APT 3' II. G418 is an analog of neomycin sulfate and has a similar mechanism of action to neomycin. G418 is commonly used in laboratory research to select genetically engineered cells. In general, for bacteria and algae, concentrations of 5 μg/mL or less are used; for mammalian cells, concentrations of approximately 400 μg/mL are used for selection and 200 μg/mL for maintenance. However, the optimal concentration for selecting resistant clones in mammalian cells depends on the cell line used as well as on the plasmid carrying the resistance gene. Therefore, an antibiotic titration should be done to find the best condition for each experimental system, using concentrations ranging from 100 μg/mL up to 1400 μg/mL. Selection of resistant clones can take from one to three weeks.
G418 impurity profile
G418 is produced by fermentation and isolation processes, and the G418-producing strain Micromonospora rhodorangea produces many other gentamicins while producing G418. The common impurities of G418 include gentamicins A, C1, C1a, C2, C2a and X2. The quality of G418 is not based on potency alone, but more on the selectivity defined by the killing curve of sensitive cells versus resistant cells. A good G418 product has the lowest LD50 for sensitive cells (such as NIH 3T3) and the highest LD50 (which can be up to 5,000 μg/mL) for resistant cells (NIH 3T3 transfected with resistance genes). Gentamicins have almost no selectivity, except gentamicin X2.
Use in cell biology
G418 is routinely used as a selective agent in cell culture. Researchers can link the neoR selective resistance gene to their vector; if the vector is successfully introduced, the cells become G418-resistant. After treatment with G418, vector(−) cells die while vector(+) cells survive. This method allows researchers to select for vector(+) cells.
Mechanism of action
G418 disulfate and other aminoglycosides prevent protein synthesis at the early stages of elongation, shortly after the initiation of translation. Resistance to G418 disulfate is conferred by the neomycin resistance gene (neo) from either the Tn5 or Tn601 (903) transposon. Cells transfected with resistance plasmids containing the neo gene can express aminoglycoside 3'-phosphotransferase (APT 3' I or APT 3' II), which covalently modifies G418 to 3'-phosphorylated G418; this product has negligible potency and low affinity for prokaryotic and eukaryotic ribosomes.
References
Aminoglycoside antibiotics
Cell culture reagents
Eukaryotic selection compounds | G418 | [
"Chemistry",
"Biology"
] | 734 | [
"Reagents for biochemistry",
"Cell culture reagents"
] |
10,993,126 | https://en.wikipedia.org/wiki/Baldwin%E2%80%93Lomax%20model | The Baldwin–Lomax model is a 0-equation turbulence model used in computational fluid dynamics analysis of turbulent boundary layer flows.
External links
Baldwin-Lomax model at cfd-online.com
Fluid dynamics
Mathematical modeling | Baldwin–Lomax model | [
"Chemistry",
"Mathematics",
"Engineering"
] | 45 | [
"Mathematical modeling",
"Applied mathematics",
"Chemical engineering",
"Applied mathematics stubs",
"Piping",
"Fluid dynamics"
] |
10,993,199 | https://en.wikipedia.org/wiki/Cebeci%E2%80%93Smith%20model | The Cebeci–Smith model, developed by Tuncer Cebeci and Apollo M. O. Smith in 1967, is a 0-equation eddy viscosity model used in computational fluid dynamics analysis of turbulence in boundary layer flows. The model gives eddy viscosity, , as a function of the local boundary layer velocity profile. The model is suitable for high-speed flows with thin attached boundary layers, typically present in aerospace applications. Like the Baldwin-Lomax model, it is not suitable for large regions of flow separation and significant curvature or rotation. Unlike the Baldwin-Lomax model, this model requires the determination of a boundary layer edge.
Equations
In a two-layer model, the boundary layer is considered to comprise two layers: inner (close to the surface) and outer. The eddy viscosity is calculated separately for each layer and combined using:

\mu_t = \begin{cases} {\mu_t}_\text{inner} & \text{if } y \le y_\text{crossover} \\ {\mu_t}_\text{outer} & \text{if } y > y_\text{crossover} \end{cases}

where y_\text{crossover} is the smallest distance from the surface where {\mu_t}_\text{inner} is equal to {\mu_t}_\text{outer}.

The inner-region eddy viscosity is given by:

{\mu_t}_\text{inner} = \rho \ell^2 \left[ \left( \frac{\partial u}{\partial y} \right)^2 + \left( \frac{\partial v}{\partial x} \right)^2 \right]^{1/2}

where

\ell = \kappa y \left( 1 - e^{-y^+ / A^+} \right)

with the von Karman constant \kappa usually being taken as 0.4, and with

A^+ = 26 \left[ 1 + y \, \frac{dP/dx}{\rho u_\tau^2} \right]^{-1/2}

The eddy viscosity in the outer region is given by:

{\mu_t}_\text{outer} = \alpha \rho u_e \delta_v^* F_K

where \alpha = 0.0168, \delta_v^* is the displacement thickness, given by

\delta_v^* = \int_0^\delta \left( 1 - \frac{u}{u_e} \right) dy

and F_K is the Klebanoff intermittency function given by

F_K = \left[ 1 + 5.5 \left( \frac{y}{\delta} \right)^6 \right]^{-1}
References
Smith, A.M.O. and Cebeci, T., 1967. Numerical solution of the turbulent boundary layer equations. Douglas aircraft division report DAC 33735
Cebeci, T. and Smith, A.M.O., 1974. Analysis of turbulent boundary layers. Academic Press,
Wilcox, D.C., 1998. Turbulence Modeling for CFD. , 2nd Ed., DCW Industries, Inc.
External links
This article was based on the Cebeci Smith model article in CFD-Wiki
Turbulence models
Fluid dynamics
Mathematical modeling | Cebeci–Smith model | [
"Chemistry",
"Mathematics",
"Engineering"
] | 371 | [
"Mathematical modeling",
"Applied mathematics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
10,993,317 | https://en.wikipedia.org/wiki/Lutetium%28III%29%20oxide%20%28data%20page%29 | This page provides supplementary chemical data on Lutetium(III) oxide
Thermodynamic properties
Spectral data
Structure and properties data
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
References
A.F. Trotman-Dickenson, (ed.) in Comprehensive Inorganic Chemistry, Pergamon, Oxford, UK, 1973.
Chemical data pages
Chemical data pages cleanup | Lutetium(III) oxide (data page) | [
"Chemistry"
] | 116 | [
"Chemical data pages",
"nan"
] |
10,993,385 | https://en.wikipedia.org/wiki/Leukotriene%20D4 |
Leukotriene D4 (LTD4) is one of the leukotrienes. Its main function in the body is to induce the contraction of smooth muscle, resulting in bronchoconstriction and vasoconstriction. It also increases vascular permeability. LTD4 is released by basophils. Other leukotrienes that function in a similar manner are leukotrienes C4 and E4. Pharmacological agents that inhibit the function of these leukotrienes are leukotriene receptor antagonists (e.g., zafirlukast, montelukast) and are useful for asthmatic individuals.
References
Eicosanoids | Leukotriene D4 | [
"Chemistry",
"Biology"
] | 166 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
10,993,498 | https://en.wikipedia.org/wiki/Select%20%28Unix%29 | select is a system call and application programming interface (API) in Unix-like and POSIX-compliant operating systems for examining the status of file descriptors of open input/output channels. The select system call is similar to the poll facility introduced in UNIX System V and later operating systems. However, with the c10k problem, both select and poll have been superseded by the likes of kqueue, epoll, /dev/poll and I/O completion ports.
One common use of select outside of its stated use of waiting on filehandles is to implement a portable sub-second sleep. This can be achieved by passing NULL for all three fd_set arguments, and the duration of the desired sleep as the timeout argument.
In the C programming language, the select system call is declared in the header file sys/select.h or unistd.h, and has the following syntax:
int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout);
fd_set type arguments may be manipulated with four utility macros: FD_ZERO, FD_SET, FD_CLR and FD_ISSET.
select returns the total number of ready descriptors contained in readfds, writefds and errorfds, zero if the timeout expired, or -1 on error.
The sets of file descriptors used in select are finite in size, depending on the operating system. The newer poll system call provides a more flexible solution.
Example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <err.h>
#include <errno.h>
#define PORT "9421"
/* function prototypes */
void die(const char*);
int main(int argc, char **argv)
{
int sockfd = -1, new, maxfd, on = 1, nready, i;
struct addrinfo *res0, *res, hints;
char buffer[BUFSIZ];
fd_set master, readfds;
int error;
ssize_t nbytes;
(void)memset(&hints, '\0', sizeof(struct addrinfo));
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = IPPROTO_TCP;
hints.ai_flags = AI_PASSIVE;
if (0 != (error = getaddrinfo(NULL, PORT, &hints, &res0)))
errx(EXIT_FAILURE, "%s", gai_strerror(error));
for (res = res0; res; res = res->ai_next)
{
if (-1 == (sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol)))
{
perror("socket()");
continue;
}
if (-1 == (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, (char*)&on, sizeof(int))))
{
perror("setsockopt()");
continue;
}
if (-1 == (bind(sockfd, res->ai_addr, res->ai_addrlen)))
{
perror("bind()");
continue;
}
break;
}
if (-1 == sockfd)
exit(EXIT_FAILURE);
freeaddrinfo(res0);
if (-1 == (listen(sockfd, 32)))
die("listen()");
if (-1 == (fcntl(sockfd, F_SETFL, O_NONBLOCK))) /* F_SETFL sets the file status flags */
die("fcntl()");
FD_ZERO(&master);
FD_ZERO(&readfds);
FD_SET(sockfd, &master);
maxfd = sockfd;
while (1)
{
memcpy(&readfds, &master, sizeof(master));
(void)printf("running select()\n");
if (-1 == (nready = select(maxfd+1, &readfds, NULL, NULL, NULL)))
die("select()");
(void)printf("Number of ready descriptor: %d\n", nready);
for (i=0; i<=maxfd && nready>0; i++)
{
if (FD_ISSET(i, &readfds))
{
nready--;
if (i == sockfd)
{
(void)printf("Trying to accept() new connection(s)\n");
if (-1 == (new = accept(sockfd, NULL, NULL)))
{
if (EWOULDBLOCK != errno)
die("accept()");
break;
}
else
{
if (-1 == (fcntl(new, F_SETFL, O_NONBLOCK)))
die("fcntl()");
FD_SET(new, &master);
if (maxfd < new)
maxfd = new;
}
}
else
{
(void)printf("recv() data from one of descriptors(s)\n");
nbytes = recv(i, buffer, sizeof(buffer) - 1, 0); /* leave room for '\0' */
if (nbytes <= 0)
{
if (EWOULDBLOCK != errno)
die("recv()");
break;
}
buffer[nbytes] = '\0';
printf("%s", buffer);
(void)printf("%zi bytes received.\n", nbytes);
close(i);
FD_CLR(i, &master);
}
}
}
}
return 0;
}
void die(const char *msg)
{
perror(msg);
exit(EXIT_FAILURE);
}
See also
Berkeley sockets
Polling
epoll
kqueue
Input/output completion port (IOCP)
References
External links
C POSIX library
Events (computing)
System calls
Articles with example C code | Select (Unix) | [
"Technology"
] | 1,464 | [
"Information systems",
"Events (computing)"
] |
10,993,822 | https://en.wikipedia.org/wiki/Plumbane | Plumbane is an inorganic chemical compound with the chemical formula PbH4. It is a colorless gas. It is a metal hydride and group 14 hydride composed of lead and hydrogen. Plumbane is not well characterized or well known, and it is thermodynamically unstable with respect to the loss of a hydrogen atom. Derivatives of plumbane include lead tetrafluoride, PbF4, and tetraethyllead, (CH3CH2)4Pb.
History
Until recently, it was uncertain whether plumbane had ever actually been synthesized, although the first reports date back to the 1920s and, in 1963, Saalfeld and Svec reported the observation of PbH4+ by mass spectrometry. Plumbane has repeatedly been the subject of Dirac–Hartree–Fock relativistic calculation studies, which investigate the stabilities, geometries, and relative energies of hydrides of the formula MH4 or MH2.
Properties
Plumbane is an unstable colorless gas and is the heaviest group IV hydride; it has a tetrahedral (Td) structure with an equilibrium distance between lead and hydrogen of 1.73 Å. By weight, plumbane is 1.91% hydrogen and 98.09% lead. In plumbane, the formal oxidation states of hydrogen and lead are +1 and -4, respectively, because the electronegativity of lead(IV) is higher than that of hydrogen. The stability of hydrides MH4 (M = C–Pb) decreases as the atomic number of M increases.
Preparation
Early studies of PbH4 revealed that the molecule is unstable as compared to its lighter congeners silane, germane, and stannane. It cannot be made by the methods used to synthesize GeH4 or SnH4.
In 1999, plumbane was synthesized from lead(II) nitrate, Pb(NO3)2, and sodium borohydride, NaBH4. A non-nascent mechanism for plumbane synthesis was reported in 2005.
In 2003, Wang and Andrews carefully studied the preparation of PbH4 by laser ablation and additionally identified its infrared (IR) bands.
Congeners
Congeners of plumbane include:
Methane, CH4
Silane, SiH4
Germane, GeH4
Stannane, SnH4
References
Metal hydrides
Lead compounds | Plumbane | [
"Chemistry"
] | 484 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
5,608,037 | https://en.wikipedia.org/wiki/Sandwich-structured%20composite | In materials science, a sandwich-structured composite is a special class of composite materials that is fabricated by attaching two thin-but-stiff skins to a lightweight-but-thick core. The core material is normally of low strength, but its greater thickness provides the sandwich composite with high bending stiffness at overall low density.
Open- and closed-cell-structured foams like polyethersulfone, polyvinyl chloride, polyurethane, polyethylene or polystyrene foams, balsa wood, syntactic foams, and honeycombs are commonly used core materials. Sometimes, the honeycomb structure is filled with other foams for added strength. Open- and closed-cell metal foam can also be used as core materials.
Laminates of glass or carbon fiber-reinforced thermoplastics or mainly thermoset polymers (unsaturated polyesters, epoxies...) are widely used as skin materials. Sheet metal is also used as a skin material in some cases.
The core is bonded to the skins with an adhesive, or brazed together in the case of metal components.
History
A summary of the important developments in sandwich structures is given below.
230 BC Archimedes describes the laws of levers and a way to calculate density.
25 BC Vitruvius reports about the efficient use of materials in Roman truss roof structures.
1493 Leonardo da Vinci discovers the neutral axis and load deflection relation in three-point bending.
1570 Palladio presents truss-beam constructions with diagonal beams to prevent shear deformations.
1638 Galileo Galilei describes the efficiency of tubes versus solid rods.
1652 Wendelin Schildknecht reports about sandwich beam structures with curved wooden beam reinforcements.
1726 Jacob Leupold documents tubular bridges with compression loaded roofs.
1786 Victor Louis uses iron sandwich beams in the galleries of the Palais-Royal in Paris.
1802 Jean-Baptiste Rondelet analyses and documents the sandwich effect in a beam with spacers.
1820 Alphonse Duleau discovers and publishes the moment of inertia for sandwich constructions.
1830 Robert Stephenson builds the Planet locomotive using a sandwich beam frame made of wood plated with iron.
1914 R. Höfler and S. Renyi patent the first use of honeycomb structures for structural applications.
1915 Hugo Junkers patents the first honeycomb cores for aircraft application.
1934 Edward G. Budd patents a welded steel honeycomb sandwich panel made from corrugated metal sheets.
1937 Claude Dornier patents a honeycomb sandwich panel with skins pressed in a plastic state into the core cell walls.
1938 Norman de Bruyne patents the structural adhesive bonding of honeycomb sandwich structures.
1940 The de Havilland Mosquito was built with sandwich composites; a balsawood core with plywood skins.
Types of sandwich structures
Metal composite material (MCM) is a type of sandwich formed from two thin skins of metal bonded to a plastic core in a continuous process under controlled pressure, heat, and tension.
Recycled paper is also now being used over a closed-cell recycled kraft honeycomb core, creating a lightweight, strong, and fully repulpable composite board. This material is being used for applications including point-of-purchase displays, bulkheads, recyclable office furniture, exhibition stands, wall dividers and terrace boards.
To fix different panels, among other solutions, a transition zone is normally used, which is a gradual reduction of the core height, until the two fiber skins are in touch. In this place, the fixation can be made by means of bolts, rivets, or adhesive.
With respect to the core type and the way the core supports the skins, sandwich structures can be divided into the following groups: homogeneously supported, locally supported, regionally supported, unidirectionally supported, bidirectionally supported. The latter group is represented by honeycomb structure which, due to an optimal performance-to-weight ratio, is typically used in most demanding applications including aerospace.
Properties of sandwich structures
The strength of the composite material is dependent largely on two factors:
The outer skins: If the sandwich is supported on both sides, and then stressed by means of a downward force in the middle of the beam, then the bending moment will introduce shear forces in the material. The shear forces result in the bottom skin in tension and the top skin in compression. The core material spaces these two skins apart. The thicker the core material the stronger the composite. This principle works in much the same way as an I-beam does.
The interface between the core and the skin: Because the shear stresses in the composite material change rapidly between the core and the skin, the adhesive layer also sees some degree of shear force. If the adhesive bond between the two layers is too weak, the most probable result will be delamination. The failure of the interface between the skin and core is critical and the most common damage mode. The propensity of this damage to propagate through the interface or dive either into the skin or core is governed by the shear component.
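The "thicker core, stronger composite" claim above can be made quantitative. As a sketch using classical sandwich beam theory (the formula and symbols below follow standard treatments of symmetric sandwich beams and are not taken from this article), the flexural rigidity per unit width of a sandwich with identical faces is

```latex
D = \frac{E_f t^3}{6} + \frac{E_f t d^2}{2} + \frac{E_c c^3}{12}
```

where E_f and E_c are the face and core Young's moduli, t is the thickness of each face, c is the core thickness, and d = c + t is the distance between the face centroids. For thin, stiff faces on a thick, compliant core the middle term dominates, so doubling the core thickness roughly quadruples the bending stiffness while adding little weight — the sandwich effect described above.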
Application of sandwich structures
Sandwich structures are widely used in sandwich panels of different types, such as FRP sandwich panels and aluminium composite panels. An FRP polyester-reinforced composite honeycomb panel (sandwich panel) is made of polyester-reinforced plastic, multi-axial high-strength glass fibre and a PP honeycomb core, formed in a special anti-skid tread-pattern mould through a process of constant-temperature vacuum adsorption, bonding and solidification.
Theory
Sandwich theory describes the behaviour of a beam, plate, or shell which consists of three layers: two face sheets and one core. The most commonly used sandwich theory is linear and is an extension of first-order beam theory. Linear sandwich theory is of importance for the design and analysis of sandwich plates or sandwich panels, which are used in building construction, vehicle construction, airplane construction and refrigeration engineering.
See also
Sandwich panel
Sandwich plate system
Composite honeycomb
Honeycomb Structures
Sandwich theory
Flitch beam
Bending
Beam theory
Composite material
Hill yield criteria
Timoshenko beam theory
Plate theory
References
External links
SandwichPanels.org – Composite sandwich structure information
Diab Sandwich Handbook
Honeycomb Sandwich Design Technology
Engineered timber sandwich core materials – Composite sandwich structure information
-Application of aluminium honeycomb sandwich panel as an energy absorber of high-speed train nose
Composite materials
Aerospace materials | Sandwich-structured composite | [
"Physics",
"Engineering"
] | 1,306 | [
"Aerospace materials",
"Composite materials",
"Materials",
"Aerospace engineering",
"Matter"
] |
5,608,629 | https://en.wikipedia.org/wiki/Open%20book%20decomposition | In mathematics, an open book decomposition (or simply an open book) is a decomposition of a closed oriented 3-manifold M into a union of surfaces (necessarily with boundary) and solid tori. Open books have relevance to contact geometry, with a famous theorem of Emmanuel Giroux (given below) that shows that contact geometry can be studied from an entirely topological viewpoint.
Definition and construction
Definition. An open book decomposition of a 3-dimensional manifold M is a pair (B, π) where
B is an oriented link in M, called the binding of the open book;
π: M \ B → S1 is a fibration of the complement of B such that for each θ ∈ S1, π−1(θ) is the interior of a compact surface Σ ⊂ M whose boundary is B. The surface Σ is called the page of the open book.
This is the special case m = 3 of an open book decomposition of an m-dimensional manifold, for any m.
The definition for general m is similar, except that the surface with boundary (Σ, B) is replaced by an (m − 1)-manifold with boundary (P, ∂P). Equivalently, the open book decomposition can be thought of as a homeomorphism of M to the quotient space

(P × [0, 1]) / ∼,

where ∼ identifies (x, 1) with (f(x), 0) for every x ∈ P and identifies (x, t) with (x, t′) for every x ∈ ∂P, and where f: P → P is a self-homeomorphism preserving the boundary. This quotient space is called a relative mapping torus.
When Σ is an oriented compact surface with n boundary components and φ: Σ → Σ is a homeomorphism which is the identity near the boundary, we can construct an open book by first forming the mapping torus Σφ. Since φ is the identity on ∂Σ, ∂Σφ is the trivial circle bundle over a union of circles, that is, a union of tori; one torus for each boundary component. To complete the construction, solid tori are glued to fill in the boundary tori so that each circle S1 × {p} ⊂ S1×∂D2 is identified with the boundary of a page. In this case, the binding is the collection of n cores S1×{q} of the n solid tori glued into the mapping torus, for arbitrarily chosen q ∈ D2. It is known that any open book can be constructed this way. As the only information used in the construction is the surface and the homeomorphism, an alternate definition of open book is simply the pair (Σ, φ) with the construction understood. In short, an open book is a mapping torus with solid tori glued in so that the core circle of each torus runs parallel to the boundary of the fiber.
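The gluing in this construction can be written down explicitly. As a sketch (standard notation for mapping tori, not verbatim from this article), the mapping torus is the quotient

```latex
\Sigma_\varphi \;=\; \bigl(\Sigma \times [0,1]\bigr) \,\big/\, \bigl((x,\,1) \sim (\varphi(x),\,0)\bigr)
```

and, because φ is the identity near ∂Σ, each boundary circle C ⊂ ∂Σ sweeps out a torus C × S1 in ∂Σφ, which is exactly the boundary torus filled in by a solid torus in the construction above.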
Each torus in ∂Σφ is fibered by circles parallel to the binding, each circle a boundary component of a page. One envisions a rolodex-looking structure for a neighborhood of the binding (that is, the solid torus glued to ∂Σφ)—the pages of the rolodex connect to pages of the open book and the center of the rolodex is the binding. Thus the term open book.
It is a 1972 theorem of Elmar Winkelnkemper that for m > 6, a simply-connected m-dimensional manifold has an open book decomposition if and only if it has signature 0. In 1977 Terry Lawson proved that for odd m > 6, every m-dimensional manifold has an open book decomposition, a result extended to 5-manifolds and manifolds with boundary by Frank Quinn in 1979. Quinn also showed that for even m > 6, an m-dimensional manifold has an open book decomposition if and only if an asymmetric Witt group obstruction is 0.
Giroux correspondence
In 2002, Emmanuel Giroux published the following result:
Theorem. Let M be a compact oriented 3-manifold. Then there is a bijection between the set of oriented contact structures on M up to isotopy and the set of open book decompositions of M up to positive stabilization.
Positive stabilization consists of modifying the page by adding a 2-dimensional 1-handle and modifying the monodromy by adding a positive Dehn twist along a curve that runs over that handle exactly once. Implicit in this theorem is that the new open book defines the same contact 3-manifold. Giroux's result has led to some breakthroughs in what is becoming more commonly called contact topology, such as the classification of contact structures on certain classes of 3-manifolds. Roughly speaking, a contact structure corresponds to an open book if, away from the binding, the contact distribution is isotopic to the tangent spaces of the pages through confoliations. One imagines smoothing the contact planes (preserving the contact condition almost everywhere) to lie tangent to the pages.
References
Etnyre, John B. Lectures on open book decompositions and contact structures, ArXiv
Ranicki, Andrew, High-dimensional knot theory, Springer (1998)
Ranicki, Andrew, Mapping torus of an automorphism of a manifold, Springer Online Encyclopedia of Mathematics
Topology
3-manifolds
Structures on manifolds
Contact geometry | Open book decomposition | [
"Physics",
"Mathematics"
] | 1,042 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
5,608,648 | https://en.wikipedia.org/wiki/Journal%20of%20Sex%20%26%20Marital%20Therapy | The Journal of Sex & Marital Therapy is a peer-reviewed scientific journal published by Routledge and formerly by Brunner/Mazel. Its editor-in-chief is R. Taylor Segraves.
Scope
The Journal of Sex & Marital Therapy covers:
Sexual dysfunctions—ranging from dyspareunia to autogynephilia to pedophilia
Therapeutic techniques—including psychopharmacology and sexual counseling for a wide range of dysfunctions
Clinical considerations—sexual dysfunction and its relationship to aging, unemployment, alcoholism, and more
Theoretical issues—such as the ethics of pornography in the AIDS era
Marital relationships—including psychological intimacy, and marital stability in women abused as children.
References
External links
Academic journals established in 1975
Sexology journals
Taylor & Francis academic journals
English-language journals
5 times per year journals | Journal of Sex & Marital Therapy | [
"Biology"
] | 164 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
5,608,835 | https://en.wikipedia.org/wiki/Ultra%20%28personal%20rapid%20transit%29 | Ultra (a term formed from the first letters of the words in the phrase "urban light transit") is a personal rapid transit podcar system developed by the British engineering company Ultra Global PRT (formerly Advanced Transport Systems).
The only publicly operating Ultra pod system opened at Heathrow Airport in London in May 2011 and is referred to as the Heathrow pod system. It consists of 21 vehicles operating on a route connecting Terminal 5 to its business passenger car park, just north of the airport.
To reduce construction costs, Ultra largely uses off-the-shelf technologies, such as rubber tyres running on an open guideway. The approach has resulted in a system that Ultra believes to be economical: the company reports that the total cost (vehicles, infrastructure, and control systems) is between £3 million and £5 million per kilometre (0.62 miles) of guideway. By contrast, the Heathrow deployment cost £30 million for of guideway.
Inception
The system was originally designed by Martin Lowson and his design team; Lowson had put £10 million into the project. He formed Advanced Transport Systems (ATS) in Cardiff to develop the system, and the site was later the location of its test track. Ultra has twice been awarded funding from the UK National Endowment for Science, Technology and the Arts (NESTA). Much of the original research on Ultra was done by the Aerospace Engineering department at the University of Bristol in the 1990s. Recently, the company renamed itself to "Ultra PRT Limited" because of its primary business, and it moved its corporate headquarters to Bristol.
Background
Past PRT designs
Personal rapid transit was originally developed in the 1950s as a response to the need to move commuters in areas with densities too low to pay for the construction of a conventional metro system. Using automated guidance allowed headways to be shortened, often to a few seconds or even fractions of a second. That increases the route capacity, allowing the vehicles to become much smaller but still carry the same passenger load in a given time. Smaller vehicles in turn would require simpler "tracks" and smaller stations, which lowered capital costs. Smaller towns and cities that could never hope to fund a conventional mass transit system could afford PRT, and the concept generated intense interest.
Numerous PRT systems were designed in the late 1960s and early 1970s, many as a result of the publication of the highly-influential HUD reports. In general, the systems intended to use small four-to-six-passenger vehicles, but most evolved to larger designs over time (see Alden staRRcar). As they did so, vehicles and tracks grew heavier, capital costs rose, and interest dropped. In the end, only one production PRT system was built, the Morgantown, W.Va PRT in 1975, a government-funded demonstration system to prove the concept. Originally derided as a white elephant, the Morgantown system has since proven itself both reliable and relatively low cost.
Ultra
In the time since the Morgantown system was installed in 1975, general technological improvements have led to a number of ways to lower the cost of a PRT system. One of the simplest but most profound way was the development of more efficient, reliable and quick-charging battery systems. Older PRT systems used electricity fed from track-side conductors like a conventional metro, but they can be eliminated in favour of batteries that quickly charge up at stations or small charging strips along the route. Another change is the moving of the guidance logic from centralised computers to on-board systems of dramatically improved performance, allowing the vehicles to steer and switch themselves between routes on their own. That eliminates the need for a track-mounted guiderail able to steer the vehicle (see, for instance, the Ford ACT). Together, the changes mean the vehicle no longer needs strong mechanical contact with the guideway, which can be dramatically reduced in complexity.
In the case of Ultra, the guideway can consist of as little as two parallel rows of concrete barriers, similar to the bumpers found in a car park. The vehicle uses them for fine guidance only; it is able to steer itself around curves by following the barriers passively. No "switching" is required on the track, as the vehicles can make their own turns between routes based on an internal map. Since the vehicles are battery-powered, there is no need for electrification along the track: the vehicles recharge when they are parked at the stations. As a result, the trackway is similar in complexity to a conventional road surface, a light-duty one as the vehicles will not vary in weight to the extent of a tractor-trailer. Even the stations are greatly simplified; in the case of ground-level tracks, the lack of any substantial infrastructure means that the vehicles can stop at any kerb. Stations at Heathrow resemble a car park with diagonal slots, with a rain shield similar to the awnings at a petrol station.
As part of the development of the first commercial system at Heathrow Airport, in 2005 the owner of the airport, BAA Airports Ltd, purchased 25% of the company. Following its successful launch, there are now plans to extend it to the rest of the airport and even to the nearest town of Staines-upon-Thames, which is home to many of the airport's staff.
Description
Vehicles
With a length of 3.7 m (12 ft 2 in), a width of 1.5 m (4 ft 11 in), a height of 1.8 m (5 ft 11 in) and a gross weight of 1,300 kg, the electric-powered vehicles have four seats, can carry a 500 kg (78.74 stone) payload and are designed to travel at 40 kilometres per hour (25 mph) on gradients of up to 20%, but the company has suggested limiting operating routes to 10% gradients to improve passenger comfort. The vehicles can accommodate wheelchairs, shopping trolleys and other luggage, in addition to the passengers.
Each pod is powered by four car batteries, giving an average 2 kW and adding 8% to the gross weight of the vehicle. Other specifications include a turning radius, an energy requirement of 0.55 MJ per passenger-kilometre, and running noise levels of 35 dBA at , as measured at a distance of .
The company has also developed designs for a freight version. It has the same external appearance as the passenger version, but its entire internal space is adapted to host a cargo capsule. They can be valuable in airport environments, where the network can be used to haul small freight.
Control technology
According to Ultra, its control system has three separate levels of operation, with the following features:
Central synchronous control
Immediately allocates the passenger a vehicle
Instructs the vehicle to follow a set path and timing to reach the destination
Ensures that there is no interaction between vehicles
Manages empty vehicles
Autonomous vehicle control
Receives instruction from central synchronous control
Navigates the pod to its destination by continuously using lasers to verify vehicle position and heading
Automatic vehicle protection system
Based on fixed block signalling systems like railways
Inductive loops set into the guideway interact with sensors on the vehicle
Each vehicle must be receiving a constant "proceed" signal to move
The signal is inhibited in an area directly behind each pod for automatically halting others that are approaching; that provides a failsafe system that is independent of other layers of control
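The inhibition zone described above can be sketched as a toy model (the two-block safety margin and all names here are illustrative assumptions, not Ultra's actual parameters):

```python
# Toy sketch of fixed-block automatic vehicle protection: each pod
# suppresses the "proceed" signal in the blocks directly behind it,
# so an approaching pod in that zone is halted automatically.

SAFETY_BLOCKS = 2  # assumed margin; not Ultra's real figure

def inhibited_blocks(pod_positions):
    """Blocks in which 'proceed' is suppressed: the zone behind each pod."""
    zone = set()
    for p in pod_positions:
        zone.update(range(p - SAFETY_BLOCKS, p))
    return zone

def may_proceed(follower_block, other_pod_positions):
    """A pod moves only while receiving 'proceed'; it is halted if it
    sits inside another pod's inhibition zone."""
    return follower_block not in inhibited_blocks(other_pod_positions)
```

Because a pod must *receive* the signal to move, loss of the signal (for any reason) halts the vehicle, which is what makes the layer failsafe and independent of the other two control levels.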
Test track
The test track in Cardiff was launched in January 2002. The $4 million funding for the test track came from various sources in the United Kingdom government. One electric vehicle was demonstrated running at speeds up to . Accurate stopping was demonstrated, and the vehicle ascended and descended a steep gradient. A single, rudimentary ground level station was shown.
Most of the test track guideway is at ground level. It is stated that in a commercial application, 90% or more of the guideway might have to be elevated. The elevated guideway is about wide. According to a study of a hypothetical city-based installation, consisting of of guideway (89% elevated), the total cost of track and associated civil engineering works is estimated to be £2.9 million per kilometre ($8.7 million/mi). Per-station costs were estimated to be £0.48 million ($0.89 million). Vehicle costs were not considered in the study.
Deployments
Heathrow Terminal 5
The first system began passenger trials at Heathrow Terminal 5, in October 2010, and it opened for full passenger service 22 hours a day, 7 days a week, in May 2011. Operational statistics in May 2012 demonstrate more than 99% reliability and an average passenger wait time over the year of 10 seconds. Ultra has achieved a number of awards from the London Transport Awards and the British Parking Awards.
It connects Heathrow Terminal 5 to its business passenger car park, just north of the airport, by a line built on behalf of Heathrow Airport Holdings, the airport's owner and operator. The system cost £30 million to develop.
Construction of the guideway was completed in October 2008. The line is largely elevated, but it includes a ground-level section, where the route passes under the approach to the airport's northern runway. The three stations, with two pod stations and one station within the car park at Terminal 5, were designed by Gebler Tooth Architects, along with the touchscreen interface for passengers to select their destination on their journey. Following various trials, including some that used airport staff as test passengers, the line opened to the public in May 2011 as a passenger trial. Subsequently, it was made fully operational, and the bus service between the business car park and Terminal 5 was discontinued. The pods use 50% less energy than a bus and run 22 hours a day. Unlike nearly all UK road and rail traffic, which drives on the left, the PRT system drives on the right. As of May 2013, the system had passed the 600,000th-passenger milestone.
The developers expected that users will wait an average of around 12 seconds, with 95% of passengers waiting for less than 1 minute for their private pod, which will travel at up to .
The 21 pods carry upwards of 1,000 travellers per day.
Chengdu Tianfu International Airport, China (awaiting commissioning)
In 2018, it was announced that a PRT system would be installed at the new Chengdu Tianfu International Airport in Chengdu. The system will include of guideway, 4 stations, 22 pods and will connect a remote parking area to the two terminal buildings. It is supplied by Ultra-MTS. The airport opened on 27 June 2021. There are no reports that the PRT has commenced operation.
Proposals
Jewar International Airport, India (proposed)
In March 2021, it was announced that a PRT system will be installed from the proposed film city in Noida to the upcoming Jewar International Airport in Jewar.
Ajman City, United Arab Emirates (proposed, signed contract)
In July 2017 Ultra-Fairwood (a joint venture) announced that it had signed a contract with the Government of Ajman for the construction of a system in Ajman City. The proposed network will include of track overall, including a total route length of , covering 115 stations. These will be served by a fleet of 1,745 vehicles, offering an expected system capacity of 1.64 million passenger trips per day. The system will comprise two overlapping networks. The first is a PRT system with six-seat vehicles running on elevated guideways with elevated stations. The second is a Group Rapid Transit (GRT) system with thirty-seat vehicles running mainly at grade with ground-level stations. The vehicles will be produced at a factory in India. The total value of the project is US$881 million, with the system cost, supplied by Ultra-Fairwood, worth US$723 million.
Gurugram, India (proposal)
In March 2010, the government of Haryana said that it was looking into a proposal to deploy Ultra for rapid commuter transport in the city of Gurugram. The city is looking at over 10 to 12 individual routes to cover a total distance of approximately .
In July 2012, it was reported that the Chief Minister of Haryana had ordered officials to "complete all the necessary formalities in the next three months and begin work on the project". In October 2016, Indian Transport Minister Nitin Gadkari said four competing technical proposals had been received, and the system was still subject to approval and financial bidding.
In January 2017, ULTra was one of three companies – along with SkyTran and Metrino – approved to build a test track to evaluate PRT technology for potential deployment in Gurugram and Bengaluru. The companies will need to fund the construction themselves. As of August 2017, Metrino has withdrawn from the competition and construction has not commenced, but the trial is still set to proceed.
Heathrow New PRT (deferred proposal)
In May 2013, Heathrow Airport Limited announced, as part of its draft five-year (2014–2019) master plan that it intended to use the PRT system to connect Terminal 2 and Terminal 3 to their respective business car parks. The proposal was not included in the final plan because of spending priority being given to other capital projects, and has been deferred.
There were also plans to extend the PRT throughout the airport, and to nearby hotels by using 400 pods.
Amritsar, India (failed proposal)
In December 2011, Ultra-Fairwood (a joint venture) announced a plan to build an elevated guideway in a Y-shaped network in Amritsar, India, serving seven stations, with over 200 pods. The network would connect the railway station, the bus station and the Golden Temple. Initial projections were for up to 100,000 passengers per day from 4:00 a.m. to midnight that would carry 35% of the visitors to the Golden Temple. The system was projected to be completed by 2014 with private financing on a 'Build, Own, Operate, Transfer' (BOOT) basis.
The unsolicited bid was announced by the local government as set to proceed, and a foundation stone was laid. The proposed route received objections from some businesses, particularly in the Hall Bazaar and the route was then changed, with the Katra Jaimal Singh area dropped from the line, between the railway station and the temple.
In March 2013, the government of Punjab announced that it would open the project to competitive tendering with the Swiss challenge method. Ultra-Fairwood was one of three suppliers that was expected to be bidding. Reports indicate the government is due to finalise the bid by the end of June 2013.
In June 2014, it was scrapped to be replaced by a cheaper rapid bus transit system.
References
Sources
Isaiah Litvak and Christopher Maule, "The Light-Rapid Comfortable (LRC) Train and the Intermediate Capacity Transit System (ICTS): Two Case Studies of Innovation in the Urban Transportation Equipment Manufacturing Industry", University of Toronto/York University Joint Program in Transportation, 1982
External links
"Cardiff County Council Environmental Scrutiny Committee Meeting held 25 June 2002"
Robert Llewelyn explains the system and talks to the main people in an episode of the web series 'Fully charged'
Companies based in Bristol
Economy of Cardiff
Personal rapid transit
Proposed public transport in the United Kingdom
Airport people mover systems
Airport people mover systems in the United Kingdom
Self-driving cars
2011 establishments in England | Ultra (personal rapid transit) | [
"Engineering"
] | 3,123 | [
"Automotive engineering",
"Self-driving cars"
] |
5,609,205 | https://en.wikipedia.org/wiki/Postback | In web development, a postback is the exchange of information between servers to report a user's action on a website, network, or app.
Technically speaking, a postback is an HTTP POST to the same page that the form is on. In other words, the contents of the form are POSTed back to the same URL as the form.
Postbacks are commonly seen in edit forms, where the user introduces information in a form and hits "save" or "submit", causing a postback. The server then refreshes the same page using the information it has just received.
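The edit-form pattern can be illustrated with a minimal, framework-free sketch (the URL, field name and function names below are hypothetical, not part of any real framework): one handler serves both the initial GET and the postback POST, and the form's action is the page's own URL.

```python
# Minimal sketch of the postback pattern: the form POSTs back to the
# same URL it was served from, and the server re-renders the same page
# with the newly saved value.

FORM_URL = "/edit"  # hypothetical URL; the form posts back to itself

def render_page(saved_value=""):
    """Return the edit page; the form's action is the page's own URL."""
    return (
        f'<form method="POST" action="{FORM_URL}">'
        f'<input name="title" value="{saved_value}">'
        '<input type="submit" value="Save">'
        "</form>"
    )

def handle_request(method, url, form_data=None):
    """One handler for both the initial GET and the postback POST."""
    if method == "POST" and url == FORM_URL:
        # Postback: the form contents were POSTed back to the same URL;
        # process them, then refresh the same page with the new state.
        saved = (form_data or {}).get("title", "")
        return render_page(saved)
    return render_page()  # initial GET: serve an empty form
```

After the POST, the user sees the same page again, now populated with the value just submitted — the "save and refresh" behaviour described above.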
Postbacks are most commonly discussed in relation to JSF and ASP or ASP.NET.
In ASP, a form and its POST action have to be created as two separate pages, resulting in the need for an intermediate page and a redirect if one simply wants to perform a postback. This problem was addressed in ASP.NET with the __doPostBack() function and an application model that allows a page to perform validation and processing on its own form data.
In JSF, postbacks trigger the full JSF life-cycle, which just like ASP.NET performs conversion and validation of the form data that was included in the postback. Various utility methods are present in the JSF API to programmatically check if a given request is a postback or not.
Postback types
Postback for Affiliate Networks
Postback for Traffic Flows
Postback Macros
References
See also
Ajax (programming)
ASP.NET
JavaServer Faces
Web design | Postback | [
"Engineering"
] | 320 | [
"Design",
"Web design"
] |
5,609,640 | https://en.wikipedia.org/wiki/Grammatical%20evolution | Grammatical evolution (GE) is a genetic programming (GP) technique (or approach) from evolutionary computation pioneered by Conor Ryan, JJ Collins and Michael O'Neill in 1998 at the BDS Group in the University of Limerick.
As in any other GP approach, the objective is to find an executable program, program fragment, or function, which will achieve a good fitness value for a given objective function. In most published work on GP, a LISP-style tree-structured expression is directly manipulated, whereas GE applies genetic operators to an integer string, subsequently mapped to a program (or similar) through the use of a grammar, which is typically expressed in Backus–Naur form. One of the benefits of GE is that this mapping simplifies the application of search to different programming languages and other structures.
Problem addressed
In type-free, conventional Koza-style GP, the function set must meet the requirement of closure: all functions must be capable of accepting as their arguments the output of all other functions in the function set. Usually, this is implemented by dealing with a single data-type such as double-precision floating point. While modern Genetic Programming frameworks support typing, such type-systems have limitations that Grammatical Evolution does not suffer from.
GE's solution
GE offers a solution to the single-type limitation by evolving solutions according to a user-specified grammar (usually a grammar in Backus-Naur form). Therefore the search space can be restricted, and domain knowledge of the problem can be incorporated. The inspiration for this approach comes from a desire to separate the "genotype" from the "phenotype": in GP, the objects the search algorithm operates on and what the fitness evaluation function interprets are one and the same. In contrast, GE's "genotypes" are ordered lists of integers which code for selecting rules from the provided context-free grammar. The phenotype, however, is the same as in Koza-style GP: a tree-like structure that is evaluated recursively. This model is more in line with how genetics work in nature, where there is a separation between an organism's genotype and the final expression of phenotype in proteins, etc.
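The mapping from a list of integers to a program can be sketched in a few lines of Python. The grammar, codon values, and wrapping limit below are illustrative assumptions drawn from standard GE practice (each codon picks a production via the mod rule), not details taken from this article:

```python
# Sketch of GE's genotype-to-phenotype mapping. Each integer codon
# selects a production for the leftmost non-terminal via the mod rule:
# chosen production = codon % (number of productions for that symbol).

GRAMMAR = {  # toy BNF-style grammar (an assumption for illustration)
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["y"]],
}

def map_genotype(codons, start="<expr>", max_wraps=2):
    """Expand the leftmost non-terminal until only terminals remain,
    wrapping around the codon list at most max_wraps times."""
    symbols = [start]
    steps, limit = 0, len(codons) * (max_wraps + 1)
    while steps < limit:
        # Find the leftmost non-terminal still to be expanded.
        nt = next((j for j, s in enumerate(symbols) if s in GRAMMAR), None)
        if nt is None:
            return "".join(symbols)          # fully terminal: done
        productions = GRAMMAR[symbols[nt]]
        choice = codons[steps % len(codons)] % len(productions)  # mod rule
        symbols[nt:nt + 1] = productions[choice]
        steps += 1
    return None  # expansion never terminated: individual is invalid
```

Note that the search algorithm only ever sees the integer list; the grammar confines whatever strings the mapping produces to syntactically valid programs, which is how GE restricts the search space.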
Separating genotype and phenotype allows a modular approach. In particular, the search portion of the GE paradigm needn't be carried out by any one particular algorithm or method. Observe that the objects GE performs search on are the same as those used in genetic algorithms. This means, in principle, that any existing genetic algorithm package, such as the popular GAlib, can be used to carry out the search, and a developer implementing a GE system need only worry about carrying out the mapping from list of integers to program tree. It is also in principle possible to perform the search using some other method, such as particle swarm optimization (see the remark below); the modular nature of GE creates many opportunities for hybrids as the problem of interest to be solved dictates.
Brabazon and O'Neill have successfully applied GE to predicting corporate bankruptcy, forecasting stock indices, bond credit ratings, and other financial applications. GE has also been used with a classic predator-prey model to explore the impact of parameters such as predator efficiency, niche number, and random mutations on ecological stability.
It is possible to structure a GE grammar that for a given function/terminal set is equivalent to genetic programming.
Criticism
Despite its successes, GE has been the subject of some criticism. One issue is that as a result of its mapping operation, GE's genetic operators do not achieve high locality which is a highly regarded property of genetic operators in evolutionary algorithms.
Variants
Although GE was originally described in terms of an evolutionary algorithm, specifically a genetic algorithm, other variants exist. For example, GE researchers have experimented with using particle swarm optimization to carry out the search instead of a genetic algorithm, an approach referred to as a "grammatical swarm". Using only the basic PSO model, the results were comparable to those of normal GE, suggesting that PSO is probably as capable of carrying out the search process in GE as simple genetic algorithms are. (Although PSO is normally a floating-point search paradigm, it can be discretized, e.g., by rounding each vector component to the nearest integer, for use with GE.)
Yet another possible variation that has been experimented with in the literature is attempting to encode semantic information in the grammar in order to further bias the search process. Other work showed that, with biased grammars that leverage domain knowledge, even random search can be used to drive GE.
Related Work
GE was originally a combination of the linear representation as used by the Genetic Algorithm for Developing Software (GADS) and Backus Naur Form grammars, which were originally used in tree-based GP by Wong and Leung in 1995 and Whigham in 1996. Other related work noted in the original GE paper was that of Frederic Gruau, who used a conceptually similar "embryonic" approach, as well as that of Keller and Banzhaf, which similarly used linear genomes.
Implementations
There are several implementations of GE. These include the following.
See also
Genetic programming
Java Grammatical Evolution
Cartesian genetic programming
Gene expression programming
Linear genetic programming
Multi expression programming
Notes
Resources
Grammatical Evolution Tutorial.
Grammatical Evolution in Java .
jGE - Java Grammatical Evolution.
The Biocomputing and Developmental Systems (BDS) Group at the University of Limerick.
Michael O'Neill's Grammatical Evolution Page, including a bibliography.
DRP, Directed Ruby Programming, is an experimental system designed to let users create hybrid GE/GP systems. It is implemented in pure Ruby.
GERET, Grammatical Evolution Ruby Exploratory Toolkit.
gramEvol, Grammatical Evolution for R. | Grammatical evolution | [
"Biology"
] | 1,176 | [
"Genetics techniques",
"Genetic programming"
] |
5,610,322 | https://en.wikipedia.org/wiki/Timeline%20of%20space%20exploration | This is a timeline of space exploration which includes notable achievements, first accomplishments and milestones in humanity's exploration of outer space.
This timeline generally does not distinguish achievements by a specific country or private company, as it considers humanity as a whole. See otherwise the timeline of private spaceflight or look for achievements by each space agency.
Pre-20th century
1900–1956
1957–1959
1960–1969
1970–1979
1980–1989
1990–1999
2000–2009
2010–2019
Since 2020
Notes
See also
Discovery and exploration of the Solar System
List of spaceflight records
List of spaceflight records#Human spaceflight firsts
Timeline of Solar System exploration – A comprehensive list of events in the exploration of the Solar System.
Timeline of artificial satellites and space probes – A comprehensive list of artificial satellites and space probes.
Timeline of space travel by nationality
Timeline of spaceflight – Chronological list of events in spaceflight broken down as a separate article for each year
Timeline of private spaceflight – For first achievements by private space companies
References
External links
Chronology of Space Exploration archive of important space exploration missions and events, including future planned and proposed endeavors
Crewed spaceflight 1961–1980
Crewed spaceflight chronology
History of crewed space missions
Timeline of the Space Race/Moon Race
Chronology: Moon Race at russianspaceweb.com
Space Timeline in 3d
Exploration
Lists of firsts in outer space | Timeline of space exploration | [
"Astronomy"
] | 274 | [
"Space exploration",
"Outer space"
] |
5,610,574 | https://en.wikipedia.org/wiki/Juicy%20Salif | Juicy Salif, a citrus reamer designed by Philippe Starck in 1990, is considered an icon of industrial design, and has been displayed in the permanent collections of the Museum of Modern Art and the Metropolitan Museum of Art in New York City, as well as the Victoria and Albert Museum in London. It has also received this distinction at the Rhode Island School of Design Museum and the Museum of Fine Arts, Houston.
Description
Made of cast and polished aluminum by the Italian kitchenware company Alessi, the tool measures in diameter, and high.
But the device is not easy to use, and its polished aluminum finish is vulnerable to corrosion and can impart an unpleasant taste, as conceded in its official instructions. The kitchen tool is not dishwasher-safe, and must be washed by hand, while taking care to avoid injury from its sharp point.
History
The sleek, exotic-looking shape was inspired by a calamari squid; the original drawings were sketched on a pizza-stained paper placemat.
The founder of the manufacturer, Alberto Alessi, later recalled:
I received a napkin from Starck, on it among some incomprehensible marks (tomato sauce, in all likelihood) there were some sketches. Sketches of squid. They started on the left, and as they worked their way over to the right, they took on the unmistakable shape of what was to become the juicy salif. While eating a dish of squid and squeezing a lemon over it, Starck drew on the napkin his famous lemon squeezer.
Alberto Alessi, in a recorded video interview posted on Dezeen, said "I am very happy with this project because I consider it a big joke to everybody. [...] It is the most controversial squeezer of the century I must say, but one of the most amusing projects I have done in my career." He regarded it as one of the company's most successful products.
Sales
For the tenth anniversary of its launch, 10,000 Juicy Salifs were issued, individually numbered and gold-plated. But this luxury version came with instructions warning that the juicer should never be used with actual fruit, because the finish would corrode. There has also been a grey/black (anthracite) coloured version, of which 47,000 un-numbered examples were produced between 1991 and 2004. Both now are collectors' items, though an urban legend perpetuates the idea that the anthracite version is rarer than the gold-plated one.
By 2003, a total of more than 500,000 of the iconic design artifacts had been sold.
Critical reception
Starck has publicly stated that his citrus reamer was "not meant to squeeze lemons" but "to start conversations".
An image of the Juicy Salif was featured on the front cover of Donald Norman's book Emotional Design. The gold-plated version was described as an "ornament" because citric acid from fruit would discolor and erode the gold plating.
An article in the Financial Times about bad design included the Juicy Salif among other examples, and proposed that the original "chamber of horrors" at the Victoria and Albert Museum be revived, to showcase modern examples.
References
External links
Juicy Salif, Centre Pompidou, Paris
Juicy Salif, Alessi
Industrial design
Food preparation utensils
Collection of the Museum of Modern Art (New York City)
Collection of the Metropolitan Museum of Art
Products introduced in 1990 | Juicy Salif | [
"Engineering"
] | 707 | [
"Industrial design",
"Design engineering",
"Design"
] |
5,611,136 | https://en.wikipedia.org/wiki/Shital%20Pati | Sitalpati (, ), also called sital pati, sittal pati and adi (in Sylhet Region), is a kind of mat which feels cold by nature. It is made from murta plants (Schumannianthus dichotomus). It is usually used in Bangladesh (and to a lesser extent, India's West Bengal). Mats with decorative designs are called nakshi pati.
Sitalpati are made from cane or from murta plants, known in different places as , , and . The murta plant grows around water bodies in Sylhet, Sunamganj, Barisal, Tangail, Comilla, Noakhali, Feni and Chittagong. Nakshi pati made of murta plants is available only in Sylhet and Noakhali districts of Bangladesh. In India, Sitalpati is made in the northern Cooch Behar district of the state of West Bengal. Among the areas of Cooch Behar where Sitalpatis are woven, Sagareswar, Ghugumari and Pashnadanga are important centres.
Recognition
UNESCO has recognised the Traditional Art of Shital Pati weaving of Sylhet and included it in the Representative List of the Intangible Cultural Heritage of Humanity.
See also
Nakshi kantha, decorative quilts made from cloth
References
External links
Shital pati of Assam
Shital pati in Bangladesh
Culture of Bangladesh
Floors
Bangladeshi handicrafts
Sylhet Division | Shital Pati | [
"Engineering"
] | 303 | [
"Structural engineering",
"Floors"
] |
5,611,195 | https://en.wikipedia.org/wiki/Uranium-232 | Uranium-232 () is an isotope of uranium. It has a half-life of around
69 years and is a side product in the thorium cycle. It has been cited as an obstacle to nuclear proliferation using 233U as the fissile material, because the intense gamma radiation emitted by 208Tl (a daughter of 232U, produced relatively quickly) makes the 233U contaminated with it more difficult to handle.
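Because the hard gamma emitters appear only as 232U decays, the timescale matters. A minimal sketch of the parent's exponential decay (using the 68.9-year half-life listed in the decay chain below, and ignoring the daughters' own Bateman in-growth kinetics) illustrates how quickly the chain begins to be fed:

```python
from math import exp, log

T_HALF_U232 = 68.9  # years, as listed for 232U in the decay chain

def fraction_remaining(t_years, t_half=T_HALF_U232):
    """Simple exponential decay: N(t)/N0 = exp(-ln(2) * t / t_half)."""
    return exp(-log(2) * t_years / t_half)

print(round(fraction_remaining(68.9), 3))    # → 0.5  (one half-life)
print(round(1 - fraction_remaining(10), 3))  # → 0.096 (decayed after 10 years)
```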
Production of 233U (through the neutron irradiation of 232Th) invariably produces small amounts of 232U as an impurity, because of parasitic (n,2n) reactions on uranium-233 itself, or on protactinium-233, or on thorium-232:
232Th (n,γ) 233Th (β−) 233Pa (β−) 233U (n,2n) 232U
232Th (n,γ) 233Th (β−) 233Pa (n,2n) 232Pa (β−) 232U
232Th (n,2n) 231Th (β−) 231Pa (n,γ) 232Pa (β−) 232U
Another channel involves neutron capture reaction on small amounts of thorium-230, which is a tiny fraction of natural thorium present due to the decay of uranium-238:
230Th (n,γ) 231Th (β−) 231Pa (n,γ) 232Pa (β−) 232U
The decay chain of 232U quickly yields strong gamma radiation emitters:
232U (α, 68.9 years)
228Th (α, 1.9 year)
224Ra (α, 3.6 day, 0.24 MeV) (from this point onwards, the decay chain is identical to that of 232Th; thorium-232 is nevertheless much less dangerous because its extremely long half-life of about 14 billion years means that far less of its dangerous daughters builds up)
220Rn (α, 55 s, 0.54 MeV)
216Po (α, 0.15 s)
212Pb (β−, 10.64 h)
212Bi (α, 61 min, 0.78 MeV)
208Tl (β−, 3 min, 2.6 MeV) (35.94% branching ratio)
208Pb (stable)
This makes manual handling in a glove box with only light shielding (as commonly done with plutonium) too hazardous (except possibly during a short period immediately following chemical separation of the uranium from its decay products), instead requiring remote manipulation for fuel fabrication.
Unusually for an isotope with even mass number, 232U has a significant neutron absorption cross section for fission (thermal neutrons , resonance integral ) as well as for neutron capture (thermal , resonance integral ).
References
Isotopes of uranium
Actinides
Nuclear materials
Fissile materials | Uranium-232 | [
"Physics",
"Chemistry"
] | 581 | [
"Isotopes",
"Fissile materials",
"Materials",
"Nuclear materials",
"Isotopes of uranium",
"Explosive chemicals",
"Matter"
] |
5,611,262 | https://en.wikipedia.org/wiki/Uranium%20in%20the%20environment | Uranium in the environment is a global health concern, and comes from both natural and man-made sources. Beyond naturally occurring uranium, mining, phosphates in agriculture, weapons manufacturing, and nuclear power are anthropogenic sources of uranium in the environment.
In the natural environment, radioactivity of uranium is generally low, but uranium is a toxic metal that can disrupt normal functioning of the kidney, brain, liver, heart, and numerous other systems. Chemical toxicity can cause public health issues when uranium is present in groundwater, especially if concentrations in food and water are increased by mining activity. The biological half-life (the average time it takes for the human body to eliminate half the amount in the body) for uranium is about 15 days.
Uranium's radioactivity can present health and environmental issues in the case of nuclear waste produced by nuclear power plants or weapons manufacturing.
Uranium is weakly radioactive and remains so because of its long physical half-life (4.468 billion years for uranium-238). The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects.
Natural occurrence
Uranium is a naturally occurring element found at low levels within all rock, soil, and water. This is the highest-numbered element to be found naturally in significant quantities on Earth. According to the United Nations Scientific Committee on the Effects of Atomic Radiation the normal concentration of uranium in soil is 300 μg/kg to 11.7 mg/kg.
It is considered to be more plentiful than antimony, beryllium, cadmium, gold, mercury, silver, or tungsten and is about as abundant as tin, arsenic or molybdenum. It is found in many minerals including uraninite (the most common uranium ore), autunite, uranophane, torbernite, and coffinite. There are significant concentrations of uranium in some substances, such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores. (It is recovered commercially from these sources.) Coal fly ash from uranium-bearing coal is particularly rich in uranium, and there have been several proposals to "mine" this waste product for its uranium content. Because some of the ash produced in a coal power plant escapes through the smokestack, the radioactive contamination released by coal power plants in normal operation is actually higher than that of nuclear power plants.
Seawater contains about 3.3 parts per billion (3.3 μg/kg of uranium by weight or 3.3 micrograms per liter).
Sources of uranium
Mining and milling
Mining is the largest source of uranium contamination in the environment. Uranium milling creates radioactive waste in the form of tailings, which contain uranium, radium, and polonium. Consequently, uranium mining results in "the unavoidable radioactive contamination of the environment by solid, liquid and gaseous wastes".
Seventy percent of global uranium resources are on or adjacent to traditional lands belonging to Indigenous people, and perceived environmental risks associated with uranium mining have resulted in environmental conflicts involving multiple actors, in which local campaigns have become national or international debates.
Some of these environmental conflicts have limited uranium exploration. Incidents at Ranger Uranium Mine in the Northern Territory of Australia and disputes over Indigenous land rights led to increased opposition to development of the nearby Jabiluka deposits and suspension of that project in the early 2000s. Similarly, environmental damage from Uranium mining on traditional Navajo lands in the southwestern United States resulted in restrictions on additional mining in Navajo lands in 2005.
Occupational hazards
The radiation hazards of uranium mining and milling were not appreciated in the early years, resulting in workers being exposed to high levels of radiation. Inhalation of radon gas caused sharp increases in lung cancers among underground uranium miners employed in the 1940s and 1950s.
Military activity
Military activity is a source of uranium, especially at nuclear or munitions testing sites. Depleted uranium (DU) is a byproduct of uranium enrichment that is used for defensive armor plating and armor-piercing projectiles. Uranium contamination has been found at testing sites in the UK, in Kazakhstan, and in several countries as a result of DU munitions used in the Gulf War and the Yugoslav wars. During a three-week period of conflict in 2003 in Iraq, 1,000 to 2,000 tonnes of DU munitions were used.
Combustion and impact of DU munitions can produce aerosols that disperse uranium metal into the air and water where it can be inhaled or ingested by humans. A United Nations Environment Programme (UNEP) study has expressed concerns about groundwater contamination from these munitions. Studies of DU aerosol exposure suggest that uranium particles would quickly settle out of the air, and thus should not affect populations more than a few kilometres from target areas.
Nuclear energy and waste
The nuclear power industry is also a source of uranium in the environment in the form of radioactive waste or through nuclear accidents such as Three Mile Island or the Chernobyl disaster. Perceived risks of contamination associated with this industry contribute to the anti-nuclear movement.
In 2020, there were over 250,000 metric tons of high-level radioactive waste being stored globally in temporary containers. This waste is produced by nuclear power plants and weapons facilities, and is a serious human health and environmental issue. There are plans to permanently dispose of high-level waste in deep geological repositories, but none of these are operational. Corrosion of aging temporary containers has caused some waste to leak into the environment.
As spent uranium dioxide fuel is very insoluble in water, it is likely to release uranium (and fission products) even more slowly than borosilicate glass when in contact with water.
Health effects
Soluble uranium salts are toxic, though less so than those of other heavy metals such as lead or mercury. The organ which is most affected is the kidney. Soluble uranium salts are readily excreted in the urine, although some accumulation in the kidneys does occur in the case of chronic exposure. The World Health Organization has established a daily "tolerated intake" of soluble uranium salts for the general public of 0.5 μg/kg body weight (or 35 μg for a 70 kg adult): exposure at this level is not thought to lead to any significant kidney damage.
Tiron may be used to remove uranium from the human body, in a form of chelation therapy. Bicarbonate may also be used as uranium (VI) forms complexes with the carbonate ion.
Public health
Uranium mining produces toxic tailings that are radioactive and may contain other toxic elements such as radon. Dust and water leaving tailing sites may carry long-lived radioactive elements that enter water sources and the soil, increase background radiation, and eventually be ingested by humans and animals. A 2013 analysis in a medical journal found that, "The effects of all these sources of contamination on human health will be subtle and widespread, and therefore difficult to detect both clinically and epidemiologically." A 2019 analysis of the global uranium industry said that the industry was shifting mining activities toward the Global South where environmental regulations are typically less stringent; and that people in impacted communities would "surely experience adverse environmental consequences" and public health issues arising from mining activities carried out by powerful multi-national corporations or mining companies based in foreign countries.
Cancer
In 1950, the US Public Health service began a comprehensive study of uranium miners, leading to the first publication of a statistical correlation between cancer and uranium mining, released in 1962. The federal government eventually regulated the standard amount of radon in mines, setting the level at 0.3 WL on January 1, 1969.
Out of 69 present and former uranium milling sites in 12 states, 24 have been abandoned, and are the responsibility of the US Department of Energy. Accidental releases from uranium mills include the 1979 Church Rock uranium mill spill in New Mexico, called the largest accident of nuclear-related waste in US history, and the 1986 Sequoyah Corporation Fuels Release in Oklahoma.
In 1990, Congress passed the Radiation Exposure Compensation Act (RECA), granting reparations for those affected by mining, with amendments passed in 2000 to address criticisms with the original act.
Depleted uranium exposure studies
The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects. Normal functioning of the kidney, brain, liver, heart, and numerous other systems can be affected by uranium exposure, because uranium is a toxic metal. Some people have raised concerns about the use of DU munitions because of its mutagenicity, teratogenicity in mice, neurotoxicity, and its suspected carcinogenic potential. Additional concerns address unexploded DU munitions leeching into groundwater over time.
The toxicity of DU is a point of medical controversy. Multiple studies using cultured cells and laboratory rodents suggest the possibility of leukemogenic, genetic, reproductive, and neurological effects from chronic exposure.
A 2005 epidemiology review concluded: "In aggregate the human epidemiological evidence is consistent with increased risk of birth defects in offspring of persons exposed to DU." The World Health Organization states that no risk of reproductive, developmental, or carcinogenic effects have been reported in humans due to DU exposure. This report has been criticized by Dr. Keith Baverstock for not including possible long-term effects.
Birth defects
Most scientific studies have found no link between uranium and birth defects, but some claim statistical correlations between soldiers exposed to DU, and those who were not, concerning reproductive abnormalities.
One study found epidemiological evidence for increased risk of birth defects in the offspring of persons exposed to DU. Several sources have attributed an increased rate of birth defects in the children of Gulf War veterans and in Iraqis to inhalation of depleted uranium. A 2001 study of 15,000 Gulf War combat veterans and 15,000 control veterans found that the Gulf War veterans were 1.8 (fathers) to 2.8 (mothers) times more likely to have children with birth defects.
A study of Gulf War Veterans from the UK found a 50% increased risk of malformed pregnancies reported by men over non-Gulf War veterans. The study did not find correlations between Gulf war deployment and other birth defects such as stillbirth, chromosomal malformations, or congenital syndromes. The father's service in the Gulf War was associated with increased rate of miscarriage, but the mother's service was not.
In animals
Uranium causes reproductive defects and other health problems in rodents, frogs and other animals. Uranium was also shown to have cytotoxic, genotoxic and carcinogenic effects in animals. It has been shown in rodents and frogs that water-soluble forms of uranium are teratogenic.
In soil and microbiology
Bacteria and Pseudomonadota, such as Geobacter and Burkholderia fungorum (strain Rifle), can reduce and fix uranium in soil and groundwater. These bacteria change soluble U(VI) into the highly insoluble complex-forming U(IV) ion, hence stopping chemical leaching.
It has been suggested that it is possible to form a reactive barrier by adding something to the soil which will cause the uranium to become fixed. One method of doing this is to use a mineral (apatite) while a second method is to add a food substance such as acetate to the soil. This will enable bacteria to reduce the uranium(VI) to uranium(IV), which is much less soluble. In peat-like soils, the uranium will tend to bind to the humic acids; this tends to fix the uranium in the soil.
References
Element toxicology
Nuclear technology
Radiobiology
Radioactivity
Soil contamination
Uranium
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 2,391 | [
"Element toxicology",
"Biology and pharmacology of chemical elements",
"Radiobiology",
"Environmental chemistry",
"Nuclear technology",
"Soil contamination",
"Nuclear physics",
"Radioactivity"
] |
5,611,461 | https://en.wikipedia.org/wiki/Contrast%20%28vision%29 | Contrast is the difference in luminance or color that makes an object (or its representation in an image or display) visible against a background of different luminance or color. The human visual system is more sensitive to contrast than to absolute luminance; thus, we can perceive the world similarly despite significant changes in illumination throughout the day or across different locations.
The maximum contrast of an image is termed the contrast ratio or dynamic range. In images where the contrast ratio approaches the maximum possible for the medium, there is a conservation of contrast. In such cases, increasing contrast in certain parts of the image will necessarily result in a decrease in contrast elsewhere. Brightening an image increases contrast in darker areas but decreases it in brighter areas; conversely, darkening the image will have the opposite effect. Bleach bypass reduces contrast in the darkest and brightest parts of an image while enhancing luminance contrast in areas of intermediate brightness.
Biological contrast sensitivity
Campbell and Robson (1968) showed that the human contrast sensitivity function shows a typical band-pass filter shape peaking at around 4 cycles per degree (cpd or cyc/deg), with sensitivity dropping off either side of the peak. This can be observed by changing one's viewing distance from a "sweep grating" (shown below) showing many bars of a sinusoidal grating that go from high to low contrast along the bars, and go from narrow (high spatial frequency) to wide (low spatial frequency) bars across the width of the grating.
The high-frequency cut-off represents the optical limitations of the visual system's ability to resolve detail and is typically about 60 cpd. The high-frequency cut-off is also related to the packing density of the retinal photoreceptor cells: a finer matrix can resolve finer gratings.
The low frequency drop-off is due to lateral inhibition within the retinal ganglion cells. A typical retinal ganglion cell's receptive field comprises a central region in which light either excites or inhibits the cell, and a surround region in which light has the opposite effects.
One experimental phenomenon is the inhibition of blue in the periphery if blue light is displayed against a white background, leading to a yellow surrounding. The yellow is derived from the inhibition of blue on the surroundings by the center. Since white minus blue is red and green, this mixes to become yellow.
In the case of graphical computer displays, contrast depends on the properties of the picture source or file and the properties of the computer display, including its variable settings. For some screens the angle between the screen surface and the observer's line of sight is also important.
Quantifications
There are many possible definitions of contrast. Some include color; others do not. One Russian scientist laments, "Such a multiplicity of notions of contrast is extremely inconvenient. It complicates the solution of many applied problems and makes it difficult to compare the results published by different authors."
Various definitions of contrast are used in different situations. Here, luminance contrast is used as an example, but the formulas can also be applied to other physical quantities. In many cases, the definitions of contrast represent a ratio of the type (luminance difference) / (average luminance).
The rationale behind this is that a small difference is negligible if the average luminance is high, while the same small difference matters if the average luminance is low (see Weber–Fechner law). Below, some common definitions are given.
Weber contrast
Weber contrast is defined as

C_W = (I - I_b) / I_b,

with I and I_b representing the luminance of the features and the background, respectively. The measure is also referred to as the Weber fraction, since it is the term that is constant in Weber's Law. Weber contrast is commonly used in cases where small features are present on a large uniform background, i.e., where the average luminance is approximately equal to the background luminance.
Michelson contrast
Michelson contrast (also known as the visibility) is commonly used for patterns where both bright and dark features are equivalent and take up similar fractions of the area (e.g. sine-wave gratings). The Michelson contrast is defined as

C_M = (I_max - I_min) / (I_max + I_min),

with I_max and I_min representing the highest and lowest luminance. The denominator represents twice the average of the maximum and minimum luminances.
This form of contrast is an effective way to quantify contrast for periodic functions and is also known as the modulation m of a periodic signal f. Modulation quantifies the relative amount by which the amplitude (or difference) of f stands out from the average value (or background) b.

In general, m refers to the contrast of the periodic signal f relative to its average value. If m = 0, then f has no contrast. If two periodic functions f and g have the same average value, then f has more contrast than g if m_f > m_g.
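Both definitions translate directly into code; the luminance values in the print lines below are arbitrary illustrative numbers, and the resulting contrast values are dimensionless.

```python
def weber_contrast(feature, background):
    """Weber contrast (I - I_b) / I_b: a small feature on a uniform background."""
    return (feature - background) / background

def michelson_contrast(i_max, i_min):
    """Michelson contrast (I_max - I_min) / (I_max + I_min): periodic patterns."""
    return (i_max - i_min) / (i_max + i_min)

print(weber_contrast(120.0, 100.0))     # → 0.2
print(michelson_contrast(150.0, 50.0))  # → 0.5
```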
RMS contrast
Root mean square (RMS) contrast does not depend on the spatial frequency content or the spatial distribution of contrast in the image. RMS contrast is defined as the standard deviation of the pixel intensities:

C_RMS = sqrt( (1 / (M * N)) * Σ_i Σ_j (I_ij - Ī)² ),

where the intensities I_ij are the i-th, j-th element of the two-dimensional image of size M by N, and Ī is the average intensity of all pixel values in the image. The image is assumed to have its pixel intensities normalized in the range [0, 1].
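The RMS definition can be sketched in plain Python over a nested-list grayscale image (no imaging library assumed; the 2×2 checkerboard is an arbitrary example).

```python
import math

def rms_contrast(image):
    """Standard deviation of pixel intensities, assumed scaled to [0, 1]."""
    pixels = [p for row in image for p in row]     # flatten the 2-D image
    mean = sum(pixels) / len(pixels)
    return math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))

checkerboard = [[0.0, 1.0],
                [1.0, 0.0]]
print(rms_contrast(checkerboard))  # → 0.5
```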
Contrast sensitivity
Contrast sensitivity is a measure of the ability to discern different luminances in a static image. It varies with age, increasing to a maximum around 20 years at spatial frequencies of about 2–5 cpd; aging then progressively attenuates contrast sensitivity beyond this peak. Factors such as cataracts and diabetic retinopathy also reduce contrast sensitivity. In the sweep grating figure below, at an ordinary viewing distance, the bars in the middle appear to be the longest due to their optimal spatial frequency. However, at a far viewing distance, the longest visible bars shift to what were originally the wide bars, now matching the spatial frequency of the middle bars at reading distance.
Contrast sensitivity and visual acuity
Visual acuity is a parameter that is frequently used to assess overall vision. However, diminished contrast sensitivity may cause decreased visual function in spite of normal visual acuity. For example, some individuals with glaucoma may achieve 20/20 vision on acuity exams, yet struggle with activities of daily living, such as driving at night.
As mentioned above, contrast sensitivity describes the ability of the visual system to distinguish bright and dim components of a static image. Visual acuity can be defined as the angle with which one can resolve two points as being separate since the image is shown with 100% contrast and is projected onto the fovea of the retina. Thus, when an optometrist or ophthalmologist assesses a patient's visual acuity using a Snellen chart or some other acuity chart, the target image is displayed at high contrast, e.g., black letters of decreasing size on a white background. A subsequent contrast sensitivity exam may demonstrate difficulty with decreased contrast (using, e.g., the Pelli–Robson chart, which consists of uniform-sized but increasingly pale grey letters on a white background).
To assess a patient's contrast sensitivity, one of several diagnostic exams may be used. Most charts in an ophthalmologist's or optometrist's office will show images of varying contrast and spatial frequency. Parallel bars of varying width and contrast, known as sine-wave gratings, are sequentially viewed by the patient. The width of the bars and their distance apart represent spatial frequency, measured in cycles per degree. Studies have demonstrated that contrast sensitivity is maximum for spatial frequencies of 2–5 cpd, falling off for lower spatial frequencies and rapidly falling off for higher spatial frequencies. The upper limit for the human vision system is about 60 cpd. The correct identification of small letters requires the letter size to be about 18–30 cpd. Contrast threshold can be defined as the minimum contrast that can be resolved by the patient. Contrast sensitivity is typically expressed as the reciprocal of the threshold contrast for detection of a given pattern (i.e., 1 ÷ contrast threshold).
Using the results of a contrast sensitivity exam, a contrast sensitivity curve can be plotted, with spatial frequency on the horizontal axis and contrast threshold on the vertical axis. Also known as the contrast sensitivity function (CSF), the plot demonstrates the normal range of contrast sensitivity, and will indicate diminished contrast sensitivity in patients who fall below the normal curve. Some graphs contain "contrast sensitivity acuity equivalents", with lower acuity values falling in the area under the curve. In patients with normal visual acuity and concomitant reduced contrast sensitivity, the area under the curve serves as a graphical representation of the visual deficit. Because of this impairment in contrast sensitivity, patients may have difficulty driving at night, climbing stairs, and performing other activities of daily living in which contrast is reduced.
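A CSF of this band-pass shape is often fitted with a log-parabola: sensitivity falls off as a parabola in log spatial frequency. The sketch below uses that functional form with hypothetical parameter values (peak sensitivity 150 at 3 cpd), chosen only to reproduce the qualitative shape described above, not to match any published fit:

```python
import math

def csf_log_parabola(f: float, peak_sens: float = 150.0,
                     peak_freq: float = 3.0, half_bw_oct: float = 1.5) -> float:
    """Illustrative log-parabola contrast sensitivity function.

    f: spatial frequency in cycles per degree (cpd).
    Sensitivity is maximal at peak_freq and falls off as a parabola
    in log2 frequency; all parameter values here are hypothetical.
    """
    if f <= 0:
        raise ValueError("spatial frequency must be positive")
    octaves_from_peak = math.log2(f / peak_freq)
    return peak_sens * 2.0 ** (-(octaves_from_peak / half_bw_oct) ** 2)

# Band-pass shape: sensitivity is highest near 3 cpd and drops at both ends.
for f in (0.5, 3.0, 10.0, 30.0):
    print(f"{f:5.1f} cpd -> sensitivity {csf_log_parabola(f):6.1f}")
```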
Recent studies have demonstrated that intermediate-frequency sinusoidal patterns are optimally detected by the retina due to the center-surround arrangement of neuronal receptive fields. At an intermediate spatial frequency, the peaks (brighter bars) of the pattern are detected by the center of the receptive field, while the troughs (darker bars) are detected by the inhibitory periphery of the receptive field. For this reason, low and high spatial frequencies elicit both excitatory and inhibitory impulses by overlapping frequency peaks and troughs in the center and periphery of the neuronal receptive field. Other environmental, physiological, and anatomical factors influence the neuronal transmission of sinusoidal patterns, including adaptation.
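This center-surround account can be illustrated with a difference-of-Gaussians (DoG) receptive-field model, whose frequency response has a closed form: an excitatory center Gaussian minus a broader inhibitory surround Gaussian. The gains and sizes below are hypothetical, chosen only to show the resulting band-pass tuning:

```python
import math

def dog_tuning(f: float, k_center: float = 1.0, sigma_center: float = 0.5,
               k_surround: float = 0.8, sigma_surround: float = 2.0) -> float:
    """Frequency response of a 1-D difference-of-Gaussians receptive field.

    f is spatial frequency (cycles/degree); sigmas are in degrees.
    The Fourier transform of a Gaussian is a Gaussian, so the response
    is a narrow-band center term minus a broad-band surround term.
    """
    center = k_center * math.exp(-2.0 * math.pi ** 2 * sigma_center ** 2 * f ** 2)
    surround = k_surround * math.exp(-2.0 * math.pi ** 2 * sigma_surround ** 2 * f ** 2)
    return center - surround

# Intermediate frequencies escape the surround's inhibition but still
# drive the center, so the tuning curve is band-pass:
for f in (0.02, 0.25, 2.0):
    print(f"{f:5.2f} cpd -> response {dog_tuning(f):.3f}")
```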
Decreased contrast sensitivity arises from multiple etiologies, including retinal disorders such as age-related macular degeneration (ARMD), amblyopia, lens abnormalities such as cataract, and higher-order neural dysfunction, including stroke and Alzheimer's disease. In light of the multitude of etiologies leading to decreased contrast sensitivity, contrast sensitivity tests are useful in the characterization and monitoring of dysfunction, but less helpful in the detection of disease.
Contrast threshold
A large-scale study of luminance contrast thresholds was done in the 1940s by Blackwell, using a forced-choice procedure. Discs of various sizes and luminances were presented in different positions against backgrounds at a wide range of adaptation luminances, and subjects had to indicate where they thought the disc was being shown. After statistical pooling of results (90,000 observations by seven observers), the threshold for a given target size and luminance was defined as the Weber contrast level at which there was a 50% detection level. The experiment employed a discrete set of contrast levels, resulting in discrete values of threshold contrast. Smooth curves were drawn through these, and values tabulated. The resulting data have been used extensively in areas such as lighting engineering and road safety.
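The pooling step described above can be sketched as follows: compute the Weber contrast of each presented disc, then interpolate the contrast level at which the detection rate crosses 50%. The data below are illustrative, not Blackwell's actual observations:

```python
def weber_contrast(target_luminance: float, background_luminance: float) -> float:
    """Weber contrast: (L_target - L_background) / L_background."""
    return (target_luminance - background_luminance) / background_luminance

def threshold_at_50(contrasts, detection_rates):
    """Linearly interpolate the contrast at which detection reaches 50%.

    contrasts must be increasing and detection_rates nondecreasing.
    """
    pairs = list(zip(contrasts, detection_rates))
    for (c0, p0), (c1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            return c0 + (0.5 - p0) * (c1 - c0) / (p1 - p0)
    raise ValueError("50% detection level not bracketed by the data")

# A disc of 120 cd/m^2 on a 100 cd/m^2 background has Weber contrast 0.2:
print(weber_contrast(120.0, 100.0))
# Illustrative detection rates at four discrete contrast levels:
print(threshold_at_50([0.005, 0.01, 0.02, 0.04], [0.10, 0.35, 0.65, 0.95]))
```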
A separate study by Knoll et al. investigated thresholds for point sources by requiring subjects to vary the brightness of the source to find the level at which it was just visible. A mathematical formula for the resulting threshold curve was proposed by Hecht, with separate branches for scotopic and photopic vision. Hecht's formula was used by Weaver to model the naked-eye visibility of stars, and the same formula was used later by Schaefer to model stellar visibility through a telescope.
Crumey showed that Hecht's formula fitted the data very poorly at low light levels, and so was not well suited to modelling stellar visibility. Crumey instead constructed a more accurate and general model applicable to both the Blackwell and Knoll et al. data. Crumey's model covers all light levels, from zero background luminance to daylight levels, and instead of relying on parameter tuning it is based on an underlying linearity related to Ricco's law. Crumey used it to model astronomical visibility for targets of arbitrary size, and to study the effects of light pollution.
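Ricco's law, mentioned above, states that for targets smaller than a critical area the product of threshold contrast and target area is constant (complete spatial summation). A minimal sketch of this relationship, with hypothetical constants:

```python
def ricco_threshold(area: float, critical_area: float = 1.0,
                    k: float = 0.01) -> float:
    """Threshold contrast vs. target area under Ricco's law.

    Below the critical area, contrast * area = k (halving the area
    doubles the required contrast); above it, spatial summation is
    complete and the threshold stops falling. Constants here are
    hypothetical, chosen only for illustration.
    """
    if area <= 0:
        raise ValueError("area must be positive")
    return k / min(area, critical_area)

# Halving a small target's area doubles its threshold contrast:
print(ricco_threshold(0.5), ricco_threshold(0.25))
# Beyond the critical area the threshold no longer decreases:
print(ricco_threshold(1.0), ricco_threshold(4.0))
```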
Test images
Test image types
Pelli-Robson Contrast Sensitivity Chart
Regan chart
Arden grating chart
Campbell-Robson Contrast Sensitivity Chart
See also
Acutance
Color blindness
Contrast ratio
Display contrast
Eye examination
Optical resolution
Psychophysics
Radiocontrast
Spatial frequency
Visual acuity
References
External links
Details on luminance contrast
Vision
Photometry
Dimensionless numbers
List of object–relational mapping software

This is a list of well-known object–relational mapping software.
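The libraries listed below differ widely in scope, but the core idea they share (mapping table rows to in-memory objects and back) can be sketched in a few lines with Python's standard library. Everything here is illustrative; it is not the API of any listed project:

```python
import sqlite3
from dataclasses import dataclass, fields

@dataclass
class User:          # a plain object mapped to one row of a "User" table
    id: int
    name: str

class Mapper:
    """Minimal data-mapper sketch: one dataclass <-> one table."""

    def __init__(self, conn: sqlite3.Connection, cls: type):
        self.conn, self.cls = conn, cls
        columns = ", ".join(f.name for f in fields(cls))
        conn.execute(f"CREATE TABLE IF NOT EXISTS {cls.__name__} ({columns})")

    def save(self, obj) -> None:
        values = [getattr(obj, f.name) for f in fields(self.cls)]
        marks = ", ".join("?" for _ in values)
        self.conn.execute(
            f"INSERT INTO {self.cls.__name__} VALUES ({marks})", values)

    def get(self, obj_id):
        row = self.conn.execute(
            f"SELECT * FROM {self.cls.__name__} WHERE id = ?", (obj_id,)
        ).fetchone()
        return self.cls(*row) if row else None

conn = sqlite3.connect(":memory:")
users = Mapper(conn, User)
users.save(User(id=1, name="Ada"))
print(users.get(1))      # User(id=1, name='Ada')
```

Real ORMs add identity maps, change tracking, relationships, migrations, and query builders on top of this basic row-object mapping; the frameworks below implement variations of the Active Record and Data Mapper patterns.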
Java
Apache Cayenne, open-source for Java
Apache OpenJPA, open-source for Java
DataNucleus, open-source JDO and JPA implementation (formerly known as JPOX)
Ebean, open-source ORM framework
EclipseLink, Eclipse persistence platform
Enterprise JavaBeans (EJB)
Enterprise Objects Framework, Mac OS X/Java, part of Apple WebObjects
Hibernate, open-source ORM framework, widely used
Java Data Objects (JDO)
JOOQ Object Oriented Querying (jOOQ)
Kodo, commercial implementation of both Java Data Objects and Java Persistence API
TopLink by Oracle
Node.js
Bookshelf, lightweight ORM tool for PostgreSQL, MySQL, and SQLite3
Orange ORM, TypeScript/JavaScript ORM for PostgreSQL, MySQL, SQL Server, SQLite, Oracle and SAP ASE
Prisma ORM, TypeScript/JavaScript ORM for PostgreSQL, MySQL, SQL Server, SQLite, MongoDB, CockroachDB, PlanetScale, MariaDB
Sequelize, Node.js ORM tool for Postgres, MySQL, MariaDB, SQLite, DB2, Microsoft SQL Server, and Snowflake
TypeORM, scalable TypeScript/JavaScript ORM tool
MikroORM, TypeScript ORM based on the Data Mapper, Unit of Work and Identity Map patterns. Supports PostgreSQL, MySQL, SQLite (including libSQL), MongoDB, and MariaDB
iOS
Core Data by Apple for Mac OS X and iOS
.NET
Base One Foundation Component Library, free or commercial
Dapper, open source
Entity Framework, included in .NET Framework 3.5 SP1 and above
iBATIS, free and open source, formerly maintained by the Apache Software Foundation (ASF); now inactive
LINQ to SQL, included in .NET Framework 3.5
NHibernate, open source
nHydrate, open source
Quick Objects, free or commercial
Objective-C, Cocoa
Enterprise Objects, one of the first commercial OR mappers, available as part of WebObjects
Core Data, object graph management framework with several persistent stores, ships with Mac OS X and iOS
Perl
DBIx::Class
PHP
Laravel, framework that contains an ORM called "Eloquent", an ActiveRecord implementation
Doctrine, open source ORM for PHP, Free software (MIT)
CakePHP, ORM and framework, open source (scalars, arrays, objects); based on database introspection, no class extending
CodeIgniter, framework that includes an ActiveRecord implementation
Yii, ORM and framework, released under the BSD license. Based on the ActiveRecord pattern
FuelPHP, ORM and framework for PHP, released under the MIT license. Based on the ActiveRecord pattern.
Laminas, framework that includes a table data gateway and row data gateway implementations
Qcodo, ORM and framework, open source
Redbean, ORM layer for PHP, for creating and maintaining tables on the fly, open source, BSD
Skipper, visualization tool and a code/schema generator for PHP ORM frameworks, commercial
Python
Django, ActiveRecord ORM included in Django framework, open source
SQLAlchemy, open source, a Data Mapper ORM
SQLObject, open source
Storm, open source (LGPL 2.1) developed at Canonical Ltd.
Tryton, open source
web2py, the facilities of an ORM are handled by the DAL in web2py, open source
Odoo, formerly known as OpenERP, an open-source ERP that includes its own ORM
Ruby
iBATIS (inactive)
ActiveRecord
DataMapper
Rust
Diesel
SeaORM
Welds
Smalltalk
TOPLink/Smalltalk, by Oracle, the Smalltalk predecessor of the Java version of TOPLink
See also
Comparison of object–relational mapping software
References
Object-relational mapping software
Blanchard's transsexualism typology

The American-Canadian sexologist Ray Blanchard proposed a psychological typology of gender dysphoria, transsexualism, and fetishistic transvestism in a series of academic papers through the 1980s and 1990s. Building on the work of earlier researchers, including his colleague Kurt Freund, Blanchard categorized trans women into two groups: homosexual transsexuals who are attracted exclusively to men and are feminine in both behavior and appearance; and autogynephilic transsexuals who experience sexual arousal at the idea of having a female body. Blanchard and his supporters argue that the typology explains differences between the two groups in childhood gender nonconformity, sexual orientation, history of sexual fetishism, and age of transition.
Blanchard's typology has attracted significant controversy, especially following the 2003 publication of J. Michael Bailey's book The Man Who Would Be Queen, which presented the typology to a general audience. Scientific criticisms commonly made against Blanchard's research include that the typology is unfalsifiable because Blanchard and other supporters regularly dismiss or ignore data that challenge the theory; that his studies rated levels of autogynephilia against cisgender men rather than properly controlling against cisgender women; and that when such controlled studies are performed, they show that cisgender women report levels of autogynephilic response similar to those of transgender women.
The American Psychiatric Association includes "with autogynephilia" as a specifier to a diagnosis of transvestic disorder in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (2013); this addition was objected to by the World Professional Association for Transgender Health (WPATH), which argued that there was a lack of scientific consensus on and empirical evidence for the concept of autogynephilia.
History
Background
Beginning in the 1950s, clinicians and researchers developed a variety of classifications of transsexualism. These were variously based on sexual orientation, age of onset, and fetishism. Prior to Blanchard, these classifications generally divided transgender women into two groups: "homosexual transsexuals" if sexually attracted to men and "heterosexual fetishistic transvestites" if sexually attracted to women. These labels carried a social stigma of mere sexual fetishism, and contradicted trans women's self-identification as "heterosexual" or "homosexual", respectively.
In 1982, Kurt Freund and colleagues argued there were two distinct types of trans women, each with distinct causes: one type associated with childhood femininity and androphilia (sexual attraction to men), and another associated with fetishism and gynephilia (sexual attraction to women). Freund stated that the sexual arousal in this latter type could be associated, not only with crossdressing, but also with other feminine-typical behaviors, such as applying make-up or shaving the legs.
Freund, four of his colleagues, and two other sexologists had previously published papers on "feminine gender identity in homosexual males" and "Male Transsexualism" in 1974. They occasionally also used the term homosexual transsexual to describe transgender men attracted to women. Blanchard credited Freund with being the first author to distinguish between erotic arousal due to dressing as a woman (transvestic fetishism) and erotic arousal due to fantasizing about being female (which Freund called cross-gender fetishism).
Early research
Blanchard conducted a series of studies on people with gender dysphoria, analyzing the files of cases seen in the Gender Identity Clinic of the Clarke Institute of Psychiatry and comparing them on multiple characteristics. These studies have been criticized as bad science for being unfalsifiable and for failing to sufficiently operationalize their definitions. They have also been criticized for lacking reproducibility, and for a lack of a control group of cisgender women. Supporters of the typology deny these allegations.
Studying patients who had felt like women at all times for at least a year, Blanchard classified them according to whether they were attracted to men, women, both, or neither. He then compared these four groups regarding how many in each group reported a history of sexual arousal together with cross-dressing. 73% of the gynephilic, asexual, and bisexual groups said they did experience such feelings, but only 15% of the androphilic group did. He concluded that asexual, bisexual, and gynephilic transsexuals were motivated by erotic arousal to the thought or image of themself as a woman, and he coined the term autogynephilia to describe this.
Blanchard and colleagues conducted a study in 1986 using phallometry (a measure of blood flow to the penis), demonstrating arousal in response to cross-dressing audio narratives among trans women. Although this study is often cited as evidence for autogynephilia, the authors did not attempt to measure subjects' ideas of themselves as women. The authors concluded that gynephilic gender identity patients who denied experiencing arousal to cross-dressing were still measurably aroused by autogynephilic stimuli, and that autogynephilia among non-androphilic trans women was negatively associated with a tendency to color their narratives to be more socially acceptable. However, the study had methodological problems, and the reported data did not support this conclusion, because the measured arousal to cross-dressing situations was minimal and consistent with subjects' self-reported arousal. This study has been cited by proponents to argue that gynephilic trans women who reported no autogynephilic interests were misrepresenting their erotic interests.
Popularization
Blanchard's research and conclusions came to wider attention with the publication of popular science books on transsexualism, including The Man Who Would Be Queen (2003) by sexologist J. Michael Bailey and Men Trapped in Men's Bodies (2013) by sexologist and trans woman Anne Lawrence, both of which based their portrayals of male-to-female transsexuals on Blanchard's taxonomy. The concept of autogynephilia in particular received little public interest until Bailey's 2003 book, though Blanchard and others had been publishing studies on the topic for nearly 20 years. Bailey's book was followed by peer-reviewed articles critiquing the methodology used by Blanchard.
Both Bailey and Blanchard have since attracted intense criticism by some clinicians and by many transgender activists.
Measures of orientation
Sexologists may measure sexual orientation using psychological personality tests, self reports, or techniques such as photoplethysmography. Blanchard argues that self-reporting is not always reliable. Morgan, Blanchard and Lawrence have speculated that many reportedly "non-homosexual" trans women systematically distorted their life stories because "non-homosexuals" were often screened out as candidates for surgery.
Blanchard and Freund used the Masculine Identity in Females (MGI) scale and the Modified Androphilia Scale. Lawrence writes that homosexual transsexuals averaged a Kinsey scale measurement of 5–6 or a 9.86 ± 2.37 on the Modified Androphilia Scale.
Neurological differences
The concept that androphilia in trans women is related to homosexuality in cisgender men has been tested by MRI studies. Cantor interprets these studies as supporting Blanchard's transsexualism typology. These studies show neurological differences between trans women attracted to men and cis men attracted to women, as well as differences between androphilic and gynephilic trans women. The studies also showed differences between transsexual and nontranssexual people, leading to the conclusion that transsexuality is "a likely innate and immutable characteristic".
According to a 2016 review, structural neuroimaging studies seem to support the idea that androphilic and gynephilic trans women have different brain phenotypes, though the authors state that more independent studies of gynephilic trans women are needed to confirm this. A 2021 review examining transgender neurology found similar differences in brain structure between cisgender homosexuals and heterosexuals.
Autogynephilia
Autogynephilia (derived from Greek for "love of oneself as a woman") is a term coined by Blanchard for "a male's propensity to be sexually aroused by the thought of himself as a female", intending for the term to refer to "the full gamut of erotically arousing cross-gender behaviors and fantasies". Blanchard states that he intended the term to subsume transvestism, including for sexual ideas in which feminine clothing plays only a small or no role at all. Other terms for such cross-gender fantasies and behaviors include automonosexuality, eonism, and sexo-aesthetic inversion.
It is not disputed that autogynephilic sexual arousal exists and has been reported by both some transsexuals and some non-transsexuals. The disputed aspects of Blanchard's theories are the claim that autogynephilia is the central motivation for non-androphilic MtF transsexuals while being absent in androphilic ones, and his characterisations of autogynephilia, including as a paraphilia. Blanchard writes that the accuracy of these theories needs further empirical research to resolve, while others such as the transfeminist Julia Serano characterise them as incorrect.
Subtypes
Blanchard identified four types of autogynephilic sexual fantasy, but stated that co-occurrence of types was common.
Transvestic autogynephilia: arousal to the act or fantasy of wearing typically feminine clothing
Behavioral autogynephilia: arousal to the act or fantasy of doing something regarded as feminine
Physiologic autogynephilia: arousal to fantasies of body functions specific to people regarded as female
Anatomic autogynephilia: arousal to the fantasy of having a normative woman's body, or parts of one
Relationship to gender dysphoria
The exact proposed nature of the relationship between autogynephilia and gender dysphoria is unclear, and the desire to live as a woman often remains as strong or stronger after an initial sexual response to the idea has faded. Blanchard and Lawrence argue that this is because autogynephilia causes a female gender identity to develop, which becomes an emotional attachment and something aspirational in its own right.
Many transgender people dispute that their gender identity is related to their sexuality, and have argued that the concept of autogynephilia unduly sexualizes trans women's gender identity. Some fear that the concept of autogynephilia will make it harder for gynephilic or "non-classical" MtF transsexuals to receive sex reassignment surgery. Lawrence writes that some transsexual women identify with autogynephilia, some of these feeling positively and some negatively as a result, with a range of opinions reflected as to whether or not this played a motivating role in their decision to transition.
In the first peer-reviewed critique of autogynephilia research, Charles Allen Moser found no substantial difference between "autogynephilic" and "homosexual" transsexuals in terms of gender dysphoria, stating that the clinical significance of autogynephilia was unclear. According to Moser, the idea is not supported by the data, and that despite autogynephilia existing, it is not predictive of the behavior, history, and motivation of trans women. In a re-evaluation of the data used by Blanchard and others as the basis for the typology, he states that autogynephilia is not always present in trans women attracted to women, or absent in trans women attracted to men, and that autogynephilia is not the primary motivation for gynephilic trans women to seek sex reassignment surgery.
In a 2011 study presenting an alternative to Blanchard's explanation, Larry Nuttbrock and colleagues reported that autogynephilia-like characteristics were strongly associated with a specific generational cohort as well as the ethnicity of the subjects; they hypothesized that autogynephilia may become a "fading phenomenon".
As a sexual orientation
Blanchard and Lawrence have classified autogynephilia as a sexual orientation. Blanchard attributed the notion of some cross-dressing men being sexually aroused by the image of themselves as female to Magnus Hirschfeld. (The concept of a taxonomy based on transsexual sexuality was refined by endocrinologist Harry Benjamin in the Benjamin Scale in 1966, who wrote that researchers of his day thought attraction to men while feeling oneself to be a woman was the factor that distinguished a transsexual from a transvestite (who "is a man [and] feels himself to be one").) Blanchard and Lawrence argue that just like more common sexual orientations such as heterosexuality and homosexuality, it is not only reflected by penile responses to erotic stimuli, but also includes the capacity for pair bond formation and romantic love.
Later studies have found little empirical support for autogynephilia as a sexual identity classification, and sexual orientation is generally understood to be distinct from gender identity. Elke Stefanie Smith and colleagues describe Blanchard's approach as "highly controversial as it could erroneously suggest an erotic background" to transsexualism.
Serano says the idea is generally disproven within the context of gender transition, as trans women on feminizing hormone therapy, especially anti-androgens, experience a severe drop in libido and in some cases a complete loss of it. Despite this, the vast majority of transgender women continue their transition.
Erotic target location errors
Blanchard conjectured that sexual interest patterns could have inwardly instead of outwardly directed forms, which he called erotic target location errors (ETLE). Autogynephilia would represent an inwardly directed form of gynephilia, with the attraction to women being redirected towards the self instead of others. These forms of erotic target location errors have also been observed with other base orientations, such as pedophilia, attraction to amputees, and attraction to plush animals. Anne Lawrence wrote that this phenomenon would help to explain an autogynephilia typology.
Cisgender women
The concept of autogynephilia has been criticized for implicitly assuming that cisgender women do not experience sexual desire mediated by their own gender identity. Research on autogynephilia in cisgender women shows that cisgender women commonly endorse items on adapted versions of Blanchard's autogynephilia scales.
Moser created an Autogynephilia Scale for Women in 2009, based on items used to categorize MtF transsexuals as autogynephilic in other studies. A questionnaire that included the ASW was distributed to a sample of 51 professional cisgender women employed at an urban hospital; 29 completed questionnaires were returned for analysis. By the common definition of ever having erotic arousal to the thought or image of oneself as a woman, 93% of the respondents would be classified as autogynephilic. Using a more rigorous definition of "frequent" arousal to multiple items, 28% would be classified as autogynephilic.
Lawrence criticized Moser's methodology and conclusions and stated that genuine autogynephilia occurs very rarely, if ever, in cisgender women as their experiences are superficially similar but the erotic responses are ultimately markedly different. Moser responded that Lawrence had made multiple errors by comparing the wrong items. Lawrence argues that the scales used by both Veale et al. and Moser fail to differentiate between arousal from wearing provocative clothing or imagining that potential partners find one attractive, and arousal merely from the idea that one is a woman or has a woman's body.
In a 2022 study, Bailey and Kevin J. Hsu dispute that "natal females" experience autogynephilia based on an application of Blanchard's original Core Autogynephilia Scale to four samples of "autogynephilic natal males", four samples of "non-autogynephilic natal males" and two samples of "natal females". Serano and Veale argue that Bailey and Hsu's results do not support their conclusion, because most "natal females" in their research reported at least some autogynephilic fantasies. Furthermore, Bailey and Hsu's "autogynephilic natal male" samples 1, 2, and 4 do not apply to trans people as the majority of the sample were cis crossdressers, not trans women. Sample 3, which was majority trans women, did not have high rates of autogynephilia compared to the other two samples. Serano and Veale also criticize Bailey and Hsu for leaving out two scales that played a central role in Blanchard's original conception of autogynephilia, saying that this implies a much narrower definition of autogynephilia which would have excluded many of Blanchard's original trans subjects.
Similar to Serano and Veale, Moser also criticizes Bailey and Hsu for mainly comparing the scores of cisgender women with cisgender male crossdressers instead of transgender women.
Transfeminist critique
Critics of the autogynephilia hypothesis include transfeminists such as Julia Serano and Talia Mae Bettcher. Serano describes the concept as flawed, unscientific, and needlessly stigmatizing. According to Serano, "Blanchard's controversial theory is built upon a number of incorrect and unfounded assumptions, and there are many methodological flaws in the data he offers to support it." She argues that flaws in Blanchard's original studies include: being conducted among overlapping populations primarily at the Clarke Institute in Toronto without nontranssexual controls; subtypes not being empirically derived but instead "begging the question that transsexuals fall into subtypes based on their sexual orientation"; and further research finding a non-deterministic correlation between cross-gender arousal and sexual orientation. She states that Blanchard did not discuss the idea that cross-gender arousal may be an effect, rather than a cause, of gender dysphoria, and that Blanchard assumed that correlation implied causation.
Serano also states that the wider idea of cross-gender arousal was affected by the prominence of sexual objectification of women, accounting for both a relative lack of cross-gender arousal in transsexual men and similar patterns of autogynephilic arousal in non-transsexual women. She criticised proponents of the typology, claiming that they dismiss non-autogynephilic, non-androphilic transsexuals as misreporting or lying while not questioning androphilic transsexuals, describing it as "tantamount to hand-picking which evidence counts and which does not based upon how well it conforms to the model", either making the typology unscientific due to its unfalsifiability, or invalid due to the nondeterministic correlation that later studies found. Serano says that the typology undermined lived experience of transsexual women, contributed to pathologisation and sexualisation of transsexual women, and the literature itself fed into the stereotype of transsexuals as "purposefully deceptive", which could be used to justify discrimination and violence against transsexuals. According to Serano, studies have usually found that some non-androphilic transsexuals report having no autogynephilia.
Bettcher, based on her own experience as a trans woman, has critiqued the notion of autogynephilia, and "target errors" generally, within a framework of "erotic structuralism," arguing that the notion conflates essential distinctions between "source of attraction" and "erotic content," and "(erotic) interest" and "(erotic) attraction," thus misinterpreting what she prefers to call, following Serano, "female embodiment eroticism." She maintains that not only is "an erotic interest in oneself as a gendered being," as she puts it, a non-pathological and indeed necessary component of regular sexual attraction to others, but within the framework of erotic structuralism, a "misdirected" attraction to oneself as postulated by Blanchard is outright nonsensical.
Activist and law professor Florence Ashley writes that the autogynephilia concept has been "discredited", and that Bailey's and Blanchard's work "has long been criticised for perpetuating stereotypes and prejudices against trans women, notably suggesting that LGBQ trans women's primary motivation for transitioning is sexual arousal."
Terminology
The concept that trans people with different sexual orientations are etiologically different goes back to the 1920s, but the terms used have not always been agreed on.
Blanchard said that one of his two types of gender dysphoria/transsexualism manifests itself in individuals who are almost if not exclusively attracted to men, whom he referred to as homosexual transsexuals. Blanchard uses the term "homosexual" relative to the person's sex assigned at birth, not their current gender identity. This use of the term "homosexual" relative to the person's birth sex has been heavily criticized by other researchers. It has been described as archaic, confusing, demeaning, pejorative, offensive, and heterosexist. Benjamin states that trans women can only be "homosexual" if anatomy alone is considered and psyches are ignored; he states that after sex-reassignment surgery, calling a male-to-female transsexual "homosexual" is pedantic and against "reason and common sense". Many authorities, including some supporters, criticize Blanchard's choice of terminology as confusing or degrading because it emphasizes trans women's assigned sex and disregards their sexual orientation identity. Leavitt and Berger write that the term is "both confusing and controversial" and that trans women "vehemently oppose the label and its pejorative baggage."
In 1987, this terminology was included in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) as "transsexual, homosexual subtype". The later DSM-IV (1994) and DSM-IV-TR (2000) stated that a transsexual was to be described as "attracted to males, females, both or neither".
Blanchard defined the second type of transsexual as including those who are attracted almost if not exclusively to females (gynephilic), attracted to both males and females (bisexual), and attracted to neither males nor females (asexual); Blanchard referred to this latter set collectively as the non-homosexual transsexuals. Blanchard says that the "non-homosexual" transsexuals (but not the "homosexual" transsexuals) exhibit autogynephilia, which he defined as a paraphilic interest in having female anatomy.
Alternative terms
Professor of anatomy and reproductive biology Milton Diamond proposed the use of the terms androphilic (attracted to men) and gynephilic (attracted to women) as neutral descriptors for sexual orientation that do not make assumptions about the sex or gender identity of the person being described, alternatives to homosexual and heterosexual. Frank Leavitt and Jack Berger state that the label homosexual transsexual seems to have little clinical merit, as its referents have "little in common with homosexuals, except a stated erotic interest in males"; they too suggest "more neutral descriptive terms such as androphilia". Sexological research has been done using these alternative terms by researchers such as Sandra L. Johnson. Both Blanchard and Leavitt used a psychological test called the "modified androphilia scale" to assess whether a transsexual was attracted to men or not. Sociologist Aaron Devor wrote, "If what we really mean to say is attracted to males, then say 'attracted to males' or androphilic ... I see absolutely no reason to continue with language that people find offensive when there is perfectly serviceable, in fact better, language that is not offensive."
Other traits
According to the typology, autogynephilic transsexuals are attracted to femininity while homosexual transsexuals are attracted to masculinity. However, a number of other differences between the types have been reported. Cantor states that "homosexual transsexuals" usually begin to seek sex reassignment surgery (SRS) in their mid-twenties, while "autogynephilic transsexuals" usually seek clinical treatment in their mid-thirties or even later. Blanchard also states that homosexual transsexuals were younger when applying for sex reassignment, report a stronger cross-gender identity in childhood, have a more convincing cross-gender appearance, and function psychologically better than "non-homosexual" transsexuals. A lower percentage of those described as homosexual transsexuals report being (or having been) married, or report sexual arousal while cross-dressing. Bentler reported that 23% of homosexual transsexuals report a history of sexual arousal to cross-dressing, while Freund reported 31%. In 1990, using the alternative term "androphilic transsexual", Johnson wrote that there was a correlation between social adjustment to the new gender role and androphilia.
Anne Lawrence, a proponent of the concept, argues that homosexual transsexuals pursue sex reassignment surgery out of a desire for greater social and romantic success. Lawrence has proposed that autogynephilic transsexuals are more excited about sexual reassignment surgery than homosexual transsexuals. She states that homosexual transsexuals are typically ambivalent or indifferent about SRS, while autogynephilic transsexuals want to have surgery as quickly as possible, are happy to be rid of their penis, and proud of their new genitals. Lawrence states that autogynephilia tends to appear along with other paraphilias. J. Michael Bailey argued that both "homosexual transsexuals" and "autogynephilic transsexuals" were driven to transition mainly for sexual gratification, as opposed to gender-identity reasons.
Birth order
Blanchard and Zucker state that birth order has some influence over sexual orientation in male-assigned people in general, and androphilic trans women in particular. This phenomenon is called the "fraternal birth order effect". In 2000, Richard Green reported that androphilic trans women tended to have a later-than-expected birth order, and more older brothers than other subgroups of trans women. Each older brother increased the odds that a trans woman was androphilic by 40%.
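The 40% figure corresponds to an odds ratio of about 1.4 per older brother. As a minimal sketch of how such a per-brother odds ratio compounds (the baseline odds used here is a hypothetical placeholder, not a value reported by Green):

```python
def androphilia_odds(base_odds: float, older_brothers: int,
                     odds_ratio: float = 1.4) -> float:
    """Compound a per-older-brother odds ratio onto hypothetical baseline odds."""
    return base_odds * odds_ratio ** older_brothers

# Illustrative only: 0.5 is an assumed baseline, not a reported figure.
for k in range(4):
    print(k, round(androphilia_odds(0.5, k), 3))
```

With two older brothers the odds are multiplied by 1.4² = 1.96, i.e. roughly doubled, which is all the 40%-per-brother claim asserts.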
Transgender men
Blanchard's typology is mainly concerned with transgender women. Richard Ekins and Dave King state that female-to-male transsexuals (trans men) are absent from the typology, while Blanchard, Cantor, and Katherine Sutton distinguish between gynephilic and androphilic trans men. They state that gynephilic trans men are the counterparts of androphilic trans women, that they experience strong childhood gender nonconformity, and that they generally begin to seek sex reassignment in their mid-twenties. They describe androphilic trans men as a rare but distinct group who say they want to become gay men, and, according to Blanchard, are often specifically attracted to gay men. Cantor and Sutton state that while this may seem analogous to autogynephilia, no distinct paraphilia for this has been identified.
Gynephilic transgender men
In 2000, Meredith L. Chivers and Bailey wrote, "Transsexualism in genetic females has previously been thought to occur predominantly in homosexual (gynephilic) women." According to them, Blanchard reported in 1987 that only 1 in 72 trans men he saw at his clinic was primarily attracted to men. They observed that these individuals were so uncommon that some researchers thought that androphilic trans men did not exist, or misdiagnosed them as homosexual transsexuals, attracted to women. They wrote that relatively few studies had examined childhood gender variance in trans men.
In a 2005 study by Smith and van Goozen, their findings with regard to trans men were different from their findings for trans women. Smith and van Goozen's study included 52 female-to-male transsexuals, who were categorized as either homosexual or non-homosexual. Smith concluded that female-to-male transsexuals, regardless of sexual orientation, reported more GID symptoms in childhood, and a stronger sense of gender dysphoria. Smith wrote that she found some differences between homosexual and non-homosexual female-to-male transsexuals, and that homosexual female-to-males reported more gender dysphoria than any group in her study.
Inclusion in the Diagnostic and Statistical Manual of Mental Disorders
In the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) (1980), the diagnosis of "302.5 Transsexualism" was introduced under "Other Psychosexual Disorders". This was an attempt to provide a diagnostic category for gender identity disorders. The diagnostic category, transsexualism, was for gender dysphoric individuals who demonstrated at least two years of continuous interest in transforming their physical and social gender status. The subtypes were asexual, homosexual (same "biological sex"), heterosexual (other "biological sex") and unspecified. This was removed in the DSM-IV, in which gender identity disorder replaced transsexualism. Previous taxonomies, or systems of categorization, used the terms classic transsexual or true transsexual, terms once used in differential diagnoses.
The DSM-IV-TR included autogynephilia as an "associated feature" of gender identity disorder and as a common occurrence in the transvestic fetishism disorder, but does not classify autogynephilia as a disorder by itself.
The paraphilias working group on the DSM-5, chaired by Ray Blanchard, included both with autogynephilia and with autoandrophilia as specifiers to transvestic disorder in an October 2010 draft of the DSM-5. This proposal was opposed by the World Professional Association for Transgender Health (WPATH), citing a lack of empirical evidence for these specific subtypes. WPATH argued that there was no scientific consensus on the concept, and that there was a lack of longitudinal studies on the development of transvestic fetishism. With autoandrophilia was removed from the final draft of the manual. Blanchard later said he had initially included it to avoid criticism: "I proposed it simply in order not to be accused of sexism [...] I don't think the phenomenon even exists." When published in 2013, the DSM-5 included With autogynephilia (sexual arousal by thoughts, images of self as a female) as a specifier to 302.3 Transvestic disorder (intense sexual arousal from cross-dressing fantasies, urges or behaviors); the other specifier is With fetishism (sexual arousal to fabrics, materials or garments).
Societal impact
Litigation
In the 2010 U.S. Tax Court case O'Donnabhain v. Commissioner, the Internal Revenue Service cited Blanchard's typology as justification for denying a transgender woman's tax deductions for medical costs relating to treatment of her gender identity disorder, claiming the procedures were not medically necessary. The court found in favor of the plaintiff, Rhiannon O'Donnabhain, ruling that she should be allowed to deduct the costs of her treatment, including sex reassignment surgery and hormone therapy. In its decision, the court declared the IRS's position "at best a superficial characterization of the circumstances" that was "thoroughly rebutted by the medical evidence".
Anti-LGBT groups
According to the Southern Poverty Law Center (SPLC), autogynephilia has been promoted by anti-LGBT hate groups. These include the Family Research Council (FRC), United Families International (UFI), and the American College of Pediatricians (ACPeds).
Both Blanchard and Bailey have written articles for 4thWaveNow, which the SPLC describes as an anti-trans website.
Nic Rider and Elliot Tebbe characterize Blanchard's theory of autogynephilia as an anti-trans theory that functions to invalidate and delegitimize transgender individuals.
Serano writes that trans-exclusionary radical feminists, self-described as "gender-critical" feminists, have embraced the idea of autogynephilia beginning in the 2000s. One early proponent of autogynephilia was radical feminist Sheila Jeffreys. The concept has been used to imply that trans women are sexually deviant men. The concept of autogynephilia became popular on gender-critical websites such as 4thWaveNow, Mumsnet, and the Reddit community /r/GenderCritical.
See also
Classification of transsexual and transgender people
Autoeroticism
Partialism
Transgender sexuality
List of transgender-related topics
Notes
References
External links
Men and sexuality
Paraphilias
Sexology
Gender identity
Sexual fetishism
Sexual orientation
Sexuality and society
Transgender sexuality
Transgender women-related topics
LGBTQ-related controversies in the United States | Blanchard's transsexualism typology | [
"Biology"
] | 7,009 | [
"Behavioural sciences",
"Behavior",
"Sexology"
] |
5,612,449 | https://en.wikipedia.org/wiki/OC%20Oerlikon | OC Oerlikon is a listed technology group headquartered in Pfäffikon (Schwyz), Switzerland. The name "Oerlikon" (or "œrlikon", as the company styles itself according to its corporate identity) comes from the Oerlikon district in Zurich where the group has its origins.
The roots of today's OC Oerlikon are to be found in Maschinenfabrik Oerlikon, which was established in 1876 and evolved into Oerlikon-Bührle Holding in 1973. Following an extensive restructuring process, the holding was renamed Unaxis at the start of 2000. The Austrian Victory Industriebeteiligung AG acquired a majority share in Unaxis in 2005. New management initiated a restructuring effort that manifested itself in a new name – OC Oerlikon – from the beginning of September 2005. At the end of 2006, the Saurer Group was acquired and integrated into OC Oerlikon. As of today, a position of around 41% is held by Liwet Holding AG, of which Victor Vekselberg is one of the beneficial owners.
Corporate affairs
Corporate structure
OC Oerlikon currently has two Divisions:
Surface Solutions Division (Oerlikon Balzers, Oerlikon Metco and Oerlikon AM)
Polymer Processing Division (Oerlikon Barmag, Oerlikon Neumag and Oerlikon Nonwoven)
The following divisions of the Oerlikon Components business unit, which were designated as not in line with the company's core business, were divested in 2009: Oerlikon Esec (semiconductors) was sold in April 2009 to the Dutch company BE Semiconductor Industries (Besi), and Oerlikon Space (aerospace engineering) was sold off in June 2009 to RUAG Holding. The last remaining company in the Oerlikon Optics business unit (optics) – Oerlikon Optics Shanghai – was sold off in mid-August 2009 to the British company EIS Optics, which had been newly set up by the London-based private equity firms Nova Capital Management Limited and FF&P Private Equity Limited. Since that time, Oerlikon Components has operated as the Advanced Technologies Segment.
On March 2, 2012, Oerlikon signed a contract of sale with Tokyo Electron for its solar division, Oerlikon Solar, which employed 675 people at eight locations around the world, including its headquarters in Trübbach, Switzerland, near the border to Liechtenstein. Tokyo Electron Limited is one of the world's leading suppliers of semiconductor production equipment and is active in R&D, manufacturing and sales in a wide range of product fields. The sale of Oerlikon Solar was finalized on November 27, 2012.
On December 3, 2012, Oerlikon announced that it was selling two business units from its Textile Segment – Natural Fibers and Textile Components – to the Chinese Jinsheng Group, to allow the Textile Segment to focus on its manmade fibers business. The sale of the Natural Fibers business unit was finalized on July 4, 2013.
On June 2, 2014, Oerlikon acquired Metco from Sulzer AG, integrating it into its existing Coating Segment to create a global provider of surface solutions in the form of the Surface Solutions Segment, currently the company's largest Segment.
On December 23, 2014, Oerlikon announced that it had reached an agreement on the sale of its Advanced Technologies Segment to the Swiss company Evatec AG. This sale was closed ahead of schedule and the Segment's 200 employees and all of its assets were successfully transferred to Evatec on February 3, 2015.
In July 2016 Oerlikon announced that all approvals for a strategic divestment of Oerlikon's Vacuum business to Atlas Copco have been received. The transaction was successfully closed on August 31, 2016.
In November 2016, Oerlikon announced that they will be building a new manufacturing facility in Plymouth Township, Michigan. This facility will be used to produce materials for additive manufacturing and surface coatings. There will also be a research & development (R&D) lab for further developments of titanium and other alloys at this facility.
In July 2018, Oerlikon announced that it had signed a definitive agreement for the sale of its Drive Systems Segment to Dana Inc. The transaction was closed on February 28, 2019.
Ownership structure
The ownership structure of OC Oerlikon has gone through numerous drastic changes over the years, partly due to some large options transactions. When the company was recapitalized, its shareholder structure changed once again. Oerlikon's Annual Report 2021 gives the shareholder structure as follows:
Board of directors
The board of directors is responsible for the supervision and strategic management of the company. At Oerlikon's 42nd Annual General Meeting of Shareholders, Michael Süss was appointed chairman of the board of directors. The current members of the Board of Directors of OC Oerlikon Corporation AG, Pfäffikon, are:
Michael Süss, Executive Chairman
Gerhard Pegam, Member and Vice Chairman of the Board of Directors
Irina Matveeva, Member of the Board of Directors
Alexey Moskov, Member of the Board of Directors
Geoffery Merszei, Member of the Board of Directors
Zhenguo Yao, Member of the Board of Directors
Paul Adams, Member of the Board of Directors
Although Süss, Moskov and Matveeva represent the interests of Liwet AG, all seven members of the board are independent as defined by the Swiss Code of Best Practice for Corporate Governance.
Executive committee
In May 2004, Thomas Limberger was appointed to the board of directors; in June 2005, at the request of the majority shareholder, Victory Industriebeteiligung, he was named CEO of Unaxis as the company was known at the time. Under Limberger, two further roles were added to the executive committee in February 2007: General Counsel (filled by Bjoern Bajan), and COO (Uwe Krüger). Limberger stayed with the company until May 2007, when he resigned as CEO and from the board of directors and joined Von Roll Holding.
Upon his resignation, Limberger was replaced as CEO by Uwe Krüger, whose previous position as COO remained vacant until September 2008, when it was filled by Thomas Babacan. At the presentation of the interim financial results on August 25, 2009, it was announced that Uwe Krüger would be leaving the company with immediate effect. The position of CEO was temporarily filled by Hans Ziegler, who had been a member of the Board of Directors since May 2008.
In May 2010, Michael Buscher assumed the role of CEO, relieving Hans Ziegler as interim CEO. Adrian Cojocaru was appointed Chief HR Officer in November 2010. The position of Chief Restructuring Officer, which had been created during the restructuring process and was occupied by Raafat Morcos, was abolished again following the end of the restructuring program on August 17, 2011. The post of Chief Operating Officer that had been left vacant following the departure of Thomas Babacan on December 31, 2011, was not occupied again. The Chief HR Officer, Adrian Cojocaru, left the company at the end of 2012.
CEO Michael Buscher left the company in March 2013. He was replaced as CEO on an interim basis by CFO Jürg Fedier, who at that time was the only member of the executive committee.
Brice Koch took up the position of CEO of OC Oerlikon in January 2014, allowing Jürg Fedier to focus on his original role as CFO.
In March 2016, Oerlikon announced former Siemens AG manager Roland Fischer would replace Brice Koch as CEO.
With the publication of the 2021 annual results in March 2022, OC Oerlikon announced that Roland Fischer was stepping down as CEO for private reasons, and that Michael Süss would assume the executive chair role. Since July 2022, Süss has been executive chairman of OC Oerlikon, overseeing all group-level management topics and leading the executive committee in addition to his role as chairman of Oerlikon's board of directors.
The current members of the executive committee of OC Oerlikon are:
Michael Süss, Executive Chairman
Philipp Müller, CFO
Anna Ryzhova, Chief Human Resources Officer
Markus Tacke, CEO Surface Solutions Division
Georg Stausberg, CEO Polymer Processing Division
History
Oerlikon-Bührle
The foundations for OC Oerlikon were laid when Oerlikon-Bührle Holding AG was established in 1973. At its peak in 1980, the holding company had 37,000 employees. At the start of the 1980s, the group already had an aerospace division (set up in 1964 as part of Contraves AG) and a thin-film/vacuum technology division, which it had had since 1976 when the company took over Balzers AG. Due to poor performance, in 1991 the group was forced to concentrate on certain divisions only. The decision was taken to restructure the company and focus exclusively on technology. This narrowing of the company's focus was driven by the 1994 takeover of the Leybold Group, a specialist in vacuum technology, which was merged with Balzers to form Balzers & Leybold, a leading company in thin-film technology, which would later become Oerlikon-Bührle's core business.
The biggest turning point for the company came in 1999 with the sale of various core businesses and virtually all interests in other companies that no longer fitted Oerlikon's new business concept. The company sold its arms division, Oerlikon Contraves Defence, to the German Rheinmetall DeTec, where it now operates under the new name of Rheinmetall Air Defence AG. Oerlikon-Bührle Immobilien AG was sold to Allreal Holding where it acquired its present name of Allreal Generalunternehmung AG. The shoe and accessories manufacturer Bally was sold to the US-based Texas Pacific Group. Oerlikon-Bührle was renamed Unaxis in January 2000.
Unaxis
In mid-2000, Unaxis acquired a majority share in the semiconductor manufacturer Esec AG. Toward the end of that year, it sold Pilatus Flugzeugwerke AG – the last company that did not fit in with the rest of Unaxis' technology portfolio. In December 2001, Unaxis spun off Leybold Optics once again, but retained the vacuum technology division.
At the beginning of 2004, Unaxis was restructured into five Segments: Semiconductor Equipment, Data Storage Solutions, Coating Services, Vacuum Solutions and Components and Special Systems. A merger in March 2004 placed Esec entirely under the ownership of Unaxis. Poor performance in FY 2004 from the semiconductor division of Esec meant losses of CHF 372 million for Unaxis and a slump in its share price. The Esec business unit was eventually sold again in April 2009.
In June 2005, Unaxis' new majority shareholder, the Austrian firm Victory Industriebeteiligung AG, called an extraordinary general meeting where it replaced virtually all of the group's management team. This also saw Thomas Limberger appointed as the new Unaxis CEO. The new management team succeeded in reducing losses massively in 2005, and expressed a desire to abandon the abstract company name "Unaxis" and bring back a well-established company name from the past.
In 2006, the Russian oligarch Viktor Vekselberg acquired a substantial share in the company. At the annual general meeting in May 2006, a suggestion that "Oerlikon" – the name of the village where Werkzeugmaschinenfabrik Oerlikon was founded – be used as part of the company's name was approved. Rheinmetall reacted strongly against the use of the abbreviation OC because of the potential confusion with its subsidiary Oerlikon Contraves, whose abbreviation is also OC. Unaxis was renamed Oerlikon – formally OC Oerlikon Corporation AG – with effect from the beginning of September 2006.
Naming disputes
The renaming of Unaxis was delayed due to various objections from Rheinmetall and its subsidiaries. When Oerlikon Contraves was sold in 1999, the then Oerlikon-Bührle secured the right to continue using the protected name Contraves (as in Contraves Space). Nothing was agreed regarding "Oerlikon", however, since it is the name of an actual location – Oerlikon was a village that became a district of Zurich in 1934 – and so appears in the names of several dozen companies. For these reasons, for legal purposes the company "Oerlikon" officially refers to itself as OC Oerlikon Corporation, and has only trademarked the new Oerlikon wordmark and logo. At the beginning of September 2006, the media started referring to the company primarily as "Unaxis-Oerlikon". OC Oerlikon Corporation AG was successfully entered in the commercial register in March 2006 (before the company had started operating under that name) and the renaming of Unaxis Management AG to OC Oerlikon Management AG was entered in the commercial register in May 2006.
All disputes regarding company names were settled in the third quarter of 2006, and in September 2006, Unaxis Holding AG was officially renamed OC Oerlikon Corporation AG. In December of that year, Unaxis Schweiz AG (formerly Esec SA) was renamed Oerlikon Assembly Equipment AG.
Takeover of Saurer
The investment firm Laxey attempted to replace the management of the textile machine manufacturer Saurer AG at an extraordinary general meeting, but the attempt was thwarted at an early stage. This was due to negative reporting in the media and the subsequent surprise intervention of Victory Industriebeteiligung and OC Oerlikon, at that time only a few days old. Oerlikon acquired Laxey's entire share package. With Oerlikon now holding a majority share in Saurer, the obligatory takeover bid was made to the remaining Saurer shareholders.
Restructuring and recapitalization
In 2008 and 2009, the group suffered the full force of the recession that followed in the wake of the global financial and economic crisis. Demand and sales dropped substantially, particularly in the Textile Segment, but also in the company's other Segments. Hans Ziegler was appointed as interim CEO to undertake a major restructuring of the group. According to Oerlikon's Annual Report 2009, more than 2,500 employees were laid off in 2009, and the workforce declined by a further 1,100 as a result of companies being sold off. A comprehensive restructuring of the group's finances was also necessary. Following lengthy negotiations, an agreement was reached with Oerlikon's main shareholder, Renova, and the lending banks, which was approved by the company's shareholders at the annual general meeting on May 18, 2010.
The key points of the recapitalization were a reduction in equity capital through a par value reduction from CHF 20 to CHF 1, followed by a capital increase with a subscription right offer and the issue of options for shareholders. CHF 125 million of the company's debts were erased and an old credit facility was replaced with a new contract for three tranches totaling CHF 1.48 billion. The recapitalization resulted in a reduction in debt of CHF 998 million and liquid assets of CHF 276 million for the group.
Transformation of company through acquisitions and divestments
On November 22, 2011, the group reorganized its largest and most important Segment, the Textile Segment, combining the existing five business units into three: Manmade Fibers (formerly Oerlikon Barmag and Oerlikon Neumag), Natural Fibers (formerly Oerlikon Schlafhorst and Oerlikon Saurer) and Textile Components (formerly Oerlikon Textile Components).
In the course of this reorganization process, the Segment's upper management was gradually relocated to Shanghai, and hence to the world's largest textile market. The new CEO of the Segment, Clement Woon, is from Singapore.
On December 3, 2012, Oerlikon announced that it was selling two business units from its Textile Segment – Natural Fibers and Textile Components – to the Chinese Jinsheng Group. The sale of the Natural Fibers business unit was finalized on July 4, 2013. Within textile machinery construction, the group plans to concentrate in the future on production machinery for manmade fibers.
At the beginning of 2014, OC Oerlikon acquired Sulzer Metco from Sulzer AG.
On November 20, 2015, Oerlikon announced its intention to sell its Vacuum Segment to Atlas Copco. The transaction was closed on August 31, 2016.
In July 2018, Oerlikon announced that it was selling its Drive Systems Segment to Dana Inc. The sale was finalized on February 28, 2019.
External links
Official website
References
Technology companies established in 1973
Textile machinery manufacturers
Manufacturing companies of Switzerland
Equipment semiconductor companies
Manufacturers of industrial automation
Companies listed on the SIX Swiss Exchange
Swiss brands
Renova Group
Swiss companies established in 1973
Companies based in the canton of Schwyz | OC Oerlikon | [
"Engineering"
] | 3,583 | [
"Equipment semiconductor companies",
"Semiconductor fabrication equipment"
] |
5,612,520 | https://en.wikipedia.org/wiki/Eduardo%20D.%20Sontag | Eduardo Daniel Sontag (born April 16, 1951, in Buenos Aires, Argentina) is an Argentine-American mathematician, and distinguished university professor at Northeastern University, who works in the fields of control theory, dynamical systems, systems molecular biology, cancer and immunology, theoretical computer science, neural networks, and computational biology.
Biography
Sontag received his Licenciado degree from the mathematics department at the University of Buenos Aires in 1972, and his Ph.D. in Mathematics under Rudolf Kálmán at the Center for Mathematical Systems Theory at the University of Florida in 1976.
From 1977 to 2017, he was with the department of mathematics at Rutgers, The State University of New Jersey, where he was a Distinguished Professor of Mathematics as well as a Member of the Graduate Faculty of the Department of Computer Science and the Graduate Faculty of the Department of Electrical and Computer Engineering, and a Member of the Rutgers Cancer Institute of NJ. In addition, Dr. Sontag served as the head of the undergraduate Biomathematics Interdisciplinary Major, director of the Center for Quantitative Biology, and director of graduate studies of the Institute for Quantitative Biomedicine. In January 2018, Dr. Sontag was appointed as a University Distinguished Professor in the Department of Electrical and Computer Engineering and the Department of BioEngineering at Northeastern University, where he is also an affiliate member of the Department of Mathematics and the Department of Chemical Engineering. Since 2006, he has been a research affiliate at the Laboratory for Information and Decision Systems, MIT, and since 2018 he has been a member of the faculty in the Program in Therapeutic Science, Laboratory for Systems Pharmacology at Harvard Medical School.
Eduardo Sontag has authored over five hundred research papers and monographs and book chapters in the above areas with about 60,000 citations and an h-index of 104. He is on the editorial board of several journals, including: IET Proceedings Systems Biology, Synthetic and Systems Biology, International Journal of Biological Sciences, and Journal of Computer and Systems Sciences, and is a former board member of SIAM Review, IEEE Transactions on Automatic Control, Systems and Control Letters, Dynamics and Control, Neurocomputing, Neural Networks, Neural Computing Surveys, Control-Theory and Advanced Technology, Nonlinear Analysis: Hybrid Systems, and Control, Optimization and the Calculus of Variations. In addition, he is a co-founder and co-Managing Editor of Mathematics of Control, Signals, and Systems.
Sontag was married to Frances David-Sontag, who died in 2017. His daughter Laura Kleiman is founder and CEO at Reboot Rx, and his son David Sontag leads the MIT Clinical Machine Learning Group.
Work
His work in control theory led to the introduction of the concept of input-to-state stability (ISS), a stability theory notion for nonlinear systems, and control-Lyapunov functions. Many of the subsequent results were proved in collaboration with his student Yuan Wang and with David Angeli. In systems biology, Sontag introduced together with David Angeli the concept of input/output monotone system. In theory of computation, he proved the first results on computational complexity in nonlinear controllability, and introduced together with his student Hava Siegelmann a new approach to analog computation and super-Turing computing.
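As a sketch of the standard formulation (stated here from the general control-theory literature, not from this article), input-to-state stability bounds the state by a decaying function of the initial condition plus a gain on the input:

```latex
\dot{x} = f(x,u) \text{ is ISS if there exist } \beta \in \mathcal{KL} \text{ and } \gamma \in \mathcal{K} \text{ such that}
\quad |x(t)| \le \beta\bigl(|x(0)|,\, t\bigr) + \gamma\bigl(\|u\|_{\infty}\bigr) \quad \text{for all } t \ge 0.
```

With zero input this reduces to global asymptotic stability, while bounded inputs yield bounded states, which is why ISS unifies state-space and input–output notions of stability.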
Awards and honors
Sontag became an Institute of Electrical and Electronics Engineers (IEEE) Fellow in 1993. He was awarded the Reid Prize in Mathematics in 2001, the 2002 Hendrik W. Bode Lecture Prize from the IEEE, the 2002 Board of Trustees Award for Excellence in Research from Rutgers University, the 2005 Teacher/Scholar Award from Rutgers University, and the 2011 IEEE Control Systems Award.
In 2022, he was awarded the Richard E. Bellman Control Heritage Award, which is the highest recognition in control theory and engineering in the United States. He was honored “for pioneering contributions to stability analysis and nonlinear control, and for advancing the control theoretic foundations of systems biology.”
In 2011 he became a fellow of the Society for Industrial and Applied Mathematics, in 2012 a fellow of the American Mathematical Society, and in 2014 a fellow of the International Federation of Automatic Control.
Sontag was elected a Member of the American Academy of Arts and Sciences in April 2024. He is a collaborator of the IBS Biomedical Mathematics Group.
Publications
Sontag is co-author of several hundred research papers, as well as three books:
1972, Topics in Artificial Intelligence (in Spanish, Buenos Aires: Prolam, 1972)
1979, Polynomial Response Maps (Berlin: Springer, 1979).
1998, Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd Edition (Texts in Applied Mathematics, Volume 6, Second Edition, New York: Springer, 1998)
Selected Public Research Rankings
Research.com top 100 US electrical engineers.
Research.com top 100 US mathematicians.
Most-cited author in: IEEE Transactions on Automatic Control in 1981, 1996, 1997; Systems and Control Letters 1989, 1991, 1995, 1998, and lifetime of journal; SIAM Journal on Control and Optimization 1983, 1986; Theoretical Computer Science 1994; as well as many other journal/years.
Elsevier/Stanford list of top 0.5% among 2% top scientists worldwide.
MathSciNet list of the three most-cited applied mathematicians who received their PhD in 1976.
References
External links
Link to Eduardo Sontag's Homepage
1951 births
Living people
20th-century American mathematicians
21st-century American mathematicians
American people of Argentine descent
Argentine mathematicians
Control theorists
Fellows of the American Mathematical Society
Fellows of the Society for Industrial and Applied Mathematics
Rutgers University faculty
Systems biologists
University of Florida alumni
Northeastern University faculty
University of Buenos Aires alumni | Eduardo D. Sontag | [
"Engineering"
] | 1,133 | [
"Control engineering",
"Control theorists"
] |
5,612,656 | https://en.wikipedia.org/wiki/Bioluminescence%20imaging | Bioluminescence imaging (BLI) is a technology developed over the past decades (1990s and onward) that allows for the noninvasive study of ongoing biological processes. Recently, bioluminescence tomography (BLT) has become possible and several systems have become commercially available. In 2011, PerkinElmer acquired one of the most popular lines of optical imaging systems with bioluminescence from Caliper Life Sciences.
Background
Bioluminescence is the process of light emission in living organisms. Bioluminescence imaging utilizes the native light emission of bioluminescent organisms, which is produced by luciferase enzymes. The three main sources are the North American firefly, the sea pansy (and related marine organisms), and bacteria such as Photorhabdus luminescens and Vibrio fischeri. The DNA encoding the luminescent protein is incorporated into the laboratory animal either via a viral vector or by creating a transgenic animal. Rodent models of cancer spread can be studied through bioluminescence imaging, e.g. in mouse models of breast cancer metastasis.
Systems derived from the three groups above differ in key ways:
Firefly luciferase requires D-luciferin to be injected into the subject prior to imaging. The peak emission wavelength is about 560 nm. Due to the attenuation of blue-green light in tissues, the red-shift (compared to the other systems) of this emission makes detection of firefly luciferase much more sensitive in vivo.
Renilla luciferase (from the Sea pansy) requires its substrate, coelenterazine, to be injected as well. As opposed to luciferin, coelenterazine has a lower bioavailability (likely due to MDR1 transporting it out of mammalian cells). Additionally, the peak emission wavelength is about 480 nm.
Bacterial luciferase has an advantage in that the lux operon used to express it also encodes the enzymes required for substrate biosynthesis. Although originally believed to be functional only in prokaryotic organisms, where it is widely used for developing bioluminescent pathogens, it has been genetically engineered to work in mammalian expression systems as well. This luciferase reaction has a peak wavelength of about 490 nm.
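The differences among the three systems can be captured in a small lookup table. A minimal Python sketch follows (the dictionary layout and helper function are illustrative, not a standard API; the wavelength and substrate values are taken from the text above):

```python
# Illustrative summary of the three luciferase systems described above.
# Peak wavelengths and substrates are from the text; the structure is a sketch.
LUCIFERASE_SYSTEMS = {
    "firefly": {"substrate": "D-luciferin", "peak_nm": 560,
                "note": "red-shifted emission; most sensitive in vivo"},
    "renilla": {"substrate": "coelenterazine", "peak_nm": 480,
                "note": "substrate has lower bioavailability (MDR1 efflux)"},
    "bacterial": {"substrate": "synthesized by lux operon", "peak_nm": 490,
                  "note": "no exogenous substrate injection required"},
}

def reddest_system():
    """Return the system with the longest (most red-shifted) peak wavelength,
    i.e. the one least attenuated by tissue."""
    return max(LUCIFERASE_SYSTEMS, key=lambda k: LUCIFERASE_SYSTEMS[k]["peak_nm"])
```

As the text notes, the red-shifted firefly system is the most sensitive in vivo, which is exactly what the helper selects.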
While the total amount of light emitted from bioluminescence is typically small and not detected by the human eye, an ultra-sensitive CCD camera can image bioluminescence from an external vantage point.
Applications
Common applications of BLI include in vivo studies of infection (with bioluminescent pathogens), cancer progression (using a bioluminescent cancer cell line), and reconstitution kinetics (using bioluminescent stem cells).
Researchers at UT Southwestern Medical Center have shown that bioluminescence imaging can be used to determine the effectiveness of cancer drugs that choke off a tumor's blood supply. The technique requires luciferin to be added to the bloodstream, which carries it to cells throughout the body. When luciferin reaches cells that have been altered to carry the firefly gene, those cells emit light.
The BLT inverse problem of 3D reconstruction of the distribution of bioluminescent molecules from data measured on the animal surface is inherently ill-posed. The first small animal study using BLT was conducted by researchers at the University of Southern California, Los Angeles, USA in 2005. Following this development, many research groups in the USA and China have built systems that enable BLT.
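Ill-posedness of this kind is commonly mitigated by regularization. As a toy illustration only (not the actual BLT reconstruction used by these groups), a Tikhonov-regularized least-squares solve for a two-unknown system can be sketched in pure Python:

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 for a two-unknown system.

    Toy-scale illustration of how regularization stabilizes an ill-posed
    inverse problem; A is a list of rows. The normal equations
    (A^T A + lam * I) x = A^T b are solved in closed form for 2x2.
    """
    # Build M = A^T A + lam * I and v = A^T b.
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    v0 = sum(r[0] * y for r, y in zip(A, b))
    v1 = sum(r[1] * y for r, y in zip(A, b))
    # Cramer's rule on the symmetric 2x2 system.
    det = m00 * m11 - m01 * m01
    return ((m11 * v0 - m01 * v1) / det, (m00 * v1 - m01 * v0) / det)
```

With a nearly singular matrix, a small positive `lam` keeps the solution bounded where the unregularized solve would blow up; the same principle, at much larger scale, underlies practical BLT reconstruction.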
Mustard plants have had the gene that makes fireflies' tails glow added to them so that the plants glow when touched. The effect lasts for an hour, but an ultra-sensitive camera is needed to see the glow.
Autoluminograph
An autoluminograph is a photograph produced by placing a light emitting object directly on a piece of film. A famous example is an autoluminograph published in Science magazine in 1986 of a glowing transgenic tobacco plant bearing the luciferase gene of fireflies placed on Kodak Ektachrome 200 film.
Induced metabolic bioluminescence imaging
Induced metabolic bioluminescence imaging (imBI) is used to obtain a metabolic snapshot of biological tissues. Metabolites that may be quantified through imBI include glucose, lactate, pyruvate, ATP, glucose-6-phosphate, or D-2-hydroxyglutarate. imBI can be used to determine the lactate concentration of tumors or to measure the metabolism of the brain.
References
Further reading
Photographic processes
Bioluminescence
Imaging | Bioluminescence imaging | [
"Chemistry",
"Biology"
] | 931 | [
"Biochemistry",
"Luminescence",
"Bioluminescence"
] |
5,612,879 | https://en.wikipedia.org/wiki/Moisture%20sorption%20isotherm | The relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with monolayer adsorption occurring first, followed by multilayer adsorption, and finally capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
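The BET relationship itself is commonly written as v = v_m·c·x / [(1 − x)(1 + (c − 1)x)], where x is the relative pressure (for water vapor, the water activity), v_m the monolayer capacity, and c an energy constant. A small Python sketch, with purely illustrative parameter values (v_m and c must be fitted per material):

```python
def bet_moisture_content(aw, v_m, c):
    """BET isotherm: moisture content at water activity aw (= p/p0).

    v_m is the monolayer moisture content and c the BET energy constant;
    both are material-specific fitted parameters (values used here are
    illustrative only). The BET model fits best at low water activity.
    """
    if not 0.0 <= aw < 1.0:
        raise ValueError("water activity must be in [0, 1)")
    return v_m * c * aw / ((1.0 - aw) * (1.0 + (c - 1.0) * aw))
```

For example, with the illustrative values v_m = 5 and c = 10, the predicted moisture content rises nonlinearly with water activity, reproducing the non-linear relationship described above.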
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the sorption of water vapor, on the vertical axis, provide the ratio of the weight of water adsorbed divided by its dry weight, or that ratio converted into a percentage. On the horizontal axis they provide relative humidity or water activity of the air presented to the material.
Sorption isotherms are so named because the equilibrium is established at a constant temperature, which should be specified. Normally, materials hold less moisture when they are hotter and more moisture when they are colder. Occasionally, a set of isotherms is provided on one graph, with each curve at a different temperature. Such a set of adsorption isotherms is provided in Figure 3, as measured by Dini on a Type V silica gel.
See also
Desiccant
Food chemistry
Moisture vapor transmission rate
Shelf life
References
Food technology
Pharmaceutical industry | Moisture sorption isotherm | [
"Chemistry",
"Biology"
] | 568 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
5,613,018 | https://en.wikipedia.org/wiki/Mubtakkar | Mubtakkar is an Arabic word () with related meanings that translate into English as "invention", "initiative", or "inventive". The word was reportedly used by Al-Qaeda to describe a poison gas weapon developed and intended for use in an attack in the New York City Subway. According to author Ron Suskind, in his book The One Percent Doctrine: Deep Inside America's Pursuit of Its Enemies Since 9/11, the plan for this attack was called off about forty-five days before execution by Al-Qaeda commander Ayman al-Zawahiri.
The mubtakkar is described as a small binary chemical device that would generate large amounts of hydrogen cyanide gas, which could potentially kill hundreds in an enclosed space. The components contained in two separate containers would not be lethal to humans if individually released, so these bombs can be assembled, stored, and transported without appreciable danger. However, when the device is put into operation it releases large quantities of a lethal gas.
For further information, see Ron Suskind's The One Percent Doctrine, p. 192ff.
External links
Time Magazine book excerpt from The One Percent Doctrine
Chemical weapon delivery systems | Mubtakkar | [
"Chemistry"
] | 248 | [
"Chemical weapon delivery systems",
"Chemical weapons"
] |
5,613,415 | https://en.wikipedia.org/wiki/Rosocyanine | Rosocyanine and rubrocurcumin are two red colored materials, which are formed by the reaction between curcumin and borates.
Application
The color reaction between borates and curcumin is used for the spectrophotometric determination and quantification of boron present in food or materials. Curcumin is a yellow natural pigment found in the root stocks of some Curcuma species, especially Curcuma longa (turmeric), in concentrations of up to 3%. In the so-called curcumin method for boron quantification it serves as a reaction partner for boric acid. The reaction is very sensitive, so even the smallest quantities of boron can be detected. The absorbance maximum of rosocyanine at 540 nm is used in this colorimetric method. The formation of rosocyanine depends on the reaction conditions. The reaction is carried out preferentially in acidic solutions containing hydrochloric or sulfuric acid. The color reaction also takes place under other conditions; however, in alkaline solution, gradual decomposition is observed. At higher pH values the reaction may also be disturbed by interference from other compounds.
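The colorimetric step rests on the Beer-Lambert law, A = ε·l·c: absorbance measured at the 540 nm maximum is proportional to the concentration of the colored complex. A minimal Python sketch (the function name and any numeric values are illustrative; the molar absorptivity ε must come from instrument calibration, as no value is given in the text):

```python
def boron_concentration(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert estimate: c = A / (epsilon * l).

    absorbance: measured at 540 nm (the rosocyanine maximum).
    epsilon: molar absorptivity of the complex in L/(mol*cm), obtained
    from a calibration curve (illustrative; no value implied by the text).
    path_cm: cuvette path length in cm. Returns molar concentration.
    """
    if absorbance < 0 or epsilon <= 0 or path_cm <= 0:
        raise ValueError("absorbance must be >= 0; epsilon and path > 0")
    return absorbance / (epsilon * path_cm)
```

In practice a calibration series of known boron standards fixes ε, and unknown samples are read off the resulting line.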
Rosocyanine is formed as a 2:1 complex from curcumin and boric acid in acidic solutions. The boron complexes formed with rosocyanine are dioxaborines (here a 1,3,2-dioxaborine). Curcumin possesses a 1,3-diketone structure and can therefore be considered as a chelating agent. Unlike the simpler 1,3-diketone–containing compound acetylacetone (which forms acetylacetonate complexes with metals), the entire skeleton of curcumin is in resonance with the 1,3-dicarbonyl section, making the backbone an extended conjugated system. Investigations of the structure have shown that the positive charge is distributed throughout the molecule. In rosocyanine, the two curcumin moieties are not coplanar but rather perpendicular relative to one another (as seen in the 3D model), as a result of the tetrahedral geometry of tetracoordinate boron. The same applies to rubrocurcumin.
In order to exclude the presence of other materials during the boron quantification using the curcumin method, a variant of the process was developed. In this process, 2,2-dimethyl-1,3-hexanediol or 2-ethyl-1,3-hexanediol are added, in addition to curcumin, to a neutral solution of the boron-containing sample. The complex formed between boron and the 1,3-hexanediol derivative is removed from the aqueous solution by extraction into an organic solvent. Acidification of the organic phase yields rosocyanine, which can be detected by colorimetric methods. The reaction of curcumin with borates in the presence of oxalic acid produces the coloring compound rubrocurcumin.
Characteristics
Rosocyanine is a dark green solid with a glossy, metallic shine that forms red colored solutions. It is almost insoluble in water and some organic solvents, very slightly soluble (up to 0.01%) in ethanol, and somewhat soluble (approximately 1%) in pyridine, sulfuric acid, and acetic acid. An alcoholic solution of rosocyanine temporarily turns deep blue on treatment with alkali.
In rubrocurcumin one molecule of curcumin is replaced with oxalic acid. Rubrocurcumin produces a similar red colored solution. Rosocyanine is an ionic compound, while rubrocurcumin is a neutral complex.
See also
Curcuminoids
References
Curcuminoid dyes
Tetrahydroxyborate esters
Complexometric indicators
Oxycations | Rosocyanine | [
"Chemistry",
"Materials_science"
] | 829 | [
"Complexometric indicators",
"Chromism"
] |
5,613,537 | https://en.wikipedia.org/wiki/Squares%20of%20Savannah%2C%20Georgia | The city of Savannah, Province of Georgia, was laid out in 1733, in what was colonial America, around four open squares, each surrounded by four residential ("tithing") blocks and four civic ("trust") blocks. The layout of a square and eight surrounding blocks was known as a "ward." The original plan (now known as the Oglethorpe Plan) was part of a larger regional plan that included gardens, farms, and "outlying villages." Once the four wards were developed in the mid-1730s, two additional wards were laid out. Oglethorpe's agrarian balance was abandoned after the Georgia Trustee period. Additional squares were added during the late 18th and 19th centuries, and by 1851 there were 24 squares in the city. In the 20th century, three of the squares were demolished or altered beyond recognition, leaving 21. In 2010, one of the three "lost" squares, Ellis, was reclaimed, bringing the total to today's 22.
Most of Savannah's squares are named in honor or in memory of a person, persons or historical event; many contain monuments, markers, memorials, statues, plaques, and other tributes. The statues and monuments were placed in the squares partly to protect the squares from demolition.
Today, the area is part of a large urban preservation district known as the Savannah Historic District.
Overview
The city of Savannah was founded in 1733 by General James Oglethorpe. Although cherished by many today for their aesthetic beauty, the first squares were originally intended to provide colonists space for practical reasons such as militia training exercises. The original plan resembles the layout of contemporary military camps, which were likely quite familiar to General Oglethorpe. The layout was also a reaction against the cramped conditions that fueled the Great Fire of London in 1666. A square was established for each ward of the new city. The first four were Johnson, Perceval (now Wright), Ellis, and St. James (now Telfair) Squares, which themselves formed a larger square on the bluff overlooking the Savannah River. The original plan actually called for six squares, and as the city grew the grid of wards and squares was extended so that 24 squares were eventually created on a five-by-six grid. (Two points on this grid were occupied by Colonial Park Cemetery, established in 1750, and four others—in the southern corners of the downtown area—were never developed with squares.) When the city began to expand south of Gaston Street, the grid of squares was abandoned and Forsyth Park was allowed to serve as a single, centralized park for that area.
All of the squares measure approximately from east to west, but they vary north to south from approximately 100 to . Typically, each square is intersected north-south and east-west by wide, two-way streets. They are bounded to the west and east by the south- and north-bound lanes of the intersecting north-south street, and to the north and south by smaller one-way streets running east-to-west and west-to-east, respectively. As a result, traffic flows one way—counterclockwise—around the squares, which thus function much like traffic circles.
Each square sits (or, in some cases, sat) at the center of a ward, which often shares its name with its square. The lots to the east and west of the squares, flanking the major east-west axis, were considered "trust lots" in the original city plan and intended for large public buildings such as churches, schools, or markets. The remainder of the ward was divided into four areas, called tithings, each of which was further divided into ten residential lots. This arrangement is illustrated in the 1770 Plan of Savannah, reproduced here, and remains readily visible in the modern aerial photograph above. The distinction between trust lot and residential lot has always been fluid. Some grand homes, such as the well-known Mercer House, stand on trust lots, while many of the residential lots have long hosted commercial properties.
All of the squares are a part of the Savannah Historic District and fall within an area of less than one half square mile. The five squares along Bull Street—Monterey, Madison, Chippewa, Wright, and Johnson—were intended to be grand monument spaces and have been called Savannah's "Crown Jewels." Many of the other squares were designed more simply as commons or parks, although most serve as memorials as well.
Architect John Massengale has called Savannah's city plan "the most intelligent grid in America, perhaps the world", and Edmund Bacon wrote that "it remains as one of the finest diagrams for city organization and growth in existence." The American Society of Civil Engineers has honored Oglethorpe's plan for Savannah as a National Historic Civil Engineering Landmark, and in 1994 the plan was nominated for inclusion in the UNESCO World Heritage List. The squares are a major point of interest for millions of tourists visiting Savannah each year, and they have been credited with stabilizing once-deteriorating neighborhoods and revitalizing Savannah's downtown commercial district.
First four squares, 1733
The first four squares were laid out by James Oglethorpe in 1733, the same year in which he founded the Georgia colony and the city of Savannah.
Johnson Square
Johnson Square was the first of Savannah's squares, and remains the largest of the 22. It was named for Robert Johnson, colonial governor of South Carolina and a friend of General Oglethorpe. Interred under the Nathanael Greene Monument in the square is Revolutionary War hero General Nathanael Greene, the namesake of nearby Greene Square.
Johnson Square contains two fountains, as well as a sundial dedicated to Colonel William Bull, the namesake of Savannah's Bull Street.
Another landmark of Johnson Square is the Johnson Square Business Center. This building, formerly known as the Savannah Bank Building, was the city's first "skyscraper", built in 1911. Johnson Square is known as the financial district, or banking square, and many of the City's financial services companies are located here. These companies include the Savannah Bancorp, Savannah Bank, Coastal Bank Headquarters, Bank of America branch, SunTrust branch, United Community Bank branch, TitleMax Corporate Headquarters, and a Regions Bank building.
Johnson Square is also home to Christ Church, "the Mother Church of Georgia", established in 1733. Early clergy of the church include John Wesley and George Whitefield.
Wright Square
The second square established in Savannah, Perceval Square was named for John Perceval, 1st Earl of Egmont, generally regarded as the man who gave the colony of Georgia its name (a tribute to Great Britain's King George II). It was renamed in 1763 to honor James Wright, the third and final royal governor of Georgia. Throughout its history it has also been known as Court House Square and Post Office Square; the present Tomochichi Federal Building and U.S. Courthouse is adjacent to the west.
The square is the burial site of Tomochichi, a leader of the Creek nation of Native Americans. Tomochichi was a trusted friend of James Oglethorpe and assisted him in the founding of his colony.
Ellis Square
What was originally called Decker Square is located on Barnard between Bryan and Congress Streets. It was laid out in 1733 as part of Decker Ward, the third ward created in Savannah. The ward and square were named for Sir Matthew Decker, one of the Trustees for the Establishment of the Colony of Georgia in America, commissioner of funds collection for the Trust, director and governor of the East India Company, and a member of Parliament. The square was renamed for Sir Henry Ellis, the second Royal Governor of the colony of Georgia.
It was also known as Marketplace Square, as from the 1730s through the 1950s it served as a center of commerce and was home to four successive market houses. Prior to Union General Sherman's arrival in December 1864, it was also the site of a slave market with some indications of slaves being held under the northwest corner of the square.
In 1954 the city signed a 50-year lease with the Savannah Merchants Cooperative Parking Association, allowing the association to raze the existing structure and construct a parking garage to serve the City Market retail project. Anger over the demolition of the market house helped spur the historic preservation movement (most notably the Historic Savannah Foundation) in Savannah.
When the garage's lease expired in 2004, the city began plans to restore Ellis Square. It was officially reopened at a dedication ceremony held on March 11, 2010. A bronze statue, by Susie Chisholm, of songwriter-lyricist Johnny Mercer, a native Savannahian, was formally unveiled in Ellis Square on November 18, 2009.
Telfair Square
St. James Square was named in honor of a green space in London, England, and marked one of the most fashionable neighborhoods in early Savannah. It was renamed in 1883 to honor the Telfair family. It is the only square honoring a family rather than an individual. The Telfairs included former Governor Edward Telfair, Congressman Thomas Telfair (Edward Telfair's son), and Mary Telfair (1791–1875), benefactor of Savannah's Telfair Museum of Art. Telfair Academy overlooks the western side of the square. The square also contains tributes to the Girl Scouts of the USA, founded by Savannahian Juliette Gordon Low, and to the chambered nautilus. Telfair Square is located on Barnard, between State and York Streets.
Two new squares
Oglethorpe's plan called for six wards and squares. Lower New Square and Upper New Square—now Reynolds and Oglethorpe Squares—completed the founder's vision.
Reynolds Square
Originally known as Lower New Square, laid out in 1734, the square was later renamed for Captain John Reynolds, governor of Georgia in the mid-1750s.
The square contains a bronze statue by Marshall Daugherty honoring John Wesley, founder of Methodism. Wesley spent most of his life in England but undertook a mission to Savannah (1735–1738), during which time he founded the first Sunday school in America. The statue was installed in 1969 on the spot where Wesley's home is believed to have stood. The statue is intended to show Wesley preaching out-of-doors as he did when leading services for Native Americans, a practice which angered church elders who believed that the Gospel should only be preached inside the church building.
Reynolds Square was the site of the Filature, which housed silkworms as part of an early—and unsuccessful—attempt to establish a silk industry in the Georgia colony. It is located on Abercorn, between Bryan and Congress Streets.
The Olde Pink House (also known as Habersham House) stands in the square's northwestern trust lot. Immediately to its south, across East Saint Julian Street and in the southwestern trust lot, is the Oliver Sturges House.
Oglethorpe Square
Upper New Square was laid out in 1742 and was later renamed in honor of Georgia founder General James Oglethorpe, although his statue is located in Chippewa Square, to the southwest.
The home of Georgia's first Royal Governor, John Reynolds, was located on the southeastern trust lot (now a parking lot of The Presidents' Quarters Inn) overlooking the square. Reynolds arrived in Savannah October 29, 1754.
The residences of the Royal Surveyors of Georgia and South Carolina were located on the northeastern trust lots, the site of today's Owens–Thomas House. The Presidents' Quarters Inn, a 16-room historic bed and breakfast, is located on the southeastern trust lots.
The square contains a pedestal honoring Moravian missionaries who arrived at the same time as John Wesley and settled in Savannah from 1735 to 1740, before resettling in Pennsylvania.
A Savannah veterans' group unsuccessfully proposed erecting a memorial to veterans of World War II in Oglethorpe Square; the memorial was instead installed on River Street.
The Unitarian Universalist Church was originally based on the square, prior to its move to the western side of Troup Square in 1860.
The 1790s
Savannah grew rapidly in the late 18th century and six new wards were established in the 1790s alone, including the four that now comprise the northeastern quadrant of the Historic District. The new wards expanded the grid by one unit to the west and by two to the east. Due to space restrictions these new wards are slightly narrower east-to-west than the original six.
Washington Square
Built in 1790, Washington Square was named in 1791 for the first President of the United States, who visited Savannah in that year. It was one of only two squares named to honor a then-living person; Troup Square was the other.
Washington Square was the site of the Trustees' Garden.
The square was once the site of massive New Year's Eve bonfires; these were discontinued in the 1950s.
In 1964 Savannah Landscape Architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to close the fire lane, add North Carolina bluestone pavers, introduce the use of different paving materials, install water cisterns, and install new walks, benches, lighting, and plantings.
Franklin Square
Franklin Square was designed and laid out in 1790. It is located on the western end of town at the intersection of Montgomery Street and W Julian Street, bordered on the north side by W Bryan St and on the south side by W Congress St. It was named in 1791 for Benjamin Franklin, who served as an agent for the colony of Georgia from 1768 to 1778 and who had died in 1790.
The square was destroyed in 1935 but was restored in the mid-1980s. The memorial sculpture includes a depiction of 12-year-old Henri Christophe, who became the commander of the Haitian army and King of Haiti.
Warren Square
Warren Square was laid out in 1791 and named for General Joseph Warren, a Revolutionary War hero killed at the Battle of Bunker Hill and who had served as President of the Provincial Government of Massachusetts. British gunpowder seized by Savannahians had been sent to aid the Americans at Bunker Hill. The "sister city" relationship between Savannah and Boston survived even the Civil War, and Bostonians sent shiploads of provisions to Savannah shortly after the city surrendered to General Sherman in 1864. Warren Square is on Habersham, between Bryan and Congress Streets.
In 1963 Savannah Landscape Architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to replace the sand square with plantings; add walks, benches, lighting, and plantings; and install barriers to prevent drive-through traffic on the fire lane.
Columbia Square
Columbia Square was laid out in 1799 and is named for Columbia, the poetic personification of the United States. It is located on Habersham, between State and York Streets. In the center of the square is a fountain that formerly stood at Wormsloe, the estate of Noble Jones, one of Georgia's first settlers. It was moved to Columbia Square in 1970 to honor Augusta and Wymberly DeRenne, descendants of Jones. It is sometimes called the "rustic fountain," as it is decorated with vines, leaves, flowers, and other woodland motifs.
Greene Square
Greene Square was laid out in 1799 and is named for Revolutionary War hero General Nathanael Greene, one of George Washington's most effective generals.
Liberty Square
Liberty Square was laid out in 1799 and named in honor of the Sons of Liberty and the victory over the British in the Revolutionary War. It was located on Montgomery between State and York Streets. It was paved over to make way for improvements to Montgomery Street. A small portion remains and is the site of the "Flame of Freedom" sculpture.
19th-century squares
Expansion of Oglethorpe's grid of wards and squares continued through the first half of the 19th century, until a total of 24 squares stood in downtown Savannah.
Elbert Square
Elbert Square was laid out in 1801 and named for Samuel Elbert, a Revolutionary soldier, sheriff of Chatham County, and Governor of Georgia. It was located on Montgomery between Hull and Perry streets. It was paved over to make way for improvements to Montgomery Street and today is represented by a small grassy area across Montgomery from the west entrance to the Civic Center.
Chippewa Square
Chippewa Square was laid out in 1815 and named in honor of American soldiers killed in the Battle of Chippawa during the War of 1812. (The spelling "Chippewa" is correct in reference to this square.)
In the center of the square is the James Oglethorpe Monument, created by sculptor Daniel Chester French and architect Henry Bacon and unveiled in 1910. Oglethorpe faces south, toward Georgia's one-time enemy in Spanish Florida, and his sword is drawn. Busts of Confederate figures Francis Stebbins Bartow and Lafayette McLaws were moved from Chippewa Square to Forsyth Park to make room for the Oglethorpe monument. Due to the location of the monument, Savannahians sometimes refer to this as Oglethorpe Square, although the actual Oglethorpe Square sits just to the northeast.
The "park bench" scene which opens the 1994 film Forrest Gump was filmed on the north side of Chippewa Square.
Chippewa Square is also home to First Baptist Church (1833), the Philbrick-Eastman House (1844), and The Savannah Theatre (1818).
Orleans Square
Orleans Square was laid out in 1815, commemorating General Andrew Jackson's victory at the Battle of New Orleans in January of that year. In the center of the square the German Memorial Fountain honors early German immigrants to Savannah. Installed in 1989 it commemorates the 250th anniversary of Georgia and of Savannah, as well as the 300th anniversary of the arrival in Philadelphia of 13 Rhenish families. Orleans Square is located on Barnard, between Hull and Perry Streets, and is adjacent to the Savannah Civic Center.
Lafayette Square
Lafayette Square was laid out in 1837 and named for the Marquis de Lafayette, who visited Savannah in 1825.
The square contains a fountain commemorating the 250th anniversary of the founding of the Georgia colony, donated by the Colonial Dames of Georgia in 1984, as well as cobblestone sidewalks.
Adjacent to the square is the Roman Catholic Cathedral Basilica of St. John the Baptist. Given this proximity, Lafayette Square features prominently in Savannah's massive Saint Patrick's Day celebrations. Water in the fountain is dyed green for the occasion.
In this area is the museum known as the Flannery O'Connor Childhood Home, which is open to the public.
Marist Place, the former Marist School for Boys, stands in the southwest tithing of the square.
Pulaski Square
Pulaski Square was laid out in 1837 and is named for General Casimir Pulaski, a Polish-born Revolutionary War hero who died of wounds received in the siege of Savannah (1779). It is one of the few squares without a monument—General Pulaski's statue is actually in nearby Monterey Square.
Prior to the birth of the historic preservation movement and the restoration of much of Savannah's downtown, Pulaski sheltered a sizeable homeless population and was one of several squares that had been paved to allow traffic to drive straight through its center.
Pulaski Square is located on Barnard, between Harris and Charlton Streets, and is known for its live oaks.
Madison Square
Madison Square was laid out in 1837 and named for James Madison, fourth President of the United States.
In the center of the square is the William Jasper Monument, an 1888 work by Alexander Doyle memorializing Sergeant William Jasper, a soldier in the siege of Savannah who, though mortally wounded, heroically recovered his company's banner. Savannahians sometimes refer to this as Jasper Square, in honor of Jasper's statue.
Madison Square features vintage cannons from the Savannah Armory. These now mark the starting points of the first highways in Georgia: the Ogeechee Road leading to Darien and the Augusta Road.
The square also includes a monument marking the center of the British resistance during the siege.
In 1971 Savannah Landscape Architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to install new walk patterns with offset sitting areas and connecting walks at curbs, add new benches, lighting and planting.
Crawford Square
Crawford Square was laid out in 1841 and named in honor of Secretary of the Treasury William Harris Crawford. Crawford ran for president in 1824 but came in third, after winner John Quincy Adams and runner-up Andrew Jackson.
Although Crawford is the smallest of the squares, it anchors the largest ward, as Crawford Ward includes the territory of Colonial Park Cemetery.
During the era of Jim Crow this was the only square in which African-Americans were permitted.
While all squares were once fenced it is the only one that remains so. Crawford Square has also retained its cistern, a holdover from early fire fighting practices. After a major fire in 1820 firemen maintained duty stations in the squares, each of which was equipped with a storage cistern.
Chatham Square
Chatham Square was laid out in 1847 and named in 1851 for William Pitt, 1st Earl of Chatham. Although Pitt never visited Savannah he was an early supporter of the Georgia colony and both Chatham Square and Chatham County are named in his honor.
The square is sometimes known locally as Barnard Square, in reference to the 1901-built Barnard Street School (which actually stands at 212 West Taylor Street), which has served as a building for the Savannah College of Art and Design since 1988.
The college renamed it Pepe Hall.
Monterey Square
Monterey Square was laid out in 1847 and commemorates the Battle of Monterrey (1846), in which American forces under General Zachary Taylor captured the city of Monterrey during the Mexican–American War. (The correct spelling in reference to the square is "Monterey", with a single r.)
In the center of the square is an 1853 monument honoring General Casimir Pulaski.
Monterey Square is the site of Mercer House, built by Hugh Mercer and more recently the home of antiques dealer and conservator Jim Williams. The house (which fills an entire block), and the square itself, were featured prominently in John Berendt's 1994 true crime novel Midnight in the Garden of Good and Evil (written before Ellis Square was reinstated). Monterey Square has been used as a setting for several motion pictures, including the 1997 film version of Berendt's novel. The Comer House is also featured in the movie.
The square also is home to Congregation Mickve Israel, which boasts one of the few Gothic-style synagogues in America, dating from 1878.
All but one of the buildings surrounding the square are original to the square, the exception being the United Way Building.
Troup Square
Troup Square was laid out in 1851 and is named for former Georgia Governor, Congressman, and Senator George Troup. It is one of only two squares named for a person living at the time (the other being Washington Square). A large iron armillary sphere stands in the center of the square, supported by six small metal turtles.
A special dog fountain is located on the west side of the square. The Myers Drinking Fountain was a gift from Savannah mayor Herman Myers in 1897 and originally placed in Forsyth Park. When moved to Troup Square its height was adjusted for canine use and has become the site of an annual Blessing of the Animals.
The Unitarian Universalist Church sits on the western side of the square. It is believed that James Lord Pierpont wrote the tune to "Jingle Bells" while he was the church's music director, but other sources claim he only copyrighted it when he was in the role, and that he wrote it in Medford, Massachusetts.
In 1969, Savannah landscape architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to remove the central vandalized playground, close the fire lane, install an armillary sundial, and add new walls, benches, lighting, and plantings.
Taylor Square
Taylor Square was laid out in 1851 and was originally named for South Carolina statesman John C. Calhoun, who served as Secretary of War, Secretary of State, and as vice president under John Quincy Adams and Andrew Jackson. In 2023, it was renamed Taylor Square, in honor of the first American Civil War black nurse, educator and memoirist Susie King Taylor.
The square is sometimes called Massie Square, in reference to a neighborhood school.
The square is also home to Wesley Monumental United Methodist Church, founded in 1868.
It is the only square with all of its original buildings intact.
The square is believed to have been built over a slave burial ground, with around one thousand bodies buried in it. In 2004 a skull was found by utility workers outside the Massie Heritage Interpretation Center on the square's southeastern side.
Whitefield Square
Whitefield Square was laid out in 1851, the final square built.
It is named for the Rev. George Whitefield, founder of Bethesda Home for Boys (a residential education program – formerly the Bethesda Orphanage) in the 18th century, and still in existence on the south side of the city.
The square has a gazebo in its center.
Andrew Bryan, the founder of the First African Baptist Church, is buried in the square, as is Henry Cunningham, the minister of the Second African Baptist Church.
Forsyth Park
After 1851, as the city expanded south of Gaston Street, further extensions of Oglethorpe's grid of wards and squares were abandoned. Forsyth Park, located just south of Monterey Ward, was intended to be a single large park that would serve the growing southern portion of the city just as the squares had served their individual wards. The original northern portion of the park, surrounding the well-known fountain, occupied an area the size of an entire ward from the old city, and the park more than doubled in size during later years. Other, smaller neighborhood parks have been established in the southern portions of the city.
Summary
Analysis
While some authorities believe that the original plan allowed for growth of the city and thus expansion of the grid, the regional plan suggests otherwise: the ratio of town lots to country lots was in balance and growth of the urban grid would have destroyed that balance.
See also
Sanborn Fire Insurance Maps of Savannah
Notes
References
City of Savannah's Savannah's Squares page, accessed June 13, 2007. This page contains links to individual pages on each of Savannah's 24 squares, many with photographs. These pages are referenced throughout this article.
External links
Map and aerial views of the historic district, and visitor information, from Savannah.com
Tour Guide Manual from the City of Savannah website
A street map of the historic district from Savannah.com
Another street map of the historic district from Sherpa Guides
Savannah Squares book site
Haitian American Historical Society, organizers of the Haitian Volunteers monument
Photo essay of all 24 squares in Savannah
Savannah GA Historic Squares POV Driving – Travel Towner, YouTube, January 1, 2021
Squares of Savannah, Georgia
Squares of Savannah
Squares | Squares of Savannah, Georgia | [
"Engineering"
] | 5,430 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
5,613,684 | https://en.wikipedia.org/wiki/Roller%20dam | A roller dam is a type of hydro-control device specially designed to mitigate erosion. They are most often used to divert water for irrigation but the largest and most notable examples are used to ease river navigation. The world's first roller dam was constructed in Schweinfurt, Germany, in 1902 to divert irrigation water south of the Main river.
Use
Roller dams are a type of weir, or a dam that is designed to allow water to flow over the top in continuous action. They are used on rivers or other such moving bodies of water where erosion damage is undesirable, yet likely to occur. A short wall, lip, or parabolic channel is constructed on the downstream side of the dam parallel to the dam face. As the water pouring over the dam hits this baffle, it is reflected toward the dam face, creating a continual "rolling" action at the foot of the dam; hence the name "Roller Dam". The purpose of the rolling is to dissipate the energy gained by the water as it falls from the top of the dam. Otherwise the energy would be exerted downstream, causing significant bank and river bed erosion.
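The rolling action at the foot of the dam is essentially a forced hydraulic jump, and the energy it dissipates can be estimated with standard open-channel relations (conjugate depth and head loss across a jump). The inflow depth and velocity below are assumed illustrative values, not data for any particular dam.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def hydraulic_jump(y1, v1):
    """Return (Froude number, conjugate depth, head loss) for a jump
    starting from supercritical depth y1 (m) and velocity v1 (m/s)."""
    fr1 = v1 / math.sqrt(G * y1)                     # upstream Froude number
    y2 = (y1 / 2) * (math.sqrt(1 + 8 * fr1**2) - 1)  # conjugate (sequent) depth
    head_loss = (y2 - y1) ** 3 / (4 * y1 * y2)       # energy dissipated, m of head
    return fr1, y2, head_loss

# Assumed values: shallow, fast flow at the foot of the dam face.
fr1, y2, dE = hydraulic_jump(y1=0.5, v1=8.0)
print(f"Fr1 = {fr1:.2f}, downstream depth = {y2:.2f} m, head loss = {dE:.2f} m")
```

A supercritical inflow (Fr1 > 1) always produces a deeper, slower downstream flow; the head-loss term is the energy the roller converts to turbulence instead of passing downstream as erosive kinetic energy.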
Roller dams can be either fixed (non-moving) or active. Fixed roller dams are generally made from reinforced concrete or masonry. Active roller dams are made from large metal cylinders, which can be lifted out of the water using a system of powerful hydraulic rams or cables and motors. This type is also known as a roller gate. The largest of the active dams in the world is Locks and Dam 15, which spans the Mississippi River between Rock Island, Illinois, and Davenport, Iowa.
Hazards
Roller dams of any type pose an extreme drowning hazard. Any person going over the top of the dam will be caught in the rolling action at its base and may not be ejected from the cycle for days or possibly weeks. Even very buoyant objects, such as inflatable balls, inner tubes, and life vests, can often be seen resurfacing near the downstream face every few seconds for several hours before escaping the so-called "washing machine of death".
Because of the hazards, dam opponents have called for the removal of roller dams. Sixteen people have died by drowning at the roller dam on the Fox River near Yorkville, Illinois, since its construction in 1960. Most recently in a single accident on May 28, 2006, three persons died. Similar dams have already been removed on the Fox River at Aurora, Illinois, and near Batavia, Illinois, but Yorkville residents successfully petitioned to maintain the roller dam near their town because of tradition. In July 2009, a man died at a roller dam on the Cedar River in Cedar Rapids, Iowa. His death was notable because he was wearing an approved personal flotation device, intended to help bring a person quickly to the surface of the water.
See also
Tainter gate
References
External links
Tour of Locks and Dam 15
Hydrology
Weirs | Roller dam | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 600 | [
"Hydrology",
"Weirs",
"Environmental engineering"
] |
5,614,016 | https://en.wikipedia.org/wiki/Medical%20science%20liaison | A medical science liaison (MSL) is a healthcare consulting professional who is employed by pharmaceutical, biotechnology, medical device, and managed care companies. Other job titles for medical science liaisons may include medical liaisons, clinical science liaisons, medical science managers, regional medical scientists, and regional medical directors.
The term "MSL" was originally trademarked by Upjohn as "Education services – namely, initiation of drug studies in laboratory and clinical settings and development of workshops, symposia, and seminars for physicians, medical societies, specialty organizations, academicians, in concert, concerned with drug related medical topics" in 1967 and with first use in commerce in 1967.
As the number of MSL programs in healthcare increased, subsequent peer-reviewed journal publications and books became available to examine the emerging role of medical affairs and the use of MSLs in an increasingly vertically integrated biotechnology industry.
Role
MSLs build relationships with key opinion leaders or thought leaders and health care providers, providing critical windows of insight into the market and competition. Through such monitoring, MSLs can gain access to key influencers by interacting with national and regional societies and organizations. Moreover, MSLs specialize in a particular therapeutic area and have deep scientific knowledge related to it. The educational background of MSLs consists primarily of MDs, DMSc, PharmD, and PhD professionals. Other professionals who work as MSLs include physician assistants and nurses. According to the program's advocates, the Board Certified Medical Affairs Specialist (BCMAS) program is the recognized MSL board certification for MSL professionals. MSLs are now also highly involved in activities related to clinical trials.
Responsibilities
The medical science liaison role is varied and day-to-day activities include (but are not limited to);
Managing investigator initiated studies
Performing KOL stakeholder mapping
Developing collaborative relationships with KOLs
Organising advisory boards
Maintaining a high level of therapeutic area knowledge
Training sales representatives
Providing medical review to ensure all company materials are compliant and accurately reflect the body of scientific evidence
Delivering insights from KOLs to inform the medical affairs strategy
See also
References
Pharmaceutical industry
Promotion and marketing communications | Medical science liaison | [
"Chemistry",
"Biology"
] | 424 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
5,614,153 | https://en.wikipedia.org/wiki/TDtv | TDtv combines IPWireless's commercial UMTS TD-CDMA solution with 3GPP Release 6 Multimedia Broadcast Multicast Service (MBMS) to deliver mobile TV. TDtv operates in the universal unpaired 3G spectrum bands that are available worldwide at 1900 MHz and 2100 MHz. It allows UMTS operators to fully utilize their existing spectrum and base stations to offer mobile TV and multimedia packages without affecting other voice and data 3G services.
External links
NextWave Wireless (dead link following the company's merger)
Streaming television | TDtv | [
"Technology"
] | 108 | [
"Multimedia",
"Streaming television"
] |
5,614,270 | https://en.wikipedia.org/wiki/Cooperative%20multitasking | Cooperative multitasking, also known as non-preemptive multitasking, is a style of computer multitasking in which the operating system never initiates a context switch from a running process to another process. Instead, in order to run multiple applications concurrently, processes voluntarily yield control periodically or when idle or logically blocked. This type of multitasking is called cooperative because all programs must cooperate for the scheduling scheme to work.
In this scheme, the process scheduler of an operating system is known as a cooperative scheduler whose role is limited to starting the processes and letting them return control back to it voluntarily.
This is related to the asynchronous programming approach.
Usage
Although it is rarely used as the primary scheduling mechanism in modern operating systems, it is widely used in memory-constrained embedded systems and also in specific applications such as CICS or the JES2 subsystem. Cooperative multitasking was the primary scheduling scheme for 16-bit applications employed by Microsoft Windows before Windows 95 and Windows NT, and by the classic Mac OS. Windows 9x used non-preemptive multitasking for 16-bit legacy applications, and the PowerPC versions of Mac OS X prior to Leopard used it for classic applications. NetWare, which is a network-oriented operating system, used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used on RISC OS systems.
Cooperative multitasking is similar to async/await in languages, such as JavaScript or Python, that feature a single-threaded event loop in their runtime. It differs from fully general cooperative multitasking in that await can be invoked only from an async function, which is a kind of coroutine.
Cooperative multitasking allows much simpler implementation of applications because their execution is never unexpectedly interrupted by the process scheduler; for example, various functions inside the application do not need to be reentrant.
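The yield-based scheme described above can be sketched with Python generators, where each task hands control back to a round-robin scheduler at every yield. The task names and workloads here are invented for illustration.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin cooperative scheduler: runs each task until it
    voluntarily yields, then moves on to the next task in the queue."""
    queue = deque(tasks)
    trace = []
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))  # run the task until it yields
            queue.append(task)        # still alive: reschedule it
        except StopIteration:
            pass                      # task finished: drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"           # voluntary yield point

print(scheduler([worker("A", 3), worker("B", 2)]))
```

A task that loops without ever yielding would never return control to the scheduler, which is exactly the hang described in the next section.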
Problems
As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that is often considered to make the entire environment unacceptably fragile, though, as noted above, cooperative multitasking has been used frequently in server environments, including NetWare and CICS.
In contrast, preemptive multitasking interrupts applications and gives control to other processes outside the application's control.
The potential for system hang can be alleviated by using a watchdog timer, often implemented in hardware; this typically invokes a hardware reset.
References
Concurrent computing
"Technology"
] | 584 | [
"Computing platforms",
"Concurrent computing",
"IT infrastructure"
] |
5,614,784 | https://en.wikipedia.org/wiki/Moving%20magnet%20and%20conductor%20problem | The moving magnet and conductor problem is a famous thought experiment, originating in the 19th century, concerning the intersection of classical electromagnetism and special relativity. In it, the current in a conductor moving with constant velocity, v, with respect to a magnet is calculated in the frame of reference of the magnet and in the frame of reference of the conductor. The observable quantity in the experiment, the current, is the same in either case, in accordance with the basic principle of relativity, which states: "Only relative motion is observable; there is no absolute standard of rest". However, according to Maxwell's equations, the charges in the conductor experience a magnetic force in the frame of the magnet and an electric force in the frame of the conductor. The same phenomenon would seem to have two different descriptions depending on the frame of reference of the observer.
This problem, along with the Fizeau experiment, the aberration of light, and more indirectly the negative aether drift tests such as the Michelson–Morley experiment, formed the basis of Einstein's development of the theory of relativity.
Introduction
Einstein's 1905 paper that introduced the world to relativity opens with a description of the magnet/conductor problem:
An overriding requirement on the descriptions in different frameworks is that they be consistent. Consistency is an issue because Newtonian mechanics predicts one transformation (so-called Galilean invariance) for the forces that drive the charges and cause the current, while electrodynamics as expressed by Maxwell's equations predicts that the fields that give rise to these forces transform differently (according to Lorentz invariance). Observations of the aberration of light, culminating in the Michelson–Morley experiment, established the validity of Lorentz invariance, and the development of special relativity resolved the resulting disagreement with Newtonian mechanics. Special relativity revised the transformation of forces in moving reference frames to be consistent with Lorentz invariance. The details of these transformations are discussed below.
In addition to consistency, it would be nice to consolidate the descriptions so they appear to be frame-independent. A clue to a framework-independent description is the observation that magnetic fields in one reference frame become electric fields in another frame. Likewise, the solenoidal portion of electric fields (the portion that is not originated by electric charges) becomes a magnetic field in another frame: that is, the solenoidal electric fields and magnetic fields are aspects of the same thing. That means the paradox of different descriptions may be only semantic. A description that uses scalar and vector potentials φ and A instead of B and E avoids the semantical trap. A Lorentz-invariant four-vector Aα = (φ / c, A) replaces E and B and provides a frame-independent description (albeit less visceral than the E–B description). An alternative unification of descriptions is to think of the physical entity as the electromagnetic field tensor, as described later on. This tensor contains both E and B fields as components, and has the same form in all frames of reference.
Background
Electromagnetic fields are not directly observable. The existence of classical electromagnetic fields can be inferred from the motion of charged particles, whose trajectories are observable. Electromagnetic fields do explain the observed motions of classical charged particles.
A strong requirement in physics is that all observers of the motion of a particle agree on the trajectory of the particle. For instance, if one observer notes that a particle collides with the center of a bullseye, then all observers must reach the same conclusion. This requirement places constraints on the nature of electromagnetic fields and on their transformation from one reference frame to another. It also places constraints on the manner in which fields affect the acceleration and, hence, the trajectories of charged particles.
Perhaps the simplest example, and one that Einstein referenced in his 1905 paper introducing special relativity, is the problem of a conductor moving in the field of a magnet. In the frame of the magnet, a conductor experiences a magnetic force. In the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The magnetic field in the magnet frame and the electric field in the conductor frame must generate consistent results in the conductor. At the time of Einstein in 1905, the field equations as represented by Maxwell's equations were properly consistent. Newton's law of motion, however, had to be modified to provide consistent particle trajectories.
Transformation of fields, assuming Galilean transformations
Assuming that the magnet frame and the conductor frame are related by a Galilean transformation, it is straightforward to compute the fields and forces in both frames. This will demonstrate that the induced current is indeed the same in both frames. As a byproduct, this argument will also yield a general formula for the electric and magnetic fields in one frame in terms of the fields in another frame.
In reality, the frames are not related by a Galilean transformation, but by a Lorentz transformation. Nevertheless, it will be a Galilean transformation to a very good approximation, at velocities much less than the speed of light.
Unprimed quantities correspond to the rest frame of the magnet, while primed quantities correspond to the rest frame of the conductor. Let v be the velocity of the conductor, as seen from the magnet frame.
Magnet frame
In the rest frame of the magnet, the magnetic field is some fixed field B(r), determined by the structure and shape of the magnet. The electric field is zero.
In general, the force exerted upon a particle of charge q in the conductor by the electric field and magnetic field is given by (SI units):

$$\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)$$

where $q$ is the charge on the particle, $\mathbf{v}$ is the particle velocity and $\mathbf{F}$ is the Lorentz force. Here, however, the electric field is zero, so the force on the particle is

$$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}.$$
Conductor frame
In the conductor frame, there is a time-varying magnetic field B′ related to the magnetic field B in the magnet frame according to:

$$\mathbf{B}'(\mathbf{r}', t) = \mathbf{B}(\mathbf{r})$$

where

$$\mathbf{r}' = \mathbf{r} - \mathbf{v}t.$$

In this frame, there is an electric field, and its curl is given by the Maxwell–Faraday equation:

$$\nabla' \times \mathbf{E}' = -\frac{\partial \mathbf{B}'}{\partial t}.$$

This yields:

$$\mathbf{E}' = \mathbf{v} \times \mathbf{B}.$$
To make this explicable: if a conductor moves with constant velocity $\mathbf{v}$ through a B-field with a spatial gradient, it follows that in the frame of the conductor

$$\frac{\partial \mathbf{B}'}{\partial t} = (\mathbf{v} \cdot \nabla)\mathbf{B}.$$

It can be seen that this equation is consistent with $\mathbf{E}' = \mathbf{v} \times \mathbf{B}$ by determining $\nabla' \times \mathbf{E}'$ and $\partial \mathbf{B}'/\partial t$ from this expression and substituting them in the Maxwell–Faraday equation, while using that

$$\nabla \times (\mathbf{v} \times \mathbf{B}) = -(\mathbf{v} \cdot \nabla)\mathbf{B}$$

for constant $\mathbf{v}$ and $\nabla \cdot \mathbf{B} = 0$. Even in the limit of infinitesimally small gradients these relations hold, and therefore the Lorentz force equation is also valid if the magnetic field in the conductor frame is not varying in time. At relativistic velocities a correction factor is needed; see below and Classical electromagnetism and special relativity and Lorentz transformation.
A charge q in the conductor will be at rest in the conductor frame. Therefore, the magnetic force term of the Lorentz force has no effect, and the force on the charge is given by

$$\mathbf{F}' = q\mathbf{E}' = q\,\mathbf{v} \times \mathbf{B}.$$
This demonstrates that the force is the same in both frames (as would be expected), and therefore any observable consequences of this force, such as the induced current, would also be the same in both frames. This is despite the fact that the force is seen to be an electric force in the conductor frame, but a magnetic force in the magnet's frame.
Galilean transformation formula for fields
A similar sort of argument can be made if the magnet's frame also contains electric fields. (The Ampère–Maxwell equation also comes into play, explaining how, in the conductor's frame, this moving electric field will contribute to the magnetic field.) The result is that, in general,

$$\mathbf{E}' = \mathbf{E} + \mathbf{v} \times \mathbf{B}$$
$$\mathbf{B}' = \mathbf{B} - \frac{1}{c^2}\,\mathbf{v} \times \mathbf{E}$$

with c the speed of light in free space.
By plugging these transformation rules into the full Maxwell's equations, it can be seen that if Maxwell's equations are true in one frame, then they are almost true in the other, but contain incorrect terms proportional to the quantity v/c raised to the second or higher power. Accordingly, these are not the exact transformation rules, but are a close approximation at low velocities. At large velocities approaching the speed of light, the Galilean transformation must be replaced by the Lorentz transformation, and the field transformation equations also must be changed, according to the expressions given below.
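In the low-velocity limit, the equality of the two frame descriptions can be checked numerically. This sketch assumes a uniform B-field, a unit test charge, and the Galilean rule E′ = v × B discussed in this section; all numerical values are arbitrary.

```python
def cross(a, b):
    """3-D cross product a x b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.0                 # test charge (arbitrary units, assumed)
v = (2.0, 0.0, 0.0)     # conductor velocity in the magnet frame
B = (0.0, 0.0, 1.0)     # uniform B-field in the magnet frame; E = 0 there

# Magnet frame: purely magnetic force F = q v x B on the moving charge.
F_magnet = tuple(q * c for c in cross(v, B))

# Conductor frame (Galilean rules): E' = v x B, and the charge is at rest,
# so the force is purely electric, F' = q E'.
E_prime = cross(v, B)
F_conductor = tuple(q * c for c in E_prime)

print(F_magnet, F_conductor)  # identical: (0.0, -2.0, 0.0)
```

The same force vector emerges from both descriptions, so any observable consequence, such as the induced current, agrees between frames.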
Transformation of fields as predicted by Maxwell's equations
In a frame moving at velocity v, when there is no E-field in the stationary magnet frame, Maxwell's equations predict that the E-field in the moving frame is:

$$\mathbf{E}' = \gamma\, \mathbf{v} \times \mathbf{B}$$

where

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$

is called the Lorentz factor and c is the speed of light in free space. This result is a consequence of requiring that observers in all inertial frames arrive at the same form for Maxwell's equations. In particular, all observers must see the same speed of light c. That requirement leads to the Lorentz transformation for space and time. Assuming a Lorentz transformation, invariance of Maxwell's equations then leads to the above transformation of the fields for this example.

Consequently, the force on the charge is

$$\mathbf{F}' = q\mathbf{E}' = \gamma q\, \mathbf{v} \times \mathbf{B}.$$
This expression differs from the expression obtained from the nonrelativistic Newton's law of motion by a factor of . Special relativity modifies space and time in a manner such that the forces and fields transform consistently.
Modification of dynamics for consistency with Maxwell's equations
The Lorentz force has the same form in both frames, though the fields differ, namely:

$$\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right).$$

See Figure 1. To simplify, let the magnetic field point in the z-direction and vary with location x, and let the conductor translate in the positive x-direction with velocity v. Consequently, in the magnet frame where the conductor is moving, the Lorentz force points in the negative y-direction, perpendicular to both the velocity and the B-field. The force on a charge, here due only to the B-field, is

$$F_y = -q v B,$$

while in the conductor frame where the magnet is moving, the force is also in the negative y-direction, and now due only to the E-field, with a value:

$$F'_y = q E' = -\gamma q v B.$$
The two forces differ by the Lorentz factor γ. This difference is expected in a relativistic theory, however, due to the change in space-time between frames, as discussed next.
Relativity takes the Lorentz transformation of space-time suggested by invariance of Maxwell's equations and imposes it upon dynamics as well (a revision of Newton's laws of motion). In this example, the Lorentz transformation affects the x-direction only (the relative motion of the two frames is along the x-direction). The relations connecting time and space are (primes denote the moving conductor frame):

$$x' = \gamma(x - vt), \quad t' = \gamma\left(t - \frac{vx}{c^2}\right), \quad y' = y, \quad z' = z.$$

These transformations lead to a change in the y-component of a force:

$$F'_y = \gamma F_y.$$

That is, within Lorentz invariance, force is not the same in all frames of reference, unlike Galilean invariance. But, from the earlier analysis based upon the Lorentz force law:

$$F_y = -q v B, \qquad F'_y = -\gamma q v B,$$

which agrees completely. So the force on the charge is not the same in both frames, but it transforms as expected according to relativity.
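The factor-of-γ bookkeeping can be verified with a few lines of arithmetic. The speed (0.6c) and field values below are arbitrary assumed numbers; the check simply confirms that the two frame descriptions satisfy the transverse force transformation rule.

```python
import math

c = 1.0          # work in units where c = 1
v = 0.6          # conductor speed as a fraction of c (assumed)
q, B = 1.0, 1.0  # charge and field strength, arbitrary units

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)  # Lorentz factor, here 1.25

F_y = -q * v * B                # magnet frame: purely magnetic force
F_y_prime = -gamma * q * v * B  # conductor frame: purely electric force

# The transverse force rule F'_y = gamma * F_y holds exactly.
assert math.isclose(F_y_prime, gamma * F_y)
print(gamma, F_y, F_y_prime)
```

At v = 0.6c the conductor-frame force is 25% larger than the magnet-frame force, exactly the γ = 1.25 predicted by the Lorentz transformation of dynamics.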
See also
Annus Mirabilis Papers
Darwin Lagrangian
Eddy current
Electric motor
Einstein's thought experiments
Faraday's law
Faraday paradox
Galilean invariance
Inertial frame
Lenz's law
Lorentz transformation
Principle of relativity
Relativistic electromagnetism
Special theory of relativity
References and notes
Further reading
(The relativity of magnetic and electric fields)
External links
Magnets and conductors in special relativity
Electromagnetism
Special relativity
Thought experiments in physics | Moving magnet and conductor problem | [
"Physics"
] | 2,368 | [
"Electromagnetism",
"Physical phenomena",
"Special relativity",
"Fundamental interactions",
"Theory of relativity"
] |
16,221,784 | https://en.wikipedia.org/wiki/Trace%20zero%20cryptography | In 1998, Gerhard Frey first proposed using trace zero varieties for cryptographic purposes. These varieties are subgroups of the divisor class group on a low-genus hyperelliptic curve defined over a finite field. These groups can be used to establish asymmetric cryptography using the discrete logarithm problem as a cryptographic primitive.
Trace zero varieties feature better scalar multiplication performance than elliptic curves. This allows fast arithmetic in these groups, which can speed up the calculations by a factor of 3 compared with elliptic curves and hence speed up the cryptosystem.
Another advantage is that for groups of cryptographically relevant size, the order of the group can simply be calculated using the characteristic polynomial of the Frobenius endomorphism. This is not the case, for example, in elliptic curve cryptography when the group of points of an elliptic curve over a prime field is used for cryptographic purposes.
However, more bits are needed to represent an element of the trace zero variety compared with elements of elliptic or hyperelliptic curves. Another disadvantage is that it is possible to reduce the security of a TZV by 1/6th of the bit length using a cover attack.
Mathematical background
A hyperelliptic curve C of genus g over a finite field $\mathbb{F}_q$, where q = p^n (p prime) of odd characteristic, is defined as

$$C: y^2 + h(x)\,y = f(x),$$

where f is monic, deg(f) = 2g + 1 and deg(h) ≤ g. The curve has at least one $\mathbb{F}_q$-rational Weierstraß point.
The Jacobian variety $J_C(\mathbb{F}_{q^n})$ of C is, for every finite extension $\mathbb{F}_{q^n}$, isomorphic to the ideal class group $\mathrm{Cl}(C/\mathbb{F}_{q^n})$. With Mumford's representation it is possible to represent the elements of $J_C(\mathbb{F}_{q^n})$ with a pair of polynomials [u, v], where u, v ∈ $\mathbb{F}_{q^n}[x]$.
The Frobenius endomorphism σ is used on an element [u, v] of $J_C(\mathbb{F}_{q^n})$ to raise the power of each coefficient of that element to q: σ([u, v]) = [u^q(x), v^q(x)]. The characteristic polynomial of this endomorphism has the following form:

$$\chi(T) = T^{2g} + a_1 T^{2g-1} + \dots + a_g T^g + \dots + a_1 q^{g-1} T + q^g,$$

where $a_i \in \mathbb{Z}$.

With the Hasse–Weil theorem it is possible to obtain the group order for any extension field $\mathbb{F}_{q^n}$ by using the complex roots τi of χ(T):

$$\left|J_C(\mathbb{F}_{q^n})\right| = \prod_{i=1}^{2g} \left(1 - \tau_i^n\right)$$
Let D be an element of $J_C(\mathbb{F}_{q^n})$; then it is possible to define an endomorphism of $J_C(\mathbb{F}_{q^n})$, the so-called trace of D:

$$\mathrm{Tr}(D) = \sum_{i=0}^{n-1} \sigma^i(D)$$

Based on this endomorphism one can reduce the Jacobian variety to a subgroup G with the property that every element is of trace zero:

$$G = \left\{ D \in J_C(\mathbb{F}_{q^n}) \;\middle|\; \mathrm{Tr}(D) = \mathcal{O} \right\}$$

G is the kernel of the trace endomorphism, and thus G is a group, the so-called trace zero (sub)variety (TZV) of $J_C(\mathbb{F}_{q^n})$.
The intersection of G and $J_C(\mathbb{F}_q)$ is produced by the n-torsion elements of $J_C(\mathbb{F}_q)$. If the greatest common divisor $\gcd(n, |J_C(\mathbb{F}_q)|) = 1$, the intersection is trivial and one can compute the group order of G:

$$|G| = \frac{\left|J_C(\mathbb{F}_{q^n})\right|}{\left|J_C(\mathbb{F}_q)\right|}$$
The actual group used in cryptographic applications is a subgroup G0 of G of a large prime order l. This group may be G itself.
There exist three different cases of cryptographical relevance for TZV:
g = 1, n = 3
g = 1, n = 5
g = 2, n = 3
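For the first listed case (g = 1, n = 3), the group order of G can be computed from the characteristic polynomial alone, as described above. The sketch below uses the elliptic-curve convention χ(T) = T² − aT + q and toy parameters (q = 5, Frobenius trace a = 2) that are far below cryptographic size and chosen purely for illustration.

```python
from math import gcd

def curve_order(q, a, n):
    """|E(F_{q^n})| = q^n + 1 - s_n, where s_k = tau1^k + tau2^k are power
    sums of the Frobenius eigenvalues and satisfy s_k = a*s_{k-1} - q*s_{k-2}."""
    s_prev, s = 2, a                  # s_0 = 2, s_1 = a
    for _ in range(n - 1):
        s_prev, s = s, a * s - q * s_prev
    return q**n + 1 - s

# Toy parameters (assumed, NOT cryptographic sizes): q = 5, trace a = 2.
q, a, n = 5, 2, 3
order_base = curve_order(q, a, 1)     # |E(F_5)|   = 4
order_ext = curve_order(q, a, n)      # |E(F_125)| = 148

assert gcd(n, order_base) == 1        # gcd condition from the text
G_order = order_ext // order_base     # |G| = 148 / 4 = 37, a prime
print(order_base, order_ext, G_order)
```

In a real deployment q would be hundreds of bits and |G| would be checked for a large prime factor l; the toy numbers only illustrate that the order comes from the characteristic polynomial alone, with no point counting over the extension field.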
Arithmetic
The arithmetic used in the TZV group G0 is based on the arithmetic for the whole group $J_C(\mathbb{F}_{q^n})$, but it is possible to use the Frobenius endomorphism σ to speed up the scalar multiplication. This can be achieved if G0 is generated by an element D of order l, since then σ(D) = sD for some integer s. For the given cases of TZV, s can be computed as follows, where the ai come from the characteristic polynomial of the Frobenius endomorphism:
For g = 1, n = 3:
For g = 1, n = 5:
For g = 2, n = 3:
Knowing this, it is possible to replace any scalar multiplication mD (|m| ≤ l/2) with a multi-scalar expansion in powers of the Frobenius endomorphism,

$$mD = m_0 D + m_1\,\sigma(D) + \dots + m_{n-2}\,\sigma^{n-2}(D),$$

in which the coefficients $m_i$ are much shorter than m.
With this trick the multiple scalar product can be reduced to about 1/(n − 1)th of doublings necessary for calculating mD, if the implied constants are small enough.
Security
The security of cryptographic systems based on trace zero subvarieties is, according to the results of the papers, comparable to the security of hyperelliptic curves of low genus g′ over F_{p′}, where p′ ~ (n − 1)(g/g′) for |G| of about 128 bits.
For the cases n = 3, g = 2 and n = 5, g = 1 it is possible to reduce the security by at most 6 bits, where |G| ~ 2^256, because one cannot be sure that G is contained in the Jacobian of a curve of genus 6. The security of curves of genus 4 over similar fields is far lower.
Cover attack on a trace zero crypto-system
A published cover attack shows that the DLP in trace zero groups of genus 2 over finite fields of characteristic different from 2 or 3, with a field extension of degree 3, can be transformed into a DLP in a degree-0 class group of genus at most 6 over the base field. In this new class group the DLP can be attacked with index calculus methods. This leads to a reduction of the effective bit length by roughly one sixth.
Notes
References
Cryptography | Trace zero cryptography | [
"Mathematics",
"Engineering"
] | 1,096 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
16,225,796 | https://en.wikipedia.org/wiki/SQEP | SQEP is an acronym for suitably qualified and experienced person.
The term is notably used in the UK nuclear power industry, see for example this safety management audit report from the Health and Safety Executive.
In the UK nuclear context, it is a standard requirement for licensed sites that "the licensee shall make and implement adequate arrangements to ensure that only suitably qualified and experienced persons perform any duties which may affect the safety of operations on the site or any other duties assigned by or under these conditions or any arrangements required under these conditions."
In this context the term is not restricted to professionally qualified personnel or to duties requiring significant technical expertise: any means any. "It is essential that all personnel whose activities have the potential to impact on nuclear safety are suitably qualified and experienced (SQEP) to carry out their jobs. This includes both those who directly carry out operations and others such as directors, managers, designers, safety case authors etc whose roles, if inadequately conceived or executed, may affect safety in less visible ways – for example, through introducing latent technical or organisational weaknesses." Conversely, suitably means suitably, not particularly well: the Office for Nuclear Regulation takes SQEP-ness to be broadly equivalent to the International Atomic Energy Agency concept of 'competence'. The IAEA has defined competence as "the ability to put skills and knowledge into practice in order to perform a job in an effective and efficient manner to an established standard"; ONR concurs with this definition, which is widely accepted within the international nuclear community. Other factors contributing to a person's competence include the person's prior experience, aptitudes, attitudes, behaviours, skills and qualifications.
In the context of UK nuclear licensing, the term "duly authorised person" (DAP) was extensively used for trained and experience operational staff, on plant or in control rooms. This may have come from UK power-station practice originating with the CEGB and nuclear operations.
SQEP was introduced for those staff who may not have direct responsibility on plant, but whose actions or input could be safety related.
SQEP is also in wider usage in engineering, defence, human factors, training and safety-related contexts.
In spoken usage, a person can describe themselves as SQEP'd, as in "I'm not SQEP'd for that."
References
Certification marks
Engineering occupations | SQEP | [
"Mathematics"
] | 475 | [
"Symbols",
"Certification marks"
] |
16,226,628 | https://en.wikipedia.org/wiki/Cray%20XMS | The Cray XMS was a vector processor minisupercomputer sold by Cray Research from 1990 to 1991. The XMS was originally designed by Supertek Computers Inc. as the Supertek S-1, intended to be a low-cost air-cooled clone of the Cray X-MP with a CMOS re-implementation of the X-MP processor architecture, and a VMEbus-based Input/Output Subsystem (IOS). The XMS could run Cray's UNICOS operating system. Supertek were acquired by Cray Research in 1990, and the S-1 was rebadged XMS by Cray. Its processor had a 55 ns clock period (18.2 MHz clock frequency) and 16 megawords (128 MB) of memory.
The CRAY XMS system was the first CRI computer system to be supported by removable disk drives.
Serial 5011, on display, was used for marketing purposes in the Eastern Region. It traveled for over 80,000 miles during its short working life and appeared at many trade shows.
The XMS was a short-lived model, and was superseded by the Cray Y-MP EL, which was under development by Supertek (as the Supertek S-2 and briefly as the Cray YMS) at the time of the Cray acquisition.
Though powerful for its time, the CRAY XMS only had half the processing power of Microsoft's original Xbox gaming console.
References
Fred Gannett's Cray FAQ
Chippewa Falls Museum of Industry & Technology: Cray Computer Systems
Cray in Deal To Acquire Supertek, New York Times
Computer-related introductions in 1990
Xms
Vector supercomputers
64-bit computers | Cray XMS | [
"Technology"
] | 359 | [
"Computing stubs",
"Computer hardware stubs"
] |
16,227,046 | https://en.wikipedia.org/wiki/Warrant%20canary | A warrant canary is a method by which a communications service provider aims to implicitly inform its users that the provider has been served with a government subpoena despite legal prohibitions on revealing the existence of the subpoena. The warrant canary typically informs users that there has been a court-issued subpoena as of a particular date. If the canary is not updated for the period specified by the host or if the warning is removed, users might assume the host has been served with such a subpoena. The intention is for a provider to passively warn users of the existence of a subpoena, albeit violating the spirit of a court order not to do so, while not violating the letter of the order.
Some subpoenas, such as those covered under 18 U.S.C. §2709(c) (enacted as part of the USA Patriot Act), provide criminal penalties for disclosing the existence of the subpoena to any third party, including the service provider's users.
National Security Letters (NSL) originated in the 1986 Electronic Communications Privacy Act and originally targeted those suspected of being agents of a foreign power. Targeting agents of a foreign power was revised in the Patriot Act in 2001 to allow NSLs to target those who may have information thought to be relevant to either counterintelligence activities or terrorists activities directed against the United States. The idea of using negative pronouncements to thwart the nondisclosure requirements of court orders and served secret warrants was first proposed by Steven Schear on the cypherpunks mailing list, mainly to uncover targeted individuals at ISPs. It was also suggested for and used by public libraries in 2002 in response to the USA Patriot Act, which could have forced librarians to disclose the circulation history of library patrons.
Etymology
The term is an allusion to the practice of coal miners bringing canaries into mines to use as an early-warning signal for toxic gases, primarily carbon monoxide and methane. The birds are more sensitive to these gases than humans, and became sick before the miners, who would then have a chance to escape or put on protective respirators.
Usage
The first commercial use of a warrant canary was by the US cloud storage provider rsync.net, which began publishing its canary in 2006. In addition to a digital signature, it provides a recent news headline as proof that the warrant canary was recently posted as well as mirroring the posting internationally.
On November 5, 2013, Apple became the most prominent company to publicly state that it had never received an order for user data under Section 215 of the Patriot Act. On September 18, 2014, GigaOm reported that the warrant canary statement did not appear anymore in the next two Apple Transparency Reports, covering July–December 2013 and January–June 2014. Tumblr also included a warrant canary in the transparency report that it issued on February 3, 2014. In August 2014, the online cloud service SpiderOak implemented an encrypted warrant canary that publishes an "All Clear!" message every 6 months. Each message must carry PGP signatures from three geographically distributed signers, so if a government agency forced SpiderOak to update the page, it would need to enlist the help of all three signers.
In September 2014, U.S. security researcher Moxie Marlinspike wrote that "every lawyer I've spoken to has indicated that having a 'canary' you remove or choose not to update would likely have the same legal consequences as simply posting something that explicitly says you've received something."
In March 2015 it was reported that Australia outlawed the use of a certain kind of warrant canary, making it illegal to "disclose information about the existence or non-existence" of a Journalist Information Warrant issued under new mandatory data retention laws. Afterwards, computer security and privacy specialist Bruce Schneier wrote in a blog post that "[p]ersonally, I have never believed [warrant canaries] would work. It relies on the fact that a prohibition against speaking doesn't prevent someone from not speaking. But courts generally aren't impressed by this sort of thing, and I can easily imagine a secret warrant that includes a prohibition against triggering the warrant canary. And for all I know, there are right now secret legal proceedings on this very issue." This is not the first Australian law to outlaw warrant canaries. The "Telecommunications (Interception) Amendment Act 1995" was probably the first, making it illegal to "disclose information about the existence or non-existence" of Interception Warrants.
That said, case law specific to the United States would render the covert continuance of warrant canaries subject to constitutionality challenges. West Virginia State Board of Education v. Barnette and Wooley v. Maynard rule the Free Speech Clause prohibits compelling someone to speak against one's wishes; this can easily be extended to prevent someone from being compelled to lie. New York Times Co. v. United States protects one exercising the First Amendment to publish government information, even if it is against the wishes of the government, except under grave and exceptional circumstances previously set by act and precedent. This may also have implications in regards to acting against a direct government intervention, similar to a government intervention against a warrant canary.
Companies and organizations that no longer have warrant canaries
The following is a non-exhaustive list of companies and organizations whose warrant canaries no longer appear in transparency reports:
Apple
Reddit
Silent Circle
Canary Watch
In 2015, a coalition of organizations consisting of the EFF, Freedom of the Press Foundation, NYU Law, the Calyx Institute, and the Berkman Center created a website called Canary Watch in order to provide a compiled list of all companies providing warrant canaries. Its mission was to provide prompt updates of any changes in a canary's state. It is often difficult for users to ascertain a canary's validity on their own and thus Canary Watch aimed to provide a simple display of all active canaries and any blocks of time that they were not active. In May 2016, it was announced that Canary Watch "will no longer accept submissions of new canaries or monitor the existing canaries for changes or take downs". The coalition of organizations which created Canary Watch explained their decision to discontinue the project by stating that it has achieved its goals to raise awareness about "illegal and unconstitutional national security process, including National Security Letters and other secret court processes." The Electronic Frontier Foundation also noted that "the fact that canaries are non-standard makes it difficult to automatically monitor them for changes or takedowns." They explained that the project had run its course, that ample attention had been brought to canaries, and detailed warrant canary strengths and weaknesses they observed.
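Automated monitoring of the kind Canary Watch attempted largely reduces to a freshness check against the canary's stated republication period. A minimal sketch follows; the function name and the six-month default (modelled on SpiderOak's schedule described above) are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def canary_is_fresh(last_signed: datetime, max_age_days: int = 183) -> bool:
    """Return True if the canary was (re)published within the allowed window.

    Note the caveat from the article: a stale or missing canary is not proof
    that a warrant was served, only a signal users may choose to act on.
    """
    return datetime.now(timezone.utc) - last_signed <= timedelta(days=max_age_days)
```

A real monitor would also have to verify the detached signatures and handle each provider's non-standard canary format, which is exactly the difficulty the EFF noted.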
Examples
In 2016, the Riseup tech collective failed to update their warrant canary, due to sealed warrants from a court. The canary has since been updated, but no longer states the absence of gag orders.
In February 2024, the Ethereum Foundation removed the warrant canary from their website citing "[a] voluntary enquiry from a state authority that included a requirement for confidentiality" in the commit message.
See also
Animal sentinel
Transparency report
WikiLeaks-related Twitter court orders
References
Further reading
External links
Computer law
Internet security
Patriot Act
Privacy of telecommunications
Telecommunications-related introductions in 2002
Web hosting | Warrant canary | [
"Technology"
] | 1,493 | [
"Computer law",
"Computing and society"
] |
16,227,454 | https://en.wikipedia.org/wiki/Dodrill%E2%80%93GMR | The Dodrill–GMR machine was the first operational mechanical heart successfully used while performing open heart surgery. It was developed by Forest Dewey Dodrill, a surgeon at Harper University Hospital in Detroit, and General Motors Research.
On July 3, 1952, 41-year-old Henry Opitek suffering from shortness of breath made medical history at Harper University Hospital in Michigan. The Dodrill–GMR heart machine, considered by some to be the first operational mechanical heart was successfully used while performing heart surgery. The machine performs the functions of the heart, allowing doctors to detour blood and stop the heart of a patient during an operation. The machine is external of the body and is only used during an operation. Dodrill, a surgeon at Wayne State University's Harper Hospital in Detroit, developed the machine with funding from The American Heart Association and volunteer engineers from General Motors.
The machine had two sides, each one working as a half of a human heart. Each side had multiple pumps, consisting of a glass cylinder with two valves and a pneumatically operated finger cot, that acted as a membrane pump.
Dodrill used the machine in 1952 to bypass Henry Opitek's left ventricle for 50 minutes while he opened the patient's left atrium and worked to repair the mitral valve. In his postoperative report Dodrill notes, "To our knowledge, this is the first instance of survival of a patient when a mechanical heart mechanism was used to take over the complete body function of maintaining the blood supply of the body while the heart was open and operated on".
External links
50th Anniversary of First Open Heart Surgery
DMC Harper University Hospital
References
Medical equipment
Wayne State University | Dodrill–GMR | [
"Engineering",
"Biology"
] | 344 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical equipment",
"Medical technology stubs",
"Medical technology"
] |
16,228,010 | https://en.wikipedia.org/wiki/Ghirardi%E2%80%93Rimini%E2%80%93Weber%20theory | The Ghirardi–Rimini–Weber theory (GRW) is a spontaneous collapse theory in quantum mechanics, proposed in 1986 by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber.
Measurement problem and spontaneous collapses
Quantum mechanics has two fundamentally different dynamical principles: the linear and deterministic Schrödinger equation, and the nonlinear and stochastic wave packet reduction postulate. The orthodox interpretation, or Copenhagen interpretation of quantum mechanics, posits a wave function collapse every time an observer performs a measurement. One thus faces the problem of defining what an “observer” and a “measurement” are. Another issue of quantum mechanics is that it forecasts superpositions of macroscopic objects, which are not observed in nature (see Schrödinger's cat paradox). The theory does not tell where the threshold between the microscopic and macroscopic worlds is, that is when quantum mechanics should leave space to classical mechanics. The aforementioned issues constitute the measurement problem in quantum mechanics.
Collapse theories avoid the measurement problem by merging the two dynamical principles of quantum mechanics in a unique dynamical description. The physical idea that underlies collapse theories is that particles undergo spontaneous wave-function collapses, which occur randomly both in time (at a given average rate), and in space (according to the Born rule). The imprecise “observer” and “measurement” that plague the orthodox interpretation are thus avoided because the wave function collapses spontaneously. Furthermore, thanks to a so-called “amplification mechanism” (later discussed), collapse theories recover both quantum mechanics for microscopic objects, and classical mechanics for macroscopic ones.
The GRW is the first spontaneous collapse theory that was devised. In the following years several different models were proposed. Among these are
the continuous spontaneous localization model (CSL model), which is formulated in terms of identical particles;
the Diósi–Penrose model, which relates the spontaneous collapse to gravity;
the quantum mechanics with universal position localization (QMUPL) model, which proves important mathematical results on collapse theories; and the coloured QMUPL model, which is the only collapse model involving coloured stochastic processes for which the exact solution is known.
Description
The first assumption of the GRW theory is that the wave function (or state vector) represents the most accurate possible specification of the state of a physical system. This is a feature that the GRW theory shares with the standard Interpretations of quantum mechanics, and distinguishes it from hidden variable theories, like the de Broglie–Bohm theory, according to which the wave function does not give a complete description of a physical system. The GRW theory differs from standard quantum mechanics for the dynamical principles according to which the wave function evolves. More philosophical issues related to the GRW theory and to collapse theories in general one have been discussed by Ghirardi and Bassi.
Working principles
Each particle of a system described by the multi-particle wave function ψ_t(x_1, x_2, …, x_N) independently undergoes a spontaneous localization process (or jump):

ψ_t(x_1, x_2, …, x_N) → (L̂_a^i ψ_t)(x_1, x_2, …, x_N) / ‖L̂_a^i ψ_t‖,

where the right-hand side is the state after the operator L̂_a^i has localized the i-th particle around the position a.
The localization process is random both in space and time. The jumps are Poisson distributed in time, with mean rate λ; the probability density for a jump to occur at position a is p_i(a) = ‖L̂_a^i ψ_t‖².
The localization operator has a Gaussian form:

L̂_a^i = (π r_C²)^{−3/4} exp(−(q̂_i − a)² / (2 r_C²)),

where q̂_i is the position operator of the i-th particle, and r_C is the localization distance.
In between two localization processes, the wave function evolves according to the Schrödinger equation.
These principles can be expressed in a more compact way with the statistical operator formalism. Since the localization process is Poissonian, in a time interval dt there is a probability λ dt that a collapse occurs, i.e. that the pure state |ψ_t⟩⟨ψ_t| is transformed into the statistical mixture

∫ d³a L̂_a^i |ψ_t⟩⟨ψ_t| L̂_a^i.

In the same time interval, there is a probability 1 − λ dt that the system keeps evolving according to the Schrödinger equation. Accordingly, the GRW master equation for N particles reads

dρ(t)/dt = −(i/ħ)[Ĥ, ρ(t)] − λ Σ_{i=1}^{N} ( ρ(t) − ∫ d³a L̂_a^i ρ(t) L̂_a^i ),

where Ĥ is the Hamiltonian of the system, and the square brackets denote a commutator.
Two new parameters are introduced by the GRW theory, namely the collapse rate λ and the localization distance r_C. These are phenomenological parameters, whose values are not fixed by any principle and should be understood as new constants of Nature. Comparison of the model's predictions with experimental data permits bounding of the values of the parameters (see CSL model). The collapse rate should be such that microscopic objects are almost never localized, thus effectively recovering standard quantum mechanics. The value originally proposed was λ = 10⁻¹⁶ s⁻¹, while more recently Stephen L. Adler proposed that the value λ = 10⁻⁸ s⁻¹ (with an uncertainty of two orders of magnitude) is more adequate. There is a general consensus on the value r_C = 10⁻⁷ m for the localization distance. This is a mesoscopic distance, such that microscopic superpositions are left unaltered, while macroscopic ones are collapsed.
Examples
When the wave function is hit by a sudden jump, the action of the localization operator essentially results in the multiplication of the wave function by the collapse Gaussian.
Let us consider a Gaussian wave function with spread σ, centered at x = 0, and let us assume that this undergoes a localization process at the position a. One thus has (in one dimension)

ψ(x) → ψ_a(x) = N_a exp(−(x − a)² / (2 r_C²)) ψ(x),

where N_a is a normalization factor. Let us further assume that the initial state is delocalised, i.e. that σ ≫ r_C. In this case one has

ψ_a(x) ≈ N′ exp(−(x − a)² / (2 r_C²)),

where N′ is another normalization factor. One thus finds that after the sudden jump has occurred, the initially delocalised wave function has become localized.
Another interesting case is when the initial state is the superposition of two Gaussian states of spread σ, centered at x = a and x = −a respectively: ψ(x) = ψ_a(x) + ψ_{−a}(x). If the localization occurs e.g. around x = a one has

ψ(x) → N exp(−(x − a)² / (2 r_C²)) [ψ_a(x) + ψ_{−a}(x)].

If one assumes that each Gaussian is localized (σ ≪ r_C) and that the overall superposition is delocalised (a ≫ r_C), one finds

ψ(x) ≈ N [ψ_a(x) + exp(−2a²/r_C²) ψ_{−a}(x)].
We thus see that the Gaussian that is hit by the localization is left unchanged, while the other is exponentially suppressed.
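The suppression can be seen numerically by applying a single collapse Gaussian to a two-packet superposition and renormalizing. The grid, the spread σ, the localization distance r_C and the separation below are arbitrary illustrative values in dimensionless units, not the physical parameters of the theory:

```python
import numpy as np

# One-dimensional grid (arbitrary units)
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

sigma, r_C = 0.5, 1.0      # packet spread and localization distance (toy values)
a = 5.0                    # packets centred at +a and -a

# Normalized superposition of two well-separated Gaussian packets
psi = np.exp(-(x - a)**2 / (4 * sigma**2)) + np.exp(-(x + a)**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# One localization event around x = +a: multiply by the collapse Gaussian
# and renormalize (the constant prefactor of the operator drops out here)
psi_c = np.exp(-(x - a)**2 / (2 * r_C**2)) * psi
psi_c /= np.sqrt(np.sum(np.abs(psi_c)**2) * dx)

# Probability weight left in the packet that was *not* hit by the collapse
w_left = np.sum(np.abs(psi_c[x < 0])**2) * dx
```

With these numbers the packet at −a retains a weight of order exp(−4a²/r_C²), i.e. it is suppressed beyond numerical relevance, as the analytic estimate predicts.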
Amplification mechanism
This is one of the most important features of the GRW theory, because it allows us to recover classical mechanics for macroscopic objects. Let us consider a rigid body of N particles whose statistical operator evolves according to the master equation described above. We introduce the center of mass (R̂) and relative (r̂_i) position operators, which allow us to rewrite each particle's position operator as follows: q̂_i = R̂ + r̂_i. One can show that, when the system Hamiltonian can be split into a center of mass Hamiltonian Ĥ_CM and a relative Hamiltonian Ĥ_rel, the center of mass statistical operator ρ_CM(t) evolves according to the following master equation:

dρ_CM(t)/dt = −(i/ħ)[Ĥ_CM, ρ_CM(t)] − Λ ( ρ_CM(t) − ∫ d³a L̂_a ρ_CM(t) L̂_a ),

where

Λ = Σ_{i=1}^{N} λ_i.
One thus sees that the center of mass collapses with a rate Λ that is the sum of the rates of its constituents: this is the amplification mechanism. If for simplicity one assumes that all particles collapse with the same rate λ, one simply gets Λ = Nλ.
An object that consists of on the order of the Avogadro number of nucleons (N ≈ 10²⁴) collapses almost instantly: GRW's and Adler's values of λ give respectively Λ ≈ 10⁸ s⁻¹ and Λ ≈ 10¹⁶ s⁻¹. Fast reduction of macroscopic object superpositions is thus guaranteed, and the GRW theory effectively recovers classical mechanics for macroscopic objects.
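The amplification arithmetic is a one-line estimate. The sketch below uses the commonly quoted single-particle rates, λ ≈ 10⁻¹⁶ s⁻¹ (GRW) and λ ≈ 10⁻⁸ s⁻¹ (Adler), and an Avogadro-scale particle count:

```python
N = 1e24                                  # nucleons in a macroscopic object
lambda_grw, lambda_adler = 1e-16, 1e-8    # single-particle collapse rates, s^-1

rate_grw = N * lambda_grw                 # total centre-of-mass rate, ~1e8 s^-1
rate_adler = N * lambda_adler             # ~1e16 s^-1

# Mean time between centre-of-mass collapses: ~10 ns (GRW), ~0.1 fs (Adler)
t_grw, t_adler = 1.0 / rate_grw, 1.0 / rate_adler
```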
Other features
The GRW theory makes different predictions than standard quantum mechanics, and as such can be tested against it (see CSL model).
The collapse noise repeatedly kicks the particles, thus inducing a diffusion process (Brownian motion). This introduces a steady amount of energy into the system, thus leading to a violation of the energy conservation principle. For the GRW model, one can show that energy grows linearly in time at a rate set by the collapse parameters, which for a macroscopic object is extremely small. Although such an energy increase is negligible, this feature of the model is not appealing. For this reason, a dissipative extension of the GRW theory has been investigated.
The GRW theory does not allow for identical particles. An extension of the theory with identical particles has been proposed by Tumulka.
GRW is a non-relativistic theory; its relativistic extension for non-interacting particles has been investigated by Tumulka, while interacting models are still under investigation.
The master equation of the GRW theory describes a decoherence process according to which the off-diagonal elements of the statistical operator are suppressed exponentially. This is a feature that the GRW theory shares with other collapse theories: those involving white noises are associated to Lindblad master equations, while the coloured QMUPL model follows a non-Markovian Gaussian master equation.
See also
Quantum decoherence
Penrose interpretation
Interpretations of quantum mechanics
References
Interpretations of quantum mechanics
Quantum measurement | Ghirardi–Rimini–Weber theory | [
"Physics"
] | 1,773 | [
"Interpretations of quantum mechanics",
"Quantum measurement",
"Quantum mechanics"
] |
16,228,488 | https://en.wikipedia.org/wiki/G292.0%2B01.8 | G292.0+01.8 is a supernova remnant located in the constellation Centaurus. It first gained notice as a strong radio source, and eventually deep images revealed a hot optical nebula at the location. It lies about 15,000 light years away.
The remnant's spectrum shows no detectable lines of hydrogen and helium and the presence of only oxygen and neon. The assumption is that a massive star burned through its hydrogen, producing oxygen and neon, and exploded before processing any heavier elements. It must have taken place relatively recently, as the oxygen and neon have not yet mixed with the interstellar hydrogen. An upper limit of 1500 years has been suggested, and it must be at least a few hundred years old since there are no records from the European presence in the southern hemisphere noting a supernova at this location.
See also
Crab Nebula
Pulsar wind nebula
Sources
Murdin, Paul, and David Allen, Catalog of the Universe, pp. 155–156, © 1979 Reference International Publishers Limited.
External links
Simbad
1RXS J112427.9-591538
Imagem SNR G292.0+01.8
Supernova remnants
Centaurus | G292.0+01.8 | [
"Astronomy"
] | 243 | [
"Nebula stubs",
"Astronomy stubs",
"Centaurus",
"Constellations"
] |
16,229,639 | https://en.wikipedia.org/wiki/Romer-Simpson%20Medal | The Romer-Simpson Medal is the highest award issued by the Society of Vertebrate Paleontology for "sustained and outstanding scholarly excellence and service to the discipline of vertebrate paleontology". The award is named in honor of Alfred S. Romer and George G. Simpson.
Past awards
Source: Society for Vertebrate Paleontology
1987 Everett C. Olson
1988 Bobb Schaeffer
1989 Edwin H. Colbert
1990 Richard Estes
1991 no award
1992 Loris S. Russell
1993 Zhou Mingzhen
1994 John H. Ostrom
1995 Zofia Kielan-Jaworowska
1996 Percy Butler
1997 Colin Patterson
1998 Albert E. Wood
1999 Robert Warren Wilson
2000 John A. Wilson
2001 Malcolm McKenna
2002 Mary R. Dawson
2003 Rainer Zangerl
2004 Robert L. Carroll
2005 Donald E. Russell
2006 William A. Clemens
2007 Wann Langston, Jr.
2008 Jose Bonaparte
2009 Farish Jenkins
2010 Rinchen Barsbold
2011 Alfred W. Crompton
2012 Philip D. Gingerich
2013 Jack Horner
2014 Hans-Peter Schultze
2015 Jim Hopson
2016 Mee-mann Chang
2017 Philip J. Currie
2018 Kay Behrensmeyer
2019 Michael Archer
2020 Jenny Clack
2021 Blaire Van Valkenburgh
2022 David W. Krause
See also
List of biology awards
List of paleontology awards
References
Paleontology awards
Awards established in 1987
American awards | Romer-Simpson Medal | [
"Technology"
] | 283 | [
"Science and technology awards",
"Science award stubs"
] |
16,229,934 | https://en.wikipedia.org/wiki/Laue%20equations | In crystallography and solid state physics, the Laue equations relate incoming waves to outgoing waves in the process of elastic scattering, where the photon energy or light temporal frequency does not change upon scattering by a crystal lattice. They are named after physicist Max von Laue (1879–1960).
The Laue equations can be written as Δk = k_out − k_in = G, the condition of elastic wave scattering by a crystal lattice, where Δk is the scattering vector, k_in and k_out are incoming and outgoing wave vectors (to the crystal and from the crystal, by scattering), and G is a crystal reciprocal lattice vector. Due to elastic scattering |k_out| = |k_in|, the three vectors G, k_in, and k_out form a rhombus if the equation is satisfied. If the scattering satisfies this equation, all the crystal lattice points scatter the incoming wave toward the scattering direction (the direction along k_out). If the equation is not satisfied, then for any scattering direction, only some lattice points scatter the incoming wave. (This physical interpretation of the equation is based on the assumption that scattering at a lattice point is made in a way that the scattering wave and the incoming wave have the same phase at the point.) It also can be seen as the conservation of momentum, ħk_out = ħk_in + ħG, since G is the wave vector for a plane wave associated with parallel crystal lattice planes. (Wavefronts of the plane wave are coincident with these lattice planes.)
The equations are equivalent to Bragg's law; the Laue equations are vector equations while Bragg's law is in a form that is easier to solve, but these tell the same content.
The Laue equations
Let a, b, c be primitive translation vectors (shortly called primitive vectors) of a crystal lattice L, where atoms are located at lattice points described by x = pa + qb + rc with p, q, and r as any integers. (So each lattice point x is an integer linear combination of the primitive vectors.)
Let k_in be the wave vector of an incoming (incident) beam or wave toward the crystal lattice L, and let k_out be the wave vector of an outgoing (diffracted) beam or wave from L. Then the vector Δk = k_out − k_in, called the scattering vector or transferred wave vector, measures the difference between the incoming and outgoing wave vectors.
The three conditions that the scattering vector Δk must satisfy, called the Laue equations, are the following:

a · Δk = 2πh, b · Δk = 2πk, c · Δk = 2πl,

where the numbers h, k, l are integers. Each choice of integers (h, k, l), called Miller indices, determines a scattering vector Δk_hkl. Hence there are infinitely many scattering vectors that satisfy the Laue equations, as there are infinitely many choices of Miller indices (h, k, l). Allowed scattering vectors form a lattice L*, called the reciprocal lattice of the crystal lattice L, as each Δk_hkl indicates a point of L*. (This is the meaning of the Laue equations as shown below.) This condition allows a single incident beam to be diffracted in infinitely many directions. However, the beams corresponding to high Miller indices are very weak and can't be observed. These equations are enough to find a basis of the reciprocal lattice (since each observed Δk_hkl indicates a point of the reciprocal lattice of the crystal under the measurement), from which the crystal lattice can be determined. This is the principle of x-ray crystallography.
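The Laue equations can be verified numerically for any concrete lattice. The orthorhombic primitive vectors and Miller indices below are arbitrary illustrative choices:

```python
import numpy as np

# Primitive vectors of a toy orthorhombic lattice (arbitrary length units)
a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 4.0])

# Reciprocal primitive vectors in the physicists' convention (factor 2*pi)
V = np.dot(a, np.cross(b, c))            # cell volume
b1 = 2 * np.pi * np.cross(b, c) / V
b2 = 2 * np.pi * np.cross(c, a) / V
b3 = 2 * np.pi * np.cross(a, b) / V

# Any reciprocal lattice vector is an allowed scattering vector
h, k, l = 1, -2, 3                       # illustrative Miller indices
dk = h * b1 + k * b2 + l * b3

# Laue equations: a.dk = 2*pi*h, b.dk = 2*pi*k, c.dk = 2*pi*l
laue = np.array([np.dot(a, dk), np.dot(b, dk), np.dot(c, dk)]) / (2 * np.pi)
```

The recovered triple equals the Miller indices by construction; an arbitrary Δk not on the reciprocal lattice would fail the integer test.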
Mathematical derivation
For an incident plane wave at a single frequency f (and the angular frequency ω = 2πf) on a crystal, the diffracted waves from the crystal can be thought of as the sum of outgoing plane waves from the crystal. (In fact, any wave can be represented as the sum of plane waves, see Fourier optics.) The incident wave and one of the plane waves of the diffracted wave are represented as

f_in(t, x) = A_in cos(ωt − k_in · x + φ_in), f_out(t, x) = A_out cos(ωt − k_out · x + φ_out),

where k_in and k_out are wave vectors for the incident and outgoing plane waves, x is the position vector, t is a scalar representing time, and φ_in and φ_out are initial phases for the waves. For simplicity we take waves as scalars here, even though the main case of interest is an electromagnetic field, which is a vector. We can think of these scalar waves as components of vector waves along a certain axis (x, y, or z axis) of the Cartesian coordinate system.
The incident and diffracted waves propagate through space independently, except at points of the lattice L of the crystal, where they resonate with the oscillators, so the phases of these waves must coincide. At each point x of the lattice L, we have

cos(ωt − k_in · x + φ_in) = cos(ωt − k_out · x + φ_out),

or equivalently, we must have

ωt − k_in · x + φ_in = ωt − k_out · x + φ_out + 2πn

for some integer n, that depends on the point x. Since this equation holds at x = 0, φ_in = φ_out + 2πn′ at some integer n′. So

(k_out − k_in) · x = 2πn.

(We still use n instead of n − n′ since both the notations essentially indicate some integer.) By rearranging terms, we get

Δk · x = (k_out − k_in) · x = 2πn.

Now, it is enough to check that this condition is satisfied at the primitive vectors a, b, c (which is exactly what the Laue equations say), because, at any lattice point x = pa + qb + rc, we have

Δk · x = p (Δk · a) + q (Δk · b) + r (Δk · c) = p(2πh) + q(2πk) + r(2πl) = 2πn,

where n is the integer ph + qk + rl. The claim that each parenthesis, e.g. Δk · a, is to be a multiple of 2π (that is, each Laue equation) is justified since otherwise Δk · x = 2πn does not hold for arbitrary integers p, q, r.
This ensures that if the Laue equations are satisfied, then the incoming and outgoing (diffracted) wave have the same phase at each point of the crystal lattice, so the oscillations of atoms of the crystal, that follows the incoming wave, can at the same time generate the outgoing wave at the same phase of the incoming wave.
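The equations stripped from the derivation above follow the standard phase-matching argument; a reconstruction in standard notation ($\mathbf{k}_{\mathrm{i}}$, $\mathbf{k}_{\mathrm{o}}$ for the incident and outgoing wavevectors; $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$ for the primitive lattice vectors) is:

```latex
% Phases of the incident and outgoing waves coincide at every lattice point
% x = p a + q b + r c  (p, q, r integers):
(\mathbf{k}_{\mathrm{o}} - \mathbf{k}_{\mathrm{i}}) \cdot \mathbf{x} = 2\pi n, \qquad n \in \mathbb{Z}.
% Evaluating this on the primitive vectors yields the three Laue equations:
(\mathbf{k}_{\mathrm{o}} - \mathbf{k}_{\mathrm{i}}) \cdot \mathbf{a} = 2\pi h, \qquad
(\mathbf{k}_{\mathrm{o}} - \mathbf{k}_{\mathrm{i}}) \cdot \mathbf{b} = 2\pi k, \qquad
(\mathbf{k}_{\mathrm{o}} - \mathbf{k}_{\mathrm{i}}) \cdot \mathbf{c} = 2\pi l,
% with h, k, l integers (the Miller indices of the reflection).
```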
Relation to reciprocal lattices and Bragg's Law
If $\mathbf{G} = h\mathbf{b}_1 + k\mathbf{b}_2 + l\mathbf{b}_3$ with $h$, $k$, $l$ as integers represents the reciprocal lattice for a crystal lattice (defined by $\mathbf{a}_1$, $\mathbf{a}_2$, $\mathbf{a}_3$) in real space, we know that $\mathbf{G}\cdot(n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3) = 2\pi m$ with an integer $m$, due to the known orthogonality $\mathbf{a}_i\cdot\mathbf{b}_j = 2\pi\delta_{ij}$ between primitive vectors for the reciprocal lattice and those for the crystal lattice. (We use the physical, not the crystallographer's, definition for reciprocal lattice vectors, which gives the factor of $2\pi$.) But notice that this is nothing but the Laue equations. Hence we identify $\Delta\mathbf{k} = \mathbf{G}$, meaning that the allowed scattering vectors are those equal to reciprocal lattice vectors for the crystal in diffraction; this is the meaning of the Laue equations. This fact is sometimes called the Laue condition. In this sense, diffraction patterns are a way to experimentally measure the reciprocal lattice of a crystal lattice.
The Laue condition can be rewritten as the following.
Applying the elastic scattering condition $|\mathbf{k}_{\mathrm{o}}|^2 = |\mathbf{k}_{\mathrm{i}}|^2$ (in other words, the incoming and diffracted waves are at the same (temporal) frequency; equivalently, the energy per photon does not change) to the above equation $\mathbf{k}_{\mathrm{o}} - \mathbf{k}_{\mathrm{i}} = \mathbf{G}$, we obtain
$$|\mathbf{k}_{\mathrm{i}} + \mathbf{G}|^2 = |\mathbf{k}_{\mathrm{i}}|^2 \quad\Longrightarrow\quad 2\,\mathbf{k}_{\mathrm{i}}\cdot\mathbf{G} + |\mathbf{G}|^2 = 0.$$
The second equation is obtained from the first by using $|\mathbf{k}_{\mathrm{o}}|^2 = |\mathbf{k}_{\mathrm{i}}|^2$.
The result is an equation for a plane (the set of all points $\mathbf{k}_{\mathrm{i}}$ satisfying it), since its equivalent form is a plane equation in geometry. Another equivalent form, which may be easier to understand, is $\mathbf{k}_{\mathrm{i}}\cdot\hat{\mathbf{G}} = -|\mathbf{G}|/2$, where $\hat{\mathbf{G}} = \mathbf{G}/|\mathbf{G}|$. This indicates the plane that is perpendicular to the straight line between the reciprocal lattice origin and $\mathbf{G}$ and located at the middle of that line; such a plane is called a Bragg plane. This plane can be understood since $\mathbf{G} = \mathbf{k}_{\mathrm{o}} - \mathbf{k}_{\mathrm{i}}$ is required for scattering to occur (this is the Laue condition, equivalent to the Laue equations), and, since elastic scattering has been assumed ($|\mathbf{k}_{\mathrm{o}}| = |\mathbf{k}_{\mathrm{i}}|$), $\mathbf{k}_{\mathrm{i}}$, $\mathbf{k}_{\mathrm{o}}$, and $\mathbf{G}$ form a rhombus. Each $\mathbf{G}$ is by definition the wavevector of a plane wave in the Fourier series of a spatial function whose periodicity follows the crystal lattice (e.g., the function representing the electronic density of the crystal). The wavefronts of each such plane wave are perpendicular to the wavevector $\mathbf{G}$, and these wavefronts coincide with parallel crystal lattice planes. This means that X-rays are seemingly "reflected" off parallel crystal lattice planes perpendicular to $\mathbf{G}$ at the same angle as their angle of approach with respect to the lattice planes: in elastic light (typically X-ray)–crystal scattering, the parallel crystal lattice planes perpendicular to a reciprocal lattice vector $\mathbf{G}$ act as parallel mirrors for the light, and $\mathbf{G}$, together with the incoming ($\mathbf{k}_{\mathrm{i}}$) and outgoing ($\mathbf{k}_{\mathrm{o}}$) wavevectors, forms a rhombus.
Since the angle between $\mathbf{k}_{\mathrm{i}}$ and the Bragg plane is $\theta$ (due to the mirror-like scattering, the angle between $\mathbf{k}_{\mathrm{o}}$ and the plane is also $\theta$), $|\mathbf{G}| = 2|\mathbf{k}_{\mathrm{i}}|\sin\theta$. Recall that $|\mathbf{k}_{\mathrm{i}}| = 2\pi/\lambda$, with $\lambda$ as the light (typically X-ray) wavelength, and $|\mathbf{G}| = 2\pi n/d$, with $d$ as the distance between adjacent parallel crystal lattice planes and $n$ as an integer. With these, we now derive Bragg's law, which is equivalent to the Laue equations (also called the Laue condition):
$$\frac{2\pi n}{d} = \frac{4\pi}{\lambda}\sin\theta \quad\Longrightarrow\quad n\lambda = 2d\sin\theta.$$
References
Kittel, C. (1976). Introduction to Solid State Physics, New York: John Wiley & Sons.
Notes
Crystallography | Laue equations | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,711 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
16,230,280 | https://en.wikipedia.org/wiki/Hektoen%20enteric%20agar | Hektoen enteric agar (HEK, HE or HEA) is a selective and differential agar primarily used to recover Salmonella and Shigella from patient specimens. HEA contains indicators of lactose fermentation and hydrogen sulfide production; as well as inhibitors to prevent the growth of Gram-positive bacteria. It is named after the Hektoen Institute in Chicago, where researchers developed the agar.
Use
The definitive use of HEA is to discriminate between Shigella and Salmonella, although many other species may grow on these plates. However, while the other bacteria may be clinically relevant, the assay does not discriminate among them. Effectively, HEA uses a metabolic assay to divide colonies into "Salmonella and Shigella" and "everything else". Use of these plates assumes that the user is not interested in other enteric pathogens such as Klebsiella or Escherichia.
The plates contain various sugar sources (lactose, sucrose, and salicin), none of which can be used by either Shigella or Salmonella. However, the medium also includes peptone which can be used as a carbon source. Since most bacteria can use the sugars in preference to peptone, these "uninteresting" bacteria acidify the medium and turn a pH indicator yellow or red. Peptone metabolism by Shigella and Salmonella alkalises the medium, turning a pH indicator blue.
The presence of thiosulfate and ferric ammonium citrate in the medium produces a black precipitate in the presence of H2S, allowing Shigella – which does not produce H2S and appears as green colonies – to be distinguished from Salmonella – which does produce hydrogen sulfide and appears as black colonies.
Few sulfur-reducing bacteria other than Salmonella can be isolated from the intestines. Most of these are inhibited on HEA plates by the inclusion of bile salts, so encountering a black colony that is not Salmonella is unusual, although not unheard of. Those that do occur may be identified as red or yellow colonies with a black centre, indicating that they are fermenting sugar and are probably not Salmonella. However, rare strains of Salmonella are capable of lactose fermentation and will appear the same way.
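The colony-reading logic described above can be sketched as a small decision function. The function name, flags, and return strings are illustrative only, not a clinical protocol:

```python
def read_hea_colony(ferments_sugar: bool, produces_h2s: bool) -> str:
    """Interpret a colony on Hektoen enteric agar (illustrative sketch).

    ferments_sugar -- colony acidified the medium (yellow/red colour)
    produces_h2s   -- black precipitate present in the colony
    """
    if not ferments_sugar and produces_h2s:
        return "black colony: presumptive Salmonella"
    if not ferments_sugar and not produces_h2s:
        return "green colony: presumptive Shigella"
    if ferments_sugar and produces_h2s:
        return ("yellow/red colony with black centre: "
                "sugar fermenter, probably not Salmonella")
    return "yellow/red colony: other enteric flora (e.g. E. coli, Klebsiella)"
```

Note that the last two branches overlap in practice: a rare lactose-fermenting Salmonella would be mis-called by this simple rule, exactly as the text warns.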
See also
Agar plate
Microbiology
References
External links
More details of HEA
Microbiological media | Hektoen enteric agar | [
"Biology"
] | 512 | [
"Microbiological media",
"Microbiology equipment"
] |
16,230,368 | https://en.wikipedia.org/wiki/FETI-DP | The FETI-DP method is a domain decomposition method that enforces equality of the solution at subdomain interfaces by Lagrange multipliers except at subdomain corners, which remain primal variables. The first mathematical analysis of the method was provided by Mandel and Tezaur. The method was further improved by enforcing the equality of averages across the edges or faces on subdomain interfaces which is important for parallel scalability for 3D problems. FETI-DP is a simplification and a better performing version of FETI. The eigenvalues of FETI-DP are same as those of BDDC, except for the eigenvalue equal to one, and so the performance of FETI-DP and BDDC is essentially same.
FETI-DP methods are very suitable for high performance parallel computing. A structural simulation using a FETI-DP algorithm and running on 3783 processors of the ASCI White supercomputer was awarded a Gordon Bell prize in 2002.
A recent FETI-DP method has scaled to more than 65000 processor cores of the JUGENE supercomputer solving a model problem.
See also
BDDC
FETI
References
Domain decomposition methods | FETI-DP | [
"Mathematics"
] | 254 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
16,230,510 | https://en.wikipedia.org/wiki/Greenhouse%20Gases%20Observing%20Satellite | Greenhouse Gases Observing Satellite (GOSAT), also known as , is an Earth observation satellite and the world's first satellite dedicated to greenhouse gas monitoring. It measures the densities of carbon dioxide and methane from 56,000 locations on the Earth's atmosphere. The GOSAT was developed by the Japan Aerospace Exploration Agency (JAXA) and launched on 23 January 2009, from the Tanegashima Space Center. Japan's Ministry of the Environment, and the National Institute for Environmental Studies (NIES) use the data to track gases causing the greenhouse effect, and share the data with NASA and other international scientific organizations.
Launch
GOSAT was launched along with seven other piggyback probes on an H-IIA, Japan's primary large-scale expendable launch system, at 3:54 am UTC on 23 January 2009 from Tanegashima, a small island in southern Japan, after a two-day delay due to unfavourable weather. At approximately 16 minutes after liftoff, the separation of Ibuki from the launch rocket was confirmed.
Instruments
According to JAXA, the Ibuki satellite is equipped with a greenhouse gas observation sensor (TANSO-FTS) and a cloud/aerosol sensor (TANSO-CAI) that supplements TANSO-FTS. The greenhouse gas observation sensor of Ibuki observes a wide range of wavelengths (near-infrared region–thermal infrared region) within the infrared band to enhance observation accuracy. The satellite uses a spectrometer to measure different elements and compounds based on their response to certain types of light. This technology allows the satellite to measure "the concentration of greenhouse gases in the atmosphere at a super-high resolution."
GOSAT-2
The Greenhouse Gases Observing Satellite-2 was launched from Tanegashima Space Center by a H-IIA rocket on October 29, 2018.
See also
Orbiting Carbon Observatory 2
TanSat
Space-based Measurements of Carbon Dioxide
References
External links
GOSAT site by JAXA
GOSAT site by NIES
GOSAT-2 site by NIES
Earth observation satellites of Japan
JAXA
Spacecraft launched by H-II rockets
Spacecraft launched in 2009
Satellites monitoring GHG emissions | Greenhouse Gases Observing Satellite | [
"Chemistry",
"Environmental_science"
] | 436 | [
"Greenhouse gases",
"Environmental chemistry"
] |
16,231,138 | https://en.wikipedia.org/wiki/Shilov%20system | The Shilov system is a classic example of catalytic C-H bond activation and oxidation which preferentially activates stronger C-H bonds over weaker C-H bonds for an overall partial oxidation.
Overview
The Shilov system was discovered by Alexander E. Shilov in 1969–1972 while investigating H/D exchange between isotopologues of CH4 and H2O catalyzed by simple transition metal coordination complexes. The Shilov cycle is the partial oxidation of a hydrocarbon to an alcohol or alcohol precursor (RCl) catalyzed by PtIICl2 in an aqueous solution with [PtIVCl6]2− acting as the ultimate oxidant. The cycle consists of three major steps: the electrophilic activation of the C-H bond, oxidation of the complex, and the nucleophilic oxidation of the alkane substrate. An equivalent transformation is performed industrially by steam reforming methane to syngas and then reducing the carbon monoxide to methanol. The transformation can also be performed biologically by methane monooxygenase.
Overall Transformation
RH + H2O + [PtCl6]2− → ROH + 2H+ + PtCl2 + 4Cl−
Major steps
The initial and rate-limiting step involves the electrophilic activation of the RH2C-H bond by a PtII center to produce a PtII-CH2R species and a proton. The mechanism of this activation is debated. One possibility is the oxidative addition of a sigma-coordinated C-H bond followed by the reductive removal of the proton. Another is a sigma-bond metathesis involving the formation of the M-C bond and a H-Cl or H-O bond. Regardless, it is this step that kinetically imparts the chemoselectivity to the overall transformation. Stronger, more electron-rich bonds are activated preferentially over weaker, more electron-poor bonds of species that have already been partially oxidized. This avoids a problem that plagues many partial oxidation processes, namely, the over-oxidation of substrate to thermodynamic sinks such as H2O and CO2.
In the next step the PtII-CH2R complex is oxidized by [PtIVCl6]2− to a PtIV-CH2R complex. There have been multiple studies to find a replacement oxidant that is less expensive than [PtIVCl6]2− or a method to regenerate [PtIVCl6]2−. It would be most advantageous to develop an electron train which would use oxygen as the ultimate oxidant. It is important that the oxidant preferentially oxidizes the PtII-CH2R species over the initial PtII species since PtIV complexes will not electrophilically activate a C-H bond of the alkane (although PtIV complexes electrophilically substitute hydrogens in aromatics - see refs. [1] and [2] ). Such premature oxidation shuts down the catalysis.
Finally the PtIV-CH2R undergoes nucleophilic attack by OH− or Cl− with the departure of PtII complex to regenerate the catalyst.
References
Organometallic chemistry
Catalysis
Soviet inventions | Shilov system | [
"Chemistry"
] | 673 | [
"Catalysis",
"Chemical kinetics",
"Organometallic chemistry"
] |
16,231,600 | https://en.wikipedia.org/wiki/Energy%20minister | An energy minister is a position in many governments responsible for energy production and regulation, developing governmental energy policy, scientific research, and natural resources conservation. In some countries, environmental responsibilities are given to a separate environment minister.
Country-related articles and lists
: Minister for the Environment and Energy
: Minister of Energy
: Ministry of Power, Energy and Mineral Resources
: Ministry of Energy
: Ministry of Mines and Energy
: Ministry of Minerals and Energy
: Minister of Natural Resources
: Minister of Climate and Energy
: European Commissioner for Energy
: Ministry of Ecology, Sustainable Development and Energy
: Ministry of Energy of Georgia
: Federal Ministry for Economic Affairs and Energy (since 2013)
: Minister for the Environment, Energy and Climate Change
: Secretary for the Environment
: Ministry of Industry, Energy and Tourism
: Minister of Energy and Mineral Resources
: Minister for the Environment, Climate and Communications
: Ministry of Energy
Manitoba: Minister of Science, Energy, Technology and Mines
:Ministry of Electricity and Energy
: Ministry of Energy
: Minister of Natural Resources, Environment and Climate Change
: Ministry of Energy
: Ministry of Economic Affairs (Netherlands)
: Minister of Energy, Water Resources and Irrigation
New Zealand: Minister of Energy and Resources
: Ministry of Water and Power and Ministry of Science and Technology
: Ministry of Energy and Mines
: Secretary of Energy
: Minister of Mineral Resources and Energy
: Minister for Enterprise and Energy
: Minister of Energy
: Secretary of State for Energy and Climate Change (until 2016), Secretary of State for Business, Energy and Industrial Strategy (from 2016)
: Minister for Enterprise, Energy and Tourism
: Secretary of Energy
: Ministry of Natural Resources and Environment (Vietnam)
See also
Ministry of Environment
Ministry of Mines and Energy
Ministry of Petroleum
Ministry of Electricity
Energy | Energy minister | [
"Engineering"
] | 339 | [
"Energy organizations",
"Energy ministries"
] |
16,232,794 | https://en.wikipedia.org/wiki/World%20crystal | The world crystal is a theoretical model in cosmology which provides an alternative understanding of gravity proposed by Hagen Kleinert in line with induced gravity.
Overview
Theoretical models of the universe are valid only at large distances. The properties of spacetime at ultrashort distances of the order of the Planck length are completely unknown since they have not been explored by any experiment. At present, there are various approaches that try to predict what happens at these distances, such as Quantum Gravity.
The World Crystal model is an alternative which exploits the fact that crystals with defects have the same non-Euclidean geometry as spaces with curvature and torsion. Thus the world crystal represents a model for emergent or induced gravity in an Einstein–Cartan theory of gravitation (which embraces Einstein's theory of General Relativity). The model illustrates that the world may have, at Planck distances, quite different properties from those predicted by string theorists. In this model, matter creates defects in spacetime which generate curvature and all the effects of general relativity.
The existence of a shortest length at the Planck scale has interesting consequences for quantum physics at ultrahigh energies. For example, the Heisenberg uncertainty relation $\Delta x\,\Delta p \ge \hbar/2$ will be modified. The World Crystal implies specific modifications.
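The unmodified relation referred to above is the ordinary Heisenberg inequality; a commonly quoted generic form of a minimal-length modification (a generalized uncertainty principle, shown here for illustration and not necessarily Kleinert's specific result) is:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\qquad\longrightarrow\qquad
\Delta x \;\ge\; \frac{\hbar}{2\,\Delta p} \;+\; \alpha\,\ell_P^2\,\frac{\Delta p}{\hbar},
% where \ell_P is the Planck length and \alpha a dimensionless constant;
% the second term enforces a minimum resolvable length of order \ell_P.
```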
See also
Quantum cosmology
String cosmology
Brane cosmology
Loop quantum cosmology
Top-down cosmology
Non-standard cosmology
References
Literature
Theories of gravity
Physical cosmology | World crystal | [
"Physics",
"Astronomy"
] | 287 | [
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Theories of gravity",
"Astronomical sub-disciplines"
] |
16,233,284 | https://en.wikipedia.org/wiki/Relative%20income%20hypothesis | The relative income hypothesis was developed by James Duesenberry in 1949. It consists of two separate consumption hypothesis.
The first hypothesis states that an individual's attitude to consumption is dictated more by their income in relation to others than by an abstract standard of living. The percentage of income consumed by an individual depends on their percentile position within the income distribution.
The second hypothesis states that the present consumption is influenced not merely by present levels of absolute and relative income but also by levels of consumption attained in a previous period. In Duesenberry's opinion, it is difficult for a family to reduce a level of consumption once it is attained. The aggregate ratio of consumption to income is assumed to depend on the level of present income relative to past peak income.
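The ratchet effect in the second hypothesis can be illustrated numerically. This sketch assumes a simple rule in which the average propensity to consume rises when current income falls below past peak income; the functional form and coefficients are illustrative, not Duesenberry's estimates:

```python
def consumption(income: float, past_peak_income: float,
                a: float = 0.9, b: float = 0.25) -> float:
    """Duesenberry-style ratchet rule (illustrative coefficients).

    The average propensity to consume (APC) rises when income falls
    below the previously attained peak, because the household defends
    its attained standard of living.
    """
    relative = income / max(past_peak_income, income)  # <= 1 by construction
    apc = a + b * (1.0 - relative)  # larger share consumed after a fall
    return apc * income

# Income falls from a peak of 100 to 80: consumption falls less than
# proportionally (APC rises from 0.90 to 0.95).
c_peak = consumption(100.0, 100.0)  # 90.0
c_drop = consumption(80.0, 100.0)   # 76.0, not the proportional 72.0
```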
Sources
Duesenberry, J. S. Income, Saving and the Theory of Consumer Behaviour. Cambridge: Harvard University Press, 1949.
Frank, Robert H., 2005. “The Mysterious Disappearance of James Duesenberry,” The New York Times, June 9, 2005.
Hollander, Heinz, 2001. “On the validity of utility statements: standard theory versus Duesenberry’s,” Journal of economic Behavior & Organization 45, 3: 227–249.
Economic theories
Behavioral economics
Hypotheses | Relative income hypothesis | [
"Biology"
] | 256 | [
"Behavior",
"Behavioral economics",
"Behaviorism"
] |
11,965,490 | https://en.wikipedia.org/wiki/Active%20EMI%20reduction | In the field of EMC, active EMI reduction (or active EMI filtering) refers to techniques aimed to reduce or to filter electromagnetic noise (EMI) making use of active electronic components. Active EMI reduction contrasts with passive filtering techniques, such as RC filters, LC filters RLC filters, which includes only passive electrical components. Hybrid solutions including both active and passive elements exist.
Standards concerning conducted and radiated emissions published by the IEC and the FCC set the maximum noise level allowed for different classes of electrical devices. The frequency range of interest spans from 150 kHz to 30 MHz for conducted emissions and from 30 MHz to 40 GHz for radiated emissions. Meeting these requirements and guaranteeing the functionality of an electrical apparatus subject to electromagnetic interference are the main reasons to include an EMI filter. In an electrical system, power converters, i.e. DC/DC converters, inverters and rectifiers, are the major sources of conducted EMI, due to their high-frequency switching, which gives rise to unwanted fast current and voltage transients. Since power electronics is nowadays widespread in many fields, from industrial power applications to the automotive industry, EMI filtering has become necessary. In other fields, such as the telecommunication industry where the major focus is on radiated emissions, other techniques have been developed for EMI reduction, such as spread spectrum clocking, which makes use of digital electronics, or electromagnetic shielding.
Working principle
The concept behind active EMI reduction has already been implemented previously in acoustics with the active noise control and it can be described considering the following three different blocks:
Sensing stage: the undesired EMI noise, which can be treated either as a high-frequency current superimposed on the functional current or as a voltage, is sensed and sent to the electronic stage. The sensor could be a current transformer to register currents or a capacitive branch to sense voltages. The detected signal should be an exact copy of the noise, both in magnitude and phase.
Electronic stage: the recorded signal is amplified and inverted exploiting electronics. Analog devices, e.g. OpAmps and InAmps in different configurations or transistors, are used. For conducted emission frequencies, high gain and wide bandwidth can be achieved with many available devices. This electronic block requires an external power supply.
Injecting stage: the elaborated signal is eventually injected back into the system with opposite phase in order to achieve the noise reduction or cancellation. Currents can be injected using a capacitive branch, while voltages can be induced with a series transformer.
The active EMI reduction device should not affect the normal operation of the raw system. Active filters are intended to act only on the high-frequency noises produced by the system and should not modify normal operation at DC or power-line frequency.
Filter topologies
The EMI noise can be categorized as common mode (CM) and differential mode (DM).
Depending on the noise component that should be compensated, different topologies and configurations are possible. Two families of active filter exist, the feedback and the feed-forward controlled: the former detects the noise at the receiver and generates a compensation signal to suppress it; the latter detects the noise at the noise source and generates an opposite signal to cancel it out.
Even though the spectrum of an EMI noise is composed of several spectral components, a single frequency at a time is taken into account to allow a simple circuit representation, as shown in Fig. 1. The noise source is represented by its Norton equivalent, which delivers a sinusoidal current to the load impedance.
The target of the filter is to suppress every single frequency noise current flowing through the load, and in order to understand how it achieves the task, two very basic circuit elements are introduced: the nullator and the norator.
The nullator is an element whose voltage and current are always zero, while the norator is an element whose voltage and current can assume any value.
For example, by placing the nullator in series or in parallel to the load impedance we can either cancel the single frequency noise current or voltage across . Then the norator must be placed to satisfy the Kirchhoff's current and voltage laws (KVL and KCL). The active EMI filter always tries to keep a constant value of current or voltage at the load, in this specific case this value is equal to zero. The combination of a nullator and a norator forms a nullor, which is an element that can be represented by an ideal controlled voltage/current source.
The series and parallel combinations of norator and nullator give four possible configurations of ideal controlled sources, which are shown in Fig. 2 for the feedback topology and in Fig. 3 for the feed-forward topology.
The four implementation that can be actualized are:
Current sensing - Current injecting (current controlled current source)
Voltage sensing - Current injecting (voltage controlled current source)
Current sensing - Voltage injecting (current controlled voltage source)
Voltage sensing - Voltage injecting (voltage controlled voltage source)
Feedback
To assess the performance and the effectiveness of the filter, the insertion loss (IL) can be evaluated in each case. The IL, expressed in dB, represents the achievable noise attenuation and is defined as
$$\mathrm{IL} = 20\log_{10}\left|\frac{V_{wo}}{V_{w}}\right|,$$
where $V_{wo}$ is the load voltage measured without the filter and $V_{w}$ is the load voltage with the filter included in the system. By applying KVL, KCL and Ohm's law to the circuit, these two voltages can be calculated.
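The insertion loss follows directly from the two load-voltage measurements defined above; a minimal sketch (function and variable names are mine):

```python
import math

def insertion_loss_db(v_load_without: float, v_load_with: float) -> float:
    """IL in dB from load voltages measured without / with the filter.

    IL > 0 dB means the filter attenuates the noise at the load;
    IL < 0 dB means the active filter amplifies it instead.
    """
    return 20.0 * math.log10(abs(v_load_without / v_load_with))

# A filter that cuts the noise voltage by a factor of 10 gives 20 dB:
il = insertion_loss_db(1.0, 0.1)  # 20.0 dB
```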
If $A$ is the filter's gain, i.e., the transfer function between the sensed and the injected signals, the IL results to be:
Larger IL implies a greater attenuation, while a smaller than unity IL implies an undesired noise signal amplification caused by the active filter. For example, type (a) (current sensing and compensation) and (d) (voltage sensing and compensation) filters, if the mismatch between and is large enough so that one of the two becomes negligible compared to the other, provide ILs irrespective of the system impedances, which means the higher the gain, the better the performances. The large mismatch between and occurs in most of real applications, where the noise source impedance is much smaller (for the differential mode test setup) or much larger (for the common mode test setup) than the load impedance , that, in standard test setup, is equal to the LISN impedance. In these two cases ILs can be approximated to:
On the other hand, in the type (c) (current sensing and voltage compensation) active filter, the gain of the active filter should be larger than the total impedance of the given system to obtain the maximum IL. This means that the filter should provide a high series impedance between the noise source and the receiver to block the noise current. Similar conclusion can be made for a type (b) (voltage detecting and current compensating) active filter; the equivalent admittance of the active filter should be much higher than the total admittance of the system without the filter, so that the active filter reroutes the noise current and minimizes the noise voltage at the receiver port. In this way, active filters try to block and divert the noise propagation path as conventional passive LC filters do. Nevertheless, active filters employing type (b) or (c) topologies require a gain A larger than the total impedance (or admittance) of the raw system and, in other words, their ILs are always dependent on system impedance and , even though the mismatch between them is large.
Feed forward
While feedback filters register the noise at load side and inject the compensation signal at source side, the feed forward devices do the opposite: the sensing is at source end and the compensation at load port. For this reason, there cannot be feedforward-type implementation for type (b) and (c). Type (a) (current sensing and injecting) and type (d) (voltage sensing and injecting) can be implemented and the calculated ILs result to be:
Considering also in these two cases the condition for maximum noise reduction, i.e. maximum IL, it is achieved when the filter's gain is equal to one: a unity-gain feed-forward filter exactly cancels the sensed noise. It can also be noted that, if the gain deviates sufficiently from unity, the insertion loss becomes negative and the active filter thus amplifies the noise instead of reducing it.
Active vs passive
Passive EMI filter performance depends upon the impedances of the surrounding electrical system, while, in some configurations, that of active filters does not.
Active filters requires an external power supply for their internal circuitry.
Active filters have to deal with the stability of the electronic components.
As the functional current and voltage of a system increase, passive components increase in size and price. This issue does not affect active filters since they deal only with the detected high-frequency small signal.
See also
Passive filter
Electromagnetic compatibility
Electromagnetic interference
References
Electromagnetic compatibility | Active EMI reduction | [
"Engineering"
] | 1,837 | [
"Electrical engineering",
"Electromagnetic compatibility",
"Radio electronics"
] |
11,965,582 | https://en.wikipedia.org/wiki/British%20Energy%20Efficiency%20Federation | The British Energy Efficiency Federation (BEEF) was founded in 1996 by the United Kingdom Government to provide a forum for consultation between existing industry associations in the energy sector.
References
Business organisations based in the United Kingdom
Energy conservation in the United Kingdom
Energy industry organizations
1996 establishments in the United Kingdom
Organizations established in 1996 | British Energy Efficiency Federation | [
"Engineering"
] | 61 | [
"Energy organizations",
"Energy industry organizations"
] |
11,965,603 | https://en.wikipedia.org/wiki/Stellar%20rotation | Stellar rotation is the angular motion of a star about its axis. The rate of rotation can be measured from the spectrum of the star, or by timing the movements of active features on the surface.
The rotation of a star produces an equatorial bulge due to centrifugal force. As stars are not solid bodies, they can also undergo differential rotation. Thus the equator of the star can rotate at a different angular velocity than the higher latitudes. These differences in the rate of rotation within a star may have a significant role in the generation of a stellar magnetic field.
In its turn, the magnetic field of a star interacts with the stellar wind. As the wind moves away from the star its angular speed decreases. The magnetic field of the star interacts with the wind, which applies a drag to the stellar rotation. As a result, angular momentum is transferred from the star to the wind, and over time this gradually slows the star's rate of rotation.
Measurement
Unless a star is being observed from the direction of its pole, sections of the surface have some amount of movement toward or away from the observer. The component of movement that is in the direction of the observer is called the radial velocity. For the portion of the surface with a radial velocity component toward the observer, the radiation is shifted to a higher frequency because of Doppler shift. Likewise the region that has a component moving away from the observer is shifted to a lower frequency. When the absorption lines of a star are observed, this shift at each end of the spectrum causes the line to broaden. However, this broadening must be carefully separated from other effects that can increase the line width.
The component of the radial velocity observed through line broadening depends on the inclination of the star's pole to the line of sight. The derived value is given as , where is the rotational velocity at the equator and is the inclination. However, is not always known, so the result gives a minimum value for the star's rotational velocity. That is, if is not a right angle, then the actual velocity is greater than . This is sometimes referred to as the projected rotational velocity. In fast rotating stars polarimetry offers a method of recovering the actual velocity rather than just the rotational velocity; this technique has so far been applied only to Regulus.
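The projected rotational velocity follows from the Doppler-broadening relation $\Delta\lambda/\lambda = v\sin i / c$; a minimal sketch (idealized, ignoring all other broadening mechanisms, with names of my own choosing):

```python
C = 299_792_458.0  # speed of light, m/s

def v_sin_i(delta_lambda_nm: float, lambda0_nm: float) -> float:
    """Projected equatorial velocity (m/s) from the rotational
    half-width of a spectral line; idealized estimate that ignores
    thermal, pressure, and turbulent broadening."""
    return C * delta_lambda_nm / lambda0_nm

# A 0.05 nm rotational half-width on a 500 nm line:
v = v_sin_i(0.05, 500.0)  # about 3.0e4 m/s, i.e. ~30 km/s
```

Because only $v\sin i$ is measured, this is a lower bound on the true equatorial velocity unless the inclination $i$ is known independently (e.g. from polarimetry, as for Regulus).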
For giant stars, the atmospheric microturbulence can result in line broadening that is much larger than the rotational effects, effectively drowning out the signal. However, an alternate approach can be employed that makes use of gravitational microlensing events. These occur when a massive object passes in front of the more distant star and functions like a lens, briefly magnifying the image. The more detailed information gathered by this means allows the effects of microturbulence to be distinguished from rotation.
If a star displays magnetic surface activity such as starspots, then these features can be tracked to estimate the rotation rate. However, such features can form at locations other than equator and can migrate across latitudes over the course of their life span, so differential rotation of a star can produce varying measurements. Stellar magnetic activity is often associated with rapid rotation, so this technique can be used for measurement of such stars. Observation of starspots has shown that these features can actually vary the rotation rate of a star, as the magnetic fields modify the flow of gases in the star.
Physical effects
Equatorial bulge
Gravity tends to contract celestial bodies into a perfect sphere, the shape where all the mass is as close to the center of gravity as possible. But a rotating star is not spherical in shape; it has an equatorial bulge.
As a rotating proto-stellar disk contracts to form a star its shape becomes more and more spherical, but the contraction doesn't proceed all the way to a perfect sphere. At the poles all of the gravity acts to increase the contraction, but at the equator the effective gravity is diminished by the centrifugal force. The final shape of the star after star formation is an equilibrium shape, in the sense that the effective gravity in the equatorial region (being diminished) cannot pull the star to a more spherical shape. The rotation also gives rise to gravity darkening at the equator, as described by the von Zeipel theorem.
An extreme example of an equatorial bulge is found on the star Regulus A (α Leonis A). The equator of this star has a measured rotational velocity of 317 ± 3 km/s, about 86% of the velocity at which the star would break apart; this corresponds to a rotation period of 15.9 hours. The equatorial radius of this star is 32% larger than its polar radius. Other rapidly rotating stars include Alpha Arae, Pleione, Vega and Achernar.
The break-up velocity of a star is an expression that is used to describe the case where the centrifugal force at the equator is equal to the gravitational force. For a star to be stable the rotational velocity must be below this value.
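Equating centrifugal and gravitational acceleration at the equator gives a rough estimate of the break-up velocity, v_crit = sqrt(GM/R_eq). A minimal sketch (this simple balance ignores the oblateness of a real rapidly rotating star, and solar values are used only for illustration):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def breakup_velocity_kms(mass_kg, eq_radius_m):
    """Equatorial velocity at which centrifugal acceleration balances
    gravity: G*M/R**2 = v**2/R, so v = sqrt(G*M/R). Oblateness ignored."""
    return math.sqrt(G * mass_kg / eq_radius_m) / 1e3

# For a star with the Sun's mass and radius this is roughly 440 km/s;
# the Sun's actual equatorial velocity (~2 km/s) is far below break-up.
print(breakup_velocity_kms(M_SUN, R_SUN))
```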
Differential rotation
Surface differential rotation is observed on stars such as the Sun when the angular velocity varies with latitude. Typically the angular velocity decreases with increasing latitude. However, the reverse has also been observed, such as on the star designated HD 31993. The first such star, other than the Sun, to have its differential rotation mapped in detail was AB Doradus.
The underlying mechanism that causes differential rotation is turbulent convection inside a star. Convective motion carries energy toward the surface through the mass movement of plasma, and this plasma carries a portion of the star's angular momentum. When turbulence arises through shear and rotation, the angular momentum can become redistributed to different latitudes through meridional flow.
The interfaces between regions with sharp differences in rotation are believed to be efficient sites for the dynamo processes that generate the stellar magnetic field. There is also a complex interaction between a star's rotation distribution and its magnetic field, with the conversion of magnetic energy into kinetic energy modifying the velocity distribution.
Rotation braking
During formation
Stars are believed to form as the result of a collapse of a low-temperature cloud of gas and dust. As the cloud collapses, conservation of angular momentum causes any small net rotation of the cloud to increase, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse.
As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. Thus the rotation rate must be braked during the first 100,000 years to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind in magnetic braking. The expanding wind carries away the angular momentum and slows down the rotation rate of the collapsing protostar.
Most main-sequence stars with a spectral class between O5 and F5 have been found to rotate rapidly. For stars in this range, the measured rotation velocity increases with mass. This increase in rotation peaks among young, massive B-class stars. "As the expected life span of a star decreases with increasing mass, this can be explained as a decline in rotational velocity with age."
After formation
For main-sequence stars, the decline in rotation can be approximated by the relation:

Ω_e ∝ t^(−1/2)

where Ω_e is the angular velocity at the equator and t is the star's age. This relation is named Skumanich's law after Andrew P. Skumanich, who discovered it in 1972.
Gyrochronology is the determination of a star's age based on the rotation rate, calibrated using the Sun.
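Skumanich's law can be turned into a crude age estimator: since the angular velocity falls off as the inverse square root of age, the age scales as the square of the rotation period. A minimal sketch calibrated on the Sun (real gyrochronology relations also depend on stellar mass and colour, which are ignored here; the solar numbers are adopted values):

```python
T_SUN_GYR = 4.6      # adopted solar age, Gyr (calibration point)
P_SUN_DAYS = 25.4    # adopted solar rotation period, days

def gyro_age_gyr(period_days):
    """Crude gyrochronological age from Skumanich's law.
    Omega ~ t**-0.5 implies t ~ Omega**-2 ~ P**2, calibrated on the Sun."""
    return T_SUN_GYR * (period_days / P_SUN_DAYS) ** 2

print(gyro_age_gyr(25.4))   # a solar twin: ~4.6 Gyr
print(gyro_age_gyr(12.7))   # rotating twice as fast: ~1.15 Gyr
```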
Stars slowly lose mass by the emission of a stellar wind from the photosphere. The star's magnetic field exerts a torque on the ejected matter, resulting in a steady transfer of angular momentum away from the star. Stars with a rate of rotation greater than 15 km/s also exhibit more rapid mass loss, and consequently a faster rate of rotation decay. Thus as the rotation of a star is slowed because of braking, there is a decrease in rate of loss of angular momentum. Under these conditions, stars gradually approach, but never quite reach, a condition of zero rotation.
At the end of the main sequence
Ultracool dwarfs and brown dwarfs experience faster rotation as they age, due to gravitational contraction. These objects also have magnetic fields similar to the coolest stars. However, the discovery of rapidly rotating brown dwarfs such as the T6 brown dwarf WISEPC J112254.73+255021.5 lends support to theoretical models that show that rotational braking by stellar winds is over 1000 times less effective at the end of the main sequence.
Close binary systems
A close binary star system occurs when two stars orbit each other with an average separation that is of the same order of magnitude as their diameters. At these distances, more complex interactions can occur, such as tidal effects, transfer of mass and even collisions. Tidal interactions in a close binary system can result in modification of the orbital and rotational parameters. The total angular momentum of the system is conserved, but the angular momentum can be transferred between the orbital periods and the rotation rates.
Each of the members of a close binary system raises tides on the other through gravitational interaction. However the bulges can be slightly misaligned with respect to the direction of gravitational attraction. Thus the force of gravity produces a torque component on the bulge, resulting in the transfer of angular momentum (tidal acceleration). This causes the system to steadily evolve, although it can approach a stable equilibrium. The effect can be more complex in cases where the axis of rotation is not perpendicular to the orbital plane.
For contact or semi-detached binaries, the transfer of mass from a star to its companion can also result in a significant transfer of angular momentum. The accreting companion can spin up to the point where it reaches its critical rotation rate and begins losing mass along the equator.
Degenerate stars
After a star has finished generating energy through thermonuclear fusion, it evolves into a more compact, degenerate state. During this process the dimensions of the star are significantly reduced, which can result in a corresponding increase in angular velocity.
White dwarf
A white dwarf is a star that consists of material that is the by-product of thermonuclear fusion during the earlier part of its life, but lacks the mass to burn those more massive elements. It is a compact body that is supported by a quantum mechanical effect known as electron degeneracy pressure that will not allow the star to collapse any further. Generally most white dwarfs have a low rate of rotation, most likely as the result of rotational braking or by shedding angular momentum when the progenitor star lost its outer envelope. (See planetary nebula.)
A slowly rotating white dwarf star cannot exceed the Chandrasekhar limit of 1.44 solar masses without collapsing to form a neutron star or exploding as a Type Ia supernova. Once the white dwarf reaches this mass, such as by accretion or collision, the gravitational force would exceed the pressure exerted by the electrons. If the white dwarf is rotating rapidly, however, the effective gravity is diminished in the equatorial region, thus allowing the white dwarf to exceed the Chandrasekhar limit. Such rapid rotation can occur, for example, as a result of mass accretion that results in a transfer of angular momentum.
Neutron star
A neutron star is a highly dense remnant of a star that is primarily composed of neutrons—a particle that is found in most atomic nuclei and has no net electrical charge. The mass of a neutron star is in the range of 1.2 to 2.1 times the mass of the Sun. As a result of the collapse, a newly formed neutron star can have a very rapid rate of rotation; on the order of a hundred rotations per second.
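The rapid spin follows from conservation of angular momentum during the collapse: with L = I·Ω and I proportional to M·R², shrinking the radius by a factor f shortens the rotation period by f². A minimal sketch with illustrative (assumed) pre-collapse numbers:

```python
def spun_up_period_s(initial_period_s, initial_radius_km, final_radius_km):
    """Spin-up of a collapsing stellar core from conservation of angular
    momentum: with L = I*Omega and I ~ M*R**2 (uniform sphere, no mass
    loss, both idealizations), P_final = P_initial * (R_final/R_initial)**2."""
    return initial_period_s * (final_radius_km / initial_radius_km) ** 2

# Illustrative numbers: a white-dwarf-sized pre-collapse core (~7000 km)
# rotating once per 1000 s, collapsing to a 10 km neutron star:
p = spun_up_period_s(1000.0, 7000.0, 10.0)
print(p)   # about 0.002 s, i.e. hundreds of rotations per second
```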
Pulsars are rotating neutron stars that have a magnetic field. A narrow beam of electromagnetic radiation is emitted from the poles of rotating pulsars. If the beam sweeps past the direction of the Solar System then the pulsar will produce a periodic pulse that can be detected from the Earth. The energy radiated by the magnetic field gradually slows down the rotation rate, so that older pulsars can require as long as several seconds between each pulse.
Black hole
A black hole is an object with a gravitational field that is sufficiently powerful that it can prevent light from escaping. When they are formed from the collapse of a rotating mass, they retain all of the angular momentum that is not shed in the form of ejected gas. This rotation causes the space within an oblate spheroid-shaped volume, called the "ergosphere", to be dragged around with the black hole. Mass falling into this volume gains energy by this process and some portion of the mass can then be ejected without falling into the black hole. When the mass is ejected, the black hole loses angular momentum (the "Penrose process").
See also
Rossiter–McLaughlin effect
References
External links
Rotation
Rotation
Concepts in stellar astronomy | Stellar rotation | [
"Physics",
"Astronomy"
] | 2,658 | [
"Physical phenomena",
"Concepts in astrophysics",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Concepts in stellar astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
11,965,693 | https://en.wikipedia.org/wiki/Ciro%20de%20Quadros | Ciro Carlos Araujo de Quadros (January 30, 1940 – May 28, 2014) was a Brazilian leader in the field of Public Health, in particular, the area of vaccines and preventable diseases. He was born in Rio Pardo, Brazil.
Eradication of polio
De Quadros played a critical role in developing the strategies now used worldwide in the eradication of polio.
He led the team which eradicated polio from the Americas. He received a Medical Doctor degree from the Federal University for Health Sciences (UFCSPA), Porto Alegre, Brazil, 1966, and a M.P.H. degree from the National School of Public Health, Rio de Janeiro, Brazil, 1968.
Later work
In 2003, de Quadros joined the Sabin Vaccine Institute, a non-profit organization honoring the legacy of Albert Sabin, developer of the oral polio vaccine. De Quadros was instrumental in the Institute's international immunization advocacy programs, where he worked on issues such as the introduction of new vaccines (e.g., rotavirus, rubella, human papillomavirus, and pneumococcal vaccines) and the sustainability of national immunization programs. He also served on the faculty of the Johns Hopkins School of Hygiene and Public Health and of the School of Medicine of the George Washington University.
Death
He died of pancreatic cancer on May 28, 2014 at his home in Washington, D.C.
Public Health Awards
He published and presented at conferences throughout the world and received a number of international awards, including:
the 1993 Prince Mahidol Award of Thailand
the 2000 Albert B. Sabin Gold Medal
The Order of Rio Branco from his native Brazil
election to the National Institute of Medicine
2011 BBVA Foundation Frontiers of Knowledge Award of Development Cooperation for leading efforts to eliminate polio and measles from the western hemisphere and being one of the most important scientists in the eradication of smallpox around the world. His work has shown that vaccination programs can be carried out in an economically sustainable way.
Public Health Hero of the Americas award from the Pan-American Health Organization (2014)
Geneva Forum for Health Award (2014)
Featured in PAHO's Art for Research exhibit collection "Shaping the World" by photographer Theo Chalmers, which highlights how research for health drives social and economic development. The collection has been exhibited in Africa, Europe and throughout the Americas.
See also
Sabin Vaccine Institute
References
External links
The burden of pneumococcal disease among Latin American and Caribbean children: review of the evidence.Rev Panam Salud Pública. 2009 Mar;25(3):270-9. Review. Accessed on: 11/08/2013
Cost-effectiveness of pneumococcal conjugate vaccination in Latin America and the Caribbean: a regional analysis.Rev Panam Salud Pública. 2008 Nov;24(5):304-13. Accessed on: 11/08/2013
Identifying unit costs for use in regional economic evaluation: an illustrative analysis of childhood pneumococcal conjugate vaccine in Latin America and the Caribbean. Rev Panam Salud Pública. 2009 Nov;26(5):458-68. Accessed on: 11/08/2013
Rational use of rubella vaccine for prevention of congenital rubella syndrome in the Americas. Review. Rev Panam Salud Pública. 1998 Sep;4(3):156-60. Accessed on: 11/08/2013
Accelerated rubella control and the prevention of congenital rubella syndrome. Rev Panam Salud Pública. 2002 Apr;11(4): 273-6. Accessed on: 11/08/2013
Shaping the World, an art exhibit of the Pan American Health Organization highlighting how research for health improves people's life and human development, and yields high returns on investment.
1940 births
2014 deaths
Brazilian public health doctors
Polio
Vaccinologists
People from Rio Grande do Sul
Members of the National Academy of Medicine
Deaths from pancreatic cancer in Washington, D.C.
20th-century Brazilian physicians | Ciro de Quadros | [
"Biology"
] | 844 | [
"Vaccination",
"Vaccinologists"
] |
11,966,284 | https://en.wikipedia.org/wiki/Flash%20flood%20watch | A flash flood watch (SAME code: FFA; also referred as a "green box" by meteorologists) is severe weather watch product of the National Weather Service that is issued when conditions are favorable for flash flooding in flood-prone areas, usually when grounds are already saturated from recent rains, or when upcoming rains will have the potential to cause a flash flood. These watches are also occasionally issued when a dam may break in the near future.
Countries such as Australia also issue similarly worded warnings.
Example of a flash flood watch
Below is an example issued by the National Weather Service in Mount Holly, New Jersey.
688
WGUS61 KPHI 071932
FFAPHI
URGENT - IMMEDIATE BROADCAST REQUESTED
Flood Watch
National Weather Service Mount Holly NJ
332 PM EDT Wed Jul 7 2021
NJZ001-007-PAZ054-055-060>062-080745-
/O.NEW.KPHI.FF.A.0003.210708T1600Z-210709T1600Z/
/00000.0.ER.000000T0000Z.000000T0000Z.000000T0000Z.OO/
Sussex-Warren-Carbon-Monroe-Berks-Lehigh-Northampton-
Including the cities of Jim Thorpe, Allentown, Newton, Washington,
Bethlehem, Stroudsburg, Reading, and Easton
332 PM EDT Wed Jul 7 2021
...FLASH FLOOD WATCH IN EFFECT FROM THURSDAY AFTERNOON THROUGH
FRIDAY MORNING...
The National Weather Service in Mount Holly has issued a
* Flash Flood Watch for portions of northern New Jersey...and
Pennsylvania...including the following areas...in northern New
Jersey...Sussex and Warren. In Pennsylvania...Berks, Carbon,
Lehigh, Monroe, and Northampton.
* From Thursday afternoon through Friday morning.
* Heavy rainfall will develop over portions of eastern PA and NW New
Jersey ahead of Tropical Storm Elsa Thursday afternoon. The heavy
rains will see 1 to 2 inches with locally higher amounts and then
will see another 1 to 2 inches with the heavy rainfall associated
with Tropical Storm Elsa Thursday evening through the overnight
hours.
* Heavy rain in short periods of time will cause the potential for
streams and creeks to quickly rise out of their banks as well as
the potential for flash flooding in areas of poor drainage.
$$
DEZ001>004-MDZ012-015-019-020-NJZ008>010-012>027-PAZ070-071-101>106-
080745-
/O.NEW.KPHI.FF.A.0003.210708T2100Z-210709T1600Z/
/00000.0.ER.000000T0000Z.000000T0000Z.000000T0000Z.OO/
New Castle-Kent-Inland Sussex-Delaware Beaches-Kent MD-Queen Annes-
Talbot-Caroline-Morris-Hunterdon-Somerset-Middlesex-Western Monmouth-
Eastern Monmouth-Mercer-Salem-Gloucester-Camden-Northwestern
Burlington-Ocean-Cumberland-Atlantic-Cape May-Atlantic Coastal Cape
May-Coastal Atlantic-Coastal Ocean-Southeastern Burlington-Delaware-
Philadelphia-Western Chester-Eastern Chester-Western Montgomery-
Eastern Montgomery-Upper Bucks-Lower Bucks-
Including the cities of Glassboro, Denton, Jackson, Flemington,
Media, Philadelphia, Doylestown, Chalfont, Morristown, Pennsville,
Honey Brook, Centreville, Norristown, Georgetown, Mount Holly, Ocean
City, Cherry Hill, Moorestown, Freehold, Pottstown, Hammonton, Cape
May Court House, Oxford, Kennett Square, Sandy Hook, Chestertown,
Morrisville, West Chester, Millville, Wilmington, New Brunswick,
Dover, Trenton, Lansdale, Wharton State Forest, Atlantic City,
Collegeville, Long Beach Island, Camden, Easton, Perkasie, Rehoboth
Beach, and Somerville
332 PM EDT Wed Jul 7 2021
...FLASH FLOOD WATCH IN EFFECT FROM THURSDAY AFTERNOON THROUGH
FRIDAY MORNING...
The National Weather Service in Mount Holly has issued a
* Flash Flood Watch for portions of Delaware...northeast Maryland...
New Jersey...and southeast Pennsylvania...including the following
areas...in Delaware...Delaware Beaches, Inland Sussex, Kent, and
New Castle. In northeast Maryland...Caroline, Kent MD, Queen
Annes, and Talbot. In New Jersey...Atlantic, Atlantic Coastal Cape
May, Camden, Cape May, Coastal Atlantic, Coastal Ocean,
Cumberland, Eastern Monmouth, Gloucester, Hunterdon, Mercer,
Middlesex, Morris, Northwestern Burlington, Ocean, Salem,
Somerset, Southeastern Burlington, and Western Monmouth. In
southeast Pennsylvania...Delaware, Eastern Chester, Eastern
Montgomery, Lower Bucks, Philadelphia, Upper Bucks, Western
Chester, and Western Montgomery.
* From Thursday afternoon through Friday morning.
* Tropical Storm Elsa will move across portions of DelMarVa, New
Jersey, and eastern PA Thursday night, bringing heavy rainfall to
the region. Expected rainfall totals across DelMarVa and New
Jersey range from 2 to 3 inches, with locally higher amounts up to
5 inches possible. Further west of the I-95 corridor could expect
to see 1-2 inches, with locally higher amounts to 3 inches
possible.
* Heavy rain in short periods of time will cause the potential for
streams and creeks to quickly rise out of their banks as well as
the potential for flash flooding in urban areas.
$$
Deal
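The slash-delimited line near the top of the product above (for example /O.NEW.KPHI.FF.A.0003.210708T1600Z-210709T1600Z/) is a machine-readable P-VTEC string encoding the issuing office, phenomenon (FF = flash flood), significance (A = watch), event number, and event times. A minimal parsing sketch (field layout per the NWS VTEC format; error handling omitted):

```python
import re

# P-VTEC: class.action.office.phenomenon.significance.event_number.begin-end
VTEC_RE = re.compile(
    r"/(?P<product_class>[OTEX])"    # O=operational, T=test, E/X=experimental
    r"\.(?P<action>[A-Z]{3})"        # e.g. NEW, CON, EXT, CAN, EXP
    r"\.(?P<office>[A-Z]{4})"        # issuing office, e.g. KPHI
    r"\.(?P<phenomenon>[A-Z]{2})"    # e.g. FF = flash flood
    r"\.(?P<significance>[A-Z])"     # W=warning, A=watch, Y=advisory, S=statement
    r"\.(?P<event_number>\d{4})"
    r"\.(?P<begin>\d{6}T\d{4}Z)-(?P<end>\d{6}T\d{4}Z)/"
)

line = "/O.NEW.KPHI.FF.A.0003.210708T1600Z-210709T1600Z/"
fields = VTEC_RE.match(line).groupdict()
print(fields["phenomenon"], fields["significance"])   # FF A: a flash flood watch
```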
PDS watches
If a flash flood watch is likely to lead to a major flash flood disaster, the enhanced wording "This is a particularly dangerous situation" (PDS) can be added to the watch; such wording is used only occasionally.
Below is an example issued by the National Weather Service in Memphis, Tennessee.
URGENT - IMMEDIATE BROADCAST REQUESTED
FLOOD WATCH
NATIONAL WEATHER SERVICE MEMPHIS TN
239 PM CDT SUN APR 24 2011
...VERY HEAVY RAINFALL THROUGH THE MIDDLE OF THIS WEEK WILL LIKELY
LEAD TO SIGNIFICANT...WIDESPREAD FLASH FLOODING...
...THIS IS A PARTICULARLY DANGEROUS SITUATION...
A BOUNDARY WILL CONTINUE TO REMAIN STATIONARY ACROSS SOUTHERN
MISSOURI INTO KENTUCKY THROUGH MONDAY. REPEATED ROUNDS OF
THUNDERSTORMS WILL TRACK ALONG THE FRONT BRINGING HEAVY RAINFALL.
THEN A LOW PRESSURE SYSTEM WILL TRACK ALONG IT INTO MISSOURI AND
PUSH THE FRONT FURTHER SOUTH TO ALONG THE I-40 CORRIDOR MONDAY
NIGHT THROUGH TUESDAY NIGHT. THIS WILL SHIFT THE HEAVY RAIN AXIS
FURTHER SOUTH TO ALONG AND JUST NORTH OF THE I-40 CORRIDOR.
A SECOND LOW PRESSURE SYSTEM WILL TRACK ALONG THE NEWLY STALLED
BOUNDARY AND SET OFF ADDITIONAL TRAINING THUNDERSTORMS LATE
TUESDAY NIGHT AND WEDNESDAY. THE FINAL COLD FRONT WILL PASS
THROUGH LATE WEDNESDAY AFTERNOON...ENDING THE PERSISTENT HEAVY
RAINFALL.
ARZ026>028-035-036-048-049-058-MSZ001>014-TNZ003-004-019>021-
048>055-088>092-250400-
/O.NEW.KMEG.FF.A.0007.110426T0000Z-110428T0000Z/
/00000.0.ER.000000T0000Z.000000T0000Z.000000T0000Z.OO/
CRAIGHEAD-POINSETT-MISSISSIPPI-CROSS-
CRITTENDEN-ST. FRANCIS-
LEE AR-PHILLIPS-DESOTO-MARSHALL-BENTON MS-TIPPAH-ALCORN-
TISHOMINGO-TUNICA-TATE-PRENTISS-
COAHOMA-QUITMAN-PANOLA-LAFAYETTE-
UNION-WEAKLEY-HENRY-DYER-GIBSON-CARROLL-LAUDERDALE-TIPTON-HAYWOOD-
CROCKETT-MADISON-CHESTER-HENDERSON-
DECATUR-SHELBY-FAYETTE-
HARDEMAN-MCNAIRY-HARDIN-
INCLUDING THE CITIES OF...JONESBORO...HARRISBURG...BLYTHEVILLE...
WYNNE...WEST MEMPHIS...FORREST CITY...HELENA...
SOUTHAVEN...
OLIVE BRANCH...CORINTH...IUKA...TUNICA...[[Booneville,
Mississippi|BOONEVILLE]]...
CLARKSDALE...BATESVILLE...OXFORD...NEW ALBANY...
MARTIN...
DRESDEN...PARIS...DYERSBURG...HUMBOLDT...MILAN...HUNTINGDON...
COVINGTON...JACKSON...LEXINGTON...BARTLETT...GERMANTOWN...
COLLIERVILLE...MEMPHIS...MILLINGTON...SOMERVILLE...
BOLIVAR...
SAVANNAH
239 PM CDT SUN APR 24 2011
...FLASH FLOOD WATCH IN EFFECT FROM MONDAY EVENING THROUGH
WEDNESDAY EVENING...
THE NATIONAL WEATHER SERVICE IN MEMPHIS HAS ISSUED A
* FLASH FLOOD WATCH FOR PORTIONS OF EAST ARKANSAS...NORTH MISSISSIPPI
AND WEST TENNESSEE...INCLUDING THE FOLLOWING AREAS...IN EAST
ARKANSAS...CRAIGHEAD...CRITTENDEN...CROSS...
LEE...MISSISSIPPI...PHILLIPS...POINSETT AND ST. FRANCIS. IN
NORTH MISSISSIPPI...ALCORN...BENTON...COAHOMA...DESOTO...
LAFAYETTE...MARSHALL...PANOLA...PRENTISS...QUITMAN...TATE...
TIPPAH...TISHOMINGO...TUNICA AND UNION. IN WEST TENNESSEE...
CARROLL...CHESTER...CROCKETT...DECATUR...DYER...FAYETTE...
GIBSON...HARDEMAN...HARDIN...HAYWOOD...HENDERSON...HENRY...
LAUDERDALE...MADISON...MCNAIRY...SHELBY...TIPTON AND WEAKLEY.
* FROM MONDAY EVENING THROUGH WEDNESDAY EVENING.
* THIS IS A PARTICULARLY DANGEROUS SITUATION
* TOTAL RAINFALL AMOUNTS OF 5 TO 8 INCHES ARE EXPECTED ALONG AND
NORTH OF I-40 WITH 2 TO 5 INCHES EXPECTED SOUTH OF I-40. LOCALLY
HIGHER AMOUNTS ARE LIKELY.
* RAINFALL AMOUNTS SUCH AS THESE MAY LEAD TO WIDESPREAD...
SIGNIFICANT...AND LIFE THREATENING FLASH FLOODING. THIS EVENT
MAY BE AS SEVERE AS THE MAY 1–2, 2010 FLOODING IN PLACES. FLASH
FLOODING OF CITIES...RURAL AREAS...RIVERS...AND SMALL STREAMS
ARE POSSIBLE.
PRECAUTIONARY/PREPAREDNESS ACTIONS...
A FLASH FLOOD WATCH MEANS THAT CONDITIONS MAY DEVELOP THAT LEAD
TO FLASH FLOODING. FLASH FLOODING IS A VERY DANGEROUS SITUATION.
YOU SHOULD MONITOR LATER FORECASTS AND BE PREPARED TO TAKE ACTION
SHOULD FLASH FLOOD WARNINGS BE ISSUED.
&&
$$
ARZ008-009-017-018-MOZ113-115-TNZ001-002-250400-
/O.EXT.KMEG.FF.A.0006.000000T0000Z-110428T0000Z/
/00000.0.ER.000000T0000Z.000000T0000Z.000000T0000Z.OO/
RANDOLPH-CLAY-LAWRENCE-GREENE-DUNKLIN-PEMISCOT-LAKE-OBION-
INCLUDING THE CITIES OF...WALNUT RIDGE...PARAGOULD...KENNETT...
CARUTHERSVILLE...UNION CITY
239 PM CDT SUN APR 24 2011
...FLASH FLOOD WATCH NOW IN EFFECT THROUGH WEDNESDAY EVENING...
THE FLASH FLOOD WATCH IS NOW IN EFFECT FOR
* PORTIONS OF EAST ARKANSAS...SOUTHEAST MISSOURI AND WEST
TENNESSEE...INCLUDING THE FOLLOWING AREAS...IN EAST ARKANSAS...
CLAY...GREENE...LAWRENCE AND RANDOLPH. IN SOUTHEAST MISSOURI...
DUNKLIN AND PEMISCOT. IN WEST TENNESSEE...LAKE AND OBION.
* THROUGH WEDNESDAY EVENING.
* THIS IS A PARTICULARLY DANGEROUS SITUATION
* ADDITIONAL RAINFALL AMOUNTS OF 6 TO 9 INCHES ARE EXPECTED.
LOCALLY HIGHER AMOUNTS ARE LIKELY. THIS...IN COMBINATION OF
THE 2 TO 4 INCHES THAT HAVE ALREADY FALLEN MAY LEAD TO TOTAL
RAINFALL AMOUNTS IN EXCESS OF 12 INCHES IN MANY LOCATIONS.
* RAINFALL AMOUNTS SUCH AS THESE WILL LIKELY LEAD TO WIDESPREAD...
SIGNIFICANT...AND LIFE THREATENING FLASH FLOODING. THIS EVENT
MAY BE AS SEVERE AS THE MAY 1-2 2010 FLOODING IN MANY PLACES.
FLASH FLOODING OF CITIES...RURAL AREAS...RIVERS...AND SMALL
STREAMS ARE POSSIBLE.
PRECAUTIONARY/PREPAREDNESS ACTIONS...
A FLASH FLOOD WATCH MEANS THAT CONDITIONS MAY DEVELOP THAT LEAD
TO FLASH FLOODING. FLASH FLOODING IS A VERY DANGEROUS SITUATION.
YOU SHOULD MONITOR LATER FORECASTS AND BE PREPARED TO TAKE ACTION
SHOULD FLASH FLOOD WARNINGS BE ISSUED.
&&
$$
BORGHOFF
See also
Tornado warning
Tornado watch
Severe thunderstorm warning
Severe thunderstorm watch
Flash flood warning
Particularly dangerous situation
References
National Weather Service
Flood control
Weather warnings and advisories | Flash flood watch | [
"Chemistry",
"Engineering"
] | 2,706 | [
"Flood control",
"Environmental engineering"
] |
11,967,342 | https://en.wikipedia.org/wiki/Lake%20Cheko | Lake Cheko () is a small freshwater lake in Siberia, near the Podkamennaya Tunguska River, in what is now the Evenkiysky District of the Krasnoyarsk Krai.
It is primarily known for its proposed relationship with the 1908 Tunguska event.
Dimensions and environs
Lake Cheko is a small bowl-shaped lake. It is about long, wide and deep.
In the lake flows the Kimchu River (Russian: Кимчу), which flows into the Chunya River (Russian: Чуня), which in turn flows into the Podkamennaya Tunguska.
Lake Cheko is roughly north-northwest of the epicenter of the Tunguska event. The lake is inside the blast zone, and in the probable direction of whatever caused the Tunguska event.
Proposed impact origin
A 1961 investigation estimated the age of the lake to be at least 5000 years, based on meters-thick silt deposits on the lake bed. However, Luca Gasperini and his co-investigators working in 2008 concluded that the sediments, isotopes, and pollen "suggest that Lake Cheko formed at the time of the Tunguska Event" and thus was only 100 years old. They also reported that acoustic-echo soundings revealed a conical shape for the lake bed, which they interpreted as consistent with an impact crater. They said the lake's long axis points to the hypocenter of the Tunguska explosion, about 7.0 km away, and they interpreted magnetic readings as indicative of a possible meter-sized chunk of rock below the lake's deepest point, which they suggested could be a meteorite.
In 2008, a BBC News story on the 100th anniversary of the Tunguska Event mentioned that researchers at Imperial College London had pointed out that many of the trees surrounding the lake are older than 100 years, which suggests that the lake could not have been created by an impact in 1908. The researchers also pointed out other problems, including the morphology of the lake and the surrounding terrain, and the lack of impactor debris and ejecta, noting that the characteristics of the impactor required by the impact theory are inconsistent with existing models of the known features of the event. Other researchers have said it is unlikely that a stony meteorite in the right size range would have the mechanical strength necessary to survive atmospheric passage intact, and yet still retain a velocity large enough to excavate a crater that size on reaching the ground.
In 2017, Russian scientists reported isotope evidence showing the lake is older than the Tunguska Event.
See also
List of possible impact structures on Earth
References
External links
geotimes.org site with 3D reconstruction of Lake Cheko
Morphobathymetric map of the Lake Cheko
Cheko
Possible impact craters on Earth
Tunguska event | Lake Cheko | [
"Physics"
] | 578 | [
"Unsolved problems in physics",
"Tunguska event"
] |
11,967,583 | https://en.wikipedia.org/wiki/Albert%20Caquot | Albert Irénée Caquot (; 1 July 1881 – 28 November 1976) was a French engineer. He received the “Croix de Guerre 1914–1918 (France)” (military honor) and was Grand-croix of the Légion d’Honneur (1951). In 1962, he was awarded the Wilhelm Exner Medal. He was a member of the French Academy of Sciences from 1934 until his death in 1976.
Early life
Albert was born to Paul Auguste Ondrine Caquot and his wife, Marie Irma (née Cousinard), who owned a family farm in Vouziers, in the Ardennes, near the Belgian border. His father introduced him to modern technology, installing electricity and a telephone as early as 1890. One year after high school, at eighteen years old, he was admitted to the École Polytechnique (class of 1899). Six years later, he graduated into the Corps des Ponts et Chaussées.
Career
From 1905 to 1912, he was a project manager in Troyes (Aube), where he was recognized for the civil works improvements he undertook on the city sewer system; these protected the city from the centennial flood of the River Seine in 1910. In 1912, he joined a leading structural engineering firm, where he applied his unique talent as a structural designer.
Albert Caquot conducted research and immediately applied it in construction. His most notable contributions include the following:
Reinforced concrete design and structural engineering in a broader sense. In 1930, he defined the intrinsic curve and explained why the elasticity theory was insufficient for modern structure design.
Geotechnics and foundation design. He stated the corresponding states theorem (CST). In 1933, his publication on the stability of pulverulent and coherent materials received an admiring report from the French Academy of Sciences, to which he was elected a life member in 1934. In 1948, with Jean Kérisel (1908–2005), his son-in-law and disciple, he developed an advanced theory of great importance for passive earth pressure in cases where there is soil-wall friction. This principle has been broadly applied ever since to the design of ground engineering structures such as retaining walls, tunnels, and foundation piles.
The revival of cable-stayed bridges with reinforced concrete (Donzère Mondragon bridge, 1952), which he envisioned with long spans, even crossing the English Channel. In 1967, he designed a conceptual double-deck bridge of this type with 810 m-wide spans and two 25 m-wide deck stages accommodating eight lanes for cars, 2 for rail, and 2 for Skytrain.
In the course of his life, Albert Caquot taught mechanical science for a long time in three of the most prominent French engineering schools in Paris: Écoles nationales supérieures des Mines, des Ponts et de l’Aéronautique.
In the course of his career as a designer, he designed more than 300 bridges and facilities, among which several were world records at the time:
the La Madeleine Bridge, in Nantes (1928), a concrete cantilever bridge over the River Loire,
the Lafayette Bridge crossing the tracks of the Gare de l’Est in Paris (1928). This is a truss bridge in reinforced concrete, where concrete vibrators using compressed air were used for the first time in history,
the new La Caille Bridge (1928), on the ravine of Usses, in the Alps, close to Annecy. This is a 140-m-span concrete arc bridge,
the great Louis Joubert dry dock (Normandie-Dock) in the port of Saint-Nazaire (1929–1933),
the La Girotte Dam (1944–1949),
the Bollène lock, on the left side (navigating downwards) of the Donzère-Mondragon Dam (built on the Donzère-Mondragon Canal, lateral to the Rhône river), the world's tallest lock (1950),
the Bildstock tunnel (1953–1955),
the world's largest tidal power plant on the River Rance, in Brittany (1961–1966). In his eighties, Albert Caquot made a critical contribution to the construction of the dam, designing an enclosure in order to protect the construction site from the 12-m-high ocean tides and the strong streams.
Two prestigious achievements made him famous internationally: the internal structure of the Christ the Redeemer statue in Rio de Janeiro (Brazil) at the peak of Corcovado Mountain (1931) and the George V Bridge on the Clyde River in Glasgow (Scotland) for which the Scottish engineers asked for his assistance.
In his late eighties, he developed a gigantic tidal power project to capture the tide energy in Mont St Michel bay, in Normandy.
Aeronautics
During the course of his life, he committed alternately to structural and aeronautical engineering, following the rhythm imposed by the First and Second World Wars. Albert Caquot's contributions to aeronautics included designing the "Caquot dirigible" and technical innovations at the new French Aviation Ministry, where he created several fluid mechanics institutes that still exist today. Marcel Dassault, whom Albert Caquot charged with developing several major aeronautical projects at the beginning of his career, said that Caquot was one of the best engineers aeronautics ever had: visionary and ahead of his time. He led aeronautical innovations for forty years.
As early as 1901, already a visionary, he performed his military service in an airship unit of the French army. At the beginning of the First World War, he was mobilised as a first lieutenant with the 40e Compagnie d'Aérostiers, equipped with Drachen-type airships. He noticed the poor wind behavior of these sausage-shaped captive balloons, which were ineffective except in calm conditions.
In 1914, he designed a new sausage-shaped dirigible equipped with three air-filled lobes spaced evenly around the tail as stabilizers. He moved the inner air balloonette from the rear to the underside of the nose, separate from the main gas envelope. The Caquot could hold in 90 km/h winds and remain horizontal. France manufactured "Caquot dirigibles" for all the allied forces, including the English and United States armies, for three years. The United States also manufactured nearly a thousand "Caquot R balloons" in 1918-1919. This balloon gave France and its allies an advantage in military observation, significantly contributing to the allies' supremacy in artillery and aviation and eventually to the final victory. In January 1918, Georges Clémenceau named him technical director of the entire military aviation.
In 1919, Albert Caquot proposed the creation of the French aeronautical museum (today called Musée de l'Air et de l'Espace, in Le Bourget). This museum is the oldest aeronautical museum in the world.
In 1928, Albert Caquot became the first executive director of the new Aviation ministry. He implemented a research, prototypes, and mass production policy, which contributed quickly to France's leadership in the aeronautical industry. His main accomplishments are:
the development of fluid mechanics research and education. He nationalized in 1928 the Ecole Nationale Supérieure d’Aéronautique (Sup' Aero), the leading engineering school in aeronautics that contributed to French scientific excellence in aeronautics and led to the creation of several institutions like ONERA (National Office of Aerospace Studies and Research) in 1946 and the CNES (National Center of Space Studies) in 1952. The school still exists today as ISAE-SUPAERO.
the construction of the gigantic Chalais-Meudon Wind Tunnel in 1929 (120 m long and 25 m high), which allowed an aircraft to be tested in real conditions, with its engine running and the pilot on board. This wind tunnel was the largest in the world at the time and was used to test the Dassault Mirage III, the Sud Aviation Caravelle and the Concorde, but also cars like the Renault 4 CV and the VW Beetle.
In 1933, after a budget cut prevented him from proceeding with his projects, he resigned and returned to structural engineering for several years.
In 1938, under the threat of the war, Albert Caquot was brought back to manage all the national aeronautical businesses. He resigned in January 1940.
Legacy
On 2 July 2001, a 4.5-FRF (0.69-€) stamp was issued in France to celebrate Albert Caquot's legacy on the 120th anniversary of his birth and the 25th anniversary of his death. A "Caquot dirigible" and the bridge of La Caille, two of his creations, surround his picture on the stamp.
Since 1989, the Prix Albert Caquot is awarded annually by the French Association of Civil and Structural Engineering.
See also
Airship
French Academy of Science
École polytechnique, France
École des Ponts ParisTech
Musée de l'air et de l'espace, Le Bourget
Notes
Bibliography
« Albert Caquot 1881-1976 - Savant, soldat et bâtisseur », Jean Kérisel – August 2001
Bulletin of the SABIX, special number 28 about Albert Caquot, July 2001
Le Curieux Vouzinois, "Hyppolyte Taine and Albert Caquot", by Jean Kerisel, Vouziers (the Ardennes), 25 March 2001
Sciences Ouest, numero 112, "L'Ecole Polytechnique et la Bretagne. Le barrage et l'usine maremotrice de la Rance", June 1995
L'Union, "Une journee particulière en hommage a Albert Caquot", Vouziers (the Ardennes), 25 March 1995
La Jaune et la Rouge, "Albert Caquot (X 1899)", by Robert Paoli (X 1931), November 1993
“Albert Caquot - Wilhelm Exner Medaillen Stiftung.” Wilhelm Exner Medaillen Stiftung, 11 May 2022, www.wilhelmexner.org/en/medalists/albert-caquo/.
“Albert Caquot, 1881–1976.” Géotechnique, vol. 27, no. 3, Sept. 1977, pp. 449–50, https://doi.org/10.1680/geot.1977.27.3.449.
External links
Biography on the Ecole Nationale des Ponts et Chaussees website (in French)
Biography on the Ecole Nationale Superieure des Mines de Paris website (in French)
Biography on the Vouziers city website (in French)
Biography on the planete-TP website (in French)
List of Albert Caquot awards (AFGC) since 1989
1881 births
1976 deaths
French bridge engineers
Corps des ponts
École des Ponts ParisTech alumni
École Polytechnique alumni
Électricité de France people
French aerospace engineers
French civil engineers
Geotechnical engineers
Grand Cross of the Legion of Honour
Officers of the French Academy of Sciences
People from Vouziers
Recipients of the Croix de Guerre 1914–1918 (France)
Structural engineers | Albert Caquot | [
"Engineering"
] | 2,282 | [
"Structural engineering",
"Structural engineers"
] |
11,967,974 | https://en.wikipedia.org/wiki/White%20Bear%20Forest | The White Bear Forest is an old growth forest, located in Temagami, Ontario, Canada. The forest is named after Chief White Bear, who was the last chief of the Teme-Augama Anishnabai before Europeans appeared in the region. In some parts of the White Bear Forest trees commonly reach 200 to 300 years in age, while the oldest tree accurately aged in White Bear Forest was a red pine that was 400 years old in 1999. The White Bear Forest contains one of Canada's oldest portages, dating back some 3,000 years. Today, more than of trails access the White Bear Forest. A trail guide is available online at http://ancientforest.org/whitebear.html.
The Caribou Mountain contains a renovated fire lookout tower that visitors can climb for a small fee.
History
In 1928, the Gillies Bros. logging company logged about of the White Bear Forest surrounding Cassels Lake and Rabbit Lake. A log dam was constructed at the narrows connecting Cassels Lake and Rabbit Lake to float logs from the surrounding area out to the Ottawa River. The water level in numerous lakes in the Temagami area was raised by several feet. The Gillies Bros. logging company then cut the trees from the flooded forest area, leaving behind the snags and stumps seen in the water. The area now called the White Bear Forest escaped the first wave of logging partly because the mill owner enjoyed the view of this forest, which was situated directly across the lake from the mill site. In 1992, the White Bear Forest was once again spared from logging because of local opposition, and is now promoted by the town of Temagami as a tourist attraction. In June 1996, the White Bear Forest was declared a Conservation Area by the Ministry of Natural Resources.
Trails
The trails of the White Bear Forest offer hikers, canoeists and adventurers the opportunity to travel through a portion of Ontario's forest that has changed little over time. The majority of the trails are located in an area that has never been logged or mined. The trails vary, from a leisurely one-hour hike, to all day or weekend trips. Many species of birds and wildlife can be observed in their natural surroundings. There are at least seven named trails in the White Bear Forest.
Old Ranger Trail was formerly used by fire rangers to get from Caribou Lake Portage to the fire tower on top of Caribou Mountain. They would haul their canoes through the trail and pull themselves up Caribou Mountain using an old water hose. Remnants of this hose can still be found around Caribou Mountain. The trail is about long.
Another trail adjacent to the fire tower is the White Bear Trail. It is long, coming out on the Ontario Hydro line in the east and adjoining the Old Ranger Trail in the west.
Red Fox Trail is about long, crossing the Ontario Hydro line at two locations. At Pleasant Lake, the Red Fox Trail immediately goes into the old growth forest. The Red Fox Trail comes out at two locations; one at the Beaver Pond and the other at the end of the Caribou Trail at Pinque Lake.
To the west and southwest is the long Caribou Trail. It has at least three entrances; the Trans Canada Pipeline on O'Connor Drive, the Red Fox Trail and across from Finlayson Point on Highway 11. The trail extends along the shores of both Caribou Lake and Pingue Lake.
Peregrine Trail extends along the shores of Cassels Lake and through the heart of the White Bear Forest. It is about long and comes out at three locations; Cassels Lake, Pecours Bay of Snake Island Lake and the Red Fox Trail.
Otter Trail is in length, extending largely along Cassels Lake and Pecours Bay of Snake Island Lake.
Beaver Trail loops through the heart of the White Bear Forest. In contrast to most other White Bear Forest trails, the Beaver Trail contains rocky terrain and steep cliffs. Its two southern ends come out on the Peregrine Trail whereas its northern end comes out on the Otter Trail.
See also
List of old growth forests
References
External links
Town of Temagami
Friends of Temagami
Old-growth forests
Geography of Temagami
Protected areas of Nipissing District | White Bear Forest | [
"Biology"
] | 867 | [
"Old-growth forests",
"Ecosystems"
] |
11,969,012 | https://en.wikipedia.org/wiki/Kerbango | Kerbango was both a company acquired by 3Com and its lead product. Kerbango was founded in 1998 in Silicon Valley by former executives from Apple Computer and Power Computing Corporation. On June 27, 2000, 3Com announced it was acquiring the Kerbango company in an $80 million deal. As part of the deal, Kerbango's CEO, Jon Fitch, became vice president and general manager of 3Com's Internet Audio division, working under Julie Shimer, then vice president and general manager of 3Com's Consumer Networks Business.
Kerbango Internet Radio
The "Kerbango Internet Radio" was intended to be the first stand-alone product that let users listen to Internet radio without a computer. Linux Journal quipped that the Kerbango 100E, the prototype, looked "like a cross between an old Wurlitzer jukebox and the dashboard of a '54 Buick." This initial model was even advertised on Amazon.com in anticipation of its sale, although it was never released.
The Kerbango 100E was an embedded Linux device (running MontaVista's Hard Hat Linux), reportedly using RealNetworks' G2 Player to play Internet audio streams (RealAudio G2, 5.0, 4.0, and 3.0 streams as well as streaming MP3). A broadband connection to the Internet was required, as dial-up connections were not supported. In addition to Internet streams, the 100E featured an AM/FM tuner. The Kerbango radio's tuning user interface was designed by Alan Luckow and long-time Apple QuickTime developer Jim Reekes and was later adopted for use within iTunes.
The Kerbango radio also had a companion website which allowed the user to control various aspects of the radio, save presets and edit account information. The website also acted as a streaming radio search engine, where users could search for, and listen to streaming radio stations through their browser.
References
Internet audio players
Online companies of the United States
Internet radio
Defunct computer companies of the United States
Defunct computer hardware companies | Kerbango | [
"Technology"
] | 426 | [
"Multimedia",
"Internet radio"
] |
11,969,224 | https://en.wikipedia.org/wiki/List%20of%20Eclipse-based%20software | The Eclipse IDE platform can be extended by adding different plug-ins. Notable examples include:
Acceleo, an open source code generator that uses EMF-based models to generate any textual language (Java, PHP, Python, etc.).
Actifsource, a modeling and code generation workbench.
Adobe ColdFusion Builder, the official Adobe IDE for ColdFusion.
Adobe Flash Builder (formerly Adobe Flex Builder), an Adobe IDE based on Eclipse for building Flex applications for the Flash Platform and mobile platforms.
ADT Eclipse plugin developed by Google for the Android SDK.
AnyLogic, a simulation modeling tool developed by The AnyLogic Company.
Appcelerator, a cross platform mobile development tool by Axway Appcelerator
Aptana, Web IDE based on Eclipse
Avaya Dialog Designer, a commercial IDE to build scripts for voice self-service applications.
Bioclipse, a visual platform for chemo- and bioinformatics.
BIRT Project, open source software project that provides reporting and business intelligence capabilities for rich client and web applications.
Bonita Open Solution relies on Eclipse for the modeling of processes, implementing a BPMN and a Web form editors.
Cantata IDE is a computer program for software testing at run time of C and C++ programs.
CityEngine procedural based city generator.
Code Composer Studio Texas Instruments' IDE for microcontroller development.
CodeWarrior Freescale's IDE for microcontrollers, since Version 10 (C/C++/Assembly compilers).
Compuware OptimalJ, a model-driven development environment for Java
Coverity Static Analysis, which finds crash-causing defects and security vulnerabilities in code
DBeaver, universal database manager and SQL client
ECLAIR, a tool for automatic program analysis, verification, testing and transformation
EasyEclipse, bundled distributions of the Eclipse IDE
g-Eclipse, an integrated workbench framework to access the power of existing Grid infrastructures
GAMA Platform, an integrated development environment for building spatially explicit agent-based simulations
GForge Advanced Server - Collaboration tool with multiframe view through Eclipse integration for multiple functions
Google Plugin for Eclipse, Development tools to design, build, optimize and deploy cloud applications to Google App Engine
GumTree, an integrated workbench for instrument control and data analysis
IBM Rational Software Architect, supporting design with UML and development of applications. This product replaces some Rational Rose products family.
IBM Rational Software Modeler is a robust, scalable solution for requirements elaboration, design, and general modeling. It supports design with UML. This product replaces some Rational Rose products family.
IBM Rational Performance Tester is a performance testing tool used to identify the presence and cause of system performance bottlenecks.
IBM Rational Method Composer, a software development process management and delivery platform
IBM Rational Publishing Engine, a document generation solution
IBM Lotus Expeditor a client-server platform that provides a framework to develop lightweight rich client applications for desktops and various mobile devices.
IBM Lotus Symphony a set of applications free of charge: a word processor, a spreadsheet program, and a presentation program, each based on OpenOffice.org
IBM Notes (since version 8), a client-server collaborative application platform, used for enterprise email and calendaring, as well as for collaborative business applications.
Intel FPGA (formerly Altera), Nios-II EDS, embedded C/C++ software development environment for Intel Nios-II and ARM processors in the HPS part of SoC FPGA's.
Kalypso (software), an Open Source software project, that can be used as a general modeling system. It is focused mainly on numerical simulations in water management such as generation of concepts for flood prevention and protection or risk management.
KNIME, an open source data analytics, reporting and integration platform.
MontaVista DevRocket, plug-in to Eclipse
MyEclipse, from Genuitec is an IDE which also enables Angular Typescript development from within the Java-Eclipse platform using its Webclipse plug-in and Angular IDE solution.
Nuxeo RCP, an open source rich client platform for ECM applications.
OEPE, Oracle Enterprise Pack for Eclipse.
OMNeT++, Network Simulation Framework.
Parasoft C/C++test, an automated C and C++ software testing tool for static analysis, Unit test-case generation and execution, regression testing, runtime error detection, and code review.
Parasoft Jtest, an automated Java software testing tool for static analysis, Unit test-case generation and execution, regression testing, runtime error detection, and code review.
Parasoft SOAtest tool suite for testing and validating APIs and API-driven applications (e.g., cloud, mobile apps, SOA).
Parasoft Virtualize, a service virtualization product that can create, deploy, and manage simulated test environments for software development and software testing purposes.
PHP Development Tools (or simply PDT) is an open source IDE with basic functions for editing and debugging PHP application.
PHPEclipse is an open source PHP IDE with integrated debugging, developed and supported by a committed community.
Polyspace detects and proves the absence of certain run-time errors in source code with a plugin for Eclipse for C, C++, and Ada languages
Powerflasher FDT is an Eclipse-based integrated development environment for building Flex applications for the Flash Platform and mobile platforms.
Pulse (ALM) from Genuitec is a free or for-fee service intended for Eclipse tool management and application delivery, collaboration and management.
PyDev is an Integrated Development Environment (IDE) used for programming in Python supporting code refactoring, graphical debugging, code analysis among other features.
Red Hat JBoss Developer Studio
Remote Component Environment is an integration platform for engineers which enables integration, workflow management and data management in a distributed environment.
Rodin, a tool for software specification and refinement using the B-Method.
RSSOwl, a Java RSS/RDF/Atom newsreader
SAP NetWeaver Developer Studio, an IDE for most of the Java part of SAP technology
Sirius allows creating custom graphical modeling workbenches by leveraging the Eclipse Modeling technologies, including EMF and GMF.
Spatiotemporal Epidemiological Modeler (STEM), is an open source tool for creating and studying new mathematical models of Infectious Disease.
SpringSource STS, plugin for Spring framework based development
Sybase PowerDesigner, a data-modeling and collaborative design tool for enterprises that need to build or re-engineer applications.
Teamcenter, a Product Lifecycle Management software that uses Eclipse as its platform from version 2007.1.
Tensilica Xtensa Xplorer, an IDE which integrates software development, processor configuration and optimization, multiple-processor SOC architecture tools and SOC simulation into one common design environment.
ThreadSafe, a static analysis tool for Java focused on finding and diagnosing concurrency bugs (race conditions, deadlocks, ...)
uDig, a user-friendly GIS map-making program
VistaMax IDE for Maemo, a visual Integrated Development Environment based on Eclipse
VP/MS, Eclipse-based modeling language and product lifecycle management tool by CSC.
WireframeSketcher, a wireframing tool for desktop, web and mobile applications.
XMind, a cross-platform mind-mapping/brainstorming/presentation software application.
Xilinx's EDK (Embedded Development Kit) is the development package for building MicroBlaze (and PowerPC) embedded processor systems in Xilinx FPGAs as part of the Xilinx IDE software (until version 14.7)
Xilinx SDK as part of the newer Vivado design software package
Zen Coding, A set of plugins for HTML and CSS hi-speed coding.
Zend Studio An IDE used for developing PHP websites and web services.
References
Eclipse-based software | List of Eclipse-based software | [
"Technology"
] | 1,674 | [
"Computing-related lists",
"Lists of software"
] |
11,969,823 | https://en.wikipedia.org/wiki/Nanosystems%20Initiative%20Munich | The Nanosystems Initiative Munich (NIM) is a German research cluster in the field of nano sciences. It is one of the excellence clusters being funded within the German Excellence Initiative of the Deutsche Forschungsgemeinschaft.
The cluster brings together the scientific work of about 60 research groups in the Munich region and combines several disciplines: physics, biophysics, physical chemistry, biochemistry, pharmacology, biology, electrical engineering and medical science. Using the expertise in all these fields, the cluster aims to create new nanosystems for information technology as well as for the life sciences.
The participating institutions of the Nanosystems Initiative Munich are the Ludwig Maximilians University, the Technical University of Munich, the University of Augsburg, the Max Planck Institutes of Quantum Optics and Biochemistry, the Munich University of Applied Sciences, the Walther Meissner Institute and the "Center for New Technologies" at Deutsches Museum.
References
External links
Nanosystems Initiative Munich
NIM on the LMU Excellent website of Ludwig Maximilians University of Munich
https://www.dfg.de/forschungsfoerderung/koordinierte_programme/exzellenzinitiative/exzellenzcluster/liste/exc_detail_4.html
http://idw-online.de/pages/de/news179797
Ludwig Maximilian University of Munich
Munich University of Applied Sciences
Nanotechnology institutions
Research institutes in Munich
University of Augsburg | Nanosystems Initiative Munich | [
"Materials_science"
] | 298 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
11,971,506 | https://en.wikipedia.org/wiki/Mihama%20Nuclear%20Power%20Plant | The is operated by The Kansai Electric Power Company, Inc. and is in the town of Mihama, Fukui Prefecture, about 320 km west of Tokyo. It is on a site that is 520,000 m2 of which 60% is green space. Mihama - 1 was commissioned in 1970.
Reactors on site
Accidents
1991 accident
On 9 February 1991, a tube in the steam generator of Unit 2 ruptured. This triggered a SCRAM with full activation of the Emergency Core Cooling System. The ensuing investigation showed that a fixture designed to suppress vibration of the heat-transfer tube had not been inserted as far as it was designed to be, resulting in abnormal vibrations of the tube. The high-cycle fatigue, repeated over 100,000 times, led to the pipe rupturing. Ultimately, a negligible amount of radiation was released to the environment.
2004 accident
On 9 August 2004, an accident occurred in a building housing turbines for the Mihama 3 reactor.
Hot water and steam leaking from a broken pipe killed five workers and resulted in six others being injured. The accident, rated at INES Level 0, had been called Japan's worst accident at a nuclear plant before the crisis at Fukushima I Nuclear Power Plant. The nuclear section was not affected (the turbines housing building is separate from the reactor building).
The Mihama 3 is an 826 MWe, 3-loop Westinghouse type pressurized water reactor (PWR) which has been in service since 1976.
The pipe rupture occurred in a outside diameter pipe in the 'A' loop condensate system between the fourth feedwater heater and the deaerator, downstream of an orifice for measuring single-phase water flow.
At the time of the secondary piping rupture, 105 workers were preparing for periodic inspections to commence.
A review of plant parameters did not uncover any precursor indicators before the accident nor were there any special operations that could have caused the pipe rupture.
An investigation concluded that water quality had been maintained since the commissioning of the plant, however the failing pipe had been omitted from an initial inspection plan and quality management systems were ineffective.
Mihama-3 restarted in January 2007 after making changes to "reestablish a safety culture" within KEPCO and obtaining permission from Fukui Prefecture and industry regulators.
Court actions against restarting nuclear power plants
In August 2011, citizens of Shiga Prefecture, on the banks of Lake Biwa, filed a lawsuit at the Otsu District Court, seeking a court order to prevent the restart of seven reactors operated by Kansai Electric Power Company in Fukui Prefecture. The Otsu District Court agreed only to stop the Takahama units.
In March 2017, the Osaka High Court quashed the Otsu District Court injunction stopping the Takahama units.
Seismic research in 2011 and 2012
On 5 March 2012 a group of seismic researchers revealed the possibility of a 7.4 M (or stronger) earthquake under the Tsuruga Nuclear Power Plant.
Before this date the Japanese governmental Earthquake Research Committee and Japan Atomic Power had calculated that the Urasoko fault under the plant, combined with other faults connected to it, was around 25 km long and could cause a 7.2M quake and a 1.7 meter displacement.
On top of this, the presence of the oceanic faults was not taken into account by NISA and JAP in the assessment of the safety of the Tsuruga nuclear power plant.
Sonic survey and other data provided by Japan Atomic Power, analysed by a panel of experts of the Nuclear and Industrial Safety Agency, showed the presence of multiple faults within 2 to 3 km of the Urasoko fault.
According to Sugiyama, a member of this group of scientists, these faults were highly likely to be activated together, and this would extend the length of the Urasoko fault to 35 km.
Computer simulations calculating the length of a fault based on its displacement showed the Urasoko fault to be 39 km long, a result close to the length estimated by the sonic survey data, and the fault could cause some five meters of displacement when activated together with other faults.
Yuichi Sugiyama, the leader of this research group of the National Institute of Advanced Industrial Science and Technology, warned that, as other faults on the south side of the Urasoko fault could become activated together, "The worst-case scenario should be taken into consideration."
According to the experts, there were also many other faults located under one reactor on the west side of the Urasoko fault that could move simultaneously. If this were confirmed, the location of the Tsuruga nuclear plant would be disqualified.
On 6 March 2012 NISA asked Japan Atomic Power Co. to reassess the worst-case scenario for earthquakes at the Tsuruga Nuclear Power Plant. They were to find out what damage this could do to the buildings on the site, because the Urazoko fault, running around 250 meters from the reactor buildings, could have a serious impact on the earthquake resistance of the power plant. NISA was also planning to send similar instructions to two other nuclear power plant operators in the Fukui area: Kansai Electric Power Company, and Japan Atomic Energy Agency. The Mihama Nuclear Power Plant and the Monju fast-breeder reactor could also be affected by a possible earthquake caused by the Urazoko fault.
Unit 1 and 2 shutdown
Regulation brought about following the March 2011 nuclear disaster forbids the operation of nuclear reactors for more than 40 years. However, plant operators could secure a 20-year operation extension from the Nuclear Regulation Authority if reactors are refitted. For example, these new regulations require utilities to install power cables made from fire-retardant materials.
Kansai Electric determined that it was not economical to invest in the costly refits of the two older reactor units (Mihama 1 and 2) given their comparatively small output, and decommissioned them in March 2015.
Unit 3 life extension to 60 years
Japan's nuclear regulator approved an application to extend the life of Unit 3 through 2036. New regulations would have required the shutdown of Unit 3 by the end of 2016. This is the second such approval granted since the Fukushima disaster.
Restart will happen after safety upgrades are completed by March 2020 and will cost about 165 billion yen ($1.51 billion).
The upgrades involve fire proofing cabling and other measures.
On 23 June 2021, unit 3 was powered up. Unit 3 is the country's first nuclear unit to operate beyond the initial 40-year service period; it restarted after a 10-year outage, having completed the mandated upgrade works and the final inspections. However, unit 3 is expected to be halted for completion of counterterrorism measures before a 25 October deadline; work is in progress, but Kansai Electric estimates it will not meet the deadline. Furthermore, nine people from Fukui, Kyoto and Shiga prefectures filed a lawsuit with the Osaka District Court, seeking to stop unit 3.
See also
List of nuclear power plants in Japan
References
External links
Mihama Nuclear Power Plant
Japanese nuclear operator to shut 11 plants
Worst Japanese Nuclear Accident Claims Fifth Life
Civilian nuclear power accidents
Buildings and structures in Fukui Prefecture
Nuclear power stations in Japan
Energy infrastructure completed in 1970
1970 establishments in Japan
Nuclear power stations using pressurized water reactors
Nuclear power stations with closed reactors
Mihama, Fukui | Mihama Nuclear Power Plant | [
"Technology"
] | 1,509 | [
"Environmental impact of nuclear power",
"Civilian nuclear power accidents"
] |
11,971,587 | https://en.wikipedia.org/wiki/Mixed%20metal%20oxide%20electrode | Mixed metal oxide (MMO) electrodes, also called Dimensionally Stable Anodes (DSA), are devices with high conductivity and corrosion resistance for use as anodes in electrolysis. They are made by coating a substrate, such as pure titanium plate or expanded mesh, with several kinds of metal oxides. One oxide is usually RuO2, IrO2, or PtO2, which conducts electricity and catalyzes the desired reaction such as the production of chlorine gas. The other metal oxide is typically titanium dioxide which does not conduct or catalyze the reaction, but is cheaper and prevents corrosion of the interior.
The loading, or amount of precious metal on the substrate (that is, other than the titanium), is typically on the order of 10 to 12 grams per square metre.
Applications
Applications include use as anodes in electrolytic cells for producing free chlorine from saltwater in swimming pools, in electrowinning of metals, in printed circuit board manufacture, electrotinning and zinc electro-galvanising of steel, as anodes for cathodic protection of buried or submerged structures.
History
Henri Bernard Beer registered his patent on mixed metal oxide electrodes in 1965. In this patent, named "Beer 65" and also known as "Beer I", Beer claimed the deposition of ruthenium oxide with a soluble titanium compound admixed to the paint, to approximately 50% (a molar ratio RuO2:TiO2 of 50:50).
His second patent, "Beer II", reduced the ruthenium oxide content to below 50%.
See also
Chloralkali process
References
MMO & PLATINUM COATED TITANIUM ANODES by Titan
Oxides
Electrodes | Mixed metal oxide electrode | [
"Chemistry"
] | 348 | [
"Physical chemistry stubs",
"Inorganic compounds",
"Electrodes",
"Oxides",
"Salts",
"Inorganic compound stubs",
"Electrochemistry",
"Electrochemistry stubs"
] |
11,972,837 | https://en.wikipedia.org/wiki/Ferroelasticity | Ferroelasticity is a phenomenon in which a material may exhibit a spontaneous strain, and is the mechanical equivalent of ferroelectricity and ferromagnetism in the field of ferroics. A ferroelastic crystal has two or more stable orientational states in the absence of mechanical stress or electric field, i.e. remanent states, and can be reproducibly switched between the states by applying a stress or an electric field greater than some critical value. The application of opposite fields leads to Hysteresis as the system crosses back and forth across an energy barrier. This transition dissipates an energy equal to the area enclosed by the hysteresis loop.
The transition of the crystal's parent structure to one of its stable ferroelastic strains is typically accompanied by a reduction in the crystal symmetry. The spontaneous change in strain and crystal structure can be associated with a spontaneous change in other observable properties, such as birefringence, optical absorption, and polarizability. In compatible materials, Raman spectroscopy has been used to directly image ferroelastic switching in crystals.
Landau theory has been used to accurately describe many ferroelastic phase transitions using strain as the order parameter, since nearly all ferroelastic transitions are second order. The free energy is formulated as an expansion in even powers of strain.
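As an illustrative one-dimensional form of such an expansion (a schematic sketch, not taken from the source, with strain e as the order parameter and an applied stress σ as its conjugate field):

```latex
F(e) = \tfrac{1}{2}\,A\,(T - T_c)\,e^{2} + \tfrac{1}{4}\,B\,e^{4} - \sigma e, \qquad A, B > 0
```

Below the transition temperature T_c the quadratic term is negative, so F has two minima at equal and opposite spontaneous strains; these are the two remanent states, and cycling σ between them traces out the hysteresis loop whose enclosed area gives the dissipated energy.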
The shape memory effect and superelasticity are manifestations of ferroelasticity. Nitinol (nickel titanium), a common ferroelastic alloy, can display either superelasticity or the shape-memory effect at room temperature, depending on the nickel-to-titanium ratio.
Role in Transformation Toughening
Ferroelastic transitions can be used to toughen ceramics with the most notable example being Zirconia. A crack propagating through tetragonal zirconia opens up extra space, which allows the region around the crack to transform into the monoclinic phase, expanding as much as 3-4%. This expansion causes a compressive stress ahead of the crack tip, requiring extra work in order to further propagate the crack.
See also
Ferroics
Multiferroic
Flexoelectricity
Further reading
References
Materials science
Hysteresis | Ferroelasticity | [
"Physics",
"Materials_science",
"Engineering"
] | 466 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Materials science",
"nan",
"Hysteresis"
] |
11,972,971 | https://en.wikipedia.org/wiki/HTTP%20tunnel | HTTP tunneling is used to create a network link between two computers in conditions of restricted network connectivity, such as firewalls, NATs and ACLs. The tunnel is created by an intermediary called a proxy server which is usually located in a DMZ.
Tunneling can also allow communication using a protocol that normally wouldn’t be supported on the restricted network.
HTTP CONNECT method
The most common form of HTTP tunneling is the standardized HTTP CONNECT method. In this mechanism, the client asks an HTTP proxy server to forward the TCP connection to the desired destination. The server then proceeds to make the connection on behalf of the client. Once the connection has been established by the server, the proxy server continues to proxy the TCP stream to and from the client. Only the initial connection request is HTTP; after that, the server simply proxies the established TCP connection.
This mechanism is how a client behind an HTTP proxy can access websites using SSL or TLS (i.e. HTTPS). Proxy servers may also limit connections by only allowing connections to the default HTTPS port 443, whitelisting hosts, or blocking traffic which doesn't appear to be SSL.
Example negotiation
The client connects to the proxy server and requests tunneling by specifying the port and the host computer to which it would like to connect. The port is used to indicate the protocol being requested.
CONNECT streamline.t-mobile.com:22 HTTP/1.1
Proxy-Authorization: Basic encoded-credentials
If the connection is allowed and the proxy has connected to the specified host, the proxy returns a 2XX success response.
HTTP/1.1 200 OK
The client is now being proxied to the remote host. Any data sent to the proxy server is now forwarded, unmodified, to the remote host and the client can communicate using any protocol accepted by the remote host.
In the example below, the client is starting SSH communications, as hinted at by the port number in the initial CONNECT request.
SSH-2.0-OpenSSH_4.3\r\n
...
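The handshake above can be sketched in a few lines. This is an illustrative helper, not a real library API — the function names and the `user:password` credentials are invented for the example; after a successful handshake, the same socket simply carries the raw TCP stream to the target.

```python
import base64

def build_connect_request(host, port, credentials=None):
    """Build the raw CONNECT request shown above (illustrative helper)."""
    lines = [f"CONNECT {host}:{port} HTTP/1.1", f"Host: {host}:{port}"]
    if credentials:  # "user:password", sent base64-encoded per Basic auth
        token = base64.b64encode(credentials.encode()).decode()
        lines.append(f"Proxy-Authorization: Basic {token}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def tunnel_established(status_line):
    """A 2XX status line from the proxy means the TCP tunnel is open."""
    parts = status_line.split()
    return len(parts) >= 2 and parts[1].startswith(b"2")

req = build_connect_request("streamline.t-mobile.com", 22, "user:password")
# After sending `req` over a socket connected to the proxy, read the
# status line; if tunnel_established(...) is True, the socket now
# carries the raw TCP stream to streamline.t-mobile.com:22.
```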
HTTP tunneling without using CONNECT
An HTTP tunnel can also be implemented using only the usual HTTP methods, such as POST, GET, PUT and DELETE. This is similar to the approach used in Bidirectional-streams Over Synchronous HTTP (BOSH).
A special HTTP server runs outside the protected network and a client program is run on a computer inside the protected network. Whenever any network traffic is passed from the client, the client repackages the traffic data as an HTTP request and relays the data to the outside server, which extracts and executes the original network request for the client. The response to the request, received by the server, is then repackaged as an HTTP response and relayed back to the client. Since all traffic is encapsulated inside normal GET and POST requests and responses, this approach works through most proxies and firewalls.
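The repackaging step can be sketched as follows. The endpoint path, server name and base64 framing here are invented for illustration — real implementations such as BOSH define their own framing — but the round trip shows the idea: arbitrary bytes go in, an ordinary-looking POST comes out, and the server side recovers the original bytes.

```python
import base64

def wrap_as_http_post(payload, server="tunnel.example.com"):
    """Encapsulate raw traffic bytes in an ordinary-looking POST request."""
    body = base64.b64encode(payload)
    head = (f"POST /relay HTTP/1.1\r\nHost: {server}\r\n"
            f"Content-Type: application/octet-stream\r\n"
            f"Content-Length: {len(body)}\r\n\r\n").encode()
    return head + body

def unwrap_http_post(request):
    """Server side: recover the original bytes from the POST body."""
    _, _, body = request.partition(b"\r\n\r\n")
    return base64.b64decode(body)

original = b"\x00\x01binary traffic\xff"
roundtrip = unwrap_http_post(wrap_as_http_post(original))
```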
See also
ICMP tunnel
Pseudo-wire
Tunnel broker
Virtual private network (VPN)
Virtual extensible LAN
Network virtualization using generic routing encapsulation
Notes
References
Hypertext Transfer Protocol
Network protocols
Computer security | HTTP tunnel | [
"Engineering"
] | 659 | [
"Computer networks engineering",
"Tunneling protocols"
] |
11,973,439 | https://en.wikipedia.org/wiki/Auburn%20Dam | Auburn Dam was a proposed concrete arch dam on the North Fork of the American River east of the town of Auburn, California, in the United States, on the border of Placer and El Dorado Counties. Slated to be completed in the 1970s by the U.S. Bureau of Reclamation, it would have been the tallest concrete dam in California and one of the tallest in the United States, at a height of and storing of water. Straddling a gorge downstream of the confluence of the North and Middle Forks of the American River and upstream of Folsom Lake, it would have regulated water flow and provided flood control in the American River basin as part of Reclamation's immense Central Valley Project.
The dam was first proposed in the 1950s; construction work commenced in 1968, involving the diversion of the North Fork American River through a tunnel and the construction of a massive earthen cofferdam. Following a nearby earthquake and the discovery of an unrelated seismic fault that underlay the dam site, work on the project was halted for fears that the dam's design would not allow it to survive a major quake on the same fault zone. Although the dam was redesigned and a new proposal submitted by 1980, spiraling costs and limited economic justification put an end to the project until severe flooding in 1986 briefly renewed interest in Auburn's flood control potential. The California State Water Resources Control Board denied water rights for the dam project in 2008 due to lack of construction progress.
Although new proposals surfaced from time to time after the 1980s, the dam was never built for a number of reasons, including limited water storage capacity, geologic hazards, and potential harm to recreation and the local environment. Much of the original groundwork at the Auburn Dam site still exists, and up to 2007, the North Fork American River still flowed through the diversion tunnel that had been constructed in preparation for the dam. Reclamation and Placer County Water Agency completed a pump station project that year which blocked the tunnel, returned the river to its original channel, and diverted a small amount of water through another tunnel under Auburn to meet local needs. However, some groups continue to support construction of the dam, which they state would provide important water regulation and flood protection.
Background
Starting in the 1850s during the California Gold Rush, the city of Sacramento rapidly grew around the confluence of the Sacramento River and its tributary the American River, near the middle of the Central Valley of California. The city's increasing population necessitated the construction of an extensive system of levees on the two rivers to prevent flooding. These early flood control works were insufficient; in 1862, the city was inundated so completely that the state government was temporarily moved to San Francisco. In 1955, the U.S. Army Corps of Engineers built the Folsom Dam at the confluence of the North and South Forks of the American River to provide flood control for the Sacramento metropolitan area. However, the Folsom Dam, with a capacity of just 1 million acre feet (1.2 km3) compared to the annual American River flow of 2.7 million acre feet (3.3 km3), proved inadequate. A flood in 1955 filled the Folsom Reservoir to capacity, before the dam was even completed; it has also filled many times since. However, increased water uses and diversions, requirements for 200-year flood control, and joint system operations have increased seasonal flood capacity in Folsom Lake.
The demand for irrigation water in the Sacramento area and other parts of the Central Valley were also growing. In 1854, a diversion dam was constructed on the North Fork American River at the site of Auburn Dam, to divert water into ditches that supplied downstream farms. Irrigation with dam and canal systems was favored because the seasonal nature of the American River caused floods in some years and droughts in others. A large dam at the Auburn site was thus considered for both flood control and water supply. In the 1950s, the Bureau of Reclamation created the first plans for a high dam at Auburn. Several designs, ranging from earth-fill to concrete gravity dams, were considered. Before the dam could be built, the Auburn-Foresthill Road – which crosses the river just upstream of the dam site – had to be relocated. Even before the project was authorized, contracts were let for the construction of a high bridge to carry the road over the proposed reservoir, as well as preliminary excavations at the dam site.
The eventual design of Auburn Dam called for the creation of a reservoir with of capacity, more than twice that of Folsom Lake. The extra storage would greatly reduce the flood risk to Sacramento. The dam was to be the principal feature of the Auburn-Folsom South Unit of the Central Valley Project, with the purpose to "provide new and supplemental water for irrigation, municipal and industrial use, and to replenish severely depleted ground water in the Folsom South region". Congress authorized the project in 1965; the targeted completion date was 1973.
As the Auburn Dam proposal evolved, the project transformed from a primary flood-control structure to a multipurpose high dam that would serve various other purposes including long-term water storage, hydroelectricity generation, and recreation. One of the first ideas, publicized in the late 1950s, called for a embankment dam impounding of water. In 1963, a earthfill dam holding back of water was proposed. The pre-construction design was finalized in 1967, for a concrete thin-arch gravity structure over high. This dam would be long, thick at the base, and equipped with five 150 megawatt generators at its base for a total generating capacity of 700 megawatts. Two concrete-lined flip bucket spillways would abut both sides of the dam. With the initial plans set and the project authorized, construction work for the dam started in late 1968.
Construction
Site preparation
Official groundbreaking of the Auburn Dam started on October 19, 1968, with preparatory excavations and test shafts drilled into the sides of the North Fork American River gorge. The contract for the diversion tunnel through the mountainside on river left, in diameter, long, and equipped to handle a flow of (a roughly 35-year flood) was let to Walsh Western for about $5.1 million in 1968. The actual construction of the tunnel itself did not begin until mid-1971, and it was completed in late November 1972. One worker was killed during the excavation of the tunnel. In 1975, the earthen cofferdam for the Auburn project, high, was completed, diverting the river into the tunnel. The diversion tunnel bypassed a roughly section of the riverbed to allow construction of the main dam.
Upstream of the dam site, Auburn-Foresthill Road – one of the only all-weather thoroughfares of the region – would be inundated by the proposed reservoir. In preparation for the reservoir's filling, it was rerouted over a three-span, -long truss bridge rising above the river. Even though Auburn Dam would never be completed, the bridge was still required because the pool behind the cofferdam would flood the original river crossing. It also improved safety and reduced travel time by eliminating a steep, narrow and winding grade into the canyon on either side of the river, as comparisons to maps showing the old road alignment will attest. The contracts for various projects pertaining to the relocation of the roadway were given to O.K. Mittry and Sons, Hensel Phelps Construction Company, and Willamette-Western Corporation, the latter for the construction of the actual bridge. The Foresthill Bridge, the fourth highest bridge in the United States, was completed in 1973.
Earthquake and redesigning
In 1975, a magnitude 5.7 earthquake shook the Sierra Nevada near Oroville Dam, about north of the Auburn Dam construction site. This quake concerned geologists and engineers working on the project so much that the Auburn Dam construction was halted while the site was resurveyed and investigations conducted into the origins of the earthquake. It was discovered that the quake might have been caused by reservoir-induced seismicity, i.e. the weight of the water from Lake Oroville, whose dam had been completed in 1968, was pressing down on the fault zone enough to cause geologic stress, during which the fault might slip and cause an earthquake. As the concrete thin-arch design of the Auburn Dam could be vulnerable to such a quake, the project had to be drastically redesigned.
Over the next few years, while all construction was stayed, Reclamation conducted evaluations of the seismic potential of the dam site, even though these delays caused the cost of the project to rise with every passing year. The studies concluded that a major fault system underlay the vicinity of the Auburn Dam site, with many folds of metamorphic rock formed by the contact of the foothill rocks and the granite batholith of the Sierra Nevada. Reclamation predicted that the Auburn Reservoir could induce an earthquake of up to a 6.5, while the U.S. Geological Survey projected a higher magnitude of 7.0. Nevertheless, Reclamation redesigned the Auburn Dam based on their 6.5 figure, even though a 7.0 would be three times stronger. The design for the Auburn Dam was changed to a concrete thick-arch gravity dam, to provide better protection against a possible earthquake induced by its own reservoir.
Through the rest of the 1970s, other possible designs were looked at but never implemented, while preliminary work on the construction site resumed. On April 29, 1979, the foundations for the Auburn Dam were completed. However, debates continued over whether to build an arched or straight-axis gravity dam. Some favored the latter design because it would have greater mass, allowing it to better withstand earthquakes.
Cofferdam failure
In early February 1986 ten inches (254 mm) of rain fell on the Sacramento region in 11 days, melting the Sierra Nevada snowpack and causing a huge flood to pour down the American River. The 1986 floods were some of the most severe recorded in the 20th century; Placer County was quickly designated a Federal Disaster Area. Rampaging streams and rivers incurred some $7.5 million in damages within the county. The rating for Sacramento's levees, supposedly designed to prevent a 125-year flood, was dropped to a 78-year flood in studies conducted after the 1986 event, which suggested that such weather occurred more frequently than previously believed. The floods tore out levees along the Sacramento and Feather Rivers through the Sacramento Valley, and the city of Sacramento was spared by a close margin. Folsom Lake filled to dangerously high levels with runoff from the North, Middle and South Forks of the American River.
The flood rapidly filled the pool behind the Auburn cofferdam to capacity, as the diversion tunnel could not handle all the water pouring into the reservoir. At about 6:00 A.M. on February 18, the rising water overtopped the cofferdam near the right abutment, creating a waterfall that quickly eroded into the structure. Although the cofferdam was designed with a soft earthen plug to fail in a controlled manner if any such event were to occur, the structure eroded quicker than expected. The outflow reached by noon; several hours later the maximum discharge was reached at , completely inundating the construction site and destroying almost half of the cofferdam. When the high cofferdam collapsed, its backed-up water surged downstream into already-spilling Folsom Lake less than a mile downstream, deposited the dam debris and raised the lake level suddenly. Folsom Dam outflow reached , which exceeded the design capacity of levees through Sacramento, but the levees were not overtopped and severe flooding in the city was averted by a close margin. The flood events made it clear that the American River flood control system was inadequate for the flood potential of the watershed. This spurred renewed interest in the Auburn Dam, since a permanent dam would have helped store extra floodwater and also prevented the failure of the cofferdam.
Stopping the project
Economic cost
Following the floods of the 1980s, public opinion began to turn against the Auburn Dam because of the massive estimated cost to finish the project, which was then already rising into the billions of dollars, and the fairly small amount of water it would capture relative to that cost. The best dam sites require a relatively small dam that can store massive amounts of water, and most of those sites in the U.S. have already been utilized. A comparison with Hoover Dam, for example, reveals that the Auburn would store very little water compared to its structural size. Lake Mead, the reservoir behind Hoover, stores about . The proposed Auburn Reservoir, with a mere 8% of that capacity, would require the construction of a dam as tall as Hoover and over three times as wide.
As early as 1980, the cost of building the Auburn Dam was estimated at $1 billion. As of 2007, the cost to build the dam would be about $10 billion. Other projects to improve safety margins and spillway capacity of Folsom Dam, and to increase the capacity of levees in the Sacramento area, were projected to cost significantly less while also providing similar levels of flood protection. Also, the United States National Research Council believes that existing stream-flow records, which only date back about 150 years, are insufficient to justify the construction of a dam as large as Auburn. The amount of water supply that Auburn Dam would make available was also in question, because while the American River floods in some years, in other years it barely discharges enough water to fill existing reservoirs. This cast doubt on whether Auburn could deliver enough water to justify its cost, or the completion of Folsom South Canal, the other major feature of the Auburn-Folsom South Unit Project.
Failure risk
The Auburn Dam would also be at risk for failure from an earthquake, due to the risk of the reservoir inducing a quake on one of the many fault lines that crosses the area, known as the Bear Mountain fault zone. Surface displacement of the ground might range from a few inches/centimeters to in each direction, depending on the magnitude of the earthquake. Although a new concrete-gravity design by Reclamation was modeled to survive a magnitude 6.5 earthquake, it performed poorly under the 7.0 that the USGS had originally estimated.
A Bureau of Reclamation study released in 1980 projected that a failure of Auburn Dam would result in a giant wave reaching Folsom Lake within five minutes; depending on reservoir levels, it would cause a cascading failure of Folsom and Nimbus Dams downstream within an hour, unleashing millions of acre-feet of water which would cause far greater damage downstream than any natural flood. Most of the greater Sacramento area would be inundated; Nimbus Dam would be overtopped by of water and the California State Capitol would be under of water. An earlier study in 1975 predicted that a failure of Folsom Dam alone would result in over 250,000 deaths. If Auburn were to fail at full capacity, the resulting flood would be over three times larger, and cause even greater damage, inundating land for miles on either side of the American and Sacramento rivers.
Impact on recreation
Filling the Auburn Reservoir would result in a two-pronged lake which would inundate numerous canyons and rapids of the North and Middle Forks of the American River. In 1981, the American River was acknowledged as the most popular recreational river in California. Over one million people visit the canyons of the North and Middle Forks of the American River each year to engage in various recreational activities, including kayaking, rafting, hiking, hunting, biking, horseback riding, gold mining, off-roading, and rock climbing. About 900,000 of these visitors go to the Auburn State Recreation Area, which includes the former dam site. The reservoir would inundate most of the Auburn recreation area, although some new recreational opportunities such as boating, water-skiing and deep water fishing would be created as a result of the new lake. Many trails, including those used by the Tevis Cup and Western States Endurance Run, would be submerged. The Auburn Reservoir would also result in the destruction of thousands of acres of riverine habitat, and the inundation of historic and archaeological sites.
Fate of the project
In the end, the Auburn Dam project, once referred to as "the dam that wouldn't die" and "with more lives than an alley cat", was defeated by the intervention of environmentalists, conservationists, and cost-conscious economists. Although four bills to revive the dam project were introduced in Congress over the next twenty years, all were turned down. Representative Norman D. Shumway introduced the Auburn Dam Revival Act of 1987, which was rejected because of the phenomenally high costs. A flood control bill in 1988 involving the Auburn Dam was also defeated. In 1992 and 1996, plans for restarting the Auburn project appeared in various water projects bills. However, even though the project was now leaning towards purely flood control instead of the original expensive multipurpose design that environmental groups had opposed, both were denied. As the years dragged on, the cost of the project grew, and it officially ended with the revoking of USBR water rights to the site by the state on November 11, 2008.
Proposals for resurrecting the Auburn Dam
Although the Auburn Dam is now mostly considered history, there are still proponents and groups devoted to restarting the long-inactive project. Advocates argue that the construction of Auburn would be the only solution for providing much-needed flood protection to the Sacramento area; that millions of dollars have already been spent making preparations; that it would provide an abundant supply of reliable water and hydroelectricity; and also that the recreational areas lost under the reservoir could be rebuilt around it. A major supporter of the revival of the dam was the Sacramento County Taxpayer's League which reported in 2011 that two-thirds of Sacramento citizens support construction of the Auburn. The League also argued that the dam would only cost $2.6 billion instead of $6–10 billion, and that it is the cheapest alternative to provide flood control for the American River.
Area Congressman John Doolittle was one of the largest proponents of the Auburn Dam, and he appropriated several million dollars for funds to conduct feasibility studies for the dam. About $3 million went into the main feasibility report, and the remaining $1 million was used for a study concerning the relocation of California State Route 49, which runs through the site. After the Hurricane Katrina disaster in 2005, Doolittle drew public attention to the flood vulnerability of the Sacramento region. He also used the flood-protection "incompetence" of the Folsom Dam to his advantage, saying that "without an Auburn Dam we could soon be in the unenviable position of suffering from both severe drought and severe flooding in the very same year." He led all 18 Republican members of the United States House of Representatives from California in a protest in 2008, trying to convince Governor Arnold Schwarzenegger to revoke the water-rights decision that California had made against Reclamation. Doolittle is sometimes known as the Auburn Dam's "chief sponsor".
In response to public outcry, most pro-Auburn Dam groups now recommend the construction of a dry dam, or one that purely supports the purpose of flood control. Such a dam would stand empty most of the year, but during a flood the excess flow would pool temporarily behind the dam instead of flowing straight through, and therefore the dam could still provide flood control while leaving the American River canyons dry for most of the year (hence "dry"). Water would be impounded for only a few days or weeks each year instead of all year long, minimizing damage on the local environment. The dam would be built to protect against a 500-year flood. Also, with the construction of a "dry" Auburn Dam, Folsom Lake could be kept at a higher level throughout the year because of reduced flood-control pressure, therefore facilitating recreational access to the reservoir. Finally, regulations in flow could help groundwater recharge efforts; the lower Sacramento Valley aquifer is acknowledged as severely depleted.
Legacy
Since its inception, hundreds of millions of dollars have been poured into the Auburn Dam project, but no further work has been done since the 1980s. However, the Bureau of Reclamation continues to list the Auburn as a considered alternative for the future of its Auburn-Folsom South Unit project. Extensive evidence of the dam's construction still remains in the North Fork American River canyon, most notably the excavations for the abutments and spillway, which have increased erosion.
In recent decades, California has been struck with a series of severe droughts. In order to facilitate continued deliveries of water to the thirsty southern half of the state, the Central Valley and State Water Projects have been forced to cut water supplies for agriculture in much of the San Joaquin Valley. Annual deficits of water in the state are projected to rise from in 1998 to an estimated by 2025. The state has proposed three or four solutions to the shortfall. One, the Peripheral Canal, would facilitate water flow from the water-rich north to the dry south, but has never been built due to environmental concerns. The raising of Shasta Dam on the Sacramento or New Melones Dam on the Stanislaus, or the building of Sites Reservoir, has also been proposed. Lastly, the Auburn Dam has also been revived in light of this. According to supporters, it would cause the least environmental destruction of the multitude of choices, and would give the most reliable water yield, regardless of its skyrocketing costs.
In part as an alternative to Auburn Dam project, flood control for the lower American River is being improved through the US$1 billion Joint Federal Project (a collaboration of the US Bureau of Reclamation and the US Army Corps of Engineers) at Folsom Dam which adds a new lower spillway and strengthens the eight dikes that serve as part of the dam. Additional work proposed includes a possible raise of Folsom Dam several feet to improve its flood control and storage capacity. Key levees downstream have also been improved for flood control in the Sacramento area by the US Army Corps of Engineers and the Sacramento Area Flood Control Agency. Sugar Pine Reservoir, an auxiliary component of the Auburn-Folsom South Project upstream in the watershed, was transferred in title by the Bureau of Reclamation to Foresthill Public Utility District in 2003. As a result of a court decision in 1990 (Hodge Decision), the uses of Reclamation's Folsom South Canal changed further when the Freeport Project came online in 2011 to redivert water supplies for East Bay Municipal Utility District and Sacramento County Water Agency from the Sacramento River instead of from the canal via the lower American River, thereby reducing the need for additional supplies from Auburn Dam to the American River. Anticipated diversions from the Folsom South Canal had previously been reduced when the Sacramento Municipal Utility District decommissioned its Rancho Seco nuclear facility in 1989 and no longer required large quantities of cooling water from the canal.
A pumping station to supply water to the Placer County Water Agency was built in 2006 on the Middle Fork American River, supplying to a northwest-running pipeline, eliminating the need for Auburn Dam for this supply. The capacity of the station is eventually expected to be upgraded to . By 2006, the Bureau of Reclamation itself began to restore the dam site, which then had been untouched for more than a decade. The river diversion tunnel was sealed but not filled in, and the remnants of the construction site in the riverbed as well as the remains of the cofferdam excavated from the canyon. After the riverbed was leveled and graded, an artificial riverbed with manmade Class III rapids was constructed to channel the river through the site. The restoration project also included the construction of other recreational amenities in the Auburn site. This act was seen as the final step of decommissioning the Auburn project and shelving it forever.
References
Works cited
U.S. Army Corps of Engineers. American River Watershed Common Features General Reevaluation Report. Final Environmental Impact Statement/Environmental Impact Report. December 2015.
https://www.spk.usace.army.mil/Portals/12/documents/civil_works/CommonFeatures/ARCF_GRR_Final_EIS-EIR_Jan2016.pdf
External links
Auburn Dam Council
Sacramento County Taxpayers League – Auburn Dam
Auburn Dam Watch
Dams on the American River
Central Valley Project
Proposed buildings and structures in California
United States Bureau of Reclamation proposed dams
History of El Dorado County, California
History of Placer County, California
1970s in California
2008 in California | Auburn Dam | [
"Engineering"
] | 5,001 | [
"Irrigation projects",
"Central Valley Project"
] |
11,973,712 | https://en.wikipedia.org/wiki/Sonae%20Ind%C3%BAstria | Sonae Indústria is a manufacturer of engineered wood products, founded and headquartered in Maia, Portugal. Present in five countries within three continents, Sonae Indústria has a wide range of products, from simple board to complete construction systems, a large range of wood-based products and materials for furniture, construction and decoration.
Sonae Indústria worldwide
Canada
Sonae Indústria has been present in Canada through its subsidiary Tafisa Canada in Lac-Mégantic, Quebec, since 1995, initially as an investment of Tableros de Fibras, S.A., later acquired by the Sonae Indústria Group.
In 2003, it became the first member of the Composite Panel Association (CPA) and producer of particleboard and thermofused melamine panels whose environmental management system met the requirements of ISO 14001.
In 2012, Tafisa Canada invested 10 million CAD in its facilities in order to recycle around 2 million trees per year.
Tafisa Canada has received a third party certification for CARB Phase 2 compliance, as well as the FSC, LEED, EPP, SCS, ISO 14001 and ISO 9001 certifications.
France
Sonae Indústria has had a presence in France since 1998, when it acquired Isoroy SAS. Nowadays, it operates four plants: Auxerre, Le Creusot and Ussel Corréze through its subsidiary Isoroy, and Linxe through its subsidiary Darbo.
Germany
With six plants around the country (Beeskow, Eiweiler, Horn, Kaisersesch, Meppen and Nettgau), it is present via its subsidiary Glunz AG.
Glunz AG was founded in 1932 and acquired Isoroy SAS, which still operates in France, in 1992; Glunz itself was acquired by the Sonae Indústria Group five years later.
Portugal
Sonae Indústria's headquarters is located in Maia, alongside the Group's oldest active factory. It also owns six other plants around the country, in Alcanede, Castelo de Paiva, Mangualde, Oliveira do Hospital, Sines and Vilela.
South Africa
South African operations are based in two locations: Panbult and White River. During the latter part of 1998, Sonae Indústria invested in a R350 million plant in Panbult, Mpumalanga.
Two years later acquired Sappi Novobord, including White River at their plant list.
Spain
Sonae Indústria has a presence in Spain via its subsidiaries Tafisa and Tafibra, with plants in Betanzos, Linares, Pontecaldelas, Solsona and Valladolid.
United Kingdom
Spanboard Products Ltd at Coleraine in Northern Ireland began production in 1959 and was acquired in 1989 by Sonae Indústria, Portugal's largest privately owned industrial group. The company was extensively refurbished in the early 1990s, when a new state-of-the-art panel line for edged panels was installed. Investment was also made in a computerized panel saw to provide a cut-to-size service for melamine-faced panels, giving greater flexibility for customer service. The factory produces wood particleboard to meet the criteria of both BS EN 312 (physical requirements for particleboard) and the site Quality Management System, which operates to ISO 9001.
Former Operations
Sonae Indústria had an operation in Brazil, which has been sold to Celulosa Arauco y Constitución in 2009, in a deal worth US$227m. Between 2000 and 2012 Sonae Indústria also operated a plant at Knowsley, Merseyside in the United Kingdom. The plant closed in 2012 following a series of fires and major accidents.
Health, safety and environmental concerns
According to figures released by the Health and Safety Executive (HSE) Sonae Indústria's UK plant located at Knowsley, Merseyside, was the subject of 22 reports of major accidents between 2000 and 2010. Between 2003 and 2006 it was successfully prosecuted by the HSE on four occasions and fined a total of £132,000. On 1 June 2002 an explosion occurred at the plant and 20,000 liters of pollutant escaped into local waterways. The government's Environment Agency said: "On 3 June an Agency officer visited the site and saw that the outfall from Sonae's premises was gushing a milky white liquid. Kirkby Brook was discolored white for about two kilometers downstream, and still affected for at least a further two kilometers. Samples taken by the Agency revealed that the water in the brook was polluted to almost three times the strength of raw sewage. An ecology survey taken two days later showed that for at least 200 meters downstream, all life in the brook had been completely wiped out, and even as far as two and a half kilometers away the brook was classified as 'grossly polluted.'" In 2003 the Environment Agency prosecuted the firm over five pollution incidents affecting local waterways resulting in fines totaling £37,500. The HSE closed the plant on eight occasions between 2001 and 2003. Residents living near the plant have "repeatedly called for its closure following a series of chemical leaks and fires." Between 2000 and 2007 Knowsley Council served "many statutory notices on Sonae, including two prohibition notices, 10 enforcement notices, five variation notices and one notice requiring information, with which Sonae did not comply."
In December 2005, Sonae pleaded guilty to three charges brought by Knowsley Council under the Environmental Protection Act 1990 and was fined £13,000.
In 2007, lawyers engaged by Sonae wrote to the internet company hosting the website of local magazine Nerve and threatened legal action for "a damaging effect on reputation" following the magazine's publication of an article critical of the company's safety record. The article was authored by Steve Tombs from John Moores University and David Whyte from Liverpool University, both health and safety academics. The magazine responded that "Sonae's reputation is damaged not by what is written about it, but by its actions – it is a serial offender." In February 2007 the plant was forced to close for a month following a fire which started in an oil pump room. Local MP George Howarth raised the issue in Parliament and said: "Unless and until the HSE can be satisfied that the plant can safely reopen, and guarantee the health and safety of the workforce and residents, I believe that the plant needs to remain closed."
Worker deaths
2010
In December 2010 Merseyside Police and the HSE began a joint investigation into the deaths of two workers who were killed after being dragged into machinery at the Knowsley plant on 7 December. Rossendale and Darwen MP Jake Berry said "Should the owners of the factory again be found to have fallen short of safety standards, following a thorough and detailed investigation by the Health And Safety Executive, then I hope steps will be taken to prosecute them for corporate manslaughter." Merseyside Police passed the results of their investigation on to the Crown Prosecution Service who said they were awaiting the results of the HSE investigation and the coroner's inquests into the deaths, which was expected to be held in 2011.
The inquest eventually convened at Bootle Town Hall on 9 July 2013 before Christopher Sumner, the coroner for Sefton, Knowsley, and St Helens. The inquest heard that the two men, James Bibby, 25, and Thomas Elmer, 27, were both sub-contracted mechanical engineers and fitters and were carrying out maintenance work on a stationary conveyor belt above ground level at the time of the incident. John Moutrie, an investigator from HSE, said: "Mr Bibby and Mr Elmer were both found dead in the conveyor of Silo No 4. The two men were likely to have been working inside the conveyor or reaching into it to tighten the bolts at the bottom. At this point the conveyor started and there was no means of stopping it. They were drawn into the conveyor with tragic results. If safety procedures had been followed, the incident could not have occurred. Physical isolation of the conveyor belt had not been carried out."
Paul Atkinson, the works manager of the plant, admitted he had issued the men with a work permit "which certified he had personally examined the conveyor belt and was satisfied all the necessary safety precautions were in place." He also admitted that he "had not personally examined the conveyor belt to ensure it was isolated from the electrical supply" and did not have access to the central control room panel to isolate the conveyor. It was also disclosed that Atkinson had not undertaken the required IOSH Managing Safely course or "any other general health and safety training." Donald MacLeod, the plant's health and safety manager, refused to answer questions at the inquest in case he incriminated himself. He "declined to comment" when asked by the coroner about the permit to work scheme, refused to tell the court what his responsibilities as health and safety manager were, and declined to answer when asked whether he had any knowledge of an alleged 'near miss' on one of the conveyor belts three weeks prior to the fatal accident.
On 23 July 2013 the jury returned a narrative verdict saying "The method of local isolation was communicated verbally but was not physically demonstrated to the deceased men. In addition local isolation of the conveyor was neither confirmed or checked throughout the day. While there was a risk assessment carried out on the specific work undertaken by the men it appears this was not communicated to the men directly. It is clear from the evidence that the Sonae permit to work issuer/supervisor had not been given or undertaken specific training, or provided with sufficient supervision, in the permit to work procedure prior to 7 December 2010. It is our view that the death of each man was the result of a failure to adopt appropriate procedures." The HSE said it would consider the verdict before deciding whether to bring criminal charges.
In July 2015 Sonae and the men's employer, Valma Ltd, admitted failing to ensure the safety of their employees. They were fined £220,000 and £190,000 respectively.
2011
On 6 August 2011 a 62-year-old employee was killed at the Knowsley site. The worker was a contractor for the demolition team called in to remove the damage caused by the previous fire on 9 June 2011.
June 2011 fire
During the early evening of 9 June 2011 Merseyside Fire and Rescue Service sent twelve fire appliances to the plant after fire broke out in concrete bunkers containing 12,000 tonnes of woodchip. It took eight days to extinguish the fire. Mr. Howarth said: "This latest incident at Sonae serves as yet another example of the fact that this plant is unstable and hazardous to local residents, businesses and those who work there. Sonae takes up far too much of the time and resources of the fire service, Knowsley Council and the Health and Safety Executive. I will shortly be calling a joint meeting of the various bodies responsible for monitoring Sonae to pool their experience and seriously consider whether it can be allowed to continue given the risks to the community." During a Parliamentary debate about health and safety legislation on 13 June 2011, Mr. Howarth raised the issue with Chris Grayling MP, the Minister of State for the Department for Work and Pensions, and again asked for the plant to be closed down.
The fire resulted in Sonae facing a class action compensation claim from 18,000 people who alleged their health was affected by the toxic emissions from the plant during the 8 days it burned. Sonae admitted liability, "subject to causation which means each claimant has to show that they suffered personal injury and/or nuisance as a result of the fire." The case is the largest class action of its kind in UK legal history.
In June 2013 lawyers acting for the claimants secured a High Court order which required Sonae to notify the lawyers if the manufacturer's insurance policy falls below £65 million. Anthony Wilson, for the claimants, said: "We saw the factory being stripped down but we did not have confirmation they had enough to cover the insurance. We understand machinery has already been sold off to foreign companies and will find its way to other European countries. We were fearful the equipment was being taken out of the jurisdiction to prevent a weighty payout in the future. This is not about the claim culture, but defending the rights of vulnerable people who could go on to suffer long-term illness because of this plant."
In July 2015 the class action claim was rejected in the High Court of Justice by Mr Justice Jay, who decided that the symptoms suffered by the 16,626 claimants were "short-lived and had not exceeded the hurdle the law sets for actionable personal injury." In his ruling, he said: "It is difficult to say for how long the smoke and these mild symptoms lasted, but I have in mind a maximum period of about one week. Many months later – it is unclear exactly how and why – lawyers arrived on the scene and sensed the opening of a business opportunity. It proved not very difficult to recruit willing claimants to the group, not least because there was a lot of ill-feeling in the neighbourhood directed towards Sonae, and many people genuinely believed that they must have been harmed in some way. The legal process preyed on human susceptibility and vulnerability, and the rest is history."
Early Day Motion and closure
An Early Day Motion calling for the permanent closure of the plant was tabled by Knowsley MP George Howarth on 14 June 2011.
Another fire on the night of 26 January 2012 led to Howarth calling again for the factory to be closed, saying that "in view of these incidents" the council should "rescind" the company's environmental permit. The plant ceased production on 13 September 2012. The land has since been sold to The Peel Group for redevelopment.
See also
Sonae
References
Manufacturing companies of Portugal
Manufacturing companies established in 1959
Industrial fires and explosions
Industrial accident deaths | Sonae Indústria | [
"Chemistry"
] | 2,907 | [
"Industrial fires and explosions",
"Explosions"
] |
11,973,947 | https://en.wikipedia.org/wiki/Graph%20partition | In mathematics, a graph partition is the reduction of a graph to a smaller graph by partitioning its set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups will produce edges in the partitioned graph. If the number of resulting edges is small compared to the original graph, then the partitioned graph may be better suited for analysis and problem-solving than the original. Finding a partition that simplifies graph analysis is a hard problem, but one that has applications to scientific computing, VLSI circuit design, and task scheduling in multiprocessor computers, among others. Recently, the graph partition problem has gained importance due to its application for clustering and detection of cliques in social, pathological and biological networks. Recent trends in computational methods and applications have been surveyed in the literature.
Two common examples of graph partitioning are minimum cut and maximum cut problems.
Problem complexity
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms. However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor. Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist, unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.
Problem
Consider a graph G = (V, E), where V denotes the set of n vertices and E the set of edges. For a (k,v) balanced partition problem, the objective is to partition G into k components of at most size v · (n/k), while minimizing the capacity of the edges between separate components. Also, given G and an integer k > 1, partition V into k parts (subsets) V1, V2, ..., Vk such that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in literature as bicriteria-approximation or resource augmentation approaches. A common extension is to hypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all vertices are in one partition, and cut exactly once otherwise, no matter how many vertices are on each side. This usage is common in electronic design automation.
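The edge-cut objective and the (k, v) balance constraint described above can be sketched in a few lines. The example graph, the partition, and the helper names (`cut_size`, `is_balanced`) are illustrative choices for this sketch, not part of any standard library:

```python
from collections import Counter

def cut_size(edges, part):
    """Number of edges whose endpoints fall in different parts."""
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, k, n, v=1.0):
    """Check the (k, v) constraint: every part holds at most v * (n/k) vertices."""
    sizes = Counter(part.values())
    return len(sizes) <= k and all(s <= v * (n / k) for s in sizes.values())

# Two triangles of three vertices each, joined by the single edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_size(edges, part))        # 1 -- only the bridge edge is cut
print(is_balanced(part, k=2, n=6))  # True
```

Separating the two triangles cuts only the bridging edge, which is also the minimum (2,1) cut for this toy graph.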
Analysis
For a specific (k, 1 + ε) balanced partition problem, we seek to find a minimum cost partition of G into k components with each component containing a maximum of (1 + ε)·(n/k) nodes. We compare the cost of this approximation algorithm to the cost of a (k,1) cut, wherein each of the k components must have the same size of (n/k) nodes each, thus being a more restricted problem.
We already know that (2,1) cut is the minimum bisection problem and it is NP-complete. Next, we assess a 3-partition problem wherein n = 3k, which is also bounded in polynomial time. Now, if we assume that we have a finite approximation algorithm for (k, 1)-balanced partition, then either the 3-partition instance can be solved using the balanced (k,1) partition in G or it cannot be solved. If the 3-partition instance can be solved, then the (k, 1)-balanced partitioning problem in G can be solved without cutting any edge. Otherwise, if the 3-partition instance cannot be solved, the optimum (k, 1)-balanced partitioning in G will cut at least one edge. An approximation algorithm with a finite approximation factor has to differentiate between these two cases. Hence, it can solve the 3-partition problem, which is a contradiction unless P = NP. Thus, it is evident that the (k,1)-balanced partitioning problem has no polynomial-time approximation algorithm with a finite approximation factor unless P = NP.
The planar separator theorem states that any n-vertex planar graph can be partitioned into roughly equal parts by the removal of O(√n) vertices. This is not a partition in the sense described above, because the partition set consists of vertices rather than edges. However, the same result also implies that every planar graph of bounded degree has a balanced cut with O(√n) edges.
Graph partition methods
Since graph partitioning is a hard problem, practical solutions are based on heuristics. There are two broad categories of methods, local and global. Well-known local methods are the Kernighan–Lin algorithm, and Fiduccia-Mattheyses algorithms, which were the first effective 2-way cuts by local search strategies. Their major drawback is the arbitrary initial partitioning of the vertex set, which can affect the final solution quality. Global approaches rely on properties of the entire graph and do not rely on an arbitrary initial partition. The most common example is spectral partitioning, where a partition is derived from approximate eigenvectors of the adjacency matrix, or spectral clustering that groups graph vertices using the eigendecomposition of the graph Laplacian matrix.
Multi-level methods
A multi-level graph partitioning algorithm works by applying one or more stages. Each stage reduces the size of the graph by collapsing vertices and edges, partitions the smaller graph, then maps back and refines this partition of the original graph. A wide variety of partitioning and refinement methods can be applied within the overall multi-level scheme. In many cases, this approach can give both fast execution times and very high quality results.
One widely used example of such an approach is METIS, a graph partitioner, and hMETIS, the corresponding partitioner for hypergraphs.
An alternative approach, implemented e.g. in scikit-learn, is spectral clustering with the partitioning determined from eigenvectors of the graph Laplacian matrix for the original graph, computed by the LOBPCG solver with multigrid preconditioning.
Spectral partitioning and spectral bisection
Given a graph with adjacency matrix A, where an entry A_ij = 1 implies an edge between nodes i and j, and degree matrix D, a diagonal matrix in which each diagonal entry d_ii represents the degree of node i, the Laplacian matrix is defined as L = D − A. A ratio-cut partition for graph G is defined as a partition of V into disjoint sets U and W, minimizing the ratio
of the number of edges that actually cross this cut to the number of pairs of vertices that could support such edges. Spectral graph partitioning can be motivated by analogy with partitioning of a vibrating string or a mass-spring system and similarly extended to the case of negative weights of the graph.
Fiedler eigenvalue and eigenvector
In such a scenario, the second smallest eigenvalue (λ2) of L yields a lower bound on the optimal cost (c) of the ratio-cut partition, with c ≥ λ2/n. The eigenvector (V2) corresponding to λ2, called the Fiedler vector, bisects the graph into only two communities based on the sign of the corresponding vector entry. Division into a larger number of communities can be achieved by repeated bisection or by using multiple eigenvectors corresponding to the smallest eigenvalues. The examples in Figures 1,2 illustrate the spectral bisection approach.
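The sign-based bisection described here can be sketched with NumPy alone. The two-triangle example graph below is an illustrative choice, not taken from the source:

```python
import numpy as np

# Illustrative graph: two triangles (vertices 0-2 and 3-5) joined by edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian L = D - A

# eigh returns eigenvalues of a symmetric matrix in ascending order.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]         # eigenvector of the second-smallest eigenvalue

# Bisect by the sign of the Fiedler vector entries; the overall sign of an
# eigenvector is arbitrary, so either triangle may come out as True.
part = fiedler >= 0
print(part)
```

For this graph the sign split recovers the two triangles, cutting only the bridging edge.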
Modularity and ratio-cut
Minimum cut partitioning however fails when the number of communities to be partitioned, or the partition sizes are unknown. For instance, optimizing the cut size for free group sizes puts all vertices in the same community. Additionally, cut size may be the wrong thing to minimize since a good division is not just one with small number of edges between communities. This motivated the use of Modularity (Q) as a metric to optimize a balanced graph partition. The example in Figure 3 illustrates 2 instances of the same graph such that in (a) modularity (Q) is the partitioning metric and in (b), ratio-cut is the partitioning metric.
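As a small illustration of the modularity metric Q (Newman's formulation), the following sketch scores a two-community split of a toy graph. The `modularity` helper and the example graph are assumptions of this sketch, not from the source:

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q of a partition, given a symmetric adjacency matrix."""
    k = A.sum(axis=1)                      # vertex degrees
    two_m = A.sum()                        # twice the number of edges
    same = np.equal.outer(labels, labels)  # True where endpoints share a community
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by one bridging edge; splitting along the bridge
# keeps 6 of 7 edges internal and gives Q = 5/14 for this graph.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
Q = modularity(A, labels)
print(round(Q, 4))  # 0.3571
```

Putting every vertex in one community gives Q = 0, which is why modularity, unlike raw cut size, does not reward the trivial partition.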
Applications
Conductance
Another objective function used for graph partitioning is Conductance which is the ratio between the number of cut edges and the volume of the smallest part. Conductance is related to electrical flows and random walks. The Cheeger bound guarantees that spectral bisection provides partitions with nearly optimal conductance. The quality of this approximation depends on the second smallest eigenvalue of the Laplacian λ2.
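A minimal sketch of the conductance objective on a toy barbell graph; the `conductance` helper name and the example are illustrative assumptions:

```python
import numpy as np

def conductance(A, mask):
    """Conductance of a vertex set S given as a boolean mask:
    cut(S, complement) / min(vol(S), vol(complement))."""
    mask = np.asarray(mask, dtype=bool)
    cut = A[mask][:, ~mask].sum()   # weight of edges leaving S
    vol_s = A[mask].sum()           # total degree inside S
    vol_rest = A[~mask].sum()
    return float(cut / min(vol_s, vol_rest))

# Toy barbell graph (two triangles bridged by edge 2-3): cut = 1 and both
# volumes equal 7, so the conductance of either triangle is 1/7.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
phi = conductance(A, [True, True, True, False, False, False])
print(round(phi, 4))  # 0.1429
```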
Immunization
Graph partition can be useful for identifying the minimal set of nodes or links that should be immunized in order to stop epidemics.
Other graph partition methods
Spin models have been used for clustering of multivariate data wherein similarities are translated into coupling strengths. The properties of ground state spin configuration can be directly interpreted as communities. Thus, a graph is partitioned to minimize the Hamiltonian of the partitioned graph. The Hamiltonian (H) is derived by assigning the following partition rewards and penalties.
Reward internal edges between nodes of same group (same spin)
Penalize missing edges in same group
Penalize existing edges between different groups
Reward non-links between different groups.
Additionally, Kernel-PCA-based Spectral clustering takes a form of least squares Support Vector Machine framework, and hence it becomes possible to project the data entries to a kernel induced feature space that has maximal variance, thus implying a high separation between the projected communities.
Some methods express graph partitioning as a multi-criteria optimization problem which can be solved using local methods expressed in a game theoretic framework where each node makes a decision on the partition it chooses.
For very large-scale distributed graphs classical partition methods might not apply (e.g., spectral partitioning, Metis) since they require full access to graph data in order to perform global operations. For such large-scale scenarios distributed graph partitioning is used to perform partitioning through asynchronous local operations only.
Software tools
scikit-learn implements spectral clustering with the partitioning determined from eigenvectors of the graph Laplacian matrix for the original graph computed by ARPACK, or by LOBPCG solver with multigrid preconditioning.
METIS is a graph partitioning family by Karypis and Kumar. Among this family, kMetis aims at greater partitioning speed, hMetis, applies to hypergraphs and aims at partition quality, and ParMetis is a parallel implementation of the Metis graph partitioning algorithm.
KaHyPar is a multilevel hypergraph partitioning framework providing direct k-way and recursive bisection based partitioning algorithms. It instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine grained n-level approach combined with strong local search heuristics, it computes solutions of very high quality.
Scotch is graph partitioning framework by Pellegrini. It uses recursive multilevel bisection and includes sequential as well as parallel partitioning techniques.
Jostle is a sequential and parallel graph partitioning solver developed by Chris Walshaw.
The commercialized version of this partitioner is known as NetWorks.
Party implements the Bubble/shape-optimized framework and the Helpful Sets algorithm.
The software packages DibaP and its MPI-parallel variant PDibaP by Meyerhenke implement the Bubble framework using diffusion; DibaP also uses AMG-based techniques for coarsening and solving linear systems arising in the diffusive approach.
Sanders and Schulz released a graph partitioning package KaHIP (Karlsruhe High Quality Partitioning) that implements for example flow-based methods, more-localized local searches and several parallel and sequential meta-heuristics.
The tools Parkway by Trifunovic and Knottenbelt as well as Zoltan by Devine et al. focus on hypergraph partitioning.
References
Further reading
NP-complete problems
Computational problems in graph theory | Graph partition | [
"Mathematics"
] | 2,510 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
11,974,030 | https://en.wikipedia.org/wiki/List%20of%20Chinese%20mushrooms%20and%20fungi | East Asian mushrooms and fungi are often used in East Asian cuisine, either fresh or dried. According to traditional Chinese medicine, many types of mushroom affect the eater's physical and emotional wellbeing.
List of mushrooms and fungi
See also
List of mushroom dishes
Chinese
Chinese cuisine
Chinese edible mushrooms | List of Chinese mushrooms and fungi | [
"Biology"
] | 59 | [
"Fungi",
"Lists of fungi"
] |
11,974,133 | https://en.wikipedia.org/wiki/CMU%20Pronouncing%20Dictionary | The CMU Pronouncing Dictionary (also known as CMUdict) is an open-source pronouncing dictionary originally created by the Speech Group at Carnegie Mellon University (CMU) for use in speech recognition research.
CMUdict provides a mapping from orthographic to phonetic representations for English words in their North American pronunciations. It is commonly used to generate phonetic representations for speech recognition (ASR), e.g. the CMU Sphinx system, and speech synthesis (TTS), e.g. the Festival system. CMUdict can be used as a training corpus for building statistical grapheme-to-phoneme (g2p) models that generate pronunciations for words not yet included in the dictionary.
The most recent release is 0.7b; it contains over 134,000 entries. An interactive lookup version is available.
Database format
The database is distributed as a plain text file with one entry to a line in the format "WORD <pronunciation>" with a two-space separator between the parts. If multiple pronunciations are available for a word, variants are identified using numbered versions (e.g. WORD(1)). The pronunciation is encoded using a modified form of the ARPABET system, with the addition of stress marks on vowels of levels 0, 1, and 2. A line-initial ;;; token indicates a comment. A derived format, directly suitable for speech recognition engines, is also available as part of the distribution; this format collapses stress distinctions (typically not used in ASR).
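The entry format described above is simple enough to parse with a few lines of Python. The `parse_entry` helper and the sample lines below are illustrative, not part of the official distribution:

```python
import re

def parse_entry(line):
    """Parse one CMUdict line into (word, variant_index, [phonemes]).

    Returns None for ';;;' comment lines. Uses the two-space separator and
    the WORD(n) variant convention described in the format above.
    """
    if line.startswith(';;;'):
        return None
    head, pron = line.rstrip('\n').split('  ', 1)
    m = re.fullmatch(r'(.+?)(?:\((\d+)\))?', head)
    word, variant = m.group(1), int(m.group(2) or 0)
    return word, variant, pron.split()

print(parse_entry('TOMATO  T AH0 M EY1 T OW2'))
print(parse_entry('TOMATO(1)  T AH0 M AA1 T OW2'))
```

The trailing digits on vowels (AH0, EY1, OW2) carry the stress levels 0–2; stripping them yields the stress-collapsed derived format.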
The following is a table of phonemes used by CMU Pronouncing Dictionary.
History
Applications
The Unifon converter is based on the CMU Pronouncing Dictionary.
The Natural Language Toolkit contains an interface to the CMU Pronouncing Dictionary.
The Carnegie Mellon Logios tool incorporates the CMU Pronouncing Dictionary.
PronunDict, a pronunciation dictionary of American English, uses the CMU Pronouncing Dictionary as its data source. Pronunciation is transcribed in IPA symbols. This dictionary also supports searching by pronunciation.
Some singing voice synthesizer software, such as CeVIO Creative Studio and Synthesizer V, uses a modified version of the CMU Pronouncing Dictionary for synthesizing English singing voices.
Transcriber, a tool for full-text phonetic transcription, uses the CMU Pronouncing Dictionary
15.ai, a real-time text-to-speech tool using artificial intelligence, uses the CMU Pronouncing Dictionary
See also
Moby Pronunciator, a similar project
References
External links
The current version of the dictionary is at SourceForge, although there is also a version maintained on GitHub.
Homepage – includes database search
RDF converted to Resource Description Framework by the open source Texai project.
English pronouncing dictionaries
Natural language processing
Public domain databases
Carnegie Mellon University
Software using the BSD license | CMU Pronouncing Dictionary | [
"Technology"
] | 597 | [
"Natural language processing",
"Natural language and computing"
] |
7,328,545 | https://en.wikipedia.org/wiki/Adenium%20obesum | Adenium obesum, more commonly known as a desert rose, is a poisonous species of flowering plant belonging to the tribe Nerieae of the subfamily Apocynoideae of the dogbane family, Apocynaceae. It is native to the Sahel regions south of the Sahara (from Mauritania and Senegal to Sudan), tropical and subtropical eastern and southern Africa, as well as the Arabian Peninsula. Other names for the flower include Sabi star, kudu, mock azalea, and impala lily. Adenium obesum is a popular houseplant and bonsai in temperate regions.
Description
It is an evergreen or drought-deciduous succulent shrub (which can also lose its leaves during cold spells, or according to the subspecies or cultivar). It can grow to in height, with pachycaul (disproportionately large) stems and a stout, swollen basal caudex (a rootstock that protrudes from the soil). The leaves are spirally arranged, clustered toward the tips of the shoots, simple entire, leathery in texture, long and broad. The flowers are tubular, long, with the outer portion diameter with five petals, resembling those of other related genera such as Plumeria and Nerium. The flowers tend to be red and pink, often with a whitish blush outward of the throat.
Taxonomy
Some taxonomies consider some other species in the genus to be subspecies of Adenium obesum.
Subspecies
Adenium obesum subsp. oleifolium (South Africa, Botswana)
Adenium obesum subsp. socotranum (Socotra)
Adenium obesum subsp. somalense (Eastern Africa)
Adenium obesum subsp. swazicum (Eswatini, South Africa)
Adenium obesum subsp. arabicum (Saudi Arabia, Yemen)
Adenium swazicum is a critically endangered African species native to Eswatini and Mozambique, growing up to 0.7 m (2.29 ft) tall.
Adenium somalense is also native to Africa, inhabiting Tanzania, Kenya, and Somalia, and reaching heights of 5 m (16.40 ft), which makes it the largest of these subspecies.
Adenium socotranum is native exclusively to the island of Socotra, and can grow to be 4.6 m (15 ft), but despite its small range, it is of least concern regarding endangerment. It can swell up to 8 feet (2.5 meters) in diameter at the base.
Adenium oleifolium is near threatened in the wild and is the smallest of these subspecies, growing at the tallest to 0.4 m (1.31 ft).
Adenium arabicum is monoecious and self-sterile; common names include desert rose, elephant's foot, and Adan bush. The subspecies is native to Saudi Arabia and Yemen.
Ecology
Caterpillars of the polka-dot wasp moth (Syntomeida epilais) are known to feed on the desert rose, along with feeding on oleanders.
In areas with year-round warm weather, they can bloom throughout the year.
Uses
Adenium obesum produces a sap in its roots and stems that contains cardiac glycosides. This sap is used as arrow poison for hunting large game throughout much of Africa and as a fish toxin.
Cultivation
Adenium obesum is a popular houseplant and bonsai in temperate regions. It requires a sunny location and a minimum indoor temperature in winter of . It thrives on a xeric watering regime as required by cacti. A. obesum is typically propagated by seed or stem cuttings. The numerous hybrids are propagated mainly by grafting on to seedling rootstock. While plants grown from seed are more likely to have the swollen caudex at a young age, with time many cutting-grown plants cannot be distinguished from seed-grown plants. Like many plants, Adenium obesum can also be propagated in vitro using plant tissue culture.
This plant has gained the Royal Horticultural Society's Award of Garden Merit.
Symbolic and cultural references
The species has been depicted on postage stamps issued by various countries.
See also
List of poisonous plants
Gallery
References
External links
obesum
Flora of Africa
Flora of the Arabian Peninsula
Plants described in 1819
Garden plants of Africa
Drought-tolerant plants
House plants
Caudiciform plants
Plants that can bloom all year round | Adenium obesum | [
"Biology"
] | 912 | [
"Plants that can bloom all year round",
"Plants"
] |
7,330,158 | https://en.wikipedia.org/wiki/Database%20search%20engine | A database search engine is a search engine that operates on material stored in a digital database.
Search engines
Categories of search engine software include:
Web search or full-text search (e.g. Lucene).
Database or structured data search (e.g. Dieselpoint).
Mixed or enterprise search (e.g. Google Search Appliance).
The largest web search services, such as Google and Yahoo, utilize thousands of computers to process billions of web documents using web crawlers (also called spiders), returning results for thousands of searches per second. Processing such high query volumes requires software to run in a distributed environment with redundancy.
Components
Searching for textual content in databases or structured data formats (such as XML and CSV) presents special challenges and opportunities which specialized search engines resolve. Databases allow logical queries, such as multi-field Boolean logic, which full-text searches do not. Crawling (the automated discovery of documents by software) is not necessary to find information stored in a database because the data is already structured. Indexing the data allows for faster searches.
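The multi-field Boolean queries mentioned above can be illustrated with a small sketch. The records, field names, and helper function below are invented for illustration; a real database engine would evaluate such conditions against an index rather than by scanning rows.

```python
# Hypothetical sketch: a multi-field Boolean query over structured records,
# the kind of logical filtering a database search engine supports but a
# plain full-text search does not. All data here is invented.
records = [
    {"title": "Intro to Search", "author": "Lee", "year": 2004},
    {"title": "Search at Scale", "author": "Cho", "year": 2010},
    {"title": "Crawling the Web", "author": "Lee", "year": 1999},
]

def boolean_query(rows, **conditions):
    """Return rows matching ALL field conditions (logical AND)."""
    return [r for r in rows
            if all(r.get(field) == value for field, value in conditions.items())]

# Field-aware query: author == "Lee" AND year == 2004 matches one record.
hits = boolean_query(records, author="Lee", year=2004)

# A naive full-text search over the same data cannot distinguish fields:
# the string "Lee" matches both Lee records regardless of year.
text_hits = [r for r in records if "Lee" in " ".join(str(v) for v in r.values())]
```

The structured query returns exactly one record, while the field-blind text match returns two, which is the distinction the paragraph above draws.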
Database search engines are usually included with major database software products.
Applications
Database search technology is used by large public and private entities including government database services, e-commerce companies, online advertising platforms, telecommunications service providers and other consumers with a need to access information in large repositories.
See also
Outline of search engines
List of search engines
External links
Searching for Text Information in Databases
Information retrieval systems | Database search engine | [
"Technology"
] | 303 | [
"Information technology",
"Information retrieval systems"
] |
7,330,356 | https://en.wikipedia.org/wiki/Ciliary%20neurotrophic%20factor | Ciliary neurotrophic factor is a protein that in humans is encoded by the CNTF gene.
The protein encoded by this gene is a polypeptide hormone and neurotrophic factor whose actions have mainly been studied in the nervous system where it promotes neurotransmitter synthesis and neurite outgrowth in certain neural populations including astrocytes. It is a hypothalamic neuropeptide that is a potent survival factor for neurons and oligodendrocytes and may be relevant in reducing tissue destruction during inflammatory attacks. A mutation in this gene, which results in aberrant splicing, leads to ciliary neurotrophic factor deficiency, but this phenotype is not causally related to neurologic disease. In addition to the predominant monocistronic transcript originating from this locus, the gene is also cotranscribed with the upstream ZFP91 gene. Cotranscription from the two loci results in a transcript that contains a complete coding region for the zinc finger protein but lacks a complete coding region for ciliary neurotrophic factor.
CNTF has also been shown to be expressed by cells on the bone surface, and to reduce the activity of bone-forming cells (osteoblasts).
Therapeutic applications
Satiety effects
In 2001, it was reported that in a human study examining the usefulness of CNTF for treatment of motor neuron disease, CNTF produced an unexpected and substantial weight loss in the study subjects. Further investigation revealed that CNTF could reduce food intake without causing hunger or stress, making it a candidate for weight control in leptin-resistant subjects, as CNTF is believed to operate like leptin, but by a non-leptin pathway.
Recombinant human CNTF (Axokine)
A recombinant version of human CNTF (rhCNTF), trade name Axokine, is a modified version with a 15 amino acid truncation of the C-terminus and two amino acid substitutions. It is three to five times more potent than CNTF in in vitro and in vivo assays and has improved stability properties. Like CNTF it is a neurotrophic factor, and may stimulate nerve cells to survive. It was tested in the 1990s as a treatment for amyotrophic lateral sclerosis. It did not improve muscle control as much as expected, but trial participants did report a loss of appetite.
Phase III clinical trials for the drug against obesity were conducted in 2003 by Axokine's maker, Regeneron Pharmaceuticals, demonstrating a small positive effect in some patients, but the drug was not commercialized. A major problem with the treatment was that in nearly 70% of the subjects tested, antibodies against Axokine were produced after approximately three months of treatment. In the minority of subjects who did not develop the antibodies, weight loss averaged 12.5 pounds in one year, versus 4.5 pounds for placebo-treated subjects. In order to obtain this benefit, subjects needed to receive daily subcutaneous injections of one microgram Axokine per kilogram body weight.
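The regimen described above (one microgram per kilogram of body weight, injected daily) implies only simple dose arithmetic, sketched below purely for illustration; the function name and example weight are invented, and this is in no way dosing guidance.

```python
# Illustration only: the dose arithmetic implied by the regimen above
# (one microgram of drug per kilogram of body weight, once daily).
def daily_dose_micrograms(body_weight_kg, dose_ug_per_kg=1.0):
    return body_weight_kg * dose_ug_per_kg

# A hypothetical 80 kg subject would receive 80 micrograms per day,
# i.e. 29,200 micrograms (29.2 mg) over a 365-day year.
dose = daily_dose_micrograms(80)
yearly_mg = dose * 365 / 1000
```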
A Xencor patent application raises the concern that subjects producing antibodies against CNTF analogues may eventually suffer severe adverse effects, as these antibodies could potentially interfere with the neuroprotective functions of endogenous CNTF. The application claims methods of designing CNTF analogues with lower immunogenicity than Axokine, based on analysis of the affinity of each modified epitope for each of 52 class II MHC alleles, and provides specific examples of such modifications. No such analogues are currently listed in Xencor's product pipeline.
NT-501
NT-501 is a product being developed by Neurotech that consists of encapsulated human cells genetically modified to secrete ciliary neurotrophic factor (CNTF). In a clinical trial, NT-501 demonstrated a statistically significant reduction of photoreceptor degradation in patients with retinitis pigmentosa.
Interactions
Human ciliary neurotrophic factor has been shown to interact with the Interleukin 6 receptor.
See also
Ciliary neurotrophic factor receptor
Interleukin 6
George Yancopoulos
References
Further reading
External links
Neurotrophic factors
Peptide hormones
Proteins
Developmental neuroscience | Ciliary neurotrophic factor | [
"Chemistry"
] | 891 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Molecular biology",
"Proteins",
"Neurochemistry",
"Neurotrophic factors"
] |
7,330,422 | https://en.wikipedia.org/wiki/Lymphotoxin | Lymphotoxin is a member of the tumor necrosis factor (TNF) superfamily of cytokines, whose members are responsible for regulating the growth and function of lymphocytes and are expressed by a wide variety of cells in the body.
Lymphotoxin plays a critical role in developing and preserving the framework of lymphoid organs and of gastrointestinal immune responses, as well as in the activation signaling of both the innate and adaptive immune responses. Lymphotoxin alpha (LT-α, previously known as TNF-beta) and lymphotoxin beta (LT-β), the two forms of lymphotoxin, each have distinctive structural characteristics and perform specific functions.
Structure and function
LT-α and LT-β subunits assemble into homotrimers or heterotrimers. LT-α binds with LT-β to form the membrane-bound heterotrimers LT-α1-β2 and LT-α2-β1, which are commonly referred to as lymphotoxin beta; LT-α1-β2 is the more prevalent form. LT-α also forms a homotrimer, LT-α3, which is secreted by activated lymphocytes as a soluble protein.
Lymphotoxin is produced by lymphocytes upon activation and is involved with various aspects of the immune response, including inflammation and activation signaling. Upon binding to the LTβ receptor, LT-αβ transmits signals leading to proliferation, homeostasis and activation of tissue cells in secondary lymphoid organs through induced expression of chemokines, major histocompatibility complex, and adhesion molecules. LT-αβ, which is produced by activated Type 1 T helper cells (Th1), CD8+ T cells, and natural killer (NK) cells, is known to have a major role in the normal development of Peyer's patches. Studies have found that mice with an inactivated LT-α gene (LTA) lack developed Peyer's patches and lymph nodes. In addition, LT-αβ is necessary for the proper formation of the gastrointestinal immune system.
Receptor binding and signaling activation
In general, lymphotoxin ligands are expressed by immune cells, while their receptors are found on stromal and epithelial cells.
The lymphotoxin homotrimer and heterotrimers are specific to different receptors. The LT-αβ complexes are the primary ligands for the lymphotoxin beta receptor (LTβR), which is expressed on tissue cells in multiple lymphoid organs, as well as on monocytes and dendritic cells. The soluble LT-α homotrimer binds to TNF receptors 1 and 2 (TNFR-1 and TNFR-2), and the herpesvirus entry mediator, expressed on T cells, dendritic cells, macrophages, and epithelial cells. There is also evidence that LTα3 signaling through TNFRI and TNFRII contributes to the regulation of IgA antibody in the gut.
Lymphotoxin administers a variety of activation signals in the innate immune response. LT-α is necessary for the expression of LT-α1-β2 on the cell surface as LT-α aids in the movement of LT-β to the cell surface to form LT-α1-β2. In the LT-α mediated signaling pathway, LT-α binds with LT-β to form the membrane-bound LT-α1-β2 complex. Binding of LT-α1-β2 to the LT-β receptor on the target cell can activate various signaling pathways in the effector cell such as the activation of the NF-κB pathway, a major signaling pathway that results in the release of additional pro-inflammatory cytokines essential for the innate response. The binding of lymphotoxin to LT-β receptors is essential for the recruitment of B cells and cytotoxic (CD8+) T cells to specific lymphoid sites to allow the clearing of antigen. Signaling of the LT-β receptors can also induce the differentiation of NK (natural killer) and NK-T cells, which are key players in the innate immune defense and in antiviral responses.
Carcinogenic interactions
Lymphotoxin has cytotoxic properties that can aid in the destruction of tumor cells and promote the death of cancerous cells. The activation of LT-β receptors causes an up-regulation of adhesion molecules and directs B and T cells to specific sites to destroy tumor cells. Studies using mice with an LT-α knockout found increased tumor growth in the absence of LT-αβ.
However, some studies using cancer models have found that a high expression of lymphotoxin can lead to increased growth of tumors and cancerous cell lines. The signaling of the LT-β receptor may induce the inflammatory properties of specific cancerous cell lines, and that the elimination of LT-β receptors may hinder tumor growth and lower inflammation. Mutations in the regulatory factors involved in lymphotoxin signaling may increase the risk of cancer development. One major instance is the continuous initiation of the NF-κB pathway due to an excessive binding of the LT-α1-β2 complex to LT-β receptors, which can lead to specific cancerous conditions including multiple myeloma and melanoma. As excessive inflammation can result in cell damage and a higher risk of the growth of cancer cells, mutations that affect the regulation of LT-α pro-inflammatory signaling pathways can increase the potential for cancer and tumor cell development.
See also
Lymphotoxin beta receptor
Tumor necrosis factor-alpha#Discovery
References
Further reading
External links
Cytokines | Lymphotoxin | [
"Chemistry"
] | 1,211 | [
"Cytokines",
"Signal transduction"
] |
7,330,456 | https://en.wikipedia.org/wiki/Argonaute | The Argonaute protein family, first discovered for its evolutionarily conserved stem cell function, plays a central role in RNA silencing processes as essential components of the RNA-induced silencing complex (RISC). RISC is responsible for the gene silencing phenomenon known as RNA interference (RNAi). Argonaute proteins bind different classes of small non-coding RNAs, including microRNAs (miRNAs), small interfering RNAs (siRNAs) and Piwi-interacting RNAs (piRNAs). Small RNAs guide Argonaute proteins to their specific targets through sequence complementarity (base pairing), which then leads to mRNA cleavage, translation inhibition, and/or the initiation of mRNA decay.
The name of this protein family is derived from a mutant phenotype resulting from mutation of AGO1 in Arabidopsis thaliana, which was likened by Bohmert et al. to the appearance of the pelagic octopus Argonauta argo.
{{Infobox protein family
| Symbol = Piwi
| Name = Argonaute Piwi domain
| image = 1u04-argonaute.png
| caption = An argonaute protein from Pyrococcus furiosus. The PIWI domain is on the right, the PAZ domain on the left.
| Pfam = PF02171
| InterPro = IPR003165
| PROSITE = PS50822
| CDD = cd02826
}}
RNA interference
RNA interference (RNAi) is a biological process in which RNA molecules inhibit gene expression, either by destroying specific mRNA molecules or by suppressing translation. RNAi has a significant role in defending cells against parasitic nucleotide sequences. In eukaryotes, including animals, RNAi is initiated by the enzyme Dicer. Dicer cleaves long double-stranded RNA (dsRNA) molecules, often found in viruses, into short double-stranded fragments of around 20 nucleotides called small interfering RNAs (siRNAs). The dsRNA is then separated into two single-stranded RNAs (ssRNA): the passenger strand and the guide strand. Subsequently, the passenger strand is degraded, while the guide strand is incorporated into the RNA-induced silencing complex (RISC). The most well-studied outcome of RNAi is post-transcriptional gene silencing, which occurs when the guide strand pairs with a complementary sequence in a messenger RNA molecule and induces cleavage by Argonaute, which lies at the core of the RNA-induced silencing complex.
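The Dicer processing step can be sketched schematically in code. This is a string-manipulation toy, not a biophysical model: the sequence is invented, real Dicer products carry short 3′ overhangs, and the fixed 21-nucleotide window is an assumption for illustration.

```python
# Schematic sketch of Dicer processing: a long double-stranded RNA is cut
# into short duplexes of ~21 nucleotides, each consisting of a guide strand
# paired with its complementary passenger strand.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def dice(long_dsrna_sense, fragment_length=21):
    """Cleave the sense strand into fragments and pair each with its
    antisense strand, mimicking siRNA duplex production."""
    duplexes = []
    for i in range(0, len(long_dsrna_sense) - fragment_length + 1, fragment_length):
        guide = long_dsrna_sense[i:i + fragment_length]
        passenger = reverse_complement(guide)
        duplexes.append((guide, passenger))
    return duplexes

sense = "AUGGCUAGCUAGGCUUAGCGAUAGCUAGGCUAGGCUAACGGUA"  # 43 nt, invented sequence
duplexes = dice(sense)  # two complete 21-nt duplexes; the 1-nt remainder is dropped
```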
Argonaute proteins are the active part of the RNA-induced silencing complex, cleaving the target mRNA strand complementary to their bound siRNA. Because Dicer produces short double-stranded fragments, two functional single-stranded siRNAs could in principle result, but only one of them is utilized to base pair with the target mRNA. This strand, known as the guide strand, is incorporated into the Argonaute protein and directs gene silencing; the other strand, named the passenger strand, is degraded during RISC assembly.
Once Argonaute is associated with the small RNA, the enzymatic activity conferred by the PIWI domain cleaves only the passenger strand of the small interfering RNA. RNA strand separation and incorporation into the Argonaute protein are guided by the strength of the hydrogen-bond interaction at the 5′ ends of the RNA duplex, known as the asymmetry rule. The degree of complementarity between the two strands of the intermediate RNA duplex also defines how miRNAs are sorted into different types of Argonaute proteins.
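The asymmetry rule can be caricatured with a toy scoring function: the strand whose 5′ end lies in the less stably paired end of the duplex becomes the guide. The window size, the crude per-base bond counts, and both sequences below are invented for illustration; real strand selection depends on more than this score.

```python
# Toy model of the asymmetry rule: score 5'-end stability by counting
# hydrogen bonds (G:C pairs form three, A:U pairs form two) over the
# terminal few bases, then pick the strand with the weaker 5' end.
def end_stability(strand, window=4):
    """Higher score = more hydrogen bonds = more stable 5' end."""
    return sum(3 if base in "GC" else 2 for base in strand[:window])

def pick_guide(strand_a, strand_b):
    """Return the strand with the less stable (more easily unwound) 5' end."""
    return strand_a if end_stability(strand_a) <= end_stability(strand_b) else strand_b

# Invented strands: strand_a has an A/U-rich 5' end, strand_b a G/C-rich one.
strand_a = "UUAUGCGGCAUCGAUCGGCUA"
strand_b = "GCCGAUCGAUGCCGCAUAAUU"  # not a true complement; illustration only
guide = pick_guide(strand_a, strand_b)  # strand_a, whose 5' end is A/U-rich
```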
In animals, Argonaute associated with miRNA binds to the 3′-untranslated region (3′-UTR) of mRNA and prevents the production of proteins in various ways. The recruitment of Argonaute proteins to targeted mRNA can induce mRNA degradation. The Argonaute-miRNA complex can also affect the formation of functional ribosomes at the 5′ end of the mRNA; here the complex competes with translation initiation factors and/or abrogates ribosome assembly. In addition, the Argonaute-miRNA complex can adjust protein production by recruiting cellular factors, such as peptides or post-translational modifying enzymes, which degrade the growing polypeptide.
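Target recognition by base pairing can be sketched as a simple motif scan. The assumption here is the commonly described "seed" heuristic (miRNA nucleotides 2-8 pairing with the target); both sequences below are invented, and real target recognition involves much more than exact seed matches.

```python
# Toy seed-match scan: find positions in a 3'-UTR that are complementary
# to the miRNA "seed" (nucleotides 2-8, 1-based). Sequences are invented.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    """Return 0-based positions in the 3'-UTR matching the reverse
    complement of the miRNA seed region."""
    seed = mirna[1:8]  # nucleotides 2-8
    target_motif = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(target_motif) + 1)
            if utr[i:i + len(target_motif)] == target_motif]

mirna = "UGAGGUAGUAGGUUGUAUAGU"    # invented miRNA-like sequence
utr = "AAACUACCUCAAAACUACCUCAAA"   # invented UTR with two seed-complementary sites
sites = seed_sites(mirna, utr)
```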
In plants, once de novo double-stranded RNA duplexes are generated with the target mRNA, an unknown RNase-III-like enzyme produces new siRNAs. These are then loaded onto Argonaute proteins containing PIWI domains that lack the catalytic amino acid residues, which might induce another level of specific gene silencing.
Functional domains and mechanism
The Argonaute (AGO) gene family encodes six characteristic domains: N- terminal (N), Linker-1 (L1), PAZ, Linker-2 (L2), Mid, and a C-terminal PIWI domain.
The PAZ domain is named for Drosophila Piwi, Arabidopsis Argonaute-1, and Arabidopsis Zwille (also known as pinhead, and later renamed argonaute-10), where the domain was first recognized to be conserved. The PAZ domain is an RNA binding module that recognizes single-stranded 3′ ends of siRNA, miRNA and piRNA, in a sequence independent manner.
PIWI is named after the Drosophila Piwi protein. Structurally resembling RNaseH, the PIWI domain is essential for the target cleavage. The active site with aspartate–aspartate–glutamate triad harbors a divalent metal ion, necessary for the catalysis. Family members of AGO that lost this conserved feature during evolution lack the cleavage activity. In human AGO, the PIWI motif also mediates protein-protein interaction at the PIWI box, where it binds to Dicer at an RNase III domain.
At the interface of the PIWI and Mid domains sits the 5′ phosphate of a siRNA, miRNA or piRNA, which has been found essential for functionality. Within Mid lies an MC motif, a homologous structure proposed to mimic the cap-binding motif found in eIF4E. It was later found that the MC motif is not involved in mRNA cap binding.
Family members
In humans, there are eight AGO family members, some of which are investigated intensively. However, even though AGO1–4 are capable of loading miRNA, endonuclease activity and thus RNAi-dependent gene silencing exclusively belongs to AGO2. Considering the sequence conservation of PAZ and PIWI domains across the family, the uniqueness of AGO2 is presumed to arise from either the N-terminus or the spacing region linking PAZ and PIWI motifs.
Several AGO family members in plants also attract study. AGO1 is involved in miRNA related RNA degradation, and plays a central role in morphogenesis. In some organisms, it is strictly required for epigenetic silencing. It is regulated by miRNA itself. AGO4 does not involve in RNAi directed RNA degradation, but in DNA methylation and other epigenetic regulation, through small RNA (smRNA) pathway. AGO10 is involved in plant development. AGO7 has a function distinct from AGO 1 and 10, and is not found in gene silencing induced by transgenes. Instead, it is related to developmental timing in plants.
Disease and therapeutic tools
Argonaute proteins have been reported to be associated with cancers. For diseases that involve selective or elevated expression of particular identified genes, such as pancreatic cancer, the high sequence specificity of RNA interference might make it a suitable treatment, particularly for combating cancers associated with mutated endogenous gene sequences. Several tiny non-coding RNAs (microRNAs) have been reported to be related to human cancers; for example, miR-15a and miR-16a are frequently deleted and/or down-regulated in patients. Even though the biological functions of miRNAs are not fully understood, roles for miRNAs in the coordination of cell proliferation and cell death during development and metabolism have been uncovered. It is believed that miRNAs can direct negative or positive regulation at different levels, depending on the specific miRNA-target base pair interaction and the cofactors that recognize them.
Because many viruses have RNA rather than DNA as their genetic material and go through at least one stage in their life cycle when they make double-stranded RNA, RNA interference is considered to be a potentially evolutionarily ancient mechanism for protecting organisms from viruses. The small interfering RNAs produced by Dicer cause sequence-specific, post-transcriptional gene silencing by guiding an endonuclease, the RNA-induced silencing complex (RISC), to mRNA. This process has been seen in a wide range of organisms, such as the fungus Neurospora (in which it is known as quelling), plants (post-transcriptional gene silencing) and mammalian cells (RNAi). If there is complete or near-complete sequence complementarity between the small RNA and the target, the Argonaute protein component of RISC mediates cleavage of the target transcript; with partial complementarity, the mechanism predominantly involves repression of translation.
Biotechnological applications of prokaryotic Argonaute proteins
In 2016, a group from Hebei University of Science and Technology reported genome editing using a prokaryotic Argonaute protein from Natronobacterium gregoryi. However, the evidence for the application of Argonaute proteins as DNA-guided nucleases for genome editing has been questioned, and the claim was retracted from the leading journal. In 2017, a group from the University of Illinois reported using a prokaryotic Argonaute protein taken from Pyrococcus furiosus (PfAgo), along with guide DNA, to edit DNA in vitro as artificial restriction enzymes. PfAgo-based artificial restriction enzymes were also used for storing data on native DNA sequences via enzymatic nicking.
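The idea of a DNA-guided "artificial restriction enzyme", in which a short guide DNA directs cleavage wherever its complement occurs in a target, can be sketched as a site finder. The guide, target, and cut-offset parameter below are all invented; real PfAgo cleavage chemistry and site preferences are not modeled here.

```python
# Hypothetical sketch: locate cleavage positions in a target DNA wherever
# the reverse complement of a guide DNA occurs. Sequences are invented.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def cut_positions(guide, target, cut_offset=10):
    """Return positions in `target` where a guide-complementary site begins,
    shifted by a fixed offset to mimic cleavage within the site."""
    site = "".join(COMPLEMENT[b] for b in reversed(guide))
    return [i + cut_offset for i in range(len(target) - len(site) + 1)
            if target[i:i + len(site)] == site]

guide = "TTGACCTGAATCGATG"           # 16-nt guide DNA, invented
target = "AAAACATCGATTCAGGTCAAGGGG"  # contains one guide-complementary site
cuts = cut_positions(guide, target)  # one predicted cut position
```

Using two such guides flanking a region would excise a defined fragment, which is the sense in which guide-programmed nucleases act like programmable restriction enzymes.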
References
External links
starBase database: a database for exploring microRNA–mRNA interaction maps from Argonaute CLIP-Seq(HITS-CLIP, PAR-CLIP) and Degradome-Seq data.
Ribonucleases
Molecular genetics
MicroRNA
RNA-binding proteins
RNA interference | Argonaute | [
"Chemistry",
"Biology"
] | 2,213 | [
"Molecular genetics",
"Molecular biology"
] |