| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
58,979,901 | https://en.wikipedia.org/wiki/Natan%20Yavlinsky | Natan Aronovich Yavlinsky (13 February 1912 – 28 July 1962) was a Soviet physicist who invented and developed the first working tokamak.
Early life and career
Yavlinsky was born to a family of doctors on 13 February 1912 at Kharkiv (then Kharkov), Russian Empire. Grigory Yavlinsky, an economist and politician, is related to him. He completed professional technical school (PTU) in 1931 and finished an engineering degree in 1936 at Kharkiv Polytechnic Institute (then Kharkiv V.I. Lenin Polytechnic Institute). As a student, he worked in the Kharkiv Electromechanical Plant. He became a member of the Communist Party of the Soviet Union (then the All-Union Communist Party) in 1932, but was removed from the party in 1937. His exclusion from the party also cost him his work at the Moscow Power Engineering Institute (founded as the Correspondence Power Engineering Institute). While little is known about his removal from the party, his membership and his position were restored in 1939. He continued working at the institute until 1948, when he obtained his Candidate of Sciences, the Soviet equivalent of a Doctor of Philosophy degree. Also by 1948, Yavlinsky had become a senior associate of the USSR Academy of Sciences.
Second World War
Although Yavlinsky was exempt from military service because of his scientific background and his position as head of the factory design bureau at the Moscow Power Engineering Institute, he volunteered when the Second World War reached the Soviet Union in 1941, serving as head of a Soviet artillery repair workshop. His service during the Battle of Stalingrad earned him the Medal "For the Defence of Stalingrad" in 1942. For his military service he also later received the Medal "For the Victory over Germany in the Great Patriotic War 1941–1945" and the Medal "For Valiant Labor in the Great Patriotic War of 1941–1945". Two years later, in 1944, he was recalled from the front to develop electric motor systems for artillery at the institute. For this work, he was awarded the Stalin Prize in 1949.
Contribution to nuclear physics
The first attempts to build a practical fusion machine took place in the United Kingdom, where George Paget Thomson had selected the pinch effect as a promising technique in 1945. After several failed attempts to gain funding, he gave up and asked two graduate students, Stanley (Stan) W. Cousins and Alan Alfred Ware (1924-2010), to build a device out of surplus radar equipment. This was successfully operated in 1948, but showed no clear evidence of fusion and failed to gain the interest of the Atomic Energy Research Establishment. In 1948, Yavlinsky moved to the Kurchatov Institute (also known as the I. V. Kurchatov Institute of Atomic Energy, named after its head Igor Kurchatov). By this time, other Soviet scientists under Kurchatov such as Nobel Prize laureates Andrei Sakharov and Igor Tamm were working on the Soviet atomic bomb project. As for Yavlinsky, who was given his own laboratory in the institute, he was tasked to develop power supply systems. It did not take long before he also became involved in nuclear research.
After developing the bomb, Sakharov and Tamm began work on the tokamak system in 1951. A tokamak is a device that uses a powerful magnetic field to confine a hot plasma in the shape of a torus. The tokamak is one of several types of magnetic confinement devices being developed to produce controlled thermonuclear fusion power. The word tokamak is a transliteration of the Russian word токамак, an acronym of either:
"тороидальная камера с магнитными катушками" (toroidal'naya kamera s magnitnymi katushkami) — toroidal chamber with magnetic coils; or
"тороидальная камера с аксиальным магнитным полем" (toroidal'naya kamera s aksial'nym magnitnym polem) — toroidal chamber with axial magnetic field.
The term was attributed to Igor Golovin. Sakharov and Tamm completed a much more detailed consideration of their original proposal, calling for a device with a specified major radius (of the torus as a whole) and minor radius (the interior of the cylinder). The proposal suggested the system could produce tritium each day, or breed U233. However, Yavlinsky and another scientist, Golovin, considered developing another model focusing on a more static toroidal arrangement. It was the development of the concept now known as the safety factor (labelled q in mathematical notation) that guided tokamak development; by arranging the reactor so this critical factor q was always greater than 1, the tokamaks strongly suppressed the instabilities that plagued earlier designs. Yavlinsky's model led to the creation of T-1, the first real tokamak, in 1958. The T-1 used both stronger external magnets and a reduced current compared to stabilized pinch machines like ZETA. Yavlinsky was already preparing the design of an even larger model, later built as T-3, the first large tokamak. With the apparently successful ZETA announcement, Yavlinsky's engineering concept came to be viewed as more acceptable. For his work on "powerful impulse discharges in a gas, to obtain unusually high temperatures needed for thermonuclear processes," he was awarded the Lenin Prize and the Stalin Prize in 1958. Despite this success, Kurchatov asked Yavlinsky to develop a stellarator instead of finishing the T-3. Moreover, by 1961 the succeeding installation, known as T-2, had begun showing issues in its toroidal circuits. Nevertheless, Yavlinsky's design prevailed as other Soviet scientists began to favor the tokamak and persuaded Kurchatov to leave stellarator research to the Americans.
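For orientation (this formula is not in the article; it uses standard plasma-physics notation rather than anything stated by the source), the safety factor of a tokamak in the usual large-aspect-ratio approximation can be sketched as

\[ q(r) \approx \frac{r\,B_\phi}{R\,B_\theta}, \]

where r is the minor radius of a flux surface, R the major radius, B_\phi the toroidal magnetic field and B_\theta the poloidal field produced by the plasma current; keeping q > 1 everywhere suppresses the kink-type instabilities referred to above.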
Death
Yavlinsky was not to see the T-3 completed. On 28 July 1962, while travelling from Lviv to Sochi on Aeroflot Flight 415, he and his family died in an airplane crash at Gagra. While there has been speculation that his death was connected with politics, primarily over his intended developments in nuclear research, the government did not provide any clear indication that this was so. Despite his death, the T-3 was finished, and by 1965 it began to show successful results in compensating for the inadequacies of other systems, including the stellarator. The T-3 eventually surpassed the Bohm limit by a factor of ten. Three years later, when the Soviets had achieved two of the main criteria for nuclear fusion, namely the required temperature and the plasma confinement time, the so-called tokamak stampede reached the United States.
References
1912 births
1962 deaths
Experimental physicists
Particle physicists
Russian people of Jewish descent
Recipients of the Lenin Prize
Nuclear weapons program of the Soviet Union people
Soviet inventors
Russian socialists
Soviet nuclear physicists
Recipients of the Stalin Prize
Soviet military personnel of World War II
Victims of aviation accidents or incidents in 1962
Victims of aviation accidents or incidents in the Soviet Union | Natan Yavlinsky | Physics | 1,538 |
4,055,589 | https://en.wikipedia.org/wiki/Proprietary%20hardware | Proprietary hardware is computer hardware whose interface is controlled by the proprietor, often under patent or trade-secret protection.
Historically, most early computer hardware was designed as proprietary until the 1980s, when the IBM PC changed this paradigm. Earlier, in the 1970s, many vendors tried to challenge IBM's monopoly in the mainframe computer market by reverse engineering and producing hardware components electrically compatible with expensive IBM equipment and (usually) able to run the same software. Those vendors were nicknamed plug compatible manufacturers (PCMs).
See also
Micro Channel architecture, a commonly cited historical example of proprietary hardware
Vendor lock-in
Proprietary device drivers
Proprietary firmware
Proprietary software
References
Computer peripherals | Proprietary hardware | Technology | 131 |
58,602,846 | https://en.wikipedia.org/wiki/C%C3%A9line%20B%C5%93hm | Céline Bœhm is a professor of Particle Physics at the University of Sydney. She works on astroparticle physics and dark matter.
Early life and education
Bœhm studied fundamental physics at the Pierre and Marie Curie University, graduating in 1997. She joined École Polytechnique, where she obtained a Master in Engineering in 1998. She earned the highest distinction for a postgraduate diploma in theoretical physics. She completed her PhD at the École normale supérieure in Paris in 2001, working with Pierre Fayet. She worked on supersymmetry, in particular the 4-body decay of the stop particle. She studied the light scalar top quark and supersymmetric dark matter, and looked at collisional damping, which considers the interactions of dark matter and standard model particles and their impact on the cosmic microwave background.
Career and research
In 2001 Bœhm joined Joseph Silk at the University of Oxford. Here she worked on light dark matter particles which couple to light Z′ bosons. She proposed new candidates for scalar dark matter, in the form of heavy fermions or light gauge bosons. When the SPI spectrometer onboard INTEGRAL identified a 511 keV line in the Galactic Center, Bœhm suggested that this could have been the signature of dark matter. She has continued to search for new signatures of dark matter, including examining the GeV excess in the Fermi Gamma-ray Space Telescope data. In 2004 Bœhm joined the Laboratoire d'Annecy-le-Vieux de Physique Théorique, where she was promoted to senior lecturer in 2008. She was awarded the Centre national de la recherche scientifique Bronze Medal.
She looked at the analysis of the CoGeNT direct detection experiment, and found that it could have suffered from a large background. In 2015 Bœhm was made a Fellow of the Institute of Physics. She is the principal investigator of the Theia mission, a space observatory which will allow Bœhm and her team to test the dark matter predictions that arise from the Lambda-CDM model.
Bœhm was made an Emmy Noether Fellow at the Perimeter Institute for Theoretical Physics in 2016, where she continued to work on dark matter. That year, she was promoted to Professor in the Institute for Particle Physics Phenomenology at Durham University. She gave a TED talk, The Invisible is All What Matters, at Durham in 2017. Alongside her work in astroparticle physics, she works on non-crystallographic Coxeter groups. She led the dark matter work package of the Euclid Consortium. In 2017 Bœhm spent two months as a visiting professor at Columbia University, as well as working at the Paris Observatory. She proposed using circular polarisation to study dark matter and neutrinos. She joined the University of Sydney as Head of School for physics in 2018. Bœhm has written for The Conversation. She has taken part in Pint of Science.
References
Academics of the University of Oxford
Academics of Durham University
Academic staff of the University of Sydney
Particle physicists
Dark matter
École Polytechnique alumni
Fellows of the Institute of Physics
Pierre and Marie Curie University alumni
École Normale Supérieure alumni
Living people
1974 births | Céline Bœhm | Physics,Astronomy | 658 |
2,224,896 | https://en.wikipedia.org/wiki/Archibald%20Howie | Archibald "Archie" Howie (born 8 March 1934) is a British physicist and Emeritus Professor at the University of Cambridge, known for his pioneering work on the interpretation of transmission electron microscope images of crystals. Born in 1934, he attended Kirkcaldy High School and the University of Edinburgh. He received his PhD from the University of Cambridge, where he subsequently took up a permanent post. He has been a fellow of Churchill College since its foundation, and was President of its Senior Combination Room (SCR) until 2010.
In 1965, with Hirsch, Whelan, Pashley and Nicholson, he published the seminal text Electron Microscopy of Thin Crystals. He was elected to the Royal Society in 1978 and awarded their Royal Medal in 1999. In 1992 he was awarded the Guthrie Medal and Prize. He was elected an Honorary Fellow of the Royal Society of Edinburgh in 1995. He was head of the Cavendish Laboratory from 1989 to 1997.
References
1934 births
Living people
British physicists
British materials scientists
Alumni of the University of Edinburgh
Fellows of the Royal Society
Commanders of the Order of the British Empire
Fellows of Churchill College, Cambridge
Microscopists
Royal Medal winners
Alumni of Trinity College, Cambridge
Fellows of the Royal Microscopical Society
Presidents of the International Federation of Societies for Microscopy
Scientists of the Cavendish Laboratory
Presidents of the Cambridge Philosophical Society | Archibald Howie | Chemistry | 263 |
10,073,845 | https://en.wikipedia.org/wiki/1/4%20%2B%201/16%20%2B%201/64%20%2B%201/256%20%2B%20%E2%8B%AF | In mathematics, the infinite series 1/4 + 1/16 + 1/64 + 1/256 + ⋯ is an example of one of the first infinite series to be summed in the history of mathematics; it was used by Archimedes circa 250–200 BC. As it is a geometric series with first term 1/4 and common ratio 1/4, its sum is (1/4)/(1 − 1/4) = 1/3.
Visual demonstrations
The series lends itself to some particularly simple visual demonstrations because a square and a triangle both divide into four similar pieces, each of which contains 1/4 the area of the original.
In the figure on the left, if the large square is taken to have area 1, then the largest black square has area 1/2 × 1/2 = 1/4. Likewise, the second largest black square has area 1/16, and the third largest black square has area 1/64. The area taken up by all of the black squares together is therefore 1/4 + 1/16 + 1/64 + ⋯, and this is also the area taken up by the gray squares and the white squares. Since these three areas cover the unit square, the figure demonstrates that 3(1/4 + 1/16 + 1/64 + 1/256 + ⋯) = 1.
Archimedes' own illustration, adapted at top, was slightly different, being closer to the equation 3/4 + 3/16 + 3/64 + 3/256 + ⋯ = 1.
See below for details on Archimedes' interpretation.
The same geometric strategy also works for triangles, as in the figure on the right: if the large triangle has area 1, then the largest black triangle has area 1/4, and so on. The figure as a whole has a self-similarity between the large triangle and its upper sub-triangle. A related construction making the figure similar to all three of its corner pieces produces the Sierpiński triangle.
Proof by Archimedes
Archimedes encounters the series in his work Quadrature of the Parabola. He finds the area inside a parabola by the method of exhaustion, and he gets a series of triangles; each stage of the construction adds an area 1/4 times the area of the previous stage. His desired result is that the total area is 4/3 times the area of the first stage. To get there, he takes a break from parabolas to introduce an algebraic lemma:
Proposition 23. Given a series of areas A, B, C, D, …, Z, of which A is the greatest, and each is equal to four times the next in order, then A + B + C + D + ⋯ + Z + (1/3)Z = (4/3)A.
Archimedes proves the proposition by first calculating
(B + C + ⋯ + Z) + (1/3)(B + C + ⋯ + Z) = (4/3)(B + C + ⋯ + Z) = (1/3)(A + B + ⋯ + Y),
where the last step uses the fact that each of B, C, …, Z is one quarter of the preceding area.
On the other hand,
(1/3)B + (1/3)C + ⋯ + (1/3)Y = (1/3)(B + C + ⋯ + Y).
Subtracting this equation from the previous equation yields
B + C + ⋯ + Z + (1/3)Z = (1/3)A,
and adding A to both sides gives the desired result.
Today, a more standard phrasing of Archimedes' proposition is that the partial sums of the series are: 1/4 + 1/16 + 1/64 + ⋯ + 1/4^n = (1/3)(1 − 1/4^n).
This form can be proved by multiplying both sides by 1 − 1/4 and observing that all but the first and the last of the terms on the left-hand side of the equation cancel in pairs. The same strategy works for any finite geometric series.
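As a worked illustration of the cancellation (standard algebra, not taken verbatim from the article):

\[
\left(1 - \tfrac{1}{4}\right)\left(\tfrac{1}{4} + \tfrac{1}{16} + \cdots + \tfrac{1}{4^{n}}\right)
= \tfrac{1}{4} - \tfrac{1}{4^{n+1}},
\]

since every intermediate term appears once with a plus sign and once with a minus sign; dividing both sides by 3/4 recovers the partial-sum formula above.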
The limit
Archimedes' Proposition 24 applies the finite (but indeterminate) sum in Proposition 23 to the area inside a parabola by a double reductio ad absurdum. He does not quite take the limit of the above partial sums, but in modern calculus this step is easy enough: lim_{n→∞} (1/3)(1 − 1/4^n) = 1/3.
Since the sum of an infinite series is defined as the limit of its partial sums, 1/4 + 1/16 + 1/64 + 1/256 + ⋯ = 1/3.
Notes
References
Geometric series
Proof without words | 1/4 + 1/16 + 1/64 + 1/256 + ⋯ | Mathematics | 624 |
22,350,680 | https://en.wikipedia.org/wiki/Free%20carrier%20absorption | Free carrier absorption occurs when a material absorbs a photon, and a carrier (electron or hole) is excited from an already-excited state to another, unoccupied state in the same band (but possibly a different subband). This intraband absorption is different from interband absorption because the excited carrier is already in an excited band, such as an electron in the conduction band or a hole in the valence band, where it is free to move. In interband absorption, the carrier starts in a fixed, nonconducting band and is excited to a conducting one.
In the simplest approximation, the Drude model, free carrier absorption is proportional to the square of the wavelength.
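A minimal sketch of where the λ² dependence comes from (standard Drude-model algebra; the carrier density N, effective mass m*, scattering time τ, refractive index n and the other symbols below are assumptions of this sketch, not quantities defined by the article): in the high-frequency limit ωτ ≫ 1 the real part of the Drude conductivity falls off as 1/ω², so

\[
\alpha(\omega) \;=\; \frac{\operatorname{Re}\,\sigma(\omega)}{\varepsilon_{0}\, n\, c}
\;\approx\; \frac{N e^{2}}{m^{*}\varepsilon_{0}\, n\, c\, \tau\, \omega^{2}}
\;=\; \frac{N e^{2}\,\lambda^{2}}{4\pi^{2} m^{*}\varepsilon_{0}\, n\, c^{3}\, \tau}
\;\propto\; \lambda^{2}.
\]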
Quantum mechanical approach
It is well known that the optical transitions of electrons and holes in the solid state are a useful clue for understanding the physical properties of the material. However, the dynamics of the carriers are affected by other carriers, and not only by the periodic lattice potential. Moreover, the thermal fluctuations of each electron should be taken into account. Therefore, a statistical approach is needed. To predict the optical transitions with appropriate precision, one chooses an approximation, called the assumption of quasi-thermal distributions, for the electrons in the conduction band and the holes in the valence band. In this case, the diagonal components of the density matrix become negligible after introducing the thermal distribution function,
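(The explicit formula does not survive in this extract; the standard quasi-equilibrium form, written here with an assumed subband index l and wavevector k, is)

\[ f_{l,k} = \frac{1}{\exp\!\big[(\varepsilon_{l,k} - \mu)/k_{B}T\big] + 1}, \]

with μ the chemical potential, k_B the Boltzmann constant and T the temperature.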
This is the Fermi–Dirac distribution for the distribution of electron energies ε_{l,k}. Thus, summing over all possible states (l and k) yields the total number of carriers N.
The optical susceptibility
Using the above distribution function, the time evolution of the density matrix can be ignored, which greatly simplifies the analysis.
The optical polarization is
With this relation, and after applying the Fourier transformation, the optical susceptibility is
Absorption coefficient
The transition amplitude corresponds to the absorption of energy, and the absorbed energy is proportional to the optical conductivity, which is the imaginary part of the optical susceptibility multiplied by the frequency. Therefore, in order to obtain the absorption coefficient, a crucial quantity for the investigation of electronic structure, we can use the optical susceptibility.
The energy of free carriers is proportional to the square of their momentum. Using the band gap energy and the electron-hole distribution function, we can obtain the absorption coefficient with some mathematical calculation. The final result is
This result is important for understanding optical measurement data and the electronic properties of metals and semiconductors. The absorption coefficient is negative when the material supports stimulated emission, which is the basis for the operation of lasers, particularly semiconductor lasers.
References
1. H. Haug and S. W. Koch, Quantum Theory of the Optical and Electronic Properties of Semiconductors, World Scientific (1994), sec. 5.4.
Quantum mechanics | Free carrier absorption | Physics | 562 |
55,127,142 | https://en.wikipedia.org/wiki/Mycorrhaphium%20sessile | Mycorrhaphium sessile is a species of tooth fungus in the family Steccherinaceae that is found in China. It was described as a new species in 2009 by mycologists Hai-Sheng Yuan and Yu-Cheng Dai. The type collection was made in Yunnan, where it was found fruiting on a fallen branch.
References
Steccherinaceae
Fungi of China
Fungi described in 1989
Taxa named by Yu-Cheng Dai
Fungus species | Mycorrhaphium sessile | Biology | 96 |
41,478,545 | https://en.wikipedia.org/wiki/Photoaffinity%20labeling | Photoaffinity labeling is a chemoproteomics technique used to attach "labels" to the active site of a large molecule, especially a protein. The "label" attaches to the molecule loosely and reversibly, and has an inactive site which can be converted using photolysis into a highly reactive form, which causes the label to bind more permanently to the large molecule via a covalent bond. The technique was first described in the 1970s. Molecules that have been used as labels in this process are often analogs of complex molecules, in which certain functional groups are replaced with a photoreactive group, such as an azide, a diazirine or a benzophenone.
References
Molecular biology techniques | Photoaffinity labeling | Chemistry,Biology | 147 |
1,434,980 | https://en.wikipedia.org/wiki/Jaguar%20%28British%20rocket%29 | The Jaguar (also called Jabiru) was a three-stage British sounding rocket built in several versions.
The first stage of the Jabiru Mk.1 was 5.6 m long and had a takeoff weight of 1,170 kilograms, of which about 866 kilograms were fuel, being powered by a Rook II engine. The second stage weighed 292 kilograms, of which 184 kilograms were allotted to fuel, and was powered by a Gosling II engine. The third stage contained 26 kilograms of fuel and was powered by a Lobster I engine. In all stages solid fuel was used. The complete rocket was 12 meters long. The Jabiru Mk.1 was launched several times between 1960 and 1964 at the aerospace testing area at Woomera, South Australia.
The follow-up version, the Jabiru Mk.2, contained an improved starting stage (Rook IIIA) and a second stage (Goldfinch II) with 307 kilograms of fuel as well as a third stage (Gosling IV) with 190 kilograms fuel. The Jabiru Mk.2 was launched ten times at Woomera between 1964 and 1970.
This rocket was replaced by the Jabiru Mk.3 which used a modified first stage of the Jabiru Mk.2 as second stage (Rook IIIB), while the first stage remained unchanged (Rook IIIA), with no third stage being used. The Jabiru Mk.3 was used for re-entry experiments between 1971 and 1974.
Versions
The Jaguar / Jabiru had several configurations:
References
Experimental rockets
Sounding rockets of the United Kingdom
History of science and technology in the United Kingdom | Jaguar (British rocket) | Astronomy | 333 |
573,528 | https://en.wikipedia.org/wiki/Systems%20development%20life%20cycle | In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
"Software development organization follows some process when developing a Software product in mature organization this is well defined and managed. In Software development life cycle, we develop Software in a Systematic and disciplined manner."
Overview
A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize.
SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations.
In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".
SDLC is not a methodology per se, but rather a description of the phases that a methodology should address. The list of phases is not definitive, but typically includes planning, analysis, design, build, test, implement, and maintenance/support. In the Scrum framework, for example, one could say a single user story goes through all the phases of the SDLC within a two-week sprint. By contrast, in the waterfall methodology, every business requirement is translated into feature/functional descriptions, which are then all implemented, typically over a period of months or longer.
History
According to Elliott (2004), SDLC "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".
The structured systems analysis and design method (SSADM) was produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".
Models
SDLC provides a set of phases/steps/activities for system designers and developers to follow. Each phase builds on the results of the previous one. Not every project requires that the phases be sequential. For smaller, simpler projects, phases may be combined or may overlap.
Waterfall
The oldest and best known is the waterfall model, which uses a linear sequence of steps. Waterfall has different varieties. One variety is as follows:
Preliminary analysis
Conduct a preliminary analysis, consider alternative solutions, estimate costs and benefits, and submit a preliminary plan with recommendations.
Conduct preliminary analysis: Identify the organization's objectives and define the nature and scope of the project. Ensure that the project fits with the objectives.
Consider alternative solutions: Alternatives may come from interviewing employees, clients, suppliers, and consultants, as well as competitive analysis.
Cost-benefit analysis: Analyze the costs and benefits of the project.
Systems analysis, requirements definition
Decompose project goals into defined functions and operations. This involves gathering and interpreting facts, diagnosing problems, and recommending changes. Analyze end-user information needs and resolve inconsistencies and incompleteness:
Collect facts: Obtain end-user requirements by document review, client interviews, observation, and questionnaires.
Scrutinize existing system(s): Identify pros and cons.
Analyze the proposed system: Find solutions to issues and prepare specifications, incorporating appropriate user proposals.
Systems design
At this step, desired features and operations are detailed, including screen layouts, business rules, process diagrams, pseudocode, and other deliverables.
Development
Write the code.
Integration and testing
Assemble the modules in a testing environment. Check for errors, bugs, and interoperability.
Acceptance, installation, deployment
Put the system into production. This may involve training users, deploying hardware, and loading information from the prior system.
Maintenance
Monitor the system to assess its ongoing fitness. Make modest changes and fixes as needed to maintain the quality of the system. Continual monitoring and updates ensure the system remains effective and high-quality.
Evaluation
The system and the process are reviewed. Relevant questions include whether the newly implemented system meets requirements and achieves project goals, whether the system is usable, reliable/available, properly scaled and fault-tolerant. Process checks include review of timelines and expenses, as well as user acceptance.
Disposal
At end of life, plans are developed for discontinuing the system and transitioning to its replacement. Related information and infrastructure must be repurposed, archived, discarded, or destroyed, while appropriately protecting security.
In the following diagram, these stages are divided into ten steps, from definition to creation and modification of IT work products:
Systems analysis and design
Systems analysis and design (SAD) can be considered a meta-development activity, which serves to set the stage and bound the problem. SAD can help balance competing high-level requirements. SAD interacts with distributed enterprise architecture, enterprise IT architecture, and business architecture, and relies heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning.
Object-oriented analysis and design
Object-oriented analysis and design (OOAD) is the process of analyzing a problem domain to develop a conceptual model that can then be used to guide development. During the analysis phase, a programmer develops written requirements and a formal vision document via interviews with stakeholders.
The conceptual model that results from OOAD typically consists of use cases, and class and interaction diagrams. It may also include a user interface mock-up.
An output artifact does not need to be completely defined to serve as input of object-oriented design; analysis and design may occur in parallel. In practice the results of one activity can feed the other in an iterative process.
Some typical input artifacts for OOAD:
Conceptual model: A conceptual model is the result of object-oriented analysis. It captures concepts in the problem domain. The conceptual model is explicitly independent of implementation details.
Use cases: A use case is a description of sequences of events that, taken together, complete a required task. Each use case provides scenarios that convey how the system should interact with actors (users). Actors may be end users or other systems. Use cases may be further elaborated using diagrams. Such diagrams identify the actor and the processes they perform.
System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular use case, the events that actors generate and their order, including inter-system events.
User interface document: Document that shows and describes the user interface.
Data model: A data model describes how data elements relate to each other. The data model is created before the design phase. Object-oriented designs map directly from the data model. Relational designs are more involved.
System lifecycle
The system lifecycle is a view of a system or proposed system that addresses all phases of its existence to include system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out, and disposal.
Conceptual design
The conceptual design stage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated, and a system specification is developed. The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptual design review has determined that the system specification properly addresses the motivating need.
Key steps within the conceptual design stage include:
Need identification
Feasibility analysis
System requirements analysis
System specification
Conceptual design review
Preliminary system design
During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements. At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development.
Key steps within the preliminary design stage include:
Functional analysis
Requirements allocation
Detailed trade-off studies
Synthesis of system options
Preliminary design of engineering models
Development specification
Preliminary design review
For example, as the system analyst of Viti Bank, you have been tasked to examine the current information system. Viti Bank is a fast-growing bank in Fiji. Customers in remote rural areas find it difficult to access the bank's services; it takes them days or even weeks to travel to a location where those services are available. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations for how it can be improved to meet those needs.
Detail design and development
This stage includes the development of detailed designs that bring the initial design work into a completed form of specifications. This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the system's logistical, maintenance, and support requirements. The detail design and development stage is responsible for producing the product, process, and material specifications, and may result in substantial changes to the development specification.
Key steps within the detail design and development stage include:
Detailed design
Detailed synthesis
Development of engineering and prototype models
Revision of development specification
Product, process, and material specification
Critical design review
Production and construction
During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications, and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement.
Key steps within the product construction stage include:
Production and/or construction of system components
Acceptance testing
System distribution and operation
Operational testing and evaluation
System assessment
Utilization and support
Once fully deployed, the system is used for its intended operational role and maintained within its operational environment.
Key steps within the utilization and support stage include:
System operation in the user environment
Change management
System modifications for improvement
System assessment
Phase-out and disposal
Effectiveness and efficiency of the system must be continuously evaluated to determine when the product has met its maximum effective lifecycle. Considerations include: Continued existence of operational need, matching between operational requirements and system performance, feasibility of system phase-out versus maintenance, and availability of alternative systems.
Phases
System investigation
During this step, current priorities that would be affected and how they should be handled are considered. A feasibility study determines whether creating a new or improved system is appropriate. This helps to estimate costs, benefits, resource requirements, and specific user needs.
The feasibility study should address operational, financial, technical, human factors, and legal/political concerns.
Analysis
The goal of analysis is to determine where the problem is. This step involves decomposing the system into pieces, analyzing project goals, breaking down what needs to be created, and engaging users to define requirements.
Design
In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. Modular design reduces complexity and allows the outputs to describe the system as a collection of subsystems.
The design stage takes as its input the requirements already defined. For each requirement, a set of design elements is produced.
Design documents typically include functional hierarchy diagrams, screen layouts, business rules, process diagrams, pseudo-code, and a complete data model with a data dictionary. These elements describe the system in sufficient detail that developers and engineers can develop and deliver the system with minimal additional input.
Testing
The code is tested at various levels in software testing. Unit, system, and user acceptance tests are typically performed. Many approaches to testing have been adopted.
The following types of testing may be relevant:
Path testing
Data set testing
Unit testing
System testing
Integration testing
Black-box testing
White-box testing
Regression testing
Automation testing
User acceptance testing
Software performance testing
Training and transition
Once a system has been stabilized through testing, SDLC ensures that proper training is prepared and performed before transitioning the system to support staff and end users. Training usually covers operational training for support staff as well as end-user training.
After training, systems engineers and developers transition the system to its production environment.
Operations and maintenance
Maintenance includes changes, fixes, and enhancements.
Evaluation
The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements.
Life cycle
Management and control
SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives while executing projects. Control objectives are clear statements of the desired result or purpose and should be defined and monitored throughout a project. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.
To manage and control a substantial SDLC initiative, a work breakdown structure (WBS) captures and schedules the work. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The project manager chooses a WBS format that best describes the project.
The diagram shows that coverage spans numerous phases of the SDLC, while the associated management control domains (MCDs) show mappings to SDLC phases. For example, analysis and design are primarily performed as part of the acquisition and implementation domain, and system build and prototype are primarily performed as part of delivery and support.
Work breakdown structured organization
The upper section of the WBS provides an overview of the project scope and timeline. It should also summarize the major phases and milestones. The middle section is based on the SDLC phases. WBS elements consist of milestones and tasks to be completed, rather than activities to be undertaken, and have a deadline. Each task has a measurable output (e.g., an analysis document). A WBS task may rely on one or more activities (e.g., coding). Parts of the project needing support from contractors should have a statement of work (SOW). The development of an SOW does not occur during a specific phase of the SDLC, but is developed to include the work from the SDLC process that may be conducted by contractors.
Baselines
Baselines are established after four of the five phases of the SDLC, and are critical to the iterative nature of the model. Baselines become milestones.
functional baseline: established after the conceptual design phase.
allocated baseline: established after the preliminary design phase.
product baseline: established after the detail design and development phase.
updated product baseline: established after the production construction phase.
Alternative methodologies
Alternative software development methods to systems development life cycle are:
Software prototyping
Joint applications development (JAD)
Rapid application development (RAD)
Extreme programming (XP);
Open-source development
End-user development
Object-oriented programming
Strengths and weaknesses
Fundamentally, SDLC trades flexibility for control by imposing structure. It is more commonly used for large scale projects with many developers.
See also
Application lifecycle management
Decision cycle
IPO model
Software development methodologies
References
Further reading
Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto, McGraw-Hill Ryerson
Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke.
External links
The Agile System Development Lifecycle
Pension Benefit Guaranty Corporation – Information Technology Solutions Lifecycle Methodology
DoD Integrated Framework Chart IFC (front, back)
FSA Life Cycle Framework
HHS Enterprise Performance Life Cycle Framework
The Open Systems Development Life Cycle
System Development Life Cycle Evolution Modeling
Zero Deviation Life Cycle
Integrated Defense AT&L Life Cycle Management Chart, the U.S. DoD form of this concept.
Systems engineering
Computing terminology
Software development process
Software engineering | Systems development life cycle | Technology,Engineering | 3,519 |
26,240,028 | https://en.wikipedia.org/wiki/Heath%20Bunting | Heath Bunting (born 1966) is a British contemporary artist. Based in Bristol, he is a co-founder of the website irational.org, and was one of the early practitioners in the 1990s of Net.art. Bunting's work is based on creating open and democratic systems by modifying communications technologies and social systems. His work often explores the porosity of borders, both in physical space and online. In 1997, his online work Visitors Guide to London was included in the 10th documenta curated by Swiss curator Simon Lamunière.
An activist, he created a dummy site for the European Lab for Network Collision (CERN).
Biography
Born in 1966, Bunting became active in the contemporary art world in the 1980s. In 1994, he planned to open the first cybercafe in London with Ivan Pope; however, they were beaten to it by Cyberia. In 1996, he co-founded the website irational.org with Daniel García Andújar, Rachel Baker, and Minerva Cuevas. It was on this site that Bunting first displayed his internet art works as part of the Net.art project.
Work
Own, Be Owned, or Remain Invisible
Created in 1998, _readme.html is a work of net.art: a simple web page with a white background and light grey text taken from an article about Heath Bunting. A vast majority of the words are hypertext, but not all. As coded for by simple HTML attributes, hyperlinked words turn from grey to black once visited.
In Own, Be Owned or Remain Invisible, Bunting makes use of appropriation. The work utilises an article about Heath Bunting written by James Flint of The Daily Telegraph. Instead of presenting the article in its traditional form, Bunting links nearly every word to [insert word].com and alters the color-scheme of the document as per his white-on-white period. Some of the linked domains may have been owned in the past twelve years, but are no longer owned, thereby touching on the transience of Internet ownership. Bunting's work also shows the range of banal or absurd domain names that companies have purchased. Not all words in the article are hyperlinked, however. Through these unclaimed words he spells out how the article touches on his own identity.
King's Cross Phone-In
On Friday, 5 August 1994, Bunting orchestrated a scheme that involved many people calling public phones in and around London King's Cross railway station. On his then-website Cybercafe.org, founded in 1992, Bunting posted the phone numbers of all of the public phones and encouraged his followers to do one of the following: call in a pattern, call at a certain time, call and speak to a stranger, or show up and pick up the telephone. Bunting used his website as an informative source to let his readers know how to partake in his project.
When 5 August arrived, Bunting went to King's Cross to pick up telephone calls. Many people called in and he witnessed as casual passers-by engaged in conversations with strangers who were perhaps halfway across the world. The project brought people together, if only for a few brief moments, to create a network through the communication medium of telephones. In Digital Humanities, a class by Professor Michael Shanks at Stanford University, the project is described: "the train station was transformed into an art platform and the unsuspecting commuters and workers in the area became the audience." This is an early example of a flash mob and instigating action through a then-passive medium. Bunting's work has been compared to the work of Allan Kaprow, one of the pioneers in performance art.
Pirate Listening Station
Between 1999 and 2009, Bunting hosted the Pirate Listening Station which allowed visitors to the site to tune and listen in to London pirate radio stations. It is an early example of an online listening station.
BorderXing
Commissioned by the Tate Gallery and the Luxembourg-based Fondation Musée d'Art Moderne Grand-Duc Jean (Mudam) in 2002, BorderXing details ways to cross international borders throughout Europe without legal documentation. It provides video, photography, maps, and necessary materials on the project website. It demonstrates how to succeed without being located by dogs, and when not to run to avoid being shot. There is even a supplemental botanical guide so you can avoid poisonous plants. Bunting reveals the restrictions on movement set in place by governments and bureaucracies. The project shows not only the restriction of physical borders, but also the concept that the internet is not a borderless space. Bunting limits access to the project: you must be at a designated location to access the site, or apply to be an authorized client.
The Status Project
Commenced in 2004, The Status Project taps into the themes of identity, hierarchy, and power.
References
Further reading
External links
irational.org Website
New media artists
Net.artists
Public art
Artists from Bristol
1966 births
Living people | Heath Bunting | Technology | 1,026 |
686,036 | https://en.wikipedia.org/wiki/Wave%20vector | In physics, a wave vector (or wavevector) is a vector used in describing a wave, with a typical unit being cycle per metre. It has a magnitude and direction. Its magnitude is the wavenumber of the wave (inversely proportional to the wavelength), and its direction is perpendicular to the wavefront. In isotropic media, this is also the direction of wave propagation.
A closely related vector is the angular wave vector (or angular wavevector), with a typical unit being radian per metre. The wave vector and angular wave vector are related by a fixed constant of proportionality, 2π radians per cycle.
It is common in several fields of physics to refer to the angular wave vector simply as the wave vector, in contrast to, for example, crystallography. It is also common to use the symbol k for whichever is in use.
In the context of special relativity, a wave four-vector can be defined, combining the (angular) wave vector and (angular) frequency.
Definition
The terms wave vector and angular wave vector have distinct meanings. Here, the wave vector is denoted by ν̃ and the wavenumber by its magnitude ν̃ = |ν̃|. The angular wave vector is denoted by k and the angular wavenumber by k = |k|. These are related by k = 2πν̃.
A sinusoidal traveling wave follows the equation
ψ(r, t) = A cos(k · r − ωt + φ),
where:
r is position,
t is time,
ψ is a function of r and t describing the disturbance describing the wave (for example, for an ocean wave, ψ would be the excess height of the water, or for a sound wave, ψ would be the excess air pressure),
A is the amplitude of the wave (the peak magnitude of the oscillation),
φ is a phase offset,
ω is the (temporal) angular frequency of the wave, describing how many radians it traverses per unit of time, and related to the period T by the equation ω = 2π/T,
k is the angular wave vector of the wave, describing how many radians it traverses per unit of distance, and related to the wavelength λ by the equation |k| = 2π/λ.
The equivalent equation using the wave vector and frequency is
ψ(r, t) = A cos(2π(ν̃ · r − νt) + φ),
where:
ν is the frequency,
ν̃ is the wave vector.
Direction of the wave vector
The direction in which the wave vector points must be distinguished from the "direction of wave propagation". The "direction of wave propagation" is the direction of a wave's energy flow, and the direction that a small wave packet will move, i.e. the direction of the group velocity. For light waves in vacuum, this is also the direction of the Poynting vector. On the other hand, the wave vector points in the direction of phase velocity. In other words, the wave vector points in the normal direction to the surfaces of constant phase, also called wavefronts.
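As a compact statement of the distinction drawn here (standard definitions, with ω(k) denoting the medium's dispersion relation; none of this notation comes from the article itself):

\[ \mathbf{v}_{g} = \nabla_{\mathbf{k}}\,\omega(\mathbf{k}), \qquad \mathbf{v}_{p} = \frac{\omega}{|\mathbf{k}|}\,\hat{\mathbf{k}}, \]

so the group velocity can point away from the wave vector whenever ω depends on the direction of k, as in an anisotropic medium, while the phase velocity is always along the wave vector.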
In a lossless isotropic medium such as air, any gas, any liquid, amorphous solids (such as glass), and cubic crystals, the direction of the wavevector is the same as the direction of wave propagation. If the medium is anisotropic, the wave vector in general points in directions other than that of the wave propagation. The wave vector is always perpendicular to surfaces of constant phase.
For example, when a wave travels through an anisotropic medium, such as light waves through an asymmetric crystal or sound waves through a sedimentary rock, the wave vector may not point exactly in the direction of wave propagation.
In solid-state physics
In solid-state physics, the "wavevector" (also called k-vector) of an electron or hole in a crystal is the wavevector of its quantum-mechanical wavefunction. These electron waves are not ordinary sinusoidal waves, but they do have a kind of envelope function which is sinusoidal, and the wavevector is defined via that envelope wave, usually using the "physics definition". See Bloch's theorem for further details.
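For reference (a standard statement of Bloch's theorem rather than anything quoted from this article), the electron wavefunction in a crystal can be written as

\[ \psi_{n\mathbf{k}}(\mathbf{r}) = e^{\,i\mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}), \]

where u_{nk} has the periodicity of the lattice and n is a band index; the plane-wave factor is the sinusoidal envelope referred to above, and k is the crystal wavevector.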
In special relativity
A moving wave surface in special relativity may be regarded as a hypersurface (a 3D subspace) in spacetime, formed by all the events passed by the wave surface. A wavetrain (denoted by some variable ) can be regarded as a one-parameter family of such hypersurfaces in spacetime. This variable is a scalar function of position in spacetime. The derivative of this scalar is a vector that characterizes the wave, the four-wavevector.
The four-wavevector is a wave four-vector that is defined, in Minkowski coordinates, as K^μ = (ω/c, k),
where ω/c (the angular frequency divided by the speed of light) is the temporal component, and the wavenumber vector k is the spatial component.
Alternately, the wavenumber k can be written as the angular frequency ω divided by the phase velocity v_p, or in terms of the inverse period and inverse wavelength as k = ω/v_p = 2π/λ.
When written out explicitly its contravariant and covariant forms are K^μ = (ω/c, k_x, k_y, k_z) and K_μ = (ω/c, −k_x, −k_y, −k_z).
In general, the Lorentz scalar magnitude of the wave four-vector is K^μ K_μ = (ω/c)² − |k|² = (ω_o/c)² = (m_o c/ħ)², where ω_o is the proper (rest) angular frequency and m_o the rest mass.
The four-wavevector is null for massless (photonic) particles, where the rest mass m_o = 0.
An example of a null four-wavevector would be a beam of coherent, monochromatic light, which has phase-velocity v_p = c (for the light-like/null case),
and which would have the following relation between the frequency and the magnitude of the spatial part of the four-wavevector: ω = c|k| (for the light-like/null case).
The four-wavevector is related to the four-momentum as follows: P^μ = ħK^μ.
The four-wavevector is related to the four-frequency N^μ = (ν, ν n̂) as follows: K^μ = (2π/c) N^μ.
The four-wavevector is related to the four-velocity U^μ as follows: K^μ = (ω_o/c²) U^μ, where ω_o is the rest angular frequency.
Lorentz transformation
Taking the Lorentz transformation of the four-wavevector is one way to derive the relativistic Doppler effect. The Lorentz matrix is defined as
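(The explicit matrix does not survive in this extract; the standard boost along the direction of relative motion, taken here as the x-axis, with β = v/c and γ = 1/√(1 − β²), is)

\[
\Lambda^{\mu}{}_{\nu} =
\begin{pmatrix}
\gamma & -\beta\gamma & 0 & 0\\
-\beta\gamma & \gamma & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}.
\]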
In the situation where light is being emitted by a fast-moving source and one would like to know the frequency of light detected in an earth (lab) frame, we would apply the Lorentz transformation as follows. Note that the source is in frame S_s and earth is in the observing frame, S_obs.
Applying the Lorentz transformation to the wave vector gives K^μ_s = Λ^μ_ν K^ν_obs,
and choosing just to look at the μ = 0 component results in ω_s/c = γ(ω_obs/c) − γβ k¹_obs = γ(ω_obs/c)(1 − β cos θ),
where cos θ is the direction cosine of k¹_obs with respect to the direction of the source's motion.
So
ω_obs = ω_s / (γ(1 − β cos θ)).
Source moving away (redshift)
As an example, to apply this to a situation where the source is moving directly away from the observer (cos θ = −1), this becomes: ω_obs = ω_s / (γ(1 + β)) = ω_s √((1 − β)/(1 + β)).
Source moving towards (blueshift)
To apply this to a situation where the source is moving straight towards the observer (cos θ = 1), this becomes: ω_obs = ω_s / (γ(1 − β)) = ω_s √((1 + β)/(1 − β)).
Source moving tangentially (transverse Doppler effect)
To apply this to a situation where the source is moving transversely with respect to the observer (cos θ = 0), this becomes: ω_obs = ω_s / γ.
See also
Plane-wave expansion
Plane of incidence
References
Further reading
Wave mechanics
Vector physical quantities | Wave vector | Physics,Mathematics | 1,395 |
1,009,445 | https://en.wikipedia.org/wiki/European%20Launcher%20Development%20Organisation | The European Launcher Development Organisation (ELDO) is a former European space research organisation. It was first developed in order to establish a satellite launch vehicle for Europe. The three-stage rocket developed was named Europa, after the mythical Greek goddess. Overall, there were 10 launches that occurred under ELDO's funding. The organisation consisted of Belgium, Britain, France, Germany, Italy, and the Netherlands. Australia was an associate member of the organisation.
Initially, the launch site was in Woomera, Australia, but was later moved to the French site Kourou, in French Guiana. The programme was created to replace the Blue Streak Missile Programme after its cancellation in 1960. In 1974, after an unsuccessful satellite launch, the programme was merged with the European Space Research Organisation to form the European Space Agency.
Origins
After the failure to launch Britain's Blue Streak Missile, Britain wished to use its finished space launch parts in order to cut losses. In 1961, Britain and France announced that they would be working together on a launcher that would be capable of sending a one-ton satellite into space. This cooperation was later drafted into the Convention of the European Launcher Development Organisation, which Italy, Belgium, West Germany, the Netherlands and Australia would join. Australia provided a sparsely populated site for missile launcher testing and development at Woomera, South Australia. The original intent of this organisation was to develop a space programme exclusively for Europe, excluding the UN or any outside country.
History
The initial plans for the rocket were proposed in 1962. The rocket created was called the ELDO-A, later renamed Europa-1. It measured in length and weighed more than 110 tons. Europa-1 was planned to put a payload of – into a circular orbit above earth. The three stages consisted of the Blue Streak stage, the French Coralie stage, and the German stage. The first stage, the Blue Streak stage, was to fire for 160 seconds after launch. The second stage, the French Coralie stage, fired for the following 103 seconds. The third and final stage, the German stage, fired for an extra 361 seconds to launch the rocket into Earth's lower orbit. The first stage was a development of Blue Streak and was built in Stevenage, Hertfordshire U.K.
In June 1964, the first stage, F1, had its first launch at Woomera, South Australia. By the middle of 1966, ELDO decided to change Europa-1 from a three-stage launcher into a four-stage launcher that was capable of placing a satellite into geostationary transfer orbit. Following this decision, in 1969, many unsuccessful launches of Europa-1 and the resignation of Britain and Italy prompted a reconsideration of ideas. In 1970, ELDO was forced to cancel the Europa-1 programme.
By late 1970, the plans for Europa-2 were created. Europa-2 was a similarly designed rocket with an extra stage added in. The funding for Europa-2 was supplied 90% by France and Germany. On November 5, 1971, Europa-2 was launched for the first time, but unsuccessfully. The failure of the rocket led to the consideration of a Europa-3 rocket design. However, Europa-3 was never created and the lack of funding prompted the merging of the European Launcher Development Organisation and the European Space Research Organisation to form the European Space Agency.
Australian downrange tracker
The Gove Down Range Guidance and Telemetry Station was built at Gulkula on the Gove Peninsula in the Northern Territory of Australia in the 1960s, to track the downrange path of rockets launched from the RAAF Woomera Range Complex in South Australia, with its state-of-the-art technology operated mainly by Belgian scientists. The satellite tracker was moved back up to the Gove Peninsula in September 2020 by the local historical society, after spending years in storage at Woomera.
Launches
Overall, the European Launcher Development Organisation planned eleven launches, only ten of which actually occurred. Of the nine actual launches, four were successful. Four other launches were unsuccessful and there was one launch that was terminated. The first launch, F-1, occurred on 5 June 1964; it tested only the first stage and was successful. F-2 and F-3, which occurred on 20 October 1964 and 22 March 1965 respectively, again tested only the first stage and were both successful. The fourth launch, F-4, occurred on 24 May 1966. This launch tested only the first stage of the rocket with dummy second and third stages. The flight was terminated 136 seconds into flight. The fifth launch, F-5, took place on 13 November 1966. This launch aimed to complete the same task as F-4 and was successful. The sixth launch, F-6/1, took place on 4 August 1967. This launch had an active first and second stage with a dummy third stage and satellite. On this launch, the second stage did not ignite and the flight was unsuccessful. The seventh launch, F-6/2, took place on 5 December 1967. It had the same objective as F-6/1, but the first and second stages did not separate. The eighth launch, F-7, took place on 30 November 1968. On this launch, all three stages were active and a satellite was fitted. After the second stage ignited, the third stage exploded. The ninth launch, F-8, occurred on 3 July 1969 and had the same objective as F-7, but ended the same way. The tenth launch, F-9, occurred on 12 June 1970 and had all stages active with a satellite fitted. In this launch, all stages performed successfully, yet the satellite failed to reach orbit. After this launch, ELDO began losing funds and members and was eventually merged with ESRO to create the ESA.
After F-10 was cancelled, it was decided that the Woomera launch site was not suitable for putting satellites into geosynchronous orbit. In 1966, it was decided to move to the French site of Kourou in South America. France went on to launch F-11, the first flight of Europa-2, from Kourou. However, static discharge from the fairings travelled down to the third-stage sequencer and inertial navigation computer, causing them to hang and malfunction, prompting the range safety officer to destroy the vehicle. The launch of F-12 was postponed whilst a project review was carried out, which led to the decision to abandon the Europa design.
References
Space organizations
European Space Agency | European Launcher Development Organisation | Astronomy | 1,333 |
2,910,116 | https://en.wikipedia.org/wiki/14%20Cancri | 14 Cancri is a star in the northern zodiac constellation of Cancer. It can be referred to as ψ Cancri, very occasionally as ψ2 Cancri, to distinguish it from 13 Cancri which is sometimes called ψ1 Cancri. It is just barely visible to the naked eye, having an apparent visual magnitude of +5.73. Based upon an annual parallax shift of 24.18 mas as seen from Earth, it is located 135 light years from the Sun. It may be a member of the Wolf 630 moving group of stars.
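The quoted distance follows directly from the parallax. A minimal sketch of the conversion (the parallax value is copied from the paragraph above; the light-years-per-parsec factor is the standard approximate constant):

```python
# distance [parsec] = 1 / parallax [arcsec] = 1000 / parallax [mas]
parallax_mas = 24.18        # annual parallax of 14 Cancri, milliarcseconds
LY_PER_PARSEC = 3.2616      # light years per parsec (approximate)

distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * LY_PER_PARSEC
print(f"{distance_pc:.1f} pc  ~  {distance_ly:.0f} ly")   # about 41.4 pc, i.e. ~135 ly
```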
This object has a stellar classification of G7 V, which would suggest it is a G-type main-sequence star. However, Jofré et al. (2015) consider it to be a more evolved subgiant star due to a surface gravity of log g = 3.87. As such, it has an estimated 1.5 times the mass of the Sun and 3.2 times the Sun's radius. The star is 2.4 billion years old with what appears to be a leisurely rotation rate, judging by a projected rotational velocity of 0.98 km/s. It is radiating eight times the Sun's luminosity from its photosphere at an effective temperature of 5,311 K.
References
G-type main-sequence stars
G-type subgiants
Cancri, Psi2
Cancer (constellation)
BD+25 1865
Cancri, 14
067767
040023
3191 | 14 Cancri | Astronomy | 310 |
8,292,324 | https://en.wikipedia.org/wiki/Solid%20solution%20strengthening | In metallurgy, solid solution strengthening is a type of alloying that can be used to improve the strength of a pure metal. The technique works by adding atoms of one element (the alloying element) to the crystalline lattice of another element (the base metal), forming a solid solution. The local nonuniformity in the lattice due to the alloying element makes plastic deformation more difficult by impeding dislocation motion through stress fields. In contrast, alloying beyond the solubility limit can form a second phase, leading to strengthening via other mechanisms (e.g. the precipitation of intermetallic compounds).
Types
Depending on the size of the alloying element, a substitutional solid solution or an interstitial solid solution can form. In both cases, atoms are visualised as rigid spheres where the overall crystal structure is essentially unchanged. The use of crystal geometry to predict atomic solubility is summarized in the Hume-Rothery rules and Pauling's rules.
Substitutional solid solution strengthening occurs when the solute atom is large enough that it can replace solvent atoms in their lattice positions. Some alloying elements are only soluble in small amounts, whereas some solvent and solute pairs form a solution over the whole range of binary compositions. Generally, higher solubility is seen when solvent and solute atoms are similar in atomic size (15% according to the Hume-Rothery rules) and adopt the same crystal structure in their pure form. Examples of completely miscible binary systems are Cu-Ni and the Ag-Au face-centered cubic (FCC) binary systems, and the Mo-W body-centered cubic (BCC) binary system.
Interstitial solid solutions form when the solute atom is small enough (radii up to 57% the radii of the parent atoms) to fit at interstitial sites between the solvent atoms. The atoms crowd into the interstitial sites, causing the bonds of the solvent atoms to compress and thus deform (this rationale can be explained with Pauling's rules). Elements commonly used to form interstitial solid solutions include H, Li, Na, N, C, and O. Carbon in iron (steel) is one example of interstitial solid solution.
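As a rough illustration of the two size criteria above, the sketch below classifies a solute/solvent pair by radius ratio alone. The radii are approximate illustrative values, and real solubility also depends on crystal structure, valency, and electronegativity, which this sketch ignores.

```python
def likely_solution_type(r_solute: float, r_solvent: float) -> str:
    """Size-only screen: a radius ratio of at most ~57% suggests the solute
    can sit in interstitial sites, while a size mismatch within ~15% (the
    Hume-Rothery size rule) favours a substitutional solution."""
    ratio = r_solute / r_solvent
    if ratio <= 0.57:
        return "interstitial solid solution possible"
    if abs(1.0 - ratio) <= 0.15:
        return "substitutional solid solution favoured"
    return "limited solubility expected"

# Approximate atomic radii in picometres (illustrative values only):
print(likely_solution_type(25, 137))    # H in Pd  -> interstitial
print(likely_solution_type(125, 128))   # Ni in Cu -> substitutional
print(likely_solution_type(175, 143))   # Pb in Al -> limited solubility
```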
Mechanism
The strength of a material is dependent on how easily dislocations in its crystal lattice can be propagated. These dislocations create stress fields within the material depending on their character. When solute atoms are introduced, local stress fields are formed that interact with those of the dislocations, impeding their motion and causing an increase in the yield stress of the material, which means an increase in strength of the material. This gain is a result of both lattice distortion and the modulus effect.
When solute and solvent atoms differ in size, local stress fields are created that can attract or repel dislocations in their vicinity. This is known as the size effect. By relieving tensile or compressive strain in the lattice, the solute size mismatch can put the dislocation in a lower energy state. In substitutional solid solutions, these stress fields are spherically symmetric, meaning they have no shear stress component. As such, substitutional solute atoms do not interact with the shear stress fields characteristic of screw dislocations. Conversely, in interstitial solid solutions, solute atoms cause a tetragonal distortion, generating a shear field that can interact with edge, screw, and mixed dislocations. The attraction or repulsion of the dislocation to the solute atom depends on whether the atom sits above or below the slip plane. For example, consider an edge dislocation encountering a smaller solute atom above its slip plane. In this case, the interaction energy is negative, resulting in attraction of the dislocation to the solute. This is due to the reduced dislocation energy by the compressed volume lying above the dislocation core. If the solute atom were positioned below the slip plane, the dislocation would be repelled by the solute. However, the overall interaction energy between an edge dislocation and a smaller solute is negative because the dislocation spends more time at sites with attractive energy. This is also true for solute atom with size greater than the solvent atom. Thus, the interaction energy dictated by the size effect is generally negative.
The elastic modulus of the solute atom can also determine the extent of strengthening. For a “soft” solute with elastic modulus lower than that of the solvent, the interaction energy due to modulus mismatch (U_modulus) is negative, which reinforces the size interaction energy (U_size). In contrast, U_modulus is positive for a “hard” solute, which results in a lower total interaction energy than for a soft atom, even though the interaction force is negative (attractive) in both cases as the dislocation approaches the solute. The maximum force (F_max) necessary to tear the dislocation away from the lowest energy state (i.e. the solute atom) is greater for the soft solute than the hard one. As a result, a soft solute will strengthen a crystal more than a hard solute due to the synergistic strengthening from combining both size and modulus effects.
The elastic interaction effects (i.e. size and modulus effects) dominate solid-solution strengthening for most crystalline materials. However, other effects, including charge and stacking fault effects, may also play a role. For ionic solids where electrostatic interaction dictates bond strength, the charge effect is also important. For example, the addition of divalent ions to a monovalent material may strengthen the electrostatic interaction between the solute and the charged matrix atoms that comprise a dislocation. However, this strengthening occurs to a lesser extent than the elastic strengthening effects. For materials containing a higher density of stacking faults, solute atoms may interact with the stacking faults either attractively or repulsively. This lowers the stacking fault energy, leading to repulsion of the partial dislocations, which thus makes the material stronger.
Surface carburizing, or case hardening, is one example of solid solution strengthening in which the density of solute carbon atoms is increased close to the surface of the steel, resulting in a gradient of carbon atoms throughout the material. This provides superior mechanical properties to the surface of the steel without having to use a higher-cost material for the component.
Governing equations
Solid solution strengthening increases the yield strength of the material by increasing the shear stress, $\Delta\tau$, required to move dislocations:
$\Delta\tau = G\,b\,\sqrt{c}\,\epsilon^{3/2}$
where c is the concentration of the solute atoms, G is the shear modulus, b is the magnitude of the Burgers vector, and $\epsilon$ is the lattice strain due to the solute. This is composed of two terms, one describing lattice distortion and the other the local modulus change.
$\epsilon = \left| \epsilon_G - \beta\,\epsilon_a \right|$
Here, $\epsilon_G$ is the term that captures the local modulus change, $\beta$ is a constant dependent on the solute atoms, and $\epsilon_a$ is the lattice distortion term.
The lattice distortion term can be described as:
$\epsilon_a = \frac{1}{a}\frac{da}{dc}$, where a is the lattice parameter of the material.
Meanwhile, the local modulus change is captured in the following expression:
$\epsilon_G = \frac{1}{G}\frac{dG}{dc}$, where G is the shear modulus of the solute material.
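A minimal numerical sketch of these relations is given below. Every input value, including the weighting constant beta, is an arbitrary illustrative assumption rather than data for any particular alloy system.

```python
import math

G = 45e9        # shear modulus of the matrix, Pa (assumed)
b = 0.25e-9     # magnitude of the Burgers vector, m (assumed)
c = 0.02        # solute concentration, atomic fraction (assumed)
eps_a = 0.06    # lattice distortion term, (1/a) * da/dc (assumed)
eps_G = 0.35    # local modulus change term, (1/G) * dG/dc (assumed)
beta = 16.0     # solute-dependent constant weighting the two terms (assumed)

eps = abs(eps_G - beta * eps_a)               # combined lattice strain parameter
delta_tau = G * b * math.sqrt(c) * eps ** 1.5  # relation quoted above

print(f"eps = {eps:.2f}")
print(f"delta_tau = {delta_tau:.3e} (units follow directly from G*b)")
```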
Implications
In order to achieve noticeable material strengthening via solution strengthening, one should alloy with solutes of higher shear modulus, hence increasing the local shear modulus in the material. In addition, one should alloy with elements of different equilibrium lattice constants. The greater the difference in lattice parameter, the higher the local stress fields introduced by alloying.
Alloying with elements of higher shear modulus or of very different lattice parameters will increase the stiffness and introduce local stress fields respectively. In either case, the dislocation propagation will be hindered at these sites, impeding plasticity and increasing yield strength proportionally with solute concentration.
Solid solution strengthening depends on:
Concentration of solute atoms
Shear modulus of solute atoms
Size of solute atoms
Valency of solute atoms (for ionic materials)
For many common alloys, rough experimental fits can be found for the addition in strengthening provided in the form of:
where is a solid solution strengthening coefficient and is the concentration of solute in atomic fractions.
Nevertheless, one should not add so much solute as to precipitate a new phase. This occurs if the concentration of the solute reaches a certain critical point given by the binary system phase diagram. This critical concentration therefore puts a limit to the amount of solid solution strengthening that can be achieved with a given material.
Examples
Aluminum alloys
In aluminum alloys, solid solution strengthening occurs when magnesium and manganese are added to the aluminum matrix. Commercially, Mn is added in the AA3xxx series and Mg in the AA5xxx series. Mn addition to aluminum alloys assists in the recrystallization and recovery of the alloy, which influences the grain size as well. Both of these systems are used in low to medium-strength applications, with appreciable formability and corrosion resistance.
Nickel-based superalloys
Many nickel-based superalloys depend on solid solution as a strengthening mechanism. The most popular example is the Inconel family, where many of these alloys contain chromium and iron and some other additions of cobalt, molybdenum, niobium, and titanium. The nickel-based superalloys are well known for their intensive use in the industrial field especially the aeronautical and the aerospace industry due to their superior mechanical and corrosion properties at high temperatures.
An example of the use of nickel-based superalloys in the industrial field is turbine blades. One blade alloy used in practice, MAR-M200, is solid solution strengthened by chromium, tungsten and cobalt in the matrix and is also precipitation hardened by carbide and boride precipitates at the grain boundaries. A key factor for these turbine blades is the grain size: an increase in grain size can lead to a significant reduction in the strain rate. In MAR-M200, for example, increasing the grain size from 100 μm to 10 mm markedly reduces the creep strain rate.
This reduced strain rate is extremely important for turbine blade operation because they undergo significant mechanical stress and high temperatures which can lead to the onset of creep deformation. Therefore, the precise control of grain size in nickel-based superalloys is key to creep resistance and mechanical reliability and longevity. Some ways to control the grain size lie in the manufacturing techniques like directional solidification and single crystal casting.
Stainless steel
Stainless steel is one of the most commonly used metals in many industries. Solid solution strengthening of steel is one of the mechanisms used to enhance the properties of the alloy. Austenitic steels mainly contain chromium, nickel, molybdenum, and manganese. It is used mostly for cookware, kitchen equipment, and in marine applications for its good corrosion properties in saline environments.
Titanium alloys
Titanium and titanium alloys see wide usage in aerospace, medical, and maritime applications. The best-known titanium alloy that adopts solid solution strengthening is Ti-6Al-4V. The addition of oxygen to pure Ti also provides solid solution strengthening, while adding it to the Ti-6Al-4V alloy does not have the same influence.
Copper alloys
Bronze and brass are both copper alloys that are solid solution strengthened. Bronze is the result of adding about 12% tin to copper, while brass is the result of adding about 34% zinc to copper. Both of these alloys are used in coin production, ship hardware, and art.
See also
Strength of materials
Strengthening mechanisms of materials
References
External links
The Strengthening of Iron and Steel
Metallurgy
Strengthening mechanisms of materials
de:Mischkristallverfestigung
ru:Диффузионное насыщение металлами | Solid solution strengthening | Chemistry,Materials_science,Engineering | 2,488 |
57,682,839 | https://en.wikipedia.org/wiki/Ingle%20Brothers%20Broomcorn%20Warehouse | The Ingle Brothers Broomcorn Warehouse, in Shattuck, Oklahoma, was listed on the National Register of Historic Places in 2009. It is located at 320 NW 1st St., at Oklahoma Avenue, in Shattuck.
It was built in 1909. A photo shows it is a gable-front brick building with a stepped gable.
References
Warehouses in the United States
National Register of Historic Places in Ellis County, Oklahoma
Stepped gables | Ingle Brothers Broomcorn Warehouse | Engineering | 89 |
52,227 | https://en.wikipedia.org/wiki/SourceForge | SourceForge is a web service founded by Geoffrey B. Jeffery, Tim Perdue, and Drew Streib in November 1999. The software provides a centralized online platform for managing and hosting open-source software projects, and a directory for comparing and reviewing business software that lists over 101,600 business software titles. It provides source code repository hosting, bug tracking, mirroring of downloads for load balancing, a wiki for documentation, developer and user mailing lists, user-support forums, user-written reviews and ratings, a news bulletin, micro-blog for publishing project updates, and other features.
SourceForge was one of the first to offer this service free of charge to open-source projects. Since 2012, the website has run on Apache Allura software. SourceForge offers free hosting and free access to tools for developers of free and open-source software.
, the SourceForge repository claimed to host more than 502,000 projects and had more than 3.7 million registered users.
Concept
SourceForge is a web-based source code repository. It acts as a centralized location for free and open-source software projects. It was the first to offer this service for free to open-source projects. Project developers have access to centralized storage and tools for managing projects, though it is best known for providing revision control systems such as CVS, SVN, Bazaar, Git and Mercurial. Major features (amongst others) include project wikis, metrics and analysis, access to a MySQL database, and unique sub-domain URLs (in the form http://project-name.sourceforge.net).
The vast number of users at SourceForge.net (over three million as of 2013) exposes prominent projects to a variety of developers and can create a positive feedback loop. As a project's activity rises, SourceForge.net's internal ranking system makes it more visible to other developers through SourceForge directory and Enterprise Directory. Given that many open-source projects fail due to lack of developer support, exposure to such a large community of developers can continually breathe new life into a project.
Revenue model
SourceForge's traditional revenue model is through advertising banner sales on their site. In 2006, SourceForge Inc. reported quarterly takings of US$6.5 million. In 2009, SourceForge reported a gross quarterly income of US$23 million through media and e-commerce streams. In 2011, a revenue of US$20 million was reported for the combined value of the SourceForge, slashdot and freecode holdings, prior to SourceForge's acquisition.
Since 2013, additional revenue generation schemes, such as bundleware models, have been trialled, with the goal of increasing SourceForge's revenue. The result has in some cases been the appearance of malware bundled with SourceForge downloads. On February 9, 2016, SourceForge announced they had eliminated their DevShare program practice of bundling installers with project downloads.
Negative community reactions to the partnership program led to a review of the program, which was nonetheless opened up to all SourceForge projects on February 7, 2014. The program was canceled by new owners BIZX, LLC on February 9, 2016.
On May 17, 2016, they announced that it would scan all projects for malware and display warnings on downloads.
History
SourceForge, founded in 1999 by VA Software, was the first provider of a centralized location for free and open-source software developers to control and manage software development and offering this service without charge. The software running the SourceForge site was released as free software in January 2000 and was later named SourceForge Alexandria. The last release under a free license was made in November 2001. After the dot-com bubble, SourceForge was later powered by the proprietary SourceForge Enterprise Edition, a separate product re-written in Java which was marketed for offshore outsourcing.
SourceForge has been temporarily banned in China three times: in September 2002, in July 2008 (for about a month) and on August 6, 2012 (for several days).
In November 2008, SourceForge was sued by the French collection society Société civile des Producteurs de Phonogrammes en France (SPPF) for hosting downloads of the file sharing application Shareaza.
In 2009, SourceForge announced a new site platform known as Allura, which would be an extensible, open source platform licensed under the Apache License, utilizing components such as Python and MongoDB, and offering REST APIs. In June 2012, the Allura project was donated to the Apache Software Foundation as Apache Allura.
In September 2012, SourceForge, Slashdot, and Freecode were acquired from Geeknet by the online job site Dice.com for $20 million, and incorporated into a subsidiary known as Slashdot Media. In July 2015, Dice announced that it planned to sell SourceForge and Slashdot, and, in January 2016, the two sites were sold to the San Diego–based BIZX, LLC for an undisclosed amount. In December 2019, BIZX rebranded as Slashdot Media.
On September 26, 2012, it was reported that attackers had compromised a SourceForge mirror, and modified a download of phpMyAdmin to add security exploits.
Adware controversy
In July 2013, SourceForge announced that it would provide project owners with an optional feature called DevShare, which places closed-source ad-supported content into the binary installers and gives the project part of the ad revenue. Opinions of this new feature varied; some complained about users not being as aware of what they are getting or being able to trust the downloaded content, whereas others saw it as a reasonably harmless option that keeps individual projects and users in control.
In November 2013, GIMP, a free image manipulation program, removed its download from SourceForge, citing misleading download buttons that potentially confuse customers as well as SourceForge's own Windows installer, which bundles potentially unwanted programs with GIMP. In a statement, GIMP called SourceForge a "once useful and trustworthy place to develop and host FLOSS applications" that now faces "a problem with the ads they allow on their sites".
In May 2015, SourceForge took control of pages for five projects that had migrated to other hosting sites and replaced the project downloads with adware-laden downloads, including GIMP. This came despite SourceForge's commitment in November 2013 to never bundle adware with project downloads without developers' consent.
On June 1, 2015, SourceForge claimed that they had stopped coupling "third party offers" with unmaintained SourceForge projects. Since this announcement was made, a number of other developers have reported that their SourceForge projects had been taken over by SourceForge staff accounts (but have not had binaries edited), including nmap and VLC media player.
On June 18, 2015, SourceForge announced that SourceForge-maintained mirrored projects were removed and anticipated the formation of a Community Panel to review their mirroring practices. No such Community Panel ever materialized, but SourceForge discontinued DevShare and the bundling of installers after SourceForge was sold to BizX in early 2016. On May 17, 2016, SourceForge announced that they were now scanning all projects for malware and displaying warnings on projects detected to have malware.
Project of the Month
Since 2002, SourceForge has featured a pair of Projects of the Month, one chosen by its community and the other by its staff, but these have not been updated since December 2020.
Usage
, the SourceForge repository hosted more than 300,000 projects and had more than 3 million registered users, although not all were active. The domain sourceforge.net attracted at least 33 million visitors by August 2009 according to a Compete.com survey.
Country restrictions
In its terms of use, SourceForge states that its services are not available to users in countries on the sanction list of the U.S. Office of Foreign Assets Control (including Cuba, Iran, North Korea, Sudan and Syria). Since 2008 the secure server used for making contributions to the site has blocked access from those countries. In January 2010, the site had blocked all access from those countries, including downloads. Any IP address that appeared to belong to one of those countries could not use the site. By the following month, SourceForge relaxed the restrictions so that individual projects could indicate whether or not SourceForge should block their software from download to those countries. This, however, had been reversed by November 2020 for North Korea and other countries. Crimea has been blocked since February 1, 2015.
See also
Comparison of source-code-hosting facilities
References
External links
"The SourceForge Story", by James Maguire (2007-10-17)
Free software websites
Geeknet
Internet properties established in 1999
Internet services supporting OpenID
Open-source software hosting facilities | SourceForge | Technology | 1,841 |
11,231,804 | https://en.wikipedia.org/wiki/Acetonitrile%20%28data%20page%29 | This page provides supplementary chemical data on acetonitrile.
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source and follow its directions.
SIRI
Fisher Scientific.
Structure and properties
Thermodynamic properties
Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed. The "(s)" notation indicates temperature of solid/vapor equilibrium. Otherwise the data is temperature of liquid/vapor equilibrium.
Distillation data
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Acetonitrile (data page) | Chemistry | 133 |
48,636,729 | https://en.wikipedia.org/wiki/Oil%20print%20process | The oil print process is a photographic printmaking process that dates to the mid-19th century. Oil prints are made on paper on which a thick gelatin layer has been sensitized to light using dichromate salts. After the paper is exposed to light through a negative, the gelatin emulsion is treated in such a way that highly exposed areas take up an oil-based paint, forming the photographic image.
A significant drawback to the oil print process is that it requires the negative to be the same size as the final print because the medium is not sensitive enough to light to make use of an enlarger. A subtype of the oil print process, the bromoil process, was developed in the early 20th century to solve this problem.
The oil print and bromoil processes create soft images reminiscent of paint or pastels but with the distinctive indexicality of a photograph. For this reason, they were popular with the Pictorialists during the first half of the 20th century. The painterly qualities of the prints continue to appeal to artists and have recently led some contemporary art photographers to take up these processes again.
Oil print techniques
The origins of the oil print process go back to experiments by Alphonse Louis Poitevin with bichromated gelatin in the 1850s.
To make an oil print, a piece of paper is coated with a thick gelatin layer containing dichromate salts that sensitize it to light. A contact print is made by laying a negative over the paper and exposing it to light, which leads to hardening of the dichromated gelatin in proportion to the amount of light that reaches the paper. After exposure, the print is soaked in water and the non-hardened areas absorb more water than the hardened parts. The sponge-dried but still moist paper is then inked with an oil-based ink, which sticks preferentially to the hardened (drier) areas. The result is a positive image in the color of the ink. As with other forms of printmaking, the ink application requires considerable skill, and no two prints are identical.
Multicolor oil prints are possible through local inking of the print, and it is also possible to create reverse prints by contact-printing the wet oil print to a piece of plain paper. Artists have also sometimes created variations by applying extra paint using brushes. In the later 19th century, it was possible to buy commercially prepared gelatin-coated paper.
Bromoil process
The bromoil process is a variation on the oil print process that allows for enlargements. In 1907, E. J. Wall described how it should theoretically be possible to place a negative in an enlarger to produce a larger silver bromide positive, which would then be bleached, hardened, and inked following the oil print process. That same year C. Welborne Piper worked out the practical details. Much as Wall envisioned it, the bromoil process starts with a normally developed print exposed onto a silver-bromide paper that is then chemically bleached, hardened, and fixed. When the still-moist print is inked, the hardest (driest) areas take up the most ink while the wettest areas become the highlights.
An issue with the bromoil process is that inadequate rinsing of the chrome salts can lead to discoloration of the prints when exposed to light over long periods of time. In addition, irregularities in the thickness of the gelatin layer can, under unfavorable conditions, lead to stresses that damage the pictorial (ink) layer.
A version of the bromoil process was developed to produce full-color prints in the 1930s before commercial color film was developed. This technique requires three matching negatives of the subject, each made on Ilford Hypersensitive Panchromatic plates and shot through a blue, green, and red filter. The developed plates are enlarged and printed onto separate pieces of bromide-silver photographic paper, which are then bleached and hardened in the usual manner. The three prints are then inked with a firm bromoil ink, yellow on the blue-filtered print, red on the green-filtered print, and blue on the red-filtered print. The three inked prints are then treated as printing plates and passed through an etching press that will transfer the ink to a new piece of paper or cloth, reversing the image in the process. Care must be taken to maintain exact registration of the three plates.
See also
Carbon print
References
Further reading
External links
Photographic processes
Oils | Oil print process | Chemistry | 931 |
18,585,770 | https://en.wikipedia.org/wiki/Morphological%20gradient | In mathematical morphology and digital image processing, a morphological gradient is the difference between the dilation and the erosion of a given image. It is an image where each pixel value (typically non-negative) indicates the contrast intensity in the close neighborhood of that pixel. It is useful for edge detection and segmentation applications.
Mathematical definition and types
Let $f : E \to \mathbb{R}$ be a grayscale image, mapping points from a Euclidean space or discrete grid E (such as $\mathbb{R}^2$ or $\mathbb{Z}^2$) into the real line. Let $b(x)$ be a grayscale structuring element. Usually, b is symmetric and has short-support, e.g.,
$b(x) = \begin{cases} 0, & |x| \le 1, \\ -\infty, & \text{otherwise.} \end{cases}$
Then, the morphological gradient of f is given by:
$G(f) = f \oplus b - f \ominus b$,
where $\oplus$ and $\ominus$ denote the dilation and the erosion, respectively.
An internal gradient is given by:
$G_i(f) = f - f \ominus b$,
and an external gradient is given by:
$G_e(f) = f \oplus b - f$.
The internal and external gradients are "thinner" than the gradient, but the gradient peaks are located on the edges, whereas the internal and external ones are located at each side of the edges. Notice that $G_i(f) + G_e(f) = G(f)$.
If $b(0) \ge 0$, then all three gradients have non-negative values at all pixels.
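A minimal sketch of these three gradients, assuming NumPy and SciPy are available (this example is illustrative and not part of the original article):

```python
import numpy as np
from scipy import ndimage

f = np.array([[0, 0, 0, 0, 0],
              [0, 9, 9, 9, 0],
              [0, 9, 9, 9, 0],
              [0, 9, 9, 9, 0],
              [0, 0, 0, 0, 0]], dtype=float)
footprint = np.ones((3, 3), dtype=bool)   # flat 3x3 structuring element b

dilated = ndimage.grey_dilation(f, footprint=footprint)
eroded = ndimage.grey_erosion(f, footprint=footprint)

gradient = dilated - eroded            # peaks on the edge itself
internal_gradient = f - eroded         # lies just inside the bright square
external_gradient = dilated - f        # lies just outside the bright square

# The basic gradient is the sum of the internal and external gradients;
# SciPy also provides it directly as ndimage.morphological_gradient.
assert np.allclose(gradient, internal_gradient + external_gradient)
print(gradient.astype(int))
```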
References
Image Analysis and Mathematical Morphology by Jean Serra, (1982)
Image Analysis and Mathematical Morphology, Volume 2: Theoretical Advances by Jean Serra, (1988)
An Introduction to Morphological Image Processing by Edward R. Dougherty, (1992)
External links
Morphological gradients, Centre de Morphologie Mathématique, École_des_Mines_de_Paris
Mathematical morphology
Digital geometry | Morphological gradient | Technology | 305 |
3,237,784 | https://en.wikipedia.org/wiki/Boolean%20grammar | Boolean grammars, introduced by Alexander Okhotin, are a class of formal grammars studied in formal language theory. They extend the basic type of grammars, the context-free grammars, with conjunction and negation operations. Besides these explicit operations, Boolean grammars allow implicit disjunction represented by multiple rules for a single nonterminal symbol, which is the only logical connective expressible in context-free grammars. Conjunction and negation can be used, in particular, to specify intersection and complement of languages. An intermediate class of grammars known as conjunctive grammars allows conjunction and disjunction, but not negation.
The rules of a Boolean grammar are of the form
$A \to \alpha_1 \,\&\, \ldots \,\&\, \alpha_m \,\&\, \lnot\beta_1 \,\&\, \ldots \,\&\, \lnot\beta_n$
where $A$ is a nonterminal, and $\alpha_1$, ..., $\alpha_m$, $\beta_1$, ..., $\beta_n$ are strings formed of symbols in $\Sigma$ and $N$. Informally, such a rule asserts that every string $w$ over $\Sigma$ that satisfies each of the syntactical conditions represented by $\alpha_1$, ..., $\alpha_m$ and none of the syntactical conditions represented by $\beta_1$, ..., $\beta_n$ therefore satisfies the condition defined by $A$.
There exist several formal definitions of the language generated by a Boolean grammar. They have one thing in common: if the grammar is represented as a system of language equations with union, intersection, complementation and concatenation, the languages generated by the grammar must be the solution of this system. The semantics differ in details, some define the languages using language equations, some draw upon ideas from the field of logic programming. However, these nontrivial issues of formal definition are mostly irrelevant for practical considerations, and one can construct grammars according to the given informal semantics. The practical properties of the model are similar to those of conjunctive grammars, while the descriptional capabilities are further improved. In particular, some practically useful properties inherited from context-free grammars, such as efficient parsing algorithms, are retained, see .
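The sketch below (illustrative only, with hypothetical names) shows one way such rules can be represented as data. It uses a conjunctive-style fragment with positive conjuncts only; negated conjuncts would populate the otherwise empty negative field.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """One Boolean grammar rule: a head nonterminal, a tuple of positive
    conjuncts and a tuple of negated conjuncts, each conjunct being a string
    of terminal and nonterminal symbols. A context-free rule is the special
    case of a single positive conjunct and no negated conjuncts."""
    head: str
    positive: tuple
    negative: tuple = ()

# Conjunctive-style grammar for the non-context-free language { a^k b^k c^k },
# expressed as the intersection of two context-free conditions AB and DC.
rules = [
    Rule("S", ("AB", "DC")),   # S -> AB & DC
    Rule("A", ("aA", "")),     # A generates a*
    Rule("B", ("bBc", "")),    # B generates b^n c^n
    Rule("D", ("aDb", "")),    # D generates a^m b^m
    Rule("C", ("cC", "")),     # C generates c*
]
for r in rules:
    body = " & ".join(r.positive + tuple("~" + beta for beta in r.negative))
    print(f"{r.head} -> {body}")
```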
References
Preprint available online.
External links
Okhotin's page on Boolean grammars
Formal languages | Boolean grammar | Mathematics | 418 |
1,728,077 | https://en.wikipedia.org/wiki/Copywriting | Copywriting is the act or occupation of writing text for the purpose of advertising or other forms of marketing. Copywriting is aimed at selling products or services. The product, called copy or sales copy, is written content that aims to increase brand awareness and ultimately persuade a person or group to take a particular action.
Copywriters help to create billboards, brochures, catalogs, jingle lyrics, magazine and newspaper advertisements, sales letters and other direct mail, scripts for television or radio commercials, taglines, white papers, website and social media posts, pay-per-click and other marketing communications. All this aligned with the target audience's expectations while keeping the content and copy fresh, relevant, and effective.
Employment
Many copywriters are employed in marketing departments, advertising agencies, public relations firms, or copywriting agencies, or are self-employed as freelancers, whose clients may range from small to large companies. They may work at a client's office, a coworking office, a coffeehouse, or remotely from home.
Advertising agencies usually hire copywriters as part of a creative team, in which they are partnered with art directors or creative directors. The copywriter writes copy or a script for an advertisement, based largely on information obtained from a client. Either member of the team can conceptualize the overall idea and the process of collaboration often improves the work. Some agencies specialize in servicing a particular industry or sector.
Copywriting agencies combine copywriting with a range of editorial and associated services that may include positioning and messaging consulting, social media, search engine optimization, developmental editing, copy editing, proofreading, fact-checking, speechwriting, and page layout. Some agencies employ in-house copywriters, while others use external contractors or freelancers.
Digital marketing agencies commonly include copywriters, whether freelance or employees, that focus specifically on digital communication. Sometimes the work of a copywriter will overlap with that of a Content Writer as they will need to write social media advertisements, Google advertisements, online landing pages, and persuasive email copy. This new wave of copywriting born of the digital era has made the discipline more accessible.
Copywriters also work in-house for retail chains, book publishers, or other big firms that advertise frequently. They can also be employed to write advertorials for newspapers, magazines, and broadcasters.
A copywriter's job is related to, but different from that of a technical writer. Even though these jobs may overlap, the style guides for the end product have different purposes:
Technical writing saves readers or speakers time by providing valuable and complex technical information in a simple format (see, for example, Simplified Technical English). So a tech writer uses specific techniques for formatting the required information into a documentation topic. Common tasks are release notes, step-by-step instructions, technical information, diagrams, and tables. Tech writers mainly work for engineering, medical, or IT companies, using communication skills for gathering information and the logic for structuring topics.
Copywriting produces marketing texts and scenarios about products or services. The copywriter represents the company in the best way possible by talking up the product and the service, or by creating a company style guide. The key point is to create a desire to work with the company or do business with the company. The copywriter has to find the key to the audience to create the content, so business and sociology skills would be required to form a strong trust in the company.
Education
Traditionally, the level of education needed to become a copywriter is most often a Bachelor's degree in English, Advertising, Journalism, or Marketing. That is still the case for in-house copywriters. However, freelance copywriters today can learn the craft from copywriting courses or mentors. Many clients accept or even prefer writing samples over formal copywriting credentials.
In 2018, the U.S. Bureau of Labor Statistics reported an annual median salary of $62,170 for writers and authors. In 2019, PayScale.com stated that the expected salary for copywriters ranged from $35,000–$73,000.
Famous copywriters
John Emory Powers (1837—1919) was the world's first full-time copywriter. Since then, some copywriters have become well-known within the industry because they founded major advertising agencies, and others because of their lifetime body of work. Many Creative Artists worked as Copywriters before becoming famous in other fields.
David Ogilvy (1911—1999) is known as the Father of advertising. He is also remembered for his Rolls-Royce headline: "At 60 miles an hour the loudest noise in this new Rolls-Royce comes from the electric clock". He also wrote books on advertising, such as Ogilvy on Advertising and Confessions of an Advertising Man.
Leo Burnett (1891—1971) was named by Time as one of the 100 most influential people of the 20th century. He was the founder of Leo Burnett Worldwide. His memorable Marlboro Man is one of the most successful campaigns ever. His company was acquired by Publicis Groupe in 2002.
There are many ways advertisers try to appeal to their client base, and they use different types of advertising executions to do so. These include a straight sell, scientific/technical evidence, demonstration, comparison, testimonial, slice of life, animation, personality symbols, imagery, dramatization, humor, and combinations of these.
Notable ad campaigns
Nike's "Just Do It" — increased Nike's sales from $800 million to more than $9.2 billion in 10 years.
California Milk Processor Board's "Got Milk?" — increased milk sales in California and has spawned many parodies since its launch.
Apple's "Get a Mac" — the Mac vs PC campaign generated 42% market share growth in its first year alone.
Formats
Internet
The Internet has expanded the range of copywriting opportunities to include landing pages and other web content, online advertisements, emails, blogs, social media, and other forms of electronic communications.
The Internet has brought new opportunities for copywriters to learn their craft, do research and view others' work. Clients, copywriters and art directors can more readily find each other, making freelancing a viable job option. There are also many new websites that make becoming a freelance copywriter a much more organized process.
Experimenting and ongoing re-evaluation are part of the process.
Search engine optimization (SEO)
Web copy may include among its objectives the achievement of higher rankings in search engines. Originally, this involved the strategic placement and repetition of keywords and phrases on web pages, but writing in a manner that human readers would consider normal, as well as their inclusion into Meta tags, page headings and sub-headings. In the case of Google, a copywriter would tailor content to its "E-E-A-T" algorithm, which ranks search results based on experience, expertise, authoritativeness, and trustworthiness.
Book publishing
In book publishing, the back of the book contains a blurb that presents a summary or details pertaining to the information inside. The author uses the back cover to grab the attention of the audience as well as provides the information for what the book contains and persuades the customer to develop an interest in the product.
Business to business (B2B)
B2B businesses sell their products and services to other companies instead of to consumers. For instance, manufacturers sell their products to warehouses, factories, and so on, rather than to end customers. Copywriters therefore produce sales content that describes the benefits of purchasing the products. The tone is formal, conversational, and clear. Since businesses explore various vendors before buying a product, there should be a lot of engaging, appealing, and fresh content. B2B marketing materials include e-books, infographics, press releases, web pages, email sequences, scripts for podcasts, webinars, and so forth.
Brand copywriting
The main objective is to increase brand awareness among the target audience so that a customer thinks about the company first before buying a product. The copywriters craft a unique story that resonates with the target audience, promoting or selling a product or an idea using creative campaigns for the target audience.
Business to customer (B2C)
B2C businesses aim to sell products and services directly to customers. The main goal is to persuade the customer to take prompt action. Prominent examples are supermarkets, brick-and-mortar stores, online stores, and so on. Copywriters use long content with consistent branding, bullet points, subheads, and shorter sentences and paragraphs to highlight the features of the products.
See also
Advertising
Communication design
Email marketing
Swipe file
Professional writing
References
Communication design
Advertising occupations
Journalism occupations | Copywriting | Engineering | 1,804 |
33,297,462 | https://en.wikipedia.org/wiki/Keller%27s%20conjecture | In geometry, Keller's conjecture is the conjecture that in any tiling of $n$-dimensional Euclidean space by identical hypercubes, there are two hypercubes that share an entire $(n-1)$-dimensional face with each other. For instance, in any tiling of the plane by identical squares, some two squares must share an entire edge.
This conjecture was introduced by Ott-Heinrich Keller, after whom it is named. A breakthrough by Jeffrey Lagarias and Peter Shor showed that it is false in ten or more dimensions, and after subsequent refinements, it is now known to be true in spaces of dimension at most seven and false in all higher dimensions. The proofs of these results use a reformulation of the problem in terms of the clique number of certain graphs now known as Keller graphs.
The related Minkowski lattice cube-tiling conjecture states that whenever a tiling of space by identical cubes has the additional property that the cubes' centers form a lattice, some cubes must meet face-to-face. It was proved by György Hajós in 1942.
Several published surveys cover work on Keller's conjecture and related problems.
Statement
A tessellation or tiling of a Euclidean space is, intuitively, a family of subsets that cover the whole space without overlapping. More formally,
a family of closed sets, called tiles, forms a tiling if their union is the whole space and every two distinct sets in the family have disjoint interiors. A tiling is said to be monohedral if all of the tiles have the same shape (they are congruent to each other). Keller's conjecture concerns monohedral tilings in which all of the tiles are hypercubes of the same dimension as the space. In one formulation of the problem, a cube tiling is a tiling by congruent hypercubes in which the tiles are additionally required to all be translations of each other without any rotation, or equivalently, to have all of their sides parallel to the coordinate axes of the space. Not every tiling by congruent cubes has this property; for instance, three-dimensional space may be tiled by two-dimensional sheets of cubes that are twisted at arbitrary angles with respect to each other. Another formulation instead considers all tilings of space by congruent hypercubes and states, without proof, that the assumption that cubes are axis-parallel can be added without loss of generality.
An $n$-dimensional hypercube has $2n$ faces of dimension $n-1$ that are, themselves, hypercubes; for instance, a square has four edges, and a three-dimensional cube has six square faces. Two tiles in a cube tiling (defined in either of the above ways) meet face-to-face if there is an $(n-1)$-dimensional hypercube that is a face of both of them. Keller's conjecture is the statement that every cube tiling has at least one pair of tiles that meet face-to-face in this way.
The original version of the conjecture stated by Keller was a stronger statement: every cube tiling has a column of cubes all meeting face-to-face. This version of the problem is true or false for the same dimensions as its more commonly studied formulation.
It is a necessary part of the conjecture that the cubes in the tiling all be congruent to each other, for if cubes of unequal sizes are allowed, then the Pythagorean tiling would form a counterexample in two dimensions.
The conjecture as stated does not require all of the cubes in a tiling to meet face-to-face with other cubes. Although tilings by congruent squares in the plane have the stronger property that every square meets edge-to-edge with another square, some of the tiles in higher-dimensional hypercube tilings may not meet face-to-face with any other tile. For instance, in three dimensions, the tetrastix structure formed by three perpendicular sets of square prisms can be used to construct a cube tiling, combinatorially equivalent to the Weaire–Phelan structure, in which one fourth of the cubes (the ones not part of any prism) are surrounded by twelve other cubes without meeting any of them face-to-face.
Group-theoretic reformulation
Keller's conjecture was shown to be true in dimensions at most six by Oskar Perron. The disproof of Keller's conjecture, for sufficiently high dimensions, has progressed through a sequence of reductions that transform it from a problem in the geometry of tilings into a problem in group theory and, from there, into a problem in graph theory.
Hajós first reformulated Keller's conjecture in terms of factorizations of abelian groups. He shows that if there is a counterexample to the conjecture, then it can be assumed to be a periodic tiling of cubes with an integer side length and integer vertex positions; thus, in studying the conjecture, it is sufficient to consider tilings of this special form. In this case, the group of integer translations, modulo the translations that preserve the tiling, forms an abelian group, and certain elements of this group correspond to the positions of the tiles. Hajós defines a family of subsets $A_1, \ldots, A_n$ of an abelian group to be a factorization if each element of the group has a unique expression as a sum $a_1 + \cdots + a_n$, where each $a_i$ belongs to $A_i$. With this definition, Hajós' reformulated conjecture is that whenever an abelian group has a factorization in which the first set may be arbitrary but each subsequent set takes a special form generated by a single group element, then at least one element must belong to the difference set of the first set with itself.
Szabó showed that any tiling that forms a counterexample to the conjecture can be assumed to have an even more special form: the cubes have side length a power of two and integer vertex coordinates, and the tiling is periodic with period twice the side length of the cubes in each coordinate direction. Based on this geometric simplification, he also simplified Hajós' group-theoretic formulation, showing that it is sufficient to consider abelian groups that are the direct sums of cyclic groups of order four.
Keller graphs
Corrádi and Szabó reformulated Szabó's result as a condition about the existence of a large clique in a certain family of graphs, which subsequently became known as the Keller graphs. More precisely, the vertices of the Keller graph of dimension $n$ are the $4^n$ tuples $(k_1, \ldots, k_n)$ where each $k_i$ is 0, 1, 2, or 3. Two vertices are joined by an edge if they differ in at least two coordinates and differ by exactly two in at least one coordinate. Corrádi and Szabó showed that the maximum clique in this graph has size at most $2^n$, and if there is a clique of this size, then Keller's conjecture is false. Given such a clique, one can form a covering of space by cubes of side two whose centers have coordinates that, when taken modulo four, are vertices of the clique. The condition that any two vertices of the clique have a coordinate that differs by two implies that cubes corresponding to these vertices do not overlap. The condition that vertices differ in two coordinates implies that these cubes cannot meet face-to-face. The condition that the clique has size $2^n$ implies that the cubes within any period of the tiling have the same total volume as the period itself. Together with the fact that they do not overlap, this implies that the cubes placed in this way tile space without meeting face-to-face.
Lagarias and Shor disproved Keller's conjecture by finding a clique of size $2^{10}$ in the Keller graph of dimension 10. This clique leads to a non-face-to-face tiling in dimension 10, and copies of it can be stacked (offset by half a unit in each coordinate direction) to produce non-face-to-face tilings in any higher dimension. Similarly, Mackey found a clique of size $2^8$ in the Keller graph of dimension eight, leading in the same way to a non-face-to-face tiling in dimension 8 and (by stacking) in dimension 9.
Subsequently, it was shown that the Keller graph of dimension seven has a maximum clique of size 124. Because this is less than $2^7 = 128$, the graph-theoretic version of Keller's conjecture is true in seven dimensions. However, the translation from cube tilings to graph theory can change the dimension of the problem, so this result does not settle the geometric version of the conjecture in seven dimensions. Finally, a 200-gigabyte computer-assisted proof in 2019 used Keller graphs to establish that the conjecture holds true in seven dimensions. Therefore, the question Keller posed can be considered solved: the conjecture is true in seven dimensions or fewer but is false when there are more than seven dimensions.
The sizes of the maximum cliques in the Keller graphs of dimensions 2, 3, 4, 5, and 6 are, respectively, 2, 5, 12, 28, and 60. The Keller graphs of dimensions 4, 5, and 6 have been included in the set of "DIMACS challenge graphs" frequently used as a benchmark for clique-finding algorithms.
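A minimal sketch of the construction, assuming the networkx library is available, builds the Keller graph for very small dimensions and checks the clique numbers quoted above by brute force:

```python
import itertools
import networkx as nx

def keller_graph(d: int) -> nx.Graph:
    """Keller graph of dimension d, following the definition above: vertices
    are d-tuples over {0, 1, 2, 3}; two vertices are adjacent when they
    differ in at least two coordinates and by exactly two in at least one."""
    vertices = list(itertools.product(range(4), repeat=d))
    G = nx.Graph()
    G.add_nodes_from(vertices)
    for u, v in itertools.combinations(vertices, 2):
        diffs = [abs(a - b) for a, b in zip(u, v)]
        if sum(x != 0 for x in diffs) >= 2 and any(x == 2 for x in diffs):
            G.add_edge(u, v)
    return G

# The graph has 4**d vertices, so brute-force clique search is only practical
# for very small d; the printed values should match the list above (2 and 5),
# well below the 2**d (4 and 8) that a counterexample would require.
for d in (2, 3):
    G = keller_graph(d)
    omega = max(len(c) for c in nx.find_cliques(G))
    print(f"dimension {d}: clique number {omega}")
```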
Related problems
As describes, Hermann Minkowski was led to a special case of the cube-tiling conjecture from a problem in diophantine approximation. One consequence of Minkowski's theorem is that any lattice (normalized to have determinant one) must contain a nonzero point whose Chebyshev distance to the origin is at most one. The lattices that do not contain a nonzero point with Chebyshev distance strictly less than one are called critical, and the points of a critical lattice form the centers of the cubes in a cube tiling. Minkowski conjectured in 1900 that whenever a cube tiling has its cubes centered at lattice points in this way, it must contain two cubes that meet face-to-face. If this is true, then (because of the symmetries of the lattice) each cube in the tiling must be part of a column of cubes, and the cross-sections of these columns form a cube tiling of one smaller dimension. Reasoning in this way, Minkowski showed that (assuming the truth of his conjecture) every critical lattice has a basis that can be expressed as a triangular matrix, with ones on its main diagonal and numbers less than one away from the diagonal. György Hajós proved Minkowski's conjecture in 1942 using Hajós's theorem on factorizations of abelian groups, a similar group-theoretic method to the one that he would later apply to Keller's more general conjecture.
Keller's conjecture is a variant of Minkowski's conjecture in which the condition that the cube centers form a lattice is relaxed. A second related conjecture, made by Furtwängler in 1936, instead relaxes the condition that the cubes form a tiling. Furtwängler asked whether a system of cubes centered on lattice points forming a -fold covering of space (that is, all but a measure-zero subset of the points in the space must be interior to exactly cubes) must necessarily have two cubes meeting face-to-face. Furtwängler's conjecture is true for two- and three-dimensional space, but Hajós found a four-dimensional counterexample in 1938. characterized the combinations of and the dimension that permit a counterexample. Additionally, combining both Furtwängler's and Keller's conjectures, Robinson showed that -fold square coverings of the Euclidean plane must include two squares that meet edge-to-edge. However, for every and every , there is a -fold tiling of -dimensional space by cubes with no shared faces.
Once counterexamples to Keller's conjecture became known, it became of interest to ask for the maximum dimension of a shared face that can be guaranteed to exist in a cube tiling. When the dimension $n$ is at most seven, this maximum dimension is just $n-1$, by the proofs of Keller's conjecture for those small dimensions, and when $n$ is at least eight, this maximum dimension is at most $n-2$. A stronger upper bound has been shown for ten or more dimensions.
Close connections have also been found between cube tilings and the spectral theory of square-integrable functions on the cube.
Cliques in the Keller graphs that are maximal but not maximum have been used to study packings of cubes into space that cannot be extended by adding any additional cubes.
In 1975, Ludwig Danzer and independently Branko Grünbaum and G. C. Shephard found a tiling of three-dimensional space by parallelepipeds with 60° and 120° face angles in which no two parallelepipeds share a face.
Notes
References
See in particular pages 43, 114, 147, 156, and 161–163, describing different computational results on the Keller graphs included in this challenge set.
Cubes
Tessellation
Parametric families of graphs
Disproved conjectures
Computer-assisted proofs | Keller's conjecture | Physics,Mathematics | 2,718 |
12,045,114 | https://en.wikipedia.org/wiki/Flory%20convention | In polymer science, the Flory convention is a convention for labelling rotational isomers of polymers. It is named after Nobel Prize-winning Paul Flory.
The convention states that for a given bond, when the dihedral angle formed between the previous and subsequent bonds projected on the plane normal to the bond is 0 degrees, the state is labelled as "trans", and when the angle is 180 degrees, the state is labelled as "cis".
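A minimal sketch of the labelling rule as stated above; the function name, the angular tolerance, and the handling of angles other than 0° and 180° are illustrative choices and not part of the convention itself:

```python
def flory_label(dihedral_deg: float, tol: float = 1.0) -> str:
    """Label a rotational state using the Flory convention described above:
    a dihedral angle of 0 degrees is "trans", 180 degrees is "cis".
    Other angles are reported as "other" here (an illustrative choice)."""
    angle = dihedral_deg % 360.0            # reduce the angle to [0, 360)
    if min(angle, 360.0 - angle) <= tol:    # within tol of 0 degrees
        return "trans"
    if abs(angle - 180.0) <= tol:           # within tol of 180 degrees
        return "cis"
    return "other"

print(flory_label(0.4))    # trans
print(flory_label(179.5))  # cis
```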
References
Biophysics | Flory convention | Physics,Biology | 96 |
12,812,870 | https://en.wikipedia.org/wiki/Stygiolobus | Stygiolobus is a genus in the family Sulfolobaceae.
See also
List of Archaea genera
References
Further reading
Scientific journals
Scientific books
External links
Archaea genera
Thermoproteota | Stygiolobus | Biology | 44 |
13,408,012 | https://en.wikipedia.org/wiki/Healthy%20user%20bias | The healthy user bias or healthy worker bias is a bias that can damage the validity of epidemiologic studies testing the efficacy of particular therapies or interventions.
Specifically, it is a sampling bias or selection bias: the kind of subjects that take up an intervention, including by enrolling in a clinical trial, are not representative of the general population. People who volunteer for a study can be expected, on average, to be healthier than people who don't volunteer, as they are concerned for their health and are predisposed to follow medical advice, both factors that would aid one's health. In a sense, being healthy or active about one's health is a precondition for becoming a subject of the study, an effect that can appear under other conditions such as studying particular groups of workers. For example, someone in ill health is unlikely to have a job as a manual laborer. As a result, studies of manual laborers are studies of people who are currently healthy enough to engage in manual labor, rather than studies of people who would do manual labor if they were healthy enough.
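The mechanism can be illustrated with a toy simulation; every number below (population size, how strongly health drives employment, the outcome model) is invented purely for illustration, and the job has no causal effect at all in the model:

```python
import math
import random

random.seed(0)

# Toy model: each person has a latent "health" score. Healthier people are
# more likely to work as manual laborers, and healthier people also have a
# lower chance of a bad health outcome. The job itself does nothing.
population = []
for _ in range(100_000):
    health = random.gauss(0.0, 1.0)
    is_laborer = random.random() < 1.0 / (1.0 + math.exp(-2.0 * health))
    bad_outcome = random.random() < 1.0 / (1.0 + math.exp(1.5 * health))
    population.append((is_laborer, bad_outcome))

def bad_rate(group):
    return sum(bad for _, bad in group) / len(group)

laborers = [p for p in population if p[0]]
print(f"bad-outcome rate among manual laborers: {bad_rate(laborers):.3f}")
print(f"bad-outcome rate in whole population:   {bad_rate(population):.3f}")
# The laborers look healthier even though the job has no effect: being
# healthy enough to work selected them into the group being studied.
```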
References
Further reading
McMichael, A. J. (1976). Standardized mortality ratios and the “healthy worker effect”: Scratching beneath the surface. Journal of Occupational Medicine, 18, 165–168. doi:10.1097/00043764-197603000-00009
External links
"Do We Really Know What Makes Us Healthy?"
Epidemiology
Bias
Medical statistics
Sampling (statistics) | Healthy user bias | Environmental_science | 312 |
12,342,942 | https://en.wikipedia.org/wiki/Anderson%20orthogonality%20theorem | The Anderson orthogonality theorem is a theorem in physics by the physicist P. W. Anderson.
It relates to the introduction of a magnetic impurity in a metal. When a magnetic impurity is introduced into a metal, the conduction electrons will tend to screen the potential V(r) that the impurity creates. The N-electron ground states of the system when V(r) = 0, which corresponds to the absence of the impurity, and when V(r) ≠ 0, which corresponds to the introduction of the impurity, are orthogonal in the thermodynamic limit N → ∞.
References
Condensed matter physics | Anderson orthogonality theorem | Physics,Chemistry,Materials_science,Engineering | 108 |
24,497,031 | https://en.wikipedia.org/wiki/MakerBot | MakerBot Industries, LLC was an American desktop 3D printer manufacturing company headquartered in New York City. It was founded in January 2009 by Bre Pettis, Adam Mayer, and Zach "Hoeken" Smith to build on the early progress of the RepRap Project. It was acquired by Stratasys in June 2013. MakerBot had sold over 100,000 desktop 3D printers worldwide. Between 2009 and 2019, the company released 7 generations of 3D printers, ending with the METHOD and METHOD X.
It was at one point the leader of the desktop market with an important presence in the media, but its market share declined over the late 2010s. MakerBot also founded and operated Thingiverse, the largest online 3D printing community and file repository. In August 2022, the company completed a merger with its long-time competitor Ultimaker. The combined company is known as UltiMaker, but retains the MakerBot name for its Sketch line of education-focused 3D printers.
History
Smith was one of the founding members of the RepRap Research Foundation, a non-profit group created to help advance early research in the area of open-source 3D printers.
Bre Pettis got inspired during an art residency in Vienna with Johannes Grenzfurthner/monochrom in 2007, when he wanted to create a robot that could print shot glasses for the event Roboexotica and did research about the RepRap project at the Vienna hackerspace Metalab. Shot glasses remained a theme throughout the history of MakerBot.
The company started shipping kits in April 2009 and had sold approximately 3,500 units. Demand for the kits was so great in 2009 that the company solicited MakerBot owners to provide parts for future devices from their own MakerBots. Seed funding of $75,000 was provided by Jake Lodwick ($50,000) and Adrian Bowyer and his wife, Christine ($25,000).
In August 2011, venture capital firm The Foundry Group invested $10 million in the company and joined its board.
In April 2012, Zachary Smith was pushed out of the company amid disagreement over adherence to open-source principles, and likely also over integration with Stratasys. Around the same time, roughly 100 employees were laid off and escorted out by private security.
On June 19, 2013, Stratasys Incorporated announced that it had acquired MakerBot in a stock deal worth $604 million, with $403 million in stock paid up front, based on the current share value of Stratasys. The deal provided that MakerBot would operate as a distinct brand and subsidiary of Stratasys, serving the consumer and desktop market segments. When acquired, Makerbot had sold 22,000 printers. Bre Pettis moved to a position at Stratasys and was replaced as CEO by Jennifer Lawton, who in 2015 was succeeded by Jonathan Jaglom, then in January 2017, Nadav Goshen.
In April 2015, it was reported that in an effort to integrate MakerBot's activities better with those of Stratasys, Jaglom laid off around 100 of 500 employees and closed the existing three MakerBot retail locations. Then, 80 other employees were laid off in October 2015.
In February 2017, MakerBot's newly minted CEO Nadav Goshen laid off more than 30% of the workforce and repositioned the company from a consumer focus to two verticals: the professional and education sectors. The layoff was dubbed the "Valentine's Day Massacre", as it happened the day after Valentine's Day. Overnight, MakerBot went from 400 employees to under 200 worldwide.
On August 31, 2022, the MakerBot division was merged with Ultimaker, with Stratasys keeping a minority share in the new UltiMaker company.
Products
MakerBot's first products were sold as do-it-yourself kits requiring only minor soldering, with an assembly process compared to assembling IKEA furniture. Current models are designed as closed-box products, with no assembly required.
MakerBot printers print with polylactic acid (PLA), acrylonitrile butadiene styrene (ABS), high-density polyethylene (HDPE), and polyvinyl alcohol (PVA).
Cupcake CNC
The Cupcake CNC was introduced in April 2009 as a rapid prototyping machine. The source files needed to build the devices were put on Thingiverse, allowing anyone to make one from scratch. The Cupcake CNC featured a usable build volume of 100 mm x 100 mm x 130 mm (L/W/H) and has outside dimensions of 350 mm x 240 mm x 450 mm.
Because of the open source nature of the product, any suggestions for improvements came from users. During its primary production run (April 2009 to September 2010), the Cupcake CNC kit was updated several times to incorporate new upgrades into each successive version.
Thing-O-Matic
Introduced in September 2010 at Maker Faire NYC, the Thing-O-Matic was MakerBot's second kit. It shipped with many of the aftermarket upgrades that had been built for Cupcake. The stock Thing-O-Matic included a heated, automated build platform, an MK5 plastruder, a redesigned z-stage and upgraded electronics. It featured a build volume of 100 mm x 100 mm x 100 mm (4" x 4" x 4") and outside dimensions of 300 mm x 300 mm x 410 mm (12" x 12" x 16" L/W/H). The device interfaces via USB or a Secure Digital (SD) card.
The Thing-O-Matic was discontinued in the spring of 2012. MakerBot agreed to support the Thing-O-Matic until their supply of parts was exhausted. Assembly instructions are available online through the MakerBot Wiki. The Thing-O-Matic is open-source hardware and is licensed under the GNU GPLv3. As such, the Thing-O-Matic can be heavily altered and improved by users. Some MakerBot operators developed upgrades to the platform that were later incorporated into factory kits. MakerBot has credited those early innovators in their documentation; some of these companies were inspired by MakerBot and went on to create their own 3D printing innovations, such as 3D-printed dresses.
Replicator
In January 2012 MakerBot introduced the Replicator. It offered more than double the build volume of the Thing-o-Matic at 22.5 cm × 14.5 cm × 15.0 cm (8.9 in × 5.7 in × 5.9 in, L×W×H). Other features included a dual extruder allowing two-color builds, an LCD screen and a control pad. The Replicator was sold pre-assembled with no kit version available. It was the last open-source MakerBot printer.
Replicator 2 Desktop 3D Printer
In September 2012, MakerBot introduced the Replicator 2. This newest model again increased the build volume, this time to 28.5 cm × 15.3 cm × 15.5 cm (11.2 in × 6.0 in × 6.1 in, L×W×H) and can print at 100 μm per layer. The dual extruder was changed back to a single extruder head, while the upgraded electronics, LCD, and gamepad remained similar to the original Replicator. Unlike previous models, the Replicator 2 can print only using PLA plastic, which comes sold in sealed bags with desiccant to protect it from moisture. The Replicator 2 is sold only pre-assembled.
Replicator 2X Experimental 3D Printer
Alongside the Replicator 2, MakerBot also released the Replicator 2X. The 2X model was intended as an experimental version of the 2 that includes a completely enclosed build area, redesigned dual-extruders, and a heated aluminum build platform – all of which enable printing with ABS plastic and dual-material printing.
Digitizer Desktop 3D Scanner
In August 2013, MakerBot released the Digitizer, a 3D scanner. The product was designed to allow MakerBot users to scan physical objects and turn them into digital, 3D printable models. The accompanying software allowed models to be edited, printed immediately, or uploaded to Thingiverse.
5th Generation Replicator Desktop 3D Printer
In January 2014, MakerBot released its Replicator Desktop 3D Printer with a build volume of 25.2 cm x 19.9 cm x 15.0 cm (9.9" x 7.8" x 5.9" L/W/H). This Fifth Generation Replicator features WiFi enabled software that connects the printer to MakerBot desktop and mobile apps.
Replicator Mini Compact 3D Printer
Also in January 2014, MakerBot released the Replicator Mini with a build volume of 10.0 cm x 10.0 cm x 12.5 cm (3.9" x 3.9" x 4.9" L/W/H), layer resolution of 200 μm, and a positioning precision of 11 μm on the x and y-axis and 2.5 μm in the z-axis.
Replicator Z18 3D Printer
Released alongside the Replicator Mini and 5th Generation Replicator, the Z18 offers a build volume of 30.0 cm x 30.5 cm x 47.5 cm (11" x 12" x 18" L/H/W), totaling over 2,000 cubic inches.
METHOD and METHOD X 3D Printer
In December 2018, MakerBot introduced the METHOD 3D Printer as a bridge between desktop accessibility features and industrial 3D printing technologies. This new 3D printer incorporated 15 Stratasys patents (MakerBot's parent company) and 15 new patents from MakerBot. The new 3D printer has a circulated heated chamber (60 °C), dual extruders, uses soluble PVA supports, and has a network of 21 sensors monitoring all aspects of the 3D printing process. The Method has a spring steel build plate allowing for easy removal of 3D prints. The Method has dry-sealed, humidity- and temperature-monitored material bays and was launched with the capability of printing in PLA, Tough™ and PET-G. An ultra-rigid metal frame construction reduces flexing during printing, allowing precision layer resolution of 20 to 400 microns and dimensional accuracy of +/- 0.2 mm. Connectivity options include WiFi, Ethernet, USB cable, and USB drive.
The build volume of the new Method with dual extrusion is 19 cm (L) × 19 cm (W) × 19.6 cm (H).
This new platform allowed Makerbot to follow with the release of the METHOD X in August 2019, which includes a heated build chamber (100 °C) capable of printing with real ABS material, using SR-30 support material and with more 3D Printing materials in development.
MakerBot Innovation Center
Envisioned as a solution for major clients, the MakerBot Innovation Center incorporates hardware (optimized suite of 3D Printers), SAAS workflow software, training services, and enterprise support. The first Innovation Center was established in February 2014 at SUNY New Paltz. Customers are largely universities such as University of Maryland, Florida Polytechnic, UMass Amherst, and Xavier University. Many Innovation Centers increase their surrounding community's access to 3D printing.
Manufacturing
Until mid-2016, manufacturing was performed in MakerBot's own facilities in New York; it was then contracted to Jabil Circuit. The New York manufacturing personnel were laid off, while development, logistics, and repair operations remained in New York.
Services
MakerBot has merged with Ultimaker, which now hosts the online community Thingiverse, where users can upload 3D-printable files, document designs, and collaborate on 3D printing projects and open-source hardware. The site is a collaborative repository for design files used in 3D printing, laser cutting and other DIY manufacturing processes.
Media coverage
MakerBot was featured on The Colbert Report in August 2011. MakerBot artist in residence Jonathan Monaghan sent a bust of Stephen Colbert, printed on a MakerBot 3D printer, into the stratosphere attached to a helium filled weather balloon.
Netflix published in September 2014 the documentary Print the Legend about Makerbot history.
Controversies
Several controversies have surrounded MakerBot, stemming from its detachment from the open-source community, the departure of its founders, reliability problems with its 'Smart Extruder', and questionable user clauses on the Thingiverse site.
'Smart extruder' problems
The fifth generation was equipped with an interchangeable extruder with some self-diagnostic capabilities. It was new to the market and was supposed to simplify printer maintenance, but very short extruder lifespans were common, requiring frequent replacement at high cost. This led to a class action lawsuit, which was dismissed. Ultimately, MakerBot replaced the failing extruder with a new version.
Closed source hardware
Around September 2012 the company stated that for their new Replicator 2 they "will not share the way the physical machine is designed or our GUI". This departure from the previous open-source hardware model was criticized by part of the community, including co-founder (and now former employee) Zachary Smith.
In 2014, the company faced significant criticism when it filed patent applications for designs that some claimed had been invented by members of its community and published to Thingiverse, such as the quick release extruder. Community members accused MakerBot of asserting ownership over their designs when those designs had been contributed with the understanding that they would remain open source. Then-CEO Bre Pettis released a statement dismissing these critics, citing patents that had been filed for unique inventions prior to any community-created designs, namely that the patent for the quick release extruder was originally filed in 2012 while the open source design was first published to Thingiverse in 2013.
See also
3D printing
List of 3D printer manufacturers
RepRap project
DEFCAD
Fused deposition modeling
References
External links
MakerBot Australia
Companies based in Brooklyn
Electronics companies established in 2009
Computer output devices
Manufacturing companies based in New York City
3D printer companies
2009 establishments in New York City
3D printers
3D scanners
Articles containing video clips
Fused filament fabrication
2013 mergers and acquisitions | MakerBot | Engineering | 2,862 |
22,426,302 | https://en.wikipedia.org/wiki/Beta%20Pyxidis | Beta Pyxidis, Latinized from β Pyxidis, is a double star located in the southern constellation Pyxis. It has an apparent visual magnitude of 3.954, making it the second brightest star in that faint constellation. Based upon parallax measurements, the star is an estimated 420 light-years (128 parsecs) from the Earth.
The spectrum matches a bright giant or giant star of stellar classification G7II/III. It has 3.8 times the mass of the Sun but has expanded to 20 times the Sun's radius. The effective temperature of the star's outer envelope is about 5,283 K, giving it the characteristic yellow hue of a G-type star. Beta Pyxidis has an unusually high rate of spin for an evolved star of this type, showing a projected rotational velocity of 11.8 km/s. One possible explanation is that it may have engulfed a nearby giant planet, such as a hot Jupiter.
In 2010, the star was among a survey of massive, lower effective temperature supergiants in an attempt to detect a magnetic field. This star may have a longitudinal magnetic field with a strength of less than one gauss. It is a young disk star system. There is a magnitude 12.5 optical companion, located at an angular separation of 12.7 arcseconds and a position angle of 118° as of the year 1943.
Naming
In Chinese, the name meaning Celestial Dog refers to an asterism consisting of β Pyxidis, e Velorum, f Velorum, α Pyxidis, γ Pyxidis and δ Pyxidis. Consequently, β Pyxidis itself is known by a corresponding Chinese name as a member of this asterism.
References
G-type bright giants
Pyxis
Pyxidis, Beta
Durchmusterung objects
074006
042515
3438 | Beta Pyxidis | Astronomy | 400 |
30,698,180 | https://en.wikipedia.org/wiki/N-Propyl%20azide | n-Propyl azide is an organic compound with the formula CH3CH2CH2N3. A white solid, it is a simple organic azide.
n-Propyl azide has been used in the laboratory synthesis of pharmaceutical drug candidates.
References
Further reading
Organoazides
Propyl compounds | N-Propyl azide | Chemistry | 64 |
352,354 | https://en.wikipedia.org/wiki/Limit%20state%20design | Limit State Design (LSD), also known as Load And Resistance Factor Design (LRFD), refers to a design method used in structural engineering. A limit state is a condition of a structure beyond which it no longer fulfills the relevant design criteria. The condition may refer to a degree of loading or other actions on the structure, while the criteria refer to structural integrity, fitness for use, durability or other design requirements. A structure designed by LSD is proportioned to sustain all actions likely to occur during its design life, and to remain fit for use, with an appropriate level of reliability for each limit state. Building codes based on LSD implicitly define the appropriate levels of reliability by their prescriptions.
The method of limit state design, developed in the USSR and based on research led by Professor N.S. Streletski, was introduced in USSR building regulations in 1955.
Criteria
Limit state design requires the structure to satisfy two principal criteria: the ultimate limit state (ULS) and the serviceability limit state (SLS).
Any design process involves a number of assumptions. The loads to which a structure will be subjected must be estimated, sizes of members to check must be chosen and design criteria must be selected. All engineering design criteria have a common goal: that of ensuring a safe structure and ensuring the functionality of the structure.
Ultimate limit state (ULS)
A clear distinction is made between the ultimate state (US) and the ultimate limit state (ULS). The Ultimate State is a physical situation that involves either excessive deformations leading to and approaching collapse of the component under consideration or the structure as a whole, as relevant, or deformations exceeding pre-agreed values. It involves, of course, considerable inelastic (plastic) behavior of the structural scheme and residual deformations. In contrast, the ULS is not a physical situation but rather an agreed computational condition that must be fulfilled, among other additional criteria, in order to comply with the engineering demands for strength and stability under design loads. A structure is deemed to satisfy the ultimate limit state criterion if all factored bending, shear and tensile or compressive stresses are below the factored resistances calculated for the section under consideration. The factored stresses referred to are found by applying Magnification Factors to the loads on the section. Reduction Factors are applied to determine the various factored resistances of the section.
The limit state criteria can also be set in terms of load rather than stress: using this approach the structural element being analysed (i.e. a beam or a column or other load bearing elements, such as walls) is shown to be safe when the "Magnified" loads are less than the relevant "Reduced" resistances.
Complying with the design criteria of the ULS is considered as the minimum requirement (among other additional demands) to provide the proper structural safety.
Serviceability limit state (SLS)
In addition to the ULS check mentioned above, a Service Limit State (SLS) computational check must be performed. To satisfy the serviceability limit state criterion, a structure must remain functional for its intended use subject to routine (everyday) loading, and as such the structure must not cause occupant discomfort under routine conditions.
As for the ULS, the SLS is not a physical situation but rather a computational check. The aim is to prove that under the action of Characteristic design loads (un-factored), and/or whilst applying certain (un-factored) magnitudes of imposed deformations, settlements, or vibrations, or temperature gradients etc. the structural behavior complies with, and does not exceed, the SLS design criteria values, specified in the relevant standard in force. These criteria involve various stress limits, deformation limits (deflections, rotations and curvature), flexibility (or rigidity) limits, dynamic behavior limits, as well as crack control requirements (crack width) and other arrangements concerned with the durability of the structure and its level of everyday service level and human comfort achieved, and its abilities to fulfill its everyday functions. In view of non-structural issues it might also involve limits applied to acoustics and heat transmission that might also affect the structural design.
This calculation check is performed at a point located at the lower half of the elastic zone, where characteristic (un-factored) actions are applied and the structural behavior is purely elastic.
Factor development
The load and resistance factors are determined using statistics and a pre-selected probability of failure. Variability in the quality of construction, consistency of the construction material are accounted for in the factors. Generally, a factor of unity (one) or less is applied to the resistances of the material, and a factor of unity or greater to the loads. Not often used, but in some load cases a factor may be less than unity due to a reduced probability of the combined loads. These factors can differ significantly for different materials or even between differing grades of the same material. Wood and masonry typically have smaller factors than concrete, which in turn has smaller factors than steel. The factors applied to resistance also account for the degree of scientific confidence in the derivation of the values — i.e. smaller values are used when there isn't much research on the specific type of failure mode). Factors associated with loads are normally independent on the type of material involved, but can be influenced by the type of construction.
In determining the specific magnitude of the factors, more deterministic loads (like dead loads, the weight of the structure and permanent attachments like walls, floor treatments, ceiling finishes) are given lower factors (for example 1.4) than highly variable loads like earthquake, wind, or live (occupancy) loads (1.6). Impact loads are typically given higher factors still (say 2.0) in order to account for both their unpredictable magnitudes and the dynamic nature of the loading vs. the static nature of most models. While arguably not philosophically superior to permissible or allowable stress design, it does have the potential to produce a more consistently designed structure as each element is intended to have the same probability of failure. In practical terms this normally results in a more efficient structure, and as such, it can be argued that LSD is superior from a practical engineering viewpoint.
Example treatment of LSD in building codes
The following is the treatment of LSD found in the National Building Code of Canada:
NBCC 1995 Format
φR > αD·D + ψ·γ·{αL·L + αQ·Q + αT·T} (a numerical sketch of this check follows the factor definitions below)
where φ = Resistance Factor
ψ = Load Combination Factor
γ = Importance Factor
αD = Dead Load Factor
αL = Live Load Factor
αQ = Earthquake Load Factor
αT = Thermal Effect (Temperature) Load Factor
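A numerical sketch of the check in the format above; the resistance, the load effects, and the factor values are all hypothetical and chosen only to show how the factored demand is compared against the factored resistance, not taken from any code table:

```python
# Hypothetical ULS check in the NBCC 1995 format shown above:
#   phi*R  >  alphaD*D + psi*gamma*(alphaL*L + alphaQ*Q + alphaT*T)
phi = 0.90                 # resistance factor (illustrative)
R = 600.0                  # nominal resistance of the member, kN (hypothetical)

alphaD, D = 1.25, 150.0    # dead load factor and dead load effect, kN (hypothetical)
alphaL, L = 1.50, 180.0    # live load factor and live load effect, kN (hypothetical)
alphaQ, Q = 1.00, 40.0     # earthquake load factor and effect, kN (hypothetical)
alphaT, T = 1.25, 10.0     # thermal load factor and effect, kN (hypothetical)
psi, gamma = 0.70, 1.0     # load combination and importance factors (hypothetical)

factored_resistance = phi * R
factored_demand = alphaD * D + psi * gamma * (alphaL * L + alphaQ * Q + alphaT * T)

print(f"factored resistance = {factored_resistance:.1f} kN")
print(f"factored demand     = {factored_demand:.1f} kN")
print("ULS check satisfied" if factored_resistance > factored_demand else "ULS check not satisfied")
```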
Limit state design has replaced the older concept of permissible stress design in most forms of civil engineering. A notable exception is transportation engineering. Even so, new codes are currently being developed for both geotechnical and transportation engineering which are LSD based. As a result, most modern buildings are designed in accordance with a code which is based on limit state theory. For example, in Europe, structures are designed to conform with the Eurocodes: Steel structures are designed in accordance with EN 1993, and reinforced concrete structures to EN 1992. Australia, Canada, China, France, Indonesia, and New Zealand (among many others) utilise limit state theory in the development of their design codes. In the purest sense, it is now considered inappropriate to discuss safety factors when working with LSD, as there are concerns that this may lead to confusion. Previously, it has been shown that the LRFD and ASD can produce significantly different designs of steel gable frames.
There are few situations where ASD produces significantly lighter weight steel gable frame designs. Additionally, it has been shown that in high snow regions, the difference between the methods is more dramatic.
In the United States
The United States has been particularly slow to adopt limit state design (known as Load and Resistance Factor Design in the US). Design codes and standards are issued by diverse organizations, some of which have adopted limit state design, and others have not.
The ACI 318 Building Code Requirements for Structural Concrete uses Limit State design.
The ANSI/AISC 360 Specification for Structural Steel Buildings, the ANSI/AISI S-100 North American Specification for the Design of Cold Formed Steel Structural Members, and The Aluminum Association's Aluminum Design Manual contain two methods of design side by side:
Load and Resistance Factor Design (LRFD), a Limit States Design implementation, and
Allowable Strength Design (ASD), a method where the nominal strength is divided by a safety factor to determine the allowable strength. This allowable strength is required to equal or exceed the required strength for a set of ASD load combinations. ASD is calibrated to give the same structural reliability and component size as the LRFD method with a live to dead load ratio of 3. Consequently, when structures have a live to dead load ratio that differs from 3, ASD produces designs that are either less reliable or less efficient as compared to designs resulting from the LRFD method.
In contrast, the ANSI/AWWA D100 Welded Carbon Steel Tanks for Water Storage and API 650 Welded Tanks for Oil Storage still use allowable stress design.
In Europe
In Europe, the limit state design is enforced by the Eurocodes.
See also
Allowable stress design
Probabilistic design
Seismic performance
Structural engineering
References
Citations
Sources
Structural engineering
Civil engineering | Limit state design | Engineering | 1,936 |
1,068,565 | https://en.wikipedia.org/wiki/Ernst%20Alexanderson | Ernst Frederick Werner Alexanderson (; January 25, 1878 – May 14, 1975) was a Swedish-American electrical engineer and inventor who was a pioneer in radio development. He invented the Alexanderson alternator, an early radio transmitter used between 1906 and the 1930s for longwave long distance radio transmission. Alexanderson also created the amplidyne, a direct current amplifier used during the Second World War for controlling anti-aircraft guns.
Background
Alexanderson was born in Uppsala, Sweden. He studied at the University of Lund (1896–97) and was educated at the Royal Institute of Technology in Stockholm and the Technische Hochschule in Berlin, Germany. He emigrated to the United States in 1902 and spent much of his life working for the General Electric and Radio Corporation of America.
Engineering work
Alexanderson designed the Alexanderson alternator, an early longwave radio transmitter, one of the first devices which could transmit modulated audio (sound) over radio waves. He had been employed at General Electric for only a short time when GE received an order from Canadian-born professor and researcher Reginald Fessenden, then working for the US Weather Bureau, for a specialized alternator with much higher frequency than others in existence at that time, for use as a radio transmitter. Fessenden had been working on the problem of transmitting sound by radio waves, and had concluded that a new type of radio transmitter was needed, a continuous wave transmitter. Designing a machine that would rotate fast enough to produce radio waves proved a formidable challenge. Alexanderson's family were convinced the huge spinning rotors would fly apart and kill him, and he set up a sandbagged bunker from which to test them. In the summer of 1906 Mr. Alexanderson's first effort, a 50 kHz alternator, was installed in Fessenden's radio station in Brant Rock, Massachusetts. By fall its output had been improved to 500 watts and 75 kHz. On Christmas Eve, 1906, Fessenden made an experimental broadcast of Christmas music, including him playing the violin, that was heard by Navy ships and shore stations down the East Coast as far as Arlington. This is considered the first AM radio entertainment broadcast.
Alexanderson continued improving his machine, and the Alexanderson alternator became widely used in high power very low frequency commercial and Naval wireless stations to transmit radiotelegraphy traffic at intercontinental distances, until by the 1930s it was replaced by vacuum tube transmitters. The only surviving transmitter in a working state is at the Grimeton radio station outside Varberg, Sweden. It is a prime example of pre-electronic radio technology and was added to UNESCO's World Heritage List in 2004.
Alexanderson was also instrumental in the development of television. The first television broadcast in the United States was received in 1927 at his GE Plot home at 1132 Adams Rd, Schenectady, N.Y. The following year he developed the coordination of sound and movement on the first television drama, The Queen's Messenger. In 1930, he conducted an early public demonstration of his large screen television system on a closed-circuit channel at Proctors in Schenectady.
Alexanderson retired from General Electric in 1948. The inventor and engineer remained active to an advanced age. He continued television research as a consultant for the Radio Corporation of America filing his 321st patent application in 1955. Over his lifetime, Alexanderson received 345 US patents, the last filed in 1968 at age 89. He died in 1975 and was buried at Vale Cemetery in Schenectady, New York.
Alexanderson is also mentioned in connection with the emergence of the patent system, of which he was partially critical, as discussed by the technology historian David Noble.
Kidnapping incident
In 1923, Alexanderson's son, Verner, was kidnapped. Alexanderson broadcast an appeal for help on the radio. The child was located after three days and returned to his family. The kidnappers were later caught.
Honors
IEEE Medal of Honor from the Institute of Radio Engineers, now IEEE, (1919)
IEEE Edison Medal from the American Institute of Electrical Engineers, now IEEE, (1944)
Valdemar Poulsen Gold Medal (1947)
National Inventors Hall of Fame induction (1983)
Consumer Electronics Hall of Fame induction (2002)
Patents
Alexanderson was very active as an inventor and was granted a total of 345 patents.
– High frequency alternator (100 kHz), filed April 1909; issued, November 1911
– Selective Tuning System (Tuned RF Circuit), filed October 1913; issued February 1916
– Ignition system, (RFI suppressor), filed June 1926; issued August 1929
– Radio signaling system (directional antenna), filed November 1927, issued September 1930
See also
History of numerical control
References
Other sources
Blackwelder, Julia Kirk (2014) Electric City: General Electric in Schenectady (Texas A&M University Press)
Brittain, James E. (1992) Alexanderson: Pioneer in American Electrical Engineering (Johns Hopkins University Press)
Fisher, David E. and Marshall J. Fisher (1996) Tube, the Invention of Television (Counterpoint, Washington D.C)
Related reading
Alexanderson, E.F.W. (August 1920) Trans-oceanic Radio Communication (Proceedings of the I.R.E., pp. 263–285)
External links
Illustrated biography at prof. Eugenii Katz website accessed April 10, 2006
Bibliography related to Alexanderson's contribution to history of television at Histoire de la télévision" site
Fessenden and Marconi – their technologies and transatlantic experiments compared. Accessed April 10, 2006
1878 births
1975 deaths
Electronics engineers
Swedish electrical engineers
Radio pioneers
Television pioneers
People from Schenectady, New York
KTH Royal Institute of Technology alumni
IEEE Medal of Honor recipients
IEEE Edison Medal recipients
Valdemar Poulsen Gold Medal recipients
Swedish emigrants to the United States
American electrical engineers
Swedish engineers
20th-century American engineers
20th-century American inventors
RCA people
Engineers from New York (state)
Knights of the Order of the Polar Star
American telecommunications engineers | Ernst Alexanderson | Engineering | 1,212 |
4,096,160 | https://en.wikipedia.org/wiki/ObjectARX | ObjectARX (AutoCAD Runtime eXtension) is an API for customizing and extending AutoCAD. The ObjectARX SDK is published by Autodesk and freely available under license from Autodesk. The ObjectARX SDK consists primarily of C++ headers and libraries that can be used to build Windows DLLs that can be loaded into the AutoCAD process and interact directly with the AutoCAD application. ObjectARX modules use the file extensions .arx and .dbx instead of the more common .dll.
ObjectARX is the most powerful of the various AutoCAD APIs, and the most difficult to master. The typical audience for the ObjectARX SDK includes professional programmers working either as commercial application developers or as in-house developers at companies using AutoCAD.
New versions of the ObjectARX SDK are released with each new AutoCAD release, and ObjectARX modules built with a specific SDK version are typically limited to running inside the corresponding version of AutoCAD. Recent versions of the ObjectARX SDK include support for the .NET platform by providing managed wrapper classes for native objects and functions.
The native classes and libraries that are made available via the ObjectARX API are also used internally by the AutoCAD code. As a result of this tight linkage with AutoCAD itself, the libraries are very compiler specific, and work only with the same compiler that Autodesk uses to build AutoCAD. Historically, this has required ObjectARX developers to use various versions of Microsoft Visual Studio, with different versions of the SDK requiring different versions of Visual Studio.
Although ObjectARX is specific to AutoCAD, Open Design Alliance announced in 2008 a new API called DRX (included in their DWGdirect library) that attempts to emulate the ObjectARX API in products like IntelliCAD that use the DWGdirect libraries.
References
See also
Autodesk Developer Network
Autodesk
AutoCAD
Application programming interfaces | ObjectARX | Technology | 418 |
19,719,222 | https://en.wikipedia.org/wiki/Biorepository | A biorepository is a facility that collects, catalogs, and stores samples of biological material for laboratory research. Biorepositories collect and manage specimens from animals, plants, and other living organisms. Biorepositories store many different types of specimens, including samples of blood, urine, tissue, cells, DNA, RNA, and proteins. If the samples are from people, they may be stored with medical information along with written consent to use the samples in laboratory studies.
Purpose
The purpose of a biorepository is to maintain biological specimens, and associated information, for future use in research. The biorepository maintains the quality of specimens in its collection and ensures that they are accessible for scientific research.
Operations
The four main operations of a biorepository are; (i) collection (ii) processing, (iii) storage or inventory, and (iv) distribution of biological specimens.
(i) Collection or accession occurs when a specimen arrives at the biorepository. Information about the specimen is entered into the laboratory information management system ("LIMS"), which tracks information about all of the specimens in the biorepository. Typical information linked to a specimen would be the specimen's origin and when it arrived at the biorepository.
(ii) Processing of specimens is standardized to minimize variation due to handling. Processing may prepare the specimen for long-term storage. For example, DNA samples are processed into a salt buffer (aqueous solution) of proper pH to stabilize the DNA for storage.
(iii) Storage and inventory are where all samples are held prior to being requested via a distribution request. The inventory system is composed of sample holding boxes and the boxes are stored in freezers of various types depending on the sample storage requirements.
(iv) Distribution is the process of retrieving one or more samples from the biorepository inventory system.
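A schematic of how these four operations might be tracked in a minimal LIMS-style inventory; the class names, record fields, and storage scheme below are hypothetical and far simpler than any real system:

```python
from dataclasses import dataclass

@dataclass
class Specimen:
    specimen_id: str
    origin: str              # where the specimen came from
    received_on: str         # date of arrival at the biorepository
    processed_as: str = ""   # e.g. "DNA in salt buffer" after processing
    location: str = ""       # freezer / box position once stored

class Inventory:
    """Minimal stand-in for a LIMS tracking the four operations above."""

    def __init__(self) -> None:
        self._records: dict[str, Specimen] = {}

    def accession(self, specimen: Specimen) -> None:            # (i) collection
        self._records[specimen.specimen_id] = specimen

    def process(self, specimen_id: str, method: str) -> None:   # (ii) processing
        self._records[specimen_id].processed_as = method

    def store(self, specimen_id: str, location: str) -> None:   # (iii) storage
        self._records[specimen_id].location = location

    def distribute(self, specimen_id: str) -> Specimen:         # (iv) distribution
        return self._records.pop(specimen_id)

inv = Inventory()
inv.accession(Specimen("S-0001", "study site A", "2024-01-15"))
inv.process("S-0001", "DNA in salt buffer")
inv.store("S-0001", "freezer 3, box 12, slot A4")
print(inv.distribute("S-0001"))
```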
Standard Operating Procedures
Standard Operating Procedures (SOPs) play a crucial role in the biorepository industry. There are a number of reasons why they are important:
SOPs reduce variability within the samples and storage processes by providing standardized guidelines for proper storage and care.
Biospecimen samples should closely resemble biospecimens in their natural state. SOPs help ensure that.
SOPs provide a standardized framework of how to conduct operations within a biorepository. They ensure seamless and reliable processes be implemented throughout operations.
Biological Resource Centres
The OECD has issued best practice guidelines for biorepositories, which are referred to as biological resource centres.
They are defined by the OECD as follows:
"Biological Resource Centres are an essential part of the infrastructure underpinning biotechnology. They consist of service providers and repositories of the living cells, genomes of organisms, and information relating to heredity and the functions of biological systems. BRCs contain collections of culturable organisms (e.g. micro-organisms, plant, animal and human cells), replicable parts of these (e.g. genomes, plasmids, viruses, cDNAs), viable but not yet culturable organisms, cells and tissues, as well as databases containing molecular, physiological and structural information relevant to these collections and related bioinformatics."
Examples of Biorepositories in the United States
Cell Line Repositories
The National Institute of Neurological Disorders and Stroke (NINDS) Human Cell and Data Repository maintains a collection of cell lines to advance the study of neurological disorders.
The National Institute on Aging (NIA) Aging Cell Repository facilitates research into the mechanisms of aging by providing cell lines collected from subjects of different ages.
The National Institute of General Medical Sciences (NIGMS) Human Genetic Cell Repository is collection of well-characterized human cells for use in biomedical research.
Sample Repositories
The Intermountain Healthcare Biorepository is a collection of over 4.5 million biological samples preserved in formalin and embedded in paraffin wax.
The J. Craig Venter Institute Human Reference Genome makes available DNA samples from J. Craig Venter, whose genome has been sequenced and assembled.
The Centers for Disease Control and Prevention (CDC) Genetic Testing Reference Material Program (GeT-RM) maintains DNA samples for use in molecular genetic testing. These samples are from diseases such as Huntington Disease, Cystic Fibrosis, Fragile X Syndrome, Alpha-Thalassemia, and Muenke Syndrome.
See also
Biobank
Biological database
Gene bank
Genetic fingerprinting
Genomics
Genotype
References
External links
Specimen Central biorepository list, A worldwide listing of active biobanks and biorepositories
Clinical Specimens Database and Specimen Collections Repository
Biorepository LIMS, A LIMS software solution for biobanking and biorepositories
Global Directory of Biobanks, Tissue Banks and Biorepositories
National Institute of Allergies and Infectious Diseases HIV/AIDS Specimen Repository
International Society for Biological and Environmental Repositories ("ISBER")
ProMedDx BioServices cGMP Biostorage & Biorepository - Biorepository Consulting Design
Cell&Co Biorepository - The first French Eco-Biobank
Biological specimens | Biorepository | Biology | 1,077 |
21,575,675 | https://en.wikipedia.org/wiki/TB10Cs5H2%20snoRNA | TB10Cs5H2 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecule that guide the sites of modification of uridines to pseudouridines of substrate RNAs. It is known as a small nucleolar RNA (snoRNA) thus named because of its cellular localization in the nucleolus of the eukaryotic cell. TB10Cs5H2 is predicted to guide the pseudouridylation of SSU ribosomal RNA (rRNA) at residue Ψ131.
References
Non-coding RNA | TB10Cs5H2 snoRNA | Chemistry | 123 |
1,316,648 | https://en.wikipedia.org/wiki/Constructive%20dilemma | Constructive dilemma is a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either P or R is true, then either Q or S has to be true. In sum, if two conditionals are true and at least one of their antecedents is, then at least one of their consequents must be too. Constructive dilemma is the disjunctive version of modus ponens, whereas destructive dilemma is the disjunctive version of modus tollens. The constructive dilemma rule can be stated:
P → Q, R → S, P ∨ R
∴ Q ∨ S
where the rule is that whenever instances of "P → Q", "R → S", and "P ∨ R" appear on lines of a proof, "Q ∨ S" can be placed on a subsequent line.
Formal notation
The constructive dilemma rule may be written in sequent notation:
(P → Q), (R → S), (P ∨ R) ⊢ (Q ∨ S)
where ⊢ is a metalogical symbol meaning that Q ∨ S is a syntactic consequence of P → Q, R → S, and P ∨ R in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
(((P → Q) ∧ (R → S)) ∧ (P ∨ R)) → (Q ∨ S)
where P, Q, R, and S are propositions expressed in some formal system.
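The tautology above can be verified mechanically by checking every truth assignment; a short sketch (the helper name is arbitrary):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Look for assignments falsifying ((P -> Q) and (R -> S) and (P or R)) -> (Q or S).
counterexamples = [
    (p, q, r, s)
    for p, q, r, s in product([False, True], repeat=4)
    if not implies(implies(p, q) and implies(r, s) and (p or r), q or s)
]
print(counterexamples)  # [] -- no assignment falsifies it, so it is a tautology
```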
Natural language example
If I win a million dollars, I will donate it to an orphanage.
If my friend wins a million dollars, he will donate it to a wildlife fund.
Either I win a million dollars or my friend wins a million dollars.
Therefore, either an orphanage will get a million dollars, or a wildlife fund will get a million dollars.
The dilemma derives its name from the transfer of the disjunctive operator.
References
Rules of inference
Dilemmas
Theorems in propositional logic | Constructive dilemma | Mathematics | 319 |
22,607,832 | https://en.wikipedia.org/wiki/Manta%20trawl | A manta trawl is a net system for sampling the surface of the ocean. It resembles a manta ray, with metal wings and a broad mouth. The net it pulls is made of thin mesh, and the whole trawl is towed behind a scientific research vessel. The manta trawl is useful for collecting samples from the surface of the ocean, such as sampling the plastic pieces making up the Great Pacific Garbage Patch as well as the associated plankton.
External links
a Photo at Flickr
The Plastic Ocean Project: Preparing the Manta Trawl including video
References
Planktology
Aquatic ecology
Biological oceanography
Oceanographic instrumentation | Manta trawl | Technology,Engineering,Biology | 130 |
1,240,378 | https://en.wikipedia.org/wiki/Symmetry%20breaking | In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. This collapse is often one of many possible bifurcations that a particle can take as it approaches a lower energy state. Due to the many possibilities, an observer may assume the result of the collapse to be arbitrary. This phenomenon is fundamental to quantum field theory (QFT), and further, contemporary understandings of physics. Specifically, it plays a central role in the Glashow–Weinberg–Salam model, which forms part of the Standard Model and describes the electroweak sector. In an infinite system (Minkowski spacetime) symmetry breaking occurs; however, in a finite system (that is, any real super-condensed system), the system is less predictable, but in many cases quantum tunneling occurs. Symmetry breaking and tunneling relate through the collapse of a particle into a non-symmetric state as it seeks a lower energy.
Symmetry breaking can be distinguished into two types, explicit and spontaneous. They are characterized by whether the equations of motion fail to be invariant, or the ground state fails to be invariant.
Non-technical description
This section describes spontaneous symmetry breaking. This is the idea that for a physical system, the lowest energy configuration (the vacuum state) is not the most symmetric configuration of the system. Roughly speaking there are three types of symmetry that can be broken: discrete, continuous and gauge, ordered in increasing technicality.
An example of a system with discrete symmetry is given by the figure with the red graph: consider a particle moving on this graph, subject to gravity. A similar graph could be given by a quartic double-well function such as y = x⁴ − x². This system is symmetric under reflection in the y-axis. There are three possible stationary states for the particle: the top of the hill at x = 0, or the two bottoms of the wells, at x = ±x₀, where ±x₀ are the two minima of the graph. When the particle is at the top, the configuration respects the reflection symmetry: the particle stays in the same place when reflected. However, the lowest energy configurations are those at x = ±x₀. When the particle is in either of these configurations, it is no longer fixed under reflection in the y-axis: reflection swaps the two vacuum states.
An example with continuous symmetry is given by a 3d analogue of the previous example, obtained by rotating the graph around an axis through the top of the hill, or equivalently by taking the corresponding surface of revolution. This is essentially the graph of the Mexican hat potential. This has a continuous symmetry given by rotation about the axis through the top of the hill (as well as a discrete symmetry by reflection through any radial plane). Again, if the particle is at the top of the hill it is fixed under rotations, but it has higher gravitational energy at the top. At the bottom, it is no longer invariant under rotations but minimizes its gravitational potential energy. Furthermore rotations move the particle from one energy minimizing configuration to another. There is a novelty here not seen in the previous example: from any of the vacuum states it is possible to access any other vacuum state with only a small amount of energy, by moving around the trough at the bottom of the hill, whereas in the previous example, to access the other vacuum, the particle would have to cross the hill, requiring a large amount of energy.
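A small numerical sketch of the discrete-symmetry example above, using the quartic double well V(x) = x⁴ − x² as a stand-in potential and a simple grid search for the lowest-energy configurations:

```python
# The symmetric point x = 0 is stationary but not the minimum of the
# double well V(x) = x**4 - x**2; the two minima are swapped by x -> -x.
def V(x: float) -> float:
    return x**4 - x**2

xs = [i / 1000.0 for i in range(-2000, 2001)]   # grid on [-2, 2]
x_min = min(xs, key=V)

print(f"V(0)       = {V(0.0):+.4f}  (symmetric configuration)")
print(f"V({x_min:+.3f}) = {V(x_min):+.4f}  (a lowest-energy configuration)")
print(f"V({-x_min:+.3f}) = {V(-x_min):+.4f}  (its mirror image)")
```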
Gauge symmetry breaking is the most subtle, but has important physical consequences. Roughly speaking, for the purposes of this section a gauge symmetry is an assignment of systems with continuous symmetry to every point in spacetime. Gauge symmetry forbids mass generation for gauge fields, yet massive gauge fields (W and Z bosons) have been observed. Spontaneous symmetry breaking was developed to resolve this inconsistency. The idea is that in an early stage of the universe it was in a high energy state, analogous to the particle being at the top of the hill, and so had full gauge symmetry and all the gauge fields were massless. As it cooled, it settled into a choice of vacuum, thus spontaneously breaking the symmetry, thus removing the gauge symmetry and allowing mass generation of those gauge fields. A full explanation is highly technical: see electroweak interaction.
Spontaneous symmetry breaking
In spontaneous symmetry breaking (SSB), the equations of motion of the system are invariant, but any vacuum state (lowest energy state) is not.
For an example with two-fold symmetry, if there is some atom that has two vacuum states, occupying either one of these states breaks the two-fold symmetry. This act of selecting one of the states as the system reaches a lower energy is SSB. When this happens, the atom is no longer symmetric (reflectively symmetric) and has collapsed into a lower energy state.
Such a symmetry breaking is parametrized by an order parameter. A special case of this type of symmetry breaking is dynamical symmetry breaking.
In the Lagrangian setting of Quantum field theory (QFT), the Lagrangian is a functional of quantum fields which is invariant under the action of a symmetry group G. However, the vacuum expectation value formed when the particle collapses to a lower energy may not be invariant under G. In this instance, it will partially break the symmetry of G into a subgroup H. This is spontaneous symmetry breaking.
Within the context of gauge symmetry however, SSB is the phenomenon by which gauge fields 'acquire mass' despite gauge-invariance enforcing that such fields be massless. This is because the SSB of gauge symmetry breaks gauge-invariance, and such a break allows for the existence of massive gauge fields. This is an important exception to Goldstone's theorem, where a Nambu-Goldstone boson can gain mass, becoming a Higgs boson in the process.
Further, in this context the usage of 'symmetry breaking' while standard, is a misnomer, as gauge 'symmetry' is not really a symmetry but a redundancy in the description of the system. Mathematically, this redundancy is a choice of trivialization, somewhat analogous to redundancy arising from a choice of basis.
Spontaneous symmetry breaking is also associated with phase transitions. For example in the Ising model, as the temperature of the system falls below the critical temperature the symmetry of the vacuum is broken, giving a phase transition of the system.
Explicit symmetry breaking
In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In Hamiltonian mechanics or Lagrangian mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry.
In the Hamiltonian setting, this is often studied when the Hamiltonian can be written H = H0 + H1.
Here H0 is a 'base Hamiltonian', which has some manifest symmetry. More explicitly, it is symmetric under the action of a (Lie) group G. Often this is an integrable Hamiltonian.
The term H1 is a perturbation or interaction Hamiltonian. This is not invariant under the action of G. It is often proportional to a small, perturbative parameter.
This is essentially the paradigm for perturbation theory in quantum mechanics. An example of its use is in finding the fine structure of atomic spectra.
Examples
Symmetry breaking can cover any of the following scenarios:
The breaking of an exact symmetry of the underlying laws of physics by the apparently random formation of some structure;
A situation in physics in which a minimal energy state has less symmetry than the system itself;
Situations where the actual state of the system does not reflect the underlying symmetries of the dynamics because the manifestly symmetric state is unstable (stability is gained at the cost of local asymmetry);
Situations where the equations of a theory may have certain symmetries, though their solutions may not (the symmetries are "hidden").
One of the first cases of broken symmetry discussed in the physics literature is related to the form taken by a uniformly rotating body of incompressible fluid in gravitational and hydrostatic equilibrium. Jacobi and, soon after, Liouville, in 1834, discussed the fact that a tri-axial ellipsoid was an equilibrium solution for this problem when the kinetic energy compared to the gravitational energy of the rotating body exceeded a certain critical value. The axial symmetry presented by the Maclaurin spheroids is broken at this bifurcation point. Furthermore, above this bifurcation point, and for constant angular momentum, the solutions that minimize the kinetic energy are the non-axially symmetric Jacobi ellipsoids instead of the Maclaurin spheroids.
See also
Higgs mechanism
QCD vacuum
1964 PRL symmetry breaking papers
References
External links
Symmetry
Pattern formation
Theoretical physics
Quantum field theory
Standard Model | Symmetry breaking | Physics,Mathematics | 1,749 |
588,260 | https://en.wikipedia.org/wiki/Kakeya%20set | In mathematics, a Kakeya set, or Besicovitch set, is a set of points in Euclidean space which contains a unit line segment in every direction. For instance, a disk of radius 1/2 in the Euclidean plane, or a ball of radius 1/2 in three-dimensional space, forms a Kakeya set. Much of the research in this area has studied the problem of how small such sets can be. Besicovitch showed that there are Besicovitch sets of measure zero.
A Kakeya needle set (sometimes also known as a Kakeya set) is a (Besicovitch) set in the plane with a stronger property, that a unit line segment can be rotated continuously through 180 degrees within it, returning to its original position with reversed orientation. Again, the disk of radius 1/2 is an example of a Kakeya needle set.
Kakeya needle problem
The Kakeya needle problem asks whether there is a minimum area of a region in the plane, in which a needle of unit length can be turned through 360°. This question was first posed, for convex regions, by Kakeya. The minimum area for convex sets is achieved by an equilateral triangle of height 1 and area 1/√3, as Pál showed.
Kakeya seems to have suggested that the Kakeya set of minimum area, without the convexity restriction, would be a three-pointed deltoid shape. However, this is false; there are smaller non-convex Kakeya sets.
Besicovitch needle sets
Besicovitch was able to show that there is no lower bound greater than zero for the area of such a region, in which a needle of unit length can be turned around. That is, for every ε > 0, there is a region of area less than ε within which the needle can move through a continuous motion that rotates it a full 360 degrees. This built on earlier work of his, on plane sets which contain a unit segment in each orientation. Such a set is now called a Besicovitch set. Besicovitch's work showing such a set could have arbitrarily small measure was from 1919. The problem may have been considered by analysts before that.
One method of constructing a Besicovitch set (see figure for corresponding illustrations) is known as a "Perron tree" after Oskar Perron who was able to simplify Besicovitch's original construction. The precise construction and numerical bounds are given in Besicovitch's popularization.
The first observation to make is that the needle can move in a straight line as far as it wants without sweeping any area. This is because the needle is a zero width line segment. The second trick of Pál, known as Pál joins, describes how to move the needle between any two locations that are parallel while sweeping negligible area. The needle will follow the shape of an "N". It moves from the first location some distance up the left side of the "N", sweeps out the angle to the middle diagonal, moves down the diagonal, sweeps out the second angle, and then moves up the parallel right side of the "N" until it reaches the required second location. The only non-zero area regions swept are the two triangles of height one and the small angle at the top of the "N". The swept area is proportional to this angle, which can be made as small as desired by making the "N" sufficiently tall.
The construction starts with any triangle with height 1 and some substantial angle at the top through which the needle can easily sweep. The goal is to perform many operations on this triangle to make its area smaller while keeping the directions through which the needle can sweep the same.
First consider dividing the triangle in two and translating the pieces over each other so that their bases overlap in a way that minimizes the total area.
The needle is able to sweep out the same directions by sweeping out those given by the first triangle, jumping over to the second, and then sweeping out the directions given by the second. The needle can jump triangles using the "N" technique because the two lines at which the original triangle was cut are parallel.
Now, suppose we divide our triangle into 2^n subtriangles. The figure shows eight.
For each consecutive pair of triangles, perform the same overlapping operation we described before to get half as many new shapes, each consisting of two overlapping triangles. Next, overlap consecutive pairs of these new shapes by shifting them
so that their bases overlap in a way that minimizes the total area. Repeat this n times until there is only one shape. Again, the needle is able to sweep out the same directions by sweeping those out in each of the
2^n subtriangles in order of their direction. The needle can jump between consecutive triangles using the "N" technique because the two lines at which these triangles were cut are parallel.
What remains is to compute the area of the final shape. The proof is too hard to present here. Instead, we will just argue how the numbers might go.
Looking at the figure, one sees that the 2^n subtriangles overlap a lot. All of them overlap at the bottom, half of them at the bottom of the left branch, a quarter of them at the bottom of the left-left branch, and so on.
Suppose that the area of each shape created with i merging operations from 2^i subtriangles is bounded by A_i. Before merging two of these shapes, they have area bounded by 2A_i. Then we move the two shapes together in the way that overlaps them as much as possible.
In the worst case, these two regions are two 1 × ε rectangles perpendicular to each other, so that they overlap in an area of only ε². But the two shapes that we have constructed, if long and skinny, point in much the same direction because they are made from consecutive groups of subtriangles.
The hand-waving step asserts that they overlap by at least 1% of their area, so that the merged area would be bounded by A_{i+1} = 1.99 A_i. The area of the original triangle is bounded by 1; hence the area of each subtriangle is bounded by A_0 = 2^{−n}, and the final shape has area bounded by A_n = 1.99^n × 2^{−n}. In actuality, a careful summing up of all the areas that do not overlap shows that the area of the final region is much bigger, namely 1/n.
As n grows, this area shrinks to zero.
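To see why even the hand-waving bound already forces the area to vanish (a one-line check using only the numbers given above), note that

 A_n = 1.99^n × 2^{−n} = (1.99/2)^n = 0.995^n,

a geometric factor strictly less than 1 raised to the power n; the sharper 1/n estimate quoted above also tends to zero, only more slowly.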
A Besicovitch set can be created by combining six rotations of a Perron tree created from an equilateral triangle.
A similar construction can be made with parallelograms.
There are other methods for constructing Besicovitch sets of measure zero aside from the 'sprouting' method.
For example, Kahane uses Cantor sets to construct a Besicovitch set of measure zero in the two-dimensional plane.
In 1941, H. J. Van Alphen showed that there are arbitrarily small Kakeya needle sets inside a circle with radius 2 + ε (arbitrary ε > 0). Simply connected Kakeya needle sets with smaller area than the deltoid were found in 1965. Melvin Bloom and I. J. Schoenberg independently presented Kakeya needle sets with areas approaching (5 − 2√2)π/24, the Bloom–Schoenberg number. Schoenberg conjectured that this number is the lower bound for the area of simply connected Kakeya needle sets. However, in 1971, F. Cunningham showed that, given ε > 0, there is a simply connected Kakeya needle set of area less than ε contained in a circle of radius 1.
Although there are Kakeya needle sets of arbitrarily small positive measure and Besicovitch sets of measure 0, there are no Kakeya needle sets of measure 0.
Kakeya conjecture
Statement
The same question of how small these Besicovitch sets could be was then posed in higher dimensions, giving rise to a number of conjectures known collectively as the Kakeya conjectures, which have helped initiate the field of mathematics known as geometric measure theory. In particular, if there exist Besicovitch sets of measure zero, could they also have s-dimensional Hausdorff measure zero for some dimension s less than the dimension of the space in which they lie? This question gives rise to the following conjecture:
Kakeya set conjecture: Define a Besicovitch set in R^n to be a set which contains a unit line segment in every direction. Is it true that such sets necessarily have Hausdorff dimension and Minkowski dimension equal to n?
This is known to be true for n = 1, 2 but only partial results are known in higher dimensions.
Kakeya maximal function
A modern way of approaching this problem is to consider a particular type of maximal function, which we construct as follows: Denote by S^(n−1) ⊂ R^n the unit sphere in n-dimensional space. Define T_e^δ(a) to be the cylinder of length 1 and radius δ > 0, centered at the point a ∈ R^n, and whose long side is parallel to the direction of the unit vector e ∈ S^(n−1). Then for a locally integrable function f, we define the Kakeya maximal function of f to be
 f_δ*(e) = sup_{a ∈ R^n} (1/m(T_e^δ(a))) ∫_{T_e^δ(a)} |f(y)| dm(y),
where m denotes the n-dimensional Lebesgue measure. Notice that f_δ* is defined for vectors e in the sphere S^(n−1).
Then there is a conjecture for these functions that, if true, will imply the Kakeya set conjecture for higher dimensions:
Kakeya maximal function conjecture: For all ε > 0, there exists a constant C_ε > 0 such that for any function f and all δ > 0,
 ‖f_δ*‖_{L^n(S^(n−1))} ≤ C_ε δ^(−ε) ‖f‖_{L^n(R^n)}
(see Lp space for notation).
Results
Some results toward proving the Kakeya conjecture are the following:
The Kakeya conjecture is true for n = 1 (trivially) and n = 2 (Davies).
In any n-dimensional space, Wolff showed that the dimension of a Kakeya set must be at least (n+2)/2.
In 2002, Katz and Tao improved Wolff's bound to (2 − √2)(n − 4) + 3, which is better for n > 4.
In 2000, Katz, Łaba, and Tao proved that the Minkowski dimension of Kakeya sets in 3 dimensions is strictly greater than 5/2.
In 2000, Jean Bourgain connected the Kakeya problem to arithmetic combinatorics which involves harmonic analysis and additive number theory.
In 2017, Katz and Zahl improved the lower bound on the Hausdorff dimension of Besicovitch sets in 3 dimensions to 5/2 + ε₀ for an absolute constant ε₀ > 0.
Applications to analysis
Somewhat surprisingly, these conjectures have been shown to be connected to a number of questions in other fields, notably in harmonic analysis. For instance, in 1971, Charles Fefferman was able to use the Besicovitch set construction to show that in dimensions greater than 1, truncated Fourier integrals taken over balls centered at the origin with radii tending to infinity need not converge in Lp norm when p ≠ 2 (this is in contrast to the one-dimensional case where such truncated integrals do converge).
Analogues and generalizations of the Kakeya problem
Sets containing circles and spheres
Analogues of the Kakeya problem include considering sets containing more general shapes than lines, such as circles.
In 1997 and 1999, Wolff proved that sets containing a sphere of every radius must have full dimension, that is, the dimension is equal to the dimension of the space it is lying in, and proved this by proving bounds on a circular maximal function analogous to the Kakeya maximal function.
It was conjectured that there existed sets containing a sphere around every point of measure zero. Results of Elias Stein proved all such sets must have positive measure when n ≥ 3, and Marstrand proved the same for the case n=2.
Sets containing k-dimensional disks
A generalization of the Kakeya conjecture is to consider sets that contain, instead of segments of lines in every direction, portions of k-dimensional subspaces. Define an (n, k)-Besicovitch set K to be a compact set in R^n of Lebesgue measure zero containing a translate of every k-dimensional unit disk. That is, if B denotes the unit ball centered at zero, then for every k-dimensional subspace P, there exists x ∈ R^n such that (P ∩ B) + x ⊆ K. Hence, an (n, 1)-Besicovitch set is the standard Besicovitch set described earlier.
The (n, k)-Besicovitch conjecture: There are no (n, k)-Besicovitch sets for k > 1.
In 1979, Marstrand proved that there were no (3, 2)-Besicovitch sets. At around the same time, however, Falconer proved that there were no (n, k)-Besicovitch sets for 2k > n. The best bound to date is by Bourgain, who proved that no such sets exist when 2^(k−1) + k > n.
Kakeya sets in vector spaces over finite fields
In 1999, Wolff posed the finite field analogue to the Kakeya problem, in hopes that the techniques for solving this conjecture could be carried over to the Euclidean case.
Finite Field Kakeya Conjecture: Let F be a finite field, and let K ⊆ F^n be a Kakeya set, i.e. for each vector y ∈ F^n there exists x ∈ F^n such that K contains a line {x + ty : t ∈ F}. Then the set K has size at least c_n|F|^n, where c_n > 0 is a constant that depends only on n.
Zeev Dvir proved this conjecture in 2008, showing that the statement holds for c_n = 1/n!. In his proof, he observed that any polynomial in n variables of degree less than |F| vanishing on a Kakeya set must be identically zero. On the other hand, the polynomials in n variables of degree less than |F| form a vector space of dimension C(|F| + n − 1, n) ≥ |F|^n/n!.
Therefore, there is at least one non-trivial polynomial of degree less than |F| that vanishes on any given set with fewer than this number of points. Combining these two observations shows that Kakeya sets must have at least |F|^n/n! points.
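The dimension count used here reduces to an elementary binomial estimate (spelled out for convenience; it is not stated explicitly above): the monomials of degree less than |F| in n variables number

 C(|F| + n − 1, n) = ∏_{i=1}^{n} (|F| + i − 1)/i ≥ ∏_{i=1}^{n} |F|/i = |F|^n/n!,

since each factor |F| + i − 1 is at least |F|. Hence any set with fewer than |F|^n/n! points admits a non-trivial vanishing polynomial of degree less than |F|, which is exactly the property Dvir's argument exploits.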
It is not clear whether the techniques will extend to proving the original Kakeya conjecture but this proof does lend credence to the original conjecture by making essentially algebraic counterexamples unlikely. Dvir has written a survey article on progress on the finite field Kakeya problem and its relationship to randomness extractors.
See also
Nikodym set
Notes
References
External links
Kakeya at University of British Columbia
Besicovitch at UCLA
Kakeya needle problem at mathworld
Dvir's proof of the finite field Kakeya conjecture at Terence Tao's blog
An Introduction to Besicovitch-Kakeya Sets
Harmonic analysis
Real analysis
Discrete geometry
Eponyms in geometry | Kakeya set | Mathematics | 3,073 |
2,358,446 | https://en.wikipedia.org/wiki/ZBLAN | ZBLAN is the most stable, and consequently the most used, fluoride glass, a subcategory of the heavy metal fluoride glass (HMFG) group. Typically its composition is 53% ZrF4, 20% BaF2, 4% LaF3, 3% AlF3 and 20% NaF. ZBLAN is not a single material but rather has a spectrum of compositions, many of which are still untried. The biggest library in the world of ZBLAN glass compositions is currently owned by Le Verre Fluore, the oldest company working on HMFG technology. Other current ZBLAN fiber manufacturers are Thorlabs and KDD Fiberlabs. Hafnium fluoride is chemically similar to zirconium fluoride, and is sometimes used in place of it.
ZBLAN glass has a broad optical transmission window extending from 0.22 micrometers in the UV to 7 micrometers in the infrared. ZBLAN has low refractive index (about 1.5), a relatively low glass transition temperature (Tg) of 260–300 °C, low dispersion and a low and negative temperature dependence of refractive index dn/dT.
History
The first fluorozirconate glass was a serendipitous discovery in March 1974 by the Poulain brothers and their co-workers at the University of Rennes in France.
While looking for new crystalline complex fluorides, they obtained unexpected pieces of glass. In a first step, these glasses were investigated for spectroscopic purposes.
Glass formation was studied in the ZrF4-BaF2-NaF ternary system while the fluorescence of neodymium was characterized in quaternary ZrF4-BaF2-NaF-NdF3 bulk samples. The chemical composition of this original glass was very close to that of the classical ZBLAN, on the basis of a simple La/Nd substitution.
Further experimental work led to major advances. First, ammonium bifluoride processing replaced the initial preparation method based on heat treatment of anhydrous fluorides in a metallic sealed tube. This process was already used by K. H. Sun, a pioneer of beryllium fluoride glasses. It offers significant advantages: preparation is implemented at room atmosphere in long platinum crucibles, zirconium oxide can be used as a starting material instead of pure ZrF4, synthesis time is reduced from 15 hours to less than one hour, and larger samples are obtained. One of the problems encountered was the devitrification tendency upon cooling the melt.
The second breakthrough was the discovery of the stabilizing effect of aluminum fluoride in fluorozirconate glasses. The initial systems were fluorozirconates with ZrF4 as the primary constituent (>50 mol%), BaF2 main modifier (>30 mol%) and other metal fluorides LaF3, AlF3 added as tertiary constituents, to increase glass stability or improve other glass properties. Various pseudo-ternary systems were investigated at 4 mol% AlF3, leading to the definition of 7 stable glasses, such as ZBNA, ZBLA, ZBYA, ZBCA that could be cast as multi-kilogram bulk samples and resulted later in the classical ZBLAN glass composition that combines ZBNA and ZBLA.
Further development of the preparation method, scale-up, improvements of the manufacturing process, material stability and formulations was largely motivated by experiments in French telecommunications at the time, which found that the intrinsic absorption of ZBLAN fibers was quite low (~10 dB/km) and could lead to an ultra-low optical loss solution in the mid-infrared. Such optical fibers could then become an excellent technical solution for a variety of systems for telecommunications, sensing and other applications.
Glass preparation
Fluoride glasses have to be processed in a very dry atmosphere in order to avoid oxyfluoride formation which will lead to glass-ceramic (crystallized glass) formation. The material is usually manufactured by the melting-quenching method. First the raw products are introduced in a platinum crucible, then melted, fined above 800 °C and cast in a metallic mold to ensure a high cooling rate (quenching), which favors glass formation. Finally they are annealed in a furnace to reduce the thermal stresses induced during the quenching phase. This process results in large transparent pieces of fluoride glass.
Material properties
Optical
The most obvious feature of fluoride glasses is their extended transmission range. It covers a broad optical spectrum from the UV to the mid-infrared.
The polarisability of fluorine anions is smaller than that of oxygen anions. For this reason, the refractive index of crystalline fluorides is generally low. This also applies to fluoride glasses: the index of ZBLAN glass is close to 1.5, while it exceeds 2 for zirconia ZrO2. Cationic polarisability must also be considered. The general trend is that it increases with atomic number. Thus, in crystals, the refractive index of lithium fluoride LiF is 1.39 while it is 1.72 for lead fluoride PbF2. One exception concerns fluorozirconate glasses: hafnium is chemically very close to zirconium, but with a much larger atomic mass (178 g/mol vs 91 g/mol); yet the refractive index of fluorohafnate glasses is smaller than that of fluorozirconates with the same molar composition. This is classically explained by the well-known lanthanide contraction, which results from the filling of the f subshell and leads to a smaller ionic radius. Substituting zirconium by hafnium thus provides an easy way to adjust the numerical aperture of optical fibers.
Optical dispersion expresses the variation of the refractive index with wavelength. It is expected to be low for glasses with a small refractive index. In the visible spectrum it is often quantified by the Abbe number. ZBLAN exhibits zero dispersion at about 1.72 μm, compared with about 1.3 μm for silica glass.
Refractive index changes with temperature because the polarisability of the chemical bonds increases with temperature, and because thermal expansion decreases the number of polarisable elements per unit volume. As a result, dn/dT is positive for silica, while it is negative for fluoride glasses.
At high power densities, the refractive index follows the relation
 n = n0 + n2I,
where n0 is the index observed at low power levels, n2 is the nonlinear index, and I is the optical intensity. Nonlinearity is smaller in low-index materials. In ZBLAN, the value of n2 lies between 1×10−20 and 2×10−20 m2W−1.
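As an illustrative order-of-magnitude check (the intensity chosen here is an arbitrary example value, not one taken from the sources above): with n2 ≈ 2×10−20 m2W−1, an intensity of 10^13 W/m2 (1 GW/cm2) changes the index by only
 Δn = n2 I ≈ (2×10−20)(10^13) = 2×10−7,
consistent with the statement that nonlinearity is weak in such low-index materials except at very high power densities.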
Thermal
The glass transition temperature Tg is the major characteristic temperature of a glass. It corresponds to the transition between the rigid, solid-like glassy state and the supercooled liquid state. At temperatures higher than Tg, glass is not rigid: its shape will change under external strain or even under its own weight. For ZBLAN, Tg ranges from 250 to 300 °C, depending on composition, mainly the sodium content.
Beyond Tg, molten glass becomes prone to devitrification. This transformation is commonly evidenced by differential thermal analysis (DTA). Two characteristic temperatures are measured from the DTA curve: Tx corresponds to the onset of crystallization and Tc is taken at the maximum of the exothermic peak. Glass scientists also use the liquidus temperature TL; above this temperature the liquid does not produce any crystals and may remain indefinitely in the liquid state.
Thermal expansion data have been reported for a number of fluoride glasses, in the temperature range between ambient and Tg. In this range, as for most glasses, expansion is almost linearly dependent on temperature.
Mechanical
Fiber optics
Thanks to its glassy state, ZBLAN can be drawn into optical fibers, using two glass compositions with different refractive indices to ensure guidance: the core glass and the cladding glass. It is critical to the quality of the manufactured fiber that the drawing temperature and the humidity of the environment be tightly controlled during the fiber drawing process. In contrast to other glasses, the temperature dependence of ZBLAN's viscosity is very steep.
ZBLAN fiber manufacturers have demonstrated significant increases in mechanical properties (>100 kpsi or 700 MPa for 125 μm fiber) and attenuation as low as 3 dB/km at 2.6 μm. ZBLAN optical fibers are used in different applications such as spectroscopy and sensing, laser power delivery and fiber lasers and amplifiers.
Comparison with alternative fiber technologies
Early silica optical fiber had attenuation coefficients on the order of 1000 dB/km, as reported in 1965. Kapron et al. reported in 1970 fibers having an attenuation coefficient of ~20 dB/km at 0.632 μm, and Miya et al. reported in 1979 ~0.2 dB/km attenuation at 1.550 μm. Nowadays, silica optical fibers are routinely manufactured with an attenuation of <0.2 dB/km, with Nagayama et al. reporting in 2002 an attenuation coefficient as low as 0.151 dB/km at 1.568 μm. This four-order-of-magnitude reduction in the attenuation of silica optical fibers over four decades was the result of constant improvement of manufacturing processes, raw material purity, and improved preform and fiber designs, which allowed these fibers to approach the theoretical lower limit of attenuation.
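For a sense of scale (a standard conversion, not specific to the references cited here), an attenuation coefficient α in dB/km corresponds to a transmitted power fraction over a length L in km of
 P_out/P_in = 10^(−αL/10).
At 1000 dB/km essentially no light survives a single kilometre (a factor of 10^(−100)), whereas at 0.2 dB/km about 95% of the power remains after 1 km (10^(−0.02) ≈ 0.955).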
The advantages of ZBLAN over silica are: superior transmittance (especially in the UV and IR), higher bandwidth for signal transmission, spectral broadening (or supercontinuum generation) and low chromatic dispersion.
The graph at right compares, as a function of wavelength, the theoretical predicted attenuation (dB/km) of silica (dashed blue line) with a typical ZBLAN formulation (solid gray line) as constructed from the dominant contributions: Rayleigh scattering (dashed gray line), infrared (IR) absorption (dashed black line) and UV absorption (dotted gray line).
The difficulties that the community encountered when trying to use heavy metal fluoride glasses in the early years of development for a variety of applications were mostly related to the fragility of the fibers, a major drawback that prevented their broader adoption. However, the developers and manufacturers have dedicated significant effort in the last two decades to better understand the underlying causes of fiber fragility. The original fiber failure was primarily caused by surface defects, largely related to crystallization due to nucleation and growth, phenomena induced by factors such as raw material impurities and environmental conditions (humidity of the atmosphere during drawing, atmospheric pollutants such as vapors and dust, etc.) during processing. The particular focus on processing improvements has resulted in a 10× increase in the fiber strength. Compared to silica fiber, the intrinsic fiber strength of HMFG is currently only a factor of 2–3 lower. For example, the breaking radius of a standard 125 μm single-mode fiber is < 1.5 mm for silica and < 4 mm for ZBLAN. The technology has evolved such that HMFG fibers can be jacketed to ensure that the bending radius of the cable will never reach the breaking point and thus comply with industrial requirements. The product catalogs usually call out a safe bending radius to ensure that end users handling the fiber stay within the safe margins.
Contrary to current opinion, fluoride glasses are very stable even in humid atmospheres and usually do not require dry storage as long as water remains in the vapor phase (i.e., is not condensed on the fiber). Problems arise when the surface of the fiber comes into direct contact with liquid water (the polymeric coating usually applied to the fibers is permeable to water, allowing water to diffuse through it). Current storage and transportation techniques require a very simple packaging strategy: the fiber spools are usually sealed in plastic together with a desiccant to avoid water condensation on the fiber. Studies of water attack on HMFG have shown that prolonged (> 1 hour) contact with water induces a drop in the pH of the solution, which in turn accelerates the attack (the rate of attack increases as pH decreases). The leach rate of ZBLAN in water at pH = 8 is 10−5 g·cm−2 per day, with a five-order-of-magnitude decrease between pH = 2 and pH = 8. The particular sensitivity of HMFG fibers such as ZBLAN to water is due to the chemical reaction between water molecules and the F− anions, which leads to the slow dissolution of the fibers. Silica fibers have a similar vulnerability to hydrofluoric acid, HF, which attacks the fibers directly and leads to their breakup. Atmospheric moisture has a very limited effect on fluoride glasses in general, and fluoride glass/fibers can be used in a wide range of operating environments over extended periods of time without any material degradation.
A large variety of multicomponent fluoride glasses have been fabricated but few can be drawn into optical fiber. The fiber fabrication is similar to any glass-fiber drawing technology. All methods involve fabrication from the melt, which creates inherent problems such as the formation of bubbles, core-clad interface irregularities, and small preform sizes. The process occurs at 310 °C in a controlled atmosphere (to minimize contamination by moisture or oxygen impurities which significantly weaken the fiber) using a narrow heat zone compared to silica. Drawing is complicated by a small difference (only 124 °C) between the glass transition temperature and the crystallization temperature. As a result, ZBLAN fibers often contain undesired crystallites. The concentration of crystallites was shown in 1998 to be reduced by making ZBLAN in zero gravity (see figure). One hypothesis is that microgravity suppresses convection in the atmosphere surrounding the fiber during the drawing process, leading to the formation of fewer crystallites. One recent experiment aims to examine whether electrostatically levitated ZBLAN fibers can be made to exhibit properties similar to those obtained in microgravity. However, as of 2021, no quantitative models have been proposed to explain the experimental observations, and the precise causes of the differences between ZBLAN fibers drawn under different gravitational situations remain unknown.
References
Non-oxide glasses
Optical materials
Zirconium(IV) compounds
Lanthanum compounds
Barium compounds
Aluminium compounds
Fluorides
Sodium compounds | ZBLAN | Physics,Chemistry | 3,026 |
30,476,777 | https://en.wikipedia.org/wiki/Console%20television | A console television is a type of CRT television most popular in, but not exclusive to, the United States and Canada. Console CRT televisions are distinguished from standard CRT televisions by their factory-built, non-removable, wooden cabinets and speakers, which form an integral part of the television's design.
Best suited to television sizes of under 30 inches, they eventually became obsolete due to the increasing popularity of ever larger televisions in the late 1980s onward. However, they were manufactured and used well into the early 2000s.
Description
Console televisions were originally housed in roughly rectangular, radiogram-style cabinets and included radio and record player facilities. However, from approximately the mid-1970s onwards, as radiograms declined and hi-fi equipment increased in popularity, console televisions became more cuboid in shape and most commonly combined television and radio receiving features, less commonly with the addition of an eight-track player.
Manufacturers
Companies that made these types of television included Zenith, RCA, Panasonic, Sony, Magnavox, Mitsubishi, Sylvania, and Quasar.
References
Television technology
Cabinets (furniture)
Television sets | Console television | Technology | 231 |
63,093,546 | https://en.wikipedia.org/wiki/NGC%20950 | NGC 950 is a barred spiral galaxy in the constellation Cetus. It is approximately 205 million light-years away from the Solar System and has a diameter of about 85,000 light-years. The object was discovered in 1886 by American astronomer and mathematician Ormond Stone.
See also
List of NGC objects (1–1000)
References
Barred spiral galaxies
0950
Cetus
009461 | NGC 950 | Astronomy | 79 |
58,920,692 | https://en.wikipedia.org/wiki/CN%20Andromedae | CN Andromedae (CN And) is an eclipsing binary star in the constellation Andromeda. Its maximum apparent visual magnitude is 9.62 and drops down to a minimum of 10.2 during the main eclipse. It is classified as a Beta Lyrae variable with a period roughly of 0.4628 days.
System
The two stars in this system orbit very close to each other; their spectra cannot be separated, and as a whole they show the spectrum of an F5V star. They are in marginal contact, and there is mass flow from the primary star to the secondary. The binary orbit is slowly decaying at a rate of 1.5×10−7 days per year. A suspected third component of the system is a red dwarf with a mass of about 0.11 solar masses, in a roughly 38-year orbit around the binary.
Variability
Confirmation of the variability of CN Andromedae was announced by R. Weber in 1956.
The light curve of the star shows a primary eclipse, with the brightness dropping to magnitude 10.21, and a secondary eclipse, down to magnitude 9.9. This pattern repeats with a cycle of approximately 11.1 hours, with the period decreasing over time due to the mass transfer from one star to the other.
References
Andromeda (constellation)
Andromedae, CN
BD+39 59
Beta Lyrae variables
J00203054+4013337
F-type main-sequence stars | CN Andromedae | Astronomy | 298 |
35,130,341 | https://en.wikipedia.org/wiki/Comparison%20of%20search%20engines | Web search engines are listed in tables below for comparison purposes. The first table lists the company behind the engine, volume and ad support and identifies the nature of the software being used as free software or proprietary software. The second and third table lists internet privacy aspects along with other technical parameters, such as whether the engine provides personalization (alternatively viewed as a filter bubble).
Defunct or acquired search engines are not listed here.
Search crawlers
Current search engines with independent crawlers, as of December 2018.
Digital rights
Tracking and surveillance
See also
Comparison of webmail providers – often merged with web search engines by companies that host both services
List of search engines
Search engine privacy
External links
Gnod Search - A tool to compare results across many search engines
References
Web search engines
Network software comparisons | Comparison of search engines | Technology | 157 |
5,179,603 | https://en.wikipedia.org/wiki/Optical%20axis%20grating | Optical axis gratings (OAGs) are gratings of optical axis of a birefringent material. In OAGs, the birefringence of the material is constant, while the direction of optical axis is periodically modulated in a fixed direction. In this way they are different from the regular phase gratings, in which the refractive index is modulated and the direction of the optical axis is constant.
The optical axis in OAGs can be modulated in either transverse or the longitudinal direction, which causes it to act as a diffractive or a reflective component. Numerous modulation profiles allow variation in the optical properties of the OAGs.
Examples
The optical axis in a transverse or cycloidal OAG is monotonically modulated in the transverse direction. This grating is capable of diffracting all incident light into either the +1st or −1st order within a micrometer-thick layer. Cycloidal OAGs have already been proven to be very efficient in beam steering and optical switching.
In another type of OAG, the optical axis is modulated in the direction of light propagation with a modulation period equal to a fraction of the wavelength (200–3000 nm). This modulation prevents light at the corresponding frequencies from propagating within the grating, so the structure acts as a band-stop filter. As a result, any light with a frequency within the matching range will be reflected from the OAG. However, unlike cholesterics, which reflect only one of the two circular polarizations of incident light, this OAG reflects any polarization.
Applications
Optical axis gratings can be implemented in various materials, including liquid crystals, polymers, birefringent crystals, magnetic crystals and subwavelength gratings. This new type of grating has broad potential in imaging, liquid crystal display, communication, and numerous military applications.
References
See also
Diffraction grating
Liquid crystal
Optical devices | Optical axis grating | Materials_science,Engineering | 393 |
33,735,571 | https://en.wikipedia.org/wiki/Killer%20activation%20receptor | Killer Activation Receptors (KARs) are receptors expressed on the plasma membrane (cell membrane) of Natural Killer cells (NK cells). KARs work together with Killer Inhibitory Receptors (abbreviated as KIRs in the text), which inactivate KARs in order to regulate the NK cells functions on hosted or transformed cells. These receptors have a broad binding specificity and are able to broadcast opposite signals. It is the balance between these competing signals that determines if the cytotoxic activity of the NK cell and apoptosis of distressed cell occurs.
Killer Inhibitory Receptors vs. Killer-cell Immunoglobulin-like Receptors
There is sometimes confusion regarding the KIR acronym. The term KIR has come to be used in parallel both for the Killer-cell immunoglobulin-like receptors and for the Killer Inhibitory Receptors. The Killer-cell immunoglobulin-like receptors include both activating and inhibitory receptors. Killer-cell inhibitory receptors include both immunoglobulin-like receptors and C-type lectin-like receptors.
Killer Activation Receptors vs. Killer Inhibitory Receptors
KARs and KIRs have some morphological features in common, such as being transmembrane proteins. The similarities are found especially in the extracellular domains.
The differences between KARs and KIRs tend to lie in the intracellular domains. They can carry tyrosine-containing activation or inhibitory motifs in the intracellular part of the receptor molecule (called ITAMs and ITIMs, respectively).
At first, it was thought that there was only one KAR and one KIR receptor present on the NK cell, known as the two-receptor model. In the last decade, many different KARs and KIRs, such as NKp46 or NKG2D, have been discovered, leading to the opposing-signals model. NKG2D is activated by the cell-surface ligands MICA and ULBP2. Even though KARs and KIRs are receptors with antagonistic effects on NK cells, they have some structural characteristics in common. Both receptors are usually transmembrane proteins. Also, the extracellular domains of these proteins tend to have similar molecular features and are responsible for ligand recognition.
The opposing functions of these receptors are due to differences in their intracellular domains. KAR proteins possess positively charged transmembrane residues and short cytoplasmic tails that contain few intracellular signaling domains. In contrast, KIR proteins usually have long cytoplasmic tails.
As the chains from KARs are not able to mediate any signal transduction in isolation, a common feature of such receptors is the presence of noncovalently linked subunits that contain immunoreceptor tyrosine-based activation motifs (ITAMs) in their cytoplasmic tails. ITAMs are composed of a conserved sequence of amino acids, including two Tyr-x-x-Leu/Ile elements (where x is any amino acid) separated by six to eight amino acid residues. When the binding of an activation ligand to an activation receptor complex occurs, the tyrosine residues in the ITAMs in the associated chain are phosphorylated by kinases, and a signal that promotes natural cytotoxicity is conveyed to the interior of the NK cell. Therefore, ITAMs are involved in the facilitation of signal transduction. These subunits are moreover composed of an accessory signaling molecule such as CD3ζ, the γc chain, or one of two adaptor proteins called DAP10 and DAP12. All of these molecules possess negatively charged transmembrane domains.
A common feature of members of all KIR is the presence of immunoreceptor tyrosine-based inhibition motifs (ITIMs) in their cytoplasmic tails. ITIMs are composed of the sequence Ile/Val/Leu/Ser-x-Tyr-x-x-Leu/Val, where x denotes any amino acid. The latter are essential to the signaling functions of these molecules. When an inhibitory receptor is stimulated by the binding of MHC class I, kinases and phosphatases are recruited to the receptor complex. This is how ITIMs counteract the effect of kinases initiated by activating receptors and manage to inhibit the signal transduction within the NK cell.
Types of Killer Activation Receptors
Based on their structure, there are three different groups of KARs. The first group of receptors is called Natural Cytotoxicity Receptors (NCR), which only includes activating receptors. The two other classes are Natural Killer Group 2 (NKG2), which includes both activating and inhibitory receptors, and those KIRs which have an activating rather than an inhibitory role.
The three receptors that are included in the NCR class are NKp46, NKp44 and NKp30. The crystal structure of NKp46, which is representative of all three NCRs, has been determined. It has two C2-set immunoglobulin domains, and the binding site for its ligand is probably near the interdomain hinge.
There are two NKG2-class receptors: NKG2D and CD94/NKG2C. NKG2D, which does not bind to CD94, is a homodimeric lectin-like receptor. CD94/NKG2C is a complex formed by the CD94 protein, a C-type lectin molecule, bound to the NKG2C protein. CD94 can pair with five NKG2 isoforms (A, B, C, E and H), and the resulting complex can trigger either an activating or an inhibitory response, depending on the NKG2 molecule (CD94/NKG2A, for example, is an inhibitory complex).
Most KIRs have an inhibitory function; however, a few KIRs with an activating role also exist. One of these activating KIRs is KIR2DS1, which has an Ig-like structure, like KIRs in general.
Finally, there is CD16, a low affinity Fc receptor (FcγRIII) which contains N-glycosylation sites; therefore, it is a glycoprotein.
Killer Activation Receptors are associated with intracellular signaling chains. In fact, these intracellular domains determine the opposite functions of activating and inhibitory receptors. Activating receptors are associated with an accessory signaling molecule (for instance, CD3ζ) or with an adaptor protein, which can be either DAP10 or DAP12. All of these signaling molecules contain immunoreceptor tyrosine-based activation motifs (ITAMs), which are phosphorylated and consequently facilitate signal transduction.
Each of these receptors has a specific ligand, although some receptors that belong to the same class, such as NCR, recognize similar molecules.
How do they work?
KARs can detect a specific type of molecule: MICA and MICB. These molecules are related to MHC class I molecules of human cells and are associated with cellular stress: this is why MICA and MICB appear on infected or transformed cells but are not very common on healthy cells. KARs recognize MICA and MICB when they are present in large amounts and become engaged. This engagement activates the natural killer cell to attack the transformed or infected cells, which it can do in different ways. The NK cell can kill the target cell directly, it can act by secreting cytokines, IFN-β and IFN-α, or it can do both.
There are other, less common ligands, such as carbohydrate domains, which are recognized by a group of receptors called C-type lectins (so named because they have calcium-dependent carbohydrate recognition domains).
In addition to lectins, other molecules are implicated in the activation of NK cells. These additional proteins are CD2 and CD16; CD16 functions in antibody-mediated recognition.
Finally, there is a group of proteins which are related to activation in an as yet unknown way. These are NKp30, NKp44 and NKp46.
These ligands activate the NK cell. Before activation occurs, however, Killer Inhibitory Receptors (KIRs) recognize certain molecules in the MHC class I of the target cell and become engaged with them. These molecules are typical of healthy cells, but some of them are repressed in infected or transformed cells. For this reason, when the target cell really is infected, the proportion of KARs engaged with ligands is greater than the proportion of KIRs engaged with MHC class I molecules. When this happens, the NK cell is activated and the target cell is destroyed. On the other hand, if more KIRs are engaged with MHC class I molecules than KARs are engaged with ligands, the NK cell is not activated and the suspect cell remains alive.
KARs and KIRs: their role in cancer
One way by which NK cells are able to distinguish between normal and infected or transformed cells is by monitoring the amount of MHC class I molecules cells have on their surface. In infected and tumor cells, the expression of MHC class I decreases.
In cancers, a Killer Activation Receptor (KAR), located on the surface of the NK cell, binds to certain molecules which only appear on cells that are undergoing stress. In humans, this KAR is called NKG2D and the molecules it recognizes are MICA and MICB. This binding provides a signal which induces the NK cell to kill the target cell.
Then, Killer Inhibitory Receptors (KIRs) examine the surface of the tumor cell in order to determine the levels of MHC class I molecules it has. If KIRs bind sufficiently to MHC class I molecules, the “killing signal” is overridden to prevent the killing of the cell. However, if KIRs are not sufficiently engaged to MHC class I molecules, killing of the target cell proceeds.
References
Further reading
Immunology
Lymphocytes
Receptors | Killer activation receptor | Chemistry,Biology | 2,091 |
53,734,156 | https://en.wikipedia.org/wiki/ChatScript | ChatScript is a combination Natural Language engine and dialog management system designed initially for creating chatbots, but is currently also used for various forms of NL processing. It is written in C++. The engine is an open source project at SourceForge. and GitHub.
ChatScript was written by Bruce Wilcox and originally released in 2011, after Suzette (written in ChatScript) won the 2010 Loebner Prize, fooling one of four human judges.
Features
In general ChatScript aims to author extremely concisely, since the limiting scalability of hand-authored chatbots is how much/fast one can write the script.
Because ChatScript is designed for interactive conversation, it automatically maintains user state across volleys. A volley is any number of sentences the user inputs at once together with the chatbot's response.
The basic element of scripting is the rule. A rule consists of a type, a label (optional), a pattern, and an output. There are three types of rules. Gambits are something a chatbot might say when it has control of the conversation. Rejoinders are rules that respond to a user remark tied to what the chatbot just said. Responders are rules that respond to arbitrary user input which is not necessarily tied to what the chatbot just said. Patterns describe conditions under which a rule may fire. Patterns range from extremely simplistic to deeply complex (analogous to Regex but aimed for NL). Heavy use is typically made of concept sets, which are lists of words sharing a meaning. ChatScript contains some 2000 predefined concepts and scripters can easily write their own. Output of a rule intermixes literal words to be sent to the user along with common C-style programming code.
Rules are bundled into collections called topics. Topics can have keywords, which allows the engine to automatically search the topic for relevant rules based on user input.
Example code
Topic: ~food( ~fruit fruit food eat)
t: What is your favorite food?
a: (~fruit) I like fruit also.
a: (~metal) I prefer listening to heavy metal music rather than eating it.
?: WHATMUSIC ( << what music you ~like >>) I prefer rock music.
s: ( I * ~like * _~music_types) ^if (_0 == country) {I don't like country.} else {So do I.}
Words starting with ~ are concept sets. For example, ~fruit is the list of all known fruits. The simple pattern (~fruit) reacts if any fruit is mentioned immediately after the chatbot asks for favorite food. The slightly more complex pattern for the rule labelled WHATMUSIC requires all the words what, music, you and any word or phrase meaning to like, but they may occur in any order. Responders come in three types. ?: rules react to user questions. s: rules react to user statements. u: rules react to either.
ChatScript code supports standard if-else, loops, user-defined functions and calls, and variable assignment and access.
Data
Some data in ChatScript is transient, meaning it will disappear at the end of the current volley. Other data is permanent, lasting forever until explicitly killed off. Data can be local to a single user or shared across all users at the bot level.
Internally all data is represented as text and is automatically converted to a numeric form as needed.
Variables
User variables come in several kinds. Variables purely local to a topic or function are transient. Global variables can be declared as transient or permanent. A variable is generally declared merely by using it, and its type depends on its prefix ($, $$, $_).
$_local = 1 is a local transient variable being assigned a 1
$$global1.value = “hi” is a transient global variable which is a JSON object
$global2 += 20 is a permanent global variable
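A minimal sketch of how these variable types can be combined inside a responder (a hypothetical rule, written with the predefined ~number concept and the _0 / '_0 match-variable syntax already shown in the example topic above):

u: ( my favorite number is _~number )
   $favoriteNumber = '_0    # permanent per-user variable, survives across volleys
   $$justAnswered = 1        # transient global, discarded at the end of this volley
   ^if ($favoriteNumber == 7) {Lucky seven!} else {I will remember that.}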
Facts
In addition to variables, ChatScript supports facts – triples of data, which can also be transient or permanent. Functions can query for facts having particular values in some of the fields, making them act like an in-memory database. Fact retrieval is very quick and efficient; the number of available in-memory facts is constrained largely by the memory of the machine running the ChatScript engine. Facts can represent record structures and are how ChatScript represents JSON internally. Tables of information can be defined to generate appropriate facts.
table: ~inventors(^who ^what)
^createfact(^who invent ^what)
DATA:
"Johannes Gutenberg" "printing press"
"Albert Einstein" ["Theory of Relativity" photon "Theory of General Relativity"]
The above table links people to what they invented (1 per line) with Einstein getting a list of things he did.
External communication
ChatScript embeds the Curl library and can directly read and write facts in JSON to a website.
Server
A ChatScript engine can run in local or server mode.
Pos-tagging, parsing, and ontology
ChatScript comes with a copy of English WordNet embedded within, including its ontology, and creates and extends its own ontology via concept declarations. It has an English language pos-tagger and parser and supports integration with TreeTagger for pos-tagging a number of other languages (TreeTagger commercial license required).
Databases
In addition to an internal fact database, ChatScript supports PostgreSQL, MySQL, MSSQL and MongoDB, both for access by scripts and, if desired, as a central file store so that ChatScript can be scaled horizontally. A common use case is to use a centralized database to host the user files and multiple servers to scale the ChatScript engine.
JavaScript
ChatScript also embeds Duktape, providing ECMAScript E5/E5.1 compatibility with some semantics updated from ES2015+.
Spelling Correction
ChatScript has built-in automatic spell checking, which can be augmented in script with both simple word replacements and context-sensitive changes. With appropriately simple rules you can change perfectly legal words into other words or delete them. E.g., if you have a concept of ~electronic_goods and don't want an input of Radio Shack (a store name) to be detected as an electronic good, you can have the input changed to Radio_Shack (a single word), or allow the words to remain but block the detection of the concept.
This is particularly useful when combined with speech-to-text code that is imperfect, but you are familiar with common failings of it and can compensate for them in script.
Control flow
A chatbot's control flow is managed by the control script. This is merely another ordinary topic of rules that invokes API functions of the engine. Thus control is fully configurable by the scripter (and functions exist to allow introspection into the engine). Pre-processing and post-processing control flow options are available for special processing.
References
External links
Cross-platform free software
Cross-platform software
Free and open source interpreters
Free software programmed in C
Scripting languages
Natural language parsing
Natural language processing
Natural language processing toolkits
Free artificial intelligence applications | ChatScript | Technology | 1,493 |
166,380 | https://en.wikipedia.org/wiki/Natural%20history | Natural history is a domain of inquiry involving organisms, including animals, fungi, and plants, in their natural environment, leaning more towards observational than experimental methods of study. A person who studies natural history is called a naturalist or natural historian.
Natural history encompasses scientific research but is not limited to it. It involves the systematic study of any category of natural objects or organisms, so while it dates from studies in the ancient Greco-Roman world and the mediaeval Arabic world, through to European Renaissance naturalists working in near isolation, today's natural history is a cross-discipline umbrella of many specialty sciences; e.g., geobiology has a strong multidisciplinary nature.
Definitions
Before 1900
The meaning of the English term "natural history" (a calque of the Latin historia naturalis) has narrowed progressively with time, while, by contrast, the meaning of the related term "nature" has widened (see also History below).
In antiquity, "natural history" covered essentially anything connected with nature, or used materials drawn from nature, such as Pliny the Elder's encyclopedia of this title, published , which covers astronomy, geography, humans and their technology, medicine, and superstition, as well as animals and plants.
Medieval European academics considered knowledge to have two main divisions: the humanities (primarily what is now known as classics) and divinity, with science studied largely through texts rather than observation or experiment. The study of nature revived in the Renaissance, and quickly became a third branch of academic knowledge, itself divided into descriptive natural history and natural philosophy, the analytical study of nature. In modern terms, natural philosophy roughly corresponded to modern physics and chemistry, while natural history included the biological and geological sciences. The two were strongly associated. During the heyday of the gentleman scientists, many people contributed to both fields, and early papers in both were commonly read at professional science society meetings such as the Royal Society and the French Academy of Sciences—both founded during the 17th century.
Natural history had been encouraged by practical motives, such as Linnaeus' aspiration to improve the economic condition of Sweden. Similarly, the Industrial Revolution prompted the development of geology to help find useful mineral deposits.
Since 1900
Modern definitions of natural history come from a variety of fields and sources, and many of the modern definitions emphasize a particular aspect of the field, creating a plurality of definitions with a number of common themes among them. For example, while natural history is most often defined as a type of observation and a subject of study, it can also be defined as a body of knowledge, and as a craft or a practice, in which the emphasis is placed more on the observer than on the observed.
Definitions from biologists often focus on the scientific study of individual organisms in their environment, as seen in this definition by Marston Bates: "Natural history is the study of animals and Plants—of organisms. ... I like to think, then, of natural history as the study of life at the level of the individual—of what plants and animals do, how they react to each other and their environment, how they are organized into larger groupings like populations and communities" and this more recent definition by D.S. Wilcove and T. Eisner: "The close observation of organisms—their origins, their evolution, their behavior, and their relationships with other species".
This focus on organisms in their environment is also echoed by H.W. Greene and J.B. Losos: "Natural history focuses on where organisms are and what they do in their environment, including interactions with other organisms. It encompasses changes in internal states insofar as they pertain to what organisms do".
Some definitions go further, focusing on direct observation of organisms in their environments, both past and present, such as this one by G.A. Bartholomew: "A student of natural history, or a naturalist, studies the world by observing plants and animals directly. Because organisms are functionally inseparable from the environment in which they live and because their structure and function cannot be adequately interpreted without knowing some of their evolutionary history, the study of natural history embraces the study of fossils as well as physiographic and other aspects of the physical environment".
A common thread in many definitions of natural history is the inclusion of a descriptive component, as seen in a recent definition by H.W. Greene: "Descriptive ecology and ethology". Several authors have argued for a more expansive view of natural history, including S. Herman, who defines the field as "the scientific study of plants and animals in their natural environments. It is concerned with levels of organization from the individual organism to the ecosystem, and stresses identification, life history, distribution, abundance, and inter-relationships. It often and appropriately includes an esthetic component", and T. Fleischner, who defines the field even more broadly, as "A practice of intentional, focused attentiveness and receptivity to the more-than-human world, guided by honesty and accuracy". These definitions explicitly include the arts in the field of natural history, and are aligned with the broad definition outlined by B. Lopez, who defines the field as the "Patient interrogation of a landscape" while referring to the natural history knowledge of the Eskimo (Inuit).
A slightly different framework for natural history, covering a similar range of themes, is also implied in the scope of work encompassed by many leading natural history museums, which often include elements of anthropology, geology, paleontology, and astronomy along with botany and zoology, or include both cultural and natural components of the world.
The plurality of definitions for this field has been recognized as both a weakness and a strength, and a range of definitions has recently been offered by practitioners in a recent collection of views on natural history.
History
Prehistory
Prior to the advent of Western science humans were engaged and highly competent in indigenous ways of understanding the more-than-human world that are now referred to as traditional ecological knowledge. 21st century definitions of natural history are inclusive of this understanding, such as this by Thomas Fleischner of the Natural History Institute (Prescott, Arizona):Natural history – a practice of intentional focused attentiveness and receptivity to the more-than-human world, guided by honesty and accuracy – is the oldest continuous human endeavor. In the evolutionary past of our species, the practice of natural history was essential for our survival, imparting critical information on habits and chronologies of plants and animals that we could eat or that could eat us. Natural history continues to be critical to human survival and thriving. It contributes to our fundamental understanding of how the world works by providing the empirical foundation of natural sciences, and it contributes directly and indirectly to human emotional and physical health, thereby fostering healthier human communities. It also serves as the basis for all conservation efforts, with natural history both informing the science and inspiring the values that drive these.
Ancient
As a precursor to Western science, natural history began with Aristotle and other ancient philosophers who analyzed the diversity of the natural world. Natural history was understood by Pliny the Elder to cover anything that could be found in the world, including living things, geology, astronomy, technology, art, and humanity.
De materia medica was written between 50 and 70 AD by Pedanius Dioscorides, a Roman physician of Greek origin. It was widely read for more than 1,500 years until supplanted in the Renaissance, making it one of the longest-lasting of all natural history books.
From the ancient Greeks until the work of Carl Linnaeus and other 18th-century naturalists, a major concept of natural history was the scala naturae or Great Chain of Being, an arrangement of minerals, vegetables, more primitive forms of animals, and more complex life forms on a linear scale of supposedly increasing perfection, culminating in our species.
Medieval
Natural history was basically static through the Middle Ages in Europe—although in the Arabic and Oriental world, it proceeded at a much brisker pace. From the 13th century, the work of Aristotle was adapted rather rigidly into Christian philosophy, particularly by Thomas Aquinas, forming the basis for natural theology. During the Renaissance, scholars (herbalists and humanists, particularly) returned to direct observation of plants and animals for natural history, and many began to accumulate large collections of exotic specimens and unusual monsters. Leonhart Fuchs was one of the three founding fathers of botany, along with Otto Brunfels and Hieronymus Bock. Other important contributors to the field were Valerius Cordus, Konrad Gesner, Frederik Ruysch, and Gaspard Bauhin. The rapid increase in the number of known organisms prompted many attempts at classifying and organizing species into taxonomic groups, culminating in the system of the Swedish naturalist Carl Linnaeus.
The British historian of Chinese science Joseph Needham calls Li Shizhen "the 'uncrowned king' of Chinese naturalists", and his Bencao gangmu "undoubtedly the greatest scientific achievement of the Ming". His works, translated into many languages, directly influenced many scholars and researchers.
Modern
A significant contribution to English natural history was made by parson-naturalists such as Gilbert White, William Kirby, John George Wood, and John Ray, who wrote about plants, animals, and other aspects of nature. Many of these men wrote about nature to make the natural theology argument for the existence or goodness of God. Since early modern times, however, a great number of women made contributions to natural history, particularly in the field of botany, be it as authors, collectors, or illustrators.
In modern Europe, professional disciplines such as botany, geology, mycology, palaeontology, physiology, and zoology were formed. Natural history, formerly the main subject taught by college science professors, was increasingly scorned by scientists of a more specialized manner and relegated to an "amateur" activity, rather than a part of science proper. In Victorian Scotland, the study of natural history was believed to contribute to good mental health. Particularly in Britain and the United States, this grew into specialist hobbies such as the study of birds, butterflies, seashells (malacology/conchology), beetles, and wildflowers; meanwhile, scientists tried to define a unified discipline of biology (though with only partial success, at least until the modern evolutionary synthesis). Still, the traditions of natural history continue to play a part in the study of biology, especially ecology (the study of natural systems involving living organisms and the inorganic components of the Earth's biosphere that support them), ethology (the scientific study of animal behavior), and evolutionary biology (the study of the relationships between life forms over very long periods of time), and re-emerges today as integrative organismal biology.
Amateur collectors and natural history entrepreneurs played an important role in building the world's large natural history collections, such as the Natural History Museum, London, and the National Museum of Natural History in Washington, DC.
Three of the greatest English naturalists of the 19th century, Henry Walter Bates, Charles Darwin, and Alfred Russel Wallace—who knew each other—each made natural history travels that took years, collected thousands of specimens, many of them new to science, and by their writings both advanced knowledge of "remote" parts of the world—the Amazon basin, the Galápagos Islands, and the Indonesian Archipelago, among others—and in so doing helped to transform biology from a descriptive to a theory-based science.
The understanding of "Nature" as "an organism and not as a mechanism" can be traced to the writings of Alexander von Humboldt (Prussia, 1769–1859). Humboldt's copious writings and research were seminal influences for Charles Darwin, Simón Bolívar, Henry David Thoreau, Ernst Haeckel, and John Muir.
Museums
Natural history museums, which evolved from cabinets of curiosities, played an important role in the emergence of professional biological disciplines and research programs. Particularly in the 19th century, scientists began to use their natural history collections as teaching tools for advanced students and the basis for their own morphological research.
Societies
The term "natural history" alone, or sometimes together with archaeology, forms the name of many national, regional, and local natural history societies that maintain records for animals (including birds (ornithology), insects (entomology) and mammals (mammalogy)), fungi (mycology), plants (botany), and other organisms. They may also have geological and microscopical sections.
Examples of these societies in Britain include the Natural History Society of Northumbria founded in 1829, London Natural History Society (1858), Birmingham Natural History Society (1859), British Entomological and Natural History Society founded in 1872, Glasgow Natural History Society, Manchester Microscopical and Natural History Society established in 1880, Whitby Naturalists' Club founded in 1913, Scarborough Field Naturalists' Society and the Sorby Natural History Society, Sheffield, founded in 1918. The growth of natural history societies was also spurred by the growth of British colonies in tropical regions with numerous new species to be discovered. Many civil servants took an interest in their new surroundings, sending specimens back to museums in Britain. (See also: Indian natural history)
Societies in other countries include the American Society of Naturalists and Polish Copernicus Society of Naturalists.
Professional societies have recognized the importance of natural history and have initiated new sections in their journals specifically for natural history observations to support the discipline. These include "Natural History Field Notes" of Biotropica, "The Scientific Naturalist" of Ecology, "From the Field" of Waterbirds, and the "Natural History Miscellany section" of the American Naturalist.
Benefits of Natural History
Natural history observations have contributed to scientific questioning and theory formation. In recent times such observations contribute to how conservation priorities are determined. Mental health benefits can ensue, as well, from regular and active observation of chosen components of nature, and these reach beyond the benefits derived from passively walking through natural areas.
See also
Evolutionary history of life
History of evolutionary thought
Naturalism (philosophy)
Nature documentary
Nature study
Nature writing
Russian naturalists
Timeline of natural history
Natural science
References
Further reading
Peter Anstey (2011), Two Forms of Natural History, Early Modern Experimental Philosophy.
Farber, Paul Lawrence (2000), Finding Order in Nature: The Naturalist Tradition from Linnaeus to E. O. Wilson. Johns Hopkins University Press: Baltimore.
Kohler, Robert E. (2002), Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. University of Chicago Press: Chicago.
Mayr, Ernst. (1982), The Growth of Biological Thought: Diversity, Evolution, and Inheritance. The Belknap Press of Harvard University Press: Cambridge, Massachusetts.
Rainger, Ronald; Keith R. Benson; and Jane Maienschein (eds) (1988), The American Development of Biology. University of Pennsylvania Press: Philadelphia.
External links
A History of the Ecological Sciences by Frank N. Egerton
The Cambridge natural history, Vol. 07 (of 10), London: Macmillan and Co., 1904
History of biology
History of Earth science
History of science | Natural history | Technology | 3,102 |
68,720,300 | https://en.wikipedia.org/wiki/Psilocybe%20angulospora | Psilocybe angulospora is a species of agaric fungus in the family Hymenogastraceae. The species was described from Taiwan in 2015 and is also present in New Zealand, where it is considered introduced. As a blueing member of the genus Psilocybe it contains the psychoactive compounds psilocin and psilocybin.
The fruitbodies have a small, extremely hygrophanous pale gold conical to bell-shaped cap, often with a prominent pointed central papilla, a slender whitish stipe, and fine narrowly spaced gills.
In Taiwan, the mushrooms grow wild amongst grasses on heavily manured soil and on cow dung. In New Zealand they are most frequently found in the potting mix of nursery plants, in potted plants in garden centres, and outdoors in gardens and council landscaping where those plants have been planted.
Taxonomy and naming
Psilocybe angulospora was described from Taiwan in 2015 by Yen-Wen Wang and Shean-Shong Tzean, after reports of hallucinogenic mushroom poisonings in Taipei sparked a biodiversity survey and scientific investigation. The mushrooms responsible were said to grow on dung in native grasslands in Yangmingshan National Park. Various coprophilous fruitbodies were collected from the area and studied, leading to the discovery of the species in Taiwan, and to official publication.
Etymology
The name or species epithet refers to the slightly angular shape of the spores.
Description
The cap is 10–40 mm in diameter, light brown to medium grey blue, conic to subcampanulate (cone-like to bell-shaped) with an inrolled margin and often an acute central papilla. It is translucent-striate to the margin (fine radial lines are visible around the edge of the cap when moist), extremely hygrophanous, glabrous (smooth or free of ornamentation) and slightly fibrous. The flesh inside is firm and brownish orange to yellowish. The gills are pale, thin and fairly close together, narrowly adnate (the gills meet the stipe by most of their width; they are broadly attached), with one or three short intermediate gills between two intermediate gills, and have a smooth edge. The stipe is 40–70 mm x 1–2 mm, pale greyish white, cylindrical, centered, fibrous, with brownish orange to yellowish flesh. It can be hollow or otherwise stuffed with fibres. The partial veil sometimes leaves a fragile line of raised threadlike tissue around the stipe close to halfway down. This can resemble a faint, thin raised ring, often stained blue.
Microscopic features
Spores measure 7.6–10.2(–11.5) × 5.8–8.1 × 4.7–7.1 μm. They are reddish grey, greyish orange to cinnamon brown in Meltzer's reagent, and appear subrhomboid in face view and ellipsoid to oval in side view. They are smooth with thick walls, and have a large eccentric germ pore which appears central in face view. This species has a very low spore production, often failing to produce a visible spore print. Basidia measure 20.9–27.2(–32.2) ×6.1–10.4 μm, are 4-spored, shaped broadly fusiform (like a spindle, rounded in the middle and tapering to the ends) to broadly clavate (shaped like a club). Pleurocystidia (the cystidia on the gill face) are absent or not well observed. Cheilocystidia (cystidia on the gill edge) measure 16.4–26.3(–29.2) μm long, (1.6–)1.8–3.0(–3.6) μm wide at the apex, (3.6–)4.5–7.1 μm wide at base. They are fusiform (spindle-shaped) to lageniform (having a large base tapering to a narrow neck; flask-shaped), sometimes bifurcate (branching), hyaline (transparent), clustered, and abundant. The hypodermium (the second layer of tissue of the cuticle) is composed of inflated threadlike hyphae, measuring 6.3–17.0 μm. The outer tissue of the stipe consists of short-segmented, inflated, threadlike parallel hyphae with thick walls, measuring 9.3–22.3 μm. Clamp connections are present.
Published description
"Dung-associated, Potentially Hallucinogenic Mushrooms from Taiwan" Yen-Wen Wang and Shean-Shong Tzean, 2015.
Habitat and distribution
Scattered on heavily manured soil in grassland, and directly on cow dung, at Qingtiangang in Yangmingshan National Park in Taiwan.
In potted plants and woodchip landscaping in New Zealand.
Similar species
Psilocybe angulospora can appear similar to Psilocybe hoogshagenii but the two are not closely related. DNA analysis suggests a closer relationship to Psilocybe stuntzii and Psilocybe semilanceata.
In New Zealand, it can be confused with other species of Psilocybe that appear in potted plants.
See also
List of psilocybin mushrooms
Psilocybe tasmaniana
References
External links
Manaaki Whenua - Landcare Research Online Fungi Portal New Zealand fungarium records.
Psilocybe angulospora observations on iNaturalist.
Psilocybe angulospora observations on Mushroom Observer.
angulospora
Entheogens
Psychoactive fungi
Psychedelic tryptamine carriers
Fungi described in 2015
Fungi of Asia
Fungi of New Zealand
Fungus species | Psilocybe angulospora | Biology | 1,206 |
12,783,915 | https://en.wikipedia.org/wiki/Bomab | The BOttle MAnnequin ABsorber phantom was developed by Bush in 1949 (Bush 1949) and has since been accepted in North America as the industry standard (ANSI 1995) for calibrating whole body counting systems.
The phantom consists of 10 polyethylene bottles, either cylinders or elliptical cylinders, that represent the head, neck, chest, abdomen, thighs, calves, and arms. Each section is filled with a radioactive solution in water, with an amount of radioactivity proportional to the volume of that section. This simulates a homogeneous distribution of material throughout the body. The solution is also acidified and contains a stable element carrier so that the radioactivity does not plate out on the container walls.
The phantom, which contains a known amount of radioactivity, can be used to calibrate the whole body counter by relating the observed response to the known amount of radioactivity. As different radioactive materials emit different energies of gamma photons, the calibration has to be repeated to cover the expected energy range: usually 120 to 2,000 keV.
Examples of radioactive isotopes that are used for efficiency calibration include 57Co, 60Co, 88Y, 137Cs and 152Eu.
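The efficiency calibration described above can be illustrated numerically. The sketch below (Python) is a hypothetical example: the counts, counting times, activities and gamma yields are assumed for illustration, and a real calibration would also subtract background and account for decay and geometry. It converts net counts into a counting efficiency at each calibration energy and interpolates across the 120 to 2,000 keV range.

```python
import numpy as np

# Hypothetical calibration data recorded with a BOMAB phantom containing a
# known activity of each calibration nuclide (all values are illustrative).
calibration_points = [
    # (photon energy [keV], net counts, count time [s], activity [Bq], gamma yield)
    (122.1, 180_000, 600, 5_000, 0.856),   # Co-57
    (661.7, 120_000, 600, 4_000, 0.851),   # Cs-137
    (1332.5, 60_000, 600, 3_000, 0.9998),  # Co-60
]

energies, efficiencies = [], []
for energy, counts, live_time, activity, yield_ in calibration_points:
    count_rate = counts / live_time                  # counts per second
    emission_rate = activity * yield_                # photons emitted per second
    efficiencies.append(count_rate / emission_rate)  # counts per emitted photon
    energies.append(energy)

def efficiency_at(energy_kev: float) -> float:
    """Interpolate the counting efficiency (log-log) at an arbitrary energy."""
    return float(np.exp(np.interp(np.log(energy_kev),
                                  np.log(energies), np.log(efficiencies))))

print(f"Estimated efficiency at 364 keV: {efficiency_at(364.5):.4f}")
```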
Although the phantom was designed to be used lying down, it is used in any orientation.
Other uses
Performance testing: BOMAB phantoms are sometimes used by performance testing organizations to test operating assay facilities. Phantoms, containing known quantities of radioactive material, are sent to assay facilities as blind samples.
Design characteristics: Phantoms can be used to evaluate the relative effect of size, shape and positioning on the performance of in vivo measurement equipment.
Background: A water filled BOMAB is often used to estimate the (blank) background for in vivo assay systems.
Detection Limits: A BOMAB filled with approximately 140 g of natural potassium (the nominal potassium content of a 70 kg man), whose naturally occurring K-40 supplies the radioactivity, is sometimes used to estimate the detection sensitivity of in vivo personnel counting systems.
See also
Computational human phantom
Imaging phantom
References
External links
Bush F. The integral dose received from a uniformly distributed radioactive isotope. British J Radiol. 22:96-102; 1949.
Health Physics Society. Specifications for the Bottle Manikin Absorber Phantom. An American National Standard. New York: American National Standards Institute; ANSI/HPS N13.35; 1995.
Radiobiology | Bomab | Chemistry,Biology | 485 |
32,159,946 | https://en.wikipedia.org/wiki/Forest%20floor%20interception | Forest floor interception is the part of the (net) precipitation or throughfall that is temporarily stored in the top layer of the forest floor and successively evaporated within a few hours or days during and after the rainfall event. The forest floor can consist of bare soil, short vegetation (like grasses, mosses, creeping vegetation, etc.) or litter (i.e. leaves, twigs, or small branches). This throughfall is especially rich in nutrients, which makes its redistribution into the soil an important factor for the ecology and water demand of the surrounding vegetation. As a hydrological process, it is crucial for water resource management and for understanding climate change.
Influencing Factors
Vegetation Characteristics
There are variations in storage capacity based on forest floor type; different kinds of vegetation, such as needles and leaves, have different capacities. The thickness of the layer of vegetation can also be a contributing factor, as thicker layers have a greater capacity for storing water.
There is an observable seasonal response throughout the year. In autumn, fallen leaves accumulate on the forest floor, increasing its thickness, and then slowly decompose. In the presence of snow, the layer of vegetation is compressed, reducing the storage capacity.
Precipitation Characteristics
The frequency of throughfall events, whether continuous or at irregular intervals, has a significant impact on water interception. Even when two events deliver equal throughfall, the irregular event includes intervals of time that allow for partial evaporation, creating more available storage. The intensity of throughfall is also a crucial factor in storage, as high intensities are consistent with increased storage capacities.
Evaporative Demand
High potential evaporation expedites the evaporation of intercepted water; two facilitators of evaporation are wind and radiation. Wind is a significant part of moisture removal but tends to remain low at ground level, contributing to a higher vapour deficit. Radiation penetration through the canopy allows more radiation to reach the forest floor in the winter than in the summer, creating variation in seasonal evaporation rates.
Interception Loss
Interception loss is the portion of rainfall intercepted by the canopy and evaporated back into the atmosphere; it is calculated as the difference between gross rainfall and net rainfall (the sum of throughfall and stemflow) at the ground. The overall loss for a vegetation cover depends on the evaporation rate from the wet canopy and the duration of the canopy's wetness. Although it seems a minor process, its frequency can impede rainfall from recharging soil moisture and generating runoff, ultimately affecting the water balance. This is especially true for forest stands that have an annual interception loss of a quarter or more of the gross rainfall, which can range from 9% in Amazonia to 60% in Picea sitchensis and Picea abies stands in Brittany. Tall vegetation experiences significant evaporation rates of intercepted water that exceed transpiration rates, as opposed to short vegetation.
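As a simple illustration of this bookkeeping, a minimal sketch (in Python) computing interception loss from event totals is shown below; the rainfall values are invented for demonstration.

```python
# Illustrative water-balance bookkeeping for a single rainfall event
# (all values in mm; the numbers are made up for demonstration).
gross_rainfall = 20.0   # measured above the canopy
throughfall = 14.5      # measured below the canopy
stemflow = 1.0          # water running down the stems

net_rainfall = throughfall + stemflow
interception_loss = gross_rainfall - net_rainfall

print(f"Net rainfall reaching the floor: {net_rainfall:.1f} mm")
print(f"Canopy interception loss: {interception_loss:.1f} mm "
      f"({100 * interception_loss / gross_rainfall:.0f}% of gross rainfall)")
```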
Measurement
There is a lack of research on forest floor interception but it is quantifiable by lab or field methods. In lab methods, samples are observed under controlled conditions within a laboratory, however at the risk of disturbing the samples. Field methods are experiments done on-site thus minimizing disturbance of the samples.
Rutter-type models are the most frequently used techniques for modeling the interception process. The process is depicted as a balance of rainfall input, storage, and output through drainage and evaporation.
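A minimal sketch of the storage-balance idea behind such models is given below, assuming a single storage reservoir with a fixed capacity. This is not the published Rutter model, which uses specific drainage and evaporation functions for canopy and trunk stores; the capacity and input series here are purely illustrative.

```python
def simulate_interception(throughfall, evaporation_demand,
                          storage_capacity=1.5, dt=1.0):
    """Step a simple bucket model through series of throughfall and potential
    evaporation (both in mm per time step). Returns totals in mm."""
    storage = 0.0
    total_evaporated = 0.0
    total_drained = 0.0
    for rain, pet in zip(throughfall, evaporation_demand):
        storage += rain * dt
        # Water above the storage capacity drains to the soil.
        if storage > storage_capacity:
            total_drained += storage - storage_capacity
            storage = storage_capacity
        # Evaporation removes water from the wetted floor, limited by what is stored.
        evaporated = min(storage, pet * dt)
        storage -= evaporated
        total_evaporated += evaporated
    return total_evaporated, total_drained, storage

evaporated, drained, remaining = simulate_interception(
    throughfall=[2.0, 3.0, 0.0, 0.0, 1.0],
    evaporation_demand=[0.2, 0.2, 0.5, 0.5, 0.3])
print(evaporated, drained, remaining)
```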
See also
Interception (water)
Canopy interception
Stemflow
Throughfall
Water cycle
References
Further reading
Gerrits, A.M.J., Savenije, H.H.G., Hoffmann, L. and Pfister, L. (2007): New technique to measure forest floor interception – an application in a beech forest in Luxembourg, Hydrology and Earth System Sciences, 11, 695–701.
William M. Putuhena, Ian Cordery (1996), Estimation of interception capacity of the forest floor, Journal of Hydrology, Volume 180, Issues 1–4, Pages 283-299, ISSN 0022-1694, https://doi.org/10.1016/0022-1694(95)02883-8
Hydrology
Forest ecology | Forest floor interception | Chemistry,Engineering,Environmental_science | 842 |
61,267,718 | https://en.wikipedia.org/wiki/Esing%20Bakery%20incident | The Esing Bakery incident, also known as the Ah Lum affair, was a food contamination scandal in the early history of British Hong Kong. On 15 January 1857, during the Second Opium War, several hundred European residents were poisoned non-lethally by arsenic, found in bread produced by a Chinese-owned store, the Esing Bakery. The proprietor of the bakery, Cheong Ah-lum, was accused of plotting the poisoning but was acquitted in a trial by jury. Nonetheless, Cheong was successfully sued for damages and was banished from the colony. The true responsibility for the incident and its intention—whether it was an individual act of terrorism, commercial sabotage, a war crime orchestrated by the Qing government, or purely accidental—both remain matters of debate.
In Britain, the incident became a political issue during the 1857 general election, helping to mobilise support for the war and the incumbent prime minister, Lord Palmerston. In Hong Kong, it sowed panic and insecurity among the local colonists, highlighting the precariousness of imperial rule in the colony. The incident contributed to growing tensions between Hong Kong's European and Chinese residents, as well as within the European community itself. The scale and potential consequences of the poisoning make it an unprecedented event in the history of the British Empire, the colonists believing at the time that its success could have wiped out their community.
Background
In 1841, in the midst of the First Opium War, Captain Charles Elliot negotiated the cession of Hong Kong by the Qing dynasty of China to the British Empire in the Convention of Chuenpi. The colony's early administrators held high hopes for Hong Kong as a gateway for British influence in China as a whole, which would combine British good government with an influx from China of what were referred to at the time as "intelligent and readily improvable artisans", as well as facilitating the transfer of coolies to the West Indies. However, the colonial government soon found it difficult to govern Hong Kong's rapidly expanding Chinese population, and was also faced with endemic piracy and continued hostility from the Qing government. In 1856, the Governor of Hong Kong, John Bowring, supported by the British prime minister, Lord Palmerston, demanded reparations from the Qing government for the seizure of a Hong Kong Chinese-owned ship, which led to the Second Opium War between Britain and China (1856–1860).
At the opening of the war in late 1856, Qing imperial commissioner Ye Mingchen unleashed a campaign of terrorism in Hong Kong by a series of proclamations offering rewards for the deaths of what he called the French and British "rebel barbarians", and ordering Chinese to renounce employment by the "foreign dogs". A committee to organise resistance to the Europeans was established at Xin'an County on the mainland. At the same time, Europeans in Hong Kong became concerned that the turmoil in China caused by the Taiping Rebellion (1850–1864) was producing a surge of Chinese criminals into the colony. Tensions between Chinese and European residents ran high, and in December 1856 and January 1857 the Hong Kong government enacted emergency legislation, imposing a curfew on Hong Kong Chinese and giving the police sweeping powers to arrest and deport Chinese criminals and to resort to lethal force at night-time. Well-off Chinese residents became increasingly disquieted by the escalating police brutality and the level of regulation of Chinese life.
Course of events
On 15 January 1857, between 300 and 500 predominantly European residents of the colony—a large proportion of the European population at the time—who had consumed loaves from the Esing Bakery () fell ill with nausea, vomiting, stomach pain, and dizziness. Later testing concluded that the bread had been adulterated with large amounts of arsenic trioxide. The quantity of arsenic involved was high enough to cause the poison to be vomited out before it could kill its victims. There were no deaths immediately attributable to the poisoning, though three deaths that occurred the following year, including that of the wife of Governor Bowring, would be ascribed to its long-term effects. The colony's doctors, led by Surgeon General Aurelius Harland, dispatched messages across the town advising that the bread was poisoned and containing instructions to induce vomiting and consume raw eggs.
The proprietor of the bakery, Cheong Ah-lum (), left for Macau with his family early in the day. He was immediately suspected of being the perpetrator, and as news of the incident rapidly spread, he was detained there and brought back to Hong Kong the next day. By the end of the day, 52 Chinese men had been rounded up and detained in connection to the incident. Many of the local Europeans, including the Attorney General, Thomas Chisholm Anstey, wished Cheong to be court-martialled—some called for him to be lynched. Governor Bowring insisted that he be tried by jury.
On 19 January ten of the men were committed to be tried at the Supreme Court after a preliminary examination. This took place on 21 January. The other detainees were taken to Cross Roads police station and confined in a small cell, which became known as the 'Black Hole of Hong Kong' after the Black Hole of Calcutta. Some were deported several days later, while the rest remained in the Black Hole for nearly three weeks.
Supreme Court trial
The trial opened on 2 February. The government had difficulty selecting appropriate charges because there was no precedent in English criminal law for dealing with the attempted murder of a whole community. One of the victims of the poisoning was selected, and Cheong and the nine other defendants were charged with "administering poison with intent to kill and murder James Carroll Dempster, Colonial Surgeon". Attorney General Anstey led the prosecution, William Thomas Bridges and John Day the defence. Chief Justice John Walter Hulme, who had himself been poisoned, presided.
The arguments at the trial focused more on Cheong's personal character than on the poisoning itself: the defence argued that Cheong was a highly regarded and prosperous member of the local community with little reason to take part in an amateurish poisoning plot, and suggested that Cheong had been framed by his commercial competitors. The prosecution, on the other hand, painted him as an agent of the Qing government, ideally positioned to sabotage the colony. They claimed he was financially desperate and had sold himself out to Chinese officials in return for money.
The defence noted that Cheong's own children had shown symptoms of poisoning; Attorney General Anstey argued that they had merely been seasick, and added that even if Cheong were innocent, it was "better to hang the wrong man than confess that British sagacity and activity have failed to discover the real criminals". Hulme retorted that "hanging the wrong man will not further the ends of justice". Cheong himself called for his own beheading, along with the rest of his family, if he were found guilty, in accordance with Chinese practice. On 6 February, the jury rejected the arguments of the prosecution and returned a 5–1 verdict of 'not guilty'.
Banishment of Cheong
The verdict triggered a sensation, and despite his acquittal, public opinion among the European residents of Hong Kong remained extremely hostile to Cheong. Governor Bowring and his Executive Council had determined while the trial was still underway that Cheong should be detained indefinitely regardless of its outcome, and he was arrested soon afterwards under emergency legislation on the pretext of being what the authorities called a "suspicious character". William Tarrant, the editor of the Friend of China, sued Cheong for damages. He was awarded $1,010. Before the sentence could be executed, Bridges, now Acting Colonial Secretary, accepted a petition from the Chinese community for Cheong to be allowed to leave peaceably from Hong Kong after putting his affairs in order. Cheong was accordingly released and left the colony on 1 August, abandoning his business.
Tarrant blamed Bridges publicly for permitting Cheong to escape, but was himself consequently sued for libel by Bridges and forced to pay a fine of £100.
Analysis
Responsibility
Modern scholars have been divided in attributing the responsibility for the incident. The historian George Beer Endacott argued that the poisoning was carried out on the instruction of Qing officials, while Jan Morris depicts Cheong as a lone wolf acting out of personal patriotism. Cheong's own clan record, written in China in 1904 at the command of the imperial court, states that the incident was entirely accidental, the result of negligence in preparing the bread rather than intentional poisoning. Yet another account says that the poisoning was carried out by two foremen at the bakery who fled Hong Kong immediately afterwards, and Cheong was uninvolved. Lowe and McLaughlin, in their 2015 investigation of the incident, classify the plausible hypotheses into three categories: that the poisoning was carried out by Cheong or an employee on orders from Chinese officials, that the poisoning was an attempt by a rival to frame Cheong, and that the poisoning was accidental.
Lowe and McLaughlin state that the chemical analyses conducted at the time do not support the theory that the incident was accidental. Cheong's clan record reports that "one day, through carelessness, a worker dropped some 'odd things' into the flour", even though the arsenic was found only in the bread itself, and in massive quantities—not in the flour, yeast, pastry, or in scrapings collected from the table, all of which were tested. If these results are correct, the poison must have been introduced shortly before baking. Moreover, despite its ultimate failure, Lowe and McLaughlin argue that the incident had certain characteristics of careful strategic planning: the decision to poison European-style bread, a food generally not eaten by Chinese at the time, would have served to separate the intended targets of the plot, while white arsenic (arsenic trioxide) was a fast-acting poison naturally available in China, and so well-suited to the task.
In June 1857, the Hong Kong Government Gazette published a confiscated letter written to Chan Kwei-tsih, the head of the resistance committee in Xin'an County, from his brother Tsz-tin, informing him of the incident. The second-hand report in the missive suggests that the committee was unlikely to have instigated the incident directly.
Toxicology
Aurelius Harland, the Surgeon General, conducted the initial tests on the bread and other materials recovered from the bakery. He recorded:
Portions of the poisoned bread were subsequently sealed and dispatched to Europe, where they were examined by the chemists Frederick Abel and Justus von Liebig, and the Scottish surgeon John Ivor Murray. Murray found the incident to be scientifically interesting because of the low number of deaths that resulted from the ingestion of such a massive quantity of arsenic. Chemical tests enabled him to obtain 62.3 grains of arsenous acid per pound of bread (9 parts per thousand), while Liebig found 64 grains/lb (10 parts per thousand). Liebig theorised that the poison had failed to act because it was vomited out before digestion could take place.
Effects and aftermath
Reception in Britain
News of the incident reached Britain during the 1857 general election, which had been called following a successful parliamentary vote of censure of Lord Palmerston's support for the Second Opium War. Mustering support for Palmerston and his war policy, the London Morning Post decried the poisoning in hyperbolic terms, describing it as a "hideous villainy, [an] unparalleled treachery, of these monsters of China", "defeated ... by its very excess of iniquity"; its perpetrators were "noxious animals ... wild beasts in human shape, without one single redeeming value" and "demons in human shape". Another newspaper supportive of Palmerston, the Globe, published a fabricated letter by Cheong admitting that he "had acted agreeably to the order of the Viceroy [Ye Mingchen]". By the time the news of Cheong's acquittal was published in London on 11 April, the election was all but over, and Palmerston was victorious.
In London, the incident came to the attention of the German author Friedrich Engels, who wrote to the New York Herald Tribune on 22 May 1857, saying that the Chinese now "poison the bread of the European community at Hong Kong by wholesale, and with the coolest premeditation". "In short, instead of moralizing on the horrible atrocities of the Chinese", he argued, "as the chivalrous English press does, we had better recognize that this is a war pro aris et focis, a popular war for the maintenance of Chinese nationality, with all its overbearing prejudice, stupidity, learned ignorance and pedantic barbarism if you like, but yet a popular war."
Others in England denied that the poisoning had even happened. In the House of Commons, Thomas Perronet Thompson alleged that the incident had been fabricated as part of a campaign of disinformation justifying the Second Opium War. Much of the disbelief centred on Cheong's name, which became an object of sarcasm and humour—bakers in 19th-century Britain often adulterated their dough with potassium alum, or simply 'alum', as a whitener —and Lowe and McLaughlin note that "a baker named Cheong Alum would have been considered funny by itself, but a baker named Cheong Alum accused of adding poison to his own dough seemed too good to be true". An official of the Colonial Office annotated a report on Cheong from Hong Kong with the remark, "Surely a mythical name".
Hong Kong
Both the scale of the poisoning and its potential consequences make the Esing bakery incident unprecedented in the history of the British Empire, with the colonists believing at the time that its success could have destroyed their community.
Morris describes the incident as "a dramatic realization of that favourite Victorian chiller, the Yellow Peril", and the affair contributed to the tensions between the European and Chinese communities in Hong Kong. In a state of panic, the colonial government conducted mass arrests and deportations of Chinese residents in the wake of the poisonings. 100 new police officers were hired and a merchant ship was commissioned to patrol the waters surrounding Hong Kong. Governor Bowring wrote to London requesting the dispatch of 5,000 soldiers to Hong Kong. A proclamation was issued reaffirming the curfew on Chinese residents, and Chinese ships were ordered to be kept away from Hong Kong, by force if necessary. Chan Tsz-tin described the aftermath in his letter:
The Esing Bakery was closed, and the supply of bread to the colonial community was taken over by the English entrepreneur George Duddell, described by the historian Nigel Cameron as "one of the colony's most devious crooks". Duddell's warehouse was attacked in an arson incident on 6 March 1857, indicative of the continuing problems in the colony. In the same month, one of Duddell's employees was reported to have discussed being offered $2,000 to adulterate biscuit dough with a soporific—the truth of this allegation is unknown. Soon after the poisoning, Hong Kong was rocked by the Caldwell affair, a series of scandals and controversies involving Bridges, Tarrant, Anstey, and other members of the administration, similarly focused on race relations in the colony.
Cheong himself made a prosperous living in Macau and Vietnam after his departure from Hong Kong, and later became consul for the Qing Empire in Vietnam. He died in 1900. A portion of the poisoned bread, well-preserved by its high arsenic content, was kept in a cabinet of the office of the Chief Justice of the Hong Kong Supreme Court until the 1930s.
Notes
References
Sources
(open-access preprint)
Political scandals in Hong Kong
Murder in Hong Kong
British Hong Kong
Second Opium War
Mass poisoning
Arsenic poisoning incidents
1857 in Hong Kong
Political scandals in the United Kingdom | Esing Bakery incident | Chemistry,Environmental_science | 3,273 |
4,159,307 | https://en.wikipedia.org/wiki/Data%20access%20layer | A data access layer (DAL) in computer software is a layer of a computer program which provides simplified access to data stored in persistent storage of some kind, such as an entity-relational database. The acronym is used predominantly in Microsoft environments.
For example, the DAL might return a reference to an object (in terms of object-oriented programming) complete with its attributes instead of a row of fields from a database table. This allows the client (or user) modules to be created with a higher level of abstraction. This kind of model could be implemented by creating a class of data access methods that directly reference a corresponding set of database stored procedures. Another implementation could potentially retrieve or write records to or from a file system. The DAL hides this complexity of the underlying data store from the external world.
For example, instead of using commands such as insert, delete, and update to access a specific table in a database, a class and a few stored procedures could be created in the database. The procedures would be called from a method inside the class, which would return an object containing the requested values. Or, the insert, delete and update commands could be executed within simple functions like registeruser or loginuser stored within the data access layer.
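As an illustration of the pattern, the sketch below implements a tiny data access layer in Python on top of SQLite. It is a hypothetical example rather than a canonical implementation: SQLite has no stored procedures, so plain SQL statements stand in for them, and the names UserDAL, register_user and get_user are invented for illustration.

```python
import sqlite3
from typing import Optional

class UserDAL:
    """Minimal data access layer: callers work with methods and dicts,
    never with SQL statements or cursors directly."""

    def __init__(self, path: str = "app.db"):
        self._conn = sqlite3.connect(path)
        self._conn.row_factory = sqlite3.Row
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, "
            "name TEXT UNIQUE, email TEXT)")

    def register_user(self, name: str, email: str) -> int:
        # Insert a row and return the generated primary key.
        cur = self._conn.execute(
            "INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
        self._conn.commit()
        return cur.lastrowid

    def get_user(self, user_id: int) -> Optional[dict]:
        # Return the row as a plain dict, or None if it does not exist.
        row = self._conn.execute(
            "SELECT id, name, email FROM users WHERE id = ?", (user_id,)).fetchone()
        return dict(row) if row else None

dal = UserDAL(":memory:")
uid = dal.register_user("ada", "ada@example.com")
print(dal.get_user(uid))
```

Client code calls register_user and get_user without knowing whether the backend is SQLite, another database server, stored procedures, or a file store; keeping all database calls inside this one class is what makes later porting easier.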
Also, business logic methods from an application can be mapped to the data access layer. So, for example, instead of making a query into a database to fetch all users from several tables, the application can call a single method from a DAL which abstracts those database calls.
Applications using a data access layer can be either database server dependent or independent. If the data access layer supports multiple database types, the application becomes able to use whatever databases the DAL can talk to. In either circumstance, having a data access layer provides a centralized location for all calls into the database, and thus makes it easier to port the application to other database systems (assuming that 100% of the database interaction is done in the DAL for a given application).
Object-Relational Mapping tools provide data layers in this fashion, following the Active Record or Data Mapper patterns. The ORM/active-record model is popular with web frameworks.
See also
Data access object
Database abstraction layer
References
External links
Microsoft Application Architecture Guide
ASP.NET DAL tutorial
Object-oriented programming
Data mapping
Databases | Data access layer | Engineering | 465 |
38,973,439 | https://en.wikipedia.org/wiki/Higgs%20field%20%28classical%29 | Spontaneous symmetry breaking, a vacuum Higgs field, and its associated fundamental particle the Higgs boson are quantum phenomena. A vacuum Higgs field is responsible for the spontaneous breaking of the gauge symmetries of fundamental interactions and provides the Higgs mechanism for generating the mass of elementary particles.
At the same time, classical gauge theory admits a comprehensive geometric formulation in which gauge fields are represented by connections on principal bundles. In this framework, spontaneous symmetry breaking is characterized as a reduction of the structure group G of a principal bundle P → X to a closed subgroup H. By a well-known theorem, such a reduction takes place if and only if there exists a global section h of the quotient bundle P/H → X. This section is treated as a classical Higgs field.
A key point is that there exists a composite bundle P → P/H → X, where P → P/H is a principal bundle with the structure group H. Then matter fields, possessing an exact symmetry group H, in the presence of classical Higgs fields are described by sections of some composite bundle Y → P/H → X, where Y → P/H is a bundle associated to P → P/H. Herewith, a Lagrangian of these matter fields is gauge invariant only if it factorizes through the vertical covariant differential of some connection on the principal bundle P → P/H, but not on P → X.
An example of a classical Higgs field is a classical gravitational field, identified with a pseudo-Riemannian metric g on a world manifold X. In the framework of gauge gravitation theory, it is described as a global section of the quotient bundle FX/SO(1,3) → X, where FX is the principal bundle of tangent frames to X with the structure group GL(4,ℝ).
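The bundle symbols above follow the standard formulation of this construction. As a compact restatement, assuming the conventional notation in which G is the structure group and H its closed subgroup:

```latex
% Compact restatement of the reduction criterion (standard notation assumed)
\begin{align*}
  &P \to X \ \text{a principal $G$-bundle}, \qquad H \subset G \ \text{a closed subgroup},\\
  &\text{reduction of the structure group } G \to H
    \iff \exists\, h \in \Gamma(P/H \to X),\\
  &\text{composite bundle: } P \to P/H \to X, \qquad
    \text{matter fields: sections of } Y \to P/H \to X,\\
  &\text{gravitation: } G = GL(4,\mathbb{R}),\ \ H = SO(1,3), \qquad
    g \in \Gamma\big(FX/SO(1,3) \to X\big).
\end{align*}
```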
See also
Gauge gravitation theory
Reduction of the structure group
Spontaneous symmetry breaking
Bibliography
External links
G. Sardanashvily, Geometry of classical Higgs fields, Int. J. Geom. Methods Mod. Phys. 3 (2006) 139.
Theoretical physics
Gauge theories
Symmetry | Higgs field (classical) | Physics,Mathematics | 361 |
2,665 | https://en.wikipedia.org/wiki/Affray | In many legal jurisdictions related to English common law, affray is a public order offence consisting of the fighting of one or more persons in a public place to the terror of ordinary people. Depending on their actions, and the laws of the prevailing jurisdiction, those engaged in an affray may also render themselves liable to prosecution for assault, unlawful assembly, or riot; if so, it is for one of these offences that they are usually charged.
United Kingdom
England and Wales
The common law offence of affray was abolished for England and Wales on 1 April 1987. Affray is now a statutory offence that is triable either way. It is created by section 3 of the Public Order Act 1986 which provides:
The term "violence" is defined by section 8.
Section 3(6) once provided that a constable could arrest without warrant anyone he reasonably suspected to be committing affray, but that subsection was repealed by paragraph 26(2) of Schedule 7 to, and Schedule 17 to, the Serious Organised Crime and Police Act 2005, which includes more general provisions for police to make arrests without warrant.
The mens rea of affray is that person is guilty of affray only if he intends to use or threaten violence or is aware that his conduct may be violent or threaten violence.
The offence of affray has been used by HM Government to address the problem of drunken or violent individuals who cause serious trouble on airliners.
In R v Childs & Price (2015), the Court of Appeal quashed a murder verdict and replaced it with affray, having dismissed an allegation of common purpose.
Northern Ireland
Affray is a serious offence for the purposes of Chapter 3 of the Criminal Justice (Northern Ireland) Order 2008.
Australia
In New South Wales, section 93C of Crimes Act 1900 defines that a person will be guilty of affray if he or she threatens unlawful violence towards another and his or her conduct is such as would cause a person of reasonable firmness present at the scene to fear for his or her personal safety. A person will only be guilty of affray if the person intends to use or threaten violence or is aware that his or her conduct may be violent or threaten violence. The maximum penalty for an offence of affray contrary to section 93C is a period of imprisonment of 10 years.
In Queensland, section 72 of the Criminal Code of 1899 defines affray as taking part in a fight in a public highway or taking part in a fight of such a nature as to alarm the public in any other place to which the public have access. This definition is taken from that in the English Criminal Code Bill of 1880, cl. 96. Section 72 says "Any person who takes part in a fight in a public place, or takes part in a fight of such a nature as to alarm the public in any other place to which the public have access, commits a misdemeanour. Maximum penalty—1 year’s imprisonment."
In Victoria, Affray was a common law offence until 2017, when it was abolished and was replaced with the statutory offence that can be found under section 195H of the Crimes Act 1958 (Vic). The section defines Affray as the use or threat of unlawful violence by a person in a manner that would cause a person of reasonable firmness present at the scene to be terrified. However, a person who commits this conduct may only be found guilty of Affray if the use or threat of violence was intended, or if the person was reckless as to whether the conduct involves the use or threat of violence. If found guilty, the maximum penalty that may be imposed for Affray is imprisonment for 5 years or, if at the time of committing the offence the person was wearing a face covering used primarily to conceal their identity or to protect them from the effects of crowd-controlling substances, imprisonment for 7 years.
India
The Indian Penal Code (sect. 159) adopts the old English common law definition of affray, with the substitution of "actual disturbance of the peace" for "causing terror to the lieges".
New Zealand
In New Zealand affray has been codified as "fighting in a public place" by section 7 of the Summary Offences Act 1981.
South Africa
Under the Roman-Dutch law in force in South Africa affray falls within the definition of vis publica.
United States
In the United States, the English common law as to affray applies, subject to certain modifications by the statutes of particular states.
See also
Assault
Battery
Combat
References
Blackstone's Police Manual Volume 4: General police duties, Fraser Simpson (2006). p. 247. Oxford University Press.
Crimes
Legal terminology | Affray | Biology | 960 |
5,305,433 | https://en.wikipedia.org/wiki/Life%20course%20approach | The life course approach, also known as the life course perspective or life course theory, refers to an approach developed in the 1960s for analyzing people's lives within structural, social, and cultural contexts. It views one's life as a socially sequenced timeline and recognizes the importance of factors such as generational succession and age in shaping behavior and career. Development does not end at childhood, but instead extends through multiple life stages to influence life trajectory.
The origins of this approach can be traced back to pioneering studies of the 1920s such as William I. Thomas and Florian Znaniecki's The Polish Peasant in Europe and America and Karl Mannheim's essay on the "Problem of Generations".
Overview
The life course approach examines an individual's life history and investigates, for example, how early events influenced future decisions and events such as marriage and divorce, engagement in crime, or disease incidence. The primary factor promoting standardization of the life course was improvement in mortality rates brought about by the management of contagious and infectious diseases such as smallpox. A life course is defined as "a sequence of socially defined events and roles that the individual enacts over time". In particular, the approach focuses on the connection between individuals and the historical and socioeconomic context in which these individuals lived.
The method encompasses observations including history, sociology, demography, developmental psychology, biology, public health and economics. So far, empirical research from a life course perspective has not resulted in the development of a formal theory.
Glen Elder theorized the life course as based on five key principles: life-span development, human agency, historical time and geographic place, timing of decisions, and linked lives. As a concept, a life course is defined as "a sequence of socially defined events and roles that the individual enacts over time" (Giele and Elder 1998, p. 22). These events and roles do not necessarily proceed in a given sequence, but rather constitute the sum total of the person's actual experience. Thus the concept of life course implies age-differentiated social phenomena distinct from uniform life-cycle stages and the life span. Life span refers to duration of life and characteristics that are closely related to age but that vary little across time and place.
In contrast, the life course perspective elaborates the importance of time, context, process, and meaning on human development and family life (Bengtson and Allen 1993). The family is perceived as a micro social group within a macro social context—a "collection of individuals with shared history who interact within ever-changing social contexts across ever increasing time and space" (Bengtson and Allen 1993, p. 470). Aging and developmental change, therefore, are continuous processes that are experienced throughout life. As such, the life course reflects the intersection of social and historical factors with personal biography and development within which the study of family life and social change can ensue (Elder 1985; Hareven 1996).
Life course theory also has moved in a constructionist direction. Rather than taking time, sequence, and linearity for granted, in their book Constructing the Life Course, Jaber F. Gubrium and James A. Holstein (2000) take their point of departure from accounts of experience through time. This shifts the figure and ground of experience and its stories, foregrounding how time, sequence, linearity, and related concepts are used in everyday life. It presents a radical turn in understanding experience through time, moving well beyond the notion of a multidisciplinary paradigm, providing an altogether different paradigm from traditional time-centered approaches. Rather than concepts of time being the principal building blocks of propositions, concepts of time are analytically bracketed and become focal topics of research and constructive understanding.
The life course approach has been applied to topics such as the occupational health of immigrants, and retirement age. It has also become increasingly important in other areas such as in the role of childhood experiences affecting the behaviour of students later in life or physical activity in old age.
References
Further reading
Elder G. H. Jr & Giele J.Z. (2009). Life Course Studies. An Evolving Field. In Elder G. H. Jr & Giele J.Z. (Eds.), The Craft of Life Course Research (pp. 1–28). New York; London: The Guilford Press.
Levy, R., Ghisletta, P., Le Goff, J. M., Spini, D., & Widmer, E. (2005). Towards an Interdisciplinary Perspective on the Life Course. pp. 3–32. Elsevier.
Developmental psychology
Methods in sociology
Epidemiology | Life course approach | Biology,Environmental_science | 950 |
44,252,271 | https://en.wikipedia.org/wiki/List%20of%20largest%20galaxies | This is a list of the largest known galaxies, sorted in order of increasing major axis diameter. The unit of measurement used is the light-year (approximately 9.46 trillion kilometers).
Overview
Galaxies are vast collections of stars, planets, nebulae and other objects that are surrounded by an interstellar medium and held together by gravity. They do not have a definite boundary by nature, and are characterized by gradually decreasing stellar density as a function of increasing distance from their centers. Because of this, measuring the sizes of galaxies can often be difficult and can yield a wide range of results depending on the sensitivity of the detection equipment and the methodology being used. Some galaxies emit more strongly in wavelengths outside the visible spectrum, depending on their stellar populations, whose stars may emit strongly in wavelengths beyond the detection range. It is also important to consider the morphology of the galaxy when attempting to measure its size – an issue raised by the Russian astrophysicist B.A. Vorontsov-Vel'Yaminov in 1961, who considered separate determination methods for measuring the sizes of spiral and elliptical galaxies.
For a full context about how the diameters of galaxies are measured, including the estimation methods stated in this list, see section Galaxy#Physical diameters.
List
Listed below are galaxies with diameters greater than 700,000 light-years. This list uses the mean cosmological parameters of the Lambda-CDM model based on results from the 2015 Planck collaboration, where H0 = 67.74 km/s/Mpc, ΩΛ = 0.6911, and Ωm = 0.3089. Due to different techniques, each figure listed on the galaxies has varying degrees of confidence in them. The reference to those sizes plus further additional details can be accessed by clicking the link for the NASA/IPAC Extragalactic Database (NED) on the right-hand side of the table.
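The tabulated diameters are taken from NED and rest on the various estimation methods referenced above. As a rough illustration of the simplest ingredient of such estimates, the sketch below (Python) applies the small-angle relation to convert an angular size and a distance into a linear diameter; the example numbers are invented and no cosmological corrections are applied.

```python
import math

def physical_diameter_ly(angular_size_arcmin: float, distance_mly: float) -> float:
    """Small-angle approximation: linear size = distance * angle (in radians).
    Suitable only where cosmological corrections are small."""
    angle_rad = math.radians(angular_size_arcmin / 60.0)
    return distance_mly * 1.0e6 * angle_rad

# Illustrative: an angular diameter of ~3 arcminutes at a distance of ~1,000 Mly
print(f"{physical_diameter_ly(3.0, 1000.0):,.0f} light-years")
```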
Listed below are some notable galaxies under 700,000 light-years in diameter, for the purpose of comparison. All links to NED are available, except for the Milky Way, which is linked to the relevant paper detailing its size.
See also
List of largest known stars
List of most massive stars
List of most massive black holes
List of largest cosmic structures
List of largest nebulae
Notes
References
Further reading
Galaxies
Lists of superlatives in astronomy
Lists of extreme points
Lists of galaxies | List of largest galaxies | Astronomy | 485 |
33,301,978 | https://en.wikipedia.org/wiki/OU%20Geminorum | OU Geminorum (OU Gem) is a visual binary or possible triple star located in the constellation of Gemini.
The system has an absolute magnitude of 5.93, so at a distance of 48 light years it has an apparent magnitude of 6.77 when viewed from Earth. It also has a total proper motion of 0.210"/yr and belongs to the Ursa Major stream.
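The quoted magnitudes are consistent with the standard distance modulus relation m = M + 5 log10(d / 10 pc); the short Python check below reproduces the apparent magnitude from the absolute magnitude and distance given above.

```python
import math

# Distance-modulus check: m = M + 5 * log10(d / 10 pc)
absolute_magnitude = 5.93
distance_ly = 48.0
distance_pc = distance_ly / 3.2616           # light-years to parsecs

apparent_magnitude = absolute_magnitude + 5 * math.log10(distance_pc / 10.0)
print(f"{apparent_magnitude:.2f}")           # ~6.77, matching the quoted value
```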
The system is a much studied BY Draconis variable star with a period of 6.99 days. The primary star has a spectral type of K3Vk. The secondary star in the system has a surface temperature of and orbits the primary in about seven days.
References
External links
Binary stars
Gemini (constellation)
0233
K-type main-sequence stars
Gliese, 0233
Geminorum, OU
045088
Durchmusterung objects
030630 | OU Geminorum | Astronomy | 179 |
35,249,712 | https://en.wikipedia.org/wiki/Constant%20strain%20triangle%20element | In numerical mathematics, the constant strain triangle element, also known as the CST element or T3 element, is a type of element used in finite element analysis which is used to provide an approximate solution in a 2D domain to the exact solution of a given differential equation.
The name of this element reflects the fact that its shape functions are linear, so their partial derivatives, and hence the approximated strains, are constant over the element. When applied to plane stress and plane strain problems, this means that the approximate solutions obtained for the stress and strain fields are constant throughout the element's domain.
The element provides an approximation to the exact solution of a partial differential equation over a two-dimensional domain, and its shape functions are commonly parametrized using the barycentric coordinate system.
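As a sketch of how the element is typically assembled, the following Python/NumPy snippet builds the constant strain-displacement matrix and a plane-stress element stiffness matrix for one triangle. It is a minimal textbook-style example rather than code from any particular finite element library; the material constants and the plane-stress assumption are illustrative.

```python
import numpy as np

def cst_stiffness(xy, E=210e9, nu=0.3, thickness=0.01):
    """Element stiffness matrix K = t * A * B^T D B for a 3-node (CST) triangle.

    xy: (3, 2) array of node coordinates; plane-stress material matrix assumed.
    """
    (x1, y1), (x2, y2), (x3, y3) = xy
    area = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

    # Shape-function gradients are constant, so B is constant over the element.
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    B = np.zeros((3, 6))
    B[0, 0::2] = b
    B[1, 1::2] = c
    B[2, 0::2] = c
    B[2, 1::2] = b
    B /= 2.0 * area

    # Plane-stress constitutive matrix.
    D = (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],
                                        [nu, 1.0, 0.0],
                                        [0.0, 0.0, (1.0 - nu) / 2.0]])
    return thickness * area * B.T @ D @ B

K = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
print(K.shape)  # (6, 6)
```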
FEM elements | Constant strain triangle element | Mathematics | 140 |
27,743,621 | https://en.wikipedia.org/wiki/NASA%20RealWorld-InWorld%20Engineering%20Design%20Challenge | The NASA Real World-In World Engineering Design Challenge is an educational program targeting students in grades 7–12 to foster skills relevant to STEM careers. The program is structured into two phases: project-based learning and team competitions. Participants tackle engineering challenges, collaborating with university students and mentors in a virtual reality environment. The projects focus on technologies such as the James Webb Space Telescope and the Robonaut 2 humanoid robot. The Real World-In World initiative is a collaboration among NASA, the National Institute of Aerospace (NIA), and USA TODAY Education. It builds on the foundation of the Sight/Insight design challenge by NASA and USA TODAY Education, and the Virtual Exploration Sustainability Challenge (VESC), a joint effort of NIA and NASA.
Scheme followed in 2011
Phase 1: RealWorld
Participants
Teachers, coaches, and high school-aged students who were involved in the RealWorld-InWorld Engineering Design Challenge collaborated to address engineering problems inspired by NASA.
Objective
Small teams of high school students and coaches or teachers worked together on two real-world problems associated with either the James Webb Space Telescope or Robonaut 2. The final project solutions submitted by teams were showcased on the RealWorld-InWorld website.
Phase 2: InWorld
Participants
Participating college students formed teams consisting of 3-5 high school-aged students and their teacher or coach. Each team chose an engineering mentor from among the participants. Many of the participants were also involved in NASA's INSPIRE program.
Objective
To collaborate within a 3D virtual environment to improve designs and generate 3D models of the James Webb Space Telescope and Robonaut 2. Engineers from both projects engaged in virtual conversations within the InWorld phase of the challenge. In contrast to the RealWorld phase, the InWorld challenge took place in a virtual environment hosted within the NIA Universe virtual reality world. Teams developed and constructed their solutions to the given problems within this virtual setting.
References
Science competitions | NASA RealWorld-InWorld Engineering Design Challenge | Technology | 388 |
52,177,827 | https://en.wikipedia.org/wiki/Cosmetic%20packaging | The term cosmetic packaging is used for containers (primary packaging) and secondary packaging of fragrances and cosmetic products. Cosmetic products are substances intended for human cleansing, beautifying and promoting an enhanced appearance without altering the body's structure or functions.
Cosmetic packaging is governed by an international norm set by the International Organization for Standardization and by national or regional regulations such as those of the EU or the FDA. Marketers and manufacturers must comply with these to distribute their products in the corresponding areas of jurisdiction.
History
A cosmetic container, cosmetic box, or cosmetic vessel is found in the historical records, both as an artifact, as relief items in some cultures, and are sometimes referenced in historical or archaeological literature. They are sometimes created in specific styles, shapes, or motifs.
In Ancient Greece, the vessel used for cosmetics was the pyxis. In Ancient Egypt, artifacts of hieroglyphically inscribed kohl tubes have been found, along with kohl vessels and kohl spoons, which were formed in stylized shapes drawn from Egyptian ideology, including specific hieroglyphs.
The use of the cosmetic vessel may extend to trinkets, car keys, and toiletry accessories such as a nail clipper; as a non-toiletry storage container, it becomes an 'all-purpose' decorated, special-use vessel.
Containers are known from many societies, ancient and modern. The Native Americans of the Americas made small containers woven from basketry materials, including pine needles.
Ancient Egypt
In Ancient Egypt toiletry items began in the Predynastic Period with ivory cosmetic articles; also bone, stone, or pottery. Ivory combs, and kohl spoons were among the first, with many shapes; common themes for shapes became the ankh symbol, ducks, lotus flowers, etc. In the time of the Predynastic and Old Kingdom, bowls were also mechanically drilled, including miniature sizes, and were used in life and also included as grave goods. The bowls were either a type of unguent jar, or a toiletry "kohl cosmetic vessel". The desert sun or Nile floodwaters during inundation produced a need for facial-eye protection, using 'eyepaint' or eyeliner, when working in the flooded lands; theoretically it was also used by males. The creation of predynastic cosmetic palettes with their eyepaint 'mixing circle', may have been the start of the lineage of the kohl cosmetic artifacts. The famed Narmer Palette, which scholars believe to commemorate the unification of upper and lower Egypt, is believed to be such a cosmetic article, perhaps even for the cosmetics of the king.
Description
The term cosmetic packaging includes primary and secondary packaging. Primary packaging, also called the cosmetic container, houses the cosmetic product and is in direct contact with it. Secondary packaging is the outer wrapping of one or several cosmetic containers. An important difference between primary and secondary packaging is that any information that is necessary to clarify the safety of the product must appear on the primary package. Otherwise, much of the required information can appear on just the secondary packaging.
The cosmetic container shall carry the name of the distributor, the ingredients, storage conditions, the nominal content, product identification (e.g., batch number), warning notices, and directions for use.
The secondary packaging shall, in addition, carry the address of the distributor and information on the cosmetic's mode of action. The secondary packaging does not need to carry any product identification notice.
In cases where the cosmetic product is only wrapped by one single container, this container needs to carry all the information.
Purpose of cosmetic packaging
There are multiple reasons why care must be put into cosmetic containers. Not only must they protect the product, they need to provide conveniences for vendors and ultimately consumers.
The main purpose of a cosmetic container is to protect the product while it is in storage or being transported. The container must be a well thought out solution that protects the product from deterioration and helps preserve its quality. It must be an attractive looking container as part of the marketing of a beauty product.
The container must also contain labels that legibly display basic information about the product and the manufacturer. These labels include contact information, ingredients, expiration dates, warnings and instructions. Labels not only identify products and their origins; they also provide consumers with facts that must not be confusing or misleading.
Ideally, the container is made of durable material to give the product a long shelf life. It must last even longer through consumer use. The frequent opening and closing of the container can take a toll on its condition over time. Ultimately, the container must protect the product to the degree that it remains safe for use. In other words, the container must shield the product from dirt, dust and germs.
The aesthetics of the container are considered extremely important since cosmetic products are mainly sold on brand image. Since cosmetic products are not considered medicine or survival products, the marketing of cosmetics depends heavily on associating brand awareness with emotion. The container must convey emotions about how the product will improve one's appearance and attitude. Many times cosmetics are repackaged and rebranded to help give them more market visibility.
Protection
The main purpose of a container is to store the product so that it is not degraded through storage, shipping and handling. Degradation and damage can arise from many sources, which can be categorized as biological, chemical or thermal, along with damage caused by radiation, human interaction, electrical sources or pressure.
In addition to protecting the product, packaging also plays a big role in marketing cosmetic products. While product quality is a major factor in the product's success, its packaging must be attractive since that is the essence of beauty marketing. Package design must capture the imagination and be associated with enhancing appearance. One of the keys to attractive packaging is the artistic use of colors. Most relevant for the marketer is the outer secondary packaging. However, there are cosmetics which are distributed in one single cosmetic container.
Creation of brand awareness
Cosmetic packages must not only convey beauty, they must equate to brand awareness. Since the package is what the consumer initially sees, it is very influential in shaping perceptions about the product. Part of building brand awareness for a cosmetic product is associating it with emotion. Since it is not a survival product it is marketed to appeal to the desire to enhance appearance. The packaging must stimulate this emotion.
Labelling
Labels tell consumers what they need to know about the product, as far as how to use it and where it comes from. Companies must list the ingredients and the function of the product, especially when it is unclear. The label must contain the contact information of the entity responsible for putting the product on the market. Labels also provide product tracking information.
The label must be easy to read, particularly for a customer where the product is being displayed. Certain compositions, such as perfumes, can be listed as one ingredient. Secondary packages are what the consumer sees as the outermost package. Primary packages are within the secondary package. Certain information can appear just on secondary packages. The most important information, particularly if the product is prone to misuse, must be displayed on both the primary and secondary packaging.
Information accuracy
One of the most important aspects of regulations on labeling is that the information is accurate. Although the FDA does not have the resources to inspect all cosmetic products on the market, it can issue penalties for various violations involving packaging and labeling. It is the manufacturer's responsibility to make sure that its product is safe for public consumption.
Avoidance of misleading information
None of the information, including name and address, may be misleading. Words can be abbreviated only if it is clear what they represent. All text must be printed clearly on the packaging. Smaller packages in which text is too difficult to read should include tags with legible text.
Listing of ingredients
Ingredients must be listed in a certain order with priority given to ingredients that represent 1% or more of the volume. These ingredients must be listed in descending order, based on weight. This group of ingredients is then followed by those that represent 1% or less of the product and listed in any order. Colorants may also be listed in any order.
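As a rough illustration of this ordering rule, the following sketch lists ingredients of 1% or more in descending order of weight and then appends the remaining ingredients in the order given; the formulation data and function name are hypothetical and not drawn from any regulation text.

```python
def order_ingredients(ingredients):
    """Order an ingredient list for labelling.

    `ingredients` is a list of (name, percent_by_weight) tuples.
    Ingredients of 1% or more are listed first, in descending order of
    weight; those below 1% follow and may appear in any order (here,
    simply the order in which they were given).
    """
    major = sorted((item for item in ingredients if item[1] >= 1.0),
                   key=lambda item: item[1], reverse=True)
    minor = [item for item in ingredients if item[1] < 1.0]
    return [name for name, _ in major + minor]

# Hypothetical formulation, illustrative percentages only
label = order_ingredients([("aqua", 62.0), ("glycerin", 4.0),
                           ("parfum", 0.5), ("citric acid", 0.2),
                           ("cetearyl alcohol", 6.5)])
print(", ".join(label))
```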
Packaging in multiple layers
Many times cosmetic products are packaged in multiple layers. Whenever the number of units is difficult for the consumer to detect, it should be listed on the outer package, which should also contain details about how to use the product and warnings on what to do if it is misused. It is essential that the product is protected from environmental elements such as mould and bacteria.
The packaging must be sufficient to protect the mechanical, thermal, biological, and chemical properties of the product. It should also be strong enough to withstand human tampering and radiation damage.
FDA and EU regulations
The FDA oversees cosmetic packaging but does not test products. It leaves testing for safety up to manufacturers. It still provides regulations and can issue recalls when a product is associated with safety hazards. While the FDA does not have many restrictions on ingredients for cosmetic products, it does require that certain chemicals and colorants be listed.
As far as EU regulations regarding packaging, manufacturers must be compliant with EC No. 1223/2009. One of these requirements involves the manufacturer issuing a safety report before putting the product on the market. The manufacturer must also disclose any serious undesirable effects (SUE) to the EU. Marketers are required to list nano-materials.
The EU's definition of "ingredients" does not include raw or technical materials used in production that do not end up in the final product. In some cases when durability is an issue, the manufacturer must list an expiration date after the product has been opened. The words "best used before" are common for identifying the product expiration date.
ISO-Standard
Standard ISO 22715 provides specifications for the packaging and labeling of all cosmetic products that are sold or distributed at no charge; i.e. free samples. National regulations dictate what products are to be regarded as cosmetics. While ISO 22715 is not legally binding, national regulations regarding cosmetic products can be even stricter than those laid out in ISO 22715. The link between standards and regulations is that a standard often represents the common denominator of national law, as the standardization committee consists of members of most countries.
Standards and regulations
In addition to cosmetic containers meeting the requirements of ISO, they must also comply with regulations set by the European Union and the United States. Cosmetics products marketed in the EU must comply with the EU-Regulation (EC) No. 1223/2009 of the European Parliament and of the Council on cosmetic products. The entity that puts the product on the market, known as the "responsible person," must prepare a product safety report for the EU. Manufacturers must notify the EU Cosmetic Products Notification Portal (CPNP) when they plan on putting products on the market. Some of the main EU requirements include identifying colorants and nanomaterials and disclosing serious undesirable effects (SUE) to the EU.
The main issue to remember about labels on cosmetic containers is that they provide safety guidance, including instructions for use and proper disposal. In the United States, companies must comply with the Federal Food, Drug and Cosmetic Act. Although the FDA does exercise authority over the cosmetic industry, it does not allocate sufficient resources to constantly monitor the industry. ISO defines an ingredient as a material that makes up the final product and not necessarily raw materials used in the production of the product. While the FDA does not have strict requirements for ingredients, they must still be listed on the primary container or secondary package.
Environmental aspects
The materials used for cosmetic containers are of concern for both protective and sustainable reasons. The container must protect the product from environmental elements and it must also move in the direction of eco-friendly solutions. In other words, the more the container can be recycled, the better for both the environment and cost efficiency. Some of the factors that affect container durability include how the product substance responds to usage, chemical composition and biology. It is essential that the container is able to withstand mold and mildew, as well as contaminants.
The container must be made of materials resistant to hot or cold temperatures. It must also protect the product from ultraviolet rays, which can potentially damage the product. The container also cannot absorb product substances. Traditionally, plastic material or glass have been used to house cosmetics. Aluminum has become a popular type of container due to its lightweight yet sturdy quality, flexibility, durability and recyclability. A key factor in what type of material can be used for containers is how compatible the material is with the product.
There is also a choice of eco-friendly boxes for cosmetic packaging, and companies also use custom cosmetic boxes for the packaging of their products. Plastic packaging materials are controversial because of their polluting effects, in particular on the marine environment. In 2014, a scientific study estimated the amount of floating plastic in the world's oceans at 5 trillion pieces, with an accumulated weight of 250,000 tons.
References
Books
Lockhart, H., and Paine, F.A., "Packaging of Pharmaceuticals and Healthcare Products", Blackie, 1996.
Yam, K.L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009.
Dayan, N., "Formulating, Packaging, and Marketing of Natural Cosmetic Products", 2011.
External links
http://eur-lex.europa.eu/cosmetics
https://www.fda.gov/Cosmetics/GuidanceRegulation
www.iso.org/iso 22715:2006
Technical specifications
Perfumery
Cosmetics
Toiletry
Perfumes
Packaging
Processes
Retail packaging | Cosmetic packaging | Technology | 2,779 |
61,594,653 | https://en.wikipedia.org/wiki/Cytomegalovirus%20papiinebeta4 | Cytomegalovirus papiinebeta4, formerly Papiine betaherpesvirus 4 (PaHV-4), is a species of virus in the genus Cytomegalovirus, subfamily Betaherpesvirinae, family Herpesviridae, and order Herpesvirales.
References
Betaherpesvirinae | Cytomegalovirus papiinebeta4 | Biology | 71 |
522,062 | https://en.wikipedia.org/wiki/Engineering%20tolerance | Engineering tolerance is the permissible limit or limits of variation in:
a physical dimension;
a measured value or physical property of a material, manufactured object, system, or service;
other measured values (such as temperature, humidity, etc.);
in engineering and safety, a physical distance or space (tolerance), as in a truck (lorry), train or boat under a bridge as well as a train in a tunnel (see structure gauge and loading gauge);
in mechanical engineering, the space between a bolt and a nut or a hole, etc.
Dimensions, properties, or conditions may have some variation without significantly affecting functioning of systems, machines, structures, etc. A variation beyond the tolerance (for example, a temperature that is too hot or too cold) is said to be noncompliant, rejected, or exceeding the tolerance.
Considerations when setting tolerances
A primary concern is to determine how wide the tolerances may be without affecting other factors or the outcome of a process. This can be determined by the use of scientific principles, engineering knowledge, and professional experience. Experimental investigation is very useful for studying the effects of tolerances: design of experiments, formal engineering evaluations, etc.
A good set of engineering tolerances in a specification, by itself, does not imply that compliance with those tolerances will be achieved. Actual production of any product (or operation of any system) involves some inherent variation of input and output. Measurement error and statistical uncertainty are also present in all measurements. With a normal distribution, the tails of measured values may extend well beyond plus and minus three standard deviations from the process average. Appreciable portions of one (or both) tails might extend beyond the specified tolerance.
The process capability of systems, materials, and products needs to be compatible with the specified engineering tolerances. Process controls must be in place and an effective quality management system, such as Total Quality Management, needs to keep actual production within the desired tolerances. A process capability index is used to indicate the relationship between tolerances and actual measured production.
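As a hedged illustration of how a process capability index relates tolerances to actual measured production, the sketch below computes the common Cp and Cpk indices; the measurements and specification limits are invented for the example, and acceptable index values are a matter of company or industry policy rather than anything stated here.

```python
import statistics

def process_capability(measurements, lower_spec, upper_spec):
    """Return (Cp, Cpk) for a set of measurements against a two-sided tolerance.

    Cp compares the tolerance width to the natural process spread (6 sigma);
    Cpk additionally penalises a process mean that drifts off centre.
    """
    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    cp = (upper_spec - lower_spec) / (6 * sigma)
    cpk = min(upper_spec - mean, mean - lower_spec) / (3 * sigma)
    return cp, cpk

# Invented example: a 10 +/- 0.05 mm dimension measured on 8 parts
parts = [10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 10.01, 10.00]
cp, cpk = process_capability(parts, lower_spec=9.95, upper_spec=10.05)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```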
The choice of tolerances is also affected by the intended statistical sampling plan and its characteristics such as the Acceptable Quality Level. This relates to the question of whether tolerances must be extremely rigid (high confidence in 100% conformance) or whether some small percentage of being out-of-tolerance may sometimes be acceptable.
An alternative view of tolerances
Genichi Taguchi and others have suggested that traditional two-sided tolerancing is analogous to "goal posts" in a football game: It implies that all data within those tolerances are equally acceptable. The alternative is that the best product has a measurement which is precisely on target. There is an increasing loss which is a function of the deviation or variability from the target value of any design parameter. The greater the deviation from target, the greater is the loss. This is described as the Taguchi loss function or quality loss function, and it is the key principle of an alternative system called inertial tolerancing.
Research and development work conducted by M. Pillet and colleagues at the Savoy University has resulted in industry-specific adoption. Recently the publishing of the French standard NFX 04-008 has allowed further consideration by the manufacturing community.
Mechanical component tolerance
Dimensional tolerance is related to, but different from fit in mechanical engineering, which is a designed-in clearance or interference between two parts. Tolerances are assigned to parts for manufacturing purposes, as boundaries for acceptable build. No machine can hold dimensions precisely to the nominal value, so there must be acceptable degrees of variation. If a part is manufactured, but has dimensions that are out of tolerance, it is not a usable part according to the design intent. Tolerances can be applied to any dimension. The commonly used terms are:
Basic size: The nominal diameter of the shaft (or bolt) and the hole. This is, in general, the same for both components.
Lower deviation: The difference between the minimum possible component size and the basic size.
Upper deviation: The difference between the maximum possible component size and the basic size.
Fundamental deviation: The minimum difference in size between a component and the basic size. This is identical to the upper deviation for shafts and the lower deviation for holes. If the fundamental deviation is greater than zero, the bolt will always be smaller than the basic size and the hole will always be wider. Fundamental deviation is a form of allowance, rather than tolerance.
International Tolerance grade: A standardised measure of the maximum difference in size between the component and the basic size (see below).
For example, if a shaft with a nominal diameter of 10mm is to have a sliding fit within a hole, the shaft might be specified with a tolerance range from 9.964 to 10 mm (i.e., a zero fundamental deviation, but a lower deviation of 0.036 mm) and the hole might be specified with a tolerance range from 10.04 mm to 10.076 mm (0.04 mm fundamental deviation and 0.076 mm upper deviation). This would provide a clearance fit of somewhere between 0.04 mm (largest shaft paired with the smallest hole, called the Maximum Material Condition - MMC) and 0.112 mm (smallest shaft paired with the largest hole, Least Material Condition - LMC). In this case the size of the tolerance range for both the shaft and hole is chosen to be the same (0.036 mm), meaning that both components have the same International Tolerance grade but this need not be the case in general.
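The clearance arithmetic in this example can be expressed directly in code; the minimal sketch below simply reuses the 10 mm sliding-fit figures from the text, and the function name is illustrative rather than standard.

```python
def clearance_range(shaft_min, shaft_max, hole_min, hole_max):
    """Return (min_clearance, max_clearance) for a shaft/hole pair.

    Minimum clearance occurs at the maximum material condition (MMC):
    the largest shaft paired with the smallest hole.  Maximum clearance
    occurs at the least material condition (LMC): the smallest shaft
    paired with the largest hole.
    """
    min_clearance = hole_min - shaft_max   # MMC
    max_clearance = hole_max - shaft_min   # LMC
    return min_clearance, max_clearance

# The 10 mm sliding-fit example from the text:
lo, hi = clearance_range(shaft_min=9.964, shaft_max=10.0,
                         hole_min=10.04, hole_max=10.076)
print(f"clearance between {lo:.3f} mm and {hi:.3f} mm")  # 0.040 ... 0.112
```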
When no other tolerances are provided, the machining industry uses the following standard tolerances:
International Tolerance grades
When designing mechanical components, a system of standardized tolerances called International Tolerance grades is often used. The standard (size) tolerances are divided into two categories: hole and shaft. They are labelled with a letter (capitals for holes and lowercase for shafts) and a number. For example: H7 (hole, tapped hole, or nut) and h7 (shaft or bolt). H7/h6 is a very common standard tolerance which gives a tight fit. The tolerances work in such a way that for a hole H7 means that the hole should be made slightly larger than the base dimension (in this case for an ISO fit 10 +0.015/−0 mm, meaning that it may be up to 0.015 mm larger than the base dimension, and 0 mm smaller). The actual amount bigger/smaller depends on the base dimension. For a shaft of the same size, h6 would mean 10 +0/−0.009 mm, which means the shaft may be as small as 0.009 mm smaller than the base dimension and 0 mm larger. This method of standard tolerances is also known as Limits and Fits and can be found in ISO 286-1:2010.
The table below summarises the International Tolerance (IT) grades and the general applications of these grades:
An analysis of fit by statistical interference is also extremely useful: It indicates the frequency (or probability) of parts properly fitting together.
Electrical component tolerance
An electrical specification might call for a resistor with a nominal value of 100 Ω (ohms), but will also state a tolerance such as "±1%". This means that any resistor with a value in the range 99–101Ω is acceptable. For critical components, one might specify that the actual resistance must remain within tolerance within a specified temperature range, over a specified lifetime, and so on.
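A tolerance check of this kind reduces to simple arithmetic; the sketch below (illustrative function name and values) tests whether a measured resistance falls within a nominal value plus or minus a percentage tolerance.

```python
def within_tolerance(measured, nominal, tol_percent):
    """Return True if `measured` lies within nominal +/- tol_percent %."""
    allowed = nominal * tol_percent / 100.0
    return abs(measured - nominal) <= allowed

# A 100-ohm resistor specified at +/-1 %: anything from 99 to 101 ohms passes.
print(within_tolerance(100.7, 100, 1))   # True
print(within_tolerance(101.3, 100, 1))   # False
```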
Many commercially available resistors and capacitors of standard types, and some small inductors, are often marked with coloured bands to indicate their value and the tolerance. High-precision components of non-standard values may have numerical information printed on them.
Low tolerance means only a small deviation from the component's given value, when new, under normal operating conditions and at room temperature. Higher tolerance means the component will have a wider range of possible values.
Difference between allowance and tolerance
The terms are often confused, but sometimes a difference is maintained.
Clearance (civil engineering)
In civil engineering, clearance refers to the difference between the loading gauge and the structure gauge in the case of railroad cars or trams, or the difference between the size of any vehicle and the width/height of doors, the width/height of an overpass or the diameter of a tunnel as well as the air draft under a bridge, the width of a lock or diameter of a tunnel in the case of watercraft. In addition there is the difference between the deep draft and the stream bed or sea bed of a waterway.
See also
Backlash (engineering)
Geometric dimensioning and tolerancing
Engineering fit
Key relevance
Loading gauge
Margin of error
Precision engineering
Probabilistic design
Process capability
Slack action
Specification (technical standard)
Statistical process control
Statistical tolerance
Structure gauge
Taguchi methods
Tolerance coning
Tolerance interval
Tolerance stacks
Verification and validation
Notes
Further reading
Pyzdek, T., "Quality Engineering Handbook", 2003.
Godfrey, A. B., "Juran's Quality Handbook", 1999.
ASTM D4356 Standard Practice for Establishing Consistent Test Method Tolerances
External links
Tolerance Engineering Design Limits & Fits
Online calculation of fits
Index of ISO Hole and Shaft tolerances/limits pages
Quality
Engineering concepts
Statistical deviation and dispersion
Mechanical standards
Metrology
Metalworking terminology
Approximations | Engineering tolerance | Mathematics,Engineering | 1,903 |
39,132,225 | https://en.wikipedia.org/wiki/Wallowing | Wallowing in animals is comfort behaviour during which an animal rolls about or lies in mud, water or snow. Some definitions include rolling about in dust, however, in ethology this is usually referred to as dust bathing. Wallowing is often combined with other behaviours to fulfil its purpose; for example, elephants will often blow dirt over themselves after wallowing to create a thicker "coating", or pigs will allow the mud to dry before rubbing themselves on a tree or rock to remove ectoparasites stuck in the mud.
Functions
Many functions of wallowing have been proposed although not all have been tested by rigorous scientific investigation. Proposed functions include:
Thermoregulation – domestic pigs (Sus scrofa), great Indian rhinoceros (Rhinoceros unicornis), warthogs (Phacochoerus aethiopicus), elephants (family Elephantidae)
Providing a sunscreen – pigs, warthogs, elephants
Male-male conflict social behaviour – elk (Cervus elaphus), European bison (Bison bonasus), deer
Removal of ectoparasites – white rhinoceros (Ceratotherium simum), American bison (Bison bison), warthog
Social cohesion – American bison
Relief from moulting – European bison, elephant seals (genus Mirounga)
Relief from biting insects – tamaraw (Bubalus mindorensis), American bison, tapirs (Tapirus bairdii), warthog, elephants
Play in young animals – American bison
Skin maintenance (preventing dehydration) – hippopotamus (Hippopotamus amphibius)
Camouflage – warthog
Scent-marking – Some animals urinate in a wallow before entering and rolling in it, presumably as a form of scent-marking behaviour
Skin microbiome selection – Horses
Domestic pigs
Pigs lack functional sweat glands and are almost incapable of panting. To thermoregulate, they rely on wallowing in water or mud to cool the body. Adult pigs under natural or free-range conditions can often be seen to wallow when air temperature exceeds 20 °C. Mud is the preferred substrate; after wallowing, the wet mud provides a cooling, and probably protecting, layer on the body. When pigs enter a wallow, they normally dig and root in the mud before entering with the fore-body first. They then wriggle the body back and forth, and rub their faces in the mud so all of the body surface is covered. Before they leave the wallow, they often shake their heads and body, often finishing with rubbing against a tree or a stone next to the wallow. When indoors and hot, domestic pigs often attempt to wallow on wet floor surfaces and in the dunging areas.
Although temperature regulation seems to be the main motivation for wallowing in pigs, they will still wallow in colder weather. While many have suggested that pigs wallow in mud because of a lack of sweat glands, pigs and other wallowing animals may have not evolved functional sweat glands because wallowing was a part of their behavioural repertoire.
Pigs are genetically related to animals such as hippopotamus and whales. It has been argued that wallowing behaviour and the desire to be in shallow, murky water could have been a step to the evolution of whales and other marine mammals from land-dwelling mammals.
Sumatran rhinoceros
The Sumatran rhino (Dicerorhinus sumatrensis) spends a large part of its day wallowing. When mud holes are unavailable, the rhino will deepen puddles with its feet and horns. One 20-month study of wallowing behaviour found they will visit no more than three wallows at any given time. After two to 12 weeks using a particular wallow, the rhino will abandon it. Typically, the rhino will wallow around midday for two to three hours at a time before foraging for food. Although in zoos the Sumatran rhino has been observed wallowing less than 45 minutes a day, the study of wild animals found 80–300 minutes per day spent in wallows. Captive individuals deprived of adequate wallowing have quickly developed broken and inflamed skins, suppurations, eye problems, inflamed nails, hair loss and have eventually died.
Deer
Many deer perform wallowing, creating wallow sites in wet depressions in the ground, eventually forming quite large sites (2–3 m across and up to 1 m deep). However, it has been claimed that only some species of deer wallow; red deer (Cervus elaphus) particularly like to wallow but fallow deer (Dama dama), for example, do not wallow. Even within the red deer species, there is variation between sub-species and breeds in wallowing behaviour. For example, although wapiti do wallow, they and crossbreds are less inclined to wallow than European red deer.
See also
Personal grooming
Mineral lick
References
External links
BBC Nature - Elephants videos, news and facts - Video of elephants wallowing
Ethology | Wallowing | Biology | 1,044 |
23,285,206 | https://en.wikipedia.org/wiki/Rubidium-82 | Rubidium-82 (82Rb) is a radioactive isotope of rubidium. 82Rb is widely used in myocardial perfusion imaging. This isotope undergoes rapid uptake by myocardiocytes, which makes it a valuable tool for identifying myocardial ischemia in Positron Emission Tomography (PET) imaging. 82Rb is used in the pharmaceutical industry and is marketed as Rubidium-82 chloride under the trade names RUBY-FILL and CardioGen-82.
History
In 1953, it was discovered that rubidium carried a biological activity that was comparable to potassium. In 1959, preclinical trials showed in dogs that myocardial uptake of this radionuclide was directly proportional to myocardial blood flow. In 1979, Yano et al. compared several ion-exchange columns to be used in an automated 82Sr/82Rb generator for clinical testing. Around 1980, pre-clinical trials began using 82Rb in PET. In 1982, Selwyn et al. examined the relation between myocardial perfusion and rubidium-82 uptake during acute ischemia in six dogs after coronary stenosis and in five volunteers and five patients with coronary artery disease. Myocardial tomograms, recorded at rest and after exercise in the volunteers showed homogeneous uptake in reproducible and repeatable scans. Rubidium-82 has shown considerable accuracy, comparable to that of 99mTc-SPECT. In 1989, the FDA approved the 82Rb/82Sr generator for commercial use in the U.S. With increased 82Sr production capabilities, the use of 82Rb has increased over the last 10 years and is now approved by several health authorities worldwide.
Production
Rubidium-82 is produced by electron capture of its parent nucleus, strontium-82. The generator contains accelerator produced 82Sr adsorbed on stannic oxide in a lead-shielded column and provides a means for obtaining sterile nonpyrogenic solutions of rubidium chloride (halide salt form capable of injection). The amount (millicuries) of 82Rb obtained in each elution will depend on the potency of the generator. When eluted at a rate of 50 mL/minute, each generator eluate at the end of elution should not contain more than 0.02 microcuries of strontium 82Sr and not more than 0.2 microcuries of 85Sr per millicurie of 82RbCl injection, and not more than 1 microgram of tin per mL of eluate.
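A minimal sketch of such an acceptance check, assuming the limits and units quoted above (microcuries of strontium per millicurie of 82Rb, and micrograms of tin per millilitre of eluate); the function name and example figures are illustrative only.

```python
def eluate_within_limits(sr82_uci, sr85_uci, tin_ug_per_ml, rb82_mci):
    """Check a generator eluate against the limits quoted in the text:
    not more than 0.02 uCi of Sr-82 and 0.2 uCi of Sr-85 per mCi of Rb-82,
    and not more than 1 microgram of tin per mL of eluate."""
    return (sr82_uci / rb82_mci <= 0.02 and
            sr85_uci / rb82_mci <= 0.2 and
            tin_ug_per_ml <= 1.0)

# Illustrative numbers only
print(eluate_within_limits(sr82_uci=0.3, sr85_uci=2.0,
                           tin_ug_per_ml=0.4, rb82_mci=30))  # True
```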
Pharmacology
Mechanism of action
82Rb has activity very similar to that of a potassium ion (K+). Once in the myocardium, it is an active participant in the sodium-potassium exchange pump of cells. It is rapidly extracted by the myocardium proportional to blood flow. Its radioactivity is increased in viable myocardial cells reflecting cellular retention, while the tracer is cleared rapidly from necrotic or infarcted tissue.
Pharmacodynamics
When tested clinically, 82Rb is seen in the myocardium within the first minute of intravenous injection. When the myocardium is affected with ischemia or infarction, they will be visualized between 2–7 minutes. These affected areas will be shown as photon deficient on the PET scan. 82Rb passes through the entire body on the first pass of circulation and has visible uptake in organs such as the kidney, liver, spleen and lung. This is due to the high vascularity of those organs.
Use in PET
Rubidium is rapidly extracted from the blood and is taken up by the myocardium in relation to myocardial perfusion; its myocardial uptake requires energy and occurs through Na+/K+-ATPase, similar to thallium-201. 82Rb is capable of producing a clear perfusion image similar to single photon emission computed tomography (SPECT)-MPI because it is an extractable tracer. The short half-life requires rapid image acquisition shortly after tracer administration, which reduces total study time. The short half-life also results in less radiation exposure for the patient. A standard visual perfusion imaging assessment is based on defining regional uptake relative to the maximum uptake in the myocardium. Importantly, 82Rb PET also seems to provide prognostic value in patients who are obese and whose diagnosis remains uncertain after SPECT-MPI.
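To make the timing constraint concrete, the sketch below estimates the fraction of injected 82Rb activity remaining after a delay; the half-life figure of roughly 75 seconds is an assumption supplied for the example rather than a value taken from the text above.

```python
import math

RB82_HALF_LIFE_S = 75.0  # assumed approximate half-life of Rb-82, in seconds

def fraction_remaining(delay_s, half_life_s=RB82_HALF_LIFE_S):
    """Fraction of the injected activity still present after `delay_s` seconds."""
    return math.exp(-math.log(2) * delay_s / half_life_s)

# A few minutes' delay already costs most of the signal:
for t in (60, 120, 300):
    print(f"after {t:>3d} s: {fraction_remaining(t):.1%} remaining")
```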
82Rb myocardial blood flow quantification is expected to improve the detection of multivessel coronary heart disease. 82Rb/PET is a valuable tool in ischemia identification. Myocardial Ischemia is an inadequate blood supply to the heart. 82Rb/PET can be used to quantify the myocardial flow reserve in the ventricles which then allows the medical professional to make an accurate diagnosis and prognosis of the patient. Various vasoreactivity studies are made possible through 82Rb/PET imaging due to its quantification of myocardial blood flow. It is possible to quantify stress in patients under the same reasoning. Recently it has been shown that neuroendocrine tumor metastasis can be imaged with 82Rb due to its ability to quantify myocardial blood flow (MBF) during rest and pharmacological stress, commonly performed with adenosine.
Advantages
One of the main advantages of 82Rb is its availability in nuclear medicine departments. This isotope is available after 10-minute elution of a 82Sr column; this makes it possible to produce enough samples to inject about 10–15 patients a day. Another advantage of 82Rb would be its high count density in myocardial tissue. 82Rb/PET has shown greater uniformity and count density than 99mTc-SPECT when examining the myocardium. This results in higher interpretive confidence and greater accuracy. It allows for quantification of coronary flow reserve and myocardial blood flow. 82Rb also has an advantage in that it has a very short half-life which results in much lower radiation exposure for the patient. This is especially important as the use of myocardial imaging increases in the medical field. When it comes to patients, 82Rb is beneficial to use when the patient is obese or physically unable to perform a stress test. It also has side effects limited to minor irritation around the injection site.
Limitations
A serious limitation of 82Rb would be its cost. Currently 99mTc costs on average $70 per dose, needing two doses; whereas 82Rb costs about $250 a dose. Another limitation of this isotope is that it needs a dedicated PET/CT camera, and in places like Europe where a 82Sr/82Rb generator is still yet to be approved that can be hard to find.
References
Further reading
Rubidium
Isotopes of rubidium
Positron emitters
Cardiac imaging
3D nuclear medical imaging
PET radiotracers
Medical isotopes | Rubidium-82 | Chemistry | 1,428 |
4,038,083 | https://en.wikipedia.org/wiki/Moral%20treatment | Moral treatment was an approach to mental disorder based on humane psychosocial care or moral discipline that emerged in the 18th century and came to the fore for much of the 19th century, deriving partly from psychiatry or psychology and partly from religious or moral concerns. The movement is particularly associated with reform and development of the asylum system in Western Europe at that time. It fell into decline as a distinct method by the 20th century, however, due to overcrowding and misuse of asylums and the predominance of biomedical methods. The movement is widely seen as influencing certain areas of psychiatric practice up to the present day. The approach has been praised for freeing sufferers from shackles and barbaric physical treatments, instead considering such things as emotions and social interactions, but has also been criticised for blaming or oppressing individuals according to the standards of a particular social class or religion.
Context
Moral treatment developed in the context of the Enlightenment and its focus on social welfare and individual rights. At the start of the 18th century, the "insane" were typically viewed as wild animals who had lost their reason. They were not held morally responsible but were subject to scorn and ridicule by the public, sometimes kept in madhouses in appalling conditions, often in chains and neglected for years or subject to numerous torturous "treatments" including whipping, beating, bloodletting, shocking, starvation, irritant chemicals, and isolation. There were some attempts to argue for more psychological understanding and therapeutic environments. For example, in England John Locke popularized the idea that there is a degree of madness in most people because emotions can cause people to incorrectly associate ideas and perceptions, and William Battie suggested a more psychological approach, but conditions generally remained poor. The treatment of King George III also led to increased optimism about the possibility of therapeutic interventions.
Early development
Italy
Under the Enlightened concern of Grand Duke Pietro Leopoldo in Florence, Italian physician Vincenzo Chiarugi instituted humanitarian reforms. Between 1785 and 1788 he managed to outlaw chains as a means of restraint at the Santa Dorotea hospital, building on prior attempts made there since the 1750s. From 1788 at the newly renovated St. Bonifacio Hospital he did the same, and led the development of new rules establishing a more humane regime.
France
The ex-patient Jean-Baptiste Pussin and his wife Margueritte, and the physician Philippe Pinel (1745–1826), are also recognized as the first instigators of more humane conditions in asylums. From the early 1780s, Pussin had been in charge of the mental hospital division of the La Bicêtre, an asylum in Paris for male patients. From the mid-1780s, Pinel was publishing articles on links between emotions, social conditions and insanity. In 1792 (formally recorded in 1793), Pinel became the chief physician at the Bicetre. Pussin showed Pinel how really knowing the patients meant they could be managed with sympathy and kindness as well as authority and control. In 1797, Pussin first freed patients of their chains and banned physical punishment, although straitjackets could be used instead. Patients were allowed to move freely about the hospital grounds, and eventually dark dungeons were replaced with sunny, well-ventilated rooms. Pussin and Pinel's approach was seen as remarkably successful and they later brought similar reforms to a mental hospital in Paris for female patients, La Salpetrière. Pinel's student and successor, Jean Esquirol (1772–1840), went on to help establish 10 new mental hospitals that operated on the same principles. There was an emphasis on the selection and supervision of attendants in order to establish a suitable setting to facilitate psychological work, and particularly on the employment of ex-patients as they were thought most likely to refrain from inhumane treatment while being able to stand up to pleading, menaces, or complaining.
Pinel used the term "traitement moral" for the new approach. At that time "moral", in French and internationally, had a mixed meaning of either psychological/emotional (mental) or moral (ethical). Pinel distanced himself from the more religious work that was developed by the Tukes, and in fact considered that excessive religiosity could be harmful. He sometimes took a moral stance himself, however, as to what he considered to be mentally healthy and socially appropriate.
England
English Quaker William Tuke (1732–1822) independently led the development of a radical new type of institution in northern England, following the death of a fellow Quaker in a local asylum in 1790. In 1796, with the help of fellow Quakers and others, he founded the York Retreat, where eventually about 30 patients lived as part of a small community in a quiet country house and engaged in a combination of rest, talk, and manual work. Rejecting medical theories and techniques, the efforts of the York Retreat centered around minimizing restraints and cultivating rationality and moral strength. The entire Tuke family became known as founders of moral treatment. They created a family-style ethos and patients performed chores to give them a sense of contribution. There was a daily routine of both work and leisure time. If patients behaved well, they were rewarded; if they behaved poorly, there was some minimal use of restraints or instilling of fear. The patients were told that treatment depended on their conduct. In this sense, the patient's moral autonomy was recognized. William Tuke's grandson, Samuel Tuke, published an influential work in the early 19th century on the methods of the retreat; Pinel's Treatise On Insanity had by then been published, and Samuel Tuke translated his term as "moral treatment".
Scotland
A very different background to the moral approach may be discerned in Scotland. Interest in mental illness was a feature of the Edinburgh medical school in the eighteenth century, with influential teachers including William Cullen (1710–1790) and Robert Whytt (1714–1766) emphasising the clinical importance of psychiatric disorders. In 1816, the phrenologist Johann Spurzheim (1776–1832) visited Edinburgh and lectured on his craniological and phrenological concepts, arousing considerable hostility, not least from the theologically doctrinaire Church of Scotland. Some of the medical students, however, notably William A.F. Browne (1805–1885), responded very positively to this materialist conception of the nervous system and, by implication, of mental disorder. George Combe (1788–1858), an Edinburgh solicitor, became an unrivalled exponent of phrenological thinking, and his brother, Andrew Combe (1797–1847), who was later appointed a physician to Queen Victoria, wrote a phrenological treatise entitled Observations on Mental Derangement (1831). George and Andrew Combe exerted a rather dictatorial authority over the Edinburgh Phrenological Society, and in the mid-1820s manipulated the de facto expulsion of the Christian phrenologists.
This tradition of medical materialism found a ready partner in the Lamarckian biology purveyed by the naturalist Robert Edmond Grant (1793–1874) who exercised a striking influence on the young Charles Darwin during his time as a medical student in Edinburgh in 1826/1827. William Browne advanced his own versions of evolutionary phrenology at influential meetings of the Edinburgh Phrenological Society, the Royal Medical Society and the Plinian Society. Later, as superintendent of Sunnyside Royal Hospital (the Montrose Asylum) from 1834 to 1838, and, more extravagantly, at the Crichton Royal in Dumfries from 1838 to 1859, Browne implemented his general approach of moral management, indicating a clinical sensitivity to the social groupings, shifting symptom patterns, dreams and art-works of the patients in his care. Browne summarised his moral approach to asylum management in his book (actually the transcripts of five public lectures) which he entitled What Asylums Were, Are, and Ought To Be. His achievements with this style of psychiatric practice were rewarded with his appointment as a Commissioner in Lunacy for Scotland, and by his election to the Presidency of the Medico-Psychological Association in 1866. Browne's eldest surviving son, James Crichton-Browne (1840–1938), did much to extend his father's work in psychiatry, and, on 29 February 1924, he delivered a remarkable lecture The Story of the Brain, in which he recorded a generous appreciation of the role of the phrenologists in the early foundations of psychiatric thought and practice.
United States
A key figure in the early spread of moral treatment in the United States was Benjamin Rush (1745–1813), an eminent physician at Pennsylvania Hospital. He limited his practice to mental illness and developed innovative, humane approaches to treatment. He required that the hospital hire intelligent and sensitive attendants to work closely with patients, reading and talking to them and taking them on regular walks. He also suggested that it would be therapeutic for doctors to give small gifts to their patients every so often. However, Rush's treatment methods included bloodletting (bleeding), purging, hot and cold baths, mercury, and strapping patients to spinning boards and "tranquilizer" chairs.
A Boston schoolteacher, Dorothea Dix (1802–1887), also helped make humane care a public and a political concern in the US. On a restorative trip to England for a year, she met Samuel Tuke. In 1841 she visited a local prison to teach Sunday school and was shocked at the conditions for the inmates and the treatment of those with mental illnesses. She began to investigate and crusaded on the issue in Massachusetts and all over the country. She supported the moral treatment model of care. She spoke to many state legislatures about the horrible sights she had witnessed at the prisons and called for reform. Dix fought for new laws and greater government funding to improve the treatment of people with mental disorders from 1841 until 1881, and personally helped establish 32 state hospitals that were to offer moral treatment. Many asylums were built according to the so-called Kirkbride Plan.
Consequences
The moral treatment movement was initially opposed by those in the mental health profession. By the mid-19th century, however, many psychologists had adopted the strategy. They became advocates of moral treatment, but argued that since the mentally ill often had separate physical/organic problems, medical approaches were also necessary. Making this argument stick has been described as an important step in the profession's eventual success at securing a monopoly on the treatment of "lunacy".
The moral treatment movement had a huge influence on asylum construction and practice. Many countries were introducing legislation requiring local authorities to provide asylums for the local population, and they were increasingly designed and run along moral treatment lines. Additional "non-restraint movements" also developed. There was great belief in the curability of mental disorders, particularly in the US, and statistics were reported showing high recovery rates. They were later much criticized, particularly for not differentiating between new admissions and re-admissions (i.e. those who hadn't really achieved a sustained recovery). It has been noted, however, that the cure statistics showed a decline from the 1830s onwards, particularly sharply in the second half of the century, which has been linked to the dream of small, curative asylums giving way to large, centralized, overcrowded asylums.
There was also criticism from some ex-patients and their allies. By the mid-19th century in England, the Alleged Lunatics' Friend Society was proclaiming that the new moral treatment was a form of social repression achieved "by mildness and coaxing, and by solitary confinement"; that its implication that the "alleged lunatics" needed re-educating meant it treated them as if they were children incapable of making their own decisions; and that it failed to properly inform people of their rights or involve them in discussion about their treatment. The Society was suspicious of the tranquility of the asylums, suggesting that patients were simply being crushed and then discharged to live a "milk sop" (meek) existence in society.
In the context of industrialization, public asylums expanded in size and number. Bound up in this was the development of the profession of psychiatry, able to expand with large numbers of inmates collected together. By the end of the 19th century and into the 20th, these large out-of-town asylums had become overcrowded, misused, isolated and run-down. The therapeutic principles had often been neglected along with the patients. Moral management techniques had turned into mindless institutional routines within an authoritarian structure. Consideration of costs quickly overrode ideals. There was compromise over decoration—no longer a homey, family atmosphere but drab and minimalist. There was an emphasis on security, custody, high walls, closed doors, shutting people off from society, and physical restraint was often used. It is well documented that there was very little therapeutic activity, and medics were little more than administrators who seldom attended to patients and mainly then for other, somatic, problems. Any hope of moral treatment or a family atmosphere was "obliterated". In 1827 the average number of asylum inmates in Britain was 166; by 1930 it was 1221. The relative proportion of the public officially diagnosed as insane grew.
Although the Retreat had been based on a non-medical approach and environment, medically based reformers emulating it spoke of "patients" and "hospitals". Asylum "nurses" and attendants, once valued as a core part of providing good holistic care, were often scapegoated for the failures of the system. Towards the end of the 19th century, somatic theories, pessimism in prognosis, and custodialism had returned. Theories of hereditary degeneracy and eugenics took over, and in the 20th century the concepts of mental hygiene and mental health developed. From the mid 20th century, however, a process of antipsychiatry and deinstitutionalization occurred in many countries in the West, and asylums in many areas were gradually replaced with more local community mental health services.
In the 1960s, Michel Foucault renewed the argument that moral treatment had really been a new form of moral oppression, replacing physical oppression, and his arguments were widely adopted within the antipsychiatry movement. Foucault was interested in ideas of "the other" and how society defines normalcy by defining the abnormal and its relationship to the normal. A patient in the asylum had to go through four moral syntheses: silence, recognition in the mirror, perpetual judgment, and the apotheosis of the medical personage. The mad were ignored and verbally isolated. They were made to see madness in others and then in themselves until they felt guilt and remorse. The doctor, despite his lack of medical knowledge about the underlying processes, had all powers of authority and defined insanity. Thus Foucault argues that the "moral" asylum is "not a free realm of observation, diagnosis, and therapeutics; it is a juridical space where one is accused, judged, and condemned." Foucault's reassessment was succeeded by a more balanced view, recognizing that the manipulation and ambiguous "kindness" of Tuke and Pinel may have been preferable to the harsh coercion and physical "treatments" of previous generations, while aware of moral treatment's less benevolent aspects and its potential to deteriorate into repression.
The moral treatment movement is widely seen as influencing psychiatric practice up to the present day, including specifically therapeutic communities (although they were intended to be less repressive); occupational therapy and Soteria houses. The Recovery model is said to have echoes of the concept of moral treatment.
See also
Erwadi fire incident
Humane treatment of the mentally ill
Moral insanity
The Retreat (First institution to implement moral treatment)
Testimony of equality describing actions of the Quakers towards equality
References
Abnormal psychology
History of mental health
Psychotherapy
Ethics in psychiatry
Psychiatric hospitals | Moral treatment | Biology | 3,262 |
20,673,489 | https://en.wikipedia.org/wiki/National%20Construction%20Equipment%20Museum | The National Construction Equipment Museum is a non-profit organization located in Bowling Green, Ohio, United States that is dedicated to preserving the history of construction, dredging and surface mining industries and equipment. The museum is operated by the Historical Construction Equipment Association and features many different types of construction equipment, including cranes, shovels, rollers, scrapers, bulldozers, dump trucks, concrete mixers, drills and other heavy equipment.
At the end of December 2021 an effort began to expand the museum.
References
External links
Historical Construction Equipment Association
National Construction Equipment Museum - Discover Ohio
Information about the museum's holdings and visitation
Ohio Traveler - article about the museum
Museums in Wood County, Ohio
Transportation museums in Ohio
Engineering vehicles
Technology museums in Ohio
Industry museums in Ohio
Bowling Green, Ohio | National Construction Equipment Museum | Engineering | 157 |
6,820,847 | https://en.wikipedia.org/wiki/Computational%20semantics | Computational semantics is the study of how to automate the process of constructing and reasoning with meaning representations of natural language expressions. It consequently plays an important role in natural-language processing and computational linguistics.
Some traditional topics of interest are: construction of meaning representations, semantic underspecification, anaphora resolution, presupposition projection, and quantifier scope resolution. Methods employed usually draw from formal semantics or statistical semantics. Computational semantics has points of contact with the areas of lexical semantics (word-sense disambiguation and semantic role labeling), discourse semantics, knowledge representation and automated reasoning (in particular, automated theorem proving). Since 1999 there has been an ACL special interest group on computational semantics, SIGSEM.
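As a toy illustration of constructing meaning representations compositionally (in the lambda-calculus spirit of textbook treatments such as the one by Blackburn and Bos listed below), the sketch pairs a tiny lexicon with function application; the grammar, lexicon, and output format are invented for the example and are not taken from any particular system.

```python
# Toy compositional semantics: lexical entries are functions (lambda terms)
# and sentence meanings are built by applying them to one another.

def proper_name(symbol):
    # A proper name takes a property and applies it to the individual.
    return lambda prop: prop(symbol)

def intransitive_verb(pred):
    # An intransitive verb maps an individual to an atomic formula string.
    return lambda x: f"{pred}({x})"

lexicon = {
    "mia": proper_name("mia"),
    "vincent": proper_name("vincent"),
    "walks": intransitive_verb("walk"),
    "snores": intransitive_verb("snore"),
}

def interpret(sentence):
    """Compose a meaning representation for a simple 'NAME VERB' sentence."""
    subject, verb = sentence.lower().split()
    return lexicon[subject](lexicon[verb])

print(interpret("Mia walks"))       # walk(mia)
print(interpret("Vincent snores"))  # snore(vincent)
```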
See also
Discourse representation theory
Formal semantics (natural language)
Minimal recursion semantics
Natural-language understanding
Semantic compression
Semantic parsing
Semantic Web
SemEval
WordNet
Further reading
Blackburn, P., and Bos, J. (2005), Representation and Inference for Natural Language: A First Course in Computational Semantics, CSLI Publications.
Bunt, H., and Muskens, R. (1999), Computing Meaning, Volume 1, Kluwer Publishing, Dordrecht.
Bunt, H., Muskens, R., and Thijsse, E. (2001), Computing Meaning, Volume 2, Kluwer Publishing, Dordrecht.
Copestake, A., Flickinger, D. P., Sag, I. A., & Pollard, C. (2005), Minimal Recursion Semantics: An Introduction, Research on Language and Computation, 3:281–332.
Eijck, J. van, and C. Unger (2010), Computational Semantics with Functional Programming, Cambridge University Press.
Wilks, Y., and Charniak, E. (1976), Computational Semantics: An Introduction to Artificial Intelligence and Natural Language Understanding, North-Holland, Amsterdam.
References
External links
Special Interest Group on Computational Semantics (SIGSEM) of the Association for Computational Linguistics (ACL)
IWCS - International Workshop on Computational Semantics (endorsed by SIGSEM)
ICoS - Inference in Computational Semantics (endorsed by SIGSEM)
Computational linguistics
Natural language processing
Semantics
Computational fields of study | Computational semantics | Technology | 476 |
8,466,950 | https://en.wikipedia.org/wiki/Ecover | Ecover is a Belgian company that manufactures ecologically sound cleaning products (made from plant-based and mineral ingredients), owned by S. C. Johnson & Son since 2017.
History
The company was founded in 1979 by Frans Bogaerts to create phosphate-free cleaning products to reduce the environmental impact of cleaning agents. Following expansion to support sales through supermarkets, it ran into financial difficulties during the early 1990s. The business was sold to Bogaerts' son and rescued by Gunter Pauli, a member of the company's board since 1990. In 1992, Pauli in turn enlisted the financial backing of the Danish investor Jørgen Philip-Sørensen (now deceased) through the private investment company Skagen. The company's relaunch commenced with the construction of an "ecological factory", followed by investments into research projects for the purpose of developing appropriate plant-based and renewable raw materials for cleaning products.
Ecover is part of Skagen Conscience Capital, a global organisation. Aquaver and the Change Initiatives are other companies of Skagen Conscience Capital.
In 2012 Ecover bought Method Products, a San Francisco, United States, headquartered manufacturer of biodegradable natural cleaning supplies with a focus on minimalist product design, to assist its entry of the North American market. The new group had annual revenues of $200 million at that time and were the world's largest green cleaning products company by sales. Method had been founded in 2001 by Eric Ryan, a designer and marketer, and Adam Lowry, a chemical engineer. Method opened a factory in the Pullman neighborhood of Chicago in 2015.
In 2017 S. C. Johnson & Son purchased the Ecover and Method brands on undisclosed terms.
Products
Ecover comprises the following brands:
Ecover: domestic detergents, cleansing agents and personal care products.
Held: domestic detergents and cleansing agents.
Techno Green: professional detergents and cleansing agents.
Ecover Professional: professional cleansing agents.
Wellments: personal care products
A number of Ecover products - washing up detergent (domestic and professional), fabric conditioner, laundry detergent and multi-surface cleaner - are available from a container refill service (customers reuse the products original container) to reduce the overall environmental impact of distributing the product. Ecover refill locations have previously been limited to independent health food stores and small local cooperative schemes, with the company having stated that it will expand its reach in this regard.
Factories
Ecover built the world's first "ecological factory" in Malle, Belgium, with an extensive green roof. The factory opened in 1992 and was featured on television news programs, allowing the company to highlight the recycled and recyclable materials that make up most of the structure. In 2007, Ecover opened another factory based on the same "ecological" premise in Boulogne-sur-Mer, Northern France, and it also secured ownership of a factory in Steffisburg, Switzerland, through the 2003 acquisition of the private Held AG company, a manufacturer and distributor of ecological washing agents.
Awards
In 1993, UNEP awarded the "Global 500 Roll of Honour" to Ecover for "outstanding achievements in the protection and improvement of the environment". In 2008, Time magazine honored Ecover CEO, Mick Bremans, with the title Hero of the Environment together with 29 other eco-pioneers working for a green future. In 2010, Ecover earned a finalist nomination from the European Business Awards for the Environment for a pioneering project in green innovation in the process category. In 2018, Method was recognized as one of "the 50 most sustainable companies in the world" at the SEAL Business Sustainability Awards. For the company's national and international experience in sustainable development, and eco-friendly products, the A.A. Environment Possibility Award conferred the "Award of Green-Trend Leader" to Ecover in 2020.
Controversy
In 2007, the Vegan Society withdrew its Vegan Trademark registration from Ecover products due to the company's use of daphnia (water fleas) to test the effects of its products on aquatic life, plus rabbit blood to test stain removal. Daphnia are not vertebrates and therefore are not classified as "animals" under EU animal-testing rules. However, the Vegan Society's definition covers the entire animal kingdom, including invertebrates, as part of its Vegan Trademark registration criteria. Ecover continues to use the Daphtox acute toxicity test, which observes daphnia behaviour to calculate EC50 values, so that it can assess the environmental quality of its products.
In 2010, a Which? study of 14 household products, including laundry tablets, toilet cleaners and nappies, reported that Ecover was among a number of companies each believed to have made at least one "green claim" that was exaggerated or not supported by the manufacturer's evidence. The panel of experts found, for instance, no convincing evidence that the chemicals found in standard toilet cleaner and market-leading laundry tablets would have a significantly worse impact on aquatic life than their "eco" equivalents. Which? said: "When companies make clear green claims it helps consumers make eco choices with confidence. But our experts concluded that many of the companies did not provide enough evidence to back up their claims and thought that some were exaggerated. This makes it hard for people to choose." Ecover responded several days later.
Ecover had previously been criticized for not subscribing to the British Union for the Abolition of Vivisection's "Humane Household Products Standard", which requires a "fixed cut-off date" on animal-tested ingredients. Ecover stated that "a fixed cut-off date [means] that we wouldn't be able to improve our products on what we have today. We do not believe that it is necessary to carry the 'Humane Household Products Standard' to uphold our core values of transparency, honesty and integrity." However, in October 2012 Ecover's products were certified into the Cruelty Free International (formerly BUAV) "The Leaping Bunny Program" and awarded the internationally-recognised Leaping Bunny logo for products certified free from animal testing and which comply with the comprehensive criteria of the Humane Household Products Standard. Ecover CEO Philip Malmberg said "Being accepted into this program is an absolute privilege for Ecover and a great way to show the world that we care. Ecover has been animal friendly since the day it was founded in 1979. The decision to align with Leaping Bunny and provide our customers with household cleaning and laundry products that are certified as safe and cruelty-free was an obvious next step."
In 2014, Ecover confirmed that it was trialling oil derived from algae. In response, 23 environmental, consumer and farmers groups called on Ecover to drop the algae. Some of the groups launched a petition and web site, declaring that "Synthetic is not Natural", in reference to Ecover's marketing, which relies heavily on words like "natural" and "eco-friendly". The petition collected thousands of signatures calling on Ecover to stop using synthetic algae, citing a lack of regulation and knowledge about synthetic organisms, and effects on farmers. Ecover claimed that the algal oil it is using employs the natural mutation process of algae and standard industrial fermentation and would be less destructive than the palm kernel oil it currently uses, a claim disputed by some of the opposing groups because the algae was fed sugarcane which is also associated with biodiversity destruction.
Due to the open refusal of owner SC Johnson to abandon its use of animal testing, the Naturewatch Foundation revoked Ecover and Method's Compassionate Shopping Guide accreditations.
In January 2021 the company issued a product recall on its Ecover Zero % Non-Bio Laundry Liquid, as it had been discovered that the liquid contained hazardous levels of potassium hydroxide.
Sponsorship
Ecover sponsored yachtsman Mike Golding.
Golding skippered the Ecover Sailing Team in the 2009 iShares cup, a selection of races all over Europe, sailing catamarans in competitive races against world-leaders in the sport. The races took place in Venice, Hyères, Cowes, Kiel, Amsterdam and Almeria.
References
External links
Ecover
Aquaver
The Change Initiative
Belgian companies established in 1979
Cleaning products
Companies based in Antwerp Province
Malle | Ecover | Chemistry | 1,693 |
17,021,230 | https://en.wikipedia.org/wiki/WASP-14b | WASP-14b is an extrasolar planet discovered in 2008 by SuperWASP using the transit method. Follow-up radial velocity measurements showed that the mass of WASP-14b is almost eight times that of Jupiter, while the transit observations show that its radius is about 25% larger than Jupiter's. This makes WASP-14b one of the densest exoplanets known. Its radius best fits the model of Jonathan Fortney.
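The density claim can be checked with a rough back-of-the-envelope calculation. The sketch below is only an illustrative cross-check: the 8 Jupiter-mass and 1.25 Jupiter-radius figures come from the text above, while Jupiter's mean density of roughly 1.33 g/cm³ is an assumed reference value.

```python
# Rough density cross-check for WASP-14b using the values quoted above.
JUPITER_DENSITY_G_CM3 = 1.33   # approximate mean density of Jupiter (reference value)

mass_mjup = 8.0      # mass in Jupiter masses (quoted in the text)
radius_rjup = 1.25   # radius in Jupiter radii (25% larger than Jupiter)

# In Jupiter units, density scales as mass / radius^3.
density_vs_jupiter = mass_mjup / radius_rjup**3
density_g_cm3 = density_vs_jupiter * JUPITER_DENSITY_G_CM3

print(f"~{density_vs_jupiter:.1f} times Jupiter's density, i.e. ~{density_g_cm3:.1f} g/cm^3")
# ~4.1 times Jupiter's density (~5.4 g/cm^3), several times denser than a typical hot Jupiter.
```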
Orbit
The first measurement of WASP-14b's Rossiter–McLaughlin effect, and hence of its spin-orbit angle, gave a value of −14 ± 17 degrees. The planet's orbit is more eccentric than expected for its age, so it has possibly been pulled into this orbit by another planet. A 2012 study updated the spin-orbit angle to 33.1°.
References
External links
WASP Planets
Exoplanets discovered by WASP
Exoplanets discovered in 2008
Giant planets
Hot Jupiters
Transiting exoplanets
Boötes
de:WASP-14 b | WASP-14b | Astronomy | 196 |
40,536,771 | https://en.wikipedia.org/wiki/Muon%20tomography | Muon tomography or muography is a technique that uses cosmic ray muons to generate two or three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Since muons are much more deeply penetrating than X-rays, muon tomography can be used to image through much thicker material than x-ray based tomography such as CT scanning. The muon flux at the Earth's surface is such that a single muon passes through an area the size of a human hand per second.
Since its development in the 1950s, muon tomography has taken many forms, the most important of which are muon transmission radiography and muon scattering tomography.
Muography images the density of an inaccessible internal structure by tracking the number of muons that pass through the target volume. Muography is a technique similar in principle to radiography (imaging with X-rays) but capable of surveying much larger objects. Since muons are less likely to interact, stop and decay in low-density matter than in high-density matter, a larger number of muons travel through the low-density regions of target objects than through the higher-density regions. The apparatuses record the trajectory of each event to produce a muogram that displays the matrix of the resulting numbers of transmitted muons after they have passed through objects up to multiple kilometers in thickness. The internal structure of the object, imaged in terms of density, is displayed by converting muograms to muographic images.
Muon tomography imagers are under development for the purposes of detecting nuclear material in road transport vehicles and cargo containers for the purposes of non-proliferation.
Another application is the usage of muon tomography to monitor potential underground sites used for carbon sequestration.
Etymology and use
The term muon tomography is based on the word "tomography", a word produced by combining Ancient Greek tomos "cut" and graphe "drawing." The technique produces cross-sectional images (not projection images) of large-scaled objects that cannot be imaged with conventional radiography. Some authors hence see this modality as a subset of muography.
Muography was named by Hiroyuki K. M. Tanaka. There are two explanations for the origin of the word "muography": (A) a combination of the elementary particle muon and Greek γραφή (graphé) "drawing," together suggesting the meaning "drawing with muons"; and (B) a shortened combination of "muon" and "radiography." Although these techniques are related, they differ in that radiography uses X-rays to image the inside of objects on the scale of meters, while muography uses muons to image the inside of objects on the scale of hectometers to kilometers.
Invention of muography
Precursor technologies
Twenty years after Carl David Anderson and Seth Neddermeyer discovered that muons were generated from cosmic rays in 1936, Australian physicist E.P. George made the first known attempt to measure the areal density of the rock overburden of the Guthega-Munyang tunnel (part of the Snowy Mountains Hydro-Electric Scheme) with cosmic ray muons. He used a Geiger counter. Although he succeeded in measuring the areal density of rock overburden placed above the detector, and even successfully matched the result from core samples, due to the lack of directional sensitivity in the Geiger counter, imaging was impossible.
In a famous experiment in the 1960s, Luis Alvarez used muon transmission imaging to search for hidden chambers in the Pyramid of Chephren in Giza, although none were found at the time; a later effort discovered a previously unknown void in the Great Pyramid. In all cases the information about the absorption of the muons was used as a measure of the thickness of the material crossed by the cosmic ray particles.
First muogram
The first muogram was produced in 1970 by a team led by American physicist Luis Walter Alvarez, who installed detection apparatus in the Belzoni Chamber of the Pyramid of Khafre to search for hidden rooms within the structure. He recorded the number of muons after they had passed through the Pyramid. With the invention of this particle-tracking technique, he worked out methods to generate the muogram as a function of the muons' arrival angles. The resulting muogram was compared with the results of computer simulations, and after the apparatus had been exposed to the Pyramid for several months he concluded that there were no hidden chambers in the Pyramid of Chephren.
Film muography
Tanaka and Niwa’s pioneering work created film muography, which uses nuclear emulsion. Exposures of nuclear emulsions were taken in the direction of the volcano and then analyzed with a newly invented scanning microscope, custom built for the purpose of identifying particle tracks more efficiently. Film muography enabled them to obtain the first interior imaging of an active volcano in 2007, revealing the structure of the magma pathway of Asama volcano.
Real-time muography
In 1968, the group of Alvarez used spark chambers with a digital readout for their Pyramid experiment. Tracking data from the apparatus was recorded onto magnetic tape in the Belzoni Chamber; the data were then analyzed by an IBM 1130 computer at Ein Shams University and later by a CDC 6600 computer at the Lawrence Radiation Laboratory. Strictly speaking these were not real-time measurements.
Real-time muography requires muon sensors to convert the muon's kinetic energy into a number of electrons in order to process muon events as electronic data rather than as chemical changes on film. Electronic tracking data can be processed almost instantly with an adequate computer processor; in contrast, film muography data have to be developed before the muon tracks can be observed. Real-time tracking of muon trajectories produce real-time muograms that would be difficult or impossible to obtain with film muography.
High-resolution muography
The MicroMegas detector has a positioning resolution of 0.3 mm, an order of magnitude higher than that of the scintillator-based apparatus (10 mm), and thus has a capability to create better angular resolution for muograms.
Applications
Geology
Muons have been used to image magma chambers to predict volcanic eruptions. Kanetada Nagamine et al. continue active research into the prediction of volcanic eruptions through cosmic-ray attenuation radiography. Minato used cosmic-ray counts to radiograph a large temple gate. Emil Frlež et al. reported using tomographic methods to track the passage of cosmic-ray muons through cesium iodide crystals for quality-control purposes. All of these studies have been based on finding some part of the imaged material that has a lower density than the rest, indicating a cavity. Muon transmission imaging is the most suitable method for acquiring this type of information.
In 2021, Giovanni Leone and his group revealed that volcanic eruption frequency is related to the amount of volcanic material which moves through a near-surface conduit in an active volcano.
Vesuvius
The Mu-Ray project has been using muography to image Vesuvius, famous for its eruption of 79 AD, which destroyed local settlements including Pompeii and Herculaneum. The Mu-Ray project is funded by the Istituto Nazionale di Fisica Nucleare (INFN, Italian National Institute for Nuclear Physics) and the Istituto Nazionale di Geofisica e Vulcanologia (Italian National Institute for Geophysics and Volcanology). The volcano last erupted in 1944. The goal of this project, which is being developed by scientists in Italy, France, the US and Japan, is to "see" inside the volcano. This technology can be applied to volcanoes all around the world to provide a better understanding of when they will erupt.
Etna
The ASTRI SST-2M Project is using muography to generate internal images of the magma pathways of Etna. The volcano's last major eruption, in 1669, caused widespread damage and the death of approximately 20,000 people. Monitoring the magma flows with muography may help to predict the direction in which lava from future eruptions may flow.
From August 2017 to October 2019, time sequential muography imaging of the Etna edifice was conducted to study differences in density levels which would indicate interior volcanic activities. Some of the findings of this research were the following: imaging of a cavity formation prior to crater floor collapse, underground fracture identification, and imaging of the formation of a new vent in 2019 which became active and subsequently erupted.
Stromboli
The apparatuses use nuclear emulsions to collect data near Stromboli volcano. Recent emulsion scanning improvements developed during the course of the Oscillation Project with Emulsion tRacking Apparatus (OPERA experiment) led to film muography. Unlike other muography particle trackers, nuclear emulsion can acquire high angular resolution without electricity. An emulsion-based tracker has been collecting data at Stromboli since December 2011.
Over a period of 5 months in 2019, an experiment using nuclear emulsion muography was done at Stromboli volcano. Emulsion films were prepared in Italy and analyzed in Italy and Japan. The images revealed a low-density zone at the summit of the volcano which is thought to influence the stability of the “Sciara del Fuoco” slope (the source of many landslides).
Puy de Dôme
Since 2010, a muographic imaging survey has been conducted at the dormant volcano, Puy de Dôme, in France. It has been using the existing closed building structures located directly underneath the southern and eastern sides of the volcano for equipment testing and experiments. Preliminary muographs have revealed previously unknown density features at the top of Puy de Dôme that have been confirmed with gravimetric imaging.
A joint measurement was conducted by French and Italian research groups in 2013-2014 during which different strategies for improved detector designs were tested, particularly their capacities to reduce background noise.
Underground water monitoring
Muography has been applied to groundwater and saturation level monitoring for bedrock in a landslide area as a response to major rainfall events. The measurement results were compared with borehole groundwater level measurements and rock resistivity.
Glaciers
The applicability of muography to glacier studies was first demonstrated with a survey of the top portion of the Aletsch Glacier, located in the central European Alps.
In 2017, a Japanese/Swiss collaboration conducted a larger-scale muography imaging experiment at the Eiger Glacier to determine the bedrock geometry beneath active glaciers in the steep alpine environment of the Jungfrau region in Switzerland. Five to six double-side-coated emulsion films were set in frames with stainless-steel plates for shielding and installed in three regions of a railway tunnel located underneath the targeted glacier. Production of the emulsion films was done in Switzerland and analysis was done in Japan.
The underlying bedrock and the boundary between the glacier and the bedrock could be successfully imaged for the first time. The methodology provided important information on subglacial mechanisms of bedrock erosion.
Mining
TRIUMF and its spin-off company Ideon Technologies developed a muography detector designed specifically for surveying possible uranium deposit sites through industry-standard boreholes.
Civil engineering
Muography has been used to map the inside of big civil engineering structures, such as dams, and their surroundings for safety and risk prevention purposes. Muography imaging was applied to the identification of hidden construction shafts located above the Alfreton Old Tunnel (constructed in 1862) in the UK.
Nuclear reactors
Muography was applied to investigating the conditions of nuclear reactors damaged by the Fukushima nuclear disaster, and helped to confirm its state of near-complete meltdown.
Nuclear waste imaging
Tomographic techniques can be effective for non-invasive nuclear waste characterization and for nuclear material accountancy of spent fuel inside dry storage containers. Cosmic muons can improve the accuracy of data on nuclear waste and Dry Storage Containers (DSC). Imaging of DSC exceeds the IAEA detection target for nuclear material accountancy. In Canada, spent nuclear fuel is stored in large pools (fuel bays or wet storage) for a nominal period of 10 years to allow for sufficient radioactive cooling.
Challenges and issues for nuclear waste characterization are covered at great length, summarized below:
Historical waste. Non-traceable waste stream poses a challenge for characterization. Different types of waste can be distinguished: tanks with liquids, fabrication facilities to be decontaminated before decommissioning, interim waste storage sites, etc.
Some waste forms may be difficult or impossible to measure and characterize (i.e. encapsulated alpha/beta emitters, heavily shielded waste).
Direct measurements, i.e. destructive assay, are not possible in many cases and Non-Destructive Assay (NDA) techniques are required, which often do not provide conclusive characterization.
Homogeneity of the waste needs characterization (i.e. sludge in tanks, in-homogeneities in cemented waste, etc.).
Condition of the waste and waste package: breach of containment, corrosion, voids, etc.
Accounting for all of these issues can take a great deal of time and effort. Muon Tomography can be useful to assess the characterization of waste, radiation cooling, and condition of the waste container.
Los Alamos Concrete Reactor
In the summer of 2011, a reactor mockup was imaged using the Muon Mini Tracker (MMT) at Los Alamos. The MMT consists of two muon trackers made up of sealed drift tubes. In the demonstration, cosmic-ray muons passing through a physical arrangement of concrete and lead (materials similar to a reactor) were measured. The mockup consisted of two layers of concrete shielding blocks with a lead assembly in between; one tracker was installed at height, and another tracker was installed at ground level on the other side. Lead with a conical void, similar in shape to the melted core of the Three Mile Island reactor, was imaged through the concrete walls. It took three weeks to accumulate enough muon events. The analysis was based on the point of closest approach: the track pairs were projected to the mid-plane of the target, and the scattering angle was plotted at the intersection. This test object was successfully imaged, even though it was significantly smaller than what is expected at Fukushima Daiichi for the proposed Fukushima Muon Tracker (FMT).
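The point-of-closest-approach idea can be illustrated with a short sketch. This is a generic, simplified version of the approach, not the LANL analysis code: given one incoming and one outgoing track for a muon, it returns the 3-D point where the two lines pass closest to each other (taken as the scattering vertex) and the angle between them.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach (PoCA) of two tracks and their mutual angle.

    p_in, d_in   -- a point on, and the direction of, the incoming track
    p_out, d_out -- a point on, and the direction of, the outgoing track
    Returns (midpoint of the closest-approach segment, scattering angle in radians).
    """
    d_in = d_in / np.linalg.norm(d_in)
    d_out = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    b = d_in @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = 1.0 - b * b                    # unit directions, so the quadratic terms are 1
    if denom < 1e-12:                      # (anti)parallel tracks: no clear vertex
        s, t = 0.0, d / b
    else:
        s = (b * e - d) / denom
        t = (e - b * d) / denom
    midpoint = 0.5 * ((p_in + s * d_in) + (p_out + t * d_out))
    angle = np.arccos(np.clip(d_in @ d_out, -1.0, 1.0))
    return midpoint, angle

# Toy example: a vertical incoming track and an outgoing track with a small kink near z = 0.
vertex, theta = poca(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
                     np.array([0.05, 0.0, -1.0]), np.array([0.05, 0.0, -1.0]))
print(vertex, np.degrees(theta))           # vertex near the origin, angle of a few degrees
```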
Fukushima application
On March 11, 2011, a 9.0-magnitude earthquake, followed by a tsunami, caused an ongoing nuclear crisis at the Fukushima Daiichi power plant. Though the reactors are stabilized, complete shutdown will require knowledge of the extent and location of the damage to the reactors. A cold shutdown was announced by the Japanese government in December, 2011, and a new phase of nuclear cleanup and decommissioning was started. However, it is hard to plan the dismantling of the reactors without any realistic estimate of the extent of the damage to the cores, and knowledge of the location of the melted fuel.
Since radiation levels are still very high inside the reactor core, it is not likely anyone can go inside to assess the damage. The Fukushima Daiichi Tracker (FDT) was proposed to assess the extent of the damage from a safe distance. A few months of measurements with muon tomography will show the distribution of the reactor core. From that, a plan can be made for reactor dismantlement, potentially shortening the time of the project by many years.
In August 2014, Decision Sciences International Corporation announced it had been awarded a contract by Toshiba Corporation (Toshiba) to support the reclamation of the Fukushima Daiichi nuclear complex with the use of Decision Sciences' muon tracking detectors.
Industrial muography has found an application in reactor inspection. It was used to locate the nuclear fuel in the Fukushima Daiichi nuclear power plant, which was damaged by the 2011 Tōhoku earthquake and tsunami.
Non-proliferation
The Nuclear Non-proliferation Treaty (NPT) signed in 1968 was a major step in the non-proliferation of nuclear weapons. Under the NPT, non-nuclear weapon states were prohibited from, among other things, possessing, manufacturing or acquiring nuclear weapons or other nuclear explosive devices. All signatories, including nuclear weapon states, were committed to the goal of total nuclear disarmament.
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) bans all nuclear explosions in any environments. Tools such as muon tomography can help to stop the spread of nuclear material before it is armed into a weapon.
The New START treaty signed by the US and Russia aims to reduce the nuclear arsenal by as much as a third. The verification involves a number of logistically and technically difficult problems. New methods of warhead imaging are of crucial importance for the success of mutual inspections.
Muon tomography can be used for treaty verification due to many important factors. It is a passive method; it is safe for humans and will not apply an artificial radiological dose to the warhead. Cosmic rays are much more penetrating than gamma or x-rays. Warheads can be imaged in a container behind significant shielding and in presence of clutter. Exposure times depend on the object and detector configuration (~few minutes if optimized). While special nuclear material (SNM) detection can be reliably confirmed, and discrete SNM objects can be counted and localized, the system can be designed to not reveal potentially sensitive details of the object design and composition.
The Multi-Mode Passive Detection System (MMPDS) port scanner, located in the Freeport, Bahamas can detect both shielded nuclear material, as well as explosives and contraband. The scanner is large enough for a cargo container to pass through, making it a scaled-up version of the Mini Muon Tracker. It then produces a 3-D image of what is scanned.
Tools such as the MMPDS can be used to prevent the spread of nuclear weapons. The safe but effective use of cosmic rays can be implemented in ports to help non-proliferation efforts, or even in cities, under overpasses, or entrances to government buildings.
Archaeology
Egyptian pyramids
In 2015, 45 years after Alvarez’s experiment, the ScanPyramids Project, which is composed of an international team of scientists from Egypt, France, Canada, and Japan, started using muography and thermography imaging techniques to survey the Giza pyramid complex. In 2017, scientists involved in the project discovered a large cavity, named "ScanPyramids Big Void", above the Grand Gallery of the Great Pyramid of Giza. In 2023, "a corridor-shaped structure" was found in Khufu's Pyramid using the cosmic-ray muons. It was named "ScanPyramids North Face Corridor".
Mexican pyramids
The 3rd largest pyramid in the world, the Pyramid of the Sun, situated near Mexico City in the ancient city of Teotihuacan was surveyed with muography. One of the motivations of the team was to discover if inaccessible chambers inside the Pyramid might hold the tomb of a Teotihuacan ruler. The apparatus was transported in components and then reassembled inside a small tunnel leading to an underground chamber directly underneath the pyramid. A low density region approximately 60 meters wide was reported as a preliminary result, which has led some researchers to suggest that the structure of the pyramid might have been weakened and it is in danger of collapse.
In 2020, the US National Science Foundation awarded a US-Mexico international group a grant for muography to investigate El Castillo, the largest pyramid in Chichen Itza.
Mt. Echia
A three-dimensional muography experiment was carried out in the underground tunnels of Mt Echia (in Naples, Italy) with two muon detectors, MU-RAY and MIMA, which successfully imaged two known cavities and discovered one previously unknown cavity. Mt Echia is the site of the earliest settlement of Naples, dating to the 8th century, and is riddled with underground tunnels. Using measurements from three different locations in the tunnels, a 3D reconstruction of the unknown cavity was created. The method used for this experiment could be applied to other archaeological targets to check the structural integrity of ancient sites and to potentially discover hidden historical regions within known sites.
China's imperial chambers
Yuanyuan Liu of the Beijing Normal University and her group showed the feasibility of muography to image the underground chamber of the first emperor of China.
Planetary science
Mars
Muography may potentially be implemented to image extraterrestrial objects such as the geology of Mars. Cosmic rays are numerous and omnipresent in outer space. Therefore, the process by which cosmic rays interact in the Earth's atmosphere to generate pions and other mesons, which subsequently decay into muons, is predicted to occur in the atmospheres of other planets as well. It has been calculated that the atmosphere of Mars is sufficient to produce a horizontal muon flux suitable for practical muography, roughly equivalent to the Earth's muon flux. In the future, it may be viable to include a high-resolution muography apparatus in a space mission to Mars, for instance inside a Mars rover. Accurate images of the density of Martian structures could be used to survey for sources of ice or water.
Small Solar System bodies
The NASA Innovative Advanced Concepts (NIAC) program is assessing whether muography may be used for imaging the density structures of small Solar System bodies (SSBs). While SSBs tend to generate a lower muon flux than the Earth's atmosphere, for some of them the flux is sufficient to allow muography of objects about 1 km or less in diameter. The program includes calculating the muon flux for each potential target, creating imaging simulations and considering the engineering challenges of building a more lightweight, compact apparatus appropriate for such a mission.
Hydrospheric muography
The Hyper-kilometric Submarine Deep Detector (HKMSDD) was designed as a technique to operate muographic observations autonomously under the sea at reasonable costs by combining linear arrays of muographic sensor modules with underwater tube structures.
In undersea muography, time-dependent mass movements, consisting of or occurring within targeted gigantic fluid bodies and submerged solid material bodies, can be more precisely imaged than with land-based muography. Time-dependent fluctuations of the muon flux due to atmospheric pressure variations are suppressed when muography is conducted under the seafloor, because of the "inverse barometric effect" (IBE) of seawater. Low atmospheric pressures, such as those observed at the center of a cyclone, draw seawater up; conversely, high atmospheric pressures push seawater down. The barometric fluctuations of the muon flux are therefore mostly compensated by the IBE at sea level.
Carbon capture and storage
The success of carbon capture and storage (CCS) hinges upon being able to reliably contain the materials within the storage containers. It has been proposed to use muography as a monitoring tool for CCS. In 2018, a 2 month study supported the feasibility of CCS muography monitoring. It was completed in the UK at the Boulby Mine site in a deep borehole.
Technique variants
Muon scattering tomography (MST)
Muon scattering tomography was first proposed by Chris Morris and his group at Los Alamos National Laboratory (LANL). This technique is capable of locating the source of a muon's Rutherford scattering by tracking the incoming and outgoing muon trajectories around the target. Since radiation lengths tend to be shorter for higher-atomic-number materials, larger scattering angles are expected for the same path length; the technique is therefore more sensitive to differences between materials within structures and can be used to image heavy metals hidden inside light materials. On the other hand, it is not well suited to imaging void structures or light materials located inside heavy materials.
LANL and its spinoff company Decision Sciences applied the MST technique to image the interiors of large trucks and other storage containers in order to detect nuclear materials. A similar system using MST was developed at the University of Glasgow and its spin-off company Lynkeos Technology and applied to monitoring the robustness of nuclear waste containers at the Sellafield storage site.
With muon scattering tomography, both the incoming and outgoing trajectories of each particle are reconstructed. This technique has been shown to be useful for finding materials with a high atomic number, such as uranium, against a background of material with a low atomic number. Since the development of this technique at Los Alamos, a few different companies have started to use it for several purposes, most notably for detecting nuclear cargo entering ports and crossing borders.
The Los Alamos National Laboratory team has built a portable Mini Muon Tracker (MMT). This muon tracker is constructed from sealed aluminum drift tubes, which are grouped into twenty-four planes. The drift tubes measure particle coordinates in X and Y with a typical accuracy of several hundred micrometers. The MMT can be moved via a pallet jack or a fork lift. If a nuclear material has been detected it is important to be able to measure details of its construction in order to correctly evaluate the threat.
MST uses multiple-scattering radiography. In addition to energy loss and stopping, cosmic-ray muons undergo Coulomb scattering. The angular distribution is the result of many single scatters. This results in an angular distribution that is Gaussian in shape, with tails from large-angle single and plural scattering. The scattering provides a novel method for obtaining radiographic information with charged-particle beams. More recently, scattering information from cosmic-ray muons has been shown to be a useful method of radiography for homeland security applications.
When the thickness of the scattering material increases and the number of interactions becomes large, the angular dispersion can be modelled as Gaussian, and the dominant part of the multiple-scattering polar-angle distribution is

$$\frac{dN}{d\theta} \approx \frac{1}{\sqrt{2\pi}\,\theta_0}\exp\!\left(-\frac{\theta^2}{2\theta_0^2}\right),$$

where θ is the muon scattering angle and θ0 is the standard deviation of the scattering angle, given approximately by

$$\theta_0 \approx \frac{13.6\ \text{MeV}}{\beta c\, p}\sqrt{\frac{X}{X_0}}\left[1 + 0.038\,\ln\!\left(\frac{X}{X_0}\right)\right].$$

The muon momentum and velocity are p and β, respectively, c is the speed of light, X is the length of the scattering medium, and X0 is the radiation length of the material. This needs to be convolved with the cosmic-ray momentum spectrum in order to describe the angular distribution.
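To give a feel for the magnitudes involved, the sketch below evaluates the approximate expression for θ0 above for a few materials. It is only an illustration: the radiation lengths are rough textbook values, and 3 GeV/c is taken as a typical sea-level muon momentum.

```python
import math

def theta0_mrad(p_mev_c, x_cm, x0_cm, beta=1.0):
    """Approximate multiple-scattering width theta0 (in milliradians)."""
    ratio = x_cm / x0_cm
    return 13.6 / (beta * p_mev_c) * math.sqrt(ratio) * (1 + 0.038 * math.log(ratio)) * 1e3

# Approximate radiation lengths in cm (illustrative values only).
materials = {"water": 36.1, "concrete": 11.5, "iron": 1.76, "lead": 0.56, "uranium": 0.32}

p = 3000.0  # 3 GeV/c, roughly a typical muon momentum at sea level
for name, x0 in materials.items():
    print(f"{name:>8}: theta0 ~ {theta0_mrad(p, 10.0, x0):5.1f} mrad through 10 cm")
# High-Z materials such as lead and uranium scatter the muon several times more strongly
# than the same thickness of concrete or water, which is what scattering tomography exploits.
```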
The image can then be reconstructed using GEANT4 simulations. These runs include the input and output vectors of each incident particle. The incident flux projected to the core location was used to normalize the transmission radiography (attenuation method). From there, the calculations are normalized for the zenith angle of the flux.
Muon Momentum Integrated Tomography System
Despite the various benefits of using cosmic-ray muons for imaging large and dense objects, i.e., spent nuclear fuel casks and nuclear reactors, their wide application is often limited by the naturally low muon flux at sea level, approximately 10,000 m−2min−1. To overcome this limitation, two important quantities, the scattering angle θ and the momentum p, must be measured for each muon event. To measure cosmic-ray muon momentum in the field, a fieldable muon spectrometer using multi-layer pressurized gas Cherenkov radiators has been developed, and the combined spectrometer-tomography system shows improved muon scattering tomography resolution.
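A crude counting-statistics estimate shows why the low flux matters for exposure times. In the sketch below only the sea-level flux of roughly 10,000 muons per square metre per minute comes from the text; the detector area, transmission fraction, pixel count and target precision are assumed, illustrative numbers, and effects such as angular acceptance are ignored.

```python
# Crude exposure-time estimate from Poisson counting statistics.
FLUX_PER_M2_MIN = 10_000.0   # sea-level muon flux quoted in the text

detector_area_m2 = 1.0       # assumed detector size
transmission = 0.01          # assumed fraction of muons surviving the target
n_pixels = 50 * 50           # assumed muogram resolution (angular bins)
target_rel_error = 0.05      # desired ~5% statistical error per pixel

muons_per_pixel = 1.0 / target_rel_error**2        # relative error ~ 1/sqrt(N)
total_detected_needed = muons_per_pixel * n_pixels

minutes = total_detected_needed / (FLUX_PER_M2_MIN * detector_area_m2 * transmission)
print(f"~{minutes / (60 * 24):.0f} days of exposure")  # on the order of a week for these inputs
```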
Muon computational axial tomography (Mu-CAT)
Mu-CAT is a technique which combines multiple projected muographic images to create a 3D muography image. In principle, it is similar to the medical imaging used in radiology (CAT scans) to obtain three-dimensional internal images of the body. While medical CAT scanners rotate an X-ray generator around the target object, Mu-CAT uses multiple detectors placed around the target object and naturally occurring muons as probes. Either tomographic reconstruction techniques or inverse-problem methods are applied to the data from the Mu-CAT observations to reconstruct 3D images.
Mu-CAT revealed the three-dimensional position of a fractured zone below the crater floor of an active volcano related to a past eruption that had caused a large pyroclastic and lava flow on its northern slope.
Cosmic Ray Inspection and Passive Tomography (CRIPT)
The Cosmic Ray Inspection and Passive Tomography (CRIPT) detector is a Canadian muon tomography project which tracks muon scattering events while simultaneously estimating the muon momentum. The majority of the detector's mass is located in the muon momentum spectrometer, a feature unique to CRIPT among muon tomography projects.
After initial construction and commissioning at Carleton University in Ottawa, Canada, the CRIPT detector was moved to Atomic Energy Of Canada Limited's Chalk River Laboratories.
The CRIPT detector is presently examining the limitations on detection time for border security applications, limitations on muon tomography image resolution, nuclear waste stockpile verification, and space weather observation through muon detection.
Technical aspects
The apparatus is a muon-tracking device that consists of muon sensors and recording media. Several different kinds of muon sensors are used in muography apparatuses: plastic scintillators, nuclear emulsions, or gaseous ionization detectors. The recording medium is either the film itself or digital magnetic or electronic memory. The apparatus is directed towards the target volume, and the muon sensor is exposed until enough muon events have been recorded to form a statistically sufficient muogram; in post-processing, a muograph displaying the average density along each muon path is then created.
Advantages
There are several advantages that muography has over traditional geophysical surveys. First, muons are naturally abundant and travel from the atmosphere towards the Earth’s surface. This abundant muon flux is nearly constant, therefore muography can be used worldwide. Second, because of the high-contrast resolution of muography, a small void of less than 0.001% of the entire volume can be distinguished. Finally, the apparatus has much lower power requirements than other imaging techniques since they use natural probes, rather than relying on artificially generated signals.
Process
In the field of muography, the transmission coefficient is defined as the ratio of the muon flux transmitted through the object to the incident muon flux. By applying the muon's range through matter to the open-sky muon energy spectrum, the fraction of the incident muon flux that is transmitted through the object can be derived analytically. A muon with a different energy has a different range, defined as the distance the incident muon can traverse in matter before it stops. For example, muons with an energy of 1 TeV have a continuous-slowing-down-approximation range (CSDA range) of 2500 m water equivalent (m.w.e.) in silicon dioxide, whereas the range is reduced to 400 m.w.e. for 100 GeV muons. The range varies with material; for example, 1 TeV muons have a CSDA range of 1500 m.w.e. in lead.
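The two range figures quoted above can be turned into a rough rule of thumb for the minimum muon energy needed to cross a given opacity. The sketch below is a simple log-log interpolation between those two points only; real analyses use full range-energy tables together with the open-sky muon spectrum, and the 300 m rock thickness and 2.65 g/cm³ density in the example are assumed values.

```python
import math

# Log-log interpolation between the two CSDA ranges quoted above for silicon dioxide:
# 100 GeV -> ~400 m.w.e. and 1 TeV -> ~2500 m.w.e.
E1, R1 = 100.0, 400.0     # GeV, metres of water equivalent
E2, R2 = 1000.0, 2500.0

slope = math.log(R2 / R1) / math.log(E2 / E1)   # ~0.8

def min_energy_gev(opacity_mwe):
    """Rough minimum muon energy (GeV) needed to cross the given opacity (m.w.e.)."""
    return E1 * (opacity_mwe / R1) ** (1.0 / slope)

# Example: ~300 m of rock at an assumed density of 2.65 g/cm^3 is ~800 m.w.e.
print(f"~{min_energy_gev(300 * 2.65):.0f} GeV")   # roughly 240 GeV in this toy model
```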
The numbers (or, later, colors) forming a muogram are displayed in terms of the number of transmitted muon events. Each pixel in the muogram is a two-dimensional unit based on the angular resolution of the apparatus. The phenomenon whereby muography cannot differentiate certain density variations is called the "volume effect": a large amount of low-density material and a thin layer of high-density material can cause the same attenuation in the muon flux. Therefore, in order to avoid false interpretations arising from volume effects, the exterior shape of the volume has to be accurately determined and used when analyzing the data.
References
Imaging
Particle detectors | Muon tomography | Technology,Engineering | 6,455 |
15,703,283 | https://en.wikipedia.org/wiki/DAVID | DAVID (the database for annotation, visualization and integrated discovery) is a free online bioinformatics resource developed by the Laboratory of Human Retrovirology and Immunoinformatics (LHRI). All tools in the DAVID Bioinformatics Resources aim to provide functional interpretation of large lists of genes derived from genomic studies, e.g. microarray and proteomics studies. DAVID can be found at https://david.ncifcrf.gov/
The DAVID Bioinformatics Resources consists of the DAVID Knowledgebase and five integrated, web-based functional annotation tool suites: the DAVID Gene Functional Classification Tool, the DAVID Functional Annotation Tool, the DAVID Gene ID Conversion Tool, the DAVID Gene Name Viewer and the DAVID NIAID Pathogen Genome Browser. The expanded DAVID Knowledgebase now integrates almost all major and well-known public bioinformatics resources centralized by the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of diverse gene/protein identifiers and annotation terms from a variety of public bioinformatics databases. For any uploaded gene list, the DAVID Resources now provides not only the typical gene-term enrichment analysis, but also new tools and functions that allow users to condense large gene lists into gene functional groups, convert between gene/protein identifiers, visualize many-genes-to-many-terms relationships, cluster redundant and heterogeneous terms into groups, search for interesting and related genes or terms, dynamically view genes from their lists on bio-pathways and more.
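The gene-term enrichment analysis mentioned above is, at its core, an over-representation test on a 2x2 contingency table. The sketch below illustrates the general idea with a plain one-sided Fisher's exact test from SciPy; it is a generic illustration with made-up counts and does not reproduce DAVID's own scoring or knowledgebase.

```python
from scipy.stats import fisher_exact

def term_enrichment(list_hits, list_size, background_hits, background_size):
    """One-sided Fisher's exact test for over-representation of one annotation term.

    list_hits       -- genes in the uploaded list annotated with the term
    list_size       -- total genes in the uploaded list
    background_hits -- genes in the background annotated with the term
    background_size -- total genes in the background
    """
    not_list_hits = background_hits - list_hits
    table = [
        [list_hits, list_size - list_hits],
        [not_list_hits, (background_size - list_size) - not_list_hits],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Toy numbers: 20 of 300 uploaded genes carry a GO term that annotates
# 100 of 20,000 background genes.
odds, p = term_enrichment(20, 300, 100, 20_000)
print(f"odds ratio {odds:.1f}, p = {p:.2e}")
```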
The DAVID 2021 update was released in December 2021. The knowledgebase is scheduled to be updated quarterly.
Functionality
DAVID provides a comprehensive set of functional annotation tools for investigators to understand the biological meaning behind large lists of genes. For any given gene list, DAVID tools are able to:
Identify enriched biological themes, particularly GO terms
Discover enriched functional-related gene groups
Cluster redundant annotation terms
Visualize genes on BioCarta & KEGG pathway maps
Display related many-genes-to-many-terms on 2-D view.
Search for other functionally related genes not in the list
List interacting proteins
Explore gene names in batch
Link gene-disease associations
Highlight protein functional domains and motifs
Redirect to related literatures
Convert gene identifiers from one type to another.
External links
https://david-d.ncifcrf.gov/
Plant GO annotation for 165 species and GO enrichment analysis
References
Biochemistry databases
Bioinformatics software
Genetics databases
Laboratory software
Systems biology | DAVID | Chemistry,Biology | 525 |
70,961,697 | https://en.wikipedia.org/wiki/Power-voltage%20curve | Power-voltage curve (also P-V curve) describes the relationship between the active power delivered to the electrical load and the voltage at the load terminals in an electric power system under a constant power factor. When plotted with power as a horizontal axis, the curve resembles a human nose, thus it is sometimes called a nose curve. The overall shape of the curve (similar to a parabola placed on its side) is defined by the basic electrical equations and does not change much when the characteristics of the system vary: leading power factor lead stretches the "nose" further to the right and upwards, while the lagging one shrinks the curve. The curve is important for voltage stability analysis, as the coordinate of the tip of the nose defines the maximum power that can be delivered by the system.
As the load increases from zero, the power-voltage point travels from the top left part of the curve to the tip of the "nose" (power increases, but the voltage drops). The tip corresponds to the maximum power that can be delivered to the load (as long as sufficient reactive power reserves are available). Past this "collapse" point, additional load causes a drop in both voltage and power, as the power-voltage point travels to the bottom left corner of the plot. Intuitively, this result can be explained by considering a load that consists entirely of resistors: as the load increases (and its resistance thus drops), more and more of the generator power dissipates inside the generator itself (which has its own fixed internal resistance connected in series with the load). Operation on the bottom part of the curve (where the same power is delivered with a lower voltage, and thus a higher current and higher losses) is not practical, as it corresponds to the "uncontrollability" region.
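The shape of the curve can be reproduced with the textbook lossless two-bus model: a source of voltage E behind a series reactance X feeding a load P + jQ at constant power factor, for which the receiving-end voltage satisfies V^4 + (2QX - E^2)V^2 + X^2(P^2 + Q^2) = 0. The sketch below sweeps P with arbitrary per-unit values and returns both branches of the nose curve; it is an idealized illustration, not a model of any particular system.

```python
import math

def pv_curve(E=1.0, X=0.3, tan_phi=0.2, dp=0.01):
    """Upper and lower branch voltages of the nose curve for a lossless two-bus system.

    E       -- source voltage magnitude (per unit)
    X       -- series reactance (per unit)
    tan_phi -- Q = P * tan_phi; positive means a lagging (inductive) load
    """
    points, p = [], 0.0
    while True:
        q = p * tan_phi
        a = (E**2 - 2 * q * X) / 2.0
        disc = a**2 - X**2 * (p**2 + q**2)
        if disc < 0:                               # past the tip of the nose: no solution
            break
        v_upper = math.sqrt(a + math.sqrt(disc))   # normal operating branch
        v_lower = math.sqrt(a - math.sqrt(disc))   # unstable lower branch
        points.append((p, v_upper, v_lower))
        p += dp
    return points

curve = pv_curve()
p_tip, v_tip, _ = curve[-1]
print(f"maximum deliverable power ~ {p_tip:.2f} p.u. at V ~ {v_tip:.2f} p.u.")
```

Re-running the sweep with a negative tan_phi (a leading power factor) yields a larger maximum power and a higher tip voltage, matching the stretching of the nose described above.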
If sufficient reactive power is not available, the limit of the load power will be reached prior to the power-voltage point getting to the tip of the "nose". The operator shall maintain a sufficient margin between the operating point on the P-V curve and this maximum loading condition, otherwise, a voltage collapse can occur.
A similar curve for the reactive power is called Q-V curve.
References
Sources
Electrical engineering | Power-voltage curve | Engineering | 445 |
55,164,247 | https://en.wikipedia.org/wiki/NGC%204620 | NGC 4620 is a lenticular galaxy located about 65 million light-years away in the constellation of Virgo. It was discovered by astronomer John Herschel on March 29, 1830. NGC 4620 is a member of the Virgo Cluster.
See also
List of NGC objects (4001–5000)
NGC 4733
References
External links
Lenticular galaxies
Virgo (constellation)
4620
42619
7859
Astronomical objects discovered in 1830
Virgo Cluster | NGC 4620 | Astronomy | 92 |
7,799,518 | https://en.wikipedia.org/wiki/Nu%20Hydrae | Nu Hydrae, Latinized from ν Hydrae, is an orange-hued star in the constellation Hydra, near the border with the neighboring constellation of Crater. It has an apparent visual magnitude of 3.115, which is bright enough to be seen with the naked eye. Based upon parallax measurements, this star is located at a distance of about from the Earth.
The spectrum of this star matches a stellar classification of K0/K1 III, where the luminosity class of 'III' indicates this is a giant star that has exhausted the supply of hydrogen at its core and evolved away from the main sequence. The radius of this star has expanded to 21 times the Sun's radius and it radiates about 151 times the luminosity of the Sun. This expanded outer envelope has an effective temperature of about 4,335 K, giving it the characteristic orange hue of a K-type star.
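The quoted radius, temperature and luminosity can be cross-checked against the Stefan-Boltzmann relation L/L_sun = (R/R_sun)^2 (T/T_sun)^4. The sketch below is only a rough sanity check: the solar effective temperature of 5772 K is an assumed reference value, and the small difference from the quoted 151 solar luminosities is expected from rounding and the adopted solar values.

```python
# Stefan-Boltzmann cross-check of the values quoted above for Nu Hydrae.
T_SUN = 5772.0        # adopted solar effective temperature in kelvin (reference value)

radius_rsun = 21.0    # radius in solar radii (quoted above)
t_eff = 4335.0        # effective temperature in kelvin (quoted above)

luminosity_lsun = radius_rsun**2 * (t_eff / T_SUN)**4
print(f"L ~ {luminosity_lsun:.0f} L_sun")   # ~140 L_sun, broadly consistent with the quoted ~151
```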
Nu Hydrae is an X-ray emitter with an estimated luminosity of in the X-ray band. The abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, is about half that in the Sun. It has a relatively high proper motion across the celestial sphere, suggesting that it has a peculiar velocity roughly three times higher than its neighbors.
Nu Hydrae was a later designation of 4 Crateris.
Notes
References
K-type giants
Hydra (constellation)
Hydrae, Nu
Durchmusterung objects
Crateris, 04
093813
052943
4232 | Nu Hydrae | Astronomy | 311 |
60,430,591 | https://en.wikipedia.org/wiki/Aline%20Miller | Aline Fiona Miller (born 1975) is a Professor of Biomolecular Engineering at the University of Manchester. She specialises in the characterisation of polymer, biopolymer and peptides, using neutron and x-ray scattering, as well as the development of functionalised nanostructures for regenerative medicine and toxicology testing.
Early life and education
Miller studied Chemistry at the University of Strathclyde and graduated in 1997. She was an undergraduate exchange student at Franklin & Marshall College. Miller joined Durham University as a post graduate student, earning a PhD in 2000 under the supervision of Randal Richards. Miller worked on graft copolymers, which included polynorbornene and polyethylene oxide, and studied their organisation at air-water interfaces. After completing her doctorate, Miller moved to New Hall, Cambridge, where she was appointed a Junior Research Fellow and worked with Athene Donald on cellulose. She was inspired to have a career in research during this fellowship.
Research and career
Miller joined the University of Manchester Institute of Science and Technology (UMIST) in 2002. She was made a full Professor in 2014. She currently works in CEAS - Academic & Research Department of Chemical Engineering & Analytical Science at the Manchester Institute of Biotechnology. She investigates the behaviour of molecules at different interfaces, including the air-liquid and liquid-liquid interface. Surfactants and polymers can be used to promote or inhibit the crystallisation of small molecules, for example the use of hydroxyl based polymers in the crystallisation of ice cream. To mimic how fish use macromolecules to stop their blood freezing, Miller combines antifreeze proteins with ice crystals. In 2004 Miller established the University of Manchester Polymers & Peptides Research Group. Here she works on the characterisation of polymer, biopolymer and peptides, using neutron and x-ray scattering. The in-depth characterisation of these materials allows Miller to tailor them for specific applications.
Miller also works in biomedical engineering, creating three-dimensional scaffolds through the control of proteins and peptides. She explores the relationship between mesoscopic structure, material properties and cell response. She has studied how proteins self-assemble, including what causes them to unfold and form fibril structures. The morphology (roughness, porosity) and mechanical properties (such as Young's modulus and viscosity) can be controlled through self-assembly. The self-assembling peptides can be conjugated with polymers that are sensitive to pH and temperature. Through the synthesis of short peptides with various amino acid sequences the Miller group are studying the self-assembly of Beta sheets. She has developed a biocompatible, biodegradable cardiac patch, created from a thick porous scaffold coated with a material that mimics the extracellular matrix. She also studies the degradation mechanism of these materials.
Miller was awarded a small grant from the University of Manchester to develop the synthesis of peptide-based hydrogels. The synthetic peptide hydrogels were so successful that she set up the spin-out company PeptiGelDesign, a group which worked to commercialise hydrogel technologies. Since 2008 PeptiGelDesign have raised over £6 million in funding. Recognising the reach and potential of PeptiGelDesign, the company relaunched as Manchester BIOGEL in 2018, continuing to offer peptide-based hydrogels amongst other biomaterials. The hydrogels can be used to improve the quality of drug toxicity testing, DNA sensing and regenerative medicine.
Awards and honours
Her awards and honours include;
1995 University of Strathclyde William Marr Dux Award
1996 University of Strathclyde Dean's Honours Award
1996 University of Strathclyde Hackman Scholarship Research Award
1997 Sir George Beilby Memorial Medal
1999 Imperial Chemical Industries-Dupont Prize
2001 New Hall Junior Research Fellowship
2004 Exxon Mobil Teaching Fellowship
2008 Institute of Physics Polymer Physics Group and American Physical Society Division of Polymer Physics Young Researchers Award
2008 Royal Society of Chemistry Macro Group UK Young Researchers Medal
2014 Philip Leverhulme Prize for Engineering
2014 Finalist for the WISE Campaign Research Award
Personal life
Miller is married to Alberto Saiani, a materials scientist at the University of Manchester. They have three children.
References
External links
1975 births
Living people
Alumni of the University of Strathclyde
Academics of the University of Cambridge
Academics of the University of Manchester
Academics of the University of Manchester Institute of Science and Technology
Polymer physics
British physicists
British women physicists
British bioengineers
Alumni of Durham University Graduate Society | Aline Miller | Chemistry,Materials_science | 938 |
41,482,032 | https://en.wikipedia.org/wiki/Wind-wave%20dissipation | Wind-wave dissipation or "swell dissipation" is process in which a wave generated via a weather system loses its mechanical energy transferred from the atmosphere via wind. Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, capillary gravity waves play an essential role in this effect, "wind waves" or "swell" are also known as surface gravity waves.
General physics and theory
The process of wind-wave dissipation can be described by applying energy spectrum theory in a similar manner as for the formation of wind waves (spectral dissipation is generally assumed to be a function of the wave spectrum). However, although some recent innovative improvements in field observations (such as those of Banner & Babanin et al.) have contributed to resolving the riddles of wave-breaking behaviour, a clear understanding of the exact mechanisms of wind-wave dissipation has not yet been reached because of their non-linear behaviour.
Based on past and present observations and derived theories, the physics of ocean-wave dissipation can be categorized according to the water depth of the region a wave is passing through. In deep water, wave dissipation occurs through the action of friction or drag forces, such as opposing winds or viscous forces generated by turbulent flows (usually nonlinear forces). In shallow water, wave dissipation mostly takes the form of shore wave breaking (see Types of wave breaking).
Some simple general descriptions of wind-wave dissipation (as defined by Luigi Cavaleri et al.) have been proposed for the case where only ocean surface waves, such as wind waves, are considered. For simplicity, many proposed mechanisms ignore the interactions of the waves with the vertical structure of the upper layers of the ocean.
Sources of wind-wave dissipation
In general, the physics of wave dissipation can be categorized by its dissipation sources: 1) wave breaking, 2) wave–turbulence interaction, and 3) wave–wave modulation. (The descriptions below also follow the reference.)
1) dissipation by "wave breaking"
Wind-wave breaking in coastal areas is a major source of wind-wave dissipation. Wind waves lose their energy to the shore, or sometimes back to the ocean, when they break at the shore (see "Ocean-surface wave breaking" below).
2) dissipation by "wave–turbulence interaction"
Turbulent wind flows and viscous eddies inside waves can both affect wave dissipation. In early understanding, viscosity was thought to barely affect wind waves, so dissipation of swell by viscosity was also barely considered. However, recent weather forecasting models have begun to consider wave–turbulence interaction in wave modelling. It is still debatable how much turbulence-induced dissipation contributes to changing whole wave profiles, but the ideas of wave–turbulence interaction for surface viscous layers and wave-bottom boundary layers have recently been accepted.
3) dissipation by "wave-wave modulation"
Wave–wave interactions can also affect wave dissipation. Early on, the idea that the breaking of short waves can take energy from long waves through modulation was proposed by Phillips (1963) and by Longuet-Higgins (1969). These ideas were challenged by Hasselmann's work (1971), whose results indicated that dissipation through interactions between wave modulations should be much weaker than Phillips' theory suggested; in the recent understanding, however, the dissipation in these cases is typically a little stronger than the dissipation by wave–turbulence interaction when reasonable modulation transfer functions are implemented. Most swell dissipation is due to this dissipation type.
Ocean-surface wave breaking
When wind waves approach the coast from deep water, their heights and lengths change. The wave height increases and the wavelength shortens as the wave velocity slows on the approach to the shore (a numerical illustration of this follows the breaker descriptions below). If the water depth is sufficiently shallow, the wave crest becomes steeper and the trough becomes broader and shallower; finally, the ocean waves break at the shore. The motion of wave breaking differs according to the steepness of the shore and of the waves, and can be categorized into the three types below.
• Spilling breaker
With a low shore slope, the waves lose energy slowly as they approach the shore. The waves spill sea water down their front faces as they break.
• Plunging breaker
With a moderately steep shore slope, the wave loses energy quickly. If the shore slope is steep enough, the crest of the wave moves faster than the trough; the crest curls over the front of the wave and then plunges sea water into the trough. (Plunging breakers are good for surfing.)
• Surging breaker
With a very steep shore slope (extreme steepness, such as a seawall), the waves cannot reach the critical steepness needed to break. The waves surge up the shore slope and release their energy back seaward from the shore. They do not show white-cap breaking, although in extreme cases, such as against a seawall, the waves break with white foam.
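As referenced above, the shortening and slowing of waves in shallowing water follows from the linear (Airy) dispersion relation ω² = gk tanh(kh). The sketch below solves it numerically for a fixed 10-second swell at decreasing depths; it is standard linear wave theory used purely to illustrate shoaling, not any of the dissipation source terms discussed earlier.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavelength(period_s, depth_m):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for the wavelength."""
    omega = 2 * math.pi / period_s
    k = omega**2 / G                       # deep-water first guess
    for _ in range(200):                   # simple fixed-point iteration
        k = omega**2 / (G * math.tanh(k * depth_m))
    return 2 * math.pi / k

# A 10-second swell moving into shallower water: wavelength and phase speed both shrink.
for depth in (1000, 50, 10, 2):
    length = wavelength(10.0, depth)
    print(f"depth {depth:>4} m: wavelength ~ {length:5.1f} m, phase speed ~ {length / 10:4.1f} m/s")
```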
See also
Dispersion (water waves)
External links
Breaking and dissipation of ocean surface waves – Alexander V. Babanin
References
Coastal geography
Physical oceanography
Water waves
Oceanographical terminology | Wind-wave dissipation | Physics,Chemistry | 1,149 |
11,590,152 | https://en.wikipedia.org/wiki/Amazon%20Elastic%20Compute%20Cloud | Amazon Elastic Compute Cloud (EC2) is a part of Amazon's cloud-computing platform, Amazon Web Services (AWS), that allows users to rent virtual computers on which to run their own computer applications. EC2 encourages scalable deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to configure a virtual machine, which Amazon calls an "instance", containing any software desired. A user can create, launch, and terminate server-instances as needed, paying by the second for active servers, hence the term "elastic". EC2 provides users with control over the geographical location of instances that allows for latency optimization and high levels of redundancy. In November 2010, Amazon switched its own retail website platform to EC2 and AWS.
History
Amazon announced a limited public beta test of EC2 on August 25, 2006, offering access on a first-come, first-served basis.
Amazon added two new instance types (Large and Extra-Large) on October 16, 2007. On May 29, 2008, two more types were added, High-CPU Medium and High-CPU Extra Large. There were twelve types of instances available.
Amazon added three new features on March 27, 2008: static IP addresses, availability zones, and user-selectable kernels. On August 20, 2008, Amazon added Elastic Block Store (EBS), which provides persistent storage, a feature that had been lacking since the service was introduced.
Amazon EC2 went into full production when it dropped the beta label on October 23, 2008. On the same day, Amazon announced the following features:
a service level agreement for EC2,
Microsoft Windows in beta form on EC2,
Microsoft SQL Server in beta form on EC2,
plans for an AWS management console, and
plans for load balancing, autoscaling, and cloud monitoring services.
These features were subsequently added on May 18, 2009.
Amazon EC2 was developed mostly by a team in Cape Town, South Africa led by Chris Pinkham. Pinkham provided the initial architecture guidance for EC2 and then built the team and led the development of the project along with Willem van Biljon.
Instance types
Initially, EC2 used Xen virtualization exclusively. However, on November 6, 2017, Amazon announced the new C5 family of instances that were based on a custom architecture around the KVM hypervisor, called Nitro. Each virtual machine, called an "instance", functions as a virtual private server. Amazon sizes instances based on "Elastic Compute Units". The performance of otherwise identical virtual machines may vary. On November 28, 2017, AWS announced a bare-metal instance, a departure from exclusively offering virtualized instance types.
As of January 2019, the following instance types were offered:
General Purpose: A1, T3, T2, M5, M5a, M4, T3a
Compute Optimized: C5, C5n, C4
Memory Optimized: R5, R5a, R4, X1e, X1, High Memory, z1d
Accelerated Computing: P3, P2, G3, F1
Storage Optimized: H1, I3, D2
The following payment methods by instance were offered:
On-demand: pay by the hour without commitment.
Reserved: rent instances with one-time payment receiving discounts on the hourly charge.
Spot: bid-based service; jobs run only if the spot price is below the bid specified by the bidder. The spot price is claimed to be supply-and-demand based; however, a 2011 study concluded that the price was generally not set to clear the market but was dominated by an undisclosed reserve price.
Cost
Amazon charged about $0.0058 per hour ($4.176 per month) for the smallest "Nano Instance" (t2.nano) virtual machine running Linux or Windows. Storage-optimized instances cost as much as $4.992 per hour (i3.16xlarge). "Reserved" instances can go as low as $2.50 per month for a three-year prepaid plan. The data transfer charge ranges from free to $0.12 per gigabyte, depending on the direction and monthly volume (inbound data transfer is free on all AWS services).
EC2 costs can be analyzed using the Amazon Cost and Usage Report. There are many different cost categories for EC2, including hourly instance charges, data transfer, EBS volumes, EBS volume snapshots, and NAT gateways.
Free tier
Amazon offered a bundle of free resource credits to new account holders. The credits are designed to run a "micro" sized server, storage (EBS), and bandwidth for one year. Unused credits cannot be carried over from one month to the next.
Reserved instances
Reserved instances enable EC2 or RDS service users to reserve an instance for one or three years. The corresponding hourly rate charged by Amazon to operate the instance is 35 to 75% lower than the rate charged for on-demand instances.
Reserved instances can be purchased with three different payment options: All Upfront, Partial Upfront and No Upfront. The different purchase options allow for different structuring of payment models, with a larger discount given to customers that pay their reservation upfront.
Reserved Instances are purchased based on a resource commitment. These reservations are made based on an instance type and a count of that instance type. For example, you could reserve 100 i3.large instances for a 3-year term.
In September 2016, AWS announced several enhancements to Reserved instances, introducing a new feature called scope and a new reservation type called a Convertible. In October 2017, AWS announced the allowance to subdivide the instances purchased for more flexibility.
Spot instances
Cloud providers maintain large amounts of excess capacity they have to sell or risk incurring losses.
Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available at up to 90% discount compared to On-Demand prices. As a trade-off, AWS offers no SLA on these instances, and customers take the risk that instances can be interrupted with only two minutes of notification when Amazon needs the capacity back. Researchers from the Israel Institute of Technology found that "they (Spot instances) are typically generated at random from within a tight price interval via a dynamic hidden reserve price". Some companies, like Spotinst, are using machine learning to predict spot interruptions up to 15 minutes in advance.
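Current and recent spot prices can be queried through the EC2 API. The sketch below uses the AWS SDK for Python (boto3); the region, instance type, and product description are illustrative placeholders, not figures from this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region chosen for illustration

# Retrieve a few recent spot-price observations for one instance type.
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)
for entry in history["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["InstanceType"], entry["SpotPrice"])
```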
Savings Plans
In November 2019, Amazon announced Savings Plans. Savings Plans are an alternative to Reserved Instances that come in two different plan types: Compute Savings Plans and EC2 Instances Savings Plans. Compute Savings Plans allow an organization to commit to EC2 and Fargate usage with the freedom to change region, family, size, availability zone, OS and tenancy inside the lifespan of the commitment. EC2 Instance Savings plans provide a larger discount than Compute Savings Plans but are less flexible meaning a user must commit to individual instance families within a region to take advantage, but with the freedom to change instances within the family in that region.
AWS uses Cost Explorer to automatically calculate recommendations for the commitments you should make and how that commitment will appear as a monthly charge on your AWS bill. AWS Savings Plans are purchased based on an hourly spend commitment. This hourly commitment is made using the discounted pricing of the savings plan being purchased. For example, you could commit to spending $5 per hour, on a Compute Savings Plan, for a 3-year term.
Features
Operating systems
When it launched in August 2006, the EC2 service offered Linux and later Sun Microsystems' OpenSolaris and Solaris Express Community Edition. In October 2008, EC2 added the Windows Server 2003 and Windows Server 2008 operating systems to the list of available operating systems.
In March 2011, NetBSD AMIs became available. In November 2012, Windows Server 2012 support was added.
Since 2006, Colin Percival, a FreeBSD developer and Security Officer, solicited Amazon to add FreeBSD. In November 2012, Amazon officially supported running FreeBSD in EC2. The FreeBSD/EC2 platform is maintained by Percival who also developed the secure deduplicating Amazon S3-cloud based backup service Tarsnap.
Amazon has its own Linux distribution, based on Fedora and Red Hat Enterprise Linux, offered at low cost as the Amazon Linux AMI. Version 2013.03 included the Linux kernel, the Java OpenJDK Runtime Environment, and the GNU Compiler Collection.
On November 30, 2020, Amazon announced that it would be adding macOS to the EC2 service. Initial support was announced for macOS Mojave and macOS Catalina running on Mac Mini.
Managed Container and Kubernetes Services
Amazon Elastic Container Registry (ECR) is a Docker registry service for Amazon EC2 instances to access repositories and images.
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service running on top of EC2, without the need to provision or manage instances.
Persistent storage
An EC2 instance may be launched with a choice of two types of storage for its boot disk or "root device." The first option is a local "instance-store" disk as a root device (originally the only choice). The second option is to use an EBS volume as a root device. Instance-store volumes are temporary storage, which survive rebooting an EC2 instance, but when the instance is stopped or terminated (e.g., by an API call, or due to a failure), this store is lost.
The Amazon Elastic Block Store (EBS) provides raw block devices that can be attached to Amazon EC2 instances. These block devices can then be used like any raw block device. In a typical use case, this would include formatting the device with a filesystem and mounting it. In addition, EBS supports a number of advanced storage features, including snapshotting and cloning. EBS volumes can be up to 16 TB in size. EBS volumes are built on replicated storage, so that the failure of a single component will not cause data loss.
EBS was introduced to the general public by Amazon in August 2008.
EBS volumes provide persistent storage independent of the lifetime of the EC2 instance, and act much like hard drives on a real server. More accurately, they appear as block devices to the operating system that are backed by Amazon's disk arrays. The OS is free to use the device however it wants. In the most common case, a file system is loaded and the volume acts as a hard drive. Another possible use is the creation of RAID arrays by combining two or more EBS volumes. RAID allows increases of speed and/or reliability of EBS. Users can set up and manage storage volumes of sizes from 1 GB to 16 TB. The volumes support snapshots, which can be taken from a GUI tool or the API. EBS volumes can be attached or detached from instances while they are running, and moved from one instance to another.
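As a rough illustration of how a volume is created and attached programmatically, here is a minimal sketch using the AWS SDK for Python (boto3); the region, availability zone, size, volume type, instance ID, and device name are placeholder assumptions, not values from this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 100 GiB volume in the same availability zone as the target instance.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to a running instance; inside the OS it then appears as a block device.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```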
Simple Storage Service (S3) is a storage system in which data is accessible to EC2 instances, or directly over the network to suitably authenticated callers (all communication is over HTTP). Amazon does not charge for the bandwidth for communications between EC2 instances and S3 storage "in the same region." Accessing S3 data stored in a different region (for example, data stored in Europe from a US East Coast EC2 instance) will be billed at Amazon's normal rates.
S3-based storage is priced per gigabyte per month. Applications access S3 through an API. For example, Apache Hadoop supports a special s3: filesystem to support reading from and writing to S3 storage during a MapReduce job. There are also S3 filesystems for Linux, which mount a remote S3 filestore on an EC2 image, as if it were local storage. As S3 is not a full POSIX filesystem, things may not behave the same as on a local disk (e.g., no locking support).
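A minimal sketch of S3 API access from an instance (or any suitably authenticated caller), using the AWS SDK for Python (boto3); the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the instance role or environment

# Write an object, then read it back; transfers between an EC2 instance and S3
# in the same region incur no bandwidth charge.
s3.put_object(Bucket="example-bucket", Key="reports/output.txt", Body=b"hello world")
response = s3.get_object(Bucket="example-bucket", Key="reports/output.txt")
data = response["Body"].read()
print(data)
```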
Elastic IP addresses
Amazon's elastic IP address feature is similar to a static IP address in traditional data centers, with one key difference: a user can programmatically map an elastic IP address to any virtual machine instance without a network administrator's help and without having to wait for DNS to propagate the binding. In this sense an elastic IP address belongs to the account rather than to a virtual machine instance. It exists until it is explicitly removed, and remains associated with the account even while it is associated with no instance.
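A minimal sketch of this programmatic mapping, using the AWS SDK for Python (boto3); the region and instance ID are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a new elastic IP address within the account.
allocation = ec2.allocate_address(Domain="vpc")

# Bind it to a running instance; the mapping takes effect without any DNS change.
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
)

# Later, the address can be remapped to another instance or released entirely.
```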
Amazon CloudWatch
Amazon CloudWatch is a web service that provides real-time monitoring to Amazon's EC2 customers on resource utilization such as CPU, disk, network, and replica lag for RDS database replicas. CloudWatch does not provide any memory, disk space, or load average metrics without running additional software on the instance. Since December 2017, Amazon has provided a CloudWatch Agent for Windows and Linux operating systems that collects disk and memory information, which was previously unavailable; before that, Amazon provided example scripts for Linux instances to collect OS information. The data is aggregated and provided through the AWS management console. It can also be accessed through command-line tools and web APIs if the customer wishes to monitor EC2 resources through enterprise monitoring software. Amazon provides an API which allows clients to operate on CloudWatch alarms.
The metrics collected by Amazon CloudWatch enable the auto-scaling feature to dynamically add or remove EC2 instances. Customers are charged by the number of monitored instances.
Since May 2011, Amazon CloudWatch has accepted custom metrics that can be submitted programmatically via the web services API and then monitored in the same way as all other internal metrics, including setting up alarms for them; since July 2014 the CloudWatch Logs service has also been available.
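A minimal sketch of submitting a custom metric and alarming on it, using the AWS SDK for Python (boto3); the region, namespace, metric name, and threshold are illustrative assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one data point for a custom metric in an application-defined namespace.
cloudwatch.put_metric_data(
    Namespace="MyApplication",
    MetricData=[{"MetricName": "QueueDepth", "Value": 42.0, "Unit": "Count"}],
)

# Create an alarm on the custom metric, just as for built-in EC2 metrics.
cloudwatch.put_metric_alarm(
    AlarmName="QueueDepthHigh",
    Namespace="MyApplication",
    MetricName="QueueDepth",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
)
```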
Basic Amazon CloudWatch is included in Amazon Free Tier service.
Automated scaling
Amazon's auto-scaling feature for EC2 automatically adapts computing capacity to site traffic. The schedule-based (e.g. time-of-day) and rule-based (e.g. CPU utilization thresholds) auto-scaling mechanisms are easy to use and efficient for simple applications. However, one potential problem is that VMs may take up to several minutes to be ready to use, which is not suitable for time-critical applications. The VM startup time depends on image size, VM type, data center location, and other factors. The convenience of EC2 lies in being able to increase capacity dynamically in accordance with demand and to access resources rapidly.
Pricing
Note: the examples and figures in this section date from 2018 at the latest; pricing has changed considerably since then.
On Demand EC2 instances are priced per hour. An example of this pricing would be $0.096 per hour for a Linux, m5.large, EC2 instance in the us-east-1 region. Pricing will vary based on the instance type, region, and operating system of the instance. Public on-demand pricing for EC2 can be found on the AWS website.
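As a rough worked example of the hourly pricing quoted above (assuming the instance runs continuously for a 30-day month and ignoring data-transfer and storage charges):

```python
# Approximate monthly cost of the example on-demand instance above
# (Linux, m5.large, us-east-1), assuming it runs around the clock.
hourly_rate = 0.096                # USD per hour, figure quoted in the text
hours_per_month = 24 * 30          # a 30-day month
print(f"${hourly_rate * hours_per_month:.2f} per month")  # -> $69.12 per month
```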
The other pricing models for EC2 have different pricing models.
Spot instances also have a cost per instance hour, but the cost will change on a regular basis based on the supply of EC2 spot capacity.
Reserved Instances and Compute Savings plans are priced per hour. Each of these reservation tools has its own price per hour based on the payment option, term and reservation product being used. These prices are locked in for either a 1-year or 3-year term.
Amazon EC2 prices range from $2.50 per month for a "nano" instance with 1 vCPU and 0.5 GB of RAM to about $3,997.19 per month for "xlarge" instances with 32 vCPUs and 488 GB of RAM.
Amazon EC2 pricing has been compared with that of similar cloud-computing services such as Microsoft Azure, Google Cloud Platform, Kamatera, and Vultr.
Reliability
To make EC2 more fault-tolerant, Amazon engineered Availability Zones that are designed to be insulated from failures in other availability zones. Availability zones do not share the same infrastructure. Applications running in more than one availability zone can achieve higher availability.
EC2 provides users with control over the geographical location of instances that allows for latency optimization and high levels of redundancy. For example, to minimize downtime, a user can set up server instances in multiple zones that are insulated from each other for most causes of failure such that one backs up the other.
Higher-availability database services, like Amazon Relational Database Service, run separately from EC2 instances.
Issues
In early July 2008, the anti-spam organizations Outblaze and Spamhaus.org began blocking Amazon's EC2 address pool due to problems with the distribution of spam and malware.
On December 1, 2010, Amazon pulled its service to WikiLeaks after coming under political pressure in the US. Assange said that WikiLeaks chose Amazon knowing it would probably be kicked off the service "in order to separate rhetoric from reality". The Internet group Anonymous attempted to attack EC2 in revenge; however, Amazon was not affected by the attack.
Amazon's websites were temporarily offline on December 12, 2010, although it was initially unclear if this was due to attacks or a hardware failure. An Amazon official later stated that it was due to a hardware failure.
Shortly before 5 am ET on April 21, 2011, an outage started at EC2's Northern Virginia data center that brought down several websites, including Foursquare, Springpad, Reddit, Quora, and Hootsuite. Specifically, attempts to use Amazon's elastic-disk and database services hung, failed, or were slow. Service was restored to some parts of the data center (three of four "availability zones" in Amazon's terms) by late afternoon Eastern time that day; problems for at least some customers were continuing as of April 25. About 0.07% of EBS volumes in one zone were also lost; EBS failures were a part of normal operation even before this outage and were a risk documented by Amazon, though the number of failures, and of simultaneous failures, may have caught some EC2 users unprepared.
On Sunday August 6, 2011, Amazon suffered a power outage in one of their Ireland availability zones. Lightning was originally blamed for the outage; however, on August 11, Irish energy supplier ESB Networks dismissed this as a cause, but at time of writing, could not confirm what the cause of the problem was. The power outage raised multiple questions regarding Amazon's EBS infrastructure, which caused several bugs in their software to be exposed. The bugs resulted in some customers' data being deleted when recovering EBS volumes in a mid-write operation during the crash.
August 8, 2011, saw another network connectivity outage of Amazon's Northern Virginia data center, knocking out the likes of Reddit, Quora, Netflix and FourSquare. The outage lasted around 25 minutes.
Another Northern Virginia data center outage occurred on October 22, 2012, from approximately 10 am to 4 pm PT. Edmodo, Airbnb, Flipboard, Reddit, and other customers were affected. Anonymous claimed responsibility, but Amazon denied this assertion.
See also
Amazon Virtual Private Cloud
Alibaba Cloud
AppScale
Bitnami
CopperEgg
ElasticHosts
Eucalyptus (software)
FlexiScale
FUJITSU Cloud IaaS Trusted Public S5
GoGrid
Google App Engine
Google Cloud Platform
GreenQloud
Internap
Linode
Lunacloud
Microsoft Azure
Nimbula
OpenShift
Oracle Cloud
OrionVM
OVHcloud
Rackspace Cloud
RightScale
Savvis
TurnKey Linux Virtual Appliance Library
Zadara
Notes
References
External links
Amazon Web Services
Cloud computing
Cloud computing providers
Cloud infrastructure
Cloud platforms
Web services | Amazon Elastic Compute Cloud | Technology | 4,107 |
11,903,542 | https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20R%C3%A9dei | László Rédei (15 November 1900 – 21 November 1980) was a Hungarian mathematician.
Rédei graduated from the University of Budapest and initially worked as a schoolteacher. In 1940 he was appointed professor in the University of Szeged and in 1967 moved to the Mathematical Institute of the Hungarian Academy of Sciences in Budapest.
His mathematical work was in algebraic number theory and abstract algebra, especially group theory. He proved that every finite tournament contains an odd number of Hamiltonian paths. He gave several proofs of the theorem on quadratic reciprocity. He proved important results concerning the invariants of the class groups of quadratic number fields. In several cases, he determined whether the ring of integers of the real quadratic field Q() is Euclidean or not. He successfully generalized Hajós's theorem. This led him to investigate lacunary polynomials over finite fields, which he eventually published in a book. This work on lacunary polynomials has had a major influence on the field of finite geometry, where it plays an important role in the theory of blocking sets. He introduced a very general notion of skew product of groups, of which both the Schreier extension and the Zappa–Szép product are special cases. He explicitly determined the finite noncommutative groups all of whose proper subgroups are commutative (1947). This is one of the very early results which eventually led to the classification of all finite simple groups.
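Rédei's tournament theorem is easy to verify computationally for small cases. The brute-force sketch below (in Python, written for this article rather than taken from any source) enumerates every tournament on four vertices and checks that each contains an odd number of directed Hamiltonian paths.

```python
from itertools import combinations, permutations, product

def hamiltonian_path_count(n, winner):
    """Count directed Hamiltonian paths in a tournament given by a winner map."""
    def beats(a, b):
        i, j = (a, b) if a < b else (b, a)
        return winner[(i, j)] == a
    return sum(
        all(beats(p[k], p[k + 1]) for k in range(n - 1))
        for p in permutations(range(n))
    )

n = 4
pairs = list(combinations(range(n), 2))
# Choosing a winner for each pair of vertices yields one tournament on n vertices.
for choice in product(*pairs):
    winner = dict(zip(pairs, choice))
    assert hamiltonian_path_count(n, winner) % 2 == 1  # Rédei: always odd
print(f"All {2 ** len(pairs)} tournaments on {n} vertices verified.")
```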
Rédei was the president of the János Bolyai Mathematical Society (1947–1949). He was awarded the Kossuth Prize twice. He was elected corresponding member (1949), full member (1955) of the Hungarian Academy of Sciences.
Books
1959: Algebra. Erster Teil, Mathematik und ihre Anwendungen in Physik und Technik, Reihe A, 26, Teil 1 Akademische Verlagsgesellschaft, Geest & Portig, K.-G., Leipzig, xv+797 pp.
1967: English translation, Algebra, volume 1, Pergamon Press
1963: Theorie der endlich erzeugbaren kommutativen Halbgruppen, Hamburger Mathematische Einzelschriften, 41, Physica-Verlag, Würzburg 228 pp.
1968: Foundation of Euclidean and non-Euclidean geometries according to F. Klein, Pergamon Press, 404 pp.
1970: Lückenhafte Polynome über endlichen Körpern, Lehrbücher und Monographien aus dem Gebiete der exakten Wissenschaften, Mathematische Reihe, 42, Birkhäuser Verlag, Basel-Stuttgart, 271 pp.
1973: English translation: I. Földes: Lacunary Polynomials over Finite Fields North--Holland, London and Amsterdam, American Elsevier, New York, (Europe) (US)
1989: Endliche p-Gruppen, Akadémiai Kiadó, Budapest, 304 pp.
References
1981: László Rédei, Acta Scientiarum Mathematicarum, 43: 1–2
L. Márki (1985) "A tribute to L. Rédei", Semigroup Forum, 32, 1–21.
External links
1900 births
1980 deaths
Academic staff of the University of Szeged
Members of the Hungarian Academy of Sciences
20th-century Hungarian mathematicians
Number theorists
Algebraists
Mathematicians from Austria-Hungary | László Rédei | Mathematics | 709 |
37,329,194 | https://en.wikipedia.org/wiki/Stenella%20anthuriicola | Stenella anthuriicola is a species of anamorph fungus in the family Mycosphaerellaceae. It grows on the leaves of Anthurium plants in Thailand.
References
External links
anthuriicola
Fungi described in 2006
Fungi of Asia
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Fungus species | Stenella anthuriicola | Biology | 69 |
15,022,475 | https://en.wikipedia.org/wiki/Brick%20Renaissance | Brick Renaissance is the Northern European continuation of brick architecture after Brick Romanesque and Brick Gothic. Although the term Brick Gothic is often used generally for all of this architecture, especially in regard to the Hanseatic cities of the Baltic, the stylistic changes that led to the end of Gothic architecture did reach Northern Germany and northern Europe with delay, leading to the adoption of Renaissance elements into brick building. Nonetheless, it is very difficult for non-experts to distinguish transitional phases or early Brick Renaissance, as the style maintained many typical features of Brick Gothic, such as stepped gables. A clearer distinction only developed at the transition to Baroque architecture. In Lübeck, for example, Brick Renaissance is clearly recognisable in buildings equipped with terracotta reliefs by the artist Statius von Düren, who was also active at Schwerin (Schwerin Castle) and Wismar (Fürstenhof).
More clearly recognisable as Renaissance are brick buildings strongly influenced by the Dutch Renaissance style, such as Reinbek Castle at Reinbek near Hamburg, the Zeughaus at Lübeck, or Friedrichstadt in Schleswig-Holstein.
Belarus
Denmark
Germany
Italy
Lithuania
Poland
Sweden
References
External links
Brick
Renaissance Brick | Brick Renaissance | Engineering | 246 |
1,580,554 | https://en.wikipedia.org/wiki/Tiltmeter | A tiltmeter is a sensitive inclinometer designed to measure very small changes from the vertical level, either on the ground or in structures. Tiltmeters are used extensively for monitoring volcanoes, the response of dams to filling, the small movements of potential landslides, the orientation and volume of hydraulic fractures, and the response of structures to various influences such as loading and foundation settlement. Tiltmeters may be purely mechanical or incorporate vibrating-wire or electrolytic sensors for electronic measurement. A sensitive instrument can detect changes of as little as one arc second.
Tiltmeters have a long, diverse history, somewhat parallel to the history of the seismometer. The very first tiltmeter was a long-length stationary pendulum. These were used in the very first large concrete dams, and are still in use today, augmented with newer technology such as laser reflectors. Although they had been used for other applications such as volcano monitoring, they have distinct disadvantages, such as their huge length and sensitivity to air currents. Even in dams, they are slowly being replaced by the modern electronic tiltmeter.
Volcano and Earth movement monitoring then used the water-tube, long baseline tiltmeter. In 1919, the physicist, Albert A. Michelson, noted that the most favorable arrangement to obtain high sensitivity and immunity from temperature perturbations is to use the equipotential surface defined by water in a buried half-filled water pipe. This was a simple arrangement of two water pots, connected by a long water-filled tube. Any change in tilt would be registered by a difference in fill-mark of one pot compared to the other. Although extensively used throughout the world for Earth-science research, they have proven to be quite difficult to operate. For example, due to their high sensitivity to temperature differentials, these always have to be read in the middle of the night.
The modern electronic tiltmeter, which is slowly replacing all other forms of tiltmeter, uses a simple bubble-level principle, as used in the common carpenter's level. An arrangement of electrodes senses the exact position of the bubble in the electrolytic solution to a high degree of precision. Any small changes in the level are recorded using a standard datalogger. This arrangement is quite insensitive to temperature and can be fully compensated using built-in thermal electronics.
A newer technology using microelectromechanical systems (MEMS) sensors allows tilt angles to be measured conveniently in both single- and dual-axis modes. Ultra-high-precision two-axis MEMS digital inclinometers/tiltmeters are available for rapid angle measurement and surface profiling requiring very high resolution and an accuracy of one arc second. Two-axis MEMS inclinometers/tiltmeters can be digitally compensated and precisely calibrated for non-linearity and operating-temperature variation, giving higher angular accuracy and stability over a wider measurement range and a broader operating temperature range. Further, a digital display of readings avoids the parallax error experienced when viewing traditional bubble vials at a distance.
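As an illustration of the underlying principle (a generic sketch, not a description of any particular commercial instrument), a MEMS accelerometer at rest measures only gravity, and the tilt angles follow from the ratio of its axis readings:

```python
import math

def tilt_angles_deg(ax, ay, az):
    """Pitch and roll, in degrees, from static accelerometer readings (in g)."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# A perfectly level sensor reads (0, 0, 1).  A tilt of one arc second about one
# axis shifts the in-plane reading by roughly sin(1") = 4.8e-6 g, which sets the
# resolution the sensing electronics must achieve.
one_arcsec = math.radians(1.0 / 3600.0)
print(tilt_angles_deg(math.sin(one_arcsec), 0.0, math.cos(one_arcsec)))
```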
The most dramatic application of tiltmeters is in the area of volcanic eruption prediction. USGS measurements at the main volcano in Hawaii (Kilauea) show a recurring pattern in which the main chamber fills with magma and then discharges to a side vent. Tiltmeter records capture the swelling of the main chamber, the draining of that chamber, and the eruption of the adjoining vent, with each peak in tilt corresponding to a recorded eruption.
See also
Dam safety system
Differential GPS
Geomechanic
Inclinometer
Remote sensing methods
Rock mechanics
Tilt test (geotechnical engineering)
References
Inclinometers
Seismology instruments
Volcanology
Geological tools | Tiltmeter | Technology,Engineering | 785 |
38,482,305 | https://en.wikipedia.org/wiki/KarTrak | KarTrak, sometimes KarTrak ACI (Automatic Car Identification) or just ACI was a colored barcode system designed to automatically identify railcars and other rolling stock. KarTrak was made a requirement in North America in 1967, but technical problems led to the abandonment of the system by around 1977.
History
Issue and early development
Railroads have long struggled to track railroad cars across their vast networks, a problem that worsened as systems grew and cars moved from network to network via interchange. A railroad's car could end up a thousand miles away on another company's tracks. This did not even account for the ever-growing fleet of privately owned railroad cars from companies such as TrailerTrain and Union Tank Car Company, which owned massive fleets of railroad cars but were not themselves railroads. A missing car took time to track down, often requiring workers to walk rail yards looking at cars until it was located.
In 1959, David Jarrett Collins approached his employer, GTE Sylvania, about using a newly developed computer system in conjunction with scanners to track railroad cars. The idea was inspired by Collins' summers in college, when he worked for the Pennsylvania Railroad. During the early 1960s, Sylvania's Applied Research Lab team met with representatives of various railroads to gain insight into their needs and wants for a car-tracking system.
Features and design aspects desired by the railroads included:
Low label cost, approximately $1 per label
Ability to scan labels at 0 -
A label life span of 7 years
Scanners capable of scanning at around , to enable scanning of labels on railcars, shipping containers and piggyback highway trailers
Scanners capable of operating in isolated locations, and resistant to gunfire.
KarTrak's development testing occurred in 1961 on the Boston & Maine Railroad, using passenger trains and a gravel train that did not leave the Boston & Maine railroad network. Using trains that were always confined to Boston & Maine enabled easy testing, refinement and demonstration the KarTrak system, as cars fitted with the system were always around and their movements known.
Sylvania moved early on to sell KarTrak to smaller, 'captive' railroad systems. Captive railroads, such as those used to supply coal to a power station on an isolated system, were a prime environment: because no cars entered or left the railroad, problems caused by unequipped cars could not arise, and all cars were owned by the railroad in question and could therefore be fitted with labels. In three years, 50,000 railroad cars were equipped with KarTrak labels. This served a dual purpose, allowing Sylvania to generate money to invest in further development of the system while also denying a foothold to competing car-tracking systems.
KarTrak was also advertised to railroads in publications such as Fortune and The Wall Street Journal, in large, full-page ads promoting its monetary and efficiency benefits.
By the mid to late 1960s, railroads in North America began searching for a system that would allow them to automatically identify railcars and other rolling stock. Through the efforts of the Association of American Railroads (AAR), a number of companies developed automatic equipment identification (AEI) systems. The AAR selected four systems for extensive field tests:
General Electric - a RFID system
ABEX - a microwave system
Wabco - a black-and-white barcode system
GTE Sylvania - KarTrak, a color barcode system
All those systems, except the RFID system, had labels that were mounted on each side of the railcar, and a trackside scanner.
Following disagreements with Sylvania regarding the future potential of KarTrak, Collins departed in 1968 to form his own company to continue research and development into scanners and barcodes.
Implementation
After the initial field tests, the ABEX, Wabco, and GTE KarTrak ACI systems were selected for a head-to-head accuracy test on the Pennsylvania Railroad, at Spruce Creek, Pennsylvania. The KarTrak system was declared the winner and selected by the AAR as the standard.
Starting in 1967, all railcar owners were required by the AAR to install ACI labels on their cars. By 1970, roughly 86% of the 2 million railroad freight cars were carrying an ACI plate, with some railroads having completed labeling of their freight cars. Twelve railroads had completed installation of approximately 50 ACI trackside scanners.
In 1972, GTE Sylvania decided to exit the railcar tracking field, and sold KarTrak to Servo Corporation of America.
By 1975, 90% of all railcars were labeled. The read rate was about 80%, which means that after seven years of service, 10% of the labels had failed for reasons such as physical damage and dirt accumulation. The dirt accumulation was most evident on flatcars that had low-mounted labels.
Demise
The AAR had recognized from their field tests that periodic inspection and label maintenance would be requirements to maintain a high level of label readability. Regulations were instituted for label inspection and repair whenever a railcar was in the repair shop, which on average happened every two years. The maintenance program never gained sufficient compliance. Without maintenance, the read rate failed to improve, and the KarTrak system was abandoned by 1977.
Even toward the end of KarTrak's life and after its demise, development of improvements based on the system continued. Three patents based on the KarTrak technology were issued in 1976, 1977, and 1982: one for a variable label that could signal a problem with a car, such as a refrigerator car that was too warm; one for a self-cleaning ACI label; and one for a three-dimensional 'optical target', another attempt to eliminate the known issue with dirty labels.
In November 1977, the Association of American Railroads released a short white paper that flagged several problems with KarTrak: frequent inaccuracies in data, ACI labels reaching the end of their life span and requiring replacement, and a lack of universal adoption within the railroad industry. A weighted ballot of all interchange railroads, weighted by railcar ownership, was to be conducted to determine whether the ACI requirements should be eliminated. The result was an overwhelming decision, by a 5-to-1 margin, to eliminate the requirement to install ACI labels. Although the white paper claimed that abandoning ACI "would not mean the railroad industry was taking a step backward in car utilization, or operating efficiency or in the adoption of modern technology," the railroad industry did not seriously search for another system to identify railcars until the mid-1980s.
Design
Tags and label design
KarTrak ACI tags consisted of a plate carrying 13 horizontal labels in a vertical arrangement, also referred to as data lines, each of which could take one of 13 forms. These labels, or symbols, represent the digits 0–9, the number 10 (used only for the checksum line), and the "START" and "STOP" labels that provide a reference for the vertical position of the tag. Present-day depictions of the labels often name the upper color first and the lower color second.
In practice, a significant number of label sets were produced incorrectly or applied with errors, such as being rotated 180°; as a rule of thumb, the blue stripes of START and STOP should point toward the left, oriented toward the middle of the tag. The color choice and ordering of the STOP label in particular was a frequent source of such errors, causing decoding failures and requiring decoder workarounds in the field that effectively weakened the system. Even some early advertising materials exhibited these flaws. Checksum labels were also sometimes wrong, and the label set itself showed some variation in the imprinted numbers.
(Table legend: "–" = not used / reserved; "white" = white/black checker pattern, i.e. checkered.)
Each label, or data line, had two horizontal stripes that together represented a single symbol of information. The stripes used the colors blue, white, red, and black, giving 16 possible combinations, of which only 12 were used in the central area, black never being used as the lower color. For sensing reasons, the white areas were dimmed with a black checkerboard pattern so that their brightness roughly matched that of the red and blue stripes, which were sensed through color filters.
The labels each are wide and high. With a vertical gap between the labels realized a total height of . Labels could be affixed directly to the car side, but usually were applied to dark plates, which were then riveted to each side of the car.
The labels were made from retroreflective plastic sheet that was coated with red or blue dye to provide distinguishable color filters. The retroreflective material gave a clear optical signal that could be read from a distance and easily distinguished from the other markings on the railcar. The white areas provided both a red and blue optical response to the reader, and were patterned with dots so that their brightness would be about the same as a red or blue stripe.
The start and stop labels were partially filled, so that the reader scanning beam would be centered on them before they were recognized. This ensured that the entire label was centered and had the best chance of being read accurately.
Data contained in Label Lines
The labels are to be read from bottom to top:
Line 13: check digit.
Line 12: stop label.
Lines 6 to 11: car number.
Lines 2 to 5: equipment/owner code.
Line 2: equipment code
Lines 3 to 5: ownership code
Line 1: start label.
The first digit of the equipment owner (line 2) marks the type of equipment: 0 for railroad-owned, 1 for privately-owned, or 6 for non-revenue equipment.
The car number is left-padded with zeroes if necessary. For locomotives, line 6 is the type of unit and line 7 the suffix number.
The check digit is calculated as follows: Each number digit is multiplied by two to the power of the label's position minus two. Thus, the first digit (line 2) is multiplied by 1, the second by 2, the third by 4, the fourth by 8 and so on, until the 10th, which is multiplied by 512. The sum of all these numbers modulo 11 is the check digit.
The code on the caboose in the picture at top can be decoded as Start 8350199918 Stop 5. This means a car with equipment code number 8, ownership code 350, which lists this as a car of the Illinois Central Railroad, car number 199918, with a check digit of 5.
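The check-digit rule can be restated as a short calculation. The sketch below (written for this article, not taken from a source) reproduces the check digit of the Illinois Central caboose decoded above.

```python
def kartrak_check_digit(code_digits):
    """code_digits: the ten digits read from label lines 2-11 (bottom to top):
    equipment code, ownership code, then the zero-padded car number."""
    total = sum(int(d) * (2 ** i) for i, d in enumerate(code_digits))
    return total % 11  # a result of 10 uses the dedicated "10" checksum label

# Equipment code 8, ownership code 350, car number 199918 -> check digit 5
assert kartrak_check_digit("8350199918") == 5
```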
Label placement
Labels were placed on both sides of all railroad equipment, including locomotives, passenger cars, and cabooses. Labels were required to be unobstructed, with nothing such as ladders, railings, or grab irons between them and the scanner. For the curved surfaces of tank cars, an oversized ACI label known as an 'extended-range panel' was available; the retroreflective stripes on these panels were taller than standard stripes.
Trackside scanners
The readers were optical scanners, somewhat like the barcode scanners used for retail store barcode items today. The scanning distances and speeds meant that the processing electronics needed to be state-of-the-art for its day. They were placed along the rail lines, often at the entrance and exit of a switchyard and at major junctions, spaced back from the tracks so that the labels would pass in the reading zone, from the scanner and with the scanner aperture at above the railhead.
The scanners were housed in metal boxes typically about the size of a mini-refrigerator, . They consisted of a collimated 100-200 watt xenon arc light source arranged co-axially with red and blue sensing photo tubes. The coaxial optical arrangement provided optimum sensing of the retroreflective labels. This optical source and sensing beam was directed to a large () mirrored rotating wheel that provided the vertical scanning of the railcar. The movement of the train provided the horizontal scanning. Although the system could capture labels at , often the speeds were much lower.
The scanner's analog video signals were passed to a nearby rail equipment hut where the processing and computing electronics were located. The first systems were discrete circuits and logic and only provided an ASCII-coded list of the labels that passed the scanner. These were forwarded to the rail operators for manual tracking or integration with their computer systems. Later reading systems were coupled with era minicomputers (Digital Equipment Corporation PDP-8s), and more elaborate tracking and weighing systems were integrated. Sometimes these included many railyard input sensors, for rail switch position, car passage, and hot wheel bearing sensors. Some of the more productive and thus longer-lived systems were installed in captive rail applications that carried bulk goods from mines to smelter, where the weight of individual cars loaded and unloaded tracked the bulk inventory.
Legacy
The KarTrak system proved to need too much maintenance to be practical. Up to 20% of the cars were not read correctly. Further, ACI did not have any centralized system or network, even within railroad companies. The information collected from wayside scanners was printed out with little means of searching for information beyond going through piles of paperwork. Clerical personnel became frustrated by the increasing error rate. These issues led the AAR to abandon the system and discontinue the requirement for rail cars to carry KarTrak labels. Between 1967 and 1977, the railroad industry spent $150 million on KarTrak, and up to 95% of cars were barcoded.
Railroad cars that were in service prior to 1977 continued to carry KarTrak labels, which were still observed on freight cars into the 2000s. These labels have vanished over time due to a combination of repainting, major overhauls, and the retirement of cars, particularly under AAR Rules 88 and 90, which restrict cars built before July 1, 1974 to a 40-year service life, which ran out for most cars in the mid-2010s. Cars built on and after July 1, 1974 are subject to a 50-year life, with mandatory retirements starting in 2024.
Versions of KarTrak technology were trialed in other fields. In the late 1960s, the New Jersey Turnpike explored the system as a way of billing vehicles using the toll road, as well as identifying the vehicle. A computer would calculate the toll due and a bill would be sent to the driver. Like the original version of KarTrak, vehicles would be fitted with a label approximately that would be scanned by a camera at toll booths.
In 1984, Computer Identics Corporation, Collins' company following his departure from GTE Sylvania, would sue Southern Pacific Transportation, along with three other companies, alleging they'd acted in a conspiracy to intentionally undermine KarTrak, in favor of a system Southern Pacific had been working on called TOPS. The lawsuit was ultimately unsuccessful, with the jury having found there was no evidence of a conspiracy, which was then upheld on appeal.
Notes
References
Information systems
Rail technologies | KarTrak | Technology | 3,125 |
319,122 | https://en.wikipedia.org/wiki/Pinwheel%20Galaxy | The Pinwheel Galaxy (also known as Messier 101, M101 or NGC 5457) is a face-on, unbarred, and counterclockwise spiral galaxy located from Earth in the constellation Ursa Major. It was discovered by Pierre Méchain in 1781 and was communicated that year to Charles Messier, who verified its position for inclusion in the Messier Catalogue as one of its final entries.
On February 28, 2006, NASA and the European Space Agency released a very detailed image of the Pinwheel Galaxy, which was the largest and most detailed image of a galaxy by Hubble Space Telescope at the time. The image was composed of 51 individual exposures, plus some extra ground-based photos.
Discovery
Pierre Méchain, the discoverer of the galaxy, described it as a "nebula without star, very obscure and pretty large, 6' to 7' in diameter, between the left hand of Bootes and the tail of the great Bear. It is difficult to distinguish when one lits the [grating] wires."
William Herschel wrote in 1784 that the galaxy was one of several which "...in my 7-, 10-, and 20-feet [focal length] reflectors shewed a mottled kind of nebulosity, which I shall call resolvable; so that I expect my present telescope will, perhaps, render the stars visible of which I suppose them to be composed."
Lord Rosse observed the galaxy in his 72-inch-diameter Newtonian reflector during the second half of the 19th century. He was the first to make extensive note of the spiral structure and made several sketches.
Though the galaxy can be detected with binoculars or a small telescope, to observe the spiral structure in a telescope without a camera requires a fairly large instrument, very dark skies, and a low-power eyepiece.
Structure and composition
M101 is a large galaxy, with a diameter of 170,000 light-years. By comparison, the Milky Way has a diameter of 87,400 light-years. It has around a trillion stars. It has a disk mass on the order of 100 billion solar masses, along with a small central bulge of about 3 billion solar masses. Its characteristics can be compared to those of Andromeda Galaxy.
M101 has a high population of H II regions, many of which are very large and bright. H II regions usually accompany the enormous clouds of high density molecular hydrogen gas contracting under their own gravitational force where stars form. H II regions are ionized by large numbers of extremely bright and hot young stars; those in M101 are capable of creating hot superbubbles. In a 1990 study, 1,264 H II regions were cataloged in the galaxy. Three are prominent enough to receive New General Catalogue numbers—NGC 5461, NGC 5462, and NGC 5471.
M101 is asymmetrical due to the tidal forces from interactions with its companion galaxies. These gravitational interactions compress interstellar hydrogen gas, which then triggers strong star formation activity in M101's spiral arms that can be detected in ultraviolet images.
In 2001, the X-ray source P98, located in M101, was identified as an ultra-luminous X-ray source—a source more powerful than any single star but less powerful than a whole galaxy—using the Chandra X-ray Observatory. It received the designation M101 ULX-1. In 2005, Hubble and XMM-Newton observations showed the presence of an optical counterpart, strongly indicating that M101 ULX-1 is an X-ray binary. Further observations showed that the system deviated from expected models—the black hole is just 20 to 30 solar masses, and consumes material (including captured stellar wind) at a higher rate than theory suggests.
It is estimated that M101 has about 150 globular clusters, the same as the number of the Milky Way's globular clusters.
Companion galaxies
M101 has six prominent companion galaxies: NGC 5204, NGC 5474, NGC 5477, NGC 5585, UGC 8837 and UGC 9405. As stated above, the gravitational interaction between it and its satellites may have spawned its grand design pattern. The galaxy has probably distorted the second-listed companion. The list comprises most or all of the M101 Group.
Supernovae and luminous red nova
Six internal supernovae have been recorded:
SN 1909A was discovered by Max Wolf in January 1909 and reached magnitude 12.1.
SN 1951H was discovered by Milton Humason on 1 September 1951 and reached magnitude 17.5.
SN 1970G (typeII, mag. 11.5) was discovered by Miklós Lovas on 30 July 1970.
On August 24, 2011, a Type Ia supernova, SN 2011fe, initially designated PTF 11kly, was discovered in M101. It had visual magnitude 17.2 at discovery and reached 9.9 at its peak.
On February 10, 2015, a luminous red nova, known as M101 OT2015-1 was discovered in the Pinwheel Galaxy.
On May 19, 2023, SN 2023ixf was discovered in M101, and immediately classified as a Type II supernova.
See also
List of Messier objects
– a similar face-on spiral galaxy
– a similar face-on spiral galaxy that is sometimes called the Southern Pinwheel Galaxy
– a similar face-on spiral galaxy
– another galaxy sometimes called the Pinwheel Galaxy
References
External links
SEDS: Spiral Galaxy M101
Intermediate spiral galaxies
M101 Group
Ursa Major
Messier objects
NGC objects
Astronomical objects discovered in 1781
Discoveries by Pierre Méchain | Pinwheel Galaxy | Astronomy | 1,172 |
31,727,214 | https://en.wikipedia.org/wiki/Radical%20disproportionation | Radical disproportionation encompasses a group of reactions in organic chemistry in which two radicals react to form two different non-radical products. Radicals in chemistry are defined as reactive atoms or molecules that contain an unpaired electron or electrons in an open shell. The unpaired electrons can cause radicals to be unstable and reactive. Reactions in radical chemistry can generate both radical and non-radical products. Radical disproportionation reactions can occur with many radicals in solution and in the gas phase. Due to the reactive nature of radical molecules, disproportionation proceeds rapidly and requires little to no activation energy. The most thoroughly studied radical disproportionation reactions have been conducted with alkyl radicals, but there are many organic molecules that can exhibit more complex, multi-step disproportionation reactions.
Mechanism of radical disproportionation
In radical disproportionation reactions one molecule acts as an acceptor while the other molecule acts as a donor. In the most common disproportionation reactions, a hydrogen atom is taken, or abstracted by the acceptor as the donor molecule undergoes an elimination reaction to form a double bond. Other atoms such as halogens may also be abstracted during a disproportionation reaction. Abstraction occurs as a head to tail reaction with the atom that is being abstracted facing the radical atom on the other molecule.
Disproportionation and steric effects
Radical disproportionation is often thought of as occurring in a linear fashion with the donor radical, the acceptor radical, and the atom being accepted all along the same axis. In fact, most disproportionation reactions do not require linear orientations in space. Molecules that are more sterically hindered require arrangements that are more linear, and thus react more slowly. Steric effects play a significant role in disproportionation, with ethyl radicals acting as more effective acceptors than tert-butyl radicals. Tert-butyl radicals have many hydrogens on adjacent carbons to donate, but steric effects often prevent them from getting close enough to abstract hydrogens.
Alkyl radical disproportionation
Alkyl radical disproportionation has been studied extensively in the scientific literature. During alkyl radical disproportionation, an alkane and an alkene are the end products, and the bond order of the products increases by one over the reactants. Thus the reaction is exothermic and proceeds rapidly.
\underset{\text{alkene and alkane formation}}{2\,\mathrm{CH_3{-}\dot{C}H_2} \;\longrightarrow\; \mathrm{H_2C{=}CH_2} + \mathrm{H_3C{-}CH_3}}
Cross disproportionation of alkyl radicals
Cross disproportionation occurs when two different alkyl radicals disproportionate to form two new products. Different products can be formed depending on which alkyl radical acts as a donor and which acts as an acceptor. The efficiency of primary and secondary alkyl radicals as donors depends on the steric effects and configuration of the radical acceptors.
Competition with recombination
Another reaction that can sometimes occur instead of disproportionation is recombination. During recombination, two radicals form one new non-radical product and one new bond. Similar to disproportionation, the recombination reaction is exothermic and requires little to no activation energy. The ratio of the rates of disproportionation to recombination is referred to as kD/kC and often favors recombination compared with disproportionation for alkyl radicals. As the number of transferable hydrogens increase, the rate constant for disproportionation increases relative to the rate constant for recombination.
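Restating the competition schematically for a primary alkyl radical (a summary of the text above, not an equation from the source): both channels are second order in the radical concentration, so the instantaneous product ratio equals the ratio of rate constants.

\begin{align*}
\text{disproportionation:}\quad
  2\,\mathrm{R{-}CH_2{-}\dot{C}H_2}
  &\xrightarrow{\;k_D\;}
  \mathrm{R{-}CH{=}CH_2} + \mathrm{R{-}CH_2{-}CH_3} \\
\text{recombination:}\quad
  2\,\mathrm{R{-}CH_2{-}\dot{C}H_2}
  &\xrightarrow{\;k_C\;}
  \mathrm{R{-}(CH_2)_4{-}R} \\
\text{product ratio:}\quad
  \frac{[\text{alkene}]}{[\text{dimer}]} &= \frac{k_D}{k_C}
\end{align*}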
Kinetic isotope effect on disproportionation and recombination
When the hydrogen atoms in an alkyl radical are displaced with deuterium, disproportionation proceeds at a slightly slower rate whereas the rate of recombination remains the same. Thus disproportionation is weakly affected by the kinetic isotope effect with kH/kD = 1.20 ± 0.15 for ethylene. Hydrogens and deuterons are not involved in recombination reactions. However, deuteron abstraction during disproportionation occurs more slowly than hydrogen abstraction due to the increased mass and reduced vibrational energy of deuterium, although the experimentally observed kH/kD is close to one.
Polar effects and alkoxy radical disproportionation
Alkoxy radicals which contain unpaired electrons on an oxygen atom display a higher kD/kC compared to alkyl radicals. The oxygen has a partial negative charge which removes electron density from the donor carbon atom thereby facilitating hydrogen abstraction. The rate of disproportionation is also aided by the more electronegative oxygen on the acceptor molecule.
Termination of chain processes
Many radical processes involve chain reactions or chain propagation with disproportionation and recombination occurring in the terminal step of the reaction. Terminating chain propagation is often most significant during polymerization as the desired chain propagation cannot take place if disproportionation and recombination reactions readily occur. Controlling termination products and regulating disproportionation and recombination reactions in the terminal step are important considerations in radical chemistry and polymerization. In some reactions (such as the one shown below) one or both of the termination pathways can be hindered by steric or solvent effects.
Reducing disproportionation in living free radical polymerization
Many polymer chemists are concerned with limiting the rate of disproportionation during polymerization. Although disproportionation results in formation of one new double bond which may react with the polymer chain, a saturated hydrocarbon is also formed, and thus the chain reaction does not readily proceed. During living free radical polymerization, termination pathways for a growing polymer chain are removed. This can be achieved through several methods, one of which is reversible termination with stable radicals. Nitroxide radicals and other stable radicals reduce recombination and disproportionation rates and control the concentration of polymeric radicals.
References
Organic chemistry | Radical disproportionation | Chemistry | 1,341 |
1,143,774 | https://en.wikipedia.org/wiki/California%20Institute%20for%20Regenerative%20Medicine | The California Institute for Regenerative Medicine (CIRM) is a state agency that supports research and education in the fields of stem cell and gene therapies. It was created in 2004 after 59% of California voters approved California Proposition 71: the Research and Cures Initiative, which allocated $3 billion to fund stem cell research in California. In 2020 voters approved Proposition 14 that allocated additional funds to CIRM.
CIRM supports research and training at many stem cell institutes throughout California, including Sanford Consortium, University of California, Santa Cruz, Stanford University, University of California Davis, University of California Irvine, University of California San Francisco, University of California Los Angeles and University of Southern California. In addition, it has supported the establishment of nine "Alpha Stem Cell Clinics" that lead clinical trials for stem cell therapies at City of Hope, University of California San Diego, University of California San Francisco, University of California Davis, a joint clinic at University of California Los Angeles and University of California Irvine, Cedars Sinai, Stanford, and USC/Children’s Hospital Los Angeles.
History
CIRM was established via California Proposition 71 (2004). However, its implementation was delayed when out-of-state based opponents incorporated in California to file two lawsuits that challenged the proposition's constitutionality. Opponents argued that the initiative created a taxpayer-funded entity not under state control, that the Independent Citizen's Oversight Committee (ICOC) had a conflict of interest with representatives being eligible for grant money, and that the initiative violated the single-subject requirement of initiatives by funding areas beyond stem cell research. In May 2007, the Supreme Court of California declined to review the two lower court decisions, thereby upholding Proposition 71 as constitutional and permitting CIRM to fund stem cell research in California.
Examples of CIRM funding include:
In 2018, UC San Francisco (UCSF) received a $12 million grant to study severe combined immunodeficiency (SCID). The research UCSF was able to conduct due to the funding the institution received contributed in part to a potential cure in 2019, described in a study published in the New England Journal of Medicine: Lentiviral Gene Therapy Combined with Low-Dose Busulfan in Infants with SCID-X1.
In 2017, CIRM awarded $2 million to a University of California San Diego scientist searching for a cure for the Zika infection. The research succeeded in identifying a previously approved drug that blocks Zika virus replication and infection, as well as transmission from mother to child.
In 2011, CIRM awarded $25 million to support a spinal cord injury trial – the first award dedicated to a human clinical trial – to Geron Corporation, which was later taken up by Asterias Biotherapeutics. The clinical trial led to significant benefits to a paralyzed high school student, Jake Javier, who was able to regain function in his upper body.
By late 2019, CIRM had awarded more than $2.67 billion in grant funding across six broad categories: physical and institutional infrastructure, basic research, education and training, research translation, research application and clinical trials.
The $3 billion initially provided to CIRM through Proposition 71 was budgeted to last until 2017. In February 2014, Robert Klein, a leader in the initial campaign for Proposition 71 and former CIRM Board Chair, presented a proposal at the UC San Diego Moores Cancer Center to extend CIRM funding. Another option discussed at that time was for CIRM to become a private, non-profit organization that would rely solely on outside funding.
In 2020, as CIRM's funding from the 2004 Proposition 71 was expiring, another ballot measure, Proposition 14, was advanced in California to add an additional $5.5 billion to CIRM, to enable it to continue its mission. The measure passed with 51% of the vote, and so the CIRM will continue operating.
Oversight
The CIRM Board is composed of members appointed by elected state officers, including the Governor, the Lieutenant Governor, the State Treasurer, the Controller, the Speaker of the California State Assembly and the President pro Tempore of the California State Senate. Only one member shall be appointed from a single university, institution or entity.
The most recent 2018 audit found CIRM has a collaborative, engaged and performance-oriented culture, is patient-centered and has improved processes to be more efficient and effective since the implementation of CIRM 2.0.
In 2008 the Little Hoover Commission evaluated CIRM at the request of California Senators Sheila Kuehl and George Runner. The Commission commented specifically on the structure of the CIRM governance board and the need for greater transparency and accountability. The Commission provided suggestions on how to improve the structure and enhance the functioning of the CIRM board some of which included: decreasing the size of the ICOC from 29 to 15 members with four having no affiliations with CIRM-funded organizations; allowing board members to serve a maximum of four years; and eliminating the overlapping responsibilities of the agency chair and the board president. In addition, the Commission recommended that CIRM also allow outside experts to evaluate grant proposals.
CA Senator Dean Florez, Little Hoover Commission member and State Senate Majority leader at the time, was not satisfied with the report, highlighting several concerns in a letter to the Little Hoover Commission, stating: “I am concerned about the Commission's apparent rush to conclude its report. As one member said at the meeting, five minutes and a sandwich is not adequate time for Commission members to absorb the information that was presented. While I appreciate the substantial effort that Commission members and staff put into drafting the report, I am concerned that due to its rush to approve the report, the Commission gave disproportionate weight to CIRM's critics and did not consider a broader range of views on the complex issues that are the subject of the report.”
A 2008 “Review Of Conflict-Of-Interest Policies, Grant Administration, Administrative Expenses, And Expenditures,” by the State Controller’s office, which examined 18 straight months of the agency’s operations, found that “CIRM has extensive conflict-of-interest policies and processes that are modeled after and, in some instances, go beyond National Institute of Health requirements. Our conclusion is consistent with the Bureau of State Audits in its audit report of CIRM issued in February 2007.”
In December 2012, the Institute of Medicine (IOM) released a report, “The California Institute for Regenerative Medicine: Science, Governance, and the Pursuit of Cures”, that evaluated CIRM programs and operations since its start in 2004. The IOM committee made recommendations similar to those made in the Little Hoover Commission. In general, the IOM recommended that the ICOC separate their responsibilities as executor and overseer and noted potential conflicts of interest among the CIRM board members. Several active CIRM board members also represented organizations that currently received or benefited from CIRM grants. The IOM committee also recommended that CIRM organize a single Scientific Advisory Board with experts in stem cell biology and cell-based therapies.
In 2014, the integrity of CIRM's grant review process was challenged after CIRM awarded a Stanford-led consortium a $40 million stem cell genomics award, making it the largest CIRM research grant. In February 2013, CIRM reviewers evaluated applications for genomics awards but, for the first time, declined to send any grant proposals to the board for a final decision. Comments were sent back to the researchers and re-submissions were accepted in Fall 2013. During the Fall 2013 review, CIRM reviewers sent all four genomic award proposals to the CIRM board, recommending that all four projects receive funding despite the projects exceeding the budget of $40 million. The CIRM President, Alan Trounson, became involved in the selection process and the final decision was to fund the Stanford project only, totaling $40 million. The CIRM grant review and scoring process and the role of President Trounson have been questioned, especially by those who did not receive funding, such as Jeanne Loring of the stem cell program at Scripps Research Institute.
References
External links
California Institute for Regenerative Medicine homepage
Californians for Stem Cell Research, Treatments and Cures homepage
Government agencies established in 2005
Medical research institutes in California
Research institutes in the San Francisco Bay Area
Institute for Regenerative Medicine
Stem cell research | California Institute for Regenerative Medicine | Chemistry,Biology | 1,699 |
6,003,864 | https://en.wikipedia.org/wiki/Markarian%20421 | Markarian 421 (Mrk 421, Mkn 421) is a blazar located in the constellation Ursa Major. The object is an active galaxy and a BL Lacertae object, and is a strong source of gamma rays. It is about 397 million light-years (redshift z = 0.0308, equivalent to about 122 Mpc) to 434 million light-years (133 Mpc) from the Earth. It is one of the closest blazars to Earth, making it one of the brightest quasars in the night sky. It is suspected to have a supermassive black hole (SMBH) at its center due to its active nature. An early-type high inclination spiral galaxy (Markarian 421-5) is located 14 arc-seconds northeast of Markarian 421.
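As a rough check on the quoted distances (this calculation is not from the article, and the value of the Hubble constant H0 is an assumption), the low-redshift Hubble-law approximation gives:

    d ≈ cz / H0 = (0.0308 × 299,792 km/s) / (70 km/s/Mpc) ≈ 132 Mpc ≈ 430 million light-years

The lower quoted figure of about 122 Mpc corresponds to assuming a somewhat larger Hubble constant (roughly 76 km/s/Mpc).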
It was first determined to be a very high energy gamma ray emitter in 1992 by M. Punch at the Whipple Observatory, and an extremely rapid outburst in very-high-energy gamma rays (15-minute rise-time) was measured in 1996 by J. Gaidos at Whipple Observatory.
Markarian 421 also had an outburst in 2001 and is monitored by the Whole Earth Blazar Telescope project.
Due to its brightness (typically around magnitude 13.3, ranging from about 11.6 at maximum to 16 at minimum), the object can also be viewed by amateurs with smaller telescopes.
References
External links
Focus on Markarian 421
BL Lacertae objects
Ursa Major
Blazars
Discoveries by Benjamin Markarian
Markarian galaxies
06132
033452 | Markarian 421 | Astronomy | 329 |
25,850,552 | https://en.wikipedia.org/wiki/ISS%20ECLSS | The International Space Station (ISS) Environmental Control and Life Support System (ECLSS) is a life support system that provides or controls atmospheric pressure, fire detection and suppression, oxygen levels, proper ventilation, waste management and water supply. It was jointly designed and tested by NASA's Marshall Space Flight Center, UTC Aerospace Systems, Boeing, Lockheed Martin, and Honeywell.
The system has three primary functions: Water Recovery, Air Revitalization, and Oxygen Generation, the purpose of which is to ensure safe and comfortable environments for personnel aboard the ISS. The system also serves as a potential proof of concept for more advanced systems building off of the ECLSS for use in deep space missions.
Water recovery systems
The ISS has two water recovery systems. Zvezda contains a water recovery system that processes water vapor from the atmosphere that could be used for drinking in an emergency but is normally fed to the Elektron system to produce oxygen. The American segment has a Water Recovery System installed during STS-126 that can process water vapour collected from the atmosphere and urine into water that is intended for drinking. The Water Recovery System was installed initially in Destiny on a temporary basis in November 2008 and moved into Tranquility (Node 3) in February 2010.
The Water Recovery System consists of a Urine Processor Assembly and a Water Processor Assembly, housed in two of the three ECLSS racks.
The Urine Processor Assembly uses a low pressure vacuum distillation process that uses a centrifuge to compensate for the lack of gravity and thus aid in separating liquids and gasses. The Urine Processor Assembly is designed to handle a load of 9 kg/day, corresponding to the needs of a 6-person crew. Although the design called for recovery of 85% of the water content, subsequent experience with calcium sulfate precipitation (in the free-fall conditions present on the ISS, calcium levels in urine are elevated due to bone density loss) has led to a revised operational level of recovering 70% of the water content.
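As a back-of-the-envelope illustration of the stated figures (the per-person number is inferred here, not given in the article):

    design load: 9 kg of urine per day for a 6-person crew ≈ 1.5 kg per person per day
    at the original 85% target: 9 kg/day × 0.85 ≈ 7.7 kg of water recovered per day
    at the revised 70% level: 9 kg/day × 0.70 ≈ 6.3 kg of water recovered per day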
Water from the Urine Processor Assembly and from waste water sources are combined to feed the Water Processor Assembly that filters out gasses and solid materials before passing through filter beds and then a high-temperature catalytic reactor assembly. The water is then tested by onboard sensors and unacceptable water is cycled back through the water processor assembly.
The Volatile Removal Assembly flew on STS-89 in January 1998 to demonstrate the Water Processor Assembly's catalytic reactor in microgravity. A Vapour Compression Distillation Flight Experiment flew, but was destroyed, in STS-107.
The distillation assembly of the Urine Processor Assembly failed on 21 November 2008, one day after the initial installation. One of the three centrifuge speed sensors was reporting anomalous speeds, and high centrifuge motor current was observed. This was corrected by re-mounting the distillation assembly without several rubber vibration isolators. The distillation assembly failed again on 28 December 2008 due to high motor current and was replaced on 20 March 2009. Ultimately, during post-failure testing, one centrifuge speed sensor was found to be out of alignment and a compressor bearing had failed.
Atmosphere
Several systems are currently used on board the ISS to maintain the spacecraft's atmosphere, which is similar to the Earth's. Normal air pressure on the ISS is 101.3 kPa (14.7 psi); the same as at sea level on Earth. "While members of the ISS crew could stay healthy even with the pressure at a lower level, the equipment on the Station is very sensitive to pressure. If the pressure were to drop too far, it could cause problems with the Station equipment."
The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station.
The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters.
Carbon dioxide is removed from the air by the Vozdukh system in Zvezda. One Carbon Dioxide Removal Assembly (CDRA) is located in the U.S. Lab module, and one in the US Node 3 module. Other by-products of human metabolism, such as methane from flatulence and ammonia from sweat, are removed by activated charcoal filters or by the Trace Contaminant Control System (TCCS).
Air revitalization system
Carbon dioxide and trace contaminants are removed by the Air Revitalization System. This is a NASA rack, placed in Tranquility, designed to provide a Carbon Dioxide Removal Assembly (CDRA), a Trace Contaminant Control Subassembly (TCCS) to remove hazardous trace contamination from the atmosphere and a Major Constituent Analyser (MCA) to monitor nitrogen, oxygen, carbon dioxide, methane, hydrogen, and water vapour. The Air Revitalization System was flown to the station aboard STS-128 and was temporarily installed in the Japanese Experiment Module pressurised module. The system was scheduled to be transferred to Tranquility after it arrived and was installed during Space Shuttle Endeavour mission STS-130.
Oxygen generating system
The Oxygen Generating System (OGS) is a NASA rack which electrolyses water from the Water Recovery System to produce oxygen and hydrogen, like the Russian Elektron oxygen generator. The oxygen is delivered to the cabin atmosphere. The unit is installed in the Destiny module. During a spacewalk, STS-117 astronauts installed a hydrogen vent valve required to operate the OGS. The OGS was delivered in 2006 by STS-121, and became operational on 12 July 2007. From 2001, the US orbital segment had used oxygen stored in a pressurized tank on the Quest airlock module, or from the Russian service module. Prior to the activation of the Sabatier System in October 2010, hydrogen and carbon dioxide extracted from the cabin was vented overboard.
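The underlying reaction is ordinary water electrolysis (standard stoichiometry, written out here for clarity rather than taken from the article):

    2 H2O → 2 H2 + O2

The oxygen is delivered to the cabin, while the hydrogen was vented overboard until the Sabatier system described below began using it after its activation in October 2010.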
In October 2010, the OGS stopped running well due to the water input becoming slightly too acidic. The station crew relied on the Elektron oxygen generator and oxygen brought up from Earth for six months. In March 2011, STS-133 delivered the repair kit, and the OGS was brought into full operation.
Advanced Closed Loop System
The Advanced Closed Loop System (ACLS) is an ESA rack that converts carbon dioxide (CO2) and water into oxygen and methane. The CO2 is removed from the station air by an amine scrubber, then removed from the scrubber by steam. 50% of the CO2 is converted to methane and water by a Sabatier reaction. The other 50% of the carbon dioxide is jettisoned from the ISS along with the methane that is generated. The water is recycled by electrolysis, producing hydrogen (used in the Sabatier reactor) and oxygen. This is very different from the NASA oxygen-generating rack, which relies on a steady supply of water from Earth in order to generate oxygen. This water-saving capability reduces the water needed in cargo resupply by 400 liters per year. By itself it can regenerate enough oxygen for three astronauts.
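The two reactions underlying the loop are the Sabatier reaction and water electrolysis (standard stoichiometry, added for clarity; the article itself does not write out the equations):

    Sabatier:     CO2 + 4 H2 → CH4 + 2 H2O
    Electrolysis: 2 H2O → 2 H2 + O2

Per molecule of CO2 processed, the vented methane carries away four hydrogen atoms, so electrolysing the recovered water cannot return all of the hydrogen consumed; this hydrogen balance is one plausible reason why only about half of the captured CO2 is converted, although the article does not state the limiting factor explicitly.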
The ACLS was delivered on the Kounotori 7 launch in September 2018 and installed in the Destiny module as a technology demonstrator (planned to operate for one to two years). It was successful, and remains on board the ISS permanently.
ACLS has three subsystems:
The Carbon dioxide Concentration Assembly (CCA) uses an amine reaction to absorb and concentrate carbon dioxide from cabin air to keep carbon dioxide within acceptable levels.
The Carbon dioxide Reprocessing Assembly (CRA). A Sabatier reactor reacts CO2 from the CCA with hydrogen from the OGA to produce water and methane.
The Oxygen Generation Assembly (OGA), electrolyses water into oxygen and hydrogen.
NASA Sabatier system
The NASA Sabatier system (used from 2010 until 2017) closed the oxygen loop in the ECLSS by combining waste hydrogen from the Oxygen Generating System and carbon dioxide from the station atmosphere using the Sabatier reaction to recover the oxygen. The outputs of this reaction were water and methane. The water was recycled to reduce the total amount of water carried to the station from Earth, and the methane was vented overboard by the hydrogen vent line installed for the Oxygen Generating System.
Elektron
Elektron is a Russian Electrolytic Oxygen Generator, which was also used on Mir. It uses electrolysis to convert water molecules reclaimed from other uses on board the station into oxygen and hydrogen. The oxygen is vented into the cabin and the hydrogen is vented into space. The three Elektron units on the ISS have been plagued with problems, frequently forcing the crew to use backup sources (either bottled oxygen or the Vika system discussed below). To support a crew of six, NASA added the oxygen generating system discussed above.
In 2004, the Elektron unit shut down due to (initially) unknown causes. Two weeks of troubleshooting resulted in the unit starting up again, then immediately shutting down. The cause was eventually traced to gas bubbles in the unit, which remained non-functional until a Progress resupply mission in October 2004. In 2005, ISS personnel tapped into the oxygen supply of the recently arrived Progress resupply spacecraft when the Elektron unit failed. In 2006, fumes from a malfunctioning Elektron unit prompted NASA flight engineers to declare a "spacecraft emergency". A burning smell led the ISS crew to suspect another Elektron fire, but the unit was only "very hot". A leak of corrosive, odorless potassium hydroxide forced the ISS crew to don gloves and face masks. It has been conjectured that the smell came from overheated rubber seals. The incident occurred shortly after STS-115 left and just before arrival of a resupply mission (including space tourist Anousheh Ansari). The Elektron did not come back online until November 2006, after new valves and cables arrived on the October 2006 Progress resupply vessel. The ERPTC (Electrical Recovery Processing Terminal Current) was inserted into the ISS to prevent harm to the systems. In October 2020, the Elektron system failed and had to be deactivated for a short time before being repaired.
Vika
The Vika or TGK oxygen generator, also known as Solid Fuel Oxygen Generation (SFOG) when used on the ISS, is a chemical oxygen generator originally developed by Roscosmos for Mir, and it provides an alternate oxygen generating system. It uses canisters of solid lithium perchlorate, which decomposes into gaseous oxygen and solid lithium chloride when heated. Each canister can supply the oxygen needs of one crewmember for one day.
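The oxygen-releasing step is the thermal decomposition of lithium perchlorate (standard stoichiometry, added for clarity):

    LiClO4 → LiCl + 2 O2 (on heating)

By mass, roughly 60% of the lithium perchlorate charge is released as oxygen (2 × 32 g of O2 per ≈106 g of LiClO4), which is why such solid-fuel canisters are compact oxygen stores.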
Vozdukh
Another Russian system, Vozdukh (Russian Воздух, meaning "air"), removes carbon dioxide from the air with regenerable absorbers of carbon dioxide gas.
Temperature and Humidity Control
Temperature and Humidity Control (THC) is the subsystem of the ISS ECLSS which maintains a steady air temperature and controls moisture in the station's air supply. Thermal Control System (TCS) is a component part of the THC system and subdivides into the Active Thermal Control System (ATCS) and Passive Thermal Control System (PTCS). Controlling humidity is possible through lowering or raising the temperature and through adding moisture to the air.
Fire Detection and Suppression
Fire Detection and Suppression (FDS) is the subsystem devoted to identifying that there has been a fire and taking steps to fight it.
See also
International Space Station maintenance
References
External links
Components of the International Space Station
Medical technology
Spacecraft life support systems | ISS ECLSS | Biology | 2,369 |
37,918,612 | https://en.wikipedia.org/wiki/Extractive%20electrospray%20ionization | Extractive electrospray ionization (EESI) is a spray-type, ambient ionization source in mass spectrometry that uses two colliding aerosols, one of which is generated by electrospray. In standard EESI, syringe pumps provide the liquids for both an electrospray and a sample spray. In neutral desorption EESI (ND-EESI), the liquid for the sample aerosol is provided by a flow of nitrogen.
Principle of operation
An ND-EESI experiment is simple in concept and implementation. A room-temperature (20 °C) nitrogen gas stream is flowed through a narrow opening (inner diameter ~0.1 mm) to form a sharp jet targeted at a surface. The nitrogen molecules desorb analytes from the surface. The jet is only 2–3 mm above the surface, and the gas flow is about 200 mL/min with gas speeds around 300 m/s. The sample area is about 10 mm². An optional enclosure, most commonly made of glass, can cover the sampling area to ensure proper positioning of the gas jet and the sample transfer line. A tube carries the neutral aerosol to the ESI spray.
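A rough order-of-magnitude check on the quoted jet speed (an estimate, not from the source) follows from dividing the volumetric flow by the orifice cross-section:

    orifice area: A = π × (0.05 mm)^2 ≈ 7.9 × 10^-9 m^2
    flow rate:    Q = 200 mL/min ≈ 3.3 × 10^-6 m^3/s
    jet speed:    v ≈ Q / A ≈ 4 × 10^2 m/s

That is a few hundred metres per second, consistent with the ~300 m/s quoted above given the nominal precision of the flow and orifice figures.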
The sample spray in EESI produces a liquid aerosol with the analyte in sample droplets. The ESI spray produces droplets rich in protons. The sample droplets and the proton-rich droplets collide with each other. The relevant droplet properties are the analyte's solubility in the ESI spray solvent and the surface tensions of the spray solution and of the sample solution. When these properties are dissimilar, some collisions produce no extraction because the droplets "bounce", but when they are similar, some collisions produce coalescence and liquid-liquid extraction. The extent of the extraction depends on the similarity of the properties.
Applications
Ambient ionization techniques are attractive for many samples for their high tolerance to complex mixtures and for fast testing. EESI has been employed for the rapid characterization of living objects, native proteins, and metabolic biomarkers.
EESI has been applied to food samples, urine, serum, exhaled breath and protein samples. A general investigation of urine, serum, milk and milk powders was reported in 2006. Breath analysis of valproic acid with EESI was reported in 2007. The maturity of fruit was classified with the combination of EESI and principal component analysis, and live samples were tested a short time later. Perfumes were classified with the combination of EESI and characteristic ions. On-line monitoring was performed in 2008. Melamine in tainted milk was detected in 2009. Breath analysis was performed with the combination of EESI and an ion trap mass spectrometer. Beverages, over-the-counter drugs, uranyl waste water, and aquiculture water were tested with EESI between 2010 and 2016.
See also
Secondary electrospray ionization
Tandem mass spectrometry
References
Mass spectrometry
Ion source | Extractive electrospray ionization | Physics,Chemistry | 600 |
10,100,068 | https://en.wikipedia.org/wiki/Moribito%3A%20Guardian%20of%20the%20Spirit | Moribito: Guardian of the Spirit is a Japanese novel that was first published in July 1996. It is the first in the 12-volume series of Japanese fantasy novels by Nahoko Uehashi. It was the recipient of the Batchelder Award and was named an ALA Notable Children's Book in 2009. It has since been adapted into numerous media, including radio, manga, anime, and taiga drama adaptations. Scholastic released the first novel in English in June 2008. Media Blasters has confirmed that they acquired the rights to the anime. The anime series adaptation premiered on Adult Swim in the U.S. at 1:30 a.m. ET on August 24, 2008, but was dropped from the schedule without warning or explanation on January 15, 2009, after two runs of the first ten episodes. The program returned to Adult Swim during the summer 2009 line-up with an airing of the entire series.
Synopsis
Balsa, spear wielder and bodyguard, is a wandering warrior who has vowed to atone for eight deaths in her past by saving an equivalent number of lives. On her journey, she saves Prince Chagum and is tasked with becoming his bodyguard. His own father, the Mikado, has ordered his assassination. The two begin a perilous journey to ensure the survival of the prince. Balsa's complicated past begins to come to light and they uncover Chagum's mysterious connection to a legendary water spirit with the power to destroy the kingdom.
Media
Novel
The novel was first published in hardback by Kaiseisha as children's literature, but it had many adult fans. Shinchosha republished it in bunkobon format in March 2007.
Seirei no Moribito (Guardian of the Spirit) (July 1996; bunko edition, March 2007)
Adapted into the anime series. Balsa is hired to protect a prince with a mysterious spirit living inside him.
Published in English by Arthur A. Levine Books/Scholastic in the summer of 2008; translated by Cathy Hirano.
The novel received the Mildred L. Batchelder Award from the American Library Association in 2009.
Radio drama
The series has been adapted into a radio drama, written by Satoshi Maruo. It aired on NHK FM from August 7, 2006, to August 18, 2006.
Anime
The series has been adapted into an anime television series, produced by Production I.G and directed by Kenji Kamiyama, which premiered in Japan on NHK from April 7, 2007. The anime runs 26 episodes and is based entirely on the first novel in the Guardian series, and greatly expands the midsection of the novel.
At the Tokyo International Anime Fair 2007 in March, Geneon announced that they had acquired the license to the anime and Scholastic announced they had US distribution rights to the novels. After Geneon discontinued its US distribution division, the rights transferred to Media Blasters. The series premiered in the United States at 1:30 a.m. ET on August 24, 2008, on Cartoon Network's Adult Swim block, but was dropped from the schedule without warning or explanation on January 15, 2009, after two runs of the first ten episodes. On June 13, 2009, the series was back on Cartoon Network's Adult Swim block in the United States at 1:30 a.m. ET on Sundays, but was later moved to 2:30 a.m. ET, swapping it with Fullmetal Alchemist in November. Viz Media re-released the entire series on DVD and Blu-ray on August 26, 2014. It also aired on their digital broadcasting channel, Neon Alley, from January 17, 2014, until the channel's closure on May 6, 2016. In August 2020, Sentai Filmworks announced that they acquired the series for home video and digital release.
The series feature two theme songs. The opening title is "Shine" by L'Arc-en-Ciel, while Sachi Tainaka performs "Itoshii Hito e" for the ending title.
Taiga fantasy drama (television)
The series has been adapted into a live-action taiga fantasy drama television series by NHK, shot in 4K resolution. It stars Haruka Ayase as Balsa. Season one was shown in four episodes in March and April 2016. Season two was shown over nine episodes from January to March 2017. The third and final season was shown from November 2017 to January 2018, also over nine episodes.
Musical
A stage musical was produced at Nissay Theater in Tokyo in 2023, starring Rio Asumi as Balsa. It has been released on DVD.
Reception
Daniel Baird reviewed this book and its sequel for Mythprint, praising the first volume as enjoyable by both children and adults due to "plenty of richness in its characterization and fantasy world".
See also
References
External links
Official site of the novels
Official site of the anime
Production I.G site
Production I.G site
1996 children's books
1996 Japanese novels
Anime and manga based on novels
Anime Works
Fictional bodyguards
Gangan Comics manga
Geneon USA
Japanese children's novels
Japanese fantasy novels
Martial arts anime and manga
NBCUniversal Entertainment Japan
NHK original programming
Novels by Nahoko Uehashi
Production I.G
Samurai in anime and manga
Sentai Filmworks
Shinchosha books
Shōnen manga
Sword and sorcery anime and manga
Taiga drama
Viz Media anime
Works about atonement | Moribito: Guardian of the Spirit | Biology | 1,089 |
27,146,862 | https://en.wikipedia.org/wiki/Oliva%20tisiphona | Oliva tisiphona is a species of sea snail, a marine gastropod mollusk in the family Olividae, the olives.
This is a nomen dubium.
Distribution
This marine species occurs off Martinique.
References
Paulmier G. (2014). La famille des Olividae Latreille, 1825 (Neogastropoda). Le genre Oliva Bruguière, 1789, aux Antilles et en Guyane françaises. Description de Oliva lilacea nov. sp. Bulletin de la Société Linnéenne de Bordeaux 41(4) [2013]: 437–454 (nouvelle série, sér. 148).
Vervaet F.L.J. (2018). The living Olividae species as described by Pierre-Louis Duclos. Vita Malacologica 17: 1–111.
tisiphona
Gastropods described in 1845
Nomina dubia | Oliva tisiphona | Biology | 188 |
153,522 | https://en.wikipedia.org/wiki/Plastid | A plastid is a membrane-bound organelle found in the cells of plants, algae, and some other eukaryotic organisms. Plastids are considered to be intracellular endosymbiotic cyanobacteria.
Examples of plastids include chloroplasts (used for photosynthesis); chromoplasts (used for synthesis and storage of pigments); leucoplasts (non-pigmented plastids, some of which can differentiate); and apicoplasts (non-photosynthetic plastids of apicomplexa derived from secondary endosymbiosis).
A permanent primary endosymbiosis event occurred about 1.5 billion years ago in the Archaeplastida clade (land plants, red algae, green algae and glaucophytes), probably with a cyanobiont, a symbiotic cyanobacterium related to the genus Gloeomargarita. Another primary endosymbiosis event occurred later, between 140 and 90 million years ago, in the photosynthetic amoeboid Paulinella, involving cyanobacteria of the genera Prochlorococcus and Synechococcus, or the "PS-clade". Secondary and tertiary endosymbiosis events have also occurred in a wide variety of organisms; and some organisms developed the capacity to sequester ingested plastids, a process known as kleptoplasty.
A. F. W. Schimper was the first to name, describe, and provide a clear definition of plastids. Plastids possess a double-stranded DNA molecule that has long been thought of as circular in shape, like the circular chromosome of prokaryotic cells, but this is now uncertain (the molecule may instead have a linear shape). Plastids are sites for manufacturing and storing pigments and other important chemical compounds used by the cells of autotrophic eukaryotes. Some contain biological pigments such as those used in photosynthesis or those which determine a cell's color. Plastids in organisms that have lost their photosynthetic properties are highly useful for manufacturing molecules such as the isoprenoids.
In land plants
Chloroplasts, proplastids, and differentiation
In land plants, the plastids that contain chlorophyll can perform photosynthesis, thereby creating internal chemical energy from external sunlight energy while capturing carbon from Earth's atmosphere and furnishing the atmosphere with life-giving oxygen. These are the chlorophyll-containing plastids, and they are named chloroplasts (see top graphic).
Other plastids can synthesize fatty acids and terpenes, which may be used to produce energy or as raw material to synthesize other molecules. For example, plastids in epidermal cells manufacture the components of the tissue system known as the plant cuticle, including its epicuticular wax, from palmitic acid, which itself is synthesized in the chloroplasts of the mesophyll tissue. Plastids also function to store different components, including starches, fats, and proteins.
All plastids are derived from proplastids, which are present in the meristematic regions of the plant. Proplastids and young chloroplasts typically divide by binary fission, but more mature chloroplasts also have this capacity.
Plant proplastids (undifferentiated plastids) may differentiate into several forms, depending upon which function they perform in the cell, (see top graphic). They may develop into any of the following variants:
Chloroplasts: typically green plastids that perform photosynthesis.
Etioplasts: precursors of chloroplasts.
Chromoplasts: coloured plastids that synthesize and store pigments.
Gerontoplasts: plastids that control the dismantling of the photosynthetic apparatus during plant senescence.
Leucoplasts: colourless plastids that synthesize monoterpenes.
Leucoplasts differentiate into even more specialized plastids, such as:
the aleuroplasts;
Amyloplasts: storing starch and detecting gravity (for maintaining geotropism).
Elaioplasts: storing fats.
Proteinoplasts: storing and modifying protein.
or Tannosomes: synthesizing and producing tannins and polyphenols.
Depending on their morphology and target function, plastids have the ability to differentiate or redifferentiate between these and other forms.
Plastomes and Chloroplast DNA/ RNA; plastid DNA and plastid nucleoids
Each plastid creates multiple copies of its own unique genome, or plastome (from 'plastid genome'), which for a chlorophyll plastid (or chloroplast) is equivalent to a 'chloroplast genome', or 'chloroplast DNA'. The number of genome copies produced per plastid is variable, ranging from 1000 or more in rapidly dividing new cells, which contain only a few plastids, down to 100 or fewer in mature cells, which contain numerous plastids.
A plastome typically contains a genome that encodes transfer ribonucleic acids (tRNAs) and ribosomal ribonucleic acids (rRNAs). It also encodes proteins involved in photosynthesis and in plastid gene transcription and translation. But these proteins represent only a small fraction of the total protein complement necessary to build and maintain any particular type of plastid. Nuclear genes (in the cell nucleus of the plant) encode the vast majority of plastid proteins, and the expression of nuclear and plastid genes is co-regulated to coordinate the development and differentiation of plastids.
Many plastids, particularly those responsible for photosynthesis, possess numerous internal membrane layers. Plastid DNA exists as protein-DNA complexes associated as localized regions within the plastid's inner envelope membrane; and these complexes are called 'plastid nucleoids'. Unlike the nucleus of a eukaryotic cell, a plastid nucleoid is not surrounded by a nuclear membrane. The region of each nucleoid may contain more than 10 copies of the plastid DNA.
Whereas the proplastid (undifferentiated plastid) contains a single nucleoid region located near the centre of the proplastid, the developing (or differentiating) plastid has many nucleoids localized at the periphery of the plastid and bound to the inner envelope membrane. During the development/differentiation of proplastids to chloroplasts, and when plastids are differentiating from one type to another, nucleoids change in morphology, size, and location within the organelle. The remodelling of plastid nucleoids is believed to occur by modifications to the abundance and composition of nucleoid proteins.
In normal plant cells, long thin protuberances called stromules sometimes form, extending from the plastid body into the cell cytosol and interconnecting several plastids. Proteins and smaller molecules can move around and through the stromules. Comparatively, in the laboratory, most cultured cells, which are large compared to normal plant cells, produce very long and abundant stromules that extend to the cell periphery.
In 2014, evidence was found of the possible loss of the plastid genome in Rafflesia lagascae, a non-photosynthetic parasitic flowering plant, and in Polytomella, a genus of non-photosynthetic green algae. Extensive searches for plastid genes in both taxa yielded no results, but the conclusion that their plastomes are entirely missing is still disputed. Some scientists argue that plastid genome loss is unlikely since even these non-photosynthetic plastids contain genes necessary to complete various biosynthetic pathways, including heme biosynthesis.
Even with any loss of plastid genome in Rafflesiaceae, the plastids still occur there as "shells" without DNA content, which is reminiscent of hydrogenosomes in various organisms.
In algae and protists
Plastid types in algae and protists include:
Chloroplasts: found in green algae (plants) and other organisms that derived their genomes from green algae.
Muroplasts: also known as cyanoplasts or cyanelles, the plastids of glaucophyte algae are similar to plant chloroplasts, excepting they have a peptidoglycan cell wall that is similar to that of bacteria.
Rhodoplasts: the red plastids found in red algae, which allows them to photosynthesize down to marine depths of 268 m. The chloroplasts of plants differ from rhodoplasts in their ability to synthesize starch, which is stored in the form of granules within the plastids. In red algae, floridean starch is synthesized and stored outside the plastids in the cytosol.
Secondary and tertiary plastids: from endosymbiosis of green algae and red algae.
Leucoplast: in algae, the term is used for all unpigmented plastids. Their function differs from the leucoplasts of plants.
Apicoplast: the non-photosynthetic plastids of Apicomplexa derived from secondary endosymbiosis.
The plastid of photosynthetic Paulinella species is often referred to as the 'cyanelle' or chromatophore, and is used in photosynthesis. It had a much more recent endosymbiotic event, in the range of 140–90 million years ago, which is the only other known primary endosymbiosis event of cyanobacteria.
Etioplasts, amyloplasts and chromoplasts are plant-specific and do not occur in algae. Plastids in algae and hornworts may also differ from plant plastids in that they contain pyrenoids.
Inheritance
In reproducing, most plants inherit their plastids from only one parent. In general, angiosperms inherit plastids from the female gamete, whereas many gymnosperms inherit plastids from the male pollen. Algae also inherit plastids from just one parent. Thus the plastid DNA of the other parent is completely lost.
In normal intraspecific crossings (resulting in normal hybrids of one species), the inheritance of plastid DNA appears to be strictly uniparental, i.e., from the female. In interspecific hybridisations, however, the inheritance is apparently more erratic. Although plastids are inherited mainly from the female in interspecific hybridisations, there are many reports of hybrids of flowering plants containing plastids from the male.
Approximately 20% of angiosperms, including alfalfa (Medicago sativa), normally show biparental inheriting of plastids.
DNA damage and repair
The plastid DNA of maize seedlings is subjected to increasing damage as the seedlings develop. The DNA damage is due to oxidative environments created by photo-oxidative reactions and photosynthetic/ respiratory electron transfer. Some DNA molecules are repaired but DNA with unrepaired damage is apparently degraded to non-functional fragments.
DNA repair proteins are encoded by the cell's nuclear genome and then translocated to plastids where they maintain genome stability/ integrity by repairing the plastid's DNA. For example, in chloroplasts of the moss Physcomitrella patens, a protein employed in DNA mismatch repair (Msh1) interacts with proteins employed in recombinational repair (RecA and RecG) to maintain plastid genome stability.
Origin
Plastids are thought to be descended from endosymbiotic cyanobacteria. The primary endosymbiotic event of the Archaeplastida is hypothesized to have occurred around 1.5 billion years ago and enabled eukaryotes to carry out oxygenic photosynthesis. Three evolutionary lineages in the Archaeplastida have since emerged in which the plastids are named differently: chloroplasts in green algae and/or plants, rhodoplasts in red algae, and muroplasts in the glaucophytes. The plastids differ both in their pigmentation and in their ultrastructure. For example, chloroplasts in plants and green algae have lost all phycobilisomes, the light harvesting complexes found in cyanobacteria, red algae and glaucophytes, but instead contain stroma and grana thylakoids. The glaucocystophycean plastid—in contrast to chloroplasts and rhodoplasts—is still surrounded by the remains of the cyanobacterial cell wall. All these primary plastids are surrounded by two membranes.
The plastid of photosynthetic Paulinella species is often referred to as the 'cyanelle' or chromatophore, and had a much more recent endosymbiotic event about 90–140 million years ago; it is the only known primary endosymbiosis event of cyanobacteria outside of the Archaeplastida. The plastid belongs to the "PS-clade" (of the cyanobacteria genera Prochlorococcus and Synechococcus), which is a different sister clade to the plastids belonging to the Archaeplastida.
In contrast to primary plastids derived from primary endosymbiosis of a prokaryotic cyanobacterium, complex plastids originated by secondary endosymbiosis in which a eukaryotic organism engulfed another eukaryotic organism that contained a primary plastid. When a eukaryote engulfs a red or a green alga and retains the algal plastid, that plastid is typically surrounded by more than two membranes. In some cases these plastids may be reduced in their metabolic and/or photosynthetic capacity. Algae with complex plastids derived by secondary endosymbiosis of a red alga include the heterokonts, haptophytes, cryptomonads, and most dinoflagellates (= rhodoplasts). Those that endosymbiosed a green alga include the euglenids and chlorarachniophytes (= chloroplasts). The Apicomplexa, a phylum of obligate parasitic alveolates including the causative agents of malaria (Plasmodium spp.), toxoplasmosis (Toxoplasma gondii), and many other human or animal diseases, also harbor a complex plastid (although this organelle has been lost in some apicomplexans, such as Cryptosporidium parvum, which causes cryptosporidiosis). The 'apicoplast' is no longer capable of photosynthesis, but is an essential organelle, and a promising target for antiparasitic drug development.
Some dinoflagellates and sea slugs, in particular of the genus Elysia, take up algae as food and keep the plastid of the digested alga to profit from the photosynthesis; after a while, the plastids are also digested. This process is known as kleptoplasty, from the Greek, kleptes (), thief.
Plastid development cycle
In 1977, J. M. Whatley proposed a plastid development cycle, according to which plastid development is not always unidirectional but is instead a complicated cyclic process. Proplastids are the precursors of the more differentiated forms of plastids, as shown in the diagram to the right.
See also
Notes
References
Further reading
External links
Transplastomic plants for biocontainment (biological confinement of transgenes) — Co-extra research project on coexistence and traceability of GM and non-GM supply chains
Tree of Life Eukaryotes
Organelles
Plant physiology
Photosynthesis
Endosymbiotic events | Plastid | Chemistry,Biology | 3,503 |
658,839 | https://en.wikipedia.org/wiki/Mines%20Paris%20%E2%80%93%20PSL | Mines Paris – PSL, officially École nationale supérieure des mines de Paris (; until May 2022 Mines ParisTech), and also known as École des mines de Paris, ENSMP, Mines de Paris, les Mines, or Paris School of Mines, is a French grande école and a constituent college of PSL Research University. It was originally established in 1783 by King Louis XVI.
Mines Paris is distinguished for the outstanding performance of its research centers and the quality of its international partnerships with other prestigious universities in the world, which include Massachusetts Institute of Technology (MIT), California Institute of Technology (Caltech), Harvard John A. Paulson School of Engineering and Applied Sciences (Harvard SEAS), Shanghai Jiao Tong University, University of Hong Kong, National University of Singapore (NUS), Novosibirsk State University, Pontifical Catholic University of Chile, and Tokyo Tech.
Mines Paris also publishes a world university ranking based on the number of alumni holding the post of CEO in one of the 500 largest companies in the world: the Mines ParisTech: Professional Ranking of World Universities. The school is a member of the ParisTech (Paris Institute of Technology) alliance.
History
A school of mining had been proposed by Henri Bertin in 1765, but it was the chemist Balthazar-Georges Sage who, though not a chemist of repute, was a royalist able to convince Jacques Necker (1732–1804) of the value of mineralogy in training students in mining. This was achieved through the use of his own large collections of minerals, and a chair in mineralogy was established on July 11, 1778. The school of mines began at the mint, the Hôtel de la Monnaie, Paris. The school was officially opened by decree of the French King's Counsel on March 19, 1783.
The school disappeared at the beginning of the French Revolution but was re-established by decree of the Committee of Public Safety in 1794, the 13th Messidor Year II. It moved to Savoie, after a decree of the consuls the 23rd Pluviôse Year X (1802).
After the Bourbon Restoration in 1814, the school moved to the Hôtel de Vendôme (in the 6th arrondissement in Paris' Jardin du Luxembourg). From the 1960s onwards, it created research laboratories in Fontainebleau, Évry, and Sophia Antipolis (Nice).
Education
École des mines de Paris is a member of the Groupe des écoles des mines (GEM), a group of 8 Institut Mines-Telecom (IMT) engineering schools that are Grandes Écoles, a French institution of higher education that is separate from, but parallel and connected to the main framework of the French public university system. Similar to the Ivy League in the United States, Oxbridge in the UK, and C9 League in China, Grandes Écoles are elite academic institutions that admit students through an extremely competitive process. Alums go on to occupy elite positions within government, administration, and corporate firms in France.
The initial aim of the École des mines de Paris, namely to train high-level mining engineers, evolved with time to adapt to the technological and structural transformations undergone by society. Mines Paris - PSL has now become one of the most prestigious French engineering schools with a broad variety of subjects. Its students are trained to have management positions, work in research and development departments, or as operations officers, etc. They receive a well-rounded education in a variety of subjects, ranging from the most technical (Mathematics, Physics) to economics, social sciences or even art in order to be able to tackle the managing or engineering-related issues they are to face. Exchange programs are possible during the third semester with prestigious universities around the world, such as Massachusetts Institute of Technology (MIT), California Institute of Technology (Caltech), University of Hong Kong, National University of Singapore (NUS), Tokyo Tech, Seoul National University...
Although the IMT engineering schools are more expensive than public universities in France, Grandes Écoles typically have much smaller class sizes and student bodies, and many of their programs are taught in English. International internships, study abroad opportunities, and close ties with government and the corporate world are a hallmark of the Grandes Écoles. Many of the top-ranked schools in Europe are members of the Conférence des grandes écoles (CGE), as are the IMT engineering schools. Degrees from the IMT are accredited by the Commission des titres d'ingénieur and awarded by the Ministry of National Education (France).
Mines Paris - PSL provides different educational paths:
The Ingénieurs civils degree (Master of Science and Executive Engineering), ranked among the best French grandes écoles engineering degrees, similar to that offered at , École des Ponts ParisTech and CentraleSupélec.
The Corps of Mines, one of the greatest technical corps of the French state. It is a third-cycle degree, lasting three years, consisting of two long-term internships in both public and private economic institutions and courses in economics and public institutions. Admission to the Corps des Mines is highly selective, as only the top students from École polytechnique, École normale supérieure, Mines ParisTech and Telecom Paris may apply.
Mastère Spécialisé degree (post-graduate specialization degree): post-graduate programs accredited by the Conférence des grandes écoles, in the fields of Energy, Environment, Transport and Logistics, Informatics, Safety and management in industry, and Materials engineering.
Doctoral (19 schools) and Master (9 programs) studies in various fields.
For students having studied in the Classe Préparatoire aux Grandes Ecoles (a two-year highly selective undergraduate program in Mathematics, Physics and Engineering, among others), admission to Civil Engineer of Mines is decided through a nationwide competitive examination. Every year, ten applications are also accepted from students around the world according to their academic achievements.
Admission to the Corps of Mines is possible for French students at the end of the studies in École polytechnique, École normale supérieure, École des télécommunications de Paris and École des mines de Paris (these two later, after a specific examination), or from the other great technical corps of the French state. Admission in third year is also open to one Ph.D graduate.
Rankings
National ranking (ranked as Mines Paris for its Master of Sciences in Engineering)
International Rankings (Ranked as PSL University)
Student unions and organizations
A Student Union is elected every year after a one-week campaign, and is in charge of enhancing the contact between students and various sponsoring industries as well as organizing events for the students.
Various other organizations are part of students' lives: the Students' Sport Committee (BDS), the Junior Enterprise (JUMP), the Arts' Office (BDA), Cahier Vert (social opening and tutoring), CAV (wine-tasting club), Catholic community, fanfare band, entrepreneur club (Mines Genius), humanitarian organizations (Heliotopia, Ceres, Zanbinou), photography club, and sailing club, among others.
Alumni
Academics & Scientists
Maurice Allais (1911–2010), Nobel Prize in Economics, 1988
Léon Walras (1834–1910), mathematical economist
Georges Charpak (1924–2010), Nobel Prize in Physics 1992
Ignacy Domeyko (1802–1889), geologist, mineralogist, educator, rector of University of Chile
Philippe Jamet (born 1961), Director General of IONIS Education Group
Henri Poincaré (1854-1912), mathematician and physicist
Jean-Baptiste Élie de Beaumont (1798–1874), founder of geology, Wollaston Medal 1843
Auguste Laurent (1808–1853), chemist, precursor of Organic Chemistry modern
Alfred-Marie Liénard (1869–1958), famous for the Liénard–Wiechert potential
Louis Paul Cailletet (1832–1913), physicist and inventor
Jean-Jacques Favier (1949–), astronaut
Marie-Adolphe Carnot (1839–1920), French chemist, mining engineer and politician, after whom the uranium ore carnotite is named
Sylvaine Neveu (born 1968), chemist and scientific director of the Solvay group
Business leaders
Odile Hembise Fanton d’Andon, CEO of ACRI-ST (since 2000)
Anne Rigail, CEO of Air France (since 2018)
Patrick Pouyanné, CEO of TotalEnergies (since 2014)
Jacques Aschenbroich, CEO of Valeo (since 2009)
Jean-Laurent Bonnafé, CEO of BNP Paribas (since 2011)
Tidjane Thiam, CEO of Credit Suisse (2015-2020)
Carlos Ghosn, CEO of Nissan (2001-2018) and CEO of Renault-Nissan (2005-2018)
Anne Lauvergeon, CEO of Areva (2001-2011)
Thierry Desmarest, CEO of Total (1995-2010)
Didier Lombard, CEO of France Télécom (2005-2010)
Jean-Louis Beffa, CEO of Saint-Gobain (1986-2007)
Jean-Martin Folz, CEO of PSA Peugeot Citroën (1995-2007)
Denis Ranque, CEO of Thales Group (1998-2009)
Noël Forgeard, former CEO of Airbus (1998-2005) and EADS (2005-2006)
Francis Mer, CEO of Usinor (1986-2001) and former Minister of Finances of France (2002-2004)
Eckley Brinton Coxe (1839-1895), Owner Coxe Brothers and Company, Pennsylvania State Senator
Entrepreneurs
Franck Le Ouay and Romain Nicolli, co-founders of Criteo
Politicians
Alain Poher (1909–1996), politician, president of Sénat, president by interim of French Republic.
Jean-Louis Bianco (1943–), General Secretary of President of France (1982–1991), Minister of Social Affairs (France) (1991–1992), Minister of Transport (France) (1992–1993), députy of Alpes de Haute Provence's 1st constituency (1997–)
Charles de Freycinet, prime minister of France at the end of the 19th century
Albert François Lebrun (1871–1950), president of France
Najla Bouden Romdhane (1958–), designated prime minister of Tunisia (2021–)
Adam Seybert (1773-1825), American congressman and mineralogist
Research centres
Energy and Processes
CES (Energy efficiency of Systems Center)
CTP (Thermodynamics of Processes Center)
OIE (Observation, Impacts, Energy Center)
PERSEE (Processes, Renewable Energies and Energy Systems Center)
Mathematics and Systems
CAOR (Robotics Center)
CAS (Automatic Control and Systems Center)
CBIO (Computational Biology Center)
CMA (Applied Mathematics Center)
CMM (Mathematical Morphology Center)
CRI (Computer Science Center)
Earth Science and Environment
Geosciences (Geosciences and Geoengineering Center). Located in Fontainebleau, the Geosciences and Geoengineering Department (a research structure common to MINES ParisTech and ARMINES) focuses on research and teaching activities in the field of Earth and Environmental Sciences.
ISIGE (Environmental Engineering and Management Center)
Economics, Management, Society
CERNA (Industrial Economics Center)
CGS (Scientific Management Center)
CRC (Crisis and Risk Research Center)
CSI (Sociology of Innovation Center)
Mechanical and Materials Engineering
CEMEF (Material Forming Center)
Materials Center
Other schools of Mines in France
École nationale supérieure des Mines d'Albi Carmaux (Mines Albi-Carmaux)
École nationale supérieure des Mines d'Alès (Mines Alès)
École nationale supérieure des Mines de Douai (Mines Douai)
École nationale supérieure des Mines de Nancy
École nationale supérieure des Mines de Nantes (Mines Nantes)
École nationale supérieure des mines de Saint-Étienne (Mines Saint-Étienne)
Other schools of Mines in the UK
Royal School of Mines
Camborne School of Mines
Other schools of Mines in Africa
École nationale supérieure des Mines de Rabat (Mines Rabat)
Other schools of Mines in the USA
Colorado School of Mines
Columbia School of Mines
See also
PSL Research University
ParisTech
Institut Mines-Télécom
École des mines d'Albi-Carmaux
École des mines d'Alès
École des mines de Douai
École des mines de Nantes
École nationale supérieure des mines de Nancy
École nationale supérieure des mines de Saint-Étienne
École Nationale Supérieure des Mines de Rabat
Musée de Minéralogie
Télécom SudParis
Mines ParisTech: Professional Ranking of World Universities
Notes and references
External links
School's official Web Portal
School's Linkedin page
Students' Web Portal
ISIGE – Mines ParisTech's faculty of Sustainable development
ParisTech
Schools of mines
Universities and colleges in Paris
Buildings and structures in the 6th arrondissement of Paris
Engineering universities and colleges in France
Grandes écoles
Technical universities and colleges in France
1783 establishments in France
Educational institutions established in 1783 | Mines Paris – PSL | Engineering | 2,659 |
5,842,560 | https://en.wikipedia.org/wiki/Conical%20function | In mathematics, conical functions or Mehler functions are functions which can be expressed in terms of Legendre functions of the first and second kind of complex degree, $P^{\mu}_{-1/2+i\lambda}(x)$ and $Q^{\mu}_{-1/2+i\lambda}(x)$.
The functions were introduced by Gustav Ferdinand Mehler, in 1868, when expanding in series the distance of a point on the axis of a cone to a point located on the surface of the cone. Mehler introduced a special notation for these functions and obtained integral representations and series-of-functions representations for them. He also established an addition theorem for the conical functions. Carl Neumann obtained an expansion of the functions in terms of the Legendre polynomials in 1881. Leonhardt introduced for the conical functions the equivalent of the spherical harmonics in 1882.
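For numerical work, conical functions can be evaluated with an associated Legendre routine that accepts a complex degree. The sketch below uses Python's mpmath library; the assumption that mpmath.legenp accepts a complex degree should be checked against the library documentation, and conical_P is just a local helper name defined here, not a library function.

    from mpmath import mp, legenp

    mp.dps = 25  # working precision in decimal digits

    def conical_P(mu, tau, x):
        # Conical (Mehler) function P^mu_{-1/2 + i*tau}(x), i.e. the associated
        # Legendre function of the first kind with complex degree -1/2 + i*tau.
        return legenp(mp.mpf(-0.5) + 1j * tau, mu, x)

    # Example: P^0_{-1/2 + 0.5 i}(0.3). For real mu, tau and -1 < x < 1 the
    # conical function is real-valued, so any imaginary part is round-off error.
    val = conical_P(0, 0.5, 0.3)
    print(val.real, abs(val.imag))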
External links
G. F. Mehler "Ueber die Vertheilung der statischen Elektricität in einem von zwei Kugelkalotten begrenzten Körper" Journal für die reine und angewandte Mathematik 68, 134 (1868).
G. F. Mehler "Ueber eine mit den Kugel- und Cylinderfunctionen verwandte Function und ihre Anwendung in der Theorie der Elektricitätsvertheilung" Mathematische Annalen 18 p. 161 (1881).
C. Neumann "Ueber die Mehler'schen Kegelfunctionen und deren Anwendung auf elektrostatische Probleme" Mathematische Annalen 18 p. 195 (1881).
G. Leonhardt " Integraleigenschaften der adjungirten Kegelfunctionen" Mathematische Annalen 19 p. 578 (1882).
Milton Abramowitz and Irene Stegun (Eds.) Handbook of Mathematical Functions (Dover, 1972) p. 337
A. Gil, J. Segura, N. M. Temme "Computing the conical function $P^{\mu}_{-1/2+i\tau}(x)$" SIAM J. Sci. Comput. 31(3), 1716–1741 (2009).
Tiwari, U. N.; Pandey, J. N. The Mehler-Fock transform of distributions. Rocky Mountain J. Math. 10 (1980), no. 2, 401–408.
Special functions | Conical function | Mathematics | 486 |
52,846,723 | https://en.wikipedia.org/wiki/Evolution%20of%20molecular%20chaperones | Chaperones, also called molecular chaperones, are proteins that assist other proteins in assuming their three-dimensional fold, which is necessary for protein function. However, the fold of a protein is sensitive to environmental conditions, such as temperature and pH, and thus chaperones are needed to keep proteins in their functional fold across various environmental conditions. Chaperones are an integral part of a cell's protein quality control network by assisting in protein folding and are ubiquitous across diverse biological taxa. Since protein folding, and therefore protein function, is susceptible to environmental conditions, chaperones could represent an important cellular aspect of biodiversity and environmental tolerance by organisms living in hazardous conditions. Chaperones also affect the evolution of proteins in general, as many proteins fundamentally require chaperones to fold or are naturally prone to misfolding; chaperones therefore also mitigate protein aggregation.
Evolution of chaperones
The evolutionary development of chaperones is highly linked to the evolution of proteins in general, as their primary function is dependent on the presence of proteins. Proteins were selected as the main biological catalysts over ribozymes, RNA molecules capable of catalyzing biological reactions, early in cellular evolution. Diversity of monomers (4 nucleotides versus 20 amino acids), interactions during folding, and consequences of changes in sequence are some of the hypotheses that attempt to explain why proteins were selected over ribozymes.
Small proteins fold spontaneously, but the development of increasingly larger proteins, which have more complex folding patterns and intramolecular interactions, would have required chaperones to prevent protein aggregation due to misfolding. Folding of early proteins would have been error-prone in ancient cell cytosol and chaperones would have been needed to assist in unfolding and re-folding.
Heat shock proteins
Heat shock proteins (HSPs) are a diverse class of molecular chaperones that assist in folding under stress. While originally identified in heat stress response (hence the name “heat shock”), inducible HSP expression is a consequence of all known stressors (pH, osmotic, temperature, energy depletion, ion concentration, etc.). Genetic stress, a result of deleterious mutations, also increases HSP expression. HSPs are ubiquitous across all domains of life (Bacteria, Archaea, and Eukarya) and have been found in every species for which they have been tested. HSPs are divided into families, based on sequence homology and molecular weight (hsp110, hsp100, hsp90, hsp70, hsp60, hsp40, hsp10, and small hsp families).
Proteins are highly susceptible to denaturation under harsh environmental conditions, so organisms that live in hazardous conditions would be expected to maintain a basal level of HSP expression. However, other adaptations, such as colonizing less hazardous microhabitats or other behavioral adaptations, could also contribute to acclimation in stressful habitats. Additionally, “normal” environments can also place stress on inhabitants (drought or seasonal changes, for example). These factors muddy the relationship between HSP expression and environmental stress resistance, and HSP expression in nature is not well characterized.
Elevated expression of heat shock proteins is not correlated with chronic environmental stress, which is thought to reflect the costs of HSP expression. High levels of hsp70 are known to accompany deficits in cell division, reproduction, and reproductive success. Intracellularly, HSP expression shuts down normal cell functions and diverts a large amount of energy toward stress resistance. Additionally, high levels of HSPs are hypothesized to be toxic due to disruption of cell functions, possibly by excessive binding of client proteins. These results suggest that, given its costs, HSP expression is better suited to temporary stressors than to chronic ones.
Chaperone buffering
Chaperones have also been implicated in understanding the relationship between genotype and phenotype. Protein folding itself is a transition from genotype to phenotype: the primary structure/amino acid sequence reflects genotype, while the final, functional fold, either tertiary or quaternary structure, represents phenotype. Since chaperones mediate this transition by assisting in the fold of the client protein, chaperone activity is thought to modulate the adaptive evolution of the proteome.
One observation in line with this hypothesis is chaperone buffering, where the activity of a chaperone masks or “buffers” deleterious or destabilizing mutations in a client protein. In Drosophila melanogaster, reduced activity of hsp90 resulted in deficient phenotypes caused by mutations in developmental pathways. Hsp70 in Drosophila was also shown to buffer deleterious mutations. Similar results have been shown in Saccharomyces cerevisiae and Arabidopsis thaliana. Work in Escherichia coli showed that the GroES/GroEL system (aka hsp10 and hsp60 respectively) similarly buffered the effect of destabilizing mutations in a phosphotriesterase. The mutation disrupted the fold of the protein, but conferred an increase in efficiency upon chaperone-assisted folding. These results illustrate a model in which evolution can act on the phenotype of a protein while the deleterious effect of the genotype is mitigated by chaperones.
Chaperones and the endosymbiosis theory
Chaperones are ancient proteins that have been evolutionarily conserved across all domains of life and are ubiquitous across all biological taxa. Since they are so widespread and ancient, they can be used as molecular markers in studies of ancient cellular evolution.
Phylogenetic analysis using two families of HSPs (hsp10 and hsp60, also called chaperonins) support the current endosymbiosis model of the origin of mitochondria and chloroplasts. Hsp10 and hsp60 are present in all eubacteria and organelles of eukaryotes (mitochondria and chloroplasts), but not in eukaryotic cell cytosol and archaebacteria. Phylogenetic trees were generated using 56 total amino acid sequences from Gram positive and Gram negative bacteria; mitochondria from plants, animals, fungi, and protists; cyanobacteria; and chloroplasts. Any two hsp60 amino acid sequences share at least 40% similarity, with 18-20% of differences coming from conservative changes (uncharged amino acid to another uncharged amino acid). Any two hsp10 amino acid sequences share at least 30% similarity, with 15-20% conservative changes. Phylogenetic analysis using hsp10 and hsp60 yield similar results to that of rRNA and other genes. Mitochondria were found to be most closely related to the α-purple subdivision of Gram negative bacteria and chloroplasts were most similar to cyanobacteria, similar to other data supporting the endosymbiosis theory. Gram positive bacteria were found to be the most ancestral, which is also supported by other studies.
References
Evolutionary biology concepts
Homeostasis
Molecular evolution
Protein biosynthesis
Protein folding
Molecular chaperones, evolution
Proteomics | Evolution of molecular chaperones | Chemistry,Biology | 1,485 |
56,935,931 | https://en.wikipedia.org/wiki/Trendione | Trendione (developmental code name RU-2065; nickname Trenavar), also known as estra-4,9,11-triene-3,17-dione, is an androgen prohormone as well as a metabolite of the anabolic steroid trenbolone. Trendione is to trenbolone as androstenedione is to testosterone. The compound is itself inactive, showing more than 100-fold lower affinity for the androgen and progesterone receptors than trenbolone. It is a designer steroid and has been sold on the internet as a "nutritional supplement". Trendione is listed in the United States Designer Anabolic Steroid Control Act of 2014.
See also
List of androgens/anabolic steroids
References
Abandoned drugs
Anabolic–androgenic steroids
Designer drugs
Diketones
Estranes
Human drug metabolites
Prodrugs
Progestogens | Trendione | Chemistry | 200 |
9,091,963 | https://en.wikipedia.org/wiki/Units%20of%20textile%20measurement | Textile fibers, threads, yarns and fabrics are measured in a multiplicity of units.
A fiber, a single filament of natural material, such as cotton, linen or wool, or artificial material such as nylon, polyester, metal or mineral fiber, or human-made cellulosic fibre like viscose, Modal, Lyocell or other rayon fiber is measured in terms of linear mass density, the weight of a given length of fiber. Various units are used to refer to the measurement of a fiber, such as: the denier and tex (linear mass density of fibers), super S (fineness of wool fiber), worsted count, woolen count, linen count (wet spun) (or Number English (Ne)), cotton count (or Number English (Ne)), Number metric (Nm) and yield (the reciprocal of denier and tex).
A yarn, a spun agglomeration of fibers used for knitting, weaving or sewing, is measured in terms of cotton count and yarn density.
Thread, usually consisting of multiple yarns plied together producing a long, thin strand used in sewing or weaving, is measured in the same units as yarn.
Fabric, material typically produced by weaving, knitting or knotting textile fibers, yarns or threads, is measured in units such as the momme, thread count (a measure of the coarseness or fineness of fabric), ends per inch (e.p.i) and picks per inch (p.p.i).
Fibers
Micronaire
Micronaire is a measure of the air permeability of cotton fiber and is an indication of fineness and maturity. Micronaire affects various aspects of cotton processing.
Micron
One millionth of a metre, or one thousandth of a millimetre; about one-fourth the width of a strand of spider silk.
Cotton Bale Size
Cotton lint is usually measured in bales, although there is no standard and the bale size may vary country to country. For example, in the United States it measures approximately and weighs . In India, a bale equals .
S or super S number
Not a true unit of measure, S or super S number is an index of the fineness of wool fiber and is most commonly seen as a label on wool apparel, fabric, and yarn.
Slivers, tops and rovings
Slivers, tops and rovings are terms used in the worsted process. The sliver comes off the card, tops come after the comb, rovings come before a yarn, and all have a heavier linear density than the finished yarn.
Grams per metre
If the metric system is in use, the linear density of slivers and tops is given in grams per metre. Tops destined for machine processing are typically 20 grams per metre. Hobby spinners typically use a slightly heavier top.
Yield
Similar to tex and denier, yield is a term that helps describe the linear density of a roving of fibers. However, unlike tex and denier, yield is the inverse of linear density and is usually expressed in yards per pound (yd/lb).
Yarn and thread
Twist
Twists per inch
Number of twists per inch.
Twists per metre
Number of twists per metre.
Linear density
There are two systems used for presenting linear density, direct and indirect. When the direct method is used, the length is fixed and the weight of yarn is measured; for example, tex gives the weight in grams of one thousand metres of yarn. An indirect method fixes the weight and gives the length of yarn created.
Units
The textile industry has a long history and there are various units in use. Tex is more likely to be used in Canada and Continental Europe, while denier remains more common in the United States.
tex: Grams per 1,000 metres of yarn. Tex is a direct measure of linear density.
den (denier): Grams per 9,000 metres of yarn. Den is a direct measure of linear density.
dtex (deci-tex): Grams per 10,000 metres of yarn. Dtex is a direct measure of linear density.
gr/yard: Grains per yard of yarn. Gr/yard is a direct measure of linear density, but is rarely used in the modern textile industry.
ECC or NeC or Ne (English Cotton Count): The number of 840 yd lengths per pound. ECC is an indirect measure of linear density. It is the number of hanks of skein material that weighs 1 lb. Under this system, the higher the number, the finer the yarn. In the United States cotton counts between 1 and 20 are referred to as coarse counts.
NeK or NeW (Worsted Count): The number of 560 yd lengths per 1 lb of yarn. NeK is an indirect measure of linear density. NeK is also referred to as the spinning count.
NeL or Lea (Linen Count): The number of 300 yd lengths per 1 lb of yarn. NeL is an indirect measure of linear density.
NeS (Woollen Count or Yorkshire Skeins Woollen): The number of 256 yd lengths per 1 lb of yarn. NeS is an indirect measure of linear density. One of the best known of the many different woolen yarn counts.
Conversion table
The following table summarizes several measures of linear density and gives equivalences.
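As a rough illustration of these equivalences, here is a minimal Python sketch based only on the definitions above; the function and constant names are illustrative, not taken from the article:

# Linear density conversions implied by the definitions above.
# Direct measures: tex = g/1,000 m, denier = g/9,000 m, dtex = g/10,000 m.
# Indirect measures: number of standard-length hanks per pound.

YARD_M = 0.9144          # metres per yard
POUND_G = 453.59237      # grams per pound

def tex_to_denier(tex: float) -> float:
    """1 tex = 9 denier, since denier is based on 9,000 m rather than 1,000 m."""
    return tex * 9.0

def tex_to_dtex(tex: float) -> float:
    """1 tex = 10 dtex."""
    return tex * 10.0

def indirect_count_to_tex(count: float, hank_yards: float) -> float:
    """Convert an indirect count (hanks per pound) to tex (grams per 1,000 m).

    hank_yards is 840 for English cotton count (NeC), 560 for worsted count
    (NeK) and 300 for linen count (NeL), as defined above.
    """
    metres_per_pound = count * hank_yards * YARD_M
    return POUND_G / metres_per_pound * 1000.0

# Example: a 20 NeC cotton yarn is roughly 29.5 tex, i.e. about 266 denier.
tex_20s = indirect_count_to_tex(20, 840)
print(round(tex_20s, 1), round(tex_to_denier(tex_20s)))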
Denier
Denier or den (abbreviated D), a unit of measure for the linear mass density of fibers, is the mass in grams per 9,000 metres of the fiber. The denier is based on a natural reference: a single strand of silk is approximately one denier; a 9,000-metre strand of silk weighs about one gram. The term denier comes from the French denier, a coin of small value (worth one-twelfth of a sou). Applied to yarn, a denier was held to be equal in weight to .
There is a difference between filament and total measurements in deniers. Both are defined as above, but the first relates to a single filament of fiber (commonly called denier per filament (DPF)), whereas the second relates to a yarn.
Broader terms, such as fine may be applied, either because the overall yarn is fine or because fibers within this yarn are thin. A 75-denier yarn is considered fine even if it contains only a few fibers, such as thirty 2.5-denier fibers; but a heavier yarn, such as 150 denier, is considered fine only if its fibers are individually as thin as one denier.
The following relationship applies to straight, uniform filaments:
DPF = total denier / quantity of uniform filaments
The denier system of measurement is used on two- and single-filament fibers. Some common calculations are as follows:
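The arithmetic can be illustrated with a short Python sketch (a hedged example; the numbers follow directly from the definitions above):

# Denier per filament (DPF) and the per-metre mass of one denier.

def dpf(total_denier: float, filament_count: int) -> float:
    """DPF = total denier / quantity of uniform filaments."""
    return total_denier / filament_count

# A 75-denier yarn made of thirty filaments has a DPF of 2.5,
# matching the "fine" yarn example discussed above.
print(dpf(75, 30))            # 2.5

# One denier is 1 g per 9,000 m, i.e. roughly 0.111 mg per metre.
print(1.0 / 9000 * 1000)      # ~0.111 mg/m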
In practice, measuring 9,000 metres of fiber is both time-consuming and unrealistic. Generally a sample of 900 metres is weighed, and the result is multiplied by ten to obtain the denier weight.
A fiber is generally considered a microfiber if it is one denier or less.
A one-denier polyester fiber has a diameter of about ten micrometres.
In tights and pantyhose, the linear density of yarn used in the manufacturing process determines the opacity of the article in the following categories of commerce: ultra sheer (below 10 denier), sheer (10 to 30 denier), semi-opaque (30 to 40 denier), opaque (40 to 70 denier) and thick opaque (70 denier or higher).
For single fibers, instead of weighing, a machine called a vibroscope is used. A known length of the fiber (usually 20 mm) is set to vibrate, and its fundamental frequency measured, allowing the calculation of the mass and thus the linear density.
Yarn length
Given the linear density and weight the yarn length can be calculated; for example:
l ≈ 1693 × Nec × m, where l is the yarn length in metres, Nec is the English cotton count and m is the yarn weight in kilograms (840 yd per hank × 0.9144 m/yd ÷ 0.4536 kg/lb ≈ 1693 metres per count per kilogram).
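A quick worked example with illustrative values (half a kilogram of Ne 30 cotton yarn):
\[
l \approx 1693 \times 30 \times 0.5 \approx 2.54 \times 10^{4}\ \text{m},
\]
i.e. roughly 25 km of yarn in half a kilogram.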
The following length units are defined.
Bundle: usually
Thread: a length of —the circumference of a warp beam
Lea:
Hank: a length of 7 leas or, for cotton, 840 yards
Spyndle: —used in the English rope industry
Fabrics
Grams per square metre (GSM)
Fabric weight is measured in grams per square metre or g/m2 (also abbreviated as GSM). GSM is the metric measurement of the mass per unit area of a fabric and is a critical parameter for any textile product. The weight affects the density, thickness and many physical properties of the fabric, such as strength, and it informs the costing per linear metre and the intended use of the fabric. In the metric system, the mass per unit area of all types of textiles is expressed in grams per square metre (g/m2).
The gram (alternative spelling: gramme; SI unit symbol: g) is a metric unit of mass, defined as one thousandth of the SI base unit, the kilogram. The square metre (alternative spelling: square meter; SI unit symbol: m2) is an area equal to that of a square whose sides are each one metre long.
Typically, a cheap T-shirt fabric is approximately 150 g/m2. The GSM of a fabric helps in determining its consumption, cost and application; a higher GSM corresponds to a thicker, heavier construction.
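A minimal Python sketch of the GSM arithmetic (the conversion constants are standard; the function names and sample values are illustrative):

# GSM (g/m^2) utilities: conversion to ounces per square yard and the
# weight of a cut piece of fabric.

SQYD_PER_SQM = 1.19599        # square yards per square metre
GRAMS_PER_OUNCE = 28.3495

def gsm_to_oz_per_sqyd(gsm: float) -> float:
    """Convert grams per square metre to ounces per square yard."""
    return gsm / SQYD_PER_SQM / GRAMS_PER_OUNCE

def piece_weight_g(gsm: float, length_m: float, width_m: float) -> float:
    """Weight in grams of a rectangular piece of fabric."""
    return gsm * length_m * width_m

# A 150 g/m^2 T-shirt fabric is about 4.4 oz/yd^2; a 2 m x 1.5 m cut weighs 450 g.
print(round(gsm_to_oz_per_sqyd(150), 1), piece_weight_g(150, 2.0, 1.5))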
Mommes
The momme (mm), traditionally used to measure silk fabrics, is the weight in pounds of a piece of fabric sized 45 inches by 100 yards (1.2 m by 90 m). One momme = 4.340 g/m2; 8 mommes is approximately 1 ounce per square yard or 35 g/m2.
The momme is based on the standard silk width of 45 inches (though silk is regularly produced in other widths, and uncommonly in larger ones).
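The momme-to-GSM relationship stated above can be expressed the same way (a small illustrative snippet):

# Momme to metric fabric weight, using the equivalence stated above
# (1 momme = 4.340 g/m^2).

def momme_to_gsm(momme: float) -> float:
    return momme * 4.340

# 8 momme is about 35 g/m^2, i.e. roughly one ounce per square yard.
print(momme_to_gsm(8))   # 34.72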
The usual range of momme weight for different weaves of silk are:
Habutai—5 to 16 mm
Chiffon—6 to 8 mm (can be made in double thickness, i.e. 12 to 16 mm)
Crepe de Chine—12 to 16 mm
Gauze—3 to 5 mm
Raw silk—35 to 40 mm (heavier silks appear more "wooly")
Organza—4 to 6 mm
Charmeuse—12 to 30 mm
The higher the weight in mommes, the more durable the weave and the more suitable it is for heavy-duty use. Also, the heavier the silk, the more opaque it becomes. This can vary even within the same weave of silk: for example, lightweight charmeuse is translucent when used in clothing, but 30-momme charmeuse is opaque.
Thread count
Thread count, also called threadcount or threads per inch (TPI), is a measure of the coarseness or fineness of fabric. It is measured by counting the number of threads contained in one square inch of fabric or one square centimetre, including both the length (warp) and width (weft) threads. The thread count is the number of threads counted along two sides (up and across) of the square inch, added together. It is used especially with cotton linens such as bed sheets, and has been known to be used in the classification of towels.
There is a common misconception that thread count is an important consideration when purchasing bedding. However, linen experts claim that beyond a thread count of 400, there is no difference in quality. They further highlight that sheet material is of greater importance than thread count. The amount of thread that can fit into a square inch of fabric is limited, suggesting that bedding beyond 400 count is likely a marketing strategy. Inflated thread counts are usually the result of including the number of strands in a twisted yarn in the claimed thread count.
Industry standard
Thread count is often used as a measure of fabric quality, thus "standard" cotton thread counts are around 150 while "good-quality" sheets start at 180 and a count of 200 or higher is considered "percale". Some (but not all) extremely high thread counts (typically over 500) mislead as they usually count the individual threads in "plied" yarns (a yarn that is made by twisting together multiple finer threads). For marketing purposes, a fabric with 250 two-ply yarns in both the vertical and horizontal direction could have the component threads counted to a 1,000 thread count although according to the National Textile Association (NTA), which cites the international standards group ASTM International, accepted industry practice is to count each thread as one, even threads spun as two- or three-ply yarn. The Federal Trade Commission in an August 2005 letter to the NTA agreed that consumers "could be deceived or misled" by inflated thread counts.
In 2002, ASTM proposed a definition for "thread count" that has been called "the industry's first formal definition for thread count". A small number of ASTM committee members argued for the higher yarn count number obtained by counting each single yarn in a plied yarn, citing as authority the provision relating to woven fabric in the Harmonized Tariff Schedule of the United States, which states each ply should be counted as one using the "average yarn number." In 2017, the U.S. International Trade Commission issued a General Exclusion Order barring entry of woven textile fabrics and products marked with inflated thread counts. The inflated thread counts were deemed false advertising under section 43 of the Lanham Act, 15 U.S.C. 1125(a)(1)(B).
In tartans
In the context of tartans, thread counts are used not for determining coarseness, but rather for recording and reliably repeating the cross-striped pattern of the cloth. Such a thread count (which for the typical worsted woollen cloth used for a kilt must in total be divisible by 4) is given as a series of colour-code and thread-count pairs. Sometimes, with typical symmetrical (reflective) tartans, slash (/ ) markup at the ends is used to indicate whether (and how much of) a "pivot" colour is to be repeated when the design is mirrored and repeated backwards. For example, calls for a pattern of (left to right) blue, white, blue, red, black, green, and white, and indicates that when mirrored the two white threads (going one direction) or 24 blue threads (going the other) are repeated after mirroring, resulting in a total of 4 white going rightward and 48 blue heading left. This is known as a half-count at pivot thread count. The same sett (technically a half-sett) could also be represented , in a full-count at pivot thread count; this indicates that after the four white threads, the pattern resumes backwards with 24 green without repetition of any of the white count. The old style, without slash markup——is considered ambiguous, but is most often interpreted as a full count. The comparatively rare non-symmetrical tartans are given in full setts and are simply repeated without mirroring.
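The mirroring rule described above can be sketched in a few lines of Python; the sample sett used here is hypothetical and purely illustrative, not the count referred to in the text:

# Expand a symmetric (mirrored) tartan half-sett into one full repeat.
# Each entry is a (colour_code, thread_count) pair; the first and last
# entries are the pivots.

def one_repeat(half_sett, half_count_at_pivot=True):
    if half_count_at_pivot:
        # Half-count pivots: the recorded pivot threads are laid down again
        # when the pattern turns, so the pivot stripe doubles in the cloth
        # (the leading pivot doubles across adjacent repeats).
        mirrored = half_sett + list(reversed(half_sett))
    else:
        # Full-count pivots: the pattern resumes backwards immediately,
        # without repeating either pivot.
        mirrored = half_sett + list(reversed(half_sett[1:-1]))
    # Merge adjacent runs of the same colour (e.g. the doubled end pivot).
    merged = []
    for colour, count in mirrored:
        if merged and merged[-1][0] == colour:
            merged[-1] = (colour, merged[-1][1] + count)
        else:
            merged.append((colour, count))
    return merged

# Hypothetical half-sett: blue and white pivots with red/green stripes between.
print(one_repeat([("B", 24), ("R", 4), ("G", 24), ("W", 2)]))
# [('B', 24), ('R', 4), ('G', 24), ('W', 4), ('G', 24), ('R', 4), ('B', 24)]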
Ends per inch
Ends per inch (EPI or e.p.i.) is the number of warp threads per inch of woven fabric. In general, the higher the ends per inch, the finer the fabric is.
Ends per inch is very commonly used by weavers who must use the number of ends per inch in order to pick the right reed to weave with. The number of ends per inch varies on the pattern to be woven and the thickness of the thread. The number of times the thread can be wrapped around a ruler in adjacent turns over an inch is called the wraps per inch. Plain weaves generally use half the number of wraps per inch for the number of ends per inch, whereas denser weaves like a twill weave will use a higher ratio like two-thirds of the number of wraps per inch. Finer threads require more threads per inch than thick ones and thus result in a higher number of ends per inch.
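A small illustration of the wraps-per-inch rule of thumb given above (the ratios are approximations, as the text notes; names and values are illustrative):

# Estimate ends per inch (sett) from wraps per inch for two common weaves.

def ends_per_inch(wraps_per_inch: float, weave: str = "plain") -> float:
    ratios = {"plain": 0.5, "twill": 2.0 / 3.0}
    return wraps_per_inch * ratios[weave]

# A yarn wrapping 24 times per inch suggests about 12 e.p.i. for plain weave
# and about 16 e.p.i. for a twill.
print(ends_per_inch(24, "plain"), round(ends_per_inch(24, "twill")))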
The number of ends per inch in a piece of woven cloth varies depending on the stage of manufacture. Before the cloth is woven, the warp has a certain number of ends per inch, which is directly related to the size reed being used. After weaving, the number of ends per inch will increase, and it will increase again after being washed. This increase in the number of ends per inch (and picks per inch) and shrinkage in the size of the fabric is known as the take-up. The take-up depends on many factors, including the material and how tightly the cloth is woven. Tightly woven fabric shrinks more (and thus the number of ends per inch increases more) than loosely woven fabric, as do more elastic yarns and fibers.
Picks per inch
Picks per inch (or p.p.i.) is the number of weft threads per inch of woven fabric. A pick is a single weft thread, hence the term. In general, the higher the picks per inch, the finer is the fabric.
Courses and wales
Loops are the building blocks of knitted fabrics, and courses and wales in knitted fabrics are closely analogous to ends and picks in woven fabrics. The knitted structure is formed by intermeshing the loops in consecutive rows.
Courses are the horizontal rows of loops, counted per inch or per centimetre. A course is a horizontal row of loops formed by all the adjacent needles during one revolution. Course length is obtained by multiplying the loop length by the number of needles involved in producing the course.
Wales are the vertical columns of loops, counted per inch or per centimetre.
The number of courses and wales per inch or per centimetre indicates, more or less, how tightly or loosely the fabric is knitted. Stitch or loop density is the total number of loops in a unit area, such as per square centimetre or per square inch.
Stitch/loop length is a major factor in a knitted fabric's overall quality, affecting dimensional stability, drape and appearance. Loop length is the length of yarn contained in one loop.
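The quantities defined in this subsection can be tied together in a short illustrative snippet (sample values are invented):

# Knitted-fabric quantities from the definitions above.

def stitch_density(courses_per_cm: float, wales_per_cm: float) -> float:
    """Loops per square centimetre = courses/cm x wales/cm."""
    return courses_per_cm * wales_per_cm

def course_length_mm(loop_length_mm: float, needles: int) -> float:
    """Course length = loop length x number of needles knitting the course."""
    return loop_length_mm * needles

# 14 courses/cm and 10 wales/cm give 140 loops per cm^2; a 2.8 mm loop
# knitted on 1,200 needles gives a course length of 3,360 mm.
print(stitch_density(14, 10), course_length_mm(2.8, 1200))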
Air permeability
Air permeability is a measure of the ability of air to pass through a fabric. It is defined as "the volume of air in cubic centimetres (cm3) which is passed in one second through 100 cm2 of the fabric at a pressure difference of 10 cm head of water", also known as the Gurley unit. It is standardized by, among others, ASTM D737-18 and ISO 9237:1995.
Factors that affect air permeability include porosity, fabric thickness and construction, yarn density, twist, crimp, layering, and moisture within the fabric.
The concept of air permeability is important for the design of active wear and insect netting.
References
Bibliography
External links
Textiles Intelligence Glossary
Textiles
textile measurement | Units of textile measurement | Mathematics | 3,895 |
38,605,597 | https://en.wikipedia.org/wiki/Coronal%20rain | Coronal rain is a phenomenon that occurs in the Sun's corona when hot plasma cools and condenses in strong magnetic fields and falls to the photosphere. It is usually associated with active regions. Coronal rain forms when impulsive heating from magnetic reconnection occurs.
The material that makes up the coronal rain can be up to hundreds of times cooler than the surrounding environment.
References
External links
July 2012: Coronal Rain
The Sun's Coronal Rain Puzzle Solved : Discovery News
Solar phenomena
Articles containing video clips | Coronal rain | Physics | 109 |
367,986 | https://en.wikipedia.org/wiki/Soo%20Locks | The Soo Locks (sometimes spelled Sault Locks but pronounced "soo") are a set of parallel locks, operated and maintained by the United States Army Corps of Engineers, Detroit District, that enable ships to travel between Lake Superior and the lower Great Lakes. They are located on the St. Marys River between Lake Superior and Lake Huron, between the Upper Peninsula of the U.S. state of Michigan and the Canadian province of Ontario. They bypass the rapids of the river, where the water falls . The locks pass an average of 10,000 ships per year, despite being closed during the winter from January through March, when ice shuts down shipping on the Great Lakes. The winter closure period is used to inspect and maintain the locks.
The locks share a name (usually shortened and anglicized as Soo) with the two cities named Sault Ste. Marie, in Ontario and in Michigan, located on either side of the St. Marys River. The Sault Ste. Marie International Bridge between the United States and Canada permits vehicular traffic to pass over the locks. A railroad bridge crosses the St. Marys River just upstream of the highway bridge.
The first locks were opened in 1855. Along with the Erie Canal, completed in 1825 in central New York State, they were among the great infrastructure engineering projects of the antebellum United States. The Soo Locks were designated a National Historic Landmark in 1966.
United States locks
The U.S. locks form part of a canal formally named the St. Marys Falls Canal. The entire canal, including the locks, is owned and maintained by the United States Army Corps of Engineers, which provides free passage. The first iteration of the U.S. Soo Locks was completed in May 1855; it was operated by the state of Michigan until transferred to the U.S. Army in 1881.
Locks
The configuration consists of two parallel lock chambers. Starting at the Michigan shoreline and moving north toward Ontario, these are:
The MacArthur Lock, built in 1943. It is long, wide, and deep. This is large enough to handle ocean-going vessels ("salties") that must also pass through the smaller locks in the Welland Canal. The first vessel through was the SS Carl D. Bradley. Per 33 CFR § 207.440 (v), "The maximum overall dimensions of vessels that will be permitted to transit MacArthur Lock are 730 feet in length and 75 feet in width, except as provided in paragraph (v)(1) of this section." Per U.S. Army Corps of Engineers, Sault St Marie, the length of the ship is restricted to 730’ due to the southwest wall alignment entering and exiting the MacArthur Lock.
The Poe Lock, built in 1896. The first vessel to pass through was the U.S. Army Corps of Engineers tug USS Hancock. The original Poe Lock was engineered by Orlando Poe and, at long and wide, was the largest in the world when completed in 1896. The lock was re-built in 1968 to accommodate larger ships, after the Saint Lawrence Seaway opened and made passage of such ships possible to the Great Lakes. It is now long, wide, and deep. It can take ships carrying of cargo. The Poe is the only lock that can handle the large lake freighters used on the Upper Lakes. The first passage after the rebuild was by the Phillip R. Clarke in 1969.
Former locks
The State Lock, built between 1853 and 1855. The State of Michigan was given land by the federal government to construct a lock to allow for quicker transit of the new copper and iron ore deposits discovered around the Lake Superior basin. The lock consisted of two chambers back-to-back to bridge the difference in water level. Each chamber was long, wide at the top of its walls and at its bottom, and deep. The State Lock was replaced by the original Poe Lock in 1896.
The Weitzel Lock was built between 1873 and 1881 directly south of the State Lock, and was the first lock to be operated by the federal government. At long, wide, and deep, it was the longest lock in the world upon its completion. It was decommissioned in 1919, and was eventually replaced by the MacArthur Lock in 1943.
The Davis Lock, built in 1914. At the time of its completion, the Davis Lock was the longest lock in the world at long, and was also wide and deep. It was officially decommissioned in 2010.
The Sabin Lock, built in 1919. It was constructed as a twin lock to the Davis Lock, and named after Louis Sabin, who served as the Detroit District Engineer. It was officially decommissioned in 2010 at the same time as the Davis Lock.
New lock
A new lock is under construction and is slated to be completed by 2030. Groundbreaking for the new lock project was held on June 30, 2009. The lock will be equal in size to the Poe Lock and will provide much needed additional capacity for the large lake freighters. The new lock replaces two locks (Davis Lock and Sabin Lock), which were obsolete and used infrequently. In May 2020, construction on Phase One of the replacement of the Sabin Lock was started.
North of the new lock is an additional channel with a small hydroelectric plant, which provides electricity for the lock complex.
Engineers Day
The U.S. Army Corps of Engineers, Detroit District, operates the Soo Locks Visitors Center and viewing deck for the public. On the last Friday of every June, the public is allowed to go behind the security fence and cross the lock gates of the U.S. Soo Locks for the annual Engineers Day Open House. During this event, visitors are able to get close enough to touch ships passing through the two regularly operating locks. Other than on that day, because the locks are United States Federal property under command of the U.S. Army Corps of Engineers, unauthorized personnel and civilians are restricted from the locks under threat of fines or imprisonment for trespassing.
Canadian lock
The first lock to be built in the St. Marys River was on the Canadian side in 1798 by the Northwest Fur Company to facilitate the fur trade. It was destroyed by the Americans in 1814 during the War of 1812 to disrupt British trade. Currently, a single small lock is operated on the Canadian side of the Soo. Opened in 1895, it was rebuilt in 1987, and is long, wide and deep. The Canadian lock is used for recreational and tour boats; major shipping traffic uses the U.S. locks.
Gallery
References
33 CFR 207.440
33 CFR 207.441
Further reading
Briggs, Michelle (July/August 2024). "Charles T. Harvey: And America's First Soo Lock". Michigan History. p. 52+. Lansing, Michigan: Historical Society of Michigan. ISSN 0026-2196. Retrieved via Gale OneFile
External links
Aerial views
Soo Locks homepage U.S. Army Corps of Engineers Soo Locks page
Web Camera view of the American locks
Animation of how the Soo Locks work.
YouTube video HD video of a ship passing through the MacArthur Lock
Canals in Michigan
Locks of the United States
Locks on the National Register of Historic Places
Great Lakes Waterway
Ship canals
St. Marys River (Michigan–Ontario)
Buildings and structures in Sault Ste. Marie, Michigan
Michigan State Historic Sites in Chippewa County
National Historic Landmarks in Michigan
Canals on the National Register of Historic Places in Michigan
Transportation in Chippewa County, Michigan
National Register of Historic Places in Chippewa County, Michigan
Transportation buildings and structures on the National Register of Historic Places in Michigan
Transportation buildings and structures in Michigan
1855 establishments in Michigan
United States Army Corps of Engineers
Canada–United States border | Soo Locks | Engineering | 1,568 |
34,119,939 | https://en.wikipedia.org/wiki/SCH%20900271 | SCH 900271 is a nicotinic acid derivative designed to treat dyslipidemia. It reduced plasma free fatty acids levels, but without significant flushing, a side effect common with niacin that limits its usefulness. SCH 900271 is currently in human trials.
References
Hypolipidemic agents
Cyclopropanes
Pyrimidinediones | SCH 900271 | Chemistry | 81 |