Dataset columns: id (int64, values 39 to 79M); url (string, lengths 31 to 227); text (string, lengths 6 to 334k); source (string, lengths 1 to 150); categories (list, lengths 1 to 6); token_count (int64, values 3 to 71.8k); subcategories (list, lengths 0 to 30)
8,721,609
https://en.wikipedia.org/wiki/V%20Centauri
V Centauri (V Cen) is a Classical Cepheid variable, a type of variable star, in the constellation Centaurus. It is approximately 2,350 light-years (720 parsecs) away based on parallax. Alexander W. Roberts discovered this star in 1894, and from 267 visual observations he determined its period of variation. V Centauri varies regularly between visual magnitudes 6.42 and 7.22 every 5.5 days. It is classified as a Cepheid variable on the basis of its light variations, with the brightness increase from minimum to maximum taking only a third of the time of the decrease from maximum to minimum. Cepheids are pulsating variable stars: V Centauri expands and contracts over its pulsation cycle and also changes temperature. According to the South African Astronomical Observatory, the chemical composition was derived as being high in sodium (Na) and aluminium (Al) and low in magnesium (Mg). This follows the normal composition for a Cepheid star, and V Cen does not show any unusual characteristics. V Centauri's composition was observed alongside six other Classical Cepheid variable stars with the support of Russian, Chilean, and Ukrainian observatories. References Centaurus F-type supergiants Centauri, V 071116 Durchmusterung objects 127297 5421 Classical Cepheid variables
V Centauri
[ "Astronomy" ]
294
[ "Centaurus", "Constellations" ]
8,721,696
https://en.wikipedia.org/wiki/W%20Centauri
The designations W Centauri and w Centauri refer to two different stars in the constellation Centaurus: W Centauri, the variable star designation for the faint Mira variable HD 103513; and HD 110458, a red giant also known by its Latin-letter Bayer designation w Centauri. See also ω Centauri Centaurus
W Centauri
[ "Astronomy" ]
71
[ "Centaurus", "Constellations" ]
8,721,698
https://en.wikipedia.org/wiki/Resolvent%20set
In linear algebra and operator theory, the resolvent set of a linear operator is a set of complex numbers for which the operator is in some sense "well-behaved". The resolvent set plays an important role in the resolvent formalism. Definitions Let X be a Banach space and let $L \colon D(L) \subseteq X \to X$ be a linear operator with domain $D(L)$. Let id denote the identity operator on X. For any $\lambda \in \mathbb{C}$, let $L_\lambda = L - \lambda\,\mathrm{id}$. A complex number $\lambda$ is said to be a regular value if the following three statements are true: $L_\lambda$ is injective, that is, the corestriction of $L_\lambda$ to its image has an inverse $R(\lambda, L)$ called the resolvent; $R(\lambda, L)$ is a bounded linear operator; $R(\lambda, L)$ is defined on a dense subspace of X, that is, $L_\lambda$ has dense range. The resolvent set of L is the set of all regular values of L: $\rho(L) = \{\lambda \in \mathbb{C} \mid \lambda \text{ is a regular value of } L\}$. The spectrum $\sigma(L) = \mathbb{C} \setminus \rho(L)$ is the complement of the resolvent set and is subject to a mutually singular spectral decomposition into the point spectrum (when condition 1 fails), the continuous spectrum (when condition 2 fails) and the residual spectrum (when condition 3 fails). If $L$ is a closed operator, then so is each $L_\lambda$, and condition 3 may be replaced by requiring that $L_\lambda$ be surjective. Properties The resolvent set of a bounded linear operator L is an open set. More generally, the resolvent set of a densely defined closed unbounded operator is an open set. Notes References (See section 8.3) External links See also Resolvent formalism Spectrum (functional analysis) Decomposition of spectrum (functional analysis) Linear algebra Operator theory
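A brief worked illustration of the definition above (a standard finite-dimensional example added here, not taken from the article): for a bounded operator on a finite-dimensional space all three conditions reduce to invertibility, so the resolvent set is simply the complement of the set of eigenvalues.

```latex
% Minimal worked example (assumed standard, not from the article):
% for the operator L on X = \mathbb{C}^2 given by the matrix below,
% the resolvent exists and is bounded exactly when \lambda is not an eigenvalue.
\[
L = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \qquad
L_\lambda = L - \lambda\,\mathrm{id} =
\begin{pmatrix} 2-\lambda & 0 \\ 0 & 3-\lambda \end{pmatrix},
\]
\[
R(\lambda, L) = (L - \lambda\,\mathrm{id})^{-1} =
\begin{pmatrix} \frac{1}{2-\lambda} & 0 \\ 0 & \frac{1}{3-\lambda} \end{pmatrix}
\quad (\lambda \neq 2, 3),
\]
\[
\rho(L) = \mathbb{C} \setminus \{2, 3\}, \qquad \sigma(L) = \{2, 3\}.
\]
```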
Resolvent set
[ "Mathematics" ]
307
[ "Linear algebra", "Algebra" ]
8,721,871
https://en.wikipedia.org/wiki/Dark-sky%20preserve
A dark-sky preserve (DSP) is an area, usually surrounding a park or observatory, that restricts or reduces light pollution or maintains and protects naturally dark night skies. Different terms have been used to describe these areas as national organizations and governments have worked independently to create programs. DarkSky International (DarkSky) uses "International Dark Sky Reserve" (IDSR) and "International Dark Sky Park" (IDSP) among others when certifying Dark Sky Places. History DarkSky International was founded in 1988 to reserve public or private land for an exquisite outlook of nocturnal territories and starry night skies. Dark-sky preserves are specifically conserved for their cultural, scientific, natural, or educational value and public enjoyment. In 2007, the Mont Mégantic Observatory in Quebec was the first site to be certified as an International Dark Sky Reserve by DarkSky. The same year, Natural Bridges National Monument in Utah became the first International Dark Sky Park. The Gabriela Mistral Dark Sky Sanctuary in the Elqui Valley of Chile was designated as the world's first International Dark Sky Sanctuary in 2015. A dark-sky preserve, or dark-sky reserve, should be sufficiently dark to promote astronomy. The lighting protocol for a dark-sky preserve is based on the sensitivity of wildlife to artificial light at night. Canada has established an extensive and stringent standard for dark-sky preserves, that addresses lighting within dark-sky preserves and influences from skyglow from urban areas in the region. This was based on the work of the Royal Astronomical Society of Canada. Dark Sky Places DarkSky International's Dark Sky Places program currently offers five types of designations: International Dark Sky Communities – Communities are legally organized cities and towns that adopt quality outdoor lighting ordinances and undertake efforts to educate residents about the importance of dark skies. International Dark Sky Parks – Parks are publicly or privately owned spaces protected for natural conservation that implement good outdoor lighting and provide dark sky programs for visitors. International Dark Sky Reserves – Reserves consist of a dark "core" zone surrounded by a populated periphery where policy controls are enacted to protect the darkness of the core. These sites are established by a partnership of multiple land managers. International Dark Sky Sanctuaries – Sanctuaries are the most remote (and often darkest) places in the world whose conservation state is most fragile. The geographic isolation of these places significantly limits opportunities for outreach, so this designation is designed to increase awareness of these sites and promote their long-term conservation. Urban Night Sky Places – These places do not qualify for designation within any other category but are recognized for their efforts to educate the public on the benefits of proper outdoor lighting that ensures safety while minimizing potential harm to the natural nighttime environment. Urban Night Sky Places can be municipal parks, open spaces, or similar properties near or surrounded by an urban environment, but whose planning and design actively promote an authentic nighttime experience in the midst of significant artificial light. 
Dark Sky Developments of Distinction recognize subdivisions, master planned communities, and unincorporated neighborhoods and townships whose planning actively promotes a more natural night sky but does not qualify them for the International Dark Sky Community designation. This designation was retired in 2020. Further designations include "Dark Sky Nation", given to the Kaibab Indian Reservation, and "Parashant International Night Sky Province-Window to the Cosmos", given to Grand Canyon-Parashant National Monument. Dark sky preserves, reserves, and parks As of January 2023, there are 201 certified Dark Sky Places globally: 38 Communities, 115 Parks, 20 Reserves, 16 Sanctuaries, 6 Developments of Distinction and 6 Urban Night Sky Places. Protected zones Around observatories Other Some regions, like the following, are protected without any reference to an observatory or a park. Regions of Coquimbo, Atacama, and Antofagasta in northern Chile The island of La Palma of the Canary Islands The Big Island of Hawaii Florida beach communities restrict lighting on beaches, to preserve hatchling Sea Turtles. By country Canada In the Canadian program, lighting within the area must be strictly controlled to minimize the impact of artificial lighting on wildlife. These guidelines are more stringent than in other countries that lack the extensive wilderness areas that still exist in Canada. The management of a Canadian DSP extends their outreach programs from the public that visit the site to include the promotion of better lighting policies in surrounding urban areas. Currently, dark-sky preserves have more control over internal and external lighting than other programs. With the increase in regional light pollution, some observatories have actively worked with cities in their region to establish protection zones where there is controlled light pollution. These areas may not yet have been declared dark-sky preserves. Although dark-sky preserve designations are generally sought by astronomers, it is clear that preserving natural darkness has positive effects on the health of nocturnal wildlife within the parks. For example, the nocturnal black-footed ferret was reintroduced to the Grasslands National Park dark-sky preserve and the success of the reintroduction is enhanced by the pristine natural darkness maintained within the park by the DSP agreement. See also Noctcaelador Scotobiology DarkSky International Dark-Sky Movement United States National Radio Quiet Zone References External links DarkSky International DarkSky International: International Dark Sky Places DarkSky International: Dark Sky Place Certification and Application Information Izera Dark-Sky Park Poloniny Dark-Sky Park Torrance Barrens Dark-Sky Preserve . Essay by Michael Silver. Royal Astronomical Society of Canada. Veľká Fatra Dark-Sky Park North Frontenac Dark Sky Preserve Light pollution Protected areas Darkness Environmental protection 1993 introductions
Dark-sky preserve
[ "Astronomy" ]
1,128
[ "Dark-sky preserves" ]
8,722,051
https://en.wikipedia.org/wiki/Laplacian%20smoothing
Laplacian smoothing is an algorithm to smooth a polygonal mesh. For each vertex in a mesh, a new position is chosen based on local information (such as the position of neighbours) and the vertex is moved there. In the case that a mesh is topologically a rectangular grid (that is, each internal vertex is connected to four neighbours) then this operation produces the Laplacian of the mesh. More formally, the smoothing operation may be described per-vertex as: $\bar{x}_i = \frac{1}{N} \sum_{j=1}^{N} x_j$, where $N$ is the number of adjacent vertices to node $i$, $x_j$ is the position of the $j$-th adjacent vertex and $\bar{x}_i$ is the new position for node $i$. See also Tutte embedding, an embedding of a planar mesh in which each vertex is already at the average of its neighbours' positions References Mesh generation Geometry processing
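The per-vertex averaging rule above maps directly to code. Below is a minimal sketch (function names and the vertex/adjacency data layout are illustrative assumptions, not from the article): one or more Jacobi-style smoothing passes over a mesh given as vertex positions plus an adjacency list.

```python
def laplacian_smooth(positions, neighbors, iterations=1):
    """Simple Laplacian smoothing.

    positions: list of (x, y) or (x, y, z) tuples, one per vertex.
    neighbors: list where neighbors[i] holds the vertex indices adjacent to vertex i.
    Each pass replaces every vertex that has neighbours with the average of its
    neighbours' current positions (boundary handling and weighting are omitted).
    """
    pts = [list(p) for p in positions]
    dim = len(pts[0])
    for _ in range(iterations):
        new_pts = []
        for i, nbrs in enumerate(neighbors):
            if not nbrs:                      # isolated vertex: leave in place
                new_pts.append(pts[i][:])
                continue
            avg = [sum(pts[j][d] for j in nbrs) / len(nbrs) for d in range(dim)]
            new_pts.append(avg)
        pts = new_pts
    return pts

# Usage: smooth the perturbed centre vertex of a 3x3 grid patch.
positions = [(0, 0), (1, 0), (2, 0),
             (0, 1), (1.4, 1.2), (2, 1),   # vertex 4 is perturbed
             (0, 2), (1, 2), (2, 2)]
neighbors = [[1, 3], [0, 2, 4], [1, 5],
             [0, 4, 6], [1, 3, 5, 7], [2, 4, 8],
             [3, 7], [6, 8], [5, 7]]
print(laplacian_smooth(positions, neighbors)[4])  # -> [1.0, 1.0]
```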
Laplacian smoothing
[ "Physics", "Mathematics" ]
161
[ "Mesh generation", "Tessellation", "Geometry", "Geometry stubs", "Symmetry" ]
8,722,168
https://en.wikipedia.org/wiki/Terminology%20extraction
Terminology extraction (also known as term extraction, glossary extraction, term recognition, or terminology mining) is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus. In the semantic web era, a growing number of communities and networked enterprises started to access and interoperate through the internet. Modeling these communities and their information needs is important for several web applications, like topic-driven web crawlers, web services, recommender systems, etc. The development of terminology extraction is also essential to the language industry. One of the first steps to model a knowledge domain is to collect a vocabulary of domain-relevant terms, constituting the linguistic surface manifestation of domain concepts. Several methods to automatically extract technical terms from domain-specific document warehouses have been described in the literature. Typically, approaches to automatic term extraction make use of linguistic processors (part of speech tagging, phrase chunking) to extract terminological candidates, i.e. syntactically plausible terminological noun phrases. Noun phrases include compounds (e.g. "credit card"), adjective noun phrases (e.g. "local tourist information office"), and prepositional noun phrases (e.g. "board of directors"). In English, the first two (compounds and adjective noun phrases) are the most frequent. Terminological entries are then filtered from the candidate list using statistical and machine learning methods. Once filtered, because of their low ambiguity and high specificity, these terms are particularly useful for conceptualizing a knowledge domain or for supporting the creation of a domain ontology or a terminology base. Furthermore, terminology extraction is a very useful starting point for semantic similarity, knowledge management, human translation and machine translation, etc. Bilingual terminology extraction The methods for terminology extraction can be applied to parallel corpora. Combined with e.g. co-occurrence statistics, candidates for term translations can be obtained. Bilingual terminology can be extracted also from comparable corpora (corpora containing texts within the same text type, domain but not translations of documents between each other). See also Computational linguistics Glossary Natural language processing Domain ontology Subject indexing Taxonomy (general) Terminology Text mining Text simplification References Tasks of natural language processing Library science terminology Computing terminology
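The pipeline described above (a linguistic filter that keeps syntactically plausible noun phrases, followed by statistical filtering of the candidates) can be sketched in a few lines. The example below is an illustrative assumption rather than a reference implementation: it takes pre-tagged tokens, keeps adjective-noun and noun-noun bigrams as terminological candidates, and ranks them by raw frequency as a stand-in for the statistical step.

```python
from collections import Counter

def extract_term_candidates(tagged_sentences, min_freq=2):
    """Toy terminology extraction over pre-tagged text.

    tagged_sentences: list of sentences, each a list of (word, pos) pairs
                      with coarse POS tags such as 'ADJ' and 'NOUN'.
    The syntactic filter keeps adjective-noun and noun-noun bigrams; the
    frequency threshold stands in for statistical / machine-learning filtering.
    """
    counts = Counter()
    for sent in tagged_sentences:
        for (w1, t1), (w2, t2) in zip(sent, sent[1:]):
            if t2 == "NOUN" and t1 in ("ADJ", "NOUN"):
                counts[(w1.lower(), w2.lower())] += 1
    return [(" ".join(pair), n) for pair, n in counts.most_common() if n >= min_freq]

# Usage with a tiny hand-tagged corpus:
corpus = [
    [("The", "DET"), ("credit", "NOUN"), ("card", "NOUN"), ("was", "VERB"), ("declined", "VERB")],
    [("A", "DET"), ("credit", "NOUN"), ("card", "NOUN"), ("needs", "VERB"),
     ("a", "DET"), ("local", "ADJ"), ("office", "NOUN")],
]
print(extract_term_candidates(corpus, min_freq=2))  # [('credit card', 2)]
```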
Terminology extraction
[ "Technology" ]
466
[ "Computing terminology" ]
8,722,775
https://en.wikipedia.org/wiki/Systematic%20code
In coding theory, a systematic code is any error-correcting code in which the input data are embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols. Systematic codes have the advantage that the parity data can simply be appended to the source block, and receivers do not need to recover the original source symbols if received correctly – this is useful for example if error-correction coding is combined with a hash function for quickly determining the correctness of the received source symbols, or in cases where errors occur in erasures and a received symbol is thus always correct. Furthermore, for engineering purposes such as synchronization and monitoring, it is desirable to get reasonably good estimates of the received source symbols without going through the lengthy decoding process which may be carried out at a remote site at a later time. Properties Every non-systematic linear code can be transformed into a systematic code with essentially the same properties (i.e., minimum distance). Because of the advantages cited above, linear error-correcting codes are therefore generally implemented as systematic codes. However, for certain decoding algorithms such as sequential decoding or maximum-likelihood decoding, a non-systematic structure can increase performance in terms of undetected decoding error probability when the minimum free distance of the code is larger. For a systematic linear code, the generator matrix, $G$, can always be written as $G = [I_k \,|\, P]$, where $I_k$ is the identity matrix of size $k$ and $P$ is the parity submatrix. Examples Checksums and hash functions, combined with the input data, can be viewed as systematic error-detecting codes. Linear codes are usually implemented as systematic error-correcting codes (e.g., Reed-Solomon codes in CDs). Convolutional codes are implemented as either systematic or non-systematic codes. Non-systematic convolutional codes can provide better performance under maximum-likelihood (Viterbi) decoding. In DVB-H, for additional error protection and power efficiency for mobile receivers, a systematic Reed-Solomon code is employed as an erasure code over packets within a data burst, where each packet is protected with a CRC: data in verified packets count as correctly received symbols, and if all are received correctly, evaluation of the additional parity data can be omitted, and receiver devices can switch off reception until the start of the next burst. Fountain codes may be either systematic or non-systematic: as they do not exhibit a fixed code rate, the set of source symbols is diminishing among the possible output set. Notes References Coding theory
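A concrete sketch of the $G = [I_k \,|\, P]$ idea (illustrative only; the parity submatrix below corresponds to one common convention for the (7,4) Hamming code and is used here as an assumed example): because the left part of the generator matrix is the identity, the message bits appear verbatim in the first k positions of the codeword and the parity bits are simply appended.

```python
# Systematic encoding with G = [I_k | P] over GF(2).
# P is the parity part of one common (7,4) Hamming code generator matrix,
# used purely as an illustrative example.
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]

def encode_systematic(message_bits):
    """Encode k message bits into an n-bit systematic codeword.

    Because G = [I_k | P], the codeword is the message itself followed by the
    parity bits m * P (mod 2).
    """
    k, r = len(P), len(P[0])
    assert len(message_bits) == k
    parity = [sum(message_bits[i] * P[i][j] for i in range(k)) % 2 for j in range(r)]
    return message_bits + parity

msg = [1, 0, 1, 1]
print(encode_systematic(msg))  # [1, 0, 1, 1, 0, 1, 0] -- the message bits appear unchanged
```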
Systematic code
[ "Mathematics" ]
519
[ "Discrete mathematics", "Coding theory" ]
8,723,207
https://en.wikipedia.org/wiki/Stokesian%20dynamics
Stokesian dynamics is a solution technique for the Langevin equation, which is the relevant form of Newton's 2nd law for a Brownian particle. The method treats the suspended particles in a discrete sense while the continuum approximation remains valid for the surrounding fluid, i.e., the suspended particles are generally assumed to be significantly larger than the molecules of the solvent. The particles then interact through hydrodynamic forces transmitted via the continuum fluid, and when the particle Reynolds number is small, these forces are determined through the linear Stokes equations (hence the name of the method). In addition, the method can also resolve non-hydrodynamic forces, such as Brownian forces, arising from the fluctuating motion of the fluid, and interparticle or external forces. Stokesian Dynamics can thus be applied to a variety of problems, including sedimentation, diffusion and rheology, and it aims to provide the same level of understanding for multiphase particulate systems as molecular dynamics does for statistical properties of matter. For rigid particles of radius $a$ suspended in an incompressible Newtonian fluid of viscosity $\eta$ and density $\rho$, the motion of the fluid is governed by the Navier–Stokes equations, while the motion of the particles is described by the coupled equation of motion: $m \frac{d\mathbf{U}}{dt} = \mathbf{F}^H + \mathbf{F}^B + \mathbf{F}^P$. In the above equation $\mathbf{U}$ is the particle translational/rotational velocity vector of dimension 6N, and $m$ is the corresponding generalized mass/moment-of-inertia matrix. $\mathbf{F}^H$ is the hydrodynamic force, i.e., force exerted by the fluid on the particle due to relative motion between them. $\mathbf{F}^B$ is the stochastic Brownian force due to thermal motion of fluid particles. $\mathbf{F}^P$ is the deterministic nonhydrodynamic force, which may be almost any form of interparticle or external force, e.g. electrostatic repulsion between like charged particles. Brownian dynamics is one of the popular techniques of solving the Langevin equation, but the hydrodynamic interaction in Brownian dynamics is highly simplified and normally includes only the isolated body resistance. On the other hand, Stokesian dynamics includes the many body hydrodynamic interactions. Hydrodynamic interaction is very important for non-equilibrium suspensions, like a sheared suspension, where it plays a vital role in its microstructure and hence its properties. Stokesian dynamics is used primarily for non-equilibrium suspensions where it has been shown to provide results which agree with experiments. Hydrodynamic interaction When the motion on the particle scale is such that the particle Reynolds number is small, the hydrodynamic force exerted on the particles in a suspension undergoing a bulk linear shear flow is: $\mathbf{F}^H = -\mathbf{R}_{FU}\cdot(\mathbf{U} - \mathbf{u}^\infty) + \mathbf{R}_{FE} : \mathbf{E}^\infty$. Here, $\mathbf{u}^\infty$ is the velocity of the bulk shear flow evaluated at the particle center, $\mathbf{E}^\infty$ is the symmetric part of the velocity-gradient tensor; $\mathbf{R}_{FU}$ and $\mathbf{R}_{FE}$ are the configuration-dependent resistance matrices that give the hydrodynamic force/torque on the particles due to their motion relative to the fluid ($\mathbf{U} - \mathbf{u}^\infty$) and due to the imposed shear flow ($\mathbf{E}^\infty$). Note that the subscripts on the matrices indicate the coupling between kinematic ($U$) and dynamic ($F$) quantities. One of the key features of Stokesian dynamics is its handling of the hydrodynamic interactions, which is fairly accurate without being computationally inhibitive (like boundary integral methods) for a large number of particles. Classical Stokesian dynamics requires $O(N^3)$ operations where N is the number of particles in the system (usually a periodic box). 
Recent advances have reduced the computational cost to about $O(N^{1.25} \ln N)$. Brownian force The stochastic or Brownian force $\mathbf{F}^B$ arises from the thermal fluctuations in the fluid and is characterized by: $\langle \mathbf{F}^B(t) \rangle = 0$ and $\langle \mathbf{F}^B(0)\,\mathbf{F}^B(t) \rangle = 2kT\,\mathbf{R}_{FU}\,\delta(t)$. The angle brackets denote an ensemble average, $k$ is the Boltzmann constant, $T$ is the absolute temperature and $\delta(t)$ is the delta function. The amplitude of the correlation between the Brownian forces at time $0$ and at time $t$ results from the fluctuation-dissipation theorem for the N-body system. See also Immersed boundary methods Stochastic Eulerian Lagrangian methods References Statistical mechanics Equations Fluid mechanics
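As a point of reference for the equations above, here is a deliberately simplified sketch (an assumption added for illustration, not Stokesian dynamics itself): one overdamped Brownian-dynamics step for non-interacting spheres, in which the many-body resistance matrix $\mathbf{R}_{FU}$ is replaced by the isolated-body Stokes drag $6\pi\eta a$ that the article contrasts it with.

```python
import math
import random

def brownian_dynamics_step(x, F_ext, a, eta, kT, dt):
    """One explicit Euler step of free-draining (isolated-body) Brownian dynamics.

    This is the simplified limit mentioned in the text: the configuration-dependent
    resistance matrix R_FU of Stokesian dynamics is replaced by the scalar Stokes
    drag zeta = 6*pi*eta*a for each sphere.

    x      : list of particle positions (1D here for brevity)
    F_ext  : list of deterministic non-hydrodynamic forces on each particle
    returns: new list of positions
    """
    zeta = 6.0 * math.pi * eta * a                 # isolated-sphere Stokes drag
    D = kT / zeta                                  # Stokes-Einstein diffusivity
    new_x = []
    for xi, Fi in zip(x, F_ext):
        drift = (Fi / zeta) * dt                   # deterministic displacement
        noise = math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)  # Brownian kick
        new_x.append(xi + drift + noise)
    return new_x

# Usage: 3 particles, no external force, water-like viscosity (SI units).
x = [0.0, 1e-6, 2e-6]
x = brownian_dynamics_step(x, [0.0, 0.0, 0.0], a=1e-6, eta=1e-3, kT=4.1e-21, dt=1e-4)
print(x)
```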
Stokesian dynamics
[ "Physics", "Mathematics", "Engineering" ]
802
[ "Mathematical objects", "Equations", "Civil engineering", "Statistical mechanics", "Fluid mechanics" ]
8,723,369
https://en.wikipedia.org/wiki/Computer%20architecture%20simulator
A computer architecture simulator is a program that simulates the execution of computer architecture. Computer architecture simulators are used for the following purposes: Lowering cost by evaluating hardware designs without building physical hardware systems. Enabling access to unobtainable hardware. Increasing the precision and volume of computer performance data. Introducing abilities that are not normally possible on real hardware such as running code backwards when an error is detected or running in faster-than-real time. Categories Computer architecture simulators can be classified into many different categories depending on the context. Scope: Microarchitecture simulators model the microprocessor and its components. Full-system simulators also model the processor, memory systems, and I/O devices. Detail: Functional simulators, such as instruction set simulators, achieve the same function as modeled components. They can be simulated faster if timing is not considered. Timing simulators are functional simulators that also reproduce timing. Timing simulators can be further categorized into digital cycle-accurate and analog sub-cycle simulators. Workload: Trace-driven simulators (also called event-driven simulators) react to pre-recorded streams of instructions with some fixed input. Execution-driven simulators allow dynamic change of instructions to be executed depending on different input data. Full-system simulators A full-system simulator is execution-driven architecture simulation at such a level of detail that complete software stacks from real systems can run on the simulator without any modification. A full system simulator provides virtual hardware that is independent of the nature of the host computer. The full-system model typically includes processor cores, peripheral devices, memories, interconnection buses, and network connections. Emulators are full system simulators that imitate obsolete hardware instead of under development hardware. The defining property of full-system simulation compared to an instruction set simulator is that the model allows real device drivers and operating systems to be run, not just single programs. Thus, full-system simulation makes it possible to simulate individual computers and networked computer nodes with all their software, from network device drivers to operating systems, network stacks, middleware, servers, and application programs. Full system simulation can speed the system development process by making it easier to detect, recreate and repair flaws. The use of multi-core processors is driving the need for full system simulation, because it can be extremely difficult and time-consuming to recreate and debug errors without the controlled environment provided by virtual hardware. This also allows the software development to take place before the hardware is ready, thus helping to validate design decisions. Cycle-accurate simulator A cycle-accurate simulator is a computer program that simulates a microarchitecture on a cycle-by-cycle basis. In contrast an instruction set simulator simulates an instruction set architecture usually faster but not cycle-accurate to a specific implementation of this architecture; they are often used when emulating older hardware, where time precision is important for legacy reasons. 
Often, a cycle-accurate simulator is used when designing new microprocessors: they can be tested and benchmarked accurately (including running a full operating system or compilers) without actually building a physical chip, and the design can easily be changed many times to meet the expected plan. Cycle-accurate simulators must ensure that all operations are executed in the proper virtual (or real, if possible) time, accounting for branch prediction, cache misses, fetches, pipeline stalls, thread context switching, and many other subtle aspects of microprocessors. See also Instruction set simulator References External links The Archer virtual infrastructure for computer architecture simulation Simulation software Computer architecture
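To make the functional-versus-timing distinction above concrete, here is a minimal sketch (entirely illustrative; the toy instruction set and names are assumptions, not any real ISA): a functional instruction set simulator that executes instructions with no notion of cycles, which is exactly the architectural state a cycle-accurate simulator would additionally have to time.

```python
def run_functional_sim(program, num_regs=4):
    """A tiny functional instruction-set simulator for a made-up ISA.

    Each instruction is a tuple:
      ("LOADI", rd, imm)    -> regs[rd] = imm
      ("ADD",   rd, ra, rb) -> regs[rd] = regs[ra] + regs[rb]
      ("JNZ",   ra, target) -> jump to instruction index `target` if regs[ra] != 0
      ("HALT",)             -> stop
    Only architectural state (registers, program counter) is modelled; a timing or
    cycle-accurate simulator would also model pipelines, caches, branch prediction, etc.
    """
    regs = [0] * num_regs
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "LOADI":
            regs[op[1]] = op[2]; pc += 1
        elif op[0] == "ADD":
            regs[op[1]] = regs[op[2]] + regs[op[3]]; pc += 1
        elif op[0] == "JNZ":
            pc = op[2] if regs[op[1]] != 0 else pc + 1
        elif op[0] == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op[0]}")
    return regs

# Usage: sum 1..3 by looping (r0 = counter, r1 = accumulator, r2 = -1 step).
prog = [
    ("LOADI", 0, 3), ("LOADI", 1, 0), ("LOADI", 2, -1),
    ("ADD", 1, 1, 0),      # r1 += r0
    ("ADD", 0, 0, 2),      # r0 -= 1
    ("JNZ", 0, 3),         # loop while r0 != 0
    ("HALT",),
]
print(run_functional_sim(prog))  # [0, 6, -1, 0]
```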
Computer architecture simulator
[ "Technology", "Engineering" ]
730
[ "Computers", "Computer engineering", "Computer architecture" ]
8,723,393
https://en.wikipedia.org/wiki/Oneiric%20%28film%20theory%29
In film theory, the term oneiric ( , adjective; "pertaining to dreams") refers to the depiction of dream-like states or to the use of the metaphor of a dream or the dream-state in the analysis of a film. The term comes from the Greek Óneiros, the personification of dreams. History Early film theorists such as Ricciotto Canudo (1879–1923) and Jean Epstein (1897–1953) argued that films had a dreamlike quality. Raymond Bellour and Guy Rosolato have made psychoanalytical analogies between films and the dream state, claiming films as having a "latent" content that can be psychoanalyzed as if it were a dream. Lydia Marinelli states that before the 1930s, psychoanalysts "primarily attempted to apply the interpretative schemata found in Sigmund Freud's Interpretation of Dreams to films." Author Douglas Fowler surmises that "images arising from dreams are the well spring of all our efforts to give enduring form and meaning to the urgencies within," seeing this as the reason why "the deep structure of human narrative is conceived in dreams and the genesis of all myth is dreams." Author Robert Eberwein describes the filmic experience as the merging of a viewer's consciousness with the projected consciousness of the screen's subject, a process whereby the viewer's prior experiences with dreaming "help to create a sense of oneness" with cinema, causing the gap between viewer and what is being viewed to narrow. Under this theory, no matter what is being shown on the screen — whether the literal representation of a character dreaming, or the fictional characters of a story going on about their fictional lives — the very process of viewing film itself "replicates activities associated with the oneiric experience." Films and dreams are also connected in psychological analysis by examining the relationship between the cinema screening process and the spectator (who is perceived as passive). Roland Barthes, a French literary critic and semiotician, described film spectators as being in a "para-oneiric" state, feeling "sleepy and drowsy as if they had just woken up" when a film ends. Similarly, the French surrealist André Breton argues that film viewers enter a state between being "awake and falling asleep", what French filmmaker René Clair called a "dreamlike state". Jean Mitry's first volume of Esthétique et psychologie du cinéma (1963) also discuss the connection between films and the dream state. Filmmakers Filmmakers described as using oneiric or dreamlike elements in their films include: Sergei Parajanov (e.g., Shadows of Forgotten Ancestors) David Lynch (e.g., Twin Peaks, Mulholland Drive) Andrei Tarkovsky (e.g. Andrei Rublev, Solaris) Stan Brakhage (e.g., Dog Star Man) Michelangelo Antonioni (e.g. The Passenger, Zabriskie Point) Jaromil Jireš (e.g., Valerie and Her Week of Wonders) Krzysztof Kieslowski (e.g. The Double Life of Veronique) Federico Fellini (e.g., Amarcord) Francis Ford Coppola (e.g., Apocalypse Now) Ingmar Bergman (e.g., Wild Strawberries) Jean Cocteau (e.g., Orphic Trilogy) Gaspar Noé (e.g. Enter the Void, Love, Climax) Raúl Ruiz (e.g., City of Pirates) Edgar G. Ulmer (e.g., The Black Cat) Jacques Tourneur (e.g., I Walked With a Zombie) Maya Deren (e.g., Meshes of the Afternoon) Wojciech Has Kenneth Anger See also Bertram D. Lewin Experimental film Art film References Further reading Bächler, Odile. "Images de film, images de rêve; le véhicule de la vision", CinémAction, 50 (1989), pp. 40–46. Botz-Bornstein, Thorsten. 
Films and Dreams: Tarkovsky, Bergman, Sokurov, Kubrick, Wong Kar-wai. Lanham: Lexington, 2009. Burns, Gary. "Dreams and Mediation in Music Video", Wide Angle, v. 10, 2 (1988), pp. 41–61. Cubitt, Sean. The Cinema Effect. Cambridge and London: The MIT Press, 2004, pp. 273–299. Halpern, Leslie. Dreams on Film: The cinematic struggle between art and science. Jefferson, N.C. : McFarland & Co., 2003. Hobson, J. Allan. 1980. "Film and the Physiology of Dreaming Sleep: The Brain as a Camera-Projector". Dreamworks 1(1): pp. 9–25. Lewin, Bertram D. "Inferences from the dream screen", International Journal of Psychoanalysis, vol. XXIX, 4 (1948), p. 224. Marinelli, Lydia. "Screening Wish Theories: Dream Psychologies and Early Cinema". Science in Context (2006), 19: 87-110 Petrić, Vlada, Film and Dreams: An Approach to Bergman, Redgrave, NY, 1981. Concepts in film theory Film and video terminology Film styles History of film Fiction about dreams Dream
Oneiric (film theory)
[ "Biology" ]
1,113
[ "Dream", "Behavior", "Sleep" ]
8,724,110
https://en.wikipedia.org/wiki/Automatic%20Independent%20Surveillance%20%E2%80%93%20Privacy
Automatic Independent Surveillance – Privacy (AIS-P) is a data packet protocol for the TailLight system of aircraft Traffic Collision Avoidance System (TCAS), wherein a single Mode S 64 microsecond message is transmitted by an aircraft ATCRBS or Mode S transponder, and received by aircraft and Air Traffic Control on the ground. This is an augmentation to aircraft transponders, which report aircraft position and velocity in such a way as to minimize interference with any other avionics system, maximize the possible number of participating aircraft, while not relying on any equipment on the ground, and protecting aircraft from potential attack. AIS-P and ADS-B are competing protocols for aircraft based surveillance of traffic, a replacement technology for Mode S radar and TCAS. AIS-P as an alternative to ADS-B The TailLight, which is offered as a complimentary feature in General Aviation ATCRBS transponders like the AT-155, utilizes the AIS-P protocol to effectively deliver the advertised collision avoidance benefits of ADS-B in both airport terminal and en route airspace. It has no adverse impact on other avionics systems, can accommodate up to 335,000 aircraft within line-of-sight range of each other, and is interoperable with other collision avoidance systems while ensuring the aircraft's protection from potential attacks. The AIS-P protocol is an alternative to the ADS-B and Mode S based TCAS protocols, and solves the problems of frequency congestion, by eliminating a requirement for multiple packet messages, or new longer packet definitions for ADS-B not established by international treaty, and by eliminating the 24 bit overhead for named identity in each packet of the message (required to tie multiple packets together into a message). One packet encodes latitude and longitude, altitude, direction, and speed (full position and velocity), handles error detection and recovery, along with channel use arbitration, in the AIS-P protocol. This reduces verbose overhead unnecessary for collision avoidance purposes. The AIS-P protocol is not meant for purposes of billing and targeting. Additionally, one of the requirements satisfied by the AIS-P protocol is that a missile with an ADS-B type target homer aimed at the unnamed aircraft alone in the sky would miss. See also ADS-B References External links What is Wrong With ATC Transponders, And How to Fix Them For Just About Free, B. Keith Peshak, Proceedings of the 58th Annual Meeting of The Institute of Navigation and CIGTF 21st Guidance Test Symposium, 2002 Avionics
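The article states that a single short packet encodes latitude, longitude, altitude, direction, and speed. The actual AIS-P field widths and encodings are not given above, so the following is a purely hypothetical bit-packing sketch: every field size, scaling, and the 56-bit word length are invented for illustration and do not describe the real AIS-P format.

```python
def pack_position_velocity(lat_deg, lon_deg, alt_ft, track_deg, speed_kt):
    """Hypothetical compact packing of position/velocity into one 56-bit integer.

    Field widths below are illustrative assumptions only, NOT the AIS-P layout:
      latitude  : 17 bits over  -90..+90 degrees
      longitude : 18 bits over -180..+180 degrees
      altitude  : 11 bits in 25 ft steps from -1000 ft
      track     :  6 bits in 360/64 degree steps
      speed     :  4 bits in 64 knot steps
    """
    lat = round((lat_deg + 90.0) / 180.0 * 131071) & 0x1FFFF
    lon = round((lon_deg + 180.0) / 360.0 * 262143) & 0x3FFFF
    alt = max(0, min(2047, round((alt_ft + 1000) / 25)))
    trk = round(track_deg / (360.0 / 64)) % 64
    spd = max(0, min(15, round(speed_kt / 64)))
    return (lat << 39) | (lon << 21) | (alt << 10) | (trk << 4) | spd

word = pack_position_velocity(37.62, -122.38, 2500, 280, 180)
print(f"{word:014x}")  # the 56-bit word as 14 hex digits
```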
Automatic Independent Surveillance – Privacy
[ "Technology" ]
527
[ "Avionics", "Aircraft instruments" ]
8,724,310
https://en.wikipedia.org/wiki/Three%20Principles%20Psychology
Three Principles Psychology (TPP), previously known as Health Realization (HR), is a resiliency approach to personal and community psychology first developed in the 1980s by Roger C. Mills and George Pransky, who were influenced by the teachings of philosopher and author Sydney Banks. The approach first gained recognition for its application in economically and socially marginalized communities experiencing high levels of stress. (see Community Applications below). The foundational concepts of TPP are the Three Principles of Mind, Consciousness, and Thought, which were originally articulated by Sydney Banks in the early 1970s. Banks, a Scottish welder with a ninth-grade education who lived in British Columbia, Canada, provided the philosophical basis for TPP, emphasizing how these principles underlie all human psychological experiences. The core of TPP lies in the understanding that an individual's psychological experience is shaped by their thought processes. TPP teaches that by recognizing the role of Thought in shaping one's experience, individuals can transform their responses to situations. This transformation is achieved by accessing what TPP refers to as "innate health" and "inner wisdom." TPP is also known by other names, including Psychology of Mind, Neo-cognitive Psychology, Innate Health, the Inside-Out Understanding and colloquially, the 3Ps. Discovery of the Three Principles According to verbal accounts provided by Banks in his recorded lectures, he realised the Three Principles during a marriage seminar on Cortes Island, British Columbia, Canada in 1973. As they were preparing to depart, Banks engaged in a conversation with a therapist who was also attending the seminar. At the time, Banks described himself as "an insecure mess" and began listing the various ways in which he felt insecure. The therapist responded, "I've never heard such nonsense in all my life. You're not insecure, Syd; you just think you are." This statement profoundly impacted Banks. He realized that insecurity was not a real, inherent condition but merely a product of his thoughts. Reflecting on the experience, Banks described it as a revelatory moment: What I heard was: there's no such thing as insecurity, it's only Thought. All my insecurity was only my own thoughts! It was like a bomb going off in my head … It was so enlightening! It was unbelievable … [And after that,] there was such beauty coming into my life. The specific terms "Mind," "Consciousness," and "Thought" were not immediately clear to Banks during this initial experience. Over time, through his talks and lectures, these terms became more clearly defined, and Banks referred to them collectively as the "psychological trinity." Banks, who passed away from metastatic cancer in May 2009, challenged many traditional notions and practices of psychotherapy. He asserted that mental well-being does not require processing the past or analyzing the content of personal thought systems. Everyone in mental institutions is sitting in the middle of mental health and they don't know it. Banks was also against using techniques or developing concepts to convey his understanding to others. 
Three Principles Psychology model In Three Principles Psychology (TPP), all psychological phenomena—from severe disorders to optimal mental health—are understood as manifestations of three operative "principles" first articulated by Sydney Banks as the basis of human experience and feeling states.: Mind - The energy and intelligence that animates all life, both in its physical form and in the formless. The Universal Mind, often referred to as "wisdom" or the "impersonal" mind, is constant and unchanging, acting as the source of innate health and well-being. In contrast, the personal mind is in a continuous state of flux. Consciousness - The capacity to be aware of one's life and experiences. Consciousness is the gift of awareness that enables the recognition of form, with form being an expression of Thought. Thought - The ability to think, which allows individuals to create their personal experience of reality. Thought is a divine gift, not self-created, that is present from birth. It serves as the creative agent through which individuals navigate and direct their lives. In the TPP model, "Mind" is often compared to the electricity powering a movie projector, while "Thought" is likened to the images on the film. "Consciousness" is analogous to the light from the projector that casts the images onto the screen, making them appear real. According to TPP, individuals experience reality and their circumstances through the continual filter of their thoughts. Consciousness gives this filtered reality the appearance of being "the way it really is," leading people to react to it as if it were absolute truth. However, when their thinking changes, their perception of reality shifts, and their reactions change accordingly. Thus, TPP posits that people are constantly creating their own experience of reality through their thoughts. Also according to TPP, people tend to perceive their reality as stressful when they are engaged in insecure or negative thoughts. However, TPP suggests that these thoughts do not need to be taken seriously. By choosing to take such thoughts more lightly, the mind can quiet down, allowing positive feelings to emerge naturally. TPP teaches that everyone has an inherent capacity for health and well-being, referred to as "innate health," which surfaces when troubled thinking subsides. When this occurs, individuals also gain access to common sense and can tap into a universal capacity for creative problem-solving, known as "inner wisdom." Both peer-reviewed and anecdotal evidence indicates that when someone deeply understands the principles behind TPP, they may experience a profound sense of emotional freedom and well-being. Three Principles Psychology as therapy In contrast to psychotherapies that focus on the content of the clients' dysfunctional thinking, TPP focuses on "innate health" and the role of "Mind, Consciousness and Thought" in creating the clients' experience of life. The TPP counselor does not suggest to clients that they attempt to change their thoughts, "think positive", or "reframe" negative thoughts to positive ones. According to TPP, one's ability to control one's thoughts is limited and the effort to do so can itself be a source of stress. Instead, clients are encouraged to consider that their "minds are using thought continuously to determine their subjective, personal reality in each moment." In the TPP model, feelings and emotions are seen as indicators of the quality of one's thinking. 
Unpleasant or stressful emotions suggest that an individual's thinking is influenced by insecurity, negative beliefs, conditioning, or learned patterns that may be irrelevant to, and thereby distort, the present moment. These emotions also indicate a temporary lapse in recognizing one's role in shaping one's own experience. Conversely, pleasant emotions—such as well-being, gratitude, compassion, or peace—indicate that one's thinking is aligned with what TPP considers optimal for the current situation. TPP holds that the therapeutic "working through" of personal issues from the past to achieve wholeness is unnecessary. According to the TPP model, people are already whole and healthy. The traumas of the past are only important to the extent that the individual lets them influence his or her thoughts in the present. According to TPP, one's "issues" and memories are simply thoughts, and the individual can react to them or not. The more clients recognize that they are creating their own painful feelings through their "power of Thought," the less these feelings tend to bother them. Sedgeman compares this to making scary faces in the mirror: because we know it's just us, it's impossible to scare ourselves that way. Thus TPP addresses personal insecurities and dysfunctional patterns en masse, aiming for an understanding of the "key role of thought", an understanding that ideally allows the individual to step free at once from a large number of different patterns all connected by insecure thinking. With this approach, it is rare for the practitioner to delve into specific content. When specific thoughts are recognized as limiting or based on insecurity or conditioning, they often come with an uncomfortable feeling. The counselor points out that this understanding activates the body's homeostatic system, which naturally prefers feeling good over feeling bad. As a result, the individual has the capacity to let go of these thoughts if they choose to. Relationships From the perspective of TPP, relationship problems stem from a low awareness of each partner's role in creating their own experience through thought and consciousness. Partners who embrace TPP reportedly stop blaming and recriminating, leading to a different way of interacting. TPP counselors encourage couples to recognize that their feelings are not determined by their partner, and that most issues that previously disrupted their relationship were based on insecure, negative, and conditioned thinking. Counselors also emphasize that everyone experiences emotional ups and downs, and that thinking during a "down" mood is likely to be distorted. TPP teaches that it is generally counterproductive to "talk through" relationship problems when partners are in a bad mood. Instead, TPP suggests waiting until both have calmed down and can discuss things from a place of inner comfort and security. Chemical dependency and addiction TPP views chemical dependency and related behaviors as a response to a lack of self-efficacy, rather than the result of disease. According to TPP, individuals who are "unaware" of their own "innate health" and their role in creating stress through their thoughts may turn to alcohol, drugs, or other compulsive behaviors in an attempt to quell their stressful feelings and regain a temporary sense of control. 
TPP seeks to provide deeper relief by demonstrating that negative and stressful feelings are self-generated and can be self-quieted, offering a pathway to well-being that does not rely on external circumstances or substances. Application Over the past forty years, Sydney Banks' "insight" has been applied in a wide range of settings, including hospitals, correctional institutions, social services, individual and couples therapy, community housing, drug and alcohol prevention and treatment programmes, schools and multi-national corporations. The Three Principles of Mind, Consciousness, and Thought have gained global recognition and are now implemented in the United States, Canada, Sweden, Norway, Denmark, The Netherlands, France, Germany, Spain, Italy, Ukraine, Israel, Czech Republic, Russia, Scotland, England, Ireland, South Africa, Australia, New Zealand and Thailand. Community applications The Three Principles Psychology (TPP) model has been applied in a variety of challenging settings. An early project, which garnered national publicity under the leadership of Roger Mills, introduced TPP (then known as Health Realization (HR)) to residents of two low-income housing developments in Miami known as Modello and Homestead Gardens. After three years, there were major documented reductions in crime, illegal drug trade, teenage pregnancy, child abuse, child neglect, school absenteeism, unemployment, and families on public assistance. Jack Pransky has chronicled the transformations that unfolded in his book Modello, A Story of Hope for the Inner City and Beyond. Later projects in some of the most violence-affected housing developments in New York, Minnesota, and California, as well as in other communities in California, Hawaii, and Colorado, expanded on the foundational work done in Modello and Homestead Gardens. The Coliseum Gardens housing complex in Oakland, California, once had the fourth highest homicide rate among similar complexes in the U.S. However, after the introduction of HR classes, the homicide rate began to decrease significantly. Gang warfare and ethnic clashes between Cambodian and African-American youth ceased. In 1997, Sergeant Jerry Williams was awarded the California Wellness Foundation Peace Prize on behalf of the Health Realization Community Empowerment Project at Coliseum Gardens. By the year 2006, there had been no homicides in the complex for nine straight years. The TPP model has also found application in police departments, prisons, mental health clinics, community health clinics and nursing, drug and alcohol rehabilitation programs, services for the homeless, schools, and a variety of state and local government programs. The County of Santa Clara, California, for example, has established a Health Realization Services Division which provides HR training to County employees and the public. The Services Division "seeks to enhance the life of the individual by teaching the understanding of the psychological principles of Mind, Consciousness and Thought, and how these principles function to create our life experience... enabling them to live healthier and more productive lives so that the community becomes a model of health and wellness." The Department of Alcohol and Drug Services introduced HR in Santa Clara County in 1994. The Health Realization Services Division has an approved budget of over $800,000 (gross expenditure) for FY 2008, a 41% increase over 2007, at a time when a number of programs within the Alcohol and Drug Services Department have sustained budget cuts. 
HR (TPP) community projects have received grant funding from a variety of sources. For example, grant partners for the Visitacion Valley Community Resiliency Project, a five-year, multimillion-dollar community revitalization project, have included Wells Fargo Bank, Charles Schwab Corporation Foundation, Charles and Helen Schwab Foundation, Isabel Allende Foundation, Pottruck Family Foundation, McKesson Foundation, Richard and Rhoda Goldman Fund, S.H. Cowell Foundation, San Francisco Foundation, Evelyn & Walter Haas Jr. Fund, Milagro Foundation, and Dresdner RCM Global Investors. Other projects based upon the HR (TPP) approach have been funded by the National Institute of Mental Health, the U.S. Department of Justice, the National Institute on Drug Abuse, the California Wellness Foundation, and the Shinnyo-en Foundation. Ongoing community projects organized by the Center for Sustainable Change, a non-profit organization founded by Dr. Roger Mills and Ami Chen Mills-Naim, are funded by the W.K. Kellogg Foundation. The Center for Sustainable Change works in partnership with grassroots organizations in Des Moines, Iowa; Charlotte, North Carolina; and the Mississippi Delta to bring Three Principles training to at-risk communities under the umbrella of the National Community Resiliency Project. The center also works with schools, agencies and corporations. Organizational applications In the course of their exposure to Health Realization (HR), or the foundational concept referred to as Three Principles Psychology, individuals within the business realm have incorporated these principles into their respective professional domains. This assimilation has manifested as a discernible trend wherein practitioners, having grasped the core tenets, integrate and apply these ideas within their organizational contexts. The approach has been introduced to people in medicine, law, investment and financial services, technology, marketing, manufacturing, publishing, and a variety of other commercial and financial roles. It has been reported anecdotally to have had significant impact in the areas of individual performance and development, teamwork, leadership, change and diversity. According to HR/Three Principles adherents, these results flow naturally as the individuals exposed to the ideas learn how their thoughts have been creating barriers to others and barriers to their own innate creativity, common sense, and well-being. As people learn how to access their full potential more consistently, HR adherents say, they get better results with less effort and less stress in less time. Two peer-reviewed articles on effectiveness with leadership development were published in professional journals in 2008 (ADHR) and 2009 (ODJ). See "Organizations and Business" section below (Polsfuss & Ardichvili). Philosophical context The Three Principles rests on the non-academic philosophy of Sydney Banks, which Mr. Banks has expounded upon in several books. Mr. Banks was a day laborer with no education beyond ninth grade (age 14) in Scotland who, in 1973, reportedly had a profound insight into the nature of human experience. Mr. Banks does not particularly attempt to position his ideas within the larger traditions of philosophy or religion; he is neither academically trained nor well read. 
His philosophy focuses on the illusory, thought-created nature of reality, the Three Principles of "Mind", "Thought", and "Consciousness", the potential relief of human suffering that can come from a fundamental shift in personal awareness and understanding and the importance of a direct, experiential grasp of these matters, as opposed to a mere intellectual comprehension or analysis. Mr. Banks suggests that his philosophy is best understood not intellectually but by "listening for a positive feeling;" and a grasp of the Three Principles is said to come through a series of "insights," that is, shifts in experiential understanding. Teaching of The Three Principles Three Principles Psychology (TPP), much like Sydney Banks's philosophy, is not presented as a collection of 'techniques.' Instead, it's an experiential 'understanding' that transcends the mere transfer of information. There are no steps, no uniformly appropriate internal attitudes, and no techniques within it. The "health of the helper" is considered crucial; that is, trainers or counselors ideally will "live in the understanding that allows them to enjoy life," and thereby continuously model their understanding of TPP by staying calm and relaxed, not taking things personally, assuming the potential in others, displaying common sense, and listening respectfully to all. Facilitators ideally teach in the moment, from "what they know" (e.g. their own experience), trusting that they will find the right words to say and the right approach to use in the immediate situation to stimulate the students' understanding of the "Three Principles". Rapport with students and a positive mood in the session or class are more important than the specific content of the facilitator's presentation. Evaluations of Health Realization A 2007 peer-reviewed article evaluating the effectiveness of HR suggests that the results of residential substance abuse treatment structured around the teaching of HR are equivalent to those of treatment structured around 12-step programs. The authors note that "these results are consistent with the general findings in the substance abuse literature, which suggests that treatment generally yields benefits, irrespective of approach." A small peer-reviewed study in preparation for a planned larger study evaluated the teaching of HR/Innate Health via a one-and-a-half-day seminar, as a stress and anxiety reduction intervention for HIV-positive patients. All but one of the eight volunteer participants in the study showed improved scores on the Brief Symptom Inventory after the seminar, and those participants who scored in the "psychiatric outpatient" range at the beginning of the seminar all showed improvement that was sustained upon follow-up one month later. The study's authors concluded that "The HR/IH psychoeducational approach deserves further study as a brief intervention for stress-reduction in HIV-positive patients." A 2007 pilot study funded by the National Institutes of Health evaluated HR in lowering stress among Somali and Oromo refugee women who had experienced violence and torture in their homelands, but for whom Western-style psychotherapeutic treatment of trauma was not culturally appropriate. The pilot study showed that "the use of HR with refugee trauma survivors was feasible, culturally acceptable, and relevant to the participants." In a post-intervention focus group, "many women reported using new strategies to calm down, quiet their minds and make healthier decisions." 
Co-investigator Cheryl Robertson, Assistant Professor in the School of Nursing at the University of Minnesota, was quoted as saying, "This is a promising intervention that doesn't involve the use of highly trained personnel. And it can be done in the community." The Visitacion Valley Community Resiliency Project (VVCRP) was reviewed by an independent evaluator hired by the Pottruck Foundation. Her final report notes that "Early program evaluation...found that the VVCRP was successful in reducing individuals’ feelings of depression and isolation, and increasing their sense of happiness and self-control. The cumulative evaluation research conducted on the VVCRP and the HR model in general concludes that HR is a powerful tool for changing individuals’ beliefs and behaviors." In the Summary of Case Studies, the report goes on to state, "The VVCRP was effective over a period of five years of sustained involvement in two major neighborhood institutions... at influencing not just individuals, but also organizational policies, practices, and culture. This level of organizational influence is impressive when the relatively modest level of VVCRP staff time and resources invested into making these changes is taken into account. The pivotal levers of change at each organization were individual leaders who were moved by the HR principles to make major changes in their own beliefs, attitudes, and behaviors, and then took the initiative to inspire, enable, and mandate similar changes within their organizations. This method of reaching "critical mass" of HR awareness within these organizations appears to be both efficient and effective when the leadership conditions are right. However, this pathway to change is vulnerable to the loss of the key individual leader." Research efforts on effectiveness Pransky has reviewed the research on HR (through 2001) in relation to its results for prevention and education, citing 20 manuscripts, most of which were conference papers, and none peer-reviewed journal articles, although two were unpublished doctoral dissertations. (Kelley (2003) cites two more unpublished doctoral dissertations.) Pransky concludes, "Every study of Health Realization and its various incarnations, however weak or strong the design, has shown a decrease in problem behaviors and internally experienced problems. This approach appears to reduce problem behaviors and to improve mental health and well-being. At the very least, this suggests the field of prevention should further examine the efficacy of this... approach by conducting independent, rigorous, controlled, longitudinal studies." Practitioners of the Three Principles believe that feeling states (and all experience) are created (through mental activity i.e., thought). Scientific research by Lisa Feldman Barrett supports this notion that mental states (i.e. emotions) are indeed constructed from within the human mind. Practitioners believe that beyond each person's limited, conscious, and personal thought system lies a vast reservoir of wisdom, insight and spiritual intelligence. No one person has greater access to wisdom than any other. Mental health is the resting state, or "default" setting of the mind, which brings with it non-contingent feelings of love, compassion, resilience, creativity and unity, both with others and with life itself. 
Research by George Bonnano, Professor of Clinical Psychology at Columbia University, supports this notion that resilience, not recovery, is a common response to difficult life events such as trauma and loss. Criticism In a criticism of the philosophy of Sydney Banks and, by implication, the HR approach, Bonelle Strickling, a psychotherapist and Professor of Philosophy, is quoted in an article in the Vancouver Sun as objecting that "it makes it appear as if people can, through straightforward positive thinking, 'choose' to transcend their troubled upbringings and begin leading a contented life." She goes on to say that, "it can be depressing for people to hear it's supposed to be that easy. It hasn't been my experience that people can simply choose not to be negatively influenced by their past." Referring to Banks's own experience, she says, "Most people are not blessed with such a life-changing experience.... When most people change, it usually happens in a much more gradual way." The West Virginia Initiative for Innate Health (at West Virginia University Health Sciences Center), which promotes HR/Innate Health and the philosophy of Sydney Banks through teaching, writing, and research, was the center of controversy soon after its inception in 2000 as the Sydney Banks Institute for Innate Health. Initiated by Robert M. D'Alessandri, the Dean of the medical school there, the institute was reportedly criticized as pushing "junk science," and Banks's philosophy was characterized as "a kind of bastardized Buddhism" and "New Age." William Post, an orthopedic surgeon who quit the medical school because of the institute, was reported, along with other unnamed professors, to have accused the Sydney Banks Institute of promoting religion in a state-funded institution. Harvey Silvergate, a civil-liberties lawyer, was quoted as agreeing that "essentially [the institute] seems like a cover for a religious-type belief system which has been prettified in order to be secular and even scientific.” A Dr. Blaha, who resigned as chairman of Orthopedics at WVU, was quoted as criticizing the institute as being part of a culture at the Health Sciences Center that, in his view, places too much emphasis on agreement, consensus, and getting along. Other professors reportedly supported the institute. In contrast, Anthony DiBartolomeo, chief of the rheumatology section, was quoted as calling it, "a valuable addition" to the health-sciences center, saying its greatest value was in helping students, residents, and patients deal with stress. Reportedly in response to the controversy, the WVIIH changed its name from The Sydney Banks Institute to the West Virginia Initiative for Innate Health, although its mission remains unchanged. Support for specific tenets of TPP from other philosophies and approaches Some of the tenets of TPP are consistent with the theories of philosophers, authors and researchers independently developing other approaches to change and psychotherapy. A large body of peer-reviewed case literature in psychotherapy by Milton Erickson, M.D., founding president of the American Society for Clinical Hypnosis, and others working in the field of Ericksonian psychotherapy, supports the notion that lasting change in psychotherapy can occur rapidly without directly addressing clients' past problematic experiences. 
Many case examples, and a modest body of controlled outcome research in solution focused brief therapy (SFBT), have likewise supported the notion that change in psychotherapy can occur rapidly, without delving into the clients' past negative experiences. Proponents of SFBT suggest that such change often occurs when the therapist assists clients to step out of their usual problem-oriented thinking. The philosophy of social constructionism, which is echoed in SFBT, asserts that reality is reproduced by people acting on their interpretations and their knowledge of it. (Likewise, TPP asserts that our experience of the world is shaped by thought.) A major body of peer-reviewed research on "focusing", a change process developed by philosopher Eugene Gendlin, supports the theory that progress in psychotherapy is dependent on something clients do inside themselves during pauses in the therapy process, and that a particular internal activity "focusing" can be taught to help clients improve their progress. The first step of the six-step process used to teach focusing involves setting aside one's current worries and concerns to create a "cleared space" for effective inner reflection. Gendlin has called this first step by itself "a superior stress-reduction method". (Correspondingly, TPP emphasizes the importance of quieting one's insecure and negative thinking to reduce stress and gain access to "inner wisdom," "common sense," and well-being.) Positive psychology emphasizes the human capacity for health and well-being, asserts the poor correlation between social circumstances and individual happiness, and insists on the importance of one's thinking in determining one's feelings. Work by Herbert Benson argues that humans have an innate 'breakout principle' providing creative solutions and peak experiences, which allow the restoration of a 'new-normal' state of higher functioning. This breakout principle is activated by severing connections with current circular or repetitive thinking. This is heavily reminiscent of Health Realization discussion of the Principle of Mind and how it is activated. Finally, resilience research, such as that by Emmy Werner, has demonstrated that many high-risk children display resilience and develop into normal, happy adults despite problematic developmental histories. See also National Resilience Resource Center LLC additional discussion of resilience research and complementary science found on the Research page at http://www.nationalresilienceresource.com . See also Psychoneuroimmunology References Further reading Community applications S.G. Wartel, A Strengths-Based Practice Model: Psychology of Mind and Health Realization, Families in Society: The Journal of Contemporary Human Services, pp. 185 – 191, 84(2) 2003; Center for Sustainable Change, Awakening the Beloved Community: Report on Year 2 of the National Community Resiliency Project, 2010. Available online PDF version C. L. Robertson, L. Halcon, S. J. Hoffman, N. Osman, A. Mohamed, E. Areba, K. Savik, & M. A. Mathiason. Health Realization Community Coping Intervention for Somali Refugee Women. Journal of Immigrant and Minority Health, 21, 2019, pp. 1077–1084. L. L. Halcón, C. L. Robertson, K. A. Monson, & C. C. Claypatch A Theoretical Framework for Using Health Realization to Reduce Stress and Improve Coping in Refugee Communities. Journal of Holistic Nursing, 25(3), 2007, pp. 186–194. R.C. Mills and E. Spittle, The Health Realization Primer, Lone Pine Publishing. 2003. , J. 
Pransky, Modello: A Story of Hope for the Inner City and Beyond: An Inside-Out Model of Prevention and Resiliency in Action through Health Realization, NEHRI Publications 1998. , Thomas M. Kelley, William F. Pettit Jr., Judith A. Sedgeman & Jack B. Pransky (2021), Psychiatry's pursuit of euthymia: another wild goose chase or an opportunity for principle-based facilitation?, International Journal of Psychiatry in Clinical Practice, 25:4, 333–335, Books S. Banks, Dear Liza, Lone Pine Publishing 2004. , S. Banks, The Enlightened Gardener, Lone Pine Publishing 2001. , S. Banks, The Enlightened Gardener Revisited, Lone Pine Publishing 2006. , S. Banks, In Quest of the Pearl, Duvall-Bibb Publishing 1989. , S. Banks, The Missing Link: Reflections on Philosophy and Spirit, Lone Pine Publishing 1998. , S. Banks, Second Chance, Duvall-Bibb Publishing 1983. , M. Neill, The Inside-Out Revolution, Hay House Inc, 2013, , J. Bailey, Slowing Down to the Speed of Love, McGraw-Hill, 2004. , R. Carlson, You Can be Happy No Matter What, 2nd ed., New World Library 1997. , R. Carlson and J. Bailey, Slowing Down to the Speed of Life, HarperSanFrancisco 1998. , T.M. Kelley, Falling in Love with Life, Bookman 2004. , R.C. Mills, Realizing Mental Health: Toward a new Psychology of Resiliency, Sulberger & Graham Publishing, Ltd. 1995. R.C. Mills and E. Spittle, The Wisdom Within, Lone Pine Publishing. 2001. , J. Pransky, Somebody Should Have Told Us, Airleaf Publishing 2006. , E. Spittle, Wisdom for Life, Lone Pine Publishing. 2005. , Organizations and business R.C. Kausen, We've Got to Start Meeting Like This, Life Education 2003. , R.C. Kausen, Customer Satisfaction Guaranteed, Life Education 1989. , C.L. Polsfuss & A.Ardichvili, "Three Principles Psychology: Applications in Leadership Development & Coaching", Advances in Developing Human Resources Journal, 2008; 10; 671 . Online article at: http://adh.sagepub.com/cgi/content/abstract/10/5/671. C.L. Polsfuss & A.Ardichvili, "State of Mind as the Master Competency for High-Performance Leadership", Organizational Development Journal, Volume 27, Number 3, Fall 2009. Parenting J. Pransky, Parenting from the Heart: A Guide to the Essence of Parenting, Authorhouse 2001 , Prevention J. Pransky, Prevention from the Inside Out, Authorhouse 2003. , J. Pransky and L. Carpenos, Healthy Feeling/Thinking/Doing from the Inside Out: A Middle School Curriculum and Guide for the Prevention of Violence and Other Problem Behaviors, SaferSocietyPress 2000. , K. Marshall, Resilience in our Schools: Discovering Mental Health and Hope from the Inside-Out. in Persistently Safe Schools 2005: The National Conference of the Hamilton Fish Institute on School and Community Violence. Retrieved on October 31, 2007. Recovery/substance abuse J. Bailey, The Serenity Principle: Finding Inner Peace in Recovery, HarperSanFrancisco, 1990. , Relationships G. Pransky, The Relationship Handbook, Pransky and Associates, 2001. , Youth A. Chen Mills-Naim, The Spark Inside: A Special Book for Youth, Lone Pine Publishing. 2005. , T.M.Kelley, A critique of social bonding and control theory of delinquency using the principles of psychology, Adolescence Vol. 31 Issue 122, 1996, pp. 321–38. T. M. Kelley, Health Realization: A Principle-Based Psychology of Positive Youth Development, Child & Youth Care Forum, Vol. 32, Issue 1, 2003, pp. 47–72. T.M. Kelley, Positive Psychology and Adolescent Mental Health: False Promise or True Breakthrough? Adolescence, June 22, 2004 T.M. Kelley, & S.A. 
Stack, Thought Recognition, Locus of Control, and Adolescent Well-being, Adolescence, Vol. 35 Issue 139, 2000, pp. 531–51. Audio Attitude! — CD Great Spirit, The — CD & Audio Cassette Hawaii Lectures - 2-CD set In Quest of the Pearl — CD Long Beach Lectures - 2-CD set One Thought Away — CD (CD-Audio) Second Chance — CD & Audio Cassette Washington Lectures - CD What is Truth? — CD & Audio Cassette Video Hawaii Lecture #1 - Secret to the Mind — DVD Hawaii Lecture #2 - Oneness of Life — DVD & VHS Hawaii Lecture #3 - The Power of Thought — DVD & VHS Hawaii Lecture #4 - Going Home — DVD & VHS Long Beach Lecture #1 - The Great Illusion — DVD Long Beach Lecture #2 - Truth Lies Within — DVD & VHS Long Beach Lecture #3 - The Experience — DVD & VHS Long Beach Lecture #4 - Jumping the Boundaries of Time — DVD & VHS Long Beach Lectures - 4 video set — VHS External links Sydney Banks Coaching from the Inside Out Michael Neill George Pransky Center for Sustainable Change One Thought Heartfelt Presence Psychology
Three Principles Psychology
[ "Biology" ]
7,125
[ "Behavioural sciences", "Behavior", "Psychology" ]
8,725,303
https://en.wikipedia.org/wiki/Comparison%20of%20high-definition%20optical%20disc%20formats
This article compares the technical specifications of multiple high-definition formats, including HD DVD and Blu-ray Disc; two mutually incompatible, high-definition optical disc formats that, beginning in 2006, attempted to improve upon and eventually replace the DVD standard. The two formats remained in a format war until February 19, 2008, when Toshiba, HD DVD's creator, announced plans to cease development, manufacturing and marketing of HD DVD players and recorders. Other high-definition optical disc formats were attempted, including the multi-layered red-laser Versatile Multilayer Disc and a Chinese-made format called EVD. Both appear to have been abandoned by their respective developers. Technical details a These maximum storage capacities apply to currently released media as of January 2012. First two layers of Blu-ray have a 25 GB capacity, but the triple layer disc adds a further 50 GB making 100 GB total. The fourth layer adds a further 28 GB. b All HD DVD players are required to decode the two primary channels (left and right) of any Dolby TrueHD track; however, every Toshiba made stand-alone HD DVD player released thus far decodes 5.1 channels of TrueHD. c On November 1, 2007 Secondary video and audio decoder became mandatory for new Blu-ray Disc players when the Bonus View requirement came into effect. However, players introduced to the market before this date can continue to be sold without Bonus View. d There are some differences in the implementation of Dolby Digital Plus (DD+) on the two formats. On Blu-ray Disc, DD+ can only be used to extend a primary Dolby Digital (DD) 5.1 audiotrack. In this method 640 kbit/s is allocated to the primary DD 5.1 audiotrack (which is independently playable on players that do not support DD+), and up to 1 Mbit/s is allocated for the DD+ extension. The DD+ extension is used to replace the rear channels of the DD track with higher fidelity versions, along with adding additional channels for 6.1/7.1 audiotracks. On HD DVD, DD+ is used to encode all channels (up to 7.1), and no legacy DD track is required since all HD DVD players are required to decode DD+. e On PAL DVDs, 24 frame per second content is stored as 50 interlaced frames per second and gets replayed 4% faster. This process can be reversed to retrieve the original 24 frame per second content. On NTSC DVDs, 24 frame per second content is stored as 60 interlaced frames per second using a process called 3:2 pulldown, which if done properly can also be reversed. f As of July 2008, about 66.7% of Blu-ray discs are region free and 33.3% use region codes. g DVD supports any valid MPEG-2 refresh rate as long as it is packaged with metadata converting it to 576i50 or 480i60, This metadata takes the form of REPEAT_FIRST_FIELD instructions embedded in the MPEG-2 stream itself, and is a part of the MPEG-2 standard. HD DVD is the only high-def disc format that can decode 1080p25 while Blu-ray and HD DVD can both decode 1080p24 and 1080p30. 1080p25 content can only be presented on Blu-ray as 1080i50. h Linear PCM is the only lossless audio codec that is mandatory for both HD DVD and Blu-ray disc players, only HD DVD players are required to decode two lossless sound formats and those are Linear PCM and Dolby TrueHD. Dolby TrueHD and DTS-HD Master Audio have become sound format of choice for many studios on their Blu-ray titles but ever since Blu-ray won the format war, it has not become clear if they are now Mandatory for all new Blu-ray disc players since the end of the format war. 
Capacity/codecs Blu-ray Disc has a higher maximum disc capacity than HD DVD (50 GB vs. 30 GB for a double layered disc). In September 2007 the DVD Forum approved a preliminary specification for the triple-layer 51 GB HD DVD (ROM only) disc though Toshiba never stated whether it was compatible with existing HD DVD players. In September 2006 TDK announced a prototype Blu-ray Disc with a capacity of 200GB. TDK was also the first to develop a Blu-ray prototype with a capacity of 100GB in May 2005. In October 2007 Hitachi developed a Blu-ray prototype with a capacity of 100GB. Hitachi has stated that current Blu-ray drives would only require a few firmware updates in order to play the disc. The first 50 GB dual-layer Blu-ray Disc release was the movie Click, which was released on October 10, 2006. As of July 2008, over 95% of Blu-ray movies/games are published on 50 GB dual layer discs with the remainder on 25 GB discs. 85% of HD DVD movies are published on 30 GB dual layer discs, with the remainder on 15 GB discs. The choice of video compression technology (codec) complicates any comparison of the formats. Blu-ray Disc and HD DVD both support the same three video compression standards: MPEG-2, VC-1 and AVC, each of which exhibits different bitrate/noise-ratio curves, visual impairments/artifacts, and encoder maturity. Initial Blu-ray Disc titles often used MPEG-2 video, which requires the highest average bitrate and thus the most space, to match the picture quality of the other two video codecs. As of July 2008 over 70% of Blu-ray Disc titles have been authored with the newer compression standards: AVC and VC-1. HD DVD titles have used VC-1 and AVC almost exclusively since the format's introduction. Warner Bros., which used to release movies in both formats prior to June 1, 2007, often used the same encode (with VC-1 codec) for both Blu-ray Disc and HD DVD, with identical results. In contrast, Paramount used different encodings: initially MPEG-2 for early Blu-ray Disc releases, VC-1 for early HD DVD releases, and eventually AVC for both formats. Whilst the two formats support similar audio codecs, their usage varies. Most titles released on the Blu-ray format include Dolby Digital tracks for each language in the region, a DTS-HD Master Audio track for all 20th Century Fox and Sony Pictures and many upcoming Universal titles, Dolby TrueHD for Disney and Sony Pictures and some Paramount and Warner titles, and for many Blu-ray titles a Linear PCM track for the primary language. On the other hand, most titles released on the HD DVD format include Dolby Digital Plus tracks for each language in the region, and some also include a Dolby TrueHD track for the primary language. Interactivity Both Blu-ray Disc and HD DVD have two main options for interactivity (on-screen menus, bonus features, etc.). HD DVD's Standard Content is a minor change from standard DVD's subpicture technology, while Blu-ray's BDMV is completely new. This makes transitioning from standard DVD to Standard Content HD DVD relatively simple —for example, Apple's DVD Studio Pro has supported authoring Standard Content since version 4.0.3. For more advanced interactivity Blu-ray disc supports BD-J while HD DVD supports Advanced Content. Disc construction Blu-ray Discs contain their data relatively close to the surface (less than 0.1 mm) which combined with the smaller spot size presents a problem when the surface is scratched as data would be destroyed. 
To overcome this, TDK, Sony, and Panasonic each have developed a proprietary scratch resistant surface coating. TDK trademarked theirs as Durabis, which has withstood direct abrasion by steel wool and marring with markers in tests. HD DVD uses traditional material and has the same scratch and surface characteristics of a regular DVD. The data is at the same depth (0.6 mm) as DVD as to minimize damage from scratching. As with DVD the construction of the HD DVD allows for a second side of either HD DVD or DVD. A study performed by Home Media Magazine (August 5, 2007) concluded that HD DVDs and Blu-ray discs are essentially equal in production cost. Quotes from several disc manufacturers for 25,000 units of HD DVDs and Blu-rays revealed a price differential of only 5-10 cents. (Lowest price: 90 cents versus 100 cents. Highest price: $1.45 versus $1.50.) Another study performed by Wesley Tech (February 9, 2007) arrived at a similar conclusion. Quotes for 10,000 discs show that a 15 gigabyte HD DVD costs $11,500 total, and 25 gigabyte Blu-ray or a 30 gigabyte HD DVD costs $13,000 total. For larger quantities of 100,000 units, the 30 gigabyte HD DVD was more expensive than the 25 gigabyte Blu-ray ($1.55 versus $1.49). While there is a HD-DVD variant that acts as a successor for the DVD-RAM, the HD DVD-RAM, a "BD-RAM" has never been released. Although the BD-RE has unrestricted random writing access capabilities, its rewrite cycle count of around 1000 times is much lower than the potential 100,000 rewrite cycles of some DVD-RAM variants. Hybrid discs At the Consumer Electronics Show, on 4 January 2007, Warner Bros. introduced a hybrid technology, Total HD, which would reportedly support both formats on a single disc. The new discs were to overlay the Blu-ray and HD DVD layers, placing them respectively and beneath the surface. The Blu-ray top layer would act as a two-way mirror, reflecting just enough light for a Blu-ray reader to read and an HD DVD player to ignore. Later that year, however, in September 2007, Warner President Ron Sanders said that the technology was on hold due to Warner being the only company who would publish on it. One year after the original announcement, on 4 January 2008, Warner Bros. stated that it would support the Blu-ray format exclusively beginning on 1 June 2008, which, along with the demise of HD DVD the following month, ended development of hybrid discs permanently. Copy protection The primary copy protection system used on both formats is the Advanced Access Content System (AACS). Other copy protection systems include: Region coding The Blu-ray specification and all currently available players support region coding. As of July 2008 about 66.7% of Blu-ray Disc titles are region-free and 33.3% use region codes. The HD DVD specification had no region coding, so a HD DVD from anywhere in the world will work in any player. The DVD Forum's steering committee discussed a request from Disney to add it, but many of the 20 companies on the committee actively opposed it. Some film titles that were exclusive to Blu-ray in the United States such as Sony's xXx, Fox's Fantastic Four: Rise of the Silver Surfer and The Prestige, were released on HD DVD in other countries due to different distribution agreements; for example, The Prestige was released outside the U.S. by once format-neutral studio Warner Bros. Pictures. Since HD DVDs had no region coding, there are no restrictions playing foreign-bought HD DVDs in an HD DVD player. 
References High-definition television Video storage Technological comparisons
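A rough way to relate the capacity and average-bitrate figures discussed above is to convert disc capacity into playable minutes. The sketch below is illustrative only: the dual-layer capacities are taken from the article, but the average combined video-and-audio bitrates are round numbers assumed for the example, not measured values from any particular title.

```python
# Rough playable-runtime estimate: disc capacity (GB) versus assumed average total bitrate (Mbit/s).
# Capacities follow the article; the bitrates are illustrative assumptions only.

def runtime_minutes(capacity_gb: float, avg_bitrate_mbps: float) -> float:
    """Return approximate playable minutes for a disc of the given capacity."""
    capacity_bits = capacity_gb * 1000**3 * 8          # decimal gigabytes, as disc vendors quote them
    seconds = capacity_bits / (avg_bitrate_mbps * 1_000_000)
    return seconds / 60

for disc, capacity in [("HD DVD dual layer", 30), ("Blu-ray dual layer", 50)]:
    for codec, bitrate in [("MPEG-2 (assumed)", 24), ("VC-1/AVC (assumed)", 16)]:
        print(f"{disc} with {codec} at {bitrate} Mbit/s: ~{runtime_minutes(capacity, bitrate):.0f} min")
```

The point of the comparison is simply that a more efficient codec at a lower average bitrate stretches the same capacity over a longer runtime, which is why early MPEG-2 Blu-ray titles needed the larger dual-layer discs to match the picture quality of VC-1 and AVC encodes.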
Comparison of high-definition optical disc formats
[ "Technology" ]
2,392
[ "nan" ]
16,056,274
https://en.wikipedia.org/wiki/Wmctrl
wmctrl is a command used to control windows in EWMH- and NetWM-compatible X Window window managers. Some of its common operations are listing, resizing, and closing windows. It also has the ability to interact with virtual desktops and give information about the window manager. wmctrl is a command-line program; however, it has some functions that allow the mouse to select a window for an operation. Operations wmctrl operations List all desktops List all windows Switch desktop of a window Close window Resize window Move window Set window's icon name Set window title Add, remove, or toggle window properties modal sticky maximized_vert maximized_horz shaded skip_taskbar skip_pager hidden fullscreen above below Move window to another desktop Change geometry (common size) of desktops Display information about the window manager Change number of desktops Compatible window managers Compatible, or mostly compatible, window managers Blackbox ≥ version 0.70 IceWM KWin (the default WM for KDE) Metacity (the default WM for GNOME 2, replaced by Mutter in GNOME 3) Openbox ≥ 3 (the default WM of Lubuntu) sawfish FVWM ≥ 2.5 waimea PekWM enlightenment ≥ 0.16.6 Xfwm ≥ 4 (the default WM for Xfce) Fluxbox ≥ 0.9.6 matchbox Window Maker ≥ 0.91 compiz Awesome Xmonad Qubes Qtile References External links Website, archived 2023-04-07 Extended Window Manager Hints (EWMH) NetWM Application programming interfaces X Window System Computing commands
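As a small illustration of the window-listing operation described above, the following Python sketch shells out to wmctrl (assuming it is installed and an EWMH-compatible window manager is running). It uses only the widely documented -l (list windows) and -a (activate a window by title) options; the line parsing is an assumption based on the usual -l output layout and may need adjusting for unusual titles.

```python
# Minimal sketch of scripting wmctrl from Python (assumes wmctrl is installed and an
# EWMH-compatible window manager is running).
import subprocess

def list_windows():
    """Return one (window_id, desktop, hostname, title) tuple per managed window, via 'wmctrl -l'."""
    out = subprocess.run(["wmctrl", "-l"], capture_output=True, text=True, check=True).stdout
    windows = []
    for line in out.splitlines():
        parts = line.split(None, 3)
        if len(parts) == 3:          # window with an empty title
            parts.append("")
        win_id, desktop, host, title = parts
        windows.append((win_id, int(desktop), host, title))   # desktop is -1 for sticky windows
    return windows

def activate(title_fragment: str):
    """Switch to and raise the first window whose title matches, via 'wmctrl -a'."""
    subprocess.run(["wmctrl", "-a", title_fragment], check=True)

if __name__ == "__main__":
    for win in list_windows():
        print(win)
```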
Wmctrl
[ "Technology" ]
354
[ "Computing commands" ]
16,056,559
https://en.wikipedia.org/wiki/Christie%20G.%20Enke
Christie G. Enke is a United States academic chemist who made pioneering contributions to the field of analytical chemistry. Life and career Chris Enke was born in Minneapolis, Minnesota on July 8, 1933. His parents were Alvin Enke and Mae Nichols. He graduated from Central High School in Minneapolis in 1951. He received a BA degree from Principia College in 1955 and a PhD from the University of Illinois in 1959. His thesis, concerning the anodic formation of surface oxide films on platinum electrodes, was performed under the guidance of Herbert Laitinen. While at Illinois, he also worked with Howard Malmstadt to introduce a graduate lab and lecture course in the electronics of laboratory instrumentation. He is now Professor Emeriti of Chemistry at the University of New Mexico and Michigan State University. Prior to his move to the University of New Mexico in 1994, he was an instructor and assistant professor at Princeton (1959 –1966), then an associate professor and professor at Michigan State University. Education 1955 B.S. Principia College 1959 M.S. University of Illinois 1959 PhD University of Illinois Research and teaching Electroanalytical chemistry: Enke's early research in electrochemistry centered on high-speed charge transfer kinetic studies. He also pioneered the use of operational amplifiers in electroanalytical instrumentation and later, computer control. He is co-inventor of the bipolar pulse method for measuring electrolytic conductance. Teaching electronics to scientists: Howard Malmstadt and Enke wrote the pioneering work, Electronics for Scientists. Then Malmstadt, Stan Crouch, and Enke wrote eight more texts and lab books in the electronics of laboratory instrumentation. This same team developed and presented the hands-on ACS short course, Electronics for Laboratory Instrumentation beginning in 1979. Enke also wrote an introductory analytical chemistry text called The Art and Science of Chemical Analysis. Mass spectrometry: Enke, his graduate student, Rick Yost, and a colleague, James Morrison, discovered low-energy collisional ion fragmentation in 1979. Collisional dissociation in an RF-only quadrupole mass filter between two quadrupole mass analyzers resulted in the first triple quadrupole mass spectrometer. Its low cost and unit resolution ushered in the technique now known as tandem mass spectrometry. Enke continued research in mass spectrometry including developing a distributed microprocessor control system for the triple-quadrupole, a fast integrating detector system for time-of-flight mass spectrometry, development of a tandem time-of-flight instrument with photofragmentation of ions, the equilibrium partition theory of electrospray ionization, and the invention of distance-of-flight mass spectrometry. Comprehensive analysis of complex mixtures: With Luc Nagels, Enke discovered that the concentrations of components in many natural complex mixtures have a log-normal distribution. With this information, one can learn the number and concentrations of components that are below the detection limit. Awards 1974 American Chemical Society Award for Chemical Instrumentation 1981 Fellow, American Association for the Advancement of Science 1989 American Chemical Society Award for Computers in Chemistry 1992 Michigan State University Distinguished Faculty Award 1993 Distinguished Contribution in Mass Spectrometry Award (shared with Richard Yost) 2003 J. 
Calvin Giddings Award for Excellence in Education from Analytical Division of the American Chemical Society 2011 American Chemical Society Award in Analytical Chemistry 2011 Fellow, American Chemical Society 2014 Distinguished Service in the Advancement of Analytical Chemistry Award from the Analytical Division of the American Chemical Society 2015 Eastern Analytical Symposium Award for Outstanding Achievements in the Fields of Analytical Chemistry Service Chair-elect, Chair and Past Chair, Analytical Division, American Chemical Society, 2004-2008 V.P. for Programs, President, Past President, American Society for Mass Spectrometry, 1992-1998 Program Chairman, Chairman, Div. of Computers in Chem., American Chemical Society, 1981-1985 Editorial Advisory Board, Analytical Chemistry, 1972-1974 Chair, Physical Electrochemistry Div., The Electrochemical. Soc.1963-1971 References 21st-century American chemists Mass spectrometrists 1933 births Living people Scientists from Illinois University of New Mexico faculty Principia College alumni University of Illinois alumni Place of birth missing (living people) Scientists from Minneapolis
Christie G. Enke
[ "Physics", "Chemistry" ]
874
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,057,501
https://en.wikipedia.org/wiki/Railway%20time
Railway time was the standardised time arrangement first applied by the Great Western Railway in England in November 1840, the first recorded occasion when different local mean times were synchronised and a single standard time applied. The key goals behind introducing railway time were to overcome the confusion caused by having non-uniform local times in each town and station stop along the expanding railway network and to reduce the incidence of accidents and near misses, which were becoming more frequent as the number of train journeys increased. Railway time was progressively taken up by all railway companies in Great Britain over the following seven years. The schedules by which trains were organised and the time station clocks displayed were brought in line with the local mean time for London or "London Time", the time set at Greenwich by the Royal Observatory, which was already widely known as Greenwich Mean Time (GMT). The development of railway networks in North America in the 1850s, India in around 1860, and in Europe, prompted the introduction of standard time influenced by geography, industrial development, and political governance. The railway companies sometimes faced concerted resistance from local people who refused to adjust their public clocks to bring them into line with London Time. As a consequence, two different times would be displayed in the town and in use, with the station clocks and the times published in train timetables differing by several minutes from that on other clocks. Despite this early reluctance, railway time rapidly became adopted as the default time across the whole of Great Britain, although it took until 1880 for the government to legislate on the establishment of a single standard time and a single time zone for the country. Some contemporary commentators referred to the influence of railway time on encouraging greater precision in daily tasks and the demand for punctuality. History Until the latter part of the 18th century, time was normally determined in each town by a local sundial. Solar time is calculated with reference to the relative position of the sun. This provided only an approximation as to time due to variations in orbits and had become unsuitable for day-to-day purposes. It was replaced by local mean time, which eliminated the variation due to seasonal differences and anomalies. It also took account of the longitude of a location and enabled a precise time to be applied. Such new-found precision did not overcome a different problem: the differences between the local times of neighbouring towns. In Britain, local time differed by up to 20 minutes from that of London. For example, Oxford Time was 5 minutes behind Greenwich Time, Leeds Time 6 minutes behind, Carnforth 11 minutes behind, and Barrow almost 13 minutes behind. In India and North America, these differences could be 60 minutes or more. Almanacs containing tables were published and instructions attached to sundials to enable the differences between local times to be computed. Before the arrival of the railways, journeys between the larger cities and towns could take many hours or days, and these differences could be dealt with by adjusting the hands of a watch periodically en route. In Britain, the coaching companies published schedules providing details of the corrections required. However, this variation in local times was large enough to present problems for the railway schedules. 
For instance, Leeds time was six minutes behind London, while Bristol was ten minutes behind; sunrise for towns to the east, such as Norwich, occurred several minutes ahead of London. It soon became apparent that even such small discrepancies in times caused confusion, disruption, or even accidents. Influence of the electric telegraph The electric telegraph, which had been developed in the early part of the 19th century, was refined by William Fothergill Cooke and Charles Wheatstone and was installed on a short section of the Great Western Railway in 1839. By 1852 a telegraph link had been constructed between a new electro-magnetic clock at Greenwich and initially Lewisham, and shortly after this London Bridge stations. It also connected via the Central Telegraph Station of the Electric Time Company in the City of London, which enabled the transmission of a time signal along the railway telegraphic network to other stations. By 1855 time signals from Greenwich could be sent through wires alongside the railway lines across the length and breadth of Britain. This technology was also used in India to synchronise railway time. Introduction of railway time Great Britain Before the advent of the telegraph, stationmasters adjusted their clocks using tables supplied by the railway company to convert local time to London Time. In turn, train guards set their chronometers against those clocks. The introduction of railway time was in the end swift despite not being straightforward. The Great Western Railway was the first to standardise its timetable on Greenwich Mean Time, in November 1840. One of the most vociferous proponents of standardising time on the railways was Henry Booth, Secretary of the Liverpool and Manchester Railway, who by January 1846 had ordered the adjustment of clocks to Greenwich Mean Time at both Liverpool and Manchester stations. The Midland Railway adopted London Time at all of its stations on 1 January 1846. As a consequence, in February 1846 the town council of Nottingham ordered that the town clocks be furnished with three hands, two indicating local time and the additional one the railway and post-office London time. On 22 September 1847, the Railway Clearing House, set up five years earlier to coordinate the distribution of revenue between railway companies, decreed that "GMT be adopted at all stations as soon as the General Post Office permitted it". From 1 December 1847, the London and North Western and the Caledonian Railways switched over. By January 1848, according to Bradshaws Railway Guide, the railways that had adopted London Time included the London and South Western, the Midland, the Chester and Birkenhead, the Lancaster and Carlisle, the East Lancashire and the York and North Midland. It was reported that by 1855 that 98 percent of towns and cities had transferred to GMT. On the other hand, not all railway companies convinced the local dignitaries to bring their clocks on public buildings in line without stern resistance. Although by 1844 the Bristol and Exeter Railway was running to London Time, the public clocks at both Exeter and Bristol operated to local time but showed London Time by a second minute hand, 14 and 10 minutes ahead, respectively, of its companion. In Exeter this situation arose due to the reluctance of the Dean of Exeter Cathedral to concede to the demands of the railway company, the cathedral clock being the principal timekeeper for the city. 
Similarly, the clock at The Bristol Exchange installed in 1822 subsequently had a second minute hand added. Bristol did not solely recognise railway time until September 1852. It was not for a further eight years and the arrival of the electric telegraph that railway time was the sole time recognised in these towns and others in the West Country, including Bath, Devonport and Plymouth. Another town that stood its ground was Oxford where the great clock on Tom Tower at Christ Church, Oxford had two minute hands. It was not until 2 August 1880, when the Statutes (Definition of Time) Act received the Royal Assent, that a unified standard time for the whole of Great Britain achieved legal status. As late as the 1950s, the Western Region of British Railways had an elaborate telephone ritual at 11:00 am for all signal boxes to synchronise their clocks with that at Paddington Station. United States One of the first reported incidents which brought about a change in how time was organised on railways in the United States occurred in New England in August 1853, the Valley Falls train collision. Two trains heading towards each other on the same track collided as the conductors had different times set on their watches, resulting in the death of 14 passengers. Railway schedules were co-ordinated in New England shortly after this incident Numerous other collisions led to the setting up of the General Time Convention, a committee of railway companies to agree on scheduling. In 1870 Charles F. Dowd, who was unconnected with the railway movement or civil authorities, proposed A System of National Times for Railroads, which involved a single time for railways but the keeping of local times for towns. Although this did not find favour with railway managers, in 1881 they agreed for the idea to be investigated by William Frederick Allen, Secretary of the General Time Convention and Managing Editor of the Travellers' Official Guide to the Railways. He proposed replacing the 50 different railway times with five time zones. He eventually persuaded the railway managers and the politicians running the cities that had several railway stations that it was in their interests to speedily adopt his simpler proposals, which aligned the zones with cities' railroad stations. In doing so they would pre-empt the imposition of more costly and cumbersome arrangements by different state legislators and the naval authorities, both of whom favoured retention of local times. Right to the end there was opposition expressed by many smaller towns and cities to the imposition of railway time. For example, in Indianapolis the report in the daily Sentinel for 17 November 1883 protested that people would have to "eat sleep work ... and marry by railroad time". However, with the support of nearly all railway companies, most cities and influential observatories such as Yale and Harvard, this collaborative approach led to standard railway time being introduced at noon on 18 November 1883. This consensus held and was incorporated into federal law only in 1918. France France adopted Paris Mean Time as its standard national time in 1891. It also required clocks inside railway stations and train schedules to be set five minutes late to allow travelers to arrive late without missing their trains, even while clocks on the external walls of railway stations displayed Paris Mean Time. In 1911, France adopted Paris Mean Time delayed 9 minutes 21 seconds, making it equivalent to Greenwich Mean Time without mentioning Greenwich. 
At the same time, slow railway station clocks were eliminated. Germany In Germany the standardisation of time had started to be discussed in the 1870s. North German railways were already regulated to Berlin Time in 1874. However, it was not until 1 April 1893 that a law was established by the German Empire "concerning the introduction of uniform time reckoning" by which all railways would operate and also all aspects of social, industrial and civil activity would henceforth be strictly regulated. Italy Italy was newly unified as a country when on 12 December 1866 at the start of the winter season the railway timetables centred on Turin, Verona, Florence, Rome, Naples and Palermo were synchronised on the time in Rome, which although it would remain notionally at least under French military control until 1870, was seen as the heart of the nation. In addition to the adoption of a single railway time there was a progressive standardisation of time for civil and commercial purposes. Milan came in line straight away, Turin and Bologna on 1 January 1867, Venice on 1 May 1880 and Cagliari in 1886. Ireland Ireland and France were the only countries that decided not to officially adopt Greenwich Time, reflecting the political sensitivities of the time. Dublin mean time was set 25 minutes behind London time, although it came into line with international standard time in October 1916 when summer time ended, and most railway clocks were adjusted by 35 minutes rather than one hour. A slight variation was in railway stations in Ulster such as Belfast and Bangor where clocks displayed on the same dial both Dublin mean time (railway time for the island of Ireland) and Belfast Time (local time), 23 minutes and 39 seconds behind Greenwich. Netherlands Netherlands Railway time was based on GMT until 1909 when the country adopted 'Amsterdam time' as the standard time, 19 minutes ahead of GMT. This persisted until 1940, when the Nazi occupation of the Netherlands required a shift to German time, which has continued to be the standard. Sweden Though several private railways had been built construction of public railways in Sweden came later than most other European countries, delayed by concerns over construction costs and resistance from powerful shipping owners. Railway time was introduced on the main railway line between Stockholm and Gothenburg which opened in 1862. Timetables were based on solar time at Gothenburg, the westernmost end of the line. As a consequence passengers and businesses following local time would arrive at the station ahead of the train. There were many private railways that followed local time or their own railway time. On 1 January 1879, a national standard time was introduced across the whole of Sweden, one hour advanced of Greenwich mean time. Russia Until 1 August 2018, Russia had a separate railway time, meaning that timetables and tickets on Russian railways followed Moscow Time regardless of local time. Starting from 1 August 2018, each station uses the local timezone. India The Indian railway companies had to contend with different local times as the rapidly expanding routes extended out from Mumbai (formerly Bombay), Kolkata (formerly Calcutta), Lahore and Chennai (formerly Madras). Towards the end of the 1860s the situation became even more confused as the networks linked up. 
In 1870, to overcome the problems occurring, Chennai (Madras) time was adopted for all railways, for two reasons: the longitude of Chennai is roughly midway between those of Kolkata and Mumbai, and the Observatory there ran the telegraphic service which could be utilised to synchronise station times via the same time-signal system first used in Britain in 1852 to regulate railway time. Madras Time was popularised by its use in Newman's Indian Bradshaw Timetables. However, unlike in Britain, where railway time was rapidly adopted countrywide and evolved not long after into standard time, in India the much larger size of the country and the autonomy enjoyed by Mumbai and Kolkata resulted in both Presidencies retaining local times well into the 20th century. For the remaining part of the 19th century Madras time continued to be used by all railways. Proposals had been put forward for at least one meridian–based time zone for India as early as 1884. However, no consensus could be reached until 1906, when a single time zone based on Allahabad was established, and a standard time was introduced, which the railways came in line with. Despite this, Kolkata kept its own time until 1948 and to a lesser extent Mumbai continued to do so unofficially until 1955. Korea In 1904, during the Russo-Japanese War, the Chosen Keifu Railway, a privately run company, introduced Japan Central Standard Time as its railway time instead of Korean traditional time (UTC+08:28). In 1908, the Chosen Keibu Railway adopted the new Korean standard time (UTC+08:30). References Notes Bibliography Rail transport operations Time scales
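The local mean time offsets quoted earlier in the article (Oxford about 5 minutes, Leeds 6 minutes and Bristol 10 minutes behind London) follow directly from longitude: the Earth turns 360 degrees in 24 hours, so each degree of longitude corresponds to roughly four minutes of clock time. The short sketch below illustrates the arithmetic; the longitudes used are approximate values assumed for the example.

```python
# Local mean time lags Greenwich by about 4 minutes per degree of longitude west
# (24 h * 60 min / 360 deg = 4 min per degree). Longitudes below are approximate.
def minutes_behind_greenwich(longitude_deg_west: float) -> float:
    return longitude_deg_west * 4.0

for town, lon_west in [("Oxford", 1.26), ("Bristol", 2.59), ("Leeds", 1.55)]:
    print(f"{town}: about {minutes_behind_greenwich(lon_west):.0f} minutes behind London")
```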
Railway time
[ "Physics", "Astronomy" ]
2,884
[ "Physical quantities", "Time", "Astronomical coordinate systems", "Spacetime", "Time scales" ]
16,057,614
https://en.wikipedia.org/wiki/ACAM2000
ACAM2000 is a smallpox vaccine and an mpox vaccine manufactured by Emergent Biosolutions. It provides protection against smallpox for people determined to be at high risk for smallpox infection. ACAM2000 is a live replicating vaccinia virus vaccine. Medical uses ACAM2000 is indicated for active immunization against smallpox disease for individuals determined to be at high risk for smallpox infection. It is also indicated for the active prevention of mpox disease in individuals determined to be at high risk for mpox infection. History ACAM2000 is a vaccine developed by Acambis, which was acquired by Sanofi Pasteur in 2008, before selling the smallpox vaccine to Emergent Biosolutions in 2017. Six strains of vaccinia were isolated from 3,000 doses of Dryvax and found to exhibit significant variation in virulence. The strain with the most similar virulence to the overall Dryvax mixture was selected and grown in MRC-5 cells to make the ACAM1000 vaccine. After a successful Phase I trial of ACAM1000, the virus was passaged three times in Vero cells to develop ACAM2000, which entered mass production at Baxter. The United States ordered over 200 million doses of ACAM2000 in 1999–2001 for its stockpile, and production is ongoing to replace expired vaccine. Emergent Biosolutions developed ACAM2000 under a contract with the US Centers for Disease Control and Prevention (CDC). The US Food and Drug Administration (FDA) approved ACAM2000 in August 2007. By February 2008, it replaced Dryvax for all smallpox vaccinations. As of 2010, there were over 200 million doses manufactured for the US Strategic National Stockpile. According to the US FDA, "The approval and availability of this second-generation smallpox vaccine in the Strategic National Stockpile (SNS) enhances the emergency preparedness of the United States against the use of smallpox as a dangerous biological weapon." In August 2024, ACAM2000 was approved for mpox prevention in the United States. Administration of ACAM2000 The ACAM2000 vaccine is produced from the vaccinia virus, which is sufficiently closely related to smallpox to provide immunity, but the ACAM2000 vaccine cannot cause smallpox because it does not contain the smallpox virus. Other vaccines containing live viruses include measles, mumps, rubella, polio and chickenpox. The vaccine is administered using a bifurcated stainless steel needle. The needle is dipped into the vaccine solution and used to prick the skin several times in the upper arm. The vaccinia virus will begin to grow at the injection site. It will cause a localized infection, with a red itchy sore produced at the vaccination site within three to four days. If the infection occurs, that is an indication that the vaccine was successful. Ultimately, the sore turns into a blister and then dries up. A scab forms and then falls off in the third week, leaving a small scar behind. Risks Administration of ACAM2000 poses risks and may cause side effects. Most people who have taken the vaccine only report mild reactions. Reactions may include a sore arm, fever, and body aches. Some people may have more serious side effects, including effects that may be life-threatening. According to the FDA-approved prescribing information leaflet, "Common adverse events include inoculation site signs and symptoms, lymphadenitis, and constitutional symptoms, such as malaise, fatigue, fever, myalgia, and headache." These reactions are less frequent in people being revaccinated than those receiving the vaccine for the first time. 
No known contraindications exist to receiving the vaccine in case of an outbreak emergency. Furthermore, it is recommended that the vaccine should be given to pregnant women who have been exposed to smallpox. "Because the risk of maternal serious illness or death, prematurity, miscarriage, or stillbirth from a smallpox infection are greater than the risk of the vaccination, smallpox vaccine is recommended and should be offered to pregnant women in case of an outbreak emergency." References External links Healthcare in the United States History of immunology Infectious diseases Vaccine Smallpox vaccines Vaccines
ACAM2000
[ "Biology" ]
879
[ "Vaccination", "Vaccines" ]
16,058,302
https://en.wikipedia.org/wiki/EAA%20Biplane
The EAA Biplane is a recreational aircraft that was designed by the Experimental Aircraft Association in the United States and marketed as plans for home-built aircraft. Design and development A preliminary design was produced for the EAA by a team of Allison engineers led by EAA member Jim D. Stewart in 1955. This team took the Gere Sport of the 1930s as their starting point and eventually developed a completely new design, which also incorporated several later design changes made by Robert D. Blacker, the prototype's builder and one of its test pilots. Blacker's design changes included adding 2 degrees of dihedral to the upper wing, redesign of the horizontal stabilizer, installation of a diagonal brace at Stations 2 and 3, a change to the fuselage truss assembly, strengthening of the control column support, and a ball-bearing arrangement. The design is a single-seat biplane of conventional configuration, with staggered, single-bay equal-span wings braced with N-struts. The undercarriage is of fixed tailwheel type. The fuselage is fabric-covered welded steel tube, and the wings fabric-covered wood. This prototype EAA Biplane was built by Blacker (President of EAA Chapter 15 at the time) and his students at St. Rita of Cascia High School in Chicago, Illinois, as the second airplane completed as part of EAA's Project Schoolflight. Construction of the EAA Biplane began in September 1957, with a first flight in June 1960. During the construction of the prototype, Blacker wrote several "EAA Biplane Progress Reports" published in EAA's Sport Aviation magazine. Blacker put the prototype's incomplete fuselage on display at EAA's 1958 fly-in. The prototype EAA Biplane work, along with the other facets of Project Schoolflight, resulted in the award of the Mechanix Illustrated trophy for "Outstanding Achievement in Home-Built Aircraft". The completed prototype EAA Biplane was first publicly shown at the 1961 Rockford, Illinois Fly-In. Operational history Plans for the biplane remained available until 1972, with 7,000 sets sold. Aircraft on display EAA Aviation Museum, Oshkosh, Wisconsin - prototype. Specifications (typical) References AirVenture Museum page on type AirVenture Museum specification page for Biplane with 85 hp engine and open cockpit List of magazine articles about the EAA Biplane External links 1960s United States sport aircraft Homebuilt aircraft Biplane Single-engined tractor aircraft Biplanes Aircraft first flown in 1960 Experimental Aircraft Association
EAA Biplane
[ "Engineering" ]
512
[ "Experimental Aircraft Association", "Aerospace engineering organizations" ]
16,058,443
https://en.wikipedia.org/wiki/Energy%20Resources%20Aotearoa
Energy Resources Aotearoa, formerly known as Petroleum Exploration and Production Association of New Zealand (PEPANZ) until March 2021, is an incorporated society based in Wellington which represents the wider energy resources sector, including the upstream oil and gas sector in New Zealand. They work with central and local government, stakeholders and the wider public. As part of this they hold events, publish educational booklets, make numerous submissions and run the social media campaign Energy Voices to promote use of natural gas. Members Full members include: Beach Energy New Zealand Oil and Gas OMV New Zealand Limited Todd Energy Associate members include service providers to the oil and gas industry in New Zealand (such as contractors, legal firms, engineers). Climate change Energy Resources Aotearoa says it supports the transition to lower emissions. As PEPANZ, the organisation was criticised for advocating increased use of fossil fuels, such as oil and natural gas. See also Energy in New Zealand Oil and gas industry in New Zealand References External links Oil and gas companies of New Zealand Petroleum organizations 1972 establishments in New Zealand
Energy Resources Aotearoa
[ "Chemistry", "Engineering" ]
215
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
16,058,982
https://en.wikipedia.org/wiki/Dihydrostreptomycin
Dihydrostreptomycin is a derivative of streptomycin that has bactericidal properties. It is a semisynthetic aminoglycoside antibiotic used in the treatment of tuberculosis. After being actively transported across the cell membrane, it acts by irreversibly binding the S12 protein in the bacterial 30S ribosomal subunit, which interferes with the initiation complex between the mRNA and the bacterial ribosome. This leads to the synthesis of defective, nonfunctional proteins, which results in the bacterial cell's death. It causes ototoxicity, which is why it is no longer used in humans. See also Translation (biology) References External links Dihydrostreptomycin | C21H41N7O12 - PubChem Aminoglycoside antibiotics Guanidines
Dihydrostreptomycin
[ "Chemistry" ]
176
[ "Guanidines", "Functional groups" ]
16,059,132
https://en.wikipedia.org/wiki/Hilbert%20C%2A-module
Hilbert C*-modules are mathematical objects that generalise the notion of Hilbert spaces (which are themselves generalisations of Euclidean space), in that they endow a linear space with an "inner product" that takes values in a C*-algebra. They were first introduced in the work of Irving Kaplansky in 1953, which developed the theory for commutative, unital algebras (though Kaplansky observed that the assumption of a unit element was not "vital"). In the 1970s the theory was extended to non-commutative C*-algebras independently by William Lindall Paschke and Marc Rieffel, the latter in a paper that used Hilbert C*-modules to construct a theory of induced representations of C*-algebras. Hilbert C*-modules are crucial to Kasparov's formulation of KK-theory, and provide the right framework to extend the notion of Morita equivalence to C*-algebras. They can be viewed as the generalization of vector bundles to noncommutative C*-algebras and as such play an important role in noncommutative geometry, notably in C*-algebraic quantum group theory, and groupoid C*-algebras. Definitions Inner-product C*-modules Let $A$ be a C*-algebra (not assumed to be commutative or unital), its involution denoted by $a \mapsto a^*$. An inner-product $A$-module (or pre-Hilbert $A$-module) is a complex linear space $E$ equipped with a compatible right $A$-module structure, together with a map $\langle \cdot, \cdot \rangle : E \times E \to A$ that satisfies the following properties: For all $x$, $y$, $z$ in $E$, and $\alpha$, $\beta$ in $\mathbb{C}$: $\langle x, \alpha y + \beta z \rangle = \alpha \langle x, y \rangle + \beta \langle x, z \rangle$ (i.e. the inner product is $\mathbb{C}$-linear in its second argument). For all $x$, $y$ in $E$, and $a$ in $A$: $\langle x, y a \rangle = \langle x, y \rangle a$. For all $x$, $y$ in $E$: $\langle x, y \rangle^* = \langle y, x \rangle$, from which it follows that the inner product is conjugate linear in its first argument (i.e. it is a sesquilinear form). For all $x$ in $E$: $\langle x, x \rangle \geq 0$ in the sense of being a positive element of A, and $\langle x, x \rangle = 0$ only if $x = 0$. (An element of a C*-algebra is said to be positive if it is self-adjoint with non-negative spectrum.) Hilbert C*-modules An analogue to the Cauchy–Schwarz inequality holds for an inner-product $A$-module $E$: $\langle x, y \rangle^* \langle x, y \rangle \leq \Vert \langle x, x \rangle \Vert \, \langle y, y \rangle$ for $x$, $y$ in $E$. On the pre-Hilbert module $E$, define a norm by $\Vert x \Vert = \Vert \langle x, x \rangle \Vert^{1/2}$. The norm-completion of $E$, still denoted by $E$, is said to be a Hilbert $A$-module or a Hilbert C*-module over the C*-algebra $A$. The Cauchy–Schwarz inequality implies the inner product is jointly continuous in norm and can therefore be extended to the completion. The action of $A$ on $E$ is continuous: for all $x$ in $E$, $a_\lambda \to a$ implies $x a_\lambda \to x a$. Similarly, if $(e_\lambda)$ is an approximate unit for $A$ (a net of self-adjoint elements of $A$ for which $a e_\lambda$ and $e_\lambda a$ tend to $a$ for each $a$ in $A$), then $x e_\lambda \to x$ for $x$ in $E$. Whence it follows that $E A$ is dense in $E$, and $x 1 = x$ when $A$ is unital. Let $\langle E, E \rangle = \operatorname{span} \{ \langle x, y \rangle : x, y \in E \}$; then the closure of $\langle E, E \rangle$ is a two-sided ideal in $A$. Two-sided ideals are C*-subalgebras and therefore possess approximate units. One can verify that $E \langle E, E \rangle$ is dense in $E$. In the case when $\langle E, E \rangle$ is dense in $A$, $E$ is said to be full. This does not generally hold. Examples Hilbert spaces Since the complex numbers $\mathbb{C}$ are a C*-algebra with an involution given by complex conjugation, a complex Hilbert space $H$ is a Hilbert $\mathbb{C}$-module under scalar multiplication by complex numbers and its inner product. Vector bundles If $X$ is a locally compact Hausdorff space and $E$ a vector bundle over $X$ with projection $\pi : E \to X$ and a Hermitian metric $g$, then the space of continuous sections of $E$ is a Hilbert $C_0(X)$-module. Given sections $\sigma$, $\rho$ of $E$ and $f$ in $C_0(X)$, the right action is defined by $(\sigma f)(x) = \sigma(x) f(x)$, and the inner product is given by $\langle \sigma, \rho \rangle (x) = g(\sigma(x), \rho(x))$. The converse holds as well: Every countably generated Hilbert C*-module over a commutative unital C*-algebra $C(X)$ is isomorphic to the space of sections vanishing at infinity of a continuous field of Hilbert spaces over $X$. C*-algebras Any C*-algebra $A$ is a Hilbert $A$-module with the action given by right multiplication in $A$ and the inner product $\langle a, b \rangle = a^* b$.
By the C*-identity, the Hilbert module norm coincides with the C*-norm on $A$: $\Vert \langle a, a \rangle \Vert^{1/2} = \Vert a^* a \Vert^{1/2} = \Vert a \Vert$. The (algebraic) direct sum of $n$ copies of $A$, denoted $A^n$, can be made into a Hilbert $A$-module by defining $\langle (a_i), (b_i) \rangle = \sum_{i=1}^{n} a_i^* b_i$. If $p$ is a projection in the C*-algebra $M_n(A)$, then $p A^n$ is also a Hilbert $A$-module with the same inner product as the direct sum. The standard Hilbert module One may also consider the following subspace of elements in the countable direct product of $A$: $\mathcal{H}_A = \big\{ (a_i)_{i \in \mathbb{N}} : \textstyle\sum_i a_i^* a_i \text{ converges in } A \big\}$. Endowed with the obvious inner product (analogous to that of $A^n$), the resulting Hilbert $A$-module is called the standard Hilbert module over $A$. The fact that there is a unique separable Hilbert space has a generalization to Hilbert modules in the form of the Kasparov stabilization theorem, which states that if $E$ is a countably generated Hilbert $A$-module, there is an isometric isomorphism $E \oplus \mathcal{H}_A \cong \mathcal{H}_A$. Maps between Hilbert modules Let $E$ and $F$ be two Hilbert modules over the same C*-algebra $A$. These are then Banach spaces, so it is possible to speak of the Banach space $B(E, F)$ of bounded linear maps $E \to F$, normed by the operator norm. The adjointable and compact adjointable operators are subspaces of this Banach space defined using the inner product structures on $E$ and $F$. In the special case where $A$ is $\mathbb{C}$ these reduce to bounded and compact operators on Hilbert spaces respectively. Adjointable maps A map $T : E \to F$ (not necessarily linear) is defined to be adjointable if there is another map $T^* : F \to E$, known as the adjoint of $T$, such that for every $x$ in $E$ and $y$ in $F$, $\langle T x, y \rangle = \langle x, T^* y \rangle$. Both $T$ and $T^*$ are then automatically linear and also $A$-module maps. The closed graph theorem can be used to show that they are also bounded. Analogously to the adjoint of operators on Hilbert spaces, $T^*$ is unique (if it exists) and itself adjointable with adjoint $T$. If $S : F \to G$ is a second adjointable map, $S T$ is adjointable with adjoint $T^* S^*$. The adjointable operators form a subspace of $B(E, F)$, which is complete in the operator norm. In the case $E = F$, the space of adjointable operators from $E$ to itself is denoted $\mathcal{L}(E)$, and is a C*-algebra. Compact adjointable maps Given $x$ in $F$ and $y$ in $E$, the map $\theta_{x, y} : E \to F$ is defined, analogously to the rank one operators of Hilbert spaces, to be $\theta_{x, y}(z) = x \langle y, z \rangle$. This is adjointable with adjoint $\theta_{y, x}$. The compact adjointable operators are defined to be the closed span of the $\theta_{x, y}$ in the adjointable operators. As with the bounded operators, $\mathcal{K}(E, E)$ is denoted $\mathcal{K}(E)$. This is a (closed, two-sided) ideal of $\mathcal{L}(E)$. C*-correspondences If $A$ and $B$ are C*-algebras, an $A$–$B$ C*-correspondence is a Hilbert $B$-module $E$ equipped with a left action of $A$ by adjointable maps that is faithful. (NB: Some authors require the left action to be non-degenerate instead.) These objects are used in the formulation of Morita equivalence for C*-algebras, appear in the construction of Toeplitz and Cuntz-Pimsner algebras, and can be employed to put the structure of a bicategory on the collection of C*-algebras. Tensor products and the bicategory of correspondences If $E$ is an $A$–$B$ correspondence and $F$ a $B$–$C$ correspondence, the algebraic tensor product $E \odot F$ of $E$ and $F$ as vector spaces inherits left and right $A$- and $C$-module structures respectively. It can also be endowed with the $C$-valued sesquilinear form defined on pure tensors by $\langle x_1 \otimes y_1, x_2 \otimes y_2 \rangle = \langle y_1, \langle x_1, x_2 \rangle \cdot y_2 \rangle$, where $\langle x_1, x_2 \rangle \in B$ acts on $y_2$ through the left action of $B$ on $F$. This is positive semidefinite, and the Hausdorff completion of $E \odot F$ in the resulting seminorm is denoted $E \otimes_B F$. The left- and right-actions of $A$ and $C$ extend to make this an $A$–$C$ correspondence. The collection of C*-algebras can then be endowed with the structure of a bicategory, with C*-algebras as objects, $A$–$B$ correspondences as arrows from $A$ to $B$, and isomorphisms of correspondences (bijective module maps that preserve inner products) as 2-arrows.
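As a purely numerical illustration of the definitions above (not part of the theory itself), the sketch below spot-checks the module Cauchy–Schwarz inequality for the direct-sum module $A^n$ over the matrix algebra $A = M_2(\mathbb{C})$; the choice of algebra, the value of $n$, and the random data are assumptions made only for the example.

```python
# Numerical spot-check (not a proof) of the module Cauchy-Schwarz inequality
#   <x,y>* <x,y>  <=  ||<x,x>|| <y,y>
# for the Hilbert A-module A^n with A = M_2(C), where <x,y> = sum_i x_i* y_i.
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 2                                   # module A^3 over A = M_2(C)

def rand_elt():
    """A random element of A^n: a list of n complex 2x2 matrices."""
    return [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n)]

def inner(x, y):
    """A-valued inner product <x,y> = sum_i x_i* y_i."""
    return sum(xi.conj().T @ yi for xi, yi in zip(x, y))

x, y = rand_elt(), rand_elt()
lhs = inner(x, y).conj().T @ inner(x, y)                    # <x,y>* <x,y>
rhs = np.linalg.norm(inner(x, x), 2) * inner(y, y)          # ||<x,x>|| <y,y> (spectral norm)
gap_eigs = np.linalg.eigvalsh(rhs - lhs)                    # should be >= 0 up to rounding
print("smallest eigenvalue of rhs - lhs:", gap_eigs.min())
```

A non-negative smallest eigenvalue of the difference means the difference is a positive element of $M_2(\mathbb{C})$, which is exactly the ordering used in the inequality.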
Toeplitz algebra of a correspondence Given a C*-algebra , and an correspondence , its Toeplitz algebra is defined as the universal algebra for Toeplitz representations (defined below). The classical Toeplitz algebra can be recovered as a special case, and the Cuntz-Pimsner algebras are defined as particular quotients of Toeplitz algebras. In particular, graph algebras , crossed products by , and the Cuntz algebras are all quotients of specific Toeplitz algebras. Toeplitz representations A Toeplitz representation of in a C*-algebra is a pair of a linear map and a homomorphism such that is "isometric": for all , resembles a bimodule map: and for and . Toeplitz algebra The Toeplitz algebra is the universal Toeplitz representation. That is, there is a Toeplitz representation of in such that if is any Toeplitz representation of (in an arbitrary algebra ) there is a unique *-homomorphism such that and . Examples If is taken to be the algebra of complex numbers, and the vector space , endowed with the natural -bimodule structure, the corresponding Toeplitz algebra is the universal algebra generated by isometries with mutually orthogonal range projections. In particular, is the universal algebra generated by a single isometry, which is the classical Toeplitz algebra. See also Operator algebra Notes References External links Hilbert C*-Modules Home Page, a literature list C*-algebras Operator theory Theoretical physics
Hilbert C*-module
[ "Physics" ]
1,933
[ "Theoretical physics" ]
16,059,206
https://en.wikipedia.org/wiki/Integration%20by%20reduction%20formulae
In integral calculus, integration by reduction formulae is a method relying on recurrence relations. It is used when an expression containing an integer parameter, usually in the form of powers of elementary functions, or products of transcendental functions and polynomials of arbitrary degree, can't be integrated directly. But using other methods of integration a reduction formula can be set up to obtain the integral of the same or similar expression with a lower integer parameter, progressively simplifying the integral until it can be evaluated. This method of integration is one of the earliest used. How to find the reduction formula The reduction formula can be derived using any of the common methods of integration, like integration by substitution, integration by parts, integration by trigonometric substitution, integration by partial fractions, etc. The main idea is to express an integral involving an integer parameter (e.g. power) of a function, represented by In, in terms of an integral that involves a lower value of the parameter (lower power) of that function, for example In-1 or In-2. This makes the reduction formula a type of recurrence relation. In other words, the reduction formula expresses the integral in terms of where How to compute the integral To compute the integral, we set n to its value and use the reduction formula to express it in terms of the (n – 1) or (n – 2) integral. The lower index integral can be used to calculate the higher index ones; the process is continued repeatedly until we reach a point where the function to be integrated can be computed, usually when its index is 0 or 1. Then we back-substitute the previous results until we have computed In. Examples Below are examples of the procedure. Cosine integral Typically, integrals like can be evaluated by a reduction formula. Start by setting: Now re-write as: Integrating by this substitution: Now integrating by parts: solving for In: so the reduction formula is: To supplement the example, the above can be used to evaluate the integral for (say) n = 5; Calculating lower indices: back-substituting: where C is a constant. Exponential integral Another typical example is: Start by setting: Integrating by substitution: Now integrating by parts: shifting indices back by 1 (so n + 1 → n, n → n – 1): solving for In: so the reduction formula is: An alternative way in which the derivation could be done starts by substituting . Integration by substitution: Now integrating by parts: which gives the reduction formula when substituting back: which is equivalent to: Another alternative way in which the derivation could be done by integrating by parts: Remember: which gives the reduction formula when substituting back: which is equivalent to: Tables of integral reduction formulas Rational functions The following integrals contain: Factors of the linear radical Linear factors and the linear radical Quadratic factors Quadratic factors , for Quadratic factors , for (Irreducible) quadratic factors Radicals of irreducible quadratic factors note that by the laws of indices: Transcendental functions The following integrals contain: Factors of sine Factors of cosine Factors of sine and cosine products and quotients Products/quotients of exponential factors and powers of x Products of exponential and sine/cosine factors References Bibliography Anton, Bivens, Davis, Calculus, 7th edition. Integral calculus
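As a hedged supplement to the cosine example in the article above (this code is illustrative and not part of the article), the standard reduction formula I_n = (1/n) cos^(n-1)(x) sin(x) + ((n-1)/n) I_(n-2), with base cases I_0 = x and I_1 = sin(x), can be implemented as a recursion and checked symbolically:

```python
# Recursive use of the cosine reduction formula, verified with SymPy.
import sympy as sp

x = sp.symbols('x')

def I(n):
    if n == 0:
        return x            # antiderivative of cos^0(x) = 1
    if n == 1:
        return sp.sin(x)    # antiderivative of cos(x)
    # Reduction step: lower the power by two
    return sp.cos(x)**(n - 1) * sp.sin(x) / n + sp.Rational(n - 1, n) * I(n - 2)

# Antiderivatives may differ by a constant, so compare derivatives instead.
for n in range(6):
    assert sp.simplify(sp.diff(I(n), x) - sp.cos(x)**n) == 0

print(sp.expand(I(5)))  # the n = 5 case worked in the article, up to a constant
```

Each call lowers the index by two until it reaches the directly integrable cases n = 0 or n = 1, exactly as described in the section on computing the integral.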
Integration by reduction formulae
[ "Mathematics" ]
688
[ "Integral calculus", "Calculus" ]
16,059,380
https://en.wikipedia.org/wiki/David%20E.%20Clemmer
David E. Clemmer (February 23, 1965, Alamosa, Colorado) is an analytical chemist and the Distinguished Professor and Robert and Marjorie Mann Chair of Chemistry at Indiana University in Bloomington, Indiana, where he leads the Clemmer Group. Clemmer develops new scientific instruments for ion mobility mass spectrometry (IMS/MS), including the first instrument for nested ion-mobility time-of-flight mass spectrometry. He has received a number of awards, including the Biemann Medal in 2006 "for his pioneering contributions to the integration of ion mobility separations with a variety of mass spectrometry technologies." Early life and education Clemmer was born on February 23, 1965, to Ed Clemmer, an artist, and his wife MaryAnn, a teacher, of Alamosa, Colorado. He attended Adams State College, where he originally majored in music, before changing to science. He received his B.S. in chemistry with honors in 1987. He then attended the University of Utah, receiving his Ph.D. in physical chemistry in 1992. His thesis advisor was Peter B. Armentrout, with whom he studied transition metal ions in gaseous reactions. Career During 1992–1993, Clemmer was a postdoctoral fellow in Japan, supported by the Japan Society for the Promotion of Science Fellowship at the Himeji Institute of Technology. He worked with Kenji Honma on electron transfer mechanisms and reactions of excited-state metal atoms and gaseous molecules. From 1993 to 1995, Clemmer was a postdoctoral research associate at Northwestern University, where he worked with Martin F. Jarrold, studying protein folding and protein conformation in the gas phase, using techniques such as Ion-mobility spectrometry. In 1995, Clemmer joined the Department of Chemistry at Indiana University. He served as chair of the Chemistry Department from 2002 to 2006. He is a full professor, and holds the Robert and Marjorie Mann Chair of Chemistry, to which he was named in 2002. He has published more than 230 papers. Among those who have influenced him, he includes Michael T. Bowers, Jesse L. Beauchamp, R. Graham Cooks, Scott A. McLuckey, Fred McLafferty, Evan R. Williams, Joseph A. Loo, Vicki Wysocki, and Julie A. Leary. His graduate students have included Renã A. S. Robinson, Stephen Valentine, Cherokee Hoaglund-Hyzer, and Catherine Srebalus Barnes. Research Clemmer is particularly interested in studying the structural characterization and conformational dynamics of complex low-symmetry systems. Clemmer develops scientific instruments and methods for the examination of biomolecular structure and complex biomolecular mixtures in the gas phase using ion-mobility spectrometry. Ion mobility methods separate ions into different groups based on their ability to move through an electrically-charged buffer gas. This enables complex mixtures to be differentiated in ways that could not be achieved by mass spectrometry alone. Even minute amounts of compounds can be distinguished and differentially examined according to characteristics such as size, shape and charge as well as mass. Clemmer has helped to establish ion mobility as both a powerful tool and a field of research through his "thorough studies" and "revolutionary instrumental methods". In early work, Clemmer and Jarrold used long drift tubes with nonclustering gas atmospheres to increase the resolving power of ion-mobility spectrometry. 
Clemmer's work on gas-phase separation methods for ion mobility-mass spectrometry (IM-MS) and their application to the structural analysis of intact proteins is considered a "particularly important milestone" in the application of IM-MS to the examination of biomolecular structures. Clemmer and his colleagues have developed at least a dozen different configurations combining modular components for ion-mobility with mass spectrometry instruments. These include combining ion mobility with Time-of-flight mass spectrometry (TOF). They also developed the first instrument for nested ion-mobility time-of-flight mass spectrometry. Such equipment allows researchers to learn more about both the structures and the conformational dynamics of systems. Clemmer has identified fundamental relationships between charge states and structures, and has shown that a single charge state can exist in more than one conformation in gaseous states. Such techniques can be used for the study of both proteins and peptides. In early work, Clemmer showed that multiple conformations of the hemeprotein cytochrome c could be differentiated based on their mobilities. In addition, the mobility of different chiral isomers was related to their protein folding. More recent techniques enable researchers to track transitions in the conformations of macromolecular ions during the gas phase. A short pulse of ions is introduced into a drift tube by electrospray ionization. Structures separate based on differences in their mobilities. By exposing specific states to energizing collisions, new structures can be established and tracked through different conformational changes. Changes in conformation during the gas-phase data can then be mapped back to the original populations of structures. In this way, researchers can understand the possible pathways between structures. Understanding how protein folding occurs in three-dimensional molecules is one of biology's enduring problems. Proteins with different shapes often have very different biological activity and medical usefulness. Clemmer's work has applications in the life sciences for understanding the conformation of structures in large protein complexes, profiling the plasma proteome, examining the role of proteins and protein folding in neurodegenerative diseases, identifying possible cancer-related markers in blood, urine, or saliva, and increasing the efficiency of drug-discovery. Ion mobility-mass spectrometry techniques also allow the measurement and correlation of a wide variety of different characteristics simultaneously in a single analysis. Researchers can use these techniques to examine complex biological samples for lipidomics, proteomics, glycomics, and metabolomics information. Companies Clemmer is a co-founder of Beyond Genomics, a systems biology company, and the founder of Predictive Physiology and Medicine, a biotechnology company specializing in personalized medicine. Hobbies In addition to playing several instruments, Clemmer enjoys running marathons. Awards and honors 2023, Field and Franklin Award for Outstanding Achievement in Mass Spectrometry 2022, Wylie Innovation Catalyst Medal 2020, Bicentennial Medal Award Winner 2018, John B. Fenn Distinguished Contribution, shared with Martin F. 
Jarrold and Gert von Helden, American Society for Mass Spectrometry 2017, Fellow, National Academy of Inventors 2014–2015, ANACHEM Award 2014, Distinguished Professor, Indiana University Bloomington 2014, Distinguished Chemistry Alumni, University of Utah 2012, American Chemical Society (ACS) Chemical Instrumentation Award 2011, Fellow, American Association for the Advancement of Science (AAAS) 2011–2012, Fellow, Japanese Society for the Promotion of Science (JSPS) 2010, Adams State Outstanding Alumnus 2009, Tracy M. Sonneborn Award, Indiana University Bloomington 2007, American Chemical Society (ACS) Akron award 2006, Biemann Medal, American Society for Mass Spectrometry 2005, Fellow, Royal Society of Chemistry 2003–2005, National Science Foundation Special Creativity Award 2002, Pittcon Achievement Award 2002, Named one of Popular Sciences 10 Most Brilliant List 2002, named Robert and Marjorie Mann Chair of Chemistry, Indiana University Bloomington 2000–2002, Eli Lilly Analytical Chemistry Award 1998–2001, Fellow, Alfred P. Sloan Research 2000, National Fresenius Award, Phi Lambda Upsilon 1999–2000, American Chemical Society (ACS), Division of Analytical Chemistry, Arthur F. Findeis Award 1999, Camille Dreyfus Teacher-Scholar Award, The Camille and Henry Dreyfus Foundation 1999, "Innovators Under 35", MIT Technology Review 1996–2000, National Science Foundation Early Career Award References External links http://www.indiana.edu/~clemmer/home.htm 21st-century American chemists Mass spectrometrists Living people Indiana University faculty 1965 births
David E. Clemmer
[ "Physics", "Chemistry" ]
1,682
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,059,756
https://en.wikipedia.org/wiki/Automated%20decision%20support
Automated Decision Support, or ADS, systems are rule-based systems that are able to automatically provide solutions to repetitive management problems. ADSs are very closely related to business informatics and business analytics. Automated decision support systems are based on business rules. These business rules can be created or operated through business analytics. The business rules can trigger a decision that forms part of the business informatics. ADSs are most useful in situations that require solutions to repetitive problems mostly using electronically available information. The required knowledge and relevant decision criteria must be very clearly defined and structured. The problem situation at hand must be clear and well understood. Components of ADSs are also provided by software development companies. The following components are provided: Rules engines Mathematical and statistical algorithms Industry-specific packages Enterprise systems Workflow applications See also Automated decision-making Decision support system Enterprise Decision Management References Further reading DeSanctis, Gerardine; Gallupe, R. Brent "A Foundation for the Study of Group Decision Support Systems," Management Science, Vol. 33, No. 5. (May, 1987), pp. 589–609 Fjermestad, Jerry and Hiltz, Starr Roxanne. "An assessment of group support systems experimental research: methodology and results," Journal of Management Information Systems Volume 15, Issue 3 (December 1998), pp. 7–149 Jessup, Leonard M. and Tansik, David A. "Decision Making in an Automated Environment: The Effects of Anonymity and Proximity with a Group Decision Support System," Decision Sciences 22 (2), 1991, pp. 266–279 Nunamaker, J. F., Applegate, Lynda M. and Konsynski, Benn R. "Computer-Aided Deliberation: Model Management and Group Decision Support (in Special Focus on Decision Support Systems)," Operations Research, Vol. 36, No. 6. (Nov.-Dec., 1988), pp. 826–848. O'Keefe, Robert M. and McEachern, Tim. "Web-based customer decision support systems," Communications of the ACM archive Volume 41, Issue 3 (March 1998), pp. 71–78 Turban, Leidner, McLean and Wetherbe, Information Technology for Management, Wiley & Sons, Inc. 2007, Decision support systems
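As a purely hypothetical sketch of the rule-based idea described in the article above (the rule conditions, thresholds, and field names are invented for illustration and do not come from any real ADS product), a minimal rules core might look like this:

```python
# Toy rules engine: the first matching business rule determines the decision.
RULES = [
    (lambda order: order["amount"] > 10_000, "route to manual review"),
    (lambda order: order["customer_rating"] < 2, "require prepayment"),
    (lambda order: True, "approve automatically"),  # default rule
]

def decide(order):
    for condition, decision in RULES:
        if condition(order):
            return decision

print(decide({"amount": 250, "customer_rating": 5}))     # approve automatically
print(decide({"amount": 25_000, "customer_rating": 5}))  # route to manual review
```

Real ADS offerings combine such a rules engine with the other components listed in the article, such as statistical algorithms, enterprise systems, and workflow applications.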
Automated decision support
[ "Technology" ]
488
[ "Information systems", "Decision support systems" ]
16,061,714
https://en.wikipedia.org/wiki/Catastrophin
Catastrophin (Catastrophe-related protein) is a term used to describe proteins that are associated with the disassembly of microtubules. Catastrophins affect microtubule shortening, a process known as microtubule catastrophe. Microtubule dynamics Microtubules are polymers of tubulin subunits arranged in cylindrical tubes. The subunit is made up of alpha and beta tubulin. GTP binds to alpha tubulin irreversibly. Beta tubulin binds GTP and hydrolyzes it to GDP. It is the GDP bound to beta-tubulin that regulates the growth or disassembly of the microtubule. However, this GDP can be displaced by GTP. Beta-tubulin bound to GTP is described as having a GTP-cap that enables stable growth. Microtubules exist in either a stable or unstable state. The unstable form of a microtubule is often found in cells that are undergoing rapid changes such as mitosis. The unstable form exists in a state of dynamic instability where the filaments grow and shrink seemingly randomly. A mechanistic understanding of what causes microtubules to shrink is still being developed. Model of catastrophe One model proposes that loss of the GTP-cap causes the GDP-containing protofilaments to shrink. Based on this GTP-cap model, catastrophe happens randomly. The model proposes that an increase in microtubule growth will correlate with a decrease in random catastrophe frequency or vice versa. The discovery of microtubule-associated proteins that change the rate of catastrophe while not impacting the rate of microtubule growth challenges this model of stochastic growth and shrinkage. Increases Oncoprotein 18/Stathmin has been shown to increase the frequency of catastrophe. Oncoprotein 18 (Op18) is a cytosolic protein that is found in abundance in benign or malignant tumor sites: through the complex timing of phosphorylation, this biomolecule regulates the depolymerization of microtubules. It has four sites of phosphorylation, characterized by serine residues and associated with cyclin-dependent protein kinases (CDKs): Ser16, Ser25, Ser38 and Ser63. There are two different models that are in contention regarding the destabilization of microtubules due to Op18: the inhibition of tubulin dimer formation or a catastrophe phenomenon. The Kinesin-related protein XKCM1 stimulates catastrophes in Xenopus microtubules. The Kinesin-Related Protein 13 MCAK increases the frequency of catastrophe without affecting the promotion of microtubule growth. Decreases Doublecortin (DCX) shows an ability to inhibit catastrophe without affecting the microtubule growth rate. Xenopus Microtubule Protein 215 (XMAP215) has been implicated in inhibiting catastrophe. Mechanism Some catastrophins affect catastrophe by binding to the ends of microtubules and promoting the dissociation of tubulin dimers. Different mathematical models of microtubule development are being developed to take into account in vitro and in vivo observations. Meanwhile, there are new in vitro models of microtubule polymerization dynamics, in which catastrophins take part, being tested to emulate in vivo behaviors of microtubules. See also Microtubule-associated protein Kinesin References Motor proteins
Catastrophin
[ "Chemistry" ]
705
[ "Molecular machines", "Motor proteins" ]
16,061,824
https://en.wikipedia.org/wiki/FamilySearch%20Indexing
FamilySearch Indexing is a volunteer project established and run by FamilySearch, a genealogy organization of the Church of Jesus Christ of Latter-day Saints. The project aims to create searchable digital indexes of scanned images of historical documents that are relevant to genealogy. The documents include census records, birth and death certificates, marriage licenses, military and property records, and other vital records maintained by local, state, and national governments. However, to access the billions of names that appear on these images, indexes are needed to be able to search them efficiently. Since FamilySearch indexing began in 2006, this crowdsourcing effort has produced more than one billion searchable records. The digital images and corresponding indexes are valuable to professionals, hobbyists, and family organization researchers. How it works Volunteers (including jail inmates) use online software on the FamilySearch website to download images of historical documents. They then read the information on the image and transcribe the information. A second, more experienced volunteer reviews this information for accuracy before it is submitted. Indexed records eventually can be searched on the FamilySearch website. From 2006 to 2017 FamilySearch Indexing was only available as a downloadable program, and two volunteers separately indexed each document. A third person checked their work for accuracy. As of 2016, FamilySearch Indexing is also available as a web-based effort. Types of records Up to December 2008, the FamilySearch Indexing project focused primarily on indexing state and federal census records from the United States of America, though census records from Mexico and vital records from other locales have also been indexed. In 2012, FamilySearch Indexing collaborated with Archives.com and FindMyPast to index the 1940 US Federal Census. In 2014, an emphasis was placed on obituary projects. As of December 2015, the organization had indexed 1,379,890,025 records since its inception. As of July 2018 there were 226 active indexing projects, with documents from all over the world being indexed. The majority of projects come from either North America or Europe. The United States is the country with the most records but a majority of projects now come from outside the United States. In addition to the general indexing projects, the site also partners with other genealogical organizations to complete specialized indexing projects. Partners have included the Arkansas Genealogical Society, the Black History Museum, the Indiana Genealogical Society, the Ohio Genealogical Society, the US National Archives and Records Administration, and the Utah Genealogical Association. On September 21, 2021, FamilySearch Indexing announced that it had completed full digitization of its entire collection of 2.4 million rolls of microfilm. The rolls represented records from over 200 countries and more than 11.5 billion individuals. See also Genealogy Crowdsourcing software development Granite Mountain Records Vault References External links FamilySearch Indexing main site Crowdsourcing Distributed computing projects Genealogy and the Church of Jesus Christ of Latter-day Saints American genealogy websites Human-based computation
FamilySearch Indexing
[ "Technology", "Engineering" ]
607
[ "Information systems", "Human-based computation", "Distributed computing projects", "Information technology projects" ]
16,063,197
https://en.wikipedia.org/wiki/Membrane%20ruffling
Within molecular and cell biology membrane ruffling (also known as cell ruffling) is the formation of a motile cell surface that contains a meshwork of newly polymerized actin filaments. It can also be regarded as one of the earliest structural changes observed in the cell. The GTP-binding protein Rac is the regulator of this membrane ruffling. Changes in the Polyphosphoinositide metabolism and changes in Ca2+ level of the cell may also play an important role. A number of actin-binding and organizing proteins localize to membrane ruffles and potentially target to transducing molecules. Characteristic feature of migrating cells Membrane ruffling is a characteristic feature of many actively migrating cells. When the membrane is unable to attach to the substrate, the membrane protrusion is recycled back into the cell. The ruffling of membranes is thought to be controlled by a group of enzymes known as Rho GTPases, specifically RhoA, Rac1 and cdc42. Bacterial infection Some bacteria such as enteropathogenic E. coli and enterohemorrhagic E. coli can induce membrane ruffling by secreting toxins via the type three secretion system and modifying the host cytoskeleton. Such toxins include EspT, Map, and SopE, which mimic RhoGEF and activate endogenous Rho GTPases to manipulate actin polymerisation in the infected cell. See also Filopodia Lamellipodia References External links http://www.reading.ac.uk/nitricoxide/intro/migration/dynamics.htm Cell biology
Membrane ruffling
[ "Chemistry", "Biology" ]
342
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry", "Cell biology" ]
16,063,325
https://en.wikipedia.org/wiki/Peter%20B.%20Armentrout
Peter B. Armentrout (born 1953) is a researcher in thermochemistry, kinetics, and dynamics of simple and complex chemical reactions. He is a Chemistry Professor at the University of Utah. Career Armentrout received his B.S. degree from Case Western Reserve University in 1975 and earned his Ph.D. from the California Institute of Technology in 1980. During these studies he determined that much of the published information on thermodynamic states was not reliable, or was presented in differing formats. When he became a research professor he used this frustration as motivation to invent and construct the guided ion-beam tandem mass spectrometer, which provided highly accurate thermodynamic measurements. With this instrument in hand, he went on to invent or improve tools to analyze those measurements, including advanced computer algorithms. He has published much data on the properties of transition metals, and has worked most recently on the thermodynamic properties of biological systems. Awards 1984–1989 National Science Foundation Presidential Young Investigator Award 2001 Biemann Medal Case Western Chemistry Department - Outstanding Alumnus of the Year American Chemical Society Utah Section - Award of Chemistry Member of Phi Kappa Phi Honor Society 2009 American Chemical Society – Award for Outstanding Achievement in Mass Spectrometry References 21st-century American chemists Mass spectrometrists California Institute of Technology alumni Case Western Reserve University alumni Living people University of Utah faculty 1953 births Fellows of the American Physical Society
Peter B. Armentrout
[ "Physics", "Chemistry" ]
296
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,063,680
https://en.wikipedia.org/wiki/Apple%20Performa%20Plus%20Display
The Apple Performa Plus Display is a color 14″, 13″ viewable shadow mask CRT that was manufactured by Apple Inc. from September 14, 1992, until July 18, 1994. The video cable uses a standard Macintosh DA-15 video connector and its resolution is fixed at 640×480 pixels. References EveryMac.com Apple Inc. displays
Apple Performa Plus Display
[ "Technology" ]
74
[ "Computing stubs", "Computer hardware stubs" ]
16,063,686
https://en.wikipedia.org/wiki/Pirkle%27s%20alcohol
Pirkle's alcohol is an off-white, crystalline solid that is stable at room temperature when protected from light and oxygen. This chiral molecule is typically used, in nonracemic form, as a chiral shift reagent in nuclear magnetic resonance spectroscopy, in order to simultaneously determine absolute configuration and enantiomeric purity of other chiral molecules. The molecule is named after William H. Pirkle, Professor of Chemistry at the University of Illinois whose group reported its synthesis and its application as a chiral shift reagent. Synthesis Pirkle's alcohol is synthesized by trifluoroacetylation of anthracene, to yield trifluoromethyl 9-anthryl ketone. Trifluoromethyl 9-anthryl ketone may be reduced with a chiral hydride reagent prepared from lithium aluminium hydride and (4S,5S)-(–)-2-ethyl-4-hydroxymethyl-5-phenyl-2-oxazoline to generate Pirkle's alcohol with R absolute configuration. Alternatively, trifluoromethyl 9-anthryl ketone may be reduced with sodium borohydride to generate racemic Pirkle's alcohol. The enantiomers are then derivatized to diastereomeric carbamates using enantioenriched 1-(1-Naphthyl)ethyl isocyanate (also developed by Pirkle). These diastereomers may be separated by column chromatography and hydrolyzed to obtain each enantiomer of Pirkle's alcohol in enantiopure form. Application The determination of enantiomeric purity and absolute configuration is frequently necessary in organic synthesis. Pirkle's alcohol is applied to obtain this information by NMR spectroscopy. When Pirkle's alcohol is in solution with an ensemble of chiral molecules, short-lived diastereomeric solvates may be formed from Pirkle's alcohol and the enantiomers of the analyte. Enantiomorphic protons of the analyte enantiomers, which without Pirkle's alcohol are indistinguishable by NMR, become diastereomorphic when the analyte interacts with Pirkle's alcohol, and appear as different signals in an NMR spectrum. The relative magnitude of the signals quantitatively reveals the enantiomeric purity of the analyte. Also, a model of the solvated complex may be used to deduce absolute configuration of an enantioenriched analyte. See also Mosher's acid 9-Anthracenemethanol References Stereochemistry Secondary alcohols Trifluoromethyl compounds Anthracenes
Pirkle's alcohol
[ "Physics", "Chemistry" ]
592
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
16,064,549
https://en.wikipedia.org/wiki/Cary%20Peppermint
Cary Peppermint (born 1970) is a New York-based conceptual, new media, performance, and environmental artist. Peppermint was born in Rome, Georgia, in 1970 and received in M.F.A. from Syracuse University in 1997. Peppermint has conducted a series of Dadaist and Fluxus inspired digital, networked performances via his website RestlessCulture, an ongoing, post-cinema living documentary database. In Artforum, Mark Tribe called this series of work “twenty-first-century takes on Warhol's Factory.” In 2005, Peppermint founded ecoarttech with his partner Leila Christine Nadir. Their collaborative explores environmental issues and convergent media and technologies from an interdisciplinary perspective. In a 2012 interview with , Peppermint and Nadir report that "movement between environmental extremes–between mega-cities and green landscapes–has always been the most creatively stimulating 'place' for us to dwell in. No matter where we go, we are always fascinated by the technologies and systems that human beings use to produce their survival and to create meaning in their lives." One of ecoarttech's inaugural works was “Wilderness Trouble” (2007). More recent works include “Indeterminate Hikes” (2011), a smartphone app and installation that transforms chance encounters in everyday locales into public performances of bio-cultural diversity and wild happenings, created originally for the Whitney Museum of American Art's 2010 ISP exhibition; “Untitled Landscape #5” (2009), an internet-based work commissioned by the Whitney Museum of American Art for its Sunrise and Sunset series; “Center for Wildness in the Everyday” (2010), a series of networked performances about the “wildness” of water in the Texas Trinity River Basin, commissioned by the University of North Texas College of Visual Arts and Design; and “Eclipse” (2009), a net art work exploring the politics of pollution, the myth of wilderness, and the surplus of online information, commissioned by Turbulence.org of New Radio and Performing Arts, Inc. He is currently an Assistant Professor of Art and Art History at the University of Rochester. His work is in the collections of the Walker Art Center, Rhizome.org at the New Museum, The Whitney Museum of American Art, and Computer Fine Arts. References External links Restlessculture.net ecoarttech Furtherfield.org Interview VisualMAG Interview NYFA Interview 1970 births Living people American digital artists American video artists Net.artists Syracuse University alumni Artists from New York (state) American conceptual artists American new media artists Colgate University faculty University of Rochester faculty
Cary Peppermint
[ "Technology" ]
538
[ "Multimedia", "Net.artists" ]
16,065,051
https://en.wikipedia.org/wiki/HD%20125072
HD 125072 is a star in the southern constellation of Centaurus. It is a challenge to view with the naked eye, having an apparent visual magnitude of 6.637. The star is located at a distance of 38.6 light years from the Sun based on parallax. It is drifting closer with a radial velocity of −14.9 km/s. The components of the space velocity for this star are U=−18.5, V=−6.9 and W=−26.9 km/s. The stellar classification of this star is K3 IV, matching a K-type subgiant that is evolving into a giant. It has 81% of the Sun's mass and 83% of the radius of the Sun. The star is radiating 34.7% of the Sun's luminosity from its photosphere at an effective temperature of 4,858 K. Based on the composition and kinematics of this star, it has an estimated age of about 10 billion years. It is spinning with a projected rotational velocity of 4 km/s. References K-type subgiants Centaurus Durchmusterung objects 0542 125072 069972
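For illustration only (this snippet is not part of the article): distance estimates of this kind rest on the standard parallax relation d[parsecs] = 1 / p[arcseconds]. The parallax value below is back-computed from the article's 38.6 light-year figure rather than quoted from a catalogue.

```python
# Back-of-the-envelope parallax-distance conversion (illustrative values only).
LY_PER_PC = 3.2616          # light years per parsec
d_pc = 38.6 / LY_PER_PC     # article's distance converted to parsecs
p_mas = 1000.0 / d_pc       # implied parallax in milliarcseconds
print(f"{d_pc:.1f} pc, implied parallax ~ {p_mas:.0f} mas")
```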
HD 125072
[ "Astronomy" ]
254
[ "Centaurus", "Constellations" ]
16,065,330
https://en.wikipedia.org/wiki/Pseudin
Pseudin is a peptide derived from Pseudis paradoxa. Pseudins have some antimicrobial function. There are several different forms: pseudin-1 pseudin-2 -- has been proposed as a treatment for type 2 diabetes. pseudin-4 Pseudin-2 Pseudin-2 is the most abundant version of the pseudins found on the skin of the paradoxical frog. The primary sequence reads as GLNALKKVFQGIHEAIKLINNHVQ. Its secondary/tertiary structure consists of one cationic amphipathic α-helix. Antibacterial activity Pseudin-2 was shown to have potent antibacterial activity, but a lower cytotoxicity. The cytotoxicity of a peptide can be measured by its effect on human erythrocytes. It takes a lower concentration of Pseudin-2 to kill bacteria or fungi such as E. coli, S. aureus, and C. albicans than to kill human erythrocytes. It is hypothesized that Pseudin-2 binding to the cell membrane of the bacteria results in a conformational change in which the peptide forms an α-helical shape, which allows it to perform cell lysis by inserting itself in the hydrophobic portion of the membrane. This mechanism is applicable to similar amphipathic α-helical peptides created by many frog species, although most of these peptides aren't very potent against bacteria. By increasing the cationicity and amphipathic nature of the molecule, it is possible to create analogues of Pseudin-2 that are even more selective towards bacteria. This is done by substituting leucine residues with lysine residues and glycine residues with proline residues, which results in two shorter α-helices (linked by the substituted proline) that are more attuned to penetrating bacterial cell membranes. See also Exenatide References Peptides
Pseudin
[ "Chemistry", "Biology" ]
423
[ "Biomolecules by chemical classification", "Biotechnology stubs", "Biochemistry stubs", "Molecular biology", "Biochemistry", "Peptides" ]
16,065,341
https://en.wikipedia.org/wiki/HR%203384
HR 3384 (11 G. Pyxidis) is a solitary star in the southern constellation of Pyxis. It has an apparent magnitude of 6.38, indicating it is faintly visible to the naked eye. Based on the Bortle scale, the star can be viewed from dark rural skies. Astrometric measurements of the star by the Hipparcos spacecraft give an estimated distance of about from Earth. It is moving away from the Sun with a radial velocity of +81.91 km/s. This star is lower in mass than the Sun at around 75%, and has just 85% of the Sun's radius. The spectrum matches a spectral class G9V, indicating that this is a G-type main sequence star that is generating energy through the nuclear fusion of hydrogen at its core. The star is radiating 44% of the Sun's luminosity from its photosphere at an effective temperature of 5,290 K. It is about six billion years old and is rotating slowly with a period of around 40 days. Surface magnetic activity has been detected with a periodic cycle of  days. HR 3384 has been examined for evidence of a circumstellar debris disk or planets, but, as of 2012, none have been discovered. References External links http://www.stellar-database.com/Scripts/search_star.exe?Name=HR+3384 G-type main-sequence stars HR, 3384 Pyxis CD-31 6229 0309 Pyxidis, 11 072673 041926 3384
HR 3384
[ "Astronomy" ]
323
[ "Pyxis", "Constellations" ]
16,065,393
https://en.wikipedia.org/wiki/Green%20nanotechnology
Green nanotechnology refers to the use of nanotechnology to enhance the environmental sustainability of processes producing negative externalities. It also refers to the use of the products of nanotechnology to enhance sustainability. It includes making green nano-products and using nano-products in support of sustainability. The word GREEN in the name Green Nanotechnology has a dual meaning. On one hand it describes the environmentally friendly technologies used to synthesize particles at the nanoscale; on the other hand it refers to nanoparticle synthesis mediated by extracts of chlorophyllous plants. Green nanotechnology has been described as the development of clean technologies, "to minimize potential environmental and human health risks associated with the manufacture and use of nanotechnology products. It also encourages replacement of existing products with new nano-products that are more environmentally friendly throughout their lifecycle." Aim Green nanotechnology has two goals: producing nanomaterials and products without harming the environment or human health, and producing nano-products that provide solutions to environmental problems. It uses existing principles of green chemistry and green engineering to make nanomaterials and nano-products without toxic ingredients, at low temperatures using less energy and renewable inputs wherever possible, and using lifecycle thinking in all design and engineering stages. In addition to making nanomaterials and products with less impact on the environment, green nanotechnology also means using nanotechnology to make current manufacturing processes for non-nano materials and products more environmentally friendly. For example, nanoscale membranes can help separate desired chemical reaction products from waste materials from plants. Nanoscale catalysts can make chemical reactions more efficient and less wasteful. Sensors at the nanoscale can form a part of process control systems, working with nano-enabled information systems. Using alternative energy systems, made possible by nanotechnology, is another way to "green" manufacturing processes. The second goal of green nanotechnology involves developing products that benefit the environment either directly or indirectly. Nanomaterials or products can directly clean hazardous waste sites, desalinate water, treat pollutants, or sense and monitor environmental pollutants. Indirectly, lightweight nanocomposites for automobiles and other means of transportation could save fuel and reduce materials used for production; nanotechnology-enabled fuel cells and light-emitting diodes (LEDs) could reduce pollution from energy generation and help conserve fossil fuels; self-cleaning nanoscale surface coatings could reduce or eliminate many cleaning chemicals used in regular maintenance routines; and enhanced battery life could lead to less material use and less waste. Green nanotechnology takes a broad systems view of nanomaterials and products, ensuring that unforeseen consequences are minimized and that impacts are anticipated throughout the full life cycle. Current research Solar cells Research is underway to use nanomaterials for purposes including more efficient solar cells, practical fuel cells, and environmentally friendly batteries. The most advanced nanotechnology projects related to energy are: storage, conversion, manufacturing improvements by reducing materials and process rates, energy saving (by better thermal insulation for example), and enhanced renewable energy sources.
One major project that is being worked on is the development of nanotechnology in solar cells. Solar cells become more efficient as they get smaller, and solar energy is a renewable resource. The price per watt of solar energy is lower than one dollar. Research is ongoing to use nanowires and other nanostructured materials in the hope of creating cheaper and more efficient solar cells than are possible with conventional planar silicon solar cells. Another example is the use of fuel cells powered by hydrogen, potentially using a catalyst consisting of carbon-supported noble metal particles with diameters of 1–5 nm. Materials with small nanosized pores may be suitable for hydrogen storage. Nanotechnology may also find applications in batteries, where the use of nanomaterials may enable batteries with higher energy content or supercapacitors with a higher rate of recharging. Nanotechnology is already used to provide improved performance coatings for photovoltaic (PV) and solar thermal panels. Hydrophobic and self-cleaning properties combine to create more efficient solar panels, especially during inclement weather. PV panels covered with nanotechnology coatings are said to stay cleaner for longer to ensure maximum energy efficiency is maintained. Nanoremediation and water treatment Nanotechnology offers the potential of novel nanomaterials for the treatment of surface water, groundwater, wastewater, and other environmental materials contaminated by toxic metal ions, organic and inorganic solutes, and microorganisms. Due to their unique activity toward recalcitrant contaminants, many nanomaterials are under active research and development for use in the treatment of water and contaminated sites. The present market of nanotech-based technologies applied in water treatment consists of reverse osmosis (RO), nanofiltration, and ultrafiltration membranes. Indeed, among emerging products one can name nanofiber filters, carbon nanotubes and various nanoparticles. Nanotechnology is expected to deal more efficiently with contaminants that conventional water treatment systems struggle to treat, including bacteria, viruses and heavy metals. This efficiency generally stems from the very high specific surface area of nanomaterials, which increases dissolution, reactivity and sorption of contaminants.
Buckminsterfullerene has been demonstrated as having the ability to induce the protection of reactive oxygen species and to cause lipid peroxidation. This material may allow for hydrogen fuel to be more accessible to consumers. Water cleaning technology In 2017 the RingwooditE Co Ltd was formed in order to explore Thermonuclear Trap Technology (TTT) for the purpose of cleaning all sources of water from pollution and toxic contents. This patented nanotechnology uses a high pressure and temperature chamber to separate out isotopes that should by nature not be in drinking water, yielding pure drinking water according to the WHO's established classification. This method has been developed by, among others, Professor Vladimir Afanasiew at the Moscow Nuclear Institution. The technology is targeted at cleaning sea, river, lake and landfill waste waters. It even removes radioactive isotopes from sea water after nuclear power station catastrophes and from the cooling water of plant towers. The technology also removes pharmaceutical residues, as well as narcotics and tranquilizers. Bottom layers and banks of lakes and rivers can be returned after being cleaned. The machinery used for this purpose is much like that used in deep-sea mining. Removed waste is sorted by the process and can be reused as raw material for other industrial production. Water filtration Nanofiltration is a relatively recent membrane filtration process used most often with low total dissolved solids water such as surface water and fresh groundwater, with the purpose of softening (polyvalent cation removal) and removal of disinfection by-product precursors such as natural organic matter and synthetic organic matter. Nanofiltration is also becoming more widely used in food processing applications such as dairy, for simultaneous concentration and partial (monovalent ion) demineralisation. Nanofiltration is a membrane filtration based method that uses nanometer sized cylindrical through-pores that pass through the membrane at 90°. Nanofiltration membranes have pore sizes from 1–10 angstroms, smaller than that used in microfiltration and ultrafiltration, but just larger than that in reverse osmosis. Membranes used are predominantly created from polymer thin films. Materials that are commonly used include polyethylene terephthalate or metals such as aluminum. Pore dimensions are controlled by pH, temperature and time during development, with pore densities ranging from 1 to 10^6 pores per cm^2. Membranes made from polyethylene terephthalate and other similar materials are referred to as "track-etch" membranes, named after the way the pores on the membranes are made. "Tracking" involves bombarding the polymer thin film with high energy particles. This results in making tracks that are chemically developed into the membrane, or "etched" into the membrane, which are the pores. Membranes created from metal, such as alumina membranes, are made by electrochemically growing a thin layer of aluminum oxide from aluminum in an acidic medium. Some water-treatment devices incorporating nanotechnology are already on the market, with more in development. Low-cost nanostructured separation membrane methods have been shown to be effective in producing potable water in a recent study. Nanotech to disinfect water Nanotechnology provides an alternative solution for cleaning germs from water, a problem that has been getting worse due to the population explosion, growing need for clean water and the emergence of additional pollutants.
One of the alternatives offered is antimicrobial nanotechnology; several nanomaterials have been shown to exhibit strong antimicrobial properties through diverse mechanisms, such as the photocatalytic production of reactive oxygen species that damage cell components and viruses. There is also the case of synthetically fabricated nanometallic particles that produce an antimicrobial action called oligodynamic disinfection, which can inactivate microorganisms at low concentrations. Commercial purification systems based on titanium oxide photocatalysis also currently exist, and studies show that this technology can achieve complete inactivation of fecal coliforms in 15 minutes once activated by sunlight. There are four classes of nanomaterials that are employed for water treatment: dendrimers, zeolites, carbonaceous nanomaterials, and metal-containing nanoparticles. The benefits of reducing metals (e.g. silver, copper, titanium, and cobalt) to the nanoscale include contact efficiency, greater surface area, and better elution properties. Medicinal values Plants have been known to possess various phytochemicals (secondary metabolites) which help them to protect themselves; since time immemorial these phytochemicals have been used by humans for their medicinal needs. Microbes are developing resistance against multiple synthetic drugs, leading to the emergence of MDR (Multi Drug Resistant) strains of microbes, which pose a challenge to the modern drug system. To overcome this challenge, nanoparticles synthesized using extracts of plants and plant parts have emerged as a hope. Many workers have reported that nanoparticles synthesized using plant extracts have been shown to exhibit enhanced medicinal properties compared to the extract(s) alone. Cleaning up oil spills The U.S. Environmental Protection Agency (EPA) documents more than ten thousand oil spills per year. Conventionally, biological, dispersing, and gelling agents are deployed to remedy oil spills. Although these methods have been used for decades, none of these techniques can retrieve the irreplaceable lost oil. However, nanowires can not only swiftly clean up oil spills but also recover as much oil as possible. These nanowires form a mesh that absorbs up to twenty times its weight in hydrophobic liquids while rejecting water with its water-repelling coating. Since potassium manganese oxide is very stable even at high temperatures, the oil can be boiled off the nanowires and both the oil and the nanowires can then be reused. In 2005, Hurricane Katrina damaged or destroyed more than thirty oil platforms and nine refineries. The Interface Science Corporation successfully launched a new oil remediation and recovery application, which used the water-repelling nanowires to clean up the oil spilled by the damaged oil platforms and refineries. Removing plastics from oceans One innovation of green nanotechnology that is currently under development is nanomachines modeled after a bacterium bioengineered to consume plastics, Ideonella sakaiensis. These nano-machines are able to decompose plastics dozens of times faster than the bioengineered bacteria, not only because of their increased surface area but also because the energy released from decomposing the plastic is used to fuel the nano-machines. Air pollution control In addition to water treatment and environmental remediation, nanotechnology is currently improving air quality.
Nanoparticles can be engineered to catalyze, or hasten, the reaction to transform environmentally pernicious gases into harmless ones. For example, many industrial factories that produce large amounts of harmful gases employ a type of nanofiber catalyst made of magnesium oxide (MgO) to purify dangerous organic substances in the smoke. Although chemical catalysts already exist in the gaseous vapors from cars, nanotechnology has a greater chance of reacting with the harmful substances in the vapors. This greater probability comes from the fact that nanotechnology can interact with more particles because of its greater surface area. Nanotechnology has been used to remediate air pollution, including car exhaust pollution and potentially greenhouse gases, due to its high surface area. Based on research published in Environmental Science and Pollution Research International, nanotechnology can specifically help to treat carbon-based nanoparticles, greenhouse gases, and volatile organic compounds. There is also work being done to develop antibacterial nanoparticles, metal oxide nanoparticles, and amendment agents for phytoremediation processes. Nanotechnology can also offer the possibility of preventing air pollution in the first place due to its extremely small scale. Nanotechnology has been accepted as a tool for many industrial and domestic fields like gas monitoring systems, fire and toxic gas detectors, ventilation control, breath alcohol detectors and many more. Other sources state that nanotechnology has the potential to further develop the pollutant sensing and detection methods that already exist. The ability to detect pollutants and sense unwanted materials will be heightened by the large surface area of nanomaterials and their high surface energy. The World Health Organization declared in 2014 that air contamination caused around 7 million deaths in 2012. This new technology could be an essential asset in addressing this epidemic. The three ways that nanotechnology is being used to treat air pollution are nano-adsorptive materials, degradation by nanocatalysis, and filtration/separation by nanofilters. Nanoscale adsorbents are the main alleviator for many air pollution difficulties. Their structure permits a great interaction with organic compounds as well as increased selectivity and stability in maximum adsorption capacity. Other advantages include high electrical and thermal conductivities, high strength, and high hardness. Pollutants that can be targeted by nanomolecules include NOx, CO2, NH3, N2, VOCs, isopropyl vapor, CH3OH gases, N2O, and H2S. Carbon nanotubes specifically remove particles in many ways. One method is by passing them through the nanotubes, where the molecules are oxidized; the molecules then are adsorbed on a nitrate species. Carbon nanotubes with amine groups provide numerous chemical sites for carbon dioxide adsorption at low temperature ranges of 20–100 degrees Celsius. Van der Waals forces and π-π interactions also are used to pull molecules onto surface functional groups. Fullerene can be used to remove carbon dioxide pollution due to its high adsorption capacity. Graphene nanotubes have functional groups that adsorb gases. There are plenty of nanocatalysts that can be used for air pollution reduction and air quality improvement. Some of these materials include TiO2, vanadium, platinum, palladium, rhodium, and silver.
Catalytic industrial emission reduction, car exhaust reduction, and air purification are just some of the major areas in which these nanomaterials are being utilized. Certain applications are not yet widespread, but others are more popular. Indoor air pollution applications are barely on the market yet, but their development is accelerating because of the associated health effects. Car exhaust emission reduction is widely used in diesel-fueled automobiles and is currently one of the more popular applications. Industrial emission reduction is also widely used. It is an integral method, particularly at coal-fired power plants as well as refineries. These methods are analyzed and reviewed using SEM imaging to ensure their usefulness and accuracy. Additionally, research is currently being conducted to find out if nanoparticles can be engineered to separate car exhaust from methane or carbon dioxide, which has been known to damage the Earth's ozone layer. In fact, John Zhu, a professor at the University of Queensland, is exploring the creation of a carbon nanotube (CNT) which can trap greenhouse gases hundreds of times more efficiently than current methods can. Nanotechnology for sensors Perpetual exposure to heavy metal pollution and particulate matter will lead to health concerns such as lung cancer, heart conditions, and even motor neuron diseases. However, humanity's ability to shield itself from these health problems can be improved by accurate and swift nanocontact sensors able to detect pollutants at the atomic level. These nanocontact sensors do not require much energy to detect metal ions or radioactive elements. Additionally, they can be made to run in an automatic mode so that they can be readily used at any given moment. Moreover, these nanocontact sensors are energy- and cost-effective since they are fabricated with conventional microelectronic manufacturing equipment using electrochemical techniques. Some examples of nano-based monitoring include: Functionalized nanoparticles able to form bonds with anionic oxidants, thereby allowing the detection of carcinogenic substances at very low concentrations. Polymer nanospheres have been developed to measure organic contaminants in very low concentrations "Peptide nanoelectrodes have been employed based on the concept of thermocouple. In a 'nano-distance separation gap, a peptide molecule is placed to form a molecular junction. When a specific metal ion is bound to the gap; the electrical current will result conductance in a unique value. Hence the metal ion will be easily detected." Composite electrodes, a mixture of nanotubes and copper, have been created to detect substances such as organophosphorus pesticides, carbohydrates and other wood-pathogenic substances in low concentrations. Concerns Although green nanotechnology offers many advantages over traditional methods, there is still much debate about the concerns brought about by nanotechnology. For example, since the nanoparticles are small enough to be absorbed into skin and/or inhaled, countries are mandating additional research into the impact of nanotechnology on organisms. In fact, the field of eco-nanotoxicology was founded solely to study the effect of nanotechnology on the Earth and all of its organisms. At the moment, scientists are unsure of what will happen when nanoparticles seep into soil and water, but organizations, such as NanoImpactNet, have set out to study these effects.
See also Bioremediation Clean technology Environmental microbiology Green chemistry Industrial microbiology LifeSaver bottle NBI Knowledgebase Tata Swach References Further reading Evaluation of 'green' nanotechnology requires a full life cycle assessment Nano Flakes May Revolutionize Solar Cells External links Safer Nanomaterials and Nanomanufacturing Initiative Clean Tech Law & Business Project on Emerging Nanotechnologies Nanotechnology Lab National Nanotechnology Initiative The Berkeley Nanosciences and Nanoengineering Institute Nanotechnology: Green Manufacturing Nanotechnology Now "Can nanotechnology be green?" Folia Water – The Safe Water Book, containing 26 nanosilver-impregnated filter papers for water purification. Nanotechnology and the environment
Green nanotechnology
[ "Materials_science" ]
4,262
[ "Nanotechnology", "Nanotechnology and the environment" ]
16,065,426
https://en.wikipedia.org/wiki/Roe%20solver
The Roe approximate Riemann solver, devised by Phil Roe, is an approximate Riemann solver based on the Godunov scheme. It involves finding an estimate for the intercell numerical flux, or Godunov flux, at the interface between two computational cells Ui and Ui+1 on some discretised space–time computational domain. Roe scheme Quasi-linear hyperbolic system A non-linear system of hyperbolic partial differential equations representing a set of conservation laws in one spatial dimension can be written in the form ∂U/∂t + ∂F(U)/∂x = 0. Applying the chain rule to the second term we get the quasi-linear hyperbolic system ∂U/∂t + A(U) ∂U/∂x = 0, where A(U) = ∂F/∂U is the Jacobian matrix of the flux vector F(U). Roe matrix The Roe method consists of finding a matrix Ã = Ã(Ui, Ui+1) that is assumed constant between two cells. The Riemann problem can then be solved as a truly linear hyperbolic system at each cell interface. The Roe matrix must obey the following conditions: Diagonalizable with real eigenvalues: ensures that the new linear system is truly hyperbolic. Consistency with the exact Jacobian: when Ui = Ui+1 = U we demand that Ã(U, U) = A(U). Conserving: F(Ui+1) − F(Ui) = Ã (Ui+1 − Ui) across the interface. Phil Roe introduced a method of parameter vectors to find such a matrix for some systems of conservation laws. Intercell flux Once the Roe matrix corresponding to the interface between two cells is found, the intercell flux is given by solving the quasi-linear system as a truly linear system: Fi+1/2 = ½(F(Ui) + F(Ui+1)) − ½|Ã|(Ui+1 − Ui), where |Ã| = R|Λ|R⁻¹ is built from the right-eigenvector matrix R of Ã and the diagonal matrix |Λ| of absolute values of its eigenvalues. See also Riemann solver References Further reading Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag. Numerical differential equations Conservation equations
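As an illustrative sketch (not taken from the article), the Roe flux takes a particularly simple form for the scalar inviscid Burgers equation u_t + (u²/2)_x = 0, where the Roe-averaged "matrix" reduces to the scalar ã = (uL + uR)/2:

```python
def burgers_flux(u):
    """Physical flux F(u) = u**2 / 2 for the inviscid Burgers equation."""
    return 0.5 * u * u

def roe_flux_burgers(u_left, u_right):
    """Roe intercell flux for Burgers' equation.

    The Roe average a_tilde = (uL + uR)/2 satisfies
    F(uR) - F(uL) = a_tilde * (uR - uL), which is the conservation
    condition on the Roe matrix in the scalar case.
    """
    a_tilde = 0.5 * (u_left + u_right)
    return (0.5 * (burgers_flux(u_left) + burgers_flux(u_right))
            - 0.5 * abs(a_tilde) * (u_right - u_left))

# A right-moving shock between uL = 2 and uR = 0 travels at speed 1;
# the Roe flux returns the upwind (left) flux value, 2.0.
print(roe_flux_burgers(2.0, 0.0))
```

Like the general Roe scheme, this sketch can admit entropy-violating expansion shocks at sonic points unless an entropy fix is added.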
Roe solver
[ "Physics", "Mathematics" ]
309
[ "Conservation laws", "Mathematical objects", "Equations", "Conservation equations", "Symmetry", "Physics theorems" ]
3,027,889
https://en.wikipedia.org/wiki/Rotronics%20Wafadrive
The Rotronics Wafadrive is a magnetic tape storage peripheral launched in late 1984 for the ZX Spectrum home computer. Each tape is a continuous loop, unlike cassette tape. It was intended to compete with Sinclair's ZX Interface 1 and ZX Microdrive. The Wafadrive comprises two continuous-loop stringy floppy tape drives, an RS-232 interface and a Centronics parallel port. The drives can run at two speeds: high speed (for seeking) and low speed (for reading/writing, which is significantly slower than that of the Microdrive). The cartridges (or "wafers"), the same as those used in Entrepo stringy floppy devices for other microcomputers, are physically larger than Microdrive cartridges. They were available in three different capacities, nominally 16 kB, 64 kB or 128 kB. The larger sizes had the disadvantage of slower access, due to the longer length of tape. The same drive mechanism, manufactured by BSR, and cartridges were used in at least the following similar devices: Quick Data Drive (QDD), designed to connect to the cassette port of Commodore 64 and VIC-20 home computers. A&J Micro Drive System 100, for the TRS-80 Model 100 and its clones (Kyotronic KC-85, NEC PC-8201 & PC-8300, Olivetti M10), connected via the RS-232 port. References External links Rotronics Wafadrive User Manual meulie.net Rotronics Wafadrive User Manual Review of the Wafadrive in Your Sinclair, Issue 5, May 1986 Computer storage devices Home computer peripherals ZX Spectrum Tape-based computer storage
Rotronics Wafadrive
[ "Technology" ]
346
[ "Computer storage devices", "Computing stubs", "Recording devices", "Computer hardware stubs" ]
3,028,137
https://en.wikipedia.org/wiki/Fanshawe%20College%27s%20Music%20Industry%20Arts%20program
The Music Industry Arts Program at Fanshawe College trains students for careers in the contemporary music industry. It was started in 1970 as Creative Electronics by former Radio Caroline DJ Tom Lodge, but when the college demanded that Creative Electronics become a career program, Lodge had the students build a recording studio, gathered music industry executives for an advisory group and changed the name of the program to Music Industry Arts. The program has been the starting point for hundreds of acclaimed recording engineers, record producers, live performers, sound editors and entertainment industry executives. The program is highly competitive, receiving over 800 applications every year with only about 115 students being accepted. Students in the MIA program are also eligible for membership in a Student Section of the Audio Engineering Society. History A part of the School of Contemporary Media, Fanshawe's Music Industry Arts (MIA) program was founded as Creative Electronics in 1973 by British disc jockey Tom Lodge formerly of Radio Caroline. With six professors and 35 students in its inaugural year, courses in the three-year program centered on electronics and music synthesizers. When the college demanded that Creative Electronics become a career program, he had the students build a recording studio, gathered music industry executives for an advisory group and in 1975 renamed the program Music Industry Arts (MIA). Along with the change in name, course offerings were expanded to include music recording and engineering, music production, artist development, live performance, music writing and audio post-production. In the mid-1980s, graduates continued to earn College of Applied Arts and Technology (CAAT) diplomas, but the length of the program was reduced to two years. In 2010 there were 11 MIA professors, three technologists, and two lab assistants. Audio post-production professor Steve Malison, who joined MIA in 1995, became the program coordinator in 2007. Dan Brodbeck became the program coordinator in May 2016. From 1984 until his retirement in 2007, Canadian music Producer Jack Richardson was also a professor of the MIA program. Kelly Samuel was also a professor of the social media course. Facilities The Music Industry Arts program currently houses a total of six recording studios, a 20-station audio production lab, and a 120-seat live music performance venue. Its two main recording studios, Studio 1 (designated for first year students) and Studio 2 (for second year students) feature SSL Duality SE consoles which were installed in the summer months of 2010. Two smaller rooms, Studios 3 and 4, house SSL AWS 924 consoles. Recording is primarily done on Mac Pro systems utilizing Pro Tools and Logic software. Curriculum First year courses include Artist Development, Recording Engineering, Music Theory, Music Production, Music Preproduction, Live Performance, and Music Lab. Second year students take courses such as Recording Engineering, Audio Post Production, Artist Development, Live Performance, Entertainment Law, Music Business, Music Production, and Music Industry Connections. 
Guest lecturers Guest lecturers of note include: Bob Ezrin Phil Ramone Ken Scott Garth Richardson Greg Nori Fred Penner Bernie Finkelstein The Birthday Massacre Alan Cross Ralph Murphy Dala Corin Raymond Dan Weston Notable alumni Sarina Haggarty, songwriter Trevor Morris Les Stroud, Survivorman TV show Nathan Robitaille, sound editor Mike Roth Emm Gryner Kevin Banks, music editor Greg Below, Distort Entertainment Dave Wilson Deric Ruttan, songwriter Greg Hanna, Billboard Charting Artist Haviah Mighty, rapper Trevor Dubois, media personality Kelly Samuel, media personality Tom Trafalski, music editor References Music schools in Canada Audio engineering schools
Fanshawe College's Music Industry Arts program
[ "Engineering" ]
718
[ "Audio engineering", "Audio engineering schools" ]
3,028,181
https://en.wikipedia.org/wiki/Hasse%E2%80%93Witt%20matrix
In mathematics, the Hasse–Witt matrix H of a non-singular algebraic curve C over a finite field F is the matrix of the Frobenius mapping (p-th power mapping where F has q elements, q a power of the prime number p) with respect to a basis for the differentials of the first kind. It is a g × g matrix where C has genus g. The rank of the Hasse–Witt matrix is the Hasse or Hasse–Witt invariant. Approach to the definition This definition, as given in the introduction, is natural in classical terms, and is due to Helmut Hasse and Ernst Witt (1936). It provides a solution to the question of the p-rank of the Jacobian variety J of C; the p-rank is bounded by the rank of H, specifically it is the rank of the Frobenius mapping composed with itself g times. It is also a definition that is in principle algorithmic. There has been substantial recent interest in this as of practical application to cryptography, in the case of C a hyperelliptic curve. The curve C is superspecial if H = 0. That definition needs a couple of caveats, at least. Firstly, there is a convention about Frobenius mappings, and under the modern understanding what is required for H is the transpose of Frobenius (see arithmetic and geometric Frobenius for more discussion). Secondly, the Frobenius mapping is not F-linear; it is linear over the prime field Z/pZ in F. Therefore the matrix can be written down but does not represent a linear mapping in the straightforward sense. Cohomology The interpretation for sheaf cohomology is this: the p-power map acts on H1(C,OC), or in other words the first cohomology of C with coefficients in its structure sheaf. This is now called the Cartier–Manin operator (sometimes just Cartier operator), for Pierre Cartier and Yuri Manin. The connection with the Hasse–Witt definition is by means of Serre duality, which for a curve relates that group to H0(C, ΩC) where ΩC = Ω1C is the sheaf of Kähler differentials on C. Abelian varieties and their p-rank The p-rank of an abelian variety A over a field K of characteristic p is the integer k for which the kernel A[p] of multiplication by p has pk points. It may take any value from 0 to d, the dimension of A; by contrast for any other prime number l there are l2d points in A[l]. The reason that the p-rank is lower is that multiplication by p on A is an inseparable isogeny: the differential is p which is 0 in K. By looking at the kernel as a group scheme one can get the more complete structure (reference David Mumford Abelian Varieties pp. 146–7); but if for example one looks at reduction mod p of a division equation, the number of solutions must drop. The rank of the Cartier–Manin operator, or Hasse–Witt matrix, therefore gives an upper bound for the p-rank. The p-rank is the rank of the Frobenius operator composed with itself g times. In the original paper of Hasse and Witt the problem is phrased in terms intrinsic to C, not relying on J. It is there a question of classifying the possible Artin–Schreier extensions of the function field F(C) (the analogue in this case of Kummer theory). Case of genus 1 The case of elliptic curves was worked out by Hasse in 1934. Since the genus is 1, the only possibilities for the matrix H are: H is zero, Hasse invariant 0, p-rank 0, the supersingular case; or H non-zero, Hasse invariant 1, p-rank 1, the ordinary case. Here there is a congruence formula saying that H is congruent modulo p to the number N of points on C over F, at least when q = p. 
Because of Hasse's theorem on elliptic curves, knowing N modulo p determines N for p ≥ 5. This connection with local zeta-functions has been investigated in depth. For a plane curve defined by a cubic f(X,Y,Z) = 0, the Hasse invariant is zero if and only if the coefficient of (XYZ)p−1 in fp−1 is zero. Notes References (English translation of a Russian original) Algebraic curves Finite fields Matrices Complex manifolds
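The genus 1 criterion quoted above — vanishing of the coefficient of (XYZ)p−1 in fp−1 — can be checked by direct expansion. The following sketch is illustrative only; it uses SymPy, and the curve y²z = x³ + xz² is chosen as an arbitrary example rather than taken from the article.

```python
from sympy import symbols, expand, Poly

X, Y, Z = symbols("X Y Z")

def hasse_invariant_is_zero(f, p):
    """True if the plane cubic f = 0 over F_p has Hasse invariant 0
    (i.e. is supersingular), via the coefficient criterion above."""
    g = Poly(expand(f ** (p - 1)), X, Y, Z)
    c = g.coeff_monomial(X ** (p - 1) * Y ** (p - 1) * Z ** (p - 1))
    return c % p == 0

# y^2 z = x^3 + x z^2, the projective form of y^2 = x^3 + x
f = Y ** 2 * Z - X ** 3 - X * Z ** 2
print(hasse_invariant_is_zero(f, 7))   # True: supersingular for p = 7 (p = 3 mod 4)
print(hasse_invariant_is_zero(f, 5))   # False: ordinary for p = 5 (p = 1 mod 4)
```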
Hasse–Witt matrix
[ "Mathematics" ]
963
[ "Matrices (mathematics)", "Mathematical objects" ]
3,028,232
https://en.wikipedia.org/wiki/Topographical%20code
In medicine, "topographical codes" (or "topography codes") are codes that indicate a specific location in the body. Examples Only the first of these is a system dedicated only to topography. The others are more generalized systems that contain topographic axes. Nomina Anatomica (updated to Terminologia Anatomica) ICD-O SNOMED MeSH (the 'A' axis) See also Medical classification References Anatomy
Topographical code
[ "Biology" ]
90
[ "Anatomy" ]
3,028,413
https://en.wikipedia.org/wiki/Hasse%20invariant%20of%20a%20quadratic%20form
In mathematics, the Hasse invariant (or Hasse–Witt invariant) of a quadratic form Q over a field K takes values in the Brauer group Br(K). The name "Hasse–Witt" comes from Helmut Hasse and Ernst Witt. The quadratic form Q may be taken as a diagonal form Σ aixi². Its invariant is then defined as the product of the classes in the Brauer group of all the quaternion algebras (ai, aj) for i < j. This is independent of the diagonal form chosen to compute it. It may also be viewed as the second Stiefel–Whitney class of Q. Symbols The invariant may be computed for a specific symbol φ taking values in the group C2 = {±1}. In the context of quadratic forms over a local field, the Hasse invariant may be defined using the Hilbert symbol, the unique symbol taking values in C2. The invariants of a quadratic form over a local field are precisely the dimension, discriminant and Hasse invariant. For quadratic forms over a number field, there is a Hasse invariant ±1 for every finite place. The invariants of a form over a number field are precisely the dimension, discriminant, all local Hasse invariants and the signatures coming from real embeddings. See also Hasse–Minkowski theorem References Quadratic forms
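A small worked example (standard material, added here for concreteness rather than quoted from the article): for the two-variable diagonal form ⟨−1, −1⟩, i.e. −x1² − x2², the defining product has a single factor,

\[
s(\langle -1,-1\rangle) \;=\; (-1,-1)_K \;\in\; \operatorname{Br}(K).
\]

Over K = ℝ the quaternion algebra (−1, −1) is the Hamilton quaternions, so the Hasse invariant is the nontrivial class; over K = ℚp with p odd, −1 is a unit and the symbol (−1, −1) is trivial, so the Hasse invariant is trivial there.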
Hasse invariant of a quadratic form
[ "Mathematics" ]
293
[ "Quadratic forms", "Number theory" ]
3,028,468
https://en.wikipedia.org/wiki/Trimethylolpropane%20triacrylate
Trimethylolpropane triacrylate (TMPTA) is a trifunctional acrylate ester monomer derived from trimethylolpropane, used in the manufacture of plastics, adhesives, acrylic glue, anaerobic sealants, and ink. It is useful for its low volatility and fast cure response. It has the properties of weather, chemical and water resistance, as well as good abrasion resistance. End products include alkyd coatings, compact discs, hardwood floors, concrete and cementitious applications, Dental composites, photolithography, letterpress, screen printing, elastomers, automobile headlamps, acrylics and plastic components for the medical industry. Other uses As the molecule has acrylic functionality, it is capable of doing the Michael reaction with an amine. This allows its use in epoxy chemistry where its use speeds up the cure time considerably See also Pentaerythritol tetraacrylate 1,6-Hexanediol diacrylate References TRIMETHYLOLPROPANE TRIACRYLATE at chemicalland21.com Trimethylolpropane Triacrylate at OSHA Trimethylolpropane triacrylate CAS Number: 15625-89-5 at ntp.niehs.nih.gov Acrylate esters Monomers
Trimethylolpropane triacrylate
[ "Chemistry", "Materials_science" ]
297
[ "Monomers", "Polymer chemistry" ]
3,028,544
https://en.wikipedia.org/wiki/Dr.%20Seuss%27s%20Sleep%20Book
Dr. Seuss's Sleep Book, also known as The Sleep Book, is an American children's book written by Dr. Seuss in 1962. The story centers on the activity of sleep as readers follow the journey of many different characters preparing to slip into a deep slumber. This book documents the different sleeping activities that some of the creatures join in on: Jo and Mo Redd-Zoff participate in competitive sleep talking and a group "near Finnigan Fen" enjoys group sleepwalking. It opens with a small bug, named Van Vleck, yawning. This single yawn sets off a chain reaction, effectively putting "ninety-nine zillion nine trillion and two" creatures to sleep. Summary The book is written in the style of a reporter on the news who is reporting on the number of sleepers in the world. The book starts with a "very small bug" named Van Vleck yawning. The narrator then tells the reader that this is very important news and goes on to explain that a yawn is contagious and will cause sleep across the countryside. The narrator then takes us around the world to various locations where people are going to sleep, such as Herk-Heimer Falls, the Castle of Krupp, and the towns of Culppeper Springs and Mercedd. Various silly groups of people go to sleep together, such as a "Hinkle Horn Honking Club". Various creatures go to sleep too, such as a Collapsible Frink and the Chippendale Mupp. The narrator explains that they count the number of people and creatures asleep using an "Audio Telly O-Tally O-Count" which spies on people to know when they went to bed. The narrator then explores the latest news in the sports of sleeptalking and sleepwalking before returning to the previous pattern of discussing various locations (such as the Zwieback Motel and the District of Dofft) and creatures such as the Foona Lagoona Baboona and a Jedd. Then the book explains that "Ninety Nine Zillion, Nine Trillion and Two" creatures are asleep and then asks "What about you?" The final line of the book is a "Good night", which is unmetered. Genre Similarly to his other books, Dr. Seuss's Sleep Book is a fictional book classified under children's literature and characterized by its rhythmic sequence. His illustrations are known to depict a wide variety of unique creatures and odd relationships. Dr. Seuss uses his standard red, yellow, and turquoise colors, only deviating from this pattern to add hints of purple and one orange Moose Juice alongside a green Goose Juice. Analysis The Sleep Book sets a good example for young kids on the proper hygiene methods used before bedtime such as brushing their teeth, putting their things away, and making sure their alarm is set for the morning. It has also popularly been used in Pre-K through Grade 1 to help kids with the pronunciation of their "sl" sounds. Throughout this story, Dr. Seuss introduces his young audience to a number of sleep-related habits and activities: dreaming, sleep talking, sleep walking, yawning, and snoring. The book specifically indicates that it must be "read in bed" because of its ability to put kids to sleep. Reception Parents have praised Dr. Seuss's Sleep Book for its soothing rhythmic element that helps their children fall asleep. This children's book is said to be a top choice for parents to read to their kids at night due to its soothing rhythm and "timeless story". Additionally, Verlo reviewed Dr. Seuss's Sleep Book, commenting on its "relaxing" methods that effectively put children to sleep calmly and easily. One article praised Dr.
Seuss's ability to write to the younger audience "without condescension" and reasoned this factor to be his backing for such great success. During Read Across America Week, organizations such as the CPSD celebrate Dr. Seuss Day by reading The Sleep Book. Changes from earlier drafts Dr. Seuss's Sleep Book went through several iterations before the final draft was cemented. In an early draft, the County of Keck was named the County of Teck and Van Vleck, Van Geck. In another draft, the book included a stanza which reads: In another draft, the narrator is from "The Nightly News about just who's taking their nightly snooze". Book information Format: Hardcover Category: Juvenile Fiction - Bedtime & Dreams; Juvenile Fiction - Stories In Verse Author: Dr. Seuss The book is also available as an eBook. References External links American picture books Books by Dr. Seuss 1962 children's books Random House books Sleep in fiction
Dr. Seuss's Sleep Book
[ "Biology" ]
991
[ "Behavior", "Sleep in fiction", "Sleep" ]
3,028,786
https://en.wikipedia.org/wiki/Drafted%20masonry
Drafted masonry, in architecture, is the term given to large stones, the face of which has been dressed round the edge in a draft or sunken surface, leaving the centre portion as it came from the quarry. The dressing is worked with an adze of eight teeth to the inch, used in a vertical direction and to a width of two to four inches. The earliest example of drafted masonry is found in the immense platform built by Cyrus in 530 BC at Pasargadae in Persia. It occurs again in the palace of Hyrcanus, known as the Arak-el-Emir (176 BC), but is there inferior in execution. The finest drafted masonry is that dating from the time of Herod the Great, in the tower of David and the walls of the Haram in Jerusalem, and at Hebron. In the castles built by the Crusaders, the adze has been worked in a diagonal direction instead of vertically. In all these examples the size of the stones employed is sometimes enormous, so that the traditional influence of the Phoenician stonemasons seems to have lasted till the twelfth century. References Masonry
Drafted masonry
[ "Engineering" ]
230
[ "Construction", "Masonry" ]
3,029,260
https://en.wikipedia.org/wiki/Enriques%E2%80%93Kodaira%20classification
In mathematics, the Enriques–Kodaira classification groups compact complex surfaces into ten classes, each parametrized by a moduli space. For most of the classes the moduli spaces are well understood, but for the class of surfaces of general type the moduli spaces seem too complicated to describe explicitly, though some components are known. Max Noether began the systematic study of algebraic surfaces, and Guido Castelnuovo proved important parts of the classification. Federigo Enriques described the classification of complex projective surfaces, and Kunihiko Kodaira later extended the classification to include non-algebraic compact surfaces. The analogous classification of surfaces in positive characteristic was begun by David Mumford and completed by Enrico Bombieri and Mumford; it is similar to the characteristic 0 projective case, except that one also gets singular and supersingular Enriques surfaces in characteristic 2, and quasi-hyperelliptic surfaces in characteristics 2 and 3. Statement of the classification The Enriques–Kodaira classification of compact complex surfaces states that every nonsingular minimal compact complex surface is of exactly one of the 10 types listed on this page; in other words, it is one of the rational, ruled (genus > 0), type VII, K3, Enriques, Kodaira, torus, hyperelliptic, properly elliptic, or general type surfaces. For the 9 classes of surfaces other than general type, there is a fairly complete description of what all the surfaces look like (which for class VII depends on the global spherical shell conjecture, still unproved in 2024). For surfaces of general type not much is known about their explicit classification, though many examples have been found. The classification of algebraic surfaces in positive characteristic is similar to that of algebraic surfaces in characteristic 0, except that there are no Kodaira surfaces or surfaces of type VII, and there are some extra families of Enriques surfaces in characteristic 2, and hyperelliptic surfaces in characteristics 2 and 3, and in Kodaira dimension 1 in characteristics 2 and 3 one also allows quasielliptic fibrations. These extra families can be understood as follows: In characteristic 0 these surfaces are the quotients of surfaces by finite groups, but in finite characteristics it is also possible to take quotients by finite group schemes that are not étale. Oscar Zariski constructed some surfaces in positive characteristic that are unirational but not rational, derived from inseparable extensions (Zariski surfaces). In positive characteristic Serre showed that h0,1 may differ from h1,0, and Igusa showed that even when they are equal they may be greater than the irregularity (the dimension of the Picard variety). Invariants of surfaces Hodge numbers and Kodaira dimension The most important invariants of a compact complex surface used in the classification can be given in terms of the dimensions of various coherent sheaf cohomology groups. The basic ones are the plurigenera and the Hodge numbers, defined as follows: K is the canonical line bundle, whose sections are the holomorphic 2-forms, and the dimensions Pn = dim H0(X, nK) for n ≥ 1 are called the plurigenera. They are birational invariants, i.e., invariant under blowing up. Using Seiberg–Witten theory, Robert Friedman and John Morgan showed that for complex manifolds they only depend on the underlying oriented smooth 4-manifold. For non-Kähler surfaces the plurigenera are determined by the fundamental group, but for Kähler surfaces there are examples of surfaces that are homeomorphic but have different plurigenera and Kodaira dimensions.
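For reference, the standard definitions behind this discussion can be written out explicitly (textbook formulas supplied here for convenience, not reproduced from the article):

\[
P_n = \dim H^0(X, K^{\otimes n}), \qquad h^{i,j} = \dim H^j(X, \Omega^i),
\]
\[
\kappa(X) = \limsup_{n \to \infty} \frac{\log P_n}{\log n},
\qquad \text{with } \kappa = -\infty \text{ if every } P_n = 0 .
\]

Serre duality gives \(h^{i,j} = h^{2-i,\,2-j}\); in particular \(h^{0,2} = h^{2,0} = p_g\), and for a Kähler surface \(h^{1,0} = h^{0,1}\) as well.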
The individual plurigenera are not often used; the most important thing about them is their growth rate, measured by the Kodaira dimension. is the Kodaira dimension: it is (sometimes written −1) if the plurigenera are all 0, and is otherwise the smallest number (0, 1, or 2 for surfaces) such that is bounded. Enriques did not use this definition: instead he used the values of and . These determine the Kodaira dimension given the following correspondence: where is the sheaf of holomorphic i-forms, are the Hodge numbers, often arranged in the Hodge diamond: By Serre duality and The Hodge numbers of a complex surface depend only on the oriented real cohomology ring of the surface, and are invariant under birational transformations except for which increases by 1 under blowing up a single point. If the surface is Kähler then and there are only three independent Hodge numbers. If the surface is compact then equals or Invariants related to Hodge numbers There are many invariants that (at least for complex surfaces) can be written as linear combinations of the Hodge numbers, as follows: Betti numbers: defined by In characteristic p > 0 the Betti numbers are defined using l-adic cohomology and need not satisfy these relations. Euler characteristic or Euler number: The irregularity is defined as the dimension of the Picard variety and the Albanese variety and denoted by q. For complex surfaces (but not always for surfaces of prime characteristic) The geometric genus: The arithmetic genus: The holomorphic Euler characteristic of the trivial bundle (usually differs from the Euler number e defined above): By Noether's formula it is also equal to the Todd genus The signature of the second cohomology group for complex surfaces is denoted by : are the dimensions of the maximal positive and negative definite subspaces of so: c2 = e and are the Chern numbers, defined as the integrals of various polynomials in the Chern classes over the manifold. Other invariants There are further invariants of compact complex surfaces that are not used so much in the classification. These include algebraic invariants such as the Picard group Pic(X) of divisors modulo linear equivalence, its quotient the Néron–Severi group NS(X) with rank the Picard number ρ, topological invariants such as the fundamental group π1 and the integral homology and cohomology groups, and invariants of the underlying smooth 4-manifold such as the Seiberg–Witten invariants and Donaldson invariants. Minimal models and blowing up Any surface is birational to a non-singular surface, so for most purposes it is enough to classify the non-singular surfaces. Given any point on a surface, we can form a new surface by blowing up this point, which means roughly that we replace it by a copy of the projective line. For the purpose of this article, a non-singular surface X is called minimal if it cannot be obtained from another non-singular surface by blowing up a point. By Castelnuovo's contraction theorem, this is equivalent to saying that X has no (−1)-curves (smooth rational curves with self-intersection number −1). (In the more modern terminology of the minimal model program, a smooth projective surface X would be called minimal if its canonical line bundle KX is nef. A smooth projective surface has a minimal model in that stronger sense if and only if its Kodaira dimension is nonnegative.) 
Every surface X is birational to a minimal non-singular surface, and this minimal non-singular surface is unique if X has Kodaira dimension at least 0 or is not algebraic. Algebraic surfaces of Kodaira dimension may be birational to more than one minimal non-singular surface, but it is easy to describe the relation between these minimal surfaces. For example, P1 × P1 blown up at a point is isomorphic to P2 blown up twice. So to classify all compact complex surfaces up to birational isomorphism it is (more or less) enough to classify the minimal non-singular ones. Surfaces of Kodaira dimension −∞ Algebraic surfaces of Kodaira dimension can be classified as follows. If q > 0 then the map to the Albanese variety has fibers that are projective lines (if the surface is minimal) so the surface is a ruled surface. If q = 0 this argument does not work as the Albanese variety is a point, but in this case Castelnuovo's theorem implies that the surface is rational. For non-algebraic surfaces Kodaira found an extra class of surfaces, called type VII, which are still not well understood. Rational surfaces Rational surface means surface birational to the complex projective plane P2. These are all algebraic. The minimal rational surfaces are P2 itself and the Hirzebruch surfaces Σn for n = 0 or n ≥ 2. (The Hirzebruch surface Σn is the P1 bundle over P1 associated to the sheaf O(0) + O(n). The surface Σ0 is isomorphic to P1 × P1, and Σ1 is isomorphic to P2 blown up at a point so is not minimal.) Invariants: The plurigenera are all 0 and the fundamental group is trivial. Hodge diamond: Examples: P2, P1 × P1 = Σ0, Hirzebruch surfaces Σn, quadrics, cubic surfaces, del Pezzo surfaces, Veronese surface. Many of these examples are non-minimal. Ruled surfaces of genus > 0 Ruled surfaces of genus g have a smooth morphism to a curve of genus g whose fibers are lines P1. They are all algebraic. (The ones of genus 0 are the Hirzebruch surfaces and are rational.) Any ruled surface is birationally equivalent to P1 × C for a unique curve C, so the classification of ruled surfaces up to birational equivalence is essentially the same as the classification of curves. A ruled surface not isomorphic to P1 × P1 has a unique ruling (P1 × P1 has two). Invariants: The plurigenera are all 0. Hodge diamond: Examples: The product of any curve of genus > 0 with P1. Surfaces of class VII These surfaces are never algebraic or Kähler. The minimal ones with b2 = 0 have been classified by Bogomolov, and are either Hopf surfaces or Inoue surfaces. Examples with positive second Betti number include Inoue-Hirzebruch surfaces, Enoki surfaces, and more generally Kato surfaces. The global spherical shell conjecture implies that all minimal class VII surfaces with positive second Betti number are Kato surfaces, which would more or less complete the classification of the type VII surfaces. Invariants: q = 1, h1,0 = 0. All plurigenera are 0. Hodge diamond: Surfaces of Kodaira dimension 0 These surfaces are classified by starting with Noether's formula For Kodaira dimension 0, K has zero intersection number with itself, so Using we arrive at: Moreover since κ = 0 we have: combining this with the previous equation gives: In general 2h0,1 ≥ b1, so three terms on the left are non-negative integers and there are only a few solutions to this equation. For algebraic surfaces 2h0,1 − b1 is an even integer between 0 and 2pg. For compact complex surfaces 2h0,1 − b1 = 0 or 1. For Kähler surfaces 2h0,1 − b1 = 0 and h1,0 = h0,1. 
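For reference, the computation sketched in the Kodaira dimension 0 paragraph above rests on the following standard identities (a reconstruction under the usual conventions, for a minimal surface with K² = 0):

\[
12\,\chi(\mathcal{O}_X) = c_1^2 + c_2, \qquad
\chi(\mathcal{O}_X) = 1 - h^{0,1} + h^{0,2}, \qquad
c_2 = e = 2 - 2b_1 + b_2 ,
\]

so that setting \(c_1^2 = K^2 = 0\) and rearranging gives

\[
10 + 12\,h^{0,2} \;=\; 8\,h^{0,1} + 2\,\bigl(2h^{0,1} - b_1\bigr) + b_2 ,
\]

which is the equation whose non-negative solutions are enumerated in the table that follows.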
Most solutions to these conditions correspond to classes of surfaces, as in the following table: K3 surfaces These are the minimal compact complex surfaces of Kodaira dimension 0 with q = 0 and trivial canonical line bundle. They are all Kähler manifolds. All K3 surfaces are diffeomorphic, and their diffeomorphism class is an important example of a smooth spin simply connected 4-manifold. Invariants: The second cohomology group H2(X, Z) is isomorphic to the unique even unimodular lattice II3,19 of dimension 22 and signature −16. Hodge diamond: Examples: Degree 4 hypersurfaces in P3(C) Kummer surfaces. These are obtained by quotienting out an abelian surface by the automorphism a → −a, then blowing up the 16 singular points. A marked K3 surface is a K3 surface together with an isomorphism from II3,19 to H2(X, Z). The moduli space of marked K3 surfaces is connected non-Hausdorff smooth analytic space of dimension 20. The algebraic K3 surfaces form a countable collection of 19-dimensional subvarieties of it. Abelian surfaces and 2-dimensional complex tori The two-dimensional complex tori include the abelian surfaces. One-dimensional complex tori are just elliptic curves and are all algebraic, but Riemann discovered that most complex tori of dimension 2 are not algebraic. The algebraic ones are exactly the 2-dimensional abelian varieties. Most of their theory is a special case of the theory of higher-dimensional tori or abelian varieties. Criteria to be a product of two elliptic curves (up to isogeny) were a popular study in the nineteenth century. Invariants: The plurigenera are all 1. The surface is diffeomorphic to S1 × S1 × S1 × S1 so the fundamental group is Z4. Hodge diamond: Examples: A product of two elliptic curves. The Jacobian of a genus 2 curve. Any quotient of C2 by a lattice. Kodaira surfaces These are never algebraic, though they have non-constant meromorphic functions. They are usually divided into two subtypes: primary Kodaira surfaces with trivial canonical bundle, and secondary Kodaira surfaces which are quotients of these by finite groups of orders 2, 3, 4, or 6, and which have non-trivial canonical bundles. The secondary Kodaira surfaces have the same relation to primary ones that Enriques surfaces have to K3 surfaces, or bielliptic surfaces have to abelian surfaces. Invariants: If the surface is the quotient of a primary Kodaira surface by a group of order k = 1, 2, 3, 4, 6, then the plurigenera Pn are 1 if n is divisible by k and 0 otherwise. Hodge diamond: Examples: Take a non-trivial line bundle over an elliptic curve, remove the zero section, then quotient out the fibers by Z acting as multiplication by powers of some complex number z. This gives a primary Kodaira surface. Enriques surfaces These are the complex surfaces such that q = 0 and the canonical line bundle is non-trivial, but has trivial square. Enriques surfaces are all algebraic (and therefore Kähler). They are quotients of K3 surfaces by a group of order 2 and their theory is similar to that of algebraic K3 surfaces. Invariants: The plurigenera Pn are 1 if n is even and 0 if n is odd. The fundamental group has order 2. The second cohomology group H2(X, Z) is isomorphic to the sum of the unique even unimodular lattice II1,9 of dimension 10 and signature −8 and a group of order 2. Hodge diamond: Marked Enriques surfaces form a connected 10-dimensional family, which has been described explicitly. 
In characteristic 2 there are some extra families of Enriques surfaces called singular and supersingular Enriques surfaces; see the article on Enriques surfaces for details. Hyperelliptic (or bielliptic) surfaces Over the complex numbers these are quotients of a product of two elliptic curves by a finite group of automorphisms. The finite group can be Z/2Z,  Z/2Z + Z/2Z, Z/3Z,  Z/3Z + Z/3Z,  Z/4Z,  Z/4Z + Z/2Z, or Z/6Z, giving seven families of such surfaces. Hodge diamond: Over fields of characteristics 2 or 3 there are some extra families given by taking quotients by a non-etale group scheme; see the article on hyperelliptic surfaces for details. Surfaces of Kodaira dimension 1 An elliptic surface is a surface equipped with an elliptic fibration (a surjective holomorphic map to a curve B such that all but finitely many fibers are smooth irreducible curves of genus 1). The generic fiber in such a fibration is a genus 1 curve over the function field of B. Conversely, given a genus 1 curve over the function field of a curve, its relative minimal model is an elliptic surface. Kodaira and others have given a fairly complete description of all elliptic surfaces. In particular, Kodaira gave a complete list of the possible singular fibers. The theory of elliptic surfaces is analogous to the theory of proper regular models of elliptic curves over discrete valuation rings (e.g., the ring of p-adic integers) and Dedekind domains (e.g., the ring of integers of a number field). In finite characteristic 2 and 3 one can also get quasi-elliptic surfaces, whose fibers may almost all be rational curves with a single node, which are "degenerate elliptic curves". Every surface of Kodaira dimension 1 is an elliptic surface (or a quasielliptic surface in characteristics 2 or 3), but the converse is not true: an elliptic surface can have Kodaira dimension , 0, or 1. All Enriques surfaces, all hyperelliptic surfaces, all Kodaira surfaces, some K3 surfaces, some abelian surfaces, and some rational surfaces are elliptic surfaces, and these examples have Kodaira dimension less than 1. An elliptic surface whose base curve B is of genus at least 2 always has Kodaira dimension 1, but the Kodaira dimension can be 1 also for some elliptic surfaces with B of genus 0 or 1. Invariants: Example: If E is an elliptic curve and B is a curve of genus at least 2, then E×B is an elliptic surface of Kodaira dimension 1. Surfaces of Kodaira dimension 2 (surfaces of general type) These are all algebraic, and in some sense most surfaces are in this class. Gieseker showed that there is a coarse moduli scheme for surfaces of general type; this means that for any fixed values of the Chern numbers c and c2, there is a quasi-projective scheme classifying the surfaces of general type with those Chern numbers. However it is a very difficult problem to describe these schemes explicitly, and there are very few pairs of Chern numbers for which this has been done (except when the scheme is empty!) Invariants: There are several conditions that the Chern numbers of a minimal complex surface of general type must satisfy: (the Bogomolov–Miyaoka–Yau inequality) (the Noether inequality) Most pairs of integers satisfying these conditions are the Chern numbers for some complex surface of general type. Examples: The simplest examples are the product of two curves of genus at least 2, and a hypersurface of degree at least 5 in P3. There are a large number of other constructions known. 
However, there is no known construction that can produce "typical" surfaces of general type for large Chern numbers; in fact it is not even known if there is any reasonable concept of a "typical" surface of general type. There are many other examples that have been found, including most Hilbert modular surfaces, fake projective planes, Barlow surfaces, and so on. See also List of algebraic surfaces References – the standard reference book for compact complex surfaces ; ( softcover) – including a more elementary introduction to the classification Lang, William E. "Quasi-elliptic surfaces in characteristic three", Annales scientifiques de l'École Normale Supérieure, Série 4, Tome 12 (1979) no. 4, pp. 473-500. doi : 10.24033/asens.1373. Theorem 4.3 of this article classifies the Hodge numbers of a quasi-hyperelliptic surface in characteristic three. External links le superficie algebriche is an interactive visualisation of the Enriques--Kodaira classification, by Pieter Belmans and Johan Commelin Complex surfaces Birational geometry Algebraic surfaces Mathematical classification systems
Enriques–Kodaira classification
[ "Mathematics" ]
4,140
[ "nan" ]
3,029,823
https://en.wikipedia.org/wiki/Groatland
A groatland, also known as a fourpenceland, fourpennyland or “Còta bàn” (meaning "white coat") was a Scottish land measurement. It was so called because the annual rent paid on it was a Scottish “groat” (coin). See also Obsolete Scottish units of measurement In the East Highlands: Rood Scottish acre = 4 roods Oxgang (Damh-imir) = the area an ox could plow in a year (around 20 acres) Ploughgate (?) = 8 oxgangs Daugh (Dabhach) = 4 ploughgates In the West Highlands: Markland (Marg-fhearann) = 8 Ouncelands (varied) Ounceland (Tir-unga) = 20 Pennylands Pennyland (Peighinn) = basic unit; sub-divided into half penny-land and farthing-land (Other terms in use; Quarterland (Ceathramh): variable value; Groatland (Còta bàn) References Obsolete Scottish units of measurement Units of area
Groatland
[ "Mathematics" ]
225
[ "Quantity", "Units of area", "Units of measurement" ]
3,029,829
https://en.wikipedia.org/wiki/Cross-interleaved%20Reed%E2%80%93Solomon%20coding
In the compact disc system, cross-interleaved Reed–Solomon code (CIRC) provides error detection and error correction. CIRC adds to every three data bytes one redundant parity byte. Overview Reed–Solomon codes are specifically useful in combating mixtures of random and burst errors. CIRC corrects error bursts up to 4000 data bits in sequence (2.5 mm in length as seen on CD surface) and compensates for error bursts up to 12,000 bits (7.5 mm) that may be caused by minor scratches. Characteristics High random error correctability Long burst error correctability In case the burst correction capability is exceeded, interpolation may provide concealment by approximation Simple decoder strategy possible with reasonably-sized external random access memory Very high efficiency Room for future introduction of four audio channels without major changes in the format (as of 2024, this has not been implemented). Interleave Errors found in compact discs (CDs) are a combination of random and burst errors. In order to alleviate the strain on the error control code, some form of interleaving is required. The CD system employs two concatenated Reed–Solomon codes, which are interleaved cross-wise. Judicious positioning of the stereo channels as well as the audio samples on even or odd-number instants within the interleaving scheme, provide the error concealment ability, and the multitude of interleave structures used on the CD makes it possible to correct and detect errors with a relatively low amount of redundancy. See also Multiplexing Parity (mathematics) Parity (telecommunication) Checksum References Error detection and correction Compact disc
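As a rough illustration of why interleaving helps (a simplified sketch, not the actual CIRC frame layout or its unequal delay lines), writing symbols into rows and reading the rows out sequentially spreads a burst of consecutive channel errors across many codewords, so each codeword only has to correct a few of them:

```python
def interleave(data, depth):
    """Simple block interleaver: distribute symbols round-robin into
    `depth` rows, then read the rows out one after another."""
    rows = [data[i::depth] for i in range(depth)]
    return [symbol for row in rows for symbol in row]

def deinterleave(data, depth):
    """Inverse of interleave() when len(data) is a multiple of depth."""
    cols = len(data) // depth
    rows = [data[i * cols:(i + 1) * cols] for i in range(depth)]
    return [rows[i % depth][i // depth] for i in range(len(data))]

frame = list(range(24))            # 24 symbols, as in one CD audio frame
sent = interleave(frame, 4)
sent[4:8] = ["X"] * 4              # a burst of 4 corrupted symbols on the channel
received = deinterleave(sent, 4)
print(received)                    # the burst is now scattered, not consecutive
```

CIRC itself concatenates two Reed–Solomon codes (C1 and C2) with delay lines of increasing length between them rather than this simple block scheme, but the burst-spreading principle is the same.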
Cross-interleaved Reed–Solomon coding
[ "Engineering" ]
337
[ "Error detection and correction", "Reliability engineering" ]
3,029,842
https://en.wikipedia.org/wiki/Quarterland
A Quarterland or Ceathramh (Scottish Gaelic) was a Scottish land measurement. It was used mainly in the west and north. It was supposed to be equivalent to eight fourpennylands, roughly equivalent to a quarter of a markland. However, in Islay, a quarterland was equivalent to a quarter of an ounceland. Half of a quarterland would be an ochdamh (i.e. one-eighth), and in Islay a quarter of a quarterland was a leothras (i.e. one-sixteenth). The name appears in many Scottish placenames, notably Kirriemuir. Kerrowaird – Ceathramh àrd (High Quarterland) Kerrowgair – Ceathramh geàrr (Rough Quarterland) Kerry (Cowal) - An Ceathramh Còmh’lach (The Cowal Quarterland) Kerrycroy - An Ceathramh cruaidh (The Hard Quarterland) Kirriemuir – An Ceathramh Mòr/Ceathramh Mhoire (either "The Big Quarterland" or "Mary’s Quarterland") Ceathramh was also used in Gàidhlig for a bushel and a firlot (or four pecks), as was Feòirling, the term used for a farthingland. Isle of Man The Isle of Man retained a similar system into historic times in the traditional land divisions of treens (cf. the Scottish Gaelic word trian, a third part), which are in turn subdivided into smaller units called quarterlands. See also Obsolete Scottish units of measurement In the East Highlands: Rood Scottish acre = 4 roods Oxgang (Damh-imir) = the area an ox could plow in a year (around 20 acres) Ploughgate (?) = 8 oxgangs Daugh (Dabhach) = 4 ploughgates In the West Highlands: Markland (Marg-fhearann) = 8 Ouncelands (varied) Ounceland (Tir-unga) = 20 Pennylands Pennyland (Peighinn) = basic unit; sub-divided into half penny-land and farthing-land (Other terms in use; Quarterland (Ceathramh): variable value; Groatland (Còta bàn) Townland Township (Scotland) References Obsolete Scottish units of measurement History of the Isle of Man Units of area
Quarterland
[ "Mathematics" ]
507
[ "Quantity", "Units of area", "Units of measurement" ]
3,029,998
https://en.wikipedia.org/wiki/McMurry%20reaction
The McMurry reaction is an organic reaction in which two ketone or aldehyde groups are coupled to form an alkene using a titanium chloride compound such as titanium(III) chloride and a reducing agent. The reaction is named after its co-discoverer, John E. McMurry. The McMurry reaction originally involved the use of a mixture of TiCl3 and LiAlH4, which produces the active reagents. Related reagent systems have been developed involving the combination of TiCl3 or TiCl4 with various other reducing agents, including potassium, zinc, and magnesium. This reaction is related to the pinacol coupling reaction, which also proceeds by reductive coupling of carbonyl compounds. Reaction mechanism This reductive coupling can be viewed as involving two steps. First is the formation of a pinacolate (1,2-diolate) complex, a step which is equivalent to the pinacol coupling reaction. The second step is the deoxygenation of the pinacolate, which yields the alkene; this second step exploits the oxophilicity of titanium. Several mechanisms have been discussed for this reaction. Low-valent titanium species induce coupling of the carbonyls by single electron transfer to the carbonyl groups. The required low-valent titanium species are generated via reduction, usually with zinc powder. This reaction is often performed in THF because it solubilizes intermediate complexes, facilitates the electron transfer steps, and is not reduced under the reaction conditions. The nature of the low-valent titanium species formed is varied, as the products formed by reduction of the precursor titanium halide complex will naturally depend upon both the solvent (most commonly THF or DME) and the reducing agent employed: typically lithium aluminum hydride, zinc-copper couple, zinc dust, magnesium-mercury amalgam, magnesium, or alkali metals. Bogdanovic and Bolte identified the nature and mode of action of the active species in some classical McMurry systems, and an overview of proposed reaction mechanisms has been published. It is of note that titanium dioxide is not generally a product of the coupling reaction. Although it is true that titanium dioxide is usually the eventual fate of titanium used in these reactions, it is generally formed upon the aqueous workup of the reaction mixture. Background and scope The original publication by Mukaiyama demonstrated reductive coupling of ketones using reduced titanium reagents. McMurry and Fleming coupled retinal to give carotene using a mixture of titanium trichloride and lithium aluminium hydride. Other symmetrical alkenes were prepared similarly, e.g. from dihydrocivetone, adamantanone and benzophenone (the latter yielding tetraphenylethylene). A McMurry reaction using titanium tetrachloride and zinc is employed in the synthesis of a first-generation molecular motor. In another example, Nicolaou's total synthesis of Taxol uses this reaction, although coupling stops with the formation of a cis-diol rather than an olefin. Optimized procedures employ the dimethoxyethane complex of TiCl3 in combination with Zn(Cu). The first porphyrin isomer, porphycene, was synthesised by McMurry coupling. Further reading References External links McMurry reaction in organic-chemistry.org Mcmurry reaction at the University of Sussex Olefination reactions Carbon-carbon bond forming reactions Substitution reactions Name reactions
McMurry reaction
[ "Chemistry" ]
724
[ "Olefination reactions", "Carbon-carbon bond forming reactions", "Coupling reactions", "Organic reactions", "Name reactions" ]
3,030,001
https://en.wikipedia.org/wiki/Adamantanone
Adamantanone is the ketone of adamantane. A white solid, it is prepared by oxidation of adamantane. It is a precursor to several adamantane derivatives. Adamantanone and some related polycyclic ketones, are reluctant to form enolates. This barrier arises because the resulting carbanion cannot exist in conjugation with the carbonyl pi-bond. References Adamantanes Ketones
Adamantanone
[ "Chemistry" ]
86
[ "Ketones", "Functional groups" ]
3,030,168
https://en.wikipedia.org/wiki/1%2C3%2C2%2C4-Dithiadiphosphetane%202%2C4-disulfides
1,3,2,4-Dithiadiphosphetane 2,4-disulfides are a class of organophosphorus, four-membered ring compounds which contain a P2S2 ring. Many of these compounds are able to act as sources of the dithiophosphine ylides; the best-known example is Lawesson's reagent. Other examples of this class of compound have been made; many inorganic chemists are now using the ferrocenyl analogue of Lawesson's reagent (Fc = ferrocenyl) as a starting material in reactions investigating the general chemistry of the 1,3,2,4-dithiadiphosphetane 2,4-disulfides. One reason for this is that the compound and all its derivatives are red, which makes column chromatography of the products easier. Also, the ferrocenyl groups provide an electrochemical handle, which provides another means of investigating the properties of the products. Examples While several different routes to the 1,3,2,4-dithiadiphosphetane 2,4-disulfides exist, the most commonly used is the electrophilic aromatic substitution reaction of an arene with P4S10. An alternative reaction is the reaction of a thiol with P4S10 to form a substance like the Davy reagent. The Davy reagent is identical to Lawesson's reagent except that in place of the para-methoxyphenyl groups it has aryl sulfide groups. While the Davy reagent is more soluble than Lawesson's reagent, the vile smell of the thiol starting material is likely to make the synthesis of this compound not worth the trouble. In both the patent and academic chemical literature are examples of 1,3,2,4-dithiadiphosphetane 2,4-disulfides with higher solubilities. These highly soluble versions of Lawesson's reagent are created by the reaction of P4S10 with aryl ethers other than anisole. For instance, butoxybenzene and 2-tert-butylanisole have both been reacted to form more soluble thionation reagents of the 1,3,2,4-dithiadiphosphetane 2,4-disulfide class. An important subclass of these compounds are the naphthalen-1,8-diyl 1,3,2,4-dithiadiphosphetane 2,4-disulfides; these are intellectually interesting because the two dithiophosphine ylides are fixed together in space by the rigid naphthalene unit. The reactivity of these compounds is very different from that of other 1,3,2,4-dithiadiphosphetane 2,4-disulfides. Reactions The dithiophosphine ylides are normally attacked at the phosphorus atom by a nucleophile; for instance, the reaction of an alkoxide, phenolate, alcohol or phenol with a 1,3,2,4-dithiadiphosphetane 2,4-disulfide can form a new compound with a phosphorus-oxygen bond. Such a reaction has been used in the formation of metal binding agents and in the synthesis of insecticides. The reaction of an electrophile with 1,3,2,4-dithiadiphosphetane 2,4-disulfides is less common, but the reaction of an alkyl halide with a 1,3,2,4-dithiadiphosphetane 2,4-disulfide forms a new compound with a sulfur-carbon bond and a phosphorus-halide bond. Such a compound could act as an acetylcholinesterase inhibitor in insects, but in order to make a better insecticide it would be best to convert the halide to another leaving group which would form a less water-sensitive product. For instance, the reaction of para-nitrophenolate would form a compound similar to parathion. Lawesson's reagent has been used as a starting material for a herbicide by reaction with a 1-alkoxy-2,3-dihydroxypropane. This formed a compound which could be used to kill plants. This reaction of a 1,2-diol with Lawesson's reagent results in a symmetric breaking of the ring; both halves of the Lawesson's reagent end up being converted to the same product.
A different type of ring breaking reaction can occur when Lawesson's reagent is reacted with a metal compounds such as a platinum dichloride bis-phosphine complex, in this case one molecule of is formed as a side product to the platinum complex (). Lawesson's reagent can be used as a dehydrating reagent, for example it has been used to convert a β-aminoamide into an imidazoline. Another useful reaction of Lawesson's reagent is the conversion of a 1,4-diketone into a thiophene ring, this reaction can be done with but a much higher temperature would be required to make it work with . It was claimed in a German patent that the reaction of 1,3,2,4-dithiadiphosphetane 2,4-disulfides with dialkyl cyanamides formed plant protection agents which contained six-membered () rings. It has been proven in recent times by the reaction of diferrocenyl 1,3,2,4-dithiadiphosphetane 2,4-disulfide (and Lawesson's reagent) with dimethyl cyanamide that in fact a mixture of several different phosphorus containing compounds is formed. Depending on the concentration of the dimethyl cyanamide in the reaction mixture either a different six membered ring compound () or a non-heterocylic compound () is formed as the major product, the other compound is formed as a minor product. In addition small traces of other compounds are also formed in the reaction. It is unlikely that the ring compound (P-N=C-S-C=N-) {or its isomer} would act as a plant protection agent, but () compounds can act as nerve poisons in insects. These compounds bearing terminal sulfur atoms on the phosphorus atom are much less toxic than the compounds (such as sarin, VX and tetraethyl pyrophosphate) which have an oxygen in place of this terminal sulfur. This is because the P=S compound is not active as an acetylcholinesterase inhibitor in either mammals or insects, in mammals the animals metabolism tends to remove lipophilic side groups from the phosphorus atom while an insect tends to oxidise the compound so removing the terminal sulfur and replacing it with a terminal oxygen which causes the compound to be more able to act as an acetylcholinesterase inhibitor. The dithiophosphine ylides of LR and related compounds can react with strained alkenes, for example the bicyclic norbornadiene reacts with to form a compound with a ring. Unlike small rings containing only first row elements such as carbon, nitrogen and oxygen the small rings containing more heavy elements such as sulfur and selenium are more stable with regards to ring opening. Hence, the rings such as are much more stable than things like epoxides. A selenium version of this ring type has been made, one notable example has been named Woollins' reagent and is , this is made by the reaction of with selenium metal. The solubility of this compound is very low but the group of Prof John Derek Woollins have published some reactions of this compound. For instance the reaction of Woollins' reagent with a dialkyl cyanamide has been found to form a bicyclic system. References Organophosphorus compounds Phosphorus compounds Four-membered rings
1,3,2,4-Dithiadiphosphetane 2,4-disulfides
[ "Chemistry" ]
1,700
[ "Organophosphorus compounds", "Organic compounds", "Functional groups" ]
3,030,181
https://en.wikipedia.org/wiki/Knowledge-based%20engineering
Knowledge-based engineering (KBE) is the application of knowledge-based systems technology to the domain of manufacturing design and production. The design process is inherently a knowledge-intensive activity, so a great deal of the emphasis for KBE is on the use of knowledge-based technology to support computer-aided design (CAD) however knowledge-based techniques (e.g. knowledge management) can be applied to the entire product lifecycle. The CAD domain has always been an early adopter of software-engineering techniques used in knowledge-based systems, such as object-orientation and rules. Knowledge-based engineering integrates these technologies with CAD and other traditional engineering software tools. Benefits of KBE include improved collaboration of the design team due to knowledge management, improved re-use of design artifacts, and automation of major parts of the product lifecycle. Overview KBE is essentially engineering on the basis of knowledge models. A knowledge model uses knowledge representation to represent the artifacts of the design process (as well as the process itself) rather than or in addition to conventional programming and database techniques. The advantages to using knowledge representation to model industrial engineering tasks and artifacts are: Improved integration. In traditional CAD and industrial systems each application often has its own slightly different model. Having a standardized knowledge model makes integration easier across different systems and applications. More re-use. A knowledge model facilitates storing and tagging design artifacts so that they can easily be found again and re-used. Also, knowledge models are themselves more re-usable by virtue of using formalism such as IS-A relations (classes and subclasses in the object-oriented paradigm). With subclassing it can be very easy to create new types of artifacts and processes by starting with an existing class and adding a new subclass that inherits all the default properties and behaviors of its parents and then can be adapted as needed. Better maintenance. Class hierarchies not only facilitate re-use they also facilitate maintenance of systems. By having one definition of a class that is shared by multiple systems, issues of change control and consistency are greatly simplified. More automation. Expert system rules can capture and automate decision making that is left to human experts with most conventional systems. KBE can have a wide scope that covers the full range of activities related to Product Lifecycle Management and Multidisciplinary design optimization. KBE's scope includes design, analysis (computer-aided engineering – CAE), manufacturing, and support. In this inclusive role, KBE has to cover a large multi-disciplinary role related to many computer-aided technologies (CAx). There are two primary ways that KBE can be implemented: Build knowledge models from the ground up using knowledge-based technology Layer knowledge-based technology on top of existing CAD, simulation, and other engineering applications An early example of the first approach was the Simkit tool developed by Intellicorp in the 1980s. Simkit was developed on top of Intellicorp's Knowledge Engineering Environment (KEE). KEE was a very powerful knowledge-based systems development environment. KEE started on Lisp and added frames, objects, and rules, as well as powerful additional tools, such as hypothetical reasoning and truth maintenance. Simkit added stochastic simulation capabilities to the KEE environment. 
These capabilities included an event model, random distribution generators, simulation visualization, and more. The Simkit tool was an early example of KBE. It could define a simulation in terms of class models and rules and then run the simulation as a conventional simulation would. Along the way, the simulation could continue to invoke rules, demons, and object methods, providing the potential for much richer simulation as well as analysis than conventional simulation tools. One of the issues that Simkit faced was a common issue for most early KBE systems developed with this method: The Lisp knowledge-based environments provide very powerful knowledge representation and reasoning capabilities; however, they did so at the cost of massive requirements for memory and processing that stretched the limits of the computers of the time. Simkit could run simulations with thousands of objects and do very sophisticated analysis on those objects. However, industrial simulations often required tens or hundreds of thousands of objects, and Simkit had difficulty scaling up to such levels. The second alternative to developing KBE is illustrated by the CATIA product suite. CATIA started with products for CAD and other traditional industrial engineering applications and added knowledge-based capabilities on to them; for example, their KnowledgeWare module. History KBE developed in the 1980s. It was part of the initial wave of investment in Artificial Intelligence for business that fueled expert systems. Like expert systems, it relied on what at the time were leading edge advances in corporate information technology such as PCs, workstations, and client-server architectures. These same technologies were also facilitating the growth of CAx and CAD software. CAD tended to drive leading edge technologies and even push them past their current limits. The best example of this was object-oriented programming and database technology, which were adapted by CAD when most corporate information technology shops were dominated by relational databases and procedural programming. As with expert systems, KBE suffered a downturn during the AI Winter. Also, as with expert systems and artificial intelligence technology in general, there was renewed interest with the Internet. In the case of KBE, the interest was perhaps strongest in the business-to-business type of electronic commerce and technologies that facilitate the definition of industry standard vocabularies and ontologies for manufactured products. The semantic web is the vision of Tim Berners Lee for the next generation of the Internet. This will be a knowledge-based Internet built on ontologies, objects, and frame technologies that were also enabling technologies for KBE. Important technologies for the semantic web are XML, RDF, and OWL. The semantic web has excellent potential for KBE, and KBE ontologies and projects are a strong area for current research. KBE and product lifecycle management Product Lifecycle Management (PLM) is the management of the manufacturing process of any industry that produces goods. It can span the full product lifecycle from idea generation to implementation, delivery, and disposal. KBE at this level will deal with product issues of a more generic nature than it will with CAx. A natural area of emphasis is on the production process; however, lifecycle management can cover many more issues such as business planning, marketing, etc. 
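As a purely illustrative sketch of the knowledge-model idea described above (design artifacts arranged in an IS-A class hierarchy that inherit default properties, plus a rule that automates a small decision), the following Python fragment uses invented class and rule names and does not correspond to any particular KBE product:

# Hypothetical knowledge-model sketch: an IS-A hierarchy of design artifacts with
# inherited defaults and one toy "expert" rule. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Artifact:                       # root of the IS-A hierarchy
    name: str
    material: str = "steel"           # default inherited by every subclass
    mass_kg: float = 0.0

@dataclass
class Fastener(Artifact):             # Fastener IS-A Artifact
    diameter_mm: float = 6.0

@dataclass
class Bolt(Fastener):                 # Bolt IS-A Fastener; only overrides what differs
    thread_pitch_mm: float = 1.0

def corrosion_rule(part, environment):
    """Toy rule: parts destined for a marine environment get a corrosion-resistant material."""
    if environment == "marine" and part.material == "steel":
        part.material = "stainless steel"
    return part

bolt = corrosion_rule(Bolt(name="M6 deck bolt"), environment="marine")
print(bolt)    # Bolt(name='M6 deck bolt', material='stainless steel', mass_kg=0.0, ...)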
An advantage of using KBE is getting the automated reasoning and knowledge management services of a knowledge-based environment integrated with the many diverse but related needs of lifecycle management. KBE supports the decision processes involved with configuration, trades, control, management, and a number of other areas, such as optimization. KBE and CAx CAx refers to the domain of computer-aided tools for analysis and design. CAx spans multiple domains. Examples are computer-aided design of manufactured parts, software, the architecture of buildings, etc. Although each specific domain of CAx will have very different kinds of problems and artifacts, they all share common issues as well such as having to manage collaboration of sophisticated knowledge workers, design and re-use of complex artifacts, etc. Essentially KBE extends, builds on, and integrates with the CAx domain typically referred to as Computer Aided Design (CAD). In this sense KBE is analogous to Knowledge-Based Software Engineering, which extended the domain of Computer Aided Software Engineering with knowledge-based tools and technology. What KBSE was to software and CASE, KBE is to manufactured products and CAD. An example can be taken from Boeing's experience. The 777 Program took on the challenge of having a digitally-defined plane. That required an investment in large-scale systems, databases, and workstations for design and analytical engineering work. Given the magnitude of the computing work that was required, KBE got its toe in the door, so to speak, through a "pay as you go plan." Essentially, this technique was to show benefits and then to obtain more work (think agile engineering) thereby. In the case of the 777, the project got to where influences to changes in the early part of the design/build stream (loads) could be recomputed over a weekend to allow evaluation by downstream processes. As required, engineers were in the loop to finish and sign off on work. At the same time, CAx allowed tighter tolerances to be met. With the 777, KBE was so successful that subsequent programs applied it in more areas. Over time, KBE facilities were integrated into the CAx platform and are a normal part of the operation. KBE and knowledge management One of the most important knowledge-based technologies for KBE is knowledge management. Knowledge management tools support a wide spectrum repository, i.e., a repository that can support all different types of work artifacts: informal drawings and notes, large database tables, multimedia and hypertext objects, etc. Knowledge management provides the various group support tools to help diverse stake holders collaborate on the design and implementation of products. It also provides tools to automate the design process (e.g., rules) and to facilitate re-use. KBE methodology The development of KBE applications concerns the requirements to identify, capture, structure, formalize, and finally implement knowledge. Many different so-called KBE platforms support only the implementation step, which is not always the main bottleneck in the KBE development process. In order to limit the risk associated with the development and maintenance of KBE application, there is a need to rely on an appropriate methodology for managing the knowledge and maintaining it up to date. 
As example of such KBE methodology, the EU project MOKA, "Methodology and tools Oriented to Knowledge based Applications," proposes solutions which focus on the structuring and formalization steps as well as links to the implementation. An alternative to MOKA is to use general knowledge engineering methods that have been developed for expert systems across all industries or to use general software development methodologies such as the Rational Unified Process or Agile methods. Languages for KBE Two critical issues for the languages and formalisms used for KBE are: Knowledge-based vs. procedural programming Standardization vs. proprietary Knowledge-based vs. procedural programming A fundamental trade-off identified with knowledge representation in artificial intelligence is between expressive power and computability. As Levesque demonstrated in his classic paper on the topic, the more powerful a knowledge-representation formalism one designs, the closer the formalism will come to the expressive power of first order logic. As Levesque also demonstrated, the closer a language is to First Order Logic, the more probable that it will allow expressions that are undecidable or require exponential processing power to complete. In the implementation of KBE systems, this trade off is reflected in the choice to use powerful knowledge-based environments or more conventional procedural and object-oriented programming environments. Standardization vs. proprietary There is a trade off between using standards such as STEM and vendor- or business-specific proprietary languages. Standardization facilitates knowledge sharing, integration, and re-use. Proprietary formats (such as CATIA) can provide competitive advantage and powerful features beyond current standardization. Genworks GDL, a commercial product whose core is based on the AGPL-licensed Gendl Project, addresses the issue of application longevity by providing a high-level declarative language kernel which is a superset of a standard dialect of the Lisp programming language (ANSI Common Lisp, or CL). Gendl/GDL itself is proposed as a de facto standard for ANSI CL-based KBE languages. In 2006, the Object Management Group released a KBE services RFP document and requested feedback. To date, no OMG specification for KBE exists; however, there is an OMG standard for CAD services. An example of a system-independent language for the development of machine-readable ontologies that is in the KBE domain is Gellish English. See also Knowledge-based systems Knowledge engineering Knowledge management Multidisciplinary design optimization References External links Practical issues of AI (1994) - Switlik, J.M. (based upon ICAD project) McGoey, Paul (2011) A Hitch-hikers Guide to: Knowledge Based Engineering in Aerospace (& other industries) Alcyon Engineering: Introduction to Knowledge Based Engineering A KBE System for the Design of Wind Tunnel Models Using Reusable Knowledge Components ASME Newsletter ASME celebrates 125th Anniversary COE Newsnet 02/07 How Paradigms of Computing Might Relate to KBE COE Newsnet KBE Best Practices - Discussion Forum KE-works knowledge engineering - a company introducing KBE applications to industry - KBE explanatory video Keys to Success with Knowledge-Based Techniques - SAE Paper Number 2008-01-2262 Knowledge Based Engineering across Product Realization - A whitepaper presented on KBE in PLM domain. Knowledge Technologies - a free e-book by Nick Milton that has a chapter describing KBE (Chapter 3, co-authored with G. 
La Rocca from TU Delft) Computer-aided design Knowledge engineering Product lifecycle management Knowledge management
Knowledge-based engineering
[ "Engineering" ]
2,676
[ "Computer-aided design", "Design engineering", "Systems engineering", "Knowledge engineering" ]
3,031,317
https://en.wikipedia.org/wiki/Q0906%2B6930
Q0906+6930 was the most distant known blazar (redshift 5.47 / 12.2 billion light years) at the time of its discovery in July 2004. The engine of the blazar is a supermassive black hole (SMBH) approximately 2 billion times the mass of the Sun (the mass of the Milky Way Galaxy is around 1.5 trillion solar masses). The event horizon volume is on the order of 1,000 times that of the Solar System. It is one of the most massive black holes on record. Distance measurements The "distance" of a faraway galaxy depends on the distance measurement used. With a redshift of 5.47, light from this active galaxy is estimated to have taken around 12.3 billion years to reach Earth. But since this galaxy is receding from Earth at an estimated rate of 285,803 km/s (the speed of light is 299,792 km/s), the present (co-moving) distance to this galaxy is estimated to be around 26 billion light-years (7961 Mpc). Statistics Classification: FSRQ R = 19.9 Power (BL Lac) = 1.4-3.5 External links arXiv preprint of the Astrophysical Journal paper Space.com – Massive Black Hole Stumps Researchers References Q0906+6930: The Highest Redshift Blazar The Astrophysical Journal, volume 610, part 2 (2004), pages L9–L11 Ursa Major Q0906+6930 Blazars Supermassive black holes
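The figures above can be reproduced approximately with standard cosmology tools. A minimal sketch in Python, assuming the astropy package is available; the quoted 285,803 km/s corresponds to the special-relativistic Doppler formula, while the time and distance outputs depend on the adopted cosmological parameters and so will differ slightly from the numbers in the article:

# Rough reproduction of the distance figures quoted for Q0906+6930 (z = 5.47).
# Assumes the astropy package is installed; exact values depend on the chosen cosmology.
from astropy.cosmology import Planck18 as cosmo
from astropy.constants import c

z = 5.47

# special-relativistic Doppler velocity: v/c = ((1+z)^2 - 1) / ((1+z)^2 + 1)
v = c.to("km/s").value * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)   # about 285,800 km/s

lookback = cosmo.lookback_time(z)          # light-travel (lookback) time, roughly 12-13 Gyr
d_comoving = cosmo.comoving_distance(z)    # present-day co-moving distance, roughly 8 Gpc

print(f"recession velocity ~ {v:,.0f} km/s")
print(f"lookback time      ~ {lookback:.1f}")
print(f"comoving distance  ~ {d_comoving:.0f}")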
Q0906+6930
[ "Physics", "Astronomy" ]
339
[ "Black holes", "Galaxy stubs", "Ursa Major", "Unsolved problems in physics", "Supermassive black holes", "Astronomy stubs", "Constellations" ]
3,031,477
https://en.wikipedia.org/wiki/Kalb%E2%80%93Ramond%20field
In theoretical physics in general and string theory in particular, the Kalb–Ramond field (named after Michael Kalb and Pierre Ramond), also known as the Kalb–Ramond B-field or Kalb–Ramond NS–NS B-field, is a quantum field that transforms as a two-form, i.e., an antisymmetric tensor field with two indices. The adjective "NS" reflects the fact that in the RNS formalism, these fields appear in the NS–NS sector in which all vector fermions are anti-periodic. Both uses of the word "NS" refer to André Neveu and John Henry Schwarz, who studied such boundary conditions (the so-called Neveu–Schwarz boundary conditions) and the fields that satisfy them in 1971. Details The Kalb–Ramond field generalizes the electromagnetic potential but it has two indices instead of one. This difference is related to the fact that the electromagnetic potential is integrated over one-dimensional worldlines of particles to obtain one of its contributions to the action while the Kalb–Ramond field must be integrated over the two-dimensional worldsheet of the string. In particular, while the action for a charged particle moving in an electromagnetic potential is given by the worldline integral $q\int A_\mu(x)\, dx^\mu$, that for a string coupled to the Kalb–Ramond field has the form of the worldsheet integral $\int B_{\mu\nu}(x)\, dx^\mu \wedge dx^\nu$ (up to normalization and sign conventions). This term in the action implies that the fundamental string of string theory is a source of the NS–NS B-field, much like charged particles are sources of the electromagnetic field. The Kalb–Ramond field appears, together with the metric tensor and dilaton, as a set of massless excitations of a closed string. See also Curtright field p-form electrodynamics Ramond–Ramond field References String theory Gauge bosons
Kalb–Ramond field
[ "Astronomy" ]
369
[ "String theory", "Astronomical hypotheses" ]
3,031,555
https://en.wikipedia.org/wiki/Goldberger%E2%80%93Wise%20mechanism
In particle physics, the Goldberger–Wise mechanism is a popular mechanism that determines the size of the fifth dimension in Randall–Sundrum models. The mechanism uses a scalar field that propagates throughout the five-dimensional bulk. On each of the branes that end the fifth dimension (frequently referred to as the Planck brane and TeV brane, respectively), there is a potential for this scalar field. The minima of the potentials on the Planck brane and TeV brane are different, which causes the vacuum expectation value of the scalar field to change throughout the fifth dimension. This configuration generates a potential for the radion, causing it to have a vacuum expectation value and a mass. With reasonable values for the scalar potential, the size of the extra dimension is large enough to solve the hierarchy problem. References Physics beyond the Standard Model
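For orientation, the leading-order result of the original Goldberger–Wise analysis can be quoted; this is the standard expression from the literature rather than something stated in the text above, with $v_h$ and $v_v$ denoting the values the bulk scalar is driven to on the Planck and TeV branes, $m$ its bulk mass, $k$ the AdS curvature scale, and $r_c$ the radius of the fifth dimension:

$$ k r_c \pi \;\simeq\; \frac{4 k^2}{m^2}\,\ln\!\left(\frac{v_h}{v_v}\right) $$

A modestly small ratio $m^2/k^2$ together with an order-one logarithm then gives $k r_c \approx 12$, the value needed for the warp factor $e^{-k r_c \pi}$ to reduce the Planck scale to the TeV scale.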
Goldberger–Wise mechanism
[ "Physics" ]
175
[ "Particle physics stubs", "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
3,031,585
https://en.wikipedia.org/wiki/Fuzzy%20sphere
In mathematics, the fuzzy sphere is one of the simplest and most canonical examples of non-commutative geometry. Ordinarily, the functions defined on a sphere form a commuting algebra. A fuzzy sphere differs from an ordinary sphere because the algebra of functions on it is not commutative. It is generated by spherical harmonics whose spin l is at most equal to some j. The terms in the product of two spherical harmonics that involve spherical harmonics with spin exceeding j are simply omitted in the product. This truncation replaces an infinite-dimensional commutative algebra by a $j^2$-dimensional non-commutative algebra. The simplest way to see this sphere is to realize this truncated algebra of functions as a matrix algebra on some finite-dimensional vector space. Take the three j-dimensional square matrices $J_1, J_2, J_3$ that form a basis for the j dimensional irreducible representation of the Lie algebra su(2). They satisfy the relations $[J_a, J_b] = i \varepsilon_{abc} J_c$, where $\varepsilon_{abc}$ is the totally antisymmetric symbol with $\varepsilon_{123} = 1$, and generate via the matrix product the algebra of j dimensional matrices. The value of the su(2) Casimir operator in this representation is $J_1^2 + J_2^2 + J_3^2 = \tfrac{j^2 - 1}{4}\, I$, where $I$ is the j-dimensional identity matrix. Thus, if we define the 'coordinates' $x_a = k J_a$, where r is the radius of the sphere and k is a parameter, related to r and j by $r^2 = k^2 \tfrac{j^2 - 1}{4}$, then the above equation concerning the Casimir operator can be rewritten as $x_1^2 + x_2^2 + x_3^2 = r^2$, which is the usual relation for the coordinates on a sphere of radius r embedded in three dimensional space. One can define an integral on this space by a suitably normalized trace, $\int_{S_N} f \propto \mathrm{Tr}(F)$, where F is the matrix corresponding to the function f. For example, the integral of unity, which gives the surface of the sphere in the commutative case, is here equal to a j-dependent quantity that converges to the value of the surface of the sphere, $4\pi r^2$, if one takes j to infinity. Notes Jens Hoppe, "Membranes and Matrix Models", lectures presented during the summer school on ‘Quantum Field Theory – from a Hamiltonian Point of View’, August 2–9, 2000, John Madore, An introduction to Noncommutative Differential Geometry and its Physical Applications, London Mathematical Society Lecture Note Series. 257, Cambridge University Press 2002 References J. Hoppe, Quantum Theory of a Massless Relativistic Surface and a Two dimensional Bound State Problem. PhD thesis, Massachusetts Institute of Technology, 1982. Mathematical quantization Noncommutative geometry
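The matrix construction above is easy to check numerically. A minimal sketch in Python using NumPy; the truncation size n plays the role of j in the text, and the particular values n = 5 and r = 1 are arbitrary illustrative choices:

# Minimal numerical check of the fuzzy-sphere construction: build the three
# n-dimensional su(2) generators (n plays the role of j in the text) and verify
# the commutation relation, the Casimir value and the sphere relation.
import numpy as np

def su2_generators(n):
    """Spin matrices of the n-dimensional irreducible representation (spin s = (n - 1)/2)."""
    s = (n - 1) / 2
    m = np.arange(s, -s - 1, -1)          # J_3 eigenvalues s, s-1, ..., -s
    Jz = np.diag(m)
    # raising operator: <m+1| J_+ |m> = sqrt(s(s+1) - m(m+1)) on the superdiagonal
    Jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / 2j, Jz

n, r = 5, 1.0                             # illustrative truncation and radius
Jx, Jy, Jz = su2_generators(n)

assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)                      # [J_1, J_2] = i J_3
assert np.allclose(Jx @ Jx + Jy @ Jy + Jz @ Jz,
                   (n**2 - 1) / 4 * np.eye(n))                      # Casimir = (n^2 - 1)/4
k = 2 * r / np.sqrt(n**2 - 1)             # chosen so that r^2 = k^2 (n^2 - 1)/4
xs = [k * J for J in (Jx, Jy, Jz)]
assert np.allclose(sum(x @ x for x in xs), r**2 * np.eye(n))        # x_1^2 + x_2^2 + x_3^2 = r^2
print("fuzzy sphere relations verified for n =", n)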
Fuzzy sphere
[ "Physics" ]
483
[ "Mathematical quantization", "Quantum mechanics" ]
3,031,620
https://en.wikipedia.org/wiki/Hohlraum
In radiation thermodynamics, a hohlraum (; a non-specific German word for a "hollow space", "empty room", or "cavity") is a cavity whose walls are in radiative equilibrium with the radiant energy within the cavity. First proposed by Gustav Kirchhoff in 1860 and used in the study of black-body radiation (hohlraumstrahlung), this idealized cavity can be approximated in practice by a hollow container of any opaque material. The radiation escaping through a small perforation in the wall of such a container will be a good approximation of black-body radiation at the temperature of the interior of the container. Indeed, a hohlraum can even be constructed from cardboard, as shown by Purcell's Black Body Box, a hohlraum demonstrator. In spectroscopy, the Hohlraum effect occurs when an object achieves thermodynamic equilibrium with an enclosing hohlraum. As a consequence of Kirchhoff’s law, everything optically blends together and contrast between the walls and the object effectively disappears. Applications Hohlraums are used in High Energy Density Physics (HEDP) and Inertial Confinement Fusion (ICF) experiments to convert laser energy to thermal x-rays for imploding capsules, heating targets, and generating thermal radiation waves. They may also be used in Nuclear Weapon designs. Inertial confinement fusion The indirect drive approach to inertial confinement fusion is as follows: the fusion fuel capsule is held inside a cylindrical hohlraum. The hohlraum body is manufactured using a high-Z (high atomic number) element, usually gold or uranium. Inside the hohlraum is a fuel capsule containing deuterium and tritium (D-T) fuel. A frozen layer of D-T ice adheres inside the fuel capsule. The fuel capsule wall is synthesized using light elements such as plastic, beryllium, or high density carbon, i.e. diamond. The outer portion of the fuel capsule explodes outward when ablated by the x-rays produced by the hohlraum wall upon irradiation by lasers. Due to Newton's third law, the inner portion of the fuel capsule implodes, causing the D-T fuel to be supercompressed, activating a fusion reaction. The radiation source (e.g., laser) is pointed at the interior of the hohlraum rather than at the fuel capsule itself. The hohlraum absorbs and re-radiates the energy as X-rays, a process known as indirect drive. The advantage to this approach, compared to direct drive, is that high mode structures from the laser spot are smoothed out when the energy is re-radiated from the hohlraum walls. The disadvantage to this approach is that low mode asymmetries are harder to control. It is important to be able to control both high mode and low mode asymmetries to achieve a uniform implosion. The hohlraum walls must have surface roughness less than 1 micron, and hence accurate machining is required during fabrication. Any imperfection of the hohlraum wall during fabrication will cause uneven and non-symmetrical compression of the fuel capsule inside the hohlraum during inertial confinement fusion. Hence imperfection is to be carefully prevented so surface finishing is extremely important, as during ICF laser shots, due to intense pressure and temperature, results are highly susceptible to hohlraum texture roughness. The fuel capsule must be precisely spherical, with texture roughness less than one nanometer, for fusion ignition to start. Otherwise, instability will cause fusion to fizzle. The fuel capsule contains a small fill hole with less than 5 microns diameter to inject the capsule with D-T gas. 
The X-ray intensity around the capsule must be very symmetrical to avoid hydrodynamic instabilities during compression. Earlier designs had radiators at the ends of the hohlraum, but it proved difficult to maintain adequate X-ray symmetry with this geometry. By the end of the 1990s, target physicists developed a new family of designs in which the ion beams are absorbed in the hohlraum walls, so that X-rays are radiated from a large fraction of the solid angle surrounding the capsule. With a judicious choice of absorbing materials, this arrangement, referred to as a "distributed-radiator" target, gives better X-ray symmetry and target gain in simulations than earlier designs. Nuclear weapon design The term hohlraum is also used to describe the casing of a thermonuclear bomb following the Teller-Ulam design. The casing's purpose is to contain and focus the energy of the primary (fission) stage in order to implode the secondary (fusion) stage. Notes and references External links NIF Hohlraum – High resolution picture at Lawrence Livermore National Laboratory. Electromagnetic radiation Inertial confinement fusion
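A rough feel for the numbers behind the thermal x-rays discussed above follows from the Stefan–Boltzmann and Wien laws for a black body. In the short Python sketch below, the 300 eV radiation temperature is an assumed, illustrative value of the kind quoted for ICF experiments, not a figure taken from this article:

# Back-of-the-envelope black-body estimates for an assumed hohlraum radiation
# temperature of 300 eV (illustrative only).
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
K_B_EV = 8.617e-5     # Boltzmann constant, eV per kelvin
WIEN_B = 2.898e-3     # Wien displacement constant, m K

T_eV = 300.0                      # assumed radiation temperature
T_K = T_eV / K_B_EV               # about 3.5e6 K

flux = SIGMA * T_K**4             # black-body flux re-emitted by the walls, W/m^2
peak = WIEN_B / T_K               # wavelength of peak emission, m

print(f"temperature   ~ {T_K:.2e} K")
print(f"wall flux     ~ {flux / 1e4:.1e} W/cm^2")   # roughly 8e14 W/cm^2
print(f"peak emission ~ {peak * 1e9:.2f} nm (soft x-rays)")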
Hohlraum
[ "Physics" ]
1,038
[ "Electromagnetic radiation", "Physical phenomena", "Radiation", "nan" ]
3,031,660
https://en.wikipedia.org/wiki/INT%2013H
INT 13h is shorthand for BIOS interrupt call 13hex, the 20th interrupt vector in an x86-based (IBM PC-descended) computer system. The BIOS typically sets up a real mode interrupt handler at this vector that provides sector-based hard disk and floppy disk read and write services using cylinder-head-sector (CHS) addressing. Modern PC BIOSes also include INT 13h extension functions, originated by IBM and Microsoft in 1992, that provide those same disk access services using 64-bit LBA addressing; with minor additions, these were quasi-standardized by Phoenix Technologies and others as the EDD (Enhanced Disk Drive) BIOS extensions. INT is an x86 instruction that triggers a software interrupt, and 13hex is the interrupt number (as a hexadecimal value) being called. Modern computers come with both BIOS INT 13h and UEFI functionality that provides the same services and more, with the exception of UEFI Class 3 that completely removes CSM thus lacks INT 13h and other interrupts. Typically, UEFI drivers use LBA-addressing instead of CHS-addressing. Overview Under real mode operating systems, such as DOS, calling INT 13h would jump into the computer's ROM-BIOS code for low-level disk services, which would carry out physical sector-based disk read or write operations for the program. In DOS, it serves as the low-level interface for the built-in block device drivers for hard disks and floppy disks. This allows INT 25h and INT 26h to provide absolute disk read/write functions for logical sectors to the FAT file system driver in the DOS kernel, which handles file-related requests through DOS API (INT 21h) functions. Under protected mode operating systems, such as Microsoft Windows NT derivatives (e.g. NT4, 2000, XP, and Server 2003) and Linux with dosemu, the OS intercepts the call and passes it to the operating system's native disk I/O mechanism. Windows 9x and Windows for Workgroups 3.11 also bypass BIOS routines when using 32-bit Disk Access. Besides performing low-level disk access, INT 13h calls and related BIOS data structures also provide information about the types and capacities of disks (or other DASD devices) attached to the system; when a protected-mode OS boots, it may use that information from the BIOS to enumerate disk hardware so that it (the OS) can load and configure appropriate disk I/O drivers. The original BIOS real-mode INT 13h interface supports drives of sizes up to about 8 GB using what is commonly referred to as physical CHS addressing. This limit originates from the hardware interface of the IBM PC/XT disk hardware. The BIOS used the cylinder-head-sector (CHS) address given in the INT 13h call, and transferred it directly to the hardware interface. A lesser limit, about 504 MB, was imposed by the combination of CHS addressing limits used by the BIOS and those used by ATA hard disks, which are dissimilar. When the CHS addressing limits of both the BIOS and ATA are combined (i.e. when they are applied simultaneously), the number of 512-byte sectors that can be addressed represent a total of about 504 MB. The 504 MB limit was overcome using CHS translation, a technique by which the BIOS would simulate a fictitious CHS geometry at the INT 13h interface, while communicating with the ATA drive using its native logical CHS geometry. (By the time the 504 MB barrier was being approached, ATA disks had long before ceased to present their real physical geometry parameters at the external ATA interface.) 
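The arithmetic behind the 504 MB barrier just described, and behind the larger limit of the BIOS CHS interface discussed next, can be illustrated with a short sketch, written here in Python purely for illustration; this is host-side arithmetic, not BIOS code:

# Illustrative arithmetic for the classic CHS limits; geometry values are the
# usual worst-case maxima (1024 cylinders, 63 sectors/track, 512-byte sectors).
SECTOR_BYTES = 512

def chs_to_lba(c, h, s, heads, spt):
    """Standard mapping: LBA = (C * heads + H) * sectors_per_track + (S - 1)."""
    return (c * heads + h) * spt + (s - 1)

def lba_to_chs(lba, heads, spt):
    c, rem = divmod(lba, heads * spt)
    h, s = divmod(rem, spt)
    return c, h, s + 1                          # sector numbering starts at 1

def capacity(cylinders, heads, spt):
    return cylinders * heads * spt * SECTOR_BYTES

print(capacity(1024, 16, 63) / 2**20, "MiB")    # BIOS limits combined with 16 ATA heads: 504 MiB
print(capacity(1024, 256, 63) / 2**30, "GiB")   # BIOS CHS interface alone: 7.875 GiB
print(capacity(1024, 255, 63) / 2**20, "MiB")   # 255-head workaround: 8032.5 MiB

# round-trip sanity check with a translated 255-head geometry
assert lba_to_chs(chs_to_lba(2, 3, 4, 255, 63), 255, 63) == (2, 3, 4)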
Translation allows the BIOS, still using CHS addressing, to effectively address ATA disks with sizes up to 8064 MB, the native capacity of the BIOS CHS interface alone. (The ATA interface has a much larger native CHS addressing capacity, so once the "interference" of the CHS limits of BIOS and ATA was resolved by addressing, only the smaller limitation of the BIOS was significant.) CHS translation is sometimes referred to as logical CHS addressing, but that is actually a misnomer since by the time of this BIOS development, ATA CHS addresses were already logical, not physical. The 8064 MB limit originates from a combination of the register value based calling convention used in the INT 13h interface and the goal of maintaining backward compatibility—dictating that the format or size of CHS addresses passed to INT 13h could not be changed to add more bits to one of the fields, e.g. the Cylinder-number field. This limit uses 1024 cylinders, 256 heads, 63 sectors, and 512 byte blocks, allowing exactly 7.875 GiB of addressing (1024 × 256 × 63). There were briefly a number of BIOSes that offered incompatible versions of this interface—for example, AWARD AT BIOS and AMI 386sx BIOS have been extended to handle up to 4096 cylinders by placing bits 10 and 11 of the cylinder number into bits 6 and 7 of register DH. All versions of MS-DOS (including MS-DOS 7 and Windows 95) have a bug which prevents booting disk drives with 256 heads (register value 0xFF), so many modern BIOSes provide CHS translation mappings with at most 255 (0xFE) heads, thus reducing the total addressable space to exactly 8032.5 MiB (approx 7.844 GiB). To support addressing of even larger disks, an interface known as INT 13h Extensions was introduced by IBM and Microsoft, then later re-published and slightly extended by Phoenix Technologies as part of BIOS Enhanced Disk Drive Services (EDD). It defines new functions within the INT 13h service, all having function numbers greater than 40h, that use 64-bit logical block addressing (LBA), which allows addressing up to 8 ZiB. (An ATA drive can also support 28-bit or 48-bit LBA which allows up to 128 GiB or 128 PiB respectively, assuming a 512-byte sector/block size). This is a "packet" interface, because it uses a pointer to a packet of information rather than the register based calling convention of the original INT 13h interface. This packet is a very simple data structure that contains an interface version, data size, and LBAs. For software backward-compatibility, the extended functions are implemented alongside the original CHS functions, and calls to functions from both sets can be intermixed, even for the same drive, with the caveat that the CHS functions cannot reach past the first 8064 MB of the disk. Some cache drivers flush their buffers when detecting that DOS is bypassed by directly issuing INT 13h from applications. A dummy read via INT 13h can be used as one of several methods to force cache flushing for unknown caches (e.g. before rebooting). AMI BIOSes from around 1990–1991 trash word-unaligned buffers. Some DOS and terminate-and-stay-resident programs clobber interrupt enabling and registers, so PC DOS and MS-DOS install their own filters to prevent this. List of services If the second column is empty then the function may be used both for floppy and hard disk. FD: for floppy disk only. HD: for hard disk only. PS/2: for hard disk on PS/2 system only. EXT: part of the Extensions which were written in the 1990s to support hard drives with more than 8 GB.
: Reset Disk System : Get Status of Last Drive Operation Bit 7=0 for floppy drive, bit 7=1 for fixed drive : Read Sectors From Drive Remarks Register CX contains both the cylinder number (10 bits, possible values are 0 to 1023) and the sector number (6 bits, possible values are 1 to 63). Cylinder and Sector bits are numbered below: CX = ---CH--- ---CL--- cylinder : 76543210 98 sector : 543210 Examples of translation: CX := ( ( cylinder and 255 ) shl 8 ) or ( ( cylinder and 768 ) shr 2 ) or sector; cylinder := ( (CX and $FF00) shr 8 ) or ( (CX and $C0) shl 2) sector := CX and 63; Addressing of Buffer should guarantee that the complete buffer is inside the given segment, i.e. ( BX + size_of_buffer ) <= 10000h. Otherwise the interrupt may fail with some BIOS or hardware versions. Example Assume you want to read 16 sectors (= 2000h bytes) and your buffer starts at memory address 4FF00h. Utilizing memory segmentation, there are different ways to calculate the register values, e.g.: ES = segment = 4F00h BX = offset = 0F00h sum = memory address = 4FF00h would be a good choice because 0F00h + 2000h = 2F00h <= 10000h ES = segment = 4000h BX = offset = FF00h sum = memory address = 4FF00h would not be a good choice because FF00h + 2000h = 11F00h > 10000h Function 02h of interrupt 13h may only read sectors of the first 16,450,560 sectors of your hard drive, to read sectors beyond the 8 GB limit you should use function 42h of Extensions. Another alternate may be DOS interrupt 25h which reads sectors within a partition. Code Example [ORG 7c00h] ; code starts at 7c00h xor ax, ax ; make sure ds is set to 0 mov ds, ax cld ; start putting in values: mov ah, 2h ; int13h function 2 mov al, 63 ; we want to read 63 sectors mov ch, 0 ; from cylinder number 0 mov cl, 2 ; the sector number 2 - second sector (starts from 1, not 0) mov dh, 0 ; head number 0 xor bx, bx mov es, bx ; es should be 0 mov bx, 7e00h ; 512bytes from origin address 7c00h int 13h jmp 7e00h ; jump to the next sector ; to fill this sector and make it bootable: times 510-($-$$) db 0 dw 0AA55h After this code section (which the asm file should start with), you may write code and it will be loaded to memory and executed. Notice how we didn't change dl (the drive). That is because when the computer first loads up, dl is set to the number of the drive that was booted, so assuming we want to read from the drive we booted from, there is no need to change dl. : Write Sectors To Drive : Verify Sectors From Drive : Format Track : Format Track Set Bad Sector Flags : Format Drive Starting at Track : Read Drive Parameters Remarks Logical values of function 08h may/should differ from physical CHS values of function 48h. Result register CX contains both cylinders and sector/track values, see remark of function 02h. : Init Drive Pair Characteristics AH=0Ah: Read Long Sectors From Drive The only difference between this function and function 02h (see above) is that function 0Ah reads 516 bytes per sector instead of only 512. The last 4 bytes contains the Error Correction Code (ECC), a checksum of sector data. : Check Extensions Present : Extended Read Sectors From Drive As already stated with int 13h AH=02h, care must be taken to ensure that the complete buffer is inside the given segment, i.e. ( BX + size_of_buffer ) <= 10000h : Extended Write Sectors to Drive : Extended Read Drive Parameters Remark Physical CHS values of function 48h may/should differ from logical values of function 08h. 
INT 13h AH=4Bh: Get Drive Emulation Type See also INT 10H BIOS interrupt call Cylinder-head-sector INT (x86 instruction) DPMI (DOS Protected Mode Interface) Ralf Brown's Interrupt List BIOS Enhanced Disk Drive Specification References External links BIOS Interrupt 13h Extensions Ralf Brown's comprehensive Interrupt List Norton Guide about int 13h, ah = 00h .. 1ah IBM PC compatibles BIOS Interrupts
INT 13H
[ "Technology" ]
2,590
[ "Interrupts", "Events (computing)" ]
3,031,874
https://en.wikipedia.org/wiki/MW%20DX
MW DX, short for mediumwave DXing, is the hobby of receiving distant mediumwave (also known as AM) radio stations. MW DX is similar to TV and FM DX in that broadcast band (BCB) stations are the reception targets. However, the nature of the lower frequencies (530 – 1710 kHz) used by mediumwave radio stations is very much different from that of the VHF and UHF bands used by FM and TV broadcast stations, and therefore involves different receiving equipment, radio propagation, and reception techniques. Propagation During the daytime, medium and high-powered mediumwave AM radio stations have a normal reception range of about 20 to 250 miles (32 to 400+ km), depending on the transmitter power, location, and the quality of the receiving equipment, including the amount of man-made and natural electromagnetic noise present. Long-distance reception is normally impeded by the D layer of the ionosphere, which during the daylight hours absorbs signals in the mediumwave range. As the sun sets, the D layer weakens, allowing medium wave radio waves from such stations to bounce off the F layer of the ionosphere, producing reliable, long distance reception of (especially) high-powered stations up to about 1,200 miles (2,000 km) away on a nightly basis. Aside from the more or less regular reception of certain high-powered transmitters, variable conditions allow reception of different stations at different times - for example, on one night a medium-powered broadcaster from Cleveland, Ohio may be audible in Duluth, Minnesota, but not on the following night. Much of the hobby consists in trying to receive and log as many of these stations as possible, identifying target stations and frequencies to listen to and log. Near or on the coastlines, trans-oceanic reception is quite common and a favored target of DXers in those areas. Very distant inter-continental DX from stations several thousands of miles away is possible even far inland, but may require exceptionally good conditions and a good receiver and antenna on the listening side. DX stations evaporate from the dial as the sun rises. However, sunrise and sunset ("SRS" and "SSS") periods can provide interesting loggings. MW DX in North America In the United States and Canada, stations on the mediumwave dial are spaced at 10 kHz intervals from 520 to 1710 kHz as prescribed since 1941 by the North American Regional Broadcasting Agreement. The tremendous number of radio stations in this region of the world and limited number of available frequencies means congestion is very common, and DXers may hear two, three, or more stations on the same frequency (especially on Class C "graveyard" frequencies where many lower-powered stations operate). The most powerful stations in the two countries are clear-channel stations which can transmit with 50 kilowatts of power. Examples of stations in this category from the List of clear-channel stations are: WLS in Chicago on 890 kHz, KMOX in St. Louis on 1120 kHz, WSB in Atlanta on 750 kHz, WCCO in Minneapolis on 830 kHz, WWL in New Orleans on 870 kHz, CJBC from Toronto on 860 kHz, WABC in New York City on 770 kHz, WLW in Cincinnati on 700 kHz, WHSQ, 880 kHz in New York City, and WTAM in Cleveland on 1100 kHz, all of which can be heard over much of the United States and Canada east of the Rocky Mountains. In the southern half of the United States, several Mexican stations can be heard. Many of these are called Border blaster stations because they program in English to reach the American market. 
Some of these operate with over 100 kW of power with highly directional antennae aimed northward to avoid interfering in the rest of Mexico. Many can be heard on a similar night-to-night basis. Many of these stations are also treaty allocated clear-channel stations, ensuring that there will be no interference or limited interference on the same frequency. Although some distant listeners may rely on such stations for non-DX purposes, such as to hear a certain talk show or sporting event, DX'ers generally log these stations when they begin the hobby and afterwards pay little attention to them while seeking out new, less powerful and well-heard stations, often with a few kilowatts of power or less, or unusually distant stations. Especially prized in the former category are receptions of distant traveler information service (TIS) stations, operated by the Department of Transportation to give visitors information. These stations typically run at very low powers (limited to 10 watts) and are only intended to cover small areas, but may travel thousands of miles under certain instances. Similar are the tiny radio stations operated by high schools. On the East Coast of the United States, it is not unusual for DX'ers to hear the high-powered European stations, which operate at 9 kHz intervals, rather than the 10 kHz in the United States, helping to reduce co-channel interference from domestic stations, from countries such as Spain and Norway. Stations from Africa and the Middle East are also often heard. The Pacific Coast of the US provides a similar opportunity with stations from Asian countries and Australia / New Zealand although a considerably longer distance must be covered. On both coasts, as well as in the middle portion of the country, "Pan-American" DX from Latin American and Caribbean nations is often sought and logged. The AM expanded band, or "X-Band" as MW DXers often call it (not to be confused with the range of microwave frequencies), runs from 1610 kHz to 1710 kHz. This is a relatively new portion of the mediumwave broadcast spectrum, with the first two applications for frequencies having been granted in 1997. The lower density of stations in this area of the spectrum, as well as a lack of stations with more than 10 kW of power in the United States, has led to many DX'ers taking interest here. MW DX in Europe Stations in Europe often run higher power than American stations, sometimes several hundreds of kilowatts. Synchronous networks are also commonly used, with local transmitter stations often having less of a local identity than those in the United States and Canada. The wide variety of languages spoken over the DX'ing range, from Spanish to Arabic, adds an element of challenge to DXing in the region. Some stations in Europe have taken to Digital Radio Mondiale transmissions, requiring a receiver capable of demodulating such signals, or a computer loaded with special software coupled to the receiver. DX reception of North American stations has been observed on many occasions. CJYQ 930 kHz and VOCM 590 kHz (both from St. John's, Newfoundland and Labrador) are generally the easiest to receive, and their presence is taken as an indication that the reception of more distant stations is possible. North American stations whose frequencies are furthest from the 9 kHz multiples used in Europe are easier to receive, particularly since 24-hour broadcasting is normal in Europe. 
MW DX in Asia In the southern half of the region, China, Japan, Korea (both South and North) and Taiwan stations, some of which operate with over 200 kW of power, may be heard on a similar night-to-night basis. Many of these stations are also clear-channel stations, ensuring that there will be no interference or limited interference on the same frequency. Equipment While any radio covering the mediumwave (AM radio) band can be used for DX purposes, serious DXers generally invest in a higher-quality receiver, and often a specialised indoor tuned box loop or outdoor longwire antenna. At the lower end of the spectrum, a portable radio with a larger-than-normal internal ferrite core antenna designed for long-distance AM radio reception may be used, such as the discontinued GE Superadio, CC Radio, or the Panasonic RF-2200. The Sony ICF-SW7600G and the newer GR model are also excellent for budget-minded MW DXing. More serious DXers may spend much more for a tabletop shortwave communications receiver with good performance on the lower mediumwave frequencies using an external antenna, such as the AOR 7030+, Drake R8/R8A/R8B, Icom R-75, or Palstar R-30. Various models by Hallicrafters, Hammarlund and even home-made models from Heathkit have been popular. In recent years, software-defined radios have become more popular for mediumwave DX. Radios like the Microtelecom Perseus and the Elad FDM-S2 can record the entire mediumwave band to a computer hard drive, which can then be played back and tuned later. With any such receiver, a high-performance loop antenna may be employed, or in the alternative, one or more outdoor longwire Beverage antennas, sometimes many hundreds of meters long. In order to cancel out reception of unwanted stations, some DX listeners employ elaborate phased arrays of multiple Beverage antennas. For trans-Atlantic or trans-Pacific reception, where the target station is on a 9 kHz rather than a 10 kHz multiple or vice versa, receivers with narrow RF filters are useful in rejecting adjacent broadcasts on the listener's own continent. To combat noise, DXers may use an outboard noise attenuation device, or a radio with built-in digital signal processing capabilities. A personal computer with specialized logging software or simply a paper notebook is used to write logs. Recording devices can be used to archive memorable DX moments, or identify hard-to-hear station receptions after the fact. See also AM broadcasting Border blaster Clear-channel station DX communication DX station Ionosphere Skywave List of European medium wave transmitters Medium wave Radio propagation Shortwave radio TV-FM DX References External links An Introduction to Long Distance Medium Wave Listening World Radio TV Handbook - The Bible of International Broadcasting The Medium Wave Circle - The premier club for MW/LW radio enthusiasts DXing.info - News, reports, sound files and logs Hard Core DX National Radio Club Mediumwave Info MWLIST worldwide database of MW and LW stations AMANDX Radio Pages DXMidAmerica NZRDXL MW DX Introduction - from the New Zealand Radio DX League (archived) MW Arctic DX Weblog from Kongsfjord, Norway International Radio Club of America Radio frequency propagation Radio hobbies
MW DX
[ "Physics" ]
2,131
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
3,031,996
https://en.wikipedia.org/wiki/Wine%20fault
A wine fault is a sensory-associated (organoleptic) characteristic of a wine that is unpleasant, and may include elements of taste, smell, or appearance, elements that may arise from a "chemical or a microbial origin", where particular sensory experiences (e.g., an off-odor) might arise from more than one wine fault. Wine faults may result from poor winemaking practices or storage conditions that lead to wine spoilage. In the case of a chemical origin, many compounds causing wine faults are already naturally present in wine, but at insufficient concentrations to be of issue, and in fact may impart positive characters to the wine; however, when the concentration of such compounds exceeds a sensory threshold, they replace or obscure desirable flavors and aromas that the winemaker wants the wine to express. The ultimate result is that the quality of the wine is reduced (less appealing, sometimes undrinkable), with consequent impact on its value (M. Baldy: "The University Wine Course", Third Edition, pp. 37–39, 69–80, 134–140, The Wine Appreciation Guild, 2009). There are many underlying causes of wine faults, including poor hygiene at the winery, excessive or insufficient exposure of the wine to oxygen, excessive or insufficient exposure of the wine to sulfur, overextended maceration of the wine either pre- or post-fermentation, faulty fining, filtering and stabilization of the wine, the use of dirty oak barrels, over-extended barrel aging and the use of poor quality corks. Outside of the winery, other factors within the control of the retailer or end user of the wine can contribute to the perception of flaws in the wine. These include poor storage of the wine that exposes it to excessive heat and temperature fluctuations as well as the use of dirty stemware during wine tasting that can introduce materials or aromas to what was previously a clean and fault-free wine. Differences between flaws and faults In wine tasting, there is a big distinction made between what is considered a flaw and a fault. Wine flaws are minor attributes that depart from what are perceived as normal wine characteristics. These include excessive sulfur dioxide, volatile acidity, Brettanomyces or "Brett aromas" and diacetyl or buttery aromas. The extent to which these aromas or attributes become excessive is dependent on the particular tastes and recognition threshold of the wine taster. Generally, a wine exhibiting these qualities is still considered drinkable by most people. However, some flaws such as volatile acidity and Brettanomyces can be considered a fault when they are in such an excess that they overwhelm other components of the wine. Wine faults are generally major attributes that make a wine undrinkable to most wine tasters. Examples of wine faults include acetaldehyde (except when purposely induced in wines like Sherry and Rancio), ethyl acetate and cork taint. Detecting faults in wine tasting The vast majority of wine faults are detected by the nose and the distinctive aromas that they give off. However, the presence of some wine faults can be detected by visual and taste perceptions. For example, premature oxidation can be noticed by the yellowing and browning of the wine's color. The sign of gas bubbles in wines that are not meant to be sparkling can be a sign of refermentation or malolactic fermentation happening in the bottle.
Unusual breaks in the color of the wine could be a sign of excessive copper, iron or proteins that were not removed during fining or filtering. A wine with an unusual color for its variety or wine region could be a sign of excessive or insufficient maceration as well as poor temperature controls during fermentation. Tactile clues of potential wine faults include the burning, acidic taste associated with volatile acidity that can make a wine seem out of balance. Oxidation The oxidation of wine is perhaps the most common of wine faults, as the presence of oxygen and a catalyst are the only requirements for the process to occur. Oxidation can occur throughout the winemaking process, and even after the wine has been bottled. Anthocyanins, catechins, epicatechins and other phenols present in wine are those most easily oxidised, which leads to a loss of colour, flavour and aroma - sometimes referred to as flattening. In most cases compounds such as sulfur dioxide or erythorbic acid are added to wine by winemakers, which protect the wine from oxidation and also bind with some of the oxidation products to reduce their organoleptic effect. Apart from phenolic oxidation, the ethanol present within wine can also be oxidised into other compounds responsible for flavour and aroma taints. Some wine styles can be oxidised intentionally, as in certain Sherry wines and Vin jaune from the Jura region of France. Acetaldehyde Acetaldehyde is an intermediate product of yeast fermentation; however, it is more commonly associated with ethanol oxidation catalysed by the enzyme ethanol dehydrogenase. Acetaldehyde production is also associated with the presence of surface film forming yeasts and bacteria, such as acetic acid bacteria, which form the compound by the decarboxylation of pyruvate. The sensory threshold for acetaldehyde is 100–125 mg/L. Beyond this level it imparts a sherry type character to the wine which can also be described as green apple, sour and metallic. Acetaldehyde intoxication is also implicated in hangovers. Acetic acid Acetic acid in wine, often referred to as volatile acidity (VA) or vinegar taint, can be contributed by many wine spoilage yeasts and bacteria. This can be from either a by-product of fermentation, or due to the spoilage of finished wine. Acetic acid bacteria, such as those from the genera Acetobacter and Gluconobacter produce high levels of acetic acid. The sensory threshold for acetic acid in wine is >700 mg/L, with concentrations greater than 1.2-1.3 g/L becoming unpleasant. There are different opinions as to what level of volatile acidity is appropriate for higher quality wine. Although too high a concentration is sure to leave an undesirable, 'vinegar' tasting wine, some wine's acetic acid levels are developed to create a more 'complex', desirable taste. The renowned 1947 Cheval Blanc is widely recognized to contain high levels of volatile acidity. Ethyl acetate is formed in wine by the esterification of ethanol and acetic acid. Therefore, wines with high acetic acid levels are more likely to see ethyl acetate formation, but the compound does not contribute to the volatile acidity. It is a common microbial fault produced by wine spoilage yeasts, particularly Pichia anomala or Kloeckera apiculata. High levels of ethyl acetate are also produced by lactic acid bacteria and acetic acid bacteria. Sulfur compounds Sulfur is used as an additive throughout the winemaking process, primarily to stop oxidation as mentioned above but also as antimicrobial agent. 
When managed properly in wine, its presence there is often undetected, however when used recklessly it can contribute to flavour and aroma taints which are very volatile and potent. Sulfur compounds typically have low sensory thresholds. Sulfur dioxide Sulfur dioxide is a common wine additive, used for its antioxidant and preservative properties. When its use is not managed well it can be overadded, with its perception in wine reminiscent of matchsticks, burnt rubber, or mothballs. Wines such as these are often termed sulfitic. Hydrogen sulfide Hydrogen sulfide (H2S) is generally thought to be a metabolic by-product of yeast fermentation in nitrogen limited environments. It is formed when yeast ferments via the sulfate reduction pathway. Fermenting wine is often supplemented with diammonium phosphate (DAP) as a nitrogen source to prevent H2S formation. The sensory threshold for hydrogen sulfide is 8-10 μg/L, with levels above this imparting a distinct rotten egg aroma to the wine. Hydrogen sulfide can further react with wine compounds to form mercaptans and disulfides. Mercaptans Mercaptans (thiols) are produced in wine by the reaction of hydrogen sulfide with other wine components such as ethanol. They can be formed if finished wine is allowed prolonged contact with the lees. This can be prevented by racking the wine. Mercaptans have a very low sensory threshold, around 1.5 μg/L, with levels above causing onion, rubber, and skunk type odours. Note that dimethyl disulfide is formed from the oxidation of methyl mercaptan. Dimethyl sulfide Dimethyl sulfide (DMS) is naturally present in most wines, probably from the breakdown of sulfur containing amino acids. Like ethyl acetate, levels of DMS below the sensory threshold can have a positive effect on flavour, contributing to fruityness, fullness, and complexity. Levels above the sensory threshold of >30 μg/L in white wines and >50 μg/L for red wines, give the wine characteristics of cooked cabbage, canned corn, asparagus or truffles. Environmental Cork taint Cork taint is a wine fault mostly attributed to the compound 2,4,6-trichloroanisole (TCA), although other compounds such as guaiacol, geosmin, 2-methylisoborneol, 1-octen-3-ol, 1-octen-3-one, 2,3,4,6-tetrachloroanisole, pentachloroanisole, and 2,4,6-tribromoanisole are also thought to be involved. TCA most likely originates as a metabolite of mould growth on chlorine-bleached wine corks and barrels. It causes earthy, mouldy, and musty aromas in wine that easily mask the natural fruit aromas, making the wine very unappealing. Wines in this state are often described as "corked". As cork taint has gained a wide reputation as a wine fault, other faults are often mistakenly attributed to it. Heat damage Heat damaged wines are often casually referred to as cooked, which suggests how heat can affect a wine. They are also known as maderized wine, from Madeira wine, which is intentionally exposed to heat. The ideal storage temperature for wine is generally accepted to be 13 °C (55 °F). Wines that are stored at temperatures greatly higher than this will experience an increased aging rate. Wines exposed to extreme temperatures will thermally expand, and may even push up between the cork and bottle and leak from the top. When opening a bottle of wine, if a trace of wine is visible along the length of the cork, the cork is partially pushed out of the bottle, or wine is visible on the top of the cork while it is still in the bottle, it has most likely been heat damaged. 
Heat damaged wines often become oxidized, and red wines may take on a brick color. Even if the temperatures do not reach extremes, temperature variation alone can also damage bottled wine through oxidation. All corks allow some leakage of air (hence old wines become increasingly oxidized), and temperature fluctuations will vary the pressure differential between the inside and outside of the bottle and will act to "pump" air into the bottle at a faster rate than will occur at any temperature strictly maintained. Reputedly, heat damage is the most widespread and common problem found in wines. It often goes unnoticed because of the prevalence of the problem, consumers don't know it's possible, and most often would just chalk the problem up to poor quality, or other factors. Lightstrike Lightstruck wines are those that have had excessive exposure to ultraviolet light, particularly in the range 325 to 450 nm. Very delicate wines, such as Champagnes, are generally worst affected, with the fault causing a wet cardboard or wet wool type flavour and aroma. Red wines rarely become lightstruck because of the phenolic compounds present within the wine that protect it. Lightstrike is thought to be caused by sulfur compounds such as dimethyl sulfide. In France lightstrike is known as "goût de lumière", which translates to a taste of light. The fault explains why wines are generally bottled in coloured glass, which blocks the ultraviolet light, and why wine should be stored in dark environments. Ladybird (pyrazine) taint Some insects present in the grapes at harvest inevitably end up in the press and for the most part are inoffensive. Others, notably the Asian lady beetle, release unpleasant-smelling nitrogen heterocycles as a defensive mechanism when disturbed. In sufficient quantities, these can affect the wine's odor and taste. With an olfactory detection threshold of a few ppb, the principal active compound is isopropyl methoxypyrazine; this molecule is perceived as rancid peanut butter, green bell pepper, urine, or simply bitter. This is also a naturally occurring compound in Sauvignon grapes, and so pyrazine taint has been known to make Rieslings taste like Sauvignon blanc. Microbiological Brettanomyces (Dekkera) The yeast Brettanomyces produces an array of metabolites when growing in wine, some of which are volatile phenolic compounds. Together these compounds are often referred to as phenolic taint, "Brettanomyces character", or simply "Brett". The main constituents are listed below, with their sensory threshold and common sensory descriptors: 4-ethylphenol (>140 μg/L): Band-aids, barnyard, horse stable, antiseptic 4-ethylguaiacol (>600 μg/L): Bacon, spice, cloves, smoky isovaleric acid: Sweaty, cheese, rancidity Geosmin Geosmin is a compound with a very distinct earthy, musty, beetroot, even turnip flavour and aroma and has an extremely low sensory threshold of down to 10 parts per trillion. Its presence in wine is usually derived as metabolite from the growth of filamentous actinomycetes such as Streptomyces, and moulds such as Botrytis cinerea and Penicillium expansum, on grapes. Wines affected by but not attributed to geosmins are often thought to have earthy properties due to terroir. The geosmin fault occurs worldwide and has been found in recent vintages of red wines from Beaujolais, Bordeaux, Burgundy and the Loire in France. Geosmin is also thought to be a contributing factor in cork taint. 
Lactic acid bacteria Lactic acid bacteria have a useful role in winemaking converting malic acid to lactic acid in malolactic fermentation. However, after this function has completed, the bacteria may still be present within the wine, where they can metabolise other compounds and produce wine faults. Wines that have not undergone malolactic fermentation may be contaminated with lactic acid bacteria, leading to refermentation of the wine with it becoming turbid, swampy, and slightly effervescent or spritzy. This can be avoided by sterile filtering wine directly before bottling. Lactic acid bacteria can also be responsible for other wine faults such as those below. Bitterness taint Bitterness taint or amertume is rather uncommon and is produced by certain strains of bacteria from the genera Pediococcus, Lactobacillus, and Oenococcus. It begins by the degradation of glycerol, a compound naturally found in wine at levels of 5-8 g/L, via a dehydratase enzyme to 3-hydroxypropionaldehyde. During ageing this is further dehydrated to acrolein which reacts with the anthocyanins and other phenols present within the wine to form the taint. As red wines contain high levels of anthocyanins they are generally more susceptible. Diacetyl Diacetyl in wine is produced by lactic acid bacteria, mainly Oenococcus oeni. In low levels it can impart positive nutty or caramel characters, however at levels above 5 mg/L it creates an intense buttery or butterscotch flavour, where it is perceived as a flaw. The sensory threshold for the compound can vary depending on the levels of certain wine components, such as sulfur dioxide. It can be produced as a metabolite of citric acid when all of the malic acid has been consumed. Diacetyl rarely taints wine to levels where it becomes undrinkable. Geranium taint Geranium taint, as the name suggests, is a flavour and aroma taint in wine reminiscent of geranium leaves. The compound responsible is 2-ethoxyhexa-3,5-diene, which has a low sensory threshold concentration of 1 ng/L. In wine it is formed during the metabolism of potassium sorbate by lactic acid bacteria. Potassium sorbate is sometimes added to wine as a preservative against yeast, however its use is generally kept to a minimum due to the possibility of the taint developing. The production of the taint begins with the conversion of sorbic acid to the alcohol sorbinol. The alcohol is then isomerised in the presence of acid to 3,5-hexadiene-2-ol, which is then esterified with ethanol to form 2-ethoxy-3,5-hexadiene. As ethanol is necessary for the conversion, the geranium taint is not usually found in must. Mannitol Mannitol is a sugar alcohol, and in wine it is produced by heterofermentative lactic acid bacteria, such as Lactobacillus brevis, by the reduction of fructose. Its perception is often complicated as it generally exists in wine alongside other faults, but it is usually described as viscous, ester-like combined with a sweet and irritating finish. Mannitol is usually produced in wines that undergo malolactic fermentation with a high level of residual sugars still present. Expert winemakers oftentimes add small amounts of sulfur dioxide during the crushing step to reduce early bacterial growth. Ropiness Ropiness is manifested as an increase in viscosity and a slimey or fatty mouthfeel of a wine. In France the fault is known as "graisse", which translates to fat. 
The problem stems from the production of dextrins and polysaccharides by certain lactic acid bacteria, particularly of the genera Leuconostoc and Pediococcus. Mousiness Mousiness is a wine fault most often attributed to Brettanomyces but can also originate from the lactic acid bacteria Lactobacillus brevis, Lactobacillus fermentum, and Lactobacillus hilgardii, and hence can occur in malolactic fermentation. The compounds responsible are lysine derivatives, mainly: 2-acetyl-3,4,5,6-tetrahydropyridine 2-acetyl-1,4,5,6-tetrahydropyridine 2-ethyltetrahydropyridine 2-acetyl-1-pyrroline The taints are not volatile at the pH of wine, and therefore not obvious as an aroma. However, when mixed with the slightly basic pH of saliva they can become very apparent on the palate, especially at the back of the mouth, as mouse cage or mouse urine. Refermentation Refermentation, sometimes called secondary fermentation, is caused by yeasts refermenting the residual sugar present within bottled wine. It occurs when sweet wines are bottled in non-sterile conditions, allowing the presence of microorganisms. The most common yeast to referment wine is the standard wine fermentation yeast Saccharomyces cerevisiae, but the fault has also been attributed to Schizosaccharomyces pombe and Zygosaccharomyces bailii. The main issues associated with the fault include turbidity (from yeast biomass production), excess ethanol production (which may violate labelling laws), slight carbonation, and some coarse odours. Refermentation can be prevented by bottling wines dry (with residual sugar levels <1.0 g/L), sterile filtering wine prior to bottling, or adding preservative chemicals such as dimethyl dicarbonate. The Portuguese wine style known as "vinhos verdes" used to rely on this secondary fermentation in bottle to impart a slight spritziness to the wine, but now usually uses artificial carbonation. Bunch rots Organisms responsible for bunch rot of grape berries are filamentous fungi, the most common of these being Botrytis cinerea (gray mold). However, there is a range of other fungi responsible for the rotting of grapes, such as Aspergillus spp., Penicillium spp., and fungi found in subtropical climates (e.g., Colletotrichum spp. (ripe rot) and Greeneria uvicola (bitter rot)). A further group more commonly associated with diseases of the vegetative tissues of the vine can also infect grape berries (e.g., Botryosphaeriaceae, Phomopsis viticola). Compounds found in bunch rot-affected grapes and wine are typically described as having mushroom, earthy odors and include geosmin, 2-methylisoborneol, 1-octen-3-ol, 2-octen-1-ol, fenchol and fenchone. See also Oenology Acids in wine Browning in red wine Storage of wine References External links Organoleptic defects in wine (PDF document) How to Spot Faulty Wine, from The Wine Doctor, in Enobytes Wine Online Oenology Wine chemistry Wine tasting Product expiration
Wine fault
[ "Chemistry" ]
4,661
[ "Wine chemistry", "Alcohol chemistry" ]
3,032,011
https://en.wikipedia.org/wiki/Self-loading%20rifle
A self-loading rifle or auto-loading rifle is a rifle with an action using a portion of the energy of each cartridge fired to load another cartridge. Self-loading pistols are similar, but intended to be held and fired by a single hand, while rifles are designed to be held with both hands and fired from the shoulder. Evolution Early breech-loading firearms were single-shot devices holding a single cartridge. When that cartridge had been fired, the person using the firearm would remove the empty cartridge, find another cartridge from a pocket or other carrying apparatus, and load that cartridge into the firearm chamber before another shot could be fired. Later repeating rifles and pistols were equipped with a magazine holding several cartridges with a spring to push those cartridges into position to be loaded by manually operating the action of the firearm—as by a lever, bolt, or pump mechanism—thus avoiding the procedure of locating and manually positioning each new cartridge. Developed starting in the early 1900s, self-loading firearms avoid manual operation of the action by using the energy of the cartridge being fired to operate the action, so the shooter may fire additional cartridges without manually operating the firearm action until the magazine is empty. Variations Self-loading rifles include: Semi-automatic rifle, a type of firearm which fires a single shot with each pull of the trigger, and uses the energy of that shot to chamber the next round. Examples: Remington Model 8 Winchester Model 1907 Automatic rifle, a firearm that automatically loads and fires rounds, using the energy of each fired cartridge, as long as its trigger is held down. Examples: Lewis gun Bren light machine gun Selective-fire rifle, e.g. an assault rifle, which is capable of switching between semi-automatic, fully automatic and/or burst fire modes of operation. Examples: M16 rifle AK-47 See also Glossary of firearms terms Firearm components Firearm terminology Glossary of military abbreviations List of established military terms References Firearm terminology Firearm components Firearms Rifles
Self-loading rifle
[ "Technology" ]
412
[ "Firearm components", "Components" ]
3,032,068
https://en.wikipedia.org/wiki/Mayluu-Suu
Mayluu-Suu (Mayli-Say) is a mining town in the Jalal-Abad Region of southern Kyrgyzstan. It is a city of regional significance, not part of a district. Its resident population was 25,892 in 2021. It has been economically depressed since the fall of the Soviet Union. From 1946 to 1968 the Zapadnyi Mining and Chemical Combine in Mayluu-Suu mined and processed uranium ore for the Soviet nuclear program. Uranium mining and processing is no longer economical, leaving much of the local population of about 20,000 without meaningful work. The town was classified as one of the Soviet government's secret cities, officially known only as "Mailbox 200". Mayluu-Suu consists of the town proper, the urban-type settlement Kök-Tash and the villages Sary-Bee, Kögoy and Kara-Jygach. Population Uranium mills The USSR left 23 unstable uranium tailings pits on the tectonically unstable hillside above the town. A breached tailings dam in April 1958 released radioactive tailings into the river Mayluu-Suu. In 1994, a landslide blocked the river, which flowed over its banks and flooded another waste reservoir. A flood caused by a mudslide nearly submerged a tailings pit in 2002. Mayluu-Suu was found to be one of the 10 most polluted sites in the world in a study published in 2006 by the Blacksmith Institute. The World Bank approved a US$5 million grant to reclaim the tailings pits in 2004, and approved an additional $1 million grant for the project in 2011. However, grave threats still persist. References External links Webpage of Blacksmith Institute about Mayluu-Suu, archived from the original Latest report on Pure Earth (formerly Blacksmith Institute), accessed 2021-02-19 Populated places in Jalal-Abad Region Uranium mines in the Soviet Union Mines in Kyrgyzstan Radioactively contaminated areas
Mayluu-Suu
[ "Chemistry", "Technology" ]
412
[ "Radioactively contaminated areas", "Soil contamination", "Radioactive contamination" ]
3,032,314
https://en.wikipedia.org/wiki/History%20of%20the%20camera
The history of the camera began even before the introduction of photography. Cameras evolved from the camera obscura through many generations of photographic technology (daguerreotypes, calotypes, dry plates, film) to the modern day with digital cameras and camera phones. Camera obscura (pre-17th century) The camera obscura (from the Latin for 'dark room') is a natural optical phenomenon and precursor of the photographic camera. It projects an inverted image (flipped left to right and upside down) of a scene from the other side of a screen or wall through a small aperture onto a surface opposite the opening. The earliest documented explanation of this principle comes from the Chinese philosopher Mozi, who correctly argued that the inversion of the camera obscura image is a result of light traveling in straight lines from its source. From around 1550, lenses were used in the openings of walls or closed window shutters in dark rooms to project images, aiding in drawing. By the late 17th century, portable camera obscura devices in tents and boxes had come into use as drawing tools. The images produced by these early cameras could only be preserved by manually tracing them, as no photographic processes had been invented yet. The first cameras were large enough to accommodate one or more people, and over time they evolved into increasingly compact models. By the time of Niépce, portable box camera obscurae suitable for photography were widely available. Johann Zahn envisioned the first camera that was small and portable enough for practical photography in 1685, but it took nearly 150 years for such an application to become possible. Ibn al-Haytham (965–1040), an Arab physicist also known as Alhazen, made significant contributions to the understanding of the camera obscura, conducting experiments with light in a darkened room with a small opening. He is often credited with the invention of the pinhole camera. He also provided the first correct analysis of the camera obscura, offering the first geometrical and quantitative descriptions of the phenomenon, and was the first to utilize a screen in a dark room for image projection from a hole in the surface. He was the first to understand the relationship between the focal point and the pinhole, and was the pioneer of early afterimage experiments. The work of Ibn al-Haytham on optics, circulated through Latin translations, played a significant role in inspiring notable individuals such as Witelo, John Peckham, Roger Bacon, Leonardo da Vinci, René Descartes, and Johannes Kepler. Early photographic camera (18th–19th centuries) Before the development of the photographic camera, it had been known for hundreds of years that some substances, such as silver salts, darkened when exposed to sunlight. In a series of experiments published in 1727, the German scientist Johann Heinrich Schulze demonstrated that the darkening of the salts was due to light alone, and not influenced by heat or exposure to air. The Swedish chemist Carl Wilhelm Scheele showed in 1777 that silver chloride was especially susceptible to darkening from light exposure, and that once darkened, it becomes insoluble in an ammonia solution. The first person to use this chemistry to create images was Thomas Wedgwood. 
To create images, Wedgwood placed items, such as leaves and insect wings, on ceramic pots coated with silver nitrate, and exposed the set-up to light. These images weren't permanent, however, as Wedgwood didn't employ a fixing mechanism. He ultimately failed at his goal of using the process to create fixed images created by a camera obscura. The first permanent photograph of a camera image was made in 1826 by Nicéphore Niépce using a sliding wooden box camera made by Charles and Vincent Chevalier in Paris. Niépce had been experimenting with ways to fix the images of a camera obscura since 1816. The photograph Niépce succeeded in creating shows the view from his window. It was made using an 8-hour exposure on pewter coated with bitumen. Niépce called his process "heliography". Niépce corresponded with the inventor Louis Daguerre, and the pair entered into a partnership to improve the heliographic process. Niépce had experimented further with other chemicals, to improve contrast in his heliographs. Daguerre contributed an improved camera obscura design, but the partnership ended when Niépce died in 1833. Daguerre succeeded in developing a high-contrast and extremely sharp image by exposing on a plate coated with silver iodide, and exposing this plate again to mercury vapor. By 1837, he was able to fix the images with a common salt solution. He called this process Daguerreotype, and tried unsuccessfully for a couple of years to commercialize it. Eventually, with help of the scientist and politician François Arago, the French government acquired Daguerre's process for public release. In exchange, pensions were provided to Daguerre as well as Niépce's son, Isidore. In the 1830s, the English scientist William Henry Fox Talbot independently invented a process to capture camera images using silver salts. Although dismayed that Daguerre had beaten him to the announcement of photography, he submitted a pamphlet to the Royal Institution entitled Some Account of the Art of Photogenic Drawing on 31 Jan 1839, which was the first published description of photography. Within two years, Talbot developed a two-step process for creating photographs on paper, which he called calotypes. The calotype process was the first to utilize negative printing, which reverses all values in the reproduction process – black shows up as white and vice versa. Negative printing allows, in principle, an unlimited number of positive prints to be made from the original negative. The Calotype process also introduced the ability for a printmaker to alter the resulting image through retouching of the negative.Calotypes were never as popular or widespread as daguerreotypes, owing mainly to the fact that the latter produced sharper details. However, because daguerreotypes only produce a direct positive print, no duplicates can be made. It is the two-step negative/positive process that formed the basis for modern photography. The first photographic camera developed for commercial manufacture was a daguerreotype camera, built by Alphonse Giroux in 1839. Giroux signed a contract with Daguerre and Isidore Niépce to produce the cameras in France, with each device and accessories costing 400 francs. The camera was a double-box design, with a landscape lens fitted to the outer box, and a holder for a ground glass focusing screen and image plate on the inner box. By sliding the inner box, objects at various distances could be brought to as sharp a focus as desired. 
After a satisfactory image had been focused on the screen, the screen was replaced with a sensitized plate. A knurled wheel controlled a copper flap in front of the lens, which functioned as a shutter. The early daguerreotype cameras required long exposure times, which in 1839 could be from 5 to 30 minutes. After the introduction of the Giroux daguerreotype camera, other manufacturers quickly produced improved variations. Charles Chevalier, who had earlier provided Niépce with lenses, created in 1841 a double-box camera using a half-sized plate for imaging. Chevalier's camera had a hinged bed, allowing for half of the bed to fold onto the back of the nested box. In addition to having increased portability, the camera had a faster lens, bringing exposure times down to 3 minutes, and a prism at the front of the lens, which allowed the image to be laterally correct. Another French design emerged in 1841, created by Marc Antoine Gaudin. The Nouvel Appareil Gaudin camera had a metal disc with three differently-sized holes mounted on the front of the lens. Rotating to a different hole effectively provided variable f-stops, allowing different amounts of light into the camera. Instead of using nested boxes to focus, the Gaudin camera used nested brass tubes. In Germany, Peter Friedrich Voigtländer designed an all-metal camera with a conical shape that produced circular pictures of about 3 inches in diameter. The distinguishing characteristic of the Voigtländer camera was its use of a lens designed by Joseph Petzval. The Petzval lens was nearly 30 times faster than any other lens of the period, and was the first to be made specifically for portraiture. Its design was the most widely used for portraits until Carl Zeiss introduced the anastigmat lens in 1889. Within a decade of being introduced in America, 3 general forms of camera were in popular use: the American- or chamfered-box camera, the Robert's-type camera or "Boston box", and the Lewis-type camera. The American-box camera had beveled edges at the front and rear, and an opening in the rear where the formed image could be viewed on ground glass. The top of the camera had hinged doors for placing photographic plates. Inside there was one available slot for distant objects, and another slot in the back for close-ups. The lens was focused either by sliding or with a rack and pinion mechanism. The Robert's-type cameras were similar to the American-box, except for having a knob-fronted worm gear on the front of the camera, which moved the back box for focusing. Many Robert's-type cameras allowed focusing directly on the lens mount. The third popular daguerreotype camera in America was the Lewis-type, introduced in 1851, which utilized a bellows for focusing. The main body of the Lewis-type camera was mounted on the front box, but the rear section was slotted into the bed for easy sliding. Once focused, a set screw was tightened to hold the rear section in place. Having the bellows in the middle of the body facilitated making a second, in-camera copy of the original image. Daguerreotype cameras formed images on silvered copper plates and images were only able to develop with mercury vapor. The earliest daguerreotype cameras required several minutes to half an hour to expose images on the plates. By 1840, exposure times were reduced to just a few seconds owing to improvements in the chemical preparation and development processes, and to advances in lens design. 
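The references above to lenses being so many times "faster" and to discs of differently sized holes acting as f-stops follow directly from the standard relationship between focal length, aperture diameter, and light-gathering power. The short sketch below is only an illustration of that arithmetic; the numerical values are hypothetical and are not taken from any specific camera or lens described in this article.

```python
# Illustrative arithmetic only: the relationship N = focal length / aperture
# diameter and the rule that light gathered per unit time scales as 1/N^2 are
# standard optics; the numbers below are hypothetical, not taken from any
# camera or lens described in this article.
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    return focal_length_mm / aperture_diameter_mm

def relative_speed(n_slow: float, n_fast: float) -> float:
    """How many times more light the faster lens gathers per unit time."""
    return (n_slow / n_fast) ** 2

# A hypothetical 150 mm lens with a 47 mm aperture is roughly f/3.2:
print(round(f_number(150, 47), 1))        # 3.2
# Going from about f/17 to about f/3.2 gathers roughly 28x more light,
# the kind of gap described above as "nearly 30 times faster".
print(round(relative_speed(17, 3.2), 1))  # 28.2
```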
American daguerreotypists introduced manufactured plates in mass production, and plate sizes became internationally standardized: whole plate (6.5 × 8.5 inches), three-quarter plate (5.5 × 7 1/8 inches), half plate (4.5 × 5.5 inches), quarter plate (3.25 × 4.25 inches), sixth plate (2.75 × 3.25 inches), and ninth plate (2 × 2.5 inches). Plates were often cut to fit cases and jewelry with circular and oval shapes. Larger plates were produced, with sizes such as 9 × 13 inches ("double-whole" plate), or 13.5 × 16.5 inches (Southworth & Hawes' plate). The collodion wet plate process that gradually replaced the daguerreotype during the 1850s required photographers to coat and sensitize thin glass or iron plates shortly before use and expose them in the camera while still wet. Early wet plate cameras were very simple and little different from Daguerreotype cameras, but more sophisticated designs eventually appeared. The Dubroni of 1864 allowed the sensitizing and developing of the plates to be carried out inside the camera itself rather than in a separate darkroom. Other cameras were fitted with multiple lenses for photographing several small portraits on a single larger plate, useful when making cartes de visite. It was during the wet plate era that the use of bellows for focusing became widespread, making the bulkier and less easily adjusted nested box design obsolete. For many years, exposure times were long enough that the photographer simply removed the lens cap, counted off the number of seconds (or minutes) estimated to be required by the lighting conditions, then replaced the cap. As more sensitive photographic materials became available, cameras began to incorporate mechanical shutter mechanisms that allowed very short and accurately timed exposures to be made. The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1889. His first camera, which he called the "Kodak", was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras. Films also made possible capture of motion (cinematography) establishing the movie industry by the end of the 19th century. Early fixed images The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce, using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a wooden box camera made by Parisian opticians Charles and Vincent Chevalier, to experiment with photography on surfaces thinly coated with Bitumen of Judea. The bitumen slowly hardened in the brightest areas of the image. The unhardened bitumen was then dissolved away. One of those photographs has survived. 
Daguerreotypes and calotypes After Niépce's death in 1833, his partner Louis Daguerre continued to experiment and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839. Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many minutes as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard. Dry plates Collodion dry plates had been available since 1857, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally made so-called "instantaneous" snapshot exposures practical. For the first time, a tripod or other support was no longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking the picture. The ranks of amateur photographers swelled and informal "candid" portraits became popular. There was a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box cameras, and even "detective cameras" disguised as pocket watches, hats, or other objects. The short exposure times that made candid photography possible also necessitated another innovation, the mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the end of the 19th century. Invention of photographic film The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1888–1889. His first camera, which he called the "Kodak", was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras. In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models remained on sale until the 1960s. Film also allowed the movie camera to develop from an expensive toy to a practical commercial tool. 
Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also available, as were backs that enabled rollfilm cameras to use plates. Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until the end of the 20th century when electronic photography replaced them. 35 mm A number of manufacturers started to use 35 mm film for still photography between 1905 and 1913. The first 35 mm cameras available to the public, and reaching significant numbers in sales were the Tourist Multiple, in 1913, and the Simplex, in 1914. Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I. It wasn't until after World War I that Leica commercialized their first 35 mm cameras. Leitz test-marketed the design between 1923 and 1924, receiving enough positive feedback that the camera was put into production as the Leica I (for Leitz camera) in 1925. The Leica's immediate popularity spawned several of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras. Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of the inexpensive Argus A and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3. Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3 was discontinued in 1966. The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere. TLRs and SLRs The first practical reflex camera was the Franke & Heidecke Rolleiflex medium format TLR of 1928. Though both single- and twin-lens reflex cameras had been available for decades, they were too bulky to achieve much popularity. The Rolleiflex, however, was sufficiently compact to achieve widespread popularity and the medium-format TLR design became popular for both high- and low-end cameras. A similar revolution in SLR design began in 1933 with the introduction of the Ihagee Exakta, a compact SLR which used 127 rollfilm. This was followed three years later by the first Western SLR to use 135 film (otherwise known as 35 mm film), the Kine Exakta (World's first true 35 mm SLR was Soviet "Sport" camera, marketed several months before Kine Exakta, though "Sport" used its own film cartridge). 
The 35 mm SLR design gained immediate popularity and there was an explosion of new models and innovative features after World War II. There were also a few 35 mm TLRs, the best-known of which was the Contaflex of 1935, but for the most part these met with little success. The first major post-war SLR innovation was the eye-level viewfinder, which first appeared on the Hungarian Duflex in 1947 and was refined in 1948 with the Contax S, the first camera to use a pentaprism. Prior to this, all SLRs were equipped with waist-level focusing screens. The Duflex was also the first SLR with an instant-return mirror, which prevented the viewfinder from being blacked out after each exposure. This same time period also saw the introduction of the Hasselblad 1600F, which set the standard for medium format SLRs for decades. In 1952 the Asahi Optical Company (which later became well known for its Pentax cameras) introduced the first Japanese SLR using 135 film, the Asahiflex. Several other Japanese camera makers also entered the SLR market in the 1950s, including Canon, Yashica, and Nikon. Nikon's entry, the Nikon F, had a full line of interchangeable components and accessories and is generally regarded as the first Japanese system camera. It was the F, along with the earlier S series of rangefinder cameras, that helped establish Nikon's reputation as a maker of professional-quality equipment and one of the world's best-known brands. Instant cameras While conventional cameras were becoming more refined and sophisticated, an entirely new type of camera appeared on the market in 1949. This was the Polaroid Model 95, the world's first viable instant-picture camera. Known as a Land Camera after its inventor, Edwin Land, it produced a finished positive print in about a minute. The later, inexpensive Polaroid Swinger of 1965 was a huge success and remains one of the top-selling cameras of all time. Automation In 1936, Albert Einstein and Gustav Bucky designed one of the first automatic cameras, which used an electric eye to determine aperture and exposure. The first production camera to feature automatic exposure was the selenium light meter-equipped, fully automatic Super Kodak Six-20 pack of 1938, but its extremely high price (for the time) of $225 kept it from achieving any degree of success. By the 1960s, however, low-cost electronic components were commonplace and cameras equipped with light meters and automatic exposure systems became increasingly widespread. The next technological advance came in 1960, when the German Mec 16 SB subminiature became the first camera to place the light meter behind the lens for more accurate metering. However, through-the-lens metering ultimately became a feature more commonly found on SLRs than other types of camera; the first SLR equipped with a TTL system was the Topcon RE Super of 1962. Digital cameras Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print, or share photos, and are commonly found on mobile phones. Digital imaging technology The first semiconductor image sensor was the CCD, invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. 
As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting. The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993. Early digital camera prototypes The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the extremely complex and expensive method of de-orbit and airborne retrieval of film canisters. Technology was pushed to skip these steps through the use of in-satellite developing and electronic scanning of the film for direct transmission to the ground. The amount of film was still a major limitation, and this was overcome and greatly simplified by the push to develop an electronic image capturing array that could be used instead of film. The first electronic imaging satellite was the KH-11 launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of (0.64 megapixels). At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt Szilagyi filed for a patent on "All Solid State Radiation Imagers" on 6 September 1968 and constructed a flat-screen target for receiving and storing an optical image on a matrix composed of an array of photodiodes connected to a capacitor to form an array of two terminal devices connected in rows and columns. Their US patent was granted on 10 November 1970. Texas Instruments engineer Willis Adcock designed a filmless camera that was not digital and applied for a patent in 1972, but it is not known whether it was ever built. The Cromemco Cyclops, introduced as a hobbyist construction project in 1975, was the first digital camera to be interfaced to a microcomputer. Its image sensor was a modified metal–oxide–semiconductor (MOS) dynamic RAM (DRAM) memory chip. The first recorded attempt at building a self-contained digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak. It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973. The camera weighed 8 pounds (3.6 kg), recorded black-and-white images to a compact cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production. Analog electronic cameras Handheld electronic cameras, in the sense of a device meant to be carried and used as a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch "video floppy". 
In essence, it was a video movie camera that recorded single frames, 50 per disk in field mode, and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions. Analog electronic cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shimbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality, affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The "video floppy" disks later had several reader devices available for viewing on a screen but were never standardized as a computer drive. The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the 1989 Tiananmen Square protests and the first Gulf War in 1991. US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real-time air-to-sea surveillance system. The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to that of film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks. Silicon Film, a proposed digital sensor cartridge for film cameras that would allow 35 mm cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work as a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder in the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film's parent company filed for bankruptcy in 2001. Early true digital cameras By the late 1970s, the technology required to produce truly commercial digital cameras existed. The first true portable digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 2 MB SRAM (static RAM) memory card that used a battery to keep the data in memory. This camera was never marketed to the public. The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987, though there is not extensive documentation of its sale. The first portable digital camera that was actually marketed commercially was sold in December 1989 in Japan, the DS-X by Fuji. The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990. It was originally a commercial failure because it was black-and-white, low in resolution, and cost nearly $1,000. 
It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download. Digital SLRs (DSLRs) Nikon had been interested in digital photography since the mid-1980s. In July 1986, at Photokina, Nikon introduced an operational prototype of the first SLR-type digital camera (Still Video Camera), manufactured by Panasonic. The Nikon SVC was built around a 2/3-inch charge-coupled device sensor with 300,000 pixels. The storage medium, a magnetic floppy disk inside the camera, allowed 25 or 50 black-and-white images to be recorded, depending on the definition. In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. The Kodak DCS was the first commercially available Digital SLR (DSLR). It used a 1.3 megapixel sensor, had a bulky external digital storage system, and was priced at $13,000. When the Kodak DCS-200 arrived, the original Kodak DCS was dubbed the Kodak DCS-100. The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10, developed by a team led by Hiroyuki Suetaka in 1995. The first camera to use CompactFlash was the Kodak DC-25 in 1996. The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995. In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a splitter and three independent CCDs. This combination delivered 1.75M pixels. The benefit of using an SLR base was the ability to use any existing Minolta AF mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely from the ground up by a major manufacturer, and at a cost of under $6,000 at introduction was affordable by professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned. Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into different categories: Compact Digital Still Cameras, Bridge Cameras, Mirrorless Compacts, and Digital SLRs. Since 2003, digital cameras have outsold film cameras and Kodak announced in January 2004 that they would no longer sell Kodak-branded film cameras in the developed world – and in 2012 filed for bankruptcy after struggling to adapt to the changing industry. 
The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication. One of the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough to enable the widespread adoption of camera phones. Smartphones now routinely include high resolution digital cameras. See also History of photography Photographic lens design Movie camera List of photographs considered the most important References External links The Digital Camera Museum, with history section Cameras Camera Camera
History of the camera
[ "Technology" ]
7,369
[ "History of technology", "Science and technology studies", "Cameras", "Recording devices", "History of science and technology" ]
3,032,321
https://en.wikipedia.org/wiki/525%20Adelaide
525 Adelaide is an S-type asteroid belonging to the Flora family in the Main Belt. It was discovered on 21 October 1908 by Joel Hastings Metcalf. Previously, the object A904 EB, discovered on 14 March 1904 by Max Wolf, had been named 525 Adelaide but was subsequently lost. When it was rediscovered on 3 October 1930 by Sylvain Arend as 1930 TA, it was named 1171 Rusthawelia. Some 28 years passed before the two objects were realized to be the same. 1930 TA retained the name Rusthawelia (with discovery credited to Arend); the name 525 Adelaide was reused for the object 1908 EKa. Another confusion occurred in 1929, one year before Arend's discovery, when the American astronomer Anne Sewell Young thought she had found the long-lost Adelaide, when in fact she had mistaken comet 31P/Schwassmann–Wachmann, which had a very similar orbital eccentricity, for the asteroid. References External links Flora asteroids Adelaide Adelaide S-type asteroids SU-type asteroids (Tholen) 19040314 19081021 19301003 Recovered astronomical objects
525 Adelaide
[ "Astronomy" ]
229
[ "Recovered astronomical objects", "Astronomical objects" ]
3,032,383
https://en.wikipedia.org/wiki/Moon%20clip
A moon clip is a ring-shaped or star-shaped piece of metal designed to hold multiple cartridges together as a unit, for simultaneous insertion and extraction from a revolver cylinder. Moon clips may either hold an entire cylinder's worth of cartridges together (full moon clip), half a cylinder (half moon clip), or just two neighboring cartridges. The two-cartridge moon clips can be used for those revolvers that have an odd number of loading chambers, such as five or seven, and also for those revolvers that allow a shooter to mix both rimless and rimmed types of cartridges in one loading of the same cylinder (e.g., two adjacent rounds of .45 ACP, two rounds of .45 Colt, and two rounds of .410 in a single six-chamber S&W Governor cylinder). Moon clips can be used either to chamber rimless cartridges in a double-action revolver (which would normally require rimmed cartridges), or to chamber multiple rimmed cartridges simultaneously. Moon clips are generally made from spring-grade steel, although plastic versions have also been produced. Unlike a speedloader, a moon clip remains in place during firing, and after firing is used to extract the empty cartridge cases. History The modern moon clip was devised shortly before World War I, in 1908. The device then became widespread during the war, when the relatively new M1911 semi-automatic pistol could not be manufactured fast enough for the war effort. The U.S. War Department asked Smith & Wesson and Colt to devise ways to use the M1911's .45 ACP rimless cartridge in their revolvers. The result was the M1917 revolver, employing moon clips to chamber the military-issue .45 ACP ammunition. Smith & Wesson invented and patented the half-moon clip, but at the request of the Army allowed Colt to also use the design free of charge in their own version of the M1917 revolver. After the war, Naomi Alan, an engineer employed by Smith & Wesson, developed a 6-round full-moon clip. However, many civilian shooters disliked and still dislike using moon clips. Although full moon clips allow a revolver to be very quickly reloaded, loading and unloading the clips is tedious, and bent clips can bind the cylinder and cause misfires. Moon clips can be formed by stamping high-carbon steel, which is then heat-treated and finished to prevent rust. Alternatively they can be made from pre-heat-treated stainless steel and cut out using either wire EDM or laser machinery. They can also be made by injection molding plastic. Each process has its benefits and drawbacks, such as cost and durability. Speed Moon clips can be even faster to use than a speedloader with the proper training. Jerry Miculek, an IPSC revolver shooter, has demonstrated the ability to fire six rounds from a Smith & Wesson Model 625 .45 ACP revolver, reload, and then fire six more rounds at the A zone of an IPSC target in 2.99 seconds. This feat was made possible by using moon clips, which allow quick and reliable ejection of the fired rounds and a quick reload of all six chambers at once. Use Moon clips have been made available for many different calibers, including .30 Carbine, .327 Federal Magnum, .380 ACP, 9mm Luger, .38 Super, .38 S&W, .38 Special, .357 Magnum, .40 S&W, 10mm Auto, .41 Magnum, .44 Special, .44 Magnum, .45 Colt, .454 Casull, .45 ACP, .45 Super, .460 Rowland, and .460 S&W Magnum. 
Common revolver models that are manufactured to use moon clips: 9×17 mm and 9×18mm Makarov OTs-01 Kobalt R-92 9mm Luger Charter Arms Pitbull Chiappa Rhino S&W Model 940 S&W Model 929 S&W Model 986 Ruger LCR Ruger SP101 Ruger Speed-Six Taurus Model 905 Alfa Proj. 9200 series 10mm Auto/.40 S&W S&W Model 610 .40 S&W S&W Model 646 .45 ACP M1917 revolver Ruger Redhawk S&W Model 22 S&W Models 25 (for blue) and 625 (for stainless) S&W Governor Webley revolver (Beginning with the Mark VI, which was British Army standard issue from 1915. Moon clips can also be used to enable .455 Webleys to shoot the .45 ACP cartridge — a common, though dangerous adaptation, as the .45 ACP standard pressure is above maximum pressure ratings for all models of Webley revolver.) 7.62×41.5mm SP-4 OTs-38 Stechkin References Firearm components
Moon clip
[ "Technology" ]
1,003
[ "Firearm components", "Components" ]
3,032,460
https://en.wikipedia.org/wiki/Flash%20photolysis
Flash photolysis is a pump-probe laboratory technique in which a sample is first excited by a strong pulse of light from a pulsed laser of nanosecond, picosecond, or femtosecond pulse width, or by another short-pulse light source such as a flash lamp. This first strong pulse is called the pump pulse and starts a chemical reaction or leads to an increased population of energy levels other than the ground state within a sample of atoms or molecules. Typically the absorption of light by the sample is recorded at short time intervals (by so-called test or probe pulses) to monitor relaxation or reaction processes initiated by the pump pulse. Flash photolysis was developed shortly after World War II as an outgrowth of attempts by military scientists to build cameras fast enough to photograph missiles in flight. The technique was developed in 1949 by Ronald George Wreyford Norrish and George Porter, who shared the 1967 Nobel Prize in Chemistry with Manfred Eigen for their studies of extremely fast chemical reactions. Over the next 40 years the technique became more powerful and sophisticated owing to developments in optics and lasers. Interest in this method grew considerably as its practical applications expanded from chemistry to areas such as biology, materials science, and environmental sciences. Today, flash photolysis facilities are extensively used by researchers to study light-induced processes in organic molecules, polymers, nanoparticles, semiconductors, photosynthesis in plants, signaling, and light-induced conformational changes in biological systems. See also Attophysics (1 attosecond = 10^−18 s) Femtochemistry Femtotechnology Ultrafast laser spectroscopy Ultrashort pulse References Photochemistry Chemical kinetics Time-resolved spectroscopy
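The relaxation processes mentioned above are usually followed by recording the probe absorption as a function of pump-probe delay and fitting a kinetic model to the resulting trace. The sketch below is a minimal illustration of that idea using synthetic data and an assumed first-order decay; the rate constant, noise level, and time window are invented for the example and do not describe any particular experiment.

```python
# Illustrative sketch: fitting a first-order decay to a synthetic pump-probe
# (transient absorption) trace. The data and rate constant are invented; only
# the analysis idea (following relaxation after the pump pulse) comes from
# the description above.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a0, k):
    """First-order kinetics: A(t) = A0 * exp(-k * t)."""
    return a0 * np.exp(-k * t)

t = np.linspace(0, 5e-6, 200)            # probe delays, 0 to 5 microseconds
true_a0, true_k = 0.12, 1.2e6            # hypothetical amplitude and rate (1/s)
rng = np.random.default_rng(0)
signal = decay(t, true_a0, true_k) + rng.normal(0.0, 0.002, t.size)

(a0_fit, k_fit), _ = curve_fit(decay, t, signal, p0=(0.1, 1.0e6))
print(f"fitted rate constant: {k_fit:.3g} 1/s")   # close to 1.2e6
```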
Flash photolysis
[ "Physics", "Chemistry" ]
345
[ "Chemical reaction engineering", "Spectrum (physical sciences)", "Time-resolved spectroscopy", "nan", "Chemical kinetics", "Spectroscopy" ]
5,531,239
https://en.wikipedia.org/wiki/Sapropel
Sapropel (a contraction of the Ancient Greek words sapros and pelos, meaning putrefaction and mud (or clay), respectively) is a term used in marine geology to describe dark-coloured sediments that are rich in organic matter. Organic carbon concentrations in sapropels commonly exceed 2% by weight. The term sapropel events may also refer to cyclic oceanic anoxic events (OAE), in particular those affecting the Mediterranean Sea with a periodicity of about 21,000 years. Formation Sapropels have been recorded in the Mediterranean sediments since the closure of the Eastern Tethys Ocean 13.5 million years ago. The formation of sapropel events in the Mediterranean Sea occurs approximately every 21,000 years, and each event lasts between 3,000 and 5,000 years. The first identification of these events occurred in the mid-20th century. Since then, the conditions under which they form have been investigated. The occurrence of sapropels has been related to the Earth's orbital parameters (Milankovitch cycles). The precession cycles influence the African monsoon, which influences the Mediterranean circulation through increases in freshwater inputs. Sapropels develop during episodes of reduced oxygen availability in bottom waters, such as an oceanic anoxic event (OAE). Most studies of formational mechanisms infer some degree of reduced deep-water circulation. Oxygen can only reach the deep sea by new deep-water formation and consequent "ventilation" of deep basins. There are two main causes of OAE: a reduction in deep-water circulation or raised oxygen demand from the upper levels. A reduction in deep-water circulation will eventually lead to a serious decrease in deep-water oxygen concentrations due to the biochemical oxygen demand associated with the decay of organic matter, which sinks into the deep sea as a result of export production from surface waters. Oxygen depletion in bottom waters then favors the enhanced preservation of the organic matter during burial by the sediments. Organic-rich sediments may also form in well-ventilated settings that have highly productive surface waters; here the high surface demand simply extracts the oxygen before it can enter the deep circulation current, thus depriving the bottom waters of oxygen. Significance Sapropelic deposits from global ocean anoxic events form important oil source rocks. Detailed process studies of sapropel formation have concentrated on the fairly recent eastern Mediterranean deposits, the last of which occurred between 9.5 and 5.5 thousand years ago. The Mediterranean sapropels of the Pleistocene reflect increased density stratification in the isolated Mediterranean basin. They record a higher organic carbon concentration than non-sapropel times; an increase in δ15N and a corresponding decrease in δ13C indicate rising productivity as a result of nitrogen fixation. This effect is more pronounced further east in the basin, suggesting that increased precipitation was most pronounced at that end of the sea. In the Black Sea In the Black Sea, sapropels are distributed at a depth of 500 to 2200 m, and in different morpholithological zones they have different thicknesses. Deep-sea sediments are those formed outside the zone of influence of hydrogenic factors, such as wind-driven waves and internal waves, as well as of the transgressive and regressive cycles of the Black Sea basin. Here, under conditions of relative stagnation, uninterrupted cross-sections can be observed, because this area remained below sea level during the entire Pleistocene and Holocene. 
Deep sea organogenic mineral sediments (DSOMS) are those sediments that contain more than 3% organic carbon. The sapropels form a single horizon with constant thickness typical of the Black Sea basin. Analogues of the sapropels on the continental shelf and the upper part of the continental slope are the green aleurite-pelite, oozes with accumulation of plant detritus and decomposed shells of Mytilus galloprovincialis. The transition from aleurite-pelitic oozes to sapropels is facial. The organic matter in the sapropels is of heterogeneous origin. They are composed primarily of planktogenic organisms (about 80%) and continental organic matter (20%). The planktonic organisms are well preserved in most cases under the conditions of the hydrogen sulfide zone. The main components of the sapropels are the dinoflagellate cysts, diatom algae, coccolithophorids, peridiniales. The mineral part of sapropel muds is represented by a poly-component mixture of clay minerals. The minerals illite and montmorillonite predominate, chlorite and kaolinite occur in subordinate quantities. Individual grains of quartz, feldspar, volcanic glass and others are rarely found among them. Carbonate minerals are mainly represented by calcite and dolomite. It is generally accepted that the main source of hydrogen sulfide in the Black Sea today are the processes of anaerobic decomposition of organic matter by sulfate-reducing bacteria (SRB). The organic substance that is fixed at the bottom of the basin in the form of organogenic-mineral sediments (sapropels) is a product of the mass extinction of the plankton biomass as a result of the Black Sea flood. There is an excess of a huge amount of organic matter, which creates favorable conditions for the development of bacterial sulfate reduction. Non-conventional source of energy Bulgarian Professor Petko Dimitrov is the creator of the idea for the application of sapropel sediments from the bottom of the Black Sea as a natural ecological fertilizer and biological products. According to the Romanian tycoon Dinu Patriciu, the sapropel sediments have the potential to be a source of non-conventional energy. Patriciu has created a marine exploration project in the Black Sea which examines the sapropel sediments of that region. Sediment cores are collected and investigated by several universities and research institutes across the world. See also Pelite, mud rocks References External links "Using the material choking Russian lakes for sustainable water technologies" discusses uses for sapropel Marine geology Sediments Stratigraphy Geochemistry Mineralogy Radiocarbon dating
Sapropel
[ "Chemistry" ]
1,263
[ "nan" ]
5,531,434
https://en.wikipedia.org/wiki/Conservation-dependent%20species
A conservation-dependent species is a species which has been categorized as "Conservation Dependent" ("LR/cd") by the International Union for Conservation of Nature (IUCN), as dependent on conservation efforts to prevent it from becoming endangered. A species that is reliant on the conservation attempts of humans is considered conservation dependent. Such species must be the focus of a continuing species-specific and/or habitat-specific conservation program, the cessation of which would result in the species qualifying for one of the threatened categories within a period of five years. The determination of status is constantly monitored and can change. This category is part of the IUCN 1994 Categories & Criteria (version 2.3), which is no longer used in evaluation of taxa, but persists in the IUCN Red List for taxa evaluated prior to 2001, when version 3.1 was first used. Using the 2001 (version 3.1) system these taxa are classed as near threatened, but those that have not been re-evaluated remain with the "Conservation Dependent" category. Conservation-dependent species require maintenance additional to the use of the United States Endangered Species Act of 1973. This act is said to protect species from extinction by concerns and acts of conservation. Challenges Conservation-dependent species rely on population connectivity between humans and animals to maintain their life. Connectivity is based in regard to the federal regulatory provisions that protect the species and its habitat. Habitats and species are difficult to conserve when they are not susceptible to the regulations put in place. It is also seen that laws and acts have flaws that cause gaps in their motive. The Endangered Species Act fails to account for biological ecosystem conservation and threats to a species presence. Conservation in these conditions causes data gaps and leads to the depletion of species. Funding of the federal provisions show to be a major concern when efforts are being made to conserve species. Legal members who don't agree on where funding should go cause more harm to the conservation-dependent species by making no effort for restoration. Despite legal efforts for defining a restoration program and setting regulatory provisions, conservation-dependent species are still in danger. Flora vs Fauna conservation methods While conservation dependent plants and animals fall under the same risk status in the environment different methods are used to protect them. Conservation dependent animals are typically protected by recovery plans and agreements for conservation by the government. Plants that are conservation dependent have less protection behind them as the major method for conservation is keeping a habitat healthy. In order to do so, keeping areas uncivilized and minimizing pollution emissions are predictable solutions. Keeping the flora(plants) and fauna(animals) in their region out of the conservation dependent category is the main goal of these methods. Threatened categories Species that are considered Conservation- Dependent are under the lower risk category of status in the IUCN Red List of Threatened Species. The category of species may change and vary depending on its status in its environment. The lower risk status section has three categories that species may fluctuate through. Near Threatened (NT) Conservation Dependent (CD) Least Concern (LC) Conservation Attempts In fisheries around the world, there is a list of rules that people must follow which are in place as a conservation effort. 
These rules support the listing of the scalloped hammerhead shark (Sphyrna lewini) as conservation dependent under the EPBC Act, one major step for the conservation of endangered species. Reporting catch by phone: fishers must report their catch of a shark to QDAF's automated interactive voice response. Species-specific catch and discard information in logbooks: all catches of sharks must be recorded in a log book. Data validation: one hour after docking when there is a shark on board, fisheries officers are allowed to inspect the boat and its catch. Conservation-dependent animals Examples of conservation-dependent species include the black caiman (Melanosuchus niger), the sinarapan, and the California ground cricket. As of December 2015, there remained 209 conservation-dependent plant species and 29 conservation-dependent animal species. As of September 2022, the IUCN still lists 20 conservation-dependent animal species, and one conservation-dependent subpopulation or stock. Reptiles Black caiman Mollusks Bear paw clam China clam Maxima clam Fluted giant clam Arthropods Mono Lake brine shrimp Attheyella yemanjae Canthocamptus campaneri Metacyclops campestris Murunducaris juneae Muscocyclops bidentatus Muscocyclops therasiae California ground cricket Ponticyclops boscoi Spaniacris deserticola Stenopelmatus nigrocapitatus Thermocyclops parvus EPBC Act In Australia, the Environment Protection and Biodiversity Conservation Act 1999 still uses a "Conservation Dependent" category for classifying fauna and flora species. Species recognized as "Conservation Dependent" do not receive special protection, as they are not considered "matters of national environmental significance under the EPBC Act". Any assemblage of species may be listed as a "threatened ecological community" under the EPBC Act. Fauna may be classified under this category if its flora is directly threatened. The legislation uses categories similar to those of the IUCN 1994 Categories & Criteria. It does not, however, have a near threatened category or any other "lower risk" categories. As of December 2018, eight species of fish had received the status under the act: Orange roughy (Hoplostethus atlanticus) Silver gemfish (Rexea solandri) School shark (Galeorhinus galeus) Southern bluefin tuna (Thunnus maccoyii) Southern dogfish (Centrophorus zeehaani) Dumb gulper shark (Centrophorus harrissoni) Blue warehou (Seriolella brama) Scalloped hammerhead (Sphyrna lewini) No flora has been given the category under the EPBC Act. See also Conservation-reliant species IUCN Red List conservation dependent species, ordered by taxonomic rank. :Category:IUCN Red List conservation dependent species, ordered alphabetically. References IUCN Red List
Conservation-dependent species
[ "Biology" ]
1,236
[ "Conservation dependent species", "Biota by conservation status" ]
5,532,216
https://en.wikipedia.org/wiki/Limes%20Saxoniae
The (Latin for "Limit of Saxony"), also known as the Limes Saxonicus or Sachsenwall ("Saxon Dyke"), was an unfortified limes or border between the Saxons and the Slavic Obotrites, established about 810 in present-day Schleswig-Holstein. After Charlemagne had removed Saxons from some of their lands and given it to the Obotrites (who were allies of Charlemagne), he finally managed to conquer the Saxons in the Saxon Wars. In 811 he signed the Treaty of Heiligen with the neighbouring Danes and may at the same time have reached a border agreement with the Polabian Slavs in the east. This border should not be thought of as a fortified line, however, but rather a defined line running through the middle of the border zone, an area of bog and thick forest that was difficult to pass through. According to Adam of Bremen's description in the Gesta Hammaburgensis ecclesiae pontificum about 1075, it ran from the Elbe river near Boizenburg northwards along the Bille river to the mouth of the Schwentine at the Kiel Fjord and the Baltic Sea. It was breached several times by the Slavic Obotrites (983 and 1086) and Mieszko II Lambert of Poland (1028 and 1030). The Limes was dissolved during the first phase of the Ostsiedlung, when Count Henry of Badewide campaigned in Wagrian lands in 1138/39 and the Slavic population was Germanized by German, mostly Saxon, settlers. Bibliography Matthias Hardt: "Hesse, Elbe, Saale and the Frontiers of the Carolingian Empire." In: Walther Pool / Ian N. Wood / Helmut Reimitz (Hrsg.): The Transformation of Frontiers from Late Antiquity to the Carolingians. The Transformation of the Roman World 10. Leiden-Boston-Köln 2001, S. 219–232, . Matthias Hardt: "Limes Saxoniae." In: Reallexikon der Germanischen Altertumskunde, Bd. 18, Landschaftsrecht – Loxstedt. Berlin-New York 2001, S. 442–446, . Günther Bock: "Böhmische Dörfer“ in Stormarn? – Verlauf und Bedeutung des Limes Saxoniae zwischen Bille und Trave." In: Derselbe: Studien zur Geschichte Stormarns im Mittelalter. Neumünster 1996 (Stormarner Hefte 19), S. 25–70 (mit Karten), . Geography of Schleswig-Holstein Obotrites Holstein Borders
Limes Saxoniae
[ "Physics" ]
558
[ "Spacetime", "Borders", "Space" ]
5,532,341
https://en.wikipedia.org/wiki/Flavodoxin
Flavodoxins (Fld) are small, soluble electron-transfer proteins. Flavodoxins contain flavin mononucleotide (FMN) as a prosthetic group. The structure of flavodoxin is characterized by a five-stranded parallel beta sheet, surrounded by five alpha helices. They have been isolated from prokaryotes, cyanobacteria, and some eukaryotic algae. Background Originally found in cyanobacteria and clostridia, flavodoxins were discovered over 50 years ago. These proteins evolved in an anaerobic environment under selective pressure. At that time, ferredoxin, another redox protein, was the only protein available for this role. However, when oxygen became present in the environment, iron became limited. Ferredoxin is iron-dependent as well as oxidant-sensitive, so under these limited-iron conditions ferredoxin was no longer preferred. Flavodoxin, on the other hand, has the opposite traits: it is oxidant-resistant and iron-free, making it an isofunctional counterpart to ferredoxin. Therefore, for some time flavodoxin was the primary redox protein. Today, when ferredoxin and flavodoxin are present in the same genome, ferredoxin is still used, but under low-iron conditions flavodoxin is induced. Structure Three redox forms of flavodoxin exist: oxidized (OX), semiquinone (SQ), and hydroquinone (HQ). While relatively small (Mw = 15-22 kDa), flavodoxins exist in "long" and "short" chain classifications. Short-chain flavodoxins contain between 140 and 180 amino acid residues, while long-chain flavodoxins include a 20 amino acid insertion into the last beta-strand. These residues form a loop which may increase the binding affinity of flavin mononucleotide as well as assist in the formation of folded intermediates; however, the loop's true function is still not certain. In addition, the flavin mononucleotide is non-covalently bound to the flavodoxin protein and works to shuttle electrons. Medical applications Helicobacter pylori (Hp), the most prevalent human gastric pathogen, requires flavodoxins in its essential POR (pyruvate oxidoreductase) enzyme complex used in pyruvate decarboxylation. Most flavodoxins have a large hydrophobic residue such as tryptophan near the FMN, but Hp has an alanine residue instead, allowing a pocket of solute to form. Current research aims to identify non-toxic, Hp-specific flavodoxin inhibitors for the purpose of treating infection. Mechanism Flavodoxins require a highly negative redox potential to be active. The semiquinone conformation is stabilized by a hydrogen bond to the N-5 position of the flavin. This bond, as well as a common tryptophan residue near the binding site, aids in lowering SQ reactivity. The hydroquinone form is forced into a planar conformation, destabilizing it. Electron transfer occurs at the dimethylbenzene ring of the FMN. Flavodoxins in Cyanobacteria In cyanobacteria such as Nostoc sp., flavodoxins are heterocyst-specific and are used in photosystem I to deliver electrons to nitrogenase and to reduce N2 and NADP+, supporting nitrogen fixation and H2 formation. References External links "Flavodoxin Folding and Stability Research at Wageningen University, the Netherlands" "The crossovers of flavodoxin" at virginia.edu Diagram at ohio-state.edu Proteins Bacteria
Flavodoxin
[ "Chemistry", "Biology" ]
820
[ "Biomolecules by chemical classification", "Prokaryotes", "Bacteria", "Molecular biology", "Proteins", "Microorganisms" ]
5,532,455
https://en.wikipedia.org/wiki/Interactive%20design
Interactive design is a user-oriented field of study that focuses on meaningful communication using media to create products through cyclical and collaborative processes between people and technology. Successful interactive designs have simple, clearly defined goals, a strong purpose and intuitive screen interface. Interactive design compared to interaction design In some cases interactive design is equated to interaction design; however, in the specialized study of interactive design there are defined differences. To assist in this distinction, interaction design can be thought of as: Making devices usable, useful, and fun, focusing on the efficiency and intuitive hardware A fusion of product design, computer science, and communication design A process of solving specific problems under a specific set of contextual circumstances The creation of form for the behavior of products, services, environments, and systems Making dialogue between technology and user invisible, i.e. reducing the limitations of communication through and with technology. About connecting people through various products and services, Whereas interactive design can be thought of as: Giving purpose to interaction design through meaningful experiences Consisting of six main components including User control, Responsiveness, Real-Time Interactions, Connectedness, Personalization, and Playfulness Focuses on the use and experience of the software Retrieving and processing information through on-demand responsiveness Acting upon information to transform it The constant changing of information and media, regardless of changes in the device Providing interactivity through a focus on the capabilities and constraints of human cognitive processing While both definitions indicate a strong focus on the user, the difference arises from the purposes of interactive design and interaction design. In essence interactive design involves the creation of interactive products and services, while interaction design focuses on the design of those products and services. Interaction design without interactive design provides only design concepts. Interactive design without interaction design may not built products good enough for the user. History Fluxus Interactive Design is heavily influenced by the Fluxus movement, which focuses on a "do-it-yourself" aesthetic, anti-commercialism and an anti-art sensibility. Fluxus is different from Dada in its richer set of aspirations. Fluxus is not a modern-art movement or an art style, rather it is a loose international organization which consists of many artists from different countries. There are 12 core ideas that form Fluxus. Globalism Unity of Art and Life Intermedia Experimentalism Chance Playfulness Simplicity Implicativeness Exemplativism Specificity Presence in time Musicality Computers The birth of the personal computer gave users the ability to become more interactive with what they were able to input into the machine. This was mostly due to the invention of the mouse. With an early prototype created in 1963 by Douglas Engelbart, the mouse was conceptualized as a tool to make the computer more interactive. The Internet and Interactive Design With the tendency of increasing use to the Internet, the advent of interactive media and computing, and eventually the emergence of digital interactive consumer products, the two cultures of design and engineering gravitated towards a common interest in flexible use and user experience. 
The most important characteristic of the Internet is its openness to communication between people. In other words, everyone can readily communicate and interact with what they want on the Internet. In recent decades, the notion of interactive design has gained popularity along with the Internet environment. Stuart Moulthrop demonstrated interactive media through hypertext and helped establish the genre of hypertext fiction on the Internet. His ideas contributed to improvements in hypertext and to the media revolution that accompanied the development of the Internet. What follows is a short history of hypertext. In 1945, the first concept of hypertext originated with Vannevar Bush in his article As We May Think. A computer game called Adventure was later created in response to users' needs as the first hypertextual narrative. Douglas Engelbart and Theodor Holm Nelson, who made Xanadu, then collaborated on a system called FRESS in the 1970s. Their efforts brought immense political ramifications. By 1987, Computer Lib and Dream Machines had been published by Microsoft Press, and Nelson joined Autodesk, which announced plans to support Xanadu as a commercial product. Xanadu is defined as a project that declared itself an improvement over the World Wide Web, with a mission statement that today's popular software simulates paper: the World Wide Web trivializes the original hypertext model with one-way, ever-breaking links and no management of versions or contents. In the late 1980s, Apple Computer began giving away HyperCard, which was relatively cheap and simple to operate. In the early 1990s, the hypertext concept finally received some attention from humanist academics. This acceptance can be seen in Jay David Bolter's Writing Space (1991) and George Landow's Hypertext. Advertising Upon the transition from analogue to digital technology, one sees a further transition from digital technology to interactive media in advertising agencies. This transition caused many of the agencies to reexamine their business and try to stay ahead of the curve. Although it is a challenging transition, the creative potential of interactive design lies in combining almost all forms of media and information delivery: text, images, film, video and sound, and that in turn negates many boundaries for advertising agencies, making it a creative haven. Hence, with this constant motion forward, agencies such as R/GA have established a routine to keep up. Founded in 1977 by Richard and Robert Greenberg, the company has reconstructed its business model every nine years. Starting from a computer-assisted animation camera, it is now an "Agency for the Digital World". Robert Greenberg explains: "the process of changing models is painful because you have to be ready to move on from the things that you're good at". This is one example of how to adapt to such a fast-paced industry, and one major conference that stays on top of things is the How Interactive Design Conference, which helps designers make the leap towards the digital age. Interactive new media art Nowadays, following the development of science and technology, various new media appear in different areas, like art, industry and science. Most technologies described as "new media" are digital, often having characteristics of being manipulated, networkable, dense, compressible, and interactive (like the internet, video games and mobiles). In the industry field, companies no longer focus on the product itself; they focus more on human-centered design. 
Therefore, "interactive" became an important element in new media. Interactivity is not only a computer and a video signal presenting to each other; it refers more to communication and response between viewers and works. According to Selnow's (1988) theory, interactivity has three levels: Communicative Recognition: This communication is specific to the partner. Feedback is based on recognition of the partner. When a learner inputs information into a computer and the computer responds specifically to that input, there is mutual recognition. The menu format allows mutual recognition. Feedback: The responses are based on previous feedback. As the communication continues, the feedback progresses to reflect understanding. When a learner refines a search query and the computer responds with a refined list, message exchange is progressing. Information Flow: There is an opportunity for a two-way flow of information. It is necessary that both the learner and the computer have means of exchanging information. The search engine tool allows for learner input via the keyboard, and the computer responds with written information. New media has been described as the "mixture between existing cultural conventions and the conventions of software"; newspapers and television, for instance, have moved from traditional outlets to forms of interactive multimedia. New media can allow audiences access to content anytime, anywhere, on any digital device. It also promotes interactive feedback, participation, and community creation around the media content. New media is a vague term that covers a whole slew of things. The Internet and social media are both forms of new media. Any type of technology that enables digital interactivity is a form of new media. Video games, as well as Facebook, are good examples of new media. New media art is simply art that utilizes these new media technologies, such as digital art, computer graphics, computer animation, virtual art, Internet art, and interactive art. New media art is very focused on the interactivity between the artist and the spectator. Many new media art works, such as Jonah Brucker-Cohen and Katherine Moriwaki's UMBRELLA.net and Golan Levin et al.'s Dialtones: A Telesymphony, involve audience participation. Other works of new media art require audience members to interact with the work but not to participate in its production. In interactive new media art, the work responds to audience input but is not altered by it. Audience members may click on a screen to navigate through a web of linked pages, or activate motion sensors that trigger computer programs, but their actions leave no trace on the work itself. Each member of the audience experiences the piece differently based on the choices he or she makes while interacting with the work. In Olia Lialina's My Boyfriend Came Back From The War, for example, visitors click through a series of frames on a Web page to reveal images and fragments of text. Although the elements of the story never change, the way the story unfolds is determined by each visitor's own actions. References Further reading Iuppa, Nicholas (2001). Interactive Design for New Media and the Web. Boston: Focal Press. Software design
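As a rough illustration of this kind of click-through interactivity, the sketch below models a handful of linked frames as a small graph and replays two different visitors' choices. The frame names, captions, and link structure are invented for the example and are not taken from Lialina's work or any other piece cited above; the point is only that a fixed set of frames can still unfold differently for each reader.

```python
# Illustrative sketch only: a tiny hypertext "story" as a graph of frames.
# The frames and links below are invented; the content never changes, yet
# the order in which it unfolds is determined by the reader's choices.

FRAMES = {
    "start":  ("A dark landing page with a single photograph.", ["window", "letter"]),
    "window": ("A cropped image of a window; a caption hints at waiting.", ["letter", "end"]),
    "letter": ("A fragment of a letter, most of it cut off.", ["window", "end"]),
    "end":    ("A final frame; the story stops here.", []),
}

def read_story(choices):
    """Follow the reader's choices through the frame graph and return the path taken."""
    path, current = [], "start"
    for choice in choices:
        text, links = FRAMES[current]
        path.append((current, text))
        if choice not in links:       # ignore clicks on links that are not offered
            continue
        current = choice
    path.append((current, FRAMES[current][0]))
    return path

# Two readers, same frames, different unfolding of the "story".
for reader, clicks in [("reader A", ["window", "end"]),
                       ("reader B", ["letter", "window", "end"])]:
    print(reader, "->", [frame for frame, _ in read_story(clicks)])
```

Reader A sees start, window, end; reader B sees start, letter, window, end: the same material, experienced differently, which is the sense of interactivity described above.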
Interactive design
[ "Engineering" ]
1,940
[ "Design", "Software design" ]
5,532,622
https://en.wikipedia.org/wiki/Fatty%20acid%20transport%20proteins
Fatty acid transport proteins (FATPs, SLC27, SLC27A) are a family of trans-membrane transport proteins, which allow and enhance the uptake of long chain fatty acids into cells. This subfamily is part of the solute carrier protein family. Within humans this family contains six very homologous proteins, which are expressed in all tissues of the body which use fatty acids: SLC27A1 (FATP1) Long-chain fatty acid transport protein 1 SLC27A2 (FATP2) Very long-chain acyl-CoA synthetase SLC27A3 (FATP3) Solute carrier family 27 member 3 SLC27A4 (FATP4) Long-chain fatty acid transport protein 4 SLC27A5 (FATP5) Bile acyl-CoA synthetase SLC27A6 (FATP6) Long-chain fatty acid transport protein 6 References Protein families Transport proteins
Fatty acid transport proteins
[ "Biology" ]
196
[ "Protein families", "Protein classification" ]
5,532,777
https://en.wikipedia.org/wiki/Neurophysins
Neurophysins are carrier proteins which transport the hormones oxytocin and vasopressin to the posterior pituitary from the paraventricular and supraoptic nuclei of the hypothalamus, respectively. Inside the neurosecretory granules, the analogous neurophysin I and II form stabilizing complexes via covalent interactions. Stabilizing neurophysin-hormone complexes that are formed within neurosecretory granules located in the posterior pituitary gland aid in intra-axonal transport. During intra-axonal transport, the neurophysins are believed to prevent the bound hormone from leaking into the cytoplasmic space and to protect it from proteolytic digestion by enzymes. However, due to the low concentration of neurophysin in the blood, it is likely that the protein-hormone complex dissociates, indicating that neurophysin does not aid in transporting the hormone through the circulatory system. Neurophysins are also secreted out of the posterior pituitary, each carrying its respective passenger hormone. When the posterior pituitary secretes vasopressin and its neurophysin carrier, it also secretes a glycopeptide. There are two types: Neurophysin I - Oxytocin Neurophysin II - Vasopressin (also known as "antidiuretic hormone" or ADH) Biosynthesis of Neurophysins These proteins are synthesized in the cell bodies of the supraoptic and paraventricular regions of the hypothalamus. The synthesis of the disulfide-rich neurophysin protein is suggested to be analogous to that of insulin, in which a precursor molecule of higher molecular weight is proteolytically cleaved and forms disulfide linkages. Although not enough data have been obtained, it is hypothesized that there is a common precursor molecule for neurophysin and the two hormones it stabilizes. Structure Neurophysins are acidic proteins with a molecular weight of approximately 10,000 Da that are rich in cysteine, glycine, and proline residues. The protein has two domains, with a polypeptide chain of 93-95 residues and 14 cysteine residues forming 7 disulfide bridges. Domain I contains a COOH terminal with a disulfide loop; domain II lacks this COOH-terminal disulfide loop. Based on the resemblance to the disulfide loop present on vasopressin and oxytocin, it is suggested that the hormones form covalent linkages to this disulfide loop present on the COOH terminal of domain I. See also Herring bodies References External links Membrane biology Biological matter
Neurophysins
[ "Chemistry", "Biology" ]
587
[ "Membrane biology", "Biotechnology stubs", "Biochemistry stubs", "Molecular biology", "Biochemistry" ]
5,532,813
https://en.wikipedia.org/wiki/AEGIS%20SecureConnect
AEGIS SecureConnect (or simply AEGIS) is the former name of a network authentication system used in IEEE 802.1X networks. It was developed by Meetinghouse Data Communications, Inc.; the system was renamed "Cisco Secure Services Client" when Meetinghouse was acquired by Cisco Systems. The AEGIS Protocol is an 802.1X supplicant (i.e. handles authentication for wired and wireless networks, such as those that use WPA-PSK, WPA-Radius, or Certificate-based authentication), and is commonly installed along with a Network Interface Card's (NIC) or VPN drivers. References External links Cisco Secure Services Client Q&A (Cisco Systems, Inc.) Computer network security IEEE 802.11
AEGIS SecureConnect
[ "Technology", "Engineering" ]
157
[ "Cybersecurity engineering", "Computer network stubs", "Computer networks engineering", "Computer network security", "Computing stubs" ]
5,532,906
https://en.wikipedia.org/wiki/Directional%20stability
Directional stability is stability of a moving body or vehicle about an axis which is perpendicular to its direction of motion. Stability of a vehicle concerns itself with the tendency of a vehicle to return to its original direction in relation to the oncoming medium (water, air, road surface, etc.) when disturbed (rotated) away from that original direction. If a vehicle is directionally stable, a restoring moment is produced which is in a direction opposite to the rotational disturbance. This "pushes" the vehicle (in rotation) so as to return it to the original orientation, thus tending to keep the vehicle oriented in the original direction. Directional stability is frequently called "weather vaning" because a directionally stable vehicle free to rotate about its center of mass is similar to a weather vane rotating about its (vertical) pivot. With the exception of spacecraft, vehicles generally have a recognisable front and rear and are designed so that the front points more or less in the direction of motion. Without this stability, they may tumble end over end, spin or orient themselves at a high angle of attack, even broadside on to the direction of motion. At high angles of attack, drag forces may become excessive, the vehicle may be impossible to control, or may even experience structural failure. In general, land, sea, air and underwater vehicles are designed to have a natural tendency to point in the direction of motion. Example: road vehicle Arrows, darts, rockets, and airships have tail surfaces (fins or feathers) to achieve directional stability; an airplane uses its vertical stabilizer for the same purpose. A road vehicle does not have elements specifically designed to maintain stability, but relies primarily on the distribution of mass. Introduction These points are best illustrated with an example. The first stage of studying the stability of a road vehicle is the derivation of a reasonable approximation to the equations of motion. The diagram illustrates a four-wheel vehicle, in which the front axle is located a distance a ahead of the centre of gravity and the rear axle is a distance b aft of the cg. The body of the car is pointing in a direction θ (theta) whilst it is travelling in a direction ψ (psi). In general, these are not the same. The tyre treads at the region of contact point in the direction of travel, but the hubs are aligned with the vehicle body, with the steering held central. The tyres distort as they rotate to accommodate this mis-alignment, and generate side forces as a consequence. The net side force Y on the vehicle is the centripetal force causing the vehicle to change the direction it is traveling; with M the vehicle mass and V the speed, and with all angles assumed small, the lateral force equation is MV(dψ/dt) = Y. The rotation of the body subjected to a yawing moment N is governed by I(d²θ/dt²) = N, where I is the moment of inertia in yaw. The forces and moments of interest arise from the distortion of the tyres. The angle between the direction the tread is rolling and the hub is called the slip angle. This is a bit of a misnomer, because the tyre as a whole does not actually slip: part of the region in contact with the road adheres, and part of the region slips. We assume that the tyre force is directly proportional to the slip angle. The slip angle is made up of the slip of the vehicle as a whole, modified by the angular velocity of the body: for the front axle the yaw rate contributes a term proportional to (a/V)(dθ/dt), whilst for the rear axle the corresponding term is proportional to (b/V)(dθ/dt). Let the constant of proportionality between tyre force and slip angle be k. 
The sideforce and the yawing moment therefore follow from these two slip angles. Denoting the angular velocity dθ/dt by ω, the equations of motion consist of the lateral force equation and the yaw equation above, written in terms of ω and the angles. Let β = θ - ψ (beta) be the slip angle for the vehicle as a whole: eliminating ω yields a single second-order equation in β. This is called a second-order linear homogeneous equation, and its properties form the basis of much of control theory. Stability analysis We do not need to solve the equation of motion explicitly to decide whether the solution diverges indefinitely or converges to zero following an initial perturbation. The form of the solution depends on the signs of the coefficients. The coefficient of dβ/dt will be called the 'damping', by analogy with a mass-spring-damper which has a similar equation of motion. By the same analogy, the coefficient of β will be called the 'stiffness', as its function is to return the system to zero deflection, in the same manner as a spring. The form of the solution depends only on the signs of the damping and stiffness terms. The four possible solution types are presented in the figure. The only satisfactory solution requires both stiffness and damping to be positive. In the damping term, the tyre slip coefficient k is positive, as are the mass, moment of inertia and speed, so the damping is positive, and the directional motion should be dynamically stable. The stiffness term depends on the position of the centre of gravity: if the centre of gravity is ahead of the centre of the wheelbase, the stiffness will always be positive, and the vehicle will be stable at all speeds. However, if it lies further aft, the term has the potential of becoming negative above a certain critical speed, and above this speed the vehicle will be directionally unstable. Relative effect of front and rear tyres If for some reason (incorrect inflation pressure, worn tread) the tyres on one axle are incapable of generating significant lateral force, the stability will obviously be affected. Assume to begin with that the rear tyres are faulty: what is the effect on stability? If the rear tyres produce no significant forces, the side force and yawing moment are generated by the front axle alone, and in the resulting equation of motion the coefficient of β is negative, so the vehicle will be unstable. Now consider the effect of faulty tyres at the front. The side force and yawing moment are then generated by the rear axle alone, and in the resulting equation of motion the coefficient of β is positive, so the vehicle will be stable but unsteerable. It follows that the condition of the rear tyres is more critical to directional stability than the state of the front tyres. Also, locking the rear wheels by applying the handbrake renders the vehicle directionally unstable, causing it to spin. Since the vehicle is not under control during the spin, the 'handbrake turn' is usually illegal on public roads. Steering forces Deflecting the steering changes the slip angle of the front tyres, generating a sideforce. With conventional steering, the tyres are deflected by different amounts, but for the purposes of this analysis, the additional slip will be considered equal for both front tyres. The side force then acquires a term proportional to the steering deflection η (eta), and similarly for the yawing moment. Including the steering term introduces a forced response. The steady state response is found with all time derivatives set to zero. Stability requires that the coefficient of β be positive, so the sign of the steady-state slip is determined by the coefficient of η, which is a function of speed. When the speed is low, the slip is negative and the body points out of the corner (it understeers). At a particular speed the body points exactly in the direction of motion. 
Above this speed, the body points into the corner (oversteers). As an example: with k=10kN/radian, M=1000kg, b=1.0m, a=1.0m, the vehicle understeers below 11.3mph. Evidently moving the centre of gravity forwards increases this speed, giving the vehicle a tendency to understeer. Note: Installing a heavy, powerful engine in a light weight production vehicle designed around a small engine increases both its directional stability, and its tendency to understeer. The result is an overpowered vehicle with poor cornering performance. Even worse is the installation of an oversized power unit into a rear engined production vehicle without corresponding modification of suspension or mass distribution, as the result will be directionally unstable at high speed. Limitations of the analysis The forces arising from slip depend on the loading on the tyre as well as the slip angle, this effect has been ignored, but could be taken into account by assuming different values of k for the front and rear axles. Roll motion due to cornering will redistribute the tyre loads between the nearside and offside of the vehicle, again modifying the tyre forces. Engine torque likewise re-distributes the load between front and rear tyres. A full analysis should also take account of the suspension response. The complete analysis is essential for the design of high performance road vehicles, but is beyond the scope of this article. Aviation Directional stability about the aircraft's vertical axis is also referred to as yawing. This is primarily achieved by the area of the vertical stabilizer and the sides of the fuselage aft of the center of gravity. When an airplane is flying in a straight line while hit by a side gust of wind, the left/right yawing motion will be stopped by the air striking at the right/left side of the vertical stabilizer. See also Relaxed stability Car handling Flight dynamics Dutch roll Longitudinal stability Hunting oscillation References Barwell F T : Automation and Control in Transport, Pergamon Press, 1972. Synge J L and B A Griffiths : Principles of Mechanics, Section 6.3, McGraw-Hill Kogakusha Ltd,3rd Edition, 1970. External links Mechanics
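The stability argument above comes down to the signs of the damping and stiffness coefficients of a second-order linear homogeneous equation in β. The sketch below is not part of the original analysis: the function names and the coefficient values are invented for illustration. It classifies and numerically integrates β'' + c·β' + s·β = 0 for a few sign combinations, reproducing the four qualitative solution types referred to in the stability analysis.

```python
# Illustrative sketch only -- not taken from the article's derivation.
# It treats the stability analysis abstractly: given the "damping" c and
# "stiffness" s of  beta'' + c*beta' + s*beta = 0, it classifies the
# solution type and integrates the equation from a small disturbance.
# All numerical values below are invented for demonstration.

def classify(c: float, s: float) -> str:
    """Qualitative solution type for damping c and stiffness s."""
    if s < 0:
        # One root of r^2 + c*r + s = 0 is positive: any disturbance diverges.
        return "divergent (negative stiffness, unstable)"
    if c < 0:
        # Positive stiffness but negative damping: the response grows with time.
        return "growing response (negative damping, unstable)"
    if c * c >= 4.0 * s:
        return "non-oscillatory return to zero (stable)"
    return "damped oscillation returning to zero (stable)"


def simulate(c: float, s: float, beta0: float = 0.05,
             dt: float = 0.01, steps: int = 2000) -> float:
    """Semi-implicit Euler integration; returns beta after steps*dt seconds."""
    beta, beta_dot = beta0, 0.0
    for _ in range(steps):
        beta_dot += (-c * beta_dot - s * beta) * dt
        beta += beta_dot * dt
    return beta


if __name__ == "__main__":
    for c, s in [(1.0, 4.0), (4.0, 1.0), (1.0, -4.0), (-1.0, 4.0)]:
        print(f"damping {c:+.1f}, stiffness {s:+.1f}: {classify(c, s)}; "
              f"beta(20 s) = {simulate(c, s):.3g}")
```

Any specific vehicle model, such as the one sketched in the Introduction, simply supplies particular expressions for c and s in terms of k, M, I, V, a and b, so the same sign test decides its directional stability.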
Directional stability
[ "Physics", "Engineering" ]
1,894
[ "Mechanics", "Mechanical engineering" ]
5,532,987
https://en.wikipedia.org/wiki/CCAAT-enhancer-binding%20proteins
CCAAT-enhancer-binding proteins (or C/EBPs) is a family of transcription factors composed of six members, named from C/EBPα to C/EBPζ. They promote the expression of certain genes through interaction with their promoters. Once bound to DNA, C/EBPs can recruit so-called co-activators (such as CBP) that in turn can open up chromatin structure or recruit basal transcription factors. Function C/EBP proteins interact with the CCAAT (cytosine-cytosine-adenosine-adenosine-thymidine) box motif, which is present in several gene promoters. They are characterized by a highly conserved basic-leucine zipper (bZIP) domain at the C-terminus. This domain is involved in dimerization and DNA binding, as are other transcription factors of the leucine zipper domain-containing family (c-Fos and c-jun). The bZIP domain structure of C/EBPs is composed of an α-helix that forms a "coiled coil" structure when it dimerizes. Members of the C/EBP family can form homodimers or heterodimers with other C/EBPs and with other transcription factors, which may or may not contain the leucine zipper domain. The dimerization is necessary to enable C/EBPs to bind specifically to DNA through a palindromic sequence in the major groove of the DNA. C/EBP proteins also contain activation domains at the N-terminus and regulatory domains. These proteins are found in hepatocytes, adipocytes, hematopoietic cells, spleen, kidney, brain, and many other organs. C/EBP proteins are involved in different cellular responses, such as in the control of cellular proliferation, growth and differentiation, in metabolism, and in immunity. Nearly all the members of the C/EBP family can induce transcription through their activation domains by interacting with components of the basal transcription apparatus. (C/EBPγ is an exception that lacks a functional transcriptional activation domain.) Their expression is regulated at multiple levels, including through hormones, mitogens, cytokines, nutrients, and other factors. This protein is expressed in the mammalian nervous system and plays a significant role in the development and function of nerve cells. C/EBPβ plays a role in neuronal differentiation, in learning, in memory processes, in glial and neuronal cell functions, and in neurotrophic factor expression. Gene transcription The C/EBPα, C/EBPβ, C/EBPγ and C/EBPδ genes are without introns. C/EBPζ has four exons; C/EBPε has two, which lead to four isoforms due to an alternative use of promoters and splicing. For C/EBPα and C/EBPβ, different sizes of polypeptides can be produced by alternative use of initiation codons. This is thought to be due to weak ribosome scanning mechanisms. The mRNA of C/EBPα can transcribe into two polypeptides. For C/EBPβ, three different polypeptides are made: LAP* (38 kDa), LAP (35 kDa) and LIP (20 kDa). The most translated isoform is LAP, then LAP* and LIP. LIP can act as an inhibitor of the other C/EBPs by forming non-functional heterodimers. Regulation C/EBPβ function is regulated by multiple mechanisms, including phosphorylation, acetylation, activation, autoregulation, and repression via other transcription factors, oncogenic elements, or chemokines. C/EBPβ can interact with CREB, NF-κB, and other proteins, leading to a trans-activation potential. Phosphorylation of C/EBPβ can have an activation or a repression effect. For example, phosphorylation of threonine 235 in human C/EBPβ, or of threonine 188 in mouse and rat C/EBPβ, is important for C/EBPβ trans-activation capacity. 
Phosphorylation(s) of C/EBPβ in its regulatory domain can also modulate its function. It was shown in C. elegans that multiple cis elements of cebp-1 mRNA 3'UTR interact with mak-2 to upregulate expression of CEBP-1 in neuronal development. Clinical significance Role in adipogenesis C/EBPβ and δ are transiently induced during the early stages of adipocyte differentiation (adipogenesis), while C/EBPα is upregulated during the terminal stages of adipogenesis. In vitro and in vivo studies have demonstrated that each plays an important role in this process. For example, Murine Embryonic Fibroblasts (MEFs) from mice lacking both C/EBPβ and C/EBPδ show impaired adipocyte differentiation in response to adipogenic stimuli. In contrast, ectopic expression of C/EBPβ and δ in 3T3-L1 preadipocytes promotes adipogenesis, even in the absence of adipogenic stimuli. C/EBPβ and δ promote adipogenesis, at least in part by inducing the expression of the "master" adipogenic transcription factors C/EBPα and PPARγ. C/EBPα is required both for adipogenesis and for normal adipocyte function. For example, mice lacking C/EBPα in all tissues except the liver (where it is needed to avoid postnatal lethality) show abnormal adipose tissue formation. Moreover, ectopic expression of C/EBPα in various fibroblast cell lines promotes adipogenesis. C/EBPα probably promotes adipogenesis by inducing the expression of PPARγ. Role in osteoporosis C/EBPβ has been found to have a role in the development of osteoporosis. The full-length isoform of the C/EBPβ protein (LAP) activates the MafB gene, whereas the short isoform (LIP) suppresses it. MafB gene activation suppresses the formation of osteoclasts. Thus, upregulation of LAP diminishes the number of osteoclasts, and this weakens the osteoporotic process, whereas upregulation of LIP does the opposite, increasing loss of bone mass. The LAP/LIP balance is determined by the mTOR protein. Inhibition of the expression of mTOR can stop osteoclast activity. Role in cancer CCAAT/enhancer-binding proteins are often involved in growth arrest and differentiation, which has been interpreted to suggest that these proteins harbor tumor suppressive activities. However, CCAAT/enhancer-binding protein over-expression correlates with poor prognosis in glioblastoma and promotes genomic instability in cervical cancer, hinting at an oncogenic role. Importantly, however, C/EBPδ acts as a tumor suppressor in pancreatic ductal adenocarcinoma. This is of particular interest since only few tumor suppressors have been identified in the context of pancreatic cancer. The function of CCAAT/enhancer-binding proteins in cancer is thus clearly context dependent but largely tumor suppressive. Role in neurodegeneration C/EBPβ levels are increased in cortical samples of Alzheimer's and Parkinson's disease victims at autopsy. Cell culture studies in mice and human microglia lines also find increased C/EBPβ activity associated with pathogenic inflammation and cytokine responses. Downstream analysis of genes regulated by C/EBPβ have significance in immune response, mitochondrial health, and autophagy. Molecular interference of these cellular processes have been shown to play a role in neurodegenerative pathogenesis. Genetic and molecular pathways with degenerative implications involving C/EBPβ and its homologs are conserved across multiple model organisms including Mus musculus, Drosophila melanogaster, Caenorhabditis elegans, and Danio rerio. 
Upstream regulators of C/EBPβ include genes known to be associated with neurodegenerative and neurodevelopmental disease when mutated or dysregulated. This includes a well characterized cellular stress response pathway involving p38 and JNK. References External links Gene expression Transcription factors
CCAAT-enhancer-binding proteins
[ "Chemistry", "Biology" ]
1,853
[ "Gene expression", "Signal transduction", "Molecular genetics", "Cellular processes", "Induced stem cells", "Molecular biology", "Biochemistry", "Transcription factors" ]
5,533,026
https://en.wikipedia.org/wiki/Interferon%20regulatory%20factors
Interferon regulatory factors (IRF) are proteins which regulate transcription of interferons (see regulation of gene expression). Interferon regulatory factors contain a conserved N-terminal region of about 120 amino acids, which folds into a structure that binds specifically to the IRF-element (IRF-E) motifs located upstream of the interferon genes; the remaining parts of the interferon regulatory factor sequence vary depending on the precise function of the protein. Some viruses have evolved defense mechanisms that regulate and interfere with IRF functions to escape the host immune system. For instance, the Kaposi sarcoma herpesvirus, KSHV, is a cancer virus that encodes four different IRF-like genes, including vIRF1, which is a transforming oncoprotein that inhibits type 1 interferon activity. In addition, the expression of IRF genes is under epigenetic regulation by promoter DNA methylation. Role in IFN signaling IRFs primarily regulate type I IFNs in the host after pathogen invasion and are considered the crucial mediators of an antiviral response. Following a viral infection, pathogens are detected by pattern recognition receptors (PRRs), including various types of Toll-like receptors (TLRs) and cytosolic PRRs, in the host cell. The downstream signaling pathways from PRR activation phosphorylate ubiquitously expressed IRFs (IRF1, IRF3, and IRF7) through IRF kinases, such as TANK-binding kinase 1 (TBK1). Phosphorylated IRFs are translocated to the nucleus, where they bind to IRF-E motifs and activate the transcription of type I IFNs. In addition to IFNs, IRF1 and IRF5 have been found to induce transcription of pro-inflammatory cytokines. Some IRFs, like IRF2 and IRF4, regulate the activation of IFNs and pro-inflammatory cytokines through inhibition. IRF2 contains a repressor region that downregulates expression of type I IFNs. IRF4 competes with IRF5, and inhibits its sustained activity. Role in immune cell development In addition to the signal transduction functions of IRFs in innate immune responses, multiple IRFs (IRF1, IRF2, IRF4, and IRF8) play essential roles in the development of immune cells, including dendritic, myeloid, natural killer (NK), B, and T cells. Dendritic cells (DC) are a group of heterogeneous cells that can be divided into different subsets with distinct functions and developmental programs. IRF4 and IRF8 specify and direct the differentiation of different subsets of DCs by stimulating subset-specific gene expression. For example, IRF4 is required for the generation of CD4+ DCs, whereas IRF8 is essential for CD8α+ DCs. In addition to IRF4 and IRF8, IRF1 and IRF2 are also involved in DC subset development. IRF8 has also been implicated in the promotion of macrophage development from common myeloid progenitors (CMPs) and the inhibition of granulocytic differentiation during the divergence of granulocytes and monocytes. IRF8 and IRF4 are also involved in the regulation of B and T-cell development at multiple stages. IRF8 and IRF4 function redundantly to drive common lymphoid progenitors (CLPs) to the B-cell lineage. IRF8 and IRF4 are also required in the regulation of germinal center (GC) B cell differentiation. Role in diseases IRFs are critical regulators of immune responses and immune cell development, and abnormalities in IRF expression and function have been linked to numerous diseases. 
Due to their critical role in IFN type I activation, IRFs are implicated in autoimmune diseases that are linked to activation of IFN type I system, such as systemic lupus erythematosus (SLE). Accumulating evidence also indicates that IRFs play a major role in the regulation of cellular responses linked to oncogenesis. In addition to autoimmune diseases and cancers, IRFs are also found to be involved in the pathogenesis of metabolic, cardiovascular, and neurological diseases, such as hepatic steatosis, diabetes, cardiac hypertrophy, atherosclerosis, and stroke. Genes IRF1 IRF2 IRF3 IRF4 IRF5 IRF6 IRF7 IRF8 IRF9 See also Interferon References External links Transcription factors Protein families
Interferon regulatory factors
[ "Chemistry", "Biology" ]
967
[ "Transcription factors", "Gene expression", "Protein classification", "Signal transduction", "Protein families", "Induced stem cells" ]
5,533,371
https://en.wikipedia.org/wiki/E-selectin
E-selectin, also known as CD62 antigen-like family member E (CD62E), endothelial-leukocyte adhesion molecule 1 (ELAM-1), or leukocyte-endothelial cell adhesion molecule 2 (LECAM2), is a selectin cell adhesion molecule expressed only on endothelial cells activated by cytokines. Like other selectins, it plays an important part in inflammation. In humans, E-selectin is encoded by the SELE gene. Structure E selectin has a cassette structure: an N-terminal, C-type lectin domain, an EGF (epidermal-growth-factor)-like domain, 6 Sushi domain (SCR repeat) units, a transmembrane domain (TM) and an intracellular cytoplasmic tail (cyto). The three-dimensional structure of the ligand-binding region of human E-selectin has been determined at 2.0 Å resolution in 1994. The structure reveals limited contact between the two domains and a coordination of Ca2+ not predicted from other C-type lectins. Structure/function analysis indicates a defined region and specific amino-acid side chains that may be involved in ligand binding. The E-selectin bound to sialyl-LewisX (SLeX; NeuNAcα2,3Galβ1,4[Fucα1,3]GlcNAc) tetrasaccharide was solved in 2000. Gene and regulation In humans, E-selectin is encoded by the SELE gene. Its C-type lectin domain, EGF-like, SCR repeats, and transmembrane domains are each encoded by separate exons, whereas the E-selectin cytosolic domain derives from two exons. The E-selectin locus flanks the L-selectin locus on chromosome 1. Different from P-selectin, which is stored in vesicles called Weibel-Palade bodies, E-selectin is not stored in the cell and has to be transcribed, translated, and transported to the cell surface. The production of E-selectin is stimulated by the expression of P-selectin which in turn, is stimulated by tumor necrosis factor α (TNFα), interleukin-1 (IL-1) and lipopolysaccharide (LPS). It takes about two hours, after cytokine recognition, for E-selectin to be expressed on the endothelial cell's surface. Maximal expression of E-selectin occurs around 6–12 hours after cytokine stimulation, and levels returns to baseline within 24 hours. Shear forces are also found to affect E-selectin expression. A high laminar shear enhances acute endothelial cell response to interleukin-1β in naïve or shear-conditioned endothelial cells as may be found in the pathological setting of ischemia/reperfusion injury while conferring rapid E-selectin down regulation to protect against chronic inflammation. Phytoestrogens, plant compounds with estrogen-like biological activity, such as genistein, formononetin, biochanin A and daidzein, as well as a mixture of these phytoestrogens were found able to reduce E-selectin as well as VCAM-1 and ICAM-1 on cell surface and in culture supernatant. Ligands E-selectin recognizes and binds to sialylated carbohydrates present on the surface proteins of certain leukocytes. E-selectin ligands are expressed by neutrophils, monocytes, eosinophils, memory-effector T-like lymphocytes, and natural killer cells. Each of these cell types is found in acute and chronic inflammatory sites in association with expression of E-selectin, thus implicating E-selectin in the recruitment of these cells to such inflammatory sites. These carbohydrates include members of the Lewis X and Lewis A families found on monocytes, granulocytes, and T-lymphocytes. The glycoprotein ESL-1, present on neutrophils and myeloid cells, was the first counter-receptor for E-selectin to be described. 
It is a variant of the tyrosine kinase FGF glycoreceptor, raising the possibility that its binding to E-selectin is involved in initiating signaling in the bound cells. P-selectin glycoprotein ligand-1 (PSGL-1) derived from human neutrophils is also a high-efficiency ligand for endothelium-expressed E-selectin under flow. It mediates the rolling of leukocytes on the activated endothelium surrounding an inflamed tissue. Both ESL-1 and PSGL-1 should bear sialyl Lewis a/x in order to bind E/P-selectins. E-selectin is found to mediate the adhesion of tumor cells to endothelial cells, by binding to E-selectin ligands on the tumor cells. E-selectin ligands also play a role in cancer metastasis. The role of these two E-selectin ligands in metastasis in vivo is poorly defined and remains to be firmly demonstrated. PSGL-1 was detected on the surfaces of bone-metastatic prostate tumor cells, suggesting that it may have a functional role in the bone tropism of prostate tumor cells. In cancer cells, CD44, death receptor-3 (DR3), LAMP1, and LAMP2 were identified as E-selectin ligands present on colon cancer cells, and CD44v, Mac2-BP, and gangliosides were identified as E-selectin ligands present on breast cancer cells. On human neutrophils the glycosphingolipid NeuAcα2-3Galβ1-4GlcNAcβ1-3[Galβ1-4(Fucα1-3)GlcNAcβ1-3]2[Galβ1-4GlcNAcβ1-3]2Galβ1-4GlcβCer (and closely related structures) are functional E-selectin receptors. Function Role in inflammation During inflammation, E-selectin plays an important part in recruiting leukocytes to the site of injury. The local release of cytokines IL-1 and TNF-α by macrophages in the inflamed tissue induces the over-expression of E-selectin on endothelial cells of nearby blood vessels. Leukocytes in the blood expressing the correct ligand will bind with low affinity to E-selectin, also under the shear stress of blood flow, causing the leukocytes to "roll" along the internal surface of the blood vessel as temporary interactions are made and broken. As the inflammatory response progresses, chemokines released by injured tissue enter the blood vessels and activate the rolling leukocytes, which are now able to tightly bind to the endothelial surface and begin making their way into the tissue. P-selectin has a similar function, but is expressed on the endothelial cell surface within minutes as it is stored within the cell rather than produced on demand. Role in cancer E-selectin was first discovered as an transmembrane receptor induced in endothelial cells upon inflammatory stimulation which mediated adhesion of monocytic or HL60 leukemic cells. This led to the hypothesis that cancer cells secreted inflammatory cytokines such as IL-1β or TNFα to induce E-selectin at distant metastatic sites. This induction would enable circulating tumor cells to arrest at stimulated sites, roll along activated endothelium, extravasate, and form metastases. Studies since have showed that E-selectin binding to colon cancer cells correlates with increasing metastatic potential, and that cancer cells of multiple tumor types bind E-selectin using glycoprotein or glycolipid ligands normally expressed on immune cells. Studies have further described a mechanistic cascade wherein cancer cells first bind E-selectin at shear flow rates: E-selectin binding results in a velcro-like interaction allowing the cancer cells to engage higher affinity integrin binding that eventually results in a tight binding between tumor cells and the activated endothelium. 
While numerous pieces of in vitro and clinical evidence continue to support this hypothesis of E-selectin-mediated cancer metastasis, in vivo studies of cancer metastasis have shown that E-selectin knockout only minimally affects leukemic cell adhesion to bone immediately following injection, while experimental lung metastasis is not affected by the genetic deletion of E-selectin. Furthermore, studies have also shown that primary tumor growth is increased in E-selectin knockout mice. This paradox was more recently solved by a trio of studies showing that E-selectin is constitutively expressed only in the bone marrow endothelium, where it is thought to perform functions vital to hematopoiesis that are hijacked specifically by cells metastasizing to bone and not other sites. These data support ongoing clinical efforts to inhibit breast cancer bone metastasis with E-selectin-blocking agents. The complexity of E-selectin ligand biology may also play a role in these discrepant in vitro and in vivo results. At least 15 different glycoprotein and glycolipid substrates for E-selectin have been described on various cancer cells, while only the n-glycan Glg1 (Esl1) was shown to mediate bone metastasis. Other ligands or combinations thereof may result in distinct mechanisms during cancer metastasis. Beyond a direct interaction with tumor cells, E-selectin induction in response to cytokines locally secreted by cancer cells enables specific tumor targeting of sLeX-conjugated nanoparticles or thioaptamers containing anti-tumor payloads. In addition, E-selectin may also function to recruit monocytes to primary tumors or lung metastases to promote an inflammatory pro-tumor microenvironment. Blocking these interactions or enabling trafficking of CAR-T cells to E-selectin-positive sites may hold promise for future therapeutic development. Pathological relevance Critical illness polyneuromyopathy In cases of elevated blood glucose levels, such as in sepsis, E-selectin expression is higher than normal, resulting in greater microvascular permeability. The greater permeability leads to edema (swelling) of the skeletal endothelium (blood vessel linings), resulting in skeletal muscle ischemia (restricted blood supply) and eventually necrosis (cell death). This underlying pathology is the cause of the symptomatic disease critical illness polyneuromyopathy (CIPNM). Traditional Chinese herbal medicines, such as berberine, downregulate E-selectin. Pathogen attachment A study shows that the adherence of Porphyromonas gingivalis to human umbilical vein endothelial cells increases with the induction of E-selectin expression by TNF-α. An antibody to E-selectin and sialyl LewisX suppressed P. gingivalis adherence to stimulated HUVECs. P. gingivalis mutants lacking the OmpA-like proteins Pgm6/7 had reduced adherence to stimulated HUVECs, but fimbriae-deficient mutants were not affected. E-selectin-mediated P. gingivalis adherence activated endothelial exocytosis. These results suggest that the interaction between host E-selectin and pathogen Pgm6/7 mediates P. gingivalis adherence to endothelial cells and may trigger vascular inflammation.
Acute coronary syndrome The immunohistochemical expressions of E-selectin and PECAM-1 were significantly increased at intima in vulnerable plaques of acute coronary syndrome (ACS) group, especially in neovascular endothelial cells, and positively correlated with inflammatory cell density, suggesting that PECAM-1 and E-selectin might play an important role in inflammatory reaction and development of vulnerable plaque. E-selectin Ser128Arg polymorphism is associated with ACS, and it might be a risk factor for ACS. Nicotine-mediated induction Smoking is highly correlated with enhanced likelihood of atherosclerosis by inducing endothelial dysfunction. In endothelial cells, various cell-adhesion molecules including E-selectin, are shown to be upregulated upon exposure to nicotine, the addictive component of tobacco smoke. Nicotine-stimulated adhesion of monocytes to endothelial cells is dependent on the activation of α7-nAChRs, β-Arr1 and cSrc regulated increase in E2F1-mediated transcription of E-selectin gene. Therefore, agents such as RRD-251 that can target activity of E2F1 may have potential therapeutic benefit against cigarette smoke induced atherosclerosis. Cerebral aneurysm It's also found that E-selectin expression increased in human ruptured cerebral aneurysm tissues. E-selectin might be an important factor involved in the process of cerebral aneurysm formation and rupture, by promoting inflammation and weakening cerebral artery walls. As a biomarker E-selectin is also an emerging biomarker for the metastatic potential of some cancers including colorectal cancer and recurrences. References External links Cell adhesion proteins Clusters of differentiation Glycoproteins Transmembrane receptors Selectins Biomarkers
E-selectin
[ "Chemistry", "Biology" ]
2,866
[ "Transmembrane receptors", "Biomarkers", "Signal transduction", "Glycoproteins", "Glycobiology" ]
5,533,592
https://en.wikipedia.org/wiki/Scholl%20reaction
The Scholl reaction is a coupling reaction between two arene compounds with the aid of a Lewis acid and a protic acid. It is named after its discoverer, Roland Scholl, a Swiss chemist. In 1910 Scholl reported the synthesis of a quinone and of perylene from naphthalene, both with aluminum chloride. Perylene was also synthesised from 1,1’-binaphthalene in 1913. The synthesis of benzanthrone was reported in 1912. The protic acid in the Scholl reaction is often an impurity in the Lewis acid and is also formed in the course of a Scholl reaction. Reagents are iron(III) chloride in dichloromethane, copper(II) chloride, PIFA and boron trifluoride etherate in dichloromethane, molybdenum(V) chloride, and lead tetraacetate with BF3 in acetonitrile. Given the high reaction temperature and the requirement for strongly acidic catalysts, the chemical yield is often low and the method is not a popular one. Intramolecular reactions fare better than intermolecular ones, for instance in the organic synthesis of 9-phenylfluorene, or the formation of the pyrene dibenzo(a,l)pyrene from the anthracene 1-phenylbenz(a)anthracene (66% yield). One study showed that the reaction lends itself to cascade reactions to form more complex polycyclic aromatic hydrocarbons. In certain applications such as triphenylene synthesis this reaction is advocated as an alternative to the Suzuki reaction. A recurring problem is oligomerization of the product, which can be prevented by blocking the reactive positions with tert-butyl substituents. Reaction mechanism The exact reaction mechanism is not known but could very well proceed through an arenium ion. Just as in electrophilic aromatic substitution, activating groups such as methoxy improve yield and selectivity. Indeed, oxidative coupling of phenols is a research strategy in modern organic synthesis. Two mechanisms may compete. In step one of a radical cation mechanism, a radical cation is formed from one reaction partner by oxidation; in step two, the radical ion attacks the second neutral partner in a substitution reaction and a new radical ion is formed, with one ring bearing the positive charge and the other the radical position. In step three, dihydrogen is split off with rearomatisation to the biaryl compound. In the arenium ion mechanism, one reaction partner is protonated to an arenium ion which then attacks the second reaction partner. The arenium ion can also be formed by attack of the Lewis acid. The mechanisms are difficult to distinguish because many Lewis acids can behave as oxidants. Reactions taking place at room temperature with well-known one-electron oxidizing agents likely proceed through a radical cation mechanism, and reactions requiring elevated temperatures likely proceed through an arenium ion mechanism. References See also Friedel-Crafts alkylation Substitution reactions Name reactions
Scholl reaction
[ "Chemistry" ]
643
[ "Coupling reactions", "Name reactions", "Organic reactions" ]
5,533,690
https://en.wikipedia.org/wiki/Infosphere
Infosphere is a metaphysical realm of information, data, knowledge, and communication, populated by informational entities called inforgs (or, informational organisms). Infosphere is portmanteau of information and -sphere. Though one example is cyberspace, infospheres are not limited to purely online environments; they can include both offline and analogue information. History The first documented use of the infosphere was in 1970 by Kenneth E. Boulding, who viewed it as one among the six "spheres" in his own system (the others being the sociosphere, biosphere, hydrosphere, lithosphere, and atmosphere). Boulding claimed:[T]he infosphere...consists of inputs and outputs of conversation, books, television, radio, speeches, church services, classes, and lectures as well as information received from the physical world by personal observation.... It is clearly a segment of the sociosphere in its own right, and indeed it has considerable claim to dominate the other segments. It can be argued that development of any kind is essentially a learning process and that it is primarily dependent on a network of information flows.In 1971, the term was used in a Time Magazine book review by R.Z. Sheppard, who wrote:In much the way that fish cannot conceptualize water or birds the air, man barely understands his infosphere, that encircling layer of electronic and typographical smog composed of cliches from journalism, entertainment, advertising and government.In 1980, it was used by Alvin Toffler in his book The Third Wave, in which he writes:What is inescapably clear, whatever we choose to believe, is that we are altering our infosphere fundamentally...we are adding a whole new strata of communication to the social system. The emerging Third Wave infosphere makes that of the Second Wave era - dominated by its mass media, the post office, and the telephone - seem hopelessly primitive by contrast.Toffler's definition proved to be prophetic, as the use of infosphere in the 1990s expanded beyond media to speculate about the common evolution of the Internet, society and culture. In his book Digital Dharma, Steven Vedro writes:Emerging from what French philosopher-priest Pierre Teilhard de Chardin called the shared noosphere of collective human thought, invention and spiritual seeking, the Infosphere is sometimes used to conceptualize a field that engulfs our physical, mental and etheric bodies; it affects our dreaming and our cultural life. Our evolving nervous system has been extended, as media sage Marshall McLuhan predicted in the early 1960s, into a global embrace.In 1999, the term was reinterpreted by Luciano Floridi, on the basis of biosphere, to denote the whole informational environment constituted by all informational entities (including informational agents), their properties, interactions, processes, and mutual relations. Floridi writes:[T]he computerised description and control of the physical environment, together with the digital construction of a synthetic world, are, finally, intertwined with a fourth area of application, represented by the transformation of the encyclopadeic macrocosm of data, information, ideas, knowledge, beliefs, codified experiences, memories, images, artistic interpretations, and other mental creations into a global infosphere. The infosphere is the whole system of services and documents, encoded in any semiotic and physical media, whose contents include any sort of data, information and knowledge...with no limitations either in size, typology, or logical structure. 
Hence it ranges from alphanumeric texts (i.e., texts, including letters, numbers, and diacritic symbols) and multimedia products to statistical data, from films and hypertexts to whole text-banks and collections of pictures, from mathematical formulae to sounds and videoclips.To him, it is an environment comparable to, but different from cyberspace (which is only one of its sub-regions, as it were), since it also includes offline and analogue spaces of information. According to Floridi, it is possible to equate the infosphere to the totality of Being; this equation leads him to an informational ontology. Manipulation of the Infosphere The manipulation of the infosphere is subject to metaphysics and its rules. Information is considered to be Shannon information and is treated in a physical sense separate from energy and matter. The manipulations to the infosphere include the erasing, transfer, duplication, and destruction of information. Use in popular culture The term was used by Dan Simmons in the science-fiction saga Hyperion (1989) to indicate what the Internet could become in the future: a place parallel, virtual, formed of billions of networks, with "artificial life" on various scales, from what is equivalent to an insect (small programs) to what is equivalent to a god (artificial intelligences), whose motivations are diverse, seeking to both help mankind and harm it. In the animated sitcom Futurama, the Infosphere is a huge sphere floating in space, in which a species of giant, talking, floating brains attempts to store all of the information known in the universe. The IBM Software Group created the InfoSphere brand in 2008 for its Information Management software products. See also Noosphere Semiosphere Ideosphere Simulated reality Umwelt Wikipedia References External links The Infosphere, The Futurama Wiki Blog by Steven Vedro based on his book, Digital Dharma L. Floridi, A Look into the Future Impact of ICT on our Lives Preface of L. Floridi, Philosophy and Computing: An Introduction. London/New York: Routledge, 1999. L. Floridi Ethics in the Infosphere Ethics of science and technology Information society Ontology (information science) Science fiction themes
Infosphere
[ "Technology" ]
1,204
[ "Computing and society", "Information society", "Ethics of science and technology" ]
5,533,997
https://en.wikipedia.org/wiki/FX8010
The FX8010 is a DSP architecture for real-time audio effects, designed by E-mu around their E-mu 10K1 chip. One key feature of the architecture is that it provides no branching instructions; instead, the whole program runs in a sample-locked loop, so a constant number of instructions is executed per sample. Instructions carry a conditional execution flag, akin to some RISC processors (notably ARM), which keeps the runtime constant. External links kxProject documentation page - Some documentation available here Digital signal processors
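The branch-free, sample-locked execution model can be illustrated with a small sketch, written in Python purely for readability; the FX8010 itself executes a fixed DSP instruction set, and the two-instruction program, register names and flag handling below are invented for the illustration rather than taken from the chip's documentation.

def run_sample(regs, program):
    # One pass over a fixed-length program per audio sample: every instruction
    # executes, and a condition flag only gates whether its result is written
    # back, so the instruction count (and runtime) per sample is constant.
    for write_enable, op, dst, a, b in program:
        result = op(regs[a], regs[b])
        if write_enable(regs):              # predication instead of branching
            regs[dst] = result
    return regs

# Hypothetical two-instruction program: scale the input, then clip it at 1.0.
program = [
    (lambda r: True,           lambda x, y: x * y, "out", "in",  "gain"),
    (lambda r: r["out"] > 1.0, lambda x, y: y,     "out", "out", "one"),
]

regs = {"in": 0.0, "gain": 0.5, "one": 1.0, "out": 0.0}
for sample in [0.2, 3.0, -0.7]:             # the whole program runs once per sample
    regs["in"] = sample
    run_sample(regs, program)
    print(regs["out"])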
FX8010
[ "Technology" ]
118
[ "Computing stubs", "Computer hardware stubs" ]
5,534,001
https://en.wikipedia.org/wiki/Matching%20pursuit
Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary D. The basic idea is to approximately represent a signal f from Hilbert space H as a weighted sum of finitely many functions (called atoms) taken from D. An approximation with N atoms has the form f(t) ≈ f_N(t) := Σ_{n=1..N} a_n g_{γn}(t), where g_{γn} is the γn-th column of the matrix D and a_n is the scalar weighting factor (amplitude) for the atom g_{γn}. Normally, not every atom in D will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small, where the residual after calculating γ_N and a_N is denoted by R_{N+1} = f − f_N. If R_n converges quickly to zero, then only a few atoms are needed to get a good approximation to f. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is min_x ||f − Dx||_2^2 subject to ||x||_0 ≤ N, where ||x||_0 is the L0 pseudo-norm (i.e. the number of nonzero elements of x). In the previous notation, the nonzero entries of x are x_{γn} = a_n. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used. For comparison, consider the Fourier transform representation of a signal - this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signal f. By taking an extremely redundant dictionary, we can look in it for atoms (functions) that best match a signal f. The algorithm If D contains a large number of vectors, searching for the most sparse representation of f is computationally unacceptable for practical applications. In 1993, Mallat and Zhang proposed a greedy solution that they named "Matching Pursuit." For any signal f and any dictionary D, the algorithm iteratively generates a sorted list of atom indices and weighting scalars, which form the sub-optimal solution to the problem of sparse signal representation. Input: signal f(t), dictionary D with normalized columns g_i. Output: list of coefficients a_n and indices γn for the corresponding atoms. Initialization: R_1 ← f(t); n ← 1. Repeat: find g_{γn} ∈ D with maximum inner product |⟨R_n, g_{γn}⟩|; a_n ← ⟨R_n, g_{γn}⟩; R_{n+1} ← R_n − a_n g_{γn}; n ← n + 1. Until the stop condition is met (for example: ||R_n|| below a chosen threshold). Return the list of indices γn and coefficients a_n. (A minimal code sketch of this loop is given at the end of this entry.) In signal processing, the concept of matching pursuit is related to statistical projection pursuit, in which "interesting" projections are found; ones that deviate more from a normal distribution are considered to be more interesting. Properties The algorithm converges (i.e. ||R_n|| → 0) for any f that is in the space spanned by the dictionary. The error decreases monotonically. As at each step the residual is orthogonal to the selected atom, the energy conservation equation is satisfied for each N: ||f||^2 = Σ_{n=1..N} |a_n|^2 + ||R_{N+1}||^2.
In the case that the vectors in D are orthonormal, rather than being redundant, MP is a form of principal component analysis. Applications Matching pursuit has been applied to signal, image and video coding, shape representation and recognition, 3D object coding, and in interdisciplinary applications like structural health monitoring. It has been shown that it performs better than DCT-based coding for low bit rates in both coding efficiency and image quality. The main problem with matching pursuit is the computational complexity of the encoder. In the basic version of the algorithm, the large dictionary needs to be searched at each iteration. Improvements include the use of approximate dictionary representations and suboptimal ways of choosing the best match at each iteration (atom extraction). The matching pursuit algorithm is used in MP/SOFT, a method of simulating quantum dynamics. MP is also used in dictionary learning. In this algorithm, atoms are learned from a database (in general, natural scenes such as usual images) and not chosen from generic dictionaries. A very recent application of MP is its use in linear computation coding to speed up the computation of matrix-vector products. Extensions A popular extension of Matching Pursuit (MP) is its orthogonal version: Orthogonal Matching Pursuit (OMP). The main difference from MP is that after every step, all the coefficients extracted so far are updated, by computing the orthogonal projection of the signal onto the subspace spanned by the set of atoms selected so far. This can lead to results better than standard MP, but requires more computation. OMP was shown to have stability and performance guarantees under certain restricted isometry conditions. The incremental multi-parameter algorithm (IMP), published three years before MP, works in the same way as OMP. Extensions such as Multichannel MP and Multichannel OMP allow one to process multicomponent signals. An obvious extension of Matching Pursuit is over multiple positions and scales, by augmenting the dictionary to be that of a wavelet basis. This can be done efficiently using the convolution operator without changing the core algorithm. Matching pursuit is related to the field of compressed sensing and has been extended by researchers in that community. Notable extensions are Orthogonal Matching Pursuit (OMP), Stagewise OMP (StOMP), compressive sampling matching pursuit (CoSaMP), Generalized OMP (gOMP), and Multipath Matching Pursuit (MMP). See also CLEAN algorithm Image processing Least-squares spectral analysis Principal component analysis (PCA) Projection pursuit Signal processing Sparse approximation Stepwise regression References Multivariate statistics Signal processing
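To make the greedy loop described under "The algorithm" above concrete, the following is a minimal sketch in Python with NumPy. It is an illustrative implementation of the pseudocode only, not the reference code of any library; the stopping tolerance, the random test dictionary and all names are assumptions made for the example.

import numpy as np

def matching_pursuit(f, D, tol=1e-6, max_atoms=None):
    # Greedy MP: returns indices gamma_n and coefficients a_n such that
    # f is approximately sum_n a_n * D[:, gamma_n]; columns of D are assumed normalized.
    residual = f.astype(float).copy()              # R_1 <- f
    indices, coeffs = [], []
    max_atoms = max_atoms or D.shape[1]
    for _ in range(max_atoms):
        inner = D.T @ residual                     # <R_n, g_i> for every atom
        gamma = int(np.argmax(np.abs(inner)))      # atom with the largest |inner product|
        a = inner[gamma]                           # a_n <- <R_n, g_gamma>
        residual = residual - a * D[:, gamma]      # R_{n+1} <- R_n - a_n g_gamma
        indices.append(gamma)
        coeffs.append(a)
        if np.linalg.norm(residual) < tol:         # stop condition: ||R_n|| < threshold
            break
    return indices, coeffs, residual

# Small synthetic test: a random normalized dictionary and a 3-atom signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                     # normalize the atoms
x_true = np.zeros(256)
x_true[[5, 40, 200]] = [1.0, -0.5, 0.25]
f = D @ x_true
idx, a, r = matching_pursuit(f, D, tol=1e-3)
print(sorted(set(idx)), round(float(np.linalg.norm(r)), 4))

Because the dictionary is redundant, plain MP may select a few extra atoms beyond the three used to build the test signal; Orthogonal Matching Pursuit would differ only in re-solving a small least-squares problem over all atoms selected so far after each selection.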
Matching pursuit
[ "Technology", "Engineering" ]
1,171
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
5,534,071
https://en.wikipedia.org/wiki/Whangee
Whangee ( ) refers to any of over forty Asian grasses of the genus Phyllostachys, a genus of bamboos. They are a hardy evergreen plant from Japan, China, and the Himalayas whose woody stems are sometimes used to make canes and umbrella handles. The word derives from the Chinese (Mandarin) huáng lí. It can also refer to a cane made from whangee. John Steed, the dapper secret agent from television's The Avengers, carried an umbrella with a whangee handle made by British Umbrella maker Swaine Adeney Brigg. Charlie Chaplin's character, The Little Tramp, is famously known for his whangee cane. The firm of Dunhill created custom smoking pipes and cigarette holders out of whangee, lacquering the surface of the plant stems and adding a black plastic or Bakelite mouthpiece. Terry-Thomas, the well-known British comedic actor, habitually used an 8-inch (20 cm)-long custom black lacquered whangee cigarette holder. It became his trademark and is seen in most of his publicity photographs. His collection included a valuable holder with a spiral of diamonds set in gold over the black lacquered whangee. It was stolen from his dressing room by a young Jimmy Tarbuck and was recovered in a damaged state. Bertie Wooster in The Inimitable Jeeves (chapter 1) says, "Then bring me my whangee, my yellowest shoes, and the old green Homburg. I'm going into the park to do pastoral dances." The author, P.G. Wodehouse, does not elaborate on the meaning of whangee, assuming that any of his audience would immediately know to what it refers. Sylvester McCoy used an umbrella with a whangee handle during his early days as The Doctor. References Bamboo Plant common names
Whangee
[ "Biology" ]
387
[ "Plants", "Plant common names", "Common names of organisms" ]
5,534,333
https://en.wikipedia.org/wiki/Gauss%27s%20principle%20of%20least%20constraint
The principle of least constraint is one variational formulation of classical mechanics enunciated by Carl Friedrich Gauss in 1829, equivalent to all other formulations of analytical mechanics. Intuitively, it says that the acceleration of a constrained physical system will be as similar as possible to that of the corresponding unconstrained system. Statement The principle of least constraint is a least squares principle stating that the true accelerations of a mechanical system of masses is the minimum of the quantity where the jth particle has mass , position vector , and applied non-constraint force acting on the mass. The notation indicates time derivative of a vector function , i.e. position. The corresponding accelerations satisfy the imposed constraints, which in general depends on the current state of the system, . It is recalled the fact that due to active and reactive (constraint) forces being applied, with resultant , a system will experience an acceleration . Connections to other formulations Gauss's principle is equivalent to D'Alembert's principle. The principle of least constraint is qualitatively similar to Hamilton's principle, which states that the true path taken by a mechanical system is an extremum of the action. However, Gauss's principle is a true (local) minimal principle, whereas the other is an extremal principle. Hertz's principle of least curvature Hertz's principle of least curvature is a special case of Gauss's principle, restricted by the three conditions that there are no externally applied forces, no interactions (which can usually be expressed as a potential energy), and all masses are equal. Without loss of generality, the masses may be set equal to one. Under these conditions, Gauss's minimized quantity can be written The kinetic energy is also conserved under these conditions Since the line element in the -dimensional space of the coordinates is defined the conservation of energy may also be written Dividing by yields another minimal quantity Since is the local curvature of the trajectory in the -dimensional space of the coordinates, minimization of is equivalent to finding the trajectory of least curvature (a geodesic) that is consistent with the constraints. Hertz's principle is also a special case of Jacobi's formulation of the least-action principle. Philosophy Hertz designed the principle to eliminate the concept of force and dynamics, so that physics would consist exclusively of kinematics, of material points in constrained motion. He was critical of the "logical obscurity" surrounding the idea of force.I would mention the experience that it is exceedingly difficult to expound to thoughtful hearers that very introduction to mechanics without being occasionally embarrassed, without feeling tempted now and again to apologize, without wishing to get as quickly as possible over the rudiments, and on to examples which speak for themselves. I fancy that Newton himself must have felt this embarrassment...To replace the concept of force, he proposed that the acceleration of visible masses are to be accounted for, not by force, but by geometric constraints on the visible masses, and their geometric linkages to invisible masses. In this, he understood himself as continuing the tradition of Cartesian mechanical philosophy, such as Boltzmann's explaining of heat by atomic motion, and Maxwell's explaining of electromagnetism by ether motion. 
Even though both atoms and the ether were not observable except via their effects, they were successful in explaining apparently non-mechanical phenomena mechanically. In trying to explain away "mechanical force", Hertz was "mechanizing classical mechanics". See also Appell's equation of motion Literature References External links A modern discussion and proof of Gauss's principle Gauss principle in the Encyclopedia of Mathematics Hertz principle in the Encyclopedia of Mathematics Classical mechanics
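The quantity minimized by Gauss's principle can be stated in standard notation as follows; this is the conventional textbook form of the constraint functional, written in LaTeX, not a quotation from the article:

Z\left(\ddot{\mathbf r}_1,\dots,\ddot{\mathbf r}_N\right) \;=\; \sum_{j=1}^{N} m_j \left\lVert \ddot{\mathbf r}_j - \frac{\mathbf F_j}{m_j} \right\rVert^{2},

where m_j, \mathbf r_j and \mathbf F_j are the mass, position vector and applied (non-constraint) force of the j-th particle, and the true accelerations minimize Z over all accelerations compatible with the constraints. With no constraints the minimum is Z = 0, recovering Newton's second law m_j \ddot{\mathbf r}_j = \mathbf F_j; Hertz's special case corresponds to setting \mathbf F_j = 0 and m_j = 1 for all j.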
Gauss's principle of least constraint
[ "Physics" ]
763
[ "Mechanics", "Classical mechanics" ]
5,534,425
https://en.wikipedia.org/wiki/Transverse%20mass
The transverse mass is a useful quantity to define for use in particle physics as it is invariant under Lorentz boost along the z direction. In natural units, it is: m_T^2 = m^2 + p_x^2 + p_y^2 = E^2 − p_z^2, where the z-direction is along the beam pipe, and so p_x and p_y are the momentum components perpendicular to the beam pipe and m is the (invariant) mass. This definition of the transverse mass is used in conjunction with the definition of the (directed) transverse energy E_T = E p_T/|p|, with the transverse momentum vector p_T = (p_x, p_y). It is easy to see that for vanishing mass (m = 0) the three quantities are the same: E_T = p_T = m_T. The transverse mass is used together with the rapidity y, transverse momentum p_T and azimuthal angle φ in the parameterization of the four-momentum of a single particle: (E, p_x, p_y, p_z) = (m_T cosh y, p_T cos φ, p_T sin φ, m_T sinh y). Using these definitions (in particular for E and p_z) gives for the mass of a two-particle system: M^2 = m_1^2 + m_2^2 + 2(E_1 E_2 − p_1·p_2). Looking at the transverse projection of this system (by setting p_z,1 = p_z,2 = 0) gives: M_T^2 = m_1^2 + m_2^2 + 2(E_T,1 E_T,2 − p_T,1·p_T,2). These are also the definitions that are used by the software package ROOT, which is commonly used in high energy physics. Transverse mass in two-particle systems Hadron collider physicists use another definition of transverse mass (and transverse energy) in the case of a decay into two particles. This is often used when one particle cannot be detected directly but is only indicated by missing transverse energy. In that case, the total energy is unknown and the above definition cannot be used. The two-particle transverse mass is then defined as M_T^2 ≡ (E_T,1 + E_T,2)^2 − |p_T,1 + p_T,2|^2, where E_T,i is the transverse energy of each daughter, a positive quantity defined using its true invariant mass as E_T,i^2 = m_i^2 + p_T,i^2, which is coincidentally the definition of the transverse mass for a single particle given above. Using these two definitions, one also gets the form: M_T^2 = m_1^2 + m_2^2 + 2(E_T,1 E_T,2 − p_T,1·p_T,2) (but with slightly different definitions for E_T!) For massless daughters, where m_1 = m_2 = 0, we again have E_T,i = p_T,i, and the transverse mass of the two-particle system becomes: M_T^2 = 2 p_T,1 p_T,2 (1 − cos φ), where φ is the angle between the daughters in the transverse plane. The distribution of M_T has an end-point at the invariant mass M of the system, with M_T ≤ M. This has been used to determine the W boson mass at the Tevatron. (A small numerical check of the two-particle definition is given at the end of this entry.) References - See sections 38.5.2 and 38.6.1 for definitions of transverse mass. - See sections 43.5.2 and 43.6.1 for definitions of transverse mass. Particle physics Kinematics Special relativity
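As a small numerical check of the two-particle transverse mass defined above, the following Python sketch evaluates both the general definition and the massless closed form; the momenta and angles are invented example values, and the function is illustrative rather than taken from ROOT or any other package.

import math

def transverse_mass_two_body(pt1, phi1, m1, pt2, phi2, m2):
    # M_T^2 = (E_T1 + E_T2)^2 - |pT1 + pT2|^2, with E_Ti^2 = m_i^2 + p_Ti^2
    et1 = math.hypot(m1, pt1)
    et2 = math.hypot(m2, pt2)
    px = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
    py = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
    mt2 = (et1 + et2) ** 2 - (px ** 2 + py ** 2)
    return math.sqrt(max(mt2, 0.0))

# Massless daughters: M_T^2 reduces to 2 pT1 pT2 (1 - cos dphi).
pt1, pt2, dphi = 40.0, 35.0, 2.5        # GeV and radians, example values only
mt_general = transverse_mass_two_body(pt1, 0.0, 0.0, pt2, dphi, 0.0)
mt_closed_form = math.sqrt(2 * pt1 * pt2 * (1 - math.cos(dphi)))
print(mt_general, mt_closed_form)       # the two expressions agree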
Transverse mass
[ "Physics", "Technology" ]
446
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Physical systems", "Special relativity", "Motion (physics)", "Mechanics", "Particle physics", "Theory of relativity", "Particle physics stubs" ]
5,534,434
https://en.wikipedia.org/wiki/Nitrate%20reductase
Nitrate reductases are molybdoenzymes that reduce nitrate () to nitrite (). This reaction is critical for the production of protein in most crop plants, as nitrate is the predominant source of nitrogen in fertilized soils. Types Eukaryotic Eukaryotic nitrate reductases are part of the sulfite oxidase family of molybdoenzymes. They transfer electrons from NADH or NADPH to nitrate. Prokaryotic Prokaryotic nitrate reductases belong to the DMSO reductase family of molybdoenzymes and have been classified into three groups, assimilatory nitrate reductases (Nas), respiratory nitrate reductase (Nar), and periplasmic nitrate reductases (Nap). The active site of these enzymes is a molybdenum ion that is bound to the four thiolate functional groups of two pterin molecules. The coordination sphere of the molybdenum ion is completed by one amino-acid side chain and oxygen and/or sulfur ligands. In Nap, the molybdenum is covalently attached to the protein by a cysteine side chain, and an aspartate side chain in Nar. Structure Prokaryotic nitrate reductases have two major types, transmembrane nitrate reductases (NAR) and periplasmic nitrate reductases (NAP). NAR allows for proton translocation across the cellular membrane and can contribute to the generation of ATP by the proton motive force. NAP cannot do so. The transmembrane respiratory nitrate reductase is composed of three subunits; an 1 alpha, 1 beta and 2 gamma. It can substitute for the NRA enzyme in Escherichia coli, allowing it to use nitrate as an electron acceptor for anaerobic respiration. A transmembrane nitrate reductase that can function as a proton pump (similar to the case of anaerobic respiration) has been discovered in the diatom Thalassiosira weissflogii. The nitrate reductase of higher plants, algae, and fungi is a homodimeric cytosolic protein with five conserved domains in each monomer: 1) an Mo-MPT domain that contains the single molybdopterin cofactor, 2) a dimer interface domain, 3) a cytochrome b domain, and 4) an NADH-binding domain that combines with 5) an FAD-binding domain to form the cytochrome b reductase fragment. There exists a Glycophosphatidylinositol-anchored variant that is found on the outer face of the plasma membrane. Its function is not clear. Mechanism In prokaryotic periplasmic nitrate reductase, the nitrate anion binds to Mo(IV). Oxygen transfer yields an Mo(VI) oxo intermediate with release of nitrite. Reduction of the Mo oxide and protonolysis removes the oxo group, regenerating Mo(IV). Similar to the prokaryotic nitrate reduction mechanism, in eukaryotic nitrate reductase, an oxygen in nitrate binds to Mo in the (IV) oxidation state, displacing a hydroxide ion. Then the Mo d-orbital electrons flip over, creating a multiple bond between Mo(VI) and that oxygen, ejecting nitrite. The Mo(VI) double bond to oxygen is reduced by NAD(P)H passed through the intramolecular transport chain. Regulation Nitrate reductase (NR) is regulated at the transcriptional and translational levels induced by light, nitrate, and possibly a negative feedback mechanism. First, nitrate assimilation is initiated by the uptake of nitrate from the root system, reduced to nitrite by nitrate reductase, and then nitrite is reduced to ammonia by nitrite reductase. Ammonia then goes into the GS-GOGAT pathway to be incorporated into amino acids. When the plant is under stress, instead of reducing nitrate via NR to be incorporated into amino acids, the nitrate is reduced to nitric oxide which can have many damaging effects on the plant. 
Thus, the importance of regulating nitrate reductase activity is to limit the amount of nitric oxide being produced. Inactivation of nitrate reductase The inactivation of nitrate reductase has many steps and many different signals that aid in the inactivation of the enzyme. Specifically in spinach, the very first step of nitrate reductase inactivation is the phosphorylation of NR on the 543-serine residue. The very last step of nitrate reductase inactivation is the binding of the 14-3-3 adapter protein, which is initiated by the presence of Mg2+ and Ca2+. Higher plants and some algae post-translationally regulate NR by phosphorylation of serine residues and subsequent binding of a 14-3-3 protein. Anoxic conditions Studies were done measuring the nitrate uptake and nitrate reductase activity in anoxic conditions to see if there was a difference in activity level and tolerance to anoxia. These studies found that nitrate reductase, in anoxic conditions improves the plants tolerance to being less aerated. This increased activity of nitrate reductase was also related to an increase in nitrite release in the roots. The results of this study showed that the dramatic increase in nitrate reductase in anoxic conditions can be directly attributed to the anoxic conditions inducing the dissociation of 14-3-3 protein from NR and the dephosphorylation of the nitrate reductase. Applications Nitrate reductase activity can be used as a biochemical tool for predicting grain yield and grain protein production. Nitrate reductase can be used to test nitrate concentrations in biofluids. Nitrate reductase promotes amino acid production in tea leaves. Under south Indian conditions, it is reported that tea plants sprayed with various micronutrients (like Zn, Mn and B) along with Mo enhanced the amino acid content of tea shoots and also the crop yield. References External links Enzymes Integral membrane proteins EC 1.7.99 Protein families
Nitrate reductase
[ "Biology" ]
1,310
[ "Protein families", "Protein classification" ]
5,534,542
https://en.wikipedia.org/wiki/Kush%20%28cannabis%29
Kush generally refers to a pure or hybrid Cannabis indica strain. Pure C. indica strains include Afghan Kush, Hindu Kush, Green Kush, and Purple Kush. Hybrid strains of C. indica include Blueberry Kush and Golden Jamaican Kush. The term "kush" is now also used as a slang word for cannabis. The origins of Kush Cannabis are from landrace plants mainly in Afghanistan, Northern Pakistan and North-Western India with the name coming from the Hindu Kush mountain range. "Hindu Kush" strains of Cannabis were taken to the United States in the mid-to-late 1970s and continue to be available there to the present day. Popular kush strains include OG Kush, Bubba Kush, and Purple Kush. See also Medical cannabis References Cannabis strains Cannabis in Afghanistan Cannabis in Pakistan
Kush (cannabis)
[ "Biology" ]
173
[ "Cannabis strains", "Biopiracy" ]
5,534,558
https://en.wikipedia.org/wiki/Design%20for%20assembly
Design for assembly (DFA) is a process by which products are designed with ease of assembly in mind. If a product contains fewer parts it will take less time to assemble, thereby reducing assembly costs. In addition, if the parts are provided with features which make it easier to grasp, move, orient and insert them, this will also reduce assembly time and assembly costs. The reduction of the number of parts in an assembly has the added benefit of generally reducing the total cost of parts in the assembly. This is usually where the major cost benefits of the application of design for assembly occur. Approaches Design for assembly can take different forms. In the 1960s and 1970s various rules and recommendations were proposed in order to help designers consider assembly problems during the design process. Many of these rules and recommendations were presented together with practical examples showing how assembly difficulty could be improved. However, it was not until the 1970s that numerical evaluation methods were developed to allow design for assembly studies to be carried out on existing and proposed designs. The first evaluation method was developed at Hitachi and was called the Assembly Evaluation Method (AEM). This method is based on the principle of "one motion for one part." For more complicated motions, a point-loss standard is used and the ease of assembly of the whole product is evaluated by subtracting points lost. The method was originally developed in order to rate assemblies for ease of automatic assembly. Starting in 1977, Geoff Boothroyd, supported by an NSF grant at the University of Massachusetts Amherst, developed the Design for Assembly method (DFA), which could be used to estimate the time for manual assembly of a product and the cost of assembling the product on an automatic assembly machine. Recognizing that the most important factor in reducing assembly costs was the minimization of the number of separate parts in a product, he introduced three simple criteria which could be used to determine theoretically whether any of the parts in the product could be eliminated or combined with other parts. These criteria, together with tables relating assembly time to various design factors influencing part grasping, orientation and insertion, could be used to estimate total assembly time and to rate the quality of a product design from an assembly viewpoint. For automatic assembly, tables of factors could be used to estimate the cost of automatic feeding and orienting and automatic insertion of the parts on an assembly machine. In the 1980s and 1990s, variations of the AEM and DFA methods have been proposed, namely: the GE Hitachi method which is based on the AEM and DFA; the Lucas method, the Westinghouse method and several others which were based on the original DFA method. All methods are now referred to as design for assembly methods. Implementation Most products are assembled manually and the original DFA method for manual assembly is the most widely used method and has had the greatest industrial impact throughout the world. The DFA method, like the AEM method, was originally made available in the form of a handbook where the user would enter data on worksheets to obtain a rating for the ease of assembly of a product. Starting in 1981, Geoffrey Boothroyd and Peter Dewhurst developed a computerized version of the DFA method which allowed its implementation in a broad range of companies. 
For this work they were presented with many awards including the National Medal of Technology. There are many published examples of significant savings obtained through the application of DFA. For example, in 1981, Sidney Liebson, manager of manufacturing engineering for Xerox, estimated that his company would save hundreds of millions of dollars through the application of DFA. In 1988, Ford Motor Company credited the software with overall savings approaching $1 billion. In many companies DFA is a corporate requirement and DFA software is continually being adopted by companies attempting to obtain greater control over their manufacturing costs. There are many key principles in design for assembly. Notable examples Two notable examples of good design for assembly are the Sony Walkman and the Swatch watch. Both were designed for fully automated assembly. The Walkman line was designed for "vertical assembly", in which parts are inserted in straight-down moves only. The Sony SMART assembly system, used to assemble Walkman-type products, is a robotic system for assembling small devices designed for vertical assembly. The IBM Proprinter used design for automated assembly (DFAA) rules. These DFAA rules help design a product that can be assembled automatically by robots, but they are useful even with products assembled by manual assembly. See also Design for inspection Design for manufacturability Design for X Design for verification DFMA Notes Further information For more information on Design for Assembly and the subject of Design for Manufacture and Assembly see: Boothroyd, G. "Assembly Automation and Product Design, 2nd Edition", Taylor and Francis, Boca Raton, Florida, 2005. Boothroyd, G., Dewhurst, P. and Knight, W., "Product Design for Manufacture and Assembly, 2nd Edition", Marcel Dekker, New York, 2002. External links "Successful Design for Assembly" - February 26, 2007 article from Assembly Magazine Product development Design Design for X
Design for assembly
[ "Engineering" ]
1,043
[ "Design", "Design for X" ]
5,534,701
https://en.wikipedia.org/wiki/Nuclear%20power%20by%20country
Nuclear power plants operate in 32 countries and generate about a tenth of the world's electricity. Most are in Europe, North America and East Asia. The United States is the largest producer of nuclear power, while France has the largest share of electricity generated by nuclear power, at about 70%. Some countries operated nuclear reactors in the past but have no operating nuclear power plants at present. Among them, Italy closed all of its nuclear stations by 1990 and nuclear power has since been discontinued because of the 1987 referendums. Kazakhstan phased out nuclear power in 1999 but is planning to reintroduce it possibly by 2035 under referendum. Germany operated nuclear plants since 1960 until the completion of its phaseout policy in 2023. Austria (Zwentendorf Nuclear Power Plant) and the Philippines (Bataan Nuclear Power Plant) never started to use their first nuclear plants that were completely built. Sweden and Belgium originally had phase-out policies however they have now moved away from their original plans. The Philippines relaunched their nuclear programme on February 28, 2022 and may try to operate the 1984 mothballed Bataan Plant. As of 2020, Poland was in advanced planning phase for 1.5 GW and planned to have up to 9 GW by 2040. Hong Kong has no nuclear power plants within its boundary, but imports 80% of the electricity generated from Daya Bay Nuclear Power Station located across the border, in which the power company of the territory holds stake. In 2021, Iraq declared it was planning to build 8 nuclear reactors by 2030 to supply up to 25% electric power in a grid that was suffering from shortages. Overview Of the 32 countries in which nuclear power plants operate, only France, Slovakia, Ukraine and Belgium use them as the source for a majority of the country's electricity supply as of 2021. Other countries have significant amounts of nuclear power generation capacity. By far the largest nuclear electricity producers are the United States with 779,186 GWh of nuclear electricity in 2023, followed by China with 406,484 GWh. As of the end of 2023, 418 reactors with a net capacity of 371,540 MWe were operational, and 59 reactors with net capacity of 61,637 MWe were under construction. Of the reactors under construction, 25 reactors with 26,301 MWe were in China and 7 reactors with a capacity of 5,398 MWe were in India. See also List of commercial nuclear reactors List of nuclear power stations Nuclear energy policy by country List of nuclear power accidents by country List of countries by uranium reserves World Nuclear Industry Status Report Notes References External links World Nuclear Generation and Capacity Nuclear technology
Nuclear power by country
[ "Physics" ]
539
[ "Nuclear technology", "Nuclear physics" ]
5,534,844
https://en.wikipedia.org/wiki/Markov%20strategy
In game theory, a Markov strategy is one that depends only on state variables that summarize the history of the game in one way or another. For instance, a state variable can be the current play in a repeated game, or it can be any summary of a recent sequence of play. A profile of Markov strategies is a Markov perfect equilibrium if it is a Nash equilibrium in every state of the game. Markov strategies are named after Andrey Markov, because they depend on the history of play only through a state variable, in the manner of a Markov process. References Game theory
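As an illustration of the definition, the sketch below writes a Markov strategy as a function of a state variable alone, using a repeated prisoner's dilemma as the example game; the game, the choice of state and all names are assumptions made for the illustration and are not taken from the article.

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(state):
    # A Markov strategy: the action depends only on the current state,
    # here the opponent's previous move (None in the first round).
    return COOPERATE if state in (None, COOPERATE) else DEFECT

always_defect = lambda state: DEFECT    # also Markov: it ignores the state entirely

def play(strategy_a, strategy_b, rounds=5):
    state_a = state_b = None            # each player's state: the other's last move
    history = []
    for _ in range(rounds):
        move_a, move_b = strategy_a(state_a), strategy_b(state_b)
        history.append((move_a, move_b))
        state_a, state_b = move_b, move_a   # update the state variables
    return history

print(play(tit_for_tat, always_defect))

Whether a strategy counts as Markov depends on the chosen state variable: a trigger strategy that defects forever after any past defection is not a function of the opponent's last move alone, but it becomes Markov if the state is enlarged to record whether a defection has ever occurred.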
Markov strategy
[ "Mathematics" ]
101
[ "Game theory", "Strategy (game theory)" ]
5,535,220
https://en.wikipedia.org/wiki/Hans%20Hagen
Hans Hagen (born 1953) is a professor of computer science at the University of Kaiserslautern. His main research interests are scientific visualization and geometric modelling. From 1999 to 2003 he was the editor in chief of IEEE Transactions on Visualization and Computer Graphics. He got the John Gregory Memorial Award and the Solid Modelling Pioneer Award for his achievements in Geometric Modeling in 2002. His lifetime contributions to Scientific Visualization were honored by the IEEE Visualization Career Award and the IEEE Visualization Academy of Science membership. (Prof. Dr.) Hans Hagen is not to be confused with Hans Hagen, the author of the ConTeXt macro package for the TeX typesetting system. References 1953 births Living people German computer scientists
Hans Hagen
[ "Technology" ]
143
[ "Computing stubs", "Computer specialist stubs" ]
5,535,224
https://en.wikipedia.org/wiki/Weigh%20lock
A weigh lock is a specialized canal lock designed to determine the weight of barges in order to assess toll payments based upon the weight and value of the cargo carried. This requires that the unladen weight of the barge be known. A barge to be weighed was brought into a supporting cradle connected by levers to a weighing mechanism. The water was then drained and the scale balance adjusted to determine the barge's gross weight. Subtracting the tare weight (the weight of the barge when empty) would give the cargo weight. The earliest weigh locks instead measured the weight of the barge by its displacement, collecting the water displaced from the lock in a separate measuring chamber after the barge had entered. This method also requires that the unladen weight of the barge be known. See also Weighbridge, a device for weighing trucks and railcars. References External links Erie Canal — The Weigh Lock Locks (water navigation) Weighing instruments
Weigh lock
[ "Physics", "Technology", "Engineering" ]
189
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
5,535,702
https://en.wikipedia.org/wiki/Ed%20Carpenter%20%28artist%29
Ed Carpenter (born 1946) is an artist specializing in large-scale public sculptures made of glass. His work can be found in conference centers, libraries, and airports. Early life and education Carpenter studied architecture at the Rhode Island School of Design, where he studied with Dale Chihuly. He attended the University of California, Berkeley from 1968-1971. Glass technique Carpenter specializes in large-scale installations in glass. He is known for his technical innovation using cold-bent tempered glass, encapsulated glass elements, and programmed lighting elements. His work is often described as "architectural". Works While working with Dale Chihuly they created lead glass doors that are in the collections of the Corning Museum of Glass and the Toledo Museum of Art. In 2019 he installed the first phase of a dichroic glass sculpture in the Portland Public Library, called "Mollie's Garden". The piece honored his mother, a library volunteer named Mollie Starbuck, who died in her 80's. His work "Aloft" is a 360 foot glass sculpture in the Wichita Dwight D. Eisenhower National Airport lobby and was featured as an event by the Wichita Art Museum on November 18, 2021. He created a lobby sculpture for the Meydenbauer Convention Center in Bellevue, Washington; a large (17 meters x 18 meters x 6.5 meters) work for the Morgan Library at Colorado State University (commissioned by the Colorado Council on the Arts); and glass windows for the Christian Theological Seminary in Indianapolis, Indiana. Other works include the Flying Bridge between buildings at Central Washington University, an installation at the Hokkaido Sports Center, and a large sphere for the atrium of Carlson school. He also created an outdoor sculpture for the Broadway pumphouse. Personal life Carpenter lives and has his studio in Portland, Oregon. Writings References External links Ed Carpenter's official web site Living people Artists from Portland, Oregon Glass architecture American glass artists Rhode Island School of Design alumni University of California, Berkeley alumni 1946 births
Ed Carpenter (artist)
[ "Materials_science", "Engineering" ]
404
[ "Glass architecture", "Glass engineering and science" ]
5,535,731
https://en.wikipedia.org/wiki/Richard%20G.%20Colling
Richard G. Colling is a former professor of biology and chairman of the biological sciences department at Olivet Nazarene University in Bourbonnais, Illinois, who was barred from teaching general biology after writing a book that attempts to reconcile Christian belief with a scientific understanding of evolution. Education and career Colling attended Olivet Nazarene University as an undergraduate, graduating in 1976. He earned a Ph.D. in microbiology from the University of Kansas in 1980, did postdoctoral research in molecular oncology at Baylor College of Medicine, and joined the faculty of Olivet Nazarene in 1981. He was granted tenure at the institution in 1988. In 2000 he was Olivet Nazarene's "faculty member of the year." In 2004, Colling published the book Random Designer: Created From Chaos, To Connect With the Creator. Colling left the Olivet Nazarene faculty in 2009. Evolution at Olivet Nazarene In September 2007, Olivet President John C. Bowling decided after consultation with denominational leaders to prohibit Colling from teaching the general biology class he had taught since 1991. Bowling also banned professors from assigning a book Colling wrote attempting to reconcile the foundations of modern evolutionary biology with the principles of modern Christian faith. According to an interview with Newsweek, the reason behind Bowling's response was to "get the bull's-eye off Colling and let the storm die down." Newsweek noted, however, contributions by Nazarene professors like Karl Giberson, author of Saving Darwin: How to be a Christian and believe in evolution and other books on the topic, who have received more acclaim than punishment for their scholarship. The Manual of the Church of the Nazarene states: “903.8. Creation: The Church of the Nazarene believes in the biblical account of creation (‘In the beginning God created the heavens and the earth...’ — Genesis 1:1). We oppose any godless interpretation of the origin of the universe and of humankind. However, the church accepts as valid all scientifically verifiable discoveries in geology and other natural phenomena, for we firmly believe that God is the Creator." The American Association of University Professors later investigated and filed a report finding that Colling's rights as a professor were violated when Bowling placed the concerns of the denomination which ran the school above so-called principles of academic freedom. In 2009, Colling resigned from the Olivet Nazarene University faculty in an agreement with the school. See also Theistic evolution Creation–evolution controversy References External links Random Designer web site "Not such intelligent design" reprint of an article Colling wrote on Intelligent Design "Richard Colling: religious brothers are telling falsehoods" American evolutionary biologists 21st-century American biologists American Christian writers Living people Olivet Nazarene University faculty Olivet Nazarene University alumni University of Kansas alumni Year of birth missing (living people) Theistic evolutionists
Richard G. Colling
[ "Biology" ]
602
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
5,535,955
https://en.wikipedia.org/wiki/Matt%20Jones%20%28interaction%20designer%29
Matt Jones is the co-author - with Gary Marsden - of Mobile Interaction Design ( ) and a full research Professor at Swansea University. With the late Marsden and Simon Robinson he authored a new book in 2015 - There's Not an App for That (Morgan Kaufmann). He is an active researcher and has organized large scale of scientific conferences such as ACM CHI 2014. He has also edited several special issues of journals including an ACM ToCHI journal special issue on social issues and the "turn to the wild". His work has included studies and prototypes for mobile search and browsing; pedestrian navigation; and multi-modality. Since the early 2000s he has been actively pursuing a mobile research agenda focused on interfaces and interactions for "developing world" users, looking at how to address issues around lower computer and textual literacy and resource access. He has been awarded a Royal Society Wolfson Research Merit Award for this work. He has worked with many industry partners such as Microsoft Research, Reuters and Orange. He has spent time as visiting fellow at Nokia Research, Finland. He was also on the Scientific Advisory Board of Nokia Research (Tampere and Helsinki Labs). In 2010 he was awarded an IBM Faculty Award to work with the Spoken Web group in IBM Research India (Delhi). From March 2011 to August 2014 he was Head of the Department of Computer Science at Swansea University. From 2014 he has been Head of Science at Swansea University. From October 2020 he is the founding director of the Morgan Advanced Studies Institute (www.swansea.ac.uk/masi) He is the Director of the EPSRC Centre for Doctoral Training in Human Driven AI and is the Principal Investigator of the £32.5M Computational Foundry. External links Morgan Advanced Studies Institute Personal home page Links to projects and publications by M Jones StoryBank – using mobiles to share stories in an Indian village., Matt Jones in receiver magazine, summer 2008 Living people Year of birth missing (living people)
Matt Jones (interaction designer)
[ "Technology" ]
399
[ "Computing stubs", "Computer specialist stubs" ]
5,535,962
https://en.wikipedia.org/wiki/Institut%20f%C3%BCr%20Nukleare%20Entsorgung
The Institut für Nukleare Entsorgung (English: Institute for Nuclear Waste Disposal) is a large German research center at the Karlsruhe Institute of Technology that carries out research and development on the safe disposal of nuclear waste. It is located east of Linkenheim-Hochstetten. External links Homepage Nuclear research institutes Research institutes in Germany
Institut für Nukleare Entsorgung
[ "Engineering" ]
74
[ "Nuclear research institutes", "Nuclear organizations" ]
5,536,187
https://en.wikipedia.org/wiki/Topopolis
A topopolis is a proposed tube-shaped space habitat, rotating to produce artificial gravity via centrifugal force on the inner surface, which is extended into a loop around the local planet or star. The concept was invented by writer Patrick Gunkel. Varieties of topopolises and similar fictional structures A topopolis has been compared to an O'Neill cylinder, or a McKendree cylinder, that has been extended in length so that it encircles a star. A "normal" topopolis would be hundreds of millions of miles/kilometers long and at least several miles (kilometers) in diameter. Topopoles can be looped several times around the local star, in a geometric figure known as a torus knot. Topopolises are also called cosmic spaghetti. A topopolis with big enough diameter could theoretically have multiple levels of concentric cylinders. Larry Niven (1974) mentioned the idea in a much-reprinted magazine article "Bigger Than Worlds". Examples in novels In Matter, Iain M. Banks (2008) depicts a topopolis that loops its system star many times in various braidings, and houses trillions of sapient residents. The topopolis was so massive that stray gases from the system collected within the major spacing within the braids by gravitation alone, producing a slight atmosphere between the strands, that the author describes as a "haze". Dennis E. Taylor (2020) in the book Heaven’s River features an alien civilization inhabiting a topopolis. See also Big dumb object Ringworld References External links Megastructures Fictional space stations
Topopolis
[ "Technology" ]
334
[ "Exploratory engineering", "Megastructures" ]
5,536,298
https://en.wikipedia.org/wiki/Blood%E2%80%93air%20barrier
The blood–air barrier or air–blood barrier (alveolar–capillary barrier or membrane) exists in the gas exchanging region of the lungs. It exists to prevent air bubbles from forming in the blood, and to prevent blood from entering the alveoli. It is formed by the type I pneumocytes of the alveolar wall, the endothelial cells of the capillaries and the basement membrane between them. The barrier is permeable to molecular oxygen, carbon dioxide, carbon monoxide and many other gases. Structure This blood–air barrier is extremely thin (approximately 600 nm–2 μm; in some places merely 200 nm) to allow sufficient oxygen diffusion, yet it is extremely strong. This strength comes from the type IV collagen in between the endothelial and epithelial cells. Damage can occur to this barrier at a pressure difference of around . Clinical significance Failure of the barrier may occur in pulmonary barotrauma. This can be a result of several possible causes, including blast injury, swimming-induced pulmonary edema, and breathing gas entrapment or retention in the lung during depressurization, which can occur during ascent from underwater diving or loss of pressure from a pressurized vehicle, habitat or pressure suit. Possible consequences of rupture of the blood–air barrier include arterial gas embolism and hemoptysis. See also References External links "Mammal, lung vasculature (EM, High)" Respiratory system Underwater diving physiology
Blood–air barrier
[ "Biology" ]
309
[ "Organ systems", "Respiratory system" ]
5,536,529
https://en.wikipedia.org/wiki/LED%20circuit
In electronics, an LED circuit or LED driver is an electrical circuit used to power a light-emitting diode (LED). The circuit must provide sufficient current to light the LED at the required brightness, but must limit the current to prevent damaging the LED. The voltage drop across a lit LED is approximately constant over a wide range of operating current; therefore, a small increase in applied voltage greatly increases the current. Datasheets may specify this drop as a "forward voltage" (VF) at a particular operating current. Very simple circuits are used for low-power indicator LEDs. More complex current-source circuits are required when driving high-power LEDs for illumination, to achieve correct current regulation. Basic circuit The simplest circuit to drive an LED is through a series resistor. It is commonly used for indicators and digital displays in many consumer appliances. However, this circuit is not energy-efficient, because energy is dissipated in the resistor as heat. The LED's forward voltage depends on its material. Ohm's law and Kirchhoff's circuit laws are used to calculate the appropriate resistor value, by subtracting the LED's forward voltage from the supply voltage and dividing by the desired operating current. With a sufficiently high supply voltage, multiple LEDs in series can be powered with one resistor. If the supply voltage is close to or equal to the LED's forward voltage, then no reasonable value for the resistor can be calculated, so some other method of current limiting is used. Power source considerations The voltage versus current characteristic of an LED is similar to that of any other diode. Current is approximately an exponential function of voltage according to the Shockley diode equation, and a small voltage change may result in a large change in current. If the voltage is below or equal to the threshold, no current flows and the result is an unlit LED. If the voltage is too high, the current will exceed the maximum rating, overheating and potentially destroying the LED. LED drivers are designed to handle fluctuating loads, providing enough current to achieve the required brightness while not allowing damaging levels of current to flow. Drivers may be constant current (CC) or constant voltage (CV). In CC drivers, the voltage changes while the current stays the same. CC drivers are used when the electrical load of the LED circuit is either unknown or fluctuates, for example, a lighting circuit where a variable number of LED lamp fixtures may be installed. As an LED heats up, its voltage drop decreases (the band gap decreases). This can encourage the current to increase. MOSFET drivers An active constant current source is commonly used for high power LEDs, stabilizing light output over a wide range of input voltages, which might increase the useful life of batteries. Active constant current is typically regulated using a depletion-mode MOSFET (metal–oxide–semiconductor field-effect transistor), which is the simplest current limiter. Low drop-out (LDO) constant current regulators also allow the total LED voltage to be a higher fraction of the power supply voltage. Switched-mode power supplies (e.g. buck, boost, and buck-boost converters) are used in LED flashlights and household LED lamps. Power MOSFETs are typically used for switching LED drivers, which is an efficient solution to drive high-brightness LEDs. Power integrated circuit (IC) chips are widely used to drive the MOSFETs directly, without the need for additional circuitry.
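As a rough illustration of the exponential voltage-current behaviour described above, the following Python sketch evaluates the Shockley diode equation, I = IS * (exp(V / (n * VT)) - 1). The saturation current and ideality factor used here are illustrative assumptions, chosen only so that roughly 20 mA flows near 1.8 V; they are not datasheet values for any real LED.

```python
import math

def diode_current(v, i_sat=1.5e-17, n=2.0, t_kelvin=300.0):
    """Shockley diode equation: I = i_sat * (exp(V / (n * VT)) - 1).
    i_sat and n are illustrative assumptions, not values for a real LED."""
    k = 1.380649e-23       # Boltzmann constant, J/K
    q = 1.602176634e-19    # elementary charge, C
    v_t = k * t_kelvin / q  # thermal voltage, about 0.026 V at 300 K
    return i_sat * (math.exp(v / (n * v_t)) - 1.0)

# With these assumed parameters, about 20 mA flows at 1.8 V, but a mere
# 0.1 V increase multiplies the current roughly sevenfold, which is why
# LEDs need current limiting rather than a bare voltage source.
print(diode_current(1.8), diode_current(1.9))
```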
Series resistor Series resistors are a simple way to stabilize the LED current, but energy is wasted in the resistor. Miniature indicator LEDs are normally driven from low voltage DC via a current-limiting resistor. Currents of 2 mA, 10 mA and 20 mA are common. Sub-mA indicators may be made by driving ultra-bright LEDs at very low current. Efficiency tends to reduce at low currents, but indicators running on 100 μA are still practical. In coin cell powered keyring-type LED lights, the resistance of the cell itself is usually the only current limiting device. LEDs with built-in series resistors are available. These may save printed circuit board space, and are especially useful when building prototypes or populating a PCB in a way other than its designers intended. However, the resistor value is set at the time of manufacture, removing one of the key methods of setting the LED's intensity. The value for the series resistance may be obtained from Ohm's law, considering that the supply voltage is offset by the diode's forward voltage VF, which varies little over the range of useful currents: R = (VS - VF) / I or, if a switch or transistor is in the circuit, R = (VS - VF - VSW) / I, where: R is resistance in ohms, typically rounded up to the next higher resistor value. VS is the power supply voltage in volts, e.g. 9-volt battery. VF is the LED's forward voltage drop in volts when lit. VF and the LED's light frequency (which we perceive as color) increase with the band gap of the LED's materials. Consequently, VF ranges from around 1.7 to 2.0 volts for red LEDs to around 2.8 to 4.0 volts for violet LEDs. VSW is the voltage drop across the switch in volts: (A) for no switch, use 0 volts, (B) for mechanical switch, use 0 volts, (C) for BJT transistor, use collector-emitter saturation voltage from the transistor datasheet. I is the desired current of the LED in amps. The maximum continuous-on current is shown on LED datasheets, for example 20 mA (0.020 A) is common for most small LEDs. Many circuits operate LEDs at less than the specified maximum current to save power, or to reduce brightness, or to use a common resistor value. For indoor use, tiny surface mount high-efficiency LEDs can easily light up with 1 mA (0.001 A) or more current, which most digital logic outputs can easily source or sink. Using the algebraic formula (above) and assuming VSW is 0 (to simplify examples), the resistance is calculated as follows: Example 1 with VS of 9 V, VF = 1.8 V, I = 5 mA: R = (9 V - 1.8 V) / 5 mA = (9 - 1.8) / 0.005 = 1440 ohms, then round up to a 1.5 kΩ resistor (per common resistor values). Example 2 with VS of 5 V, VF = 1.8 V, R = 1 kΩ: I = (5 V - 1.8 V) / 1 kΩ = (5 - 1.8) / 1000 = 0.0032 A, which is 3.2 mA LED arrays Strings of multiple LEDs are normally connected in series. In one configuration, the source voltage must be greater than or equal to the sum of the individual LED voltages; typically the LED voltages add up to around two-thirds of the supply voltage. A single current-limiting resistor may be used for each string. Parallel operation is also possible but can be more problematic. Parallel LEDs must have closely matched forward voltages (VF) in order to have similar branch currents and, therefore, similar light output. Variations in the manufacturing process can make it difficult to obtain satisfactory operation when connecting some types of LEDs in parallel. LED display LEDs are often arranged in ways such that each LED (or each string of LEDs) can be individually turned on and off. Direct drive is the simplest-to-understand approach: it uses many independent single-LED (or single-string) circuits.
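The series-resistor arithmetic above is easy to automate. The following is a minimal Python sketch of that calculation (the function name and the power figure it returns are this sketch's own additions, not part of any standard library); it reproduces Example 1 and also reports the power dissipated in the resistor, which matters when choosing a part rating.

```python
def led_series_resistor(v_supply, v_forward, i_led, v_switch=0.0):
    """R = (VS - VF - VSW) / I, as in the formula above.
    Returns (resistance in ohms, power dissipated in the resistor in watts)."""
    v_r = v_supply - v_forward - v_switch
    if v_r <= 0:
        raise ValueError("Supply voltage too close to the LED forward voltage; "
                         "use another current-limiting method.")
    resistance = v_r / i_led
    power = v_r * i_led   # P = V * I across the resistor
    return resistance, power

# Example 1 from the text: 9 V supply, 1.8 V red LED, 5 mA target current.
r, p = led_series_resistor(9.0, 1.8, 0.005)
print(r, p)   # 1440.0 ohms (round up to 1.5 kOhm), 0.036 W
```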
With direct drive, for example, a digital clock could be designed such that when it displays "12:34" on a seven-segment display, it turns on the appropriate segments directly and leaves them on until something else needs to be displayed. However, multiplexed display techniques are more often used than direct drive, because they have lower net hardware costs. For example, most people who design digital clocks design them such that when the clock displays "12:34" on a seven-segment display, at any one instant the clock turns on the appropriate segments of one of the digits; all the other digits are dark. The clock scans through the digits rapidly enough that it gives the illusion that it is "constantly" displaying "12:34" for an entire minute. However, each "on" segment is actually being rapidly pulsed on and off many times a second. An extension of this technique is Charlieplexing, in which the ability of some microcontrollers to tri-state their output pins means larger numbers of LEDs can be driven without using latches. For n pins, it is possible to drive n² − n LEDs. The use of integrated circuit technology to drive LEDs dates back to the late 1960s. In 1969, Hewlett-Packard introduced the HP Model 5082-7000 Numeric Indicator, an early LED display and the first LED device to use integrated circuit technology. Its development was led by Howard C. Borden and Gerald P. Pighini at HP Associates and HP Labs, who had engaged in research and development (R&D) on practical LEDs between 1962 and 1968. It was the first intelligent LED display and a revolution in digital display technology, replacing the Nixie tube and becoming the basis for later LED displays. Polarity Unlike incandescent light bulbs, which illuminate regardless of the electrical polarity, LEDs will only light with the correct electrical polarity. When the voltage across the p-n junction is in the correct direction, a significant current flows and the device is said to be forward-biased. If the voltage is of the wrong polarity, the device is said to be reverse-biased; very little current flows, and no light is emitted. LEDs can be operated with alternating current, but they will only light on the half of the AC cycle where the LED is forward-biased. This causes the LED to turn on and off at the frequency of the AC supply. Most LEDs have relatively low reverse breakdown voltage ratings compared to standard diodes, so it may be easier than expected to enter this mode and cause damage to the LED due to overcurrent. However, the cut-in voltage is always less than the breakdown voltage, so no special reverse protections are necessary when driving an LED directly from an AC supply when properly current-limited for forward-biased operation. The manufacturer will normally advise how to determine the polarity of the LED in the product datasheet. However, there is no standardization of polarity markings for surface mount devices. Pulsed operation Many systems pulse LEDs on and off by applying power periodically or intermittently. So long as the flicker rate is greater than the human flicker fusion threshold, and the LED is stationary relative to the eye, the LED will appear to be continuously lit. Varying the on/off ratio of the pulses is known as pulse-width modulation (PWM). In some cases, PWM-based drivers are more efficient than constant current or constant voltage drivers. Most LED data sheets specify a maximum DC current that is safe for continuous operation.
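To make the pulse-width modulation idea concrete, here is a bit-banged Python sketch of PWM dimming. The set_led callback is a placeholder for whatever pin-driving function a real board or GPIO library provides, and timing with time.sleep is only approximate; this is an illustration of the duty-cycle idea, not production driver code.

```python
import time

def soft_pwm(set_led, freq_hz=500.0, duty=0.25, duration_s=2.0):
    """Bit-banged PWM: the LED is on for duty * period and off for the rest
    of each cycle. Above the flicker fusion threshold the eye perceives a
    steady brightness that scales roughly with the duty cycle."""
    period = 1.0 / freq_hz
    t_on = period * duty
    t_off = period - t_on
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        if t_on > 0.0:
            set_led(True)
            time.sleep(t_on)
        if t_off > 0.0:
            set_led(False)
            time.sleep(t_off)
    set_led(False)  # leave the LED off when done

# Stand-in "GPIO": print the transitions at a slow 2 Hz so they are visible.
soft_pwm(lambda on: print("ON" if on else "off"), freq_hz=2.0, duty=0.5, duration_s=2.0)
```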
LED data sheets often also specify a higher maximum pulsed current that is safe for brief pulses, as long as the controller keeps each pulse short enough and then turns off the power long enough for the LED to cool. LED as a light sensor In addition to emission, an LED can be used as a photodiode for light detection. This capability may be used in a variety of applications, including ambient light detection and bidirectional communication. As a photodiode, an LED is sensitive to wavelengths equal to or shorter than the predominant wavelength it emits. For example, a green LED is sensitive to blue light and some green light, but not to yellow or red light. This use of LEDs may be added to designs with only minor modifications in circuitry. An LED can be multiplexed in such a circuit, so that it can be used for both light emission and sensing at different times. See also Joule thief - powering an LED from a 1.5 V battery with a voltage booster circuit Planck–Einstein relation - relation between band gap and photon frequency Shockley diode equation - relation between forward voltage and current References External links LED Resistor Calculator Analog circuits Light-emitting diodes American inventions
LED circuit
[ "Engineering" ]
2,531
[ "Analog circuits", "Electronic engineering" ]
5,536,595
https://en.wikipedia.org/wiki/Analytic%20and%20enumerative%20statistical%20studies
Analytic and enumerative statistical studies are two types of scientific studies. In any statistical study the ultimate aim is to provide a rational basis for action; enumerative and analytic studies differ by where that action is taken. Deming first published on this topic in 1942, and the terms were introduced in his Some Theory of Sampling (1950, Chapter 7). An enumerative study is a statistical study in which the focus is on judgment of results, and an analytic study is one in which the focus is on improvement of the process or system which created the results being evaluated and which will continue creating results in the future. A statistical study can be enumerative or analytic, but it cannot be both. Statistical theory in enumerative studies is used to describe the precision of estimates and the validity of hypotheses for the population studied. In analytic studies, the standard error of a statistic does not address the most important source of uncertainty, namely the change in study conditions in the future. Although analytic studies need to take into account the uncertainty due to sampling, as in enumerative studies, the study design and the analysis of the data primarily deal with the uncertainty resulting from extrapolation to the future (generalisation to the conditions in future time periods). The methods used in analytic studies encourage the exploration of mechanisms through multifactor designs, contextual variables introduced through blocking, and replication over time. This distinction between enumerative and analytic studies is the theory behind Deming's Fourteen Points for Management. Dr. Deming's philosophy is that management should be analytic instead of enumerative; in other words, management should focus on improvement of processes for the future instead of on judgment of current results. Notes Neave HR. The Deming Dimension. Knoxville, Tenn: SPC Press; 1990:440. External links On the distinction between enumerative and analytic surveys by W. Edwards Deming Philosophy of statistics Quality
Analytic and enumerative statistical studies
[ "Mathematics" ]
439
[ "Philosophy of statistics" ]