Dataset columns (with observed ranges):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
13,632,049
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20emission%20mechanisms
Gamma-ray burst emission mechanisms are theories that explain how the energy from a gamma-ray burst progenitor (regardless of the actual nature of the progenitor) is turned into radiation. These mechanisms are a major topic of research as of 2007. Neither the light curves nor the early-time spectra of GRBs show resemblance to the radiation emitted by any familiar physical process.

Compactness problem
It has been known for many years that ejection of matter at relativistic velocities (velocities very close to the speed of light) is a necessary requirement for producing the emission in a gamma-ray burst. GRBs vary on such short timescales (as short as milliseconds) that the size of the emitting region must be very small, or else the time delay due to the finite speed of light would "smear" the emission out in time, wiping out any short-timescale behavior. At the energies involved in a typical GRB, so much energy crammed into such a small space would make the system opaque to photon–photon pair production, making the burst far less luminous and giving it a very different spectrum from what is observed. However, if the emitting system is moving towards Earth at relativistic velocities, the burst is compressed in time (as seen by an Earth observer, due to the relativistic Doppler effect) and the emitting region inferred from the finite speed of light becomes much smaller than the true size of the GRB (see relativistic beaming).

GRBs and internal shocks
A related constraint is imposed by the relative timescales seen in some bursts between the short-timescale variability and the total length of the GRB. Often this variability timescale is far shorter than the total burst length. For example, in bursts as long as 100 seconds, the majority of the energy can be released in short episodes less than 1 second long. If the GRB were due to matter moving towards Earth (as the relativistic motion argument enforces), it is hard to understand why it would release its energy in such brief interludes. The generally accepted explanation for this is that these bursts involve the collision of multiple shells traveling at slightly different velocities; so-called "internal shocks". The collision of two thin shells flash-heats the matter, converting enormous amounts of kinetic energy into the random motion of particles, greatly amplifying the energy release due to all emission mechanisms. Which physical mechanisms are at play in producing the observed photons is still an area of debate, but the most likely candidates appear to be synchrotron radiation and inverse Compton scattering. As of 2007 no theory has successfully described the spectrum of all gamma-ray bursts (though some theories work for a subset). However, the so-called Band function (named after David Band) has been fairly successful at fitting, empirically, the spectra of most gamma-ray bursts. It is a smoothly broken power law in photon energy $E$:
$$N(E) = A \left(\frac{E}{100\,\mathrm{keV}}\right)^{\alpha} e^{-E/E_0} \quad \text{for } E \le (\alpha-\beta)E_0,$$
$$N(E) = A \left[\frac{(\alpha-\beta)E_0}{100\,\mathrm{keV}}\right]^{\alpha-\beta} e^{\beta-\alpha} \left(\frac{E}{100\,\mathrm{keV}}\right)^{\beta} \quad \text{for } E \ge (\alpha-\beta)E_0.$$

A few gamma-ray bursts have shown evidence for an additional, delayed emission component at very high energies (GeV and higher). One theory for this emission invokes inverse Compton scattering. If a GRB progenitor, such as a Wolf-Rayet star, were to explode within a stellar cluster, the resulting shock wave could generate gamma-rays by scattering photons from neighboring stars. About 30% of known galactic Wolf-Rayet stars are located in dense clusters of O stars with intense ultraviolet radiation fields, and the collapsar model suggests that WR stars are likely GRB progenitors.
Therefore, a substantial fraction of GRBs are expected to occur in such clusters. As the relativistic matter ejected from an explosion slows and interacts with ultraviolet-wavelength photons, some photons gain energy, generating gamma-rays.

Afterglows and external shocks
The GRB itself is very rapid, lasting from less than a second up to a few minutes at most. Once it disappears, it leaves behind a counterpart at longer wavelengths (X-ray, UV, optical, infrared, and radio) known as the afterglow that generally remains detectable for days or longer. In contrast to the GRB emission, the afterglow emission is not believed to be dominated by internal shocks. In general, all the ejected matter has by this time coalesced into a single shell traveling outward into the interstellar medium (or possibly the stellar wind) around the star. At the front of this shell of matter is a shock wave referred to as the "external shock" as the still relativistically moving matter ploughs into the tenuous interstellar gas or the gas surrounding the star. As the interstellar matter moves across the shock, it is immediately heated to extreme temperatures. (How this happens is still poorly understood as of 2007, since the particle density across the shock wave is too low to create a shock wave comparable to those familiar in dense terrestrial environments – the topic of "collisionless shocks" is still largely hypothesis but seems to accurately describe a number of astrophysical situations. Magnetic fields are probably critically involved.) These particles, now relativistically moving, encounter a strong local magnetic field and are accelerated perpendicular to the magnetic field, causing them to radiate their energy via synchrotron radiation.

Synchrotron radiation is well understood, and the afterglow spectrum has been modeled fairly successfully using this template. It is generally dominated by electrons (which move and therefore radiate much faster than protons and other particles), so radiation from other particles is generally ignored. In general, the afterglow spectrum assumes the form of a power law with three break points (and therefore four different power-law segments). The lowest break point, the self-absorption frequency $\nu_a$, corresponds to the frequency below which the GRB is opaque to radiation, and so the spectrum there attains the form of the Rayleigh–Jeans tail of blackbody radiation. The two other break points, $\nu_m$ and $\nu_c$, are related to the minimum energy acquired by an electron after it crosses the shock wave and to the time it takes an electron to radiate most of its energy, respectively. Depending on which of these two frequencies is higher, two different regimes are possible:

Fast cooling ($\nu_c < \nu_m$) – Shortly after the GRB, the shock wave imparts immense energy to the electrons and the minimum electron Lorentz factor is very high. In this case, the spectrum (above $\nu_a$) looks like
$$F_\nu \propto \begin{cases} \nu^{1/3} & \nu < \nu_c \\ \nu^{-1/2} & \nu_c < \nu < \nu_m \\ \nu^{-p/2} & \nu > \nu_m \end{cases}$$

Slow cooling ($\nu_m < \nu_c$) – Later after the GRB, the shock wave has slowed down and the minimum electron Lorentz factor is much lower:
$$F_\nu \propto \begin{cases} \nu^{1/3} & \nu < \nu_m \\ \nu^{-(p-1)/2} & \nu_m < \nu < \nu_c \\ \nu^{-p/2} & \nu > \nu_c \end{cases}$$

Here $p$ is the power-law index of the shock-accelerated electron energy distribution. The afterglow changes with time. It must fade, obviously, but the spectrum changes as well. For the simplest case of adiabatic expansion into a uniform-density medium, the critical parameters evolve as
$$\nu_m \propto t^{-3/2}, \qquad \nu_c \propto t^{-1/2}, \qquad F_{\nu,\max} = \text{const},$$
where $F_{\nu,\max}$ is the flux at the current peak frequency of the GRB spectrum. (During fast-cooling this is at $\nu_c$; during slow-cooling it is at $\nu_m$.) Note that because $\nu_m$ drops faster than $\nu_c$, the system eventually switches from fast-cooling to slow-cooling.
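To make the broken power-law shape above concrete, here is a minimal Python sketch that evaluates the slow-cooling spectrum; the break frequencies, electron index p, and normalization are illustrative placeholder values, not fits to any real burst.

```python
import numpy as np

def slow_cooling_spectrum(nu, nu_m, nu_c, p=2.5, f_max=1.0):
    """Piecewise synchrotron afterglow spectrum (slow cooling, nu > nu_a).

    Segments: F ~ nu^(1/3) below nu_m, nu^(-(p-1)/2) between nu_m and
    nu_c, and nu^(-p/2) above nu_c, matched so the curve is continuous.
    """
    nu = np.asarray(nu, dtype=float)
    f = np.empty_like(nu)
    low = nu < nu_m
    mid = (nu >= nu_m) & (nu < nu_c)
    high = nu >= nu_c
    f[low] = f_max * (nu[low] / nu_m) ** (1.0 / 3.0)
    f[mid] = f_max * (nu[mid] / nu_m) ** (-(p - 1.0) / 2.0)
    # Match the high segment to the mid segment at nu_c for continuity.
    f_c = f_max * (nu_c / nu_m) ** (-(p - 1.0) / 2.0)
    f[high] = f_c * (nu[high] / nu_c) ** (-p / 2.0)
    return f

# Example: peak at nu_m = 1e12 Hz, cooling break at nu_c = 1e15 Hz.
freqs = np.logspace(9, 18, 10)
print(slow_cooling_spectrum(freqs, nu_m=1e12, nu_c=1e15))
```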
Different scalings are derived for radiative evolution and for a non-constant-density environment (such as a stellar wind), but they share the general power-law behavior observed in this case. Several other known effects can modify the evolution of the afterglow:

Reverse shocks and the optical flash
There can be "reverse shocks", which propagate back into the shocked matter once it begins to encounter the interstellar medium. The twice-shocked material can produce a bright optical/UV flash, which has been seen in a few GRBs, though it appears not to be a common phenomenon.

Refreshed shocks and late-time flares
There can be "refreshed" shocks if the central engine continues to release fast-moving matter in small amounts even out to late times; these new shocks will catch up with the external shock to produce something like a late-time internal shock. This explanation has been invoked to explain the frequent flares seen in X-rays and at other wavelengths in many bursts, though some theorists are uncomfortable with the apparent demand that the progenitor (which one would think would be destroyed by the GRB) remain active for very long.

Jet effects
Gamma-ray burst emission is believed to be released in jets, not spherical shells. Initially the two scenarios are equivalent: the center of the jet is not "aware" of the jet edge, and due to relativistic beaming we only see a small fraction of the jet. However, as the jet slows down, two things eventually occur (each at about the same time): First, information from the edge of the jet that there is no pressure to the side propagates to its center, and the jet matter can spread laterally. Second, relativistic beaming effects subside, and once Earth observers see the entire jet, the widening of the relativistic beam is no longer compensated by the fact that we see a larger emitting region. Once these effects appear the jet fades very rapidly, an effect that is visible as a power-law "break" in the afterglow light curve. This is the so-called "jet break" that has been seen in some events and is often cited as evidence for the consensus view of GRBs as jets. Many GRB afterglows do not display jet breaks, especially in the X-ray, though they are more common in the optical light curves. Since jet breaks generally occur at very late times (~1 day or more), when the afterglow is quite faint and often undetectable, this is not necessarily surprising.

Dust extinction and hydrogen absorption
There may be dust along the line of sight from the GRB to Earth, both in the host galaxy and in the Milky Way. If so, the light will be attenuated and reddened, and the afterglow spectrum may look very different from that modeled. At very high frequencies (far-ultraviolet and X-ray) interstellar hydrogen gas becomes a significant absorber. In particular, a photon with a wavelength of less than 91 nanometers is energetic enough to completely ionize neutral hydrogen and is absorbed with almost 100% probability even through relatively thin gas clouds. (At much shorter wavelengths the probability of absorption begins to drop again, which is why X-ray afterglows are still detectable.) As a result, observed spectra of very high-redshift GRBs often drop to zero at wavelengths shorter than the redshifted position of this hydrogen ionization threshold (known as the Lyman break) in the GRB host's reference frame (a short numerical example follows this article). Other, less dramatic hydrogen absorption features are also commonly seen in high-z GRBs, such as the Lyman alpha forest.

References
Gamma-ray bursts
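As a quick check of the Lyman-break arithmetic above, this sketch shifts the rest-frame hydrogen ionization threshold (91.2 nm) to the observer frame; the redshift value is just an illustrative choice.

```python
LYMAN_LIMIT_NM = 91.2  # rest-frame hydrogen ionization threshold

def observed_lyman_break_nm(z: float) -> float:
    """Observed wavelength of the Lyman break for a host at redshift z."""
    return LYMAN_LIMIT_NM * (1.0 + z)

# A burst at z = 6: flux drops to ~zero blueward of ~638 nm,
# i.e. most of the optical band is absorbed.
print(observed_lyman_break_nm(6.0))  # ≈ 638.4
```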
Gamma-ray burst emission mechanisms
[ "Physics", "Astronomy" ]
2,200
[ "Physical phenomena", "Stellar phenomena", "Astronomical events", "Gamma-ray bursts" ]
13,632,416
https://en.wikipedia.org/wiki/Humanistic%20intelligence
Humanistic Intelligence (HI) is defined, in the context of wearable computing, by Marvin Minsky, Ray Kurzweil, and Steve Mann, as follows: Humanistic Intelligence [HI] is intelligence that arises because of a human being in the feedback loop of a computational process, where the human and computer are inextricably intertwined. When a wearable computer embodies HI and becomes so technologically advanced that its intelligence matches our own biological brain, something much more powerful emerges from this synergy that gives rise to superhuman intelligence within the single “cyborg” being. More generally (beyond only wearable computing), HI describes the creation of intelligence that results from a feedback loop between a computational process and a human being, where the human and computer are inextricably intertwined. In the field of human-computer interaction (HCI) it has been common to think of the human and computer as separate entities. HCI emphasizes this separateness by treating the human and computer as different entities that interact. However, HI theory thinks of the wearer and the computer with its associated input and output facilities not as separate entities, but regards the computer as a second brain and its sensory modalities as additional senses, in which synthetic synesthesia merges with the wearer's senses. When a wearable computer functions in a successful embodiment of HI, the computer uses the human's mind and body as one of its peripherals, just as the human uses the computer as a peripheral. This reciprocal relationship is at the heart of HI.

Courses
The principles are taught in a variety of university courses, such as:
CSE40814, Mobile Computing, Fall 2014, University of Notre Dame
ECE516, Intelligent Image Processing, 1998-2022, University of Toronto
ECE1724, "Superhumachines" (Super-human-machine intelligence), University of Toronto
Wearable Computing, VAK 03-799.01, instructor Dr. Holger Kenn (Microsoft EMIC), Universität Bremen

See also
Cybernetics

References

External links
Hawkeye Project

Human–computer interaction
Humanistic intelligence
[ "Engineering" ]
474
[ "Human–computer interaction", "Human–machine interaction" ]
13,633,477
https://en.wikipedia.org/wiki/Direct%20integration%20of%20a%20beam
Direct integration is a structural analysis method for measuring internal shear, internal moment, rotation, and deflection of a beam.

For a beam with an applied distributed load $w(x)$, taking downward to be positive, the internal shear force is given by taking the negative integral of the load:
$$V(x) = -\int w(x)\,dx.$$
The internal moment is the integral of the internal shear:
$$M(x) = \int V(x)\,dx.$$
The angle of rotation from the horizontal, $\theta(x)$, is the integral of the internal moment divided by the product of the Young's modulus $E$ and the area moment of inertia $I$:
$$\theta(x) = \int \frac{M(x)}{EI}\,dx.$$
Integrating the angle of rotation obtains the vertical displacement $\nu(x)$:
$$\nu(x) = \int \theta(x)\,dx.$$

Integrating
Each time an integration is carried out, a constant of integration needs to be obtained. These constants are determined by using either the forces at supports or at free ends. For internal shear and moment, the constants can be found by analyzing the beam's free body diagram. For rotation and displacement, the constants are found using conditions dependent on the type of supports. For a cantilever beam, the fixed support has zero rotation and zero displacement. For a beam supported by a pin and roller, both supports have zero displacement.

Sample calculations
Take the beam shown at right supported by a fixed pin at the left and a roller at the right. There are no applied moments, the distributed load is a constant 10 kN/m, and, due to symmetry, each support applies a 75 kN vertical force to the beam. Taking x as the distance from the pin,
$$V(x) = -\int_0^x 10\,dx + R = -10x + 75 \ \text{(kN)},$$
where $R$ represents the applied point loads; for these calculations, the only such load having an effect on the beam is the 75 kN reaction applied by the pin at x = 0. Integrating the internal shear,
$$M(x) = \int_0^x (-10x + 75)\,dx = -5x^2 + 75x \ \text{(kN·m)},$$
where, because there is no applied moment, $M(0) = 0$. Assuming an EI value of 1 kN·m² (for simplicity; real EI values for structural members such as steel are normally greater by powers of ten),
$$\theta(x) = \frac{1}{EI}\left(-\frac{5}{3}x^3 + \frac{75}{2}x^2\right) + C_3 \quad \text{and} \quad \nu(x) = \frac{1}{EI}\left(-\frac{5}{12}x^4 + \frac{75}{6}x^3\right) + C_3 x + C_4.$$
Because of the vertical supports at each end of the beam, the displacement $\nu$ at x = 0 and x = 15 m is zero. Substituting (x = 0, ν(0) = 0) and (x = 15 m, ν(15 m) = 0), we can solve for the constants $C_3 = -1406.25$ and $C_4 = 0$, yielding
$$\theta(x) = \frac{1}{EI}\left(-\frac{5}{3}x^3 + \frac{75}{2}x^2\right) - 1406.25 \quad \text{and} \quad \nu(x) = \frac{1}{EI}\left(-\frac{5}{12}x^4 + \frac{75}{6}x^3\right) - 1406.25x.$$
For the given EI value, the maximum displacement, at x = 7.5 m, is approximately 440 times the length of the beam. For a more realistic situation, such as a uniform load of 1 kN/m and an EI value of 5,000 kN·m², the displacement would be approximately 13 cm (a numerical cross-check follows this article).

Note that for the rotation $\theta$ the units are meters divided by meters (or any other units of length which reduce to unity). This is because rotation is given as a slope, the vertical displacement divided by the horizontal change.

See also
Bending
Beam theory
Euler–Bernoulli static beam equation
Solid Mechanics
Virtual Work

References
Hibbeler, R.C., Mechanics of Materials, sixth edition; Pearson Prentice Hall, 2005.

External links
Beam Deflection by Double Integration Method
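The double integration above is easy to verify numerically. Below is a minimal Python sketch that repeats the worked example (w = 10 kN/m, L = 15 m, EI = 1 kN·m²) by integrating twice and solving for the two boundary constants; the function and variable names are my own, not from the article.

```python
import numpy as np

# Beam parameters from the worked example above.
w, L, EI = 10.0, 15.0, 1.0           # load (kN/m), span (m), stiffness (kN*m^2)
R = w * L / 2                        # each support reaction: 75 kN

x = np.linspace(0.0, L, 10_001)
V = R - w * x                        # internal shear V(x) = 75 - 10x
M = R * x - w * x**2 / 2             # internal moment M(x) = 75x - 5x^2

# Integrate theta' = M/EI and nu' = theta with zero constants first
# (cumulative trapezoid rule)...
theta_raw = np.concatenate(([0.0], np.cumsum((M[1:] + M[:-1]) / 2 * np.diff(x)))) / EI
nu_raw = np.concatenate(([0.0], np.cumsum((theta_raw[1:] + theta_raw[:-1]) / 2 * np.diff(x))))

# ...then enforce nu(0) = nu(L) = 0: C4 = 0 and C3 = -nu_raw(L)/L.
C3 = -nu_raw[-1] / L
nu = nu_raw + C3 * x

print(f"C3 = {C3:.2f}")                             # ≈ -1406.25
print(f"max |deflection| = {abs(nu).max():.0f} m")  # ≈ 6592 m, about 440 spans
```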
Direct integration of a beam
[ "Engineering" ]
611
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
13,633,817
https://en.wikipedia.org/wiki/Counterproductive%20norms
Counterproductive norms are group norms that prevent a group, organization, or other collective entity from performing or accomplishing its originally stated function, because they work opposite to how they were initially intended. Group norms are typically enforced to facilitate group survival, to make group member behaviour predictable, to help avoid embarrassing interpersonal interactions, or to clarify distinctive aspects of the group's identity. Counterproductive norms exist despite the fact that they cause outcomes opposite to the intended prosocial functions. Group norms are informal rules and standards that guide and regulate the behaviour of a group's members. These norms may be implicit or explicit and are intended to provide information on appropriate behaviour for group members in particular social situations. Counterproductive norms instead elicit inappropriate behaviour from group members. Group norms are not predetermined but rather arise out of social interactions. These norms can have powerful influence over group behaviour. Norms may arise due to critical events in a group's history that established a precedent, as a result of primacy (the first emergent behaviour that sets group expectations), or from carry-over behaviours from past situations. Groups establish these norms based on specific group values and goals and may establish sanctions in response to deviation from these norms. Such sanctions are typically applied in the form of social exclusion or disapproval. Counterproductive norms also typically consist of these attributes, but the intention behind their activation is usually not prosocial and is instead opposite to their original function.

Mechanisms of counterproductive norms

Social proof
Counterproductive norms manifest in part because of the principle of social proof. Social proof is what happens when we learn what is correct by referring to the views of others. This is especially true in unclear or ambiguous situations. When people infer the appropriate behavior from the descriptive norm, they are looking to the behaviors of others to try to figure out the most effective course of action. This might be a cognitive "short-cut" to determining the most effective action, as the functional perspective of normative production might suggest. Counterproductive norms can thus be created by looking to the behavior of others.

Normative influence
Both descriptive norms and injunctive norms are used in normative communications. If used incorrectly, they can create counterproductive norms. Descriptive norms describe what constitutes a normal behavior in a given context. They are often referred to as the "is" norms, because they depict things as they actually are. Injunctive norms describe whether a given action is considered acceptable. They are called the "ought" norms because they constitute what should be. The descriptive norm is very powerful. The way that communications are phrased has a big impact on the effectiveness of the message. If that phrasing is used incorrectly, it follows that a counterproductive norm can develop.

Norm transmission
Norms may only exist in the context of a group; social norms do not exist for an isolated individual. Norms may be transmitted deliberately, by group members instructing other members on acceptable behaviour. They may also be transmitted passively, through observation of others and of behaviours which are deemed acceptable by the group.
Counterproductive norms are perpetuated by the same mechanisms, but differ from group norms in terms of their outcomes.

Theoretical perspectives
Two different perspectives give explanations for the formation and existence of group norms and counterproductive group norms. The Societal-Value Perspective suggests that norms are arbitrary rules that exist as a result of cultural value or reinforcement. This theory states that a norm's power depends on the value it represents to the culture. Social norms evolve out of behaviours that repeatedly occur and are reinforced. Thus, the strength of norms and counterproductive norms depends on various group dynamics. Because they evolve out of social interaction, one factor of norm strength is the available opportunities for group members to communicate. The strongest norms are those that are important to the group. As well, strength depends on the cohesiveness and unity of the group. The Functional Perspective suggests that norms exist to enhance survival potential by curtailing dysfunctional behaviours while encouraging socially proactive ones. Unlike the Societal-Value Perspective, the Functional Perspective states that norms are not arbitrary. Instead, they are meant to balance the needs of the individual with the group's goals of social control and harmony. Thus, norms exist to serve a purpose of survival. However, counterproductive norms work in opposition to socially proactive functions and therefore cannot be adequately explained by this theory. The two perspectives can be integrated: individuals experience pressure to communicate effectively with others within a cultural belief system, and the resulting behaviour patterns, in the form of customs and traditions, fulfill overarching needs based on the local social culture and physical environment.

Examples of counterproductive behaviors

Industrial behavior
Much research has been done regarding counterproductive work behaviours. These behaviours include things such as theft, sabotage, workplace violence and aggression, incivility, revenge and service sabotage that are willfully committed with the intention of harming an organization or its members. Some research suggests that these counterproductive behaviours are enacted when individuals or groups feel maltreated or feel that they have no legitimate options to protest. Possible antecedents of counterproductive norms include personality variables, organizational culture, control systems and injustice. Personality variables refer to individual attributes such as integrity. In fact, results on integrity tests have been shown to be correlated with counterproductive work behaviors. Organizational culture includes both the behavior of people within an organization and the meaning placed on these behaviors. For example, an increased perception of organizational acceptance of sexual harassment has been correlated with actual reports of unwanted sexual coercion, illustrating the influence of organizational culture on counterproductive workplace behavior. Control systems are physical or procedural entities that aim to reduce counterproductive behaviors or increase the penalties for engaging in these behaviors in the workplace. Sophisticated security systems are typically put in place with the intention of preventing counterproductive workplace behaviors, but may in some situations themselves be used as a means of committing sabotage (e.g. by falsifying records).
Injustice in the work environment consists of perceived inequity as well as various other ideas within the concept of organizational justice. Organizational justice is composed of the concepts of distributive justice, which refers to equitable allocation of resources, and procedural justice, which refers to how these decisions are made and their perceived fairness. Feelings of injustice and frustration have been linked to various counterproductive behaviors such as sabotage, time-wasting, interpersonal aggression, job apathy, and other anti-social behaviors.

Environmental messaging

Iron Eyes Cody PSA
One example of counterproductive norms is the Iron Eyes Cody "Keep America Beautiful" public service announcement campaign. Cialdini (2003) argues that while the ad makers conveyed an injunctive norm about environmentalism, they contradicted this by portraying littering as a descriptive norm. While the ads had a lot of success and have been recognized as some of the best PSAs of all time, Cialdini argues that they could have been more effective had they conveyed different descriptive norms.

Petrified wood example
A study by Cialdini and colleagues tested whether signs conveying different norms had an effect on the rate of theft of petrified wood in a national forest. They used one sign with a descriptive norm and one with an injunctive norm. The descriptive norm "normalized" the behavior of theft and, as a result, raised the amount of theft. The injunctive norm was more effective at reducing theft, and lowered it from the baseline. The study gives some empirical evidence that when messaging uses normative influence incorrectly, it can create or maintain a counterproductive norm.

References

Bibliography

Organizational behavior
Counterproductive norms
[ "Biology" ]
1,564
[ "Behavior", "Organizational behavior", "Human behavior" ]
13,634,965
https://en.wikipedia.org/wiki/Peer-to-peer%20video%20sharing
Peer-to-peer video sharing is a basic service on top of the IP Multimedia Subsystem (IMS). Early proprietary implementations might also run on a simple SIP infrastructure. The GSM Association calls it "Video Share". The peer-to-peer video sharing functionality is defined by Phase 1 of the GSMA Video Share service. For a more detailed description of the full GSMA Video Share service, see the entry for Video Share. The most basic form is typically connected to a classical circuit-switched (CS) telephone call. While talking on the CS line, the speaker can start a parallel multimedia IMS session. The session is normally a video stream, with audio being optional (since there is an audio session already open in the CS domain). It is also possible to share photos or files. P2P video sharing does not actually require a full IMS implementation: it could work with a pure IETF Session Initiation Protocol (SIP) infrastructure and simple HTTP Digest authentication. However, mobile operators may want to use it without username/password provisioning and the related fraud problems. One possible solution is the Early IMS Authentication method. In the future, USIM/ISIM-based authentication could also be introduced. So IMS adds extra security and management features that a mobile operator normally requires by default.

Early implementation by Nokia
The early Nokia implementation requires the manual setting of an attribute in the phone book. When the video session is triggered (by simply pulling down the back-side camera cover on a 6680), the video sharing client looks up the destination URI based on the MSISDN number of the B party of the currently open CS voice call. Video sharing is possible only if this number has a valid entry in the phone book and a valid URI for the SIP call. However, this method is not really scalable, since the user has to enter very complex strings into the phone book manually. Because this service does not involve any application server, it is difficult to build a good business model for it. The first commercial services were usually based on the idea that video sharing would increase the length of voice sessions, and the resulting increased revenue would be enough to cover the costs of the video sharing service.

History
P2P video sharing was introduced in 2004 by Nokia. Two major operators started commercial implementations: "Turbo Call" from Telecom Italia Mobile (TIM) in Italy and Telecomunicações Móveis Nacionais, SA (TMN) in Portugal. The first handsets to support P2P video sharing were the Nokia 6630 and 6680. The 6680 is especially suited to turning on video sharing, having a slider on top of the back-side camera. Later the Nokia N70 was added to the commercially supported handsets.

Popularity
TIM Italy reported about 10% penetration (based on the potentially available customers with appropriate handsets).

Supported handsets
Nokia 6630, 6680
Nokia N70
Nokia 5230

References

External links
https://web.archive.org/web/20071011005614/http://gsmworld.com/sip/e2e/videoshare.shtml - GSM Association Video Share homepage
http://sw.nokia.com/id/ced67f36-2a98-4f21-9277-209bb4a2429c/Video_Sharing.pdf - Technical description on the Forum Nokia site
https://web.archive.org/web/20080821215654/http://press.nokia.com/PR/200502/980522_5.html - Announcement of the "Turbo Call" service from TIM in cooperation with Nokia

IMS services
Peer-to-peer video sharing
[ "Technology" ]
786
[ "IMS services" ]
13,635,369
https://en.wikipedia.org/wiki/Total%20mixed%20ration
Total mixed ration (TMR) is a method of feeding beef and dairy cattle. A TMR diet achieves a wide distribution of nutrients in a uniform feed rather than switching between several feed types. A cow's ration should include good quality forages, a balance of grains and proteins, vitamins and minerals.

Management
The number of animal groups in a TMR program depends on the existing herd size, the layout of the barn, and the loafing areas. Typically a dairy barn will have high, medium and low production lactating cows, far-off and close-up dry cows, and pre-breeding and post-breeding heifers. These groups are essential, as some animals (such as first-lactation cows) do not do well in overcrowded feed areas. Working with two or three groups is easier, as the caretaker can feed the lower-cost forage to the low group and the high quality forage to the higher group to increase the overall health and performance of the cows. For dry cows this system can minimize metabolic and nutritional disorders at calving and in the post-partum period. A pre-breeding and post-breeding system is essential for heifers to ensure proper growth and development. It is important for pre-breeding heifers to have an energy- and protein-dense diet, while post-breeding heifers lack the ability to consume high-forage diets.

Summary
Complete rations feature the blended approach; all forages, concentrates, protein supplements, minerals and vitamins are mixed and offered as a single feed. Complete-ration systems can save labour and reduce overall feeding costs. It is extremely important to keep the mixture exactly the same day after day and to make big changes gradually. Early detection of problems with the ration system is possible by observing the bulk tank milk level after each milking. Forage analysis is necessary and should include dry matter, crude protein, acid detergent fiber, neutral detergent fiber, calcium and phosphorus (a toy blending example follows this article). TMR can be used effectively by many dairy farmers, but it is not a substitute for good management. In fact, the intensity of management may be increased. Most of all, the management skills and competency of the dairy farmer are critical to make this system work effectively.

References

Intensive farming
Cattle
Dairy farming
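To illustrate the blended approach, here is a toy Python sketch that mixes two hypothetical feeds to hit a crude-protein target on a dry-matter basis using the classic two-feed balance (Pearson square); the feed names and nutrient numbers are invented for illustration, not taken from the article.

```python
def blend_fraction(cp_low: float, cp_high: float, cp_target: float) -> float:
    """Fraction of the high-protein feed needed so the mix hits cp_target.

    Two-feed balance (Pearson square): solve
    x * cp_high + (1 - x) * cp_low = cp_target for x.
    """
    if not cp_low < cp_target < cp_high:
        raise ValueError("target must lie between the two feed values")
    return (cp_target - cp_low) / (cp_high - cp_low)

# Hypothetical numbers: corn silage at 8% crude protein, soybean meal
# at 48%, and a lactating-cow target of 17% (dry-matter basis).
x = blend_fraction(8.0, 48.0, 17.0)
print(f"{x:.1%} soybean meal, {1 - x:.1%} corn silage")  # 22.5% / 77.5%
```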
Total mixed ration
[ "Chemistry" ]
470
[ "Eutrophication", "Intensive farming" ]
13,635,394
https://en.wikipedia.org/wiki/Indo-1
Indo-1 is a popular dye that is used as a ratiometric calcium indicator, similar to Fura-2. In contrast to Fura-2, Indo-1 has dual emission peaks and a single excitation wavelength. The main emission peak in calcium-free solution is 475 nm, while in the presence of calcium the emission is shifted to 400 nm. It is widely used in flow cytometry and laser scanning microscopy due to its single-excitation property. However, its use for confocal microscopy is limited by its photo-instability caused by photobleaching. Unlike Fura-2, which is ratiometric in excitation, Indo-1 retains its ratiometric property in emission. The pentapotassium salt is commercially available and preferred to the free acid because of its higher solubility in water. While Indo-1 is not cell permeable, the pentaacetoxymethyl ester Indo-1 AM enters the cell, where it is cleaved by intracellular esterases to Indo-1. The synthesis and properties of Indo-1 were presented in 1985 by the group of Roger Y. Tsien. In intact heart muscle, Indo-1, in combination with the bioluminescent protein aequorin, can be utilized as a tool to distinguish between internal and external inotropic regulation processes. A sketch of how a ratiometric calcium readout is computed follows this article.

References

Biochemistry methods
Cell imaging
Chelating agents
Fluorescent dyes
Glycol ethers
Indoles
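As context for the ratiometric readout, here is a minimal sketch of the standard Grynkiewicz et al. (1985) calibration equation used with dual-emission indicators like Indo-1; the calibration constants below are hypothetical placeholders that would normally come from Rmin/Rmax calibration measurements.

```python
def calcium_from_ratio(R: float, R_min: float, R_max: float,
                       Kd_nM: float, S_f2_over_S_b2: float) -> float:
    """Grynkiewicz calibration for ratiometric indicators:

        [Ca2+] = Kd * (R - Rmin) / (Rmax - R) * (Sf2 / Sb2)

    where R = F(400 nm) / F(475 nm) for Indo-1, Rmin/Rmax are the ratios
    at zero and saturating calcium, and Sf2/Sb2 is the free/bound
    fluorescence ratio at the calcium-free (475 nm) wavelength.
    """
    return Kd_nM * (R - R_min) / (R_max - R) * S_f2_over_S_b2

# Hypothetical calibration values, for illustration only.
print(calcium_from_ratio(R=1.0, R_min=0.3, R_max=3.5, Kd_nM=250.0,
                         S_f2_over_S_b2=2.0))  # ≈ 140 nM free calcium
```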
Indo-1
[ "Chemistry", "Biology" ]
295
[ "Biochemistry methods", "Microscopy", "Biochemistry", "Chelating agents", "Cell imaging", "Process chemicals" ]
13,635,966
https://en.wikipedia.org/wiki/Common%20Purpose%20UK
Common Purpose is a British-founded charity that runs leadership-development programmes around the world. Common Purpose UK is a subsidiary of Common Purpose. Founded in 1989 by Julia Middleton, its aim is to develop leaders who cross boundaries so they can solve complex problems in work and in society. Adirupa Sengupta was appointed as Group CEO in 2019. As of 2015 Common Purpose ran local programmes for leaders in cities across the world, and its global programmes bring together leaders from over 100 countries across six continents. As of 2019, 85,000 leaders worldwide have taken part in Common Purpose programmes.

Activities

Courses
Common Purpose works with a wide range of organisations and individuals across the business, public and NGO sectors. As of 2019, 85,000 leaders have taken part in Common Purpose programmes.

International Certificate in Education
In May 2024, Times Higher Education (THE), NAFSA and Common Purpose collaborated to launch a new International Education Professional Certificate (IEPC). The IEPC is offered online and is designed to provide “knowledge, skills and competencies for success in global education”. The 12-week certificate course is backed by three established brands serving the global higher education sector: NAFSA's comprehensive collection of competencies for success in international education, the many years of experience offered by Common Purpose, and the broad industry reach offered by Times Higher Education (THE).

Education and young people
Common Purpose works with universities to run programmes for students to develop global leadership skills. As of 2019, 8,000 students completed Common Purpose programmes each year. They also run free leadership programmes for 18-25 year olds in the US, Singapore, Pakistan, Bangladesh, Nigeria, Germany and the UK as part of their Legacy campaign. In 2021, Common Purpose partnered with Times Higher Education to launch an online course designed to increase students' employability skills.

Senior executives
What Next? was a 2010 course run by Common Purpose and the Saïd Business School to help redundant executives identify opportunities to continue to use the experience they had accumulated during their careers. Between 2013 and 2019, Common Purpose partnered with the Commonwealth Study Conference to run CSCLeaders, an annual global leadership programme for 100 exceptional senior leaders selected from governments, businesses and NGOs across the 54 countries of the Commonwealth.

Leadership campaigns
In July 2009 Common Purpose was commissioned by the Government Equalities Office to conduct an online survey of individuals in leadership positions and produce a report entitled "Diversity of Representation in Public Appointments". Subsequently, Common Purpose and the Government Equalities Office set up the About Time Public Leaders Courses, designed to support the government's aim to increase the diversity of public-body board members and the pool of talented individuals ready to take up public appointments. The schemes were formally launched in January 2010. In January 2010, Common Purpose Chief Executive Julia Middleton published interviews with 12 leaders from the private, public and voluntary sectors, including Sir David Bell and Dame Suzi Leather, about the qualities needed for good leadership in challenging times.

Projects
In July 2008 Common Purpose introduced a project in Bangalore, India, which took 50 people from different sectors, e.g.
IT and banking, and encouraged them to share local and international knowledge in order to solve problems associated with trading in a recession. It has also run projects in Germany to highlight the importance of having good facilities for the disabled.

Press coverage
In May 2008 the Yorkshire Post revealed that Common Purpose had been granted free office space at the Department for Children, Schools and Families in Sheffield in 1997. A DCSF spokeswoman said the free office accommodation had been given in line with the policy of the then Education Secretary David Blunkett, a Sheffield MP, who had wanted to build better links with the local community. But Philip Davies, Conservative MP for Shipley, criticised the relationship between Government and Common Purpose, as well as the fact that it did not put the content of its training in the public domain. In January 2009 Third Sector magazine reported that Common Purpose was to face no further action from the Information Commissioner's Office. The announcement followed the ICO's ruling in October 2008 that the charity was unlikely to have complied with the provisions of the Data Protection Act on processing personal data when it compiled a list containing the personal details of people who had made what it (CP) contended were "vexatious" requests under the Freedom of Information Act 2000 relating to its dealings with public authorities.

Leveson Inquiry controversy
A number of UK national newspapers ran stories implying that Common Purpose had exerted improper influence over the Leveson Inquiry in the days preceding publication of its report. These stories centred on the role of Inquiry member Sir David Bell, who was both a trustee of Common Purpose and had set up the Media Standards Trust (a lobbying group which presented evidence to the Inquiry) together with Julia Middleton. Moreover, the Media Standards Trust set up and provided funding for the lobbying group Hacked Off, which also presented evidence to the Inquiry. Bell resigned from the Media Standards Trust when he was appointed a member of the Inquiry. On 25 November, The Daily Telegraph also published a comment piece on CPUK, noting that the Rotherham Director of Children's Services, Joyce Thacker, heavily criticised in the Rotherham child sexual exploitation scandal, was a member of CPUK, and noting that Common Purpose had been described as "[a] secretive Fabian organisation [... that] has been described as a Left-wing version of the Freemasons." Writing in The Guardian, Roy Greenslade described the Mail coverage of Common Purpose in general, and the central focus on Sir David Bell in particular, as "a classic example of conspiracist innuendo" and went on to say that "through a series of leaps of logic and phoney 'revelations' of Bell's publicly acknowledged positions, the articles persistently insinuate that he has been up to no good." This opinion was shared in an article in the New Statesman by Peter Wilby. Also in The Guardian, Michael White acknowledged that "anti-establishment bodies should be as much fair game for accountability as those of the old establishment", but said: "I couldn't help thinking as I read it that the analysis itself is a bit of a conspiracy. Delete 'Common Purpose' throughout and insert 'Jew', 'Etonian' or 'Freemason' and you'd rightly feel uneasy."
References

Further reading
Move outside your comfort zone - Guardian.co.uk
"I'm very good with a hatchet" - Guardian.co.uk
The Wealth of Experience, The Guardian, 16 April 2008 (article by Julia Middleton)
Be yourself – but know who you are meant to be, The Financial Times, 17 March 2008 (comments by Julia Middleton)
Networking | Generation Y recognise the benefits, Personnel Today 'Work Clinic' (research by Common Purpose and comments by Julia Middleton)

External links

Charities based in London
Personal development
Organizations established in 1989
1989 establishments in the United Kingdom
Private companies limited by guarantee of England
Common Purpose UK
[ "Biology" ]
1,413
[ "Personal development", "Behavior", "Human behavior" ]
13,636,007
https://en.wikipedia.org/wiki/Sperm-mediated%20gene%20transfer
Sperm-mediated gene transfer (SMGT) is a transgenic technique that transfers genes based on the ability of sperm cells to spontaneously bind to and internalize exogenous DNA and transport it into an oocyte during fertilization to produce genetically modified animals [1]. Exogenous DNA refers to DNA that originates outside of the organism. Transgenic animals have been obtained using SMGT, but the efficiency of this technique is low. Low efficiency is mainly due to low uptake of exogenous DNA by the spermatozoa, reducing the chances of fertilizing the oocytes with transfected spermatozoa [2]. In order to successfully produce transgenic animals by SMGT, the spermatozoa must attach the exogenous DNA to the head, and these transfected spermatozoa must maintain their functionality to fertilize the oocyte [2]. Genetically modified animals produced by SMGT are useful for research in biomedical, agricultural, and veterinary fields of study. SMGT could also be useful in generating animals as models for human diseases or lead to future discoveries relating to human gene therapy.

Sperm-Mediated Gene Transfer Mechanism
The method for SMGT uses the sperm cell, a natural vector of genetic material, to transport exogenous DNA. The exogenous DNA molecules bind to the cell membrane of the head of the sperm cell. This binding and internalization of the DNA is not a random event. The exogenous DNA interacts with DNA-binding proteins (DBPs) that are present on the surface of the sperm cell [3]. Spermatozoa are naturally protected against the intrusion of exogenous DNA molecules by an inhibitory factor present in mammals' seminal fluid. This factor blocks the binding of sperm cells and exogenous DNA because, in the presence of the inhibitory factor, DBPs lose their ability to bind to exogenous DNA. In the absence of this inhibitory factor, DBPs on sperm cells are able to interact with DNA and can then translocate the DNA into the cell. Therefore, the seminal fluid must be removed from the sperm samples by extensive washing immediately after ejaculation [3]. After the DNA is internalized, the exogenous DNA must be integrated into the genome. There are various mechanisms suggested for DNA integration, including integration at oocyte activation, at nucleus decondensation, or at the formation of the pronuclei, but all of these suggested mechanisms imply that the integration of DNA happens after the penetration of the sperm cell into the oocyte [3].

Sperm-Mediated Gene Transfer Controversy
Sperm-mediated gene transfer is considered controversial because, despite the successes, it has not yet become established as a reliable form of genetic manipulation. Skepticism arises based on the assumption that evolutionary chaos could arise if sperm cells could act as vectors for exogenous DNA [4]. Reasonable assumption tells us that because reproductive tracts contain free DNA molecules, sperm cells should be highly resistant to the risk of picking up exogenous DNA molecules. SMGT has been demonstrated experimentally and followed the assumption that nature has barriers against SMGT. These barriers are not always absolute and could explain the inconsistent experimental outcomes of SMGT [4]. If there are natural barriers against SMGT, then the successes may only represent unusual cases in which the barriers failed.
Two barriers have been identified: the inhibitory factor in seminal fluid that prevents binding to foreign DNA molecules, and an endogenous sperm nuclease activity that is triggered upon interaction of sperm cells with foreign DNA molecules [4]. These protections give reason to believe that unintentional interactions between sperm and exogenous genetic sequences are kept to a minimum. These barriers allow for protection against the threat that every fertilization event could become a potentially mutagenic one [4].

Applications of Sperm-Mediated Gene Transfer

Animal Transgenesis
Transgenic animals have been produced successfully using gene transfer techniques such as sperm-mediated gene transfer. Though this production has been successful, the efficiency of the process is low. The low efficiency of SMGT in the production of transgenic animals is mainly due to poor uptake of the exogenous DNA by the sperm cells, thus reducing the number of fertilized oocytes with transfected spermatozoa [5]. From 1989 to 2004, there were over 30 claims for the production of viable transgenic animals using SMGT, but only about 25 percent of these demonstrated transmission of the transgenes beyond the F0 generation [4]. This transmission is required in order to claim usable animal transgenesis. According to previous studies, numerous animal species, including mammals, birds, insects, and fish, have been found susceptible to SMGT techniques, indicating that SMGT has broad applicability across a wide variety of metazoan species [4]. Currently, despite the low frequency of transmission of transgenes, the frequency of phenotype modifications and overall animal transgenesis has been as high as 80 percent in some experiments [4].

Gene Therapy
The potential use of sperm-mediated gene transfer for embryo somatic gene therapy is a possibility for future research. Embryo somatic gene therapy would be advantageous because there seems to be an inverse correlation between the age of the patient and the effectiveness of gene therapy. Therefore, the possibility of gene therapy treatment before irreversible damage occurs would be ideal [4]. A majority of the experiments that report successful SMGT provide evidence of post-fertilization transfer and maintenance of transgenes [6]. SMGT has the potential advantages of being a simple and cost-effective method of gene therapy, especially in contrast with pronuclear microinjection, another transgenic technique. Nevertheless, despite some successes and its potential utility, SMGT is not yet established as a reliable form of genetic modification [6].

References
1. Lavitrano M, Giovannoni R, Cerrito MG. 2013. Methods for sperm-mediated gene transfer. Methods in Molecular Biology. 927:519-529.
2. García-Vázquez FA, Ruiz S, Grullón LA, Ondiz AD, Gutiérrez-Adán A, Gadea J. 2011. Factors affecting porcine sperm mediated gene transfer. Research in Veterinary Science. 91(3):446-53.
3. Lavitrano M, Busnelli M, Cerrito MG, Giovannoni R, Manzini S, Vargiolu A. 2006. Sperm-mediated gene transfer. Reproduction, Fertility and Development. 18:19-23.
4. Smith K, Spadafora C. 2005. Sperm-mediated gene transfer: applications and implications. BioEssays. 27(5):551-562.
5. Collares T, Campos VF, de Leon PM, Moura, Cavalcanti PV, Amaral MG, et al. 2011. Transgene transmission in chickens by sperm-mediated gene transfer after seminal plasma removal and exogenous DNA treated with dimethylsulfoxide or N,N-dimethylacetamide. Journal of Biosciences. 36(4):613-620.
6. Smith K. 2004.
Gene therapy: the potential applicability of gene transfer technology to the human germline. International Journal of Medical Sciences. 1(2):76-91.

Genetic engineering
Sperm-mediated gene transfer
[ "Chemistry", "Engineering", "Biology" ]
1,488
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
13,636,040
https://en.wikipedia.org/wiki/Lema%C3%AEtre%20coordinates
Lemaître coordinates are a particular set of coordinates for the Schwarzschild metric—a spherically symmetric solution to the Einstein field equations in vacuum—introduced by Georges Lemaître in 1932. Changing from Schwarzschild to Lemaître coordinates removes the coordinate singularity at the Schwarzschild radius.

Metric
The original Schwarzschild coordinate expression of the Schwarzschild metric, in natural units ($c = G = 1$), is given as
$$ds^2 = \left(1 - \frac{r_s}{r}\right) dt^2 - \frac{dr^2}{1 - \frac{r_s}{r}} - r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right),$$
where $ds^2$ is the invariant interval; $r_s = \frac{2GM}{c^2}$ is the Schwarzschild radius; $M$ is the mass of the central body; $t, r, \theta, \phi$ are the Schwarzschild coordinates (which asymptotically turn into the flat spherical coordinates); $c$ is the speed of light; and $G$ is the gravitational constant.

This metric has a coordinate singularity at the Schwarzschild radius $r = r_s$. Georges Lemaître was the first to show that this is not a real physical singularity but simply a manifestation of the fact that the static Schwarzschild coordinates cannot be realized with material bodies inside the Schwarzschild radius. Indeed, inside the Schwarzschild radius everything falls towards the centre and it is impossible for a physical body to keep a constant radius.

A transformation of the Schwarzschild coordinate system from $(t, r)$ to the new coordinates $(\tau, \rho)$,
$$d\tau = dt + \sqrt{\frac{r_s}{r}} \left(1 - \frac{r_s}{r}\right)^{-1} dr, \qquad d\rho = dt + \sqrt{\frac{r}{r_s}} \left(1 - \frac{r_s}{r}\right)^{-1} dr$$
(the numerator and denominator are switched inside the square-roots), leads to the Lemaître coordinate expression of the metric,
$$ds^2 = d\tau^2 - \frac{r_s}{r} d\rho^2 - r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right),$$
where
$$r = \left[\frac{3}{2}(\rho - \tau)\right]^{2/3} r_s^{1/3}.$$

The metric in Lemaître coordinates is non-singular at the Schwarzschild radius $r = r_s$, which corresponds to the point $\frac{3}{2}(\rho - \tau) = r_s$. There remains a genuine gravitational singularity at the center, where $\rho - \tau = 0$ (that is, $r = 0$), which cannot be removed by a coordinate change.

The time coordinate used in the Lemaître coordinates is identical to the "raindrop" time coordinate used in the Gullstrand–Painlevé coordinates. The other three coordinates (the radial and angular coordinates) of the Gullstrand–Painlevé chart are identical to those of the Schwarzschild chart. That is, Gullstrand–Painlevé applies one coordinate transform to go from the Schwarzschild time $t$ to the raindrop coordinate $\tau$. Then Lemaître applies a second coordinate transform to the radial component, so as to get rid of the off-diagonal entry in the Gullstrand–Painlevé chart. The notation $\tau$ used in this article for the time coordinate should not be confused with the proper time. It is true that $\tau$ gives the proper time for radially infalling observers; it does not give the proper time for observers traveling along other geodesics.

Geodesics
The trajectories with $\rho$ constant are timelike geodesics with $\tau$ the proper time along these geodesics. They represent the motion of freely falling particles which start out with zero velocity at infinity. At any point their speed is just equal to the escape velocity from that point. The Lemaître coordinate system is synchronous, that is, the global time coordinate of the metric defines the proper time of co-moving observers. The radially falling bodies reach the Schwarzschild radius and the centre within finite proper time (a numerical illustration follows this article).

Radial null geodesics correspond to $ds^2 = 0$ with constant $\theta$ and $\phi$, which have solutions $d\tau = \pm \beta \, d\rho$. Here, $\beta$ is just a short-hand for
$$\beta = \sqrt{\frac{r_s}{r}} = \left[\frac{3}{2} \frac{\rho - \tau}{r_s}\right]^{-1/3}.$$
The two signs correspond to outward-moving and inward-moving light rays, respectively. Re-expressing this in terms of the coordinate $r$ gives
$$\frac{dr}{d\tau} = \pm 1 - \sqrt{\frac{r_s}{r}}.$$
Note that $dr/d\tau < 0$ when $r < r_s$, for either sign. This is interpreted as saying that no signal can escape from inside the Schwarzschild radius, with light rays emitted radially either inwards or outwards both ending up at the origin as the proper time $\tau$ increases. The Lemaître coordinate chart is not geodesically complete. This can be seen by tracing outward-moving radial null geodesics backwards in time.
The outward-moving geodesics correspond to the plus sign in the above. Selecting a starting point at $r > r_s$, the above equation integrates to $r \to \infty$ as $\tau \to \infty$. Going backwards in proper time, one has $r \to r_s$ as $\tau \to -\infty$. Starting at $r < r_s$ and integrating forward, one arrives at $r = 0$ in finite proper time. Going backwards, one has, once again, that $r \to r_s$ as $\tau \to -\infty$. Thus, one concludes that, although the metric is non-singular at $r = r_s$, all outward-traveling geodesics, traced backwards, only approach $r = r_s$ as $\tau \to -\infty$.

See also
Kruskal–Szekeres coordinates
Eddington–Finkelstein coordinates
Lemaître–Tolman metric
Introduction to the mathematics of general relativity
Stress–energy tensor
Metric tensor (general relativity)
Relativistic angular momentum

References

Metric tensors
Spacetime
Coordinate charts in general relativity
General relativity
Gravity
Exact solutions in general relativity
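As a numerical companion to the statement that infalling bodies cross the horizon in finite proper time: for a radially infalling "raindrop" particle (constant $\rho$), $dr/d\tau = -\sqrt{r_s/r}$, which integrates in closed form to $\tau = \frac{2}{3}\left(r_0^{3/2} - r^{3/2}\right)/\sqrt{r_s}$. The Python sketch below evaluates this; units with $r_s = 1$ are an illustrative choice.

```python
def infall_proper_time(r0: float, r: float, rs: float = 1.0) -> float:
    """Proper time for a raindrop observer (constant rho) to fall from
    radius r0 to radius r: tau = (2/3) * (r0^1.5 - r^1.5) / sqrt(rs)."""
    return (2.0 / 3.0) * (r0**1.5 - r**1.5) / rs**0.5

rs = 1.0
# Falling from r = 2 rs: both the horizon crossing and the arrival at
# the central singularity happen at finite proper time.
print(infall_proper_time(2.0, 1.0, rs))  # to the horizon: ≈ 1.219
print(infall_proper_time(2.0, 0.0, rs))  # to the singularity: ≈ 1.886
```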
Lemaître coordinates
[ "Physics", "Mathematics", "Engineering" ]
921
[ "Exact solutions in general relativity", "Tensors", "Vector spaces", "Mathematical objects", "Theory of relativity", "General relativity", "Equations", "Space (mathematics)", "Metric tensors", "Coordinate systems", "Spacetime", "Coordinate charts in general relativity" ]
13,636,115
https://en.wikipedia.org/wiki/Radical%20trust
Radical trust is the confidence that any structured organization, such as a government, library, business, religion, or museum, has in collaboration and empowerment within online communities. Specifically, it pertains to the use of blogs, wikis and online social networking platforms by organizations to cultivate relationships with an online community that can then provide feedback and direction for the organization's interests. The organization 'trusts' and uses that input in its management. One of the first appearances of the notion of radical trust is in an infographic outlining the base principles of Web 2.0 in Tim O'Reilly's weblog post "What is Web 2.0". Radical trust is listed there as the guiding example of trusting the validity of consumer-generated media. This concept is considered to be an underlying assumption of Library 2.0. The adoption of radical trust by a library would require its management to let go of some of its control over the library and to build an organization without an end result in mind (Harris, 2006). The direction a library would take would be based on input provided by people through online communities. These changes in the organization may merely be anecdotal in nature, making this method of organization management dramatically distinct from data-based or evidence-based management. In marketing, Collin Douma further describes the notion of radical trust in the article "Radical Trust" (Marketing Magazine (Canada), August 28, 2006) as a key mindset required for marketers and advertisers to enter the social media marketing space. Conventional marketing dictates and maintains control of messages to cause the greatest persuasion in consumer decisions. Given the proliferation of peer-to-peer consumer reviews and other social media platforms that enable consumer-generated media, marketing agencies can no longer control the skew of that information. Marketers who are creating and participating in online platforms to facilitate conversation are radically trusting the consumer to build the brand based on the experience that is most relevant to them.

References
O'Reilly, Tim. What is Web 2.0. Design Patterns and Business Models for the Next Generation of Software, Sept 30, 2005. Wikipedia listed as primary example of radical trust.
Douma, Collin. Radical Trust. Marketing Magazine. August 28, 2006.
Douma, Collin. What is Radical Trust? Oct 1, 2006.
Chan, Sebastian and Jim Spadaccini. Radical Trust: The State of the Museum Blogosphere.
Fichter, Darlene. Web 2.0, Library 2.0 and Radical Trust: A First Take. April 2, 2006.
Harris, Christopher. A Matter of (Radical) Trust. School Library Journal. New York: Nov 2006. Vol. 52, (11) pg. 24.
Bolan, Kimberly, Meg Canada, and Rob Cullin. Web, Library, and Teen Services 2.0. Young Adult Library Services. Chicago: Winter 2007. Vol. 5, Iss. 2; pg. 40-43.

Social media
Social networks
Web 2.0
Library 2.0
Radical trust
[ "Technology" ]
604
[ "Computing and society", "Social media" ]
13,637,234
https://en.wikipedia.org/wiki/LG%20Venus
LG Venus (model no. LG VX8800 (CDMA) or LG KF600 (GSM)) is a slider cell phone by LG Electronics. The phone has two screens: a regular one as well as a unique touchscreen pad on the bottom third of the front (called the "InteractPad") which changes to suit the activity currently being done on the phone. It features a music player, Bluetooth capability, a microSD slot supporting cards of up to 8 GB, video messaging, speakerphone and voice command, among other features. It is considered by many to be a spiritual successor to LG's popular "Chocolate" line, which includes the earlier LG Chocolate (VX8500) and LG Chocolate Spin (VX8550) handsets. As part of the VX series, the VX8800 LG Venus was sold exclusively by Verizon Wireless in the United States. Pre-ordering began on November 8, 2007, and the release date for Verizon Wireless was November 19, 2007. On March 27, 2008, Telus Mobility announced that it would be made available through their stores and retail partners around mid-April. The GSM version of the Venus is the LG KF600, announced January 16, 2008 and released in March. It has an improved 3.2-megapixel camera, up from 2.0 megapixels on the VX8800.

See also
Samsung U900 Soul
Samsung E950
Sony Ericsson W580
LG Shine

References

External links
PhoneArena
MobileBurn
Venus info at VerizonWireless.com
Featured in PCWorld.ca's round-up of Top Canadian Smartphones and Cell Phones

VX8800
Mobile phones introduced in 2007
LG Venus
[ "Technology" ]
374
[ "Mobile technology stubs", "Mobile phone stubs" ]
13,639,109
https://en.wikipedia.org/wiki/Benzofuranylpropylaminopentane
(–)-Benzofuranylpropylaminopentane (BPAP; developmental code name FPFS-1169) is an experimental drug related to selegiline which acts as a monoaminergic activity enhancer (MAE). It is orally active in animals. BPAP is a highly potent MAE and enhances the nerve impulse propagation-mediated release of serotonin, norepinephrine, and dopamine. At much higher concentrations, BPAP is also a monoamine reuptake inhibitor, specifically of dopamine and norepinephrine and to a much lesser extent of serotonin. BPAP produces psychostimulant-like effects in animals, with these effects mediated by its MAE actions. The drug is a substituted benzofuran derivative and tryptamine relative structurally related to phenylpropylaminopentane (PPAP). BPAP was first described in 1999. There has been interest in BPAP for potential clinical use in humans, including in the treatment of Parkinson's disease, Alzheimer's disease, and depression. There has also been interest in BPAP to help slow aging.

Pharmacology

Pharmacodynamics

Monoaminergic activity enhancer
BPAP is a monoaminergic activity enhancer (MAE). It stimulates the impulse propagation mediated release of the monoamine neurotransmitters serotonin, dopamine, and norepinephrine in the brain. However, whereas the related MAE phenylpropylaminopentane (PPAP) is only a catecholaminergic activity enhancer (CAE), BPAP enhances both serotonin and the catecholamines. In addition, BPAP is a more potent MAE than PPAP. Unlike psychostimulants like amphetamine, which are monoamine releasing agents that induce release of a flood of monoamine neurotransmitters in an uncontrolled manner, BPAP instead only increases the amount of neurotransmitter that is released when a neuron is stimulated by receiving an impulse from a neighbouring neuron. As such, while both amphetamine and BPAP increase the amount of neurotransmitters that are released, amphetamine causes neurons to dump neurotransmitter stores into the synapse regardless of external input, while with BPAP the pattern of neurotransmitter release is not changed. Instead, when the neuron would normally release neurotransmitter, a larger amount than normal is released with BPAP. In an in vivo rodent study, BPAP was found to maximally increase dopamine levels in the striatum by 44%, in the substantia nigra by 118%, and in the olfactory tubercle by 57%; norepinephrine levels in the locus coeruleus by 228%; and serotonin levels in the raphe nucleus by 166%. MAEs, including BPAP, have a peculiar and characteristic bimodal concentration–response relationship, with two bell-shaped curves of MAE activity across tested concentration ranges. Hence, there is a narrow concentration range for optimal pharmacodynamic activity. The actions of BPAP and other MAEs are distinct from those of monoamine reuptake inhibitors and monoamine oxidase inhibitors. Whereas BPAP enhances the nerve stimulation-induced release of serotonin, norepinephrine, and dopamine in the rat brain stem in vitro, the selective norepinephrine reuptake inhibitor desipramine (desmethylimipramine), the selective serotonin reuptake inhibitor fluoxetine, the selective MAO-A inhibitor clorgyline, the selective MAO-B inhibitor lazabemide, and the potent dopamine receptor agonists bromocriptine and pergolide were all ineffective. Recent findings have suggested that known synthetic MAEs like BPAP may exert their effects via trace amine-associated receptor 1 (TAAR1) agonism. This was evidenced by the TAAR1 antagonist EPPTB reversing its MAE effects, among other findings.
Another compound, rasagiline, has likewise been found to reverse the effects of MAEs, and has been proposed as a possible TAAR1 antagonist. The MAE effects of BPAP, for instance on dopamine, can be blocked by monoamine reuptake inhibitors, like nomifensine. This is thought to be because BPAP uses the monoamine transporters, like the dopamine transporter, to enter monoaminergic neurons and then mediates its MAE effects via intracellular TAAR1 activation while inside presynaptic nerve terminals. Other compounds which produce MAE effects are the endogenous trace amines phenethylamine and tryptamine, the MAO-B inhibitor selegiline (L-deprenyl), and phenylpropylaminopentane (PPAP). However, BPAP is the most potent MAE known, with 130 times the in vivo potency of selegiline, in vitro activity at concentrations in the femtomolar to picomolar range, and in vivo activity at microgram doses. BPAP increases locomotor activity, a measure of psychostimulant-like effect, in normal rats, and reverses hypolocomotion in reserpine-treated rats. These effects are reversed by the dopamine D1 receptor antagonist SCH-23390 but not by the dopamine D2 receptor antagonist sulpiride, suggesting that they are mediated by the dopaminergic system. Unlike amphetamines, but similarly to selegiline, BPAP is not expected to have misuse potential. BPAP antagonizes tetrabenazine-induced inhibition of learning in the shuttle box. It has been found to have neuroprotective effects similar to those of selegiline in some animal models. Following a peak in adolescence, monoamine release in the brain declines with age in rodents, and this is associated with reduced behavioral activity. Rodent studies have found that MAEs like BPAP and selegiline augment brain monoamine release, slow age-related monoaminergic neurodegeneration, help to preserve behavioral activity with age, and prolong lifespan. Other actions In addition to its MAE actions, BPAP is a monoamine reuptake inhibitor at higher concentrations. Its binding affinities for the dopamine transporter, norepinephrine transporter, and serotonin transporter are 16 ± 2 nM, 211 ± 61 nM, and 638 ± 63 nM, respectively. Conversely, its potencies for inhibition of dopamine, norepinephrine, and serotonin reuptake are 42 ± 9 nM, 52 ± 19 nM, and 640 ± 120 nM, respectively. It has no classical monoamine releasing agent actions, in contrast to amphetamines. It has been said that the monoamine reuptake inhibition of BPAP is not of pharmacological significance at the much lower concentrations that have MAE activity. While selegiline is a potent monoamine oxidase inhibitor (MAOI), BPAP is only a weak MAO-A inhibitor at high concentrations, and at low concentrations produces only MAE effects. It is 10,000-fold less potent than the potent MAO-A inhibitor clorgyline in terms of MAO-A inhibition. The weak MAO-A inhibition of BPAP is said to be without pharmacological significance. BPAP has relatively weak affinity for the α2-adrenergic receptor; this binding occurs only at concentrations well above those associated with its MAE actions. The drug is also a weak agonist of the sigma receptor, likewise at high concentrations. Pharmacokinetics The pharmacokinetics of BPAP have been studied in rodents. It is well absorbed by parenteral and oral routes and shows substantial oral bioavailability. Peak levels are reached within 30 to 60 minutes. There is a second peak after 4 hours due to enterohepatic circulation. It crosses the blood–brain barrier and distributes into various brain areas. 
The drug is not metabolized by monoamine oxidase. BPAP is preferentially eliminated in urine and to a lesser extent in feces. Its elimination half-life was 5.5 to 5.8 hours. More than 90% of the drug is recovered in urine and feces within 72 hours of administration. Chemistry BPAP (1-(benzofuran-2-yl)-2-propylaminopentane) is a substituted benzofuran derivative and tryptamine relative and was derived from structural modification of phenylpropylaminopentane (PPAP). It was developed by replacement of the benzene ring in PPAP with a benzofuran ring. The compound is generally studied and used as the R(–)-enantiomer, R(–)-BPAP or simply (–)-BPAP (FPFS-1169). This enantiomer is more potent than the S(+)-enantiomer (FPFS-1170). Indolylpropylaminopentane (IPAP), an analogue of BPAP, is a MAE for serotonin, norepinephrine, and dopamine that was derived from tryptamine. Unlike BPAP, it shows some selectivity for serotonin, with its maximal impact on this neurotransmitter occurring at 10-fold lower concentrations than for norepinephrine or dopamine. A derivative of BPAP, 3-F-BPAP, has weak MAE activity and has been found to antagonize the MAE actions of BPAP. These findings suggest that 3-F-BPAP interacts with the same receptor or biological target as BPAP and acts as a MAE antagonist. Enantioselective synthesis of (–)-BPAP has been described. History BPAP was first described in the scientific literature in 1999. It was derived via structural modification of phenylpropylaminopentane (PPAP). It was discovered by the developers of selegiline, including József Knoll and colleagues like Ildikó Miklya. PPAP had previously been derived by modification of selegiline. Research BPAP has been studied in preclinical research for potential treatment of Alzheimer's disease, Parkinson's disease, depression, and aging. It has been found to be active in multiple animal models of antidepressant action. It also attenuates reinstatement of methamphetamine-seeking behavior in rodents. The drug has been proposed for potential clinical development for use in humans. An effective dosage of BPAP of 0.1 mg/day, one-tenth of that of the less-potent compound selegiline (1 mg/day), has been suggested for study and use in humans. References Anti-aging substances Antidepressants Antiparkinsonian agents Aphrodisiacs Benzofuranethanamines Drugs with unknown mechanisms of action Enantiopure drugs Experimental drugs Monoaminergic activity enhancers Neuroprotective agents Pro-motivational agents Serotonin–norepinephrine–dopamine reuptake inhibitors Stimulants TAAR1 agonists
Benzofuranylpropylaminopentane
[ "Chemistry", "Biology" ]
2,425
[ "Senescence", "Anti-aging substances", "Stereochemistry", "Enantiopure drugs" ]
13,640,867
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20progenitors
Gamma-ray burst progenitors are the types of celestial objects that can emit gamma-ray bursts (GRBs). GRBs show an extraordinary degree of diversity. They can last anywhere from a fraction of a second to many minutes. Bursts can have a single profile or oscillate wildly up and down in intensity, and their spectra are highly variable, unlike those of most other objects in space. The near-complete lack of observational constraint led to a profusion of theories, including evaporating black holes, magnetic flares on white dwarfs, accretion of matter onto neutron stars, antimatter accretion, supernovae, hypernovae, and rapid extraction of rotational energy from supermassive black holes, among others. There are at least two different types of progenitors (sources) of GRBs: one responsible for the long-duration, soft-spectrum bursts and one (or possibly more) responsible for short-duration, hard-spectrum bursts. The progenitors of long GRBs are believed to be massive, low-metallicity stars exploding due to the collapse of their cores. The progenitors of short GRBs are thought to arise from mergers of compact binary systems like neutron stars, which was confirmed by the GW170817 observation of a neutron star merger and a kilonova. Long GRBs: massive stars Collapsar model As of 2007, there is almost universal agreement in the astrophysics community that the long-duration bursts are associated with the deaths of massive stars in a specific kind of supernova-like event commonly referred to as a collapsar or hypernova. Very massive stars are able to fuse material in their centers all the way to iron, at which point a star cannot continue to generate energy by fusion and collapses, in this case, immediately forming a black hole. Matter from the star around the core rains down towards the center and (for rapidly rotating stars) swirls into a high-density accretion disk. The infall of this material into the black hole drives a pair of jets out along the rotational axis, where the matter density is much lower than in the accretion disk, towards the poles of the star at velocities approaching the speed of light, creating a relativistic shock wave at the front. If the star is not surrounded by a thick, diffuse hydrogen envelope, the jets' material can punch all the way through to the stellar surface. The leading shock actually accelerates as the density of the stellar matter it travels through decreases, and by the time it reaches the surface of the star it may be traveling with a Lorentz factor of 100 or higher (that is, a velocity of 0.9999 times the speed of light). Once it reaches the surface, the shock wave breaks out into space, with much of its energy released in the form of gamma-rays. Three very special conditions are required for a star to evolve all the way to a gamma-ray burst under this theory: the star must be very massive (probably at least 40 solar masses on the main sequence) to form a central black hole in the first place, the star must be rapidly rotating to develop an accretion torus capable of launching jets, and the star must have low metallicity in order to strip off its hydrogen envelope so the jets can reach the surface. As a result, gamma-ray bursts are far rarer than ordinary core-collapse supernovae, which only require that the star be massive enough to fuse all the way to iron. Evidence for the collapsar view This consensus is based largely on two lines of evidence. 
First, long gamma-ray bursts are found without exception in systems with abundant recent star formation, such as in irregular galaxies and in the arms of spiral galaxies. This is strong evidence of a link to massive stars, which evolve and die within a few hundred million years and are never found in regions where star formation has long ceased. This does not necessarily prove the collapsar model (other models also predict an association with star formation) but does provide significant support. Second, there are now several observed cases where a supernova has immediately followed a gamma-ray burst. While most GRBs occur too far away for current instruments to have any chance of detecting the relatively faint emission from a supernova at that distance, for lower-redshift systems there are several well-documented cases where a GRB was followed within a few days by the appearance of a supernova. The supernovae that have been successfully classified are of type Ib/c, a rare class of supernova caused by core collapse. Type Ib and Ic supernovae lack hydrogen absorption lines, consistent with the theoretical prediction of stars that have lost their hydrogen envelope. The GRBs with the most obvious supernova signatures include GRB 060218 (SN 2006aj), GRB 030329 (SN 2003dh), and GRB 980425 (SN 1998bw), and a handful of more distant GRBs show supernova "bumps" in their afterglow light curves at late times. Possible challenges to this theory emerged recently, with the discovery of two nearby long gamma-ray bursts that lacked the signature of any type of supernova: both GRB 060614 and GRB 060505 defied predictions that a supernova would emerge, despite intense scrutiny from ground-based telescopes. Both events were, however, associated with actively star-forming stellar populations. One possible explanation is that during the core collapse of a very massive star a black hole can form, which then 'swallows' the entire star before the supernova blast can reach the surface. Short GRBs: degenerate binary systems Short gamma-ray bursts appear to be an exception. Until 2007, only a handful of these events had been localized to a definite galactic host. However, those that have been localized appear to show significant differences from the long-burst population. While at least one short burst has been found in the star-forming central region of a galaxy, several others have been associated with the outer regions and even the outer halo of large elliptical galaxies in which star formation has nearly ceased. All the hosts identified so far have also been at low redshift. Furthermore, despite the relatively nearby distances and detailed follow-up study for these events, no supernova has been associated with any short GRB. Neutron star and neutron star/black hole mergers While the astrophysical community has yet to settle on a single, universally favored model for the progenitors of short GRBs, the generally preferred model is the merger of two compact objects as a result of gravitational inspiral: two neutron stars, or a neutron star and a black hole. While thought to be rare in the Universe, a small number of close neutron star–neutron star binaries are known in our Galaxy, and neutron star–black hole binaries are believed to exist as well. 
According to Einstein's theory of general relativity, systems of this nature will slowly lose energy due to gravitational radiation and the two degenerate objects will spiral closer and closer together, until in the last few moments, tidal forces rip the neutron star (or stars) apart and an immense amount of energy is liberated before the matter plunges into a single black hole. The whole process is believed to occur extremely quickly and be completely over within a few seconds, accounting for the short nature of these bursts. Unlike long-duration bursts, there is no conventional star to explode and therefore no supernova. This model has been well supported so far by the distribution of short GRB host galaxies, with bursts localized both to old galaxies with no star formation (for example, GRB 050509B, the first short burst to be localized to a probable host) and to galaxies with star formation still occurring (such as GRB 050709, the second), as even younger-looking galaxies can have significant populations of old stars. However, the picture is clouded somewhat by the observation of X-ray flaring in short GRBs out to very late times (up to many days), long after the merger should have been completed, and the failure to find nearby hosts of any sort for some short GRBs. Magnetar giant flares One final possible model that may describe a small subset of short GRBs is that of the so-called magnetar giant flares (also called megaflares or hyperflares). Early high-energy satellites discovered a small population of objects in the Galactic plane that frequently produced repeated bursts of soft gamma-rays and hard X-rays. Because these sources repeat and because the explosions have very soft (generally thermal) high-energy spectra, they were quickly realized to be a separate class of object from normal gamma-ray bursts and excluded from subsequent GRB studies. However, on rare occasions these objects, now believed to be extremely magnetized neutron stars and sometimes termed magnetars, are capable of producing extremely luminous outbursts. The most powerful such event observed to date, the giant flare of 27 December 2004, originated from the magnetar SGR 1806-20; it was bright enough to saturate the detectors of every gamma-ray satellite in orbit and significantly disrupted Earth's ionosphere. While still significantly less luminous than "normal" gamma-ray bursts (short or long), such an event would be detectable by current spacecraft from galaxies as far as the Virgo cluster and, at this distance, would be difficult to distinguish from other types of short gamma-ray burst on the basis of the light curve alone. To date, three gamma-ray bursts have been associated with SGR flares in galaxies beyond the Milky Way: GRB 790305b in the Large Magellanic Cloud, GRB 051103 from M81 and GRB 070201 from M31. Diversity in the origin of long GRBs HETE II and Swift observations reveal that long gamma-ray bursts come with and without supernovae, and with and without pronounced X-ray afterglows. This gives a clue to a diversity in the origin of long GRBs, possibly in- and outside of star-forming regions, with otherwise a common inner engine. The tens-of-seconds timescale of long GRBs thus appears to be intrinsic to their inner engine, for example, associated with a viscous or a dissipative process. 
The most powerful stellar mass transient sources are the above-mentioned progenitors (collapsars and mergers of compact objects), all producing rotating black holes surrounded by debris in the form of an accretion disk or torus. A rotating black hole carries spin-energy in angular momentum as does a spinning top: $E_{\rm spin} = \tfrac{1}{2} I \Omega^2$, where $I$ and $\Omega$ denote the moment of inertia and the angular velocity of the black hole in the trigonometric expression $\sin\lambda = a/M$ for the specific angular momentum $a$ of a Kerr black hole of mass $M$. With no small parameter present, it has been well recognized that the spin energy of a Kerr black hole can reach a substantial fraction (29%) of its total mass-energy $Mc^2$ (see the check below), thus holding promise to power the most remarkable transient sources in the sky. Of particular interest are mechanisms for producing non-thermal radiation by the gravitational field of rotating black holes, in the process of spin-down against their surroundings in the aforementioned scenarios. By Mach's principle, spacetime is dragged along with mass-energy, with the distant stars on cosmological scales or with a black hole in close proximity. Thus, matter tends to spin up around rotating black holes, for the same reason that pulsars spin down by shedding angular momentum in radiation to infinity. A major amount of the spin-energy of rapidly spinning black holes can thereby be released in a process of viscous spin-down against an inner disk or torus, into various emission channels. Spin-down of rapidly spinning stellar-mass black holes to their lowest energy state takes tens of seconds against an inner disk representing the remnant debris of the merger of two neutron stars, the break-up of a neutron star around a companion black hole, or the core-collapse of a massive star. Forced turbulence in the inner disk stimulates the creation of magnetic fields and multipole mass-moments, thereby opening radiation channels in radio, neutrinos and, mostly, gravitational waves with distinctive chirps, with the creation of astronomical amounts of Bekenstein-Hawking entropy. Transparency of matter to gravitational waves offers a new probe of the innermost workings of supernovae and GRBs. The gravitational-wave observatories LIGO and Virgo are designed to probe stellar mass transients in a frequency range of tens to about fifteen hundred Hz. The above-mentioned gravitational-wave emissions fall well within the LIGO-Virgo bandwidth of sensitivity; for long GRBs powered by "naked inner engines" produced in the binary merger of a neutron star with another neutron star or companion black hole, the above-mentioned magnetic disk winds dissipate into long-duration radio bursts that may be observed by the novel Low Frequency Array (LOFAR). See also Gamma-ray burst emission mechanisms Quark-nova References Gamma-ray bursts Star types
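The 29% figure can be verified from the standard expression for the rotational energy of a Kerr black hole. In the trigonometric parametrization $\sin\lambda = a/M$ assumed in the reconstruction above, the rotational energy and its extremal value are

\[
E_{\rm rot} = 2Mc^2 \sin^2\!\left(\frac{\lambda}{4}\right),
\qquad
\frac{E_{\rm rot}^{\max}}{Mc^2} = 2\sin^2\!\left(\frac{\pi}{8}\right) = 1 - \frac{\sqrt{2}}{2} \approx 0.29,
\]

the maximum being attained for an extremal hole ($a = M$, i.e. $\lambda = \pi/2$). This is a consistency check, not a formula quoted from the article's sources.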
Gamma-ray burst progenitors
[ "Physics", "Astronomy" ]
2,702
[ "Physical phenomena", "Astronomical events", "Gamma-ray bursts", "Astronomical classification systems", "Stellar phenomena", "Star types" ]
13,640,902
https://en.wikipedia.org/wiki/Estrogen%20receptor%20beta
Estrogen receptor beta (ERβ) also known as NR3A2 (nuclear receptor subfamily 3, group A, member 2) is one of two main types of estrogen receptor—a nuclear receptor which is activated by the sex hormone estrogen. In humans ERβ is encoded by the ESR2 gene. Function ERβ is a member of the family of estrogen receptors and the superfamily of nuclear receptor transcription factors. The gene product contains an N-terminal DNA binding domain and C-terminal ligand binding domain and is localized to the nucleus, cytoplasm, and mitochondria. Upon binding to 17-β-estradiol, estriol or related ligands, the encoded protein forms homo-dimers or hetero-dimers with estrogen receptor α that interact with specific DNA sequences to activate transcription. Some isoforms dominantly inhibit the activity of other estrogen receptor family members. Several alternatively spliced transcript variants of this gene have been described, but the full-length nature of some of these variants has not been fully characterized. ERβ may inhibit cell proliferation and opposes the actions of ERα in reproductive tissue. ERβ may also have an important role in adaptive function of the lung during pregnancy. ERβ is a potent tumor suppressor and plays a crucial role in many cancer types such as prostate cancer and ovarian cancer. Mammary gland ERβ knockout mice show normal mammary gland development at puberty and are able to lactate normally. The mammary glands of adult virgin female mice are indistinguishable from those of age-matched wild-type virgin female mice. This is in contrast to ERα knockout mice, in which a complete absence of mammary gland development at puberty and thereafter is observed. Administration of the selective ERβ agonist ERB-041 to immature ovariectomized female rats produced no observable effects in the mammary glands, further indicating that the ERβ is non-mammotrophic. Although ERβ is not required for pubertal development of the mammary glands, it may be involved in terminal differentiation in pregnancy, and may also be necessary to maintain the organization and differentiation of mammary epithelium in adulthood. In old female ERβ knockout mice, severe cystic mammary disease that is similar in appearance to postmenopausal mastopathy develops, whereas this does not occur in aged wild-type female mice. However, ERβ knockout mice are not only deficient in ERβ signaling in the mammary glands, but also have deficient progesterone exposure due to impairment of corpora lutea formation. This complicates attribution of the preceding findings to mammary ERβ signaling. Selective ERβ agonism with diarylpropionitrile (DPN) has been found to counteract the proliferative effects in the mammary glands of selective ERα agonism with propylpyrazoletriol (PPT) in ovariectomized postmenopausal female rats. Similarly, overexpression of ERβ via lentiviral infection in mature virgin female rats decreases mammary proliferation. ERα signaling has proliferative effects in both normal breast and breast cancer cell lines, whereas ERβ has generally antiproliferative effects in such cell lines. However, ERβ has been found to have proliferative effects in some breast cell lines. Expression of ERα and ERβ in the mammary gland have been found to vary throughout the menstrual cycle and in an ovariectomized state in female rats. Whereas mammary ERα in rhesus macaques is downregulated in response to increased estradiol levels, expression of ERβ in the mammary glands is not. 
Expression of ERα and ERβ in the mammary glands also differs throughout life in female mice. Mammary ERα expression is higher and mammary ERβ expression lower in younger female mice, while mammary ERα expression is lower and mammary ERβ expression higher in older female mice as well as in parous female mice. Mammary proliferation and estrogen sensitivity are higher in young female mice than in old or parous female mice, particularly during pubertal mammary gland development. Tissue distribution ERβ is expressed by many tissues including the uterus, blood monocytes and tissue macrophages, colonic and pulmonary epithelial cells, prostatic epithelium, and the malignant counterparts of these tissues. Also, ERβ is found throughout the brain at different concentrations in different neuron clusters. ERβ is also highly expressed in normal breast epithelium, although its expression declines with cancer progression. ERβ is expressed in all subtypes of breast cancer. Controversy regarding ERβ protein expression has hindered study of ERβ, but highly sensitive monoclonal antibodies have been produced and well-validated to address these issues. ERβ abnormalities ERβ function is related to various cardiovascular targets including ATP-binding cassette transporter A1 (ABCA1) and apolipoprotein A1 (ApoA-1). Polymorphisms may affect ERβ function and lead to altered responses in postmenopausal women receiving hormone replacement therapy. Abnormalities in gene expression associated with ERβ have also been linked to autism spectrum disorder. Disease Cardiovascular disease Mutations in ERβ have been shown to influence cardiomyocytes, the cells that comprise the largest part of the heart, and can lead to an increased risk of cardiovascular disease (CVD). There is a disparity in the prevalence of CVD between pre- and post-menopausal women, and the difference can be attributed to estrogen levels. Many types of ERβ receptors exist to help regulate gene expression and subsequent health in the body, but binding of 17βE2 (a naturally occurring estrogen) specifically improves cardiac metabolism. The heart consumes large amounts of energy in the form of ATP to pump blood and meet physiological demands, and 17βE2 helps by increasing myocardial ATP levels and respiratory function. In addition, 17βE2 can alter myocardial signaling pathways and stimulate myocyte regeneration, which can aid in inhibiting myocyte cell death. The ERβ signaling pathway plays a role in both vasodilation and arterial dilation, which contributes to a healthy heart rate and a decrease in blood pressure. This regulation can improve endothelial function and arterial perfusion, both of which are important to myocyte health. Thus, alterations in this signaling pathway due to ERβ mutation could lead to myocyte cell death from physiological stress. While ERα has a more profound role in regeneration after myocyte cell death, ERβ can still help by increasing endothelial progenitor cell activation and subsequent cardiac function. Alzheimer's disease Genetic variation in ERβ is both sex- and age-dependent, and ERβ polymorphism can lead to accelerated brain aging, cognitive impairment, and development of AD pathology. As with CVD, post-menopausal women have an increased risk of developing Alzheimer's disease (AD) due to a loss of estrogen, which affects proper aging of the hippocampus, neural survival and regeneration, and amyloid metabolism. 
ERβ mRNA is highly expressed in the hippocampal formation, an area of the brain that is associated with memory. This expression contributes to increased neuronal survival and helps protect against neurodegenerative diseases such as AD. The pathology of AD is also associated with accumulation of amyloid beta peptide (Aβ). While a proper concentration of Aβ in the brain is important for healthy functioning, too much can lead to cognitive impairment. Thus, ERβ helps control Aβ levels by maintaining the protein it is derived from, β-amyloid precursor protein. ERβ helps by up-regulating insulin-degrading enzyme (IDE), which leads to β-amyloid degradation when accumulation levels begin to rise. However, in AD, lack of ERβ causes a decrease in this degradation and an increase in plaque build-up. ERβ also plays a role in regulating APOE, a risk factor for AD that redistributes lipids across cells. APOE expression in the hippocampus is specifically regulated by 17βE2, affecting learning and memory in individuals afflicted with AD. Thus, estrogen therapy via an ERβ-targeted approach can be used as a prevention method for AD either before or at the onset of menopause. Interactions between ERα and ERβ can lead to antagonistic actions in the brain, so an ERβ-targeted approach can increase therapeutic neural responses independently of ERα. Therapeutically, ERβ can be used in both men and women to regulate plaque formation in the brain. Neuroprotective benefits Synaptic strength and plasticity ERβ levels can dictate both synaptic strength and neuroplasticity through neural structure modifications. Variations in endogenous estrogen levels cause changes in dendritic architecture in the hippocampus, which affects neural signaling and plasticity. Specifically, lower estrogen levels lead to fewer dendritic spines and improper signaling, inhibiting plasticity of the brain. However, treatment with 17βE2 can reverse this effect, giving it the ability to modify hippocampal structure. As a result of the relationship between dendritic architecture and long-term potentiation (LTP), ERβ can enhance LTP and lead to an increase in synaptic strength. Furthermore, 17βE2 promotes neurogenesis in developing hippocampal neurons and in neurons in the subventricular zone and dentate gyrus of the adult human brain. Specifically, ERβ increases the proliferation of progenitor cells to create new neurons, and this proliferation can be increased later in life through 17βE2 treatment. 
Ligands Agonists Non-selective Endogenous estrogens (e.g., estradiol, estrone, estriol, estetrol) Natural estrogens (e.g., conjugated estrogens) Synthetic estrogens (e.g., ethinylestradiol, diethylstilbestrol) Selective Agonists of ERβ selective over ERα include: 20-Hydroxyecdysone (ecdysterone, 20-HE, 20-E) – phytoecdysteroid 3β-Androstanediol (3β-diol) – endogenous 8β-VE2 AC-186 Apigenin – phytoestrogen Daidzein – phytoestrogen DCW234 Dehydroepiandrosterone (DHEA) – endogenous Diarylpropionitrile (DPN) ERB-79 and its active enantiomer ERB-26 ERB-196 (WAY-202196) Erteberel (SERBA-1, LY-500307) FERb 033 – 62-fold selectivity for ERβ over ERα Genistein – phytoestrogen; 16-fold selectivity for ERβ over ERα Liquiritigenin (Menerba) – phytoestrogen Penduletin – phytoestrogen Prinaberel (ERB-041, WAY-202041) S-Equol ((S)-4',7-isoflavandiol) – phytoestrogen; 13-fold selectivity for ERβ over ERα WAY-166818 WAY-200070 WAY-214156 Antagonists Non-selective Selective estrogen receptor modulators (e.g., tamoxifen, raloxifene) Antiestrogens (e.g., fulvestrant, ICI-164384) Selective Antagonists of ERβ selective over ERα include: PHTPP (R,R)-Tetrahydrochrysene ((R,R)-THC) – not actually selective for ERβ over ERα, and an agonist rather than an antagonist of ERα Interactions Estrogen receptor beta has been shown to interact with: CCND1, ESR1, MAD2L1, NCOA3, NCOA6, RBM39, and SRC. References Further reading External links Intracellular receptors Transcription factors
Estrogen receptor beta
[ "Chemistry", "Biology" ]
2,592
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,641,386
https://en.wikipedia.org/wiki/Julius%20Kessler
Julius Kessler (August 4, 1855 – December 10, 1940) was the founder of Kessler Whiskey. Kessler was born in Budapest, Austrian Empire, in 1855. He came to America to make his fortune, and founded Kessler Whiskey in the 1870s in Leadville, Colorado. In the early days, to get his product out, Kessler went from saloon to saloon selling the whiskey. Due to the prominence of his whiskey, Julius Kessler became president of the Distiller's Securities Corporation, also known as the "Whiskey Trust." Kessler retired from the business in 1921, aged 65, and spent his retirement in Vienna, Austria, though he returned to America several times. In 1935, Julius Kessler Distilling Co., Inc. was formed as a wholly owned subsidiary of Seagram, with Kessler as president. He died in his home in December 1940 at the age of 85. References 1855 births 1940 deaths Drink distillers Businesspeople from Colorado Emigrants from Austria-Hungary to the United States
Julius Kessler
[ "Chemistry" ]
211
[ "Distillation", "Drink distillers" ]
13,641,453
https://en.wikipedia.org/wiki/Field%20of%20bullets
The field of bullets hypothesis describes a model in which extinction is non-selective and occurs randomly. The metaphor of the field of bullets suggests that species are simply out in a field and "bullets" are hitting them at random; their extinction is thus due only to stochastic effects. The field of bullets operates without regard to organisms' adaptability or the fitness of specific animals. Under this hypothesis, all species are subject to the same probability of extinction no matter where they originate in the taxonomy, and irrespective of traits that typically buffer against extinction (a minimal simulation sketch is given below). References Extinction Biological hypotheses Metaphors referring to war and violence Metaphors referring to objects
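Because the hypothesis is purely stochastic, its core assumption is easy to state as a simulation. The following is a minimal illustrative sketch (the function name and parameters are hypothetical, not drawn from the literature):

```python
import random

def field_of_bullets(n_species, p_hit, seed=0):
    """One round of non-selective extinction: every species is 'hit'
    (goes extinct) with the same probability p_hit, independent of any
    trait, fitness value, or taxonomic position."""
    rng = random.Random(seed)
    return [rng.random() < p_hit for _ in range(n_species)]

# With p_hit = 0.5, about half of 10,000 species go extinct, and which
# ones survive carries no information about the species themselves.
extinct = field_of_bullets(10_000, 0.5)
print(sum(extinct) / len(extinct))  # ~0.5
```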
Field of bullets
[ "Biology" ]
135
[ "Biological hypotheses" ]
17,588,004
https://en.wikipedia.org/wiki/Marine%20outfall
A marine outfall (or ocean outfall) is a pipeline or tunnel that discharges municipal or industrial wastewater, stormwater, combined sewer overflows (CSOs), cooling water, or brine effluents from water desalination plants to the sea. Usually they discharge under the sea's surface (submarine outfall). In the case of municipal wastewater, effluent is often discharged after having undergone no or only primary treatment, with the intention of using the assimilative capacity of the sea for further treatment. Submarine outfalls are common throughout the world and probably number in the thousands. The light intensity and salinity of natural seawater significantly disinfect the wastewater discharged through an ocean outfall system. More than 200 outfalls have been listed in a single international database maintained by the Institute for Hydromechanics at Karlsruhe University for the International Association of Hydraulic Engineering and Research (IAHR) / International Water Association (IWA) Committee on Marine Outfall Systems. The world's first marine outfall was built in Santa Monica, United States, in 1910. In Latin America and the Caribbean there were, in 2006, 134 outfalls longer than 500 m for wastewater disposal alone, according to a survey by the Pan American Center for Sanitary Engineering and Environmental Sciences (CEPIS) of PAHO. According to the survey, the largest numbers of municipal wastewater outfalls in the region exist in Venezuela (39), Chile (39) and Brazil (22). The world's largest marine outfall is that of the Deer Island Waste Water Treatment Plant in Boston, United States. Currently, Boston has approximately 235 miles of combined sewers and 37 active CSO outfalls. Many outfalls are simply known by a popular name, e.g. the Boston Outfall. Advantages The main advantages of marine outfalls for the discharge of wastewater are: the natural dilution and dispersion of organic matter, pathogens and other pollutants; the ability to keep the sewage field submerged because of the depth at which the sewage is released; and the greater die-off rate of pathogens due to the greater distance they have to travel to shore. They also tend to be less expensive than advanced wastewater treatment plants, using the natural assimilative capacity of the sea instead of energy-intensive treatment processes in a plant. For example, preliminary treatment of wastewater is sufficient with an effective outfall and diffuser. The costs of preliminary treatment are about one-tenth those of secondary treatment. Preliminary treatment also requires much less land than advanced wastewater treatment. Disadvantages Marine outfalls for partially treated or untreated wastewater remain controversial. The design calculations and computer models used for pollution modeling have been criticized on the grounds that dilution is overemphasized and that other mechanisms work in the opposite direction, such as bioaccumulation of toxins, sedimentation of sludge particles and agglomeration of sewage particles with grease. Accumulative mechanisms include slick formation, windrow formation, flocculate formation and agglomerated formation. Grease or wax can interfere with dispersion, so that bacteria and viruses could be carried to remote locations where the concentration of bacterial predators would be low and the die-off rate much lower. 
Technology Outfalls vary in diameter from as narrow as 15 cm to as wide as 8 m; the widest registered outfall in the world, with a diameter of 8 m, is located in Navia (Spain) and discharges industrial wastewater. Outfalls vary in length from 50 m to 55 km, the longest registered outfalls being the Boston outfall with a length of 16 km and an industrial outfall in Ankleshwar (India) with a length of 55 km. The depth of the deepest point of an outfall varies from 3 m up to 60 m, the deepest registered outfall being located in Macuto, Vargas (Venezuela), where it discharges untreated municipal wastewater. Outfall materials include polyethylene, stainless steel, carbon steel, glass-reinforced plastic, reinforced concrete, cast iron or tunnels through rock. Common installation methods for pipelines are float and sink, bottom pull and top pull. Examples Submarine outfalls exist, existed or have been considered in the following locations, among many others: Africa Casablanca (Morocco). Cape Town (South Africa). Asia Manila Bay (Philippines). Mumbai (India). Mutwall (Sri Lanka). Wellawaththa (Sri Lanka). Lunawa (Sri Lanka). Oceania Anglesea, Victoria. Geelong, Victoria. Sydney (e.g., Bondi Ocean Outfall Sewer). Europe Barcelona (Spain). Costa do Estoril (Portugal). Marmara Sea near Istanbul (Turkey). San Sebastián (Spain). Split (Croatia). Thames Estuary downstream of London (UK). Edinburgh (Scotland). North America Honolulu (USA). New York Bight (USA). Southern California Bight (USA). Victoria, British Columbia (Canada). Santa Monica, United States (world's first). Boston, United States (world's largest). The city of San Diego used Pacific Ocean dilution of primary treated effluent into the 21st century. Latin America and the Caribbean Cartagena (Colombia). Ipanema Beach in Rio de Janeiro (Brazil); this outfall, built in 1975, discharges untreated wastewater through a pipe with a diameter of 2.4 m and a length of 4,775 m at a depth of 27 m. Sosua (Dominican Republic). Controversies In the 1960s the city of Sydney decided to build ocean sewage outfalls to discharge partially treated sewage 2–4 km offshore at a cost of US$300 million. In the late 1980s, however, the government promised to upgrade the coastal treatment plants so that sewage would be treated to at least secondary treatment standards before discharge into the ocean. The submarine outfall in Cartagena, Colombia was financed with a loan by the World Bank. It was subsequently challenged by residents claiming that the wastewater caused damage to the marine environment and to fisheries. The case was taken up by the World Bank's Inspection Panel, which contracted two independent three-dimensional modeling efforts in 2006. Both "confirmed that the 2.85 km long submarine outfall (was) adequate." For disposal into the ocean, environmental treaty requirements have to be met. As international treaties often govern waters that cross countries' borders, wastewater disposal is easier in bodies of water entirely under the jurisdiction of one country. References Sources IWA Committee on Marine Outfall Systems Salas, Henry J.: Submarine outfalls: a viable alternative for sewage discharge of coastal cities in Latin America and the Caribbean, Lima; CEPIS, 2000 External links IAHR/IWA Committee on Marine Outfall Systems Waste treatment technology Environmental engineering Hydrology Hydraulics Oceanography
Marine outfall
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,390
[ "Hydrology", "Applied and interdisciplinary physics", "Water treatment", "Oceanography", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Waste treatment technology", "Fluid dynamics" ]
17,589,443
https://en.wikipedia.org/wiki/Gerald%20J.%20Wasserburg
Gerald J. Wasserburg (March 25, 1927 – June 13, 2016) was an American geologist. At the time of his death, he was the John D. MacArthur Professor of Geology and Geophysics, emeritus, at the California Institute of Technology. He was known for his work in the fields of isotope geochemistry, cosmochemistry, meteoritics, and astrophysics. Education After leaving the US Army, where he received the Combat Infantryman Badge, he graduated from high school and attended college on the G.I. Bill. Wasserburg completed his Ph.D. at the University of Chicago in 1954, with a thesis on the development of K–Ar dating, done under the sponsorship of Harold Urey and Mark Inghram. Career Academia He joined the faculty at Caltech in 1955 as assistant professor. He became associate professor in 1959 and professor of geology and geophysics in 1962. In 1982 he became the John D. MacArthur Professor of Geology and Geophysics; he retired in 2001. With Typhoon Lee and D. A. Papanastassiou, he discovered the presence of short-lived radioactive 26Al in the early Solar System, and, with William R. Kelly, short-lived 107Pd. Apollo Program Wasserburg was deeply involved in the Apollo Program with the returned lunar samples, including being a member of the so-called "Four Horsemen", along with Bob Walker, Jim Arnold, and Paul Werner Gast. He pioneered the precise measurement of ultra-small samples under strict clean room conditions with minimal contamination. He was the co-inventor of the Lunatic Spectrometer (the first fully digital mass spectrometer with computer-controlled magnetic field scanning and rapid switching) and founder of the "Lunatic Asylum" research laboratory at Caltech specializing in high-precision, high-sensitivity isotopic analyses of meteorites, lunar and terrestrial samples. He and his co-workers were major contributors to establishing a chronology for the Moon, and, with F. Tera and D. A. Papanastassiou, he proposed the hypothesis of the Late Heavy Bombardment (LHB) of the whole inner Solar System near 4.0 billion years ago. Impact of research Wasserburg's research led to a better understanding of the origins and history of the Solar System and its component bodies and the precursor stellar sources contributing to the Solar System; this research established a time scale for the development of the early Solar System, including the processes of nucleosynthesis and the formation and evolution of the planets, the Moon and the meteorites. More recently, he investigated models of the chemical evolution of the Galaxy. Organizational affiliations and legacy He was a member of the U.S. National Academy of Sciences, the American Philosophical Society, the American Academy of Arts and Sciences, and the Norwegian Academy of Science and Letters. He won the Arthur L. Day Medal in 1970, the NASA Distinguished Public Service Medal in 1972 and 1978, the Wollaston Medal in 1985, the Gold Medal of the Royal Astronomical Society in 1991 and the Bowie Medal in 2008. He was co-winner, with Claude Allègre, of the Crafoord Prize in Geosciences in 1986. He was the recipient of several honorary degrees. He was also the recipient of the J. F. Kemp Medal (with Paul W. Gast, Columbia University, 1973), the Harry H. Hess Medal of the American Geophysical Union in 1985, the Leonard Medal of the Meteoritical Society in 1975, the J. Lawrence Smith Medal of the National Academy of Sciences in 1985, the Arthur L. Day Prize and Lectureship of the National Academy of Sciences in 1981, the Holmes Medal of the European Union of Geosciences in 1986, and the V. M. 
Goldschmidt Medal of the Geochemical Society in 1978. Minor planet 4765 Wasserburg is named in his honor. Family He is survived by his sons Charles and Daniel Wasserburg, as well as his grandchildren Ori, Philip, Roscoe and Benjamin Wasserburg. References Isotopic Adventures (autobiography), Annual Review of Earth and Planetary Sciences, 2003, vol. 31, pp. 1–74, http://www.annualreviews.org/doi/abs/10.1146/annurev.earth.31.100901.141409 External links Interview with Gerald J. Wasserburg for NOVA series: To the Moon, WGBH Educational Foundation, raw footage, 1998 1927 births 2016 deaths California Institute of Technology faculty University of Chicago alumni Recipients of the Gold Medal of the Royal Astronomical Society Members of the Norwegian Academy of Science and Letters Members of the United States National Academy of Sciences Wollaston Medal winners American geochemists American geophysicists American astrophysicists Recipients of the V. M. Goldschmidt Award
Gerald J. Wasserburg
[ "Chemistry" ]
975
[ "Geochemists", "Recipients of the V. M. Goldschmidt Award", "American geochemists" ]
17,589,963
https://en.wikipedia.org/wiki/Pulvinone
Pulvinone, an organic compound belonging to the esters, lactones, alcohols and butenolides classes, is a yellow crystalline solid. Although pulvinone itself is not a natural product, several naturally occurring hydroxylated derivatives are known. These hydroxylated pulvinones are produced by fungal species, such as the Larch Bolete (Boletus elegans, also known as Suillus grevillei), common in Europe, or by moulds such as Aspergillus terreus. History Fungi (such as boleti), moulds and lichens produce a wide range of pigments made up of one (monomer) or several (oligomers) units of pulvinic acid. In 1831, in the course of a study of the constituents of lichens (Cetraria vulpina), the French chemist and pharmacist Antoine Bebert discovered a compound named vulpinic acid, the first known naturally occurring methyl ester of pulvinic acid. More details about the structure of this pigment were disclosed in 1860 by the German chemists Franz Möller and Adolph Strecker. While trying to elucidate the structure of vulpinic acid, the German chemist Adolf Spiegel found in 1880 that vulpinic acid could be saponified to a diacid. He named the resulting diacid pulvinic acid. The German chemist Jacob Volhard elucidated the constitution of pulvinic acid by synthesizing it through the basic hydrolysis of a corresponding dicyano compound. In the process, he also obtained small amounts of a side-product. One year later Ludwig Claisen and Thomas Ewan achieved the synthesis of this side-product and characterized it as 5-benzylidene-4-hydroxy-3-phenylfuran-2(5H)-one. Claisen and Ewan described it as das der Pulvinsäure zu Grunde liegende Lacton (the lactone underlying the structure of pulvinic acid): that was the origin of the name pulvinone. Natural occurrence It was a century after the synthesis of the first pulvinone that the word pulvinone turned into a collective term. In 1973, Edwards and Gill isolated the first naturally occurring hydroxylated pulvinone derivative. This trihydroxylated pulvinone was found as one of the main pigments responsible for the yellow colour of the stem and caps of the European mushroom Larch Bolete (Boletus elegans, also known as Suillus grevillei). In the same year, Seto and coworkers also found hydroxylated pulvinones in cultures of the mould Aspergillus terreus. To emphasize their origin, and thereby differentiate them from the hydroxylated pulvinones found in Suillus grevillei, Seto and coworkers named these compounds Aspulvinones. The aspulvinone terminology also incorporates a letter indicating the order of chromatographic elution of these compounds (hence, the least polar aspulvinone was named Aspulvinone A, the one eluting next Aspulvinone B, etc.). Like many other yellow pigments in fungi and lichens, pulvinones can be traced back to the pulvinic acid pathway. The pulvinone structural unit is found in a number of natural products. All monomeric (such as pulvinic acid itself, vulpinic acid, gomphidic acid, the aspulvinones and the Kodaistatins) or oligomeric (Badiones, Norbadione, Aurantricholon) derivatives of pulvinic acid contain the pulvinone structural element. So far, all naturally occurring pulvinone derivatives have been found to be Z-configured. Pharmacological properties Rehse et al. demonstrated the anticoagulant activity of some pulvinones in rats. 
At the beginning of the 1980s, the companies ICI and Smith Kline & French patented a large number of derivatives of vulpinic acid because of their anti-inflammatory, fever-reducing and pain-killing properties. Yet vulpinic acid, as well as many of its derivatives, is a cytotoxic compound. Since pulvinones exhibit a lower cytotoxicity compared to vulpinic acid and its derivatives, Organon investigated the pharmaceutical potential of more than 100 pulvinones. To date, the results of these studies have not been fully disclosed. In 2005, the Wyeth company patented biphenyl-substituted pulvinones due to their promising activity against Gram-positive bacteria, including otherwise resistant bacteria. However, pulvinone-based antibiotics have so far only been patented for animal use. Chemical properties Pulvinone is a lactone, more precisely an intramolecular ester of trans-1,4-diphenyl-2,3-dihydroxy-1,3-butadiene-1-carboxylic acid, from which it can be prepared through removal of one equivalent of water. The central 5-membered ring core of pulvinone reveals a 4-hydroxy-butenolide structure. Pulvinones are essentially found in their enol form, which exhibits acidic properties due to the relative lability of the hydroxylic proton. 4-Hydroxy-butenolides such as pulvinones are therefore referred to as tetronic acids, and belong to the larger category of vinylogous acids. Biosynthesis The fungal biosynthesis starts from aromatic amino acids such as phenylalanine and tyrosine; after oxidative deamination to the corresponding arylpyruvic acid, the pulvinone skeleton is formed by a sequence of dimerisation, oxidative ring-cleavage and decarboxylation. Total synthesis Jacob Volhard was the first to synthesise vulpinic acid, pulvinic acid and pulvinone. To date, 11 total syntheses of pulvinones have been reported: 1895 by Claisen and Ewan, 1975 and 1979 by Knight and Pattenden, 1979 by Jerris, Wovkulich and Amos B. Smith III, 1984 by Ramage et al., 1985 by Campbell et al., 1990 by Gill et al., 1991 by Pattenden, Turvill and Chorlton, 2005 by Caufield et al., 2006 by Antane et al., 2007 by Kaczybura and Brückner, 2007 by Bernier, Moser and Brückner. See also Pulvinic acid Variegatic acid Vulpinic acid Sources Enols Anticoagulants Antibiotics Analgesics Furanones Phenyl compounds
Pulvinone
[ "Chemistry", "Biology" ]
1,469
[ "Enols", "Biotechnology products", "Functional groups", "Antibiotics", "Biocides" ]
17,589,968
https://en.wikipedia.org/wiki/MMTS%20%28meteorology%29
A Maximum Minimum Temperature System or MMTS is a temperature recording system that keeps track of the maximum and minimum temperatures that have occurred over some given time period. The earliest, and still perhaps most familiar, form is the maximum minimum thermometer invented by James Six in 1782. Today a typical MMTS uses a thermistor as its sensor. The readings may be taken locally, or the system can transmit its results electronically (a sketch of the underlying bookkeeping is given below). See also SNOTEL References Meteorological instrumentation and equipment Thermometers
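A minimal sketch of the bookkeeping such a system performs is shown below (illustrative only; the class and method names are hypothetical and do not correspond to any actual MMTS firmware):

```python
class MaxMinRecorder:
    """Running maximum/minimum of thermistor readings over an
    observation period, the core bookkeeping of an electronic MMTS."""

    def __init__(self):
        self.maximum = None
        self.minimum = None

    def record(self, reading_c):
        """Fold a new temperature reading (degrees Celsius) into the extremes."""
        if self.maximum is None or reading_c > self.maximum:
            self.maximum = reading_c
        if self.minimum is None or reading_c < self.minimum:
            self.minimum = reading_c

    def reset(self):
        """Start a new observation period (e.g., once per day)."""
        self.maximum = self.minimum = None

recorder = MaxMinRecorder()
for reading in [12.1, 17.6, 21.3, 15.0, 9.4]:
    recorder.record(reading)
print(recorder.maximum, recorder.minimum)  # 21.3 9.4
```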
MMTS (meteorology)
[ "Technology", "Engineering" ]
94
[ "Meteorological instrumentation and equipment", "Thermometers", "Measuring instruments" ]
17,590,286
https://en.wikipedia.org/wiki/Yoshiaki%20Arata
Yoshiaki Arata (1924 – 2018) was a Japanese physicist. Arata was one of the pioneering researchers into nuclear fusion in Japan and a former professor at Osaka University. He was reported to be a strong nationalist, speaking only Japanese in public. He received the Order of Culture in 2006. Arata started researching and publishing in the field of cold fusion around 1998, together with his colleague Yue Chang Zhang. Further reading "Japan's 'Cold Fusion' Effort Produces Startling Claims of Bursts of Neutrons", Wall Street Journal, 4 December 1989 "New life for cold fusion?" New Scientist, 9 December 1989, p. 19 N. Wada and K. Nishizawa, "Nuclear fusion in solid", Japanese Journal of Applied Physics, 1989, 28:L2017 Publications Y. Arata and Y. C. Zhang, "Achievement of intense 'cold' fusion reaction", Proceedings of the Japan Academy, Series B, 1990, 66:1. Y. Arata, Patent Application US 2006/0153752 A References 1924 births 2018 deaths Japanese nuclear physicists Japanese metallurgists Recipients of the Order of Culture Academic staff of Osaka University Osaka University alumni Cold fusion People from Kyoto Prefecture
Yoshiaki Arata
[ "Physics", "Chemistry" ]
238
[ "Nuclear fusion", "Cold fusion", "Nuclear physics" ]
17,590,530
https://en.wikipedia.org/wiki/Sequential%20dynamical%20system
Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of, for example, classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory. Definition An SDS is constructed from the following components: A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected. A state xi for each vertex i of Y taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[i] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of i in Y (in some fixed order). A vertex function fi for each vertex i. The vertex function maps the state of vertex i at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of i in Y. A word w = (w1, w2, ... , wm) over v[Y]. It is convenient to introduce the Y-local maps Fi constructed from the vertex functions by $F_i(x) = (x_1, \ldots, x_{i-1}, f_i(x[i]), x_{i+1}, \ldots, x_n)$. The word w specifies the sequence in which the Y-local maps are composed to derive the sequential dynamical system map F: Kn → Kn as $F = F_{w_m} \circ F_{w_{m-1}} \circ \cdots \circ F_{w_1}$. If the update sequence is a permutation, one frequently speaks of a permutation SDS to emphasize this point. The phase space associated to a sequential dynamical system with map F: Kn → Kn is the finite directed graph with vertex set Kn and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update sequence w. A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents. Example Consider the case where Y is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle) with vertex states from K = {0,1}. For vertex functions use the symmetric, boolean function nor : K3 → K defined by nor(x,y,z) = (1+x)(1+y)(1+z) with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick w = (1,2,3) as update sequence. Starting from the initial system state (0,0,0) at time t = 0 one computes the state of vertex 1 at time t = 1 as nor(0,0,0) = 1. The state of vertex 2 at time t = 1 is nor(1,0,0) = 0. Note that the state of vertex 1 at time t = 1 is used immediately. Next one obtains the state of vertex 3 at time t = 1 as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turn mapped to (0,1,0) by an application of the SDS map (this computation is reproduced in code below). See also Graph dynamical system Boolean network Gene regulatory network Dynamic Bayesian network Petri net References Predecessor and Permutation Existence Problems for Sequential Dynamical Systems Genetic Sequential Dynamical Systems Combinatorics Graph theory Networks Abstract algebra Dynamical systems
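The definitions above translate directly into code. The following sketch (illustrative; the helper names are hypothetical) implements the Y-local maps and their composition, and reproduces the Nor-SDS computation from the example, with the vertices relabeled 0, 1, 2:

```python
def make_local_map(i, neighborhood, f):
    """Y-local map F_i: apply the vertex function f to the states of the
    closed 1-neighborhood of vertex i, updating coordinate i only."""
    def F_i(x):
        x = list(x)
        x[i] = f(tuple(x[j] for j in neighborhood))
        return tuple(x)
    return F_i

def sds_map(local_maps, word):
    """Sequential dynamical system map: compose the Y-local maps in the
    order given by the update word w (applied left to right)."""
    def F(x):
        for i in word:
            x = local_maps[i](x)
        return x
    return F

# Triangle (3-circle) example from the text. nor returns 1 exactly when
# all of its arguments are 0, matching nor(x, y, z) = (1+x)(1+y)(1+z)
# in Boolean arithmetic.
def nor(states):
    return int(all(s == 0 for s in states))

neighborhoods = {0: (0, 1, 2), 1: (0, 1, 2), 2: (0, 1, 2)}
local_maps = {i: make_local_map(i, n, nor) for i, n in neighborhoods.items()}
F = sds_map(local_maps, word=(0, 1, 2))  # update sequence w = (1, 2, 3) in the text

print(F((0, 0, 0)))  # (1, 0, 0), as derived step by step in the text
print(F((1, 0, 0)))  # (0, 1, 0)
```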
Sequential dynamical system
[ "Physics", "Mathematics" ]
799
[ "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Mechanics", "Abstract algebra", "Algebra", "Dynamical systems" ]
17,590,563
https://en.wikipedia.org/wiki/Network%20dynamics
Network dynamics is a research field for the study of networks whose state changes in time. The dynamics may refer to the structure of connections of the units of a network, to the collective internal state of the network, or both. The networked systems could be from the fields of biology, chemistry, physics, sociology, economics, computer science, etc. Networked systems are typically characterized as complex systems consisting of many units coupled by specific, potentially changing, interaction topologies. For a dynamical systems' approach to discrete network dynamics, see sequential dynamical system. References Networks
Network dynamics
[ "Mathematics" ]
118
[ "Combinatorics stubs", "Combinatorics" ]
17,591,554
https://en.wikipedia.org/wiki/KCNC3
Potassium voltage-gated channel, Shaw-related subfamily, member 3, also known as KCNC3 or Kv3.3, is a protein that in humans is encoded by the KCNC3 gene. Function The Shaker gene family of Drosophila encodes components of voltage-gated potassium channels and comprises four subfamilies. Based on sequence similarity, this gene is similar to one of these subfamilies, namely the Shaw subfamily. The protein encoded by this gene belongs to the delayed rectifier class of channel proteins and is an integral membrane protein that mediates the voltage-dependent potassium ion permeability of excitable membranes. Clinical significance KCNC3 is associated with spinocerebellar ataxia type 13. See also Voltage-gated potassium channel References External links GeneReviews/NCBI/NIH/UW entry on Spinocerebellar Ataxia Type 13 Further reading Ion channels
KCNC3
[ "Chemistry" ]
191
[ "Neurochemistry", "Ion channels" ]
17,591,563
https://en.wikipedia.org/wiki/E%20chart
An E chart, also known as a tumbling E chart, is an eye chart used to measure a patient's visual acuity. Uses This chart does not depend on a patient's easy familiarity with a particular writing system (such as the Latin alphabet). This is often desirable, for instance with very young children. This also allows use with patients not readily fluent in the alphabet, for example in China. The chart contains rows of the letter "E" in various kinds of rotation. The patient is asked to state (usually by pointing) where the limbs of the E are pointing: "up, down, left or right". Depending on how far the patient can "read", his or her visual acuity is quantified. It works on the same principle as Snellen's distant vision chart. See also Visual acuity Landolt C References Charts Diagnostic ophthalmology Medical tests Optotypes
E chart
[ "Mathematics" ]
189
[ "Symbols", "Optotypes" ]
17,591,639
https://en.wikipedia.org/wiki/Olidata
Olidata is an Italian computer system manufacturer. The company was founded in Cesena, Italy in 1982 by Carlo Rossi and Adolfo Savini as a limited liability company (LLC). Olidata specializes in software development. The company's accounting software and administrative software divisions were eventually sold to Olivetti. Olidata is one of the largest manufacturers of computer hardware in Italy. The company also manufactures LCD televisions. In April 2008, Olidata announced the production of its JumPc, a modified version of Intel's Classmate PC. In 2009, Acer acquired 29.9% of Olidata. See also List of Italian Companies References External links Official Site Computer hardware companies Computer systems companies Electronics companies established in 1982 Italian companies established in 1982 Italian brands Electronics companies of Italy Display technology companies
Olidata
[ "Technology" ]
166
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
17,591,939
https://en.wikipedia.org/wiki/Tecticornia%20halocnemoides
Tecticornia halocnemoides, commonly known as shrubby samphire or grey glasswort, is a species of succulent, salt tolerant plant endemic to Australia. It grows as a spreading or erect shrub up to fifty centimetres high. It was first published as Arthrocnemum halocnemoides in 1845, but transferred into Halosarcia in 1980, and into Tecticornia in 2007. It is a highly variable species, with five published subspecies, some of which are themselves highly variable. These are T. halocnemoides subsp. halocnemoides, T. halocnemoides subsp. caudata, T. halocnemoides subsp. longispicata, T. halocnemoides subsp. catenulata and T. halocnemoides subsp. tenuis. There is also an unpublished putative subspecies, which is currently given the manuscript name T. halocnemoides subsp. Lake Grace (N. Casson G231. 10). References halocnemoides Caryophyllales of Australia Flora of New South Wales Flora of the Northern Territory Flora of Queensland Flora of South Australia Flora of Victoria (state) Eudicots of Western Australia Halophytes Taxa named by Christian Gottfried Daniel Nees von Esenbeck
Tecticornia halocnemoides
[ "Chemistry" ]
266
[ "Halophytes", "Salts" ]
17,593,652
https://en.wikipedia.org/wiki/Optional%20stopping%20theorem
In probability theory, the optional stopping theorem (or sometimes Doob's optional sampling theorem, for American probabilist Joseph Doob) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future). Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies: under realistic constraints such as a bounded playing time or a house limit on bets, its conditions are met and a doubling strategy gains nothing on average. The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing. Statement A discrete-time version of the theorem is given below, with N0 denoting the set of natural integers, including zero. Let X = (X_t), t in N0, be a discrete-time martingale and τ a stopping time with values in N0 ∪ {∞}, both with respect to a filtration (F_t). Assume that one of the following three conditions holds: (a) The stopping time τ is almost surely bounded, i.e., there exists a constant c in N such that τ ≤ c a.s. (b) The stopping time τ has finite expectation and the conditional expectations of the absolute value of the martingale increments are almost surely bounded, more precisely, E[τ] < ∞ and there exists a constant c such that E[|X_{t+1} − X_t| | F_t] ≤ c almost surely on the event {τ > t} for all t in N0. (c) There exists a constant c such that |X_{t∧τ}| ≤ c a.s. for all t in N0, where ∧ denotes the minimum operator. Then X_τ is an almost surely well defined random variable and E[X_τ] = E[X_0]. Similarly, if the stochastic process X is a submartingale or a supermartingale and one of the above conditions holds, then E[X_τ] ≥ E[X_0] for a submartingale, and E[X_τ] ≤ E[X_0] for a supermartingale. Remark Under condition (c) it is possible that τ = ∞ happens with positive probability. On this event X_τ is defined as the almost surely existing pointwise limit of (X_t), see the proof below for details. Applications The optional stopping theorem can be used to prove the impossibility of successful betting strategies for a gambler with a finite lifetime (which gives condition (a)) or a house limit on bets (condition (b)). Suppose that the gambler can wager up to c dollars on a fair coin flip at times 1, 2, 3, etc., winning his wager if the coin comes up heads and losing it if the coin comes up tails. Suppose further that he can quit whenever he likes, but cannot predict the outcome of gambles that haven't happened yet. Then the gambler's fortune over time is a martingale, and the time τ at which he decides to quit (or goes broke and is forced to quit) is a stopping time. So the theorem says that E[X_τ] = E[X_0]. In other words, the gambler leaves with the same amount of money on average as when he started. (The same result holds if the gambler, instead of having a house limit on individual bets, has a finite limit on his line of credit or how far in debt he may go, though this is easier to show with another version of the theorem.) Suppose a random walk starting at a ≥ 0 that goes up or down by one with equal probability on each step. Suppose further that the walk stops if it reaches 0 or m ≥ a; the time at which this first occurs is a stopping time. If it is known that the expected time at which the walk ends is finite (say, from Markov chain theory), the optional stopping theorem predicts that the expected stop position is equal to the initial position a. Solving a = pm + (1 − p)·0 for the probability p that the walk reaches m before 0 gives p = a/m. Now consider a random walk X that starts at 0 and stops if it reaches −m or +m, and use the martingale Y_t = X_t² − t, a standard example of a martingale. If τ is the time at which X first reaches ±m, then 0 = E[Y_0] = E[Y_τ] = m² − E[τ]. This gives E[τ] = m². Care must be taken, however, to ensure that one of the conditions of the theorem holds. For example, suppose the last example had instead used a 'one-sided' stopping time, so that stopping only occurred at +m, not at −m. The value of X at this stopping time would therefore be m. Therefore, the expectation value E[X_τ] must also be m, seemingly in violation of the theorem which would give E[X_τ] = 0. The failure of the optional stopping theorem shows that all three of the conditions fail. Proof Let X^τ denote the stopped process X_t^τ = X_{t∧τ}; it is also a martingale (or a submartingale or supermartingale, respectively). Under condition (a) or (b), the stopping time τ is almost surely finite, hence the random variable X_τ is well defined. Under condition (c) the stopped process X^τ is bounded, hence by Doob's martingale convergence theorem it converges a.s. pointwise to a random variable which we call X_τ. If condition (c) holds, then the stopped process X^τ is bounded by the constant random variable M := c. Otherwise, writing the stopped process as X_t^τ = X_0 + Σ_{s=0}^{t−1} (X_{s+1} − X_s)·1_{τ>s} gives |X_t^τ| ≤ M for all t, where M := |X_0| + Σ_{s=0}^{∞} |X_{s+1} − X_s|·1_{τ>s}. By the monotone convergence theorem E[M] = E[|X_0|] + Σ_{s=0}^{∞} E[|X_{s+1} − X_s|·1_{τ>s}]. If condition (a) holds, then this series only has a finite number of non-zero terms, hence M is integrable. If condition (b) holds, then we continue by inserting a conditional expectation and using that the event {τ > s} is known at time s (note that τ is assumed to be a stopping time with respect to the filtration), hence E[|X_{s+1} − X_s|·1_{τ>s}] = E[E[|X_{s+1} − X_s| | F_s]·1_{τ>s}] ≤ c·P(τ > s), and Σ_{s=0}^{∞} P(τ > s) = E[τ] < ∞, where a representation of the expected value of non-negative integer-valued random variables is used for the last equality. Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable M. Since the stopped process X^τ converges almost surely to X_τ, the dominated convergence theorem implies E[X_τ] = lim_{t→∞} E[X_t^τ]. By the martingale property of the stopped process, E[X_t^τ] = E[X_0] for all t, hence E[X_τ] = E[X_0]. Similarly, if X is a submartingale or supermartingale, respectively, change the equality in the last two formulas to the appropriate inequality. References External links Doob's Optional Stopping Theorem Probability theorems Theorems in statistics Articles containing proofs Martingale theory
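The random-walk applications lend themselves to a quick Monte Carlo check. The following Python sketch (helper names are illustrative) estimates the expected stop position, the probability of reaching m before 0, and the expected stopping time of the two-sided walk; up to sampling error it reproduces a, a/m and m².

```python
import random

def run_walk(start, low, high):
    """Symmetric ±1 walk until hitting low or high; returns (stop, steps)."""
    x, t = start, 0
    while low < x < high:
        x += random.choice((-1, 1))
        t += 1
    return x, t

random.seed(0)
a, m, trials = 3, 10, 20000

stops = [run_walk(a, 0, m)[0] for _ in range(trials)]
print(sum(stops) / trials)                  # ~ a = 3   (E[X_tau] = E[X_0])
print(sum(s == m for s in stops) / trials)  # ~ a/m = 0.3

# Walk from 0 with barriers at -m and +m: E[tau] = m^2 via Y_t = X_t^2 - t.
times = [run_walk(0, -m, m)[1] for _ in range(trials)]
print(sum(times) / trials)                  # ~ m^2 = 100
```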
Optional stopping theorem
[ "Mathematics" ]
1,195
[ "Theorems in statistics", "Theorems in probability theory", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
17,595,351
https://en.wikipedia.org/wiki/HD%2045652
HD 45652 is a star with an exoplanetary companion in the equatorial constellation of Monoceros. It was officially named Lusitânia on 17 December 2019 at the IAU100 press conference in Paris held by the International Astronomical Union (IAU). This star has an apparent visual magnitude of 8.10, making it an 8th magnitude star that is too dim to be visible to the naked eye. The system is located at a distance of 114 light-years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −5 km/s. It shows a high proper motion across the celestial sphere. The measured atmospheric properties match a metal-rich late G- or early K-type dwarf star. It is a middle-aged main sequence star, about five billion years old, and is chromospherically inactive. The star is smaller and less massive than the Sun. It is radiating 61% of the Sun's luminosity from its photosphere at an effective temperature of 5,342 K. HD 45652 is spinning with a projected rotational velocity of 3.5 km/s. Planetary system In May 2008, the discovery of an extrasolar planet, HD 45652 b, orbiting the star was announced. The planet was detected by the radial velocity method, using observations made from 2005 to 2007. It has been assigned the name Viriato by the IAU Division C Working Group on Star Names. References External links G-type main-sequence stars K-type main-sequence stars Planetary systems with one confirmed planet Monoceros Durchmusterung objects 045652 030905
HD 45652
[ "Astronomy" ]
349
[ "Monoceros", "Constellations" ]
17,595,973
https://en.wikipedia.org/wiki/Saharo-Arabian%20region
The Saharo-Arabian region is a floristic region of the Holarctic kingdom proposed by Armen Takhtajan. The region is covered by hot deserts, semideserts and savanna. The region occupies the temperate parts of the Sahara desert, Sinai Peninsula, Arabian Peninsula (geographically defined), Southern Palestine and Lower Mesopotamia. Flora Much of its flora is shared with the neighboring Mediterranean and Irano-Turanian regions of the Holarctic kingdom and Sudano-Zambezian region of the Paleotropical kingdom. However, about a quarter of the species, especially in the families Asteraceae, Brassicaceae and Chenopodiaceae, are endemic. Endemism Some of the endemic genera are Nucularia, Fredolia, Agathophora, Muricaria, Nasturtiopsis, Zilla, Oudneya, Foleyola, Lonchophora, Gymnarrhena, Lifago. References Floristic regions Flora of North Africa Desert flora Sahara Holarctic
Saharo-Arabian region
[ "Biology" ]
213
[ "Flora", "Desert flora" ]
17,596,012
https://en.wikipedia.org/wiki/Minimum%20mass
In astronomy, minimum mass is the lower-bound calculated mass of observed objects such as planets, stars, binary systems, nebulae, and black holes. Minimum mass is a widely cited statistic for extrasolar planets detected by the radial velocity method or Doppler spectroscopy, and is determined using the binary mass function. This method reveals planets by measuring changes in the movement of stars in the line-of-sight, so the real orbital inclinations and true masses of the planets are generally unknown. This is a result of sin i degeneracy. If the inclination i can be determined, the true mass can be obtained from the calculated minimum mass using the following relationship: M_true = M_min / sin i. Exoplanets Orientation of the transit to Earth Most stars will not have their planets lined up and oriented so that they transit across the centre of the star as seen by a viewer on Earth. For this reason, we are often only able to extrapolate a minimum mass from a star's wobble, because we do not know the orbital inclination and can therefore only measure the component of the star's motion along the line of sight. For orbiting bodies in extrasolar planetary systems, an inclination of 0° or 180° corresponds to a face-on orbit (which cannot be observed by radial velocity), whereas an inclination of 90° corresponds to an edge-on orbit (for which the true mass equals the minimum mass). Planets with orbits highly inclined to the line of sight from Earth produce smaller visible wobbles, and are thus more difficult to detect. One of the advantages of the radial velocity method is that eccentricity of the planet's orbit can be measured directly. One of the main disadvantages of the radial-velocity method is that it can only estimate a planet's minimum mass (M sin i). This is called sin i degeneracy. The posterior distribution of the inclination angle i depends on the true mass distribution of the planets. Radial velocity method However, when there are multiple planets in the system that orbit relatively close to each other and have sufficient mass, orbital stability analysis allows one to constrain the maximum mass of these planets. The radial velocity method can be used to confirm findings made by the transit method. When both methods are used in combination, then the planet's true mass can be estimated. Although radial velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's spectral lines then the radial velocity of the planet itself can be found, and this gives the inclination of the planet's orbit. This enables measurement of the planet's actual mass. This also rules out false positives, and also provides data about the composition of the planet. The main issue is that such detection is possible only if the planet orbits around a relatively bright star and if the planet reflects or emits a lot of light. The term true mass is synonymous with the term mass, but is used in astronomy to differentiate the measured mass of a planet from the minimum mass usually obtained from radial velocity techniques. Methods used to determine the true mass of a planet include measuring the distance and period of one of its satellites, advanced astrometry techniques that use the motions of other planets in the same star system, combining radial velocity techniques with transit observations (which indicate that the orbital inclination is close to 90°), and combining radial velocity techniques with astrometric measurements (which also determine orbital inclinations). Use of sine function In trigonometry, a unit circle is the circle of radius one centered at the origin (0, 0) in the Cartesian coordinate system. Let a line through the origin, making an angle of θ with the positive half of the x-axis, intersect the unit circle. The x- and y-coordinates of this point of intersection are equal to cos θ and sin θ, respectively. The point's distance from the origin is always 1. Stars With a mass only 93 times that of Jupiter (M_J), or about 8.9% of the mass of the Sun, AB Doradus C, a companion to AB Doradus A, is the smallest known star undergoing nuclear fusion in its core. For stars with similar metallicity to the Sun, the theoretical minimum mass the star can have, and still undergo fusion at the core, is estimated to be about 75 times the mass of Jupiter. When the metallicity is very low, however, a recent study of the faintest stars found that the minimum star size seems to be about 8.3% of the solar mass, or about 87 times the mass of Jupiter. Smaller bodies are called brown dwarfs, which occupy a poorly defined grey area between stars and gas giants. References Exoplanetology Mass
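The sin i relationship above is simple enough to state in code. A minimal Python sketch (the function name is illustrative) converts a radial-velocity minimum mass into a true mass for an assumed inclination:

```python
import math

def true_mass(minimum_mass, inclination_deg):
    """True mass from minimum mass: M_true = M_min / sin(i)."""
    return minimum_mass / math.sin(math.radians(inclination_deg))

# An edge-on orbit (i = 90 deg) leaves the minimum mass unchanged...
print(true_mass(0.5, 90))   # 0.5 (e.g. Jupiter masses)
# ...while a nearly face-on orbit implies a much larger true mass.
print(true_mass(0.5, 10))   # ~2.88
```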
Minimum mass
[ "Physics", "Mathematics" ]
935
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Size", "Wikipedia categories named after physical quantities", "Matter" ]
17,596,201
https://en.wikipedia.org/wiki/HD%2045652%20b
HD 45652 b, also named Viriato, is a gas giant extrasolar planet orbiting at only 0.23 AU from the star HD 45652, with an orbital period of 44 days. It has a mass at least half that of Jupiter. As it was detected using the radial velocity method, its true mass depends on the inclination of its orbit; if the inclination is low, the true mass will be larger. Also, its radius is not known. This planet was discovered through measurements taken by the ELODIE spectrograph in 2005 and 2006, and later confirmed by CORALIE and SOPHIE between 2006 and 2007. The discovery was announced in May 2008. Naming HD 45652 b was officially named Viriato on 17 December 2019 at the IAU100 press conference in Paris held by the IAU (International Astronomical Union). References External links Monoceros Exoplanets discovered in 2008 Giant planets Exoplanets detected by radial velocity Exoplanets with proper names
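The quoted orbit can be sanity-checked with Kepler's third law in solar units, M ≈ a³/P² (a in AU, P in years). The implied stellar mass is not stated in the article; the ~0.84 solar-mass result below is derived here purely for illustration, and is consistent with a host star less massive than the Sun.

```python
# Rough Kepler's-third-law check of the 0.23 AU, 44-day orbit.
a_au = 0.23
p_years = 44.0 / 365.25
print(a_au**3 / p_years**2)   # ~0.84 solar masses for the host star
```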
HD 45652 b
[ "Astronomy" ]
211
[ "Monoceros", "Constellations" ]
17,596,630
https://en.wikipedia.org/wiki/Icky-pick
Icky-pick or icky-pic is a gelatinous cable compound used in outdoor-rated communications cables, including both twisted-pair copper cabling and fiber-optic cabling. "PIC" is the abbreviation for "plastic insulated cable". The cable is filled with an "icky" substance. The filled cable itself, therefore, is called an "icky PIC". Icky-pick has two primary functions: Deter animals from biting and damaging the cable due to the smell and taste of the gel Seal any nick or gash in the outer jacket if they do bite it, preventing water from entering the cable and damaging it by corrosion and freeze expansion The actual icky-pick compound is a very thick petroleum-based substance, e.g. petroleum jelly, and is only rated for outdoor use, frequently direct-buried in the ground. An outdoor cable spliced onto an indoor terminal block is prone to leak the gel, hence in many situations the icky-pic cable is spliced outside the building to a short run of normal cable which runs through a protective conduit into the building. The thick gel stains clothing and hands and is very difficult to remove. When fiber-optic cables are to be spliced, the gel must be removed with solvents and swabs to prevent fouling of the splice. Paint thinner or charcoal starter is a frequently used and commonly available remover and clean-up agent. References Petroleum products Signal cables
Icky-pick
[ "Chemistry" ]
304
[ "Petroleum", "Petroleum products" ]
17,596,903
https://en.wikipedia.org/wiki/Nuclear%20electronics
Nuclear electronics is a subfield of electronics concerned with the design and use of high-speed electronic systems for nuclear physics and elementary particle physics research, and for industrial and medical use. Essential elements of such systems include fast detectors for charged particles, discriminators for separating them by energy, counters for counting the pulses produced by individual particles, fast logic circuits (including coincidence and veto gates) for identification of particular types of complex particle events, and pulse height analyzers (PHAs) for sorting and counting gamma rays or particle interactions by energy, for spectral analysis. Elementary components Some of the essential components that make up the elements of a nuclear electronic analysis system include: Detectors Bias voltage supplies Preamplifiers Discriminators Coincidence and veto logic gates Counters Pulse height analyzers These elements were originally developed and built in the laboratories of the scientists doing the pioneering work in the field, but are nowadays designed, developed, and manufactured by a variety of specialized vendors: EG&G Ortec Oxford Instruments Stanford Research Systems Tennelec See also Nuclear Instrumentation Module External links Nuclear electronics college class. Electronics Nuclear technology
Nuclear electronics
[ "Physics" ]
226
[ "Nuclear technology", "Nuclear physics" ]
17,597,147
https://en.wikipedia.org/wiki/Pulse-height%20analyzer
A pulse-height analyzer (PHA) is an instrument that accepts electronic pulses of varying heights from particle and event detectors, digitizes the pulse heights, and saves the number of pulses of each height in registers or channels, thus recording a pulse-height spectrum or pulse-height distribution used for later pulse-height analysis. PHAs are used in nuclear- and elementary-particle physics research. A PHA is a specific modification to multichannel analyzers. A pulse-height analyzer is also integrated into particle counters or used as a discrete module to calibrate particle counters. See also Nuclear electronics Experimental particle physics
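The channel-counting behavior described above is easy to illustrate in code. The following Python toy (the pulse amplitudes are synthetic, made up for illustration) bins pulse heights into channels and locates the dominant peak, which is the core of what a PHA does:

```python
import random

random.seed(1)
# Synthetic pulse amplitudes: two "peaks" of different heights.
pulses = [random.gauss(2.0, 0.1) for _ in range(500)] + \
         [random.gauss(5.0, 0.2) for _ in range(200)]

n_channels, v_max = 64, 8.0
channels = [0] * n_channels
for v in pulses:
    idx = min(int(v / v_max * n_channels), n_channels - 1)
    channels[idx] += 1   # one count in the channel for this pulse height

peak = max(range(n_channels), key=lambda i: channels[i])
print(peak, peak * v_max / n_channels)  # strongest peak near 2.0 units
```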
Pulse-height analyzer
[ "Physics" ]
127
[ "Experimental particle physics", "Nuclear and atomic physics stubs", "Particle physics", "Experimental physics", "Nuclear physics", "Particle physics stubs" ]
17,597,677
https://en.wikipedia.org/wiki/Semi-infinite%20programming
In optimization theory, semi-infinite programming (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In the former case the constraints are typically parameterized. Mathematical formulation of the problem The problem can be stated simply as: minimize f(x) over x ∈ X, subject to g(x, y) ≤ 0 for all y ∈ Y, where f: R^n → R, g: R^n × R^m → R, X ⊆ R^n, and Y ⊆ R^m. SIP can be seen as a special case of bilevel programs in which the lower-level variables do not participate in the objective function. Methods for solving the problem See the external links below for a complete tutorial. Examples See the external links below for a complete tutorial. See also Optimization Generalized semi-infinite programming (GSIP) References External links Description of semi-infinite programming from INFORMS (Institute for Operations Research and Management Science). Optimization in vector spaces Approximation theory Numerical analysis
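One common way to attack an SIP in practice is discretization: replace the infinite constraint index set Y by a finite grid and solve the resulting ordinary program. The Python sketch below (a toy under stated assumptions, not a general SIP solver) uses the problem minimize f(x) = x subject to sin(y) − x ≤ 0 for all y in [0, π], whose optimum is x* = max_y sin(y) = 1; on a grid, the smallest feasible x is simply the maximum of sin over the grid points.

```python
import math

def solve_discretized(n_grid):
    """Discretize y in [0, pi] and return the optimal x for the toy SIP."""
    ys = [math.pi * k / (n_grid - 1) for k in range(n_grid)]
    # Feasibility requires x >= sin(y) at every grid point, so the
    # minimizer of f(x) = x is the largest sampled constraint value.
    return max(math.sin(y) for y in ys)

for n in (4, 6, 100):
    print(n, solve_discretized(n))  # 0.866..., 0.951..., ~1: approaches x* = 1
```

Refining the grid tightens the relaxation; practical methods add grid points adaptively (exchange methods) rather than uniformly.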
Semi-infinite programming
[ "Mathematics" ]
170
[ "Applied mathematics", "Approximation theory", "Computational mathematics", "Mathematical relations", "Applied mathematics stubs", "Numerical analysis", "Approximations" ]
3,125,205
https://en.wikipedia.org/wiki/Hierarchical%20state%20routing
Hierarchical state routing (HSR), proposed in Scalable Routing Strategies for Ad Hoc Wireless Networks by Iwata et al. (1999), is a typical example of a hierarchical routing protocol. HSR maintains a hierarchical topology, where elected clusterheads at the lowest level become members of the next higher level. On the higher level, superclusters are formed, and so on. Nodes which want to communicate to a node outside of their cluster ask their clusterhead to forward their packet to the next level, until a clusterhead of the other node is in the same cluster. The packet then travels down to the destination node. Furthermore, HSR proposes to cluster nodes in a logical way instead of in a geographical way: members of the same company or in the same battlegroup are clustered together, assuming they will communicate much within the logical cluster. HSR does not specify how a cluster is to be formed. Routing algorithms
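The up-then-down forwarding pattern can be sketched by modeling the clusterhead hierarchy as a tree. The Python toy below uses a made-up topology (HSR itself does not prescribe how clusters form); a packet climbs toward clusterheads until source and destination share a cluster, then descends.

```python
# Illustrative clusterhead hierarchy: leaves a, b, c, d; clusterheads
# ch1, ch2; a single top-level supercluster head "top".
parent = {"a": "ch1", "b": "ch1", "c": "ch2", "d": "ch2",
          "ch1": "top", "ch2": "top", "top": None}

def ancestors(node):
    """Node followed by its chain of clusterheads up to the top level."""
    chain = [node]
    while parent[chain[-1]] is not None:
        chain.append(parent[chain[-1]])
    return chain

def hsr_route(src, dst):
    """Climb until a shared cluster(head) is found, then walk down."""
    up, down = ancestors(src), ancestors(dst)
    shared = next(n for n in up if n in down)   # lowest shared level
    return up[:up.index(shared) + 1] + list(reversed(down[:down.index(shared)]))

print(hsr_route("a", "d"))  # ['a', 'ch1', 'top', 'ch2', 'd']
```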
Hierarchical state routing
[ "Technology" ]
189
[ "Computing stubs", "Computer network stubs" ]
3,125,808
https://en.wikipedia.org/wiki/Instantaneous%20phase%20and%20frequency
Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t) is the real-valued function φ(t) = arg[s(t)], where arg is the complex argument function. The instantaneous frequency is the temporal rate of change of the instantaneous phase. For a real-valued function s(t), it is determined from the function's analytic representation, s_a(t) = s(t) + j·ŝ(t), as φ(t) = arg[s_a(t)], where ŝ(t) represents the Hilbert transform of s(t). When φ(t) is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming s_a(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred. Examples Example 1 s(t) = A cos(ωt + θ), with analytic representation s_a(t) = A e^{j(ωt + θ)} and instantaneous phase φ(t) = ωt + θ, where ω > 0. In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined. Example 2 s(t) = A sin(ωt) = A cos(ωt − π/2), with s_a(t) = A e^{j(ωt − π/2)} and φ(t) = ωt − π/2, where ω > 0. In both examples the local maxima of s(t) correspond to φ(t) = 2πN for integer values of N. This has applications in the field of computer vision. Formulations Instantaneous angular frequency is defined as ω(t) = dφ(t)/dt, and instantaneous (ordinary) frequency is defined as f(t) = (1/2π)·dφ(t)/dt, where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t). The inverse operation, which always unwraps phase, is φ(t) = φ(0) + integral from 0 to t of ω(τ) dτ. This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of s_a(t), instead of the complex arg, without concern of phase unwrapping. Writing s_a(t) = x(t) + j·y(t), the phase is φ(t) = arctan(y(t)/x(t)) up to the additive integer multiples of π needed to unwrap it. At values of time t where the unwrapping term is constant, the derivative of φ(t) is ω(t) = [x(t)·y′(t) − y(t)·x′(t)] / [x(t)² + y(t)²]. For discrete-time functions, this can be written as a recursion: φ[n] = φ[n−1] + Δφ[n], with Δφ[n] = arg(s_a[n]) − arg(s_a[n−1]). Discontinuities can then be removed by adding 2π whenever Δφ[n] ≤ −π, and subtracting 2π whenever Δφ[n] > π. That allows φ[n] to accumulate without limit and produces an unwrapped instantaneous phase. An equivalent formulation that replaces the modulo 2π operation with a complex multiplication is φ[n] = φ[n−1] + arg(s_a[n]·s_a*[n−1]), where the asterisk denotes complex conjugate. The discrete-time instantaneous frequency (in units of radians per sample) is simply the advancement of phase for that sample, ω[n] = φ[n] − φ[n−1]. Complex representation In some applications, such as averaging the values of phase at several moments of time, it may be useful to convert each value to a complex number, or vector representation, e^{jφ(t)}. This representation is similar to the wrapped phase representation in that it does not distinguish between multiples of 2π in the phase, but similar to the unwrapped phase representation since it is continuous. A vector-average phase can be obtained as the arg of the sum of the complex numbers without concern about wrap-around. See also Angular displacement Analytic signal Frequency modulation Group delay Instantaneous amplitude Negative frequency References Further reading Signal processing Digital signal processing Time–frequency analysis Fourier analysis Electrical engineering Audio engineering
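The complex-multiplication form of the recursion is the usual way to estimate instantaneous frequency numerically. A short Python sketch, assuming numpy and scipy are available and using a pure tone so the answer can be checked by eye:

```python
import numpy as np
from scipy.signal import hilbert  # builds the analytic representation s_a(t)

fs = 1000.0                        # sample rate in Hz (illustrative)
f0 = 50.0                          # tone frequency in Hz
t = np.arange(1000) / fs
s = np.cos(2 * np.pi * f0 * t + 0.3)

sa = hilbert(s)                    # s_a[n] = s[n] + j * Hilbert{s}[n]

# Phase advance per sample, free of wrapping: arg(s_a[n] * conj(s_a[n-1]))
dphi = np.angle(sa[1:] * np.conj(sa[:-1]))
inst_freq = dphi * fs / (2 * np.pi)   # radians/sample -> Hz
print(np.median(inst_freq))           # ~50 Hz (edges of the window are noisy)
```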
Instantaneous phase and frequency
[ "Physics", "Technology", "Engineering" ]
750
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis", "Electrical engineering", "Audio engineering" ]
3,125,930
https://en.wikipedia.org/wiki/Scheinerman%27s%20conjecture
In mathematics, Scheinerman's conjecture, now a theorem, states that every planar graph is the intersection graph of a set of line segments in the plane. This conjecture was formulated by E. R. Scheinerman in his Ph.D. thesis (1984), following the earlier result of Ehrlich et al. that every planar graph could be represented as the intersection graph of a set of simple curves in the plane. It was proven in 2009 by Jérémie Chalopin and Daniel Gonçalves. For instance, the graph G shown below to the left may be represented as the intersection graph of the set of segments shown below to the right. Here, vertices of G are represented by straight line segments and edges of G are represented by intersection points. Scheinerman also conjectured that segments with only three directions would be sufficient to represent 3-colorable graphs, and conjectured that analogously every planar graph could be represented using four directions. If a graph is represented with segments having only k directions and no two segments belong to the same line, then the graph can be colored using k colors, one color for each direction. Therefore, if every planar graph can be represented in this way with only four directions, then the four color theorem follows. However, it has been proved that some planar graphs cannot be represented in that way. It has also been proved that every bipartite planar graph can be represented as an intersection graph of horizontal and vertical line segments, and that every triangle-free planar graph can be represented as an intersection graph of line segments having only three directions; the latter result implies Grötzsch's theorem that triangle-free planar graphs can be colored with three colors. It has further been proved that if a planar graph G can be 4-colored in such a way that no separating cycle uses all four colors, then G has a representation as an intersection graph of segments. It was also proved that planar graphs are in 1-STRING, the class of intersection graphs of simple curves in the plane that intersect each other in at most one crossing point per pair. This class is intermediate between the intersection graphs of segments appearing in Scheinerman's conjecture and the intersection graphs of unrestricted simple curves from the result of Ehrlich et al. It can also be viewed as a generalization of the circle packing theorem, which shows the same result when curves are allowed to intersect in a tangent. The proof of the conjecture was based on an improvement of this result. References Planar graphs Conjectures that have been proved
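The central object here, the intersection graph of a set of segments, is easy to compute for small examples. The Python sketch below (segment coordinates are made up for illustration) builds the intersection graph of three segments that pairwise cross, recovering the triangle K3:

```python
from itertools import combinations

def ccw(a, b, c):
    """Twice the signed area of triangle abc; sign gives orientation."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_intersect(p, q):
    """Proper crossing test for segments p = (p1, p2), q = (q1, q2)."""
    p1, p2 = p
    q1, q2 = q
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

segments = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((-1, 0.5), (3, 0.5))]
edges = [(i, j) for i, j in combinations(range(len(segments)), 2)
         if segments_intersect(segments[i], segments[j])]
print(edges)  # [(0, 1), (0, 2), (1, 2)] -- the triangle K3
```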
Scheinerman's conjecture
[ "Mathematics" ]
509
[ "Mathematical theorems", "Statements about planar graphs", "Planar graphs", "Conjectures that have been proved", "Planes (geometry)", "Mathematical problems" ]
3,126,072
https://en.wikipedia.org/wiki/Serotiny
Serotiny in botany simply means 'following' or 'later'. In the case of serotinous flowers, it means flowers which grow following the growth of leaves, or even more simply, flowering later in the season than is customary with allied species. Having serotinous leaves is also possible, these follow the flowering. Serotiny is contrasted with coetany. Coetaneous flowers or leaves appear together with each other. In the case of serotinous fruit, the term is used in the more general sense of plants that release their seed over a long period of time, irrespective of whether release is spontaneous; in this sense the term is synonymous with bradyspory. In the case of certain Australian, North American, South African or Californian plants which grow in areas subjected to regular wildfires, serotinous fruit can also mean an ecological adaptation exhibited by some seed plants, in which seed release occurs in response to an environmental trigger, rather than spontaneously at seed maturation. The most common and best studied trigger is fire, and the term serotiny is used to refer to this specific case. Possible triggers include: Death of the parent plant or branch (necriscence) Wetting (hygriscence) Warming by the sun (soliscence) Drying atmospheric conditions (xyriscence) Fire (pyriscence) – this is the most common and best studied case, and the term serotiny is often used where pyriscence is intended. Fire followed by wetting (pyrohydriscence) Some plants may respond to more than one of these triggers. For example, Pinus halepensis exhibits primarily fire-mediated serotiny, but responds weakly to drying atmospheric conditions. Similarly, Sierra sequoias and some Banksia species are strongly serotinous with respect to fire, but also release some seed in response to plant or branch death. Serotiny can occur in various degrees. Plants that retain all of their seed indefinitely in the absence of a trigger event are strongly serotinous. Plants that eventually release some of their seed spontaneously in the absence of a trigger are weakly serotinous. Finally, some plants release all of their seed spontaneously after a period of seed storage, but the occurrence of a trigger event curtails the seed storage period, causing all seed to be released immediately; such plants are essentially non-serotinous, but may be termed facultatively serotinous. Fire-mediated serotiny In the southern hemisphere, fire-mediated serotiny is found in angiosperms in fire-prone parts of Australia and South Africa. It is extremely common in the Proteaceae of these areas, and also occurs in other taxa, such as Eucalyptus (Myrtaceae) and even exceptionally in Erica sessiliflora (Ericaceae). In the northern hemisphere, it is found in a range of conifer taxa, including species of Pinus, Cupressus, Sequoiadendron, and more rarely Picea. Since even non-serotinous cones and woody fruits can provide protection from the heat of fire, the key adaptation of fire-induced serotiny is seed storage in a canopy seed bank, which can be released by fire. The fire-release mechanism is commonly a resin that seals the fruit or cone scales shut, but which melts when heated. This mechanism is refined in some Banksia by the presence inside the follicle of a winged seed separator which blocks the opening, preventing the seed from falling out. Thus, the follicles open after fire, but seed release does not occur. As the cone dries, wetting by rain or humidity causes the cone scales to expand and reflex, promoting seed release. 
The seed separator thus acts as a lever against the seeds, gradually prying them out of the follicle over the course of one or more wet-dry cycles. The effect of this adaptation is to ensure that seed release occurs not in response to fire, but in response to the onset of rains following fire. The relative importance of serotiny can vary among populations of the same plant species. For example, North American populations of lodgepole pine (Pinus contorta) can vary from being highly serotinous to having no serotiny at all, opening annually to release seed. Different levels of cone serotiny have been linked to variations in the local fire regime: areas that experience more frequent crown-fire tend to have high rates of serotiny, while areas with infrequent crown-fire have low levels of serotiny. Additionally, herbivory of lodgepole pines can make fire-mediated serotiny less advantageous in a population. Red squirrels (Tamiasciurus hudsonicus) and red crossbills (Loxia curvirostra) will eat seeds, and so serotinous cones, which last in the canopy longer, are more likely to be chosen. Serotiny occurs less frequently in areas where this seed predation is common. Pyriscence can be understood as an adaptation to an environment in which fires are regular and in which post-fire environments offer the best germination and seedling survival rates. In Australia, for example, fire-mediated serotiny occurs in areas that not only are prone to regular fires but also possess oligotrophic soils and a seasonally dry climate. This results in intense competition for nutrients and moisture, leading to very low seedling survival rates. The passage of fire, however, reduces competition by clearing out undergrowth, and results in an ash bed that temporarily increases soil nutrition; thus the survival rates of post-fire seedlings are greatly increased. Furthermore, releasing a large number of seeds at once, rather than gradually, increases the possibility that some of those seeds will escape predation. Similar pressures apply in Northern Hemisphere conifer forests, but in this case there is the further issue of allelopathic leaf litter, which suppresses seed germination. Fire clears out this litter, eliminating this obstacle to germination. Evolution Serotinous adaptations occur in at least 530 species in 40 genera, in multiple (paraphyletic) lineages. Serotiny likely evolved separately in these species, but may in some cases have been lost by the related non-serotinous species. In the genus Pinus, serotiny likely evolved because of the atmospheric conditions during the Cretaceous period. The atmosphere during the Cretaceous had higher oxygen and carbon dioxide levels than our atmosphere. Fire occurred more frequently than it does currently, and plant growth was high enough to create an abundance of flammable material. Many Pinus species adapted to this fire-prone environment with serotinous pine cones. 
A set of conditions must be met in order for long-term seed storage to be evolutionarily viable for a plant: The plant must be phylogenetically able (pre-adapted) to develop the necessary reproductive structures The seeds must remain viable until cued to release Seed release must be cued by a trigger that indicates environmental conditions that are favorable to germination, The cue must occur on an average timescale that is within the reproductive lifespan of the plant The plant must have the capacity and opportunity to produce enough seeds prior to release to ensure population replacement Serotiny must be heritable References Plant morphology Plant physiology
Serotiny
[ "Biology" ]
1,519
[ "Plant morphology", "Plant physiology", "Plants" ]
3,126,130
https://en.wikipedia.org/wiki/Wagner%27s%20theorem
In graph theory, Wagner's theorem is a mathematical forbidden graph characterization of planar graphs, named after Klaus Wagner, stating that a finite graph is planar if and only if its minors include neither K5 (the complete graph on five vertices) nor K3,3 (the utility graph, a complete bipartite graph on six vertices). This was one of the earliest results in the theory of graph minors and can be seen as a forerunner of the Robertson–Seymour theorem. Definitions and statement A planar embedding of a given graph is a drawing of the graph in the Euclidean plane, with points for its vertices and curves for its edges, in such a way that the only intersections between pairs of edges are at a common endpoint of the two edges. A minor of a given graph is another graph formed by deleting vertices, deleting edges, and contracting edges. When an edge is contracted, its two endpoints are merged to form a single vertex. In some versions of graph minor theory the graph resulting from a contraction is simplified by removing self-loops and multiple adjacencies, while in other versions multigraphs are allowed, but this variation makes no difference to Wagner's theorem. Wagner's theorem states that every graph has either a planar embedding, or a minor of one of two types, the complete graph K5 or the complete bipartite graph K3,3. (It is also possible for a single graph to have both types of minor.) If a given graph is planar, so are all its minors: vertex and edge deletion obviously preserve planarity, and edge contraction can also be done in a planarity-preserving way, by leaving one of the two endpoints of the contracted edge in place and routing all of the edges that were incident to the other endpoint along the path of the contracted edge. A minor-minimal non-planar graph is a graph that is not planar, but in which all proper minors (minors formed by at least one deletion or contraction) are planar. Another way of stating Wagner's theorem is that there are only two minor-minimal non-planar graphs, K5 and K3,3. Another result also sometimes known as Wagner's theorem states that a four-connected graph is planar if and only if it has no K5 minor. That is, by assuming a higher level of connectivity, the graph K3,3 can be made unnecessary in the characterization, leaving only a single forbidden minor, K5. Correspondingly, the Kelmans–Seymour conjecture states that a 5-connected graph is planar if and only if it does not have K5 as a topological minor. History and relation to Kuratowski's theorem Wagner published both theorems in 1937, subsequent to the 1930 publication of Kuratowski's theorem, according to which a graph is planar if and only if it does not contain as a subgraph a subdivision of one of the same two forbidden graphs K5 and K3,3. In a sense, Kuratowski's theorem is stronger than Wagner's theorem: a subdivision can be converted into a minor of the same type by contracting all but one edge in each path formed by the subdivision process, but converting a minor into a subdivision of the same type is not always possible. However, in the case of the two graphs K5 and K3,3, it is straightforward to prove that a graph that has at least one of these two graphs as a minor also has at least one of them as a subdivision, so the two theorems are equivalent. Implications One consequence of the stronger version of Wagner's theorem for four-connected graphs is to characterize the graphs that do not have a K5 minor. 
The theorem can be rephrased as stating that every such graph is either planar or it can be decomposed into simpler pieces. Using this idea, the K5-minor-free graphs may be characterized as the graphs that can be formed as combinations of planar graphs and the eight-vertex Wagner graph, glued together by clique-sum operations. For instance, K3,3 can be formed in this way as a clique-sum of three planar graphs, each of which is a copy of the tetrahedral graph K4. Wagner's theorem is an important precursor to the theory of graph minors, which culminated in the proofs of two deep and far-reaching results: the graph structure theorem (a generalization of Wagner's clique-sum decomposition of K5-minor-free graphs) and the Robertson–Seymour theorem (a generalization of the forbidden minor characterization of planar graphs, stating that every graph family closed under the operation of taking minors has a characterization by a finite number of forbidden minors). Analogues of Wagner's theorem can also be extended to the theory of matroids: in particular, the same two graphs K5 and K3,3 (along with three other forbidden configurations) appear in a characterization of the graphic matroids by forbidden matroid minors. References Planar graphs Graph minor theory Theorems in graph theory
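The two forbidden minors can be checked experimentally with a planarity test. The Python sketch below assumes the networkx library; its check_planarity function runs a combinatorial embedding test rather than a minor search, but its verdicts agree with Wagner's characterization:

```python
import networkx as nx

for name, g in [("K5", nx.complete_graph(5)),
                ("K3,3", nx.complete_bipartite_graph(3, 3)),
                ("K4", nx.complete_graph(4))]:
    is_planar, _ = nx.check_planarity(g)
    print(name, is_planar)   # K5: False, K3,3: False, K4: True
```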
Wagner's theorem
[ "Mathematics" ]
1,058
[ "Statements about planar graphs", "Planar graphs", "Graph theory", "Theorems in discrete mathematics", "Mathematical relations", "Planes (geometry)", "Theorems in graph theory", "Graph minor theory" ]
3,126,156
https://en.wikipedia.org/wiki/Whitney%27s%20planarity%20criterion
In mathematics, Whitney's planarity criterion is a matroid-theoretic characterization of planar graphs, named after Hassler Whitney. It states that a graph G is planar if and only if its graphic matroid is also cographic (that is, it is the dual matroid of another graphic matroid). In purely graph-theoretic terms, this criterion can be stated as follows: There must be another (dual) graph G′ = (V′, E′) and a bijective correspondence between the edges E′ of G′ and the edges E of the original graph G, such that a subset T of E forms a spanning tree of G if and only if the edges corresponding to the complementary subset E − T form a spanning tree of G′. Algebraic duals An equivalent form of Whitney's criterion is that a graph G is planar if and only if it has a dual graph whose graphic matroid is dual to the graphic matroid of G. A graph whose graphic matroid is dual to the graphic matroid of G is known as an algebraic dual of G. Thus, Whitney's planarity criterion can be expressed succinctly as: a graph is planar if and only if it has an algebraic dual. Topological duals If a graph is embedded into a topological surface such as the plane, in such a way that every face of the embedding is a topological disk, then the dual graph of the embedding is defined as the graph (or in some cases multigraph) H that has a vertex for every face of the embedding, and an edge for every adjacency between a pair of faces. According to Whitney's criterion, the following conditions are equivalent: The surface on which the embedding exists is topologically equivalent to the plane, sphere, or punctured plane H is an algebraic dual of G Every simple cycle in G corresponds to a minimal cut in H, and vice versa Every simple cycle in H corresponds to a minimal cut in G, and vice versa Every spanning tree in G corresponds to the complement of a spanning tree in H, and vice versa. It is possible to define dual graphs of graphs embedded on nonplanar surfaces such as the torus, but these duals do not generally have the correspondence between cuts, cycles, and spanning trees required by Whitney's criterion. References Matroid theory Planar graphs
Whitney's planarity criterion
[ "Mathematics" ]
485
[ "Statements about planar graphs", "Planar graphs", "Combinatorics", "Planes (geometry)", "Matroid theory" ]
3,126,169
https://en.wikipedia.org/wiki/Ultra-low%20particulate%20air
Ultra-low particulate air (ULPA) is a type of air filter. A ULPA filter can remove from the air at least 99.999% of dust, pollen, mold, bacteria and any airborne particles with a minimum particle penetration size of 120 nanometres (0.12 μm, ultrafine particles). A ULPA filter can remove—to a large extent but not 100%—oil smoke, tobacco smoke, rosin smoke, smog, and insecticide dust. It can also remove carbon black to some extent. Some fan filter units incorporate ULPA filters. The EN 1822 and ISO 29463 standards may be used to rate ULPA filters. Materials used in ULPA filters Both high-efficiency particulate air (HEPA) and ULPA filter media have similar designs. The filter media is like an enormous web of randomly arranged fibres. When air passes through this dense web, the solid particles get attached to the fibres and thus eliminated from the air. Porosity is one of the key considerations of these fibres. Lower porosity, while decreasing the speed of filtration, increases the quality of filtered air. This parameter is measured in pores per linear inch. Method of functioning Physically blocking particles with a filter, called sieving, cannot remove smaller-sized particles. The cleaning process, based on the particle size of the pollutant, is based on four techniques: Sieving Diffusion Inertial impaction Interception A number of recommended practices have been written on testing these filters, including: IEST-RP-CC001: HEPA and ULPA Filters, IEST-RP-CC007: Testing ULPA Filters, IEST-RP-CC022: Testing HEPA and ULPA Filter Media, and IEST-RP-CC034: HEPA and ULPA Filter Leak Tests. Specifications See the different classes of air filters for comparison. See also Minimum efficiency reporting value (MERV) High-efficiency particulate air (HEPA) Microparticle performance rating (MPR) References External links ULPA Filter Efficiency Chart: Sentry Air Systems European Standard for EPA, HEPA & ULPA Filters — EN 1822 p. 6 EN 1822: the standard that greatly impacted the European cleanrooms market Ulpa Filter Designs and How it clears the air Filters Cleanroom technology
Ultra-low particulate air
[ "Chemistry", "Engineering" ]
483
[ "Chemical equipment", "Filters", "Filtration", "Cleanroom technology" ]
3,126,254
https://en.wikipedia.org/wiki/Water%20Resistant%20mark
Water Resistant is a common mark stamped on the back of wrist watches to indicate how well a watch is sealed against the ingress of water. It is usually accompanied by an indication of the static test pressure that a sample of newly manufactured watches was exposed to in a leakage test. The test pressure can be indicated either directly in units of pressure such as bar or atmospheres, or (more commonly) as an equivalent water depth in metres (in the United States sometimes also in feet). An indication of the test pressure in terms of water depth does not mean a water-resistant watch was designed for repeated long-term use in such water depths. For example, a watch marked 30 metres water resistant cannot be expected to withstand activity for longer time periods in a swimming pool, let alone continue to function at 30 metres under water. This is because the test is conducted only once using static pressure on a sample of newly manufactured watches. As only a small sample is tested, there is a small likelihood that any individual watch is not water resistant to the certified depth or even at all. The test for qualifying a diving watch to bear the word "diver's" on the dial is for repeated usage at a given depth and includes safety margins to take into account factors like aging of the seals, the properties of water and seawater, rapidly changing water pressure and temperature, as well as dynamic mechanical stresses encountered by a watch. Every "diver's" badged watch has to be taken through a small but highly specified battery of tests designed to simulate those stresses including being tested for continued water resistance up to 125% of the stated rating (a "200 meter" watch has to be pressured up to 250 meters water depth equivalent and show no signs of intrusion).
The watch shall be placed on a heated plate at a temperature between 40 °C and 45 °C until the watch has reached the temperature of the heated plate (in practice, a heating time of 10 minutes to 20 minutes, depending on the type of watch, will be sufficient). A drop of water, at a temperature between 18 °C and 25 °C shall be placed on the glass of the watch. After about 1 minute, the glass shall be wiped with a dry rag. Any watch which has condensation on the interior surface of the glass shall be eliminated. Resistance to different temperatures. Immersion of the watch in 10 cm of water at the following temperatures for 5 minutes each, 40 °C, 20 °C and 40 °C again, with the transition between temperatures not to exceed 1 minute. No evidence of water intrusion or condensation is allowed. Resistance to water overpressure. Immersion of the watch in a suitable pressure vessel and subjecting it within 1 minute to the rated pressure for 10 minutes, or to 2 bar in cases where no additional indication is given. Then the overpressure is reduced to the ambient pressure within 1 minute. No evidence of water intrusion or condensation is allowed. Resistance to air overpressure. Exposing the watch to an overpressure of 2 bar. The watch shall show no air-flow exceeding 50 μg/min. No magnetic or shock resistance properties are required. No negative pressure test is required. No strap attachment test is required. No corrosion test is required. Except the thermal shock resistance test all further ISO 2281 testing should be conducted at 18 °C to 25 °C temperature. Regarding pressure ISO 2281 defines: 1 bar = 10^5 Pa = 10^5 N/m². This has since been replaced by the ISO 22810:2010 standard, which covers all activities up to a specified depth and clears up ambiguities with the previous standard. In practice, the survivability of the watch will depend not only on the water depth, but also on the age of the sealing material, past damage, temperature, and additional mechanical stresses.
The watch shall be placed on a heated plate at a temperature between 40 and 45 °C until the watch has reached the temperature of the heated plate (in practice, a heating time of 10 minutes to 20 minutes, depending on the type of watch, will be sufficient). A drop of water, at a temperature of 18 to 25 °C shall be placed on the glass of the watch. After about 1 minute, the glass shall be wiped with a dry rag. Any watch which has condensation on the interior surface of the glass shall be eliminated. Resistance of crowns and other setting devices to an external force. The watches under test shall be subjected to an overpressure in water of 125% of the rated pressure for 10 minutes and to an external force of 5 N perpendicular to the crown and pusher buttons (if any). The condensation test shall be carried out before and after this test to ensure that the result is related to the above test. Water-tightness and resistance at a water overpressure. The watches under test shall be immersed in water contained in a suitable vessel. Then an overpressure of 125% of the rated pressure shall be applied within 1 minute and maintained for 2 hours. Subsequently, the overpressure shall be reduced to 0.3 bar within 1 minute and maintained at this pressure for 1 hour. The watches shall then be removed from the water and dried with a rag. No evidence of water intrusion or condensation is allowed. Resistance to thermal shock. Immersion of the watch in 30±2 cm of water at the following temperatures for 10 minutes each, 40 °C, 5 °C and 40 °C again. The time of transition from one immersion to the other shall not exceed 1 minute. No evidence of water intrusion or condensation is allowed. An optional test originating from the ISO 2281 tests (but not required for obtaining ISO 6425 approval) is exposing the watch to an overpressure of 200 kPa. The watch shall show no air-flow exceeding 50 μg/min. Except the thermal shock resistance test all further ISO 6425 testing should be conducted at 18 to 25 °C temperature. Regarding pressure ISO 6425 defines: 1 bar = 10^5 Pa = 10^5 N/m². The required 125% test pressure provides a safety margin against dynamic pressure increase events, water density variations (seawater is 2% to 5% denser than freshwater) and degradation of the seals. Movement-induced dynamic pressure increase is sometimes the subject of urban myths and marketing arguments for diver's watches with high water resistance ratings. When a diver makes a fast swimming movement of 10 m/s (32.8 ft/s) (the best competitive swimmers and finswimmers do not move their hands nor swim that fast) physics dictates that the diver generates a dynamic pressure of 50 kPa or the equivalent of 5 metres of additional water depth. Besides water resistance to a minimum depth rating, ISO 6425 also provides minimum requirements for mechanical diver's watches (quartz and digital watches have slightly differing readability requirements) such as: The presence of a time-preselecting device, for example a unidirectional rotating bezel or a digital display. Such a device shall be protected against inadvertent rotation or wrong manipulation. If it is a rotating bezel, it shall have a minute scale going up to 60 min. The markings on the dial, if existing, shall be coordinated with those of the preselecting device and shall be clearly visible. If the preselecting device is a digital display, it shall be clearly visible. 
The following items of the watch shall be legible at a distance of 25 cm in the dark: time (the minute hand shall be clearly distinguishable from the hour hand); set time of the time-preselecting device; indication that the watch is running (this is usually indicated by a running second hand with a luminous tip or tail); in the case of battery-powered watches, a battery end-of-life indication. The presence of an indication that the watch is running in total darkness. This is usually indicated by a running second hand with a luminous tip or tail. Magnetic resistance. This is tested by three exposures to a direct current magnetic field of 4,800 A/m. The watch must keep its accuracy to ±30 seconds/day as measured before the test despite the magnetic field. Shock resistance. This is tested by two shocks (one on the 9 o'clock side, and one to the crystal and perpendicular to the face). The shock is usually delivered by a hard plastic hammer mounted as a pendulum, so as to deliver a measured amount of energy, specifically, a 3 kg hammer with an impact velocity of 4.43 m/s. The change in rate allowed is ±60 seconds/day. Resistance to salty water. The watches under test shall be put in a 30 g/L NaCl (sodium chloride) solution and kept there for 24 hours at 18 °C to 25 °C. This test solution has a salinity comparable to that of normal seawater. After this test, the case and accessories shall be examined for any possible changes. Moving parts, particularly the rotating bezel, shall be checked for correct functioning. Resistance of attachments to an external force (strap/band solidity). This is tested by applying a force of 200 N (45 lbf) to each springbar (or attaching point) in opposite directions with no damage to the watch or attachment point. The bracelet of the watch being tested shall be closed. Marking. Watches conforming to ISO 6425 are marked with the words DIVER'S WATCH xxx M or DIVER'S xxx M to distinguish diving watches from look-alike watches that are not suitable for actual scuba diving. The letters xxx are replaced by the diving depth, in metres, guaranteed by the manufacturer. Diver's watches for mixed-gas diving Diving at a great depth and for a long period is done in a diving chamber, with the (saturation) diver spending time alternately in the water and in a pressurized environment, breathing a gas mixture. In this case, the watch is subjected to the pressure of the gas mixture and its functioning can be disturbed. Consequently, it is recommended to subject the watch to a special extra test. ISO 6425 defines a diver's watch for mixed-gas diving as: A watch required to be resistant during diving in water to a depth of at least 100 m and to be unaffected by the overpressure of the mixed gas used for breathing. The following specific additional requirements for testing of diver's watches for mixed-gas diving are provided by ISO 6425: Test of operation at a gas overpressure. The watch is subjected to the overpressure of the gas which will actually be used, i.e. 125% of the rated pressure, for 15 days. Then a rapid reduction in pressure to the atmospheric pressure shall be carried out in a time not exceeding 3 minutes. After this test, the watch shall function correctly. An electronic watch shall function normally during and after the test. A mechanical watch shall function normally after the test (the power reserve normally being less than 15 days). Test by internal pressure (simulation of decompression). Remove the crown together with the winding and/or setting stem.
In its place, fit a crown of the same type with a hole. Through this hole, introduce the gas mixture which will actually be used and create an overpressure of the rated pressure/20 bar in the watch for a period of 10 hours. Then carry out the test at the rated water overpressure. In this case, the original crown with the stem shall be refitted beforehand. After this test, the watch shall function correctly. Marking. Watches used for mixed-gas diving which satisfy the test requirements are marked with the words "DIVER'S WATCH xxx M FOR MIXED-GAS DIVING". The letters xxx are replaced by the diving depth, in metres, guaranteed by the manufacturer. The composition of the gas mixture used for the test shall be given in the operating instructions accompanying the watch. Most manufacturers recommend that divers have their diving watch pressure tested by an authorized service and repair facility annually, or every two to three years, and have the seals replaced. Water resistance classification Watches are often classified by watch manufacturers by their degree of water resistance which, due to the absence of official classification standards, roughly translates to the following (1 metre ≈ 3.28 feet). These vagaries have since been superseded by ISO 22810:2010, in which "any watch on the market sold as water-resistant must satisfy ISO 22810 – regardless of the brand." See also IP code Watch Diving watch Magnetic-resistant watch Shock-resistant watch Certified chronometer References External links Watch Engineering - How Does Water Resistance Work?... Certification marks Watches Waterproofing
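As a check on the swimming-movement figure quoted earlier, the dynamic (stagnation) pressure of water moving at speed v follows from Bernoulli's relation. A short worked calculation, assuming fresh water with a density of about 1000 kg/m³ (seawater is 2% to 5% denser, which changes the result only marginally):

```latex
q = \tfrac{1}{2}\rho v^{2}
  = \tfrac{1}{2}\times 1000~\mathrm{kg/m^{3}}\times(10~\mathrm{m/s})^{2}
  = 50\,000~\mathrm{Pa} = 50~\mathrm{kPa},
\qquad
h = \frac{q}{\rho g}
  = \frac{50\,000~\mathrm{Pa}}{1000~\mathrm{kg/m^{3}}\times 9.81~\mathrm{m/s^{2}}}
  \approx 5~\mathrm{m}.
```

That is, even an unrealistically fast 10 m/s movement adds only the pressure of about 5 metres of extra water depth, well inside the 125% safety margin for any watch rated 100 m or more.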
Water Resistant mark
[ "Mathematics" ]
3,147
[ "Symbols", "Certification marks" ]
3,126,358
https://en.wikipedia.org/wiki/Operad
In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad $O$, one defines an algebra over $O$ to be a set together with concrete operations on this set which behave just like the abstract operations of $O$. For instance, there is a Lie operad $L$ such that the algebras over $L$ are precisely the Lie algebras; in a sense $L$ abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations. History Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972. Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads: "The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898." The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer). Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher. Intuition Suppose $X$ is a set and for $n \geq 0$ we define $P(n) := \{f \colon X^n \to X\}$, the set of all functions from the cartesian product of $n$ copies of $X$ to $X$. We can compose these functions: given $f \in P(n)$ and $g_1 \in P(k_1), \dots, g_n \in P(k_n)$, the function $f \circ (g_1, \dots, g_n) \in P(k_1 + \cdots + k_n)$ is defined as follows: given $k_1 + \cdots + k_n$ arguments from $X$, we divide them into $n$ blocks, the first one having $k_1$ arguments, the second one $k_2$ arguments, etc., and then apply $g_1$ to the first block, $g_2$ to the second block, etc. We then apply $f$ to the list of $n$ values obtained in this way. We can also permute arguments, i.e. we have a right action $*$ of the symmetric group $S_n$ on $P(n)$, defined by $(f * s)(x_1, \dots, x_n) = f(x_{s(1)}, \dots, x_{s(n)})$ for $f \in P(n)$, $s \in S_n$ and $x_1, \dots, x_n \in X$. The definition of a symmetric operad given below captures the essential properties of these two operations $\circ$ and $*$.
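To make this block-composition rule concrete, here is a minimal Python sketch of composition in the endomorphism operad of a set, where an n-ary operation is just an ordinary function of n arguments. The helper name compose is illustrative, not from any library:

```python
from inspect import signature

def compose(f, *gs):
    """Operadic composition in the endomorphism operad of a set X.

    f  : an n-ary function (n = len(gs))
    gs : functions of arities k_1, ..., k_n
    Returns a function of arity k_1 + ... + k_n.
    """
    arities = [len(signature(g).parameters) for g in gs]

    def composite(*xs):
        assert len(xs) == sum(arities), "wrong number of arguments"
        values, i = [], 0
        for g, k in zip(gs, arities):
            values.append(g(*xs[i:i + k]))  # apply g_j to its block of k_j arguments
            i += k
        return f(*values)                   # then apply f to the n intermediate values

    return composite

# Example: f(a, b) = a + b, g1(x) = 2*x, g2(x, y) = x*y
h = compose(lambda a, b: a + b, lambda x: 2 * x, lambda x, y: x * y)
print(h(3, 4, 5))  # 2*3 + 4*5 = 26
```

The identity of the operad is simply `lambda x: x`, and one can check on small examples that compose satisfies the identity and associativity axioms stated in the next section.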
Definition Non-symmetric operad A non-symmetric operad (sometimes called an operad without permutations, or a non-Σ or plain operad) consists of the following: a sequence $(P(n))_{n \geq 0}$ of sets, whose elements are called $n$-ary operations, an element $1$ in $P(1)$ called the identity, for all positive integers $n$ and $k_1, \dots, k_n$, a composition function $\circ \colon P(n) \times P(k_1) \times \cdots \times P(k_n) \to P(k_1 + \cdots + k_n)$, written $(\theta, \theta_1, \dots, \theta_n) \mapsto \theta \circ (\theta_1, \dots, \theta_n)$, satisfying the following coherence axioms: identity: $\theta \circ (1, \dots, 1) = \theta = 1 \circ \theta$; associativity: $\theta \circ \big(\theta_1 \circ (\theta_{1,1}, \dots, \theta_{1,k_1}), \dots, \theta_n \circ (\theta_{n,1}, \dots, \theta_{n,k_n})\big) = \big(\theta \circ (\theta_1, \dots, \theta_n)\big) \circ (\theta_{1,1}, \dots, \theta_{1,k_1}, \dots, \theta_{n,1}, \dots, \theta_{n,k_n})$. Symmetric operad A symmetric operad (often just called operad) is a non-symmetric operad $P$ as above, together with a right action of the symmetric group $S_n$ on $P(n)$ for all $n$, denoted by $\theta * s$ and satisfying equivariance: given a permutation $s \in S_n$, $(\theta * s) \circ (\theta_{s^{-1}(1)}, \dots, \theta_{s^{-1}(n)}) = \big(\theta \circ (\theta_1, \dots, \theta_n)\big) * s_{k_1, \dots, k_n}$ (where $s_{k_1, \dots, k_n}$ on the right hand side refers to the element of $S_{k_1 + \cdots + k_n}$ that acts on the set $\{1, \dots, k_1 + \cdots + k_n\}$ by breaking it into $n$ blocks, the first of size $k_1$, the second of size $k_2$, through the $n$th block of size $k_n$, and then permutes these $n$ blocks by $s$, keeping each block intact), and given $n$ permutations $t_i \in S_{k_i}$, $\theta \circ (\theta_1 * t_1, \dots, \theta_n * t_n) = \big(\theta \circ (\theta_1, \dots, \theta_n)\big) * (t_1, \dots, t_n)$ (where $(t_1, \dots, t_n)$ denotes the element of $S_{k_1 + \cdots + k_n}$ that permutes the first of these blocks by $t_1$, the second by $t_2$, etc., and keeps their overall order intact). The permutation actions in this definition are vital to most applications, including the original application to loop spaces. Morphisms A morphism of operads $f \colon P \to Q$ consists of a sequence $(f_n \colon P(n) \to Q(n))_{n \geq 0}$ that: preserves the identity: $f(1) = 1$; preserves composition: for every n-ary operation $\theta$ and operations $\theta_1, \dots, \theta_n$, $f(\theta \circ (\theta_1, \dots, \theta_n)) = f(\theta) \circ (f(\theta_1), \dots, f(\theta_n))$; preserves the permutation actions: $f(x * s) = f(x) * s$. Operads therefore form a category denoted by $\mathsf{Oper}$. In other categories So far operads have only been considered in the category of sets. More generally, it is possible to define operads in any symmetric monoidal category C. In that case, each $P(n)$ is an object of C, the composition $\circ$ is a morphism $P(n) \otimes P(k_1) \otimes \cdots \otimes P(k_n) \to P(k_1 + \cdots + k_n)$ in C (where $\otimes$ denotes the tensor product of the monoidal category), and the actions of the symmetric group elements are given by isomorphisms in C. A common example is the category of topological spaces and continuous maps, with the monoidal product given by the cartesian product. In this case, a topological operad is given by a sequence of spaces (instead of sets) $\{P(n)\}_{n \geq 0}$. The structure maps of the operad (the composition and the actions of the symmetric groups) are then assumed to be continuous. The result is called a topological operad. Similarly, in the definition of a morphism of operads, it would be necessary to assume that the maps involved are continuous. Other common settings to define operads include, for example, modules over a commutative ring, chain complexes, groupoids (or even the category of categories itself), coalgebras, etc. Algebraist definition Given a commutative ring R we consider the category $R\text{-}\mathsf{Mod}$ of modules over R. An operad over R can be defined as a monoid object in the monoidal category of endofunctors on $R\text{-}\mathsf{Mod}$ (it is a monad) satisfying some finiteness condition. For example, a monoid object in the category of "polynomial endofunctors" on $R\text{-}\mathsf{Mod}$ is an operad. Similarly, a symmetric operad can be defined as a monoid object in the category of $\Sigma$-objects (symmetric sequences), where $\Sigma$ means a symmetric group. A monoid object in the category of combinatorial species is an operad in finite sets. An operad in the above sense is sometimes thought of as a generalized ring. For example, Nikolai Durov defines his generalized rings as monoid objects in the monoidal category of endofunctors on the category of sets that commute with filtered colimits. This is a generalization of a ring since each ordinary ring R defines a monad that sends a set X to the underlying set of the free R-module generated by X.
Understanding the axioms Associativity axiom "Associativity" means that composition of operations is associative (the function $\circ$ is associative), analogous to the axiom in category theory that $f \circ (g \circ h) = (f \circ g) \circ h$; it does not mean that the operations themselves are associative as operations. Compare with the associative operad, below. Associativity in operad theory means that expressions can be written involving operations without ambiguity from the omitted compositions, just as associativity for operations allows products to be written without ambiguity from the omitted parentheses. For instance, suppose $\theta$ is a binary operation, written as $\theta(a, b)$ or $(ab)$; $\theta$ may or may not be associative. Then what is commonly written $((ab)c)$ is unambiguously written operadically as $\theta \circ (\theta, 1)$. This sends $(a, b, c)$ to $(ab, c)$ (apply $\theta$ on the first two, and the identity on the third), and then the $\theta$ on the left "multiplies" $(ab)$ by $c$. This is clearer when depicted as a tree of operations: a $\theta$-node whose inputs are a $\theta$-node and an identity leaf, which yields a 3-ary operation. However, the expression $\theta \circ (\theta, 1) \circ (\theta, 1, 1)$ is a priori ambiguous: it could mean $\theta \circ \big((\theta, 1) \circ (\theta, 1, 1)\big)$, if the inner compositions are performed first, or it could mean $\big(\theta \circ (\theta, 1)\big) \circ (\theta, 1, 1)$, if the outer compositions are performed first (operations are read from right to left). That is, the corresponding tree of operations is missing "vertical parentheses". If the top two rows of operations are composed first (performing the inner composition first), or if the bottom two rows of operations are composed first (performing the outer composition first), each choice evaluates unambiguously to a 4-ary operation. The operad axiom of associativity is that these yield the same result, and thus that the expression $\theta \circ (\theta, 1) \circ (\theta, 1, 1)$ is unambiguous. Identity axiom The identity axiom (for a binary operation) can be visualized in a tree as the statement that the three operations obtained, $1 \circ \theta$, $\theta$, and $\theta \circ (1, 1)$, are equal: pre- or post-composing with the identity makes no difference. As for categories, $1 \circ 1 = 1$ is a corollary of the identity axiom. Examples Endomorphism operad in sets and operad algebras The most basic operads are the ones given in the section on "Intuition", above. For any set $X$, we obtain the endomorphism operad $\mathcal{E}nd_X$ consisting of all functions $X^n \to X$. These operads are important because they serve to define operad algebras. If $O$ is an operad, an operad algebra over $O$ is given by a set $X$ and an operad morphism $O \to \mathcal{E}nd_X$. Intuitively, such a morphism turns each "abstract" operation of $O(n)$ into a "concrete" $n$-ary operation on the set $X$. An operad algebra over $O$ thus consists of a set $X$ together with concrete operations on $X$ that follow the rules abstractly specified by the operad $O$. Endomorphism operad in vector spaces and operad algebras If k is a field, we can consider the category of finite-dimensional vector spaces over k; this becomes a monoidal category using the ordinary tensor product over k. We can then define endomorphism operads in this category, as follows. Let V be a finite-dimensional vector space. The endomorphism operad $\mathcal{E}nd_V$ of V consists of $\mathcal{E}nd_V(n) :=$ the space of linear maps $V^{\otimes n} \to V$, (composition) given $f \in \mathcal{E}nd_V(n)$, $g_1 \in \mathcal{E}nd_V(k_1)$, ..., $g_n \in \mathcal{E}nd_V(k_n)$, their composition is given by the map $V^{\otimes (k_1 + \cdots + k_n)} \cong V^{\otimes k_1} \otimes \cdots \otimes V^{\otimes k_n} \xrightarrow{g_1 \otimes \cdots \otimes g_n} V^{\otimes n} \xrightarrow{f} V$, (identity) the identity element in $\mathcal{E}nd_V(1)$ is the identity map $\mathrm{id}_V$, (symmetric group action) $S_n$ operates on $\mathcal{E}nd_V(n)$ by permuting the components of the tensors in $V^{\otimes n}$. If $O$ is an operad, a k-linear operad algebra over $O$ is given by a finite-dimensional vector space V over k and an operad morphism $O \to \mathcal{E}nd_V$; this amounts to specifying concrete multilinear operations on V that behave like the operations of $O$.
(Notice the analogy between operads & operad algebras and rings & modules: a module over a ring R is given by an abelian group M together with a ring homomorphism $R \to \operatorname{End}(M)$.) Depending on applications, variations of the above are possible: for example, in algebraic topology, instead of vector spaces and tensor products between them, one uses (reasonable) topological spaces and cartesian products between them. "Little something" operads The little 2-disks operad is a topological operad where $P(n)$ consists of ordered lists of n disjoint disks inside the unit disk of $\mathbb{R}^2$ centered at the origin. The symmetric group acts on such configurations by permuting the list of little disks. The operadic composition for little disks is illustrated in the accompanying figure to the right, where an element $\theta \in P(m)$ is composed with an element $\theta' \in P(n)$ to yield the element $\theta \circ_i \theta' \in P(m + n - 1)$ obtained by shrinking the configuration of $\theta'$ and inserting it into the i-th disk of $\theta$, for $1 \leq i \leq m$. Analogously, one can define the little n-disks operad by considering configurations of disjoint n-balls inside the unit ball of $\mathbb{R}^n$. Originally the little n-cubes operad or the little intervals operad (initially called little n-cubes PROPs) was defined by Michael Boardman and Rainer Vogt in a similar way, in terms of configurations of disjoint axis-aligned n-dimensional hypercubes (n-dimensional intervals) inside the unit hypercube. Later it was generalized by May to the little convex bodies operad, and "little disks" is a case of "folklore" derived from the "little convex bodies". Rooted trees In graph theory, rooted trees form a natural operad. Here, $P(n)$ is the set of all rooted trees with n leaves, where the leaves are numbered from 1 to n. The group $S_n$ operates on this set by permuting the leaf labels. Operadic composition $T \circ (T_1, \dots, T_n)$ is given by replacing the i-th leaf of $T$ by the root of the i-th tree $T_i$, for $1 \leq i \leq n$, thus attaching the n trees to $T$ and forming a larger tree, whose root is taken to be the same as the root of $T$ and whose leaves are numbered in order (a small code sketch of this grafting operation appears at the end of this article). Swiss-cheese operad The Swiss-cheese operad is a two-colored topological operad defined in terms of configurations of disjoint n-dimensional disks inside a unit n-semidisk and n-dimensional semidisks, centered at the base of the unit semidisk and sitting inside of it. The operadic composition comes from gluing configurations of "little" disks inside the unit disk into the "little" disks in another unit semidisk and configurations of "little" disks and semidisks inside the unit semidisk into the other unit semidisk. The Swiss-cheese operad was defined by Alexander A. Voronov. It was used by Maxim Kontsevich to formulate a Swiss-cheese version of Deligne's conjecture on Hochschild cohomology. Kontsevich's conjecture was proven partly by Po Hu, Igor Kriz, and Alexander A. Voronov and then fully by Justin Thomas. Associative operad Another class of examples of operads are those capturing the structures of algebraic structures, such as associative algebras, commutative algebras and Lie algebras. Each of these can be exhibited as a finitely presented operad, in each of these three generated by binary operations. For example, the associative operad is a symmetric operad generated by a binary operation $\psi$, subject only to the condition that $\psi \circ (\psi, 1) = \psi \circ (1, \psi)$. This condition corresponds to associativity of the binary operation $\psi$; writing $\psi(a, b)$ multiplicatively, the above condition is $(ab)c = a(bc)$. This associativity of the operation should not be confused with associativity of composition $\circ$ which holds in any operad; see the axiom of associativity, above.
In the associative operad, each $P(n)$ is given by the symmetric group $S_n$, on which $S_n$ acts by right multiplication. The composite $\sigma \circ (\tau_1, \dots, \tau_n)$ permutes its inputs in blocks according to $\sigma$, and within blocks according to the appropriate $\tau_i$. The algebras over the associative operad are precisely the semigroups: sets together with a single binary associative operation. The k-linear algebras over the associative operad are precisely the associative k-algebras. Terminal symmetric operad The terminal symmetric operad is the operad which has a single n-ary operation for each n, with each $S_n$ acting trivially. The algebras over this operad are the commutative semigroups; the k-linear algebras are the commutative associative k-algebras. Operads from the braid groups Similarly, there is a non-Σ operad for which each $P(n)$ is given by the Artin braid group $B_n$. Moreover, this non-Σ operad has the structure of a braided operad, which generalizes the notion of an operad from symmetric to braid groups. Linear algebra In linear algebra, real vector spaces can be considered to be algebras over the operad $P$ of all linear combinations. This operad is defined by $P(n) = \mathbb{R}^n$, with the obvious action of $S_n$ permuting components, and composition given by the concatenation of the scaled vectors $\lambda_1 \vec{x}_1, \dots, \lambda_n \vec{x}_n$, where $\lambda = (\lambda_1, \dots, \lambda_n) \in P(n)$ and $\vec{x}_i \in P(k_i)$. The vector $(2, 3, -5, 0, \dots)$ for instance represents the operation of forming a linear combination with coefficients 2, 3, -5, 0, ... This point of view formalizes the notion that linear combinations are the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of vector addition and scalar multiplication are a generating set for the operad of all linear combinations, while the linear combinations operad canonically encodes all possible operations on a vector space. Similarly, affine combinations, conical combinations, and convex combinations can be considered to correspond to the sub-operads where the terms of the vector $\lambda$ sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by $\mathbb{R}^n$ or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. Commutative-ring operad and Lie operad The commutative-ring operad is an operad whose algebras are the commutative rings. It is defined by $P(n) = \mathbb{Z}[x_1, \dots, x_n]$, with the obvious action of $S_n$ and operadic composition given by substituting polynomials (with renumbered variables) for variables. A similar operad can be defined whose algebras are the associative, commutative algebras over some fixed base field. The Koszul-dual of this operad is the Lie operad (whose algebras are the Lie algebras), and vice versa. Free Operads Typical algebraic constructions (e.g., free algebra construction) can be extended to operads. Let $\mathsf{Seq}$ denote the category whose objects are sequences of sets on which the symmetric groups act. Then there is a forgetful functor $\mathsf{Oper} \to \mathsf{Seq}$, which simply forgets the operadic composition. It is possible to construct a left adjoint $\Gamma \colon \mathsf{Seq} \to \mathsf{Oper}$ to this forgetful functor (this is the usual definition of free functor). Given a collection of operations E, $\Gamma(E)$ is the free operad on E. Like a group or a ring, the free construction allows one to express an operad in terms of generators and relations.
By a free representation of an operad $P$, we mean writing $P$ as a quotient of a free operad $\Gamma(E)$, where E describes generators of $P$ and the kernel of the epimorphism $\Gamma(E) \to P$ describes the relations. A (symmetric) operad $P$ is called quadratic if it has a free presentation such that $E = P(2)$ is the generator and the relation is contained in $\Gamma(E)(3)$. Clones Clones are the special case of operads that are also closed under identifying arguments together ("reusing" some data). Clones can be equivalently defined as operads that are also a minion (or clonoid). Operads in homotopy theory Stasheff writes: Operads are particularly important and useful in categories with a good notion of "homotopy", where they play a key role in organizing hierarchies of higher homotopies. See also PRO (category theory) Algebra over an operad Higher-order operad E∞-operad Pseudoalgebra Multicategory Notes Citations References Miguel A. Mendéz (2015). Set Operads in Combinatorics and Computer Science. SpringerBriefs in Mathematics. Samuele Giraudo (2018). Nonsymmetric Operads in Combinatorics. Springer International Publishing. External links https://golem.ph.utexas.edu/category/2011/05/an_operadic_introduction_to_en.html Abstract algebra Category theory
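Returning to the rooted-tree operad described earlier, the following Python sketch represents a rooted tree with leaves read in left-to-right order and implements the grafting composition. This is a toy representation under simplifying assumptions (leaves are implicitly numbered by their order, and no symmetric group action is modeled), not code from any library:

```python
class Tree:
    """A rooted tree: either a leaf (children is None) or a node with subtrees."""
    def __init__(self, children=None):
        self.children = children

    def n_leaves(self):
        if self.children is None:
            return 1
        return sum(c.n_leaves() for c in self.children)

def graft(t, trees):
    """Operadic composition T o (T_1, ..., T_n): replace the i-th leaf of t
    (in left-to-right order) by the root of trees[i]."""
    assert t.n_leaves() == len(trees), "need one tree per leaf"
    it = iter(trees)

    def go(node):
        if node.children is None:          # a leaf: substitute the next tree
            return next(it)
        return Tree([go(c) for c in node.children])

    return go(t)

leaf = lambda: Tree()
corolla = Tree([leaf(), leaf()])           # one internal node with two leaves
t = graft(corolla, [corolla, leaf()])      # graft a corolla onto the first leaf
print(t.n_leaves())                        # 3
```

The associativity axiom of the operad corresponds to the evident fact that grafting trees onto trees can be performed in either order with the same result.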
Operad
[ "Mathematics" ]
4,000
[ "Functions and mappings", "Mathematical structures", "Algebra", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Abstract algebra" ]
3,126,455
https://en.wikipedia.org/wiki/Inverse%20search
Inverse search (also called "reverse search") is a feature of some non-interactive typesetting programs, such as LaTeX and GNU LilyPond. These programs read an abstract, textual definition of a document as input, and convert this into a graphical format such as DVI or PDF. In a windowing system, this typically means that the source code is entered in one editor window, and the resulting output is viewed in a different output window. Inverse search means that a graphical object in the output window works as a hyperlink, which takes the user back to the line and column in the editor where the clicked object was defined. The inverse search feature is particularly useful during proofreading. Implementations In TeX and LaTeX, the package srcltx provides an inverse search feature through DVI output files (e.g., with yap or Xdvi), while vpe, pdfsync and SyncTeX provide similar functionality for PDF output, among other techniques. The Comparison of TeX editors has a column on support of inverse search; most of them provide it nowadays. GNU LilyPond provides an inverse search feature through PDF output files, since version 2.6. The program calls this feature point-and-click. Many integrated development environments for programming use inverse search to display compilation error messages, and during debugging when a breakpoint is hit. References Bibliography Jérôme Laurens, "Direct and reverse synchronization with SyncTeX", in TUGboat 29(3), 2008, pp. 365–371, PDF (532 KB), including an overview of synchronization techniques with TeX External links How to set up inverse search with xdvi Software development
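As an illustration of how this is typically enabled on the LaTeX side, the minimal document below loads the srcltx package, which embeds source-location information in the DVI output so that a DVI viewer can jump back to the editor (a sketch; modern PDF workflows usually skip the package and instead pass a SyncTeX flag to the compiler):

```latex
% Minimal LaTeX source prepared for DVI-based inverse search
\documentclass{article}
\usepackage{srcltx}   % writes source specials into the DVI file
\begin{document}
Clicking this paragraph in a viewer such as xdvi or yap
should jump back to the corresponding line of this file.
\end{document}
```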
Inverse search
[ "Technology", "Engineering" ]
354
[ "Computer occupations", "Software engineering", "Digital typography stubs", "Computing stubs", "Software development" ]
3,126,916
https://en.wikipedia.org/wiki/Trimethylsilyldiazomethane
Trimethylsilyldiazomethane is the organosilicon compound with the formula (CH3)3SiCHN2. It is classified as a diazo compound. Trimethylsilyldiazomethane is commercially available as solutions in hexanes, DCM, and ether. It is a specialized reagent used in organic chemistry as a methylating agent for carboxylic acids. It is a safer replacement for diazomethane: diazomethane is a sensitive explosive gas, whereas trimethylsilyldiazomethane is a relatively stable liquid and thus easier to handle. Preparation Trimethylsilyldiazomethane can be prepared by treating (trimethylsilyl)methylmagnesium chloride with diphenyl phosphorazidate. An isotopically labelled variant, with 13C at the diazomethyl carbon, is also known. Reactions Trimethylsilyldiazomethane is useful for conversion of carboxylic acids to their methyl esters in high yield. The typical reaction conditions for this purpose use methanol as a cosolvent. Under these conditions, diazomethane itself is generated in situ as the active methylating agent, by an acid-catalyzed reaction between trimethylsilyldiazomethane and the alcohol, with methyl trimethylsilyl ether as the byproduct: CH3OH + TMSCHN2 → CH3OTMS + CH2N2 (acid-catalyzed). When the methanol is omitted, substantial amounts of trimethylsilyl ester and trimethylsilylmethyl ester products are formed as well. It also reacts with alcohols to give methyl ethers, whereas diazomethane may not. The compound is a reagent in the Doyle–Kirmse reaction with allyl sulfides and allyl amines. Trimethylsilyldiazomethane is deprotonated by butyllithium: (CH3)3SiCHN2 + BuLi → (CH3)3SiCLiN2 + BuH. The lithio compound is versatile. From it can be prepared other trimethylsilyldiazoalkanes: (CH3)3SiCLiN2 + RX → (CH3)3SiCRN2 + LiX. (CH3)3SiCLiN2 also reacts with ketones and aldehydes to give, depending on the substituents, alkynes (acetylenes). Applications It has also been employed widely in tandem with GC-MS for the analysis of various carboxylic compounds which are ubiquitous in nature. The fact that the reaction is rapid and occurs readily makes it attractive. However, it can form artifacts which complicate spectral interpretation. Such artifacts are usually the trimethylsilylmethyl esters, RCO2CH2SiMe3, formed when insufficient methanol is present. Acid-catalysed methanolysis is necessary to achieve near-quantitative yields of the desired methyl esters, RCO2Me. Safety Trimethylsilyldiazomethane is highly toxic. It has been implicated in the deaths of at least two chemists, a pharmaceutical worker in Windsor, Nova Scotia and one in New Jersey. Inhalation of diazomethane is known to cause pulmonary edema; trimethylsilyldiazomethane is suspected to behave similarly. It is possible that upon contact with water on the surface of the lung, trimethylsilyldiazomethane converts to diazomethane. Trimethylsilyldiazomethane is nonexplosive. References Diazo compounds Reagents for organic chemistry Carbosilanes Methylating agents Trimethylsilyl compounds
Trimethylsilyldiazomethane
[ "Chemistry" ]
788
[ "Functional groups", "Trimethylsilyl compounds", "Methylating agents", "Methylation", "Reagents for organic chemistry" ]
3,126,927
https://en.wikipedia.org/wiki/Chionophile
Chionophiles are any organisms (animals, plants, fungi, etc.) that can thrive in cold winter conditions (the word is derived from the Greek word chion meaning "snow", and -phile meaning "lover"). These animals have specialized adaptations that help them survive the harshest winters. Polar regions Arctic animals Animals such as caribou, Arctic hares, Arctic ground squirrels, snowy owls, puffins, tundra swans, snow geese, Steller's eiders and willow ptarmigan all survive the harsh Arctic winters quite easily and some, like the willow ptarmigan, are only found in the Arctic region. Antarctic animals Antarctica, the region surrounding the South Pole, is larger and can become much colder than the Arctic. As a result, few animals can survive on the mainland of Antarctica, and those that do mostly live near the coast. The few animals that live on the mainland are birds such as Antarctic terns, grey-headed albatrosses, imperial shags, snowy sheathbills and the best-known inhabitants of Antarctica, penguins. The inhospitable environment helps to deter predators; the few predators that hunt on the mainland, including the south polar skua and the southern giant petrel, mainly prey upon chicks. Most Antarctic predators are found in the polar waters, including the orca and the leopard seal. Polar adaptations Normally, when colder conditions arrive, animals enter a state of suspended animation called hibernation: a prolonged period of inactivity from which they do not emerge until more survivable conditions return. However, when animals live in an environment that is inhospitable for much of the year, hibernation is not a practical option, and most animals living in the Arctic remain active even during the most brutal periods of winter. One of the few that do become dormant is the lemming, which undertakes a mass migration after coming out of dormancy. Aquatic animals such as the Greenland shark, wolffish, Atlantic cod, Atlantic halibut and Arctic char must cope with the sub-zero temperatures in their waters. Some aquatic mammals, such as walruses, seals, sea lions, narwhals, beluga whales and killer whales, store fat called blubber that helps keep them warm in the icy waters. Some ungulates that live in frigid conditions have pads under their hooves that give a stronger grip on the icy ground and help in climbing on rocky terrain. Mammals that already have a pad under the foot, such as polar bears, wolverines, Arctic wolves and Arctic foxes, have fur around their pads to help shield the flesh from the cold. Other mammals, such as the muskox, keep warm by growing long, shaggy fur that insulates against heat loss and can be quickly shed when warmer temperatures arrive. The snowshoe hare instead changes the color of its fur from white to brown, or to patches of brown, when it sheds its winter coat; this camouflages it against the bare ground in summer, and the longer white fur regrows to match the snow in winter. Mountainous regions Other chionophiles can be found on or near the equator and yet still live in freezing temperatures. This is mostly due to their geographical range, such as on high-altitude mountains, which can reach very cold temperatures and have less oxygen the higher the altitude.
These may include the Andes, the Himalayas and the Hindu Kush mountains, where animals such as snow leopards, pumas, wild yaks, mountain sheep, mountain goats, ibex, vicuñas and guanacos can thrive. Known chionophiles The following animals are known chionophiles: ABC Islands bear Adélie penguin Alaska marmot Alaska moose Alaska Peninsula brown bear Alaskan hare Alaskan tundra wolf Antarctic fur seal Antarctic petrel Antarctic tern Arctic fox Arctic hare Arctic redpoll Arctic tern Arctic warbler Arctic wolf Atlantic puffin Baird's sandpiper Baffin Island wolf Barnacle goose Barren ground shrew Barren-ground caribou Bearded seal Black guillemot Black-bellied storm petrel Black-legged kittiwake Brown skua Buff-breasted sandpiper Cape petrel Chinstrap penguin Common murre Crabeater seal Crested auklet Emperor penguin Gentoo penguin Glaucous gull Great skua Greater white-fronted goose Grey plover Grey seal Gyrfalcon Harbor seal Harp seal Heuglin's gull Hooded seal Iceland gull Ivory gull Kelp gull King eider Lapland longspur Least auklet Lemming Leopard seal Lesser white-fronted goose Little auk Little stint Long-tailed jaeger Macaroni penguin Muskox Nelson's collared lemming Northern collared lemming Northern elephant seal Northern fur seal North American brown lemming Pacific golden plover Parasitic jaeger Peary caribou Pectoral sandpiper Polar bear Pomarine jaeger Purple sandpiper Red knot Red phalarope Red-legged kittiwake Reindeer Ribbon seal Ringed seal Ross seal Ross's gull Ruddy turnstone Sabine's gull Sanderling Siberian brown lemming Snow bunting Snow goose Snow petrel Snowshoe hare Snowy owl Snowy sheathbill South polar skua Southern elephant seal Southern fulmar Spectacled eider Spotted seal Steller's eider Svalbard reindeer Thick-billed murre Tundra vole Ungava collared lemming Walrus Weddell seal White-rumped sandpiper Wolverine Yellow-billed loon See also Aquatic animals Arboreal locomotion List of birds of Antarctica Psychrophile Troglobite Xerocole References Animals by adaptation Fauna of the Arctic
Chionophile
[ "Biology" ]
1,211
[ "Organisms by adaptation", "Animals", "Animals by adaptation" ]
3,126,968
https://en.wikipedia.org/wiki/Boolean%20expression
In computer science, a Boolean expression is an expression used in programming languages that produces a Boolean value when evaluated. A Boolean value is either true or false. A Boolean expression may be composed of a combination of the Boolean constants True/False or Yes/No, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions. Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits. Boolean operators Most programming languages have the Boolean operators OR, AND and NOT; in C and some languages inspired by it, these are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively, while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde). In the mathematical literature the symbols used are often "+" (plus), "·" (dot) and overbar, or "∨" (vel), "∧" (et) and "¬" (not) or "′" (prime). Some languages, e.g., Perl and Ruby, have two sets of Boolean operators, with identical functions but different precedence. Typically these languages use and, or and not for the lower precedence operators. Some programming languages derived from PL/I have a bit string type and use BIT(1) rather than a separate Boolean type. In those languages the same operators serve for Boolean operations and bitwise operations. The languages represent OR, AND, NOT and EXCLUSIVE OR by "|", "&", "¬" (prefix) and "¬" (infix). Short-circuit operators Some programming languages, e.g., Ada, have short-circuit Boolean operators. These operators use a lazy evaluation: if the value of the expression can be determined from the left hand Boolean expression, then they do not evaluate the right hand Boolean expression. As a result, there may be side effects that only occur for one value of the left hand operand (a short code sketch illustrating this appears at the end of this article). Examples The expression 5 > 3 is evaluated as true. The expression 3 > 5 is evaluated as false. 5 >= 3 and 3 <= 5 are equivalent Boolean expressions, both of which are evaluated as true. Of course, most Boolean expressions will contain at least one variable (X > 3), and often more (X > Y). See also Expression (computer science) Expression (mathematics) Boolean function References External links The Calculus of Logic, by George Boole, Cambridge and Dublin Mathematical Journal Vol. III (1848), pp. 183–98. Boolean algebra Operators (programming)
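To illustrate the short-circuit behavior described above, here is a minimal Python sketch (Python's and/or operators are short-circuiting; the function names are illustrative):

```python
def left(value):
    print("left evaluated")
    return value

def right():
    print("right evaluated")   # side effect: runs only if actually needed
    return True

# 'and' stops at the first falsy operand, so right() is never called here.
result = left(False) and right()
print(result)                   # False; only "left evaluated" was printed

# With a truthy left operand, the right side must be evaluated.
result = left(True) and right() # prints both messages
print(result)                   # True
```

This is exactly why guard expressions such as `x is not None and x.value > 0` are safe: the second operand is evaluated only when the first has already established that it cannot fail.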
Boolean expression
[ "Mathematics" ]
563
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
3,127,040
https://en.wikipedia.org/wiki/Arctic%E2%80%93alpine
An Arctic–alpine taxon is one whose natural distribution includes the Arctic and more southerly mountain ranges, particularly the Alps. The presence of identical or similar taxa in both the tundra of the far north and high mountain ranges much further south is a testament to the similar environmental conditions found in the two locations. Arctic–alpine plants, for instance, must be adapted to the low temperatures, extremes of temperature, strong winds and short growing season; they are therefore typically low-growing and often form mats or cushions to reduce water loss through evapotranspiration. It is often assumed that an organism which currently has an Arctic–alpine distribution was, during colder periods of the Earth's history (such as during the Pleistocene glaciations), widespread across the area between the Arctic and the Alps. This is known from pollen records to be true for Dryas octopetala, for instance. In other cases, the disjunct distribution may be the result of long-distance dispersal. Examples of Arctic–alpine plants include: Arabis alpina Betula nana Draba incana Dryas octopetala Gagea serotina (syn. Lloydia serotina) Loiseleuria procumbens Micranthes stellaris Oxyria digyna Ranunculus glacialis Salix herbacea Saussurea alpina Saxifraga oppositifolia Silene acaulis Thalictrum alpinum Veronica alpina References Biogeography
Arctic–alpine
[ "Biology" ]
301
[ "Biogeography" ]
3,127,042
https://en.wikipedia.org/wiki/Behavioral%20medicine
Behavioral medicine is concerned with the integration of knowledge in the biological, behavioral, psychological, and social sciences relevant to health and illness. These sciences include epidemiology, anthropology, sociology, psychology, physiology, pharmacology, nutrition, neuroanatomy, endocrinology, and immunology. The term is often used interchangeably, but incorrectly, with health psychology. The practice of behavioral medicine encompasses health psychology, but also includes applied psychophysiological therapies such as biofeedback, hypnosis, and bio-behavioral therapy of physical disorders, aspects of occupational therapy, rehabilitation medicine, and physiatry, as well as preventive medicine. In contrast, health psychology represents a stronger emphasis specifically on psychology's role in both behavioral medicine and behavioral health. Behavioral medicine is especially relevant today, when many health problems are viewed as primarily behavioral in nature, as opposed to medical. For example, smoking, leading a sedentary lifestyle, and alcohol use disorder or other substance use disorders are all factors in the leading causes of death in modern society. Practitioners of behavioral medicine include appropriately qualified nurses, social workers, psychologists, and physicians (including medical students and residents), and these professionals often act as behavioral change agents, even in their medical roles. Behavioral medicine uses the biopsychosocial model of illness instead of the medical model. This model incorporates biological, psychological, and social elements into its approach to disease instead of relying only on a biological deviation from standard or normal functioning. Origins and history Writings from the earliest civilizations have alluded to the relationship between mind and body, the fundamental concept underlying behavioral medicine. The field of psychosomatic medicine is among its academic forebears, although it is now obsolete as a psychological discipline. In the form in which it is generally understood today, the field dates back to the 1970s. The earliest uses of the term were in the title of a book by Lee Birk (Biofeedback: Behavioral Medicine), published in 1973; and in the names of two clinical research units, the Center for Behavioral Medicine, founded by Ovide F. Pomerleau and John Paul Brady at the University of Pennsylvania in 1973, and the Laboratory for the Study of Behavioral Medicine, founded by William Stewart Agras at Stanford University in 1974. Subsequently, the field burgeoned, and inquiry into behavioral, physiological, and biochemical interactions with health and illness gained prominence under the rubric of behavioral medicine. In 1976, in recognition of this trend, the National Institutes of Health created the Behavioral Medicine Study Section to encourage and facilitate collaborative research across disciplines. The 1977 Yale Conference on Behavioral Medicine and a meeting of the National Academy of Sciences were explicitly aimed at defining and delineating the field in the hopes of helping to guide future research. Based on deliberations at the Yale conference, Schwartz and Weiss proposed the biopsychosocial model, emphasizing the new field's interdisciplinary roots and calling for the integration of knowledge and techniques broadly derived from behavioral and biomedical science.
Shortly after, Pomerleau and Brady published a book entitled Behavioral Medicine: Theory and Practice, in which they offered an alternative definition focusing more closely on the particular contribution of the experimental analysis of behavior in shaping the field. Additional developments during this period of growth and ferment included the establishment of learned societies (the Society of Behavioral Medicine and the Academy of Behavioral Medicine Research, both in 1978) and of journals (the Journal of Behavioral Medicine in 1977 and the Annals of Behavioral Medicine in 1979). In 1990, at the International Congress of Behavioral Medicine in Sweden, the International Society of Behavioral Medicine was founded to provide, through its many daughter societies and through its own peer-reviewed journal (the International Journal of Behavioral Medicine), an international focus for professional and academic development. Areas of study Behavior-related illnesses Many chronic diseases have a behavioral component, but the following illnesses can be significantly and directly modified by behavior, as opposed to using pharmacological treatment alone: Substance use: many studies demonstrate that medication is most effective when combined with behavioral intervention. Hypertension: deliberate attempts to reduce stress can also reduce high blood pressure. Insomnia: cognitive and behavioural interventions are recommended as a first-line treatment for insomnia. Treatment adherence and compliance Medications work best for controlling chronic illness when patients use them as prescribed and do not deviate from the physician's instructions. This is true for both physiological and mental illnesses. However, in order for the patient to adhere to a treatment regimen, the physician must provide accurate information about the regimen, an adequate explanation of what the patient must do, and should also offer more frequent reinforcement of appropriate compliance. Patients with strong social support systems, particularly through marriages and families, typically exhibit better compliance with their treatment regimen. Examples include telemonitoring through telephone or video conference with the patient, and case management in which a range of medical professionals consistently follow up with the patient. Doctor-patient relationship It is important for doctors to make meaningful connections and relationships with their patients, instead of simply having interactions with them, which often occurs in a system that relies heavily on specialist care. For this reason, behavioral medicine emphasizes honest and clear communication between the doctor and the patient in the successful treatment of any illness, and also in the maintenance of an optimal level of physical and mental health. Obstacles to effective communication include power dynamics, vulnerability, and feelings of helplessness or fear. Doctors and other healthcare providers also struggle with interviewing difficult or uncooperative patients, as well as giving undesirable medical news to patients and their families. The field has placed increasing emphasis on working towards sharing the power in the relationship, as well as training the doctor to empower the patient to make their own behavioral changes. More recently, behavioral medicine has expanded its area of practice to interventions with providers of medical services, in recognition of the fact that the behavior of providers can have a determinative effect on patient outcomes.
Objectives include maintaining professional conduct, productivity, and altruism, in addition to preventing burnout, depression, and job dissatisfaction among practitioners. Learning principles, models and theories Behavioral medicine includes understanding the clinical applications of learning principles such as reinforcement, avoidance, generalisation, and discrimination, and of cognitive-social learning models as well, such as the cognitive-social learning model of relapse prevention by Marlatt. Learning theory Learning can be defined as a relatively permanent change in a behavioral tendency occurring as a result of reinforced practice. A behavior is significantly more likely to occur again in the future as a result of learning, making learning important in acquiring maladaptive physiological responses that can lead to psychosomatic disease. This also implies that patients can change their unhealthy behaviors in order to improve their diagnoses or health, especially in treating addictions and phobias. The three primary theories of learning are: classical conditioning, operant conditioning, and modeling. Other areas include correcting perceptual bias in diagnostic behavior; remediating clinicians' attitudes that impinge negatively upon patient treatment; and addressing clinicians' behaviors that promote disease development and illness maintenance in patients, whether within a malpractice framework or not. Modern-day culture involves many acute microstressors that add up to a large amount of chronic stress over time, leading to disease and illness. According to Hans Selye, the body's stress response is designed to heal and involves the three phases of his General Adaptation Syndrome: alarm, resistance, and exhaustion. Applications An example of how to apply the biopsychosocial model that behavioral medicine utilizes is through chronic pain management. Before this model was adopted, physicians were unable to explain why certain patients did not experience pain despite significant tissue damage, which led them to see the purely biomedical model of disease as inadequate. However, increasing damage to body parts and tissues is generally associated with increasing levels of pain. Doctors started including a cognitive component to pain, leading to the gate control theory and the discovery of the placebo effect. Psychological factors that affect pain include self-efficacy, anxiety, fear, abuse, life stressors, and pain catastrophizing, which is particularly responsive to behavioral interventions. In addition, one's genetic predisposition to psychological distress and pain sensitivity will affect pain management. Finally, social factors such as socioeconomic status, race, and ethnicity also play a role in the experience of pain. Behavioral medicine involves examining all of the many factors associated with illness, instead of just the biomedical aspect, and heals disease by including a component of behavioral change on the part of the patient. In a review published in 2011, Fisher et al. illustrate how a behavioral medicine approach can be applied to a number of common diseases and risk factors such as cardiovascular disease/diabetes, cancer, HIV/AIDS and tobacco use, poor diet, physical inactivity and excessive alcohol consumption. Evidence indicates that behavioral interventions are cost-effective and add quality of life. Importantly, behavioral interventions can have broad effects and benefits on prevention, disease management, and well-being across the life span.
Journals Annals of Behavioral Medicine International Journal of Behavioral Medicine Journal of Behavior Analysis of Sports, Health, Fitness and Behavioral Medicine Journal of Behavioral Health and Medicine Journal of Behavioral Medicine Organizations Association for Behavior Analysis International's Behavioral Medicine Special Interest Group Society of Behavioral Medicine International Society of Behavioral Medicine See also Health psychology Organizational psychology Medical psychology Occupational health psychology References Epidemiology Health Interdisciplinary branches of psychology Neuroanatomy
Behavioral medicine
[ "Environmental_science" ]
1,930
[ "Epidemiology", "Environmental social science" ]
3,127,114
https://en.wikipedia.org/wiki/Join%20%28topology%29
In topology, a field of mathematics, the join of two topological spaces $A$ and $B$, often denoted by $A \ast B$ or $A \star B$, is a topological space formed by taking the disjoint union of the two spaces, and attaching line segments joining every point in $A$ to every point in $B$. The join of a space $A$ with itself is denoted by $A^{\star 2} := A \star A$. The join is defined in slightly different ways in different contexts. Geometric sets If $A$ and $B$ are subsets of the Euclidean space $\mathbb{R}^n$, then $A \star B := \{ t \cdot a + (1-t) \cdot b \mid a \in A, b \in B, t \in [0, 1] \}$, that is, the set of all line-segments between a point in $A$ and a point in $B$. Some authors restrict the definition to subsets that are joinable: any two different line-segments, connecting a point of A to a point of B, meet in at most a common endpoint (that is, they do not intersect in their interior). Every two subsets can be made "joinable". For example, if $A$ is in $\mathbb{R}^n$ and $B$ is in $\mathbb{R}^m$, then $A \times \{0^m\} \times \{0\}$ and $\{0^n\} \times B \times \{1\}$ are joinable in $\mathbb{R}^{n+m+1}$. The figure above shows an example for m = n = 1, where $A$ and $B$ are line-segments. Examples The join of two simplices is a simplex: the join of an n-dimensional and an m-dimensional simplex is an (m+n+1)-dimensional simplex. Some special cases are: The join of two disjoint points is an interval (m = n = 0). The join of a point and an interval is a triangle (m = 0, n = 1). The join of two line segments is homeomorphic to a solid tetrahedron or disphenoid, illustrated in the figure above right (m = n = 1). The join of a point and an (n-1)-dimensional simplex is an n-dimensional simplex. The join of a point and a polygon (or any polytope) is a pyramid, like the join of a point and square is a square pyramid. The join of a point and a cube is a cubic pyramid. The join of a point and a circle is a cone, and the join of a point and a sphere is a hypercone. Topological spaces If $A$ and $B$ are any topological spaces, then $A \star B := A \sqcup_{p_0} (A \times B \times [0, 1]) \sqcup_{p_1} B$, where the cylinder $A \times B \times [0, 1]$ is attached to the original spaces $A$ and $B$ along the natural projections of the faces of the cylinder: $p_0 \colon A \times B \times \{0\} \to A$ and $p_1 \colon A \times B \times \{1\} \to B$. Usually it is implicitly assumed that $A$ and $B$ are non-empty, in which case the definition is often phrased a bit differently: instead of attaching the faces of the cylinder $A \times B \times [0, 1]$ to the spaces $A$ and $B$, these faces are simply collapsed in a way suggested by the attachment projections: we form the quotient space $A \star B := (A \times B \times [0, 1]) / \sim$, where the equivalence relation $\sim$ is generated by $(a, b_1, 0) \sim (a, b_2, 0)$ and $(a_1, b, 1) \sim (a_2, b, 1)$. At the endpoints, this collapses $\{a\} \times B \times \{0\}$ to $\{a\}$ and $A \times \{b\} \times \{1\}$ to $\{b\}$. If $A$ and $B$ are bounded subsets of the Euclidean space $\mathbb{R}^n$, and $A \subseteq U$ and $B \subseteq V$, where $U, V$ are disjoint subspaces of $\mathbb{R}^n$ such that the dimension of their affine hull is $\dim U + \dim V + 1$ (e.g. two non-intersecting non-parallel lines in $\mathbb{R}^3$), then the topological definition reduces to the geometric definition, that is, the "geometric join" is homeomorphic to the "topological join". Abstract simplicial complexes If $A$ and $B$ are any abstract simplicial complexes, then their join is an abstract simplicial complex $A \star B$ defined as follows: The vertex set $V(A \star B)$ is a disjoint union of $V(A)$ and $V(B)$. The simplices of $A \star B$ are all disjoint unions of a simplex of $A$ with a simplex of $B$: $A \star B := \{ \sigma \sqcup \tau \mid \sigma \in A, \tau \in B \}$ (in the special case in which $V(A)$ and $V(B)$ are disjoint, the join is simply $\{ \sigma \cup \tau \mid \sigma \in A, \tau \in B \}$). Examples Suppose $A = \{ \emptyset, \{a\} \}$ and $B = \{ \emptyset, \{b\} \}$, that is, two sets with a single point. Then $A \star B = \{ \emptyset, \{a\}, \{b\}, \{a, b\} \}$, which represents a line-segment. Note that the vertex sets of A and B are disjoint; otherwise, we should have made them disjoint. For example, $A^{\star 2} = A \star A = \{ \emptyset, \{a_1\}, \{a_2\}, \{a_1, a_2\} \}$, where a1 and a2 are two copies of the single element in V(A). Topologically, the result is the same as $A \star B$: a line-segment. Suppose $A = \{ \emptyset, \{a\} \}$ and $B = \{ \emptyset, \{b\}, \{c\}, \{b, c\} \}$. Then $A \star B$ represents a triangle. Suppose $A = \{ \emptyset, \{a_1\}, \{a_2\} \}$ and $B = \{ \emptyset, \{b_1\}, \{b_2\} \}$, that is, two sets with two discrete points. Then $A \star B$ is a complex with facets $\{a_1, b_1\}, \{a_1, b_2\}, \{a_2, b_1\}, \{a_2, b_2\}$, which represents a "square".
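A small Python sketch may help fix the combinatorial definition just given: a complex is modeled as a set of frozensets of vertices, vertices are tagged to force disjoint vertex sets, and join enumerates all unions of one face from each complex (illustrative code under those modeling assumptions, not from any library):

```python
from itertools import product

def tag(complex_, label):
    """Make vertex sets disjoint by tagging every vertex with a label."""
    return {frozenset((label, v) for v in face) for face in complex_}

def join(A, B):
    """Join of two abstract simplicial complexes (vertex sets assumed disjoint)."""
    return {fa | fb for fa, fb in product(A, B)}

# Two one-point complexes: each is {emptyset, {point}}.
A = tag({frozenset(), frozenset({"a"})}, "A")
B = tag({frozenset(), frozenset({"b"})}, "B")

segment = join(A, B)                 # {{}, {a}, {b}, {a, b}}: a line segment
print(len(segment))                  # 4 faces
print(max(len(f) for f in segment))  # 2: the facet {a, b}
```

Running join(A, A) after tagging the two copies differently reproduces the $A^{\star 2}$ example above, and iterating the construction on two-point complexes yields the "square" and, one step further, the octahedron mentioned later in the article.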
The combinatorial definition is equivalent to the topological definition in the following sense: for every two abstract simplicial complexes $A$ and $B$, $\|A \star B\|$ is homeomorphic to $\|A\| \star \|B\|$, where $\|X\|$ denotes any geometric realization of the complex $X$. Maps Given two maps $f \colon A_1 \to A_2$ and $g \colon B_1 \to B_2$, their join $f \star g \colon A_1 \star B_1 \to A_2 \star B_2$ is defined based on the representation of each point in the join as $t \cdot a + (1-t) \cdot b$, for some $a \in A_1$, $b \in B_1$, $t \in [0, 1]$: $(f \star g)(t \cdot a + (1-t) \cdot b) := t \cdot f(a) + (1-t) \cdot g(b)$. Special cases The cone of a topological space $X$, denoted $CX$, is a join of $X$ with a single point. The suspension of a topological space $X$, denoted $SX$, is a join of $X$ with $S^0$ (the 0-dimensional sphere, or, the discrete space with two points). Properties Commutativity The join of two spaces is commutative up to homeomorphism, i.e. $A \star B \cong B \star A$. Associativity It is not true that the join operation defined above is associative up to homeomorphism for arbitrary topological spaces. However, for locally compact Hausdorff spaces $A, B, C$ we have $(A \star B) \star C \cong A \star (B \star C)$. Therefore, one can define the k-times join of a space with itself, $A^{\star k} := A \star \cdots \star A$ (k times). It is possible to define a different join operation $A \mathbin{\hat{\star}} B$ which uses the same underlying set as $A \star B$ but a different topology, and this operation is associative for all topological spaces. For locally compact Hausdorff spaces $A$ and $B$, the joins $A \star B$ and $A \mathbin{\hat{\star}} B$ coincide. Homotopy equivalence If $A$ and $A'$ are homotopy equivalent and $B$ and $B'$ are homotopy equivalent, then $A \star B$ and $A' \star B'$ are homotopy equivalent too. Reduced join Given basepointed CW complexes $(A, a_0)$ and $(B, b_0)$, the "reduced join" $A \star B / (A \star \{b_0\} \cup \{a_0\} \star B)$ is homeomorphic to the reduced suspension $\Sigma(A \wedge B)$ of the smash product. Consequently, since $A \star \{b_0\} \cup \{a_0\} \star B$ is contractible, there is a homotopy equivalence $A \star B \simeq \Sigma(A \wedge B)$. This equivalence establishes the isomorphism $\widetilde{H}_n(A \star B) \cong \widetilde{H}_{n-1}(A \wedge B)$. Homotopical connectivity Given two triangulable spaces $A, B$, the homotopical connectivity ($\eta_{\pi}$) of their join is at least the sum of connectivities of its parts: $\eta_{\pi}(A \star B) \geq \eta_{\pi}(A) + \eta_{\pi}(B)$. As an example, let $A = B = S^0$ be a set of two disconnected points. There is a 1-dimensional hole between the points, so $\eta_{\pi}(A) = 1$. The join $A \star B$ is a square, which is homeomorphic to a circle that has a 2-dimensional hole, so $\eta_{\pi}(A \star B) = 2$. The join of this square with a third copy of $S^0$ is an octahedron, which is homeomorphic to $S^2$, whose hole is 3-dimensional. In general, the join of n copies of $S^0$ is homeomorphic to $S^{n-1}$ and $\eta_{\pi}(S^{n-1}) = n$. Deleted join The deleted join of an abstract complex A is an abstract complex containing all disjoint unions of disjoint faces of A: $A^{\star 2}_{\Delta} := \{ \sigma_1 \sqcup \sigma_2 \mid \sigma_1, \sigma_2 \in A,\ \sigma_1 \cap \sigma_2 = \emptyset \}$. Examples Suppose $A = \{ \emptyset, \{a\} \}$ (a single point). Then $A^{\star 2}_{\Delta} = \{ \emptyset, \{a_1\}, \{a_2\} \}$, that is, a discrete space with two disjoint points (recall that $A^{\star 2} = \{ \emptyset, \{a_1\}, \{a_2\}, \{a_1, a_2\} \}$ = an interval). Suppose $A = \{ \emptyset, \{a\}, \{b\} \}$ (two points). Then $A^{\star 2}_{\Delta}$ is a complex with facets $\{a_1, b_2\}, \{b_1, a_2\}$ (two disjoint edges). Suppose $A = \{ \emptyset, \{a\}, \{b\}, \{a, b\} \}$ (an edge). Then $A^{\star 2}_{\Delta}$ is a complex with facets $\{a_1, b_1\}, \{a_1, b_2\}, \{a_2, b_1\}, \{a_2, b_2\}$ (a square). Recall that $A^{\star 2}$ represents a solid tetrahedron. Suppose A represents an (n-1)-dimensional simplex (with n vertices). Then the join $A^{\star 2}$ is a (2n-1)-dimensional simplex (with 2n vertices): it is the set of all points $(x_1, \dots, x_{2n})$ with non-negative coordinates such that $x_1 + \cdots + x_{2n} = 1$. The deleted join $A^{\star 2}_{\Delta}$ can be regarded as a subset of this simplex: it is the set of all points $(x_1, \dots, x_{2n})$ in that simplex, such that the only nonzero coordinates are some k coordinates in $x_1, \dots, x_n$, and the complementary n-k coordinates in $x_{n+1}, \dots, x_{2n}$. Properties The deleted join operation commutes with the join. That is, for every two abstract complexes A and B: $(A \star B)^{\star 2}_{\Delta} = A^{\star 2}_{\Delta} \star B^{\star 2}_{\Delta}$. Proof. Each simplex in the left-hand-side complex is of the form $(\sigma_1 \sqcup \tau_1) \sqcup (\sigma_2 \sqcup \tau_2)$, where $\sigma_1, \sigma_2 \in A$, $\tau_1, \tau_2 \in B$, and $(\sigma_1 \sqcup \tau_1), (\sigma_2 \sqcup \tau_2)$ are disjoint. Due to the properties of a disjoint union, the latter condition is equivalent to: $\sigma_1, \sigma_2$ are disjoint and $\tau_1, \tau_2$ are disjoint. Each simplex in the right-hand-side complex is of the form $(\sigma_1 \sqcup \sigma_2) \sqcup (\tau_1 \sqcup \tau_2)$, where $\sigma_1, \sigma_2 \in A$ and $\tau_1, \tau_2 \in B$, and $\sigma_1, \sigma_2$ are disjoint and $\tau_1, \tau_2$ are disjoint. So the sets of simplices on both sides are exactly the same.
□ In particular, the deleted join of the n-dimensional simplex $\Delta^n$ with itself is the n-dimensional crosspolytope, which is homeomorphic to the n-dimensional sphere $S^n$. Generalization The n-fold k-wise deleted join of a simplicial complex A is defined as $A^{\star n}_{\Delta(k)} := \{ \sigma_1 \sqcup \sigma_2 \sqcup \cdots \sqcup \sigma_n \mid \sigma_1, \dots, \sigma_n \text{ are } k\text{-wise disjoint faces of } A \}$, where "k-wise disjoint" means that every subset of k of them has an empty intersection. In particular, the n-fold n-wise deleted join contains all disjoint unions of n faces whose intersection is empty, and the n-fold 2-wise deleted join is smaller: it contains only the disjoint unions of n faces that are pairwise-disjoint. The 2-fold 2-wise deleted join is just the simple deleted join defined above. The n-fold 2-wise deleted join of a discrete space with m points is called the (m,n)-chessboard complex. See also Desuspension References Hatcher, Allen, Algebraic topology. Cambridge University Press, Cambridge, 2002. xii+544 pp. and Brown, Ronald, Topology and Groupoids Section 5.7 Joins. Algebraic topology Operations on structures
Join (topology)
[ "Mathematics" ]
1,923
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
3,127,352
https://en.wikipedia.org/wiki/Hypholoma%20fasciculare
Hypholoma fasciculare, commonly known as the sulphur tuft or clustered woodlover, is a common woodland mushroom, often in evidence when hardly any other mushrooms are to be found. This saprotrophic small gill fungus grows prolifically in large clumps on stumps, dead roots or rotting trunks of broadleaved trees. The "sulphur tuft" is bitter and poisonous; consuming it can cause vomiting, diarrhea and convulsions. The toxins are steroids known as fasciculols and have been shown to be calmodulin inhibitors. Taxonomy and naming The specific epithet is derived from the Latin fascicularis, 'in bundles' or 'clustered', referring to its habit of growing in clumps. Its name in Japanese is Nigakuritake (苦栗茸, meaning "bitter kuritake"). Description The hemispherical cap ranges from in diameter. It is smooth and sulphur yellow with an orange-brown centre and whitish margin. The crowded gills are initially yellow but darken to a distinctive green colour as the blackish spores develop on the yellow flesh. It has a purple-brown spore print. The stipe is tall and 4–10 mm wide, light yellow, orange-brown below, often with an indistinct ring zone coloured dark by the spores. The taste is very bitter; the bitterness is lost in cooking, but the mushroom remains poisonous. Similar species The edible Hypholoma capnoides is similar, but lacks the greenish-yellow gills and bitter taste. H. sublateritium is similar as well, but has a reddish cap. Microscopic characteristics The spores are purple-black in colour, 6–8 × 4–4.5 μm in size, and egg-shaped. Distribution and habitat Hypholoma fasciculare grows prolifically on the dead wood of both deciduous and coniferous trees. It is more commonly found on decaying deciduous wood due to the lower lignin content of this wood relative to coniferous wood. Hypholoma fasciculare is widespread and abundant in northern Europe and North America. It has been recorded from Iran, and also from eastern Anatolia in Turkey. It can appear anytime from spring to autumn. Use in forestry Hypholoma fasciculare has been used successfully as an experimental treatment to competitively displace a common fungal disease of conifers, Armillaria root rot, from managed coniferous forests. Chemistry and toxicity The toxicity of sulfur tuft mushrooms has been attributed, at least partially, to the toxic steroids fasciculol E and fasciculol F (in mice, with LD50 (i.p.) values of 50 mg/kg and 168 mg/kg, respectively). In humans, symptoms may be delayed for 5–10 hours after consumption, after which time there may be diarrhea, nausea, vomiting, proteinuria and collapse. Paralysis and impaired vision have been recorded. Symptoms generally resolve over a few days. The autopsy of one fatality revealed fulminant hepatitis reminiscent of amatoxin poisoning, along with involvement of the kidneys and myocardium. The mushroom was consumed in a dish with other species, so the death cannot be attributed to the sulfur tuft with certainty. Extracts of the mushroom show anticoagulant effects. Gallery References External links fasciculare Poisonous fungi Fungi described in 1778 Fungi of North America Fungi of Europe Fungus species
Hypholoma fasciculare
[ "Biology", "Environmental_science" ]
722
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
3,127,378
https://en.wikipedia.org/wiki/Bicinchoninic%20acid%20assay
The bicinchoninic acid assay (BCA assay), also known as the Smith assay after its inventor, Paul K. Smith of the Pierce Chemical Company (now part of Thermo Fisher Scientific), is a biochemical assay for determining the total concentration of protein in a solution (0.5 μg/mL to 1.5 mg/mL), similar to the Lowry protein assay, the Bradford protein assay or the biuret test. The total protein concentration is exhibited by a color change of the sample solution from blue to purple in proportion to protein concentration, which can then be measured using colorimetric techniques. The BCA assay was patented by Pierce Chemical Company in 1989 and the patent expired in 2006. Mechanism A stock BCA solution contains the following ingredients in a highly alkaline solution with a pH of 11.25: bicinchoninic acid, sodium carbonate, sodium bicarbonate, sodium tartrate, and copper(II) sulfate pentahydrate. The BCA assay primarily relies on two reactions. First, the peptide bonds in protein reduce Cu2+ ions from the copper(II) sulfate to Cu1+ (a temperature-dependent reaction). The amount of Cu2+ reduced is proportional to the amount of protein present in the solution. Next, two molecules of bicinchoninic acid chelate with each Cu1+ ion, forming a purple-colored complex that strongly absorbs light at a wavelength of 562 nm. Formation of the bicinchoninic acid–Cu1+ complex in protein samples is also influenced by the presence of cysteine/cystine, tyrosine, and tryptophan side chains. At higher temperatures (37 to 60 °C), peptide bonds assist in the formation of the reaction complex. Incubating the BCA assay at higher temperatures is recommended as a way to increase assay sensitivity while minimizing the variances caused by unequal amino acid composition. The amount of protein present in a solution can be quantified by measuring the absorption spectra and comparing with protein solutions of known concentration. Limitations The BCA assay is largely incompatible with reducing agents and metal chelators, although trace quantities may be tolerated. The BCA assay also reportedly responds to common membrane lipids and phospholipids. Assay variants There are a few alternative variants of the BCA assay: Original BCA assay As described by Smith, the original BCA assay is a two-component protocol. The two reagents are "stable indefinitely at room temperature". Modern (likely exact or highly similar) formulations are available from at least two commercial vendors. The BCA working solution is generated by mixing Reagent A and Reagent B in a 50:1 ratio, and can be prepared either weekly (it is moderately stable) or as needed. Reagent A 1% w/v BCA-Na2 (CAS: 979-88-4) 2% w/v Na2CO3·H2O (CAS: 5968-11-6) 0.16% w/v Na2 tartrate (CAS: 868-18-8) 0.4% w/v NaOH (CAS: 1310-73-2) 0.95% w/v NaHCO3 (CAS: 144-55-8) Add 50% NaOH or solid NaHCO3 to adjust the pH to 11.25 A suggested but untested alternative formulation in the Smith manuscript is to leave out the NaOH (and presumably not perform the manual pH adjustment to 11.25), and instead to dissolve the other components in a pre-prepared buffer of 0.25 M Na2CO3 and 0.01 M NaHCO3. Notably, Smith synthesized their own BCA via the Pfitzinger reaction of isatin and acetoin, substituting NaOH for KOH but otherwise following the synthetic method of Lesene and Henze, as the BCA available from commercial vendors of that time was too impure for their use. At least three successive recrystallizations of their synthesized BCA from 70 °C water were needed to sufficiently purify it for the assay. 
Reagent B 4% w/v CuSO4·5H2O Micro BCA assay (for dilute solutions) The Micro BCA assay is a 3-component protocol which uses concentrated stocks of the biuret-reaction, BCA, and copper(II) reagents. It allows for an improved sensitivity of ~2–40 μg/mL vs. 20–2000 μg/mL for the original BCA assay. However, it has a different, and generally stronger, susceptibility to interference from non-protein components. Kits for the Micro BCA assay are available from at least two commercial vendors. Notably, the composition and use of a "Micro BCA Reagent and Protocol" was described in the original manuscript by Smith, and modern kits likely consist of an exact or highly similar formulation. The protocol consists of mixing Micro-Reagent B and the Copper Solution 25:1 to form Micro-Reagent C (MC), which is not shelf-stable and should be freshly prepared, and then mixing MC 1:1 with Micro-Reagent A to produce the final (also unstable) assay working solution. Micro-Reagent A, Micro-Reagent B, and the Copper Solution are stable indefinitely at room temperature. Micro-Reagent A (MA) 8% w/v Na2CO3·H2O (CAS: 5968-11-6) 1.6% w/v NaOH (CAS: 1310-73-2) 1.6% w/v Na2 tartrate (CAS: 868-18-8) (10× the concentrations in Reagent A of the original BCA assay above) Sufficient NaHCO3 (CAS: 144-55-8) to adjust pH to 11.25 Micro-Reagent B (MB) 4% w/v BCA-Na2 (CAS: 979-88-4) (4× the concentration in Reagent A of the original BCA assay above) Copper Solution 4% w/v CuSO4·5H2O (CAS: 7758-99-8) (the same concentration as Reagent B in the original BCA assay above) Reducing agent compatible (RAC) BCA assay This type of BCA assay includes a proprietary covalent thiol-blocking "Compatibility Reagent", a.k.a. a Reducing Agent Compatibility Agent (RACA). Although this allows greater compatibility with reducing agents, the assay has a different interference profile from other non-protein components. Rapid Gold BCA This type of BCA assay seems to only be available from Thermo Fisher Scientific. Reportedly it uses "the same copper reduction method as the traditional BCA Protein Assay with a unique [proprietary] copper chelator", which absorbs at 480 nm instead of 562 nm. This proprietary chelator and a presumably optimized biuret-reaction formulation allow the assay to provide rapid (<5 min) results without the 37 °C or higher incubation of the original BCA assay. However, the assay has a different interference profile from other non-protein components. The Pierce Quantitative Colorimetric Peptide Assay (now owned by and available from Thermo Fisher Scientific) appears to use a similar or identical 480 nm-absorbing proprietary copper chelator. See also Biuret test Bradford assay Colloidal gold protein assay References External links OpenWetWare BCA assay chemistry Biochemistry methods Chemical tests
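In practice, the readout of any of these variants is converted to a protein concentration by fitting a standard curve of A562 against standards of known concentration, as noted in the Mechanism section. The following Python sketch shows that step; the absorbance numbers are invented for illustration and are not from the Smith paper or any vendor protocol.

```python
# Fit a straight line A562 = slope*c + intercept to BSA standards,
# then invert it to estimate an unknown sample. Illustrative data only.
known_conc = [0.0, 0.125, 0.25, 0.5, 1.0, 2.0]     # mg/mL BSA standards
a562       = [0.02, 0.10, 0.19, 0.37, 0.71, 1.38]  # illustrative A562 readings

# Ordinary least-squares fit in pure Python (no NumPy required).
n = len(known_conc)
mean_c = sum(known_conc) / n
mean_a = sum(a562) / n
slope = (sum((c - mean_c) * (a - mean_a) for c, a in zip(known_conc, a562))
         / sum((c - mean_c) ** 2 for c in known_conc))
intercept = mean_a - slope * mean_c

def protein_conc(absorbance):
    """Invert the calibration line; only valid within the standard range."""
    return (absorbance - intercept) / slope

print(f"slope = {slope:.3f} AU per mg/mL, intercept = {intercept:.3f} AU")
print(f"A562 = 0.55 -> {protein_conc(0.55):.2f} mg/mL")
```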
Bicinchoninic acid assay
[ "Chemistry", "Biology" ]
1,611
[ "Biochemistry methods", "Biochemistry", "Chemical tests" ]
3,127,492
https://en.wikipedia.org/wiki/Geon%20%28psychology%29
Geons are the simple 2D or 3D forms such as cylinders, bricks, wedges, cones, circles and rectangles corresponding to the simple parts of an object in Biederman's recognition-by-components theory. The theory proposes that the visual input is matched against structural representations of objects in the brain. These structural representations consist of geons and their relations (e.g., an ice cream cone could be broken down into a sphere located above a cone). Only a modest number of geons (< 40) is assumed. When combined in different relations to each other (e.g., on-top-of, larger-than, end-to-end, end-to-middle) and with coarse metric variation such as aspect ratio and 2D orientation, billions of possible 2- and 3-geon objects can be generated. Two classes of shape-based visual identification that are not done through geon representations are those involved in: a) distinguishing between similar faces, and b) classifications that don't have definite boundaries, such as those of bushes or a crumpled garment. Typically, such identifications are not viewpoint-invariant. Properties of geons There are four essential properties of geons: View-invariance: Each geon can be distinguished from the others from almost any viewpoint, except for "accidents" at highly restricted angles in which one geon projects an image that could be a different geon, as, for example, when an end-on view of a cylinder projects an image that could also arise from a sphere or a circle. Objects represented as an arrangement of geons would, similarly, be viewpoint-invariant. Stability or resistance to visual noise: Because the geons are simple, they are readily supported by the Gestalt property of smooth continuation, rendering their identification robust to partial occlusion and degradation by visual noise, as, for example, when a cylinder might be viewed behind a bush. Invariance to illumination direction and surface markings and texture. High distinctiveness: The geons differ qualitatively, with only two or three levels of an attribute, such as straight vs. curved, parallel vs. non-parallel, positive vs. negative curvature. These qualitative differences can be readily detected, rendering the geons, and the objects composed of them, readily distinguishable. Derivation of invariant properties of geons Viewpoint invariance: The viewpoint invariance of geons derives from their being distinguished by three nonaccidental properties (NAPs) of contours that do not change with orientation in depth: Whether the contour is straight or curved, The vertex that is formed when two or three contours coterminate (that is, end together at the same point) in the image, i.e., an L (2 contours), fork (3 contours with all angles < 180°), or an arrow (3 contours, with one angle > 180°), and Whether a pair of contours is parallel or not (with allowance for perspective). When not parallel, the contours can be straight (converging or diverging) or curved, with positive or negative curvature, forming a convex or concave envelope, respectively (see Figure below). NAPs can be distinguished from metric properties (MPs), such as the degree of non-zero curvature of a contour or its length, which do vary with changes in orientation in depth. Invariance to lighting direction and surface characteristics Geons can be determined from the contours that mark the edges at orientation and depth discontinuities of an image of an object, i.e., the contours that specify a good line drawing of the object's shape or volume. 
Orientation discontinuities define those edges where there is a sharp change in the orientation of the normal to the surface of a volume, as occurs at the contour at the boundaries of the different sides of a brick. A depth discontinuity is where the observer's line of sight jumps from the surface of an object to the background (i.e., is tangent to the surface), as occurs at the sides of a cylinder. The same contour might mark both an orientation and a depth discontinuity, as with the back edge of a brick. Because the geons are based on these discontinuities, they are invariant to variations in the direction of lighting, shadows, and surface texture and markings. Geons and generalized cones The geons constitute a partition of the set of generalized cones, which are the volumes created when a cross section is swept along an axis. For example, a circle swept along a straight axis would define a cylinder (see Figure). A rectangle swept along a straight axis would define a "brick" (see Figure). Four dimensions with contrastive values (i.e., mutually exclusive values) define the current set of geons (see Figure): Shape of cross section: round vs. straight. For example, as stated above, a rectangle swept along a straight axis would define a "brick", and the cross section would be straight. Axis: straight vs. curved. Size of cross-section as it is swept along an axis: constant vs. expanding (or contracting) vs. expanding then contracting vs. contracting then expanding. The cross-section size of a "brick" would be constant. Termination of a geon with constant-sized cross-section: truncated vs. converging to a point vs. rounded. These variations in the generating of geons create shapes that differ in NAPs. Experimental tests of the viewpoint invariance of geons There is now considerable support for the major assumptions of geon theory (see Recognition-by-components theory). One issue that generated some discussion was the finding that the geons were viewpoint-invariant with little or no cost in the speed or accuracy of recognizing or matching a geon from an orientation in depth not previously experienced. Some studies reported modest costs in matching geons at new orientations in depth, but these studies had several methodological shortcomings. Research on geons There is a substantial body of research on geons and how they are interpreted. Kim Kirkpatrick-Steger, Edward A. Wasserman and Irving Biederman have found that the individual geons along with their spatial composition are important in recognition. Furthermore, the findings in this research seem to indicate that non-accidental sensitivity can be found in all shape-discriminating species. Notes Vision Perception Spatial cognition
Geon (psychology)
[ "Physics" ]
1,333
[ "Spacetime", "Space", "Spatial cognition" ]
3,127,574
https://en.wikipedia.org/wiki/Outbreeding%20depression
In biology, outbreeding depression happens when crosses between two genetically distant groups or populations result in a reduction of fitness. The concept is in contrast to inbreeding depression, although the two effects can occur simultaneously on different traits. Outbreeding depression is a risk that sometimes limits the potential for genetic rescue or augmentation. It is considered a postzygotic response because outbreeding depression is usually noted in the performance of the progeny. Outbreeding depression manifests in two ways: Generating intermediate genotypes that are less fit than either parental form. For example, selection in one population might favor a large body size, whereas in another population small body size might be more advantageous, while individuals with intermediate body sizes are comparatively disadvantaged in both populations. As another example, in the Tatra Mountains, the introduction of ibex from the Middle East resulted in hybrids which produced calves at the coldest time of the year. Breakdown of biochemical or physiological compatibility. Within isolated breeding populations, alleles are selected in the context of the local genetic background. Because the same alleles may have rather different effects in different genetic backgrounds, this can result in different locally coadapted gene complexes. Outcrossing between individuals with differently adapted gene complexes can result in disruption of this selective advantage, resulting in a loss of fitness. Mechanisms The different mechanisms of outbreeding depression can operate at the same time. However, determining which mechanism is likely to occur in a particular population can be very difficult. There are three main mechanisms for generating outbreeding depression: Fixed chromosomal differences resulting in the partial or complete sterility of F1 hybrids. Adaptive differentiation among populations. Population bottlenecks and genetic drift. Some mechanisms may not appear until two or more generations later (F2 or greater), when recombination has undermined the vitality derived from positive epistasis. Hybrid vigor in the first generation can, in some circumstances, be strong enough to mask the effects of outbreeding depression. An example of this is that plant breeders will make F1 hybrids from purebred strains, which will improve the uniformity and vigor of the offspring; however, the F2 generation is not used for further breeding because of unpredictable phenotypes in its offspring. Unless there is strong selective pressure, outbreeding depression can increase in later generations as coadapted gene complexes are broken apart without the forging of new coadapted gene complexes to take their place. If the outcrossing is limited and populations are large enough, selective pressure acting on each generation can restore fitness. Unless the F1 hybrid generation is sterile or of very low fitness, selection will act in each generation, using the increased diversity to adapt to the environment. This can lead to a recovery of fitness to baseline, and sometimes even greater fitness than the original parental types in that environment. However, as the hybrid population is likely to go through a decline in fitness for a few generations, it will need to persist long enough to allow selection to act before it can rebound. Examples The first mechanism has the greatest effects on fitness for polyploids, an intermediate effect on translocations, and a modest effect on centric fusions and inversions. 
Generally this mechanism will be more prevalent in the first generation (F1) after the initial outcrossing, when most individuals exhibit the intermediate phenotype. Examples of the second mechanism include stickleback fish, which developed benthic and limnetic forms when separated. When crosses occurred between the two forms, there were low spawning rates. However, when the same forms mated with each other and no crossing occurred between lakes, the spawning rates were normal. This pattern has also been studied in Drosophila and leaf beetles, where the F1 and later progeny showed fitness intermediate between the two parents. This circumstance is more likely to happen, and occurs more quickly, with selection than with genetic drift. For the third mechanism, examples include poison dart frogs, anole lizards, and cichlid fish. Selection, rather than genetic drift, seems to be the dominant mechanism for outbreeding depression. Ligers are also an example of outbreeding depression. Although tigers and lions share the same number of chromosomes, their hybrid offspring have genetic abnormalities and the males are often sterile. In plants For plants, outbreeding depression represents a partial crossing barrier. Outbreeding depression is not well understood in angiosperms. In observations of Ipomopsis aggregata over time, in which plants growing 10–100 m apart were crossed, plants that were farther apart spatially showed a higher likelihood of outbreeding depression. Some general takeaways from this were that spatial patterns of selection on plant genotypes vary in scale and pattern, and that outbreeding depression reflects the genetic constitution of "hybrid" progeny and the environments in which the parents and progeny grow. This means that although outbreeding depression cannot yet be predicted in angiosperms, the environment has a role in it. See also Dominance versus overdominance Haldane's rule Heterozygote advantage Inbreeding depression References Breeding Population genetics
Outbreeding depression
[ "Biology" ]
1,042
[ "Behavior", "Breeding", "Reproduction" ]
3,127,858
https://en.wikipedia.org/wiki/Critical%20ionization%20velocity
Critical ionization velocity (CIV), or critical velocity (CV), is the relative velocity between a neutral gas and a plasma (an ionized gas) at which the neutral gas will start to ionize. If more energy is supplied, the velocity of the atoms or molecules will not exceed the critical ionization velocity until the gas becomes almost fully ionized. The phenomenon was predicted by the Swedish engineer and plasma scientist Hannes Alfvén in connection with his model of the origin of the Solar System (1942). At the time, no known mechanism was available to explain the phenomenon, but the theory was subsequently demonstrated in the laboratory. Subsequent research by Brenning and Axnäs (1988) has suggested that a lower hybrid plasma instability is involved in transferring energy from the larger ions to electrons so that they have sufficient energy to ionize. Application of the theory to astronomy through a number of experiments has produced mixed results. Experimental research The Royal Institute of Technology in Stockholm carried out the first laboratory tests, and found that (a) the relative velocity between a plasma and neutral gas could be increased to the critical velocity, but then additional energy put into the system went into ionizing the neutral gas, rather than into increasing the relative velocity, and (b) the critical velocity is roughly independent of the pressure and magnetic field. In 1973, Lars Danielsson published a review of critical ionization velocity, and concluded that the existence of the phenomenon "is proved by sufficient experimental evidence". In 1976, Alfvén reported that "The first observation of the critical velocity effect under cosmic conditions was reported by Manka et al. (1972) from the Moon. When an abandoned lunar excursion module was made to impact on the dark side of the Moon not very far from the terminator, a gas cloud was produced which when it had expanded so that it was hit by the solar wind gave rise to superthermal electrons." In the laboratory, critical ionization velocity has been recognised for some time, and is seen in the penumbra produced by a dense plasma focus device (or plasma gun). Its existence in cosmic plasmas has not been confirmed. In 1986, Gerhard Haerendel suggested that critical velocity ionization may stabilize the plasma flow in a cometary coma. In 1992, E. Golbraikh and M. Filippov argued that critical ionization velocity could play a role in coronal mass ejections and solar flares, and in 1992, Anthony Peratt and Gerrit Verschuur suggested that interstellar neutral hydrogen emissions bore the signature of critical velocity ionization. A 2001 review of the phenomenon by Shu T. Lai reports that ".. laboratory experiments, and computer simulations have all shown CIV as feasible and reasonably understood, although all CIV experiments in space have yielded negative results with perhaps three exceptions". Also in 2001, C. Konz et al. ".. discuss the critical velocity effect as a possible explanation for the observed Hα emission [..] in the Galactic halo near the edges of cold gas clouds of the Magellanic Stream". Theory development Mathematically, the critical ionization velocity of a neutral cloud, that is, the velocity at which the cloud begins to become ionized, is reached when the relative kinetic energy equals the ionization energy: $\tfrac{1}{2} m v_c^2 = e V_{\mathrm{ion}}$, where $eV_{\mathrm{ion}}$ is the ionization energy of the atoms or molecules in the gas cloud ($V_{\mathrm{ion}}$ being the ionization potential), $m$ is their mass, and $v_c$ is the critical velocity. 
The phenomenon is also called critical velocity ionization, and also the critical velocity effect. Alfvén considered a neutral gas cloud entering the Solar System, and noted that a neutral atom will fall towards the Sun under the influence of gravity, and its kinetic energy will increase. If the motion is random, collisions will cause the gas temperature to rise, so that at a certain distance from the Sun, the gas will ionize. Alfvén writes that ionization at the potential of the gas, $V_{\mathrm{ion}}$, occurs when $\dfrac{kMm}{r} = eV_{\mathrm{ion}}$, that is, at a distance of $r_i = \dfrac{kMm}{eV_{\mathrm{ion}}}$ (where $r_i$ is the ion distance from the Sun of mass $M$, $m$ is the atomic mass, $V_{\mathrm{ion}}$ is in volts, and $k$ is the gravitational constant). Then when the gas becomes ionized, electromagnetic forces come into effect, of which the most important is the magnetic force, which is usually greater than the gravitational force and gives rise to a magnetic repulsion from the Sun. In other words, a neutral gas falling from infinity toward the Sun is stopped at a distance $r_i$ where it will accumulate, and perhaps condense into planets. Alfvén found that, taking a gas cloud with an average ionization voltage of 12 V and an average atomic weight of 7, the distance $r_i$ coincides with the orbit of Jupiter. The critical ionization velocity of hydrogen is $50.9 \times 10^5$ cm/s (50.9 km/s), and that of helium is $34.3 \times 10^5$ cm/s (34.3 km/s). Background Alfvén discusses his thoughts behind critical velocity in his NASA publication Evolution of the Solar System. After criticising the "Inadequacy of the Homogeneous Disc Theory", he writes: ".. it is more attractive to turn to the alternative that the secondary bodies derive from matter falling in from "infinity" (a distance large compared to the satellite orbit). This matter (after being stopped and given sufficient angular momentum) accumulates at specific distances from the central body. Such a process may take place when atoms or molecules in free fall reach a kinetic energy equal to their ionization energy. At this stage, the gas can become ionized by the process discussed in sec. 21.4; the ionized gas can then be stopped by the magnetic field of the central body and receive angular momentum by transfer from the central body as described in sec. 16.3.". Notes Other references Brenning, N., A comparison between laboratory and space experiments on Alfven's CIV effect, IEEE Transactions on Plasma Science (ISSN 0093-3813), vol. 20, no. 6, pp. 778–786 (1996). Review of the CIV phenomenon, Space Science Reviews (ISSN 0038-6308), vol. 59, Feb. 1992, pp. 209–314 (1992). Limits on the magnetic field strength for critical ionization velocity interaction, Physics of Fluids, November 1985, Volume 28, Issue 11, pp. 3424–3426. Astrophysics Plasma theory and modeling
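The defining relation above is easy to check numerically against the quoted values for hydrogen and helium. A short Python sketch, using standard values for the physical constants and textbook ionization potentials:

```python
from math import sqrt

E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU      = 1.66053906660e-27  # atomic mass unit, kg

def critical_velocity_kms(v_ion_volts, mass_amu):
    """Solve (1/2) m v_c^2 = e V_ion for v_c, returned in km/s."""
    return sqrt(2.0 * E_CHARGE * v_ion_volts / (mass_amu * AMU)) / 1e3

print(f"H : {critical_velocity_kms(13.598, 1.008):.1f} km/s")   # ~51.0 (article quotes 50.9)
print(f"He: {critical_velocity_kms(24.587, 4.0026):.1f} km/s")  # ~34.4 (article quotes 34.3)
```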
Critical ionization velocity
[ "Physics", "Astronomy" ]
1,333
[ "Plasma theory and modeling", "Astronomical sub-disciplines", "Astrophysics", "Plasma physics" ]
3,128,229
https://en.wikipedia.org/wiki/Shivdaspur
Shivdaspur is a census town and a red-light district in Varanasi in eastern Uttar Pradesh in India. It lies on the periphery of Varanasi city, adjoining Lahartara and Manduadih. Demographics As of the 2001 Census of India, Shivdaspur had a population of 11,432. Males constitute 54% of the population and females 46%. Shivdaspur has an average literacy rate of 59%, lower than the national average of 59.5%: male literacy is 68%, and female literacy is 48%. 14% of the population is under 6 years of age. Red-light district In the 1970s the red-light district in Dalamandi, near the Kashi Vishwanath Temple, was closed and the sex workers moved to Shivdaspur. The area has declined as the number of customers has fallen, partly because of the fear of HIV. There have been calls from locals and politicians for the area to be closed. See also Prostitution in India Prostitution in Asia Prostitution in Kolkata Prostitution in Mumbai Sonagachi All Bengal Women's Union Durbar Mahila Samanwaya Committee Male prostitution References Census towns in Varanasi district Cities and towns in Varanasi district Prostitution in India Neighbourhoods in Varanasi Red-light districts in India
Shivdaspur
[ "Biology" ]
269
[ "Behavior", "Sexuality stubs", "Sexuality" ]
3,128,547
https://en.wikipedia.org/wiki/Monkey%20testing
In software testing, monkey testing is a technique where the user tests the application or system by providing random inputs and checking the behavior, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automated unit tests. While the source of the name "monkey" is uncertain, it is believed by some that the name has to do with the infinite monkey theorem, which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. Others believe that the name comes from the classic Mac OS application "The Monkey", developed by Steve Capps prior to 1983. It used journaling hooks to feed random events into Mac programs, and was used to test for bugs in MacPaint. Monkey testing is also included in Android Studio as part of the standard tools for stress testing. Types of monkey testing Monkey testing can be categorized into smart monkey tests or dumb monkey tests. Smart monkey tests Smart monkeys are usually identified by the following characteristics: Have a basic idea of the application or system. Know their own location, where they can go and where they have been. Know their own capabilities and the system's capabilities. Focus on breaking the system. Report the bugs they find. Some smart monkeys are also referred to as brilliant monkeys, which perform testing in line with the user's behavior and can estimate the probability of certain bugs. Dumb monkey tests Dumb monkeys, also known as "ignorant monkeys", are usually identified by the following characteristics: Have no knowledge about the application or system. Don't know if their input or behavior is valid or invalid. Don't know their own or the system's capabilities, nor the flow of the application. Can find fewer bugs than smart monkeys, but can also find important bugs that are hard for smart monkeys to catch. Advantages and disadvantages Advantages Monkey testing is an effective way to identify errors outside of expected scenarios. Since the scenarios tested are usually ad hoc, monkey testing can also be a good way to perform load and stress testing. The intrinsic randomness of monkey testing also makes it a good way to find major bugs that can break the entire system. The setup of monkey testing is easy, and therefore suitable for any application. Smart monkeys, if properly set up with an accurate state model, can be very effective at finding various kinds of bugs. Disadvantages The randomness of monkey testing often makes the bugs found difficult or impossible to reproduce. Unexpected bugs found by monkey testing can also be challenging and time-consuming to analyze. In some systems, monkey testing can go on for a long time before finding a bug. For smart monkeys, the ability to find bugs depends strongly on the state model provided, and developing a good state model can be expensive. Similar techniques and distinctions While monkey testing is sometimes treated the same as fuzz testing and the two terms are usually used together, some believe they are different, arguing that monkey testing is more about random actions while fuzz testing is more about random data input. Monkey testing is also different from ad-hoc testing in that ad-hoc testing is performed without planning and documentation, and its objective is to divide the system randomly into subparts and check their functionality, which is not the case in monkey testing. See also Random testing References Software testing
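As an illustration of a "dumb monkey", the following Python sketch feeds random character strings to a toy parser. The function under test and the input alphabet are invented for this example; treating plain input-rejection errors as acceptable while recording everything else mirrors the triage described above, and the fixed random seed is one way to mitigate the reproducibility problem noted under Disadvantages.

```python
import random

def parse_fraction(text):
    """Toy function under test (hypothetical): '3/4' -> 0.75."""
    num, den = text.split("/")
    return float(num) / float(den)

def dumb_monkey(fn, trials=10_000, seed=1):
    """Dumb monkey: random inputs, no knowledge of what is valid.
    Records any crash that is not a plain ValueError rejection."""
    rng = random.Random(seed)  # fixed seed keeps failures reproducible
    alphabet = "0123456789/. "
    failures = []
    for _ in range(trials):
        junk = "".join(rng.choice(alphabet)
                       for _ in range(rng.randrange(1, 8)))
        try:
            fn(junk)
        except ValueError:
            pass                  # input rejected: acceptable behavior
        except Exception as exc:  # unexpected crash: a bug to report
            failures.append((junk, exc))
    return failures

bugs = dumb_monkey(parse_fraction)
print(len(bugs), "crashing inputs; first few:", bugs[:3])
# Typically finds inputs like '3/0' that raise ZeroDivisionError.
```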
Monkey testing
[ "Engineering" ]
669
[ "Software engineering", "Software testing" ]
3,128,974
https://en.wikipedia.org/wiki/Hydropneumatic%20device
Hydropneumatic devices (or hydro-pneumatic devices) are systems that operate using water and gas. The devices are used in various applications. Description A hydropneumatic device is a tool that functions using water and gas. Hydropneumatic refers to the pneumatic (gas) and hydraulic (water) components needed for the operation of the devices. Hydropneumatic accumulators or pulsation dampeners are devices which prevent the creation of a shock wave in the first place, rather than absorbing, alleviating, arresting, attenuating, or suppressing a shock that already exists. These include pulsation dampeners, hydropneumatic accumulators, water hammer preventers, water hammer arrestors, and other devices. Devices Hydropneumatic suspension Hydropneumatic lock Hydropneumatic recoil mechanism Hydropneumatic water hammer preventers Hydropneumatic water hammer preventers are chambers of sufficient volume to allow an extension of the time in which a given flow may be accelerated or decelerated without a sudden large change in pressure. See also expansion tank. When shock waves of an incompressible fluid exist within a piping system, especially at high velocity, there is a high chance of water hammer. To help prevent a swing check valve from slamming and causing water hammer, a spring-assisted non-slam check valve is installed. Rather than relying on flow or gravity to close, the non-slam design prevents a sudden velocity decrease and reverse flow. The hydropneumatic water hammer preventer chamber is generally adapted to contain a separator member which prevents the escape of a pre-filled compressed inert gas. They may be: placed closely before a valve that is closed quickly, stopping water hammering; placed immediately after the discharge of a pump that is started quickly into a pipe full of a long column of liquid, reducing start-up surge pressure; or placed immediately after a pump which, when caused to stop suddenly, would otherwise allow a vacuum to form that pulls the flow back towards the pump, preventing an implosion bang. Variations on the design include: a separator membrane into the interior of which the liquid is communicated, used for corrosive liquids so that the chamber metal can be of low cost; a metal bellows separator membrane, for use at temperatures lower and higher than are compatible with an elastomeric or plastomeric membrane; and a float separator to reduce the rate of gas absorption at the liquid interface, typically used in vessel chambers larger than 500 gallons. Hydropneumatic pump controllers Hydropneumatic pump controllers provide a means of control for multiple fixed-delivery-volume, low-cost, low-complexity pumps, providing variable flow as required in response to a small (say +/- 10 psi) pressure increase or decrease of the system, and a means of control for pump unloading/recirculation against no pressure, without electric pressure switches. The controllers are pressure cylinders containing a movable separator member between a gas and a liquid, the movable member causing the actuation of a directional control valve or valves. The controllers are used in a circuit after a pump that is followed by a valved side branch, and beyond a check valve, so that the device can only discharge liquid volume upon a pressure fall of the system. Variations on the design include: a protruding drive rod with cams that trip valve handles; magnetically actuated reed switches; and infrared signaling of the separator position. 
Hydropneumatic pulsation filters Hydropneumatic pulsation filters provide a means of reducing the amplitude of pressure changes, which travel at a velocity of the order of 1.4 km/s. All are used in industry. A hydropneumatic pulsation filter is a pressure container with separate inlet and outlet, connectable to a pipe system so that all pressure changes must attempt to pass through the chamber; the entry and exit of the chamber are of a diameter, relative to the chamber diameter, that provides a high discharge coefficient, without close proximity of any reflective surface, and with no sudden change in the cross-sectional area of the flow path that would reflect a pressure wave, i.e. no orifice plate(s). Variations include combination "dual purpose" devices addressing "acceleration head reduction" by means of a gas containment. The devices have applications by frequency response: for pulsation above 100 Hz (i.e., for high-speed pumps and all pipe systems shorter than, say, 80 yards), no-moving-parts devices; for pulsation frequencies below 100 Hz, certain moving-parts devices of known membrane response characteristics. Hydropneumatic acceleration head reducers Hydropneumatic acceleration head reducers minimize the mass of liquid that has to be accelerated when the flow velocity changes. Within a piping system, pressure rises when additional fluid volume is suddenly introduced. This acceleration head needs to be reduced to prevent damage to pump components and excessive noise. These devices are typically mountable in any orientation, such that the device is connectable directly to the suction check valve beneath the pump or directly to any vertical or horizontal discharge check valve, minimizing the length of any liquid column mass that will experience velocity change; the pump connection is separate from the system connection so that no acceleration head changes occur due to reciprocation within one port. Applications for hydropneumatic acceleration head reducers include: reduction in the drive energy required by any pump; reduction in pipe diameter and schedule (wall thickness) costs of any pipe system; decrease in fatigue and increase in safety of all pressure piping systems; increase in accuracy and automatability of all pressure and flow control instruments; increase in rotating equipment life and MTBF; and reduction in service downtime. Variations on the design include: for chemicals and process pump systems, PTFE membranes; for sludges and slurries, a clear unobstructed flow path direct from inlet to outlet; for general purposes, an elastomeric bladder separator. Pulsation dampeners Misuse of the term Some manufacturers of pulsation dampeners provide items which do not dampen pulsations. The compressibility of a gas, often nitrogen because it is inert at normal temperatures, stores any sudden volume change. Storing the sudden volume change lets the volume change against a soft gas cushion, without the need to accelerate all the existing liquid in the system out of the way of the new volume coming from a pump. Therefore, as all the volume in the system does not have to be suddenly accelerated, the cushion prevents the "acceleration head" (force) from having to be generated. The pressure pulse is accordingly not generated in the first place, so it is not dampened at all. What such manufacturers are providing are liquid accumulators, not items which remove energy. 
Hydropneumatic accumulators Gas-cushion (spring) pre-filled accumulators of liquids are called hydropneumatic accumulators: "hydro" because a liquid (like water) is involved; "pneumatic" because a gas (like air) is involved; "accumulator" because the purpose is to store or accumulate liquid volume by easy compression of the gas. These devices are typified by having only one liquid connection, which goes to a "T" on the system. Non-hydropneumatic There are other forms of accumulator used for fluid power hydraulic purposes, for example a coil spring plus sealed piston, though these are less popular. Therefore, a hydraulic accumulator is not necessarily a hydropneumatic accumulator. References Hydraulics
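The gas-cushion behaviour described above can be sketched with the isothermal ideal-gas relation $P_1 V_1 = P_2 V_2$ (Boyle's law): as system pressure rises, the pre-charged gas shrinks and the freed volume stores liquid. A minimal Python sketch with invented numbers; real accumulator sizing must also account for real-gas and adiabatic effects.

```python
def gas_volume_l(v_precharge_l, p_precharge_bar, p_system_bar):
    """Isothermal compression of the gas cushion: P1*V1 = P2*V2.
    Pressures are absolute; returns the gas volume at system pressure."""
    return v_precharge_l * p_precharge_bar / p_system_bar

V0, P0 = 10.0, 50.0  # 10 L nitrogen pre-charge at 50 bar (illustrative)
for p in (60.0, 80.0, 100.0):
    v_gas = gas_volume_l(V0, P0, p)
    print(f"{p:5.1f} bar: gas {v_gas:5.2f} L, liquid stored {V0 - v_gas:5.2f} L")
```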
Hydropneumatic device
[ "Physics", "Chemistry" ]
1,570
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
3,129,081
https://en.wikipedia.org/wiki/Isovanillin
Isovanillin is a phenolic aldehyde, an organic compound and isomer of vanillin. It is a selective inhibitor of aldehyde oxidase. It is not a substrate of that enzyme, and is metabolized by aldehyde dehydrogenase into isovanillic acid, which could make it a candidate drug for use in alcohol aversion therapy. Isovanillin can be used as a precursor in the chemical total synthesis of morphine. The proposed metabolism of isovanillin (and vanillin) in the rat has been described in the literature, and is part of the WikiPathways machine-readable pathway collection. See also Vanillin 2-Hydroxy-5-methoxybenzaldehyde ortho-Vanillin 2-Hydroxy-4-methoxybenzaldehyde References Hydroxybenzaldehydes Flavors Perfume ingredients Phenol ethers
Isovanillin
[ "Chemistry" ]
193
[]
3,129,323
https://en.wikipedia.org/wiki/Opposite%20ring
In mathematics, specifically abstract algebra, the opposite of a ring is another ring with the same elements and addition operation, but with the multiplication performed in the reverse order. More explicitly, the opposite of a ring $(R, +, \cdot)$ is the ring $(R, +, *)$ whose multiplication $*$ is defined by $a * b = b \cdot a$ for all $a, b$ in R. The opposite ring can be used to define multimodules, a generalization of bimodules. They also help clarify the relationship between left and right modules (see the properties below). Monoids, groups, rings, and algebras can all be viewed as categories with a single object. The construction of the opposite category generalizes the opposite group, opposite ring, etc. Relation to automorphisms and antiautomorphisms In this section the symbol for multiplication in the opposite ring is changed from asterisk to diamond, to avoid confusing it with some unary operations. A ring $R$ is called a self-opposite ring if it is isomorphic to its opposite ring $R^{op}$, which name indicates that $R^{op}$ is essentially the same as $R$. All commutative rings are self-opposite. Let us define the antiisomorphism $\iota\colon R \to R^{op}$, where $\iota(x) = x$ for $x \in R$. It is indeed an antiisomorphism, since $\iota(x \cdot y) = x \cdot y = \iota(y) \diamond \iota(x)$. The antiisomorphism can be defined generally for semigroups, monoids, groups, rings, rngs, algebras. In the case of rings (and rngs) we obtain the general equivalence: a ring is self-opposite if and only if it has at least one antiautomorphism. Proof: $\Rightarrow$: Let $R$ be self-opposite. If $f\colon R \to R^{op}$ is an isomorphism, then $\iota^{-1} \circ f$, being a composition of an isomorphism and an antiisomorphism, is an antiisomorphism from $R$ to itself, hence an antiautomorphism. $\Leftarrow$: If $g$ is an antiautomorphism of $R$, then $\iota \circ g$ is an isomorphism from $R$ to $R^{op}$, as a composition of two antiisomorphisms. So $R$ is self-opposite. Moreover: if $R$ is self-opposite and the group of automorphisms $\operatorname{Aut}(R)$ is finite, then the number of antiautomorphisms equals the number of automorphisms. Proof: By the assumption and the above equivalence there exist antiautomorphisms. If we pick one of them and denote it by $g$, then the map $f \mapsto g \circ f$, where $f$ runs over $\operatorname{Aut}(R)$, is clearly injective but also surjective onto the set of antiautomorphisms, since each antiautomorphism $h$ equals $g \circ (g^{-1} \circ h)$ for some automorphism $g^{-1} \circ h$. It can be proven in a similar way, that under the same assumptions the number of isomorphisms from $R$ to $R^{op}$ equals the number of antiautomorphisms of $R$. If some antiautomorphism $g$ is also an automorphism, then for each $x, y \in R$ both $g(xy) = g(y)g(x)$ and $g(xy) = g(x)g(y)$ hold. Since $g$ is bijective, $ab = ba$ for all $a$ and $b$, so the ring is commutative and all antiautomorphisms are automorphisms. By contraposition, if a ring is noncommutative (and self-opposite), then no antiautomorphism is an automorphism. Denote by $\operatorname{Aut}^*(R)$ the group of all automorphisms together with all antiautomorphisms. The above remarks imply that $|\operatorname{Aut}^*(R)| = 2\,|\operatorname{Aut}(R)|$ if a ring (or rng) is noncommutative and self-opposite. If it is commutative or non-self-opposite, then $\operatorname{Aut}^*(R) = \operatorname{Aut}(R)$. Examples The smallest noncommutative ring with unity The smallest such ring $R$ has eight elements and it is the only noncommutative ring among the 11 rings with unity of order 8, up to isomorphism. It has the additive group $(\mathbb{Z}_2)^3$. Obviously $R$ is antiisomorphic to $R^{op}$, as is always the case, but it is also isomorphic to $R^{op}$. Below are the tables of addition and multiplication in $R$, and multiplication in the opposite ring, which is a transposed table. To prove that the two rings are isomorphic, take the map $f$ given by a table; the map swaps elements in only two pairs. Rename accordingly the elements in the multiplication table for $R^{op}$ (arguments and values). Next, rearrange rows and columns to bring the arguments back to ascending order. The table becomes exactly the multiplication table of $R$. 
Similar changes in the table of the additive group yield the same table, so $f$ is an automorphism of this group, and since $f(x \cdot y) = f(x) \diamond f(y)$, it is indeed a ring isomorphism. The map $f$ is involutory, i.e. $f \circ f = \operatorname{id}$, so $f^{-1} = f$ and it is an isomorphism from $R^{op}$ to $R$ equally well. So, the permutation can be reinterpreted to define an isomorphism $R^{op} \to R$, and then its composition with $\iota$ is an antiautomorphism of $R$ given by the same permutation. The ring has exactly two automorphisms, the identity and one other; that is, $|\operatorname{Aut}(R)| = 2$. So its full group $\operatorname{Aut}^*(R)$ has four elements, two of them antiautomorphisms. One is the antiautomorphism found above, and the second can be calculated as its composition with the nontrivial automorphism. There is no element of order 4, so the group is not cyclic and must be the Klein four-group $\mathbb{Z}_2 \times \mathbb{Z}_2$, which can be confirmed by calculation. The "symmetry group" of this ring is isomorphic to the symmetry group of a rectangle. Noncommutative ring with 27 elements The ring $T$ of the upper triangular $2 \times 2$ matrices over the field with 3 elements has 27 elements and is a noncommutative ring. It is unique up to isomorphism; that is, all noncommutative rings with unity and 27 elements are isomorphic to it. The largest noncommutative ring listed in the "Book of the Rings" has 27 elements, and is also isomorphic. In this section the notation from "The Book" for the elements of $T$ is used. Two things should be kept in mind: that a particular element of the listing is the unity of $T$, and that another listed element is not the unity. The additive group of $T$ is $(\mathbb{Z}_3)^3$. The group of all automorphisms $\operatorname{Aut}(T)$ has 6 elements. Since $T$ is self-opposite, it has also 6 antiautomorphisms. One isomorphism $T \to T^{op}$ is given by a suitable permutation of the elements, which can be verified using the tables of operations in "The Book", as in the first example, by renaming and rearranging. This time the changes should be made in the original tables of operations of $T$. The result is the multiplication table of $T^{op}$, and the addition table remains unchanged. Thus, one antiautomorphism is given by the same permutation. The other five can be calculated (in the multiplicative notation the composition symbol can be dropped). The group $\operatorname{Aut}^*(T)$ has 7 elements of order 2 (3 automorphisms and 4 antiautomorphisms) and can be identified as the dihedral group of order 12 (see List of small groups). In geometric analogy, the ring $T$ has a "symmetry group" isomorphic to the symmetry group of the 3-antiprism, which is the point group $D_{3d}$ in Schoenflies notation ($\bar{3}m$ in short Hermann–Mauguin notation) for 3-dimensional space. The smallest non-self-opposite rings with unity All the rings with unity of orders ranging from 9 up to 15 are commutative, so they are self-opposite. Rings that are not self-opposite appear for the first time among the rings of order 16. There are 4 different non-self-opposite rings out of the total number of 50 rings with unity having 16 elements (37 commutative and 13 noncommutative). They can be coupled in two pairs of rings opposite to each other within a pair, and necessarily with the same additive group, since an antiisomorphism of rings is an isomorphism of their additive groups. One pair of rings shares one additive group, and the other pair another. Their tables of operations are not presented in this article, as they can be found in the source cited, and it can be verified that the rings within each pair are opposite to each other but not isomorphic. The same is true for the other pair; however, the ring listed in "The Book of the Rings" is not equal but only isomorphic to the opposite of its partner. The remaining noncommutative rings are self-opposite. Free algebra with two generators The free algebra $k\langle x, y \rangle$ over a field $k$ with generators $x, y$ has multiplication given by the concatenation of words. 
For example, $x \cdot y = xy$. Then the opposite algebra has multiplication given by $x * y = y \cdot x = yx$, and $xy$ and $yx$ are not equal elements. Quaternion algebra The quaternion algebra $H(a,b)$ over a field $F$ (with nonzero $a, b$) is a division algebra defined by three generators $i, j, k$ with the relations $i^2 = a$, $j^2 = b$, $ij = k$ and $ji = -k$. All elements are of the form $x = x_0 + x_1 i + x_2 j + x_3 k$, where $x_0, x_1, x_2, x_3 \in F$. For example, if $F = \mathbb{R}$ and $a = b = -1$, then $H(-1,-1)$ is the usual quaternion algebra. If the multiplication of $H(a,b)$ is denoted $\cdot$, it has the multiplication table {| class="wikitable" style="text-align: center" |+ ! $\cdot$ ! $i$ ! $j$ ! $k$ |- ! $i$ | $a$ | $k$ | $aj$ |- ! $j$ | $-k$ | $b$ | $-bi$ |- ! $k$ | $-aj$ | $bi$ | $-ab$ |} Then the opposite algebra $H(a,b)^{op}$ with multiplication denoted $*$ has the table {| class="wikitable" style="text-align: center" |+ ! $*$ ! $i$ ! $j$ ! $k$ |- ! $i$ | $a$ | $-k$ | $-aj$ |- ! $j$ | $k$ | $b$ | $bi$ |- ! $k$ | $aj$ | $-bi$ | $-ab$ |} Commutative ring A commutative ring $(R, +, \cdot)$ is isomorphic to its opposite ring since $ab = ba$ for all $a$ and $b$ in $R$. They are even equal, $R = R^{op}$, since their operations are equal, i.e. $a * b = b \cdot a = a \cdot b$. Properties Two rings R1 and R2 are isomorphic if and only if their corresponding opposite rings are isomorphic. The opposite of the opposite of a ring $R$ is identical with $R$, that is, $(R^{op})^{op} = R$. A ring and its opposite ring are anti-isomorphic. A ring is commutative if and only if its operation coincides with its opposite operation. The left ideals of a ring are the right ideals of its opposite. The opposite ring of a division ring is a division ring. A left module over a ring is a right module over its opposite, and vice versa. Notes Citations References See also Opposite group Opposite category Ring theory
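The renaming-and-rearranging argument used in the eight-element example can be automated. The following Python sketch (an illustration, not from the article's sources) builds the smallest noncommutative unital ring concretely as upper triangular 2 × 2 matrices over the field with two elements, and brute-forces a bijection that preserves addition and reverses multiplication, i.e. an antiautomorphism, confirming that the ring is self-opposite.

```python
from itertools import permutations, product

# Encode [[a, b], [0, d]] over F2 as the triple (a, b, d): 8 elements.
R = [(a, b, d) for a in (0, 1) for b in (0, 1) for d in (0, 1)]

def add(x, y):
    return tuple((u + v) % 2 for u, v in zip(x, y))

def mul(x, y):  # product of upper triangular matrices, entries mod 2
    a1, b1, d1 = x
    a2, b2, d2 = y
    return (a1 * a2 % 2, (a1 * b2 + b1 * d2) % 2, d1 * d2 % 2)

# Noncommutativity: the multiplication and its opposite differ somewhere.
print(any(mul(x, y) != mul(y, x) for x, y in product(R, R)))  # True

# Search for f with f(x+y) = f(x)+f(y) and f(x*y) = f(y)*f(x); such an
# f is exactly an antiautomorphism, so finding one shows R is
# self-opposite. Brute force over 8! = 40320 bijections runs in seconds.
for perm in permutations(R):
    f = dict(zip(R, perm))
    if all(f[add(x, y)] == add(f[x], f[y]) and
           f[mul(x, y)] == mul(f[y], f[x])
           for x, y in product(R, R)):
        print("antiautomorphism found:", sorted(f.items()))
        break
```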
Opposite ring
[ "Mathematics" ]
1,941
[ "Fields of abstract algebra", "Ring theory" ]
3,129,341
https://en.wikipedia.org/wiki/Quasi-Lie%20algebra
In mathematics, a quasi-Lie algebra in abstract algebra is just like a Lie algebra, but with the usual axiom $[x,x] = 0$ replaced by $[x,y] = -[y,x]$ (anti-symmetry). In characteristic other than 2, these are equivalent (in the presence of bilinearity), so this distinction doesn't arise when considering real or complex Lie algebras. It can however become important, when considering Lie algebras over the integers. In a quasi-Lie algebra, setting $y = x$ in the anti-symmetry axiom gives $[x,x] = -[x,x]$, i.e. $2[x,x] = 0$. Therefore, the bracket of any element with itself is 2-torsion, if it does not actually vanish. See also Whitehead product References Lie algebras
Quasi-Lie algebra
[ "Mathematics" ]
122
[]
3,129,368
https://en.wikipedia.org/wiki/Far-infrared%20laser
Far-infrared laser or terahertz laser (FIR laser, THz laser) is a laser with an output wavelength between 30 and 1000 μm (frequency 0.3–10 THz), in the far-infrared or terahertz band of the electromagnetic spectrum. FIR lasers have applications in terahertz spectroscopy and terahertz imaging, as well as in fusion plasma physics diagnostics. They can be used to detect explosives and chemical warfare agents by means of infrared spectroscopy, or to evaluate plasma densities by means of interferometry techniques. FIR lasers typically consist of a long (1–3 m) waveguide filled with gaseous organic molecules, pumped optically or via a high-voltage discharge. They are highly inefficient, often require helium cooling and/or high magnetic fields, and are often only line-tunable. Efforts to develop smaller solid-state alternatives are under way. The p-Ge (p-type germanium) laser is a tunable, solid-state, far-infrared laser which has existed for over 25 years. It operates in crossed electric and magnetic fields at liquid-helium temperatures. Wavelength selection can be achieved by changing the applied electric and magnetic fields or through the introduction of intracavity elements. The quantum cascade laser (QCL) is one such alternative: a solid-state semiconductor laser that can operate continuously with an output power of over 100 mW at a wavelength of 9.5 μm. A prototype has already been demonstrated, and its potential use shown. A molecular FIR laser optically pumped by a QCL was demonstrated in 2016. It operates at room temperature and is smaller than molecular FIR lasers optically pumped by CO2 lasers. Free-electron lasers can also operate at far-infrared wavelengths. Femtosecond Ti:sapphire mode-locked lasers are also used to generate very short pulses that can be optically rectified to produce a terahertz pulse. See also Laser Xaser Infrared laser Terahertz radiation References Terahertz technology Laser types
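The quoted band limits are just the wavelength-to-frequency conversion λ = c/f; a quick numerical check in Python:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def terahertz_from_micrometres(wavelength_um):
    """Frequency in THz for a vacuum wavelength given in micrometres."""
    return C / (wavelength_um * 1e-6) / 1e12

for wl in (30, 1000):
    print(f"{wl:4d} um -> {terahertz_from_micrometres(wl):5.2f} THz")
# 30 um -> 9.99 THz and 1000 um -> 0.30 THz: the 0.3-10 THz band quoted above.
```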
Far-infrared laser
[ "Physics" ]
418
[ "Spectrum (physical sciences)", "Electromagnetic spectrum", "Terahertz technology" ]
3,129,901
https://en.wikipedia.org/wiki/Stacking%20%28chemistry%29
In chemistry, stacking refers to the superposition of molecules or atomic sheets owing to attractive interactions between these molecules or sheets. Metal dichalcogenide compounds Metal dichalcogenides have the formula ME2, where M = a transition metal and E = S, Se, Te. In terms of their electronic structures, these compounds are usually viewed as derivatives of M4+. They adopt stacked structures, which is relevant to their ability to undergo intercalation, e.g. by lithium, and to their lubricating properties. The corresponding diselenides and even ditellurides are known, e.g., TiSe2, MoSe2, and WSe2. Charge transfer salts A combination of tetracyanoquinodimethane (TCNQ) and tetrathiafulvalene (TTF) forms a strong charge-transfer complex referred to as TTF-TCNQ. The solid shows almost metallic electrical conductance. In a TTF-TCNQ crystal, TTF and TCNQ molecules are arranged independently in separate parallel-aligned stacks, and an electron transfer occurs from donor (TTF) to acceptor (TCNQ) stacks. Graphite Graphite consists of stacked sheets of covalently bonded carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms, forming a continuous layer of sp2-bonded carbon hexagons, like a honeycomb lattice, with a bond length of 0.142 nm; the distance between planes is 0.335 nm. Bonding between layers occurs via relatively weak van der Waals interactions, which allows the graphene-like layers to be easily separated and to glide past each other. Electrical conductivity perpendicular to the layers is consequently about 1000 times lower. Linear chain compounds Linear chain compounds are materials composed of stacked arrays of metal-metal bonded molecules or ions. Such materials exhibit anisotropic electrical conductivity. One example is an acetylacetonate complex (acac = acetylacetonate), whose molecules stack with distances of about 326 pm. Classic examples include Krogmann's salt and Magnus's green salt. Counterexample: benzene dimer and related species π–π stacking is a noncovalent interaction between the pi bonds of aromatic rings. Such "sandwich interactions" are, however, generally electrostatically repulsive. What is more commonly observed is either a staggered stacking (parallel-displaced) or a pi-teeing (perpendicular, T-shaped) interaction, both of which are electrostatically attractive. For example, the most commonly observed interaction between aromatic rings of amino acid residues in proteins is staggered stacking, followed by a perpendicular orientation. Sandwiched orientations are relatively rare. Pi stacking is repulsive because it places carbon atoms with partial negative charges from one ring on top of other partially negatively charged carbon atoms from the second ring, and hydrogen atoms with partial positive charges on top of other hydrogen atoms that likewise carry partial positive charges. π–π interactions play a role in supramolecular chemistry, specifically in the synthesis of catenanes. The major challenge for the synthesis of catenanes is to interlock molecules in a controlled fashion. Attractive π–π interactions exist between electron-rich benzene derivatives and electron-poor pyridinium rings. [2]Catenane was synthesized by treating bis(pyridinium) (A), bisparaphenylene-34-crown-10 (B), and 1,4-bis(bromomethyl)benzene (C) (Fig. 2). The π–π interaction between A and B directed the formation of an interlocked template intermediate that was further cyclized by a substitution reaction with compound C to generate the [2]catenane product. 
See also Noncovalent interaction Dispersion (chemistry) Cation–pi interaction Intercalation (biochemistry) Intercalation (chemistry) References External links Larry Wolf (2011): π-π (π-Stacking) interactions: origin and modulation Organic chemistry Chemical bonding Supramolecular chemistry
Stacking (chemistry)
[ "Physics", "Chemistry", "Materials_science" ]
837
[ "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Supramolecular chemistry" ]
3,129,916
https://en.wikipedia.org/wiki/Quantum%20defect
The term quantum defect refers to two concepts: energy loss in lasers and energy levels in alkali elements. Both deal with quantum systems where matter interacts with light. In laser science In laser science, the term quantum defect refers to the fact that the energy of a pump photon is generally higher than that of a signal photon (photon of the output radiation). The energy difference is lost to heat, which may carry away the excess entropy delivered by the multimode incoherent pump. The quantum defect of a laser can be defined as the part of the energy of the pumping photon which is lost (not turned into photons at the lasing wavelength) in the gain medium during lasing. At a given pump frequency ν_pump and lasing frequency ν_signal, the quantum defect is h ν_pump − h ν_signal. Such a quantum defect has dimensions of energy; for efficient operation, the temperature of the gain medium (measured in units of energy) should be small compared to the quantum defect. The quantum defect may also be defined as follows: at a given pump frequency ν_pump and lasing frequency ν_signal, the quantum defect is q = 1 − ν_signal/ν_pump; according to this definition, the quantum defect is dimensionless. At a fixed pump frequency, the higher the quantum defect, the lower is the upper bound for the power efficiency. In hydrogenic atoms The quantum defect of an alkali atom refers to a correction to the energy levels predicted by the classic calculation of the hydrogen wavefunction. A simple model of the potential experienced by the single valence electron of an alkali atom is that the ionic core acts as a point charge with effective charge e and the wavefunctions are hydrogenic. However, the structure of the ionic core alters the potential at small radii. The 1/r potential in the hydrogen atom leads to an electron binding energy given by E_n = −hcR/n^2, where R is the Rydberg constant, h is the Planck constant, c is the speed of light and n is the principal quantum number. For alkali atoms with small orbital angular momentum, the wavefunction of the valence electron is non-negligible in the ion core, where the screened Coulomb potential with an effective charge of e no longer describes the potential. The spectrum is still described well by the Rydberg formula with an angular-momentum-dependent quantum defect δ_l: E_{n,l} = −hcR/(n − δ_l)^2. The largest shifts occur when the orbital angular momentum is zero (normally labeled 's'); for the alkali metals the s-state quantum defect grows with atomic number, from roughly 0.4 for lithium to roughly 4.0 for caesium. See also External quantum efficiency Quantum efficiency of a solar cell References Atoms Laser science
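For readers who want to experiment with the two definitions above, here is a minimal Python sketch (an editorial illustration, not part of the article; the defect values used are standard textbook figures):
RYDBERG_EV = 13.605693  # hcR, the Rydberg unit of energy, in electronvolts

def binding_energy(n, delta_l=0.0):
    # Modified Rydberg formula: E = -hcR / (n - delta_l)^2, in eV
    n_eff = n - delta_l  # effective principal quantum number
    return -RYDBERG_EV / n_eff ** 2

def laser_quantum_defect(lambda_pump, lambda_signal):
    # Dimensionless laser quantum defect q = 1 - nu_signal/nu_pump,
    # which equals 1 - lambda_pump/lambda_signal
    return 1.0 - lambda_pump / lambda_signal

# Sodium 3s level with an assumed s-state defect of about 1.35:
print(binding_energy(3, 1.35))             # about -5.0 eV, near sodium's ionization energy
# Yb-doped fiber laser pumped at 976 nm, lasing at 1030 nm:
print(laser_quantum_defect(976.0, 1030.0)) # about 0.05, i.e. ~5% of the pump energy lost to heat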
Quantum defect
[ "Physics" ]
503
[ "Atoms", "Matter" ]
3,130,291
https://en.wikipedia.org/wiki/Pot%20metal
Pot metal (or monkey metal) is an alloy of low-melting-point metals that manufacturers use to make fast, inexpensive castings. The term "pot metal" came about because of automobile factories' practice in the early 20th century of gathering up non-ferrous metal scraps from the manufacturing processes and melting them in one pot to form cast products. Small amounts of iron often made it into the castings, but never in significant quantity, because too much iron would raise the melting point too high for simple casting operations. In stained glass, "pot metal" or pot metal glass refers to glass coloured with metal oxides while it is molten (in a pot), as opposed to other methods of colouring glass in sheet form. Metallurgy There is no metallurgical standard for pot metal. Common metals in pot metal include zinc, lead, copper, tin, magnesium, aluminium, iron, and cadmium. The primary advantage of pot metal is that it is quick and easy to cast. Because of its low melting temperature, it requires no sophisticated foundry equipment or specialized molds. Manufacturers sometimes use it to experiment with molds and ideas (e.g., prototypes) before casting final products in a higher-quality alloy. Depending on the exact metals "thrown into the pot", pot metal can become unstable over time, as it has a tendency to bend, distort, crack, shatter, and pit with age. The low boiling point of zinc and the fast cooling of newly cast parts often trap air bubbles within the cast part, weakening it. Many components common in pot metal are susceptible to corrosion from airborne acids and other contaminants, and internal corrosion of the metal often causes decorative plating to flake off. Pot metal is not easily glued, soldered, or welded. In the late nineteenth century, pot metal referred specifically to a copper alloy that was primarily alloyed with lead. Common formulations were 67% copper with 29% lead and 4% antimony, and 80% copper with 20% lead. The primary component of modern pot metal is zinc, but the caster often adds other metals to the mix to strengthen the cast part, improve the flow of the molten metal, or reduce cost. With a low melting point of 420 °C (788 °F), zinc is often alloyed with other metals including lead, tin, aluminium, and copper. Uses Pot metal is generally used for parts that are not subject to high stresses or torque. Items created from pot metal include toys, furniture fittings, tool parts, electronics components, automotive parts, inexpensive jewelry and improvised weaponry. Pot metal was commonly used to manufacture gramophone parts in the late 1920s and 1930s, notable examples being the back covers on some HMV no. 4 and no. 5 soundboxes. It was also used to make the loudspeaker transducers used with early radio horn speakers before cone speakers were developed. It is also used in inexpensive electric guitars and other budget-priced musical instruments. See also Babbitt (alloy) Pewter White metal Zamak Zinc aluminium Zinc pest References Copper alloys Zinc alloys
Pot metal
[ "Chemistry" ]
646
[ "Alloys", "Zinc alloys", "Copper alloys" ]
3,130,340
https://en.wikipedia.org/wiki/Collaborative%20human%20interpreter
The collaborative human interpreter (CHI) is a proposed software interface for human-based computation (first proposed as a programming language on the blog Google Blogoscoped, but implementable via an API in virtually any programming language), specially designed for collecting and making use of human intelligence in a computer program. One typical usage is implementing functions that are impossible to automate. For example, it is currently difficult for a computer to differentiate between images of men, women and non-humans. However, this is easy for people. A programmer using CHI could write a code fragment along these lines:
enum GenderCode { MALE, FEMALE, NOT_A_HUMAN }
Photo photo = loadPhoto(file);
GenderCode result = checkGender(photo);
Code for the function checkGender(Photo p) can currently only approximate a result, but the task can easily be solved by a person. When the function checkGender() is called, the system sends a request to someone, and the person who receives the request processes the task and inputs the result. If the person (the task processor) inputs the value MALE, that value is returned to the program in the variable result. This querying process can be highly automated. Deployment On November 6, 2005, Amazon.com launched Amazon Mechanical Turk, a business platform in the spirit of CHI and the first business application of the idea. Origins CHI was originally proposed on Philipp Lenssen's blog. References External links "Amazon looks to solve problems that stump computers", ZDNet, Nov 10, 2005 Domain-specific programming languages Human-based computation
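To make the querying flow concrete, here is a minimal Python sketch of the same idea (illustrative only: the task-queue service and its submit_task/poll_result calls are hypothetical names, not a real API):
import time
from enum import Enum

class GenderCode(Enum):
    MALE = 1
    FEMALE = 2
    NOT_A_HUMAN = 3

def check_gender(photo_bytes, queue):
    # Post the photo to a (hypothetical) human-task service and block
    # until a human task processor picks one of the allowed answers.
    task_id = queue.submit_task(kind="classify_gender",
                                payload=photo_bytes,
                                choices=[c.name for c in GenderCode])
    while True:
        answer = queue.poll_result(task_id)  # None until a person answers
        if answer is not None:
            return GenderCode[answer]
        time.sleep(5)  # poll politely; a real service might push results instead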
Collaborative human interpreter
[ "Technology" ]
329
[ "Information systems", "Human-based computation" ]
3,130,497
https://en.wikipedia.org/wiki/Jovan%20Karamata
Jovan Karamata (February 1, 1902 – August 14, 1967) was a Serbian mathematician and university professor. He is remembered for contributions to analysis, in particular Tauberian theory and the theory of slowly varying functions. Considered to be among the most influential Serbian mathematicians of the 20th century, Karamata was one of the founders of the Mathematical Institute of the Serbian Academy of Sciences and Arts, established in 1946. Life Jovan Karamata was born in Zagreb on February 1, 1902, into a family descended from merchants based in the city of Zemun, which was then in Austria-Hungary and is now in Serbia. Being of Aromanian origin, the family traced its roots back to Pyrgoi, Eordaia, West Macedonia (his father Ioannis Karamatas was the president of the "Greek Community of Zemun"); Aromanians mainly lived, and still live, in the area of modern Greece. Its business affairs on the borders of the Austro-Hungarian and Ottoman empires were very well known. In 1914, he finished most of his primary school in Zemun, but because of constant warfare on the borderlands, Karamata's father sent him, together with his brothers and his sister, to Switzerland for their own safety. In Lausanne in 1920 he finished a secondary school oriented towards mathematics and the sciences. In the same year he enrolled at the Engineering Faculty of Belgrade University and, after several years, moved to the mathematics section of the Faculty of Philosophy, where he graduated in 1925. He spent the years 1927–1928 in Paris as a fellow of the Rockefeller Foundation, and in 1928 he became Assistant for Mathematics at the Faculty of Philosophy of Belgrade University. In 1930 he became Assistant Professor, in 1937 Associate Professor and, after the end of World War II, in 1950, Full Professor. In 1951 he was elected Full Professor at the University of Geneva. He became a member of the Yugoslav Academy of Sciences and Arts in 1933, of the Czech Royal Society in 1936, and of the Serbian Royal Academy in 1939, as well as a fellow of the Serbian Academy of Sciences in 1948. He was one of the founders of the Mathematical Institute of the Serbian Academy of Sciences and Arts in 1946. Karamata was a member of the Swiss, French and German mathematical societies and the French Association for the Development of Science, and the primary editor of the journal L'Enseignement Mathématique in Geneva. He also taught at the University of Novi Sad. In 1931 he married Emilija Nikolajevic, with whom he had two sons and twin daughters. His wife died in 1959. After a long illness, Karamata died on August 14, 1967, in Geneva. His ashes rest in his native town of Zemun. Legacy Karamata published 122 scientific papers, 15 monographs and textbooks, as well as 7 professional-pedagogical papers. Karamata is best known for his work on mathematical analysis. He introduced the notion of regularly varying functions and discovered a new class of theorems of Tauberian type, today known as Karamata's Tauberian theorems. He also worked on Mercer's theorems, the Frullani integral, and other topics in analysis. In 1935 he introduced the brackets-and-braces notation for Stirling numbers (analogous to the binomial coefficient notation), which is now known as Karamata notation. He is also cited for Karamata's inequality. In Serbia, Karamata founded the "Karamata (Yugoslav) school of mathematics". Today, Karamata is the most cited Serbian mathematician. 
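The notation just mentioned writes Stirling numbers with brackets and braces by analogy with binomial coefficients; a brief LaTeX rendering of its modern form (an editorial illustration, not a quotation from Karamata's 1935 paper):
% Karamata notation for the Stirling numbers
\left[ n \atop k \right]   % unsigned Stirling number of the first kind
\left\{ n \atop k \right\} % Stirling number of the second kind
% They convert between ordinary and factorial powers, e.g.
x^n = \sum_k \left\{ n \atop k \right\} x^{\underline{k}}, \qquad
x^{\overline{n}} = \sum_k \left[ n \atop k \right] x^k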
He developed or co-developed dozens of mathematical theorems and has had a lasting influence on 20th-century mathematics. See also Mihailo Petrović Alas Bogdan Gavrilović References Further reading N.H. Bingham, C.M. Goldie, J.L. Teugels, Regular Variation, Encyclopedia of Mathematics and its Applications, vol. 27, Cambridge Univ. Press, 1987. J.L. Geluk, L. de Haan, Regular Variation, Extensions and Tauberian Theorems, CWI Tract 40, Amsterdam, 1987. Maric V, Radasin Z, Regularly Varying Functions in Asymptotic Analysis Nikolic A, About two famous results of Jovan Karamata, Archives Internationales d'Histoire des Sciences Nikolic A, Jovan Karamata (1902–1967), Lives and work of the Serbian scientists, SANU, Biographies and bibliographies, Book 5 Tomic M, Academician Jovan Karamata, on occasion of his death, SANU, Vol CDXXIII, t. 37, Belgrade, 1968 (in Serbian) Tomic M, Jovan Karamata (1902–1967), L'Enseignement Mathématique Tomic M, Aljancic S, Remembering Karamata, Publications de l'Institut Mathématique External links An overview of Karamata's inequality (in Vietnamese) 1902 births 1967 deaths Scientists from Zagreb Serbs of Croatia Serbian mathematicians University of Belgrade Faculty of Philosophy alumni Academic staff of the University of Geneva Mathematical analysts Yugoslav mathematicians Serbian people of Aromanian descent
Jovan Karamata
[ "Mathematics" ]
1,047
[ "Mathematical analysis", "Mathematical analysts" ]
3,130,540
https://en.wikipedia.org/wiki/Planned%20unit%20development
A planned unit development (PUD) is a type of flexible, non-Euclidean zoning device that redefines the land uses allowed within a stated land area. PUDs consist of unitary site plans that promote the creation of open spaces, mixed-use housing and land uses, environmental preservation and sustainability, and development flexibility. Areas rezoned as PUDs include building developments, designed groupings of both varied and compatible land uses—such as housing, recreation, commercial centers, and industrial parks—within one contained development or subdivision. Developed areas vary in size and by zoned uses, such as industrial, commercial, and residential. Other types of similar zoning devices include floating zones, overlay zones, special district zoning, performance-based codes, and transferable development rights. History The conceptual origins of PUDs date back to the 1926 enactment of the Model Planning Enabling Act of 1925 by the Committee on the Regional Plan of New York, which allowed the decisions of planning boards and commissions to precede decisions required by local zoning regulations. Specifically, Section 12 of the Model Planning Enabling Act authorized planning boards and commissions to reasonably modify or change development plans, and limited average population density and the total land area covered by buildings. Similarly, Sections 14 and 15 of the Standard City Planning Enabling Act of 1928 allowed planning commissions to authorize PUDs upon an agreement between the government and developers on the PUD's design principles and its impact on both the surrounding community and the economy. The physical origins of PUDs are rooted in the increased suburbanization of the mid-twentieth century; the oldest forms of PUDs in America appeared shortly after World War II in the Levittown and Park Forest developments. Increased implementation of PUDs arose in response to both the lack of aesthetic variation among suburban homes and the increasing need for higher suburban density to accommodate rising populations. PUDs resolved the problems of large-scale suburban development: multiple, separate land uses were efficiently combined, preserving valuable open space and suburban aesthetics within a site's parameters and limitations. The first PUD zoning ordinance was created by Prince George's County, Maryland, in 1949; its development unit consisted of multiple land uses, in contrast to the county's previous commitment to single-use Euclidean zoning. The usage of PUDs in new American communities has been, in part, the result of some international influence; British towns, like Reston, England, attempted in the 1950s to increase their economic base through the integration of industrial elements into the area. Though American new communities had to attract industry after developing their residential sectors, they had economic needs similar to those of these British towns and, consequently, used PUDs to increase the percentage of allowed industrial acreage relative to residential and nonresidential acreage. Current definitions PUD is a means of land regulation that promotes large-scale, site-specific, mixed-use land development. Compared with Euclidean zoning, PUDs are a very flexible form of zoning, in that they promote innovative and creative design, environmental conservation, affordable housing, clustering, and increased density. 
Where appropriate, this type of development promotes: upfront completion of project plans before development begins; a mixture of both land uses—such as those of commercial and industrial natures—and dwelling types that is more innovative than standard zoning ordinances; clustering of residential land uses that provides public and common open space; environmentally friendly preservation of sensitive lands, such as hills or wetlands, that would otherwise have been developed; reduction of greenhouse gas emissions where the PUD prioritizes walkability alongside mixed uses; reduction of development and administrative costs, particularly infrastructure costs like those of streets, driveways, and water and sewer lines; increased administrative discretion for a local professional planning staff while setting aside present land use regulations and rigid plat approval processes; enhancement of the bargaining process between the developer and government municipalities; increased population densities and reduced street widths; and an increase in available community amenities—like bike trails and recreation centers—and natural open spaces. Frequently, PUDs take on a variety of forms, ranging from small clusters of houses combined with open spaces to new and developing towns with thousands of residents and various land uses. Mixtures of land uses PUDs are in direct contradiction to the single-use zoning that has traditionally underscored zoning in the United States since the Village of Euclid v. Ambler Realty Co. decision. Within PUDs, zoning becomes much more integrated, with multiple land uses and districts being placed on adjacent land parcels. The enactment of the Standard Zoning Enabling Act (SZEA) reinforced density as a regulatory priority, further supporting the development of PUDs through the integration of varying lot sizes, varied building uses, and a mixture of different types of housing. PUDs combine residential and non-residential uses—like offices, commercial stores, and other services—and attract a diversified community. Notably, the sidewalks and streets of PUDs tend to be more active and safer, both by day and by night, and experience reduced congestion during peak times. Detailed plans and review processes are required to approve the development of PUDs, owing to the mixing of residential and non-residential uses on land previously zoned for a single use; approving a PUD essentially requires a legal rezoning, in which variances and conditional use permits must be cleared by a planning board or commission with regard to the municipality's comprehensive plan. Maintenance of common areas PUD provisions include consideration of the ability of the owners, or related stakeholders, to ensure the maintenance of common, public areas; such provisions address related costs, the income level and interests of stakeholders, and the nature of both the common area and surrounding developments. Throughout the development process, which is scheduled and preapproved as part of the PUD application, maintenance of common areas can be sustained through calculated servicing of inroads. Post-development, the governing documents of homeowners associations within PUDs often delegate most of the maintenance responsibilities to the owners, with the association assuming the least amount of responsibility possible. Design principles Minimum parcel size The minimum parcel size requirement can be stated in terms of either dwelling units or acres and can vary depending on both the type and the location of a development. 
Given that minimum parcel sizes are rarely necessary for project approval, maximum density requirements are more often used instead, focusing on either a maximum number of units per acre or a minimum lot acreage per dwelling unit. Uses permitted Permitted uses are determined by allotting certain percentages of land to residential, commercial, and industrial uses. Often, the amount allotted depends on the percentage of residential uses relative to non-residential uses within a defined area. Density Given that PUDs focus on integrating mixed uses into a specified area, density is calculated based on the Federal Housing Act's Land Use Intensity (LUI) rating, which encompasses floor area, open space, livability, and recreational space within a single numerical rating. Houses and placement of houses Houses in PUDs often include access to a large, shared open space surrounding the house, as well as a smaller, private yard. Clustered residential homes and buildings also provide homeowners with lower prices, in the form of lower infrastructure costs from shorter streets, and offer a mix of single-family, two-family, and multiple-family housing. PUDs can be considered a legal alternative to large-lot, single-family zoning, because the land area can accommodate more residential housing while maintaining a small impact on local property taxes. Homes can be placed next to commercial and office land uses while ensuring the preservation of other areas. Homes, however, often visually feature garages instead of front porches, as they are placed on the periphery relative to small strip malls created through other PUD initiatives. Size considerations Smaller PUDs, generally less than 250 acres, can promote mixed use within the development but, because they are too small to influence nearby development, can contribute to sprawl in nearby peripheral and rural areas. In smaller PUDs, offering shopping facilities for only the residents may not be sustainable or viable for the development. Mid-sized PUDs, generally greater than 250 acres but less than 1000 acres, can maintain a balance between developmental influence on nearby areas and adequate cash flow requirements. Moderately sized PUDs also contribute to the development of commercial highways and residential areas, which can allow smaller areas of land to be developed or used more efficiently. Larger PUDs, generally greater than 1000 acres, can control sprawl-related issues, yet may also strain the management capacity of local developers. Usable, public open space There are multiple provisions PUDs must include with regard to available open spaces, including, upon conditional approval, those concerning the quantity, location, and maintenance of public areas. Approval for such provisions can be satisfied by one of the following: satisfying a minimum acreage requirement relative to a specific number of dwelling units or a direct percentage of gross acreage; approval from a planning board on the proposed location of the public open space; or cosigned maintenance agreements between residents—whether through a municipality or an organized residential community, like a homeowners' association or a community trust. The requirement of these provisions is to ensure that open, public land, facilities, amenities, and necessities are well kept for ease of public use and accessibility. 
Streets Street patterns can be used to change the neighborhood character of a residential community, particularly by allowing developers to arrange buildings flexibly without having to adhere to non-PUD zoning regulations. Wide, curvilinear, and cul-de-sac street patterns are examples; these curved street patterns allow developers to cluster buildings and maximize available open space. Existing street and block patterns, historic preservation, and the reservation of ground-floor streetfronts for non-residential, commercial uses are also considered when a community approves a PUD. Combining design features The flexibility to include multiple amenities—like utilities, recreational facilities, schools, and parks—within a development unit is representative of how non-Euclidean zoning practices can increase the mixed-use capability of a given piece of land. PUD project plans require a balance of residential uses, such as single-family homes and apartments, and non-residential requirements, ensuring that pedestrians and vehicles are able to navigate the varied buildings, spaces, and streets of PUDs safely and conveniently. Ownership of, and responsibility for, such PUDs may be either public or private. References Urban planning
Planned unit development
[ "Engineering" ]
2,137
[ "Urban planning", "Architecture" ]
3,131,342
https://en.wikipedia.org/wiki/Electrogravitics
Electrogravitics is claimed to be an unconventional type of effect or anti-gravity force created by an electric field's effect on a mass. The name was coined in the 1920s by the discoverer of the effect, Thomas Townsend Brown, who spent most of his life trying to develop it and sell it as a propulsion system. Through Brown's promotion of the idea, it was researched for a short while by aerospace companies in the 1950s. Electrogravitics is popular with conspiracy theorists, who claim that it powers flying saucers and the B-2 stealth bomber. Because apparatuses based on Brown's ideas have failed to produce thrust when tested under controlled vacuum conditions, the observed effect is generally attributed to ion drift or ion wind rather than anti-gravity. Origins Electrogravitics had its origins in experiments started in 1921 by Thomas Townsend Brown (who coined the name) while he was in high school. He discovered an unusual effect while experimenting with a Coolidge tube, a type of X-ray vacuum tube: if he placed the tube on a balance scale with its positive electrode facing up, the tube's mass seemed to decrease; when facing down, the tube's mass seemed to increase. Brown showed this effect to his college professors and even newspaper reporters, and told them he was convinced that he had managed to influence gravity electronically. Brown developed this into large, high-voltage capacitors that would produce a tiny propulsive force, causing the capacitor to jump in one direction when the power was turned on. In 1929, Brown published "How I Control Gravitation" in Science and Invention, where he claimed the capacitors were producing a mysterious force that interacted with the pull of gravity. He envisioned a future where, if his device could be scaled up, "Multi-impulse gravitators, weighing hundreds of tons, may propel the ocean liners of the future" or even "fantastic 'space cars'" to Mars. Somewhere along the way, Brown devised the name Biefeld–Brown effect, after his former teacher, professor of astronomy Paul Alfred Biefeld at Denison University in Ohio. Brown claimed Biefeld as his mentor and co-experimenter. After World War II, Brown sought to develop the effect as a means of propulsion for aircraft and spacecraft, demonstrating a working apparatus to an audience of scientists and military officials in 1952. A Caltech physicist invited to observe Brown's disk device in the early 1950s noted during the demonstration that its motive force was the well-known phenomenon of "electric wind", and not anti-gravity, saying, "I'm afraid these gentlemen played hooky from their high school physics classes...". Research into the phenomenon was popular in the mid-1950s; at one point the Glenn L. Martin Company placed advertisements looking for scientists who were "interested in gravity". Interest rapidly declined thereafter. Since the claimed anti-gravity effect could not be supported by known physics, the effect has instead been attributed to ionized particles that produce a type of ion drift, or ionic wind, which transfers momentum to surrounding neutral particles, an electrokinetic phenomenon more widely referred to as electrohydrodynamics (EHD). Claims Electrogravitics has become popular with UFO, anti-gravity, and government conspiracy theorists, who see it as something much more exotic than electrokinetics, i.e. 
that electrogravitics is a true anti-gravity technology that can "create a force that depends upon an object's mass, even as gravity does". There are claims that all the major aerospace companies in the 1950s, including Martin, Convair, Lear, Sperry, and Raytheon, were working on it, that the technology became highly classified in the early 1960s, that it is used to power the B-2 bomber, and that it can be used to generate free energy. Charles Berlitz devoted an entire chapter of his book on the Philadelphia Experiment (The Philadelphia Experiment: Project Invisibility) to a retelling of Brown's early work with the effect, implying that the electrogravitics effect was being used by UFOs. The researcher and author Paul LaViolette has produced many self-published books on electrogravitics, making many claims over the years, including his view that the technology could have helped to avoid another Space Shuttle Columbia disaster. Criticism Many claims as to the validity of electrogravitics as an anti-gravity force revolve around research and videos on the internet purported to show lifter-style capacitor devices working in a vacuum, and therefore not receiving propulsion from ion drift or ion wind generated in air. Follow-ups on the claims (R. L. Talley in a 1990 U.S. Air Force study, NASA scientist Jonathan Campbell in a 2003 experiment, and Martin Tajmar in a 2004 paper) found that no thrust could be observed in a vacuum, consistent with the phenomenon of ion wind. Campbell pointed out to a Wired magazine reporter that creating a true vacuum similar to space for the test requires tens of thousands of dollars in equipment. Byron Preiss, in The Planets, his 1985 book on the current science and future of the Solar System, commented that electrogravitics development seemed to be "much ado about nothing, started by a bunch of engineers who didn't know enough physics". Preiss stated that electrogravitics, like exobiology, is "a science without a single specimen for study". See also United States gravity control propulsion initiative List of topics characterized as pseudoscience References Further reading Thomas Valone, Electrogravitics Systems: Reports on a New Propulsion Methodology. Integrity Research Institute; 2nd edition (November 1995). 102 pages. Thomas Valone, Electrogravitics II: Validating Reports on a New Propulsion Methodology. Integrity Research Institute; 2nd revised edition (July 1, 2005). 160 pages. Jen-shih Chang, Handbook of Electrostatic Processes. CRC Press, 1995. Nick Cook, The Hunt for Zero Point: Inside the Classified World of Antigravity Technology. Broadway; 1st edition (August 13, 2002). 304 pages. Paul A. LaViolette, Secrets of Antigravity Propulsion: Tesla, UFOs, and Classified Aerospace Technology. Bear & Company, Rochester, VT (2008). Paperback: 512 pages. External links Electrogravitics at American Antigravity A page of YouTube talks and demonstrations by supporters. Anti-gravity Fringe physics Hypothetical technology
Electrogravitics
[ "Astronomy" ]
1,350
[ "Astronomical hypotheses", "Anti-gravity" ]
3,131,407
https://en.wikipedia.org/wiki/B%C3%A9zout%20domain
In mathematics, a Bézout domain is an integral domain in which the sum of two principal ideals is again a principal ideal. This means that Bézout's identity holds for every pair of elements, and that every finitely generated ideal is principal. Bézout domains are a form of Prüfer domain. Any principal ideal domain (PID) is a Bézout domain, but a Bézout domain need not be a Noetherian ring, so it can have non-finitely generated ideals; if so, it is not a unique factorization domain (UFD), but it is still a GCD domain. The theory of Bézout domains retains many of the properties of PIDs without requiring the Noetherian property. Bézout domains are named after the French mathematician Étienne Bézout. Examples All PIDs are Bézout domains. Examples of Bézout domains that are not PIDs include the ring of entire functions (functions holomorphic on the whole complex plane) and the ring of all algebraic integers. In the case of entire functions, the only irreducible elements are the functions associated to polynomial functions of degree 1, so an element has a factorization only if it has finitely many zeroes. In the case of the algebraic integers there are no irreducible elements at all, since for any algebraic integer its square root (for instance) is also an algebraic integer. This shows in both cases that the ring is not a UFD, and so certainly not a PID. Valuation rings are Bézout domains. Any non-Noetherian valuation ring is an example of a non-Noetherian Bézout domain. The following general construction produces a Bézout domain S that is not a UFD from any Bézout domain R that is not a field, for instance from a PID; the case R = Z is the basic example to have in mind. Let F be the field of fractions of R, and put S = R + XF[X], the subring of polynomials in F[X] with constant term in R. This ring is not Noetherian, since an element like X with zero constant term can be divided indefinitely by noninvertible elements of R, which remain noninvertible in S, and the ideal generated by all these quotients of X is not finitely generated (and so X has no factorization in S). One shows as follows that S is a Bézout domain. It suffices to prove that for every pair a, b in S there exist s, t in S such that as + bt divides both a and b. If a and b have a common divisor d, it suffices to prove this for a/d and b/d, since the same s, t will do. We may assume the polynomials a and b nonzero; if both have a zero constant term, then let n be the minimal exponent such that at least one of them has a nonzero coefficient of X^n; one can find f in F such that fX^n is a common divisor of a and b, and divide by it. We may therefore assume that at least one of a, b has a nonzero constant term. If a and b, viewed as elements of F[X], are not relatively prime, there is a greatest common divisor of a and b in this UFD that has constant term 1, and therefore lies in S; we can divide by this factor. We may therefore also assume that a and b are relatively prime in F[X], so that 1 lies in aF[X] + bF[X], and some constant polynomial r in R lies in aS + bS. Also, since R is a Bézout domain, the gcd d in R of the constant terms a0 and b0 lies in a0R + b0R. Since any element without constant term, such as a − a0 or b − b0, is divisible by any nonzero constant, the constant d is a common divisor in S of a and b; we shall show it is in fact a greatest common divisor by showing that it lies in aS + bS. Multiplying a and b respectively by the Bézout coefficients for d with respect to a0 and b0 gives a polynomial p in aS + bS with constant term d. 
Then p − d has a zero constant term, and so is a multiple in S of the constant polynomial r, and therefore lies in aS + bS. But then d does as well, which completes the proof. Properties A ring is a Bézout domain if and only if it is an integral domain in which any two elements have a greatest common divisor that is a linear combination of them: this is equivalent to the statement that an ideal generated by two elements is also generated by a single element, and induction then shows that all finitely generated ideals are principal. The expression of the greatest common divisor of two elements of a PID as a linear combination is often called Bézout's identity, whence the terminology. Note that the above gcd condition is stronger than the mere existence of a gcd. An integral domain where a gcd exists for any two elements is called a GCD domain, and thus Bézout domains are GCD domains. In particular, in a Bézout domain, irreducibles are prime (but, as the algebraic integer example shows, they need not exist). For a Bézout domain R, the following conditions are all equivalent: (1) R is a principal ideal domain; (2) R is Noetherian; (3) R is a unique factorization domain (UFD); (4) R satisfies the ascending chain condition on principal ideals (ACCP); (5) every nonzero nonunit in R factors into a product of irreducibles (R is an atomic domain). The equivalence of (1) and (2) was noted above. Since a Bézout domain is a GCD domain, it follows immediately that (3), (4) and (5) are equivalent. Finally, if R is not Noetherian, then there exists an infinite ascending chain of finitely generated ideals, hence in a Bézout domain an infinite ascending chain of principal ideals; (4) and (2) are thus equivalent. A Bézout domain is a Prüfer domain, i.e., a domain in which each finitely generated ideal is invertible, or, said another way, a commutative semihereditary domain. Consequently, one may view the equivalence "Bézout domain iff Prüfer domain and GCD domain" as analogous to the more familiar "PID iff Dedekind domain and UFD". Prüfer domains can be characterized as integral domains whose localizations at all prime (equivalently, at all maximal) ideals are valuation domains. So the localization of a Bézout domain at a prime ideal is a valuation domain. Since an invertible ideal in a local ring is principal, a local ring is a Bézout domain iff it is a valuation domain. Moreover, a valuation domain with noncyclic (equivalently non-discrete) value group is not Noetherian, and every totally ordered abelian group is the value group of some valuation domain. This gives many examples of non-Noetherian Bézout domains. In noncommutative algebra, right Bézout domains are domains whose finitely generated right ideals are principal right ideals, that is, of the form xR for some x in R. One notable result is that a right Bézout domain is a right Ore domain. This fact is not interesting in the commutative case, since every commutative domain is an Ore domain. Right Bézout domains are also right semihereditary rings. Modules over a Bézout domain Some facts about modules over a PID extend to modules over a Bézout domain. Let R be a Bézout domain and M a finitely generated module over R. Then M is flat if and only if it is torsion-free. See also Semifir (a commutative semifir is precisely a Bézout domain) Bézout ring References Bibliography Commutative algebra Ring theory
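In LaTeX, the defining property can be stated compactly (an editorial summary of the definitions above, not text from the article):
% R an integral domain; R is a Bézout domain iff Bézout's identity holds:
\forall a, b \in R \;\; \exists s, t \in R : \quad a s + b t = \gcd(a, b),
% equivalently, every two-generated (hence every finitely generated)
% ideal is principal:
aR + bR = dR \quad \text{with } d = \gcd(a, b).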
Bézout domain
[ "Mathematics" ]
1,679
[ "Fields of abstract algebra", "Commutative algebra", "Ring theory" ]
3,131,507
https://en.wikipedia.org/wiki/RNA-binding%20protein
RNA-binding proteins (often abbreviated as RBPs) are proteins that bind to double- or single-stranded RNA in cells and participate in forming ribonucleoprotein complexes. RBPs contain various structural motifs, such as the RNA recognition motif (RRM), the dsRNA-binding domain, zinc fingers and others. They are cytoplasmic and nuclear proteins. However, since most mature RNA is exported from the nucleus relatively quickly, most RBPs in the nucleus exist as complexes of protein and pre-mRNA called heterogeneous ribonucleoprotein particles (hnRNPs). RBPs have crucial roles in various cellular processes such as cellular function, transport and localization. They play a major role especially in the post-transcriptional control of RNAs, including splicing, polyadenylation, mRNA stabilization, mRNA localization and translation. Eukaryotic cells express diverse RBPs with unique RNA-binding activities and protein–protein interactions. According to the Eukaryotic RBP Database (EuRBPDB), there are 2961 genes encoding RBPs in humans. During evolution, the diversity of RBPs greatly increased with the increase in the number of introns. This diversity enabled eukaryotic cells to utilize RNA exons in various arrangements, giving rise to a unique RNP (ribonucleoprotein) for each RNA. Although RBPs have a crucial role in the post-transcriptional regulation of gene expression, relatively few RBPs have been studied systematically. It has now become clear that RNA–RBP interactions play important roles in many biological processes among organisms. Structure Many RBPs have modular structures and are composed of multiple repeats of just a few specific basic domains that often have limited sequences. These sequences are arranged in varying combinations in different RBPs. A specific protein's recognition of a specific RNA has evolved through the rearrangement of these few basic domains. Each basic domain recognizes RNA, but many of these proteins require multiple copies of one of the many common domains to function. Diversity As nuclear RNA emerges from RNA polymerase, RNA transcripts are immediately covered with RNA-binding proteins that regulate every aspect of RNA metabolism and function, including RNA biogenesis, maturation, transport, cellular localization and stability. All RBPs bind RNA, but they do so with different RNA-sequence specificities and affinities, which allows RBPs to be as diverse as their targets and functions. These targets include mRNA, which codes for proteins, as well as a number of functional non-coding RNAs. NcRNAs almost always function as ribonucleoprotein complexes and not as naked RNAs. These non-coding RNAs include microRNAs, small interfering RNAs (siRNA), as well as spliceosomal small nuclear RNAs (snRNA). Function RNA processing and modification Alternative splicing Alternative splicing is a mechanism by which different forms of mature mRNAs (messenger RNAs) are generated from the same gene. It is a regulatory mechanism by which variations in the incorporation of exons into mRNA lead to the production of more than one related protein, thus expanding possible genomic outputs. RBPs function extensively in the regulation of this process. Some binding proteins, such as the neuron-specific RNA-binding protein NOVA1, control the alternative splicing of a subset of hnRNA by recognizing and binding to a specific sequence in the RNA (YCAY, where Y indicates a pyrimidine, U or C). These proteins then recruit spliceosomal proteins to this target site. 
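To illustrate the kind of sequence element just described, here is a minimal Python sketch (an editorial illustration, not from the article) that scans an RNA string for YCAY motifs, where Y is a pyrimidine (C or U):
import re

# YCAY: Y = pyrimidine (C or U); NOVA1's reported binding element.
# The lookahead lets overlapping occurrences be counted as well.
YCAY = re.compile(r"(?=([CU]CA[CU]))")

def find_ycay_sites(rna):
    # Return 0-based start positions of YCAY motifs in an RNA string.
    rna = rna.upper().replace("T", "U")  # tolerate DNA-style input
    return [m.start() for m in YCAY.finditer(rna)]

print(find_ycay_sites("GGUCAUCCAUAG"))  # prints [2, 6]: "UCAU" and "CCAU"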
SR proteins are also well known for their role in alternative splicing through the recruitment of snRNPs that form the spliceosome, namely the U1 snRNP and U2AF snRNP. However, RBPs are also part of the spliceosome itself. The spliceosome is a complex of snRNA and protein subunits and acts as the mechanical agent that removes introns and ligates the flanking exons. Besides the core spliceosome complex, RBPs also bind to cis-acting RNA elements that influence exon inclusion or exclusion during splicing. These sites are referred to as exonic splicing enhancers (ESEs), exonic splicing silencers (ESSs), intronic splicing enhancers (ISEs) and intronic splicing silencers (ISSs), and, depending on their site of binding, RBPs work as splicing silencers or enhancers. RNA editing The most extensively studied form of RNA editing involves the ADAR protein. This protein functions through the post-transcriptional modification of mRNA transcripts by changing the nucleotide content of the RNA. This is done through the conversion of adenosine to inosine in an enzymatic reaction catalyzed by ADAR. This process effectively changes the RNA sequence from that encoded by the genome and extends the diversity of the gene products. The majority of RNA editing occurs on non-coding regions of RNA; however, some protein-encoding RNA transcripts have been shown to be subject to editing, resulting in a difference in their protein's amino acid sequence. An example is the glutamate receptor mRNA, where a glutamine codon is converted to an arginine codon, leading to a change in the functionality of the protein. Polyadenylation Polyadenylation is the addition of a "tail" of adenylate residues to an RNA transcript about 20 bases downstream of the AAUAAA sequence within the three prime untranslated region. Polyadenylation of mRNA has a strong effect on its nuclear transport, translation efficiency, and stability. All of these, as well as the process of polyadenylation itself, depend on the binding of specific RBPs. All eukaryotic mRNAs, with few exceptions, are processed to receive 3' poly(A) tails of about 200 nucleotides. One of the necessary protein complexes in this process is CPSF. CPSF binds to the AAUAAA sequence and, together with another protein called poly(A)-binding protein, recruits and stimulates the activity of poly(A) polymerase. Poly(A) polymerase is inactive on its own and requires the binding of these other proteins to function properly. Export After processing is complete, mRNA needs to be transported from the cell nucleus to the cytoplasm. This is a three-step process involving the generation of a cargo-carrier complex in the nucleus, followed by translocation of the complex through the nuclear pore complex, and finally release of the cargo into the cytoplasm. The carrier is then subsequently recycled. The TAP/NXF1:p15 heterodimer is thought to be the key player in mRNA export. Over-expression of TAP in Xenopus laevis frogs increases the export of transcripts that are otherwise inefficiently exported. However, TAP needs adaptor proteins because it is unable to interact directly with mRNA. The Aly/REF protein interacts with and binds to the mRNA, recruiting TAP. mRNA localization mRNA localization is critical for the regulation of gene expression by allowing spatially regulated protein production. Through mRNA localization, proteins are translated at their intended target sites in the cell. 
This is especially important during early development, when rapid cell cleavages give different cells various combinations of mRNA, which can then lead to drastically different cell fates. RBPs are critical in the localization of this mRNA, ensuring that proteins are translated only in their intended regions. One of these proteins is ZBP1. ZBP1 binds to β-actin mRNA at the site of transcription and moves with the mRNA into the cytoplasm. It then localizes this mRNA to the lamella region of several asymmetric cell types, where it can be translated. In 2008 it was proposed that FMRP was involved in the stimulus-induced localization of several dendritic mRNAs in the neuronal dendrites of cultured hippocampal neurons. More recent studies of FMRP-bound RNAs present in microdissected dendrites of CA1 hippocampal neurons revealed no changes in localization in wild-type versus FMRP-null mouse brains. Translation Translational regulation provides a rapid mechanism to control gene expression. Rather than gene expression being controlled at the transcriptional level, the mRNA is already transcribed and the recruitment of ribosomes is controlled instead. This allows the rapid generation of proteins when a signal activates translation. ZBP1, in addition to its role in the localization of β-actin mRNA, is also involved in the translational repression of β-actin mRNA by blocking translation initiation. ZBP1 must be removed from the mRNA to allow the ribosome to bind properly and translation to begin. Protein–RNA interactions RNA-binding proteins exhibit highly specific recognition of their RNA targets by recognizing their sequences, structures, motifs and RNA modifications. The specific binding of RNA-binding proteins allows them to distinguish their targets and to regulate a variety of cellular functions via control of the generation, maturation, and lifespan of the RNA transcript. This interaction begins during transcription: some RBPs remain bound to RNA until degradation, whereas others only transiently bind to RNA to regulate RNA splicing, processing, transport, and localization. Cross-linking immunoprecipitation (CLIP) methods are used to stringently identify the direct RNA binding sites of RNA-binding proteins in a variety of tissues and organisms. In this section, three classes of the most widely studied RNA-binding domains (the RNA-recognition motif, the double-stranded RNA-binding motif, and the zinc-finger motif) are discussed. RNA-recognition motif (RRM) The RNA recognition motif, which is the most common RNA-binding motif, is a small protein domain of 75–85 amino acids that forms a four-stranded β-sheet packed against two α-helices. This recognition motif exerts its role in numerous cellular functions, especially in mRNA/rRNA processing, splicing, translation regulation, RNA export, and RNA stability. Ten structures of RRMs have been determined through NMR spectroscopy and X-ray crystallography. These structures illustrate the intricacy of protein–RNA recognition by the RRM, as it entails RNA–RNA and protein–protein interactions in addition to protein–RNA interactions. Despite their complexity, all ten structures have some common features. In all of them, the four-stranded β-sheet that forms the main protein surface of the RRM was found to interact with the RNA, usually contacting two or three nucleotides in a specific manner. In addition, strong RNA-binding affinity and specificity towards sequence variation are achieved through interactions between the inter-domain linker and the RNA and between the RRMs themselves. 
This plasticity of the RRM explains why the RRM is the most abundant domain and why it plays an important role in various biological functions. Double-stranded RNA-binding motif The double-stranded RNA-binding motif (dsRM, dsRBD), a 70–75 amino-acid domain, plays a critical role in RNA processing, RNA localization, RNA interference, RNA editing, and translational repression. All three structures of the domain solved as of 2005 share unifying features that explain how dsRMs bind only to dsRNA and not to dsDNA. The dsRMs were found to interact along the RNA duplex via both α-helices and the β1-β2 loop. Moreover, all three dsRBM structures make contact with the sugar-phosphate backbone of the major groove and of one minor groove, mediated by the β1-β2 loop along with the N-terminal region of alpha helix 2. This interaction is a unique adaptation to the shape of an RNA double helix, as it involves 2'-hydroxyls and phosphate oxygens. Despite their common structural features, dsRBMs exhibit distinct chemical frameworks, which permits specificity for a variety of RNA structures including stem-loops, internal loops, bulges and helices containing mismatches. Zinc fingers CCHH-type zinc-finger domains are the most common DNA-binding domains encoded in the eukaryotic genome. To attain high sequence-specific recognition of DNA, several zinc fingers are utilized in a modular fashion. Zinc fingers exhibit a ββα protein fold in which a β-hairpin and an α-helix are joined via a zinc ion. Furthermore, the interaction between the protein side-chains of the α-helix and the DNA bases in the major groove allows for DNA-sequence-specific recognition. Despite their wide recognition of DNA, it has recently been discovered that zinc fingers also have the ability to recognize RNA. In addition to CCHH zinc fingers, CCCH zinc fingers were recently found to employ sequence-specific recognition of single-stranded RNA through interactions between intermolecular hydrogen bonds and the Watson-Crick edges of the RNA bases. CCHH-type zinc fingers employ two modes of RNA binding. In the first, the zinc fingers exert a non-specific interaction with the backbone of a double helix, whereas in the second mode zinc fingers specifically recognize the individual bases that bulge out. Differing from the CCHH type, the CCCH-type zinc finger displays another mode of RNA binding, in which single-stranded RNA is identified in a sequence-specific manner. Overall, zinc fingers can directly recognize DNA by binding to dsDNA sequences and RNA by binding to ssRNA sequences. Role in embryonic development RNA-binding proteins' transcriptional and post-transcriptional regulation of RNA has a role in regulating the patterns of gene expression during development. Extensive research on the nematode C. elegans has identified RNA-binding proteins as essential factors during germline and early embryonic development. Their specific functions involve the development of somatic tissues (neurons, hypodermis, muscles and excretory cells) as well as providing timing cues for developmental events. Nevertheless, it is exceptionally challenging to discover the mechanisms behind RBPs' function in development, owing to the difficulty of identifying their RNA targets; most RBPs usually have multiple RNA targets. However, it is indisputable that RBPs exert critical control in regulating developmental pathways in a concerted manner. 
Germline development In Drosophila melanogaster, Elav, Sxl and tra-2 are RNA-binding-protein-encoding genes that are critical in early sex determination and the maintenance of the somatic sexual state. These genes impose their effects at the post-transcriptional level by regulating sex-specific splicing in Drosophila. Sxl exerts positive regulation of the feminizing gene tra to produce a functional tra mRNA in females. In C. elegans, RNA-binding proteins including FOG-1, MOG-1/-4/-5 and RNP-4 regulate germline and somatic sex determination. Furthermore, several RBPs such as GLD-1, GLD-3, DAZ-1, PGL-1 and OMA-1/-2 exert their regulatory functions during meiotic prophase progression, gametogenesis, and oocyte maturation. Somatic development In addition to RBPs' functions in germline development, post-transcriptional control also plays a significant role in somatic development. Differing from RBPs involved in germline and early embryo development, RBPs functioning in somatic development regulate tissue-specific alternative splicing of their mRNA targets. For instance, MEC-8 and UNC-75, which contain RRM domains, localize to regions of the hypodermis and nervous system, respectively. Furthermore, another RRM-containing RBP, EXC-7, has been shown to localize to embryonic excretory canal cells and throughout the nervous system during somatic development. Neuronal development ZBP1 was shown to regulate dendritogenesis (dendrite formation) in hippocampal neurons. Other RNA-binding proteins involved in dendrite formation are Pumilio and Nanos, FMRP, CPEB and Staufen 1. Role in cancer RBPs are emerging as crucial players in tumor development. Hundreds of RBPs are markedly dysregulated across human cancers and show predominant downregulation in tumors relative to normal tissues. Many RBPs are differentially expressed in different cancer types, for example KHDRBS1 (Sam68), ELAVL1 (HuR), FXR1 and UHMK1. For some RBPs, the change in expression is related to copy number variations (CNVs), for example CNV gains of BYSL in colorectal cancer cells, of ESRP1 and CELF3 in breast cancer, of RBM24 in liver cancer, and of IGF2BP2 and IGF2BP3 in lung cancer, or CNV losses of KHDRBS2 in lung cancer. Some expression changes are caused by protein-affecting mutations in these RBPs, for example in NSUN6, ZC3H13, ELAC1, RBMS3, ZGPAT, SF3B1, SRSF2, RBM10, U2AF1, PPRC1, RBMXL1, HNRNPCL1 and others. Several studies have related these changes in the expression of RBPs to aberrant alternative splicing in cancer. Current research As RNA-binding proteins exert significant control over numerous cellular functions, they have been a popular area of investigation for many researchers. Owing to their importance in the biological field, numerous discoveries regarding the potential of RNA-binding proteins have recently been unveiled. Recent developments in the experimental identification of RNA-binding proteins have significantly extended the number of known RNA-binding proteins. The RNA-binding protein Sam68 controls the spatial and temporal compartmentalization of RNA metabolism to attain proper synaptic function in dendrites. Loss of Sam68 results in abnormal post-transcriptional regulation and ultimately leads to neurological disorders such as fragile X-associated tremor/ataxia syndrome. Sam68 was found to interact with the mRNA encoding β-actin, which regulates the synaptic formation of dendritic spines with its cytoskeletal components. Therefore, Sam68 plays a critical role in regulating synapse number via control of postsynaptic β-actin mRNA metabolism. 
The neuron-specific CELF-family RNA-binding protein UNC-75 specifically binds to the UUGUUGUGUUGU mRNA stretch via its three RNA recognition motifs for exon 7a selection in C. elegans neuronal cells. As exon 7a is skipped due to its weak splice sites in non-neuronal cells, UNC-75 was found to specifically activate splicing between exon 7a and exon 8 only in neuronal cells. The cold-inducible RNA-binding protein CIRBP plays a role in controlling the cellular response upon confronting a variety of cellular stresses, including short-wavelength ultraviolet light, hypoxia, and hypothermia. This research yielded potential implications for the association of disease states with inflammation. The serine-arginine family RNA-binding protein Slr1 was found to exert control over polarized growth in Candida albicans. Slr1 mutant strains show decreased filamentation and reduced damage to epithelial and endothelial cells, which leads to an extended survival rate in mice compared with wild-type Slr1 strains. Therefore, this research reveals that the SR-like protein Slr1 plays a role in instigating hyphal formation and virulence in C. albicans. See also DNA-binding protein RNA-binding protein database Ribonucleoprotein External links starBase platform: a platform for decoding binding sites of RNA-binding proteins (RBPs) from large-scale CLIP-Seq (HITS-CLIP, PAR-CLIP, iCLIP, CLASH) datasets. RBPDB database: a database of RNA-binding proteins. oRNAment: a database of putative RBP binding site instances in both coding and non-coding RNA in various species. ATtRACT database: a database of RNA-binding proteins and associated motifs. SpliceAid-F: a database of hand-curated human RNA-binding proteins. RsiteDB: RNA binding site database SPOT-Seq-RNA: Template-based prediction of RNA-binding proteins and their complex structures. SPOT-Struct-RNA: RNA-binding protein prediction from 3D structures. ENCODE Project: A collection of genomic datasets (i.e. RNA Bind-n-seq, eCLIP, RBP-targeted shRNA RNA-seq) for RBPs RBP Image Database: Images showing the cellular localization of RBPs in cells RBPSpot Software: a deep-learning-based software to detect RBP–RNA interactions; it also provides a module to build new RBP–RNA interaction models. References Cell biology
RNA-binding protein
[ "Biology" ]
4,324
[ "Cell biology" ]
3,131,706
https://en.wikipedia.org/wiki/Holomorphic%20functional%20calculus
In mathematics, holomorphic functional calculus is functional calculus with holomorphic functions. That is to say, given a holomorphic function f of a complex argument z and an operator T, the aim is to construct an operator, f(T), which naturally extends the function f from complex argument to operator argument. More precisely, the functional calculus defines a continuous algebra homomorphism from the holomorphic functions on a neighbourhood of the spectrum of T to the bounded operators. This article will discuss the case where T is a bounded linear operator on some Banach space. In particular, T can be a square matrix with complex entries, a case which will be used to illustrate functional calculus and provide some heuristic insights for the assumptions involved in the general construction.
Motivation
Need for a general functional calculus
In this section T will be assumed to be an n × n matrix with complex entries. If a given function f is of a certain special type, there are natural ways of defining f(T). For instance, if

p(z) = \sum_{i=0}^{m} a_i z^i

is a complex polynomial, one can simply substitute T for z and define

p(T) = \sum_{i=0}^{m} a_i T^i,

where T^0 = I, the identity matrix. This is the polynomial functional calculus. It is a homomorphism from the ring of polynomials to the ring of n × n matrices. Extending slightly from the polynomials, if f : C → C is holomorphic everywhere, i.e. an entire function, with MacLaurin series

f(z) = \sum_{n=0}^{\infty} c_n z^n,

mimicking the polynomial case suggests we define

f(T) = \sum_{n=0}^{\infty} c_n T^n.

Since the MacLaurin series converges everywhere, the above series will converge in a chosen operator norm. An example of this is the exponential of a matrix. Replacing z by T in the MacLaurin series of f(z) = e^z gives

e^T = \sum_{n=0}^{\infty} \frac{T^n}{n!} = I + T + \frac{T^2}{2!} + \frac{T^3}{3!} + \cdots.

The requirement that the MacLaurin series of f converge everywhere can be relaxed somewhat. From above it is evident that all that is really needed is that the radius of convergence of the MacLaurin series be greater than ‖T‖, the operator norm of T. This enlarges somewhat the family of f for which f(T) can be defined using the above approach. However, it is not quite satisfactory. For instance, it is a fact from matrix theory that every non-singular T has a logarithm S in the sense that e^S = T. It is desirable to have a functional calculus that allows one to define, for a non-singular T, ln(T) such that it coincides with S. This cannot be done via power series; for example, the logarithmic series

\ln(1 + z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots

converges only on the open unit disk, so substituting T for z in the series fails to give a well-defined expression for ln(T + I) for invertible T + I with ‖T‖ ≥ 1. Thus a more general functional calculus is needed.
Functional calculus and the spectrum
It is expected that a necessary condition for f(T) to make sense is that f be defined on the spectrum of T. For example, the spectral theorem for normal matrices states that every normal matrix is unitarily diagonalizable. This leads to a definition of f(T) when T is normal. One encounters difficulties if f(λ) is not defined for some eigenvalue λ of T. Other indications also reinforce the idea that f(T) can be defined only if f is defined on the spectrum of T. If T is not invertible, then (recalling that T is an n × n matrix) 0 is an eigenvalue. Since the natural logarithm is undefined at 0, one would expect that ln(T) cannot be defined naturally. This is indeed the case. As another example, for

f(z) = \frac{1}{(z - 2)(z - 5)},

the reasonable way of calculating f(T) would seem to be

f(T) = (T - 2I)^{-1} (T - 5I)^{-1}.

However, this expression is not defined if the inverses on the right-hand side do not exist, that is, if either 2 or 5 are eigenvalues of T.
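The power-series definition given earlier is easy to check numerically. The following is a minimal sketch (an illustration under assumed choices of matrix and truncation order, assuming NumPy and SciPy are available): it sums the truncated MacLaurin series for e^T and compares the result with SciPy's built-in matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary test matrix; the exponential of this one is a rotation by 1 radian.
T = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Partial sums of the MacLaurin series e^T = I + T + T^2/2! + ...
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ T / k   # term now holds T^k / k!
    S += term

print(np.allclose(S, expm(T)))  # True: the truncated series matches expm(T)
```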
For a given matrix T, the eigenvalues of T dictate to what extent f(T) can be defined; i.e., f(λ) must be defined for all eigenvalues λ of T. For a general bounded operator this condition translates to "f must be defined on the spectrum of T". This assumption turns out to be an enabling condition such that the functional calculus map, f → f(T), has certain desirable properties.
Functional calculus for a bounded operator
Let X be a complex Banach space, and L(X) denote the family of bounded operators on X. Recall the Cauchy integral formula from classical function theory. Let f : C → C be holomorphic on some open set D ⊂ C, and Γ be a rectifiable Jordan curve in D, that is, a closed curve of finite length without self-intersections. Assume that the set U of points lying in the inside of Γ, i.e. such that the winding number of Γ about z is 1, is contained in D. The Cauchy integral formula states

f(z) = \frac{1}{2\pi i} \oint_{\Gamma} \frac{f(\zeta)}{\zeta - z} \, d\zeta

for any z in U. The idea is to extend this formula to functions taking values in the Banach space L(X). Cauchy's integral formula suggests the following definition (purely formal, for now):

f(T) = \frac{1}{2\pi i} \oint_{\Gamma} f(\zeta) (\zeta - T)^{-1} \, d\zeta,

where (ζ − T)−1 is the resolvent of T at ζ. Assuming this Banach space-valued integral is appropriately defined, this proposed functional calculus implies the following necessary conditions:
As the scalar version of Cauchy's integral formula applies to holomorphic f, we anticipate that this is also the case for the Banach space setting, where there should be a suitable notion of holomorphy for functions taking values in the Banach space L(X).
As the resolvent mapping ζ → (ζ − T)−1 is undefined on the spectrum of T, σ(T), the Jordan curve Γ should not intersect σ(T). Now, the resolvent mapping will be holomorphic on the complement of σ(T), so to obtain a non-trivial functional calculus, Γ must enclose (at least part of) σ(T).
The functional calculus should be well-defined in the sense that f(T) has to be independent of Γ.
The full definition of the functional calculus is as follows: For T ∈ L(X), define

f(T) = \frac{1}{2\pi i} \oint_{\Gamma} f(\zeta) (\zeta - T)^{-1} \, d\zeta,

where f is a holomorphic function defined on an open set D ⊂ C which contains σ(T), and Γ = {γ1, ..., γm} is a collection of disjoint Jordan curves in D bounding an "inside" set U, such that σ(T) lies in U, and each γi is oriented in the boundary sense. The open set D may vary with f and need not be connected or simply connected. The following subsections make precise the notions invoked in the definition and show that f(T) is indeed well defined under the given assumptions.
Banach space-valued integral
Cf. Bochner integral. For a continuous function g defined in an open neighborhood of Γ and taking values in L(X), the contour integral ∫Γ g is defined in the same way as for the scalar case. One can parametrize each γi ∈ Γ by a real interval [a, b], and the integral is the limit of the Riemann sums obtained from ever-finer partitions of [a, b]. The Riemann sums converge in the uniform operator topology. We define

\oint_{\Gamma} g = \sum_{i=1}^{m} \oint_{\gamma_i} g(\zeta) \, d\zeta.

In the definition of the functional calculus, f is assumed to be holomorphic in an open neighborhood of Γ. It will be shown below that the resolvent mapping is holomorphic on the resolvent set. Therefore, the integral makes sense.
The resolvent mapping
The mapping ζ → (ζ − T)−1 is called the resolvent mapping of T. It is defined on the complement of σ(T), which is called the resolvent set of T and will be denoted by ρ(T).
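The defining contour integral can also be evaluated numerically. The sketch below is an illustration under assumed choices (f = exp, a 2 × 2 matrix, and a circular contour of radius 3, which encloses the spectrum): it discretizes the integral with the trapezoidal rule, which converges rapidly for periodic analytic integrands, and compares the result with SciPy's expm.

```python
import numpy as np
from scipy.linalg import expm

T = np.array([[1.0, 2.0],
              [0.0, -0.5]])      # eigenvalues 1 and -0.5
N, r = 200, 3.0                  # N nodes on a circle of radius r > spectral radius
zs = r * np.exp(2j * np.pi * np.arange(N) / N)
I2 = np.eye(2)

# f(T) = (1/2πi) ∮ f(ζ)(ζI − T)^{-1} dζ; on ζ(θ) = r e^{iθ} we have dζ = iζ dθ,
# so the trapezoidal rule reduces to the simple average below.
fT = sum(np.exp(z) * np.linalg.inv(z * I2 - T) * z for z in zs) / N

print(np.allclose(fT.real, expm(T)))  # True to near machine precision
```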
Much of classical function theory depends on the properties of the integral

\frac{1}{2\pi i} \oint_{\Gamma} \frac{f(\zeta)}{\zeta - z} \, d\zeta.

The holomorphic functional calculus is similar in that the resolvent mapping plays a crucial role in obtaining the properties one requires from a nice functional calculus. This subsection outlines properties of the resolvent map that are essential in this context.
The first resolvent formula
Direct calculation shows, for z1, z2 ∈ ρ(T),

(z_1 - T)^{-1} - (z_2 - T)^{-1} = (z_1 - T)^{-1} (z_2 - z_1) (z_2 - T)^{-1}.

Therefore,

(z_1 - T)^{-1} (z_2 - T)^{-1} = \frac{(z_1 - T)^{-1} - (z_2 - T)^{-1}}{z_2 - z_1}.

This equation is called the first resolvent formula. The formula shows that (z1 − T)−1 and (z2 − T)−1 commute, which hints at the fact that the image of the functional calculus will be a commutative algebra. Letting z2 → z1 shows that the resolvent map is (complex-)differentiable at each z1 ∈ ρ(T); so the integral in the expression of the functional calculus converges in L(X).
Analyticity
A stronger statement than differentiability can be made regarding the resolvent map. The resolvent set ρ(T) is actually an open set on which the resolvent map is analytic. This property will be used in subsequent arguments for the functional calculus. To verify this claim, let z1 ∈ ρ(T) and notice that the formal expression

\frac{1}{z_2 - T} = \frac{1}{z_1 - T} \cdot \frac{1}{1 - \frac{z_1 - z_2}{z_1 - T}}

suggests we consider

(z_2 - T)^{-1} = (z_1 - T)^{-1} \sum_{n=0}^{\infty} \left( (z_1 - z_2)(z_1 - T)^{-1} \right)^n

for (z2 − T)−1. The above series converges in L(X), which implies the existence of (z2 − T)−1, if

|z_1 - z_2| < \frac{1}{\lVert (z_1 - T)^{-1} \rVert}.

Therefore, the resolvent set ρ(T) is open, and the power series expression on an open disk centered at z1 ∈ ρ(T) shows that the resolvent map is analytic on ρ(T).
Neumann series
Another expression for (z − T)−1 will also be useful. The formal expression

\frac{1}{z - T} = \frac{1}{z} \cdot \frac{1}{1 - \frac{T}{z}}

leads one to consider

(z - T)^{-1} = \sum_{n=0}^{\infty} \frac{T^n}{z^{n+1}}.

This series, the Neumann series, converges to (z − T)−1 if

\lVert T \rVert < |z|.

Compactness of σ(T)
From the last two properties of the resolvent we can deduce that the spectrum σ(T) of a bounded operator T is a compact subset of C. Therefore, for any open set D such that σ(T) ⊂ D, there exists a positively oriented and smooth system of Jordan curves Γ = {γ1, ..., γm} such that σ(T) is in the inside of Γ and the complement of D is contained in the outside of Γ. Hence, for the definition of the functional calculus, a suitable family of Jordan curves can indeed be found for each f that is holomorphic on some D.
Well-definedness
The previous discussion has shown that the integral makes sense, i.e. a suitable collection Γ of Jordan curves does exist for each f, and the integral does converge in the appropriate sense. What has not been shown is that the definition of the functional calculus is unambiguous, i.e. does not depend on the choice of Γ. This issue we now try to resolve.
A preliminary fact
For a collection of Jordan curves Γ = {γ1, ..., γm} and a point a ∈ C, the winding number of Γ with respect to a is the sum of the winding numbers of its elements. If we define

n(\Gamma, a) = \sum_{i=1}^{m} n(\gamma_i, a),

the following theorem is due to Cauchy:
Theorem. Let G ⊂ C be an open set and Γ ⊂ G. If g : G → C is holomorphic, and for all a in the complement of G, n(Γ, a) = 0, then the contour integral of g on Γ is zero.
We will need the vector-valued analog of this result when g takes values in L(X). To this end, let g : G → L(X) be holomorphic, with the same assumptions on Γ. The idea is to use the dual space L(X)* of L(X) and pass to Cauchy's theorem for the scalar case. Consider the integral

\oint_{\Gamma} g(\zeta) \, d\zeta \in L(X);

if we can show that every φ ∈ L(X)* vanishes on this integral, then the integral itself has to be zero. Since φ is bounded and the integral converges in norm, we have

\varphi \left( \oint_{\Gamma} g(\zeta) \, d\zeta \right) = \oint_{\Gamma} \varphi(g(\zeta)) \, d\zeta.

But g is holomorphic, hence the composition φ(g) : G ⊂ C → C is holomorphic, and therefore by Cauchy's theorem

\oint_{\Gamma} \varphi(g(\zeta)) \, d\zeta = 0.

Main argument
The well-definedness of the functional calculus now follows as an easy consequence. Let D be an open set containing σ(T).
Suppose Γ = {γi} and Ω = {ωj} are two (finite) collections of Jordan curves satisfying the assumption given for the functional calculus. We wish to show

\frac{1}{2\pi i} \oint_{\Gamma} f(\zeta)(\zeta - T)^{-1} \, d\zeta = \frac{1}{2\pi i} \oint_{\Omega} f(\zeta)(\zeta - T)^{-1} \, d\zeta.

Let Ω′ be obtained from Ω by reversing the orientation of each ωj; then

\oint_{\Omega'} f(\zeta)(\zeta - T)^{-1} \, d\zeta = -\oint_{\Omega} f(\zeta)(\zeta - T)^{-1} \, d\zeta.

Consider the union of the two collections Γ ∪ Ω′. Both Γ ∪ Ω′ and σ(T) are compact. So there is some open set U containing Γ ∪ Ω′ such that σ(T) lies in the complement of U. Any a in the complement of U has winding number n(Γ ∪ Ω′, a) = 0, and the function

\zeta \mapsto f(\zeta)(\zeta - T)^{-1}

is holomorphic on U. So the vector-valued version of Cauchy's theorem gives

\oint_{\Gamma \cup \Omega'} f(\zeta)(\zeta - T)^{-1} \, d\zeta = 0,

i.e.

\oint_{\Gamma} f(\zeta)(\zeta - T)^{-1} \, d\zeta = \oint_{\Omega} f(\zeta)(\zeta - T)^{-1} \, d\zeta.

Hence the functional calculus is well-defined. Consequently, if f1 and f2 are two holomorphic functions defined on neighborhoods D1 and D2 of σ(T) and they are equal on an open set containing σ(T), then f1(T) = f2(T). Moreover, even though D1 may not equal D2, the operator (f1 + f2)(T) is well-defined. The same holds for the definition of (f1·f2)(T).
On the assumption that f be holomorphic over an open neighborhood of σ(T)
So far the full strength of this assumption has not been used. For the convergence of the integral, only continuity was used. For well-definedness, we only needed f to be holomorphic on an open set U containing the contours Γ ∪ Ω′ but not necessarily σ(T). The assumption will be applied in its entirety in showing the homomorphism property of the functional calculus.
Properties
Polynomial case
The linearity of the map f ↦ f(T) follows from the convergence of the integral and the fact that linear operations on a Banach space are continuous. We recover the polynomial functional calculus when f(z) = Σ0 ≤ i ≤ m ai z^i is a polynomial. To prove this, it is sufficient to show, for k ≥ 0 and f(z) = z^k, that f(T) = T^k, i.e.

\frac{1}{2\pi i} \oint_{\Gamma} \zeta^k (\zeta - T)^{-1} \, d\zeta = T^k

for any suitable Γ enclosing σ(T). Choose Γ to be a circle of radius greater than the operator norm of T. As stated above, on such Γ the resolvent map admits the power series representation

(\zeta - T)^{-1} = \sum_{n=0}^{\infty} \frac{T^n}{\zeta^{n+1}}.

Substituting gives

f(T) = \frac{1}{2\pi i} \oint_{\Gamma} \sum_{n=0}^{\infty} \frac{\zeta^k T^n}{\zeta^{n+1}} \, d\zeta,

which is

f(T) = \sum_{n=0}^{\infty} T^n \cdot \frac{1}{2\pi i} \oint_{\Gamma} \zeta^{k-n-1} \, d\zeta = \sum_{n=0}^{\infty} T^n \delta_{nk} = T^k.

Here δ is the Kronecker delta symbol.
The homomorphism property
For any f1 and f2 satisfying the appropriate assumptions, the homomorphism property states

f_1(T) \, f_2(T) = (f_1 \cdot f_2)(T).

We sketch an argument which invokes the first resolvent formula and the assumptions placed on f. First we choose the Jordan curves such that Γ1 lies in the inside of Γ2. The reason for this will become clear below. Start by calculating directly

f_1(T) f_2(T) = \frac{1}{(2\pi i)^2} \oint_{\Gamma_1} \oint_{\Gamma_2} f_1(\zeta) f_2(\omega) (\zeta - T)^{-1} (\omega - T)^{-1} \, d\omega \, d\zeta
= \frac{1}{(2\pi i)^2} \oint_{\Gamma_1} \oint_{\Gamma_2} f_1(\zeta) f_2(\omega) \, \frac{(\zeta - T)^{-1} - (\omega - T)^{-1}}{\omega - \zeta} \, d\omega \, d\zeta
= \frac{1}{(2\pi i)^2} \oint_{\Gamma_1} f_1(\zeta)(\zeta - T)^{-1} \left( \oint_{\Gamma_2} \frac{f_2(\omega)}{\omega - \zeta} \, d\omega \right) d\zeta - \frac{1}{(2\pi i)^2} \oint_{\Gamma_2} f_2(\omega)(\omega - T)^{-1} \left( \oint_{\Gamma_1} \frac{f_1(\zeta)}{\omega - \zeta} \, d\zeta \right) d\omega.

The last line follows from the fact that ω ∈ Γ2 lies outside of Γ1 and f1 is holomorphic on some open neighborhood of σ(T), and therefore the second term vanishes. Therefore, we have:

f_1(T) f_2(T) = \frac{1}{2\pi i} \oint_{\Gamma_1} f_1(\zeta) f_2(\zeta) (\zeta - T)^{-1} \, d\zeta = (f_1 \cdot f_2)(T).

Continuity with respect to compact convergence
Let G ⊂ C be open with σ(T) ⊂ G. Suppose a sequence {fk} of holomorphic functions on G converges uniformly on compact subsets of G (this is sometimes called compact convergence). Then {fk(T)} is convergent in L(X). Assume for simplicity that Γ consists of only one Jordan curve. We estimate

\lVert f_k(T) - f_l(T) \rVert \le \frac{1}{2\pi} \oint_{\Gamma} |f_k(\zeta) - f_l(\zeta)| \, \lVert (\zeta - T)^{-1} \rVert \, |d\zeta|.

By combining the uniform convergence assumption and various continuity considerations, we see that the above tends to 0 as k, l → ∞. So {fk(T)} is Cauchy, therefore convergent.
Uniqueness
To summarize, we have shown that the holomorphic functional calculus, f → f(T), has the following properties:
It extends the polynomial functional calculus.
It is an algebra homomorphism from the algebra of holomorphic functions defined on a neighborhood of σ(T) to L(X).
It preserves uniform convergence on compact sets.
It can be proved that a calculus satisfying the above properties is unique.
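The homomorphism and polynomial properties can be spot-checked numerically. Here is a small sketch, assuming SciPy's general matrix-function routine funm; the test matrix and the functions f1, f2 are arbitrary choices, not singled out by the theory.

```python
import numpy as np
from scipy.linalg import funm

rng = np.random.default_rng(0)
T = 0.3 * rng.standard_normal((5, 5))   # a generic small matrix

# Homomorphism: (f1·f2)(T) should equal f1(T) f2(T).
lhs = funm(T, lambda z: np.exp(z) * np.cos(z))
rhs = funm(T, np.exp) @ funm(T, np.cos)
print(np.allclose(lhs, rhs))            # True up to rounding

# Polynomial case: f(z) = z^3 should reproduce T^3.
print(np.allclose(funm(T, lambda z: z ** 3), T @ T @ T))
```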
We note that everything discussed so far holds verbatim if the family of bounded operators L(X) is replaced by a Banach algebra A. The functional calculus can be defined in exactly the same way for an element in A.
Spectral considerations
Spectral mapping theorem
It is known that the spectral mapping theorem holds for the polynomial functional calculus: for any polynomial p, σ(p(T)) = p(σ(T)). This can be extended to the holomorphic calculus. To show f(σ(T)) ⊂ σ(f(T)), let μ be any complex number. By a result from complex analysis, there exists a function g holomorphic on a neighborhood of σ(T) such that

f(z) - f(\mu) = (z - \mu) \, g(z).

According to the homomorphism property, f(T) − f(μ) = (T − μ)g(T). Therefore, μ ∈ σ(T) implies f(μ) ∈ σ(f(T)). For the other inclusion, if μ is not in f(σ(T)), then the functional calculus is applicable to

g(z) = \frac{1}{f(z) - \mu}.

So g(T)(f(T) − μ) = I. Therefore, μ does not lie in σ(f(T)).
Spectral projections
The underlying idea is as follows. Suppose that K is a subset of σ(T) and U, V are disjoint neighbourhoods of K and σ(T) \ K respectively. Define e(z) = 1 if z ∈ U and e(z) = 0 if z ∈ V. Then e is a holomorphic function with [e(z)]2 = e(z), and so, for a suitable contour Γ which lies in U ∪ V and which encloses σ(T), the linear operator

e(T) = \frac{1}{2\pi i} \oint_{\Gamma} e(\zeta) (\zeta - T)^{-1} \, d\zeta

will be a bounded projection that commutes with T and provides a great deal of useful information. It transpires that this scenario is possible if and only if K is both open and closed in the subspace topology on σ(T). Moreover, the set V can be safely ignored, since e is zero on it and therefore makes no contribution to the integral. The projection e(T) is called the spectral projection of T at K and is denoted by P(K;T). Thus every subset K of σ(T) that is both open and closed in the subspace topology has an associated spectral projection given by

P(K; T) = \frac{1}{2\pi i} \oint_{\Gamma} (\zeta - T)^{-1} \, d\zeta,

where Γ is a contour that encloses K but no other points of σ(T). Since P = P(K;T) is bounded and commutes with T, it enables T to be expressed in the form U ⊕ V, where U = T|PX and V = T|(1−P)X. Both PX and (1 − P)X are invariant subspaces of T; moreover, σ(U) = K and σ(V) = σ(T) \ K. A key property is mutual orthogonality: if L is another open and closed set in the subspace topology on σ(T), then P(K;T)P(L;T) = P(L;T)P(K;T) = P(K ∩ L;T), which is zero whenever K and L are disjoint.
Spectral projections have numerous applications. Any isolated point of σ(T) is both open and closed in the subspace topology and therefore has an associated spectral projection. When X has finite dimension, σ(T) consists of isolated points, and the resultant spectral projections lead to a variant of the Jordan normal form wherein all the Jordan blocks corresponding to the same eigenvalue are consolidated. In other words, there is precisely one block per distinct eigenvalue. The next section considers this decomposition in more detail.
Sometimes spectral projections inherit properties from their parent operators. For example, if T is a positive matrix with spectral radius r, then the Perron–Frobenius theorem asserts that r ∈ σ(T). The associated spectral projection P = P(r;T) is also positive, and by mutual orthogonality no other spectral projection can have a positive row or column. In fact TP = rP and (T/r)^n → P as n → ∞, so this projection P (which is called the Perron projection) approximates (T/r)^n as n increases, and each of its columns is an eigenvector of T. More generally, if T is a compact operator, then all non-zero points in σ(T) are isolated, and so any finite subset of them can be used to decompose T. The associated spectral projection always has finite rank.
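The spectral projection P(K;T) is likewise computable by contour integration. A sketch under simple assumed choices (a diagonal matrix with spectrum {1, 2, 10}, K = {1, 2}, and a circle enclosing K but not the rest of the spectrum):

```python
import numpy as np

T = np.diag([1.0, 2.0, 10.0])    # spectrum {1, 2, 10}; take K = {1, 2}
N, c, r = 400, 1.5, 1.0          # circle of centre c and radius r encloses K only
zs = c + r * np.exp(2j * np.pi * np.arange(N) / N)
I3 = np.eye(3)

# P(K;T) = (1/2πi) ∮ (ζI − T)^{-1} dζ; on this circle dζ = i(ζ − c) dθ,
# so the trapezoidal rule again reduces to a simple average.
P = sum(np.linalg.inv(z * I3 - T) * (z - c) for z in zs).real / N

print(np.allclose(P @ P, P), np.allclose(P @ T, T @ P))  # True True
print(np.round(P).astype(int))   # diag(1, 1, 0): projects onto the K part
```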
Those operators in L(X) with spectral characteristics similar to those of compact operators are known as Riesz operators. Many classes of Riesz operators (including the compact operators) are ideals in L(X) and provide a rich field for research. However, if X is a Hilbert space, there is exactly one closed ideal sandwiched between the Riesz operators and those of finite rank. Much of the foregoing discussion can be set in the more general context of a complex Banach algebra. Here spectral projections are referred to as spectral idempotents, since there may no longer be a space for them to project onto.
Invariant subspace decomposition
If the spectrum σ(T) is not connected, X can be decomposed into invariant subspaces of T using the functional calculus. Let σ(T) be a disjoint union

\sigma(T) = F_1 \cup F_2 \cup \cdots \cup F_m.

Define ei to be 1 on some neighborhood that contains only the component Fi and 0 elsewhere. By the homomorphism property, ei(T) is a projection for all i. In fact it is just the spectral projection P(Fi;T) described above. The relation ei(T)T = Tei(T) means the range of each ei(T), denoted by Xi, is an invariant subspace of T. Since

\sum_{i} e_i(T) = I,

X can be expressed in terms of these complementary subspaces:

X = X_1 + X_2 + \cdots + X_m.

Similarly, if Ti is T restricted to Xi, then

\sigma(T_i) = F_i.

Consider the direct sum

X' = X_1 \oplus X_2 \oplus \cdots \oplus X_m.

With the norm

\lVert (x_1, \ldots, x_m) \rVert = \sum_{i} \lVert x_i \rVert,

X′ is a Banach space. The mapping R : X′ → X defined by

R(x_1, \ldots, x_m) = \sum_{i} x_i

is a Banach space isomorphism, and we see that

R^{-1} T R = T_1 \oplus T_2 \oplus \cdots \oplus T_m.

This can be viewed as a block diagonalization of T. When X is finite-dimensional, σ(T) = {λi} is a finite set of points in the complex plane. Choose ei to be 1 on an open disc containing only λi from the spectrum. The corresponding block-diagonal matrix

T_1 \oplus T_2 \oplus \cdots \oplus T_m

is the Jordan canonical form of T.
Related results
With stronger assumptions, when T is a normal operator acting on a Hilbert space, the domain of the functional calculus can be broadened. When comparing the two results, a rough analogy can be made with the relationship between the spectral theorem for normal matrices and the Jordan canonical form. When T is a normal operator, a continuous functional calculus can be obtained, that is, one can evaluate f(T) with f being a continuous function defined on σ(T). Using the machinery of measure theory, this can be extended to functions which are only measurable (see Borel functional calculus). In that context, if E ⊂ σ(T) is a Borel set and 1E is the characteristic function of E, the projection operator 1E(T) is a refinement of the ei(T) discussed above.
The Borel functional calculus extends to unbounded self-adjoint operators on a Hilbert space. In slightly more abstract language, the holomorphic functional calculus can be extended to any element of a Banach algebra, using essentially the same arguments as above. Similarly, the continuous functional calculus holds for normal elements in any C*-algebra, and the measurable functional calculus for normal elements in any von Neumann algebra.
Unbounded operators
A holomorphic functional calculus can be defined in a similar fashion for unbounded closed operators with non-empty resolvent set.
See also
Helffer–Sjöstrand formula
Resolvent formalism
Jordan canonical form, where the finite-dimensional case is discussed in some detail.
References
N. Dunford and J.T. Schwartz, Linear Operators, Part I: General Theory, Interscience, 1958.
Steven G. Krantz, Dictionary of Algebra, Arithmetic, and Trigonometry, CRC Press, 2000.
Israel Gohberg, Seymour Goldberg and Marinus A. Kaashoek, Classes of Linear Operators: Volume 1, Birkhauser, 1991.
holomorphic functions
Holomorphic functional calculus
[ "Mathematics" ]
5,152
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Functional calculus" ]
11,084,673
https://en.wikipedia.org/wiki/Motorola%20i835w
The Motorola i835w is a phone in the Motorola i835 series. References
Motorola i835w
[ "Technology" ]
24
[ "Mobile technology stubs", "Mobile phone stubs" ]
11,084,741
https://en.wikipedia.org/wiki/Global%20Strategy%20for%20Plant%20Conservation
The Global Strategy for Plant Conservation (GSPC) is a program of the UN's Convention on Biological Diversity founded in 1999. The GSPC seeks to slow the pace of plant extinction around the world through a strategy of five objectives.
History
The Global Strategy for Plant Conservation (GSPC) began as a grass-roots movement in 1999 with discussions at the 16th International Botanical Congress in St. Louis. A group of specialists subsequently met in Gran Canaria and issued the 'Gran Canaria Declaration Calling for a Global Plant Conservation Strategy'. Following extensive consultations, the fleshed-out GSPC was adopted by the Parties to the Convention on Biological Diversity (CBD) in April 2002. The initial version of the GSPC sought to slow the pace of plant extinction around the world by 2010, with Target 1 of the Strategy calling for the completion of "a widely accessible working list of all known plant species, as a step towards a complete world Flora". In 2010, Version 1 of The Plant List was launched, intended to be comprehensive for species of vascular plants (flowering plants, conifers, ferns and their allies) and bryophytes (mosses and liverworts). In 2010, the GSPC targets were updated through an extensive consultation process within the CBD, with revised targets for 2020. In 2012, the Missouri Botanical Garden, The New York Botanical Garden, the Royal Botanic Garden Edinburgh and the Royal Botanic Gardens, Kew agreed to collaborate to develop a World Flora Online in response to the revised GSPC Target 1.
Vision
Our lives depend on plants: without them, ecosystems would cease to function, and our survival and the survival of all species are tied to plants. The Global Strategy for Plant Conservation seeks to limit the rate of loss of plant diversity while maintaining a positive vision of its efforts and results. It envisions a sustainable future in which plant species are able to thrive and be maintained (including their survival, the preservation of their communities and habitats, their gene pools and their ecological associations) alongside supportive human activities, and in which plant diversity in turn improves and supports livelihoods and well-being.
Mission
The Global Strategy for Plant Conservation is a platform that gathers efforts at all levels (local, national, regional and global) in order to strengthen the case for conservation and sustainability and to implement steps towards the necessary awareness and action.
Implementation
Sufficient human, technical and financial resources are provided for within the strategy in order to prevent the program from slowing down in the event of limited funds or a lack of training. Some of the implementation strategies involve a range of actors:
(i) International initiatives (e.g., international conventions, intergovernmental organizations, United Nations agencies, multilateral aid agencies);
(ii) Members of the Global Partnership for Plant Conservation;
(iii) Conservation and research organizations (including protected-area management boards, botanic gardens, gene banks, universities, research institutes, non-governmental organizations and networks of non-governmental organizations);
(iv) Communities and major groups (including indigenous and local communities, farmers, women, youth);
(v) Governments (central, regional, local authorities); and
(vi) The private sector.
Goals
At the heart of the GSPC are five goals, expressed as a total of 16 targets.
The five objectives and their 16 targets for 2020 are:
Objective I: Plant diversity is well understood, documented and recognized
Target 1: An online flora of all known plants.
Target 2: An assessment of the conservation status of all known plant species, as far as possible, to guide conservation action.
Target 3: Information, research and associated outputs, and methods necessary to implement the Strategy developed and shared.
Objective II: Plant diversity is urgently and effectively conserved
Target 4: At least 15% of each ecological region or vegetation type secured through effective management and/or restoration.
Target 5: At least 75% of the most important areas for plant diversity of each ecological region protected with effective management in place for conserving plants and their genetic diversity.
Target 6: At least 75% of production lands in each sector managed sustainably, consistent with the conservation of plant diversity.
Target 7: At least 75% of known threatened plant species conserved in situ.
Target 8: At least 75% of threatened plant species in ex situ collections, preferably in the country of origin, and at least 20 per cent available for recovery and restoration programmes.
Target 9: 70% of the genetic diversity of crops including their wild relatives and other socio-economically valuable plant species conserved, while respecting, preserving and maintaining associated indigenous and local knowledge.
Target 10: Effective management plans in place to prevent new biological invasions and to manage important areas for plant diversity that are invaded.
Objective III: Plant diversity is used in a sustainable and equitable manner
Target 11: No species of wild flora endangered by international trade.
Target 12: All wild harvested plant-based products sourced sustainably.
Target 13: Indigenous and local knowledge innovations and practices associated with plant resources maintained or increased, as appropriate, to support customary use, sustainable livelihoods, local food security and health care.
Objective IV: Education and awareness about plant diversity, its role in sustainable livelihoods and importance to all life on earth is promoted
Target 14: The importance of plant diversity and the need for its conservation incorporated into communication, education and public awareness programmes.
Objective V: The capacities and public engagement necessary to implement the Strategy have been developed
Target 15: The number of trained people working with appropriate facilities sufficient according to national needs, to achieve the targets of this Strategy.
Target 16: Institutions, networks and partnerships for plant conservation established or strengthened at national, regional and international levels to achieve the targets of this Strategy.
The GSPC was put through a formal review of progress by the Convention on Biological Diversity, culminating in major discussions in May 2008 in Bonn, Germany, at the 9th Conference of the Parties to the CBD.
References
External links
Global Strategy for Plant Conservation: official site
Botanic Gardens Conservation International: key contributor to the GSPC
Global Partnership for Plant Conservation: NGO partnership aiding in achieving the GSPC
Gran Canaria Declaration on Climate Change and Plant Conservation: background information with a link to a downloadable PDF of the Declaration
The GSPC - A Plan to Save the World's Plant Species: link to downloadable PDFs of the GSPC 2010 and GSPC 2020 in several languages
World Flora Online - An Online Flora of All Known Plants: project of the World Flora Online Consortium
Nature conservation organizations United Nations Environment Programme Convention on Biological Diversity
Global Strategy for Plant Conservation
[ "Biology" ]
1,311
[ "Convention on Biological Diversity", "Biodiversity" ]
11,084,792
https://en.wikipedia.org/wiki/Haradh
Haradh () is a large town and industrial city in the Ahsa Governorate in the Eastern Province of Saudi Arabia, to the southwest of Hofuf. Due to its location above the Ghawar oil field, several oil and gas wells, along with oil refineries, are located in the area, all operated by Saudi Aramco. The town is also the site of one of the largest integrated dairy farms in the world, owned and operated by the National Agricultural Development Company (NADEC).
Habitat
The site is sandy-gravel desert: naturally low vegetation covers its compact gravel, sand and clay soils, creating a habitat for larks and wheatears, and for reptiles including the dhub.
History
The initial discovery of the southern part of the Ghawar field was in 1949 at the Haradh field, where American geologist Ernie Berg mapped the surface of the Haradh anticline using the ordinary, tried-and-true plane table method.
Haradh Gas Plant
Haradh is located above the massive Ghawar Field. Saudi Aramco owns and operates all oil infrastructure in the area, which produces approximately 1,000,000 barrels (159 million liters) of petroleum a day. Saudi Aramco also operates the Haradh Gas Plant Department, which covers an area of 8.3 km², and is planning to install gas compression facilities at Haradh. The Haradh Gas Plant is capable of delivering 1.5 billion cubic feet a day of sales gas to Saudi Arabia's Master Gas System and includes a gas-oil separation plant (GOSP) capable of stabilising 300,000 barrels per day (bpd) of Arabian Light crude oil. About 87 wells feed the Haradh plant. The wells are connected to the plant via manifolds at Haradh, Wagr and Tinat. Sweet and sour gas from the wells is transported through the Haradh manifold, located 12 km from the plant.
Transport
There is a small airport in Haradh for the exclusive use of Saudi Aramco, offering flights for its employees to Al-Hasa and Dammam. The airport occupies an area of 1.1 km² next to a residential camp and is approximately 6 km northeast of the existing Haradh Gas Plant (HGP) facilities. Haradh is well connected by road: Riyadh is approximately 290 km away (about 3 hours 16 minutes via Route 10), and Al Hasa is approximately 174 km away (about 1 hour 50 minutes via Route 75).
References
External links
News report
Populated places in Eastern Province, Saudi Arabia Saudi Aramco oil and gas fields
Haradh
[ "Chemistry" ]
570
[ "Petroleum", "Petroleum stubs" ]
11,084,869
https://en.wikipedia.org/wiki/Gravitational-wave%20observatory
A gravitational-wave detector (used in a gravitational-wave observatory) is any device designed to measure tiny distortions of spacetime called gravitational waves. Since the 1960s, various kinds of gravitational-wave detectors have been built and constantly improved. The present-day generation of laser interferometers has reached the necessary sensitivity to detect gravitational waves from astronomical sources, thus forming the primary tool of gravitational-wave astronomy. The first direct observation of gravitational waves was made in September 2015 by the Advanced LIGO observatories, which detected gravitational waves with wavelengths of a few thousand kilometers from a merging binary of stellar black holes. In June 2023, four pulsar timing array collaborations presented the first strong evidence for a gravitational wave background of wavelengths spanning light years, most likely from many binaries of supermassive black holes.
Challenge
The direct detection of gravitational waves is complicated by the extraordinarily small effect the waves produce on a detector. The amplitude of a spherical wave falls off as the inverse of the distance from the source. Thus, even waves from extreme systems such as merging binary black holes die out to a very small amplitude by the time they reach the Earth. Astrophysicists predicted that some gravitational waves passing the Earth might produce differential motion on the order of 10−18 m in a LIGO-size instrument.
Resonant mass antennas
A simple device to detect the expected wave motion is called a resonant mass antenna: a large, solid body of metal isolated from outside vibrations. This type of instrument was the first type of gravitational-wave detector. Strains in space due to an incident gravitational wave excite the body's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. However, up to 2018, no gravitational wave observation that would have been widely accepted by the research community had been made on any type of resonant mass antenna, despite certain claims of observation by researchers operating the antennas.
There are three types of resonant mass antenna that have been built: room-temperature bar antennas, cryogenically cooled bar antennas and cryogenically cooled spherical antennas. The earliest type was the room-temperature bar-shaped antenna called a Weber bar; these were dominant in the 1960s and 1970s, and many were built around the world. It was claimed by Weber and some others in the late 1960s and early 1970s that these devices detected gravitational waves; however, other experimenters failed to detect gravitational waves using them, and a consensus developed that Weber bars would not be a practical means to detect gravitational waves.
The second generation of resonant mass antennas, developed in the 1980s and 1990s, were the cryogenic bar antennas, which are also sometimes called Weber bars. In the 1990s there were five major cryogenic bar antennas: AURIGA (Padua, Italy), NAUTILUS (Rome, Italy), EXPLORER (CERN, Switzerland), ALLEGRO (Louisiana, US) and NIOBE (Perth, Australia). In 1997, these five antennas, run by four research groups, formed the International Gravitational Event Collaboration (IGEC). While there were several cases of unexplained deviations from the background signal, there were no confirmed instances of the observation of gravitational waves with these detectors.
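The operating band of a bar antenna is set by its fundamental acoustic resonance, roughly f0 = v/(2L) for a bar of length L and sound speed v. A back-of-the-envelope sketch in Python (the numbers are illustrative assumptions, not measured values from any particular instrument):

```python
# Fundamental longitudinal resonance of a metal bar: f0 = v / (2 * L)
v_aluminium = 5100.0   # speed of sound in aluminium, m/s (approximate)
L = 1.5                # bar length in metres, roughly Weber-bar scale (assumed)

f0 = v_aluminium / (2 * L)
print(f"f0 = {f0:.0f} Hz")  # ~1700 Hz, i.e. in the kHz band such bars targeted
```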
In the 1980s, there was also a cryogenic bar antenna called ALTAIR, which, along with a room-temperature bar antenna called GEOGRAV, was built in Italy as a prototype for later bar antennas. Operators of the GEOGRAV detector claimed to have observed gravitational waves coming from the supernova SN1987A (along with another room-temperature bar antenna), but these claims were not adopted by the wider community. These modern cryogenic forms of the Weber bar operated with superconducting quantum interference devices to detect vibration (ALLEGRO, for example). Some of them continued in operation after the interferometric antennas started to reach astrophysical sensitivity, such as AURIGA, an ultracryogenic resonant cylindrical bar gravitational wave detector based at INFN in Italy. The AURIGA and LIGO teams collaborated in joint observations.
In the 2000s, the third generation of resonant mass antennas, the spherical cryogenic antennas, emerged. Four spherical antennas were proposed around the year 2000; two of them were built as downsized versions, and the others were cancelled. The proposed antennas were GRAIL (Netherlands, downsized to MiniGRAIL), TIGA (US, small prototypes made), SFERA (Italy) and Graviton (Brazil, downsized to Mario Schenberg). The two downsized antennas, MiniGRAIL and Mario Schenberg, are similar in design and are operated as a collaborative effort. MiniGRAIL is based at Leiden University and consists of an exactingly machined sphere that is cryogenically cooled. The spherical configuration allows for equal sensitivity in all directions and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring the deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers.
The current consensus is that cryogenic resonant mass detectors are not sensitive enough to detect anything but extremely powerful (and thus very rare) gravitational waves. As of 2020, no detection of gravitational waves by cryogenic resonant antennas has occurred.
Laser interferometers
A more sensitive detector uses laser interferometry to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance, as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive ground-based laser interferometer is LIGO, the Laser Interferometer Gravitational Wave Observatory. LIGO is famous as the site of the first confirmed detections of gravitational waves in 2015. LIGO has two detectors: one in Livingston, Louisiana; the other at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 4 km in length. These are at 90-degree angles to each other, with the light passing through vacuum tubes running the arms' entire 4 km length. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which a Michelson interferometer is most sensitive. Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10−18 meters, and LIGO is designed to measure differential arm-length changes of this order.
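The arm-length figure quoted above follows directly from the definition of strain, h = ΔL/L. A quick check with illustrative numbers (the strain value is an assumed round figure for a strong source, not a measured one):

```python
# Strain h = dL / L relates arm length to the measurable length change.
L = 4.0e3      # LIGO arm length, metres
h = 2.5e-22    # illustrative strain amplitude for a strong source (assumed)

dL = h * L
print(f"dL = {dL:.1e} m")  # 1.0e-18 m, the order of magnitude quoted above
```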
Upgrades to LIGO and other detectors such as Virgo, GEO600 and TAMA 300 should increase the sensitivity further, and the next generation of instruments (Advanced LIGO Plus and Advanced Virgo Plus) will be more sensitive still. Another highly sensitive interferometer, KAGRA, began operations in 2020. A key point is that a ten-times increase in sensitivity (radius of "reach") increases the volume of space accessible to the instrument by a factor of one thousand. This increases the rate at which detectable signals should be seen from one per tens of years of observation to tens per year.
Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly. One analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these "stationary" (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other "non-stationary" noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All these must be taken into account and excluded by analysis before a detection may be considered a true gravitational-wave event.
Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to shot noise, as well as artifacts caused by cosmic rays and solar wind.
Einstein@Home
In some sense, the easiest signals to detect should be constant sources. Supernovae and neutron star or black hole mergers should have larger amplitudes and be more interesting, but the waves generated will be more complicated. The waves given off by a spinning, bumpy neutron star would be "monochromatic", like a pure tone in acoustics: they would not change very much in amplitude or frequency. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
Pulsar timing arrays
A different approach to detecting gravitational waves is used by pulsar timing arrays, such as the European Pulsar Timing Array, the North American Nanohertz Observatory for Gravitational Waves and the Parkes Pulsar Timing Array. These projects propose to detect gravitational waves by looking at the effect these waves have on the incoming signals from an array of 20–50 well-known millisecond pulsars.
As a gravitational wave passing through the Earth contracts space in one direction and expands space in another, the times of arrival of pulsar signals from those directions are shifted correspondingly. By studying a fixed set of pulsars across the sky, these arrays should be able to detect gravitational waves in the nanohertz range. Such signals are expected to be emitted by pairs of merging supermassive black holes. In June 2023, four pulsar timing array collaborations, the three mentioned above and the Chinese Pulsar Timing Array, presented independent but similar evidence for a stochastic background of nanohertz gravitational waves. The source of this background could not yet be identified.
Cosmic microwave background
The cosmic microwave background, radiation left over from when the Universe cooled sufficiently for the first atoms to form, can contain the imprint of gravitational waves from the very early Universe. The microwave radiation is polarized. The pattern of polarization can be split into two classes called E-modes and B-modes. This is in analogy to electrostatics, where the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes can be created by a variety of processes, but the B-modes can only be produced by gravitational lensing, gravitational waves, or scattering from dust. On 17 March 2014, astronomers at the Harvard-Smithsonian Center for Astrophysics announced the apparent detection of the imprint of gravitational waves in the cosmic microwave background, which, if confirmed, would have provided strong evidence for inflation and the Big Bang. However, on 19 June 2014, lowered confidence in confirming the findings was reported, and on 19 September 2014 confidence was lowered further still. Finally, on 30 January 2015, the European Space Agency announced that the signal could be entirely attributed to dust in the Milky Way.
Novel detector designs
There are currently two detectors focusing on detections at the higher end of the gravitational-wave spectrum (10−7 to 105 Hz): one at the University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Two have been fabricated, and they are expected to be sensitive to small periodic spacetime strains, with sensitivity quoted as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is likewise expected to be sensitive to small periodic spacetime strains, with further improvements in sensitivity anticipated. The Chongqing University detector is planned to detect relic high-frequency gravitational waves with predicted typical parameters of ~1010 Hz (10 GHz) and h ~ 10−30 to 10−31.
The Levitated Sensor Detector is a proposed detector for gravitational waves with a frequency between 10 kHz and 300 kHz, potentially coming from primordial black holes. It will use optically levitated dielectric particles in an optical cavity.
A torsion-bar antenna (TOBA) is a proposed design composed of two long, thin bars, suspended as torsion pendula in a cross-like fashion, in which the differential angle is sensitive to tidal gravitational wave forces.
Detectors based on matter waves (atom interferometers) have also been proposed and are being developed; there have been proposals since the beginning of the 2000s. Atom interferometry is proposed to extend the detection bandwidth into the infrasound band (10 mHz – 10 Hz), where current ground-based detectors are limited by low-frequency gravity noise. A demonstrator project called the Matter wave laser based Interferometer Gravitation Antenna (MIGA) started construction in 2018 in the underground environment of LSBB (Rustrel, France).
List of gravitational wave detectors
Resonant mass detectors
First generation
Weber bar (1960s–80s)
Second generation
EXPLORER (CERN, 1985–)
GEOGRAV (Rome, 1980s–)
ALTAIR (Frascati, 1990–)
ALLEGRO (Baton Rouge, 1991–2008)
NIOBE (Perth, 1993–)
NAUTILUS (Rome, 1995–)
AURIGA (Padova, 1997–)
Third generation
Mario Schenberg (São Paulo, 2003–)
MiniGrail (Leiden, 2003–)
Interferometers
Interferometric gravitational-wave detectors are often grouped into generations based on the technology used. The interferometric detectors deployed in the 1990s and 2000s were proving grounds for many of the foundational technologies necessary for initial detection and are commonly referred to as the first generation. The second generation of detectors, operating in the 2010s, mostly at the same facilities like LIGO and Virgo, improved on these designs with sophisticated techniques such as cryogenic mirrors and the injection of squeezed vacuum. This led to the first unambiguous detection of a gravitational wave by Advanced LIGO in 2015. The third generation of detectors is currently in the planning phase, and seeks to improve over the second generation by achieving greater detection sensitivity and a larger range of accessible frequencies. All these experiments involve many technologies under continuous development over multiple decades, so the categorization by generation is necessarily only rough.
First generation
(1995) TAMA 300
(1995) GEO600
(2002) LIGO
(2006) CLIO
(2007) Virgo interferometer
Second generation
(2010) GEO High Frequency
(2015) Advanced LIGO
(2016) Advanced Virgo
(2019) KAGRA (LCGT)
(2023) IndIGO (LIGO-India)
Third generation
(2030s) Einstein Telescope
(2030s) Cosmic Explorer
Space based
(2034) Laser Interferometer Space Antenna (LISA; its technology demonstrator, LISA Pathfinder, was launched in December 2015)
(2030s?) Taiji (gravitational wave observatory)
(2035) TianQin
(2027) Deci-hertz Interferometer Gravitational wave Observatory (DECIGO)
Pulsar timing
(2005) Parkes Pulsar Timing Array
(2009) European Pulsar Timing Array
(2010) North American Nanohertz Observatory for Gravitational Waves (NANOGrav)
(2016) International Pulsar Timing Array, a joint project combining the Parkes, European and NANOGrav arrays above
(2016) Indian Pulsar Timing Array Experiment (InPTA)
(?) Chinese Pulsar Timing Array (CPTA)
(?) MeerKAT Pulsar Timing Array (MeerTime)
See also
Detection theory
Gravitational-wave astronomy
Matched filter
References
External links
Video (04:36) – Detecting a gravitational wave, Dennis Overbye, NYT (11 February 2016).
Video (71:29) – Press conference announcing the discovery: "LIGO detects gravitational waves", National Science Foundation (11 February 2016).
Astronomical observatories Gravitational instruments observatory Articles containing video clips
Gravitational-wave observatory
[ "Physics", "Astronomy", "Technology", "Engineering" ]
3,539
[ "Astronomical observatories", "Astrophysics", "Astronomy organizations", "Measuring instruments", "Gravitational instruments", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
11,084,989
https://en.wikipedia.org/wiki/Gravitational-wave%20astronomy
Gravitational-wave astronomy is a subfield of astronomy concerned with the detection and study of gravitational waves emitted by astrophysical sources. Gravitational waves are minute distortions or ripples in spacetime caused by the acceleration of massive objects. They are produced by cataclysmic events such as the merger of binary black holes, the coalescence of binary neutron stars and supernova explosions, and by processes including those of the early universe shortly after the Big Bang. Studying them offers a new way to observe the universe, providing valuable insights into the behavior of matter under extreme conditions. Similar to electromagnetic radiation (such as light, radio waves, infrared radiation and X-rays), which involves the transport of energy via the propagation of electromagnetic field fluctuations, gravitational radiation involves fluctuations of the relatively weaker gravitational field. The existence of gravitational waves was first suggested by Oliver Heaviside in 1893 and then conjectured by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves, before they were predicted by Albert Einstein in 1916 as a corollary of his theory of general relativity. In 1978, Russell Alan Hulse and Joseph Hooton Taylor Jr. provided the first experimental evidence for the existence of gravitational waves by observing two neutron stars orbiting each other, and they won the 1993 Nobel Prize in Physics for their work. In 2015, nearly a century after Einstein's forecast, the first direct observation of gravitational waves, as a signal from the merger of two black holes, confirmed the existence of these elusive phenomena and opened a new era in astronomy. Subsequent detections have included binary black hole mergers, neutron star collisions and other violent cosmic events. Gravitational waves are now detected using laser interferometry, which measures tiny changes in the length of two perpendicular arms caused by passing waves. Observatories like LIGO (Laser Interferometer Gravitational-wave Observatory), Virgo and KAGRA (Kamioka Gravitational Wave Detector) use this technology to capture the faint signals from distant cosmic events. LIGO co-founders Barry C. Barish, Kip S. Thorne and Rainer Weiss were awarded the 2017 Nobel Prize in Physics for their ground-breaking contributions to gravitational wave astronomy.
When distant astronomical objects are observed using electromagnetic waves, phenomena such as scattering, absorption, reflection and refraction cause information loss. There remain various regions in space that are only partially penetrable by photons, such as the insides of nebulae, the dense dust clouds at the galactic core and the regions near black holes. Gravitational-wave astronomy has the potential to be used in parallel with electromagnetic astronomy to study the universe at better resolution. In an approach known as multi-messenger astronomy, gravitational wave data is combined with data from other wavelengths to get a more complete picture of astrophysical phenomena. Gravitational wave astronomy helps in understanding the early universe, testing theories of gravity and revealing the distribution of dark matter and dark energy. In particular, it can help determine the Hubble constant, which measures the expansion rate of the universe. All of this opens doors to physics beyond the Standard Model (BSM). Challenges that remain in the field include noise interference, the lack of ultra-sensitive instruments and the detection of low-frequency waves.
Ground-based detectors face problems with seismic vibrations produced by environmental disturbances and with limits on detector arm length imposed by the curvature of the Earth's surface. In the future, the field of gravitational wave astronomy will try to develop upgraded detectors and next-generation observatories, along with possible space-based detectors such as LISA (Laser Interferometer Space Antenna). LISA will be able to listen to distant sources such as compact supermassive black holes in galactic cores and primordial black holes, as well as low-frequency sources such as binary white dwarf mergers and sources from the early universe.
Introduction
Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then by Henri Poincaré in 1905 as waves similar to electromagnetic waves but their gravitational equivalent. Gravitational waves were later predicted in 1916 by Albert Einstein on the basis of his general theory of relativity as ripples in spacetime, although he later harbored doubts about their existence. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed), showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity.
The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. Direct observation of gravitational waves was not made until 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves.
In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang.
Instruments and challenges
Collaboration between detectors aids in collecting unique and valuable information, owing to the different specifications and sensitivities of each. There are several ground-based laser interferometers which span several miles/kilometers, including: the two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in Washington and Louisiana, US; Virgo, at the European Gravitational Observatory in Italy; GEO600 in Germany; and the Kamioka Gravitational Wave Detector (KAGRA) in Japan.
While LIGO, Virgo and KAGRA have made joint observations to date, GEO600 is currently utilized for trial and test runs, owing to the lower sensitivity of its instruments, and has not participated in recent joint runs with the others.
High frequency
In 2015, the LIGO project was the first to directly observe gravitational waves using laser interferometers. The LIGO detectors observed gravitational waves from the merger of two stellar-mass black holes, matching predictions of general relativity. These observations demonstrated the existence of binary stellar-mass black hole systems, and were the first direct detection of gravitational waves and the first observation of a binary black hole merger. This finding has been characterized as revolutionary for science, because it verified our ability to use gravitational-wave astronomy to progress in our search for and exploration of dark matter and the Big Bang.
Low frequency
An alternative means of observation is using pulsar timing arrays (PTAs). There are three consortia, the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) and the Parkes Pulsar Timing Array (PPTA), which co-operate as the International Pulsar Timing Array. These use existing radio telescopes, but since they are sensitive to frequencies in the nanohertz range, many years of observation are needed to detect a signal, and detector sensitivity improves gradually. Current bounds are approaching those expected for astrophysical sources. In June 2023, four PTA collaborations, the three mentioned above and the Chinese Pulsar Timing Array, delivered independent but similar evidence for a stochastic background of nanohertz gravitational waves. Each provided an independent first measurement of the theoretical Hellings-Downs curve, i.e., the quadrupolar correlation between two pulsars as a function of their angular separation in the sky, which is a telltale sign of the gravitational wave origin of the observed background; a sketch of this curve is given below. The sources of this background remain to be identified, although binaries of supermassive black holes are the most likely candidates.
Intermediate frequencies
Further in the future is the possibility of space-borne detectors. The European Space Agency has selected a gravitational-wave mission for its L3 mission slot, due to launch in 2034; the current concept is the evolved Laser Interferometer Space Antenna (eLISA). Also in development is the Japanese Deci-hertz Interferometer Gravitational wave Observatory (DECIGO).
Scientific value
Astronomy has traditionally relied on electromagnetic radiation. Originating with the visible band, as technology advanced it became possible to observe other parts of the electromagnetic spectrum, from radio to gamma rays. Each new frequency band gave a new perspective on the Universe and heralded new discoveries. During the 20th century, indirect and later direct measurements of high-energy, massive particles provided an additional window into the cosmos. Late in the 20th century, the detection of solar neutrinos founded the field of neutrino astronomy, giving an insight into previously inaccessible phenomena, such as the inner workings of the Sun. The observation of gravitational waves provides a further means of making astrophysical observations. Russell Hulse and Joseph Taylor were awarded the 1993 Nobel Prize in Physics for showing that the orbital decay of a pair of neutron stars, one of them a pulsar, fits general relativity's predictions of gravitational radiation.
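The Hellings–Downs curve mentioned in the pulsar-timing discussion above has a standard closed form, Γ(ζ) = 1/2 − x/4 + (3/2)x ln x with x = (1 − cos ζ)/2, here normalized so that the expected correlation tends to 1/2 as the angular separation of two distinct pulsars tends to zero (normalization conventions vary between publications). A minimal sketch of the textbook expression, not any collaboration's analysis code:

```python
import numpy as np

def hellings_downs(zeta_rad):
    """Expected timing correlation between two pulsars separated by angle zeta."""
    x = (1.0 - np.cos(zeta_rad)) / 2.0
    x = np.maximum(x, 1e-15)          # avoid log(0); x*log(x) -> 0 in that limit
    return 0.5 - x / 4.0 + 1.5 * x * np.log(x)

for deg in (0, 30, 60, 90, 120, 180):
    print(f"{deg:3d} deg: {hellings_downs(np.radians(deg)):+.3f}")
# Starts near +0.5, dips negative around ~82 deg, and ends at +0.25 at 180 deg.
```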
Gravitational waves interact only weakly with matter. This is what makes them difficult to detect. It also means that they can travel freely through the Universe, and are not absorbed or scattered like electromagnetic radiation. It is therefore possible to see into the center of dense systems, like the cores of supernovae or the Galactic Center. It is also possible to see further back in time than with electromagnetic radiation, as the early universe was opaque to light prior to recombination but transparent to gravitational waves. The ability of gravitational waves to move freely through matter also means that gravitational-wave detectors, unlike telescopes, are not pointed to observe a single field of view but observe the entire sky. Detectors are more sensitive in some directions than others, which is one reason why it is beneficial to have a network of detectors. Directional localization is also poor, due to the small number of detectors.

In cosmic inflation
Cosmic inflation, a hypothesized period when the universe rapidly expanded during the first 10^-36 seconds after the Big Bang, would have given rise to gravitational waves that would have left a characteristic imprint in the polarization of the CMB radiation. It is possible to calculate the properties of the primordial gravitational waves from measurements of the patterns in the microwave radiation, and use those calculations to learn about the early universe.

Development
As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st-century multi-messenger astronomy.

Gravitational-wave observations complement observations in the electromagnetic spectrum. These waves also promise to yield information in ways not possible via detection and analysis of electromagnetic waves. Electromagnetic waves can be absorbed and re-radiated in ways that make extracting information about the source difficult. Gravitational waves, however, interact only weakly with matter, meaning that they are not scattered or absorbed. This should allow astronomers to view the center of a supernova, stellar nebulae, and even colliding galactic cores in new ways.

Ground-based detectors have yielded new information about the inspiral phase and mergers of binary systems of two stellar-mass black holes, and the merger of two neutron stars. They could also detect signals from core-collapse supernovae, and from periodic sources such as pulsars with small deformations. If there is truth to speculation about certain kinds of phase transitions or kink bursts from long cosmic strings in the very early universe (at cosmic times around 10^-25 seconds), these could also be detectable. Space-based detectors like LISA should detect objects such as binaries consisting of two white dwarfs, and AM CVn stars (a white dwarf accreting matter from its binary partner, a low-mass helium star), and also observe the mergers of supermassive black holes and the inspiral of smaller objects (between one and a thousand solar masses) into such black holes. LISA should also be able to listen to the same kind of sources from the early universe as ground-based detectors, but at even lower frequencies and with greatly increased sensitivity.

Detecting emitted gravitational waves is a difficult endeavour. It involves ultra-stable, high-quality lasers and detectors calibrated with a sensitivity of at least 2×10^-22 Hz^-1/2, as demonstrated at the ground-based detector GEO600. It has also been estimated that, even from large astronomical events such as supernova explosions, these waves are likely to have degraded to vibrations as small as an atomic diameter by the time they arrive.
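To give a feel for that scale: strain h is a fractional length change, so an interferometer with arm length L must resolve a displacement of h times L. A minimal sketch with rounded, illustrative numbers (not figures from the text):

```python
# Strain h is the fractional length change: delta_L = h * L.
arm_length_m = 4.0e3          # LIGO-sized arm, about 4 km
strain = 1.0e-21              # typical peak strain of a detected signal
atom_diameter_m = 1.0e-10     # rough size of an atom (1 angstrom)
proton_diameter_m = 1.7e-15

delta_l = strain * arm_length_m
print(f"arm length change: {delta_l:.1e} m")
print(f"  = {delta_l / atom_diameter_m:.1e} atomic diameters")
print(f"  = {delta_l / proton_diameter_m:.1e} proton diameters")
```

Even with 4 km arms, the displacement to be measured is around 10^-18 m, a thousandth of a proton diameter, which is why ultra-stable lasers and extreme calibration are needed.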
Pinpointing where the gravitational waves come from is also a challenge, but deflected waves from gravitational lensing, combined with machine learning, could make localization easier and more accurate. Just as the light from the SN Refsdal supernova was detected a second time almost a year after it was first discovered, because gravitational lensing sent some of the light on a different path through the universe, the same approach could be used for gravitational waves. While still at an early stage, a technique similar to the triangulation used by cell phones to determine their location relative to GPS satellites will help astronomers track down the origin of the waves.
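The triangulation idea rests on arrival-time differences between widely separated detectors. The sketch below is a two-detector toy case with made-up numbers: it recovers the angle between the source direction and the detector baseline from a measured delay.

```python
import math

C = 2.998e8  # speed of light, m/s

def source_angle(delay_s, baseline_m):
    """Angle (degrees) between the detector baseline and the source
    direction, from the arrival-time difference of a plane wave:
    delay = (baseline / c) * cos(angle)."""
    cos_angle = delay_s * C / baseline_m
    if abs(cos_angle) > 1.0:
        raise ValueError("delay too large for this baseline")
    return math.degrees(math.acos(cos_angle))

# The Hanford-Livingston baseline is about 3000 km, so the maximum
# possible delay is ~10 ms; a measured 5 ms delay constrains the
# source to a cone around the baseline.
print(f"{source_angle(5.0e-3, 3.0e6):.1f} degrees")
```

With only two detectors the result is a ring on the sky rather than a point, which is why adding Virgo and KAGRA to the network sharply improves localization.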
See also
Gravitational wave background
Gravitational-wave observatory
List of gravitational wave observations
Matched filter#Gravitational-wave astronomy

References

Further reading

External links
LIGO Scientific Collaboration
AstroGravS: Astrophysical Gravitational-Wave Sources Archive
Video (04:36) – Detecting a gravitational wave, Dennis Overbye, NYT (11 February 2016).
Video (71:29) – Press Conference announcing discovery: "LIGO detects gravitational waves", National Science Foundation (11 February 2016).

Gravitational Wave Astronomy Gravity General relativity Observational astronomy Astrophysics Astronomical sub-disciplines
Gravitational-wave astronomy
[ "Physics", "Astronomy" ]
3,483
[ "Observational astronomy", "Astrophysics", "General relativity", "Theory of relativity", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
11,085,278
https://en.wikipedia.org/wiki/Langlands%20decomposition
In mathematics, the Langlands decomposition writes a parabolic subgroup P of a semisimple Lie group as a product P = MAN of a reductive subgroup M, an abelian subgroup A, and a nilpotent subgroup N.

Applications
A key application is in parabolic induction, which leads to the Langlands program: if G is a reductive algebraic group and P = MAN is the Langlands decomposition of a parabolic subgroup P, then parabolic induction consists of taking a representation of MA, extending it to P by letting N act trivially, and inducing the result from P to G. (A worked example is appended at the end of this entry.)

See also
Lie group decompositions

References

Sources
A. W. Knapp, Structure theory of semisimple Lie groups.

Lie groups Algebraic groups
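Appended illustration: the following spells out, under the conventions above, the Langlands decomposition of the standard (upper-triangular) Borel subgroup of SL(2, R), a standard textbook example.

```latex
% Langlands decomposition P = MAN for the upper-triangular
% Borel subgroup of G = SL(2, R):
\[
P \;=\; \left\{ \begin{pmatrix} a & b \\ 0 & a^{-1} \end{pmatrix}
        : a \in \mathbb{R}^{\times},\; b \in \mathbb{R} \right\},
\]
\[
M = \{\pm I\}, \qquad
A = \left\{ \begin{pmatrix} t & 0 \\ 0 & t^{-1} \end{pmatrix} : t > 0 \right\}, \qquad
N = \left\{ \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} : x \in \mathbb{R} \right\},
\]
% Every p in P factors uniquely as p = man: the sign of the diagonal
% entry goes into M, its absolute value into A, and the unipotent
% part into N.
```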
Langlands decomposition
[ "Mathematics" ]
143
[ "Mathematical analysis", "Mathematical structures", "Lie groups", "Mathematical analysis stubs", "Algebraic structures" ]
11,085,304
https://en.wikipedia.org/wiki/Lysophospholipid%20receptor
The lysophospholipid receptor (LPL-R) group are members of the G protein-coupled receptor family of integral membrane proteins that are important for lipid signaling. In humans, there are eleven LPL receptors, each encoded by a separate gene. These LPL receptor genes are also sometimes referred to as "Edg" (an acronym for endothelial differentiation gene).

Ligands
The ligands for the LPL-R group are the lysophospholipid extracellular signaling molecules, lysophosphatidic acid (LPA) and sphingosine 1-phosphate (S1P).

Origin of name
The term lysophospholipid (LPL) refers to any phospholipid that is missing one of its two O-acyl chains. Thus, LPLs have a free alcohol in either the sn-1 or the sn-2 position. The prefix 'lyso-' comes from the fact that lysophospholipids were originally found to be hemolytic; however, it is now used to refer generally to phospholipids missing an acyl chain. LPLs are usually the result of phospholipase A-type enzymatic activity on regular phospholipids such as phosphatidylcholine or phosphatidic acid, although they can also be generated by the acylation of glycerophospholipids or the phosphorylation of monoacylglycerols. Some LPLs serve important signaling functions, such as lysophosphatidic acid.

Function
LPL receptor ligands bind to and activate their cognate receptors located in the cell membrane. Depending on which ligand, receptor, and cell type is involved, the activated receptor can have a range of effects on the cell. These include primary effects of inhibition of adenylyl cyclase and release of calcium from the endoplasmic reticulum, as well as secondary effects of preventing apoptosis and increasing cell proliferation.

Group members
The eleven known human LPL receptors comprise the six LPA receptors (LPA1–LPA6, genes LPAR1–LPAR6) and the five S1P receptors (S1P1–S1P5, genes S1PR1–S1PR5).

See also
Lipid signaling
Gintonin

References

External links

G protein-coupled receptors
Lysophospholipid receptor
[ "Chemistry" ]
466
[ "G protein-coupled receptors", "Signal transduction" ]
11,085,311
https://en.wikipedia.org/wiki/Anomaly%20matching%20condition
In quantum field theory, the anomaly matching condition by Gerard 't Hooft states that the calculation of any chiral anomaly for the flavor symmetry must not depend on what scale is chosen for the calculation if it is done by using the degrees of freedom of the theory at some energy scale. It is also known as the 't Hooft condition and the 't Hooft UV-IR anomaly matching condition.

't Hooft anomalies
There are two closely related but different types of obstructions to formulating a quantum field theory that are both called anomalies: chiral, or Adler–Bell–Jackiw anomalies, and 't Hooft anomalies. If we say that the symmetry of the theory has a 't Hooft anomaly, it means that the symmetry is exact as a global symmetry of the quantum theory, but there is some impediment to using it as a gauge symmetry in the theory. As an example of a 't Hooft anomaly, we consider quantum chromodynamics with massless fermions: this is the SU(N) gauge theory with massless Dirac fermions. This theory has the global symmetry SU(N_f)_L × SU(N_f)_R × U(1)_V, which is often called the flavor symmetry, and this has a 't Hooft anomaly.

Anomaly matching for continuous symmetry
The anomaly matching condition by G. 't Hooft proposes that a 't Hooft anomaly of a continuous symmetry can be computed both in the high-energy and low-energy degrees of freedom ("UV" and "IR") and give the same answer.

Example
For example, consider quantum chromodynamics with Nf massless quarks. This theory has the flavor symmetry SU(Nf)_L × SU(Nf)_R × U(1)_V. This flavor symmetry becomes anomalous when a background gauge field is introduced. One may use either the degrees of freedom at the far low-energy limit (far "IR") or the degrees of freedom at the far high-energy limit (far "UV") in order to calculate the anomaly. In the former case one should only consider massless fermions or Nambu–Goldstone bosons, which may be composite particles, while in the latter case one should only consider the elementary fermions of the underlying short-distance theory. In both cases, the answer must be the same. Indeed, in the case of QCD, chiral symmetry breaking occurs and the Wess–Zumino–Witten term for the Nambu–Goldstone bosons reproduces the anomaly.
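As an illustration of this matching (added here, with a standard normalization convention assumed), the SU(Nf)_L^3 triangle anomaly of QCD can be written out explicitly: in the UV the coefficient is carried by the quarks, while in the IR, after chiral symmetry breaking, the Wess–Zumino–Witten term must reproduce it.

```latex
% Normalize the triangle-anomaly coefficient A(R) of a representation R by
\[
\mathrm{tr}\!\left[\, T^a_R \{ T^b_R , T^c_R \} \,\right]
  \;=\; \tfrac{1}{2}\, A(R)\, d^{abc},
\qquad A(\text{fundamental}) = 1 .
\]
% UV side: N_c colors of quarks in the fundamental of SU(N_f)_L give
\[
\mathcal{A}_{\mathrm{UV}} \;=\; N_c \cdot A(\text{fundamental}) \;=\; N_c .
\]
% IR side: 't Hooft matching requires
\[
\mathcal{A}_{\mathrm{IR}} \;=\; \mathcal{A}_{\mathrm{UV}} \;=\; N_c ,
\]
% which in QCD is supplied by the Wess--Zumino--Witten term,
% whose quantized level equals N_c.
```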
Proof
One proves this condition by the following procedure: we may add to the theory a gauge field which couples to the current related with this symmetry, as well as chiral fermions which couple only to this gauge field and cancel the anomaly (so that the gauge symmetry will remain non-anomalous, as needed for consistency). In the limit where the coupling constants we have added go to zero, one gets back to the original theory, plus the fermions we have added; the latter remain good degrees of freedom at every energy scale, as they are free fermions in this limit. The gauge symmetry anomaly can be computed at any energy scale, and must always be zero, so that the theory is consistent. One may now get the anomaly of the symmetry in the original theory by subtracting the free fermions we have added, and the result is independent of the energy scale.

Alternative proof
Another way to prove the anomaly matching for continuous symmetries is to use the anomaly inflow mechanism. To be specific, we consider four-dimensional spacetime in the following. For a global continuous symmetry G, we introduce the background gauge field A and compute the corresponding effective action. If there is a 't Hooft anomaly for G, the effective action is not invariant under gauge transformations of the background gauge field A, and the invariance cannot be restored by adding any four-dimensional local counterterms of A. The Wess–Zumino consistency condition shows that we can make it gauge invariant by adding the five-dimensional Chern–Simons action. With the extra dimension, we can now define the effective action by using the low-energy effective theory that only contains the massless degrees of freedom, by integrating out massive fields. Since it must again be made gauge invariant by adding the same five-dimensional Chern–Simons term, the 't Hooft anomaly does not change when integrating out massive degrees of freedom.

See also
't Hooft–Polyakov monopole
't Hooft loop
't Hooft symbol

Notes

References

Anomalies (physics) Quantum field theory
Anomaly matching condition
[ "Physics" ]
920
[ "Quantum field theory", "Quantum mechanics" ]
11,085,324
https://en.wikipedia.org/wiki/Free-energy%20perturbation
Free-energy perturbation (FEP) is a method based on statistical mechanics that is used in computational chemistry for computing free-energy differences from molecular dynamics or Metropolis Monte Carlo simulations. The FEP method was introduced by Robert W. Zwanzig in 1954. According to the free-energy perturbation method, the free-energy difference for going from state A to state B is obtained from the following equation, known as the Zwanzig equation:

ΔF(A → B) = F_B − F_A = −k_B T ln ⟨ exp( −(E_B − E_A) / (k_B T) ) ⟩_A

where T is the temperature, k_B is the Boltzmann constant, and the angular brackets denote an average over a simulation run for state A. (A numerical sketch of this estimator is appended at the end of this entry.) In practice, one runs a normal simulation for state A, but each time a new configuration is accepted, the energy for state B is also computed. The difference between states A and B may be in the atom types involved, in which case the ΔF obtained is for "mutating" one molecule onto another, or it may be a difference of geometry, in which case one obtains a free-energy map along one or more reaction coordinates. This free-energy map is also known as a potential of mean force (PMF).

Free-energy perturbation calculations only converge properly when the difference between the two states is small enough; therefore it is usually necessary to divide a perturbation into a series of smaller "windows", which are computed independently. Since there is no need for constant communication between the simulation for one window and the next, the process can be trivially parallelized by running each window on a different CPU, in what is known as an "embarrassingly parallel" setup.

Application
FEP calculations have been used for studying host–guest binding energetics, pKa predictions, solvent effects on reactions, and enzymatic reactions. Other applications are the virtual screening of ligands in drug discovery, in silico mutagenesis studies, and antibody affinity maturation. For the study of reactions it is often necessary to involve a quantum-mechanical (QM) representation of the reaction center, because the molecular mechanics (MM) force fields used for FEP simulations cannot handle breaking bonds. A hybrid method that has the advantages of both QM and MM calculations is called QM/MM.

Umbrella sampling is another free-energy calculation technique that is typically used for calculating the free-energy change associated with a change in "position" coordinates, as opposed to "chemical" coordinates, although umbrella sampling can also be used for a chemical transformation when the "chemical" coordinate is treated as a dynamic variable (as in the case of the Lambda dynamics approach of Kong and Brooks). An alternative to free-energy perturbation for computing potentials of mean force in chemical space is thermodynamic integration. Another alternative, which is probably more efficient, is the Bennett acceptance ratio method. Adaptations to FEP exist which attempt to apportion free-energy changes to subsections of the chemical structure.

Software
Several software packages have been developed to help perform FEP calculations. Below is a short list of some of the most common programs:
Flare FEP
FEP+
AMBER
BOSS
CHARMM
Desmond
GROMACS
MacroModel
MOLARIS
NAMD
Tinker
Q
QUELO

See also
Thermodynamic integration
Umbrella sampling

References

Computational chemistry Statistical mechanics
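Appended numerical sketch of the Zwanzig estimator (an illustration with a made-up toy system, not from the text): two harmonic wells with the same force constant but shifted minima have equal partition functions, so the exact free-energy difference is zero, which gives a clean check of the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0  # energies measured in units of k_B * T

# Toy states: harmonic wells with the same force constant, B shifted
# by d. Their partition functions are equal, so the exact dF is zero.
d = 1.0
E_A = lambda x: 0.5 * x**2
E_B = lambda x: 0.5 * (x - d)**2

# "Simulation" of state A: exact Boltzmann sampling of a unit harmonic
# well at kT = 1 is a standard Gaussian.
x = rng.normal(0.0, 1.0, size=200_000)

# Zwanzig estimator: dF = -kT * ln < exp(-(E_B - E_A)/kT) >_A
dE = E_B(x) - E_A(x)
dF = -kT * np.log(np.mean(np.exp(-dE / kT)))
print(f"estimated dF = {dF:+.4f}   (exact: 0)")
```

Increasing the shift d makes the overlap between the two states poor and the estimate noisy, which is the practical reason a large perturbation is split into a series of smaller windows.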
Free-energy perturbation
[ "Physics", "Chemistry" ]
665
[ "Theoretical chemistry", "Statistical mechanics", "Computational chemistry" ]
11,085,533
https://en.wikipedia.org/wiki/Lead%20compound
A lead compound (i.e. a "leading" compound, not to be confused with various compounds of the metallic element lead) in drug discovery is a chemical compound that has pharmacological or biological activity likely to be therapeutically useful, but that may nevertheless have a suboptimal structure requiring modification to fit better to the target; lead drugs offer the prospect of being followed by back-up compounds. Its chemical structure serves as a starting point for chemical modifications in order to improve potency, selectivity, or pharmacokinetic parameters. Furthermore, newly invented pharmacologically active moieties may have poor druglikeness and may require chemical modification to become drug-like enough to be tested biologically or clinically.

Terminology
Lead compounds are sometimes called developmental candidates, because the discovery and selection of lead compounds occurs prior to the preclinical and clinical development of the candidate.

Discovering lead compounds

Discovery of a druggable target
Before lead compounds can be discovered, a suitable target for rational drug design must be selected on the basis of biological plausibility, or identified through screening potential lead compounds against multiple targets. Drug libraries are often tested by high-throughput screening (active compounds are designated as "hits"), which can screen compounds for their ability to inhibit (antagonist) or stimulate (agonist) a receptor of interest, as well as determine their selectivity for it.

Development of a lead compound
A lead compound may arise from a variety of different sources. Lead compounds are found by characterizing natural products, employing combinatorial chemistry, or by molecular modeling as in rational drug design. Chemicals identified as hits through high-throughput screening may also become lead compounds. Once a lead compound is selected it must undergo lead optimization, which involves making the compound more "drug-like". This is where Lipinski's rule of five comes into play, sometimes also referred to as the "Pfizer rule" or simply as the "rule of five" (a sketch of this check is appended at the end of this entry). Other factors, such as the ease of scaling up the manufacturing of the chemical, must be taken into consideration.

See also
Drug development
Drug design
Rational drug design
Drug discovery hit to lead
Lead optimization

References

Drug discovery Medicinal chemistry
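Appended illustration: Lipinski's rule of five reduces to a simple check on four molecular properties. The Python sketch below shows the conventional thresholds; the property values used for the example compound are approximate figures for aspirin.

```python
def lipinski_violations(mol_weight, log_p, h_bond_donors, h_bond_acceptors):
    """Count violations of Lipinski's rule of five. A compound with
    more than one violation is conventionally considered unlikely
    to be orally bioavailable."""
    rules = [
        mol_weight > 500,        # molecular weight <= 500 Da
        log_p > 5,               # octanol-water logP <= 5
        h_bond_donors > 5,       # hydrogen-bond donors <= 5
        h_bond_acceptors > 10,   # hydrogen-bond acceptors <= 10
    ]
    return sum(rules)

# Approximate values for aspirin (acetylsalicylic acid):
v = lipinski_violations(mol_weight=180.2, log_p=1.2,
                        h_bond_donors=1, h_bond_acceptors=4)
print(f"violations: {v} -> {'passes' if v <= 1 else 'fails'}")
```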
Lead compound
[ "Chemistry", "Biology" ]
447
[ "Life sciences industry", "Drug discovery", "Medicinal chemistry", "nan", "Biochemistry" ]
11,086,565
https://en.wikipedia.org/wiki/Flame%20programmer
A flame programmer is an electrical, electro-mechanical, or electronic device used to program the safe lighting of fuel-burning equipment, as well as the safe shut-down of the flame when it is not needed. These programmers are made with different time sequences to accommodate everything from very small household furnaces to mammoth industrial steam boilers and many other fuel-burning processes in industry. (A schematic sketch of the cycle described below is appended at the end of this entry.)

A typical cycle

Pre-purge
During pre-purge, the combustion air fan is started and the dampers are opened to allow fresh air into the combustion chamber and exhaust any other gases or residual air-fuel mixtures.

Pilot trial
In the pilot trial, the pilot solenoid is opened and the ignition transformer is turned on. The fuel is lit and monitored to make sure the pilot flame stays lit. A flame detection device will shut the fuel valve if the flame fails or goes out.

Main-flame trial
With the pilot still burning, the main burner valves are opened and the main flame is lit. The flame detection device here will also shut down, or close, all the fuel valves if the flame is extinguished.

Run
The pilot valve is shut and the main burner is left on and burning. The flame detector remains operational to close the fuel valves should the flame fail.

Post-purge
When the flame is no longer needed, the fuel valves are shut and the combustion air fan is left running to clear the combustion chamber of all unburnt fuel and products of combustion. The combustion air fan is shut down after a period of time.

Industrial equipment
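Appended illustration: the cycle is essentially a small state machine, and a schematic Python sketch of it follows. The state names, ordering of states, and the flame-sensor interface are all invented here for the example; a real programmer drives each step with timers and hardware interlocks.

```python
from enum import Enum, auto

class State(Enum):
    PRE_PURGE = auto()
    PILOT_TRIAL = auto()
    MAIN_FLAME_TRIAL = auto()
    RUN = auto()
    POST_PURGE = auto()
    LOCKOUT = auto()

# Nominal step sequence of one burner cycle.
SEQUENCE = [State.PRE_PURGE, State.PILOT_TRIAL,
            State.MAIN_FLAME_TRIAL, State.RUN, State.POST_PURGE]

def run_cycle(flame_detected):
    """Step through the cycle; flame_detected(state) stands in for the
    flame-detection device. Flame failure in any flame-proving state
    closes the fuel valves and forces a lockout."""
    flame_states = (State.PILOT_TRIAL, State.MAIN_FLAME_TRIAL, State.RUN)
    for state in SEQUENCE:
        if state in flame_states and not flame_detected(state):
            print(f"{state.name}: flame failure -> fuel valves closed")
            return State.LOCKOUT
        print(f"{state.name}: ok")
    return State.POST_PURGE

# Example run in which the flame is lost during the main-flame trial:
run_cycle(lambda s: s is not State.MAIN_FLAME_TRIAL)
```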
Flame programmer
[ "Engineering" ]
307
[ "nan" ]
11,087,086
https://en.wikipedia.org/wiki/Smart%20card%20application%20protocol%20data%20unit
In the context of smart cards, an application protocol data unit (APDU) is the communication unit between a smart card reader and a smart card. The structure of the APDU is defined by ISO/IEC 7816-4, Organization, security and commands for interchange.

APDU message command-response pair
There are two categories of APDUs: command APDUs and response APDUs. A command APDU is sent by the reader to the card – it contains a mandatory 4-byte header (CLA, INS, P1, P2) and from 0 to 65 535 bytes of data. A response APDU is sent by the card to the reader – it contains from 0 to 65 536 bytes of data, and 2 mandatory status bytes (SW1, SW2). (A byte-level sketch is appended at the end of this entry.)

See also
Protocol data unit

References

External links
Smartcard ISOs, contents
Selected list of smartcard APDU commands
Selected list of SW1 SW2 Status bytes
More information about APDU commands and APDU responses

Smart cards ISO standards IEC standards
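Appended illustration of the byte layout: the Python sketch below builds a short-form command APDU. The instruction bytes correspond to the standard SELECT-by-name command (CLA=00, INS=A4, P1=04, P2=00), but the application identifier used is a made-up placeholder.

```python
def command_apdu(cla, ins, p1, p2, data=b""):
    """Build a short-form command APDU: 4-byte header (CLA, INS, P1,
    P2), then Lc (data length, if any) followed by the data field."""
    if len(data) > 255:
        raise ValueError("short-form APDU data field is at most 255 bytes")
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data
    return apdu

# SELECT by DF name with a placeholder application identifier:
aid = bytes.fromhex("A0000000000000")
print(command_apdu(0x00, 0xA4, 0x04, 0x00, aid).hex(" "))

# A response APDU ends with two status bytes; 0x90 0x00 means success.
sw1, sw2 = 0x90, 0x00
print("success" if (sw1, sw2) == (0x90, 0x00) else f"status {sw1:02X}{sw2:02X}")
```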
Smart card application protocol data unit
[ "Technology" ]
208
[ "Computer standards", "IEC standards" ]
11,087,090
https://en.wikipedia.org/wiki/Corps%20of%20Engineers%20%28Ireland%29
The Corps of Engineers (ENGR) is the military engineering branch of the Defence Forces of Ireland. The Corps is responsible for combat engineering, construction engineering, and fire fighting services within the Defence Forces. The main role of the combat engineers is to provide engineering on the battlefield; the Corps has successfully leveraged its skill and expertise in several of the Irish Army's deployments on United Nations operations.

History
Following the establishment of the Irish Free State on 6 December 1922, General Routine Orders were issued which laid down the organisation of the first centralised Defence Forces. From an engineering point of view there were three particular problem areas to be overcome:
The Barracks and Posts throughout the state were in great need of repair following the War of Independence and the Civil War.
There was a general shortage of materials.
Most of the railway system was in disarray, with many towns cut off.
To meet these requirements three (3) Corps were set up:
The Works Corps - to carry out repairs and reconstruction.
The Salvage Corps - to recover materials from damaged buildings for use elsewhere.
The Railway Protection, Repair and Maintenance Corps - to rebuild the railway system.
The Corps of Engineers was established and took over from these three Corps with effect from 1 October 1924. In 1931 Field Engineer Companies and the School of Military Engineering were added to the establishment.

Roles
The Corps has a wide variety of roles, covering conventional warfare and training for the Defence Forces. With such a wide range of skills, the Engineer Corps provides a variety of support to the Army, including the provision of:
Mobility: clearing terrain obstacles, constructing roads and bridges, demining
Counter-mobility: planting landmines, digging trenches and ditches, demolishing roads and bridges
Demolitions
Survivability: building fortifications
Camp construction: Camp Clara (Monrovia, Liberia), Camp Clark (Kosovo)
General engineer support
Counter-terrorist search
Fire fighting & RTA
EOD
CBRN defence
Humanitarian demining

Missions
The Corps have seen active service in Kosovo (UNMIK), Somalia (UNOSOM II), the Congo (ONUC), Lebanon (UNIFIL), Liberia (UNMIL) and Chad (EUFOR Tchad/RCA) - where the Engineer Corps was deployed to construct Camp Ciara in advance of a contingent of more than 500 troops. Army engineers were deployed alongside personnel from the Naval Service and NSR in early 2020 as part of Ireland's response to the coronavirus pandemic (COVID-19).

Equipment
Aardvark Midi mine flail
DOK-ING MV-4 remotely operated mine flail
Cyclops Mk4 remotely operated vehicle
Remote firing demolitions equipment (BIRIS, PRIME, DRFD)
Mabey Johnson bridge
Infantry assault bridge
FFV 013 area defence munition
Rigid-hulled inflatable boats (RIBs) (Delta 7 metre, Lencraft 5.1 metre dive, and Lencraft 7.5 & 6.5 metre intruder RIBs)

Corps of Engineers units (2013)
1st Engineer Group (replaced 1st Field Engineer Company)
2nd Engineer Group (replaced 4th Field Engineer Company)
Engineer Section, Air Corps (attached Irish Air Corps)
Engineer Section, Naval Service (attached Irish Naval Service)
Engineer Group, Logistics Base Curragh

Disbanded (Defence Forces re-organisation, 2012)
2nd Field Engineer Company
31st Field Engineer Company
62nd Field Engineer Company
54th Field Engineer Company

Future developments
As in all aspects of society, legislative changes and technological advances have required workforces to become more specialised and more highly skilled.
The Irish Defence Forces is no exception, requiring specialist skilled engineers. Compared with the defence forces of other countries, e.g. the British and French armies, the proportion of engineers in the Irish Defence Forces is low: 5.5%, against 8.8% and 12.8% respectively.

Gallery

References

External links
Official website

Engineers, Corps of Military engineer corps Military units and formations established in 1924
Corps of Engineers (Ireland)
[ "Engineering" ]
782
[ "Engineering units and formations", "Military engineer corps" ]
11,087,461
https://en.wikipedia.org/wiki/Electron-transfer%20dissociation
Electron-transfer dissociation (ETD) is a method of fragmenting multiply-charged gaseous macromolecules in a mass spectrometer between the stages of tandem mass spectrometry (MS/MS). Similar to electron-capture dissociation, ETD induces fragmentation of large, multiply-charged cations by transferring electrons to them. ETD is used extensively with polymers and biological molecules such as proteins and peptides for sequence analysis. Transferring an electron causes peptide backbone cleavage into c- and z-ions while leaving labile post-translational modifications (PTMs) intact. The technique only works well for higher charge state peptide or polymer ions (z>2). However, relative to collision-induced dissociation (CID), ETD is advantageous for the fragmentation of longer peptides or even entire proteins. This makes the technique important for top-down proteomics. The method was developed by Hunt and coworkers at the University of Virginia.

History
Electron-capture dissociation (ECD) was developed in 1998 to fragment large proteins for mass spectrometric analysis. Because ECD requires a large amount of near-thermal electrons (<0.2 eV), it was originally used exclusively with Fourier transform ion cyclotron resonance mass spectrometry (FTICR), the most expensive form of MS instrumentation. Less costly options such as quadrupole time-of-flight (Q-TOF), quadrupole ion trap (QIT) and linear quadrupole ion trap (QLT) instruments used the more energy-intensive collision-induced dissociation (CID) method, resulting in random fragmentation of peptides and proteins. In 2004, Syka et al. announced the creation of ETD, a dissociation method similar to ECD but usable on low-cost, widely available commercial spectrometers. The first ETD experiments were run on a QLT mass spectrometer with an electrospray ionization (ESI) source.

Principle of operation
Several steps are involved in electron-transfer dissociation. Usually a protein mixture is first separated using high-performance liquid chromatography (HPLC). Next, multiply-protonated precursor molecules are generated by electrospray ionization and injected into the mass spectrometer. (Only molecules with a charge of 2+ or greater can be used in ETD.) In order for an electron to be transferred to the positive precursor molecules, radical anions are generated and put into the ion trap with them. During the ion/ion reaction an electron is transferred to the positively-charged protein or peptide, causing fragmentation along the peptide backbone. Finally, the resultant fragments are mass analyzed.

Radical anion preparation
In the original ETD experiments anthracene (C14H10) was used to generate reactive radical anions through negative chemical ionization. Several polycyclic aromatic hydrocarbon molecules have been used in subsequent experiments, with fluoranthene currently the preferred reagent. Fluoranthene has only about 40% efficiency in electron transfer, however, so other molecules with low electron affinity are being sought.

Injection and fragmentation
When the precursor cations (proteins or peptides) and radical anions are combined in the ion trap, an electron is transferred to the multiply-charged cation. This forms an unstable positive radical cation with one less positive charge and an odd electron. Fragmentation takes place along the peptide backbone at an N–Cα bond, resulting in c- and z-type fragment ions.
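A small Python sketch of how c- and z-type fragment masses are computed follows (an illustration, not from the text). It uses standard monoisotopic residue masses for a handful of amino acids and the commonly used ion-type offsets (c = b + 17.02655 Da; z-dot = y − 16.01872 Da); treat the exact constants as conventions that can vary slightly between software packages.

```python
# Monoisotopic residue masses (Da) for a few common amino acids.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
           "L": 113.08406, "K": 128.09496, "E": 129.04259, "F": 147.06841}

PROTON, WATER, NH3, ZDOT = 1.00728, 18.01056, 17.02655, 16.01872

def c_and_z_ions(peptide):
    """Singly-charged c and z-dot fragment m/z values for each
    backbone cleavage site of the peptide."""
    ions = []
    for i in range(1, len(peptide)):
        n_term = sum(RESIDUE[aa] for aa in peptide[:i])
        c_term = sum(RESIDUE[aa] for aa in peptide[i:])
        c_ion = n_term + NH3 + PROTON            # c = b + NH3
        z_ion = c_term + WATER + PROTON - ZDOT   # z-dot = y - 16.01872
        ions.append((f"c{i}", c_ion, f"z{len(peptide) - i}", z_ion))
    return ions

for c_name, c_mz, z_name, z_mz in c_and_z_ions("GASEK"):
    print(f"{c_name}: {c_mz:9.4f}   {z_name}: {z_mz:9.4f}")
```

Matching such a ladder of c/z masses against a spectrum is what lets the peptide sequence be read off one residue at a time.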
Mass analysis
Fragmentation caused by ETD allows more complete protein sequence information to be obtained from ETD spectra than from CID tandem mass spectrometry. Because many peptide backbone c- and z-type ions are detected, almost complete sequence coverage of many peptides can be discerned from ETD fragmentation spectra. Sequences of 15–40 amino acids at both the N-terminus and the C-terminus of the protein can be read using mass-to-charge values for the singly and doubly charged ions. These sequences, together with the measured mass of the intact protein, can be compared to database entries for known proteins to reveal post-translational modifications.

Instrumentation
Electron-transfer dissociation takes place in an ion trap mass spectrometer with an electrospray ionization source. The first ETD experiments at the University of Virginia utilized a radio-frequency quadrupole linear ion trap (QLT) modified with a chemical ionization (CI) source at the back of the instrument. Because a spectrum can be obtained in about 300 milliseconds, liquid chromatography is often coupled with ETD MS/MS. The disadvantage of using a QLT is that its mass resolving power is less than that of other mass spectrometers. Subsequent studies have tried other instrumentation to improve mass resolution. Having a negative CI source at the back of the instrument interfered with the high-resolution analyzer in QLT-Orbitrap and quadrupole time-of-flight (QTOF) instruments, so alternative ionization methods for the radical anions have been introduced. In 2006, a group at Purdue University led by Scott McLuckey used a quadrupole/time-of-flight (QqTOF) tandem mass spectrometer with a pulsed nano-ESI/atmospheric pressure chemical ionization (APCI) dual ionization source, using radical anions of 1,3-dinitrobenzene as the electron donor. Later, a lab at the University of Wisconsin adapted a hybrid quadrupole linear ion trap-orbitrap mass spectrometer to use ETD. This method also used a front-end ionization method for the radical anions of 9-anthracenecarboxylic acid via pulsed dual ESI sources. As ETD becomes increasingly popular for protein and peptide structure analysis, its implementation on readily available ion-trap mass spectrometers coupled with high-resolution mass analyzers continues to evolve.

Applications

Proteomics
ETD is widely used in the analysis of proteins and large peptides. Important post-translational modifications including phosphorylation, glycosylation and disulfide linkages are all analyzed using ETD.

Polymer chemistry
Although MS-based analyses of polymers have largely been performed using single-stage MS, tandem MS has also been used to characterize polymer components. CID is the most common method of dissociation used, but ETD has been used as a complementary method. Unique bond cleavages resulting from ETD supply valuable diagnostic information.

See also
Negative electron-transfer dissociation
Tandem mass spectrometry
Electrospray

References

Tandem mass spectrometry
Electron-transfer dissociation
[ "Physics" ]
1,377
[ "Mass spectrometry", "Spectrum (physical sciences)", "Tandem mass spectrometry" ]
11,087,478
https://en.wikipedia.org/wiki/1929%20Ottawa%20sewer%20explosion
On May 29, 1929, a series of explosions in the sewers of Ottawa, Ontario, Canada, killed one person. The first blast occurred just after noon in the Golden Triangle area, west of the canal; over the next 25 minutes, a series of explosions travelled the length of the main line of the sewer system. The explosions first moved east under the canal and then moved through Sandy Hill under Somerset Street. After passing under the Rideau River, they followed the line as it turned north through what is today Vanier, before going through New Edinburgh to the point where the sewer system emptied into the Ottawa River.

The blasts were fairly small, except when manhole covers were involved. At these points, access to oxygen fuelled towering flames that erupted through the manhole covers onto city streets. The covers themselves were blown high into the air. Most of the damage from the sewer explosions occurred where sewage lines were attached to less sturdy pipes inside houses; blasts destroyed the plumbing in many residential basements. Besides property damage, the explosions caused one death and many injuries.

The cause of the explosions was never definitively determined. Methane naturally occurs in sewers, but it never accumulates in a concentration powerful enough to cause explosions of the magnitude seen in Ottawa. The Ottawa Gas Company vehemently insisted that the disaster could not have been caused by its lines. It is now thought that the fuel stations and mechanic shops in the city – new since the introduction of the automobile – contributed to the calamity. While these shops were required by law to dispose of all waste oils in a safe manner, there were no inspections; dumping waste into the sewage system was commonplace. In combination with problems in the sewer system's design, this pollution likely caused the 1929 blasts.

See also
Louisville sewer explosions, 1981
1992 Guadalajara explosions
2014 Kaohsiung gas explosions

References

Bibliography

Explosions in 1929 May 1929 events in Canada 1929 sewer explosion Explosions in Canada Sewerage 1929 disasters in Canada 1920s in Ottawa 1929 in Ontario
1929 Ottawa sewer explosion
[ "Chemistry", "Engineering", "Environmental_science" ]
402
[ "Sewerage", "Environmental engineering", "Water pollution" ]
11,087,760
https://en.wikipedia.org/wiki/Biomimetic%20material
Biomimetic materials are materials developed using inspiration from nature. This may be useful in the design of composite materials. Natural structures have long inspired human creations; notable examples include the honeycomb structure of the beehive, the strength of spider silk, the mechanics of bird flight, and the water repellency of shark skin. The etymological roots of the neologism "biomimetic" derive from Greek, since bios means "life" and mimetikos means "imitative".

Tissue engineering
Biomimetic materials in tissue engineering are materials that have been designed such that they elicit specified cellular responses mediated by interactions with scaffold-tethered peptides from extracellular matrix (ECM) proteins; essentially, the incorporation of cell-binding peptides into biomaterials via chemical or physical modification. Amino acids located within the peptides are used as building blocks by other biological structures. These peptides are often referred to as "self-assembling peptides", since they can be modified to contain biologically active motifs. This allows them to replicate information derived from tissue and to reproduce the same information independently. Thus, these peptides act as building blocks capable of conducting multiple biochemical activities, including tissue engineering. Tissue engineering research currently being performed on both short-chain and long-chain peptides is still in its early stages.

Such peptides include both native long chains of ECM proteins as well as short peptide sequences derived from intact ECM proteins. The idea is that the biomimetic material will mimic some of the roles that an ECM plays in neural tissue. In addition to promoting cellular growth and mobilization, the incorporated peptides could also mediate degradation by specific protease enzymes or initiate cellular responses not present in the local native tissue. In the beginning, long chains of ECM proteins including fibronectin (FN), vitronectin (VN), and laminin (LN) were used, but more recently the advantages of using short peptides have been discovered. Short peptides are more advantageous because, unlike the long chains that fold randomly upon adsorption, leaving the active protein domains sterically unavailable, short peptides remain stable and do not hide the receptor binding domains when adsorbed. Another advantage of short peptides is that they can be replicated more economically because of their smaller size. A bi-functional cross-linker with a long spacer arm is used to tether peptides to the substrate surface. If a functional group is not available for attaching the cross-linker, photochemical immobilization may be used. In addition to modifying the surface, biomaterials can be modified in bulk, meaning that the cell-signaling peptides and recognition sites are present not just on the surface but throughout the bulk of the material. The strength of cell attachment, the cell migration rate, and the extent of cytoskeletal organization are determined by the binding of the receptor to the ligand bound to the material; thus, receptor-ligand affinity, ligand density, and the spatial distribution of the ligand must be carefully considered when designing a biomimetic material.

Biomimetic mineralization
Proteins of the developing enamel extracellular matrix (such as amelogenin) control initial mineral deposition (nucleation) and subsequent crystal growth, ultimately determining the physico-mechanical properties of the mature mineralized tissue.
Nucleators bring together mineral ions from the surrounding fluids (such as saliva) into the form of a crystal lattice structure, by stabilizing small nuclei to permit crystal growth, forming mineralized tissue. Mutations in enamel ECM proteins result in enamel defects such as amelogenesis imperfecta. Type-I collagen is thought to play a similar role in the formation of dentin and bone. Dental enamel mineral (as well as dentin and bone) is made of hydroxylapatite with foreign ions incorporated in the structure; carbonate, fluoride, and magnesium are the most common heteroionic substituents.

In a biomimetic mineralization strategy based on normal enamel histogenesis, a three-dimensional scaffold is formed to attract and arrange calcium and/or phosphate ions to induce de novo precipitation of hydroxylapatite. Two general strategies have been applied. One uses fragments of proteins known to support natural mineralization, such as amelogenin, collagen, or dentin phosphophoryn, as the basis. Alternatively, de novo macromolecular structures have been designed to support mineralization, based not on natural molecules but on rational design; one example is the oligopeptide P11-4. In dental orthopedics and implants, a more traditional strategy to improve the density of the underlying jaw bone is the in situ application of calcium phosphate materials. Commonly used materials include hydroxylapatite, tricalcium phosphate, and calcium phosphate cement. Newer bioactive glasses follow this line of strategy, where the added silicon provides an important bonus to the local absorption of calcium.

Extracellular matrix proteins
Many studies utilize laminin-1 when designing a biomimetic material. Laminin is a component of the extracellular matrix that is able to promote neuron attachment and differentiation, in addition to guiding axon growth. Its primary functional site for bioactivity is its core protein domain isoleucine-lysine-valine-alanine-valine (IKVAV), which is located in the α-1 chain of laminin. A recent study by Wu, Zheng et al. synthesized a self-assembled IKVAV peptide nanofiber and tested its effect on the adhesion of neuron-like PC12 cells. Early cell adhesion is very important for preventing cell degeneration; the longer cells are suspended in culture, the more likely they are to degenerate. The purpose was to develop a biomaterial with good cell adherence and bioactivity with IKVAV, which is able to inhibit the differentiation and adhesion of glial cells in addition to promoting neuronal cell adhesion and differentiation. The IKVAV peptide domain is on the surface of the nanofibers so that it is exposed and accessible for promoting cell-contact interactions. The IKVAV nanofibers promoted stronger cell adherence than the electrostatic attraction induced by poly-L-lysine, and cell adherence increased with increasing density of IKVAV until the saturation point was reached. IKVAV does not exhibit time-dependent effects, because the adherence was shown to be the same at 1 hour and at 3 hours. Laminin is known to stimulate neurite outgrowth, and it plays a role in the developing nervous system. It is known that gradients are critical for the guidance of growth cones to their target tissues in the developing nervous system. There has been much research done on soluble gradients; however, little emphasis has been placed on gradients of substratum-bound substances of the extracellular matrix, such as laminin.
Dodla and Bellamkonda fabricated an anisotropic 3D agarose gel with gradients of coupled laminin-1 (LN-1). Concentration gradients of LN-1 were shown to promote faster neurite extension than the highest growth rate observed with isotropic LN-1 concentrations. Neurites grew both up and down the gradients, but growth was faster on shallower gradients and was faster up the gradients than down them.

Biomimetic artificial muscles
Electroactive polymers (EAPs) are also known as artificial muscles. EAPs are polymeric materials that can produce a large deformation when an electric field is applied. This offers great potential for applications in biotechnology and robotics, sensors, and actuators.

Biomimetic photonic structures
The production of structural colours concerns a large array of organisms. From bacteria (Flavobacterium strain IR1) to multicellular organisms (Hibiscus trionum, Doryteuthis pealeii (squid), or Chrysochroa fulgidissima (beetle)), the manipulation of light is not limited to rare and exotic life forms. Different organisms have evolved different mechanisms to produce structural colours: a multilayered cuticle in some insects and plants, grating-like surfaces in plants, geometrically organised cells in bacteria. All of these stand as sources of inspiration for the development of structurally coloured materials. Study of the firefly abdomen revealed the presence of a 3-layer system comprising the cuticle, the photogenic layer, and a reflector layer. Microscopy of the reflector layer revealed a granulate structure. Directly inspired by the firefly reflector layer, an artificial granulate film composed of hollow silica beads of about 1.05 μm was correlated with a high reflection index and could be used to improve light emission in chemiluminescent systems (a numerical sketch of the related multilayer-reflector model is appended at the end of this entry).

Artificial enzyme
Artificial enzymes are synthetic materials that can mimic (partial) functions of a natural enzyme without necessarily being a protein. Among them, some nanomaterials have been used to mimic natural enzymes; these nanomaterials are termed nanozymes. Nanozymes, as well as other artificial enzymes, have found wide applications, from biosensing and immunoassays to stem cell growth and pollutant removal.

Biomimetic composite
Biomimetic composites are made by mimicking natural design strategies. The designs and structures found in animals and plants have been studied and applied to manufacture composite structures, and advanced manufacturing techniques such as 3D printing are being used by researchers to fabricate them.

References

material degradation Neuroscience Tissues (biology) Biomedical engineering
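Appended illustration: the simplest model of a structural-colour multilayer is an ideal quarter-wave Bragg stack, whose first-order peak reflected wavelength follows from the layer thicknesses and refractive indices. The chitin/air values below are rough, literature-style figures chosen here for the example, not measurements from the studies above.

```python
def bragg_peak_wavelength(n1, d1_nm, n2, d2_nm):
    """First-order normal-incidence reflection peak of a periodic
    two-layer stack: lambda = 2 * (n1*d1 + n2*d2)."""
    return 2.0 * (n1 * d1_nm + n2 * d2_nm)

# Example: alternating chitin (n ~ 1.56) and air (n = 1.0) layers,
# with thicknesses chosen so the stack reflects green light.
lam = bragg_peak_wavelength(n1=1.56, d1_nm=100.0, n2=1.0, d2_nm=120.0)
print(f"peak reflection near {lam:.0f} nm")
```

Changing either the layer thicknesses or the index contrast shifts the reflected colour, which is how different cuticle architectures produce different hues from the same materials.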
Biomimetic material
[ "Physics", "Engineering", "Biology" ]
2,006
[ "Biomaterials", "Biological engineering", "Neuroscience", "Biomedical engineering", "Materials", "Matter", "Medical technology" ]
11,088,829
https://en.wikipedia.org/wiki/Nerve%20guidance%20conduit
A nerve guidance conduit (also referred to as an artificial nerve conduit or artificial nerve graft, as opposed to an autograft) is an artificial means of guiding axonal regrowth to facilitate nerve regeneration and is one of several clinical treatments for nerve injuries. When direct suturing of the two stumps of a severed nerve cannot be accomplished without tension, the standard clinical treatment for peripheral nerve injuries is autologous nerve grafting. Due to the limited availability of donor tissue and the limited functional recovery achieved in autologous nerve grafting, neural tissue engineering research has focused on the development of bioartificial nerve guidance conduits as an alternative treatment, especially for large defects. Similar techniques are also being explored for nerve repair in the spinal cord, but nerve regeneration in the central nervous system poses a greater challenge because its axons do not regenerate appreciably in their native environment.

The creation of artificial conduits is also known as entubulation because the nerve ends and intervening gap are enclosed within a tube composed of biological or synthetic materials. Whether the conduit is in the form of a biologic tube, a synthetic tube or a tissue-engineered conduit, it should facilitate neurotropic and neurotrophic communication between the proximal and distal ends of the nerve gap, block external inhibitory factors, and provide physical guidance for axonal regrowth. The most basic objective of a nerve guidance conduit is to combine physical, chemical, and biological cues under conditions that will foster tissue formation. Materials that have been used to make biologic tubes include blood vessels and skeletal muscle, while nonabsorbable and bioabsorbable synthetic tubes have been made from silicone and polyglycolide respectively. Tissue-engineered nerve guidance conduits are a combination of many elements: scaffold structure, scaffold material, cellular therapies, neurotrophic factors and biomimetic materials. The choice of which physical, chemical and biological cues to use is based on the properties of the nerve environment, which is critical in creating the most desirable environment for axon regeneration. The factors that control material selection include biocompatibility, biodegradability, mechanical integrity, controllability during nerve growth, implantation and sterilization.

Scaffold topography
In tissue engineering, the three main levels of scaffold structure are considered to be:
the superstructure, the overall shape of the scaffold;
the microstructure, the cellular-level structure of the surface; and
the nanostructure, the subcellular-level structure of the surface.

Superstructure
The superstructure of a conduit or scaffold is important for simulating in vivo conditions for nerve tissue formation. The extracellular matrix, which is mainly responsible for directing tissue growth and formation, has a complex superstructure created by many interwoven fibrous molecules. Ways of forming artificial superstructure include the use of thermo-responsive hydrogels, longitudinally oriented channels, longitudinally oriented fibers, stretch-grown axons, and nanofibrous scaffolds.

Thermo-responsive hydrogels
In traumatic brain injury (TBI), a series of damaging events is initiated that leads to cell death and overall dysfunction, causing the formation of an irregularly-shaped lesion cavity.
The resulting cavity causes many problems for tissue-engineered scaffolds because invasive implantation is required, and often the scaffold does not conform to the cavity shape. In order to get around these difficulties, thermo-responsive hydrogels have been engineered to undergo solution-gelation (sol-gel) transitions, triggered by the difference between room and physiological temperatures, to facilitate implantation through in situ gelation and conformation to the cavity shape, allowing them to be injected in a minimally invasive manner.

Methylcellulose (MC) is a material with well-defined sol-gel transitions in the optimal range of temperatures. MC gelation occurs because of an increase in intra- and inter-molecular hydrophobic interactions as the temperature increases. The sol-gel transition is governed by the lower critical solution temperature (LCST), which is the temperature at which the elastic modulus equals the viscous modulus. The LCST must not exceed physiological temperature (37 °C) if the scaffold is to gel upon implantation, enabling minimally invasive delivery. Following implantation into a TBI lesion cavity or a peripheral nerve guidance conduit, MC elicits a minimal inflammatory response. It is also very important for minimally invasive delivery that the MC solution has a low viscosity at temperatures below its LCST, which allows it to be injected through a small-gauge needle for implantation in in vivo applications. MC has been successfully used as a delivery agent for intra-optical and oral pharmaceutical therapies. A disadvantage of MC is its limited propensity for protein adsorption and neuronal cell adhesion, making it a non-bioactive hydrogel. Because of this, use of MC in neural tissue regeneration requires attaching a biologically active group onto the polymer backbone in order to enhance cell adhesion.

Another thermo-responsive gel is formed by combining chitosan with glycerophosphate (GP) salt. This solution gels at temperatures above 37 °C. Gelation of chitosan/GP is rather slow, taking half an hour to set initially and nine more hours to stabilize completely. Gel strength varies from 67 to 1572 Pa depending on the concentration of chitosan; the lower end of this range approaches the stiffness of brain tissue. Chitosan/GP has shown success in vitro, but the addition of polylysine is needed to enhance nerve cell attachment. Polylysine was covalently bonded to chitosan in order to prevent it from diffusing away. Polylysine was selected because of its positive charge and high hydrophilicity, which promote neurite growth. Neuron survival was doubled, though neurite outgrowth did not change with the added polylysine.

Longitudinally oriented channels
Longitudinally oriented channels are macroscopic structures that can be added to a conduit in order to give regenerating axons a well-defined guide for growing straight along the scaffold. In a scaffold with a microtubular channel architecture, regenerating axons are able to extend through open longitudinal channels as they would normally extend through the endoneurial tubes of peripheral nerves. Additionally, the channels increase the surface area available for cell contact. The channels are usually created by inserting a needle, wire, or second polymer solution within a polymer scaffold; after stabilizing the shape of the main polymer, the needle, wire, or second polymer is removed in order to form the channels.
Typically multiple channels are created; however, the scaffold can consist of just one large channel, which is simply one hollow tube. A molding technique was created by Wang et al. for forming a nerve guidance conduit with a multi-channel inner matrix and an outer tube wall from chitosan. In their 2006 study, Wang et al. threaded acupuncture needles through a hollow chitosan tube, where they were held in place by fixing, on either end, patches created using CAD. A chitosan solution was then injected into the tube and solidified, after which the needles were removed, creating longitudinally oriented channels. A representative scaffold was then created for characterization, with 21 channels, using acupuncture needles 400 μm in diameter. Upon investigation under a microscope, the channels were found to be approximately circular with slight irregularities; all channels were aligned with the inner diameter of the outer tube wall. It was confirmed by micro-CT imaging that the channels ran through the entire length of the scaffold. Under water absorption, the inner and outer diameters of the scaffold became larger, but the channel diameters did not vary significantly, which is necessary for maintaining the scaffold shape that guides neurite extension. The inner structure provides an increase in compressive strength compared to a hollow tube alone, which can prevent collapse of the scaffold onto growing neurites. Neuro-2a cells were able to grow on the inner matrix of the scaffold, and they oriented along the channels. Although this method has only been tested on chitosan, it can be tailored to other materials.

A lyophilizing and wire-heating process is another method of creating longitudinally oriented channels, developed by Huang et al. (2005). A chitosan and acetic acid solution was frozen around nickel-copper (Ni-Cu) wires in a liquid nitrogen trap; subsequently the wires were heated and removed. Ni-Cu wires were chosen because they have a high resistance level. Temperature-controlled lyophilizers were used to sublimate the acetic acid. There was no evidence of the channels merging or splitting. After lyophilizing, the scaffold dimensions shrank, making the channels a bit smaller than the wire used. The scaffolds were neutralized to a physiological pH value using a base, which had dramatic effects on the porous structure: weaker bases kept the porous structure uniform, but a stronger base made it uncontrollable. The technique used here can be slightly modified to accommodate other polymers and solvents.

Another way to create longitudinally oriented channels is to create a conduit from one polymer with embedded longitudinally oriented fibers of another polymer, and then selectively dissolve the fibers to form the channels. Polycaprolactone (PCL) fibers were embedded in a (hydroxyethyl)methacrylate (HEMA) scaffold. PCL was chosen over poly(lactic acid) (PLA) and poly(lactic-co-glycolic acid) (PLGA) because it is insoluble in HEMA but soluble in acetone. This is important because HEMA was used for the main conduit material and acetone was used to selectively dissolve the polymer fibers. Extruded PCL fibers were inserted into a glass tube and the HEMA solution was injected. The number of channels created was consistent from batch to batch, and the variation in fiber diameter could be reduced by creating a more controlled PCL fiber extrusion system. The channels formed were confirmed to be continuous and homogeneous by examination of porosity variations.
This process is safe, reproducible, and has controllable dimensions. In a similar study conducted by Yu and Shoichet (2005), HEMA was copolymerized with AEMA to create a P(HEMA-co-AEMA) gel. Polycaprolactone (PCL) fibers were embedded in the gel and then selectively dissolved by acetone with sonication to create channels. It was found that HEMA in mixture with 1% AEMA created the strongest gels. When compared to scaffolds without channels, the addition of 82–132 channels can provide an approximately 6–9 fold increase in surface area, which may be advantageous for regeneration studies that depend on contact-mediated cues. Itoh et al. (2003) developed a scaffold consisting of a single large longitudinally oriented channel, created using chitosan from crab tendons. Tendons were harvested from crabs (Macrocheira kaempferi) and repeatedly washed with sodium hydroxide solution to remove proteins and to deacetylate the tendon chitin, which subsequently became known as tendon chitosan. A stainless steel bar with a triangular cross-section (each side 2.1 mm long) was inserted into a hollow tendon chitosan tube of circular cross-section (diameter: 2 mm; length: 15 mm). When comparing the circular and triangular tubes, it was found that the triangular tubes had improved mechanical strength, held their shape better, and increased the surface area available. While this is an effective method for creating a single channel, it does not provide as much surface area for cellular growth as the multi-channel scaffolds. Newman et al. (2006) inserted conductive and non-conductive fibers into a collagen-TERP scaffold (collagen cross-linked with a terpolymer of poly(N-isopropylacrylamide) (PNiPAAm)). The fibers were embedded by tightly wrapping them on a small glass slide and sandwiching a collagen-TERP solution between it and another glass slide; spacers between the glass slides set the gel thickness to 800 μm. The conductive fibers were carbon fiber and Kevlar, and the non-conductive fibers were nylon-6 and tungsten wire. Neurites extended in all directions in thick bundles on the carbon fiber; however, with the other three fibers, neurites extended in fine web-like conformations. The neurites showed no directional growth on the carbon and Kevlar fibers, but they grew along the nylon-6 fibers and to some extent along the tungsten wire. The tungsten wire and nylon-6 fiber scaffolds had neurites grow into the gel near the fiber-gel interface in addition to growing along the surface. All fiber gels except Kevlar showed a significant increase in neurite extension compared to non-fiber gels. There was no difference in neurite extension between the non-conductive and the conductive fibers. In their 2005 study, Cai et al. added poly(L-lactic acid) (PLLA) microfilaments to hollow poly(lactic acid) (PLA) and silicone tubes. The microfiber guidance characteristics were inversely related to the fiber diameter, with smaller diameters promoting better longitudinally oriented cell migration and axonal regeneration. The microfibers also promoted myelination during peripheral nerve repair. Stretch-grown axons Mature axon tracts have been demonstrated to experience growth when mechanically stretched at the central portion of the axon cylinder. Such mechanical stretch was applied by a custom axon stretch-growth bioreactor composed of four main components: a custom-designed axon expansion chamber, a linear motion table, a stepper motor, and a controller.
The nerve tissue culture is placed within the expansion chamber with a port for gas exchange and a removable stretching frame, which is able to separate two groups of somas (neuron cell bodies) and thus stretch their axons. Collagen gel was used to promote the growth of larger stretch-grown axon tracts that were visible to the unaided eye. There are two reasons for the growth enhancement due to the collagen coating: (1) the culture became hydrophobic after the collagen dried, which permitted a denser concentration of neurons to grow, and (2) the collagen coating created an unobstructed coating across the two elongation substrates. Examination by scanning and transmission electron microscopy showed no signs of axon thinning due to stretch, and the cytoskeleton appeared to be normal and intact. The stretch-grown axon tracts were cultured on a biocompatible membrane, which could be directly formed into a cylindrical structure for transplantation, eliminating the need to transfer axons to a scaffold after growth was complete. The stretch-grown axons were able to grow at an unprecedented rate of 1 cm/day after only 8 days of acclimation, which is much greater than the 1 mm/day maximal growth rate measured for growth cone extension. The rate of 1 mm/day is also the average transport speed for structural elements such as neurofilaments. Nanofiber scaffolds Research on nanoscale fibers attempts to mimic the in vivo extracellular environment in order to promote directional growth and regeneration. Three distinct methods for forming nanofibrous scaffolds are self-assembly, phase separation, and electrospinning, though many other methods exist. Self-assembly of nanofibrous scaffolds can occur only when the fibers themselves are engineered for self-assembly. One common way to drive the self-assembly of scaffold fibers is to use amphiphilic peptides, so that in water the hydrophobic moiety drives the self-assembly. Carefully calculated engineering of the amphiphilic peptides allows for precise control over the self-assembled matrix. Self-assembly is able to create both ordered and unordered topographies. Phillips et al. (2005) developed and tested in vitro and in vivo a self-aligned collagen-Schwann cell matrix, which allowed DRG neurite extension alignment in vitro. Collagen gels have been used extensively as substrates for three-dimensional tissue culture. Cells are able to form integrin-mediated attachments with collagen, which initiates cytoskeleton assembly and cell motility. As cells move along the collagen fibers they generate forces that contract the gel. When the collagen fibers are tethered at both ends, cell-generated forces create uniaxial strain, causing the cells and collagen fibers to align. The advantages of this matrix are its simplicity and speed of preparation. Soluble plasma fibronectin can also self-assemble into stable insoluble fibers when put under direct mechanical shearing within a viscous solution. Phillips et al. (2004) investigated a new method of shear aggregation that produces improved fiber formation. The mechanical shearing was created by dragging a 0.2 ml bolus out to 3 cm with forceps; fibronectin aggregated into insoluble fibers at the rapidly moving interface in an ultrafiltration cell. The proposed mechanism for this fiber aggregation is protein extension and elongation under mechanical shear force, which leads to lateral packing and protein aggregation of fibers. Phillips et al.
showed that mechanical shear produced by stretching a high-viscosity fibronectin gel causes substantial changes in its structure, and that when applied through uniaxial extension, a viscous fibronectin gel forms oriented fibrous fibronectin aggregates; additionally, the fibrous aggregates have a decreased solubility and can support various cell types in vitro. Phase separation allows three-dimensional sub-micrometer fiber scaffolds to be created without the use of specialized equipment. The five steps involved in phase separation are polymer dissolution, phase separation and gelation, solvent extraction from the gel with water, freezing, and freeze-drying. The final product is a continuous fiber network. Phase separation can be modified to fit many different applications, and pore structure can be varied by using different solvents, which can change the entire process from liquid–liquid to solid–liquid. Porosity and fiber diameter can also be modified by varying the initial concentration of the polymer; a higher initial concentration leads to fewer pores and larger fiber diameters. This technique can be used to create networks of fibers with diameters approaching the diameter of type I collagen fibers. The fibrous network created is randomly oriented, and so far work has not been done to attempt to organize the fibers. Phase separation is a widely used technique for creating highly porous nanofibrous scaffolds with ease. Electrospinning provides a robust platform for the development of synthetic nerve guidance conduits. Electrospinning can serve to create scaffolds at controlled dimensions with varying chemistry and topography. Furthermore, different materials can be encapsulated within fibers, including particles, growth factors, and even cells. Electrospinning creates fibers by electrically charging a droplet of polymer melt or solution and suspending it from a capillary. Then, an electric field is applied at one end of the capillary until the charge exceeds the surface tension, creating a polymer jet that elongates and thins. The polymer jet discharges from a Taylor cone, leaving behind electrically charged polymers, which are collected on a grounded surface as the solvent evaporates from the jets. Fibers have been spun with diameters ranging from less than 3 nm to over 1 μm. The process is affected by system parameters such as polymer type, polymer molecular weight, and solution properties, and by process parameters such as flow rate, voltage, capillary diameter, distance between the collector and the capillary, and motion of the collector. The fibrous network created is unordered and has a high surface-to-volume ratio as a result of its high porosity; a large network surface area is ideal for growth and transport of wastes and nutrients in neural tissue engineering. The two features of electrospun scaffolds that are advantageous for neural tissue engineering are the morphology and architecture, which closely mimic the ECM, and the pores, which are in the correct range of sizes that allows nutrient exchange but prevents ingrowth of glial scar tissue (around 10 μm). Random electrospun PLLA scaffolds have been demonstrated to have increased cell adhesion, which may be due to an increased surface roughness. Chemically modified electrospun fiber mats have also been shown to influence neural stem cell differentiation and increase cell proliferation.
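The surface-to-volume advantage of thin fibers can be made concrete from cylinder geometry: for a long fiber of diameter d, S/V = (πdL)/(πd²L/4) = 4/d. The short sketch below uses illustrative diameters (100 nm and 10 μm), not figures from a specific study.

```python
def surface_to_volume(diameter_m):
    """S/V of a long cylindrical fiber: (pi*d*L) / (pi*d^2*L/4) = 4/d, in m^-1."""
    return 4.0 / diameter_m

nano = surface_to_volume(100e-9)   # 100 nm electrospun nanofiber (illustrative)
micro = surface_to_volume(10e-6)   # 10 um conventional microfiber (illustrative)
print(f"nanofiber:  {nano:.1e} 1/m")
print(f"microfiber: {micro:.1e} 1/m")
print(f"thinner fiber gives {nano / micro:.0f}x more surface per unit volume")
```

Shrinking the fiber diameter a hundredfold multiplies the available surface per unit volume by the same factor, which is why nanoscale mats present so much more area for adhesion and nutrient exchange.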
In the past decade, scientists have also developed numerous methods for the production of aligned nanofiber scaffolds, which serve to provide additional topographic cues to cells. This is advantageous because large-scale three-dimensional aligned scaffolds cannot be created easily using traditional fabrication techniques. In a study conducted by Yang et al. (2005), aligned and random electrospun poly(L-lactic acid) (PLLA) microfibrous and nanofibrous scaffolds were created, characterized, and compared. Fiber diameters were directly proportional to the initial polymer concentration used for electrospinning; the average diameter of aligned fibers was smaller than that of random fibers under identical processing conditions. It was shown that neural stem cells elongated parallel to the aligned electrospun fibers. The aligned nanofibers supported a longer average neurite length compared to aligned microfibers, random microfibers, and random nanofibers. In addition, more cells differentiated on aligned nanofibers than on aligned microfibers. Thus, the results of this study demonstrated that aligned nanofibers may be more beneficial than non-aligned fibers or microfibers for promoting nerve regeneration. Microstructure and nanostructure Microstructure and nanostructure, along with superstructure, are the three main levels of scaffold structure that deserve consideration when creating scaffold topography. While the superstructure refers to the overall shape of the scaffold, the microstructure refers to the cellular-level structure of the surface and the nanostructure refers to the subcellular-level structure of the surface. All three levels of structure are capable of eliciting cell responses; however, there is significant interest in the response of cells to nanoscale topography, motivated by the presence of numerous nanoscale structures within the extracellular matrix. There is a growing number of methods for the manufacture of micro- and nanostructures (many originating from the semiconductor industry), allowing for the creation of various topographies with controlled size, shape, and chemistry. Physical cues Physical cues are formed by creating an ordered surface structure at the level of the microstructure and/or nanostructure. Physical cues on the nanoscale have been shown to modulate cell adhesion, migration, orientation, contact inhibition, gene expression, and cytoskeletal formation. This allows for the direction of cell processes such as proliferation, differentiation, and spreading. There are numerous methods for the manufacture of micro- and nanoscale topographies, which can be divided into those that create ordered topographies and those that create unordered topographies. Ordered topographies are defined as patterns that are organized and geometrically precise. Though there are many methods for creating ordered topographies, they are usually time-consuming and require skill, experience, and the use of expensive equipment. Photolithography involves exposing a photoresist-coated silicon wafer to a light source; a mask with the desired pattern is placed between the light source and the wafer, thereby selectively allowing light to filter through and create the pattern in the photoresist. Further development of the wafer brings out the pattern in the photoresist. Photolithography performed in the near-UV is often viewed as the standard for fabricating topographies on the micro-scale.
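The micro-scale limit of photolithography can be estimated with the standard diffraction-resolution formula R = k1·λ/NA (a general lithography rule of thumb; the wavelength, numerical aperture, and k1 values below are typical assumed numbers, not figures from the studies cited here).

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.5):
    """Diffraction-limited resolution, R = k1 * lambda / NA (Rayleigh criterion)."""
    return k1 * wavelength_nm / numerical_aperture

# Near-UV i-line exposure (365 nm) with a modest projection objective (NA assumed)
print(f"minimum feature ~ {min_feature_nm(365, 0.5):.0f} nm")  # hundreds of nm at best
```

With near-UV light, features much smaller than a few hundred nanometers are out of reach, which motivates the electron-beam and X-ray methods discussed next.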
However, because the lower limit for feature size is a function of the wavelength, this method cannot be used to create nanoscale features. In their 2005 study, Mahoney et al. created organized arrays of polyimide channels (11 μm in height and 20–60 μm in width) on a glass substrate by photolithography. Polyimide was used because it adheres well to glass, is chemically stable in aqueous solution, and is biocompatible. It is hypothesized that the microchannels limited the range of angles at which cytoskeletal elements within the neurite growth cones could accumulate, assemble, and orient. There was a significant decrease in the number of neurites emerging from the soma; however, the decrease lessened as the range of angles over which the neurites emerged was increased. Also, the neurites were on average two times longer when the neurons were cultured on the microchannels versus the controls on a flat surface; this could be due to a more efficient alignment of filaments. In electron beam lithography (EBL), an electron-sensitive resist is exposed to a beam of high-energy electrons. There is the choice of a positive or negative type resist; however, lower feature resolution can be obtained with negative resists. Patterns are created by programming the beam of electrons to follow an exact path along the surface of the material. Resolution is affected by other factors such as electron scattering in the resist and backscattering from the substrate. EBL can create single surface features on the order of 3–5 nm. If multiple features are required over a large surface area, as is the case in tissue engineering, the resolution drops and features can only be created as small as 30–40 nm, and the resist development begins to weigh more heavily on pattern formation. To prevent dissolution of the resist, ultrasonic agitation can be used to overcome intermolecular forces. In addition, isopropyl alcohol (IPA) helps develop high-density arrays. EBL can become a quicker and less costly process by replicating nanometer patterns in polymeric materials; the replication process has been demonstrated with polycaprolactone (PCL) using hot embossing and solvent casting. In a study conducted by Gomez et al. (2007), microchannels 1 and 2 μm wide and 400 and 800 nm deep, created by EBL on PDMS, were shown to enhance axon formation of hippocampal cells in culture more than immobilized chemical cues. X-ray lithography is another method for forming ordered patterns that can be used to investigate the role that topography plays in promoting neuritogenesis. The mask parameters determine the pattern periodicity, but ridge width and depth are determined by the etching conditions. In one study, ridges were created with periods ranging from 400 to 4000 nm, widths ranging from 70 to 1900 nm, and a groove depth of 600 nm; developing neurites demonstrated contact guidance with features as small as 70 nm, and greater than 90% of the neurites were within 10 degrees of parallel alignment with the ridges and grooves. There was not a significant difference in orientation with respect to the feature sizes used. The number of neurites per cell was constrained by the ridges and grooves, producing bipolar rather than branching phenotypes. Unordered topographies are generally created by processes that occur spontaneously during other processing; the patterns are random in orientation and organization, with imprecise or no control over feature geometry.
The advantage of creating unordered topographies over ordered ones is that the processes are often less time-consuming, less expensive, and do not require great skill and experience. Unordered topographies can be created by polymer demixing, colloidal lithography, and chemical etching. In polymer demixing, polymer blends undergo spontaneous phase separation; it often occurs under conditions such as spin casting onto silicon wafers. Features that can be created by this method include nanoscale pits, islands, and ribbons, which can be controlled to an extent by adjusting the polymer ratio and concentration to change the feature shape and size, respectively. There is not much control in the horizontal direction, though the vertical dimension of the features can be precisely controlled. Because the pattern is very unordered horizontally, this method can only be used to study cell interactions with nanotopographies of specific heights. Colloidal lithography is inexpensive and can be used to create surfaces with controlled heights and diameters. Nanocolloids are used as an etch mask by spreading them along the material surface, and then ion beam bombardment or film evaporation is used to etch away around the nanocolloids, creating nanocolumns and nanopits, respectively. The final surface structure can be controlled by varying the area covered by colloids and the colloid size. The area covered by the colloids can be changed by modifying the ionic strength of the colloid solution. This technique is able to create large patterned surface areas, which is necessary for tissue engineering applications. Chemical etching involves soaking the material surface in an etchant such as hydrofluoric acid (HF) or sodium hydroxide (NaOH) until the surface is etched away to a desired roughness, created by pits and protrusions on the nanometer scale. Longer etch times lead to rougher surfaces (i.e., smaller surface pits and protrusions). Structures with specific geometry or organization cannot be created by this rudimentary method, because at best it can be considered a surface treatment for changing the surface roughness. The significant advantages of this method are its ease of use and its low cost for creating a surface with nanotopographies. Silicon wafers were etched using HF, and it was demonstrated that cell adhesion was enhanced only in a specified range of roughness (20–50 nm). Chemical cues In addition to creating topography with physical cues, topography can be created with chemical cues by selectively depositing polymer solution in patterns on the surface of a substrate. There are different methods for depositing the chemical cues. Two methods for dispensing chemical solutions are stripe patterning and piezoelectric microdispensing. Stripe-patterned polymer films can be formed on solid substrates by casting diluted polymer solution. This method is relatively easy and inexpensive, and it places no restriction on the scaffold materials that can be used. The procedure involves horizontally overlapping glass plates while keeping them vertically separated by a narrow gap filled with a polymer solution. The upper plate is moved at a constant velocity between 60 and 100 μm/s. A thin liquid film of solution is continuously formed at the edge of the sliding glass as the solvent evaporates. Stripe patterns prepared at speeds of 60, 70, and 100 μm/s created widths and groove spacings of 2.2 and 6.1 μm, 3.6 and 8.4 μm, and 4.3 and 12.7 μm, respectively; the range of heights for the ridges was 50–100 nm. Tsuruma, Tanaka et al.
demonstrated that embryonic neural cells cultured on film coated with poly-L-lysine attached and elongated parallel to poly(ε-caprolactone)/chloroform solution (1 g/L) stripes with narrow pattern width and spacing (width: 2.2 μm, spacing: 6.1 μm). However, the neurons grew across the axis of the patterns with wide width and spacing (width: 4.3 μm, spacing: 12.7 μm). On average, the neurons on the stripe-patterned films had fewer neurites per cell and longer neurites compared to the neurons on non-patterned films. Thus, the stripe pattern parameters are able to determine the growth direction, the length of neurites, and the number of neurites per cell. Microdispensing was used to create micropatterns on polystyrene culture dishes by dispensing droplets of adhesive laminin and non-adhesive bovine serum albumin (BSA) solutions. The microdispenser is a piezoelectric element attached to a push-bar on top of a channel etched in silicon, which has one inlet at each end and a nozzle in the middle. The piezoelectric element expands when voltage is applied, causing liquid to be dispensed through the nozzle. The microdispenser is moved using a computer-controlled x-y table. The micropattern resolution depends on many factors: dispensed liquid viscosity, drop pitch (the distance between the centers of two adjacent droplets in a line or array), and the substrate. With increasing viscosity the lines become thinner, but if the liquid viscosity is too high the liquid cannot be expelled. Heating the solution creates more uniform protein lines. Although some droplet overlap is necessary to create continuous lines, uneven evaporation may cause uneven protein concentration along the lines; this can be prevented through smoother evaporation by modifying the dispensed solution properties. For patterns containing 0.5 mg/mL laminin, a higher proportion of neurites grew on the microdispensed lines than between the lines. On 10 mg/mL and 1 mg/mL BSA protein patterns and fatty-acid-free BSA protein patterns, a significant number of neurites avoided the protein lines and grew between the lines. Thus, the fatty-acid-free BSA lines were just as non-permissive for neurite growth as lines containing BSA with fatty acids. Because microdispensing does not require direct contact with the substrate surfaces, this technique can utilize surfaces with delicate micro- or nanotopography that could be destroyed by contact. It is possible to vary the amount of protein deposited by dispensing more or fewer droplets. An advantage of microdispensing is that patterns can be created quickly, in 5–10 minutes. Because the piezoelectric microdispenser does not require heating, heat-sensitive proteins and fluids as well as living cells can be dispensed. Scaffold material The selection of the scaffold material is perhaps the most important decision to be made. It must be biocompatible and biodegradable; in addition, it must be able to incorporate any physical, chemical, or biological cues desired, which in the case of some chemical cues means that it must have a site available for chemically linking peptides and other molecules. The scaffold materials chosen for nerve guidance conduits are almost always hydrogels. The hydrogel may be composed of either biological or synthetic polymers. Both biological and synthetic polymers have their strengths and weaknesses.
It is important to note that the conduit material can cause inadequate recovery when (1) the degradation and resorption rates do not match the tissue formation rate, (2) the stress-strain properties do not compare well to those of neural tissue, (3) swelling occurs during degradation, causing significant deformation, (4) a large inflammatory response is elicited, or (5) the material has low permeability. Hydrogel Hydrogels are a class of biomaterials that are chemically or physically cross-linked water-soluble polymers. They can be either degradable or non-degradable, as determined by their chemistry, but degradable materials are preferable whenever possible. There has been great interest in hydrogels for tissue engineering purposes, because they generally possess high biocompatibility, mechanical properties similar to soft tissue, and the ability to be injected as a liquid that gels. When hydrogels are physically cross-linked they must rely on phase separation for gelation; the phase separation is temperature-dependent and reversible. Some other advantages of hydrogels are that they use only non-toxic aqueous solvents, allow infusion of nutrients and exit of waste products, and allow cells to assemble spontaneously. Hydrogels have low interfacial tension, meaning cells can easily migrate across the tissue-implant boundary. However, with hydrogels it is difficult to achieve a broad range of mechanical properties or structures with controlled pore size. Synthetic polymer A synthetic polymer may be non-degradable or degradable. For the purpose of neural tissue engineering, degradable materials are preferred whenever possible, because long-term effects such as inflammation and scarring could severely damage nerve function. For polyesters such as those in the PLGA family, the degradation rate depends on the molecular weight of the polymer, its crystallinity, and the ratio of glycolic acid to lactic acid subunits. Because of its methyl group, lactic acid is more hydrophobic than glycolic acid, making its hydrolysis slower. Synthetic polymers have more tractable mechanical properties and degradation rates that can be controlled over a wide range, and they eliminate the concern for immunogenicity. There are many different synthetic polymers currently being used in neural tissue engineering. However, the drawbacks of many of these polymers include a lack of biocompatibility and bioactivity, which prevents these polymers from promoting cell attachment, proliferation, and differentiation. Synthetic conduits have only been clinically successful for the repair of very short nerve lesion gaps of less than 1–2 cm. Furthermore, nerve regeneration with these conduits has yet to reach the level of functional recovery seen with nerve autografts. Collagen-terpolymer Collagen is a major component of the extracellular matrix, and it is found in the supporting tissues of peripheral nerves. A terpolymer (TERP) was synthesized by free radical copolymerization of its three monomers and cross-linked with collagen, creating a hybrid biological-synthetic hydrogel scaffold. The terpolymer is based on poly(NIPAAM), which is known to be a cell-friendly polymer. TERP is used both as a cross-linker, to increase hydrogel robustness, and as a site for grafting bioactive peptides or growth factors, by reacting some of its acryloxysuccinimide groups with the –NH2 groups on the peptides or growth factors.
Because the collagen-terpolymer (collagen-TERP) hydrogel lacks a bioactive component, a study attached to it a common cell adhesion peptide found in laminin (YIGSR) in order to enhance its cell adhesion properties. Poly(lactic-co-glycolic acid) family The polymers in the PLGA family include poly(lactic acid) (PLA), poly(glycolic acid) (PGA), and their copolymer poly(lactic-co-glycolic acid) (PLGA). All three polymers have been approved by the Food and Drug Administration for use in various devices. These polymers are brittle and lack sites for permissible chemical modification; in addition, they degrade in bulk rather than at the surface, which is not a smooth and ideal degradation process. In an attempt to overcome the lack of functionalities, free amines have been incorporated into their structures, from which peptides can be tethered to control cell attachment and behavior. Methacrylated dextran (Dex-MA) copolymerized with aminoethyl methacrylate (AEMA) Dextran is a polysaccharide derived from bacteria; it is usually produced by enzymes from certain strains of Leuconostoc or Streptococcus. It consists of α-1,6-linked D-glucopyranose residues. Cross-linked dextran hydrogel beads have been widely used as low protein-binding matrices for column chromatography applications and for microcarrier cell culture technology. However, only recently have dextran hydrogels been investigated for biomaterials applications, and specifically as drug delivery vehicles. Advantages of using dextran in biomaterials applications include its resistance to protein adsorption and cell adhesion, which allows specific cell adhesion to be determined by deliberately attached peptides from ECM components. AEMA was copolymerized with Dex-MA in order to introduce primary amine groups that provide a site for attachment of ECM-derived peptides to promote cell adhesion. The peptides can be immobilized using sulfo-SMCC coupling chemistry and cysteine-terminated peptides. Copolymerization of Dex-MA with AEMA allowed the macroporous geometry of the scaffolds to be preserved in addition to promoting cellular interactions. Poly(glycerol sebacate) (PGS) A novel biodegradable, tough elastomer has been developed from poly(glycerol sebacate) (PGS) for use in the creation of a nerve guidance conduit. PGS was originally developed for soft tissue engineering purposes, specifically to mimic ECM mechanical properties. It is considered an elastomer because it is able to recover from deformation in mechanically dynamic environments and to effectively distribute stress evenly throughout regenerating tissues in the form of microstresses. PGS is synthesized by a polycondensation reaction of glycerol and sebacic acid, and it can be melt processed or solvent processed into the desired shape. PGS has a Young's modulus of 0.28 MPa and an ultimate tensile strength greater than 0.5 MPa. Peripheral nerve has a Young's modulus of approximately 0.45 MPa, which is very close to that of PGS. Additionally, PGS experiences surface degradation, accompanied by linear losses in mass and strength during resorption. Following implantation, the degradation half-life was determined to be 21 days; complete degradation occurred at day 60. PGS experiences minimal water absorption during degradation and does not show detectable swelling; swelling can cause distortion, which narrows the tubular lumen and can impede regeneration.
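The difference between surface erosion with linear mass loss and bulk (first-order) hydrolysis can be illustrated with the reported 21-day half-life. The sketch below compares two idealized toy models fit to that half-life; neither is a fit from the study itself, and the observed behavior (complete resorption at day 60) falls between them.

```python
import math

HALF_LIFE_DAYS = 21.0  # reported in vivo half-life of PGS

def linear_fraction(t_days):
    """Constant-rate (surface-erosion-like) mass loss fit to the 21-day half-life."""
    return max(0.0, 1.0 - 0.5 * t_days / HALF_LIFE_DAYS)

def first_order_fraction(t_days):
    """First-order (bulk-hydrolysis-like) mass loss with the same half-life."""
    return math.exp(-math.log(2.0) * t_days / HALF_LIFE_DAYS)

for day in (0, 21, 42, 60):
    print(f"day {day:2d}:  linear {linear_fraction(day):4.0%}   "
          f"first-order {first_order_fraction(day):4.0%}")
```

A constant-rate profile fit to the 21-day half-life reaches zero mass by day 42, while a first-order profile would still predict roughly 14% remaining at day 60; the reported complete degradation at day 60 sits between these two idealizations.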
It is advantageous that the degradation time of PGS can be varied by changing the degree of crosslinking and the ratio of sebacic acid to glycerol. In a study by Sundback et al. (2005), implanted PGS and PLGA conduits had similar early tissue responses; however, PLGA inflammatory responses spiked later, while PGS inflammatory responses continued to decrease. Polyethylene glycol hydrogel Polyethylene glycol (PEG) hydrogels are biocompatible and proven to be tolerated in many tissue types, including the CNS. Mahoney and Anseth formed PEG hydrogels by photopolymerizing methacrylate groups covalently linked to degradable PEG macromers. Hydrogel degradation was monitored over time by measuring mechanical strength (compressive modulus) and average mesh size from swelling ratio data. Initially, the polymer chains were highly cross-linked, but as degradation proceeded, ester bonds were hydrolyzed, allowing the gel to swell; the compressive modulus decreased as the mesh size increased until the hydrogel was completely dissolved. It was demonstrated that neural precursor cells were able to be photoencapsulated and cultured on the PEG gels with minimal cell death. Because the mesh size is initially small, the hydrogel blocks inflammatory and other inhibitory signals from the surrounding tissue. As the mesh size increases, the hydrogel is able to serve as a scaffold for axon regeneration. Biological polymers There are advantages to using biological polymers over synthetic polymers. They are very likely to have good biocompatibility and be easily degraded, because they are already present in nature in some form. However, there are also several disadvantages. They have unwieldy mechanical properties and degradation rates that cannot be controlled over a wide range. In addition, there is always the possibility that naturally derived materials may cause an immune response or contain microbes. In the production of naturally derived materials there will also be batch-to-batch variation in large-scale isolation procedures that cannot be controlled. Some other problems plaguing natural polymers are their inability to support growth across long lesion gaps, due to the possibility of collapse, scar formation, and early re-absorption. Despite all these disadvantages, some of which can be overcome, biological polymers still prove to be the optimal choice in many situations. Polysialic acid (PSA) Polysialic acid (PSA) is a relatively new biocompatible and bioresorbable material for artificial nerve conduits. It is a homopolymer of α2,8-linked sialic acid residues and a dynamically regulated post-translational modification of the neural cell adhesion molecule (NCAM). Recent studies have demonstrated that polysialylated NCAM (polySia-NCAM) promotes regeneration in the motor system. PSA shows stability under cell culture conditions and allows for induced degradation by enzymes. It has also been discovered recently that PSA is involved in steering processes such as neuritogenesis, axonal pathfinding, and neuroblast migration. Animals in which PSA is genetically knocked out express a lethal phenotype with unsuccessful pathfinding; nerves connecting the two brain hemispheres were aberrant or missing. Thus PSA is vital for proper nervous system development. Collagen Type I/III Collagen is the major component of the extracellular matrix and has been widely used in nerve regeneration and repair. Due to their smooth microgeometry and permeability, collagen gels allow molecules to diffuse through them.
Collagen resorption rates can be controlled by crosslinking collagen with polyepoxy compounds. Additionally, collagen type I/III scaffolds have demonstrated good biocompatibility and are able to promote Schwann cell proliferation. However, collagen conduits filled with Schwann cells used to bridge nerve gaps in rats have shown surprisingly unsuccessful nerve regeneration compared to nerve autografts. This is because biocompatibility is not the only factor necessary for successful nerve regeneration; other parameters such as inner diameter, inner microtopography, porosity, wall thickness, and Schwann cell seeding density will need to be examined in future studies in order to improve the results obtained with these collagen I/III gels. Spider silk fiber Spider silk fibers have been shown to promote cellular adhesion, proliferation, and vitality. Allmeling, Jokuszies et al. showed that Schwann cells attach quickly and firmly to the silk fibers, growing in a bipolar shape; proliferation and survival rates were normal on the silk fibers. They used spider silk fibers to create a nerve conduit with Schwann cells and acellularized xenogenic veins. The Schwann cells formed columns along the silk fibers in a short amount of time, and the columns were similar to the bands of Bungner that grow in vivo after PNS injury. Spider silk had not been used in tissue engineering until recently because of the predatory nature of spiders and the low yield of silk from individual spiders. It has been discovered that the species Nephila clavipes produces silk that is less immunogenic than silkworm silk; it has a tensile strength of 4 × 10⁹ N/m², which is six times the breaking strength of steel. Because spider silk is proteolytically degraded, there is no shift away from physiological pH during degradation. Other advantages of spider silk include its resistance to fungal and bacterial decomposition for weeks and the fact that it does not swell. Also, the silk's structure promotes cell adhesion and migration. However, silk harvest is still a tedious task, and the exact composition varies among species and even among individuals of the same species depending on diet and environment. There have been attempts to synthetically manufacture spider silk. Further studies are needed to test the feasibility of using a spider silk nerve conduit in vitro and in vivo. Silkworm silk fibroin In addition to spiders, silkworms are another source of silk. Silk from Bombyx mori silkworms consists of a core of fibroin protein surrounded by sericin, a family of glue-like proteins. Fibroin has been characterized as a heavy chain with a repeated hydrophobic and crystallizable sequence: Gly-Ala-Gly-Ala-Gly-X (X stands for Ser or Tyr). The surrounding sericin is more hydrophilic due to its many polar residues, but it does still have some hydrophobic β-sheet portions. Silks have long been used as sutures due to their high mechanical strength and flexibility as well as their permeability to water and oxygen. In addition, silk fibroin can be easily manipulated and sterilized. However, silk use halted when undesirable immunological reactions were reported. Recently, it has been discovered that the cause of the immunological problems lies solely with the surrounding sericin. Since this discovery, silk with the sericin removed has been used in many pharmaceutical and biomedical applications.
Because it is necessary to remove the sericin from around the fibroin before the silk can be used, an efficient procedure, known as degumming, needs to be developed for its removal. One degumming method uses boiling aqueous Na2CO3 solution, which removes the sericin without damaging the fibroin. Yang, Chen et al. demonstrated that silk fibroin and silk fibroin extract fluid show good biocompatibility with Schwann cells, with no cytotoxic effects on proliferation. Chitosan Chitosan and chitin belong to a family of biopolymers composed of β(1–4)-linked N-acetyl-D-glucosamine and D-glucosamine subunits. Chitosan is formed by alkaline N-deacetylation of chitin, which is the second most abundant natural polymer after cellulose. Chitosan is a biodegradable polysaccharide that has been useful in many biomedical applications, such as a chelating agent, drug carrier, membrane, and water treatment additive. Chitosan is soluble in dilute acidic solutions, but precipitates into a gel at neutral pH. It does not support neural cell attachment and proliferation well, but this can be enhanced by attaching ECM-derived peptides. Chitosan also has weak mechanical properties, which are more challenging to overcome. The degree of acetylation (DA) for soluble chitosan ranges from 0% to 60%, depending on processing conditions. A study was conducted to characterize how varying the DA affects the properties of chitosan. Varying DA was obtained using acetic anhydride or alkaline hydrolysis. It was found that decreasing acetylation produced an increase in compressive strength. Biodegradation was examined by the use of lysozyme, which is known to be mainly responsible for degrading chitosan in vivo by hydrolyzing its glycosidic bonds, and which is released by phagocytic cells after nerve injury. The results reveal that there was an accelerated mass loss with intermediate DAs, compared with high and low DAs, over the time period studied. When DRG cells were grown on N-acetylated chitosan, cell viability decreased with increasing DA. Also, chitosan has an increasing charge density with decreasing DA, which is responsible for greater cell adhesion. Thus, controlling the DA of chitosan is important for regulating the degradation time. This knowledge could help in the development of a nerve guidance conduit from chitosan. Aragonite Aragonite scaffolds have recently been shown to support the growth of neurons from rat hippocampi. Shany et al. (2006) showed that aragonite matrices can support the growth of astrocytic networks in vitro and in vivo. Thus, aragonite scaffolds may be useful for nerve tissue repair and regeneration. It is hypothesized that aragonite-derived Ca2+ is essential for promoting cell adherence and cell–cell contact. This is probably carried out with the help of Ca2+-dependent adhesion molecules such as cadherins. Aragonite crystalline matrices have many advantages over hydrogels. They have larger pores, which allows for better cell growth, and the material is bioactive as a result of releasing Ca2+, which promotes cell adhesion and survival. In addition, aragonite matrices have higher mechanical strength than hydrogels, allowing them to withstand more pressure when pressed into an injured tissue. Alginate Alginate is a polysaccharide that readily forms chains; it can be cross-linked at its carboxylic groups with multivalent cations such as Cu2+, Ca2+, or Al3+ to form a more mechanically stable hydrogel.
Calcium alginates form polymers that are both biocompatible and non-immunogenic and have been used in tissue engineering applications. However, they are unable to support longitudinally oriented growth, which is necessary for reconnection of the proximal end with its target. In order to overcome this problem, anisotropic capillary hydrogels (ACH) have been developed. They are created by superimposing aqueous solutions of sodium alginate with aqueous solutions of multivalent cations in layers. After formation, the electrolyte ions diffuse into the polymer solution layers, and a dissipative convective process causes the ions to precipitate, creating capillaries. The dissipative convective process results from the opposition of diffusion gradients and friction between the polyelectrolyte chains. The capillary walls are lined with the precipitated metal alginate, while the lumen is filled with the extruded water. Prang et al. (2006) assessed the capacity of ACH gels to promote directed axonal regrowth in the injured mammalian CNS. The multivalent ions used to create the alginate-based ACH gels were copper ions, whose diffusion into the sodium alginate layers created hexagonally structured anisotropic capillary gels. After precipitation, the entire gel was traversed by longitudinally oriented capillaries. The ACH scaffolds promoted adult NPC survival and highly oriented axon regeneration. This is the first instance of using alginates to produce anisotropic structured capillary gels. Future studies are needed to assess the long-term physical stability of the ACH scaffolds, because CNS axon regeneration can take many months; however, in addition to being able to provide long-term support, the scaffolds must also be degradable. Of all the biological and synthetic biopolymers investigated by Prang et al. (2006), only agarose-based gels were able to compare with the linear regeneration caused by ACH scaffolds. Future studies will also need to investigate whether the ACH scaffolds allow for reinnervation of the target in vivo after a spinal cord injury. Hyaluronic acid hydrogel Hyaluronic acid (HA) is a widely used biomaterial as a result of its excellent biocompatibility and its diverse physiologic functions. It is abundant in the extracellular matrix (ECM), where it binds large glycosaminoglycans (GAGs) and proteoglycans through specific HA-protein interactions. HA also binds cell surface receptors such as CD44, which results in the activation of intracellular signaling cascades that regulate cell adhesion and motility and promote proliferation and differentiation. HA is also known to support angiogenesis, because its degradation products stimulate endothelial cell proliferation and migration. Thus, HA plays a pivotal role in maintaining the normal processes necessary for tissue survival. Unmodified HA has been used in clinical applications such as ocular surgery, wound healing, and plastic surgery. HA can be crosslinked to form hydrogels. HA hydrogels that were either unmodified or modified with laminin were implanted into an adult central nervous system lesion and tested for their ability to induce neural tissue formation in a study by Hou et al. They demonstrated the ability to support cell ingrowth and angiogenesis, in addition to inhibiting glial scar formation. Also, the HA hydrogels modified with laminin were able to promote neurite extension. These results support HA gels as a promising biomaterial for a nerve guidance conduit.
Cellular therapies In addition to scaffold material and physical cues, biological cues can also be incorporated into a bioartificial nerve conduit in the form of cells. In the nervous system there are many different cell types that help support the growth and maintenance of neurons. These cells are collectively termed glial cells. Glial cells have been investigated in an attempt to understand the mechanisms behind their abilities to promote axon regeneration. Three types of glial cells are discussed: Schwann cells, astrocytes, and olfactory ensheathing cells. In addition to glial cells, stem cells also have potential benefit for repair and regeneration because many are able to differentiate into neurons or glial cells. This article briefly discusses the use of adult stem cells, transdifferentiated mesenchymal stem cells, ectomesenchymal stem cells, neural stem cells, and neural progenitor cells. Glial cells Glial cells are necessary for supporting the growth and maintenance of neurons in the peripheral and central nervous systems. Most glial cells are specific to either the peripheral or the central nervous system. Schwann cells are located in the peripheral nervous system, where they myelinate the axons of neurons. Astrocytes are specific to the central nervous system; they provide nutrients, physical support, and insulation for neurons. They also form the blood–brain barrier. Olfactory ensheathing cells, however, cross the CNS-PNS boundary, because they guide olfactory receptor neurons from the PNS to the CNS. Schwann cells Schwann cells (SC) are crucial to peripheral nerve regeneration; they play both structural and functional roles. Schwann cells take part in both Wallerian degeneration and the formation of bands of Bungner. When a peripheral nerve is damaged, Schwann cells alter their morphology, behavior, and proliferation to become involved in Wallerian degeneration and Bungner bands. In Wallerian degeneration, Schwann cells grow in ordered columns along the endoneurial tube, creating a band of Bungner (boB) that protects and preserves the endoneurial channel. Additionally, they release neurotrophic factors that enhance regrowth, in conjunction with macrophages. There are some disadvantages to using Schwann cells in neural tissue engineering; for example, it is difficult to selectively isolate Schwann cells, and they show poor proliferation once isolated. One way to overcome this difficulty is to artificially induce other cells, such as stem cells, into SC-like phenotypes. Eguchi et al. (2003) investigated the use of magnetic fields to align Schwann cells. They used a horizontal superconducting magnet, which produces an 8 T field at its center. Within 60 hours of exposure, Schwann cells aligned parallel to the field; during the same interval, unexposed Schwann cells oriented in a random fashion. It is hypothesized that differences in the magnetic field susceptibility of membrane components and cytoskeletal elements may cause the magnetic orientation. Collagen fibers were also exposed to the magnetic field, and within 2 hours they aligned perpendicular to the magnetic field, while collagen fibers formed a random meshwork pattern without magnetic field exposure. When cultured on the collagen fibers, Schwann cells aligned along the magnetically oriented collagen after two hours of 8 T magnetic field exposure. In contrast, Schwann cells oriented randomly on the collagen fibers without magnetic field exposure.
Thus, culture on collagen fibers allowed Schwann cells to be oriented perpendicular to the magnetic field, and to orient much more quickly. These findings may be useful for aligning Schwann cells in a nervous system injury to promote the formation of bands of Bungner, which are crucial for maintaining the endoneurial tube that guides the regrowing axons back to their targets. It is nearly impossible to align Schwann cells by external physical techniques; thus, the discovery of an alternative technique for alignment is significant. However, the technique developed still has its disadvantages, namely that it takes a considerable amount of energy to sustain the magnetic field for extended periods. Studies have been conducted in attempts to enhance the migratory ability of Schwann cells. Schwann cell migration is regulated by integrin interactions with ECM molecules such as fibronectin and laminin. In addition, neural cell adhesion molecule (NCAM) is known to enhance Schwann cell motility in vitro. NCAM is a glycoprotein that is expressed on axonal and Schwann cell membranes. Polysialic acid (PSA) is synthesized on NCAM by polysialyltransferase (PST) and sialyltransferase X (STX). During the development of the CNS, PSA expression on NCAM is upregulated until postnatal stages. However, in the adult brain PSA is found only in regions of high plasticity. PSA expression does not normally occur on Schwann cells. Lavdas et al. (2006) investigated whether sustained expression of PSA on Schwann cells enhances their migration. Schwann cells were transduced with a retroviral vector encoding STX in order to induce PSA expression. PSA-expressing Schwann cells did obtain enhanced motility, as demonstrated in a gap bridging assay and after grafting in postnatal forebrain slice cultures. PSA expression did not alter molecular and morphological differentiation. The PSA-expressing Schwann cells were able to myelinate CNS axons in cerebellar slices, which is not normally possible in vivo. It is hoped that these PSA-expressing Schwann cells will be able to migrate throughout the CNS without loss of myelinating abilities and may become useful for regeneration and myelination of axons in the central nervous system. Astrocytes Astrocytes are glial cells that are abundant in the central nervous system. They are crucial for the metabolic and trophic support of neurons; additionally, astrocytes provide ion buffering and neurotransmitter clearance. Growing axons are guided by cues created by astrocytes; thus, astrocytes can regulate neurite pathfinding and, subsequently, patterning in the developing brain. The glial scar that forms post-injury in the central nervous system is formed by astrocytes and fibroblasts; it is the most significant obstacle to regeneration. The glial scar consists of hypertrophied astrocytes, connective tissue, and ECM. Two goals of neural tissue engineering are to understand astrocyte function and to develop control over astrocytic growth. Studies by Shany et al. (2006) have demonstrated that astrocyte survival rates are increased on 3D aragonite matrices compared to conventional 2D cell cultures. The ability of cell processes to stretch out across curves and pores allows for the formation of multiple cell layers with complex 3D configurations.
The three distinct ways by which the cells acquired a 3D shape were (1) adhering to the surface and following its 3D contour, (2) stretching some processes between two curvatures, and (3) extending processes in 3D within cell layers when located within multilayer tissue. In conventional cell culture, growth is restricted to one plane, causing monolayer formation with most cells contacting the surface; however, the 3D curvature of the aragonite surface allows multiple layers to develop and lets astrocytes far apart contact each other. It is important to promote process formation similar to 3D in vivo conditions, because astrocytic process morphology is essential in guiding the directionality of regenerating axons. The aragonite topography provides a high surface-area-to-volume ratio and lacks edges, which leads to a reduction of the culture edge effect. Crystalline matrices such as the aragonite described here thus allow for the promotion of complex 3D tissue formation that approaches in vivo conditions. Olfactory ensheathing cells The mammalian primary olfactory system has retained the ability to continuously regenerate during adulthood. Olfactory receptor neurons have an average lifespan of 6–8 weeks and therefore must be replaced by cells differentiated from stem cells that reside in a layer at the base of the nearby epithelium. The new olfactory receptor neurons must project their axons through the CNS to an olfactory bulb in order to be functional. Axonal growth is guided by the glial composition and cytoarchitecture of the olfactory bulb, in addition to the presence of olfactory ensheathing cells (OECs). It is postulated that OECs originate in the olfactory placode, suggesting a different developmental origin than other nervous system glia. Another interesting concept is that OECs are found in both the peripheral and central nervous system portions of the primary olfactory system, that is, the olfactory epithelium and bulb. OECs are similar to Schwann cells in that they upregulate the low-affinity NGF receptor p75 following injury; however, unlike Schwann cells, they produce lower levels of neurotrophins. Several studies have shown evidence of OECs being able to support regeneration of lesioned axons, but these results have often proven difficult to reproduce. Regardless, OECs have been investigated thoroughly in relation to spinal cord injuries, amyotrophic lateral sclerosis, and other neurodegenerative diseases. Researchers suggest that these cells possess a unique ability to remyelinate injured neurons. OECs have properties similar to those of astrocytes, both of which have been identified as being susceptible to viral infection. Stem cells Stem cells are characterized by their ability to self-renew for a prolonged time while maintaining the ability to differentiate along one or more cell lineages. Stem cells may be unipotent, multipotent, or pluripotent, meaning they can differentiate into one, multiple, or all cell types, respectively. Pluripotent stem cells can become cells derived from any of the three embryonic germ layers. Stem cells have an advantage over glial cells in that they are able to proliferate more easily in culture. However, it remains difficult to preferentially differentiate these cells into varied cell types in an ordered manner. Another difficulty with stem cells is the lack of a well-defined definition of stem cells beyond hematopoietic stem cells (HSCs).
Each stem cell 'type' has more than one method for identifying, isolating, and expanding the cells; this has caused much confusion, because all stem cells of a 'type' (neural, mesenchymal, retinal) do not necessarily behave in the same manner under identical conditions. Adult stem cells Adult stem cells are not able to proliferate and differentiate as effectively in vitro as they are in vivo. Adult stem cells can come from many different tissue locations, but it is difficult to isolate them because they are defined by behavior and not by surface markers. A method has yet to be developed for clearly distinguishing between stem cells and the differentiated cells surrounding them. However, surface markers can still be used to a certain extent to remove most of the unwanted differentiated cells. Stem cell plasticity is the ability to differentiate across embryonic germ line boundaries. However, the existence of plasticity has been hotly contested. Some claim that plasticity is caused by heterogeneity among the cells or by cell fusion events. Currently, cells can be differentiated across cell lines with yields ranging from 10% to 90% depending on the techniques used. More studies need to be done in order to standardize the yield with transdifferentiation. Transdifferentiation of multipotent stem cells is a potential means of obtaining stem cells that are not available or not easily obtained in the adult. Mesenchymal stem cells Mesenchymal stem cells (MSCs) are adult stem cells that are located in the bone marrow; they are able to differentiate into lineages of mesodermal origin. Some examples of tissue they form are bone, cartilage, fat, and tendon. MSCs are obtained by aspiration of bone marrow. Many factors promote the growth of MSCs, including platelet-derived growth factor, epidermal growth factor β, and insulin-like growth factor-1. In addition to their normal differentiation paths, MSCs can be transdifferentiated along non-mesenchymal lineages such as astrocytes, neurons, and PNS myelinating cells. MSCs are potentially useful for nerve regeneration strategies because (1) their use is not an ethical concern, (2) no immunosuppression is needed, (3) they are an abundant and accessible resource, and (4) they tolerate genetic manipulations. Keilhoff et al. (2006) performed a study comparing the nerve regeneration capacity of non-differentiated and transdifferentiated MSCs to that of Schwann cells in devitalized muscle grafts bridging a 2 cm gap in the rat sciatic nerve. All cells were autologous. The transdifferentiated MSCs were cultured in a mixture of factors in order to promote Schwann cell-like cell formation. The undifferentiated MSCs demonstrated no regenerative capacity, while the transdifferentiated MSCs showed some regenerative capacity, though it did not reach the capacity of the Schwann cells. Ectomesenchymal stem cells The difficulty of isolating Schwann cells and subsequently inducing their proliferation is a large obstacle. A solution is to selectively induce cells such as ectomesenchymal stem cells (EMSCs) into Schwann cell-like phenotypes. EMSCs are neural crest cells that migrate from the cranial neural crest into the first branchial arch during early development of the peripheral nervous system. EMSCs are multipotent and possess a self-renewing capacity. They can be thought of as Schwann cell progenitors because they are associated with dorsal root ganglion and motor nerve development. EMSC differentiation appears to be regulated by intrinsic genetic programs and extracellular signals in the surrounding environment.
Schwann cells are the source of both neurotropic and neurotrophic factors essential for regenerating nerves, as well as a scaffold for guiding growth. Nie, Zhang et al. conducted a study investigating the benefits of culturing EMSCs within PLGA conduits. Adding forskolin and BPE to an EMSC culture caused the formation of elongated cell processes, which is common to Schwann cells in vitro; thus, forskolin and BPE may induce differentiation into Schwann cell-like phenotypes. BPE contains the cytokines GDNF, basic fibroblast growth factor, and platelet-derived growth factor, which cause differentiation and proliferation of glial cells and Schwann cells by activating MAP kinases. When implanted into the PLGA conduits, the EMSCs maintained long-term survival and promoted peripheral nerve regeneration across a 10 mm gap, a distance that usually permits little to no regeneration. Myelinated axons were present within the grafts, and basal laminae were formed within the myelin. These observations suggest that EMSCs may promote myelination of regenerated nerve fibers within the conduit.

Neural progenitor cells
Inserting neurons into a bioartificial nerve conduit seems like the most obvious method for replacing damaged nerves; however, neurons are unable to proliferate and are often short-lived in culture. Thus, neural progenitor cells are more promising candidates for replacing damaged and degenerated neurons, because they are self-renewing, which allows the in vitro production of many cells from minimal donor material. To confirm that the new neurons formed from neural progenitor cells are part of a functional network, synapse formation must be demonstrated. A study by Ma, Fitzgerald et al. is the first demonstration of murine neural stem and progenitor cell-derived functional synapse and neuronal network formation on a 3D collagen matrix. The neural progenitor cells expanded and spontaneously differentiated into excitable neurons and formed synapses; furthermore, they retained the ability to differentiate into the three neural tissue lineages. Active synaptic vesicle recycling occurred, and excitatory and inhibitory connections capable of spontaneously generating action potentials were formed. Thus, neural progenitor cells are a viable and relatively unlimited source for creating functional neurons.

Neural stem cells
Neural stem cells (NSCs) have the capability to self-renew and to differentiate into neuronal and glial lineages. Many culture methods have been developed for directing NSC differentiation; however, the creation of biomaterials for directing NSC differentiation is seen as a more clinically relevant and usable technology. One approach to developing such a biomaterial is to combine extracellular matrix (ECM) components and growth factors. A study by Nakajima, Ishimuro et al. examined the effects of different molecular pairs, each consisting of a growth factor and an ECM component, on the differentiation of NSCs into astrocytes and neuronal cells. The ECM components investigated were the natural components laminin-1 and fibronectin, the artificial components ProNectin F plus (Pro-F) and ProNectin L (Pro-L), and poly(ethyleneimine) (PEI). The growth factors used were epidermal growth factor (EGF), fibroblast growth factor-2 (FGF-2), nerve growth factor (NGF), neurotrophin-3 (NT-3), and ciliary neurotrophic factor (CNTF).
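As a rough illustration of the scale of such a screen, the pairwise combinations of the five ECM components and five growth factors listed above can be enumerated directly. This is only a sketch of the combinatorics; the actual array layout, spot ordering, and any replicate structure used in the study are not specified here.

from itertools import product

# Components as listed in the Nakajima, Ishimuro et al. study described above.
ecm_components = ["laminin-1", "fibronectin", "Pro-F", "Pro-L", "PEI"]
growth_factors = ["EGF", "FGF-2", "NGF", "NT-3", "CNTF"]

# Every ECM-growth factor pairing such an array would screen:
# 5 x 5 = 25 combinations.
pairs = list(product(ecm_components, growth_factors))
print(len(pairs))  # 25
for ecm, gf in pairs[:3]:
    print(f"array spot: {ecm} + {gf}")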
The pair combinations were immobilized onto matrix cell arrays, on which the NSCs were cultured. After 2 days in culture, the cells were stained with antibodies against nestin, β-tubulin III, and GFAP, which are markers for NSCs, neuronal cells, and astrocytes, respectively. The results provide valuable information on advantageous combinations of ECM components and growth factors as a practical basis for developing a biomaterial that directs the differentiation of NSCs.

Neurotrophic factors
Neurotrophic factors are being intensely studied for use in bioartificial nerve conduits because they are necessary in vivo for directing axon growth and regeneration. In studies, neurotrophic factors are normally used in conjunction with other techniques, such as the biological and physical cues created by the addition of cells and of specific topographies. The neurotrophic factors may or may not be immobilized to the scaffold structure, though immobilization is preferred because it allows the creation of permanent, controllable gradients. In some cases, such as neural drug delivery systems, the factors are loosely immobilized such that they can be selectively released at specified times and in specified amounts. Drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits.

Biomimetic materials
Many biomaterials used for nerve guidance conduits are biomimetic materials. Biomimetic materials are materials that have been designed to elicit specified cellular responses mediated by interactions with scaffold-tethered peptides from ECM proteins; essentially, cell-binding peptides are incorporated into the biomaterial via chemical or physical modification.

Synergism
Synergism is an interaction between two elements that produces an effect greater than the sum of the effects of each element applied separately. It is evident in the combining of scaffold material and topography with cellular therapies, neurotrophic factors, and biomimetic materials. Investigating synergism is the natural next step once individual techniques have proven successful on their own; the combinations of these different factors need to be carefully studied in order to optimize synergistic effects.

Optimizing neurotrophic factor combinations
It was hypothesized that interactions between neurotrophic factors could alter the optimal concentration of each factor. While cell survival and phenotype maintenance are important, the emphasis of evaluation was on neurite extension. A combination of NGF, glial cell line-derived neurotrophic factor (GDNF), and ciliary neurotrophic factor (CNTF), one factor from each neurotrophic family, was presented to dorsal root ganglion cultures in vitro. No difference was found between the individually optimal concentrations and the combinatorially optimal ones; however, around day 5 or 6 the neurites ceased extending and began to degrade. This was hypothesized to be due to lack of a critical nutrient or of proper gradients; previous studies have shown that growth factors optimize neurite extension best when presented in gradients. Future studies on neurotrophic factor combinations will therefore need to include gradients.

Combination of neural cell adhesion molecules and GDF-5
Embedding cell adhesion molecules (CAMs) and neurotrophic factors together in biocompatible matrices is a relatively new concept under investigation.
CAMs of the immunoglobulin superfamily (IgSF), which include L1/NgCAM and neurofascin, are particularly promising because they are expressed in the developing nervous system on neurons or Schwann cells, and they are known to serve as guidance cues and to mediate neuronal differentiation. Neurotrophic factors such as NGF and growth differentiation factor 5 (GDF-5), meanwhile, are well established as promoters of regeneration in vivo. A study by Niere, Brown et al. investigated the synergistic effects of combining L1 and neurofascin with NGF and GDF-5 on DRG neurons in culture; this combination enhanced neurite outgrowth. Further enhancement was demonstrated by combining L1 and neurofascin into an artificial fusion protein, which improves efficiency since the factors are not delivered individually. Not only can different cues be used; they may even be fused into a single 'new' cue.

Topography in synergy with chemical and biological cues
The effect of presenting multiple types of stimuli together, namely chemical, physical, and biological cues, on neural progenitor cell differentiation had been little explored. A study was conducted in which three different stimuli were presented to adult rat hippocampal progenitor cells (AHPCs): postnatal rat type-1 astrocytes (biological), laminin (chemical), and a micropatterned substrate (physical). Over 75% of the AHPCs aligned within 20° of the grooves, compared to random growth on the non-patterned substrates. When AHPCs were grown on micropatterned substrates with astrocytes, outgrowth was influenced by the astrocytes that had aligned with the grooves; namely, the AHPCs extended processes along the astrocytic cytoskeletal filaments. However, the alignment was not as pronounced as that seen in AHPCs cultured alone on the micropatterned substrate. In order to assess the phenotypes expressed as a result of differentiation, the cells were stained with antibodies against class III β-tubulin (TuJ1), receptor interacting protein (RIP), and glial fibrillary acidic protein (GFAP), which are markers for early neurons, oligodendrocytes, and astrocytes, respectively. The greatest amount of differentiation was seen with AHPCs cultured on patterned substrates with astrocytes.

References

External links
Georgia Institute of Technology: Laboratory for Neuroengineering
Society for Neuroscience

Biological engineering Neurology Nervous system
Nerve guidance conduit
[ "Engineering", "Biology" ]
16,852
[ "Organ systems", "Biological engineering", "Nervous system" ]
11,089,355
https://en.wikipedia.org/wiki/Kalahari%20Meerkat%20Project
The Kalahari Meerkat Project, or KMP, is a long-term research project focused on studying the evolutionary causes and ecological consequences of cooperative behaviors in meerkats. The secondary aims of the project are to determine what factors affect the reproductive success of the meerkats and what behavioral and physiological mechanisms control both reproduction and cooperative behavior. The project also monitors overall plant and animal populations within the reserve and works with the nearby community of Van Zylsrus on conservation and the sustainable use of resources. Situated at the Kuruman River Reserve in Northern Cape, South Africa, close to the border with Botswana, the project is jointly funded by Cambridge University and the Kalahari Research Trust.

History
The project was founded in the early 1990s by researchers at Cambridge University led by Prof. Tim Clutton-Brock. It was originally based in the Kgalagadi Transfrontier Park, but in 1993 it moved to the Kuruman River Reserve, an area spanning approximately twenty square miles of semi-arid Kalahari Desert on either side of the mostly dry Kuruman River. The reserve consists primarily of sparsely vegetated fossil dunes that flatten out near the river. The project is now part of the university's "Large Animal Research Group" headed by Tim Clutton-Brock, FRS, who has headed the meerkat project since 1993.

Staff
The project usually has 10–15 volunteers who form the main meerkat project staff. They are supervised by a Field Coordinator and a Field Manager. Volunteers come from all over the world, and the project hires volunteers regularly (see http://www.kalahari-meerkats.com/index.php?id=volunteers). In addition to the core researchers, Earthwatch volunteers aid in collecting research data after being partnered with a staff researcher. There is also usually a South African technician responsible for project logistics, 6–8 post-graduate interns from Europe or South Africa, and a number of doctoral and independent researchers carrying out their own research in the area. There are rarely fewer than 10 people working in the project area at any given time. The principal investigators of the project are Prof. Tim Clutton-Brock, Professor of Animal Ecology at the University of Cambridge, and Prof. Marta Manser, Professor of Animal Behavior at the University of Zurich.

Subjects
The KMP study encompasses 16 groups of meerkats, six living exclusively on the reserve and the rest having ranges that extend into the surrounding farmland. Most members of the groups are familiar enough with the human researchers that they are undisturbed by their presence and are relatively easy to touch and collect samples from. Highly detailed life-history records are kept for each meerkat in the study populations, including the recording of births, deaths, pregnancies, lactation and oestrus cycles, changes in social status and group affiliation, and any abnormal behaviors or activities (a minimal sketch of how such records might be structured appears below). The project team offers film crews and wildlife photographers the chance to film the habituated groups of meerkats at the reserve.
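The life-history records described above lend themselves to a simple structured representation. The sketch below is purely illustrative: the field names and the identifier scheme are assumptions made for the example, not the project's actual database schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class LifeHistoryEvent:
    kind: str   # e.g. "birth", "death", "pregnancy", "oestrus", "group_change"
    when: date
    notes: str = ""

@dataclass
class Meerkat:
    meerkat_id: str      # hypothetical identifier, not the KMP's real scheme
    group: str           # current group affiliation, e.g. "Whiskers"
    social_status: str   # e.g. "dominant" or "subordinate"
    events: list[LifeHistoryEvent] = field(default_factory=list)

    def record(self, kind: str, when: date, notes: str = "") -> None:
        self.events.append(LifeHistoryEvent(kind, when, notes))

# Usage: log a change of group affiliation for one habituated individual.
m = Meerkat("VWF001", group="Whiskers", social_status="subordinate")
m.record("group_change", date(2007, 5, 12), "dispersed to a neighbouring group")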
The KMP meerkats have been the subjects of several documentary programs, including:
Meerkat Manor, a popular Animal Planet docu-drama series focused mostly on the Whiskers, one of the long-term study groups
Ella, A Meerkat's Tale, a 2005 one-hour special from Oxford Scientific Films that follows the life of one young female that breaks the rules and has pups despite being a subordinate female
Meerkats, a 2003 Nigel Marven film
"Life of Mammals", a 2002 episode of Sir David Attenborough's BBC series
"Walking with Meerkats: Meerkat Madness", a 2001 30-minute National Geographic special produced by Big Wave TV that focuses on the Lazuli research group

The latter title may refer to the BBC production Walking with Dinosaurs and its sister shows Walking with Beasts and Walking with Monsters. In May 2010, Lapland Studio announced it was releasing a video game entitled Lead the Meerkats for the Nintendo Wii and would be donating proceeds from copies sold on the first day to the project. Clutton-Brock and Evi Bauer, president of the Friends of the Kalahari Meerkat Project, expressed excitement over the game's release as a way to educate people about meerkats through a fairly realistic game.

Friends of the Kalahari Meerkat Project
"Friends of the Kalahari Meerkat Project" is a legally independent, but functionally integrated, sponsoring organization of the project that was founded in Switzerland on 23 November 2007. Through the organization's website, the Kalahari Meerkat Project releases information about the meerkats, including life-history updates for all of the individual meerkats and meerkat groups being studied, updates on the current groups, historical information on lost groups, and basic information about meerkats. The project uploads its own photographs and video footage of the meerkats, available for viewing free of charge. In April 2008, the site began selling "Friends" packages as a way to support the project. The Friends package includes additional project information not published on the site, as well as detailed information comparing the actual project meerkats to their counterparts in Meerkat Manor. On 8 June 2008, the site was expanded to include a virtual store, powered by Zazzle, through which the project offers a variety of custom meerkat items. Proceeds from the items go to support the project and the Friends program.

References

External links
Kalahari Meerkat Project website
Friends of the Kalahari Meerkat Project website

Behavioral ecology Meerkat Manor Kalahari Desert
Kalahari Meerkat Project
[ "Biology" ]
1,165
[ "Behavioural sciences", "Ethology", "Behavior", "Behavioral ecology" ]
11,089,838
https://en.wikipedia.org/wiki/Lombricine
Lombricine is a phosphagen that is unique to earthworms. Structurally, it is a phosphodiester of 2-guanidinoethanol and D-serine (not the usual L-serine); it is further phosphorylated by lombricine kinase to phospholombricine.

References

Organophosphates Guanidines Alpha-Amino acids
Lombricine
[ "Chemistry" ]
92
[ "Guanidines", "Functional groups" ]
11,090,023
https://en.wikipedia.org/wiki/Tinguiririca%20fauna
The fossil Tinguiririca fauna, entombed in volcanic mudflows and ash layers at the onset of the Oligocene, about 33–31.5 million years ago, represents a unique snapshot of the history of South America's endemic fauna, which was largely extinguished when the former island continent was joined to North America by the rising Isthmus of Panama. The fossil-bearing sedimentary layers of the Abanico Formation were first discovered in the valley of the Tinguiririca River, high in the Andes of central Chile. The faunal assemblage lends its name to the Tinguirirican stage in the South American land mammal age (SALMA) classification.

Description
The endemic fauna bridges a massive gap in the history of those mammals that were unique to South America: paleontologists knew the earlier sloth and anteater forebears of 40 mya, but no fossils from this previously poorly sampled transitional age had been seen. Fossils of the Tinguiririca fauna include the chinchilla-like earliest rodents discovered in South America, a wide range of the hoofed herbivores called notoungulates, a shrew-like marsupial, and ancestors of today's sloths and armadillos. Many of the herbivores have teeth adapted to grass-eating; though no plant fossils have been recovered, the high-crowned hypsodont teeth, protected by tough enamel well below the gumline, identify grazers suited to a gritty diet. "The proportion of hypsodont taxa relative to other dental types generally increases with the amount of open habitat," John Flynn explained in Scientific American (May 2007), "and the Tinguiririca level of hypsodonty surpasses even that observed for mammals living in modern, open habitats such as the Great Plains of North America." Statistical analyses of the number of species categorized by body size ("cenogram" analysis, an aspect of body size scaling) and of their broad ecological niches ("macroniche" analysis) bear out the existence of dry grasslands. Previously, no grassland ecosystem anywhere had been identified prior to Miocene systems fifteen million years later than the Tinguiririca fauna. Grasslands spread as the Earth's paleoclimate grew cooler and drier.

New fossils were also uncovered of the New World monkeys and the caviomorph rodents (the group that includes the capybara), groups known not to have evolved in situ. Some of the new fossils demonstrate, by the form of their teeth, that they lie closer to African fossil relatives than to the North American ones from which they had previously been assumed to have rafted to the island continent. It now appears that some may instead have made the crossing of a younger, much narrower Atlantic Ocean. A notable discovery was the miniature skull of a delicate progenitor of New World marmosets and tamarins; it has been given the name Chilecebus carrascoensis.

The first of the fossils were found in 1988. Since then, in strata representing repeated catastrophic lahar events, more than 1500 individual fossils have been recovered from multiple sites in the region, ranging in age from 40 to 10 mya. The mammal species Archaeotypotherium tinguiriricaense is named after the site.

See also
South American land mammal age - list of fossils from this age and site

References

Further reading
Flynn, John J., André R. Wyss, and Reynaldo Charrier, "South America's missing mammals", Scientific American (May 2007), pp. 68–75. The article is the source of the present summary. (on-line text)
Simpson, George Gaylord, Splendid Isolation: The Curious History of South American Mammals (Yale University Press), 1980.
The previous status quo in this field.

External links
Mammal lineages of island South America
John J. Flynn, André R. Wyss, Darin A. Croft and Reynaldo Charrier, "The Tinguiririca fauna, Chile: biochronology, paleoecology, biogeography and a new earliest Oligocene South American Land Mammal 'Age'" (abstract: pdf file)
D. A. Croft, "New species", including several from the Tinguiririca fauna.

Eocene South America Oligocene South America Cenozoic animals of South America Paleogene Chile Paleontology in Chile Biogeography Prehistoric fauna by locality Cenozoic paleobiotas
Tinguiririca fauna
[ "Biology" ]
921
[ "Biogeography", "Cenozoic paleobiotas", "Prehistoric fauna by locality", "Prehistoric biotas" ]
11,090,262
https://en.wikipedia.org/wiki/Tom%20%28programming%20language%29
Tom is a programming language particularly well suited to programming transformations on tree structures and XML-based documents. Tom is a language extension that adds new matching primitives to C and Java, as well as support for rewrite-rule systems; the rules can be controlled using a strategy language.

Tom is good for:
programming by pattern matching
developing compilers and domain-specific languages (DSLs)
transforming XML documents
implementing rule-based systems
describing algebraic transformations

A short illustrative sketch of this pattern-matching and rewrite-rule style appears after the links below.

References

External links
Tom language website
Tom gforge website
Tutorial and Reference Manual

Programming language implementation Pattern matching Pattern matching programming languages Term-rewriting programming languages Graph rewriting
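As promised above, here is a minimal sketch of the style of tree rewriting that Tom supports. It deliberately does not use Tom's own syntax (Tom embeds its matching constructs in host languages such as Java or C); instead it shows the same idea, rules that match tree patterns and rewrite them to simpler trees, in plain Python 3.10+ using structural pattern matching.

from dataclasses import dataclass

# A tiny term language: integer constants and binary "+" / "*" nodes.
@dataclass(frozen=True)
class Const:
    value: int

@dataclass(frozen=True)
class BinOp:
    op: str
    left: object
    right: object

def rewrite(term):
    """Apply algebraic rewrite rules bottom-up, children first."""
    if isinstance(term, BinOp):
        term = BinOp(term.op, rewrite(term.left), rewrite(term.right))
    match term:
        case BinOp("+", Const(0), x) | BinOp("+", x, Const(0)):
            return x              # rule: x + 0 -> x
        case BinOp("*", Const(1), x) | BinOp("*", x, Const(1)):
            return x              # rule: x * 1 -> x
        case BinOp("*", Const(0), _) | BinOp("*", _, Const(0)):
            return Const(0)       # rule: x * 0 -> 0
        case _:
            return term           # no rule applies

expr = BinOp("+", BinOp("*", Const(1), Const(7)), Const(0))
print(rewrite(expr))  # Const(value=7)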
Tom (programming language)
[ "Mathematics" ]
125
[ "Mathematical relations", "Graph theory", "Graph rewriting" ]
11,090,929
https://en.wikipedia.org/wiki/TechNet%20%28computer%20network%29
TechNet was established in 1991 as a closed research and development computer network for academics at the National University of Singapore (NUS). It was set up by the National Science and Technology Board of Singapore (NSTB) and provided Singapore's first Internet access service. TechNet's international connectivity at the time was provided by a 128 kbit/s satellite link from JvNCnet. In March 1995, the Pacific Internet Consortium (SembMedia, ST Computer Systems and SIM) bought TechNet and commercialised its services in September 1995, when it launched Pacific Internet Corporation Pte Ltd. TechNet was also responsible, along with Singnet, for the allocation of IP numbers in Singapore.

References

Further reading
TechNet

1991 establishments in Singapore 1995 establishments in Singapore Internet properties established in 1995 1995 mergers and acquisitions Wide area networks
TechNet (computer network)
[ "Technology" ]
166
[ "Computing stubs", "Computer network stubs" ]
11,091,254
https://en.wikipedia.org/wiki/Carl%20Sagan%20Medal
The Carl Sagan Medal for Excellence in Public Communication in Planetary Science is an award established by the Division for Planetary Sciences of the American Astronomical Society to recognize and honor outstanding communication by an active planetary scientist to the general public. It is awarded to scientists whose efforts have significantly contributed to public understanding of, and enthusiasm for, planetary science.

Carl Sagan Medal winners

See also
List of astronomy awards

References

Astronomy prizes Carl Sagan
Carl Sagan Medal
[ "Astronomy", "Technology" ]
85
[ "Science and technology awards", "Astronomy prizes" ]
11,091,328
https://en.wikipedia.org/wiki/HTC%20Sonata
The HTC Sonata is a smartphone model designed by HTC and powered by the Windows Mobile 2003 SE Smartphone Edition operating system. It has a 2.2-inch screen with a resolution of 176×220 pixels. The phone was released in September 2004. It is also known as the Dopod 577W, QTek 8310, O2 Xda IQ, i-mate SP5 & SP5m, and Vodafone V1240. The HTC Sonata was also released in Europe as the T-Mobile SDA; this differs from the T-Mobile SDA II released by T-Mobile USA, which is an HTC Tornado.

References

Sonata Windows Mobile Standard devices
HTC Sonata
[ "Technology" ]
137
[ "Mobile technology stubs", "Mobile phone stubs" ]
11,091,346
https://en.wikipedia.org/wiki/Acciona%20Energ%C3%ADa
Acciona Energía, a subsidiary of Acciona based in Madrid, is involved in the energy industry: the development and structuring of projects, engineering, construction, supply, operations, maintenance, asset management, and the management and sale of clean energy.

History
Milestones in its history include the installation in December 1994 of the first commercial wind farm in Spain, on the Sierra del Perdón next to Pamplona, Navarre, by Energía Hidroeléctrica de Navarra, S.A., a company acquired by ACCIONA in 2003 and 2004, and the installation in 1995 of the KW Tarifa wind farm by Alabe, a subsidiary of ACCIONA. In 2009, it acquired more than two gigawatts (GW) of renewable assets as part of the transaction agreed with the Enel electricity group under which ACCIONA gave up its stake in Endesa. Since July 1, 2021, ACCIONA Energía has been listed on the Madrid, Barcelona, Bilbao and Valencia Stock Exchanges under the ANE ticker, with ACCIONA, S.A. as the primary shareholder as of December 31, 2021.

Business lines
ACCIONA Energía (with the business name Corporación Acciona Energías Renovables, S.A.) holds renewable energy assets in five technologies (wind power, solar energy, hydroelectricity, biomass and solar thermal energy) which, as of December 31, 2021, added up to 11.2 gigawatts (GW) of installed capacity. This capacity is distributed across 16 countries on all five continents, and in 2021 it produced a total of 24.5 terawatt-hours (TWh) of 100% renewable energy, equivalent to the electricity consumption of 7.6 million homes (a rough arithmetic check of this figure appears at the end of this article). The company has announced its goals of reaching a total installed capacity of 20 GW by 2025 and of 30 GW by 2030, with new installations primarily in wind power and solar energy. Besides generating and selling renewable energy, ACCIONA Energía also works in energy for self-consumption, energy efficiency services, the installation and operation of charging infrastructure for electric vehicles, and green hydrogen, focused primarily on corporate and institutional clients. In 2021, it invested more than 91 million euros in innovation projects, primarily focused on green hydrogen, offshore wind power, innovative photovoltaic systems, smart bidirectional chargers for electric vehicles, the circular economy, life extension of renewable assets, advanced O&M technologies, and energy storage, among others.

International presence
ACCIONA Energía has an active presence in 25 countries across all five continents. The primary geographic areas where it operates, besides Spain, are North America (the United States and Canada), Latin America (Mexico, Chile, Brazil, Peru, Costa Rica, and the Dominican Republic) and Australia. It is also present in Africa, with projects in Egypt and South Africa, as well as in other European countries (Portugal, France, Italy, Poland, Croatia, Ukraine and Hungary).
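The household-equivalence figure quoted above is easy to sanity-check. The benchmark of roughly 3,200 kWh of electricity per home per year, broadly in line with typical Spanish household consumption, is our assumption for this illustration, not a figure given by the company.

# Back-of-the-envelope check of the "7.6 million homes" equivalence.
production_twh = 24.5   # 2021 output quoted above
homes = 7.6e6

kwh_per_home_per_year = production_twh * 1e9 / homes
print(round(kwh_per_home_per_year))  # ~3224 kWh per home per year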
See also
Acciona

References

External links
Official ACCIONA Energía Company Website - Global Operations: https://www.acciona-energia.com/
ACCIONA Energía North America Company Website: https://www.acciona.us/
ACCIONA Energía's energy solutions in France: https://solutions.acciona-energia.fr/

Energy companies of Spain Renewable energy companies of Europe Engine manufacturers Solar energy companies Wind power companies Wind turbine manufacturers Wind power companies of the United States Energy companies established in 2007 Renewable resource companies established in 2007 Manufacturing companies based in Iowa Stephenson County, Illinois Spanish brands Kohlberg Kravis Roberts companies Spanish companies established in 2007 IBEX 35
Acciona Energía
[ "Technology" ]
788
[ "Engine manufacturers", "Engines" ]