Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 chars); text (string, 9 to 245k chars); source (string, 1 to 109 chars); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
40,324,167
https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning%20%28tagging%29
The IOB format (short for inside, outside, beginning), also commonly referred to as the BIO format, is a common tagging format for tagging tokens in a chunking task in computational linguistics (e.g. named-entity recognition). It was presented by Ramshaw and Marcus in their 1995 paper "Text Chunking using Transformation-Based Learning". The I- prefix before a tag indicates that the tag is inside a chunk. An O tag indicates that a token belongs to no chunk. The B- prefix before a tag indicates that the tag is the beginning of a chunk that immediately follows another chunk of the same type without O tags between them. It is used only in that case: when a chunk comes after an O tag, the first token of the chunk takes the I- prefix. Another similar format which is widely used is the IOB2 format, which is the same as the IOB format except that the B- tag is used at the beginning of every chunk (i.e. all chunks start with the B- tag). A readable introduction to entity tagging is given in Bob Carpenter's blog post, "Coding Chunkers as Taggers". An example with IOB format: Notice how "Alex", "Los" and "California", although the first tokens of their chunks, have the "I-" prefix. The same example after filtering out stop words: Notice how "California" now has the "B-" prefix, because it immediately follows another LOC chunk. The same example with IOB2 format (with tagging unaffected by stop word filtering): Related tagging schemes sometimes include "START/END: This consists of the tags B, E, I, S or O where S is used to represent a chunk containing a single token. Chunks of length greater than or equal to two always start with the B tag and end with the E tag." Other tagging schemes include BIOES/BILOU, where 'E' or 'L' denotes the last (ending) token of such a sequence and 'S' or 'U' denotes a single-token (unit) element. An example with BIOES format: Drawbacks IOB syntax does not permit any nesting, so it cannot (unless extended) represent even very simple phenomena such as sentence boundaries (which are not trivial to locate reliably), the scope of parenthetical expressions in sentences, grammatical structures, nested named entities such as "University of Wisconsin Dept. of Computer Science", and so on. It also leaves no place for metadata such as an identifier for the particular sample, the confidence level of the NER assignment, and so on, which are commonplace in NLP systems. Because of these limitations, data must often be converted out of IOB format, or projects must create custom extensions, which has led to a large number of not-quite-interoperable "IOB-like" formats. Many extended variations will also "pass" a non-extended parser, so it is easy to process them incorrectly without noticing. The space and "O" (meaning "not in any chunk") convey no information and could simply be omitted. The same is true for putting the "type" suffix on "I-" or "E-" markers as in some variants of "BIOES", and for marking both "I" and "E" (if you have begun and not ended you are "in", and if you are "in", you have begun and not ended). Some other formats deploy verbosity to improve readability and/or error-checking, but no such benefits appear to come to IOB in exchange for its verbosity. IOB's "one token per line" depends on the tokenization used, even though tokenization is not standardized in NLP, and details of tokenization do not have to be entangled with the representation of NERs. "11/31/2019" could be anywhere from one to five tokens in different systems, but the NER is the same.
Some systems even permit whitespace within tokens, and space as a delimiter collides with this, narrowing the applicability of IOB and motivating more extensions. "space" might or might not include tab, multiple spaces, hard spaces, and so on, differences which are difficult to detect when proofreading. IOB variants that allow multiple tokens per line often use "/" or another reserved character to separate the label from the token. This effectively "reserves" that character, which then cannot occur in tokens (or must be escaped, introducing more incompatibilities). IOB files have no place to put commonly-needed meta-data, such as the character encoding being used, the data source, internal location-markers, and so on. More powerful formats (most obviously XML, but even JSON or s-expressions) can handle far more diverse annotations, have far less variation between implementations, and are often shorter and more readable as well. For example: XML takes 80 bytes to do the same things as the 91 byte BIOES version shown above, or the 79 byte IOB version. However, it can easily also support sentence boundaries, part-of-speech annotations, location markers, and other features commonly needed in NLP systems. Breaking all tokens in particular places is not strictly part of the NER task; but even if every token were tagged (like "is") the total would grow only to 139 bytes: References Computational linguistics
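As an illustration of the scheme described above (not part of the original article; the sentence, entity spans, and function name are invented for the example), a minimal Python sketch that produces IOB2 tags, where every chunk starts with B-, from entity spans over a token list:

```python
def to_iob2(tokens, spans):
    """Convert (start, end, type) entity spans over token indices into IOB2 tags.

    Every chunk starts with B-, subsequent tokens of the chunk take I-,
    and tokens outside any chunk are tagged O.
    """
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:          # end index is exclusive
        tags[start] = "B-" + etype
        for i in range(start + 1, end):
            tags[i] = "I-" + etype
    return tags

# Invented example sentence with one PER chunk and one LOC chunk.
tokens = ["Alex", "is", "going", "to", "Los", "Angeles"]
spans = [(0, 1, "PER"), (4, 6, "LOC")]
print(list(zip(tokens, to_iob2(tokens, spans))))
# [('Alex', 'B-PER'), ('is', 'O'), ('going', 'O'), ('to', 'O'),
#  ('Los', 'B-LOC'), ('Angeles', 'I-LOC')]
```

Plain IOB differs only in that a B- tag would appear solely when a chunk immediately follows another chunk of the same type.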
Inside–outside–beginning (tagging)
Technology
1,141
64,258,653
https://en.wikipedia.org/wiki/Contact%20region
A contact region is a concept in robotics which describes the region between an object and a robot’s end effector. This is used in object manipulation planning and, with the addition of sensors built into the manipulation system, can be used to produce a surface map or contact model of the object being grasped. In Robotics For a robot to autonomously grasp an object, it is necessary for the robot to have an understanding of its own construction and movement capabilities (described through the mathematics of inverse kinematics), and an understanding of the object to be grasped. The relationship between these two is described through a contact model, which is a set of the potential points of contact between the robot and the object being grasped. This, in turn, is used to create a more concrete mathematical representation of the grasp to be attempted, which can then be computed through path planning techniques and executed. In Mathematics Depending on the complexity of the end effector, or through the use of external sensors such as a lidar or depth camera, a more complex model of the planes involved in the object being grasped can be produced. In particular, sensors embedded in the fingertips of an end effector have been demonstrated to be an effective approach for producing a surface map from a given contact region. Through knowledge of the position of each of the robot's fingers, the location of the sensors in each finger, and the amount of force being exerted by the object onto each sensor, points of contact can be calculated. These points of contact can then be turned into a three-dimensional ellipsoid, producing a surface map of the object. Applications In-hand manipulation is a typical use case. A robot hand interacts with static and deformable objects, described with soft-body dynamics. Sometimes additional tools have to be controlled by the robot hand, for example a screwdriver. Such interaction produces a complex situation in which the robot hand has similar contact points with the tool. Apart from robotics control, tactile models are also calculated in virtual environments. If a human operator touches an object with a data glove, they produce a heatmap of the contact points on the object. This surface can be displayed in real time and allows a better understanding of motion models. References Object manipulation Robotics engineering
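As an illustrative sketch of the contact-point calculation described above (not from the article; the sensor layout, fingertip poses, force values, and threshold are invented assumptions), a minimal Python fragment might look like this:

```python
import numpy as np

def contact_points(finger_poses, sensor_offsets, forces, threshold=0.5):
    """Estimate contact point locations in the world frame.

    finger_poses:   list of (R, t) pairs: rotation matrix and position per fingertip
    sensor_offsets: per-finger array of sensor positions in the fingertip frame
    forces:         per-finger array of normal forces measured by each sensor (N)
    threshold:      minimum force (N) for a sensor to count as being in contact
    """
    points = []
    for (R, t), offsets, f in zip(finger_poses, sensor_offsets, forces):
        for offset, force in zip(offsets, f):
            if force > threshold:                 # ignore sensor noise
                points.append(R @ offset + t)     # transform sensor location to world frame
    return np.array(points)

# Invented example: two fingertips, two sensors each, one sensor per finger in contact.
poses = [(np.eye(3), np.array([0.0, 0.0, 0.1])),
         (np.eye(3), np.array([0.0, 0.05, 0.1]))]
offsets = [np.array([[0.01, 0.0, 0.0], [-0.01, 0.0, 0.0]])] * 2
forces = [np.array([1.2, 0.0]), np.array([0.0, 0.9])]
print(contact_points(poses, offsets, forces))
```

The resulting point set is what a surface-mapping step (for example, an ellipsoid fit) would consume.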
Contact region
Technology,Engineering,Biology
455
1,044,109
https://en.wikipedia.org/wiki/BLAST%20model%20checker
The Berkeley Lazy Abstraction Software verification Tool (BLAST) is a software model checking tool for C programs. The task addressed by BLAST is the need to check whether software satisfies the behavioral requirements of its associated interfaces. BLAST employs counterexample-driven automatic abstraction refinement to construct an abstract model that is then model-checked for safety properties. The abstraction is constructed on the fly, and only to the requested precision. Achievements BLAST came first in the category DeviceDrivers64 in the 1st Competition on Software Verification (2012) that was held at TACAS 2012 in Tallinn. BLAST came third (category DeviceDrivers64) in the 2nd Competition on Software Verification (2013) that was held at TACAS 2013 in Rome. BLAST came first in the category DeviceDrivers64 in the 3rd Competition on Software Verification (2014), that was held at TACAS 2014 in Grenoble. References Notes External links BLAST 2.5 website BLAST 2.7 website Free software testing tools Model checkers Static program analysis tools Software using the Apache license
BLAST model checker
Mathematics
212
1,710,634
https://en.wikipedia.org/wiki/Dramaturgy%20%28sociology%29
Dramaturgy is a sociological perspective that analyzes micro-sociological accounts of everyday social interactions through the analogy of performativity and theatrical dramaturgy, dividing such interactions between "actors", "audience" members, and various "front" and "back" stages. The term was first adapted into sociology from the theatre by Erving Goffman, who developed most of the related terminology and ideas in his 1956 book, The Presentation of Self in Everyday Life. Kenneth Burke, whom Goffman would later acknowledge as an influence, had earlier presented his notions of dramatism in 1945, which in turn derives from Shakespeare. The fundamental difference between Burke's and Goffman's view, however, is that Burke believed that life was in fact theatre, whereas Goffman viewed theatre as a metaphor. If people imagine themselves as directors observing what goes on in the theatre of everyday life, they are doing what Goffman called dramaturgical analysis, the study of social interaction in terms of theatrical performance. In dramaturgical sociology, it is argued that the elements of human interactions are dependent upon time, place, and audience. In other words, to Goffman, the self is a sense of who one is, a dramatic effect emerging from the immediate scene being presented. Goffman forms a theatrical metaphor in defining the method in which one human being presents itself to another based on cultural values, norms, and beliefs. Performances can have disruptions (actors are aware of such), but most are successful. The goal of this presentation of self is acceptance from the audience through carefully conducted performance. If the actor succeeds, the audience will view the actor as he or she wants to be viewed. A dramaturgical action is a social action that is designed to be seen by others and to improve one's public self-image. In addition to Goffman, this concept has been used by Jürgen Habermas and Harold Garfinkel, among others. Overview The theatrical metaphor can be seen in the origins of the word person, which comes from the Latin persona, meaning 'a mask worn by actors'. One behaves differently (plays different roles) in front of different people (audiences). A person picks out clothing (a costume) that is consistent with the image they wish to project. They enlist the help of friends, caterers, and decorators (fellow actors and stage crew) to help them successfully “stage” a dinner for a friend, a birthday party for a relative, or a gala for a fundraiser. If they need to adjust their clothing or wish to say something unflattering about one of their guests, they are careful to do so out of sight of others (backstage). One's presentation of oneself to others is known as dramaturgy. Dramaturgical perspective is one of several sociological paradigms separated from other sociological theories or theoretical frameworks because, rather than examining the cause of human behavior, it analyzes the context. This is, however, debatable within sociology. In Frame Analysis (1974), Goffman writes, "What is important is the sense he [a person or actor] provides them [the others or audience] through his dealing with them of what sort of person he is behind the role he is in." The dramaturgical perspective can be seen as an anchor to this perspective, wherein the individual's identity is performed through role(s) and consensus between the actor and the audience. 
Because of this dependence on consensus to define social situations, the perspective argues that there is no concrete meaning to any interaction that could not be redefined. Dramaturgy emphasizes expressiveness as the main component of interactions; it is thus termed as the "fully two-sided view of human interaction." Dramaturgical theory suggests that a person's identity is not a stable and independent psychological entity, but rather, it is constantly remade as the person interacts with others. In a dramaturgical model, social interaction is analyzed in terms of how people live their lives, like actors performing on a stage. This analysis offers a look at the concepts of status, which is like a part in a play; and role, which serves as a script, supplying dialogue and action for the characters. Just as on the stage, people in their everyday lives manage settings, clothing, words, and nonverbal actions to give a particular impression to others. Goffman described each individual's "performance" as the presentation of self; a person's efforts to create specific impressions in the minds of others. This process is also sometimes called impression management. Goffman makes an important distinction between front stage behaviour, which are actions that are visible to the audience and are part of the performance; and back stage behavior, which are actions that people engage in when no audience is present. For example, a server in a restaurant is likely to perform one way in front of customers but might be much more casual in the kitchen. It is likely that he or she does things in the kitchen that might seem unseemly in front of customers. Before interaction with another, an individual prepares a role, or impression, that he or she wants to make on the other. These roles are subject to what is, in theater, termed breaking character. Inopportune intrusions may occur in which a backstage performance is interrupted by someone who is not meant to see it. In addition, there are examples of how the audience for any personal performance plays a part in determining the course it takes: how typically people ignore many performance flaws out of tact, such as if someone trips or spits as they speak. Within dramaturgy analysis, teams are groups of individuals who cooperate with each other in order to share the 'party line.' Team members must share information as mistakes reflect on everyone. Team members also have inside knowledge and are not fooled by one another's performances. Perinbanayagam's dramaturgical theory Performance There are seven important elements Goffman identifies with respect to the performance: Belief in the part that one is playing: Belief is important, even if it cannot be judged by others; the audience can only try to guess whether the performer is sincere or cynical. The front (or "mask"): a standardized, generalizable, and transferable technique for the performer to control the manner in which the audience perceives them. People put on different masks throughout their lives. Dramatic realization: a portrayal of aspects of the performer that they want the audience to know. When the performer wants to stress something, they will carry on the dramatic realization, e.g. showing how accomplished one is when going on a date to make a good first impression. Idealization: a performance often presents an idealized view of the situation to avoid confusion (misrepresentation) and strengthen other elements (e.g., fronts, dramatic realization). 
Audiences often have an 'idea' of what a given situation (performance) should look like, and performers will try to carry out the performance according to that idea. Maintenance of expressive control: the need to stay 'in character'. The performer has to make sure that they send out the correct signals, as well as silencing the occasional compulsion to convey misleading ones that might detract from the performance. Misrepresentation: the danger of conveying a wrong message. The audience tends to think of a performance as genuine or false, and performers generally wish to avoid having an audience disbelieve them (whether they are being truly genuine or not). Mystification: the concealment of certain information from the audience, whether to increase the audience's interest in the performer or to avoid divulging information which could be damaging to the performer. Stages Stages or regions refer to the three distinct areas where different individuals with different roles and information can be found. There are three stages: front, back, and outside. Front stage Within society, individuals are expected to present themselves in a certain way; however, when a person goes against the norm, society tends to notice. Therefore, individuals are expected to put on a costume and act differently when in front of the 'audience'. Goffman noticed this habit of society and developed the idea of front stage. In his book The Presentation of Self in Everyday Life, Goffman defines front as "that part of the individual's performance which regularly functions in a general and fixed fashion to define the situation for those who observe the performance. Front, then, is the expressive equipment of a standard kind intentionally or unwittingly employed by the individual during his performance." During the front stage, the actor formally performs and adheres to conventions that have meaning to the audience. It is a part of the dramaturgical performance that is consistent and contains generalized ways to explain the situation or role the actor is playing to the audience that observes it. The actor knows that they are being watched and acts accordingly. Goffman explains that the front stage involves a differentiation between setting and personal front, two concepts that are necessary for the actor to have a successful performance. Setting is the scene that must be present in order for the actor to perform; if it is gone, the actor cannot perform. Personal front consists of items or equipment needed in order to perform. These items are usually identifiable by the audience as a constant representation of the performance and actor. The personal front is divided into two different aspects: appearance, which refers to the items of the personal front that are a reflection of the actor's social status; and manner, which refers to the way an actor conducts themselves. The actor's manner tells the audience what to expect from their performance. Back stage In The Presentation of Self in Everyday Life, Goffman explains that the back stage is where "the performer can relax; he can drop his front, forgo speaking his lines, and step out of character." When the individual returns to the back stage, they feel a sense of relief knowing that actions which would not be condoned in the front stage are free to be expressed. In the backstage, actions are not meant to please anyone but the self. Back stage is where performers are present but the audience is not, hence the performers can step out of character without fear of disrupting the performance.
It is where various kinds of informal actions, or facts suppressed in the front stage, may appear. Simply put, the back stage is completely separate from the front stage, and it is where the performance of a routine is prepared. No members of the audience may appear in the back, and the actor uses many methods to ensure this. Back region is a relative term, in that it exists only in regard to a specific audience: where two or more people are present, there will almost never be a true 'back region'. Off-stage Outside, or off-stage, is the region occupied by individuals who are not involved in the performance (although they may not be aware of it). The off-stage is where individual actors meet the audience members independently of the team performance on the front stage. Specific performances may be given when the audience is segmented in this way. Borders/regions Borders, or boundaries, are important as they prevent or restrict movement of individuals between various regions. Performers need to be able to maneuver boundaries to manage who has access to the performance, when and how. The border phenomenon is highlighted by Victor Turner's concept of liminality, and is thus extended into the imaginative field of the semiotics of ritual. The management of thresholds may be operated on several axes; the crudest is exclusion-inclusion, similar to the basic digital on-off (1 – 0). To be a part or not may be seen as the fundamental asset in a society, but insofar as society is perceived as a rhizomatic conglomerate rather than a unitary or arborescent whole, border control, so to speak, becomes in a paradoxical fashion the central issue. Thus the study of liminality in sociology, ritual and theatre reveals the fictional elements underpinning society. Rites of passage seem to reflect this, as enactments of exclusion and dissociation seem to be an essential feature of such rites. The enactment of exclusion from a society seems to be essential for the formation of an imaginary central governing authority (cf. Michel Foucault). Discrepant roles Many performances need to prevent the audience from getting some information (secrets). For that, several specialized roles are created. Secrets There are different types of secrets that have to be concealed for various reasons: Dark secrets: represent information about the performing team which could contradict the image the team is presenting to the audience. Strategic secrets: represent the team's goals, capabilities and know-how, which allow the team to control the audience and lead it in the direction the team desires. Inside secrets: represent information known by the team and seen as something shared only with other teammates to increase team bonding. Entrusted secrets: secrets that have to be kept in order to maintain the role and team integrity; keeping them demonstrates trustworthiness. Free secrets: the secrets of another, unrelated to oneself, that can be disclosed while still maintaining the role. Disclosure of such secrets should not affect the performance. Roles There are three basic roles in Goffman's scheme, each centered on who has access to what information: performers are most knowledgeable; audiences know only what the performers have disclosed and what they have observed themselves; and outsiders have little, if any, relevant information.
These roles can be divided into three groups: Roles dealing with manipulation of information and team borders: The informer: a pretender to the role of a team member who gains the team's trust, is allowed backstage, but then joins the audience and discloses information on the performance. Examples: spies, traitors. The shill: this role is the opposite of the informer; the shill pretends to be a member of the audience but is a member of the performing team. Their role is to manipulate the audience's reactions. The spotter: a member of the audience who has much information about the performance in general. The spotter analyzes the performers and may reveal information to the audience. Example: a food critic in a restaurant. Roles dealing with facilitating interactions between two other teams: The go-between or mediator: usually acts with the permission of both sides, acting as a mediator and/or messenger, facilitating communication between various teams. Go-betweens learn many secrets and may not be neutral. Roles that mix up the front and back regions: The non-person: individuals who are present during the performance, and may even be allowed in the back stage, but are not part of the 'show'. Their role is usually obvious and thus they are usually ignored by the performers and the audience. Examples: a waiter, a cleaning lady. The service specialist: individuals whose specialized services are required, usually by the performers. They are often invited by the performers into the back region. Examples: hairdressers, plumbers, bankers with tax knowledge. The colleague: individuals who are similar to the performers but are not members of the team in question. Example: coworkers. The confidant: individuals to whom the performer reveals details of the performance. Communication out of character Performers may communicate out of character on purpose, in order to signal to others on their team, or by accident. Common backstage out-of-character communications include: Treatment of the absent: derogatory discussion of the absent audience or performers, affecting team cohesion. Staging talk: discussion of technical aspects of the performance; gossip. Common frontstage communications out of character include: Team collusion: between team members, during the performance but not endangering it. Examples: staging cues, kicking a friend under a table. Realigning actions: between members of opposing teams. For example: unofficial grumbling. Impression management Impression management refers to work on maintaining the desired impression, and is composed of defensive and protective techniques. Protective techniques are used in order to cover mistakes, but only once the interaction begins; for example, relying on the audience to use tact and overlook mistakes of the performers. In contrast, defensive techniques are employed before an interaction begins, and involve: Dramaturgical loyalty: work to keep the team members loyal to one another and to the performance itself. Dramaturgical discipline: dedicating oneself to the performance without losing oneself in it; self-control, making sure one can play the part properly, and rehearsal. Dramaturgical circumspection: minimizing risk by preparing for expected problems; being careful to avoid situations where a mistake or a potential problem could occur, and choosing the right audience, length and venue of performance.
Criticism Believing that theories should not be applied where they have not been tested, some critics have argued that dramaturgy should only be applied in instances involving people associated with a total institution, for which the theory was designed. In addition, it has been said that dramaturgy does not contribute to sociology's goal of understanding the legitimacy of society. It is claimed to draw on positivism, which takes no interest in reason and rationality. John Welsh called it a "commodity." Application Research on dramaturgy is best done through fieldwork such as participant observation. For one, dramaturgy has been used to depict how social movements communicate power. Robert D. Benford and Scott A. Hunt argued that "social movements can be described as dramas in which protagonists and antagonists compete to affect audiences' interpretations of power relations in a variety of domains." The people seeking power present their front stage selves in order to captivate attention. However, the back stage self is still present, though undetectable. This is a competition for power, a prime example of dramaturgy. A useful, everyday way of understanding dramaturgy (specifically front stage and back stage) is to think of a waiter or waitress at a restaurant. The main avenue of concern for the waiter is "customer service." Even if a customer is rude, one is expected to be polite ("the customer is always right") as part of one's job responsibilities. The waiter speaks differently when going to the break room: they may complain, mimic and discuss with their fellow peers how irritating and rude the customer is. In this example, the waiter acts a certain way when dealing with customers and acts a completely different way when with his or her fellow employees. Dramaturgy has also been applied to the emerging interdisciplinary domain of scholarly research known as technoself studies, which deals with human identity in a technological society. In terms of social media profiles, users and their followers share a social space online. Social media users create profiles and post things that are specifically curated to portray a certain image that they want their followers to see. Oftentimes this curated image is a facade. This is an “authoritative performance” of one's lifestyle. A dynamic is created between the user and their followers in which the user is in control of how and what represents them, while the followers are spectators of this presentation of the user's self, even as they present themselves in the same way. Dramaturgy can also be applied to all aspects of theatre performers. See also Character mask Epistemic virtue Role engulfment Signalling theory References Further reading Brissett, Dennis, and Charles Edgley, eds. 1990. Life as Theater: A Dramaturgical Source Book (2nd ed.). New York: Walter de Gruyter. Cohen, Robert. 2004. "Role Distance: On Stage and On the Merry-Go-Round." Journal of Dramatic Theory and Criticism. Edgley, Charles, ed. 2013. The Drama of Social Life: A Dramaturgical Handbook. UK: Ashgate Publishing Co. Goffman, Erving. 1959. The Presentation of Self In Everyday Life. New York: Doubleday. External links Goffman, Erving. "The Presentation of Self in Everyday Life: The Main Argument, and the Starting Assumption." The Glamour of Motives: Applications of Kenneth Burke within the Sociological Field. Interpersonal relationships Symbolic interactionism Erving Goffman Everyday life
Dramaturgy (sociology)
Biology
4,124
31,676,967
https://en.wikipedia.org/wiki/HP%20Neoview
HP Neoview was a data warehouse and business intelligence computer server line based on the Hewlett-Packard NonStop line. It acted as a database server, providing NonStop OS and NonStop SQL, but lacked the transaction processing functionality of the original NonStop systems. The line was retired, and no longer marketed, as of January 24, 2011. References Fault-tolerant computer systems Neoview
HP Neoview
Technology,Engineering
77
33,990,395
https://en.wikipedia.org/wiki/DDoS%20attacks%20during%20the%20October%202011%20South%20Korean%20by-election
The DDoS attacks during the October 2011 South Korean by-election were allegedly two separate distributed denial-of-service (DDoS) attacks that occurred on October 26, 2011. The attacks, which took place during the October 2011 Seoul mayoral by-election, targeted the websites of the National Election Commission (NEC) and then-mayoral candidate Park Won-soon. Investigators assert that the attacks were carried out in hopes of suppressing young voters, to the benefit of the Grand National Party. An aide of Grand National Party legislator Choi Gu-sik was found responsible for the attacks. The attacks The attacks consisted of two separate denial-of-service attacks against the independent National Election Commission and mayoral candidate Park Won-soon, carried out with the help of a botnet of 200 infected computers. The attacks were conducted during the morning, when citizens, particularly young voters looking to vote before work, would have been expected to look up polling station locations. It has been theorized that the attacks were conducted in the belief that they might reduce voter turnout, to the benefit of the Grand National Party's candidate Na Kyung-won. Police stated that the attack against the NEC lasted about two hours, specifically impacting the part of the website with information on polling locations; Park Won-soon's website was attacked twice that day. The National Police Agency later revealed that an aide to Grand National Party lawmaker Choi Gu-sik, referred to in the media by only their surname "Gong," was responsible for the two attacks. The National Police Agency later arrested Gong and four other associates. Some researchers, however, have questioned the official narrative. Doubts have been raised as to whether Gong had the technical expertise or resources to pull off the attack. Others have pointed out that under a DDoS attack, it would be unusual for parts of a website to be offline while others are online, suggesting perhaps a technical failure instead. These events caused a collective panic among GNP members, as they had often denounced the online activities of South Korean progressives. Political impact The exposure of his role in the attacks led to Choi Gu-sik officially resigning his position as a lawmaker, along with several other members of the GNP. In the wake of the scandal, reformists in the conservative Grand National Party put pressure on core members of the party who were closely affiliated with the Lee Myung-bak government; this in turn led to Park Geun-hye being brought back into the spotlight to reorganize the GNP. Social impact More than 30 university student associations made a joint statement calling for a thorough investigation of the attacks. See also 2008 Grand National Party Convention Bribery Incident Lee Myung-bak government References 2011 in South Korea Presidency of Lee Myung-bak Denial-of-service attacks Liberty Korea Party Cyberwarfare
DDoS attacks during the October 2011 South Korean by-election
Technology
576
9,508,543
https://en.wikipedia.org/wiki/Bacterial%20initiation%20factor
A bacterial initiation factor (IF) is a protein that stabilizes the initiation complex for polypeptide translation. Translation initiation is essential to protein synthesis and regulates mRNA translation fidelity and efficiency in bacteria. The 30S ribosomal subunit, initiator tRNA, and mRNA form an initiation complex for elongation. This complex process requires three essential protein factors in bacteria – IF1, IF2, and IF3. These factors bind to the 30S subunit and promote correct initiation codon selection on the mRNA. IF1, the smallest factor at 8.2 kDa, blocks elongator tRNA binding at the A-site. IF2 is the major component that transports initiator tRNA to the P-site. IF3 checks P-site codon-anticodon pairing and rejects incorrect initiation complexes. The orderly mechanism of initiation starts with IF3 attaching to the 30S subunit and changing its shape. IF1 joins next, followed by mRNA binding and start codon–P-site interaction. IF2 enters with the initiator tRNA and places it on the start codon. GTP hydrolysis by IF2 releases it and IF3, enabling 50S subunit joining. The coordinated binding and activities of IF1, IF2, and IF3 are essential for the rapid and precise translation initiation in bacteria. They facilitate start codon selection and assemble an active, protein-synthesis-ready 70S ribosome. IF1 Bacterial initiation factor 1 associates with the 30S ribosomal subunit in the A site and prevents an aminoacyl-tRNA from entering. It modulates IF2 binding to the ribosome by increasing its affinity. It may also prevent the 50S subunit from binding, stopping the formation of the 70S subunit. It also contains a β-domain fold common for nucleic acid-binding proteins. It is a homolog of eIF1A. Initiation factor IF-1 is the smallest translation factor at only 8.2 kDa. Beyond blocking the A-site, it affects the dynamics of ribosome association and dissociation. IF-1 enhances dissociation with IF-3, likely by inducing conformational changes in the 30S subunit. It also increases the binding affinity of IF-2 to the 30S subunit, possibly by altering the subunit configuration. Though IF-1 occupies the A-site, it does so in a way that is distinct from tRNA binding. Structural studies show IF-1 inserts a loop into the minor groove of helix 44 of 16S rRNA, flipping out bases A1492 and A1493. This insertion repositions nucleotides of helix 44, transmitting a conformational change over a 70 Å distance and rotating the head of the 30S subunit. IF-1 mutants can exhibit cold-sensitive phenotypes, indicating a role for the factor in cold shock adaptation. Certain mutations also alter the expression of some genes at low temperatures, suggesting IF-1 is involved in gene regulation. IF-1 actively modifies ribosome structure and dynamics during initiation, in addition to just blocking the A-site. IF2 The IF2 initiation factor is a crucial component in the process of protein synthesis. The largest among the three indispensable translation initiation factors is IF-2, which possesses a molecular mass of 97 kDa. The protein has many domains, including an N-terminal domain, a GTPase domain, a linker region, C1, C2, and C-terminal domains. The GTPase domain encompasses the G1-G5 motifs, which are responsible for the binding and hydrolysis of GTP. The activity of IF2 is regulated by conformational changes induced by the binding and hydrolysis of GTP. The primary function of IF-2 is to transport the initiator fMet-tRNA to the P-site of the 30S ribosomal subunit.
The C2 domain of IF2 has a unique recognition and binding affinity towards the initiator tRNA. The IF-2 protein has been observed to form a ternary complex when interacting with GTP and fMet-tRNA. This complex has been found to interact with the 30S subunit. The initiation of mRNA translation involves the placement of the start codon in the P-site through the codon-anticodon base matching with the tRNA anti-codon. IF2 regulates start codon selection accuracy and inhibits elongator tRNAs' binding by selectively binding to fMet-tRNA. Additionally, it relocates the initiator tRNA on the 30S subunit to enhance the optimum contact with the P-site. Furthermore, IF2 exhibits RNA chaperone activity, which enables it to rectify misfolded RNA structures. In general, the IF2 protein plays a crucial role in coordinating many steps of translation initiation, including the binding of mRNA and fMet-tRNA to the start codon, the joining of sub-units, and the activation of GTPase. IF3 Initiation factor IF3 is a small protein of 21 kDa containing two compact α/β domains (IF3C and IF3N) connected by a flexible lysine-rich linker. Most IF3 functions are mediated by the IF3C domain, while IF3N regulates 30S subunit binding. Bacterial initiation factor 3 (infC) is not universally found in all bacterial species but in E. coli it is required for the 30S subunit to bind to the initiation site in mRNA. IF3 is required by the small subunit to form initiation complexes, but has to be released to allow the 50S subunit to bind. IF3 attaches to the platform side of the 30S subunit, close to helices 23, 24, 25, 26 and 45 of 16S rRNA, as well as ribosomal proteins S7, S11, and S12. The IF3C domain interacts with the 30S subunit via its conserved basic residues R99, R116, R147 and R168 . A major function of IF3 is inspecting codon-anticodon pairing at the P-site during start codon selection. It accelerates the dissociation of non-canonical initiation complexes containing mismatched or incorrect tRNAs. IF3 also inspects the initiator tRNA, rejecting elongator tRNAs and it also promotes the dissociation of the 70S ribosome into subunits, providing a pool of free 30S subunits for initiation. Another key role of IF3 is repositioning mRNA on the 30S subunit from a standby site to the P-site decoding site for start codon selection. IF3 works cooperatively with IF1 and IF2 during initiation and modulates IF2 binding and enhances the fidelity of start codon selection. References External links Protein biosynthesis Gene expression
Bacterial initiation factor
Chemistry,Biology
1,414
18,906,408
https://en.wikipedia.org/wiki/Allsport%20GPS
Allsport GPS was a fitness tracking phone application combined with a website. As of March 2016, it was discontinued and services were shut down. It uses GPS to provide performance statistics and is run on a GPS-enabled cell phone. The GPS gives Allsport GPS a precise way of measuring statistics such as pace, speed, time and distance. Users can view their route overlaid on a map. The application is used for fitness training regimes and goal tracking. The workout information uploads to the Allsport GPS website wirelessly. In 2006 Allsport GPS introduced the ability to view workouts in the Trimble Outdoors Google Earth layer. History Allsport GPS is a part of the Trimble Outdoors product family. It is owned by Trimble Navigation which was founded in 1978. The Allsport GPS application was bought by Trimble in April 2006. The software continues to be updated periodically. Allsport GPS started out as only available on limited phone models and carriers, but this list has steadily been expanding since then. In 2007 Allsport GPS was released on Blackberry phones. Allsport GPS was released on AT&T phones in 2008. Functions The purpose of Allsport GPS is to support fitness and performance tracking. It is part of a trio of cell phone applications called Trimble Outdoors. It can be used for workouts such as running, jogging, mountain biking, road biking, and walking. The application is downloaded onto a GPS cell phone. The user then straps the phone onto themselves or onto their bike, or holds the phone for the duration of their workout. During the workout Allsport GPS supplies real time statistics such as calories burned, time, speed and distance. These statistics are updated every ten seconds. After the workout, the data is automatically uploaded wirelessly to the website. The data can then be viewed, as well as a trip calendar showing all workouts over time, and elevation and speed profiles. On the Allsport map function, the workout can be viewed on a map both on the phone and on the website. The route can be made public and shared with others. The user can do a trip search on the website and view other users' shared workouts as well as workouts from Bicycling Magazine. These routes can be downloaded from the website. The phone application has a race-against-yourself feature that enables the user to compare their times and distances multiple times over the same track. Reviews Allsport GPS has been mentioned in print and internet publications such as Men’s Health Magazine and The New York Times Online. In 2007 it was named GPS Gadget of the Week by GeoCarta. Both Fred Zahradnik from About.com GPS and Laptop Magazine gave Allsport GPS 4/5 stars in 2007. Related software, social platforms and mobile apps Runtastic Endomondo References External links http://online.wsj.com/public/article/SB119265199498662338.html http://www.trimbleoutdoors.com GPS sports tracking applications Physical exercise Cross-platform mobile software Fitness apps
Allsport GPS
Technology
627
33,217,864
https://en.wikipedia.org/wiki/C24H25ClFN5O3
{{DISPLAYTITLE:C24H25ClFN5O3}} The molecular formula C24H25ClFN5O3 (molar mass: 485.94 g/mol, exact mass: 485.1630 u) may refer to: Afatinib Canertinib (CI-1033) Molecular formulas
C24H25ClFN5O3
Physics,Chemistry
75
18,904,461
https://en.wikipedia.org/wiki/Surface%20%28magazine%29
Surface is an American publication covering design, architecture, fashion, culture and travel; with print and digital publications. The publication has an online presence through the Design Dispatch daily newsletter, as well as through social media. History Surface was founded in 1993 by Richard Klein and Riley Johndonnell. The magazine was based in San Francisco until 2005, when the main offices were relocated to New York City. In 1994, Surface was described by Vanity Fair as one of 10 “upstart magazines to watch”. In 1997, Surface introduced its inaugural Avant Guardian issue, which focused on the Avant Guardian Awards, a fashion photography competition. Winners have advanced to work for fashion houses such as Giorgio Armani, Hermès, Banana Republic, Nike, IBM and Levi's—as well as fashion magazines such as Vogue and Elle, and general interest magazines such as Harper’s Bazaar, Mademoiselle, and The New York Times Magazine. Leaders in fashion and design industries selected 10 finalists annually to have their work featured in a fashion spread and at a launch event. For several years, the competition was expanded to include designers. Between 2011 and 2013, the Avant Guardian was on hiatus, but was revived again in 2014 and 2015, for photographers only. The 15th and final Avant Guardian issue was published in October 2015. In 2009, the publication was acquired by Quadra Media. In 2011, Surface was sold to Sandow Media. In 2012, Surface magazine was acquired by Eric Crown, co-founder and current chairman emeritus of the Arizona-based company Insight Enterprises. Surface Media LLC was formed in 2014. Under CEO Marc Lotenberg and editor-in-chief Spencer Bailey, Surface Media has launched new ventures, including Design Dialogues and Surface Studios. Surface remains a voice in the design industry. With the June/July 2013 issue—Bailey's first as editor—Surface unveiled a major design overhaul created in partnership with the consultancy Noë & Associates. Contributing editors who joined during Bailey's first year included Valerie Steele, the director and chief curator of The Museum at the Fashion Institute of Technology; Bettina Korek, an arts advocate, writer, and the founder of ForYourArt; and architect and designer David Rockwell. Contributors have offered provocative comments. Wolfe has published highly successful novels and journalism related to New York real estate and society. In the June/July 2016 issue, former New York Times reporter Jayson Blair criticized the media for its failure to have real-time fact checking and to point out lies stated by presidential candidate Donald Trump. New additions In 2013 Surface launched the "Design Dialogue" talk series, in which editor-in-chief Spencer Bailey discusses design topics with featured international designers and brands. In 2015, editor-in-chief Spencer Bailey said that new columns would be added to the magazine, focusing equally on the craftsmanship of fashion and design. These new columns include Detail, which looks at garment and textile detail; Taste, in which a thought leader shares a current personal interest; How It's Made, intended to replace the yearly issue of the same name; Executive, in which an executive discusses the convergence of business and creativity; and Dialogue, a conversation between two creatives. In February 2016, Surface released its first Builders Issue, which focuses on real estate development and architecture. In 2015, Surface was nominated for ASME's Best Style & Design Cover. 
In 2016, Surface Media also launched a licensing partnership, SurfaceHotels.com, and a custom publishing house, Surface Studios. Surface Studios, before even creating a logo or media kit, has already secured more than $3 million in contracts, including a custom publication for the second-largest real-estate project in the United States, Brickell City Centre. In response to the global COVID-19 pandemic, CEO Marc Lotenberg and architect Winka Dubbeldam announced the launch of the Surface Summer School at Penn, a design challenge and competition in collaboration with the University of Pennsylvania Stuart Weitzman School of Design. The partnership was the first of its kind between a media company and an accredited university; the challenge invited students from Penn to respond to a brief for architectural and design solutions for a mobile medical testing unit for COVID-19. Events Surface hosts events throughout the year, including art events during Art Basel in Miami, Florida; design events during NYC x Design; and VIP dinners. References External links Surface website 2009 mergers and acquisitions 2011 mergers and acquisitions 2012 mergers and acquisitions Architecture magazines Design magazines Fashion magazines published in the United States Magazines established in 1993 Magazines published in New York City Magazines published in San Francisco Visual arts magazines published in the United States
Surface (magazine)
Engineering
929
48,759,979
https://en.wikipedia.org/wiki/Supermicelle
Supermicelle is a hierarchical micelle structure (supramolecular assembly) where individual components are also micelles. Supermicelles are formed via bottom-up chemical approaches, such as self-assembly of long cylindrical micelles into radial cross-, star- or dandelion-like patterns in a specially selected solvent; solid nanoparticles may be added to the solution to act as nucleation centers and form the central core of the supermicelle. The stems of the primary cylindrical micelles are composed of various block copolymers connected by strong covalent bonds; within the supermicelle structure they are loosely held together by hydrogen bonds, electrostatic or solvophobic interactions. References Supramolecular chemistry Colloidal chemistry
Supermicelle
Chemistry,Materials_science
156
55,901,257
https://en.wikipedia.org/wiki/NGC%201980
NGC 1980 (also known as OCL 529, Collinder 72 and The Lost Jewel of Orion) is a young open cluster associated with an emission nebula in the constellation Orion. It was discovered by William Herschel on 31 January 1786. Its apparent size is 14 × 14 arc minutes and it is located around the star Iota Orionis on the southern tip of the Orion constellation. Herschel made his first observation of the cluster which was called WH V 31 on 31 January 1786, but he possibly observed it during his studies of double stars on 20 September 1783. References Open clusters Orion molecular cloud complex 1980 Orion (constellation) Astronomical objects discovered in 1783 Discoveries by William Herschel Orion–Cygnus Arm
NGC 1980
Astronomy
144
52,768,641
https://en.wikipedia.org/wiki/Dhairya%20Dand
Dhairya Dand (born 1989) is an Indian-American inventor and artist based in New York City. His work investigates the human body as a medium for computation; new materials as a tool to embody interactions; and design as a vehicle for mindfulness. His work takes the form of devices, objects, installations, new technology and materials. Currently, Dand is a principal at ODD Industries, a futurist factory and lab in NYC. He was previously an artist in residence at NEW INC and served on the scientific advisory board of the X Prize Foundation. Dand was an invited member of the W3C Standards Committee, which defines standards for the Internet. He was a key member of Amazon's secretive Concept Lab, which invented several Alexa devices. He has taught conceptual design-based courses at the Art Institute of Seattle, the Carnegie Mellon School of Design and the MIT Design Innovation Workshops. Dand is a graduate of the Media, Arts and Sciences program at the MIT Media Lab. Early life Dand was born in Nasik into an interfaith, multilingual family: his father was a Kutchi Jain, while his mother belonged to the Marathi Saraswat community. His father, a plumber by profession, did not complete high school, while his mother worked as a Sanskrit teacher in Mumbai before the family moved to Nasik. He attended Veermata Jijabai Technological Institute for undergraduate studies in computer science and the Industrial Design Centre for courses in design. Dand later lived in Singapore, Phnom Penh, Tokyo, and London before moving to the United States to study at the Massachusetts Institute of Technology. Works Dand's inventions include sensorial interfaces, smart devices, display technologies, Alexa, social systems, prostheses, bio-based architecture, educational toys, and emotional robots. In SuperShoes, Dand created insoles that work on a tickling interface. The shoes tickle the feet and guide the wearer across the city. The insoles sync with the user's smartphone for location, data, and access to the user's personality preferences. The insoles provide navigation and reminders and promote taking mindful breaks and discovering new places in a city. In Programmable Hair, Dand made a device worn on the hair that allows the wearer to program their hairstyle, either by choosing from a library of hairstyles or by taking a picture of someone else's hairstyle. With Obake, Dand created a 2.5D elastic computer display technology that has shape memory. The display can be physically deformed, stretched, pulled, and pushed. It remembers shapes and can self-actuate. While in Seattle, Dand was part of Amazon's secretive Concept Lab, where he is credited with key inventions such as Alexa devices. Some of his inventions which are public involve invisible interfaces and using hand gestures to use the air as a medium for computing. Dand's Cheers are alcohol-aware ice cubes that detect how much a person is drinking. The cubes change color depending on how much alcohol a person has consumed. The cubes also strobe in response to ambient music. Dand designed a bio-building that responds and reacts to its environment. During the day, cells in the building's "membranes" open up, allowing for more ventilation; at night, the cells generate and conserve warmth. Dand's ThinkerToys are modular educational toys made from e-waste, which later led to an NGO called openTOYS. By plugging in these modules, a keyboard can be used as a piano, a mouse for language learning, and speakers as storytelling devices.
One of Dand's early works was Lokshahi, an m-governance system for political transparency in rural India. Dand has also worked on several accessibility-related inventions for emotional communication, autism and motor impairment. Awards and exhibits Dand was named in Forbes magazine's 30 under 30 list in 2016 and 2015. In 2015 Future of StoryTelling named him as a fellow. Dand was one of Elle magazine's 20 names to know and was included in Vogue's Cool People list. In 2014 Wired UK named him as an Innovation Fellow. INK Talks named him as an INK Fellow. Dand's work was selected by the Smithsonian as a finalist for the National Design Award. In 2013, Dand was one of the Boston Globe's Top 25 Innovators. He has presented at W3C's Annual Summit, Tencent's WE Summit, Tokyo Designers Week, the Wired UK Innovation Conference, INK Talks, TEDx events including TEDxHamburg and TEDxBerlin, the ICA and the MIT Media Lab. Dand's work has been exhibited at the Victoria and Albert Museum (V&A) in London, the MIT Museum in Cambridge, the Singapore Arts House and at international conferences including UIST St Andrews, CHI Paris, and TEI Barcelona. See also Indians in the New York City metropolitan area References External links ODD Industries Website TED Talk Profile page at MIT Media Lab Forbes India profile on Dhairya Dand Living people Human–computer interaction Year of birth missing (living people) Indian emigrants to the United States 21st-century American inventors Artists from New York City Massachusetts Institute of Technology alumni MIT Media Lab people Human–computer interaction researchers
Dhairya Dand
Engineering
1,085
35,690,482
https://en.wikipedia.org/wiki/Birge%E2%80%93Sponer%20method
In molecular spectroscopy, the Birge–Sponer method or Birge–Sponer plot is a way to calculate the dissociation energy of a molecule. This method takes its name from Raymond Thayer Birge and Hertha Sponer, the two physical chemists who developed it. Description By observing transitions between as many vibrational energy levels as possible, for example through electronic or infrared spectroscopy, the difference between adjacent energy levels, $\Delta G_{v+1/2} = G(v+1) - G(v)$, can be calculated. This sum will have a maximum at $v = v_{\mathrm{max}}$, representing the point of bond dissociation; summing over all the differences up to this point gives the total energy required to dissociate the molecule, i.e. to promote it from the ground state to an unbound state. This can be written: $D_0 = \sum_{v=0}^{v_{\mathrm{max}}} \Delta G_{v+1/2}$, where $D_0$ is the dissociation energy. If a Morse potential is assumed, plotting $\Delta G_{v+1/2}$ against $v + \tfrac{1}{2}$ should give a straight line, from which it is easy to extract $v_{\mathrm{max}}$ from the intercept with the x-axis; the area under the line then gives the dissociation energy. In practice, such plots often give curves because of unaccounted anharmonicity in the potential; furthermore, the low population of the higher states (or the Franck–Condon principle) makes it difficult to experimentally obtain data at high values of $v$. Thus the extrapolation can be inaccurate and only an upper limit for the value of the dissociation energy can be obtained. References Spectroscopy
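As a worked illustration (not part of the article; the vibrational spacings below are invented numbers), a short Python sketch fits a straight line to the spacings plotted against v + 1/2 and estimates the dissociation energy as the area under the extrapolated line:

```python
import numpy as np

# Invented vibrational spacings Delta G_{v+1/2} in cm^-1 for v = 0, 1, 2, ...
dG = np.array([2000.0, 1900.0, 1800.0, 1700.0, 1600.0])
x = np.arange(len(dG)) + 0.5          # v + 1/2

slope, intercept = np.polyfit(x, dG, 1)
x_max = -intercept / slope            # where Delta G extrapolates to zero (v_max + 1/2)
D0 = 0.5 * intercept * x_max          # triangular area under the line ~ dissociation energy

print(f"x-intercept (v_max + 1/2) = {x_max:.1f}")
print(f"estimated D0 = {D0:.0f} cm^-1")
```

Because real potentials are more anharmonic than the linear fit assumes, this estimate is an upper limit, as noted above.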
Birge–Sponer method
Physics,Chemistry
287
51,752,859
https://en.wikipedia.org/wiki/Linear%20graph%20grammar
In computer science, a linear graph grammar (also a connection graph reduction system or a port graph grammar) is a class of graph grammar on which nodes have a number of ports connected together by edges and edges connect exactly two ports together. Interaction nets are a special subclass of linear graph grammars in which rewriting is confluent. Implementations Bawden introduces linear graphs in the context of a compiler for a fragment of the Scheme programming language. Bawden and Mairson (1998) describe the design of a distributed implementation in which the linear graph is spread across many computing nodes and may freely migrate in order to make rewrites possible. Notes References Bawden, Alan (1986), Connection graphs, In Proceedings of the 1986 ACM conference on LISP and functional programming, pp. 258–265, ACM Press. Bawden, Alan (1992), Linear graph reduction: confronting the cost of naming, PhD dissertation, MIT. Bawden, Alan (1993), Implementing Distributed Systems Using Linear Naming, A.I. Technical Report No. 1627, MIT. Bawden and Mairson (1998), Linear naming: experimental software for optimizing communication protocols, Working paper #1, Dept. Computer Science, Brandeis University. Graph rewriting
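As an illustration only (not from the article or any cited implementation; the class and node names are invented, and rewrite rules are omitted), a minimal Python sketch of the underlying data structure, a graph whose edges connect exactly two ports and whose ports each carry at most one edge:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    ports: list = field(default_factory=list)   # port identifiers owned by this node

class LinearGraph:
    """A graph whose edges connect exactly two ports, each port used at most once."""
    def __init__(self):
        self.nodes = []
        self.edges = {}          # port -> port, stored symmetrically

    def add_node(self, label, n_ports):
        node = Node(label, [(label, i) for i in range(n_ports)])
        self.nodes.append(node)
        return node

    def connect(self, port_a, port_b):
        # Linearity constraint: a port may appear in at most one edge.
        assert port_a not in self.edges and port_b not in self.edges
        self.edges[port_a] = port_b
        self.edges[port_b] = port_a

# Invented example: two nodes wired together by a single edge.
g = LinearGraph()
a = g.add_node("cons", 3)
b = g.add_node("nil", 1)
g.connect(a.ports[1], b.ports[0])
print(g.edges)
```

A grammar in the sense of the article would additionally supply rewrite rules that replace matched subgraphs while preserving the dangling port connections.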
Linear graph grammar
Mathematics,Technology
264
1,249,541
https://en.wikipedia.org/wiki/Water%20of%20crystallization
In chemistry, water(s) of crystallization or water(s) of hydration are water molecules that are present inside crystals. Water is often incorporated in the formation of crystals from aqueous solutions. In some contexts, water of crystallization is the total mass of water in a substance at a given temperature and is mostly present in a definite (stoichiometric) ratio. Classically, "water of crystallization" refers to water that is found in the crystalline framework of a metal complex or a salt, which is not directly bonded to the metal cation. Upon crystallization from water, or water-containing solvents, many compounds incorporate water molecules in their crystalline frameworks. Water of crystallization can generally be removed by heating a sample but the crystalline properties are often lost. Compared to inorganic salts, proteins crystallize with large amounts of water in the crystal lattice. A water content of 50% is not uncommon for proteins. Applications Knowledge of hydration is essential for calculating the masses for many compounds. The reactivity of many salt-like solids is sensitive to the presence of water. The hydration and dehydration of salts is central to the use of phase-change materials for energy storage. Position in the crystal structure A salt with associated water of crystallization is known as a hydrate. The structure of hydrates can be quite elaborate, because of the existence of hydrogen bonds that define polymeric structures. Historically, the structures of many hydrates were unknown, and the dot in the formula of a hydrate was employed to specify the composition without indicating how the water is bound. Per IUPAC's recommendations, the middle dot is not surrounded by spaces when indicating a chemical adduct. Examples: – copper(II) sulfate pentahydrate – cobalt(II) chloride hexahydrate – tin(II) (or stannous) chloride dihydrate For many salts, the exact bonding of the water is unimportant because the water molecules are made labile upon dissolution. For example, an aqueous solution prepared from and anhydrous behave identically. Therefore, knowledge of the degree of hydration is important only for determining the equivalent weight: one mole of weighs more than one mole of . In some cases, the degree of hydration can be critical to the resulting chemical properties. For example, anhydrous is not soluble in water and is relatively useless in organometallic chemistry whereas is versatile. Similarly, hydrated is a poor Lewis acid and thus inactive as a catalyst for Friedel-Crafts reactions. Samples of must therefore be protected from atmospheric moisture to preclude the formation of hydrates. Crystals of hydrated copper(II) sulfate consist of centers linked to ions. Copper is surrounded by six oxygen atoms, provided by two different sulfate groups and four molecules of water. A fifth water resides elsewhere in the framework but does not bind directly to copper. The cobalt chloride mentioned above occurs as and . In tin chloride, each Sn(II) center is pyramidal (mean angle is 83°) being bound to two chloride ions and one water. The second water in the formula unit is hydrogen-bonded to the chloride and to the coordinated water molecule. Water of crystallization is stabilized by electrostatic attractions, consequently hydrates are common for salts that contain +2 and +3 cations as well as −2 anions. In some cases, the majority of the weight of a compound arises from water. Glauber's salt, , is a white crystalline solid with greater than 50% water by weight. 
Consider the case of nickel(II) chloride hexahydrate. This species has the formula NiCl2·6H2O. Crystallographic analysis reveals that the solid consists of trans-[NiCl2(H2O)4] subunits that are hydrogen bonded to each other as well as two additional molecules of H2O. Thus one third of the water molecules in the crystal are not directly bonded to nickel, and these might be termed "water of crystallization". Analysis The water content of most compounds can be determined with a knowledge of their formula. The water content of an unknown sample can be determined through thermogravimetric analysis (TGA), in which the sample is heated strongly and the accurate weight of the sample is plotted against the temperature. The amount of water driven off is then divided by the molar mass of water to obtain the number of molecules of water bound to the salt. Other solvents of crystallization Water is a particularly common solvent to be found in crystals because it is small and polar. But all solvents can be found in some host crystals. Water is noteworthy because it is reactive, whereas other solvents such as benzene are considered to be chemically innocuous. Occasionally more than one solvent is found in a crystal, and often the stoichiometry is variable, reflected in the crystallographic concept of "partial occupancy". It is common and conventional for a chemist to "dry" a sample with a combination of vacuum and heat "to constant weight". For other solvents of crystallization, analysis is conveniently accomplished by dissolving the sample in a deuterated solvent and analyzing the sample for solvent signals by NMR spectroscopy. Single crystal X-ray crystallography is often able to detect the presence of these solvents of crystallization as well. Other methods may be currently available. Table of crystallization water in some inorganic halides The table below indicates the number of molecules of water per metal in various salts. Examples are rare for second and third row metals. No entries exist for Mo, W, Tc, Ru, Os, Rh, Ir, Pd, Hg, Au. AuCl3(H2O) has been invoked but its crystal structure has not been reported. Hydrates of metal sulfates Transition metal sulfates form a variety of hydrates, each of which crystallizes in only one form. The sulfate group often binds to the metal, especially for those salts with fewer than six aquo ligands. The heptahydrates, which are often the most common salts, crystallize in monoclinic and the less common orthorhombic forms. In the heptahydrates, one water is in the lattice and the other six are coordinated to the ferrous center. Many of the metal sulfates occur in nature, being the result of weathering of mineral sulfides. Many monohydrates are known. Hydrates of metal nitrates Transition metal nitrates form a variety of hydrates. The nitrate anion often binds to the metal, especially for those salts with fewer than six aquo ligands. Nitrates are uncommon in nature, so few minerals are represented here. Hydrated ferrous nitrate has not been characterized crystallographically. Photos See also Hydrate Mineral hydration Hydrous oxide References Crystallography Hydrates
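A small worked example of the thermogravimetric analysis described above, with invented (but self-consistent) masses for a copper(II) sulfate hydrate; it simply divides the moles of water lost by the moles of anhydrous salt remaining.

```python
# Hypothetical TGA result for a copper(II) sulfate hydrate sample (illustrative numbers only)
M_H2O = 18.015        # g/mol, water
M_CuSO4 = 159.61      # g/mol, anhydrous copper(II) sulfate

mass_initial = 2.000  # g, hydrated sample before heating
mass_final = 1.278    # g, residue after the water is driven off

moles_water = (mass_initial - mass_final) / M_H2O
moles_salt = mass_final / M_CuSO4
print(f"waters of crystallization per formula unit: {moles_water / moles_salt:.1f}")
```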
Water of crystallization
Physics,Chemistry,Materials_science,Engineering
1,401
8,433,728
https://en.wikipedia.org/wiki/Pulse%20compression
Pulse compression is a signal processing technique commonly used by radar, sonar and echography to either increase the range resolution when pulse length is constrained or increase the signal to noise ratio when the peak power and the bandwidth (or equivalently range resolution) of the transmitted signal are constrained. This is achieved by modulating the transmitted pulse and then correlating the received signal with the transmitted pulse. Simple pulse Signal description The ideal model for the simplest, and historically first type of signals a pulse radar or sonar can transmit is a truncated sinusoidal pulse (also called a CW --carrier wave-- pulse), of amplitude and carrier frequency, , truncated by a rectangular function of width, . The pulse is transmitted periodically, but that is not the main topic of this article; we will consider only a single pulse, . If we assume the pulse to start at time , the signal can be written the following way, using the complex notation: Range resolution Let us determine the range resolution which can be obtained with such a signal. The return signal, written , is an attenuated and time-shifted copy of the original transmitted signal (in reality, Doppler effect can play a role too, but this is not important here). There is also noise in the incoming signal, both on the imaginary and the real channel. The noise is assumed to be band-limited, that is to have frequencies only in (this generally holds in reality, where a bandpass filter is generally used as one of the first stages in the reception chain); we write to denote that noise. To detect the incoming signal, a matched filter is commonly used. This method is optimal when a known signal is to be detected among additive noise having a normal distribution. In other words, the cross-correlation of the received signal with the transmitted signal is computed. This is achieved by convolving the incoming signal with a conjugated and time-reversed version of the transmitted signal. This operation can be done either in software or with hardware. We write for this cross-correlation. We have: If the reflected signal comes back to the receiver at time and is attenuated by factor , this yields: Since we know the transmitted signal, we obtain: where , is the result of the intercorrelation between the noise and the transmitted signal. Function is the triangle function, its value is 0 on , it increases linearly on where it reaches its maximum 1, and it decreases linearly on until it reaches 0 again. Figures at the end of this paragraph show the shape of the intercorrelation for a sample signal (in red), in this case a real truncated sine, of duration seconds, of unit amplitude, and frequency hertz. Two echoes (in blue) come back with delays of 3 and 5 seconds and amplitudes equal to 0.5 and 0.3 times the amplitude of the transmitted pulse, respectively; these are just random values for the sake of the example. Since the signal is real, the intercorrelation is weighted by an additional factor. If two pulses come back (nearly) at the same time, the intercorrelation is equal to the sum of the intercorrelations of the two elementary signals. To distinguish one "triangular" envelope from that of the other pulse, it is clearly visible that the times of arrival of the two pulses must be separated by at least so that the maxima of both pulses can be separated. If this condition is not met, both triangles will be mixed together and impossible to separate. 
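A short simulation sketch of the matched-filter detection described above (sample rate, carrier, pulse length and echo parameters are arbitrary illustrative choices): two attenuated, delayed copies of a truncated sinusoidal pulse are buried in noise, and correlating the received signal with the transmitted pulse yields peaks at the round-trip delays.

```python
import numpy as np

fs, f0, T = 1000.0, 50.0, 0.5          # illustrative sample rate (Hz), carrier (Hz), pulse length (s)
t = np.arange(0, T, 1 / fs)
pulse = np.exp(2j * np.pi * f0 * t)    # complex CW pulse of duration T

rx = np.zeros(int(8 * fs), dtype=complex)
echoes = [(3.0, 0.5), (5.0, 0.3)]                     # (round-trip delay in s, relative amplitude)
for delay, amp in echoes:
    i = int(delay * fs)
    rx[i:i + len(pulse)] += amp * pulse
rng = np.random.default_rng(0)
rx += 0.1 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))  # additive noise

# Matched filter: convolve with the conjugated, time-reversed transmitted pulse
mf = np.abs(np.convolve(rx, np.conj(pulse[::-1]), mode="full"))
lags = (np.arange(len(mf)) - (len(pulse) - 1)) / fs   # lag (s) associated with each output sample
for delay, _ in echoes:
    sel = (lags > delay - 0.5) & (lags < delay + 0.5)
    print(f"detected delay {lags[sel][np.argmax(mf[sel])]:.3f} s (true {delay} s)")
```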
Since the distance travelled by a wave during is (where c is the speed of the wave in the medium), and since this distance corresponds to a round-trip time, we get: Energy and signal-to-noise ratio of the received signal The instantaneous power of the received pulse is . The energy put into that signal is: If is the standard deviation of the noise which is assumed to have the same bandwidth as the signal, the signal-to-noise ratio (SNR) at the receiver is: The SNR is proportional to pulse duration , if other parameters are held constant. This introduces a tradeoff: increasing improves the SNR, but reduces the resolution, and vice versa. Pulse compression by linear frequency modulation (or chirping) Basic principles How can one have a large enough pulse (to still have a good SNR at the receiver) without poor resolution? This is where pulse compression enters the picture. The basic principle is the following: a signal is transmitted, with a long enough length so that the energy budget is correct this signal is designed so that after matched filtering, the width of the intercorrelated signals is smaller than the width obtained by the standard sinusoidal pulse, as explained above (hence the name of the technique: pulse compression). In radar or sonar applications, linear chirps are the most typically used signals to achieve pulse compression. The pulse being of finite length, the amplitude is a rectangle function. If the transmitted signal has a duration , begins at and linearly sweeps the frequency band centered on carrier , it can be written: The chirp definition above means that the phase of the chirped signal (that is, the argument of the complex exponential), is the quadratic: thus the instantaneous frequency is (by definition): which is the intended linear ramp going from at to at . The relation of phase to frequency is often used in the other direction, starting with the desired and writing the chirp phase via the integration of frequency: This transmitted signal is typically reflected by the target and undergoes attenuation due to various causes, so the received signal is a time-delayed, attenuated version of the transmitted signal plus an additive noise of constant power spectral density on , and zero everywhere else: Cross-correlation between the transmitted and the received signal We now endeavor to compute the correlation of the received signal with the transmitted signals. Two actions are going to be taken to do this: - The first action is a simplification. Instead of computing the cross-correlation we are going to compute an auto-correlation which amounts to assuming that the autocorrelation peak is centered at zero. This will not change the resolution and the amplitudes but will simplify the math: - The second action is, as shown below, is to set an amplitude for the reference signal which is not one, but . Constant is to be determined so that energy is conserved through correlation. Now, it can be shown that the correlation function of with is: where is the correlation of the reference signal with the received noise. Width of the signal after correlation Assuming noise is zero, the maximum of the autocorrelation function of is reached at 0. Around 0, this function behaves as the sinc (or cardinal sine) term, defined here as . The −3 dB temporal width of that cardinal sine is more or less equal to . Everything happens as if, after matched filtering, we had the resolution that would have been reached with a simple pulse of duration . 
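A minimal sketch of linear-FM pulse compression (all parameters invented for illustration): a chirp of duration T and swept bandwidth Δf is correlated with itself, and the resulting −3 dB width of the compressed peak comes out on the order of 1/Δf rather than T.

```python
import numpy as np

fs = 20_000.0                      # illustrative sample rate (Hz)
T, delta_f = 0.1, 2_000.0          # pulse length (s) and swept bandwidth (Hz)
t = np.arange(0, T, 1 / fs)

# Baseband linear chirp sweeping from -delta_f/2 to +delta_f/2 over the pulse
k = delta_f / T                    # chirp rate (Hz/s)
chirp = np.exp(1j * np.pi * k * (t - T / 2) ** 2)

# Pulse compression = autocorrelation (matched filter with the chirp itself)
ac = np.abs(np.correlate(chirp, chirp, mode="full"))
ac /= ac.max()

# -3 dB width of the compressed peak, expected to be roughly 1/delta_f = 0.5 ms (versus T = 100 ms)
above = np.where(ac > 1 / np.sqrt(2))[0]
width = (above[-1] - above[0]) / fs
print(f"compressed -3 dB width ~ {width*1e3:.2f} ms, 1/delta_f = {1e3/delta_f:.2f} ms, T = {T*1e3:.0f} ms")
```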
For the common values of , is smaller than , hence the pulse compression name. Since the cardinal sine can have annoying sidelobes, a common practice is to filter the result by a window (Hamming, Hann, etc.). In practice, this can be done at the same time as the matched filtering by multiplying the reference chirp with the filter. The result will be a signal with a slightly lower maximum amplitude, but the sidelobes will be filtered out, which is more important. Energy and peak power after correlation When the reference signal is correctly scaled using term , then it is possible to conserve the energy before and after correlation. The peak (and average) power before correlation is: Since, before compression, the pulse is box-shaped, the energy before correlation is: The peak power after correlation is reached at : Note that if this peak power is the energy of the received signal before correlation, which is as expected. After compression, the pulse is approximated by a box having a width equal to the typical width of the function, that is, a width , so the energy after correlation is: If energy is conserved: ... it follows that: so that the peak power after correlation is: As a conclusion, the peak power of the pulse-compressed signal is that of the raw received signal (assuming that the template is correctly scaled to conserve energy through correlation). Signal-to-noise gain after correlation As we have seen above, things are written so that the energy of the signal does not vary during pulse compression. However, it is now located in the main lobe of the cardinal sine, whose width is approximately . If is the power of the signal before compression, and the power of the signal after compression, energy is conserved and we have: which yields an increase in power after pulse compression: In the spectral domain, the power spectrum of the chirp has a nearly constant spectral density in interval and zero elsewhere, so that energy is equivalently expressed as . This spectral density remains the same after matched filtering. Imagining now an equivalent sinusoidal (CW) pulse of duration and identical input power, this equivalent sinusoidal pulse has an energy: After matched filtering, the equivalent sinusoidal pulse turns into a triangular-shaped signal of twice its original width but the same peak power. Energy is conserved. The spectral domain is approximated by a nearly constant spectral density in interval where . Through conservation of energy, we have: Since by definition we also have: it follows that: meaning that the spectral densities of the chirped pulse, and the equivalent CW pulse are very nearly identical, and are equivalent to that of a bandpass filter on . The filtering effect of correlation also acts on the noise, meaning that the reference band for the noise is and since , the same filtering effect is obtained on the noise in both cases after correlation. This means that the net effect of pulse compression is that, compared to the equivalent CW pulse, the signal-to-noise ratio (SNR) has improved by a factor because the signal is amplified but not the noise. As a consequence: For technical reasons, correlation is not necessarily done for actual received CW pulses as for chirped pulses. However during baseband shifting the signal undergoes a bandpass filtering on which has the same net effect on the noise as the correlation, so the overall reasoning remains the same (that is, the SNR only makes sense for noise defined on a given bandwidth, here being that of the signal).
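A sketch of the sidelobe-weighting step mentioned near the start of this section (parameters again arbitrary): compressing the same chirp once with a rectangular reference and once with a Hamming-weighted reference shows the sidelobe suppression bought by the window, at the cost of a slightly wider, lower main lobe.

```python
import numpy as np

fs, T, delta_f = 20_000.0, 0.1, 2_000.0          # illustrative values
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (delta_f / T) * (t - T / 2) ** 2)

def peak_sidelobe_db(reference):
    """Compress the chirp with a given reference and return the highest sidelobe level (dB)."""
    out = np.abs(np.correlate(chirp, reference, mode="full"))
    out /= out.max()
    main = np.argmax(out)
    guard = int(2 * fs / delta_f)                # exclude the main lobe (a few 1/delta_f wide)
    side = np.concatenate([out[:main - guard], out[main + guard:]])
    return 20 * np.log10(side.max())

print(f"rectangular reference: peak sidelobe {peak_sidelobe_db(chirp):.1f} dB")
print(f"Hamming-weighted ref.: peak sidelobe {peak_sidelobe_db(chirp * np.hamming(len(chirp))):.1f} dB")
```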
This gain in the SNR seems magical, but remember that the power spectral density does not represent the phase of the signal. In reality the phases are different for the equivalent CW pulse, the CW pulse after correlation, the original chirped pulse and the correlated chirped pulse, which explains the different shapes of the signals (especially the varying lengths) despite having (nearly) the same power spectrum in all cases. If the peak transmitting power and the bandwidth are constrained, pulse compression thus achieves a better peak power (but same resolution) by transmitting a longer pulse (that is, more energy), compared to an equivalent CW pulse of same peak power and bandwidth , and squeezing the pulse by correlation. This works best only for a limited number of signal types which, after correlation, have a narrower peak than the original signal, and low sidelobes. Stretch processing While pulse compression can ensure good SNR and fine range resolution in the same time, digital signal processing in such a system can be difficult to implement because of the high instantaneous bandwidth of the waveform ( can be hundreds of megahertz or even exceed 1 GHz.) Stretch Processing is a technique for matched filtering of wideband chirping waveform and is suitable for applications seeking very fine range resolution over relatively short range intervals. Picture above shows the scenario for analyzing stretch processing. The central reference point(CRP) is in the middle of the range window of interest at range of , corresponding to a time delay of . If the transmitted waveform is the chirp waveform: then the echo from the target at distance can be expressed as: where is proportional to the scatterer reflectivity. We then multiply the echo by and the echo will become: where is the wavelength of electromagnetic wave in air. After conducting sampling and discrete Fourier transform on y(t) the sinusoid frequency can be solved: and the differential range can be obtained: To show that the bandwidth of y(t) is less than the original signal bandwidth , we suppose that the range window is long. If the target is at the lower bound of the range window, the echo will arrive seconds after transmission; similarly, If the target is at the upper bound of the range window, the echo will arrive seconds after transmission. The differential arrive time for each case is and , respectively. We can then obtain the bandwidth by considering the difference in sinusoid frequency for targets at the lower and upper bound of the range window: As a consequence:     To demonstrate that stretch processing preserves range resolution, we need to understand that y(t) is actually an impulse train with pulse duration T and period , which is equal to the period of the transmitted impulse train. As a result, the Fourier transform of y(t) is actually a sinc function with Rayleigh resolution . That is, the processor will be able to resolve scatterers whose are at least apart. Consequently, and, which is the same as the resolution of the original linear frequency modulation waveform. Stepped-frequency waveform Although stretch processing can reduce the bandwidth of received baseband signal, all of the analog components in RF front-end circuitry still must be able to support an instantaneous bandwidth of . In addition, the effective wavelength of the electromagnetic wave changes during the frequency sweep of a chirp signal, and therefore the antenna look direction will be inevitably changed in a Phased array system. 
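A rough sketch of the stretch-processing (deramping) idea described above, with invented parameters: the beat signal left after multiplying the echo by the conjugate reference chirp is a tone whose frequency k·τ maps back to the differential range from the central reference point. For brevity, the beat tone is written down directly instead of forming the echo and reference chirps explicitly.

```python
import numpy as np

c = 3e8                                   # speed of light, m/s
fs, T, delta_f = 2e6, 100e-6, 50e6        # illustrative ADC rate after deramp, pulse length, swept bandwidth
k = delta_f / T                           # chirp rate (Hz/s)
t = np.arange(0, T, 1 / fs)

R_crp, R_target = 10_000.0, 10_030.0      # central reference point and target range (m)
tau = 2 * (R_target - R_crp) / c          # differential round-trip delay relative to the CRP

# Deramped (beat) signal: the product of the echo chirp and the conjugate reference chirp
# reduces to a tone at frequency k * tau (plus a constant phase, ignored here).
beat = np.exp(2j * np.pi * k * tau * t)

spectrum = np.abs(np.fft.fft(beat, 1 << 14))
freqs = np.fft.fftfreq(1 << 14, 1 / fs)
f_beat = abs(freqs[np.argmax(spectrum)])
print(f"estimated differential range: {c * f_beat / (2 * k):.1f} m (true {R_target - R_crp} m)")
```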
Stepped-frequency waveforms are an alternative technique that can preserve fine range resolution and SNR of the received signal without large instantaneous bandwidth. Unlike the chirping waveform, which sweeps linearly across a total bandwidth of in a single pulse, stepped-frequency waveform employs an impulse train where the frequency of each pulse is increased by from the preceding pulse. The baseband signal can be expressed as: where is a rectangular impulse of length and M is the number of pulses in a single pulse train. The total bandwidth of the waveform is still equal to , but the analog components can be reset to support the frequency of the following pulse during the time between pulses. As a result, the problem mentioned above can be avoided. To calculate the distance of the target corresponding to a delay , individual pulses are processed through the simple pulse matched filter: and the output of the matched filter is: where If we sample at , we can get: where l means the range bin l. Conduct DTFT (m is served as time here) and we can get: ,and the peak of the summation occurs when . Consequently, the DTFT of provides a measure of the delay of the target relative to the range bin delay : and the differential range can be obtained: where c is the speed of light. To demonstrate stepped-frequency waveform preserves range resolution, it should be noticed that is a sinc-like function, and therefore it has a Rayleigh resolution of . As a result: and therefore the differential range resolution is : which is the same of the resolution of the original linear-frequency-modulation waveform. Pulse compression by phase coding There are other means to modulate the signal. Phase modulation is a commonly used technique; in this case, the pulse is divided in time slots of duration for which the phase at the origin is chosen according to a pre-established convention. For instance, it is possible to not change the phase for some time slots (which comes down to just leaving the signal as it is, in those slots) and de-phase the signal in the other slots by (which is equivalent of changing the sign of the signal); this is known as binary phase-shift keying. The precise way of choosing the sequence of phases can be done according to a technique known as Barker codes. The advantages of the Barker codes are their simplicity (as indicated above, a de-phasing is a simple sign change), but the pulse compression ratio is lower than in the chirp case and the compression is very sensitive to frequency changes due to the Doppler effect if that change is larger than . Other pseudorandom binary sequences have nearly optimal pulse compression properties, such as Gold codes, JPL codes or Kasami codes, because their autocorrelation peak is very narrow. These sequences have other interesting properties making them suitable for GNSS positioning, for instance. It is possible to code the sequence on more than two phases (polyphase coding). As with a linear chirp, pulse compression is achieved through intercorrelation. See also Continuous-wave radar Spread spectrum Chirp compression Notes Further reading Nadav Levanon, and Eli Mozeson. Radar signals. Wiley. com, 2004. Hao He, Jian Li, and Petre Stoica. Waveform design for active sensing systems: a computational approach. Cambridge University Press, 2012. M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014. Solomon W. Golomb, and Guang Gong. 
Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005. Fulvio Gini, Antonio De Maio, and Lee Patton, eds. Waveform design and diversity for advanced radar systems. Institution of engineering and technology, 2012. John J. Benedetto, Ioannis Konstantinidis, and Muralidhar Rangaswamy. "Phase-coded waveforms and their design." IEEE Signal Processing Magazine, 26.1 (2009): 22-31. Ducoff, Michael R., and Byron W. Tietjen. "Pulse compression radar." Radar Handbook (2008): 8-3. Signal processing Radar signal processing
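To illustrate the phase-coding approach described above, here is a tiny sketch using the standard length-13 Barker code: its aperiodic autocorrelation has a peak of 13 and sidelobes no larger than 1 in magnitude, which is what makes it useful as a compression code.

```python
import numpy as np

# Length-13 Barker code: phases of 0 or pi, i.e. amplitudes +1 / -1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

ac = np.correlate(barker13, barker13, mode="full")   # aperiodic autocorrelation
print("autocorrelation:", ac)
print("peak:", ac.max(), " max sidelobe magnitude:", np.abs(np.delete(ac, len(barker13) - 1)).max())
```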
Pulse compression
Technology,Engineering
3,716
598,801
https://en.wikipedia.org/wiki/Asmara-Massawa%20Cableway
The Asmara-Massawa Cableway was a cableway (or "ropeway") built in Italian Eritrea before World War II. The Eritrean Ropeway, completed in 1937, ran 71.8 km from the south end of Asmara to the city-port of Massawa. History The cableway was built by the Italian engineering firm Ceretti and Tanfani S.A. in Eritrea. It connected the port of Massawa with the city of Asmara and ran a distance of nearly 72 kilometres. It also moved food, supplies and war materials for the Imperial Italian Army, which had conquered Ethiopia in 1936. In August 1936 the first section of 26.6 km was opened from Ghinda to Godaif, a suburb of Asmara. With the capacity to transport 30 tons of material every hour in each direction from the seaport of Massawa to 2326 meters above sea level in Asmara, the cableway was the longest of its kind in the world when inaugurated in 1937. The bearing cables ran in almost 30 sections; the line was powered by diesel engines and carried freight in 1,540 small transport gondolas. In southern Eritrea there was another small ropeway. During their eleven-year military administration (1941-1952) of the former Italian colony, the British dismantled the installations. They removed the diesel engines, the steel cables, and other equipment as war reparations. Iron towers that remained were scrapped in the 1980s. See also Eritrean Railway Italian Eritrea Notes External links Extensive article by Mike Metras Facsimile of La Teleferica Massaua-Asmara cableway brochure, translated by Mike Metras, Dave Engstrom, and Renato Guadino History of Asmara Massawa Transport in Eritrea Vertical transport devices Italian Eritrea Italian East Africa Aerial tramways
Asmara-Massawa Cableway
Technology
371
56,215,685
https://en.wikipedia.org/wiki/C8H10N2O3S
The molecular formula C8H10N2O3S (molar mass: 214.242 g/mol) may refer to: Diazald, or N-methyl-N-nitroso-p-toluenesulfonamide Sulfacetamide Molecular formulas
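A quick arithmetic check of the quoted molar mass, using one common set of standard atomic weights (slightly different tabulated values shift the result by a few thousandths of a g/mol, which accounts for the small discrepancy with 214.242):

```python
# Standard atomic weights (g/mol); other tabulated values change the result only in the last digits
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
formula = {"C": 8, "H": 10, "N": 2, "O": 3, "S": 1}   # C8H10N2O3S

molar_mass = sum(n * weights[el] for el, n in formula.items())
print(f"M(C8H10N2O3S) ~ {molar_mass:.3f} g/mol")      # about 214.24 g/mol
```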
C8H10N2O3S
Physics,Chemistry
76
45,521,144
https://en.wikipedia.org/wiki/Aging%20and%20society
Aging has a significant impact on society. People of different ages and genders tend to differ in many aspects, such as legal and social responsibilities, outlooks on life, and self-perceptions. Young people tend to have fewer legal privileges (if they are below the age of majority), they are more likely to push for political and social change, to develop and adopt new technologies, and to need education. Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights. Older people are also more likely to vote, and in many countries the young are forbidden from voting. Thus, the aged have comparatively more, or at least different, political influence. In different societies, age may be viewed or treated differently. For example, age may be measured starting from conception or birth, starting at either zero or age one. Transitions such as reaching puberty, age of majority, or retirement are often socially significant. The concepts of successful aging and healthy aging refer to both social and physical aspects of the aging process. Cultural variations Arbitrary divisions set to mark periods of life may include: juvenile (via infancy, childhood, preadolescence, adolescence), early adulthood, middle adulthood, and late adulthood. More casual terms may include "teenagers", "tweens", "twentysomething", "thirtysomething", etc. as well as "vicenarian", "tricenarian", "quadragenarian", etc. The age of an adult human is commonly measured in whole years since the day of birth. Fractional years, months, or even weeks may be used to describe the ages of children and infants for finer resolution. The time of day the birth occurred is not commonly considered. In some cultures, there are other ways to express age. For example, some cultures measure age by counting years, including the current year, while others count years without including it. It could be said for the same person that he is twenty years old or that he is in the twenty-first year of his life. In Russian, the former expression is generally used. Still, the latter has restricted usage: it is used for the age of a deceased person in obituaries and for the age of an adult when it is desired to show him/her older than he/she is. (Psychologically, a woman in her 20th year seems older than one who is 19 years old.) Other cultures that express age differently may not use years elapsed since birth at all. Inuit culture is an example in which birthdays are not celebrated because maturity is not signified in terms of years. The Navajo culture is another in which age is not counted through years elapsed from birth. In this case, age is measured through certain milestones in a person's life, such as the first time they laugh. In cultures where age is not measured by years since birth, most individuals do not know how old they are in years. People in these cultures may find more importance in other aspects of their birth, such as the season, agricultural practices, or spiritual connections taking place when they were born. A culture may also choose to place a greater emphasis on family lineage than age, as is done in Mayan society. A Mayan adult would not determine a child's responsibility and status in terms of age by years, but instead by relative seniority to others in the family or community. The main purpose of counting age in terms of years from birth is to conveniently group individuals by age, as is needed in industrialized society. 
The medical practices and compulsory schooling that resulted from industrialization factored largely into the need to count age in terms of years since birth. Even in Westernized societies such as the United States, age in terms of years since birth did not begin until the mid-1800s. Depending on cultural and personal philosophy, ageing can be seen as an undesirable phenomenon, reducing beauty and bringing one closer to death, or as an accumulation of wisdom, a mark of survival, and a status worthy of respect. In some cases, numerical age is important (whether good or bad), whereas others find the stage in life that one has reached (adulthood, independence, marriage, retirement, career success) to be more important. East Asian age reckoning is different from that found in Western culture. Traditional Chinese culture uses a different ageing method, called Xusui (虛歲) with respect to common ageing which is called Zhousui (周歲). According to Luo Zhufeng (1991), the Xusui method, people are born at age 1, not age 0, possibly because conception is already considered to be the start of the life span and possibly because the number '0' was not historically present in Ancient China, and another difference is the ageing day: Xusui grows up at the Spring Festival (aka. Chinese New Year's Day), while Zhousui grows up at one's birthday. In parts of Tibet, age is counted from conception i.e. one is usually 9 months old when one is born. Age in prenatal development is normally measured in gestational age, taking the last menstruation of the mother as a point of beginning. Alternatively, fertilisation age, beginning from fertilisation can be taken. Legal Most legal systems define a specific age for when an individual is allowed or obliged to do particular activities. These age specifications include voting age, drinking age, age of consent, age of majority, age of criminal responsibility, marriageable age, age of candidacy, and mandatory retirement age. Admission to a movie, for instance, may depend on age according to a motion picture rating system. A bus fare might be discounted for the young or old. Each nation, government and non-government organisation has different ways of classifying age. Similarly, in many countries in jurisprudence, the defense of infancy is a form of defence by which a defendant argues that, at the time a law was broken, they were not liable for their actions and thus should not be held liable for a crime. Many courts recognise that defendants who are considered to be juveniles may avoid criminal prosecution on account of their age and in borderline cases the age of the offender is often held to be a mitigating circumstance. Political Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights. Older people are also more likely to vote, and in many countries the young are forbidden from voting. Thus, the aged have comparatively more, or at least different, political influence. Education tends to lose political significance for people as they age. Coping and well-being Psychologists have examined coping skills in the elderly. Various factors, such as social support, religion and spirituality, active engagement with life, and having an internal locus of control, have been proposed as being beneficial in helping people to cope with stressful life events in later life. 
Social support and personal control are possibly the two most important factors that predict well-being, morbidity and mortality in adults. Other factors that may link to well-being and quality of life in the elderly include social relationships (possibly relationships with pets as well as humans), and health. Retirement, a common transition faced by the elderly, may have both positive and negative consequences. Individuals in different wings in the same retirement home have demonstrated a lower risk of mortality and higher alertness and self-rated health in the wing where residents had greater control over their environment, though personal control may have less impact on specific measures of health. Social control, perceptions of how much influence one has over one's social relationships, shows support as a moderator variable for the relationship between social support and perceived health in the elderly and may positively influence coping in the elderly. Religion Religion is an important factor used by the elderly in coping with the demands of later life and appears more often than other forms of coping later in life. Religiosity is a multidimensional variable; while participation in religious activities in the sense of participation in formal and organised rituals may decline, it may become a more informal, but still important aspect of life such as through personal or private prayer. Self-rated health Positive self-perception of health has been correlated with higher well-being and reduced mortality in the elderly. Various reasons have been proposed for this association; people who are objectively healthy may naturally rate their health better than that of their ill counterparts, though this link has been observed even in studies which have controlled for socioeconomic status, psychological functioning and health status. This finding is generally stronger for men than women, though the pattern between genders is not universal across all studies and some results suggest sex-based differences only appear in certain age groups, for certain causes of mortality and within a specific sub-set of self-ratings of health. Paradox of ageing Seniors' subjective health remains relatively stable, while objective health worsens with age. Furthermore, it seems that the perceived health improves with age when objective health is controlled in the equation. This phenomenon is known as the paradox of ageing. People's expectations concerning health co-evolve with the health norms surrounding one's age. Elderly people often associate their functional and physical decline with the normal ageing process. The elderly may actually enhance their perception of their own health through social comparison; for instance, the older people get, the more they may consider themselves in better health than their same-aged peers. Hence, the older a person becomes and the more their actual health declines, the greater the potential role is for social comparison processes to create a gap between a person's objective and subjective health. Healthcare Many societies in Western Europe and Japan have ageing populations. While the effects on society are complex, there is a concern about the impact on healthcare demand. 
The large number of suggestions in the literature for specific interventions to cope with the expected increase in demand for long-term care in ageing societies can be organised under four headings: improve system performance; redesign service delivery; support informal caregivers; and shift demographic parameters. However, the annual growth in national health spending is not mainly due to increasing demand from ageing populations, but rather has been driven by rising incomes, costly new medical technology, a shortage of health care workers and informational asymmetries between providers and patients. Several health problems become more prevalent as people get older. These include mental health problems as well as physical health problems, especially dementia. Even so, it has been estimated that population ageing only explains 0.2 percentage points of the annual growth rate in medical spending of 4.3 percent since 1970. In addition, certain reforms to the Medicare system in the United States decreased elderly spending on home health care by 12.5 percent per year between 1996 and 2000. This would suggest that the impact of ageing populations on health care costs is not inevitable. In United States prisons, medical costs for an ageing inmate could be above $100 per day as of July 2007, while typical inmates cost $33 per day. Most state DOCs report spending more than 10 percent of the annual budget on elderly care. That is expected to rise over the next 10–20 years. Some states have talked about releasing ageing inmates early. Housing As Taiwan heads into an ageing society, a study in the city of Kaohsiung suggests that compared to their parents, the current generation of adults has shown a greater interest in age-friendly housing with high-quality building materials and community environment. The poor living conditions of the elderly were exposed after a fire in the city tore through multiple stories of a dilapidated apartment block. Successful ageing The concept of successful ageing can be traced back to the 1950s and was popularised in the 1980s. Previous research into ageing exaggerated the extent to which health disabilities, such as diabetes or osteoporosis, could be attributed exclusively to age, and research in gerontology exaggerated the homogeneity of samples of elderly people. Other research shows that even late in life, the potential exists for physical, mental, and social growth and development. Successful ageing consists of three components: The avoidance of illness and disease High cognitive and physical function Social and productive engagement A greater number of people self-report successful ageing than those who strictly meet these criteria. Successful ageing may be viewed as an interdisciplinary concept, spanning both psychology and sociology, where it is seen as the transaction between society and individuals across the life span with a specific focus on the later years of life. The terms "healthy ageing" and "optimal ageing" have been proposed as alternatives to successful ageing, partly because the term "successful ageing" has been criticised for making healthy ageing sound too competitive. Six suggested dimensions of successful ageing include: No physical disability over the age of 75 as rated by a physician; Good subjective health assessment (i.e.
good self-ratings of one's health); Length of undisabled life; Good mental health; Objective social support; Self-rated life satisfaction in eight domains: marriage, income-related work, children, friendship and social contacts, hobbies, community service activities, religion and recreation/sports. Numerous worldwide health, ageing, and retirement surveys contain questions about pensions. The Meta Data Repository – created by the non-profit RAND Corporation and sponsored by the National Institute on Aging at the National Institutes of Health – provides access to metadata for these questions as well as links to obtain respondent data from the originating surveys. Recent studies utilizing artificial intelligence showed that in order to stay biologically younger and lower the chances of most age-related diseases, people should not be unhappy and lonely. Ageing and communication Healthy ageing implies optimal well-being in spite of barriers resulting from age. The global population is ageing and will continue to have communication inabilities unless barriers of communication with the elderly are more highly promoted. Sensory impairments include hearing and vision deficits, which can cause communication barriers. Changes in cognition, hearing, and vision are easily associated with healthy ageing and can cause problems when diagnosing dementia and aphasia due to the similarities. Hearing loss Hearing loss is a common condition among ageing adults. Common conditions that can increase the risk of hearing loss in elderly people are high blood pressure, diabetes, or the use of certain medications that are harmful to the ear. Hearing aids are commonly referred to as personal amplifying systems, which can generally improve hearing by about 50%. Hearing loss among the aged community lessens elders' ability to compensate for other age-related social and/or physical problems. Communication problems of elderly adults can be greatly impacted by mechanical problems such as the translation of ideas into linguistic representation or expression, the perception of linguistic stimuli, or the derivation of an idea from a given unit of disclosure. Changes in these mechanical problems are more important than changes in linguistic knowledge. The main goal of hearing aids is to improve communication and quality of life, not just to restore hearing. Presbycusis is an example of a hearing deficit that cannot be corrected by hearing aids. Presbycusis, the alteration of hearing sensitivity associated with normal hearing loss, is caused by the decreased amount of hair cells of the inner ear. This is normally caused by long periods of distressing noise that diminish the hair cells which with increasing age will not grow back. Presbycusis and other hearing-related problems promote social withdrawal, as individuals lose touch with the world around them. Hearing loss among the aged community lessens elders' ability to compensate for other age-related social and/or physical problems. This impairment can cause elders to lose touch of social skills because they may have trouble keeping up with fast-paced or hearing different pitched voices in conversation. Visual impairment The interpretation of facial expressions and mouthing can be difficult to understand when an individual has a visual impairment. Such problems hinder the ability of people to understand stimuli and translate information pertaining to perception with their brain for analysis. 
Non-verbal communication is important in effective communication, and elders with vision loss are more likely to misinterpret or read the other person's actions in a wrong way. Visual impairments also cause a loss in positive perceptions of the environment around them. This can lead to isolation and possible depression in elderly people. Macular degeneration is a common cause of vision loss in elderly people. It diminishes the macula of the eye, which is responsible for clear vision. It causes progressive loss of central vision and possible loss of colour vision. This degeneration is caused by systemic changes in the circulation of waste products and the growth of abnormal vessels around the retina causing the photoreceptors not to receive proper images. Though ageing almost always causes this, other possible effects and risk factors include smoking, obesity, family history, and excessive sunlight exposure. Digital world In a world increasingly relying on digital technologies, older adults face higher risks of social exclusion and prejudices (see digital ageism). Generational segregation naturalizes youth as digitally adept and the old as digitally inept. Older adults' experiences are often excluded from research agendas on digital media. Political struggle against ageing Though many scientists state that radical life extension, delaying and stopping ageing are achievable, there are still no international or national programmes focused on stopping ageing or on radical life extension. There are political forces staying for and against life extension. In 2012 the Longevity political parties started in Russia, then in the US, Israel and the Netherlands. These parties aim to provide political support to anti-ageing and radical life extension research and technologies and want to ensure the fastest possible and at the same time the softest societal transition to the next step: radical life extension and life without ageing, that will make it possible to provide the access to such technologies to most of the currently living people. Societal effects of negligible senescence describes the possible societal outcomes if ageing is successfully treated. Social science of ageing Disengagement theory is the idea that the separation of older people from active roles in society is normal and appropriate and benefits both society and older individuals. Disengagement theory, first proposed by Cumming and Henry, has received considerable attention in gerontology, but has been much criticised. The original data on which Cumming and Henry based the theory were from a rather small sample of older adults in Kansas City and from this select sample Cumming and Henry then took disengagement to be a universal theory. There are research data suggesting that the elderly who do become detached from society are those who were initially reclusive individuals and such disengagement is not purely a response to ageing. Activity theory, in contrast to disengagement theory, implies that the more active elderly people are, the more likely they are to be satisfied with life. The view that elderly adults should maintain well-being by keeping active has had a considerable history, and since 1972, this has become known as activity theory. 
However, this theory may be just as inappropriate as disengagement for some people as the current paradigm on the psychology of ageing is that both disengagement theory and activity theory may be optimal for certain people in old age, depending on both circumstances and personality traits of the individual concerned. There are also data which query whether, as activity theory implies, greater social activity is linked with well-being in adulthood. Selectivity theory mediates between the activity and disengagement theories and suggests that it may benefit older people to become more active in some aspects of their lives and more disengaged in others. Continuity theory is the view that in ageing people are inclined to maintain, as much as they can, the same habits, personalities and styles of life that they have developed in earlier years. Continuity theory is Atchley's theory that individuals, in later life, make adaptations to enable them to gain a sense of continuity between the past and the present and the theory implies that this sense of continuity helps to contribute to well-being in later life. Disengagement theory, activity theory and continuity theory are social theories about ageing, though all may be products of their era rather than a valid, universal theory. Other definitions As cyborgs currently are on the rise some theorists argue there is a need to develop new definitions of aging and for instance a bio-techno-social definition of aging has been suggested. References Gerontology
Aging and society
Biology
4,173
1,436,839
https://en.wikipedia.org/wiki/Ivan%20Roitt
Ivan Maurice Roitt (born 30 September 1927) is a British scientist. He was educated at King Edward's School, Birmingham and Balliol College, Oxford University. He was Head of the Department of Immunology at University College London from 1967 to 1992, and is currently Honorary Director of the Centre for Investigative & Diagnostic Oncology at Middlesex University, London. In 1956, together with Deborah Doniach and Peter Campbell, he made the classic discovery of thyroglobulin autoantibodies in Hashimoto's thyroiditis which helped to open the whole concept of a relationship between autoimmunity and human disease. The work was extended to an intensive study of autoimmune phenomena in pernicious anemia and primary biliary cirrhosis. In 1983 he was elected a Fellow of the Royal Society, and has been elected to Honorary Membership of the Royal College of Physicians and appointed Honorary Fellow of The Royal Society of Medicine. He was awarded the Gairdner Foundation International Award in 1964. He is an honorary member of the British Society for Immunology. References External links http://www.roitt.com/ 1927 births British biochemists Fellows of the Royal Society Living people Alumni of Balliol College, Oxford People educated at King Edward's School, Birmingham Academics of University College London Jewish British scientists
Ivan Roitt
Chemistry
274
10,777,559
https://en.wikipedia.org/wiki/Galvannealed
Galvannealed or galvanneal (galvannealed steel) is the result of the processes of galvanizing followed by annealing of sheet steel. Galvannealed steel is a matte uniform grey color, which can be easily painted. In comparison to galvanized steel the coating is harder and more brittle. Production and properties Production of galvannealed sheet steel begins with hot dip galvanization of sheet steel. After passing through the galvanizing zinc bath the sheet steel passes through air knives to remove excess zinc, and is then heated in an annealing furnace for several seconds, causing the iron and zinc layers to diffuse into one another and form zinc-iron alloy layers at the surface. The annealing step is performed with the strip still hot after the galvanizing step, with the zinc still liquid. The galvanising bath contains slightly over 0.1% aluminium, added to form a layer bonding between the iron and coated zinc. Annealing temperatures are around 500 to 565 °C. Pre-1990 annealing lines used gas-fired heating; from the 1990s the use of induction furnaces became common. Three distinct alloys are identified in the galvannealed surface. From the steel boundary these are named the Gamma (Γ), Delta (δ), and Zeta (ζ) layers, of compositions Fe3Zn10, FeZn10 and FeZn13 respectively, resulting in an overall bulk iron content of 9-12%. The layers also contain around 1-4% aluminium. Composition depends primarily on heating time and temperature, limited by the diffusion of the two metals. The resulting coating has a matte appearance, and is hard and brittle; under further working such as pressing or bending, powder is produced from degradation of the coating, together with cracks on the surface. In comparison to a zinc (galvanized) coating, galvanneal has better spot weldability and is paintable. Due to the iron present in the surface alloy phase, galvanneal develops a reddish patina in moist environments, so it is generally used painted. Zinc phosphate coating is a common pre-painting surface treatment. Galvannealed sheet can also be produced from electroplated zinc steel sheet. History Patents relating to Galvannealed wire were obtained by the Keystone Steel and Wire Company (Peoria, Illinois, USA) c. 1923. The company used the name "Galvannealed" as a brand name. The key early patent was US patent No. 1430648 (J.L. Herman, 1922, Peoria, Illinois, USA) "Process of coating and treating materials having an iron base". The patent described the galvannealing process with specific reference to iron wires. Uses A major market for galvannealed steel is the automobile industry. In the mid 1980s, the Chrysler Corporation pioneered the use of galvannealed sheet steels in the manufacture of their vehicles. In the 1990s galvannealed coatings were used by Honda, Toyota and Ford, with hot dip galvanized, electrogalvanized and other coatings (e.g. Zn-Ni) being used by other manufacturers, with variations depending on the part within the car frame, as well as due to local price differences. Galvannealed steel is the preferred material for use in the construction of permanent debris and linen chute systems. References Sources Coatings Corrosion prevention Metal plating Zinc
Galvannealed
Chemistry
704
15,066,184
https://en.wikipedia.org/wiki/Mycoestrogen
Mycoestrogens are xenoestrogens produced by fungi. They are sometimes referred to as mycotoxins. Among important mycoestrogens are zearalenone, zearalenol and zearalanol. Although all of these can be produced by various Fusarium species, zearalenol and zearalanol may also be produced endogenously in ruminants that have ingested zearalenone. Alpha-zearalanol is also produced semisynthetically, for veterinary use; such use is prohibited in the European Union. Mechanism of action Mycoestrogens act as agonists of the estrogen receptors, ERα and ERβ. Sources Mycoestrogens are produced by various strains of fungi, many of which fall under the genus Fusarium. Fusarium fungi are filamentous fungi that are found in the soil and are associated with plants and some crops, especially cereals. Zearalenone is mainly produced by F. graminearum and F. culmorum strains, which inhabit different areas depending on temperature and humidity. F. graminearum prefers to inhabit warmer and more humid locations such as Eastern Europe, Northern America, Eastern Australia, and Southern China, in comparison to F. culmorum, which is found in colder Western Europe. Health effects Mycoestrogens mimic natural estrogen in the body by acting as estrogen receptor (ER) ligands. Mycoestrogens have been identified as endocrine disruptors due to their high binding affinity for ERα and ERβ, exceeding that of well known antagonists such as bisphenol A and DDT. Studies have been performed that strongly suggest a relationship between detectable levels of mycoestrogens and growth and pubertal development. More than one study has shown that detectable levels of zearalenone and its metabolite alpha-zearalanol in girls are associated with significantly shorter heights at menarche. Other reports have documented premature onset of puberty in girls. Estrogens are known to cause decreased body weight in model animals, and the same effect has been seen in rats exposed to zearalenone. Interactions of ZEN and its metabolite with human androgen receptors (hAR) have also been documented. Metabolism Zearalenone has two major phase I metabolites: α-zearalenol and β-zearalenol. When exposed orally, ZEN is absorbed by the intestinal lining and metabolized there as well as in the liver. Research into the metabolism of ZEN has been difficult because of the significant difference in biotransformation between species, which makes comparison challenging. Phase I The first transformation in the metabolism of ZEN reduces the ketone group to an alcohol via aliphatic hydroxylation, resulting in the formation of the two zearalenol metabolites. This process is catalyzed by 3α- and 3β-hydroxysteroid dehydrogenase (HSD). CYP450 enzymes will then catalyze aromatic hydroxylation at the 13 or 15 position, resulting in 13- or 15-catechols. Steric hindrance at the 13 position is suspected to be the reason that in humans and rats there is more of the 15-catechol present. The catechols are then processed into monomethyl ethers by catechol-O-methyl transferase (COMT) and S-adenosyl methionine (SAM). After this transformation they may be metabolized further to quinones, which can cause the formation of reactive oxygen species (ROS) and covalent modification of DNA. Phase II Phase II metabolism includes glucuronidation and sulfation of the mycoestrogen compound. Glucuronidation is the major phase II metabolic pathway. The transferase UGT (uridine 5'-diphosphate glucuronosyltransferase) adds a glucuronic acid group sourced from uridine 5'-diphosphate glucuronic acid (UDPGA).
Excretion Mycoestrogens and their metabolites are largely excreted in urine in humans and in feces in other animal systems. In food Mycoestrogens are commonly found in stored grain. They can come from fungi growing on the grain as it grows, or after harvest during storage. Mycoestrogens can be found in silage. Some estimates state that 25% of global cereal production and 20% of global plant production may be at some point contaminated by mycotoxins of which mycoestrogens, especially those from fusarium strains, may make up a significant portion. Among mycoestrogens that contaminate plants are ZEN and its phase I metabolites. The limit for ZEN in unprocessed cereals, milling products, and cereal foodstuffs is 20-400 μg/kg (depending on the product in question). Types trans ZEN isomer References Mycoestrogens
Mycoestrogen
Chemistry,Biology
1,055
11,889,100
https://en.wikipedia.org/wiki/Wildlife%20of%20Eritrea
The wildlife of Eritrea is composed of its flora and fauna. Eritrea has 96 species of mammals and a rich avifauna of 566 species of birds. Fauna Mammals Birds References External links Eastern Africa: Ethiopia, extending into Eritrea, Eritrean coastal desert Eastern Africa: Eritrea, Ethiopia, Kenya, Somalia, and Sudan Biota of Eritrea Eritrea
Wildlife of Eritrea
Biology
71
2,376,390
https://en.wikipedia.org/wiki/Nuclear%20lightbulb
A nuclear lightbulb is a hypothetical type of spacecraft engine that uses a gaseous fission reactor to achieve nuclear propulsion. Specifically, it would be a type of gas core reactor rocket that uses a quartz wall to separate the nuclear fuel from the coolant/propellant. It would be operated at temperatures of up to 22,000 °C, where the vast majority of the electromagnetic emission is in the hard ultraviolet range. Fused silica is almost completely transparent to this light, so it would be used to contain the uranium hexafluoride and allow the light to heat reaction mass in a rocket or to generate electricity using a heat engine or photovoltaics. This type of reactor shows great promise in both of these roles. Rocket engine As a rocket engine, it can, like all nuclear rocket designs, greatly exceed the exhaust speed and specific impulse of a chemical rocket. Unlike open-cycle designs, which would cause nuclear fallout if used in a planetary atmosphere (e.g. Project Orion), it does not involve the release of any radioactive material from the rocket. The theoretical specific impulse (Isp) ranges from 1,500 to 3,000 seconds. Electrical power generation As a method to generate electricity, nuclear lightbulbs are extremely efficient because higher-temperature heat contains more Gibbs free energy than the low-temperature heat produced in current fossil-fuel plants and water-cooled nuclear reactors. References Nuclear reactors Nuclear spacecraft propulsion Nuclear technology
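To put the quoted specific impulse range in perspective, effective exhaust velocity follows from the standard relation v_e = Isp × g0. The sketch below is purely illustrative; the chemical-rocket comparison value (~450 s) is an assumed reference point, not a figure from the article.

```python
# Illustrative conversion of specific impulse (Isp) to effective exhaust
# velocity using the standard relation v_e = Isp * g0.
# The 1,500-3,000 s range comes from the text above; the ~450 s chemical
# reference value is an assumption added for comparison.

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity in m/s for a given specific impulse in seconds."""
    return isp_seconds * G0

for isp in (450, 1500, 3000):  # chemical reference, lower and upper lightbulb bounds
    print(f"Isp = {isp:>4} s  ->  v_e ≈ {exhaust_velocity(isp) / 1000:.1f} km/s")
```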
Nuclear lightbulb
Physics
286
216,219
https://en.wikipedia.org/wiki/Chronic%20toxicity
Chronic toxicity, the development of adverse effects as a result of long-term exposure to a contaminant or other stressor, is an important aspect of aquatic toxicology. Adverse effects associated with chronic toxicity can be directly lethal but are more commonly sublethal, including changes in growth, reproduction, or behavior. Chronic toxicity is in contrast to acute toxicity, which results from shorter exposures to higher concentrations. Various toxicity tests can be performed to assess the chronic toxicity of different contaminants, and usually last at least 10% of an organism's lifespan. Results of aquatic chronic toxicity tests can be used to determine water quality guidelines and regulations for protection of aquatic organisms. Definition Chronic toxicity is the development of adverse effects as the result of long-term exposure to a toxicant or other stressor. It can manifest as direct lethality but more commonly refers to sublethal endpoints such as decreased growth, reduced reproduction, or behavioral changes such as impaired swimming performance. Common aquatic tests Chronic toxicity tests are performed to determine the long-term toxicity potential of toxicants or other stressors, commonly to aquatic organisms. Examples of common aquatic chronic toxicity test organisms, durations, and endpoints include: Fathead minnow, Pimephales promelas, larval survival and growth Daphnia, Daphnia magna, 21-d survival and reproduction Green algae, Raphidocelis subcapitata, 72-h growth Amphipod, Hyalella azteca, 42-d survival, growth, and reproduction Application of test results Results from chronic toxicity tests can be used to calculate values used for determining water quality standards. These include: NOEC/LOEC The no observed effects concentration (NOEC) is determined as the highest tested concentration that shows no statistically significant difference from the control. The lowest observed effects concentration (LOEC) is the lowest concentration of those tested that produced a statistically significant difference from the control. NOECs and LOECs can be derived from both acute and chronic tests and are used by agencies to set water quality standards. MATC/CV The maximum acceptable toxicant concentration (MATC) is calculated as the geometric mean of the NOEC and LOEC. The MATC is sometimes called the chronic value (CV) and defined as “the concentration (threshold) at which chronic effects are first observed”. PEC/PNEC The predicted no effects concentration (PNEC) is calculated from toxicity tests to determine the concentration that is not thought to cause adverse effects to aquatic organisms. Determination of aquatic PNEC values requires toxicity test results from freshwater fish (e.g. Pimephales promelas), freshwater invertebrates (e.g. Daphnia magna), and freshwater algae (e.g. Raphidocelis subcapitata). The probable effects concentration (PEC), the concentration predicted to be in the environment, is compared with the PNEC in risk assessment. The PEC takes into account both acute and chronic exposures to toxicants. ACR/AF The acute to chronic ratio (ACR) allows for an estimation of chronic toxicity using acute toxicity data. It is calculated by dividing the LC50 by the MATC. The inverse of this (MATC/LC50) is termed the application factor (AF). AFs can be used when chronic toxicity data are not known for a specific species.
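The relationships among NOEC, LOEC, MATC, ACR, and AF described above are simple arithmetic, so a short sketch may help; the concentrations used here are made-up illustrative numbers, not data from any cited study.

```python
from math import sqrt

# Hypothetical test results for one toxicant and one species (units: mg/L).
noec = 2.0    # highest concentration with no statistically significant effect
loec = 4.0    # lowest concentration with a statistically significant effect
lc50 = 36.0   # acute median lethal concentration

# Maximum acceptable toxicant concentration: geometric mean of NOEC and LOEC.
matc = sqrt(noec * loec)

# Acute-to-chronic ratio and its inverse, the application factor.
acr = lc50 / matc
af = matc / lc50

print(f"MATC ≈ {matc:.2f} mg/L")
print(f"ACR  ≈ {acr:.1f}")
print(f"AF   ≈ {af:.3f}")

# With an AF in hand, chronic toxicity for a species lacking chronic data can
# be roughly estimated from its acute LC50 alone: estimated MATC = LC50 * AF.
lc50_other_species = 20.0  # assumed acute value for a related species
print(f"Estimated MATC for related species ≈ {lc50_other_species * af:.2f} mg/L")
```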
Challenges with testing The chronic toxicity of toxicants is useful information for determining water quality guidelines, but this information is not always easily obtained. Chronic toxicity tests can be costly and difficult, due to challenges in keeping control organisms alive, maintaining water quality, retaining constant chemical exposures, and the sheer time required for tests. Because of this, acute toxicity tests are more commonly employed, and ACRs and AFs are used to estimate the chronic toxicity of toxicants to organisms. Factors that influence toxicity There are many factors that can increase or decrease the toxicity of toxicants or stressors, making interpretation of test results difficult. These can be chemical, biological, or toxicological. Chemical factors Water chemistry plays an important role in the toxicity of certain toxicants. This includes pH, salinity, water hardness, conductivity, temperature, and amounts of dissolved organic carbon (DOC). For instance, the toxicity of copper is decreased with increasing amounts of DOC, as described by the biotic ligand model (BLM). Biological factors Chronic toxicity will vary with differences in organisms, including species, size, and age. Certain species are more susceptible to toxic effects, as shown in species sensitivity distributions (SSDs). Certain life stages are more susceptible to adverse effects, which is why early life stage (ELS) toxicity tests are performed for certain aquatic species. In addition, other physical factors, like organism size, can lead to differences in response to toxicants. Examples for use in water quality guidelines Water quality guidelines are determined based on the results of both acute and chronic toxicity tests. Criteria maximum concentrations (CMCs) are obtained from acute toxicity tests, whereas criteria continuous concentrations (CCCs) are obtained from chronic toxicity tests. They are values determined by the U.S. EPA to be protective of aquatic organisms. See also Aquatic toxicology Environmental toxicology Ecotoxicology Toxicology Acute toxicity References Toxicology
Chronic toxicity
Environmental_science
1,093
9,250,314
https://en.wikipedia.org/wiki/Selective%20adsorption
In surface science, selective adsorption is the effect in which minima associated with bound-state resonances occur in the specular intensity in atom–surface scattering. In crystal growth, selective adsorption refers to the phenomenon where adsorbing molecules attach preferentially to certain crystal faces. An example of selective adsorption can be demonstrated in the growth of Rochelle salt crystals. If copper ions are added to the solution during the growth process, the growth of some crystal faces will slow down as copper apparently becomes a barrier to adsorption. However, by then adding sodium hydroxide to the solution, the preferred crystal faces will change once again. Discovery Pronounced intensity minima were first observed in 1930 by Immanuel Estermann, Otto Frisch, and Otto Stern, during a series of gas-surface interaction experiments attempting to demonstrate the wave nature of atoms and molecules. The phenomenon was explained in 1936 by John Lennard-Jones and Devonshire in terms of resonant transitions to bound surface states. Significance The selective adsorption binding energies can supply information on the gas-surface interaction potentials by yielding the vibrational energy spectrum of the gas atom bound to the surface. Starting from the 1970s, the effect has been extensively studied, both theoretically and experimentally. Energy levels measured with this technique are available for many systems. References Surface science
Selective adsorption
Physics,Chemistry,Materials_science
267
2,243,695
https://en.wikipedia.org/wiki/Propanamide
Propanamide has the chemical formula CH3CH2C(O)NH2. It is the amide of propanoic acid. This organic compound is a primary amide. Organic compounds of the amide group can react in many different organic processes to form other useful compounds for synthesis. Preparation Propanamide can be prepared by the condensation reaction between urea and propanoic acid, or by the dehydration of ammonium propionate. Reactions Propanamide, being an amide, can undergo the Hofmann rearrangement to produce ethylamine gas. References Propionamides
Propanamide
Chemistry
131
888,959
https://en.wikipedia.org/wiki/Sudanese%20kinship
Sudanese kinship, also referred to as the descriptive system, is a kinship system used to define family. Identified by Lewis Henry Morgan in his 1871 work Systems of Consanguinity and Affinity of the Human Family, the Sudanese system is one of the six major kinship systems (Eskimo, Hawaiian, Iroquois, Crow, Omaha and Sudanese). The Sudanese kinship system is the most complicated of all kinship systems. It maintains a separate designation for almost every one of Ego's kin, based on their distance from Ego, their relation, and their gender. Ego's father is distinguished from Ego's father's brother and from Ego's mother's brother. Ego's mother is similarly distinguished from Ego's mother's sister and from Ego's father's sister. For cousins, there are eight possible terms. Usage The system is named after the peoples of South Sudan. The Sudanese kinship system also existed in ancient Latin-speaking and Anglo-Saxon cultures. It exists today among present-day Arab, Turkish, and Chinese cultures. It tends to co-occur with patrilineal descent, and it is often said to be common in complex and stratified cultures. Variants Balkan kinships such as Bulgarian, Serbian, and Bosniak follow this system for different patrilineal and matrilineal uncles but collapse mother's sister and father's sister into the same term of "aunt", and Croatian and Macedonian further collapse the offspring of the uncles into one term. Similarly, Finnish kinship terms separate patrilineal and matrilineal uncles but not aunts, while making no distinctions between first cousins but giving a separate term for second cousins. Further distinctions (some much more common than others) can be made between a patrilineal or matrilineal grandson/granddaughter, niece/nephew, grandfather/grandmother and others, by using compound words. An interesting feature is the presence of many unique words originating from Proto-Uralic and Germanic languages to describe affinal kinships such as, but not limited to, the brother of a spouse (lanko), mother-in-law (anoppi) or even the husband of a daughter (vävy). On the opposite side, Chinese adds an additional dimension of relative age. Ego's older siblings are distinguished from younger, as are those of Ego's parents. One must specify whether an older (e.g. Mandarin 哥哥 gēge) or a younger (e.g. Mandarin 弟弟 dìdi) brother is meant. Similarly, a term for "uncle" or (at least in some varieties of Chinese, including Mandarin) even "father's brother" does not exist without circumlocution; the speaker must either specify "father's older brother" (e.g. Mandarin 伯伯 bóbo) or "father's younger brother" (e.g. Mandarin 叔叔 shūshu). This does not apply to maternal uncles. See also Chinese kinship References Further reading William Haviland, Cultural Anthropology, Wadsworth Publishing, 2002. External links The nature of kinship Sudanese kin terms, University of Manitoba Kinship and descent Kinship terminology
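To make the combinatorics behind the "eight possible terms" for cousins concrete, the following sketch simply enumerates the relationships a Sudanese-type system distinguishes; the labels are descriptive English glosses, not actual kin terms in any language.

```python
from itertools import product

# A Sudanese (descriptive) system distinguishes cousins by which parent links
# them, the sex of that parent's sibling, and the cousin's own sex, giving
# 2 x 2 x 2 = 8 distinct relationships, each of which carries its own term.

sides = ("father", "mother")          # linking parent
sibling_sex = ("brother", "sister")   # sex of the linking parent's sibling
cousin_sex = ("son", "daughter")      # sex of the cousin

for parent, sibling, child in product(sides, sibling_sex, cousin_sex):
    print(f"{parent}'s {sibling}'s {child}")

# An Eskimo-type system (e.g. English) collapses all eight of these
# relationships into the single term "cousin".
```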
Sudanese kinship
Biology
645
21,889,162
https://en.wikipedia.org/wiki/Cao%20L%E1%BB%97
Cao Lỗ (高魯, also known as Cao Thông, Đô Lỗ, Thạch Thần, or Đại Than Đô Lỗ Thạch Thần) was a Vietnamese weaponry engineer and minister who lived during the reign of King An Dương Vương. According to mythology, he built a crossbow from the claw of a turtle god that could fire 300 arrows in a single shot. See also Âu Lạc An Dương Vương Hồ Nguyên Trừng Trần Đại Nghĩa References Bibliography Vietnamese engineers Military engineers Firearm designers People from Bắc Ninh province Ancient Vietnam Deified Vietnamese people Vietnamese deities Vietnamese gods
Cao Lỗ
Engineering
121
10,224,728
https://en.wikipedia.org/wiki/Kir6.2
Kir6.2 is a major subunit of the ATP-sensitive K+ channel, a lipid-gated inward-rectifier potassium ion channel. The gene encoding the channel is called KCNJ11 and mutations in this gene are associated with congenital hyperinsulinism. Structure It is an integral membrane protein. The protein, which has a greater tendency to allow potassium to flow into a cell rather than out of a cell, is controlled by G-proteins and is found associated with the sulfonylurea receptor (SUR) to constitute the ATP-sensitive K+ channel. Pathology Mutations in this gene are a cause of congenital hyperinsulinism (CHI), an autosomal recessive disorder characterized by unregulated insulin secretion. Defects in this gene may also contribute to autosomal dominant non-insulin-dependent diabetes mellitus type II (NIDDM). See also Inward-rectifier potassium ion channel Potassium channel References Further reading External links GeneReviews/NCBI/NIH/UW entry on Familial Hyperinsulinism GeneReviews/NCBI/NIH/UW entry on Permanent Neonatal Diabetes Mellitus Ion channels
Kir6.2
Chemistry
265
70,903,695
https://en.wikipedia.org/wiki/EP%20Aquarii
EP Aquarii is a semiregular variable star in the equatorial constellation of Aquarius. At its peak brightness, visual magnitude 6.37, it might be faintly visible to the unaided eye under ideal observing conditions. A cool red giant on the asymptotic giant branch (AGB), its visible light brightness varies by about 1/2 magnitude over a period of 55 days. EP Aquarii has a complex circumstellar envelope (CSE), which has been the subject of numerous studies. In 1877, John Birmingham published a set of ten magnitude estimates for EP Aquarii (number 596 on his list) made during the 1870s, which ranged from magnitude 6 to 8. He listed the star as "Variable (?)", although he also claimed to have observed "a quick change" in magnitude. Birmingham's magnitude range is far wider than the 6.37 to 6.82 range listed in the GCVS; nonetheless, Birmingham's publication was cited as the reference when EP Aquarii received its variable star designation in 1973. The study of EP Aquarii's extended CSE began in 1984, when a spectral line arising from a rotational transition of carbon monoxide (CO) was detected by Zuckerman and Dyck, using the NRAO 12m telescope. In the early 1990s, analysis of the IRAS satellite data showed the presence of an extended dust shell surrounding the star, with a radius of about 1 light-year. In the late 1990s, high spectral-resolution observations at the Caltech Submillimeter Observatory (CSO) showed that EP Aquarii's CO line profiles had an unusual shape that suggested the presence of two distinct stellar winds, expanding at dramatically different velocities: 1.4 and 11 km/sec. In the early 2000s, observations of the 21 cm line of atomic hydrogen at the Nançay Radio Observatory confirmed the presence of a large circumstellar shell with multiple velocity components. The completion of the Atacama Large Millimeter Array allowed EP Aquarii to be studied with far higher sensitivity and angular resolution than was available to earlier researchers. The very narrow emission feature (indicating an expansion rate of 1.4 km/sec) seen in the CSO spectra was found to arise from a spiral structure, nearly face-on to our line of sight, which suggested the presence of an unseen companion star. The higher-velocity wind arises from a bi-conical outflow, the pole of which is roughly aligned with our line of sight. Which chemical compounds are found in the CSEs of AGB stars is largely determined by whether or not the stellar atmosphere contains more carbon than oxygen. EP Aquarii's atmosphere contains more oxygen than carbon. References M-type giants Semiregular variable stars Aquarius (constellation) Durchmusterung objects 207076 107516 Aquarii, EP
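Because magnitudes are logarithmic, the roughly half-magnitude variability quoted above corresponds to a modest change in actual brightness; the sketch below converts the amplitude to a flux ratio using the standard Pogson relation, purely as an illustration.

```python
# Convert a magnitude amplitude to a flux ratio using the Pogson relation:
#   m1 - m2 = -2.5 * log10(F1 / F2)   =>   F_bright / F_faint = 10**(0.4 * dm)
# The 0.5 mag amplitude and the 6.37-6.82 GCVS range come from the text above.

def flux_ratio(delta_mag: float) -> float:
    """Ratio of brighter to fainter flux for a magnitude difference delta_mag."""
    return 10 ** (0.4 * delta_mag)

print(f"0.5 mag amplitude -> flux ratio ≈ {flux_ratio(0.5):.2f}")
print(f"6.37-6.82 range   -> flux ratio ≈ {flux_ratio(6.82 - 6.37):.2f}")
```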
EP Aquarii
Astronomy
599
4,271,984
https://en.wikipedia.org/wiki/Latent%20tuberculosis
Latent tuberculosis (LTB), also called latent tuberculosis infection (LTBI), occurs when a person is infected with Mycobacterium tuberculosis but does not have active tuberculosis (TB). Active tuberculosis can be contagious while latent tuberculosis is not, and it is therefore not possible to get TB from someone with latent tuberculosis. Various treatment regimens are in use for latent tuberculosis. They generally need to be taken for several months. Epidemiology Latent tuberculosis is found worldwide; approximately one third of the world's population is latently infected with M. tuberculosis, with a new infection occurring approximately every second. The spread of tuberculosis is uneven throughout the world, with approximately 80% of the population in many Asian and African countries testing positive on tuberculin tests, while only 5–10% of the US population tests positive. Transmission Latent disease TB bacteria are spread only by a person with active TB disease. In people who develop active TB of the lungs, also called pulmonary TB, the TB skin test will often be positive. In addition, they will show all the signs and symptoms of TB disease, and can pass the bacteria to others. So, if a person with TB of the lungs sneezes, coughs, talks, sings, or does anything that forces the bacteria into the air, other people nearby may breathe in TB bacteria. Statistics show that approximately one-third of people exposed to pulmonary TB become infected with the bacteria, but only one in ten of these infected people develops active TB disease during their lifetimes. However, exposure to tuberculosis is very unlikely to result from a few minutes in a store or a few minutes of social contact. "It usually takes prolonged exposure to someone with active TB disease for someone to become infected. After exposure, it usually takes 8 to 10 weeks before the TB test would show if someone had become infected." Depending on ventilation and other factors, these tiny droplets [from the person who has active tuberculosis] can remain suspended in the air for several hours. Should another person inhale them, he or she may become infected with TB. The probability of transmission will be related to the infectiousness of the person with TB, the environment where the exposure occurred, the duration of the exposure, and the susceptibility of the host. In fact, "it isn't easy to catch TB. You need consistent exposure to the contagious person for a long time. For that reason, you're more likely to catch TB from a relative than a stranger." If a person has latent tuberculosis, they do not have active, contagious tuberculosis. Once exposed, people very often have latent tuberculosis. To convert to active tuberculosis, the bacteria must become active. In some countries, such as Canada, people have medical privacy or "confidentiality" and do not have to reveal their active tuberculosis case to family, friends, or co-workers; therefore, the person who gets latent tuberculosis may never know who had the active case of tuberculosis that caused the latent tuberculosis diagnosis for them. Only by required testing (required in some jobs) or by developing symptoms of active tuberculosis and visiting a medical doctor who does testing will a person know they have been exposed. Because tuberculosis is not common in the United States, doctors may not suspect tuberculosis; therefore, they may not test. If a person has symptoms of tuberculosis, it is wise to be tested.
Persons with diabetes may have an 18% chance of converting to active tuberculosis. In fact, mortality from tuberculosis is greater in diabetic patients. Persons with HIV and latent tuberculosis have a 10% chance of developing active tuberculosis every year. "HIV infection is the greatest known risk factor for the progression of latent M. tuberculosis infection to active TB. In many African countries, 30–60% of all new TB cases occur in people with HIV, and TB is the leading cause of death globally for HIV-infected people." Reactivation Once a person has been diagnosed with latent tuberculosis (LTBI) and a medical doctor confirms no active tuberculosis, the person should remain alert to symptoms of active tuberculosis for the remainder of their life. Even after completing the full course of medication, there is no guarantee that the tuberculosis bacteria have all been killed. "When a person develops active TB (disease), the symptoms (cough, fever, night sweats, weight loss etc.) may be mild for many months. This can lead to delays in seeking care, and results in transmission of the bacteria to others." Tuberculosis does not always settle in the lungs. If the outbreak of tuberculosis is in the brain, organs, kidneys, joints, or other areas, the patient may have active tuberculosis for an extended period of time before discovering that the disease is active. "A person with TB disease may feel perfectly healthy or may only have a cough from time to time." However, these symptoms do not guarantee tuberculosis, and they may not exist at all, yet the patient may still have active tuberculosis. A person with the symptoms listed may have active tuberculosis, and the person should immediately see a physician so that tuberculosis is not spread. If a person with the above symptoms does not see a physician, ignoring the symptoms can result in lung damage, eye damage, organ damage and eventually death. When tuberculosis settles in other organs (rather than the lungs) or other parts of the body (such as the skeleton), symptoms may be different from when it settles in the lungs (such as the symptoms listed above). Thus, without the cough or flu-like symptoms, a person can unwittingly have active tuberculosis. Other symptoms include back pain, flank pain, PID symptoms, confusion, coma, difficulty swallowing, and many other symptoms that could also be part of other diseases. (Please see the reference for more information on symptoms.) Therefore, seeing a physician and asking for a tuberculosis test is absolutely necessary to rule out tuberculosis when a patient has symptoms without a diagnosis of disease. Risk factors Situations in which tuberculosis may become reactivated are: if there is onset of a disease affecting the immune system (such as AIDS) or a disease whose treatment affects the immune system (such as chemotherapy in cancer or systemic steroids in asthma or Enbrel, Humira or Orencia in rheumatoid arthritis); malnutrition (which may be the result of illness or injury affecting the digestive system, or of a prolonged period of not eating, or disturbance in food availability such as during famine or residence in a refugee or concentration camp); degradation of the immune system due to aging; certain systemic diseases such as diabetes; other conditions such as debilitating disease (especially haematological and some solid cancers), long-term use of steroid medication, end-stage renal disease, silicosis, gastrectomy, and jejuno-ileal bypass; being elderly; and young age.
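As a rough illustration of how the annual reactivation risk quoted above accumulates, the sketch below compounds a constant yearly probability. Real risks are not constant over time, so this is purely arithmetic, not an epidemiological model.

```python
# Cumulative probability of at least one reactivation event over n years,
# assuming a constant, independent annual risk p:
#   P(reactivation within n years) = 1 - (1 - p) ** n
# The 10%/year figure for HIV-positive people with latent TB comes from the
# text above; treating it as constant over time is a simplifying assumption.

def cumulative_risk(annual_risk: float, years: int) -> float:
    return 1 - (1 - annual_risk) ** years

for years in (1, 5, 10):
    print(f"{years:>2} years at 10%/year -> {cumulative_risk(0.10, years):.0%}")
```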
Diagnosis There are two classes of tests commonly used to identify patients with latent tuberculosis: tuberculin skin tests and IFN-γ (interferon-gamma) tests. The skin tests currently include the following two: Mantoux test Heaf test IFN-γ tests include the following three: T-SPOT.TB QuantiFERON-TB Gold QuantiFERON-TB Gold In-Tube Tuberculin skin testing The tuberculin skin test (TST) in its first iteration, the Mantoux test, was developed in 1908. Tuberculin (also called purified protein derivative or PPD) is a standardised dead extract of cultured TB, injected into the skin to measure the person's immune response to the bacteria. So, if a person has been exposed to the bacteria previously, they should express an immune reaction to the injection, usually a mild swelling or redness around the site. There have been two primary methods of TST: the Mantoux test, and the Heaf test. The Heaf test was discontinued in 2005 because the manufacturer deemed its production to be financially unsustainable, though it was previously preferred in the UK because it was felt to require less training to administer and involved less inter-observer variation in its interpretation than the Mantoux test. The Mantoux test was the preferred test in the US, and is now the most widely used TST globally. Mantoux test See: Mantoux test The Mantoux test is now standardised by the WHO. 0.1 ml of tuberculin (100 units/ml), which delivers a dose of 5 units, is given by intradermal injection into the surface of the lower forearm (subcutaneous injection results in false negatives). A waterproof ink mark is drawn around the injection site so as to avoid difficulty finding it later if the level of reaction is small. The test is read 48 to 72 hours later. The area of induration (NOT of erythema) is measured transversely across the forearm (left to right, not up and down) and recorded to the nearest millimetre. Heaf test See: Heaf test The Heaf test was first described in 1951. The test uses a Heaf gun with disposable single-use heads; each head has six needles arranged in a circle. There are standard heads and pediatric heads: the standard head is used on all patients aged 2 years and older; the pediatric head is for infants under the age of 2. For the standard head, the needles protrude 2 mm when the gun is actuated; for the pediatric heads, the needles protrude 1 mm. Skin is cleaned with alcohol, then tuberculin (100,000 units/ml) is evenly smeared on the skin (about 0.1 ml); the gun is then applied to the skin and fired. The excess solution is then wiped off and a waterproof ink mark is drawn around the injection site. The test is read 2 to 7 days later. Grade 0: no reaction, or induration at three or fewer puncture points; Grade 1: induration at four or more puncture points; Grade 2: induration around the six puncture points coalesces to form a circle; Grade 3: induration of 5 mm or more; Grade 4: induration of 10 mm or more, or ulceration. The results of both tests are roughly equivalent as follows: Heaf grade 0 & 1 ~ Mantoux less than 5 mm; Heaf grade 2 ~ Mantoux 5–14 mm; Heaf grade 3 & 4 ~ Mantoux 15 mm or greater. Tuberculin conversion Tuberculin conversion is said to occur if a patient who has previously had a negative tuberculin skin test develops a positive tuberculin skin test at a later test. It indicates a change from negative to positive, and usually signifies a new infection. Boosting The phenomenon of boosting is one way of obtaining a false positive test result.
Theoretically, a person's ability to develop a reaction to the TST may decrease over time – for example, a person is infected with latent TB as a child, and is administered a TST as an adult. Because so much time has passed since an immune response to TB was necessary, that person might give a negative test result. If so, there is a fairly reasonable chance that the TST triggers a hypersensitivity in the person's immune system – in other words, the TST reminds the person's immune system about TB, and the body overreacts to what it perceives as a reinfection. In this case, when that subject is given the test again (as is standard procedure, see above) they may have a significantly greater reaction to the test, giving a very strong positive; this is commonly misdiagnosed as tuberculin conversion. This can also be triggered by receiving the BCG vaccine, as opposed to a proper infection. Although boosting can occur in any age group, the likelihood of the reaction increases with age. Boosting is only likely to be relevant if an individual is beginning to undergo periodic TSTs (health care workers, for example). In this case the standard procedure is called two-step testing. The individual is given their first test and, in the event of a negative, given a second test in 1 to 3 weeks. This is done to combat boosting in situations where, had that person waited up to a year to get their next TST, they might still have a boosted reaction and be misdiagnosed as a new infection. Here there is a difference between US and UK guidelines; in the US, testers are told to ignore the possibility of a false positive due to the BCG vaccine, as the BCG is seen as having waning efficacy over time. Therefore, the CDC urges that individuals be treated based on risk stratification regardless of BCG vaccination history, and if an individual receives a negative and then a positive TST they will be assessed for full TB treatment, beginning with an X-ray to confirm TB is not active and proceeding from there. Conversely, the UK guidelines acknowledge the potential effect of the BCG vaccination, as it is mandatory and therefore a prevalent concern – though the UK shares the procedure of administering two tests, one week apart, and accepting the second one as the accurate result, the guidelines also assume that a second positive is indicative of an old infection (and therefore certainly LTBI) or the BCG itself. In the case of BCG vaccinations confusing the results, interferon-γ (IFN-γ) tests may be used, as they are not affected by the BCG. Interpretation According to the U.S. guidelines, there are multiple size thresholds for declaring a positive result of latent tuberculosis from the Mantoux test: For people in high-risk groups, such as those who are HIV positive, the cutoff is 5 mm of induration; for medium-risk groups, 10 mm; for low-risk groups, 15 mm. The U.S. guidelines recommend that a history of previous BCG vaccination should be ignored. For details of tuberculin skin test interpretation, please refer to the CDC guidelines (reference given below). The UK guidelines are formulated according to the Heaf test: In patients who have had BCG previously, latent TB is diagnosed if the Heaf test is grade 3 or 4 and the patient has no signs or symptoms of active TB; if the Heaf test is grade 0 or 1, then the test is repeated. In patients who have not had BCG previously, latent TB is diagnosed if the Heaf test is grade 2, 3 or 4, and the patient has no signs or symptoms of active TB.
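The Heaf-to-Mantoux equivalence and the U.S. induration thresholds described in the preceding passages are essentially lookup rules, so they can be restated compactly; the sketch below is only a restatement of the figures in the text, with simplified risk-group labels, not a clinical decision tool.

```python
# Rough Heaf-grade to Mantoux-induration equivalence, as stated in the text:
#   Heaf grade 0 & 1 ~ Mantoux < 5 mm
#   Heaf grade 2     ~ Mantoux 5-14 mm
#   Heaf grade 3 & 4 ~ Mantoux 15 mm or greater
def heaf_to_mantoux_band(heaf_grade: int) -> str:
    if heaf_grade in (0, 1):
        return "less than 5 mm"
    if heaf_grade == 2:
        return "5-14 mm"
    if heaf_grade in (3, 4):
        return "15 mm or greater"
    raise ValueError("Heaf grades run from 0 to 4")

# Simplified restatement of the U.S. Mantoux cutoffs given in the text:
# >= 5 mm positive for high-risk groups (e.g. HIV-positive), >= 10 mm for
# medium-risk groups, >= 15 mm for low-risk groups.
THRESHOLDS_MM = {"high": 5, "medium": 10, "low": 15}

def mantoux_positive(induration_mm: float, risk_group: str) -> bool:
    """Return True if the induration meets the cutoff for the given risk group."""
    return induration_mm >= THRESHOLDS_MM[risk_group]

for grade in range(5):
    print(f"Heaf grade {grade} ~ Mantoux {heaf_to_mantoux_band(grade)}")
print(mantoux_positive(8, "high"))    # True  (>= 5 mm cutoff)
print(mantoux_positive(8, "medium"))  # False (< 10 mm cutoff)
print(mantoux_positive(16, "low"))    # True  (>= 15 mm cutoff)
```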
Repeat Heaf testing is not done in patients who have had BCG (because of the phenomenon of boosting). For details of tuberculin skin test interpretation, please refer to the BTS guidelines (references given below). Given that the US recommendation is that prior BCG vaccination be ignored in the interpretation of tuberculin skin tests, false positives with the Mantoux test are possible as a result of: (1) having previously had a BCG (even many years ago), or (2) periodic testing with tuberculin skin tests. Having regular TSTs boosts the immunological response in those people who have previously had BCG, so these people will falsely appear to be tuberculin conversions. This may lead to treating more people than necessary, with the possible risk of those patients developing adverse drug reactions. However, as the Bacille Calmette-Guérin vaccine is not 100% effective, and is less protective in adults than in pediatric patients, not treating these patients could lead to a possible infection. The current US policy seems to reflect a desire to err on the side of safety. The U.S. guidelines also allow for tuberculin skin testing in immunosuppressed patients (those with HIV, or who are on immunosuppressive drugs), whereas the UK guidelines recommend that tuberculin skin tests should not be used for such patients because they are unreliable. Interferon-γ testing The role of IFN-γ tests is undergoing constant review, and various guidelines have been published with the option for revision as new data become available. There are currently two commercially available interferon-γ release assays (IGRAs): QuantiFERON-TB Gold and T-SPOT.TB. These tests are not affected by prior BCG vaccination, and look for the body's response to specific TB antigens not present in other forms of mycobacteria and BCG (ESAT-6). Whilst these tests are new, they are now becoming available globally. Drug-resistant strains It is usually assumed by most medical practitioners in the early stages of a diagnosis that a case of latent tuberculosis involves the normal or regular strain of tuberculosis. It will therefore most commonly be treated with isoniazid (the most widely used treatment for latent tuberculosis). Only if the tuberculosis bacteria do not respond to the treatment will the medical practitioner begin to consider more virulent strains, requiring significantly longer and more thorough treatment regimens. There are 4 types of tuberculosis recognized in the world today: Tuberculosis (TB) Multi-drug-resistant tuberculosis (MDR TB) Extensively drug-resistant tuberculosis (XDR TB) Totally drug-resistant tuberculosis (TDR TB) Treatment The treatment of latent tuberculosis infection (LTBI) is essential to controlling and eliminating TB by reducing the risk that TB infection will progress to disease. Latent tuberculosis will convert to active tuberculosis in 10% of cases (or more in immunocompromised patients). Taking medication for latent tuberculosis is recommended by many doctors. In the U.S., the standard treatment is nine months of isoniazid, but this regimen is not widely used outside of the US. Terminology There is no agreement regarding terminology: the terms preventive therapy and chemoprophylaxis have been used for decades, and are preferred in the UK because they involve giving medication to people who have no disease and are currently well: the reason for giving medication is primarily to prevent people from becoming unwell.
In the U.S., physicians talk about latent tuberculosis treatment because the medication does not actually prevent infection: the person is already infected and the medication is intended to prevent existing silent infection from becoming active disease. There are no convincing reasons to prefer one term over the other. Specific situations "Populations at increased risk of progressing to active infection once exposed: Persons with recent TB infection [those infected within the previous two years] Congenital or acquired immunosuppressed patients (in particular, HIV-positive patients) Illicit intravenous drug users; alcohol and other chronic substance users Children (particularly those younger than 4 years old) Persons with comorbid conditions (i.e., chronic kidney failure, diabetes, malignancy, hematologic cancers, body weight of at least 10% less than ideal, silicosis, gastrectomy, jejunoileal bypass, asthma, or other disorders requiring long-term use of corticosteroids or other immunosuppressants)." Treatment regimens It is essential that assessment to rule out active TB be carried out before treatment for LTBI is started. To give treatment for latent tuberculosis to someone with active tuberculosis is a serious error: the tuberculosis will not be adequately treated and there is a serious risk of developing drug-resistant strains of TB. There are several treatment regimens currently in use: 9H — isoniazid for 9 months is the gold standard (93% effective, in patients with positive test results and fibrotic pulmonary lesions compatible with tuberculosis). 6H — isoniazid for 6 months might be adopted by a local TB program based on cost-effectiveness and patient compliance. This is the regimen currently recommended in the UK for routine use. The U.S. guidance excludes this regimen from use in children or persons with radiographic evidence of prior tuberculosis (old fibrotic lesions) (69% effective). 6 to 9H2 — An intermittent twice-weekly regimen for the above two treatment regimens is an alternative if administered under directly observed therapy (DOT). 4R — rifampicin for 4 months is an alternative for those who are unable to take isoniazid or who have had known exposure to isoniazid-resistant TB. 3HR — isoniazid and rifampin may be given daily for three months. 2RZ — The two-month regimen of rifampin and pyrazinamide is no longer recommended for treatment of LTBI because of the greatly increased risk of drug-induced hepatitis and death. 3HP – three-month (12-dose) regimen of weekly rifapentine and isoniazid. The 3HP regimen has to be administered under DOT. A self-administered therapy (SAT) of 3HP is being investigated in a large international study. Evidence for treatment effectiveness A 2000 Cochrane review containing 11 double-blinded, randomized controlled trials and 73,375 patients examined six- and 12-month courses of isoniazid (INH) for treatment of latent tuberculosis. HIV-positive patients and patients currently or previously treated for tuberculosis were excluded. The main result was a relative risk (RR) of 0.40 (95% confidence interval (CI) 0.31 to 0.52) for development of active tuberculosis over two years or longer for patients treated with INH, with no significant difference between treatment courses of six or 12 months (RR 0.44, 95% CI 0.27 to 0.73 for six months, and 0.38, 95% CI 0.28 to 0.50 for 12 months).
A Cochrane systematic review published in 2013 evaluated four alternative regimens to INH monotherapy for preventing active TB in HIV-negative people with latent tuberculosis infection. The evidence from this review found no difference between shorter regimens of rifampicin, or weekly directly observed rifapentine plus INH, compared to INH monotherapy in preventing active TB in HIV-negative people at risk of developing it. However, the review found that the shorter rifampicin regimen for four months and weekly directly observed rifapentine plus INH for three months "may have additional advantages of higher treatment completion and improved safety." However, the overall quality of evidence was low to moderate (as per GRADE criteria), and none of the included trials were conducted in low- and middle-income (LMIC) nations with high TB transmission, so the findings might not be applicable to settings with high TB transmission. Treatment efficacy There is no guaranteed "cure" for latent tuberculosis. "People infected with TB bacteria have a lifetime risk of falling ill with TB..." with those who have compromised immune systems, those with diabetes and those who use tobacco at greater risk. A person who has taken the complete course of isoniazid (or other full course prescription for tuberculosis) on a regular, timely schedule may have been cured. "Current standard therapy is isoniazid (INH) which reduces the risk of active TB by as much as 90 per cent (in patients with positive LTBI test results and fibrotic pulmonary lesions compatible with tuberculosis) if taken daily for 9 months." However, if a person has not completed the medication exactly as prescribed, the "cure" is less likely, and the "cure" rate is directly proportional to following the prescribed treatment specifically as recommended. Furthermore, "If you don't take the medicine correctly and you become sick with TB a second time, the TB may be harder to treat if it has become drug resistant." If a patient were to be cured in the strictest definition of the word, it would mean that every single bacterium in the system is removed or dead, and that person cannot get tuberculosis (unless re-infected). However, there is no test to assure that every single bacterium has been killed in a patient's system. As such, a person diagnosed with latent TB can safely assume that, even after treatment, they will carry the bacteria – likely for the rest of their lives. Furthermore, "It has been estimated that up to one-third of the world's population is infected with M. tuberculosis, and this population is an important reservoir for disease reactivation." This means that in areas where TB is endemic, treatment may be even less certain to "cure" TB, as reinfection could trigger activation of latent TB already present even in cases where treatment was followed completely. Controversy There is controversy over whether people who test positive long after infection have a significant risk of developing the disease (without re-infection). Some researchers and public health officials have warned that this test-positive population is a "source of future TB cases" even in the US and other wealthy countries, and that this "ticking time bomb" should be a focus of attention and resources. On the other hand, Marcel Behr, Paul Edelstein, and Lalita Ramakrishnan reviewed studies concerning the concept of latent tuberculosis in order to determine whether tuberculosis-infected persons have life-long infection capable of causing disease at any future time.
These studies, both published in the British Medical Journal (BMJ) in 2018 and 2019, show that the incubation period of tuberculosis is short, usually within months after infection, and very rarely more than two years after infection. They also show that more than 90% of people infected with M. tuberculosis for more than two years never develop tuberculosis even if their immune system is severely suppressed. Immunologic tests for tuberculosis infection such as the tuberculin skin test and interferon gamma release assays (IGRA) only indicate past infection, with the majority of previously infected persons no longer capable of developing tuberculosis. Ramakrishnan told the New York Times that researchers "have spent hundreds of millions of dollars chasing after latency, but the whole idea that a quarter of the world is infected with TB is based on a fundamental misunderstanding." Writing in The Atlantic, science journalist Katherine J. Wu explains: The first BMJ article disputing widespread latency was accompanied by an editorial written by Dr. Soumya Swaminathan, Deputy Director-General of the World Health Organization, who endorsed the findings and called for more funding of TB research directed at the most heavily afflicted parts of the world, rather than disproportionate attention to a relatively minor problem that affects just the wealthy countries. The World Health Organization no longer endorses the concept that all those with immunologic evidence of past TB infection are currently infected and so are at risk of developing TB some time in the future. In 2022, the WHO issued corrigenda to its 2021 Global TB Report to clarify estimates on the worldwide burden of infected people. These corrigenda deleted "About a quarter of the world's population is infected with M. tuberculosis" and replaced it with "About a quarter of the world's population has been infected with M. tuberculosis." The corrigenda also removed the prior estimate of the lifetime risk of TB of 5 to 10% among those with evidence of past TB infection, indicating that they no longer have confidence in earlier estimates that a substantial percentage of those with positive immunologic test results will develop the disease. See also Silent disease References Further reading External links Immunologic tests tuberculosis Tuberculosis
Latent tuberculosis
Biology
5,649
30,876,732
https://en.wikipedia.org/wiki/CULTAN%20Fertilization
CULTAN Fertilization, or Controlled Uptake Long Term Ammonium Nutrition, is a type of injection fertilization in which the entire amount of nitrogen needed for a plant to grow is injected at one time. During CULTAN fertilization, nitrogen is applied at the first signs of nitrogen deficiency in plants. Fertilizer is more commonly spread on the surface of fields in either a liquid or powder form by spraying it. Injecting fertilizer Although CULTAN fertilization is only done for nitrogen application, injection fertilization can also be used for other types of fertilizers. The most common way to inject fertilizer into the ground, regardless of chemical type, is through the drip method. The drip method involves an irrigation pump and a chemical injection pump. The two work simultaneously, slowly releasing nitrogen, letting it 'drip' from the system and then seep into the soil. If farming on a large scale, an engine (such as a tractor) is needed to move the system across the field. Advantages According to a study from the Czech Republic, injecting nitrogen into the soil leads to a higher dry matter content of the plant. Dry matter content is important because the nutrients found in plants are contained in the 'dry' fraction, not in the water that is also in the plant. CULTAN fertilization, like all forms of injection fertilization, allows for a more precise application and a more uniform distribution of the fertilizer. These methods allow the spread of nitrogen regardless of the condition of the field (wet, muddy) and reduce soil compaction caused by tractors moving along the field. Soil compaction can eventually lead to erosion. Labor is also significantly reduced when compared to conventional surface fertilization. Disadvantages Research has shown that average total yield is lower in crops that have been injected with nitrogen. The equipment used to inject fertilizer into the ground is more expensive than the typical spraying equipment, so injection is mostly used on smaller produce operations. References Fertilizers Nitrogen cycle
CULTAN Fertilization
Chemistry
413
3,072,173
https://en.wikipedia.org/wiki/Fossorial
A fossorial animal is one that is adapted to digging and which lives primarily (but not solely) underground. Examples of fossorial vertebrates are badgers, naked mole-rats, meerkats, armadillos, wombats, and mole salamanders. Among invertebrates, many molluscs (e.g., clams), insects (e.g., beetles, wasps, bees), and arachnids (e.g. spiders) are fossorial. Prehistoric evidence The physical adaptation of fossoriality is widely accepted as being widespread among many prehistoric phyla and taxa, such as bacteria and early eukaryotes. Furthermore, fossoriality has evolved independently multiple times, even within a single family. Fossorial animals appeared simultaneously with the colonization of land by arthropods in the late Ordovician period (over 440 million years ago). Other notable early burrowers include Eocaecilia and possibly Dinilysia. The oldest example of burrowing in synapsids, the lineage which includes modern mammals and their ancestors, is a cynodont, Thrinaxodon liorhinus, found in the Karoo of South Africa, estimated to be 251 million years old. Evidence shows that this adaptation occurred due to dramatic mass extinctions in the Permian period. Physical adaptations in vertebrates There are six major external modifications, as described by H. W. Shimer in 1903, that are shared by all mammalian burrowing species: Fusiform, a spindle-shaped body tapering at both ends, adapted for the dense subsurface environment. Reduced or missing eyesight, given the darkness of the subsurface environment. Small or missing external ears, to reduce friction during burrowing. Short and stout limbs, since swiftness or speed of movement is less important than the strength to dig. Broad and stout forelimbs (manus), including long claws, adapted to loosen the burrowing material, which the hind feet then disperse behind the animal. This trait is disputed by Jorge Cubo, who states that the skull is the main tool during excavation, but that the most active parts are the forelimbs for digging and that the hind-limbs are used for stability. Short or missing tail, which has little to no locomotor or burrowing use for most fossorial mammals. Other important physical features include a subsurface-adjusted skeleton: a triangularly shaped skull, a prenasal ossicle, chisel-shaped teeth, effectively fused and short lumbar vertebrae, a well-developed sternum, and strong forelimb but weaker hind-limb bones. Due to the lack of light, one of the most important features of fossorial animals is the development of physical, sensory traits that allow them to communicate and navigate in the dark subsurface environment. Because sound travels more slowly in air and faster through solid earth, the use of seismic (percussive) waves on a small scale is more advantageous in these environments. Several different uses are well documented. The Cape mole rat (Georychus capensis) uses drumming behavior to send messages to its kin through conspecific signaling. The Namib Desert golden mole (Eremitalpa granti namibensis) can detect termite colonies and similar prey underground due to the development of a hypertrophied malleus. This adaptation allows for better detection of low-frequency signals. The most likely explanation of the actual transmission of these seismic inputs, captured by the auditory system, is the use of bone conduction; whenever vibrations are applied to the skull, the signals travel through many routes to the inner ear.
For animals that burrow by compressing soil, the work required increases exponentially with body diameter. In amphisbaenians, an ancient group of burrowing lizard-like squamates, specializations include the pennation of the longissimus dorsi, the main muscle associated with burrowing, to increase muscle cross-sectional area. Constrained to small body diameters by the soil, amphisbaenians can increase muscle mass by increasing body length, not body diameter. In most amphisbaenians, limbs were lost as part of the fossorial lifestyle. However, the mole lizard Bipes, unlike other amphisbaenians, retains robust digging forelimbs comparable to those of moles and mole crickets. Physiological modifications Many fossorial and sub-fossorial mammals that live in temperate zones with partially frozen grounds tend to hibernate due to the seasonal lack of soft, succulent herbage and other sources of nutrition. H. W. Shimer concluded that, in general, species that adopted fossorial lifestyles likely did so because they failed, aboveground, to find food and protection from predators. Additionally, some, such as E. Nevo, propose that fossorial lifestyles could have arisen because aboveground climates were harsh. Shifts towards an underground lifestyle also entail changes in metabolism and energetics, often in a weight-dependent manner. Heavier sub-fossorial species have comparably lower basal rates than lighter ones. The average fossorial animal has a basal rate between 60% and 90%. Further observations conclude that larger burrowing animals, such as hedgehogs or armadillos, have lower thermal conductance than smaller animals, most likely to reduce heat storage in their burrows. Geological and ecological implications One important impact on the environment caused by fossorial animals is bioturbation, defined by Marshall Wilkinson as the alteration of fundamental properties of the soil, including surface geomorphic processes. It has been measured that small fossorial animals, such as ants, termites, and earthworms, displace a massive amount of soil. The total global rate of soil displacement by these animals is equivalent to the total global rate of tectonic uplift. The presence of burrowing animals also has a direct impact on the soil's composition, structure, and growing vegetation. The impacts these animals have range from feeding, harvesting, and caching to soil disturbance, but can differ considerably given the large diversity of fossorial species – especially herbivorous species. The net effect is usually an alteration of the composition of plant species and increased plant diversity, which can cause issues with standing crops, as the homogeneity of the crops is affected. Burrowing also impacts the nitrogen cycle in the affected soil. Mounds and bare soils that contain burrowing animals have considerably higher amounts of ammonium and nitrate, as well as greater nitrification potential and microbial consumption, than vegetated soils. The primary mechanism for this occurrence is the removal of the covering grassland. Burrowing snakes may be more vulnerable to changing environments than non-burrowing snakes, although this may not be the case for other fossorial groups such as lizards. This may form an evolutionary dead end for snakes. See also Arboreal Burrow Cursorial Fossa References Habitats Cave animals Animal physiology Animal locomotion
Fossorial
Physics,Biology
1,462
23,195,101
https://en.wikipedia.org/wiki/Retirement%20spend-down
At retirement, individuals stop working and no longer receive employment earnings; they enter a phase of life in which they rely on the assets they have accumulated to supply money for their spending needs for the rest of their lives. Retirement spend-down, or withdrawal rate, is the strategy a retiree follows to spend, decumulate or withdraw assets during retirement. Retirement planning aims to prepare individuals for retirement spend-down, because the different spend-down approaches available to retirees depend on the decisions they make during their working years. Actuaries and financial planners are experts on this topic. Importance More than 10,000 post-World War II baby boomers will reach age 65 in the United States every day between 2014 and 2027. This represents the majority of the more than 78 million Americans born between 1946 and 1964. As of 2014, 74% of these people are expected to be alive in 2030, which highlights that most of them will live for many years beyond retirement. By the year 2000, 1 in every 14 people was age 65 or older. By the year 2050, more than 1 in 6 people are projected to be at least 65 years old. The following statistics emphasize the importance of a well-planned retirement spend-down strategy for these people: 87% of workers do not feel very confident about having enough money to retire comfortably. 80% of retirees do not feel very confident about maintaining financial security throughout their remaining lifetime. 49% of workers over age 55 have less than $50,000 of savings. 25% of workers have not saved at all for retirement. 35% of workers are not currently saving for retirement. 56% of workers have not tried to calculate their income needs in retirement. Longevity risk Individuals each have their own retirement aspirations, but all retirees face longevity risk – the risk of outliving their assets. This can spell financial disaster. Avoiding this risk is therefore a baseline goal that any successful retirement spend-down strategy addresses. Generally, longevity risk is greatest for low- and middle-income individuals. The probabilities of a 65-year-old living to various ages are: Longevity risk is largely underestimated. Most retirees do not expect to live beyond age 85, let alone into their 90s. A 2007 study of recently retired individuals asked them to rank the following risks in order of the level of concern they present: Health care costs Inflation Investment risk Maintaining lifestyle Need for long-term care Outliving assets (longevity risk) Longevity risk was ranked as the least concerning of these risks. Withdrawal rate A portion of retirement income often comes from savings, sometimes referred to as a nest egg. Analyzing one's savings involves a number of variables: how savings are invested (e.g., cash, stocks, bonds, real estate), and how this changes over time inflation during retirement how quickly savings are spent – the withdrawal rate Often, investors will change some of their investment types as they age. A common strategy is to replace riskier investments with less risky investments as one gets older. A "risky" investment is an investment that has a higher potential return but also a higher potential loss. A "conservative" investment is an investment with a low potential return but a lower potential loss. A number of approaches exist to assist with choosing the correct risk level, for example, target date funds.
A common rule of thumb for the withdrawal rate is 4%, based on 20th century American investment returns and first articulated by Bengen. Bengen later stated the 4% guideline was intended as a "worst case scenario" for retirees in the United States, using a hypothetical example of someone who retired in 1968 at a stock market peak before a protracted bear market and high inflation through the 1970s. In that scenario, a 4% withdrawal rate allowed the investor's funds to last 30 years. Historically, Bengen says closer to 7% is an average safe withdrawal rate and at other times withdrawal rates up to 13% have been feasible. A 4% withdrawal rate is also one conclusion of the Trinity study (1998). This particular rule and approach have been heavily criticized, as have the methods of both sources, with critics arguing that withdrawal rates should vary with investment style (which they do in Bengen) and returns, and that this ignores the risk of emergencies and rising expenses (e.g., medical or long-term care). Others question the suitability of matching relatively fixed expenses with risky investment assets. New dynamic adjustment methods for retirement withdrawal rates were developed after Bengen's 4% withdrawal rate was proposed: constant inflation-adjusted spending, Bengen's floor-and-ceiling rule, and Guyton and Klinger's decision rules. More complex withdrawal strategies have also been created. As a guide to choosing a withdrawal rate, historical data show the maximum sustainable inflation-adjusted withdrawal rate over rolling 30-year periods for three hypothetical stock and bond portfolios from 1926 to 2014. Stocks are represented by the S&P 500 Index, bonds by an index of five-year U.S. Treasury bonds. During the best 30-year period, withdrawal rates of 10% annually could be used with a 100% success rate. The worst 30-year period had a maximum withdrawal rate of 3.5%. A 4% withdrawal rate survived most 30-year periods. The higher the stock allocation, the higher the rate of success. A portfolio of 75% stocks is more volatile but had higher maximum withdrawal rates. Starting with a withdrawal rate near 4% and a minimum 50% equity allocation in retirement gave a higher probability of success in historical 30-year periods. The above withdrawal strategies, sometimes referred to as strategic withdrawal plans or structured withdrawal plans, focus only on spend-down of invested assets and do not typically coordinate with retirement income from other sources, such as Social Security, pensions, and annuities. Under the actuarial approach described below for equating total personal assets with total spending liabilities to develop a sustainable spending budget, the amount to be withdrawn from invested assets each year is equal to the amount to be spent during the year (the spending budget) reduced by income from other sources for the year.
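A minimal sketch of the constant inflation-adjusted withdrawal mechanics described above. The return and inflation series here are flat hypothetical stand-ins, not the historical data referred to in the text.

```python
def simulate_spend_down(balance, initial_rate, returns, inflations):
    """Withdraw a fixed real amount each year (the mechanics behind the '4% rule').

    balance      -- starting portfolio value
    initial_rate -- first-year withdrawal as a fraction of the balance (e.g. 0.04)
    returns      -- yearly nominal portfolio returns
    inflations   -- yearly inflation rates used to grow the withdrawal
    Returns the number of years the money lasted.
    """
    withdrawal = balance * initial_rate
    for year, (r, infl) in enumerate(zip(returns, inflations), start=1):
        balance -= withdrawal          # spend at the start of the year
        if balance <= 0:
            return year - 1            # ran out of money during this year
        balance *= (1 + r)             # remaining assets grow (or shrink)
        withdrawal *= (1 + infl)       # keep purchasing power constant
    return len(returns)

# Hypothetical 30-year sequence: flat 6% returns and 3% inflation.
years = simulate_spend_down(1_000_000, 0.04, [0.06] * 30, [0.03] * 30)
print(f"Portfolio lasted {years} years")
```

The same function can be re-run with a volatile return sequence to see the sequence-of-returns risk that the critics quoted above point to.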
Sources of retirement income Individuals may receive retirement income from a variety of sources: Personal savings and interest Retirement savings plans (e.g., individual retirement account (United States), Registered Retirement Savings Plan (Canada)) Defined contribution plans (e.g., 401(k), 403(b), SIMPLE, 457(b), etc.) Defined benefit pension plans Social Insurance (e.g., Canada Pension Plan, Old Age Security (Canada), National Insurance (United Kingdom), Social Security (United States)) Rental income Annuities Dividends Sale of assets to provide income Tontines Work during retirement Each has unique risk, eligibility, tax, timing, form of payment, and distribution considerations that should be integrated into a retirement spend-down strategy. Modeling retirement spend-down: traditional approach Traditional retirement spend-down approaches generally take the form of a gap analysis. Essentially, these tools collect a variety of input variables from an individual and use them to project the likelihood that the individual will meet specified retirement goals. They model the shortfall or surplus between the individual's retirement income and expected spending needs to identify whether the individual has adequate resources to retire at a particular age. Depending on their sophistication, they may be stochastic (often incorporating Monte Carlo simulation) or deterministic. Standard input variables Current age Expected retirement date or age Life expectancy Current savings Savings rate Current salary Salary increase rate Tax rate Inflation rate Rate of return on investments Expected retirement expenses Additional input variables that can enhance model sophistication Marital status Spouse's age Spouse's assets Health status Medical expense inflation Estimated social security benefit Estimated benefits from employer-sponsored plans Asset class weights comprising personal savings Detailed expected retirement expenses Value of home and mortgage balance Life insurance holdings Expected post-retirement part-time income Output Shortfall or surplus There are three primary approaches used to estimate an individual's spending needs in retirement: Income replacement ratios: financial experts generally suggest that individuals need at least 70% of their pre-retirement income to maintain their standard of living. This approach is criticized from the standpoint that expenses, such as those related to health care, are not stable over time. Consumption smoothing: under this approach, individuals develop a target expenditure pattern, generally far before retirement, that is intended to remain level throughout their lives. Proponents argue that individuals often spend conservatively earlier in their lives and could increase their overall utility and living standard by smoothing their consumption. Direct expense modeling: with the help of financial experts, individuals attempt to estimate future expenses directly, using projections of inflation, health care costs, and other variables to provide a framework for the analysis. (A minimal deterministic sketch of such a gap model is given in code below.) Adverse impact of market downturn and lower interest rates Market volatility can have a significant impact on both a worker's retirement preparedness and a retiree's retirement spend-down strategy. American workers lost an estimated $2 trillion in retirement savings during the 2007–2008 financial crisis. 54% of workers lost confidence in their ability to retire comfortably due to the direct impact of the market turmoil on their retirement savings. Asset allocation contributed significantly to these issues. Basic investment principles recommend that individuals reduce their equity investment exposure as they approach retirement. Studies show, however, that 43% of 401(k) participants had equity exposure in excess of 70% at the beginning of 2008.
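The deterministic flavour of the gap analysis described above can be reduced to a few lines of arithmetic: project savings forward to the retirement date, project the cost of expected spending net of other income, and report the shortfall or surplus. All input values below are illustrative placeholders, not figures from the article.

```python
def retirement_gap(current_age, retire_age, life_expectancy,
                   savings, annual_contribution, growth_rate,
                   annual_expense_today, inflation, other_income=0.0):
    """Deterministic gap analysis: positive result = surplus, negative = shortfall."""
    years_to_retire = retire_age - current_age
    years_retired = life_expectancy - retire_age

    # Grow current savings and each year's contribution to the retirement date.
    assets = savings * (1 + growth_rate) ** years_to_retire
    for y in range(years_to_retire):
        assets += annual_contribution * (1 + growth_rate) ** (years_to_retire - y - 1)

    # Value, at retirement, of inflation-adjusted spending net of other income.
    need = 0.0
    for y in range(years_retired):
        expense = annual_expense_today * (1 + inflation) ** (years_to_retire + y)
        need += max(0.0, expense - other_income) / (1 + growth_rate) ** y
    return assets - need

gap = retirement_gap(current_age=45, retire_age=65, life_expectancy=90,
                     savings=200_000, annual_contribution=15_000, growth_rate=0.05,
                     annual_expense_today=50_000, inflation=0.025, other_income=20_000)
print(f"Projected {'surplus' if gap >= 0 else 'shortfall'}: ${abs(gap):,.0f}")
```

A stochastic version would replace the single growth rate with sampled return paths (Monte Carlo), as noted in the text.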
World Pensions Council (WPC) financial economists have argued that durably low interest rates in most G20 countries will have an adverse impact on the underfunding condition of pension funds as "without returns that outstrip inflation, pension investors face the real value of their savings declining rather than ratcheting up over the next few years". From 1982 until 2011, most Western economies experienced a period of low inflation combined with relatively high returns on investments across all asset classes including government bonds. This brought a certain sense of complacency amongst some pension actuarial consultants and regulators, making it seem reasonable to use optimistic economic assumptions to calculate the present value of future pension liabilities. The potentially long-lasting collapse in returns on government bonds is taking place against the backdrop of a protracted fall in returns for other core assets such as blue chip companies' stocks, and, more importantly, a silent demographic shock. Factoring in the corresponding longevity risk, pension premiums could be raised significantly while disposable incomes stagnate and employees work longer years before retiring. Coping with retirement spend-down challenges Longevity risk becomes more of a concern for individuals when their retirement savings are depleted by asset losses. Following the market downturn of 2008–09, 61% of working baby boomers were concerned about outliving their retirement assets. Traditional spend-down approaches generally recommend three ways they can attempt to address this risk: Save more (spend less) Invest more aggressively Lower their standard of living Saving more and investing more aggressively are difficult strategies for many individuals to implement due to constraints imposed by current expenses or an aversion to increased risk. Most individuals also are averse to lowering their standard of living. The closer individuals are to retirement, the more drastic these measures must be for them to have a significant impact on the individuals' retirement savings or spend-down strategies. Postponing retirement Individuals tend to have significantly more control over their retirement ages than they do over their savings rates, asset returns, or expenses. As a result, postponing retirement can be an attractive option for individuals looking to enhance their retirement preparedness or recover from investment losses. The relative impact that delaying retirement can have on an individual's retirement spend-down is dependent upon specific circumstances, but research has shown that delaying retirement from age 62 to age 66 can increase an average worker's retirement income by 33%. Postponing retirement minimizes the probability of running out of retirement savings in several ways: Additional returns are earned on savings that otherwise would be paid out as retirement income Additional savings are accumulated from a longer wage-earning period The post-retirement period is shortened Other sources of retirement income increase in value (Social Security, defined contribution plans, defined benefit pension plans) Studies show that nearly half of all workers expect to delay their retirement because they have accumulated fewer retirement assets than they had planned. Much of this is attributable to the market downturn of 2008–2009. Various unforeseen circumstances cause nearly half of all workers to retire earlier than they intend. In many cases, these individuals intend to work part-time during retirement.
Again, however, statistics show that this is far less common than intentions would suggest. Modeling retirement spend-down: alternative approach The appeal of retirement age flexibility is the focal point of an actuarial approach to retirement spend-down that has emerged in response to the surge of baby boomers approaching retirement. The approach is based on a personal asset/liability matching process and present values to determine current year and future year spending budget data points. This self-adjusting actuarial process is very similar to the process employed by pension actuaries to help pension plan sponsors determine current and future years’ annual contribution requirements. Similarity to individual asset/liability modeling Most approaches to retirement spend-down can be likened to individual asset/liability modeling. Regardless of the strategy employed, they seek to ensure that individuals' assets available for retirement are sufficient to fund their post-retirement liabilities and expenses. This is elaborated in dedicated portfolio theory. See also Trinity study References External links Post Retirement Needs and Risks, Society of Actuaries Financial Planning and Retirement Portal, AARP Retirement Portal, 360 Degrees of Financial Literacy Employee Benefit Research Institute Center for Retirement Research, Boston College Journal of Financial Planning Morningstar's 5-Point Retirement Portfolio Checkup Retirement Withdrawal Calculator How Much Can I Afford to Spend in Retirement Blog Actuarial science Investment Plan Individual retirement accounts
Retirement spend-down
Mathematics
2,810
2,584,881
https://en.wikipedia.org/wiki/Zugunruhe
Zugunruhe (/ˈtsuːkʊnˌʁuːə/; German: [ˈtsuːkʊnˌʁuːə]; lit. 'migration-anxiety') is the experience of migratory restlessness. Ethology In ethology, Zugunruhe describes anxious behavior in migratory animals, especially in birds during the normal migration period. When these animals are enclosed, such as in an Emlen funnel, Zugunruhe can be used to study the seasonal cycles of the migratory syndrome. Zugunruhe involves increased activity towards and after dusk with changes in the normal sleep pattern. "In accordance with their inherited calendars, birds get an urge to move. When migratory birds are held in captivity, they hop about, flutter their wings and flit from perch to perch just as birds of the same species are migrating in the wild. The caged birds ‘know’ they should be travelling too. This migratory restlessness, or Zugunruhe, was first described by Johann Andreas Naumann…[who] interpreted Zugunruhe to be an expression of the migratory instinct in birds." – William Fiennes, ‘The Snow Geese’ Etymology Zugunruhe is borrowed from German; it is a German compound word consisting of Zug, "move, migration," and Unruhe (anxiety, restlessness). The word was first published in 1707, when it was used to describe the "inborn migratory urge" in captive migrants. Though common nouns are normally not capitalised in English, Zugunruhe is sometimes capitalised following the German convention. Effect Zugunruhe has been artificially induced in experiments by simulating long days. Some studies on White-crowned Sparrows have suggested that prolactin is involved in the pre-migratory hyperphagia (feeding), fattening and Zugunruhe. However, others have found that prolactin may merely be associated with lipogenesis (fat accumulation). Researchers have been able to study the endocrine controls and navigational mechanisms associated with migration by studying Zugunruhe. The phenomenon of Zugunruhe was generally believed to be found only in migratory species; however, a study of a resident species has shown low-level Zugunruhe, including oriented activity, suggesting that the endogenous mechanisms for migratory behaviour may be present even in a resident species. Further suggestions for endogenous programs are provided by observations that the number of nights on which Zugunruhe is exhibited by caged migrants appears related to the distance of migration involved. References Further reading Bird migration Ethology German words and phrases
Zugunruhe
Biology
545
1,250,286
https://en.wikipedia.org/wiki/Pentachlorophenol
Pentachlorophenol (PCP) is an organochlorine compound used as a pesticide and a disinfectant. First produced in the 1930s, it is marketed under many trade names. It can be found as pure PCP, or as the sodium salt of PCP, the latter of which dissolves easily in water. It can be biodegraded by some bacteria, including Sphingobium chlorophenolicum. Uses PCP has been used as a herbicide, insecticide, fungicide, algaecide, and disinfectant and as an ingredient in antifouling paint. Some applications were in agricultural seeds (for nonfood uses), leather, masonry, wood preservation, cooling-tower water, rope, and paper. It has previously been used in the manufacture of food packaging materials. Its use has declined due to its high toxicity and slow biodegradation. Two general methods are used for preserving wood. The pressure process method involves placing wood in a pressure-treating vessel, where it is immersed in PCP and then subjected to applied pressure. In the nonpressure process method, PCP is applied by spraying, brushing, dipping, or soaking. Pentachlorophenol esters can be used as active esters in peptide synthesis, much like more popular pentafluorophenyl esters. Exposure People may be exposed to PCP in occupational settings through the inhalation of contaminated workplace air and dermal contact with wood products treated with PCP. Also, general population exposure may occur through contact with contaminated environment media, particularly in the vicinity of wood-treatment facilities and hazardous-waste sites. In addition, some other important routes of exposure seem to be the inhalation of contaminated air, ingestion of contaminated ground water used as a source of drinking water, ingestion of contaminated food, and dermal contact with soils or products treated with the chemical. Toxicity Short-term exposure to large amounts of PCP can cause harmful effects on the liver, kidneys, blood, lungs, nervous system, immune system, and gastrointestinal tract. Elevated temperature, profuse sweating, uncoordinated movement, muscle twitching, and coma are additional side effects. Contact with PCP (particularly in the form of vapor) can irritate the skin, eyes, and mouth. Long-term exposure to low levels, such as those that occur in the workplace, can cause damage to the liver, kidneys, blood, and nervous system. Finally, exposure to PCP is also associated with carcinogenic, renal, and neurological effects. The U.S. Environmental Protection Agency toxicity class classifies PCP in group B2 (probable human carcinogen). Monitoring of human exposure Pentachlorophenol may be measured in plasma or urine as an index of excessive exposure. This is usually performed by gas chromatography with electron-capture or mass-spectrometric detection. Since urine contains predominantly conjugated PCP in chronic exposure situations, prior hydrolysis of specimens is recommended. The current ACGIH biological exposure limits for occupational exposure to PCP are 5 mg/L in an end-of-shift plasma specimen and 2 mg/g creatinine in an end-of-shift urine specimen. Absorption in humans and animals PCP is quickly absorbed through the gastrointestinal tract following ingestion. Accumulation is not common, but if it does occur, the major sites are the liver, kidneys, plasma protein, spleen, and fat. Unless kidney and liver functions are impaired, PCP is quickly eliminated from tissues and blood, and is excreted, mainly unchanged or in conjugated form, via the urine. 
Single doses of PCP have half-lives in blood of 30 to 50 hours in humans. Biomagnification of PCP in the food chain is not thought to be significant due to the fairly rapid metabolism of the compound by exposed organisms. Releases to the environment PCP has been detected in surface waters and sediments, rainwater, drinking water, aquatic organisms, soil, and food, as well as in human milk, adipose tissue, and urine. As PCP is generally used for its properties as a biocidal agent, considerable concern exists about adverse ecosystem effects in areas of PCP contamination. Releases to the environment are decreasing as a result of declining consumption and changing use methods. However, PCP is still released to surface waters from the atmosphere by wet deposition, from soil by runoff and leaching, and from manufacturing and processing facilities. PCP is released directly into the atmosphere via volatilization from treated wood products and during production. Finally, releases to the soil can be by leaching from treated wood products, atmospheric deposition in precipitation (such as rain and snow), spills at industrial facilities, and at hazardous waste sites. After PCP is released into the atmosphere, it decomposes through photolysis. The main biodegradative pathway for PCP is reductive dehalogenation. In this process, the compound PCP is broken down to tetrachlorophenols, trichlorophenols, and dichlorophenols. Another pathway is methylation to pentachloroanisole (a more lipid-soluble compound). These two methods eventually lead to ring cleavage and complete degradation. In shallow waters, PCP is also quickly removed by photolysis. In deep or turbid water, sorption and biodegradation processes take place. In reducing soils and sediments, PCP can be degraded within 14 days to 5 years, depending on the anaerobic soil bacteria that are present. Adsorption of PCP in soils is pH dependent: it increases under acidic conditions and decreases under neutral and basic conditions. Synthesis PCP can be produced by the chlorination of phenol in the presence of a catalyst (anhydrous aluminium chloride or ferric chloride) at temperatures up to about 191 °C. This process does not result in complete chlorination and commercial PCP is only 84–90% pure. The main contaminants include other polychlorinated phenols, polychlorinated dibenzo-p-dioxins, and polychlorinated dibenzofurans. Some of these species are even more toxic than the PCP itself. Pentachlorophenol by country Pentachlorophenol is classified as a persistent organic pollutant (POP). In May 2015, countries that had signed the Stockholm Convention voted 90–2 to ban pentachlorophenol use. The United States is not a signatory and has not banned the chemical. New Zealand PCP was used in New Zealand as a timber preservative and antisapstain treatment, but has not been used since 1988. It was also sold as a moss killer to the general public (by Shell, at least) in the form of a 115g/L aqueous solution and labelled as a poison. United States Since the early 1980s, PCP has not been available for purchase and use by the general public in the U.S. Nowadays, most of the PCP used in the U.S. is restricted to the treatment of utility poles and railroad ties. In the United States, the water supplier must notify the public of any drinking-water supply with a PCP concentration exceeding the maximum contaminant level (MCL) of 1 ppb. Disposal of PCP and PCP-contaminated substances is regulated under RCRA as F-listed (F021) or D-listed (D037) hazardous wastes.
Bridges and similar structures such as piers can still be treated with pentachlorophenol. Chile PCP was widely used in Chile until the early 1990s as a fungicide to combat the so-called "blue stain" in pine timber under the name of Basilit. See also Chlorophenol Collision between MV Testbank and MV Seadaniel Creosote Dichlorophenol Havertown Superfund Monochlorophenol Trichlorophenol References Cited sources External links Non-CCA Wood Preservatives: Guide to Selected Resources - National Pesticide Information Center EPA on pentachlorophenol atsdr.cdc.gov on pentachlorophenol CDC – NIOSH Pocket Guide to Chemical Hazards EPA study that used the fungus Phanerochaete chrysosporium to aid in bioremediation of pentachlorophenol in soil EPA ReRegistration – www.regulations.gov -Search docket ID EPA-HQ-OPP-2014-0653. Fungicides Endocrine disruptors Chlorobenzene derivatives Phenols Persistent organic pollutants under the Stockholm Convention Steroid sulfotransferase inhibitors Uncouplers IARC Group 1 carcinogens
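The 30-to-50-hour blood half-lives quoted above correspond to simple first-order elimination. The sketch below assumes a single dose, no further exposure, and unimpaired kidney and liver function, in line with the conditions described in the text; it is an illustration of the decay arithmetic, not a pharmacokinetic model from the cited sources.

```python
def fraction_remaining(hours_elapsed, half_life_hours):
    """First-order elimination: fraction of a single dose still present in blood."""
    return 0.5 ** (hours_elapsed / half_life_hours)

for t in (24, 48, 96, 168):
    fast = fraction_remaining(t, 30)   # shorter half-life, faster elimination
    slow = fraction_remaining(t, 50)   # longer half-life, slower elimination
    print(f"after {t:3d} h: between {fast*100:.0f}% and {slow*100:.0f}% of the dose remains")
```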
Pentachlorophenol
Chemistry,Biology
1,854
36,921,205
https://en.wikipedia.org/wiki/V831%20Centauri
V831 Centauri is a multiple star system in the constellation Centaurus. It is visible to the naked eye with an apparent visual magnitude that ranges from 4.49 down to 4.66. The system is located at a distance of approximately 380 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +12 km/s. It is a likely member of the Lower Centaurus Crux concentration of the Sco OB2 association of co-moving stars. The magnitude 5.3 primary component forms a near-contact binary system, with the components designated Aa and Ab. It has a combined class of B8V, an orbital period of , a separation of , and both components are close to co-rotating with their orbit. The larger member has 4.1 times the mass of the Sun and 2.4 times the Sun's radius, while the companion has 3.4 and 2.3 times, respectively. In 1960, Alan William James Cousins announced the discovery that the star, then known as HR 4975, is a variable star. It was given its variable star designation, V831 Centauri, in 1985. The pair form an eclipsing system, and it is classed as a rotating ellipsoidal variable. The third star, component B, is magnitude 6.0 and forms a visual pair, designated See 170, with the inner system. They orbit each other with a period of 27.2 years and an eccentricity of 0.5. This star has a mass about 2.5 times that of the Sun and may be an Ap star. The fourth member, component C, orbits the system with a period of around 2,000 years. There is a fifth member, component D. References B-type main-sequence stars Lower Centaurus Crux Rotating ellipsoidal variables Centaurus Durchmusterung objects 114529 4975 064425 Centauri, V831 Ap stars
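Kepler's third law ties the 27.2-year outer orbit to its size. The sketch below combines the masses quoted above (4.1, 3.4, and 2.5 solar masses) purely to illustrate the relation a³ = M·P² in solar units; the resulting separation is an order-of-magnitude estimate, not a figure from the cited sources.

```python
def semi_major_axis_au(period_years, total_mass_msun):
    """Kepler's third law in solar units: a^3 [AU^3] = M [M_sun] * P^2 [yr^2]."""
    return (total_mass_msun * period_years ** 2) ** (1.0 / 3.0)

# Component B (about 2.5 M_sun) orbiting the inner Aa/Ab pair (about 4.1 + 3.4 M_sun).
a = semi_major_axis_au(27.2, 4.1 + 3.4 + 2.5)
print(f"approximate semi-major axis: {a:.1f} AU")
# With the quoted eccentricity e = 0.5, the separation swings between a*(1-e) and a*(1+e).
print(f"separation range: {a * 0.5:.1f} to {a * 1.5:.1f} AU")
```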
V831 Centauri
Astronomy
405
50,906,889
https://en.wikipedia.org/wiki/Intel%208289
The Intel 8289 is a bus arbiter designed for the Intel 8086/8087/8088/8089. The chip is supplied in a 20-pin DIP package. When the 8086 (or 8088) operates in maximum mode, it is configured primarily for multiprocessor operation or for working with coprocessors; the necessary control signals are generated by the 8289. This version was available for US$44.80 in quantities of 100. References External links Bus-Arbiter Jim Nadir: Designing 8086, 8088, 8089 Multiprocessing System With The 8289 Bus Arbiter, Application Note (AP-51), March 1979, Intel Corporation. Intel chipsets Input/output integrated circuits
Intel 8289
Technology
164
40,458,005
https://en.wikipedia.org/wiki/Deoxyschizandrin
Deoxyschizandrin is a bio-active isolate of Schisandra chinensis. Deoxyschizandrin has been found to act as an agonist of the adiponectin receptor 2 (AdipoR2). References Adiponectin receptor agonists Phytochemicals
Deoxyschizandrin
Chemistry
66
14,431,176
https://en.wikipedia.org/wiki/Photochemical%20Reflectance%20Index
The Photochemical Reflectance Index (PRI) is a reflectance measurement developed by John Gamon during his tenure as a postdoctoral fellow supervised by Christopher Field at the Carnegie Institution for Science at Stanford University. The PRI is sensitive to changes in carotenoid pigments (e.g. xanthophyll pigments) in live foliage. Carotenoid pigments are indicative of photosynthetic light use efficiency, or the rate of carbon dioxide uptake by foliage per unit energy absorbed. As such, it is used in studies of vegetation productivity and stress. Because the PRI measures plant responses to stress, it can be used to assess general ecosystem health using satellite data or other forms of remote sensing. Applications include vegetation health in evergreen shrublands, forests, and agricultural crops prior to senescence. PRI is defined by the following equation using reflectance (ρ) at 531 and 570 nm wavelength: PRI = (ρ531 − ρ570) / (ρ531 + ρ570). Some authors use the sign-reversed form, (ρ570 − ρ531) / (ρ570 + ρ531). The values range from –1 to 1. Sources ENVI Users Guide John Gamon, Josep Penuelas, and Christopher Field (1992). A narrow-waveband spectral index that tracks diurnal changes in photosynthetic efficiency. Remote Sensing of Environment, 41, 35-44. Drolet, G.G., Huemmrich, K.F., Hall, F.G., Middleton, E.M., Black, T.A., Barr, A.G. and Margolis, H.A. (2005). A MODIS-derived photochemical reflectance index to detect inter-annual variations in the photosynthetic light-use efficiency of a boreal deciduous forest. Remote Sensing of Environment, 98, 212-224. Biophysics Botany Remote sensing 1992 introductions
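The index itself is a one-line normalized-difference computation. The sketch below assumes reflectance values have already been extracted at the two wavebands; the numbers are made up for illustration.

```python
def pri(r531, r570):
    """Photochemical Reflectance Index from reflectance at 531 nm and 570 nm."""
    return (r531 - r570) / (r531 + r570)

# Hypothetical canopy reflectances (fractions of incident light).
print(f"PRI = {pri(0.048, 0.053):+.3f}")  # small negative value; sign follows the definition above
```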
Photochemical Reflectance Index
Physics,Biology
362
2,802,279
https://en.wikipedia.org/wiki/Poetaster
Poetaster (), like rhymester or versifier, is a derogatory term applied to bad or inferior poets. Specifically, poetaster has implications of unwarranted pretensions to artistic value. The word was coined in Latin by Erasmus in 1521. It was first used in English by Ben Jonson in his 1600 play Cynthia's Revels; immediately afterwards Jonson chose it as the title of his 1601 play Poetaster. In that play the "poetaster" character is a satire on John Marston, one of Jonson's rivals in the Poetomachia or War of the Theatres. Usage While poetaster has always been a negative appraisal of a poet's skills, rhymester (or rhymer) and versifier have held ambiguous meanings depending on the commentator's opinion of a writer's verse. Versifier is often used to refer to someone who produces work in verse with the implication that while technically able to make lines rhyme they have no real talent for poetry. Rhymer on the other hand is usually impolite. The faults of a poetaster frequently include errors or lapses in their work's meter, badly rhyming words which jar rather than flow, oversentimentality, too much use of the pathetic fallacy and unintentionally bathetic choice of subject matter. Although a mundane subject in the hands of some great poets can be raised to the level of art, such as On First Looking into Chapman's Homer by John Keats or Ode on the Death of a Favourite Cat, Drowned in a Tub of Gold Fishes by Thomas Gray, others merely produce bizarre poems on bizarre subjects, an example being James McIntyre, who wrote mainly of cheese. Other poets often regarded as poetasters are William Topaz McGonagall, Julia A. Moore, Edgar Guest, J. Gordon Coogler, Dmitry Khvostov, and Alfred Austin. Austin, despite having been a British poet laureate, is nevertheless regarded as greatly inferior to his predecessor, Alfred Lord Tennyson. Austin was frequently mocked during his career and is little read today. The American poet Joyce Kilmer (1886–1918), known for his 1913 poem "Trees", is often criticized for his overly sentimental and traditional verse written at the dawn of Modernist poetry, although some of his poems are frequently anthologized and retain enduring popular appeal. "Trees" has been parodied innumerable times, including by Ogden Nash. Modern use Musician Joanna Newsom on the album The Milk-eyed Mender uses the term to refer to a struggling narrator wracked with ambition to create beautiful poetry in a verse from "Inflammatory Writ": And as for my inflammatory writ? Well, I wrote it and I was not inflamed one bit. Advice from the master derailed that disaster; he said "Hand that pen over to me, poetaster" Rapper Big Daddy Kane uses an adjectival form as an insult in his song "Uncut, Pure": Your poetasterous style it plain bore me Pardon the vainglory, but here's the Kane story The band Miracle Fortress has a song entitled "Poetaster". Variants In the sense that a poetaster is a pretended poet, John Marston coined the term parasitaster, for one who pretends to be a parasite or sycophant, in his play Parasitaster, or The Fawn (1604). Later in the 17th century (the earliest cited use is from 1684) appeared the term criticaster for an inferior and pretentious critic. See also Doggerel Vogon poetry References Poets Incompetence
Poetaster
Biology
754
11,775,588
https://en.wikipedia.org/wiki/Ascochyta%20humuli
Ascochyta humuli is a plant pathogen that causes leaf spot on hops. See also List of Ascochyta species References Fungal plant pathogens and diseases Hop diseases humuli Fungus species
Ascochyta humuli
Biology
42
41,351,768
https://en.wikipedia.org/wiki/Imidoyl%20chloride
Imidoyl chlorides are organic compounds that contain the functional group RC(NR')Cl. A double bond exists between the nitrogen and the carbon centre. These compounds are analogues of acyl chlorides. Imidoyl chlorides tend to be highly reactive and are more commonly found as intermediates in a wide variety of synthetic procedures. Such procedures include Gattermann aldehyde synthesis, Houben-Hoesch ketone synthesis, and the Beckmann rearrangement. Their chemistry is related to that of enamines and their tautomers when the α hydrogen is next to the C=N bond. Many chlorinated N-heterocycles are formally imidoyl chlorides, e.g. 2-chloropyridine and 2-, 4-, and 6-chloropyrimidines. Synthesis and properties Imidoyl halides are synthesized by combining amides and halogenating agents. The structure of the carboxylic acid amides plays a role in the outcome of the synthesis. Imidoyl chlorides can be prepared by treating a monosubstituted carboxylic acid amide with phosgene. RC(O)NHR’ + COCl2 → RC(NR’)Cl + HCl + CO2 Thionyl chloride is also used. Imidoyl chlorides are generally colorless liquids or low-melting solids that are sensitive to both heat and especially moisture. In their IR spectra these compounds exhibit a characteristic νC=N band near 1650–1689 cm−1. Although both the syn and anti configurations are possible, most imidoyl chlorides adopt the anti configuration. Reactivity Imidoyl chlorides react readily with water, hydrogen sulfide, amines, and hydrogen halides. Treating imidoyl chlorides with water forms the corresponding amide: RC(NR’)Cl + H2O → RCONHR’ + HCl Aliphatic imidoyl chlorides are more sensitive toward hydrolysis than aryl derivatives. Electron-withdrawing substituents decrease the reaction rate. Imidoyl chlorides react with hydrogen sulfide to produce thioamides: RC(NR’)Cl + H2S → RC(S)NHR’ + HCl When amines are treated with imidoyl chlorides, amidines are obtained. RC(NR’)Cl + 2R”NH2 → RC(NR’)NHR” + R”NH3Cl When R' ≠ R", two isomers are possible. Upon heating, imidoyl chlorides also undergo dehydrohalogenation to form nitriles: RC(NR’)Cl → RC≡N + R’Cl Treatment of imidoyl chloride with hydrogen halides, such as HCl, forms the corresponding iminium chloride cations: RC(NR’)Cl + HCl → [RC(NHR’)Cl]+Cl− Applications Imidoyl chlorides are useful intermediates in the syntheses of several compounds, including imidates, thioimidates, amidines, and imidoyl cyanides. Most of these syntheses involve replacing the chloride with alcohols, thiols, amines, and cyanates, respectively. Imidoyl chlorides can also undergo Friedel-Crafts reactions to install imine groups on aromatic substrates. If the nitrogen of the imidoyl chloride has two substituents, the resulting chloroiminium ion is vulnerable to attack by aromatic rings without the need for a Lewis acid to remove the chloride first. This reaction is called the Vilsmeier–Haack reaction, and the chloroiminium ion is referred to as the Vilsmeier reagent. After attaching the iminium ion to the ring, the functional group can later be hydrolyzed to a carbonyl for further modification. The Vilsmeier-Haack reaction can be a useful technique to add functional groups to an aromatic ring if the ring contains electron-withdrawing groups, which make using the alternative Friedel-Crafts reaction difficult. Imidoyl chlorides can be easily halogenated at the α carbon position. Treating imidoyl chlorides with a hydrogen halide causes all α hydrogens to be replaced with the halide. This method can be an effective way to halogenate many substances.
Imidoyl chlorides can also be used to form peptide bonds by first creating amidines and then allowing them to be hydrolyzed to the amide. This approach may prove to be a useful route to synthesizing artificial proteins. Imidoyl chlorides can be difficult to handle. They react readily with water, which makes any attempt to isolate and store them for long periods of time difficult. Further, imidoyl chlorides tend to undergo self-condensation at higher temperatures if the imidoyl chloride has an α CH group. At even higher temperatures, the chlorine of the imidoyl chloride tends to be eliminated, leaving the nitrile. Because of these complications, imidoyl chlorides are typically prepared and used immediately. More stable intermediates are being sought, with substances such as imidoylbenzotriazoles being suggested. References Functional groups Organochlorides
Imidoyl chloride
Chemistry
1,092
51,049,784
https://en.wikipedia.org/wiki/Chip%20budding
Chip budding is a grafting technique. A chip of wood containing a bud is cut out of a scion with desirable properties (tasty fruit, pretty flowers, etc.). A similarly shaped chip is cut out of the rootstock, and the scion bud is placed in the cut, in such a way that the cambium layers match. The new bud is usually fixed in place using grafting tape. Chip budding can be done in mid- to late summer, unlike most grafting, which takes place in the early spring. Depending on sap flow, the bud may not begin growing until the following spring, though whether the graft has succeeded can be determined before then by seeing whether the bud swells or shrivels. The next spring, all shoots other than the one from the scion bud are removed; the scion shoot then becomes the new top of the plant. References External links chip budding part 2 - Demonstration of chip budding by Stephen Hayes Horticultural techniques Plant reproduction Asexual reproduction Agronomy
Chip budding
Biology
209
73,592,462
https://en.wikipedia.org/wiki/Plectocarpon%20galapagoense
Plectocarpon galapagoense is a species of lichenicolous fungus in the family Lecanographaceae. Native to the Galápagos Islands, it grows on and within the ascomata and thallus of Sarcographa tricosa, a host lichen species. Although it appears to be a weak parasite, it may cause significant damage to the host lichen's reproductive structures. Taxonomy Plectocarpon galapagoense was described by Damien Ertz and Frank Bungartz in 2019. Its species epithet refers to its occurrence in the Galápagos Islands. The holotype specimen was collected by the second author on Pinta Island at an elevation of ; it was found in a forest of Zanthoxylum fagara with abundant ferns in the understory. Description The lichenicolous fungus initially grows immersed within the host lichen, eventually bursting through and appearing as black, star-shaped or rounded structures measuring 1–2 mm in diameter. The surface of the fungus is to , with a slit-like hymenial . It does not induce galls or produce necrotic areas on the host lichen. Its asci are somewhat cylindrical to narrowly , with a narrow ocular chamber, and contain 4–8 spores. The are fusiform and contain two or three septa, initially hyaline but becoming dark brown and as they mature. Similar species While similar to Plectocarpon macaronesiae, P. galapagoense differs in the size of its ascomata, the appearance of its surface, and its host genus. Plectocarpon dirinariae is another similar species but differs in its ascomatal shape and host genus. Plectocarpon aequatoriale, found in Ecuador, has distinctly convex ascomata, longer ascospores, and a different host genus. Opegrapha plectocarpoidea, known from Papua New Guinea, differs in its ascomatal shape, number of spores in its asci, and the that continues below the hymenium. Habitat and distribution Plectocarpon galapagoense is endemic to the Galapagos Islands. It grows on Sarcographa tricosa sensu lato, which is found on twigs and branches of Chiococca alba trees in the forest understory of Zanthoxylum fagara on Pinta Island. References Lecanographaceae Lichenicolous fungi Fungi described in 2019 Fungi of the Galápagos Islands Taxa named by Frank Bungartz Fungus species Taxa named by Damien Ertz
Plectocarpon galapagoense
Biology
538
9,634,133
https://en.wikipedia.org/wiki/Local%20insertion
In broadcasting, local insertion (known in the United Kingdom as an opt-out) is the act or capability of a broadcast television station, radio station or cable system to insert or replace part of a network feed with content unique to the local station or system. Most often this is a station identification (required by the broadcasting authority, such as the U.S. Federal Communications Commission), but it is also commonly used for television or radio advertisements, or a weather or traffic report. A digital on-screen graphic ("dog" or "bug"), commonly a translucent watermark, may also be keyed (superimposed) with a television station ID over the network feed using a character generator with genlock. In cases where individual broadcast stations carry programs separate from those shown on the main network, this is known as regional variation (in the United Kingdom) or an opt-out (in Canada and the United States). Automated local insertion used to be triggered with in-band signaling, such as DTMF tones or sub-audible sounds (such as 25 Hz), but is now done with out-of-band signaling, such as analog signal subcarriers via communications satellite, or now more commonly via digital signals; broadcast automation equipment can then handle these automatically. In an emergency, such as severe weather, local insertion may also occur instantly through command from another network or other source (such as the Emergency Alert System or First Warning). In this case, the most urgent warning messages may interrupt without delay, while others may be worked into a normal break in programming within 15 minutes of their initial issuance. Within individual programs In the United States, insertion can easily be heard every evening on the nationally syndicated radio show Delilah, where the host does a pre-recorded station-specific voiceover played over a music bed from the network. When host Delilah Rene says "this is Delilah", her voice (often in a slightly different tone or mood than the one in which she has just been speaking) then identifies the branding or identification for the specific station (for example, "on B98.5 FM" when heard on WSB-FM in Atlanta, Georgia). Because of this slight difference in vocal quality, many syndicated radio networks suggest using only one voice for local station IDs 24/7; this way, the difference in vocal intonation is lessened. Insertion is made conspicuous when, due to carelessness, or even abuse—e.g. to squeeze in one more ad—the network program is already underway by the time the insert closes. This same mode of insertion is heard during weather forecasts transmitted by outside companies such as Weatherology, where all the audio assets, including three to four days of upcoming weather, temperatures, wind direction/speed and the current conditions and possible warnings, are pre-recorded, then matched together to form the audio of a full forecast. The other more prominent example is during live sports programming carried over radio and television networks, where close to the top of the hour, a play-by-play announcer will say "we pause ten seconds for station identification; this is the (team name) (radio network branding)", or a close equivalent.
On most stations, this is a basic station identification, as required by the Federal Communications Commission (FCC), with the call letters and city of license relayed, while on others a quickly-read five second advertisement or program promotion is read before the identification, or a breaking news event or weather warning occurring during the event is relayed, followed by the station ID. Due to many sports rights deals for televised sports moving to regional sports networks which are not required to identify themselves under FCC guidelines, or network sports coverage where the station is identified through an on-screen display by the local station rather than speech, this is more prominent on radio rather than television. Local commercial (and some non-commercial) broadcast television stations also insert local commercial breaks during programming each half-hour while network-supplied or syndicated content is being broadcast. Television networks and syndication distributors give their affiliates either 60, 90 or 120 seconds each half-hour (typically totaling about four minutes per hour) to run local station breaks, including promos for the station and advertisements for national and local area businesses (and on a few stations, local news updates – which were particularly common during the 1970s through the 1990s, especially as the "24 Hour News Source" format became commonplace in the United States during the latter decade – current time and temperature information, or a brief local weather forecast), over network programming. Typically, these networks air a blank feed showing the network's logo (such as with Fox, NBC, The CW, and MyNetworkTV) or a series of public service announcements (as with ABC and CBS), while stations air local commercials. PBS member stations and other non-commercial educational stations also insert promos for network series and/or syndicated or locally produced programming during promo breaks; as these station are non-commercial, breaks are typically not featured during the programs themselves, instead promos are inserted in-between shows, even – in the case of PBS members – if the station is carrying the national network feed. Various television morning news shows (such as Good Morning America and Today) also allocate five minutes of programming time each half-hour for stations to carry a local news update at :25 and :55 minutes past the hour; however the national feed continues for stations that do not wish to "break away", either because they do not air a morning newscast or simply do not have a news department (for example, some mid-sized and smaller market NBC affiliates, such as KTEN in Ada, Oklahoma, do not air news cut-ins during the weekend edition of Today if they do not have a weekend morning newscast, but cut-ins are shown during the weekday telecasts where Today follows a morning newscast). This also occurs with news on NPR's Morning Edition and All Things Considered, which respectively air during the morning and evening rush hours. For commercial stations in the 2020s, the news and weather update, which was traditionally 2-3 minutes in the past, now may run only as long as a condensed one minute at most, with the rest of the allocation devoted to local advertising. 
Starting in the early 1990s, some cable television systems began carrying a local insert called "Local Edition", a segment featuring local news inserts (which are produced by area television stations or local cable operators) that air at :24 and :54 minutes past the hour during HLN's rolling daytime news block, usually during the network's non-essential features news block. This was discontinued when the network switched to a general news/talk format beginning in 2005. Transmitter identification Translator stations may also have local insertion, though this is generally limited to identifying the repeating station's callsign and community of license separate from its parent station. In the United States, the FCC also allows up to 30 seconds each hour for fundraising to keep the translator service on the air. Pay television Local insertion is also used by cable and telephone company television providers, whose cable and telco headends insert advertisements for the system, promotions for programs on other cable channels carried by the provider and commercials for local area businesses (such as car dealerships or furniture stores) at least twice each hour; unlike most commercial broadcast stations, however, cable channels often run only 60 seconds of local commercial inserts each half-hour near the end of the first or second commercial break and are aired in place of national ads or network promos that air during that given time. Direct-broadcast satellite services take advantage of the hard drive space on consumer digital video recorders to upload service-specific advertising and promotions localized to the customer's market area, though consumers with non-DVR units instead have default service advertising and promos. Local insertion on cable television is used especially on The Weather Channel in the U.S. and The Weather Network/MétéoMédia in Canada, where systems like the WeatherStar, IntelliStar and PMX have been used to show regularly scheduled local weather forecasts (known as "Local on the 8s" on The Weather Channel in the U.S.), as well as the lower display line (LDL) or lower-third graphic that is shown at other times. The Weather Channel, in particular, also airs ads during national breaks, at the end of some advertisements, allowing its WeatherSTAR or IntelliSTAR systems to insert selected locations for certain businesses operating in the area, such as restaurants or auto rental dealers; though The Weather Channel has not done this as much in recent years as it has in the past. This only applies to the cable systems, although in the U.S. direct-broadcast satellite services have shown an LDL of the current conditions and 12-hour forecast for select major cities. This is not seen on older TVRO or "big ugly dish" systems, as this is intended as a backhaul and has very few end-users, and is used as a clean feed, though some cable services which have not upgraded to the channel's HD systems may see the national overview instead while the standard definition broadcast remains localized.
A "cutout" at the upper right corner of the picture allows the sponsor's advertising logo to be shown live from the main video feed, while a datacast on the satellite (like that which provides the electronic program guide) sends simple forecast and conditions data for the entire country every couple of minutes. Graphics are stored on the receiver, and displayed according to the forecast, which is selected by ZIP code or city according to user settings. Additionally, starting in 2011, DirecTV users with digital video recorders will have commercials downloaded to their boxes, which will play according to their demographic information, likely commanding higher revenue from advertisers. This may eventually lead to or merge with interactive television, which may find more success on cable and telco television because of the lack of a return channel on satellite and broadcast. Internet-connected TVs may erode this barrier as well, however, with only their embedded flash memory chip necessary to hold short video clips. The ATSC 3.0 standard, the newest standard planned for over-the-air television, has an emphasis on targeted news, weather or information to a ZIP code, along with advertising and overall improved tracking of viewers; the standard will likely require current sets to utilize a set-top box to receive ATSC 3.0 signals before the technology becomes commoditized into future sets. References See also Broadcast automation and centralcasting Emergency Alert System Station identification Broadcast engineering Television terminology Radio broadcasting
Local insertion
Engineering
2,216
2,198,678
https://en.wikipedia.org/wiki/Wet%20wipe
A wet wipe, also known as a wet towel, wet one, moist towelette, disposable wipe, disinfecting wipe, or a baby wipe (in specific circumstances), is a small to medium-sized moistened piece of plastic or cloth that either comes folded and individually wrapped for convenience or, in the case of dispensers, as a large roll with individual wipes that can be torn off. Wet wipes are used for cleaning purposes like personal hygiene and household cleaning; each is a separate product depending on the chemicals added, and medical or office cleaning wipes are not intended for skin hygiene. In 2013, owing to increasing sales of the product in affluent countries, Consumer Reports reported that, according to its tests, efforts to make the wipes "flushable" had not entirely succeeded. Invention American Arthur Julius is seen as the inventor of wet wipes. Julius worked in the cosmetics industry and, in 1957, adjusted a soap portioning machine, putting it in a loft in Manhattan. Julius trademarked the name Wet-Nap in 1958, a name for the product that is still being used. After fine tuning his new hand-cleaning aid together with a mechanic, he unveiled his invention at the 1960 National Restaurant Show in Chicago and in 1963 started selling Wet-Nap products to Colonel Harland Sanders to be distributed to customers of Kentucky Fried Chicken. Production Ninety percent of wet wipes on the market are produced from nonwoven fabrics made of polyester or polypropylene. The material is moistened with water or other liquids (e.g., isopropyl alcohol) depending on the applications. The material may be treated with softeners, lotions, or perfume to adjust the tactile and olfactory properties. Preservatives such as methylisothiazolinone are used to prevent bacterial or fungal growth in the package. The finished wet wipes are folded and put in a pocket-sized package or a box dispenser. Uses Wet wipes can serve a number of personal and household purposes. Although marketed primarily for wiping infants' bottoms in diaper changing, it is not uncommon for consumers to also use the product to clean floors, toilet seats, and other surfaces around the home. Parents also use wet wipes, or as they are called for baby care, baby wipes, for wiping up baby vomit and to clean babies' hands and faces. Baby wipes Baby wipes are wet wipes used to cleanse the sensitive skin of infants. These are saturated with solutions anywhere from gentle cleansing ingredients to alcohol-based "cleaners". Baby wipes are typically sold in different pack counts (ranging up to 80 or more sheets per pack), and come with dispensing mechanisms. Baby wipes most likely originated in the mid-1950s as more people were travelling and needed a way to clean up on the go. One of the first companies to produce these was a company called Nice-Pak. They made napkin-sized paper cloth saturated with a scented skin cleanser. The first wet-wipe products specifically marketed as baby wipes, such as Kimberly-Clark's Huggies wipes and Procter & Gamble's Pampers wipes, appeared on the market in 1990. As the technology to produce wipes matured and became more affordable, smaller brands began to appear. By the 1990s, most super stores like Kmart and Wal-Mart had their own private label brand of wipes made by other manufacturers. After this period there was a boom in the industry and many local brands started manufacturing because of low entry barriers.
In December 2018, a New Zealand company launched the country's first ever wet and baby wipe alternative, the BDÉT Foam Wash. Toilet wet wipes Toilet wet wipes are sometimes preferred to standard toilet paper. Many brands sell toilet wet wipes, claiming they are "flushable". However, they do not decompose in septic tanks as they are made of polyester or polypropylene. In 2013 a Consumer Reports article said that none of the leading brands could pass their test. Personal hygiene Wet wipes are often included as part of a standard sealed cutlery package offered in restaurants or along with airline meals. Wet wipes began to be marketed as a luxury alternative to toilet paper by 2005 by companies such as Kimberly-Clark and Procter & Gamble. They are dispensed in the toilets of restaurants, service stations, doctors' offices, and other places with public use. Wet wipes have also found a use among visitors to outdoor music festivals, particularly those who camp, as an alternative to communal showers. Cleansing pads Cleansing pads are fiber sponges which have been previously soaked with water, alcohol and other active ingredients for a specific intended use. They are ready-to-use hygiene products and a simple and convenient way to remove dirt or other undesirable material. There are different types of cleansing pads offered by the beauty industry: make-up removing pads, anti-spot treatments and anti-acne pads that usually contain salicylic acid, vitamins, menthol and other treatments. Cleansing pads for preventing infection are usually saturated with alcohol and bundled in sterile packages. Hands and instruments may be disinfected with these pads while treating wounds. Disinfecting cleansing pads are often included in first aid kits for this purpose. Since the outbreak of H1N1, sales of individual impregnated wet wipes and gels in sachets and flowpacks have dramatically increased in the UK, following the Government's advice to keep hands and surfaces clean to prevent the spread of germs. Industrial wipes Industrial wipes are pre-impregnated, industrial-strength cleaning wipes with a powerful cleaning fluid that cuts through the dirt as the high-performance fabric absorbs the residue. They can clean a vast range of tough substances from hands, tools and surfaces, including: grime, grease, oil- and water-based paints and coatings, adhesives, silicone and acrylic sealants, poly foam, epoxy, oil, tar and more. Pain relief There are pain relief pads soaked with alcohol and benzocaine. These pads are good for treating minor scrapes, burns, and insect bites. They disinfect the injury and also ease pain and itching. Pet care Wet wipes are produced specifically for pet care, for example eye, ear, or dental cleansing pads (with boric acid, potassium chloride, zinc sulfate, sodium borate) for dogs, cats, horses, and birds. Healthcare Medical wet wipes are available for various applications. These include alcohol wet wipes, chlorhexidine wipes (for disinfection of surfaces and noninvasive medical devices), and sporicidal wipes. Medical wipes can be used to prevent the spread of pathogens such as norovirus and Clostridioides difficile. Effect on sewage systems Water management companies ask people not to flush wet wipes down toilets, as their failure to break apart or dissolve in water can cause sewer blockages known as fatbergs.
Since the mid-2000s, wet wipes such as baby wipes have become more common for use as an alternative to toilet paper in affluent countries, including the United States and the United Kingdom. This usage has in some cases been encouraged by manufacturers, who have labelled some wet wipe brands as "flushable". Wet wipes, when flushed down the toilet, have been reported to clog internal plumbing, septic systems and public sewer systems. The tendency for fat and wet wipes to cling together allegedly encourages the growth of the problematic obstructions in sewers known as "fatbergs". In addition, some brands of wipes contain alcohol, which can kill bacteria and denature enzymes responsible for breaking down solid waste in septic tanks. In the late 2010s, other alternatives such as gel wipe had also come on to the market. In 2014, a class action suit was filed in the U.S. District Court for the Northern District of Ohio against Target Corporation, and Nice-Pak Products Inc. on behalf of consumers in Ohio who purchased Target-brand flushable wipes. The lawsuit alleged the retailer misled consumers by marking the packaging on its Up & Up brand wipes as flushable and safe for sewer and septic systems. The lawsuit also alleged that the products were a public health hazard because they clogged pumps at municipal waste-treatment facilities. Target and Nice-Pak agreed to settle the case in 2018. In 2015, the city of Wyoming, Minnesota, launched a class action suit against six companies, including Procter & Gamble, Kimberly-Clark, and Nice-Pak, alleging they were fraudulently promoting their products as "flushable". The city dropped the lawsuit in 2018 after concluding that the city had not experienced damage to its sewer systems or a rise in maintenance costs. Upon announcement of the withdrawal of the suit, an industry trade group representing the manufacturers of the wipes released a statement that disputed the claims that the products are harmful to sewer systems. In 2019, the industry body Water UK announced a new standard for flushable wet wipes. Wipes will need to pass rigorous testing in order to gain a new and approved "Fine to Flush" logo. As of January 2019, only one product had been confirmed to meet the standard, although there were about seven others in the process of being tested. See also Oshibori – reusable Japanese wet hand towel Washlet – a mechanical alternative to wet wipes References Personal hygiene products Disinfectants Paper products Toilets Babycare Disposable products
Wet wipe
Biology
1,975
18,303,800
https://en.wikipedia.org/wiki/HD%20181433%20c
HD 181433 c is an extrasolar planet located approximately 87 light-years away in the constellation of Pavo, orbiting the star HD 181433. This planet is at least 0.64 times as massive as Jupiter and takes 962 days to orbit the star at an orbital distance of 1.76 astronomical units (AU), or 263 gigametres (Gm). The orbit is eccentric, however, and ranges from at periastron to at apastron. François Bouchy et al. have published a paper detailing the HD 181433 planetary system in Astronomy and Astrophysics. References External links HD 181433 Pavo (constellation) Exoplanets discovered in 2008 Giant planets Exoplanets detected by radial velocity
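As a rough sanity check of the orbital figures quoted above, Kepler's third law relates the quoted period and semi-major axis to the mass of the host star. The sketch below uses only values from this article; the implied stellar mass is a derived estimate, not a figure stated in the article.

```python
# Kepler's third law in solar units: M (solar masses) ~ a^3 / P^2,
# with a in AU and P in years. Quoted values for HD 181433 c:
P_years = 962 / 365.25        # orbital period
a_au = 1.76                   # semi-major axis
M_star = a_au**3 / P_years**2
print(f"Implied stellar mass: {M_star:.2f} solar masses")   # roughly 0.8
print(f"Semi-major axis: {a_au * 149.6:.0f} Gm")             # ~263 Gm, as quoted
```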
HD 181433 c
Astronomy
149
42,682,745
https://en.wikipedia.org/wiki/Potential%20theory%20of%20Polanyi
The potential theory of Polanyi, also called Polanyi adsorption potential theory, is a model of adsorption proposed by Michael Polanyi in which adsorption is described through the equilibrium between the chemical potential of a gas near the surface and the chemical potential of the gas at a large distance from it. In this model, he assumed that the attraction of the gas to the surface, largely due to van der Waals forces, is determined by the position of the gas particle relative to the surface, and that the gas behaves as an ideal gas until condensation, where the gas exceeds its equilibrium vapor pressure. While the adsorption theory of Henry is more applicable at low pressure and the BET adsorption isotherm equation is more useful from 0.05 to 0.35 P/Po, the Polanyi potential theory has much more application at higher P/Po (~0.1–0.8). Overview Michael Polanyi Michael Polanyi, FRS (11 March 1891 – 22 February 1976) was a Hungarian polymath who made theoretical contributions to physical chemistry, economics, and philosophy. Polanyi was a well-known theoretical chemist who contributed to the chemistry field through three main areas of study: adsorption of gases on solids, x-ray structure analysis of the properties of solids, and the rate of chemical reactions. However, Polanyi was active in both theoretical and experimental studies within the chemistry field. Polanyi obtained a degree in medicine in 1913 as well as a Ph.D. in physical chemistry in 1917 from the University of Budapest. Later in his life, he taught as a chemistry professor at the Kaiser Wilhelm Institute in Berlin as well as the University of Manchester in Manchester, England. History Proposed theory In 1914, Polanyi wrote his first paper on adsorption, in which he proposed a model for the adsorption of gas onto a solid surface. Afterwards, he published a fully developed paper in 1916, which included experimental verification by his students and other authors. During his research at the University of Budapest, his mentor, Professor Georg Bredig, sent his research findings to Albert Einstein. Einstein wrote back to Bredig stating: The papers of your M. Polanyi please me a lot. I have checked over the essentials in them and found them fundamentally correct. Polanyi later described this event by saying: Bang! I was a scientist. Polanyi and Einstein continued to write to each other on and off for the next 20 years. Criticism Polanyi's model of adsorption was met with much criticism for several decades after its publication. His simplistic model for determining adsorption was formed during the time of the discovery of Debye's fixed dipoles and Bohr's atomic model, as well as the developing theory of intermolecular forces and electrostatic forces by key figures in the chemistry world including W.H. Bragg, W.L. Bragg, and Willem Hendrik Keesom. Opponents of his model claimed that Polanyi's theory did not take into account these emerging theories. Criticism included that the model did not take into account the electrical interactions of the gas and the surface, and that the presence of other molecules would screen off the attraction of the gas to the surface. Polanyi's model was furthermore put under scrutiny following the experimental claims of Irving Langmuir from 1916 to 1918, research that would eventually win the Nobel Prize in 1932. However, Polanyi was not able to participate in many of these discussions because he served as a medical officer for the Austro-Hungarian army on the Serbian front during World War I. 
Polanyi wrote about this experience saying: I myself was protected for a while against any knowledge of these developments by serving as a medical officer in the Austro-Hungarian Army from August 1914 to October 1918, and by the subsequent revolutions and counter revolutions that lasted until the end of 1919. Members of less-well-informed circles elsewhere continued to be impressed for some time by the simplicity of my theory and its wide experimental verifications. Defense Polanyi described the “turning point” in the acceptance of his model of adsorption as occurring when Fritz Haber asked him to defend his theory in full at the Kaiser Wilhelm Institute for Physical Chemistry in Berlin, Germany. Many key players in the scientific world were present at this meeting, including Albert Einstein. After hearing Polanyi's full explanation of his model, Haber and Einstein claimed that Polanyi “had displayed a total disregard for the scientifically established structure of the matter”. Years later, Polanyi described his ordeal by concluding, Professionally, I survived the occasion only by the skin of my teeth. Polanyi continued to provide supporting evidence for the validity of his model for years after this meeting. Refutation Polanyi's 'deliverance' (as he described it) from these rejections and criticisms of his model occurred in 1930, when Fritz London proposed a new theory of cohesive forces founded on the quantum-mechanical theory of the polarization of electronic systems. Polanyi wrote to London asking, “Are these forces subject to screening by intervening molecules? Would a solid acting by these forces possess a spatially fixed adsorption potential?” After computational analysis, a joint publication was made by Polanyi and London claiming that the adsorptive forces behaved similarly to the model that Polanyi had proposed. Further research Polanyi's theory is of historical significance; his work has been used as a foundation for other models, such as the theory of volume filling of micropores (TVFM) and the Dubinin–Radushkevich theory. Other research has been performed loosely involving the potential theory of Polanyi, such as the capillary condensation phenomenon discovered by Zsigmondy. Unlike Polanyi's theory, which involves a flat surface, Zsigmondy's research involves porous structures such as silica materials. His research proved that condensation of vapors can occur in narrow pores below the standard saturated vapor pressure. Theory Polanyi potential adsorption theory The Polanyi potential adsorption theory is based on the assumption that the molecules near a surface move according to a potential, similar to that of gravity or electric fields. This model is applicable in the case of gases at a surface at constant temperature. Gas molecules move closer to that surface when the pressure is higher than the equilibrium vapor pressure. The change in potential relative to the distance from the surface can be calculated using the formula for the difference of the chemical potential, where is the chemical potential, is the molar entropy, is the molar volume, and is the molar internal energy. At equilibrium, the chemical potential of a gas at a distance from a surface, , is equal to the chemical potential of the gas at an infinitely large distance from the surface, . As a result, the integration from an infinitely far distance to a distance r from the surface leads to where is the partial pressure at distance r and is the partial pressure at infinite distance from the surface. 
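The mathematical expressions referred to in this passage appear to have been lost from the text. A minimal reconstruction in standard textbook notation (the symbols below are assumptions of mine and may not match the original article) would read:

```latex
% Differential of the chemical potential:
d\mu = -S_m\,dT + V_m\,dP
% At equilibrium the total potential (chemical plus adsorption) is uniform, so the
% adsorption potential at distance r equals the isothermal compression work from the
% bulk partial pressure p_\infty to the local partial pressure p_r:
\varepsilon(r) = \int_{p_\infty}^{p_r} V_m\,dP
```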
Since the temperature remains constant, the difference in chemical potential formula can be integrated over pressures and By setting the , the equation can be simplified to Using the ideal gas law, , the following formula is obtained Since gas condenses into a liquid on a surface when the pressure of the gas exceeds the equilibrium vapor pressure, , we can assume a liquid film forms over the surface of thickness, . The energy at is Considering that the partial pressure of the gases relates to the concentration, the adsorption potential, , can be calculated as where is the saturated concentration of adsorbate and is the equilibrium concentration of the adsorbate. Theories based on Polanyi adsorption theory The potential theory underwent many refinements and changes throughout the years since its first report. One major development of note based on Polanyi's theory was the Dubinin theories, the Dubinin–Radushkevich and Dubinin–Astakhov equations. Using the adsorption potential, the degree of filling of the adsorption space, , can be calculated as where is the value of adsorption at temperature T and equilibrium pressure p, is the maximum value of adsorption, is the characteristic energy of adsorption in kJ/mol, is the loss in Gibbs free energy in adsorption equal to , and is the fitting coefficient. The Dubinin–Radushkevich equation, where is equal to 2, and the optimized Dubinin–Astakhov equation, where is fit to experimental data, can be simplified to Other studies have used the Dubinin–Astakhov equation in a similar form of , where is the equilibrium adsorbed concentration of adsorbate in mg/g, is the maximum adsorbed concentration of adsorbate in mg/g, is the effective adsorption potential, equal to , is the equilibrium concentration of adsorbate in the solution phase in mg/L, and is the adsorbate solubility in water in mg/L. The characteristic energy of adsorption can be related to a characteristic energy of adsorption for a standard vapor on the same surface, , through the use of an affinity coefficient, The affinity coefficient is a ratio of the properties of the sample and standard vapors where and are the polarizabilities of the sample and standard vapors, respectively. Many studies have been performed to determine optimal fitting coefficients, , and affinity coefficients, , to best describe the adsorption of gases and vapors onto solids. As a result, the Dubinin–Astakhov equation remains in use in adsorption studies due to the accuracy it can obtain when fitted with experimental results. Dubinin–Astakhov parameters for vapors and gases Application In many modern studies, the Polanyi theory is widely used in the study of activated carbons, or carbon black. The theory has been successfully used to model a variety of scenarios such as gas adsorption on activated carbon and the adsorption process of nonionic polycyclic aromatic hydrocarbons. Later on, experiments also showed that it can model ionic polycyclic aromatic hydrocarbons such as phenols and anilines. More recently, the Polanyi adsorption isotherm has been used to model the adsorption of carbon nanoparticles. Characterization of carbon nanoparticles Historically, the theory was used to model nonuniform adsorbates and multi-component solutes. For certain pairs of adsorbates and adsorbents, the mathematical parameters of the Polanyi theory can be related to the physicochemical properties of both adsorbents and adsorbates. The theory has been used to model the adsorption of carbon nanotubes and carbon nanoparticles. 
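A minimal numerical sketch of the adsorption potential and the Dubinin–Astakhov degree of filling described above; all numbers are illustrative assumptions, not values taken from the article or from the studies discussed below.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def adsorption_potential(T, C_s, C_e):
    """Polanyi adsorption potential epsilon = R*T*ln(C_s / C_e), in J/mol."""
    return R * T * math.log(C_s / C_e)

def dubinin_astakhov(eps, E, b):
    """Degree of filling theta = exp[-(eps/E)^b]; b = 2 recovers Dubinin-Radushkevich."""
    return math.exp(-((eps / E) ** b))

eps = adsorption_potential(T=298.0, C_s=100.0, C_e=5.0)   # assumed concentrations, mg/L
theta = dubinin_astakhov(eps, E=10_000.0, b=2)            # assumed characteristic energy, J/mol
print(f"epsilon = {eps / 1000:.2f} kJ/mol, theta = {theta:.2f}")
```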
In the study done by Yang and Xing, the theory was shown to fit the adsorption isotherms better than the Langmuir, Freundlich, and partition models. The experiment studied the adsorption of organic molecules on carbon nanoparticles and carbon nanotubes. According to the Polanyi theory, the surface defect curvatures of carbon nanoparticles could affect their adsorption. Flat surfaces on the particles will allow more surface atoms to approach adsorbing organic molecules, which will increase the potential, leading to stronger interactions. The theory has been beneficial in trying to understand the adsorption mechanisms of organic compounds on carbon nanoparticles and in estimating the adsorption capacity and affinity. Using this theory, researchers hope to be able to design carbon nanoparticles for specific needs, such as using them as sorbents in environmental studies. Adsorption from different systems In one of the earlier studies, conducted by Manes and Hofer, the Polanyi theory was used to characterize liquid-phase adsorption isotherms at various concentrations on activated carbon using a wide range of organic solvents. The Polanyi theory was shown to be a good fit for these various systems. Because of the results, the study introduced the possibility of predicting isotherms for similar systems using minimal data. However, the limitation is that the adsorption isotherms for a large variety of solvents can only be fitted over a limited range. The curve was not able to fit the data in the high-capacity range. The study also concluded that there were a few anomalies in the results. The adsorption from carbon tetrachloride, cyclohexane, and carbon disulfide onto activated carbon could not be fitted well to the curve, and this remains to be explained. The researchers who conducted the experiment speculate that steric effects of carbon tetrachloride and cyclohexane may have played a role. Similar studies have been done with a variety of systems, such as organic liquids from water solutions and organic solids from water solutions. Competitive adsorption Since a variety of systems have been investigated, a study was done to investigate the individual adsorption of the components of a mixed solution. This phenomenon is also called competitive adsorption because solutes tend to compete for the same adsorption sites. In an experiment conducted by Rosene and Manes, the competitive adsorption of glucose, urea, benzoic acid, phthalide, and p-nitrophenol was examined. Using the Polanyi adsorption model, they were able to calculate the relative adsorption of each compound onto the surface of activated carbon. See also Adsorption Carbon nanotubes Activated carbon Freundlich adsorption BET adsorption theory References Surface science
Potential theory of Polanyi
Physics,Chemistry,Materials_science
2,812
16,781,783
https://en.wikipedia.org/wiki/HD%20100777%20b
HD 100777 b, formally named Laligurans, is an extrasolar planet located approximately 172 light-years away in the constellation of Leo, orbiting the star HD 100777. It has a minimum mass of about 1.17 times that of Jupiter and takes about 384 days to orbit its star. It has a semi-major axis of 1.03 AU and a moderately eccentric orbit around its star. The orbital velocity is 29.3 km/s. Dominique Naef discovered this planet in March 2007 using the HARPS spectrograph located in Chile. See also HD 190647 b Pipitea (planet) References External links Exoplanets discovered in 2007 Giant planets Leo (constellation) Exoplanets detected by radial velocity Exoplanets with proper names
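The quoted orbital velocity can be checked against the quoted period and semi-major axis; the sketch below assumes a circular orbit for simplicity, although the actual orbit is moderately eccentric.

```python
import math

a_m = 1.03 * 1.496e11   # semi-major axis converted to metres
P_s = 384 * 86400       # orbital period in seconds
v = 2 * math.pi * a_m / P_s
print(f"Mean orbital speed: {v / 1e3:.1f} km/s")   # ~29 km/s, close to the quoted 29.3 km/s
```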
HD 100777 b
Astronomy
164
31,938,922
https://en.wikipedia.org/wiki/Scytonemin
Scytonemin is a secondary metabolite and an extracellular matrix (sheath) pigment synthesized by many strains of cyanobacteria, including Nostoc, Scytonema, Calothrix, Lyngbya, Rivularia, Chlorogloeopsis, and Hyella. Scytonemin-synthesizing cyanobacteria often inhabit highly insolated terrestrial, freshwater and coastal environments such as deserts, semideserts, rocks, cliffs, marine intertidal flats, and hot springs. The pigment was originally discovered in 1849 by Swiss botanist Carl Nägeli, although the structure remained unsolved until 1993. It is an aromatic indole alkaloid built from two identical condensation products of tryptophanyl- and tyrosyl-derived subunits linked through a carbon-carbon bond. Depending on the redox conditions it can exist in two inter-convertible forms: a more common oxidized yellow-brown form which is insoluble in water and only slightly soluble in organic solvents, such as pyridine, and a reduced form with bright red color that is more soluble in organic solvents. Scytonemin absorbs very strongly and very broadly across the UV-C-UV-B-UV-A-violet-blue spectral region, with an in vivo maximum absorption at 370 nm and an in vitro maximum absorption at 386 and 252 nm, and with smaller peaks at 212, 278 and 300 nm. It is believed that scytonemin acts as a highly efficient protective biomolecule (sunscreen) that filters out damaging high frequency UV rays while at the same time allowing the transmittance of wavelengths necessary for photosynthesis. Its biosynthesis in cyanobacteria is mostly triggered by exposure to UV-A and UV-B wavelengths. Recently, Couradeau and coworkers found that cyanobacterial soil crusts warm the soil surface by as much as 10 °C through the production and accumulation of scytonemin pigments. This effect is due to the dissipation of the absorbed photons by the scytonemin molecules into heat. Biosynthesis The biosynthesis in Lyngbya aestuarii was recently explored by Balskus, Case, and Walsh. It proceeds by the conversion of L-tryptophan to 3-indole pyruvic acid, followed by coupling to p-hydroxyphenylpyruvic acid. Cyclization of the resultant β-ketoacid yields a tricyclic ketone. Oxidation and dimerization yields the completed natural product. Three scytonemin biosynthetic enzymes are necessary, denoted as ScyA-C. References Biological pigments Secondary metabolites Cyanobacteria
Scytonemin
Chemistry,Biology
588
4,585,564
https://en.wikipedia.org/wiki/Monoidal%20monad
In category theory, a branch of mathematics, a monoidal monad is a monad on a monoidal category such that the functor is a lax monoidal functor and the natural transformations and are monoidal natural transformations. In other words, is equipped with coherence maps and satisfying certain properties (again: they are lax monoidal), and the unit and multiplication are monoidal natural transformations. By monoidality of , the morphisms and are necessarily equal. All of the above can be compressed into the statement that a monoidal monad is a monad in the 2-category of monoidal categories, lax monoidal functors, and monoidal natural transformations. Opmonoidal monads Opmonoidal monads have been studied under various names. Ieke Moerdijk introduced them as "Hopf monads", while in works of Bruguières and Virelizier they are called "bimonads", by analogy to "bialgebra", reserving the term "Hopf monad" for opmonoidal monads with an antipode, in analogy to "Hopf algebras". An opmonoidal monad is a monad in the 2-category of monoidal categories, oplax monoidal functors and monoidal natural transformations. That means a monad on a monoidal category together with coherence maps and satisfying three axioms that make an opmonoidal functor, and four more axioms that make the unit and the multiplication into opmonoidal natural transformations. Alternatively, an opmonoidal monad is a monad on a monoidal category such that the category of Eilenberg-Moore algebras has a monoidal structure for which the forgetful functor is strong monoidal. An easy example for the monoidal category of vector spaces is the monad , where is a bialgebra. The multiplication and unit of define the multiplication and unit of the monad, while the comultiplication and counit of give rise to the opmonoidal structure. The algebras of this monad are right -modules, which one may tensor in the same way as their underlying vector spaces. Properties The Kleisli category of a monoidal monad has a canonical monoidal structure, induced by the monoidal structure of the monad, and such that the free functor is strong monoidal. The canonical adjunction between and the Kleisli category is a monoidal adjunction with respect to this monoidal structure, this means that the 2-category has Kleisli objects for monads. The 2-category of monads in is the 2-category of monoidal monads and it is isomorphic to the 2-category of monoidales (or pseudomonoids) in the category of monads , (lax) monoidal arrows between them and monoidal cells between them. The Eilenberg-Moore category of an opmonoidal monad has a canonical monoidal structure such that the forgetful functor is strong monoidal. Thus, the 2-category has Eilenberg-Moore objects for monads. The 2-category of monads in is the 2-category of monoidal monads and it is isomorphic to the 2-category of monoidales (or pseudomonoids) in the category of monads opmonoidal arrows between them and opmonoidal cells between them. Examples The following monads on the category of sets, with its cartesian monoidal structure, are monoidal monads: The power set monad . Indeed, there is a function , sending a pair of subsets to the subset . This function is natural in X and Y. Together with the unique function as well as the fact that are monoidal natural transformations, is established as a monoidal monad. The probability distribution (Giry) monad. 
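A concrete sketch of the power set example in Python, with finite sets modelled as frozensets; the coherence map is the one described above, sending a pair of subsets to their set of pairs. This only illustrates the structure maps on examples, it is not a proof of the coherence axioms.

```python
from itertools import product

def unit(x):
    """eta_X : X -> P(X), x |-> {x}."""
    return frozenset([x])

def mult(ss):
    """mu_X : P(P(X)) -> P(X), union of a set of sets."""
    return frozenset().union(*ss)

def coherence(a, b):
    """P(X) x P(Y) -> P(X x Y), (A, B) |-> {(x, y) : x in A, y in B}."""
    return frozenset(product(a, b))

# The unit behaves as a monoidal natural transformation on an example:
assert coherence(unit(1), unit("a")) == unit((1, "a"))
```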
The following monads on the category of sets, with its cartesian monoidal structure, are not monoidal monads: if is a monoid, then is a monad, but in general there is no reason to expect a monoidal structure on it (unless is commutative). References Monoidal categories
Monoidal monad
Mathematics
871
51,851,681
https://en.wikipedia.org/wiki/DU%20Crucis
DU Crucis is a red supergiant and slow irregular variable star in the open cluster NGC 4755, which is also known as the Kappa (κ) Crucis Cluster or Jewel Box Cluster. Location DU Crucis is one of the brighter members of the Jewel Box Cluster and the brightest red supergiant, strongly contrasting with the other bright members which are blue supergiants. It is part of the central bar of the prominent letter A-shaped asterism at the centre of the cluster. The cluster is part of the larger Centaurus OB1 association and lies about 8,500 light years away. The cluster is just to the south-east of β Crucis, the lefthand star of the Southern Cross. Properties DU Crucis is an M2 intermediate luminosity supergiant (luminosity class Iab). Despite its low temperature, it is 46,600 times the luminosity of the sun, due to its very large size. The κ Crucis cluster has a calculated age of 11.2 million years. Variability Photometry from the Hipparcos satellite mission showed that DU Crucis varies in brightness with an amplitude of 0.44 magnitudes. No periodicity could be detected in the variations and it was classified as a slow irregular variable of type Lc, indicating a supergiant. Notes References External links Crux M-type supergiants 062918 CD-59 4459 J12534132-6020578 Slow irregular variables IRAS catalogue objects Crucis, DU
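The statement that the high luminosity is due to the star's very large size can be illustrated with the Stefan–Boltzmann law; the effective temperature below is an assumed typical value for an M2 supergiant, not a figure taken from the article.

```python
import math

sigma = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26    # solar luminosity, W
R_sun = 6.957e8     # solar radius, m

L = 46600 * L_sun   # quoted luminosity of DU Crucis
T = 3600.0          # assumed effective temperature for an M2 supergiant, K

# L = 4*pi*R^2*sigma*T^4, solved for the radius:
R = math.sqrt(L / (4 * math.pi * sigma * T**4))
print(f"Estimated radius: {R / R_sun:.0f} solar radii")   # several hundred solar radii
```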
DU Crucis
Astronomy
318
26,417,040
https://en.wikipedia.org/wiki/U-Prove
U-Prove is a free and open-source technology and accompanying software development kit for user-centric identity management. The underlying cryptographic protocols were designed by Dr. Stefan Brands and further developed by Credentica and, subsequently, Microsoft. The technology was developed to allow internet users to disclose only the minimum amount of personal data when making electronic transactions as a way to reduce the likelihood of privacy violations. Overview U-Prove enables application developers to reconcile seemingly conflicting security and privacy objectives (including anonymity), and allows for digital identity claims to be efficiently tied to the use of tamper-resistant devices such as smart cards. Application areas of particular interest include cross-domain enterprise identity and access management, e-government SSO and data sharing, electronic health records, anonymous electronic voting, policy-based digital rights management, social networking data portability, and electronic payments. In 2008, Microsoft committed to opening up the U-Prove technology. As the first step, in March 2010 the company released a cryptographic specification and open-source API implementation code for part of the U-Prove technology as a Community Technology Preview under Microsoft's Open Specification Promise. Since then, several extensions have been released under the same terms and the technology has been tested in real-life applications. In 2010, the International Association of Privacy Professionals (IAPP) honored U-Prove with the 2010 Privacy Innovation Award for Technology. Microsoft also won the European Identity Award in the Best Innovation category for U-Prove at the European Identity Conference 2010. The U-Prove Crypto SDK for C# is licensed under Apache License 2.0 and the source code is available on GitHub. Microsoft also provides a JavaScript SDK that implements the client-side of the U-Prove Cryptographic Specification. See also Blind signature Zero-knowledge proof Identity metasystem References Further reading External links U-Prove on Credentica.com U-Prove on Microsoft website Public-key cryptography Microsoft application programming interfaces Microsoft free software Software using the Apache license Computer access control frameworks
U-Prove
Technology
419
14,910,292
https://en.wikipedia.org/wiki/Seashell%20resonance
Seashell resonance refers to a popular folk myth that the sound of the ocean may be heard through seashells, particularly conch shells. This effect is similarly observed in any resonant cavity, such as an empty cup or a hand clasped to the ear. The resonant sounds are created from ambient noise in the surrounding environment by the processes of reverberation and (acoustic) amplification within the cavity of the shell. The ocean-like quality of seashell resonance is due in part to the similarity between airflow and ocean movement sounds. The association of seashells with the ocean likely plays a further role. Resonators attenuate or emphasize some ambient noise frequencies in the environment, including airflow within the resonator and sound originating from the body, such as bloodflow and muscle movement. These sounds are normally discarded by the auditory cortex; however, they become more obvious when louder external sounds are filtered out. This occlusion effect occurs with seashells and other resonators such as circumaural headphones, raising the acoustic impedance to external sounds. References External links How Stuff Works Mollusc shells Acoustics
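One way to picture how a cavity emphasises certain frequencies of the ambient noise is the idealized Helmholtz resonator formula; a seashell is of course a far more irregular cavity, and the dimensions below are made-up illustrative values.

```python
import math

c = 343.0     # speed of sound in air, m/s
V = 150e-6    # cavity volume, m^3 (assumed)
A = 3e-4      # opening area, m^2 (assumed)
L = 0.02      # effective neck length, m (assumed)

# Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L))
f = (c / (2 * math.pi)) * math.sqrt(A / (V * L))
print(f"Resonant frequency: {f:.0f} Hz")   # a few hundred hertz for these dimensions
```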
Seashell resonance
Physics
238
51,926,519
https://en.wikipedia.org/wiki/Lolitrem%20B
Lolitrem B is one of many toxins produced by a fungus called Epichloë festucae var. lolii, which grows in Lolium perenne (perennial ryegrass). The fungus is symbiotic with the ryegrass; it does not harm the plant, and the toxins it produces kill insects that feed on ryegrass. Lolitrem B is one of these toxins, but it is also harmful to mammals. The shoots and flowers of infected ryegrass have especially high concentrations of lolitrem B, and when livestock eat too much of them, they get perennial ryegrass staggers. At low doses the animals have tremors, at higher doses they stagger, and at still higher doses the animals become paralyzed and die. The blood pressure of the animals also goes up. The effect of lolitrem B comes on slowly and fades out slowly, as it is stored in fat after the ryegrass is eaten. The condition is especially common in New Zealand and Australia, and plant breeders there have been trying to develop strains of fungus that produce toxins only harmful to pests, and not to mammals. Lolitrem B affects a kind of ion channel called BK channels. These channels normally open temporarily to allow neurons and other electrically sensitive cells, like some heart cells, to "reset" after they fire; lolitrem B blocks them, preventing the neuron or heart cell from firing again. This affects nerve and heart function. The channel is also involved in blood vessel relaxation, and blocking the channel causes blood vessels to constrict, raising blood pressure. Etymology The Lolitrem B toxin derives the first part of its name ('Loli') from the host of the fungus (Lolium perenne), the middle part ('trem') from the tremors the toxin is known to cause, and the last part of its name ('B') as part of a way to distinguish between different lolitrems, based on their difference in chemical structure (see 'Chemistry'). Sources Lolitrem B is found in perennial ryegrass that has been infected with the fungus E. f. lolii (formerly Neotyphodium lolii). This fungus is an endophyte; for part of its lifecycle it lives inside plants, growing between the plant cells; it is most prevalent in the ryegrass stem. The fungus produces lolitrem B, one of several mycotoxins that kill pests but which also can be neurotoxins for mammals. Toxicity When animals eat ryegrass stems infected with E. f. lolii they get a condition called perennial ryegrass staggers; in cases of mild poisoning the animals get tremors, and in severe poisoning they stagger and collapse. In horses, tremors of the eyeball muscles are seen, which are more severe during eating and exercise. Lolitrem B can also increase the heart rate, blood pressure, and respiration rate, and disrupt the digestion process. Lolitrems distinguish themselves among tremorgenic neurotoxins because they induce a long-lasting effect on motor function and heart rate. The tremors can last for hours and at high concentrations they can cause death. In animals, lolitrem B more often causes death related to unfortunate accidents such as falling in a pond. The neurotoxic effects can be completely reversed. The threshold for toxicity varies between species of animals: for sheep a threshold value of 1.8–2.0 mg/kg was found, and for cattle 1.55 mg/kg. Measuring the lolitrem B concentration in fat tissue can be used to estimate the amount of lolitrem B consumed, and is used to determine the cause of death for cattle presenting with neurological symptoms. Lolitrem B likely acts synergistically with ergotamine to increase smooth muscle contraction. Epidemiology E. f. 
lolii infects ryegrass worldwide, but cases of perennial ryegrass staggers are rare outside of Australia and New Zealand; the reasons for this are unclear but may have to do with the purposeful selection of endophyte-infected ryegrass by plant breeders, who prize its resistance to pests, which are more prevalent in Australia and New Zealand than elsewhere, and the practice of monoculture by farmers in those countries. Prevention Plant breeders have been working with mycologists in Australia and New Zealand to develop strains of fungus that produce mycotoxins that are toxic to pests but not to mammals. Until those become commercially established, the best prevention is avoiding grazing livestock on ryegrass when the stems are emerging and while the plant is flowering (concentrations are highest in the mature inflorescence and in the base of the plant), and avoiding overgrazing; once the exposure to lolitrem B ends the symptoms gradually decrease. Pharmacology Lolitrem B is rapidly eliminated from serum and has a half-life of 14 minutes. Lolitrem B is not very soluble, and is generally stored in fat after ingestion and slowly released; this is likely why its effects come on slowly and linger after ingestion has stopped. The more that is ingested, the more is stored in fat. Lolitrem B targets the large conductance calcium-activated potassium channels (BK channels) and in particular the α subunit (hSlo) of the BK channels. These channels open temporarily to allow neurons to "reset" after they fire; lolitrem B blocks them, preventing the neuron from firing again after it depolarizes, which at low doses leads to tremors and at high doses to paralysis and death. The binding site of lolitrem B is likely to be located in this α subunit. When lolitrem B is added, the potassium current is quickly abolished, and this inhibition cannot be reversed by washout (such reversal is possible for paxilline). However, over time lolitrem B slowly dissociates from the binding site. The inhibition by lolitrem B is calcium concentration-dependent. The concentration with half of the maximal inhibition (IC50) for hSlo was found to be 3.7 ± 0.4 nM. Lolitrem B is a more potent neurotoxin in vitro compared to paxilline. Lolitrem B preferentially blocks the open configuration of BK channels, as under high calcium concentrations promoting the opening of BK channels, the apparent affinity increases three-fold. The inhibition by lolitrem B and its affinity differ with the calcium concentration. Lolitrem B has the highest affinity for BK channels when there is a high probability of an open conformation, that is, when calcium binds to the high-affinity sites. The inhibition occurs when the channels are in an open state. BK channels oppose vasoconstriction in blood vessels, resulting in vasorelaxation. Blocking the channels leads to vasoconstriction and to an increase in blood pressure. The BK channel α subunit is expressed in muscle and nerve tissue and the BK channels are abundant in the brain. The BK channels modulate neurotransmitter release, the form of the action potential and repetitive firing. Inhibition of the channels can explain why there would be an increased release of excitatory neurotransmitters resulting in tremors, ataxia, hypersensitivity, increased smooth muscle contraction in the colon and an increased heart rate. Chemistry Lolitrem B is the most potent member of the lolitrem family. It possesses an indole-diterpene unit as well as a reactive epoxide group. 
It structurally resembles paxilline, which is a related tremor inducer. There are multiple lolitrems, which are labelled by a letter. The difference between them is the position and number of aryl and hydroxyl substituents plus the absence or presence of an I ring. The I ring seems to be necessary for prolonged tremors to occur. Intermediate metabolites such as terpendoles and paspaline can become lolitrems by addition of two rings (A and B) at the C20-C21 position to the indole moiety of the molecule. Biosynthesis The production of lolitrems – including B – requires 10 different genes on a locus (the locus), which is organized in three clusters. These clusters are separated by large AT-rich sequences. Cluster 1 contains the genes ltmG, ltmK and ltmM. Cluster 2 contains ltmP, ltmF, ltmB, ltmQ and ltmC, and cluster 3 contains ltmE and ltmJ. Four genes from cluster 2 are orthologues of functionally characterized paxilline genes, meaning that the genes show homologous sequences. The genes in cluster 3 appear to be unique to the Epichloë genus. Much of this research into lolitrem synthesis has been performed by the Young group, including Young et al 2005, 2006, and 2009. Young et al 2009 provides predictions of variation in indole-diterpene synthesis ability between Epichloë spp. See also Penitrem A - a structurally related fungal neurotoxin found on ryegrass References Neurotoxins Ion channel toxins
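The pharmacokinetic and channel-block figures quoted in the Pharmacology section above can be put into rough quantitative form. The sketch below assumes simple one-site binding and first-order clearance, which are illustrative modelling assumptions rather than claims from the cited studies.

```python
IC50_nM = 3.7          # quoted half-maximal inhibitory concentration for hSlo
half_life_min = 14.0   # quoted serum half-life of lolitrem B

def fraction_blocked(conc_nM):
    """Fraction of BK current inhibited under a one-site binding assumption."""
    return conc_nM / (conc_nM + IC50_nM)

def serum_fraction_remaining(t_min):
    """Fraction of an initial serum concentration left after t minutes (first-order decay)."""
    return 0.5 ** (t_min / half_life_min)

print(fraction_blocked(3.7))            # 0.5, by definition of the IC50
print(serum_fraction_remaining(60.0))   # ~0.05 after one hour
```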
Lolitrem B
Chemistry
1,898
14,816,883
https://en.wikipedia.org/wiki/DIDO1
Death-inducer obliterator 1 is a protein that in humans is encoded by the DIDO1 gene. Function Apoptosis, a major form of cell death, is an efficient mechanism for eliminating unwanted cells and is of central importance for development and homeostasis in metazoan animals. In mice, the death inducer-obliterator-1 gene is upregulated by apoptotic signals and encodes a cytoplasmic protein that translocates to the nucleus upon apoptotic signal activation. When overexpressed, the mouse protein induced apoptosis in cell lines growing in vitro. This gene is similar to the mouse gene and therefore is thought to be involved in apoptosis. Alternatively spliced transcripts have been found for this gene, encoding multiple isoforms. References Further reading External links Transcription factors
DIDO1
Chemistry,Biology
176
12,024
https://en.wikipedia.org/wiki/General%20relativity
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations. Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data. Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic. Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be stellar black holes and supermassive black holes. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the base of cosmological models of an expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories. History Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. 
In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913. The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life. During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests. General relativity has acquired a reputation as a theory of extraordinary beauty. 
Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency. In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated." From classical mechanics to general relativity General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity. Geometry of Newtonian gravity At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime. Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. 
According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration. Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass. Relativistic generalization As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena. With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event , there is a set of events that can, in principle, either influence or be influenced by via signals or interactions that do not need to travel faster than light (such as event in the image), and a set of events for which such an influence is impossible (such as event in the image). These sets are observer-independent. 
In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry. Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry. A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity. The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish). Einstein's equations Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. 
Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations: On the left-hand side is the Einstein tensor, , which is symmetric and a specific divergence-free combination of the Ricci tensor and the metric. In particular, is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as On the right-hand side, is a constant and is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be , where is the Newtonian constant of gravitation and the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is: where is a scalar parameter of motion (e.g. the proper time), and are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) which is symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices and . The quantity on the left-hand-side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation. Total force in general relativity In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by A conservative total force can then be obtained as its negative gradient where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect. 
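The displayed equations in this section appear to have been lost from the text; the standard textbook forms that the surrounding description seems to refer to are reproduced below as a hedged reconstruction (the notation is mine, not necessarily the original article's).

```latex
% Einstein field equations; G_{\mu\nu} is the Einstein tensor, R the curvature scalar:
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} = \kappa\,T_{\mu\nu},
\qquad \kappa = \frac{8\pi G}{c^{4}},
\qquad R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}.

% Vacuum equations (vanishing energy-momentum tensor):
R_{\mu\nu} = 0.

% Geodesic equation for a freely falling particle, with s a scalar parameter of motion:
\frac{d^{2}x^{\mu}}{ds^{2}}
  + \Gamma^{\mu}{}_{\alpha\beta}\,\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds} = 0.

% Effective potential and total force for a mass m with angular momentum L
% orbiting a central mass M:
V(r) = -\frac{GMm}{r} + \frac{L^{2}}{2mr^{2}} - \frac{GML^{2}}{mc^{2}r^{3}},
\qquad
F = -\frac{dV}{dr}
  = -\frac{GMm}{r^{2}} + \frac{L^{2}}{mr^{3}} - \frac{3GML^{2}}{mc^{2}r^{4}}.
```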
Alternatives to general relativity There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory. Definition and basic applications The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building. Definition and basic properties General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve. While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation. As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance. Model-building The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present. Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. 
The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture). Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories. Consequences of Einstein's theory General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication. Gravitational time dilation and frequency shift Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation. Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid. 
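As an illustration of the gravitational time dilation just described, the following sketch (not part of the article) estimates the weak-field rate difference between a clock on Earth's surface and one at GPS orbital radius; the gravitational parameter, radii, and the resulting figure are approximate assumed values.

```python
# Rough sketch (not from the article): weak-field gravitational time dilation between
# a clock on Earth's surface and one at GPS orbital radius. Values are approximate.
GM_earth = 3.986e14      # Earth's gravitational parameter, m^3 s^-2
c = 2.998e8              # speed of light, m/s
R_earth = 6.371e6        # mean Earth radius, m
r_gps = 2.657e7          # GPS orbital radius (about 20,200 km altitude), m

rate_difference = GM_earth / c**2 * (1.0 / R_earth - 1.0 / r_gps)
microseconds_per_day = rate_difference * 86400 * 1e6
# Gravitational effect alone; the satellites' orbital speed partly offsets it.
print(f"GPS clock gains about {microseconds_per_day:.0f} microseconds per day")
```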
Light deflection and gravitational time delay General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun. This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity. Closely related to light deflection is the Shapiro Time Delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space. Gravitational waves Predicted in 1916 by Albert Einstein, there are gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. These are one of several analogies between weak-field gravity and electromagnetism in that, they are analogous to electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed. Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models. Orbital effects and the relativity of direction General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction. 
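Returning to the light-deflection result discussed above, a quick numerical check can be made with the standard general-relativistic formula alpha = 4GM/(c^2 b); this is an illustrative sketch with approximate solar values, not text from the article.

```python
import math

# Illustrative sketch: deflection of light grazing the Sun, using the standard
# general-relativistic result alpha = 4GM/(c^2 b). Constants are approximate.
GM_sun = 1.327e20        # Sun's gravitational parameter, m^3 s^-2
c = 2.998e8              # speed of light, m/s
b = 6.963e8              # impact parameter, taken as the solar radius, m

alpha_rad = 4 * GM_sun / (c**2 * b)
alpha_arcsec = alpha_rad * 180 / math.pi * 3600
print(f"Deflection at the solar limb ≈ {alpha_arcsec:.2f} arcseconds")   # ~1.75"
# The purely "Newtonian" derivation mentioned above gives half this value.
```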
Precession of apsides

In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations. The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude. In general relativity the perihelion shift \sigma, expressed in radians per revolution, is approximately given by

\sigma = \frac{24 \pi^3 a^2}{T^2 c^2 (1 - e^2)}

where:
a is the semi-major axis
T is the orbital period
c is the speed of light in vacuum
e is the orbital eccentricity

Orbital decay

According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation. The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations.

Geodetic precession and frame-dragging

Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%. Near a rotating mass, there are gravitomagnetic or frame-dragging effects.
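Returning to the perihelion-shift formula above, a small numerical sketch (not from the article) evaluates it for Mercury with approximate orbital elements; the roughly 43 arcseconds per century it yields is the classic figure quoted for Mercury.

```python
import math

# Numerical check of the perihelion-shift formula above for Mercury, using
# approximate orbital elements (illustrative, not text from the article).
a = 5.791e10             # semi-major axis, m
T = 87.969 * 86400       # orbital period, s
e = 0.2056               # orbital eccentricity
c = 2.998e8              # speed of light, m/s

sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))   # radians per revolution
revolutions_per_century = 100 * 365.25 * 86400 / T
arcsec_per_century = sigma * revolutions_per_century * 180 / math.pi * 3600
print(f"{arcsec_per_century:.0f} arcseconds per century")     # close to the observed ~43"
```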
A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. Also the Mars Global Surveyor probe around Mars has been used. Astrophysical applications Gravitational lensing The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed. Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies. Gravitational-wave astronomy Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10−9 to 10−6 hertz frequency range, which originate from binary supermassive blackholes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015. Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. Black holes and other compact objects Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. 
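As a rough illustration of the critical mass-to-radius ratio mentioned above, the following sketch (assumed constants, not text from the article) computes the Schwarzschild radius r_s = 2GM/c^2, the size to which a given mass would have to be compressed to form a black hole.

```python
# Rough sketch of the critical mass-to-radius ratio mentioned above: the
# Schwarzschild radius r_s = 2GM/c^2. Constants and masses are approximate.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius to which a mass must be compressed to form a black hole."""
    return 2 * G * mass_kg / c**2

print(f"Sun:   {schwarzschild_radius(1.989e30) / 1000:.1f} km")    # about 3 km
print(f"Earth: {schwarzschild_radius(5.972e24) * 1000:.1f} mm")    # about 9 mm
```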
Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures. Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory. Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry.

Cosmology

The current models of cosmology are based on Einstein's field equations, which include the cosmological constant \Lambda since it has important influence on the large-scale dynamics of the cosmos,

R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

where g_{\mu\nu} is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
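A hedged illustration of one of the parameters referred to above: the critical density of a Friedmann–Lemaître–Robertson–Walker universe, rho_c = 3 H0^2 / (8 pi G). The Hubble constant used here is an assumed round value, not a figure from the article.

```python
import math

# Hedged illustration: the critical density of a Friedmann–Lemaître–Robertson–Walker
# universe, rho_c = 3 H0^2 / (8 pi G), one of the parameters referred to above.
# The Hubble constant is an assumed round value of 70 km/s/Mpc.
G = 6.674e-11                     # m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22         # Hubble constant converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"Critical density ≈ {rho_crit:.1e} kg/m^3")   # roughly 1e-26, a few protons per m^3
```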
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below).

Exotic solutions: time travel, warp drives

Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, which is an assumption beyond those of standard general relativity to prevent time travel. Some exact solutions in general relativity, such as the Alcubierre drive, present examples of warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability.

Advanced concepts

Asymptotic symmetries

The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group. In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present.
This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries. Causal structure and global geometry In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams. Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results. Horizons Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier. Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. 
Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple. Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below). There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation. Singularities Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well. Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. 
However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity. Evolution equations Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories. To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity. Global and quasi-local quantities The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy. Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture. Relationship with quantum theory If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question. 
Quantum field theory in curved spacetime Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes. Quantum gravity The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist. Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability"). One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–deWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps. 
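To give a sense of scale for the Hawking radiation mentioned above, here is a small sketch (assumed constants, not taken from the article) evaluating the standard semiclassical temperature formula T = hbar c^3 / (8 pi G M k_B) for a solar-mass black hole.

```python
import math

# Sketch of the semiclassical result mentioned above: the Hawking temperature of a
# Schwarzschild black hole, T = hbar * c^3 / (8 * pi * G * M * k_B). Constants approximate.
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.3807e-23     # Boltzmann constant, J/K

def hawking_temperature(mass_kg: float) -> float:
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"Solar-mass black hole: {hawking_temperature(1.989e30):.1e} K")   # about 6e-8 K
```

The tiny result illustrates why Hawking radiation from stellar-mass black holes is far below any observable threshold, while remaining central to black hole thermodynamics.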
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology. All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. Current status General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research. See also (warp drive) References Bibliography ; original paper in Russian: See also English translation at Einstein Papers Project See also English translation at Einstein Papers Project See also English translation at Einstein Papers Project Further reading Popular books Beginning undergraduate textbooks Advanced undergraduate textbooks Graduate textbooks Specialists' books Journal articles See also English translation at Einstein Papers Project External links Einstein Online  – Articles on a variety of aspects of relativistic physics for a general audience; hosted by the Max Planck Institute for Gravitational Physics GEO600 home page, the official website of the GEO600 project. LIGO Laboratory NCSA Spacetime Wrinkles – produced by the numerical relativity group at the NCSA, with an elementary introduction to general relativity (lecture by Leonard Susskind recorded 22 September 2008 at Stanford University). Series of lectures on General Relativity given in 2006 at the Institut Henri Poincaré (introductory/advanced). General Relativity Tutorials by John Baez. The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space Concepts in astronomy Albert Einstein 1915 in science Articles containing video clips
General relativity
Physics,Astronomy
12,128
12,910,645
https://en.wikipedia.org/wiki/Neuromedin%20B%20receptor
The neuromedin B receptor (NMBR), now known as BB1 is a G protein-coupled receptor whose endogenous ligand is neuromedin B. In humans, this protein is encoded by the NMBR gene. Neuromedin B receptor binds neuromedin B, a potent mitogen and growth factor for normal and neoplastic lung and for gastrointestinal epithelial tissue. References Further reading External links G protein-coupled receptors
Neuromedin B receptor
Chemistry
99
868,391
https://en.wikipedia.org/wiki/Hobbit%20%28computer%29
Hobbit (Russian: Хоббит) is a Soviet/Russian 8-bit home computer, based on the ZX Spectrum hardware architecture. Besides Sinclair BASIC it also featured CP/M, Forth or LOGO modes, with the Forth or LOGO operating environment residing in an on-board ROM chip. Overview Hobbit was invented by Dmitry Mikhailov (Russian: Дмитрий Михайлов) (all R&D) and Mikhail Osetinskii (Russian: Михаил Осетинский) (management) in Leningrad, Russia in the late 1980s. The original circuit layout was designed on a home-made computer (built in 1979 using ASMP of three KR580 chips - Soviet Intel 8080 clones), also created by Dmitry Mikhailov. The computer was manufactured by the joint venture InterCompex. Hobbit was marketed in the former Soviet Union as a low-cost personal computer solution for basic educational and office needs, in addition to its obvious use as a home computer. Schools would use it on the classrooms, interconnecting several machines and forming a 56K baud network. It was possible to use another Hobbit or a IBM PC compatible with a special Hobbit network adapter card by InterCompex as a master host on the network. The Hobbit was also briefly marketed in the U.K., targeted mainly at the existing ZX Spectrum fans wanting a more advanced computer compatible with the familiar architecture. It was mentioned on Your Sinclair September 1990 and January 1991 issues; Crash April 1992 issue, and on Sinclair User August and September 1992 issues, highlighting the available Forth language and CP/M operating system. Domestic models often did not include the TV output, the internal speaker or both. The AY8910 chip for the domestic models was sold separately as an external extension module, hanging off the same extension bus as the optional external disk drive. Another extension was the SME (Screen and Memory Extension) board. This featured 32 KB of cache memory, some of which could be dedicated to a video text buffer in CGA mode (only supported by drivers in the FORTH or the CP/M environments; no known programs using the Sinclair-based BASIC mode used this feature). SME worked at astonishing speed - one machine code command made an output of an entire display line. SME was capable of rendering several dozens of windows per second, and its capabilities were fully utilized only in the Forth environment. Technical details Source: Z80A at 3.5 MHz 64K RAM Disk drives: external 2 x 5.25" drives (up to 4 connectable) or internal 3.5" drive Connections: joystick (2 x Sinclair, 1 x Kempston), Centronics, RS-232, audio in/out (for cassette recorder), system bus extension 74-key keyboard (33 keys freely programmable) Video output: Composite video TV out, EGA monitor Operating system: built in disassembler, CP/M clone called "Beta", system language switchable between English and Russian References Computer-related introductions in 1990 ZX Spectrum clones Soviet Union–United Kingdom relations Z80-based home computers Soviet computer systems
Hobbit (computer)
Technology
674
60,999,584
https://en.wikipedia.org/wiki/Mestranol/hydroxyprogesterone%20acetate
Mestranol/hydroxyprogesterone acetate (ME/OHPA), sold under the brand name Hormolidin, is a combination medication of mestranol (ME), an estrogen, and hydroxyprogesterone acetate (OHPA), a progestin, which was reportedly used as a sequential combined birth control pill for women in the early 1970s. It was formulated as oral tablets and contained 16 tablets of 80 μg ME, 5 tablets of 80 μg ME and 100 mg OHPA, and 7 placebo tablets (28 tablets in total). The medication was manufactured by the pharmaceutical company Gador in Argentina. See also List of combined sex-hormonal preparations § Estrogens and progestogens References Abandoned drugs Combined estrogen–progestogen formulations
Mestranol/hydroxyprogesterone acetate
Chemistry
173
7,451,605
https://en.wikipedia.org/wiki/Critically%20endangered
An IUCN Red List critically endangered (CR or sometimes CE) species is one that has been categorized by the International Union for Conservation of Nature as facing an extremely high risk of extinction in the wild. As of December 2023, of the 157,190 species currently on the IUCN Red List, 9,760 of those are listed as critically endangered, with 1,302 being possibly extinct and 67 possibly extinct in the wild. The IUCN Red List provides the public with information regarding the conservation status of animal, fungi, and plant species. It divides various species into seven different categories of conservation that are based on habitat range, population size, habitat, threats, etc. Each category represents a different level of global extinction risk. Species that are considered to be critically endangered are placed within the "Threatened" category. As the IUCN Red List does not consider a species extinct until extensive targeted surveys have been conducted, species that are possibly extinct are still listed as critically endangered. IUCN maintains a list of "possibly extinct" and "possibly extinct in the wild" species, modelled on categories used by BirdLife International to categorize these taxa. Criteria To be defined as critically endangered in the Red List, a species must meet any of the following criteria (A–E) ("3G/10Y" signifies three generations or ten years—whichever is longer—over a maximum of 100 years; "MI" signifies Mature Individuals): A: Population Size Reduction The rate of reduction is measured either over a 10 year span or across three different generations within that species. The cause for this decline must also be known. If the reasons for population reduction no longer occur and can be reversed, the population needs to have been reduced by at least 90% If not, then the population needs to have been reduced by at least 80% B: Reduction Across a Geographic Range This reduction must occur over less than 100 km2 OR the area of occupancy is less than 10 km2. Severe habitat fragmentation or existing at just one location Decline in extent of occurrence, area of occupancy, area/extent/quality of habitat, number of locations/subpopulations, or amount of MI. Extreme fluctuations in extent of occurrence, area of occupancy, number of locations/subpopulations, or amount of MI. C: Population Decline The population must decline to less than 250 MI and either: A decline of 25% over 3G/10Y Extreme fluctuations, or over 90% of MI in a single subpopulation, or no more than 50 MI in any one subpopulation. D: Population Size Reduction The population size must be reduced to numbers of less than 50 MI. E: Probability of Extinction There must be at least a 50% probability of going extinct in the wild within over 3G/10Y Causes The current extinction crisis is witnessing extinction rates that are occurring at a faster rate than that of the natural extinction rate. It has largely been credited towards human impacts on climate change and the loss of biodiversity. This is along with natural forces that may create stress on the species or cause an animal population to become extinct. Currently the biggest reason for species extinction is human interaction resulting in habitat loss. Species rely on their habitat for the resources needed for their survival. If the habitat becomes destroyed, the population will see a decline in their numbers. Activities that cause loss of habitat include pollution, urbanization, and agriculture. 
Another reason for plants and animals to become endangered is due to the introduction of invasive species. Invasive species invade and exploit a new habitat for its natural resources as a method to outcompete the native organisms, eventually taking over the habitat. This can lead to either the native species' extinction or causing them to become endangered, which also eventually causes extinction. Plants and animals may also go extinct due to disease. The introduction of a disease into a new habitat can cause it to spread amongst the native species. Due to their lack of familiarity with the disease or little resistance, the native species can die off. References IUCN Red List Biota by conservation status
Critically endangered
Biology
823
29,917,965
https://en.wikipedia.org/wiki/Wrigley%20Trophy
The Wrigley Trophy is an award given for motorboats. It was awarded as early as 1912 with a $1,500 cash prize. In 1912 the award was disputed when James A. Pugh contested the win by J. Stuart Blackton. He argued that Baby Reliance II was allowed a late entry and had already missed two rounds of competition.

Winners
Cassandra (raceboat); George Griffith (1960)
Baby Reliance II, J. Stuart Blackton (1912)

References

Science and technology awards
Motorboat racing
Wrigley Trophy
Technology
105
21,557,464
https://en.wikipedia.org/wiki/Principal%20%28computer%20security%29
A principal in computer security is an entity that can be authenticated by a computer system or network. It is referred to as a security principal in Java and Microsoft literature. Principals can be individual people, computers, services, computational entities such as processes and threads, or any group of such things. They need to be identified and authenticated before they can be assigned rights and privileges over resources in the network. A principal typically has an associated identifier (such as a security identifier) that allows it to be referenced for identification or assignment of properties and permissions. A principal often becomes synonymous with the credentials used to act as that principal, such as a password or (for service principals) an access token or other secrets. References External links - Generic Security Service API Version 2. - WebDAV Current Principal Extension. - The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2. Computing terminology Cybersecurity engineering
Principal (computer security)
Technology,Engineering
198
2,866
https://en.wikipedia.org/wiki/Ammeter
An ammeter (abbreviation of ampere meter) is an instrument used to measure the current in a circuit. Electric currents are measured in amperes (A), hence the name. For direct measurement, the ammeter is connected in series with the circuit in which the current is to be measured. An ammeter usually has low resistance so that it does not cause a significant voltage drop in the circuit being measured. Instruments used to measure smaller currents, in the milliampere or microampere range, are designated as milliammeters or microammeters. Early ammeters were laboratory instruments that relied on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems. It is generally represented by letter 'A' in a circuit. History The relation between electric current, magnetic fields and physical forces was first noted by Hans Christian Ørsted in 1820, who observed a compass needle was deflected from pointing North when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, where the restoring force returning the pointer to the zero position was provided by the Earth's magnetic field. This made these instruments usable only when aligned with the Earth's field. Sensitivity of the instrument was increased by using additional turns of wire to multiply the effect – the instruments were called "multipliers". The word rheoscope as a detector of electrical currents was coined by Sir Charles Wheatstone about 1840 but is no longer used to describe electrical instruments. The word makeup is similar to that of rheostat (also coined by Wheatstone) which was a device used to adjust the current in a circuit. Rheostat is a historical term for a variable resistance, though unlike rheoscope may still be encountered. Types Some instruments are panel meters, meant to be mounted on some sort of control panel. Of these, the flat, horizontal or vertical type is often called an edgewise meter. Moving-coil The D'Arsonval galvanometer is a moving coil ammeter. It uses magnetic deflection, where current passing through a coil placed in the magnetic field of a permanent magnet causes the coil to move. The modern form of this instrument was developed by Edward Weston, and uses two spiral springs to provide the restoring force. The uniform air gap between the iron core and the permanent magnet poles make the deflection of the meter linearly proportional to current. These meters have linear scales. Basic meter movements can have full-scale deflection for currents from about 25 microamperes to 10 milliamperes. Because the magnetic field is polarised, the meter needle acts in opposite directions for each direction of current. A DC ammeter is thus sensitive to which polarity it is connected in; most are marked with a positive terminal, but some have centre-zero mechanisms and can display currents in either direction. A moving coil meter indicates the average (mean) of a varying current through it, which is zero for AC. For this reason, moving-coil meters are only usable directly for DC, not AC. This type of meter movement is extremely common for both ammeters and other meters derived from them, such as voltmeters and ohmmeters. Moving magnet Moving magnet ammeters operate on essentially the same principle as moving coil, except that the coil is mounted in the meter case, and a permanent magnet moves the needle. 
Moving magnet Ammeters are able to carry larger currents than moving coil instruments, often several tens of amperes, because the coil can be made of thicker wire and the current does not have to be carried by the hairsprings. Indeed, some Ammeters of this type do not have hairsprings at all, instead using a fixed permanent magnet to provide the restoring force. Electrodynamic An electrodynamic ammeter uses an electromagnet instead of the permanent magnet of the d'Arsonval movement. This instrument can respond to both alternating and direct current and also indicates true RMS for AC. See wattmeter for an alternative use for this instrument. Moving-iron Moving iron ammeters use a piece of iron which moves when acted upon by the electromagnetic force of a fixed coil of wire. The moving-iron meter was invented by Austrian engineer Friedrich Drexler in 1884. This type of meter responds to both direct and alternating currents (as opposed to the moving-coil ammeter, which works on direct current only). The iron element consists of a moving vane attached to a pointer, and a fixed vane, surrounded by a coil. As alternating or direct current flows through the coil and induces a magnetic field in both vanes, the vanes repel each other and the moving vane deflects against the restoring force provided by fine helical springs. The deflection of a moving iron meter is proportional to the square of the current. Consequently, such meters would normally have a nonlinear scale, but the iron parts are usually modified in shape to make the scale fairly linear over most of its range. Moving iron instruments indicate the RMS value of any AC waveform applied. Moving iron ammeters are commonly used to measure current in industrial frequency AC circuits. Hot-wire In a hot-wire ammeter, a current passes through a wire which expands as it heats. Although these instruments have slow response time and low accuracy, they were sometimes used in measuring radio-frequency current. These also measure true RMS for an applied AC. Digital In much the same way as the analogue ammeter formed the basis for a wide variety of derived meters, including voltmeters, the basic mechanism for a digital meter is a digital voltmeter mechanism, and other types of meter are built around this. Digital ammeter designs use a shunt resistor to produce a calibrated voltage proportional to the current flowing. This voltage is then measured by a digital voltmeter, through use of an analog-to-digital converter (ADC); the digital display is calibrated to display the current through the shunt. Such instruments are often calibrated to indicate the RMS value for a sine wave only, but many designs will indicate true RMS within limitations of the wave crest factor. Integrating There is also a range of devices referred to as integrating ammeters. In these ammeters the current is summed over time, giving as a result the product of current and time; which is proportional to the electrical charge transferred with that current. These can be used for metering energy (the charge needs to be multiplied by the voltage to give energy) or for estimating the charge of a battery or capacitor. Picoammeter A picoammeter, or pico ammeter, measures very low electric current, usually from the picoampere range at the lower end to the milliampere range at the upper end. Picoammeters are used where the current being measured is below the limits of sensitivity of other devices, such as multimeters. 
Most picoammeters use a "virtual short" technique and have several different measurement ranges that must be switched between to cover multiple decades of measurement. Other modern picoammeters use log compression and a "current sink" method that eliminates range switching and associated voltage spikes. Special design and usage considerations, such as special insulators and driven shields, must be observed in order to reduce leakage current, which may otherwise swamp measurements. Triaxial cable is often used for probe connections.

Application

Ammeters must be connected in series with the circuit to be measured. For relatively small currents (up to a few amperes), an ammeter may pass the whole of the circuit current. For larger direct currents, a shunt resistor carries most of the circuit current and a small, accurately-known fraction of the current passes through the meter movement. For alternating current circuits, a current transformer may be used to provide a convenient small current to drive an instrument, such as 1 or 5 amperes, while the primary current to be measured is much larger (up to thousands of amperes). The use of a shunt or current transformer also allows convenient location of the indicating meter without the need to run heavy circuit conductors up to the point of observation. In the case of alternating current, the use of a current transformer also isolates the meter from the high voltage of the primary circuit. A shunt provides no such isolation for a direct-current ammeter, but where high voltages are used it may be possible to place the ammeter in the "return" side of the circuit, which may be at low potential with respect to earth. Ammeters must not be connected directly across a voltage source since their internal resistance is very low and excess current would flow. Ammeters are designed for a low voltage drop across their terminals, much less than one volt; the extra circuit losses produced by the ammeter are called its "burden" on the measured circuit.

Ordinary Weston-type meter movements can measure only milliamperes at most, because the springs and practical coils can carry only limited currents. To measure larger currents, a resistor called a shunt is placed in parallel with the meter. The resistances of shunts are in the integer to fractional milliohm range. Nearly all of the current flows through the shunt, and only a small fraction flows through the meter. This allows the meter to measure large currents. Traditionally, the meter used with a shunt has a full-scale deflection (FSD) of 50 mV, so shunts are typically designed to produce a voltage drop of 50 mV when carrying their full rated current. To make a multi-range ammeter, a selector switch can be used to connect one of a number of shunts across the meter. It must be a make-before-break switch to avoid damaging current surges through the meter movement when switching ranges. A better arrangement is the Ayrton shunt or universal shunt, invented by William E. Ayrton, which does not require a make-before-break switch. It also avoids any inaccuracy because of contact resistance. Assuming, for example, a movement with a full-scale voltage of 50 mV and desired current ranges of 10 mA, 100 mA, and 1 A, the resistance values would be: R1 = 4.5 ohms, R2 = 0.45 ohm, R3 = 0.05 ohm. And if the movement resistance is 1000 ohms, for example, R1 must be adjusted to 4.525 ohms. Switched shunts are rarely used for currents above 10 amperes.
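A minimal sketch of the shunt arithmetic in the example above; it uses the simple-shunt approximation stated there, with the correction for meter current noted in a comment.

```python
# Sketch of the Ayrton-shunt example above: a 50 mV movement with desired ranges
# of 10 mA, 100 mA and 1 A, ignoring at first the small meter current.
v_fsd = 0.050                     # full-scale voltage of the movement, volts
ranges = [0.010, 0.100, 1.000]    # desired full-scale currents, amperes

totals = [v_fsd / i for i in ranges]   # total shunt resistance per range: 5, 0.5, 0.05 ohm
r3 = totals[2]                         # section used on every range
r2 = totals[1] - totals[2]             # extra section for the 100 mA range
r1 = totals[0] - totals[1]             # extra section for the 10 mA range
print(f"R1 = {r1:.3f} ohm, R2 = {r2:.3f} ohm, R3 = {r3:.3f} ohm")  # 4.500, 0.450, 0.050

# Accounting for the 50 uA flowing through a 1000-ohm movement, the chain should
# really total 0.050 / (0.010 - 50e-6) ≈ 5.025 ohm, i.e. R1 adjusted to ~4.525 ohm.
```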
Zero-center ammeters are used for applications requiring current to be measured with both polarities, common in scientific and industrial equipment. Zero-center ammeters are also commonly placed in series with a battery. In this application, the charging of the battery deflects the needle to one side of the scale (commonly, the right side) and the discharging of the battery deflects the needle to the other side. A special type of zero-center ammeter for testing high currents in cars and trucks has a pivoted bar magnet that moves the pointer, and a fixed bar magnet to keep the pointer centered with no current. The magnetic field around the wire carrying current to be measured deflects the moving magnet. Since the ammeter shunt has a very low resistance, mistakenly wiring the ammeter in parallel with a voltage source will cause a short circuit, at best blowing a fuse, possibly damaging the instrument and wiring, and exposing an observer to injury. In AC circuits, a current transformer can be used to convert the large current in the main circuit into a smaller current more suited to a meter. Some designs of transformer are able to directly convert the magnetic field around a conductor into a small AC current, typically either or at full rated current, that can be easily read by a meter. In a similar way, accurate AC/DC non-contact ammeters have been constructed using Hall effect magnetic field sensors. A portable hand-held clamp-on ammeter is a common tool for maintenance of industrial and commercial electrical equipment, which is temporarily clipped over a wire to measure current. Some recent types have a parallel pair of magnetically soft probes that are placed on either side of the conductor. See also Clamp meter Class of accuracy in electrical measurements Electric circuit Electrical measurements Electrical current#Measurement Electronics List of electronics topics Measurement category Multimeter Ohmmeter Rheoscope Voltmeter Notes References External links — from Lessons in Electric Circuits series main page Electrical meters Electronic test equipment Flow meters
Ammeter
Chemistry,Technology,Engineering
2,603
21,861,922
https://en.wikipedia.org/wiki/Subunit%20vaccine
A subunit vaccine is a vaccine that contains purified parts of the pathogen that are antigenic, or necessary to elicit a protective immune response. A subunit vaccine can be made from disassembled viral particles grown in cell culture or from recombinant DNA expression, in which case it is a recombinant subunit vaccine. A "subunit" vaccine doesn't contain the whole pathogen, unlike live attenuated or inactivated vaccines, but contains only the antigenic parts such as proteins, polysaccharides or peptides. Because the vaccine doesn't contain "live" components of the pathogen, there is no risk of introducing the disease, and the vaccine is safer and more stable than vaccines containing whole pathogens. Other advantages include being well-established technology and being suitable for immunocompromised individuals. Disadvantages include being relatively complex to manufacture compared to some vaccines, possibly requiring adjuvants and booster shots, and requiring time to examine which antigenic combinations may work best. The first recombinant subunit vaccine was produced in the mid-1980s to protect people from Hepatitis B. Other recombinant subunit vaccines licensed include Engerix-B (hepatitis B), Gardasil 9 (Human Papillomavirus), Flublok (influenza), Shingrix (Herpes zoster) and Nuvaxovid (Coronavirus disease 2019). After injection, antigens trigger the production of antigen-specific antibodies, which are responsible for recognising and neutralising foreign substances. Basic components of recombinant subunit vaccines include recombinant subunits, adjuvants and carriers. Additionally, recombinant subunit vaccines are popular candidates for the development of vaccines against infectious diseases (e.g. tuberculosis, dengue). Recombinant subunit vaccines are considered to be safe for injection. The chances of adverse effects vary depending on the specific type of vaccine being administered. Minor side effects include injection site pain, fever, and fatigue, and serious adverse effects include anaphylaxis and potentially fatal allergic reactions. The contraindications are also vaccine-specific; they are generally not recommended for people with a previous history of anaphylaxis to any component of the vaccines. Advice from medical professionals should be sought before receiving any vaccination. Discovery The first subunit vaccine certified through clinical trials in humans was the hepatitis B vaccine, which contained surface antigens of the hepatitis B virus obtained from the plasma of infected patients and processed with newly developed technology to enhance vaccine safety and eliminate possible contamination from the donors' plasma. Mechanism Subunit vaccines contain fragments of the pathogen, such as proteins or polysaccharides, whose combinations are carefully selected to induce a strong and effective immune response. Because the immune system interacts with the pathogen in a limited way, the risk of side effects is minimal. An effective vaccine would elicit the immune response to the antigens and form immunological memory that allows quick recognition of the pathogens and quick response to future infections. A drawback is that the specific antigens used in a subunit vaccine may lack pathogen-associated molecular patterns which are common to a class of pathogen. These molecular structures may be used by immune cells for danger recognition, so without them, the immune response may be weaker. 
Another drawback is that the antigens do not infect cells, so the immune response to subunit vaccines may only be antibody-mediated, not cell-mediated, and, as a result, weaker than that elicited by other types of vaccines. To increase the immune response, adjuvants may be used with the subunit vaccines, or booster doses may be required. Types Protein subunit A protein subunit is a polypeptide chain or protein molecule that assembles (or "coassembles") with other protein molecules to form a protein complex. Large assemblies of proteins such as viruses often use a small number of types of protein subunits as building blocks. A key step in creating a recombinant protein vaccine is the identification and isolation of a protein subunit from the pathogen which is likely to trigger a strong and effective immune response, without including the parts of the virus or bacterium that enable the pathogen to reproduce. Parts of the protein shell or capsid of a virus are often suitable. The goal is for the protein subunit to prime the immune system response by mimicking the appearance but not the action of the pathogen. Another protein-based approach involves self‐assembly of multiple protein subunits into a virus-like particle (VLP) or nanoparticle. The purpose of increasing the vaccine's surface similarity to a whole virus particle (but not its ability to spread) is to trigger a stronger immune response. Protein subunit vaccines are generally made through protein production, manipulating the gene expression of an organism so that it expresses large amounts of a recombinant gene. A variety of approaches can be used for development depending on the vaccine involved. Yeast, baculovirus, or mammalian cell cultures can be used to produce large amounts of proteins in vitro. Protein-based vaccines are being used for hepatitis B and for human papillomavirus (HPV). The approach is being used to try to develop vaccines for difficult-to-vaccinate-against viruses such as ebolavirus and HIV. Protein-based vaccines for COVID-19 tend to target either its spike protein or its receptor binding domain. As of 2021, the most researched vaccine platform for COVID-19 worldwide was reported to be recombinant protein subunit vaccines. Polysaccharide subunit An example is the Vi capsular polysaccharide vaccine (ViCPS) against typhoid caused by the Typhi serotype of Salmonella enterica. Instead of being a protein, the Vi antigen is a bacterial capsular polysaccharide, made up of a long sugar chain linked to a lipid. Capsular vaccines like ViCPS tend to be weak at eliciting immune responses in children. Making a conjugate vaccine by linking the polysaccharide with a toxoid increases the efficacy. Conjugate vaccine A conjugate vaccine is a type of vaccine which combines a weak antigen with a strong antigen as a carrier so that the immune system has a stronger response to the weak antigen. Peptide subunit A peptide-based subunit vaccine employs a peptide instead of a full protein. Peptide-based subunit vaccines are used for several reasons, such as being easy and affordable to produce at scale, and having high stability, purity and a precisely defined composition. 
Three steps occur leading to creation of peptide subunit vaccine; Epitope recognition Epitope optimization Peptide immunity improvement Features When compared with conventional attenuated vaccines and inactivated vaccines, recombinant subunit vaccines have the following special characteristics: They contain clearly identified compositions which greatly reduces the possibility of presence of undesired materials within the vaccine. Their pathogenicities are minimized as only fragments of the pathogen are present in the vaccine which cannot invade and multiply within the human body. They have better safety profiles and are suitable to be administered to immunocompromised patients. They are suitable for mass production due to the use of recombinant technologies. They have high stability so they can withstand environmental changes and are more convenient to be used in community settings. However, there are also some drawbacks regarding recombinant subunit vaccines: Addition of adjuvants is necessary during manufacturing to increase the efficacy of these vaccines. Patients will have to receive booster doses to maintain long-term immunity. Selection of appropriate cell lines for the cultivation of subunits is time-consuming because microbial proteins can be incompatible to certain expression systems. Pharmacology Vaccination is a potent way to protect individuals against infectious diseases. Active immunity can be acquired artificially by vaccination as a result of the body's own defense mechanism being triggered by the exposure of a small, controlled amount of pathogenic substances to produce its own antibodies and memory cells without being infected by the real pathogen. The processes involved in primary immune response are as follows: Pre-exposure to the antigens present in vaccines elicits a primary response. After injection, antigens will be ingested by antigen-presenting cells (APCs), such as dendritic cells and macrophages, via phagocytosis. The APCs will travel to lymph nodes, where immature B cells and T cells are present. Following antigen processes by APCs, antigens will bind to either MHC class I receptors or MHC class II receptors on the cell surface of the cells based on their compositional and structural features to form complexes. Antigen presentation occurs, in which T cell receptors attach to the antigen-MHC complexes, initiating clonal expansion and differentiation, and hence the conversion of naive T cells to cytotoxic T cells (CD8+) or helper T cells (CD4+). Cytotoxic CD8+ cells can directly destroy the infected cells containing the antigens that were presented to them by the APCs by releasing lytic molecules, while helper CD4+ cells are responsible for the secretion of cytokines that activates B cells and cytotoxic T cells. B cells can undergo activation in the absence of T cells via the B cell receptor signalling pathway. After dendritic cells capture the immunogen present in the vaccine, they can present the substances to naive B cells, causing the proliferation of plasma cells for antibody production. Isotype switching can take place during B cell development for the formation of different antibodies, including IgG, IgE and IgA. Memory B cells and T cells are formed post-infection. 
The antigens are memorised by these cells so that subsequent exposure to the same type of antigens will stimulate a secondary response, in which a higher concentration of antibodies specific for the antigens are reproduced rapidly and efficiently in a short time for the elimination of the pathogen. Under specific circumstances, low doses of vaccines are given initially, followed by additional doses named booster doses. Boosters can effectively maintain the level of memory cells in the human body, hence extending a person's immunity. Manufacturing The manufacturing process of recombinant subunit vaccines are as follows: Identification of immunogenic subunit Subunit expression and synthesis Extraction and purification Addition of adjuvants or incorporation to vectors Formulation and delivery. Identification of immunogenic subunit Candidate subunits will be selected primarily by their immunogenicity. To be immunogenic, they should be of foreign nature and of sufficient complexity for the reaction between different components of the immune system and the candidates to occur. Candidates are also selected based on size, nature of function (e.g. signalling) and cellular location (e.g. transmembrane). Subunit expression and synthesis Upon identifying the target subunit and its encoding gene, the gene will be isolated and transferred to a second, non-pathogenic organism, and cultured for mass production. The process is also known as heterologous expression. A suitable expression system is selected based on the requirement of post-translational modifications, costs, ease of product extraction and production efficiency. Commonly used systems for both licensed and developing recombinant subunit vaccines include bacteria, yeast, mammalian cells, insect cells. Bacterial cells Bacterial cells are widely used for cloning processes, genetic modification and small-scale productions. Escherichia coli (E. Coli) is widely utilised due to its highly explored genetics, widely available genetic tools for gene expression, accurate profiling and its ability to grow in inexpensive media at high cell densities. E. Coli is mostly appropriate for structurally simple proteins owing to its inability to carry out post-translational modifications, lack of protein secretary system and the potential for producing inclusion bodies that require additional solubilisation. Regarding application, E.Coli is being utilised as the expression system of the dengue vaccine. Yeast Yeast matches bacterial cells' cost-effectiveness, efficiency and technical feasibility. Moreover, yeast secretes soluble proteins and has the ability to perform post-translational modifications similar to mammalian cells. Notably, yeast incorporates more mannose molecules during N-glycosylation when compared with other eukaryotes, which may trigger cellular conformational stress responses. Such responses may result in failure in reaching native protein conformation, implying potential reduction of serum half-life and immunogenicity. Regarding application, both the hepatitis B virus surface antigen (HBsAg) and the virus-like particles (VLPs) of the major capsid protein L1 of human papillomavirus type 6, 11, 16, 18 are produced by Saccharomyces cerevisiae. Mammalian cells Mammalian cells are well known for their ability to perform therapeutically essential post-translational modifications and express properly folded, glycosylated and functionally active proteins. 
However, efficacy of mammalian cells may be limited by epigenetic gene silencing and aggresome formation (recombinant protein aggregation). For mammalian cells, synthesised proteins were reported to be secreted into chemically defined media, potentially simplifying protein extraction and purification. The most prominent example under this class is Chinese Hamster Ovary (CHO) cells utilised for the synthesis of recombinant varicella zoster virus surface glycoprotein (gE) antigen for SHINGRIX. CHO cells are recognised for rapid growth and their ability to offer process versatility. They can also be cultured in suspension-adapted culture in protein-free medium, hence reducing risk of prion-induced contamination. Baculovirus (insect) cells The baculovirus-insect cell expression system has the ability to express a variety of recombinant proteins at high levels and provide significant eukaryotic protein processing capabilities, including phosphorylation, glycosylation, myristoylation and palmitoylation. Similar to mammalian cells, proteins expressed are mostly soluble, accurately folded, and biologically active. However, it has slower growth rate and requires higher cost of growth medium than bacteria and yeast, and confers toxicological risks. A notable feature is the existence of elements of control that allow for the expression of secreted and membrane-bound proteins in Baculovirus-insect cells. Licensed recombinant subunit vaccines that utilises baculovirus-insect cells include Cervarix (papillomavirus C-terminal truncated major capsid protein L1 types 16 and 18) and Flublok Quadrivalent (hemagglutinin (HA) proteins from four strains of influenza viruses). Extraction and purification Throughout history, extraction and purification methods have evolved from standard chromatographic methods to the utilisation of affinity tags. However, the final extraction and purification process undertaken highly depends on the chosen expression system. Please refer to subunit expression and synthesis for more insights. Addition of adjuvants Adjuvants are materials added to improve immunogenicity of recombinant subunit vaccines. Adjuvants increase the magnitude of adaptive response to the vaccine and guide the activation of the most effective forms of immunity for each specific pathogen (e.g. increasing generation of T cell memory). Addition of adjuvants may confer benefits including dose sparing and stabilisation of final vaccine formulation. Appropriate adjuvants are chosen based on safety, tolerance, compatibility of antigen and manufacturing considerations. Commonly used adjuvants for recombinant subunit vaccines are Alum adjuvants (e.g. aluminium hydroxide), Emulsions (e.g. MF59) and Liposomes combined with immunostimulatory molecules (e.g. AS01B). Formulation and delivery Delivery systems are primarily divided into polymer-based delivery systems (microspheres and liposomes) and live delivery systems (gram-positive bacteria, gram-negative bacteria and viruses) Polymer-based delivery systems Vaccine antigens are often encapsulated within microspheres or liposomes. Common microspheres made using Poly-lactic acid (PLA) and poly-lactic-co-glycolic acid (PLGA) allow for controlled antigen release by degrading in vivo while liposomes including multilamellar or unilamellar vesicles allow for prolonged release. 
Polymer-based delivery systems confer advantages such as increased resistance to degradation in the GI tract, controlled antigen release, raised particle uptake by immune cells and enhanced ability to induce cytotoxic T cell responses. An example of a licensed recombinant vaccine utilising liposomal delivery is Shingrix. Live delivery systems Live delivery systems, also known as vectors, are cells modified with ligands or antigens to improve the immunogenicity of recombinant subunits via altering antigen presentation, biodistribution and trafficking. Subunits may either be inserted within the carrier or genetically engineered to be expressed on the surface of the vectors for efficient presentation to the mucosal immune system. Advantages and disadvantages Advantages Cannot revert to virulence, meaning they cannot cause the disease they aim to protect against Safe for immunocompromised patients Can withstand changes in conditions (e.g. temperature, light exposure, humidity) Disadvantages Reduced immunogenicity compared to attenuated vaccines Require adjuvants to improve immunogenicity Often require multiple doses ("booster" doses) to provide long-term immunity Can be difficult to isolate the specific antigen(s) which will invoke the necessary immune response It is not easy to supervise conjugation chemistry, which leads to discontinuous variation Adverse effects and contraindications Recombinant subunit vaccines are safe for administration. However, mild local reactions, including induration and swelling of the injection site, along with fever, fatigue and headache, may be encountered after vaccination. Occurrence of severe hypersensitivity reactions and anaphylaxis is rare, but these can occasionally be fatal. Adverse effects can vary among populations depending on their physical health condition, age, gender and genetic predisposition. Recombinant subunit vaccines are contraindicated for people who have previously experienced allergic reactions or anaphylaxis to antigens or other components of the vaccines. Furthermore, precautions should be taken when administering vaccines to people who are acutely ill and to those who are pregnant; in these cases injections should be delayed until the condition has stabilised or until after childbirth, respectively. Licensed vaccines Hepatitis B ENGERIX-B (produced by GSK) and RECOMBIVAX HB (produced by Merck) are two recombinant subunit vaccines licensed for protection against hepatitis B. Both contain HBsAg harvested and purified from Saccharomyces cerevisiae and are formulated as a suspension of the antigen adjuvanted with alum. Antibody concentrations ≥10 mIU/mL against HBsAg are recognized as conferring protection against hepatitis B infection. It has been shown that primary 3-dose vaccination of healthy individuals is associated with ≥90% seroprotection rates for ENGERIX-B, although rates decrease with older age. Lower seroprotection rates are also associated with the presence of underlying chronic diseases and immunodeficiency. Yet, GSK HepB still has a clinically acceptable safety profile in all studied populations. Human Papillomavirus (HPV) Cervarix, GARDASIL and GARDASIL9 are three recombinant subunit vaccines licensed for protection against HPV infection. They differ in the strains against which they protect: Cervarix confers protection against types 16 and 18, Gardasil against types 6, 11, 16 and 18, and Gardasil 9 against types 6, 11, 16, 18, 31, 33, 45, 52 and 58.  
The vaccines contain purified VLPs of the major capsid L1 protein produced by recombinant Saccharomyces cerevisiae. A 2014 systematic quantitative review found that for the bivalent HPV vaccine (Cervarix), pain (OR 3.29; 95% CI: 3.00–3.60), swelling (OR 3.14; 95% CI: 2.79–3.53) and redness (OR 2.41; 95% CI: 2.17–2.68) were the most frequently reported adverse effects. For Gardasil, the most frequently reported events were pain (OR 2.88; 95% CI: 2.42–3.43) and swelling (OR 2.65; 95% CI: 2.0–3.44). Gardasil was discontinued in the U.S. on May 8, 2017, after the introduction of Gardasil 9, and Cervarix was also voluntarily withdrawn in the U.S. on August 8, 2016. Influenza Flublok Quadrivalent is a licensed recombinant subunit vaccine for active immunisation against influenza. It contains HA proteins of four strains of influenza virus purified and extracted using the baculovirus-insect expression system. The four viral strains are standardised annually according to United States Public Health Service (USPHS) requirements. Flublok Quadrivalent has a safety profile comparable to traditional trivalent and quadrivalent vaccine equivalents. Flublok is also associated with fewer local reactions (RR = 0.94, 95% CI 0.90–0.98, three RCTs, FEM, I2 = 0%, low-certainty evidence) and a higher risk of chills (RR = 1.33, 95% CI 1.03–1.72, three RCTs, FEM, I2 = 14%, low-certainty evidence). Herpes Zoster SHINGRIX is a licensed recombinant subunit vaccine for protection against herpes zoster, the risk of which increases as varicella zoster virus (VZV)-specific immunity declines. The vaccine contains a VZV gE antigen component extracted from CHO cells, which is to be reconstituted with the adjuvant suspension AS01B. Systematic reviews and meta-analyses have been conducted on the efficacy, effectiveness and safety of SHINGRIX in immunocompromised patients aged 18–49 years and healthy adults aged 50 and over. These studies reported humoral and cell-mediated immunity rates ranging between 65.4–96.2% and 50.0–93.0% respectively, while efficacy in patients (18–49 yo) with haematological malignancies was estimated at 87.2% (95% CI, 44.3–98.6%) up to 13 months post-vaccination, with an acceptable safety profile. COVID-19 NUVAXOVID is a recombinant subunit vaccine licensed for the prevention of SARS-CoV-2 infection. Market authorization was issued on 20 December 2021. The vaccine contains the SARS-CoV-2 spike protein produced using the baculovirus expression system, which is then adjuvanted with the Matrix M adjuvant. History While the practice of immunisation can be traced back to the 12th century, when the ancient Chinese employed the technique of variolation to confer immunity to smallpox infection, the modern era of vaccination has a short history of around 200 years. It began with the invention of a vaccine by Edward Jenner in 1798 to eradicate smallpox by injecting the relatively weaker cowpox virus into the human body. The middle of the 20th century marked the golden age of vaccine science. Rapid technological advancements during this period enabled scientists to cultivate cell cultures under controlled environments in laboratories, subsequently giving rise to the production of vaccines against poliomyelitis, measles and various communicable diseases. Conjugated vaccines were also developed using immunologic markers including capsular polysaccharides and proteins. 
Creation of products targeting common illnesses successfully lowered infection-related mortality and reduced the public healthcare burden. The emergence of genetic engineering techniques revolutionised the creation of vaccines. By the end of the 20th century, researchers had the ability to create recombinant vaccines in addition to traditional whole-cell vaccines, for instance the hepatitis B vaccine, which uses viral antigens to initiate immune responses. As manufacturing methods continue to evolve, vaccines with more complex constitutions will inevitably be generated in the future to extend their therapeutic applications to both infectious and non-infectious diseases, in order to safeguard the health of more people. Future directions Recombinant subunit vaccines are in development for tuberculosis, dengue fever, soil-transmitted helminths, feline leukaemia and COVID-19. Subunit vaccines are not only considered effective for SARS-CoV-2, but are also candidates for evolving immunizations against malaria, tetanus, Salmonella enterica, and other diseases. COVID-19 Research has been conducted to explore the possibility of developing a heterologous SARS-CoV receptor-binding domain (RBD) recombinant protein as a human vaccine against COVID-19. The theory is supported by evidence that convalescent serum from SARS-CoV patients has the ability to neutralise SARS-CoV-2 (the virus that causes COVID-19) and that the amino acid similarity between the SARS-CoV and SARS-CoV-2 spike and RBD proteins is high (82%). References Vaccines Pharmacology
Subunit vaccine
Chemistry,Biology
5,265
28,953,468
https://en.wikipedia.org/wiki/William%20Dickson%20Lang
William Dickson Lang (28 September 1878 – 3 March 1966) was Keeper of the Department of Geology at the British Museum from 1928 until 1938. Early life Lang was born at Kurnal, India the second son of Edward Tickle Lang and Hebe, the daughter of John Venn Prior. At the age of 1, the family returned to England from the Punjab region of India. Lang's father was a civil servant, who had been working on the Jumna Canal in the Punjab. Education William Lang was educated at Christ's Hospital School, then went to Harrow School in 1894 and Pembroke College, Cambridge in 1898 to read zoology. He graduated with his B.A. in 1902 and M.A. in 1905. Career In 1902 he started as an assistant in the Geology Department of the British Museum (N.H.) in charge of Protozoa, Coelenterates, Sponges and Polyzoa (=Bryozoa). During World War I he was made curator of mosquitos and produced in 1920 "A Handbook on British mosquitos". After the war he returned to the Geology Department and in 1928 became Keeper of Geology in succession to F. A. Bather. Lang was elected as a Fellow of the Royal Society in May 1929. His candidacy citation read: "Distinguished for his knowledge of palaeontology; has applied evolutionary principles to the systematic arrangement of fossil polyzoa and corals, studying the recapitulation of ancestral characters in the post-embryonic growth-stages of compound as well as simple organisms, e.g., 'Brit Mus Catalogue Fossil Bryozoa' (1921, 1922), 'The Pelmatoporinae'. Lang elucidated in detail the faunal and stratigraphical succession of the Lias along the Dorset coast, with special relation to ammonites. He was a proponent of the theory of orthogenesis, believing that several lineages of cribrimorph cheilostome bryozoans evolved progressively thicker and more elaborate skeletal structures which eventually became maladaptive, driving the lineage to extinction. By extending the study of existing British species of mosquitoes to their four larval stages, previously ill-known, he tested the relationships already inferred from imaginal characters. Later life Lang mentored many students, who came to use the facilities of the British Museum (N.H.). He retired from the British Museum (N.H.) in 1938 and moved to Charmouth, Dorset, where he had holidayed from an early age. In 1940, Lang, Stanley Smith and H. Dighton Thomas published the "Index to palaeozoic coral genera". In his retirement Lang wrote several articles about Mary Anning, the fossil collector. He also published on the geology and palaeontology of the Dorset coast around Charmouth. In all, he published over 130 papers. He was president of the Dorset Natural History and Archaeological Society from 1938 to 1940 and member of its council from 1956 to 1966. He was well liked and respected, and his letters to colleagues and students, including Dorothy Hill, demonstrate the respect and affection with which he and his work was held. Personal life Lang married Georgiana Dixon in 1908; they had a son, W. Geoffrey Lang and a daughter, J. Brenda Lang. He died in 1966 and was survived by his wife and children. Notes Further reading 1878 births 1966 deaths British geologists Fellows of the Royal Society Lyell Medal winners Alumni of Pembroke College, Cambridge People educated at Harrow School Employees of the Natural History Museum, London Orthogenesis
William Dickson Lang
Biology
746
50,953,582
https://en.wikipedia.org/wiki/Galegine
Galegine is a toxic chemical compound that has been isolated from Galega officinalis. It has also been found to be the principal cause of the toxicity of poison sedge (Schoenus asperocarpus). Galegine was used in the 1920s as a pharmaceutical treatment for diabetes; however, because of its toxicity, its use was soon supplanted by superior alternatives. Research on galegine eventually led to the development of metformin which is used today for treatment of type 2 diabetes. See also Nitensidine D References Guanidine alkaloids Plant toxins
Galegine
Chemistry
126
43,411,017
https://en.wikipedia.org/wiki/Chlorencoelia%20versiformis
Chlorencoelia versiformis is a species of fungus in the family Hemiphacidiaceae. It was originally described in 1798 by Christiaan Hendrik Persoon as Peziza versiformis. The species was transferred to Chlorencoelia in 1975. References External links Fungi described in 1798 Helotiales Taxa named by Christiaan Hendrik Persoon Fungus species
Chlorencoelia versiformis
Biology
78
6,688,378
https://en.wikipedia.org/wiki/Dewvaporation
Dewvaporation is a novel desalination technology developed at Arizona State University (Tempe) as an energy efficient tool for freshwater procurement and saline waste stream management. The system has relatively low installation costs and low operation and maintenance requirements. The process uses air as a carrier gas that transfers water vapor from ascending evaporative channels to adjacent, descending dew-forming channels. Heat flowing through the barrier allows the evaporative energy requirement to be fully satisfied by the heat released by condensation on the dew forming side. A small pressure difference is held so that the condensing cooler air is kept on the cool side. Near atmospheric operation permits corrosion free and scale-resistant polypropylene construction, and also allows the use of low-grade heat to drive the process. The process is proprietary, developed by Dr. James R. Beckman. Currently, Altela Inc. (Albuquerque, NM) is manufacturing this technology under the AltelaRain trade name. Detailed process According to the Bureau of Reclamation, a branch of the US Department of Interior, this process uses simple corrugated plastic tanks with many "DewVaporation columns" inserted in each tank. Each column is made of corrugated plastic and is divided into two compartments. The wall in the middle serves for receiving and evaporating sea-water into a hot air stream, and on the other side for condensing freshwater. The cooling from the evaporation helps water condense on the dividing wall, while the energy from the condensing vapor, now turned to droplets, passes back to the evaporation side, and is absorbed in the evaporating sea water. This way, much of the energy (as heat) is left in the process, and is not removed with the air leaving the DewVaporation column. Various improvements have been proposed, among those reusing the output brine and adding external heat in a stacked way, so that the pressure and humidity gap between the two sides of the column are optimal and constant. See also Heat of evaporation Heat transfer References External links Solar Desalination using Dewvaporation Seawater desalination using Dewvaporation technique: theoretical development and design evolution Seawater desalination using Dewvaporation technique: experimental and enhancement work with economic analysis Dewvaporation Desalination 5,000-Gallon-Per-Day Pilot Plant Brackish and seawater desalination using a 20 ft2 dewvaporation tower Water treatment Water desalination
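The heat reuse described above can be illustrated with a rough back-of-the-envelope balance. The Python sketch below is a minimal illustration under stated assumptions, not a design calculation: the heat-recovery fraction is a hypothetical parameter, and the latent heat of vaporisation of water is taken as roughly 2.26 MJ/kg.

def external_heat_per_kg(recovery_fraction, latent_heat_mj_per_kg=2.26):
    # External (e.g. solar or waste) heat needed per kilogram of distillate,
    # assuming the given fraction of the condensation heat is recycled
    # across the common wall to drive further evaporation.
    return latent_heat_mj_per_kg * (1.0 - recovery_fraction)

for fraction in (0.0, 0.5, 0.9):
    print(fraction, round(external_heat_per_kg(fraction), 3))   # 2.26, 1.13, 0.226 MJ per kg

At zero recovery the tower behaves like a simple single-effect still and needs the full latent heat from an external source; the closer the recovery fraction is to one, the less low-grade external heat is needed per kilogram of fresh water, which is the point of running the evaporating and dew-forming channels against a shared heat-transfer wall.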
Dewvaporation
Chemistry,Engineering,Environmental_science
508
29,072,679
https://en.wikipedia.org/wiki/Universal%20gateway
A universal gateway is a device that transacts data between two or more data sources using communication protocols specific to each. Sometimes called a universal protocol gateway, this class of product is designed as a computer appliance, and is used to connect data from one automation system to another. Typical applications Typical applications include: M2M Communications – machine to machine communications between machines from different vendors, typically using different communication protocols. This is often a requirement to optimize the performance of a production line, by effectively communicating machine states upstream and downstream of a piece of equipment. Machine idle times can trigger lower power operation. Inventory Levels can be more effectively managed on a per station basis, by knowing the upstream and downstream demands. M2E Communications – machine to enterprise communications is typically managed through database interactions. In this case, EATM technology is typically leveraged for data interoperability. However, many enterprise systems have real-time data interfaces. When real-time interfaces are involved, a universal gateway, with its ability to support many protocols simultaneously becomes the best choice. In all cases, communications can fall over many different transports, RS-232, RS-485, Ethernet, etc. Universal Gateways have the ability to communicate between protocols and over different transports simultaneously. Design Hardware platform – Industrial Computer, Embedded Computer, Computer Appliance Communications software – Software (Drivers) to support one or more Industrial Protocols. Communications is typically polled or change based. Great care is typically taken to leverage communication protocols for the most efficient transactions of data (Optimized message sizes, communications speeds, and data update rates). Typical protocols; Rockwell Automation CIP, Ethernet/IP, Siemens Industrial Ethernet, Modbus TCP. There are hundreds of automation device protocols and Universal Gateway solutions are typically targeting certain market segments and will be based on automation vendor relationships. Bridging software – Linking software for connecting data from one device to data in another, one being the source of data and one being the destination. Typically data is transferred on data change, on a time basis, or based on process conditions – Run, Stop, etc. Versus protocol converters A universal gateway will typically offer all protocols on a computer appliance, for the benefit of the process engineer, giving them the opportunity to pick and choose one or more protocols, and change them over time, as the application needs demand. Protocol converters are typically designed with a single purpose, to convert protocol X to Y, and are not offering the level of configurability and flexibility of a universal gateway. New markets Special classes of universal gateway are addressing special needs. The Smart Grid is now prompting a new class of application where plant floor equipment is tied to electric utilities for the purpose of Demand and Response Control over power use. There are a wide variety of "Smart Grid" protocols that need to be connected to Automation Protocols via bridging software. These universal gateways typically support both wired and wireless connectivity. See also Host adapter Protocol converter Classes of computers Computer networking Automation software Industrial equipment
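The bridging behaviour described above, copying a data point from a source device to a destination on change or on a timer, can be sketched in a few lines. The Python classes below are purely hypothetical stand-ins: a real universal gateway would use vendor protocol drivers (for example for Modbus TCP or EtherNet/IP) whose actual APIs are not shown or assumed here.

import time

class TagSource:
    # Hypothetical driver interface: read a named tag from a device.
    def read(self, tag):
        raise NotImplementedError

class TagSink:
    # Hypothetical driver interface: write a named tag to a device.
    def write(self, tag, value):
        raise NotImplementedError

class Bridge:
    # Copy one tag from a source to a sink, either on every poll or only when it changes.
    def __init__(self, source, sink, tag, period_s=1.0, on_change_only=True):
        self.source, self.sink, self.tag = source, sink, tag
        self.period_s, self.on_change_only = period_s, on_change_only
        self._last = None

    def poll_once(self):
        value = self.source.read(self.tag)
        if not self.on_change_only or value != self._last:
            self.sink.write(self.tag, value)
            self._last = value

    def run(self):
        while True:
            self.poll_once()
            time.sleep(self.period_s)

# In-memory demo standing in for two devices speaking different protocols.
class DictSource(TagSource):
    def __init__(self, data): self.data = data
    def read(self, tag): return self.data[tag]

class DictSink(TagSink):
    def __init__(self): self.data = {}
    def write(self, tag, value): self.data[tag] = value

source, sink = DictSource({"machine_state": "RUN"}), DictSink()
Bridge(source, sink, "machine_state").poll_once()
print(sink.data)   # {'machine_state': 'RUN'}

A production gateway adds per-protocol drivers behind the same kind of read/write interface, many bridges running concurrently, and transfer rules keyed to process conditions, but the core data-linking loop is essentially this.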
Universal gateway
Technology,Engineering
612
231,064
https://en.wikipedia.org/wiki/Terrestrial%20television
Terrestrial television, or over-the-air television (OTA) is a type of television broadcasting in which the content is transmitted via radio waves from the terrestrial (Earth-based) transmitter of a TV station to a TV receiver having an antenna. The term terrestrial is more common in Europe and Latin America, while in Canada and the United States it is called over-the-air or simply broadcast. This type of TV broadcast is distinguished from newer technologies, such as satellite television (direct broadcast satellite or DBS television), in which the signal is transmitted to the receiver from an overhead satellite; cable television, in which the signal is carried to the receiver through a cable; and Internet Protocol television, in which the signal is received over an Internet stream or on a network utilizing the Internet Protocol. Terrestrial television stations broadcast on television channels with frequencies between about 52 and 600 MHz in the VHF and UHF bands. Since radio waves in these bands travel by line of sight, reception is generally limited by the visual horizon to distances of , although under better conditions and with tropospheric ducting, signals can sometimes be received hundreds of kilometers distant. Terrestrial television was the first technology used for television broadcasting. The BBC began broadcasting in 1929 and by 1930 many radio stations had a regular schedule of experimental television programmes. However, these early experimental systems had insufficient picture quality to attract the public, due to their mechanical scan technology, and television did not become widespread until after World War II with the advent of electronic scan television technology. The television broadcasting business followed the model of radio networks, with local television stations in cities and towns affiliated with television networks, either commercial (in the US) or government-controlled (in Europe), which provided content. Television broadcasts were in greyscale (called black and white) until the transition to color television in the 1960s. There was no other method of television delivery until the 1950s with the beginnings of cable television and community antenna television (CATV). CATV was, initially, only a re-broadcast of over-the-air signals. With the widespread adoption of cable across the United States in the 1970s and 1980s, viewing of terrestrial television broadcasts has been in decline; in 2018, it was estimated that about 14% of US households used an antenna. However, in certain other regions terrestrial television continue to be the preferred method of receiving television, and it is estimated by Deloitte as of 2020 that at least 1.6 billion people in the world receive at least some television using these means. The largest market is thought to be Indonesia, where 250 million people watch through terrestrial. By 2019, over-the-top media service (OTT) which is streamed via the internet had become a common alternative. Analog Europe Following the ST61 conference, UHF frequencies were first used in the UK in 1964 with the introduction of BBC2. In the UK, VHF channels were kept on the old 405-line system, while UHF was used solely for 625-line broadcasts (which later used PAL color). Television broadcasting in the 405-line system continued after the introduction of four analog programs in the UHF bands until the last 405-line transmitters were switched off on January 6, 1985. 
VHF Band III was used in other countries around Europe for PAL broadcasts until the planned phase-out and switch over to digital television. The success of analog terrestrial television across Europe varied from country to country. Although each country had rights to a certain number of frequencies by virtue of the ST61 plan, not all of them were brought into service. Americas The first National Television System Committee standard was introduced in 1941. This standard defined a transmission scheme for a black-and-white picture with 525 lines of vertical resolution at 60 fields per second. In the early 1950s, this standard was superseded by a backward-compatible standard for color television. The NTSC standard was exclusively being used in the Americas as well as Japan until the introduction of digital terrestrial television (DTT). While Mexico has ended all its analog television broadcasts and the United States and Canada have shut down nearly all of their analog TV stations, the NTSC standard continues to be used in the rest of Latin American countries except for Argentina, Paraguay and Uruguay where PAL-N standard is used while testing their DTT platform. In the late 1990s and early 2000s, the Advanced Television Systems Committee developed the ATSC standard for digital high-definition terrestrial transmission. This standard was eventually adopted by many American countries, including the United States, Canada, Dominican Republic, Mexico, Argentina, El Salvador, Guatemala and Honduras; however, the four latter countries reversed their decision in favor of ISDB-Tb. The Pan-American terrestrial television operates on analog channels 2 through 6 (VHF-low band, 54 to 88 MHz, known as band I in Europe), 7 through 13 (VHF-high band, 174 to 216 MHz, known as band III elsewhere), and 14 through 51 (UHF television band, 470 to 698 MHz, elsewhere bands IV and V). Unlike with analog transmission, ATSC channel numbers do not correspond to radio frequencies. Instead, a virtual channel is defined as part of the ATSC stream metadata so that a station can transmit on any frequency but still show the same channel number. Additionally, free-to-air television repeaters and signal boosters can be used to rebroadcast a terrestrial television signal using an otherwise unused channel to cover areas with marginal reception. (see Pan-American television frequencies for frequency allocation charts) Analog television channels 2 through 6, 7 through 13, and 14 through 51 are only used for LPTV translator stations in the United States. Channels 52 through 69 are still used by some existing stations, but these channels must be vacated if telecommunications companies notify the stations to vacate that signal spectrum. By convention, broadcast television signals are transmitted with horizontal polarization. Asia Terrestrial television broadcast in Asia started as early as 1939 in Japan through a series of experiments done by NHK Broadcasting Institute of Technology. However, these experiments were interrupted by the beginning of the World War II in the Pacific. On February 1, 1953, NHK (Japan Broadcasting Corporation) began broadcasting. On August 28, 1953, Nippon TV (Nippon Television Network Corporation), the first commercial television broadcaster in Asia was launched. 
Meanwhile, in the Philippines, Alto Broadcasting System (now ABS-CBN Corporation), the first commercial television broadcaster in Southeast Asia, launched its first commercial terrestrial television station DZAQ-TV on October 23, 1953, with the help of Radio Corporation of America (RCA). Digital By the mid-1990s, the interest in digital television across Europe was such that the CEPT convened the "Chester '97" conference to agree on means by which digital television could be inserted into the ST61 frequency plan. The introduction of digital terrestrial television in the late 1990s and early years of the 21st century led the ITU to call a Regional Radiocommunication Conference to abrogate the ST61 plan and to put a new plan for DTT broadcasting only in its place. In December 2005, the European Union decided to cease all analog audio and analog video television transmissions by 2012 and switch all terrestrial television broadcasting to digital audio and digital video (all EU countries have agreed on using DVB-T). The Netherlands completed the transition in December 2006, and some EU member states completed their switchover as early as 2008 (Sweden) and 2009 (Denmark). While the UK began to switch off analog broadcasts, region by region, in late 2007, the process was not completed until 24 October 2012. Norway ceased all analog television transmissions on 1 December 2009. Two member states (not specified in the announcement) expressed concerns that they might not be able to proceed to the switchover by 2012 due to technical limitations; the rest of the EU member states had stopped analog television transmissions by the end of 2012. Many countries are developing and evaluating digital terrestrial television systems. Australia adopted the DVB-T standard, and the government's industry regulator, the Australian Communications and Media Authority, mandated that all analog transmissions would cease by 2012. Mandated digital conversion started early in 2009 with a graduated program. The first centre to experience analog switch-off was the remote Victorian regional town of Mildura, in 2010. The government supplied underprivileged households across the nation with free digital set-top converter boxes in order to minimize conversion disruption. Australia's major free-to-air television networks were all granted digital transmission licenses and are each required to broadcast at least one high-definition and one standard-definition channel into all of their markets. In North America, a specification laid out by the ATSC has become the standard for digital terrestrial television. In the United States, the Federal Communications Commission (FCC) set 12 June 2009 as the final deadline for the switch-off of analog service. All television receivers must now include a DTT tuner using ATSC. In Canada, the Canadian Radio-television and Telecommunications Commission (CRTC) set 31 August 2011 as the date that terrestrial analog transmission service ceased in metropolitan areas and provincial capitals. In Mexico, the Federal Telecommunications Institute (IFT) discontinued the use of analog terrestrial television on 31 December 2015. 
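As a small worked example of the North American channel allocations mentioned in the Americas section above (channels 2–6 at 54–88 MHz, 7–13 at 174–216 MHz and 14–51 at 470–698 MHz), the Python sketch below converts an analog channel number to its nominal 6 MHz slot. It assumes the standard North American channel plan, including the 72–76 MHz gap between channels 4 and 5, and is illustrative only; it does not model ATSC virtual channel numbering, which is independent of the RF frequency actually used.

def channel_band_edges(channel):
    # Lower and upper edge, in MHz, of a North American 6 MHz television channel.
    if 2 <= channel <= 4:
        lower = 54 + 6 * (channel - 2)        # VHF-low, 54-72 MHz
    elif 5 <= channel <= 6:
        lower = 76 + 6 * (channel - 5)        # VHF-low continues after the 72-76 MHz gap
    elif 7 <= channel <= 13:
        lower = 174 + 6 * (channel - 7)       # VHF-high, 174-216 MHz
    elif 14 <= channel <= 51:
        lower = 470 + 6 * (channel - 14)      # UHF, 470-698 MHz
    else:
        raise ValueError("channel outside the 2-51 range discussed here")
    return lower, lower + 6

print(channel_band_edges(2))    # (54, 60)
print(channel_band_edges(13))   # (210, 216)
print(channel_band_edges(51))   # (692, 698)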
See also Antenna farm Audience measurement Blackout (broadcasting) Broadcast call signs Broadcast license Broadcast range Broadcast relay station Broadcast syndication Broadcast television systems City of license Digital multimedia broadcasting Effects of time zones on North American broadcasting Emergency Broadcast System Independent station List of digital television deployments by country List of Formula One broadcasters List of Kentucky Derby broadcasters List of historical Major League Baseball television broadcasters List of current NFL broadcasters List of current National Hockey League broadcasters List of Super Bowl broadcasters List of World Series broadcasters List of current Women's National Basketball Association broadcasters List of television stations in North America by media market List of United States over-the-air television networks Lists of television channels for various lists Media market News broadcasting Public broadcasting Superstation Television system Pay television References External links TVRadioWorld TV stations directory W9WI.com (Terrestrial repeater and TV hobbyist information) TV Coverage maps and Signal Analysis Television technology Television terminology
Terrestrial television
Technology
2,042
42,917,948
https://en.wikipedia.org/wiki/Sulfuric%20acid%20poisoning
Sulfuric acid poisoning refers to ingestion of sulfuric acid, found in lead-acid batteries and some metal cleaners, pool cleaners, drain cleaners and anti-rust products. Signs and symptoms Brown to black streak from the angle of the mouth Brown to black vomitus Brown to black stomach wall Black swollen tongue White (chalky white) teeth Blotting paper appearance of stomach mucosa Ulceration of the esophagus (fibrosis and stricture) Perforation of the stomach. The stomach resembles a black spongy mass on post mortem Treatment For superficial injuries, washing (therapeutic irrigation) is important. Emergency treatments include protecting the airway, which might involve a tracheostomy. Further treatment will vary depending on the severity, but might include investigations to determine the extent of damage (bronchoscopy for the airways and endoscopy for the gastrointestinal tract), followed by treatments including surgery (to debride and repair) and intravenous fluids. Gastric lavage is contraindicated in corrosive acid poisoning such as sulfuric acid poisoning. Bicarbonate is also contraindicated as it liberates carbon dioxide, which can cause gastric dilatation leading to rupture of the stomach, severe abdominal damage, or death. Society and culture Vitriolage is the act of throwing sulfuric acid or other corrosive acids on somebody's face. References External links Sulphuric acid: Toxicological overview Sulfuric acid poisoning on Penn Medicine Sulfuric acid poisoning on Medline Plus Toxic effects of substances chiefly nonmedicinal as to source Chemical weapons attacks
Sulfuric acid poisoning
Chemistry,Environmental_science
338
49,731,496
https://en.wikipedia.org/wiki/Chemical%20cycling
Chemical cycling describes systems in which chemicals are repeatedly circulated between other compounds, states and materials, and back to their original state; such cycling occurs in space and on many objects in space, including the Earth. Active chemical cycling is known to occur in stars, many planets and natural satellites. Chemical cycling plays a large role in sustaining planetary atmospheres, liquids and biological processes and can greatly influence weather and climate. Some chemical cycles release renewable energy; others may give rise to complex chemical reactions, organic compounds and prebiotic chemistry. On terrestrial bodies such as the Earth, chemical cycles involving the lithosphere are known as geochemical cycles. Ongoing geochemical cycles are one of the main attributes of geologically active worlds. A chemical cycle involving a biosphere is known as a biogeochemical cycle. The Sun, other stars and star systems In most hydrogen-fusing stars, including the Sun, a chemical cycle involved in stellar nucleosynthesis occurs which is known as the carbon-nitrogen-oxygen (CNO) cycle. In addition to this cycle, stars also have a helium cycle. Various cycles involving gas and dust have been found to occur in galaxies. Venus The majority of known chemical cycles on Venus involve its dense atmosphere and compounds of carbon and sulphur, the most significant being a strong carbon dioxide cycle. The lack of a complete carbon cycle (including, for example, a geochemical carbon cycle) is thought to be a cause of its runaway greenhouse effect, due to the lack of a substantial carbon sink. Sulphur cycles, including sulphur oxide cycles, also occur: sulphur oxides in the upper atmosphere give rise to sulfuric acid, which in turn returns to oxides through photolysis. Indications also suggest an ozone cycle on Venus similar to Earth's. Earth A number of different types of chemical and geochemical cycles occur on Earth. Biogeochemical cycles play an important role in sustaining the biosphere. Notable active chemical cycles on Earth include: Carbon cycle – consisting of an atmospheric carbon cycle (and carbon dioxide cycle), terrestrial biological carbon cycle, oceanic carbon cycle and geological carbon cycle Nitrogen cycle – which converts nitrogen between its forms through fixation, ammonification, nitrification, and denitrification Oxygen cycle – a biogeochemical cycle of circulating oxygen between the atmosphere, biosphere (the global sum of all ecosystems), and the lithosphere Ozone–oxygen cycle – continually regenerates ozone in the atmosphere and converts ultraviolet radiation (UV) into heat Water cycle – moves water continuously on, above and below the surface shifting between states of liquid, solution, ice and vapour Methane cycle – moves methane between geological and biogeochemical sources and reactions in the atmosphere Hydrogen cycle – a biogeochemical cycle brought about by a combination of biological and abiological processes Phosphorus cycle – the movement of phosphorus through the lithosphere, hydrosphere, and biosphere Sulfur cycle – a biogeochemical process resulting from the mineralization of organic sulfur, oxidation, reduction and incorporation into organic compounds Carbonate–silicate cycle – transforms silicate rocks to carbonate rocks by weathering and sedimentation and transforms carbonate rocks back into silicates by metamorphism and magmatism. 
Rock cycle – switches rock between its three forms: sedimentary, metamorphic, and igneous Mercury cycle – a biogeochemical process in which naturally occurring mercury is bioaccumulated before recombining with sulfur and returning to geological sources as sediments Other chemical cycles include hydrogen peroxide. Mars Recent evidence suggests that chemical cycles similar to Earth's occur on a lesser scale on Mars, facilitated by the thin atmosphere, including carbon dioxide (and possibly carbon), water, sulphur, methane, oxygen, ozone, and nitrogen cycles. Many studies point to significantly more active chemical cycles on Mars in the past; however, the faint young Sun paradox has proved problematic in determining the chemical cycles involved in early climate models of the planet. Jupiter Jupiter, like all the gas giants, has an atmospheric methane cycle. Recent studies indicate a hydrological cycle of water-ammonia vastly different from the type operating on terrestrial planets like Earth, and also a cycle of hydrogen sulfide. Significant chemical cycles exist on Jupiter's moons. Recent evidence points to Europa possessing several active cycles, most notably a water cycle. Other studies suggest an oxygen cycle and a radiation-induced carbon dioxide cycle. Io and Europa appear to have radiolytic sulphur cycles involving their lithospheres. In addition, Europa is thought to have a sulfur dioxide cycle, and the Io plasma torus contributes to a sulphur cycle on Jupiter and Ganymede. Studies also imply active oxygen cycles on Ganymede and oxygen and radiolytic carbon dioxide cycles on Callisto. Saturn In addition to Saturn's methane cycle, some studies suggest an ammonia cycle induced by photolysis similar to Jupiter's. The cycles of its moons are of particular interest. Observations by Cassini–Huygens of Titan's atmosphere and its interactions with its liquid mantle reveal several active chemical cycles, including methane, hydrocarbon, hydrogen, and carbon cycles. Enceladus has active hydrological and silicate cycles and possibly a nitrogen cycle. Uranus Uranus has an active methane cycle. Methane is converted to hydrocarbons through photolysis; these condense and, as they are heated, release methane, which rises to the upper atmosphere. Studies by Grundy et al. (2006) indicate that active carbon cycles operate on Titania, Umbriel, Ariel and Oberon through the ongoing sublimation and deposition of carbon dioxide, though some is lost to space over long periods of time. Neptune Neptune's internal heat and convection drive cycles of methane, carbon, and a combination of other volatiles within Triton's lithosphere. Models predicted the presence of seasonal nitrogen cycles on the moon Triton; however, this has not been supported by observations to date. Pluto-Charon system Models predict a seasonal nitrogen cycle on Pluto, and observations by New Horizons appear to support this. References Biogeochemical cycle Geochemistry Planetary science
Chemical cycling
Chemistry,Astronomy
1,255
54,644,701
https://en.wikipedia.org/wiki/NGC%207080
NGC 7080 is a barred spiral galaxy located about 204.5 million light-years away in the constellation of Vulpecula. It has an estimated diameter of about 100,000 light-years, which would make it similar in size to the Milky Way. NGC 7080 was discovered by astronomer Albert Marth on September 6, 1863. According to Harold Corwin, NGC 7054 is a duplicate observation of NGC 7080. One supernova has been observed in NGC 7080: SN 1998ey (type Ic-pec, mag. 16.8) was discovered by Ron Arbour on 5 December 1998. See also NGC 1300 References External links Barred spiral galaxies Vulpecula 7080 11756 66861 Astronomical objects discovered in 1863
NGC 7080
Astronomy
153
318,413
https://en.wikipedia.org/wiki/French%20Institute%20for%20Research%20in%20Computer%20Science%20and%20Automation
The National Institute for Research in Digital Science and Technology (Inria) () is a French national research institution focusing on computer science and applied mathematics. It was created under the name French Institute for Research in Computer Science and Automation (IRIA) () in 1967 at Rocquencourt near Paris, part of Plan Calcul. Its first site was the historical premises of SHAPE (central command of NATO military forces), which is still used as Inria's main headquarters. In 1980, IRIA became INRIA. Since 2011, it has been styled Inria. Inria is a Public Scientific and Technical Research Establishment (EPST) under the double supervision of the French Ministry of National Education, Advanced Instruction and Research and the Ministry of Economy, Finance and Industry. Administrative status Inria has nine research centers distributed across France (in Bordeaux, Grenoble-Inovallée, Lille, Lyon, Nancy, Paris-Rocquencourt, Rennes, Saclay, and Sophia Antipolis) and one center abroad in Santiago de Chile, Chile. It also contributes to academic research teams outside of those centers. Inria Rennes is part of the joint Institut de recherche en informatique et systèmes aléatoires (IRISA) with several other entities. Before December 2007, the three centers of Bordeaux, Lille and Saclay formed a single research center called INRIA Futurs. In October 2010, Inria, with Pierre and Marie Curie University (Now Sorbonne University) and Paris Diderot University started IRILL, a center for innovation and research initiative for free software. Inria employs 3800 people. Among them are 1300 researchers, 1000 Ph.D. students and 500 postdoctorates. Research Inria does both theoretical and applied research in computer science. In the process, it has produced many widely used programs, such as Bigloo, a Scheme implementation CADP, a tool box for the verification of asynchronous concurrent systems Caml, a language from the ML family Caml Light and OCaml implementations Chorus, microkernel-based distributed operating system CompCert, verified C compiler for PowerPC, ARM and x86_32 Contrail Coq, a proof assistant CYCLADES, pioneered the use of datagrams, functional layering, and the end-to-end strategy. Eigen (C++ library) Esterel, a programming language for State Automata Geneauto — code-generation from model Graphite, a research platform for computer graphics, 3D modeling and numerical geometry Gudhi — A C++ library with Python interface for computational topology and topological data analysis Le Lisp, a portable Lisp implementation medInria, a medical image processing software, popularly used for MRI images. GNU MPFR, an arbitrary-precision floating-point library OpenViBE, a software platform dedicated to designing, testing and using brain–computer interfaces. Pharo, an open-source Smalltalk derived from Squeak . scikit-learn, a machine learning software package Scilab, a numerical computation software package SimGrid SmartEiffel, a free Eiffel compiler SOFA, an open source framework for multi-physics simulation with an emphasis on medical simulation. TOM, a pattern matching language ViSP, an open source visual servoing platform library XtreemFS XtreemOS, a grid distributed operating system Zenon, an extensible automated theorem prover producing checkable proofs Inria furthermore leads French AI Research, ranking 12th worldwide in 2019, based on accepted publications at the prestigious Conference on Neural Information Processing Systems. 
History During the summer of 1988, the INRIA connected its Sophia-Antipolis unit to the NSFNet via Princeton using a satellite link leased to France Telecom and MCI. The link became operational on 8 August 1988, and allowed INRIA researchers to access the US network and allowed NASA researchers access to an astronomical database based in Strasbourg. This was the first international connection to NSFNET and the first time that French networks were connected directly to a network using TCP/IP, the Internet protocol. The Internet in France was limited to research and education for some years to come. References Further reading External links See also Stratégie nationale pour l'intelligence artificielle Computer science research organizations History of computing in France Scientific agencies of the government of France Theoretical computer science Computer science institutes in France Members of the European Research Consortium for Informatics and Mathematics Information technology research institutes Carnot label
French Institute for Research in Computer Science and Automation
Mathematics,Technology
916
56,089,649
https://en.wikipedia.org/wiki/European%20Secure%20Software-defined%20Radio
European Secure Software-defined Radio (ESSOR) is a planned European Union (EU) Permanent Structured Cooperation project for the development of common technologies for European military software-defined radio systems, to guarantee the interoperability and security of voice and data communications between EU forces in joint operations, on a variety of platforms. History The project was based on United States' Software Communications Architecture and Joint Tactical Radio System, to which Thales was a major contributor. Germany initially did not participate in ESSOR, developing instead its own SDR system, Streitkräftegemeinsame, verbundfähige Funkgerät-Ausstattung. Consortium The work of development is being carried out by a consortium of private companies, one from each member country, including Thales (FR), Leonardo (IT), Indra Sistemas (SP), Radmor (PL), Bittium (FI) and Rohde & Schwarz (DE). See also Permanent Structured Cooperation Organisation for Joint Armament Cooperation References External links Description Permanent Structured Cooperation projects Software-defined radio Military equipment of the European Union
European Secure Software-defined Radio
Engineering
226
55,178,467
https://en.wikipedia.org/wiki/Dual%20specificity%20phosphatase%208
Dual specificity phosphatase 8 is a protein that in humans is encoded by the DUSP8 gene. Function The protein encoded by this gene is a member of the dual specificity protein phosphatase subfamily. These phosphatases inactivate their target kinases by dephosphorylating both the phosphoserine/threonine and phosphotyrosine residues. They negatively regulate members of the mitogen-activated protein (MAP) kinase superfamily (MAPK/ERK, SAPK/JNK, p38), which is associated with cellular proliferation and differentiation. Different members of the family of dual specificity phosphatases show distinct substrate specificities for various MAP kinases, different tissue distribution and subcellular localization, and different modes of inducibility of their expression by extracellular stimuli. This gene product inactivates SAPK/JNK and p38, is expressed predominantly in the adult brain, heart, and skeletal muscle, is localized in the cytoplasm, and is induced by nerve growth factor and insulin. An intronless pseudogene for DUSP8 is present on chromosome 10q11.2. References Further reading
Dual specificity phosphatase 8
Chemistry
253
71,429,322
https://en.wikipedia.org/wiki/List
A list is a set of discrete items of information collected and set forth in some format for utility, entertainment, or other purposes. A list may be memorialized in any number of ways, including existing only in the mind of the list-maker, but lists are frequently written down on paper, or maintained electronically. Lists are "most frequently a tool", and "one does not read but only uses a list: one looks up the relevant information in it, but usually does not need to deal with it as a whole". Purpose It has been observed that, with a few exceptions, "the scholarship on lists remains fragmented". David Wallechinsky, a co-author of The Book of Lists, described the attraction of lists as being "because we live in an era of overstimulation, especially in terms of information, and lists help us in organizing what is otherwise overwhelming". While many lists have practical purposes, such as memorializing needed household items, lists are also created purely for entertainment, such as lists put out by various music venues of the "best bands" or "best songs" of a certain era. Such lists may be based on objective factors such as record sales and awards received, or may be generated entirely from the subjective opinion of the writer of the list. Musicologist David V. Moskowitz notes: The practice of ordering a list evaluating things so that better items on the list are ahead of less good items is called ranking. Lists created for the purpose of ranking a subset of an indefinite population (such as the top 100 of the thousands of bands that have performed in a given genre) are almost always presented as round numbers. Studies have determined that a list of items falling within a round number has a substantial psychological impact, such that "the difference between items ranked No. 10 and No. 11 feels enormous and significant, even if it's actually quite minimal or unknown". The same list may serve different purposes for different people. A list of currently popular songs may provide the average person with suggestions for music that they may want to sample, but to a record company executive, the same list would indicate trends regarding the kinds of artists to sign to maximize future profits. Organizing principles Lists may be organized by a number of different principles. For example, a shopping list or a list of places to visit while vacationing might each be organized by priority (with the most important or most desired items at the top and least important or least desired at the bottom), or by proximity, so that following the list will take the shopper or vacationer on the most efficient route. A list may also completely lack any principle of organization, if it does not serve a purpose for which such a principle is needed. An unsorted list is one "in which data items are placed in no particular order with respect to their content; the only relationships between data elements consist of the list predecessor and successor relationships". For example, in her book, Seriously... I'm Kidding, comedian Ellen DeGeneres provides a list of acknowledgements, notes her difficulty in determining how to order the list, and ultimately writes: "This list is in no particular order. Just because someone is first doesn't mean they're the most important. It doesn't mean they're not the most important either". A list that is sorted by some principle may be said to be following a ranking or sequence. Items on a list are often delineated by bullet points or a numbering scheme. 
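As a rough illustration of the distinction drawn above between an unsorted list and a ranked one, the following Python sketch puts the same items into both forms; the items and the priorities assigned to them are invented for this example.
# An "unsorted" list: the only structure is the order the items were written down.
groceries = ["milk", "stamps", "apples", "batteries", "bread"]

# The same items as a ranked list, ordered by a priority assigned to each item
# (the priorities here are invented for the example).
priorities = {"milk": 1, "bread": 2, "apples": 3, "batteries": 4, "stamps": 5}
ranked = sorted(groceries, key=priorities.get)

print(ranked)  # ['milk', 'bread', 'apples', 'batteries', 'stamps']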
Kinds of lists Kinds of lists used in everyday life include: Shopping list: a list of items needed to be purchased by a shopper, such as a list of groceries to be purchased on the next visit to the grocery store (a grocery list) To-do list or Task list: a list or "backlog" of pending tasks Checklist: a type of job aid used in repetitive tasks to reduce failure by compensating for potential limits of human memory and attention Roster: a list of people scheduled to participate in a task, such as employees of a company, or, more specifically, professional athletes set to participate in a specific sporting event Wish list: an itemization of goods or services that a person or organization desires Many highly specialized kinds of lists also exist. For example, a table of contents is a list of the chapters or other features of a written work, usually at the beginning of that work, and an index is a list of concepts or terms found in such a work, usually at the end of the work, and usually indicating where in the work the concepts or terms can be found. A track list is a list of songs on an album, and a set list is a list of songs that a band will regularly play in concerts during a tour. A word list is a list of the lexicon of a language (generally sorted by frequency of occurrence either by levels or as a ranked list) within some given text corpus, serving the purpose of vocabulary acquisition. Many connoisseurs or experts in particular areas will assemble "best of" lists containing things that are considered the best examples within that area. Where such lists are open to a wide array of subjective considerations, such as a list of best poems, best songs, or best athletes in a particular sport, experts with differing opinions may engage in lengthy debates over which items belong on the list, and in which order. Task lists A task list (also called a to-do list or "things-to-do") is a list of tasks to be completed, such as chores or steps toward completing a project. It is an inventory tool which serves as an alternative or supplement to memory. Writer Julie Morgenstern suggests "do's and don'ts" of time management that include mapping out everything that is important by making a task list. Task lists are also used in business management, project management, and software development, and may involve more than one list. When one of the items on a task list is accomplished, the task is checked or crossed off. The traditional method is to write these on a piece of paper with a pen or pencil, usually on a note pad or clip-board. Task lists can also have the form of paper or software checklists. Numerous digital equivalents are now available, including personal information management (PIM) applications and most PDAs. There are also several web-based task list applications, many of which are free. Task list organization Task lists are often diarized and tiered. The simplest tiered system includes a general to-do list (or task-holding file) to record all the tasks the person needs to accomplish and a daily to-do list which is created each day by transferring tasks from the general to-do list. An alternative is to create a "not-to-do list", to avoid unnecessary tasks. Task lists are often prioritized in the following ways. A daily list of things to do, numbered in the order of their importance and done in that order one at a time as daily time allows, is attributed to consultant Ivy Lee (1877–1934) as the most profitable advice received by Charles M. 
Schwab (1862–1939), president of the Bethlehem Steel Corporation. An early advocate of "ABC" prioritization was Alan Lakein, in 1973. In his system "A" items were the most important ("A-1" the most important within that group), "B" next most important, "C" least important. A particular method of applying the ABC method assigns "A" to tasks to be done within a day, "B" a week, and "C" a month. To prioritize a daily task list, one either records the tasks in the order of highest priority, or assigns them a number after they are listed ("1" for highest priority, "2" for second highest priority, etc.) which indicates in which order to execute the tasks. The latter method is generally faster, allowing the tasks to be recorded more quickly. Another way of prioritizing compulsory tasks (group A) is to put the most unpleasant one first. When it is done, the rest of the list feels easier. Groups B and C can benefit from the same idea, but instead of doing the first task (which is the most unpleasant) right away, it gives motivation to do other tasks from the list to avoid the first one. A completely different approach which argues against prioritizing altogether was put forward by British author Mark Forster in his book "Do It Tomorrow and Other Secrets of Time Management". This is based on the idea of operating "closed" to-do lists, instead of the traditional "open" to-do list. He argues that the traditional never-ending to-do lists virtually guarantees that some of your work will be left undone. This approach advocates getting all your work done, every day, and if you are unable to achieve it, that helps you diagnose where you are going wrong and what needs to change. Various writers have stressed potential difficulties with to-do lists such as the following. Management of the list can take over from implementing it. This could be caused by procrastination by prolonging the planning activity. This is akin to analysis paralysis. As with any activity, there is a point of diminishing returns. To remain flexible, a task system must allow for disaster. A company must be ready for a disaster. Even if it is a small disaster, if no one made time for this situation, it can metastasize, potentially causing damage to the company. To avoid getting stuck in a wasteful pattern, the task system should also include regular (monthly, semi-annual, and annual) planning and system-evaluation sessions, to weed out inefficiencies and ensure the user is headed in the direction he or she truly desires. If some time is not regularly spent on achieving long-range goals, the individual may get stuck in a perpetual holding pattern on short-term plans, like staying at a particular job much longer than originally planned. See also A-list Blacklist/Whitelist The Book of Lists Difference list The Infinity of Lists (2009) by Umberto Eco, on the topic of lists Life list Linked list List (abstract data type), in computer science List comprehension List of lists of lists Outline (list) Self-organizing list Short list Wait list Word list References Main topic articles Information management
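As a rough illustration of the numbered and "ABC" prioritization schemes described above, the following Python sketch sorts a small task list first by its letter (A before B before C) and then by the number within each letter. The tasks and their priorities are invented for this example, and this is only one of many ways such a list could be kept.
# Each task carries an "ABC" letter (A = most important) and a number within
# that letter (1 = most important within the letter), as in Lakein-style
# prioritization.
tasks = [
    ("file tax return", "A", 1),
    ("reply to supplier email", "B", 2),
    ("book dentist appointment", "C", 1),
    ("prepare meeting agenda", "A", 2),
    ("order printer paper", "B", 1),
]

# Sorting by (letter, number) puts A-1 before A-2, which comes before B-1, etc.
for name, letter, number in sorted(tasks, key=lambda task: (task[1], task[2])):
    print(f"{letter}-{number}: {name}")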
List
Technology
2,127
604,067
https://en.wikipedia.org/wiki/Nemetschek
Nemetschek Group is a vendor of software for architects, engineers and the construction industry. The company develops and distributes software for planning, designing, building and managing buildings and real estate, as well as for media and entertainment. History 20th century The company was founded by Prof. Georg Nemetschek in 1963, and initially went by the name of Ingenieurbüro für das Bauwesen (engineering firm for the construction industry), focusing on structural design. It was one of the first companies in the industry to use computers and developed software for engineers, initially for its own requirements. In 1977, Nemetschek started distributing its program Statik 97/77 for civil engineering. At the Hanover Fair in 1980, Nemetschek presented a software package for integrated calculation and design of standard components for solid construction. This was the first software enabling computer-aided engineering (CAE) on microcomputers, and the product remained unique on the market for many years. In 1989, Nemetschek Programmsystem GmbH was founded and was responsible for software distribution; Georg Nemetschek's engineering firm continued to be in charge of program development. The main product, Allplan – a CAD system for architects and engineers, was launched in 1984. This allowed designers to model buildings in three dimensions. Nemetschek began to expand internationally in the 1980s. By 1996, the company had subsidiaries in eight European countries and distribution partners in nine European countries; since 1992, it has also had a development site in Bratislava, Slovakia. The first acquisitions were made at the end of the 1990s, including the structural design program vendor Friedrich + Lochner. The company, operating as Nemetschek AG since 1994, went public in 1999 (it has been listed in the Prime Standard market segment and the TecDAX in Frankfurt ever since). 21st century Two major company takeovers followed in 2000: the American firm Diehl Graphsoft (now Vectorworks) and Maxon Computer GmbH, with its Cinema 4D software for visualization and animation. In 2006, Nemetschek acquired Hungary's Graphisoft (for its key product ArchiCAD), and Belgium's SCIA International. In November 2013, Nemetschek acquired the MEP software provider Data Design System (DDS). On 31 October 2014, the acquisition of Bluebeam Software, Inc. was concluded. At the end of 2015, Solibri was acquired. Since 2016, the company has operated as Nemetschek SE. Later that year, SDS/2 was acquired. In 2017, it acquired dRofus and RISA. MCS Solutions was acquired in 2018, closely followed by the acquisition of Axxerion B.V and Plandatis and subsequently rebranded to Spacewell. Other acquisitions have been completed at a brand level (for example, Redshift Rendering Technologies, Red Giant and Pixologic were acquired by Maxon, DEXMA by Spacewell). Since 18 September 2018, Nemetschek is listed in the MDAX in addition to its TecDAX listing. Among others, Nemetschek is a member of the BuildingSMART e.V. and the Deutsche Gesellschaft für Nachhaltiges Bauen (DGNB) (German Sustainable Building Council), actively advocating for open building information modeling (BIM) standards ("open BIM") in the AEC/O industry. Business units Since 2008, Nemetschek has acted as a holding company with four business units: Planning & Design (Architecture and Civil Engineering) Build & Construct Manage & Operate Media & Entertainment. 
The holding company maintains 13 product brands, covering the whole building lifecycle, from planning to operations. See also Comparison of CAD editors for architecture, engineering and construction (AEC) References External links Nemetschek SE website Companies based in Munich Software companies established in 1963 Software companies of Germany Companies listed on the Frankfurt Stock Exchange Building information modeling German brands Companies in the TecDAX Companies in the MDAX 1963 establishments in West Germany
Nemetschek
Engineering
830
14,165,837
https://en.wikipedia.org/wiki/RAB4A
Ras-related protein Rab-4A is a protein that in humans is encoded by the RAB4A gene. Interactions RAB4A has been shown to interact with: CD2AP, KIF3B, RAB11FIP1, RABEP1, and STX4. References Further reading
RAB4A
Chemistry
67
13,629,379
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA77
In molecular biology, Small nucleolar RNA SNORA77 (also known as ACA63) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA). SNORA77 was identified by computational screening and its expression in mouse was experimentally verified by Northern blot and primer extension analysis. It belongs to the H/ACA box class of snoRNAs as it has the predicted hairpin-hinge-hairpin-tail structure and conserved H/ACA-box motifs. SNORA77 is proposed to guide the pseudouridylation of 18S ribosomal RNA (rRNA) residue U814. Pseudouridylation is the isomerisation of the nucleoside uridine to the different isomeric form pseudouridine. References External links Non-coding RNA
Small nucleolar RNA SNORA77
Chemistry
237
19,363,223
https://en.wikipedia.org/wiki/Trivial%20Graph%20Format
Trivial Graph Format (TGF) is a simple text-based adjacency list file format for describing graphs, widely used because of its simplicity. Format The format consists of a list of node definitions, which map node IDs to labels, followed by a list of edges, which specify node pairs and an optional edge label. Because of its lack of standardization, the format has many variations. For instance, some implementations of the format require the node IDs to be integers, while others allow more general alphanumeric identifiers. Each node definition is a single line of text starting with the node ID, separated by a space from its label. The node definitions are separated from the edge definitions by a line containing the "#" character. Each edge definition is another line of text, starting with the two IDs for the endpoints of the edge separated by a space. If the edge has a label, it appears on the same line after the endpoint IDs. The graph may be interpreted as a directed or undirected graph. For directed graphs, to specify the concept of bi-directionality in an edge, one may either specify two edges (forward and back) or differentiate the edge by means of a label. Example A simple graph with two nodes and one edge might look like: 1 First node 2 Second node # 1 2 Edge between the two See also yEd, a graph editor that can handle TGF file format. References External links Using TGF in the yFiles Graph Drawing library Using TGF in Wolfram Mathematica Graph description languages
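As a rough illustration of how simple the format is to handle, the following Python sketch parses TGF text of the kind shown in the example above into a node table and an edge list. The function name and the handling of optional labels are illustrative choices made for this sketch, not part of any standard library or specification.
def parse_tgf(text):
    """Parse Trivial Graph Format text into (nodes, edges).

    nodes: dict mapping node ID -> label (label may be empty)
    edges: list of (source ID, target ID, label) tuples
    """
    nodes, edges = {}, []
    in_edges = False
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line == "#":  # separator between the node section and the edge section
            in_edges = True
            continue
        parts = line.split(None, 2 if in_edges else 1)
        if in_edges:
            source, target = parts[0], parts[1]
            label = parts[2] if len(parts) > 2 else ""
            edges.append((source, target, label))
        else:
            node_id = parts[0]
            label = parts[1] if len(parts) > 1 else ""
            nodes[node_id] = label
    return nodes, edges


example = """1 First node
2 Second node
#
1 2 Edge between the two"""

print(parse_tgf(example))
# ({'1': 'First node', '2': 'Second node'}, [('1', '2', 'Edge between the two')])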
Trivial Graph Format
Mathematics
321
6,111,026
https://en.wikipedia.org/wiki/K-NFB%20Reader
The K-NFB Reader (an acronym for Kurzweil — National Federation of the Blind Reader) is a handheld electronic reading device for the blind. It was developed in a partnership between Ray Kurzweil and the National Federation of the Blind. The original version of the reader was composed of a digital camera and a PDA, which contained specialised OCR software and speech synthesizers to read the scanned material aloud. It was released at a price of $3,495. The software was later ported to the Symbian operating system, to be used on Nokia N82 camera phones, with a new price of $1,595. Developed by the National Federation of the Blind and Sensotec NV in 2014, an iOS port was released at a price of $99. An Android version was released shortly after. KNFB Reader can read: Receipts Package labels and mail Product and nutritional information Print on your computer or tablet screen Longer documents such as books and user manuals Private documents such as tax materials, mortgage documents, bills, and medical reports Ebooks and documents in the ePub format Documents in more than thirty languages Innovative features Source: Text Detection (shows you where there is print to capture) Tilt and Viewfinder Assist (ensures you capture the entire page) Text Highlighting (pinpoints text for dyslexic and other print-disabled users) See also Kurzweil Educational Systems References External links Official site Blindness equipment
K-NFB Reader
Technology
295
24,575,247
https://en.wikipedia.org/wiki/Stonyhurst%20disks
A Stonyhurst disk is a transparent circular grid with lines of longitude and latitude that can overlay a solar image to reference the positions of sunspots. This overlay system was originally created at the Stonyhurst College observatory. References Astronomical instruments Stonyhurst College
Stonyhurst disks
Astronomy
53
32,038,866
https://en.wikipedia.org/wiki/FASTRAD
FASTRAD is a tool dedicated to the calculation of radiation effects (Dose and Displacement Damage) on electronics. The software has uses in high energy physics and nuclear experiments, medical areas, and accelerator and space physics studies, though it is primarily used in the design of satellites. History FASTRAD is a radiation tool dedicated to the analysis and design of radiation sensitive systems. The project was created in 1999, five years after the creation of the product's parent company TRAD, and has been under active development since. Over time, the radiation hardness that satellite manufacturers have been able to offer has greatly increased. Both the optimization of space systems in terms of the power/mass ratio and the miniaturization of electronic devices tend to increase the sensitivity of those systems to the space radiation environment. In order to mitigate the impact on the radiation hardness process, the first solution is to replace the rough shielding analysis by an accurate estimate of the real radiation constraint on the system. Historically, FASTRAD has been able to assist this industry. The main goal of the software is to reduce the margins stemming from a conservative approach of estimating radiation analysis, while reducing the cycle time of mechanical design changes for shielding optimization. In some cases, it can be used to justify the use of non rad-hard parts and save cost and planning for space program equipment. For space applications, the software is capable of simulating the entire satellite system. Radiation CAD interface The main CAD capabilities of the tool are: Creation of multiple simple primitives Insertion of complex 3D geometries coming from STEP or IGES format files Standard modelling tool set (clipping plane, 2D projection, measurement tool, colors, view shot,...) The core of the software is the radiation 3D modeler. The goal of the engine is to make a realistic model of any mechanical design. The main section of the interface is the display window, where the user can manipulate their design. The 3D solids can be defined either by using the component toolbar or by importing them from other 3rd party software (CATIA, Pro/Engineer...) with the standard STEP or IGES format. The Open Cascade library included in FASTRAD provides advanced visualization capabilities like cut operations, complex shape management, and STEP and IGES exchange format modules. The advanced STEP module allows you to import the hierarchy, name and color information. The full 3D designer model is then managed by FASTRAD (visualization, radiation calculation, post-processing). Material properties are one of the most important parts of a radiation simulation. The interface allows you to set the material properties of each solid of the 3D model, such as the density and the mass ratio of each element of the (compound) material by determining its chemical composition (see Fig. 1.). The list of predefined materials can be extended by the user. Simulated radiation detectors can be placed on the 3D model. In this way, radiation effects can be estimated at any point of the 3D model using a Monte Carlo algorithm for a fine calculation of energy deposition by particle-matter interaction (see "Dose calculation and shielding" below), or using a ray-tracing approach. Several more features (local frame display, interactive measurement tool, context menus,...) are included in the interface. 
Dose calculation and shielding Once the radiation model is completed, the user can perform a deposited dose estimate using the sector analysis module of the software. This ray-tracing module combines the information coming from the radiation model with the information of the radiation environment using a Dose Depth Curve. This dose depth curve gives the deposited dose in a target material (mainly silicon for electronic devices) behind an aluminum spherical shield of a given thickness. This calculation is performed for each detector placed in the 3D model. Even for complex geometries, the calculation provides two kinds of information: the 3D mass distribution around each detector the estimated deposited dose in an isotropic radiation environment Using a post-processing of those results, FASTRAD provides information about optimum shielding location using several viewing representation types. Figure 2 presents a mapping of the mass distribution viewed by one component of an electronic board. The red area indicates the critical directions in terms of shielding thickness. The user is able to optimize the size of additional shielding that can be used to decrease the received dose on the studied detector. The main advantage of this process is the short time needed to complete this task and the well defined mechanical shielding solution provided by the sector analysis post-processing. Monte Carlo algorithm The dose calculation in the software uses a Monte Carlo module (developed through a partnership with the CNES). This algorithm can be used either in a forward process or a reverse one. In the first case, the software manages the transport of electrons and photons (including secondary particles) from 1 keV to 10 MeV in the 3D model. Creation of secondary photons and electrons is taken into account. Any type of energy spectrum and source geometry can be defined. Sensitive volumes (SV) are selected by the user and FASTRAD computes the deposited energy inside the SVs. The reverse Monte Carlo module is dedicated to the dose calculation due to an isotropic irradiation of electrons in a complex and multi-scale geometry, for which the forward algorithm can lead to large computation times. The principle of the reverse method is to use: A forward particle tracking method in the vicinity of the SV A backward particle tracking method from the SV to the external source. The Reverse Monte Carlo method for electron transport takes into account the energy deposition due to primary electrons and secondary photons. The Monte Carlo module was successfully verified through a comparison with GEANT4 results for the forward algorithm and with US Format for the reverse method. One example is the case of a piece of electronic equipment in a satellite structure. The radiation environment corresponds to the electron energy spectrum of a geostationary mission (from 10 keV up to 5 MeV).
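To make the sector-analysis idea above concrete, here is a minimal Python sketch of a ray-tracing dose estimate of the general kind described: the solid angle around a detector is split into equal sectors, the shielding thickness seen along each sector is converted to a dose through a dose-depth curve, and the contributions are averaged. All function names, variable names and numbers are illustrative assumptions; this is not FASTRAD's actual implementation or API.
import numpy as np

def sector_dose(thicknesses_mm, depth_curve_mm, dose_curve_rad):
    """Estimate dose at a detector from a sector (ray-tracing) analysis.

    thicknesses_mm : equivalent aluminium thickness seen along each of N
                     equal-solid-angle sectors around the detector
    depth_curve_mm, dose_curve_rad : tabulated dose-depth curve, i.e. the dose
                     behind a spherical aluminium shield of given thickness
                     for the mission's radiation environment
    """
    thicknesses_mm = np.asarray(thicknesses_mm, dtype=float)
    # Dose behind each sector's thickness, interpolated from the tabulated
    # curve (a log-scale interpolation is common; linear is kept for brevity).
    sector_doses = np.interp(thicknesses_mm, depth_curve_mm, dose_curve_rad)
    # Each of the N sectors covers 1/N of the full solid angle of an isotropic
    # environment, so the total dose is the average of the sector doses.
    return sector_doses.mean()

# Toy dose-depth curve: dose (rad) behind aluminium shielding (mm), invented values.
depth = [0.5, 1.0, 2.0, 4.0, 8.0]
dose = [1.0e5, 3.0e4, 8.0e3, 2.0e3, 5.0e2]

# Equivalent thicknesses seen by the detector in six example directions.
print(sector_dose([1.0, 1.5, 2.0, 3.0, 5.0, 8.0], depth, dose))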
The tool is useful for young engineers who need to be driven into the Geant4 world, and who can use FASTRAD as a tutorial tool, or by experts who do not want to spend time on the creation of C++ files that describe the geometry, material, and basic physics and who can use the Geant4 project created by FASTRAD as a base that can be enhanced by specific features relative to their physical application. The Geant4 interface gives the software a wide range of radiation related fields, as Geant4 is already used for space, medical, nuclear, aeronautical and military applications. Its radiation CAD capabilities facilitate the engineering process for any radiation sensitive system analysis. Technical specifications FASTRAD was developed using C++ with OpenGL to manage the 3D and Open Cascade library for the STEP import and Boolean operations. It was tested under Mac and LINUX using an OS emulator (PowerPC, VMware ...). Computer Requirements: Configuration: Windows Vista/XP/NT/2000 - 512 Mo RAM - 50 Mo HDD. See also NOVICE (EMPC) () Geant4 - GEometry ANd Tracking IGES - Initial Graphics Exchange Specification CATIA - Computer Aided Three-dimensional Interactive Application Latchup RayXpert - 3D modelling software that calculates the gamma dose rate by Monte Carlo External links FASTRAD is distributed by TRAD. Official TRAD website Official FASTRAD website References Physics software Nuclear physics Radiation effects
FASTRAD
Physics,Materials_science,Engineering
1,603
49,241,310
https://en.wikipedia.org/wiki/Anion%20exchanger%20family
The anion exchanger family (TC# 2.A.31, also named bicarbonate transporter family) is a member of the large APC superfamily of secondary carriers. Members of the AE family are generally responsible for the transport of anions across cellular barriers, although their functions may vary. All of them exchange bicarbonate. Characterized protein members of the AE family are found in plants, animals, insects and yeast. Uncharacterized AE homologues may be present in bacteria (e.g., in Enterococcus faecium, 372 aas; gi 22992757; 29% identity in 90 residues). Animal AE proteins consist of homodimeric complexes of integral membrane proteins that vary in size from about 900 amino acyl residues to about 1250 residues. Their N-terminal hydrophilic domains may interact with cytoskeletal proteins and therefore play a cell structural role. Some of the currently characterized members of the AE family can be found in the Transporter Classification Database. Family overview Bicarbonate (HCO3−) transport mechanisms are the principal regulators of pH in animal cells. Such transport also plays a vital role in acid-base movements in the stomach, pancreas, intestine, kidney, reproductive organs and the central nervous system. Functional studies have suggested different HCO3− transport modes. Anion exchanger proteins exchange HCO3− for Cl− in a reversible, electroneutral manner. Na+/HCO3− co-transport proteins mediate the coupled movement of Na+ and HCO3− across plasma membranes, often in an electrogenic manner. Sequence analysis of the two families of HCO3− transporters that have been cloned to date (the anion exchangers and Na+/HCO3− co-transporters) reveals that they are homologous. This is not entirely unexpected, given that they both transport HCO3− and are inhibited by a class of pharmacological agents called disulphonic stilbenes. They share around 25-30% sequence identity, which is distributed along their entire sequence length, and have similar predicted membrane topologies, suggesting they have ~10 transmembrane (TM) domains. A conserved domain is found at the C terminus of many bicarbonate transport proteins. It is also found in some plant proteins responsible for boron transport. In these proteins it covers almost the entire length of the sequence. The Band 3 anion exchange protein, which exchanges bicarbonate, is the most abundant polypeptide in the red blood cell membrane, comprising 25% of the total membrane protein. The cytoplasmic domain of band 3 functions primarily as an anchoring site for other membrane-associated proteins. Included among the protein ligands of this domain are ankyrin, protein 4.2, protein 4.1, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphofructokinase, aldolase, hemoglobin, hemichromes, and the protein tyrosine kinase (p72syk). Anion exchangers in humans In humans, anion exchangers fall under solute carrier family 4 (SLC4), which is composed of 10 paralogous members (SLC4A1-5; SLC4A7-11). Nine encode proteins that transport HCO3−. Functionally, eight of these proteins fall into two major groups: three Cl−/HCO3− exchangers (AE1-3) and five Na+-coupled HCO3− transporters (NBCe1, NBCe2, NBCn1, NBCn2, NDCBE). Two of the Na+-coupled transporters (NBCe1, NBCe2) are electrogenic; the other three Na+-coupled HCO3− transporters and all three AEs are electroneutral. Two others (AE4, SLC4A9 and BTR1, SLC4A11) are not characterized. Most, though not all, are inhibited by 4,4'-diisothiocyanatostilbene-2,2'-disulfonate (DIDS). 
SLC4 proteins play roles in acid-base homeostasis, the transport of H+ or HCO3− by epithelia (e.g. absorption of HCO3− in the renal proximal tubule, secretion of HCO3− in the pancreatic duct), as well as the regulation of cell volume and intracellular pH. Based on their hydropathy plots, all SLC4 proteins are hypothesized to share a similar topology in the cell membrane. They have relatively long cytoplasmic N-terminal domains composed of a few hundred to several hundred residues, followed by 10-14 transmembrane (TM) domains, and end with relatively short cytoplasmic C-terminal domains composed of ~30 to ~90 residues. Although the C-terminal domain comprises a small percentage of the size of the protein, this domain in some cases (i) has binding motifs that may be important for protein-protein interactions (e.g., AE1, AE2, and NBCn1), (ii) is important for trafficking to the cell membrane (e.g., AE1 and NBCe1), and (iii) may provide sites for regulation of transporter function via protein kinase A phosphorylation (e.g., NBCe1). The SLC4 family comprises the following proteins: SLC4A1 SLC4A2 SLC4A3 SLC4A4 SLC4A5 SLC4A7 SLC4A8 SLC4A9 SLC4A10 SLC4A11 Anion exchanger 1 The human anion exchanger 1 (AE1 or Band 3) binds carbonic anhydrase II (CAII), forming a "transport metabolon", as CAII binding activates AE1 transport activity about 10-fold. AE1 is also activated by interaction with glycophorin, which also functions to target it to the plasma membrane. The membrane-embedded C-terminal domains may each span the membrane 13-16 times. According to the model of Zhu et al. (2003), AE1 in humans spans the membrane 16 times, 13 times as α-helix, and three times (TMSs 10, 11 and 14) possibly as β-strands. AE1 preferentially catalyzes anion exchange (antiport) reactions. Specific point mutations in human anion exchanger 1 (AE1) convert this electroneutral anion exchanger into a monovalent cation conductance. The same transport site within the AE1 spanning domain is involved in both anion exchange and cation transport. AE1 in human red blood cells has been shown to transport a variety of inorganic and organic anions. Divalent anions may be symported with H+. Additionally, it catalyzes flipping of several anionic amphipathic molecules such as sodium dodecyl sulfate (SDS) and phosphatidic acid from one monolayer of the phospholipid bilayer to the other monolayer. The rate of flipping is sufficiently rapid to suggest that this AE1-catalyzed process is physiologically important in red blood cells and possibly in other animal tissues as well. Anionic phospholipids and fatty acids are likely to be natural substrates. However, the mere presence of TMSs enhances the rates of lipid flip-flop. Structure The crystal structure of AE1 (CTD) at 3.5 angstroms has been determined. The structure is locked in an outward-facing open conformation by an inhibitor. Comparing this structure with a substrate-bound structure of the uracil transporter UraA in an inward-facing conformation allowed identification of the likely anion-binding position in the AE1 (CTD), and led to proposal of a possible transport mechanism that could explain why selected mutations lead to disease. The 3-D structure confirmed that the AE family is a member of the APC superfamily. There are several crystal structures available for the AE1 protein in RCSB (links are also available in TCDB). Other members Renal Na+:HCO3− cotransporters have been found to be members of the AE family. 
They catalyze the reabsorption of HCO3− in the renal proximal tubule in an electrogenic process that is inhibited by typical stilbene inhibitors of AE such as DIDS and SITS. They are also found in many other body tissues. At least two genes encode these symporters in any one mammal. A 10 TMS model has been presented, but this model conflicts with the 14 TMS model proposed for AE1. The transmembrane topology of the human pancreatic electrogenic Na+:HCO3− transporter, NBC1, has been studied. A TMS topology with N- and C-termini in the cytoplasm has been suggested. An extracellular loop determines the stoichiometry of Na+-HCO3− cotransporters. In addition to the Na+-independent anion exchangers (AE1-3) and the Na+:HCO3− cotransporters (NBCs) (which may be either electroneutral or electrogenic), a Na+-driven HCO3−/Cl− exchanger (NCBE) has been sequenced and characterized. It transports Na+ + HCO3− preferentially in the inward direction and H+ + Cl− in the outward direction. This NCBE is widespread in mammalian tissues where it plays an important role in cytoplasmic alkalinization. For example, in pancreatic β-cells, it mediates a glucose-dependent rise in pH related to insulin secretion. Animal cells in tissue culture expressing the gene encoding the ABC-type chloride channel protein CFTR (TC# 3.A.1.202.1) in the plasma membrane have been reported to exhibit cyclic AMP-dependent stimulation of AE activity. Regulation was independent of the Cl− conductance function of CFTR, and mutations in the nucleotide-binding domain #2 of CFTR altered regulation independently of their effects on chloride channel activity. These observations may explain impaired HCO3− secretion in cystic fibrosis patients. Anion exchangers in plants and fungi Plants and yeast have anion transporters that, in both the pericycle cells of plants and the plasma membrane of yeast cells, export borate or boric acid (pKa = 9.2). In A. thaliana, boron is exported from pericycle cells into the root stelar apoplasm against a concentration gradient for uptake into the shoots. In S. cerevisiae, export is also against a concentration gradient. The yeast transporter recognizes HCO3−, I−, Br−, NO3− and Cl−, which may be substrates. Tolerance to boron toxicity in cereals is known to be associated with reduced tissue accumulation of boron. Expression of genes from roots of boron-tolerant wheat and barley with high similarity to efflux transporters from Arabidopsis and rice lowered boron concentrations due to an efflux mechanism. The mechanism of energy coupling is not known, nor is it known if borate or boric acid is the substrate. Several possibilities (uniport, anion:anion exchange and anion:cation exchange) can account for the data. Transport reactions The physiologically relevant transport reaction catalyzed by anion exchangers of the AE family is: Cl− (in) + HCO3− (out) ⇌ Cl− (out) + HCO3− (in). That for the Na+:HCO3− cotransporters is: Na+ (out) + nHCO3− (out) → Na+ (in) + nHCO3− (in). That for the Na+/HCO3−:H+/Cl− exchanger is: Na+ (out) + HCO3− (out) + H+ (in) + Cl− (in) ⇌ Na+ (in) + HCO3− (in) + H+ (out) + Cl− (out). That for the boron efflux protein of plants and yeast is: Boron (in) → Boron (out) See also Solute carrier family Transporter Classification Database References Protein families Transmembrane transporters Solute carrier family
Anion exchanger family
Biology
2,627
22,236,311
https://en.wikipedia.org/wiki/Quasi-homogeneous%20polynomial
In algebra, a multivariate polynomial f(x_1, ..., x_r) is quasi-homogeneous or weighted homogeneous, if there exist r integers w_1, ..., w_r, called weights of the variables, such that the sum w_1·a_1 + ... + w_r·a_r is the same for all nonzero terms c·x_1^{a_1}···x_r^{a_r} of f. This sum is the weight or the degree of the polynomial. The term quasi-homogeneous comes from the fact that a polynomial f is quasi-homogeneous if and only if f(λ^{w_1} x_1, ..., λ^{w_r} x_r) = λ^w f(x_1, ..., x_r) for every λ in any field containing the coefficients. A polynomial f(x_1, ..., x_r) is quasi-homogeneous with weights w_1, ..., w_r if and only if f(y_1^{w_1}, ..., y_r^{w_r}) is a homogeneous polynomial in the y_i. In particular, a homogeneous polynomial is always quasi-homogeneous, with all weights equal to 1. A polynomial is quasi-homogeneous if and only if all the exponent vectors (a_1, ..., a_r) of its nonzero terms belong to the same affine hyperplane. As the Newton polytope of the polynomial is the convex hull of this set of exponent vectors, the quasi-homogeneous polynomials may also be defined as the polynomials that have a degenerate Newton polytope (here "degenerate" means "contained in some affine hyperplane"). Introduction Consider the polynomial , which is not homogeneous. However, if instead of considering we use the pair to test homogeneity, then We say that is a quasi-homogeneous polynomial of type , because its three pairs of exponents , and all satisfy the linear equation . In particular, this says that the Newton polytope of lies in the affine space with equation inside . The above equation is equivalent to this new one: . Some authors prefer to use this last condition and prefer to say that our polynomial is quasi-homogeneous of type . As noted above, a homogeneous polynomial of degree is just a quasi-homogeneous polynomial of type ; in this case all its pairs of exponents will satisfy the equation . Definition Let f be a polynomial in r variables x_1, ..., x_r with coefficients in a commutative ring R. We express it as a finite sum f = Σ c_a x^a over multi-indices a = (a_1, ..., a_r). We say that f is quasi-homogeneous of type φ = (φ_1, ..., φ_r) if there exists some number m such that φ_1·a_1 + ... + φ_r·a_r = m whenever c_a ≠ 0. References Commutative algebra Algebraic geometry
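Several of the inline formulas in the article above, including its worked example, were lost in extraction. As a small self-contained illustration of the definition, here is a worked check of quasi-homogeneity written in LaTeX; the polynomial and weights are chosen for this sketch and are not necessarily those of the article's original example.
% Take f(x,y) = x^4 + x^2 y + y^2 and weights (w_1, w_2) = (1, 2).
\[
  f(x,y) = x^4 + x^2 y + y^2, \qquad (w_1, w_2) = (1, 2).
\]
% Every exponent pair (a_1, a_2) of a nonzero term satisfies w_1 a_1 + w_2 a_2 = 4:
\[
  1\cdot 4 + 2\cdot 0 \;=\; 1\cdot 2 + 2\cdot 1 \;=\; 1\cdot 0 + 2\cdot 2 \;=\; 4 .
\]
% Equivalently, replacing x by \lambda^{w_1} x and y by \lambda^{w_2} y multiplies f by \lambda^4:
\[
  f(\lambda x, \lambda^2 y)
  = \lambda^4 x^4 + \lambda^4 x^2 y + \lambda^4 y^2
  = \lambda^4 f(x,y),
\]
% so f is quasi-homogeneous of weight (degree) 4 with weights (1,2), although it is
% not homogeneous in the ordinary sense (its terms have total degrees 4, 3 and 2).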
Quasi-homogeneous polynomial
Mathematics
396
71,320,302
https://en.wikipedia.org/wiki/Come%20into%20My%20Cellar
Come into My Cellar, alternatively titled Boys! Raise Giant Mushrooms in Your Cellar!, is a science fiction short story by American writer Ray Bradbury. It was originally published in Galaxy Magazine in October 1962, and was subsequently included in the short-story collection S is for Space. The story is about an alien invasion in the form of fungi who take over the body and free will of whoever consumes them, and disperse by sending Special Delivery packages to new victims with mushrooms to be grown and eaten. Ray Bradbury mentioned having the idea for the story while eating steak and mushrooms with a group of editors, and not being taken seriously by them. He then joked that he didn't eat mushrooms for the following years. Plot summary Hugh Fortnum wakes up to the noises of his family and neighbour. He opens the window and greets his neighbour Mrs. Goodbody, intent in treating her bushes against bugs and pests, and convinced of being the first line of defence against flying saucers. Hugh hears the doorbell and walks downstairs to find his wife Cynthia holding a Special Delivery package from New Orleans for their son Tom. The small package is from the Great Bayou Novelty Greenhouse, and contains ‘The Sylvan Glade Jumbo-Giant Guaranteed Growth Raise-Them-in-Your-Cellar-for-Big-Profit Mushrooms’. Tom goes down to the cellar and starts planting the mushrooms. As the advertisement says, they will show fabulous growth within only 24 hours. Toward noon, Hugh Fortnum is driving to the market when he picks up his friend Roger Willis, a biology teacher. Roger is scared and tells Hugh about his intuitive belief that something is wrong with the world. Hugh asks Roger what to do about it, and Roger tells him to wait and observe the world for a few days. Roger leaves, and Hugh drives away. Hugh sits on his porch with his wife and asks her if she has had any sort of intuition lately. She ponders the question and answers that she did not, when Tom appears, showing them the remarkable growth of the mushrooms he’s cultivating in the cellar. In just seven hours, hundreds of greyish brown mushrooms are sprouting from the soil. Cynthia feels uneasy, asks if what Tom is growing really are mushrooms and Tom leaves angrily for the cellar. The phone rings and it’s Dorothy, Roger Willis’ wife. She says that her husband is gone, and asks for Hugh’s help. Hugh goes over to Roger and Dorothy Willis' house and sees that Roger’s clothes are gone. Dorothy and the son Joe are confused about the sudden disappearance of Roger. Before promising Dorothy that he will try to find Roger, Hugh sees Joe walking down to the cellar. Hugh goes back home to find his neighbour, Mrs. Goodbody, fighting off aphids, waterbugs, woodworms, and now Marasmius oreades. He explains to his wife about the disappearance of Roger Willis when a delivery boy brings him a telegram from Roger, saying that he is in New Orleans and that Hugh must refuse all Special Delivery packages at all costs. Hugh calls the police. In the evening, the phone rings at the Fortnum’s house. It is Roger, saying he is on a business trip and asking Hugh why he sent the police to find him. He also tells Hugh that his wife and son knew about his trip, and that he will be back in five days. Roger passes the phone to an angry lieutenant, asking Hugh for explanations. Hugh calls Dorothy, and she confirms that her son received, like all kids in the neighbourhood, a Special Delivery package a few days earlier, the same as Tom received in the morning. 
Watching a meteor in the sky, Hugh starts to suspect that something has invaded Earth. He ponders that an alien invasion would not come by meteors or flying saucers, but by means of spores, seeds, pollens or viruses raining on Earth from space; and he thinks that a spore germinating into a mushroom would not need arms and legs to send itself around via Special Delivery, if it could be eaten by a person, infiltrate their blood and take over their cells. Just like what happened to Roger Willis, concludes Hugh, who became something else after eating the mushrooms grown by his son. Cynthia goes to bed and Hugh pours himself a glass of milk. In the fridge, he finds a fresh-cut mushroom, left by Tom for his parents to eat after he himself had a mushroom sandwich. Hugh leaves the mushrooms at the bottom of the stairs leading upstairs. He calls to his son Tom, who is down in the cellar tending his crop. Tom tells his dad to come down to see the harvest. Hugh goes down into the dark cellar, shutting the door behind him. Reception The Master of Arts thesis by Şeyma Karaca discusses the story (with the alternative title "Boys! Raise giant mushrooms in your cellar!") from a perspective of mental metamorphosis and alien invasion. The "SF Personality" series #24 by Hardy Kettlitz summarizes the story and highlights the increasing paranoia throughout it, giving the interpretation that, at the end, Hugh Fortnum walks into the cellar without knowing what awaits him. John Booth for Wired included the short story as one of "Ten stories by Ray Bradbury to get you into the Halloween spirit" due to its suspense and lurking menace. In the essay by Eric S. Rabkin "Is Mars heaven? The Martian chronicles, Fahrenheit 451 and Ray Bradbury's landscape of longing" in "Visions of Mars: Essays on the Red Planet in Fiction and Science", the author highlights how the story, with invaders from outer space taking over the body and the mind of citizens, fits within the narrative of its historical times (Cold War). Adaptations Ray Bradbury wrote the Alfred Hitchcock Presents TV episode Special Delivery (1959) based on a similar plot. Under the pen name Luis Peñafiel, Narciso Ibáñez Serrador adapted the story into the two-part episode "La Bodega" for the 1966 Spanish anthology TV series Historias para no dormir. The story was adapted into the short movie The Ray Bradbury Theater: Boys! Raise Giant Mushrooms In Your Cellar! (1989). The comic strip Come into My Cellar by English comic artist Dave Gibbons is based on Ray Bradbury's short story. References External links Come into My Cellar on the Internet Speculative Fiction Database Comic strip Come into My Cellar by English comic artist Dave Gibbons on the Internet Speculative Fiction Database Short stories by Ray Bradbury Science fiction short stories Fungi and humans Fictional fungi
Come into My Cellar
Biology
1,343
71,655,197
https://en.wikipedia.org/wiki/Hiroko%20Nagahara
Hiroko Nagahara (born 1952) is a Japanese cosmochemist and astromineralogist whose research studies the chemical composition and formation of chondrules, the molten mineral droplets that accrete to form asteroids and meteoroids. She is a fellow of the Earth–Life Science Institute of the Tokyo Institute of Technology, a professor emerita of Tokyo University, and a former president of the Meteoritical Society. Education and career Nagahara was born in 1952 in Tokyo, and studied in the faculty of science and engineering at Waseda University, graduating in 1970 and earning a master's degree in 1976. She completed a doctorate in 1983 through the University of Tokyo, supervised by Ikuo Kushiro. She joined the University of Tokyo as an assistant professor in 1984, and became a full professor there in 2001. Recognition Nagahara was the 2001 winner of the Saruhashi Prize. She was the 2015 winner of the J. Lawrence Smith Medal of the National Academy of Sciences "for her work on the kinetics of evaporation and condensation processes in the early Solar System and her fundamental contributions to one of the most enduring mysteries in meteoritics, the formation of the chondrules that constitute the characteristic component of the most abundant group of meteorites." In 2016 the Meteoritical Society gave Nagahara the Leonard Medal, its highest award. In 2018, Nagahara was named as a Fellow of the Japan Geoscience Union (JpGU), "for pioneering and innovative contributions to cosmochemistry, meteoritics, and planetary science, and also for outstanding contributions to the Earth and planetary science community". Asteroid 6225 Hiroko, discovered in 1981, was named for her. References 1952 births Living people Mineralogists Women mineralogists Planetary scientists Women planetary scientists Japanese geochemists Meteorite researchers Waseda University alumni University of Tokyo alumni Academic staff of the University of Tokyo
Hiroko Nagahara
Chemistry
391
102,490
https://en.wikipedia.org/wiki/Dell
Dell Inc. is an American technology company that develops, sells, repairs, and supports personal computers (PCs), servers, data storage devices, network switches, software, and computer peripherals including printers and webcams, among other products and services. Dell is based in Round Rock, Texas. Founded by Michael Dell in 1984, Dell started making IBM clone computers and pioneered selling cut-price PCs directly to customers, managing its supply chain and electronic commerce. The company rose rapidly during the 1990s and in 2001 it became the largest global PC vendor for the first time. Dell was a pure hardware vendor until 2009, when it acquired Perot Systems. Dell then entered the market for IT services. The company has expanded its storage and networking systems. In the late 2000s, it began expanding from offering computers only to delivering a range of technology for enterprise customers. Dell is a subsidiary of Dell Technologies, a publicly traded company, as well as a component of the NASDAQ-100 and S&P 500. Dell is ranked 31st on the Fortune 500 list in 2022, up from 76th in 2021. It is also the sixth-largest company in Texas by total revenue, according to Fortune magazine. It is the second-largest non-oil company in Texas. It is the world's third-largest personal computer vendor by unit sales, after Lenovo and HP. In 2015, Dell acquired the enterprise technology firm EMC Corporation, with the two becoming divisions of Dell Technologies. Dell EMC sells data storage, information security, virtualization, analytics, and cloud computing products and services. History Founding and start-up Michael Dell founded Dell Computer Corporation, doing business as PC's Limited, in 1984 while a student at the University of Texas at Austin, operating from his off-campus dormitory room at Dobie Center. The start-up aimed to sell IBM PC compatible computers built from stock components. Michael Dell started trading in the belief that, by selling personal computer systems directly to customers, PC's Limited could better understand customers' needs and provide the most effective computing solutions to meet those needs. Dell dropped out of college upon completion of his freshman year at the University of Texas in order to focus full-time on his fledgling business, after getting about $1,000 in expansion capital from his family. As of April 2021, Michael Dell's net worth was estimated to be over $50 billion. In 1985, PC's Limited launched its first computer, the "Turbo PC," priced at US$795. The Turbo PC featured an Intel 8088-compatible processor with a maximum speed of 8 MHz. PC's Limited marketed these systems through national computer magazines, selling directly to consumers while custom-assembling each unit based on a range of options. This approach allowed them to offer competitive prices compared to retail brands, coupled with the convenience of pre-assembled units, making them one of the early success stories of this business model. The company grossed over $73 million in its first year of operation. The company dropped the PC's Limited name in 1987 to become Dell Computer Corporation and began expanding globally. The reasoning was that this new company name better reflected its presence in the business market, and also resolved issues with the use of "Limited" in a company name in certain countries. The company set up its first international operations in Britain; 11 more followed within the next four years. 
In June 1988, Dell Computer's market capitalization grew by $30 million to $80 million () from its June 22 initial public offering of 3.5 million shares at $8.50 a share. In 1989, Dell Computer set up its first on-site service programs in order to compensate for the lack of local retailers prepared to act as service centers. Growth in the 1990s and early 2000s In 1990, Dell Computer tried selling its products indirectly through warehouse clubs and computer superstores, but met with little success, and the company re-focused on its more successful direct-to-consumer sales model. In 1992, Fortune included Dell Computer Corporation in its list of the world's 500 largest companies, making Michael Dell the youngest CEO of a Fortune 500 company at that time. In 1993, to complement its own direct sales channel, Dell planned to sell PCs at big-box retail outlets such as Wal-Mart, which would have brought in an additional $125 million () in annual revenue. Bain consultant Kevin Rollins persuaded Michael Dell to pull out of these deals, believing they would be money losers in the long run. Margins at retail were thin at best and Dell left the reseller channel in 1994. Rollins would soon join Dell full-time and eventually become the company president and CEO. Originally, Dell did not emphasize the consumer market, due to the higher costs and low profit margins in selling to individuals and households; this changed when the company's Internet site took off in 1996 and 1997. While the industry's average selling price to individuals was going down, Dell's was going up, as second- and third-time computer buyers who wanted powerful computers with multiple features and did not need much technical support were choosing Dell. Dell found an opportunity among PC-savvy individuals who liked the convenience of buying direct, customizing their PC to their means, and having it delivered in days. In early 1997, Dell created an internal sales and marketing group dedicated to serving the home market and introduced a product line designed especially for individual users. From 1997 to 2004, Dell steadily grew and it gained market share from competitors even during industry slumps. During the same period, rival PC vendors such as Compaq, Gateway, IBM, Packard Bell, and AST Research struggled and eventually left the market or were bought out. Dell surpassed Compaq to become the largest PC manufacturer in 1999. Operating costs made up only 10 percent of Dell's $35 billion in revenue in 2002 (), compared with 21 percent of revenue at Hewlett-Packard, 25 percent at Gateway, and 46 percent at Cisco. In 2002, when Compaq merged with Hewlett-Packard (the fourth-place PC maker), the newly combined Hewlett-Packard took the top spot for a time but struggled and Dell soon regained its lead. Dell grew the fastest in the early 2000s. In 2002, Dell expanded its product line to include televisions, handhelds, digital audio players, and printers. Chairman and CEO Michael Dell had repeatedly blocked President and COO Kevin Rollins's attempt to lessen the company's heavy dependency on PCs, which Rollins wanted to fix by acquiring EMC Corporation; a move that would eventually occur over 12 years later. In 2003, at the annual company meeting, the stockholders approved changing the company name to "Dell Inc." to recognize the company's expansion beyond computers. 
In 2004, the company announced that it would build a new assembly-plant near Winston-Salem, North Carolina; the city and county provided Dell with $37.2 million in incentive packages; the state provided approximately $250 million () in incentives and tax breaks. In July, Michael Dell stepped aside as chief executive officer while retaining his position as chairman of the board. Kevin Rollins, who had held a number of executive posts at Dell, became the new CEO. Despite no longer holding the CEO title, Dell essentially acted as a de facto co-CEO with Rollins. Under Rollins, Dell purchased the computer hardware manufacturer Alienware in 2006. Dell Inc.'s plan anticipated Alienware continuing to operate independently under its existing management. Alienware expected to benefit from Dell's efficient manufacturing system. Key events In 2005, while earnings and sales continued to rise, sales growth slowed considerably, and the company stock lost 25% of its value that year. By June 2006, the stock traded around US$25 which was 40% down from July 2005—the high-water mark of the company in the post-dotcom era. By June 2021, the stock had reached an all-time high of over US$100 per share, reflecting the company's successful transition to a technology solutions provider that helps customers navigate digital transformation. The slowing sales growth has been attributed to the maturing PC market, which constituted 66% of Dell's sales, and analysts suggested that Dell needed to make inroads into non-PC business segments such as storage, services, and servers. Dell's price advantage was tied to its ultra-lean manufacturing for desktop PCs, but this became less important as savings became harder to find inside the company's supply chain, and as competitors such as Hewlett-Packard and Acer made their PC manufacturing operations more efficient to match Dell, weakening Dell's traditional price differentiation. Throughout the entire PC industry, declines in prices along with commensurate increases in performance meant that Dell had fewer opportunities to upsell to their customers. As a result, the company was selling a greater proportion of inexpensive PCs than before, which eroded profit margins. The laptop segment had become the fastest-growing of the PC market, but Dell produced low-cost notebooks in China like other PC manufacturers which eliminated Dell's manufacturing cost advantages, plus Dell's reliance on Internet sales meant that it missed out on growing notebook sales in big box stores. CNET has suggested that Dell was getting trapped in the increasing commoditization of high volume low margin computers, which prevented it from offering more exciting devices that consumers demanded. Despite plans of expanding into other global regions and product segments, Dell was heavily dependent on US corporate PC market, as desktop PCs sold to both commercial and corporate customers accounted for 32 percent of its revenue, 85 percent of its revenue comes from businesses, and 64 percent of its revenue comes from North and South America, according to its 2006 third-quarter results. US shipments of desktop PCs were shrinking, and the corporate PC market, which purchases PCs in upgrade cycles, had largely decided to take a break from buying new systems. 
The last cycle started around 2002, three or so years after companies started buying PCs ahead of the perceived Y2K problems, and corporate clients were not expected to upgrade again until after extensive testing of Microsoft's Windows Vista (expected in early 2007), putting the next upgrade cycle around 2008. Heavily dependent on PCs, Dell had to slash prices to boost sales volumes, while demanding deep cuts from suppliers. Dell had long stuck by its direct sales model. Consumers had become the main drivers of PC sales in recent years, yet there had been a decline in consumers purchasing PCs through the Web or on the phone, as increasing numbers were visiting consumer electronics retail stores to try out the devices first. Dell's rivals in the PC industry, HP, Gateway and Acer, had a long retail presence and so were well poised to take advantage of the consumer shift. The lack of a retail presence stymied Dell's attempts to offer consumer electronics such as flat-panel TVs and MP3 players. Dell responded by experimenting with mall kiosks, plus quasi-retail stores in Texas and New York. Dell had a reputation as a company that relied upon supply chain efficiencies to sell established technologies at low prices, instead of being an innovator. By the mid-2000s many analysts were looking to innovating companies as the next source of growth in the technology sector. Dell's low spending on R&D relative to its revenue (compared to IBM, Hewlett-Packard, and Apple Inc.), which worked well in the commoditized PC market, prevented it from making inroads into more lucrative segments, such as MP3 players and later mobile devices. Increasing spending on R&D would have cut into the operating margins that the company emphasized. Dell had done well with a horizontal organization that focused on PCs when the computing industry moved to horizontal mix-and-match layers in the 1980s, but by the mid-2000s the industry had shifted to vertically integrated stacks to deliver an end-to-end IT product, and Dell lagged far behind competitors like Hewlett-Packard and Oracle. Dell's reputation for poor customer service, which was exacerbated as it moved call centers offshore and as its growth outstripped its technical support infrastructure, came under increasing scrutiny on the Web. The original Dell model was known for high customer satisfaction when PCs sold for thousands of dollars, but by the 2000s the company could not justify that level of service when computers in the same line-up sold for hundreds of dollars. Rollins responded by shifting Dick Hunter from the head of manufacturing to head of customer service. Hunter, who noted that Dell's DNA of cost-cutting "got in the way," aimed to reduce call transfer times and have call center representatives resolve inquiries in one call. By 2006, Dell had spent $100 million in just a few months to improve on this, and rolled out DellConnect to answer customer inquiries more quickly. In July 2006, the company started its Direct2Dell blog, and then in February 2007, Michael Dell launched IdeaStorm.com, asking customers for advice, including selling Linux computers and reducing the promotional "bloatware" on PCs. These initiatives did manage to cut the negative blog posts from 49% to 22%, as well as reduce the prominence of "Dell Hell" on Internet search engines. There was also criticism that Dell used faulty components for its PCs, particularly the 11.8 million OptiPlex desktop computers sold to businesses and governments from May 2003 to July 2005 that suffered from faulty capacitors.
A battery recall in August 2006, prompted by a Dell laptop catching fire, drew much negative attention to the company. Sony was later found responsible for manufacturing the batteries, although a Sony spokesman said the problem concerned the combination of the battery with a charger, which was specific to Dell. 2006 marked the first year that Dell's growth was slower than the PC industry as a whole. By the fourth quarter of 2006, Dell lost its title of the largest PC manufacturer to Hewlett-Packard, whose Personal Systems Group was invigorated thanks to a restructuring initiated by its CEO, Mark Hurd. SEC investigation In August 2005, Dell became the subject of an informal investigation by the United States SEC. In 2006, the company disclosed that the US Attorney for the Southern District of New York had subpoenaed documents related to the company's financial reporting dating back to 2002. The company delayed filing financial reports for the third and fourth fiscal quarters of 2006, and several class-action lawsuits were filed. Dell Inc.'s failure to file its quarterly earnings report could have subjected the company to de-listing from the Nasdaq, but the exchange granted Dell a waiver, allowing the stock to trade normally. In August 2007, the company announced that it would restate its earnings for fiscal years 2003 through 2006 and the first quarter of 2007 after an internal audit found that certain employees had changed corporate account balances to meet quarterly financial targets. In July 2010, the SEC charged several senior Dell executives, including Dell Chairman and CEO Michael Dell, former CEO Kevin Rollins, and former CFO James Schneider, "with failing to disclose material information to investors and using fraudulent accounting to make it falsely appear that the company was consistently meeting Wall Street earnings targets and reducing its operating expenses." Dell Inc. was fined $100 million, with Michael Dell personally fined $4 million. Michael Dell resumes CEO role After four out of five quarterly earnings reports were below expectations, Rollins resigned as president and CEO on January 31, 2007, and founder Michael Dell assumed the role of CEO again. On March 1, 2007, the company issued a preliminary quarterly earnings report showing gross sales of $14.4 billion, down 5% year-over-year, and net income of $687 million (30 cents per share), down 33%. Net earnings would have declined even more if not for the effects of eliminated employee bonuses, which accounted for six cents per share. NASDAQ extended the company's deadline for filing financials to May 4. Dell 2.0 and downsizing Dell announced a change campaign called "Dell 2.0," reducing the number of employees and diversifying the company's products. While chairman of the board after relinquishing his CEO position, Michael Dell still had significant input in the company during Rollins's years as CEO. With the return of Michael Dell as CEO, the company saw changes in operations, the exodus of many senior vice-presidents, and new personnel brought in from outside the company. Michael Dell announced a number of initiatives and plans (part of the "Dell 2.0" initiative) to improve the company's financial performance. These included the elimination of 2006 bonuses for employees, with some discretionary awards; a reduction in the number of managers reporting directly to Michael Dell from 20 to 12; and a reduction of "bureaucracy".
Jim Schneider retired as CFO and was replaced by Donald Carty, as the company came under an SEC probe for its accounting practices. On April 23, 2008, Dell announced the closure of one of its biggest Canadian call-centers in Kanata, Ontario, terminating approximately 1,100 employees, with 500 of those redundancies effective on the spot, and with the official closure of the center scheduled for the summer. The call-center had opened in 2006 after the city of Ottawa won a bid to host it. Less than a year later, Dell planned to double its workforce to nearly 3,000 workers and add a new building. These plans were reversed, due to a high Canadian dollar that made the Ottawa staff relatively expensive, and also as part of Dell's turnaround, which involved moving these call-center jobs offshore to cut costs. The company had also announced the shutdown of its Edmonton, Alberta, office, losing 900 jobs. In total, Dell announced the ending of about 8,800 jobs, or 10% of its workforce, in 2007–2008. By the late 2000s, Dell's "configure to order" approach to manufacturing, delivering individual PCs configured to customer specifications from its US facilities, was no longer as efficient or competitive with high-volume Asian contract manufacturers as PCs became powerful low-cost commodities. Dell closed plants that produced desktop computers for the North American market, including the Mort Topfer Manufacturing Center in Austin, Texas (original location) and Lebanon, Tennessee (opened in 1999) in 2008 and early 2009, respectively. The desktop production plant in Winston-Salem, North Carolina, received US$280 million in incentives from the state and opened in 2005, but ceased operations in November 2010. Dell's contract with the state required it to repay the incentives for failing to meet the conditions, and the company sold the North Carolina plant to Herbalife. Much work was transferred to manufacturers in Asia and Mexico, or to some of Dell's own factories overseas. On January 8, 2009, Dell announced the closure of its manufacturing plant in Limerick, Ireland, with the loss of 1,900 jobs and the transfer of production to its plant in Łódź, Poland. Attempts at diversification The release of Apple's iPad tablet computer had a negative impact on Dell and other major PC vendors, as consumers switched away from desktop and laptop PCs. Dell's own mobility division did not find success in developing smartphones or tablets, whether running Windows or Google Android. The Dell Streak was a failure commercially and critically due to its outdated OS, numerous bugs, and low-resolution screen. InfoWorld suggested that Dell and other OEMs saw tablets as a short-term, low-investment opportunity running Google Android, an approach that neglected user interface and failed to gain long-term market traction with consumers. Dell responded by pushing higher-end PCs, such as the XPS line of notebooks, which do not compete with the Apple iPad and Kindle Fire tablets. The growing popularity of smartphones and tablet computers instead of PCs drove Dell's consumer segment to an operating loss in Q3 2012. In December 2012, Dell suffered its first decline in holiday sales in five years, despite the introduction of Windows 8. In the shrinking PC industry, Dell continued to lose market share, as it dropped below Lenovo in 2011 to fall to number three in the world.
Dell and fellow American contemporary Hewlett-Packard came under pressure from Asian PC manufacturers Lenovo, Asus, and Acer, all of which had lower production costs and were willing to accept lower profit margins. In addition, while the Asian PC vendors had been improving their quality and design (for instance, Lenovo's ThinkPad series was winning corporate customers away from Dell's laptops), Dell's customer service and reputation had been slipping. Dell remained the second-most profitable PC vendor, as it took 13 percent of operating profits in the PC industry during Q4 2012, behind Apple's Mac, which took 45 percent; Hewlett-Packard took seven percent, Lenovo and Asus six percent, and Acer one percent. Dell attempted to offset its declining PC business, which still accounted for half of its revenue and generated steady cash flow, by expanding into the enterprise market with servers, networking, software, and services. It avoided many of the acquisition write-downs and management turnover that plagued its chief rival Hewlett-Packard. Despite spending $13 billion on acquisitions to diversify its portfolio beyond hardware, the company was unable to convince the market that it could thrive or make the transformation in the post-PC world, as it suffered continued declines in revenue and share price. Dell's market share in the corporate segment had previously been a "moat" against rivals, but this was no longer the case as sales and profits fell precipitously. 2013 buyout After several weeks of rumors, which started around January 11, 2013, Dell announced on February 5, 2013, that it had struck a $24.4 billion leveraged buyout deal that would delist its shares from the NASDAQ and Hong Kong Stock Exchange and take the company private. Reuters reported that Michael Dell and Silver Lake Partners, aided by a $2 billion loan from Microsoft, would acquire the public shares at $13.65 apiece. The $24.4 billion buyout was projected to be the largest leveraged buyout backed by private equity since the 2007–2008 financial crisis. It was also the largest technology buyout to that point, surpassing the 2006 buyout of Freescale Semiconductor for $17.5 billion. Michael Dell said of the February offer, "I believe this transaction will open an exciting new chapter for Dell, our customers and team members". Dell rival Lenovo responded to the buyout, saying, "the financial actions of some of our traditional competitors will not substantially change our outlook." In March 2013, the Blackstone Group and Carl Icahn expressed interest in purchasing Dell. In April 2013, Blackstone withdrew its offer, citing deteriorating business. Other private equity firms such as KKR & Co. and TPG Capital declined to submit alternative bids for Dell, citing the uncertain market for personal computers and competitive pressures, so the "wide-open bidding war" never materialized. Analysts said that the biggest challenge facing Silver Lake would be to find an "exit strategy" to profit from its investment, which would likely be an IPO to take the company public again, and one warned, "But even if you can get a $25bn enterprise value for Dell, it will take years to get out." In May 2013, Michael Dell joined his board in voting for the offer. The following August he reached a deal with the special committee on the board for $13.88 per share, a raised price of $13.75 plus a special dividend of 13 cents, as well as a change to the voting rules.
The $13.88 cash offer (plus a $.08 per share dividend for the third fiscal quarter) was accepted on September 12 and closed on October 30, 2013, ending Dell's 25-year run as a publicly traded company. After the buyout, the newly private Dell offered a Voluntary Separation Program that they expected to reduce their workforce by up to seven percent. The reception to the program so exceeded the expectations that Dell may be forced to hire new staff to make up for the losses. Recent history On November 19, 2015, Dell, alongside Arm Holdings, Cisco Systems, Intel, Microsoft, and Princeton University, founded the OpenFog Consortium, to promote interests and development in fog computing. Acquisition of EMC On October 12, 2015, Dell Inc. announced its intent to acquire EMC Corporation in a cash-and-stock deal valued at $67 billion (), which has been considered the largest-ever acquisition in the technology sector. As part of the acquisition, Dell would take over EMC's 81% stake in the cloud-computing and virtualization company VMware. This would combine Dell's enterprise server, personal computer, and mobile businesses with EMC's enterprise storage business in a significant Vertical merger of IT giants. Dell would pay $24.05 per share of EMC, and $9.05 per share of tracking stock in VMware. The announcement came two years after Dell Inc. returned to private ownership, claiming that it faced bleak prospects and would need several years out of the public eye to rebuild its business. It was thought that the company's value had roughly doubled since then. EMC was being pressured by Elliott Management, a hedge fund holding 2.2% of EMC's stock, to reorganize their unusual "Federation" structure, in which EMC's divisions were effectively being run as independent companies. Elliott argued this structure deeply undervalued EMC's core "EMC II" data storage business, and that increasing competition between EMC II and VMware products was confusing the market and hindering both companies. The Wall Street Journal estimated that in 2014 Dell had revenue of $27.3 billion () from personal computers and $8.9 billion from servers, while EMC had $16.5 billion from EMC II, $1 billion from RSA Security, $6 billion from VMware, and $230 million from Pivotal Software. EMC owns around 80 percent of the stock of VMware. The proposed acquisition will maintain VMware as a separate company, held via a new tracking stock, while the other parts of EMC will be rolled into Dell. Once the acquisition closes Dell will again publish quarterly financial results, having ceased these on going private in 2013. The combined business was expected to address the markets for scale-out architecture, converged infrastructure and private cloud computing, playing to the strengths of both EMC and Dell. Commentators have questioned the deal, with FBR Capital Markets saying that though it makes a "ton of sense" for Dell, it's a "nightmare scenario that would lack strategic synergies" for EMC. Fortune said there was a lot for Dell to like in EMC's portfolio, but "does it all add up enough to justify tens of billions of dollars for the entire package? Probably not." The Register reported the view of William Blair & Company that the merger would "blow up the current IT chess board", forcing other IT infrastructure vendors to restructure to achieve scale and vertical integration. The value of VMware stock fell 10% after the announcement, valuing the deal at around $63–64bn rather than the $67bn originally reported. 
Key investors backing the deal besides Dell were Singapore's Temasek Holdings and Silver Lake Partners. On September 7, 2016, Dell completed the merger with EMC, which involved the issuance of $45.9 billion () in debt and $4.4 billion () of common stock. At the time, some analysts claimed that Dell's acquisition of the former Iomega could harm the LenovoEMC partnership. In July 2018, Dell announced intentions to become a publicly traded company again by paying $21.7 billion () in both cash and stock to buy back shares from its stake in VMware, offering shareholders roughly 60 cents on the dollar as part of the deal. In November, Carl Icahn (9.3% owner of Dell) sued the company over plans to go public. As a result of pressure from Icahn and other activist investors, Dell renegotiated the deal, ultimately offering shareholders about 80% of market value. As part of this deal, Dell once again became a public company, with the original Dell computer business and Dell EMC operating under the newly created parent, Dell Technologies. Post-acquisition, Dell was re-organized with a new parent company, Dell Technologies; Dell's consumer and workstation businesses are internally referred to as the Dell Client Solutions Group, and is one of the company's three main business divisions alongside Dell EMC and VMware. In January 2021 (), Dell reported $94 billion () in sales and $13 billion operating cash flow during 2020. On March 1, 2024, Dell's stock hit all-time high after earnings. It delivered a strong performance from its artificial intelligence unit that sent shares up nearly 40%, its highest daily gain since the company went public in 2018. In August 2024, the company announced it would be laying off 12,500 employees—10% of its workforce—in order to invest in artificial intelligence initiatives. Dell and AMD When Dell acquired Alienware early in 2006, some Alienware systems had AMD chips. On August 17, 2006, a Dell press release stated that starting in September, Dell Dimension desktop computers would have AMD processors and that later in the year Dell would release a two-socket, quad-processor server using AMD Opteron chips, moving away from Dell's tradition of only offering Intel processors in Dell PCs. CNET's News.com on August 17, 2006, cited Dell's CEO Kevin Rollins as attributing the move to AMD processors to lower costs and to AMD technology. AMD's senior VP in commercial business, Marty Seyer, stated: "Dell's wider embrace of AMD processor-based offerings is a win for Dell, for the industry and most importantly for Dell customers." On October 23, 2006, Dell announced new AMD-based servers — the PowerEdge 6950 and the PowerEdge SC1435. On November 1, 2006, Dell's website began offering notebooks based on AMD processors (the Inspiron 1501 with a display) with the choice of a single-core MK-36 processor, dual-core Turion X2 chips or Mobile Sempron. In 2017, Dell released the AlienWare 17 gaming laptop. The model was primarily based on NVIDIA GeForce GTX 1080 systems. Dell and desktop Linux In 1998, Ralph Nader asked Dell (and five other major OEMs) to offer alternate operating systems to Microsoft Windows, specifically including Linux, for which "there is clearly a growing interest". Possibly coincidentally, Dell started offering Linux notebook systems that "cost no more than their Windows 98 counterparts" in 2000, and soon expanded, with Dell becoming "the first major manufacturer to offer Linux across its full product line". 
However, by early 2001 Dell had "disbanded its Linux business unit." On February 26, 2007, Dell announced that it had commenced a program to sell and distribute a range of computers with pre-installed Linux distributions as an alternative to Microsoft Windows. Dell indicated that Novell's SUSE Linux would appear first. However, the next day, Dell announced that its previous announcement related to certifying the hardware as ready to work with Novell SUSE Linux, and that it (Dell) had no plans to sell systems pre-installed with Linux in the near future. On March 28, 2007, Dell announced that it would begin shipping some desktops and laptops with Linux pre-installed, although it did not specify which distribution of Linux or which hardware would lead. On April 18, a report appeared suggesting that Michael Dell used Ubuntu on one of his home systems. On May 1, 2007, Dell announced it would ship the Ubuntu Linux distribution. On May 24, 2007, Dell started selling models with Ubuntu Linux 7.04 pre-installed: a laptop, a budget computer, and a high-end PC. On June 27, 2007, Dell announced on its Direct2Dell blog that it planned to offer more pre-loaded systems (the new Dell Inspiron desktops and laptops). After the IdeaStorm site supported extending the bundles beyond the US market, Dell later announced more international marketing. On August 7, 2007, Dell officially announced that it would offer one notebook and one desktop in the UK, France and Germany with Ubuntu "pre-installed". At LinuxWorld 2007, Dell announced plans to provide Novell's SUSE Linux Enterprise Desktop on selected models in China, "factory-installed". On November 30, 2007, Dell reported shipping 40,000 Ubuntu PCs. On January 24, 2008, Dell in Germany, Spain, France, and the United Kingdom launched a second laptop, an XPS M1330 with Ubuntu 7.10, for 849 euros or GBP 599 upwards. On February 18, 2008, Dell announced that the Inspiron 1525 would have Ubuntu as an optional operating system. On February 22, 2008, Dell announced plans to sell Ubuntu in Canada and in Latin America. From September 16, 2008, Dell shipped both Dell Ubuntu Netbook Remix and Windows XP Home versions of the Inspiron Mini 9 and the Inspiron Mini 12. Dell shipped the Inspiron Mini laptops with Ubuntu version 8.04. As of 2021, Dell continues to offer select laptops and workstations with Ubuntu Linux pre-installed, under the "Developer Edition" moniker. Corporate affairs Business trends The key trends for Dell are reported as of the financial year ending in late January or early February. Senior leadership List of chairmen Michael Dell (1984– ) List of chief executives Michael Dell (1984–2004) Kevin Rollins (2004–2007) Michael Dell (2007–present); second term List of Dell marketing slogans Be direct (1998–2001) Easy as Dell (2001–2004) Get more out of now (2004–2005) It's a Dell (2005–2006) Dell. Purely You (2006–2007) Yours is Here (2007–2011) The power to do more (2011–present) Acquisitions Dell facilities Dell's headquarters is located in Round Rock, Texas. The company employed about 14,000 people in central Texas and was the region's largest private employer, which has of space. As of 1999 almost half of the general fund of the city of Round Rock originated from sales taxes generated from the Dell headquarters. Dell previously had its headquarters in the Arboretum complex in northern Austin, Texas. In 1989 Dell occupied in the Arboretum complex. In 1990, Dell had 1,200 employees in its headquarters.
In 1993, Dell submitted a document to Round Rock officials, titled "Dell Computer Corporate Headquarters, Round Rock, Texas, May 1993 Schematic Design." Despite the filing, during that year the company said that it was not going to move its headquarters. In 1994, Dell announced that it was moving most of its employees out of the Arboretum, but that it was going to continue to occupy the top floor of the Arboretum and that the company's official headquarters address would continue to be the Arboretum. The top floor continued to hold Dell's board room, demonstration center, and visitor meeting room. Less than one month prior to August 29, 1994, Dell moved 1,100 customer support and telephone sales employees to Round Rock. Dell's lease in the Arboretum had been scheduled to expire in 1994. By 1996, Dell was moving its headquarters to Round Rock. As of January 1996, 3,500 people still worked at the current Dell headquarters. One building of the Round Rock headquarters, Round Rock 3, had space for 6,400 employees and was scheduled to be completed in November 1996. In 1998 Dell announced that it was going to add two buildings to its Round Rock complex, adding of office space to the complex. In 2000, Dell announced that it would lease of space in the Las Cimas office complex in unincorporated Travis County, Texas, between Austin and West Lake Hills, to house the company's executive offices and corporate headquarters. 100 senior executives were scheduled to work in the building by the end of 2000. In January 2001, the company leased the space in Las Cimas 2, located along Loop 360. Las Cimas 2 housed Dell's executives, the investment operations, and some corporate functions. Dell also had an option for of space in Las Cimas 3. After a slowdown in business required reducing employees and production capacity, Dell decided to sublease its offices in two buildings in the Las Cimas office complex. In 2002 Dell announced that it planned to sublease its space to another tenant; the company planned to move its headquarters back to Round Rock once a tenant was secured. By 2003, Dell moved its headquarters back to Round Rock. It leased all of Las Cimas I and II, with a total of , for about a seven-year period after 2003. By that year roughly of that space was absorbed by new subtenants. In 2008, Dell switched the power sources of the Round Rock headquarters to more environmentally friendly ones, with 60% of the total power coming from TXU Energy wind farms and 40% coming from the Austin Community Landfill gas-to-energy plant operated by Waste Management, Inc. Dell facilities in the United States are located in Austin, Texas; Nashua, New Hampshire; Nashville, Tennessee; Oklahoma City, Oklahoma; Peoria, Illinois; Hillsboro, Oregon (Portland area); Winston-Salem, North Carolina; Eden Prairie, Minnesota (Dell Compellent); Bowling Green, Kentucky; Lincoln, Nebraska; and Miami, Florida. Facilities located abroad include Penang, Malaysia; Xiamen, China; Bracknell, UK; Manila, Philippines Chennai, India; Hyderabad, India; Noida, India; Hortolândia and Porto Alegre, Brazil; Bratislava, Slovakia; Łódź, Poland; Panama City, Panama; Dublin and Limerick, Ireland; Casablanca, Morocco and Montpellier, France. The US and India are the only countries that have all Dell's business functions and provide support globally: research and development, manufacturing, finance, analysis, and customer care. Dell was recognized as "India's Most Desired Brand in 2023", as per TRA's Most Desired Brands report 2023. 
Manufacturing From its early beginnings, Dell operated as a pioneer in the "configure to order" approach to manufacturing—delivering individual PCs configured to customer specifications. In contrast, most PC manufacturers in those times delivered large orders to intermediaries on a quarterly basis. To minimize the delay between purchase and delivery, Dell has a general policy of manufacturing its products close to its customers. This also allows for implementing a just-in-time (JIT) manufacturing approach, which minimizes inventory costs. Low inventory is another signature of the Dell business model—a critical consideration in an industry where components depreciate very rapidly. Dell's manufacturing process covers assembly, software installation, functional testing (including "burn-in"), and quality control. Throughout most of the company's history, Dell manufactured desktop machines in-house and contracted out the manufacturing of base notebooks for configuration in-house. The company's approach has changed, as cited in the 2006 Annual Report, which states, "We are continuing to expand our use of original design manufacturing partnerships and manufacturing outsourcing relationships." The Wall Street Journal reported in September 2008 that "Dell has approached contract computer manufacturers with offers to sell" their plants. By the late 2000s, Dell's "configure to order" approach of manufacturing—delivering individual PCs configured to customer specifications from its US facilities was no longer as efficient or competitive with high-volume Asian contract manufacturers as PCs became powerful low-cost commodities. Assembly of desktop computers for the North American market formerly took place at Dell plants in Austin, Texas, (original location) and Lebanon, Tennessee, (opened in 1999), which were closed in 2008 and early 2009, respectively. The plant in Winston-Salem, North Carolina, opened in 2005 but ceased operations in November 2010. Most of the work that used to take place in Dell's US plants was transferred to contract manufacturers in Asia and Mexico, or some of Dell's own factories overseas. The Miami, Florida, facility of its Alienware subsidiary remains in operation, while Dell continues to produce its servers (its most profitable products) in Austin, Texas. Dell assembled computers for the EMEA market at the Limerick facility in the Republic of Ireland, and once employed about 4,500 people in that country. Dell began manufacturing in Limerick in 1991 and went on to become Ireland's largest exporter of goods and its second-largest company and foreign investor. On January 8, 2009, Dell announced that it would move all Dell manufacturing in Limerick to Dell's new plant in the Polish city of Łódź by January 2010. European Union officials said they would investigate a €52.7million aid package the Polish government used to attract Dell away from Ireland. European Manufacturing Facility 1 (EMF1, opened in 1990) and EMF3 form part of the Raheen Industrial Estate near Limerick. EMF2 (previously a Wang facility, later occupied by Flextronics, situated in Castletroy) closed in 2002, and Dell Inc. has consolidated production into EMF3 (EMF1 now contains only offices). Subsidies from the Polish government did keep Dell for a long time. 
After ending assembly in the Limerick plant the Cherrywood Technology Campus in Dublin was the largest Dell office in the republic with over 1200 people in sales (mainly UK & Ireland), support (enterprise support for EMEA) and research and development for cloud computing, but no more manufacturing except Dell's Alienware subsidiary, which manufactures PCs in an Athlone, Ireland, plant. Whether this facility will remain in Ireland is not certain. Dell started production at EMF4 in Łódź, Poland, in late 2007. Dell moved desktop, notebook and PowerEdge server manufacturing for the South American market from the Eldorado do Sul plant opened in 1999, to a new plant in Hortolândia, Brazil, in 2007. Products Scope and brands The corporation markets specific brand names to different market segments. Its Business/Corporate class includes: OptiPlex (office desktop computer systems) Dimension (home desktop computer systems) Vostro (office/small business desktop and notebook systems) n Series (desktop and notebook computers shipped with Linux or FreeDOS installed) Latitude (business-focused notebooks) Precision (workstation systems and high-performance "Mobile Workstation" notebooks), PowerEdge (business servers) PowerVault (direct-attach and network-attached storage) Force10 (network switches) PowerConnect (network switches) Dell Compellent (storage area networks) EqualLogic (enterprise class iSCSI SANs) Dell EMR (electronic medical records) Dell's Home Office/Consumer class includes: Inspiron (medium-range desktop and notebook computers) XPS (high-end desktop and notebook computers) G Series (high/medium-performance gaming laptops) Alienware (high-performance gaming systems) Venue (Tablets Android / Windows) Dell's Peripherals class includes USB keydrives, LCD televisions, and printers; Dell monitors includes LCD TVs, plasma TVs and projectors for HDTV and monitors. Dell UltraSharp is further a high-end brand of monitors. Dell service and support brands include the Dell Solution Station (extended domestic support services, previously "Dell on Call"), Dell Support Center (extended support services abroad), Dell Business Support (a commercial service-contract that provides an industry-certified technician with a lower call-volume than in normal queues), Dell Everdream Desktop Management ("Software as a service" remote-desktop management, originally a SaaS company founded by Elon Musk's cousin, Lyndon Rive, which Dell bought in 2007), and Your Tech Team (a support-queue available to home users who purchased their systems either through Dell's website or through Dell phone-centers). Discontinued products and brands include Axim (PDA; discontinued April 9, 2007), Dimension (home and small office desktop computers; discontinued July 2007), Dell Digital Jukebox (MP3 player; discontinued August 2006), Dell PowerApp (application-based servers), Dell Optiplex (desktop and tower computers previously supported to run server and desktop operating systems), Dell Unix (an SVR4-based Unix operating system for its Dell-branded PCs and workstations; discontinued in 1993) and Dell Mobile Connect(Windows Mobile application; discontinued July 31, 2022). Security Self-signed root certificate In November 2015, it emerged that several Dell computers had shipped with an identical pre-installed root certificate known as "eDellRoot". 
This raised such security risks as attackers impersonating HTTPS-protected websites such as Google and Bank of America and malware being signed with the certificate to bypass Microsoft software filtering. Dell apologized and offered a removal tool. Dell Foundation Services Also in November 2015, a researcher discovered that customers with diagnostic program Dell Foundation Services could be digitally tracked using the unique service tag number assigned to them by the program. This was possible even if a customer enabled private browsing and deleted their browser cookies. Ars Technica recommended that Dell customers uninstall the program until the issue was addressed. Commercial aspects Organization The board consists of nine directors. Michael Dell, the founder of the company, serves as chairman of the board and chief executive officer. Other board members include Don Carty, Judy Lewent, Klaus Luft, Alex Mandl, and Sam Nunn. Shareholders elect the nine board members at meetings, and those board members who do not get a majority of votes must submit a resignation to the board, which will subsequently choose whether or not to accept the resignation. The board of directors usually sets up five committees having oversight over specific matters. These committees include the Audit Committee, which handles accounting issues, including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Finance Committee, which handles financial matters such as proposed mergers and acquisitions; the Governance and Nominating Committee, which handles various corporate matters (including the nomination of the board); and the Antitrust Compliance Committee, which attempts to prevent company practices from violating antitrust laws. Day-to-day operations of the company are run by the Global Executive Management Committee, which sets strategic direction. Dell has regional senior vice-presidents for countries other than the United States. Marketing Dell advertisements have appeared in several types of media including television, the Internet, magazines, catalogs, and newspapers. Some of Dell Inc's marketing strategies include lowering prices at all times of the year, free bonus products (such as Dell printers), and free shipping to encourage more sales and stave off competitors. In 2006, Dell cut its prices in an effort to maintain its 19.2% market share. This also cut profit margins by more than half, from 8.7 to 4.3 percent. To maintain its low prices, Dell continues to accept most purchases of its products via the Internet and through the telephone network, and to move its customer-care division to India and El Salvador. A popular United States television and print ad campaign in the early 2000s featured the actor Ben Curtis playing the part of "Steven", a lightly mischievous blond-haired youth who came to the assistance of bereft computer purchasers. Each television advertisement usually ended with Steven's catch-phrase: "Dude, you're gettin' a Dell!" A subsequent advertising campaign featured interns at Dell headquarters (with Curtis' character appearing in a small cameo at the end of one of the first commercials in this particular campaign). In 2007, Dell switched advertising agencies in the US from BBDO to Working Mother Media. In July 2007, Dell released new advertising created by Working Mother to support the Inspiron and XPS lines. 
The ads featured music from the Flaming Lips and from Devo, who re-formed especially to record the song "Work it Out" for the ad. Also in 2007, Dell began using the slogan "Yours is here" to say that it customizes computers to fit customers' requirements. Beginning in 2011, Dell began hosting a conference in Austin, Texas, at the Austin Convention Center titled "Dell World". The event featured new technology and services provided by Dell and Dell's partners. In 2011, the event was held October 12–14. In 2012, the event was held December 11–13. In 2013, the event was held December 11–13. In 2014, the event was held November 4–6. Dell partner program In late 2007, Dell Inc. announced that it planned to expand its program to value-added resellers (VARs), giving it the official name of "Dell Partner Direct" and a new website. Dell India started an online e-commerce website, www.compuindia.com, with its partner GNG Electronics Pvt Ltd, termed the Dell Express Ship Affiliate (DESA). The main objective was to reduce delivery time. Customers who visit the Dell India official site are given the option to buy online and are then redirected to the affiliate website compuindia.com. Global analytics Dell also operates a captive analytics division, which supports pricing, web analytics, and supply chain operations. The division, DGA, operates as a single, centralized entity with a global view of Dell's business activities. It supports over 500 internal customers worldwide and has created a quantified impact of over $500 million. Criticisms of marketing of laptop security In 2008, Dell received press coverage over its claim of having the world's most secure laptops, specifically its Latitude D630 and Latitude D830. At Lenovo's request, the (US) National Advertising Division (NAD) evaluated the claim, and reported that Dell did not have enough evidence to support it. Retail Dell first opened retail stores in India. United States In the early 1990s, Dell sold its products through Best Buy, Costco and Sam's Club stores in the United States. Dell stopped this practice in 1994, citing low profit margins on the business, and distributed exclusively through a direct-sales model for the next decade. In 2003, Dell briefly sold products in Sears stores in the US. In 2007, Dell started shipping its products to major retailers in the US once again, starting with Sam's Club and Wal-Mart. Staples, the largest office-supply retailer in the US, and Best Buy, the largest electronics retailer in the US, became Dell retail partners later that same year. Kiosks Starting in 2002, Dell opened kiosk locations in the United States to allow customers to examine products before buying them directly from the company. Starting in 2005, Dell expanded kiosk locations to include shopping malls across Australia, Canada, Singapore and Hong Kong. On January 30, 2008, Dell announced it would shut down all 140 kiosks in the US due to expansion into retail stores. By June 3, 2010, Dell had also shut down all of its mall kiosks in Australia. Retail stores Dell products shipped to one of the largest office supply retailers in Canada, Staples Business Depot. In April 2008, Future Shop and Best Buy began carrying a subset of Dell products, such as certain desktops, laptops, printers, and monitors. Since some shoppers in certain markets show reluctance to purchase technological products over the phone or the Internet, Dell has looked into opening retail operations in some countries in Central Europe and Russia.
In April 2007, Dell opened a retail store in Budapest. In October of the same year, Dell opened a retail store in Moscow. In the UK, HMV's flagship Trocadero store has sold Dell XPS PCs since December 2007. From January 2008 the UK stores of DSGi have sold Dell products (in particular, through Currys and PC World stores). As of 2008, the large supermarket chain Tesco has sold Dell laptops and desktops in outlets throughout the UK. In May 2008, Dell reached an agreement with the office supply chain, Officeworks (part of Coles Group), to stock a few modified models in the Inspiron desktop and notebook range. These models have slightly different model numbers, but almost replicate the ones available from the Dell Store. Dell continued its retail push in the Australian market with its partnership with Harris Technology (another part of Coles Group) in November of the same year. In addition, Dell expanded its retail distributions in Australia through an agreement with the discount electrical retailer, The Good Guys, known for "Slashing Prices". Dell agreed to distribute a variety of makes of both desktops and notebooks, including Studio and XPS systems in late 2008. Dell and Dick Smith Electronics (owned by Woolworths Limited) reached an agreement to expand within Dick Smith's 400 stores throughout Australia and New Zealand in May 2009 (1 year since Officeworks—owned by Coles Group—reached a deal). The retailer has agreed to distribute a variety of Inspiron and Studio notebooks, with minimal Studio desktops from the Dell range. , Dell continues to run and operate its various kiosks in 18 shopping centers throughout Australia. On March 31, 2010, Dell announced to Australian Kiosk employees that they were shutting down the Australian/New Zealand Dell kiosk program. In Germany, Dell is selling selected smartphones and notebooks via Media Markt and Saturn, as well as some shopping websites. Competition Dell's major competitors include Lenovo, Hewlett-Packard (HP), Hasee, Acer, Fujitsu, Toshiba, Gateway, Sony, Asus, MSI, Panasonic, Samsung and Apple. Dell and its subsidiary, Alienware, compete in the enthusiast market against AVADirect, Falcon Northwest, VoodooPC (a subsidiary of HP), and other manufacturers. In the second quarter of 2006, Dell had between 18% and 19% share of the worldwide personal computer market, compared to HP with roughly 15%. , Dell lost its lead in the PC business to Hewlett-Packard. Both Gartner and IDC estimated that in the third quarter of 2006, HP shipped more units worldwide than Dell did. Dell's 3.6% growth paled in comparison to HP's 15% growth during the same period. The problem got worse in the fourth quarter, when Gartner estimated that Dell PC shipments declined 8.9% (versus HP's 23.9% growth). As a result, at the end of 2006 Dell's overall PC market share stood at 13.9% (versus HP's 17.4%). IDC reported that Dell lost more server market share than any of the top four competitors in that arena. IDC's Q4 2006 estimates show Dell's share of the server market at 8.1%, down from 9.5% in the previous year. This represents an 8.8% loss year-over-year, primarily to competitors EMC and IBM. As of 2021, Dell is the third-largest PC manufacturer after Lenovo and HP. Partnership with EMC In 2001, Dell and EMC entered into a partnership whereby both companies jointly design products, and Dell provided support for certain EMC products including midrange storage systems, such as fibre channel and iSCSI storage area networks. 
The relationship also promotes and sells OEM versions of backup, recovery, replication and archiving software. On December 9, 2008, Dell and EMC announced the multi-year extension, through 2013, of the strategic partnership with EMC. In addition, Dell expanded its product lineup by adding the EMC Celerra NX4 storage system to the portfolio of Dell/EMC family of networked storage systems and partnered on a new line of data deduplication products as part of its TierDisk family of data storage devices. On October 17, 2011, Dell discontinued reselling all EMC storage products, ending the partnership 2 years early. Later Dell would acquire and merge with EMC in the largest tech merger to date. Environmental record Dell committed to reducing greenhouse gas emissions from its global activities by 40% by 2015, with the 2008 fiscal year as the baseline year. It is listed in Greenpeace's Guide to Greener Electronics that scores leading electronics manufacturers according to their policies on sustainability, climate and energy and how green their products are. In November 2011, Dell ranked 2nd out of 15 listed electronics makers (increasing its score to 5.1 from 4.9, which it gained in the previous ranking from October 2010). Dell was the first company to publicly state a timeline for the elimination of toxic polyvinyl chloride (PVC) and brominated flame retardants (BFRs), which it planned to phase out by the end of 2009. It revised this commitment and now aims to remove toxics by the end of 2011 but only in its computing products. In March 2010, Greenpeace activists protested at Dell offices in Bangalore, Amsterdam and Copenhagen calling for Dell's founder and CEO Michael Dell to "drop the toxics" and claiming that Dell's aspiration to be 'the greenest technology company on the planet' was "hypocritical". Dell has launched its first products completely free of PVC and BFRs with the G-Series monitors (G2210 and G2410) in 2009. In its 2012 report on progress relating to conflict minerals, the Enough Project rated Dell the eighth-highest of 24 consumer electronics companies. Green initiatives Dell became the first company in the information technology industry to establish a product-recycling goal (in 2004) and completed the implementation of its global consumer recycling-program in 2006. On February 6, 2007, the National Recycling Coalition awarded Dell its "Recycling Works" award for efforts to promote producer responsibility. On July 19, 2007, Dell announced that it had exceeded targets in working to achieve a multi-year goal of recovering 275 million pounds of computer equipment by 2009. The company reported the recovery of 78 million pounds (nearly 40,000 tons) of IT equipment from customers in 2006, a 93-percent increase over 2005; and 12.4% of the equipment Dell sold seven years earlier. On June 5, 2007, Dell set a goal of becoming the greenest technology company on Earth for the long term. The company launched a zero-carbon initiative that includes: reducing Dell's carbon intensity by 15 percent by 2012 requiring primary suppliers to report carbon emissions data during quarterly business reviews partnering with customers to build the "greenest PC on the planet" expanding the company's carbon-offsetting program, "Plant a Tree for Me" Dell reports its environmental performance in an annual Corporate Social Responsibility (CSR) Report that follows the Global Reporting Initiative (GRI) protocol. Dell's 2008 CSR report ranked as "Application Level B" as "checked by GRI". 
The company aims to reduce its external environmental impact through an energy-efficient evolution of products, and also to reduce its direct operational impact through energy-efficiency programs. Criticism In the 1990s, Dell switched from using primarily ATX motherboards and power supplies to using boards and power supplies with mechanically identical but differently wired connectors. This meant customers wishing to upgrade their hardware would have to replace parts with scarce Dell-compatible parts instead of commonly available parts. While motherboard power connections reverted to the industry standard in 2003, Dell remains secretive about its motherboard pin-outs for peripherals (such as MMC readers and power on/off switches and LEDs). In 2005, complaints about Dell more than doubled to 1,533, after earnings grew 52% that year. In 2006, Dell acknowledged that it had problems with customer service. Issues included call transfers of more than 45% of calls and long wait times. Dell's blog detailed the response: "We're spending more than a $100 million—and a lot of blood, sweat, and tears of talented people—to fix this." Later in the year, the company increased its spending on customer service to $150 million. Since 2018, Dell has seen a significant increase in customer satisfaction. Moreover, its customer service has been praised for its prompt and accurate answers to most questions, especially those directed to its social media support. On August 17, 2007, Dell Inc. announced that after an internal investigation into its accounting practices it would restate and reduce earnings from 2003 through the first quarter of 2007 by a total amount of between $50 million and $150 million, or 2 cents to 7 cents per share. The investigation, begun in November 2006, resulted from concerns raised by the U.S. Securities and Exchange Commission over some documents and information that Dell Inc. had submitted. It was alleged that Dell had not disclosed large exclusivity payments received from Intel for agreeing not to buy processors from rival manufacturer AMD. In 2010, Dell paid $100 million to settle the SEC's charges of fraud. Michael Dell and other executives also paid penalties and suffered other sanctions, without admitting or denying the charges. In July 2009, Dell apologized after drawing the ire of the Taiwanese Consumer Protection Commission for twice refusing to honor a flood of orders against unusually low prices offered on its Taiwanese website. In the first instance, Dell offered a 19" LCD panel for $15. In the second instance, Dell offered its Latitude E4300 notebook at NT$18,558 (US$580), 70% lower than the usual price of NT$60,900 (US$1,900). Concerning the E4300, rather than honor the discount and take a significant loss, the firm withdrew the orders and offered each customer a voucher of up to NT$20,000 (US$625) in compensation. The consumer rights authorities in Taiwan fined Dell NT$1 million (US$31,250) for customer rights infringements. Many consumers sued the firm for unfair compensation. A court in southern Taiwan ordered the firm to deliver 18 laptops and 76 flat-panel monitors to 31 consumers for NT$490,000 (US$15,120), less than a third of the normal price. The court said the events could hardly be regarded as mistakes, as the firm had mispriced its products twice on its Taiwanese website within three weeks.
After Michael Dell made a $24.4 billion buyout bid in August 2013, activist shareholder Carl Icahn sued the company and its board in an attempt to derail the bid and promote his own forthcoming offer. In 2020, the Australian Strategic Policy Institute accused at least 82 major brands, including Dell, of being connected to forced Uyghur labor in Xinjiang. See also Dell laptops List of computer system manufacturers List of Dell ownership activities Configurator Mass customization References Further reading Dell Company Information Michael Dell, Catherine Fredman, Direct From Dell, Dell as the seventh-most-admired computer company in the USA, eighth overall, and seventh worldwide. Fortune, Most Admired Companies 2006. BBC News, August 21, 2003, Dell makes grab for market share USA Today, January 20, 2001, Dell business model turns to muscle as rivals struggle Ubuntu Forums, June 7, 2007, Dell's with Ubuntu called Dellbuntu External links Computer companies of the United States Computer hardware companies Computer systems companies Consumer electronics brands American brands Cloud computing providers Display technology companies Home computer hardware companies Mobile phone manufacturers Netbook manufacturers Networking hardware companies Online retailers of the United States Multinational companies headquartered in the United States Manufacturing companies based in Austin, Texas Companies based in Round Rock, Texas Computer companies established in 1984 Electronics companies established in 1984 1984 establishments in Texas Companies formerly listed on the Nasdaq Privately held companies based in Texas Silver Lake (investment firm) companies 1980s initial public offerings Round Rock, Texas
Dell
Technology
13,162
60,264,107
https://en.wikipedia.org/wiki/EURO%20Journal%20on%20Decision%20Processes
The EURO Journal on Decision Processes (EJDP) is a peer-reviewed academic journal that was established in 2012 and originally published by Springer Science+Business Media; since 2021, the journal has been published by Elsevier. It is an official journal of the Association of European Operational Research Societies, publishing scientific knowledge on the theoretical, methodological, behavioural and organizational topics that contribute to the understanding and appropriate use of operational research in supporting different phases of decision-making processes. The editor-in-chief is Jutta Geldermann; past editors-in-chief are Vincent Mousseau (2016–2021) and Ahti Salo (2012–2016). References External links Behavioural sciences Operations research English-language journals Academic journals established in 2012
EURO Journal on Decision Processes
Mathematics,Biology
153
24,053,951
https://en.wikipedia.org/wiki/C20H25N3O2
The molecular formula C20H25N3O2 (molar mass: 339.43 g/mol, exact mass: 339.1947 u) may refer to: Methylergometrine (also known as methylergonovine); WAY-317,538 (SEN-12333). Molecular formulas
C20H25N3O2
Physics,Chemistry
64
14,241,105
https://en.wikipedia.org/wiki/Van%20der%20Waals%20surface
The van der Waals surface of a molecule is an abstract representation or model of that molecule, illustrating where, in very rough terms, a surface might reside for the molecule based on the hard cutoffs of van der Waals radii for individual atoms, and it represents a surface through which the molecule might be conceived as interacting with other molecules. Also referred to as a van der Waals envelope, the van der Waals surface is named for Johannes Diderik van der Waals, a Dutch theoretical physicist and thermodynamicist who developed theory to provide a liquid-gas equation of state that accounted for the non-zero volume of atoms and molecules and for their exhibiting an attractive force when they interacted (theoretical constructions that also bear his name). van der Waals surfaces are therefore a tool used in the abstract representations of molecules, whether accessed, as they were originally, via hand calculation, or via physical wood/plastic models, or now digitally, via computational chemistry software. Practically speaking, CPK models, developed by and named for Robert Corey, Linus Pauling, and Walter Koltun, were the first widely used physical molecular models based on van der Waals radii, and allowed broad pedagogical and research use of a model showing the van der Waals surfaces of molecules. van der Waals volume and van der Waals surface area Related to the title concept are the ideas of a van der Waals volume, Vw, and a van der Waals surface area, abbreviated variously as Aw, vdWSA, VSA, and WSA. A van der Waals surface area is an abstract conception of the surface area of atoms or molecules from a mathematical estimation, either computing it from first principles or by integrating over a corresponding van der Waals volume. In the simplest case, for a spherical monatomic gas, it is simply the computed surface area of a sphere of radius rw equal to the van der Waals radius of the gaseous atom: Aw = 4πrw². The van der Waals volume, a type of atomic or molecular volume, is a property directly related to the van der Waals radius, and is defined as the volume occupied by an individual atom, or in a combined sense, by all atoms of a molecule. It may be calculated for atoms if the van der Waals radius is known, and for molecules if its atoms' radii and the inter-atomic distances and angles are known. As above, in the simplest case, for a spherical monatomic gas, Vw is simply the computed volume of a sphere of radius equal to the van der Waals radius of the gaseous atom: Vw = (4/3)πrw³. For a molecule, Vw is the volume enclosed by the van der Waals surface; hence, computation of Vw presumes the ability to describe and compute a van der Waals surface. van der Waals volumes of molecules are always smaller than the sum of the van der Waals volumes of their constituent atoms, due to the fact that the interatomic distances resulting from chemical bonding are less than the sum of the atomic van der Waals radii. In this sense, a van der Waals surface of a homonuclear diatomic molecule can be viewed as a pictorial overlap of the two spherical van der Waals surfaces of the individual atoms, and likewise for larger molecules like methane, ammonia, etc. (see images). 
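As a minimal illustration of the single-atom formulas above (not taken from the article itself), the following Python sketch computes Aw and Vw from a given van der Waals radius; the radius used for argon (about 1.88 Å) is a commonly tabulated value, included here purely as an example.

```python
import math

def vdw_sphere_properties(r_w_angstrom: float) -> tuple[float, float]:
    """Return (surface area in Å^2, volume in Å^3) of a single spherical atom
    whose van der Waals radius is r_w_angstrom (in Å).

    This covers only the monatomic, hard-sphere case described above; molecular
    van der Waals surfaces require handling overlapping atomic spheres.
    """
    area = 4.0 * math.pi * r_w_angstrom ** 2             # A_w = 4·π·r_w²
    volume = (4.0 / 3.0) * math.pi * r_w_angstrom ** 3   # V_w = (4/3)·π·r_w³
    return area, volume

# Example with a commonly tabulated radius for argon, r_w ≈ 1.88 Å.
area, volume = vdw_sphere_properties(1.88)
print(f"A_w ≈ {area:.1f} Å², V_w ≈ {volume:.1f} Å³")  # ≈ 44.4 Å², 27.8 Å³
```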
van der Waals radii and volumes may be determined from the mechanical properties of gases (the original method, determining the van der Waals constant), from the critical point (e.g., of a fluid), from crystallographic measurements of the spacing between pairs of unbonded atoms in crystals, or from measurements of electrical or optical properties (i.e., polarizability or molar refractivity). In all cases, measurements are made on macroscopic samples and results are expressed as molar quantities. van der Waals volumes for a single atom or molecule are arrived at by dividing the macroscopically determined volumes by the Avogadro constant. The various methods give radius values which are similar, but not identical—generally within 1–2 Å (100–200 pm). Useful tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will be seen to present different values for the van der Waals radius of the same atom. As well, it has been argued that the van der Waals radius is not a fixed property of an atom in all circumstances; rather, it will vary with the chemical environment of the atom. Gallery See also Molecular surface (disambiguation) van der Waals force van der Waals molecule van der Waals radius van der Waals strain References and notes Further reading DC Whitley, van der Waals surface graphs and molecular shape, Journal of Mathematical Chemistry, Volume 23, Numbers 3-4, 1998, pp. 377–397(21). M. Petitjean, On the Analytical Calculation of van der Waals Surfaces and Volumes: Some Numerical Aspects, Journal of Computational Chemistry, Volume 15, Number 5, 1994, pp. 507–523. External links VSAs for various molecules by Anton Antonov, The Wolfram Demonstrations Project, 2007. van der Waals radii, Structural Biology Glossary, Image Library of Biological Macromolecules. Analytical calculation of van der Waals surfaces and volumes. Intermolecular forces Physical chemistry Surface
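To make the macroscopic-to-per-molecule step above concrete, here is a small Python example (not from the article): it converts an assumed molar van der Waals volume, given in cm³/mol, into an average volume per molecule in Å³ by dividing by the Avogadro constant. The 20 cm³/mol input is a made-up illustrative figure, not a measured value.

```python
AVOGADRO = 6.02214076e23  # 1/mol (exact value in the 2019 SI definition)

def molar_to_molecular_volume(v_molar_cm3_per_mol: float) -> float:
    """Convert a molar van der Waals volume (cm^3/mol) to a per-molecule
    volume in cubic ångströms (1 cm^3 = 1e24 Å^3)."""
    return v_molar_cm3_per_mol * 1e24 / AVOGADRO

# Hypothetical molar volume of 20 cm^3/mol, used only to show the arithmetic.
print(f"{molar_to_molecular_volume(20.0):.1f} Å^3 per molecule")  # ≈ 33.2 Å^3
```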
Van der Waals surface
Physics,Chemistry,Materials_science,Engineering
1,131
33,884,098
https://en.wikipedia.org/wiki/David%20Phillips%20%28chemist%29
David Phillips (born 3 December 1939) is a British chemist specialising in photochemistry and lasers, and was president of the Royal Society of Chemistry from 2010 to 2012. Education and early life Phillips was born 3 December 1939 in Kendal, lived in South Shields and attended the Grammar School. He studied at the University of Birmingham, attaining a BSc and PhD. Career and research Phillips began his career doing postdoctoral research at the University of Texas at Austin and the Academy of Sciences of the USSR. He was appointed a lecturer in chemistry at the University of Southampton, rising to the status of Reader before becoming Wolfson Professor of Natural Philosophy at the Royal Institution. In 1981, Phillips became a founding member of the World Cultural Council. In 1989 he moved to Imperial College London as professor of physical chemistry and held a number of senior posts there. In 1987 he gave the Royal Institution Christmas Lectures on television. He was appointed Officer of the Order of the British Empire (OBE) in 1999 and Commander of the Order of the British Empire (CBE) in the 2012 New Year Honours for services to chemistry. In May 2011 he was the guest on Desert Island Discs and in June 2012 was Michael Berkeley's guest on Private Passions. Views on nuclear power Ahead of the 50th anniversary of the 1962 James Bond film Dr No, Phillips stated that the character of Dr No, with his personal nuclear reactor, helped to create a "remorselessly grim" reputation for atomic energy, and that the popularity of the movie created an enduringly negative image of nuclear power – as something dangerous that could be wielded by megalomaniacs with aspirations to world domination. Phillips claims that when nuclear power is discussed "it is not at all surprising that the public at home and abroad are sceptical" and concludes that "The Royal Society of Chemistry asserts that nuclear power has to be part of the future national energy mix, in which it plays a major role. Fossil fuels have to be eradicated for people to live in a healthy environment. Let's say yes to nuclear and no to Dr No's nonsense." Awards and honours Phillips received the Porter Medal in 2010 and was elected a Fellow of the Royal Society (FRS) in 2015. References Living people 1939 births British physical chemists Commanders of the Order of the British Empire Fellows of the Royal Society of Chemistry Photochemists Alumni of the University of Birmingham Academics of the University of Southampton Academics of Imperial College London Fellows of the Royal Society Presidents of the Royal Society of Chemistry Founding members of the World Cultural Council Deans of the Imperial College Faculty of Natural Sciences
David Phillips (chemist)
Chemistry
526
57,589,493
https://en.wikipedia.org/wiki/Xiaomi%20Mi%208
The Xiaomi Mi 8 is a flagship Android smartphone developed by Xiaomi Inc. It was launched at an event held in Shenzhen, China, as the successor to the Xiaomi Mi 6. The naming of the Xiaomi Mi 8 (skipping the Mi 7) is in celebration of Xiaomi Inc's eighth anniversary. The Mi 8 draws parallels to the iPhone X, as its design replicates both the rear and the front of that phone. This design was later carried on to the mid-range Redmi Note 6 Pro and Mi A2 Lite. Specifications Hardware The Xiaomi Mi 8 is powered by the Qualcomm Snapdragon 845 processor, with 6 GB of LPDDR4X RAM and an Adreno 630 GPU. It has a Full HD+ AMOLED display. Storage options are 64 GB and 128 GB. The handset features a fingerprint scanner on the rear, or under the display in the Explorer Edition. It features a 3,400 mAh battery with a reversible USB-C connector that supports Quick Charge 4.0+. It has Gorilla Glass 5. It does not feature a 3.5 mm headphone jack; a USB-C to 3.5 mm headphone jack adapter is provided in the box. The Mi 8 includes a dual camera setup with a 12 MP wide-angle sensor and a 12 MP telephoto sensor. The front camera has a 20 MP sensor with an aperture of f/2.0. The Mi 8 camera has an overall score of 99 and a photo score of 105 on DxOMark. The Explorer Edition also introduces 3D optical facial recognition with an IR sensor for dark conditions, and dual-band GNSS, which allows reception of L1 and L5 signals simultaneously. Software It runs on Android 10, with Xiaomi's custom MIUI 11 skin, which is upgradeable to MIUI 12. References Android (operating system) devices Mobile phones introduced in 2018 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Discontinued flagship smartphones Xiaomi smartphones
Xiaomi Mi 8
Technology
422
9,124,553
https://en.wikipedia.org/wiki/Generalized%20assignment%20problem
In applied mathematics, the maximum generalized assignment problem is a problem in combinatorial optimization. This problem is a generalization of the assignment problem in which both tasks and agents have a size. Moreover, the size of each task might vary from one agent to the other. This problem in its most general form is as follows: There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost and profit that may vary depending on the agent-task assignment. Moreover, each agent has a budget and the sum of the costs of tasks assigned to it cannot exceed this budget. It is required to find an assignment in which no agent exceeds its budget and the total profit of the assignment is maximized. In special cases In the special case in which all the agents' budgets and all tasks' costs are equal to 1, this problem reduces to the assignment problem. When the costs and profits of all tasks do not vary between different agents, this problem reduces to the multiple knapsack problem. If there is a single agent, then this problem reduces to the knapsack problem. Explanation of definition In the following, we have n kinds of items, a1 through an, and m kinds of bins, b1 through bm. Each bin bi is associated with a budget ti. For a bin bi, each item aj has a profit pij and a weight wij. A solution is an assignment from items to bins. A feasible solution is a solution in which for each bin bi the total weight of assigned items is at most ti. The solution's profit is the sum of profits for each item-bin assignment. The goal is to find a maximum profit feasible solution. Mathematically the generalized assignment problem can be formulated as an integer program: maximize the sum of pij·xij over all bins bi and items aj, subject to the constraints that the sum of wij·xij over the items is at most ti for every bin bi, that the sum of xij over the bins is at most 1 for every item aj, and that every xij is either 0 or 1 (where xij = 1 means that item aj is assigned to bin bi). Complexity The generalized assignment problem is NP-hard. However, there are linear-programming relaxations which give a (1 − 1/e)-approximation. Greedy approximation algorithm For the problem variant in which not every item must be assigned to a bin, there is a family of algorithms for solving the GAP by using a combinatorial translation of any algorithm for the knapsack problem into an approximation algorithm for the GAP. Using any α-approximation algorithm ALG for the knapsack problem, it is possible to construct a (1 + α)-approximation for the generalized assignment problem in a greedy manner using a residual profit concept. The algorithm constructs a schedule in iterations, where during iteration i a tentative selection of items for bin bi is made. The selection for bin bi might change as items might be reselected in a later iteration for other bins. The residual profit of an item aj for bin bi is pij if aj is not selected for any other bin, or pij − pkj if aj is currently selected for bin bk. Formally: We use a vector T to indicate the tentative schedule during the algorithm. Specifically, T[j] = i means that item aj is scheduled on bin bi and T[j] = −1 means that item aj is not scheduled. The residual profit in iteration i is denoted by Pi, where Pi[j] = pij if item aj is not scheduled (i.e. T[j] = −1) and Pi[j] = pij − pkj if item aj is scheduled on bin bk (i.e. T[j] = k). Formally: Set T[j] = −1 for all j = 1...n. For i = 1,...,m do: Call ALG to find a solution for bin bi using the residual profit function Pi. Denote the selected items by Si. Update T using Si, i.e., set T[j] = i for all aj in Si. See also Assignment problem References Further reading NP-complete problems Combinatorial optimization
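To make the greedy residual-profit scheme concrete, here is a small Python sketch (not from the article, and not an optimized implementation). It uses an exact dynamic-programming knapsack solver in the role of ALG, assumes integer weights and budgets, and maintains the tentative-schedule vector T described above; all function and variable names are my own.

```python
def knapsack(profits, weights, budget):
    """Exact 0/1 knapsack by dynamic programming over integer capacities.
    Returns the set of selected item indices (plays the role of ALG)."""
    n = len(profits)
    # best[c] = (total profit, chosen index set) achievable with capacity c
    best = [(0.0, frozenset()) for _ in range(budget + 1)]
    for j in range(n):
        if profits[j] <= 0 or weights[j] > budget:
            continue  # items with non-positive residual profit never help
        for c in range(budget, weights[j] - 1, -1):
            cand_profit = best[c - weights[j]][0] + profits[j]
            if cand_profit > best[c][0]:
                best[c] = (cand_profit, best[c - weights[j]][1] | {j})
    return max(best, key=lambda t: t[0])[1]

def greedy_gap(profit, weight, budget):
    """Greedy GAP heuristic: profit[i][j] and weight[i][j] for bin i / item j,
    budget[i] per bin. Returns T with T[j] = bin index of item j, or -1."""
    m, n = len(budget), len(profit[0])
    T = [-1] * n  # tentative schedule
    for i in range(m):
        # residual profit of item j for bin i in this iteration
        resid = [profit[i][j] - (profit[T[j]][j] if T[j] != -1 else 0.0)
                 for j in range(n)]
        for j in knapsack(resid, weight[i], budget[i]):
            T[j] = i  # item j is (re)assigned to bin i
    return T

# Tiny made-up instance: 2 bins, 3 items.
profit = [[6, 4, 3], [5, 7, 2]]
weight = [[2, 3, 1], [3, 2, 2]]
budget = [4, 4]
print(greedy_gap(profit, weight, budget))  # e.g. [0, 1, 0], total profit 16
```

Because the knapsack subroutine here is solved exactly (α = 1), this sketch corresponds to the (1 + α) = 2-approximation case of the scheme; plugging in an approximate knapsack algorithm changes only the guarantee, not the structure of the loop.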
Generalized assignment problem
Mathematics
658
14,903,178
https://en.wikipedia.org/wiki/Yield%20gap
The yield gap or yield ratio is the ratio of the dividend yield of an equity to the yield of a long-term government bond. Typically equities have a higher yield (as a percentage of the market price of the equity), reflecting the higher risk of holding an equity. The purpose of calculating the yield gap is to assess whether the equity is overpriced or underpriced compared to bonds. For a given equity, the following cases may be considered: If the yield gap is numerically small, then the equity yield is lower than the bond yield, implying that the equity is overpriced. If the yield gap is numerically large, then the equity yield is higher than the bond yield, implying that the equity is cheap. See also Yield (finance) References Financial ratios
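As a minimal numerical illustration (not from the article), the Python sketch below computes the yield ratio from a hypothetical 3% equity dividend yield and a 4% long-term government bond yield; both figures are made up purely to show the arithmetic.

```python
def yield_gap(equity_dividend_yield: float, bond_yield: float) -> float:
    """Yield gap (yield ratio): equity dividend yield divided by the
    long-term government bond yield. Both inputs as decimal fractions."""
    return equity_dividend_yield / bond_yield

# Hypothetical figures: 3% dividend yield vs. a 4% long-term bond yield.
ratio = yield_gap(0.03, 0.04)
print(f"yield ratio = {ratio:.2f}")  # 0.75: the equity yields less than the bond,
# which, on this measure alone, suggests the equity is relatively expensive.
```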
Yield gap
Mathematics
155