Column        Type     Range
id            int64    580 to 79M
url           string   lengths 31 to 175
text          string   lengths 9 to 245k
source        string   lengths 1 to 109
categories    string   160 distinct classes
token_count   int64    3 to 51.8k
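The schema above describes a document-level corpus. As a minimal sketch of how such a dataset could be loaded and filtered with the Hugging Face datasets library: the dataset identifier below is a placeholder, since the card does not name one.

```python
# Minimal sketch, assuming the corpus is hosted on the Hugging Face Hub.
# "user/wiki-corpus" is a placeholder identifier, not taken from this card.
from datasets import load_dataset

ds = load_dataset("user/wiki-corpus", split="train")

# Columns per the schema above: id, url, text, source, categories, token_count.
short = ds.filter(lambda row: row["token_count"] < 1_000)
print(short[0]["source"], short[0]["url"], short[0]["token_count"])
```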
2,858,696
https://en.wikipedia.org/wiki/Tau1%20Aquarii
Tau1 Aquarii, Latinized from τ1 Aquarii, is the Bayer designation for a single star in the equatorial constellation of Aquarius. With an apparent visual magnitude of 5.66, it is a faint naked-eye star that requires dark suburban skies for viewing. Parallax measurements made during the Hipparcos mission provide an estimate of its distance from Earth. The star is drifting further away with a radial velocity of +15 km/s. It is a candidate member of the Pisces-Eridanus stellar stream. The stellar classification of τ1 Aquarii is B9 V, placing it right along the borderline between B- and A-type main-sequence stars. It is a candidate silicon star, a type of Ap star of class CP2 that shows a magnetic field. It is around 100 million years old and is spinning rapidly with a projected rotational velocity of 185 km/s. The star has 2.7 times the mass of the Sun and double the Sun's radius. It is radiating 63.5 times the luminosity of the Sun from its photosphere at an effective temperature of 10,617 K. When examined in the infrared band, it displays an excess emission that is a characteristic of stars with an orbiting debris disk. The model that best fits the data suggests there are two concentric circumstellar disks.
Tau1 Aquarii
Astronomy
348
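The Tau1 Aquarii figures above can be cross-checked with the Stefan–Boltzmann relation. Taking the quoted luminosity and effective temperature, and a standard solar effective temperature of 5772 K (an assumption, not a value from the entry):

\[
\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4}
\quad\Rightarrow\quad
\frac{R}{R_\odot} = \sqrt{\frac{63.5}{\left(10{,}617/5772\right)^{4}}} \approx 2.4,
\]

which is consistent with the entry's rounded statement that the star has about double the Sun's radius.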
20,215,176
https://en.wikipedia.org/wiki/Akpeteshie
Akpeteshie is a liquor produced by distilling palm wine or sugar cane juice, primarily in West Africa. It is the national spirit of Ghana. In Nigeria it is known as Ògógóró (Ogog'), a Yoruba word, and is usually distilled locally from fermented raffia palm juice; it is regarded as the country's homebrew. Today there is a misconception that Ogogoro can be pure ethanol, but traditionally it had to come from the palm tree and be distilled from that source. It is popular throughout West Africa and goes by many names, including apio, ogoglo, ogogoro (Ogog'), VC10, Kill Me Quick, Efie Nipa, Kele, Kumepreko, Anferewoase, Apiatiti, Home Boy, Nana Drobo, and One Touch, among others. It is also known as sapele water, kparaga, kai-kai, Sun gbalaja, egun inu igo (meaning "the masquerade in the bottle"), push-me-push-you, crim-kena, and sonsé ("do you do it?" in Yoruba). It also has names in the Igbo and Urhobo languages; other Nigerian epithets include OHMS (Our Home Made Stuff), Iced Water, Push Me I Push You, and Craze Man in the Bottle. Ghanaian moonshine is referred to as akpeteshie. History and origins Before the advent of European colonization of what is today Ghana, the Anlo brewed a local spirit also known as "kpótomenui," meaning "something hidden in a coconut mat fence." With British colonization of what became known as the Gold Coast, such local brewing was outlawed in the early 1930s. According to a 1996 interview with S.S. Dotse about his life under British colonial rule: "Our contention was that the drink the white man brought is the same as ours. The white men's contention was that ours was too strong... Before the white men came we were using akpeteshie. But when they came they banned it, probably because they wanted to make sales on their own liquor. And so we were calling it kpótomenui. When you had a visitor whom you knew very well, then you ordered that kpótomenui be brought. This is akpeteshie, but it was never referred to by name." The name "akpeteshie" was given to the drink with its prohibition: the word comes from the Ga language (ape te shie, the act of hiding) spoken in greater Accra and means 'they are hiding', referring to the secretive way in which non-European inhabitants were forced to consume the beverage. Despite being outlawed, illicit spirits remained commonplace, with reports that even schoolboys were able to easily obtain akpeteshie through the 1930s. Demand for akpeteshie and the profits to be made from its sale were enough to encourage the spread of sugar cane cultivation in the Anlo region of Ghana. Distillation was legalized with decolonization and Ghanaian independence. The first factory was established in the Volta Region, taking advantage of the area's supply of sugar cane plantations. Preparation Ogogoro is distilled from the juice of raffia palm trees. An incision is made in the trunk and a gourd is placed there to collect the sap, which is gathered a day or two later. After extraction, the sap is boiled to form vapor, which subsequently condenses and is collected for consumption. Ogogoro is not synthetic ethanol; it is tapped from a natural source and then distilled. Brewing Akpeteshie is distilled from palm wine or sugarcane juice. This sweetened liquid or wine is first fermented in large barrels, sometimes with the help of yeast. After this first stage of fermentation, fires are built under the barrels to bring the liquid to a boil; the resulting vapor passes through a copper pipe running through cooling barrels, where it condenses and drips into sieved jars. This distillation yields a spirit of between 40 and 50% alcohol by volume. Packaging and consumption Akpeteshie is not professionally bottled or sealed, but instead poured into unlabeled used bottles. The spirit can be bought wholesale from a brewer or by the glass at boutiques and bars. Although not professionally advertised, the drink is very popular, partly because its price is lower than that of professionally bottled or imported drinks. Its relative cheapness makes it a drink associated more with the poor, but even those who can afford better-quality drinks are said to consume the spirit in secret. The potency of the liquor heavily affects the bodily senses, providing a feeling likened to a knockout punch; practiced drinkers can be seen blowing out air or pounding their chests after a shot. Social significance As drink and commodity, ogogoro carries substantial cultural and economic significance within Nigeria. It is an essential part of numerous religious and social ceremonies; Burutu (Ijaw) priests pour it onto the ground as an offering to contact their gods, while fathers of Nigerian brides use it as a libation by which they provide their official blessing to a wedding. The economic facets of ogogoro have been equally salient throughout recent Nigerian history. Many poor Nigerian families homebrew the drink as a means of economic subsistence, and many sell shots of it on city street corners. The criminalization of ogogoro under the colonial regime is also believed to have been largely economic in motivation: while the public justifications for the law concerned public health and Christian beliefs about alcohol, it has been argued that colonial officials were also seeking to suppress local economic activity that might draw money or labor away from the colonial system.
Akpeteshie
Chemistry
1,269
2,202,360
https://en.wikipedia.org/wiki/TUGboat
TUGboat (DOI prefix 10.47397) is a journal published three times per year by the TeX Users Group. It covers a wide range of topics in digital typography relevant to the TeX typesetting system. Its editor is Barbara Beeton. See also The PracTeX Journal.
TUGboat
Mathematics,Technology
89
61,659,788
https://en.wikipedia.org/wiki/RCA%201600
The RCA 1600 is a discontinued 16-bit minicomputer designed and built by RCA in West Palm Beach, Florida and Marlboro, Massachusetts. It was developed to meet the needs of several RCA divisions, including the Graphics Systems Division (GSD), Instructional Systems, and Global Communications. It was introduced in 1968, and at the time of UNIVAC's purchase of the RCA Computer Division in 1972 the 1600 was estimated to be in use by 40 customers. The 1600 was intended for use in embedded systems, and was retained by UNIVAC and used in products such as the Accuscan supermarket checkout system in the 1970s. Description The 1600 uses magnetic-core memory with a cycle time of 1.6 μs, structured as words of 18 bits—16 data bits, one parity bit, and one memory-protection bit. Four configurations offered memory sizes of 8 K, 16 K, 32 K, and 64 K bytes (4, 8, 16, and 32 K words). Individual words of memory can be protected by setting the associated protection bit; attempts to store into protected memory are trapped if memory protection is enabled by a console switch. The processor has sixteen 16-bit "standard" registers, eight for each program state. Program state one is used for normal execution; program state two is used for interrupt service routines. Because each state has an independent set of registers, switching states can be done "essentially instantaneously." Register 8 is the instruction counter in both states. If high-speed I/O (cycle stealing) is used, registers 6 and 7 in program state two are used for the I/O address and byte count, respectively. The architecture defines 29 instructions in three groups. All instructions are 16 bits, must be located on a word boundary, and therefore can be fetched in one machine cycle of 1.6 μs. There are also seven "special" registers serving particular functions, which can also be read and written programmatically.
RCA 1600
Technology
427
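The RCA 1600's 18-bit word format and store-protection trap described above can be made concrete with a small model. This is an illustrative sketch only: the parity polarity (odd parity here), the trap mechanism, and all names are assumptions, not details from RCA documentation.

```python
# Illustrative model of an 18-bit core-memory word: 16 data bits plus a
# parity bit and a protection bit, per the RCA 1600 description above.
# Odd parity and the trap behavior are assumptions for this sketch.

DATA_MASK = 0xFFFF

class ProtectionTrap(Exception):
    """Raised on a store into a protected word while protection is enabled."""

def parity_bit(value: int) -> int:
    # Odd parity: set the bit so the total count of 1 bits is odd.
    return 0 if bin(value & DATA_MASK).count("1") % 2 else 1

class CoreMemory:
    def __init__(self, words: int, protect_enabled: bool = True):
        self.data = [0] * words            # 16 data bits per word
        self.protected = [False] * words   # per-word protection bit
        self.protect_enabled = protect_enabled  # models the console switch

    def store(self, addr: int, value: int) -> None:
        if self.protect_enabled and self.protected[addr]:
            raise ProtectionTrap(f"store into protected word {addr:#06x}")
        self.data[addr] = value & DATA_MASK

mem = CoreMemory(words=4096)       # a 4 KW configuration
mem.protected[0x0100] = True
mem.store(0x0200, 0xBEEF)          # permitted store
print(parity_bit(0xBEEF))          # parity bit stored alongside the data
try:
    mem.store(0x0100, 0x1234)      # store into a protected word
except ProtectionTrap as trap:
    print("trapped:", trap)
```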
1,266,404
https://en.wikipedia.org/wiki/Hypalon
Hypalon is a chlorosulfonated polyethylene (CSPE) synthetic rubber (CSM) noted for its resistance to chemicals, temperature extremes, and ultraviolet light. It was a product of DuPont Performance Elastomers, a subsidiary of DuPont. "Hypalon" as the name is used in the marine industry today refers to a remarketed version of the original material with an additional layer of neoprene (CR), so the newer formulation is designated CSM/CR. Chemical structure Polyethylene is treated with a mixture of chlorine and sulfur dioxide under UV radiation. The product contains 20–40% chlorine. The polymer also contains a few percent of chlorosulfonyl (ClSO2-) groups. These reactive groups allow for vulcanization, which strongly affects the physical durability of the products. An estimated 110,000 tons per year were produced in 1991. Discontinuance DuPont Performance Elastomers announced on May 7, 2009, that it intended to close its manufacturing plant in Beaumont, Texas, by June 30, 2009. This was DPE's sole plant for CSM materials; the company was therefore exiting the business for Hypalon and its related product, Acsium. The plant closure was delayed until April 20, 2010, in response to customer requests.
Hypalon
Chemistry
291
6,296,596
https://en.wikipedia.org/wiki/Current%20filament
A current filament is an inhomogeneity in the current density distribution lateral to the direction of the current flow (that is, orthogonal to the current density vector). It is common in devices showing current-type negative differential conductivity, especially of S-type (SNDC).
Current filament
Technology
68
966,255
https://en.wikipedia.org/wiki/Participatory%20design
Participatory design (originally co-operative design, now often co-design) is an approach to design attempting to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end users) in the design process to help ensure the result meets their needs and is usable. Participatory design is an approach focused on the processes and procedures of design; it is not a design style. The term is used in a variety of fields, e.g. software design, urban design, architecture, landscape architecture, product design, sustainability, graphic design, industrial design, planning, and health services development, as a way of creating environments that are more responsive and appropriate to their inhabitants' and users' cultural, emotional, spiritual and practical needs. It is also one approach to placemaking. Recent research suggests that designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own. Companies increasingly rely on their user communities to generate new product ideas, marketing them as "user-designed" products to the wider consumer market; consumers who do not actively participate but observe this user-driven approach show a preference for products from such firms over designer-driven ones. This preference is attributed to an enhanced identification with firms adopting a user-driven philosophy and to consumers feeling empowered by being indirectly involved in the design process. If consumers feel dissimilar to participating users, especially in demographics or expertise, the effects are weakened. Additionally, if a user-driven firm is only selectively open to user participation, rather than fully inclusive, observing consumers may not feel socially included, attenuating the identified preference. Participatory design has been used in many settings and at various scales. For some, this approach has a political dimension of user empowerment and democratization. This inclusion of external parties in the design process does not excuse designers of their responsibilities. In their article "Participatory Design and Prototyping", Wendy Mackay and Michel Beaudouin-Lafon support this point by stating that "[a] common misconception about participatory design is that designers are expected to abdicate their responsibilities as designers and leave the design to users. This is never the case: designers must always consider what users can and cannot contribute." In several Scandinavian countries, during the 1960s and 1970s, participatory design was rooted in work with trade unions; its ancestry also includes action research and sociotechnical design. Definition In participatory design, participants (putative, potential or future) are invited to cooperate with designers, researchers and developers during an innovation process. Co-design requires the end user's participation not only in decision making but also in idea generation. Potentially, they participate during several stages of the innovation process: during the initial exploration and problem definition, to help define the problem and focus ideas for a solution, and during development, to help evaluate proposed solutions.
Maarten Pieters and Stefanie Jansen describe co-design as part of a complete co-creation process, which refers to the "transparent process of value creation in ongoing, productive collaboration with, and supported by all relevant parties, with end-users playing a central role" and covers all stages of a development process. Differing terms In "Co-designing for Society", Deborah Szebeko and Lauren Tan list various precursors of co-design, starting with the Scandinavian participatory design movement, and then state "Co-design differs from some of these areas as it includes all stakeholders of an issue not just the users, throughout the entire process from research to implementation." In contrast, Elizabeth Sanders and Pieter Stappers state that "the terminology used until the recent obsession with what is now called co-creation/co-design" was "participatory design". They also discuss the differences between co-design and co-creation and how the two are "often confused and/or treated synonymously with one another". In their words, "Co-creation is a very broad term with applications ranging from the physical to the metaphysical and from the material to the spiritual", while they see "co-design [as] a specific instance of co-creation". Building on that idea of co-creation, the definition of co-design in the context of their paper developed into "the creativity of designers and people not trained in design working together in the design development process". Another term brought up in this article is front-end design, formerly known as pre-design. "The goal of the explorations in the front end is to determine what is to be designed and sometimes what should not be designed and manufactured"; the front end provides a space for the initial stages of co-design to take place. An alternative definition of co-design has been offered by Maria Gabriela Sanchez and Lois Frankel. They proposed that "Co-design may be considered, for the purpose of this study, as an interdisciplinary process that involves designers and non-designers in the development of design solutions" and that "the success of the interdisciplinary process depends on the participation of all the stakeholders in the project". "Co-design is a perfect example of interdisciplinary work, where designer, researcher, and user work collaboratively in order to reach a common goal. The concept of interdisciplinarity, however, becomes broader in this context where it not only results from the union of different academic disciplines, but from the combination of different perspectives on a problem or topic." Fourth Order Design Similarly, another perspective comes from Golsby-Smith's "Fourth Order Design", which outlines a design process in which end-user participation is required and which favours individual process over outcome. Buchanan's definition of culture as a verb is a key part of Golsby-Smith's argument in favour of fourth order design. In Buchanan's words, "Culture is not a state, expressed in an ideology or a body of doctrines. It is an activity. Culture is the activity of ordering, disordering and reordering in the search for understanding and for values which guide action." Therefore, to design for the fourth order one must design within the widest scope. The system is discussion, and the focus falls onto process rather than outcome.
The idea that culture and people are an integral part of participatory design is supported by the idea that a "key feature of the field is that it involves people or communities: it is not merely a mental place or a series of processes". "Just as a product is not only a thing, but exists within a series of connected processes, so these processes do not live in a vacuum, but move through a field of less tangible factors such as values, beliefs and the wider context of other contingent processes." Different dimensions As described by Sanders and Stappers, one could position co-design as a form of human-centered design across two different dimensions: one dimension is the emphasis on research or design; the other is how much people are involved. There are therefore many forms of co-design, with different degrees of emphasis on research or design and different degrees of stakeholder involvement. For instance, one form of co-design that involves stakeholders strongly and early, in the creative activities at the front end of the design process, is generative co-design. Generative co-design is increasingly being used to involve different stakeholders, such as patients, care professionals and designers, actively in the creative making process to develop health services. Another dimension to consider is the crossover between design research and education. An example of this is a study completed at the Middle East Technical University in Turkey, the purpose of which was to look into the use of "team development [in] enhancing interdisciplinary collaboration between design and engineering students using design thinking". The students in this study were tasked with completing a group project and reporting on the experience of working together. One of the main takeaways was that "Interdisciplinary collaboration is an effective way to address complex problems with creative solutions. However, a successful collaboration requires teams first to get ready to work in harmony towards a shared goal and to appreciate interdisciplinarity". History From the 1960s onward there was a growing demand for greater consideration of community opinions in major decision-making. In Australia, many people believed that they were not being planned 'for' but planned 'at' (Nichols 2009). A lack of consultation made the planning system seem paternalistic, without proper consideration of how changes to the built environment affected its primary users. In Britain, "the idea that the public should participate was first raised in 1965." However, the level of participation is an important issue. At a minimum, public workshops and hearings have now been included in almost every planning endeavour, yet this level of consultation can simply mean information about change without detailed participation. Involvement that 'recognises an active part in plan making' has not always been straightforward to achieve. Participatory design has attempted to create a platform for end users' active participation in the design process. History in Scandinavia Participatory design originated in Scandinavia, where it was called cooperative design. However, when the methods were presented to the US community, 'cooperation' was a word that did not resonate with the strong separation between workers and managers: they were not supposed to discuss ways of working face-to-face.
Hence 'participatory' was used instead, as the initial participatory design sessions were not a direct cooperation between workers and managers sitting in the same room discussing how to improve their work environment and tools; there were separate sessions for workers and managers. Each group was participating in the process, not directly cooperating. In Scandinavia, research projects on user participation in systems development date back to the 1970s. The so-called "collective resource approach" developed strategies and techniques for workers to influence the design and use of computer applications at the workplace: the Norwegian Iron and Metal Workers Union (NJMF) project took a first step from traditional research to working with people, directly changing the role of the union clubs in the project. The Scandinavian projects developed an action research approach, emphasizing active co-operation between researchers and workers of the organization to help improve the latter's work situation. While researchers got their results, the people whom they worked with were equally entitled to get something out of the project. The approach built on people's own experiences, providing resources for them to be able to act in their current situation. The view of organizations as fundamentally harmonious—according to which conflicts in an organization are regarded as pseudo-conflicts or "problems" dissolved by good analysis and increased communication—was rejected in favor of a view of organizations recognizing fundamental "un-dissolvable" conflicts in organizations (Ehn & Sandberg, 1979). In the Utopia project (Bødker et al., 1987; Ehn, 1988), the major achievements were the experience-based design methods, developed through the focus on hands-on experiences, emphasizing the need for technical and organizational alternatives (Bødker et al., 1987). The parallel Florence project (Gro Bjerknes & Tone Bratteteig) started a long line of Scandinavian research projects in the health sector. In particular, it worked with nurses and developed approaches for nurses to get a voice in the development of work and IT in hospitals. The Florence project put gender on the agenda with its starting point in a highly gendered work environment. The 1990s led to a number of projects, including the AT project (Bødker et al., 1993) and the EureCoop/EuroCode projects (Grønbæk, Kyng & Mogensen, 1995). In recent years, it has been a major challenge for participatory design to embrace the fact that much technology development no longer happens as design of isolated systems in well-defined communities of work (Beck, 2002). At the dawn of the 21st century, we use technology at work, at home, in school, and while on the move. Co-design As mentioned above, one definition of co-design states that it is the process of working with one or more non-designers throughout the design process. This method focuses on the insights, experiences and input of end-users of a product or service, with the aim of developing strategies for improvement. It is often used by trained designers who recognize the difficulty of properly understanding the cultural, societal, or usage scenarios encountered by their users. C. K. Prahalad and Venkat Ramaswamy are usually given credit for bringing co-creation/co-design to the minds of those in the business community with the 2004 publication of their book, The Future of Competition: Co-Creating Unique Value with Customers.
The phrase co-design is also used in reference to the simultaneous development of interrelated software and hardware systems. The term has become popular in mobile phone development, where the two perspectives of hardware and software design are brought into a co-design process. A finding directly related to integrating co-design into existing frameworks is that "researchers and practitioners have seen that co-creation practiced at the early front end of the design development process can have an impact with positive, long-range consequences." New role of the designer under co-design Co-design is an attempt to define a new evolution of the design process, and with that comes an evolution of the designer. Within the co-design process, the designer is required to shift from a role of expertise to an egalitarian mindset. The designer must believe that all people are capable of creativity and problem solving. The designer no longer works only in the isolated roles of researcher and creator, but must now shift to roles such as philosopher and facilitator. This shift allows the designer to position themselves and their designs within the context of the world around them, creating better awareness. This awareness is important because in the designer's attempt to answer a question, "[they] must address all other related questions about values, perceptions, and worldview". Therefore, by shifting the role of the designer, not only do the designs better address their cultural context, but so do the discussions around them. Discourses Discourses in the PD literature have been sculpted by three main concerns: (1) the politics of design, (2) the nature of participation, and (3) methods, tools and techniques for carrying out design projects (Finn Kensing & Jeanette Blomberg, 1998, p. 168). Politics of design The politics of design have been a concern for many design researchers and practitioners. Kensing and Blomberg illustrate the main concerns relating to the introduction of computer-based systems and the power dynamics that emerge within the workspace. The automation introduced by systems design has created concerns within unions and among workers, as it threatened their involvement in production and their ownership of their work situation. Asaro (2000) offers a detailed analysis of the politics of design and the inclusion of "users" in the design process. Nature of participation Major international organizations such as Project for Public Spaces create opportunities for rigorous participation in the design and creation of place, believing that it is the essential ingredient for successful environments. Rather than simply consulting the public, PPS creates a platform for the community to participate and co-design new areas which reflect their intimate knowledge, providing insights which independent design professionals, such as architects or even local government planners, may not have. Using a method called Place Performance Evaluation (or "Place Game"), groups from the community are taken to the site of proposed development, where they use their knowledge to develop design strategies which would benefit the community. "Whether the participants are schoolchildren or professionals, the exercise produces dramatic results because it relies on the expertise of people who use the place every day, or who are the potential users of the place."
This successfully engages with the ultimate idea of participatory design: the various stakeholders who will be the users of the end product are involved in the design process as a collective. Similar projects have had success in Melbourne, Australia, particularly in relation to contested sites, where design solutions are often harder to establish. The Talbot Reserve in the suburb of St. Kilda faced numerous problems of use, such as becoming a regular spot for sex workers and drug users to congregate. A 'Design In', which consulted a variety of key users in the community about what they wanted for the future of the reserve, allowed traditionally marginalised voices to participate in the design process. Participants described it as 'a transforming experience as they saw the world through different eyes' (Press, 2003, p. 62). This is perhaps the key attribute of participatory design: a process which allows multiple voices to be heard and involved in the design, resulting in outcomes that suit a wider range of users. It builds empathy within the system and among the users where it is implemented, which supports solving larger problems more holistically. As planning affects everyone, it is believed that "those whose livelihoods, environments and lives are at stake should be involved in the decisions which affect them" (Sarkissian and Perglut, 1986, p. 3). C. West Churchman said systems thinking "begins when first you view the world through the eyes of another". In the built environment Participatory design has many applications in development and changes to the built environment. It has particular currency among planners and architects in relation to placemaking and community regeneration projects. It potentially offers a far more democratic approach to the design process, as it involves more than one stakeholder; by incorporating a variety of views, there is greater opportunity for successful outcomes. Many universities and major institutions are beginning to recognise its importance. The UN's Global Studio involved students from Columbia University, the University of Sydney and Sapienza University of Rome in providing design solutions for Vancouver's Downtown Eastside, which suffered from drug- and alcohol-related problems. The process allowed cross-discipline participation from planners, architects and industrial designers, and focused on collaboration and the sharing of ideas and stories, as opposed to rigid and singular design outcomes (Kuiper, 2007, p. 52). Public interest design Public interest design is a design movement, extending to architecture, with the main aim of structuring design around the needs of the community. At the core of its application is participatory design. By allowing individuals to have a say in the design of their own surrounding built environment, design can become proactive and tailored towards addressing wider social issues facing that community. Public interest design is meant to reshape conventional modern architectural practice. Instead of having each construction project solely meet the needs of the individual, public interest design addresses wider social issues at their core. This shift in architectural practice is a structural and systemic one, allowing design to serve communities responsibly. Solutions to social issues can be addressed in a long-term manner through such design, serving the public and involving it directly in the process through participatory design.
The built environment can become the very reason social and community issues arise if it is not executed properly and responsibly. Conventional architectural practice often causes such problems, since only the paying client has a say in the design process. That is why many architects throughout the world are employing participatory design and practicing their profession more responsibly, encouraging a wider shift in architectural practice. Several architects have largely succeeded in disproving theories that deem public interest design and participatory design financially and organizationally unfeasible. Their work is setting the stage for the expansion of this movement, providing valuable data on its effectiveness and the ways in which it can be carried out. Difficulties of Adoption and Involvement Participatory design is a growing practice within the field of design, yet it has not been widely implemented. Some barriers to its adoption are listed below. Doubt of universal creativity A belief that creativity is a restricted skill would invalidate participatory design's proposal to allow a wider range of affected people to participate in the creative process of designing. However, this belief rests on a limited view of creativity that does not recognize that creativity can manifest in a wide range of activities and experiences. This doubt can be damaging not only to individuals but also to society as a whole: by assuming that only a select few possess creative talent, we may overlook the unique perspectives, ideas, and solutions of everyone else. Lack of technology in software-based cooperative design Cooperative design technology often assumes that all users have equal knowledge of the technology being used. For example, a collaborative 3D-design program may let multiple people design at the same time, but offer no support for guided help, such as showing another participant what to do through on-screen markings and text rather than speech. Collaborative programming tools have the same gap: they let multiple people program at the same time, but lack support for guidance such as inline textual instructions, hints from another user, or the ability to mark relevant parts of the screen. This is a problem in pair programming, where communication becomes a bottleneck; one should be able to mark, configure and guide another user regardless of that user's technical knowledge. Self-serving hierarchies In a profit-motivated system, the commercial field of design may be fearful of relinquishing some control in order to empower those who are typically not involved in the process of design. Commercial organizational structures often prioritize profit, individual gain, or status over the well-being of the community or other externalities. However, participatory practices are not impossible to implement in commercial settings. It may be difficult for those who have acquired success in a hierarchical structure to imagine alternative systems of open collaboration. Lack of investment Although participatory design has been of interest in design academia, applied uses require funding and dedication from many individuals. The high time and financial costs make research and development of participatory design less appealing for speculative investors. It also may be difficult to find or convince enough stakeholders or community members to commit their time and effort to a project. However, widespread and involved participation is critical to the process.
Successful examples of participatory design are critical because they demonstrate the benefits of this approach and inspire others to adopt it. A lack of funding or interest can cause participatory projects to revert to practices where the designer initiates and dominates rather than facilitating design by the community. Differing priorities between designers and participants Participatory design projects which involve a professional designer as a facilitator to a larger group can have difficulty with competing objectives. Designers may prioritize aesthetics while end-users may prioritize functionality and affordability. Addressing these differing priorities may involve finding creative solutions that balance the needs of all stakeholders, such as using low-cost materials that meet functional requirements while also being aesthetically pleasing. Despite any potential predetermined assumptions, "the users' knowledge has to be considered as important as the knowledge of the other professionals in the team, [as this] can be an obstacle to the co-design practice." "[The future of] co-designing will be a close collaboration between all the stakeholders in the design development process together with a variety of professionals having hybrid design/research skills." Emotional and ethical dimensions in participatory design Recent scholarship has highlighted the complex emotional landscape navigated by researchers engaged in participatory design, especially in contexts involving vulnerable or marginalized communities. Emotional challenges such as guilt and shame often emerge as researchers confront the disparity between their professional objectives and the lived realities of the communities they engage with. These emotions may stem from unmet expectations, perceived exploitation, or limited project impact. For instance, researchers may experience a sense of guilt when project outcomes fail to meet community needs or when research goals appear to benefit academic careers more than the communities themselves. The ethical dilemmas associated with balancing research agendas, funding constraints, and community needs can create a conflict between professional obligations and personal commitments, potentially leading to emotional burnout or moral distress. Consequently, there is a growing call within the field for frameworks that address these emotional aspects, advocate for ethical reflexivity, and promote sustained engagement strategies that align more closely with community well-being and autonomy. This perspective broadens the traditional scope of participatory design by acknowledging the emotional toll on researchers, thereby emphasizing the need for supportive structures that account for these emotional and ethical intricacies. From Community Consultation to Community Design Many local governments require community consultation in any major changes to the built environment. Community involvement in the planning process is almost a standard requirement in most strategic changes, and community involvement in local decision making creates a sense of empowerment. The City of Melbourne's Swanston Street redevelopment project received over 5,000 responses from the public, who were able to participate in the design process by commenting on seven different design options. The City of Yarra, meanwhile, recently held a "Stories in the Street" consultation to record people's ideas about the future of Smith Street.
It offered participants a variety of mediums with which to explore their opinions, such as mapping, photo surveys and storytelling. Although local councils are taking positive steps towards participatory design, as opposed to traditional top-down approaches to planning, many communities are moving to take design into their own hands. Portland, Oregon's City Repair Project is a form of participatory design which involves the community co-designing problem areas together to make positive changes to their environment. It involves collaborative decision-making and design without traditional involvement from local government or professionals; instead it runs on volunteers from the community. The process has created successful projects such as intersection repair, which saw a misused intersection develop into a successful community square. In Malawi, a UNICEF WASH programme trialled participatory design development for latrines in order to ensure that users participate in creating and selecting sanitation technologies that are appropriate and affordable for them. The process provided an opportunity for community members to share their traditional knowledge and skills in partnership with designers and researchers. Peer-to-peer urbanism is a form of decentralized, participatory design for urban environments and individual buildings. It borrows organizational ideas from the open-source software movement, so that knowledge about construction methods and urban design schemes is freely exchanged. In software development In the English-speaking world, the term has a particular currency in the world of software development, especially in circles connected to Computer Professionals for Social Responsibility (CPSR), who have put on a series of Participatory Design Conferences. It overlaps with the approach extreme programming takes to user involvement in design, but (possibly because of its European trade union origins) the participatory design tradition puts more emphasis on the involvement of a broad population of users rather than a small number of user representatives. Participatory design can be seen as a move of end-users into the world of researchers and developers, whereas empathic design can be seen as a move of researchers and developers into the world of end-users. There is a very significant differentiation between user-design and user-centered design in that user-design rests on an emancipatory theoretical foundation and a bedrock of systems theory (Ivanov, 1972, 1995). User-centered design is a useful and important construct, but one in which users are taken as centers of the design process and consulted heavily, without being allowed to make the decisions or being empowered with the tools that the experts use. For example, Wikipedia content is user-designed: users are given the necessary tools to make their own entries. Wikipedia's underlying wiki software, by contrast, is based on user-centered design: while users are allowed to propose changes or have input on the design, a smaller and more specialized group decides about features and system design. Participatory work in software development has historically tended toward two distinct trajectories, one in Scandinavia and northern Europe, and the other in North America. The Scandinavian and northern European tradition has remained closer to its roots in the labor movement (e.g., Beck, 2002; Bjerknes, Ehn, and Kyng, 1987).
The North American and Pacific Rim tradition has tended to be both broader (e.g., including managers and executives as "stakeholders" in design) and more circumscribed (e.g., design of individual features, as contrasted with the Scandinavian approach of designing entire systems and the work that the system is supposed to support) (e.g., Beyer and Holtzblatt, 1998; Noro and Imada, 1991). However, some more recent work has tended to combine the two approaches (Bødker et al., 2004; Muller, 2007). Research methodology Increasingly, researchers are focusing on co-design as a way of doing research, and are therefore developing parts of its research methodology. For instance, in the field of generative co-design, Vandekerckhove et al. have proposed a methodology for assembling a group of stakeholders to participate in generative co-design activities in the early innovation process. They propose first sampling a group of potential stakeholders through snowball sampling, then interviewing these people and assessing their knowledge and inference experience, and lastly assembling a diverse group of stakeholders according to that knowledge and inference experience. Though not completely synonymous, the research methods of participatory design can be placed under Participatory Research (PR): a term for research designs and frameworks using direct collaboration with those affected by the studied issue. More specifically, participatory design has evolved from Community-Based Research and Participatory Action Research (PAR). PAR is a qualitative research methodology involving "three types of change, including critical consciousness development of researchers and participants, improvement of lives of those participating in research, and transformation of societal 'decolonizing' research methods with the power of healing and social justice". Participatory Action Research is a subset of Community-Based Research aimed explicitly at including participants and empowering people to create measurable action. PAR is practiced across various disciplines, and research in participatory design is an application of its different qualitative methodologies. Just as PAR is often used in the social sciences, for example, to investigate a person's lived experience in relation to systemic structures and social power relations, participatory design seeks to deeply understand stakeholders' experiences by directly engaging them in the problem-defining and problem-solving processes. Therefore, in participatory design, research methods extend beyond simple qualitative and quantitative data collection. Rather than being concentrated within data collection, the research methods of participatory design are tools and techniques used throughout co-designing research questions, collecting, analyzing, and interpreting data, disseminating knowledge, and enacting change. When facilitating research in participatory design, decisions are made in all research phases to assess what will produce genuine stakeholder participation. By doing so, one of participatory design's goals is to dismantle the power imbalance existing between 'designers' and 'users.' Applying PR and PAR research methods seeks to engage communities and question power hierarchies, which "makes us aware of the always contingent character of our presumptions and truths... truths are logical, contingent and intersubjective... not directed toward some specific and predetermined end goal... committed to denying us the (seeming) firmness of our commonsensical assumptions".
Participatory design offers this denial of our "commonsensical assumptions" because it forces designers to consider knowledge beyond their craft and education. Therefore, a designer conducting research for participatory design assumes the role of facilitator and co-creator. See also Co-creation Computer-supported cooperative work Design thinking Participatory action research Permaculture Public participation Service design User innovation User participation in architecture (N.J. Habraken, Giancarlo De Carlo, and Structuralists such as Aldo van Eyck) Notes References Asaro, Peter M. (2000). "Transforming society by transforming technology: the science and politics of participatory design." Accounting Management and Information Technology 10: 257–290. Banathy, B.H. (1992). Comprehensive systems design in education: building a design culture in education. Educational Technology, 22(3), 33–35. Beck, E. (2002). P for Political - Participation is Not Enough. SJIS, Volume 14 – 2002. Belotti, V. and Bly, S. (1996). Walking away from the desktop computer: distributed collaboration and mobility in a product design team. In Proceedings of CSCW '96, Cambridge, Mass., November 16–20, ACM Press: 209–218. Beyer, H., and Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco: Morgan Kaufmann. Button, G. and Sharrock, W. (1996). Project work: the organisation of collaborative design and development in software engineering. CSCW Journal, 5(4), pp. 369–386. Bødker, S. and Iversen, O. S. (2002). Staging a professional participatory design practice: moving PD beyond the initial fascination of user involvement. In Proceedings of the Second Nordic Conference on Human-Computer Interaction (Aarhus, Denmark, October 19–23, 2002). NordiCHI '02, vol. 31. ACM Press, New York, NY, 11–18. Bødker, K., Kensing, F., and Simonsen, J. (2004). Participatory IT design: Designing for business and workplace realities. Cambridge, MA, USA: MIT Press. Bødker, S., Christiansen, E., Ehn, P., Markussen, R., Mogensen, P., & Trigg, R. (1993). The AT Project: Practical research in cooperative design, DAIMI No. PB-454. Department of Computer Science, Aarhus University. Bødker, S., Ehn, P., Kammersgaard, J., Kyng, M., & Sundblad, Y. (1987). A Utopian experience. In G. Bjerknes, P. Ehn, & M. Kyng (Eds.), Computers and democracy: A Scandinavian challenge (pp. 251–278). Aldershot, UK: Avebury. Carr, A.A. (1997). User-design in the creation of human learning systems. Educational Technology Research and Development, 45(3), 5–22. Carr-Chellman, A.A., Cuyar, C., & Breman, J. (1998). User-design: A case application in health care training. Educational Technology Research and Development, 46(4), 97–114. Divitini, M. & Farshchian, B.A. (1999). Using Email and WWW in a Distributed Participatory Design Project. SIGGROUP Bulletin 20(1), pp. 10–15. Ehn, P. & Kyng, M. (1991). Cardboard Computers: Mocking-it-up or Hands-on the Future. In Greenbaum, J. & Kyng, M. (Eds.), Design at Work, pp. 169–196. Hillsdale, New Jersey: Lawrence Erlbaum Associates. Ehn, P. (1988). Work-oriented design of computer artifacts. Falköping: Arbetslivscentrum/Almqvist & Wiksell International; Hillsdale, NJ: Lawrence Erlbaum Associates. Ehn, P. and Sandberg, Å. (1979). God utredning. In Sandberg, Å. (Ed.), Utredning och förändring i förvaltningen [Investigation and change in administration]. Stockholm: Liber. Grudin, J. (1993). Obstacles to Participatory Design in Large Product Development Organizations. In Namioka, A. & Schuler, D. (Eds.), Participatory design: Principles and practices (pp. 99–122). Hillsdale, NJ: Lawrence Erlbaum Associates. Grønbæk, K., Kyng, M. & Mogensen, P. (1993). CSCW challenges: Cooperative Design in Engineering Projects. Communications of the ACM, 36(6), pp. 67–77. Ivanov, K. (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems. The University of Stockholm and The Royal Institute of Technology. Doctoral dissertation. Ivanov, K. (1995). A subsystem in the design of informatics: Recalling an archetypal engineer. In B. Dahlbom (Ed.), The infological equation: Essays in honor of Börje Langefors (pp. 287–301). Gothenburg: Gothenburg University, Dept. of Informatics. Note #16. Kensing, F. & Blomberg, J. (1998). Participatory Design: Issues and Concerns. Computer Supported Cooperative Work, Vol. 7, pp. 167–185. Kensing, F. (2003). Methods and Practices in Participatory Design. ITU Press, Copenhagen, Denmark. Kuiper, Gabrielle (June 2007). Participatory planning and design in the downtown eastside: reflections on Global Studio Vancouver. Australian Planner, 44(2), pp. 52–53. Kyng, M. (1989). Designing for a dollar a day. Office, Technology and People, 4(2): 157–170. Muller, M.J. (2007). Participatory design: The third space in HCI (revised). In J. Jacko and A. Sears (Eds.), Handbook of HCI, 2nd Edition. Mahwah, NJ, USA: Erlbaum. Naghsh, A. M., Ozcan, M. B. (2004). Gabbeh - A Tool For Computer Supported Collaboration in Electronic Paper-Prototyping. In Dearden A. & Watts L. (Eds.), Proceedings of HCI '04: Design for Life, volume 2. British HCI Group, pp. 77–80. Näslund, T. (1997). Computers in Context - But in Which Context? In Kyng, M. & Mathiassen, L. (Eds.), Computers and Design in Context. MIT Press, Cambridge, MA, pp. 171–200. Nichols, Dave (2009). Planning Thought and History Lecture, The University of Melbourne. Noro, K., & Imada, A. S. (Eds.) (1991). Participatory ergonomics. London: Taylor and Francis. Perry, M. & Sanderson, D. (1998). Coordinating Joint Design Work: The Role of Communication and Artefacts. Design Studies, Vol. 19, pp. 273–28. Press, Mandy (2003). "Communities for Everyone: redesigning contested public places in Victoria". Chapter 9 of Weeks et al. (Eds.), Community Practices in Australia (Frenchs Forest, NSW: Pearson Sprint Print), pp. 59–65. Pan, Y. (2018). From Field to Simulator: Visualising Ethnographic Outcomes to Support Systems Developers. University of Oslo. Doctoral dissertation. Reigeluth, C. M. (1993). Principles of educational systems design. International Journal of Educational Research, 19(2), 117–131. Sarkissian, W. & Perglut, D. (1986). Community Participation in Practice, The Community Participation Handbook, Second edition, Murdoch University. Sanders, E. B. N., & Stappers, P. J. (2008). Co-creation and the new landscapes of design. CoDesign, 4(1), 5–18. Santa Rosa, J.G. & Moraes, A. (2012). Design Participativo: técnicas para inclusão de usuários no processo de ergodesign de interfaces. Rio de Janeiro: RioBooks. Schuler, D. & Namioka, A. (1993). Participatory design: Principles and practices. Hillsdale, NJ: Erlbaum. Trainer, Ted (1996). Towards a sustainable economy: The need for fundamental change. Envirobook/Jon Carpenter, Sydney/Oxford, pp. 135–167. Trischler, Jakob, Simon J. Pervan, Stephen J. Kelly and Don R. Scott (2018). The value of codesign: The effect of customer involvement in service design teams. Journal of Service Research, 21(1): 75–100.
https://doi.org/10.1177/1094670517714060 Wojahn, P. G., Neuwirth, C. M., Bullock, B. (1998). Effects of Interfaces for Annotation on Communication in a Collaborative Task. In Proceedings of CHI '98, LA, CA, April 18–23, ACM Press: 456–463. Von Bertalanffy, L. (1968). General systems theory. New York: Braziller.
Participatory design
Engineering
8,600
1,966,840
https://en.wikipedia.org/wiki/Neisseria%20meningitidis
Neisseria meningitidis, often referred to as the meningococcus, is a Gram-negative bacterium that can cause meningitis and other forms of meningococcal disease such as meningococcemia, a life-threatening sepsis. The bacterium is referred to as a coccus because it is round, and more specifically a diplococcus because of its tendency to form pairs. About 10% of adults are carriers of the bacteria in their nasopharynx. An exclusively human pathogen, it causes developmental impairment and death in about 10% of cases. It causes the only form of bacterial meningitis known to occur epidemically, mainly in Africa and Asia, and it occurs worldwide in both epidemic and endemic form. N. meningitidis is spread through saliva and respiratory secretions during coughing, sneezing, kissing, chewing on toys, and through sharing a source of fresh water. It has also been reported to be transmitted through oral sex and to cause urethritis in men. It infects its host cells by sticking to them with long thin extensions called pili and with the surface-exposed proteins Opa and Opc, and it has several virulence factors. Signs and symptoms Meningococcus can cause meningitis and other forms of meningococcal disease. It initially produces general symptoms such as fatigue, fever, and headache, and can rapidly progress to neck stiffness, coma, and death in 10% of cases. Petechiae occur in about 50% of cases. The chance of survival is highly correlated with blood cortisol levels: lower levels prior to steroid administration correspond with increased patient mortality. Symptoms of meningococcal meningitis are easily confused with those caused by other bacteria, such as Haemophilus influenzae and Streptococcus pneumoniae. Suspicion of meningitis is a medical emergency, and immediate medical assessment is recommended. Current guidance in the United Kingdom is that if a case of meningococcal meningitis or septicaemia (infection of the blood) is suspected, intravenous antibiotics should be given and the ill person admitted to hospital. This means that laboratory tests may be less likely to confirm the presence of Neisseria meningitidis, as the antibiotics will dramatically lower the number of bacteria in the body. The UK guidance is based on the idea that the reduced ability to identify the bacteria is outweighed by the reduced chance of death. Septicaemia caused by Neisseria meningitidis has received much less public attention than meningococcal meningitis, even though septicaemia has been linked to infant deaths. Meningococcal septicaemia typically causes a purpuric rash that does not lose its color when pressed with a glass slide ("non-blanching") and does not cause the classical symptoms of meningitis. This means the condition may be ignored by those not aware of the significance of the rash. Septicaemia carries an approximate 50% mortality rate over a few hours from initial onset. Other severe complications include Waterhouse–Friderichsen syndrome (a massive, usually bilateral, hemorrhage into the adrenal glands caused by fulminant meningococcemia), adrenal insufficiency, and disseminated intravascular coagulation. Not all instances of a purpura-like rash are due to meningococcal septicaemia; other possible causes, such as idiopathic thrombocytopenic purpura (ITP, a platelet disorder) and Henoch–Schönlein purpura, also need prompt investigation. Microbiology N. meningitidis is a Gram-negative diplococcus; as a Gram-negative bacterium it has outer and inner membranes with a thin layer of peptidoglycan in between. It is 0.6–1.0 micrometers in size.
It tests positive for the enzyme cytochrome c oxidase. Habitat N. meningitidis is a part of the normal nonpathogenic flora in the nasopharynx of 8–25% of adults. It colonizes and infects only humans, and has never been isolated from other animals. This is thought to result from the bacterium's inability to get iron from sources other than human transferrin and lactoferrin. Subtypes Disease-causing strains are classified according to the antigenic structure of their polysaccharide capsule. Serotype distribution varies markedly around the world. Among the 13 identified capsular types of N. meningitidis, six (A, B, C, W135, X, and Y) account for most disease cases worldwide. Type A has been the most prevalent in Africa and Asia, but is rare or practically absent in North America. In the United States, serogroup B is the predominant cause of disease and mortality, followed by serogroup C. The multiple subtypes have hindered development of a universal vaccine for meningococcal disease. Pathogenesis Virulence Lipooligosaccharide (LOS) is a component of the outer membrane of N. meningitidis. It acts as an endotoxin and is responsible for septic shock and hemorrhage due to the destruction of red blood cells. Other virulence factors include a polysaccharide capsule, which prevents host phagocytosis and aids in evasion of the host immune response. Adhesion is another key virulence strategy for successfully invading host cells. Several known proteins are involved in adhesion and invasion or mediate interactions with specific host cell receptors. These include the Type IV pilin adhesin, which mediates attachment of the bacterium to the epithelial cells of the nasopharynx; the surface-exposed Opa and Opc proteins, which mediate interactions with specific host cell receptors; and NadA, which is involved in adhesion. Pathogenic meningococci that have invaded the bloodstream must be able to survive in the new niche; this is facilitated by the acquisition and utilisation of iron (FetA and HmbR), by resistance to intracellular oxidative killing through production of catalase and superoxide dismutase, and by the ability to avoid complement-mediated killing (fHbp). Meningococci produce an IgA protease, an enzyme that cleaves IgA class antibodies and thus allows the bacteria to evade a subclass of the humoral immune system. A hypervirulent strain was discovered in China. Its impact is yet to be determined. Complement inhibition Factor H binding protein (fHbp), which is exhibited in N. meningitidis and some commensal species, is the main inhibitor of the alternative complement pathway. fHbp protects meningococci from complement-mediated death in human serum experiments, but has also been shown to protect meningococci from antimicrobial peptides in vitro. Factor H binding protein is key to the pathogenesis of N. meningitidis and is, therefore, important as a potential vaccine candidate. Porins are also an important factor in complement inhibition for both pathogenic and commensal species. Porins are important for nutrient acquisition. They are also recognized by TLR2, and they bind complement factors (C3b, C4b, factor H, and C4bp (complement factor 4b-binding protein)). Cooperation with pili for CR3-mediated internalization is another function of porins. Porins also enable the bacteria to translocate into host cells and to modulate reactive oxygen species production and apoptosis. Strains of the same species can express different porins.
Genome At least 8 complete genomes of Neisseria meningitidis strains have been determined; they encode about 2,100 to 2,500 proteins. The genome of strain MC58 (serogroup B) has 2,272,351 base pairs. When sequenced in 2000, it was found to contain 2158 open reading frames (ORFs). Of these, a biological function was predicted for 1158 (53.7%). There were three major islands of horizontal DNA transfer found. Two encode proteins involved in pathogenicity. The third island only codes for hypothetical proteins. The sequencing also found more genes that undergo phase variation than in any pathogen then known. Phase variation is a mechanism that helps the pathogen to evade the immune system of the host. The genome size of strain H44/76 is 2.18 Mb, and encodes 2,480 open reading frames (ORFs), compared to 2.27 Mb and 2,465 ORFs for MC58. Both strains have a GC content of 51.5%. A comparison with MC58 showed that four genes are uniquely present in H44/76 and nine genes are only present in MC58. Of all ORFs in H44/76, 2,317 (93%) show more than 99% sequence identity. The complete genome sequence of strain NMA510612 (serogroup A) consists of one circular chromosome with a size of 2,188,020 bp, and the average GC content is 51.5%. The chromosome is predicted to possess 4 rRNA operons, 163 insertion elements (IS), 59 tRNAs, and 2,462 ORFs. A public database for N. meningitidis core genome multilocus sequence typing (cgMLST) is available at the Neisseria typing website. Genetic transformation Genetic transformation is the process by which a recipient bacterial cell takes up DNA from a neighboring cell and integrates this DNA into the recipient's genome by recombination. In N. meningitidis, DNA transformation requires the presence of short DNA sequences (9–10 mers residing in coding regions) of the donor DNA. These sequences are called DNA uptake sequences (DUSs). Specific recognition of these sequences is mediated by a type IV pilin. In N. meningitidis, DUSs occur at a significantly higher density in genes involved in DNA repair and recombination (as well as in restriction-modification and replication) than in other annotated gene groups. The over-representation of DUS in DNA repair and recombination genes may reflect the benefit of maintaining the integrity of the DNA repair and recombination machinery by preferentially taking up genome maintenance genes, which could replace their damaged counterparts in the recipient cell. N. meningitidis colonizes the nasopharyngeal mucosa, which is rich in macrophages. Upon their activation, macrophages produce superoxide (O2−) and hydrogen peroxide (H2O2). Thus N. meningitidis is likely to encounter oxidative stress during its life cycle. Consequently, an important benefit of genetic transformation to N. meningitidis may be the maintenance of the recombination and repair machinery of the cell that removes oxidative DNA damages such as those caused by reactive oxygen. This is consistent with the more general idea that transformation benefits bacterial pathogens by facilitating repair of DNA damages produced by the oxidative defenses of the host during infection. The meningococcal population is genetically highly diverse, owing to horizontal gene transfer in the nasopharynx. Gene transfer can occur within and between genomes of Neisseria species, and it is the main mechanism by which new traits are acquired. It is facilitated by the natural competence of the meningococci to take up foreign DNA.
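The uptake-sequence bias described above can be explored computationally. Below is a minimal sketch that counts DUS occurrences in a DNA string; it assumes the commonly cited 10-mer Neisseria DUS 5'-GCCGTCTGAA-3' and a toy input sequence, so it illustrates the idea rather than reproducing any published analysis.

```python
# Minimal sketch: counting DNA uptake sequences (DUS) in a sequence.
# Assumes the commonly cited 10-mer Neisseria DUS 5'-GCCGTCTGAA-3';
# the input below is a toy string, not a real genome.

DUS = "GCCGTCTGAA"

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def count_overlapping(seq: str, motif: str) -> int:
    """Count motif occurrences, allowing overlaps."""
    count, start = 0, 0
    while (idx := seq.find(motif, start)) != -1:
        count, start = count + 1, idx + 1
    return count

def dus_density(genome: str) -> float:
    """DUS occurrences per kilobase, counting both strands."""
    genome = genome.upper()
    hits = count_overlapping(genome, DUS) + count_overlapping(genome, revcomp(DUS))
    return 1000.0 * hits / len(genome)

toy = "TTGCCGTCTGAAACGTTTCAGACGGCAT"  # carries the DUS on both strands
print(dus_density(toy))
```

Applied to annotated gene groups separately, the same counting logic would let one compare DUS density in repair and recombination genes against the rest of the genome, which is the comparison the studies cited above describe.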
The commensal species of Neisseria can act as a reservoir of genes that can be acquired; for example, this is how capsule switching can occur as a means of hiding from the immune system. An invasive N. meningitidis strain of serogroup C broke out in Nigeria in 2013; the strain was a new sequence type, ST-10217, as determined by multilocus sequence typing. It was determined that a commensal strain of N. meningitidis had acquired an 8-kb prophage, the meningococcal disease-associated island (MDAΦ) previously associated with hyper-invasiveness, as well as the full serogroup C capsule operon, thus becoming a hypervirulent strain. This illustrates how hypervirulent strains can arise from non-pathogenic strains due to the high propensity of N. meningitidis for gene transfer and DNA uptake. Diagnosis A small amount of cerebrospinal fluid (CSF) is sent to the laboratory as soon as possible for analysis. The diagnosis is suspected when Gram-negative diplococci are seen on Gram stain of a centrifuged sample of CSF; sometimes they are located inside white blood cells. The microscopic identification takes around 1–2 hours after specimen arrival in the laboratory. The gold standard of diagnosis is microbiological isolation of N. meningitidis by growth from a sterile body fluid, which could be CSF or blood. Diagnosis is confirmed when the organism has grown, most often on a chocolate agar plate, but also on Thayer–Martin agar. To differentiate any bacterial growth from other species, a small amount of a bacterial colony is Gram stained and tested for oxidase and catalase. Gram-negative diplococci that are oxidase and catalase positive are then tested for fermentation of the following carbohydrates: maltose, sucrose, and glucose. N. meningitidis will ferment glucose and maltose. Finally, serology determines the subgroup of the N. meningitidis, which is important for epidemiological surveillance purposes; this may often only be done in specialized laboratories. The above tests take a minimum of 48–72 hours turnaround time for growing the organism, and up to a week more for serotyping. Growth can and often does fail, either because antibiotics have been given preemptively, or because specimens have been inappropriately transported, as the organism is extremely susceptible to antibiotics and fastidious in its temperature and growth medium requirements. Polymerase chain reaction (PCR) tests, where available (mostly in industrialized countries), have been increasingly used; PCR can rapidly identify the organism, and works even after antibiotics have been given. Prevention All recent contacts of the infected patient over the seven days before onset should receive medication to prevent them from contracting the infection. This especially includes young children and their child caregivers or nursery-school contacts, as well as anyone who had direct exposure to the patient through kissing, sharing utensils, or medical interventions such as mouth-to-mouth resuscitation. Anyone who frequently ate, slept or stayed at the patient's home during the seven days before the onset of symptoms, or those who sat beside the patient on an airplane flight or in a classroom for eight hours or longer, should also receive chemoprophylaxis. The agent of choice is usually oral rifampicin for a few days. Receiving a dose of the meningococcal vaccine before traveling to a country in the "meningitis belt", or a booster dose (normally after five years), can prevent a person from getting an infection from the pathogen.
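The carbohydrate panel in the diagnosis section above lends itself to a simple lookup. The sketch below encodes the N. meningitidis profile stated in the text (glucose and maltose, but not sucrose); the N. gonorrhoeae profile is a common textbook value included here as an assumption for contrast.

```python
# Toy presumptive identification from a sugar fermentation panel.
# The N. meningitidis profile is from the text; the N. gonorrhoeae
# profile (glucose only) is an assumed textbook value.

PROFILES = {
    ("glucose", "maltose"): "Neisseria meningitidis",
    ("glucose",): "Neisseria gonorrhoeae (assumed profile)",
}

def identify(fermented: set[str]) -> str:
    """Map the set of fermented sugars to a presumptive species."""
    key = tuple(sorted(fermented))
    return PROFILES.get(key, "not identified by this panel")

print(identify({"glucose", "maltose"}))  # Neisseria meningitidis
```

In practice such a panel is only one step; as the text notes, oxidase and catalase results and serology are needed before and after it.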
Vaccination United States A number of vaccines are available in the U.S. to prevent meningococcal disease. Some of the vaccines cover serogroup B, while others cover A, C, W, and Y. The Centers for Disease Control and Prevention (CDC) recommends all teenagers receive the MenACWY vaccine and booster, with MenB optional. MenACWY and MenB are also recommended for people of other ages with various medical conditions and social risk factors. A meningococcal polysaccharide vaccine (MPSV4) has been available since the 1970s and is the only meningococcal vaccine licensed for people older than 55. MPSV4 may be used in people 2–55 years old if the MCV4 vaccines are not available or contraindicated. Two meningococcal conjugate vaccines (MCV4) are licensed for use in the U.S. The first conjugate vaccine was licensed in 2005, the second in 2010. Conjugate vaccines are the preferred vaccine for people 2 through 55 years of age. They are indicated in those with impaired immunity, such as people with nephrotic syndrome or those who have undergone splenectomy. In June 2012, the U.S. Food and Drug Administration (FDA) approved a combination vaccine against two types of meningococcal diseases and Hib disease for infants and children 6 weeks to 18 months old. The vaccine, Menhibrix, was designed to prevent disease caused by Neisseria meningitidis serogroups C and Y, and Haemophilus influenzae type b (Hib). It was the first meningococcal vaccine that could be given to infants as young as six weeks old. In October 2014 the FDA approved the first vaccine effective against serogroup B, named Trumenba, for use in 10- to 25-year-old individuals. Africa In 2010, the Meningitis Vaccine Project introduced a vaccine called MenAfriVac in the African meningitis belt. It was made by generic drug maker Serum Institute of India and cost 50 U.S. cents per injection. Beginning in Burkina Faso in 2010, it has been given to 215 million people across Benin, Cameroon, Chad, Ivory Coast, Ethiopia, Ghana, Mali, Niger, Mauritania, Nigeria, Senegal, Sudan, Togo and Gambia. The vaccination campaign has resulted in near-elimination of serogroup A meningitis from the participating countries. Treatment Persons with confirmed N. meningitidis infection should be hospitalized immediately for treatment with antibiotics. Because meningococcal disease can disseminate very rapidly, a single dose of intramuscular antibiotic is often given at the earliest possible opportunity, even before hospitalization, if disease symptoms look suspicious enough. Third-generation cephalosporin antibiotics (e.g. cefotaxime, ceftriaxone) should be used to treat a suspected or culture-proven meningococcal infection before antibiotic susceptibility results are available. Clinical practice guidelines endorse empirical treatment in the event a lumbar puncture to collect cerebrospinal fluid (CSF) for laboratory testing cannot first be performed. Antibiotic treatment may affect the results of microbiology tests, but a diagnosis may be made on the basis of blood cultures and clinical examination. Epidemiology N. meningitidis is a major cause of illness, developmental impairment and death during childhood in industrialized countries and has been responsible for epidemics in Africa and in Asia. Every year, about 2,500 to 3,500 people become infected with N. meningitidis in the US, with a frequency of about 1 in 100,000. Children younger than five years are at greatest risk, followed by teenagers of high school age.
Rates in the African meningitis belt were as high as 1 in 1,000 to 1 in 100 before introduction of a vaccine in 2010. The incidence of meningococcal disease is highest among infants (children younger than one year old), whose immune system is relatively immature. In industrialized countries there is a second peak of incidence among young adults who congregate closely, live in dormitories or smoke. Vaccine development is ongoing. The bacterium is spread through saliva and other respiratory secretions during coughing, sneezing, kissing, and chewing on toys. Inhalation of respiratory droplets from a carrier, who may be someone in the early stages of disease, can transmit the bacteria. Close contact with a carrier is the predominant risk factor. Other risk factors include a weakened general or local immune response, such as a recent upper respiratory infection, smoking, and complement deficiency. The incubation period is short, from 2 to 10 days. In susceptible individuals, N. meningitidis may invade the bloodstream and cause a systemic infection, sepsis, disseminated intravascular coagulation, breakdown of circulation, and septic shock. History In 1884 Ettore Marchiafava and Angelo Celli first observed the bacterium inside cells in the cerebrospinal fluid (CSF). In 1887 Anton Weichselbaum isolated the bacterium from the CSF of patients with bacterial meningitis. He named the bacterium Diplococcus intracellularis meningitidis. Biotechnology Components from Neisseria meningitidis are being harnessed in biotechnology. Its Cas9 enzyme is a useful tool in CRISPR gene editing because the enzyme is small and has targeting features distinct from those of the commonly used enzyme from Streptococcus pyogenes. The cell-surface protein FrpC from Neisseria meningitidis has been engineered to allow covalent coupling between proteins, because it generates a reactive anhydride when exposed to calcium. The bacterium also expresses unique enzymes able to cleave IgA antibodies. See also DNA uptake sequence DNA taken up by Neisseria NmVac4-A/C/Y/W-135 polysaccharide vaccine Sara Branham Matthews, microbiologist Shwartzman phenomenon Sepsis References External links Type strain of Neisseria meningitidis at BacDive - the Bacterial Diversity Metadatabase Gram-negative bacteria Pathogenic bacteria Polysaccharide encapsulated bacteria Neisseriales Bacteria described in 1901
Neisseria meningitidis
Biology
4,460
11,556,937
https://en.wikipedia.org/wiki/Bitumen%20of%20Judea
Bitumen of Judea is a naturally occurring asphalt, a sort of natural tar known since ancient times, used as a wood colorant and in early photography. Wood coloration usage Bitumen of Judea may be used as a colorant for wood to give an aged, natural and rustic appearance. It is soluble in turpentine and some other terpenes, and can be combined with oils, waxes, varnishes and glazes. Light-sensitive properties It is the light-sensitive material in what is accepted to be the first complete photographic process, i.e., one capable of producing durable light-fast results. The technique was developed by French scientist and inventor Nicéphore Niépce in the 1820s. In 1826 or 1827, he applied a thin coating of the tar-like material to a pewter plate and took a picture of parts of the buildings and surrounding countryside of his estate, producing what is usually described as the first photograph. It is considered to be the oldest known surviving photograph made in a camera. The plate was exposed in the camera for at least eight hours. The bitumen, initially soluble in spirits and oils, was hardened and made insoluble (probably polymerized) in the brightest areas of the image. The unhardened part was then rinsed away with a solvent. Niépce's primary objective was not a photoengraving or photolithography process, but rather a photo-etching process, since engraving requires the intervention of a physical rather than chemical process and lithography involves a grease and water resistance process. However, Niépce's famous image of Cardinal d'Amboise was produced first by photo-etching and then "improved" by hand engraving. Bitumen, superbly resistant to strong acids, was in fact later widely used as a photoresist in making printing plates for mechanical printing processes. The surface of a zinc or other metal plate was coated, exposed, developed with a solvent that laid bare the unexposed areas, then etched in an acid bath, producing the required surface relief. References Photographic processes dating from the 19th century Asphalt
Bitumen of Judea
Physics,Chemistry
439
113,221
https://en.wikipedia.org/wiki/Large-group%20awareness%20training
The term large-group awareness training (LGAT) refers to activities—usually offered by groups with links to the human potential movement—which claim to increase self-awareness and to bring about desirable transformations in individuals' personal lives. LGATs are unconventional; they often take place over several days, and may compromise participants' mental wellbeing. LGAT programs may involve several hundred people at a time. Though early definitions cited LGATs as featuring unusually long durations, more recent texts describe trainings lasting from a few hours to a few days. Forsyth and Corazzini cite Lieberman (1994) as suggesting "that at least 1.3 million Americans have taken part in LGAT sessions". Definitions of LGAT In 2005 Rubinstein compared large-group awareness training to certain principles of cognitive therapy, such as the idea that people can change their lives by reinterpreting the way they view external circumstances. In the 1997 collection of essays Consumer Research: Postcards from the edge, discussing behavioral and economic studies, the authors contrast the "enclosed locations" used in Large Group Awareness Trainings with the relatively open environment of a "variety store". The Handbook of Group Psychotherapy (1994) characterised LGAT as focusing on "philosophical, psychological and ethical issues" relating "to personal effectiveness, decision-making, personal responsibility, and commitment." Psychologist Dennis Coon's textbook, Psychology: A Journey, defines the LGAT as referring to programs claiming "to increase self-awareness and facilitate constructive personal change". Coon further defines Large Group Awareness Training in his book Introduction to Psychology. Coon and Mitterer emphasize the commercial nature of several LGAT organizations. The evolution of LGAT providers Lou Kilzer, writing in The Rocky Mountain News, identified Leadership Dynamics (in operation 1967–1973) as "the first of the genre psychologists call 'large group awareness training'". Leadership Dynamics directly or indirectly influenced several permutations of large-group transformation trainings. Werner Erhard (successively associated with Erhard Seminars Training (est or EST), WE&A and Landmark Education) trained as an instructor with Mind Dynamics. Michael Langone notes that Erhard Seminars Training (est) became in the popular mind the archetype for LGATs. While working for Holiday Magic, Lifespring founder John Hanley attended a course at Leadership Dynamics. Chris Mathe, at the time a PhD candidate in clinical psychology, wrote that most of the current commercial forms of Large Group Awareness Training were modeled after the Leadership Dynamics Institute. Academic analyses, studies "Large Group Awareness Training", a 1982 peer-reviewed article published in Annual Review of Psychology, sought to summarize literature on the subject of LGATs and to examine their efficacy and their relationship with more standard psychology. This academic article describes and analyzes large group awareness training as influenced by the work of humanistic psychologists such as Carl Rogers, Abraham Maslow and Rollo May. LGATs as commercial trainings took many techniques from encounter groups. They existed alongside but "outside the domains of academic psychology or psychiatry. Their measure of performance was consumer satisfaction and formal research was seldom pursued." The article describes an est training, and discusses the literature on the testimony of est graduates. 
It notes minor changes on psychological tests after the training and mentions anecdotal reports of psychiatric casualties among est trainees. The article considers how est compares to more standard psychotherapy techniques such as behavior therapy, group and existential psychotherapy, before concluding with a call for "objective and rigorous research" and stating that unknown variables might have accounted for some of the positive accounts. Psychologists advised borderline or psychotic patients not to participate. Psychological factors cited by academics include emotional "flooding", catharsis, universality (identification with others), the instillation of hope, identification and what Sartre called "uncontested authorship". In 1989 researchers from the University of Connecticut received the "National Consultants to Management Award" from the American Psychological Association for their study: Evaluating a Large Group Awareness Training. Psychologist Chris Mathe has written in the interests of consumer protection, encouraging potential attendees of LGATs to discuss such trainings with any current therapist or counselor, to examine the principles underlying the program, and to determine pre-screening methods, the training of facilitators, the full cost of the training and of any suggested follow-up care. One study noted the many difficulties in evaluating LGATs, from proponents' explicit rejection of certain study models to difficulty in establishing a rigorous control group. In some cases, organizations under study have partially funded research into themselves. Not all professional researchers view LGATs favorably. Researchers such as psychologist Philip Cushman, for example, found that the program he studied "consists of a pre-meditated attack on the self". A 1983 study on Lifespring found that "although participants often experience a heightened sense of well-being as a consequence of the training, the phenomenon is essentially pathological", meaning that, in the program studied, "the training systematically undermines ego functioning and promotes regression to the extent that reality testing is significantly impaired". Lieberman's 1987 study, funded partially by Lifespring, noted that 5 out of a sample of 289 participants experienced "stress reactions" including one "transitory psychotic episode". He commented: "Whether [these five] would have experienced such stress under other conditions cannot be answered. The clinical evidence, however, is that the reactions were directly attributable to the large group awareness training." In 2003 the Vatican published the results of its study of New Age training courses. In Coon's psychology textbook (Introduction to Psychology) the author references many other studies, which postulate that many of the "claimed benefits" of Large Group Awareness Training actually take the form of "a kind of therapy placebo effect". Jarvis described Large Group Awareness Training as "educationally dubious" in the 2002 book The Theory & Practice of Teaching. Tapper mentions that "some large group-awareness training and psychotherapy groups" exemplify non-religious "cults". Benjamin criticizes LGAT groups for their high prices and spiritual subtleties. LGAT techniques Specific techniques used in some Large Group Awareness Trainings may include meditation, biofeedback, jargon, self-hypnosis, relaxation techniques, visualization, neuro-linguistic programming and yoga. LGATs utilize such techniques during long sessions, sometimes called "marathon" sessions.
Paglia describes "EST's Large Group Awareness Training": "Marathon, eight-hour sessions, in which [participants] were confined and harassed, supposedly led to the breakdown of conventional ego, after which they were in effect born again." Finkelstein's 1982 article provides a detailed description of the structure and techniques of an Erhard Seminars Training event; the techniques are similar to those used in some group therapy and encounter groups. The academic textbook Handbook of Group Psychotherapy regards Large Group Awareness Training organisations as "less open to leader differences", because they follow a "detailed written plan" that does not vary from one training to the next. In his book Life 102, LGAT participant and former trainer Peter McWilliams describes the basic technique of marathon trainings as pressure/release and asserts that advertising uses pressure/release "all the time", as do "good cop/bad cop" police interrogations and revival meetings. By spending approximately half the time making a person feel bad and then suddenly reversing the feeling through effusive praise, the programs cause participants to experience a stress reaction and an "endorphin high". McWilliams gives examples of various LGAT activities called processes, with names such as "love bomb", "lifeboat", "cocktail party" and "cradling", which take place over many hours and days, physically exhausting the participants to make them more susceptible to the trainer's message, whether in the participants' best interests or not. Although extremely critical of some LGATs, McWilliams found positive value in others, asserting that they varied not in technique but in the application of technique. LGATs and the anti-cult movement After commissioning a report in 1983 by the APA Task Force on Deceptive and Indirect Methods of Persuasion and Control (DIMPAC), chaired by anti-cult psychologist Margaret Singer, the American Psychological Association (APA) subsequently rejected and strongly criticised the 1986 DIMPAC report, which included large group awareness trainings as one example of what it called "coercive persuasion". The APA stated in 1987 that the report generally lacked "the scientific rigor and evenhanded critical approach necessary for APA imprimatur", and in 1997 it characterized Singer's hypotheses as "uninformed speculations based on skewed data". The APA also stated that "the specific methods by which Drs. Singer and Benson have arrived at their conclusions have also been rejected by all serious scholars in the field." Singer sued the APA, and lost on June 17, 1994. Despite the APA rejection of her task-force's report, Singer remained in good standing among psychology researchers. Singer reworked much of the DIMPAC report material into the book Cults in Our Midst (1995, second edition: 2003), which she co-authored with Janja Lalich. Singer and Lalich state that "large group awareness trainings" tend to last at least four days and usually five. Their book mentions Erhard Seminars Training ("est") and similar undertakings, such as the Landmark Forum, Lifespring, Actualizations, MSIA/Insight and PSI Seminars. In Cults in our Midst, Singer differentiated between the usage of the terms cult and Large Group Awareness Training, while pointing out some commonalities. Elsewhere she groups the two phenomena together, in that they both use a shared set of thought-reform techniques.
See also Brainwashing Multi-level marketing List of large-group awareness training organizations Group dynamics Crowd psychology References Further reading Books Articles Polaski, Mary. "The Mary Polaski "L" Series" Media/Press Group processes Human Potential Movement Personal development
Large-group awareness training
Biology
2,078
33,289,884
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2092
In molecular biology, glycoside hydrolase family 92 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. This domain occurs within alpha-1,2-mannosidases, which remove alpha-1,2-linked mannose residues from Man(9)(GlcNAc)(2) by hydrolysis. They are critical for the maturation of N-linked oligosaccharides and ER-associated degradation. Glycoside hydrolase family 92 includes enzymes with mannosyl-oligosaccharide α-1,2-mannosidase, mannosyl-oligosaccharide α-1,3-mannosidase, mannosyl-oligosaccharide α-1,6-mannosidase, α-mannosidase, α-1,2-mannosidase, α-1,3-mannosidase and α-1,4-mannosidase activities. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 92
Biology
362
48,720,870
https://en.wikipedia.org/wiki/Quasi-exact%20solvability
A linear differential operator L is called quasi-exactly-solvable (QES) if it has a finite-dimensional invariant subspace of functions W_n, meaning that L maps W_n into itself, where n is the dimension of W_n. There are two important cases: W_n is the space of multivariate polynomials of degree not higher than some integer number; and W_n is a subspace of a Hilbert space. Sometimes, the functional space W_n is isomorphic to the finite-dimensional representation space of a Lie algebra g of first-order differential operators. In this case, the operator L is called a g-Lie-algebraic Quasi-Exactly-Solvable operator. Usually, one can indicate a basis in which L has block-triangular form. If the operator L is of the second order and has the form of the Schrödinger operator, it is called a Quasi-Exactly-Solvable Schrödinger operator. The most studied cases are one-dimensional sl(2)-Lie-algebraic quasi-exactly-solvable (Schrödinger) operators. The best known example is the sextic QES anharmonic oscillator with the Hamiltonian H = -d^2/dx^2 + a^2 x^6 + 2ab x^4 + [b^2 - (4n+3)a] x^2 (with a > 0), for which (n+1) eigenstates of positive parity can be found algebraically; an analogous choice of the x^2 coupling yields (n+1) algebraic eigenstates of negative parity. The algebraic eigenfunctions are of the form psi(x) = p(x^2) exp(-a x^4/4 - b x^2/2), where p is a polynomial of degree n, and the (energy) eigenvalues are roots of an algebraic equation of degree (n+1). In general, twelve families of one-dimensional QES problems are known, two of them characterized by elliptic potentials. References External links Differential operators Mathematical analysis Multivariable calculus
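The algebraic sector can be checked symbolically. The sketch below takes the a = 1, b = 0 form of the sextic potential quoted above, V(x) = x^6 - (4n+3)x^2, and verifies for n = 1 that the substitution psi = p(x^2) exp(-x^4/4) reduces the Schrödinger equation to a finite algebraic system; it is an illustrative assumption-checking toy, not a general QES solver.

```python
# Verify the n = 1 algebraic sector of the sextic QES oscillator
# with V(x) = x^6 - 7 x^2 (the a = 1, b = 0 case quoted above).
import sympy as sp

x, E, c = sp.symbols("x E c")
n = 1
V = x**6 - (4 * n + 3) * x**2

# Trial wavefunction: degree-n polynomial in x^2 times exp(-x^4/4)
psi = (1 + c * x**2) * sp.exp(-x**4 / 4)

# Residual of H psi = E psi, with the common exponential factor removed
residual = sp.expand(sp.simplify(
    (-sp.diff(psi, x, 2) + V * psi - E * psi) * sp.exp(x**4 / 4)))

# Equating all polynomial coefficients to zero gives the algebraic system
coeffs = sp.Poly(residual, x).coeffs()
print(sp.solve(coeffs, [E, c], dict=True))
# Expect E = 2*sqrt(2) and E = -2*sqrt(2), the roots of E**2 = 8,
# i.e. an algebraic equation of degree n + 1 = 2.
```

Here the two algebraic levels come out as E = ±2√2, consistent with the statement that the QES energies are roots of a degree-(n+1) polynomial equation.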
Quasi-exact solvability
Mathematics
319
74,728,128
https://en.wikipedia.org/wiki/HD%2085628
HD 85628 (MASCARA-4) is a binary star system in the constellation of Carina. The host star, HD 85628 A, is an A-type main-sequence star, the primary star of the system, with a hot Jupiter in orbit around it. The secondary star is HD 85628 B, a K-type main-sequence star, about which little is known. Nomenclature This star system was first catalogued in the Henry Draper Catalogue as HD 85628, its most common name. The Henry Draper Catalogue assigned designations to stars visible to the naked eye under suitable conditions, indicating that this star can be seen with the naked eye. In 2019, the Multi-site All-Sky Camera (MASCARA) announced the discovery of the exoplanet HD 85628 Ab/MASCARA-4b around HD 85628 A. Thus, the primary star is sometimes catalogued as MASCARA-4. Planetary system In 2019, a hot Jupiter exoplanet was discovered around HD 85628 A by MASCARA using the transit method. References Carina (constellation) Binary stars A-type main-sequence stars M-type main-sequence stars 085628 Planetary systems with one confirmed planet Planetary transit variables
HD 85628
Astronomy
250
9,390,462
https://en.wikipedia.org/wiki/Sea%20Sonic
Sea Sonic Electronics Co., Ltd., stylized as Seasonic, is a Taiwanese power supply and computer PSU manufacturer and retailer, formerly limited to OEM manufacturing for other companies. It started making power supplies for the PC industry in 1981. All of its PSUs are 80 Plus-certified. In 2002, Sea Sonic established a wholly owned subsidiary in California to sell products in the US retail market and to provide technical support.
History
1975 Sea Sonic is incorporated to manufacture electronic test equipment.
1981 Sea Sonic enters the PC power supply market.
1984 Headquarters relocates to Shilin, Taipei, Taiwan.
1986 The factory phases in automated test equipment in its production methodology, a first in switching power supply manufacturing in Taiwan.
1990 A second factory in Taoyuan County (now Taoyuan City), Taiwan begins operation.
1993 European office opens in the Netherlands.
1994 Dong Guan China I factory begins full operation.
1995 Sea Sonic develops an ATX power supply for the Pentium market.
1997 Dong Guan factory receives ISO9002 certification.
1998 The Dong Guan II factory begins full operation. Taiwan headquarters and the Taoyuan factory receive ISO9001 certification.
1999 Headquarters relocates to its present address at Neihu, Taipei.
2000 Dong Guan factory receives ISO 9001 certification. Sea Sonic becomes the first PSU maker to provide the PC and IPC markets with cost-effective Active PFC (Power Factor Correction) solutions, and designs and applies S2FC (Smart & Silent Fan Control) to PC and IPC products.
2002 USA office opens in California. Sea Sonic Electronics Co., Ltd. lists on Taiwan's GreTai Securities Market (OTC stock exchange).
2003 Sea Sonic launches retail products under its own brand name and wins awards and recommendations worldwide.
2004 The company dedicates itself to developing green and silent power supplies with higher efficiency and higher power output.
2005 The USA office is renamed Sea Sonic Electronics Inc., a 100% Sea Sonic owned subsidiary, to serve North and South American customers. Sea Sonic becomes the first PSU manufacturer to win 80 Plus efficiency certification.
2006 Dong Guan factory receives ISO14001 certification. Mass production of RoHS and WEEE compliant products begins.
2008 A European subsidiary opens in the Netherlands to serve the European market. Dong Guan Factory II begins full operation.
2009 Sea Sonic is first in the market to achieve an 80 PLUS® Gold rating by introducing the X-Series power supplies.
2010 Sea Sonic introduces the world's first 80 PLUS® Gold rated fanless models to the worldwide retail market.
2011 The 80 PLUS® Platinum rated 860 W and 1000 W models are introduced.
2012 Japan subsidiary opens in Tokyo. The 80 PLUS® Platinum rated 400 W, 460 W, and 520 W ultra-silent fanless models enter the world market.
2013 The S12G 80 PLUS® Gold and M12II EVO 80 PLUS® Bronze rated power supplies are introduced.
2014 Sea Sonic launches the 80 PLUS® Platinum 1050 W and 1200 W, and the 80 PLUS® Gold X-Series 1050 W and 1250 W models.
2017 Under the 'One Seasonic' initiative, Sea Sonic revamps its entire product line to introduce the PRIME, FOCUS and CORE series.
2018 The Seasonic SCMD (system cable management device) marks the beginning of a new era for simplifying cable management.
2019 The Seasonic CONNECT system modernizes system installation and cable management.
2020 Sea Sonic partners with G2 Esports to enter the world of competitive gaming.
2021 The new Seasonic SYNCRO Q704 case wins both the 2021 Red Dot Award and the 2021 iF Design Award for excellent design.
2022 The Seasonic MagFlow Fan wins the 2022 Red Dot Design Award and the 2022 iF Design Award for its innovative design.
2023 Sea Sonic receives ISO 14064-1:2018 certification for the quantification and reporting of greenhouse gas emissions and removals.
References External links 1975 establishments in Taiwan Computer power supply unit manufacturers Computer companies of Taiwan Computer hardware companies Electronics companies of Taiwan Companies based in Taipei Computer companies established in 1975 Electronics companies established in 1975 Taiwanese brands
Sea Sonic
Technology
830
42,097,039
https://en.wikipedia.org/wiki/Kepler-29
Kepler-29 is a Sun-like star in the northern constellation of Cygnus. It is located at the celestial coordinates right ascension 19h 53m 23.6s, declination +47° 29′ 28″ (as encoded in its J19532359+4729284 designation). With an apparent visual magnitude of 15.456, this star is too faint to be seen with the naked eye. It is a solar analog, having a mass, radius, and temperature close to those of the Sun. The age of the star has not yet been determined, owing to its distance of 2,780 light-years (850 parsecs). As of 2016 no Jovian exoplanets of 0.9–1.4 have been found at a distance of 5 AU. Planetary system In 2011 an analysis of the first four months of data from the Kepler space telescope detected 1235 planetary candidates, two of which orbited this star. Later study of the transit-timing variations of the system led to the confirmation of both planets. The planetary orbits are in orbital resonance with each other, with the orbital period ratio being exactly 7:9. References Cygnus (constellation) G-type main-sequence stars 738 Planetary transit variables Planetary systems with two confirmed planets J19532359+4729284
Kepler-29
Astronomy
239
24,364,222
https://en.wikipedia.org/wiki/Sigma%20heat
Sigma heat, denoted S, is a measure of the specific energy of humid air. It is used in the field of mining engineering for calculations relating to the temperature regulation of mine air. Sigma heat is sometimes called total heat, although total heat may instead mean enthalpy. Definition Sigma heat is the energy which would be extracted from a unit mass of humid air if it were cooled to a certain reference temperature under constant pressure while simultaneously removing any condensation formed during the process. Because sigma heat assumes that condensation will be removed, any energy which would be extracted by cooling the water vapor below its condensation point does not count towards sigma heat. The reference temperature is usually 0 °F, although 0 °C is sometimes used as well. Assuming a reference temperature of 0 °F, the following formula may be used under standard temperature ranges and pressure: S = 0.24t + w(1060 + 0.45t), where S is the sigma heat of the air (in BTU/lb), t is the dry-bulb temperature of the air (in °F), and w is the specific humidity of the air (unitless). The equivalent metric formula is S = 1.005t + w(2502 + 1.884t), where S is the sigma heat of the air (in kJ/kg), t is the dry-bulb temperature of the air (in °C), and w is the specific humidity of the air (unitless), sometimes expressed as kg/kg. Comparison with enthalpy Sigma heat is not the same as the enthalpy of the humid air above the reference temperature. (Enthalpy is sometimes called total heat or true total heat.) Unlike sigma heat, enthalpy does include the energy which would be extracted in cooling the condensed water vapor all the way to the reference temperature. Essentially, enthalpy assumes that all components of the system must be cooled during the cooling process, whereas sigma heat assumes that some of those components (liquid water) are removed part way through the process. Nevertheless, some writers mistakenly use the term enthalpy when they actually mean sigma heat, creating some confusion. Assuming a reference temperature of 0 °F, the relationship between enthalpy and sigma heat may be shown mathematically as q = S + w·tw, where q is the specific enthalpy of the air above its reference temperature, S is the sigma heat of the air (in BTU/lb), w is the specific humidity of the air (unitless), and tw is the wet bulb temperature (in °F). (Standard temperature ranges are assumed.) Wet bulb temperature vs. dry bulb temperature Assuming constant pressure, sigma heat is solely a function of the wet bulb temperature of the air. For this reason, humidity need not be taken into account unless dry-bulb temperature measurements are used. Like sigma heat, the wet bulb temperature is not directly affected by the temperature of any condensed water vapor (liquid water), and it varies only when there is a net energy change to the system. In contrast, the dry bulb temperature can vary even for processes where there is no such net energy change. This difference may be understood by examining evaporative cooling. During evaporative cooling, all energy lost from air molecules as sensible heat is gained as latent heat by water molecules evaporating into that air. With no net energy gained or lost from the now more humid air, sigma heat remains unchanged. In keeping with this, the wet bulb temperature also remains unchanged, as its reading already represented the maximum possible amount of evaporative cooling. The dry bulb temperature however is in conflict with the sigma heat, since it decreases during such evaporative cooling.
This is why measurements of sigma heat which use dry bulb temperatures must also take into account the humidity of the air. Notes References Mining engineering Psychrometrics Heating, ventilation, and air conditioning
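Given the constants above, the imperial-unit formulas translate directly into code. The following is a minimal sketch assuming the 0 °F reference and the coefficient values quoted in the definition section; for engineering use the constants should be checked against a psychrometric reference.

```python
# Minimal sketch of sigma heat and enthalpy (0 degF reference), using
# the coefficients quoted above; check against a psychrometric
# reference before engineering use.

def sigma_heat(t_dry_f: float, w: float) -> float:
    """Sigma heat in BTU/lb from dry-bulb temperature (degF) and
    specific humidity w (lb of water vapor per lb of dry air)."""
    return 0.24 * t_dry_f + w * (1060.0 + 0.45 * t_dry_f)

def enthalpy(t_dry_f: float, w: float, t_wet_f: float) -> float:
    """Enthalpy above the reference: sigma heat plus the heat carried
    by the condensed liquid water cooled from the wet-bulb temperature."""
    return sigma_heat(t_dry_f, w) + w * t_wet_f

# Example: 80 degF dry-bulb air carrying 0.010 lb/lb of water vapor
print(sigma_heat(80.0, 0.010))  # about 30.2 BTU/lb
```

Note how strongly the humidity term contributes: at 80 °F, each 0.001 lb/lb of specific humidity adds roughly 1.1 BTU/lb of sigma heat, which is why dry-bulb-based calculations must also measure humidity.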
Sigma heat
Engineering
742
26,444,076
https://en.wikipedia.org/wiki/Perron%20method
In the mathematical study of harmonic functions, the Perron method, also known as the method of subharmonic functions, is a technique introduced by Oskar Perron for the solution of the Dirichlet problem for Laplace's equation. The Perron method works by finding the largest subharmonic function with boundary values below the desired values; the "Perron solution" coincides with the actual solution of the Dirichlet problem if the problem is soluble. The Dirichlet problem is to find a harmonic function in a domain, with boundary conditions given by a continuous function f. The Perron solution is defined by taking the pointwise supremum u(x) = sup{v(x) : v in S_f}, where S_f is the set of all subharmonic functions v such that v ≤ f on the boundary of the domain. The Perron solution u(x) is always harmonic; however, the values it takes on the boundary may not be the same as the desired boundary values f. A point y of the boundary satisfies a barrier condition if there exists a superharmonic function w, defined on the entire domain, such that w(y) = 0 and w(x) > 0 for all x ≠ y. Points satisfying the barrier condition are called regular points of the boundary for the Laplacian. These are precisely the points at which one is guaranteed to obtain the desired boundary values: u(x) → f(y) as x → y. The characterization of regular points on surfaces is part of potential theory. Regular points on the boundary of a domain D are those points that satisfy the Wiener criterion: for any λ in (0, 1), let c_j be the capacity of the set B(y, λ^j) \ D; then y is a regular point if and only if the series Σ_j c_j λ^(-j(n-2)) (in dimension n ≥ 3) diverges. The Wiener criterion was first devised by Norbert Wiener; it was extended by Werner Püschel to uniformly elliptic divergence-form equations with smooth coefficients, and thence to uniformly elliptic divergence-form equations with bounded measurable coefficients by Walter Littman, Guido Stampacchia, and Hans Weinberger. References Further reading Potential theory Partial differential equations Subharmonic functions
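Although the Perron construction is an analytical device, its monotone-approximation flavor can be imitated on a grid. The sketch below is a discrete toy, not the classical construction: starting from a discrete subsolution (a constant at the minimum of the boundary data) and repeatedly replacing interior values by the mean of their four neighbors, the values rise monotonically toward the discrete harmonic function with the prescribed boundary values.

```python
# Discrete toy version of the Dirichlet problem on the unit square:
# iterate the mean-value property starting from a discrete subsolution.
# A sketch for illustration; grid size and iteration count are arbitrary.
import numpy as np

def dirichlet_solve(boundary: np.ndarray, iters: int = 5000) -> np.ndarray:
    """boundary: 2-D array whose edge values give the boundary data f."""
    u = np.full(boundary.shape, boundary.min(), dtype=float)
    u[0, :], u[-1, :] = boundary[0, :], boundary[-1, :]
    u[:, 0], u[:, -1] = boundary[:, 0], boundary[:, -1]
    for _ in range(iters):
        # replace each interior value by the mean of its four neighbors
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Example: f(x, y) = x on the boundary; the harmonic extension is
# u(x, y) = x, which the iteration reproduces to high accuracy.
n = 33
grid_x = np.tile(np.linspace(0.0, 1.0, n), (n, 1))
u = dirichlet_solve(grid_x)
print(np.abs(u - grid_x).max())  # small residual
```

Each iterate stays below the harmonic solution and increases pointwise, which is the discrete shadow of taking the supremum over the family S_f of subsolutions.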
Perron method
Mathematics
390
62,280,237
https://en.wikipedia.org/wiki/Electronics%20prototyping
In electronics, prototyping means building an actual circuit to a theoretical design to verify that it works, and to provide a physical platform for debugging it if it does not. The prototype is often constructed using techniques such as wire wrapping or using a breadboard, stripboard or perfboard, with the result being a circuit that is electrically identical to the design but not physically identical to the final product. Open-source tools like Fritzing exist to document electronic prototypes (especially the breadboard-based ones) and move toward physical production. Prototyping platforms such as Arduino also simplify the task of programming and interacting with a microcontroller. The developer can choose to deploy their invention as-is using the prototyping platform, or replace it with only the microcontroller chip and the circuitry that is relevant to their product. A technician can quickly build a prototype (and make additions and modifications) using these techniques, but for volume production it is much faster and usually cheaper to mass-produce custom printed circuit boards than to produce these other kinds of prototype boards. The proliferation of quick-turn PCB fabrication and assembly companies has enabled the concepts of rapid prototyping to be applied to electronic circuit design. It is now possible, even with the smallest passive components and largest fine-pitch packages, to have boards fabricated, assembled, and even tested in a matter of days. Boards Breadboard Perfboard Stripboard References Prototypes
Electronics prototyping
Engineering
300
23,315,139
https://en.wikipedia.org/wiki/Lutetium%20tantalate
Lutetium tantalate is a chemical compound of lutetium, tantalum and oxygen with the formula LuTaO4. With a density of 9.81 g/cm3, this mixed oxide compound is the densest known white stable material. (Thorium dioxide, ThO2, is also white and has a higher density of 10 g/cm3, but it is radioactive; although its rate of decay is low enough not to destabilize it as a material, the decay is still too much for certain uses, such as phosphors for detecting ionising radiation.) The white color and high density of LuTaO4 make it ideal for phosphor applications, though the high cost of lutetium is a hindrance. Properties Under standard conditions, LuTaO4 has a monoclinic fergusonite-type crystal structure (labeled M'; Pearson symbol mP12, space group P2/a, No. 13). This can be changed to an I2/a (M) structure by annealing at 1,600 °C. Both structures are stable under standard conditions. In the M' structure, the lutetium atom is 8-fold coordinated with oxygen and forms a distorted antiprism with C2 site symmetry. The structure of lutetium tantalate is identical to that of yttrium tantalate (YTaO4) and gadolinium tantalate (GdTaO4). Lutetium tantalate itself is weakly fluorescent. Bright emission is achieved by incorporating small amounts (about 1%) of various rare-earth dopants during the crystal growth process, for example europium (sharp red line at 610 nm), samarium (red: 610 nm), terbium (green-yellow: 495 and 545 nm lines), praseodymium (red: 615 nm), thulium (blue: 455 nm), dysprosium (orange: 580 nm) or niobium (blue: 400 nm, broad peak). The emission is best excited by electrons, X-rays or ultraviolet light at 220 nm. The high density of LuTaO4 favors X-ray excitation, for which LuTaO4 has relatively efficient, strong absorption compared to other materials. LuTaO4 also exhibits thermoluminescence: it glows in the dark when heated after illumination. Preparation To prepare a sample of lutetium tantalate, powders of lutetium and tantalum oxides (Lu2O3 and Ta2O5) are mixed and annealed at a temperature above 1,200 °C for several hours. To prepare a phosphor, a small fraction of appropriate material, such as an oxide of another rare-earth metal, is added to the mixture before annealing. After cooling, the product is leached with water, washed, filtered and dried, resulting in a white powder consisting of micrometre-sized particles of LuTaO4. References Lutetium compounds Tantalates Phosphors and scintillators
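The preparation above amounts to a 1:1 molar mixture of the two oxides, since Lu2O3 + Ta2O5 -> 2 LuTaO4 is mass-balanced. The sketch below computes precursor masses from rounded atomic masses; the target quantity is illustrative, and this is a stoichiometry aid, not a synthesis procedure.

```python
# A minimal sketch of the solid-state stoichiometry for
# Lu2O3 + Ta2O5 -> 2 LuTaO4, using rounded atomic masses.

ATOMIC_MASS = {"Lu": 174.97, "Ta": 180.95, "O": 16.00}  # g/mol, rounded

def molar_mass(formula: dict) -> float:
    """Molar mass from an {element: count} composition."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

M_LU2O3 = molar_mass({"Lu": 2, "O": 3})
M_TA2O5 = molar_mass({"Ta": 2, "O": 5})
M_LUTAO4 = molar_mass({"Lu": 1, "Ta": 1, "O": 4})

def precursor_masses(product_grams: float) -> tuple[float, float]:
    """Grams of Lu2O3 and Ta2O5 for a target mass of LuTaO4
    (each mole of either oxide yields two moles of product)."""
    mol_product = product_grams / M_LUTAO4
    return (mol_product / 2 * M_LU2O3, mol_product / 2 * M_TA2O5)

lu2o3_g, ta2o5_g = precursor_masses(10.0)
print(f"{lu2o3_g:.3f} g Lu2O3 + {ta2o5_g:.3f} g Ta2O5 -> 10 g LuTaO4")
# roughly 4.738 g Lu2O3 and 5.262 g Ta2O5
```

Because the reaction consumes the oxides in equal molar amounts, the two precursor masses always sum to the target product mass, a quick sanity check on the arithmetic.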
Lutetium tantalate
Chemistry
640
1,498,080
https://en.wikipedia.org/wiki/Picotechnology
The term picotechnology is a portmanteau of picometre and technology, intended to parallel the term nanotechnology. It is a hypothetical future level of technological manipulation of matter, on the scale of trillionths of a metre or picoscale (10^−12 m). This is three orders of magnitude smaller than a nanometre (and thus most nanotechnology) and two orders of magnitude smaller than most chemistry transformations and measurements. Picotechnology would involve the manipulation of matter at the atomic level. A further hypothetical development, femtotechnology, would involve working with matter at the subatomic level. Applications Picoscience is a term used by some futurists to refer to structuring of matter on a true picometre scale. Picotechnology was described as involving the alteration of the structure and chemical properties of individual atoms, typically through the manipulation of energy states of electrons within an atom to produce metastable (or otherwise stabilized) states with unusual properties, producing some form of exotic atom. Analogous transformations known to exist in the real world are redox chemistry, which can manipulate the oxidation states of atoms; excitation of electrons to metastable excited states, as with lasers and some forms of saturable absorption; and the manipulation of the states of excited electrons in Rydberg atoms to encode information. However, none of these processes produces the types of exotic atoms described by futurists. Alternatively, picotechnology is used by some researchers in nanotechnology to refer to the fabrication of structures where atoms and devices are positioned with sub-nanometre accuracy. This is important where interaction with a single atom or molecule is desired, because the strength of the interaction between two atoms that are very close depends steeply on their separation. For example, the force between an atom in an atomic force microscope probe tip and an atom in a sample being studied varies exponentially with separation distance, and is sensitive to changes in position on the order of 50 to 100 picometres (due to Pauli exclusion at short ranges and van der Waals forces at long ranges). In popular culture The Chinese science fiction novel The Three-Body Problem features a plot point in which an advanced alien civilization imbues individual protons with supercomputing powers and subsequently manipulates said protons via quantum entanglement (the fictional name for these proton-sized supercomputers is "sophons"). See also Femtotechnology IBM in atoms, a 1989 demonstration by IBM of a technology capable of manipulating individual atoms Technological singularity "There's Plenty of Room at the Bottom", a 1959 lecture by physicist Richard Feynman on the direct manipulation of individual atoms References External links Picotechnology at the Nanosciences group at CEMES, France. Nanotechnology
Picotechnology
Materials_science,Engineering
577
1,583,887
https://en.wikipedia.org/wiki/Smart%20Common%20Input%20Method
The Smart Common Input Method (SCIM) is a platform for inputting more than thirty languages on computers, including Chinese-Japanese-Korean style character languages (CJK) and many European languages. It is used on POSIX-style operating systems including Linux and BSD. Its purposes are to provide a simple and powerful common interface for users from any country, and to provide a clear architecture for programming, so as to reduce the time required to develop individual input methods. Goals The main goals of the SCIM project include: To act as a unified frontend for currently available input method libraries. Bindings to the uim and m17n libraries are available (as of August 2007). To act as a language engine of IIIMF (an input method framework). To support as many input method protocols/interfaces as exist and are in common use. To support multiple operating systems. (Currently, only POSIX-style operating systems are supported.) Architecture SCIM was originally written in the C++ language but has moved to pure C since 1.4.14. It abstracts the input method interface into several classes and attempts to simplify the classes and make them more independent from each other. With the simpler and more independent interfaces, developers can write their own input methods in fewer lines of code. SCIM is a modularized IM platform, and as such, components can be implemented as dynamically loadable modules that can be loaded at runtime at will. For example, input methods written for SCIM can be IMEngine modules, and users can combine such IMEngine modules with different interface modules (FrontEnds) in different environments without rewriting or recompiling the IMEngine modules, reducing the compile time or development time of the project. SCIM is a high-level library, similar to XIM or IIIMF; however, SCIM claims to be simpler than either of those IM platforms. SCIM also claims that it can be used alongside XIM or IIIMF. SCIM can also be used to extend the input method interface of existing application toolkits, such as GTK+, Qt and Clutter, via IMmodules. Related projects SKIM is a separate project aimed at integrating SCIM more tightly into the K Desktop Environment, by providing a GUI panel (named scim-panel-kde, as an alternative to scim-panel-gtk), a KConfig config module and setup dialogs for itself and the SCIM module libscim. It also has its own plugin system which supports on-demand loadable actions. t-latn-pre and t-latn-post are two input methods that provide an easy way of composing accented characters, either by preceding regular characters with diacritic marks (in the case of t-latn-pre), or by adding the marks afterwards (in the case of t-latn-post). Their main advantage is the large number of composed characters from different languages that can be entered this way, rendering it unnecessary to install, for example, separate keyboard layouts. These input methods are available for SCIM through the M17n library. See also Input method IBus List of input methods for UNIX platforms uim References External links m17n Multilingualization Freedesktop.org Input methods Free software programmed in C++
Smart Common Input Method
Technology
704
22,433,196
https://en.wikipedia.org/wiki/Evolution%20of%20emotion
The study of the evolution of emotions dates back to the 19th century. Evolution and natural selection have been applied to the study of human communication, mainly by Charles Darwin in his 1872 work, The Expression of the Emotions in Man and Animals. Darwin researched the expression of emotions in an effort to support his materialist theory of unguided evolution. He proposed that, much like other traits found in animals, emotions also evolved and were adapted over time. His work looked not only at facial expressions in animals, and specifically humans, but also attempted to point out parallels between behaviors in humans and other animals. According to evolutionary theory, different emotions evolved at different times. Primal emotions, such as love and fear, are associated with ancient parts of the psyche. Social emotions, such as guilt and pride, evolved among social primates. Evolutionary psychologists consider human emotions to be best adapted to the life our ancestors led in nomadic foraging bands. Origins Darwin's original plan was to include his findings about expression of emotions in a chapter of his work, The Descent of Man, and Selection in Relation to Sex (Darwin, 1871), but he found that he had enough material for a whole book. It was based on observations, both of those around him and of people in many parts of the world. One important observation he made was that even individuals who were born blind display body and facial expressions similar to those of anyone else. The ideas found in his book on the universality of emotions were intended to counter Sir Charles Bell's 1844 claim that human facial muscles were created to give them the unique ability to express emotions. The main purpose of Darwin's work was to support the theory of evolution by demonstrating that emotions in humans and other animals are similar. Most of the similarities he found were between closely related species, but he found some similarities between distantly related species as well. He proposed the idea that emotional states are adaptive, and therefore only those able to express certain emotions passed on their characteristics. Darwin's principles In the 1872 work, Darwin proposed three principles. The first of the three is the "principle of serviceable habits": actions that are useful in certain states of mind become habitually associated with those states and are then performed whenever that state of mind is induced, even when they are not needed. He used as an example the contracting of eyebrows (furrowing the brow), which he noted is serviceable in preventing too much light from entering the eyes. He also said that the raising of eyebrows serves to increase the field of vision. He cited examples of people attempting to remember something and raising their brows, as though they could "see" what they were trying to remember. The second of the principles is that of antithesis. While some habits are serviceable, Darwin proposed that some actions or habits are carried out merely because they are opposite in nature to a serviceable habit, but are not serviceable themselves. Shrugging of the shoulders is an example Darwin used of antithesis, because it performs no service. Shoulder shrugging is a passive expression, and very much the opposite of a confident or aggressive expression. The third of the principles is expressive habits, or nervous discharge from the nervous system. This principle proposes that some habits are performed because of a build-up of excitement in the nervous system, which causes a discharge of that excitement.
Examples include foot and finger tapping, as well as vocal expressions and expressions of anger. Darwin noted that many animals rarely make noises, even when in pain, but under extreme circumstances they vocalize in response to pain and fear. Research Paul Ekman is most noted in this field for conducting research involving facial expressions of emotions. His work provided data to back up Darwin's ideas about the universality of facial expressions, even across cultures. He conducted research by showing photographs exhibiting expressions of basic emotion to people and asking them to identify what emotion was being expressed. In 1971, Ekman and Wallace Friesen presented to people in a preliterate culture a story involving a certain emotion, along with photographs of specific facial expressions. The photographs had been previously used in studies using subjects from Western cultures. When asked to choose, from two or three photographs, the emotion being expressed in the story, the preliterate subjects' choices matched those of the Western subjects most of the time. These results indicated that certain expressions are universally associated with particular emotions, even in instances in which the people had little or no exposure to Western culture. The only emotions the preliterate people found hard to distinguish between were fear and surprise. Ekman noted that while universal expressions do not necessarily prove Darwin's theory that they evolved, they do provide strong evidence of the possibility. He mentioned the similarities between human expressions and those of other primates, as well as an overall universality of certain expressions, to back up Darwin's ideas. The expressions of emotion that Ekman noted as most universal based on research are: anger, fear, disgust, sadness, and enjoyment. A common view is that facial expressions initially served a non-communicative adaptive function. Thus, the widened eyes in the facial expression of fear have been shown to increase the visual field and the speed of moving the eyes, which helps in finding and following threats. The wrinkled nose and mouth of the facial expression of disgust limit the intake of foul-smelling and possibly dangerous air and particles. Later, such reactions, which could be observed by other members of the group, became more distinctive and exaggerated in order to fulfill a primarily socially communicative function. This communicative function can dramatically or subtly influence the behavior of other members of the group. Thus, rhesus monkeys and human infants can learn to fear potential dangers based only on the fearful facial expressions of other group members or parents. Seeing fear expressions increases the tendency toward flight responses, while seeing anger expressions increases the tendency toward fight responses. Classical conditioning studies have found that it is easier to create a pairing between a negative stimulus and anger/fear expressions than between a negative stimulus and a happiness expression. Cross-cultural studies and studies on the congenitally blind have found that these groups display the same expressions of shame and pride in situations related to social status. These expressions have clear similarities to displays of submission and dominance by other primates. Humans viewing expressions of pride automatically assign a higher social status to such individuals than to those expressing other emotions.
Robert Zajonc published two reviews in 1989 of the "facial efference theory of emotion", also known as facial feedback theory, which he had first introduced to the scientific literature in an article published in Science in 1985. This theory proposes that the facial musculature of mammals can control the temperature of the base of the brain (in particular the hypothalamus) by varying the degree of forward and backward flow through a vascular network (a so-called rete mirabile). The theory is based on the idea that increasing the temperature of portions of the hypothalamus can produce aggressive behavior, whereas cooling can produce relaxation. Our emotional language has comparable descriptors, such as "hot-headed" and "cool-headed". The theory offers an explanation for the evolution of common facial expressions of emotion in mammals. Little experimental work has been done to extend the theory, however. Carroll Izard discussed gains and losses associated with the evolution of emotions. He said that discrete emotion experiences emerge in ontogeny before language, or the conceptual structures that frame the qualia known as discrete emotion feelings, is acquired. He noted that when humans gained the capability of expressing themselves with language in the course of evolution, this contributed greatly to emotional evolution. Not only can humans articulate and share their emotions, they can use their experiences to foresee and take appropriate action in future experiences. He did, however, raise the question of whether humans have lost some of their empathy for one another, citing destructive acts such as murder and crime against one another. Joseph LeDoux focuses much of his research on the emotion fear. Fear can be evoked by two systems in the brain, both involving the thalamus and the amygdala: one old, short and fast, the other more recently evolved, more circuitous and slower. In the older system, sensory information travels directly and quickly from the thalamus to the amygdala, where it elicits the autonomic and motor responses we call fear. In the younger system, sensory information travels from the thalamus to the relevant cortical sensory areas (touch to the somatosensory cortex, vision to the visual cortex, etc.) and on to frontal association areas, where appraisal occurs. These frontal areas communicate directly with the amygdala and, in light of appraisal, may reduce or magnify the amygdala's fear response. If you glimpse what looks like a snake, long before your newer frontal areas have had time to determine it is a stick, the old thalamus-amygdala system will have evoked fear. LeDoux hypothesizes that the old fast system persists because a behavioral response at the first hint of danger is of little consequence when mistaken but may mean the difference between life and death when appropriate. See also Animal sexual behaviour § Sex for pleasure Empathy § In animals Facial expression Fear § In animals Reward system § Animals vs. humans Paul Ekman Posture (psychology) References External links The Expression of the Emotions in Man and Animals Emotion Evolution Evolutionary psychology
Evolution of emotion
Biology
1,917
52,653,581
https://en.wikipedia.org/wiki/American%20Genius
American Genius is an American documentary series focusing on the lives of inventors and pioneers who have been responsible for major developments in their areas of expertise and helped shape the course of history. Consisting of eight episodes, it was telecast on the National Geographic Channel from June 1, 2015 to June 22, 2015. Through re-enactments, each episode focuses on the lives of two individuals (or groups) competing with each other in the same field of expertise, illustrating how some of the greatest inventions were made possible by competition between aspirants. Segments of the episodes also consist of interviews with historians and experts in the specific domain under discussion. Apple Inc.'s co-founder Steve Wozniak gave his perspective and thoughts on the episode 'Jobs vs Gates' to several media outlets before the series premiere. Episodes Reception The series received favorable reviews from critics. Melissa Camacho of Common Sense Media rated it 4 out of 5 stars, concluding that “It will encourage the audience to think about the technology we rely on today a little differently”. While Neil Genzlinger of The New York Times described the interspersed expert interviews as illuminating, he wrote that the word 'versus' in the episode titles was unsuitable, as several of the inventors had collaborated with each other at some point in their lives. See also List of programs broadcast by National Geographic Channel Steve Jobs (film) America: The Story of Us References External links Jeff Wilburn Stephen David Entertainment Documentary television series about technology History of electrical engineering National Geographic (American TV channel) original programming 2010s American documentary television series 2015 American television series debuts 2015 American television series endings
American Genius
Engineering
326
7,200,923
https://en.wikipedia.org/wiki/N-Acetylneuraminic%20acid
N-Acetylneuraminic acid (Neu5Ac or NANA) is the predominant sialic acid found in human cells and many mammalian cells. Other forms, such as N-glycolylneuraminic acid, may also occur in cells. This residue is negatively charged at physiological pH and is found in complex glycans on mucins and glycoproteins at the cell membrane. Neu5Ac residues are also found in glycolipids known as gangliosides, a crucial component of neuronal membranes found in the brain. Along with its involvement in preventing infections (as a component of the mucus associated with the mucous membranes of the mouth, nose, gastrointestinal and respiratory tracts), Neu5Ac acts as a receptor for influenza viruses, allowing their attachment to mucous cells via hemagglutinin (an early step in acquiring influenza virus infection). In the biology of bacterial pathogens Neu5Ac is also important in the biology of a number of pathogenic and symbiotic bacteria, as it can be used either as a nutrient, providing both carbon and nitrogen to the bacteria, or, in some pathogens, can be activated and placed on the cell surface. Bacteria have evolved transporters for Neu5Ac to enable them to capture it from their environment, and a number of these have been characterized, including the NanT protein from Escherichia coli, the SiaPQM TRAP transporter from Haemophilus influenzae and the SatABCD ABC transporter from Haemophilus ducreyi. Medical use In Japan, Neu5Ac is approved under the trade name Acenobel for the treatment of distal myopathy with rimmed vacuoles. See also Neuraminic acid N-Glycolylneuraminic acid Sialic acid References Amino sugars Sugar acids Monosaccharides
N-Acetylneuraminic acid
Chemistry
388
3,910,661
https://en.wikipedia.org/wiki/Draft%20%28boiler%29
In a water boiler, draft is the difference between atmospheric pressure and the pressure existing in the furnace or flue gas passage. Draft can also be described as the pressure difference in the combustion chamber area that results in the motion of the flue gases and the air flow. Types of draft Drafts are produced by the rising combustion gases in the stack or flue, or by mechanical means. For example, boiler drafts can be put into four categories: natural, induced, balanced, and forced. Natural draft: When air or flue gases flow due to the difference in density between the hot flue gases and the cooler ambient gases. The difference in density creates a pressure differential that moves the hotter flue gases into the cooler surroundings. Forced draft: When air or flue gases are maintained above atmospheric pressure. Normally this is done with the help of a forced draft fan. Induced draft: When air or flue gases flow under the effect of a gradually decreasing pressure below atmospheric pressure. In this case, the system is said to operate under induced draft. The stacks (or chimneys) provide sufficient natural draft to meet the low draft loss needs. In order to meet higher pressure differentials, the stacks must operate together with draft fans. Balanced draft: When the static pressure is equal to the atmospheric pressure, the system is referred to as balanced draft. Draft is said to be zero in this system. Importance/significance Draft is significant for proper, optimized heat transfer from the flue gases to the boiler tubes. The combustion rate of the flue gases and the amount of heat transferred to the boiler both depend on the movement of the flue gases. A boiler whose combustion chamber has a strong current of air (draft) through the fuel bed will have an increased rate of combustion (efficient utilization of fuel with minimal waste of unburned fuel). The stronger movement also increases the heat transfer rate from the flue gases to the boiler (which improves efficiency and circulation). Drafting in steam locomotives Since the stack of a locomotive is too short to provide natural draft, during normal running forced draft is achieved by directing the exhaust steam from the cylinders through a cone ("blast pipe") upwards and into a skirt at the bottom of the stack. When the locomotive is stationary or in a restricted space, "live" steam from the boiler is directed through an annular ring surrounding the blast pipe to produce the same effect. See also Cooling tower system Stack effect Controlling draught References Engines Engine technology Energy conversion
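As a rough illustration of the natural-draft mechanism described above, the following sketch estimates the stack-effect pressure difference from the densities of ambient air and flue gas. It is a minimal model assuming both gases behave as ideal dry air at atmospheric pressure; the function name and example figures are illustrative, not from the article.

```python
G = 9.81        # gravitational acceleration, m/s^2
R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def natural_draft_pa(stack_height_m: float,
                     t_ambient_k: float,
                     t_flue_k: float,
                     p_atm_pa: float = 101_325.0) -> float:
    """Stack-effect estimate of natural draft (Pa).

    Treats ambient air and flue gas as ideal dry air at atmospheric
    pressure; real flue gas composition shifts the densities slightly.
    """
    rho_ambient = p_atm_pa / (R_AIR * t_ambient_k)
    rho_flue = p_atm_pa / (R_AIR * t_flue_k)
    return G * stack_height_m * (rho_ambient - rho_flue)

# A 30 m stack, 20 C ambient air, 260 C flue gas: roughly 160 Pa of draft.
print(f"{natural_draft_pa(30.0, 293.15, 533.15):.0f} Pa")
```

Taller stacks and hotter flue gas both increase the density difference term, which is why low-draft-loss boilers can rely on the chimney alone while higher-loss designs need fans.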
Draft (boiler)
Physics,Technology
516
1,935,972
https://en.wikipedia.org/wiki/Diethyl%20azodicarboxylate
Diethyl azodicarboxylate, conventionally abbreviated as DEAD and sometimes as DEADCAT, is an organic compound with the structural formula . Its molecular structure consists of a central azo functional group, RN=NR, flanked by two ethyl ester groups. This orange-red liquid is a valuable reagent but also quite dangerous and explodes upon heating. Therefore, commercial shipment of pure diethyl azodicarboxylate is prohibited in the United States and is carried out either in solution or on polystyrene particles. DEAD is an aza-dienophile and an efficient dehydrogenating agent, converting alcohols to aldehydes, thiols to disulfides and hydrazo groups to azo groups; it is also a good electron acceptor. While DEAD is used in numerous chemical reactions, it is mostly known as a key component of the Mitsunobu reaction, a common strategy for the preparation of an amine, azide, ether, thioether, or ester from the corresponding alcohol. It is used in the synthesis of various natural products and pharmaceuticals such as zidovudine, an AIDS drug; FdUMP, a potent antitumor agent; and procarbazine, a chemotherapy drug. Properties DEAD is an orange-red liquid whose color fades to yellow, or vanishes altogether, upon dilution or chemical reaction. This color change is conventionally used for visual monitoring of a synthesis. DEAD dissolves in most common organic solvents, such as toluene, chloroform, ethanol, tetrahydrofuran and dichloromethane, but has low solubility in water or carbon tetrachloride; the solubility in water is higher for the related azo compound dimethyl azodicarboxylate. DEAD is a strong electron acceptor and easily oxidizes a solution of sodium iodide in glacial acetic acid. It also reacts vigorously with hydrazine hydrate, producing diethyl hydrazodicarboxylate and evolving nitrogen. Calculations using the linear combination of atomic orbitals molecular orbital method (LCAO-MO) suggest that the DEAD molecule is unusual in having a high-lying vacant bonding orbital, and it therefore tends to abstract hydrogen atoms from various hydrogen donors. Photoassisted abstraction of hydrogen by DEAD was demonstrated for isopropyl alcohol, resulting in pinacol and tetraethyl tetrazanetetracarboxylate, and for acetaldehyde, yielding diacetyl and diethyl hydrazodicarboxylate. Similarly, reacting DEAD with ethanol and cyclohexanol abstracts hydrogen, producing acetaldehyde and cyclohexanone. These reactions also proceed without light, although in much lower yields. Thus, in general, DEAD is an aza-dienophile and dehydrogenating agent, converting alcohols to aldehydes, thiols to disulfides and hydrazo groups to azo groups. It also undergoes pericyclic reactions with alkenes and dienes via ene and Diels–Alder mechanisms. Preparation Although available commercially, diethyl azodicarboxylate can be prepared fresh in the laboratory, especially if it is required in pure, non-diluted form. A two-step synthesis starts from hydrazine, first by alkylation with ethyl chloroformate, followed by treating the resulting diethyl hydrazodicarboxylate with chlorine (bubbled through the solution), hypochlorous acid, concentrated nitric acid or red fuming nitric acid. The reaction is carried out in an ice bath, and the reagents are added dropwise so that the temperature does not rise above 20 °C. Diethyl hydrazodicarboxylate is a solid with a melting temperature of 131–133 °C which is collected as a residue; it is significantly more stable to heating than DEAD and is conventionally dried at a temperature of about 80 °C.
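The two-step preparation just described can be summarized schematically. This is a simplified overall stoichiometry (shown here for the chlorine-oxidation variant), not a mechanism from the source:

```latex
% Step 1: alkylation of hydrazine with ethyl chloroformate
\mathrm{H_2N{-}NH_2} + 2\,\mathrm{ClCO_2Et}
  \longrightarrow \mathrm{EtO_2C{-}NH{-}NH{-}CO_2Et} + 2\,\mathrm{HCl}
% Step 2: oxidation of the hydrazo compound to the azo compound (DEAD)
\mathrm{EtO_2C{-}NH{-}NH{-}CO_2Et} + \mathrm{Cl_2}
  \longrightarrow \mathrm{EtO_2C{-}N{=}N{-}CO_2Et} + 2\,\mathrm{HCl}
```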
Applications Mitsunobu reaction DEAD is a reagent in the Mitsunobu reaction, where it forms an adduct with phosphines (usually triphenylphosphine) and assists the synthesis of esters, ethers, amines and thioethers from alcohols. The reaction normally results in inversion of configuration at the alcohol carbon. DEAD was used in the original 1967 article by Oyo Mitsunobu, and his 1981 review on the use of diethyl azodicarboxylate is a top-cited chemistry article. The Mitsunobu reaction has several applications in the synthesis of natural products and pharmaceuticals. In the above reaction, which is assisted either by DEAD or DIAD (diisopropyl azodicarboxylate), thymidine 1 transforms to the derivative 2. The latter easily converts to zidovudine 4 (also known as azidothymidine or AZT), an important antiviral drug, used among others in the treatment of AIDS. Another example of a pharmaceutical application of the DEAD-assisted Mitsunobu reaction is the synthesis of the bis[(pivaloyloxy)methyl] [PIVz] derivative of 2'-deoxy-5-fluorouridine 5'-monophosphate (FdUMP), which is a potent antitumor agent. Michael reaction The azo group in DEAD is a Michael acceptor. In the presence of a copper(II) catalyst, DEAD assists the conversion of β-keto esters to the corresponding hydrazine derivatives. The substitution of boronic acid esters proceeds similarly: Other reactions DEAD is an efficient component in Diels-Alder reactions and in click chemistry, for example the synthesis of bicyclo[2.1.0]pentane, which originated with Otto Diels. It has also been used to generate aza-Baylis-Hillman adducts with acrylates. DEAD can be used for the synthesis of heterocyclic compounds. Thus, pyrazoline derivatives convert by condensation to α,β-unsaturated ketones: Another application is the use of DEAD as an enophile in ene reactions: Safety DEAD is toxic, and shock and light sensitive; it can explode violently when its undiluted form is heated above 100 °C. Shipment by air of pure diethyl azodicarboxylate is prohibited in the United States, and it is carried out in solution, typically about 40% DEAD in toluene. Alternatively, DEAD is transported and stored on 100–300 mesh polystyrene particles at a concentration of about 1 mmol/g. The time-weighted average threshold limit value for exposure to DEAD over a typical 40-hour working week is 50 parts per million; that is, DEAD is half as toxic as, e.g., carbon monoxide. Safety hazards have resulted in a rapid decline in the use of DEAD and its replacement with DIAD and other similar compounds. References Azo compounds Reagents for organic chemistry Ethyl esters Carboxylate esters
Diethyl azodicarboxylate
Chemistry
1,466
610,813
https://en.wikipedia.org/wiki/Tarnish
Tarnish is a thin layer of corrosion that forms over copper, brass, aluminum, magnesium, neodymium and other similar metals as their outermost layer undergoes a chemical reaction. Tarnish does not always result from the sole effects of oxygen in the air. For example, silver needs hydrogen sulfide to tarnish, although it may tarnish with oxygen over time. It often appears as a dull, gray or black film or coating over metal. Tarnish is a surface phenomenon that is self-limiting, unlike rust: only the top few layers of the metal react, and the layer of tarnish seals and protects the underlying layers from reacting. Tarnish preserves the underlying metal in outdoor use, and in this form is called chemical patina. Unlike the wear patina necessary in applications such as copper roofing and outdoor copper, bronze, and brass statues and fittings, chemical patina is considerably more uneven and is considered undesirable. Patina is the name given to tarnish on copper-based metals, while toning is a term for the type of tarnish which forms on coins. Chemistry Tarnish is a product of a chemical reaction between a metal and a nonmetal compound, especially oxygen and sulfur dioxide. It is usually a metal oxide, the product of oxidation; sometimes it is a metal sulfide. The metal oxide sometimes reacts with water to make the hydroxide, or with carbon dioxide to make the carbonate. It is a chemical change. There are various methods to prevent metals from tarnishing. Prevention and removal Applying a thin coat of polish can prevent tarnish from forming on these metals. Tarnish can be removed by using steel wool, sandpaper, emery paper, baking soda or a file to rub or polish the metal's dull surface. Fine objects (such as silverware) may have the tarnish electrochemically reversed (non-destructively) by resting the objects on a piece of aluminium foil in a pot of boiling water with a small amount of salt or baking soda, or it may be removed with a special polishing compound and a soft cloth. Gentler abrasives, such as calcium carbonate, are often used by museums to clean tarnished silver as they cannot damage or scratch the silver and will not leave unwanted residues. References Chemical reactions Metals Metalworking terminology Corrosion
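As a worked illustration of the silver chemistry mentioned above, the equations below show, in simplified textbook form, hydrogen-sulfide tarnishing and its electrochemical reversal on aluminium foil. The stoichiometry in practice involves further species (moisture, oxygen, the electrolyte), so treat these as idealizations rather than a complete account:

```latex
% Tarnishing: silver reacts with hydrogen sulfide in the presence of oxygen
4\,\mathrm{Ag} + 2\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow 2\,\mathrm{Ag_2S} + 2\,\mathrm{H_2O}
% Electrochemical reversal: aluminium reduces silver sulfide back to silver
3\,\mathrm{Ag_2S} + 2\,\mathrm{Al} \longrightarrow 6\,\mathrm{Ag} + \mathrm{Al_2S_3}
```

The second reaction runs because aluminium is more easily oxidized than silver, so in the hot electrolyte the foil acts as a sacrificial anode while the sulfide layer is reduced in place, restoring silver rather than abrading it away.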
Tarnish
Chemistry,Materials_science
469
11,422,398
https://en.wikipedia.org/wiki/UnaL2%20LINE%203%E2%80%B2%20element
The UnaL2 LINE 3′ element is an RNA element found in the UnaL2 LINE (long interspersed nuclear element) and its partner SINE (short interspersed nuclear element) from eel. This conserved element, found at their 3′ end, is a stem-loop that is critical for their retrotransposition. The first step of retrotransposition is the recognition of their 3′ tails by UnaL2-encoded reverse transcriptase. The NMR structure of a 17-nucleotide RNA derived from the 3′ tail of UnaL2 has been determined. References External links Cis-regulatory RNA elements
UnaL2 LINE 3′ element
Chemistry
124
42,955,431
https://en.wikipedia.org/wiki/Betable
Betable is a London-based company that develops and markets a real-money gambling platform for the social gaming industry. The company is licensed by the United Kingdom Gambling Commission and the Alderney Gambling Control Commission and is certified by third-party testing houses. The company has raised a total of $23 million in venture funding from, among others, Venture51, Greylock Partners, and Founders Fund. History Christopher Griffin, the company's current CEO, founded Betable in 2008. The first iteration of the service involved users creating betting opportunities and placing bets on a central, socially oriented gambling site. In July 2010 the company raised $3 million in seed funding from Atomico Ventures. In 2012, Griffin recapitalized the company and relaunched Betable, pivoting it from a betting site into the developer of a real-money gambling platform. The Betable API beta program was released in July 2012, allowing game developers to integrate Betable betting features. In October 2012, Betable partnered with game developers Slingo, Digital Chocolate, and Murka Games to incorporate betting into the developers' current offerings. In November 2012, Mandala Games became the first European game developer to use the Betable platform, enabling real-money play in its title Slots by La Riviera. In November 2013, Betable raised an $18.5 million Series A funding round led by Venture51. TechCrunch April Fools' slot machine On 1 April 2013, news website TechCrunch published a hoax article claiming that it would be launching a social betting game for venture capitalists to gamble at, remarking that it would be "an even easier way to bypass SEC regulations around being an accredited investor". The article included a TechCrunch-themed slot machine that was powered by Betable software. Products and services Third-party game developers use the Betable API to add real-money gambling functionality to mobile games. In addition to converting standard games (such as slots, blackjack, and roulette) into real-money gambling titles, the software can be used to create non-traditional types of gambling games that operate on top of the Betable platform. Once integrated, Betable acts as a turnkey gaming engine that manages all real-money tasks within a game, such as identity verification, anti-fraud safeguards, regulatory compliance, transactions, auditing, and gambling mechanics. The platform acts as an alternative to other forms of app monetization, such as banner ads or freemium models, by allowing developers to enable revenue-generating betting features within their games. Because Betable possesses gambling licenses from the United Kingdom Gambling Commission that allow it to provide gambling services on another party's behalf, developers can enable betting within their games without applying for any licenses themselves. Betable is compatible with games on iOS and Android operating systems. Battle Keno One example of a traditional gaming title being converted to a real-money gambling app through Betable's platform is 30AK Gaming's Battle Keno, an adaptation of Battleship that financially rewards or penalizes players based on gameplay. Prospect Hall Casino In February 2015, Betable launched Prospect Hall Casino, a UK online gambling business with online casino games for mobile and web. The business is licensed and regulated by the United Kingdom Gambling Commission and the Alderney Gambling Control Commission.
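To make the division of responsibilities described above concrete, here is a purely hypothetical sketch of what a turnkey-platform integration could look like from a game developer's side. None of these class, method, or parameter names come from Betable's actual API; they are invented solely to illustrate the pattern in which the licensed platform owns identity checks, compliance, and money movement while the game only reports wagers and outcomes:

```python
# Hypothetical integration sketch; not Betable's real API.
import uuid

class GamblingPlatformClient:
    """Stand-in for a licensed real-money platform's client library."""

    def __init__(self, api_key: str):
        self.api_key = api_key  # credential issued by the platform

    def verify_player(self, player_id: str) -> bool:
        # Identity, age, and jurisdiction checks happen platform-side
        # (stubbed here to always succeed).
        return True

    def place_wager(self, player_id: str, amount_pennies: int) -> str:
        # The platform, not the game, debits the player's wallet and
        # returns a reference used later for auditing and settlement.
        return str(uuid.uuid4())

    def settle(self, wager_ref: str, payout_pennies: int) -> None:
        # The platform credits winnings and records the outcome.
        pass

client = GamblingPlatformClient(api_key="test-key")
if client.verify_player("player-42"):
    ref = client.place_wager("player-42", amount_pennies=100)
    client.settle(ref, payout_pennies=250)  # e.g. a winning spin
```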
References External links Official website Online gambling companies of the United Kingdom Mobile game companies Hotel and leisure companies based in London Social software
Betable
Technology
696
175,440
https://en.wikipedia.org/wiki/Medical%20cannabis
Medical cannabis, medicinal cannabis or medical marijuana (MMJ) refers to cannabis products and cannabinoid molecules that are prescribed by physicians for their patients. The use of cannabis as medicine has a long history, but has not been as rigorously tested as other medicinal plants due to legal and governmental restrictions, resulting in limited clinical research to define the safety and efficacy of using cannabis to treat diseases. Preliminary evidence has indicated that cannabis might reduce nausea and vomiting during chemotherapy and reduce chronic pain and muscle spasms. Regarding non-inhaled cannabis or cannabinoids, a 2021 review found that it provided little relief against chronic pain and sleep disturbance, and caused several transient adverse effects, such as cognitive impairment, nausea, and drowsiness. Short-term use increases the risk of minor and major adverse effects. Common side effects include dizziness, feeling tired, vomiting, and hallucinations. Long-term effects of cannabis are not clear. Concerns include memory and cognition problems, risk of addiction, schizophrenia in young people, and the risk of children taking it by accident. Many cultures have used cannabis for therapeutic purposes for thousands of years. Some American medical organizations have requested removal of cannabis from the list of Schedule I controlled substances, emphasizing that rescheduling would enable more extensive research and regulatory oversight to ensure safe access. Others oppose its legalization, such as the American Academy of Pediatrics. Medical cannabis can be administered through various methods, including capsules, lozenges, tinctures, dermal patches, oral or dermal sprays, cannabis edibles, and vaporizing or smoking dried buds. Synthetic cannabinoids are available for prescription use in some countries, such as synthetic delta-9-THC and nabilone. Countries that allow the medical use of whole-plant cannabis include Argentina, Australia, Canada, Chile, Colombia, Germany, Greece, Israel, Italy, the Netherlands, Peru, Poland, Portugal, Spain, and Uruguay. In the United States, 38 states and the District of Columbia have legalized cannabis for medical purposes, beginning with the passage of California's Proposition 215 in 1996. Although cannabis remains prohibited for any use at the federal level, the Rohrabacher–Farr amendment was enacted in December 2014, limiting the ability of federal law to be enforced in states where medical cannabis has been legalized. This amendment reflects an increasing bipartisan acknowledgment of the potential therapeutic uses of cannabis and the significance of state-level policymaking in this area. Classification In the U.S., the National Institute on Drug Abuse defines medical cannabis as "using the whole, unprocessed marijuana plant or its basic extracts to treat symptoms of illness and other conditions". A cannabis plant includes more than 400 different chemicals, of which about 70 are cannabinoids. In comparison, typical government-approved medications contain only one or two chemicals. The number of active chemicals in cannabis is one reason why treatment with cannabis is difficult to classify and study. A 2014 review stated that the variations in ratio of CBD-to-THC in botanical and pharmaceutical preparations determines the therapeutic vs psychoactive effects (CBD attenuates THC's psychoactive effects) of cannabis products. 
Medical uses Overall, research into the health effects of medical cannabis has been of low quality, and it is not clear whether it is a useful treatment for any condition, or whether harms outweigh any benefit. There is no consistent evidence that it helps with chronic pain and muscle spasms. Low quality evidence suggests its use for reducing nausea during chemotherapy, improving appetite in HIV/AIDS, improving sleep, and improving tics in Tourette syndrome. When usual treatments are ineffective, cannabinoids have also been recommended for anorexia, arthritis, glaucoma, and migraine. It is unclear whether American states might be able to mitigate the adverse effects of the opioid epidemic by prescribing medical cannabis as an alternative pain management drug. Cannabis should not be used in pregnancy. Insomnia Research analyzing data from the National Health and Nutrition Examination Survey (NHANES) did not find significant differences in sleep duration between cannabis users and non-users. This suggests that while some individuals may perceive benefits from cannabis use in terms of sleep, it may not significantly change overall sleep patterns across the general population. A review of literature up to 2018 indicates that cannabidiol (CBD) may have therapeutic potential for the treatment of insomnia. CBD, a non-psychoactive component of cannabis, is of particular interest due to its potential to influence sleep without the psychoactive effects associated with tetrahydrocannabinol (THC). Nausea and vomiting Medical cannabis is somewhat effective in chemotherapy-induced nausea and vomiting (CINV) and may be a reasonable option in those who do not improve following preferred treatments. Comparative studies have found cannabinoids to be more effective than some conventional antiemetics such as prochlorperazine, promethazine, and metoclopramide in controlling CINV, but they are used less frequently because of side effects including dizziness, dysphoria, and hallucinations. Long-term cannabis use may cause nausea and vomiting, a condition known as cannabinoid hyperemesis syndrome (CHS). A 2016 Cochrane review said that cannabinoids were "probably effective" in treating chemotherapy-induced nausea in children, but with a high side-effect profile (mainly drowsiness, dizziness, altered moods, and increased appetite). Less common side effects were "ocular problems, orthostatic hypotension, muscle twitching, pruritus, vagueness, hallucinations, lightheadedness and dry mouth". HIV/AIDS Evidence is lacking for both the efficacy and safety of cannabis and cannabinoids in treating patients with HIV/AIDS or for anorexia associated with AIDS. As of 2013, current studies suffer from the effects of bias, small sample size, and lack of long-term data. Pain A 2021 review found little effect of using non-inhaled cannabis to relieve chronic pain. According to a 2019 systematic review, there have been inconsistent results on the use of cannabis for neuropathic pain, spasms associated with multiple sclerosis and pain from rheumatic disorders, and it was not effective in treating chronic cancer pain. The authors state that additional randomized controlled trials of different cannabis products are necessary to make conclusive recommendations. When cannabis is inhaled to relieve pain, blood levels of cannabinoids rise faster than when oral products are used, peaking within three minutes and attaining an analgesic effect in seven minutes.
A 2011 review considered cannabis to be generally safe, and it appears safer than opioids in palliative care. A 2022 review concluded that the pain relief experienced after using medical cannabis is due to the placebo effect, especially given widespread media attention that sets the expectation for pain relief. Neurological conditions The efficacy of cannabis in treating neurological problems, including multiple sclerosis (MS) and movement problems, is not clear. Evidence also suggests that oral cannabis extract is effective for reducing patient-centered measures of spasticity. A trial of cannabis is deemed to be a reasonable option if other treatments have not been effective. Its use for MS is approved in ten countries. A 2012 review found no problems with tolerance, abuse, or addiction. In the United States, cannabidiol, one of the cannabinoids found in the marijuana plant, has been approved for treating two severe forms of epilepsy, Lennox-Gastaut syndrome and Dravet syndrome. Mental health A 2019 systematic review found that there is a lack of evidence that cannabinoids are effective in treating depressive or anxiety disorders, attention-deficit hyperactivity disorder (ADHD), Tourette syndrome, post-traumatic stress disorder, or psychosis. Research indicates that cannabis, particularly CBD, may have anxiolytic (anxiety-reducing) effects. A study found that CBD significantly reduced anxiety during a simulated public speaking test for individuals with social anxiety disorder. However, the relationship between cannabis use and anxiety symptoms is complex, and while some users report relief, the overall evidence from observational studies and clinical trials remains inconclusive. Cannabis is often used by people to cope with anxiety, yet the efficacy and safety of cannabis for treating anxiety disorders remain to be established. Cannabis use, especially at high doses, is associated with a higher risk of psychosis, particularly in individuals with a genetic predisposition to psychotic disorders like schizophrenia. Some studies have shown that cannabis can trigger a temporary psychotic episode, which may increase the risk of developing a psychotic disorder later. The impact of cannabis on depression is less clear. Some studies suggest a potential increase in depression risk among adolescents who use cannabis, though findings are inconsistent across studies. Adverse effects Medical use There is insufficient data to draw strong conclusions about the safety of medical cannabis. Typically, adverse effects of medical cannabis use are not serious; they include tiredness, dizziness, increased appetite, and cardiovascular and psychoactive effects. Other effects can include impaired short-term memory; impaired motor coordination; altered judgment; and paranoia or psychosis at high doses. Tolerance to these effects develops over a period of days or weeks. The amount of cannabis normally used for medicinal purposes is not believed to cause any permanent cognitive impairment in adults, though long-term treatment in adolescents should be weighed carefully as they are more susceptible to these impairments. Withdrawal symptoms are rarely a problem with controlled medical administration of cannabinoids. The ability to drive vehicles or to operate machinery may be impaired until a tolerance is developed. Although supporters of medical cannabis say that it is safe, further research is required to assess the long-term safety of its use.
Cognitive effects Recreational use of cannabis is associated with cognitive deficits, especially for those who begin to use cannabis in adolescence. There is a lack of research into the long-term cognitive effects of medical use of cannabis, but one 12-month observational study reported that "MC patients demonstrated significant improvements on measures of executive function and clinical state over the course of 12 months". Impact on psychosis Exposure to THC can cause acute transient psychotic symptoms in healthy individuals and people with schizophrenia. A 2007 meta-analysis concluded that cannabis use reduced the average age of onset of psychosis by 2.7 years relative to non-cannabis use. A 2005 meta-analysis concluded that adolescent use of cannabis increases the risk of psychosis, and that the risk is dose-related. A 2004 literature review on the subject concluded that cannabis use is associated with a two-fold increase in the risk of psychosis, but that cannabis use is "neither necessary nor sufficient" to cause psychosis. A French review from 2009 concluded that cannabis use, particularly before age 15, was a factor in the development of schizophrenic disorders. Pharmacology The genus Cannabis contains two species which produce useful amounts of psychoactive cannabinoids: Cannabis indica and Cannabis sativa, which are listed as Schedule I medicinal plants in the US; a third species, Cannabis ruderalis, has few psychogenic properties. Cannabis contains more than 460 compounds; at least 80 of these are cannabinoids – chemical compounds that interact with cannabinoid receptors in the brain. As of 2012, more than 20 cannabinoids were being studied by the U.S. FDA. The most psychoactive cannabinoid found in the cannabis plant is tetrahydrocannabinol (or delta-9-tetrahydrocannabinol, commonly known as THC). Other cannabinoids include delta-8-tetrahydrocannabinol, cannabidiol (CBD), cannabinol (CBN), cannabicyclol (CBL), cannabichromene (CBC) and cannabigerol (CBG); they have weaker psychotropic effects than THC, but may play a role in the overall effect of cannabis. The most studied are THC, CBD and CBN. CB1 and CB2 are the primary cannabinoid receptors responsible for several of the effects of cannabinoids, although other receptors may play a role as well. Both belong to a group of receptors called G protein-coupled receptors (GPCRs). CB1 receptors are found in very high levels in the brain and are thought to be responsible for psychoactive effects. CB2 receptors are found peripherally throughout the body and are thought to modulate pain and inflammation. Absorption Cannabinoid absorption is dependent on its route of administration. Inhaled and vaporized THC have similar absorption profiles to smoked THC, with a bioavailability ranging from 10 to 35%. Oral administration has the lowest bioavailability of approximately 6%, variable absorption depending on the vehicle used, and the longest time to peak plasma levels (2 to 6 hours) compared to smoked or vaporized THC. Similar to THC, CBD has poor oral bioavailability, approximately 6%. The low bioavailability is largely attributed to significant first-pass metabolism in the liver and erratic absorption from the gastrointestinal tract. However, oral administration of CBD has a faster time to peak concentrations (2 hours) than THC. Due to the poor bioavailability of oral preparations, alternative routes of administration have been studied, including sublingual and rectal.
These alternative formulations maximize bioavailability and reduce first-pass metabolism. Sublingual administration in rabbits yielded a bioavailability of 16% and a time to peak concentration of 4 hours. Rectal administration in monkeys doubled bioavailability to 13.5% and achieved peak blood concentrations within 1 to 8 hours after administration. Distribution Like cannabinoid absorption, distribution is also dependent on the route of administration. Smoking and inhalation of vaporized cannabis have better absorption than other routes of administration, and therefore also have more predictable distribution. THC is highly protein bound once absorbed, with only 3% found unbound in the plasma. It distributes rapidly to highly vascularized organs such as the heart, lungs, liver, spleen, and kidneys, as well as to various glands. Low levels can be detected in the brain, testes, and unborn fetuses, all of which are protected from systemic circulation via barriers. THC further distributes into fatty tissues a few days after administration due to its high lipophilicity, and is found deposited in the spleen and fat after redistribution. Metabolism Delta-9-THC is the primary molecule responsible for the effects of cannabis. Delta-9-THC is metabolized in the liver into 11-OH-THC, the first metabolic product in this pathway. Both delta-9-THC and 11-OH-THC are psychoactive; the metabolism of THC into 11-OH-THC plays a part in the heightened psychoactive effects of edible cannabis. Next, 11-OH-THC is metabolized in the liver into 11-COOH-THC, the second metabolic product of THC, which is not psychoactive. Ingestion of edible cannabis products leads to a slower onset of effect than inhalation because the THC travels through the blood to the liver before it reaches the rest of the body. Inhaled cannabis can result in THC going directly to the brain, from which it then travels back to the liver in recirculation for metabolism. Eventually, both routes of administration result in the metabolism of psychoactive THC to inactive 11-COOH-THC. Excretion Due to substantial metabolism of THC and CBD, their metabolites are excreted mostly via feces rather than urine. After delta-9-THC is hydroxylated into 11-OH-THC via CYP2C9, CYP2C19, and CYP3A4, it undergoes phase II metabolism into more than 30 metabolites, a majority of which are products of glucuronidation. Approximately 65% of THC is excreted in feces and 25% in urine, while the remaining 10% is excreted by other means. The terminal half-life of THC is 25 to 36 hours, whereas for CBD it is 18 to 32 hours. CBD is hydroxylated by P450 liver enzymes into 7-OH-CBD. Its metabolites are products of primarily CYP2C19 and CYP3A4 activity, with potential activity of CYP1A1, CYP1A2, CYP2C9, and CYP2D6. Like delta-9-THC, a majority of CBD is excreted in feces and some in urine. Administration Smoking has been the means of administration of cannabis for many users, but it is not suitable for the use of cannabis as a medicine. It was the most common method of medical cannabis consumption in the US. It is difficult to predict the pharmacological response to cannabis because the concentration of cannabinoids varies widely, as there are different ways of preparing it for consumption (smoked, applied as oils, eaten, infused into other foods, or drunk) and a lack of production controls.
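To make the half-life figures above concrete, the following sketch computes the fraction of drug remaining under simple first-order (single-compartment) elimination. This is a deliberate oversimplification: as noted above, THC redistributes from fatty tissue, so real plasma curves are multi-phasic; the function and the 48-hour example are illustrative only.

```python
def remaining_fraction(t_hours: float, half_life_hours: float) -> float:
    """Fraction remaining after t_hours of first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# Terminal half-lives quoted above: THC ~25-36 h, CBD ~18-32 h.
# Fraction remaining 48 hours into the terminal phase:
for name, t_half in [("THC, 25 h", 25.0), ("THC, 36 h", 36.0),
                     ("CBD, 18 h", 18.0), ("CBD, 32 h", 32.0)]:
    print(f"{name}: {remaining_fraction(48.0, t_half):.0%}")
# Prints roughly 26%, 40%, 16%, and 35% respectively.
```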
The potential for adverse effects from smoke inhalation makes smoking a less viable option than oral preparations. Cannabis vaporizers have gained popularity because of a perception among users that fewer harmful chemicals are ingested when components are inhaled via aerosol rather than smoke. Cannabinoid medicines are available in pill form (dronabinol and nabilone) and liquid extracts formulated into an oromucosal spray (nabiximols). Oral preparations are "problematic due to the uptake of cannabinoids into fatty tissue, from which they are released slowly, and the significant first-pass liver metabolism, which breaks down Δ9THC and contributes further to the variability of plasma concentrations". The US Food and Drug Administration (FDA) has not approved smoked cannabis for any condition or disease, as it deems that evidence is lacking concerning safety and efficacy. The FDA issued a 2006 advisory against smoked medical cannabis stating: "marijuana has a high potential for abuse, has no currently accepted medical use in treatment in the United States, and has a lack of accepted safety for use under medical supervision." History Ancient Cannabis, called má 麻 (meaning "hemp; cannabis; numbness") or dàmá 大麻 (with "big; great") in Chinese, was used in Taiwan for fiber starting about 10,000 years ago. The botanist Hui-lin Li wrote that in China, "The use of Cannabis in medicine was probably a very early development. Since ancient humans used hemp seed as food, it was quite natural for them to also discover the medicinal properties of the plant." Emperor Shen-Nung, who was also a pharmacologist, wrote a book on treatment methods in 2737 BCE that included the medical benefits of cannabis. He recommended the substance for many ailments, including constipation, gout, rheumatism, and absent-mindedness. Cannabis is one of the 50 "fundamental" herbs in traditional Chinese medicine. The Ebers Papyrus from Ancient Egypt describes medical cannabis. The ancient Egyptians used hemp (cannabis) in suppositories for relieving the pain of hemorrhoids. Surviving texts from ancient India confirm that cannabis' psychoactive properties were recognized, and doctors used it for treating a variety of illnesses and ailments, including insomnia, headaches, gastrointestinal disorders, and pain, including during childbirth. The Ancient Greeks used cannabis to dress wounds and sores on their horses, and in humans, dried leaves of cannabis were used to treat nose bleeds, and cannabis seeds were used to expel tapeworms. In the medieval Islamic world, Arabic physicians made use of the diuretic, antiemetic, antiepileptic, anti-inflammatory, analgesic and antipyretic properties of Cannabis sativa, and used it extensively as medication from the 8th to 18th centuries. Landrace strains Cannabis seeds may have been used for food, rituals or religious practices in ancient Europe and China. Harvesting the plant led to the spread of cannabis throughout Eurasia about 10,000 to 5,000 years ago, with further distribution to the Middle East and Africa about 2,000 to 500 years ago. Landrace strains of cannabis developed over centuries; they are cultivars of the plant that originated in one specific region. Widely cultivated strains of cannabis, such as "Afghani" or "Hindu Kush", are indigenous to the Pakistan and Afghanistan regions, while "Durban Poison" is native to Africa. There are approximately 16 landrace strains of cannabis identified from Pakistan, Jamaica, Africa, Mexico, Central America and Asia.
Modern An Irish physician, William Brooke O'Shaughnessy, is credited with introducing cannabis to Western medicine. O'Shaughnessy discovered cannabis in the 1830s while living abroad in India, where he conducted numerous experiments investigating the drug's medical utility (noting in particular its analgesic and anticonvulsant effects). He returned to England with a supply of cannabis in 1842, after which its use spread through Europe and the United States. In 1845 the French physician Jacques-Joseph Moreau published a book about the use of cannabis in psychiatry. In 1850 cannabis was entered into the United States Pharmacopeia. An anecdotal report of Cannabis indica as a treatment for tetanus appeared in Scientific American in 1880. The use of cannabis in medicine began to decline by the end of the 19th century, due to difficulty in controlling dosages and the rise in popularity of synthetic and opium-derived drugs. Also, the advent of the hypodermic syringe allowed these drugs to be injected for immediate effect, in contrast to cannabis, which is not water-soluble and therefore cannot be injected. In the United States, the medical use of cannabis further declined with the passage of the Marihuana Tax Act of 1937, which imposed new regulations and fees on physicians prescribing cannabis. Cannabis was removed from the U.S. Pharmacopeia in 1941, and officially banned for any use with the passage of the Controlled Substances Act of 1970. Cannabis began to attract renewed interest as medicine in the 1970s and 1980s, in particular due to its use by cancer and AIDS patients who reported relief from the effects of chemotherapy and wasting syndrome. In 1996, California became the first U.S. state to legalize medical cannabis in defiance of federal law. In 2001, Canada became the first country to adopt a system regulating the medical use of cannabis. Society and culture Legal status Countries that have legalized the medical use of cannabis include Argentina, Australia, Brazil, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Finland, Germany, Greece, Israel, Italy, Jamaica, Lebanon, Luxembourg, Malta, Morocco, the Netherlands, New Zealand, North Macedonia, Panama, Peru, Poland, Portugal, Rwanda, Sri Lanka, Switzerland, Thailand, the United Kingdom, and Uruguay. Other countries have more restrictive laws that allow only the use of isolated cannabinoid drugs such as Sativex or Epidiolex. Countries with the most relaxed policies include Canada, the Netherlands, Thailand, and Uruguay, where cannabis can be purchased without need for a prescription. In Mexico, the THC content of medical cannabis is limited to one percent. In the United States, the legality of medical cannabis varies by state. However, in many of these countries, access may not always be possible under the same conditions. International law Cannabis and its derivatives are subject to regulation under three United Nations drug control treaties: the 1961 Single Convention on Narcotic Drugs, the 1971 Convention on Psychotropic Substances, and the 1988 Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Cannabis and cannabis resin are classified as a Schedule I drug under the Single Convention treaty, meaning that medical use is considered "indispensable for the relief of pain and suffering" but that the drug is also considered an addictive medication with risks of abuse.
Countries have an obligation to provide access and sufficient availability of drugs listed in Schedule I for the purposes of medical use. Prior to December 2020, cannabis and cannabis resin were also included in Schedule IV, a more restrictive level of control reserved for only the most dangerous drugs such as heroin and fentanyl. They were removed following an independent scientific assessment by the World Health Organization in 2018–2019. Member nations of the UN Commission on Narcotic Drugs voted 27–25 to remove them from Schedule IV on 2 December 2020, following a World Health Organization recommendation for removal in January 2019. United States In the United States, the use of cannabis for medical purposes is legal in 38 states, four out of five permanently inhabited U.S. territories, and the District of Columbia. An additional 10 states have more restrictive laws allowing the use of low-THC products. Cannabis remains illegal at the federal level under the Controlled Substances Act, which classifies it as a Schedule I drug with a high potential for abuse and no accepted medical use. In December 2014, however, the Rohrabacher–Farr amendment was signed into law, prohibiting the Justice Department from prosecuting individuals acting in accordance with state medical cannabis laws. In the US, the FDA approved two oral cannabinoids for use as medicine in 1985: dronabinol (pure delta-9-THC; brand name Marinol) and nabilone (a synthetic neocannabinoid; brand name Cesamet). In the US, they are both listed as Schedule II, indicating a high potential for side effects and addiction. Economics Distribution The method of obtaining medical cannabis varies by region and by legislation. In the US, most consumers grow their own or buy it from cannabis dispensaries in states where it is legal. Marijuana vending machines for selling or dispensing cannabis are in use in the United States and are planned to be used in Canada. In 2014, the startup Meadow began offering on-demand delivery of medical marijuana in the San Francisco Bay Area through their mobile app. Almost 70% of medical cannabis is exported from the United Kingdom, according to a 2017 United Nations report, with much of the remaining amount coming from Canada and the Netherlands. Insurance In the United States, health insurance companies may not pay for a medical marijuana prescription, as the Food and Drug Administration must approve any substance for medicinal purposes. Before this can happen, the FDA must first permit the study of the medical benefits and drawbacks of the substance, which it has not done since it was placed on Schedule I of the Controlled Substances Act in 1970. Therefore, all expenses incurred fulfilling a medical marijuana prescription will likely have to be paid out of pocket. However, the New Mexico Court of Appeals has ruled that workers' compensation insurance must pay for prescribed marijuana as part of the state's Medical Cannabis Program. Positions of medical organizations Medical organizations that have issued statements in support of allowing access to medical cannabis include the American Nurses Association, American Public Health Association, American Medical Student Association, National Multiple Sclerosis Society, Epilepsy Foundation, and Leukemia & Lymphoma Society. Organizations that oppose the legalization of medical cannabis include the American Academy of Pediatrics (AAP) and American Psychiatric Association. However, the AAP also supports rescheduling for the purpose of facilitating research.
The American Medical Association and American College of Physicians do not take a position on the legalization of medical cannabis, but have called for the Schedule I classification to be reviewed. The American Academy of Family Physicians and American Society of Addiction Medicine also do not take a position, but do support rescheduling to better facilitate research. The American Heart Association says that "many of the concerning health implications of cannabis include cardiovascular diseases" but that it supports rescheduling to allow "more nuanced ... marijuana legislation and regulation" and to "reflect the existing science behind cannabis". The American Cancer Society and American Psychological Association have noted the obstacles that exist for conducting research on cannabis, and have called on the federal government to better enable scientific study of the drug. Cancer Research UK says that while cannabis is being studied for therapeutic potential, "claims that there is solid "proof" that cannabis or cannabinoids can cure cancer is highly misleading to patients and their families, and builds a false picture of the state of progress in this area". Nonproprietary names Three International Nonproprietary Names (INNs) have been granted for cannabinoids: two for plant-derived phytocannabinoids and one for a neocannabinoid. Dronabinol is the INN for delta-9-THC (there is a common misconception that the word "dronabinol" refers only to synthetic delta-9-THC, which is incorrect). Cannabidiol is also the official INN for that molecule, granted in 2017. Nabilone is the INN for a synthetic cannabinoid analog (not present in Cannabis plants). Nabiximols is the generic name (but not recognized as an INN) of a mixture of cannabidiol and dronabinol. Its most common form is an oromucosal spray derived from two strains of Cannabis sativa and containing THC and CBD, traded under the brand name Sativex®. It is not approved in the United States, but is approved in several European countries, Canada, and New Zealand as of 2013. As antiemetics, these medications are usually used when conventional treatment for nausea and vomiting associated with cancer chemotherapy fails to work. Nabiximols is used for the treatment of spasticity associated with MS when other therapies have not worked, and when an initial trial demonstrates "meaningful improvement". Trials for FDA approval in the US are underway. It is also approved in several European countries for overactive bladder and vomiting. When sold under the trade name Sativex as a mouth spray, the prescribed daily dose in Sweden delivers a maximum of 32.4 mg of THC and 30 mg of CBD; mild to moderate dizziness is common during the first few weeks. Relative to inhaled consumption, the peak concentration of oral THC is delayed, and it may be difficult to determine optimal dosage because of variability in patient absorption. In 1964, Albert Lockhart and Manley West began studying the health effects of traditional cannabis use in Jamaican communities. They developed, and in 1987 gained permission to market, the pharmaceutical "Canasol", one of the first cannabis extracts. Research A 2022 review concluded that "oral, synthetic cannabis products with high THC-to-CBD ratios and sublingual, extracted cannabis products with comparable THC-to-CBD ratios may be associated with short-term improvements in chronic pain and increased risk for dizziness and sedation."
See also Charlotte's Web (cannabis) Chinese herbology Tilden's Extract References Further reading External links , links to websites about medical cannabis Information on Cannabis and Cannabinoids from the U.S. National Cancer Institute Information on cannabis (marihuana, marijuana) and the cannabinoids from Health Canada The Center for Medicinal Cannabis Research of the University of California Medical Marijuana – a 2014–2015 three-part CNN documentary produced by Sanjay Gupta Antiemetics Antioxidants Biologically based therapies Herbalism Medical ethics Medicinal plants Pharmaceuticals policy Pharmacognosy
Medical cannabis
Chemistry
6,538
74,216,929
https://en.wikipedia.org/wiki/Dithiobutylamine
Dithiobutylamine (DTBA) is a reducing agent intended as an alternative to DTT in biochemical uses. It was rationally designed to be easily synthesized in non-racemic form, to have a lower pKa (allowing more effective reduction at neutral pH), and to have a low disulfide E°′ reduction potential. It was first reported in 2012 and is commercially available. See also Dithiothreitol (DTT) 2-Mercaptoethanol (BME) TCEP References Thiols Reducing agents
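The point that a lower thiol pKa allows more effective reduction at neutral pH follows from the fraction of thiol present as the reactive thiolate anion, given by the Henderson–Hasselbalch relation. A minimal sketch, assuming approximate literature pKa values of about 9.2 for DTT and 8.2 for DTBA (illustrative assumptions, not figures from the text above):

```python
# Fraction of thiol groups present as the reactive thiolate anion at a
# given pH, from the Henderson-Hasselbalch equation.
def thiolate_fraction(pka: float, ph: float = 7.0) -> float:
    return 1.0 / (1.0 + 10 ** (pka - ph))

# Approximate, illustrative pKa values for the first thiol of each reagent.
for name, pka in [("DTT", 9.2), ("DTBA", 8.2)]:
    print(f"{name} (pKa ~{pka}): {thiolate_fraction(pka):.1%} thiolate at pH 7")
# DTT -> ~0.6%; DTBA -> ~5.9%: roughly a ten-fold gain in reactive species.
```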
Dithiobutylamine
Chemistry
119
4,243,396
https://en.wikipedia.org/wiki/Russula%20ochroleuca
Russula ochroleuca is a member of the genus Russula, a group that has become known as the brittlegills. It has been commonly known as the common yellow russula for some years, and latterly as the ochre brittlegill. It is widespread, and common in mixed woodland. Taxonomy Russula ochroleuca was first noted and named as a species of Agaricus by the pioneering South African mycologist Christiaan Hendrik Persoon in 1801. Description The cap is dull yellow and wide, initially convex, later flat, or slightly depressed. The cap margin becomes furrowed when mature, and the cuticle peels two-thirds of the way to the centre. The gills are white to greyish white, and are adnexed. The stipe is long, wide, cylindrical, white or later greyish. The taste is mild to moderately hot. It can be confused with the similar-looking and much better-tasting Russula claroflava. Distribution and habitat Russula ochroleuca grows in deciduous and coniferous forest, where it (at least in Northwestern Europe) is very common. In the USA it is fairly common under conifers, birch, and aspen in the northern states. Edibility Although considered edible, it is not regarded as particularly tasty. It is mild to moderately hot. See also List of Russula species References "Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990. ochroleuca Fungi described in 1801 Fungi of Europe Taxa named by Christiaan Hendrik Persoon Fungus species
Russula ochroleuca
Biology
347
47,693,720
https://en.wikipedia.org/wiki/2015%20Dongying%20explosion
The 2015 Dongying explosion was an explosion that occurred at the Diao Kou Xiang Bin Yuan Chemical Co. located within the Dongying Economic Development Zone in Dongying, Shandong, China, on Monday, 31 August 2015, and killed thirteen people. Events At 11:22pm on 31 August 2015, a chemical factory in the Dongying-Lijin Binhai Economic and Technological Development Zone in eastern China exploded. The ensuing fire took five hours to bring under control. Chinese authorities detained 12 company employees and executives and 11 government officials. One person was initially reported to have been killed in the explosion; however, the death toll later rose to 13, with 25 others injured. The blast came just three weeks after the Tianjin disaster, which garnered significant media coverage. See also 2015 Tianjin explosions 2014 Kunshan explosion 1988 PEPCON disaster Largest artificial non-nuclear explosions List of accidents and disasters by death toll References 2015 disasters in China 2015 industrial disasters Chemical plant explosions Explosions in 2015 Explosions in China Industrial fires and explosions in China Disasters in Shandong August 2015 events in China
2015 Dongying explosion
Chemistry
207
33,998,286
https://en.wikipedia.org/wiki/Jared%20Spool
Jared Spool (born December 8, 1960) is an American writer, researcher, speaker, educator, and an expert on the subjects of usability, software, design, and research. He is the founding principal of User Interface Engineering (UIE), a research, training, and consulting firm that specializes in website and product usability. He is also an amateur magician. Spool attended Niskayuna High School in Niskayuna, NY. Spool has been working in the field of usability and design since 1978, before the term usability was ever associated with computers. Achievements and awards Under Spool's leadership, in 1996 UIE launched the User Interface Conference, an annual user experience research and design conference, which he chairs and for which he delivers the keynotes. From 1998 until 2008, as an adjunct faculty member at Tufts University, Spool created and taught a unique curriculum for the Experience Design Management course at the Tufts Gordon Institute. Spool has delivered the keynote presentations for The National Association of Government Webmasters, The National Association of Online Librarians, Higher Ed Webmasters, Agile 2009, South by Southwest Interactive, Web Advertising, Web Visions, the Usability Professionals Association, CHI (conference), the Information Architecture Summit, UX Australia, UX Lisbon, UX London, Drupal Con 2011, An Event Apart, Designing for People Amsterdam, UPA China, the Norwegian Computer Society, the British Computer Society, the Society for Technical Communication, and the Federal Webmasters Society. In 2011, the Stevens Award was given to Spool, "whose quiet evangelism of usability and the practical outcomes of methods and tools had a wide-ranging influence on how we think about making systems effective." Current activities Spool spends time working with research teams, consults with organizations so they can better understand how to solve their design problems, and works with reporters and industry analysts on the state of design. In addition to being a speaker at more than 20 conferences every year, Spool presents almost weekly for various groups. Spool also runs a 24-week UX strategy program via the organization The Center Centre and offers a UX community called The Leaders of Awesomeness. He also shares resources and collaborates with other organizations frequently to evangelize principles of user research. Spool continues to advocate for UX as a practice and is quoted as saying, "The number one responsibility of UX leaders is to make their organization the world's foremost experts on their users and what their users need." Spool also sits on the editorial board for Rosenfeld Media, a user experience publishing house. The Center Centre With the help of a successful Kickstarter campaign, in 2014 Spool co-founded the Center Centre, "a new, bricks-and-mortar user experience design school for adults," with Dr. Leslie Jensen-Inman. Bibliography Books Spool, Jared M. & Robert Hoekman, Jr. Web Anatomy: Interaction Design Frameworks that Work (). Spool, Jared M., Rosalee J. Wolfe & Daniel M. McCracken. User-Centered Web Site Development: A Human-Computer Interaction Approach (). Spool, Jared M., Carolyn Snyder, Tara Scanlon & Terri DeAngelo. Web Site Usability: A Designer's Guide (). Jeffrey Rubin & Dana Chisnell, Spool, Jared M. (Foreword), Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (). Articles 1993. "User involvement in the design process: why, when & how?" INTERCHI 1993: 251-254 1994. "Product Usability: survival techniques." 
CHI Conference Companion 1994: 365-366 1994. "Using a game to teach a design process." CHI Conference Companion 1994: 117-118 1995. "CHI 95 Conference Companion" 1995: 395-396 1995. "User Interface Engineering: fostering creative product development." CHI 95 Conference Companion 1995: 166-167 1997. "Product Usability: Survival Techniques." CHI Extended Abstracts 1997: 154-155 1997. "Measuring Website Usability." CHI Extended Abstracts 1997: 125 2002. "Usability in practice: alternatives to formative evaluations-evolution and revolution." CHI Extended Abstracts 2002: 891-897 2002. "Usability in practice: formative usability evaluations - evolution and revolution." CHI Extended Abstracts 2002: 885-890 2003. " Evaluating globally: how to conduct international or intercultural usability research." CHI Extended Abstracts 2003: 704-705 2003. "The "magic number 5": is it enough for web testing?" CHI Extended Abstracts 2003: 698-699 2005. "The great debate: can usability scale up?" CHI Extended Abstracts 2005: 1174-1175 2007. "Get real!": what's wrong with hci prototyping and how can we fix it? CHI Extended Abstracts 2007: 1913-1916 References External links User Interface Engineering (UIE) official website Jared Spool on Slideshare The Magic of Jared Spool The Center Centre Living people 1960 births Usability Web design Web developers Computer programmers Human–computer interaction researchers American bloggers American technology writers American designers 21st-century American non-fiction writers
Jared Spool
Engineering
1,086
37,002,473
https://en.wikipedia.org/wiki/Tau5%20Eridani
{{DISPLAYTITLE:Tau5 Eridani}} Tau5 Eridani, Latinized from τ5 Eridani, is a binary star system in the constellation Eridanus. It is visible to the naked eye with a combined apparent visual magnitude of 4.26. The distance to this system, as estimated using the parallax technique, is around 293 light years. Tau5 Eridani is a double-lined spectroscopic binary system. The two stars orbit each other closely with a period of 6.2 days and an eccentricity of 0.2. On average, the two stars are separated by around 0.183 AU. The primary component is a B-type main-sequence star with a stellar classification of B9 V, consistent with its effective temperature of 12,514 K. It is around 157 million years old and is spinning with a projected rotational velocity of 55 km/s. The star has around 3.3 times the mass of the Sun and 3.2 times the Sun's radius. It radiates 188 times the solar luminosity from its outer atmosphere. The secondary component has a stellar classification of B9 V. It is slightly smaller, with an estimated size equal to 2.6 times the radius of the Sun. Although τ5 Eridani has no bright visual companion stars, the galaxy IC 1953 is less than 10' away. It is one of the brighter members of a loose group of galaxies called the Eridanus Group scattered around the components of τ Eridani. References B-type main-sequence stars Eridanus (constellation) Eridani, Tau5 Eridani, 19 022203 016611 1088 Spectroscopic binaries Durchmusterung objects
Tau5 Eridani
Astronomy
358
20,872,385
https://en.wikipedia.org/wiki/Telomerization
Telomerization is a reaction that produces a particular kind of oligomer with two distinct end groups. The oligomer is called a telomer. Some telomerizations proceed by radical pathways; many do not. A generic equation is: A–B + n M → A–M_n–B where M is the monomer, A and B are the end groups, and n is the degree of polymerization. One example is the coupled dimerization and hydroesterification of buta-1,3-diene, which produces a doubly unsaturated C9-ester: 2CH2=CH-CH=CH2 + CO + CH3OH → CH2=CH(CH2)3CH=CHCH2CO2CH3 The monomer in this reaction is butadiene, the degree of polymerization is 2, and the end groups are vinyl and the methyl ester group (CO2CH3). This and several related reactions proceed with palladium catalysts. Many telomerizations are used in industrial chemistry. Nomenclature In the jargon of polymer chemistry, telomerization requires a telogen to react with at least one unsaturated taxogen molecule. Fluorotelomers are an example. See also Perfluorooctanoic acid (synthesis) Telomerization (dimerization) References Chemical synthesis Chemical processes Industrial processes Polymer chemistry Oligomers
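The butadiene example above can be checked with a simple atom balance: two C4H6 units, one CO and one CH3OH must together supply the C10H16O2 of the product (the "C9" in "C9-ester" counts the nine-carbon acyl chain; the tenth carbon comes from the ester methyl group). A minimal sketch, with the formulas entered by hand from the structures shown:

```python
from collections import Counter

# Elemental compositions read off the structures in the equation above.
butadiene = Counter({"C": 4, "H": 6})            # CH2=CH-CH=CH2
co        = Counter({"C": 1, "O": 1})
methanol  = Counter({"C": 1, "H": 4, "O": 1})
product   = Counter({"C": 10, "H": 16, "O": 2})  # CH2=CH(CH2)3CH=CHCH2CO2CH3

lhs = butadiene + butadiene + co + methanol      # n = 2 telomerization
assert lhs == product, (dict(lhs), dict(product))
print("Balanced:", dict(lhs))                    # {'C': 10, 'H': 16, 'O': 2}
```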
Telomerization
Chemistry,Materials_science,Engineering
303
69,427
https://en.wikipedia.org/wiki/Botanical%20garden
A botanical garden or botanic garden is a garden with a documented collection of living plants for the purpose of scientific research, conservation, display, and education. It is part of a botanical garden's mandate that its plants are labelled with their botanical names. It may contain specialist plant collections such as cacti and other succulent plants, herb gardens, plants from particular parts of the world, and so on; there may be glasshouses or shadehouses, again with special collections such as tropical plants, alpine plants, or other exotic plants that are not native to that region. Most are at least partly open to the public, and may offer guided tours, public programming such as workshops, courses, educational displays, art exhibitions, book rooms, open-air theatrical and musical performances, and other entertainment. Botanical gardens are often run by universities or other scientific research organizations, and often have associated herbaria and research programmes in plant taxonomy or some other aspect of botanical science. In principle, their role is to maintain documented collections of living plants for the purposes of scientific research, conservation, display, and education, although this will depend on the resources available and the special interests pursued at each particular garden. The staff will normally include botanists as well as gardeners. Many botanical gardens offer diploma or certificate programs in horticulture, botany and taxonomy, and many offer internship opportunities to aspiring horticulturists, as well as opportunities for students and researchers to use the collections for their studies. History The origin of modern botanical gardens is generally traced to the appointment of botany professors to the medical faculties of universities in 16th-century Renaissance Italy, which also entailed curating a medicinal garden. However, the objectives, content, and audience of today's botanic gardens more closely resemble those of the grandiose gardens of antiquity and the educational garden of Theophrastus in the Lyceum of ancient Athens. The early concern with medicinal plants changed in the 17th century to an interest in the new plant imports from explorations outside Europe as botany gradually established its independence from medicine. In the 18th century, systems of nomenclature and classification were devised by botanists working in the herbaria and universities associated with the gardens, these systems often being displayed in the gardens as educational "order beds". With the rapid expansion of European colonies around the globe in the late 18th century, botanic gardens were established in the tropics, and economic botany became a focus with the hub at the Royal Botanic Gardens, Kew, near London. Over the years, botanical gardens, as cultural and scientific organisations, have responded to the interests of botany and horticulture. Nowadays, most botanical gardens display a mix of the themes mentioned and more; having a strong connection with the general public, there is the opportunity to provide visitors with information relating to the environmental issues being faced at the start of the 21st century, especially those relating to plant conservation and sustainability. 
Definitions The "New Royal Horticultural Society Dictionary of Gardening" (1999) points out that among the various kinds of organizations known as botanical gardens, there are many that are in modern times public gardens with little scientific activity, and it cited a tighter definition published by the World Wildlife Fund and IUCN when launching the "Botanic Gardens Conservation Strategy" in 1989: "A botanic garden is a garden containing scientifically ordered and maintained collections of plants, usually documented and labelled, and open to the public for the purposes of recreation, education and research." The term tends to be used somewhat differently in different parts of the world. For example a large woodland garden with a good collection of rhododendron and other flowering tree and shrub species is very likely to present itself as a "botanical garden" if it is located in the US, but very unlikely to do so if in the UK (unless it also contains other relevant features). Very few of the sites used for the UK's dispersed National Plant Collection, usually holding large collections of a particular taxonomic group, would call themselves "botanic gardens". This has been further reduced by Botanic Gardens Conservation International to the following definition which "encompasses the spirit of a true botanic garden": "A botanic garden is an institution holding documented collections of living plants for the purposes of scientific research, conservation, display and education." The following definition was produced by staff of the Liberty Hyde Bailey Hortorium of Cornell University in 1976. It covers in some detail the many functions and activities generally associated with botanical gardens: A botanical garden is a controlled and staffed institution for the maintenance of a living collection of plants under scientific management for purposes of education and research, together with such libraries, herbaria, laboratories, and museums as are essential to its particular undertakings. Each botanical garden naturally develops its own special fields of interests depending on its personnel, location, extent, available funds, and the terms of its charter. It may include greenhouses, test grounds, an herbarium, an arboretum, and other departments. It maintains a scientific as well as a plant-growing staff, and publication is one of its major modes of expression. This broad outline is then expanded: The botanic garden may be an independent institution, a governmental operation, or affiliated to a college or university. If a department of an educational institution, it may be related to a teaching program. In any case, it exists for scientific ends and is not to be restricted or diverted by other demands. It is not merely a landscaped or ornamental garden, although it may be artistic, nor is it an experiment station or yet a park with labels on the plants. The essential element is the intention of the enterprise, which is the acquisition and dissemination of botanical knowledge. A contemporary botanic garden is a strictly protected green area, where a managing organization creates landscaped gardens and holds documented collections of living plants and/or preserved plant accessions containing functional units of heredity of actual or potential value for purposes such as scientific research, education, public display, conservation, sustainable use, tourism and recreational activities, production of marketable plant-based products and services for improvement of human well-being. 
The botanical gardens network Worldwide, there are now about 1800 botanical gardens and arboreta in about 150 countries (mostly in temperate regions), of which about 550 are in Europe (150 of which are in Russia), 200 in North America, and an increasing number in East Asia. These gardens attract about 300 million visitors a year. Historically, botanical gardens exchanged plants through the publication of seed lists, a practice dating from the 18th century. This was a means of transferring both plants and information between botanical gardens. This system continues today, although the possibility of genetic piracy and the transmission of invasive species has received greater attention in recent times. The International Association of Botanic Gardens was formed in 1954 as a worldwide organisation affiliated to the International Union of Biological Sciences. More recently, coordination has also been provided by Botanic Gardens Conservation International (BGCI), which has the mission "To mobilise botanic gardens and engage partners in securing plant diversity for the well-being of people and the planet". BGCI has over 700 members, mostly botanic gardens, in 118 countries, and strongly supports the Global Strategy for Plant Conservation by producing a range of resources and publications, and by organizing international conferences and conservation programs. Communication also happens regionally. In the United States, there is the American Public Gardens Association (formerly the American Association of Botanic Gardens and Arboreta), and in Australasia there is the Botanic Gardens of Australia and New Zealand (BGANZ). History and development The history of botanical gardens is closely linked to the history of botany itself. The botanical gardens of the 16th and 17th centuries were medicinal gardens, but the idea of a botanical garden changed to encompass displays of the beautiful, strange, new and sometimes economically important plant trophies being returned from the European colonies and other distant lands. Later, in the 18th century, they became more educational in function, demonstrating the latest plant classification systems devised by botanists working in the associated herbaria as they tried to order these new treasures. Then, in the 19th and 20th centuries, the trend was towards a combination of specialist and eclectic collections demonstrating many aspects of both horticulture and botany. Precursors The idea of "scientific" gardens used specifically for the study of plants dates back to antiquity. Grand gardens of ancient history Near-eastern royal gardens set aside for economic use or display and containing at least some plants gained by special collecting trips or military campaigns abroad are known from the second millennium BCE in ancient Egypt, Mesopotamia, Crete, Mexico and China. In about 2800 BCE, the Chinese Emperor Shen Nung sent collectors to distant regions searching for plants with economic or medicinal value. It has also been suggested that the Spanish colonization of Mesoamerica influenced the history of the botanical garden, as gardens in Tenochtitlan established by king Nezahualcoyotl, and also gardens in Chalco (altépetl) and elsewhere, greatly impressed the Spanish invaders, not only with their appearance, but also because the indigenous Aztecs employed many more medicinal plants than did the classical world of Europe. 
Early medieval gardens in Islamic Spain resembled botanic gardens of the future, an example being the 11th-century Huerta del Rey garden of physician and author Ibn Wafid (999–1075 CE) in Toledo. This was later taken over by garden chronicler Ibn Bassal (fl. 1085 CE) until the Christian conquest in 1085 CE. Ibn Bassal then founded a garden in Seville, most of its plants being collected on a botanical expedition that included Morocco, Persia, Sicily, and Egypt. The medical school of Montpellier was also founded by Spanish Arab physicians, and by 1250 CE it included a physic garden, but the site was not given botanic garden status until 1593. Physic gardens Botanical gardens, in the modern sense, developed from physic gardens, whose main purpose was to cultivate herbs for medical use as well as research and experimentation. Such gardens have a long history. In Europe, for example, Aristotle (384 BCE – 322 BCE) is said to have had a physic garden in the Lyceum at Athens, which was used for educational purposes and for the study of botany, and this was inherited, or possibly set up, by his pupil Theophrastus, the "Father of Botany". There is some debate among science historians whether this garden was ordered and scientific enough to be considered "botanical", and some suggest it is more appropriate to attribute the earliest known botanical garden in Europe to the botanist and pharmacologist Antonius Castor, mentioned by Pliny the Elder in the 1st century. Though these ancient gardens shared some of the characteristics of present-day botanical gardens, the forerunners of modern botanical gardens are generally regarded as being the medieval monastic physic gardens that originated after the decline of the Roman Empire, at the time of Emperor Charlemagne (742–814 CE). These contained a kitchen garden used mostly for vegetables, another section set aside for specially labelled medicinal plants, more generally known as a physic garden, and an orchard. These gardens were probably given impetus when Charlemagne issued a capitulary, the Capitulare de Villis, which listed 73 herbs to be used in the physic gardens of his dominions. Many of these were found in British gardens even though they only occurred naturally in continental Europe, demonstrating earlier plant introduction. Pope Nicholas V set aside part of the Vatican grounds in 1447 for a garden of medicinal plants that were used to promote the teaching of botany, and this was a forerunner to the University gardens at Padua and Pisa established in the 1540s. Certainly the founding of many early botanic gardens was instigated by members of the medical profession. 16th- and 17th-century European gardens In the 17th century, botanical gardens began their contribution to a deeper scientific curiosity about plants. If a botanical garden is defined by its scientific or academic connection, then the first true botanical gardens were established with the revival of learning that occurred in the European Renaissance. These were secular gardens attached to universities and medical schools, used as resources for teaching and research. The superintendents of these gardens were often professors of botany with international reputations, a factor that probably contributed to the creation of botany as an independent discipline rather than a descriptive adjunct to medicine. 
Origins in the Italian Renaissance The botanical gardens of Southern Europe were associated with university faculties of medicine and were founded in Italy at Orto botanico di Pisa (1544), Orto botanico di Padova (1545), Orto Botanico di Firenze (1545), Orto Botanico dell'Università di Pavia (1558) and Orto Botanico dell'Università di Bologna (1568). Here the physicians (referred to in English as apothecaries) delivered lectures on the Mediterranean "simples" or "officinals" that were being cultivated in the grounds. Student education was no doubt stimulated by the relatively recent advent of printing and the publication of the first herbals. All of these botanical gardens still exist, mostly in their original locations. Northern Europe The tradition of these Italian gardens passed into Spain (Botanical Garden of Valencia, 1567) and Northern Europe, where similar gardens were established in the Netherlands (Hortus Botanicus Leiden, 1590; Hortus Botanicus (Amsterdam), 1638), Germany (Alter Botanischer Garten Tübingen, 1535; Leipzig Botanical Garden, 1580; Botanischer Garten Jena, 1586; Botanischer Garten Heidelberg, 1593; Herrenhäuser Gärten, Hanover, 1666; Botanischer Garten der Christian-Albrechts-Universität zu Kiel, 1669; Botanical Garden in Berlin, 1672), Switzerland (Old Botanical Garden, Zürich, 1560; Basel, 1589); England (University of Oxford Botanic Garden, 1621; Chelsea Physic Garden, 1673); Scotland (Royal Botanic Garden Edinburgh, 1670); and in France (Jardin des plantes de Montpellier, 1593; Faculty of Medicine Garden, Paris, 1597; Jardin des Plantes, Paris, 1635), Denmark (University of Copenhagen Botanical Garden, 1600); Sweden (Uppsala University, 1655). Beginnings of botanical science During the 16th and 17th centuries, the first plants were being imported to these major Western European gardens from Eastern Europe and nearby Asia (which provided many bulbs), and these found a place in the new gardens, where they could be conveniently studied by the plant experts of the day. For example, Asian introductions were described by Carolus Clusius (1526–1609), who was director, in turn, of the Botanical Garden of the University of Vienna and Hortus Botanicus Leiden. Many plants were being collected from the Near East, especially bulbous plants from Turkey. Clusius laid the foundations of Dutch tulip breeding and the bulb industry, and he helped create one of the earliest formal botanical gardens of Europe at Leyden, where his detailed planting lists have made it possible to recreate this garden near its original site. The Leyden garden of 1601 was a perfect square divided into quarters for the four continents, but by 1720 it had become a rambling system of beds, struggling to contain the novelties rushing in. His Exoticorum libri decem (1605) is an important survey of exotic plants and animals that is still consulted today. The inclusion of new plant introductions in botanic gardens meant their scientific role was now widening, as botany gradually asserted its independence from medicine. In the mid to late 17th century, the Paris Jardin des Plantes was a centre of interest with the greatest number of new introductions to attract the public. In England, the Chelsea Physic Garden was founded in 1673 as the "Garden of the Society of Apothecaries". The Chelsea garden had heated greenhouses, and in 1723 appointed Philip Miller (1691–1771) as head gardener. 
He had a wide influence on both botany and horticulture, as plants poured into the garden from around the world. The garden's golden age came in the 18th century, when it became the world's most richly stocked botanical garden. Its seed-exchange programme was established in 1682 and still continues today. 18th century With the increase in maritime trade, ever more plants were being brought back to Europe as trophies from distant lands, and these were triumphantly displayed in the private estates of the wealthy, in commercial nurseries, and in the public botanical gardens. Heated conservatories called "orangeries", such as the one at Kew, became a feature of many botanical gardens. Industrial expansion in Europe and North America resulted in new building skills, so plants sensitive to cold were kept over winter in progressively elaborate and expensive heated conservatories and glasshouses. The Cape, Dutch East Indies The 18th century was marked by introductions from the Cape of South Africa, including ericas, geraniums, pelargoniums, succulents, and proteaceous plants, while the Dutch trade with the Dutch East Indies resulted in a golden era for the Leiden and Amsterdam botanical gardens and a boom in the construction of conservatories. Royal Botanic Gardens, Kew The Royal Gardens at Kew were founded in 1759, initially as part of the Royal Garden set aside as a physic garden. William Aiton (1741–1793), the first curator, was taught by garden chronicler Philip Miller of the Chelsea Physic Garden, whose son Charles became first curator of the original Cambridge Botanic Garden (1762). In 1759, the "Physick Garden" was planted, and by 1767, it was claimed that "the Exotick Garden is by far the richest in Europe". Gardens such as the Royal Botanic Gardens, Kew (1759), the Orotava Acclimatization Garden, Tenerife (1788) and the Real Jardín Botánico de Madrid (1755) were set up to cultivate new species returned from expeditions to the tropics; they also helped found new tropical botanical gardens. From the 1770s, following the example of the French and Spanish, amateur collectors were supplemented by official horticultural and botanical plant hunters. These botanical gardens were boosted by the flora being sent back to Europe from various European colonies around the globe. At this time, British horticulturalists were importing many woody plants from Britain's colonies in North America, and the popularity of horticulture had increased enormously, encouraged by the horticultural and botanical collecting expeditions overseas fostered by the directorship of Sir William Jackson Hooker and his keen interest in economic botany. At the end of the 18th century, Kew, under the directorship of Sir Joseph Banks, enjoyed a golden age of plant hunting, sending out collectors to the South African Cape, Australia, Chile, China, Ceylon, Brazil, and elsewhere, and acting as "the great botanical exchange house of the British Empire". From its earliest days to the present, Kew has in many ways exemplified botanic garden ideals, and is respected worldwide for the published work of its scientists, the education of horticultural students, its public programmes, and the scientific underpinning of its horticulture. Bartram's Garden In 1728, John Bartram founded Bartram's Garden in Philadelphia, one of the continent's first botanical gardens. The garden is now managed as a historical site that includes a few original and many modern specimens as well as extensive archives and restored historical farm buildings. 
Plant classification The large number of plants needing description were often listed in garden catalogues, and at this time Carl Linnaeus established the system of binomial nomenclature which greatly facilitated the listing process. Names of plants were authenticated by dried plant specimens mounted on card (a hortus siccus, or garden of dried plants) that were stored in buildings called herbaria, these taxonomic research institutions being frequently associated with the botanical gardens, many of which by then had "order beds" to display the classification systems being developed by botanists in the gardens' museums and herbaria. Botanical gardens had now become scientific collections, as botanists published their descriptions of the new exotic plants, and these were also recorded for posterity in detail by superb botanical illustrations. In this century, botanical gardens effectively dropped their medicinal function in favour of scientific and aesthetic priorities, and the term "botanic garden" came to be more closely associated with the herbarium, library (and later laboratories) housed there than with the living collections, on which little research was undertaken. 19th century The late 18th and early 19th centuries were marked by the establishment of tropical botanical gardens as a tool of colonial expansion (for trade and commerce and, secondarily, science) mainly by the British and Dutch, in India, South-east Asia and the Caribbean. This was also the time of Sir Joseph Banks's botanical collections during Captain James Cook's circumnavigations of the planet and his explorations of Oceania, which formed the last phase of plant introduction on a grand scale. Tropical botanical gardens There are currently about 230 tropical botanical gardens with a concentration in southern and south-eastern Asia. The first botanical garden founded in the tropics was the Pamplemousses Botanical Garden in Mauritius, established in 1735 to provide food for ships using the port, but later trialling and distributing many plants of economic importance. This was followed by the West Indies (Saint Vincent and the Grenadines Botanic Gardens, 1764) and in 1786 by the Acharya Jagadish Chandra Bose Botanical Garden in Calcutta, India, founded during a period of prosperity when the city was a trading centre of the British East India Company. Other gardens were constructed in Brazil (Rio de Janeiro Botanical Garden, 1808), Sri Lanka (Botanic Gardens of Peradeniya, 1821 and on a site dating back to 1371), Indonesia (Bogor Botanical Gardens, 1817 and Kebun Raya Cibodas, 1852), and Singapore (Singapore Botanical Gardens, 1822). These had a profound effect on the economy of the countries, especially in relation to the foods and medicines introduced. The importation of rubber trees to the Singapore Botanic Garden initiated the important rubber industry of the Malay Peninsula. At this time also, teak and tea were introduced to India and breadfruit, pepper and starfruit to the Caribbean. Included in the charter of these gardens was the investigation of the local flora for its economic potential to both the colonists and the local people. Many crop plants were introduced by or through these gardens, often in association with European botanical gardens such as Kew or Amsterdam, and included cloves, tea, coffee, breadfruit, cinchona, sugar, cotton, palm oil and Theobroma cacao (for chocolate). During these times, the rubber plant was introduced to Singapore. 
Especially in the tropics, the larger gardens were frequently associated with a herbarium and a museum of economic botany. The Botanical Garden of Peradeniya had considerable influence on the development of agriculture in Ceylon, where the Para rubber tree (Hevea brasiliensis) was introduced from Kew, which had itself imported the plant from South America. Other examples include cotton sent from the Chelsea Physic Garden to the Province of Georgia in 1732 and tea introduced into India by the Calcutta Botanic Garden. The transfer of germplasm between the temperate and tropical botanical gardens was undoubtedly responsible for the range of agricultural crops currently used in several regions of the tropics. Australia The first botanical gardens in Australia were founded early in the 19th century: the Royal Botanic Gardens, Sydney, 1816; the Royal Tasmanian Botanical Gardens, 1818; the Royal Botanic Gardens, Melbourne, 1845; Adelaide Botanic Gardens, 1854; and Brisbane Botanic Gardens, 1855. These were established essentially as colonial gardens of economic botany and acclimatisation. The Auburn Botanical Gardens, 1977, located in Sydney's western suburbs, are among the most popular and diverse botanical gardens in the Greater Western Sydney area. New Zealand Major botanical gardens in New Zealand include Dunedin Botanic Garden, 1863; Christchurch Botanic Gardens, 1863; Ōtari-Wilton's Bush, 1926; and Wellington Botanic Garden, 1868. Hong Kong The Hong Kong Botanic Gardens, 1871 (renamed the Hong Kong Zoological and Botanical Gardens in 1975), lie above Government Hill in Victoria City, Hong Kong Island. Japan The Koishikawa Botanical Garden in Tokyo, whose origins go back to the Tokugawa shogunate, became part of the Tokyo Imperial University in 1877. Sri Lanka In Sri Lanka major botanical gardens include the Royal Botanic Gardens, Peradeniya (formally established in 1843), Hakgala Botanical Gardens (1861) and Henarathgoda Botanical Garden (1876). Ecuador The Jardín Botánico de Quito lies inside Parque La Carolina, a 165.5-acre (670,000 m2) park in the centre of the Quito central business district, bordered by the avenues Río Amazonas, de los Shyris, Naciones Unidas, Eloy Alfaro, and de la República. The garden comprises an arboretum and 18,600 square metres of greenhouses, with plans for expansion, and maintains the native plants of the country; Ecuador is among the 17 most biodiverse countries in the world, and its classified flora comprises some 17,000 species. Egypt The Orman Garden, one of the most famous botanical gardens in Egypt, is located at Giza, near Cairo, and dates back to 1875. South Africa South Africa has ten national level botanical gardens, all of which are overseen by the South African National Biodiversity Institute (SANBI). The oldest botanical garden in South Africa is the Durban Botanic Gardens, which has been located on the same site since 1851. The Kirstenbosch National Botanical Garden is the most famous and developed garden in the country, established in 1913 on a site dating to 1848 and covering a 36 hectare area with an additional 528 hectares of mountainside wilderness that form part of the garden. Stellenbosch University Botanical Garden is the oldest university botanical garden in South Africa, and was established in 1922. Other botanical gardens in the country include the Walter Sisulu National Botanical Garden, Harold Porter National Botanical Gardens and Karoo Desert National Botanical Garden. 
Some smaller gardens and parks that verge on being botanical gardens include the Arderne Gardens in Cape Town, founded in 1845. United States The first botanical garden in the United States, Bartram's Garden, was founded in 1730 near Philadelphia, and in the same year, the Linnaean Botanic Garden at Philadelphia itself. Presidents George Washington, Thomas Jefferson and James Madison, all experienced farmers, shared the dream of a national botanic garden for the collection, preservation and study of plants from around the world to contribute to the welfare of the American people, paving the way for the establishment of the US Botanic Garden just outside the nation's Capitol in Washington, DC, in 1820. In 1859, the Missouri Botanical Garden was founded at St Louis; it is now one of the world's leading gardens specializing in tropical plants. This was one of several popular American gardens, including Longwood Gardens (1798), Arnold Arboretum (1872), New York Botanical Garden (1891), Huntington Botanical Gardens (1906), Brooklyn Botanic Garden (1910), International Peace Garden (1932), and Fairchild Tropical Botanic Garden (1938). The first native plant garden in the United States was established in 1907 by Eloise Butler. The US tax code provides a substantial benefit to botanical gardens; this has led a large number of entities to declare their campuses botanical gardens with little regard for veracity. Russia Russia has more gardens describing themselves as botanical gardens than any other country. Better-known gardens are the Moscow University Botanic Garden ('the Apothecary Garden') (1706), Saint Petersburg Botanical Garden (1714), and Moscow Botanical Garden of Academy of Sciences (1945). These gardens are notable for their structures that include sculptures, pavilions, bandstands, memorials, shadehouses, tea houses and such. Among the smaller gardens within Russia, one that is increasingly gaining prominence is the Botanical Garden of Tver State University (1879), the northernmost botanical garden with an exhibition of steppe plants, the only one of its kind in the Upper Volga region. Ukraine Ukraine has about 30 botanical gardens. The most respected collections are the Nikitsky Botanical Garden, Yalta, founded in 1812; the M. M. Gryshko National Botanical Garden, a botanical garden of the National Academy of Sciences of Ukraine founded in 1936; and the A. V. Fomin Botanical Garden of the Taras Shevchenko National University of Kyiv, founded in 1839, the latter two of which are located in Kyiv, the capital of Ukraine. 20th century Civic and municipal botanical gardens A large number of civic or municipal botanical gardens were founded in the 19th and 20th centuries. These did not develop scientific facilities or programmes, but the horticultural aspects were strong and the plants often labelled. They were botanical gardens in the sense of building up collections of plants and exchanging seeds with other gardens around the world, although their collection policies were determined by those in day-to-day charge of them. They tended to become little more than beautifully maintained parks and were, indeed, often under general parks administrations. Community engagement The second half of the 20th century saw increasingly sophisticated educational, visitor service, and interpretation services. 
Botanical gardens started to cater for many interests and their displays reflected this, often including botanical exhibits on themes of evolution, ecology or taxonomy, horticultural displays of attractive flowerbeds and herbaceous borders, plants from different parts of the world, special collections of plant groups such as bamboos or roses, and specialist glasshouse collections such as tropical plants, alpine plants, cacti and orchids, as well as the traditional herb gardens and medicinal plants. Specialised gardens like the Palmengarten in Frankfurt, Germany (1869), one of the world's leading orchid and succulent plant collections, have been very popular. There was a renewed interest in gardens of indigenous plants and areas dedicated to natural vegetation. With decreasing financial support from governments, revenue-raising public entertainment increased, including music, art exhibitions, special botanical exhibitions, theatre and film, this being supplemented by the advent of "Friends" organisations and the use of volunteer guides. Plant conservation Plant conservation and the heritage value of exceptional historic landscapes were treated with a growing sense of urgency. Specialist gardens were sometimes given a separate or adjoining site, to display native and indigenous plants. In the 1970s, gardens became focused on plant conservation. The Botanic Gardens Conservation Secretariat was established in 1987 by the IUCN (the World Conservation Union), with the aim of coordinating the plant conservation efforts of botanical gardens around the world. It maintains a database of rare and endangered species in botanical gardens' living collections. Many gardens hold ex situ conservation collections that preserve genetic variation. These may be held as seeds dried and stored at low temperature (as in the Kew Millennium Seed Bank) or in tissue culture; as living plants, including those that are of special horticultural, historical or scientific interest (such as those in the National Plant Collection in the United Kingdom); or by managing and preserving areas of natural vegetation. Collections are often held and cultivated with the intention of reintroduction to their original habitats. The Center for Plant Conservation at St Louis, Missouri, coordinates the conservation of native North American species. Role and functions Many of the functions of botanical gardens have already been discussed in the sections above, which emphasise the scientific underpinning of botanical gardens with their focus on research, education and conservation. However, as multifaceted organisations, all sites have their own special interests. In a remarkable paper on the role of botanical gardens, Ferdinand von Mueller (1825–1896), the director of the Royal Botanic Gardens, Melbourne (1852–1873), stated, "in all cases the objects [of a botanical garden] must be mainly scientific and predominantly instructive". He then detailed many of the objectives being pursued by the world's botanical gardens in the middle of the 19th century, when European gardens were at their height. 
Many of these are listed below to give a sense of the scope of botanical gardens' activities at that time, and the ways in which they differed from parks or what he called "public pleasure gardens": availability of plants for scientific research; display of plant diversity in form and use; display of plants of particular regions (including local); plants sometimes grown within their particular families; plants grown for their seed or rarity; major timber (American English: lumber) trees; plants of economic significance; glasshouse plants of different climates; all plants accurately labelled; records kept of plants and their performance; catalogues of holdings published periodically; research facilities utilising the living collections; studies in plant taxonomy; examples of different vegetation types; student education; a herbarium; selection and introduction of ornamental and other plants to commerce; studies of plant chemistry (phytochemistry); reports on the effects of plants on livestock; and at least one collector maintained doing field work. Botanical gardens must strike a compromise between the need for peace and seclusion and the public demand for information and visitor services, including restaurants, information centres and sales areas that bring with them rubbish, noise, and hyperactivity. Attractive landscaping and planting design sometimes compete with scientific interests, with science now often taking second place. Some gardens are now heritage landscapes that are subject to constant demand for new exhibits and exemplary environmental management. Many gardens now have plant shops selling flowers, herbs, and vegetable seedlings suitable for transplanting; many, like the UBC Botanical Garden and Centre for Plant Research and the Chicago Botanic Garden, have plant-breeding programs and introduce new plants to the horticultural trade. Future Botanical gardens are still being built, such as the first botanical garden in Oman, which will be one of the largest gardens in the world. Once completed, it will house the first large-scale cloud forest in a huge glasshouse. Development of botanical gardens in China over recent years has been remarkable, including the Hainan Botanical Garden of Tropical Economic Plants, the South China Botanical Garden at Guangzhou, the Xishuangbanna Botanical Garden of Tropical Plants and the Xiamen Botanic Garden, but in developed countries, many have closed for lack of financial support, this being especially true of botanical gardens attached to universities. The Palestine Museum of Natural History has a botanic garden, which has been described as a site of nation-building and resistance by Silvia Hassouna. Botanical gardens have always responded to the interests and values of the day. If a single function were to be chosen from the early literature on botanical gardens, it would be their scientific endeavour and, flowing from this, their instructional value. In their formative years, botanical gardens were gardens for physicians and botanists, but then they progressively became more associated with ornamental horticulture and the needs of the general public. The scientific reputation of a botanical garden is now judged by the publications coming out of herbaria and similar facilities, not by its living collections. The interest in economic plants now has less relevance, and the concern with plant classification systems has all but disappeared, while a fascination with the curious, beautiful and new seems unlikely to diminish. 
In recent times, the focus has been on creating an awareness of the threat to the Earth's ecosystems from human populations and their consequent need for biological and physical resources. Botanical gardens provide an excellent medium for communication between the world of botanical science and the general public. Education programs can help the public develop greater environmental awareness by understanding the meaning and importance of ideas like conservation and sustainability. Photo gallery See also Herb farm List of botanical gardens List of botanical gardens in Canada List of botanical gardens in the United States List of botanical gardens in the United Kingdom Plant collecting National Public Gardens Day Botanical and horticultural library List of botanical gardens in Australia Footnotes References Bibliography Klemun, Marianne, The Botanical Garden, EGO - European History Online, Mainz: Institute of European History, 2019, retrieved: March 8, 2021 (pdf). External links Science museums Biorepositories
Botanical garden
Biology
7,361
1,996,903
https://en.wikipedia.org/wiki/Cassiopeia%20A
Cassiopeia A (Cas A) () is a supernova remnant (SNR) in the constellation Cassiopeia and the brightest extrasolar radio source in the sky at frequencies above 1 GHz. The supernova occurred approximately 11,000 light-years away within the Milky Way; given the width of the Orion Arm, it lies in the next-nearest arm outwards, the Perseus Arm, about 30 degrees from the Galactic anticenter. The expanding cloud of material left over from the supernova now appears approximately 10 light-years across from Earth's perspective. It has been seen in visible light with amateur telescopes down to 234 mm (9.25 in), using filters. It is estimated that light from the supernova itself first reached Earth in the 1660s, although there are no definitively corresponding records from then. Cas A is circumpolar at and above mid-northern latitudes, where observers kept extensive records and had basic telescopes. Its omission from those records is probably due to interstellar dust absorbing optical wavelength radiation before it reached Earth, although it is possible that it was recorded as a sixth magnitude star 3 Cassiopeiae by John Flamsteed. Possible explanations lean toward the idea that the source star was unusually massive and had previously ejected much of its outer layers. These outer layers would have cloaked the star and absorbed much of the visible-light emission as the inner star collapsed. Cas A was among the first discrete astronomical radio sources found. Its discovery was reported in 1948 by Martin Ryle and Francis Graham-Smith, astronomers at Cambridge, based on observations with the Long Michelson Interferometer. The optical component was first identified in 1950. Possible observations Calculations working back from the currently observed expansion point to an explosion that would have become visible on Earth around 1667. Astronomer William Ashworth and others have suggested that the Astronomer Royal John Flamsteed may have inadvertently observed the supernova in 1680, when he catalogued a sixth-magnitude star, 3 Cassiopeiae, but there is no corresponding star at the recorded position. Possible explanations include an error in the position, or that a transient was recorded. Caroline Herschel noted that a star in the vicinity of τ Cas, HD 220562, fit well with 3 Cas if a common error in sextant readings was made. Alternatively, the star AR Cassiopeiae may have been observed, again with the position recorded incorrectly. The position and timing mean that it may have been an observation of the Cassiopeia A progenitor supernova. Another suggestion from recent cross-disciplinary research is that the supernova was the "noon day star", observed in 1630, that was thought to have heralded the birth of Charles II, the future monarch of Great Britain. However, it is more probable that the "noon day star" was the planet Venus, which reached its maximum morning brightness two days earlier, allowing daytime visibility in a clear sky. A bright supernova in Cassiopeia would have been visible for months, and there would be more observation records, as Cassiopeia is visible above the horizon any night in Europe. No supernova occurring within the Milky Way has been visible to the naked eye from Earth since. Expansion The expansion shell has a temperature of around 30 million K, and is expanding at 4,000–6,000 km/s. 
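The back-calculated explosion date near 1667 comes from dividing the remnant's size by its expansion speed. A rough sketch, assuming a radius of about 5 light-years (half the remnant's extent; an assumed round figure) and the shell speeds quoted above; real analyses trace the proper motions of many individual ejecta knots and correct for deceleration:

```python
# Order-of-magnitude age estimate for Cas A: time = radius / speed.
KM_PER_LY = 9.461e12        # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

radius_km = 5 * KM_PER_LY   # assumed ~5 ly radius
for v_km_s in (4000, 6000):
    age_years = radius_km / v_km_s / SECONDS_PER_YEAR
    print(f"v = {v_km_s} km/s -> age ~ {age_years:.0f} yr")
# ~250-375 yr before the present epoch, i.e. a 17th- or 18th-century
# explosion, in line with the back-calculated date of about 1667.
```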
Observations of the exploded star through the Hubble Space Telescope have shown that, despite the original belief that the remnants were expanding in a uniform manner, there are high-velocity outlying ejecta knots moving with transverse velocities of 5,500–14,500 km/s, with the highest speeds occurring in two nearly opposing jets. When the view of the expanding star uses colors to differentiate materials of different chemical compositions, it shows that similar materials often remain gathered together in the remnants of the explosion. Radio source Cas A's flux density at 1 GHz was measured in 1980. Because the supernova remnant is cooling, its flux density is steadily decreasing. This decrease means that, at frequencies below 1 GHz, Cas A is now less intense than Cygnus A. Cas A is still the brightest extrasolar radio source in the sky at frequencies above 1 GHz. X-ray source Although Cas X-1 (or Cas XR-1), the apparent first X-ray source in the constellation Cassiopeia, was not detected during the Aerobee sounding rocket flight of 16 June 1964, it was considered a possible source. Cas A was scanned during another Aerobee rocket flight of 1 October 1964, but no significant X-ray flux above background was associated with the position. Cas XR-1 was discovered by an Aerobee rocket flight on 25 April 1965. Cas X-1 is Cas A, a Type II SNR. The designations Cassiopeia X-1, Cas XR-1 and Cas X-1 are no longer used, but the X-ray source is Cas A (SNR G111.7-02.1) at 2U 2321+58. In 1999, the Chandra X-Ray Observatory found CXOU J232327.8+584842, a central compact object that is the neutron star remnant left by the explosion. Supernova reflected echo In 2005, an infrared echo of the Cassiopeia A explosion was observed on nearby gas clouds using the Spitzer Space Telescope. The infrared echo was also seen by IRAS and studied with the Infrared Spectrograph. Previously it was suspected that a flare in 1950 from a central pulsar could be responsible for the infrared echo. With the new data, it was concluded that this was unlikely to be the case and that the infrared echo was caused by thermal emission by dust, which was heated by the radiative output of the supernova during the shock breakout. The infrared echo is accompanied by a scattered light echo. The recorded spectrum of the optical light echo proved the supernova was of Type IIb, meaning it resulted from the internal collapse and violent explosion of a massive star, most probably a red supergiant with a helium core that had lost almost all of its hydrogen envelope. This was the first observation of the light echo of a supernova whose explosion had not been directly observed, which opens up the possibility of studying and reconstructing past astronomical events. In 2011, a study used spectra from different positions of the light echo to confirm that the Cassiopeia A supernova was asymmetric. Phosphorus detection In 2013, astronomers detected phosphorus in Cassiopeia A, which confirmed that this element is produced in supernovae through supernova nucleosynthesis. The phosphorus-to-iron ratio in material from the supernova remnant could be up to 100 times higher than in the Milky Way in general. Gallery See also List of supernova remnants Light echo References External links 1940s in outer space 1947 in science 461 Cassiopeia (constellation) Milky Way Supernova remnants Astronomical objects discovered in 1947 Cassiopeiae, 03
Cassiopeia A
Astronomy
1,432
63,538,213
https://en.wikipedia.org/wiki/Maria%20Sk%C5%82odowska-Curie%20Monument%20%28Lublin%29
The Maria Skłodowska-Curie Monument (Polish: Pomnik Marii Skłodowskiej-Curie w Lublinie) is a bronze statue in Lublin, eastern Poland, dedicated to Polish physicist and chemist Marie Curie (1867–1934). History The bronze monument was designed by Polish sculptor Marian Konieczny (with Stanisław Ciechan) and ceremonially unveiled on 24 October 1964. It is 9 metres high (including pedestal) and stands on Marie Skłodowska-Curie Square (Plac Marii Skłodowskiej-Curie), near Maria Curie-Skłodowska University (UMCS). Marie Curie is depicted in a long robe and holding a book in her right hand. The pedestal inscriptions read: "To Maria Skłodowska-Curie, from the University Bearing Her Name, and from [Polish] Society" and "On the 20th Anniversary of the Founding of the University. 1944–1964." Gallery See also Maria Konopnicka Monument in Września References Monuments and memorials in Poland 1964 establishments in Poland 1964 sculptures Buildings and structures completed in 1964 Buildings and structures in Lublin Outdoor sculptures in Poland Statues of women in Poland Tourist attractions in Lublin Monuments and memorials to Marie Curie Colossal statues
Maria Skłodowska-Curie Monument (Lublin)
Physics,Mathematics
267
16,782,285
https://en.wikipedia.org/wiki/HD%20170469%20b
HD 170469 b is a gas giant exoplanet located approximately 212 light-years away in the constellation Ophiuchus, orbiting the star HD 170469. This planet was discovered in April 2007. The star has 1.1 times the mass of the Sun, and the planet has at least 67% of the mass of Jupiter, orbiting at about half Jupiter's distance from the Sun. The mass value is only a minimum, since the inclination is unknown. The orbital distance is more than twice the distance from Earth to the Sun, and the planet takes over three Earth years to orbit the star. Together, this distance and period imply an orbital velocity of 19.8 km/s, slower than Earth's 29.8 km/s. References External links Exoplanets discovered in 2007 Giant planets Ophiuchus Exoplanets detected by radial velocity
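As a rough check of the quoted speeds, a minimal sketch (using a semi-major axis of about 2.2 AU and a period of about 3.3 years; these exact numbers are assumptions consistent with the description above, not values stated in this article) computes the mean speed of a roughly circular orbit as circumference divided by period:

import math

AU_KM = 1.496e8    # astronomical unit in kilometres
YEAR_S = 3.156e7   # Julian year in seconds

def circular_orbital_velocity(a_au, period_yr):
    # Mean orbital speed (km/s): orbit circumference / orbital period
    return 2 * math.pi * a_au * AU_KM / (period_yr * YEAR_S)

print(circular_orbital_velocity(2.2, 3.3))   # ~19.8 km/s
print(circular_orbital_velocity(1.0, 1.0))   # Earth: ~29.8 km/s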
HD 170469 b
Astronomy
169
55,962,963
https://en.wikipedia.org/wiki/NGC%205640
NGC 5640 is a spiral galaxy approximately 660 million light-years away from Earth in the constellation of Camelopardalis. It was discovered by British astronomer William Herschel on December 20, 1797. Supernova SN 1996ah Supernova SN 1996ah was discovered in NGC 5640 on June 6, 1996 by J. Mueller, who was using the 1.2-m Oschin Schmidt telescope in the course of the second Palomar Sky Survey. SN 1996ah had a magnitude of about 18 and was located southwest of the centre of NGC 5640 (coordinates: RA 14h20m39.020s, DEC +80d07m21.00s, J2000.0). It was classified as a type Ia supernova. See also List of NGC objects (5001–6000) References External links SEDS Spiral galaxies Camelopardalis 5640 51263 Astronomical objects discovered in 1797 Discoveries by William Herschel
NGC 5640
Astronomy
192
346,883
https://en.wikipedia.org/wiki/IEEE%20802.1X
IEEE 802.1X is an IEEE Standard for port-based network access control (PNAC). It is part of the IEEE 802.1 group of networking protocols. It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN. The standard directly addresses an attack technique called Hardware Addition, in which an attacker posing as a guest, customer or staff member smuggles a hacking device into the building and then plugs it into the network, gaining full access. A notable example of the issue occurred in 2005 when a machine attached to Walmart's network hacked thousands of their servers. IEEE 802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over wired IEEE 802 networks and over 802.11 wireless networks, which is known as "EAP over LAN" or EAPOL. EAPOL was originally specified for IEEE 802.3 Ethernet, IEEE 802.5 Token Ring, and FDDI (ANSI X3T9.5/X3T12 and ISO 9314) in 802.1X-2001, but was extended to suit other IEEE 802 LAN technologies such as IEEE 802.11 wireless in 802.1X-2004. EAPOL was also modified for use with IEEE 802.1AE ("MACsec") and IEEE 802.1AR (Secure Device Identity, DevID) in 802.1X-2010 to support service identification and optional point-to-point encryption over the internal LAN segment. 802.1X is part of the logical link control (LLC) sublayer of the 802 reference model. Overview 802.1X authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client device (such as a laptop) that wishes to attach to the LAN/WLAN. The term 'supplicant' is also used interchangeably to refer to the software running on the client that provides credentials to the authenticator. The authenticator is a network device, such as an Ethernet switch or wireless access point, that provides a data link between the client and the network and can allow or block network traffic between the two. The authentication server is typically a trusted server that can receive and respond to requests for network access, tell the authenticator whether the connection is to be allowed, and supply various settings that should apply to that client's connection. Authentication servers typically run software supporting the RADIUS and EAP protocols. In some cases, the authentication server software may be running on the authenticator hardware. The authenticator acts like a security guard to a protected network. The supplicant (i.e., client device) is not allowed access through the authenticator to the protected side of the network until the supplicant's identity has been validated and authorized. With 802.1X port-based authentication, the supplicant must initially provide the required credentials to the authenticator; these will have been specified in advance by the network administrator and could include a user name/password or a permitted digital certificate. The authenticator forwards these credentials to the authentication server to decide whether access is to be granted. If the authentication server determines the credentials are valid, it informs the authenticator, which in turn allows the supplicant (client device) to access resources located on the protected side of the network. Protocol operation EAPOL operates over the data link layer, and in the Ethernet II framing protocol has an EtherType value of 0x888E. Port entities 802.1X-2001 defines two logical port entities for an authenticated port—the "controlled port" and the "uncontrolled port".
The controlled port is manipulated by the 802.1X PAE (Port Access Entity) to allow (in the authorized state) or prevent (in the unauthorized state) network traffic ingress and egress to/from the controlled port. The uncontrolled port is used by the 802.1X PAE to transmit and receive EAPOL frames. 802.1X-2004 defines the equivalent port entities for the supplicant, so a supplicant implementing 802.1X-2004 may prevent higher-level protocols from being used if it is not content that authentication has successfully completed. This is particularly useful when an EAP method providing mutual authentication is used, as the supplicant can prevent data leakage when connected to an unauthorized network. Typical authentication progression The typical authentication procedure consists of: Initialization On detection of a new supplicant, the port on the switch (authenticator) is enabled and set to the "unauthorized" state. In this state, only 802.1X traffic is allowed; other traffic, such as the Internet Protocol (and with that TCP and UDP), is dropped. Initiation To initiate authentication, the authenticator will periodically transmit EAP-Request Identity frames to a special Layer 2 group MAC address on the local network segment. The supplicant listens at this address, and on receipt of the EAP-Request Identity frame, it responds with an EAP-Response Identity frame containing an identifier for the supplicant such as a User ID. The authenticator then encapsulates this Identity response in a RADIUS Access-Request packet and forwards it on to the authentication server. The supplicant may also initiate or restart authentication by sending an EAPOL-Start frame to the authenticator, which will then reply with an EAP-Request Identity frame. Negotiation (technically, EAP negotiation) The authentication server sends a reply (encapsulated in a RADIUS Access-Challenge packet) to the authenticator, containing an EAP Request specifying the EAP Method (the type of EAP-based authentication it wishes the supplicant to perform). The authenticator encapsulates the EAP Request in an EAPOL frame and transmits it to the supplicant. At this point, the supplicant can start using the requested EAP Method, or do a NAK ("Negative Acknowledgement") and respond with the EAP Methods it is willing to perform. Authentication If the authentication server and supplicant agree on an EAP Method, EAP Requests and Responses are sent between the supplicant and the authentication server (translated by the authenticator) until the authentication server responds with either an EAP-Success message (encapsulated in a RADIUS Access-Accept packet), or an EAP-Failure message (encapsulated in a RADIUS Access-Reject packet). If authentication is successful, the authenticator sets the port to the "authorized" state and normal traffic is allowed. If it is unsuccessful, the port remains in the "unauthorized" state. When the supplicant logs off, it sends an EAPOL-Logoff message to the authenticator; the authenticator then sets the port to the "unauthorized" state, once again blocking all non-EAP traffic. Implementations An open-source project named Open1X produces a client, Xsupplicant. This client is currently available for both Linux and Windows. The main drawbacks of the Open1X client are that it does not provide comprehensible and extensive user documentation and that most Linux vendors do not provide a package for it. The more general wpa_supplicant can be used for 802.11 wireless networks and wired networks.
Both support a very wide range of EAP types. The iPhone and iPod Touch have supported 802.1X since the release of iOS 2.0. Android has supported 802.1X since the release of 1.6 Donut. ChromeOS has supported 802.1X since mid-2011. macOS has offered native support since 10.3. Avenda Systems provides a supplicant for Windows, Linux and macOS. They also have a plugin for the Microsoft NAP framework. Avenda also offers health checking agents. Windows Windows defaults to not responding to 802.1X authentication requests for 20 minutes after a failed authentication. This can cause significant disruption to clients. The block period can be configured using the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\dot3svc\BlockTime DWORD value (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\wlansvc\BlockTime for wireless networks) in the registry (entered in minutes). A hotfix is required for Windows XP SP3 and Windows Vista SP2 to make the period configurable. Wildcard server certificates are not supported by EAPHost, the Windows component that provides EAP support in the operating system. The implication of this is that when using a commercial certification authority, individual certificates must be purchased. Windows XP Windows XP has major issues with its handling of IP address changes resulting from user-based 802.1X authentication that changes the VLAN, and thus the subnet, of clients. Microsoft has stated that it will not backport the SSO feature from Vista that resolves these issues. If users are not logging in with roaming profiles, a hotfix must be downloaded and installed if authenticating via PEAP with PEAP-MSCHAPv2. Windows Vista Windows Vista-based computers that are connected via an IP phone may not authenticate as expected and, as a result, the client can be placed into the wrong VLAN. A hotfix is available to correct this. Windows 7 Windows 7-based computers that are connected via an IP phone may not authenticate as expected and, consequently, the client can be placed into the wrong VLAN. A hotfix is available to correct this. Windows 7 does not respond to 802.1X authentication requests after initial 802.1X authentication fails. This can cause significant disruption to clients. A hotfix is available to correct this. Windows PE For most enterprises deploying and rolling out operating systems remotely, Windows PE does not have native support for 802.1X. However, support can be added to WinPE 2.1 and WinPE 3.0 through hotfixes that are available from Microsoft. Although full documentation is not yet available, preliminary documentation for the use of these hotfixes is available via a Microsoft blog. Linux Most Linux distributions support 802.1X via wpa_supplicant and desktop integration like NetworkManager. Apple devices As of iOS 17 and macOS 14, Apple devices support connecting to 802.1X networks using EAP-TLS with TLS 1.3 (EAP-TLS 1.3). Additionally, devices running iOS/iPadOS/tvOS 17 or later support wired 802.1X networks. Federations eduroam (the international roaming service) mandates the use of 802.1X authentication when providing network access to guests visiting from other eduroam-enabled institutions. BT (British Telecom, PLC) employs Identity Federation for authentication in services delivered to a wide variety of industries and governments. Proprietary extensions MAB (MAC Authentication Bypass) Not all devices support 802.1X authentication. Examples include network printers, Ethernet-based electronics like environmental sensors, cameras, and wireless phones.
For those devices to be used in a protected network environment, alternative mechanisms must be provided to authenticate them. One option would be to disable 802.1X on that port, but that leaves the port unprotected and open for abuse. Another, slightly more reliable, option is MAB. When MAB is configured on a port, that port will first try to check if the connected device is 802.1X compliant, and if no reaction is received from the connected device, it will try to authenticate with the AAA server using the connected device's MAC address as username and password. The network administrator then must make provisions on the RADIUS server to authenticate those MAC addresses, either by adding them as regular users or implementing additional logic to resolve them in a network inventory database. Many managed Ethernet switches offer options for this. Vulnerabilities in 802.1X-2001 and 802.1X-2004 Shared media In the summer of 2005, Microsoft's Steve Riley posted an article (based on the original research of Microsoft MVP Svyatoslav Pidgorny) detailing a serious vulnerability in the 802.1X protocol, involving a man-in-the-middle attack. In summary, the flaw stems from the fact that 802.1X authenticates only at the beginning of the connection, but after that authentication, it is possible for an attacker to use the authenticated port if they have the ability to physically insert themselves (perhaps using a workgroup hub) between the authenticated computer and the port. Riley suggests that for wired networks the use of IPsec or a combination of IPsec and 802.1X would be more secure. EAPOL-Logoff frames transmitted by the 802.1X supplicant are sent in the clear and contain no data derived from the credential exchange that initially authenticated the client. They are therefore trivially easy to spoof on shared media and can be used as part of a targeted DoS on both wired and wireless LANs. In an EAPOL-Logoff attack, a malicious third party with access to the medium the authenticator is attached to repeatedly sends forged EAPOL-Logoff frames from the target device's MAC address. The authenticator (believing that the targeted device wishes to end its authentication session) closes the target's authentication session, blocking traffic ingressing from the target and denying it access to the network. The 802.1X-2010 specification, which began as 802.1af, addresses vulnerabilities in previous 802.1X specifications by using MACsec IEEE 802.1AE to encrypt data between logical ports (running on top of a physical port) and IEEE 802.1AR (Secure Device Identity / DevID) authenticated devices. As a stopgap, until these enhancements are widely implemented, some vendors have extended the 802.1X-2001 and 802.1X-2004 protocol, allowing multiple concurrent authentication sessions to occur on a single port. While this prevents traffic from devices with unauthenticated MAC addresses ingressing on an 802.1X authenticated port, it will not stop a malicious device snooping on traffic from an authenticated device, and provides no protection against MAC spoofing or EAPOL-Logoff attacks. Alternatives The IETF-backed alternative is the Protocol for Carrying Authentication for Network Access (PANA), which also carries EAP, although it works at layer 3, using UDP, thus not being tied to the 802 infrastructure.
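The EAPOL framing underlying both the normal protocol operation and the EAPOL-Logoff attack described above is simple enough to sketch directly. The following minimal Python sketch (the source MAC address is a placeholder, not a value from this article, and padding to the Ethernet minimum frame size is left to the sender) builds the raw bytes of an EAPOL-Start or EAPOL-Logoff frame: an Ethernet II header addressed to the 802.1X PAE group address with EtherType 0x888E, followed by the 4-byte EAPOL header. Nothing in the frame is derived from the earlier credential exchange, which is why forged Logoff frames are trivial to construct:

import struct

PAE_GROUP_ADDR = bytes.fromhex("0180c2000003")  # 802.1X PAE group MAC address
ETHERTYPE_EAPOL = 0x888E                        # EAPOL EtherType
EAPOL_START, EAPOL_LOGOFF = 1, 2                # EAPOL packet types

def eapol_frame(src_mac, packet_type, version=1):
    # Ethernet II header: destination, source, EtherType
    eth_header = PAE_GROUP_ADDR + src_mac + struct.pack("!H", ETHERTYPE_EAPOL)
    # EAPOL header: version, packet type, body length (Start/Logoff have no body)
    eapol_header = struct.pack("!BBH", version, packet_type, 0)
    return eth_header + eapol_header

frame = eapol_frame(bytes.fromhex("001122334455"), EAPOL_LOGOFF)
print(frame.hex())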
See also AEGIS SecureConnect IEEE 802.11i-2004 References External links IEEE page on 802.1X GetIEEE802 Download 802.1X-2020 GetIEEE802 Download 802.1X-2010 GetIEEE802 Download 802.1X-2004 GetIEEE802 Download 802.1X-2001 Ultimate wireless security guide: Self-signed certificates for your RADIUS server WIRE1x Wired Networking with 802.1X Authentication on Microsoft TechNet IEEE 802.01x Networking standards Computer access control protocols Computer network security
IEEE 802.1X
Technology,Engineering
3,128
11,780,267
https://en.wikipedia.org/wiki/Donetsk%20National%20Technical%20University
Donetsk National Technical University (DonNTU, formerly Donetsk Polytechnic Institute and other names) is the largest and oldest higher education establishment in Donbas, founded in 1921. In its early years, it was attended by Nikita Khrushchev. Following the loss of Ukrainian government control over Donetsk in 2014 during the war in Donbas, the University was evacuated to Pokrovsk. A small group of collaborationists among the faculty claimed to continue operating as the Donetsk National Technical University on the campus in Russian-occupied Donetsk, as a project aimed at legitimizing the so-called Donetsk People's Republic. However, the diplomas they issued were not recognized even in Russia itself, and local students were officially enrolled as correspondence students at minor provincial Russian universities. Following the full-scale Russian invasion of Ukraine, that university appears to have been discontinued. On 28 February 2024, the university building in Pokrovsk was partially destroyed by a Russian missile attack. Structure Donetsk National Technical University (DonNTU) is the first higher education establishment in the Donbas Region and one of the first technical universities in Ukraine. 27,000 students study at 7 faculties, majoring in 60 specialities. There are 28 correspondent members and academicians of the engineering academies, and 18 honorary researchers and professors, among the academics of the university. A number of scientists of DonNTU are honorary and full members of foreign organizations and academies. There are professors and students whose work was supported by the Soros Fund. DPI team From 1987 to 1996, the university had a team, known as the DPI team, participating in the popular comedic game-show KVN. It was made up of students who were studying at the university and would later become prominent within Ukrainian society, including Ismail Abdullaiev and . Collaboration DonNTU has more than 70 collaboration agreements with universities all over the world. There is an office of the Siemens company at the university. At the three engineering faculties (German, French, and English) students are trained in the appropriate foreign language. A Polish faculty has been established. Thirty professors from foreign universities are Honorary Doctors of DonNTU. The university has a reading room sponsored by the Goethe Institute, Germany. Donetsk National Technical University is a member of the EAU (European Association of Universities).
DonNTU is a member of UICEE – The International Engineering Education Centre sponsored by UNESCO, Melbourne, Australia EAIE – European Association of International Education EAAU – Euro-Asian Association of Universities SEFI – European Association of Engineering Education IGIP – International Association of Engineering Education (Austria) COFRAMA – French Council on Management links development with the countries of the CIS and Russia (Lyon, France); PRELUDE – International Association of Research and links with universities (Belgium) CEUME – Consortium of Management Education in Ukraine (the US, Poland) URAN – Ukrainian Educational and Research Network sponsored by NATO and the German Research Network DonNTU is a participant in the following international programmes: TEMPUS-TACIS NCD-JEP – 23125-2002 European Studies; DAAD Eastern Partnerships (Germany) Stipend of the International Board of the Ministry of Education and Science (the German Aerodynamics Center) BWTZ-Programm (Germany, the Ministry of Science) INTAS – Publishing House (Germany) BMEU/CEUME Business - Management - Education (USA, Poland) The Jozef Mihknowski Science Development Fund (Poland) Student Exchange Programmes AIESEC (Poland) Grant from the Ministry of Education and Sports, Poland Grant from the Ministry of Education and Science (Russia) Programme Dnipro (France) Grant of the Special School of Social Works, Construction and Industry (ESTP), (France) Grant from the government of the Czech Republic SIDA – Master and Bachelor programme Sandwich (Sweden) Gallery References External links Official website of the University Website of Russian collaborationists, offline since March 2022 Universities and colleges in Donetsk Universities and colleges established in 1921 1921 establishments in Ukraine Technical universities and colleges in Ukraine National universities in Ukraine Institutions with the title of National in Ukraine Schools of mines
Donetsk National Technical University
Engineering
835
521,941
https://en.wikipedia.org/wiki/Anti-social%20behaviour
Anti-social behaviours, sometimes called dissocial behaviours, are actions considered to violate the rights of, or otherwise harm, others, whether through crime or nuisance, such as stealing and physical attack, or through noncriminal behaviours such as lying and manipulation. Such behaviour is considered disruptive to others in society. It can be carried out in various ways, including, but not limited to, intentional aggression, as well as covert and overt hostility. Anti-social behaviour also develops through social interaction within the family and community. It continuously affects a child's temperament, cognitive ability and their involvement with negative peers, dramatically affecting children's cooperative problem-solving skills. Many people also label behaviour which is deemed contrary to prevailing norms for social conduct as anti-social behaviour. However, researchers have stated that it is a difficult term to define, particularly in the United Kingdom, where many acts fall into its category. The term is especially used in Irish English and British English. Although the term is fairly new to the common lexicon, anti-social behaviour has been used for many years in the psychosocial world, where it was defined as "unwanted behaviour as the result of personality disorder." For example, David Farrington, a British criminologist and forensic psychologist, stated that teenagers can exhibit anti-social behaviour by engaging in wrongdoing such as stealing, vandalism, sexual promiscuity, excessive smoking, heavy drinking, confrontations with parents, and gambling. In children, conduct disorders could result from ineffective parenting. Anti-social behaviour is typically associated with other behavioural and developmental issues such as hyperactivity, depression, learning disabilities, and impulsivity. Alongside these issues, one can be predisposed or more inclined to develop such behaviour due to genetics, as well as neurobiological and environmental stressors, from the prenatal stage of life through the early childhood years. The American Psychiatric Association, in its Diagnostic and Statistical Manual of Mental Disorders, diagnoses persistent anti-social behaviour starting from a young age as antisocial personality disorder. Genetic factors include abnormalities in the prefrontal cortex of the brain, while neurobiological risks include maternal drug use during pregnancy, birth complications, low birth weight, prenatal brain damage, traumatic head injury, and chronic illness. The World Health Organization includes it in the International Classification of Diseases as dissocial personality disorder. A pattern of persistent anti-social behaviours can also be present in children and adolescents diagnosed with conduct problems, including conduct disorder or oppositional defiant disorder under the DSM-5. It has been suggested that individuals with intellectual disabilities have higher tendencies to display anti-social behaviours, but this may be related to social deprivation and mental health problems. More research is required on this topic. Development Intent and discrimination may determine both pro-social and anti-social behaviour. Infants may act in seemingly anti-social ways and yet be generally accepted as too young to know the difference before the age of four or five. Berger states that parents should teach their children that "emotions need to be regulated, not depressed".
One problem with the assumption that behaviour that is "simply ignorant" in infants has antisocial causes in persons older than four or five, who are supposed to have more complex brains (and with them a more advanced consciousness), is that it presumes that what appears to be the same behaviour has fewer possible causes in a more complex brain than in a less complex one. This has been criticized because a more complex brain increases, rather than decreases, the number of possible causes of what looks like the same behaviour. Studies have shown that children between the ages of 13 and 14 who bully or show aggressive behaviour towards others are likely to exhibit anti-social behaviours in their early adulthood. There are strong statistical relationships showing this significant association between childhood aggressiveness and anti-social behaviours. Analyses found that 20% of the children who exhibited anti-social behaviours at later ages had court appearances and police contact as a result of their behaviour. Many of the studies regarding the media's influence on anti-social behaviour have been deemed inconclusive. Some reviews have found strong correlations between aggression and the viewing of violent media, while others find little evidence to support their case. The only unanimously accepted truth regarding anti-social behaviour is that parental guidance carries an undoubtedly strong influence; providing children with brief negative evaluations of violent characters helps to reduce violent effects in the individual. Cause and effects Family Families greatly impact the causation of anti-social behaviour. Some other familial causes are a parental history of anti-social behaviours, parental alcohol and drug abuse, an unstable home life, the absence of good parenting, physical abuse, parental instability (mental health issues/PTSD) and economic distress within the family. Neurobiology Studies have found that there is a link between antisocial behaviour and increased amygdala activity, specifically in response to angry facial expressions. This research suggests that the over-reactivity to perceived threats that comes with antisocial behaviour may stem from this increase in amygdala activity. This focus on perceived threat does not include emotions centered around distress. Consumption patterns There is a small link between antisocial personality characteristics in adulthood and more TV watching as a child. The risk of early adulthood criminal conviction increased by nearly 30 percent with each hour children spent watching TV on an average weekend. Peers can also impact one's predisposition to anti-social behaviours; in particular, children are more likely to adopt anti-social behaviours if such behaviours are present within their peer group. Especially within youth, patterns of lying, cheating and disruptive behaviours found in young children are early signs of anti-social behaviour. Adults must intervene if they notice their children exhibiting these behaviours. Early detection is best in the preschool and middle school years, in the hope of interrupting the trajectory of these negative patterns. These patterns in children can lead to conduct disorder, a disorder in which children rebel against age-appropriate norms. Moreover, these offences can lead to oppositional defiant disorder, in which children are defiant towards adults and develop vindictive behaviours and patterns.
Furthermore, children who exhibit anti-social behaviour are also more prone to alcoholism in adulthood. Intervention and treatment Because anti-social behaviour is a high-prevalence mental health problem in children, many interventions and treatments have been developed to prevent it and to help reinforce pro-social behaviours. Several factors are considered direct or indirect causes of the development of anti-social behaviour in children. Addressing these factors is necessary to develop a reliable and effective intervention or treatment. Children's perinatal risk, temperament, intelligence, nutrition level, and interaction with parents or caregivers can influence their behaviours. As for parents or caregivers, their personality traits, behaviours, socioeconomic status, social network, and living environment can also affect children's development of anti-social behaviour. An individual's age at intervention is a strong predictor of the effectiveness of a given treatment. The specific kinds of anti-social behaviours exhibited, as well as the magnitude of those behaviours, also affect how effective a treatment is for an individual. Behavioural parent training (BPT) is more effective for preschool or elementary school-aged children, and cognitive behavioural therapy (CBT) is more effective for adolescents. Moreover, early intervention against anti-social behaviour is relatively more promising. For preschool children, family is the main consideration for the context of intervention and treatment. The interaction between children and parents or caregivers, parenting skills, social support, and socioeconomic status are the relevant factors. For school-aged children, the school context also needs to be considered. Collaboration amongst parents, teachers, and school psychologists is usually recommended to help children develop the ability to resolve conflicts, manage their anger, develop positive interactions with other students, and learn pro-social behaviours within both home and school settings. Moreover, the training of parents or caregivers is also important. Their children would be more likely to learn positive social behaviours and reduce inappropriate behaviours if they become good role models and have effective parenting skills. Cognitive behavioural therapy Cognitive behavioural therapy (CBT) is a highly effective, evidence-based therapy for anti-social behaviour. This type of treatment focuses on enabling patients to create an accurate image of the self, allowing individuals to find the trigger of their harmful actions and changing how they think and act in social situations. Due to their impulsivity, their inability to form trusting relationships and their tendency to blame others when a situation arises, individuals with particularly aggressive anti-social behaviours tend to have maladaptive social cognitions, including hostile attribution bias, which lead to negative behavioural outcomes. CBT has been found to be more effective for older children and less effective for younger children. Problem-solving skills training (PSST) is a type of CBT that aims to recognize and correct how an individual thinks and consequently behaves in social environments. This training provides steps that help people evaluate potential solutions to problems occurring outside of therapy and learn to create positive solutions that avoid physical aggression and resolve conflict.
Therapists, when providing CBT intervention to individuals with anti-social behaviour, should first assess the level of risk of the behaviour in order to plan the duration and intensity of the intervention. Moreover, therapists should support and motivate individuals to practice the new skills and behaviours in environments and contexts where the conflicts would naturally occur, in order to observe the effects of CBT. Behavioural parent training Behavioural parent training (BPT), or parent management training (PMT), focuses on changing how parents interact with their children and equips them with ways to recognize and change their child's maladaptive behaviour in a variety of situations. BPT assumes that individuals are exposed to reinforcements and punishments daily and that anti-social behaviour, which can be learned, is a result of these reinforcements and punishments. Since certain types of interactions between parents and children may reinforce a child's anti-social behaviour, the aim of BPT is to teach the parent effective skills to better manage and communicate with their child. This could be done by reinforcing pro-social behaviours while punishing or ignoring anti-social behaviours. It is important to note that the effects of this therapy can be seen only if the newly acquired communication methods are maintained. BPT has been found to be most effective for younger children, under the age of 12. Researchers credit the effectiveness of this treatment at younger ages to the fact that younger children are more reliant on their parents. BPT is used to treat children with conduct problems, but also children with ADHD. According to a meta-analysis, the effectiveness of BPT is supported by short-term changes in children's anti-social behaviour. However, whether these changes are maintained over a longer period of time is still unclear. School-based Intervention First Step to Success is an early intervention for Kindergarten to 3rd grade children who are demonstrating antisocial behaviours. First Step is a collaborative intervention between home and school. There are three important components: (1) Screening; (2) School intervention (CLASS): teaches the child appropriate behaviour through positive reinforcement; (3) Home intervention (HomeBase): teaches the parent key skills for supporting their child and the use of positive reinforcement. The classroom intervention phase (CLASS) takes about 30 days to complete and has 3 phases: (1) Coach-led; (2) Teacher-led; (3) Maintaining. The Red Card/Green Card game (red = inappropriate behaviour; green = appropriate behaviour) is played at school each day. The coach/teacher shows a red/green card as a visual cue to the target student based on their current behaviour. Points are earned if the card is on green at the end of a timed interval. If enough points are earned at the end of the game, the target child gets to choose a reward that the entire class can enjoy together (i.e., extra time at recess, playing a special game, etc.). Coaches/teachers communicate daily with parent(s) throughout the intervention. The home intervention (HomeBase) begins a few days after the classroom intervention. HomeBase builds parents' confidence in 6 specific skill areas and in parent-child activities. Coaches meet with parent(s) once weekly for 6 weeks. Parent(s) engage with the target child for 10–15 minutes daily in one-on-one time during the intervention.
Overall, First Step takes about 3 months to implement, requires minimal time from parents and teachers, and has shown empirically positive results in increasing prosocial behaviour in at-risk children. Psychotherapy Psychotherapy, or talk therapy, although not always effective, can also be used to treat individuals with anti-social behaviour. Individuals can learn skills such as anger and violence management. This type of therapy can help individuals with anti-social behaviour bridge the gap between their feelings and behaviours, a connection they previously lacked. It is most effective when specific issues are being discussed with individuals with anti-social behaviours, rather than broad general concepts. This type of therapy works well with individuals who are at a mild to moderate stage of anti-social behaviour, since they still have some sense of responsibility regarding their own problems. Mentalization-based treatment is another form of group psychotherapy, shifting its focus to the relational and mental factors related to anti-social personality disorder rather than anger management and violent acts. This particular group therapy targets the mentalizing vulnerabilities and attachment patterns of patients by using a semi-structured group process focused on personal formulation and by establishing group values to promote learning from other members and generating "we-ness." When working with individuals with anti-social behaviour, therapists must be mindful of building a trusting therapeutic relationship, since these individuals might never have experienced rewarding relationships. Therapists also need to remember that change might take place slowly; thus, an ability to notice small changes, and constant encouragement for individuals with anti-social behaviour to continue the intervention, are required. Family therapy Family therapy, which is a type of psychotherapy, helps promote communication between family members, thus resolving conflicts related to anti-social behaviour. Since family exerts enormous influence over children's development, it is important to identify the behaviours that could potentially lead to anti-social behaviours in children. It is a relatively short-term therapy which involves the family members who are willing to participate. Family therapy can be used to address specific topics such as aggression. The therapy may end when the family can resolve conflicts without needing the therapist to intervene. Diagnosis There is no official diagnosis for anti-social behaviour. However, we can have a look at the official diagnosis for antisocial personality disorder (ASPD) and use it as a guideline, while keeping in mind that anti-social behaviour and ASPD are not to be confused. Distinguishing from antisocial personality disorder When looking at non-ASPD patients (who show anti-social behaviour) and ASPD patients, it all comes down to the same types of behaviours. However, ASPD is a personality disorder which is defined by the consistency and stability of the observed behaviour, in this case, anti-social behaviour. Antisocial personality disorder can only be diagnosed when a pattern of anti-social behaviour became noticeable during childhood and/or the early teens and remained stable and consistent across time and context. In the official DSM IV-TR for ASPD, it is specified that the anti-social behaviour has to occur outside of time frames surrounding traumatic life events or manic episodes (if the individual is diagnosed with another mental disorder).
The diagnosis for ASPD cannot be made before the age of 18. For example, someone who exhibits anti-social behaviour with their family but pro-social behaviour with friends and coworkers would not qualify for ASPD because the behaviour is not consistent across context. Someone who was consistently behaving in a pro-social way and then begins exhibiting anti-social behaviour in response to a specific life event would not qualify for ASPD either, because the behaviour is not stable across time. Law-breaking behaviour in which individuals put themselves or others at risk is considered anti-social even if it is not consistent or stable (examples: speeding, use of drugs, getting into physical conflicts). In relation to the previous statement, juvenile delinquency is a core element of the diagnosis of ASPD. Individuals who begin getting in trouble with the law (in more than one area) at an abnormally early age (around 15) and keep recurrently doing so in adulthood may be suspected of having ASPD. Evidence: frustration and aggression With some limitations, research has established a correlation between frustration and aggression when it comes to anti-social behaviour. The presence of anti-social behaviour may be detected when an individual is experiencing an abnormally high number of frustrations in their daily routine and when those frustrations always result in aggression. The term impulsivity is commonly used to describe this behavioural pattern. Anti-social behaviour can also be detected if the aggressiveness and impulsiveness of the individual's behaviour in response to frustrations is such that it obstructs social interactions and the achievement of personal goals. In both of these cases, we can consider the different types of treatment and therapy previously mentioned in this article. Examples in childhood: unable to make friends, unable to follow rules, getting kicked out of school, unable to complete minimal levels of education (elementary school, middle school). Examples in early adulthood: unable to keep a job or an apartment, difficulty maintaining relationships. Prognosis The prognosis for anti-social behaviour is not very favourable due to its high stability throughout child development. Studies have shown that children who are aggressive and have conduct problems are more likely to have anti-social behaviour in adolescence. Early intervention against anti-social behaviour is relatively more effective, since the anti-social pattern has been present for a shorter period of time. Moreover, since younger children have smaller social networks and fewer social activities, fewer contexts need to be considered for the intervention and treatment. For adolescents, studies have shown that treatments become less effective. The prognosis seems not to be influenced by the duration of intervention; however, a long-term follow-up is necessary to confirm that the intervention or treatment is effective. Individuals who exhibit anti-social behaviour are more likely to use drugs and abuse alcohol. This could make the prognosis worse, since they would be less likely to be involved in social activities and would become more isolated. By location United Kingdom An anti-social behaviour order (ASBO) is a civil order made against a person who has been shown, on the balance of evidence, to have engaged in anti-social behaviour.
The orders, introduced in the United Kingdom by Prime Minister Tony Blair in 1998, were designed to criminalize minor incidents that would not have warranted prosecution before. The Crime and Disorder Act 1998 defines anti-social behaviour as acting in a manner that has "caused or was likely to cause harassment, alarm or distress to one or more persons not of the same household" as the perpetrator. There has been debate concerning the vagueness of this definition. However, among legal professionals in the UK there are behaviours commonly considered to fall under the definitions of anti-social behaviour. These include, but are not limited to, threatening or intimidating actions, racial or religious harassment, verbal abuse, and physical abuse. In a survey conducted by University College London during May 2006, the UK was thought by respondents to be Europe's worst country for anti-social behaviour, with 76% believing Britain had a "big or moderate problem". Current legislation governing anti-social behaviour in the UK is the Anti-Social Behaviour, Crime and Policing Act 2014, which received Royal Assent in March 2014 and came into enforcement in October 2014. This replaces tools such as the ASBO with 6 streamlined tools designed to make it easier to act on anti-social behaviour. Australia Anti-social behaviour can have a negative impact on Australian communities and their perception of safety. The Western Australia Police force defines anti-social behaviour as any behaviour that annoys, irritates, disturbs or interferes with a person's ability to go about their lawful business. In Australia, many different acts are classed as anti-social behaviour, such as: misuse of public space; disregard for community safety; disregard for personal well-being; acts directed at people; graffiti; protests; liquor offences; and drunk driving. It has been found that it is very common for Australian adolescents to engage in different levels of anti-social behaviour. A survey was conducted in 1996 in New South Wales, Australia, of 441,234 secondary school students in years 7 to 12 about their involvement in anti-social activities. 38.6% reported intentionally damaging or destroying someone else's property, 22.8% admitted to having received or sold stolen goods, and close to 40% confessed to attacking someone with the idea of hurting them. The Australian community is encouraged to report any behaviour of concern and plays a vital role in assisting police to reduce anti-social behaviour. One study conducted in 2016 established how perpetrators of anti-social behaviour may not actually intend to cause offence. The study examined anti-social behaviours (or microaggressions) within the LGBTIQ community on a university campus. The study established that many members felt that other people would often commit anti-social behaviours; however, there was no explicit suggestion of any maliciousness behind these acts. Rather, it was just that the offenders were naive to the impact of their behaviour. The Western Australia Police force uses a three-step strategy to deal with anti-social behaviour. Prevention – This action uses community engagement, intelligence, training and development and the targeting of hotspots, attempting to prevent unacceptable behaviour from occurring. Response – A timely and effective response to anti-social behaviour is vital. Police provide ownership, leadership and coordination to apprehend offenders.
Resolution – Identifying the underlying issues that cause anti-social behaviour and resolving them with the help of the community. Offenders are successfully prosecuted. Japan The 1970s brought attention to a social and historical phenomenon called hikikomori. Often called the lost generation, those affected show pervasive and severe social withdrawal and anti-social tendencies. Individuals with hikikomori are commonly in their 20s or 30s, avoiding as much social interaction as possible. The Japanese psychologist and leading expert on the topic, Tamaki Saito, was one of the first to report that approximately 1% of the country's population was considered hikikomori at the time. Today, it still exists in Japan, taking on new forms of seclusion in which digital tools, such as video games and internet chatting, replace social interaction. The term hikikomori has since been used throughout the world, in Asia, Europe, North and South America, Africa and Australia. See also References Further reading External links Anti-Social Behaviour.org.uk MIT Technology Review - How a Troll-Spotting Algorithm Learned Its Anti-antisocial Trade Behavioral addiction Criminal subcultures
Anti-social behaviour
Biology
4,787
61,124,880
https://en.wikipedia.org/wiki/Estradiol%20diundecylate/hydroxyprogesterone%20heptanoate/testosterone%20cyclohexylpropionate
Estradiol diundecylate/hydroxyprogesterone heptanoate/testosterone cyclohexylpropionate (EDU/OHPH/TCHP), sold under the brand name Trioestrine Retard, is an injectable combination medication of estradiol diundecylate (EDU), an estrogen, hydroxyprogesterone heptanoate (OHPH), a progestogen, and testosterone cyclohexylpropionate (TCHP), an androgen/anabolic steroid. It contained 2.25 mg EDU, 100 mg OHPH, and 67.5 mg TCHP in oil solution, was provided as ampoules, and was administered by intramuscular injection. The medication was manufactured by Roussel and Théramex and was marketed by 1953. It is no longer available. See also List of combined sex-hormonal preparations § Estrogens, progestogens, and androgens References Abandoned drugs Combined estrogen–progestogen–androgen formulations
Estradiol diundecylate/hydroxyprogesterone heptanoate/testosterone cyclohexylpropionate
Chemistry
230
11,778,031
https://en.wikipedia.org/wiki/Sodium%20sulfate%20%28data%20page%29
This page provides supplementary chemical data on sodium sulfate. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you obtain the Safety Data Sheet (SDS) for this chemical from the manufacturer and follow its directions. Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Sodium sulfate (data page)
Chemistry
73
12,833,402
https://en.wikipedia.org/wiki/Formate%20dehydrogenase
Formate dehydrogenases are a set of enzymes that catalyse the oxidation of formate to carbon dioxide, donating the electrons to a second substrate, such as NAD+ in formate:NAD+ oxidoreductase () or to a cytochrome in formate:ferricytochrome-b1 oxidoreductase (). This family of enzymes has attracted attention as inspiration or guidance on methods for carbon dioxide fixation, relevant to global warming. Function NAD-dependent formate dehydrogenases are important in methylotrophic yeast and bacteria, being vital in the catabolism of C1 compounds such as methanol. The cytochrome-dependent enzymes are more important in anaerobic metabolism in prokaryotes. For example, in E. coli, the formate:ferricytochrome-b1 oxidoreductase is an intrinsic membrane protein with two subunits and is involved in anaerobic nitrate respiration. NAD-dependent reaction Formate + NAD+ ⇌ CO2 + NADH + H+ Cytochrome-dependent reaction Formate + 2 ferricytochrome b1 ⇌ CO2 + 2 ferrocytochrome b1 + 2 H+ Molybdopterin, molybdenum and selenium dependence The metal-dependent FDHs feature Mo or W at their active sites. These active sites resemble the motif seen in DMSO reductase, with two molybdopterin cofactors bound to Mo/W in a bidentate fashion. The fifth and sixth ligands are sulfide and either cysteinate or selenocysteinate. The mechanism of action appears to involve 2e redox of the metal centers, induced by hydride transfer from formate and release of carbon dioxide: In this scheme, represents the four thiolate-like ligands provided by the two dithiolene cofactors, the molybdopterins. The dithiolene and cysteinyl/selenocysteinyl ligands are redox-innocent. In terms of the molecular details, the mechanism remains uncertain, despite numerous investigations. Most mechanisms assume that formate does not coordinate to Mo/W, in contrast to typical Mo/W oxo-transferases (e.g., DMSO reductase). A popular mechanistic proposal entails transfer of H− from formate to the Mo/WVI=S group. Transmembrane domain Formate dehydrogenase consists of two transmembrane domains: three α-helices of the β-subunit and four transmembrane helices from the γ-subunit. The β-subunit of formate dehydrogenase is present in the periplasm, with a single transmembrane α-helix spanning the membrane and anchoring the β-subunit to the inner-membrane surface. The β-subunit has two subdomains, where each subdomain has two [4Fe-4S] ferredoxin clusters. The [4Fe-4S] clusters are judiciously aligned in a chain through the subunit with low separation distances, which allows rapid electron flow through [4Fe-4S]-1, [4Fe-4S]-4, [4Fe-4S]-2, and [4Fe-4S]-3 to the periplasmic heme b in the γ-subunit. The electron flow is then directed across the membrane to a cytoplasmic heme b in the γ-subunit. The γ-subunit of formate dehydrogenase is a membrane-bound cytochrome b consisting of four transmembrane helices and two heme b groups, which produce a four-helix bundle that aids in heme binding. The heme b cofactors bound to the γ-subunit allow for the hopping of electrons through the subunit. The transmembrane helices maintain both heme b groups, while only three provide the heme ligands, thereby anchoring the Fe-heme. The periplasmic heme b group accepts electrons from the [4Fe-4S]-3 cluster of the β-subunit's periplasmic domain. The cytoplasmic heme b group accepts electrons from the periplasmic heme b group, where electron flow is then directed towards the menaquinone (vitamin K) reduction site, present in the transmembrane domain of the γ-subunit.
The menaquinone reduction site in the γ-subunit accepts electrons through the binding of a histidine ligand of the cytoplasmic heme b. See also Formate dehydrogenase (cytochrome) Formate dehydrogenase (cytochrome-c-553) Formate dehydrogenase (NADP+) Microbial metabolism Additional reading References External links ENZYME link for EC 1.2.2.1 ENZYME link for EC 1.2.1.2 Cellular respiration Metabolism EC 1.2.2 EC 1.2.1
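As a minimal sketch of the electron bookkeeping behind the NAD-dependent reaction above (written here with formate as the anion, a convention chosen for illustration), the oxidation and reduction half-reactions can be stated in LaTeX form as

\mathrm{HCOO^{-} \longrightarrow CO_{2} + H^{+} + 2\,e^{-}}

\mathrm{NAD^{+} + H^{+} + 2\,e^{-} \longrightarrow NADH}

Summing gives formate + NAD+ → CO2 + NADH; each turnover transfers two electrons, matching the 2e redox cycling of the Mo/W centre described above, and whether a net H+ appears in the overall equation depends only on the protonation convention chosen for formate.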
Formate dehydrogenase
Chemistry,Biology
1,062
52,614,095
https://en.wikipedia.org/wiki/Solidesulfovibrio%20aerotolerans
Solidesulfovibrio aerotolerans is a Gram-negative, mesophilic, sulphate-reducing and oxygen-tolerant bacterium from the genus Solidesulfovibrio which has been isolated from activated sludge in Denmark. Originally described under Desulfovibrio, it was reassigned to Solidesulfovibrio by Waite et al. in 2020. References External links Type strain of Desulfovibrio aerotolerans at BacDive - the Bacterial Diversity Metadatabase Desulfovibrionales Bacteria described in 2009
Solidesulfovibrio aerotolerans
Biology
115
24,133,495
https://en.wikipedia.org/wiki/C15H12O7
{{DISPLAYTITLE:C15H12O7}} The molecular formula C15H12O7 (molar mass: 304.25 g/mol, exact mass: 304.058303 u) may refer to: Dihydromorin, a flavanonol Taxifolin (epitaxifolin), a flavanonol Hydrorobinetin, a flavonoid Molecular formulas
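As a quick check of the molar mass quoted above, a minimal sketch (using standard atomic masses rounded to three decimals) sums the atomic masses weighted by the element counts in the formula:

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(formula):
    # Sum atomic masses weighted by element counts
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

print(round(molar_mass({"C": 15, "H": 12, "O": 7}), 2))  # 304.25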
C15H12O7
Physics,Chemistry
92
3,243,253
https://en.wikipedia.org/wiki/Comb%20generator
A comb generator is a signal generator that produces multiple harmonics of its input signal. The appearance of the output on a spectrum analyzer screen, resembling the teeth of a comb, gave the device its name. Comb generators find a wide range of uses in microwave technology. For example, a comb generator can produce synchronous signals across a wide frequency bandwidth. The most common use is in broadband frequency synthesizers, where the high-frequency signals act as stable references correlated to the lower-energy references; the outputs can be used directly, or to synchronize phase-locked loop oscillators. It may also be used to generate a complete set of substitution channels for testing, each of which carries the same baseband audio and video signal. Comb generators are also used in RFI testing of consumer electronics, where their output serves as simulated RF emissions, since it is a stable broadband noise source with repeatable output. It is also used during compliance testing to various government requirements for products such as medical devices (FDA), military electronics (MIL-STD-461), commercial avionics (Federal Aviation Administration), and digital electronics (Federal Communications Commission) in the USA. An optical comb generator can be used as a generator of terahertz radiation. Internally, it is a resonant electro-optic modulator, with the capability of generating hundreds of sidebands with a total span of at least 3 terahertz (limited by the optical dispersion of the lithium niobate crystal) and a frequency spacing of 17 GHz. Other constructions can be based on an erbium-doped fiber laser or a Ti-sapphire laser, often in combination with carrier-envelope offset control. See also Comb filter Frequency comb References External links Yet another comb generator Com-Power Corporation Com-Power Corporation Laboratory equipment Electronic test equipment Signal processing Terahertz technology
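The comb structure follows from Fourier analysis: any signal that is periodic with repetition frequency f_0, such as the sharpened pulse train a comb generator derives from its input, can be written as a Fourier series

x(t) = \sum_{n=-\infty}^{\infty} c_n \, e^{i 2 \pi n f_0 t}

so its spectrum is confined to lines at the harmonic frequencies n f_0. The narrower the pulses, the more slowly the coefficients |c_n| roll off with n, and the more usable "teeth" appear on the spectrum analyzer.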
Comb generator
Physics,Technology,Engineering
381
77,703,177
https://en.wikipedia.org/wiki/Joining%20technology
Joining technology is concerned with any type of mechanical joint, which is the arrangement formed by two or more elements: typically, two physical parts and a joining element. Mechanical joining systems make it possible to form an assembly of several pieces from the individual parts and the corresponding joining elements. There are fixed assemblies and removable assemblies. Most common utensils (tools, furniture, weapons, clothing, footwear, vehicles, ...) are made up of assemblies of parts. The study of mechanical joints is essential to ensure the proper functioning of such assemblies. Types of unions Metallic materials Riveted joint Bolted joint Pin joint Folded joints: Sheet metal folding joint Welded joints Soldering Brazing Spot welding Wood and "nailable" materials Wood The joints between pieces of wood (natural or processed), between materials that behave similarly for joining purposes (for example, plastic foam boards), and between combined materials can take many forms. If the parts to be joined include (in addition to wood) metals, ceramic materials or polymers, the joints can be more elaborate. Joints Joints of two pieces of wood. A mortise determines the shape of the ends of the two pieces of wood to be joined. Some traditional joints are listed below: dovetail joint pocket-hole joinery Biscuit joint dowel (carpentry) tongue and groove Butt joint; e.g. traditional violins Beveled joint; e.g. two pieces of plywood Joining elements fastener nails copper bronze, brass, aluminium and others iron screws Self-tapping screw Bolt (fastener) staples threaded inserts glues and adhesives female and male (forced assembly) Tools with a wooden handle Often the handle is wedged and force-fitted, sometimes with a thin wedge or similar. hammers axes scythe chisel plane adzes Others barrels (metal staves and hoops) Polymers Glues and adhesives Flexible materials knots sewn boats ropes Self unions Some manufactured items are made from a raw material using self unions, that is: unions without using other joining materials. basketry reed mats fabric felt knitted fabrics Braided leather For example: in a braided leather whip, all joints are made without any kind of sewing thread or adhesive Fishing nets Metallic nets Historical examples Neolithic The replacement of cut stone tools by polished stone tools is not the most important innovation, although it is the one that gives the period its name. The diversification of tasks that needed to be done (cutting down trees, sowing seeds, harvesting cereals, milling the grain...) explains why the first farmers had to create new, specific tools for each function. Most utensils were made of flint with a wooden handle; others were made of bone and animal horn. They made pottery to store food, fabric for clothing with wool and linen, musical instruments... Bronze Age Iron Age Phoenicians and Carthaginians The mortise and tenon coupling was used to join the planks of the ancient Greek ships, with double mortises and loose tenons. Each joint was fixed with a wooden peg on each side. The construction system of the large ships of antiquity (the edge-to-edge fastening of the hull planking with pegged tenons) was of Phoenician origin.
The Romans called it "Phoenician joints" ("coagmenta punicana", in Latin plural). Ancient Egypt The images show three material assemblies representing three mechanical joints and the corresponding joining elements. The first example, a solar boat, recalls the sewn joints of the wooden pieces that make up the boat's hull. In this particular case the joints were reinforced with mortise and tenon fittings. The second example is that of the wheels of a war chariot. The hub of a wheel was formed by the union of the six "vertices" of six pieces of wood - each bent at an angle - so that each spoke was formed by the union of two arms of contiguous angular pieces. The third example is based on the funerary mask of Tutankhamun and shows a kind of soft soldering for metals. Ancient Greece Classical Greek culture offers many examples of assemblies made up of pieces mechanically joined together. The following examples may be mentioned: a hoplite spear, a hoplite shield, a mechanical system for chariot racing, the Antikythera mechanism, and war machines in general. A Greek spear was made up of three parts: the point (of bronze or steel), the shaft (of ash or a similar wood) and the butt-spike (of bronze or steel). This assembly involves two joints. Viking Era The ships of the Vikings, the drakkars, had (almost all) clinker-built hulls. The flush-planked (carvel) system was the most popular on the Mediterranean coast, where boats and vessels of all sizes were built according to this arrangement. The clinker method (in which each plank overlaps the one below it) was typical of the Atlantic coasts; an example would be the ships of the Vikings, the drakkars. The method of sewing the planks together was followed in various parts of the world, with examples in the Nordic countries and on the coasts of the Indian Ocean. The union of two planks in a drakkar was secured by means of iron rivets (or nails with a domed head on the outside and a bent point on the inside). Watertightness was obtained with moss or wool impregnated with resin or glue. Patents Since about the fifteenth century, joining technologies have been the subject of patents and similar instruments. Here follows a small, random sample, arranged chronologically. The listed patents include assembly tools for mounting or tightening fasteners. 1891 The Swedish company Bahco attributes an improved design of the adjustable wrench, in 1891 or 1892, to the Swedish inventor Johan Petter Johansson, who received a patent in 1892. 1909 Allen screws. 1944 Blind rivets 1981 Pozidriv screws. References External links Konstruktionsatlas (Maschinenbau) Mechanics Soldering
Joining technology
Physics,Engineering
1,329
77,103,777
https://en.wikipedia.org/wiki/Gregori%20Aminoff
Gregori Aminoff (8 February 1883 – 11 February 1947) was a Swedish mineralogist, artist, and a member of the Aminoff family. During his career, Aminoff introduced X-ray diffraction and electron diffraction to the Swedish scientific community and was a pioneer of crystallography in Sweden. Education and career Aminoff was born in Stockholm and studied mineralogy at Stockholm University and took exams at Uppsala University, where he received a bachelor's degree in 1905. He switched to art after graduation, studying first at the Konstnärsförbundets skola in Stockholm until its closure in 1908 and later in Italy with Henri Matisse. He exhibited with De Unga in Stockholm in 1909, 1910 and 1911, was represented at the Autumn Salon (Salon d'Automne) in Paris in 1909, and in 1912, together with Arvid Nilsson (1881–1971), had an exhibition at Salon Joel in Stockholm. He also participated in exhibitions at the Konstnärsförbundet (English: Artists' Association) and at the Royal Swedish Academy of Fine Arts' spring exhibitions. Aminoff liked to paint nudes in landscapes, but also urban motifs, portraits and landscapes. He is represented with works at, among others, the National Museum in Stockholm. During his years as an artist, Aminoff supported himself mostly by taking temporary jobs. In 1914, Aminoff stopped painting and resumed his studies of mineralogy and crystallography, which he had previously put aside. With great support from his first wife, Aminoff received a licentiate at Stockholm University in 1916. He received his PhD in 1918 from Stockholm University and became a docent in mineralogy and crystallography. In the same year, Aminoff introduced X-ray crystallography to Sweden. In 1923, he became a professor at the Mineralogical Department of the Swedish Museum of Natural History. The diffraction techniques he brought to Sweden later attracted the interest of researchers such as Gösta Phragmén and Arne Westgren, whom Aminoff mentored. Aminoff was elected in 1933 as a member of the Royal Swedish Academy of Sciences. He was awarded the Björkén Prize in 1935 along with Arne Westgren. A mineral found by Aminoff in the Långban mine in Värmland, aminoffite, has been named after him. Personal life Aminoff was the son of Tönnes Aminoff, a captain in the Svea Life Guards, and Mathilda Aminoff, née Lindström, who worked as a piano teacher. He was the nephew of Iwan T. Aminoff, the Swedish army officer and author. In 1908, Aminoff married a fellow student from Stockholm University, Ingrid Setterlund, daughter of head teacher Carl Setterlund. They had four daughters: Brita, Eva, Malin and Ulla. In 1929, Aminoff remarried, to Birgit Broomé (1892–1950), daughter of Emilia Broomé and Erik Ludvig Broomé. Aminoff published several papers in crystallography together with his second wife (later Broomé-Aminoff). Aminoff died of heart disease in 1947. The Aminoffs were buried at Norra begravningsplatsen (the Northern Cemetery) outside Stockholm. Gregori Aminoff prize Aminoff's widow, Birgit Broomé-Aminoff, established the Professor Gregori Aminoff memorial fund in her will in 1950. The fund is administered by the Royal Swedish Academy of Sciences, and the Gregori Aminoff Prize has been awarded annually for published work in crystallography since 1979.
References 1883 births 1947 deaths People from Stockholm Academic staff of Stockholm University Swedish mineralogists Crystallographers Stockholm University alumni Uppsala University alumni Aminoff family Members of the Royal Swedish Academy of Sciences
Gregori Aminoff
Chemistry,Materials_science
778
14,322,756
https://en.wikipedia.org/wiki/Testicular%20receptor%204
Testicular receptor 4 also known as NR2C2 (nuclear receptor subfamily 2, group C, member 2) is a protein that in humans is encoded by the NR2C2 gene. The testicular receptor 4 is a member of the nuclear receptor family of transcription factors. Interactions Testicular receptor 4 has been shown to interact with Androgen receptor, Estrogen receptor alpha, and Hepatocyte nuclear factor 4 alpha. See also Testicular receptor References Further reading External links Intracellular receptors Transcription factors
Testicular receptor 4
Chemistry,Biology
104
5,900,525
https://en.wikipedia.org/wiki/List%20of%20Romanesque%20buildings
Listed below are examples of surviving buildings in Romanesque style in Europe, sorted by modern-day countries. List Austria Gurk Cathedral, Gurk, Carinthia Ossiach Abbey, Ossiach, Carinthia Virgilkapelle, Vienna Millstatt Abbey, Millstatt, Carinthia , Hollabrunn, Lower Austria Belgium Tournai Cathedral in Tournai Abbey Church of Saint Peter, Hastière, Hastière Collegiate Church of Saint Bartholomew, Liège Collegiate Church of Saint Gertrude in Nivelles , Celles Collegiate Church of Saint Ursmarus, Lobbes , Soignies , Ghent , Liège Church of Saint Remaclus, Ocquier , Nandrin Church of Saint Quentin, Tournai , Hamoir Croatia St. Anastasia, Zadar St. Benedict, Split St. Peter, Rab St. Mary the Blessed, Rab Czech Republic St. Longin's Rotunda in Prague Rotunda of the Finding of the Holy Cross in Prague St. George's Basilica, Prague (Bazilika svatého Jiří, Praha) St. Bartholomew's Church in Prague-Kyje St. George's Rotunda on Říp Castle and rotunda in Týnec nad Sázavou St. Peter and Paul (Petr a Pavel) Church in Poříčí nad Sázavou St. Jacob's (Jakub) Church in Cirkvice (near Kutná Hora) St. Procopius Basilica in Třebíč St. Peter's Rotunda in Starý Plzenec St. Peter and Paul Rotunda in Budeč (near Zákolany u Kladna) Rotunda of the Virgin Mary and St. Catherine in Znojmo St. Martin's Rotunda in Vyšehrad, Prague St. Catherine's Rotunda in Česká Třebová Basilica of the Assumption of the Virgin Mary in Tismice (near Český Brod) St. Bartholomew's Church in Kondrac (near Vlašim) Basilica of the Visitation of Our Lady, Premonstratensian Monastery in Milevsko Zdík's Palace (Zdíkův palác) in Olomouc Landštejn Castle, Landštejn Rotunda of St Wenceslaus, Malá Strana France Romanesque architecture spread in France through monasteries. Burgundy was the center of monastic life in France - one of the most important Benedictine monasteries of medieval Europe was located in Cluny. Pilgrimages also contributed to the expansion of this style. Many pilgrims passed through France on their way to Santiago de Compostela. French Romanesque schools of architecture, which are specific to each region, are characterised by a variety of stone vaulting. Regions that developed distinctive styles are: Burgundy abbey church, Cluny Saint-Bénigne, Dijon Autun St Philibert at Tournus Provence Church of St. Trophime and cloister, Arles Tour Fenestrelle, Uzès Abbey of Sénanque, Gordes Le Thoronet Abbey, Brignoles Fréjus Cathedral, Fréjus Silvacane Abbey, La Roque-d'Anthéron Montmajour Abbey, Arles Aquitaine Saint-Front, Périgueux Notre-Dame-la-Grande, Poitiers Saint-Pierre, Angoulême Sainte-Croix, Bordeaux Auvergne Sainte-Foy, Conques Saint-Sernin, Toulouse Notre-Dame-du-Port, Clermont-Ferrand Saint-Austremoine, Issoire Notre-Dame, Orcival Normandy Saint-Étienne, Caen, abbey church, Jumièges, Seine-Maritime abbey church of Saint-Georges-de-Boscherville, Seine-Maritime Sainte-Trinité, Caen, Calvados Cerisy-la-Forêt, Manche Lessay, Manche abbey church, Mont Saint-Michel, Avranches Saint-Nectaire Saint-Saturnin Sainte-Madeleine, Vézelay Basilica of Paray-le-Monial Abbey Church of Saint-Savin-sur-Gartempe Chapaize Abbatiale de Cruas Abbey of Vigeois, Limousin Fontevraud Abbey Saint-Martin-du-Canigou, Roussillon Germany Bamberg Cathedral Bonn Minster Brunswick Cathedral Cologne the twelve Romanesque churches of Cologne, including Gross St Martin, St. Maria im Kapitol with fine wooden doors, the central plan St. Gereon, St. Aposteln, St. Pantaleon Freising, Cathedral Gernrode, St.
Cyriakus' Church Goslar Goslar Cathedral Imperial Palace Hildesheim Hildesheim Cathedral St. Michael's Church St. Godehard's Church Church of the Holy Cross St. Mauritius Church Mainz Cathedral Maria Laach Abbey Naumburg Cathedral Regensburg: Schottenkirche St. Jakob Trier Cathedral Speyer Cathedral Straubing: St. Peter's Church Worms Cathedral Würzburg Cathedral Hungary Calvinist church, Ócsa (early 13th century) Parish church of the Annunciation of Our Lady, Türje (early 13th century) Parish church of St. James the Apostle, Lébény (c. 1190-1212) Premontre monastery church, Zsámbék, (c. 1220–1235) Parish church of St. George, Ják (c. 1220-1256) Abbey Church of the Assumption of Our Lady, Bélapátfalva (1232–1246) Cathedral of Pécs Pécs (11th century, 1882–1891) Royal palace at Esztergom Esztergom (10th-13th century) Pannonhalma Archabbey (certain parts) Pannonhalma (11th-13th century) Ireland Cormac's Chapel, Cashel (1127–1134) Aghadoe, County Kerry (1158) Nuns' Church, Clonmacnoise (1167) Tuam Cathedral and Crosses (c. 1184) Ardmore Church and Round Tower, County Waterford Baltinglass Cistercian Abbey, County Wicklow Boyle Cistercian Abbey, County Roscommon Christ Church Cathedral, Dublin Clonfert Cathedral, County Galway Cong Abbey, County Galway Devenish Round Tower and Churches, County Fermanagh Dysert O'Dea Church and Round Tower, County Clare Freshford, County Kilkenny Jerpoint Cistercian Abbey, County Kilkenny Killeshin, County Laois Maghera, County Londonderry Monaincha Abbey and Cross, County Tipperary Rahan Church of Ireland Church, County Offaly Timahoe Round Tower, County Laois St. Saviour's, Glendalough Italy In Italy, Romanesque is most widespread in Lombardy, Emilia-Romagna, Tuscany, the continental part of Veneto and Apulia; each of these "Romanesque styles" has its own characteristics in construction methods and materials. For example, a characteristic of Romanesque is the replacement of classical elements with Christian ones, but in Tuscany and Apulia the classical decoration remains. Materials depended on local availability, because importation was too expensive. In fact, in Lombardy the most used material is brick, because of the clayey nature of the terrain; this does not hold for Como, where stone was widely available. In Tuscany, buildings in white marble (from Carrara) are frequent, with inserts of green serpentine marble. In Lombardy and Emilia, united in that age, there was a great artistic flowering in the Romanesque epoch. The most monumental churches and cathedrals are often built on the bay (campata) system, with alternating supports carrying round (a tutto sesto) arches. In the plains the construction material is predominantly brick, but buildings in stone are not lacking. Most of the Roman cities along the Via Emilia were equipped in this age with a monumental cathedral, several of which still preserve the medieval arrangement. Abruzzo San Clemente a Casauria San Liberatore a Maiella Santa Maria Arabona Sant'Antimo Abbey Aosta Valley Aosta Cathedral Collegiate church of Saint Ursus Emilia-Romagna Modena Cathedral - Abbey of San Mercuriale, Forlì and campanile - Chiesa di S. Maria Oliveto (Albinea - province of Reggio Emilia) Parma Baptistery - Parma Cathedral - Piacenza Cathedral - Friuli-Venezia Giulia Basilica di Poppo, Aquileia, province of Udine Basilica patriarcale, Aquileia - province of Udine Latium Cathedral of Acquapendente (province of Viterbo) Church of S.
Maria della Libera (Aquino - province of Frosinone) Lombardy Sant'Ambrogio, Milan San Lorenzo, Milan Duomo vecchio, Brescia San Michele Maggiore, Pavia Cathedral of Monza S. Cosma e Damiano (Rezzago - province of Como) Madonna del Ghisallo (Magreglio - province of Como) S. Alessandro (Lasnigo - province of Como) S. Pietro (Albese - province of Como) Chiesa di S. Tommaso (Acquanegra sul Chiese - province of Mantova) Sant'Abbondio (Como) San Tomè (Almenno San Bartolomeo - province of Bergamo) Marche Ancona Cathedral (Ancona) Santa Maria della Piazza, Ancona (Ancona) Pieve of S. Urbano (Apiro - province of Macerata) San Vittore alle Chiuse Piedmont Vezzolano Abbey (Albugnano - province of Asti) Crypt of Sant'Anastasio (Asti) Pieve of San Secondo (Cortazzone - province of Asti) San Secondo (Magnano) Church of Saints Nazarius and Celsus (Montechiaro - province of Asti) Pieve of San Lorenzo (Montiglio - province of Asti) San Michele, Oleggio Abbey of Santi Nazario e Celso (San Nazzaro Sesia - province of Novara) Abbey of Santa Fede (Cavagnolo - province of Turin) Cattedrale dell'Addolorata (Acqui Terme - province of Alessandria) Church of S. Pietro (Albugnano - province of Asti) Baptistery of Agrate (Agrate Conturbia - province of Novara) Romanesque architecture in Canavese area Ivrea Puglia Basilica of San Nicola, Bari Bari Cathedral Ruvo Cathedral Otranto Cathedral Barletta Cathedral Andria Cathedral Church of Saint Conrad, Molfetta Altamura Cathedral Basilica of Santa Maria Maggiore di Siponto Conversano Cathedral Basilica del Santo Sepolcro, Barletta Cathedral of Bitonto Trani Cathedral Sardinia S. Giusta (S. Giusta) S. Maria (Bonarcado) S. Paolo (Milis) S. Palmerio (Ghilarza) Il Carmine (Mogoro) S. Gregorio (Sardara) S. Leonardo (Masullas) S. Lussorio (Fordongianus) S. Gregorio (Solarussa) S. Nicola di Trullas (Semestene) San Nicola di Silanis (Sedini) S. Pietro (Zuri - Sardinia) S. Maria Maddalena (Silì) S. Maria della Mercede (Norbello) S. Pietro di Sorres (Borutta) Santissima Trinità di Saccargia Sant'Antioco di Bisarcio (Ozieri) Santa Maria del Regno (Ardara) San Simplicio, Olbia Nostra Signora di Tergu S. Pantaleo (Dolianova) S. Alenixedda (Cagliari) S. Lorenzo (Silanus) S. Leonardo (Siete Fuentes) S. Maria (Uta) S. Maria (Tratalias) S. Pietro Extramuros (Bosa) S. Gavino (Porto Torres) Sicily Cathedral, Cefalù Cathedral, Monreale Cathedral, Palermo Palatine Chapel in Norman Palace, Palermo Church of the Holy Spirit, Palermo Church of the Holy Spirit (Sicily), Palermo Church of San Cataldo, Palermo Church of Santi Pietro e Paolo d'Agrò Casalvecchio Siculo Church of Saints Peter and Paul, Itala Church of San Nicolò la Latina, Sciacca Church of Santa Maria della Raccomandata, Sciacca Church of Madonna delle Giummare, Mazara del Vallo Church of San Nicolò Regale, Mazara del Vallo Church of the Santissima Annunziata dei Catalani, Messina Abbey of the Santo Spirito, Caltanissetta Church of San Nicolò ai Cordari, Syracuse Tuscany San Miniato al Monte, Florence Pisa Cathedral San Paolo a Ripa d'Arno, Pisa Santa Maria della Pieve, Arezzo Sant'Ambrogio, Florence Pieve of Romena, Pratovecchio, Arezzo Pieve of Làmulas (Arcidosso - province of Grosseto) Chiesa abbaziale (Abbadia Isola - province of Siena) Chiesa abbaziale (Abbadia San Salvatore - province of Siena) Abbey of San Galgano (province of Siena) Oratorio of Alpe di Poti, province of Arezzo Chiesa di S. Jacopo Maggiore (Altopascio - province of Lucca) Chiesa di S.
Stefano (Anghiari - province of Arezzo) Parish church of Saints Ippolito and Cassiano Umbria Basilica of Saint Francis of Assisi Cathedral of Spoleto San Francesco, Terni Chiesa di San Bernardino da Siena (La Pigge - Trevi - province of Perugia) Chiesa di Sant'Arcangelo (La Pigge - Trevi - province of Perugia) Eremo di San Marco e la grotta del Beato Ventura (La Pigge - Trevi - province of Perugia) Chiesa Tonda (La Pigge - Trevi - province of Perugia) S. Maria di Pietrarossa (Trevi - province of Perugia) S. Stefano di Piaggia (Trevi - province of Perugia) S. Nicolò (Trevi - province of Perugia) S. Fabiano (Trevi - province of Perugia) S. Tommaso (Trevi - province of Perugia) S. Sabino (Trevi - province of Perugia) S. Pietro a Pettine (Trevi - province of Perugia) S. Costanzo (Trevi - province of Perugia) S. Andrea (Trevi - province of Perugia) S. Egidio di Borgo (Trevi - province of Perugia) S. Donato (Trevi - province of Perugia) S. Leonardo del Colle (Trevi - province of Perugia) S. Martino in Manciano (Trevi - province of Perugia) S. Apollinare (Trevi - province of Perugia) S. Stefano in Manciano (Trevi - province of Perugia) S. Pietro in Bovara (Trevi - province of Perugia) S. Maria di Pelan (Trevi - province of Perugia) S. Paolo di Coste (Trevi - province of Perugia) S. Croce in Val dell'Aquila (Trevi - province of Perugia) S. Emiliano (Trevi - province of Perugia) Veneto Basilica di San Zeno, Verona Santa Sofia Church (Padua) San Giacomo dell'Orio (Venice) San Lorenzo, Verona Santa Toscana, Verona Santa Maria Maggiore (Gazzo, province of Verona) S. Pietro (Villanova - province of Verona) S. Maria (Bonavigo - province of Verona) S. Michele (Belfiore - province of Verona) S. Andrea (Sommacampagna - province of Verona) Netherlands Basilica of Saint Servatius, Maastricht (English: Saint Servaes) Onze-Lieve-Vrouwe, Maastricht (Church of Our Lady) Munsterkerk, Roermond , Utrecht (Saint John's Church) Pieterskerk, Utrecht (Saint Peter's Church) St. Plechelmus, Oldenzaal (Saint Plechelmus Church) Chapel, Lemiers (Chapel) Reformed church, Oirschot Abbey church Rolduc, Kerkrade Susteren Abbey, Susteren St. Wiro, Plechelmus and Otgerus, Sint Odiliënberg St. Remigius, Klimmen Poland Greater Poland St. Trinity-Church in Strzelno St. Prokop-Rotunda in Strzelno St. Nicolaus-Church in Giecz Romanesque doors in Gniezno Cathedral Church of St. John of Jerusalem Outside the Walls in Poznań Nativity of the Blessed Virgin Mary Church in Kotłów Benedictine Abbey in Mogilno Kuyavia St. Peter and Paul-Collegiate in Kruszwica St. Mary-Church in Inowrocław St. Margaret Church in Kościelec Kujawski Lesser Poland St. Andrew's Church in Kraków St. Adalbert Church in Kraków St. Leonard Crypt in Wawel, Kraków St. Nicholas Church in Wysocice St. John the Baptist church in Prandocin Lublin Voivodeship Keep (donjon) in Lublin Castle Łódzkie St. Giles-Church in Inowłódz Church and campanile in Krzyworzeka Cistercians Abbey in Sulejów St. Ursula-Church in Strońsko Collegiate church in Tum St. Nicholas Church in Żarnów Masovia Masovian Blessed Virgin Mary Cathedral in Płock Abbey church in Czerwińsk nad Wisłą Silesia Saint Godehard-Rotunda in Strzelin St. Giles-Church in Wrocław Romanesque House in Wrocław St. Nicolaus-Rotunda in Cieszyn Castle in Będzin Blessed Virgin Mary-Church in Lwówek Śląski Blessed Virgin Mary church in Złotoryja South part and ruins of the chapel in Piast Castle in Legnica Blessed Virgin Mary church in Środa Śląska St. John the Baptist church in Siewierz Świętokrzyskie St. Martin-Collegiate in Opatów St.
Jacob-Church in Sandomierz St. Florian-Church in Koprzywnica Cistercians Abbey in Wąchock St. Giles-Church in Tarczek St. John the Baptist-Church in Grzegorzowice St. John the Baptist church in Skalbmierz West Pomerania Knights Templar chapel in Rurka Knights Templar chapel in Chwarszczany Cistercians Abbey in Kołbacz Ziemia Lubuska Blessed Virgin Mary Church in Lubsko Church in Biedrzychowice St. Andrew's Church in Szprotawa Portugal Ganfei Convent in Valença, destroyed in 1000 by the Muslims, rebuilt in 1018, façade and main chapel changed in later periods, the rest of the temple is Romanesque Pombeiro Monastery in Felgueiras (began in 1059, only the apse and the portal are from this period) Church and tower of the Travanca Monastery in Amarante, Preromanesque, Romanesque reconstruction in 1096, most of the building has remained intact since the 13th century Lisbon Cathedral, began in 1147. Romanesque portals and nave Braga Cathedral, began in the first half of the 12th century. Romanesque portals and nave Oporto Cathedral, began in the first half of the 12th century. Romanesque nave Castle of Almourol, built after 1160 by the Knights Templar Old Cathedral of Coimbra, began 1162 Round church in the Convent of the Order of Christ in Tomar, 12th century, built by the Knights Templar Church of Cedofeita in Oporto, second half of the 12th century Monastery of Rates in Póvoa de Varzim, most of the building is from the 12th century, except the main chapel Domus Municipalis, Bragança Romania St. Michael's Cathedral, Alba Iulia, began in 1009, reconstructed 1246-1291. St. Michael's fortified church, Cisnădioara, late 12th century. Herina Evangelical Church, Herina, raised by the Order of Saint Benedict 1250-1260. Cluj-Mănăștur Calvaria Church, Cluj-Napoca, 9th-10th centuries, reconstructed 1896. Cincu Evangelical fortified church, Cincu, 13th century. Reformed church of Acâș, Acâș, early 13th century. Dormition of the Theotokos Church, Strei 1270 or middle 14th century. Evangelical fortified church in Vurpăr, Vurpăr, early 13th century. Reformed Church in Ocna Sibiului, Ocna Sibiului, 1240-1280. Rotunda church in Geoagiu, Geoagiu, 11th century Serbia Voljavča monastery (1050) The Towers of Saint George monastery, Novi Pazar (1166). See Đurđevi stupovi, built by Stefan Nemanja in the 12th century. Đurđevi stupovi, Montenegro, founded by Stefan Prvoslav, the nephew of Stefan Nemanja, in 1213. The Studenica monastery (1190) Patriarchal Monastery of Peć (13th century) Pridvorica Monastery (12th century) Žiča crowning church, Kraljevo (1217) Arača (around 1230) Mileševa monastery (1234) Morača (monastery) (1252) The Sopoćani monastery (1265) Gradac Monastery (1270) Tronoša Monastery (1276) Church of St. Achillius, Arilje (1296) Gračanica Monastery (1321) Visoki Dečani monastery, Kosovo (1327) Vojlovica monastery (1375) Ravanica Monastery (1375) Ljubostinja (1388) Kalenić monastery (1407) Church of Saint Mary, Morović Slovakia In the period of early Christianity, every ten villages were ordered to build a church, and several rotundas were built in this period. Boldog, Romanesque church with Gothic modifications.
Spišská Kapitula, an ecclesiastical town with a Romanesque cathedral Nitra-Drazovce, a tiny Romanesque church on the hill above the village Levice-Kalinciakovo, a well preserved tiny Romanesque church built of hewn stone The Church of Saint George, Nitrianska Blatnica, from the Great Moravian period or shortly after Haluzice, Nové Mesto nad Váhom, Romanesque church Sedmerovec-Pominovce Diakovce, Romanesque cathedral Bíňa, Premontre Abbey monastery in the Romanesque style Veľký Klíž, Partizánske, Church Romanesque Church in Veľká Tŕňa Romanesque church in Kšinná Spain Before Cluny's influence, Romanesque first developed in Spain in the 10th and 11th centuries in Catalonia, Huesca and the Aragonese Pyrenees, simultaneously with the north of Italy, into what has been called "First Romanesque" or "Lombard Romanesque". It is a primitive style whose characteristics are thick walls, lack of sculpture and the presence of rhythmic ornamental arches. Romanesque architecture truly arrives with the influence of Cluny through the Way of Saint James pilgrimage route that ends at the Cathedral of Santiago de Compostela. The model of the Spanish Romanesque in the 12th century was the Cathedral of Jaca, with its characteristic apse structure and plan, and its "chess" decoration in strips called taqueado jaqués. As the Christian kingdoms advanced towards the south, this model spread throughout the reconquered areas with some variations. Spanish Romanesque was also influenced by the Spanish pre-Romanesque styles, mainly the Asturian and the Mozarabic. But there is also a strong influence from Moorish architecture, so close in space, especially the vaults of Córdoba's Mosque and the polylobed arches. In the 13th century, some Romanesque churches were built with early Gothic architectural elements. Aragón, Catalonia, Castile and Navarra are the areas where numerous examples of Spanish Romanesque can be found. Aragon Province of Huesca Monastery of San Pedro el Viejo Jaca Cathedral Loarre Castle San Juan de la Peña Churches of San Caprasio and Santa María in Santa Cruz de la Serós San Adrián de Sasabe Santa Maria de Iguacel Church of Santiago in Agüero Serrablo churches Province of Zaragoza Real Monasterio de Nuestra Señora de Rueda, Aragon region Cantabria Santillana del Mar Collegiate Church and cloister Collegiate Church of San Pedro de Cervatos Catalonia Province of Barcelona Sant Benet de Bages Churches of Saint Mary (old Cathedral), Saint Peter and Saint Michael in Terrassa Province of Lleida Sant Climent de Taüll, Vall de Boí Province of Girona Girona Cathedral Sant Pere de Galligants Sant Pere de Rodes Monastery of Santa Maria de Ripoll Sant Pere, Camprodon Abbey of Saint-Michel-de-Cuxa Sant Quirze de Colera Province of Tarragona Tarragona Cathedral Castile and León Province of Avila Church of San Vicente Ermita de San Pelayo y San Isidoro, formerly in Ávila, moved to Madrid Province of Burgos Monastery of Santo Domingo de Silos San Juan de Ortega Church Province of León Basilica of San Isidoro, with "Royal Pantheon" Arbás Church Province of Palencia Carrión de los Condes Church of Santiago Carrión de los Condes Church of Santa María de las Victorias Aguilar de Campoo Church of Santa Cecilia Monastery of Santa María la Real in Aguilar de Campoo Arenillas de San Pelayo Church of San Pelayo Barrio de Santa María Church of Santa Eulalia Cillamayor Church of Santa María la Real St.
Martin, Frómista Olmos de Ojeda Church of Santa Eufemia San Salvador de Cantamuda Collegiate Church Province of Salamanca Salamanca Cathedral Province of Segovia Duratón La Asunción de María, church Fuentidueña Church of San Miguel Grado del Pico Church of San Pedro Perorrubio Church of San Pedro Requijada Church of Virgen de Las Vegas San Pedro de Gaillos Church Sepúlveda Church of San Salvador Province of Soria Soria, Santo Domingo Soria San Juan de Duero, Cloister Province of Zamora Zamora Cathedral Other Romanesque buildings in Zamora Benavente: Church of Santa María del Azogue Santa María la Mayor, Collegiate Church, Toro, province of Zamora Galicia Province of A Coruña Santiago de Compostela Cathedral Santiago de Compostela Gelmirez Palace Santiago de Compostela Santa María del Sar (Colegiata) A Coruña Church of Santiago A Coruña Collegiate Church of Santa María del Campo Province of Lugo Lugo Cathedral Noia Church of San Martiño Church of San Juan of Portomarín Vilar de Donas, Monastery Sarria, Church Barbadelo, Church Province of Ourense Cathedral, Ourense, Romanesque and Gothic Madrid Church of San Juan Bautista (Talamanca de Jarama) Navarra San Pedro de la Rúa. Church and cloister. Estella Church of San Miguel, Estella Palace of the Kings of Navarra, Estella Church of Santo Sepulcro, Torres del Río Monastery of Leyre (San Salvador de Leyre) Abbey Church of Santa María la Real, Sangüesa Norway Hamar Cathedral, Hamar Nidaros Cathedral, Trondheim Sweden Akebäck Church, Akebäck Anga Church, Anga Bjäresjö Church, Dalby Church, Dalby Gamla Uppsala kyrka, Gamla Uppsala Garde Church, Garde Havdhem Church, Havdhem Lund Cathedral, Lund Vä Church, Vä Switzerland Abbey of Romainmôtier Abbey Church of Saint-Sulpice, Vaud Grossmünster Church, Zürich Münster Schaffhausen Payerne Rüeggisberg Priory Turkey Galata Tower, Galata, Istanbul Ukraine Saint Pantaleon church, Shevchenkove Saints Borys and Hlib Cathedral in Chernihiv Pyatnytska Church in Chernihiv Church of the Dormition, Krylos United Kingdom England In England, Romanesque architecture is often termed 'Norman architecture'. Castles, cathedrals and churches of the Norman period have frequently been extended during later periods. It is normal to find Norman work in combination with Gothic architecture. Durham Cathedral is regarded as the finest Norman building in England. Peterborough Cathedral is an intact Norman cathedral except for the early Gothic west front and late Gothic eastern ambulatory. Ely Cathedral: the nave is Norman, and the west front is Norman and Transitional Norwich Cathedral, excluding the Gothic spire and vault Canterbury Cathedral: the crypt, chapels and two small towers remain from the previous building destroyed by fire. Hereford Cathedral Southwell Minster St Albans Cathedral Gloucester Cathedral, the nave arcades Tewkesbury Abbey church Rochester Cathedral St Bartholomew-the-Great, Smithfield, London Patrixbourne Church, Kent Barfrestone Church, Kent Tixover church Bradford Church of St. Chad, West Yorkshire Kilpeck Church Leominster Priory Oakham Castle hall, a unique survival in England of the hall of a Norman fortified manor house Tower of London: the keep known as the White Tower Norwich Castle Ludlow Castle Rochester Castle, Kent The Holy Sepulchre, Cambridge Waltham Abbey Church, Essex St John's Priory Crypt, London Scotland Dunnottar Castle, older portions are Romanesque Muchalls Castle, ground-level groin vault course only Myres Castle, only the undercroft survives as Romanesque St.
Margaret's Chapel in Edinburgh Castle See also Romanesque architecture List of regional characteristics of Romanesque churches Romanesque secular and domestic architecture Pre-Romanesque art and architecture Ottonian architecture Romanesque art Romanesque sculpture Renaissance of the 12th century Romanesque Revival architecture Medieval architecture Romano-Gothic architecture Romanesque architecture
List of Romanesque buildings
Engineering
6,306
76,906,020
https://en.wikipedia.org/wiki/Discharge%20regime
Discharge regime, flow regime, or hydrological regime (commonly termed river regime, but that term is also used for other measurements) is the long-term pattern of annual changes to a stream's discharge at a particular point. Hence, it shows how the discharge of a stream at that point is expected to change over the year; it is thus the hydrological equivalent of climate. The main factor affecting the regime is climate, along with relief, bedrock, soil and vegetation, as well as human activity. Similar general trends can be grouped together into certain named classes, either by what causes them and the part of the year in which they occur (most classifications) or by the climate in which they most commonly appear (the Beckinsale classification). There are many different classifications; however, most of them are localized to a specific area and cannot be used to classify all the rivers of the world. When interpreting such records of discharge, it is important to factor in the timescale over which the average monthly values were calculated. It is particularly difficult to establish a typical annual discharge regime for rivers with high interannual variability in monthly discharge and/or significant changes in the catchment's characteristics (e.g. tectonic influences or the introduction of water management practices). Overview Maurice Pardé was the first to classify river regimes more thoroughly. His classification was based on the primary causes of the pattern and on how many of them there are. According to this, he termed three basic types: Simple regimes, where there is only one dominant factor. Mixed or double regimes, where there are two dominant factors. Complex regimes, where there are multiple dominant factors. Pardé split the simple regimes further into temperature-dependent (glacial, mountain snow melt, plain snow melt; the latter two often called "nival") and rainfall-dependent or pluvial (equatorial, intertropical, temperate oceanic, Mediterranean) categories. Beckinsale later more clearly defined the distinct simple regimes based on the climate present in the catchment area, thus splitting the world into "hydrological regions". His main inspiration was the Köppen climate classification, and he also devised strings of letters to define them. However, the system was criticised, as it based the regimes on climate instead of purely on discharge pattern and also lacked some patterns. Another attempt to classify the world's regimes was made in 1988 by Haines et al.; it was based purely on the discharge pattern and classified all patterns into one of 15 categories. However, the determination is sometimes contradictory and quite complex, and the distinction does not differentiate between simple, mixed or complex regimes, as it determines the regime solely from the main peak, which contradicts the system commonly used in the Alpine region. Hence, rivers with nivo-pluvial regimes are commonly split into two different categories, while most pluvio-nival regimes are all grouped into a single category along with complex regimes – the uniform regime – despite showing quite pronounced and regular yearly patterns. Moreover, it does not differentiate between temperature-dependent and rainfall-dependent regimes. Nonetheless, it added one new regime that was not present in Beckinsale's classification, the moderate mid-autumn regime with a peak in November (Northern Hemisphere) or May (Southern Hemisphere). This system, too, is very rarely used.
In later years, most of the research was done only in the region around the Alps, so that area is much more thoroughly researched than others, and most names for subclasses of regimes refer to those found there. These were mostly obtained by further refining Pardé's distinction. The most common names given, although they might be defined differently in different publications, are: Glacial, for regimes where most of the water is due to the melting of snow and ice and the peak occurs in late summer. Nival, with a peak in late spring or early summer and a still high importance of snow-melt. Pluvial, which is (almost) purely based on seasonal rainfall and not on snow. The peak is usually in winter, although it can occur at any point in the year. If it occurs at the time of the monsoon, it is sometimes called tropical pluvial. Nivo-pluvial, with a nival peak in late spring and a pluvial peak in the fall. The main minimum is in winter. Pluvio-nival, which is similar to nivo-pluvial, but the nival peak is earlier (March/April on the Northern Hemisphere) and the main minimum is in summer, not in winter. Nivo-glacial, for regimes sharing characteristics of glacial and nival regimes, with a peak in mid-summer. Pardé's differentiation of simple regimes from mixed regimes is sometimes considered to be based on the number of peaks rather than the number of factors, as that is more objective. Most nival and even glacial regimes have some influence of rainfall, and regimes considered pluvial have some influence of snowfall in regions with continental climate; see the coefficient of nivosity. The distinction between the two approaches can be seen with the nivo-glacial regime, which is sometimes considered a mixed regime but is often treated as a simple regime in more detailed studies. However, many groupings of multiple pluvial or multiple nival peaks are still considered a simple regime in some sources. Measurement of river regimes River regimes, similarly to the climate, are compiled by averaging the discharge data over several years; ideally that should be 30 years or more, as with the climate. However, the data is much scarcer, and sometimes records as short as eight years are used. If the flow is regular and shows a very similar year-to-year pattern, that could be enough, but for rivers with irregular patterns or for those that are dry most of the time, that period has to be much longer for accurate results. This is especially a problem with wadis, as they often have both traits. The discharge pattern is specific not only to a river but also to a point along the river, as it can change with new tributaries and an increase in the catchment area. The data is then averaged for each month separately. Sometimes, the average maximum and minimum for each month are also added. But unlike climate, rivers range drastically in discharge, from small creeks with mean discharges of less than 0.1 cubic meters per second to the Amazon River, which has an average monthly discharge of more than 200,000 cubic meters per second at its peak in May. For regimes, the exact discharge of a river in one month is not as important as its relation to the other monthly discharges measured at the same point along that river. Although discharge is still often used for showing seasonal variation, two other forms are more commonly used: the percentage of yearly flow and the Pardé coefficient.
Percentage of yearly flow represents how much of the total yearly discharge the month contributes and is calculated by the following formula: $p_m = \frac{\bar{Q}_m}{12\,\bar{Q}} \cdot 100\%$, where $\bar{Q}_m$ is the mean discharge of a particular month and $\bar{Q}$ is the mean yearly discharge. The discharge of an average month is $100\%/12 \approx 8.3\%$, and the values for all months should add to 100% (or rather roughly so, due to rounding). Even more common is the Pardé coefficient, discharge coefficient or simply the coefficient, which is more intuitive, as an average month has a value of 1. Anything above that means a bigger discharge than average, and anything lower means a lower discharge than average. It is calculated by the following equation: $PK_m = \frac{\bar{Q}_m}{\bar{Q}}$, where $\bar{Q}_m$ is the mean discharge of a particular month and $\bar{Q}$ is the mean yearly discharge. Pardé coefficients for all months should add to 12 and are without a unit. The data is often presented in a special diagram, called a hydrograph, or, more specifically, an annual hydrograph, as it shows monthly discharge variation in a year but no rainfall pattern. The units used in a hydrograph can be either discharge, monthly percentage or Pardé coefficients. The shape of the graph is the same in any case; only the scale needs to be adjusted. From the hydrograph, maxima and minima are easy to spot and the regime can be determined more easily. Hence, hydrographs are as vital for river regimes as climographs are for climate. Yearly coefficient Similarly to Pardé's coefficient, there are also other coefficients that can be used to analyze the regime of a river. One possibility is to look at how many times larger the discharge during the peak is than the discharge during the minimum, rather than comparing to the mean as with Pardé's coefficient. It is sometimes called the yearly coefficient and is defined as: $k = \frac{\bar{Q}_{\max}}{\bar{Q}_{\min}}$, where $\bar{Q}_{\max}$ is the mean discharge of the month with the highest discharge and $\bar{Q}_{\min}$ is the mean discharge of the month with the lowest discharge. If $\bar{Q}_{\min}$ is 0, then the coefficient is undefined. Annual variability Annual variability shows how much the peaks on average deviate from the perfectly uniform regime. It is calculated as the standard deviation of the mean monthly discharges from the mean yearly discharge, divided by the mean yearly discharge and multiplied by 100%, i.e.: $V = \frac{100\%}{\bar{Q}} \sqrt{\frac{1}{12} \sum_{m=1}^{12} \left(\bar{Q}_m - \bar{Q}\right)^2}$. The most uniform regimes have a value below 10%, while it can reach more than 150% for rivers with the most drastic peaks. Grimm coefficients Grimm coefficients, used in Austria, are not defined for a single month but for 'doppelmonats', i.e., for two consecutive months. The mean flow of both months – January and February, February and March, March and April, and so on – is added, still conserving 12 different values throughout the year. This is done since, for nival regimes, this correlates better with the different types of peak (nival, nivo-glacial, glacial etc.). They are defined as follows: $GK_m = \frac{\bar{Q}_m + \bar{Q}_{m+1}}{\bar{Q}}$ (initial definition) and $GK_m = \frac{\bar{Q}_m + \bar{Q}_{m+1}}{2\,\bar{Q}}$ (adapted definition, so that values are closer to Pardé's; the version used on Wikipedia), where $\bar{Q}_{13} = \bar{Q}_1$, i.e. December is paired with January.
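These statistics are straightforward to compute. Below is a minimal sketch in Python (the function and variable names are illustrative choices, not part of any standard hydrology library), assuming the input is a list of 12 mean monthly discharges in cubic meters per second; the Grimm values use the adapted definition given above:

# Minimal sketch of the discharge statistics defined above.
import statistics

def regime_metrics(monthly_q):
    mean_q = sum(monthly_q) / 12.0                              # mean yearly discharge
    percent = [100 * q / (12 * mean_q) for q in monthly_q]      # percentage of yearly flow
    parde = [q / mean_q for q in monthly_q]                     # Parde coefficients, sum = 12
    q_min = min(monthly_q)
    yearly = max(monthly_q) / q_min if q_min > 0 else None      # yearly coefficient (undefined if min is 0)
    variability = 100 * statistics.pstdev(monthly_q) / mean_q   # annual variability, in %
    # Grimm coefficients: "doppelmonat" sums, December wrapping to January,
    # scaled by 2 * mean_q so that an average pair gives 1 (adapted definition).
    grimm = [(monthly_q[m] + monthly_q[(m + 1) % 12]) / (2 * mean_q)
             for m in range(12)]
    return percent, parde, yearly, variability, grimm

# Example: a crude nival-like series with a May snow-melt peak (values invented).
q = [10, 10, 20, 60, 120, 110, 60, 40, 30, 25, 20, 15]
pct, parde, k, var, grimm = regime_metrics(q)
print(round(max(parde), 2), round(k, 1), round(var, 1))  # -> 2.77 12.0 83.0

For this invented series, the Pardé coefficient peaks at about 2.77 in May, the yearly coefficient is 12, and the annual variability is about 83%, consistent with a pronounced single-peak regime.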
Coefficient of nivosity Pardé and Beckinsale determined whether the peak is pluvio-nival, nivo-pluvial, nival or glacial based on what percentage of the discharge during the warm season is contributed by melt-water, and not by the time of the peak, as is common today. However, it has been calculated only for a few rivers. The values are the following: 0–6%: pluvial 6–14%: pluvio-nival 15–25%: nivo-pluvial 26–38%: transition to nival 39–50%: pure nival to nivo-glacial more than 50%: glacial Factors affecting river regimes There are multiple factors that determine when a river will have a greater discharge and when a smaller one. The most obvious factor is precipitation, since most rivers get their water supply in that way. However, temperature also plays a significant role, as do the characteristics of the catchment area, such as altitude, vegetation, bedrock, soil and lake storage. Human activity is also important, as humans may either fully control the water supply by building dams and barriers, or control it partially by diverting water for irrigation, industrial and personal use. The factor that most differentiates the classification of river regimes from that of climate is that rivers can change their regime along their path due to a change of conditions and new tributaries. Climate The primary factor affecting a river regime is the climate of the catchment area, both through the amount of rainfall and through the temperature fluctuations throughout the year. This led Beckinsale to classify regimes based primarily on the climate. Although there is correlation, climate is still not fully reflected in a river regime. Moreover, a catchment area can span more than one climate and lead to more complex interactions between the climate and the regime. A discharge pattern can closely resemble the rainfall pattern, since rainfall in a river's catchment area contributes to its water flow, the rise of the groundwater and the filling of lakes. There is some delay between the peak rainfall and peak discharge, which also depends on the type of soil and bedrock, since the water from rain must reach the gauging station for the discharge to be recorded. The time is naturally longer for bigger catchment areas. If the water from precipitation is frozen, such as snow or hail, it has to melt first, leading to longer delays and shallower peaks. The delay becomes heavily influenced by the temperature, since temperatures below zero cause the snow to stay frozen until it becomes warmer in the spring, when temperatures rise and melt the snow, leading to a peak, which might again be a bit delayed. The time of the peak is determined by when the midday temperature climbs sufficiently above the melting point, which is usually taken to be when the average temperature rises above 0 °C. In the mildest continental climates, bordering the oceanic climate, the peak is usually in March on the Northern Hemisphere or September on the Southern Hemisphere, but it can be as late as August/February on the highest mountains and ice caps, where the flow also varies heavily throughout the day. Melting of glaciers alone can also supply large amounts of water even in areas where there is little to no precipitation, as in ice cap climate and cold dry and semi-dry climates. On the other hand, high temperatures and sunny weather lead to a significant increase in evapotranspiration, either directly from the river or from moist soil and plants, so that less precipitation reaches the river and plants consume more water, respectively. For terrain in darker colors, the rate of evaporation is higher than for terrain in lighter colors, due to lower albedo. Relief Relief often determines how sharp and how wide the nival peaks are, which led Pardé to classify mountain nival and plain nival regimes separately.
If the relief is rather flat, the snow will melt everywhere in a short period of time due to similar conditions, leading to a sharp peak about three months wide. However, if the terrain is hilly or mountainous, snow located in lowlands will melt first, with the temperature gradually decreasing with altitude (by about 6.5 °C per 1000 m). Hence the peak is wider, and especially the decrease after the peak can extend all the way to late summer, when the temperatures are highest. Due to this phenomenon, the precipitation in lowland areas might be rainfall but snow in higher areas, leading to a peak quickly after the rainfall and another when the temperatures start to melt the snow. Another important aspect is altitude. At exceptionally high altitudes, the atmosphere is thinner, so the solar insolation is much greater, which is why Beckinsale differentiates mountain nival and glacial regimes from similar regimes found at higher latitudes. Additionally, steeper slopes lead to faster surface runoff and thus more prominent peaks, while flat terrain allows lakes to spread, which regulate the discharge of the river downstream. Larger catchment areas also lead to shallower peaks. Vegetation Vegetation in general decreases surface runoff and consequently the discharge of a river, and leads to greater infiltration. Forests dominated by trees that shed their leaves during winter have an annual pattern in the extent of water interception, which shapes the regime in its own way. The impact of vegetation is noticeable in all areas but the driest and coldest, where vegetation is scarce. Vegetation growing in river beds can drastically hinder the flow of water, especially in the summer, leading to smaller discharges. Soil and bedrock The most important aspect of the ground in this regard is the permeability and water-holding capacity of the rocks and soils in the discharge basin. In general, the more permeable the ground is, the less pronounced the maxima and minima are, since the rocks accumulate water during the wet season and release it during the dry season; the lag time is also longer, since there is less surface runoff. If the wet season is very pronounced, the rocks become saturated and fail to infiltrate excess water, so all rainfall is quickly released into the stream. On the other hand, if the rocks are too permeable, as in karst terrain, rivers might have a notable discharge only when the rocks are saturated or the groundwater level rises, and would otherwise be dry, with all the water accumulating in subterranean rivers or disappearing into ponors. Examples of rocks with high water-holding capacity include limestone, sandstone and basalt, while materials used in urban areas (such as asphalt and concrete) have very low permeability, leading to flash floods. Human activity Human activity can also greatly change the discharge of a river. On one hand, water can be extracted either directly from a river or indirectly from groundwater for the purposes of drinking and irrigation, among others, lowering the discharge. For the latter, consumption usually spikes during the dry season or during crop growth (i.e., summer and spring). On the other hand, wastewater is released into streams, increasing the discharge; however, that flow is more or less constant all year round, so it does not impact the regime as much. Another important factor is the construction of dams, behind which a lot of water accumulates in a lake, making the minima and maxima less pronounced.
In addition, the discharge of water is often in large part regulated according to other human needs, such as electricity production, meaning that the discharge of a river downstream of a dam can be completely different than upstream. The Aswan Dam is an example: the yearly coefficient is lower at the dam than upstream, showing the effect of the dam. Simple regimes Simple regimes are hence only those that have exactly one peak; the exception is cases where both peaks are nival or both are pluvial, which are often still grouped with simple regimes. They are grouped into five categories: pluvial, tropical pluvial, nival, nivo-glacial and glacial. Pluvial regime Pluvial regimes occur mainly in oceanic and Mediterranean climates, such as the UK, New Zealand, the southeastern USA, South Africa and the Mediterranean regions. Generally, peaks occur in the colder season, from November to May on the Northern Hemisphere (although April and May occur in a small area near Texas) and from June to September on the Southern Hemisphere. Pardé had two different types for this category – the temperate pluvial and the Mediterranean regimes. The peak is due to rainfall in the colder period, and the minimum is in summer due to higher evapotranspiration and usually less rainfall. The temperate pluvial regime (Beckinsale symbol CFa/b) usually has a milder minimum, and the discharge remains quite high even during the summer. Meanwhile, the Mediterranean regime (Beckinsale symbol CS) has a more pronounced minimum due to a lack of rainfall in the region, and rivers have a noticeably smaller discharge during summer, or even dry up completely. Beckinsale distinguished another pluvial regime, with a peak in April or May, which he denoted CFaT, as it occurs almost solely around Texas, Louisiana and Arkansas. Tropical pluvial regime The name for the regime is misleading; the regime commonly occurs anywhere the main rainfall is during summer. This includes the intertropical region, but also parts influenced by the monsoon, extending north even to Russia and south to central Argentina. It is characterized by a strong peak during the warm period, with a maximum from May to December on the Northern Hemisphere and from January to June on the Southern Hemisphere. The regime therefore allows for a lot of variation, both in terms of when the peak occurs and how low the minimum is. Pardé additionally differentiated this category into two subtypes, and Beckinsale split it into four. The most common such regime is Beckinsale's regime AM (for monsoon, as in the Köppen classification), which is characterized by a period of low discharge for up to four months. It occurs in western Africa, the Amazon basin, and southeastern Asia. In more arid areas, the period of low water increases to six, seven and up to nine months, which Beckinsale classified as AW. The peak is hence narrower and greater. In dry climates, ephemeral streams with irregular year-to-year patterns exist. Such a stream is dry most of the time and only has discharge during flash floods. Beckinsale classifies it as BW, but only mentions it briefly. Due to the irregularity, the peak might be spread out or show multiple peaks, and could resemble other regimes. The previous three regimes are all called intertropical by Pardé, but the next is also differentiated by him, as it has two maxima instead of one. He termed it the equatorial regime, while Beckinsale used the symbol AF.
It occurs in Africa around Cameroon and Gabon, and in Asia in Indonesia and Malaysia, where one peak is in October/November/December and another in April/May/June, roughly symmetrical between the hemispheres. Interestingly, the same pattern is not observed in South America. Nival regime The nival regime is characterized by a maximum fed by snow-melt as temperatures rise above the melting point. Hence, the peaks occur in spring or summer. These regimes occur in regions with continental and polar climate, which on the Southern Hemisphere is mostly limited to the Andes, Antarctica and minor outlying islands. Pardé split the regimes into two groups, the mountain nival and the plain nival regimes, which Beckinsale also expanded. Plain regimes have maxima that are more pronounced and narrow, usually up to three months, and the minimum is milder and mostly not much lower than in other months apart from the peak. The minimum, if the regime is not transitioning to a pluvio-nival regime, usually follows quickly after the maximum, while for mountain regimes it is often right before. Such regimes are exceptionally rare on the Southern Hemisphere. Nival regimes are commonly intermittent in subarctic climate, where the river freezes during winter. Plain nival regime Beckinsale differentiates six plain nival to nivo-pluvial regimes, mainly based on when the peak occurs. If the peak occurs in March or April, Beckinsale called this a DFa/b regime, which corresponds to Mader's transitional pluvial regime. There, it is defined more precisely: the peak is in March or April, with the second-highest discharge in the other of those months, not in February or May. This translates to a peak in September or October on the Southern Hemisphere. This regime occurs in most European plains and parts of the St. Lawrence River basin. If the nivo-pluvial peak occurs later, in April or May (October or November on the Southern Hemisphere), followed by the discharge of the other month, the regime is transitional nival or DFb/c. This regime is rarer and occurs mostly in parts of Russia and Canada, but also on some plains at higher altitudes. In parts of Russia and Canada and on elevated plains, the peak can be even later, in May or June (November or December on the Southern Hemisphere). Beckinsale denoted this regime with DFc. Beckinsale also added another category, Dwd, for rivers that completely diminish during the winter due to cold conditions, with a sharp maximum in the summer. Such rivers occur in Siberia and northern Canada. The peak can be from May to July on the Northern Hemisphere or from November to January on the Southern Hemisphere. Apart from that, he also added another category for regimes with pluvio-nival or nivo-pluvial maxima where the pluvial maximum corresponds to a Texan or early tropical pluvial regime, not the usual temperate pluvial. This regime occurs in parts of China and around Kansas. If this peak happens later, Beckinsale classified it as DWb/c. The peak can occur as late as September on the Northern Hemisphere or March on the Southern Hemisphere. Mountain nival regime Pardé and Beckinsale both assigned only one category to the mountain nival regime (symbol HN), but Mader distinguishes several of them. If the peak occurs in April or May on the Northern Hemisphere (October or November on the Southern Hemisphere), with the discharge of the other of those two months following, it is called transitional nival, common for lower hilly areas.
If the peak is in May or June on the Northern Hemisphere, or November or December on the Southern Hemisphere, followed by the other of those two, the regime is called mild nival. The regime which Mader simply calls 'nival' is when the highest discharge is in June/December, followed by July/January, and then May/November. Nivo-glacial regime The nivo-glacial regime occurs in areas where seasonal snow meets the permanent ice of glaciers, on top of mountains or at higher latitudes. Therefore, both melting snow and ice from glaciers contribute to produce a maximum in early or mid summer. In principle, plain and mountain variants could still be distinguished, but that distinction is rarely made despite being quite obvious. The regime is also characterized by great diurnal changes and a sharp maximum. Pardé and Beckinsale did not distinguish this regime from glacial and nival regimes. Mader defines it as having a peak in June or July, followed by the other of the two, and then August's discharge; this translates to a peak in December or January, followed by the other of the two and then February on the Southern Hemisphere. Such regimes occur in the Alps, the Himalayas, the Coast Mountains and the southern Andes. Plain nivo-glacial regimes occur on Greenland, in northern Canada and on Svalbard. Glacial regime The glacial regime is the most extreme variety of the temperature-dependent regimes and occurs where more than 20% of the catchment area is covered by glaciers. This usually happens at high altitudes, but it can also occur in polar climates, which was not explicitly mentioned by Pardé, who grouped both categories together. Rivers with this regime also experience great diurnal variations. The discharge is heavily dominated by the melting of glaciers, leading to a strong maximum in late summer and a very intense minimum during the rest of the year, unless the river has major lake storage, such as the Rhône below Lake Geneva or the Baker River. Mader defines it as having the highest discharge in July or August, followed by the other month. In really extreme cases (mostly in Antarctica), there could also be a plain glacial regime. Mixed regimes Mixed or double regimes are regimes where one peak is due to a temperature-dependent factor (snow or ice melt) and one is due to rainfall. There are many possible combinations, but only some have been studied in more detail. They can also be split into two categories – plain (versions of Beckinsale's plain nival regimes with another peak) and mountain. In general, they can be thought of as combinations of two simple regimes, but the cold-season pluvial peak is usually in autumn, not in late winter as is common for the temperate pluvial regime. Mixed regimes are usually split into two other categories, the nivo-pluvial and pluvio-nival regimes: the first has a nival peak in late spring (April to June on the Northern Hemisphere, October to December on the Southern Hemisphere) and the biggest minimum in the winter, while the latter usually has a nival peak in early spring (March or April on the Northern Hemisphere, September or October on the Southern Hemisphere) and the biggest minimum in the summer. Plain mixed regime Beckinsale did not really classify the regimes by the number of factors contributing to the discharge, so such regimes are grouped with simple regimes in his classification, as they appear in close proximity to those regimes. For all six of his examples, mixed regimes can be found, although for DFa and DWd that is quite rare.
Mixed regimes

Mixed or double regimes are regimes in which one peak is due to a temperature-dependent factor (snow or ice melt) and one is due to rainfall. Many combinations are possible, but only some have been studied in detail. These regimes can likewise be split into two categories – plain regimes (versions of Beckinsale's plain nival regimes with an additional peak) and mountain regimes. In general they can be thought of as combinations of two simple regimes, although the cold-season pluvial peak usually falls in autumn, not in late winter as is common for the temperate pluvial regime. Mixed regimes are usually divided further into nivo-pluvial and pluvio-nival regimes: the former have a nival peak in late spring (April to June in the Northern Hemisphere, October to December in the Southern Hemisphere) and their deepest minimum in winter, while the latter usually have a nival peak in early spring (March or April in the Northern Hemisphere, September or October in the Southern Hemisphere) and their deepest minimum in summer.

Plain mixed regime

Beckinsale did not classify regimes by the number of factors contributing to the discharge, so such regimes are grouped with simple regimes in his classification, as they appear in close proximity to those regimes. Mixed counterparts can be found for all six of his examples, although for DFa and DWd they are quite rare. In the majority of cases they are nivo-pluvial with the main minimum in winter, except for DFa/b.

Mountain mixed regime

Mountain mixed regimes are thoroughly researched and quite common in the Alps, and rivers with such regimes rise in most mountain chains. Beckinsale does not distinguish them from plain regimes; newer sources, however, classify them rather differently. Mader classifies mixed regimes whose nival peaks correspond to the mild nival or to his plain nival class as 'winter nival' or 'autumn nival', depending on when the pluvial peak occurs. The winter peak is usually small; in monsoonal areas, the pluvial peak can fall in the summer as well. Mader reserved the label 'nivo-pluvial' for regimes whose nival peaks correspond to the transitional nival class. Hrvatin, in his classification, additionally differentiated between a 'high mountain Alpine nivo-pluvial regime' and a 'medium mountain Alpine nivo-pluvial regime', the first showing a significant difference between the minima and the second not, although some regimes in his classification also have mild nival peaks. In Japan, the pluvial peak falls in the summer. In Mader's classification, any regime with a transitional pluvial peak is pluvio-nival. Hrvatin defines this further, with major overlap with Mader's classification: if the minima are rather mild, the regime is classified as an 'Alpine pluvio-nival regime'; if the minima are more pronounced but the peaks are mild, it is a 'Dinaric-Alpine pluvio-nival regime'; and if the peaks are also pronounced, it is a 'Dinaric pluvio-nival regime'. His 'Pannonian pluvio-nival regime' corresponds to a plain mixed regime. Japan has mixed regimes with a tropical pluvial peak.

Complex regimes

Complex regimes form the catch-all category for all rivers whose discharge is influenced by many different factors acting at different times of the year. For rivers that flow through many different climates and receive tributaries from different climates, the regime can become unrepresentative of any single part of the catchment area. Many of the world's longest rivers, such as the Nile, the Congo, the St. Lawrence and the Rhône, have such regimes. A special form of such regimes is the uniform regime, in which all peaks and minima are extremely mild.

References

Bibliography

Rivers Hydrology
Discharge regime
Chemistry,Engineering,Environmental_science
6,133
26,649,321
https://en.wikipedia.org/wiki/Chroma%20dots
Chroma dots are visual artifacts caused by displaying an unfiltered PAL analogue colour video signal on a black-and-white television or monitor. They are commonly found on black-and-white recordings of television programmes originally made in colour. Chroma dots were once regarded as undesirable picture noise, but recent advances in computer technology have allowed them to be used to reconstruct the original colour signal from black-and-white recordings, providing a means to re-colour material where the original colour copy is lost.

Background

Analogue colour video signals comprise two components: luminance and chrominance. The luminance component describes the brightness of each part of the picture, while the chrominance component describes the colour tone. When displayed on a black-and-white monitor, the luminance signal produces a normal black-and-white image, while the chrominance signal manifests as a fine pattern of dots of varying size and intensity overlaid on the black-and-white picture. A related phenomenon is dot crawl, which can produce visual artifacts in colour pictures.

History

In the early days of colour television, it was common practice for broadcasters to produce black-and-white film copies of colour programmes for sale and transmission in territories lacking colour broadcast facilities or employing different colour television systems. During the telerecording process, it was normal practice to insert a filter circuit between the colour video output and the black-and-white monitor input in order to remove the colour signal and prevent the formation of chroma dots. In many cases, however, the filter was not used and the chroma dot patterning is permanently burned into the resulting film recording.

Use in restoration

In 1994, James Insell, a BBC engineer, noticed that when a black-and-white telerecording was played back through colour video equipment, spurious colour was generated by the presence of chroma dots in the picture. He theorised that it might be possible to use the chroma dots to reconstruct the original colour signal, and in 2007 set up a working group to carry out further research. In 2008, it was announced that members of the working group had successfully restored the Dad's Army episode "Room at the Bottom" using information from the chroma dot patterning. The process has since been used to restore other programmes, including the pilot episode of Are You Being Served? and various episodes of Doctor Who (Planet of the Daleks, episode 3; Invasion of the Dinosaurs, episode 1; and episodes 2–6 of The Mind of Evil). The working group hope that the technique may enable the restoration of many other programmes for which no colour copy is known to exist. The results, however, depend on whether chroma dot patterning is present and on the quality of the black-and-white recording.
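The reconstruction works because the chroma dots are a sampled copy of the modulated colour subcarrier. The toy one-dimensional sketch below illustrates the underlying idea of synchronous demodulation: multiply the composite signal by the subcarrier and low-pass filter to recover a colour-difference component. It is only an illustration of the principle, not the working group's actual process, which must also recover the subcarrier frequency and phase from the film geometry and handle PAL's line-alternating V component.

```python
# Toy 1-D synchronous demodulation of a colour subcarrier (illustrative only).
import numpy as np

fs = 16e6        # sample rate (Hz), assumed for the demo
fsc = 4.43e6     # PAL-like subcarrier frequency (Hz)
t = np.arange(4096) / fs

u_true = 0.3                                            # constant colour value for the demo
luma = 0.5 + 0.1 * np.sin(2 * np.pi * 15e3 * t)         # slowly varying luminance
composite = luma + u_true * np.cos(2 * np.pi * fsc * t) # "chroma dots" riding on luma

# Synchronous demodulation: mix down with the (assumed known) carrier...
mixed = composite * 2 * np.cos(2 * np.pi * fsc * t)
# ...then low-pass filter with a simple moving average to keep the baseband term.
kernel = np.ones(129) / 129
u_rec = np.convolve(mixed, kernel, mode="same")

print(round(float(u_rec[2048]), 3))  # ~0.3, the encoded colour value
```

References

Display technology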
Chroma dots
Engineering
607
57,900,656
https://en.wikipedia.org/wiki/Measurable%20group
In mathematics, a measurable group is a special type of group in the intersection between group theory and measure theory. Measurable groups are used to study measures in an abstract setting and are often closely related to topological groups.

Definition

Let $(G, \circ)$ be a group with group law $\circ \colon G \times G \to G$. Let further $\mathcal{G}$ be a σ-algebra of subsets of the set $G$. The group, or more formally the triple $(G, \circ, \mathcal{G})$, is called a measurable group if

the inversion $g \mapsto g^{-1}$ is measurable from $\mathcal{G}$ to $\mathcal{G}$,
the group law $(g_1, g_2) \mapsto g_1 \circ g_2$ is measurable from $\mathcal{G} \otimes \mathcal{G}$ to $\mathcal{G}$.

Here, $\mathcal{G} \otimes \mathcal{G}$ denotes the formation of the product σ-algebra of the σ-algebras $\mathcal{G}$ and $\mathcal{G}$.

Topological groups as measurable groups

Every second-countable topological group can be taken as a measurable group. This is done by equipping the group with the Borel σ-algebra $\mathcal{B}(G)$, which is the σ-algebra generated by the topology. Since by the definition of a topological group the group law and the formation of the inverse element are continuous, both operations are in this case also measurable, from $\mathcal{B}(G \times G)$ to $\mathcal{B}(G)$ and from $\mathcal{B}(G)$ to $\mathcal{B}(G)$, respectively. Second countability ensures that $\mathcal{B}(G) \otimes \mathcal{B}(G) = \mathcal{B}(G \times G)$, and therefore the group is also a measurable group.

Related concepts

Measurable groups can be seen as measurable acting groups that act on themselves.
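As a standard concrete example (not spelled out in the text above), the real numbers under addition form a measurable group, since both structure maps are continuous and hence Borel measurable:

```latex
% (\mathbb{R}, +, \mathcal{B}(\mathbb{R})) is a measurable group.
% Inversion and the group law are continuous:
\[
  \iota(x) = -x, \qquad \mu(x, y) = x + y,
\]
% so \iota is \mathcal{B}(\mathbb{R})-\mathcal{B}(\mathbb{R})-measurable and
% \mu is \mathcal{B}(\mathbb{R}^2)-\mathcal{B}(\mathbb{R})-measurable.
% Second countability of \mathbb{R} gives
% \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}) = \mathcal{B}(\mathbb{R}^2),
% which is exactly the product \sigma-algebra condition in the definition.
```

References

Measure theory Group theory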
Measurable group
Mathematics
246
2,904,877
https://en.wikipedia.org/wiki/Shearing%20interferometer
The shearing interferometer is an extremely simple means to observe interference and to use this phenomenon to test the collimation of light beams, especially from laser sources, whose coherence length is usually significantly longer than the thickness of the shear plate, so that the basic condition for interference is fulfilled.

Function

The testing device consists of a high-quality optical glass, such as N-BK7, with extremely flat optical surfaces that are usually at a slight angle to each other. When a plane wave is incident at an angle of 45°, which gives maximum sensitivity, it is reflected twice. The two reflections are laterally separated due to the finite thickness of the plate and by the wedge. This separation is referred to as the shear and has given the instrument its name. The shear can also be produced by gratings. Parallel-sided shear plates are sometimes used, but the interpretation of the interference fringes of wedged plates is relatively easy and straightforward. Wedged shear plates produce a graded path difference between the front- and back-surface reflections; as a consequence, a parallel beam of light produces a linear fringe pattern within the overlap.

With a plane wavefront incident, the overlap of the two reflected beams shows interference fringes with a spacing of

$$\Delta y = \frac{\lambda}{2 n \alpha},$$

where $\Delta y$ is the spacing perpendicular to the shear, $\lambda$ is the wavelength of the beam, $n$ the refractive index, and $\alpha$ the wedge angle. This equation makes the simplification that the distance from the wedged shear plate to the observation plane is small relative to the wavefront radius of curvature at the observation plane. The fringes are equally spaced and will be exactly perpendicular to the wedge orientation and parallel to a wire cursor usually present in the shearing interferometer, aligned along the beam axis.

The orientation of the fringes varies when the beam is not perfectly collimated. In the case of a noncollimated beam incident on a wedged shear plate, the path difference between the two reflected wavefronts is increased or decreased relative to the case of perfect collimation, depending on the sign of the curvature. The pattern is then rotated, and the beam's wavefront radius of curvature can be calculated as

$$R = \frac{s \, \Delta y}{\lambda \tan \vartheta},$$

with $s$ the shear distance, $\Delta y$ the fringe distance, $\lambda$ the wavelength and $\vartheta$ the angular deviation of the fringe alignment from that of perfect collimation. If the spacing normal to the fringes is used instead, this equation becomes

$$R = \frac{s \, d_n}{\lambda \sin \vartheta},$$

where $d_n$ is the fringe spacing normal to the fringes.
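As a quick numerical illustration of the collimation formula above (the values below are hypothetical, chosen only to show the magnitudes involved):

```python
# Wavefront radius of curvature from shear-plate fringe rotation (sketch).
import math

lam = 633e-9               # HeNe laser wavelength (m)
s = 5e-3                   # shear distance (m), assumed
dy = 1e-3                  # fringe spacing perpendicular to the shear (m), assumed
theta = math.radians(1.0)  # fringe rotation away from the collimated orientation

R = s * dy / (lam * math.tan(theta))
print(f"R = {R:.0f} m")    # ~453 m: a tiny rotation implies a nearly collimated beam
```

See also

List of types of interferometers
Air-wedge shearing interferometer
Spectral phase interferometry for direct electric-field reconstruction, a type of spectral shearing interferometry, which is similar in concept to the one in the present article, except that the shearing is performed in the frequency domain instead of laterally.

References

External links

University of Erlangen — Optical Design and Microptics

Interferometers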
Shearing interferometer
Technology,Engineering
590
44,035,978
https://en.wikipedia.org/wiki/Elonis%20v.%20United%20States
Elonis v. United States, 575 U.S. 723 (2015), was a United States Supreme Court case concerning whether conviction of threatening another person over interstate lines (under 18 U.S.C. § 875(c)) requires proof of subjective intent to threaten or whether it is enough to show that a "reasonable person" would regard the statement as threatening. In controversy were the purported threats of violent rap lyrics written by Anthony Douglas Elonis and posted to Facebook under a pseudonym. The ACLU filed an amicus brief in support of the petitioner. It was the first time the Court had heard a case considering true threats and the limits of speech on social media.

Background

Elonis was in the process of a divorce and made a number of public Facebook posts. He "posted the script of a sketch" by The Whitest Kids U' Know, which originally referenced saying "I want to kill the President of the United States", and replaced the president with his wife. Elonis ended the post with this statement: "Art is about pushing limits. I'm willing to go to jail for my constitutional rights. Are you?"

A week later, Elonis posted about local law enforcement and a kindergarten class, which caught the attention of the Federal Bureau of Investigation. He then wrote a post on Facebook about one of the agents who had visited him.

These actions led to Elonis's indictment by a grand jury on five counts of transmitting threats in interstate communication, directed at park employees and visitors, local law enforcement, his estranged wife, an FBI agent, and a kindergarten class. At the district court, Elonis moved to dismiss the indictment for failing to allege that he had intended to threaten anyone. His motion was denied. He requested a jury instruction that "the government must prove that he intended to communicate a true threat", which was also denied. He was convicted on the last four of the five counts and was sentenced to 44 months in prison and three years of supervised release. He appealed unsuccessfully to the Third Circuit, renewing his challenge to the jury instructions. He then appealed to the U.S. Supreme Court, based on the lack of any attempt to show intent to threaten and on First Amendment grounds.

Decision

On June 1, 2015, the U.S. Supreme Court reversed Elonis's conviction in an 8–1 decision. Chief Justice John Roberts wrote for a seven-justice majority, Samuel Alito authored an opinion concurring in part and dissenting in part, and Clarence Thomas authored a dissenting opinion. The finding of the circuit court was reversed and the matter remanded.

Majority opinion

The majority opinion, written by Roberts, did not rule on First Amendment matters or on the question of whether recklessness was sufficient mens rea to show intent. It ruled that mens rea was required to prove the commission of a crime under §875(c). Importantly, the mens rea issue had been preserved for review, since Elonis had raised that objection at every stage of the previous proceedings. The government contended that the presence of the words "intent to extort" in §875(b) and §875(d) implied that their absence from §875(c) was deliberate. The court disagreed, holding that the language was absent from §875(c) because the section was intended to have a broader scope than threats relating to extortion. The opinion drew on many Supreme Court cases holding that, in criminal law, mens rea is required even where it is not mentioned explicitly in the statute. Consequently, the Supreme Court ruled in favor of Elonis.
Alito's concurrence

Justice Samuel Alito, concurring in part and dissenting in part, agreed that mens rea was required, and specifically that showing negligence was not sufficient, but argued that the court should have ruled on the question of recklessness. He opined that recklessness was sufficient to show a crime under the provision, on the basis that requiring more would amount to amending the statute rather than interpreting it. Elonis had explicitly argued that recklessness was not sufficient.

Alito also addressed the First Amendment question, which the majority opinion elided. He held that "lyrics in songs that are performed for an audience or sold in recorded form are unlikely to be interpreted as a real threat to a real person.... Statements on social media that are pointedly directed at their victims, by contrast, are much more likely to be taken seriously."

Thomas's dissent

Justice Clarence Thomas, dissenting, wrote against discarding the "general intent" standard without replacing it with a clearer one. Thomas argued that "there is no historical practice requiring more than general intent when a statute regulates speech." Thomas cited Rosen v. United States, arguing that general intent was sufficient in this case. However, the majority opinion offers a refutation, in that Rosen turned on ignorance of the law (knowledge as to whether material was legally obscene), not on whether it was intended to be obscene. Thomas also supported the government's argument based on the presence of the "intent to extort" language in the adjacent §875(b) and §875(d), and did not address the majority's reasoning on that language. Thomas used precedent, notably from the states and from 18th-century England, based on different but similar and arguably influential legislation, to support his "general intent" claim. Thomas also drew a parallel with general intent in tort. While he sought to address the First Amendment issues, he never strayed far from "general intent".

Aftermath

On remand, the Third Circuit reaffirmed the conviction, "conclud[ing] beyond a reasonable doubt that Elonis would have been convicted if the jury had been properly instructed", and holding that the instructional error was therefore harmless.

See also

Rule of lenity
Counterman v. Colorado

Footnotes

External links

United States Free Speech Clause case law United States Supreme Court cases United States Supreme Court cases of the Roberts Court 2015 in United States case law
Elonis v. United States
Technology
1,226
747,164
https://en.wikipedia.org/wiki/Belvedere%20%28structure%29
A belvedere or belvidere (from Italian for "beautiful view") is an architectural structure sited to take advantage of a fine or scenic view. The term has been used for rooms in the upper part of a building, for structures on the roof, and for separate pavilions in a garden or park. The actual structure can be of any form or style, including a turret, a cupola or an open gallery. The term may also be used for a paved terrace or simply a place with a good viewpoint but no actual building. It has also been used as a name for a whole building, as in the Belvedere, Vienna, a huge palace, or Belvedere Castle, a folly in Central Park in New York.

Examples

On the hillside above the Vatican Palace (–1490), Antonio del Pollaiuolo built a small pavilion ( in Italian) named the palazzetto or the Belvedere for Pope Innocent VIII. Some years later, Donato Bramante linked the Vatican with the Belvedere, a commission from Pope Julius II, by creating the Cortile del Belvedere ("Courtyard of the Belvedere"), in which stood the Apollo Belvedere, among the most famous of antique sculptures. This began the 16th-century fashion for the belvedere.

Gallery

See also

Belvedere (M. C. Escher), a picture by M. C. Escher which shows an impossible belvedere
Gazebo
Gloriette
Widow's walk

General references

Citations

External links

Architectural elements
Belvedere (structure)
Technology,Engineering
320
4,543
https://en.wikipedia.org/wiki/Blue
Blue is one of the three primary colours in the RYB colour model (traditional colour theory), as well as in the RGB (additive) colour model. It lies between violet and cyan on the spectrum of visible light. The term blue generally describes colours perceived by humans observing light with a dominant wavelength that is between approximately 450 and 495 nanometres. Most blues contain a slight mixture of other colours; azure contains some green, while ultramarine contains some violet. The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering. An optical effect called the Tyndall effect explains blue eyes. Distant objects appear more blue because of another optical effect called aerial perspective.

Blue has been an important colour in art and decoration since ancient times. The semi-precious stone lapis lazuli was used in ancient Egypt for jewellery and ornament and later, in the Renaissance, to make the pigment ultramarine, the most expensive of all pigments. In the eighth century, Chinese artists used cobalt blue to colour fine blue and white porcelain. In the Middle Ages, European artists used it in the windows of cathedrals. Europeans wore clothing coloured with the vegetable dye woad until it was replaced by the finer indigo from America. In the 19th century, synthetic blue dyes and pigments gradually replaced organic dyes and mineral pigments. Dark blue became a common colour for military uniforms and later, in the late 20th century, for business suits. Because blue has commonly been associated with harmony, it was chosen as the colour of the flags of the United Nations and the European Union.

In the United States and Europe, blue is the colour that both men and women are most likely to choose as their favourite, with at least one recent survey showing the same across several other countries, including China, Malaysia, and Indonesia. Past surveys in the US and Europe have found that blue is the colour most commonly associated with harmony, confidence, masculinity, knowledge, intelligence, calmness, distance, infinity, the imagination, cold, and sadness.

Etymology and linguistics

The modern English word blue comes from Middle English or , from the Old French , a word of Germanic origin, related to the Old High German word (meaning 'shimmering, lustrous'). In heraldry, the word azure is used for blue.

In Russian, Spanish, Mongolian, Irish, and some other languages, there is no single word for blue, but rather different words for light blue (, ; ) and dark blue (, ; ) (see Colour term).

Several languages, including Japanese and Lakota Sioux, use the same word to describe blue and green. For example, in Vietnamese, the colour of both tree leaves and the sky is . In Japanese, the word for blue (, ) is often used for colours that English speakers would refer to as green, such as the colour of a traffic signal meaning "go". In Lakota, the word is used for both blue and green, the two colours not being distinguished in older Lakota (for more on this subject, see Blue–green distinction in language).

Linguistic research indicates that languages do not begin by having a word for the colour blue. Colour names often developed individually in natural languages, typically beginning with black and white (or dark and light), then adding red, and only much later – usually as the last main category of colour accepted in a language – adding the colour blue, probably when blue pigments could be manufactured reliably in the culture using that language.
Optics and colour theory

The term blue generally describes colours perceived by humans observing light with a dominant wavelength between approximately 450 and 495 nanometres. Blues with a higher frequency, and thus a shorter wavelength, gradually look more violet, while those with a lower frequency and a longer wavelength gradually appear more green. Purer blues are in the middle of this range, e.g. around 470 nanometres.

Isaac Newton included blue as one of the seven colours in his first description of the visible spectrum. He chose seven colours because that was the number of notes in the musical scale, which he believed was related to the optical spectrum. He included indigo, the hue between blue and violet, as one of the separate colours, though today it is usually considered a hue of blue.

In painting and traditional colour theory, blue is one of the three primary colours of pigments (red, yellow, blue), which can be mixed to form a wide gamut of colours. Red and blue mixed together form violet; blue and yellow together form green. Mixing all three primary colours together produces a dark brown. From the Renaissance onward, painters used this system to create their colours (see RYB colour model). The RYB model was used for colour printing by Jacob Christoph Le Blon as early as 1725. Later, printers discovered that more accurate colours could be created by using combinations of cyan, magenta, yellow, and black ink, put onto separate inked plates and then overlaid one at a time onto paper. This method could produce almost all the colours in the spectrum with reasonable accuracy.

On the HSV colour wheel, the complement of blue is yellow, that is, a colour corresponding to an equal mixture of red and green light. On a colour wheel based on traditional colour theory (RYB), where blue was considered a primary colour, its complementary colour is considered to be orange (based on the Munsell colour wheel).

LED

In 1993, high-brightness blue LEDs were demonstrated by Shuji Nakamura of Nichia Corporation. In parallel, Isamu Akasaki and Hiroshi Amano of Nagoya University were working on a new development which revolutionized LED lighting. Nakamura was awarded the 2006 Millennium Technology Prize for his invention. Nakamura, Hiroshi Amano and Isamu Akasaki were awarded the Nobel Prize in Physics in 2014 for the invention of an efficient blue LED.

Lasers

Lasers emitting in the blue region of the spectrum became widely available to the public in 2010 with the release of inexpensive high-powered 445–447 nm laser diode technology. Previously the blue wavelengths were accessible only through DPSS lasers, which are comparatively expensive and inefficient, but are still widely used by scientists for applications including optogenetics, Raman spectroscopy, and particle image velocimetry, due to their superior beam quality. Blue gas lasers are also still commonly used for holography, DNA sequencing, and optical pumping, among other scientific and medical applications.

Shades and variations

Blue is the colour of light between violet and cyan on the visible spectrum. Hues of blue include indigo and ultramarine, closer to violet; pure blue, without any mixture of other colours; azure, which is a lighter shade of blue, similar to the colour of the sky; and cyan, which is midway in the spectrum between blue and green, along with the other blue-greens such as turquoise, teal, and aquamarine. Blue also varies in shade or tint; darker shades of blue contain black or grey, while lighter tints contain white.
Darker shades of blue include ultramarine, cobalt blue, navy blue, and Prussian blue, while lighter tints include sky blue, azure, and Egyptian blue (for a more complete list, see the List of colours).

As a structural colour

In nature, many blue phenomena arise from structural colouration, the result of interference between reflections from two or more surfaces of thin films, combined with refraction as light enters and exits such films. The geometry then determines that at certain angles the light reflected from both surfaces interferes constructively, while at other angles the light interferes destructively. Diverse colours therefore appear despite the absence of colourants.

Colourants

Artificial blues

Egyptian blue, the first artificial pigment, was produced in the third millennium BC in Ancient Egypt. It is produced by heating pulverized sand, copper, and natron. It was used in tomb paintings and funereal objects to protect the dead in their afterlife. Prior to the 1700s, blue colourants for artwork were mainly based on lapis lazuli and the related mineral ultramarine. A breakthrough occurred in 1709, when the German druggist and pigment maker Johann Jacob Diesbach discovered Prussian blue. The new blue arose from experiments involving heating dried blood with iron sulphides and was initially called Berliner Blau. By 1710 it was being used by the French painter Antoine Watteau, and later by his successor Nicolas Lancret. It became immensely popular for the manufacture of wallpaper, and in the 19th century it was widely used by French impressionist painters. Beginning in the 1820s, Prussian blue was imported into Japan through the port of Nagasaki. It was called bero-ai, or Berlin blue, and it became popular because it did not fade like the traditional Japanese blue pigment, ai-gami, made from the dayflower. Prussian blue was used by both Hokusai, in his wave paintings, and Hiroshige.

In 1799, the French chemist Louis Jacques Thénard made a synthetic cobalt blue pigment which became immensely popular with painters. In 1824, the Societé pour l'Encouragement d'Industrie in France offered a prize for the invention of an artificial ultramarine which could rival the natural colour made from lapis lazuli. The prize was won in 1826 by a chemist named Jean Baptiste Guimet, but he refused to reveal the formula of his colour. In 1828, another scientist, Christian Gmelin, then a professor of chemistry in Tübingen, found the process and published his formula. This was the beginning of a new industry to manufacture artificial ultramarine, which eventually almost completely replaced the natural product.

In 1878, German chemists synthesized indigo. This product rapidly replaced natural indigo, wiping out the vast farms growing indigo. It is now the blue of blue jeans. As the pace of organic chemistry accelerated, a succession of synthetic blue dyes were discovered, including Indanthrone blue, which had even greater resistance to fading during washing or in the sun, and copper phthalocyanine.

Dyes for textiles and food

Woad and true indigo were once used, but since the early 1900s all indigo has been synthetic. Produced on an industrial scale, indigo is the blue of blue jeans. Blue dyes are organic compounds, both synthetic and natural. For food, the triarylmethane dye Brilliant blue FCF is used for candies. The search continues for stable, natural blue dyes suitable for the food industry. Various raspberry-flavoured foods are dyed blue, to distinguish them from strawberry- and watermelon-flavoured foods.
The company ICEE used Blue No. 1 for their blue raspberry ICEEs.

Pigments for painting and glass

Blue pigments were once produced from minerals, especially lapis lazuli and its close relative ultramarine. These minerals were crushed, ground into powder, and then mixed with a quick-drying binding agent, such as egg yolk (tempera painting), or with a slow-drying oil, such as linseed oil, for oil painting. Two inorganic but synthetic blue pigments are cerulean blue (primarily cobalt(II) stannate, Co2SnO4) and Prussian blue (milori blue, primarily Fe4[Fe(CN)6]3). The chromophore in blue glass and glazes is cobalt(II). Diverse cobalt(II) salts, such as cobalt carbonate or cobalt(II) aluminate, are mixed with the silica prior to firing. The cobalt occupies sites otherwise filled with silicon.

Inks

Methyl blue is the dominant blue pigment in inks used in pens. Blueprinting involves the production of Prussian blue in situ.

Inorganic compounds

Certain metal ions characteristically form blue solutions or blue salts. Of some practical importance, cobalt is used to make deep blue glazes and glasses; it substitutes for silicon or aluminium ions in these materials. Cobalt is the blue chromophore in stained glass windows, such as those in Gothic cathedrals, and in Chinese porcelain beginning in the Tang dynasty. Copper(II) (Cu2+) also produces many blue compounds, including the commercial algicide copper(II) sulfate (CuSO4·5H2O). Similarly, vanadyl salts and solutions are often blue, e.g. vanadyl sulfate.

In nature

Sky and sea

When sunlight passes through the atmosphere, the blue wavelengths are scattered more widely by the oxygen and nitrogen molecules, and more blue light therefore reaches our eyes. This effect is called Rayleigh scattering, after Lord Rayleigh; it was confirmed by Albert Einstein in 1911. The sea is seen as blue for largely the same reason: the water absorbs the longer wavelengths of red and reflects and scatters the blue, which comes to the eye of the viewer. The deeper the observer goes, the darker the blue becomes. In the open sea, only about 1% of light penetrates to a depth of 200 metres (see underwater and euphotic depth). The colour of the sea is also affected by the colour of the sky, reflected by particles in the water, and by algae and plant life in the water, which can make it look green, or by sediment, which can make it look brown.

The farther away an object is, the more blue it often appears to the eye. For example, mountains in the distance often appear blue. This is the effect of atmospheric perspective: the farther an object is from the viewer, the less contrast there is between the object and its background colour, which is usually blue. In a painting where different parts of the composition are blue, green and red, the blue will appear to be more distant, and the red closer to the viewer. The cooler a colour is, the more distant it seems. Blue light is scattered more than other wavelengths by the gases in the atmosphere, hence our "blue planet".
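The wavelength dependence of Rayleigh scattering makes the blue bias of the sky easy to quantify: scattered intensity varies as the inverse fourth power of wavelength. As a worked example with representative wavelengths for blue and red light:

```latex
% Rayleigh scattering intensity scales as I \propto 1/\lambda^4,
% so for blue light at 450 nm versus red light at 650 nm:
\[
  \frac{I_{\text{blue}}}{I_{\text{red}}}
  = \left( \frac{\lambda_{\text{red}}}{\lambda_{\text{blue}}} \right)^{4}
  = \left( \frac{650\,\text{nm}}{450\,\text{nm}} \right)^{4}
  \approx 4.4,
\]
% i.e. air molecules scatter blue sunlight roughly four times more strongly
% than red, which is why the diffuse daytime sky appears blue.
```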
Minerals

Some of the most desirable gems are blue, including sapphire and tanzanite. Compounds of copper(II) are characteristically blue, and so are many copper-containing minerals. Azurite (Cu3(CO3)2(OH)2), with a deep blue colour, was once employed in medieval times, but it is an unstable pigment, losing its colour especially under dry conditions. Lapis lazuli, mined in Afghanistan for more than three thousand years, was used for jewelry and ornaments, and later was crushed and powdered and used as a pigment. The more it was ground, the lighter the blue colour became.

Natural ultramarine, made by grinding lapis lazuli into a fine powder, was the finest available blue pigment in the Middle Ages and the Renaissance. It was extremely expensive, and in Italian Renaissance art it was often reserved for the robes of the Virgin Mary.

Plants and fungi

Intense efforts have focused on blue flowers and the possibility that natural blue colourants could be used as food dyes. Commonly, blue colours in plants are anthocyanins: "the largest group of water-soluble pigments found widespread in the plant kingdom". In the few plants that exploit structural colouration, brilliant colours are produced by structures within cells. The most brilliant blue colouration known in any living tissue is found in the marble berries of Pollia condensata, where a spiral structure of cellulose fibrils scatters blue light. The fruit of the quandong (Santalum acuminatum) can appear blue owing to the same effect.

Animals

Blue-pigmented animals are relatively rare. Examples include butterflies of the genus Nessaea, in which blue is created by pterobilin. Other blue pigments of animal origin include phorcabilin, used by other butterflies in Graphium and Papilio (specifically P. phorcas and P. weiskei), and sarpedobilin, which is used by Graphium sarpedon. Blue-pigmented organelles, known as "cyanosomes", exist in the chromatophores of at least two fish species, the mandarin fish and the picturesque dragonet. More commonly, blueness in animals is a structural colouration: an optical interference effect induced by organized nanometre-sized scales or fibres. Examples include the plumage of several birds, such as the blue jay and indigo bunting, the scales of butterflies such as the morpho butterfly, collagen fibres in the skin of some species of monkey and opossum, and the iridophore cells in some fish and frogs.

Eyes

Blue eyes do not actually contain any blue pigment. Eye colour is determined by two factors: the pigmentation of the eye's iris and the scattering of light by the turbid medium in the stroma of the iris. In humans, the pigmentation of the iris varies from light brown to black. The appearance of blue, green, and hazel eyes results from the Tyndall scattering of light in the stroma, an optical effect similar to the one that accounts for the blueness of the sky. The irises of the eyes of people with blue eyes contain less dark melanin than those of people with brown eyes, which means that they absorb less short-wavelength blue light, which is instead reflected out to the viewer. Eye colour also varies depending on the lighting conditions, especially for lighter-coloured eyes.

Blue eyes are most common in Ireland, the Baltic Sea area and Northern Europe, and are also found in Eastern, Central, and Southern Europe. Blue eyes are also found in parts of Western Asia, most notably in Afghanistan, Syria, Iraq, and Iran. In Estonia, 99% of people have blue eyes. In Denmark in 1978, only 8% of the population had brown eyes, though through immigration that number is today about 11%. In Germany, about 75% have blue eyes. In the United States, as of 2006, 1 out of every 6 people, or 16.6% of the total population, and 22.3% of the white population, have blue eyes, compared with about half of Americans born in 1900, and a third of Americans born in 1950. Blue eyes are becoming less common among American children. In the US, males are 3–5% more likely to have blue eyes than females.
History

In the ancient world

As early as the 7th millennium BC, lapis lazuli was mined in the Sar-i Sang mines, in Shortugai, and in other mines in Badakhshan province in northeast Afghanistan. Lapis lazuli artifacts, dated to 7570 BC, have been found at Bhirrana, which is the oldest site of the Indus Valley civilisation. Lapis was highly valued by the Indus Valley Civilisation (7570–1900 BC). Lapis beads have been found at Neolithic burials in Mehrgarh, the Caucasus, and as far away as Mauritania. It was used in the funeral mask of Tutankhamun (1341–1323 BC).

A term for blue was relatively rare in many forms of ancient art and decoration, and even in ancient literature. The Ancient Greek poets described the sea as green, brown or "the colour of wine". The colour is mentioned several times in the Hebrew Bible, as "tekhelet". Reds, blacks, browns, and ochres are found in cave paintings from the Upper Paleolithic period, but not blue. Blue was also not used for dyeing fabric until long after red, ochre, pink, and purple. This is probably due to the perennial difficulty of making blue dyes and pigments. On the other hand, the rarity of blue pigment made it even more valuable. The earliest known blue dyes were made from plants – woad in Europe, indigo in Asia and Africa – while blue pigments were made from minerals, usually either lapis lazuli or azurite. Blue glazes posed still another challenge, since the early blue dyes and pigments were not thermally robust. The blue glaze Egyptian blue was introduced for ceramics, as well as for many other objects. The Greeks imported indigo dye from India, calling it indikon, and they painted with Egyptian blue. Blue was not one of the four primary colours of Greek painting described by Pliny the Elder (red, yellow, black, and white).

For the Romans, blue was the colour of mourning, as well as the colour of barbarians. The Celts and Germans reportedly dyed their faces blue to frighten their enemies, and tinted their hair blue when they grew old. The Romans made extensive use of indigo and Egyptian blue pigment, as evidenced, in part, by frescos in Pompeii. The Romans had many words for varieties of blue, but two words, both of foreign origin, became the most enduring: one from the Germanic word blau, which eventually became bleu or blue, and one from an Arabic word, which became azure.

Blue was widely used in the decoration of churches in the Byzantine Empire. By contrast, in the Islamic world, blue was secondary to green, believed to be the favourite colour of the Prophet Mohammed. At certain times in Moorish Spain and other parts of the Islamic world, blue was the colour worn by Christians and Jews, because only Muslims were allowed to wear white and green.

In the Middle Ages

In the art and life of Europe during the early Middle Ages, blue played a minor role. This changed dramatically between 1130 and 1140 in Paris, when the Abbé Suger rebuilt the Saint Denis Basilica. Suger considered that light was the visible manifestation of the Holy Spirit. He installed stained glass windows coloured with cobalt, which, combined with the light from the red glass, filled the church with a bluish violet light. The church became the marvel of the Christian world, and the colour became known by a name of its own. In the years that followed, even more elegant blue stained glass windows were installed in other churches, including at Chartres Cathedral and Sainte-Chapelle in Paris.
In the 12th century, the Roman Catholic Church dictated that painters in Italy (and consequently the rest of Europe) paint the Virgin Mary in blue, which became associated with holiness, humility and virtue. In medieval paintings, blue was used to attract the attention of the viewer to the Virgin Mary. Paintings of the mythical King Arthur began to show him dressed in blue. The coat of arms of the kings of France became an azure or light blue shield sprinkled with golden fleur-de-lis, or lilies. Blue had come from obscurity to become the royal colour.

Renaissance through 18th century

Blue came into wider use beginning in the Renaissance, when artists began to paint the world with perspective, depth, shadows, and light from a single source. In Renaissance paintings, artists tried to create harmonies between blue and red, lightening the blue with lead white paint and adding shadows and highlights. Raphael was a master of this technique, carefully balancing the reds and the blues so that no one colour dominated the picture. Ultramarine was the most prestigious blue of the Renaissance, being more expensive than gold. Wealthy art patrons commissioned works with the most expensive blues possible. In 1616, Richard Sackville commissioned a portrait of himself by Isaac Oliver with three different blues, including ultramarine pigment for his stockings.

An industry for the manufacture of fine blue and white pottery began in the 14th century in Jingdezhen, China, using white Chinese porcelain decorated with patterns of cobalt blue imported from Persia. It was first made for the family of the Emperor of China, then was exported around the world, with designs for export adapted to European subjects and tastes. The Chinese blue style was also adapted by Dutch craftsmen in Delft and English craftsmen in Staffordshire in the 17th–18th centuries. In the 18th century, blue and white porcelain was produced by Josiah Wedgwood and other British craftsmen.

19th–20th century

The early 19th century saw the ancestor of the modern blue business suit, created by Beau Brummell (1776–1840), who set fashion at the London court. It also saw the invention of blue jeans, a highly popular form of workers' clothing, invented in 1853 by Jacob W. Davis, who used metal rivets to strengthen blue denim work clothing in the California gold fields. The invention was funded by the San Francisco entrepreneur Levi Strauss, and spread around the world.

Recognizing the emotional power of blue, many artists made it the central element of paintings in the 19th and 20th centuries. They included Pablo Picasso, Pavel Kuznetsov and the Blue Rose art group, and Kandinsky and the Der Blaue Reiter (The Blue Rider) school. Henri Matisse expressed deep emotions with blue: "A certain blue penetrates your soul." In the second half of the 20th century, painters of the abstract expressionist movement used blues to inspire ideas and emotions. The painter Mark Rothko observed that colour was "only an instrument"; his interest was "in expressing human emotions, tragedy, ecstasy, doom, and so on".

In society and culture

Uniforms

In the 17th century, the Prince-Elector of Brandenburg, Frederick William I of Prussia, chose Prussian blue as the new colour of Prussian military uniforms, because it was made with woad, a local crop, rather than indigo, which was produced by the colonies of Brandenburg's rival, England. It was worn by the German army until World War I, with the exception of the soldiers of Bavaria, who wore sky-blue.
In 1748, the Royal Navy adopted a dark shade of blue for the uniform of officers. It was first known as marine blue, and is now known as navy blue. The militia organized by George Washington selected blue and buff, the colours of the British Whig Party. Blue continued to be the colour of the field uniform of the US Army until 1902, and is still the colour of its dress uniform.

In the 19th century, police in the United Kingdom, including the Metropolitan Police and the City of London Police, also adopted a navy blue uniform. Similar traditions were embraced in France and Austria. It was also adopted at about the same time for the uniforms of the officers of the New York City Police Department.

Gender

Blue is used to represent males. Gendered associations with blue began as a trend in the mid-19th century, applying primarily to clothing, and became more widespread from the 1950s; the colour became firmly associated with males after the Second World War.

Religion

Blue in Judaism: In the Torah, the Israelites were commanded to put fringes, tzitzit, on the corners of their garments, and to weave within these fringes a "twisted thread of blue (tekhelet)". In ancient days, this blue thread was made from a dye extracted from a Mediterranean snail called the hilazon. Maimonides claimed that this blue was the colour of "the clear noonday sky"; Rashi, the colour of the evening sky. According to several rabbinic sages, blue is the colour of God's Glory. Staring at this colour aids in meditation, offering a glimpse of the "pavement of sapphire, like the very sky for purity", which is a likeness of the Throne of God. Many items in the Mishkan, the portable sanctuary in the wilderness, such as the menorah, many of the vessels, and the Ark of the Covenant, were covered with blue cloth when transported from place to place.

Blue in Christianity: Blue is particularly associated with the Virgin Mary. This was the result of a decree of Pope Gregory I (540–601), who ordered that all religious paintings should tell a story that was clearly comprehensible to all viewers, and that figures should be easily recognizable, especially the figure of Mary. If she was alone in the image, her costume was usually painted with the finest blue, ultramarine. If she was with Christ, her costume was usually painted with a less expensive pigment, to avoid outshining him.

Blue in Hinduism: Many of the gods are depicted as having blue-coloured skin, particularly those associated with Vishnu, who is said to be the preserver of the world and thus intimately connected to water. Krishna and Rama, Vishnu's avatars, are usually depicted with blue skin. Shiva, the destroyer deity, is also depicted in a light-blue hue and is called Nīlakaṇṭha, or blue-throated, for having swallowed poison to save the universe during the Samudra Manthana, the churning of the ocean of milk. Blue is used to symbolically represent the fifth chakra, that of the throat (Vishuddha).

Blue in Sikhism: The Akali Nihang warriors wear all-blue attire. Guru Gobind Singh also had a blue roan horse. The Sikh Rehat Maryada states that the Nishan Sahib hoisted outside every Gurudwara should be xanthic (basanti in Punjabi) or greyish blue (surmaaee in Punjabi, similar to modern navy blue) in colour.

Blue in Paganism: Blue is associated with peace, truth, wisdom, protection, and patience. It helps with healing, psychic ability, harmony, and understanding.
Sports

In sports, blue is widely represented in uniforms, in part because the majority of national teams wear the colours of their national flag. For example, the national men's football team of France are known as Les Bleus (the Blues). Similarly, Argentina, Italy, and Uruguay wear blue shirts. The Asian Football Confederation and the Oceania Football Confederation use blue text on their logos. Blue is also well represented in baseball (the Blue Jays), basketball, American football, and ice hockey. The Indian national cricket team wears a blue uniform during One Day International matches, and the team is accordingly also referred to as the "Men in Blue".

Politics

Unlike red or green, blue was not strongly associated with any particular country, religion or political movement. As the colour of harmony, it was chosen as the colour for the flags of the United Nations, the European Union, and NATO. In politics, blue is often used as the colour of conservative parties, contrasting with the red associated with left-wing parties. Some conservative parties that use the colour blue include the Conservative Party (UK), the Conservative Party of Canada, the Liberal Party of Australia, the Liberal Party of Brazil, and Likud of Israel. In some countries, however, blue is not associated with the main conservative party. In the United States, the liberal Democratic Party is associated with blue, while the conservative Republican Party is associated with red. US states which have been won by the Democratic Party in four consecutive presidential elections are termed "blue states", while those that have been won by the Republican Party are termed "red states". South Korea also uses this colour scheme, with the Democratic Party on the left using blue and the People Power Party on the right using red.

See also

Engineer's blue
Lists of colours
Non-photo blue
Blue pigments

References

Works cited

(page numbers refer to the French translation)

Further reading

External links

"Friday essay: from the Great Wave to Starry Night, how a blue pigment changed the world", by Hugh Davies, theconversation.com

Optical spectrum Primary colors Rainbow colors Secondary colors Shades of violet Web colors Masculinity
Blue
Physics
6,346
7,543,003
https://en.wikipedia.org/wiki/Transitional%20shelter
Transitional shelters are designed to provide temporary housing and support services for individuals and families experiencing homelessness, with the goal of helping them transition into permanent housing. Unlike emergency shelters, which focus on immediate and short-term relief, transitional shelters offer a more extended stay, typically from several months to two years. These shelters often cater to specific populations, such as women and children fleeing domestic violence, individuals recovering from addiction, or families working to regain financial stability.

A key feature of transitional shelters is the integration of support services aimed at addressing the underlying causes of homelessness, including case management, job training, educational support, mental health counseling, addiction treatment, and childcare. Transitional shelters are often funded through a combination of federal, state, and local government programs, as well as private donations and grants. Programs like HUD's Continuum of Care (CoC) and Transitional Housing Assistance Grants support the development and operation of these facilities.

The term transitional shelter emerged in the mid-20th century as part of broader efforts to address homelessness and housing instability in the United States and globally. Initially, it was used to describe temporary housing solutions provided after major crises, such as wars or natural disasters, where displaced populations needed stable environments before transitioning to permanent homes. Unlike extended-stay refugee camps, which are often established to address long-term displacement due to ongoing conflict or a lack of resettlement options, transitional shelters are designed with a defined timeline and the goal of facilitating a quicker integration into permanent housing.

In the 1980s, as homelessness rose in the U.S. due to economic recession, cuts to social services, and the deinstitutionalization of mental health care, transitional shelters became a prominent component of federal housing strategies. The passage of the McKinney-Vento Homeless Assistance Act in 1987 formalized the concept, funding programs that combined temporary housing with supportive services to help individuals and families rebuild their lives. Since then, the term has evolved to encompass a wide range of programs aimed at addressing diverse causes of homelessness, such as domestic violence, addiction recovery, and economic instability. Today, transitional shelters are recognized as a critical step in the continuum of care for those experiencing homelessness, bridging the gap between emergency shelters and permanent housing solutions.

See also

Refugee shelter
Emergency shelter

External links

transitional settlement: displaced populations
An Assessment of Sphere Humanitarian Standards for Shelter and Settlement Planning in Kenya's Dadaab Refugee Camps

References

Buildings and structures by type Refugee camps Temporary populated places Disaster preparedness Emergency services
Transitional shelter
Engineering
504
964,328
https://en.wikipedia.org/wiki/Wolfgang%20Krull
Wolfgang Krull (26 August 1899 – 12 April 1971) was a German mathematician who made fundamental contributions to commutative algebra, introducing concepts that are now central to the subject.

Krull was born and went to school in Baden-Baden. He attended the Universities of Freiburg, Rostock and finally Göttingen from 1919 to 1921, where he earned his doctorate under Alfred Loewy. He worked as an instructor and professor at Freiburg, then spent a decade at the University of Erlangen. In 1939, Krull moved to become chair at the University of Bonn, where he remained for the rest of his life. Wolfgang Krull was a member of the Nazi Party.

His 35 doctoral students include Wilfried Brauer, Karl-Otto Stöhr and Jürgen Neukirch.

See also

Cohen structure theorem
Jacobson ring
Local ring
Prime ideal
Real algebraic geometry
Regular local ring
Valuation ring
Krull dimension
Krull ring
Krull topology
Krull–Azumaya theorem
Krull–Schmidt category
Krull–Schmidt theorem
Krull's intersection theorem
Krull's principal ideal theorem
Krull's separation lemma
Krull's theorem

Publications

References

External links

1899 births 1971 deaths 20th-century German mathematicians Nazi Party members Algebraists
Wolfgang Krull
Mathematics
263
208,259
https://en.wikipedia.org/wiki/20%20%28number%29
20 (twenty) is the natural number following 19 and preceding 21. A group of twenty units is sometimes referred to as a score.

In mathematics

Twenty is a composite number. It is also the smallest primitive abundant number. The Happy Family of sporadic groups is made up of twenty finite simple groups that are all subquotients of the friendly giant, the largest of the twenty-six sporadic groups.

Geometry

An icosagon is a polygon with 20 edges. Bring's curve is a Riemann surface whose fundamental polygon is a regular hyperbolic icosagon.

Platonic solids

The largest number of faces a Platonic solid can have is twenty, which is the number of faces of a regular icosahedron. A dodecahedron, on the other hand, has twenty vertices, likewise the most a regular polyhedron can have. This is because the icosahedron and the dodecahedron are duals of each other.

Other fields

Science

20 is the third magic number in physics.

Biology

In some countries, the number 20 is used as an index in measuring visual acuity. 20/20 indicates normal vision at 20 feet, although it is commonly used to mean "perfect vision" in countries using the Imperial system. (The metric equivalent is 6/6.) When someone is able to see only after an event how things turned out, that person is often said to have had "20/20 hindsight".

Psychology

In many disciplines of developmental psychology, adulthood starts at age 20.

Culture

Age 20

The traditional age of majority in Japan, although the voting age has been reduced to 18. Japanese people commemorate the twentieth birthday with personal ceremonies, and it comes with a number of legal rights such as the right to marry. To represent this, the Japanese language has a special word for "20 years old" that does not follow the rest of the numbering system. Accordingly, the word 二十歳 is read all at once as "はたち" (hatachi) rather than with the expected pronunciation of the three characters as "にじゅうさい" (nijyuusai, which is literally "two", "ten", and the counter for "years old").

Number systems

20 is the basis for vigesimal number systems, used by several different civilizations in the past (and to this day), including the Maya.

Indefinite number

A 'score' is a group of twenty (often used in combination with a cardinal number, e.g. fourscore to mean 80), but is also often used as an indefinite number (e.g. the newspaper headline "Scores of Typhoon Survivors Flown to Manila").
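The claim in the mathematics section that 20 is the smallest primitive abundant number can be checked directly. A minimal sketch, using the convention that a primitive abundant number is an abundant number all of whose proper divisors are deficient:

```python
# Verify that 20 is the smallest primitive abundant number (sketch).

def aliquot(n):
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

def is_abundant(n):
    return aliquot(n) > n

def is_primitive_abundant(n):
    # Abundant, and every proper divisor is deficient (aliquot sum < divisor).
    return is_abundant(n) and all(
        aliquot(d) < d for d in range(1, n) if n % d == 0
    )

# 12 and 18 are abundant, but each has the perfect number 6 as a divisor,
# so the search correctly skips them and stops at 20.
print(next(n for n in range(1, 100) if is_primitive_abundant(n)))  # -> 20
```

References

External links

Integers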
20 (number)
Mathematics
536
75,859,087
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Z%20Fold%206
The Samsung Galaxy Z Fold 6 (stylized as Samsung Galaxy Z Fold6, sold as Samsung Galaxy Fold 6 in certain territories) is an Android-based foldable smartphone that was announced by Samsung Electronics on July 10, 2024, at Galaxy Unpacked and was released on July 24, 2024.

References

Samsung Galaxy Foldable smartphones Mobile phones introduced in 2024 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Mobile phones with 8K video recording
Samsung Galaxy Z Fold 6
Technology
96
23,092,863
https://en.wikipedia.org/wiki/Fractional%20anisotropy
Fractional anisotropy (FA) is a scalar value between zero and one that describes the degree of anisotropy of a diffusion process. A value of zero means that diffusion is isotropic, i.e. it is unrestricted (or equally restricted) in all directions. A value of one means that diffusion occurs only along one axis and is fully restricted along all other directions. FA is a measure often used in diffusion imaging, where it is thought to reflect fiber density, axonal diameter, and myelination in white matter. The FA is an extension of the concept of eccentricity of conic sections to three dimensions, normalized to the unit range.

Definition

A diffusion ellipsoid is completely represented by the diffusion tensor, D. FA is calculated from the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of the diffusion tensor. The eigenvectors give the directions of the ellipsoid's principal axes, and the corresponding eigenvalues give the magnitude of diffusion in each eigenvector direction:

$$\mathrm{FA} = \sqrt{\frac{3}{2}} \, \frac{\sqrt{(\lambda_1 - \hat{\lambda})^2 + (\lambda_2 - \hat{\lambda})^2 + (\lambda_3 - \hat{\lambda})^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}},$$

with $\hat{\lambda} = (\lambda_1 + \lambda_2 + \lambda_3)/3$ being the mean value of the eigenvalues. An equivalent formula for FA is

$$\mathrm{FA} = \sqrt{\frac{1}{2}} \, \frac{\sqrt{(\lambda_1 - \lambda_2)^2 + (\lambda_2 - \lambda_3)^2 + (\lambda_3 - \lambda_1)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}},$$

which is further equivalent to

$$\mathrm{FA} = \sqrt{\frac{1}{2}\left(3 - \frac{1}{\operatorname{tr}(R^2)}\right)},$$

where R is the "normalized" diffusion tensor:

$$R = \frac{D}{\operatorname{tr}(D)}.$$

Note that if all the eigenvalues are equal, which happens for isotropic (spherical) diffusion, as in free water, the FA is 0. The FA can reach a maximum value of 1 (this rarely happens in real data), in which case D has only one nonzero eigenvalue and the ellipsoid reduces to a line in the direction of that eigenvector. This means that the diffusion is confined to that direction alone.

Details

This can be visualized with an ellipsoid, which is defined by the eigenvectors and eigenvalues of D. The FA of a sphere is 0, since the diffusion is isotropic and there is equal probability of diffusion in all directions. The eigenvectors and eigenvalues of the diffusion tensor give a complete representation of the diffusion process. FA quantifies the pointedness of the ellipsoid, but does not give information about which direction it is pointing in.

Note that the FA of most liquids, including water, is 0 unless the diffusion process is being constrained by structures such as a network of fibers. The measured FA may depend on the effective length scale of the diffusion measurement. If the diffusion process is not constrained on the scale being measured (the constraints are too far apart), or the constraints switch direction on a smaller scale than the measured one, then the measured FA will be attenuated. For example, the brain can be thought of as a fluid permeated by many fibers (nerve axons). However, in most parts the fibers go in all directions, and thus, although they constrain the diffusion, the FA is close to 0. In some regions, such as the corpus callosum, the fibers are aligned over a large enough scale (on the order of a mm) for their directions to mostly agree within the resolution element of a magnetic resonance image, and it is these regions that stand out in an FA image. Liquid crystals can also exhibit anisotropic diffusion, because the needle- or plate-like shapes of their molecules affect how they slide over one another. When the FA is 0, the tensor nature of D is often ignored and it is called the diffusion constant.
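The definition above translates directly into a few lines of code. A minimal sketch (independent of any particular diffusion-imaging package) computing FA from a symmetric 3×3 diffusion tensor:

```python
# Compute fractional anisotropy from a 3x3 diffusion tensor (sketch).
import numpy as np

def fractional_anisotropy(D):
    """FA from the eigenvalues of a symmetric 3x3 diffusion tensor D."""
    lam = np.linalg.eigvalsh(D)            # eigenvalues l1, l2, l3
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Isotropic diffusion (free water): FA = 0.
print(fractional_anisotropy(np.eye(3)))                    # 0.0
# Diffusion strongly confined to one axis: FA close to 1.
print(fractional_anisotropy(np.diag([1.0, 0.01, 0.01])))   # ~0.99
```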
Due to this, higher order models using spherical harmonics and Orientation Distribution Functions (ODF) have been used to define newer and richer estimates of the anisotropy, called Generalized Fractional Anisotropy. GFA computations use samples of the ODF to evaluate the anisotropy in diffusion. They can also be easily calculated by using the Spherical Harmonic coefficients of the ODF model. References Transport phenomena Diffusion Imaging Tensors Neuroimaging Medical imaging Magnetic resonance imaging de:Diffusivität#Fraktionale Anisotropie
Fractional anisotropy
Physics,Chemistry,Engineering
844
10,858,517
https://en.wikipedia.org/wiki/Human%20Fertilisation%20and%20Embryology%20Act%201990
The Human Fertilisation and Embryology Act 1990 (c. 37) is an Act of the Parliament of the United Kingdom. It created the Human Fertilisation and Embryology Authority which is in charge of human embryo research, along with monitoring and licensing fertility clinics in the United Kingdom. The Authority is composed of a chairman, a deputy chairman, and as many members as are appointed by the UK Secretary of State. They are in charge of reviewing information about human embryos and subsequent development, provision of treatment services, and activities governed by the Act of 1990. The Authority also offers information and advice to people seeking treatment, and to those who have donated gametes or embryos for purposes or activities covered in the Act of 1990. Some of the subjects under the Human Fertilisation and Embryology Act of 1990 are prohibitions in connection with gametes, embryos, and germ cells. The Act also addresses licensing conditions, code of practice, and procedure of approval involving human embryos. This only concerns human embryos which have reached the two-cell zygote stage, at which they are considered "fertilised" in the act. It also governs the keeping and using of human embryos, but only outside a woman's body. The act contains amendments to UK law regarding termination of pregnancy, surrogacy and parental rights. Human Fertilisation and Embryology Authority and stem cell policy The Human Fertilisation and Embryology Act 1990 regulates the creation of human embryos ex vivo and the research involving them. This act established the Human Fertilisation and Embryology Authority (HFEA) to regulate treatment and research in the UK involving human embryos. In 2001, an extension of the Act legalized embryo research for the purposes of "increasing knowledge about the development of embryos," "increasing knowledge about serious disease," and "enabling any such knowledge to be applied in developing treatments for serious disease." The HFEA grants licenses and research permission for up to three years, based on approval of five steps by the Research License Committee. HFEA policies are reviewed by specialists in the field regularly. After research and literature are reviewed, and open public meetings are held, the summarized information is presented to the Human Fertilisation Embryology Authority. Policy under review Sperm and egg donation Donors must meet certain criteria in order to be eligible for sperm, egg, or embryo donation. The donor can donate for research purposes or fertility treatment. Donors should find an HFEA-licensed clinic, or can go through the National Gamete Donation Trust. Multiple births as a result of IVF The HFEA is carrying out a detailed review to determine the best way to reduce the risk of multiple pregnancies with in vitro fertilization (IVF). For example, Nadya Suleman (or "Octomom") is publicly known for giving birth to octuplets after IVF treatment. Review of scientific methods to reduce mitochondrial disease This policy allows for the use of techniques which alter the mitochondrial DNA of the egg or an embryo used in IVF, to prevent serious mitochondrial diseases from being inherited. Past policy reviews The policies reviewed by HFEA cover everything from human reproductive cloning to the creation of human-animal hybrids, and include subjects such as ethics with scientific and social significance.
Genetic testing Sperm, eggs and embryos received in the donation process are currently tested for many medical conditions, and also quarantined for six months to reduce the risk of complications to the mother and child. In addition to screening for genetic disorders, donors are tested for HIV, hepatitis B, and hepatitis C. Embryo research Embryos must be donated by a woman between the ages of 18 and 35, who has also undergone a medical screening and given informed consent (which can be revoked at any point up until the embryo is used). Fees and regulation £3,000 for extraction and initial freezing £160 yearly for storage £4,000-£8,000 total per treatment cycle Risks of treatment "Welfare of the Child" review (multiple pregnancy) for people seeking IVF treatment. While there is always a risk of a multiple pregnancy after receiving IVF treatment, HFEA is reviewing policies which will reduce this dangerous possibility. No more than two eggs or embryos can be legally implanted in a woman in an IVF treatment. There is a 25% success rate of this procedure per treatment cycle. Clinical safety Includes safety procedure regulations at fertility clinics; includes safe cryopreservation of eggs and embryos. Eggs and embryos are stored for ten years after the initial treatment. If the patient decides not to pursue another pregnancy, the eggs and embryos can be donated for research or to another couple for fertility treatments. Payment of donors In donor-assisted conception, the donor may not receive any monetary compensation (in the UK), although they may have related expenses covered. HFEA and liquid nitrogen gamete storage Sperm, eggs and embryos are stored in liquid nitrogen using cryopreservation (defined as the freezing of cells or whole tissues to sub-zero temperatures, down to the boiling point of liquid nitrogen). This method preserves living organisms in a state where they can be restored to how they were before freezing. A cryoprotective compound (a liquid called cryopreservation medium), along with carefully controlled cooling and warming cycles, ensures that minimal damage is done to the cells. However, the freezing process is still somewhat damaging. Therefore, men wishing to donate sperm or have it stored for future use must make six sperm deposits for every one child they wish to have, due to the 50% survival rate of the sperm in each deposit. The sperm is then put into straw-shaped vials, and placed in a storage tank of either liquid nitrogen, or liquid nitrogen vapor. The sub-zero temperatures of the liquid generally range from -150 degrees Celsius to -196 degrees Celsius. According to HFEA, the storage period for both human gametes and embryos cannot exceed ten years. HFEA requires full informed consent from each party that has any relation to the eggs, gametes, or embryo, all of which must be stored in accordance with their consents. Exceptions to the informed consent of gamete storage: Consent is not required if the gametes were legally obtained from the person in question before they reached 18 years of age. Consent is not required if the person in question is about to undergo a medical procedure which could impair their fertility. In this situation, a licensed medical practitioner can sign off for the storage of the gametes. A medical practitioner can also authorize storage of the gametes, if it is believed to be in the person's best interest.
Consent is not required if the person in question is under 16 years of age, and is therefore considered incompetent to give consent. The act states that it is legal to "take" gametes or accept those provided, and store them without a person's consent, if the person is considered incapable, or until they "acquire such capacity." However, under paragraphs 9 and 10 of HFEA 1990, a person's gametes cannot be legally stored in the UK after their death. Earlier steps and legislation In July 1982 the Warnock Committee Inquiry was established. It was "to consider recent and potential developments in medicine and science related to human fertilisation and embryology; to consider what policies and safeguards should be applied, including consideration of the social, ethical, and legal implications of these developments; and to make recommendations." The Warnock Report was published on 18 July 1984. The report stated that a regulator was needed due to the 'special status' of embryos. In 1985 the Interim Licensing Authority was created. It was supposed to regulate work and research regarding human in vitro fertilisation until permanent government legislation was passed. It remained the only authority until 1990. The Unborn Children Protection Bill was also created in 1985. It was written by Enoch Powell and prohibited embryonic research. Under the bill, the Health Secretary would only have been able to allow an embryo to be kept and implanted for the sole purpose of assisting a named woman to bear a child; no other reason was allowed. This bill was not passed. It was reintroduced in 1986, where it again failed to pass, and again in 1989. The Surrogacy Arrangements Act 1985 was the first law that governed surrogacy arrangements. It criminalized commercial surrogacy arrangements. In 1987 the framework for human fertilisation and embryology was created. A white paper was published with regard to the recommendations of the Warnock Report. In 1990 the Human Fertilisation and Embryology Act 1990 was passed. The Human Fertilisation and Embryology Authority, HFEA, officially started work on August 1, 1991. HFEA coverage The act covers several areas: Any and all fertility treatment of humans involving the use of donated genetic material (eggs, sperm or embryos). The storage of human eggs, sperm and embryos. Research on early human embryos. The creation of the Human Fertilisation and Embryology Authority, or HFEA, which regulates assisted reproduction in the UK. Within the act an embryo is defined as a live human embryo where fertilisation is complete; fertilisation is considered complete at the appearance of a two-cell zygote. Storage of human eggs, sperm, and embryos The act states that eggs, sperm, and embryos can only be stored for a finite amount of time under very specific conditions that are regulated by the Human Fertilisation and Embryology Authority. Human eggs and sperm can be stored for up to ten years. Human embryos can be stored for a maximum of five years. Research on early human embryos Research on human embryos can only be performed for specifically defined purposes that must be considered 'necessary and desirable' by the Human Fertilisation and Embryology Authority. Research can only be performed on an embryo for a maximum of fourteen days or until the primitive streak appears. The genetic composition of any cell within the embryo cannot be altered during the embryo's formation for research.
The act defined several purposes: Innovations in infertility treatments Increasing knowledge regarding miscarriages Increasing knowledge of congenital disease Developing more effectual methods of contraception Generating methods for detecting and identifying gene or chromosomal irregularities in embryos before implantation. Abortion provisions Section 37 of the Act amends the Abortion Act 1967. The section specifies and broadens the conditions under which abortion is legal. Women who consider abortion are referred to two doctors. Each doctor then advises her whether abortion is a suitable decision based on the conditions set out in the amended Act. An abortion is granted only when the doctors reach a unanimous decision that the woman may terminate her pregnancy. An abortion that is performed without this decision or under any other circumstances is considered unlawful. The registered medical practitioner that performs the abortion must continue to act in accordance with the Infant Life (Preservation) Act 1929. Amendments In 1991 the statutory storage period and special exemptions sections were revisited. Regulations extended storage periods for eggs and sperm. Licensing rules for egg and sperm storage were also clarified. A Disclosure of Information Act was created in 1992. This allowed the Human Fertilisation and Embryology Authority to disclose information to others with the patient's consent; for example, information could be shared with their general practitioner. The Criminal Justice and Public Order Act 1994 added section 156. This prohibited the use in treatment of cells from aborted embryos. During the same year the Parental Orders regulations allowed parental orders to be made in surrogacy cases. In 1996 the permitted storage period for embryos was extended. The Human Fertilisation and Embryology (Deceased Fathers) Act 2003 amended section 28 of the 1990 Act. Sperm may be taken from a deceased male to fertilize an egg if the corresponding man and woman were: married, living as man and wife, or had been receiving treatment together at a licensed clinic. In 2001 the Human Fertilisation and Embryology Regulations were added. These regulations extended the purposes for which an embryo can be created for research: better understanding of embryonic development, further knowledge of serious disease, and research involving the treatment of serious disease. In addition, the Human Reproductive Cloning Act 2001 was passed. This essentially made human reproductive cloning illegal by outlawing the implantation of research embryos. As of 2004 the Disclosure of Donor Information Regulations were formed. Any sperm or egg donors registered after April 1, 2005, were required to pass on their name and last known address to any resulting offspring. During this time Parliament began reviewing the Human Fertilisation and Embryology Act 1990. Licensing of all establishments handling gametes for treatment was required as of 2007 in the Quality and Safety Regulations. In 2006 a white paper was published regarding revised legislation for fertility. This led to the Human Fertilisation and Embryology Act 2008 (HFE Act) being passed. This was a major review of fertility legislation, updating and amending the act of 1990. The HFE Act came into force in 2009 and is the current law in the UK.
See also Human Reproductive Cloning Act 2001 Human Fertilisation and Embryology (Deceased Fathers) Act 2003 Human Fertilisation and Embryology Act 2008 References External links United Kingdom abortion law Acts of the Parliament of the United Kingdom concerning healthcare Cloning Medical genetics in the United Kingdom United Kingdom Acts of Parliament 1990 Medical regulation in the United Kingdom Surrogacy
Human Fertilisation and Embryology Act 1990
Engineering,Biology
2,760
42,953,903
https://en.wikipedia.org/wiki/NGC%204217
NGC 4217 is an edge-on spiral galaxy which lies approximately 60 million light-years (18 million parsecs) away in the constellation of Canes Venatici. It is a possible companion galaxy to Messier 106 (NGC 4258). One supernova, SN 2022myz (type I, mag. 19), was discovered in NGC 4217 on 19 June 2022. Gallery References Further reading External links 4217 Canes Venatici Spiral galaxies 039241
NGC 4217
Astronomy
104
179,924
https://en.wikipedia.org/wiki/Helix
A helix is a shape like a cylindrical coil spring or the thread of a machine screw. It is a type of smooth space curve with tangent lines at a constant angle to a fixed axis. Helices are important in biology, as the DNA molecule is formed as two intertwined helices, and many proteins have helical substructures, known as alpha helices. The word helix comes from the Greek word ἕλιξ, "twisted, curved". A "filled-in" helix – for example, a "spiral" (helical) ramp – is a surface called a helicoid. Properties and types The pitch of a helix is the height of one complete helix turn, measured parallel to the axis of the helix. A double helix consists of two (typically congruent) helices with the same axis, differing by a translation along the axis. A circular helix (i.e. one with constant radius) has constant band curvature and constant torsion. The slope of a circular helix is commonly defined as the ratio of the circumference of the circular cylinder that it spirals around to its pitch (the height of one complete helix turn). A conic helix, also known as a conic spiral, may be defined as a spiral on a conic surface, with the distance to the apex an exponential function of the angle indicating direction from the axis. A curve is called a general helix or cylindrical helix if its tangent makes a constant angle with a fixed line in space. A curve is a general helix if and only if the ratio of curvature to torsion is constant. A curve is called a slant helix if its principal normal makes a constant angle with a fixed line in space. It can be constructed by applying a transformation to the moving frame of a general helix. For more general helix-like space curves, see space spiral; e.g., spherical spiral. Handedness Helices can be either right-handed or left-handed. With the line of sight along the helix's axis, if a clockwise screwing motion moves the helix away from the observer, then it is called a right-handed helix; if towards the observer, then it is a left-handed helix. Handedness (or chirality) is a property of the helix, not of the perspective: a right-handed helix cannot be turned to look like a left-handed one unless it is viewed in a mirror, and vice versa. Mathematical description In mathematics, a helix is a curve in 3-dimensional space. The following parametrisation in Cartesian coordinates defines a particular helix; perhaps the simplest equations for one are $x(t) = \cos t$, $y(t) = \sin t$, $z(t) = t$. As the parameter $t$ increases, the point $(x(t), y(t), z(t))$ traces a right-handed helix of pitch $2\pi$ (or slope 1) and radius 1 about the $z$-axis, in a right-handed coordinate system. In cylindrical coordinates $(r, \theta, h)$, the same helix is parametrised by: $r(t) = 1$, $\theta(t) = t$, $h(t) = t$. A circular helix of radius $a$ and slope $a/b$ (or pitch $2\pi b$) is described by the following parametrisation: $x(t) = a\cos t$, $y(t) = a\sin t$, $z(t) = bt$. Another way of mathematically constructing a helix is to plot the complex-valued function $e^{it}$ as a function of the real number $t$ (see Euler's formula). The value of $t$ and the real and imaginary parts of the function value give this plot three real dimensions. Except for rotations, translations, and changes of scale, all right-handed helices are equivalent to the helix defined above. The equivalent left-handed helix can be constructed in a number of ways, the simplest being to negate any one of the $x$, $y$ or $z$ components. Arc length, curvature and torsion A circular helix of radius $a$ and slope $a/b$ (or pitch $2\pi b$) expressed in Cartesian coordinates as the parametric equation $t \mapsto (a\cos t,\, a\sin t,\, bt)$, $t \in [0, T]$, has an arc length of $A = T\sqrt{a^2 + b^2}$, a curvature of $\tfrac{|a|}{a^2 + b^2}$, and a torsion of $\tfrac{b}{a^2 + b^2}$. A helix has constant non-zero curvature and torsion.
A helix is the vector-valued function $\mathbf{r}(t) = a\cos t\,\mathbf{i} + a\sin t\,\mathbf{j} + bt\,\mathbf{k}$, with arc length $s(t) = t\sqrt{a^2 + b^2}$. So a helix can be reparameterized as a function of $s$, which must be unit-speed: $\mathbf{r}(s) = a\cos\tfrac{s}{\sqrt{a^2+b^2}}\,\mathbf{i} + a\sin\tfrac{s}{\sqrt{a^2+b^2}}\,\mathbf{j} + \tfrac{bs}{\sqrt{a^2+b^2}}\,\mathbf{k}$. The unit tangent vector is $\mathbf{T} = \tfrac{d\mathbf{r}}{ds} = \tfrac{1}{\sqrt{a^2+b^2}}\left(-a\sin\tfrac{s}{\sqrt{a^2+b^2}},\; a\cos\tfrac{s}{\sqrt{a^2+b^2}},\; b\right)$. The normal vector is $\tfrac{d\mathbf{T}}{ds} = \tfrac{-a}{a^2+b^2}\left(\cos\tfrac{s}{\sqrt{a^2+b^2}},\; \sin\tfrac{s}{\sqrt{a^2+b^2}},\; 0\right)$. Its curvature is $\kappa = \left\|\tfrac{d\mathbf{T}}{ds}\right\| = \tfrac{|a|}{a^2+b^2}$. The unit normal vector (for $a > 0$) is $\mathbf{N} = \left(-\cos\tfrac{s}{\sqrt{a^2+b^2}},\; -\sin\tfrac{s}{\sqrt{a^2+b^2}},\; 0\right)$. The binormal vector is $\mathbf{B} = \mathbf{T}\times\mathbf{N} = \tfrac{1}{\sqrt{a^2+b^2}}\left(b\sin\tfrac{s}{\sqrt{a^2+b^2}},\; -b\cos\tfrac{s}{\sqrt{a^2+b^2}},\; a\right)$. Its torsion is $\tau = \tfrac{b}{a^2+b^2}$. Examples An example of a double helix in molecular biology is the nucleic acid double helix. An example of a conic helix is the Corkscrew roller coaster at Cedar Point amusement park. Some curves found in nature consist of multiple helices of different handedness joined together by transitions known as tendril perversions. Most hardware screw threads are right-handed helices. The alpha helix in biology as well as the A and B forms of DNA are also right-handed helices. The Z form of DNA is left-handed. In music, pitch space is often modeled with helices or double helices, most often extending out of a circle such as the circle of fifths, so as to represent octave equivalency. In aviation, geometric pitch is the distance an element of an airplane propeller would advance in one revolution if it were moving along a helix having an angle equal to that between the chord of the element and a plane perpendicular to the propeller axis; see also: pitch angle (aviation). See also Alpha helix Arc spring Boerdijk–Coxeter helix Circular polarization Collagen helix Helical symmetry Helicity Helix angle Helical axis Hemihelix Seashell surface Solenoid Superhelix Triple helix References Geometric shapes Curves
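The closed-form curvature and torsion above can be checked numerically with the standard identities $\kappa = |\mathbf{r}'\times\mathbf{r}''|/|\mathbf{r}'|^3$ and $\tau = ((\mathbf{r}'\times\mathbf{r}'')\cdot\mathbf{r}''')/|\mathbf{r}'\times\mathbf{r}''|^2$. Below is a minimal sketch (assuming NumPy; the parameter values and the finite-difference step are illustrative assumptions, not from the source):

```python
import numpy as np

a, b = 2.0, 0.5   # radius and axial rate; pitch = 2*pi*b

def helix(t):
    return np.array([a * np.cos(t), a * np.sin(t), b * t])

# Closed-form values from the formulas above
kappa = a / (a**2 + b**2)
tau   = b / (a**2 + b**2)

# Central finite differences for r', r'', r''' at t = 1
t, h = 1.0, 1e-4
r1 = (helix(t + h) - helix(t - h)) / (2 * h)
r2 = (helix(t + h) - 2 * helix(t) + helix(t - h)) / h**2
r3 = (helix(t + 2*h) - 2*helix(t + h) + 2*helix(t - h) - helix(t - 2*h)) / (2 * h**3)

cross = np.cross(r1, r2)
kappa_num = np.linalg.norm(cross) / np.linalg.norm(r1)**3
tau_num   = np.dot(cross, r3) / np.linalg.norm(cross)**2

print(kappa, kappa_num)   # agree to several decimal places
print(tau, tau_num)
```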
Helix
Mathematics
1,089
444,224
https://en.wikipedia.org/wiki/Piecewise%20linear%20function
In mathematics, a piecewise linear or segmented function is a real-valued function of a real variable, whose graph is composed of straight-line segments. Definition A piecewise linear function is a function defined on a (possibly unbounded) interval of real numbers, such that there is a collection of intervals on each of which the function is an affine function. (Thus "piecewise linear" is actually defined to mean "piecewise affine".) If the domain of the function is compact, there needs to be a finite collection of such intervals; if the domain is not compact, it may either be required to be finite or to be locally finite in the reals. Examples The function defined by a different affine expression on each of the four intervals determined by the breakpoints −3, 0, and 3 is piecewise linear with four pieces. The graph of this function is shown to the right. Since the graph of an affine(*) function is a line, the graph of a piecewise linear function consists of line segments and rays. The x values (in the above example −3, 0, and 3) where the slope changes are typically called breakpoints, changepoints, threshold values or knots. As in many applications, this function is also continuous. The graph of a continuous piecewise linear function on a compact interval is a polygonal chain. (*) A linear function satisfies $f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$ by definition and therefore in particular $f(0) = 0$; functions whose graph is a straight line are affine rather than linear. There are other examples of piecewise linear functions: Absolute value Sawtooth function Floor function Step function, a function composed of constant sub-functions, so also called a piecewise constant function Boxcar function, Heaviside step function Sign function Triangular function Fitting to a curve An approximation to a known curve can be found by sampling the curve and interpolating linearly between the points. An algorithm for computing the most significant points subject to a given error tolerance has been published. Fitting to data If partitions, and then breakpoints, are already known, linear regression can be performed independently on these partitions. However, continuity is not preserved in that case, and also there is no unique reference model underlying the observed data. A stable algorithm for this case has been derived. If partitions are not known, the residual sum of squares can be used to choose optimal separation points. However, efficient computation and joint estimation of all model parameters (including the breakpoints) may be obtained by an iterative procedure currently implemented in the package segmented for the R language. A variant of decision tree learning called model trees learns piecewise linear functions. Generalizations The notion of a piecewise linear function makes sense in several different contexts. Piecewise linear functions may be defined on n-dimensional Euclidean space, or more generally any vector space or affine space, as well as on piecewise linear manifolds and simplicial complexes (see simplicial map). In each case, the function may be real-valued, or it may take values from a vector space, an affine space, a piecewise linear manifold, or a simplicial complex. (In these contexts, the term "linear" does not refer solely to linear transformations, but to more general affine linear functions.) In dimensions higher than one, it is common to require the domain of each piece to be a polygon or polytope. This guarantees that the graph of the function will be composed of polygonal or polytopal pieces.
Splines generalize piecewise linear functions to higher-order polynomials, which are in turn contained in the category of piecewise-differentiable functions, PDIFF. Specializations Important sub-classes of piecewise linear functions include the continuous piecewise linear functions and the convex piecewise linear functions. In general, for every n-dimensional continuous piecewise linear function $f$, there is a finite family of affine functions $\ell_i(x) = a_i \cdot x + b_i$ and a family of index sets $S_1, \dots, S_k$ such that $$f(x) = \max_{1 \le j \le k}\ \min_{i \in S_j} \ell_i(x).$$ If $f$ is convex and continuous, then there is a finite family of affine functions $\ell_1, \dots, \ell_m$ such that $$f(x) = \max_{1 \le i \le m} \ell_i(x).$$ Applications In agriculture piecewise regression analysis of measured data is used to detect the range over which growth factors affect the yield and the range over which the crop is not sensitive to changes in these factors. The image on the left shows that at shallow watertables the yield declines, whereas at deeper (> 7 dm) watertables the yield is unaffected. The graph is made using the method of least squares to find the two segments with the best fit. The graph on the right reveals that crop yields tolerate a soil salinity up to ECe = 8 dS/m (ECe is the electric conductivity of an extract of a saturated soil sample), while beyond that value the crop production reduces. The graph is made with the method of partial regression to find the longest range of "no effect", i.e. where the line is horizontal. The two segments need not join at the same point; the method of least squares is used only for the second segment. See also Linear interpolation Spline interpolation Tropical geometry Polygonal chain Further reading Apps, P., Long, N., & Rees, R. (2014). Optimal piecewise linear income taxation. Journal of Public Economic Theory, 16(4), 523–545. References Real analysis Types of functions
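As a brief illustration of how a continuous piecewise linear function is evaluated from its knots, here is a minimal sketch (assuming NumPy; the knot positions echo the −3, 0, 3 breakpoints of the example above, and the values at the knots are illustrative assumptions):

```python
import numpy as np

# Breakpoints (knots) and the function values at them; between consecutive
# knots the interpolant is affine, which is exactly what np.interp evaluates.
knots  = np.array([-3.0, 0.0, 3.0])
values = np.array([ 0.0, 3.0, -3.0])

def f(x):
    # Note: np.interp holds the end values constant outside the knot range;
    # a piecewise linear function with rays would extrapolate linearly instead.
    return np.interp(x, knots, values)

print(f(-1.5))   # midway between the knots -3 and 0 -> 1.5
print(f(1.5))    # midway between the knots 0 and 3 -> 0.0
```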
Piecewise linear function
Mathematics
1,052
40,384,846
https://en.wikipedia.org/wiki/Cubic%20cupola
In 4-dimensional geometry, the cubic cupola is a 4-polytope bounded by a rhombicuboctahedron, a parallel cube, and, connecting them, 6 square prisms, 12 triangular prisms, and 8 triangular pyramids. Related polytopes The cubic cupola can be sliced off from a runcinated tesseract, on a hyperplane parallel to a cubic cell. The cupola can be seen in an edge-centered (B3) orthogonal projection of the runcinated tesseract. See also Cubic pyramid Octahedral cupola Runcinated tesseract References External links Segmentochora: cube || sirco, K-4.71 4-polytopes
Cubic cupola
Mathematics
146
39,385,804
https://en.wikipedia.org/wiki/BlackBerry%20Q5
The BlackBerry Q5 is the third BlackBerry 10 smartphone, unveiled at the BlackBerry Live 2013 Keynote on May 14, 2013. BlackBerry 10 is gesture based. The Q5 is targeted largely at emerging markets because of its lower-end specifications. It is available in black, white, red, pink and grey. Like the BlackBerry Q10, it has a QWERTY keyboard. Features In addition to the physical keyboard, there is the option to type using an on-screen keyboard. The keyboard has more space between the individual keys, allowing for ease of typing. The touchscreen is small but high-resolution, allowing the user to read and view images with ease. However, the camera on the device is only five megapixels, below the eight-megapixel norm at the time. Exclusive features include BlackBerry Hub, which allows users to view email, messages, and social network updates with a swipe to the side while using any application. Time Shift mode takes multiple photos in quick succession, allowing the user to isolate a single part of the photo and shift it between the captured frames. Availability The BlackBerry Q5 was first available in the United Arab Emirates and later in India and Canada. The target regions for this product are Europe, the Middle East, Africa, Asia and Latin America. India is the first country in Asia Pacific where this product was launched. In 2014, BlackBerry had a large portion of the smartphone market in India. Model comparison See also BlackBerry 10 List of BlackBerry 10 devices References External links Q5 Mobile phones introduced in 2013 Mobile phones with an integrated hardware keyboard Discontinued smartphones
BlackBerry Q5
Technology
323
22,254,915
https://en.wikipedia.org/wiki/European%20Summer%20School%20in%20Information%20Retrieval
The European Summer School in Information Retrieval (ESSIR) is a scientific event founded in 1990, which began a series of summer schools providing high-quality teaching of information retrieval on advanced topics. ESSIR is typically a week-long event consisting of guest lectures and seminars from invited lecturers who are recognized experts in the field. The aim of ESSIR is to give its participants a common ground in different aspects of Information Retrieval (IR). Maristella Agosti in 2008 stated that: "The term IR identifies the activities that a person – the user – has to conduct to choose, from a collection of documents, those that can be of interest to him to satisfy a specific and contingent information need." IR is a discipline with many facets and at the same time influences and is influenced by many other scientific disciplines. Indeed, IR ranges from computer science to information science and beyond; moreover, a large number of IR methods and techniques are adopted and absorbed by several technologies. The IR core methods and techniques are those for designing and developing IR systems, Web search engines, and tools for information storing and querying in Digital Libraries. IR core subjects are: system architectures, algorithms, formal theoretical models, and evaluation of the diverse systems and services that implement functionalities of storing and retrieving documents from multimedia document collections, and over wide area networks such as the Internet. ESSIR aims to give a deep and authoritative insight into the core IR methods and subjects along these dimensions, and also for this reason it is intended for researchers starting out in IR, for industrialists who wish to know more about this increasingly important topic, and for people working on topics related to management of information on the Internet. Two books have been prepared as readings in IR from editions of ESSIR; the first one is Lectures on Information Retrieval, the second one is Advanced Topics in Information Retrieval. ESSIR Editions The ESSIR series started in 1990, growing out of the successful experience of the Summer School in Information Retrieval (SSIR) conceived and designed by Maristella Agosti, University of Padua, Italy, and Nick Belkin, Rutgers University, U.S.A., for an Italian audience in 1989. Notes External links ESSIR Web site Charter of the ESSIR initiative ESSIR presentation page of the IMS Research Group (last updated in 2010, site with historical information) IMS Research Group, Department of Information Engineering – University of Padua, Italy Department of Information Engineering – University of Padua, Italy University of Padua, Italy Computer science conferences Information retrieval organizations Information technology organizations based in Europe Summer schools
European Summer School in Information Retrieval
Technology
520
1,085,422
https://en.wikipedia.org/wiki/Barfoed%27s%20test
Barfoed's test is a chemical test used for detecting the presence of monosaccharides. It is based on the reduction of copper(II) acetate to copper(I) oxide (Cu2O), which forms a brick-red precipitate. RCHO + 2Cu2+ + 2H2O → RCOOH + Cu2O↓ + 4H+ (Disaccharides may also react, but the reaction is much slower.) The aldehyde group of the monosaccharide, which normally forms a cyclic hemiacetal, is oxidized to the carboxylate. A number of other substances, including sodium chloride, may interfere. The test is named after the Danish chemist Christen Thomsen Barfoed and is primarily used in botany. The test is similar to the reaction of Fehling's solution with aldehydes. Composition Barfoed's reagent consists of a 0.33 molar solution of copper(II) acetate in 1% acetic acid solution. The reagent does not keep well and it is therefore advisable to make it up only when it is actually required. Procedure Barfoed's reagent is added dropwise to 2 mL of the given sample in a test tube; the mixture is boiled for 3 minutes and then allowed to cool. If a red precipitate appears, a monosaccharide is present. References Biochemistry detection methods Monosaccharides
Barfoed's test
Chemistry,Biology
309
70,343,570
https://en.wikipedia.org/wiki/Tim%20Hawarden
Timothy George Hawarden (24 December 1943 – 10 November 2009) was a South African astrophysicist known for his pioneering work on passive cooling techniques for space telescopes, for which he won NASA's Exceptional Technology Achievement Medal. Biography Hawarden was born in Mossel Bay, Cape Province, South Africa. He graduated from the University of Natal in 1966 with a BSc in Physics and Applied Mathematics, and then graduated from the University of Cape Town with an MSc in Astronomy in 1970 and a PhD in 1975 on old open clusters. While undertaking his PhD he worked as an optical astronomer at the Royal Observatory, Cape of Good Hope and then from 1972 as the Deputy Head of the Photometry Department at the South African Astronomical Observatory in Cape Town. In 1975 he worked as the Deputy Astronomer-in-Charge of the UK Schmidt Telescope at the Siding Spring Observatory in New South Wales, Australia. In 1978 he moved to work at the Royal Observatory in Edinburgh, Scotland, where he was based for the rest of his career. In 1981 he began working on the United Kingdom Infrared Telescope in Hawaii. In 1987 he moved to Hawaii and led the telescope's ambitious upgrades programme throughout the 1990s. He returned to Edinburgh in 2001 and became the UK Astronomy Technology Centre Project Scientist developing extremely large telescopes (ELT) before retiring in 2006 to care for his wife Frances. He remained active in the field of astronomy until his sudden death in Edinburgh in 2009. Passive cooling of space telescopes Hawarden was involved in the development of the Infrared Space Observatory as the Co-Investigator for the infrared camera (ISOCAM), but he considered the cryogenic cooling system "horrendously complicated". The dependency of infrared space telescopes on cryogenic cooling limited the telescope's lifespan and added significant weight. In the early 1980s Hawarden began developing the idea of using passive cooling for infrared space telescopes through a combination of radiators, sunshields, and by locating the telescope further from Earth. Having a telescope orbit the Sun–Earth L2 Lagrange point enables the sunshield to shelter the telescope from the radiant heat of the Sun, the Earth, and the Moon. A passively cooled telescope is significantly lighter and permits much larger optics and instruments. In 1989 Hawarden proposed such a telescope, the Passively Cooled Orbiting Infrared Observatory Telescope (POIROT), to the European Space Agency but the design was rejected. In 1991 Hawarden and Harley Thronson proposed a similar design to NASA for the Edison project but the proposal was also rejected. The ideas continued to face resistance, though some passive cooling was incorporated into the design of the 0.85 m diameter Spitzer Space Telescope launched in 2003. The ideas were later adopted in full for the 6.5 m diameter James Webb Space Telescope launched in 2021. In 2010 Hawarden was posthumously awarded the NASA Exceptional Technology Achievement Medal for his work on passive cooling techniques, the award citing "the breakthrough concepts that made possible the James Webb Space Telescope and its successors". The award was accepted on behalf of Hawarden's widow Frances by the Nobel-laureate physicist John C. Mather. References Astrophysicists 1943 births 2009 deaths South African astronomers Fellows of the Royal Astronomical Society University of Cape Town alumni South African emigrants to the United Kingdom People from Mossel Bay 20th-century astronomers 21st-century astronomers
Tim Hawarden
Physics
673
43,336,080
https://en.wikipedia.org/wiki/Photon%20underproduction%20crisis
The photon underproduction crisis is a cosmological discussion concerning the purported deficit between observed photons and predicted photons. The deficit, or underproduction crisis, is a theoretical problem arising from comparing observations of ultraviolet light emitted from known populations of galaxies and quasars to theoretical predictions of the amount of ultraviolet light required to reproduce the observed distribution of the hydrogen gas in the local universe in a cosmological simulation. The distribution of hydrogen gas was inferred using Lyman-alpha forest observations from the Hubble Space Telescope's Cosmic Origins Spectrograph. The amount of light from galaxies and quasars can be estimated from its effect on the distribution of hydrogen and helium in the regions between galaxies. Highly energetic ultraviolet photons can convert electrically neutral hydrogen gas into ionized gas. A team led by Juna Kollmeier reported an unexpected deficit of roughly 400% between the ionizing light from known sources and what the actual observations of intergalactic hydrogen require; that is, the observations call for roughly five times more ionizing photons than known sources supply. Kollmeier and her team wrote in their scientific report, "We examine the statistics of the low-redshift Lyman-alpha forest from smoothed particle hydrodynamic simulations in light of recent improvements in the estimated evolution of the cosmic ultraviolet background (UVB) and recent observations from the Cosmic Origins Spectrograph (COS). We find that the value of the metagalactic photoionization rate required by our simulations to match the observed properties of the low-redshift Lyman-alpha forest is a factor of 5 larger than the value predicted by state of the art models for the evolution of this quantity." Cosmological simulations start at very high cosmological redshift z (such as z=100 or larger) and are evolved to z=0. According to Benjamin D. Oppenheimer, who is one of the report's coauthors, "The simulations fit the data beautifully in the early universe, and they fit the local data beautifully if we're allowed to assume that this extra light is really there. It's possible the simulations do not reflect reality, which by itself would be a surprise, because intergalactic hydrogen is the component of the Universe that we think we understand the best." Kollmeier and her team state that "... either conventional sources of ionizing photons (galaxies and quasars) must contribute considerably more than current observational estimates or our theoretical understanding of the low-redshift universe is in need of substantial revision." A similar study, led by Michael Shull, found that the deficit is only a factor of two, not the factor of five previously claimed. A potential resolution to the photon underproduction crisis is presented by a series of recent papers. Khaire & Srianand showed that a metagalactic photoionization rate that is two to five times larger can be easily obtained using updated quasar and galaxy observations. Recent observations of quasars indicate that the quasar contribution to ultraviolet photons is twice that of previous estimates. The revised galaxy contribution is also three times higher. Furthermore, the Kollmeier GADGET-2 simulations did not include heating from active galactic nuclei (AGN) feedback. Including AGN feedback was shown to be an important element for heating in the low redshift intergalactic medium (IGM) (Gurvich, Burkhart, & Bird 2016). This implies that the low redshift COS data can be used to calibrate AGN feedback models in cosmological simulations.
See also Diffuse extragalactic background radiation References Extragalactic astronomy Unsolved problems in astronomy
Photon underproduction crisis
Physics,Astronomy
741
2,480,987
https://en.wikipedia.org/wiki/Kanthal%20%28alloy%29
Kanthal is the trademark for a family of iron-chromium-aluminium (FeCrAl) alloys used in a wide range of resistance and high-temperature applications. Kanthal FeCrAl alloys consist mainly of iron, chromium (20–30%) and aluminium (4–7.5%). The first Kanthal FeCrAl alloy was developed by Hans von Kantzow in Hallstahammar, Sweden. The alloys are known for their ability to withstand high temperatures and for having intermediate electric resistance. As such, the material is frequently used in heating elements. The trademark Kanthal is owned by Alleima AB. Characteristics For heating, resistance wire must be stable in air when hot. Kanthal FeCrAl alloy forms a protective layer of aluminium oxide (alumina). Aluminium oxide has high thermal conductivity but is an electrical insulator, so special techniques may be required to make good electrical connections. Ordinary Kanthal FeCrAl alloy has a melting point of about 1,500 °C. Special grades can be used at operating temperatures as high as about 1,425 °C. Depending on the specific composition, the resistivity is about 1.4 μΩ·m and the temperature coefficient of resistance is +49 ppm/K. Uses Kanthal is used in heating elements due to its flexibility, durability and tensile strength. Its uses are widespread, for example in toasters, home and industrial heaters, kilns and diffusion heaters (used in the making of crystalline silicon). In comparison to the other types of resistance wire used in vaping such as Nichrome, titanium-alloy and stainless steel, Kanthal is durable enough to withstand the temperatures required, but flexible and cheap enough to be practical for vaping purposes. See also Nichrome References External links The brand website for Kanthal products Resistance wire technical information tables Chromium alloys Ferrous alloys Refractory metals Swedish inventions de:Heizleiterlegierung#Kanthal
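To show how the quoted resistivity figure is used in practice, here is a minimal sketch of sizing a resistance-wire heating element (the wire dimensions and supply voltage are illustrative assumptions, not values from the source):

```python
import math

rho = 1.4e-6        # Kanthal FeCrAl resistivity, ohm*m (about 1.4 uOhm*m, as quoted above)
diameter = 0.5e-3   # wire diameter, m (0.5 mm, illustrative)
length = 2.0        # wire length, m (illustrative)

area = math.pi * (diameter / 2) ** 2   # cross-sectional area, m^2
resistance = rho * length / area       # R = rho * L / A

voltage = 12.0                         # supply voltage, V (illustrative)
power = voltage ** 2 / resistance      # dissipated power, P = V^2 / R

print(f"R = {resistance:.2f} ohm, P = {power:.1f} W")   # ~14.3 ohm, ~10.1 W
```

With the quoted temperature coefficient of +49 ppm/K, the resistance changes only a few percent over a temperature rise of several hundred kelvin, which is one reason FeCrAl wire is convenient for heating elements.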
Kanthal (alloy)
Chemistry
392
31,608,402
https://en.wikipedia.org/wiki/Self-righting%20mechanism
In robot combat, a self-righting mechanism or srimech (sometimes spelled as srimec or shrimech) is a device used to re-right a robot should it get flipped. Biohazard of BattleBots was the first robot to self-right. Military applications As of 2016, the U.S. Army Research Laboratory (ARL), based at Aberdeen Proving Ground, MD, developed self-righting robots for bomb defusal and reconnaissance. Listed as a 2004–2020 effort, the prototype was called CRAM, for compressible robot with articulated mechanisms. ARL scientists were led by Chad Kessens, and collaborated with researchers from the University of California, Berkeley, and Johns Hopkins University to develop a prototype. Cockroach exoskeletons inspired researchers to manufacture a robot that can move around rapidly in both open and confined spaces with self-righting capabilities. In 2016, ARL and its collaborators published additional research, "Cockroach-inspired winged robot reveals principles of ground-based dynamic self-righting", demonstrating a bio-inspired design. Researchers showed that robots can use insect body structures to achieve self-righting, as demonstrated in the rounded shell and mobile wings of the robot prototype. American Robot Wars: 1994–1997 Biohazard was the first robot to self-right in combat, against Vlad the Impaler in the 1996 tournament; however, since the match had already ended, it made no difference to who actually won. Terminal Frenzy had attempted to right itself when it earlier came up against Biohazard, but failed to do so. The next year Vlad the Impaler fought Biohazard again, and the former used its special pneumatic lifting arm to self-right numerous times, yet it still lost the judges' decision. UK Robot Wars Series 2-3 The first attempted self-right in the UK Robot Wars was by a robot called Chaos, during its Series 2 heat final. However, it was unable to do so. Later in the series, Cassius successfully righted itself with its pneumatic flipping arm, after Sir Killalot had flipped it over with his drill during the semi-final pinball trial. Cassius was flipped again in the Grand Final, but it self-righted and flipped Roadblock to win the eliminator. In Series 3, Chaos' successor Chaos 2 used its innovative rear-hinged flipper panel to catapult itself through the air and then land on its wheels, a technique that later became standard. Weapon srimechs The majority of flippers can double as srimechs. However, most flippers are powered by compressed gas and therefore have a limited number of uses. Some axes can also be used as srimechs; the first robot to successfully use an axe to self-right was Iron Awe in Robot Wars Series 4. List of robots with weapon srimechs Robots are listed alphabetically. The weapon indicated is the weapon used to self-right. Other methods of self-righting Some robots had weapons that couldn't be used for self-righting, and so incorporated separate dedicated srimechs. These varied in design and effectiveness; examples include Razer's side wings, Hypno-Disc's srimech bar and Panic Attack's top lid. These did not detract from the weapon, but could easily break if damaged repeatedly, and also took up some of the robot's precious weight allowance - Razer, after the addition of its wings, had to have over 450 holes drilled into it to keep it within the weight limit. Body shape A rarer and more difficult type of srimech was to design the robot's body in such a way that it could roll back onto its wheels when flipped. Sometimes known as a "rollover" design, robots with this ability included Mega Morg.
While a fairly ingenious solution that did not require any additional power or mechanics, the approach still had flaws. It was extremely difficult to get the design perfect, and if flipped without enough momentum or flipped from the front or back, the robot would be left stranded. Mega Morg's predecessor, The Morgue, was also defeated in Series 4 by Firestorm when it was flipped against the arena wall, preventing it from rolling over. Some robots were not true rollover designs, but had other design elements intended to aid them in self-righting. Examples include the rounded Lexan panels on the rear of Behemoth, without which it would have stranded itself on its back when self-righting, and Spikasaurus' roll-bars. These were often effective but, like active srimechs, they were vulnerable to damage. See also Gömböc References Robot combat Robot control
Self-righting mechanism
Engineering
968
6,855,511
https://en.wikipedia.org/wiki/Electrical%20room
An electrical room is a technical room or space in a building dedicated to electrical equipment. Its size is usually proportional to the size of the building; large buildings may have a main electrical room and subsidiary electrical rooms. The equipment housed may be for power distribution or for communications. Electrical rooms typically house the following equipment: Electric switchboards Distribution boards Circuit breakers and disconnects Motor control centers Transformers Busbars Electricity meters Backup batteries in a Battery room Fire alarm control panels Distribution frames In large building complexes, the primary electrical room may house an indoor electrical substation. Construction features The construction features of an electrical room vary depending on the scope of the equipment to be installed. Floors may be reinforced to support heavy transformers and switchgear. Walls and ceilings may have to support a heavy cable tray system or busbars. Additional ventilation or air conditioning may be needed, since electrical apparatus gives off heat but the temperature must not rise beyond the tolerance of the equipment. Double doors may be installed to allow for maintenance of large equipment. If utility service entrance equipment and metering are present in the room, special provisions may be made for access by utility personnel. Fire detection and fire suppression systems, such as carbon dioxide, may be installed. A large electrical room may have extensive provisions for grounding (earthing) and bonding enclosures of electrical equipment to prevent stray voltage and danger of electric shock, even during faults in the electrical system. Lightning protection requires different measures than protection from power-frequency faults. Electrical rooms may have electromagnetic shielding to prevent interference to nearby sensitive audio or video equipment. In large facilities access control systems may control admission to the room. Regulations Layout details and construction of electrical rooms will be controlled by local building code and electrical code regulations. Requirements for an electrical room relate to fire safety and electrical hazards. An electrical room is usually required to be secured from access by unauthorized persons; these rules are especially strict where equipment within the room has exposed live terminals. Regulations may require two separate means of exit from a room where the power rating of circuits exceeds some threshold, to allow for quick exit in an emergency. Rooms containing oil-filled equipment may be required to have fire-resistant construction or active fire suppression equipment in the room and may be designated as an electrical vault. Since power distribution often requires large numbers of electrical cables, special measures for fire resistance of cables and cable trays may be also specified by regulations. In industrial buildings that handle flammable gases or liquids, or combustible dusts, special electrical rooms may be prepared that have ventilation and other measures to prevent an explosion hazard that would otherwise exist with electrical equipment in hazardous areas. For large installations, it may be less costly overall to use a special room than to install a large number of devices that are resistant to the hazardous conditions. Similarly, in wet or corrosive environments, electrical equipment may be separated in a room that can be protected from the atmospheric conditions.
Building code and electrical code regulations will dictate minimal working space around equipment to allow safe access during maintenance. Practical design of an electrical room will consider layout of the initial equipment and allow for additions over the economic life of the facility. See also Utility vault Electrical enclosure References Rooms Electric power distribution
Electrical room
Engineering
647
32,411,640
https://en.wikipedia.org/wiki/Doi%E2%80%93Naganuma%20lifting
In mathematics, the Doi–Naganuma lifting is a map from elliptic modular forms to Hilbert modular forms of a real quadratic field, introduced by and . It was a precursor of the base change lifting. It is named for Japanese mathematicians Kōji Doi (土井公二) and Hidehisa Naganuma (長沼英久). See also Saito–Kurokawa lift, a similar lift to Siegel modular forms References Modular forms
Doi–Naganuma lifting
Mathematics
93
74,537,750
https://en.wikipedia.org/wiki/Thailand%20Tokamak-1
Thailand Tokamak-1 (or TT-1) is a small research tokamak operated by the Thailand Institute of Nuclear Technology in Nakhon Nayok province, Thailand. The tokamak was built in collaboration with the Institute of Plasma Physics of the Chinese Academy of Sciences and features an upgraded design based on the HT-6M tokamak developed in 1984. The first successful test of the device occurred on 21 April 2023. TT-1 officially began operations on 25 July 2023 and became the first tokamak to operate in Southeast Asia. References Tokamaks Nuclear technology in Thailand
Thailand Tokamak-1
Physics
124
12,643,716
https://en.wikipedia.org/wiki/Service%20innovation
Service innovation is used to refer to many things. These include, but are not limited to: Innovation in services, in service products – new or improved service products (commodities or public services). Often this is contrasted with "technological innovation", though service products can have technological elements. This sense of service innovation is closely related to service design and "new service development". Innovation in service processes – new or improved ways of designing and producing services. This may include innovation in service delivery systems, though often this will be regarded instead as a service product innovation. Innovation of this sort may be technological or expertise-based, or a matter of work organization (e.g. restructuring of work between professionals and paraprofessionals). Innovation in service firms, organizations, and industries – organizational innovations, as well as service product and process innovations, and the management of innovation processes, within service organizations. Definitions The Finnish research agency TEKES defines service innovation as "a new or significantly improved service concept that is taken into practice. It can be for example a new customer interaction channel, a distribution system or a technological concept or a combination of them. A service innovation always includes replicable elements that can be identified and systematically reproduced in other cases or environments. The replicable element can be the service outcome or the service process as such or a part of them. A service innovation benefits both the service producer and customers and it improves its developer's competitive edge. A service innovation is a service product or service process that is based on some technology or systematic method. In services however, the innovation does not necessarily relate to the novelty of the technology itself but the innovation often lies in the non-technological areas. Service innovations can for instance be new solutions in the customer interface, new distribution methods, novel application of technology in the service process, new forms of operation with the supply chain or new ways to organize and manage services." Another definition proposed by Van Ark et al. (2003) states it as a "new or considerably changed service concept, client interaction channel, service delivery system or technological concept that individually, but most likely in combination, leads to one or more (re)new(ed) service functions that are new to the firm and do change the service/good offered on the market and do require structurally new technological, human or organizational capabilities of the service organization." This definition covers the notions of technological and non-technological innovation. Non-technological innovations in services mainly arise from investment in intangible inputs. Service Innovation Research Much of the literature on what makes for successful innovations of this kind comes from the New Service Development research field (e.g. Johne and Storey, 1998; Nijssen et al., 2006). Service design practitioners have also extensively discussed the features of effective service products and experiences. One of the key aspects of many service activities is the high involvement of the client/customer/user in the production of the final service. Additionally, firms cooperate with both horizontal (e.g., competitors) and vertical (e.g., suppliers) business partners in order to develop relevant service innovations.
Without this co-production (i.e. interactivity of service production), the service would often not be created. This co-production, together with the intangibility of many service products, causes service innovation to often take forms rather different from those familiar through studies of innovation in manufacturing. Innovation researchers have, for this reason, stressed that much service innovation is hard to capture in traditional categories like product or process innovation, and that its effects are diverse. The co-production process, and the interactions between service provider and client, can also form the focus of innovation. A growing number of professional associations have service sections that promote service innovation research, including INFORMS, ISSIP, and others. Areas of innovation – den Hertog's model Thus den Hertog (2000), who identifies four "dimensions" of service innovation, takes quite a different direction to much standard innovation theorizing. The Service Concept refers to a service concept that is new to its particular market – a new service in effect, or in Edvardsson's (1996, 1997) terminology, a "new value proposition". Many service innovations involve fairly intangible characteristics of the service, and others involve new ways of organizing solutions to problems (be these new or familiar ones). Examples might include new types of bank account or information service. In some service sectors, such as retail, there is much talk about "formats", such as the organization of shops in different ways (more or less specialized, more or less focused on quality or cost-saving, etc.). The Client Interface refers to innovation in the interface between the service provider and its customers. Clients are often highly involved in service production, and changes in the way in which they play their roles and are related to suppliers can be major innovations for many services. Examples might include a greater amount of self-service for clients visiting service organizations. There is a French literature on service innovation that focuses especially on this type of innovation, identifying it as innovation in "servuction". The Service Delivery System also often relates to the linkage between the service provider and its client, since delivery does involve an interaction across this interface. However, there are also internal organizational arrangements that relate to the ways in which service workers perform their job so as to deliver the critical services. Much innovation concerns the electronic delivery of services, but we can also think of, for instance, transport and packaging innovations (e.g. pizza delivery). An emerging concept of SDP is the idea of taking a "factory" approach to Service Innovation. A "service factory" approach is a standardized and industrialized environment for more effective service innovation, development and operations for the IP era. Technological Options most resemble familiar process innovation in manufacturing sectors. New information technology is especially important to services, since it allows for greater efficiency and effectiveness in the information-processing elements that are, as we have seen, highly prevalent in services sectors. We also often see physical products accompanying services, such as customer loyalty cards and "smart" RFID cards for transactions, and a wide range of devices for communication services.
In practice, the majority of service innovations will almost certainly involve various combinations of these four dimensions. For instance: A new IT system (technology dimension) may be used to enable customer self-service (interface dimension), as in the case of a bank contacting its customers. The ability to track one's order or the location of an item that one has posted or is expecting to receive. Services may be delivered electronically, as in the case of much online banking and cash withdrawals from ATMs. A new service allowing a client to examine various options and calculate what they would be paying with different types of accounts. A new service will often require a new service delivery system, and changes at the client interface. An elaboration of this model to suggest six dimensions of innovation was developed by Green, Miles and Rutter in the course of work on creative sectors. As well as Technology and Production process, four dimensions were specified whose linkages are very strong in creative sectors like videogames, advertising and design: Cultural Product, Cultural Concept, Delivery and User Interface. The service innovation literature is surprisingly poorly related to the literature on new product development, which has spawned a line of study on new service development. This often focuses on the managerially important issue of what makes for successful service innovation. See for example Johne and Storey (1998), who reviewed numerous New Service Development studies. Services Features and Innovation Potential: Miles's (1993) influential article on 'Service Innovation' Ian Miles of the Manchester Institute of Innovation Research (MIoIR), The University of Manchester, is one of the pioneering scholars in the study of 'Service Innovation'. He coined the term in his 1993 article in the journal Futures (Vol. 25, No. 6, pp. 653–672). He listed a series of characteristic features of services, and associated these with particular types of innovation. Such innovations are often aimed at overcoming problems associated with service characteristics like the difficulty in demonstrating the service to the client, or the problems in storing and building up stocks of the service. After Miles (1993), numerous studies were made; one of the more recent studies that reaches similar conclusions draws on a qualitative survey of service organizations by Candi (2007). Note that the “product” related innovations below have a lot in common with new service development as discussed above. In the following list, features of services are linked to innovation strategies by the symbol >>>. Features of services associated with service production Technology and Plant (Low levels of capital equipment; heavy investment in buildings >>> Reduce costs of buildings by use of teleservices, toll-free phone numbers, etc.) Labor (Some services highly professional, especially those requiring interpersonal skills; others relatively unskilled, often involving casual or part-time labor. Specialist knowledge may be important, but rarely technological skills (other than Information Technology) >>> Reduce reliance on expensive and scarce skills by use of expert systems and related innovations; relocation of key operations to areas of low labor costs (using telecommunications to maintain coordination).) Organization of Labor Process (Workforce often engaged in craft-like production with limited management control of details of work. >>> Use IT to monitor workforce (e.g. 
tachometers and mobile communications for transport staff; aim for 'flatter' organizational structures, with data from field and front-office workers directly entering databases and thence Management Information Systems.) Features of Production (Production is often non-continuous and economies of scale are limited >>> Standardize production (e.g. 'fast-food' chains), reorganize in more assembly-line-like fashion with more standard components and a higher division of labor.) Organization of Industry (Some services state-run public services; others often small-scale with a high preponderance of family firms and self-employed >>> Externalization and privatization of public services; combination of small firms using network technologies; IT-based service management systems.) Features of services associated with service product Nature of Product (Immaterial, often information-intensive; hard to store or transport; process and product hard to distinguish. >>> Add material components (e.g. client cards, membership cards). Use telematics for ordering, reservation, and if possible – delivery. Maintain elements of familiar 'user-interfaces'.) Features of Product (Often customized to consumer requirements. >>> Use of Electronic Data Interchange or the Internet for remote input of client details; use software to record client requirements and match them to service products.) Features of services associated with services consumption Delivery of Product (Production and consumption coterminous in time and space; often client or supplier has to move to meet the other party. >>> Telematics; Automated Teller Machines and equivalent information services.) Role of Consumer (Services are consumer-intensive, requiring inputs from consumer into design/production process. >>> Consumer use of standardized menus and new modes of delivering orders.) Organization of Consumption (Often hard to separate production from consumption; self-service in formal and informal economies commonplace. >>> Increased use of self-service, utilizing existing consumer (or intermediate producer) technology – e.g. telephones, PCs – and user-friendly software interfaces.) Features of services associated with services markets Organization of Markets (Some services delivered via public sector bureaucratic provision; some costs are invisibly bundled with goods (e.g. retail sector). >>> Introduction of quasi-markets and/or privatization of services; new modes of charging (e.g. pay-per-use); new reservation systems; more volatility in pricing using features of EPOS and related systems.) Regulation (Professional regulation common in some services. >>> Use of databases by regulatory institutions and service providers to supply and examine performance indicators and diagnostic evidence.) Marketing (Difficult to demonstrate products in advance. >>> Guarantees; demonstration packages (e.g. demo software, shareware, trial periods of use).) Additionally, a number of general tendencies in the innovation process in services have been noted. These include: The industrialization of services, involving efforts to standardize services, to yield service products of predictable characteristics and quality, with economies of scale and improved delivery times. This typically involves the introduction of high levels of division of labor, with the use of pre-packaged and automated elements (such as pre-prepared meals, word-processed templates for form letters, and the like). Standardization of the service products has become a competitive strategy for many firms. 
Organizational change as innovation: survey data suggest that services place particular emphasis on organizational change. Many innovations in services involve combinations of specific new technologies together with organizational change. The role of organizational innovations in services is very apparent – developments such as supermarkets and other self-service facilities are significant in the development of modern service industries. Such organizational innovations will often have a technological dimension, whether this be very basic (e.g. shopping trolleys) or relatively high-tech (EPOS – electronic point of sale – equipment or ATMs linked into networks). An important trajectory of organizational change has been towards self-servicing, without necessarily following this development all the way toward the vision of the client sitting at home interacting with the service provider via a remote terminal. Instead, reorganization of the facilities of the service provider permits customer self-service in the service establishment, saving on labor costs and often increasing user satisfaction, as it is possible to make decisions anonymously and at one's own pace. Beyond self-servicing, the involvement of clients as coproducers is particularly important for knowledge-intensive business services, with the emphasis being laid upon clients' role in advancing the expertise of service suppliers, and identifying new avenues for its application. Web 2.0 has brought “user innovation” to the fore in electronic services. Service Innovation using IoT and Big Data Analytics In the traditional product-service system (PSS) business model, industries develop products with value-added services rather than single products, and provide their customers with the services they need. In this relationship, the market goal of manufacturers is not one-time product sales, but continuous profit from customers through a total service solution that can satisfy unmet customer needs. Most PSS systems focus on ‘human-generated or human-related data’ instead of ‘machine-generated data or industrial data’, which may include machine controllers, sensors, manufacturing systems, etc. Early work during the 1990s used web-based product monitoring for remote product services, including GM OnStar telematics, Otis Remote Elevator Maintenance (REM), and GE Medical InSite. Service innovation and public policy In recent years policy makers have begun to consider the potential for promoting services innovation as part of their economic development strategies. Such consideration has, in part, been driven by the growing contribution that service activities make to national and regional economies. It also reflects the emerging recognition that traditional policy measures such as R&D grants and technology transfer supports have been developed from a manufacturing perspective of the innovation process. The European Commission and the OECD have been particularly active in seeking to generate reflection on services innovation and its policy implications. This has resulted in studies such as the OECD's reports on knowledge-intensive services, and the European Commission Expert Group report on services innovation – the report of the group, "Fostering Innovation in Services" – as well as various TrendChart studies. The European Commission has also launched a number of Knowledge Intensive Services Platforms designed to act as laboratories for new public policies for services innovation. 
Few economic development agencies at the member state level, and fewer still at the regional level, have translated this new thinking on services innovation into policy action. Finland is an exception, where knowledge-intensive business services have been a focus of much regional work (especially in the Uusimaa region). Finland has been active in thinking about the policy implications of services innovation. This has seen TEKES – the Finnish Funding Agency for Technology and Innovation – launch the SERVE initiative, designed to support ‘Finnish companies and research organizations in the development of innovative service concepts that can be reproduced or replicated and where some technology or systematic method is applied.’ Germany has also undertaken initiatives for services R&D. Canada and Norway have programs as well. Ireland has been considering a services-focused innovation policy, with Forfás – its national policy and advisory board for enterprise, trade, science, technology and innovation – having undertaken a review of Ireland's existing policy and support measures for innovation, and outlined options for a new policy and framework environment in support of service innovation activity. At the regional level, limited information is available on how Europe's regions are responding to the challenges presented by service innovation. CM International has recently published a European survey on services innovation and regional policy responses. The results of this suggest that very few regions in France, the UK and Ireland have an explicit focus on services and innovation. Many do, however, express a desire to address this issue in the future. Notes References Business Week, March 29, 2007 Tekes - Serve - Innovative Services Technology Programme 2006-2010 Tekes: Finnish Funding Agency for Technology and Innovation New service development: a review of the literature and annotated bibliography See for more recent thinking, Miles I. (2001) Services Innovation: A Reconfiguration of Innovation Studies (University of Manchester: PREST discussion paper DP01-05); Miles Ian (2004) Service Innovation, a book chapter in Fagerberg et al. (Eds.) (2004), Oxford Handbook of Innovation, Oxford: Oxford University Press; and his web site at Manchester Institute of Innovation Research (MIoIR), MBS, The University of Manchester. B. van Ark et al. (2003) "Services Innovation, Performance and Policy: A Review", June 2003, Research Series No. 6, The Hague Survey is available on cm-intl.com/en External links USE Global Report A global research on Service Innovation. Generation of Service Innovation through Customer Integration in the Music Industry (Full text download) International Society for Service Innovation Professionals Design Entrepreneurship Innovation Innovation economics Political economy Product development Public economics Systems thinking Services sector of the economy Services marketing
Service innovation
Engineering
3,755
5,100,302
https://en.wikipedia.org/wiki/List%20of%20DVD%20manufacturers
This aims to be a complete list of DVD manufacturers, although it may not be complete or up to date. Strictly speaking, it is a list of brand names under which DVDs are sold rather than a list of the companies that actually manufacture the discs. A Aiwa Akai Alba Amazon Amstrad Apex Digital Apple ACCURA Acme Acer Allied Electronics Pte Limited Asus B Bang & Olufsen BenQ Bose Bush Beyond C CMC Magnetics Citizen Electronics Co., Ltd. Craig Electronics Curtis International Ltd. D Daewoo Electronics Denon Dell E Emerson F Facebook Funai Fukuda G GE Google Go Electronics Grundig H Harman/kardon Hitachi Hewlett-Packard I Imation J Jodie JVC K KDS L Lenovo LG LiteOn Loewe M Magnavox Marantz Maxell Medion Memorex Microsoft Windows Mitsubishi Electric Moser Baer Mustek Systems, Inc. N NEC O Onn Oppo Orion Electric P Panasonic Philips Pioneer ProScan Pressing-Media R RCA Ritek Ricoh S Samsung Sanyo Sharp Sony Sylvania Symphonic SM Pictures T Teac Technics Technika Thomson Toshiba U U-Tech Media Corporation Ultradisc Unis Company Limited UMC (Universal Media Corporation) V Verbatim Corporation W Weltec Y Yamaha Z Zenith See also DVD References Computing-related lists Technology-related lists DVD
List of DVD manufacturers
Technology
313
46,458,358
https://en.wikipedia.org/wiki/Anabaseine
Anabaseine (3,4,5,6-tetrahydro-2,3′-bipyridine) is an alkaloid toxin produced by nemertine worms and Aphaenogaster ants. It is structurally similar to nicotine and anabasine. Similarly, it has been shown to act as an agonist on most nicotinic acetylcholine receptors in the central nervous system and peripheral nervous system. Mechanism of action The iminium form of anabaseine binds to most nicotinic acetylcholine receptors in both the peripheral nervous system and central nervous system. However, it has a higher binding affinity for brain receptors containing the α7 subunit, as well as for skeletal muscle receptors. Binding causes the depolarization of neurons and induces the release of both dopamine and norepinephrine. Biological effects Anabaseine causes paralysis in crustaceans and insects, but not in vertebrates, presumably by acting as an agonist on peripheral neuromuscular nicotinic acetylcholine receptors. Structure The anabaseine molecule consists of a non-aromatic tetrahydropyridine ring connected to the third carbon of a 3-pyridyl ring. It can exist in three forms at physiological pH: a ketone, imine, or iminium structure. Due to conjugation between the imine and 3-pyridyl ring, anabaseine exists as a nearly coplanar molecule. Synthesis Späth and Mamoli first synthesized anabaseine in 1936. The researchers reacted benzoic anhydride with δ-valerolactam to yield N-benzoylpiperidone, which was then reacted with nicotinic acid ethyl ester to produce α-nicotinoyl-N-benzoyl-2-piperidone. This product is then decarboxylated and undergoes ring closure and amide hydrolysis to form anabaseine. Additional synthetic strategies have since been developed by Bloom, Zoltewicz, Smith, and Villemin. Derivatives Due to anabaseine’s fairly non-specific binding to nicotinic acetylcholine receptors, the molecule was largely discarded as a useful tool in research or medicine. However, anabaseine derivatives have been identified with a more selective α7 binding profile. One such derivative (GTS-21, 3-(2,4-dimethoxybenzylidene)-anabaseine) has been studied as a drug candidate for cognitive and memory deficits, particularly those associated with schizophrenia; it has been studied in phase II clinical trials without progression to phase III. Moreover, modification of the anabaseine pyridine nucleus has yielded new derivatives with binding and functional selectivity for the α3β4 nicotinic acetylcholine receptor subtype. References Neuropharmacology Nicotinic agonists Pyridine alkaloids Drug discovery 3-Pyridyl compounds
Anabaseine
Chemistry,Biology
638
1,300,939
https://en.wikipedia.org/wiki/Infinite-dimensional%20optimization
In certain optimization problems the unknown optimal solution might not be a number or a vector, but rather a continuous quantity, for example a function or the shape of a body. Such a problem is an infinite-dimensional optimization problem, because a continuous quantity cannot be determined by a finite number of degrees of freedom. Examples Find the shortest path between two points in a plane. The variables in this problem are the curves connecting the two points. The optimal solution is of course the line segment joining the points, if the metric defined on the plane is the Euclidean metric. Given two cities in a country with many hills and valleys, find the shortest road going from one city to the other. This problem is a generalization of the above, and the solution is not as obvious. Given two circles which will serve as top and bottom for a cup of given height, find the shape of the side wall of the cup so that the side wall has minimal area. Intuition would suggest that the cup must have a conical or cylindrical shape; this is false. The actual minimal surface is the catenoid. Find the shape of a bridge capable of sustaining a given amount of traffic using the smallest amount of material. Find the shape of an airplane which bounces away most of the radio waves from an enemy radar. Infinite-dimensional optimization problems can be more challenging than finite-dimensional ones. Typically one needs to employ methods from partial differential equations to solve such problems. Several disciplines which study infinite-dimensional optimization problems are calculus of variations, optimal control and shape optimization. See also Semi-infinite programming References David Luenberger (1997). Optimization by Vector Space Methods. John Wiley & Sons. Edward J. Anderson and Peter Nash, Linear Programming in Infinite-Dimensional Spaces, Wiley, 1987. M. A. Goberna and M. A. López, Linear Semi-Infinite Optimization, Wiley, 1998. Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013. Functional analysis Optimization in vector spaces
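To see why the first example above is infinite-dimensional, it helps to write it as the minimization of a functional. The following worked sketch, restricted for simplicity to curves expressible as graphs y(x), is the standard calculus-of-variations argument rather than anything specific to this article:

```latex
% Shortest path between (a, A) and (b, B), restricted to curves
% that are graphs y(x) with fixed endpoints. The unknown is the
% function y itself, not finitely many numbers, which is what
% makes the problem infinite-dimensional.
\[
  \min_{y}\; J[y] = \int_a^b \sqrt{1 + y'(x)^2}\,\mathrm{d}x,
  \qquad y(a) = A,\quad y(b) = B.
\]
% The Euler--Lagrange equation for L(y, y') = \sqrt{1 + y'^2}:
\[
  \frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\partial L}{\partial y'}\right)
  - \frac{\partial L}{\partial y}
  = \frac{\mathrm{d}}{\mathrm{d}x}\!\left(\frac{y'}{\sqrt{1 + y'^2}}\right) = 0
  \;\Longrightarrow\; y'(x) = \text{constant},
\]
% so the optimizer is the straight line segment, as stated above.
```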
Infinite-dimensional optimization
Mathematics
411
48,287,022
https://en.wikipedia.org/wiki/List%20of%20Ultrabook%20models
This is a list of Ultrabook models. Huron River Chief River Shark Bay Notes: * This component is upgradable. That is, the manufacturer allows the user to customize/upgrade this component of the ultrabook at time of purchase to one of several better options for an increase in price. Some manufacturers or stores charge more, and will require wait time and/or home delivery for customized models. For more details on the available upgrades for a model, click on the references listed next to the model name. References Ultrabook models Ultrabooks
List of Ultrabook models
Technology
111
50,900,171
https://en.wikipedia.org/wiki/Branched-chain%20fatty%20acid
Branched-chain fatty acids (BCFA) are usually saturated fatty acids with one or more methyl branches on the carbon chain. BCFAs are most often found in bacteria, but can be found in nattō, dairy, and the vernix caseosa of human infants and California sea lions, where they may play a role in fostering the development of their intestinal microbiota. Another waxy animal material containing BCFAs is lanolin. Branched-chain fatty acids are considered to be responsible for the smell of mutton, and a higher content causes consumers to dislike the smell of lamb meat. Branched-chain fatty acids are synthesized by the branched-chain fatty acid synthesizing system. References Fatty acids
Branched-chain fatty acid
Chemistry
141
891,481
https://en.wikipedia.org/wiki/123%20%28number%29
123 (one hundred [and] twenty-three) is the natural number following 122 and preceding 124. In mathematics 123 is a Lucas number. It is the eleventh member of the Mian–Chowla sequence. Along with 6, 123 is one of only two positive integers that is simultaneously two more than a perfect square and two less than a perfect cube (123 = 112 + 2 = 53 - 2). 123 is the first whole number containing numbers from 1 to 3. In religion In numerology, the sequence of 123 is associated with progress and the Holy Trinity, so it may be referred to as the Trinity of Progress or Triad of Progress. References Integers
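The arithmetic claims above are easy to verify mechanically. Here is a short Python check, added for illustration only and not drawn from the article's sources:

```python
# Verify two number-theoretic properties of 123.

def lucas(n: int) -> int:
    """Return the n-th Lucas number (L0 = 2, L1 = 1)."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# 123 appears in the Lucas sequence: 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, ...
assert any(lucas(n) == 123 for n in range(20))

# 123 is two more than a perfect square and two less than a perfect cube.
assert 11**2 + 2 == 123
assert 5**3 - 2 == 123

print("all checks pass")
```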
123 (number)
Mathematics
136
21,194,600
https://en.wikipedia.org/wiki/STRING
In molecular biology, STRING (Search Tool for the Retrieval of Interacting Genes/Proteins) is a biological database and web resource of known and predicted protein–protein interactions. The STRING database contains information from numerous sources, including experimental data, computational prediction methods and public text collections. It is freely accessible and it is regularly updated. The resource also serves to highlight functional enrichments in user-provided lists of proteins, using a number of functional classification systems such as GO, Pfam and KEGG. The latest version, 11b, contains information on about 24.5 million proteins from more than 5000 organisms. STRING has been developed by a consortium of academic institutions including CPR, EMBL, KU, SIB, TUD and UZH. Usage Protein–protein interaction networks are an important ingredient for the system-level understanding of cellular processes. Such networks can be used for filtering and assessing functional genomics data and for providing an intuitive platform for annotating structural, functional and evolutionary properties of proteins. Exploring the predicted interaction networks can suggest new directions for future experimental research and provide cross-species predictions for efficient interaction mapping. Features The data is weighted and integrated, and a confidence score is calculated for all protein interactions. Results of the various computational predictions can be inspected from different designated views. There are two modes of STRING: protein mode and COG mode. Interactions described in one organism are propagated by inference of orthology to proteins in other organisms. A web interface is available to access the data and to give a fast overview of the proteins and their interactions. A plug-in for Cytoscape to use STRING data is available. Another way to access STRING data is through the application programming interface (API), by constructing a URL that contains the request (see the example at the end of this entry). Data sources Like many other databases that store protein association knowledge, STRING imports data from experimentally derived protein–protein interactions through literature curation. Furthermore, STRING also stores computationally predicted interactions from: (i) text mining of scientific texts, (ii) interactions computed from genomic features, and (iii) interactions transferred from model organisms based on orthology. All predicted or imported interactions are benchmarked against a common reference of functional partnership as annotated by KEGG (Kyoto Encyclopedia of Genes and Genomes). Imported data STRING imports protein association knowledge from databases of physical interaction and databases of curated biological pathway knowledge (MINT, HPRD, BIND, DIP, BioGRID, KEGG, Reactome, IntAct, EcoCyc, NCI-Nature Pathway Interaction Database, GO). Links are supplied to the originating data of the respective experimental repositories and database resources. Text mining A large body of scientific texts (SGD, OMIM, FlyBase, PubMed) is parsed to search for statistically relevant co-occurrences of gene names. Predicted data Neighborhood: similar genomic context in different species suggests a similar function of the proteins. Fusion–fission events: proteins that are fused in some genomes are very likely to be functionally linked, even in other genomes where the genes are not fused. 
Co-occurrence: proteins that have a similar function or occur in the same metabolic pathway must be expressed together and have similar phylogenetic profiles. Coexpression: predicted associations between genes based on observed patterns of simultaneous expression. References External links STRING site STITCH website, related database on interactions of proteins with small molecules Biochemistry databases Systems biology
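As an illustration of the URL-based API access described above, the following minimal Python sketch builds such a request using the widely used requests library. The endpoint path, method name, and parameters follow the pattern publicly documented on the STRING website, but they are assumptions here and should be verified against the current API documentation before use:

```python
# Minimal sketch of querying the STRING API for an interaction network.
# Endpoint and parameter names follow the publicly documented pattern
# (https://string-db.org/help/api/); verify against the current docs.
import requests

BASE_URL = "https://string-db.org/api"
OUTPUT_FORMAT = "tsv"      # other documented formats include json and xml
METHOD = "network"         # retrieve the interaction network for a protein

params = {
    "identifiers": "TP53",   # protein name(s); see the API docs for separators
    "species": 9606,         # NCBI taxonomy identifier (9606 = Homo sapiens)
    "required_score": 700,   # minimum combined confidence score (0-1000)
}

response = requests.get(f"{BASE_URL}/{OUTPUT_FORMAT}/{METHOD}", params=params)
response.raise_for_status()

# Each TSV line describes one known or predicted interaction,
# including the combined confidence score discussed above.
for line in response.text.splitlines()[:5]:
    print(line)
```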
STRING
Chemistry,Biology
699
40,333,066
https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20Fran%C3%A7aise%20de%20G%C3%A9nie%20des%20Proc%C3%A9d%C3%A9s
The Société Française de Génie des Procédés (French Society of Process Engineering) or SFGP is a French organization for chemical engineers. It is a member of the European Federation of Chemical Engineering, for which it acts as joint secretariat, and of the Fédération Française pour les sciences de la Chimie (FFC). It publishes a technical journal, "Récents progrès en Génie des Procédés", and a members' newsletter, "Procédique" (first published April 1988), and organizes a congress every other year. As of 2014, its membership was in excess of 600. Its history dates back to a congress in 1987, the 1er Congrès Français de Génie des Procédés, and the formation the following year of the Groupe Français de Génie des Procédés (GFGP), which had 340 members in 1989 and was formally transformed into the present organization in 1997. Its mission statement is to: promote process engineering; promote exchanges between academics, trainers and researchers, manufacturers developing and operating processes, engineering companies and suppliers at the national, European and global levels; build a network of experts to respond to societal challenges and the innovation needs of the process industries; and represent the profession before political and institutional decision-makers. References Chemical engineering organizations Chemical industry in France Engineering societies based in France Organizations established in 1997 1997 establishments in France
Société Française de Génie des Procédés
Chemistry,Engineering
269
22,203
https://en.wikipedia.org/wiki/Organic%20compound
Some chemical authorities define an organic compound as a chemical compound that contains a carbon–hydrogen or carbon–carbon bond; others consider an organic compound to be any chemical compound that contains carbon. For example, carbon-containing compounds such as alkanes (e.g. methane, CH₄) and their derivatives are universally considered organic, but many others are sometimes considered inorganic, such as halides of carbon without carbon-hydrogen and carbon-carbon bonds (e.g. carbon tetrachloride, CCl₄), and certain compounds of carbon with nitrogen and oxygen (e.g. the cyanide ion CN⁻, hydrogen cyanide HCN, chloroformic acid ClCO₂H, carbon dioxide CO₂, and the carbonate ion CO₃²⁻). Due to carbon's ability to catenate (form chains with other carbon atoms), millions of organic compounds are known. The study of the properties, reactions, and syntheses of organic compounds constitutes the discipline known as organic chemistry. For historical reasons, a few classes of carbon-containing compounds (e.g., carbonate salts and cyanide salts), along with a few other exceptions (e.g., carbon dioxide, and even hydrogen cyanide despite the fact it contains a carbon-hydrogen bond), are generally considered inorganic. Other than those just named, little consensus exists among chemists on precisely which carbon-containing compounds are excluded, making any rigorous definition of an organic compound elusive. Although organic compounds make up only a small percentage of Earth's crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically-produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high pressure and temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically. In chemical nomenclature, an organyl group, frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom. Definition For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates (excluding carbonate esters), simple oxides of carbon (for example, CO and CO₂) and cyanides are generally considered inorganic compounds. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes and carbon nanotubes are also excluded because they are simple substances composed of a single element and so not generally considered chemical compounds. The word "organic" in this context does not mean "natural". History Vitalism Vitalism was a widespread conception that substances found in organic nature are formed from the chemical elements by the action of a "vital force" or "life-force" (vis vitalis) that only living organisms possess. In the 1810s, Jöns Jacob Berzelius argued that a regulative force must exist within living bodies. Berzelius also contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). 
Vitalism taught that the formation of these "organic" compounds was fundamentally different from that of the "inorganic" compounds that could be obtained from the elements by chemical manipulations in laboratories. Vitalism survived for a short period after the formulation of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism, thus disproving vitalism. Modern classification and ambiguities Although vitalism has been discredited, scientific nomenclature retains the distinction between organic and inorganic compounds. The modern meaning of organic compound is any compound that contains a significant amount of carbon—even though many of the organic compounds known today have no connection to any substance found in living organisms. The term carbogenic has been proposed by E. J. Corey as a modern alternative to organic, but this neologism remains relatively obscure. The organic compound L-isoleucine, for example, presents some features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, as well as covalent bonds from carbon to oxygen and to nitrogen. As described in detail below, any definition of organic compound that uses simple, broadly-applicable criteria turns out to be unsatisfactory, to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered "inorganic". The list of substances so excluded varies from author to author. Still, it is generally agreed upon that there are (at least) a few carbon-containing compounds that should not be considered organic. For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, Fe₃C), as well as other metal and semimetal carbides (including "ionic" carbides, e.g. Al₄C₃ and CaC₂, "covalent" carbides, e.g. B₄C and SiC, and graphite intercalation compounds, e.g. KC₈). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides of carbon (CO, CO₂, and arguably, C₃O₂), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, cyanogen (CN)₂, BrCN, the cyanate anion OCN⁻, etc.), and heavier analogs thereof (e.g., the cyaphide anion CP⁻, CS₂, COS; although carbon disulfide is often classed as an organic solvent). Halides of carbon without hydrogen (e.g., CF₄ and CCl₄), phosgene (COCl₂), carboranes, metal carbonyls (e.g., nickel tetracarbonyl), mellitic anhydride (C₁₂O₉), and other exotic oxocarbons are also considered inorganic by some authorities. Nickel tetracarbonyl (Ni(CO)₄) and other metal carbonyls are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen, and are often prepared directly from metal and carbon monoxide. 
Nickel tetracarbonyl is typically classified as an organometallic compound as it satisfies the broad definition that organometallic chemistry covers all compounds that contain at least one carbon to metal covalent bond; it is unknown whether organometallic compounds form a subset of organic compounds. For example, the evidence of covalent Fe–C bonding in cementite, a major component of steel, places it within this broad definition of organometallic, yet steel and other carbon-containing alloys are seldom regarded as organic compounds. Thus, it is unclear whether the definition of organometallic should be narrowed, whether these considerations imply that organometallic compounds are not necessarily organic, or both. Metal complexes with organic ligands but no carbon–metal bonds are not considered organometallic; instead, they are called metal-organic compounds (and might be considered organic). The relatively narrow definition of organic compounds as those containing C–H bonds excludes compounds that are (historically and practically) considered organic. Neither urea nor oxalic acid is organic by this definition, yet they were two key compounds in the vitalism debate. However, the IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid as organic compounds. Other compounds lacking C–H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C–H bonds, is considered a possible organic compound in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite. A slightly broader definition of organic compound includes all compounds bearing C–H or C–C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, CF₄ and CCl₄ would be considered by this rule to be "inorganic", whereas CHF₃, CHCl₃, and C₂Cl₆ would be organic, though these compounds share many physical and chemical properties. Classification Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also be classified or subdivided by the presence of heteroatoms, e.g., organometallic compounds, which feature bonds between carbon and a metal, and organophosphorus compounds, which feature bonds between carbon and phosphorus. Another distinction, based on the size of organic compounds, distinguishes between small molecules and polymers. Natural compounds Natural compounds refer to those that are produced by plants or animals. Many of these are still extracted from natural sources because they would be more expensive to produce artificially. Examples include most sugars, some alkaloids and terpenoids, certain nutrients such as vitamin B12, and, in general, those natural products with large or stereochemically complicated molecules present in reasonable concentrations in living organisms. Further compounds of prime importance in biochemistry are antigens, carbohydrates, enzymes, hormones, lipids and fatty acids, neurotransmitters, nucleic acids, proteins, peptides and amino acids, lectins, vitamins, and fats and oils. Synthetic compounds Compounds that are prepared by reaction of other compounds are known as "synthetic". They may be either compounds that are already found in plants/animals or those artificial compounds that do not occur naturally. 
Most polymers (a category that includes all plastics and rubbers) are organic synthetic or semi-synthetic compounds. Biotechnology Many organic compounds—two examples are ethanol and insulin—are manufactured industrially using organisms such as bacteria and yeast. Typically, the DNA of an organism is altered to express compounds not ordinarily produced by the organism. Many such biotechnology-engineered compounds did not previously exist in nature. Databases The CAS database is the most comprehensive repository for data on organic compounds; it is accessed through the search tool SciFinder. The Beilstein database contains information on 9.8 million substances, covers the scientific literature from 1771 to the present, and is today accessible via Reaxys. Structures and a large diversity of physical and chemical properties are available for each substance, with reference to the original literature. PubChem contains 18.4 million entries on compounds and especially covers the field of medicinal chemistry. A great number of more specialized databases exist for diverse branches of organic chemistry. Structure determination The main tools are proton and carbon-13 NMR spectroscopy, IR spectroscopy, mass spectrometry, UV/Vis spectroscopy and X-ray crystallography. See also List of chemical compounds List of organic compounds References External links Organic Compounds Database Organic Materials Database Organic chemistry
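To make the competing definitions discussed above concrete, the following short Python sketch tests molecules for C–H and C–C bonds, the two narrow criteria described in the article. It uses the open-source RDKit cheminformatics library as an illustrative choice; the library is not referenced by this article, and the snippet is a sketch rather than an authoritative classifier:

```python
# Classify compounds by the presence of C-H and C-C bonds, two of the
# competing criteria for "organic" discussed above. Requires RDKit.
from rdkit import Chem

def has_ch_bond(mol: Chem.Mol) -> bool:
    """True if any carbon atom carries at least one hydrogen."""
    return any(a.GetSymbol() == "C" and a.GetTotalNumHs() > 0
               for a in mol.GetAtoms())

def has_cc_bond(mol: Chem.Mol) -> bool:
    """True if any bond joins two carbon atoms."""
    return any(b.GetBeginAtom().GetSymbol() == "C" and
               b.GetEndAtom().GetSymbol() == "C"
               for b in mol.GetBonds())

examples = {
    "methane": "C",
    "urea": "NC(=O)N",              # no C-H and no C-C: excluded by both narrow rules
    "oxalic acid": "OC(=O)C(=O)O",  # has C-C but no C-H
    "carbon tetrachloride": "ClC(Cl)(Cl)Cl",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name}: C-H={has_ch_bond(mol)}, C-C={has_cc_bond(mol)}")
```

Running this reproduces the ambiguity the article describes: urea fails both bond tests even though it is traditionally considered organic.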
Organic compound
Chemistry
2,379
57,813,559
https://en.wikipedia.org/wiki/RXC%20J2211.7-0350
RXC J2211.7-0350 is a cluster of galaxies. Galaxy clusters are the largest objects in the Universe that are held together by gravity. See also Brightest cluster galaxy Galaxy groups Galaxy clusters List of galaxy clusters References Galaxy clusters Aquarius (constellation)
RXC J2211.7-0350
Astronomy
56
35,755,926
https://en.wikipedia.org/wiki/Discrete%20least%20squares%20meshless%20method
In mathematics, the discrete least squares meshless (DLSM) method is a meshless method based on the least squares concept. The method is based on the minimization of a least squares functional, defined as the weighted summation of the squared residual of the governing differential equation and its boundary conditions at nodal points used to discretize the domain and its boundaries. Description While most existing meshless methods need background cells for numerical integration, DLSM does not require a numerical integration procedure, owing to its use of the discrete least squares method to discretize the governing differential equation. A moving least squares (MLS) approximation is used to construct the shape functions, making this a fully least squares-based approach. Arzani and Afshar developed the DLSM method in 2006 for the solution of Poisson's equation. Firoozjaee and Afshar proposed the collocated discrete least squares meshless (CDLSM) method to solve elliptic partial differential equations, and studied the effect of the collocation points on the convergence and accuracy of the method. The method can be considered an extension of the earlier DLSM method through the introduction of a set of collocation points for the calculation of the least squares functional. CDLSM was later used by Naisipour et al. to solve elasticity problems with irregular distributions of nodal points. Afshar and Lashckarbolok used the CDLSM method for the adaptive simulation of hyperbolic problems. A simple a posteriori error indicator based on the value of the least squares functional and a node-moving strategy was used and tested on 1-D hyperbolic problems. Shobeyri and Afshar simulated free surface problems using the DLSM method. The method was then extended for adaptive simulation of two-dimensional shocked hyperbolic problems by Afshar and Firoozjaee. Also, adaptive node-moving refinement and multi-stage node enrichment adaptive refinement have been formulated in the DLSM for the solution of elasticity problems. Amani, Afshar and Naisipour proposed the mixed discrete least squares meshless (MDLSM) formulation for the solution of planar elasticity problems. In this approach, the differential equations governing the planar elasticity problems are written in terms of the stresses and displacements, which are approximated independently using the same shape functions. Since the resulting governing equations are of the first order, both the displacement and stress boundary conditions are of the Dirichlet type, which is easily incorporated via a penalty method. Because the MDLSM method is a least squares-based algorithm, it does not need to satisfy the Ladyzhenskaya–Babuška–Brezzi (LBB) condition. Notes References H. Arzani, M.H. Afshar, Solving Poisson's equations by the discrete least square meshless method, WIT Transactions on Modelling and Simulation 42 (2006) 23–31. M.H. Afshar, M. Lashckarbolok, Collocated discrete least square (CDLS) meshless method: error estimate and adaptive refinement, International Journal for Numerical Methods in Fluids 56 (2008) 1909–1928. M. Naisipour, M.H. Afshar, B. Hassani, A.R. Firoozjaee, Collocation Discrete Least Square (CDLS) Method for Elasticity Problems, International Journal of Civil Engineering 7 (2009) 9–18. A.R. Firoozjaee, M.H. Afshar, Discrete least squares meshless method with sampling points for the solution of elliptic partial differential equations, Engineering Analysis with Boundary Elements 33 (2009) 83–92. G. 
Shobeyri, M.H. Afshar, Simulating free surface problems using Discrete Least Squares Meshless method, Computers & Fluids 39 (2010) 461–470. M.H. Afshar, A.R. Firoozjaee, Adaptive Simulation of Two Dimensional Hyperbolic Problems by Collocated Discrete Least Squares Meshless Method, Computers & Fluids 39 (2010) 2030–2039. M.H. Afshar, M. Naisipour, J. Amani, Node moving adaptive refinement strategy for planar elasticity problems using discrete least squares meshless method, Finite Elements in Analysis and Design 47 (2011) 1315–1325. M.H. Afshar, J. Amani, M. Naisipour, A node enrichment adaptive refinement by Discrete Least Squares Meshless method for solution of elasticity problems, Engineering Analysis with Boundary Elements 36 (2012) 385–393. J. Amani, M.H. Afshar, M. Naisipour, Mixed Discrete Least Squares Meshless method for planar elasticity problems using regular and irregular nodal distributions, Engineering Analysis with Boundary Elements 36 (2012) 894–902. S. Faraji, M. Afshar, et al., Mixed discrete least square meshless method for solution of quadratic partial differential equations, Scientia Iranica, Transaction A, Civil Engineering 21(3) (2014) 492. S. Faraji et al., Mixed discrete least squares meshless method for solving the linear and non-linear propagation problems (2018). Differential equations Least squares
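The least squares functional described above can be illustrated with a deliberately simplified sketch. The Python code below solves a 1-D Poisson problem by minimizing the sum of squared residuals of the differential equation at nodal points, plus penalized boundary residuals. It substitutes a global polynomial basis for the MLS shape functions of the actual method, so it is a toy illustration of the least-squares idea, not an implementation of DLSM as published:

```python
# Toy least-squares collocation for u''(x) = f(x) on [0, 1], u(0)=u(1)=0.
# Mimics the DLSM idea (minimize squared PDE residuals at nodes plus
# penalized boundary residuals), but with a polynomial basis rather than
# the moving least squares shape functions used in the actual method.
import numpy as np

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi*x)

n_basis, n_nodes, penalty = 8, 25, 1e4
nodes = np.linspace(0.0, 1.0, n_nodes)

# Basis phi_k(x) = x**k; its second derivative is k*(k-1)*x**(k-2).
def phi(x, k):    return x**k
def d2phi(x, k):  return k*(k-1)*x**(k-2) if k >= 2 else 0.0*x

# Assemble the least-squares system: rows are residuals at nodes,
# columns are the unknown basis coefficients.
A_pde = np.column_stack([d2phi(nodes, k) for k in range(n_basis)])
b_pde = f(nodes)
A_bc  = penalty * np.array([[phi(0.0, k) for k in range(n_basis)],
                            [phi(1.0, k) for k in range(n_basis)]])
b_bc  = np.zeros(2)

A = np.vstack([A_pde, A_bc])
b = np.concatenate([b_pde, b_bc])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimizes ||A c - b||^2

# Compare the reconstructed solution with the exact one at a few points.
xs = np.linspace(0, 1, 5)
u  = sum(c * phi(xs, k) for k, c in enumerate(coeffs))
print(np.max(np.abs(u - np.sin(np.pi * xs))))    # small approximation error
```

Note that no numerical integration is needed anywhere: the functional is a discrete sum of squared residuals at the nodes, which is the property the article highlights for DLSM.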
Discrete least squares meshless method
Mathematics
1,122
68,193,269
https://en.wikipedia.org/wiki/Project%20Santa%20Barbara
Project Santa Barbara was a missile program developed under the administration of Philippine president Ferdinand Marcos (1965–86) during the Cold War. The first successful launch was in 1972. The project was discontinued for undisclosed reasons. Background Project Santa Barbara was initiated by the administration of Philippine president Ferdinand Marcos and involved the Philippine Navy and a group of scientists. It was conceived amid the United States' withdrawal of its armed forces from Indochina and in anticipation that the US would also withdraw its forces stationed in the Philippines. Under the program, different types of missiles were developed which were intended to intercept land-, sea-, and air-based threats. There were also plans to export missiles developed under the program to friendly countries. One of the missiles developed was the Bongbong rocket, named after the moniker of President Marcos' son Ferdinand Jr. The National Aeronautics and Space Administration of the United States described the weapon as the Philippines' first liquid-propellant rocket. The weaponry system associated with the Bongbong rocket is similar to the Soviet Katyusha unguided artillery system. Thirty-seven dynamic tests were conducted, most of them on Caballo Island. Four of the tests were made at Fort Magsaysay. The first successful launch under the project involved the Bongbong rocket. The launch was made on March 12, 1972, with the rocket retrieved from the South China Sea. The project was discontinued for undisclosed reasons. References External links https://www.youtube.com/watch?v=RxHtd1XYaCw - A 1973 parade featuring the Bongbong rocket, archived in Ghostarchive.org on 3 May 2022 Rockets and missiles Ferdinand Marcos Experimental rockets Philippine Army Secret military programs
Project Santa Barbara
Engineering
345
49,339,022
https://en.wikipedia.org/wiki/Indian%20Fish%20Trap%20State%20Preserve
Indian Fish Trap State Preserve, also known as the Indian Fish Weir, is a historic site located near the Amana Colonies in rural Iowa County, Iowa. The fish weir is an array of rocks in a V-shaped formation in the Iowa River. It is the only structure of this kind in Iowa. History It is not known when the fish weir was built, possibly in either the Late Prehistoric period or Early Historic period. Glacial boulders from a nearby bluff were probably used to construct it. Each wing of the dam is about in length. The fish were thought to be herded toward the vertex of the "V" where they would be easier to net or spear. They were then placed into an adjacent holding pool. Early pioneers discovered the weir, and it was included on a General Land Office map in the 1840s. Archaeologist Charles R. Keyes wrote about the weir in 1925. Historically, the weir was submerged below the surface in high water. The Coralville Reservoir pool partially inundates the preserve, which also affects the visibility of the weir. There are three Indian burial mounds that date from the Early Woodland Period located nearby. The fish weir was relocated in 1952, and dedicated as an archaeological state preserve in 1976. It was listed on the National Register of Historic Places in 1988. Shifting of the Iowa River in the 1990s appears to have buried the fish weir, and it is now south of the main river channel, possibly buried in silt. References Protected areas established in 1976 Iowa state preserves Native American history of Iowa Protected areas of Iowa County, Iowa National Register of Historic Places in Iowa County, Iowa Weirs Iowa River
Indian Fish Trap State Preserve
Environmental_science
328
313,418
https://en.wikipedia.org/wiki/Luminous%20intensity
In photometry, luminous intensity is a measure of the wavelength-weighted power emitted by a light source in a particular direction per unit solid angle, based on the luminosity function, a standardized model of the sensitivity of the human eye. The SI unit of luminous intensity is the candela (cd), an SI base unit. Measurement Photometry deals with the measurement of visible light as perceived by human eyes. The human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. When adapted for bright conditions (photopic vision), the eye is most sensitive to yellow-green light at 555 nm. Light with the same radiant intensity at other wavelengths has a lower luminous intensity. The curve which represents the response of the human eye to light is a defined standard function, the luminosity function, established by the International Commission on Illumination (CIE, for Commission Internationale de l'Éclairage) and standardized in collaboration with the ISO. The luminous intensity of artificial light sources is typically measured using a goniophotometer outfitted with a photometer or a spectroradiometer. Relationship to other measures Luminous intensity should not be confused with another photometric unit, luminous flux, which is the total perceived power emitted in all directions. Luminous intensity is the perceived power per unit solid angle. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, though its luminous flux remains unchanged. Luminous intensity is also not the same as the radiant intensity, the corresponding objective physical quantity used in the measurement science of radiometry. Units Like other SI base units, the candela has an operational definition—it is defined by the description of a physical process that will produce one candela of luminous intensity. By definition, if one constructs a light source that emits monochromatic green light with a frequency of 540 THz, and that has a radiant intensity of 1/683 watts per steradian in a given direction, that light source will emit one candela in the specified direction. The frequency of light used in the definition corresponds to a wavelength in a vacuum of approximately 555 nm, which is near the peak of the eye's response to light. If the source emitted uniformly in all directions, the total radiant flux would be about 18.4 mW, since there are 4π steradians in a sphere. A typical modern candle produces very roughly one candela while releasing heat at a rate of roughly 80 W. Prior to the definition of the candela, a variety of units for luminous intensity were used in various countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these standards was the English standard: candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria, and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp. 
In 1881, Jules Violle proposed the Violle as a unit of luminous intensity, and it was notable as the first unit of light intensity that did not depend on the properties of a particular lamp. All of these units were superseded by the definition of the candela. Usage The luminous intensity for monochromatic light of a particular wavelength $\lambda$ is given by $I_{\mathrm{v}} = 683.002 \cdot \overline{y}(\lambda) \cdot I_{\mathrm{e}}$, where $I_{\mathrm{v}}$ is the luminous intensity in candelas (cd), $I_{\mathrm{e}}$ is the radiant intensity in watts per steradian (W/sr), and $\overline{y}(\lambda)$ is the standard luminosity function. If more than one wavelength is present (as is usually the case), one must sum or integrate over the spectrum of wavelengths present to get the luminous intensity: $I_{\mathrm{v}} = 683.002 \int_0^{\infty} \overline{y}(\lambda)\,\frac{\partial I_{\mathrm{e}}}{\partial \lambda}\,\mathrm{d}\lambda$. See also Brightness International System of Quantities Radiance References Curve data Scalar physical quantities SI base quantities Photometry Electromagnetic quantities
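As a worked illustration of the monochromatic conversion above, the following short Python sketch applies the formula at a few wavelengths. The luminosity-function values in the lookup table are approximate, rounded from standard CIE photopic data, and are included only to make the example self-contained:

```python
# Convert radiant intensity (W/sr) to luminous intensity (cd) for
# monochromatic light: I_v = 683.002 * ybar(lambda) * I_e.
LUMINOUS_EFFICACY = 683.002  # lm/W at the 555 nm photopic peak

YBAR = {  # wavelength (nm) -> approximate photopic luminosity function value
    450: 0.038,
    510: 0.503,
    555: 1.000,   # peak of photopic sensitivity
    650: 0.107,
}

def luminous_intensity(radiant_intensity_w_per_sr: float, wavelength_nm: int) -> float:
    """Luminous intensity in candelas for a monochromatic source."""
    return LUMINOUS_EFFICACY * YBAR[wavelength_nm] * radiant_intensity_w_per_sr

# A source radiating 1/683 W/sr at 555 nm has, per the candela definition,
# a luminous intensity of about 1 cd:
print(luminous_intensity(1 / 683, 555))   # ~1.0
# The same radiant intensity at 650 nm (red) is perceived as much dimmer:
print(luminous_intensity(1 / 683, 650))   # ~0.107
```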
Luminous intensity
Physics,Mathematics
862
34,741,587
https://en.wikipedia.org/wiki/Re%C8%99i%C8%9Ba%20Works
The Reșița Works are two companies, TMK Reșița and UCM Reșița, located in Reșița, in the Banat region of Romania. Founded in 1771 and operating under a single structure until 1948 and then from 1954 to 1962, during the Communist era they were known respectively as the Reșița Steel Works (Combinatul Siderurgic Reșița) and as the Reșița Machine Building Plant (Uzina Constructoare de Mașini Reșița), the latter renamed in 1973 as the Reșița Machine Building Enterprise (Întreprinderea de Construcții de Mașini Reșița). They have played a crucial role in the industrial development both of the region and of Romania as a whole, and their evolution has been largely synonymous with that of their host city. History Beginnings and growth The Habsburg monarchy, which then ruled the Banat, was interested in developing extractive metallurgy in the province, and began building furnaces for iron ore smelting in Reșița in 1769, those at Bocșa proving inadequate for its industrial needs. The works trace their origins to July 3, 1771, when the first furnaces and forges were inaugurated, making it the oldest industrial factory in present-day Romania. At first, metalworking was the focus of activity, but machinery manufacturing gradually gained prominence, becoming the main occupation in the last quarter of the 19th century. For decades, the two complemented each other within the same integrated factory. Until 1855, the works belonged to the Treasury of what had become the Austrian Empire, which exercised control through the Banat Mining Directorate in Oravița. By 1815, they were producing cast iron pieces coming directly from the furnaces, rods forged from iron, hoops for cart wheels, tools, nails and utensils for agricultural and home use. In 1855, with the empire facing financial crisis and looking to sell, the works were bought by an international consortium, the Imperial Royal Privileged Austrian State Railway Company (K.u.K Oberprivillegierte Staatseisenbahn Gesellschaft or St.E.G.). Aside from the Reșița Works, this company also owned land and mining, metalworking and railway properties in the Banat and Bohemia, a locomotive factory in Vienna and the concession for building and operating a railway network of some , and was financed by one French and two Austrian banks. A persistent legend holds that in the late 1880s, metal produced at Reșița was sent to France to be used in building the Eiffel Tower. However, there is no documentary evidence to support this claim. Since their opening, the development and fortunes of the works have been deeply entwined with the history of the city itself. An important element of their success was due to their relative self-sufficiency; over time, the works tended to use raw materials and energy sources produced on-site. Following the union of Transylvania with Romania, including the Banat, a 1920 royal decree transformed St.E.G.'s Romanian holdings into the Steel Works and Domains of Reșița (Uzinele de Fier și Domeniile Reșița; U.D.R. or U.D.R.I.N.) company. A "workshops directorate" belonging to the company was built on the left bank of the Bârzava River; this included the machine works, the old industrial platform of today's UCM Reșița, where the first St.E.G. workshops were also built between 1886 and 1891. By surface area, over 90% of the company properties were forests, but they also included iron, coal and copper mines; vineyards; roads; and limestone quarries. 
Starting in the 1920s, the works had the following divisions: blast furnaces; a coking plant; steelworks; rolling mills; a foundry; a forge; a factory for bridges and metal structures; a factory for mounted wheels; an old machine factory; a factory for petroleum extraction equipment; an armaments factory; a factory for electric machinery; and a locomotives factory with a capacity of 100 units per year. Among the main products generated were steam locomotives, including repairs; mounted wheels, including axles; wheel bandages, metal bridges, railroad switches and other rail equipment; metal frames for buildings and factories; moveable bridges; electric machinery and equipment such as motors, generators and transformers; petroleum extraction equipment, including pumpjacks, couplings, heavy drill bits, pump units, rotary engine parts, crown blocks and gear reducers; and armaments, such as artillery, gun carriages, 75 mm Vickers antitank and antiaircraft guns; coastal artillery; naval mines; and Brandt 60 and 120 mm LR Gun-mortars. In terms of revenue and number of employees, the company was the largest in Romania, with the latter figure reaching 22,892 in 1948. In 1939, following the German occupation of Czechoslovakia, the Nazi regime took over Československá Zbrojovka's one-tenth share in Reșița. Together with other incursions into Romanian industry, this move seriously undermined the attempts of King Carol II to maintain an independent foreign policy. Subsequently, commercial and technical management ended up in the hands of Reichswerke Hermann Göring. Nationalization In June 1948, the new Communist regime nationalized the company, along with 350 others. For over a year, it kept its former name but was gradually integrated into the new government structure. A decree issued in August 1949 led to its effective disaggregation by the end of the year, and its components were folded into two SovRom joint ventures, Sovrommetal (the iron extraction division) and Sovrom Utilaj Petrolier (the machine production division). Thus, for the first time, the Reșița Works were divided in two. In September 1954, with the end of the SovRom period, they were reunited into one entity, the Reșița Metallurgical Works (Combinatul Metalurgic Reșița) under the Ministry of Heavy Industry, later the Ministry of Metallurgy and Machine Building. After 1948, although the Reșița Works remained the most important heavy industry producers in Romania, they were gradually marginalized as well, with a series of units being shut down: metal structures and bridges (1953-1958); petroleum extraction equipment (1954-1955); railroad switches (1955); transformers, electric equipment and medium-sized electric motors (1957); mounted wheels (1959); moveable bridges and cranes (after 1973); thermal energy equipment such as steam turbines, turbo generators and related devices (1977); and locomotive bogies (1981). At the same time, significant technological advances were incorporated. Among the devices introduced were steam turbines and turbo generators; new air compressors; diesel locomotives and bogies; electrical bushings; hydroelectric units including hydraulic turbines, generators and rotation regulators; Diesel engines for marine propulsion; equipment for the chemical and metallurgical industries; fluid mechanics equipment like hydraulic pumps and large hydraulic servo motors. At the same time, steam locomotives were phased out. 
During four decades of a planned economy, no significant economic development program on a national scale—including the program to develop the energy supply through thermoelectric and hydroelectric machines and equipment; the nuclear power program; and the programs to develop rail transport, the naval fleet, and the metallurgical, mining and chemical industries—was undertaken without some degree of involvement from the Reșița Works, whether by incorporating or producing machines and equipment. Additionally, their products were exported to nearly forty countries. Split and subsequent privatization On April 1, 1962, the works were again split into two separate entities meant to operate in tandem: the Reșița Steel Works (Combinatul Siderurgic Reșița; CSR) and the Reșița Machine Building Plant (Uzina Constructoare de Mașini Reșița; UCMR or UCM). The Communist regime fell in 1989, and by 1993 CSR was in decline. In December 1994, a demonstration by the 6,800 remaining workers and some 30,000 Reșița residents brought about investments and new equipment. CSR became a public company in 1996. Its first privatization in 2000, undertaken by a government eager to be divested of a debt-ridden entity, was a failure. CSR's takeover by an American company accused of failing to fulfill its promise of improving the plant led to labor unrest. This was exploited by the extremist Greater Romania Party, which took control of regular demonstrations where slogans against joining the European Union and NATO became increasingly commonplace; finally, in June 2001, the government announced it would go to court to scrap the contract because of the nationwide "economic and social destabilization" risked by allowing the situation to continue. The privatization process was restarted in 2003, and the following year the state sold the company off. Now a subsidiary of the Russian firm OAO TMK, it has been known as TMK Reșița since 2006. It produces tubular billets, heavy round profiles and blooms, and started putting out blanks in 2007. By 2011, the number of employees had fallen to 800, from 10,400 in 1990. UCMR was under the control of various ministries, its name being changed in 1973 to Reșița Machine Building Enterprise (Întreprinderea de Construcții de Mașini Reșița; ICMR). Between 1969 and 1973, it was the hub of the Reșița Plants Group (Grupul de Uzine Reșița), which also included a metal structures plant in Bocșa, a machine plant in Caransebeș, a mechanical plant in Timișoara and an institute for researching and planning hydroelectric equipment in Reșița. After the Romanian Revolution, it regained the UCMR name in 1991, and underwent a privatization process starting in 1993. This concluded in 2003, when the state sold the remainder of its shares. Largely owned by a Swiss company and with some 2,500 employees, it carries out machining operations on machine tools, welding, heat and thermochemical treatments, and electroplating. Four industrial elements of the Reșița Works are listed as historic monuments: the UCM locomotive factory and, from the CSR, blast furnace #2, the brick factory and the puddling and steam rolling workshop. In addition, two villas belonging to the UCM authorities are listed, as well as a number of those belonging to the UDR leadership. Although blast furnace #1 was demolished, the remaining one, representing the fifth generation of blast furnaces on the same site, was left standing due to its symbolic significance in the city's cultural identity and its contribution to the industrial landscape. 
By the early 1990s, the works had caused serious air, water and soil pollution, making Reșița one of the most severely polluted areas of Eastern Europe. See also Galați steel works FAUR Notes Reșița Companies of Caraș-Severin County Historic monuments in Caraș-Severin County Companies established in 1771 Steel companies of Romania Blast furnaces
Reșița Works
Chemistry
2,267
5,106,530
https://en.wikipedia.org/wiki/Lamella%20%28cell%20biology%29
A lamella (plural: lamellae) in biology refers to a thin layer, membrane or plate of tissue. This is a very broad definition, and can refer to many different structures. Any thin layer of organic tissue can be called a lamella, and there is a wide array of functions an individual layer can serve. For example, an intercellular lipid lamella is formed when lamellar disks fuse to form a lamellar sheet. It is believed that these disks are formed from vesicles, giving the lamellar sheet a lipid bilayer that plays a role in water diffusion. Another instance of cellular lamellae can be seen in chloroplasts. Thylakoid membranes are actually a system of lamellar membranes working together, and are differentiated into different lamellar domains. This lamellar system allows plants to convert light energy into chemical energy. Chloroplasts are characterized by a system of membranes embedded in a hydrophobic proteinaceous matrix, or stroma. The basic unit of the membrane system is a flattened single vesicle called the thylakoid; thylakoids stack into grana. All the thylakoids of a granum are connected with each other, and the grana are connected by intergranal lamellae. The middle lamella, by contrast, is located between the primary cell walls of two adjacent plant cells and is made up of intercellular matrix. It comprises a mixture of polygalacturonans (polymers of D-galacturonic acid) and neutral carbohydrates, and can be broken down by the enzyme pectinase. Lamella, in cell biology, is also used to describe the leading edge of a motile cell, of which the lamellipodium is the most forward portion. The lipid bilayer core of biological membranes is also called the lamellar phase. Thus, each bilayer of a multilamellar liposome, and the wall of a unilamellar liposome, is also referred to as a lamella. See also Middle lamella Thylakoid Lipid bilayer References Further reading Cell biology Photosynthesis Prokaryotic cell anatomy
Lamella (cell biology)
Chemistry,Biology
429
49,030,177
https://en.wikipedia.org/wiki/Dansk%20Datamatik%20Center
Dansk Datamatik Center (DDC) was a Danish software research and development centre that existed from 1979 to 1989. Its main purpose was to demonstrate the value of using modern techniques, especially those involving formal methods, in software design and development. Three major projects dominated much of the centre's existence. The first concerned the formal specification and compilation of the CHILL programming language for use in telecommunication switches. The second involved the formal specification and compilation of the Ada programming language. Both the Ada and CHILL efforts made use of formal methods. In particular, DDC worked with Meta-IV, an early version of the specification language of the Vienna Development Method (VDM) formal method for the development of computer-based systems. As founded by Dines Bjørner, this represented the "Danish School" of VDM. This use of VDM led in 1984 to the DDC Ada compiler becoming the first European Ada compiler to be validated by the United States Department of Defense. The third major project was dedicated to the creation of a new formal method, RAISE. The success of the Ada compiler system would lead to the creation of the commercial company DDC International A/S (DDC-I, Inc. in the US) in 1985, which would develop, productise, and market it both directly to customers and to other companies that would use it as the basis for their own Ada compiler products. Origins In spring 1979, Christian Gram, a computer scientist at the Technical University of Denmark (DTU)—located in Kongens Lyngby, north of Copenhagen—suggested to his colleague Dines Bjørner the idea of building an advanced software institute. Looking at the software crisis of the time, they felt that computer science had created foundational and theoretical approaches that, if applied, could make software development a more professional process and permit the development of large software systems on schedule and with quality. They approached the Akademiet for de Tekniske Videnskaber (ATV, the Danish Academy for Technical Sciences) with this idea, and in September 1979, Dansk Datamatik Center was formed as an ATV institute for advanced software development. (It was also referred to as the Danish Datamatics Centre in some early documents.) Ten large producers or users of information technology in Denmark became paying members of the new entity: , Crone & Koch, the Danish Defence Research Establishment, , , Kommunedata, Regnecentralen af 1979, Sparekassernes Datacenter, Teleteknisk Forskningslaboratorium (TFL), and ØK Data, with each member paying DKK 100,000 per year. Bjørner became the scientific leader of the centre. The managing director of DDC was Leif Rystrøm. When it reached its greatest size around 1984, some 30–35 professional employees worked at DDC, with about 40 employees in total. By 1984, DDC had a budget of DKK 13 million, a substantial increase from its initial budget of DKK 1 million. Many of the engineers hired came from DTU and Copenhagen University. In the beginning the centre was housed in a building on the DTU campus, but it later moved to a converted textile mill along the Mølleåen, close to the centre of Lyngby. The cube-inspired red logo of DDC was designed by Ole Friis, who in 1984 won a prize from the Danish Design Centre for it. CHILL projects During 1978, Bjørner became interested in creating a formal definition, using denotational semantics, of the CHILL programming language then under development. 
Work on the formal definition of CHILL began that year at the request of Teleteknisk Forskningslaboratorium; it was assigned to a group under the Comité Consultatif International Téléphonique et Télégraphique (CCITT) and conducted at DTU, with some eighteen students working on the effort. Once DDC was established, the formal definition was completed there in 1980 and 1981. Opinions on the value of the effort differ: Bjørner has stated it discovered a definitional issue that led to the simplification of the language, while Remi Bourgonjon of Philips, the convener of the Implementors' Forum organized by the CCITT, thought the formal definition was too complicated and came too late to benefit CHILL compiler designers. At the same time, a CHILL compiler was developed, again starting before DDC but completed by it and TFL. It was developed using formal methods. The two organisations made the compiler publicly available, and it would have an important role in education concerning the CHILL language. It was also adapted by the British firm Imperial Software Technology with a new code generator and found use by GEC and others during the 1980s. A joint project that GEC and DDC carried out in the early 1980s was to investigate the incorporation of CHILL into an Ada Programming Support Environment (APSE), to support projects that used both languages. DDC's part of the project used an examination of the denotational semantics of both languages and concluded that such an integration was technically feasible. DDC continued to publish papers at CHILL conferences during the first half of the 1980s, but not after that. Ada projects The advent of U.S. Defense Department sponsorship of the Ada programming language during the 1979–80 period led to European interest in the new language as well, and the Commission of the European Communities (CEC) decided to allocate funding for a European Ada compiler and runtime system. A consortium of Olivetti from Italy and DDC and Christian Rovsing from Denmark submitted a bid that in early 1981 won out over a previously favored bid from a French–German consortium; half of the funding would come from the CEC and half from Danish sources. Ole N. Oest was transferred from the Danish Defence Research Establishment to DDC to manage the Ada work. DDC was responsible for developing a Portable Ada Programming System. Requirements included hosting the Ada compiler on small, 16-bit minicomputers such as the Christian Rovsing CR80D and Olivetti M40, among other platforms, and being able to fit within 80 kilobytes of code and 110 kilobytes of data. As a result, the compiler was constructed of many passes, in this case six for the front end alone, with linearized trees stored in files as the representation between passes. The compiler creation process went through four steps: development of a formal specification of Ada; development of a formal specification of the compiler components; development of more detailed formal specifications of particular compiler passes; and implementation of these specifications in Ada itself. Among formal approaches, using the Vienna Development Method (VDM) was advantageous in this project because it was tailored for use with computer languages and compilers and because it allowed stepwise refinement of operations as well as of data representations. The central goal of the process was to prove that the implementation was equivalent to the specification. 
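To give a concrete flavour of the stepwise refinement just described, the following sketch shows a VDM-style data reification in miniature: an abstract specification state, a concrete representation, a retrieve function relating the two, and the commuting check that corresponds to the proof obligation. It is illustrative only; the use of Python, the symbol-table example and all names in it are assumptions of this sketch, since DDC worked in Meta-IV and Ada and discharged such obligations by proof rather than by run-time testing.

# Sketch of VDM-style data reification (illustrative; names invented here).

# Abstract specification: a compiler symbol table modelled as a mapping.
class AbstractSymbolTable:
    def __init__(self):
        self.env = {}                        # name -> attribute

    def declare(self, name, attr):
        assert name not in self.env          # pre: name not yet declared
        self.env[name] = attr

    def lookup(self, name):
        assert name in self.env              # pre: name must be declared
        return self.env[name]

# Concrete representation: an association list, a file-friendly linear form
# (echoing the linearized trees passed between the compiler's passes).
class ConcreteSymbolTable:
    def __init__(self):
        self.entries = []                    # (name, attr) pairs, newest first

    def declare(self, name, attr):
        assert all(n != name for n, _ in self.entries)
        self.entries.insert(0, (name, attr))

    def lookup(self, name):
        for n, a in self.entries:
            if n == name:
                return a
        raise KeyError(name)

def retrieve(concrete):
    # The VDM "retrieve" function: maps a concrete state back to the
    # abstract state it represents.
    return dict(reversed(concrete.entries))

# Proof obligation (here merely tested, not proved): operating on the
# concrete state and then retrieving yields the same abstract state as
# operating on the abstract state directly.
abstract, concrete = AbstractSymbolTable(), ConcreteSymbolTable()
for name, attr in [("x", "Integer"), ("y", "Boolean"), ("p", "procedure")]:
    abstract.declare(name, attr)
    concrete.declare(name, attr)
    assert retrieve(concrete) == abstract.env    # commuting-diagram check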
In cases where the static abstract syntax representation needed to have additional constraints incorporated, well-formedness criteria—another aspect of VDM—were defined. The first step in the process, a formal specification for Ada, had already been started by five students at DTU in 1980 as part of their master's theses. Ada was a difficult language to implement, and early attempts to build a compiler for it often resulted in disappointment or outright failure. The DDC compiler was validated on a VAX/VMS system in September 1984, being the first European Ada compiler to pass, and proved a success. At that point about 44 person-years of development work had gone into it. The defect rate and maintenance costs would prove to be significantly lower for the compiler than the software industry average. Attention to DDC's use of VDM in compiler design led to interest from other computer manufacturers, and sales were made of what became known as the DDC OEM Compiler Kit (the name being a reference to the original equipment manufacturer business model). The compiler system offered two points for retargeting: a high-level tree-structured intermediate language and a sequence of instructions for an abstract stack machine; the latter meant shorter project times but usually not the most optimized generated code. (The abstract stack-based virtual machine was also worked on by Christian Rovsing; there was some idea of possibly implementing it in hardware or firmware.) The first such OEM sale was to Nokia, for rehosting on the Nokia MPS 10. The second, with a contract made in February 1984, was with Honeywell Information Systems in Boston. The compiler was thus rehosted and retargeted to the Honeywell DPS6 and validated in November 1984. In addition, cross compilers began to be developed, with DDC doing one from VAX/VMS to the Intel 8086, beginning what would become a successful line of products. In December 1984, DDC signed a contract with Advanced Computer Techniques in New York, based on a license royalty arrangement. They began using the DDC front end to develop a cross-compiler for the MIL-STD-1750A architecture, which would become a reasonably successful product with a number of customers. Success of the Ada project led to a separate company being formed in 1985, called DDC International A/S, with the purpose of commercializing the Ada compiler system; Oest was named the managing director of the company. A year later a US-based subsidiary of that company, DDC-I, Inc., was formed in the state of Arizona. Concurrent with the compiler work, there was a push on various fronts to provide a formal definition of Ada, with several different approaches and metalanguages tried. Some Europeans argued that such a task was critical and that it was the only basis upon which an ISO standard for the language should be published. The CEC sponsored this work, and the contract was won by DDC in partnership with two Italian research institutes, the Istituto di Elaborazione dell’Informazione (IEI) in Pisa and the Consorzio per la Ricerca e le Applicazioni di Informatica (CRAI) in Genoa, with work beginning in 1984. Additional consulting on the project was provided by staff at the University of Genoa, the University of Pisa, and DTU. The work built upon the previous formal definitions that had been done at DTU and by DDC at the beginning of its Ada compiler project, but further work was needed to define the entire language, and Meta-IV had to be extended in places or alternate approaches taken. 
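As an illustration of the second retargeting point mentioned above, the abstract stack machine, the following toy sketch lowers a high-level expression tree to stack instructions and then interprets them. The instruction names, the tree encoding and the two-operator language are invented for this sketch; DDC's actual intermediate languages were far richer, but the example suggests why targeting a stack machine shortens a port at some cost in the quality of the generated code.

# Toy lowering from a tree-structured IR to an abstract stack machine
# (illustrative only; encoding and opcodes are this sketch's inventions).

def lower(tree):
    # Flatten an expression tree into stack-machine instructions
    # by a post-order traversal.
    op, *args = tree
    if op == "const":
        return [("PUSH", args[0])]
    left, right = args
    return lower(left) + lower(right) + [(op, None)]

def run(program):
    # Interpret the instruction sequence on an operand stack.
    stack = []
    for opcode, operand in program:
        if opcode == "PUSH":
            stack.append(operand)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"add": a + b, "mul": a * b}[opcode])
    return stack.pop()

# (1 + 2) * 4, written as a tree:
expr = ("mul", ("add", ("const", 1), ("const", 2)), ("const", 4))
code = lower(expr)
assert run(code) == 12

A port to a new processor then only has to translate this small, fixed instruction set, rather than the full tree-structured intermediate language, which is why the stack-machine route was the quicker of the two offered by the kit.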
This effort culminated in the 1987 publication of the full formal definition of Ada, encompassing three separate publications and eight volumes in total. While this effort did lead to a better understanding of the language and a number of clarifications to it being made, in the end the ultimate definition of the language remained the natural-language one in the Ada Language Reference Manual. RAISE projects The use of VDM in the CHILL and Ada projects revealed the need for improvements in formal specification techniques, and in 1983 DDC conducted a Formal Methods Appraisal study, producing a number of requirements that a formal specification language should embody. Following that, DDC was awarded a CEC contract to develop a successor to VDM, called RAISE (Rigorous Approach to Industrial Software Engineering). This was done in a consortium with STC Technology of Great Britain, which helped in the creation of the new technology, and with Nordisk Brown Boveri of Denmark and International Computers Limited of Britain, which exercised it in industrial settings. The project involved some 120 person-years of effort and sought to create a wide-spectrum language intended to handle every level from the initial, high-level abstract one down to one level above programming. It sought to remedy VDM's weaknesses with respect to modularity, concurrency, and lack of tools, and it also sought to unify approaches taken in the likes of Z notation, CSP, Larch, and OBJ. Besides the RAISE Specification Language, the project also produced a description of best practices for the RAISE Method, and a RAISE toolset. Other projects In 1981 DDC, in conjunction with some of its members, conducted a study of the many office automation initiatives and products then available and published a taxonomy and terminology guide that analysed the domain. They then specified a generic office automation system using both VDM and informal language. Later, during 1983–1987, DDC worked as a subcontractor to member ØK Data on the Functional Analysis of Office Requirements (FAOR) project under ESPRIT. DDC also gave courses and seminars on various software development topics and, starting in 1987, initiated a Danish-language quarterly publication, Cubus, which discussed various technical and scientific topics in an effort to engage in technology transfer. Conclusion and legacy During the centre's existence, some of the constituent members lost interest in its work, having no need for the CHILL or Ada compilers and finding the RAISE work too ambitious for their use. General acceptance of Ada as a language fell short of expectations, and Ada product sales by DDC-I did not provide sufficient profits to allow money to flow to DDC. With sustained funding becoming a problem, Dansk Datamatik Center was closed down in 1989. Work on the Ada products was carried on by DDC-I, whose compiler technology was used in many high-visibility aerospace and similar projects. The best-known of these was the Airplane Information Management System flight software for the Boeing 777 airliner. Subsequent developers of the DDC-I Ada compiler were often not as well versed in formal methods as the original developers. The Ada products would still be generating revenue for DDC-I into the 2010s. DDC's work and staff on RAISE were transferred to Computer Resources International (CRI) in 1988, which used RAISE as the basis for the European ESPRIT II LaCoS project in the 1990s. 
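The specification style that RAISE carried over from VDM, invariants together with pre- and post-conditions on operations, can be suggested by a small sketch. The following fragment encodes such a specification in Python with run-time assertions; the bounded-buffer example and all names in it are assumptions of this sketch, and RSL's actual notation and semantics are considerably richer.

# Specification-flavoured model of a bounded buffer (illustrative only).
class BoundedBuffer:
    # Invariant: 0 <= len(items) <= cap.

    def __init__(self, cap):
        assert cap > 0                       # pre: capacity is positive
        self.cap, self.items = cap, []

    def _inv(self):
        return 0 <= len(self.items) <= self.cap

    def put(self, x):
        assert len(self.items) < self.cap    # pre: buffer not full
        old_len = len(self.items)
        self.items.append(x)
        # post: length grew by one and the invariant still holds
        assert len(self.items) == old_len + 1 and self._inv()

    def get(self):
        assert self.items                    # pre: buffer not empty
        old_len = len(self.items)
        x = self.items.pop(0)                # FIFO discipline
        assert len(self.items) == old_len - 1 and self._inv()   # post
        return x

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
assert buf.get() == "a"

In RAISE, as in VDM, such conditions are stated once in the specification and verified during development, rather than checked at run time as this sketch does.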
The RAISE effort was subsequently sold to Terma A/S, which has used it as part of work for the European Space Agency and various defense industry projects. DDC had relatively little involvement with the Nordic software world, because it relied on European Union-based partners and funding and Denmark was the only Nordic country in the EU at the time. Nor did the Danish financial sector ever show an interest in DDC's work. In looking back, the founders of the centre have stated that, "Where DDC failed was to [convince] major Danish companies of the benefits of using reliable software development based on formal methods. (But, DDC did not try very much.)" DDC researchers believed that their work was still beneficial in making Danish technology firms aware of modern software development approaches and in populating those firms with as many as a hundred software designers and developers who had worked at DDC, and that in any case, "DDC completed a large number of projects with better performance and higher product quality than was common in the 1980s." In a 2014 survey of forty years of formal methods efforts, Bjørner and Klaus Havelund lamented that the adoption of formal methods had not become widespread in the software industry and referred to the DDC Ada compiler as an unsung success story of the value of such use. References Software engineering organizations Computer science research organizations Formal methods organizations Scientific organizations based in Denmark Defunct organizations based in Denmark Companies based in Lyngby-Taarbæk Municipality Organizations established in 1979 1979 establishments in Denmark Organizations disestablished in 1989 Ada (programming language)
Dansk Datamatik Center
Engineering
3,275
12,807,714
https://en.wikipedia.org/wiki/Mu%20problem
In theoretical physics, the $\mu$ problem is a problem of supersymmetric theories, concerned with understanding the parameters of the theory. Background The supersymmetric Higgs mass parameter $\mu$ appears as the following term in the superpotential: $\mu H_u H_d$. It is necessary in order to provide a mass for the fermionic superpartners of the Higgs bosons, i.e. the higgsinos, and it enters the scalar potential of the Higgs bosons as well. To ensure that $H_u$ and $H_d$ get a non-zero vacuum expectation value after electroweak symmetry breaking, $\mu$ should be of the order of magnitude of the electroweak scale, many orders of magnitude smaller than the Planck scale ($M_{\mathrm{Pl}} \approx 10^{18}\,\mathrm{GeV}$), which is the natural cutoff scale. This brings about a problem of naturalness: why is that scale so much smaller than the cutoff scale? And why, if the $\mu$ term in the superpotential and the supersymmetry-breaking terms have different physical origins, do the corresponding scales happen to fall so close to each other? Before the LHC, it was thought that the soft supersymmetry breaking terms should also be of the same order of magnitude as the electroweak scale; this was negated by the Higgs mass measurements and by limits on supersymmetry models. One proposed solution, known as the Giudice–Masiero mechanism, is that the $\mu$ term does not appear explicitly in the Lagrangian, because it violates some global symmetry, and can therefore be created only via spontaneous breaking of this symmetry. This is proposed to happen together with F-term supersymmetry breaking, with a spurion field $X$ that parameterizes the hidden supersymmetry-breaking sector of the theory (meaning that $F_X$ is the non-zero F-term). Let us assume that the Kähler potential includes a term of the form $X^\dagger H_u H_d / M_{\mathrm{Pl}}$ times some dimensionless coefficient $\lambda$, which is naturally of order one, where $M_{\mathrm{Pl}}$ is the Planck mass. Then, as supersymmetry breaks, $F_X$ gets a non-zero vacuum expectation value $\langle F_X \rangle$ and the following effective term is added to the superpotential: $\lambda \frac{\langle F_X \rangle}{M_{\mathrm{Pl}}} H_u H_d$, which gives a measured $\mu = \lambda \langle F_X \rangle / M_{\mathrm{Pl}}$. On the other hand, soft supersymmetry breaking terms are similarly created and also have a natural scale of $\langle F_X \rangle / M_{\mathrm{Pl}}$, so the two scales naturally coincide. See also NMSSM (Next-to-Minimal Supersymmetric Standard Model) Minimal Supersymmetric Standard Model Doublet–triplet splitting problem Hierarchy problem Little hierarchy problem References External links Supersymmetric Models with extra singlets: a review; DJ Miller, University of Glasgow Supersymmetric quantum field theory Physics beyond the Standard Model
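As a rough numerical illustration of the mechanism's point (the value assumed below for $\sqrt{\langle F_X \rangle}$ is an illustrative figure of the kind quoted for gravity-mediated breaking, not a measured quantity, and $\lambda$ is taken to be 1):

% Illustrative scale estimate for the Giudice–Masiero mechanism.
% Assumption: \sqrt{\langle F_X \rangle} ~ 10^{11} GeV, a typical
% gravity-mediation figure; \lambda = 1.
\[
\mu \sim m_{\mathrm{soft}} \sim \frac{\langle F_X \rangle}{M_{\mathrm{Pl}}}
    = \frac{\left(10^{11}\,\mathrm{GeV}\right)^2}{2 \times 10^{18}\,\mathrm{GeV}}
    \approx 5 \times 10^{3}\,\mathrm{GeV}.
\]
% Both \mu and the soft terms land near the TeV scale for the same reason,
% which is precisely what the mechanism is designed to achieve.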
Mu problem
Physics
516