8,570,918
https://en.wikipedia.org/wiki/Nuclear%20Energy%20Institute
The Nuclear Energy Institute (NEI) is a nuclear industry trade association in the United States, based in Washington, D.C. Synopsis The Nuclear Energy Institute represents the nuclear technologies industry. NEI's stated mission "is to promote the use and growth of nuclear energy through efficient operations and effective policy." NEI works on legislative and regulatory issues impacting the industry, such as the preservation of nuclear plants and used nuclear fuel storage. The association represents the nuclear industry's interests before Congress and the Nuclear Regulatory Commission. It often produces research reports and testifies at federal and state congressional hearings. The nuclear energy industry that NEI represents and serves includes: commercial electricity generation, nuclear medicine including diagnostics and therapy, food processing and agricultural applications, industrial and manufacturing applications, uranium mining and processing, nuclear fuel and radioactive materials manufacturing, transportation of radioactive materials, and nuclear waste management. NEI is governed by a 47-member board of directors. The board includes representatives from the nation's 27 nuclear utilities, plant designers, architect/engineering firms and fuel cycle companies. Eighteen members of the board serve on the executive committee, which is responsible for NEI's business and policy affairs. History The Nuclear Energy Institute (NEI) was founded in 1994 from the merger of several nuclear energy industry organizations, the oldest of which was created in 1953. Specifically, in 1994, NEI was formed from the merger of the Nuclear Utility Management and Resources Council (NUMARC), which addressed generic regulatory and technical issues; the U.S. Council for Energy Awareness (USCEA), which conducted a national communications program; the American Nuclear Energy Council (ANEC), which conducted government affairs; and the nuclear division of the Edison Electric Institute (EEI), which handled issues involving used nuclear fuel management, nuclear fuel supply, and the economics of nuclear energy. In 1987, NUMARC and USCEA were created through a division of the Atomic Industrial Forum (AIF). USCEA was founded in 1979 as the Committee for Energy Awareness and it changed its name to USCEA in January 1992 (in the aftermath of Three Mile Island) to create ambiguity. In a 1983 magazine interview, USCEA president and CEO Harold Finger stated, "I guess we chose our name very well. Many people ask us [if USCEA] is a government agency or bureaucracy." It has been charged with blatant misrepresentations in its advertising campaign by the Safe Energy Communications Council (SECC). Its membership list as of June 1990 included 31 major power companies. The AIF was created in 1953 to focus on the beneficial uses of nuclear energy. This was two years before the international "Atoms for Peace" conference held in Geneva in 1955, marking the dawn of the nuclear age. Current issues In addition to its core mission, NEI also sponsors a number of public communications efforts to build support for the industry and the expansion of nuclear energy, a number of which have come under attack from environmentalists and anti-nuclear activists. In 2006, NEI founded the Clean and Safe Energy Coalition (CASEnergy) to help build local support around the country for new nuclear construction. 
The co-chairs of the coalition are early Greenpeace member Patrick Moore and former United States Environmental Protection Agency Administrator and New Jersey Governor Christine Todd Whitman. As of April 2006, CASEnergy boasted 427 organizations and 454 individuals as members. In April 2004, the Austin Chronicle reported that NEI had hired the Potomac Communications Group to ghostwrite pro-nuclear op-ed columns to be submitted to local newspapers under the names of local residents. In a 2003 story in the Columbus Dispatch, NEI said that it engaged a public affairs agency to identify individuals with technical expertise in the nuclear energy industry to participate in the public debate. However, as many of these individuals have little experience in opinion writing for a non-technical audience, the agency provides assistance if requested, a common industry practice. In 1999, Public Citizen filed a complaint with the Federal Trade Commission charging that an NEI advertising campaign overstated the environmental benefits of nuclear energy to consumers living in markets where sales of electricity had been deregulated. In a ruling the following December, the FTC rejected those claims, concluding that NEI did not violate the law; it agreed that the advertisements were directed to policymakers and opinion leaders in forums that principally reach those who set national policy on energy and environmental issues, and therefore did not constitute "commercial speech", but noted that in different circumstances, such as direct marketing of electricity, such advertising could be considered commercial speech and be subject to stricter substantiation. NEI ran other ads with similar content, most recently one released in September 2006 touting nuclear energy's non-emitting character and the role it can play in reducing American dependence on foreign sources of fossil fuels like oil and natural gas. In 2008, Greenpeace criticized NEI's public relations efforts and suggested that NEI's advertising about nuclear power was an example of greenwashing. In the first quarter of 2008, NEI spent $320,000 on lobbying the US federal government. Besides Congress, the nuclear group lobbied the White House, the Nuclear Regulatory Commission, the departments of Commerce, Defense and Energy, and others in the first three months of the year. NEI spent $1.3 million to lobby the federal government in 2007. In 2012, NEI quoted Kathryn Higley, professor of radiation health physics in the department of nuclear engineering at Oregon State University, who described the health impact of the Fukushima Daiichi nuclear accident as "really, really minor", adding that "the Japanese government was able to effectively block a large component of exposure in this population". Advocacy One of NEI's main focuses is advocating for policies that would promote beneficial uses of nuclear energy. NEI follows a National Nuclear Energy Strategy built around four main points for guiding policy: preserve, sustain, innovate, and thrive. Preservation aims to keep the nuclear power plants that are currently in use operating. Sustainment aims to support the operations of the existing plants through more efficient practices and smarter regulation. Innovation emphasizes creating newer nuclear technologies that will produce greener energy. 
Lastly, thrive holds that it is essential to the country's leadership that the United States do well in the global nuclear energy marketplace. One of the most pressing of these points is the preservation of nuclear power plants. In the next few years, about half of the operating licenses for the US's nuclear plants will expire. In response, NEI is providing information and pushing policy to help increase the number of second license renewals. A second license renewal allows a nuclear power plant to extend its original operating license for up to 20 years. This is important because plants that close without renewing their licenses will most likely not be replaced with other nuclear power plants; they will probably be replaced with less efficient plants that use fossil fuels. This could erase up to one-quarter of the environmental benefits that these nuclear plants have contributed. Along with policy advocacy, NEI is also dedicated to promoting the advantages of nuclear energy. The main advantages NEI cites are benefits to climate, national security, sustainable development, infrastructure, and air quality. Nuclear energy helps the climate by contributing to decarbonization. NEI also argues that a country leading in nuclear energy development is better positioned for global leadership. Nuclear power plants can continue to function even if the surrounding electrical grid is disrupted, which NEI argues would greatly benefit the US. On sustainable development, NEI claims that expanding nuclear energy could even help address poverty, hunger, and stagnant economies, because it would provide people with clean, low-cost, secure energy. Infrastructure within America has not kept pace with Americans' rapidly increasing power needs. To close the gap between power demand and infrastructure expansion, NEI suggests maintaining existing nuclear power plants, noting that once a power plant has closed, it is gone forever. NEI also advocates for more nuclear power infrastructure because it creates hundreds of jobs that remain steady for years to come. NEI advocates for nuclear energy as the largest source of clean energy within the United States, already producing more than half of the nation's clean electricity. Because nuclear energy produces no emissions, it is a beneficial option for states attempting to comply with the Clean Air Act. Key personnel Chairman: Ralph Izzo Vice Chairman: Paul D. Koonce President and Chief Executive Officer: Maria G. Korsnick Executive Vice President and Chief Financial Officer: Phyllis M. Rich Senior Vice President, External Affairs: Neal M. Cohen Senior Vice President, General Counsel and Secretary: Ellen C. Ginsberg Vice President, Policy Development and Public Affairs: John F. Kotek Vice President, Government Affairs: Beverly K. Marshall Chief Nuclear Officer and Senior Vice President, Generation and Suppliers: Doug E. True Vice President, Generation and Suppliers: Jennifer L. Uhle Vice President, Communications: Jon C. Wentzel See also Nuclear power in the United States Office of Nuclear Energy United States Department of Energy Atomic Industrial Forum American Nuclear Society Institute of Nuclear Power Operations Frank L. 
"Skip" Bowman (Biographic details) Institute of Nuclear Materials Management References External links Clean and Safe Energy Coalition (CASEnergy) Skip Bowman speech at LA Town Hall Dr. Patrick Moore at NEA 2006 Stewart Brand at NEA 2006 SourceWatch on the Nei Business organizations based in the United States Nuclear industry organizations Nuclear organizations 501(c)(6) nonprofit organizations Organizations established in 1994 Trade associations based in the United States Lobbying organizations based in Washington, D.C. Organizations based in Washington, D.C.
Nuclear Energy Institute
Engineering
2,039
4,878,864
https://en.wikipedia.org/wiki/DISCUS
DISCUS, or distributed source coding using syndromes, is a method for distributed source coding. It is a compression algorithm used to compress correlated data sources. The method is designed to achieve the Slepian–Wolf bound by using channel codes. History DISCUS was invented by researchers S. S. Pradhan and K. Ramchandran, and first described in their paper "Distributed source coding using syndromes (DISCUS): design and construction", published in the IEEE Transactions on Information Theory in 2003. Variations Many variations of DISCUS are presented in the related literature. One popular scheme is channel code partitioning, an a priori scheme for approaching the Slepian–Wolf bound. Many papers illustrate simulations and experiments on channel code partitioning using turbo codes, Hamming codes and irregular repeat-accumulate codes. See also Modulo-N code is a simpler technique for compressing correlated data sources. Distributed source coding External links "Distributed source coding using syndromes (DISCUS): design and construction" by Pradhan, S.S. and Ramchandran, K. "DISCUS: Distributed Compression for Sensor Networks" Distributed Source Coding can also be implemented using Convolutional Codes or using Turbo Codes Information theory Wireless sensor network
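The core idea can be sketched with a toy example. The sketch below is an illustration added here, not code from the paper: the encoder compresses a 7-bit source word x down to the 3-bit syndrome of a (7,4) Hamming code, and the decoder recovers x from that syndrome plus side information y that is assumed to differ from x in at most one bit position. The matrix H and the helper names are assumptions made for this illustration.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code. Column i (1-indexed) is the
# binary expansion of i, so a single-bit error at position i has syndrome i.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    """3-bit syndrome of a 7-bit word (mod-2 matrix-vector product)."""
    return H @ word % 2

def discus_encode(x):
    """Encoder: transmit only the 3-bit syndrome of x (7 bits -> 3 bits)."""
    return syndrome(x)

def discus_decode(s_x, y):
    """Decoder: recover x from its syndrome and side information y,
    assuming x and y differ in at most one bit position."""
    s_e = (s_x + syndrome(y)) % 2            # syndrome of the error pattern e = x XOR y
    pos = 4 * s_e[0] + 2 * s_e[1] + s_e[2]   # syndrome value = 1-indexed error position
    e = np.zeros(7, dtype=int)
    if pos > 0:
        e[pos - 1] = 1
    return (y + e) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])          # arbitrary 7-bit source word
y = x.copy(); y[4] ^= 1                      # correlated side information: one bit flipped
s = discus_encode(x)                         # only 3 bits are transmitted
assert np.array_equal(discus_decode(s, y), x)
```

Because only the syndrome is transmitted, the rate is 3/7 of a bit per source bit; this exploitation of correlation with side information available only at the decoder is exactly the setting described by the Slepian–Wolf bound.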
DISCUS
Mathematics,Technology,Engineering
258
1,080,186
https://en.wikipedia.org/wiki/FLEX%20%28protocol%29
FLEX is a communications protocol developed by Motorola and used in many pagers. FLEX provides one-way communication only (from the provider to the pager device), but a related protocol called ReFLEX provides two-way messaging. Protocol Transmission of message data occurs in one of four modes: 1600/2, 3200/2, 3200/4, or 6400/4. All modes use FSK modulation. At 1600/2 this is on a 2 level FSK signal transmitted at 1600 bits per second. At 3200/2, this is a 2 level FSK signal transmitted at 3200 bits per second. At 3200/4, this is a 4 level FSK signal transmitted at 1600 symbols per second. Each 4 level symbol represents two bits for a bit rate of 3200 bits per second. At 6400/4, this is a 4 level FSK signal transmitted at 3200 symbols per second or 6400 bits per second. Data is transmitted in a set of 128 frames that takes 4 minutes to complete. Each frame contains a sync followed by 11 data blocks. The data blocks contain 256, 512 or 1024 bits for 1600, 3200 or 6400 bits per second respectively. The standard has been designed to allow the pager's receiver to be turned off for a high percentage of the time and therefore save on battery usage. Security Transmitted data over FLEX is not encrypted. A BCH error correcting code is used to improve the integrity of the data, although this is not cryptographically secure. There have been reported instances of individuals actively listening to pager traffic (private investigators, news organizations, etc.). Usage In The Netherlands the emergency services use the Flex-protocol in the nationwide P2000 network for pagers. The traffic on this network can be monitored online. In South Australia the State's SAGRN network for the Emergency Services paging system (CFS, SES, MFS and SAAS) is run on the FLEX 1600 protocol, and can be monitored online. See also ReFLEX Mobitex DataTAC External links FLEX: The New Edge For The Paging Industry Design And Implementation Of A Practical FLEX Paging Decoder US6396411B1: Reliable and fast frame synchronization scheme for flex paging protocol References Network protocols Motorola Radio paging
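The mode parameters above can be summarized as a small table of rates. The sketch below is purely illustrative: the mode names and numbers follow the description above, while the dictionary layout and helper functions are assumptions for this illustration, not Motorola reference code.

```python
# Illustrative summary of the four FLEX transmission modes described above.
FLEX_MODES = {
    "1600/2": {"levels": 2, "bits_per_symbol": 1, "symbols_per_second": 1600},
    "3200/2": {"levels": 2, "bits_per_symbol": 1, "symbols_per_second": 3200},
    "3200/4": {"levels": 4, "bits_per_symbol": 2, "symbols_per_second": 1600},
    "6400/4": {"levels": 4, "bits_per_symbol": 2, "symbols_per_second": 3200},
}

def bit_rate(mode: str) -> int:
    """Bit rate = symbol rate times bits carried per FSK symbol."""
    m = FLEX_MODES[mode]
    return m["symbols_per_second"] * m["bits_per_symbol"]

def data_bits_per_frame(mode: str, blocks_per_frame: int = 11) -> int:
    """Per the text, each frame carries 11 data blocks of 256, 512 or 1024 bits
    at 1600, 3200 or 6400 bit/s respectively."""
    block_bits = {1600: 256, 3200: 512, 6400: 1024}[bit_rate(mode)]
    return blocks_per_frame * block_bits

for mode in FLEX_MODES:
    print(mode, bit_rate(mode), "bit/s;", data_bits_per_frame(mode), "data bits per frame")
```

For example, the 6400/4 mode yields 6400 bit/s and, with 11 blocks of 1024 bits, 11,264 data bits per frame.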
FLEX (protocol)
Technology
478
18,575,557
https://en.wikipedia.org/wiki/Informatization
Informatization or informatisation refers to the extent to which a geographical area, an economy or a society is becoming information-based, i.e. the increase in the size of its information labor force. Usage of the term was inspired by Marc Porat's categories of the ages of human civilization: the Agricultural Age, the Industrial Age and the Information Age (1978). Informatization is to the Information Age what industrialization was to the Industrial Age. It has been stated that: The Agricultural Age has brought about the agriculturization of the planet. The Industrial Age has caused, among other things, the industrialization of agriculture. The Information Age has resulted in the informatization of the agricultural industry (Flor, 1993). The term has mostly been used within the context of national development. Everett Rogers defines informatization as the process through which new communication technologies are used as a means for furthering development as a nation becomes more and more an information society. However, some observers, such as Alexander Flor (1986), have cautioned about the negative impact of informatization on traditional societies. Recently, the technological determinism dimension of informatization has been highlighted. Randy Kluver of Texas A&M University defines informatization as primarily the process by which information technologies, such as the World Wide Web and other communication technologies, have transformed economic and social relations to such an extent that cultural and economic barriers are minimized. Kluver expands the concept to encompass the civic and cultural arenas. He believes that it is a process whereby information and communication technologies shape cultural and civic discourse. G. Wang (1994) describes the same phenomenon, which she calls "informatization", as a process of change that features (a) the use of informatization and IT (information technologies) to such an extent that they become the dominant forces in commanding economic, political, social and cultural development; and (b) unprecedented growth in the speed, quantity, and popularity of information production and distribution. Origin of the term The term informatisation was coined by Simon Nora and Alain Minc in their publication L'Informatisation de la société: Rapport à M. le Président de la République, which was translated into English in 1980 as The Computerization of Society: A Report to the President of France (SAOUG). However, in an article published in 1987, Minc preferred to use informatisation rather than computerization. After the 1978 publication, the concept was adopted in the French, German and English subject literatures and was broadened to include more aspects than only computers and telecommunications (SAOUG). Social impact Informatization has many far-reaching consequences in society. Kim (2004) observes that these include repercussions in economics, politics and other aspects of modern living. In the economic sphere, for example, information is viewed as a focal resource for development, replacing the centrality of labor and capital during the industrial age. In the political arena, there are increased opportunities for participative democracy with the advent of information and communication technologies (ICT) that provide easy access to information on varied social and political issues. 
In economic systems Just as industrialization propelled the transformation of agricultural economies into modernized ones, informatization has ushered the industrial age into an information-rich economy. Unlike the agricultural and industrial ages, in which economics concerned the optimization of scarce resources, the information age deals with the maximization of abundant resources. Alexander Flor (2008) wrote that informatization gives rise to information-based economies and societies wherein information naturally becomes a dominant commodity or resource. The accumulation and efficient use of knowledge has played a central role in the transformation of the economy (Linden 2004). Globalization Over the years, globalization and informatization have "redefined industries, politics, cultures, and perhaps the underlying rules of social order" (Friedman 1999). Although they explain different phenomena, their social, political, economic, and cultural functions overlap remarkably. "Although globalization ultimately refers to the integration of economic institutions, much of this integration occurs through the channels of technology. Although international trade is not a new phenomenon, the advent of communications technologies has accelerated the pace and scope of trade" (Kluver). a) Globalization and informatization will have great cultural and social consequences for society. b) "Globalization and informatization are likely to diminish the concept of the national as a political institution" (Poster 1999). Friedman (1999) argues that as nation states decline in importance, multi-national corporations, nongovernmental organizations, and "superempowered individuals" such as George Soros gain influence and importance. As these non-political organizations and institutions gain importance, there are inevitable challenges to political, economic, and cultural processes. c) On the other hand, globalization and informatization allow for the efficient flow of information. Individuals and societies are, therefore, greatly empowered to engage in the international arena for economic, political, and cultural resources. d) "There is proliferation of information about lifestyles, religions, and cultural issues. The telecommunications and computer networks also allow for unprecedented global activism. This democratization of information increases the potential for international harmony, although it by no means guarantees it" (Kluver). e) These twin forces greatly affect "centuries of tradition, local autonomy, and cultural integrity." f) "Finally, one of the potentially most devastating impact of the forces of globalization and informatization is that there is created an insidious conflict between the new global economic order and the local, or even tribal, interests" (Kluver). Measurement Kim (2004) proposed measuring informatization in a country using a composite measure made up of the following extraneous variables: Education, R&D Expenditure, Agricultural Sector and Intellectual Property. Kim also cites increasing democracy as evidence of social informatization. The measure is intended to take into consideration the three approaches to conceptualizing informatization, namely the economic, technological, and stock approaches. These can be measured with economic data (e.g. GDP), ICT data (e.g. number of computers per population), and the amount of information (e.g. number of published technological journals), respectively. 
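As a purely hypothetical illustration of how such a composite measure might be assembled from economic, technological, and information-stock indicators, the sketch below normalizes three invented indicators and averages them with equal weights. The country names, figures, indicator choices, and weighting are assumptions made for this illustration; they are not Kim's (2004) actual variables or method.

```python
# Hypothetical composite "informatization index": equal-weight average of
# min-max normalized indicators (one economic, one ICT, one information-stock).
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

countries = {
    # GDP per capita (USD), computers per 100 people, technological journals published
    "Country A": (45_000, 80, 1_200),
    "Country B": (12_000, 35, 300),
    "Country C": (3_000, 5, 40),
}

indicators = list(zip(*countries.values()))                 # one tuple per indicator
normalized = [min_max_normalize(list(col)) for col in indicators]
scores = {name: sum(norm[i] for norm in normalized) / len(normalized)
          for i, name in enumerate(countries)}
print(scores)   # composite score in [0, 1] for each country
```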
Such a composite measure is similar to the World Bank's Knowledge Assessment Methodology (KAM) variables (2008), which are clustered into: overall performance of the economy, economic incentive and institutional regime, innovation system, education and human resources, and information and communication technology. Difficulties with measurement The measurement of the level of informatization is an ongoing area of development. Among the issues are the ambiguity of the definition of "information" and whether this entity can be quantified in contrast to the tangible products of industrialization. Taylor and Zhang (2007) explored the issues behind the limitations of current theoretical models in terms of quantifying the positive impacts of ICT projects, and provided critiques of the information indicators used to gauge and justify informatization projects. International organizations such as the United Nations, through its World Summit on the Information Society (WSIS) and the International Telecommunication Union (ITU), and the Organisation for Economic Co-operation and Development (OECD) also recognize this challenge and have initiated efforts to improve the methodologies for measuring an "information society". National laws Informatization is recognized by states as important to national development. Some states have created laws implementing or regulating informatization. In Russia, the State Duma enacted the Federal Law on Information, Informatization, and the Protection of Information on January 25, 1995. It was signed into law by President Boris Yeltsin on February 20, 1995. Azerbaijan adopted a Law on Information, Informatization and Protection of Information in 1998. Through TV white space (TVWS) technology TV white space refers to the unused TV channels between the active ones in the VHF and UHF spectrum; these unused channels are commonly referred to as television "white spaces". TV white space can be used to provide broadband internet access, particularly in remote and rural areas (Edwards, 2016), and as such can serve as a tool for increasing information access and social and economic development. In 2008, the Federal Communications Commission voted to reallocate unlicensed white space spectrum for public use (White Space, n.d.). See also Digital divide Information Age Information revolution Information society Knowledge economy References Edwards, S. (2016, October 25). TV white space: The 'most powerful development tool?' Retrieved from devex.com/news/tv-white-space-the-most-powerful-development-tool-88868 Flor, Alexander G. 1986. The information-rich and the information-poor: Two faces of the Information Age in a developing country. University of the Philippines Los Baños. Flor, Alexander G. 1993. The informatization of agriculture. The Asian Journal of Communication. Volume 3, Number 2, pp. 94–103. Kluver, Randy. Globalization, Informatization and Intercultural Communication. https://web.archive.org/web/20080725001511/http://acjournal.org/holdings/vol3/Iss3/spec1/kluver.htm . Accessed 25 July 2008. Federal Law on Information, Informatization, and the Protection of Information. Enacted by the State Duma 25 January 1995. Available at https://fas.org/irp/world/russia/docs/law_info.htm . Accessed 7 August 2008. Kim, Sangmoon. 2004. Social Informatization: Its Measurement, Causes and Consequences. Paper presented at the annual meeting of the American Sociological Association, Hilton San Francisco & Renaissance Parc 55 Hotel, San Francisco, CA, August 14, 2004 [Online PDF]. 
Available at http://www.allacademic.com/meta/p110605_index.html. Accessed 19 July 2008. SAOUG. (n.d.) The Role of the South African Online User Group (SAOUG) in the Informatisation of Society. Available at https://web.archive.org/web/20081121144659/http://www.saoug.org.za/archive/2003/0317a.pdf. Accessed 16 August 2008. Linden, Geert van der. 2004. Transforming Institutional Knowledge into Organizational Effectiveness: The Challenge of Becoming a Learning Organization. Speech delivered at the ARTDo International Management and HRD Conference, September 25, 2004. Available at http://adb.org/documents/speeches/2004/mo2004056.asp. Accessed September 13, 2008. Flor, Alexander G. 2008. Developing Societies in the Information Age: A Critical Perspective. UP Open University, Los Baños. The World Bank. (2008) Knowledge Assessment Methodology. Variables and clusters. Available at . Accessed September 19, 2008. Friedman, T. (1999) The Lexus and the Olive Tree: Understanding Globalization. Available at https://web.archive.org/web/20080725001511/http://acjournal.org/holdings/vol3/Iss3/spec1/kluver.htm. Accessed August 29, 2008. Poster, M. (1999) National identities and communications technologies. The Information Societies. Available at https://web.archive.org/web/20080725001511/http://acjournal.org/holdings/vol3/Iss3/spec1/kluver.htm. Accessed August 29, 2008. Law on information, informatization and protection of information. (2007) In Legislationline. Retrieved September 30, 2008, from . Taylor, Richard and Bin Zhang. Measuring the impact of ICT: Theories of Information and Development. (2007). Retrieved October 10, 2011, from https://web.archive.org/web/20120425055513/http://www.intramis.net/TPRC_files/TPRC%2008%20Taylor-Zhang%20Final.pdf White Space (n.d.). Retrieved November 21, 2019, from http://fcc.gov/general/white-space External links International Informatization Academy ACADEMIE EUROPEENNE D'INFORMATISATION (European Academy of Informatization) World Bank's Knowledge Assessment Methodology: you can check the level of informatization or knowledge economy of your country and compare it with other countries using their online tools. Information Information technology Knowledge economy Knowledge sharing
Informatization
Technology
2,587
31,587,409
https://en.wikipedia.org/wiki/Sanov%27s%20theorem
In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables. Let A be a set of probability distributions over an alphabet X, and let q be an arbitrary distribution over X (where q may or may not be in A). Suppose we draw n i.i.d. samples from q, represented by the vector $x^n = (x_1, \dots, x_n)$. Then, we have the following bound on the probability that the empirical measure $\hat{p}_{x^n}$ of the samples falls within the set A: $q^n(\hat{p}_{x^n} \in A) \le (n+1)^{|X|} 2^{-n D_{\mathrm{KL}}(p^* \| q)}$, where $q^n$ is the joint probability distribution on $X^n$, and $p^* = \arg\min_{p \in A} D_{\mathrm{KL}}(p \| q)$ is the information projection of q onto A. The Kullback–Leibler (KL) divergence is given by $D_{\mathrm{KL}}(p \| q) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}$. In words, the probability of drawing an atypical distribution is bounded by a function of the KL divergence from the true distribution to the atypical one; in the case that we consider a set of possible atypical distributions, there is a dominant atypical distribution, given by the information projection. Furthermore, if A is the closure of its interior, then $\lim_{n \to \infty} \frac{1}{n} \log q^n(\hat{p}_{x^n} \in A) = -D_{\mathrm{KL}}(p^* \| q)$. Technical statement Define: $\mathcal{X}$ is a finite set with size $|\mathcal{X}|$, understood as the "alphabet"; $\Delta_{\mathcal{X}}$ is the simplex spanned by the alphabet, a subset of $\mathbb{R}^{|\mathcal{X}|}$; $L_n$ is a random variable taking values in $\Delta_{\mathcal{X}}$: take $n$ samples from the distribution $\mu$, then $L_n$ is the frequency (empirical) probability vector for the sample; $\mathcal{L}_n$ is the space of values that $L_n$ can take, in other words $\{(a_1/n, \dots, a_{|\mathcal{X}|}/n) : a_i \in \mathbb{N}, \sum_i a_i = n\}$. Then, Sanov's theorem states: For every measurable subset $S \subseteq \Delta_{\mathcal{X}}$, $-\inf_{\nu \in \mathring{S}} D_{\mathrm{KL}}(\nu \| \mu) \le \liminf_n \frac{1}{n} \ln \mu^{\otimes n}(L_n \in S) \le \limsup_n \frac{1}{n} \ln \mu^{\otimes n}(L_n \in S) \le -\inf_{\nu \in \bar{S}} D_{\mathrm{KL}}(\nu \| \mu)$. For every open subset $U \subseteq \Delta_{\mathcal{X}}$, $\lim_n \frac{1}{n} \ln \mu^{\otimes n}(L_n \in U) = -\inf_{\nu \in U} D_{\mathrm{KL}}(\nu \| \mu)$. Here, $\mathring{S}$ means the interior, and $\bar{S}$ means the closure. References Sanov, I. N. (1957) "On the probability of large deviations of random variables". Mat. Sbornik 42(84), No. 1, 11–44. Санов, И. Н. (1957) "О вероятности больших отклонений случайных величин". Математический сборник 42(84), No. 1, 11–44. Information theory Probabilistic inequalities
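As a quick numerical illustration of the theorem (added here, not part of the article), the sketch below computes the exact probability that the empirical mean of n Bernoulli(q) samples is at least t, with t greater than q, and compares the resulting exponent with the Sanov rate D(Bern(t) || Bern(q)); the parameter values are arbitrary.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence D(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def prob_empirical_at_least(n, q, t):
    """Exact P(sum(X_i)/n >= t) for X_1, ..., X_n i.i.d. Bernoulli(q)."""
    k_min = math.ceil(n * t)
    return sum(math.comb(n, k) * q**k * (1 - q)**(n - k) for k in range(k_min, n + 1))

q, t = 0.3, 0.5   # true bias q, atypical event: empirical frequency of ones >= t
for n in (10, 100, 1000):
    p_n = prob_empirical_at_least(n, q, t)
    print(n, -math.log(p_n) / n)          # empirical exponent, approaches D(t||q) as n grows
print("Sanov exponent:", kl_bernoulli(t, q))
```

Here the information projection of Bernoulli(0.3) onto the set {p : p(1) >= 0.5} is Bernoulli(0.5), so the decay exponent is D(Bern(0.5) || Bern(0.3)).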
Sanov's theorem
Mathematics,Technology,Engineering
501
4,536,556
https://en.wikipedia.org/wiki/Arp%20299
Arp 299 (parts of it also known as IC 694 and NGC 3690) is a pair of colliding galaxies approximately 134 million light-years away in the constellation Ursa Major. Both of the galaxies involved in the collision are barred irregular galaxies. NGC 3690 was discovered on 18 March 1790 by German-British astronomer William Herschel. It is not completely clear which object is historically called IC 694. According to some sources, the small appendage more than an arcminute northwest of the main pair is actually IC 694, not the primary (eastern) companion. The interaction of the two galaxies in Arp 299 produced young powerful starburst regions similar to those seen in II Zw 96. Supernovae Since 1992, fifteen supernovae have been detected in Arp 299. See also Antennae Galaxies Mice Galaxies References External links Interacting galaxies Barred irregular galaxies Peculiar galaxies Luminous infrared galaxies Ursa Major 3690 IC objects 299 Markarian galaxies Astronomical objects discovered in 1790 Discoveries by William Herschel
Arp 299
Astronomy
207
9,360,334
https://en.wikipedia.org/wiki/Modified-release%20dosage
Modified-release dosage is a mechanism that (in contrast to immediate-release dosage) delivers a drug with a delay after its administration (delayed-release dosage) or for a prolonged period of time (extended-release [ER, XR, XL] dosage) or to a specific target in the body (targeted-release dosage). Sustained-release dosage forms are dosage forms designed to release (liberate) a drug at a predetermined rate in order to maintain a constant drug concentration for a specific period of time with minimum side effects. This can be achieved through a variety of formulations, including liposomes and drug-polymer conjugates (an example being hydrogels). In this sense, sustained release is closer in meaning to "controlled release" than to merely "sustained" release. Extended-release dosage consists of either sustained-release (SR) or controlled-release (CR) dosage. SR maintains drug release over a sustained period but not at a constant rate. CR maintains drug release over a sustained period at a nearly constant rate. Sometimes these and other terms are treated as synonyms, but the United States Food and Drug Administration has in fact defined most of these as different concepts. Sometimes the term "depot tablet" is used, by analogy to the term for an injection formulation of a drug which releases slowly over time, but this term is not medically or pharmaceutically standard for oral medication. Modified-release dosage and its variants are mechanisms used in tablets (pills) and capsules to dissolve a drug over time so that it is released more slowly and steadily into the bloodstream, with the advantage of being taken at less frequent intervals than immediate-release (IR) formulations of the same drug. For example, orally administered extended-release morphine can enable certain chronic pain patients to take fewer tablets per day, rather than needing to redose every few hours as is typical with standard-release morphine tablets. Most commonly it refers to time-dependent release in oral dose formulations. Timed release has several distinct variants, such as sustained release where prolonged release is intended, pulse release, delayed release (e.g. to target different regions of the GI tract), etc. A distinction of controlled release is that it not only prolongs action but attempts to maintain drug levels within the therapeutic window, to avoid potentially hazardous peaks in drug concentration following ingestion or injection and to maximize therapeutic efficiency. In addition to pills, the mechanism can also apply to capsules and injectable drug carriers (which often have an additional release function); other forms of controlled-release medicine include gels, implants and devices (e.g. the vaginal ring and contraceptive implant) and transdermal patches. Examples for cosmetic, personal care, and food science applications often centre on odour or flavour release. The release technology scientific and industrial community is represented by the Controlled Release Society (CRS). The CRS is the worldwide society for delivery science and technologies. CRS serves more than 1,600 members from more than 50 countries. Two-thirds of CRS membership is represented by industry and one-third represents academia and government. CRS is affiliated with the Journal of Controlled Release and Drug Delivery and Translational Research scientific journals. List of abbreviations There is no industry standard for these abbreviations, and confusion and misreading have sometimes caused prescribing errors. Clear handwriting is necessary. 
For some drugs with multiple formulations, putting the meaning in parentheses is advisable. A few other abbreviations are similar to these (in that they may serve as suffixes) but refer to dose rather than release rate. They include ES and XS (Extra Strength). Methods Today, most time-release drugs are formulated so that the active ingredient is embedded in a matrix of insoluble substance(s) (various: some acrylics, even chitin; these substances are often patented) such that the dissolving drug must find its way out through the holes. In some SR formulations, the drug dissolves into the matrix, and the matrix physically swells to form a gel, allowing the drug to exit through the gel's outer surface. Micro-encapsulation is also regarded as a more complete technology for producing complex dissolution profiles. By coating an active pharmaceutical ingredient around an inert core and layering it with insoluble substances to form a microsphere, one can obtain more consistent and replicable dissolution rates in a convenient format that can be mixed and matched with other instant-release pharmaceutical ingredients in any two-piece gelatin capsule. There are certain considerations for the formulation of sustained-release products: If the pharmacological activity of the active compound is not related to its blood levels, time releasing has no purpose, except in some cases, such as bupropion, to reduce possible side effects. If the absorption of the active compound involves active transport, the development of a time-release product may be problematic. The biological half-life of the drug refers to the drug's elimination from the bloodstream, which can occur through metabolism, urinary excretion, and other forms of elimination. If the active compound has a long half-life (over 6 hours), it is sustained on its own. If the active compound has a short half-life, it would require a large amount to maintain a prolonged effective dose. In this case, a broad therapeutic window is necessary to avoid toxicity; otherwise, the risk is unwarranted and another mode of administration would be recommended. Half-lives appropriate for sustained-release methods are typically 3–4 hours, and a drug dose greater than 0.5 grams is too high. The therapeutic index is also a factor in whether a drug can be used as a time-release drug. A drug with a narrow therapeutic range, or small therapeutic index, will be judged unfit for a sustained-release mechanism, partly for fear of dose dumping, which can prove fatal under the conditions mentioned. For a drug that is made to be released over time, the objective is to stay within the therapeutic range as long as needed. There are many different methods used to obtain sustained release. Diffusion systems The release rate of a diffusion system depends on the rate at which the drug dissolves through a barrier, which is usually a type of polymer. Diffusion systems can be broken into two subcategories, reservoir devices and matrix devices. Reservoir devices coat the drug with a polymer; for the device to have sustained-release effects, the polymer must not dissolve, with the drug instead being released through diffusion. The release rate of reservoir devices can be altered by changing the polymer, and zero-order release is possible; however, drugs with higher molecular weight have difficulty diffusing through the membrane. Matrix devices form a matrix (drug(s) mixed with a gelling agent) in which the drug is dissolved or dispersed. 
The drug is usually dispersed within a polymer and then released by undergoing diffusion. However, to make the drug SR in this device, the rate of dissolution of the drug within the matrix needs to be higher than the rate at which it is released. The matrix device cannot achieve zero-order release, but higher molecular weight molecules can be used. The diffusion matrix device also tends to be easier to produce and to protect from changes in the gastrointestinal tract, but factors such as food can affect the release rate. Dissolution systems In a dissolution system, the system itself must dissolve slowly in order for the drug to have sustained-release properties, which can be achieved by using appropriate salts and/or derivatives as well as by coating the drug with a slowly dissolving material. This approach is used for drug compounds with high solubility in water. When the drug is covered with a slow-dissolving coat, the drug is eventually released. Instead of diffusion, the drug release depends on the solubility and thickness of the coating. Because of this mechanism, dissolution is the rate-limiting factor for drug release. Dissolution systems can be broken down into subcategories called reservoir devices and matrix devices. The reservoir device coats the drug with an appropriate material which will dissolve slowly. It can also be used to administer a group of beads with coatings of varying thickness, so that the drug is released at multiple times, creating a sustained-release effect. The matrix device holds the drug in a matrix, and the matrix, rather than a coating, is dissolved. It can come either as drug-impregnated spheres or drug-impregnated tablets. Osmotic systems Osmotic controlled-release oral delivery systems (OROS) have the form of a rigid tablet with a semi-permeable outer membrane and one or more small laser-drilled holes in it. As the tablet passes through the body, water is absorbed through the semipermeable membrane via osmosis, and the resulting osmotic pressure is used to push the active drug through the opening(s) in the tablet. OROS is a trademarked name owned by ALZA Corporation, which pioneered the use of osmotic pumps for oral drug delivery. Osmotic release systems have a number of major advantages over other controlled-release mechanisms. They are significantly less affected by factors such as pH, food intake, GI motility, and differing intestinal environments. Using an osmotic pump to deliver drugs has additional inherent advantages regarding control over drug delivery rates. This allows for much more precise drug delivery over an extended period of time, which results in much more predictable pharmacokinetics. However, osmotic release systems are relatively complicated, somewhat difficult to manufacture, and may cause irritation or even blockage of the GI tract due to prolonged release of irritating drugs from the non-deformable tablet. Ion-exchange resin In the ion-exchange method, the resins are cross-linked water-insoluble polymers that contain ionisable functional groups forming a repeating pattern along the polymer chain. The drug is attached to the resin and is released when an appropriate interaction of ions and ion-exchange groups occurs. The area and length of the drug release and the number of cross-linking polymers dictate the rate at which the drug is released, determining the sustained-release effect. Floating systems A floating system is one that floats on gastric fluids due to its low density. The density of the gastric fluids is about 1 g/mL; thus, the drug/tablet administered must have a lower density. 
Buoyancy allows the system to float to the top of the stomach and release the drug at a slower rate without the worry of it being excreted. This system requires that enough gastric fluid, as well as food, be present. Many forms of drugs use this method, such as powders, capsules, and tablets. Bio-adhesive systems Bio-adhesive systems are generally meant to stick to mucus and can be favorable for mouth-based delivery due to the high mucus levels in that area, but are less straightforward for other areas. Magnetic materials can be added to the drug so that a magnet outside the body can assist in holding the system in place. However, there is low patient compliance with this system. Matrix systems The matrix system is a mixture of materials with the drug, which slows the release of the drug. This system has several subcategories: hydrophobic matrices, lipid matrices, hydrophilic matrices, biodegradable matrices, and mineral matrices. A hydrophobic matrix is a drug mixed with a hydrophobic polymer. This causes sustained release because the drug, after being dissolved, has to be released by passing through channels made by the hydrophilic polymer. A hydrophilic matrix is the matrix discussed before: a mixture of a drug or drugs with a gelling agent. This system is well liked because of its cost and broad regulatory acceptance. The polymers used can be broken down into categories: cellulose derivatives, non-cellulose natural polymers, and polymers of acrylic acid. A lipid matrix uses wax or similar materials. Drug release happens via diffusion through, and erosion of, the wax and tends to be sensitive to digestive fluids. Biodegradable matrices are made with unstable, linked monomers that are eroded by biological compounds such as enzymes and proteins. A mineral matrix generally uses polymers obtained from seaweed. Stimuli inducing release Examples of stimuli that may be used to bring about release include pH, enzymes, light, magnetic fields, temperature, ultrasonics, osmosis, cellular traction forces, and electronic control of MEMS and NEMS. Spherical hydrogels of micro-size (50–600 μm diameter), made of a 3-dimensional cross-linked polymer, can be used as drug carriers to control the release of the drug. These hydrogels are called microgels. They may possess a negative charge, as in the example of DC-beads. By an ion-exchange mechanism, a large amount of oppositely charged amphiphilic drug can be loaded inside these microgels. The release of these drugs can then be controlled by a specific triggering factor such as pH, ionic strength or temperature. Pill splitting Some time-release formulations do not work properly if split, such as controlled-release tablet coatings, while other formulations such as micro-encapsulation still work if the microcapsules inside are swallowed whole. Among the health information technology (HIT) that pharmacists use are medication safety tools to help manage this problem. For example, the ISMP "do not crush" list can be entered into the system so that warning stickers can be printed at the point of dispensing, to be stuck on the pill bottle. Pharmaceutical companies that do not supply a range of half-dose and quarter-dose versions of time-release tablets can make it difficult for patients to be slowly tapered off their drugs. History The earliest SR drugs are associated with a 1938 patent by Israel Lipowski, who coated pellets, an approach that led to the coating of particles. 
The science of controlled release developed further with more oral sustained-release products in the late 1940s and early 1950s, the development of controlled release of marine anti-foulants in the 1950s, and controlled release fertilizer in the 1970s where sustained and controlled delivery of nutrients was achieved following a single application to the soil. Delivery is usually effected by dissolution, degradation, or disintegration of an excipient in which the active compound is formulated. Enteric coating and other encapsulation technologies can further modify release profiles. See also Depot injection Tablet (pharmacy) Footnotes External links Controlled Release Society United Kingdom & Ireland Controlled Release Society Controlled Release Technology 5-day short course at MIT with Professor Robert Langer. Dosage forms Routes of administration Drug delivery devices Pharmacokinetics
Modified-release dosage
Chemistry
3,019
19,236,636
https://en.wikipedia.org/wiki/Ecosystem-based%20management
Ecosystem-based management is an environmental management approach that recognizes the full array of interactions within an ecosystem, including humans, rather than considering single issues, species, or ecosystem services in isolation. It can be applied to studies in both terrestrial and aquatic environments, each of which presents its own challenges. In the marine realm, ecosystems are highly challenging to quantify due to highly migratory species as well as rapidly changing environmental and anthropogenic factors that can alter the habitat rather quickly. To manage fisheries efficiently and effectively, it has become increasingly pertinent to understand not only the biological aspects of the species being studied, but also the environmental variables they are experiencing. Population abundance and structure, life history traits, competition with other species, where the stock sits in the local food web, tidal fluctuations, salinity patterns and anthropogenic influences are among the variables that must be taken into account to fully understand the implementation of an "ecosystem-based management" approach. Interest in ecosystem-based management in the marine realm has developed more recently, in response to increasing recognition of the declining state of fisheries and ocean ecosystems. However, due to the lack of a clear definition and the diversity of the environments involved, implementation has lagged. In freshwater lake ecosystems, it has been shown that ecosystem-based habitat management is more effective for enhancing fish populations than management alternatives. Terrestrial ecosystem-based management (often referred to as ecosystem management) came into its own during the conflicts over endangered species protection (particularly the northern spotted owl), land conservation, and water, grazing and timber rights in the western United States in the 1980s and 1990s. History The systemic origins of ecosystem-based management are rooted in the ecosystem management policy applied to the Great Lakes of North America in the late 1970s. The legislation created, the "Great Lakes Basin and the Great Lakes Water Quality Agreement of 1978", was based on the claim that "no park is an island", with the purpose of showing that strict protection of an area is not the best method of preservation. This type of management system was, however, an idea that began long before and evolved through the testing and challenging of common ecosystem management practices. Before its complete synthesis, the management system's historical development can be traced back to the 1930s. During this time, the scientific communities that studied ecology realized that current approaches to the management of national parks did not provide effective protection of the species within. In 1932, the Ecological Society of America's Committee for the Study of Plant and Animal Communities recognized that US national parks needed to protect all the ecosystems contained within the park in order to create an inclusive and fully functioning sanctuary, and be prepared to handle natural fluctuations in its ecology. The committee also explained the importance of interagency cooperation and improved public education, and challenged the idea that proper park management would "improve" nature. These ideas became the foundation of modern ecosystem-based management. As the understanding of how to manage ecosystems shifted, new tenets of the management system were produced. 
Biologists George Wright and Ben Thompson accounted for the size and boundary limitations of parks and contributed to the restructuring of how park lines were drawn. They explained how large mammals, for example, could not be supported within the restricted zones of a national park, and that in order to protect these animals and their ecosystems a new approach would be needed. Other scientists followed suit, but none were successful in establishing a well-defined ecosystem-based management approach. In 1979, the importance of ecosystem-based management resurfaced in ecology through the work of two biologists, John and Frank Craighead. The Craigheads found that grizzly bears of Yellowstone National Park could not sustain a population if only allowed to live within park boundaries. This reinforced the idea that a broader definition of an ecosystem was needed, suggesting that it be based on the biotic requirements of the largest mammal present. The idea of ecosystem-based management began to catch on, and projects throughout American national parks reflected the idea of protecting an ecosystem in its entirety rather than according to the legal or ecological restrictions used previously. Jim Agee and Darryll Johnson published a book-length report on managing ecosystems in 1988, explaining the theoretical framework of the approach. While they did not fully embrace ecosystem-based management, still calling for "ecologically defined boundaries", they stated the importance of "clearly stated management goals, interagency cooperation, monitoring of management results, and leadership at the national policy levels". Most importantly, they demanded the recognition of human influence. It was argued that scientists must keep in mind the "complex social context of their work" and always be moving towards "socially desirable conditions". This need to understand the social aspects of scientific management is the fundamental step from ecological management to ecosystem-based management. Although it continues to gain recognition, debate over ecosystem-based management persists. Grumbine (1994) believes that, while the approach has evolved, it has not been fully incorporated into management practices because the most effective forms of it have yet to be seen. He articulates that the current ecological climate calls for the most holistic approach to ecological management. This is in part due to the rapid decline in biodiversity and the constant state of flux in societal and political views of nature. Conflicts over public interest and understanding of the natural world have created social and political climates that require interagency cooperation, which stands as a backbone for ecosystem-based management. Implementation Because ecosystem-based management is applied to large, diverse areas encompassing an array of interactions between species, ecosystem components, and humans, it is often perceived as a complex process that is difficult to implement. Slocombe (1998b) also noted that uncertainty is common and predictions are difficult. However, in light of significant ecosystem degradation, there is a need for a holistic approach that combines environmental knowledge and co-ordination with governing agencies to initiate, sustain and enforce habitat and species protection, and includes public education and involvement. As a result, ecosystem-based management will likely be increasingly used in the future as a form of environmental management. 
Some suggestions for implementing ecosystem-based management and what the process may involve are as follows: Goals and objectives Defining clear and concise goals for ecosystem-based management is one of the most important steps in effective ecosystem-based management implementation. Goals must move beyond science-based or science-defined objectives to include social, cultural, economic and environmental importance. Of equal importance is to make sure that the community and stake-holders are involved throughout the entire process. Slocombe (1998a) also stated that a single, end-all goal cannot be the solution, but instead a combination of goals and their relationships with each other should be the focus. As discussed by Slocombe (1998a), goals should be broadly applicable, measurable and readily observable, and ideally be collectively supported in order to be achievable. The idea is to provide direction for both thinking and action and should try to minimize managing ecosystems in a static state. Goals should also be flexible enough to incorporate a measure of uncertainty and be able to evolve as conditions and knowledge change. This may involve focusing on specific threatening processes, such as habitat loss or introduced invasive species, occurring within an ecosystem. Overall the goals should be integrative, to include the structure, organization and processes of the management of an area. Correct ecosystem-based management should be based in goals that are both "substantive", to explain the aims and importance of protecting an area, and "procedural", to explain how substantive goals will be met. As described by Tallis et al. (2010), some steps of ecosystem-based management may include: Scoping This step involves the acquisition of data and knowledge from various sources in order to provide a thorough understanding of critical ecosystem components. Sources may include literature, informal sources such as aboriginal residents, resource users, and/or environmental experts. Data may also be gained through statistical analyses, simulation models, or conceptual models. Defining indicators Ecological indicators are useful for tracking or monitoring an ecosystem's status and can provide feedback on management progress as stressed by Slocombe (1998a). Examples may include the population size of a species or the levels of toxin present in a body of water. Social indicators may also be used such as the number or types of jobs within the environmental sector or the livelihood of specific social groups such as indigenous peoples. Setting thresholds Tallis et al. (2010) suggest setting thresholds for each indicator and setting targets that would represent a desired level of health for the ecosystem. Examples may include species composition within an ecosystem or the state of habitat conditions based on local observations or stakeholder interviews. Thresholds can be used to help guide management, particularly for a species by looking at the conservation status criteria established by either state or federal agencies and using models such as the minimum viable population size. Risk analysis A range of threats and disturbances, both natural and human, often can affect indicators. Risk is defined as the sensitivity of an indicator to an ecological disturbance. Several models can be used to assess risk such as population viability analysis. 
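As a hedged illustration of the kind of model this step refers to, the sketch below runs a toy stochastic population viability analysis that estimates the probability of a population falling below a quasi-extinction threshold within a time horizon. The model structure and all parameter values are invented for demonstration; they do not come from any agency's actual assessment.

```python
import math
import random

# Toy population viability analysis (illustrative only): estimate the probability
# that a population falls below a quasi-extinction threshold within a time horizon,
# under log-normal environmental variability in the annual growth rate.
def extinction_probability(n0=50, years=100, mean_growth=0.01, sd_growth=0.15,
                           quasi_extinction=10, trials=10_000, seed=1):
    random.seed(seed)
    extinct = 0
    for _ in range(trials):
        n = float(n0)
        for _ in range(years):
            n *= math.exp(random.gauss(mean_growth, sd_growth))  # stochastic annual growth
            if n < quasi_extinction:                             # quasi-extinction threshold crossed
                extinct += 1
                break
    return extinct / trials

print(extinction_probability())   # estimated risk of quasi-extinction within 100 years
```

Outputs like this can then be compared against a risk threshold chosen by managers, in the same spirit as the indicator thresholds discussed above.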
Monitoring Evaluating the effectiveness of the implemented management strategies is very important in determining how management actions are affecting the ecosystem indicators. Evaluation: This final step involves monitoring and assessing data to see how well the chosen management strategies are performing relative to the initial objectives stated. The use of simulation models or multi-stakeholder groups can help to assess management. It is important to note that many of these steps for implementing ecosystem-based management are limited by the governance in place for a region, the data available for assessing ecosystem status and reflecting on the changes occurring, and the time frame in which to operate. Challenges Because ecosystems differ greatly and express varying degrees of vulnerability, it is difficult to devise a functional framework that can be universally applied. These outlined steps or components of ecosystem-based management can, for the most part, be applied to multiple situations and are only suggestions for addressing the challenges involved in managing complex issues. Because of the greater number of influences, impacts, and interactions to account for, problems, obstacles and criticism often arise within ecosystem-based management. There is also a need for more data, both spatial and temporal, to help managers make sound decisions for the sustainability of the stock being studied. The first commonly defined challenge is the need for meaningful and appropriate management units. Slocombe (1998b) noted that these units must be broad and contain value for people in and outside of the protected area. For example, Aberley (1993) suggests the use of "bioregions" as management units, which can allow people's involvement with that region to come through. Defining management units as inclusive regions rather than exclusive ecological zones would prevent further limitations created by narrow or restrictive political and economic policy built around the units. Slocombe (1998b) suggests that better management units should be flexible and built from existing units, and that the biggest challenge is creating truly effective units for managers to compare against. Another issue is the creation of administrative bodies. They should operate as the essence of ecosystem-based management, working together towards mutually agreed-upon goals. Gaps in administration or research, competing objectives or priorities between management agencies and governments due to overlapping jurisdictions, or obscure goals such as sustainability, ecosystem integrity, or biodiversity can often result in fragmented or weak management. In addition, Tallis (2010) stated that limited knowledge of ecosystem components and function, together with time constraints, can often limit objectives to only those that can be addressed in the short term. The most challenging issue facing ecosystem-based management is that there exists little knowledge about the system and its effectiveness. Slocombe (1998b) stated that with limited resources available on how to implement the system, it is hard to find support for its use. 
Slocombe (1998a) said that criticisms of ecosystem-based management include its reliance on analogy and comparisons, too broadly applied frameworks, its overlap with or duplication of other methods such as ecosystem management, environmental management, or integrated ecosystem assessment, its vagueness in concepts and application, and its tendency to ignore historical, evolutionary or individual factors that may heavily influence ecosystem functioning. Tallis (2010) stated that ecosystem-based management is seen as a critical planning and management framework for conserving or restoring ecosystems, though it is still not widely implemented. An ecosystem approach addresses many relationships across spatial, biological, and organizational scales and is a goal-driven approach to restoring and sustaining ecosystems and their functions. In addition, ecosystem-based management involves community influence as well as planning and management from local, regional and national government bodies and management agencies. All must collaborate in order to develop a desired future state of ecosystem conditions, particularly where ecosystems have undergone radical degradation and change. Slocombe (1998b) said that to move forward, ecosystem-based management should be approached through adaptive management, allowing flexibility and inclusiveness to deal with constant environmental, societal, and political change. Marine systems Ecosystem-based management of marine environments has begun to move away from the traditional strategies which focus on conservation of single species or single sectors in favor of an integrated approach which considers all key activities, particularly anthropogenic, that affect marine environments. Management must take into account the life history of the fish being studied, its association with the surrounding environment, its place in the food web, where it prefers to reside in the water column, and how it is affected by human pressures. The objective is to ensure sustainable ecosystems, thus protecting the resources and services they provide for future generations. In recent years there has been increasing recognition of anthropogenic disruption to marine ecosystems resulting from climate change, overfishing, nutrient and chemical pollution from land runoff, coastal development, bycatch, and habitat destruction. The effect of human activity on marine ecosystems has become an important issue because many of the benefits provided to humans by marine ecosystems are declining. These services include the provision of food, fuel, mineral resources and pharmaceuticals, as well as opportunities for recreation, trade, research and education. Guerry (2005) has identified an urgent need to improve the management of these declining ecosystems, particularly in coastal areas, to ensure sustainability. Human communities depend on marine ecosystems for important resources, but without holistic management, these ecosystems are likely to collapse. Olsson et al. (2008) suggest that the degradation of marine ecosystems is largely the result of poor governance and that new approaches to management are required. The Pew Oceans Commission and the United States Commission on Ocean Policy have indicated the importance of moving from current piecemeal management to a more integrated ecosystem-based approach.
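The "life history of the fish being studied" mentioned above is usually summarized in simple quantitative models. The following minimal Python sketch shows the von Bertalanffy growth function, the classic single-species growth model that the spiny lobster example in the Connections section below is said to deviate from. The parameter values are hypothetical and chosen only for illustration; they do not come from any assessment or source cited in this article.

```python
import math

def von_bertalanffy_length(age, l_inf, k, t0):
    """Von Bertalanffy growth function: expected body length at a given age.

    l_inf is the asymptotic length, k the growth coefficient,
    and t0 the theoretical age at zero length.
    """
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

# Hypothetical parameters for a generic fish stock (illustration only)
for age in range(0, 11, 2):
    length = von_bertalanffy_length(age, l_inf=80.0, k=0.3, t0=-0.5)
    print(f"age {age:2d}: expected length {length:5.1f} cm")
```

The model assumes smooth, continuous growth toward an asymptotic size, which is why a stepwise, molt-based grower such as the Caribbean spiny lobster fits it poorly.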
Stock assessment Stock assessment is a critically important aspect of fisheries management, but it is a highly complex, logistically difficult, and expensive process and can thus be a contentious issue, particularly when competing parties disagree on the findings of an assessment. Accurate stock assessments require knowledge of reproductive and morphological patterns, age-at-stage progressions, and movement ecology. Bottom up or top down All members of an ecosystem are affected by other organisms within that ecosystem, and proper management of wildlife requires knowledge of an organism's trophic level and its effects on other organisms within its food web. Top-down and bottom-up controls are two of the mechanisms by which the numbers of wild populations of plants and animals are limited. Top-down controls have been seen in the explosion of sea urchins and the subsequent decline in kelp beds following the near-extirpation of sea otters. As otters were hunted nearly to extinction, sea urchins - preyed on by sea otters and themselves feeding on the kelp - boomed, resulting in the near-disappearance of kelp beds. Bottom-up controls are best illustrated when autotrophic primary producers such as plants and phytoplankton, which represent the lowest trophic level of an ecosystem, are limited, impacting all organisms in higher trophic levels, but bottom-up changes can also be seen in higher trophic levels. For example, the decline of North Sea puffins has been attributed to overexploitation of sand eels, an important prey item. Bycatch Red snapper is a species of enormous economic importance in the Gulf of Mexico. Management of this species is complicated by the large impact of bycatch associated with the shrimping industry. Rates of red snapper mortality are not explained by fishery landings, but are instead associated with large numbers of juvenile red snapper caught as bycatch in the fine mesh used by trawlers. Key elements Connections At its core, ecosystem-based management is about acknowledging connections and interdependencies, including the linkages between marine ecosystems and human societies, economies and institutional systems, as well as those among various species within an ecosystem and among ocean places that are linked by the movement of species, materials, and ocean currents. Of particular importance is how these factors all interact with and affect each other. In the Caribbean, the spiny lobster is managed based on a classic population model that for most fishery species works quite well. However, this species will grow and then halt its growth when it needs to molt its shell; instead of a continuous growth cycle, it pauses its growth and invests its energy in a new shell. To further complicate matters, it slows this process down as it gets older in order to invest more energy into reproduction, thus deviating further from the von Bertalanffy model of growth that was applied to it. The more information we can gather about an ecosystem and all of the interconnected factors which affect it, the more capable we will be of better managing that system. Cumulative impacts Ecosystem-based management focuses on how individual actions affect the ecosystem services that flow from coupled socio-ecological systems in an integrated fashion, rather than considering these impacts in a piecemeal manner. Loss of biodiversity in marine ecosystems is an example of how cumulative effects from different sectors can impact on an ecosystem in a compounding way.
Overfishing, coastal development, filling and dredging, mining and other human activities all contribute to the loss of biodiversity and therefore to degradation of the ecosystem. Work is needed, prior to carrying out the research, to understand the total effects that species can have on each other and on the environment. This work must also be repeated regularly, because species are changing their life history traits and their relationships with the environment as humans continually modify it. Interactions between sectors The only way to deal with the cumulative effects of human influences on marine ecosystems is for the various contributing sectors to set common goals for the protection or management of ecosystems. While some policies may only affect a single sector, others may affect multiple sectors. A policy for the protection of endangered marine species, for example, could affect the recreational and commercial fishing, mining, shipping and tourism sectors, to name a few. More effective ecosystem management would result from the collective adoption of policies by all sectors, rather than each sector creating its own isolated policies. For example, in the Gulf of Mexico there are oil rigs, recreational fisheries, commercial fisheries and multiple tourist attractions. One of the main fisheries is that of the red snapper, which inhabits much of the Gulf and supports commercial and recreational fisheries employing thousands of people. During the Deepwater Horizon oil spill it became abundantly clear that the disaster negatively affected population numbers as well as the integrity of the catch that was being made. The species not only suffered higher mortality rates, but the market was also less trusting of the product. An environmental disaster thus interacted with the commercial, recreational, and economic sectors tied to a specific species. Changing public perceptions Not all members of the public will be properly informed about, or fully aware of, current threats to marine ecosystems, and it is therefore important to change public perceptions by informing people about these issues. It is important to consider the interests of the public when making decisions about ocean management, and not just those who have a material interest, because community support is needed by management agencies in order to make decisions. The Great Barrier Reef Marine Park Authority (GBRMPA) faced the issue of poor public awareness in its proposed management strategy, which included no-take fishing zones. According to Olsson (2008), the authority addressed this problem by starting a 'reef under pressure' information campaign to show the public that the Great Barrier Reef is under threat from human disturbances, and in doing so was successful in gaining public support. Bridging science and policy To ensure that all key players are on the same page, it is important to have communication between managers, resource users, scientists, government bodies and other stakeholders. Leslie and McLeod (2007) stated that proper engagement between these groups will enable the development of management initiatives that are realistic and enforceable as well as effective for ecosystem management. If certain small-scale players are not involved or informed, it is highly unlikely, and equally challenging, to get them to cooperate and to follow the rules that need to be put in place. It is of the utmost importance to have every stakeholder involved at every step of the process to increase its cohesion.
Embracing change Coupled social-ecological systems are constantly changing in ways that cannot be fully predicted or controlled. Understanding the resilience of ecosystems, i.e. the extent to which they can maintain structure, function, and identity in the face of disturbance, can enable better prediction of how ecosystems will respond to both natural and anthropogenic perturbations, and to changes in environmental management. Given how much modification humans are making to environments, it is important to track these changes on a yearly basis as well. Some species, such as flounder, are changing their life histories due to the increased pressures that humans are placing on the environment. Thus, when a manager or government conducts an assessment of an ecosystem for a given year, the relationships that a species has with others can change very quickly, and this can rapidly invalidate the model used for the ecosystem if it is not updated. Multiple objectives Ecosystem-based management focuses on the diverse benefits provided by marine systems, rather than on single ecosystem services. Such benefits or services include vibrant commercial and recreational fisheries, biodiversity conservation, renewable energy from wind or waves, and coastal protection. The goal is to provide sustainable fisheries while incorporating the impacts of other aspects on that resource. When managed correctly, an ecosystem-based model can greatly improve not only the resource being managed, but also those associated with it. Learning and adaptation Because of the lack of control and predictability of coupled social-ecological systems, an adaptive management approach is recommended. There can be multiple different factors that must be overcome (fisheries, pollution, borders, multiple agencies, etc.) to create a positive outcome. Managers must be able to react and adapt so as to limit the variance associated with the outcome. Other examples Great Bear Rainforest - Canada The Land and Resource Management Planning (LRMP) process was implemented by the British Columbia Government (Canada) in the mid-1990s in the Great Bear Rainforest in order to establish a multiparty land-use planning system. The aim was to "maintain the ecological integrity of terrestrial, marine and freshwater ecosystems and achieve high levels of human well-being". The steps described in the program included: protect old-growth forests, maintain forest structure at the stand level, protect threatened and endangered species and ecosystems, protect wetlands and apply adaptive management. MacKinnon (2008) highlighted that the main limitation of this program lay in its social and economic aspects, related to its weak orientation towards improving human well-being. The Great Lakes - Canada and United States A Remedial Action Plan (RAP) that implemented ecosystem-based management was created under the Great Lakes Water Quality Agreement. The transition, according to the authors, from "a narrow to a broader approach" was not easy because it required the cooperation of both the Canadian and American governments. This meant different cultural, political and regulatory perspectives were involved with regard to the lakes. Hartig et al.
(1998) described eight principles required to make the implementation of ecosystem-based management efficacious: "broad-based stakeholder involvement; commitment of top leaders; agreement on information needs and interpretation; action planning within a strategic framework; human resource development; results and indicators to measure progress; systematic review and feedback; and stakeholder satisfaction". Dam removal in the Pacific Northwest The Elwha dam removal in Washington state is the largest dam removal project in the United States. Not only was the dam blocking several species of salmon from reaching their natural habitat, it also had millions of tons of sediment built up behind it. Scallop aquaculture in Sechura Bay, Peru The Peruvian bay scallop is grown in the benthic environment. The intensity of the fishery has caused concern in recent years, and there has been a shift towards more of an environmental management scheme. Managers are now using food web models to assess the current situation and to calibrate the stocking levels that are needed. The impacts of the scallops on the ecosystem and on other species are now being taken into account so as to limit phytoplankton blooms, overstocking, diseases and overconsumption in a given year. This study is proposed to help guide both fishermen and managers in their goal of providing long-term success for the fishery as well as for the ecosystem they are utilizing. Enhancing lake fish populations - Germany Scientists and numerous angling clubs have collaborated in a large-scale set of whole-lake experiments (20 gravel pit lakes monitored over a period of six years) to assess the outcomes of ecosystem-based habitat enhancement compared to alternative management practices in fisheries. In some of the lakes, additional shallow water zones were created. In other lakes, coarse wood bundles were added to enhance structural diversity. Other study lakes were stocked with five fish species of interest to fisheries. Unmanipulated lakes served as controls to allow for a comprehensive before-after-control-impact study design. The study was based on a sample of more than 150,000 fish. Radinger et al. (2023) found that fish stocking was ineffectual, whereas ecosystem-based habitat management through creating shallow zones increased fish abundance, especially that of juvenile fish. The authors argue that restoring ecological processes and key habitats has a larger potential to meet conservation goals than narrow, species-focused actions. See also Deep ecology Ecosystem management Ecosystem based fisheries Sustainable forest management Sustainable land management Ecosystem health Marine Ecology References External links Adaptive Management Advancing EBM Toolkit EBM Tools Network Ecosystem Management Initiative Resilience What is marine EBM? Natural resource management Management frameworks Oceanography
Ecosystem-based management
Physics,Environmental_science
5,363
4,322,218
https://en.wikipedia.org/wiki/Critical%20geography
Critical geography is theoretically informed geographical scholarship that promotes social justice, liberation, and leftist politics. Critical geography is also used as an umbrella term for Marxist, feminist, postmodern, poststructural, queer, left-wing, and activist geography. Critical geography is one variant of critical social science and the humanities that adopts Marx’s thesis to interpret and change the world. Fay (1987) defines contemporary critical science as the effort to understand oppression in a society and use this understanding to promote societal change and liberation. Agger (1998) identifies a number of features of critical social theory practiced in fields like geography, which include: a rejection of positivism; an endorsement of the possibility of progress; a claim for the structural dynamics of domination; an argument that dominance is derived from forms of false consciousness, ideology, and myth; a faith in the agency of everyday change and self-transformation and an attendant rejection of determinism; and a rejection of revolutionary expediency. Origin The term 'critical geography' has been in use since at least 1749, when the book Geography reformed: a new system of general geography, according to an Accurate Analysis of the science in four parts dedicated a chapter to the topic titled "of Critical Geography." This book proposed critical geography as the process by which geographers identify errors in the work of others and fix them in later publications. In this 1749 book, Cave uses examples of Eratosthenes, Hipparchus, and Ptolemy all correcting the errors of their predecessor before publishing their own work. In the 1970s, so-called "radical geographers" in the Anglo-American world began using the framework of critical geography to transform the scope of the discipline of geography in response to societal issues such as civil rights, environmental pollution, and war. Peet (2000) provides an overview of the evolution of radical and critical geography. The mid- to late-1970s saw ascending critiques of the quantitative revolution and the adoption of Marxist approaches through Marxist geography. The 1980s were marked by fissures between humanistic, feminist and Marxist streams, and a reversal of structural excess. In the late 1980s, critical geography emerged and gradually became a self-identified field. Although closely related, critical geography and radical geography are not interchangeable. Critical geography has two crucial departures from radical geography: (1) a rejection of the structural excess of Marxism, in accordance with the post-modern turn; and (2) an increasing interest in culture and representation, in contrast to radical geography's focus on the economy. Peet (2000) notices a rapprochement between critical and radical geography after heated debate in the 1990s. Nevertheless, Castree (2000) posits that critical and radical geography entail different commitments. He contends that the eclipse of radical geography indicates the professionalization and academicization of Left geography, and therefore worries about the loss of the "radical" tradition. Common themes As a consequence of the post-modern turn, critical geography doesn't have a unified commitment. Hubbard, Kitchin, Bartley, and Fuller (2002) asserts that critical geography has a diverse epistemology, ontology, and methodology, and does not have a distinctive theoretical identity. 
Nonetheless, Blomley (2006) identifies six common themes of critical geography, encompassing: A commitment to theory and a rejection of empiricism. Critical geographers consciously deploy theories of some form, but they draw from a variety of theoretical wells, such as political economy, governmentality, feminism, anti-racism, and anti-imperialism. A commitment to reveal the processes that produce inequalities. Critical geographers seek to unveil power, uncover inequality, expose resistance, and cultivate liberating politics and social changes. An emphasis on representation as a means of domination and resistance. A common focus of critical geography is to study how representations of space sustain power; or, on the contrary, how representations of space can be used to challenge power. An optimistic faith in the power of critical scholarship. Critical geographers believe that scholarship can be used to resist dominant representations, and that scholars can undo said domination and help free the oppressed. There exists an implicit confidence in the power of critical scholarship to reach the uninformed, and in the capacities of people to defeat alienation by means of reflexive self-education. A commitment to progressive practices. Critical geographers want to make a difference through praxis. They claim to be united with social movements and activists with commitments to social justice. The actual relationship between critical geography and activism has been much debated. An understanding of space as a critical tool. Critical geographers pay special attention to how spatial arrangements and representations can be used to produce oppression and inequality. Critical geographers identify to varying degrees how space can be used as both a veil and a tool of power. Critiques There are many criticisms of critical geography. Physical geographers and those who embraced the new techniques developed during the quantitative revolution are often the target of criticisms from critical geographers. These geographers contend that critical geographers argue from a position of ignorance about quantitative geography. The popularity of critical geography, and the resulting decline in quantitative methods, is argued to be in large part due to the difficulty of the subject matter causing people to "jump ship." Further, some believe that critical geographers are antiscience. Many quantitative geographers acknowledge the early criticisms pointed out by critical geographers and contend that new technology and techniques have addressed these criticisms and that they no longer apply. There has also been relatively limited discussion of the shared commitments of critical geographers, with a few exceptions such as Harvey (2000). Questions such as "what are geographers critical of" and "to what end" still need to be answered. Barnes (2002) comments that critical geographers are better at providing explanatory diagnoses than at offering anticipatory-utopian imaginations to reconfigure the world. Some critical geographers are concerned with the institutionalization of critical geography. Even though critical geographers conceive of themselves as rebels and outsiders, critical thinking has become prevalent in geography. Critical geography is now situated at the very heart of the discipline of geography. Some see institutionalization as a natural result of the analytical strength and insights of critical geography, while others fear that institutionalization has entailed cooptation.
The question is whether critical geography still holds its commitment to political change. Further, as critical geography is practiced across the world, the insights of critical geographers outside the Anglophone world should be better acknowledged. In this regard, Mizuoka et al. (2005) offered an overview of Japanese critical geography praxis since the 1920s. In addition, critical geography should also forge stronger linkages with critical scholars in other disciplines. See also Activism Critical theory Feminist geography Left-wing politics Marxist geography Postmodernism Post-structuralism Quantitative revolution Queer theory Notes References Further reading Blomley, Nicholas (2006). "Uncritical critical geography?". Progress in Human Geography. 30 (1): 87–94. Blunt, Alison; Wills, Jane (2000). Dissident Geographies: An Introduction to Radical Ideas and Practice. Prentice Hall. Castree, Noel (2000). "Professionalisation, Activism, and the University: Whither 'Critical Geography'?". Environment and Planning A. 32 (6): 955–970. Castree, Noel; Gregory, Derek (2006). David Harvey: A Critical Reader. John Wiley & Sons. Peet, Richard (2000). "Celebrating Thirty Years of Radical Geography". Environment and Planning A. 32 (6): 951–953. Sidaway, James D.; Lin, Shaun; Chouinard, Vera; Ferretti, Federico; Gibson, Katherine; Kenney-Lazar, Miles; Philo, Chris; van Meeteren, Michiel; Wills, Jane; Wisner, Ben; Barnes, Trevor; Sheppard, Eric (2020). Book review forum: Reading Trevor Barnes and Eric Sheppard's Spatial Histories of Radical Geography: North America and Beyond (Antipode Book Series, Wiley). The AAG Review of Books, 8 (4): 236–258. Geography History of geography Human geography
Critical geography
Environmental_science
1,733
35,862,124
https://en.wikipedia.org/wiki/Garrett%20Lynch
Garrett Lynch (born 1977) is an Irish new media artist working with networked technologies in a variety of forms including online art, installation, performance and writing. Career Since 2000 Lynch has developed an artistic practice centred on the use of networks. He has published papers including "Google and Art: A commercial/cultural new media art economy?" in the ISEA (Inter-Society for the Electronic Arts) newsletter and "Net Art : au-delà du navigateur… un monde d'objets" (Net.art: beyond the browser to a world of things) in Terminal no. 101, "Net Art, Technologie ou Création?" (Net Art, Technology or Creation). He has also spoken at conferences and events, curated exhibitions and live events, and exhibited and performed in a number of international exhibitions and events including "Notes on a New Nature" at 319 Scholes, "The Vending Machine" at the 54th Venice Biennale, "REFF – Remix the World, Reinvent Reality" at Furtherfield Gallery, Jouable: Art, Jeu et Interactivité, and "Liminality: The Space Between Worlds" at Antena. Work Lynch's work has developed from a conceptual consideration of the use of networks within artistic practice. Moving initially from a net.art practice, with its emphasis on the web and the web browser as artistic form, to a networked practice that explores networks in their widest interpretation, his work uses networks as "a means, site and context for artistic initiation, creation and discourse". Informed by cybernetics and communication theory, his discourse frequently deals with issues concerning nodes and their arrangement, the spaces between them (the internodal), and the behaviour that can occur between nodes in a network. As such he views his work dealing with these concepts and issues as largely opportunist and potentially parasitic in nature, and his practice as essentially one of arrangements, aligning it with key concerns within conceptual art. He states that his networked practice aims "to get people to think about the ideas I'm making connections or links between. I personally like work that makes me think so I try to make work to make others think…What obsesses me is not the internet per se but the idea of networks, all sorts of networks, technological (digital, electronic and electrical), social, biological etc. My work is starting to be a networked art rather than net.art, an internet art". Since 2001 curating has been a part of Lynch's practice. After initially curating online net.art works through the "Banner Art Collective", in 2006 he co-founded the sonic arts event series "Open Ear", which ran ten live events at various locations in England and Wales until 2009. In 2008 Lynch created "Between Saying and Doing", a site-specific work for the online virtual world Second Life, as a critical comment on performance and virtual worlds. This initiated an ongoing series of installation and performance works dealing with ideas of identity and place as they relate to networked spaces. In these works Lynch explores the "real" and the "virtual" by transposing his own identity to virtual worlds without any attempt to masquerade or imagine a new identity. This process involves the use of his real name for his "representation" or avatar, word play that references his name's origins as both real and Irish, and the use of a sandwich board prop stating this, which is worn continuously.
References Irish contemporary artists New media artists Net.artists Irish multimedia artists Postmodern artists Living people Irish digital artists Irish performance artists École nationale supérieure des arts décoratifs alumni 1977 births
Garrett Lynch
Technology
738
39,263,282
https://en.wikipedia.org/wiki/Theresa%20Kugel
Sister Theresa Kugel, OP (1912, Orekhovo-Zuyevo, Moscow Governorate, Russian Empire – 2 December 1977, Vilnius, Lithuanian SSR, Soviet Union), was a convert from Orthodox Judaism to the Russian Catholic Church, a Byzantine Rite Dominican nun in the community founded by Mother Catherine Abrikosova, and a Gulag survivor. Her birth name was Minna Rahmielovna Kugel (Минна Рахмиэловна Кугель). Early life Mina Rakhmielovna Kugel was born in 1912 into the family of a rabbi and grew up in Kostroma, where her father ran an illegal underground synagogue in defiance of both Soviet anti-Judaism and anti-religious legislation. According to Walter Kolarz, the spread of atheist propaganda and the religious persecution of Soviet Jews was generally assigned at the time to the Yevsektsiya, or Jewish sections, of the Soviet Communist Party and its main institutions. For example, the Jewish section inside the League of Militant Godless "had a total of 40,000 Jewish members in 1929, the year when the anti-religious campaign was at its peak. These 'Jewish sections' were much despised by the bulk of Russia's Jewry. Their members were regarded with as much contempt as the Jewish renegades who turned persecutors of their own brethren in the Middle Ages." Minna Kugel's parents and siblings were later described as "good and decent people, faithful to all the precepts of the Jewish religion." Mina, however, grew up feeling torn between the Orthodox Jewish values of her family and the coercive indoctrination into both Marxism-Leninism and atheism through the Soviet educational system. According to Fr. Georgii Friedman, a young Mina Kugel was a member of the Young Pioneers and the Komsomol. In 1929, a 15-year-old Mina Kugel graduated from high school in Yaroslavl and returned to her parents in Kostroma, where she became closely acquainted with two of her father's boarders, Stephanie Gorodets and Margarita Krylevskaya. Both women were nuns of the Moscow community of the Third Order of Saint Dominic, which had been founded in August 1917 by Mother Catherine Abrikosova. Until this time, Mina Kugel had never before been exposed to Christianity. Conversion In 1930, Mina was staying with her uncle in Yaroslavl and undergoing treatment for pulmonary tuberculosis. Out of curiosity, she went into a local Roman Catholic parish while Fr. Josif Josiukas was offering a Solemn High Mass followed by Eucharistic Adoration. After the Mass, Mina Kugel was looking at the Blessed Sacrament exposed in the Monstrance when she was overwhelmed by a new belief in the Real Presence and burst into tears. She left the church transformed into a completely different person. In 1931, Mina Kugel, who now desired only to become a Catholic, travelled to Moscow and stayed with her relatives. She was secretly baptized into the Russian Greek Catholic Church, with Nora Rubashova as her godmother, by the former Symbolist poet Fr. Sergei Solovyov. Mina Kugel took the Christian name of Theresa, in honor of St. Thérèse of Lisieux. Many years later, her spiritual director, Fr. Georgii Friedman, would compare Mina Kugel's conversion story with those of André Frossard and Hermann Cohen. Kugel's illness was considered hopeless and the doctors had reportedly given up on her. But after Bishop Pie Eugène Neveu gave her a flask of water from Lourdes, Kugel was cured and the doctors were allegedly unable to explain why.
Dominican vocation One day in August 1932, Kugel was on her knees in prayer after making her Confession at St Louis Church when Bishop Neveu left the Confessional, took Kugel by the hand, and led her to the pew of Anna Abrikosova, a woman she had never met before. The Bishop said, "Mother, here is one more daughter for you." Mass was about to begin, so without saying a word, Abrikosova kissed Kugel on the cheek, after which they heard the Mass together. After Mass, Abrikosova said, "Well, now let's get acquainted". As she had been classified as "minus 12" upon her recent release from the Gulag, Abrikosova could not remain in Moscow and chose instead to reside with Sister Margaret in Kostroma, where the Kugel family also lived, and where Theresa regularly visited her. According to Ejsmont, "Theresa loved Mother and was happy that she could see her every day, dropping by to visit after work. Mother always greeted her with an affectionate smile, bright and clear. She was always modestly dressed; a black skirt with a modest white blouse with black stripes. Despite her modest attire, Mother Catherine always looked stately. Her conversation was always amiable and sweet. Her whole appearance exuded an extraordinary charm... Sister Margaret worried over Mother, but Mother just smiled at this. In all her conversations one could sense her great love for the person with whom she was speaking and that person's soul, and when she spoke of God and the Gifts of the Holy Spirit, she spoke earnestly and with such ardor that it was clear she herself was deeply filled with all these gifts." When Sister Margaret once warned her of the enormous dangers posed by Kugel's regular visits, Abrikosova, whose cancer had recently returned, replied, "For the good and salvation of just a single soul, I am willing to go to prison again, and to save the soul of this little Theresa, I am ready for another ten-year term." According to Sister Philomena Ejsmont, the Kugel family only learned of Mina's conversion in late 1932, shortly after her father was arrested and sent to the Gulag with the other men involved in organizing their underground synagogue. According to Sister Philomena, "No one at Theresa's house knew that she had become a Christian and been baptized, but her sister once noticed her wearing a cross around her neck and told her mother. A grand brawl broke out, with shouts and curses. The mother left the house and Theresa, taking almost nothing with her, headed for the train station, intending to leave home forever. The mother soon returned and, hearing what had happened, took off for the train station, where she found Theresa already seated in a train that was about to depart. Her mother begged her to return, but Theresa was adamant in her decision and left for Moscow. Once there, she sought advice from Bishop Neveu, who sent her to join the sisters in Krasnodar." In 1932 she moved to Krasnodar, where she was assigned to begin her postulancy under Sisters Magdalina Krylevskaya and Joanna Gotovtseva, and was tonsured as a Dominican nun with the name Theresa. According to Sister Philomena, "Sister Theresa settled in with them. There was no furniture in the room other than a night table and a wooden trestle bed. A little altar had been arranged on the night table, and the bed stood uncovered. They slept on the floor and ate in the landlady's kitchen."
Kugel was later to recall Sister Magdalina, who had charge over her spiritual formation, as "strict in the fulfillment of the Rule and Constitution of the Community", but also as "a woman of extraordinary goodness and warmth." Gulag On 6 October 1933 in Krasnodar, Kugel arrived at the prison with a parcel for Sisters Magdalina and Joanna, who had just been arrested. Kugel, who was wearing only a very light dress at the time, was arrested, too. When she was placed with the other two sisters, Sister Magdalina, who was her immediate superior, took off her own coat and gave it to Kugel. When Kugel tried to protest, Sister Magdalina smiled and replied, "Don't worry, my mother will look out for me." While the three sisters were being taken to Butyrskaya prison in Moscow, the NKVD guards gave them nothing to eat along the way. Eventually, Sister Magdalina arranged for one of the convoy escorts to buy them some bread. Even though Sisters Theresa and Joanna pleaded with her to divide the bread equally, Sister Magdalina insisted on taking only a tiny piece for herself and giving the other two nuns the rest. According to Sister Philomena Ejsmont, "When they arrived at the Moscow prison, they were not taken to a cell right away. Sister Magdalina told Sister Theresa, who had never been arrested prior to this time, what kinds of questions the investigators would ask, how to answer them, how to behave during interrogations, and in general gave them a lot of very valuable advice. With deep grief, the Sisters later learned that Sister Magdalina, right after being placed in a solitary cell, died in the prison hospital on January 27, 1934. Thus she went to heaven possessing nothing but the dress she was wearing." In what the NKVD called "The Case of the Counterrevolutionary Terrorist-Monarchist Organization", Mother Catherine Abrikosova, Theresa Kugel, and all their fellow nuns stood accused of forming a "terrorist organization", plotting to assassinate Joseph Stalin, overthrow the Communist Party of the Soviet Union, and restore the House of Romanov as a constitutional monarchy in concert with "international fascism" and "Papal theocracy". It was further alleged that the nuns planned to restore capitalism and to have collective farms privatized and returned to the kulaks and the Russian nobility. The NKVD further alleged that the nuns' terrorist activities were directed by Bishop Pie Eugène Neveu, the Vatican's Congregation for the Oriental Churches, and Pope Pius XI. On 19 February 1934, Kugel was declared guilty as charged and sentenced to 3 years in a labor camp. Kugel was released on 16 November 1935. From December she lived in Bryansk, and in October 1937 she moved to Maloyaroslavets. Following Operation Barbarossa early in World War II, Maloyaroslavets was occupied by Nazi Germany and, along with fellow Soviet Jewish Sister Nora Rubashova, Sister Theresa survived the Holocaust in Russia by working as a nurse in a German military hospital. Whenever possible, both sisters attended the Masses offered by Wehrmacht military chaplains and knelt at the Communion Rail alongside German soldiers who were fully aware of their Jewish ancestry. Many years later, Secular Tertiary Ivan Lupandin asked Nora Rubashova why one of the Catholic military chaplains, whom she jokingly called a Hochdeutsch for his staunch belief in German nationalism, never reported her or Sister Theresa's Jewishness to the Gestapo or the SS. Rubashova replied, "Well, he was a Catholic priest. He was nationalistic, but not that nationalistic."
After Maloyaroslavets was liberated by the Red Army, Sister Theresa Kugel, despite her Jewishness, was arrested by the NKVD on charges of collaboration with Nazi Germany. According to Ivan Lupandin, the NKVD's logic was that Sister Theresa must have been a collaborator because, "how else could she have worked in a hospital and not been shot by the Nazis?" On 31 October 1942, she was declared guilty and sentenced to five years of "corrective labor" in Temlag for being a "socially dangerous element." She was released on 25 March 1947 and returned to live in Maloyaroslavets. In the autumn of the following year she moved to Kaluga. On 3 April 1949 Kugel was arrested on charges of espionage for the Vatican. On 2 July 1949 she refused to sign the indictment and accordingly fell victim to the political abuse of psychiatry in the Soviet Union. Sister Theresa was declared "psychologically incompetent" and, on 17 September, she was sent for involuntary treatment to a special hospital run by the MVD in Kazan. On 15 October 1952, Theresa Kugel was transferred to an ordinary psychiatric hospital and was finally released following the death of Joseph Stalin in 1953. Vilnius She moved to Vilnius and, while working as a cleaning woman and later as a nurse, Sister Theresa became the driving force in the monastic revival of the Dominican community. She arranged for official invitations for the surviving sisters to come and live with her inside a Khrushchyovka apartment building on Dzuku Street. Georgii Friedman, a Soviet Jewish jazz musician and recent Catholic convert, first visited them in 1974 and found that the Sisters were being covertly ministered to by Dominican Friars visiting from the People's Republic of Poland and by Fr. Volodymyr Prokopiv, a fellow Gulag survivor and priest of the illegal and underground Ukrainian Greek Catholic Church. Friedman later recalled, "I remember how the atmosphere of quiet and peace in their quarters delighted me. On the walls hung large images of Saint Dominic and Saint Catherine of Siena. In the tiniest little chapel they had made an altar out of a dresser, and on the altar stood a crucifix. A lamp flickered in a beautiful vessel to show that the Blessed Sacrament was reserved there." Friedman also recalled, "Sister Teresa, sixty-two years old, was tall, stocky, and plain. Her face reflected a selfless faith. She was then still working as a nurse in the hospital." Death In 1977, Mina Kugel was hospitalized with terminal bladder cancer. Soon after, Georgii Friedman and Fr. Volodymyr Prokopiv arrived to visit her. Fr. Friedman later recalled, "From her frightfully changed, corpse-like face, her eyes looked upon me, radiant with love and joy. I quickly left the room because I was afraid I would begin to cry." When he returned, Friedman overheard Mina asking Father Prokopiv, "Father, why are they prolonging my sufferings with these pills?" Father Prokopiv bent over her and softly asked, "Do you not want to suffer a bit more for the sake of the salvation of souls?" According to Friedman, "'I do', she quietly answered, and thereafter not a single complaint fell from her lips." Mina Kugel died during surgery in Vilnius in 1977. Soon after her death, her fellow Soviet Jewish convert and former spiritual protégé Georgii Friedman was accepted into an illegal seminary, which he attended and from which he graduated. In 1979, Friedman was ordained by an underground bishop of the Ukrainian Greek Catholic Church, becoming the first priest of the Russian Greek Catholic Church present on Soviet soil in decades.
Sources I. Osipova 1996, p. 178; I. Osipova 1999, p. 333; the investigative case of Abrikosova and others, 1934 // CA FSB RF; the investigative case of L. B. Ott and others // TsA FSB of Russia.
Theresa Kugel
Biology
3,221
20,967,190
https://en.wikipedia.org/wiki/Octadecanal
Octadecanal (also known as stearyl aldehyde) is a long-chain aldehyde with the chemical formula C18H36O. Octadecanal is used by several species of insect as a pheromone. References Fatty aldehydes Alkanals
Octadecanal
Chemistry
63
35,172,634
https://en.wikipedia.org/wiki/Publications%20of%20the%20Astronomical%20Society%20of%20Japan
Publications of the Astronomical Society of Japan (PASJ) is a peer-reviewed scientific journal of astronomy published by the Astronomical Society of Japan on a bimonthly basis. The journal was established in 1949. The current editor-in-chief is M. Ando. See also List of astronomy journals External links Publications of the Astronomical Society of Japan website Astronomy journals Bimonthly journals Academic journals established in 1949 English-language journals Academic journals published by learned and professional societies
Publications of the Astronomical Society of Japan
Astronomy
95
1,033,633
https://en.wikipedia.org/wiki/Baden%20Powell%20%28mathematician%29
Baden Powell, MA FRS FRGS (22 August 1796 – 11 June 1860) was an English mathematician and Church of England priest. He held the Savilian Chair of Geometry at the University of Oxford from 1827 to 1860. Powell was a prominent liberal theologian who put forward advanced ideas about evolution. Origins Baden Powell II was born at Stamford Hill, Hackney in London. His father, Baden Powell I (1767–1841), of Langton and Speldhurst in Kent, was a wine merchant, who served as High Sheriff of Kent in 1831, and as Master of the Worshipful Company of Mercers in 1822. The mother of Baden Powell II was Hester Powell (1776–1848), his father's paternal first cousin, a daughter of James Powell (1737–1824) of Clapton, Hackney, Middlesex, Master of the Worshipful Company of Salters in 1818. The Powell family can be traced back to the early 16th century, where they were yeomen farmers at Mildenhall in Suffolk. Baden Powell II's great-grandfather, David Powell (1725–1810) of Homerton, Middlesex, a second son, migrated to the City of London aged 17 in 1712, subsequently going into business as a merchant at Old Broad Street and buying the manor of Wattisfield in Suffolk. In 1740 a branch of the family bought the Whitefriars Glass works. The name Baden originated in Susanna Baden (1663–1737), the maternal grandmother of David Powell (1725–1810) of Homerton, Middlesex, and one of the ten children of Andrew Baden (1637–1716), a Mercer who served as Mayor of Salisbury in 1682. Education Powell was admitted as an undergraduate at Oriel College, Oxford in 1814, and graduated with a first-class honours degree in mathematics in 1817. Ordination Powell was ordained as a priest of the Church of England in 1821, having served as curate of Midhurst, Sussex. His first living was as Vicar of Plumstead, Kent, of which the advowson was owned by his family. He immediately began his scientific work there, starting with experiments on radiant heat. Marriages and children Powell married three times, and had fourteen children in total. His widow changed the last name of the surviving children of his third marriage to "Baden-Powell". Powell's first marriage on 21 July 1821 to Eliza Rivaz (died 13 March 1836) was childless. His second marriage on 27 September 1837 to Charlotte Pope (died 14 October 1844) produced one son and three daughters: Charlotte Elizabeth Powell, (14 September 1838–20 October 1917) Baden Henry Baden-Powell, FRSE (23 August 1841–2 January 1901) Louisa Ann Powell, (18 March 1843–1 August 1896) Laetitia Mary Powell, (4 June 1844–2 September 1865) His third marriage on 10 March 1846 (at St Luke's Church, Chelsea) to Henrietta Grace Smyth (3 September 1824–13 October 1914), a daughter of Admiral Smyth, produced seven sons and three daughters: Henry Warington Baden-Powell, (3 February 1847–24 April 1921), a naval officer, a fellow of the Royal Geographical Society and a King's Counsel (K.C.) 
Sir George Smyth Baden-Powell, (24 December 1847–20 November 1898), a politician and Conservative MP (1885–1898) Augustus Smyth Powell (1849–1863) Francis (Frank) Smyth Baden-Powell (29 July 1850– 25 December 1933), an artist who exhibited at the Royal Academy of Arts Henrietta Smyth Powell (28 October 1851–9 March 1854) John Penrose Smyth Powell (21 December 1852–14 December 1855) Jessie Smyth Powell (25 November 1855–24 July 1856) Robert Stephenson Smyth Baden-Powell, 1st Baron Baden-Powell, (22 February 1857–8 January 1941), an army officer, writer and a founder of the World Scouting Movement and (with his sister Agnes) founder of the Girl Guides. Agnes Smyth Baden-Powell, (16 December 1858–2 June 1945), founder of the Girl Guides. Baden Fletcher Smyth Baden-Powell, (22 May 1860–3 October 1937), an army officer, aviator and president of the Royal Aeronautical Society Shortly after Powell's death in 1860, his wife renamed the remaining children of his third marriage 'Baden-Powell'; the name was eventually legally changed by royal licence on 30 April 1902. Baden Henry Powell is often also referred to as Baden Henry Baden-Powell, and was using this name by the 1891 census. Evolution Powell was an outspoken advocate of the constant uniformity of the laws of the material world. His views were liberal, and he was sympathetic to evolutionary theory long before Charles Darwin had revealed his ideas. He argued that science should not be placed next to scripture or the two approaches would conflict, and in his own version of Francis Bacon's dictum, contended that the book of God's works was separate from the book of God's word, claiming that moral and physical phenomena were completely independent. His faith in the uniformity of nature (except man's mind) was set out in a theological argument; if God is a lawgiver, then a "miracle" would break the lawful edicts that had been issued at Creation. Therefore, a belief in miracles would be entirely atheistic. Powell's most significant works defended, in succession, the uniformitarian geology set out by Charles Lyell and the evolutionary ideas in Vestiges of the Natural History of Creation published anonymously by Robert Chambers which applied uniform laws to the history of life in contrast to more respectable ideas such as catastrophism involving a series of divine creations. "He insisted that no tortured interpretation of Genesis would ever suffice; we had to let go of the Days of Creation and base Christianity on the moral laws of the New Testament." The boldness of Powell and other theologians in dealing with science led Joseph Dalton Hooker to comment in a letter to Asa Gray dated 29 March 1857: "These parsons are so in the habit of dealing with the abstractions of doctrines as if there was no difficulty about them whatever, so confident, from the practice of having the talk all to themselves for an hour at least every week with no one to gainsay a syllable they utter, be it ever so loose or bad, that they gallop over the course when their field is Botany or Geology as if we were in the pews and they in the pulpit. Witness the self-confident style of Whewell and Baden Powell, Sedgwick and Buckland." William Whewell, Adam Sedgwick and William Buckland opposed evolutionary ideas. When the idea of natural selection was mooted by Charles Darwin and Alfred Russel Wallace in their 1858 papers to the Linnaean Society, both Powell and his brother-in-law William Henry Flower thought that natural selection made creation rational. 
Essays and Reviews He was one of seven liberal theologians who produced a manifesto titled Essays and Reviews around February 1860, which amongst other things joined in the debate over On the Origin of Species. These Anglicans included Oxford professors, country clergymen, the headmaster of Rugby school and a layman. Their declaration that miracles were irrational stirred up unprecedented anger, drawing much of the fire away from Charles Darwin. Essays sold 22,000 copies in two years, more than the Origin sold in twenty years, and sparked five years of increasingly polarised debate with books and pamphlets furiously contesting the issues. Referring to "Mr Darwin's masterly volume" and restating his argument that belief in miracles is atheistic, Baden Powell wrote that the book "must soon bring about an entire revolution in opinion in favour of the grand principle of the self-evolving powers of nature." He would have been on the platform at the British Association for the Advancement of Science 1860 Oxford evolution debate that was a highlight of the reaction to Darwin's theory. Huxley's antagonist Wilberforce was also the foremost critic of Essays and Reviews. Powell died of a heart attack a fortnight before the meeting. He is buried in Kensal Green Cemetery, London. Works 1837: History of Natural Philosophy from the Earliest Periods to the Present Time, published by Longman, Brown, Green, and Longmans 1838: The Connexion of Natural and Divine Truth: Or, The Study of the Inductive Philosophy, Considered as Subservient to Theology, published by J.W. Parker 1841: A General and Elementary View of the Undulatory Theory, as Applied to the Dispersion of Light, and Some Other Subjects: Including the Substance of Several Papers, Printed in the Philosophical Transactions, and Other Journals, published by J.W. Parker 1854: (as editor) Lectures on Polarized Light: Together with a Lecture on the Microscope, Delivered Before the Pharmaceutical Society of Great Britain, and at the Medical School of the London Hospital by Jonathan Pereira, published by Longman, Brown, Green, and Longmans 1859: The Order of Nature: Considered in Reference to the Claims of Revelation: A Third Series of Essays, published by Longman, Brown, Green, Longmans, & Roberts Papers to the Royal Society, the Ashmolean Society and others 1828 "The elements of curves: comprising, I. The geometrical principles of the conic sections; II. An introduction to the algebraic theory of curves; designed for the use of students in the University." 1829 "A short treatise on the principles of the differential and integral calculus" 1830 "An elementary treatise on the geometry of curves and curved surfaces, investigated by the application of the differential and integral calculus." 1832 "The present state and future prospects of mathematical and physical studies in the University of Oxford." 1833 "A short elementary treatise on experimental and mathematical optics."
1834 "On the achromatism of the eye " 1836 "On the theory of ratio and proportion, as treated by EUCLID, including an inquiry into the nature of quantity " 1836 "Observations for determining the refractive indices for the standard rays of the solar spectrum in various media " 1837 "An historical view of the progress of the physical and mathematical sciences from the earliest ages to the present times " 1837 "On the nature and evidence of the primary laws of motion " 1838 "Additional observations for determining the refractive indices for definite rays of the solar spectrum in several media " 1839 "A second supplement to observations for determining the refractive indices for definite rays of the solar spectrum in several media " 1841 "A general and elementary view of the undulatory theory, as applied to the dispersion of light and some other subjects... " 1842 "History of natural philosophy, from the earliest periods to the present time " 1842 "On the theory of parallel lines " 1842 "On necessary and contingent truth, considered in regard to some primary principles of mathematical and mechanical science... " 1849 "An essay on the relation of the several parts of a mathematical science to the fundamental idea therein contained... " 1850 "On irradiation" 1854 "Lectures on polarized light, together with a lecture on the microscope ... " with Jonathan Pereira 1855 "Essays on the spirit of the inductive philosophy, the unity of worlds, and the philosophy of creation " 1857 "Biographies of distinguished scientific men", by Francois ARAGO; translated (from the French) by William Henry SMYTH, Baden POWELL, and Robert GRANT Books published 1829: A Short Treatise on the Principles of the Differential and Integral Calculus 1837: On the Nature and Evidence of the Primary Laws of Motion 1839: Tradition Unveiled: Or, an Exposition of the Pretensions and Tendency of Authoritative Teaching in the Church 1841: The Protestant's Warning and Safeguard in the Present Times 1841: A General and Elementary View of the Undulatory Theory, As Applied to the Dispersion of Light, and Some Other Subjects, Including the substance of several papers, printed in the Philosophical Transactions, and other journals. 1855: The Unity of Worlds and of Nature: Three Essays on the Spirit of Inductive Philosophy; the Plurality of Worlds; and the Philosophy of Creation 1856: Christianity without Judaism. Two sermons, London – Longman, Brown, Green Longmans and Roberts via HathiTrust 1859: The Order of Nature: Considered in Reference to the Claims of Revelation: A Third Series of Essays Publications Theology 1833 Revelation and Science. 1834 To the Editor of the British Critic. 1836 Remarks on Dr. Hampden, &c. 1838 Connection of Natural and Divine Truth 1839 Tradition Unveiled .... London and America. 1840 Supplement to Tradition Unveiled. Ditto ditto. 1841 State Education. 1841 The Protestant's Warning. 1843–4 Three Articles on Anglo-Catholicism in British and Foreign Review, Nos. 31, 32, 33. 1845 Kitto's Cyclopaedia of Biblical Literature – Articles, "Creation","Deluge", "Lord's Day", "Sabbath". 1845 Life of Blanco White December Westminster Review 1845 Tendency of Puseyism June Ditto. 1846 Mysticism and Scepticism . . . July Edinburgh Review. 1847 Protestant Principles Oxford Protestant Magazine 1847 On the Study of Christian Evidences . . Edinburgh Review. 1848 Freedom of Opinion Oxford Protestant Magazine 1848 Church and State Ditto. 1848 Free Enquiry and Liberality. . Kitto's Journal of Sacred Literature. 
1848 The Law and the Gospel. ... Ditto. 1848 On the Application and Misapplication of Scripture Ditto. 1850 The State Church – A Sermon before the university. 1855 Unity of Worlds – Two Editions. 1856 On the Burnett Prizes, and the Study of Natural Theology – Oxford Essays 1857 Christianity without Judaism—2nd Series of Essays – Two Editions. 1859 The Order of Nature – 3rd Series of Essays. 1860 On the Study of the Evidences of Christianity, in Essays and Reviews Science 1828 Elements of Curves-and two Supplements 1829 Differential Calculus, and application to Curves. 1830 On Examination Statutes 1832 On Mathematical Studies. 1833 Elementary Treatise on Optics. 1834 History of Natural Philosophy Cabinet Cyclopaedia. 1841 Treatise on the Undulatory Theory applied to Dispersion. 1851 Lecture Synopses in four parts – Geometry, Algebra, Conic Sections – Newton. 1857 Translation of Arago's Autobiography. 1857 Translations of Arago's Lives of Young, Malus, and Fresnel, with Optical Notes. Papers in Philosophical Transactions of the Royal Society 1825 On Radiant Heat. 1826 Second on Radiant Heat. 1834 On Repulsion of Heat. 1835 On Dispersion of Light. 1836 Second on Dispersion of Light. 1837 Third and fourth on Dispersion of Light. 1840 On the Theory of the Dispersion of Light, &c. 1842 On certain cases of Elliptic Polarization. 1845 On Metallic Reflexion, &c. 1848 On Prismatic Interference 1832 On Radiant Heat. 1839 On Refractive Indices. 1841 On Radiant Heat – Second Report. Reports to the British Association 1848–9 On Luminous Meteors (continued to 1869). 1882 to 1849 Numerous Papers on Sectional Proceedings. 1854 On Radiant Heat—Third Report. In Memoirs of the Royal Astronomical Society 1845 On a Double Image Micrometer. 1847 On Luminous Rings, &c. 1849 On Irradiation. (In Royal Astronomical Society's Proceedings.) 1847 On the Beads seen in Eclipses. 1853 On Foucault's Experiments on Rotation of Earth, &c. 1858 On C. Piazzi Smyth's Artificial Horizon. In Ashmolean Society's memoirs 1832 On the Achromatism of the Eye. On Refractive Indices – Three Papers. On Ratios and Proportion. 1849 On the Laws of Motion. On the Theory of Parallels. On Necessary and Contingent Truth Royal Institution abstracts of lectures 1848 On Shooting Stars. 1849 On the Nebular Theory. 1850 On Optical Phenomena in Astronomy. 1851 On Foucault's Pendulum Experiment 1852 On Light and Heat. 1854 On Rotatory Motion. 1858 On Rotatory Motion Applied to Observations at Sea. 1822 Translation of Raymond on Barometrical Measurement, with an Appendix .... Annals of Philosophy. 1823-5 Various, Papers on Light and Heat. Ditto. 1825-6 Two Papers on Heat. Quar. Jour. of Science. 1828 Two Papers on Polarization of Heat. Brewster's Philosophical Journal. 1830 On Mathematical Studies....London Review. 1832-3 Several Papers on Interference of Light, Diffraction, &c – Annals of Philosophy and Phil. Mag. 1834 On Radiant Heat Jameson's Phil. Journ. 1835–6 On Cauchy's Theory of Dispersion of Light, &c Journal of Science and Phil. Mag. Various Papers in Vol. I. of Mag. of Popular Science. Many Papers in Journal of Education. On the Progress of Optics . . . British Annual. On the State of Oxford Ditto. The Lives of Black and Lavoisier....Useful Knowledge Gallery of Portraits. 1838 On University Reform . .July Monthly Chron. 1838-9 Various Papers on Light. Journal of Science. 1838-9 Papers on Light .... Philosophical Magazine. 1839 Correspondence with Brewster, Athenaeum. 1839 On Comte's Philosophie Positive ....Monthly Chronicle.
1841 On Light Philosophical Magazine. 1841 Papers on Light Journal of Science. 1843 Review of Carpenter's Cyclopaedia ....Dublin University Magazine. 1843 Sir Isaac Newton and his Contemporaries Edinburgh Review. 1843 Review of Rigaud's History of the Principia. Ditto. 1846 On Aberration of Light . . .Journal of Science and Philosophical Magazine. 1852 On Lord Brougham's Optical Experiments. Journal of Science 1854 On Foucault's Gyroscope. . Journal of Science and Philosophical Magazine 1856 Life of Young . . . National Review and Philosophical Magazine 1856 On Brewster's Life of Newton .... Edinburgh Review. 1856 On Fresnel's Formulae for Light – July, August, and October – Journal of Science and Philosophical Magazine. 1857 Life and Writings of Arago Ditto. Also 1834 A Letter to the Editor of The British Critic Notable students Lewis Carroll attended the lectures on pure geometry by Baden Powell. Collections In 1970, 170 volumes from Powell's library were presented to the Bodleian Libraries by his grandson, D. F. W. Baden Powell. Notes References Further reading * Corsi, Pietro (1988). Science and Religion: Baden Powell and the Anglican Debate, 1800-1860, Cambridge University Press , 346 pages External links Collection of obituary notices 1796 births 1860 deaths Alumni of Oriel College, Oxford English Christian theologians 19th-century English mathematicians Fellows of the Royal Society Proto-evolutionary biologists Savilian Professors of Geometry 19th-century English Anglican priests Baden Fellows of the Royal Geographical Society
Baden Powell (mathematician)
Biology
3,866
20,227,419
https://en.wikipedia.org/wiki/Transmembrane%20channels
Transmembrane channels, also called membrane channels, are pores within a lipid bilayer. The channels can be formed by protein complexes that run across the membrane or by peptides. They may cross the cell membrane, connecting the cytosol, or cytoplasm, to the extracellular matrix. Transmembrane channels are also found in the membranes of organelles including the nucleus, the endoplasmic reticulum, the Golgi apparatus, mitochondria, chloroplasts, and lysosomes. Transmembrane channels differ from transporters and pumps in several ways. Some channels are less selective than typical transporters and pumps, differentiating solutes primarily by size and ionic charge. Channels perform passive transport of materials, also known as facilitated diffusion. Transporters can carry out either passive or active transfer of materials, while pumps require energy to act. There are several modes by which membrane channels operate. The most common is the gated channel, which requires a trigger, such as a change in membrane potential in voltage-gated channels, to unlock or lock the pore opening. Voltage-gated channels are critical to the production of an action potential in neurons, resulting in a nerve impulse. A ligand-gated channel requires a chemical, such as a neurotransmitter, to activate the channel. Stress-gated channels require a mechanical force applied to the channel for opening. Aquaporins are dedicated channels for the movement of water across the hydrophobic interior of the cell membrane. Ion channels are a type of transmembrane channel responsible for the passive transport of positively charged ions (sodium, potassium, calcium, hydrogen and magnesium) and negatively charged ions (chloride), and can be either voltage-gated or ligand-gated channels. One of the best studied ion channels is the potassium ion channel. The potassium ion channel can allow rapid movement of potassium ions while being selective against sodium. Using X-ray diffraction data and atomic model computations, a likely structure of the channel consists of a number of protein alpha-helices forming an hourglass-shaped pore with the narrowest point halfway through the membrane's lipid bilayer. To move through the channel, the potassium ions must shed their aqueous matrix and enter a selectivity filter composed of carbonyl oxygens. The potassium ions pass through one at a time, along five different cation (positively charged ion) binding sites. Diseases caused by ion channel malfunctions include cystic fibrosis, where the channel for the chloride ion will not open or is missing in the cells of the lungs, intestine, pancreas, liver and skin. The cells can no longer regulate salt and water concentrations, resulting in the symptoms typical of the disease. Additional disorders resulting from malfunctions in ion channels include forms of epilepsy, cardiac arrhythmia, certain types of periodic paralysis and ataxia. References Transmembrane proteins Membrane biology
Transmembrane channels
Chemistry
609
3,675,281
https://en.wikipedia.org/wiki/Vector-valued%20function
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range. Example: Helix A common example of a vector-valued function is one that depends on a single real parameter , often representing time, producing a vector as the result. In terms of the standard unit vectors , , of Cartesian , these specific types of vector-valued functions are given by expressions such as where , and are the coordinate functions of the parameter , and the domain of this vector-valued function is the intersection of the domains of the functions , , and . It can also be referred to in a different notation: The vector has its tail at the origin and its head at the coordinates evaluated by the function. The vector shown in the graph to the right is the evaluation of the function near (between and ; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as increases from zero through . In 2D, we can analogously speak about vector-valued functions as: or Linear case In the linear case the function can be expressed in terms of matrices: where is an output vector, is a vector of inputs, and is a matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form where in addition b is a vector of parameters. The linear case arises often, for example in multiple regression, where for instance the vector of predicted values of a dependent variable is expressed linearly in terms of a vector of estimated values of model parameters: in which (playing the role of in the previous generic form) is a matrix of fixed (empirically based) numbers. Parametric representation of a surface A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters and determine the three Cartesian coordinates of any point on the surface: Here is a vector-valued function. For a surface embedded in -dimensional space, one similarly has the representation Derivative of a three-dimensional vector function Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if is a vector-valued function, then The vector derivative admits the following physical interpretation: if represents the position of a particle, then the derivative is the velocity of the particle Likewise, the derivative of the velocity is the acceleration Partial derivative The partial derivative of a vector function with respect to a scalar variable is defined as where is the scalar component of in the direction of . It is also called the direction cosine of and or their dot product. The vectors , , form an orthonormal basis fixed in the reference frame in which the derivative is being taken.
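The explicit formulas in this article were lost in extraction, so the block below is offered only as a hedged illustration: it shows the standard circular helix commonly used as the textbook example of a vector-valued function, its componentwise derivatives (the velocity and acceleration interpretation described above), and the generic linear and affine forms from the linear case. The symbols r, A, x, and b are assumed names, not necessarily those of the original article.

```latex
% A standard circular helix as a vector-valued function of a real parameter t
% (an assumed textbook example, not a reconstruction of the article's exact formula):
\mathbf{r}(t) = \cos(t)\,\mathbf{i} + \sin(t)\,\mathbf{j} + t\,\mathbf{k}

% Componentwise differentiation gives the velocity and acceleration interpretations:
\mathbf{r}'(t)  = -\sin(t)\,\mathbf{i} + \cos(t)\,\mathbf{j} + \mathbf{k}, \qquad
\mathbf{r}''(t) = -\cos(t)\,\mathbf{i} - \sin(t)\,\mathbf{j}

% Generic linear and affine (linear up to a translation) cases, with A a parameter matrix:
\mathbf{y} = A\mathbf{x}, \qquad \mathbf{y} = A\mathbf{x} + \mathbf{b}
```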
Ordinary derivative If is regarded as a vector function of a single scalar variable, such as time , then the equation above reduces to the first ordinary time derivative of a with respect to , Total derivative If the vector is a function of a number of scalar variables , and each is only a function of time , then the ordinary derivative of with respect to can be expressed, in a form known as the total derivative, as Some authors prefer to use capital to indicate the total derivative operator, as in . The total derivative differs from the partial time derivative in that the total derivative accounts for changes in due to the time variance of the variables . Reference frames Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship. Derivative of a vector function with nonfixed bases The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of in the reference frame where , , are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is where is the angular velocity of the reference frame E relative to the reference frame N. One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity in inertial reference frame N of a rocket R located at position can be found using the formula where is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, and are the derivatives of in reference frames N and E, respectively. By substitution, where is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
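The formula itself was lost in extraction; as a hedged reconstruction, the standard textbook form of the two-frame relation described in the preceding paragraphs (sometimes called the transport theorem) can be written as follows. The notation is assumed here, not taken from the original article.

```latex
% Derivative of a vector a in frame N expressed via frame E plus a cross-product term,
% where {}^{N}\boldsymbol{\omega}^{E} is the angular velocity of frame E relative to frame N
% (standard textbook form, offered as an illustration only):
\frac{{}^{N}\mathrm{d}\mathbf{a}}{\mathrm{d}t}
  = \frac{{}^{E}\mathrm{d}\mathbf{a}}{\mathrm{d}t}
  + {}^{N}\boldsymbol{\omega}^{E} \times \mathbf{a}

% Applied to the rocket example, with r the rocket's position:
{}^{N}\mathbf{v} = {}^{E}\mathbf{v} + {}^{N}\boldsymbol{\omega}^{E} \times \mathbf{r}
```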
Derivative and vector multiplication The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if is a scalar variable function of , In the case of dot multiplication, for two vectors and that are both functions of , Similarly, the derivative of the cross product of two vector functions is Derivative of an n-dimensional vector function A function of a real number with values in the space can be written as . Its derivative equals If is a function of several variables, say of then the partial derivatives of the components of form a matrix called the Jacobian matrix of . Infinite-dimensional vector functions If the values of a function lie in an infinite-dimensional vector space , such as a Hilbert space, then may be called an infinite-dimensional vector function. Functions with values in a Hilbert space If the argument of is a real number and is a Hilbert space, then the derivative of at a point can be defined as in the finite-dimensional case: Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., or even , where is an infinite-dimensional vector space). N.B. If is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if (i.e., where is an orthonormal basis of the space), and exists, then However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. Other infinite-dimensional vector spaces Most of the above hold for other topological vector spaces too. However, not as many classical results hold in the Banach space setting, e.g., an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach space settings there are no orthonormal bases. Vector field See also Coordinate vector Curve Multivalued function Parametric surface Position vector Parametrization Notes References External links Vector-valued functions and their properties (from Lake Tahoe Community College) Everything2 article 3 Dimensional vector-valued functions (from East Tennessee State University) "Position Vector Valued Functions" Khan Academy module Linear algebra Vector calculus Vectors (mathematics and physics) Types of functions
Vector-valued function
Mathematics
1,855
1,053,500
https://en.wikipedia.org/wiki/DNA%20extraction
The first isolation of deoxyribonucleic acid (DNA) was done in 1869 by Friedrich Miescher. DNA extraction is the process of isolating DNA from the cells of an organism in a sample, typically a biological sample such as blood, saliva, or tissue. It involves breaking open the cells, removing proteins and other contaminants, and purifying the DNA so that it is free of other cellular components. The purified DNA can then be used for downstream applications such as PCR, sequencing, or cloning. Currently, it is a routine procedure in molecular biology and forensic analyses. This process can be done in several ways, depending on the type of the sample and the downstream application; the most common methods involve mechanical, chemical, or enzymatic lysis, followed by precipitation, purification, and concentration. The specific method used to extract the DNA varies; examples include phenol-chloroform extraction, alcohol precipitation, and silica-based purification. For the chemical method, many different kits are used for extraction, and selecting the correct one will save time on kit optimization and extraction procedures. PCR detection sensitivity is considered to show the variation between the commercial kits. There are many different methods for extracting DNA, but some common steps include: Lysis: This step involves breaking open the cells to release the DNA. For example, in the case of bacterial cells, a solution of detergent and salt (such as SDS) can be used to disrupt the cell membrane and release the DNA. For plant and animal cells, mechanical or enzymatic methods are often used. Precipitation: Once the DNA is released, proteins and other contaminants must be removed. This is typically done by adding a precipitating agent, such as alcohol (such as ethanol or isopropanol), or a salt (such as ammonium acetate). The DNA will form a pellet at the bottom of the solution, while the contaminants will remain in the liquid. Purification: After the DNA is precipitated, it is usually further purified by using column-based methods. For example, silica-based spin columns can be used to bind the DNA, while contaminants are washed away. Alternatively, a centrifugation step can be used to purify the DNA by spinning it down to the bottom of a tube. Concentration: Finally, the concentration of DNA is usually increased by removing any remaining liquid. This is typically done by using a vacuum centrifugation or a lyophilization (freeze-drying) step. Some variations on these steps may be used depending on the specific DNA extraction protocol. Additionally, some kits are commercially available that include reagents and protocols specifically tailored to a specific type of sample. What does it deliver? DNA extraction is frequently a preliminary step in many diagnostic procedures used to identify environmental viruses and bacteria and diagnose illnesses and hereditary diseases. These methods consist of, but are not limited to: Fluorescence In Situ Hybridization (FISH) technique was developed in the 1980s. The basic idea is to use a nucleic acid probe to hybridize nuclear DNA from either interphase cells or metaphase chromosomes attached to a microscopic slide. It is a molecular method used, among other things, to recognize and count particular bacterial groupings. To recognize, define, and quantify the geographical and temporal patterns in marine bacterioplankton communities, researchers employ a technique called terminal restriction fragment length polymorphism (T-RFLP).
Sequencing: Whole or partial genomes and other chromosomal components, intended for comparison with previously published sequences. Basic procedure Cells that are to be studied need to be collected. Breaking the cell membranes open exposes the DNA along with the cytoplasm within (cell lysis). Lipids from the cell membrane and the nucleus are broken down with detergents and surfactants. Breaking down proteins by adding a protease (optional). Breaking down RNA by adding an RNase (optional). The solution is treated with a concentrated salt solution (saline) to make debris such as broken proteins, lipids, and RNA clump together. Centrifugation of the solution, which separates the clumped cellular debris from the DNA. The DNA is then purified from the detergents, proteins, salts, and reagents used during the cell lysis step. The most commonly used procedures are: Ethanol precipitation, usually with ice-cold ethanol or isopropanol. Since DNA is insoluble in these alcohols, it will aggregate together, giving a pellet upon centrifugation. Precipitation of DNA is improved by increasing ionic strength, usually by adding sodium acetate. Phenol–chloroform extraction, in which phenol denatures proteins in the sample. After centrifugation of the sample, denatured proteins stay in the organic phase while the aqueous phase containing nucleic acid is mixed with chloroform to remove phenol residues from the solution. Minicolumn purification, which relies on the fact that the nucleic acids may bind (adsorption) to the solid phase (silica or other) depending on the pH and the salt concentration of the buffer. Cellular and histone proteins bound to the DNA can be removed either by adding a protease, by precipitating the proteins with sodium or ammonium acetate, or by extracting them with a phenol-chloroform mixture before the DNA precipitation. After isolation, the DNA is dissolved in a slightly alkaline buffer, usually in a TE buffer, or in ultra-pure water. Common chemicals The most common chemicals used for DNA extraction include: Detergents, such as SDS or Tween-20, which are used to break open cells and release the DNA. Protease enzymes, such as Proteinase K, which are used to digest proteins that may be binding to the DNA. Phenol and chloroform, which are used to separate the DNA from other cellular components. Ethanol or isopropanol, which are used to precipitate the DNA. Salt, such as NaCl, which is often used to help dissolve the DNA and maintain its stability. EDTA, which is used to chelate the metal ions that can damage the DNA. Tris-HCl, which is used to maintain the pH at the optimal condition for DNA extraction. Method selection Some of the most common DNA extraction methods include organic extraction, Chelex extraction, and solid phase extraction. These methods consistently yield isolated DNA, but they differ in both the quality and the quantity of DNA yielded. When selecting a DNA extraction method, there are multiple factors to consider, including cost, time, safety, and risk of contamination. Organic extraction involves incubation in multiple different chemical solutions, including a lysis step, a phenol-chloroform extraction, an ethanol precipitation, and washing steps. Organic extraction is often used in laboratories because it is cheap, and it yields large quantities of pure DNA. Though it is easy, there are many steps involved, and it takes longer than other methods.
It also involves the unfavorable use of the toxic chemicals phenol and chloroform, and there is an increased risk of contamination due to transferring the DNA between multiple tubes. Several protocols based on organic extraction of DNA were effectively developed decades ago, though improved and more practical versions of these protocols have also been developed and published in recent years. The Chelex extraction method involves adding the Chelex resin to the sample, boiling the solution, then vortexing and centrifuging it. The cellular materials bind to the Chelex beads, while the DNA is available in the supernatant. The Chelex method is much faster and simpler than organic extraction, and it only requires one tube, which decreases the risk of DNA contamination. Unfortunately, Chelex extraction does not yield as much DNA, and the DNA yielded is single-stranded, which means it can only be used for PCR-based analyses and not for RFLP. Solid phase extraction, such as a spin-column-based extraction method, takes advantage of the fact that DNA binds to silica. The sample containing DNA is added to a column containing a silica gel or silica beads and chaotropic salts. The chaotropic salts disrupt the hydrogen bonding between strands and facilitate the binding of the DNA to silica by causing the nucleic acids to become hydrophobic. This exposes the phosphate residues so they are available for adsorption. The DNA binds to the silica, while the rest of the solution is washed out using ethanol to remove chaotropic salts and other unnecessary constituents. The DNA can then be rehydrated with aqueous low-salt solutions allowing for elution of the DNA from the beads. This method yields high-quality, largely double-stranded DNA which can be used for both PCR and RFLP analysis. This procedure can be automated and has a high throughput, although lower than the phenol-chloroform method. This is a one-step method, i.e. the entire procedure is completed in one tube. This lowers the risk of contamination, making it very useful for the forensic extraction of DNA. Multiple solid-phase extraction commercial kits are manufactured and marketed by different companies; the only problem is that they are more expensive than organic extraction or Chelex extraction. Special types Specific techniques must be chosen for the isolation of DNA from some samples. Typical samples with complicated DNA isolation are: archaeological samples containing partially degraded DNA, see ancient DNA samples containing inhibitors of subsequent analysis procedures, most notably inhibitors of PCR, such as humic acid from the soil, indigo and other fabric dyes or haemoglobin in blood samples from microorganisms with thick cellular walls, for example, yeast samples containing mixed DNA from multiple sources Extrachromosomal DNA is generally easy to isolate; plasmids in particular may be isolated by cell lysis followed by precipitation of proteins, which traps chromosomal DNA in the insoluble fraction, and after centrifugation, plasmid DNA can be purified from the soluble fraction. A Hirt DNA Extraction is an isolation of all extrachromosomal DNA in a mammalian cell. The Hirt extraction process gets rid of the high molecular weight nuclear DNA, leaving only low molecular weight mitochondrial DNA and any viral episomes present in the cell. Detection of DNA A diphenylamine (DPA) indicator will confirm the presence of DNA. This procedure involves chemical hydrolysis of DNA: when heated (e.g.
≥95 °C) in acid, the reaction requires a deoxyribose sugar and therefore is specific for DNA. Under these conditions, the 2-deoxyribose is converted to ω-hydroxylevulinyl aldehyde, which reacts with the compound, diphenylamine, to produce a blue-colored compound. DNA concentration can be determined by measuring the intensity of absorbance of the solution at 600 nm with a spectrophotometer and comparing to a standard curve of known DNA concentrations. Measuring the intensity of absorbance of the DNA solution at wavelengths 260 nm and 280 nm is used as a measure of DNA purity. DNA can be quantified by cutting the DNA with a restriction enzyme, running it on an agarose gel, staining with ethidium bromide (EtBr) or a different stain and comparing the intensity of the DNA with a DNA marker of known concentration. Using the Southern blot technique, this quantified DNA can be isolated and examined further using PCR and RFLP analysis. These procedures allow differentiation of the repeated sequences within the genome. It is these techniques which forensic scientists use for comparison, identification, and analysis. High-molecular-weight DNA extraction method In this method, plant nuclei are isolated by physically grinding tissues and reconstituting the intact nuclei in a unique Nuclear Isolation Buffer (NIB). The plastid DNAs are released from organelles and eliminated with an osmotic buffer by washing and centrifugation. The purified nuclei are then lysed and further cleaned by organic extraction, and the genomic DNA is precipitated with a high concentration of CTAB. The highly pure, high molecular weight gDNA is extracted from the nuclei, dissolved in a high pH buffer, allowing for stable long-term storage. DNA storage DNA storage is an important aspect of DNA extraction projects as it ensures the integrity and stability of the extracted DNA for downstream applications. One common method of DNA storage is ethanol precipitation, which involves adding ethanol and a salt, such as sodium chloride or potassium acetate, to the extracted DNA to precipitate it out of solution. The DNA is then pelleted by centrifugation and washed with 70% ethanol to remove any remaining contaminants. The DNA pellet is then air-dried and resuspended in a buffer, such as Tris-EDTA (TE) buffer, for storage. Another method is freezing the DNA in a buffer such as TE buffer, or in a cryoprotectant such as glycerol or DMSO, at -20 or -80 degrees Celsius. This method preserves the integrity of the DNA and slows down the activity of any enzymes that may degrade it. It's important to note that the choice of storage buffer and conditions will depend on the downstream application for which the DNA is intended. For example, if the DNA is to be used for PCR, it may be stored in TE buffer at 4 degrees Celsius, while if it is to be used for long-term storage or shipping, it may be stored in ethanol at -20 degrees Celsius. The extracted DNA should be regularly checked for its quality and integrity, such as by running a gel electrophoresis or spectrophotometry. The storage conditions should also be noted and controlled, such as the temperature and humidity. It's also important to consider the long-term stability of the DNA and the potential for degradation over time. The extracted DNA should be stored for as short a time as possible, and the conditions for storage should be chosen to minimize the risk of degradation.
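As a concrete complement to the absorbance measurements described under Detection of DNA above, and to the spectrophotometric quality control discussed in the next section, the sketch below converts an A260 reading into an approximate double-stranded DNA concentration and checks the A260/A280 purity ratio. The 50 ng/µL-per-A260-unit conversion factor and the roughly 1.8 target ratio are common laboratory rules of thumb, and the function names are purely illustrative.

```python
# Minimal sketch (illustrative helper names, not from any specific kit or protocol):
# estimating dsDNA concentration and purity from UV absorbance readings, using the
# common rules of thumb that an A260 of 1.0 corresponds to roughly 50 ng/uL of
# double-stranded DNA and that pure DNA has an A260/A280 ratio near 1.8.

def dsdna_concentration_ng_per_ul(a260: float, dilution_factor: float = 1.0) -> float:
    """Approximate dsDNA concentration in ng/uL from an A260 reading."""
    return a260 * 50.0 * dilution_factor

def purity_ratio(a260: float, a280: float) -> float:
    """A260/A280 ratio; values near 1.8 suggest little protein contamination."""
    return a260 / a280

if __name__ == "__main__":
    a260, a280 = 0.25, 0.14  # example absorbance readings
    conc = dsdna_concentration_ng_per_ul(a260, dilution_factor=10.0)
    print(f"Estimated concentration: {conc:.1f} ng/uL, A260/A280 = {purity_ratio(a260, a280):.2f}")
```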
In general, the extracted DNA should be stored under the best possible conditions to ensure its stability and integrity for downstream applications. Quality control There are several quality control techniques used to ensure the quality of extracted DNA, including: Spectrophotometry: This is a widely used method for measuring the concentration and purity of a DNA sample. Spectrophotometry measures the absorbance of a sample at different wavelengths, typically at 260 nm and 280 nm. The ratio of absorbance at 260 nm and 280 nm is used to determine the purity of the DNA sample. Gel electrophoresis: This technique is used to visualize and compare the size and integrity of DNA samples. The DNA is loaded onto an agarose gel and then subjected to an electric field, which causes the DNA to migrate through the gel. The migration of the DNA can be visualized using ethidium bromide, which intercalates into the DNA and fluoresces under UV light. Fluorometry: Fluorometry is a method to determine the concentration of nucleic acids by measuring the fluorescence of the sample when excited by a specific wavelength of light. Fluorometry uses dyes that specifically bind to nucleic acids and have a high fluorescence intensity. PCR: Polymerase Chain Reaction (PCR) is a technique that amplifies a specific region of DNA; it is also used as a QC method by amplifying a small fragment of the DNA. If the amplification is successful, the extracted DNA is of good quality and is not degraded. Qubit Fluorometer: The Qubit Fluorometer is an instrument that uses fluorescent dyes to measure the concentration of DNA and RNA in a sample. It is a quick and sensitive method that can be used to determine the concentration of DNA samples. Bioanalyzer: The bioanalyzer is an instrument that uses electrophoresis to separate and analyze DNA, RNA, and protein samples. It can provide detailed information about the size, integrity, and purity of a DNA sample. See also Boom method DNA fingerprinting DNA sequencing DNA structure Ethanol precipitation Plasmid preparation Polymerase chain reaction SCODA DNA purification References Further reading Li, Richard (2015). Forensic Biology. Boca Raton: CRC Press, Taylor & Francis Group. Green, Michael R.; Sambrook, Joseph (2012). Molecular Cloning (4th ed.). Cold Spring Harbor, N.Y.: Cold Spring Harbor Laboratory Press. External links How to extract DNA from anything living DNA Extraction Virtual Lab Biochemical separation processes Genetics techniques Molecular biology Laboratory techniques DNA Polymerase chain reaction Forensic genetics
DNA extraction
Chemistry,Engineering,Biology
3,499
46,801,272
https://en.wikipedia.org/wiki/Marcos%20Dajczer
Marcos Dajczer (born 19 November 1948, in Buenos Aires) is an Argentine-born Brazilian mathematician whose research concerns geometry and topology. Dajczer obtained his Ph.D. from the Instituto Nacional de Matemática Pura e Aplicada in 1980 under the supervision of Manfredo do Carmo. In 2006, he received Brazil's National Order of Scientific Merit for his work in mathematics. He was a Guggenheim Fellow in 1985. The do Carmo–Dajczer theorem is named after him and his teacher. Selected publications do Carmo, M.; Dajczer, M. (1983). "Rotation hypersurfaces in spaces of constant curvature", Transactions of the American Mathematical Society, Volume 277, Number 2, pp. 685–709. do Carmo, M.; Dajczer, M. (1982). "Helicoidal surfaces with constant mean curvature", Tohoku Mathematical Journal Second Series, Volume 34, Number 3, pp. 425–435. Submanifolds and Isometric Immersions (1990, Mathematics Lecture Series) References External links 1948 births Brazilian mathematicians People from Buenos Aires Differential geometers Topologists Living people Instituto Nacional de Matemática Pura e Aplicada alumni Instituto Nacional de Matemática Pura e Aplicada researchers
Marcos Dajczer
Mathematics
277
40,411,234
https://en.wikipedia.org/wiki/Molybdenum%28II%29%20bromide
Molybdenum(II) bromide is an inorganic compound with the formula MoBr2. It forms yellow-red crystals. Preparation Molybdenum(II) bromide is created by the reaction of molybdenum(II) chloride with lithium bromide. Alternatively, it can be prepared by the disproportionation of molybdenum(III) bromide in a vacuum at . References Molybdenum dibromide at Web Elements Bromides Molybdenum halides Molybdenum(II) compounds
Molybdenum(II) bromide
Chemistry
123
1,002,527
https://en.wikipedia.org/wiki/Tab%20%28interface%29
In interface design, a tab is a graphical user interface object that allows multiple documents or panels to be contained within a single window, using tabs as a navigational widget for switching between sets of documents. It is an interface style most commonly associated with web browsers, web applications, text editors, and preference panels, with window managers and tiling window managers. Tabs are modeled after traditional card tabs inserted in paper files or card indexes (in keeping with the desktop metaphor). They are usually graphically displayed on webpages or apps as they look on paper. Tabs may appear in a horizontal bar or as a vertical list. Horizontal tabs may have multiple rows. In some cases, tabs may be reordered or organized into multiple rows through drag and drop interactions. Implementations may support opening an existing tab in a separate window or range-selecting multiple tabs for moving, closing, or separating them. History The WordVision DOS word processor for the IBM PC in 1982 was perhaps the first commercially available product with a tabbed interface. Don Hopkins developed and released several versions of tabbed window frames for the NeWS window system as free software, which the window manager applied to all NeWS applications, and enabled users to drag the tabs around to any edge of the window. The NeWS version of UniPress's Gosling Emacs text editor was another early product with multiple tabbed windows in 1988. It was used to develop an authoring tool for Ben Shneiderman's hypermedia browser HyperTIES (the NeWS workstation version of The Interactive Encyclopedia System), in 1988 at the University of Maryland Human-Computer Interaction Lab. HyperTIES also supported pie menus for managing windows and browsing hypermedia documents with PostScript applets. While Boeing Calc already utilized tabbed sheets (as so-called word pads) since at least 1987, Borland's Quattro Pro popularized tabs for spreadsheets in 1992. Microsoft Word in 1993 used them to simplify submenus. In 1994, BookLink Technologies featured tabbed windows in its InternetWorks browser. That same year, the text editor UltraEdit also appeared with a modern multi-row tabbed interface. The tabbed interface approach was then followed by the Internet Explorer shell NetCaptor in 1997. These were followed by several others like IBrowse in 1999, and Opera in 2000 (with the release of version 4 - although an MDI interface was supported before then), MultiViews October 2000, which changed its name into MultiZilla on April 1st, 2001 (an extension for the Mozilla Application Suite), Galeon in early 2001, Mozilla 0.9.5 in October 2001, Phoenix 0.1 (now Mozilla Firefox) in October 2002, Konqueror 3.1 in January 2003, and Safari in 2003. With the release of Internet Explorer 7 in 2006, all major web browsers featured a tabbed interface. Users quickly adopted the use of tabs in web browsing and web search. A study of tabbed browsing behavior in June 2009 found that users switched tabs in 57% of tab sessions, and 36% of users used new tabs to open search engine results at least once during that period. Numerous additional browser tab capabilities have emerged since then. One example is visual tabbed browsing in OmniWeb version 5, which displays preview images of pages in a drawer to the left or right of the main browser window. 
Another feature is the ability to re-order tabs and to bookmark all of the webpages opened in tab panes in a given window in a group or bookmark folder (as well as the ability to reopen all of them at the same time). Microsoft Internet Explorer marks tab families with different colours. Development Tab behavior in an application is determined by the underlying widget toolkit or framework (for example, Firefox uses GTK). Due to lack of standardization, behavior may vary from one application to the next, which can result in usability challenges. Tab hoarding Tab hoarding is digital hoarding of web browser tabs. Users may accumulate tabs as reminders of tasks to research or complete (rather than using dedicated reminder software). They may use multiple browser windows to organize tabs or direct focus; however, leaving multiple windows open can exacerbate tab clutter. Tab hoarding can lead to stress and information overload, distraction, and reduced computer performance. It can develop into emotional attachment to the set of open tabs, including fear of losing them upon a crash or other reboot, and conversely, relief when tabs are properly restored. Tab hoarders have attributed the behavior to anxiety, fear of missing out, procrastination, and poor personal information management practices. The prevalence of tab hoarding is acknowledged by browser vendors such as Mozilla, and has inspired memory and tab management features in browsers and extensions. Such features include tab grouping, which allows related tabs to be visually organized and collapsed; conversion of tabs into a list of hyperlinks; and alternative interface paradigms, such as framing high-level tasks as first-class objects instead of tabs. A 2021 study developed UI design considerations which could enable better tools and changes to the code of web browsers, allowing knowledge workers and other users to better manage and utilize their browser tabs. See also Comparison of document interfaces IDE-style interface Ribbon (computing) References External links TabPanel Widget ASP.NET AJAX Control Toolkit Scriptaculous AJAX tabs Tab Window Demo of the Pie Menu Tab Window Manager for The NeWS Toolkit 2.0 (1991). Graphical user interface elements Document interface Graphical control elements
Tab (interface)
Technology
1,191
34,902,187
https://en.wikipedia.org/wiki/Illusion%20knitting
Illusion knitting or shadow knitting is a form of textile art, in which the knitting is viewed as simply narrow stripes from one angle, and as an image when viewed from another angle. Illusion knitting has been recognised as an art form since 2010, largely due to the advances made by Steve Plummer who has created several large and detailed pieces. Similar effects occur in Tunisian crochet. Method Illusion knitting uses two colours of yarn and is worked in stripes of two rows in each colour. Illusion knitting is based on the flat smooth stocking stitch and the raised garter stitch. It is this combination of textures which allows the image to be seen only from the proper angle. Traditionally, charts for illusion knitting use four rows of knitting symbols to represent the stitches which the designer wishes to be seen. This makes the charts elongated and difficult to use for anything other than simple blocks of colour. These four rows make up two pairs where, in most cases, one pair is considered the opposite of the other. Where one pair has two rows of knit stitches, the other image pair has both knit and purl stitches. As in mosaic knitting, the knitter alternates between two colors. Colors with good contrast are preferred but are not required. The knitter knits two rows of color A, then two rows of color B, and repeats this throughout the body of the work. Only knit or purl stitches are used. Each row in the pattern, shown in the thumbnail to the right, represents four rows of knit or purl stitches, and each column represents one stitch. To follow this pattern, a knitter would use black and white: white being the background color (BC), and black being the master color (MC). Start at row one. This could be thought of as Row 1-1 and is a right-side row (RS). Row 1-1 (RS): With BC, knit. Row 1-2 (still following the pattern at row 1) (WS): Knit the blank boxes, purl the ones filled in. Row 1-3 (RS): Change to MC, knit. Row 1-4 (WS): Purl the blank boxes, knit the ones filled in. Move to Row 2 on the pattern and begin knitting the BC. (This is row 2–1.) Repeat for all rows and bind off. The visual effect of shadow knitting is due to the different height of the knit stitches on the wrong side rows. A knit stitch is flat, while a purl stitch is raised. Therefore, one can change which color (dark or light) stands out by changing from knit to purl. So the basic idea is to create a pattern in knit stitches in the colors one wants and purl stitches in the background color. When looking straight at the knitted piece, the stitches look approximately the same, but from an angle, only the raised purl stitches are visible. There are no constraints on the position of the purl/knit stitches, so a nearly infinite variety of patterns can be made. The pattern will not be apparent from every direction of viewing, since one ridge may "overshadow" another. Knitters often enjoy watching when the picture created becomes visible. The stark contrast of alternating light and dark stripes is also visually interesting. Extensions of the method include using more than two colors or using other stitches; e.g., lace knitting or cable knitting. Viewing For an illusion artwork to be effective it has to be able to be seen from a variety of angles. Generally, illusions that are designed to be viewed from the side are best. If the illusion hangs on the wall, the viewer can move around it and see the image appear and disappear. The illusion can be viewed equally well from the right and wrong sides. 
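The row-by-row expansion described under Method above can be made concrete with a short sketch. It assumes a simple two-colour chart in which True marks a square that is "filled in" (shown in the master colour); the function and colour names are illustrative only, not drawn from any published pattern, and the chart-reading direction on wrong-side rows is ignored for simplicity.

```python
# Minimal sketch: expanding a two-colour illusion-knitting chart into knitting rows,
# following the Method description above. Each chart row becomes four knitted rows:
# two in the background colour (BC) and two in the master colour (MC). A chart cell
# of True means the square is "filled in".

def expand_chart(chart, bc="white", mc="black"):
    """Yield (colour, side, stitches) for each knitted row implied by the chart."""
    for row in chart:
        yield (bc, "RS", ["k"] * len(row))                             # BC right side: knit all
        yield (bc, "WS", ["p" if filled else "k" for filled in row])   # knit blanks, purl filled
        yield (mc, "RS", ["k"] * len(row))                             # MC right side: knit all
        yield (mc, "WS", ["k" if filled else "p" for filled in row])   # purl blanks, knit filled

if __name__ == "__main__":
    tiny_chart = [
        [False, True, True, False],
        [True, False, False, True],
    ]
    for colour, side, stitches in expand_chart(tiny_chart):
        print(colour, side, " ".join(stitches))
```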
An illusion designed to be seen from the bottom would have to hang very high on the wall as a viewer would probably never be able to see it while facing it directly. This type of illusion is best used on a flat surface. Creating Charting Steve Plummer is a knitting artist who previously specialised in knitting wall-hangings and other items, primarily for the teaching of Mathematics. He approached illusion knitting from a mathematician's point of view and started to use a different method of charting. These charts use a square grid and are created by laying the grid over an existing image then colouring in all the stitches that need to be seen as raised bumps. Throughout the charting process it is still possible to see the original image and there is no distortion. These charts are different because they allow the four rows of the traditional method to be condensed into two different rows of squares. They can be used for creating very simple images, if required, but also allow much more flexibility to create works of art. The old method allowed areas to show either light or dark; the new method allows for intermediate shading, still using just two colours and only one colour in each stripe. The amount of shading depends on the number of stitches that are raised compared to the number that are lowered. Some of Steve's illusions are quite large. His Mona Lisa is significantly bigger than the real thing. The size is dictated by the smallest detail he wants to see, which, in a portrait, may be the centre of the eye. This then has to become one raised bump (garter stitch) on the knitting and the ridge in front of it must lie flat so you can see over it (stocking stitch). The rest of the image is built around this detail. The charting process takes perseverance, time and an amount of three-dimensional awareness. A chart can take up to 100 hours to produce. It is a process that can be learned by people with a little artistic ability and a lot of patience. Making and displaying Yarn: Illusions can be made using any smooth yarn in two contrasting colours. The same chart will work equally well in fine or chunky yarn, though the thicker yarn will create a much bigger piece of knitting and the viewer may have to stand further back to get the best effect. Needles: The clearest images are created by knitting on needles slightly thinner than recommended for the weight of the yarn used. If the art work is to be a bed-cover or wearable it needs to remain soft and flexible so should not be knitted too tightly. Markers: It is essential to be able to keep your place on the chart so it helps to use markers to match to the grid lines, every 10 or 20 stitches. Mounting pictures: If a piece is to hang on the wall it can be mounted on a board which will ensure that it remains flat. It should be very slightly stretched to prevent sagging. Mounting wearables: Wearables, such as shawls, can be displayed on a bar with a strip of Velcro attached to it. They are easily removed for wearing. Combining with other knitting techniques It is possible to combine illusion knitting with other textile techniques. Those listed below have been used successfully. There are probably many other possibilities. Intarsia can be used to introduce extra colours, as in the illusion of Marilyn Monroe, which was inspired by Andy Warhol's screen prints. It has four different colours. The same method of charting is used as only two colours occur in any one section, although the four sections are knitted as one piece. 
Modular knitting lends itself to illusions as small areas can be made separately then combined. Mitred knitting works well. The charting method needs to be adapted slightly to accommodate the different directions of the knitting. Geometric Illusion Art Illusion Art does not have to be pictorial. It also lends itself to geometric works. Sometimes the same illusion can be created in different ways. It is much less successful for abstract art. The brain needs to be able to perceive an image and fill in the gaps, which would be extremely difficult with an abstract design. Exhibitions Illusion Knitting Art is very new, the earliest exhibitions being held in 2010. Steve Plummer and Pat Ashforth 2010: University of Stirling, Royal Horticultural Halls, Herbert Art Gallery and Museum and other smaller venues. 2011: London Science Museum. 2011: Haworth Art Gallery, UK: Einstein (above) was exhibited alongside 150 more conventional artworks at a juried exhibition and was voted the favourite piece, by visitors to the exhibition. 2012: Lafayette College, Pennsylvania, US. 2012–2013: Puzzling World, Wānaka, New Zealand. 2012: Seven pieces by Steve Plummer acquired by Ripley's Believe It or Not!. Exhibitions to be announced. 2012: Proserpine featured on BBC2 Paul Martin's Handmade Revolution Tanja Boukal 2011: Vienna: Those In Darkness Drop From Sight. Artists Artists working in this field include Steve Plummer, Pat Ashforth, Brent Annable, Tanja Boukal, Nelleke Kool, Julie Rosencrans, Lisa Lehner and George Maffett. Some of these artists have experimented with using computer programs to speed up the design process. So far, no program has been as good as the artist's eye. George Maffett uses a Lego 3-D modelling program to assist in the process. References External links Steve Plummer's Illusion Knitting Patterns on Ravelry Vivian Høxbro's website Shadow knitting links Illusion knitting tutorials Design your own Illusion Patterns Knitting Optical illusions Textile techniques Textile arts Knitted fabrics 21st-century neologisms
Illusion knitting
Physics
1,911
36,250,088
https://en.wikipedia.org/wiki/Maltego
Maltego is a platform for open-source intelligence (OSINT) and cyber investigations, developed by Maltego Technologies GmbH, a company headquartered in Munich, Germany. Maltego is used by organizations across both the private and public sectors to support OSINT investigations, especially by cyber threat intelligence teams and law enforcement. It is employed by organizations such as the FBI, INTERPOL, financial institutions, and several DOW 30 companies. The platform supports both basic OSINT investigations for novice users and advanced analysis of large datasets for experienced analysts. It offers the ability to integrate internal data with a broad array of external data sources provided by Maltego. It also features tools for real-time collection, monitoring, and preservation of social media intelligence for public safety efforts, risk management, and legal prosecutions. History Maltego was originally developed by Paterva, a company based in Pretoria, South Africa. In 2019, Maltego Technologies, headquartered in Munich, Germany, assumed responsibility for all global customer-facing operations and later technology development and management. Certification and Compliance In 2023, Maltego Technologies received ISO 27001:2022 certification, an international standard for managing information security. The certification was renewed in 2024, to reflect the company’s ongoing commitment to maintaining internationally recognized standards of information security. Prior to obtaining ISO 27001:2022 certification, Maltego had already been compliant with the General Data Protection Regulation (GDPR). Charlesbank Acquisition On April 18, 2023, Maltego Technologies was acquired by Charlesbank Technology Opportunities Funds, managed by Charlesbank Capital Partners, for an undisclosed amount. As part of this acquisition, Charlesbank committed to investing over $100 million USD into the company to support its growth and development. Philip Mayrhofer, Managing Director of Maltego, commented on the acquisition, stating, "The Maltego platform is all about empowering investigators. Charlesbank shares our vision. They have made a significant investment in the company to accelerate product development and sales internationalization. This enables us to add more features and data sources and to improve usability for even more investigators." Following the acquisition, Maltego introduced new browser-based investigation capabilities and simplified data access, aimed at serving both novice and advanced investigators. The platform's expanded features were designed to facilitate collaboration in various settings, enhancing its utility for a broader range of users. Caleb Barlow, an industry expert who advised Charlesbank on the acquisition and joined Maltego's board, highlighted the platform's importance, stating, "I have known Maltego for over a decade, and it has been a staple in every cyber operators toolbox."  The investment was facilitated by Robert W. Baird, who served as the exclusive M&A adviser to Maltego and its selling shareholders. Acquisition of PublicSonar and Social Network Harvester In March 2024, Maltego Technologies acquired PublicSonar and Social Network Harvester to provide more capabilities to its all-in-one investigation platform. PublicSonar, developed in the Netherlands, offered a tool that leverages OSINT for large-scale, real-time monitoring, particularly in the context of physical security and public safety. 
It was widely used by organizations to manage public safety operations by analyzing and acting upon data from various open sources. By integrating PublicSonar into its platform, Maltego expanded its capabilities beyond cyber intelligence to include real-time public safety management. After the acquisition, PublicSonar, was rebranded as Maltego Monitor, reflecting its new role within the suite of tools of the German company. Social Network Harvester was designed for social network analysis, enabling investigative teams to collect, analyze, and preserve social media data that can be used as court-admissible evidence. It was particularly used by law enforcement and intelligence agencies that require robust tools for tracking and analyzing social media activities. After the acquisition, the German-developed Social Network Harvester was rebranded as Maltego Evidence and integrated into the platform offering. These acquisitions were motivated by Maltego’s vision of creating a comprehensive platform that supports a wide range of investigative needs, from cyber threat intelligence to public safety and legal investigations. By integrating these tools, Maltego strengthens its position as a platform for organizations involved in complex investigations with the ability to manage and interpret vast datasets. Product In 2023, Maltego began its transition from a single link analysis tool to an all-in-one platform that supports a wide range of users, including novice investigators, trained OSINT analysts, and technical investigators at law enforcement agencies, government institutions, large cyber threat intelligence teams, and enterprises worldwide. Tools in the All-in-One Platform The Maltego Graph, previously known as the Maltego Desktop Client, has been widely used for conducting complex and large-scale OSINT investigations, with the flexibility to integrate with other tools via API. In late 2023, Maltego introduced Maltego Search (originally released as OSINT Profiler), a browser-based tool designed to facilitate quick and automated preliminary OSINT searches, making it accessible to non-technical users. Following the acquisition of additional capabilities in April 2024, the platform expanded to include Maltego Monitor (formerly PublicSonar) and Maltego Evidence (formerly Social Network Harvester). These tools enhance the platform by providing monitoring and social network analysis functionalities, thereby broadening the scope of investigative support offered by Maltego. Data in the All-in-One Platform Maltego Data is a component of the Maltego platform that provides access to both internal and external data sources. This offering includes the Maltego Data Pass, Connectors, and Connector Builders. Maltego Data Pass offers users access to a curated and expanding collection of data sources relevant to a wide range of investigations, including those focused on persons of interest, threat intelligence, cryptocurrency, the dark web, and corporate intelligence. The Data Pass operates on a credit-based system, with allowances included in the user’s plan. Maltego serves as an intermediary, ensuring that data providers do not have visibility into the investigative activities of users. Maltego Connectors are integrations that enhances the platform's capabilities by enabling seamless access to over 100 pre-built Connectors, allowing users to effortlessly integrate additional data sources into their investigations with a single click. 
Connector Builders allow users to create custom Connectors to access internal data sources or external APIs for which they have API keys. This feature enables organizations to customize their data integration, utilizing Maltego's SDKs and Transform libraries. Users can also deploy Connectors developed and shared by the broader community, such as those available on GitHub. Services in the All-in-One Platform Maltego offers a range of services as part of its platform for customers on Professional and Organization plans. These services include: Maltego Academy with on-demand learning resources and custom live training sessions designed for investigators using the Maltego platform. Advisory services that offer guidance on workflow optimization and the development of new use cases to enhance the effectiveness of investigations. Technical custom engineering services that deliver specialized deployment and integration solutions tailored to meet specific organizational needs. Custom training services that include custom training sessions conducted by Maltego’s Subject Matter Experts. Maltego Admin for auditing and analyzing an organization’s Maltego usage, managing billing, and overseeing access authorization. References Data analysis software Domain Name System Internet architecture
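As a rough illustration of what a custom Connector or Transform can look like in practice, the sketch below uses the community maltego-trx Python package; this is an assumption rather than a description of Maltego's official SDKs, and the internal API endpoint, response shape, and transform name are hypothetical placeholders. The transform maps an input domain entity to IP-address entities returned by a notional internal data source.

```python
# Minimal sketch of a custom Maltego transform. Assumes the community
# maltego-trx package; the internal API URL and its JSON response shape
# are hypothetical placeholders, not a real service.
import requests
from maltego_trx.transform import DiscoverableTransform


class DomainToInternalIPs(DiscoverableTransform):
    """Map an input domain entity to IP-address entities from an internal source."""

    @classmethod
    def create_entities(cls, request, response):
        domain = request.Value  # value of the entity the transform was run on

        # Hypothetical internal endpoint; substitute a real data source or API key.
        reply = requests.get(
            "https://internal.example.com/api/resolve",
            params={"domain": domain},
            timeout=10,
        )
        reply.raise_for_status()

        for ip in reply.json().get("addresses", []):
            # "maltego.IPv4Address" is a standard Maltego entity type name.
            response.addEntity("maltego.IPv4Address", ip)
```

Registered with a local transform server (for example via maltego-trx's project scaffolding), such a script can then be called from a graph like any other Connector; the details of deployment depend on the organization's setup.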
Maltego
Technology
1,535
484,764
https://en.wikipedia.org/wiki/Process%20theory
A process theory is a system of ideas that explains how an entity changes and develops. Process theories are often contrasted with variance theories, that is, systems of ideas that explain the variance in a dependent variable based on one or more independent variables. While process theories focus on how something happens, variance theories focus on why something happens. Examples of process theories include evolution by natural selection, continental drift and the nitrogen cycle. Process theory archetypes Process theories come in four common archetypes. Evolutionary process theories explain change in a population through variation, selection and retention—much like biological evolution. In a dialectic process theory, “stability and change are explained by reference to the balance of power between opposing entities” (p. 517). In a teleological process theory, an agent “constructs an envisioned end state, takes action to reach it and monitors the progress” (p. 518). In a lifecycle process theory, “the trajectory to the final end state is prefigured and requires a particular historical sequence of events” (p. 515); that is, change always conforms to the same series of activities, stages, or phases, like a caterpillar transforming into a butterfly. Applications and examples Process theories are important in management and software engineering. Process theories are used to explain how decisions are made, how software is designed, and how software processes are improved. Motivation theories can be classified broadly into two different perspectives: content and process theories. Content theories deal with “what” motivates people and are concerned with individual needs and goals. Maslow, Alderfer, Herzberg and McClelland studied motivation from a “content” perspective. Process theories deal with the “process” of motivation and are concerned with “how” motivation occurs. Vroom, Porter & Lawler, Adams and Locke studied motivation from a “process” perspective. Process theories are also used in education, psychology, geology and many other fields; however, they are not always called "process theories". See also Interactions of actors theory Process-oriented psychology Process philosophy Process architecture Notes References A Brief Introduction to Motivation Theory Management science
Process theory
Biology
435
42,882,484
https://en.wikipedia.org/wiki/Battlefield%20Hardline
Battlefield Hardline is a first-person shooter video game developed by Visceral Games and published by Electronic Arts. It was released in March 2015 for PlayStation 3, PlayStation 4, Windows, Xbox 360, and Xbox One. Although the game is set chronologically between Battlefield 3 and Battlefield 4, Hardline focuses on crime, heist, and policing elements instead of military warfare. Upon release, the game received a mixed critical reception, with critics praising its multiplayer mode, accessibility, and voice acting, while criticizing its plot, stealth, and narrative. It is the final Battlefield game to be released for the PlayStation 3 and Xbox 360 platforms. It was also the last game to be developed by Visceral Games before the company shut down in 2017. Gameplay The focus of the game is the "war on crime", breaking away from the military setting that characterized the series. As such, the main factions in Hardline are police Special Response Units and criminals. Players have access to various military-grade weapons and vehicles, such as the Lenco BearCat, as well as police equipment such as tasers and handcuffs. Hardline also uses the "Levolution" mechanic from Battlefield 4. For example, in the map "Downtown" players can send a construction crane crashing into a building, ripping debris from the central downtown towers that falls onto the streets of Los Angeles. This time, every map features multiple Levolution events, both small and large. Many new game modes are featured in Hardline, including "Heist", "Rescue", "Hotwire", "Blood Money", and "Crosshair". Heist: The criminals must break into a cash-filled vault (or, as featured in some maps, blow open the doors of an armored truck) and then move the cash-filled packages to an extraction point; the police must stop them. If the criminals manage to escape by bringing all the money to the extraction point, they win. Blood Money: Both factions must retrieve money from an open crate in the center of the map, then move it back to their respective side's armored truck. Players can also steal money from the opposing team's truck. The first team to deposit $5 million into its truck, or the team with the most money when the time limit expires, wins. Hotwire: Drivable cars take the role of traditional Conquest "flags". Like Conquest, capturing cars (done by driving above a certain "cruising" speed) will bleed the enemy team's reinforcement tickets. The team that reduces the other's tickets to zero, or that has the most tickets remaining after the time limit, wins. Rescue: In a three-minute, 5-vs-5 competitive mode, S.W.A.T. officers must try to rescue hostages held by criminal forces. The cops win either by rescuing the hostage(s) or by killing all the criminals. Criminals win by killing all the cops, or by defending the hostages until the negotiations are over. Each player has only one life in this mode, which means no respawns. Crosshair: The second competitive game mode in Battlefield Hardline. Crosshair is also three minutes long and 5 vs 5 with only one life. In Crosshair the criminals try to kill a player-controlled VIP on the cops' side, a former gang member turned state's witness. The criminals win by killing the VIP, and the cops win by getting the VIP to the extraction point. Visceral Games confirmed that the single-player campaign would not be linear and promised to deliver a better one than its predecessors.
The campaign features episodic crime dramas where choices will change situational outcomes and gameplay experiences. As a cop, players can use multiple police gadgets and personal equipment. The police badge can be used to order criminals to lay down their weapons, the scanner is used to stake out a situation, identify high-value targets, log evidences, tag alarms, and mark other threats. To slip past unnoticed, players can use bullet cases to distract enemies. Synopsis Setting Miami is embroiled in a drug war and Officer Nicholas "Nick" Mendoza (voiced by Philip Anthony-Rodriguez, motion captured by Nicholas Gonzalez) has just made detective. Alongside his partner, veteran detective Khai Minh Dao (Kelly Hu), he follows the drug supply chain from the streets to the source. In a series of increasingly off-the-books cases, the two detectives come to realize that power and corruption can affect both sides of the law. Plot In 2012, Miami Police Detectives Nick Mendoza and Carl Stoddard (Travis Willingham) make a drug bust that goes violent. After arresting a fleeing suspect, Captain Julian Dawes (Benito Martinez) has Nick partner up with Khai Minh Dao to follow a lead to cocaine broker Tyson Latchford (Adam J. Harrington). Forcing his associate Tap (David DeSantos) to wear a wire, they find a new drug called Hot Shot being sold in the streets of Miami and rescue Tyson from a group of armed men. In the process Khai is severely wounded, putting her out of action for several weeks. After returning (against her doctor's orders), Dawes orders the two to bring in Leo Ray (Graham Shiels) from the Elmore Hotel but are forced to fight their way through armed men connected to drug dealer Remy Neltz (T.J. Storm), who is distributing the Hot Shot drug. While capturing Leo, Khai beats him up for seemingly insulting her. Leo's information leads the two detectives to the Everglades, where drug bales are being dropped. Investigating the area, they discover several of Neltz's drug operations and Leo's mutilated corpse, who was presumably killed for cooperating with the Miami Police. They eventually find Neltz only to escape back to Miami. Before leaving, he mentions that he took a deal from Stoddard. The officers corner him in a Miami warehouse only for Stoddard to kill Neltz as he was about to elaborate more on their deal. Nick leaves in disgust after Stoddard and Khai take some cash before more officers arrive. Later, as a hurricane makes landfall, Dawes sends Nick and Khai back to the crime scene for any evidence incriminating Stoddard. Finding Neltz's recording implicating Stoddard, Nick finds his former partner in a meeting with other dealers but is forced to work with him to rescue Khai from more armed men. The three later meet Dawes, who destroys the evidence implicating Stoddard and revealing that himself and Khai are corrupt. The three betray Nick due to his refusal to go along with their scheme, framing him for laundering Neltz's drug money. Three years later in 2015, while on a prison bus, Nick escapes with the help of Tap and Tyson. The mastermind behind Nick's escape is none other than Khai. Despite raw feelings about her betrayal and being framed, Nick leaves with Khai and Tyson for Los Angeles. Khai briefs Nick that during the three years he has been in prison, Dawes founded private law enforcement firm Preferred Outcomes, having 'cleaned up' Miami and is starting to expand into other US cities. 
Wanting to ruin Dawes, Khai sends Nick and Tyson to rendezvous with Marcus "Boomer" Boone (Eugene Byrd) and the three of them disrupt Korean Mafia leader Kang's drug business (Dawes' main drug distribution spot in LA). Although not finding much, Nick and Khai follow another lead to the house of drug kingpin Neil Roark (Mark Rolston). During Roark's meeting, Nick comes up with the idea to steal Dawes' money before he can launder it and uses Khai's phone as a makeshift tracking device by placing it in a briefcase to be taken to where the rest of Dawes' money is being kept. After surviving a brief assault by Roark's men, Nick and Khai make their escape. Dawes' money is kept in the penthouse of his corporate HQ skyscraper back in Miami and behind an impregnable vault, Boomer calls a former associate of his for a safecracking robot. He and Nick drive to the desert to meet Boomer's contact, his ex-girlfriend Dune (Alexandra Daddario), who sets up a meeting with her father, Tony Alpert (Fred Tatasciore). Alpert backstabs them, however, revealing he knows Nick is an escaped felon and that Stoddard has placed a bounty on him for his capture alive. Nick and Boomer escape their prison and retrieve their gear from Alpert's compound. Along the way, Nick discovers that Alpert was behind the creation and manufacturing of the Hot Shot drug, and murdered an ATF agent named Darius Barnes (Josh Keaton) to cover up his plans of starting a civil war. Dune helps the two escape to an abandoned airfield, but they separate after surviving Alpert's ambush at a gas station. At the airfield, Nick retrieves the safecracking robot and wins a tank duel against Alpert, before he and Boomer escape in a plane Boomer had repaired. As Khai, Nick, Boomer, and Tyson prepare to leave for Miami they are ambushed by Stoddard and his men. Nick kills his former partner and sends a picture of Stoddard's body to Dawes. The group arrive in Miami and infiltrate Preferred Outcomes HQ. They find the vault in Dawes' penthouse only to find it booby-trapped. Tyson is gravely wounded by the blast but survives. Nick answers Khai's ringing phone in the empty vault to hear Dawes on the other side, telling Nick to come find him at Santa Rosita off the coast of Florida. Nick departs from his group on the island, who leave to find medical attention for Tyson, and infiltrates it alone to Dawes' mansion. Nick finds his former captain in his office, where Dawes tells him that he wishes Nick to join him and take over Preferred Outcomes once Dawes is gone and that the two are akin to be "more criminal than cop". Nick agrees to the last remark and unhesitantly shoots Dawes dead. Searching his office, he finds a letter addressed to him from Dawes explaining why he framed Nick three years earlier and follows a passage to his underground vault. Inside the vault, Nick finds Dawes' laundered fortune, which is now his, left to wonder how he will use it. Development Battlefield Hardline was revealed on an EA blog post by vice president and general manager of Visceral Games, Steve Papoutsis. The game was due for announcement during E3 2014, but information was leaked early. Unlike other games in the Battlefield franchise that feature military warfare, Hardline features a "cops and robbers" gameplay style. The leaked trailer refers to the game as Omaha. "Visceral started work on Battlefield Hardline about a year before Dead Space 3 shipped," creative director Ian Milham has revealed, suggesting that the game may have entered development in early 2012. 
On June 14, 2014, the Battlefield Hardline beta went public, coming after an official announcement at E3 2014 that the beta would be coming soon to PC and PlayStation 4. The beta ended on June 26, 2014. Later at E3 2014, EA confirmed that the game would be running at 1080p on the PlayStation 4 and was aiming to achieve the same resolution for the Xbox One version. However, on March 8, 2015, Visceral Games revealed that the PlayStation 4 version would only run at 900p, with the Xbox One version running at 720p. On February 3, 2015, the Battlefield Hardline beta became publicly active for all platforms. It was reported that 7 million people participated in the open beta and it was met with positive reception from both critics and players. On February 24, 2015, Electronic Arts confirmed that the game had been declared gold, indicating it was being prepared for production and release. Release On July 22, 2014, EA announced that they would delay Battlefield Hardline from October 21, 2014, to March 17, 2015. The reason for the delay was to implement the feedback given during the public beta. The Premium Edition of the game was announced on March 2, 2015. Players who purchased the Premium Edition will unlock several features, including masks, a Gun bench that allows player to customize their weapons and "Legendary Status", a feature relating to the progression system of the game. On the same day, the four expansion packs of the game, namely Criminal Activity, Robbery, Getaway, and Betrayal were announced. Similar to Battlefield 4s Premium Program, premium members of Hardline gained access to the four expansion packs two weeks before other players. Four new maps, as well as new vehicles, masks, and weapons were introduced to the game through the Criminal Activity DLC. According to the lead multiplayer producer Zach Mumbach, the pack would put more emphasis on "destructibility". A new game mode called "Bounty Hunter" is also featured. It was released in June 2015. The second expansion, Robbery, features a five-versus-five multiplayer modes called Squad Heist, new paints, weapons and "Legendary Super Feature". The expansion pack was released in September 2015. The third expansion, Getaway, which adds a new mode called "Capture the Bag" and new maps to the game, was released on January 12, 2016. The final expansion, Betrayal, was released in March 2016. Reception Critical response The PC, PlayStation 4 and Xbox One versions received "mixed or average" reviews according to the review aggregation website Metacritic. In Japan, where the game was ported for release on March 19, 2015 (the same release date as the PAL version), Famitsu gave the console versions each a score of two nines and two eights for a total of 34 out of 40. Anthony LaBella of GameRevolution praised PS4 version's stealth element, action-packed sequences, detailed single-player campaign, compelling and fast-paced multiplayer and the Heist mode, which requires players to utilize teamwork. He also praised the other new modes featured in the game such as Hotwire and Crosshair, which he stated "has showcased the transition from warfare to crime and provide plenty of entertainment outside of the traditional Battlefield experience". However, he criticized the predictable plot, flat characters, poor presentation of the campaign and the uninteresting story. 
He summarized the review by saying that "The combination of the stealth-focused campaign and many multiplayer modes establishes Battlefield Hardline as a worthwhile standalone entry in the popular FPS franchise." Brian Albert of IGN praised the game's enjoyable campaign, surprising comedic moments, decent plot, voice-acting and animation, likeable characters, well-designed levels, realistic weapons and audio, rewarding stealth, as well as the single-player campaign for requiring the player to utilize patience and skill and the game for encouraging players to use non-lethal takedown. He also praised the huge variety of multiplayer modes, the dynamic Hotwire mode and the well-designed and varied maps. He also praised the new gameplay features such as the grappling hook and zip-line for making transversal faster. However, he criticized the unlock system for not awarding players in accordance to their playstyles and the overly-simplistic AI. He summarized the review by saying that "Battlefields first foray into stealth makes for a fresh campaign, and the multiplayer has something for everyone." Jeff Marchiafava of Game Informer said that the PS4 version's single-player campaign "is a mess", and that its ending is "facepalm-worthy". However, he also said that the multiplayer mode is "still worthy of the Battlefield name". He summarized his review by saying that while the single-player campaign "falls flat, the heart of the Battlefield franchise beats on – albeit at a different tempo". Ben Griffin of GamesRadar+ praised its new-players friendly and compelling multiplayer, refreshing multiplayer modes, rewarding interrogations system and detailed character models. However, he criticised the unfocused campaign, simplistic and predictable AI, as well as the campaign's over-reliance on stealth, which he stated "has never evolved during the campaign". He summarized the review by saying that "While not quite as main-event-essential as previous Battlefield blockbusters, the tighter, faster Hardline is most definitely the good cop." Jeff Gerstmann of Giant Bomb praised the game's collectibles, which he stated "have actual context"; he criticised the idiotic AI partners as well as the poor story which has failed to deliver character development, tension and logic. He summarized the review by saying that "Battlefield Hardline is hardly a disaster, but it feels like a franchise spinning its wheels with minor adjustments, rather than truly advancing forward." He also noted that the game generally enjoyed a more stable launch than its predecessor Battlefield 4, as he stated that the game performs functionally across all platforms. Brett Phillips of VideoGamer.com strongly criticized the PS4 version's campaign, calling it "the worst campaign in the entire series". He also criticized its poorly-designed spawn points, unnecessary item-scanning, clichéd twists, anarchic and inconsistent Conquest mode, boring and frustrating Hotwire mode, as well as the removal of heavy weapons such as rocket launcher from the weapon menu. The progression system was also criticized for being incongruous with the narrative of the game. He also criticized the map design for lacking imagination and verticality, matches for lasting too long and the game itself for not taking any risks. 
He called the game "a forgettable, immature experience rather than one worth talking about" and he summarized the review by saying that "Battlefield Hardline could have been something unique, a chance for Visceral to place its own stamp on a long-standing franchise. What we instead get is a laughably-shambolic campaign and multiplayer that is merely serviceable and too timid to step out of Battlefield 4s shadow." Adam Rosenberg of Digital Trends gave the Xbox One version a score of four-and-a-half stars out of five, calling it "a two-pronged success, with a killer cops-and-robbers story backed by a speedy take on competitive play." Dean Takahashi of VentureBeat gave the Xbox One version a score of 86 out of 100, saying, "Overall, I think that EA and Visceral have established a new franchise within the Battlefield series, and one that could live on for many years to come." Chris Holzworth of EGMNow gave the PS4 version 7.5 out of 10, saying that it "might not reinvent the wheel the series rolls on, but it certainly makes it spin a whole lot smoother. Speeded up gameplay, an opened-up single-player, and a robust suite of new multiplayer modes lends itself to the best Battlefield to date—though that's not saying much, a decade later." Edge gave the PC version a score of seven out of ten, saying, "It feels like just that: a lower-budget sideshow to the glitzy main event." Mat Growcott of Push Square gave the PS4 version a score of seven stars out of ten, calling it "a decent game that gets points for originality of concept, but how much value it has is down to how much you enjoyed previous entries in the franchise, and how much you'd like to see the Cop FPS genre become a thing." Kirk McKeand of The Daily Telegraph gave the same PS4 version a score of six out of ten, saying, "There is still a great multiplayer shooter here, but it feels more like an expansion than a full sequel - if it wasn't for the campaign, Hardline would be Battlefield 4s version of Bad Company 2s Vietnam expansion - it even has the vehicle music. It just forgot to bring the personality." James McMurtie of National Post gave the PC version seven out of ten, saying, "Hardlines release was smooth, and although it did feel like a modified BF4, it also plays like something novel and worthwhile all on its own." Mike LeChavalier of Slant Magazine gave the PS4 version a score of three-and-a-half stars out of five, saying, "It wouldn't be a Battlefield game without a host of multiplayer scenarios, and Hardline is definitely no slouch in that department, even if the assortment of options lack a certain sweeping freshness that would have been greatly appreciated." David Jenkins of Metro gave the same PS4 version seven out of ten, saying, "The cops 'n' robbers theme often does more harm than good to the Battlefield formula, but this peculiar spin-off has just enough tricks of is own to be worth a collar." Andrew Phillips of The Digital Fix gave the Xbox One version six out of ten, calling it "a Battlefield game with weak single player and solid if underwhelming multiplayer - absolutely no one saw this coming." Ebenezer Samuel of New York Daily News gave the same console version three stars out of five, saying, "The end result is a Battlefield game that's solid, but not spectacular. Visceral takes the series narrative where its never been before, builds a solid story, and adds little pieces that have potential." 
However, Michael Thomsen of The Washington Post gave the PC version an unfavorable review, saying, "Hardline works best in its multiplayer portion where it abandons the pretensions of police work and storytelling. Playing Battlefield online is stepping into a sprawling tempest of gunfire with 63 other players. Here, violence has a cross-canceling effect, in which neither side is granted automatic authority and every power and ability can be questioned by the other side." One aspect of the game that was singled out by games media was a set of Easter eggs: when reloading a gun, there is a one in 10000 chance that instead of the standard reload animation, a comically absurd animation will play, which the press called "hilarious" and "zany". Sales The retail version of Battlefield Hardline debuted at No. 1 in the UK software sales chart in its first launch week. It also became the best-selling title in the UK in 2015 as of March 23, 2015. According to NPD Group, the game was the best-selling game in March in the United States. Notes References External links 2015 video games Asymmetrical multiplayer video games Battlefield (video game series) First-person shooters Frostbite (game engine) games Multiplayer and single-player video games Organized crime video games PlayStation 3 games PlayStation 4 games Stealth video games Fiction about theft Video games about police officers Video games about terrorism Video games about the illegal drug trade Video games developed in the United States Video games set in California Video games set in Colorado Video games set in Detroit Video games set in Florida Video games set in Los Angeles Video games set in Mexico Video games set in Miami Video games set in 2012 Video games set in 2015 Video games that support Mantle (API) Visceral Games Windows games Xbox 360 games Xbox One games Video games using Havok Video games scored by Paul Leonard-Morgan
Battlefield Hardline
Physics
4,815
36,347,942
https://en.wikipedia.org/wiki/Weak-Link%20Approach
The Weak-Link Approach (WLA) is a supramolecular coordination-based assembly methodology, first introduced in 1998 by the Mirkin Group at Northwestern University. This method takes advantage of hemilabile ligands -ligands that contain both strong and weak binding moieties- that can coordinate to metal centers and quantitatively assemble into a single condensed ‘closed’ structure (Figure 1). Unlike other supramolecular assembly methods, the WLA allows for the synthesis of supramolecular complexes that can be modulated from rigid ‘closed’ structures to flexible ‘open’ structures through reversible binding of allosteric effectors at the structural metal centers. The approach is general and has been applied to a variety of metal centers and ligand designs including those with utility in catalysis and allosteric regulation. Weak-Link Approach components There are three main components of the WLA methodology that enable the in situ control of supramolecular architecture: 1) the utilization of hemilabile ligands, 2) the choice of metal centers, and 3) the type of allosteric effector. Hemilabile ligands utilized in the WLA A key component of the WLA is the use of hemilabile ligands. Hemilabile ligands are polydentate chelates that contain at least two different types of bonding groups, denoted X and Y (Figure 2). The first group (X) bonds strongly to the metal center, while the other group (Y) is weakly bonding and easily displaced by coordinating ligands or solvent molecules (Z). In this way, the substitutionally labile group (Y) can be displaced from the metal center yet remain available for recoordination. For WLA-generated structures, a typical ligand design consists of a phosphine-based strong binding group and a weak-binding group containing O, S, Se, or N. More recent reports have utilized N-heterocyclic carbenes (NHC) as the strong-binding moiety. By using a combination of NHC- and phosphine-based hemilabile ligands, heteroligated complexes, and macrocycles have been successfully synthesized, allowing access to more complex architectures with sophisticated functions. Metal centers utilized in the WLA Due to the well-developed understanding of the reactions between the hemilabile ligands and d8 metal ions, the WLA has relied extensively on this type of metal center within its methodology. Initial reports focused on the use of Rh(I), but Ir(I), Ni(II), Pd(II), and Pt(II) have all been successfully employed. While d8 metal centers dominate the WLA literature, d6 Ru(II) and d9 Cu(I) have also been utilized. Importantly, the choice of metal centers tunes the identity and selectivity of the various allosteric effectors. Types of allosteric effectors in the WLA The use of hemilabile ligands allows structural motifs synthesized via the WLA to be modified with small molecule effectors much like allosteric enzymes in biology. As described above, the weak Y–M bond can be easily displaced by a coordinating ligands including Cl−, CO, CH3CN, RCO2−, and a variety of nitriles/isonitriles (Figure 2). Typical WLA constructs rely on the allosteric effector’s stronger affinity for the metal center versus the weakly binding Y moiety. Upon introduction of these effectors, the closed, rigid structures open to their more flexible form. The closed structures can then be reformed in situ by halide abstraction agents, such as noncoordinating silver and thallium salts, or by evacuation of the reaction chamber to remove solvent or small molecules. 
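In the notation used above for the hemilabile ligands (X = strongly binding group, Y = weakly binding group, Z = coordinating effector or solvent; this is a generic sketch rather than a specific compound from the literature, with counterions omitted), the reversible switching between the two states can be summarized as:

[M(κ²-X,Y)₂]ⁿ⁺ (closed, rigid)  +  2 Z  ⇌  [M(κ¹-X···Y)₂(Z)₂]ⁿ⁺ (open, flexible)

Effector binding drives the forward reaction, while halide abstraction or removal of the volatile effector drives the reverse; for anionic effectors such as Cl⁻ the overall charge of the open complex changes accordingly.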
Recent progress has shown that the inclusion of pendent redox active transition metal groups in the WLA ligands enables control over the binding of ancillary ligands to a redox-inactive Pt(II) center via oxidation and reduction of the distal metal site (Figure 3). This discovery highlights that new forms of stimuli can be incorporated into the WLA for the design of novel stimuli-responsive materials. Classes of allosteric structures assembled via the Weak-Link Approach The generality of the WLA and its ability to accommodate a multitude of functional groups has allowed the facile synthesis of both molecular and supramolecular architectures. These structures can be broadly grouped into two classes of compounds based on the coordination geometry of the “closed” complexes: 1) cis-WLA complexes and 2) trans-WLA complexes. cis-WLA complexes The majority of WLA architectures synthesized to date can be classified as cis-WLA complexes. The strong-binding moieties adopt cis-coordination geometry around the metal center in these complexes, regardless of the identities of the strong-binder. For example, the heteroligated complex shown in Figure 3 is understood to be a cis-WLA complex because both the NHC- and phosphino- groups, the strong-binding components, are cis relative to each other. Using these complexes, molecular tweezers, macrocycles, and triple-layer structures have all been successfully synthesized (Figure 4). In 2017, the Mirkin group reported infinite coordination polymer particles incorporating WLA approach complexes. The extended structure was successfully obtained by appending secondary terpyridine groups onto the hemilabile ligands within the WLA subunits and allowing them to selectively bind Fe(II) ions (Figure 5). trans-WLA complexes The first trans-WLA complex was reported by the Mirkin group in 2017. In this complex, two NHC groups adopt a trans-coordination geometry around a Pd(II) metal center due to the addition of the sterically bulky tert-butyl groups to the imidazole ring of the hemilabile ligand. Upon effector binding, a linear change of up to ~9Å was observed (Figure 6). To date, only this molecular complex has been reported utilizing a trans-WLA complex. Examples of functional allosteric structures Allosteric regulation in supramolecular structures generated via the WLA is particularly important in the context of designing and synthesizing novel, bioinspired catalytic systems, where the conformation of the complex controls the activity of the catalyst. Below are a series of different catalytic motifs that have been constructed via the WLA and a discussion of the control mechanisms that can be used to modulate catalytic activity: ELISA mimic The first catalytically active supramolecular structure generated via the WLA was designed to operate via a mechanism inspired by the Enzyme Linked ImmunoSorbent Assay (ELISA). In such a supramolecular system, a target sandwiching event creates a catalyst target complex that subsequently generates chemiluminescent or fluorescent readout. For example, a homologated WLA-based Rh(I) macrocyclic structure has been developed that incorporates pyridine-bisimine Zn(II) moieties and behaves as an efficient and completely reversible allosteric modulator for the hydrolysis of 2-(hydroxypropyl)-p-nitrophenyl phosphate (HPNP), a model substrate for RNA (Figure 7). 
Significantly, the structural changes induced by small molecule regulators Cl− and CO transition this system from a catalytically inactive state to a very active one in a highly reversible fashion. Further, this system provides a highly sensitive platform for sensing chloride anions. As chloride binds to the Rh(I) centers, the complex is opened, allowing hydrolysis to occur. The hydrolysis product of the reaction (p-nitrophenolate) can be followed by UV-vis spectroscopy. As in ELISA, the WLA-generated mimic can take a small amount of target (chloride anions) and produce a large fluorescent readout that can be utilized for detection. There are several notable conclusions that can be drawn based on the catalytic studies of this complex. The first is that the closed complex is completely inactive under hydrolysis conditions. Second, the open complex is extremely active and capable of quantitatively hydrolyzing all the HPNP substrate in less than 40 min. By simply bubbling N2 into the solution, the reformation of the closed complex and the generation of an inactive catalyst can be achieved. PCR mimic The polymerase chain reaction (PCR) is utilized in biochemistry and molecular biology for exponentially amplifying nucleic acids by making copies of a specific region of a nucleic acid target. When coupled with diagnostic probes, this technique allows one to detect a small collection of molecules under very dilute conditions. A limitation of PCR is that it only works with nucleic acid targets, and there are no known analogues of PCR for other target molecular candidates. Using the WLA, this type of target amplification approach has been exemplified in an abiotic system. By incorporating Zn(II)-salen ligands into a supramolecular assembly, an acyl transfer reaction involving acetic anhydride and pyridylcarbinol as substrates was investigated. In the absence of acetate, there is almost no catalytic activity. Once a small amount of tetrabutylammonium acetate reacts with inactive complex at its two rhodium centers that serve as structural regulatory sites, it is converted into open cavity complex, which then catalyzes the reaction (Figure 8). In the early stages of the reaction, only a minor amount of the catalyst is activated. As the reaction proceeds, more acetate is generated, which leads to the formation of more activated complex and progressively faster catalysis. This type of behavior is typical for cascade reactions including PCR. Unlike the previous example in which the catalyst produced a signal amplifier, this catalyst is a target amplifier making more copies of the target acetate. Following the reaction by gas chromatography, one observes that the generation of products follows a sigmoidal curve, indicative of a PCR-like cascade reaction system. Triple-layer structure There was also a need to design a catalytic structure that would allow for the inclusion of mono-metallic catalyst that could be completely turned off. To this end the triple-layer motif was developed, composed of two transition metal nodes, two chemically inert blocking exterior layers, and a single catalytically active interior ligand. This complex was synthesized using the WLA and halide induced ligand rearrangement processes, and it can be reversibly activated and deactivated through small-molecule or elemental anion effector reactions that assemble and disassemble the trilayer structures. 
In an Al(III)-salen example, the polymerization of ε-caprolactone could be turned on and off based on the ancillary ligands and abstraction agents added to the system (Figure 9). Unlike previous catalytic structures that utilized bimetallic systems, the triple-layer motif allows for the incorporation of a monometallic catalyst, opening the scope of potential catalysts that can be employed using these types of structures. References Molecular physics
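The sigmoidal, PCR-like behavior described in the "PCR mimic" discussion above can be made concrete with a toy kinetic model. The sketch below is purely illustrative: the rate constants and concentrations are arbitrary, product is treated as a non-consumed trigger that activates the closed catalyst, and no attempt is made to reproduce any reported system.

```python
# Toy autocatalytic cascade: product activates dormant (closed) catalyst,
# and the activated (open) catalyst makes more product. All values are
# arbitrary illustrative choices, not data from the WLA literature.
import numpy as np

k_act = 50.0    # activation of closed catalyst by product, M^-1 s^-1 (assumed)
k_cat = 2.0     # turnover of substrate by open catalyst, M^-1 s^-1 (assumed)
dt, t_end = 0.01, 600.0

S = 1.0e-1      # substrate concentration, M
P = 1.0e-6      # trace product seeding the cascade, M
C_closed, C_open = 1.0e-3, 0.0

history = []
for step in range(int(t_end / dt)):
    activation = k_act * C_closed * P * dt   # product switches closed -> open
    turnover = k_cat * C_open * S * dt       # open catalyst converts substrate
    C_closed -= activation
    C_open += activation
    S -= turnover
    P += turnover                            # product is not consumed by activation
    history.append((step * dt, P))

# A slow induction period followed by rapid acceleration and leveling off,
# i.e. a sigmoidal progress curve of the kind seen in cascade reactions.
for t, p in history[:: len(history) // 10]:
    print(f"t = {t:6.1f} s   [product] = {p:.4e} M")
```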
Weak-Link Approach
Physics,Chemistry
2,335
4,588,735
https://en.wikipedia.org/wiki/Anecortave%20acetate
Anecortave (rINN) is a novel angiogenesis inhibitor used in the treatment of the exudative (wet) form of age-related macular degeneration. Although similar in chemical structure to the corticosteroid hydrocortisone acetate, it possesses no glucocorticoid activity. If it is approved, it will be marketed by Alcon as anecortave acetate for depot suspension under the trade name Retaane. No development has been reported since 2010. Potential applications In addition to treating wet-form age-related macular degeneration - aka. neovascular age-related macular degeneration, it has also been evaluated as a potential therapy for dry-form age related macular degeneration, as well as for reducing the intraocular pressure in eyes with ocular steroid injection-related glaucoma. Synthesis Anecortave can be synthesized from a 17-oxosteroid: In addition to being synthesized from a 17-oxosteroid, anecortave acetate can be derived from cortisol by reducing the 11-beta hydroxyl on cortisol to a double bond between carbons 9 and 11 and the addition of an acetate group to carbon 21. This results in a molecule with no glucocorticoid or mineralocorticoid activity. FDA application history Retaane (15 mg anecortave acetate depot suspension) which is manufactured by Alcon, Inc., was a fast track designated product which was also a drug in FDA’s Pilot Continuous Marketing Application (CMA) program which often enrolls drugs which are being brought to the market and have an indication for a significant unmet medical need. This allowed Retaane to file with the FDA using a “rolling” New Drug Application, which allows specific units, Chemistry, Manufacturing, and Controls (CMC), pre-clinical, and the clinical unit, of the NDA to be reviewed as they are completed instead of as one large document. This allows the FDA to review each unit within six months of the submission. Alcon first filed the CMC unit in 2003, the Pre-clinical and Clinical units in 2004. In 2005 Alcon, Inc. announced it received the approval letter for the NDA for Retaane. In 2007, Alcon got its letter of approval for Retaane’s indication to treat wet age-related macular degeneration (AMD), but final approval would require the completion of an additional clinical study. As a result, it supported the Anecortave Acetate Risk-Reduction Trial (AART). This study looked at the efficacy of Retaane to reduce the progression of the dry form of AMD to the wet form. It ended in 2008. In 2008, Alcon announced it was terminating the development of anecortave acetate for the prevention of developing sight-threatening choroidal neovascularization secondary to age-related macular degeneration. In 2009, Alcon announced the end of the drug's development for reducing intraocular pressure associated with glaucoma. Currently, anecortave acetate is not on the market or being made for therapeutic use. Delivery Retaane depot is delivered via posterior juxtascleral depot (PJD) that delivers the drug onto the sclera near the macula. This delivery method allows for a decreased risk of intraocular infection as well as decreased risk for detachment of the retina. Not only is the delivery method advantageous, but Retaane compared to other angiogenesis inhibitors used for similar indications, only has to be delivered once every six months compared to nine to twelve times a year. This allows for increased patient compliance. See also Fluoromedroxyprogesterone acetate References Tertiary alcohols Angiogenesis inhibitors Pregnanes Enones Diketones Esters
Anecortave acetate
Chemistry,Biology
812
19,357,000
https://en.wikipedia.org/wiki/Charles%20H.%20Lindsey
Charles Hodgson Lindsey was a British computer scientist, best known for his involvement with the programming language ALGOL 68. He was an editor of the Revised Report on ALGOL 68, and co-wrote a groundbreaking book on the language, An Informal Introduction to ALGOL 68, which was unusual because it was written so that it could be read horizontally (i.e., serially, in the normal manner) or vertically (i.e., starting with section 1.1, then 2.1, then 3.1, etc., before going back to section 1.2, then 2.2, and so on) depending on how a reader wanted to learn the language. He was responsible for the research implementation of ALGOL 68 for the experimental MU5 computer at Manchester University, and maintained an implementation of a subset named ALGOL 68S. He also wrote a complete history of ALGOL 68. He was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He was awarded the IFIP Silver Core Award in 1977. He was involved in the Internet Engineering Task Force (IETF) working group which produced the two RFCs providing the Usenet distributed discussion system specification, of which he was a co-author. He was also a member of the IETF DKIM Working Group, which produced a scheme for signing email headers. He was a member of the Computer Conservation Society, North West Branch, and part of the team restoring Douglas Hartree's Differential Analyser at the Manchester Museum of Science and Technology. References External links British computer scientists
Charles H. Lindsey
Technology
365
41,432
https://en.wikipedia.org/wiki/Numerical%20aperture
In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating the index of refraction in its definition, the NA has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective (and hence its light-gathering ability and resolution), and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it. General optics In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by NA = n sin θ, where n is the index of refraction of the medium in which the lens is working (1.00 for air, 1.33 for pure water, and typically 1.52 for immersion oil; see also list of refractive indices), and θ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. Because the index of refraction is included, the NA of a pencil of rays is an invariant as the pencil passes from one material to another through a flat surface. This is easily shown by rearranging Snell's law to find that n sin θ is constant across an interface. In air, the angular aperture of the lens is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, NA generally refers to the object-space numerical aperture unless otherwise noted. In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved (the resolution) is proportional to λ/(2 NA), where λ is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Assuming quality (diffraction-limited) optics, lenses with larger numerical apertures collect more light and will generally provide a brighter image, but will provide shallower depth of field. Numerical aperture is used to define the "pit size" in optical disc formats. Increasing the magnification and the numerical aperture of the objective reduces the working distance, i.e. the distance between front lens and specimen. Numerical aperture versus f-number Numerical aperture is not typically used in photography. Instead, the angular aperture of a lens (or an imaging mirror) is expressed by the f-number, written N or f/#, where N is the f-number given by the ratio of the focal length f to the diameter of the entrance pupil D: N = f/D. This ratio is related to the image-space numerical aperture when the lens is focused at infinity. For a lens focused at infinity, the image-space numerical aperture is NA_i = n sin θ = n sin[arctan(D/2f)] ≈ D/(2f), thus N ≈ 1/(2 NA_i), assuming normal use in air (n = 1). The approximation holds when the numerical aperture is small, but it turns out that for well-corrected optical systems such as camera lenses, a more detailed analysis shows that N is almost exactly equal to 1/(2 NA_i) even at large numerical apertures. As Rudolf Kingslake explains, "It is a common error to suppose that the ratio [D/2f] is actually equal to tan θ, and not sin θ ... The tangent would, of course, be correct if the principal planes were really plane.
However, the complete theory of the Abbe sine condition shows that if a lens is corrected for coma and spherical aberration, as all good photographic objectives must be, the second principal plane becomes a portion of a sphere of radius f centered about the focal point". In this sense, the traditional thin-lens definition and illustration of f-number is misleading, and defining it in terms of numerical aperture may be more meaningful. Working (effective) f-number The f-number describes the light-gathering ability of the lens in the case where the marginal rays on the object side are parallel to the axis of the lens. This case is commonly encountered in photography, where objects being photographed are often far from the camera. When the object is not distant from the lens, however, the image is no longer formed in the lens's focal plane, and the f-number no longer accurately describes the light-gathering ability of the lens or the image-side numerical aperture. In this case, the numerical aperture is related to what is sometimes called the "working f-number" or "effective f-number". The working f-number is defined by modifying the relation above, taking into account the magnification from object to image: 1/(2 NA_i) = N_w = (1 − m/P) N, where N_w is the working f-number, m is the lens's magnification for an object a particular distance away, P is the pupil magnification, and NA_i is defined in terms of the angle of the marginal ray as before. The magnification here is typically negative, and the pupil magnification is most often assumed to be 1 — as Allen R. Greenleaf explains, "Illuminance varies inversely as the square of the distance between the exit pupil of the lens and the position of the plate or film. Because the position of the exit pupil usually is unknown to the user of a lens, the rear conjugate focal distance is used instead; the resultant theoretical error so introduced is insignificant with most types of photographic lenses." In photography, the factor is sometimes written as (1 + m), where m represents the absolute value of the magnification; in either case, the correction factor is 1 or greater. The two equalities in the equation above are each taken by various authors as the definition of working f-number, as the cited sources illustrate. They are not necessarily both exact, but are often treated as if they are. Conversely, the object-side numerical aperture is related to the f-number by way of the magnification (which tends to zero for a distant object): NA_o = |m| NA_i ≈ |m|/(2 N_w). Laser physics In laser physics, numerical aperture is defined slightly differently. Laser beams spread out as they propagate, but slowly. Far away from the narrowest part of the beam, the spread is roughly linear with distance—the laser beam forms a cone of light in the "far field". The relation NA = n sin θ used to define the NA of the laser beam is the same as that used for an optical system, but θ is defined differently. Laser beams typically do not have sharp edges like the cone of light that passes through the aperture of a lens does. Instead, the irradiance falls off gradually away from the center of the beam. It is very common for the beam to have a Gaussian profile. Laser physicists typically choose to make θ the divergence of the beam: the far-field angle between the beam axis and the distance from the axis at which the irradiance drops to 1/e² times the on-axis irradiance.
The NA of a Gaussian laser beam is then related to its minimum spot size ("beam waist") by NA ≈ 2λ₀/(π D), where λ₀ is the vacuum wavelength of the light, and D is the diameter of the beam at its narrowest spot, measured between the 1/e² irradiance points ("full width at 1/e² maximum of the intensity"). This means that a laser beam that is focused to a small spot will spread out quickly as it moves away from the focus, while a large-diameter laser beam can stay roughly the same size over a very long distance. See also: Gaussian beam width. Fiber optics A multi-mode optical fiber will only propagate light that enters the fiber within a certain range of angles, known as the acceptance cone of the fiber. The half-angle of this cone is called the acceptance angle, θ_max. For step-index multimode fiber in a given medium, the acceptance angle is determined only by the indices of refraction of the core, the cladding, and the medium: n sin θ_max = √(n_core² − n_clad²), where n is the refractive index of the medium around the fiber, n_core is the refractive index of the fiber core, and n_clad is the refractive index of the cladding. While the core will accept light at higher angles, those rays will not totally reflect off the core–cladding interface, and so will not be transmitted to the other end of the fiber. The derivation of this formula is given below. When a light ray is incident from a medium of refractive index n onto the core of index n_core at the maximum acceptance angle, Snell's law at the medium–core interface gives n sin θ_max = n_core sin θ_r. From the geometry, the refracted ray meets the core–cladding interface at the critical angle, so sin θ_r = sin(90° − θ_c) = cos θ_c, where θ_c is the critical angle for total internal reflection. Substituting cos θ_c for sin θ_r in Snell's law we get: (n/n_core) sin θ_max = cos θ_c. By squaring both sides, (n²/n_core²) sin² θ_max = cos² θ_c = 1 − sin² θ_c = 1 − n_clad²/n_core². Solving, we find the formula stated above: n sin θ_max = √(n_core² − n_clad²). This has the same form as the numerical aperture in other optical systems, so it has become common to define the NA of any type of fiber to be NA = √(n_core² − n_clad²), where n_core is the refractive index along the central axis of the fiber. Note that when this definition is used, the connection between the numerical aperture and the acceptance angle of the fiber becomes only an approximation. In particular, "NA" defined this way is not relevant for single-mode fiber. One cannot define an acceptance angle for single-mode fiber based on the indices of refraction alone. The number of bound modes, the mode volume, is related to the normalized frequency and thus to the numerical aperture. In multimode fibers, the term equilibrium numerical aperture is sometimes used. This refers to the numerical aperture with respect to the extreme exit angle of a ray emerging from a fiber in which equilibrium mode distribution has been established. See also f-number Launch numerical aperture Guided ray, optic fibre context Acceptance angle (solar concentrator), further context References External links "Microscope Objectives: Numerical Aperture and Resolution" by Mortimer Abramowitz and Michael W. Davidson, Molecular Expressions: Optical Microscopy Primer (website), Florida State University, April 22, 2004. "Basic Concepts and Formulas in Microscopy: Numerical Aperture" by Michael W. Davidson, Nikon MicroscopyU (website). "Numerical aperture", Encyclopedia of Laser Physics and Technology (website). "Numerical Aperture and Resolution", UCLA Brain Research Institute Microscopy Core Facilities (website), 2007. Optics Fiber optics Microscopy Dimensionless numbers of physics
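As a quick numerical illustration of the relations above (the specific indices and angles are generic textbook-style values, not figures taken from this article):

```python
# Quick numerical checks of the NA relations (illustrative values only).
import math

# Oil-immersion microscope objective: n ≈ 1.52, half-angle ≈ 67 degrees.
na_objective = 1.52 * math.sin(math.radians(67))       # NA = n sin(theta)
print(f"objective NA ≈ {na_objective:.2f}")             # ≈ 1.40

# f/2 camera lens in air, focused at infinity.
N = 2.0
na_image = math.sin(math.atan(1 / (2 * N)))             # marginal-ray angle from thin-lens geometry
print(f"image-space NA ≈ {na_image:.3f} vs 1/(2N) = {1 / (2 * N):.3f}")

# Step-index multimode fiber with generic silica-like indices, in air.
n_core, n_clad = 1.462, 1.447
na_fiber = math.sqrt(n_core**2 - n_clad**2)             # NA = sqrt(n_core^2 - n_clad^2)
theta_max = math.degrees(math.asin(na_fiber / 1.0))     # n_medium = 1
print(f"fiber NA ≈ {na_fiber:.3f}, acceptance half-angle ≈ {theta_max:.1f}°")
```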
Numerical aperture
Physics,Chemistry
2,098
679,582
https://en.wikipedia.org/wiki/Hypocenter
A hypocenter or hypocentre (), also called ground zero or surface zero, is the point on the Earth's surface directly below a nuclear explosion, meteor air burst, or other mid-air explosion. In seismology, the hypocenter of an earthquake is its point of origin below ground; a synonym is the focus of an earthquake. Generally, the terms ground zero and surface zero are also used in relation to epidemics, and other disasters to mark the point of the most severe damage or destruction. The term is distinguished from the term zero point in that the latter can also be located in the air, underground, or underwater. Trinity, Hiroshima and Nagasaki The term "ground zero" originally referred to the hypocenter of the Trinity test in Jornada del Muerto desert near Socorro, New Mexico, and the atomic bombings of Hiroshima and Nagasaki in Japan. The United States Strategic Bombing Survey of the atomic attacks, released in June 1946, used the term liberally, defining it as: William Laurence, an embedded reporter with the Manhattan Project, reported that "Zero" was "the code name given to the spot chosen for the [Trinity] test" in 1945. The Oxford English Dictionary, citing the use of the term in a 1946 New York Times report on the destroyed city of Hiroshima, defines ground zero as "that part of the ground situated immediately under an exploding bomb, especially an atomic one." The term was military slang, used at the Trinity site where the weapon tower for the first nuclear weapon was at "point zero", and moved into general use very shortly after the end of World War II. At Hiroshima, the hypocenter of the attack was Shima Hospital, approximately away from the intended aiming point at Aioi Bridge. The Pentagon During the Cold War, the Pentagon (headquarters of United States Department of Defense in Arlington County, Virginia) was an assured target in the event of nuclear war. The open space in the center of the Pentagon became known informally as ground zero. A snack bar that used to be located at the center of this open space was nicknamed "Cafe Ground Zero". World Trade Center During the September 11 attacks in 2001, two aircraft were hijacked by 10 al-Qaeda terrorists and were flown into the Twin Towers of the World Trade Center in New York City, causing massive damage and starting fires that caused the weakened 110-story skyscrapers to collapse. The destroyed World Trade Center site soon became known as "ground zero". Rescue workers also used the term "The Big Momma!", referring to the pile of rubble that was left after the buildings collapsed. Even after the site was cleaned up and construction on the new One World Trade Center and the National September 11 Memorial & Museum were well under way, the term was still frequently used to refer to the site, as when opponents of the Park51 project that was to be located two blocks away from the site labeled it the "Ground Zero mosque". In advance of the 10th anniversary of the attacks, New York City mayor Michael Bloomberg urged that the "ground zero" moniker be retired, saying, "…the time has come to call those what they are: The World Trade Center and the National September 11th Memorial and Museum." Meteor air bursts The hypocenter of a meteor air burst, an asteroid or comet that explodes in the atmosphere rather than strike the surface, is the closest point on the surface to the explosion. The Tunguska event occurred in Siberia in 1908 and flattened an estimated 80 million trees over an area of of forest. 
The trees at the hypocenter of the blast were left standing, but all their limbs had been blown off by the shockwave. The 2013 Chelyabinsk meteor's hypocenter in Russia was more populated than that of Tunguska, resulting in civil damage and injury, mostly from flying glass shards from broken windows. Earthquakes An earthquake's hypocenter or focus is the position where the strain energy stored in the rock is first released, marking the point where the fault begins to rupture. This occurs directly beneath the epicenter, at a distance known as the hypocentral depth or focal depth. The focal depth can be calculated from measurements based on seismic wave phenomena. As with all wave phenomena in physics, there is uncertainty in such measurements that grows with the wavelength so the focal depth of the source of these long-wavelength (low frequency) waves is difficult to determine exactly. Very strong earthquakes radiate a large fraction of their released energy in seismic waves with very long wavelengths and therefore a stronger earthquake involves the release of energy from a larger mass of rock. Computing the hypocenters of foreshocks, main shock, and aftershocks of earthquakes allows the three-dimensional plotting of the fault along which movement is occurring. The expanding wavefront from the earthquake's rupture propagates at a speed of several kilometers per second; this seismic wave is what is measured at various surface points in order to geometrically determine an initial guess as to the hypocenter. The wave reaches each station based upon how far away it was from the hypocenter. A number of things need to be taken into account, most importantly variations in the waves speed based upon the materials that it is passing through. With adjustments for velocity changes, the initial estimate of the hypocenter is made, then a series of linear equations is set up, one for each station. The equations express the difference between the observed arrival times and those calculated from the initial estimated hypocenter. These equations are solved by the method of least squares which minimizes the sum of the squares of the differences between the observed and calculated arrival times, and a new estimated hypocenter is computed. The system iterates until the location is pinpointed within the margin of error for the velocity computations. See also List of meteor air bursts List of nuclear weapon explosion sites Tenet, a 2020 film that includes a sub-surface nuclear "hypocenter" in its storyline References External links Seismology Geometric centers Atomic bombings of Hiroshima and Nagasaki Cold War terminology Metaphors referring to objects Metaphors referring to places Metaphors referring to war and violence Military slang and jargon September 11 attacks
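The iterative least-squares location procedure described above can be sketched as a simplified, Geiger-style calculation. The toy implementation below assumes a uniform P-wave velocity and a known origin time; the station coordinates, velocity, and "true" hypocenter are made-up illustrative values, and real locators additionally solve for origin time and use layered or three-dimensional velocity models.

```python
# Toy hypocenter location by iterative least squares (Geiger-style),
# assuming a uniform velocity and known origin time. All numbers are
# illustrative, not real seismic data.
import numpy as np

V = 6.0  # assumed uniform P-wave speed, km/s
stations = np.array(
    [[0, 0, 0], [40, 5, 0], [10, 50, 0], [-30, 20, 0], [25, -35, 0]], float
)
true_hypo = np.array([12.0, 8.0, 15.0])  # x, y, depth in km


def travel_times(hypo):
    return np.linalg.norm(stations - hypo, axis=1) / V


observed = travel_times(true_hypo)       # stand-in for picked arrival times

hypo = np.array([0.0, 0.0, 5.0])         # initial guess
for _ in range(10):
    dist = np.linalg.norm(stations - hypo, axis=1)
    residuals = observed - dist / V      # observed minus calculated times
    # Partial derivatives of travel time with respect to x, y, z at the guess.
    G = -(stations - hypo) / (V * dist[:, None])
    # Least-squares correction minimizing the sum of squared residuals.
    delta, *_ = np.linalg.lstsq(G, residuals, rcond=None)
    hypo += delta
    if np.linalg.norm(delta) < 1e-6:
        break

print("estimated hypocenter:", np.round(hypo, 3))  # converges toward (12, 8, 15)
```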
Hypocenter
Physics,Mathematics
1,285
24,529,515
https://en.wikipedia.org/wiki/List%20of%20optical%20disc%20manufacturers
This aims to be a complete list of optical disc manufacturers, including pre-recorded/pressed/replicated, recordable/write-once and re-writable discs. This list is not necessarily complete or up to date. This list covers only manufacturers, not brands. For example, many Maxell DVDs are made by Ritek or CMC Magnetics. Many companies use equipment from Singulus Technologies. This list includes CD, DVD, and Blu-ray recordable and rewritable media manufacturers (like Ritek) as well as disc replicators (companies that replicate discs with pre-recorded content, like Sony DADC). A Anwell Technologies (defunct in 2019) B BeAll Bluray Korea C CD Video Manufacturing Inc. CDA, Inc Cinram (went bankrupt due to shrinking demand, purchased by Technicolor SA) CMC Magnetics D Discmakers Daxon Technology Discovery Systems (defunct) E EMI (sold to Cinram) F Fujifilm FAS Development Corp. (stopped in 2017) G Gigastorage Corporation H Hitachi Maxell (Maxell, stopped) I Imation (stopped) Infodisc (stopped) Infosmart Technologies L Lead Data Inc. M Memory-Tech Micro-works Technology (defunct) Mitsui Chemicals (MAM-A) Moser Baer (defunct since 2018 due to bankruptcy; its assets have been liquidated) Mitsubishi Kagaku Media/Mitsubishi Chemical Corporation / Verbatim (sold in 2019 to CMC Magnetics) N New Cyberian O Optodisc Ltd. P PSI Media and Fulfillment Services Pandisk Technologies Philips Plasmon Data Systems (defunct in the late 1990s) Prodisc Pressing-Media www.pressing-media.com PrimeDisc Princo Corp (seems to have stopped; as of 2020 optical discs no longer appear on its home page) Panasonic (Matsushita) (made DVD-RAM, stopped due to shrinking demand; made Blu-ray discs for recording until Feb 2023) R Ricoh Ritek River Pro Audio S SKC Sky Media Manufacturing SA Sonopress Sony Sony DADC Summit Creations Pte. Ltd. T JVC / Taiyo Yuden (stopped due to shrinking demand, assets sold to CMC Magnetics) TDK Corporation (former) Technicolor SA Toshiba-EMI (sold to EMI Music Japan in 2006) Traxdata U Umedisc Group V Verbatim Vivastar (defunct) W WEA Manufacturing (sold to Cinram) References See also Blu-ray Disc authoring Blu-ray Disc Blu-ray Disc Association Blu-ray Disc recordable Blu-ray Region Code CBHD Based on HD DVD format. Comparison of high definition optical disc formats Digital rights management HD DVD HD NVD High definition optical disc format war Optical disc PlayStation 3 Blu-ray Disc DVD Optical disc Recordable Computing-related lists Technology-related lists Optical computer storage media
List of optical disc manufacturers
Technology
638
1,461,070
https://en.wikipedia.org/wiki/SIGHUP
On POSIX-compliant platforms, SIGHUP ("signal hang up") is a signal sent to a process when its controlling terminal is closed. It was originally designed to notify the process of a serial line drop. SIGHUP is a symbolic constant defined in the header file signal.h. History Access to computer systems for many years consisted of connecting a terminal to a mainframe system via a serial line and the RS-232 protocol. When a system of software interrupts, called signals, was being developed, one of those signals was designated for use on hangup. SIGHUP would be sent to programs when the serial line was dropped, often because the connected user terminated the connection by hanging up the modem. The system would detect the line was dropped via the lost Data Carrier Detect (DCD) signal. Signals have always been a convenient method of inter-process communication (IPC), but in early implementations there were no user-definable signals (such as the later additions of SIGUSR1 and SIGUSR2) that programs could intercept and interpret for their own purposes. For this reason, applications that did not require a controlling terminal, such as daemons, would re-purpose SIGHUP as a signal to re-read configuration files or reinitialize. This convention survives to this day in packages such as Apache and Sendmail. Modern usage With the decline of access via serial line, the meaning of SIGHUP has changed somewhat on modern systems, often meaning that a controlling pseudo or virtual terminal has been closed. If a command is executed inside a terminal window and the terminal window is closed while the command process is still running, it receives SIGHUP. If the process receiving SIGHUP is a Unix shell, then as part of job control it will often intercept the signal and ensure that all stopped processes are continued before sending the signal to child processes (more precisely, process groups, represented internally by the shell as a "job"), which by default terminates them. This can be circumvented in two ways. Firstly, the Single UNIX Specification describes a shell utility called nohup, which can be used as a wrapper to start a program and make it ignore SIGHUP by default. Secondly, child process groups can be "disowned" by invoking disown with the job id, which removes the process group from the shell's job table (so they will not be sent SIGHUP), or (optionally) keeps them in the job table but prevents them from receiving SIGHUP on shell termination. Different shells also have other methods of controlling and managing SIGHUP, such as the disown facility of ksh. The documentation of most modern Linux distributions specifies using kill -HUP <processID> to send the SIGHUP signal. Daemon programs sometimes use SIGHUP as a signal to restart themselves, the most common reason for this being to re-read a configuration file that has been changed. Details Symbolic signal names are used because signal numbers can vary across platforms, but XSI-conformant systems allow the numeric constant 1 to be used to indicate a SIGHUP, which the vast majority of systems in fact use. SIGHUP can be handled. That is, programmers can define the action they want to occur upon receiving a SIGHUP, such as calling a function, ignoring it, or restoring the default action. The default action on POSIX-compliant systems is an abnormal termination. See also Unix signal RS-232 References Unix signals
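As a concrete illustration of the daemon convention described above—re-purposing SIGHUP to re-read a configuration file—the following is a minimal Python sketch. The file name config.conf and the key=value format are placeholders invented for the example, not part of any real daemon.

```python
import signal
import time

CONFIG_PATH = "config.conf"   # hypothetical configuration file
config = {}

def load_config():
    """Re-read the configuration file into a dict of key=value lines."""
    global config
    try:
        with open(CONFIG_PATH) as f:
            config = dict(line.strip().split("=", 1)
                          for line in f if "=" in line)
        print("configuration reloaded:", config)
    except FileNotFoundError:
        print("no configuration file found; keeping current settings")

def handle_sighup(signum, frame):
    # Re-purpose SIGHUP as a "reload" request, as daemons such as Apache do.
    load_config()

signal.signal(signal.SIGHUP, handle_sighup)   # install the handler
load_config()

while True:                                   # stand-in for the daemon's main work loop
    time.sleep(60)
```

Sending kill -HUP <processID> to this process from another terminal triggers a reload, mirroring the usage noted above; closing the controlling terminal would deliver the same signal.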
SIGHUP
Technology
715
4,148,957
https://en.wikipedia.org/wiki/Weapons-grade%20nuclear%20material
Weapons-grade nuclear material is any fissionable nuclear material that is pure enough to make a nuclear weapon and has properties that make it particularly suitable for nuclear weapons use. Plutonium and uranium in grades normally used in nuclear weapons are the most common examples. (These nuclear materials have other categorizations based on their purity.) Only fissile isotopes of certain elements have the potential for use in nuclear weapons. For such use, the concentration of fissile isotopes uranium-235 and plutonium-239 in the element used must be sufficiently high. Uranium from natural sources is enriched by isotope separation, and plutonium is produced in a suitable nuclear reactor. Experiments have been conducted with uranium-233 (the fissile material at the heart of the thorium fuel cycle). Neptunium-237 and some isotopes of americium might be usable, but it is not clear that this has ever been implemented. The latter substances are part of the minor actinides in spent nuclear fuel. Critical mass Any weapons-grade nuclear material must have a critical mass that is small enough to justify its use in a weapon. The critical mass for any material is the smallest amount needed for a sustained nuclear chain reaction. Moreover, different isotopes have different critical masses, and the critical mass for many radioactive isotopes is infinite, because the mode of decay of one atom cannot induce similar decay of more than one neighboring atom. For example, the critical mass of uranium-238 is infinite, while the critical masses of uranium-233 and uranium-235 are finite. The critical mass for any isotope is influenced by any impurities and the physical shape of the material. The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the accompanying table. Most information on bare sphere masses is classified, but some documents have been declassified. Countries that have produced weapons-grade nuclear material At least ten countries have produced weapons-grade nuclear material: Five recognized "nuclear-weapon states" under the terms of the Nuclear Non-Proliferation Treaty (NPT): the United States (first nuclear weapon tested and two bombs used as weapons in 1945), Russia (first weapon tested in 1949), the United Kingdom (1952), France (1960), and China (1964) Three other declared nuclear states that are not signatories of the NPT: India (not a signatory, weapon tested in 1974), Pakistan (not a signatory, weapon tested in 1998), and North Korea (withdrew from the NPT in 2003, weapon tested in 2006) Israel, which is widely known to have developed nuclear weapons (likely first tested in the 1960s or 1970s) but has not openly declared its capability South Africa, which also had enrichment capabilities and developed nuclear weapons (possibly tested in 1979), but disassembled its arsenal and joined the NPT in 1991 Weapons-grade uranium Natural uranium is made weapons-grade through isotopic enrichment. Initially only about 0.7% of it is fissile U-235, with the rest being almost entirely uranium-238 (U-238). They are separated by their differing masses. Highly enriched uranium is considered weapons-grade when it has been enriched to about 90% U-235. U-233 is produced from thorium-232 by neutron capture. The U-233 produced thus does not require enrichment and can be relatively easily chemically separated from residual Th-232. 
It is therefore regulated as a special nuclear material only by the total amount present. U-233 may be intentionally down-blended with U-238 to remove proliferation concerns. While U-233 would thus seem ideal for weaponization, a significant obstacle to that goal is the co-production of trace amounts of uranium-232 due to side-reactions. U-232 hazards, a result of its highly radioactive decay products such as thallium-208, are significant even at 5 parts per million. Implosion nuclear weapons require U-232 levels below 50 PPM (above which the U-233 is considered "low grade"; cf. "Standard weapon grade plutonium requires a Pu-240 content of no more than 6.5%." which is 65,000 PPM, and the analogous Pu-238 was produced in levels of 0.5% (5000 PPM) or less). Gun-type fission weapons would require low U-232 levels and low levels of light impurities on the order of 1 PPM. Weapons-grade plutonium Pu-239 is produced artificially in nuclear reactors when a neutron is absorbed by U-238, forming U-239, which then decays in a rapid two-step process into Pu-239. It can then be separated from the uranium in a nuclear reprocessing plant. Weapons-grade plutonium is defined as being predominantly Pu-239, typically about 93% Pu-239. Pu-240 is produced when Pu-239 absorbs an additional neutron and fails to fission. Pu-240 and Pu-239 are not separated by reprocessing. Pu-240 has a high rate of spontaneous fission, which can cause a nuclear weapon to pre-detonate. This makes plutonium unsuitable for use in gun-type nuclear weapons. To reduce the concentration of Pu-240 in the plutonium produced, weapons program plutonium production reactors (e.g. B Reactor) irradiate the uranium for a far shorter time than is normal for a nuclear power reactor. More precisely, weapons-grade plutonium is obtained from uranium irradiated to a low burnup. This represents a fundamental difference between these two types of reactor. In a nuclear power station, high burnup is desirable. Power stations such as the obsolete British Magnox and French UNGG reactors, which were designed to produce either electricity or weapons material, were operated at low power levels with frequent fuel changes using online refuelling to produce weapons-grade plutonium. Such operation is not possible with the light water reactors most commonly used to produce electric power. In these the reactor must be shut down and the pressure vessel disassembled to gain access to the irradiated fuel. Plutonium recovered from LWR spent fuel, while not weapons grade, can be used to produce nuclear weapons at all levels of sophistication, though in simple designs it may produce only a fizzle yield. Weapons made with reactor-grade plutonium would require special cooling to keep them in storage and ready for use. A 1962 test at the U.S. Nevada National Security Site (then known as the Nevada Proving Grounds) used non-weapons-grade plutonium produced in a Magnox reactor in the United Kingdom. The plutonium used was provided to the United States under the 1958 US–UK Mutual Defence Agreement. Its isotopic composition has not been disclosed, other than the description reactor grade, and it has not been disclosed which definition was used in describing the material this way. The plutonium was apparently sourced from the Magnox reactors at Calder Hall or Chapelcross. The content of Pu-239 in material used for the 1962 test was not disclosed, but has been inferred to have been at least 85%, much higher than typical spent fuel from currently operating reactors. 
Occasionally, low-burnup spent fuel has been produced by a commercial LWR when an incident such as a fuel cladding failure has required early refuelling. If the period of irradiation has been sufficiently short, this spent fuel could be reprocessed to produce weapons grade plutonium. References External links Reactor-Grade and Weapons-Grade Plutonium in Nuclear Explosives, Canadian Coalition for Nuclear Responsibility Nuclear weapons and power-reactor plutonium , Amory B. Lovins, February 28, 1980, Nature, Vol. 283, No. 5750, pp. 817–823 Nuclear weapons Nuclear materials Plutonium Uranium
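The grade thresholds quoted in this article—roughly 90% U-235 for weapons-grade uranium, no more than about 6.5% Pu-240 for standard weapons-grade plutonium, and U-232 below 50 ppm for implosion-usable U-233—can be collected into a small illustrative check. This is a toy classification that treats the figures cited above as hard cutoffs; it is not a regulatory or safeguards definition.

```python
# Illustrative thresholds taken from the figures quoted in this article;
# real material categorization is considerably more involved.
U235_WEAPONS_GRADE = 0.90        # ~90% U-235 for weapons-grade HEU
PU240_MAX_FRACTION = 0.065       # <= ~6.5% Pu-240 for weapons-grade plutonium
U232_MAX_PPM = 50                # < 50 ppm U-232 for implosion-usable U-233

def percent_to_ppm(percent: float) -> float:
    """1% corresponds to 10,000 parts per million."""
    return percent * 10_000

def classify_uranium(u235_fraction: float) -> str:
    return "weapons-grade HEU" if u235_fraction >= U235_WEAPONS_GRADE else "not weapons-grade"

def classify_plutonium(pu240_fraction: float) -> str:
    return "weapons-grade" if pu240_fraction <= PU240_MAX_FRACTION else "reactor/fuel grade"

def classify_u233(u232_ppm: float) -> str:
    return "implosion-usable" if u232_ppm < U232_MAX_PPM else "low grade (high U-232)"

print(classify_uranium(0.93))        # -> weapons-grade HEU
print(classify_plutonium(0.07))      # -> reactor/fuel grade (7% Pu-240)
print(percent_to_ppm(6.5))           # -> 65000.0, matching the 65,000 ppm noted above
```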
Weapons-grade nuclear material
Physics
1,620
3,147,902
https://en.wikipedia.org/wiki/Deer%20horn
A deer horn, or deer whistle, is a whistle mounted on automobiles intended to help prevent collisions with deer. Air moving through the device produces sound (ultrasound in some models), intended to warn deer of a vehicle's approach. Deer are highly unpredictable, skittish animals whose normal reaction to an unfamiliar sound is to stop, look and listen to determine if they are being threatened. If the whistle gives them advance warning, they may freeze on the roadside, rather than running across the road into the path of the vehicle. In Australia, a different product, with electrically powered speakers (Shu Roo), is used to decrease collisions with kangaroos. Researchers with the University of Wisconsin–Madison measured three devices and a press report said they found these three devices produced "low-pitched and ultrasonic sounds at speeds of 30 to 70 miles per hour; however, researchers were unable to verify that deer responded to the sounds." Researchers with the Georgia Game and Fish Department have pointed out several reasons why ultrasound devices may not work as advertised: Some deer whistles do not emit any ultrasonic sound under the advertised operating conditions (typically when the vehicle exceeds 30 mph). Ultrasonic sound does not carry very well. It does not travel a long enough distance to provide adequate warning, and also is stopped by virtually any intervening object, so any curves in a road will block the sound. Little is known about the auditory limits of deer, but current knowledge indicates that deer hear approximately the same frequencies as humans, and thus if humans can't hear a sound, deer probably can't either. If deer could hear ultrasound, it is unknown if it would alarm them or induce a flight response. In addition to the Georgia and Wisconsin studies, a study by the Ohio State Police Department indicated the whistles are ineffective. The Department of Zoology at the University of Melbourne did independent testing, funded by the Royal Automobile Club of Victoria, the New South Wales Roads and Traffic Authority, National Roads and Motorists' Association Limited, and Transport South Australia. They bought one Shu Roo and tested it on a sedan, a 4x4, an 18-seat bus, and a cargo truck. The Shu Roo could be heard by their test equipment above the sound of wind and vehicle engines only up to a limited distance. Wind speeds on test days varied. They also compared road collisions among fleet vehicles with and without Shu Roos, especially targeting bus and truck companies. They used pre-existing installations of Shu Roos at the participating companies, not random assignment. Vehicles with and without Shu Roos averaged the same rate of kangaroo collisions per distance travelled. They excluded two vehicles with Shu Roos which hit 39 and 25 kangaroos respectively, each in one night. The collisions of non-Shu Roo vehicles were concentrated in fewer vehicles than the collisions of Shu Roo vehicles, which may reflect routes or drivers. Fleet managers reported some Shu Roos did not stay on. It was hard to recruit professional drivers willing to report their mileage to the survey. An alternative in future studies would be to enlist a car hire company, since such companies already track mileage, could randomly assign devices to cars, and would benefit from accurate results. References Further reading Vehicle safety technologies Sound production Human–animal interaction
Deer horn
Biology
658
798,654
https://en.wikipedia.org/wiki/Jeremy%20Elson
Jeremy Elson (born 1974) is a computer researcher specializing in wireless sensor networks. He is also the creator of the popular CircleMUD. Elson received his Ph.D. from UCLA in 2003. He previously worked at Microsoft Research in the Distributed Systems and Security group within Systems and Networking Research. Recent projects include MapCruncher and ASIRRA. In 2015, he joined Google. Elson is also known for his work on calculating whether a Powerball ticket can ever be profitable. References External links Jeremy Elson's home page Mega Millions and Powerball: Can you ever expect a ticket to be profitable? 1974 births Living people MUD developers University of California, Los Angeles alumni
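A sketch of the kind of expected-value argument referred to above: the ticket price, prize amounts, and odds below are hypothetical placeholders (not Elson's figures or the current Powerball rules), and a real analysis would also account for jackpot sharing, taxes, and annuity versus lump-sum payouts.

```python
# Hypothetical prize tiers: (prize in dollars, probability of winning it).
# These numbers are placeholders for illustration, not official Powerball odds.
TICKET_PRICE = 2.00
prize_table = [
    (300_000_000, 1 / 292_000_000),   # jackpot (assumed value)
    (1_000_000,   1 / 11_700_000),
    (50_000,      1 / 913_000),
    (100,         1 / 92),
    (4,           1 / 38),
]

def expected_value(prizes) -> float:
    """Expected winnings per ticket minus the ticket price."""
    return sum(prize * p for prize, p in prizes) - TICKET_PRICE

print(f"expected value per ticket: ${expected_value(prize_table):+.2f}")
# A ticket is "profitable" in expectation only when this value is positive,
# which in this toy model requires an unusually large jackpot.
```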
Jeremy Elson
Technology
143
1,402,262
https://en.wikipedia.org/wiki/Baby%20colic
Baby colic, also known as infantile colic, is defined as episodes of crying for more than three hours a day, for more than three days a week, for three weeks in an otherwise healthy child. Often crying occurs in the evening. It typically does not result in long-term problems. The crying can result in frustration of the parents, depression following delivery, excess visits to the doctor, and child abuse. The cause of colic is unknown. Some believe it is due to gastrointestinal discomfort like intestinal cramping. Diagnosis requires ruling out other possible causes. Concerning findings include a fever, poor activity, or a swollen abdomen. Fewer than 5% of infants with excess crying have an underlying organic disease. Treatment is generally conservative, with little to no role for either medications or alternative therapies. Extra support for the parents may be useful. Tentative evidence supports certain probiotics for the baby and a low-allergen diet by the mother in those who are breastfed. Hydrolyzed formula may be useful in those who are bottlefed. Colic affects 10–40% of babies. Equally common in bottle and breast-fed infants, it begins during the second week of life, peaks at 6 weeks, and resolves between 12 and 16 weeks. It rarely lasts up to one year of age. It occurs at the same rate in boys and in girls. The first detailed medical description of the problem was published in 1954. Signs and symptoms Colic is defined as episodes of crying for more than three hours a day, for more than three days a week for at least a three-week duration in an otherwise healthy child. It is most common around six weeks of age and gets better by six months of age. By contrast, infants normally cry an average of just over two hours a day, with the duration peaking at six weeks. With colic, periods of crying most commonly happen in the evening and for no obvious reason. Associated symptoms may include legs pulled up to the stomach, a flushed face, clenched hands, and a wrinkled brow. The cry is often high pitched (piercing). Effect on the family An infant with colic may affect family stability and be a cause of short-term anxiety or depression in the father and mother. It may also contribute to exhaustion and stress in the parents. Persistent infant crying has been associated with severe marital discord, postpartum depression, early termination of breastfeeding, frequent visits to doctors, a quadrupling of laboratory tests, and prescription of medication for acid reflux. Babies with colic may be exposed to abuse, especially shaken baby syndrome. Parent training programs for managing infantile colic may result in a reduction in crying time. Causes The cause of colic is generally unknown. Fewer than 5% of infants who cry excessively turn out to have an underlying organic disease, such as constipation, gastroesophageal reflux disease, lactose intolerance, anal fissures, subdural hematomas, or infantile migraine. Babies fed cow's milk have been shown to develop antibody responses to the bovine protein, and some studies have shown an association between consumption of cow's milk and infant colic. Studies performed showed conflicting evidence about the role of cow's milk allergy. While previously believed to be related to gas pains, this does not appear to be the case. Another theory holds that colic is related to hyperperistalsis of the digestive tube (increased level of activity of contraction and relaxation). The evidence that the use of anticholinergic agents improve colic symptoms supports this hypothesis. 
Psychological and social factors have been proposed as a cause, but there is no evidence to support this. Studies performed do not support the theory that maternal (or paternal) personality or anxiety causes colic, nor that it is a consequence of a difficult temperament of the baby, but families with colicky children may eventually develop anxiety, fatigue and problems with family functioning as a result. There is some evidence that cigarette smoke may increase the risk. It seems unrelated to breast or bottle feeding, with rates similar in both groups. Reflux does not appear to be related to colic. Diagnosis Colic is diagnosed after other potential causes of crying are excluded. This can typically be done via a history and physical exam, and in most cases tests such as X-rays or blood tests are not needed. Babies who cry may simply be hungry, uncomfortable, or ill. Less than 10% of babies who would meet the definition of colic based on the amount they cry have an identifiable underlying disease. Causes for concern include an elevated temperature, a history of breathing problems, or a child who is not appropriately gaining weight. Indications that further investigations may be needed include: Vomiting (vomit that is green or yellow, bloody, or occurring more than five times a day) Change in stool (constipation or diarrhea, especially with blood or mucus) Abnormal temperature (a rectal temperature that is abnormally low or high) Irritability (crying all day with few calm periods in between) Lethargy (excess sleepiness, lack of smiles or interested gaze, weak sucking lasting over six hours) Poor weight gain (gaining less than 15 grams a day) Problems to consider when the above are present include: Infections (e.g. ear infection, urine infection, meningitis, appendicitis) Intestinal pain (e.g. food allergy, acid reflux, constipation, intestinal blockage) Trouble breathing (e.g. from a cold, excessive dust, congenital nasal blockage, oversized tongue) Increased brain pressure (e.g. hematoma, hydrocephalus) Skin pain (e.g. a loose diaper pin, irritated rash, a hair wrapped around a toe) Mouth pain (e.g. yeast infection) Kidney pain (e.g. blockage of the urinary system) Eye pain (e.g. scratched cornea, glaucoma) Overdose (e.g. excessive Vitamin D, excessive sodium) Others (e.g. migraine headache, heart failure, hyperthyroidism) Persistently fussy babies with poor weight gain, vomiting more than five times a day, or other significant feeding problems should be evaluated for other illnesses (e.g. urinary infection, intestinal obstruction, acid reflux). Treatment Management of colic is generally conservative and involves the reassurance of parents. Calming measures may be used and include soothing motions, limiting stimulation, pacifier use, and carrying the baby around in a carrier, although it is not entirely clear if these actions have any effect beyond placebo. Swaddling does not appear to help. Medication No medications have been found to be both safe and effective. Simethicone is safe but ineffective, while dicyclomine works but is unsafe. Evidence does not support the use of cimetropium bromide, and there is little evidence for alternative medications or techniques. While medications to treat reflux are common, there is no evidence that they are useful. Diet Dietary changes by infants are generally not needed.
In mothers who are breastfeeding, a hypoallergenic diet by the mother—not eating milk and dairy products, eggs, wheat, and nuts—may improve matters, while elimination of only cow's milk does not seem to produce any improvement. In formula-fed infants, switching to a soy-based or hydrolyzed protein formula may help. Evidence of benefit is greater for hydrolyzed protein formula, with the benefit from soy-based formula being disputed. Both these formulas have greater cost and may not be as palatable. Supplementation with fiber has not been shown to have any benefit. A 2018 Cochrane review of 15 randomized controlled trials involving 1,121 infants was unable to recommend any dietary interventions. A 2019 review determined that probiotics were no more effective than placebo, although a reduction in crying time was measured. Complementary and alternative medicine No clear beneficial effect from spinal manipulation or massage has been shown. Further, as there is no evidence of safety for cervical manipulation for baby colic, it is not advised. There is a case of a three-month-old dying following manipulation of the neck area. Little clinical evidence supports the efficacy of "gripe water" and caution in use is needed, especially in formulations that include alcohol or sugar. Evidence does not support lactase supplementation. The use of probiotics, specifically Lactobacillus reuteri, decreases crying time at three weeks by 46 minutes in breastfed babies but has unclear effects in those who are formula fed. Fennel also appears effective. Prognosis Infants who are colicky do just as well as their non-colicky peers with respect to temperament at one year of age. Epidemiology Colic affects 10–40% of children, occurring at the same rate in boys and in girls. History The word "colic" is derived from the ancient Greek word for intestine (sharing the same root as the word "colon"). It has been an age-old practice to drug crying infants. During the second century AD, the Greek physician Galen prescribed opium to calm fussy babies, and during the Middle Ages in Europe, mothers and wet nurses smeared their nipples with opium lotions before each feeding. Alcohol was also commonly given to infants. References External links Ailments of unknown cause Crying Pediatrics
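The diagnostic "rule of threes" stated at the start of this article—crying for more than three hours a day, more than three days a week, for at least three weeks in an otherwise healthy infant—can be written as a simple check. This is only an illustration of the definition; the function name and inputs are invented for the example, and it is not a clinical tool.

```python
def meets_rule_of_threes(hours_per_day, days_per_week, weeks, otherwise_healthy=True):
    """Return True if a crying pattern satisfies the colic 'rule of threes'
    used in this article: more than 3 h/day, more than 3 days/week,
    for at least 3 weeks, in an otherwise healthy infant."""
    return (otherwise_healthy
            and hours_per_day > 3
            and days_per_week > 3
            and weeks >= 3)

print(meets_rule_of_threes(hours_per_day=4, days_per_week=5, weeks=3))   # True
print(meets_rule_of_threes(hours_per_day=2, days_per_week=6, weeks=4))   # False
```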
Baby colic
Biology
1,942
32,473
https://en.wikipedia.org/wiki/Vaccination
Vaccination is the administration of a vaccine to help the immune system develop immunity to a disease. Vaccines contain a microorganism or virus in a weakened, live or killed state, or proteins or toxins from the organism. In stimulating the body's adaptive immunity, they help prevent sickness from an infectious disease. When a sufficiently large percentage of a population has been vaccinated, herd immunity results. Herd immunity protects those who may be immunocompromised and cannot get a vaccine because even a weakened version would harm them. The effectiveness of vaccination has been widely studied and verified. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the elimination of diseases such as polio and tetanus from much of the world. According to the World Health Organization (WHO), vaccination prevents 3.5–5 million deaths per year. A WHO-funded study published in The Lancet estimates that, during the 50-year period starting in 1974, vaccination prevented 154 million deaths, including 146 million among children under age 5. However, some diseases have seen rising cases due to relatively low vaccination rates attributable partly to vaccine hesitancy. The first disease people tried to prevent by inoculation was most likely smallpox, with the first recorded use of variolation occurring in the 16th century in China. It was also the first disease for which a vaccine was produced. Although at least six people had used the same principles years earlier, the smallpox vaccine was invented in 1796 by English physician Edward Jenner. He was the first to publish evidence that it was effective and to provide advice on its production. Louis Pasteur furthered the concept through his work in microbiology. The immunization was called vaccination because it was derived from a virus affecting cows (Latin vacca, 'cow'). Smallpox was a contagious and deadly disease, causing the deaths of 20–60% of infected adults and over 80% of infected children. When smallpox was finally eradicated in 1979, it had already killed an estimated 300–500 million people in the 20th century. Vaccination and immunization have a similar meaning in everyday language. This is distinct from inoculation, which uses unweakened live pathogens. Vaccination efforts have been met with some reluctance on scientific, ethical, political, medical safety, and religious grounds, although no major religions oppose vaccination, and some consider it an obligation due to the potential to save lives. In the United States, people may receive compensation for alleged injuries under the National Vaccine Injury Compensation Program. Early success brought widespread acceptance, and mass vaccination campaigns have greatly reduced the incidence of many diseases in numerous geographic regions. The Centers for Disease Control and Prevention lists vaccination as one of the ten great public health achievements of the 20th century in the U.S. Mechanism of function Vaccines are a way of artificially activating the immune system to protect against infectious disease. The activation occurs through priming the immune system with an immunogen. Stimulating immune responses with an infectious agent is known as immunization. Vaccination includes various ways of administering immunogens. Most vaccines are administered before a patient has contracted a disease to help increase future protection.
However, some vaccines are administered after the patient already has contracted a disease. Vaccines given after exposure to smallpox are reported to offer some protection from disease or may reduce the severity of disease. The first rabies immunization was given by Louis Pasteur to a child after he was bitten by a rabid dog. Since its discovery, the rabies vaccine has been proven effective in preventing rabies in humans when administered several times over 14 days along with rabies immune globulin and wound care. Other examples include experimental AIDS, cancer and Alzheimer's disease vaccines. Such immunizations aim to trigger an immune response more rapidly and with less harm than natural infection. Most vaccines are given by injection as they are not absorbed reliably through the intestines. Live attenuated polio, rotavirus, some typhoid, and some cholera vaccines are given orally to produce immunity in the bowel. While vaccination provides a lasting effect, it usually takes several weeks to develop. This differs from passive immunity (the transfer of antibodies, such as in breastfeeding), which has immediate effect. A vaccine failure is when an organism contracts a disease in spite of being vaccinated against it. Primary vaccine failure occurs when an organism's immune system does not produce antibodies when first vaccinated. Vaccines can fail when several series are given and fail to produce an immune response. The term "vaccine failure" does not necessarily imply that the vaccine is defective. Most vaccine failures are simply due to individual variations in immune response. Vaccination versus inoculation The term "inoculation" is often used interchangeably with "vaccination." However, while related, the terms are not synonymous. Vaccination is treatment of an individual with an attenuated (i.e. less virulent) pathogen or other immunogen, whereas inoculation, also called variolation in the context of smallpox prophylaxis, is treatment with unattenuated variola virus taken from a pustule or scab of a smallpox patient and introduced into the superficial layers of the skin, commonly the upper arm. Variolation was often done 'arm-to-arm' or, less effectively, 'scab-to-arm', and often caused the patient to become infected with smallpox, which in some cases resulted in severe disease. Vaccinations began in the late 18th century with the work of Edward Jenner and the smallpox vaccine. Preventing disease versus preventing infection Some vaccines, like the smallpox vaccine, prevent infection. Their use results in sterilizing immunity and can help eradicate a disease if there is no animal reservoir. Other vaccines help to (temporarily) lower the chance of severe disease for individuals, without necessarily reducing the probability of becoming infected. Safety Vaccine development and approval Just like any medication or procedure, no vaccine can be 100% safe or effective for everyone because each person's body can react differently. While minor side effects, such as soreness or low-grade fever, are relatively common, serious side effects are very rare, occurring in about 1 out of every 100,000 vaccinations, and typically involve allergic reactions that can cause hives or difficulty breathing. However, vaccines are the safest they ever have been in history and each vaccine undergoes rigorous clinical trials to ensure its safety and efficacy before approval by authorities such as the US Food and Drug Administration (FDA).
Prior to human testing, vaccines are tested on cell cultures and the results modelled to assess how they will interact with the immune system. During the next round of testing, researchers study vaccines in animals, including mice, rabbits, guinea pigs, and monkeys. Vaccines that pass each of these stages of testing are then approved by the public health safety authority (FDA in the United States) to start a three-phase series of human testing, advancing to higher phases only if they are deemed safe and effective at the previous phase. The people in these trials participate voluntarily and are required to prove they understand the purpose of the study and the potential risks. During phase I trials, a vaccine is tested in a group of about 20 people with the primary goal of assessing the vaccine's safety. Phase II trials expand the testing to include 50 to several hundred people. During this stage, the vaccine's safety continues to be evaluated and researchers also gather data on the effectiveness and the ideal dose of the vaccine. Vaccines determined to be safe and efficacious then advance to phase III trials, which focuses on the efficacy of the vaccine in hundreds to thousands of volunteers. This phase can take several years to complete and researchers use this opportunity to compare the vaccinated volunteers to those who have not been vaccinated to highlight any true reactions to the vaccine that occur. If a vaccine passes all of the phases of testing, the manufacturer can then apply for license of the vaccine through the relevant regulatory authorities such as the FDA in US. Before regulatory authorities approve use in the general public, they extensively review the results of the clinical trials, safety tests, purity tests, and manufacturing methods and establish that the manufacturer itself is up to government standards in many other areas. After regulatory approval, the regulators continue to monitor the manufacturing protocols, batch purity, and the manufacturing facility itself. Additionally, vaccines also undergo phase IV trials, which monitor the safety and efficacy of vaccines in tens of thousands of people, or more, across many years. Side effects The Centers for Disease Control and Prevention (CDC) has compiled a list of vaccines and their possible side effects. The risk of side effects varies between vaccines. Notable vaccine investigations In 1976 in the United States, a mass swine flu vaccination programme was discontinued after 362 cases of Guillain–Barré syndrome among 45 million vaccinated people. William Foege of the CDC estimated that the incidence of Guillain-Barré was four times higher in vaccinated people than in those not receiving the swine flu vaccine. Dengvaxia, the only approved vaccine for Dengue fever, was found to increase the risk of hospitalization for Dengue fever by 1.58 times in children of 9 years or younger, resulting in the suspension of a mass vaccination program in the Philippines in 2017. Pandemrix a vaccine for the H1N1 pandemic of 2009 given to around 31 million people was found to have a higher level of adverse events than alternative vaccines resulting in legal action. In a response to the narcolepsy reports following immunization with Pandemrix, the CDC carried out a population-based study and found the FDA-approved 2009 H1N1 flu shots were not associated with an increased risk for the neurological disorder. Ingredients The ingredients of vaccines can vary greatly from one to the next and no two vaccines are the same. 
The CDC has compiled a list of vaccines and their ingredients that is readily accessible on its website. Aluminium Aluminium is an adjuvant ingredient in some vaccines. An adjuvant is a type of ingredient that is used to help the body's immune system create a stronger immune response after receiving the vaccination. Aluminium is in a salt form (the ionic version of an element) and is used in the following compounds: aluminium hydroxide, aluminium phosphate, and aluminium potassium sulfate. For a given element, the ion form has different properties from the elemental form. Although it is possible to have aluminium toxicity, aluminium salts have been used effectively and safely since the 1930s when they were first used with the diphtheria and tetanus vaccines. Although there is a small increase in the chance of having a local reaction to a vaccine with an aluminium salt (redness, soreness, and swelling), there is no increased risk of any serious reactions. Mercury Certain vaccines once contained a compound called thiomersal or thimerosal, which is an organic compound containing mercury. Organomercury is commonly found in two forms. The methylmercury cation (with one carbon atom) is found in mercury-contaminated fish and is the form that people might ingest in mercury-polluted areas (Minamata disease), whereas the ethylmercury cation (with two carbon atoms) is present in thimerosal, linked to thiosalicylate. Although both are organomercury compounds, they do not have the same chemical properties and interact with the human body differently. Ethylmercury is cleared from the body faster than methylmercury and is less likely to cause toxic effects. Thimerosal was used as a preservative to prevent the growth of bacteria and fungi in vials that contain more than one dose of a vaccine. This helps reduce the risk of potential infections or serious illness that could occur from contamination of a vaccine vial. Although there was a small increase in risk of injection site redness and swelling with vaccines containing thimerosal, there was no increased risk of serious harm or autism. Even though evidence supports the safety and efficacy of thimerosal in vaccines, thimerosal was removed from childhood vaccines in the United States in 2001 as a precaution. Monitoring The CDC Immunization Safety Office's initiatives include the Vaccine Adverse Event Reporting System (VAERS), the Vaccine Safety Datalink (VSD), and the Clinical Immunization Safety Assessment (CISA) Project. Other organizations involved in vaccine safety monitoring include the Food and Drug Administration (FDA) Center for Biologics Evaluation and Research (CBER), the Immunization Action Coalition (IAC), the Health Resources and Services Administration (HRSA), the Institute for Safe Medication Practices (ISMP), the National Institutes of Health (NIH), and the National Vaccine Program Office (NVPO). The administration protocols, efficacy, and adverse events of vaccines are monitored by organizations of the US federal government, including the CDC and FDA, and independent agencies are constantly re-evaluating vaccine practices. As with all medications, vaccine use is determined by public health research, surveillance, and reporting to governments and the public. Usage The World Health Organization (WHO) has estimated that vaccination prevents 3.5–5 million deaths per year, and up to 1.5 million children die each year due to diseases that could have been prevented by vaccination. They estimate that 29% of deaths of children under five years old in 2013 were vaccine preventable.
In other developing parts of the world, they are faced with the challenge of having a decreased availability of resources and vaccinations. Countries such as those in Sub-Saharan Africa cannot afford to provide the full range of childhood vaccinations. In 2024, a WHO/UNICEF report found “the number of children who received three doses of the vaccine against diphtheria, tetanus and pertussis (DTP) in 2023 – a key marker for global immunization coverage – stalled at 84% (108 million). However, the number of children who did not receive a single dose of the vaccine increased from 13.9 million in 2022 to 14.5 million in 2023. More than half of unvaccinated children live in the 31 countries with fragile, conflict-affected and vulnerable settings.” United States Vaccines have led to major decreases in the prevalence of infectious diseases in the United States. In 2007, studies regarding the effectiveness of vaccines on mortality or morbidity rates of those exposed to various diseases found almost 100% decreases in death rates, and about a 90% decrease in exposure rates. Vaccination adoption is reduced among some populations, such as those with low incomes, people with limited access to health care, and members of certain racial and ethnic minorities. Distrust of health-care providers, language barriers, and misleading or false information also contribute to lower adoption, as does anti-vaccine activism. Most government and private health insurance plans cover recommended vaccines at no charge when received by providers in their networks. The federal Vaccines for Children Program and the Social Security Act are among the major sources of financial support for vaccination of those in lower-income groups. The Centers for Disease Control and Prevention (CDC) publishes uniform national vaccine recommendations and immunization schedules, although state and local governments, as well as nongovernmental organizations, may have their own policies. History Before the first vaccinations, in the sense of using cowpox to inoculate people against smallpox, people have been inoculated in China and elsewhere, before being copied in the west, by using smallpox, called variolation. The earliest hints of the practice of variolation for smallpox in China come during the 10th century. The Chinese also practiced the oldest documented use of variolation, which comes from Wan Quan's (1499–1582) Douzhen Xinfa (痘疹心法) of 1549. They implemented a method of "nasal insufflation" administered by blowing powdered smallpox material, usually scabs, up the nostrils. Various insufflation techniques have been recorded throughout the sixteenth and seventeenth centuries within China. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700; one by Martin Lister who received a report by an employee of the East India Company stationed in China and another by Clopton Havers. In France, Voltaire reports that the Chinese have practiced variolation "these hundred years". In 1796, Edward Jenner, a doctor in Berkeley in Gloucestershire, England, tested a common theory that a person who had contracted cowpox would be immune from smallpox. To test the theory, he took cowpox vesicles from a milkmaid named Sarah Nelmes with which he infected an eight-year-old boy named James Phipps, and two months later he inoculated the boy with smallpox, and smallpox did not develop. In 1798, Jenner published An Inquiry Into the Causes and Effects of the Variolæ Vaccinæ which created widespread interest. 
He distinguished 'true' and 'spurious' cowpox (which did not give the desired effect) and developed an "arm-to-arm" method of propagating the vaccine from the vaccinated individual's pustule. Early attempts at confirmation were confounded by contamination with smallpox, but despite controversy within the medical profession and religious opposition to the use of animal material, by 1801 his report was translated into six languages and over 100,000 people were vaccinated. The term vaccination was coined in 1800 by the surgeon Richard Dunning in his text Some observations on vaccination. In 1802, the Scottish physician Helenus Scott vaccinated dozens of children in Bombay against smallpox using Jenner's cowpox vaccine. In the same year Scott penned a letter to the editor in the Bombay Courier, declaring that "We have it now in our power to communicate the benefits of this important discovery to every part of India, perhaps to China and the whole eastern world". Subsequently, vaccination became firmly established in British India. A vaccination campaign was started in the new British colony of Ceylon in 1803. By 1807 the British had vaccinated more than a million Indians and Sri Lankans against smallpox. Also in 1803 the Spanish Balmis Expedition launched the first transcontinental effort to vaccinate people against smallpox. Following a smallpox epidemic in 1816 the Kingdom of Nepal ordered smallpox vaccine and requested the English veterinarian William Moorcroft to help in launching a vaccination campaign. In the same year a law was passed in Sweden to require the vaccination of children against smallpox by the age of two. Prussia briefly introduced compulsory vaccination in 1810 and again in the 1920s, but decided against a compulsory vaccination law in 1829. A law on compulsory smallpox vaccination was introduced in the Province of Hanover in the 1820s. In 1826, in Kragujevac, future prince Mihailo of Serbia was the first person to be vaccinated against smallpox in the principality of Serbia. Following a smallpox epidemic in 1837 that caused 40,000 deaths, the British government initiated a concentrated vaccination policy, starting with the Vaccination Act of 1840, which provided for universal vaccination and prohibited variolation. The Vaccination Act 1853 introduced compulsory smallpox vaccination in England and Wales. The law followed a severe outbreak of smallpox in 1851 and 1852. It provided that the poor law authorities would continue to dispense vaccination to all free of charge, but that records were to be kept on vaccinated children by the network of births registrars. It was accepted at the time, that voluntary vaccination had not reduced smallpox mortality, but the Vaccination Act 1853 was so badly implemented that it had little impact on the number of children vaccinated in England and Wales. The U.S. Supreme Court upheld compulsory vaccination laws in the 1905 landmark case Jacobson v. Massachusetts, ruling that laws could require vaccination to protect the public from dangerous communicable diseases. However, in practice the U.S. had the lowest rate of vaccination among industrialized nations in the early 20th century. Compulsory vaccination laws began to be enforced in the U.S. after World War II. In 1959, the WHO called for the eradication of smallpox worldwide, as smallpox was still endemic in 33 countries. In the 1960s six to eight children died each year in the U.S. from vaccination-related complications. 
According to the WHO there were in 1966 about 100 million cases of smallpox worldwide, causing an estimated two million deaths. In the 1970s there was such a small risk of contracting smallpox that the U.S. Public Health Service recommended for routine smallpox vaccination to be ended. By 1974 the WHO smallpox vaccination program had confined smallpox to parts of Pakistan, India, Bangladesh, Ethiopia and Somalia. In 1977 the WHO recorded the last case of smallpox infection acquired outside a laboratory in Somalia. In 1980 the WHO officially declared the world free of smallpox. In 1974 the WHO adopted the goal of universal vaccination by 1990 to protect children against six preventable infectious diseases: measles, poliomyelitis, diphtheria, whooping cough, tetanus, and tuberculosis. In the 1980s only 20 to 40% of children in developing countries were vaccinated against these six diseases. In wealthy nations the number of measles cases had dropped dramatically after the introduction of the measles vaccine in 1963. WHO figures demonstrate that in many countries a decline in measles vaccination leads to a resurgence in measles cases. Measles are so contagious that public health experts believe a vaccination rate of 100% is needed to control the disease. Despite decades of mass vaccination polio remains a threat in India, Nigeria, Somalia, Niger, Afghanistan, Bangladesh and Indonesia. By 2006 global health experts concluded that the eradication of polio was only possible if the supply of drinking water and sanitation facilities were improved in slums. The deployment of a combined DPT vaccine against diphtheria, pertussis (whooping cough), and tetanus in the 1950s was considered a major advancement for public health. But in the course of vaccination campaigns that spanned decades, DPT vaccines became associated with large number of cases with side effects. Despite improved DPT vaccines coming onto the market in the 1990s DPT vaccines became the focus of anti-vaccination campaigns in wealthy nations. As immunization rates fell outbreaks of pertussis increased in many countries. In 2000, the Global Alliance for Vaccines and Immunization was established to strengthen routine vaccinations and introduce new and underused vaccines in countries with a per capita GDP of under US$1000. UNICEF has reported on the extent to which children missed out on vaccinations from 2020 onwards due to the COVID-19 pandemic. By summer 2023, the organisation described vaccination programs as getting "back on track". Vaccination policy To eliminate the risk of outbreaks of some diseases, at various times governments and other institutions have employed policies requiring vaccination for all people. For example, an 1853 law required universal vaccination against smallpox in England and Wales, with fines levied on people who did not comply. Common contemporary U.S. vaccination policies require that children receive recommended vaccinations before entering public school. Beginning with early vaccination in the nineteenth century, these policies were resisted by a variety of groups, collectively called antivaccinationists, who object on scientific, ethical, political, medical safety, religious, and other grounds. Common objections are that vaccinations do not work, that compulsory vaccination constitutes excessive government intervention in personal matters, or that the proposed vaccinations are not sufficiently safe. 
Many modern vaccination policies allow exemptions for people who have compromised immune systems, allergies to the components used in vaccinations or strongly held objections. In countries with limited financial resources, limited vaccination coverage results in greater morbidity and mortality due to infectious disease. More affluent countries are able to subsidize vaccinations for at-risk groups, resulting in more comprehensive and effective coverage. In Australia, for example, the Government subsidizes vaccinations for seniors and indigenous Australians. Public Health Law Research, an independent US based organization, reported in 2009 that there is insufficient evidence to assess the effectiveness of requiring vaccinations as a condition for specified jobs as a means of reducing incidence of specific diseases among particularly vulnerable populations; that there is sufficient evidence supporting the effectiveness of requiring vaccinations as a condition for attending child care facilities and schools; and that there is strong evidence supporting the effectiveness of standing orders, which allow healthcare workers without prescription authority to administer vaccine as a public health intervention. Fractional dose vaccination Fractional dose vaccination reduces the dose of a vaccine to allow more individuals to be vaccinated with a given vaccine stock, trading societal benefit for individual protection. Based on the nonlinearity properties of many vaccines, it is effective in poverty diseases and promises benefits in pandemic waves, e.g. in COVID-19, when vaccine supply is limited. Litigation Allegations of vaccine injuries in recent decades have appeared in litigation in the U.S. Some families have won substantial awards from sympathetic juries, even though most public health officials have said that the claims of injuries were unfounded. In response, several vaccine makers stopped production, which the US government believed could be a threat to public health, so laws were passed to shield manufacturers from liabilities stemming from vaccine injury claims. The safety and side effects of multiple vaccines have been tested to uphold the viability of vaccines as a barrier against disease. The influenza vaccine was tested in controlled trials and proven to have negligible side effects equal to that of a placebo. Some concerns from families might have arisen from social beliefs and norms that cause them to mistrust or refuse vaccinations, contributing to this discrepancy in side effects that were unfounded. Opposition Opposition to vaccination, from a wide array of vaccine critics, has existed since the earliest vaccination campaigns. It is widely accepted that the benefits of preventing serious illness and death from infectious diseases greatly outweigh the risks of rare serious adverse effects following immunization. Some studies have claimed to show that current vaccine schedules increase infant mortality and hospitalization rates; those studies, however, are correlational in nature and therefore cannot demonstrate causal effects, and the studies have also been criticized for cherry picking the comparisons they report, for ignoring historical trends that support an opposing conclusion, and for counting vaccines in a manner that is "completely arbitrary and riddled with mistakes". Various disputes have arisen over the morality, ethics, effectiveness, and safety of vaccination. 
Some vaccination critics say that vaccines are ineffective against disease or that vaccine safety studies are inadequate. Some religious groups do not allow vaccination, and some political groups oppose mandatory vaccination on the grounds of individual liberty. In response, concern has been raised that spreading unfounded information about the medical risks of vaccines increases rates of life-threatening infections, not only in the children whose parents refused vaccinations, but also in those who cannot be vaccinated due to age or immunodeficiency, who could contract infections from unvaccinated carriers (see herd immunity). Some parents believe vaccinations cause autism, although there is no scientific evidence to support this idea. In 2011, Andrew Wakefield, a leading proponent of the theory that MMR vaccine causes autism, was found to have been financially motivated to falsify research data and was subsequently stripped of his medical license. In the United States people who refuse vaccines for non-medical reasons have made up a large percentage of the cases of measles, and subsequent cases of permanent hearing loss and death caused by the disease. Many parents do not vaccinate their children because they feel that diseases are no longer present due to vaccination. This is a false assumption, since diseases held in check by immunization programs can and do still return if immunization is dropped. These pathogens could possibly infect vaccinated people, due to the pathogen's ability to mutate when it is able to live in unvaccinated hosts. Vaccination and autism The notion of a connection between vaccines and autism originated in a 1998 paper published in The Lancet whose lead author was the physician Andrew Wakefield. His study concluded that eight of the 12 patients, ages 3 years to 10 years, developed behavioral symptoms consistent with autism following the MMR vaccine (an immunization against measles, mumps, and rubella). The article was widely criticized for lack of scientific rigor and it was proven that Wakefield falsified data in the article. In 2004, 10 of the original 12 co-authors (not including Wakefield) published a retraction of the article and stated the following: "We wish to make it clear that in this paper no causal link was established between MMR vaccine and autism as the data were insufficient." In 2010, The Lancet officially retracted the article, stating that several elements of the article were incorrect, including falsified data and protocols. The article has sparked a much greater anti-vaccination movement, particularly in the United States, and even though the article was shown to be fraudulent and was heavily retracted, one in four parents still believe that vaccines can cause autism. To date, all validated and definitive studies have shown that there is no correlation between vaccines and autism. One of the studies published in 2015 confirms there is no link between autism and the MMR vaccine. Infants were given a health plan, that included an MMR vaccine, and were continuously studied until they reached five years old. There was no link between the vaccine and children who had a normally developed sibling or a sibling that had autism making them a higher risk for developing autism themselves. It can be difficult to correct the memory of humans when wrong information is received prior to correct information. 
Even though there is much evidence to go against the Wakefield study and retractions were published by most of the co-authors, many people continue to believe and base decisions on the study as it still lingers in their memory. Studies and research are being conducted to determine effective ways to correct misinformation in the public memory. Routes of administration A vaccine administration may be oral, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermal or intranasal. Several recent clinical trials have aimed to deliver the vaccines via mucosal surfaces to be up-taken by the common mucosal immunity system, thus avoiding the need for injections. Economics of vaccination Health is often used as one of the metrics for determining the economic prosperity of a country. This is because healthier individuals are generally better suited to contributing to the economic development of a country than the sick. There are many reasons for this. For instance, a person who is vaccinated for influenza not only protects themselves from the risk of influenza, but simultaneously also prevents themselves from infecting those around them. This leads to a healthier society, which allows individuals to be more economically productive. Children are consequently able to attend school more often and have been shown to do better academically. Similarly, adults are able to work more often, more efficiently, and more effectively. Costs and benefits On the whole, vaccinations induce a net benefit to society. Vaccines are often noted for their high Return on investment (ROI) values, especially when considering the long-term effects. Some vaccines have much higher ROI values than others. Studies have shown that the ratios of vaccination benefits to costs can differ substantially—from 27:1 for diphtheria/pertussis, to 13.5:1 for measles, 4.76:1 for varicella, and 0.68–1.1 : 1 for pneumococcal conjugate. Some governments choose to subsidize the costs of vaccines, due to some of the high ROI values attributed to vaccinations. The United States subsidizes over half of all vaccines for children, which costs between $400 and $600 each. Although most children do get vaccinated, the adult population of the US is still below the recommended immunization levels. Many factors can be attributed to this issue. Many adults who have other health conditions are unable to be safely immunized, whereas others opt not to be immunized for the sake of private financial benefits. Many Americans are underinsured, and, as such, are required to pay for vaccines out-of-pocket. Others are responsible for paying high deductibles and co-pays. Although vaccinations usually induce long-term economic benefits, many governments struggle to pay the high short-term costs associated with labor and production. Consequently, many countries neglect to provide such services. According to a 2021 paper, vaccinations against haemophilus influenzae type b, hepatitis B, human papillomavirus, Japanese encephalitis, measles, neisseria meningitidis serogroup A, rotavirus, rubella, streptococcus pneumoniae, and yellow fever have prevented an estimated 50 million deaths from 2000 to 2019. The paper "represents the largest assessment of vaccine impact before COVID-19-related disruptions". According to a June 2022 study, COVID19 vaccinations prevented an additional 14.4 to 19.8 million deaths in 185 countries and territories from 8 December 2020 to 8 December 2021. 
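To make the benefit-cost ratios quoted above concrete, the short Python sketch below converts each ratio into a net-benefit figure for a hypothetical programme budget. The ratios are those cited in this section; the budget figure is purely an illustrative assumption and is not taken from the underlying studies.

```python
# Minimal sketch: turning published benefit-cost ratios into net-benefit figures.
# The ratios below are the ones quoted in the text; the programme cost is a
# purely illustrative assumption, not a figure from the cited studies.

benefit_cost_ratios = {
    "diphtheria/pertussis": 27.0,
    "measles": 13.5,
    "varicella": 4.76,
    "pneumococcal conjugate (low end)": 0.68,
    "pneumococcal conjugate (high end)": 1.1,
}

assumed_program_cost = 1_000_000  # hypothetical spend per programme, in dollars

for vaccine, ratio in benefit_cost_ratios.items():
    benefit = ratio * assumed_program_cost
    net_benefit = benefit - assumed_program_cost
    print(f"{vaccine:35s} ratio {ratio:>5.2f}:1 -> "
          f"net benefit ${net_benefit:,.0f} per ${assumed_program_cost:,.0f} spent")
```

On this simple accounting, a ratio below 1:1, as at the low end of the pneumococcal conjugate estimate, corresponds to a negative net benefit, which is why such programmes are often justified on health grounds rather than purely economic ones.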
They estimated that it would cost between $2.8 billion and $3.7 billion to develop at least one vaccine for each of them. This should be set against the potential cost of an outbreak. The 2003 SARS outbreak in East Asia cost $54 billion. Game theory uses utility functions to model costs and benefits, which may include financial and non-financial costs and benefits. In recent years, it has been argued that game theory can effectively be used to model vaccine uptake in societies. Researchers have used game theory for this purpose to analyse vaccination uptake in the context of diseases such as influenza and measles. Gallery See also Antitoxin Correlates of immunity COVID-19 vaccine DNA vaccination Feline vaccination H5N1 clinical trials Immunization during pregnancy List of vaccine topics Misinformation related to vaccination Vaccination and religion Vaccination of dogs Vaccinator Vaccine trial World Immunization Week References Further reading External links U.S. government Vaccine Research Center: Information regarding preventive vaccine research studies The Vaccine Page links to resources in many countries. "The complete routine immunisation schedule from summer 2014". Published by the UK Department of Health. (PDF) National Immunization Program, US Centers for Disease Control "Vaccine Safety" – US Centers for Disease Control "Vaccines Timeline" – Centers for Disease Control and Prevention Immunize.org – Immunization Action Coalition' (nonprofit working to increase immunization rates) WHO.int – 'Immunizations, vaccines and biologicals: Towards a World free of Vaccine Preventable Diseases', World Health Organization (WHO's global vaccination campaign website) Health-EU Portal Vaccinations in the EU History of Vaccines Medical education site from the College of Physicians of Philadelphia, the oldest medical professional society in the US Images of vaccine-preventable diseases Immunisation, BBC Radio 4 discussion with Nadja Durbach, Chris Dye & Sanjoy Bhattacharya (In Our Time, 20 April 2006) Biotechnology
Vaccination
Biology
7,395
24,142,446
https://en.wikipedia.org/wiki/Sustainable%20landscaping
Sustainable landscaping is a modern type of gardening or landscaping that takes the environmental issue of sustainability into account. According to Loehrlein in 2009 this includes design, construction and management of residential and commercial gardens and incorporates organic lawn management and organic gardening techniques. Definition A sustainable garden is designed to be both attractive and in balance with the local climate and environment and it should require minimal resource inputs. Thus, the design must be “functional, cost-efficient, visually pleasing, environmentally friendly and maintainable". As part of sustainable development, it pays close attention to preserving limited resources, reducing waste, and preventing air, water and soil pollution. Compost, fertilization, integrated pest management, using the right plant in the right place, appropriate use of turf and xeriscaping (water-wise gardening) are all components of sustainable landscaping. Benefits Sustainability can help urban commercial landscaping companies save money. In California, gardens often do not outweigh the cost of inputs like water and labor. However, using appropriately selected and properly sited plants may help to ensure that maintenance costs are lower because of reduced inputs. Long-lasting Reduced water usage and no surface runoff or puddles Minimal use of fertilizers and pesticides Use of green waste Conservation of energy and resources Issues Sustainability issues for landscaping include: Carbon sequestration Climate change Water conservation Energy usage Non-sustainable practices include: Consumption of non-renewable resources Greenhouse gas emissions Solutions Some of the solutions are: Reduction of stormwater run-off through the use of bio-swales, rain gardens and green roofs and walls. Reduction of water use in landscapes through design of water-wise garden techniques (sometimes known as xeriscaping) Bio-filtering of wastes through constructed wetlands Irrigation using water from showers and sinks, known as gray water Integrated Pest Management techniques for pest control Creating and enhancing wildlife habitat in urban environments Energy-efficient garden design in the form of proper placement and selection of shade trees and creation of wind breaks Permeable paving materials to reduce stormwater run-off and allow rain water to infiltrate into the ground and replenish groundwater rather than run into surface water Use of sustainably harvested wood, composite wood products for decking and other garden uses, as well as use of plastic lumber Recycling of products, such as glass, rubber from tires and other materials to create landscape products such as paving stones, mulch and other materials Soil management techniques, including composting kitchen and yard wastes, to maintain and enhance healthy soil that supports a diversity of soil life Integration and adoption of renewable energy, including solar-powered lighting Development of lawn alternatives such as xeriscaping, floral lawns, and meadows. Proper design One step to garden design is to do a "sustainability audit". This is similar to a landscape site analysis that is typically performed by landscape designers at the beginning of the design process. Factors such as lot size, house size, local covenants and budgets should be considered. The steps to design include a base plan, site inventory and analysis, construction documents, implementation and maintenance. Of great importance is considerations related to the growing conditions of the site. 
These include orientation to the sun, soil type, wind flow, slopes, shade and climate, the goal of reducing irrigation and use of toxic substances, and requires proper plant selection for the specific site. Sustainable landscaping is not only important because it saves money, it also limits the human impact on the surrounding ecosystem. However, planting species not native to the landscape may introduce invasive plant species as well as new wildlife that was not in the ecosystem before. Altering the ecosystem is a major problem and meeting with an expert with experience with the wildlife and agriculture in the area will help avoid this. Irrigation Mulch may be used to reduce water loss due to evaporation, reduce weeds, minimize erosion, dust and mud problems. Mulch can also add nutrients to the soil when it decomposes. However, mulch is most often used for weed suppression. Overuse of mulch can result in harm to the selected plantings. Care must be taken in the source of the mulch, for instance, black walnut trees result in a toxic mulch product. Grasscycling turf areas (using mulching mowers that leave grass clippings on the lawn) will also decrease the amount of fertilizer needed, reduce landfill waste and reduce costs of disposal. A common recommendation is to add 2-4 inches of mulch in flower beds and under trees away from the trunk. Mulch should be applied under trees to the dripline (extension of the branches) in lieu of flowers, hostas, turf or other plants that are often planted there. This practice of planting under trees is detrimental to tree roots, especially when such plants are irrigated to an excessive level that harms the tree. One must be careful not to apply mulch to the bark of the tree. It can result in smothering, mould and insect depredation. The practice of xeriscaping or water-wise gardening suggests that placing plants with similar water demands together will save time and low-water or drought-tolerant plants would be a smart initial consideration. A homeowner may consider consulting an accredited irrigation technician/auditor and obtain a water audit of current systems. Drip or sub-surface irrigation may be useful. Using evapotranspiration controllers, soil sensors and refined control panels will reduce water loss. Irrigation heads may need readjustment to avoid sprinkling on sidewalks or streets. Business owners may consider developing watering schedules based on historical or actual weather data and soil probes to monitor soil moisture prior to watering. Building materials When deciding what kind of building materials to put on a site it is important to recycle as often as possible, such as for example by reusing old bricks. It is also important to be careful about what materials you use, especially if you plan to grow food crops. Old telephone poles and railroad ties have usually been treated with a toxic substance called creosote that can leach into the soils. Sustainably harvested lumber is available, in which ecological, economic and social factors are integrated into the management of trees used for lumber. Planting selection One important part of sustainable landscaping is plant selection. Most of what makes a landscape unsustainable is the amount of inputs required to grow a non-native plant on it. What this means is that a local plant, which has adapted to local climate conditions will require less work to flourish. Instead, drought-tolerant plants like succulents and cacti are better suited to survive. 
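As a rough illustration of the irrigation guidance above (evapotranspiration controllers, soil probes and weather-based watering schedules), the Python sketch below shows one way such a decision rule might be wired together. The thresholds, efficiency factor and sensor readings are hypothetical placeholders, not horticultural recommendations.

```python
# Illustrative sketch of the scheduling logic described above: water only when a
# soil-moisture reading approaches a lower threshold, and size the application
# from an evapotranspiration (ET) estimate. All numbers are hypothetical defaults.

def irrigation_needed(soil_moisture_pct: float, wilting_point_pct: float = 12.0,
                      target_pct: float = 25.0) -> bool:
    """Return True when the soil-moisture probe reading is close to the wilting point."""
    return soil_moisture_pct <= wilting_point_pct + 0.25 * (target_pct - wilting_point_pct)

def irrigation_depth_mm(et_mm_per_day: float, days_since_watering: int,
                        rainfall_mm: float, efficiency: float = 0.75) -> float:
    """Replace water lost to ET, crediting rainfall and allowing for system losses."""
    deficit = max(et_mm_per_day * days_since_watering - rainfall_mm, 0.0)
    return deficit / efficiency

if irrigation_needed(soil_moisture_pct=14.0):
    depth = irrigation_depth_mm(et_mm_per_day=5.0, days_since_watering=4, rainfall_mm=3.0)
    print(f"Apply roughly {depth:.1f} mm of water")
else:
    print("Skip this cycle")
```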
Plants used as windbreaks can save up to 30% on heating costs in winter. They also help with shading a residence or commercial building in summer, create cool air through evapotranspiration and can cool hardscape areas such as driveways and sidewalks. Irrigation is an excellent end-use option in greywater recycling and rainwater harvesting systems, and a composting toilet can cover (at least) some of the nutrient requirements. Not all fruit trees are suitable for greywater irrigation, as reclaimed greywater is typically of high pH and acidophile plants do not do well in alkaline environments.

Energy conservation may be achieved by placing broadleaf deciduous trees near the east, west and optionally north-facing walls of the house. Such a selection provides shading in the summer while permitting large amounts of heat-carrying solar radiation to strike the house in the winter. The trees are to be placed as closely as possible to the house walls. As the efficiency of photovoltaic panels and passive solar heating is sensitive to shading, experts suggest the complete absence of trees near the south side. Another choice would be a dense vegetative fence composed of evergreens (e.g. conifers) near the side from which cold continental winds blow and also the side from which the prevailing winds blow. Such a choice creates a winter windbreak that buffers the house against low outside temperatures and reduces air infiltration towards the inside. Calculations show that placing the windbreak at a distance twice the height of the trees can reduce the wind velocity by 75%. The above vegetative arrangements come with two disadvantages: firstly, they minimize air circulation in summer, although in many climates heating is more important and costly than cooling; secondly, they may affect the efficiency of photovoltaic panels. However, it has been estimated that if both arrangements are applied properly, they can reduce the overall house energy usage by up to 22%.

Sustainable lawns
Lawns are often used as the center point of a landscape. While there are many different species of grass, only a limited number are considered sustainable. Knowing the local climate is essential for saving water and keeping the landscape sustainable. For example, in southern California a grass lawn of tall fescue typically needs considerably more irrigation water than a planting in the same place made up of mixed beds with various trees, shrubs, and ground cover. Gravel, wood chips or bark, mulch, rubber mulch, artificial grass, a patio, a wood or composite deck, a rock garden, or a succulent garden are all considered sustainable landscape techniques. Species other than grass that can take the place of a lawn include lantana, clover, creeping ivy, creeping thyme, oregano, rosemary hedges, silver pony foot, moneywort, chamomile, yarrow, creeping lily turf, ice plant, and stonecrop.

Maintenance
Pests
It is best to start with pest-free plant materials and supplies, and close inspection of the plant upon purchase is recommended. Establishing diversity within the area of plant species will encourage populations of beneficial organisms (e.g. birds, insects), which feed on potential plant pests. Attracting a wide variety of organisms with a variety of host plants has been shown to be effective in increasing pollinator presence in agriculture. Because plant pests vary from plant to plant, assessing the problem correctly is half the battle. The owner must consider whether the plant can tolerate the damage caused by the pest.
If not, then does the plant justify some sort of treatment? Physical barriers may help. Landscape managers should make use of Integrated Pest Management to reduce the use of pesticides and herbicides.

Pruning
Proper pruning will increase air circulation and may decrease the likelihood of plant diseases. However, improper pruning is detrimental to shrubs and trees.

Programs
There are several programs in place that are open to participation by various groups; examples include the Audubon Cooperative Sanctuary Program for golf courses, the Audubon Green Neighborhoods Program, and the National Wildlife Federation's Backyard Habitat Program. The Sustainable Sites Initiative, begun in 2005, provides a points-based certification for landscapes, similar to the Green Building Council's LEED program for buildings. It has guidelines and performance benchmarks.

See also
References
Garden design Sustainable building Sustainable design Landscape architecture Organic gardening
Sustainable landscaping
Engineering
2,231
52,186,542
https://en.wikipedia.org/wiki/Haploporus%20cylindrosporus
Haploporus cylindrosporus is a species of poroid crust fungus in the family Polyporaceae. Found in China, it causes a white rot in decomposing angiosperm wood. Taxonomy The fungus was collected from Ailaoshan Nature Reserve in Jingdong County (Yunnan Province) in August 2015, and described as a new species the following year. The specific epithet cylindrosporus refers to the cylindrical spores. Description Fruit bodies of Haploporus cylindrosporus are crust-like, measuring long, wide, and up to 2 mm thick at the centre. The hymenophore, or pore surface, is white to cream coloured. The pores number around four to five per millimetre. There is a distinct margin that surrounds the fruit body, which is up to 2.5 mm wide. The hyphal structure is dimitic, meaning that there are both generative and skeletal hyphae. The generative hyphae have clamp connections. The thick-walled, cylindrical spores typically measure 10–11.5 by 4.5–5 μm. References Fungi described in 2016 Fungi of China Polyporaceae Taxa named by Yu-Cheng Dai Taxa named by Bao-Kai Cui Fungus species
Haploporus cylindrosporus
Biology
262
17,807,412
https://en.wikipedia.org/wiki/Extrusion%20coating
Extrusion coating is the coating of a molten web of synthetic resin onto a substrate material. It is a versatile coating technique used for the economic application of various plastics, notably polyethylene, onto paperboard, corrugated fiberboard, paper, aluminium foils, cellulose, non-wovens, or plastic films. It was first developed in the 1940s for polyethylene-coated paper for bags and packaging.

Process
Coating
The actual process of extrusion coating involves extruding resin from a slot die at temperatures up to 320°C directly onto the moving web, which may then be passed through a nip consisting of a rubber-covered pressure roller and a chrome-plated cooling roll. The latter cools the molten film back into the solid state and also imparts the desired finish to the plastic surface. The web is normally run much faster than the speed at which the resin is extruded from the die, creating a coating thickness which is in proportion to the speed ratio and the slot gap.

Laminating
Extrusion laminating is a similar process except that the extruded hot molten resin acts as the bonding medium to a second web of material.

Co-extrusion
Co-extrusion is, again, a similar process but with two, or more, extruders coupled to a single die head in which the individually extruded melts are brought together and finally extruded as a multi-layer film.

Uses
The market for extrusion coating includes a variety of end-use applications such as liquid packaging, photographic, flexible packaging, mill and industrial wrappings, transport packaging, sack linings, building, envelopes, medical/hygiene, and release base.

See also
Curtain coating
Calender

References
Soroka, W., "Fundamentals of Packaging Technology", IoPP, 2002.
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009.
Coatings Plastics industry
Extrusion coating
Chemistry
400
531,239
https://en.wikipedia.org/wiki/Rotational%20spectroscopy
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. The rotational spectrum (power spectral density vs. rotational frequency) of polar molecules can be measured in absorption or emission by microwave spectroscopy or by far infrared spectroscopy. The rotational spectra of non-polar molecules cannot be observed by those methods, but can be observed and measured by Raman spectroscopy. Rotational spectroscopy is sometimes referred to as pure rotational spectroscopy to distinguish it from rotational-vibrational spectroscopy where changes in rotational energy occur together with changes in vibrational energy, and also from ro-vibronic spectroscopy (or just vibronic spectroscopy) where rotational, vibrational and electronic energy changes occur simultaneously. For rotational spectroscopy, molecules are classified according to symmetry into spherical tops, linear molecules, and symmetric tops; analytical expressions can be derived for the rotational energy terms of these molecules. Analytical expressions can be derived for the fourth category, asymmetric top, for rotational levels up to J=3, but higher energy levels need to be determined using numerical methods. The rotational energies are derived theoretically by considering the molecules to be rigid rotors and then applying extra terms to account for centrifugal distortion, fine structure, hyperfine structure and Coriolis coupling. Fitting the spectra to the theoretical expressions gives numerical values of the angular moments of inertia from which very precise values of molecular bond lengths and angles can be derived in favorable cases. In the presence of an electrostatic field there is Stark splitting which allows molecular electric dipole moments to be determined. An important application of rotational spectroscopy is in exploration of the chemical composition of the interstellar medium using radio telescopes. Applications Rotational spectroscopy has primarily been used to investigate fundamental aspects of molecular physics. It is a uniquely precise tool for the determination of molecular structure in gas-phase molecules. It can be used to establish barriers to internal rotation such as that associated with the rotation of the group relative to the group in chlorotoluene (). When fine or hyperfine structure can be observed, the technique also provides information on the electronic structures of molecules. Much of current understanding of the nature of weak molecular interactions such as van der Waals, hydrogen and halogen bonds has been established through rotational spectroscopy. In connection with radio astronomy, the technique has a key role in exploration of the chemical composition of the interstellar medium. Microwave transitions are measured in the laboratory and matched to emissions from the interstellar medium using a radio telescope. was the first stable polyatomic molecule to be identified in the interstellar medium. The measurement of chlorine monoxide is important for atmospheric chemistry. Current projects in astrochemistry involve both laboratory microwave spectroscopy and observations made using modern radiotelescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA). Overview A molecule in the gas phase is free to rotate relative to a set of mutually orthogonal axes of fixed orientation in space, centered on the center of mass of the molecule. 
Free rotation is not possible for molecules in liquid or solid phases due to the presence of intermolecular forces. Rotation about each unique axis is associated with a set of quantized energy levels dependent on the moment of inertia about that axis and a quantum number. Thus, for linear molecules the energy levels are described by a single moment of inertia and a single quantum number, , which defines the magnitude of the rotational angular momentum. For nonlinear molecules which are symmetric rotors (or symmetric tops - see next section), there are two moments of inertia and the energy also depends on a second rotational quantum number, , which defines the vector component of rotational angular momentum along the principal symmetry axis. Analysis of spectroscopic data with the expressions detailed below results in quantitative determination of the value(s) of the moment(s) of inertia. From these precise values of the molecular structure and dimensions may be obtained. For a linear molecule, analysis of the rotational spectrum provides values for the rotational constant and the moment of inertia of the molecule, and, knowing the atomic masses, can be used to determine the bond length directly. For diatomic molecules this process is straightforward. For linear molecules with more than two atoms it is necessary to measure the spectra of two or more isotopologues, such as 16O12C32S and 16O12C34S. This allows a set of simultaneous equations to be set up and solved for the bond lengths). A bond length obtained in this way is slightly different from the equilibrium bond length. This is because there is zero-point energy in the vibrational ground state, to which the rotational states refer, whereas the equilibrium bond length is at the minimum in the potential energy curve. The relation between the rotational constants is given by where v is a vibrational quantum number and α is a vibration-rotation interaction constant which can be calculated if the B values for two different vibrational states can be found. For other molecules, if the spectra can be resolved and individual transitions assigned both bond lengths and bond angles can be deduced. When this is not possible, as with most asymmetric tops, all that can be done is to fit the spectra to three moments of inertia calculated from an assumed molecular structure. By varying the molecular structure the fit can be improved, giving a qualitative estimate of the structure. Isotopic substitution is invaluable when using this approach to the determination of molecular structure. Classification of molecular rotors In quantum mechanics the free rotation of a molecule is quantized, so that the rotational energy and the angular momentum can take only certain fixed values, which are related simply to the moment of inertia, , of the molecule. For any molecule, there are three moments of inertia: , and about three mutually orthogonal axes A, B, and C with the origin at the center of mass of the system. The general convention, used in this article, is to define the axes such that , with axis corresponding to the smallest moment of inertia. Some authors, however, define the axis as the molecular rotation axis of highest order. The particular pattern of energy levels (and, hence, of transitions in the rotational spectrum) for a molecule is determined by its symmetry. A convenient way to look at the molecules is to divide them into four different classes, based on the symmetry of their structure. 
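The bond-length determination described above can be illustrated with a short calculation. The sketch below uses the standard rigid-rotor relations B = h/(8π²cI) and I = μr², together with the vibrational dependence B_v = B_e − α_e(v + 1/2); these are textbook relations supplied here explicitly, since they are assumed rather than quoted from this article, and the carbon monoxide constants used are approximate literature values.

```python
# Sketch of the bond-length determination described above for a diatomic molecule,
# using the standard relations B = h / (8 * pi^2 * c * I) and I = mu * r^2.
# The formulas and the approximate CO constants are textbook values used here
# only for illustration.

import math

H = 6.626_070_15e-34        # Planck constant, J s
C = 2.997_924_58e10          # speed of light in cm/s, so B is taken in cm^-1
AMU = 1.660_539_066e-27      # atomic mass unit, kg

def bond_length_pm(B_cm1: float, mass1_amu: float, mass2_amu: float) -> float:
    """Bond length (pm) of a diatomic molecule from its rotational constant B (cm^-1)."""
    mu = (mass1_amu * mass2_amu) / (mass1_amu + mass2_amu) * AMU   # reduced mass, kg
    I = H / (8 * math.pi**2 * C * B_cm1)                           # moment of inertia, kg m^2
    return math.sqrt(I / mu) * 1e12                                # metres -> picometres

# Approximate ground-state constant for 12C16O: B0 ~ 1.9225 cm^-1
print(f"CO bond length ~ {bond_length_pm(1.9225, 12.000, 15.995):.1f} pm")

# Vibrational dependence B_v = B_e - alpha_e * (v + 1/2) lets B_e be recovered
# from constants measured in two vibrational states:
def B_equilibrium(B0: float, B1: float) -> float:
    alpha = B0 - B1
    return B0 + alpha / 2

print(f"B_e estimate: {B_equilibrium(1.9225, 1.9050):.4f} cm^-1")  # illustrative B0, B1
```

For ¹²C¹⁶O the first calculation returns roughly 113 pm, in line with the approach described above; isotopologue data would be handled by repeating the calculation with the appropriate reduced masses.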
These are Selection rules Microwave and far-infrared spectra Transitions between rotational states can be observed in molecules with a permanent electric dipole moment. A consequence of this rule is that no microwave spectrum can be observed for centrosymmetric linear molecules such as (dinitrogen) or HCCH (ethyne), which are non-polar. Tetrahedral molecules such as (methane), which have both a zero dipole moment and isotropic polarizability, would not have a pure rotation spectrum but for the effect of centrifugal distortion; when the molecule rotates about a 3-fold symmetry axis a small dipole moment is created, allowing a weak rotation spectrum to be observed by microwave spectroscopy. With symmetric tops, the selection rule for electric-dipole-allowed pure rotation transitions is , . Since these transitions are due to absorption (or emission) of a single photon with a spin of one, conservation of angular momentum implies that the molecular angular momentum can change by at most one unit. Moreover, the quantum number K is limited to have values between and including +J to -J. Raman spectra For Raman spectra the molecules undergo transitions in which an incident photon is absorbed and another scattered photon is emitted. The general selection rule for such a transition to be allowed is that the molecular polarizability must be anisotropic, which means that it is not the same in all directions. Polarizability is a 3-dimensional tensor that can be represented as an ellipsoid. The polarizability ellipsoid of spherical top molecules is in fact spherical so those molecules show no rotational Raman spectrum. For all other molecules both Stokes and anti-Stokes lines can be observed and they have similar intensities due to the fact that many rotational states are thermally populated. The selection rule for linear molecules is ΔJ = 0, ±2. The reason for the values ±2 is that the polarizability returns to the same value twice during a rotation. The value ΔJ = 0 does not correspond to a molecular transition but rather to Rayleigh scattering in which the incident photon merely changes direction. The selection rule for symmetric top molecules is ΔK = 0 If K = 0, then ΔJ = ±2 If K ≠ 0, then ΔJ = 0, ±1, ±2 Transitions with ΔJ = +1 are said to belong to the R series, whereas transitions with belong to an S series. Since Raman transitions involve two photons, it is possible for the molecular angular momentum to change by two units. Units The units used for rotational constants depend on the type of measurement. With infrared spectra in the wavenumber scale (), the unit is usually the inverse centimeter, written as cm−1, which is literally the number of waves in one centimeter, or the reciprocal of the wavelength in centimeters (). On the other hand, for microwave spectra in the frequency scale (), the unit is usually the gigahertz. The relationship between these two units is derived from the expression where ν is a frequency, λ is a wavelength and c is the velocity of light. It follows that As 1 GHz = 109 Hz, the numerical conversion can be expressed as Effect of vibration on rotation The population of vibrationally excited states follows a Boltzmann distribution, so low-frequency vibrational states are appreciably populated even at room temperatures. As the moment of inertia is higher when a vibration is excited, the rotational constants (B) decrease. Consequently, the rotation frequencies in each vibration state are different from each other. 
This can give rise to "satellite" lines in the rotational spectrum. An example is provided by cyanodiacetylene, H−C≡C−C≡C−C≡N. Further, there is a fictitious force, Coriolis coupling, between the vibrational motion of the nuclei in the rotating (non-inertial) frame. However, as long as the vibrational quantum number does not change (i.e., the molecule is in only one state of vibration), the effect of vibration on rotation is not important, because the time for vibration is much shorter than the time required for rotation. The Coriolis coupling is often negligible, too, if one is interested in low vibrational and rotational quantum numbers only. Effect of rotation on vibrational spectra Historically, the theory of rotational energy levels was developed to account for observations of vibration-rotation spectra of gases in infrared spectroscopy, which was used before microwave spectroscopy had become practical. To a first approximation, the rotation and vibration can be treated as separable, so the energy of rotation is added to the energy of vibration. For example, the rotational energy levels for linear molecules (in the rigid-rotor approximation) are In this approximation, the vibration-rotation wavenumbers of transitions are where and are rotational constants for the upper and lower vibrational state respectively, while and are the rotational quantum numbers of the upper and lower levels. In reality, this expression has to be modified for the effects of anharmonicity of the vibrations, for centrifugal distortion and for Coriolis coupling. For the so-called R branch of the spectrum, so that there is simultaneous excitation of both vibration and rotation. For the P branch, so that a quantum of rotational energy is lost while a quantum of vibrational energy is gained. The purely vibrational transition, , gives rise to the Q branch of the spectrum. Because of the thermal population of the rotational states the P branch is slightly less intense than the R branch. Rotational constants obtained from infrared measurements are in good accord with those obtained by microwave spectroscopy, while the latter usually offers greater precision. Structure of rotational spectra Spherical top Spherical top molecules have no net dipole moment. A pure rotational spectrum cannot be observed by absorption or emission spectroscopy because there is no permanent dipole moment whose rotation can be accelerated by the electric field of an incident photon. Also the polarizability is isotropic, so that pure rotational transitions cannot be observed by Raman spectroscopy either. Nevertheless, rotational constants can be obtained by ro–vibrational spectroscopy. This occurs when a molecule is polar in the vibrationally excited state. For example, the molecule methane is a spherical top but the asymmetric C-H stretching band shows rotational fine structure in the infrared spectrum, illustrated in rovibrational coupling. This spectrum is also interesting because it shows clear evidence of Coriolis coupling in the asymmetric structure of the band. Linear molecules The rigid rotor is a good starting point from which to construct a model of a rotating molecule. It is assumed that component atoms are point masses connected by rigid bonds. A linear molecule lies on a single axis and each atom moves on the surface of a sphere around the centre of mass. 
The two degrees of rotational freedom correspond to the spherical coordinates θ and φ which describe the direction of the molecular axis, and the quantum state is determined by two quantum numbers J and M. J defines the magnitude of the rotational angular momentum, and M its component about an axis fixed in space, such as an external electric or magnetic field. In the absence of external fields, the energy depends only on J. Under the rigid rotor model, the rotational energy levels, F(J), of the molecule can be expressed as, where is the rotational constant of the molecule and is related to the moment of inertia of the molecule. In a linear molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, that is, , so For a diatomic molecule where m1 and m2 are the masses of the atoms and d is the distance between them. Selection rules dictate that during emission or absorption the rotational quantum number has to change by unity; i.e., . Thus, the locations of the lines in a rotational spectrum will be given by where denotes the lower level and denotes the upper level involved in the transition. The diagram illustrates rotational transitions that obey the =1 selection rule. The dashed lines show how these transitions map onto features that can be observed experimentally. Adjacent transitions are separated by 2B in the observed spectrum. Frequency or wavenumber units can also be used for the x axis of this plot. Rotational line intensities The probability of a transition taking place is the most important factor influencing the intensity of an observed rotational line. This probability is proportional to the population of the initial state involved in the transition. The population of a rotational state depends on two factors. The number of molecules in an excited state with quantum number J, relative to the number of molecules in the ground state, NJ/N0 is given by the Boltzmann distribution as , where k is the Boltzmann constant and T the absolute temperature. This factor decreases as J increases. The second factor is the degeneracy of the rotational state, which is equal to . This factor increases as J increases. Combining the two factors The maximum relative intensity occurs at The diagram at the right shows an intensity pattern roughly corresponding to the spectrum above it. Centrifugal distortion When a molecule rotates, the centrifugal force pulls the atoms apart. As a result, the moment of inertia of the molecule increases, thus decreasing the value of , when it is calculated using the expression for the rigid rotor. To account for this a centrifugal distortion correction term is added to the rotational energy levels of the diatomic molecule. where is the centrifugal distortion constant. Therefore, the line positions for the rotational mode change to In consequence, the spacing between lines is not constant, as in the rigid rotor approximation, but decreases with increasing rotational quantum number. An assumption underlying these expressions is that the molecular vibration follows simple harmonic motion. In the harmonic approximation the centrifugal constant can be derived as where k is the vibrational force constant. The relationship between and where is the harmonic vibration frequency, follows. If anharmonicity is to be taken into account, terms in higher powers of J should be added to the expressions for the energy levels and line positions. 
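The expressions discussed in this section can be gathered into a short numerical sketch. The Python code below evaluates the rigid-rotor line positions 2B(J+1), applies the centrifugal-distortion correction −4D(J+1)³, and weights each level by the Boltzmann factor times its (2J+1) degeneracy. The formulas are the standard ones implied by the text, and the values of B, D and T are round illustrative numbers rather than constants of any particular molecule.

```python
# Sketch of the linear-molecule expressions discussed above: rigid-rotor term values
# F(J) = B*J*(J+1), absorption lines at 2B(J+1) spaced by roughly 2B, a centrifugal
# distortion correction -D*J^2*(J+1)^2 that shifts each line to 2B(J+1) - 4D(J+1)^3,
# and relative intensities from the Boltzmann factor times the (2J+1) degeneracy.
# B, D and T are round illustrative numbers, not constants of a real molecule.

import math

K_B = 1.380_649e-23          # Boltzmann constant, J/K
H = 6.626_070_15e-34         # Planck constant, J s
C = 2.997_924_58e10          # speed of light, cm/s

def line_position_cm1(J_lower: int, B: float, D: float = 0.0) -> float:
    """Wavenumber of the J_lower -> J_lower+1 rotational transition (cm^-1)."""
    return 2 * B * (J_lower + 1) - 4 * D * (J_lower + 1) ** 3

def relative_population(J: int, B: float, T: float = 298.0) -> float:
    """(2J+1) * exp(-hcBJ(J+1)/kT), the factor governing relative line intensity."""
    E = H * C * B * J * (J + 1)          # rotational energy in joules
    return (2 * J + 1) * math.exp(-E / (K_B * T))

B, D, T = 2.0, 5e-6, 298.0               # illustrative constants (cm^-1, cm^-1, K)
J_max = math.sqrt(K_B * T / (2 * H * C * B)) - 0.5
print(f"Most populated level near J = {J_max:.1f}")
for J in range(0, 6):
    print(f"J={J}->{J+1}: {line_position_cm1(J, B, D):7.4f} cm^-1, "
          f"weight {relative_population(J, B, T):.2f}")
```

The output shows the near-equal 2B spacing shrinking slightly with J because of the distortion term, and the intensity weighting rising to a maximum before falling off, as described in the text.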
A striking example concerns the rotational spectrum of hydrogen fluoride which was fitted to terms up to [J(J+1)]5. Oxygen The electric dipole moment of the dioxygen molecule, is zero, but the molecule is paramagnetic with two unpaired electrons so that there are magnetic-dipole allowed transitions which can be observed by microwave spectroscopy. The unit electron spin has three spatial orientations with respect to the given molecular rotational angular momentum vector, K, so that each rotational level is split into three states, J = K + 1, K, and K - 1, each J state of this so-called p-type triplet arising from a different orientation of the spin with respect to the rotational motion of the molecule. The energy difference between successive J terms in any of these triplets is about 2 cm−1 (60 GHz), with the single exception of J = 1←0 difference which is about 4 cm−1. Selection rules for magnetic dipole transitions allow transitions between successive members of the triplet (ΔJ = ±1) so that for each value of the rotational angular momentum quantum number K there are two allowed transitions. The 16O nucleus has zero nuclear spin angular momentum, so that symmetry considerations demand that K have only odd values. Symmetric top For symmetric rotors a quantum number J is associated with the total angular momentum of the molecule. For a given value of J, there is a 2J+1- fold degeneracy with the quantum number, M taking the values +J ...0 ... -J. The third quantum number, K is associated with rotation about the principal rotation axis of the molecule. In the absence of an external electrical field, the rotational energy of a symmetric top is a function of only J and K and, in the rigid rotor approximation, the energy of each rotational state is given by where and for a prolate symmetric top molecule or for an oblate molecule. This gives the transition wavenumbers as which is the same as in the case of a linear molecule. With a first order correction for centrifugal distortion the transition wavenumbers become The term in DJK has the effect of removing degeneracy present in the rigid rotor approximation, with different K values. Asymmetric top The quantum number J refers to the total angular momentum, as before. Since there are three independent moments of inertia, there are two other independent quantum numbers to consider, but the term values for an asymmetric rotor cannot be derived in closed form. They are obtained by individual matrix diagonalization for each J value. Formulae are available for molecules whose shape approximates to that of a symmetric top. The water molecule is an important example of an asymmetric top. It has an intense pure rotation spectrum in the far infrared region, below about 200 cm−1. For this reason far infrared spectrometers have to be freed of atmospheric water vapour either by purging with a dry gas or by evacuation. The spectrum has been analyzed in detail. Quadrupole splitting When a nucleus has a spin quantum number, I, greater than 1/2 it has a quadrupole moment. In that case, coupling of nuclear spin angular momentum with rotational angular momentum causes splitting of the rotational energy levels. If the quantum number J of a rotational level is greater than I, levels are produced; but if J is less than I, levels result. The effect is one type of hyperfine splitting. For example, with 14N () in HCN, all levels with J > 0 are split into 3. The energies of the sub-levels are proportional to the nuclear quadrupole moment and a function of F and J. 
Here F takes the values J + I, J + I − 1, …, |J − I|. Thus, observation of nuclear quadrupole splitting permits the magnitude of the nuclear quadrupole moment to be determined. This is an alternative method to the use of nuclear quadrupole resonance spectroscopy. The selection rule for rotational transitions becomes ΔJ = ±1, ΔF = 0, ±1.

Stark and Zeeman effects
In the presence of a static external electric field the degeneracy of each rotational state is partly removed, an instance of a Stark effect. For example, in linear molecules each energy level is split into a number of components. The extent of splitting depends on the square of the electric field strength and the square of the dipole moment of the molecule. In principle this provides a means to determine the value of the molecular dipole moment with high precision. An example is carbonyl sulfide, OCS. However, because the splitting depends on μ², the orientation of the dipole must be deduced from quantum mechanical considerations. A similar removal of degeneracy will occur when a paramagnetic molecule is placed in a magnetic field, an instance of the Zeeman effect. Most species which can be observed in the gaseous state are diamagnetic. Exceptions are odd-electron molecules such as nitric oxide, NO, nitrogen dioxide, some chlorine oxides and the hydroxyl radical. The Zeeman effect has been observed with dioxygen.

Rotational Raman spectroscopy
Molecular rotational transitions can also be observed by Raman spectroscopy. Rotational transitions are Raman-allowed for any molecule with an anisotropic polarizability, which includes all molecules except spherical tops. This means that rotational transitions of molecules with no permanent dipole moment, which cannot be observed in absorption or emission, can be observed, by scattering, in Raman spectroscopy. Very high resolution Raman spectra can be obtained by adapting a Fourier transform infrared spectrometer. An example is the spectrum of . It shows the effect of nuclear spin, resulting in an intensity alternation of 3:1 in adjacent lines. A bond length of 109.9985 ± 0.0010 pm was deduced from the data.

Instruments and methods
The great majority of contemporary spectrometers use a mixture of commercially available and bespoke components which users integrate according to their particular needs. Instruments can be broadly categorised according to their general operating principles. Although rotational transitions can be found across a very broad region of the electromagnetic spectrum, fundamental physical constraints exist on the operational bandwidth of instrument components. It is often impractical and costly to switch to measurements within an entirely different frequency region. The instruments and operating principles described below are generally appropriate to microwave spectroscopy experiments conducted at frequencies between 6 and 24 GHz.

Absorption cells and Stark modulation
A microwave spectrometer can be most simply constructed using a source of microwave radiation, an absorption cell into which sample gas can be introduced and a detector such as a superheterodyne receiver. A spectrum can be obtained by sweeping the frequency of the source while detecting the intensity of transmitted radiation. A simple section of waveguide can serve as an absorption cell. An important variation of the technique, in which an alternating current is applied across electrodes within the absorption cell, results in a modulation of the frequencies of rotational transitions.
This is referred to as Stark modulation and allows the use of phase-sensitive detection methods offering improved sensitivity. Absorption spectroscopy allows the study of samples that are thermodynamically stable at room temperature. The first study of the microwave spectrum of a molecule () was performed by Cleeton & Williams in 1934. Subsequent experiments exploited powerful sources of microwaves such as the klystron, many of which were developed for radar during the Second World War. The number of experiments in microwave spectroscopy surged immediately after the war. By 1948, Walter Gordy was able to prepare a review of the results contained in approximately 100 research papers. Commercial versions of microwave absorption spectrometer were developed by Hewlett-Packard in the 1970s and were once widely used for fundamental research. Most research laboratories now exploit either Balle-Flygare or chirped-pulse Fourier transform microwave (FTMW) spectrometers. Fourier transform microwave (FTMW) spectroscopy The theoretical framework underpinning FTMW spectroscopy is analogous to that used to describe FT-NMR spectroscopy. The behaviour of the evolving system is described by optical Bloch equations. First, a short (typically 0-3 microsecond duration) microwave pulse is introduced on resonance with a rotational transition. Those molecules that absorb the energy from this pulse are induced to rotate coherently in phase with the incident radiation. De-activation of the polarisation pulse is followed by microwave emission that accompanies decoherence of the molecular ensemble. This free induction decay occurs on a timescale of 1-100 microseconds depending on instrument settings. Following pioneering work by Dicke and co-workers in the 1950s, the first FTMW spectrometer was constructed by Ekkers and Flygare in 1975. Balle–Flygare FTMW spectrometer Balle, Campbell, Keenan and Flygare demonstrated that the FTMW technique can be applied within a "free space cell" comprising an evacuated chamber containing a Fabry-Perot cavity. This technique allows a sample to be probed only milliseconds after it undergoes rapid cooling to only a few kelvins in the throat of an expanding gas jet. This was a revolutionary development because (i) cooling molecules to low temperatures concentrates the available population in the lowest rotational energy levels. Coupled with benefits conferred by the use of a Fabry-Perot cavity, this brought a great enhancement in the sensitivity and resolution of spectrometers along with a reduction in the complexity of observed spectra; (ii) it became possible to isolate and study molecules that are very weakly bound because there is insufficient energy available for them to undergo fragmentation or chemical reaction at such low temperatures. William Klemperer was a pioneer in using this instrument for the exploration of weakly bound interactions. While the Fabry-Perot cavity of a Balle-Flygare FTMW spectrometer can typically be tuned into resonance at any frequency between 6 and 18 GHz, the bandwidth of individual measurements is restricted to about 1 MHz. An animation illustrates the operation of this instrument which is currently the most widely used tool for microwave spectroscopy. Chirped-Pulse FTMW spectrometer Noting that digitisers and related electronics technology had significantly progressed since the inception of FTMW spectroscopy, B.H. 
Pate at the University of Virginia designed a spectrometer which retains many advantages of the Balle-Flygare FT-MW spectrometer while innovating in (i) the use of a high speed (>4 GS/s) arbitrary waveform generator to generate a "chirped" microwave polarisation pulse that sweeps up to 12 GHz in frequency in less than a microsecond and (ii) the use of a high speed (>40 GS/s) oscilloscope to digitise and Fourier transform the molecular free induction decay. The result is an instrument that allows the study of weakly bound molecules but which is able to exploit a measurement bandwidth (12 GHz) that is greatly enhanced compared with the Balle-Flygare FTMW spectrometer. Modified versions of the original CP-FTMW spectrometer have been constructed by a number of groups in the United States, Canada and Europe. The instrument offers a broadband capability that is highly complementary to the high sensitivity and resolution offered by the Balle-Flygare design. Notes References Bibliography External links infrared gas spectra simulator Hyperphysics article on Rotational Spectrum A list of microwave spectroscopy research groups around the world Spectroscopy Rotation Rigid bodies mechanics
Rotational spectroscopy
Physics,Chemistry
5,690
1,399,085
https://en.wikipedia.org/wiki/Methods%20engineering
Methods engineering is a subspecialty of industrial engineering and manufacturing engineering concerned with human integration in industrial production processes.

Overview
Alternatively it can be described as the design of the productive process in which a person is involved. The task of the methods engineer is to decide where humans will be utilized in the process of converting raw materials to finished products and how workers can most effectively perform their assigned tasks. The terms operation analysis, work design and simplification, methods engineering and corporate re-engineering are frequently used interchangeably. Lowering costs and increasing reliability and productivity are the objectives of methods engineering. Methods efficiency engineering focuses on lowering costs through productivity improvement; it investigates the output obtained from each unit of input and the speed of each machine and worker. Methods quality engineering focuses on increasing quality and reliability. These objectives are met in a five-step sequence: project selection, data acquisition and presentation, data analysis, development of an ideal method based on the data analysis and, finally, presentation and implementation of the method.

Methods engineering topics
Project selection
Methods engineers typically work on projects involving new product design, products with a high ratio of production cost to profit, and products associated with quality problems. Methods of project selection include Pareto analysis, fishbone diagrams, Gantt charts, PERT charts, and job/work site analysis guides.

Data acquisition and presentation
Data that need to be collected include specification sheets for the product, design drawings, process plans, quantity and delivery requirements, and projections as to how the product will perform or has performed in the market. Process charts are used to describe the proposed or existing way of doing work utilizing machines and people. The Gantt process chart can assist in the analysis of the worker-to-machine interaction, and it can aid in establishing the optimum number of workers and machines subject to the financial constraints of the operation. A flow diagram is frequently employed to represent the manufacturing process associated with the product.

Data analysis
Data analysis enables the methods engineer to make decisions about several things, including: purpose of the operation, part design characteristics, specifications and tolerances of parts, materials, manufacturing process design, setup and tooling, working conditions, material handling, plant layout, and workplace design. Knowing the specifics (who, what, when, where, why, and how) of product manufacturing assists in the development of an optimum manufacturing method.

Ideal method development
Equations of synchronous and random servicing, as well as line balancing, are used to determine the ideal worker-to-machine ratio for the process or product chosen. Synchronous servicing is the arrangement in which more than one machine is assigned to an operator, with the operator and the assigned machines occupied during the whole operating cycle. Random servicing of a facility, as the name indicates, is a servicing process in which the times at which machines need attention occur at random. Line balancing equations determine the ideal number of workers needed on a production line to enable it to work at capacity.
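The servicing and line-balancing calculations mentioned above are commonly expressed with simple formulas in work-design textbooks. The Python sketch below applies two such formulations: the number of machines one operator can serve under synchronous servicing, n = (l + m)/(l + w), and the minimum number of workstations needed to cover the total work content at a given cycle time. These particular formulas and the input figures are supplied here only as an illustration and are not drawn from this article.

```python
# Sketch of the two calculations mentioned above, using common textbook formulations.
# l = load/unload time, m = machine run time, w = walk time between machines.

import math

def ideal_machines_per_operator(load_unload_min: float, machine_run_min: float,
                                walk_time_min: float) -> float:
    """Synchronous servicing: n = (l + m) / (l + w), machines one operator can tend."""
    return (load_unload_min + machine_run_min) / (load_unload_min + walk_time_min)

def minimum_workstations(task_times_min: list, cycle_time_min: float) -> int:
    """Line balancing: smallest whole number of stations covering the total work content."""
    return math.ceil(sum(task_times_min) / cycle_time_min)

# Hypothetical figures for illustration only
n = ideal_machines_per_operator(load_unload_min=1.0, machine_run_min=4.0, walk_time_min=0.5)
print(f"One operator can tend about {n:.1f} machines (round down for an assignment)")

stations = minimum_workstations([0.8, 1.2, 0.6, 1.0, 0.9], cycle_time_min=1.5)
print(f"At least {stations} workstations are needed to meet the cycle time")
```

In practice the rounded-down machine assignment and the rounded-up station count are then checked against idle time and cost, which is the trade-off the servicing and balancing equations are meant to expose.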
Presentation and methods implementation The industrial process or operation can be optimized using a variety of available methods. Each method design has its advantages and disadvantages. The best overall method is chosen using selection criteria and concepts involving value engineering, cost-benefit analysis, crossover charts, and economic analysis. The outcome of the selection process is then presented to the company for implementation at the plant. This last step involves "selling the idea" to the company brass, a skill the methods engineer must develop in addition to the normal engineering qualifications. See also Work design Motion analysis References Engineering disciplines Engineering statistics Industrial engineering
Methods engineering
Engineering
714
200,129
https://en.wikipedia.org/wiki/Hair%20loss
Hair loss, also known as alopecia or baldness, refers to a loss of hair from part of the head or body. Typically at least the head is involved. The severity of hair loss can vary from a small area to the entire body. Inflammation or scarring is not usually present. Hair loss in some people causes psychological distress. Common types include male- or female-pattern hair loss, alopecia areata, and a thinning of hair known as telogen effluvium. The cause of male-pattern hair loss is a combination of genetics and male hormones; the cause of female pattern hair loss is unclear; the cause of alopecia areata is autoimmune; and the cause of telogen effluvium is typically a physically or psychologically stressful event. Telogen effluvium is very common following pregnancy. Less common causes of hair loss without inflammation or scarring include the pulling out of hair, certain medications including chemotherapy, HIV/AIDS, hypothyroidism, and malnutrition including iron deficiency. Causes of hair loss that occurs with scarring or inflammation include fungal infection, lupus erythematosus, radiation therapy, and sarcoidosis. Diagnosis of hair loss is partly based on the areas affected. Treatment of pattern hair loss may simply involve accepting the condition, which can also include shaving one's head. Interventions that can be tried include the medications minoxidil (or finasteride) and hair transplant surgery. Alopecia areata may be treated by steroid injections in the affected area, but these need to be frequently repeated to be effective. Hair loss is a common problem. Pattern hair loss by age 50 affects about half of men and a quarter of women. About 2% of people develop alopecia areata at some point in time. Terminology Baldness is the partial or complete lack of hair growth, and part of the wider topic of "hair thinning". The degree and pattern of baldness varies, but its most common cause is androgenic hair loss, alopecia androgenetica, or alopecia seborrheica, with the last term primarily used in Europe. Hypotrichosis Hypotrichosis is a condition of abnormal hair patterns, predominantly loss or reduction. It occurs, most frequently, by the growth of vellus hair in areas of the body that normally produce terminal hair. Typically, the individual's hair growth is normal after birth, but shortly thereafter the hair is shed and replaced with sparse, abnormal hair growth. The new hair is typically fine, short and brittle, and may lack pigmentation. Baldness may be present by the time the subject is 25 years old. Signs and symptoms Symptoms of hair loss include hair loss in patches usually in circular patterns, dandruff, skin lesions, and scarring. Alopecia areata (mild – medium level) usually shows in unusual hair loss areas, e.g., eyebrows, backside of the head or above the ears, areas the male pattern baldness usually does not affect. In male-pattern hair loss, loss and thinning begin at the temples and the crown and hair either thins out or falls out. Female-pattern hair loss occurs at the frontal and parietal. People have between 100,000 and 150,000 hairs on their head. The number of strands normally lost in a day varies but on average is 100. In order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. The first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. Styling can also reveal areas of thinning, such as a wider parting or a thinning crown. 
Skin conditions A substantially blemished face, back and limbs could point to cystic acne. The most severe form of the condition, cystic acne, arises from the same hormonal imbalances that cause hair loss and is associated with dihydrotestosterone production. Psychological The psychology of hair thinning is a complex issue. Hair is considered an essential part of overall identity: especially for women, for whom it often represents femininity and attractiveness. Men typically associate a full head of hair with youth and vigor. People experiencing hair thinning often find themselves in a situation where their physical appearance is at odds with their own self-image and commonly worry that they appear older than they are or less attractive to others. Psychological problems due to baldness, if present, are typically most severe at the onset of symptoms. Hair loss induced by cancer chemotherapy has been reported to cause changes in self-concept and body image. Body image does not return to the previous state after regrowth of hair for a majority of patients. In such cases, patients have difficulties expressing their feelings (alexithymia) and may be more prone to avoiding family conflicts. Family therapy can help families to cope with these psychological problems if they arise. Causes Although not completely understood, hair loss can have many causes: Pattern hair loss Male pattern hair loss is believed to be due to a combination of genetics and the male hormone dihydrotestosterone. The cause in female pattern hair loss remains unclear. Infection Dissecting cellulitis of the scalp Fungal infections (such as tinea capitis) Folliculitis from various causes Demodex folliculitis, caused by Demodex folliculorum, a microscopic mite that feeds on the sebum produced by the sebaceous glands, denies hair essential nutrients and can cause thinning. Demodex folliculorum is not present on every scalp and is more likely to live in an excessively oily scalp environment. Secondary syphilis Drugs Temporary or permanent hair loss can be caused by several medications, including those for blood pressure problems, diabetes, heart disease and cholesterol. Any that affect the body's hormone balance can have a pronounced effect: these include the contraceptive pill, hormone replacement therapy, steroids and acne medications. Some treatments used to cure mycotic infections can cause massive hair loss. Medications (side effects from drugs, including chemotherapy, anabolic steroids, and birth control pills) Trauma Traction alopecia is most commonly found in people with ponytails or cornrows who pull on their hair with excessive force. In addition, rigorous brushing and heat styling, rough scalp massage can damage the cuticle, the hard outer casing of the hair. This causes individual strands to become weak and break off, reducing overall hair volume. Frictional alopecia is hair loss caused by rubbing of the hair or follicles, most infamously around the ankles of men from socks, where even if socks are no longer worn, the hair often will not grow back. Trichotillomania is the loss of hair caused by compulsive pulling and bending of the hairs. Onset of this disorder tends to begin around the onset of puberty and usually continues through adulthood. Due to the constant extraction of the hair roots, permanent hair loss can occur. 
Traumas such as childbirth, major surgery, poisoning, and severe stress may cause a hair loss condition known as telogen effluvium, in which a large number of hairs enter the resting phase at the same time, causing shedding and subsequent thinning. The condition also presents as a side effect of chemotherapy – while targeting dividing cancer cells, this treatment also affects hair's growth phase with the result that almost 90% of hairs fall out soon after chemotherapy starts. Radiation to the scalp, as when radiotherapy is applied to the head for the treatment of certain cancers there, can cause baldness of the irradiated areas. Pregnancy Hair loss often follows childbirth in the postpartum period without causing baldness. During pregnancy, the hair is thicker owing to increased circulating estrogens. Approximately three months after giving birth (typically between 2 and 5 months), estrogen levels drop and hair loss occurs, often particularly noticeably around the hairline and temple area. Hair typically grows back normally and treatment is not indicated. A similar situation occurs in women taking the fertility-stimulating drug clomiphene. Other causes Autoimmune disease. Alopecia areata is an autoimmune disorder also known as "spot baldness" that can result in hair loss ranging from just one location (Alopecia areata monolocularis) to every hair on the entire body (Alopecia areata universalis). Although thought to be caused by hair follicles becoming dormant, what triggers alopecia areata is not known. In most cases the condition corrects itself, but it can also spread to the entire scalp (alopecia totalis) or to the entire body (alopecia universalis). Skin diseases and cancer. Localized or diffuse hair loss may also occur in cicatricial alopecia (lupus erythematosus, lichen plano pilaris, folliculitis decalvans, central centrifugal cicatricial alopecia, postmenopausal frontal fibrosing alopecia, etc.). Tumours and skin outgrowths also induce localized baldness (sebaceous nevus, basal cell carcinoma, squamous cell carcinoma). Tumor alopecia is the hair loss in the immediate vicinity of either benign or malignant tumors of the scalp. Hypothyroidism (an under-active thyroid) and the side effects of its related medications can cause hair loss, typically frontal, which is particularly associated with thinning of the outer third of the eyebrows (also seen with syphilis). Hyperthyroidism (an over-active thyroid) can also cause hair loss, which is parietal rather than frontal. Sebaceous cysts. Temporary loss of hair can occur in areas where sebaceous cysts are present for considerable duration (normally one to several weeks). Congenital triangular alopecia – It is a triangular, or oval in some cases, shaped patch of hair loss in the temple area of the scalp that occurs mostly in young children. The affected area mainly contains vellus hair follicles or no hair follicles at all, but it does not expand. Its causes are unknown, and although it is a permanent condition, it does not have any other effect on the affected individuals. Hair growth conditions. Gradual thinning of hair with age is a natural condition known as involutional alopecia. This is caused by an increasing number of hair follicles switching from the growth, or anagen, phase into a resting phase, or telogen phase, so that remaining hairs become shorter and fewer in number. An unhealthy scalp environment can play a significant role in hair thinning by contributing to miniaturization or causing damage. Obesity. 
Obesity-induced stress, such as that induced by a high-fat diet (HFD), targets hair follicle stem cells (HFSCs) to accelerate hair thinning in mice. It is likely that similar molecular mechanisms play a role in human hair loss. Other causes of hair loss include: Alopecia mucinosa Biotinidase deficiency Chronic inflammation Diabetes Pseudopelade of Brocq Telogen effluvium Tufted folliculitis Genetics Genetic forms of localized autosomal recessive hypotrichosis include: Pathophysiology Hair follicle growth occurs in cycles. Each cycle consists of a long growing phase (anagen), a short transitional phase (catagen) and a short resting phase (telogen). At the end of the resting phase, the hair falls out (exogen) and a new hair starts growing in the follicle, beginning the cycle again. Normally, about 40 (0–78 in men) hairs reach the end of their resting phase each day and fall out. When more than 100 hairs fall out per day, clinical hair loss (telogen effluvium) may occur. A disruption of the growing phase causes abnormal loss of anagen hairs (anagen effluvium). Diagnosis Because they are not usually associated with an increased loss rate, male-pattern and female-pattern hair loss do not generally require testing. If hair loss occurs in a young man with no family history, drug use could be the cause. The pull test helps to evaluate diffuse scalp hair loss. Gentle traction is exerted on a group of hairs (about 40–60) on three different areas of the scalp. The number of extracted hairs is counted and examined under a microscope. Normally, fewer than three hairs per area should come out with each pull. If more than ten hairs are obtained, the pull test is considered positive. The pluck test is conducted by pulling hair out "by the roots". The root of the plucked hair is examined under a microscope to determine the phase of growth, and is used to diagnose a defect of telogen, anagen, or systemic disease. Telogen hairs have tiny bulbs without sheaths at their roots. Telogen effluvium shows an increased percentage of telogen hairs upon examination. Anagen hairs have sheaths attached to their roots. Anagen effluvium shows a decrease in telogen-phase hairs and an increased number of broken hairs. Scalp biopsy is used when the diagnosis is unsure; a biopsy allows for differentiating between scarring and nonscarring forms. Hair samples are taken from areas of inflammation, usually around the border of the bald patch. Daily hair counts are normally done when the pull test is negative. This is done by counting the number of hairs lost. The hair from the first morning combing or during washing should be counted. The hair is collected in a clear plastic bag for 14 days, and the strands are counted and recorded. A count of more than 100 hairs per day is considered abnormal, except after shampooing, when counts of up to 250 may still be normal. Trichoscopy is a noninvasive method of examining hair and scalp. The test may be performed with the use of a handheld dermoscope or a video dermoscope. It allows differential diagnosis of hair loss in most cases. There are two types of identification tests for female pattern baldness: the Ludwig Scale and the Savin Scale. Both track the progress of diffuse thinning, which typically begins on the crown of the head behind the hairline and becomes gradually more pronounced. For male pattern baldness, the Hamilton–Norwood scale tracks the progress of a receding hairline and/or a thinning crown, through to a horseshoe-shaped ring of hair around the head and on to total baldness.
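The pull-test and daily hair-count cut-offs described above amount to a simple decision rule. The following Python sketch merely restates those published thresholds (more than ten hairs extracted makes the pull test positive; more than 100 shed hairs per day is abnormal, with up to 250 acceptable after shampooing); the function names and structure are illustrative assumptions, not a clinical tool.

def pull_test_positive(hairs_extracted: int) -> bool:
    # Pull test: gentle traction on about 40-60 hairs at three scalp sites.
    # Normally fewer than three hairs come out per pull; more than ten is positive.
    return hairs_extracted > 10

def daily_count_abnormal(hairs_per_day: int, after_shampoo: bool = False) -> bool:
    # Daily hair count: more than 100 shed hairs per day is abnormal,
    # except after shampooing, when up to 250 may still be normal.
    limit = 250 if after_shampoo else 100
    return hairs_per_day > limit

print(pull_test_positive(12))           # True: positive pull test
print(daily_count_abnormal(120))        # True: abnormal shedding
print(daily_count_abnormal(200, True))  # False: acceptable after shampooing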
In almost all cases of thinning, and especially in cases of severe hair loss, it is recommended to seek advice from a doctor or dermatologist. Many types of thinning have an underlying genetic or health-related cause, which a qualified professional will be able to diagnose. Management Hiding hair loss Head One method of hiding hair loss is the comb over, which involves restyling the remaining hair to cover the balding area. It is usually a temporary solution, useful only while the area of hair loss is small. As the hair loss increases, a comb over becomes less effective. Another method is to wear a hat or a hairpiece such as a wig or toupee. The wig is a layer of artificial or natural hair made to resemble a typical hair style. In most cases the hair is artificial. Wigs vary widely in quality and cost. In the United States, the best wigs, those that look like real hair, cost up to tens of thousands of dollars. Organizations also collect donations of natural hair to be made into wigs for young patients who have lost their hair due to chemotherapy, other cancer treatment, or any other type of hair loss. Eyebrows Though not as common as loss of hair on the head, chemotherapy, hormone imbalance, various forms of alopecia, and other factors can also cause loss of hair in the eyebrows. Loss of growth in the outer one third of the eyebrow is often associated with hypothyroidism. Artificial eyebrows are available to replace missing eyebrows or to cover patchy eyebrows. Eyebrow embroidery is another option, which involves the use of a blade to add pigment to the eyebrows. This gives a natural 3D look for those who are worried about an artificial appearance, and it lasts for two years. Micropigmentation (permanent makeup tattooing) is also available for those who want the look to be permanent. Medications Treatments for the various forms of hair loss have limited success. Three medications have evidence to support their use in male pattern hair loss: minoxidil, finasteride, and dutasteride. They typically work better to prevent further hair loss than to regrow lost hair. On June 13, 2022, the U.S. Food and Drug Administration (FDA) approved Olumiant (baricitinib) for adults with severe alopecia areata. It is the first FDA-approved drug for systemic treatment, that is, treatment acting on any area of the body. Minoxidil (Rogaine) is a nonprescription medication approved for male pattern baldness and alopecia areata. In a liquid or foam, it is rubbed into the scalp twice a day. Some people have an allergic reaction to the propylene glycol in the minoxidil solution, and a minoxidil foam was developed without propylene glycol. Not all users will regrow hair. Minoxidil may also be taken orally, although this route of administration is not approved by the FDA. The longer the hair has stopped growing, the less likely minoxidil will regrow hair. Minoxidil is not effective for other causes of hair loss. Hair regrowth can take 1 to 6 months to begin. Treatment must be continued indefinitely. If the treatment is stopped, hair loss resumes, and any hair regrown while minoxidil was used, as well as any hair that would otherwise have been lost during that period, will then be lost. Most frequent side effects are mild scalp irritation, allergic contact dermatitis, and unwanted hair in other parts of the body. Finasteride (Propecia) is used in male-pattern hair loss in a pill form, taken 1 milligram per day. It is not indicated for women and is not recommended in pregnant women (as it is known to cause birth defects in fetuses).
The treatment becomes effective within 6 weeks of starting it. Finasteride causes an increase in hair retention, the weight of hair, and some increase in regrowth. Side effects in about 2% of males include decreased sex drive, erectile dysfunction, and ejaculatory dysfunction. Treatment should be continued as long as positive results occur. Once treatment is stopped, hair loss resumes. Corticosteroid injections into the scalp can be used to treat alopecia areata. This type of treatment is repeated on a monthly basis. Oral medication may be used for extensive alopecia areata. Results may take up to a month to be seen. Immunosuppressants applied to the scalp have been shown to temporarily reverse alopecia areata, though the side effects of some of these drugs make such therapy questionable. There is some tentative evidence that anthralin may be useful for treating alopecia areata. Hormonal modulators (oral contraceptives or antiandrogens such as spironolactone and flutamide) can be used for female-pattern hair loss associated with hyperandrogenemia. Surgery Hair transplantation is usually carried out under local anesthetic. A surgeon will move healthy hair from the back and sides of the head to areas of thinning. The procedure can take between four and eight hours, and additional sessions can be carried out to make hair even thicker. Transplanted hair falls out within a few weeks, but regrows permanently within months. Surgical options, such as follicle transplants, scalp flaps, and hair loss reduction, are available. These procedures are generally chosen by those who are self-conscious about their hair loss, but they are expensive and painful, with a risk of infection and scarring. Once surgery has occurred, six to eight months are needed before the quality of new hair can be assessed. Scalp reduction is the process of decreasing the area of bald skin on the head. In time, the skin on the head becomes flexible and stretched enough that some of it can be surgically removed. After the hairless scalp is removed, the space is closed with hair-covered scalp. Scalp reduction is generally done in combination with hair transplantation to provide a natural-looking hairline, especially in those with extensive hair loss. Hairline lowering can sometimes be used to lower a high hairline secondary to hair loss, although there may be a visible scar after further hair loss. Wigs are an alternative to medical and surgical treatment; some patients wear a wig or hairpiece. They can be used permanently or temporarily to cover the hair loss. High-quality, natural-looking wigs and hairpieces are available. Chemotherapy Hypothermia caps may be used to prevent hair loss during some kinds of chemotherapy, specifically when taxanes or anthracyclines are administered. They are not recommended when cancer is present in the skin of the scalp or for lymphoma or leukemia. There are generally only minor side effects from scalp cooling given during chemotherapy. Embracing baldness Instead of attempting to conceal their hair loss, some people embrace it by either doing nothing about it or sporting a shaved head. The general public became more accepting of men with shaved heads in the early 1950s, when Russian-American actor Yul Brynner began sporting the look; the resulting phenomenon inspired many of his male fans to shave their heads.
Male celebrities then continued to bring mainstream popularity to shaved heads, including athletes such as Michael Jordan and Zinedine Zidane and actors such as Dwayne Johnson, Ben Kingsley, and Jason Statham. Female baldness is still viewed as less normal in various parts of the world. Alternative medicine Dietary supplements are not typically recommended. There is only one small trial of saw palmetto which shows tentative benefit in those with mild to moderate androgenetic alopecia. There is no evidence for biotin. Evidence for most other alternative medicine remedies is also insufficient. There was no good evidence for ginkgo, aloe vera, ginseng, bergamot, hibiscus, or sophora as of 2011. Many people use unproven treatments to treat hair loss. Egg oil, in Indian, Japanese, Unani (Roghan Baiza Murgh) and Chinese traditional medicine, was traditionally used as a treatment for hair loss. Research Research is looking into connections between hair loss and other health issues. While there has been speculation about a connection between early-onset male pattern hair loss and heart disease, a review of articles from 1954 to 1999 found no conclusive connection between baldness and coronary artery disease. The dermatologists who conducted the review suggested further study was needed. Environmental factors are under review. A 2007 study indicated that smoking may be a factor associated with age-related hair loss among Asian men. The study controlled for age and family history, and found statistically significant positive associations between moderate or severe male pattern hair loss and smoking status. Vertex baldness is associated with an increased risk of coronary heart disease (CHD) and the relationship depends upon the severity of baldness, while frontal baldness is not. Thus, vertex baldness might be a marker of CHD and is more closely associated with atherosclerosis than frontal baldness. Hair follicle aging A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be primed by a sustained cellular response to the DNA damage that accumulates in renewing stem cells during aging. This damage response involves the proteolysis of type XVII collagen by neutrophil elastase in response to DNA damage in hair follicle stem cells. Proteolysis of collagen leads to elimination of the damaged cells and, consequently, to terminal hair follicle miniaturization. Hedgehog signaling In June 2022 the University of California, Irvine announced that researchers have discovered that hedgehog signaling in murine fibroblasts induces new hair growth and hair multiplication while hedgehog activation increases fibroblast heterogeneity and drives new cell states. A new signaling molecule called SCUBE3 potently stimulates hair growth and may offer a therapeutic treatment for androgenetic alopecia. Etymology The term alopecia () is from the Classical Greek ἀλώπηξ, alōpēx, meaning "fox". The origin of this usage is because this animal sheds its coat twice a year, or because in ancient Greece foxes often lost hair because of mange. 
See also Alopecia in animals Lichen planopilaris List of conditions caused by problems with junctional proteins Locks of Love – charity that provides hair prosthetics to alopecia patients Psychogenic alopecia References External links Conditions of the skin appendages External signs of ageing Hair diseases Human hair Radiation health effects
Hair loss
Chemistry,Materials_science,Biology
5,293
30,958,460
https://en.wikipedia.org/wiki/Microsoft%20reaction%20card%20method
The Microsoft Reaction Card method, developed at Microsoft in 2002 by Joey Benedek and Trish Miner, is used to assess the emotional response to, and desirability of, a design or product. It is commonly used in the field of software design. The method involves a participant describing a design or product using a list of 118 words, each of which is placed on a separate card. After viewing a design or product, the participant is asked to pick out the words they feel are relevant. The moderator then asks the participant to explain the reasons for their selection. References http://uxmatters.com/mt/archives/2010/02/rapid-desirability-testing-a-case-study.php https://www.nngroup.com/articles/microsoft-desirability-toolkit/ Microsoft culture Software testing
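In practice, the selected cards are typically tallied across participants to see which descriptors dominate. The following Python sketch shows one plausible way to aggregate such selections; the sample words and function name are illustrative assumptions, not part of the published toolkit.

from collections import Counter

def tally_reaction_cards(selections):
    # selections: one list of chosen card words per participant.
    # Returns a Counter mapping each word to the number of participants who chose it.
    counts = Counter()
    for chosen in selections:
        counts.update(set(chosen))  # count each word at most once per participant
    return counts

sessions = [
    ["Usable", "Time-saving", "Confusing"],
    ["Usable", "Attractive"],
    ["Confusing", "Slow"],
]
for word, n in tally_reaction_cards(sessions).most_common():
    print(word, n)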
Microsoft reaction card method
Engineering
183
452,985
https://en.wikipedia.org/wiki/Hans%20Jonas
Hans Jonas (; ; 10 May 1903 – 5 February 1993) was a German-born American Jewish philosopher. From 1955 to 1976 he was the Alvin Johnson Professor of Philosophy at the New School for Social Research in New York City. Biography Jonas was born in Mönchengladbach, on 10 May 1903 to a Jewish family. He studied philosophy and theology at the University of Freiburg, the University of Berlin and the University of Heidelberg, and finally earned his Doctorate of Philosophy in 1928 from the University of Marburg with a thesis on Gnosticism entitled Der Begriff der Gnosis (The Concept of Gnosis) and directed by Martin Heidegger. During his study years his academic advisors included Edmund Husserl and Rudolf Bultmann. In Marburg he met Hannah Arendt, who was also pursuing her PhD there, and the two of them were to remain friends for the rest of their lives. When Heidegger joined the Nazi Party in 1933, it may have disturbed Jonas, as he was Jewish and an active Zionist. In 1964 Jonas repudiated his mentor Heidegger for his affiliation with the Nazis. He left Germany for England in 1933, and from England he moved to Palestine in 1934. There he met Lore Weiner, to whom he became betrothed. In 1940 he returned to Europe to join the British Army which had been arranging a special brigade for German Jews wanting to fight against Hitler. He was sent to Italy, and in the last phase of the war moved into Germany. Thus, he kept his promise that he would return only as a soldier in the victorious army. In this time he wrote several letters to Lore about philosophy, in particular philosophy of biology, that would form the basis of his later publications on the subject. They finally married in 1943. Immediately after the war he returned to Mönchengladbach to search for his mother but found that she had been sent to the gas chambers in the Auschwitz concentration camp. Having heard this, he refused to live in Germany again. He returned to Palestine and took part in the 1948 Arab–Israeli War. Jonas taught briefly at the Hebrew University of Jerusalem before moving to North America. In 1950 he left for Canada, teaching at Carleton University. From there he moved in 1955 to New York City, where he was to live for the rest of his life. He was a fellow of the Hastings Center and Professor of Philosophy at New School for Social Research from 1955 to 1976 (where he was Alvin Johnson Professor). From 1982 to 1983 Jonas held the Eric Voegelin Visiting Professorship at the University of Munich. He died at his home in New Rochelle, New York, on 5 February 1993, aged 89. Philosophical work Jonas's writings were very influential in different spheres. For example, The Gnostic Religion, based on his early research on the Gnosis and first published in 1958, was for many years the standard work in English on the subject of Gnosticism. The Imperative of Responsibility (German 1979, English 1984) centers on social and ethical problems created by technology. Jonas insists that human survival depends on our efforts to care for our planet and its future. He formulated a new and distinctive supreme moral imperative: "Act so that the effects of your action are compatible with the permanence of genuine human life". While The Imperative of Responsibility has been credited with catalyzing the environmental movement in Germany, his work The Phenomenon of Life (1966) forms the philosophical undergirding of one major school of bioethics in America. 
Murray Bookchin and Leon Kass both referred to Hans Jonas's work as major, or primary, inspiration. Heavily influenced by Martin Heidegger but also one of Heidegger's most outspoken philosophical critics, The Phenomenon of Life attempts to synthesize the philosophy of matter with the philosophy of mind, producing a rich existential understanding of biology, which ultimately argues for a simultaneously material and moral human nature. On the question of abortion, Jonas was against it, saying, "a mother-to-be is more than her individual self. She carries a human trust, and we should not make abortion merely a matter of her own private wish", society had a "social responsibility" to pregnant mothers, and "To give this mission[motherhood] over completely to individual choice oversteps the order of nature." His writing on the history of Gnosticism revisits terrain covered by earlier standard works on the subject such as Ernesto Buonaiuti's Lo gnosticismo: storia di antiche lotte religiose (1907), interpreting the religion from a unique version of existentialist philosophical viewpoint that also informed his later contributions. He was one of the first philosophers to concern himself with ethical questions in biological science. Jonas's career is generally divided into three periods defined by his three primary works, but in reverse order: studies of gnosticism, studies of philosophical biology, and ethical studies. Works English books The Gnostic Religion: The Message of the Alien God & the Beginnings of Christianity (Boston: Beacon Press, 1958) Second, enlarged edition, 1963. Third edition, 2001. (N.B. The "Introduction to the Third Edition" is in fact a talk given by Jonas in 1974.) The Phenomenon of Life: Toward a Philosophical Biology (New York, Harper & Row, 1966) OCLC 373876 (Evanston, Ill. : Northwestern University Press, 2001). The Imperative of Responsibility: In Search of Ethics for the Technological Age (translation of Das Prinzip Verantwortung) trans. Hans Jonas and David Herr (1979). (University of Chicago Press, 1984) Philosophical Essays: From Ancient Creed to Technological Man (Chicago: University of Chicago Press, 1974) "Technology and Responsibility: Reflections on the New Tasks of Ethics," Social Research 15 (Spring 1973). "Jewish and Christian Elements in Philosophy: their Share in the Emergence of the Modern Mind" "Seventeenth Century and After: The Meaning of the Scientific and Technological Revolution" "Socioeconomic Knowledge and Ignorance of Goals" "Philosophical Reflections on Experimenting with Human Subjects" "Against the Stream: Comments on the Definition and Redefinition of Death" "Biological Engineering—A Preview" "Contemporary Problems in Ethics from a Jewish Perspective" "Biological Foundations of Individuality" "Spinoza and the Theory of Organism" "Sight and Thought: A Review of 'Visual Thinking.'" "Change and Permanence: On the Possibility of Understanding History." "The Gnostic Syndrome: Typology of Its Thought, Imagination, and Mood." "The Hymn of the Pearl: Case Study of a Symbol, and the Claims for a Jewish Origin of Gnosticism." "Myth and Mysticism: A Study of Objectification and Interiorization in Religious Thought." "Origen's Metaphysics of Free Will, Fall, and Salvation: a 'Divine Comedy' of the Universe." "The Soul in Gnosticism and Plotinus." "The Abyss of the Will: Philosophical Meditations on the Seventh Chapter of Paul's Epistle to the Romans." Mortality and Morality: A Search for the Good After Auschwitz ed. 
Lawrence Vogel (Evanston, Ill.: Northwestern University Press, 1996). With Stuart F Spicker: Organism, medicine, and metaphysics : essays in honor of Hans Jonas on his 75th birthday, May 10, 1978 On faith, reason and responsibility (San Francisco: Harper and Row, 1978. New edition: Institute for Antiquity and Christianity, Claremont Graduate School, 1981.) Memoirs (Brandeis University Press, 2008) English monographs Immortality and the modern temper : the Ingersoll lecture, 1961 (Cambridge : Harvard Divinity School, 1962) OCLC 26072209 (included in The Phenomenon of Life) Heidegger and theology (1964) OCLC 14975064 (included in The Phenomenon of Life) Ethical aspects of experimentation with human subjects (Boston:American Academy of Arts and Sciences, 1969) OCLC 19884675. German Gnosis und spätantiker Geist (1–2, 1934–1954) Technik, Medizin und Ethik — Zur Praxis des Prinzips Verantwortung — Frankfurt a.M. : Suhrkamp, 1985 — (On Technology, Medicine and Ethics: On the Practice of the Imperative of Responsibility) Das Prinzip Verantwortung: Versuch einer Ethik für die technologische Zivilisation (Frankfurt am Main : Insel-Verlag, 1979). Erinnerungen. Nach Gesprächen mit Rachel Salamander, ed. Ch. Wiese. Frankfurt am Mein-Leipzig: Insel Verlag, 2003. Macht oder Ohnmacht der Subjektivität? Das Leib-Seele-Problem im Vorfeld des Prinzips Verantwortung. Frankfurt am Main: Insel, 1981, and then Frankfurt am Main: Suhrkamp, 1987. Erkenntnis und Verantwortung, Gespräch mit Ingo Hermann in der Reihe "Zeugen des Jahrhunderts", Edited by I. Hermann. Göttingen: Lamuv, 1991. Philosophische Untersuchungen und metaphysische Vermutungen. Frankfurt am Main: Insel, 1992, and then Frankfurt am Main: Suhrkamp, 1994. Organismus und Freiheit. Ansätze zu einer philosophischen Biologie. Göttingen: Vandenhoeck & Ruprecht, 1973. Augustin und das paulinische Freiheitsproblem. Ein philosophischer Beitrag zur Genesis der christlich-abendländischen Freiheitsidee, Göttingen: Vandenhoeck & Ruprecht, 1930. Second edition entitled Augustin und das paulinische Freiheitsproblem. Eine philosophische Studie zum pelagianischen Streit, with an introduction by J. M. Robinson. Göttingen: Vandenhoeck & Ruprecht, 1965. French Le concept de Dieu après Auschwitz Evolution et liberté Le Principe responsabilité Le Droit de mourir With Sabine Cornille and Philippe Ivernel: Pour une éthique du futur Une éthique pour la nature With Sylvie Courtine-Denamy: Entre le néant et l'éternité La gnose et l'Esprit de l'Antiquité tardive. Histoire et méthodologie de la recherche . Selected papers "The Right to Die." Hastings Center Report 8, number 4 (1978): 31–36. "Straddling the Boundaries of Theory and Practice: Recombinant DNA Research as a Case of Action in the Process of Inquiry." In Recombinant DNA: Science, Ethics and Politics, edited by J. Richards, 253–71. New York: Academic Press, 1978. "Toward a Philosophy of Technology." Hastings Center Report 9 (1979): 34–43. "The Heuristics of Fear." In Ethics in an Age of Pervasive Technology, edited by Melvin Kranzberg, 213–21. Boulder, Colo.: Westview Press, 1980. "Parallelism and Complementarity: The Psycho-Physical Problem in Spinoza and in the Succession of Niels Bohr." In The Philosophy of Baruch Spinoza, edited by Richard Kennington, 121–30. Washington, D.C.: Catholic University of the Americas Press, 1980. "Reflections on Technology, Progress and Utopia." Social Research 48 (1981): 411–55. "Technology as a Subject for Ethics." Social Research 49 (1982): 891–98. "Is Faith Still Possible? 
Memories of Rudolf Bultmann and Reflections on the Philosophical Aspects of His Work." Harvard Theological Review 75 (1982): 1–23. "Ontological Grounding of a Political Ethics: On the Metaphysics of Commitment to the Future of Man." Graduate Faculty Philosophical Journal 10, no. 1 (1984): 47–62. "Ethics and Biogenetic Art." Social Research 52 (1985): 491–504. "The Concept of God after Auschwitz: A Jewish Voice." Journal of Religion 67, number 1 (1987): 1–13. "The Consumer's Responsibility." In Ecology and Ethics. A Report from the Melbu conference, 18–23 July 1990, edited by Audun 0fsti, 215–18. Trondheim: Nordland Akademi for Kunst og Vitenskap, 1992. "The Burden and Blessing of Mortality." Hastings Center Report 22, no. 1 (1992): 34–40. "Philosophy at the End of the Century: A Survey of Its Past and Future." Social Research 61, number 4 (1994): 812–32. "Wissenschaft as Personal Experience [brief memoir]," The Hastings Center report 32:4 (Jul–Aug 2002): 27–35 "Materialism and the Theory of Organism." University of Toronto Quarterly, 21, 1 (1951): 39–52. Other papers "Causality and Perception," The Journal of Philosophy, Vol. 47, No. 11 (May 25, 1950), pp. 319–324 "The Nobility of Sight," Philosophy and Phenomenological Research, Vol. 14, No. 4 (Jun., 1954), pp. 507–519. (also in The Phenomenon of Life) "Immortality and the Modern Temper: The Ingersoll Lecture, 1961" The Harvard Theological Review, volume 55, number 1 (January 1962), pp. 1–20. (also in The Phenomenon of Life) "The Secret Books of the Egyptian Gnostics," The Journal of Religion, Vol. 42, No. 4 (Oct., 1962), pp. 262–273. "Myth and Mysticism: A Study of Objectification and Interiorization in Religious Thought," The Journal of Religion, Vol. 49, No. 4 (October 1969), pp. 315–329 "Freedom of Scientific Inquiry and the Public Interest," The Hastings Center Report, volume 6, number 4 (August 1976), pp. 15–17. See also Natural environment Environmental movement Ethics of technology Noocracy Jewish philosophy References Further reading Hans Jonas, "Wissenschaft as Personal Experience [brief memoir]," The Hastings Center report 32:4 (Jul–Aug 2002): 27–35 Levy, David J. Hans Jonas: The Integrity of Life. University of Missouri Press, 2002. Scodel, Harvey. "An interview with Professor Hans Jonas," Social Research Summer 2003. Troster, Lawrence. "Hans Jonas and the Concept of God after the Holocaust," Conservative Judaism (volume 55:4, Summer 2003) Strachan Donnelley "Hans Jonas, 1903–1993 [Obituary]," The Hastings Center Report 23:2 (March–April 1993), p. 12. Eric Pace: "Hans Jonas, Influential Philosopher, Is Dead at 89," New York Times (February 6, 1993) David Kaufmann: "One of Most Relevant Thinkers You’ve Never Heard Of," Forward (17 October 2007) Stuart F. Spicker, ed. Organism, Medicine and Metaphysics. Essays in Honor of Hans Jonas. Dordrecht: Reidel, 1978. Strachan Donnelley (editor), "The Legacy of Hans Jonas," special issue of The Hastings Center Report 25:7 (November–December 1995). Leon R. Kass, "Appreciating The Phenomenon of Life," p. 3. Richard J. Bernstein, "Rethinking Responsibility," p. 13. Strachan Donnelley, "Bioethical Troubles: Animal Individuals and Human Organisms," p. 21. Lawrence Vogel, "Does Environmental Ethics Need a Metaphysical Grounding?", p. 30.x Christian Schütze, "The Political and Intellectual Hans Jonas," p. 40. "Not Compassion Alone: On Euthanasia and Ethics" (interview with Jonas), p. 44. 
Hava Tirosh-Samuelson and Christian Wiese, eds., The Legacy of Hans Jonas: Judaism and the Phenomenon of Life (Brill, 2008). , Table of contents. Michael Schwartz and Osborne Wiggins, "Psychosomatic Medicine and the Philosophy of Life." Philosophy, Ethics, and Humanities in Medicine 2010, 5:2 (21 January 2010). http://www.peh-med.com/content/5/1/2 Adrian Hagiu and Sergiu Bortoș, "The Imperative of Responsibility in the Era of Fake News." Agathos (volume 13: 1, 2022). https://www.agathos-international-review.com/issues/2022/24/Hagiu.pdf Wiese, Christian. The Life and Thought of Hans Jonas: Jewish Dimensions. Brandeis, 2010. External links Hans-Jonas-Center Berlin 1903 births 1993 deaths 20th-century German philosophers Bioethicists Academic staff of Carleton University Commanders Crosses of the Order of Merit of the Federal Republic of Germany German philosophers of technology Existentialists Jewish emigrants from Nazi Germany to Mandatory Palestine German ethicists Historians of Gnosticism Jewish existentialists Jewish philosophers Writers from New Rochelle, New York People from Mönchengladbach The New School faculty University of Marburg alumni 20th-century American historians German male writers German Zionists 20th-century German male writers Jewish Brigade personnel Jewish ethicists Environmental ethicists Scholars of Mandaeism Fellows of the Hastings Center Israeli military personnel of the 1948 Arab–Israeli War
Hans Jonas
Environmental_science
3,678
22,842,507
https://en.wikipedia.org/wiki/Dioxane%20tetraketone
Dioxane tetraketone (or 1,4-dioxane-2,3,5,6-tetrone) is an organic compound with the formula C4O6. It is an oxide of carbon (an oxocarbon), which can be viewed as the fourfold ketone of dioxane. It can also be viewed as the cyclic dimer of oxiranedione (C2O3), the hypothetical anhydride of oxalic acid. In 1998, Paolo Strazzolini and others synthesized this compound by reacting oxalyl chloride (COCl)2 or the bromide (COBr)2 with a suspension of silver oxalate (Ag2C2O4) in diethyl ether at −15 °C, followed by evaporation of the solvent at low temperature and pressure. The substance is stable when dissolved in ether and trichloromethane at −30 °C, but decomposes into a 1:1 mixture of carbon monoxide (CO) and carbon dioxide (CO2) upon heating to 0 °C. The stability and conformation of the molecule were also analyzed by theoretical methods. See also 1,2-Dioxetanedione References Oxocarbons Dimers (chemistry) Carboxylic anhydrides Conjugated ketones
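The thermal decomposition described above, giving equal amounts of carbon monoxide and carbon dioxide, can be written as the balanced equation below; this follows directly from the C4O6 formula and is stated here only as an illustration.

C4O6 → 2 CO + 2 CO2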
Dioxane tetraketone
Chemistry,Materials_science
277
66,812,671
https://en.wikipedia.org/wiki/Iota%20Mensae
Iota Mensae is a single star about away in the faint constellation Mensa. It has a very slightly variable apparent magnitude of 6.0, making it visible with the naked eye under good skies. Iota Mensae has a spectral type of B8III, indicating that it has exhausted hydrogen at its core and expanded away from the main sequence. It is about 3.6 times the mass of the Sun, 301 times as luminous as the Sun, and has swollen to 9.5 times the Sun's radius. It is calculated to be 314 million years old. It has been catalogued as a chemically peculiar star with abnormally strong lines of silicon in its spectrum, but this classification is now considered doubtful. Its brightness varies by a few hundredths of a magnitude. Its period was initially measured at 2.6 days, but this is now considered to be a period of 5.3 days with primary and secondary minima of a similar depth. The variability is thought to be due to the rotation of the star. References Mensa (constellation) Iota, Epsilon 026264 1991 038602 B-type giants Rotating ellipsoidal variables Durchmusterung objects
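As a rough consistency check of the quoted luminosity and radius, the Stefan–Boltzmann law L = 4πR²σT⁴ can be inverted for the effective temperature. The Python sketch below assumes the quoted 301 solar luminosities is a bolometric value and treats the star as a blackbody; the resulting figure is only illustrative and is not a value stated above.

import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # nominal solar luminosity, W
R_SUN = 6.957e8         # nominal solar radius, m

def effective_temperature(lum_solar, radius_solar):
    # Invert L = 4*pi*R^2*sigma*T^4, with L and R given in solar units.
    L = lum_solar * L_SUN
    R = radius_solar * R_SUN
    return (L / (4.0 * math.pi * R**2 * SIGMA)) ** 0.25

# Figures quoted for Iota Mensae: about 301 L_sun and 9.5 R_sun
print(round(effective_temperature(301.0, 9.5)))  # roughly 7,800 K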
Iota Mensae
Astronomy
244
11,830,372
https://en.wikipedia.org/wiki/Menger%20curvature
In mathematics, the Menger curvature of a triple of points in n-dimensional Euclidean space Rn is the reciprocal of the radius of the circle that passes through the three points. It is named after the Austrian-American mathematician Karl Menger. Definition Let x, y and z be three points in Rn; for simplicity, assume for the moment that all three points are distinct and do not lie on a single straight line. Let Π ⊆ Rn be the Euclidean plane spanned by x, y and z and let C ⊆ Π be the unique Euclidean circle in Π that passes through x, y and z (the circumcircle of x, y and z). Let R be the radius of C. Then the Menger curvature c(x, y, z) of x, y and z is defined by c(x, y, z) = 1/R. If the three points are collinear, R can be informally considered to be +∞, and it makes rigorous sense to define c(x, y, z) = 0. If any of the points x, y and z are coincident, again define c(x, y, z) = 0. Using the well-known formula relating the side lengths of a triangle to its area, it follows that c(x, y, z) = 4A / (|x − y| |y − z| |z − x|), where A denotes the area of the triangle spanned by x, y and z. Another way of computing Menger curvature is the identity c(x, y, z) = 2 sin(∠xyz) / |x − z|, where ∠xyz is the angle made at the y-corner of the triangle spanned by x, y, z. Menger curvature may also be defined on a general metric space. If X is a metric space and x, y, and z are distinct points, let f be an isometry from {x, y, z} into the Euclidean plane R2. Define the Menger curvature of these points to be cX(x, y, z) = c(f(x), f(y), f(z)). Note that f need not be defined on all of X, just on {x, y, z}, and the value cX(x, y, z) is independent of the choice of f. Integral curvature rectifiability Menger curvature can be used to give quantitative conditions for when sets in Rn may be rectifiable. For a Borel measure μ on a Euclidean space Rn define c^p(μ) = ∫∫∫ c(x, y, z)^p dμ(x) dμ(y) dμ(z). A Borel set E is rectifiable if c^2(H^1|E) < ∞, where H^1|E denotes one-dimensional Hausdorff measure restricted to the set E. The basic intuition behind the result is that Menger curvature measures how straight a given triple of points is (the smaller c(x, y, z) is, the closer x, y, and z are to being collinear), and this integral quantity being finite is saying that the set E is flat on most small scales. In particular, if the power in the integral is larger, our set is smoother than just being rectifiable. Let , be a homeomorphism and . Then if . If where , and , then is rectifiable in the sense that there are countably many curves such that . The result is not true for , and for . In the opposite direction, there is a result of Peter Jones: If , , and is rectifiable, then there is a positive Radon measure supported on satisfying for all and such that (in particular, this measure is the Frostman measure associated to E). Moreover, if for some constant C and all and r > 0, then . This last result follows from the Analyst's Traveling Salesman Theorem. Analogous results hold in general metric spaces. See also Menger–Melnikov curvature of a measure External links References Curvature (mathematics) Multi-dimensional geometry
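A direct way to compute Menger curvature numerically is through the area formula above, c(x, y, z) = 4A / (|x − y| |y − z| |z − x|). The following Python sketch is a minimal illustration for points in the plane; it is not taken from any particular library, and the collinear and coincident cases return 0 as in the definition.

import math

def menger_curvature(x, y, z):
    # Menger curvature of three points in R^2: the reciprocal of the circumradius,
    # computed as 4 * area / (|x - y| * |y - z| * |z - x|).
    a = math.dist(x, y)
    b = math.dist(y, z)
    c = math.dist(z, x)
    if a == 0.0 or b == 0.0 or c == 0.0:
        return 0.0  # coincident points
    # Twice the signed area via the cross product of (y - x) and (z - x)
    cross = (y[0] - x[0]) * (z[1] - x[1]) - (y[1] - x[1]) * (z[0] - x[0])
    area = abs(cross) / 2.0
    return 4.0 * area / (a * b * c)  # collinear points give area 0, hence curvature 0

# Three points on the unit circle have circumradius 1, so curvature about 1
print(menger_curvature((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)))  # ~1.0
print(menger_curvature((0.0, 0.0), (1.0, 0.0), (2.0, 0.0)))   # 0.0 (collinear)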
Menger curvature
Physics
694
4,970,131
https://en.wikipedia.org/wiki/Photoprotection
Photoprotection is the biochemical process that helps organisms cope with molecular damage caused by sunlight. Plants and other oxygenic phototrophs have developed a suite of photoprotective mechanisms to prevent photoinhibition and oxidative stress caused by excess or fluctuating light conditions. Humans and other animals have also developed photoprotective mechanisms to avoid UV photodamage to the skin, prevent DNA damage, and minimize the downstream effects of oxidative stress. In photosynthetic organisms In organisms that perform oxygenic photosynthesis, excess light may lead to photoinhibition, or photoinactivation of the reaction centers, a process that does not necessarily involve chemical damage. When photosynthetic antenna pigments such as chlorophyll are excited by light absorption, unproductive reactions may occur by charge transfer to molecules with unpaired electrons. Because oxygenic phototrophs generate O2 as a byproduct from the photocatalyzed splitting of water (H2O), photosynthetic organisms have a particular risk of forming reactive oxygen species. Therefore, a diverse suite of mechanisms has developed in photosynthetic organisms to mitigate these potential threats, which become exacerbated under high irradiance, fluctuating light conditions, in adverse environmental conditions such as cold or drought, and while experiencing nutrient deficiencies which cause an imbalance between energetic sinks and sources. In eukaryotic phototrophs, these mechanisms include non-photochemical quenching mechanisms such as the xanthophyll cycle, biochemical pathways which serve as "relief valves", structural rearrangements of the complexes in the photosynthetic apparatus, and use of antioxidant molecules. Higher plants sometimes employ strategies such as reorientation of leaf axes to minimize incident light striking the surface. Mechanisms may also act on a longer time-scale, such as up-regulation of stress response proteins or down-regulation of pigment biosynthesis, although these processes are better characterized as "photoacclimatization" processes. Cyanobacteria possess some unique strategies for photoprotection which have not been identified in plants nor in algae. For example, most cyanobacteria possess an Orange Carotenoid Protein (OCP), which serves as a novel form of non-photochemical quenching. Another unique, albeit poorly-understood, cyanobacterial strategy involves the IsiA chlorophyll-binding protein, which can aggregate with carotenoids and form rings around the PSI reaction center complexes to aid in photoprotective energy dissipation. Some other cyanobacterial strategies may involve state-transitions of the phycobilisome antenna complex , photoreduction of water with the Flavodiiron proteins, and futile cycling of CO2 . In plants It is widely known that plants need light to survive, grow and reproduce. It is often assumed that more light is always beneficial; however, excess light can actually be harmful for some species of plants. Just as animals require a fine balance of resources, plants require a specific balance of light intensity and wavelength for optimal growth (this can vary from plant to plant). Optimizing the process of photosynthesis is essential for survival when environmental conditions are ideal and acclimation when environmental conditions are severe. When exposed to high light intensity, a plant reacts to mitigate the harmful effects of excess light. 
To best protect themselves from excess light, plants employ a multitude of methods to minimize harm inflicted by excess light. A variety of photoreceptors are used by plants to detect light intensity, direction and duration. In response to excess light, some photoreceptors have the ability to shift chloroplasts within the cell farther from the light source thus decreasing the harm done by superfluous light. Similarly, plants are able to produce enzymes that are essential to photoprotection such as Anthocyanin synthase. Plants deficient in photoprotection enzymes are much more sensitive to light damage than plants with functioning photoprotection enzymes. Also, plants produce a variety of secondary metabolites beneficial for their survival and protection from excess light. These secondary metabolites that provide plants with protection are commonly used in human sunscreen and pharmaceutical drugs to supplement the inadequate light protection that is innate to human skin cells. Various pigments and compounds can be employed by plants as a form of UV photoprotection as well. Pigmentation is one method employed by a variety of plants as a form of photoprotection. For example, in Antarctica, native mosses of green color can be found naturally shaded by rocks or other physical barriers while red colored mosses of the same species are likely to be found in wind and sun exposed locations. This variation in color is due to light intensity. Photoreceptors in mosses, phytochromes (red wavelengths) and phototropins (blue wavelengths), assist in the regulation of pigmentation. To better understand this phenomenon, Waterman et al. conducted an experiment to analyze the photoprotective qualities of UVACs (Ultraviolet Absorbing Compounds) and red pigmentation in antarctic mosses. Moss specimens of species Ceratodon purpureus, Bryum pseudotriquetrum and Schistidium antarctici were collected from an island region in East Antarctica. All specimens were then grown and observed in a lab setting under constant light and water conditions to assess photosynthesis, UVAC and pigmentation production. Moss gametophytes of red and green varieties were exposed to light and consistent watering for a period of two weeks. Following the growth observation, cell wall pigments were extracted from the moss specimens. These extracts were tested using UV–Vis spectrophotometry which uses light from the UV and visible spectrum to create an image depicting light absorbance. UVACs are typically found in the cytoplasm of the cell; however, when exposed to high-intensity light, UVACs are transported into the cell wall. It was found that mosses with higher concentrations of red pigments and UVACs located in the cell walls, rather than intracellularly, performed better in higher intensity light. Color change in the mosses was found not to be due to chloroplast movement within the cell. It was found that UVACs and red pigments function as long-term photoprotection in Antarctic mosses. Therefore, in response to high-intensity light stress, the production of UVACs and red pigmentation is up-regulated. Knowing that plants are able to differentially respond to varying concentrations and intensities of light, it is essential to understand why these reactions are important. Due to a steady rise in global temperatures in recent years, many plants have become more susceptible to light damage. Many factors including soil nutrient richness, ambient temperature fluctuation and water availability all impact the photoprotection process in plants. 
Plants exposed to high light intensity coupled with water deficits displayed a significantly inhibited photoprotection response. Although not yet fully understood, photoprotection is an essential function of plants. In humans Photoprotection of the human skin is achieved by extremely efficient internal conversion of DNA, proteins and melanin. Internal conversion is a photochemical process that converts the energy of the UV photon into small, harmless amounts of heat. If the energy of the UV photon were not transformed into heat, then it would lead to the generation of free radicals or other harmful reactive chemical species (e.g. singlet oxygen, or hydroxyl radical). In DNA this photoprotective mechanism evolved four billion years ago at the dawn of life. The purpose of this extremely efficient photoprotective mechanism is to prevent direct DNA damage and indirect DNA damage. The ultrafast internal conversion of DNA reduces the excited state lifetime of DNA to only a few femtoseconds (10⁻¹⁵ s); this way the excited DNA does not have enough time to react with other molecules. For melanin, this mechanism developed later in the course of evolution. Melanin is such an efficient photoprotective substance that it dissipates more than 99.9% of the absorbed UV radiation as heat. This means that less than 0.1% of the excited melanin molecules will undergo harmful chemical reactions or produce free radicals. Synthetic melanocyte-stimulating hormone In the European Union and United States, afamelanotide is indicated for the prevention of phototoxicity in adults with erythropoietic protoporphyria. Afamelanotide is also being investigated as a method of photoprotection in the treatment of polymorphous light eruption, actinic keratosis and squamous cell carcinoma (a form of skin cancer). Artificial melanin The cosmetic industry claims that the UV filter acts as an "artificial melanin". However, the artificial substances used in sunscreens do not efficiently dissipate the energy of the UV photon as heat. Instead, these substances have a very long excited state lifetime; in fact, the substances used in sunscreens are often used as photosensitizers in chemical reactions (see Benzophenone). Oxybenzone, titanium oxide and octyl methoxycinnamate are photoprotective agents used in many sunscreens, providing broad-spectrum UV coverage, including UVB and short-wave UVA rays. See also Sunscreen Photocarcinogen Direct DNA damage Indirect DNA damage References Biological defense mechanisms Photochemistry Skin physiology Sun tanning
Photoprotection
Chemistry,Biology
1,990
71,402,863
https://en.wikipedia.org/wiki/Equisingularity
In algebraic geometry, an equisingularity is, roughly, a family of singularities that are equivalent to one another in an appropriate sense; it is an important notion in singularity theory. There is no universal definition of equisingularity, but Zariski's equisingularity is the most famous one. Zariski's equisingularity, introduced in 1971 under the name "algebro-geometric equisingularity", gives a stratification that is different from the usual Whitney stratification on a real or complex algebraic variety. See also stratified space References Further reading https://mathoverflow.net/questions/299314/a-general-definition-of-an-equisingular-family-of-singular-varieties Algebraic geometry
Equisingularity
Mathematics
167
1,869,575
https://en.wikipedia.org/wiki/Apache%20tears
Apache tears are rounded pebbles of obsidian or "obsidianites" composed of black or dark-colored natural volcanic glass, usually of rhyolitic composition and bearing conchoidal fracture. Also known by the lithologic term marekanite, this variety of obsidian occurs as subrounded to subangular bodies up to about in diameter, often bearing indented surfaces. Internally the pebbles sometimes contain fine bands or microlites and, though in reflected light they appear black and opaque, they may be translucent in transmitted light. Apache tears fall between 5 and 5.5 in hardness on the Mohs scale. Geology Apache tears originate from siliceous lava flows, lava domes or ash-flow tuffs, often in close association with, or embedded in, gray perlite. The spherules occur as cores within perlite masses that typically exhibit a texture of concentrically curved, onion-skin fractures. Formation is apparently related to differential cooling and various alkali and water contents. Excessive water present during cooling and quenching of rhyolitic lava causes obsidian to hydrate (i.e., water entering the obsidian glass converts it to perlite). Where perlite is incompletely hydrated, fresh obsidian cores remain as pebbles of marekanite, or Apache tears; this origin has been described occasionally in the geologic literature. Apache tears are well known from Tertiary volcanic terrain in numerous localities throughout the western United States, particularly Arizona, from where specimens were widely collected and sold in the lapidary and specimen trade. Several districts in western Nevada also have yielded abundant Apache tears eroding from tuff beds; such areas have been popularized in the lapidary trade through guides for rockhounds. Specimens from many of these sites have been avidly collected by rockhounds and lapidary enthusiasts, are often tumbled, and may be considered semi-precious gemstones; locations are noted in the section "Gemstones of Nevada" by Rose and Ferdock. Culture The name comes from a legend of the Apache tribe: about 75 Apaches and the US Cavalry fought on a mountain overlooking what is now Superior, Arizona, in the 1870s. Facing defeat, the outnumbered Apache warriors rode their horses off the mountain to their deaths rather than be killed. The wives and families of the warriors cried when they heard of the tragedy; their tears turned into stone upon hitting the ground. American singer-songwriter Johnny Cash wrote a song entitled "Apache Tears" for his 1964 album Bitter Tears: Ballads of the American Indian. See also Pele's tears References Gemstones American folklore Igneous rocks Obsidian
Apache tears
Physics
529
2,236,083
https://en.wikipedia.org/wiki/Osteopenia
Osteopenia, known as "low bone mass" or "low bone density", is a condition in which bone mineral density is low. Because their bones are weaker, people with osteopenia may have a higher risk of fractures, and some people may go on to develop osteoporosis. In 2010, 43 million older adults in the US had osteopenia. Unlike osteoporosis, osteopenia does not usually cause symptoms, and losing bone density in itself does not cause pain. There is no single cause for osteopenia, although there are several risk factors, including modifiable (behavioral, including dietary and use of certain drugs) and non-modifiable (for instance, loss of bone mass with age). For people with risk factors, screening via a DXA scanner may help to detect the development and progression of low bone density. Prevention of low bone density may begin early in life and includes a healthy diet and weight-bearing exercise, as well as avoidance of tobacco and alcohol. The treatment of osteopenia is controversial: non-pharmaceutical treatment involves preserving existing bone mass via healthy behaviors (dietary modification, weight-bearing exercise, avoidance or cessation of smoking or heavy alcohol use). Pharmaceutical treatment for osteopenia, including bisphosphonates and other medications, may be considered in certain cases but is not without risks. Overall, treatment decisions should be guided by considering each patient's constellation of risk factors for fractures. Risk factors Many divide risk factors for osteopenia into fixed (non-changeable) and modifiable factors. Osteopenia can also be secondary to other diseases. An incomplete list of risk factors: Fixed Age: bone density peaks at age 35, and then decreases. Bone density loss occurs in both men and women Ethnicity: European and Asian people have increased risk Sex: women are at higher risk, particularly those with early menopause Family history: low bone mass in the family increases risk Modifiable / behavioral Tobacco use Alcohol use Inactivity – particularly lack of weight-bearing or resistance activities Insufficient caloric intake – osteopenia can be connected to female athlete triad syndrome, which occurs in female athletes as a combination of energy deficiency, menstrual irregularities, and low bone mineral density. Low nutrient diet (particularly calcium, Vitamin D) Other diseases Celiac disease, via poor absorption of calcium and vitamin D Hyperthyroidism Anorexia nervosa Medications Steroids Anticonvulsants Screening and diagnosis The ISCD (International Society for Clinical Densitometry) and the National Osteoporosis Foundation recommend that older adults (women over 65 and men over 70) and adults with risk factors for low bone mass, or previous fragility fractures, undergo DXA testing. The DXA (dual X-ray absorptiometry) scan uses a form of X-ray technology, and offers accurate bone mineral density results with low radiation exposure. The United States Preventive Services Task Force recommends osteoporosis screening for women with increased risk over 65 and states there is insufficient evidence to support screening men. The main purpose of screening is to prevent fractures. Of note, USPSTF screening guidelines are for osteoporosis, not specifically osteopenia. 
The National Osteoporosis Foundation recommends use of central (hip and spine) DXA testing for accurate measure of bone density, emphasizing that peripheral or "screening" scanners should not be used to make clinically meaningful diagnoses and that peripheral and central DXA scans cannot be compared to each other. DXA scanners can be used to diagnose osteopenia or osteoporosis as well as to measure bone density over time as people age or undergo medical treatment or lifestyle changes. Information from the DXA scanner creates a bone mineral density T-score by comparing a patient's density to the bone density of a healthy young person. Bone density between 1 and 2.5 standard deviations below the reference, or a T-score between −1.0 and −2.5, indicates osteopenia (a T-score smaller than or equal to −2.5 indicates osteoporosis). Calculation of the T-score itself may not be standardized. The ISCD recommends using Caucasian women between 20 and 29 years old as the baseline for bone density for ALL patients, but not all facilities follow this recommendation. The ISCD recommends that Z-scores, not T-scores, be used to classify bone density in premenopausal women and men under 50. Prevention Prevention of low bone density can start early in life by maximizing peak bone density. Once a person loses bone density, the loss is usually irreversible, so preventing (greater than normal) bone loss is important. Actions to maximize bone density and stabilize loss include: Exercise, particularly weight-bearing exercise, resistance exercises and balance exercises, through mechanical loading that promotes increased bone mass, and reduced fall risk Adequate caloric intake Sufficient calcium in diet: older adults may have increased calcium needs—of note, medical conditions such as Celiac and hyperthyroidism can affect absorption of calcium Sufficient Vitamin D in diet Estrogen replacement Avoidance of steroid medications Limit alcohol use and smoking Treatment The pharmaceutical treatment of osteopenia is controversial and more nuanced than well-supported recommendations for improved nutrition and weight-bearing exercise. The diagnosis of osteopenia in and of itself does not always warrant pharmaceutical treatment. Many people with osteopenia may be advised to follow risk prevention measures (as above). Risk of fracture guides clinical treatment decisions: the World Health Organization (WHO) Fracture Risk Assessment Tool (FRAX) estimates the probability of hip fracture and the probability of a major osteoporotic fracture (MOF), which could occur in a bone other than the hip. In addition to bone density (T-score), calculation of the FRAX score involves age, body characteristics, health behaviors, and other medical history. As of 2014, The National Osteoporosis Foundation (NOF) recommends pharmaceutical treatment for osteopenic postmenopausal women and men over 50 with FRAX hip fracture probability of >3% or FRAX MOF probability >20%. As of 2016, the American Association of Clinical Endocrinologists and the American College of Endocrinology agree. In 2017, the American College of Physicians recommended that clinicians use individual judgment and knowledge of patients' particular risk factors for fractures, as well as patient preferences, to decide whether to pursue pharmaceutical treatment for women with osteopenia over 65. Pharmaceutical treatment for low bone density includes a range of medications. 
Commonly used drugs include bisphosphonates (alendronate, risedronate, and ibandronate); some studies show decreased fracture risk and increased bone density after bisphosphonate treatment for osteopenia. Other medications include selective estrogen receptor modulators (SERMs) (e.g., raloxifene), estrogens (e.g., estradiol), calcitonin, and parathyroid hormone-related protein analogues (e.g., abaloparatide, teriparatide). These drugs are not without risks. In this complex landscape, many argue that clinicians must consider a patient's individual risk of fracture, not simply treat those with osteopenia as equally at risk. A 2005 editorial in the Annals of Internal Medicine states "The objective of using osteoporosis drugs is to prevent fractures. This can be accomplished only by treating patients who are likely to have a fracture, not by simply treating T-scores." History Osteopenia, from Greek ὀστέον (ostéon), "bone" and πενία (penía), "poverty", is a condition of sub-normally mineralized bone, usually the result of a rate of bone lysis that exceeds the rate of bone matrix synthesis. See also osteoporosis. In June 1992, the World Health Organization defined osteopenia. An osteoporosis epidemiologist at the Mayo Clinic who participated in setting the criterion in 1992 said "It was just meant to indicate the emergence of a problem", and noted that "It didn't have any particular diagnostic or therapeutic significance. It was just meant to show a huge group who looked like they might be at risk." See also Bone mineral density Osteoporosis References External links Aging-associated diseases Endocrine diseases Osteopathies Rheumatology Histopathology Medical signs Medical controversies
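As a concrete illustration of the T-score arithmetic described in the screening section above, using hypothetical bone mineral density values in g/cm² (invented for illustration, not drawn from any guideline or reference population):

```latex
% T-score: patient BMD relative to the young-adult reference mean, expressed in
% units of the reference population's standard deviation (values hypothetical)
T \;=\; \frac{\mathrm{BMD}_{\mathrm{patient}} - \mathrm{BMD}_{\mathrm{reference}}}{\mathrm{SD}_{\mathrm{reference}}}
  \;=\; \frac{0.85 - 1.00}{0.10} \;=\; -1.5
% since -2.5 < -1.5 < -1.0, this hypothetical scan falls in the osteopenia range
```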
Osteopenia
Chemistry,Biology
1,798
75,217,983
https://en.wikipedia.org/wiki/299%20%28number%29
299 is the natural number following 298 and preceding 300. In mathematics 299 is an odd composite number with two prime factors. 299 is a highly cototient number, meaning that the equation x − φ(x) = 299 has more solutions x than the corresponding equation does for any smaller number greater than 1. 299 is a self number, meaning that it cannot be written as m plus the digit sum of m for any natural number m. 299 is the twelfth cake number: the maximum number of pieces into which a cake (or cube) can be divided by 12 planar cuts. 299 = 13 × 23 is a brilliant number, meaning that it is the product of 2 primes having the same number of digits. References Integers
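A short Python check of the properties above (an illustrative sketch; the helper names are arbitrary and sympy is assumed to be installed):

```python
from sympy import totient, factorint  # sympy assumed available

n = 299

def cake(k):
    # Cake number: maximum pieces from k planar cuts, C(k) = (k^3 + 5k + 6) / 6
    return (k**3 + 5*k + 6) // 6

def digit_sum(m):
    return sum(int(d) for d in str(m))

# Twelfth cake number
print(cake(12) == n)  # True

# Brilliant number: product of two primes with the same number of digits
primes = [p for p, e in factorint(n).items() for _ in range(e)]
print(len(primes) == 2 and len(str(primes[0])) == len(str(primes[1])))  # True (13 x 23)

# Self number: no m exists with m + digit_sum(m) = 299
print(all(m + digit_sum(m) != n for m in range(1, n)))  # True

# Highly cototient: count solutions of x - phi(x) = 299; any composite solution
# satisfies x <= 299^2, so a finite search suffices
print(sum(1 for x in range(2, n**2 + 1) if x - totient(x) == n))
```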
299 (number)
Mathematics
118
16,260,043
https://en.wikipedia.org/wiki/John%20E.%20Amoore
John E. Amoore (1930–1998) was a British biochemist who first proposed the stereochemical theory for olfaction. Bibliography Molecular Basis of Odor John E. Amoore, Published 1970, Thomas How Smells Shape Up John E. Amoore, Published 1977, American Chemical Society References British biochemists 1930 births 1998 deaths
John E. Amoore
Chemistry
72
2,830,832
https://en.wikipedia.org/wiki/Community%20studies
Community studies is an academic field drawing on both sociology and anthropology and the social research methods of ethnography and participant observation in the study of community. In academic settings around the world, community studies is variously a sub-discipline of anthropology or sociology, or an independent discipline. It is often interdisciplinary and geared toward practical applications rather than purely theoretical perspectives. Community studies is sometimes combined with other fields, i.e., "Urban and Community Studies," "Health and Community Studies," or "Family and Community studies." Epistemology In North America, community studies drew inspiration from the classic urban sociology texts produced by the Chicago School, such as the works of Louis Wirth and William Foote Whyte. In Britain, community studies was developed for colonial administrators working in East Africa, particularly Kenya. It was further developed in the post-war period with the Institute of Community Studies founded by Michael Young in east London, and with the studies published from the institute, such as Family and Kinship in East London. Community studies, like colonial anthropology, have often assumed the existence of discrete, relatively homogeneous, almost tribe-like communities, which can be studied as organic wholes. In this, it has been a key influence on communitarianism and communalism, from the local context to the global and everywhere in between. Curricula Community studies curricula are often centered on the "concerns" of communities. These include mental and physical health, stress, addiction, AIDS, racism, immigration, ethnicity, gender, identity, sexuality, the environment, crime, deviance, delinquency, family problems, social competence, poverty, homelessness and other psycho-social aspects. Understanding the socio-cultural completeness and the anthropological ramifications of the accurate analysis of community health is key to the sphere of these studies. Another focus of curricula in community studies is upon anthropology, cultural anthropology in particular. Some programs set as prerequisite knowledge, the background and historical contexts for community, drawing upon archeological findings and the theoretical underpinnings for social organization in ancient and prehistorical community settings. The theories connected with the Neolithic Revolution is one example of a deep study into how, where and why, hunter-gatherer communities formed. Community studies have been linked to the causes of social justice, promoting peace and nonviolence and working towards social change, often within an activist framework. Schools with Community studies concentrations Urban and Community Studies at the University of Connecticut Community Studies Program at the University of Colorado Boulder. Urban and Community Studies at the Rochester Institute of Technology. College of Community and Public Service at the University of Massachusetts Boston Child, Family and Community Studies Integrated Curriculum Courses at Douglas College (BC, Canada) Center for Community Studies at Peabody College-Vanderbilt The Centre for Urban and Community Studies at the University of Toronto Institute of Health and Community Studies at Bournemouth University (UK) Pan African Center for Community Studies at the University of Akron Department of Educational Policy & Community Studies at the University of Wisconsin–Milwaukee. 
Integrative Studies Concentrations - Community at George Mason University Department of Community, Agriculture, Recreation and Resource Studies at Michigan State University Department of Social Policy and Education at Birkbeck, University of London (UK) Department of Community and Regional Development at University of California, Davis Center for Urban Studies at Istanbul Sehir University Department of Community Studies at University of California, Santa Cruz Center for Communal Studies at the University of Southern Indiana Notes Further reading Community development Community psychology Sources Community studies at informal education Community Environmental social science
Community studies
Environmental_science
711
16,855,172
https://en.wikipedia.org/wiki/PDF%20%28gene%29
Peptide deformylase, mitochondrial is an enzyme that in humans is encoded by the PDF gene. References Further reading External links
PDF (gene)
Chemistry
27
64,790,461
https://en.wikipedia.org/wiki/Morse%20Micro
Morse Micro is a Sydney-based developer of Wi-Fi HaLow microprocessors: chips that enable high data rates with long range and low power consumption. Amongst all Wi-Fi HaLow systems on a chip, Morse Micro processors are reported to be the smallest, fastest and longest-range, with the lowest power use. The main application of the technology is machine-to-machine communications. With the Internet of things expected to extend to 30 billion devices by 2025, this represents a steeply growing number of users of the technology. The founders plan to be part of "expanding Wi-Fi so it can go into everything, every smoke alarm, every camera." The firm has its global HQ in Sydney, which is also its main base for R&D, with additional centres in the United States, China, India, the United Kingdom and, from 2024, an operations centre in Taiwan. As of 2022, Morse Micro was producing more semiconductors than any other Australian-based tech company. Technology After eight years' development, the company's Wi-Fi HaLow processor was reported to deliver 10 times the range of conventional Wi-Fi technology, and to function for several years before needing a battery change. Data rates and range The microprocessor allows for a range of data rates, depending on the modulation and coding scheme (MCS) used. These range from as low as 150 kilobits per second (kbps), using MCS10 with BPSK modulation, to a top rate of 4 megabits per second (Mbps), using MCS9 with 256 quadrature amplitude modulation. The chip uses low-bandwidth wireless network protocols, operating in the sub-1 GHz spectrum, while providing a communications range of 1,000 metres. In one field test, researchers found the technology could sustain high-speed data transmission between a device placed by the north end of Sydney Harbour Bridge and a device across the harbour at Sydney Opera House. The company claims its chip provides 10 times the range, 100 times the area and 1000 times the volume of data offered by traditional Wi-Fi. Connectivity and energy To enable networked communications between machines, a single Wi-Fi HaLow access point can securely connect up to 8,191 devices. Applications for the Wi-Fi HaLow technology include the Internet of things, with solutions for the home (such as lighting, monitoring and smart door locks) and for industry (such as vehicle management, high-end security and supply chain asset tracking). Looking at its scalability, one American technical review made this assessment: "That's ample capacity to connect every LED bulb, light switch, smart door lock, motorized window shade, thermostat, smoke detector, solar panel, security camera, or any imaginable smart-home device for the foreseeable future." Physically, the company's microchip is one-fifth the size of a traditional Wi-Fi processor. It uses very little energy, consuming a fraction of the power consumed by traditional chips, which is achieved by periodically waking and reporting. As such, the chips can operate for several years on a single coin-sized battery. In 2020, the first generation of Morse Micro microchips went into production in Taiwan. The company has onshore design and fabrication of composite semiconductors in Australia, which has been assessed as a strategic capability. As of late 2022, the market for Wi-Fi HaLow products appeared to be expanding, driven by those developing industrial IoT in the Japanese market, which "deploy thousands of devices in warehouses which use sensors and actuators."
History "Wi-Fi was invented over 20 years ago in Australia and over that time we have seen it go into every laptop, phone and tablet, and all of that came from people in Australia. Today we are opening it up and expanding Wi-Fi so it can go into everything, every smoke alarm, every camera." — Andrew Terry, founder, speaking to The Sydney Morning Herald in 2017The founding partners of Morse Micro, Andrew Terry and Michael De Nil met while working for Broadcom, the largest supplier of integrated circuits for communications. De Nil said they noticed that chips designed for phones and laptops were being used for machine-to-machine communication and "that wasn't working very well." They decided to create a new kind of microprocessor, specifically for the Internet of things. Morse Micro Pty Ltd was established as a private company, limited by guarantee, in August 2016. The founders were later joined by several significant engineers, including: Professor Neil Weste the founder of Radiata Networks who had created the first 802.11a Wi-Fi chip Dr. John O'Sullivan (engineer) radio astronomer who led the team who invented Wi-Fi at CSIRO in the 1980s Dr. David Goodall, a design engineer at Radiata, which created the first commercial WiFi chip By late 2023, the company employed 180 people across Australia, the United States, China, India, UK, Singapore and Taiwan. From this point the focus of market expansion became Japan, through its Japanese investor MegaChips. Security cameras became a key application, which was recognised with the global industry award, the IoT Product of the Year, in 2022 and 2023. The Australian Financial Review reported from 2024 that Morse Micro was ameliorating for geopolitical risk by maintaining two supply chains for chips and components, one from mainland China, the other Taiwan, with assembly and warehousing in Singapore. The Singapore facility began operations in August 2023, and had produced over 2 million chips by November of that year. Investors The Australian Government provided the founders with seed funding in 2017 as they believed Morse Micro has the "first WiFi HaLow silicon chip that securely connects smart devices over long distances." It is reported to be the best-funded Wi-Fi HaLow technology companies, with large investors from Japan, the United States and a spread of Australian retirement funds. By 15 February 2023, the company had an estimated value of US$700 million, just over A$1 billion. Series A investment, 2019 In May 2019, Series A funding was provided in by a suite of investors. These included the Clean Energy Innovation Fund and CSIRO Innovation Fund, part of the Australian scientific research agency credited with inventing Wi-Fi in 1997. Investment also came from American entrepreneur Ray Stata of Analog Devices, Blackbird Ventures, Main Sequence Ventures, Right Click Capital, Kim Jackson and her husband Scott Farquhar through Skip Capital, Lucy and Malcolm Turnbull; and Uniseed, the venture fund of UniSuper. This tranche totalled A$42 million. Series B investment 2022 By September 2022 the company had announced its Series B round of A$140 million, later extended to A$170 million, attracting intense investor interest. The investment round was led by Japanese chip design and manufacturing giant MegaChips, with further investment from its incumbent investors, which is known to include several Australian superannuation groups, such as TelstraSuper, HESTA, Hostplus and NGS (managed by Blackbird Ventures) and UniSuper (managed by Uniseed). 
References External links Australian companies established in 2016 Companies based in Sydney Network protocols Machine to machine Wireless communication systems CSIRO people Australian inventions Wireless networking Internet of things Broadcom Integrated circuits IEEE 802.11
Morse Micro
Technology,Engineering
1,513
20,007,666
https://en.wikipedia.org/wiki/Alverine
Alverine is a drug used for functional gastrointestinal disorders. Alverine is a smooth muscle relaxant. Smooth muscle is a type of muscle that is not under voluntary control; it is the muscle present in places such as the gut and uterus. Adverse effects The side effects of alverine include: Difficulties in breathing or shortness of breath, wheezing, swelling of the face or other parts of the body (associated with serious allergic reaction) Yellowing of the whites of the eyes and the skin, due to liver inflammation A feeling of nausea or dizziness Headache Minor allergic reaction (skin rash/itching) It was reported that alverine may induce toxic hepatitis. Mechanism of action Alverine acts directly on the muscle in the gut, causing it to relax. Alverine is a 5-HT1A antagonist, which reduces rectal hypersensitivity. This prevents the muscle spasms which occur in the gut in conditions such as irritable bowel syndrome and diverticular disease. Diverticular disease is a condition in which small pouches form in the gut lining. These pouches can trap particles of food and become inflamed and painful. In irritable bowel syndrome, the normal activity of the gut muscle is lost. The muscle spasms result in symptoms such as abdominal pain and bloating, constipation or diarrhoea. By relaxing the gut muscle, alverine citrate relieves the symptoms of this condition. Alverine also relaxes the smooth muscle in the womb (uterus). It is therefore also used to treat painful menstruation (dysmenorrhea), which is caused by muscle spasms in the uterus. Alverine capsules are now available on the market. There are two strengths of capsule: 60 mg and 120 mg. The common dosage for adults and children over 12 years is 60–120 mg taken one, two or three times a day, either before or after meals. Alverine is not suitable for those aged under 12 years. Women who are pregnant or breast-feeding should follow their doctor's instructions regarding the drug. Development and marketing A combination of alverine citrate and simeticone (ACS) for irritable bowel syndrome therapy was compared with placebo in a phase IV clinical trial. At week 4, the alverine citrate and simeticone group had lower VAS scores for abdominal pain/discomfort (median: 40 mm vs. 50 mm, P = 0.047) and a higher responder rate (46.8% vs. 34.3%, OR = 1.3; P = 0.01) as compared with the placebo group. The drug was first authorized for marketing on 03/06/2014. The marketing authorisation holder is Dr. Reddy's Laboratories (UK) Ltd. References Drugs acting on the gastrointestinal system and metabolism Amines
Alverine
Chemistry
609
54,248,584
https://en.wikipedia.org/wiki/Optimal%20instruments
In statistics and econometrics, optimal instruments are a technique for improving the efficiency of estimators in conditional moment models, a class of semiparametric models that generate conditional expectation functions. To estimate parameters of a conditional moment model, the statistician can derive an expectation function (defining "moment conditions") and use the generalized method of moments (GMM). However, there are infinitely many moment conditions that can be generated from a single model; optimal instruments provide the most efficient moment conditions. As an example, consider the nonlinear regression model y_t = h(x_t, β) + u_t, where y_t is a scalar (one-dimensional) random variable, x_t is a random vector with dimension k, and β is a p-dimensional parameter. The conditional moment restriction E[u_t | x_t] = 0 is consistent with infinitely many moment conditions, for example E[u_t x_t] = 0. More generally, for any vector-valued function z(x_t) of x_t, it will be the case that E[u_t z(x_t)] = 0. That is, each choice of z defines a finite set of orthogonality conditions. A natural question to ask, then, is whether an asymptotically efficient set of conditions is available, in the sense that no other set of conditions achieves lower asymptotic variance. Both econometricians and statisticians have extensively studied this subject. The answer to this question is generally that this finite set exists; this has been proven for a wide range of estimators. Takeshi Amemiya was one of the first to work on this problem and show the optimal number of instruments for nonlinear simultaneous equation models with homoskedastic and serially uncorrelated errors. The form of the optimal instruments was characterized by Lars Peter Hansen, and results for nonparametric estimation of optimal instruments are provided by Newey. A result for nearest neighbor estimators was provided by Robinson. In linear regression The technique of optimal instruments can be used to show that, in a conditional moment linear regression model with iid data, the optimal GMM estimator is generalized least squares. Consider the model y_i = x_i'β + ε_i with E[ε_i | x_i] = 0, where y_i is a scalar random variable, x_i is a k-dimensional random vector, and β is a k-dimensional parameter vector. As above, the moment conditions are E[z(x_i)(y_i − x_i'β)] = 0, where z(x_i) is an instrument set of the same dimension as x_i. The task is to choose z to minimize the asymptotic variance of the resulting GMM estimator. If the data are iid, the asymptotic variance of the GMM estimator is V(z) = E[z(x_i) x_i']⁻¹ E[σ²(x_i) z(x_i) z(x_i)'] E[x_i z(x_i)']⁻¹, where σ²(x_i) = E[ε_i² | x_i]. The optimal instruments are given by z*(x_i) = σ⁻²(x_i) x_i, which produces the asymptotic variance matrix V(z*) = E[σ⁻²(x_i) x_i x_i']⁻¹. These are the optimal instruments because for any other z, the matrix V(z) − V(z*) is positive semidefinite. Given iid data (y_i, x_i), the GMM estimator corresponding to z* is β̂ = (Σ_i σ⁻²(x_i) x_i x_i')⁻¹ Σ_i σ⁻²(x_i) x_i y_i, which is the generalized least squares estimator. (It is infeasible because σ²(x_i) is unknown.) References Further reading Econometric modeling Moment (mathematics)
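A minimal numerical sketch of the linear-model result above, assuming the conditional variance function is known (all names and values are invented for illustration; a feasible version would estimate σ²(x) in a first step):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 2

# Simulate a linear conditional-moment model with heteroskedastic errors:
# y = x'beta + eps,  E[eps | x] = 0,  Var(eps | x) = sigma2(x)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
sigma2 = 0.5 + X[:, 1] ** 2              # conditional variance, assumed known here
y = X @ beta_true + rng.normal(size=n) * np.sqrt(sigma2)

# Plain instruments z = x give the ordinary least squares estimator
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Optimal instruments z* = x / sigma2(x) give the (infeasible) GLS estimator,
# i.e. the efficient GMM estimator under the conditional moment restriction
W = X / sigma2[:, None]
beta_gls = np.linalg.solve(W.T @ X, W.T @ y)

print("OLS:", beta_ols)
print("GLS via optimal instruments:", beta_gls)
```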
Optimal instruments
Physics,Mathematics
557
6,126,579
https://en.wikipedia.org/wiki/Transgender%20sexuality
Sexuality in transgender individuals encompasses all the issues of sexuality of other groups, including establishing a sexual identity, learning to deal with one's sexual needs, and finding a partner, but may be complicated by issues of gender dysphoria, side effects of surgery, physiological and emotional effects of hormone replacement therapy, psychological aspects of expressing sexuality after medical transition, or social aspects of expressing their gender. Sexual orientation Historically, clinicians labelled trans people as heterosexual or homosexual relative to their sex assigned at birth. Within the transgender community, sexual orientation terms based on gender identity are the most common, and these terms include lesbian, gay, bisexual, asexual, queer, and others. Sexual orientation distribution In the United States, transgender respondents to one 2015 survey self-identified as queer (21%), pansexual (18%), gay, lesbian, or same-gender-loving (16%), straight (15%), bisexual (14%), and asexual (10%). A second study found 23% reported being gay, lesbian, or same-gender-loving, 25% bisexual, 4% asexual, 23% queer, 23% straight and 2% something else. Transgender women A 2015 survey of roughly 3,000 American trans women showed that at least 60% were attracted to women and 55% were attracted to men. Of the trans women respondents 27% answered gay, lesbian, or same-gender-loving, 20% answered bisexual, 19% heterosexual, 16% pansexual, 6% answered asexual, 6% queer, and 6% did not answer. Transgender men Foerster reported a 15-year successful relationship between a woman and a trans man who transitioned in the late 1960s. In the 20th century, trans men attracted to women struggled to demonstrate the existence and legitimacy of their identity. Many trans men attracted to women, such as jazz musician Billy Tipton, kept their trans status private until their deaths. Until the mid-2010s, medical textbooks commonly suggested that most transgender men were straight. However, a 2015 survey of roughly 2000 American trans men showed more variation in sexual orientation or sexual identity among trans men. 23% identified as heterosexual or straight. The vast majority (65%) identified their sexual orientation or sexual identity as queer (24%), pansexual (17%), bisexual (12%), gay/same-gender loving (12%), asexual (7%), and 5% did not answer. Author Henry Rubin wrote that "[i]t took the substantial efforts of Lou Sullivan, a gay FTM activist who insisted that female-to-male transgender people could be attracted to men." Matt Kailey, author of Just Add Hormones: An Insider's Guide to the Transsexual Experience, recounts his transition "from 40-something straight woman to the gay man he'd always known himself to be." Researchers eventually acknowledged the existence of this phenomenon, and by the end of the 20th century, psychiatrist Ira Pauly wrote, "The statement that all female-to-male transgender are homosexual [Pauly means attracted to women] in their sexual preference can no longer be made." Trans gay men have varying levels of acceptance within other communities. Trans-feminine third genders Psychiatrist Richard Green, in an appendix to Harry Benjamin's 1966 The Transsexual Phenomenon, considers people who were assigned male at birth who have adopted a more feminine gender role. In this broad overview, entitled "Transsexualism: Mythological, Historical, and Cross-Cultural Aspects", Green argues that the members of these groups are mentally indistinguishable from modern western transsexual women. 
They have in common early effeminacy, adulthood femininity, and attraction to masculine males. The Hijra of the Indian Subcontinent are people who were assigned male at birth but occupy a female sexual and/or gender role, sometimes undergoing castration. As adults, they occupy a female role, but traditionally Hijra describe themselves as neither male nor female, preferring Hijra as their gender. They often express their femininity in youth; as adults, they are usually sexually-oriented towards masculine men. Mukhannathun were transgender individuals of the Muslim faith and Arab extraction who were present in Medina and Mecca during and after the time of Muhammad. Ibn Abd Al-Barh Al-Tabaeen, a companion of Aisha Umm ul-Mu'min'in who knew the same mukhannath as Mohammed, stated that "If he is like this, he would have no desire for women and he would not notice anything about them. This is one of those who have no interest in women who were permitted to enter upon women." That said, one of the Mukhannath of Medina during Muhammad's time had married a woman. Cultural status Beyond western cultures, sexual behavior and gender roles vary, which affects the place of gender variant people in that culture. Nadleehe of the North American Navajo hold a respected ceremonial position, whereas the Kathoey of Thailand experience more stigma comparatively. In Iran, while sex change is somewhat accepted, the society is heteronormative. As homosexuality is punishable by death, it is more common to see a trans man being in a relationship with a woman and a trans woman in a relationship with a man. Sexual practices Mira Bellwether's self-published 2010 'zine Fucking Trans Women was a landmark work in its focus on the perspectives and experiences of trans women, and has been described in Sexuality & Culture as "a comprehensive guide to trans women's sexuality". It focuses in particular on sex acts possible with flaccid penises and on the innervation of pre-op and non-op trans women's genital areas. It both named and popularized the act of muffing, or stimulating the inguinal canals through an invaginated scrotum, which can offer those with genital dysphoria a way to be penetrated from the front. Cultural studies scholar J.R. Latham wrote the first definitive analysis of trans men's sexual practices in the journal Sexualities. Few documentaries have been produced exploring transgender people's sexual practices. Since 2013, creator Tobi Hill-Meyer has been working on a series of projects related to transgender peoples sexualities titled Doing it Again. Research in areas of sexual behavior and experience is ongoing. One study from 2020 conducted in Spain analyzed the sexual health and behaviors of 260 participants. Naming the body Many transgender individuals choose to not use the language that is typically used to refer to sexual body parts, instead using less gendered words. The reason for this practice, is that hearing the typical names for genitalia and other sexual body parts can cause severe gender dysphoria for some trans people. Not all transgender people choose to rename their bodies. Those that choose not to rename their body, are often less uncomfortable with their body and/or do not associate their sexual body parts with a gender that differs from the one that they identify with. Ultimately, the decision of what language a trans person chooses to use for their body, and wants others to use, is up to the individual whose body is being named. 
Transgender women Some trans women choose to refer to their anuses as a vagina, pussy, or cunt. (Cunt may also refer to either inguinal canal.) Terms used for the penis include junk, strapoff, strapless, clit, and hen. Transgender men Some trans men refer to their vaginas as their front holes because they find that term less gendered; some use terms like man cave, bonus hole, or boy cunt. Terms used for the clitoris include the dick, cock, dicklet, while the breasts may be called the chesticles. Effects of transitioning Effects of feminizing hormone therapy For transgender women, taking estrogen stimulates the development of breast tissue, causing them to increase in both size and sensitivity. This increased sensitivity can be pleasurable, painful, or both, depending on the person and the type of stimulation. Furthermore, for those taking estrogen and who have male genitalia, estrogen can (and often does) shrink the external male genitalia and decrease the production of semen (at times bringing the sperm count to zero), and can decrease the ability for the male genitalia to become erect. In addition to these changes, some transgender women going through hormone therapy (HRT) can experience changes in the way their orgasms feel. For example, some people report the ability to experience multiple orgasms. HRT can cause decrease in sex drive or a change in the way arousal is experienced by trans women. A study published in 2014 found that 62.4% of trans women surveyed reported a decrease in sexual desire after hormone therapy and/or vaginoplasty. A 2008 study reported hypoactive sexual desire disorder (HSDD) in as many as one in three post-operative trans women on HRT, while around a quarter of the cisgender female controls were judged to have the disorder. There was no difference between the two group's reported sexual desire. Some trans women and healthcare providers anecdotally report the use of progestogens increasing libido. A 2009 pilot study tested the effectiveness of two treatments for HSDD in trans women: transdermal testosterone and oral dydrogesterone (a progestin). After six weeks of treatment, the group treated with testosterone reported improved sexual desire, while the group treated with the progestin reported no change. Effects of masculinizing hormone therapy For transgender men, one of the most notable physical changes that many taking testosterone experience, in terms of sexuality and the sexual body, is the stimulation of clitoral tissue and the enlargement of the clitoris. This increase in size can range anywhere from just a slight increase to quadrupling in size. Other effects can include vaginal atrophy, where the tissues of the vagina thin and may produce less lubrication. This can make sex with the female genitalia more painful and can, at times, result in bleeding. Transgender men taking testosterone are likely at increased risk of developing urinary tract infections, especially if they have receptive vaginal intercourse. Other effects that testosterone can have on transgender men can include an increase in their sex drive/libido. At times, this increase can be very sudden and dramatic. Like transgender women, some transgender men also experience changes in the way they experience arousal. Effects of gender-affirming surgery Trans women who have undergone vaginoplasty must dilate in order to properly shape and form the neovagina. 
After several months, sexual intercourse can replace dilation, but if not sexually active, dilation is required again, for the rest of the patient's life. Sexual orientation and transitioning Some trans people maintain a consistent orientation throughout their lives, in some cases remaining with the same partner through transition. A 2013 study found that 58.2 percent of its 452 transgender and gender-nonconforming respondents experienced sexual attraction changes during their lives, with trans masculine people more likely to experience "sexual fluidity". For transgender people who socially transitioned (about half of the total sample), 64.4 percent experienced attraction changes after transitioning, with trans feminine people more likely to experience sexual fluidity. A 2014 study of 70 trans women and 45 trans men had similar results, with trans women more likely to experience a change in sexual orientation (32.9 percent experienced changes versus 22.2 percent of trans men). In both groups of the 2014 study, trans people initially more attracted to the opposite of the sex they were assigned at birth were significantly more likely to experience sexual orientation changes (i.e. trans men initially attracted to men and trans women initially attracted to women changing their orientations). These sexual orientation changes could occur at any point in the transition process. Some gynephilic trans women self-report that after transitioning, they became sexually oriented towards males, and explain this as part of their emerging female identity. Kurt Freund hypothesized that such reports might reflect the desire of some trans women to portray themselves as "typically feminine" or, alternatively, might reflect their erotic interest in the validation provided by male partners, rather than representing a genuine change in preference. A 2005 study which relied upon vaginal photoplethysmographies to measure blood-flow in the genitalia of postoperative trans women found they had arousal patterns which were category specific (i.e. androphilic trans women were aroused by males, gynephilic trans women were aroused by females) in a similar fashion to natal males, and argue that vaginal photoplethysmographies are a useful technology for measuring the validity of such reports. The one trans woman in the study who reported a change in sexual orientation had arousal responses consistent with her pre-reassignment sexual orientation. While undergoing hormone therapy, some trans men report experiencing increased sexual attraction to cisgender men. This change can be confusing for those who experience it because it is often not a change that they expect to happen. However, gender transition does not always mean sexual orientation changes will happen. A 2021 study of 469 transgender women and 433 transgender men found that sexual orientation did not change over time or with hormonal transition. Transvestic fetishism The DSM once had a diagnosis of "transvestic fetishism". Some therapists and activists sought to de-pathologize this category in future revisions. DSM 5, which was released in 2013, replaced the transvestic fetishism category with "transvestic disorder". 
Following the example of the Benjamin Scale, in 1979 Buhrich and McConaghy proposed three clinically discrete categories of fetishistic transvestism: "nuclear" transvestites who were satisfied with cross-dressing, "marginal" transvestites who also desired feminization by hormones or surgical intervention, and "fetishistic transsexuals", who had shown fetishistic arousal but who identified as transsexuals and sought sex reassignment surgery. Sex work In many cultures, transgender people (especially trans women) are frequently involved in sex work such as transgender pornography. This is correlated with employment discrimination. In the 2011 National Trans Discrimination Survey, 11% of respondents reported having done sex work for income, compared to 1% of cisgender women in the US. According to the same survey, 13% of transgender Americans are unemployed, almost double the national average. 26% had lost their jobs due to their gender identity/expression. Transgender sex workers have high rates of HIV. In a review of studies on HIV prevalence in trans women working in the sex industry, over 27% were HIV positive. However, the review found that trans women engaged in sex work were not more likely than trans women not engaged in sex work to be HIV positive. Studies have found that in the United States HIV is especially prevalent amongst transgender sex workers of color, particularly black trans women, a problem that has been identified by academics and members of the transgender community. The subject of transgender sex workers has attracted attention in the media. Paris Lees, a British trans woman and journalist, wrote an article in June 2012 for the Independent defending criticism of Ria, star of Channel 4 documentary Ria: Teen Transsexual, who was seventeen at the time and depicted as working as a prostitute at a massage parlor, saying that the choice to engage in sex work is a matter of bodily autonomy and pointing out reasons that young trans women often turn to sex work such as low self-esteem and severe employment discrimination. A review by GLAAD of its archives of transgender-inclusive television episodes from 2002 to 2012 found that 20% of transgender characters were depicted as sex workers. A 2020 Netflix documentary, Disclosure, explores this in more depth. History Classifying transgender people by sexual orientation Historically, transgender people were unable to access gender affirming care unless they would be considered heterosexual post surgery. For much of the early 1900s, transgender persons were conflated with being either an invert or homosexual; as such, non-heterosexual sexual orientation data for transgender people is limited. In the 1980s, Lou Sullivan was instrumental in allowing non-heterosexual transgender people access to surgical care and hormones. Sexologist Magnus Hirschfeld first suggested a distinction based on sexual orientation in 1923. A number of two-type taxonomies based on sexuality have subsequently been proposed by clinicians, though some clinicians believe that other factors are more clinically useful categories, or that two types are insufficient. Some researchers have distinguished trans men attracted to women and trans men attracted to men. The Benjamin Scale proposed by endocrinologist Harry Benjamin in 1966 used sexual orientation as one of several factors to distinguish between "transvestites", "non-surgical" transsexuals, and "true transsexuals". 
In 1974, Person and Ovesey proposed dividing transsexual women into "primary" and "secondary" transsexuals. They defined "primary transsexuals" as asexual persons with little or no interest in partnered sexual activity and with no history of sexual arousal to cross-dressing or "cross-gender fantasy". They defined both homosexual and "transvestic" trans people to be "secondary transsexuals". Dr Norman Fisk noted that those entering his clinic seeking reassignment surgery comprised a larger group than fit into the classical transsexual diagnosis. The article notes that effeminate gay men and heterosexual fetishistic transvestites desire surgery and could be considered good candidates for it. In the DSM-II, released in 1968, "transsexualism" was within the "paraphilias" category, and no other information was provided. In the DSM-III-R, released in 1987, the category of "gender identity disorder" was created, and "transsexualism" was divided into "asexual", "homosexual", "heterosexual" and "unspecified" sub-types. In the DSM-IV-TR, released in 2000, "transsexualism" was renamed "gender identity disorder". Attraction specifications were to male, female, both, or neither, with specific variations dependent on birth sex. In the DSM-5, released in 2013 and currently used in the United States and Canada, "gender identity disorder" is now "gender dysphoria", and attraction specifications are either gynephilic or androphilic. See also Attraction to transgender people Lou Sullivan Queer heterosexuality Tri-Ess References Human sexuality
Transgender sexuality
Biology
3,863
36,588,778
https://en.wikipedia.org/wiki/Stable-isotope%20probing
Stable-isotope probing (SIP) is a technique in microbial ecology for tracing uptake of nutrients in biogeochemical cycling by microorganisms. A substrate is enriched with a heavier stable isotope that is consumed by the organisms to be studied. Biomarkers with the heavier isotopes incorporated into them can be separated from biomarkers containing the more naturally abundant lighter isotope by isopycnic centrifugation. For example, 13CO2 can be used to find out which organisms are actively photosynthesizing or consuming new photosynthate. As the biomarker, DNA with 13C is then separated from DNA with 12C by centrifugation. Sequencing the DNA identifies which organisms were consuming existing carbohydrates and which were using carbohydrates more recently produced from photosynthesis. SIP with 18O-labeled water can be used to find out which organisms are actively growing, because oxygen from water is incorporated into DNA (and RNA) during synthesis. When DNA is the biomarker, SIP can be performed using isotopically labeled C, H, O, or N, though 13C is used most often. The density shift is proportional to the change in density in the DNA, which depends on the difference in mass between the rare and common isotopes for a given element, and on the abundance of elements in the DNA. For example, the difference in mass between 18O and 16O (two atomic mass units) is twice that between 13C and 12C (one atomic mass unit), so incorporation of 18O into DNA will cause a larger per atom density shift than will incorporation of 13C. Conversely, DNA contains nearly twice as many carbon atoms (11.25 per base, on average) as oxygen atoms (6 per base), so at equivalent labeling (e.g., 50 atom percent 13C or 18O), DNA labeled with 18O will be only slightly more dense than DNA fully labeled with 13C. Similarly, nitrogen is less abundant in DNA (3.75 atoms per base, on average), so a weaker DNA buoyant density shift is observed with 15N- versus 13C-labeled or 18O-labeled substrates. Larger buoyant density shifts are observed when multiple isotope tracers are used. Because density shifts as a predictable function of the change in mass caused by isotope assimilation, stable isotope probing can be modeled to estimate the amount of isotope incorporation, an approach called quantitative stable isotope probing (qSIP), which has been applied to microbial communities in soils, marine sediments, and decomposing leaves to compare rates of growth and substrate assimilation among different microbial taxa. See also Stable isotope labeling by amino acids in cell culture References Further reading Microbiology techniques Molecular biology techniques Bacteriology Environmental microbiology Microbial population biology
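A rough arithmetic sketch of the per-base mass comparison described above, using the atom-per-base counts and isotope mass differences quoted in the text (the function name is invented; real buoyant density shifts also depend on GC content and on calibration of the density gradient):

```python
# Extra mass added to an average DNA base by different stable-isotope labels.
# Per-base atom counts and heavy-minus-light mass differences follow the
# figures quoted in the text above; this is an illustrative sketch only.

atoms_per_base = {"C": 11.25, "O": 6.0, "N": 3.75}   # average atoms per nucleotide
mass_difference = {"C": 1.0, "O": 2.0, "N": 1.0}     # heavy minus light isotope (Da)

def extra_mass_per_base(element: str, atom_fraction_labeled: float) -> float:
    """Extra mass (Da) added to an average base at the given labeling level."""
    return atoms_per_base[element] * mass_difference[element] * atom_fraction_labeled

for element in ("C", "O", "N"):
    print(element, extra_mass_per_base(element, 0.50))  # 50 atom percent labeling
# C: 5.625 Da, O: 6.0 Da, N: 1.875 Da per base, consistent with 18O giving only a
# slightly larger shift than 13C at equal labeling, and 15N a weaker one.
```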
Stable-isotope probing
Chemistry,Biology,Environmental_science
578
41,799,896
https://en.wikipedia.org/wiki/Clumping%20factor
The clumping factor is a measurement of how density varies within a gaseous medium, and is commonly used in astrophysical settings where gas is not distributed uniformly. Gas densities can vary over many orders of magnitude, from the low-density plasma in the intergalactic medium between galaxies, to the neutral and dense molecular regions in the interstellar medium inside galaxies. Moreover, gas throughout space is turbulent, implying that it has density structure on all spatial scales. The amount that gas clumps is important to know in astronomy when trying to infer gas properties from observations. The clumping of gas, and not just the amount of gas present, affects the luminosity of gas as it cools. The clumping factor is a measure of the density variation of a medium. It is defined as C = ⟨n²⟩ / ⟨n⟩², where n is the gas number density and the averaging ⟨·⟩ is spatial. It is related to the variance of the density field, σ² = ⟨n²⟩ − ⟨n⟩², by the square of the average density: C = 1 + σ² / ⟨n⟩². Cooling rates and emission scale as the particle number density squared (collision rates have this scaling), so the emission from a clumpy medium is proportional to ⟨n²⟩ = C ⟨n⟩². Therefore, the clumping factor can be used to convert from the density inferred by emission observations assuming uniform density to the true average gas density: ⟨n⟩ = n_inferred / √C. References Space plasmas Equations of astronomy
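A minimal numerical sketch of the definition above and of the emission correction it implies (the mock density field and its parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock number-density field (arbitrary units) with lognormal fluctuations, as a
# stand-in for a turbulent medium; any gridded density field could be used.
n = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Clumping factor: C = <n^2> / <n>^2, with spatial (here, cell-by-cell) averaging
C = np.mean(n**2) / np.mean(n)**2
print("clumping factor C =", C)

# Equivalent form via the variance: C = 1 + var(n) / <n>^2
print("1 + var/<n>^2     =", 1 + np.var(n) / np.mean(n)**2)

# Emission scales as <n^2>; assuming uniform density therefore overestimates
# the mean density by sqrt(C), which the clumping factor corrects:
n_inferred = np.sqrt(np.mean(n**2))       # density inferred from emission alone
print("true <n> =", np.mean(n), " corrected =", n_inferred / np.sqrt(C))
```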
Clumping factor
Physics,Astronomy
244
425,850
https://en.wikipedia.org/wiki/Thermodynamic%20system
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics. According to their internal processes, thermodynamic systems are classified as passive or active: in passive systems available energy is merely redistributed, while in active systems one type of energy is converted into another. Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy. The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems. Overview Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.” Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well-settled subject. One reason for this is the existence of a well-defined physical quantity called 'the entropy of a body'. Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by the presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article. Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process.
The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process. History The classification of thermodynamic systems arose with the development of thermodynamics as a science. Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Saadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigozhin, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment. At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium. In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes consisting in energy redistribution (passive systems) and energy conversion (active systems). Passive systems If there is a temperature difference inside the thermodynamic system, for example in a rod, one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and the warmer part decreases. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium. Active systems If the process of converting one type of energy into another takes place inside a thermodynamic system, for example, in chemical reactions, in electric or pneumatic motors, when one solid body rubs against another, then the processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment. Systems in equilibrium In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic. For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly. The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. 
In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium. In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings. Walls A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct. A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available. The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings. A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time. The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used. Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. 
a system of quarks) as hypothesized in quantum thermodynamics. Surroundings The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regard to these interactions. Closed system In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both depends on the properties of its boundary. Adiabatic boundary – not allowing any heat exchange: a thermally isolated system. Rigid boundary – not allowing exchange of work: a mechanically isolated system. One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way. The first law of thermodynamics for energy transfers in a closed system may be stated as ΔU = Q − W, where U denotes the internal energy of the system, Q the heat added to the system, and W the work done by the system. For infinitesimal changes the first law for closed systems may be stated as dU = δQ − δW. If the work is due to a volume expansion by dV at a pressure p, then δW = p dV. For a quasi-reversible heat transfer, the second law of thermodynamics reads δQ = T dS, where T denotes the thermodynamic temperature and S the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as dU = T dS − p dV. For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically, Σ_j a_ij N_j = b_i, where N_j denotes the number of j-type molecules, a_ij the number of atoms of element i in molecule j, and b_i the total number of atoms of element i in the system, which remains constant, since the system is closed. There is one such equation for each element present in the system. Isolated system An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
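A minimal numerical illustration of these statements (the masses, heat capacity and temperatures below are assumptions made for the example, not values from the article): two metal blocks enclosed by rigid, insulating walls form an isolated system; each block on its own is a closed system for which the first law reduces to ΔU = Q (no work), the temperatures equalize, and the total entropy of the isolated pair increases, as the second law requires.

import math

m = 1.0                          # kg, mass of each block (assumption)
c = 385.0                        # J/(kg*K), approximate specific heat of copper
T_hot, T_cold = 400.0, 300.0     # K, initial temperatures (assumption)

# Energy balance: the total internal energy of the isolated pair is conserved,
# so two identical blocks meet at the arithmetic mean temperature.
T_final = (T_hot + T_cold) / 2

Q = m * c * (T_hot - T_final)    # heat given up by the hot block = heat gained by the cold one

# Entropy change of a block of constant heat capacity: dS = m*c*dT/T -> m*c*ln(T2/T1)
dS_hot = m * c * math.log(T_final / T_hot)     # negative
dS_cold = m * c * math.log(T_final / T_cold)   # positive and larger in magnitude

print(f"final temperature:    {T_final:.1f} K")
print(f"heat transferred:     {Q:.0f} J")
print(f"total entropy change: {dS_hot + dS_cold:+.2f} J/K")   # > 0 for the isolated system
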
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena. In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated. That is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified. The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching a maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease, e.g. when heat is extracted from the system. Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe). 'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system. Selective transfer of matter For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes. An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential. A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number. A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μ_i. The corresponding extensive variable can be the number of moles of the component substance in the system.
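To make the chemical potential concrete, the following sketch (an illustration under an ideal-gas assumption with made-up pressures; it is not taken from the article) evaluates μ = μ0 + R·T·ln(p/p0) for the same pure gas on the two sides of a wall permeable only to that gas. Matter flows toward the lower chemical potential until the two values, and hence the pressures, coincide, which is the contact equilibrium discussed next:

import math

R = 8.314          # J/(mol*K), gas constant
T = 298.15         # K, same temperature on both sides (assumption)
p0 = 1.0e5         # Pa, reference pressure (assumption)
mu0 = 0.0          # chemical potential of the reference state, taken as the zero point

def mu(p):
    """Chemical potential of a pure ideal gas relative to the reference state."""
    return mu0 + R * T * math.log(p / p0)

p_system, p_reservoir = 0.5e5, 2.0e5      # Pa, assumed values

print(f"mu(system)    = {mu(p_system):9.1f} J/mol")
print(f"mu(reservoir) = {mu(p_reservoir):9.1f} J/mol")
# Gas migrates from the reservoir (higher mu) into the system until the pressures,
# and therefore the chemical potentials, are equal on both sides of the wall.
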
For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics. Open system In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. A system is named closed if its borders are impenetrable for substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat and substances. An open system cannot exist in the equilibrium state. To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables described above, a set of internal variables ξ_i has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their trending to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable, dξ_i/dt = −(ξ_i − ξ_i^0)/τ_i, where τ_i is a relaxation time of the corresponding variable. It is convenient to consider the initial value ξ_i^0 equal to zero. The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalized, to consider any deviations from the equilibrium state, such as structure of the system, gradients of temperature, difference of concentrations of substances and so on, to say nothing of degrees of completeness of all chemical reactions, to be internal variables. The increments of the Gibbs free energy G and of the entropy S at constant temperature and pressure then contain contributions from heat exchange, from the relaxation of the internal variables, and from the exchange of matter. The stationary states of the system exist due to exchange of both thermal energy and a stream of particles. The sum of the last terms in these expressions presents the total energy coming into the system with the stream of particles of substances, which can be positive or negative; the quantity μ_α is the chemical potential of substance α. The middle terms depict energy dissipation (entropy production) due to the relaxation of the internal variables ξ_i, while Ξ_i are the corresponding thermodynamic forces. This approach to the open system allows describing the growth and development of living objects in thermodynamic terms. See also Dynamical system Energy system Isolated system Mechanical system Physical system Quantum system Thermodynamic cycle Thermodynamic process Two-state quantum system GENERIC formalism References Sources Carnot, Sadi (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (in French). Paris: Bachelier. Dobroborsky B.S. Machine safety and the human factor / Edited by Doctor of Technical Sciences, prof. S.A. Volkov. — St. Petersburg: SPbGASU, 2011. — pp. 33–35. — 114 p. — ISBN 978-5-9227-0276-8. (Ru) Thermodynamic systems Equilibrium chemistry Thermodynamic cycles Thermodynamic processes
Thermodynamic system
Physics,Chemistry,Mathematics
4,253
468,641
https://en.wikipedia.org/wiki/History%20of%20perpetual%20motion%20machines
The history of perpetual motion machines dates at least back to the Middle Ages. For millennia, it was not clear whether perpetual motion devices were possible or not, but modern theories of thermodynamics have shown that they are impossible. Despite this, many attempts have been made to construct such machines, continuing into modern times. Modern designers and proponents sometimes use other terms, such as "overunity", to describe their inventions. History Pre-19th century There are some unsourced claims that a perpetual motion machine called the "magic wheel" (a wheel spinning on its axle powered by lodestones) appeared in 8th-century Bavaria. This historical claim appears to be unsubstantiated though often repeated. Early designs of perpetual motion machines were done by Indian mathematician–astronomer Bhaskara II, who described a wheel (Bhāskara's wheel) that he claimed would run forever. A drawing of a perpetual motion machine appeared in the sketchbook of Villard de Honnecourt, a 13th-century French master mason and architect. The sketchbook was concerned with mechanics and architecture. Following the example of Villard, Peter of Maricourt designed a magnetic globe which, if it were mounted without friction parallel to the celestial axis, would rotate once a day. It was intended to serve as an automatic armillary sphere. Leonardo da Vinci made a number of drawings of devices he hoped would make free energy; he was generally against such devices, but drew and examined numerous overbalanced wheels. Mark Anthony Zimara, a 16th-century Italian scholar, proposed a self-blowing windmill. Various scholars in this period investigated the topic. In 1607 Cornelius Drebbel in "Wonder-vondt van de eeuwighe bewegingh" dedicated a perpetual motion machine to James I of England. It was described by Heinrich Hiesserle von Chodaw in 1621. Robert Boyle devised the "perpetual vase" ("perpetual goblet" or "hydrostatic paradox"), which was discussed by Denis Papin in the Philosophical Transactions for 1685. Johann Bernoulli proposed a fluid energy machine. In 1686, Georg Andreas Böckler designed a "self operating" self-powered water mill and several perpetual motion machines using balls and variants of Archimedes' screws. In 1712, Johann Bessler (Orffyreus) claimed to have experimented with 300 different perpetual motion models before developing what he said were working models. In the 1760s, James Cox and John Joseph Merlin developed Cox's timepiece. Cox claimed that the timepiece was a true perpetual motion machine, but as the device is powered by changes in atmospheric pressure via a mercury barometer, this is not the case. In 1775, the Royal Academy of Sciences in Paris made the statement that the Academy "will no longer accept or deal with proposals concerning perpetual motion." Industrial Revolution 19th century In 1812, Charles Redheffer, in Philadelphia, claimed to have developed a "generator" that could power other machines. The machine was open for viewing in Philadelphia, where Redheffer raised a large amount of money from admission fees. After his cover was blown in Philadelphia while applying for government funding, Redheffer moved his machine to New York. It was there that Robert Fulton exposed Redheffer's schemes during an exposition of the device in New York City (1813). Removing some concealing wooden strips, Fulton found a catgut belt drive that went through a wall to an attic. In the attic, a man was turning a crank to power the device.
In 1827, Sir William Congreve, 2nd Baronet devised a machine running on capillary action that would disobey the principle that water seeks its own level, so as to produce a continuous ascent and overflow. The device had an inclined plane over pulleys. At the top and bottom there travelled an endless band of sponge and, over this, an endless band of heavy weights jointed together. The whole stood over the surface of still water. Congreve believed his system would operate continuously. In 1868, an Austrian, Alois Drasch, received a US patent for a machine that possessed a "thrust key-type gearing" of a rotary engine. The vehicle driver could tilt a trough depending upon need. A heavy ball rolled downward in a cylindrical trough, and, with continuous adjustment of the device's levers and power output, Drasch believed that it would be possible to power a vehicle. In 1870, E.P. Willis of New Haven, Connecticut made money from a "proprietary" perpetual motion machine. A story of the overcomplicated device with a hidden source of energy appears in the Scientific American article "The Greatest Discovery Ever Yet Made". Investigation into the device eventually found a source of power that drove it. John Ernst Worrell Keely claimed the invention of an induction resonance motion motor. He explained that he used "etheric technology". In 1872, Keely announced that he had discovered a principle for power production based on the vibrations of tuning forks. Scientists investigated his machine, which appeared to run on water, though Keely endeavoured to avoid this. Shortly after 1872, venture capitalists accused Keely of fraud (they lost nearly five million dollars). Keely's machine, it was discovered after his death, was based on hidden air pressure tubes. 1900 to 1950 In 1900, Nikola Tesla claimed to have discovered an abstract principle on which to base a perpetual motion machine of the second kind. No prototype was produced. David Unaipon, Australian inventor, had a lifelong fascination with perpetual motion. One of his studies on Newtonian mechanics led him to create a shearing machine in 1910 that converted curvilineal motion into straight-line movement. The device is the basis of modern mechanical shears. In the 1910s and 1920s, Harry Perrigo of Kansas City, Missouri, a graduate of MIT, claimed development of a free energy device. Perrigo claimed the energy source was "from thin air" or from aether waves. He demonstrated the device before the Congress of the United States on December 15, 1917. Perrigo had a pending application for the "Improvement in Method and Apparatus for Accumulating and Transforming Ether Electric Energy". Investigators reported that his device contained a hidden motor battery. Popular Science, in the October 1920 issue, published an article on the lure of perpetual motion. Modern era 1951 to 1980 In 1966, Josef Papp (sometimes referred to as Joseph Papp or Joseph Papf) supposedly developed an alternative car engine that used inert gases. He gained a few investors, but when the engine was publicly demonstrated, an explosion killed one of the observers and injured two others. Papp blamed the accident on interference by physicist Richard Feynman, who later shared his observations in an article in Laser, the journal of the Southern Californian Skeptics. Papp continued to accept money but never demonstrated another engine. On December 20, 1977, Emil T. Hartman received a patent titled "Permanent magnet propulsion system".
This device is related to the Simple Magnetic Overunity Toy (SMOT). Paul Baumann, a German engineer, developed a machine referred to as the "Testatika" and known as the "Swiss M-L converter" or "Thesta-Distatica". Guido Franch reportedly had a process of transmuting water molecules into high-octane gasoline compounds (named Mota fuel) that would reduce the price of gasoline to 8 cents per gallon. This process involved a green powder (this claim may be related to the similar ones of John Andrews (1917)). He was brought to court for fraud in 1954 and acquitted, but in 1973 was convicted. Justice William Bauer and Justice Philip Romiti both observed a demonstration in the 1954 case. In 1958, Otis T. Carr from Oklahoma formed a company to manufacture UFO-styled spaceships and hovercraft. Carr sold stock for this commercial endeavour. He also promoted free energy machines. He claimed inspiration from Nikola Tesla, among others. In 1962, physicist Richard Feynman discussed a Brownian ratchet that would supposedly extract meaningful work from Brownian motion, although he went on to demonstrate how such a device would fail to work in practice. In the 1970s, David Hamel produced the Hamel generator, an "antigravity" device, supposedly after an alien abduction. The device was tested on MythBusters, where it failed to demonstrate any lift-generating capability. Howard Robert Johnson developed a permanent magnet motor and, on April 24, 1979, received U.S. Patent 4,151,431 (the United States Patent Office's main classification of the patent is as an "electrical generator or motor structure, dynamoelectric, linear", 310/12). Johnson claimed that his device generates motion, either rotary or linear, from nothing but permanent magnets in the rotor as well as the stator, acting against each other. He estimated that permanent magnets made of proper hard materials should lose less than two percent of their magnetization in powering a device for 18 years. In 1979, Joseph Westley Newman applied for a patent on a direct current electrical motor which, according to his book The Energy Machine of Joseph Newman, did more mechanical work than could be accounted for by the electrical power supplied to it. Newman's patent application was rejected in 1983. Newman sued the US Patent and Trademark Office in US District Court, which ordered the National Bureau of Standards to test his machine; they informed the Court that Newman's device did not produce more power than supplied by the batteries it was connected to, and the Court found against Newman. 1981 to 1999 Dr. Yuri S. Potapov of Moldova claimed development of an over-unity electrothermal water-based generator (referred to as "Yusmar 1"). He founded the YUSMAR company to promote his device. The device failed to produce over-unity output under tests. Clean Energy Technologies, Inc. (CETI) claimed development of a device called the Patterson power cell that outputs small yet anomalous amounts of heat, perhaps due to cold fusion. Skeptics state that inaccurate measurements of friction effects from the cooling flow through the pellets may be responsible for the results. Dave Jones created a device in 1981 using a seemingly constantly rotating bicycle wheel sealed in a plexiglass container. He created it as a scientific joke, always stating that it was a fake and not a true perpetual motion machine, but to date no one has yet discovered how the device works.
Before Jones died of cancer in 2017, his brother Peter persuaded him to write down the secret behind the wheel, which he sent in a letter to Martyn Poliakoff, a chemist at the University of Nottingham. In 2023, Adam Savage examined the wheel, which was housed at the Royal Society, and produced a video of the event in which he suspected that an electrical mechanism of some kind drove the device. 2000s The motionless electromagnetic generator (MEG) was built by Tom Bearden. Allegedly, the device can eventually sustain its operation in addition to powering a load without application of external electrical power. Bearden claimed that it did not violate the first law of thermodynamics because it extracted vacuum energy from the immediate environment. Critics dismiss this theory and instead identify it as a perpetual motion machine with an unscientific rationalization. Science writer Martin Gardner said that Bearden's physics theories, compiled in the self-published book Energy from the Vacuum, are considered "howlers" by physicists, and that his doctorate title was obtained from a diploma mill. Bearden then founded and directed the Alpha Foundation's Institute for Advanced Study (AIAS) to further propagate his theories. This group has published papers in established physics journals and in books published by leading publishing houses, but one analysis lamented these publications because the texts were "full of misconceptions and misunderstandings concerning the theory of the electromagnetic field." When Bearden was awarded a patent in 2002, the American Physical Society issued a statement against the granting. The United States Patent and Trademark Office said that it would reexamine the patent and change the way it recruits examiners, and re-certify examiners on a regular basis, to prevent similar patents from being granted again. In 2002, the GWE (Genesis World Energy) group claimed to have 400 people developing a device that supposedly separated water into H2 and O2 using less energy than conventionally thought possible. No independent confirmation was ever made of their claims, and in 2006, company founder Patrick Kelly was sentenced to five years in prison for stealing funds from investors. In 2006, Steorn Ltd. claimed to have built an over-unity device based on rotating magnets, and took out an advertisement soliciting scientists to test their claims. The selection process for a twelve-member jury began in September 2006 and concluded in December 2006. The selected jury started investigating Steorn's claims. A public demonstration scheduled for July 4, 2007 was canceled due to "technical difficulties". In June 2009, the selected jury said the technology does not work. See also History of science References Further reading Dircks, Henry. (1870). Perpetuum Mobile: Or, A History of the Search For Self-Motive Power, From the 13th to the 19th Century With an introductory essay. Second Series. London. W. Clowes and Sons Verance, Percy. (1916). Perpetual Motion: Comprising a History of the Efforts to Attain Self-Motive Mechanism with a Classified, Illustrated Collection and Explanation of the Devices Whereby it Has Been Sought and Why They Failed, and Comprising Also a Revision and Re-Arrangement of the Information Afforded by "Search for Self -Motive Power During The 17th, 18th and 19th Centuries," London, 1861, and "A History of the Search for Self-Motive Power from the 13th to The 19th Century," London, 1870, by Henry Dircks, C. E., LL. D., Etc.. 20th Century Enlightenment Specialty Co. Ord-Hume, Arthur W. J. G. (1977).
Perpetual Motion: The History of an Obsession. St. Martin's Press. Angrist, Stanley W., "Perpetual Motion Machines". Scientific American. January 1968. Hans-Peter, "Perpetual Motion Chronology". HP's Perpetuum Mobile. MacMillan, David M., et al., "The Rolling Ball Web, An Online Compendium of Rolling Ball Sculptures, Clocks, Etc". Lienhard, John H., "Perpetual motion". The Engines of Our Ingenuity, 1997. "Patents for Unworkable Devices". The Museum of Unworkable Devices. "Perpetual Motion Pioneers (The Movers and Shakers)". The Museum of Unworkable Devices. Boes, Alex, "Museum of Hoaxes". Kilty, Kevin T., "Perpetual Motion". 1999. The Basement Mechanic's Guide to Testing Perpetual Motion Machines External links Gousseva, Maria, "Alleged Creation of Perpetual Energy Source Splits Scientific Community". Pravda.ru. Bearden, Tom, "Perpetual motion vs. "working machines creating energy from nothing"". 2003, Revised 2004. Perpetuum mobile page by Veljko Milković. Perpetual motion machines
History of perpetual motion machines
Physics,Chemistry,Technology
3,139
12,229,840
https://en.wikipedia.org/wiki/Holdridge%20life%20zones
The Holdridge life zones system is a global bioclimatic scheme for the classification of land areas. It was first published by Leslie Holdridge in 1947, and updated in 1967. It is a relatively simple system based on only a few empirical data, giving objective criteria. A basic assumption of the system is that both soil and the climax vegetation can be mapped once the climate is known. Scheme While it was first designed for tropical and subtropical areas, the system now applies globally. The system has been shown to fit not just tropical vegetation zones, but Mediterranean zones and boreal zones too, but is less applicable to cold oceanic or cold arid climates, where moisture becomes the predominant factor. The system has found a major use in assessing the potential changes in natural vegetation patterns due to global warming. The three major axes of the barycentric subdivisions are: precipitation (annual, logarithmic), biotemperature (mean annual, logarithmic), and the ratio of potential evapotranspiration (PET) to mean total annual precipitation. Further indicators incorporated into the system are: humidity provinces, latitudinal regions, and altitudinal belts. Biotemperature is based on the growing season length and temperature. It is measured as the mean of all annual temperatures, with all temperatures below freezing and above 30 °C adjusted to 0 °C, as most plants are dormant at these temperatures. Holdridge's system uses biotemperature first, rather than the temperate latitude bias of Merriam's life zones, and does not primarily consider elevation directly. The system is considered more appropriate for tropical vegetation than Merriam's system. Scientific relationship between the 3 axes and 3 indicators Potential evapotranspiration (PET) is the amount of water that would be evaporated and transpired if there were enough water available. Higher temperatures result in higher PET. Evapotranspiration (ET) is the raw sum of evaporation and plant transpiration from the Earth's land surface to atmosphere. Evapotranspiration can never be greater than PET. The ratio, Precipitation/PET, is the aridity index (AI), with an AI < 0.2 indicating arid/hyperarid conditions, and AI < 0.5 indicating dry conditions. The coldest regions have little evapotranspiration or precipitation, as there is not enough heat to evaporate much water; hence polar deserts. In the warmer regions, there are deserts with maximum PET but low rainfall that make the soil even drier, and rain forests with low PET and maximum rainfall, causing river systems to drain excess water into the oceans.
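The two quantities just described can be sketched in a few lines of Python. This is an illustrative helper, not an official Holdridge implementation; the clamping rule and the aridity-index thresholds follow the text above, while the station data are made up:

def biotemperature(monthly_means_c):
    """Mean annual biotemperature from twelve monthly mean temperatures (°C):
    values below freezing or above 30 °C are adjusted to 0 °C before averaging."""
    clamped = [t if 0.0 < t < 30.0 else 0.0 for t in monthly_means_c]
    return sum(clamped) / len(clamped)

def moisture_class(annual_precip_mm, annual_pet_mm):
    """Coarse moisture label from the aridity index AI = precipitation / PET."""
    ai = annual_precip_mm / annual_pet_mm
    if ai < 0.2:
        return ai, "arid / hyperarid"
    if ai < 0.5:
        return ai, "dry"
    return ai, "moist side of the scale"

# Illustrative (made-up) station data:
temps = [-4, -2, 3, 9, 15, 20, 24, 23, 17, 10, 3, -2]
print(f"biotemperature: {biotemperature(temps):.1f} °C")
ai, label = moisture_class(annual_precip_mm=350.0, annual_pet_mm=1100.0)
print(f"aridity index: {ai:.2f} -> {label}")
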
Classes All the classes defined within the system, as used by the International Institute for Applied Systems Analysis (IIASA), are: Polar desert Subpolar dry tundra Subpolar moist tundra Subpolar wet tundra Subpolar rain tundra Boreal desert Boreal dry scrub Boreal moist forest Boreal wet forest Boreal rain forest Cool temperate desert Cool temperate desert scrub Cool temperate steppe Cool temperate moist forest Cool temperate wet forest Cool temperate rain forest Warm temperate desert Warm temperate desert scrub Warm temperate thorn scrub Warm temperate dry forest Warm temperate moist forest Warm temperate wet forest Warm temperate rain forest Subtropical desert Subtropical desert scrub Subtropical thorn woodland Subtropical dry forest Subtropical moist forest Subtropical wet forest Subtropical rain forest Tropical desert Tropical desert scrub Tropical thorn woodland Tropical very dry forest Tropical dry forest Tropical moist forest Tropical wet forest Tropical rain forest Climate change Many areas of the globe are expected to see substantial changes in their Holdridge life zone type as the result of climate change, with more severe change resulting in more remarkable shifts in a geologically rapid time span, leaving less time for humans and biomes to adjust. If species fail to adapt to these changes, they would ultimately go extinct: the scale of future change also determines the extent of extinction risk from climate change. For humanity, this phenomenon has particularly important implications for agriculture, as shifts in life zones happening in a matter of decades inherently result in unstable weather conditions compared to what that area had experienced throughout human history. Developed regions may be able to adjust to that, but those with fewer resources are less likely to do so. Some research suggests that under the scenario of continually increasing greenhouse gas emissions, known as SSP5-8.5, the areas responsible for over half of the current crop and livestock output would experience a very rapid shift in their Holdridge life zones. This includes most of South Asia and the Middle East, as well as parts of sub-Saharan Africa and Central America: unlike the more developed areas facing the same shift, it is suggested they would struggle to adapt due to limited social resilience, and so crop and livestock production in those places would leave what the authors have defined as a "safe climatic space". On a global scale, that results in 31% of crop and 34% of livestock production being outside of the safe climatic space. In contrast, under the low-emissions SSP1-2.6 (a scenario compatible with the less ambitious Paris Agreement goals), 5% and 8% of crop and livestock production would leave that safe climatic space. See also Andrew Delmar Hopkins Biome Ecoregion Köppen climate classification Life zone Trewartha climate classification References Biogeographic realms Sustainable building Geographic classifications Climate and weather classification systems
Holdridge life zones
Engineering
1,073
6,348,399
https://en.wikipedia.org/wiki/Gate%20%28airport%29
A gate is an area in an airport terminal that controls access to a passenger aircraft. While the exact specifications vary from airport to airport and country to country, most gates consist of a seated waiting area, a counter and a doorway leading to the aircraft. A gate adjacent to the stand where the aircraft is parked may be a contact gate, providing access by way of a jet bridge, or a ground-loaded gate, providing a path for passengers to leave the building to board via mobile stairs or airstairs built into the aircraft itself. A remote gate serves an aircraft stand farther away, providing access to ground transportation to move passengers between the gate and the stand, where they board via stairs. Each gate typically corresponds to one parking stand on the airport's apron. A gate that provides access to multiple stands/jet bridges may have separate, designated doorways – sometimes termed sub-gates – for each stand. Commercial airport stands have airside components to facilitate passenger boarding and aircraft ground handling. While the term gate precisely refers only to the point of access for passengers, and the area where the aircraft itself is parked is precisely termed an aircraft stand, in commercial passenger aviation the term gate is also used to refer to the gate and aircraft stand together as a single area. Customs and immigration controls United States At most domestic gates, a single doorway connects the passenger waiting area with the jet bridge. International gates at U.S. international airports always have a second doorway to a separate corridor system that leads directly to the airport's U.S. Customs and Border Protection port of entry facility. For international arrivals from airports without preclearance, the door leading to the waiting area is closed and all arriving passengers are directed through the second doorway to CBP immigration and customs inspection. Jet bridge vs airstair Before the era of the jet bridge or jetway, airline passengers embarked onto the aircraft from ground level via airstairs. If initially indoors, passengers would exit the waiting area through a door to the outside and then proceed to the airstairs leading to the aircraft door. This method is still used for boarding smaller planes or boarding at smaller airports. Ownership The equipment is either airport or airline property, in most cases airport infrastructure. References Types of gates Airport infrastructure
Gate (airport)
Engineering
458
23,264,296
https://en.wikipedia.org/wiki/Committee%20to%20End%20Pay%20Toilets%20in%20America
The Committee to End Pay Toilets in America, or CEPTIA, was a 1970s grass-roots political organization which was one of the main forces behind the elimination of pay toilets in many American cities and states. History Founded in 1970 by nineteen-year-old Ira Gessel, the Committee's purpose was to "eliminate pay toilets in the U.S. through legislation and public pressure." Starting a national crusade to cast away coin-operated commodes, Gessel told newsmen, "You can have a fifty-dollar bill, but if you don't have a dime, that metal box is between you and relief." Membership in the organization cost only $0.25, and members received the Committee's newsletter, the Free Toilet Paper. Headquartered in Dayton, Ohio, U.S., the group had as many as 1,500 members, in seven chapters. The group also sponsored the Thomas Crapper Memorial Award, which was given to "the person who has made an outstanding contribution to the cause of CEPTIA and free toilets." In 1973, Chicago became the first American city to act when the city council voted 37–8 in support of a ban on pay toilets in that city. According to at least one source, this was "...a direct response, evidently," to CEPTIA. Achievements According to The Wall Street Journal, there were, in 1974, at least 50,000 pay toilets in America, mostly made by the Nik-O-Lok Company. Despite this flourishing commerce, CEPTIA was successful over the next few years in obtaining bans in New York, New Jersey, Minnesota, California, Florida, and Ohio. Lobbying was so successful that by June 1976, twelve states had enacted bans and the group announced that it was disbanding, declaring its mission mostly achieved. Criticism While CEPTIA's campaign was successful in largely eliminating pay toilets in the United States, critics charge that the result was not a flourishing of free public toilets, but rather many fewer public toilets of any sort than in other countries that did not see a movement against pay toilets. In recent years, commentators have called for a reconsideration of the pay toilet bans in the hope of making public toilets more widely available. References Organizations based in Dayton, Ohio Political organizations based in the United States Defunct organizations based in Ohio Toilets
Committee to End Pay Toilets in America
Biology
485
8,696,590
https://en.wikipedia.org/wiki/CCL23
Chemokine (C-C motif) ligand 23 (CCL23) is a small cytokine belonging to the CC chemokine family that is also known as Macrophage inflammatory protein 3 (MIP-3) and Myeloid progenitor inhibitory factor 1 (MPIF-1). CCL23 is predominantly expressed in lung and liver tissue, but is also found in bone marrow and placenta. It is also expressed in some cell lines of myeloid origin. CCL23 is highly chemotactic for resting T cells and monocytes and slightly chemotactic for neutrophils. An inhibitory activity on hematopoietic progenitor cells has also been attributed to it. The gene for CCL23 is located on human chromosome 17 in a locus containing several other CC chemokines. CCL23 is a ligand for the chemokine receptor CCR1. References Cytokines
CCL23
Chemistry
199
22,084,621
https://en.wikipedia.org/wiki/Ventricose
Ventricose is an adjective describing the condition of a mushroom, gastropod or plant that it is "swollen, distended, or inflated especially on one side". Mycology In mycology, ventricose is a condition in which the cystidia, lamella or stipe of a mushroom is swollen in the middle. Gastropods In gastropods, if the shell of a snail is ventricose or subventricose, it means the whorl of the shell is swollen. References Mycology Fungal morphology and anatomy
Ventricose
Biology
109
20,656,697
https://en.wikipedia.org/wiki/Varacin
Varacin is a bicyclic organosulfur compound originally found in marine Ascidiacea from the Polycitor genus. It contains an unusual pentathiepin ring which reacts with DNA, and varacin and synthetic analogues have been investigated for their antimicrobial and antitumour properties. Because of its potent biological activity and unusual and challenging ring system, it has been a popular target of efforts toward its total synthesis. References Phenethylamines Sulfur heterocycles Phenol ethers Heterocyclic compounds with 2 rings
Varacin
Chemistry
120
5,666,707
https://en.wikipedia.org/wiki/Architectural%20terracotta
Architectural terracotta refers to a fired mixture of clay and water that can be used in a non-structural, semi-structural, or structural capacity on the exterior or interior of a building. Terracotta is an ancient building material that translates from Latin as "baked earth". Some architectural terracotta is stronger than stoneware. It can be unglazed, painted, slip glazed, or glazed. Usually solid in earlier uses, from the 19th century onwards each piece of terracotta is in most cases composed of a hollow clay web enclosing a void space or cell. The cell can be installed in compression with mortar or hung with metal anchors; such cells are often partially backfilled with mortar. Terracotta can be used together with brick for ornamental areas; if the source of the clay is the same, the two can be made to harmonize, or, if different, to contrast. It is often a cladding over a different structural material. History Terracotta was made by the ancient Greeks, Babylonians, ancient Egyptians, Romans, Chinese, and the Indus River Valley and Native American cultures. It was used for roof tiles, medallions, statues, capitals and other small architectural details. Ancient Eastern terracotta Indian terracotta manufacturers hand pressed, poured, and double-molded the clay mix. Plaster casts have been found in several ancient sites in Afghanistan, Bangladesh, India and Pakistan. Similarities in motifs and manufacturing processes have caused scholars to note cross-cultural pollination between the Hellenic and Indus River Valley sculptural terracotta traditions. Famous early examples include the Bhitargaon temple and the Jain temple in the Mahbubnagar district. Chinese, Korean, and Japanese terracotta-making traditions were focused on non-architectural uses such as statuary or cookware, but various forms of terracotta tiles were popular roofing materials. Western terracotta Antiquity–1700s The Greeks used terracotta for capitals, friezes, and other elements of their temples, such as at Olympia or Selinus. Domestically they used it for statuary and roof tiles. The Etruscans used terracotta for roof tiles, to encase beams, and to enclose brick walls. The Roman terracotta innovation was the underfloor or hypocaust heating system that they used for their bath houses. Medieval European architecture did not expand terracotta use beyond the ancients. The manufacture of tile roofs diminished with low-cost thatch roofing widely available. Southern German, Italian and Spanish city states kept the tradition alive. 1700s–1880s Great Britain Richard Holt and Thomas Ripley patented an artificial stone recipe in 1722. The business was fairly successful at making small architectural ornaments. Their company was taken over by George and Eleanor Coade in 1769 (see Coade stone and Eleanor Coade). George died a year later, leaving the company to his wife and daughter, both named Eleanor Coade. The Coade ladies popularized the grey mix of terracotta as an alternative to stone with the help of architects like Horace Walpole and Sir John Soane. Georgian architectural style was in vogue, and demand for repetitive, classically inspired décor was high. The Victoria and Albert Museum (1867–1880) and the Natural History Museum of London (1879–1880) buildings ushered in an era of mass-produced architectural terracotta. North America Early manufacture The earliest manufacturer of architectural terracotta in the United States was started by Henry Tolman Jr. in Worcester, Massachusetts, around 1849.
In the 1850s, New York City architects like Richard Upjohn and James Renwick used it on some of their projects, but the material failed to gain widespread popularity and many American architects falsely believed it could not endure the North American climate. 1870s–1930s The Chicago Fire of 1871 destroyed many of the wood and stone-constructed buildings of Chicago, Illinois, and spurred greater interest in fireproof building materials that could enable the elaborate construction of the era. James Taylor, an English-trained ceramicist, played a key role in establishing effective widespread terracotta production in the United States through his work for various firms such as the Chicago Terra Cotta Company, the Boston Terra Cotta Company, and the New York Architectural Terra-Cotta Company. The American architectural terracotta industry peaked during the late 1800s and helped enable the construction of skyscrapers by allowing for more lightweight construction on top of tall metal-framed structures. The fire-resistance of terracotta protected structural steel on many buildings constructed during this period, such as New York City's Flatiron Building. There was an increase in popularity of colored, or polychrome, glazed architectural terracotta during the first decade of the 1900s. Architects began to employ combinations of colors to achieve dynamic designs and appearances. This usage diminished as time went on, especially after the success of Cass Gilbert's Woolworth Building increased demand for monochromatic terracotta. Trends in the 1920s favored setbacks in skyscraper towers, leading to increasing demand for sculpted forms in low relief. 1930s–1980s Usage of terracotta in architecture had diminished through the end of the 1920s and the onset of the Great Depression further harmed the industry: the number of terracotta companies dropped from eighteen in 1929 to eleven in 1933. This was largely attributed to architects' increasing preference for building with cheaper metal, glass, and cement. The time-intensive process of terracotta manufacture put it at a disadvantage compared to newer products. Changing fashions towards more minimalist, modern styles such as the Bauhaus School and International Style further harmed the industry, despite attempts by manufacturers to create products suited to these styles. Structural problems of earlier terracotta resulting from incomplete waterproofing, improper installation, poor maintenance, and corroding interior mild steel provided bad publicity for terracotta and further harmed its reputation among architects. For much of the 20th century the American terracotta industry was a fraction of its earlier scale and the few surviving companies largely subsisted on jobs producing less complex products like machine-produced ceramic veneers. Detailed architectural terracotta remained in use through the 1950s and 1960s; however, it was often overlooked or misidentified. Architects during this time period did not embrace terracotta's natural properties and instead tended to use it to imitate other materials. 1980s-present Terracotta experienced a growth in popularity beginning in the 1980s when a resurgence in interest in historic preservation led to demand for architectural terracotta for restoration purposes. Historic manufacturers of terracotta such as Gladding, McBean, Ludowici-Celadon, and newer companies such as Boston Valley Terra Cotta all manufactured pieces used in the restoration of landmarks.
Architects became interested in newer uses for terracotta and companies developed products such as rainscreen and wall cladding to allow for dynamic installations that retained terracotta's unique and distinct qualities while working with modern architectural styles. Manufacturing process Terracotta can be made by pouring or pressing the mix into a plaster or sandstone mold; clay can also be hand carved, or the mix can be extruded into a mold using specialized machines. Clay shrinks as it dries from water loss; therefore, all molds are made slightly larger than the required dimensions. After the desired green-ware, or air-dried, shape is created, it is fired in a kiln for several days, where it shrinks even further. The hot clay is slowly cooled, then hand finished. The ceramics are shipped to the project site where they are installed by local contractors. The hollow pieces are partially backfilled with mortar, then placed into the wall, suspended from metal anchors, or hung on metal shelf angles. Design Academically trained artists were often the designers of the terracotta forms. Their drawings would be interpreted by the manufacturer, who would plan out the joint locations and anchoring system. Once finalized, the drawings were turned into a plaster reality by sculptors who would create the mold for the craftsmen. Clay preparation Clay selection was very important to the manufacture of terracotta. Homogeneous, finer grain sizes were preferred. The color of the clay body was determined by the types of deposits that were locally available to the manufacturer. Sand was added to temper the clay. Crushed ceramic scraps called grog were also added to stiffen the product and help reduce shrinkage. Weathering the clay allowed pyrites to chemically change to hydrated ferric oxide and reduced alkali content. This aging minimized potential chemical changes during the rest of the manufacturing process. The weathered raw clay was dried, ground, and screened. Later, it would have been pugged in a mill that would mix the clay with water using rotating blades and force the blend through a sieve. Hand pressing terracotta An artist makes a negative plaster mold based on a clay positive prototype. 1–1¼" of the clay/water mixture is pressed into the mold. Wire mesh or other stiffeners are added to create the web, or clay body, that surrounds the hollow cell. The product is air dried to allow the plaster to draw the moisture out of the green clay. It is fired, then slowly cooled. Extrusion Mechanized extrusion was used for the mass-production of terracotta blocks, popular in the 1920s. Prepared clay was fed into a machine that would then push the mix through a mold. The technique required the blocks to be made with simple shapes, so this process was often used for flooring, roofing, cladding, and later hollow clay tiles. Glazing The last step before firing the greenware was glazing. True glazes are made from various salts, but prior to the 1890s most blocks were slip glazed, or coated with a watered-down version of the clay mix. Liquefying the clay increased the amount of small silica particles that would be deposited on the surface of the block. These would melt during firing and harden. By 1900 almost all colors could be achieved with the addition of salt glazes. Black or brown was achieved by adding manganese oxide. Firing The kiln firing process could take days, up to two weeks. The clay is heated slowly to around 500°C to sweat off the loose or macroscopic water between the clay particles.
Then the temperature is increased to close to 900°C to release the chemically bonded water in gaseous form, and the clay particles begin to melt together, or sinter. If the kiln reaches 1000°C then the clay particles will vitrify and become glass-like. After the maximum temperature was reached, the clay was slowly cooled over a few days. During firing, a fireskin is created. A fireskin is the glass-like "bread crust" that covers the biscuit or interior body. Various kilns were used as technology developed and capital was available for investment. Muffle kilns were the most common type of kiln. They were used as early as 1870. The kilns burned gas, coal, or oil that heated an interior chamber from an exterior chamber. The walls "muffled" the heat so the greenware was not directly exposed to the flames. Down-draught kilns were also widely used. The interior chamber radiated heat around the terracotta by pulling in hot air from behind an exterior cavity wall. Like the muffle wall, the cavity wall protected the greenware from burning. Installation The earliest terracotta elements were laid directly into the masonry, but as structural metal became more popular, terracotta was suspended by metal anchors. The development of cast and later wrought iron as a structural material was closely linked to the rise of terracotta. Cast iron was first used as columns in the 1820s by William Strickland. Over the course of the 19th century metal became more incorporated into construction, but it was not widely used structurally until the late 1890s. A series of disastrous fires (Chicago, 1871; Boston, 1872; and San Francisco, 1906) earned terracotta a reputation for being a fireproof, lightweight cladding material that could protect metal from melting. Holes were bored in the hollow blocks in choice locations to allow for metal 'J' or 'Z' hooks to connect the blocks to the load-bearing steel frame and/or masonry walls. The metal could be hung vertically or anchored horizontally. Pins, clamps, clips, plates, and a variety of other devices were used to help secure the blocks. The joints would then be mortared and the block would be partially backfilled. Chemistry Composition Terracotta is made of a clay or silt matrix, a fluxing agent, and grog or bits of previously fired clay. Clays are the remnants of weathered rocks, with particles smaller than 2 microns. They are composed of silica and alumina. Kaolinite, halloysite, montmorillonite, illite and mica are all good types of clays for ceramic production. When mixed with water they create hydrous aluminum silicates that are plastic and moldable. During the firing process the clays lose their water and become a hardened ceramic body. Fluxes add oxygen when they burn to create more uniform melting of the silica particles throughout the body of the ceramic. This increases the strength of the material. Common fluxing materials are calcium carbonate, alkaline feldspars, manganese, and iron oxides. Grog is used to prevent shrinking and provide structure for the fine clay matrix. Causes of failure The most common reasons for terracotta to fail are: poor manufacturing, improper installation, weathering, freeze/thaw cycling, and salt formation from atmospheric pollution. Porosity The porosity of terracotta greatly impacts its performance. The ability of water and pollutants to enter the material is directly correlated to its structural capacity. Terracotta is very strong in compression but weak in tension and shear.
Any anomalous material expanding (ice, salts, incompatible fill material, or corroding metal anchors which cause rust jacking) inside the clay body will cause it to crack and eventually spall. Improper molding Inherent faults can severely impact the performance of the material. Improper molding can cause air pockets to form that increase the rate of deterioration. If the block is not fired or cooled properly, the fireskin will not be uniformly adhered to the substrate and can flake off. Likewise, if a glaze is not fired properly it will crack, flake, and fall off. Discolorations can result from mineral impurities such as pyrites or barium carbonates. Handling defects A fair amount of damage comes from clumsy transportation, storage, or installation of the material. If the mortar used around and inside the blocks is too strong, the stress will be transferred to the terracotta block, which will fail over time. Corroding interior metal anchors expand at a faster rate than the surrounding ceramic body, causing it to fail from the inside out. Improper loading of the hollow terracotta blocks can create stress cracks. Flawed repairs Imperfect repair work often exacerbates the underlying problems, speeding up the decay of surrounding elements as well. Making penetrations in terracotta units to attach objects to the outside walls also allows moisture to enter the system, and often cracks the terracotta as well. Installing sealant rather than mortar, or applying an impervious coating, will trap moisture within the terracotta. Air pollution The environment also plays a large role in the survival of terracotta. Different types of air pollution can cause different types of surface problems. When it rains, water and salts get sucked into the voids in and around the terracotta through capillary action. If it freezes then ice forms, putting internal stress on the material, causing it to crack from inside. A similar problem happens with atmospheric pollutants that are carried into the gaps by rainwater. The pollution creates a mildly acidic solution that eats at the clay body, or a salt crust forms, causing issues similar to those caused by ice. Consequences of failure With the majority of terracotta buildings being over one-hundred years old, failing terracotta has become a problem in many cities such as New York. Regular inspections and maintenance and repair programs are required by law, but well-publicized incidents nonetheless occur, such as the death of Erica Tishman after a piece of terracotta fell from a 105-year-old building. Manufacturers Britain Royal Doulton (1815 to present) Fambrini & Daniels (1838 to 1913) John Marriott Blashfield (1839 to 1878) Gibbs and Canning (1847 to 1950s) Burmantofts Pottery (1859 to 1957) Shaws of Darwen (1897 to 2014) Darwen Terracotta and Faience (2014 - present) United States Henry Tolman, Jr. (1848 to 1855) Chicago Terra Cotta Works (1868 to 1880) Gladding, McBean (1879 to present) Perth Amboy Terra Cotta Company (1879 to 1907) Boston Terra Cotta Company (1880 to 1893) A. Hall Terra Cotta Company (1883 to 1887) New York Architectural Terra-Cotta Company (1886 to 1929) Los Angeles Pressed Brick Company (1887 to 1916) Northwestern Terra Cotta Company (1888 to 1954) Celadon Terra Cotta Company (1888 to 1906) New Jersey Terra Cotta Company (1888 to 1928) South Amboy Terra Cotta Company (1903 to 1928) Denny-Renton Clay and Coal Company (1905 to 1927) O.W.
Ketcham Terra Cotta Works (1906 to 1995) Ludowici-Celadon Company (1906 to present) Atlantic Terra Cotta Company (1907 to 1943) Federal Terra Cotta Company (1909 to 1928) Moravian Pottery and Tile Works (1912 to present) Federal Seaboard Terra Cotta Corporation (1928 to 1968) Boston Valley Terra Cotta (1981 to present) References Bibliography Barr, Emily. "PRESSING ISSUES IN-KIND TERRA COTTA REPLACEMENT IN THE 21ST CENTURY." Masters of Science Thesis. Columbia University. 2014 Dillon M. (1985) Bricks, Tiles and Terracotta, An Exhibition on one of the major industries of the Wrexham area, (Held at the Grosvenor Museum, Chester), 24pp. Didden, Amanda. "Standardization of terracotta anchorage: an analysis of shop drawings from the Northwestern Terra Cotta Company and the O.W. Ketcham Terra Cotta Works." Masters Thesis, University of Pennsylvania, 2003. Fidler, John. The Conservation of Architectural Terracotta and Faience. Transactions of the Association for Studies in the Conservation of Historic Buildings, no. 6(1981):3-16. Fidler, John. Fragile Remains. Architectural Ceramics: their History, Manufacture and Conservation. London: James and James, 1996. Gerns, Edward and Joshua Freedland. "Understanding terra-cotta distress: Evaluation and repair approaches." Journal of Building Appraisal. October 2006. James W P Campbell & Will Pryce, (2003) Brick: A World History, Jenkins, Moses. "Terracotta and Faience." Historic Scotland, Longmore House. Mack, Robert C. "The Manufacture and Use of Architectural Terra Cotta in the United States." In The Technology of Historic American Buildings, edited by H. Ward Jandl, 117–51. Washington, D.C.: Foundation for Preservation Technology, 1983. Ries, Heinrich and Henry Leighton. History of the Clay Working Industry in the United States. New York: John Wiley, 1909. Stratton, M. (1993) The Terracotta Revival : Building Innovation and the Image of the Industrial City in Britain and North America. London : Gollancz. Taylor, James. Terra Cotta. Architectural Record, Vol. 1(July 1891-July 1892):63-68. Taylor, James. "History of Terra Cotta in New York City." Architectural Record 2 July 1892-July 1893:136-148. Wells, Jeremy C. History of Structural Hollow Clay Tile in the United States Construction History, Vol. 22 (2007):27-46. External links Article on terracotta in Victorian and Edwardian Terracotta Buildings Understanding and Conserving Terracotta - Dr Michael Stratton Bolton Museums Soil-based building materials Building Terracotta
Architectural terracotta
Engineering
4,135
3,354,769
https://en.wikipedia.org/wiki/Motorola%20A780
The Motorola A780 is the second cellular PDA running the Linux operating system. It was introduced in 2003 and sold in Europe and Asia. Some models include GPS and navigation software. Design The Motorola A780 is a Linux-based smartphone. When the lid is closed, the phone appears like a traditional phone, with a keypad matrix and a small display, actually a window onto the larger display below the lid. When the lid is flipped open, a QVGA touch screen is revealed that can be used with fingers or a supplied stylus. Features The phone is supplied with a number of applications including a POP and IMAP email client, the Opera web browser, a calendar and a viewer for PDF and Microsoft Office files. The calendar and address book can be synchronized with a Microsoft Exchange or SyncML server. The phone has a 1.3 megapixel camera recording still and video images. RealPlayer is included to play audio files and streamed audio and video. The phone has 48 megabytes of internal flash memory for storing user data and a slot for a microSD card. Both Bluetooth and USB are provided for communication with another computer. Character entry is via an on-screen QWERTY keyboard and handwriting recognition. Models including a GPS receiver are supplied with ALK Technologies' CoPilot Live navigation software with street-level maps of Europe. Technical details The phone has three processors: Baseband Processor (BP) is an ARM7TDMI that is used for basic GSM phone functions. The necessary digital signal processing is performed by an Onyx (566xx) DSP core. The BP runs the Nucleus operating system (produced by Mentor Graphics) from its own 32 Mbit flash memory. Application Processor (AP) is an Intel PXA270 with an ARMv5TE ARM core. This runs Linux, the Qtopia user interface and the application programs. Models with GPS use a Motorola MG4100 single-chip GPS receiver integrated circuit. The Linux operating system used, EZX Linux, is a modified version of MontaVista Consumer Electronics Linux 3.0. Linux enthusiasts This phone is popular with Linux enthusiasts. It is able to establish a TCP/IP connection between the phone and another computer over USB or Bluetooth. One can then telnet to the phone and be presented with a bash prompt. From the prompt one can, for example, mount NFS shares on the phone. The underlying operating system, Motorola EZX, is Linux-based, and its kernel is open source. With the source code hosted on opensource.motorola.com, it is possible to recompile and replace the kernel for this operating system. However, Motorola did not publish a software development kit for native applications. Instead, Motorola expected third-party programs to be written in Java ME. The OpenEZX website is dedicated to providing free open-source software for this phone and others using the same OS. See also Motorola List of Motorola products List of mobile phones running Linux OpenEZX References External links A780 entry in the OpenEZX Wiki A780 Hardware details Motorola Open Source: makes the Linux source code and drivers available in compliance with the GPL Moto4Lin file manager and "seem" (customization) editor for A780 and others Moto4Lin wiki A780 page at SourceForge.net Motorolafans fansite with many applications for Motorola Linux phones Motorola smartphones Information appliances
Motorola A780
Technology
703
15,358,229
https://en.wikipedia.org/wiki/List%20of%20works%20based%20on%20dreams
Dreams have been credited as the inspiration for several creative works and scientific discoveries. Books and poetry Kubla Khan Samuel Taylor Coleridge wrote Kubla Khan (completed in 1797 and published in 1816) upon awakening from an opium-influenced dream. In a preface to the work, he described having the poem come to him, fully formed, in his dream. When he woke, he immediately set to writing it down, but was interrupted by a visitor and could not remember the final lines. For this reason, he kept it unpublished for many years. Frankenstein Mary Shelley's Frankenstein (1818) was inspired by a dream: Strange Case of Dr Jekyll and Mr Hyde Robert Louis Stevenson dreamed the plot for his famous novel Strange Case of Dr Jekyll and Mr Hyde (1886). Tintin in Tibet The Belgian comics artist Hergé was plagued by nightmares in which he was chased by a white skeleton, whereupon the entire environment turned white. A psychiatrist advised him to stop making comics and take a rest, but Hergé drew an entire story set in a white environment: the snowy mountaintops of Tibet. Tintin in Tibet (1960) not only stopped his nightmares and worked as a therapeutic experience, but the work is also regarded as one of his masterpieces. Twilight Inspiration for Stephenie Meyer's Twilight (2005) came by a dream: The Miraculous Journey of Edward Tulane The seeds to the plot of The Miraculous Journey of Edward Tulane (2006) came to Kate DiCamillo in a dream: "One Christmas, I received an elegantly dressed toy rabbit as a gift. A few days later, I dreamed that the rabbit was face down on the ocean floor - lost and waiting to be found." Music Devil's Trill Sonata Giuseppe Tartini recounted that his most famous work, his Violin Sonata in G minor, more commonly known as the Devil's Trill Sonata, came to him in a dream in 1713. According to Tartini's account given to the French astronomer Jérôme Lalande, he dreamed that he had made a pact with the devil, to whom he had handed a violin after a music lesson, in order to assess whether the devil could play. The devil then proceeded to play "with such great art and intelligence, as I had never even conceived in my boldest flights of fantasy". Tartini said that on waking he "immediately grasped my violin in order to retain, in part at least, the impression of my dream". "O Little Town of Bethlehem" American musician Lewis Redner wrote "St. Louis", the melody to which the Christmas carol "O Little Town of Bethlehem" is most commonly sung in the United States, in December 1868 at the request of Episcopal clergyman and author Phillips Brooks, who had written the lyrics. Redner had not yet written the tune on the night before he was scheduled to rehearse it. According to Redner's account, he "was roused from sleep late in the night hearing an angel-strain whispering in my ear, and seizing a piece of music paper I jotted down the treble of the tune as we now have it, and on Sunday morning before going to church I filled in the harmony." "(I Can't Get No) Satisfaction" Keith Richards claimed to have dreamed the riff to the 1965 song "(I Can't Get No) Satisfaction". He ran through it once before falling asleep. He said when he listened back to it in the morning, there was about two minutes of acoustic guitar before you could hear him drop the pick and "then me snoring for the next forty minutes". "Yesterday" Paul McCartney claimed to have dreamed the melody to his song "Yesterday" (1965). After he woke up, he thought it was just a vague memory of some song he heard when he was younger. 
As it turned out that he had completely thought up this song all by himself, he recorded it and it became the most covered pop song in the world. "Black Sabbath" During the days of Earth, Geezer Butler wrote Black Sabbath's eponymous song "Black Sabbath", after a nightmare in which he had encountered a tall black figure at the edge of his bed, gazing at him. After he woke up, the book on the occult he had been reading prior to the nightmare had mysteriously vanished from his room. He later told the band about his experience and recorded the song using a haunting riff and a tritone. Named after a 1963 Boris Karloff film, "Black Sabbath" became one of the band's most popular songs, and they even named their debut album and the band itself after it. In 2023, Rolling Stone ranked "Black Sabbath" as the greatest heavy metal song of all time. "Let It Be" Paul McCartney has also claimed that the idea of "Let It Be" came to him after a dream he had about his late mother during the tense period surrounding the sessions for The Beatles ("the White Album") in 1968. McCartney later said: "It was great to visit with her again. I felt very blessed to have that dream. So that got me writing 'Let It Be'." In a later interview, McCartney said that in the dream his mother had told him, "It will be all right, just let it be." "The Prophet's Song" Brian May said that he was inspired to write the 1975 Queen track "The Prophet's Song" after a hepatitis-induced fever dream he had about an apocalyptic flood. It is the longest Queen song with vocals. Selected Ambient Works Volume II Richard James, who performs as Aphex Twin, has written several ambient tracks while lucid dreaming, saying that: James says that seventy per cent of his 1994 album Selected Ambient Works Volume II was written while lucid dreaming. The Dark Carnival Violent J, a member of Insane Clown Posse, claimed to have dreamed the concept of The Dark Carnival, a traveling carnival full of spirits, which is described in much of their discography. "It Could Be Better" Artist Left at London stated in a 2022 TikTok video that she first heard the hook of her song "It Could Be Better" from her album T.I.A.P.F.Y.H. in a dream where "the cast of High School Musical sang it at [her]". Hit Em In a tweet from July 2024, Drew Daniel of electronic music duo Matmos described a fictional music genre he encountered in a dream entitled "hit em". Recounted to him by a nondescript woman in the dream, the genre is a type of electronic music "with super crunched out sounds" in a 5/4 time signature with a tempo of 212 beats per minute. Following the tweet, numerous artists have tried their hand at creating hit em tracks. Film and television 3 Women Director Robert Altman conceived of his 1977 film 3 Women during a restless sleep while his wife was in the hospital. He dreamt that he was directing a film starring Shelley Duvall and Sissy Spacek in an identity theft story, against a desert backdrop. He based the film on this dream, although additional story details were added later. The Terminator Director James Cameron said the titular character in The Terminator (1984) was inspired by a dream he had under the influence of a soaring fever he suffered while he was "sick and dead broke" in Rome, Italy, during the final cut of Piranha II. 
He dreamed of "a chrome skeleton emerging from a fire", and made some sketches on hotel stationery upon waking: Over the Garden Wall Chapter 5 of the miniseries Over the Garden Wall (2014), "Mad Love", was inspired by a dream that show creator Patrick McHale had. In the events of the dream, Pat was house hunting and came across a secret library in one of the houses. As he explored further, he realized that he had entered someone else's home. In the episode, the character Quincy Endicott explores his mansion, and discovers that he has entered the mansion of his neighbor. Video games and software Deltarune In an interview conducted a few months after the release of its first chapter, Toby Fox stated that the idea for Deltarune (2018) came from a dream he experienced while bedridden from a fever seven years prior. According to Fox, the dream depicted the emotionally-moving ending to a game that did not exist; upon waking up, he was determined to make the game into a reality. Omori In a video discussing the creation of the 2020 game Omori, developer Omocat describes the game's liminal space area - White Space - as being inspired by a dream they experienced when in high school of "standing in a white room with nothing in it... Something red and blurry appeared in front of me... a giant floating rectangular button with the word 'Live' written across it, just like a video game interface. And... when I pressed it, I woke up." Other aspects of the game were influenced by lucid dreams the developer had experienced. They said that "I would try to escape them through death, by for instance, jumping into a lake. It's all pretty creepy stuff that probably influenced the game quite a bit." Salesforce The user interface of Salesforce, a widely used enterprise software platform founded in 1999, was inspired by a dream of its co-founder Marc Benioff. Benioff envisioned an application interface resembling that of Amazon, which included labeled tabs. Benioff said that in his dream: Science Descartes' new science Descartes claimed that three separate dreams that he had on November 10, 1619, revealed to him the basis of a new philosophy, the scientific method. Periodic table The chemist Dmitri Mendeleev is said to have invented the modern periodic table in a dream "where all the elements fell into place as required." Mendeleev, a chemistry professor and an avid player of the card game solitaire, had been attempting to clearly organize the elements, which at the time were grouped either by atomic weight or by common properties. In solitaire, however, cards are arranged both by suit, horizontally, and by number, vertically. After 3 days of nonstop attempts to invent the Periodic Table, Mendeleev is said to have fallen asleep, whereupon he promptly dreamt its structure. Sewing machine There is a possibly apocryphal account of Elias Howe inventing the needle of the modern lockstitch sewing machine in a dream. A traditional needle has its eye at its base, but Howe was supposedly inspired by a dream to instead position the eye at the point, as recorded in the history of his mother's family: Benzene The scientist Friedrich August Kekulé discovered the seemingly impossible chemical structure of benzene (C6H6) in 1836, when he had a dream of a group of snakes swallowing their tails. Structure of the atom Niels Bohr won the Nobel Prize for Physics in 1922 for his discovery of the structure of the atom. He recalled that the electrons revolving around the nucleus, like the solar system, came to him in a dream. 
Upon testing his "dream" hypothesis, he was able to discover that the atomic structure was, in fact, similar to it. Srinivasa Ramanujan's divine revelations Indian mathematician Srinivasa Ramanujan, known for his substantial contributions to number theory, analysis and other areas of pure mathematics, claimed that Hindu goddess Namagiri Thayar would bestow him with mathematical insights in his dreams and that in these visions, "scrolls containing the most complicated mathematics used to unfold before his eyes" Neurotransmission Before Otto Loewi's work, there was debate on whether neurotransmission was primarily chemical or electrical. On a night before Easter Sunday, Loewi had dreamed of the perfect experimental setup: two chambers with beating hearts - one with its nerves intact and the other without. These chambers would be filled with solution and connected with a tube. The experimenter would electrically stimulate the first heart, causing it to beat slower. If neurotransmission was primarily electrical, there would be no reason for the second heart to slow down. However, if neurotransmission was chemical, then the chemicals could theoretically float down the tube and slow down the second heart in the other chamber as well. Loewi wrote this idea down but could not decipher his own writing when he awoke in the morning. The next night, the dream came to him again. Working with Henry Dale, Loewi would go on to use this experimental setup to demonstrate chemical neurotransmission and win the Nobel Prize for it in 1936. Smart card Roland Moreno claimed to have thought of the smart card concept in a dream, telling France Soir in a 2006 interview, "I came up with the idea in my sleep... To be honest, I'm a lazy bum and my productivity is on the feeble side." Moreno patented the idea in 1974. Food King's Hand King's Hand is a dessert made of M&M's and cookie dough, molded into the shape of a hollow hand and baked, before being filled with Greek salad. It was invented by a 28-year-old data analyst, who says the idea for the dish came to her in a dream in which it was the main course of a festival feast. After a week of experimentation, she posted a series of photos on Twitter on December 6, 2020. Later that day, she shared her recipe. As of December 15, 2020, the tweet had garnered over 166,000 likes and was featured in a diverse array of media and print publications, including Fox News, TODAY, and BuzzFeed News. The original post inspired people to make their own versions, as well as descriptions of foods that had appeared in others' dreams. Languages Volapük The Volapük language was created by Johann Martin Schleyer (1831–1912), after dreaming that God had told him to create an international language. See also Tetris effect References External links On Divination in Sleep The Dreams of Descartes: Notes on the Origins of Scientific Thinking Dream Dreams
List of works based on dreams
Biology
2,916
54,711,947
https://en.wikipedia.org/wiki/3D%20structure%20change%20detection
3D structure change detection is a type of change detection (GIS) process for geographical information systems (GIS). It measures how the volume of a particular area has changed between two or more time periods. A high-spatial-resolution digital elevation model (DEM) that provides accurate 4-D (space and time) structural information over the area of interest is required to compute such changes. In practice, two or more DEMs that cover the same area are used to monitor topographic changes of that area. By comparing DEMs made at different times, changes in terrain structure can be identified from the ground elevation differences between the DEMs. The detail, timing and accuracy of the detected changes depend strongly on the resolution and quality of the DEMs. In general, the problem involves determining whether or not a change has occurred, or whether several changes have occurred. Such structure change detection has been widely used to assess urban growth, the impact of natural disasters such as earthquakes and volcanic eruptions, and battle damage. See also Change detection (GIS) Digital elevation model Geographic information system References External links Geographic information systems Topography techniques Change detection
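As a rough illustration of the DEM differencing described above, the following sketch compares two co-registered DEM grids and estimates the net volume change. The arrays, cell size, and noise threshold are hypothetical placeholders rather than part of any particular GIS package.

```python
import numpy as np

def dem_change(dem_t1: np.ndarray, dem_t2: np.ndarray,
               cell_size: float, threshold: float = 0.5):
    """Compare two co-registered DEMs (elevations in metres).

    Returns the per-cell elevation difference, a boolean mask of cells whose
    change exceeds `threshold`, and the net volume change in cubic metres.
    """
    diff = dem_t2 - dem_t1                 # positive = ground rose / material added
    changed = np.abs(diff) > threshold     # ignore differences within DEM noise
    volume_change = diff[changed].sum() * cell_size ** 2
    return diff, changed, volume_change

# Hypothetical 3 x 3 DEM tiles with 10 m cells; one cell gains 2 m of material.
before = np.zeros((3, 3))
after = before.copy()
after[1, 1] = 2.0
_, mask, dv = dem_change(before, after, cell_size=10.0)
print(mask.sum(), dv)   # 1 changed cell, +200.0 cubic metres
```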
3D structure change detection
Technology
224
20,644,427
https://en.wikipedia.org/wiki/Ankanam
An Ankanam is a traditional unit of land area, used in the same role as the acre. It is used mainly in parts of Andhra Pradesh and Karnataka, including Nellore, Anekal, Bengaluru and Tirupati. An Ankanam is measured as 72 square feet (mostly in the Nellore District) and, in some places (such as Tirupati), as 36 square feet. In Nellore, one acre equals 605 Ankanams, and 1 cent amounts to 6.05 Ankanams. This unit is very popular, presumably because it makes it easier to calculate the cost of a piece of land. Etymology and definitions Ankanam is related to the words anga and adugu (meaning foot) in Dravidian languages. In the Kannada-English Dictionary by Rev. Ferdinand Kittel, "Ankana" is defined as follows: "The (small or large) space either between any two posts or pillars in a wall that support the roof, or between any two beams". Also, in references on Hindu temple architecture, 'Ankana' is defined as the distance between one pillar and another, or between a pillar and a wall. Hence it has no definitive measurement. Comparison with other units Source: 1 Acre = 100 cents = 0.405 hectares = 605 Ankanams 1 cent = 6.05 Ankanams = 48.4 Sq. Yards 1 Ankanam = 8 Sq. Yards = 72 Sq. Feet 1 Tirupati Ankanam = 4 Sq. Yards = 36 Sq. Feet 1 Sq. Yard = 9 Sq. Feet 1 Yard = 3 Feet References Units of area Customary units in India External links Ankanam to Gajam Calculator
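A short sketch of the conversions listed above, using the Nellore value of 72 square feet per ankanam and the Tirupati value of 36 square feet; the function names and the acre figure of 43,560 square feet (which is consistent with the article's 605 ankanams per acre) are illustrative assumptions, not part of the article.

```python
# Illustrative conversions based on the figures quoted above.
SQFT_PER_ANKANAM_NELLORE = 72.0   # 8 square yards
SQFT_PER_ANKANAM_TIRUPATI = 36.0  # 4 square yards
SQFT_PER_ACRE = 43_560.0          # 605 ankanams x 72 sq ft (Nellore reckoning)

def ankanams_to_sqft(ankanams: float,
                     per_ankanam: float = SQFT_PER_ANKANAM_NELLORE) -> float:
    """Convert ankanams to square feet."""
    return ankanams * per_ankanam

def acres_to_ankanams(acres: float,
                      per_ankanam: float = SQFT_PER_ANKANAM_NELLORE) -> float:
    """Convert acres to ankanams."""
    return acres * SQFT_PER_ACRE / per_ankanam

print(ankanams_to_sqft(10))     # 720.0 square feet
print(acres_to_ankanams(1.0))   # 605.0 ankanams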
Ankanam
Mathematics
353
6,461,404
https://en.wikipedia.org/wiki/Urban%20bias
Urban bias refers to a political economy argument according to which economic development is hampered by groups who, by their central location in urban areas, are able to pressure governments to protect their interests. It is a structural condition of overurbanization, and its growth leads to a saturated urban labour market, truncated opportunity structures in rural areas, overburdened public services, distorted sectoral development in world economies, the isolation of large segments of the urban and rural population from the fruits of economic development, and slowed economic growth owing to the high costs of urban development. Groups often said to have an 'urban bias' include governments, political parties, labor unions, students, laws, civil servants and manufacturers. These interests are portrayed as often not reflecting the comparative economic advantage of the country, usually a less-industrialized country whose comparative advantage is considered to be export agriculture. Among the leading scholars to claim urban bias are Michael Lipton and Robert H. Bates. The notion of urban bias is particularly popular among those who advocate neoliberal economic policies. Many World Bank publications use the notion of urban bias to support policies oriented toward export agriculture. See also Metropolitan bias Rural bias References Development economics Regional economics Urban planning Urbanization
Urban bias
Engineering
241
21,961
https://en.wikipedia.org/wiki/Nucleon
In physics and chemistry, a nucleon is either a proton or a neutron, considered in its role as a component of an atomic nucleus. The number of nucleons in a nucleus defines the atom's mass number (nucleon number). Until the 1960s, nucleons were thought to be elementary particles, not made up of smaller parts. Now they are understood as composite particles, made of three quarks bound together by the strong interaction. The interaction between two or more nucleons is called the internucleon interaction or nuclear force, which is also ultimately caused by the strong interaction. (Before the discovery of quarks, the term "strong interaction" referred to just internucleon interactions.) Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that describe the properties of quarks and of the strong interaction. These equations describe quantitatively how quarks can bind together into protons and neutrons (and all the other hadrons). However, when multiple nucleons are assembled into an atomic nucleus (nuclide), these fundamental equations become too difficult to solve directly (see lattice QCD). Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models, such as the nuclear shell model. These models can successfully describe nuclide properties, as for example, whether or not a particular nuclide undergoes radioactive decay. The proton and neutron are in a scheme of categories being at once fermions, hadrons and baryons. The proton carries a positive net charge, and the neutron carries a zero net charge; the proton's mass is only about 0.13% less than the neutron's. Thus, they can be viewed as two states of the same nucleon, and together form an isospin doublet (I = 1/2). In isospin space, neutrons can be transformed into protons and conversely by SU(2) symmetries. These nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. According to Noether's theorem, isospin is conserved with respect to the strong interaction. Overview Properties Protons and neutrons are best known in their role as nucleons, i.e., as the components of atomic nuclei, but they also exist as free particles. Free neutrons are unstable, with a half-life of around 10 minutes, but they have important applications (see neutron radiation and neutron scattering). Protons not bound to other nucleons are the nuclei of hydrogen atoms when bound with an electron, or, if not bound to anything, are ions or cosmic rays. Both the proton and the neutron are composite particles, meaning that each is composed of smaller parts, namely three quarks each; although once thought to be so, neither is an elementary particle. A proton is composed of two up quarks and one down quark, while the neutron has one up quark and two down quarks. Quarks are held together by the strong force, or equivalently, by gluons, which mediate the strong force at the quark level. An up quark has electric charge +2/3 e, and a down quark has charge −1/3 e, so the summed electric charges of proton and neutron are +e and 0, respectively. Thus, the neutron has a charge of 0 (zero), and therefore is electrically neutral; indeed, the term "neutron" comes from the fact that a neutron is electrically neutral. The masses of the proton and neutron are similar: for the proton it is about 938.272 MeV/c2 (1.00728 Da), while for the neutron it is about 939.565 MeV/c2 (1.00866 Da); the neutron is roughly 0.13% heavier.
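As a quick arithmetic check, using only the quark charges and nucleon masses quoted above, the charge sums and the mass splitting work out as follows:

```latex
Q_p = 2\left(+\tfrac{2}{3}e\right) + \left(-\tfrac{1}{3}e\right) = +e,
\qquad
Q_n = \left(+\tfrac{2}{3}e\right) + 2\left(-\tfrac{1}{3}e\right) = 0,
\qquad
m_n - m_p \approx 939.565 - 938.272 = 1.293~\mathrm{MeV}/c^2 .
```

The 1.293 MeV/c2 difference is a little over 0.1% of the nucleon mass, which is the small mass splitting referred to above.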
The similarity in mass can be explained roughly by the slight difference in masses of up quarks and down quarks composing the nucleons. However, a detailed description remains an unsolved problem in particle physics. The spin of the nucleon is 1/2, which means that they are fermions and, like electrons, are subject to the Pauli exclusion principle: no more than one nucleon, e.g. in an atomic nucleus, may occupy the same quantum state. The isospin and spin quantum numbers of the nucleon have two states each, resulting in four combinations in total. An alpha particle is composed of four nucleons occupying all four combinations, namely, it has two protons (having opposite spin) and two neutrons (also having opposite spin), and its net nuclear spin is zero. In larger nuclei constituent nucleons, by Pauli exclusion, are compelled to have relative motion, which may also contribute to nuclear spin via the orbital quantum number. They spread out into nuclear shells analogous to electron shells known from chemistry. Both the proton and neutron have magnetic moments, though the nucleon magnetic moments are anomalous and were unexpected when they were discovered in the 1930s. The proton's magnetic moment, symbol μp, is about 2.79 μN, whereas, if the proton were an elementary Dirac particle, it should have a magnetic moment of 1 μN. Here the unit for the magnetic moments is the nuclear magneton, symbol μN, an atomic-scale unit of measure. The neutron's magnetic moment is μn ≈ −1.91 μN, whereas, since the neutron lacks an electric charge, it should have no magnetic moment. The value of the neutron's magnetic moment is negative because the direction of the moment is opposite to the neutron's spin. The nucleon magnetic moments arise from the quark substructure of the nucleons. The proton magnetic moment is exploited for NMR / MRI scanning. Stability A neutron in free state is an unstable particle, with a half-life around ten minutes. It undergoes β− decay (a type of radioactive decay) by turning into a proton while emitting an electron and an electron antineutrino. This reaction can occur because the mass of the neutron is slightly greater than that of the proton. (See the Neutron article for more discussion of neutron decay.) A proton by itself is thought to be stable, or at least its lifetime is too long to measure. This is an important discussion in particle physics (see Proton decay). Inside a nucleus, on the other hand, combined protons and neutrons (nucleons) can be stable or unstable depending on the nuclide, or nuclear species. Inside some nuclides, a neutron can turn into a proton (producing other particles) as described above; the reverse can happen inside other nuclides, where a proton turns into a neutron (producing other particles) through β+ decay or electron capture. And inside still other nuclides, both protons and neutrons are stable and do not change form. Antinucleons Both nucleons have corresponding antiparticles: the antiproton and the antineutron, which have the same mass and opposite charge as the proton and neutron respectively, and they interact in the same way. (This is generally believed to be exactly true, due to CPT symmetry. If there is a difference, it is too small to measure in all experiments to date.) In particular, antinucleons can bind into an "antinucleus". So far, scientists have created antideuterium and antihelium-3 nuclei.
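The free-neutron decay described under Stability above can be written out explicitly; the approximate energy release follows from the nucleon mass difference quoted earlier together with the electron mass (about 0.511 MeV/c2):

```latex
n \;\longrightarrow\; p + e^{-} + \bar{\nu}_e ,
\qquad
Q = \left(m_n - m_p - m_e\right)c^2 \approx (1.293 - 0.511)~\mathrm{MeV} \approx 0.78~\mathrm{MeV} .
```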
Tables of detailed properties Nucleons The masses of the proton and neutron are known with far greater precision in daltons (Da) than in MeV/c2 due to the way in which these are defined. The conversion factor used is 1 Da = 931.49 MeV/c2. At least 10^35 years. See proton decay. For free neutrons; in most common nuclei, neutrons are stable. The masses of their antiparticles are assumed to be identical, and no experiments have refuted this to date. Current experiments show any relative difference between the masses of the proton and antiproton must be less than and the difference between the neutron and antineutron masses is on the order of . Nucleon resonances Nucleon resonances are excited states of nucleon particles, often corresponding to one of the quarks having a flipped spin state, or with different orbital angular momentum when the particle decays. Only resonances with a 3- or 4-star rating at the Particle Data Group (PDG) are included in this table. Due to their extraordinarily short lifetimes, many properties of these particles are still under investigation. The symbol format is given as N(m) L2I2J, where m is the particle's approximate mass, L is the orbital angular momentum (in spectroscopic notation) of the nucleon–meson pair produced when it decays, and I and J are the particle's isospin and total angular momentum respectively. Since nucleons are defined as having isospin 1/2, the first number will always be 1, and the second number will always be odd. When discussing nucleon resonances, sometimes the N is omitted and the order is reversed, in the form L2I2J (m); for example, a proton can be denoted as "N(939) S11" or "S11 (939)". The table below lists only the base resonance; each individual entry represents 4 baryons: 2 nucleon resonance particles and their 2 antiparticles. Each resonance exists in a form with a positive electric charge (N+), with a quark composition of uud like the proton, and a neutral form, with a quark composition of udd like the neutron, as well as the corresponding antiparticles with the corresponding antiquark compositions (two up antiquarks and one down antiquark, and one up antiquark and two down antiquarks, respectively). Since they contain no strange, charm, bottom, or top quarks, these particles do not possess strangeness, etc. The table only lists the resonances with an isospin of 1/2. For resonances with isospin 3/2, see the article on Delta baryons. † The P11(939) nucleon represents the excited state of a normal proton or neutron. Such a particle may be stable when in an atomic nucleus, e.g. in lithium-6. Quark model classification In the quark model with SU(2) flavour, the two nucleons are part of the ground-state doublet. The proton has quark content of uud, and the neutron, udd. In SU(3) flavour, they are part of the ground-state octet (8) of spin-1/2 baryons, known as the Eightfold way. The other members of this octet are the hyperons: the strange isotriplet Σ+, Σ0, Σ−, the Λ, and the strange isodoublet Ξ0, Ξ−. One can extend this multiplet in SU(4) flavour (with the inclusion of the charm quark) to the ground-state 20-plet, or to SU(6) flavour (with the inclusion of the top and bottom quarks) to the ground-state 56-plet. The article on isospin provides an explicit expression for the nucleon wave functions in terms of the quark flavour eigenstates. Models Although it is known that the nucleon is made from three quarks, it is not known how to solve the equations of motion for quantum chromodynamics. Thus, the study of the low-energy properties of the nucleon is performed by means of models.
The only first-principles approach available is to attempt to solve the equations of QCD numerically, using lattice QCD. This requires complicated algorithms and very powerful supercomputers. However, several analytic models also exist: Skyrmion models The skyrmion models the nucleon as a topological soliton in a nonlinear SU(2) pion field. The topological stability of the skyrmion is interpreted as the conservation of baryon number, that is, the non-decay of the nucleon. The local topological winding number density is identified with the local baryon number density of the nucleon. With the pion isospin vector field oriented in the shape of a hedgehog space, the model is readily solvable, and is thus sometimes called the hedgehog model. The hedgehog model is able to predict low-energy parameters, such as the nucleon mass, radius and axial coupling constant, to approximately 30% of experimental values. MIT bag model The MIT bag model confines quarks and gluons interacting through quantum chromodynamics to a region of space determined by balancing the pressure exerted by the quarks and gluons against a hypothetical pressure exerted by the vacuum on all colored quantum fields. The simplest approximation to the model confines three non-interacting quarks to a spherical cavity, with the boundary condition that the quark vector current vanish on the boundary. The non-interacting treatment of the quarks is justified by appealing to the idea of asymptotic freedom, whereas the hard-boundary condition is justified by quark confinement. Mathematically, the model vaguely resembles that of a radar cavity, with solutions to the Dirac equation standing in for solutions to the Maxwell equations, and the vanishing vector current boundary condition standing for the conducting metal walls of the radar cavity. If the radius of the bag is set to the radius of the nucleon, the bag model predicts a nucleon mass that is within 30% of the actual mass. Although the basic bag model does not provide a pion-mediated interaction, it describes excellently the nucleon–nucleon forces through the 6 quark bag s-channel mechanism using the P-matrix. Chiral bag model The chiral bag model merges the MIT bag model and the skyrmion model. In this model, a hole is punched out of the middle of the skyrmion and replaced with a bag model. The boundary condition is provided by the requirement of continuity of the axial vector current across the bag boundary. Very curiously, the missing part of the topological winding number (the baryon number) of the hole punched into the skyrmion is exactly made up by the non-zero vacuum expectation value (or spectral asymmetry) of the quark fields inside the bag. , this remarkable trade-off between topology and the spectrum of an operator does not have any grounding or explanation in the mathematical theory of Hilbert spaces and their relationship to geometry. Several other properties of the chiral bag are notable: It provides a better fit to the low-energy nucleon properties, to within 5–10%, and these are almost completely independent of the chiral-bag radius, as long as the radius is less than the nucleon radius. This independence of radius is referred to as the Cheshire Cat principle, after the fading of Lewis Carroll's Cheshire Cat to just its smile. It is expected that a first-principles solution of the equations of QCD will demonstrate a similar duality of quark–meson descriptions. 
See also SLAC bag model Hadrons Electroweak interaction Footnotes References Particle listings Further reading Hadrons Baryons Neutron
Nucleon
Physics
3,104
2,729,585
https://en.wikipedia.org/wiki/Working%20animal
A working animal is an animal, usually domesticated, that is kept by humans and trained to perform tasks instead of being slaughtered to harvest animal products. Some are used for their physical strength (e.g. oxen and draft horses) or for transportation (e.g. riding horses and camels), while others are service animals trained to execute certain specialized tasks (e.g. hunting and guide dogs, messenger pigeons, and fishing cormorants). They may also be used for milking or herding. Some, at the end of their working lives, may also be used for meat or leather. The history of working animals may predate agriculture as dogs were used by hunter-gatherer ancestors; around the world, millions of animals work in relationship with their owners. Domesticated species are often bred for different uses and conditions, especially horses and working dogs. Working animals are usually raised on farms, though some are still captured from the wild, such as dolphins and some Asian elephants. People have found uses for a wide variety of abilities in animals, and even industrialized societies use many animals for work. People use the strength of horses, elephants, and oxen to pull carts and move loads. Police forces use dogs for finding illegal substances and assisting in apprehending wanted persons, others use dogs to find game or search for missing or trapped people. People use various animals—camels, donkeys, horses, dogs, etc.—for transport, either for riding or to pull wagons and sleds. Other animals, including dogs and monkeys, help disabled people. On rare occasions, wild animals are not only tamed, but trained to perform work—though often solely for novelty or entertainment, as such animals tend to lack the trustworthiness and mild temper of true domesticated working animals. Conversely, not all domesticated animals are working animals. For example, while cats may catch mice, it is an instinctive behavior, not one that can be trained by human intervention. Other domesticated animals, such as sheep or rabbits, may have agricultural uses for meat, hides and wool, but are not suitable for work. Finally, small domestic pets, such as most small birds (other than certain types of pigeon) are generally incapable of performing work other than providing companionship. Roles and specializations Transportation Some animals are used due to sheer physical strength in tasks such as ploughing or logging. Such animals are grouped as a draught or draft animals. Others may be used as pack animals, for animal-powered transport, the movement of people and goods. Together, these are sometimes called beasts of burden. Some animals are ridden by people on their backs and are known as mounts. Alternatively, one or more animals in harness may be used to pull vehicles. Riding animals or mounts Riding animals are animals that people use as mounts in order to perform tasks such as traversing across long distances or over rugged terrain, hunting on horseback or with some other riding animal, patrolling around rural and/or wilderness areas, rounding up and/or herding livestock or even for recreational enjoyment. They mainly include equines such as horses, donkeys, and mules; bovines such as cattle, water buffalo, and yak. In some places, elephants, llamas and camels are also used. Dromedary camels are in arid areas of Australia, North Africa and the Middle East; the less common Bactrian camel inhabits central and East Asia; both are used as working animals. On occasion, reindeer, though usually driven, may be ridden. 
Certain wild animals have been tamed and used for riding, usually for novelty purposes, including the zebra and the ostrich. Some mythical creatures are believed to act as divine mounts, such as garuda in Hinduism (See vahana for divine mounts in Hinduism) and the winged horse Pegasus in Greek mythology. Pack animals Pack animals may be of the same species as mounts or harness animals, though animals such as horses, mules, donkeys, reindeer and both types of camel may have individual bloodlines or breeds that have been selectively bred for packing. Additional species are only used to carry loads, including llamas in the Andes. Domesticated cattle and yaks are also used as pack animals. Other species used to carry cargo include dogs and pack goats. Draft animals An intermediate use is as draft animals, harnessed singly or in teams, to pull sleds, wheeled vehicles or ploughs. Oxen are slow but strong, and have been used in a yoke since ancient times: the earliest surviving vehicle, Puabi's Sumerian sledge, was ox-drawn; an acre was originally defined as the area a span of oxen could plow in a day. The domestic water buffalo and carabao, pull wagons and ploughs in Southeast Asia and the Philippines. Draught or draft horses are commonly used in harness for heavy work. Several breeds of medium-weight horses are used to pull lighter wheeled carts, carriages and buggies when a certain amount of speed or style is desirable. Mules are considered tough and strong, with harness capacity dependent on the type of horse mare used to produce the mule foal. Because they are a hybrid animal and usually are infertile, separate breeding programs must also be maintained. Ponies and donkeys are often used to pull carts and small wagons. Historically, ponies were commonly used in mining to pull ore carts. Dogs are used for pulling light carts or, particularly, sleds (e.g. sled dogs such as huskies) for both recreation and working purposes. Goats also can perform light harness work in front of carts. Reindeer are used in the Arctic and sub-Arctic Nordic countries and Siberia. During World War II, the Red Army deployed deer transportation battalions on the Eastern Front. In the twenty-first century, Russian soldiers continue to train with reindeer sleds in winter. In traditional festive legend, Santa Claus's reindeer pull a sleigh through the night sky to help Santa Claus deliver gifts to children on Christmas Eve. Elephants are used for logging in Southeast Asia. Less often, camels and llamas have been trained to harness. According to Juan Ignacio Molina the Dutch captain Joris van Spilbergen observed the use of chiliquenes (a llama type) by native Mapuches of Mocha Island as plough animals in 1614. Assorted wild animals have, on occasion, been tamed and trained to harness, including zebras and even moose. Guard animals As some domesticated animals display extremely protective or territorial behavior, certain breeds and species have been utilized to guard people and/or property such as homes, public buildings, businesses, crops, livestock and even venues of criminal activity. Guard animals can either act as alarms to alert their owners of danger or they can be used to actively scare off and/or even attack encroaching intruders or dangerous animals. Well known examples of guard animals include dogs, geese and llamas. Powering fixed machinery Working draught animals may power fixed machinery using a treadmill and have been used throughout history to power a winch to raise water from a well. 
Turnspit dogs were formerly used to power roasting jacks for roasting meat. Treatment animals Working as a form of biological treatment for the environment. Animals such as Asian carps were imported to the U.S. in 1970s to control algae, weed, and parasite growth in aquatic farms, weeds in canal systems, and as one form of sewage treatment. Pathogens and diseases Animals can be used to detect the presence of pathogens and patients carrying infectious diseases. Dogs (including scent hounds) and bees have been trained to detect COVID-19 infections. Dogs have been trained to detect cancer. One study reported ants could be used to detect cancer via urine. Detection rats such as those trained by APOPO can also be taught to identify diseases, especially pulmonary tuberculosis. Searching and retrieving Dogs and pigs, with a better sense of smell than humans, can assist with gathering by finding valuable products, such as truffles (a very expensive subterranean fungus). The French typically use truffle hogs, while Italians mainly use dogs. Monkeys are trained to pick coconuts from palm trees, a job many human workers consider as too dangerous. Detecting contraband Detection dogs, commonly employed by law enforcement authorities, are trained to use their senses to detect illegal drugs, explosives, currency, and contraband electronics such as illicit mobile phones, among other things. The sense most used by detection dogs is smell, hence such dogs are also commonly known as 'sniffer dogs'. For this task, dogs may sometimes be used remotely from the suspect item, for example via the Remote Air Sampling for Canine Olfaction (RASCO) system. Interfacing and organization Assistance animals The best-known example is the guide dog or seeing eye dog for blind people. See also service dog. Miniature horses are also occasionally used for this purpose as well. Trained dogs and African, Asian, and American monkeys, such as capuchin monkeys have been taught to provide other functions for impaired people, such as opening mail and minor household tasks of the same like. Herding A very close working relationship exists between a stockman or shepherd, a herding dog, and the herd (or mob) of sheep or cattle. Cattle and sheep herders in other parts of the world also use various dog breeds. Certain breeds of horses also have an innate "cow sense" that allows them to effectively carry a rider to the right place at the right time to muster (gather or round up) livestock. See stock horse; cutting horse Police and military The defensive and offensive capabilities of animals (such as fangs and claws) can be used to protect or to attack humans. The guard dog barks or attacks, to warn of an intruder, sniffer dogs are used to detect explosives contraband and attack dogs are trained to attack on command. War elephants were trained for battle in ancient times and are still used for military transport today. Military uses of horses have changed over the millennia but still continue, including for police work. Camel cavalry was used in deserts since they had better performance and survivability in the harsh desert environment than horses. India's Border Security Force and some other countries still used camel cavalry for patrolling in the Thar desert. Dolphins and sea lions carry markers to attach to naval mines as well as patrolling harbors. Dogs can be trained to find landmines. Rats, which are lighter and less of a risk to set the mines off, have recently been used more frequently. 
Homing pigeons transport material, usually messages on small pieces of paper, by air. Legal status In some jurisdictions, certain working animals are afforded greater legal rights than other animals. Common examples are police dogs and military dogs, which are often afforded additional protections and the same memorial services as human officers and soldiers. Indian law has a provision for in loco parentis standing for the purpose of implementing animal welfare laws. Under Indian law, non-human entities such as animals, deities, trusts, charitable organizations, corporate and managing bodies, and several other non-human entities have been given the status of "legal person", with legal rights and duties such as to sue and be sued, to own and transfer property, and to pay taxes. In court cases regarding animals, the animals have the status of "legal person" and humans have the legal duty to act in loco parentis towards animal welfare, as a parent does towards minor children. In a case of cow smuggling, the Punjab and Haryana High Court mandated that the "entire animal kingdom including avian and aquatic" species has a "distinct legal persona with corresponding rights, duties, and liabilities of a living person" and that humans are "loco parentis", while laying out norms for animal welfare, veterinary treatment, fodder and shelter; for example, animal-drawn carriages must not carry more than four humans, and load-carrying animals must not be loaded beyond the specified limits, with those limits halved when animals have to carry the load up a slope. A court, while deciding the Animal Welfare Board of India vs Nagaraja case in 2014, mandated that animals are also entitled to the fundamental right to freedom enshrined in Article 21 of the Constitution of India, i.e. the right to life, personal liberty and the right to die with dignity (passive euthanasia). In another case, a court in Uttarakhand state mandated that animals have the same rights as humans. See also References External links Working Goats. Documentary produced by Oregon Field Guide Livestock Power (physics)
Working animal
Physics,Mathematics
2,584
33,415,439
https://en.wikipedia.org/wiki/Thai%20units%20of%20measurement
Thailand adopted the metric system on 17 December 1923. Before metrication, the traditional system of measurement used in Thailand employed anthropic units. Some of these units are still in use, albeit standardised to SI/metric measurements. When the Royal Thai Survey Department began cadastral survey in 1896, Director R. W. Giblin, F.R.G.S., noted, "It so happens that 40 metres or 4,000 centimetres are equal to one sen," so all cadastral plans are plotted, drawn, and printed to a scale of 1:4,000. The square wa, ngan and rai are still used in measurements of land area. The baht is still used as a unit of measurement in gold trading. However, one baht of 96.5% gold bullion is defined as 15.16 grams rather than the generic standard of 15 grams. The baht has also become the name of the currency of Thailand, which was originally fixed to the corresponding mass of silver. List of units References Thailand Customary units of measurement Units of measurement Culture of Thailand
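The land-area units mentioned above (square wa, ngan and rai) relate to the metric system by simple factors. The sketch below uses the commonly cited standardised values (1 wa = 2 m, so 1 square wa = 4 square metres; 1 ngan = 100 square wa; 1 rai = 4 ngan), which are stated here as background assumptions rather than taken from the text.

```python
# Commonly cited metric equivalents of the standardised Thai land units.
SQM_PER_SQ_WA = 4.0     # 1 wa = 2 m, so 1 square wa = 4 square metres
SQ_WA_PER_NGAN = 100
NGAN_PER_RAI = 4

def rai_to_square_metres(rai: float, ngan: float = 0, sq_wa: float = 0) -> float:
    """Convert a Thai land area written as rai-ngan-square wa to square metres."""
    total_sq_wa = (rai * NGAN_PER_RAI + ngan) * SQ_WA_PER_NGAN + sq_wa
    return total_sq_wa * SQM_PER_SQ_WA

print(rai_to_square_metres(1))          # 1600.0 square metres in one rai
print(rai_to_square_metres(2, 1, 50))   # 2 rai 1 ngan 50 sq wa = 3800.0 square metres
```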
Thai units of measurement
Mathematics
226
58,402,119
https://en.wikipedia.org/wiki/Aspergillus%20uvarum
Aspergillus uvarum is a species of fungus in the genus Aspergillus. It belongs to the group of black Aspergilli which are important industrial workhorses. A. uvarum belongs to the Nigri section. The species was first described in 2008. A. uvarum has been isolated from grapes in Europe. It has been shown to produce secalonic acid, which is common for other black aspergilli; and geodin, erdin, and dihydrogeodin, which are not produced by any other black aspergilli. The genome of A. uvarum was sequenced and published in 2014 as part of the Aspergillus whole-genome sequencing project – a project dedicated to performing whole-genome sequencing of all members of the genus Aspergillus. The genome assembly size was 35.85 Mbp. A. uvarum has 12,347 genes. Growth and morphology Aspergillus uvarum has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References uvarum Fungi described in 2008 Fungus species
Aspergillus uvarum
Biology
264
24,374,127
https://en.wikipedia.org/wiki/Macromolecular%20Materials%20and%20Engineering
Macromolecular Materials and Engineering is a monthly peer-reviewed scientific journal covering polymer science. It publishes Reviews, Feature Articles, Communications, and Full Papers on design, modification, characterization, and processing of advanced polymeric materials. Published topics include materials research on engineering polymers, tailor-made functional polymer systems, and new polymer additives. The editor-in-chief is David Huesmann. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.367. References External links Chemistry journals Materials science journals Academic journals established in 2000 English-language journals Wiley-VCH academic journals Monthly journals
Macromolecular Materials and Engineering
Materials_science,Engineering
127
7,130,691
https://en.wikipedia.org/wiki/Signal%20recognition%20particle%20receptor
Signal recognition particle (SRP) receptor, also called the docking protein, is a dimer composed of 2 different subunits that are associated exclusively with the rough ER in mammalian cells. Its main function is to identify the SRP units. SRP (signal recognition particle) is a molecule that helps the ribosome-mRNA-polypeptide complexes to settle down on the membrane of the endoplasmic reticulum. The eukaryotic SRP receptor (termed SR) is a heterodimer of SR-alpha (70 kDa; SRPRA) and SR-beta (25 kDa; SRPRB), both of which contain a GTP-binding domain, while the prokaryotic SRP receptor comprises only the monomeric loosely membrane-associated SR-alpha homologue FtsY (). SRX domain SR-alpha regulates the targeting of SRP-ribosome-nascent polypeptide complexes to the translocon. SR-alpha binds to the SRP54 subunit of the SRP complex. The SR-beta subunit is a transmembrane GTPase that anchors the SR-alpha subunit (a peripheral membrane GTPase) to the ER membrane. SR-beta interacts with the N-terminal SRX-domain of SR-alpha, which is not present in the bacterial FtsY homologue. SR-beta also functions in recruiting the SRP-nascent polypeptide to the protein-conducting channel. The SRX family represents eukaryotic homologues of the alpha subunit of the SR receptor. Members of this entry consist of a central six-stranded anti-parallel beta-sheet sandwiched by helix alpha1 on one side and helices alpha2-alpha4 on the other. They interact with the small GTPase SR-beta, forming a complex that matches a class of small G protein-effector complexes, including Rap-Raf, Ras-PI3K(gamma), Ras-RalGDS, and Arl2-PDE(delta). On the C-terminal of SR-alpha and FtsY is the NG domain similar to SRP54. NG domain The receptor binds to SPR54/Ffh by the "NG domain", a combination of a 4-helical-bundle "N" domain () and a GTPase "G" domain (), shared by both proteins. The bound structure is a quasi-symmetric heterodimer termed a targeting complex. Signal recognition particle (SRP) The signal recognition particle (SRP) is a multimeric protein, which along with its conjugate receptor (SR), is involved in targeting secretory proteins to the rough endoplasmic reticulum (RER) membrane in eukaryotes, or to the plasma membrane in prokaryotes. SRP recognises the signal sequence of the nascent polypeptide on the ribosome, retards its elongation, and docks the SRP-ribosome-polypeptide complex to the RER membrane via the SR receptor. SRP consists of six polypeptides (SRP9, SRP14, SRP19, SRP54, SRP68 and SRP72) and a single 300 nucleotide 7S RNA molecule. The RNA component catalyses the interaction of SRP with its SR receptor. In higher eukaryotes, the SRP complex consists of the Alu domain and the S domain linked by the SRP RNA. The Alu domain consists of a heterodimer of SRP9 and SRP14 bound to the 5' and 3' terminal sequences of SRP RNA. This domain is necessary for retarding the elongation of the nascent polypeptide chain, which gives SRP time to dock the ribosome-polypeptide complex to the RER membrane. References Receptors Protein targeting Single-pass transmembrane proteins
Signal recognition particle receptor
Chemistry,Biology
829
24,448,165
https://en.wikipedia.org/wiki/Bas%C3%ADlica%20Catedral%20Nuestra%20Se%C3%B1ora%20de%20la%20Altagracia
The Basilica-Cathedral of Our Lady of Altagracia (in Spanish, Basílica Catedral Nuestra Señora de la Altagracia) is a Roman Catholic minor basilica and cathedral in the Dominican Republic dedicated to Our Lady of Altagracia, patroness of the nation. It is in Salvaleón de Higüey. The basilica is the seat of the Roman Catholic Diocese of Nuestra Señora de la Altagracia in Higüey. The cathedral was raised to the honor of a minor basilica by Pope Paul VI on December 17, 1970. It was visited by Pope John Paul II during his visit to the country in 1992. Among many legends, one stands out. A long time ago a young girl from Salvaleón de Higüey asked her father for a portrait of the Virgin Mary. Her father (name unknown) brought the picture as gift for her. It is believed that the portrait was placed at the house of this girl. For some reason, at the break of dawn of each day, the portrait was always found outside the house, beneath a small tree. Every day this portrait was moved back inside by the girl, until she told her parents about it. The place became sacred, and the basilica was built on that same spot as reference of Mary's grace. The painting has been on display since 1571 and it was brought to the basilica in 1970. More than 800,000 visit the Basilica to see the image of Our Lady of Altagracia each year. The Feast of Our Lady of Altagracia is celebrated as a national holiday on January 21; depending on the day of the week it can be on the Friday before or the Monday after. The feast day was originally held on August 15 (Assumption of Mary) but was moved to January 21 to celebrate a victory over the French in 1690. See also Nine Years' War Virgin of Mercy References Roman Catholic cathedrals in the Dominican Republic 20th-century Roman Catholic church buildings Roman Catholic churches completed in 1970 Churches in the Dominican Republic Postmodern architecture Basilica churches in the Dominican Republic Buildings and structures in La Altagracia Province Tourist attractions in La Altagracia Province
Basílica Catedral Nuestra Señora de la Altagracia
Engineering
436
734,585
https://en.wikipedia.org/wiki/Scanning%20probe%20microscopy
Scanning probe microscopy (SPM) is a branch of microscopy that forms images of surfaces using a physical probe that scans the specimen. SPM was founded in 1981, with the invention of the scanning tunneling microscope, an instrument for imaging surfaces at the atomic level. The first successful scanning tunneling microscope experiment was done by Gerd Binnig and Heinrich Rohrer. The key to their success was using a feedback loop to regulate gap distance between the sample and the probe. Many scanning probe microscopes can image several interactions simultaneously. The manner of using these interactions to obtain an image is generally called a mode. The resolution varies somewhat from technique to technique, but some probe techniques reach a rather impressive atomic resolution. This is largely because piezoelectric actuators can execute motions with a precision and accuracy at the atomic level or better on electronic command. This family of techniques can be called "piezoelectric techniques". The other common denominator is that the data are typically obtained as a two-dimensional grid of data points, visualized in false color as a computer image. Established types AFM, atomic force microscopy Contact AFM Non-contact AFM Dynamic contact AFM Tapping AFM AFM-IR CFM, chemical force microscopy C-AFM, conductive atomic force microscopy EFM, electrostatic force microscopy KPFM, kelvin probe force microscopy MIM, microwave impedance microscopy MFM, magnetic force microscopy PFM, piezoresponse force microscopy PTMS, photothermal microspectroscopy/microscopy SCM, scanning capacitance microscopy SGM, scanning gate microscopy SQDM, scanning quantum dot microscopy SVM, scanning voltage microscopy FMM, force modulation microscopy TAFM, Tomographic AFM STM, scanning tunneling microscopy BEEM, ballistic electron emission microscopy ECSTM electrochemical scanning tunneling microscope SHPM, scanning Hall probe microscopy SPSM spin polarized scanning tunneling microscopy PSTM, photon scanning tunneling microscopy STP, scanning tunneling potentiometry SXSTM, synchrotron x-ray scanning tunneling microscopy SPE, Scanning Probe Electrochemistry SECM, scanning electrochemical microscopy SICM, scanning ion-conductance microscopy SVET, scanning vibrating electrode technique SKP, scanning Kelvin probe FluidFM, fluidic force microscopy FOSPM, feature-oriented scanning probe microscopy MRFM, magnetic resonance force microscopy NSOM, near-field scanning optical microscopy (or SNOM, scanning near-field optical microscopy) nano-FTIR, broadband nanoscale SNOM-based spectroscopy SSM, scanning SQUID microscopy SSRM, scanning spreading resistance microscopy SThM, scanning thermal microscopy SSET scanning single-electron transistor microscopy STIM, scanning thermo-ionic microscopy CGM, charge gradient microscopy SRPM, scanning resistive probe microscopy Image formation To form images, scanning probe microscopes raster scan the tip over the surface. At discrete points in the raster scan a value is recorded (which value depends on the type of SPM and the mode of operation, see below). These recorded values are displayed as a heat map to produce the final STM images, usually using a black and white or an orange color scale. Constant interaction mode In constant interaction mode (often referred to as "in feedback"), a feedback loop is used to physically move the probe closer to or further from the surface (in the z axis) under study to maintain a constant interaction. 
This interaction depends on the type of SPM: for scanning tunneling microscopy the interaction is the tunnel current, for contact mode AFM or MFM it is the cantilever deflection, etc. The type of feedback loop used is usually a PI-loop, which is a PID-loop where the differential gain has been set to zero (as it amplifies noise). The z position of the tip (the scanning plane being the xy-plane) is recorded periodically and displayed as a heat map. This is normally referred to as a topography image. In this mode a second image, known as the "error signal" or "error image", is also taken, which is a heat map of the interaction which was fed back on. Under perfect operation this image would be a blank at the constant value which was set on the feedback loop. Under real operation the image shows noise and often some indication of the surface structure. The user can use this image to adjust the feedback gains to minimise features in the error signal. If the gains are set incorrectly, many imaging artifacts are possible. If gains are too low, features can appear smeared. If the gains are too high, the feedback can become unstable and oscillate, producing striped features in the images which are not physical. Constant height mode In constant height mode the probe is not moved in the z-axis during the raster scan. Instead the value of the interaction under study is recorded (i.e. the tunnel current for STM, or the cantilever oscillation amplitude for amplitude modulated non-contact AFM). This recorded information is displayed as a heat map, and is usually referred to as a constant height image. Constant height imaging is much more difficult than constant interaction imaging, as the probe is much more likely to crash into the sample surface. Usually before performing constant height imaging one must image in constant interaction mode to check the surface has no large contaminants in the imaging region, to measure and correct for the sample tilt, and (especially for slow scans) to measure and correct for thermal drift of the sample. Piezoelectric creep can also be a problem, so the microscope often needs time to settle after large movements before constant height imaging can be performed. Constant height imaging can be advantageous for eliminating the possibility of feedback artifacts. Probe tips The nature of an SPM probe tip depends entirely on the type of SPM being used. The combination of tip shape and topography of the sample make up an SPM image. However, certain characteristics are common to all, or at least most, SPMs. Most importantly the probe must have a very sharp apex; the apex of the probe defines the resolution of the microscope, and the sharper the probe the better the resolution. For atomic resolution imaging the probe must be terminated by a single atom. For many cantilever based SPMs (e.g. AFM and MFM), the entire cantilever and integrated probe are fabricated by acid etching, usually from silicon nitride. Conducting probes, needed for STM and SCM among others, are usually constructed from platinum/iridium wire for ambient operation, or tungsten for UHV operation. Other materials such as gold are sometimes used either for sample specific reasons or if the SPM is to be combined with other experiments such as TERS. Platinum/iridium (and other ambient) probes are normally cut using sharp wire cutters; the optimal method is to cut most of the way through the wire and then pull to snap the last of the wire, increasing the likelihood of a single atom termination. 
Tungsten wires are usually electrochemically etched; following this, the oxide layer normally needs to be removed once the tip is in UHV conditions. It is not uncommon for SPM probes (both purchased and "home-made") not to image with the desired resolution. This could be because the tip is too blunt, or because the probe has more than one peak, resulting in a doubled or ghost image. For some probes, in situ modification of the tip apex is possible; this is usually done by either crashing the tip into the surface or by applying a large electric field. The latter is achieved by applying a bias voltage (of order 10 V) between the tip and the sample; as this distance is usually 1-3 angstroms, a very large field is generated. The additional attachment of a quantum dot to the tip apex of a conductive probe enables surface potential imaging with high lateral resolution, scanning quantum dot microscopy. Advantages The resolution of the microscopes is not limited by diffraction, only by the size of the probe-sample interaction volume (i.e., point spread function), which can be as small as a few picometres. Hence the ability to measure small local differences in object height (like that of 135 picometre steps on <100> silicon) is unparalleled. Laterally the probe-sample interaction extends only across the tip atom or atoms involved in the interaction. The interaction can be used to modify the sample to create small structures (Scanning probe lithography). Unlike electron microscope methods, specimens do not require a partial vacuum but can be observed in air at standard temperature and pressure or while submerged in a liquid reaction vessel. Disadvantages The detailed shape of the scanning tip is sometimes difficult to determine. Its effect on the resulting data is particularly noticeable if the specimen varies greatly in height over lateral distances of 10 nm or less. The scanning techniques are generally slower in acquiring images, due to the scanning process. As a result, efforts are being made to greatly improve the scanning rate. Like all scanning techniques, the embedding of spatial information into a time sequence opens the door to uncertainties in metrology, say of lateral spacings and angles, which arise due to time-domain effects like specimen drift, feedback loop oscillation, and mechanical vibration. The maximum image size is generally smaller. Scanning probe microscopy is often not useful for examining buried solid-solid or liquid-liquid interfaces. Scanning photocurrent microscopy (SPCM) SPCM can be considered a member of the Scanning Probe Microscopy (SPM) family. The difference between SPCM and other SPM techniques is that it exploits a focused laser beam as the local excitation source instead of a probe tip. Characterization and analysis of the spatially resolved optical behavior of materials is very important in the optoelectronic industry. Simply put, this involves studying how the properties of a material vary across its surface or bulk structure. Techniques that enable spatially resolved optoelectronic measurements provide valuable insights for the enhancement of optical performance. Scanning photocurrent microscopy (SPCM) has emerged as a powerful technique which can investigate spatially resolved optoelectronic properties in semiconductor nanostructures. Principle In SPCM, a focused laser beam is used to excite the semiconducting material, producing excitons (electron-hole pairs). 
These excitons undergo various transport and relaxation mechanisms, and if the charge carriers can reach the nearby electrodes before recombination takes place, a photocurrent is generated. This photocurrent is position dependent, as it is recorded while the laser spot raster scans the device. SPCM analysis Using the position dependent photocurrent map, important photocurrent dynamics can be analyzed. SPCM provides information such as characteristic lengths (for example the minority carrier diffusion length), recombination dynamics, doping concentration, internal electric field, etc. Visualization and analysis software In all instances, and in contrast to optical microscopes, rendering software is necessary to produce images. Such software is produced and embedded by instrument manufacturers but is also available as an accessory from specialized work groups or companies. The main packages used are freeware: Gwyddion, WSxM (developed by Nanotec); and commercial: SPIP (developed by Image Metrology), FemtoScan Online (developed by Advanced Technologies Center), MountainsMap SPM (developed by Digital Surf), TopoStitch (developed by Image Metrology). References Further reading External links Scanning Probe Microscope - An Animated Explanation of its Inner Workings WeCanFigureThisOut.org Scanning Probe Microscope - An Animated Explanation of its Piezoelectric Crystals WeCanFigureThisOut.org
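The constant-interaction (feedback) mode described in the Image formation section above can be illustrated with a minimal numerical sketch. The exponential current model, the set-point, the PI gains and the 0.5 nm nominal gap below are arbitrary assumptions chosen for illustration, not values from the article; the point is only to show how the recorded z positions (the topography image) and the residual error signal arise from the feedback loop.

import math

# Illustrative model: tunneling current decays exponentially with tip-sample gap.
def tunnel_current(gap_nm, current_at_nominal_gap_na=1.0, decay_per_nm=10.0):
    return current_at_nominal_gap_na * math.exp(-decay_per_nm * (gap_nm - 0.5))

def scan_line(surface_heights_nm, kp=0.02, ki=0.005, setpoint_na=1.0):
    """Constant-interaction (constant-current) scan of one raster line.
    Returns the recorded z positions (topography) and the error signal."""
    z_tip = surface_heights_nm[0] + 0.5        # start 0.5 nm above the surface
    integral = 0.0
    topography, error_signal = [], []
    for h in surface_heights_nm:
        gap = z_tip - h
        error = tunnel_current(gap) - setpoint_na   # the "error signal" that is fed back on
        integral += error
        z_tip += kp * error + ki * integral         # PI correction (no D term: it amplifies noise)
        topography.append(z_tip)
        error_signal.append(error)
    return topography, error_signal

# A 0.2 nm step on the surface; the recorded z positions follow it after a short lag,
# while the error signal shows a transient at the step edge.
line = [0.0] * 50 + [0.2] * 50
topo, err = scan_line(line)

If the gains kp and ki are made much larger the simulated z trace starts to oscillate, which is the same striping artifact the text describes for an over-driven feedback loop.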
Scanning probe microscopy
Chemistry,Materials_science
2,390
43,680,319
https://en.wikipedia.org/wiki/Cecilia%20Jarlskog
Cecilia Jarlskog (born in 1941) is a Swedish theoretical physicist, working mainly on elementary particle physics. Jarlskog obtained her doctorate in 1970 in theoretical particle physics at the Technical University of Lund. She is known for her work on CP violation in the electroweak sector of the Standard Model, introducing what is known as the Jarlskog invariant, and for her work on grand unified theories (see Georgi–Jarlskog mass relation). Research interests Cecilia Jarlskog is mainly known for her study and expertise in theoretical particle physics. Her studies include research on the ways that sub-atomic and electronic constituents of matter cohere or lose their symmetry, matter and antimatter asymmetry, mathematical physics, neutrino physics, and grand unification. The Jarlskog invariant, or rephasing-invariant CP violation parameter, is an invariant quantity in particle physics, which is of the order of ±2.8 x 10−5. This parameter is related to the unitarity conditions of the Cabibbo–Kobayashi–Maskawa matrix, which can be expressed as triangles whose sides are products of different elements of the matrix. As such, the Jarlskog invariant can be written as J = ±Im(VusVcbVub*Vcs*), which amounts to twice the area of the unitarity triangle. Because the area vanishes for the specific parameters in the Standard Model for which there would be no CP violation, this invariant is thus very useful to quantify the non-conservation of the CP-symmetry in elementary particle physics. It is one of Jarlskog's foremost contributions to physics, the other being the many years that she was an active member of CERN. She recalls her appreciation of CERN's (European Organization for Nuclear Research) international atmosphere. Being a part of this community gave her great opportunities to meet and talk with inspiring physicists from across the world. She noted that she felt fortunate to have 'lived in a period when the amount of information revealed about the nature of the elementary constituents of matter and their interactions has been mind-boggling'. At CERN, the European Organization for Nuclear Research, physicists and engineers probe the fundamental structure of the universe. The world's largest and most complex scientific instruments are employed to study the basic constituents of matter – fundamental particles. The particles are caused to collide at close to the speed of light, which affords physicists clues about the interactions of particles, and insights into the fundamental laws of nature. Career Jarlskog was appointed professor at the University of Bergen, Norway, in 1976. In 1985 she switched to the University of Stockholm, Sweden, staying there until 1994. Since then, Jarlskog has been a professor at Lund University, her alma mater, where she had graduated in 1970 with a PhD in theoretical particle physics. Jarlskog worked as a member of CERN from 1970 to 1972. In addition, she served on the CERN Scientific Policy committee from 1982 to 1988. In her remaining 6 years at CERN, she served as the Advisor to the Director General of CERN on Member States, from 1998 to 2004. Jarlskog was recognized by the Swedish Academy of Science community and was appointed as one of the 5 members of the Swedish Nobel Committee for Physics from 1989 to 2000, serving as chairman of that committee in 1999, when the prize was awarded to Gerard 't Hooft and Martinus J. G. Veltman. 
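The Jarlskog invariant described above can be made concrete with a short numerical sketch in the standard parametrization of the CKM matrix. The mixing angles and CP phase below are approximate, commonly quoted magnitudes assumed purely for illustration, not values taken from the article; the sketch only demonstrates that the imaginary part of the quartet of matrix elements and the closed-form expression agree and come out at the quoted order of magnitude.

import cmath, math

# Standard-parametrization CKM angles (radians) and CP phase; approximate
# magnitudes assumed here only for illustration.
theta12, theta23, theta13, delta = 0.227, 0.042, 0.0037, 1.2

s12, c12 = math.sin(theta12), math.cos(theta12)
s23, c23 = math.sin(theta23), math.cos(theta23)
s13, c13 = math.sin(theta13), math.cos(theta13)

V_us = s12 * c13
V_cb = s23 * c13
V_ub = s13 * cmath.exp(-1j * delta)
V_cs = c12 * c23 - s12 * s23 * s13 * cmath.exp(1j * delta)

# Rephasing-invariant CP-violation parameter: J = Im(V_us * V_cb * conj(V_ub) * conj(V_cs))
J = (V_us * V_cb * V_ub.conjugate() * V_cs.conjugate()).imag

# Closed form in the same parametrization: J = c12*c23*c13^2*s12*s23*s13*sin(delta)
J_closed = c12 * c23 * c13**2 * s12 * s23 * s13 * math.sin(delta)
print(J, J_closed)   # both come out at roughly 3e-5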
In 2023, Cecilia Jarlskog was given the EPS High Energy and Particle Physics Prize, which is awarded by the European Physical Society for outstanding contributions in experimental, theoretical or technological achievements. Jarlskog's prize was due to her "discovery of an invariant measure of CP violation in both quark and lepton sectors." Jarlskog is an Honorary Professor at three universities in China and received an honorary degree from the University College Dublin. She was also Member of the Swedish Academy of Sciences (1984), Member of the Norwegian Academy of Sciences (1987), Member of the Board of Trustees of the Nobel Foundation (1996) and Member of the Academia Europaea (2005). Books and articles Cecilia Jarlskog wrote the book, Portrait of Gunnar Källén: A Physics Shooting Star and Poet of Early Quantum Field Theory, while a member of CERN. Here she relates the accomplishments of a comparatively unknown physicist in quantum physics. Jarlskog has written many articles in her lifetime, among them are "Invariations of Lepton Mass Matrices and CP and T violation in Neutrino Oscillations", "On the Wings of Physics" and "Ambiguities Pertaining to Quark-Lepton Complementarity." External links Scientific publications of Cecilia Jarlskog on INSPIRE-HEP References 1941 births People associated with CERN Living people Lund University alumni Particle physicists Swedish physicists Theoretical physicists Members of Academia Europaea Swedish women physicists Presidents of the International Union of Pure and Applied Physics Members of the Royal Swedish Academy of Sciences
Cecilia Jarlskog
Physics
1,038
1,663,983
https://en.wikipedia.org/wiki/Rework%20%28electronics%29
In electronics, rework (or re-work) is repair or refinish of a printed circuit board (PCB) assembly, usually involving desoldering and re-soldering of surface-mounted electronic components (SMD). Mass processing techniques are not applicable to single device repair or replacement, and specialized manual techniques by expert personnel using appropriate equipment are required to replace defective components; area array packages such as ball grid array (BGA) devices particularly require expertise and appropriate tools. A hot air gun or hot air station is used to heat devices and melt solder, and specialised tools are used to pick up and position often tiny components. A rework station is a place to do this work—the tools and supplies for this work, typically on a workbench. Other kinds of rework require other tools. Reasons for rework Rework is practiced in many kinds of manufacturing when defective products are found. For electronics, defects may include: Poor solder joints because of faulty assembly or thermal cycling. Solder bridges—unwanted drops of solder that connect points that should be isolated from each other. Faulty components. Engineering parts changes, upgrades, etc. Components broken due to natural wear, physical stress or excessive current. Components damaged due to liquid ingress, leading to corrosion, weak solder joints or physical damage. Process The rework may involve several components, which must be worked on one by one without damage to surrounding parts or the PCB itself. All parts not being worked on are protected from heat and damage. Thermal stress on the electronic assembly is kept as low as possible to prevent unnecessary contractions of the board which might cause immediate or future damage. In the 21st century, almost all soldering is carried out with lead-free solder, both on manufactured assemblies and in rework, to avoid the health and environmental hazards of lead. Where this precaution is not necessary, tin-lead solder melts at a lower temperature and is easier to work with. Heating a single SMD with a hot-air gun to melt all solder joints between it and the PCB is usually the first step, followed by removing the SMD while the solder is molten. The pad array on the conductor board should then be cleaned of old solder. It is quite easy to remove these residues by heating them to melting temperature. A soldering iron or hot air gun can be used with desoldering braid. The precise placement of the new unit onto the prepared pad array requires skillful use of a highly accurate vision-alignment system with high resolution and magnification. The smaller the pitch and size of the components, the more precise working must be. Finally the newly placed SMD is soldered onto the board. Reliable solder joints are facilitated by use of a solder profile which preheats the board, heats all the connections between the unit and the PCB to the melting temperature of the solder used, then properly cools them. High quality demands or specific designs of SMDs require the precise application of solder paste before positioning and soldering the unit. The surface tension of the molten solder, which is on the board's solder pads, tends to pull the device into precise alignment with the pads if not initially positioned totally correctly. Reflowing and reballing Ball grid arrays (BGA) and chip scale packages (CSA) present special difficulties for testing and rework, as they have many small, closely spaced pads on their underside which are connected to matching pads on the PCB. 
Connecting pins are not accessible from the top for testing, and cannot be desoldered without heating the whole device to the melting point of solder. After fabrication of the BGA package, tiny balls of solder are glued to the pads on its underside; during assembly the balled package is placed on the PCB and heated to melt the solder and, all being well, to connect each pad on the device to its mate on the PCB without any extraneous solder bridging between adjacent pads. Bad connections produced during assembly can be detected and the assembly reworked (or scrapped). Imperfect connections of devices which are not themselves faulty, which work for a time and then fail, often triggered by thermal expansion and contraction at operating temperature, are not infrequent. Assemblies which fail because of bad BGA connections can be repaired either by reflowing, or by removing the device and cleaning it of solder, reballing, and replacing. Devices can be recovered from scrapped assemblies for reuse in the same way. Reflowing as a rework technique, similar to the manufacturing process of reflow soldering, involves dismantling the equipment to remove the faulty circuit board, pre-heating the whole board in an oven, heating the non-functioning component further to melt the solder, then cooling, following a carefully determined thermal profile, and reassembling, a process which is hoped will repair the bad connection without the need to remove and replace the component. This may or not resolve the problem; and there is a chance that the reflowed board will fail again after some time. For typical devices (PlayStation 3 and Xbox 360) one repair company estimates that the process, if there are no unexpected problems, takes about 80 minutes. On a forum where professional repair people discuss reflowing of laptop computer graphics chips, different contributors cite success rates (no failure within 6 months) of between 60 and 90% for reflowing with professional equipment and techniques, in equipment whose value does not justify complete reballing. Reflowing can be done non-professionally in a domestic oven or with a heat gun. While such methods can cure some problems, the outcome is likely to be less successful than is possible with accurate thermal profiling achieved by an experienced technician using professional equipment. Reballing involves dismantling, heating the chip until it can be removed from the board, typically with a hot-air gun and vacuum pickup tool, removing the device, removing solder remaining on the device and board, putting new solder balls in place, replacing the original device if there was a poor connection, or using a new one, and heating the device or board to solder it in place. The new balls can be placed via several methods, including: Using a stencil for both the balls and the solder paste or flux, Using a BGA "preform" with embedded balls corresponding to the device pattern, or Using semiautomated or fully automated machinery. For the PS3 and Xbox 360 mentioned above, the time is about 120 minutes if all goes well. Chips are at risk of being damaged by the repeated heating and cooling of reballing, and manufacturers' warranties sometimes do not cover this case. Removing solder with solder wick subjects devices to thermal stress fewer times than using a flowing solder bath. In a test twenty devices were reballed, some several times. Two failed to function, but were restored to full functionality after reballing again. One was subjected to 17 thermal cycles without failing. 
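The "carefully determined thermal profile" mentioned in the reflowing discussion above can be represented and checked programmatically. The sketch below validates a candidate lead-free reflow profile against a set of plausible limits on peak temperature, time above liquidus and ramp rate; the numeric limits and the sample profile are illustrative assumptions, not values from the article or from any particular solder-paste datasheet, which is where real limits would come from.

# Hypothetical profile: (time in seconds, temperature in Celsius) samples.
profile = [(0, 25), (60, 120), (150, 175), (210, 217), (240, 245), (270, 217), (330, 120)]

# Illustrative limits for a lead-free (SAC-type) alloy; assumed values only.
LIQUIDUS_C = 217
PEAK_MIN_C, PEAK_MAX_C = 235, 250
TAL_MIN_S, TAL_MAX_S = 30, 90        # time above liquidus
MAX_RAMP_C_PER_S = 3.0

def check_profile(samples):
    issues = []
    peak = max(t for _, t in samples)
    if not PEAK_MIN_C <= peak <= PEAK_MAX_C:
        issues.append(f"peak {peak} C outside {PEAK_MIN_C}-{PEAK_MAX_C} C")
    # Approximate time above liquidus by summing intervals whose endpoints are both above it.
    tal = sum(t2 - t1 for (t1, c1), (t2, c2) in zip(samples, samples[1:])
              if c1 >= LIQUIDUS_C and c2 >= LIQUIDUS_C)
    if not TAL_MIN_S <= tal <= TAL_MAX_S:
        issues.append(f"time above liquidus {tal} s outside {TAL_MIN_S}-{TAL_MAX_S} s")
    for (t1, c1), (t2, c2) in zip(samples, samples[1:]):
        ramp = (c2 - c1) / (t2 - t1)
        if abs(ramp) > MAX_RAMP_C_PER_S:
            issues.append(f"ramp {ramp:.1f} C/s between t={t1}s and t={t2}s exceeds limit")
    return issues or ["profile within the assumed limits"]

print(check_profile(profile))

Encoding the limits this way makes the trade-off discussed in the text explicit: a profile that heats too slowly fails the time-above-liquidus check, while one that heats too aggressively fails the ramp-rate check and risks thermal stress to the board.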
Results Properly carried out rework restores the functionality of the reworked assembly, and its subsequent lifetime should not significantly be affected. Consequently, where the cost of reworking is less than the value of the assembly, it is widely used in all sectors of the electronic industry. Manufacturer and service providers of communications-technologies, entertainment- and consumer-devices, industrial commodities, automobiles, medical technology, aerospace and other high power electronics rework when necessary. See also Reflow oven Thermal profiling References Permanent Elastomeric/Semi-Elastomeric Ball Grid Array (BGA) Stencils Electronics manufacturing
Rework (electronics)
Engineering
1,565
23,881,008
https://en.wikipedia.org/wiki/Project%20Kaisei
Project Kaisei (from 海星, kaisei, "ocean planet" in Japanese) is a scientific and commercial mission to study and clean up the Great Pacific Garbage Patch, a large body of floating plastic and marine debris trapped in the Pacific Ocean by the currents of the North Pacific Gyre. Discovered by NOAA, and publicized by Captain Charles Moore, the patch is estimated to contain 20 times the density of floating debris compared to the global average. The project aims to study the types, extent, and nature of the debris with a view to identifying the scope of the problem and its effects on the ocean biome as well as ways of capturing, detoxifying, and recycling the material. It was organized by the Ocean Voyages Institute, a California-based 501c3 non-profit organisation dealing with marine preservation. The project is based in San Francisco and Hong Kong. History Project Kaisei was started in late 2008 by Mary Crowley, owner of Ocean Voyages, Inc., a for-profit yacht brokerage, Doug Woodring, and George Orbelian, from the San Francisco Bay Area, all with many years of experience in ocean stewardship and activities. As ocean lovers, Mary being a long time sailor, George being a surfer, expert on surfboard design, Author of Essential Surfing, and carries on the work of Project Kaisei by sitting on the boards of the Walter Munk Foundation For The Oceans and the Buckminster Fuller Institute as well as connections to the Gump Research Station For Coral Reefs – Moorea, Tahiti. Doug Woodring has backgrounds in business, finance, innovative technology and media maintains his passion for open water swimming and paddling racing. Each had different contacts, networks and abilities to contribute to the group. With Doug living in Hong Kong, the group set up two points of operation on either side of the Pacific (San Francisco and Hong Kong) to bring global attention and relevant stakeholders together to stem the flow of plastic and marine debris into our ocean. Doug carries on the work with the strategic planning developed for Project Kaisei by James Gollub: The Ocean Recovery Alliance and The Plastics Disclosure Project. Project goals The project launched on 19 March 2009, with plans for an initial phase of scientific study of plastic marine debris in the North Pacific Gyre and various feasibility studies of the effects to life, size, location, depths, approaches to potential recovery and recycling technologies. The goal is to bring about a global collaboration of science, technology, and solutions, to help remove the waste and restore the health of the ocean biome. New catch methods for the debris are being studied, which would have low energy input and low marine life loss. Technologies for remediation or recycling are being evaluated, to potentially create secondary products from the waste, which in turn could help subsidize a larger scale cleanup. The project has completed two expeditions, one in the summer of 2009, and one in 2010. New data on the issue has been collected, and more research and planning need to be done in order to understand the metrics, effectiveness and costs associated with a larger scale cleanup effort. Planning is now taking place for future research and expeditions which would allow the testing of new capture technologies and equipment, as well as the demonstration of remediation or recycling technologies. 
Initial voyage In August 2009, the initial study and feasibility voyage phase of Project Kaisei began, conducted by two vessels, the 53-metre (174-foot) diesel-powered research vessel R/V New Horizon, and the project flagship, the 46 m (150 ft) tall ship Kaisei. The New Horizon, owned by the Scripps Institution of Oceanography, left San Diego on 2 August 2009 on the Scripps Environmental Accumulation of Plastic Expedition (SEAPLEX), set to last until 21 August. The SEAPLEX expedition is funded by the University of California, San Diego, the National Science Foundation with supplemental funding from Project Kaisei. Two days later the Kaisei departed San Francisco on 4 August, and was expected to undertake a 30-day voyage. The Kaisei was to investigate the size and concentration of the debris field, and explore retrieval methods, while the New Horizon would join her and study the effect of the debris field on marine life. Both vessels carried Apple iPhones outfitted with Voyage Tracker apps built by Ojingo Labs that allowed researchers to share videos and photos from the expedition in real time, an innovation that brought the world along with the researchers and resulted in Google Earth Hero recognition of the project. Intensive sampling On reaching the patch, 1,900 kilometres (1,000 nautical miles) from the Californian coast, New Horizon began intensive sampling on 9 August. The crew took samples every few hours around the clock, using nets of various sizes and collecting samples at various depths. New Horizon returned on Friday 21, August 2009. SEAPLEX reported their initial findings on Thursday 27, August 2009, declaring that the patch stretched across at least 3,100 km (1,700 nmi). Plastic was found in every one of the 100 consecutive surface samples gathered. Miriam Goldstein, chief scientist of the SEAPLEX expedition described the findings as "shocking". Speaking about the patch, Goldstein added, "There’s no island, there’s no eighth continent, it doesn’t look like a garbage dump. It looks like beautiful ocean. But then when you put the nets in the water, you see all the little pieces". Return Kaisei returned to San Francisco on the morning of Monday 31 August. OVI founder and Project Kaisei co-founder Mary Crowley stated immediately following the Kaisei expeditions that the pollution was "what we expected to see, or a little worse." Andrea Neal, principal investigator on the Kaisei speaking on Tuesday 1 September stated that "Marine debris is the new man-made epidemic. It's that serious". Kaisei and New Horizon together had conducted tests over 6,500 km (3,500 nmi) of the ocean. Initial findings from the voyages confirmed that the vast majority of the debris is small. The tiny portions of the debris field was said to be pervasive, and was found both at the surface and at numerous depths. It was also described as a "nearly inconceivable amount of tiny, confettilike pieces of broken plastic", increasing in density the further they sampled into the patch. Findings suggested that the presence of small debris, of a similar size to the existent marine life, could prove an obstacle to cleanup efforts. The research efforts also uncovered evidence of marine life consuming the microplastics. Larger debris found consisted of mainly plastic bottles, but also included shoe soles, plastic buckets, patio chairs, Styrofoam pieces, old toys and fishing vessel buoys. A significant collection of floating debris became entangled in fishing nets creating dense patches of pollution. 
Various types of marine life were found on, around, and within the tangled bundles of debris. Some of the garbage collected was put on display at the Bay Model Visitor Center in Sausalito, California. Goal The initial feasibility mission aimed to collect 40 tons of debris, using special nets designed not to catch fish, in two passes through the field. The project would later test methods of recycling the collected garbage into new plastic, or commercial products such as diesel fuel or clothing. If the initial mission proved the collection and processing technologies to be viable, it was expected that the Kaisei would lead a full scale commercial cleanup voyage with other vessels, becoming operational within 18 months. Fundraising and recognition Ocean Voyages Institute raised $500,000 for the Project Kaisei initial voyages. The SEAPLEX expedition cost $387,000, funded with $190,000 from UC Ship Funds, $140,000 from Project Kaisei and $57,000 from the National Science Foundation. Project Kaisei is also partnered with the California Department of Toxic Substances Control. The group has since been recognized by the United Nations Environment Programme (UNEP) in 2009 as a Climate Hero, by Google as a Google Earth Hero for its work with a video blogging voyage tracking system, and it was recently part of the Clinton Global Initiative in September 2010. See also Earth Day Great Canadian Shoreline Cleanup Junk raft Kamilo Beach Marine conservation Marine debris National Cleanup Day Ocean Conservancy Plastic recycling Plastiki SUPER HI-CAT The Ocean Cleanup World Cleanup Day https://www.youtube.com/watch?v=mzX1N7qseC4 References External links Project Kaisei Ocean Voyages Institute SEAPLEX – Scripps Environmental Accumulation of Plastic Expedition Research Vessel New Horizon Biological oceanography Ocean pollution Oceanography Pacific Ocean Scripps Institution of Oceanography Pacific expeditions
Project Kaisei
Physics,Chemistry,Environmental_science
1,760
5,827,634
https://en.wikipedia.org/wiki/Glucagon%20receptor
The glucagon receptor is a 62 kDa protein that is activated by glucagon and is a member of the class B family of G protein-coupled receptors (the secretin receptor family), coupled to G alpha i, Gs and, to a lesser extent, G alpha q. Stimulation of the receptor results in the activation of adenylate cyclase and phospholipase C and in increased levels of the second messengers intracellular cAMP and calcium. In humans, the glucagon receptor is encoded by the GCGR gene. Glucagon receptors are mainly expressed in the liver and kidney, with lesser amounts found in the heart, adipose tissue, spleen, thymus, adrenal glands, pancreas, cerebral cortex, and gastrointestinal tract. Signal transduction pathway The glucagon receptor, upon binding the signaling molecule glucagon, initiates a signal transduction pathway that begins with the activation of adenylate cyclase, which in turn produces cyclic AMP (cAMP). Protein kinase A, whose activation depends on the increased levels of cAMP, is responsible for the ensuing cellular response in the form of protein kinase 1 and 2. The ligand-bound glucagon receptor can also initiate a concurrent signaling pathway that is independent of cAMP by activating phospholipase C. Phospholipase C produces DAG and IP3 from PIP2, a phospholipid that phospholipase C cleaves from the plasma membrane. Intracellular Ca2+ stores release Ca2+ when their calcium channels are bound by IP3. Structure The 3D crystallographic structures of the seven transmembrane helical domain (7TM) and the extracellular domain (ECD), and an electron microscopy (EM) map of the full length glucagon receptor, have been determined. Furthermore, the structural dynamics of an active state complex of the glucagon receptor, glucagon, the receptor activity-modifying protein, and the G-protein C-terminus have been determined using a computational and experimental approach. Clinical significance A missense mutation at 17q25 in the GCGR gene is associated with diabetes mellitus type 2. Inactivating mutations of the glucagon receptor in humans cause resistance to glucagon and are associated with pancreatic alpha cell hyperplasia, nesidioblastosis, hyperglucagonemia, and pancreatic neuroendocrine tumors, a condition also known as Mahvash disease. References Further reading G protein-coupled receptors
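As a rough illustration of the cAMP arm of the signal transduction pathway described above, the following sketch integrates a toy kinetic model: glucagon occupancy of the receptor activates adenylate cyclase, cAMP is produced and degraded, and PKA activation follows the cAMP level. All rate constants, the binding isotherm and the Hill-type PKA activation are invented illustrative assumptions, not measured parameters of the real receptor.

# Toy kinetics in arbitrary units: d[cAMP]/dt = k_cat * AC_active - k_deg * cAMP
def simulate_camp(glucagon=1.0, k_half=0.5, k_cat=2.0, k_deg=0.3, dt=0.01, t_end=20.0):
    receptor_occupied = glucagon / (glucagon + k_half)   # simple binding isotherm (assumed)
    ac_active = receptor_occupied                        # assume cyclase activity tracks occupancy
    camp, t, trace = 0.0, 0.0, []
    while t < t_end:
        camp += dt * (k_cat * ac_active - k_deg * camp)  # forward Euler step
        pka_active = camp**2 / (camp**2 + 4.0**2)        # assumed Hill-type PKA activation
        trace.append((round(t, 2), camp, pka_active))
        t += dt
    return trace

trace = simulate_camp()
print(trace[-1])   # cAMP approaches k_cat*AC_active/k_deg at steady state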
Glucagon receptor
Chemistry
534
187,344
https://en.wikipedia.org/wiki/Oil%20drop%20experiment
The oil drop experiment was performed by Robert A. Millikan and Harvey Fletcher in 1909 to measure the elementary electric charge (the charge of the electron). The experiment took place in the Ryerson Physical Laboratory at the University of Chicago. Millikan received the Nobel Prize in Physics in 1923. The experiment observed tiny electrically charged droplets of oil located between two parallel metal surfaces, forming the plates of a capacitor. The plates were oriented horizontally, with one plate above the other. A mist of atomized oil drops was introduced through a small hole in the top plate and was ionized by x-rays, making them negatively charged. First, with zero applied electric field, the velocity of a falling droplet was measured. At terminal velocity, the drag force equals the gravitational force. As both forces depend on the radius in different ways, the radius of the droplet, and therefore the mass and gravitational force, could be determined (using the known density of the oil). Next, a voltage inducing an electric field was applied between the plates and adjusted until the drops were suspended in mechanical equilibrium, indicating that the electrical force and the gravitational force were in balance. Using the known electric field, Millikan and Fletcher could determine the charge on the oil droplet. By repeating the experiment for many droplets, they confirmed that the charges were all small integer multiples of a certain base value, which was found to be , about 0.6% difference from the currently accepted value of They proposed that this was the magnitude of the negative charge of a single electron. Background Starting in 1908, while a professor at the University of Chicago, Millikan, with the significant input of Fletcher, the "able assistance of Mr. J. Yinbong Lee", and after improving his setup, published his seminal study in 1913. This remains controversial since papers found after Fletcher's death describe events in which Millikan coerced Fletcher into relinquishing authorship as a condition for receiving his PhD. In return, Millikan used his influence in support of Fletcher's career at Bell Labs. Millikan and Fletcher's experiment involved measuring the force on oil droplets in a glass chamber sandwiched between two electrodes, one above and one below. With the electrical field calculated, they could measure the droplet's charge, the charge on a single electron being (). At the time of Millikan and Fletcher's oil drop experiments, the existence of subatomic particles was not universally accepted. Experimenting with cathode rays in 1897, J. J. Thomson had discovered negatively charged "corpuscles", as he called them, with a mass about 1/1837 that of a hydrogen atom. Similar results had been found by George FitzGerald and Walter Kaufmann. Most of what was then known about electricity and magnetism, however, could be explained on the basis that charge is a continuous variable; in much the same way that many of the properties of light can be explained by treating it as a continuous wave rather than as a stream of photons. The elementary charge e is one of the fundamental physical constants and thus the accuracy of the value is of great importance. In 1923, Millikan won the Nobel Prize in physics, in part because of this experiment. Thomas Edison, who had previously thought of charge as a continuous variable, became convinced after working with Millikan and Fletcher's apparatus. 
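The balancing procedure outlined above can be turned into a short worked computation: the field-off fall velocity fixes the droplet radius (and hence its weight) through Stokes' law, and the voltage at which the drop hangs stationary then gives its charge. The velocities, voltage and plate spacing below are invented round numbers used only for illustration; they are not Millikan's data, and only the relations themselves follow from the description in the text.

import math

# Assumed illustrative inputs (not Millikan's actual measurements)
eta = 1.81e-5      # viscosity of air, Pa*s
rho_oil = 920.0    # oil density, kg/m^3
rho_air = 1.2      # air density, kg/m^3
g = 9.81           # m/s^2
v_fall = 1.0e-4    # terminal fall velocity with the field off, m/s
V_bal = 400.0      # voltage at which the drop hangs stationary, volts
d = 0.01           # plate separation, m

# Field off: Stokes drag 6*pi*eta*r*v equals the apparent weight (4/3)*pi*r^3*(rho_oil - rho_air)*g,
# so the radius (and from it the weight) follows from the fall velocity alone.
r = math.sqrt(9 * eta * v_fall / (2 * (rho_oil - rho_air) * g))
w = (4.0 / 3.0) * math.pi * r**3 * (rho_oil - rho_air) * g

# Field on, drop suspended: the electric force q*E balances w, with E = V/d for parallel plates.
E = V_bal / d
q = w / E

e = 1.602176634e-19   # present-day elementary charge, for comparison
print(f"radius {r:.2e} m, charge {q:.2e} C = {q/e:.1f} elementary charges")

Repeating such a calculation for many droplets and finding that the charges cluster near integer multiples of a common value is exactly the integer-multiple pattern the experiment relied on.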
This experiment has since been repeated by generations of physics students, although it is rather expensive and difficult to conduct properly. From 1995 to 2007, several computer-automated experiments have been conducted at SLAC to search for isolated fractionally charged particles, however, no evidence for fractional charge particles has been found after measuring over 100 million drops. Experimental procedure Apparatus Millikan's and Fletcher's apparatus incorporated a parallel pair of horizontal metal plates. By applying a potential difference across the plates, a uniform electric field was created in the space between them. A ring of insulating material was used to hold the plates apart. Four holes were cut into the ring, three for illumination by a bright light, and another to allow viewing through a microscope. A fine mist of oil droplets was sprayed into a chamber above the plates. The oil was of a type usually used in vacuum apparatus and was chosen because it had an extremely low vapour pressure. Ordinary oils would evaporate under the heat of the light source causing the mass of the oil drop to change over the course of the experiment. Some oil drops became electrically charged through friction with the nozzle as they were sprayed. Alternatively, charging could be brought about by including an ionizing radiation source (such as an X-ray tube). The droplets entered the space between the plates and, because they were charged, could be made to rise and fall by changing the voltage across the plates. Method Initially the oil drops are allowed to fall between the plates with the electric field turned off. They very quickly reach a terminal velocity because of friction with the air in the chamber. The field is then turned on and, if it is large enough, some of the drops (the charged ones) will start to rise. (This is because the upwards electric force FE is greater for them than the downwards gravitational force Fg, in the same way bits of paper can be picked by a charged rubber rod). A likely looking drop is selected and kept in the middle of the field of view by alternately switching off the voltage until all the other drops have fallen. The experiment is then continued with this one drop. The drop is allowed to fall and its terminal velocity v1 in the absence of an electric field is calculated. The drag force acting on the drop can then be worked out using Stokes' law: where v1 is the terminal velocity (i.e. velocity in the absence of an electric field) of the falling drop, η is the viscosity of the air, and r is the radius of the drop. The weight w is the volume D multiplied by the density ρ and the acceleration due to gravity g. However, what is needed is the apparent weight. The apparent weight in air is the true weight minus the upthrust (which equals the weight of air displaced by the oil drop). For a perfectly spherical droplet the apparent weight can be written as: At terminal velocity the oil drop is not accelerating. Therefore, the total force acting on it must be zero and the two forces F and must cancel one another out (that is, ). This implies Once r is calculated, can easily be worked out. Now the field is turned back on, and the electric force on the drop is where q is the charge on the oil drop and E is the electric field between the plates. For parallel plates where V is the potential difference and d is the distance between the plates. One conceivable way to work out q would be to adjust V until the oil drop remained steady. Then we could equate FE with . 
Also, determining FE proves difficult because the mass of the oil drop is difficult to determine without reverting to the use of Stokes' Law. A more practical approach is to turn V up slightly so that the oil drop rises with a new terminal velocity v2. Then Comparison to modern values Effective from the 2019 revision of the SI, the value of the elementary charge is defined to be exactly . Before that, the most recent (2014) accepted value was , where the (98) indicates the uncertainty of the last two decimal places. In his Nobel lecture, Millikan gave his measurement as , which equals . The difference is less than one percent, but is six times greater than Millikan's standard error, so the disagreement is significant. Using X-ray experiments, Erik Bäcklin in 1928 found a higher value of the elementary charge, or , which is within uncertainty of the exact value. Raymond Thayer Birge, conducting a review of physical constants in 1929, stated "The investigation by Bäcklin constitutes a pioneer piece of work, and it is quite likely, as such, to contain various unsuspected sources of systematic error. If [... it is ...] weighted according to the apparent probable error [...], the weighted average will still be suspiciously high. [...] the writer has finally decided to reject the Bäcklin value, and to use the weighted mean of the remaining two values." Birge averaged Millikan's result and a different, less accurate X-ray experiment that agreed with Millikan's result. Successive X-ray experiments continued to give high results, and proposals for the discrepancy were ruled out experimentally. Sten von Friesen measured the value with a new electron diffraction method, and the oil drop experiment was redone. Both gave high numbers. By 1937 it was "quite obvious" that Millikan's value could not be maintained any longer, and the established value became or . Controversy Some controversy was raised by physicist Gerald Holton (1978) who pointed out that Millikan recorded more measurements in his journal than he included in his final results. Holton suggested these data points were omitted from the large set of oil drops measured in his experiments without apparent reason. This claim was disputed by Allan Franklin, a high energy physics experimentalist and philosopher of science at the University of Colorado. Franklin contended that Millikan's exclusions of data did not substantively affect his final value of e, but did reduce the statistical error around this estimate e. This enabled Millikan to claim that he had calculated e to better than one half of one percent; in fact, if Millikan had included all of the data he had thrown out, the standard error of the mean would have been within 2%. While this would still have resulted in Millikan having measured e better than anyone else at the time, the slightly larger uncertainty might have allowed more disagreement with his results within the physics community. While Franklin left his support for Millikan's measurement with the conclusion that concedes that Millikan may have performed "cosmetic surgery" on the data, David Goodstein investigated the original detailed notebooks kept by Millikan, concluding that Millikan plainly states here and in the reports that he included only drops that had undergone a "complete series of observations" and excluded no drops from this group of complete measurements. 
Reasons for a failure to generate a complete observation include annotations regarding the apparatus setup, oil drop production, and atmospheric effects which invalidated, in Millikan's opinion (borne out by the reduced error in this set), a given particular measurement. Millikan's experiment as an example of psychological effects in scientific methodology In a commencement address given at the California Institute of Technology (Caltech) in 1974 (and reprinted in Surely You're Joking, Mr. Feynman! in 1985 as well as in The Pleasure of Finding Things Out in 1999), physicist Richard Feynman noted: References Further reading External links Simulation of the oil drop experiment (requires JavaScript) Thomsen, Marshall, "Good to the Last Drop". Millikan Stories as "Canned" Pedagogy. Eastern Michigan University. CSR/TSGC Team, "Quark search experiment". The University of Texas at Austin. The oil drop experiment appears in a list of Science's 10 Most Beautiful Experiments , originally published in the New York Times. Engeness, T.E., "The Millikan Oil Drop Experiment". 25 April 2005. Paper by Millikan discussing modifications to his original experiment to improve its accuracy. A variation of this experiment has been suggested for the International Space Station. Physics experiments Electrostatics Foundational quantum physics 1909 in science California Institute of Technology
Oil drop experiment
Physics
2,404
29,240,014
https://en.wikipedia.org/wiki/Cobicistat
Cobicistat, sold under the brand name Tybost, is a medication for use in the treatment of human immunodeficiency virus infection (HIV/AIDS). Its major mechanism of action is through the inhibition of human CYP3A proteins. Like ritonavir (Norvir), cobicistat is of interest for its ability to inhibit liver enzymes that metabolize other medications used to treat HIV, notably elvitegravir, an HIV integrase inhibitor. By combining cobicistat with elvitegravir, higher concentrations of the latter are achieved in the body with lower dosing, theoretically enhancing elvitegravir's viral suppression while diminishing its adverse side-effects. In contrast with ritonavir, the only other booster approved for use as a part of HAART, cobicistat has no anti-HIV activity of its own. Cobicistat is a component of three four-drug, fixed-dose combination HIV treatments. The first, elvitegravir/cobicistat/emtricitabine/tenofovir disoproxil, is marketed as Stribild and was approved by the FDA in August 2012 for use in the United States. The second, elvitegravir/cobicistat/emtricitabine/tenofovir alafenamide, is marketed as Genvoya and was approved by the FDA in November 2015 for use in the United States. Both Stribild and Genvoya are owned by Gilead Sciences. The third, cobicistat/darunavir/emtricitabine/tenofovir alafenamide, is marketed as Symtuza and was FDA approved July 17, 2018 and is owned by Janssen Pharmaceuticals. Additionally, there is a fixed-dose combination of cobicistat and the protease inhibitor darunavir (darunavir/cobicistat; marketed as Prezcobix by Janssen Therapeutics), and a fixed-dose combination of cobicistat and protease inhibitor atazanavir (atazanavir/cobicistat; marketed as Evotaz by Bristol-Myers Squibb). Both Prezcobix and Evotaz were approved by the FDA in January 2015. Cobicistat is a potent inhibitor of cytochrome P450 3A enzymes, including the important CYP3A4 subtype. It also inhibits intestinal transport proteins, increasing the overall absorption of several HIV medications, including atazanavir, darunavir, and tenofovir alafenamide. Chemistry Cobicistat is a drug analogue of ritonavir, in which the valine moiety is exchanged for a 2-morpholinoethyl group, and the backbone hydroxyl group is removed. These changes effectively eliminate the anti-HIV activity of ritonavir while preserving its inhibitory effects on the CYP3A isozyme family of proteins. Cobicistat is therefore able to increase plasma concentration of other coadministered anti-HIV drugs without the risk of causing cobicistat-resistant mutations in the HIV virus. Synthesis Cobicistat may be synthesized from any number of commercially available starting materials. The synthesis shown below utilizes L-methionine and bromoacetic acid as starting materials. Discovery and development Cobicistat was developed through structure-activity relationship studies using ritonavir and desoxyritonavir as lead compounds. These studies were conducted by scientists at Gilead Sciences, and successfully optimized ritonavir into a potent CYP3A inhibitor lacking anti-HIV activity. Cobicistat shows potent, selective inhibition of the CYP3A isozyme family (IC50 0.15 μM) compared to some CYP1A and CYP2C isozymes. 
As cobicistat was discovered using structure-activity relationship studies, its CYP3A binding is still poorly understood; however, research on the protein-ligand interactions between CYP3A4 and ritonavir analogues demonstrates that CYP 3A4 residues Ile369, Ala370, Met371, as well as Arg105 and Ser119, play an important role in ritonavir analogue inhibition of CYP3A4. References CYP3A4 inhibitors Thiazoles 4-Morpholinyl compounds Carbamates Ureas Isopropyl compounds Orphan drugs
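The IC50 of 0.15 μM quoted above can be translated into an expected fractional inhibition at a given inhibitor concentration with the standard one-site model. The sketch below assumes a simple Hill equation with slope 1 and ignores substrate competition, plasma protein binding and tissue distribution, so it is a rough illustration of what an IC50 means rather than a pharmacokinetic prediction for cobicistat.

def fraction_inhibited(conc_um, ic50_um=0.15, hill=1.0):
    """One-site inhibition model: fraction of enzyme activity suppressed."""
    return conc_um**hill / (conc_um**hill + ic50_um**hill)

for c in (0.015, 0.15, 1.5, 15.0):   # inhibitor concentrations in micromolar
    print(f"{c:6.3f} uM -> {100 * fraction_inhibited(c):5.1f} % inhibition")

By construction the model gives 50% inhibition at the IC50 and about 91% at ten times that concentration, which is the qualitative behaviour exploited when a booster is dosed well above its IC50.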
Cobicistat
Chemistry
941
10,573,521
https://en.wikipedia.org/wiki/Syringaldehyde
Syringaldehyde is an organic compound that occurs in trace amounts widely in nature. Some species of insects use syringaldehyde in their chemical communication systems. Scolytus multistriatus uses it as a signal to find a host tree during oviposition. Because it contains many functional groups, it can be classified in many ways - aromatic, aldehyde, phenol. It is a colorless solid (impure samples appear yellowish) that is soluble in alcohol and polar organic solvents. Its refractive index is 1.53. Natural sources Syringaldehyde can be found naturally in the wood of spruce and maple trees. Syringaldehyde is also formed in oak barrels and extracted into whisky, which it gives spicy, smoky, hot and smoldering wood aromas. Preparation This compound may be prepared from syringol by the Duff reaction: See also Phenolic content in wine Syringol Syringic acid Acetosyringone Sinapyl alcohol Sinapinic acid Sinapaldehyde Sinapine Canolol References Insect pheromones O-methylated natural phenols Hydroxybenzaldehydes Phenol ethers
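Because the compound's functional groups are only described qualitatively above, a small cheminformatics sketch may help make them concrete. It assumes the RDKit library is installed and that the SMILES string below correctly encodes 4-hydroxy-3,5-dimethoxybenzaldehyde; the string is an assumption for illustration and should be checked against an authoritative database before being relied on.

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# SMILES assumed to encode syringaldehyde (4-hydroxy-3,5-dimethoxybenzaldehyde)
smiles = "COc1cc(C=O)cc(OC)c1O"
mol = Chem.MolFromSmiles(smiles)

print("molecular formula :", rdMolDescriptors.CalcMolFormula(mol))
print("molecular weight  :", round(Descriptors.MolWt(mol), 2))
# Substructure checks for the groups named in the text: aldehyde and phenol
print("has aldehyde      :", mol.HasSubstructMatch(Chem.MolFromSmarts("[CX3H1]=O")))
print("has phenol        :", mol.HasSubstructMatch(Chem.MolFromSmarts("c[OX2H]")))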
Syringaldehyde
Chemistry
250
50,787,899
https://en.wikipedia.org/wiki/Cortinarius%20catarracticus
Cortinarius catarracticus is a species of potentially lethal fungus in the family Cortinariaceae native to South Australia. References catarracticus Fungi described in 2004 Fungi native to Australia Fungus species
Cortinarius catarracticus
Biology
44
22,456,841
https://en.wikipedia.org/wiki/Fujitsu%20Technology%20Solutions
Fujitsu Technology Solutions GmbH (FTS) is a Munich-based information technology vendor in the so-called "EMEIA" markets: Europe, the Middle East, India and Africa. A subsidiary of Fujitsu in Tokyo, FTS was founded in 2009 when the parent firm bought out Siemens' 50% share of Fujitsu Siemens Computers. Products and services Fujitsu Technology Solutions provides a broad range of information and communications technology based products. Current Fujitsu Technology Solutions' current products and services include: Media Center ESPRIMO Q Notebooks CELSIUS LIFEBOOK Desktop PC ESPRIMO Workstation CELSIUS Tablet PC STYLISTIC Convertible PC LIFEBOOK T Industry Standard Servers PRIMERGY PRIMERGY BladeFrame Mission critical IA-64 servers PRIMEQUEST UNIX system based servers SPARC Enterprise Servers PRIMEPOWER 250, 450, 900, 1500, 2500 Storage ETERNUS S/390-compatible Mainframes S- series, SX- series Flat panel displays Operating systems SINIX: Unix variant, later renamed Reliant UNIX, available for RISC and S/390-compatible platforms BS2000: EBCDIC-based operating system for SPARC, x86 and S/390-compatible systems VM2000: EBCDIC-based hypervisor for S/390-compatible platform, capable of running multiple BS2000 and SINIX virtual machines Discontinued Fujitsu Technology Solutions' discontinued products and services include: Media Center ACTIVY Notebooks AMILO AMILO PRO ESPRIMO Mobile Liteline Mobile SCENIC Mobile Desktop PC SCALEO SCENIC AMILO DESKTOP Handheld Pocket LOOX Flat panel displays: Myrica Liquid crystal display televisions Plasma display televisions SCALEOVIEW Liquid crystal display computer monitors SCENICVIEW Liquid crystal display computer monitors Product Compliance Laboratory Fujitsu Technology Solutions operates a product compliance laboratory which is used in house and by third parties. See also List of computer system manufacturers List of Fujitsu products References Computer companies of Germany Computer hardware companies Computer systems companies Fujitsu subsidiaries companies based in Munich
Fujitsu Technology Solutions
Technology
404
3,255,966
https://en.wikipedia.org/wiki/Health%20technology
Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures, and systems developed to solve a health problem and improve quality of lives". This includes pharmaceuticals, devices, procedures, and organizational systems used in the healthcare industry, as well as computer-supported information systems. In the United States, these technologies involve standardized physical objects, as well as traditional and designed social means and methods to treat or care for patients. Development Pre-digital era During the pre-digital era, patients suffered from inefficient and faulty clinical systems, processes, and conditions. Many medical errors happened in the past due to undeveloped health technologies. Some examples of these medical errors included adverse drug events and alarm fatigue. When many alarms are repeatedly triggered or activated, especially for unimportant events, workers may become desensitized to the alarms. Healthcare professionals who have alarm fatigue may ignore an alarm believing it to be insignificant, which could lead to death and dangerous situations. With technological development, an intelligent program of integration and physiologic sense-making was developed and helped reduce the number of false alarms. Also, with greater investment in health technologies, fewer medical errors happened. Outdated paper records were replaced in many healthcare organizations by electronic health records (EHR). According to studies, this has brought many changes to healthcare. Drug administration has improved, healthcare providers can now access medical information easier, provide better treatments and faster results, and save more costs. Improvement To help promote and expand the adoption of health information technology, Congress passed the HITECH act as part of the American Recovery and Reinvestment Act of 2009. HITECH stands for Health Information Technology for Economic and Clinical Health Act. It gave the department of health and human services the authority to improve healthcare quality and efficiency through the promotion of health IT. The act provided financial incentives or penalties to organizations to motivate healthcare providers to improve healthcare. The purpose of the act was to improve quality, safety, efficiency, and ultimately to reduce health disparities. One of the main parts of the HITECH act was setting the meaningful use requirement, which required EHRs to allow for the electronic exchange of health information and to submit clinical information. The purpose of HITECH is to ensure the sharing of electronic information with patients and other clinicians are secure. HITECH also aimed to help healthcare providers have more efficient operations and reduce medical errors. The program consisted of three phases. Phase one aimed to improve healthcare quality, safety and efficiency. Phase two expanded on phase one and focused on clinical processes and ensuring the meaningful use of EHRs. Lastly, phase three focused on using Certified Electronic Health Record Technology (CEHRT) to improve health outcomes. In 2014, the implementation of electronic records in US hospitals rose from a low percentage of 10% to a high percentage of 70%. At the beginning of 2018, healthcare providers who participated in the Medicare Promoting Interoperability Program needed to report on Quality Payment Program requirements. 
The program focused more on interoperability and aimed to improve patient access to health information. Privacy of health data Phones that can track one's whereabouts, steps and more can serve as medical devices, and medical devices have much the same effect as these phones. According to one study, people were willing to share personal data for scientific advancements, although they still expressed uncertainty about who would have access to their data. People are naturally cautious about giving out sensitive personal information. Phones add an extra level of threat. Mobile devices continue to increase in popularity each year. The addition of mobile devices serving as medical devices increases the chances for an attacker to gain unauthorized information. In 2015 the Medical Access and CHIP Reauthorization Act (MACRA) was passed, pushing towards electronic health records. In the article "Health Information Technology: Integration, Patient Empowerment, and Security", K. Marvin provided multiple different polls based on people's views on different types of technology entering the medical field most answers were responded with somewhat likely and very few completely disagreed on the technology being used in medicine. Marvin discusses the maintenance required to protect medical data and technology against cyber attacks as well as providing a proper data backup system for the information. Patient Protection and Affordable Care Act (ACA) also known as Obamacare and health information technology health care is entering the digital era. Although with this development it needs to be protected. Both health information and financial information now made digital within the health industry might become a larger target for cyber-crime. Even with multiple different types of safeguards hackers somehow still find their way in so the security that is in place needs to constantly be updated to prevent these breaches. Policy With the increased use of IT systems, privacy violations were increasing rapidly due to the easier access and poor management. As such, the concern of privacy has become an important topic in healthcare. Privacy breaches happen when organizations do not protect the privacy of people's data. There are four types of privacy breaches, which include unintended disclosure by authorized personnel, intended disclosure by authorized personnel, privacy data loss or theft, and virtual hacking. It became more important to protect the privacy and security of patients' data because of the high negative impact on both individuals and organizations. Stolen personal information can be used to open credit cards or other unethical behaviors. Also, individuals have to spend a large amount of money to rectify the issue. The exposure of sensitive health information also can have negative impacts on individuals' relationships, jobs, or other personal areas. For the organization, the privacy breach can cause loss of trust, customers, legal actions, and monetary fines. HIPAA stands for the Health Insurance Portability and Accountability Act of 1996. It is a U.S. healthcare legislation to direct how patient data is used and includes two major rules which are privacy and security of data. The privacy rule protects people's rights to privacy and security rule determines how to protect people's privacy. According to the HIPAA Security Rule, it ensures that protected health information has three characteristics: confidentiality, availability, and integrity. 
Confidentiality means keeping data confidential, preventing data loss and keeping unauthorized individuals from accessing protected health information. Availability means that people who are authorized can access the systems and networks whenever and wherever the information is needed, including during events such as natural disasters. In such cases, protected health information is typically backed up onto a separate server or printed out as paper copies so that people can still access it. Lastly, integrity ensures that data are not made inaccurate or improperly modified by a badly designed system or process, protecting the permanence of patient data; inaccurate or improperly modified data can be useless or even dangerous. HIPAA also requires health organizations to implement administrative safeguards, physical safeguards, and technical safeguards to help protect the privacy of patients. Administrative safeguards typically include the security management process, security personnel, information access management, workforce training and management, and evaluation of security policies and procedures. Security management processes are an important example of administrative safeguards; they are essential for reducing the risks and vulnerabilities of a system. The processes are mostly standard operating procedures written out as training manuals, and their purpose is to educate people on how to handle protected health information properly. Physical safeguards include lock and key, card swipe, positioning of screens, confidential envelopes, and shredding of paper copies. Lock and key are a common example of physical safeguards: they limit physical access to facilities. Although simple, they can prevent individuals from stealing medical records, since access requires an actual key. Lastly, technical safeguards include access control, audit controls, integrity controls, and transmission security. The access control mechanism is a common example of a technical safeguard; it restricts access to authorized personnel. The technology involves authentication and authorization. Authentication is the proof of identity, typically handled through confidential credentials such as a username and password, while authorization is the act of determining whether a particular user is allowed to access certain data and perform activities in a system, such as adding or deleting records (a minimal code sketch of this distinction appears below, after the assessment discussion). Assessment The concept of health technology assessment (HTA) was first coined in 1967 by the U.S. Congress in response to the increasing need to address the unintended and potential consequences of health technology, along with its prominent role in society. It was further institutionalized with the establishment of the congressional Office of Technology Assessment (OTA) in 1972–1973. HTA is defined as a comprehensive form of policy research that examines short- and long-term consequences of the application of technology, including benefits, costs, and risks. Due to the broad scope of technology assessment, it requires the participation of individuals beyond scientists and health care practitioners, such as managers and even consumers. Several American organizations provide health technology assessments, including the Centers for Medicare and Medicaid Services (CMS) and the Veterans Administration through its VA Technology Assessment Program (VATAP). The models adopted by these institutions vary, although they focus on whether a medical technology being offered is therapeutically relevant.
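Returning to the technical safeguards above, the difference between authentication and authorization can be illustrated with a minimal sketch. This is purely illustrative: the user store, roles, and permission names are assumptions made for the example, not part of HIPAA or of any particular EHR product.

```python
import hashlib, hmac, os

# Hypothetical in-memory user store: username -> (salt, password hash, role).
# A real system would use a vetted password-hashing library and an audit log.
_USERS = {}

def register(username: str, password: str, role: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _USERS[username] = (salt, digest, role)

def authenticate(username: str, password: str) -> bool:
    """Authentication: prove the user is who they claim to be."""
    if username not in _USERS:
        return False
    salt, digest, _role = _USERS[username]
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, attempt)

# Hypothetical role-based permissions (the authorization policy).
_PERMISSIONS = {
    "physician": {"read_record", "add_record"},
    "billing_clerk": {"read_record"},
}

def authorize(username: str, action: str) -> bool:
    """Authorization: decide whether an authenticated user may perform an action."""
    _salt, _digest, role = _USERS[username]
    return action in _PERMISSIONS.get(role, set())

# Example: a billing clerk can authenticate but is not authorized to add records.
register("jdoe", "correct horse battery staple", "billing_clerk")
assert authenticate("jdoe", "correct horse battery staple")
assert not authorize("jdoe", "add_record")
```

The point of the sketch is only that the two checks are separate: proving an identity with a correct password (authentication) does not by itself grant the right to add or delete records (authorization).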
A study conducted in 2007 noted that the assessments still did not use formal economic analyses. Aside from its development, however, assessment in the health technology industry has been viewed as sporadic and fragmented Issues such as the determination of products that needed to be developed, cost, and access, among others, also emerged. These, some argue, need to be included in the assessment since health technology is never purely a matter of science but also of beliefs, values, and ideologies. One of the mechanisms being suggested either as an element of or an alternative to the current TAs is bioethics, which is also referred to as the "fourth-generation" evaluation framework. There are at least two dimensions to an ethical HTA. The first involves the incorporation of ethics in the methodological standards employed to assess technologies while the second is concerned with the use of ethical framework in research and judgment on the part of the researchers who produce information used in the industry. In the future The practice of medicine in the United States is currently in a major transition. This transition is due to many factors, but primarily because of the implementation and integration of health technologies into healthcare. In recent years, the widespread adoption of electronic health records (EHR) has greatly impacted healthcare. In his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age, Robert Wachter aims to inform readers about this transition. Wachter states that there will be fewer hospitals in the future, and due to the advancement of technologies, people will be more likely to go to hospitals for major surgeries or critical illness. In the future, nurse call buttons will not be needed in hospitals. Instead, robots will deliver medication, take care of patients, and administer the system. In addition, the electronic health record will look different. Healthcare providers will be able to enter the notes via speech-to-text transcriptions in real-time. Wachter stated that information will be edited collaboratively across the patient-care team to improve the quality. Also, natural language processing will be more developed to help parse out keywords. In the future, patient data will reside in the cloud, and patients as well as authorized providers and individuals will be able to access their data from any device or location. Big data analysis will constantly be improving. Artificial intelligence and machine learning will be constantly improving and developing as it receives new data. Alerts will also be more intelligent and efficient than the current systems. Medical technology Medical technology, or "medtech", encompasses a wide range of healthcare products and is used to treat diseases and medical conditions affecting humans. Such technologies are intended to improve the quality of healthcare delivered through earlier diagnosis, less invasive treatment options and reduction in hospital stays and rehabilitation times. Recent advances in medical technology have also focused on cost reduction. Medical technology may broadly include medical devices, information technology, biotech, and healthcare services. The impacts of medical technology involve social and ethical issues. For example, physicians can seek objective information from technology rather than read subjective patient reports. A major driver of the sector's growth is the consumerization of medtech. 
Supported by the widespread availability of smartphones and tablets, providers can reach a large audience at low cost, a trend that stands to be consolidated as wearable technologies spread throughout the market. In the years 2010–2015, venture funding grew 200%, allowing US$11.7 billion to flow into health tech businesses from over 30,000 investors in the space. Types of technology Medical technology has evolved into smaller portable devices, for instance, smartphones, touchscreens, tablets, laptops, digital ink, voice and face recognition and more. With this technology, innovations like electronic health records (EHR), health information exchange (HIE), the Nationwide Health Information Network (NwHIN), personal health records (PHRs), patient portals, nanomedicine, genome-based personalized medicine, the Global Positioning System (GPS), radio frequency identification (RFID), telemedicine, clinical decision support (CDS), mobile home health care and cloud computing came to exist. Medical imaging and magnetic resonance imaging (MRI) are long-used and proven medical technologies for medical research, patient review, and treatment analysis. With the advancement of imaging technologies, including faster data acquisition, higher-resolution images, and specialist automation software, the capabilities of medical imaging technology are growing and yielding better results. As imaging hardware and software evolve, patients need less contrast agent and spend less time and money. A further advancement in healthcare is electromagnetic (EM) guidance systems, used in medical procedures to allow real-time visualization and navigation for the placement of medical devices inside the human body. Examples include a neuro-navigated catheter inserted into the brain, or feeding tube placement in the stomach or small intestine, as demonstrated by the ENvue System. ENvue is an advanced electromagnetic navigation system for enteral feeding tube placement. The system uses a field generator and several EM sensors, enabling proper scaling of the display to the patient's body contour and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure correct placement and avoid placing the tube in the lungs. 3D printing is another major development in healthcare. It can be used to produce specialized splints, prostheses, parts for medical devices and inert implants. The end goal of 3D printing is being able to print out customized replaceable body parts. The following section explains more about 3D printing in healthcare. New types of technologies also include artificial intelligence and robots. 3D printing 3D printing is the use of specialized machines, software programs and materials to automate the process of building certain objects. It is growing rapidly in prosthetics, medical implants, novel drug formulations and the bioprinting of human tissues and organs. Companies such as Surgical Theater provide new technology that is capable of capturing 3D virtual images of patients' brains to use as practice for operations. 3D printing allows medical companies to produce prototypes, created with artificial tissue, to practice on before an operation. 3D printing technologies suit bio-medicine because the materials used allow fabrication with control over many design features.
3D printing also has the benefits of affordable customization, more efficient designs, and saving more time. 3D printing is precise to design pills to house several drugs due to different release times. The technology allows the pills to transport to the targeted area and degrade safely in the body. As such, pills can be designed more efficiently and conveniently. In the future, doctors might be giving a digital file of printing instructions instead of a prescription. Besides, 3D printing will be more useful in medical implants. An example includes a surgical team that has designed a tracheal splint made by 3D printing to improve the respiration of a patient. This example shows the potential of 3D printing, which allows physicians to develop new implant and instrument designs easily. Overall, in the future of medicine, 3D printing will be crucial as it can be used in surgical planning, artificial and prosthetic devices, drugs, and medical implants. Artificial intelligence The scale and capabilities of artificial intelligence (AI) systems are growing rapidly, notably due to advances in big data. In healthcare, it is expected to provide easier accessibility of information, and to improve treatments while reducing cost. The integration of AI in healthcare tends to improve the quality and efficiency of complex tasks. Risks related to AI include the potential lack of accuracy, and privacy concerns related to the collected data. Delegating decisions to AI systems may also undermine accountability. Moreover, AI systems sometimes learn undesired behaviors from their training data. For example, an AI trained to detect skin diseases was found to have a strong tendency to classify images containing a ruler as cancerous, since pictures of malignancies typically include a ruler to show the scale. Applications AI brings many benefits to the healthcare industry. AI helps to detect diseases, administer chronic conditions, deliver health services, and discover the drug. Furthermore, AI has the potential to address important health challenges. In healthcare organizations, AI is able to plan and relocate resources. AI is able to match patients with healthcare providers that meet their needs. AI also helps improve the healthcare experience by using an app to identify patients' anxieties. In medical research, AI helps to analyze and evaluate the patterns and complex data. For instance, AI is important in drug discovery because it can search relevant studies and analyze different kinds of data. In clinical care, AI helps to detect diseases, analyze clinical data, publications, and guidelines. As such, AI aids to find the best treatments for the patients. Other uses of AI in clinical care include medical imaging, echocardiography, screening, and surgery. The ability of AlphaFold to predict how proteins fold also significantly accelerated medical research. Education Medical virtual reality provides doctors multiple surgical scenarios that could happen and allows them to practice and prepare themselves for these situations. It also permits medical students a hands-on experience of different procedures without the consequences of making potential mistakes. ORamaVR is one of the leading companies that employ such medical virtual reality technologies to transform medical education (knowledge) and training (skills) to improve patient outcomes, reduce surgical errors and training time and democratize medical education and training. Robots Modern robotics have made huge progress and contribution to healthcare. 
Robots can help doctors perform a variety of tasks. Robotics adoption is increasing rapidly in hospitals. The following are different ways to improve healthcare by using robots: Surgical robots allow a surgeon to bend and rotate tissue in a more flexible and efficient way. Such a system is equipped with a 3D magnification vision system and translates the surgeon's hand movements into precise motions in order to perform surgery with minimal incisions. Other robotic systems can help diagnose and treat cancers. Many scientists began working on creating a next-generation robot system to assist the surgeon in performing knee and other bone replacement surgeries. Assistant robots will also be important to help reduce the workload for regular medical staff. They can help nurses with simple and time-consuming tasks like carrying multiple racks of medicines, lab specimens or other sensitive materials. In the near future, robotic pills are expected to reduce the number of surgeries. They can be moved inside a patient and delivered to the desired area. In addition, they can conduct biopsies, film the area and clear clogged arteries. Overall, medical robots are extremely useful in assisting physicians; however, it takes time for staff to be professionally trained to work with medical robots and for the robots to respond reliably to a clinician's instructions. As such, many researchers and startups are working to provide solutions to these challenges. Assistive technologies Assistive technologies are products designed to provide accessibility to individuals who have physical or cognitive problems or disabilities. They aim to improve users' quality of life. The range of assistive technologies is broad, from low-tech solutions and physical hardware to technical devices. There are four areas of assistive technologies, addressing visual impairment, hearing impairment, physical limitations, and cognitive limitations. There are many benefits of assistive technologies. They enable individuals to care for themselves, work, study, access information easily, improve independence and communication, and participate fully in community life. Consumer-driven healthcare software As part of an ongoing trend towards consumer-driven healthcare, websites and apps that provide more information on health care quality and price to help patients choose their providers have grown in number. As of 2017, the sites with the largest numbers of reviews, in descending order, included Healthgrades, Vitals.com, and RateMDs.com. Yelp, Google, and Facebook also host reviews and receive a large amount of traffic, although as of 2017 they had fewer medical reviews per doctor. Disputes around online reviews can lead to websites by health professionals alleging defamation. In 2018 Vitals.com was purchased by WebMD, which is owned by Internet Brands. Patient safety organizations and government programs which have historically assessed quality have made their data more accessible over the internet; notable examples include Hospital Compare by CMS and the Leapfrog Group's hospitalsafetygrade.org. Patient-oriented software may also help in other ways, including general education and appointments. Disclosure of legal disputes, including medical license complaints or malpractice lawsuits, has also been made easier. Every state discloses license status and at least some disciplinary action to the public, but as of 2018, this was not accessible via the internet for a few states.
Consumers can look up medical licenses in a national database, DocInfo.org, maintained by the medical licensing organizations which contains limited details. Other tools include DocFinder at docfinder.docboard.org and certificationmatters.org from the American Board of Medical Specialties. In some cases more information is available from a mailed or walk-in request than the internet; for example, the Medical Board of California removes dismissed accusations from website profiles, but these are still available from a written or walk-in request, or a lookup in a separate database. The trend to disclosure is controversial and generate significant public debate, particularly about opening up the National Practitioner Data Bank. In 1996, Massachusetts became the first state to require detailed disclosure of malpractice claims. Self-monitoring Smartphones, tablets, and wearable computers have allowed people to monitor their health. These devices run numerous applications that are designed to provide simple health services and the monitoring of one's health with finding as critical problems to health as possible. An example of this is Fitbit, a fitness tracker that is worn on the user's wrist. This wearable technology allows people to track their steps, heart rate, floors climbed, miles walked, active minutes, and even sleep patterns. The data collected and analyzed allow users not just to keep track of their health but also help manage it, particularly through its capability to identify health risk factors. There is also the case of the Internet, which serves as a repository of information and expert content that can be used to "self-diagnose" instead of going to their doctor. For instance, one need only enumerate symptoms as search parameters at Google and the search engine could identify the illness from the list of contents uploaded to the World Wide Web, particularly those provided by expert/medical sources. These advances may eventually have some effect on doctor visits from patients and change the role of the health professionals from "gatekeeper to secondary care to facilitator of information interpretation and decision-making." Apart from basic services provided by Google in Search, there are also companies such as WebMD that already offer dedicated symptom-checking apps. Technology testing All medical equipment introduced commercially must meet both United States and international regulations. The devices are tested on their material, effects on the human body, all components including devices that have other devices included with them, and the mechanical aspects. The Medical Device User Fee and Modernization Act of 2002 was created to speed up the FDA's approval process of medical technology by introducing sponsor user fees for a faster review time with predetermined performance targets for review time. In addition, 36 devices and apps were approved by the FDA in 2016. Careers There are numerous careers in health technology in the US. Listed below are some job titles and average salaries. Athletic trainer, mean salary: $41,340. Athletic trainers treat athletes and other individuals who have sustained injuries. They also teach people how to prevent injuries. They perform their job under the supervision of physicians. Dental hygienist, mean salary: $67,340. Dental hygienists provide preventive dental care and teach patients how to maintain good oral health. They usually work under dentists' supervision. Clinical laboratory scientists, technicians, and technologists, mean salary: $51,770. 
Lab technicians and technologists perform laboratory tests and procedures. Technicians work under the supervision of a laboratory technologist or laboratory manager. Nuclear medicine technologist, mean salary: $67,910. Nuclear medicine technologists prepare and administer radiopharmaceuticals, radioactive drugs, to patients to treat or diagnose diseases. Pharmacy technician, mean salary: $28,070. Pharmacy technicians assist pharmacists with the preparation of prescription medications for customers. Allied professions The term medical technology may also refer to the duties performed by clinical laboratory professionals or medical technologists in various settings within the public and private sectors. The work of these professionals encompasses clinical applications of chemistry, genetics, hematology, immunohematology (blood banking), immunology, microbiology, serology, urinalysis, and miscellaneous body fluid analysis. Depending on location, educational level, and certifying body, these professionals may be referred to as biomedical scientists, medical laboratory scientists (MLS), medical technologists (MT), medical laboratory technologists and medical laboratory technicians. References Health care occupations Biomedical engineering United States
Health technology
Engineering,Biology
5,323
39,758,449
https://en.wikipedia.org/wiki/List%20of%20structural%20engineering%20companies
The following is a list of notable structural engineering companies. Only companies with a Wikipedia article should be included in the list. Many of the companies included in this list do not practice only structural engineering, but may also be involved in civil engineering, architecture, and other related practices. See also list of structural engineers and lists of engineers. A AKT II Arup Group Aurecon D Dar Al-Handasah E Expedition Engineering Exponent G Geiger Engineers GHD Group H HDR L LeMessurier Consultants Louis Berger Group M Magnusson Klemencic Associates Miyamoto International Mott MacDonald P Popp & Asociații R Rutherford + Chekene S Severud Associates Structuretech Engineering PC. Simpson Gumpertz & Heger Inc. Skidmore, Owings & Merrill T Thornton Tomasetti W Walter P Moore Weidlinger Associates Whitbybird Wiss, Janney, Elstner Associates, Inc. WSP Global WSP USA References Engineering consulting firms Companies
List of structural engineering companies
Engineering
206
768,379
https://en.wikipedia.org/wiki/Market%20Square%20Arena
Market Square Arena (MSA) was an indoor arena in Indianapolis. Completed in 1974, at a cost of $23 million, it seated 16,530 for basketball and 15,993 for ice hockey. Seating capacity for concerts and other events was adjusted by the use of large curtains which sealed off the upper rows. The arena closed down in 1999 and was demolished two years later. History In the late 1960s, the city of Indianapolis studied several market areas of the city for future development and revitalization. Students from the fourth-year design studio class at Ball State University College of Architecture and Planning met with the City of Indianapolis to review and select 20–26 projects for consideration. Students Joseph Mynhier and Terry Pastorino selected downtown Indianapolis as their market and designed what would become Market Square Arena. The design envisioned by Mynhier and Pastorino was later selected and used as a promotional tool by the City of Indianapolis for construction of the stadium. The city selected four architectural firms to complete the arena design with two representatives from each of the four companies. Terry Pastorino, who had worked for the firm of Kennedy, Brown & Trueblood during the summer of 1970 on the project, later joined the firm working on the arena. The original student design included a four-story office building covering two city blocks. As constructed, the arena consisted of a unique space frame design spanning Market Street. The playing floor was elevated over Market Street by twin 1400-space parking garages on each side of Market Street. Market Street, which already was physically terminated on the west by the Indiana Statehouse, was visually terminated on the east by the arena. The final design eventually took up one city block spanning Market Street. The arena was built using a $16 million contribution from the city of Indianapolis. Scoreboard Market Square Arena's original center-hung scoreboard was an American Sign & Indicator scoreboard with monochrome matrix screens, similar to those which would be installed at Arizona Veterans Memorial Coliseum and Joe Louis Arena. Its replacement, another American Sign & Indicator model, but with a color matrix screen on each of its four sides, was installed in time for the 1985 NBA All-Star Game held in the city and remained at the arena for the rest of its life, where it would later be complemented by front-projection video screens on each end of the arena. Demolition The Pacers moved to the new Conseco Fieldhouse, now Gainbridge Fieldhouse, for the 1999–2000 NBA season, and Market Square Arena was demolished on July 8, 2001, in a multimillion-dollar implosion performed by Controlled Demolition, Inc. It only took 12 seconds to demolish the arena completely. The arena's basketball floor remains preserved and housed nearby within the National Institute for Fitness and Sport in White River State Park. The site of the former arena was a parking lot for over a decade. The parking lot held a memorial to Elvis Presley, who played his final concert at MSA on June 26, 1977. The memorial was designed and built by Alan Clough. In January 2017, Cummins opened its Global Distribution Headquarters on the southern half of the site. A 28-story apartment building named 360 Market Square and containing a Whole Foods Market store opened in March 2018 on the northern half. The Elvis memorial was then placed on the sidewalk adjacent to 360 Market Square. 
Capacity Events Market Square Arena was best known as the home of the Indiana Pacers of the American Basketball Association and National Basketball Association from 1974 to 1999. The first Pacers basketball game held in the arena was a preseason game against the Milwaukee Bucks; the total attendance was 16,929. The first regular-season ABA game in the arena was held on October 18, 1974, against the San Antonio Spurs; the Pacers lost in double overtime, 129–121 in front of 7,473 fans. The first Pacers victory in Market Square Arena came on October 23 with a 122–107 win over the Spirits of St. Louis. The 1974–75 season ended for the Pacers with the ABA Finals played in Market Square Arena and Freedom Hall against their archrivals, the Kentucky Colonels. The Colonels defeated the Pacers in that championship series, winning the ABA title in five games (4 wins to 1). The 1975–76 Pacers won their final home ABA game in Market Square Arena with a 109–95 victory against the Colonels. (Kentucky won the next game by one point to win the series and advance, ending the Pacers' ABA tenure.) The Pacers continued to play at Market Square Arena after they joined the NBA in 1976; their first game at the arena as an NBA team was a 129–122 overtime loss to the Boston Celtics on October 21. Michael Jordan's return to the Chicago Bulls after his first retirement took place at Market Square Arena in a loss to the Pacers on March 19, 1995. The arena also hosted the 1980 NCAA men's basketball Final Four, which was won by the University of Louisville. and the Midwestern Collegiate Conference (now Horizon League) men's basketball conference tournament from 1986 to 1988 and again in 1993. In 1987, Indianapolis hosted the Pan American Games, and all of its basketball games were held at Market Square. The gold-medal game pitted Brazil against the United States. The U.S. team of college players featured two All-Americans in David Robinson and Danny Manning, two Final Four MVPs in Pervis Ellison and Keith Smart, and several other future NBA players. The U.S. team led 68–54 at halftime, but Oscar Schmidt led Brazil to a stunning comeback, finishing with 46 points as Brazil won 120–115. Market Square Arena was the primary concert venue for virtually all national and international musical acts visiting Indiana until its demolition in 2001. While many concerts moved to the Deer Creek Music Center amphitheater during summer months after that venue opened in 1989, Market Square remained the primary concert venue for large acts visiting the city of Indianapolis. Market Square hosted acts from Elvis Presley, Frank Sinatra, Eric Clapton, Kenny Rogers, Deep Purple, Cheap Trick, Grateful Dead, KISS, Metallica, and several Black Expo performances. Other events held at Market Square included circuses, Ice Capades, monster truck shows, indoor motocross racing, and rodeos. Market Square Arena was also the home of the Indianapolis Racers of the WHA from 1974 to 1979. 17-year-old Wayne Gretzky starred for the Racers in his first professional action before being traded to the Edmonton Oilers after a handful of games. As part of a series of 24 NHL games held in non-league markets during the 1992-93 season, the arena hosted its first NHL game on November 3, 1992, a matchup between the Chicago Blackhawks and the Washington Capitals. Notable events The first event held at the arena was a Glen Campbell concert on September 15, 1974. 
Elvis Presley performed his final live concert here, in front of 18,000 people, on June 26, 1977, seven weeks before his death on August 16. The Bee Gees performed here on July 26, 1979, as part of their Spirits Having Flown Tour. Billy Graham's 1980 Indiana Crusade, featuring a young Bill Gaither as one of the musical guests, was the most attended event at the arena in its history. Jimmy Swaggart held a crusade in the arena in July 1988. The arena was filled to capacity over the weekend of July 22–24 for each session. Pat Benatar performed during her Precious Time tour on August 19, 1981. Mötley Crüe filmed their music video for "Wild Side" here on July 18, 1987, which contains footage of drummer Tommy Lee's spinning drum cage. The video for John Mellencamp's song "Check It Out" was filmed during a concert at the arena on December 11, 1987. Andre The Giant won the WWF Heavyweight Championship, ending Hulk Hogan's first reign. This was televised live on NBC's The Main Event I on February 5, 1988. On July 28, 1988 Run-DMC performed their Run's House Tour at the arena with EPMD, Public Enemy, and DJ Jazzy Jeff & the Fresh Prince. On June 15, 1989, Eazy-E and N.W.A. performed as part of their "Eazy Duz It/Straight Outta Compton" Tour, The D.O.C., Kid 'n Play, J. J. Fad, Too Short, and Kwamé. On November 30, 1990, Ice Cube and Too Short bought their Straight from the Underground tour to Market Square Arena with Yo-Yo, Poor Righteous Teachers, D-Nice, and Kid Rock. Troy Dixon, better known as Trouble T. Roy, was killed when he accidentally fell from the exit ramp to the ground two stories below following his performance with Heavy D and the Boyz on July 15, 1990. Hosted an Event In Your House 11: Buried Alive on October 20, 1996 Michael Jackson performed two consecutive sold–out shows at Market Square Arena, during his Bad World Tour on March 18–19, 1988. KISS' show on November 28, 1992, was recorded and released as a live album, entitled Alive III. Wayne Gretzky first skated out on the ice to start his pro hockey career here. Michael Jordan made his first comeback from retirement at Market Square Arena on March 19, 1995; the Pacers defeated the Bulls in overtime in what was the most-watched NBA game on television in 20 years. See also List of American Basketball Association arenas References External links Video of MSA's demolition – Video Collection of the demolition by The Indianapolis Star Lost Indiana profile of Market Square Arena 1974 establishments in Indiana 1999 disestablishments in Indiana American Basketball Association venues Defunct college basketball venues in the United States Defunct indoor arenas in Indiana Defunct ice hockey venues in the United States Defunct indoor soccer venues in the United States Former NBA venues Demolished music venues in Indiana Demolished sports venues in Indiana IU Indy Jaguars men's basketball Sports venues completed in 1974 Sports venues demolished in 2001 Sports venues in Indianapolis World Hockey Association venues Ice hockey venues in Indiana Buildings and structures demolished by controlled implosion Indiana Pacers
Market Square Arena
Engineering
2,073
9,026,175
https://en.wikipedia.org/wiki/Camera%20Link
Camera Link is a serial communication protocol standard designed for camera interface applications based on the National Semiconductor interface Channel-link. It was designed for the purpose of standardizing scientific and industrial video products including cameras, cables and frame grabbers. The standard is maintained and administered by the Automated Imaging Association or AIA, the global machine vision industry's trade group. Transmission protocol Camera Link uses one to three Channel-link transceiver chips with four links at 7 serial bits each. At a minimum, Camera Link uses 28 bits to represent up to 24 bits of pixel data and 3 bits for video sync signals, leaving one spare bit. The video sync bits are Data Valid, Frame Valid, and Line Valid. The data are serialized 7:1, and the four data streams and a dedicated clock are driven over five LVDS pairs. The receiver accepts the four LVDS data streams and LVDS clock, and then drives the 28 bits and a clock to the board. The camera link standard calls for these 28 bits to be transmitted over 4 serialized differential pairs with a serialization factor of 7. The parallel data clock is transmitted with the data. Typically a 7× clock must be generated by a PLL or SERDES block in order to transmit or receive the serialized video. To deserialize the data, a shift register and counter may be employed. The shift register catches each of the serialized bits, one at a time, then registers the data out into the parallel clock domain - once the data counter has reached its terminal value. Variants Camera Link comes in several variants which differ in the amount of data that can be transferred. Some of them require two cables for transmission. Base configuration The "Base" Camera Link configuration carries signals over a single connector/cable. The cable used is a MDR ("Mini D Ribbon") 26-pin Male Plug Connector, optimized by 3M for the LVDS signal. In addition to the 5 LVDS pairs transmitting the serialized video data (24 bits of data and 4 framing/enable bits), the connector also carries 4 LVDS discrete control signals and 2 LVDS asynchronous serial communication channels for communicating with the camera. At the maximum chipset operating frequency (85 MHz), the base configuration yields a video data throughput of 2.04 Gbit/s (255 MB/s). Medium/Full configuration The Camera Link specification includes higher-bandwidth configurations that provide additional video data paths over a second connector/cable. The "Medium" configuration doubles the video bandwidth, adding 24 bits of data and the same 4 framing/enable bits present in the "Base" configuration. This yields a 48-bit wide video data path capable of throughput up to 4.08 Gbit/s (510 MB/s). The "Full" configuration adds another 16-bits to the data path, resulting in a 64-bit wide video path that can carry 5.44Gbit/s (680 MB/s). Deca configuration Some camera and data acquisition hardware manufacturers have extended the bandwidth of the interface beyond the limits imposed by the Camera Link interface specification. These formats extend the width of the "Full" configuration by utilizing 8 unused bits and reassigning the 8 redundant framing/enable bits to produce a data path width of up to 80 bits over two connectors/cables, which further increases the video bandwidth. A consensus has emerged in the industry about the 80-bit variant, and compatible cameras and frame grabbers are marketed with the term "Camera Link Deca". 
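The throughput figures above follow directly from the pixel clock and the payload width, and the 7:1 serialization can be undone with a shift register and counter as described. The sketch below (plain Python, purely illustrative) checks the bandwidth arithmetic and models a serialization round trip; note that the simple lane-major bit ordering used here is an assumption made for clarity, since the standard's actual bit-to-pair assignment is permuted, as discussed in the bit-assignment section below.

```python
# Throughput check: payload bits per pixel clock x clock rate.
CLOCK_HZ = 85e6  # maximum chipset pixel clock mentioned above
for name, payload_bits in [("Base", 24), ("Medium", 48), ("Full", 64), ("Deca", 80)]:
    bits_per_s = payload_bits * CLOCK_HZ
    print(f"{name:6s}: {bits_per_s / 1e9:.2f} Gbit/s = {bits_per_s / 8e6:.0f} MB/s")
# Prints 2.04/255, 4.08/510, 5.44/680 and 6.80/850 for the four configurations.

def serialize_7to1(words_28bit):
    """Model one direction of the 7:1 serialization: each 28-bit parallel word
    is sent as 7 consecutive bits on each of the 4 data pairs (simplified order)."""
    streams = [[] for _ in range(4)]
    for word in words_28bit:
        for lane in range(4):
            for bit in range(7):
                streams[lane].append((word >> (lane * 7 + bit)) & 1)
    return streams

def deserialize_7to1(streams):
    """Shift-register-and-counter model of the receiver: collect 7 serial bits
    per lane, then register them out as one 28-bit parallel word."""
    words = []
    n_words = len(streams[0]) // 7
    for i in range(n_words):
        word = 0
        for lane in range(4):
            shift_reg = streams[lane][i * 7:(i + 1) * 7]   # 7-bit shift register
            for bit_pos, bit in enumerate(shift_reg):      # counter 0..6
                word |= bit << (lane * 7 + bit_pos)
        words.append(word)
    return words

# Round-trip check on a few arbitrary 28-bit words.
test = [0x0ABCDEF, 0x1234567, 0x0FFFFFF]
assert deserialize_7to1(serialize_7to1(test)) == test
```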
However, some manufacturers use the term "Extended Full" to refer to the Deca configuration, and still others keep using the term Camera Link Full when referring to Full Deca. The 80-bit video path can carry 6.8 Gbit/s (850 MB/s). Signal timing The image below shows the relative signal timing of the clock and one data line of one of the Channel Link transceivers used for Camera Link transmissions. Data words start in the middle of the high phase of the clock, and the most significant bit is transmitted first. Bit assignments The bits of pixel values are not assigned to serial transmitters in order, but are permuted in a complicated way, as shown in the following figure. The figure labels the Camera Link data bits consecutively and includes 8 additional bits not part of the Camera Link Full specification. (The Camera Link standard divides the data bits into eight 8-bit ports denoted by letter-number combinations, but uses the same letter-number combinations for color channels that do not always correspond one-to-one, making this notation ambiguous.) The upper half of this figure is only relevant for the Medium and Full configurations, which require two physical interfaces and two cables. The two rectangles in the middle represent the cables, with the connector pins of each signal printed at either side. To the left of the transceivers, the list of pixel data bits transmitted over that Channel Link is printed, from LSB to MSB. The characters L, F and D refer to the Line Sync, Frame Sync and Data Valid bits, respectively. The underscore represents an unused spare bit. It remains to be said how pixel data bits are assigned to the bits 0 to 71 used in the figure. For grey-scale pixels, this is a trivial one-to-one mapping; for colour pixels with a multiple of 8 bits per colour, the colours are simply concatenated in the order red, green and blue (from LSB to MSB). For 12-bit RGB data, the lower 8 bits of each colour are assigned to data bits 0–7, 16–23, and 32–39; the higher 4 bits of each colour to bits 8–11, 12–15, and 40–43. Cables and connectors The standard prescribes 26-pin Miniature Delta Ribbon connectors (MDR-26) for use with Camera Link; the shrunk variant SDR-26 is allowed since standard version 1.2. The connector pin assignments are shown in the large figure in the previous section. (The full connector pinout table is not reproduced here.) Matching differential pairs are deliberately located at opposite sides of the connector, and at different connector sides at the different ends of the cable. This prevents skew due to the connector being mounted perpendicularly on a PCB. Camera Link cables are shielded twisted pair cables. The standard specifies that differential pairs must be individually shielded, and the cable as a whole must have two shields. Some companies save costs by not shielding the two serial interface signal pairs, which carry slower signals than the camera data; these cables have one camera end and one grabber end and may not be reversed, and cannot be used as a second cable in a Medium or Full configuration. Interface Standard Specifications The Camera Link standard is maintained by the AIA. The introduction of the Camera Link Interface Standard (1.0) was released in October 2000. Revision 1.1 was adopted in January 2004, with expanded software function support. The standard committee adopted version 1.2 in January 2007, introducing mini SDR ("Shrunk D Ribbon") connectors (SDR-26) and power over Camera Link (POCL).
Annex D of revision 1.2 adds mechanical and electrical descriptions to the standard, especially cable performance. Annex E of revision 1.2 lists requirements of POCL equipment. Camera Link 2.0 was released in November 2011. See also Channel Link Automated Imaging Association GigE vision List of device bandwidths Low-voltage differential signaling (LVDS) CoaXPress Camera Serial Interface Notes External links Automated Imaging Association, the body responsible for the Camera Link standard Camera Link standard V1.0, October 2000 Camera Link HS, Camera Link High Speed Standard w/ support for fiber cabling and up to 16,000 MB/s Machine vision Computer buses Digital display connectors
Camera Link
Engineering
1,612
78,076,558
https://en.wikipedia.org/wiki/Rat%20Guard
A rat guard is a device used to prevent rats from boarding ships or entering buildings via ropes, cables, or wires. These guards are typically conical or disc-shaped and are designed to stop rats from climbing by creating a physical barrier they cannot pass. Rat guards are essential in maritime environments and areas where rats pose a threat to goods, vessels, and infrastructure. In summary, rat guards serve as a crucial tool in protecting vessels, storage facilities, and offshore installations from rats, preventing potential damage and health risks caused by these pests. Design and Function Rat guards work by creating an obstacle that prevents rats from climbing along lines such as mooring ropes or cables. The guards are generally made from metal or durable plastic and feature a smooth, sloped surface that rats cannot grip or climb. When installed correctly, the shape of the guard causes rats attempting to climb to slip off, preventing them from reaching the other side. The rat guards are positioned on ropes or cables a few feet away from the dock or vessel to stop rats from bypassing the guard by jumping across. These devices are particularly effective because they exploit the rats' inability to navigate the smooth, conical surface of the guard. Common Uses Rat guards are primarily used in maritime and industrial settings where rats pose a risk of infestation or damage. Common places where rat guards are used include: Ships and Ports: Rat guards are installed on mooring lines to prevent rats from climbing aboard vessels when docked. This helps protect cargo, reduce the risk of disease, and prevent damage to the ship's infrastructure. Docks and Harbors: Ports often have large rat populations due to abundant food and shelter. Rat guards are placed on mooring lines to block rats from accessing docked vessels. Marinas: Pleasure boats and yachts moored at marinas also use rat guards to avoid rodent infestations. Warehouses and Storage Facilities: Warehouses near ports or in areas with high rat activity may use rat guards on cables or wires to prevent rats from entering the facility. Fishing Boats: Fishing vessels use rat guards to protect food supplies and maintain the cleanliness of the catch. Oil Platforms and Offshore Installations: Offshore platforms and oil rigs use rat guards on mooring lines to prevent rats from boarding via supply vessels. Military Vessels: Naval ships use rat guards to protect against rodent infestation while docked. Materials and Installation Rat guards are commonly made from steel, aluminum, or strong plastic materials to ensure durability and to prevent rats from chewing through them. The guards are installed a short distance away from where the cable connects to the ship, dock, or building to ensure rats cannot bypass the guard. See also Pandemic References Pest control Mammal pest control
Rat Guard
Biology
544
46,965,308
https://en.wikipedia.org/wiki/Clean%20Water%20Rule
The Clean Water Rule is a 2015 regulation published by the U.S. Environmental Protection Agency (EPA) and the United States Army Corps of Engineers (USACE) to clarify water resource management in the United States under a provision of the Clean Water Act of 1972. The regulation defined the scope of federal water protection in a more consistent manner, particularly over streams and wetlands which have a significant hydrological and ecological connection to traditional navigable waters, interstate waters, and territorial seas. It is also referred to as the Waters of the United States (WOTUS) rule, which defines all bodies of water that fall under U.S. federal jurisdiction. The rule was published in response to concerns about lack of clarity over the act's scope from legislators at multiple levels, industry members, researchers and other science professionals, activists, and citizens. The rule was contested in litigation. In 2017 the Trump administration announced its intent to review and rescind or revise the rule. A Supreme Court ruling on January 22, 2018 returned the rule's nationwide authority after the rule was decided to be illegal by a lower court. It gave back jurisdiction previously complicated by decisions from the circuit courts of appeals. Two weeks later, the Trump administration formally suspended the rule until February 6, 2020. The Trump administration formally repealed the WOTUS rule on September 12, 2019 and published a replacement rule on April 21, 2020. On August 30, 2021, the United States District Court for the District of Arizona threw out the 2020 replacement rule. USACE and EPA published a revised definition of WOTUS on January 18, 2023, restoring the pre-2015 regulations on the scope of federal jurisdiction over waterways, effective March 20, 2023. On May 25, 2023, the United States Supreme Court ruled in the case Sackett v. Environmental Protection Agency that only wetlands and permanent bodies of water with a "continuous surface connection" to "traditional interstate navigable waters" are covered by the Clean Water Act, narrowing the application of the Clean Water Rule. Key provisions of the rescinded 2015 rule The 2015 rule ensures that Clean Water Act (CWA) programs are more precisely defined and intends to save time and avoid costs and confusion in future implementation of the act. The rule intends to make it is easier to predict what action(s) will be taken by the EPA and what processes companies and other stakeholders may have to undergo for projects and permitting. There are no direct changes to the law under the Clean Water Rule. After analysis, the EPA and Department of the Army found that higher instance of water coverage would produce a 2:1 ratio of benefits to costs in implementation after the final rule. Implementation of the rule will discern any implications for environmental justice communities, though it is clear that "meaningful involvement from minority, low-income, and indigenous populations, as well as other stakeholders, has been a cornerstone of development of the final rule." Specific details that have been clarified by the rule are outlined below. Defines more clearly the tributaries and adjacent waters that are under federal jurisdiction and explains how they are covered A tributary, or upstream water, must show physical features of flowing water – a bed, bank, and ordinary high water mark – to warrant protection. The rule provides protection for headwaters that have these features and have a significant connection to downstream waters. 
Adjacent waters are defined by three qualifying circumstances established by the rule. These can include wetlands, ponds, impoundments, and lakes which can impact the chemical, biological or physical integrity of neighboring waters. Carries over existing exclusions from the Clean Water Act All existing exclusions from longstanding agency practices are officially established for the first time. Waters used in normal agricultural, ranching, or silvicultural activities, as well as certain defined ditches, prior converted cropland, and waste treatment systems continue to be excluded. Reduces categories of waters which are subject to case-by-case analysis Before the rule, almost any water could be put through an analysis that remained case-specific, even if it would not be covered under CWA. The rule limits use of case-specific analysis by providing certainty and clarity of protected vs non-protected water. Ultimately the rule saves time and avoids further evaluation and the need to take the case to court. Protects US "regional water treasures" Specific watersheds have been shown to impact downstream water health. The rule protects Texas coastal prairie wetlands, coastal depressions called Carolina Bays and the related seasonal Delmarva bays, western vernal pools in California, pocosins, and other prairie potholes, when impacting downstream waterways. Defining "Waters of the United States" The Clean Water Act is the primary federal law regulating water pollution in the United States. The language of the Clean Water Act describes itself as pertaining to "Waters of the United States". The act defines these waters as "navigable waterways", which connects the act to constitutional authority to regulate interstate commerce. Two U.S. Supreme Court decisions, in 2001 and 2006, interpreted the law to include waters not presently navigable that were formerly navigable that might be readily dredged to be restored to navigation or be made available for navigation. The scope of these decisions cast into doubt lower court decisions interpreting the act's authority to extend regulatory authority to streams, wetlands, and small bodies of water not navigable in the sense of the interstate commerce clause. These decisions highlighted a need for the EPA and USACE to more precisely define and justify an implicit regulatory authority over tributaries flowing into the navigable waterways for which a clear statutory authority is provided. Solid Waste Agency of Northern Cook County (SWANCC) v. U.S. Army Corps of Engineers The Solid Waste Agency of Northern Cook County (SWANCC), Illinois, was denied federal permits to develop an old gravel mine site into a landfill because migratory bird ponds had developed in abandoned excavation trenches on the property. The Supreme Court ruled in 2001 that the authority granted by CWA did not extend to abandoned gravel pits with seasonal ponds. Rapanos v. United States In 1989, land developer John Rapanos filled on his property some 10-20 miles from the nearest navigable waters that his environmental consultant had classified as wetlands without a permit from the Michigan Department of Environmental Quality. Rapanos v. United States resulted in a 2006 Supreme Court decision with five justices concurring to vacate rulings against the defendants, but issuing three distinctly differing opinions leaving uncertain which of the described tests defined the limit of the federal authority to regulate wetlands. 
The resulting ambiguity became a part of the stated rationale for EPA rulemaking activity that resulted in the 2015 Waters of the United States rule. Development Following the SWANCC ruling, the EPA (then under the George W. Bush administration) issued guidelines in 2003 restricting regulatory review of some 20 million acres of isolated wetlands and gave advance notice of proposed rulemaking which would substantially narrow the scope of WOTUS and weaken CWA protections. After strong opposition from Congress the planned legislation was abandoned, to the relief of environmental advocates and disappointment of land development groups who sought a reduction in federal wetlands protection. The lack of a majority opinion in the 2006 Rapanos case prompted a second set of EPA guidelines directing the agency to determine wetlands protection on a case-by-case basis. This contributed to an uptick in lawsuits for the next 8 years challenging the EPA's regulatory authority over streams and wetlands. Seeking to reduce confusion and to restore the original scope of WOTUS to pre-SWANCC levels, repeated unsuccessful attempts were made to pass a bill called the "Clean Water Authority Restoration Act" in each Congress from 2002 to 2010. In April 2011, the EPA, under the Barack Obama administration, proposed a new set of guidelines to replace the two issued under the Bush administration. These guidelines formed the basis of what became the Clean Water Rule. In contrast to the manner in which the 2003 and 2007 guidelines were issued, the EPA and the USACE conducted peer-reviewed hydrological studies, interagency reviews, and economic analyses before publishing a formal proposed rule on April 21, 2014. On May 27, 2015, after a public comment period and numerous meetings with state entities, public and private stakeholders, then-EPA Administrator Gina McCarthy along with Assistant Army Secretary Jo-Ellen Darcy signed the Clean Water Rule, set to become effective in August of that year. Implications for stakeholders EPA had stated that the 2015 rule created no additional burden for stakeholders working in agriculture since there was no change to the exemptions for activities necessary to forestry, ranching, or farming. The rule provided clearer protection of many waters of the U.S. that, if polluted, could have detrimental effects on drinking water, habitats, and flood-prone areas. One U.S. water news organization stressed that, while the rule was an update to the CWA, there is still a need for more regulation since more than half of the nation's streams and rivers do not meet standards and most pollution issues come from nonpoint sources, such as agricultural runoff. Many people, 117 million according to EPA, rely on drinking water, in addition to many others who subside on fishing, from sources protected under the implementation of the rule. Low-income communities and communities of color are more often at risk of being affected by pollution. It has also been evidenced that, "states conduct fewer regulatory enforcement actions in counties with higher levels of poverty." The Environmental Justice Coalition for Water expressed, in its comment on the rule, the need to "strengthen the categorical protections" to wetlands, to minimize flooding and support pollution remediation. While there are no direct implications for indigenous peoples, tribal communities were consulted during the process of finalizing the Clean Water Rule. 
A separate, revised interpretive rule to the Clean Water Act, section 518, determined tribal lands should be treated as states and was made effective in May 2016. This amendment is important for giving people living on reservations access to EPA regulation and federal grants; tribes no longer need to "demonstrate inherent authority to regulate" their waters. The regulation at the state level is determined by the strength of federal coverage and some stakeholders consider the rule to be overreach by the government. There is concern from private landowners, including small business owners and farmers, that this "rule will lead to radical environmental groups suing homeowners and small businesses," and, ultimately, "increased regulatory costs, less economic development, fewer jobs." Legal challenges and opposition Partisan and industry opposition Government regulation and protection of fresh water supplies and watershed health is frequently perceived on the political right as a burden on economic growth and an infringement of landowner rights. The Clean Water Rule was part of a larger mobilization by the Obama administration to ingrain the presidency with an environmental legacy, which Republicans have viewed as an “over-reach” of executive power. The pushback against the Clean Water Rule also include some Democrats from "farm and energy states". Some state and local governments also consider the Clean Water Rule an unconstitutional over-reach violating federalism principles and due process provisions outlined in the 10th and 14th amendments respectively. Legal objections could also be raised on the principle that the Clean Water Act itself violates the Commerce Clause of the Constitution. On February 22, 2017, the Business Roundtable provided a list of federal regulations to the Trump administration which it wished to have reviewed for repeal or major reform; the Clean Water Rule was among the "wishlist" of sixteen. The roundtable is a consortium of large corporations including J.P. Morgan Chase, Honeywell, Lockheed Martin, and Dow Chemical Company. Federal stay After thirteen states sued to block the rule, U.S. Chief District Judge for North Dakota Ralph R. Erickson issued a preliminary injunction in 2015, hours before the rule was to take effect, blocking regulation in those states. In a separate case, the Sixth Circuit Court temporarily halted implementation of the 2015 Rule by issuing a nationwide stay on October 9, 2015, which was the day before the rule was supposed to come into effect. The Sixth Circuit's decision was overturned on January 22, 2018 when the Supreme Court of the United States issued a unanimous decision that the appeals courts do not have original jurisdiction to review challenges to the Clean Water Act and, therefore, lack the authority to issue a stay. Rather, challenges to the 2015 Rule must be filed in United States district courts. Trump administration Donald Trump, as part of his 2016 presidential campaign had set a goal of repealing or weakening the WOTUS rule, and once in office, began to act on that pledge, stating that the rule was a "massive power grab" by the government on farmers, home owners, and land commissioners, stalling economic growth. On February 28, 2017 Trump signed an executive order directing EPA to review the Clean Water Rule for conflicts with his economic growth agenda. On March 6, 2017 the Trump administration announced its intent to review and rescind or revise the rule. 
The Trump administration's choice for EPA water chief, David Ross, had represented the state of Wyoming in 2015 in a lawsuit against EPA's interpretation of WOTUS. On February 16, 2017, Trump signed a law disapproving and vacating the Stream Protection Rule. That rule, published by the Office of Surface Mining Reclamation and Enforcement on December 20, 2016, with just 31 days left in the Obama administration's term of office, regulated mountaintop removal mining sites.

In January 2018 EPA formally suspended the 2015 WOTUS regulation and announced plans to issue a new version later in 2018. Fifteen states, two cities, and several environmental organizations challenged EPA's suspension in several lawsuits. EPA and USACE published a proposed rule on February 14, 2019 that would revise the WOTUS definition, and on September 12, 2019 the Trump administration formally announced that the WOTUS rule had been repealed, effective within weeks. A replacement for the Clean Water Rule was issued by the Trump administration on January 23, 2020 (published April 21, 2020); it further rolled back protections for certain wetlands and streams and eliminated requirements for landowners to get EPA approval for certain modifications of their own land. The Natural Resources Defense Council and other environmental groups sued to block the new rule. On August 30, 2021, the United States District Court for the District of Arizona threw out the 2020 replacement rule in Pasqua Yaqui Tribe et al v. EPA, stating that EPA had made serious procedural errors in issuing the 2020 rule and that its implementation would lead to "serious environmental harm."

Biden administration

In June 2021 the administration of President Joe Biden described "significant environmental degradation" from hundreds of recently initiated development projects that were not subject to regulatory approval because of the 2019 repeal. In an announcement, EPA said it planned to initiate a new rulemaking to reverse the 2019/2020 rule and restore the 2015 regulation, widening the scope of federal jurisdiction over waterways. USACE and EPA published a new definition of WOTUS, returning to the definition in the pre-2015 regulations, on January 18, 2023; the rule took effect on March 20, 2023. However, on May 25, 2023, the restored policy was again narrowed when the U.S. Supreme Court ruled that the EPA cannot regulate waters in the United States that are isolated from larger bodies of water.

See also
Environmental policy of the United States
Indigenous rights to land along rivers
Inland waterways of the United States

References

External links
Waters of the United States Rulemaking - EPA

Water law in the United States United States Environmental Protection Agency United States Army Corps of Engineers 2015 in the environment Water pollution in the United States
Clean Water Rule
Engineering
3,142
43,608,571
https://en.wikipedia.org/wiki/Methoxyamine
Methoxyamine is the organic compound with the formula CH3ONH2. Also called O-methylhydroxylamine, it is a colourless volatile liquid that is soluble in polar organic solvents and in water. It is a derivative of hydroxylamine with the hydroxyl hydrogen replaced by a methyl group. Alternatively, it can be viewed as a derivative of methanol with the hydroxyl hydrogen replaced by an amino group. It is an isomer of N-methylhydroxylamine and aminomethanol. It decomposes in an exothermic reaction (−56 kJ/mol) to methane and azanone unless stored as a hydrochloride salt. (A simple arithmetic check of the molecular formula follows the article text below.)

Synthesis

Methoxyamine is prepared via O-alkylation of hydroxylamine derivatives. For example, it is obtained by O-methylation of acetone oxime followed by hydrolysis of the O-methylated oxime:

(CH3)2C=NOCH3 + H2O → (CH3)2CO + H2NOCH3

The other broad method involves methanolysis of hydroxylamine sulfonates:

H2NOSO3− + CH3OH → H2NOCH3 + HSO4−

Reactions

Analogous to the behavior of hydroxylamine, methoxyamine condenses with ketones and aldehydes to give imines. Methoxyamine is used as a synthon for NH2+. It undergoes deprotonation by methyllithium to give CH3ONHLi. This N-lithio derivative is attacked by organolithium compounds to give, after hydrolysis, amines:

H2NOCH3 + CH3Li → LiHNOCH3 + CH4
LiHNOCH3 + RLi → RNHLi + LiOCH3
RNHLi + H2O → RNH2 + LiOH

Uses

Methoxyamine has potential medicinal uses. It covalently binds to apurinic/apyrimidinic (AP) DNA damage sites and inhibits base excision repair (BER), which may result in an increase in DNA strand breaks and apoptosis. This agent may potentiate the anti-tumor activity of alkylating agents. Examples of drugs incorporating the methoxyamine unit are brasofensine and gemifloxacin.

References

External links
Sigma-Aldrich Methoxyamine Hydrochloride

Hydroxylamines
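As a quick arithmetic check of the molecular formula given above (CH3ONH2, i.e. CH5NO), the following minimal Python sketch totals a molar mass of about 47.06 g/mol. The atomic weights are standard values hard-coded here, and the variable names are illustrative assumptions rather than anything drawn from the article.

# Minimal sketch: molar mass of methoxyamine, CH3ONH2 (CH5NO).
# Standard atomic weights in g/mol, rounded to three decimals.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

# Atom counts read directly off the formula CH3ONH2.
methoxyamine = {"C": 1, "H": 5, "N": 1, "O": 1}

molar_mass = sum(ATOMIC_WEIGHT[element] * count
                 for element, count in methoxyamine.items())
print(f"Molar mass of CH3ONH2: {molar_mass:.2f} g/mol")  # about 47.06 g/mol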
Methoxyamine
Chemistry
512
2,684,812
https://en.wikipedia.org/wiki/Alcyone%20%28star%29
Alcyone, designated η Tauri (Eta Tauri, abbreviated Eta Tau, η Tau), is a star in the constellation of Taurus. Approximately 440 light-years from the Sun, it is the brightest star in the Pleiades open cluster, a young cluster around 100 million years old. There are a number of fainter stars very close to Alcyone, some of which are members of the same cluster.

Nomenclature

Eta Tauri is the star's Bayer designation. The name Alcyone originates in Greek mythology; she is one of the seven daughters of Atlas and Pleione known as the Pleiades. In 2016, the International Astronomical Union (IAU) organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Alcyone for this star. It is now so entered in the IAU Catalog of Star Names.

In Chinese, 昴宿 (Mǎo Xiù), meaning Hairy Head, refers to an asterism consisting of Alcyone, Electra, Taygeta, Asterope, Maia, Merope, and Atlas. Consequently, the Chinese name for Alcyone itself is 昴宿六 (Mǎo Xiù Liù), "the Sixth Star of Hairy Head".

Physical properties

Alcyone is a blue-white B-type giant, similar to the other bright B-type stars in the Pleiades cluster. With an apparent magnitude of +2.87 (absolute magnitude = −2.39), it is the brightest and most luminous star in the Pleiades. The spectral type of B7IIIe indicates that emission lines are present in its spectrum. Like many Be stars, Alcyone is surrounded by a gaseous disk of material flung into orbit from its equator. Alcyone has a high rotational velocity, which gives it an ellipsoidal shape: its effective radius is almost ten times that of the Sun, but the actual radius is smaller at the poles and larger at the equator. Its effective temperature is approximately 12,300 K, with the actual temperature higher at the poles and lower at the equator. Its bolometric luminosity is 2,030 times that of the Sun (a short numerical cross-check of this figure follows the article text below).

The age of the Pleiades is typically calculated to be around 130 million years, but Alcyone itself appears to be younger, less than 100 million years. Alcyone may be a blue straggler, or models may not derive an accurate age for stars of this type.

Companions

The Catalog of Components of Double and Multiple Stars lists three companions: B is 24 Tauri, a magnitude 6.28 A0 main-sequence star 117" away; C is V647 Tauri, a δ Scuti variable star; and D is a magnitude 9.15 F3 main-sequence star. V647 Tau varies from magnitude +8.25 to +8.30 over 1.13 hours. The Washington Double Star Catalog lists a further four companions, all fainter than 11th magnitude, and also describes component D as itself double, with two nearly equal components separated by 0.30". Some previous lunar occultation studies found evidence of sub-arcsecond companions, but a 2021 interferometric study concluded that Alcyone is a single star.

See also
Circumstellar disc
List of stars in Taurus
Lists of stars
Shell star
White Tiger

References

External links
Jim Kaler's Stars, University of Illinois: ALCYONE (Eta Tauri)
Alcyone and the Pleiades

Tauri, Eta Taurus (constellation) B-type giants A-type main-sequence stars F-type main-sequence stars Eclipsing binaries 5 Delta Scuti variables Pleiades Alcyone Tauri, 025 017702 1165 023630 BD+23 0541
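The luminosity quoted in the Physical properties section can be cross-checked against the quoted radius and temperature with the Stefan–Boltzmann relation L/L☉ = (R/R☉)² (T/T☉)⁴. The Python sketch below is illustrative only: the solar effective temperature of 5772 K and the rounding of the radius to exactly ten solar radii are assumptions made here, not values taken from the article.

# Minimal sketch: Stefan–Boltzmann cross-check of Alcyone's luminosity.
# L/L_sun = (R/R_sun)**2 * (T_eff/T_sun)**4
T_SUN = 5772.0        # K, IAU nominal solar effective temperature (assumed input)
radius_ratio = 10.0   # R/R_sun, "almost ten times that of the Sun" per the article
t_eff = 12_300.0      # K, effective temperature quoted in the article

luminosity_ratio = radius_ratio**2 * (t_eff / T_SUN)**4
print(f"L ≈ {luminosity_ratio:,.0f} L_sun")  # about 2,060, close to the quoted 2,030

The small difference from the published figure of 2,030 times solar reflects the rounding of the radius ratio to exactly ten.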
Alcyone (star)
Astronomy
807