In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture. Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization.[1] It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. (Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.) Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function. In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect "classical" with "quantum" quantities. For example, the phase-space function may depend explicitly on the reduced Planck constant ħ, as it does in some familiar cases involving angular momentum.
This invertible representation change then allows one to express quantum mechanics in phase space, as was appreciated in the 1940s by Hilbrand J. Groenewold[2] and José Enrique Moyal.[3][4] In more generality, Weyl quantization is studied in cases where the phase space is a symplectic manifold, or possibly a Poisson manifold. Related structures include the Poisson–Lie groups and Kac–Moody algebras. The following explains the Weyl transformation on the simplest, two-dimensional Euclidean phase space. Let the coordinates on phase space be (q, p), and let f be a function defined everywhere on phase space. In what follows, we fix operators P and Q satisfying the canonical commutation relations, such as the usual position and momentum operators in the Schrödinger representation. We assume that the exponentiated operators e^{iaQ} and e^{ibP} constitute an irreducible representation of the Weyl relations, so that the Stone–von Neumann theorem (guaranteeing uniqueness of the canonical commutation relations) holds. The Weyl transform (or Weyl quantization) of the function f is given by the following operator in Hilbert space:[5][6]

Φ[f] = (1/(2π)^2) ∬∬ f(q, p) e^{i(a(Q−q) + b(P−p))} dq dp da db.

Throughout, ħ is the reduced Planck constant. It is instructive to perform the p and q integrals in the above formula first, which has the effect of computing the ordinary Fourier transform f̃ of the function f, while leaving the operator e^{i(aQ + bP)}.
In that case, the Weyl transform can be written as[7]

Φ[f] = (1/(2π)^2) ∬ f̃(a, b) e^{i(aQ + bP)} da db.

We may therefore think of the Weyl map as follows: we take the ordinary Fourier transform of the function f(p, q), but then when applying the Fourier inversion formula, we substitute the quantum operators P and Q for the original classical variables p and q, thus obtaining a "quantum version of f." A less symmetric form, but handy for applications, is the following. The Weyl map may then also be expressed in terms of the integral kernel matrix elements of this operator:[8]

⟨x| Φ[f] |y⟩ = ∫ (dp / 2πħ) e^{ip(x−y)/ħ} f((x + y)/2, p).

The inverse of the above Weyl map is the Wigner map (or Wigner transform), introduced by Eugene Wigner,[9] which takes the operator Φ back to the original phase-space kernel function f:

f(q, p) = 2 ∫_{−∞}^{∞} dy e^{−2ipy/ħ} ⟨q + y| Φ[f] |q − y⟩.

For example, the Wigner map of the oscillator thermal distribution operator exp(−β(P^2 + Q^2)/2) is given in [6]. If one replaces Φ[f] in the above expression with an arbitrary operator, the resulting function f may depend on the reduced Planck constant ħ, and may well describe quantum-mechanical processes, provided it is properly composed through the star product, below.[10] In turn, the Weyl map of the Wigner map is summarized by Groenewold's formula.[6] While the above formulas give a nice understanding of the Weyl quantization of a very general observable on phase space, they are not very convenient for computing on simple observables, such as those that are polynomials in q and p.
In later sections, we will see that on such polynomials, the Weyl quantization represents the totally symmetric ordering of the noncommuting operators Q and P. For example, the Wigner map of the quantum angular-momentum-squared operator L^2 is not just the classical angular momentum squared, but it further contains an offset term −3ħ^2/2, which accounts for the nonvanishing angular momentum of the ground-state Bohr orbit. The action of the Weyl quantization on polynomial functions of q and p is completely determined by the following symmetric formula:[11]

e^{aq + bp} ⟼ e^{aQ + bP}

for all complex numbers a and b. From this formula, it is not hard to show that the Weyl quantization of a function of the form q^k p^l gives the average of all possible orderings of k factors of Q and l factors of P:

∏_{j=1}^{N} ξ_{k_j} ⟼ (1/N!) ∑_{σ ∈ S_N} ∏_{j=1}^{N} Ξ_{k_{σ(j)}},

where ξ_j = q_j, ξ_{n+j} = p_j, and S_N is the set of permutations on N elements. For example, we have

q^2 p ⟼ (Q^2 P + QPQ + PQ^2)/3.

While this result is conceptually natural, it is not convenient for computations when k and l are large. In such cases, we can use instead McCoy's formula.[12] This expression gives an apparently different answer for the case of p^2 q^2 from the totally symmetric expression above. There is no contradiction, however, since the canonical commutation relations allow for more than one expression for the same operator.
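The "average over all orderings" prescription above is easy to mechanize. The sketch below is plain Python: the operator names are just string labels (no commutation relations are applied), and the weights are exact rationals, so a word that arises from several permutations simply accumulates weight.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def weyl_symmetrize(factors):
    """Average over all orderings of the given noncommuting factors,
    returned as a map from operator word (tuple of labels) to its
    rational coefficient."""
    n = len(factors)
    terms = {}
    for perm in permutations(range(n)):
        word = tuple(factors[i] for i in perm)
        terms[word] = terms.get(word, Fraction(0)) + Fraction(1, factorial(n))
    return terms

# Weyl quantization of q^2 p: the average of Q Q P, Q P Q, and P Q Q.
result = weyl_symmetrize(["Q", "Q", "P"])
for word, coeff in sorted(result.items()):
    print(" ".join(word), coeff)
```

Because Q appears twice, each of the 3! = 6 permutations duplicates one of three distinct words, so each word carries coefficient 2/6 = 1/3, reproducing (Q^2 P + QPQ + PQ^2)/3.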
(The reader may find it instructive to use the commutation relations to rewrite the totally symmetric formula for the case of p^2 q^2 in terms of the operators P^2 Q^2, QP^2Q, and Q^2 P^2, and verify the first expression in McCoy's formula with m = n = 2.) It is widely thought that the Weyl quantization, among all quantization schemes, comes as close as possible to mapping the Poisson bracket on the classical side to the commutator on the quantum side. (An exact correspondence is impossible, in light of Groenewold's theorem.) For example, Moyal showed the
https://en.wikipedia.org/wiki/Wigner–Weyl_transform
WikiPathways[1][2] is a community resource for contributing and maintaining content dedicated to biological pathways. Any registered WikiPathways user can contribute, and anybody can become a registered user.[3] Contributions are monitored by a group of admins, but the bulk of peer review, editorial curation, and maintenance is the responsibility of the user community. WikiPathways was originally built using MediaWiki software,[4] a custom graphical pathway editing tool (PathVisio[5]), and integrated BridgeDb[6] databases covering major gene, protein, and metabolite systems. WikiPathways was founded in 2008 by Thomas Kelder, Alex Pico, Martijn Van Iersel, Kristina Hanspers, Bruce Conklin and Chris Evelo. Current architects are Alex Pico and Martina Summer-Kutmon. Each article at WikiPathways is dedicated to a particular pathway. Many types of molecular pathways are covered, including metabolic,[7] signaling, and regulatory pathways, and the supported[8] species include human, mouse, zebrafish, fruit fly, C. elegans, yeast, rice and arabidopsis,[9] as well as bacteria and plant species. Using a search feature, one can locate a particular pathway by name, by the genes and proteins it contains, or by the text displayed in its description. The pathway collection can also be browsed with combinations of species names and ontology-based categories. In addition to the pathway diagram, each pathway page also includes a description, bibliography, pathway version history and a list of component genes and proteins with linkouts to public resources. For individual pathway nodes, users can access a list of other pathways containing that node. Pathway changes can be monitored by displaying previous revisions or by viewing differences between specific revisions. Using the pathway history, one can also revert to a previous revision of a pathway.
Pathways can also be tagged with ontology terms from three major BioPortal ontologies (Pathway, Disease and Cell Type). The pathway content at WikiPathways is freely available for download in several data and image formats. WikiPathways is completely open access and open source . All content is available under Creative Commons 0 . All source code for WikiPathways and the PathVisio editor is available under the Apache License, Version 2.0 . In addition to various primary data formats (e.g. GPML, BioPAX , Reactome , [ 10 ] KEGG , and RDF [ 11 ] ), WikiPathways supports a variety of ways to integrate and interact with pathway content. These include directed link-outs, image maps , RSS feeds and deep web services . [ 12 ] This enables reuse in projects like COVID19 Disease Map. [ 13 ] WikiPathways content is used to annotate and cross-link Wikipedia articles covering various genes, proteins, metabolites and pathways. Here are a few examples: As of this edit , this article uses content from "WikiPathWays:About" , which is licensed in a way that permits reuse under the Creative Commons Attribution-ShareAlike 3.0 Unported License , but not under the GFDL . All relevant terms must be followed.
https://en.wikipedia.org/wiki/WikiPathways
Wikiloc is a website, launched in 2006,[2][3][4] containing GPS trails and waypoints that members have uploaded.[3] This mashup displays the routes in frames showing Google Maps (with the option to overlay layers from the World Relief Map (maps-for-free.com), OpenStreetMap, the related OpenCycleMap, USGS Imagery Topo Base Map and USGS Topo Base Map). The service is also available in Google Earth.[5][6] There are mobile apps for Android[7][8] and iPhone.[9][10] The product has more than 11M members,[11] is offered in many languages[12] and has more than 37.9M [clarification needed] tracks[11] covering dozens of activities (hiking, cycling, sailing, horseback riding, driving, paragliding, and so on) in many countries and territories.[13] Wikiloc began as a worldwide online reference for hiking.[14] Additionally, photographs on Wikiloc enabled automated content analysis to characterize the landscape in the Ebro Delta Natural Park, Spain.[15] Wikiloc does not pay its contributors.[16] It charges users to download trails and waypoints from the website.[17] It also prohibits bulk downloads or re-hosting of the data.[16] Wikiloc runs on Linux, with a PostgreSQL database manager with PostGIS for geography support, Apache software, Hibernate, GEOS, GDAL, PROJ.4, and some Python for preprocessing and maintenance.[3][18] Wikiloc has won the following prizes:
https://en.wikipedia.org/wiki/Wikiloc
Wikispecies is a wiki-based online project supported by the Wikimedia Foundation. Its aim is to create a comprehensive open content catalogue of all species; the project is directed at scientists, rather than at the general public. Jimmy Wales stated that editors are not required to fax in their degrees, but that submissions will have to pass muster with a technical audience.[1][2] Wikispecies is available under the GNU Free Documentation License and CC BY-SA 4.0. Started in September 2004, with biologists around the world invited to contribute,[3] the project had grown to a framework encompassing the Linnaean taxonomy, with links to Wikipedia articles on individual species, by April 2005.[2] Benedikt Mandl coordinated the efforts of several people who were interested in getting involved with the project and contacted potential supporters in the early summer of 2004. Databases were evaluated and their administrators contacted; some of them agreed to provide their data for Wikispecies. Mandl defined two major tasks. Advantages and disadvantages were widely discussed on the wikimedia-l mailing list. The board of directors of the Wikimedia Foundation voted 4 to 0 in favor of establishing Wikispecies. The project was launched in August 2004 and is hosted at species.wikimedia.org. It officially became a sister project of the Wikimedia Foundation on September 14, 2004. As a database for taxonomy and nomenclature, Wikispecies comprises taxon pages, and additionally pages about synonyms, taxon authorities, taxonomical publications, type material, and institutions or repositories holding type specimens.[5] Wikispecies has disabled local upload and asks users to use images from Wikimedia Commons. Wikispecies does not allow the use of content that does not conform to a free license.
https://en.wikipedia.org/wiki/Wikispecies
Wikispeed is an automotive startup with a modular design car. Wikispeed competed in the Progressive Automotive X Prize competition in 2010 and placed tenth in the mainstream class, in which a hundred other cars competed, often from big companies and universities.[1][2][3][4] The car debuted at the North American International Auto Show (NAIAS) in Detroit, Michigan in January 2011.[5][6] Wikispeed was founded by Joe Justice and is headquartered in Seattle, Washington. In 2011, Justice gave a TEDx talk explaining the management style implemented by the Wikispeed team.[7] In May 2012, Joe Justice launched an Indiegogo campaign to crowdfund further refinement of their prototype design into a market-ready kit car. Justice did not seek development funding from "traditional venture capital" in an effort to avoid forcing the Wikispeed project "into commercial short-term money-making".[4] The campaign sought roughly $50,000 over a period of two months.[4] The campaign failed. Wikispeed innovates by applying scrum development techniques borrowed from the software development industry. They use open source tools and lean management methods to improve their productivity.[8] On January 6, 2015, Wikispeed announced that it had been unable to create a working engine module since its second model and called on the community for help. On February 15, 2015, Wikispeed announced that it had produced another working engine module.[9]
https://en.wikipedia.org/wiki/Wikispeed
Wikiwand, formerly stylized as WikiWand,[2] is a commercial proprietary interface developed for viewing Wikipedia articles.[3] Its interface includes a sidebar menu displaying the table of contents, a navigation bar, personalized links to other languages, new typography, access to previews of linked articles, display advertisements, and sponsored articles.[4] The interface is available on Chrome, Firefox, and Microsoft Edge as well as via Wikiwand's website. Wikiwand was founded in 2013[1] by Lior Grossman and Ilan Lewin.[5] It officially launched in August 2014.[6] In 2014, Wikiwand raised $600,000 to support the development of the interface. According to Grossman, "It didn't make sense to us that the fifth most popular website in the world, used by half a billion people, has an interface that hasn't been updated in over a decade. We found the Wikipedia interface cluttered, hard to read, hard to navigate, and lacking in terms of usability."[2] In March 2015, Wikiwand released an iOS app for iPhone and iPad.[7] In February 2020, an Android app was under development.[8] In 2021, it was noted that visiting a Wikiwand page made the visitor's device exchange hundreds of requests and several megabytes of data with advertisers.[9] On January 12, 2023, Wikiwand 2.0 was officially launched with an overhauled interface and new features such as text-to-speech and AI-generated summaries developed in collaboration with Wordtune Read.[10] On March 1, a Q&A feature powered by GPT-3 was added to the top of all articles.[11] Unlike Wikipedia, Wikiwand is ad-supported. It employs a freemium model that charges for a premium, ad-free plan. A large number of tracking cookies are set. When it started, Wikiwand announced that it would donate 30 percent of its profits to the Wikimedia Foundation. As of 2024, this applies only to user donations, not to subscription or advertising revenue.
[ 12 ] Per the site's Donation page, donations to Wikiwand are made to the Israeli company Wikipele Ltd . [ 13 ]
https://en.wikipedia.org/wiki/Wikiwand
The Wildlife Conservation Research Unit ( WildCRU ) is part of the Department of Zoology at the University of Oxford in England. [ 1 ] Its mission is to achieve practical solutions to conservation problems through original scientific research , training conservation scientists to conduct research, putting scientific knowledge into practice, and educating and involving the public to achieve lasting solutions. [ citation needed ] The Unit was founded in 1986 by Professor David W. Macdonald . [ 2 ] In 2022 Professor Amy Dickman took over from David W. Macdonald as Director. [ 3 ] Members come from more than 30 countries and many have returned to hold influential roles in conservation. WildCRU research has been used to advise policy-makers worldwide. More than 300 scientific papers and 25 reports have been published, over a hundred fruitful collaborations have been fostered, and over 45 students have completed doctoral theses. [ citation needed ] WildCRU projects use all four elements of their Conservation Quartet: research to understand the problem, education to explain it, community involvement to ensure participation and acceptance, and implementation of a solution. [ citation needed ] The approach is interdisciplinary , linking to public health , community development and animal welfare . In a new initiative concerning ‘ biodiversity and business ’, WildCRU is working directly to influence policy making processes in industry . Current [ when? ] project areas include saving endangered species , resolving conflict, reconciling farming and wildlife , researching fundamental ecology , and managing wildlife diseases , pests and invasive species . 
[ citation needed ] Specific projects include protecting the Ethiopian wolf , Grevy's zebra and endemic birds in the Galapagos Islands , finding solutions to bushmeat exploitation in West Africa , community conservation education in Africa, sustainable farming , badger ecology and behavior, and the impact of American mink on native wildlife in Britain, Belarus, and Argentina. [ citation needed ] WildCRU is located in Tubney House, Abingdon Road, Tubney , Oxfordshire . [ 1 ]
https://en.wikipedia.org/wiki/WildCRU
Wild Fermentation: The Flavor, Nutrition, and Craft of Live-Culture Foods is a 2003 book by Sandor Katz that discusses the ancient practice of fermentation. While most of the conventional literature assumes the use of modern technology, Wild Fermentation focuses more on the practice and culture of fermenting food. The term "wild fermentation" refers to the reliance on naturally occurring bacteria and yeast to ferment food. For example, conventional bread making requires the use of a commercial, highly specialized yeast, while wild-fermented bread relies on naturally occurring cultures that are found on the flour, in the air, and so on. Similarly, the book's instructions on sauerkraut require only cabbage and salt, relying on the cultures that naturally exist on the vegetable to perform the fermentation. The book also discusses some foods that are not, strictly speaking, wild ferments, such as miso, yogurt, kefir, and nattō. Beyond food, the book includes some discussion of social, personal, and political issues, such as the legality of raw milk cheeses in the United States. Newsweek has referred to Wild Fermentation as the "fermentation bible".[1]
https://en.wikipedia.org/wiki/Wild_Fermentation
The Wild Salmon Center (WSC) is a worldwide conservation network of scientists and advocates working to protect wild salmon, steelhead, char, trout and the ecosystems on which these species depend.[2][3] Headquartered in Portland, Oregon, WSC works with communities, businesses, governments, and other non-profits to protect and preserve healthy salmon ecosystems in the North Pacific.[4] WSC programs span Russia, Japan, Alaska, British Columbia, Washington State, Oregon, and California. WSC was founded as a non-profit by Pete Soverel and Tom Pero in 1992, and was run entirely by volunteers during its first five years. Originally, WSC funded its research and conservation by organizing angling trips to the Kamchatka Peninsula in the Russian Far East. These expeditions were a joint venture between the WSC and Moscow State University. In 1998, WSC hired Guido Rahr as executive director; he had previously developed an approach to salmon conservation that focused on proactive protection of the strongest remaining populations (the stronghold strategy). The organization expanded between 1999 and 2010. A new organization, "The Conservation Angler", was created in 2003 to take over the ecotourism programs, allowing WSC to focus solely on science and conservation.[5] One of the WSC programs, "State of the Salmon", in collaboration with Ecotrust, used data to track the health and trends of wild salmon populations. This data was then analyzed and used to inform salmon management and conservation throughout the Pacific Rim.[6] While "State of the Salmon" has concluded, sustainable fisheries work continues with a new organization, Ocean Outcomes, which was incubated and launched by the Wild Salmon Center in 2015.[7] In July 2023, WSC was labeled "undesirable" in Russia, on the grounds of harm to economic development.
[8] Since adopting it in 1999, WSC has pursued a proactive "salmon stronghold" conservation strategy as a regional and international approach to salmon conservation.[9] Salmon strongholds are river ecosystems that contain the most abundant and biologically diverse populations of wild salmon. Select areas include the Kamchatka Peninsula, the Russian Far East mainland, Sakhalin Island, British Columbia, Bristol Bay, Alaska, as well as key watersheds in the lower 48 U.S. states.[10] As part of its efforts to protect habitats in Oregon, the Wild Salmon Center is a member of the North Coast State Forest Coalition. By identifying and then protecting salmon strongholds, WSC aims to conserve healthy salmon stocks before populations decline and to ensure their long-term sustainability.[11]
https://en.wikipedia.org/wiki/Wild_Salmon_Center
Originally, wild numbers are the numbers supposed to belong to a fictional sequence of numbers imagined to exist in the mathematical world of the mathematical fiction The Wild Numbers, authored by Philibert Schogt, a Dutch philosopher and mathematician.[1] Even though Schogt has given a definition of the wild number sequence in his novel, it is couched in such deliberately imprecise language that the definition turns out to be no definition at all. However, the author claims that the first few members of the sequence are 11, 67, 2, 4769, 67. Later, inspired by this wild and erratic behaviour of the fictional wild numbers, the American mathematician J. C. Lagarias used the terminology to describe a precisely defined sequence of integers which shows somewhat similar wild and erratic behaviour. Lagarias's wild numbers are connected with the Collatz conjecture and the concept of the 3x + 1 semigroup.[2][3] The original fictional sequence of wild numbers has found a place in the On-Line Encyclopedia of Integer Sequences.[4] In the novel The Wild Numbers, The Wild Number Problem is formulated as follows: But it has not been specified what those "deceptively simple operations" are. Consequently, there is no way of knowing how those numbers 11, 67, etc. were obtained and no way of finding what the next wild number would be. The novel The Wild Numbers constructs a fictitious history for The Wild Number Problem. The important milestones in this history can be summarised as follows. In mathematics, the multiplicative semigroup, denoted by W₀, generated by the set {(3n+2)/(2n+1) : n ≥ 0} is called the Wooley semigroup in honour of the American mathematician Trevor D. Wooley. The multiplicative semigroup, denoted by W, generated by the set {1/2} ∪ {(3n+2)/(2n+1) : n ≥ 0} is called the wild semigroup.
The set of integers in W 0 is itself a multiplicative semigroup. It is called the Wooley integer semigroup and members of this semigroup are called Wooley integers. Similarly, the set of integers in W is itself a multiplicative semigroup. It is called the wild integer semigroup and members of this semigroup are called wild numbers. [ 6 ] The On-Line Encyclopedia of Integer Sequences (OEIS) has an entry with the identifying number A058883 relating to the wild numbers. According to OEIS, "apparently these are completely fictional and there is no mathematical explanation". However, the OEIS has some entries relating to pseudo-wild numbers carrying well-defined mathematical explanations. [ 4 ] Even though the sequence of wild numbers is entirely fictional, several mathematicians have tried to find rules that would generate the sequence of the fictional wild numbers. All these attempts have resulted in failures. However, in the process, certain new sequences of integers were created having similar wild and erratic behavior. These well-defined sequences are referred to as sequences of pseudo-wild numbers. A good example of this is the one discovered by the Dutch mathematician Floor van Lamoen. This sequence is defined as follows: [ 7 ] [ 8 ]
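The Wooley semigroup lends itself to brute-force exploration with exact rational arithmetic. The sketch below is only a finite truncation: the generator index is capped at N and the product length at DEPTH (both arbitrary bounds), so it can find Wooley integers but never exhaust them.

```python
from fractions import Fraction
from itertools import product

# Generators (3n+2)/(2n+1) of the Wooley semigroup, truncated to n < N.
N, DEPTH = 25, 3
gens = [Fraction(3 * n + 2, 2 * n + 1) for n in range(N)]

wooley_integers = set()
for depth in range(1, DEPTH + 1):
    for combo in product(gens, repeat=depth):
        value = Fraction(1)
        for g in combo:
            value *= g
        if value.denominator == 1:  # an integer element of the semigroup
            wooley_integers.add(int(value))

print(sorted(wooley_integers))
```

The generator for n = 0 is 2/1 = 2, so the powers 2, 4, and 8 are guaranteed to appear within this truncation; whatever else shows up depends on cancellations among the numerators 3n+2 and the odd denominators 2n+1.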
https://en.wikipedia.org/wiki/Wild_number
In the mathematical areas of linear algebra and representation theory, a problem is wild if it contains the problem of classifying pairs of square matrices up to simultaneous similarity.[1][2][3] Examples of wild problems are classifying indecomposable representations of any quiver that is neither a Dynkin quiver (i.e. the underlying undirected graph of the quiver is a (finite) Dynkin diagram) nor a Euclidean quiver (i.e., the underlying undirected graph of the quiver is an affine Dynkin diagram).[4] Necessary and sufficient conditions have been proposed to check the simultaneous block triangularization and diagonalization of a finite set of matrices, under the assumption that each matrix is diagonalizable over the field of complex numbers.[5]
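As a concrete contrast to the wildness of general matrix pairs, the tame special case of simultaneously diagonalizable pairs has a classical criterion (a standard linear-algebra fact, not the condition of the cited paper): two diagonalizable matrices can be diagonalized in the same basis if and only if they commute. A minimal commutation check, in plain Python with exact integer arithmetic:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commute(A, B):
    """True iff AB == BA."""
    return matmul(A, B) == matmul(B, A)

A = [[1, 1], [0, 2]]   # diagonalizable: distinct eigenvalues 1 and 2
B = [[2, 1], [0, 3]]   # B = A + I, so it commutes with A
C = [[1, 0], [0, 2]]
D = [[0, 1], [1, 0]]   # swaps the basis vectors; does not commute with C
print(commute(A, B), commute(C, D))
```

So A and B (but not C and D) pass the necessary-and-sufficient test for simultaneous diagonalizability of diagonalizable matrices.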
https://en.wikipedia.org/wiki/Wild_problem
Wilder's law of initial value (German "Ausgangswertgesetz"; Ausgangswert meaning baseline in modern terms), proposed by Joseph Wilder, states that "the direction of response of a body function to any agent depends to a large degree on the initial level of that function".[1] It is unclear whether it is a real effect[2] or a statistical artifact.[3] "Das Ausgangswertgesetz" was first published in 1931 by Springer. Wilder gave human subjects injections of pilocarpine, atropine, or adrenaline, and found that the changes in pulse and blood pressure were smaller than average if subjects had high values before the injection, and vice versa.[3] Wilder's first informal articles and work on the topic began in 1922. At that time, he worked in the office of Dr. Wagner-Jauregg, the Nobel Prize–winning doctor, at the University of Vienna. Yet, because Wilder was Jewish, he did not have the same opportunities for a full professorship as his career matured. His career was temporarily interrupted by emigration to the United States in 1938. Wilder's legacy was established through the Law of Initial Value, which has been published the world over. However, Wilder's research and writing ranged from diverse treatises on ethics and magical thinking to research on criminology, multiple sclerosis, blood sugar, and many other topics. He also practiced psychoanalysis in New York and, previously, served as the Director of the Rothschilds' hospital for neurotics called "Rosenhügel" in Vienna.[citation needed]
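The "statistical artifact" reading is easy to demonstrate: when the same measurement noise enters both the baseline reading and the change score, high baselines show small apparent responses through regression to the mean alone, even if the true response is identical for everyone. A simulation sketch (all distribution parameters are invented for illustration):

```python
import random

random.seed(42)

# Null model: each subject has a stable true level; pre- and
# post-injection readings add independent measurement noise, and the
# "drug" shifts everyone by the same fixed amount.
TRUE_SD, NOISE_SD, DRUG_EFFECT = 5.0, 5.0, 3.0
pre, change = [], []
for _ in range(10_000):
    true_level = random.gauss(70.0, TRUE_SD)
    before = true_level + random.gauss(0.0, NOISE_SD)
    after = true_level + DRUG_EFFECT + random.gauss(0.0, NOISE_SD)
    pre.append(before)
    change.append(after - before)

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Clearly negative, although the response does not depend on baseline.
print(round(corr(pre, change), 2))
```

With equal true-level and noise variances as above, the expected correlation is −0.5, a Wilder-like pattern produced with no genuine baseline dependence.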
https://en.wikipedia.org/wiki/Wilder's_law_of_initial_value
Wildfire modeling is concerned with the numerical simulation of wildfires to comprehend and predict fire behavior.[1][2] Wildfire modeling aims to aid wildfire suppression, increase the safety of firefighters and the public, and minimize damage. Wildfire modeling can also aid in protecting ecosystems, watersheds, and air quality. Using computational science, wildfire modeling involves the statistical analysis of past fire events to predict spotting risks and front behavior. Various wildfire propagation models have been proposed in the past, including simple ellipses and egg- and fan-shaped models. Early attempts to determine wildfire behavior assumed terrain and vegetation uniformity. However, the exact behavior of a wildfire's front depends on a variety of factors, including wind speed and slope steepness. Modern growth models utilize a combination of past ellipsoidal descriptions and Huygens' principle to simulate fire growth as a continuously expanding polygon.[3][4] Extreme value theory may also be used to predict the size of large wildfires. However, large fires that exceed suppression capabilities are often regarded as statistical outliers in standard analyses, even though fire policies are more influenced by large wildfires than by small fires.[5] Wildfire modeling attempts to reproduce fire behavior, such as how quickly the fire spreads, in which direction, and how much heat it generates. A key input to behavior modeling is the Fuel Model, or type of fuel, through which the fire is burning. Behavior modeling can also include whether the fire transitions from the surface (a "surface fire") to the tree crowns (a "crown fire"), as well as extreme fire behavior, including rapid rates of spread, fire whirls, and tall, well-developed convection columns. Fire modeling also attempts to estimate fire effects, such as the ecological and hydrological effects of the fire, fuel consumption, tree mortality, and the amount and rate of smoke produced.
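The Huygens-style polygon growth mentioned above can be caricatured in a few lines: each vertex of the fire-front polygon advances along its outward direction at an elliptical rate of spread, fast downwind and slow crosswind. All numbers here (spread rates, wind direction, step count) are made-up toy values, not a calibrated fire model.

```python
import math

HEAD_ROS = 2.0   # rate of spread downwind (length units per step)
FLANK_ROS = 0.5  # rate of spread crosswind/upwind
WIND_DIR = 0.0   # wind blowing toward +x, in radians

def grow(front):
    """Advance each (x, y) vertex of a convex fire front by one step."""
    cx = sum(p[0] for p in front) / len(front)
    cy = sum(p[1] for p in front) / len(front)
    new_front = []
    for x, y in front:
        # Outward direction approximated by the angle from the centroid.
        theta = math.atan2(y - cy, x - cx)
        # Elliptical spread rate: maximal downwind, minimal elsewhere.
        ros = FLANK_ROS + (HEAD_ROS - FLANK_ROS) * max(
            math.cos(theta - WIND_DIR), 0.0)
        new_front.append((x + ros * math.cos(theta),
                          y + ros * math.sin(theta)))
    return new_front

# Start from a small circular ignition and grow for five steps.
front = [(math.cos(a), math.sin(a))
         for a in (2 * math.pi * k / 36 for k in range(36))]
for _ in range(5):
    front = grow(front)

xs = [p[0] for p in front]
print(round(max(xs), 1), round(min(xs), 1))  # front elongated toward +x
```

The downwind vertex advances at HEAD_ROS per step while the upwind vertex creeps at FLANK_ROS, so after a few steps the initially circular front has stretched into the familiar wind-aligned egg shape.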
Wildland fire behavior is affected by weather , fuel characteristics, and topography . Weather influences fire through wind and moisture . Wind increases the fire spread in the wind direction, and higher temperature makes the fire burn faster, while higher relative humidity and precipitation (rain or snow) may slow it down or extinguish it altogether. Weather involving fast wind changes can be particularly dangerous, since it can suddenly change the fire direction and behavior. Such weather includes cold fronts , foehn winds , thunderstorm downdrafts , sea and land breezes , and diurnal slope winds . Wildfire fuel includes grass, wood, and anything else that can burn. Small dry twigs burn faster, while large logs burn slower; dry fuel ignites more easily and burns faster than wet fuel. Topography factors that influence wildfires include the orientation toward the sun, which determines the amount of solar energy received, and the slope (fire spreads faster uphill). Fire can accelerate in narrow canyons, and it can be slowed down or stopped by barriers such as creeks and roads. These factors act in combination. Rain or snow increases the fuel moisture, and high relative humidity slows the drying of the fuel, while winds can make fuel dry faster. Wind can change the fire-accelerating effect of slopes to effects such as downslope windstorms (called Santa Anas , foehn winds, or East winds, depending on the geographic location). Fuel properties may vary with topography, as plant density varies with elevation or aspect with respect to the sun. It has long been recognized that "fires create their own weather." That is, the heat and moisture created by the fire feed back into the atmosphere, creating intense winds that drive the fire behavior. The heat produced by the wildfire changes the temperature of the atmosphere and creates strong updrafts, which can change the direction of surface winds. The water vapor released by the fire changes the moisture balance of the atmosphere. 
The water vapor can be carried away, and the latent heat stored in the vapor is later released through condensation . Like all models in computational science, fire models need to strike a balance between fidelity, availability of data, and speed of execution. Wildland fire models span a vast range of complexity, from simple cause-and-effect principles to the most physically complex, which pose a difficult supercomputing challenge that cannot hope to be solved faster than real time. Forest-fire models have been developed continually since 1940, but many chemical and thermodynamic questions related to fire behaviour are still unresolved. Scientists and their forest-fire models from 1940 to 2003 are surveyed in the literature. [ 6 ] Models can be divided into three groups: empirical, semi-empirical, and physically based. Conceptual models built from experience and intuition gained in past fires can be used to anticipate the future. Many semi-empirical fire-spread equations, such as those published by the USDA Forest Service, [ 7 ] Forestry Canada, [ 8 ] Nobel, Bary, and Gill, [ 9 ] and Cheney, Gould, and Catchpole [ 10 ] for Australasian fuel complexes, have been developed for quick estimation of fundamental parameters of interest, such as fire spread rate, flame length, and fireline intensity of surface fires at a point for specific fuel complexes, assuming a representative point-location wind and terrain slope. Based on the work of Fons in 1946 [ 11 ] and Emmons in 1963, [ 12 ] the quasi-steady equilibrium spread rate calculated for a surface fire on flat ground in no-wind conditions was calibrated using data from piles of sticks burned in a flame chamber/wind tunnel, and extended to represent other wind and slope conditions for the fuel complexes tested. 
Two-dimensional fire growth models such as FARSITE [ 13 ] and Prometheus, [ 14 ] the Canadian wildland fire growth model designed to work in Canadian fuel complexes, apply such semi-empirical relationships, and others regarding ground-to-crown transitions, to calculate fire spread and other parameters along the surface. Certain assumptions must be made in models such as FARSITE and Prometheus to shape the fire growth. For example, Prometheus and FARSITE use the Huygens principle of wave propagation. A set of equations that can be used to propagate (in shape and direction) a fire front of elliptical shape was developed by Richards in 1990. [ 15 ] Although more sophisticated applications use a three-dimensional numerical weather prediction system to provide inputs such as wind velocity to one of the fire growth models listed above, the coupling is passive: the feedback of the fire upon the atmospheric wind and humidity is not accounted for. Simplified, physically based, two-dimensional fire spread models based upon conservation laws, which use radiation as the dominant heat-transfer mechanism and convection to represent the effect of wind and slope, lead to reaction–diffusion systems of partial differential equations . [ 16 ] [ 17 ] More complex physical models join computational fluid dynamics models with a wildland fire component and allow the fire to feed back upon the atmosphere. 
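As a rough illustration of the Huygens-style construction used by models such as FARSITE and Prometheus, the sketch below advances a polygonal fire front by moving each vertex along its local outward normal, at a rate given by a wind-aligned elliptical "firelet" template. The shape parameters a, b, c and the scheme itself are illustrative simplifications, not the Richards (1990) equations.

```python
import math

def propagate_front(vertices, dt, a=1.0, b=0.5, c=0.4, wind_dir=0.0):
    """Advance a counterclockwise polygonal fire front one time step.

    Each vertex moves along its local outward normal by the distance an
    elliptical firelet (semi-axes a, b; ignition point offset c upwind
    of the center; long axis aligned with wind_dir) would reach in that
    direction, a crude stand-in for the Huygens construction.
    """
    n = len(vertices)
    new = []
    for i in range(n):
        x, y = vertices[i]
        xp, yp = vertices[(i - 1) % n]
        xn, yn = vertices[(i + 1) % n]
        tx, ty = xn - xp, yn - yp              # tangent from neighbors
        tlen = math.hypot(tx, ty) or 1.0
        nx, ny = ty / tlen, -tx / tlen         # outward normal (CCW polygon)
        theta = math.atan2(ny, nx) - wind_dir  # normal direction, wind frame
        # distance from the ignition point to the ellipse boundary along
        # direction theta: solve A r^2 + B r + C = 0 for the root r > 0
        A = (math.cos(theta) / a) ** 2 + (math.sin(theta) / b) ** 2
        B = -2.0 * c * math.cos(theta) / a ** 2
        C = (c / a) ** 2 - 1.0                 # requires c < a
        r = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
        new.append((x + dt * r * nx, y + dt * r * ny))
    return new
```

With wind_dir = 0 the front advances fastest downwind (the head fire moves at a + c per unit time) and slowest upwind (a - c), which reproduces the familiar elongated fire shape.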
These models include NCAR 's Coupled Atmosphere-Wildland Fire-Environment (CAWFE) model developed in 2005, [ 18 ] WRF-Fire at NCAR and the University of Colorado Denver, [ 19 ] which combines the Weather Research and Forecasting Model with a spread model based on the level-set method , the University of Utah 's Coupled Atmosphere-Wildland Fire Large Eddy Simulation developed in 2009, [ 20 ] Los Alamos National Laboratory's FIRETEC, [ 21 ] the WUI ( wildland–urban interface ) Fire Dynamics Simulator (WFDS) developed in 2007, [ 22 ] and, to some degree, the two-dimensional model FIRESTAR. [ 23 ] [ 24 ] [ 25 ] These tools have different emphases and have been applied to better understand fundamental aspects of fire behavior, such as the effect of fuel inhomogeneities on fire behavior [ 21 ] and the feedbacks between the fire and the atmospheric environment as the basis for the universal fire shape, [ 26 ] [ 27 ] and are beginning to be applied to wildland–urban interface house-to-house fire spread at the community scale. The cost of added physical complexity is a corresponding increase in computational cost, so much so that a full three-dimensional explicit treatment of combustion in wildland fuels by direct numerical simulation (DNS) at scales relevant for atmospheric modeling does not exist, is beyond current supercomputers, and does not currently make sense to attempt because of the limited skill of weather models at spatial resolutions under 1 km. Consequently, even these more complex models parameterize the fire in some way; for example, papers by Clark [ 28 ] [ 29 ] use equations developed by Rothermel for the USDA Forest Service [ 7 ] to calculate local fire spread rates using fire-modified local winds. 
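The level-set approach mentioned for WRF-Fire can be sketched in a few lines: the burned region is the set where a scalar field φ is negative, and the front (the zero contour of φ) moves outward at the local spread rate R by solving φ_t + R|∇φ| = 0. The grid, numerical scheme, and constant spread rate below are illustrative simplifications, not WRF-Fire's actual implementation.

```python
import numpy as np

def level_set_step(phi, rate, dx, dt):
    """One explicit time step of phi_t + R |grad phi| = 0.

    The fire front is the zero contour of phi (phi < 0 is burned area)
    and moves outward at the local spread rate R >= 0.  First-order
    Godunov upwind scheme on a uniform periodic grid (illustrative).
    """
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences
    dym = (phi - np.roll(phi, 1, axis=1)) / dx
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx
    # Godunov upwinding for an outward-moving front (rate >= 0)
    grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                   + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
    return phi - dt * rate * grad
```

Starting from a signed-distance field around an ignition circle, repeated steps grow the burned area outward; in a real coupled model the rate field would come from a semi-empirical spread formula driven by the fire-modified winds.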
And, although FIRETEC and WFDS carry prognostic conservation equations for the reacting fuel and oxygen concentrations, the computational grid cannot be fine enough to resolve the mixing of fuel and oxygen that limits the reaction rate, so approximations must be made concerning the subgrid-scale temperature distribution or the combustion reaction rates themselves. [ citation needed ] These models also are too small-scale to interact with a weather model, so their fluid motions use a computational fluid dynamics model confined in a box much smaller than the typical wildfire. [ citation needed ] Attempts to create the most complete theoretical model were made by Albini F.A. in the USA and Grishin A.M. [ 30 ] in Russia. Grishin's work is based on the fundamental laws of physics and conservation, and theoretical justifications are provided. A simplified two-dimensional model of a running crown forest fire was developed at Belarusian State University by Barovik D.V. [ 31 ] [ 32 ] and Taranchuk V.B. Data assimilation periodically adjusts the model state to incorporate new data using statistical methods. Because fire is highly nonlinear and irreversible, data assimilation for fire models poses special challenges, and standard methods, such as the ensemble Kalman filter (EnKF), do not work well. The statistical variability of the corrections, and especially large corrections, may result in nonphysical states, which tend to be preceded or accompanied by large spatial gradients . In order to ease this problem, the regularized EnKF [ 33 ] penalizes large changes of spatial gradients in the Bayesian update step of the EnKF. 
The regularization technique has a stabilizing effect on the simulations in the ensemble, but it does not much improve the ability of the EnKF to track the data: the posterior ensemble is made out of linear combinations of the prior ensemble, and if a reasonably close location and shape of the fire cannot be found among those linear combinations, the data assimilation is simply out of luck, and the ensemble cannot approach the data. From that point on, the ensemble evolves essentially without regard to the data, a situation known as filter divergence. So, there is clearly a need to adjust the simulation state by a position change rather than by an additive correction only. The morphing EnKF [ 34 ] combines the ideas of data assimilation with image registration and morphing to provide both additive and position correction in a natural manner, and can be used to change a model state reliably in response to data. [ 19 ] The limitations on fire modeling are not entirely computational. At this level, the models encounter limits in knowledge about the composition of pyrolysis products and reaction pathways , in addition to gaps in basic understanding about some aspects of fire behavior, such as fire spread in live fuels and the surface-to-crown fire transition. Thus, while more complex models have value in studying fire behavior and testing fire spread in a range of scenarios, from the application point of view, FARSITE and Palm -based applications of BEHAVE have shown great utility as practical in-the-field tools because of their ability to provide estimates of fire behavior in real time. While the coupled fire–atmosphere models can incorporate the ability of the fire to affect its own local weather and can model many aspects of the explosive, unsteady nature of fires that cannot be captured in current tools, it remains a challenge to apply these more complex models in a faster-than-real-time operational environment. 
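The point that the EnKF posterior is confined to linear combinations of the prior members can be seen directly in a minimal stochastic EnKF analysis step. This is a generic textbook formulation, not the wildfire-specific regularized or morphing variants:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """One stochastic EnKF analysis step (perturbed observations).

    ensemble:     (n_members, n_state) array of model states
    obs:          (n_obs,) observation vector
    obs_operator: (n_obs, n_state) linear observation operator H
    obs_var:      scalar observation-error variance
    """
    n = ensemble.shape[0]
    Hx = ensemble @ obs_operator.T              # predicted observations
    X = ensemble - ensemble.mean(axis=0)        # state anomalies
    Y = Hx - Hx.mean(axis=0)                    # predicted-obs anomalies
    P_xy = X.T @ Y / (n - 1)                    # state-obs cross-covariance
    P_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
    # perturbing the observations keeps the posterior spread consistent
    noise = rng.normal(0.0, np.sqrt(obs_var), size=(n, len(obs)))
    return ensemble + (obs + noise - Hx) @ K.T
```

Because the increment is built from the cross-covariance of the prior anomalies, each posterior member stays in the affine subspace spanned by the prior ensemble; an ensemble whose members all place the fire in the wrong region cannot be moved to the right one by this update, which is the root of the filter divergence described in the text.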
Also, although they have reached a certain degree of realism when simulating specific natural fires, coupled models must still address issues such as identifying what specific, relevant operational information they could provide beyond current tools, how the simulation time could fit the operational time frame for decisions (the simulation must therefore run substantially faster than real time), what temporal and spatial resolution the model must use, and how to estimate the inherent uncertainty of numerical weather prediction in their forecasts. These operational constraints must be used to steer model development.
https://en.wikipedia.org/wiki/Wildfire_modeling
A wildflower (or wild flower ) is a flower that grows in the wild, rather than being intentionally seeded or planted. The term implies that the plant is neither a hybrid nor a selected cultivar that differs in any way from the native plant , even if it is growing where it would not naturally be found. The term can refer to the whole plant, even when not in bloom, and not just the flower. [ 1 ] "Wildflower" is an imprecise term. More exact terms include: In the United Kingdom, the organization Plantlife International instituted the "County Flowers scheme" in 2002 (see County flowers of the United Kingdom ), for which members of the public nominated and voted for a wildflower emblem for their county . The aim was to spread awareness of the heritage of native species and of the need for conservation, as some of these species are endangered. For example, Somerset has adopted the cheddar pink ( Dianthus gratianopolitanus ), London the rosebay willowherb ( Chamerion angustifolium ), and Denbighshire /Sir Ddinbych in Wales the rare limestone woundwort ( Stachys alpina ). Media related to Wild flowers at Wikimedia Commons
https://en.wikipedia.org/wiki/Wildflower
Wildlife refers to undomesticated animal and uncultivated plant species that can exist in their natural habitat, but the term has come to include all organisms that grow or live wild in an area without being introduced by humans . [ 1 ] Wildlife was also once synonymous with game : those birds and mammals that were hunted for sport . Wildlife can be found in all ecosystems . Deserts , plains , grasslands , woodlands , forests , and other areas, including the most developed urban areas , all have distinct forms of wildlife. While the term in popular culture usually refers to animals that are untouched by human factors, most scientists agree that much wildlife is affected by human activities . [ 2 ] Some wildlife threatens human safety, health, property, and quality of life . However, many wild animals, even the dangerous ones, have value to human beings. This value might be economic, educational, or emotional in nature. Humans have historically tended to separate civilization from wildlife in a number of ways, including the legal, social, and moral senses. Some animals, however, have adapted to suburban environments. This includes urban wildlife such as feral cats , dogs, mice, and rats. Some religions declare certain animals to be sacred, and in modern times, concern for the natural environment has provoked activists to protest against the exploitation of wildlife for human benefit or entertainment. According to the 2020 World Wildlife Fund Living Planet Report and the Zoological Society of London 's Living Planet Index, global wildlife populations have decreased by 68% since 1970 as a result of human activity, particularly overconsumption , population growth , and intensive farming , which is further evidence that humans have unleashed a sixth mass extinction event. 
[ 3 ] [ 4 ] Different countries have various legal definitions of “wildlife”, [ 5 ] but according to CITES , it has been estimated that the international wildlife trade amounts to billions of dollars annually and affects hundreds of millions of animal and plant specimens. [ 6 ] Wildlife trade refers to the exchange of products derived from non-domesticated animals or plants, usually extracted from their natural environment or raised under controlled conditions. It can involve the trade of living or dead individuals, tissues such as skins, bones, or meat, or other products. Legal wildlife trade is regulated by the United Nations ' Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which currently has 184 member countries, called Parties . [ 7 ] Illegal wildlife trade is widespread and constitutes one of the major illegal economic activities, comparable to the traffic in drugs and weapons . [ 8 ] Stone Age people and hunter-gatherers relied on wildlife, both plants and animals, for their food. In fact, some species may have been hunted to extinction by early human hunters. Today, hunting, fishing, and gathering wildlife is still a significant food source in some parts of the world. In other areas, hunting and non-commercial fishing are mainly seen as a sport or recreation. Meat sourced from wildlife that is not traditionally regarded as game is known as bushmeat . The increasing demand for wildlife as a source of traditional food in East Asia is decimating populations of sharks, primates, pangolins, and other animals, which are believed there to have aphrodisiac properties. Malaysia is home to a vast array of wildlife. However, illegal hunting and trade pose a threat to Malaysia's natural diversity. 
A November 2008 report from biologist and author Sally Kneidel, PhD, documented numerous wildlife species for sale in informal markets along the Amazon River , including wild-caught marmosets sold for as little as $1.60 (5 Peruvian soles). [ 14 ] [ self-published source? ] Many Amazon species, including peccaries , agoutis , turtles , turtle eggs, anacondas , and armadillos , are sold primarily as food. Wildlife has long been a common subject for educational television shows . National Geographic Society specials have appeared on CBS since 1965, later moving to the American Broadcasting Company and then the Public Broadcasting Service . In 1963, NBC debuted Wild Kingdom , a popular program featuring zoologist Marlin Perkins as host. The BBC Natural History Unit in the United Kingdom was a similar pioneer; its first wildlife series, Look , presented by Sir Peter Scott , was a studio-based show with filmed inserts. David Attenborough first made his appearance in this series, which was followed by the series Zoo Quest , during which he and cameraman Charles Lagus went to many exotic places looking for and filming elusive wildlife—notably the Komodo dragon in Indonesia and lemurs in Madagascar. [ 15 ] Since 1984, the Discovery Channel and its spinoff Animal Planet in the US have dominated the market for shows about wildlife on cable television, while on the Public Broadcasting Service the NATURE strand made by WNET-13 in New York and NOVA by WGBH in Boston are notable. Wildlife television is now a multimillion-dollar industry with specialist documentary film-makers in many countries, including the UK, US, New Zealand, Australia, Austria, Germany, Japan, and Canada. [ citation needed ] There are many magazines and websites which cover wildlife, including National Wildlife , Birds & Blooms , Birding , wildlife.net , and Ranger Rick for children. 
Many animal species have spiritual significance in different cultures around the world, and they and their products may be used as sacred objects in religious rituals. For example, eagles, hawks, and their feathers have great cultural and spiritual value to Native Americans as religious objects. In Hinduism the cow is regarded as sacred. [ 16 ] Muslims conduct sacrifices on Eid al-Adha to commemorate the sacrificial spirit of Ibrāhīm ( Arabic for Abraham ) in love of God . Camels, sheep, and goats may be offered as sacrifice during the three days of Eid. [ 17 ] In Christianity the Bible has a variety of animal symbols; the Lamb is a famous title of Jesus. In the New Testament the Gospels of Mark , Luke , and John have animal symbols: "Mark is a lion, Luke is a bull and John is an eagle." [ 18 ] Wildlife tourism is an element of many nations' travel industry, centered around observation of and interaction with local animal and plant life in their natural habitats. While it can include eco - and animal-friendly tourism, safari hunting and similar high-intervention activities also fall under the umbrella of wildlife tourism. Wildlife tourism, in its simplest sense, is interacting with wild animals in their natural habitat , either actively (e.g. hunting/collection) or passively (e.g. watching/photography). Wildlife tourism is an important part of the tourism industries in many countries, including many African and South American countries, Australia , India , Canada , Indonesia , Bangladesh , Malaysia , Sri Lanka , and the Maldives , among many others. It has experienced dramatic and rapid growth in recent years worldwide, and many elements are closely aligned to eco-tourism and sustainable tourism . Wild animal suffering is suffering experienced by non-human animals living in the wild, outside of direct human control, due to natural processes. 
Its sources include disease , injury , parasitism , starvation , malnutrition , dehydration , weather conditions , natural disasters , killings by other animals , and psychological stress . [ 21 ] [ 22 ] An extensive amount of natural suffering has been described as an unavoidable consequence of Darwinian evolution . [ 23 ] The pervasiveness of reproductive strategies that favor producing large numbers of offspring with a low amount of parental care, of which only a small number survive to adulthood, the rest dying in painful ways, has led some to argue that suffering dominates happiness in nature. [ 21 ] [ 24 ] [ 25 ] Some estimates suggest that the total population of wild animals, excluding nematodes but including arthropods, is between 10¹⁸ and 10²¹ individuals, vastly greater than the number of animals killed by humans each year. [ 26 ] The topic has historically been discussed in the context of the philosophy of religion as an instance of the problem of evil . [ 27 ] More recently, starting in the 19th century, a number of writers have considered the subject from a secular standpoint as a general moral issue that humans might be able to help prevent. [ 28 ] There is considerable disagreement about taking such action, as many believe that human interventions in nature should not take place because of practicality, [ 29 ] valuing ecological preservation over the well-being and interests of individual animals, [ 30 ] considering any obligation to reduce wild animal suffering implied by animal rights to be absurd, [ 31 ] or viewing nature as an idyllic place where happiness is widespread. [ 24 ] Some argue that such interventions would be an example of human hubris , or playing God , and cite examples of how human interventions, undertaken for other reasons, have unintentionally caused harm. 
[ 32 ] Others, including animal rights writers, have defended variants of a laissez-faire position, which argues that humans should not harm wild animals, but also that humans should not intervene to reduce the natural harms that they experience. [ 33 ] [ 34 ] This subsection focuses on anthropogenic forms of wildlife destruction. The loss of animals from ecological communities is also known as defaunation . [ 40 ] Exploitation of wild populations has been a characteristic of modern humans since our exodus from Africa 130,000 – 70,000 years ago. The rate of extinctions of entire species of plants and animals across the planet has been so high in the last few hundred years that it is widely believed that a sixth great extinction event ("the Holocene Mass Extinction ") is currently ongoing. [ 41 ] [ 42 ] [ 43 ] [ 44 ] The 2019 Global Assessment Report on Biodiversity and Ecosystem Services , published by the United Nations ' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services , says that roughly one million species of plants and animals face extinction within decades as the result of human actions. [ 45 ] [ 46 ] Subsequent studies have found that the destruction of wildlife is "significantly more alarming" than previously believed, with some 48% of 70,000 monitored animal species experiencing population declines as the result of human industrialization. [ 47 ] [ 48 ] According to a 2023 study published in PNAS , "immediate political, economic, and social efforts of an unprecedented scale are essential if we are to prevent these extinctions and their societal impacts." [ 49 ] [ 50 ] The four most general causes of the destruction of wildlife are overkill, habitat destruction and fragmentation , the impact of introduced species, and chains of extinction. [ 51 ] Overkill happens whenever hunting occurs at rates greater than the reproductive capacity of the population being exploited. 
The effects of this are often noticed much more dramatically in slow-growing populations, such as many larger species of fish. Initially, when a portion of a wild population is hunted, the survivors experience an increased availability of resources (food, etc.), which increases growth and reproduction as density-dependent inhibition is lowered; hunting, fishing, and so on have lowered the competition between members of the population. However, if this hunting continues at a rate greater than the rate at which new members of the population can reach breeding age and produce more young, the population will begin to decrease in numbers. [ 52 ] Populations that are confined to islands, whether literal islands or just areas of habitat that are effectively an "island" for the species concerned, have also been observed to be at greater risk of dramatic population declines following unsustainable hunting . The habitat of any given species is considered its preferred area or territory . Many processes associated with human habitation of an area cause loss of this area and decrease the carrying capacity of the land for that species. In many cases these changes in land use cause a patchy break-up of the wild landscape. Agricultural land frequently displays this type of extremely fragmented, or relictual, habitat. Farms sprawl across the landscape with patches of uncleared woodland or forest dotted in between occasional paddocks. Examples of habitat destruction include grazing of bushland by farmed animals, changes to natural fire regimes, forest clearing for timber production, and wetland draining for city expansion. This is particularly challenging since wild animals cannot drink tap water, which means they cannot survive on their own in habitats with no access to surface water. Mice, cats, rabbits, dandelions, and poison ivy are all examples of species that have become invasive threats to wild species in various parts of the world. 
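The overkill dynamic described above can be illustrated with a discrete logistic growth model under a constant annual harvest; the parameter values below are purely illustrative:

```python
def simulate_population(n0, r, K, harvest, years):
    """Discrete logistic growth with a constant annual harvest.

    The maximum sustainable yield of this model is roughly r*K/4
    (growth peaks at half the carrying capacity K); a harvest above
    that drives the population to collapse no matter how large it
    starts.  All parameter values are illustrative.
    """
    n = float(n0)
    history = [n]
    for _ in range(years):
        # logistic growth minus the harvest, floored at extinction
        n = max(n + r * n * (1.0 - n / K) - harvest, 0.0)
        history.append(n)
    return history
```

For example, with r = 0.5 and K = 1000 the maximum sustainable yield is about 125 animals per year: a harvest of 50 lets a population of 500 settle near a stable equilibrium, while a harvest of 200 drives it to zero within a few years, mirroring the collapse of slow-growing fish stocks.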
Frequently, species that are uncommon in their home range become out-of-control invaders in distant but similar climates. The reasons for this have not always been clear, and Charles Darwin felt it was unlikely that exotic species would ever be able to grow abundantly in a place in which they had not evolved. The reality is that the vast majority of species exposed to a new habitat do not reproduce successfully. Occasionally, however, some populations do take hold, and after a period of acclimation they can increase in numbers significantly, having destructive effects on many elements of the native environment of which they have become part. This final group is one of secondary effects. All wild populations of living things have many complex intertwining links with other living things around them. Large herbivorous animals such as the hippopotamus have populations of insectivorous birds that feed off the many parasitic insects that grow on the hippo. Should the hippo die out, so too will these groups of birds, leading to further destruction as other species dependent on the birds are affected. Also referred to as a domino effect , this series of chain reactions is by far the most destructive process that can occur in any ecological community . Another example is the black drongos and the cattle egrets found in India. These birds feed on insects on the backs of cattle, which helps to keep the cattle disease-free. Destroying the nesting habitats of these birds would cause a decrease in the cattle population because of the spread of insect-borne diseases.
https://en.wikipedia.org/wiki/Wildlife
A wildlife corridor , also known as a habitat corridor or green corridor, [ 1 ] is a designated area that connects wildlife populations that have been separated by human activities or structures, such as development, roads, or land clearings. These corridors enable movement of individuals between populations, which helps to prevent the negative effects of inbreeding and reduced genetic diversity , often caused by genetic drift , that can occur in isolated populations. [ 2 ] Additionally, corridors support the re-establishment of populations that may have been reduced or wiped out by random events such as fires or disease. They can also mitigate some of the severe impacts of habitat fragmentation , [ 3 ] a result of urbanization that divides habitat areas and restricts animal movement. Habitat fragmentation from human development poses an increasing threat to biodiversity , and habitat corridors help to reduce its harmful effects. Corridors, aside from their benefit to vulnerable wildlife populations, can conflict with the communities surrounding them when human-wildlife conflicts are involved. [ 4 ] In other communities, the benefits of wildlife corridors to wildlife conservation are used and managed by indigenous communities. [ 5 ] Habitat corridors can be considered a management tool in areas where the destruction of a natural habitat has severely impacted native species , whether due to human development or natural disasters. When land is fragmented, wildlife populations may become unstable or isolated from larger populations. [ 6 ] These management tools are used by ecologists, biologists , indigenous tribes, and other concerned parties that oversee wildlife populations. Corridors help reconnect these fragmented populations and reduce negative population fluctuations by supporting the key aspects that stabilize populations: [ 7 ] Daniel Rosenberg et al. 
[ 8 ] were among the first to define the concept of wildlife corridors, developing a model that emphasized the corridors' role in facilitating movement unrestricted by the extent of native vegetation or by intermediate target patches of habitat. [ 9 ] Wildlife corridors also have significant indirect effects on plant populations by increasing pollen and seed dispersal through the movement of animals of various species between isolated habitat patches. [ 10 ] Corridors must be large enough to support minimum critical populations, reduce migration barriers, and maximize connectivity between populations. [ 11 ] Wildlife corridors may also include aquatic habitats, often referred to as riparian ribbons , [ 12 ] typically found in the form of rivers and streams. Terrestrial corridors take the form of wooded strips connecting forested areas, or of urban hedgerows. [ 11 ] Wildlife corridors can connect federal, state, private, and tribal land, which can influence the opposition to or acceptance of wildlife corridors. The development of man-made structures and expansion into natural areas can have an impact on both humans and wildlife. [ 13 ] Although expressions such as " freedom to roam " promote the idea of wildlife freely moving throughout natural landscapes, this same ideology has not been applied to indigenous peoples. [ 14 ] Theoretical treatments of landscape connectivity present it in a purely scientific and non-political manner that fails to account for political factors that can affect the success of wildlife corridors and restorative ecological practices. [ 14 ] [ 15 ] Attempts to restore habitat over time require support from the local communities that surround the habitat area around which a restoration project is being placed; oftentimes these communities are indigenous. 
[ 16 ] Indigenous knowledge of ecological landscape features across history is usually substituted with European explorers' recollections of landscape ecology when developing wide-scale corridor plans, and within the broader ecological field. [ 14 ] [ 17 ] [ 13 ] As such, there is a distinction between the use of ecological and of indigenous knowledge when taking into account where wildlife populations are found, the species composition within a community, and even the lengths of, and changes in, seasonal patterns. [ 16 ] [ 18 ] Widespread efforts that actively involve the input of a variety of political and environmental groups are not always used in ecological restoration efforts. Currently, there are some ongoing collaborations between indigenous groups surrounding wildlife corridor habitat, such as the Yellowstone to Yukon Conservation Initiative , which promote the conversion of previously stolen land into indigenously managed land. [ 14 ] The concern regarding land once used and lived upon by indigenous people, which now makes up habitat within wildlife corridors, and developed land that corridors cut across, contributes to the Land Back movement. [ 14 ] Managing both terrestrial and aquatic lands can have a positive economic impact on indigenous groups that continue to rely on wildlife populations for cultural practices, fishing, hunting, etc. in a variety of natural landscapes. [ 13 ] [ 19 ] Indigenous groups face financial inequities despite the large benefits of conservation efforts; this is the result of a lack of consideration of how wildlife corridors can impact local communities. [ 13 ] The overlap with wildlife, specifically larger predator species, poses a physical danger to local communities. [ 20 ] For local groups nearby or within heavily forested areas, higher chances of wildlife encounters pose a threat to human property, crops, and livestock, and thus to economic revenue; fisheries can also be negatively impacted by wilderness areas. 
[ 20 ] Many indigenous tribes manage wildlife populations within tribal lands that are legally recognized by governments, yet these tribes lack the finances to effectively manage large swathes of habitat. [ 5 ] The Tribal Wildlife Corridors Act would allow indigenous groups across the U.S. to implement wildlife corridors with both the finances and cooperation of neighboring governmental allies to help manage tribal lands. [ 5 ] Most species can be categorized into one of two groups: passage users and corridor dwellers . Passage users occupy corridors for brief periods. These animals use corridors for such events as seasonal migration , juvenile dispersal, or moving between different parts of a large home range. Large herbivores , medium to large carnivores , and migratory species are typical passage users. [ 21 ] Corridor dwellers , on the other hand, can occupy a corridor for several years. Species such as plants , reptiles , amphibians , birds , insects , and small mammals may spend their entire lives in linear habitats. In such cases, the corridor must provide enough resources to support such species. [ 21 ] Habitat corridors can be categorized based on their width, with wider corridors generally supporting greater wildlife use. [ 22 ] However, the overall effectiveness of a corridor depends more on its design than its width. [ 11 ] Corridor widths are commonly divided into three main categories. Habitat corridors can also be classified based on their continuity. Continuous corridors are uninterrupted strips of habitat, while " stepping stone " corridors consist of small, separate patches of suitable habitat. However, stepping-stone corridors are more vulnerable to edge effects , which can reduce their effectiveness. Corridors can also take the form of wildlife crossings , such as underpasses or overpasses that allow animals to cross man-made structures like roads, helping to reduce human-wildlife conflict , such as roadkill .
Observations suggest that underpasses tend to be more effective than overpasses, as many animals are too timid to cross over a bridge in front of traffic and prefer the cover of an underpass. [ 23 ] Researchers use mark-recapture techniques and hair snares to assess genetic flow and observe how wildlife utilizes corridors. [ 24 ] Marking and recapturing animals helps track individual movement. [ 25 ] Genetic testing is also used to evaluate migration and mating patterns. By analyzing gene flow within a population, researchers can better understand the long-term role of corridors in migration and genetic diversity. [ 25 ] Wildlife corridors are most effective when designed with the ecology of their target species in mind. Factors such as seasonal movement, avoidance behavior, dispersal patterns, and specific habitat requirements must also be considered. [ 26 ] Corridors are more successful when they include some degree of randomness or asymmetry and are oriented perpendicular to habitat patches. [ 27 ] [ 11 ] However, they are vulnerable to edge effects ; habitat quality along the edge of a habitat fragment is often much lower than in core habitat areas. While wildlife corridors are essential for large species that require extensive ranges, they are also crucial for smaller animals and plants, acting as ecological connectors allowing movement between isolated habitat fragments. [ 28 ] Additionally, wildlife corridors are designed to reduce human-wildlife conflicts. [ 29 ] [ 30 ] In Alberta, Canada , overpasses have been constructed to keep animals off the Trans-Canada Highway , which passes through Banff National Park . The tops of the bridges are planted with trees and native grasses, with fences present on either side to help guide animals. [ 31 ] In Southern California , 15 underpasses and drainage culverts were observed to see how many animals used them as corridors.
They proved to be especially effective for wide-ranging species such as carnivores, mule deer , small mammals, and reptiles, even though the corridors were not intended specifically for animals. Researchers also learned that factors such as surrounding habitat, underpass dimensions, and human activity played a role in the frequency of usage. [ 32 ] In South Carolina , five remnant areas of land were monitored; one was put in the center with the other four surrounding it. Then, a corridor was put between one of the remnants and the center. Butterflies that were placed in the center habitat were two to four times more likely to move to the connected remnant than to the disconnected ones. Furthermore, when male holly plants were placed in the center region, female holly plants in the connected region increased in seed production by 70 percent compared to plants in the disconnected region. Plant seed dispersal through bird droppings was noted to be the dispersal method with the largest increase within the corridor-connected patch of land. [ 33 ] In June 2021, the Florida Wildlife Corridor Act was passed in Florida, securing a statewide network of nearly 18 million acres of connected ecosystems. [ 34 ] The corridor stretches from the Alabama state line, through the Florida Panhandle, all the way to the Florida Keys, and contains state parks, national forests, and wildlife management areas supporting both wildlife and human occupation. Corridors have also shown positive effects on the rates of transfer and interbreeding in vole populations. A control population, in which voles were confined to their core habitat with no corridor, was compared to a treatment population in their core habitat with passages that they could use to move to other regions. Females typically stayed and mated within their founder population , but the rate of transfer through corridors among the males was very high.
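Studies like the vole experiment rely on marking animals and recording recaptures. Beyond transfer rates, the same kind of data can yield a population-size estimate; the article does not name a specific estimator, but a standard textbook choice is the Lincoln–Petersen index with Chapman's small-sample correction, sketched here in Python with purely illustrative numbers:

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Chapman-corrected Lincoln-Petersen estimate of population size.

    marked_first  -- animals marked and released in the first session
    caught_second -- animals caught in the second session
    recaptured    -- animals in the second session that carry a mark
    """
    # Chapman's correction (+1 terms) avoids division by zero when no
    # marked animals are recaptured and reduces bias for small samples.
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1

# Hypothetical survey: 50 voles marked, 40 caught later, 10 of them marked.
estimate = lincoln_petersen(50, 40, 10)  # ~189.1 individuals
```

The estimator assumes a closed population and equal catchability between sessions, assumptions that corridor studies would need to check before applying it.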
[ 35 ] In 2001, a wolf corridor was restored through a golf course in Jasper National Park , Alberta , which successfully altered wildlife behavior and showed frequent use by the wolf population. [ 36 ] [ 37 ] Some species are more likely to utilize habitat corridors depending on migration and mating patterns, making it essential that corridor design is targeted towards a specific species. [ 50 ] [ 51 ] Due to space constraints, buffers are not usually implemented. [ 8 ] Without a buffer zone, corridors can become affected by disturbances from human land use change . There is a possibility that corridors could aid in the spread of invasive species, threatening native populations. [ 52 ]
https://en.wikipedia.org/wiki/Wildlife_corridor
Wildlife management is the management process influencing interactions among and between wildlife , its habitats and people to achieve predefined impacts. [ 1 ] [ 2 ] [ 3 ] Wildlife management can include wildlife conservation , population control , gamekeeping , wildlife contraception and pest control . [ 4 ] [ 5 ] Wildlife management aims to halt the loss in the Earth's biodiversity , [ 6 ] [ 7 ] by taking into consideration ecological principles such as carrying capacity , disturbance and succession , and environmental conditions such as physical geography , pedology and hydrology . [ 8 ] [ 9 ] [ 10 ] [ 11 ] Most wildlife biologists are concerned with the conservation and improvement of habitats, although rewilding is increasingly being undertaken. [ 12 ] [ 13 ] Techniques can include reforestation , pest control , nitrification and denitrification , irrigation , coppicing and hedge laying . [ 14 ] Gamekeeping is the management or control of wildlife for the well-being of game and may include the killing of other animals which share the same niche or predators to maintain a high population of more profitable species, such as pheasants introduced into woodland. Aldo Leopold defined wildlife management in 1933 as the art of making land produce sustained annual crops of wild game for recreational use. [ 15 ] The history of wildlife management begins with the game laws , which regulated the right to kill fish and game . [ 16 ] In Great Britain game laws developed out of the forest laws , which in the time of the Norman kings were very oppressive. Under William the Conqueror , it was as great a crime to kill one of the king's deer as to kill one of his subjects. A certain rank and standing were for a long time qualifications indispensably necessary to confer upon anyone the right of pursuing and killing game.
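The carrying-capacity principle mentioned above is commonly formalized with the logistic growth model, dN/dt = rN(1 − N/K). As a hedged illustration (a textbook model with made-up parameters, not figures from this article), a simple Euler simulation in Python shows a population leveling off at its carrying capacity K:

```python
def logistic_step(n, r, k, dt=1.0):
    """Advance the logistic model dN/dt = r*N*(1 - N/K) by one Euler step."""
    return n + r * n * (1.0 - n / k) * dt

# Illustrative values: 100 animals, growth rate 0.3/yr, carrying capacity 1000.
n = 100.0
for _ in range(50):
    n = logistic_step(n, r=0.3, k=1000.0)
# After 50 steps the population has leveled off just below K = 1000.
```

Managers use this kind of model to reason about when a population is approaching the limit its habitat can support; real populations also respond to disturbance, succession, and the other environmental conditions listed above.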
[ 16 ] The late 19th century saw the passage of the first pieces of wildlife conservation legislation and the establishment of the first nature conservation societies. The Sea Birds Preservation Act of 1869 was passed in the United Kingdom as the first nature protection law in the world [ 17 ] after extensive lobbying from the Association for the Protection of Sea-Birds . [ 18 ] The Game Act 1831 ( 1 & 2 Will. 4 . c. 32) protected game birds by establishing close seasons when they could not be legally taken. The act made it lawful to take game only with the provision of a game license and provided for the appointment of gamekeepers around the country. The purpose of the law was to balance the needs for preservation and harvest and to manage both the environment and populations of fish and game. [ 19 ] The Royal Society for the Protection of Birds was founded as the Plumage League in 1889 by Emily Williamson at her house in Manchester [ 20 ] as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing . The group gained popularity and eventually amalgamated with the Fur and Feather League in Croydon to form the RSPB. [ 21 ] The Society attracted growing support from the suburban middle classes as well as support from many other influential figures, such as the ornithologist Professor Alfred Newton . [ 20 ] The National Trust formed in 1895 with the manifesto to "...promote the permanent preservation, for the benefit of the nation, of lands, ...to preserve (so far practicable) their natural aspect." On 1 May 1899, the Trust purchased two acres of Wicken Fen with a donation from the amateur naturalist Charles Rothschild , establishing the first nature reserve in Britain. [ 22 ] Rothschild was a pioneer of wildlife conservation in Britain, and went on to establish many other nature reserves, such as one at Woodwalton Fen , near Huntingdon , in 1910.
[ 23 ] During his lifetime he built and managed his estate at Ashton Wold [ 24 ] in Northamptonshire to maximise its suitability for wildlife, especially butterflies. Concerned about the loss of wildlife habitats, in 1912 he set up the Society For The Promotion Of Nature Reserves , the forerunner of The Wildlife Trusts partnership . During the society's early years, membership tended to be made up of specialist naturalists and its growth was comparatively slow. The first independent Trust was formed in Norfolk in 1926 as the Norfolk Naturalists Trust, followed in 1938 by the Pembrokeshire Bird Protection Society, which after several subsequent changes of name is now the Wildlife Trust of South and West Wales . It was not until the 1940s and 1950s that more Naturalists' Trusts were formed in Yorkshire , Lincolnshire, Leicestershire and Cambridgeshire . These early Trusts tended to focus on purchasing land to establish nature reserves in the geographical areas they served. Since the later 20th century, wildlife management has been undertaken by several organizations, including government bodies such as the Forestry Commission , charities such as the RSPB and The Wildlife Trusts , and privately hired gamekeepers and contractors. Legislation has also been passed to protect wildlife, such as the Wildlife and Countryside Act 1981. The UK government also gives farmers subsidies through the Countryside Stewardship Scheme to improve the conservation value of their farms. Early game laws were enacted in the United States in 1839, when Rhode Island closed the hunting season for white-tailed deer from May to November. Other regulations during this time focused primarily on restricting hunting. At this time, lawmakers did not consider population sizes or the need for preservation or restoration of wildlife habitats.
[ 25 ] The profession of wildlife management was established in the United States in the 1920s and 1930s by Aldo Leopold and others who sought to transcend the purely restrictive policies of the previous generation of conservationists, such as anti-hunting activist William T. Hornaday . Leopold and his close associate Herbert Stoddard , who had both been trained in scientific forestry, argued that modern science and technology could be used to restore and improve wildlife habitat and thus produce abundant "crops" of ducks, deer, and other valued wild animals. [ 26 ] The institutional foundations of the profession of wildlife management were established in the 1930s, when Leopold was granted the first university professorship in wildlife management (1933, University of Wisconsin, Madison ), when Leopold's textbook 'Game Management' was published (1933), when The Wildlife Society was founded, when the Journal of Wildlife Management began publishing, and when the first Cooperative Wildlife Research Units were established. Conservationists planned many projects throughout the 1940s, some of which included the harvesting of female mammals such as deer to decrease rising populations; others included waterfowl and wetland research. The Fish and Wildlife Management Act was put in place to urge farmers to plant food for wildlife and to provide cover for them. [ 26 ] In 1937 the Federal Aid in Wildlife Restoration Act (also known as the Pittman-Robertson Act ) was passed in the U.S. This law was an important advancement in the field of wildlife management. It placed an 11% tax on sales of firearms and ammunition. The funds generated were then distributed to the states for use in wildlife management activities and research. This law is still in effect today. [ 27 ] Wildlife management grew after World War II with the help of the GI Bill and a postwar boom in recreational hunting.
An important step in wildlife management in the United States national parks occurred after several years of public controversy regarding the forced reduction of the elk population in Yellowstone National Park . [ 28 ] In 1963, United States Secretary of the Interior Stewart Udall appointed an advisory board to collect scientific data to inform future wildlife management. In the Leopold Report , the committee observed that culling programs at other national parks had been ineffective, and recommended active management of Yellowstone's elk population. [ 29 ] Elk overpopulation in Yellowstone is thought by many wildlife biologists, such as Douglas Smith, to have been primarily caused by the extirpation of wolves from the park and surrounding environment. After wolves were removed, elk herds increased in population, reaching new highs during the mid-1930s. The increased number of elk resulted in overgrazing in parts of Yellowstone. Park officials decided that the elk herd should be managed. For approximately thirty years, the park elk herds were culled: each year some were captured and shipped to other locations, a certain number were killed by park rangers, and hunters were allowed to take elk that migrated outside the park. By the late 1960s the herd populations dropped to historic lows (less than 4,000 for the Northern Range herd). This caused outrage among both conservationists and hunters. The park service stopped culling elk in 1968. The elk population then rebounded. Twenty years later there were 19,000 elk in the Northern Range herd, a historic high. [ 30 ] Since the tumultuous 1970s, when animal rights activists and environmentalists began to challenge some aspects of wildlife management, the profession has been overshadowed by the rise of conservation biology .
Although wildlife managers remain central to the implementation of the Endangered Species Act and other wildlife conservation policies, conservation biologists have shifted the focus of conservation away from wildlife management's concern with the protection and restoration of single species and toward the maintenance of ecosystems and biodiversity . In the United States, wildlife management practices are implemented by governmental agencies under legislation such as the Endangered Species Act . [ 30 ] Custodial management is preventive or protective. The aim is to minimize external influences on the population and its habitat. It is appropriate in a national park where one of the stated goals is to protect ecological processes. It is also appropriate for conservation of a threatened species where the threat is of external origin rather than being intrinsic to the system. Feeding of animals by visitors is discouraged. [ 31 ] [ 32 ] Manipulative management acts on a population, either changing its numbers by direct means or influencing numbers by the indirect means of altering food supply, habitat, density of predators, or prevalence of disease. This is appropriate when a population is to be harvested, or when it slides to an unacceptably low density or increases to an unacceptably high level. Such densities are inevitably the subjective view of the land owner, and may be disputed by animal welfare interests. [ 33 ] Rewilding approaches emphasize the need to reduce human impact on ecosystems to restore them to their self-sustaining state. This approach differs from traditional wildlife management techniques by recognizing the role of keystone species , which have an outsize effect on their ecosystems. By reintroducing keystone species, rewilding aims to restore missing ecological processes whose loss has led to widespread environmental collapse.
This field attributes the ongoing need for human intervention in ecosystem management to the historical extinction of a limited number of megafaunal species. [ 34 ] Rewilding strategies include passive, active, Pleistocene , trophic, and urban rewilding techniques. Recent research suggests certain types of rewilding, particularly trophic rewilding, may contribute to climate change mitigation by restoring ecosystem resilience and improving carbon sequestration . [ 35 ] Pest control is the control of real or perceived pests and can be used for the benefit of wildlife, farmers, gamekeepers or human safety. Wildlife management studies, research and lobbying by interest groups help designate times of the year when certain wildlife species can be legally hunted, allowing for surplus animals to be removed. In the United States, hunting season and bag limits are determined by guidelines set by the United States Fish and Wildlife Service for waterfowl and other migratory gamebirds. [ 36 ] The hunting season and bag limits for state-regulated game species such as deer are determined by state game commissions, which are made up of representatives from various interest groups and wildlife biologists. [ 37 ] Open season is when wildlife is allowed to be hunted by law, and is usually not during the breeding season . Hunters may be restricted by sex, age or class of animal; for instance, there may be an open season for any male deer with 4 points or better on at least one antler. Closed season is when wildlife is protected from hunting, usually during its breeding season. Closed season is enforced by law; any hunting during closed season is punishable and termed illegal hunting or poaching . Open and closed season on deer in the United Kingdom is legislated for in the Deer Act.
[ 38 ] [ 39 ] Habitat destruction and fragmentation are major drivers of biodiversity loss in the United Kingdom; [ 40 ] however, landowners who participate in field sports , particularly hunting and shooting, are more likely to conserve and reinstate woodlands [ 41 ] and hedgerows [ 42 ] because they are used by quarry species. A study in 2003 showed that such landowners are around 2.5 times more likely to plant new woodlands than landowners without game or hunting interests, and also conserve a far greater woodland area. [ 41 ] Landowners undertake management measures to improve habitats for quarry species, including shrub planting, coppicing [ 43 ] and skylighting [ 44 ] to encourage understory growth. Roger Draycott of the Game and Wildlife Conservation Trust has reported the conservation benefits of game management schemes in the UK, including woodlands with denser undergrowth and higher abundances of native birds. [ 45 ] However, overall evidence that game shooting is beneficial to wider biodiversity has been inconclusive: high densities of game birds are known to negatively impact ecosystems, resulting in shorter grassland vegetation, [ 46 ] lower floral diversity in semi-natural woodlands, [ 47 ] fewer saplings in hedgerows leading from such woodlands, [ 48 ] and reductions in arthropod biomass attributable to predation. The rearing of both wild and released game birds requires the provision of food and shelter during the winter months, [ 49 ] and to achieve this landowners plant cover crops . Generally consisting of species such as maize or quinoa , these crops are planted in strips alongside arable land. Cover crops are also used by a variety of nationally declining farmland birds such as linnets and finches , [ 50 ] [ 51 ] [ 52 ] providing valuable food resources and refuge from predators.
[ 53 ] Prior to the Hunting Act , fox hunting also provided a major incentive for woodland conservation and management throughout England and Wales, with higher species diversity and abundance of plants and butterflies found in woodlands managed for foxes according to a survey of mounted hunts in 2006, verified by The Council of Hunting Associations. [ 54 ] The control of wildlife through killing and hunting has created an opposition to hunting by animal rights and animal welfare activists. [ 55 ] Critics object to the real or perceived cruelty involved in some forms of wildlife management. They also argue against environmental organizations deliberately breeding certain animals, which hunters pay money to kill, in pursuit of profit. [ 56 ] Additionally, they draw attention to the attitude that it is acceptable to kill animals in the name of ecosystem or biodiversity preservation, yet it is seen as unacceptable to kill humans for the same purpose, asserting that such attitudes are a form of discrimination based on species membership, i.e. speciesism . [ 57 ] [ 58 ] Environmentalists have also opposed hunting where they believe it is unnecessary or will negatively affect biodiversity. [ 59 ] Critics of gamekeeping note that habitat manipulation and predator control are often used to maintain artificially inflated populations of valuable game animals (including introduced exotics) without regard to the ecological integrity of the habitat. [ 60 ] [ 61 ] Gamekeepers in the United Kingdom claim it to be necessary for wildlife conservation, as the amount of countryside they look after exceeds by a factor of nine the amount in nature reserves and national parks . [ 62 ] Media related to Wildlife management at Wikimedia Commons
https://en.wikipedia.org/wiki/Wildlife_management
Wildlife observation is the practice of noting the occurrence or abundance of animal species at a specific location and time, [ 1 ] either for research purposes or recreation. Common examples of this type of activity are bird watching and whale watching . The process of scientific wildlife observation includes the reporting of what ( diagnosis of the species), where ( geographical location ), when (date and time), who (details about observer), and why (reason for observation, or explanations for occurrence). Wildlife observation can be performed on animals that are alive, with the most notable examples being face-to-face observation and live cameras, or dead, with the primary example being the reporting of where roadkill has occurred. This outlines the basic information needed to collect data for a wildlife observation, which can also contribute to scientific investigations of distribution, habitat relations, trends, and movement of wildlife species. Wildlife observation allows for the study of organisms with minimal disturbance to their ecosystem, depending on the type of method or equipment used. The use of equipment such as unmanned aerial vehicles (UAVs), more commonly known as drones, may disturb and cause negative impacts on wildlife. [ 2 ] Specialized equipment can be used to collect more accurate data. [ 3 ] Wildlife observation is believed to trace its origins to the rule of Charles II of England , when it was first instituted in 1675 at the Royal Observatory in present-day Greenwich , part of London . In modern times, it has been practiced as the observation of wildlife species monitored in areas of vast wilderness . [ citation needed ] Through wildlife observation, there are many important details that can be discovered about the environment. For instance, if a fisher in Taiwan discovers that a certain species of fish they frequently catch is becoming increasingly rare, there might be a substantial issue in the water that fisher is fishing in.
It could be that there is a new predator in the water that has changed the animal food chain, a source of pollution, or perhaps even a larger problem. Regardless of the reason, this process of observing animals can help identify potential issues before they become severe problems in the world. Additionally, through animal observation, those who participate are also actively participating in the conservation of animal life. Oftentimes, the two subjects go hand-in-hand with one another because through the observation of animals, individuals are also discovering what issues animals around the world are currently facing and whether there are any ways to put up a fight against them. [ 5 ] With more observation, fewer species of animals may become extinct. Before one can get started observing wildlife and helping the environment, it is important to research the animal one is choosing to observe. If one simply went into the observation process and skipped the crucial step of obtaining knowledge about the animals, it would be difficult to determine if anything was out of the ordinary. [ 6 ] Before observing, it would be wise to find out simple information about the animal beforehand. There are a variety of projects and websites devoted to wildlife observations. One of the most common types of project is bird observation (for example: e-bird ). For those who enjoy bird watching, there are a variety of ways one can contribute to this type of wildlife observation. The National Wildlife Refuge System has volunteer opportunities and citizen science projects, and if one is limited in time, one could purchase a Federal Duck Stamp that donates money to the wildlife refuge lands. [ 7 ] In the past few years, websites dedicated to reporting wildlife across broad taxonomic ranges have become available. For example, the California Roadkill Observation System provides a mechanism for citizen-scientists in California to report wildlife species killed by vehicles.
The Maine Audubon Wildlife Road Watch allows reporting of observations of both dead and live animals along roads. A more recent addition to wildlife observation tools are the web sites that facilitate uploading and management of images from remote wildlife cameras. For example, the Smithsonian Institution supports the eMammal and Smithsonian Wild programs, which provide a mechanism for volunteer deployment of wildlife cameras around the world. Similarly, the Wildlife Observer Network hosts over a dozen wildlife-camera projects from around the world, providing tools and a database to manage photographs and camera networks. Monitoring programs for wildlife utilize new and easier ways to monitor animal species for citizen scientists and research scientists alike. One such monitoring device is the automated recorder. Automated recorders are a reliable way to monitor species such as birds, bats, and amphibians, as they provide the ability to save and independently identify specific animal calls. [ 8 ] The automated recorder analyzes the sounds of the species to identify the species and how many there are. [ 8 ] It was found that using automated recorders produced a larger quantity of higher-quality data when compared with traditional, point-count data recording. [ 9 ] While providing better quality, the recorders also provide a permanent record of the census which can be continually reviewed for any potential bias. [ 9 ] This monitoring device can improve wildlife observation and potentially save more animals. Using this device can allow for continued tracking of populations, continued censusing of individuals within a species, and faster population size estimates. [ 8 ] One of the most popular forms of wildlife observation, birdwatching , is typically performed as a recreational pleasure. Those looking to birdwatch typically travel into a forest or other wooded area with a pair of binoculars in hand to aid the process.
Birdwatching has become all the more important with the amount of deforestation that has been occurring in the world. Birds are arguably the most important factor in the balance of environmental systems : "They pollinate plants, disperse seeds, scavenge carcasses and recycle nutrients back into the earth." [ 10 ] A decrease in the total number of birds would cause destruction to much of the environmental system. The plants and trees around the world would die at an alarming rate which would, in turn, set off a chain reaction causing many other animals to die due to environmental change and habitat loss . One of the ways that birdwatching has an effect on the environment as a whole is that through consistent birdwatching, an observer would be able to identify whether they are seeing less of a certain species of bird. If this happens, there typically is a reason for the occurrence, whether it be an increase in pollution in the area or possibly an increase in the population of predators. [ 11 ] If a watcher were to take notice of a change in what they typically see, they could notify the city or park and allow them to investigate the cause further. Through this action, birdwatchers are preserving the future for both animal and human life. Additionally, taking children birdwatching allows the future generation to understand the importance of animal observation. If children learn at a young age how the environmental system works and that all life is intertwined, the world will be in much better hands. [ 11 ] These children will be the ones to pioneer conservation movements and attempt to protect habitat for all animals. Live streams of animal exhibits at various zoos and aquariums across the United States have also become extremely popular.
The Tennessee Aquarium has a webcam that allows online viewers to take a look into the happenings of their Secret Reef exhibit, which consists of reef fish, sharks, and a rescued green sea turtle. [ 12 ] Perhaps the most popular animal cams in the United States, though, come naturally from the largest zoo in the United States: The San Diego Zoo . The San Diego Zoo features eight live cams on their website – a panda, elephant, ape, penguin, polar bear, tiger, condor, and koala. The purpose of the live streams is to help educate the public about the behaviors of several different animals and to entertain those who might not be able to travel to a zoo. [ 13 ] Other notable zoos that have webcams are the National Zoo , Woodland Park Zoo , Houston Zoo , and Atlanta Zoo . Additionally, the Smithsonian National Museum of Natural History has opened a butterfly and plant observation pavilion. Visitors walk into a large tent and experience a one-of-a-kind situation in which hundreds of rare butterflies from all across the world are inches from their faces. [ 14 ] As is the case with a majority of subjects, one of the best and most effective ways to observe live animals is through data collection. This process can be done through a livestream or in the wild, but it is more useful if the data is collected on animals that are currently in the wild. [ 15 ] The ways data can be collected are endless; which data are most useful depends on the purpose of the individual collecting them. For example, if someone is interested in how deer interact with other animals in a certain location, it would be beneficial for them to take notes and record all of the animals that are native to the area where the deer are located. From there, they can describe any scenarios in which the deer had a positive or negative interaction with the other species of animals.
In this instance, it would not really be helpful for the observer to collect data pertaining to the types of food the deer eat, because the study is only focusing on the interaction among animals. Another example of how collecting data on wildlife would be useful is keeping track of the total number of a certain species that exists within a forest. Naturally, it will be impossible to get a definitive number, but if an accurate approximation can be made, it could be beneficial in determining if there has been a random increase or decrease in the population. If there is an increase, it could be due to a change in the species' migration habits, and if there is a decrease, it could be due to an external factor such as pollution or the introduction of a new predator. [ 15 ] Many states have already begun to set up websites and systems for the public. The main purpose behind the movement is so that they can notify other individuals about road-killed wildlife. If enough people fill out the forms located on the websites, the government will be notified that there have been losses of animal life and will take the steps required to prevent them. Typically, the step that is taken is the posting of a wildlife crossing sign that, in turn, allows the public to know where there are common animal crossings. Maine and California are the states that have been the pioneers of this movement, and the process has become particularly important on heavily traveled roads, as no one would like to endanger the animals or themselves. [ 16 ] Currently, there is an app (available on both iPhone and Android devices) made specifically for the purpose of identifying road-kill called "Mobile Mapper." The app is a partner of the HerpMapper website. The purpose of the website is to use the user-recorded observations for research and conservation purposes. [ 16 ] On average, the cost of repairing a car that has been damaged by a deer or other medium to large-sized animal is $2,000. 
[ 17 ] Even though there is no way that accidents involving animals can be completely prevented, placing more signs about possible animal crossing zones would cause drivers to drive more carefully and therefore have fewer accidents. Economically, this means that more families will be saving money, and it could be used in a different way to help contribute to society as a whole. Climate change is one of the most heavily discussed topics around the world today, both politically and scientifically. The climate that Earth is currently experiencing has been steadily changing over time due to both natural causes and human exploitation. [ 18 ] Climate change has the potential to be detrimental to wildlife across the world, whether that be through rising sea levels, changes in temperatures through the years, or deforestation . [ 18 ] These are just a few examples of the contributing factors to climate change. Climate change is not something that citizens can entirely prevent from happening even if they wanted to. There are many natural causes, such as volcanic activity and the Earth's orbit around the Sun, that are strong contributing factors to the phenomenon. [ 19 ] There are, however, measures that can be taken to slow how quickly climate change happens. The primary way is for society to reduce the amount of greenhouse gases that are present in the atmosphere. [ 19 ] This can be done by improving energy efficiency in many buildings, halting deforestation so more carbon dioxide can be removed from the atmosphere, and mode switching. [ 20 ] One of the more notable effects climate change has on the environment is the rising of sea levels around the world. Over the past 100 years, the sea level has risen approximately 1.8 millimeters each year. 
[ 21 ] The steady rise in sea levels can be attributed to the steadily increasing temperatures the Earth faces each year, which cause the ice caps and glaciers to melt. [ 21 ] This increase in sea level is detrimental to the coastal ecosystems that exist around the world. The increase in sea level causes flooding on coastal wetlands, where certain animals will be unable to survive due to saltwater inundation . [ 21 ] The increase in the total amount of saltwater present in these wetlands could prove to be problematic for many species. While some may simply have to migrate to other areas, smaller ecosystems within the wetlands could be destroyed, which, once again, influences the animal food chain. [ 21 ] Polar bears are animals that are specifically affected by the process of rising sea levels. Living near the Arctic region, polar bears find their food on ice caps and sheets of ice. As these sheets continue to become fewer in quantity, it is predicted that the polar bears will have a difficult time sustaining life and that by the year 2050, there could be fewer than 20,000 on Earth. [ 21 ] Coral reefs are the primary ecosystem that would be affected by a continuing increase in the sea level: "The coral reef ecosystem is adapted to thrive within certain temperature and sea level range. Corals live in a symbiotic relationship with photosynthetic zooxanthellae . Zooxanthellae need the sunlight in order to produce the nutrients necessary for the coral. Sea level rise may cause a decrease in solar radiation at the sea surface level, affecting the ability of photosynthetic zooxanthellae to produce nutrients for the coral, whereas, a sudden exposure of the coral reef to the atmosphere due to a low tide event may induce coral bleaching." [ 21 ] The loss of coral would have a subsequent effect on the total number of fish that exist within these ecosystems. 
In the Indo-Pacific coral reefs alone, there are between 4,000 and 5,000 different species of fish that have a relationship with the species of coral. [ 22 ] Specifically, the numerous different species of butterfly fish that feed on coral within these reefs would be affected if the coral were unable to live due to an increase in sea level. [ 22 ] Referring back to the food chain topic, this would in turn directly affect species of snappers, eels, and sharks that use butterfly fish as a primary source of food. [ 23 ] If the snappers cannot find any butterfly fish to eat because the butterfly fish are dying due to the lack of coral, the snapper population will decrease as well. The rising of sea level has the possibility to be catastrophic to coastal ecosystems. [ 24 ] Pollution is another crucial threat to animal life, and human life, across the world. Every form of pollution has an effect on wildlife, whether it be through the air, water, or ground. While sometimes the origin and form of pollution is visible and easy to determine, other times it can be a mystery as to what exactly is causing the death of animals. Through constant and consistent observation of habitats, humans can help prevent the loss of animal life by recognizing the early signs of pollution before the problem becomes too large. [ 25 ] Pollution can enter bodies of water in many different ways: through toxic runoff from pesticides and fertilizers, containers of oil and other hazardous materials falling off ships, or simply debris from humans that has not been picked up. [ 26 ] No matter what the form of pollution is, the effects water pollution has on animal life can be drastic. For example, the BP oil spill which occurred in 2010 impacted over 82,000 birds, 6,000 sea turtles, approximately 26,000 marine animals, and hundreds of thousands of fish. 
[ 27 ] While the observation of how animal life was and has been affected by this spill is unique and definitely on the larger scale, it still represents an accurate depiction of how observation can be crucial to animal lives. For example, by observing that a certain species of sea turtle was affected by the oil spill, zoologists and their teams would be able to determine the effects the loss of that sea turtle would have. [ 27 ] Another prominent example: if one day a fisherman goes to a lake that he or she frequently visits and notices two or three dead fish on the surface, knowing that this does not normally happen, the fisherman tells local city officials and park rangers about the occurrence, and they find out that a farmer has been using a new pesticide that runs off into the lake. By simply observing what is common and what is not, the effects of some water pollution can be stopped before becoming too severe. Air pollution is commonly associated with the image of billowing clouds of smoke rising into the sky from a large factory. While such fumes and smoke are definitely a prominent form of air pollution, they are not the only one. Air pollution can come from the emissions of cars, smoking, and other sources. [ 26 ] Air pollution does not just affect birds, though, as one may have thought. Air pollution affects mammals, birds, reptiles, and any other organism that requires oxygen to live. [ 26 ] Frequently, if there is any highly dangerous air pollution, the animal observation process will be rather simple: there will be an abundance of dead animals in the vicinity of the pollution. The primary concern with air pollution is how widespread it can become in a short period of time. Acid rain is one of the largest forms of pollution today. 
The issue with acid rain is that it affects every living organism it comes in contact with, whether that be trees in a forest, water in an ocean or lake, or the skin of humans and animals. [ 28 ] Typically, acid rain is a combination of sulfur dioxide and nitrogen oxides that are emitted from factories. [ 26 ] If it is not controlled in a timely manner, it could lead to loss of life due to the dangerous composition of the rain. Deforestation has become one of the most prevalent environmental issues. With a continuously growing population and limited space to contain all the humans on Earth, forests are frequently the first areas that are cleared to make more room. [ 29 ] According to National Geographic , forests still cover approximately 30 percent of the land on Earth, but each year large portions are cleared. [ 29 ] Deforestation has numerous side effects. Most notably, the clearing of entire forests (in some instances) destroys the habitat for hundreds of species of animals, and 70 percent of the animals that reside in the forest will die as a result. [ 29 ] Additionally, deforestation causes a reduction in the total canopy cover, which leads to more extreme temperature swings at ground level because there are no branches and leaves to catch the sun's rays. [ 29 ] The way to combat the severe effects of the loss of animal life would be to stop cutting down trees and forests. While this is unlikely to happen, there is another solution: the partial removal of forests. Removing only portions of the forest keeps the environment of the entire forest intact, which allows the animals to adapt to their surroundings. [ 29 ] Additionally, it is recommended that for every tree that is cut down, another be planted elsewhere in the forest. [ 29 ] Typically, the costs of animal observation are minuscule. 
As previously stated, animal observation can be done on a small or large scale; it just depends on what goal an individual has in mind. For example, animal observation can be performed in the backyard of a house or at a local state park at no charge. All one would have to do is take a notepad, phone, or other device to write down their data and observations. On a larger scale, animal observation could be performed at an animal reserve, where the costs would be those of keeping the animals happy inside the reserve. While it is impossible to pinpoint exactly how much the zoos across the world spend on live streaming, it is estimated to be in the $1,000 range for every camera that is set up. [ 30 ] Referring back to the example from the "Deceased Wildlife Observation" section, it becomes apparent how animal observation can save families and the government money. With the average cost of repairing a car that has damage from a large-sized animal being $2,000, families and the government could save money by making the public aware that they should proceed with caution in areas where animals have been hit. [ 31 ] Additionally, approximately $44 million of the $4.3 billion spent on water purity each year goes toward protecting aquatic species from nutrient pollution . [ 31 ] While it is encouraging that the government is willing to spend the money to help save animals' lives, sometimes the effects of the pollution take hold before they are able to stop them entirely. One million seabirds and one hundred thousand aquatic mammals and fish are killed as a result of water pollution each year, and that has economic effects, both direct and indirect. [ 32 ] Directly, the loss of aquatic mammals and fish has an impact on the sales of food. The EPA estimated recently that the effects of pollution cost the fishing industry tens of millions of dollars in sales. 
[ 33 ] Indirectly, the loss of birds causes humans to spend more money on pest control because the food chain is out of order. [ 34 ] The small rodents and insects that some birds prey upon are no longer being killed if the birds die. This means more of these pests find their way into homes, which causes more people to call exterminators, therefore setting off a chain reaction. The exterminators then must use insecticides to kill the animals, which can have harmful runoff into the ground and local water systems, instead of allowing it to be done naturally by the animal food chain. [ 35 ]
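The population-tracking idea described earlier, recording how many individuals of a species are seen each year and flagging unusual swings for further investigation, can be sketched in a few lines of Python. The species, counts, and 25 percent threshold below are hypothetical illustrations, not data from the sources cited above:

```python
# Sketch of the yearly-count comparison described above.
# The deer counts and the 25% change threshold are hypothetical.

def flag_population_changes(counts, threshold=0.25):
    """Compare consecutive yearly counts and flag large swings."""
    flags = []
    for (year_a, n_a), (year_b, n_b) in zip(counts, counts[1:]):
        change = (n_b - n_a) / n_a  # relative change between years
        if abs(change) >= threshold:
            direction = "increase" if change > 0 else "decrease"
            flags.append((year_b, direction, round(change * 100)))
    return flags

# Hypothetical deer counts from repeated observation of one forest.
observations = [(2018, 120), (2019, 118), (2020, 80), (2021, 84)]
print(flag_population_changes(observations))
# The ~32% drop between 2019 and 2020 is flagged for investigation.
```

A flagged year would then prompt the kind of follow-up the article describes, such as checking for new pollution sources or predators in the area.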
https://en.wikipedia.org/wiki/Wildlife_observation
The wildlife of Nigeria consists of the flora and fauna of this country in West Africa . Nigeria has a wide variety of habitats, ranging from mangrove swamps and tropical rainforest to savanna with scattered clumps of trees. About 290 mammal species and 940 bird species have been recorded in the country. Nigeria is a large country in West Africa just north of the equator. It is bounded by Benin to the west, Niger to the north, Cameroon to the east and the Atlantic Ocean to the south. The country consists of several large plateaus separated by the valleys of the two major rivers, the Niger and the Benue , and their tributaries. These converge inland and flow into the Gulf of Guinea through a network of creeks and branches which form the extensive Niger Delta . Other rivers flow directly to the sea further west, with many smaller rivers being seasonal. The highest mountain is Chappal Waddi (2,419 m (7,936 ft)) on the Mambilla Plateau in the southeast of the country near the border with Cameroon. The Shere Hills (1,829 m (6,001 ft)) are another mountainous region located on the Jos Plateau in the center of the country. Major lakes include two reservoirs, Oguta Lake and Kainji Lake , and Lake Chad in the northeast. There are extensive coastal plains in the southwest and the southeast, and the coastline is low. [ 1 ] The wet season lasts from March to October, with winds from the southwest. The rest of the year is dry, with northeasterly harmattan winds blowing in from the Sahara . The coastal zone has between 1,500 and 3,000 mm (59 and 118 in) of rainfall per year, and the inland zones are drier except for the highland areas. [ 2 ] The most southerly part of the country is classified as "salt water swamp" or "mangrove swamp" because the vegetation consists primarily of mangroves . North of this is a fresh water swamp area containing salt-intolerant species such as the raffia palm , and north of this is rainforest . 
Further north again, the countryside becomes savanna with scattered groups of trees. [ 3 ] A common species in riverine forests in the south is Brachystegia eurycoma . [ 4 ] These main zones can be further subdivided. The coastal swamp forest extends many kilometers inland and contains all eight West African species of mangrove, with Rhizophora racemosa being the dominant species on the outer edge, R. harrisonii in the central part and R. mangle on the inner edge. [ 5 ] The mangrove swamps of the Niger Delta are estimated to be the breeding ground of 40% of the fish caught offshore. [ 5 ] The rainforest zone stretches inland for about 270 km (170 mi) but its composition varies considerably, with rainfall decreasing from west to east and from south to north. In Omo Forest Reserve for example, the commonest trees are several species of Diospyros , Tabernaemontana pachysiphon , Octolobus angustatus , Strombosia pustulata , Drypetes gossweileri , Rothmania hispida , Hunteria unbellata , Rinorea dentata , Voacanga africana , and Anthonotha aubryanum . [ 6 ] Where the rainforest grades into the savanna woodland, dominant trees include Burkea africana , Terminalia avicennioides , and Detarium microcarpum . [ 7 ] About one half of Nigeria is classified as Guinea savanna in the Guinean forest-savanna mosaic ecoregion, characterized by scattered groups of low trees surrounded by tall grasses, with strips of gallery forest along the watercourses. Typical trees here are suited to the seasonally dry conditions and repeated wildfires and include Lophira lanceolata , Afzelia africana , Daniellia oliveri , Borassus aethiopum , Anogeissus leiocarpa , Vitellaria paradoxa , Ceratonia siliqua , and species of Isoberlinia . [ 3 ] [ 8 ] A large number of mammal species are found in Nigeria with its diverse habitats. 
These include lions , leopards , mongooses , hyenas , side-striped jackals , African elephants , African buffaloes , African manatees , rhinoceroses , antelopes , waterbuck , giraffes , warthogs , red river hogs , hippopotamuses , pangolins , aardvarks , western tree hyraxes , bushbabies , monkeys , baboons , western gorillas , chimpanzees , bats , shrews , mice , rats , squirrels , and gerbils . Besides these, many species of whale and dolphin visit Nigerian waters. [ 9 ] About 940 species of bird have been recorded in Nigeria, five of them endemic to the country. [ 10 ] Each geographical zone has its typical bird species, with few being found in both forest and savanna. Around the Oba Dam , east of Ibadan , various waterfowl can be seen including several species of heron and egret , African pygmy goose , comb-crested jacana , black-winged stilt , Egyptian plover , and black crake . In the adjoining rainforest, specialties include western square-tailed drongo and glossy-backed drongo , the African oriole and black-headed orioles , painted-snipe , several species of dove , Klaas' and diederik cuckoos , as well as kingfishers , bee-eaters , rollers , and bushshrikes , including the fiery-breasted bushshrike , flocks of iridescent starlings , and several species of Malimbus , a genus only found in West Africa. Some birds found in open savanna include hooded vulture , stone partridge , guineafowl , black-billed wood dove , black cuckoo , blue-naped mousebird , and Abyssinian roller . [ 11 ] Birds endemic to Nigeria include the Ibadan malimbe , the Jos Plateau indigobird , the rock firefinch and the Anambra waxbill . [ 10 ]
https://en.wikipedia.org/wiki/Wildlife_of_Nigeria
Photo-identification is a technique used to identify and track individuals of a wild animal study population over time. It relies on capturing photographs of distinctive characteristics such as skin or pelage patterns or scars from the animal. In cetaceans , the dorsal fin area and tail flukes are commonly used. Photo-identification is generally used as an alternative to other, invasive methods of tagging that require attaching a device to each individual. The technique enables precise counting, rather than rough estimation, of the number of animals in a population. It also allows researchers to perform longitudinal studies of individuals over many years, yielding data about the lifecycle, lifespan, migration patterns, and social relationships of the animals. Species that are studied using photo-identification techniques include:
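The matching step at the heart of photo-identification can be sketched as a nearest-neighbor lookup against a catalog of known individuals. In a real system the features would be extracted from photographs (notch positions, pigmentation patterns, scars); the feature vectors, individual names, and distance threshold here are all hypothetical:

```python
# Minimal sketch of the catalog-matching step in photo-identification.
# Each individual is reduced to a hypothetical numeric feature vector;
# a new sighting is matched to the nearest catalogued individual.
import math

catalog = {
    "individual_A": [0.12, 0.55, 0.80],
    "individual_B": [0.40, 0.10, 0.33],
    "individual_C": [0.90, 0.65, 0.21],
}

def best_match(features, catalog, max_distance=0.1):
    """Return the catalogued individual closest to the new sighting,
    or None if nothing is close enough (likely a new individual)."""
    name, d = min(((n, math.dist(features, v)) for n, v in catalog.items()),
                  key=lambda pair: pair[1])
    return name if d <= max_distance else None

print(best_match([0.13, 0.54, 0.79], catalog))  # matches individual_A
print(best_match([0.50, 0.50, 0.50], catalog))  # None: likely a new animal
```

Unmatched sightings would be added to the catalog as new individuals, which is how the technique yields a precise count of the study population over time.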
https://en.wikipedia.org/wiki/Wildlife_photo-identification
Wildlife radio telemetry is a tool used to track the movement and behavior of animals . This technique uses the transmission of radio signals to locate a transmitter attached to the animal of interest. It is often used to obtain location data on the animal's preferred habitat , home range , and to understand population dynamics . [ 1 ] The different types of radio telemetry techniques include very high frequency (VHF) transmitters, global positioning system (GPS) tracking, and satellite tracking . [ 2 ] Recent advances in technology have improved radio telemetry techniques by increasing the efficacy of data collection. However, studies involving radio telemetry should be reviewed in order to determine if newer techniques, such as collars that transmit the location to the operator via satellites, are actually required to accomplish the goals of the study. [ 3 ] The operator attaches a transmitter to an animal that gives off unique electromagnetic radio signals, which allows the animal to be located. Transmitters are available in a variety of forms and consist of an antenna, a power source, and the electronics required to produce a signal. Transmitters are chosen based on the behavior, size, and life history of the specific species being studied. In order to reduce the impact of the transmitter on the animal's behavior and quality of life, transmitters typically weigh no more than five percent of the animal's body weight. [ 3 ] However, the smaller the transmitter, the weaker and shorter-lived it is. Transmitters are often designed to fall off the animal at the conclusion of the study due to the unlikelihood of recapturing the tagged animals. [ 1 ] Large animals require transmitters in the form of collars, which leave room for the animal to grow without falling off. Ear tag transmitters are commonly attached to the ear of large animals that have changing neck sizes. Lightweight, adhesive transmitters are glued to the backs of smaller animals, such as bats. 
Necklace packs are transmitters that fit around the neck of upland game birds. Subcutaneous transmitters are applied to aquatic animals, which allows them to freely navigate underwater. In some species of fish that have ceased feeding, transmitters are inserted inside the animal's body cavity as a means to minimize the stress of tagging. [ 4 ] Whip antennas are an omni-directional transmitter design that produces more signal over a greater distance. A harness loop antenna design, implemented for small birds, involves a transmitter being wrapped around the body. [ 3 ] The operator uses an antenna that is attached to a receiver, which is programmed to the transmitter's frequency, to pick up the electromagnetic signals given off by the transmitter affixed to the target animal. [ 1 ] Receiver antennas may be hand-held or mounted on an object, and they are available in a variety of forms and functions. These antennas are also tuned to the proper frequency for the transmitter. The receiver produces a tone that increases in loudness or has a visual signal strength indicator that pulses as the operator approaches the transmitter. [ 3 ] Omnidirectional antennas have no additional elements and are used to determine the presence or absence of a signal, not its exact location. Elements are added segments of an antenna that increase the range of detectability of the receiver. Adcock antennas consist of two elements and are used to locate the direction of the signal. Loop antennas are small and useful for locating low frequency transmitters. The Yagi antenna contains 3 or 4 elements and is a strong, directional antenna commonly used to determine the location of a transmitter. Antennas can also be affixed to towers. This allows the antenna to be positioned higher, avoiding interference from buildings and trees. Boat, aircraft, and vehicle-mounted antennas allow the operator to cover a larger area while tracking. 
[ 3 ] Direct tracking and triangulation methods allow the operator to locate a tagged animal. Direct or VHF tracking involves using a directional antenna to follow the signal given off by the transmitter to the exact location of the tagged animal. [ 2 ] The operator rotates the antenna until the loudest signal is found. The operator follows the signal, checking its direction frequently, until he or she reaches the tagged animal. Triangulation is often used when an animal is on private or inaccessible property because it allows the operator to remotely determine the location of the tagged animal. The operator obtains three or more azimuths or bearings from locations around the signal and calculates the intersection of the azimuths to estimate the location of the tagged animal. [ 1 ] Global positioning tracking involves a receiver that picks up signals from satellites to determine the location of a tagged animal over time. The GPS transmitter is attached to an animal and records the location of the animal on the device by estimating the time taken for radio signals from at least three satellites to travel to the GPS transmitter. The data is collected by recapturing the animal to remove the GPS transmitter or by remotely downloading the data off the transmitter. These units are often heavier and shorter-lived than the ones used for VHF tracking. Global positioning tracking is useful for migrating animals because their locations can be accurately determined regardless of the distance they are from the operator. [ 2 ] Satellite tracking is similar to GPS tracking and allows animal movement to be tracked globally. This form of tracking is useful for remote or inaccessible areas. Many of these systems implement platform terminal transmitters (PTT) that send electromagnetic signals to Argos equipment found on satellites. The Argos receivers estimate the distance to the transmitter to determine its location. 
This data is received by the Argos data collection relay system. The PTT transmitters require larger batteries, causing them to be heavier than VHF transmitters. Satellite tracking is more accurate at locating larger animals that are more exposed to the sky, such as birds or animals living in prairies, open deserts, or savannas. [ 2 ] Wildlife radio telemetry has advanced the research opportunities available for studying animal populations. It can be applied to many areas of management and research to determine the habitat use of tagged animals, such as roost and foraging habitat preferences. [ 5 ] Radio telemetry has been used to study the home range and movement of populations. Specific migratory routes and dispersal behavior can be followed through radio tracking. Survivorship is often monitored with radio telemetry by studying age and mortality rates . [ 1 ] Technologies developed by companies such as Wildlife Drones have expanded these capabilities, enabling data collection using drone-based systems. It is important that any negative effects of attaching radio-transmitters to animals are reported to improve methods and reduce harm to individuals in future studies. [ 6 ]
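The triangulation method described above, intersecting two or more azimuths taken from known observer positions, can be sketched as follows. Positions are treated as flat x/y coordinates and azimuths as degrees clockwise from north; the observer positions and bearings are hypothetical:

```python
# Sketch of two-bearing triangulation on a flat plane.
# Azimuths are in degrees clockwise from north, so the direction
# vector of a bearing is (sin(az), cos(az)).
import math

def intersect_bearings(p1, az1_deg, p2, az2_deg):
    """Intersect two azimuth lines taken from observer positions
    p1 and p2; returns the estimated transmitter location (x, y)."""
    d1 = (math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg)))
    d2 = (math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg)))
    # Solve p1 + t*d1 == p2 + s*d2 for t via a 2x2 determinant.
    det = d1[0] * -d2[1] - d1[1] * -d2[0]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * -d2[1] - ry * -d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Hypothetical example: an observer at (0, 0) hears the signal to the
# northeast (45 degrees); one at (10, 0) hears it to the northwest (315).
x, y = intersect_bearings((0, 0), 45, (10, 0), 315)
print(round(x, 3), round(y, 3))  # -> 5.0 5.0
```

With three or more bearings, field workers typically take the triangle (or least-squares fit) formed by the pairwise intersections, since real bearings carry measurement error.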
https://en.wikipedia.org/wiki/Wildlife_radio_telemetry
Wildness , in its literal sense, is the quality of being wild or untamed . Beyond this, it has been defined as a quality produced in nature [ 1 ] and that which is not domesticated. [ 2 ] More recently, it has been defined as "a quality of interactive processing between organism and nature where the realities of base natures are met, allowing the construction of durable systems" [ 3 ] and "the autonomous ecological influences of nonhuman organisms." [ 4 ] People have explored the contrast of wildness versus tameness throughout recorded history. The earliest great work of literature, the Epic of Gilgamesh , tells a story of a wild man Enkidu in opposition to Gilgamesh who personifies civilization. In the story, Enkidu is defeated by Gilgamesh and becomes civilized. Cultures vary in their perception of the separation of humans from nature, with western civilization drawing a sharp contrast between the two while the traditions of many indigenous peoples have always seen humans as part of nature. The perception of man's place in nature and civilization has also changed over time. In western civilization, for example, Darwinism and environmentalism have renewed the perception of humans as part of nature, rather than separate from it. Wildness is often mentioned in the writings of naturalists, such as John Muir and David Brower , where it is admired for its freshness and otherness. Henry David Thoreau wrote "In wildness is the preservation of the world". Some artists and photographers such as Eliot Porter explore wildness in the themes of their works. The benefits of reconnecting with nature by seeing the achievements of wildness is an area being investigated by ecopsychology . Attempts to identify the characteristics of wildness are varied. One consideration sees wildness as that part of nature which is not controllable by humans. Nature retains a measure of autonomy, or wildness, apart from human constructions. [ 5 ] In Wild by Design , Laura J. 
Martin reviews attempts to manage nature while respecting and even generating the wildness of other species. Another version of this theme is that wildness produces things that are natural, while humans produce things that are artificial (man-made). [ 6 ] Ambiguities about the distinction between the natural and the artificial animate much of art, literature and philosophy. There is a perception that naturally produced items have a greater elegance than artificial things. Modern zoos seek to improve the health and vigour of animals by simulating natural settings, in a move away from stark man-made structures. Another view of wildness is that it is a social construct, and that humans cannot be considered innately 'unnatural'. As wildness is claimed to be a quality that builds from animals and ecosystems, it often fails to be considered within reductionist theories of nature. Meanwhile, an ecological perspective sees wildness as "(the degree of) subjection to natural selection pressures", many of which emerge independently from the biosphere . Thus modern civilization (contrasted with all humanity) can be seen as an 'unnatural' force (lacking wildness), as it strongly insulates its population from many natural selection mechanisms, including interspecific competition such as predation and disease, as well as some intraspecific phenomena. The importance of maintaining wildness in animals is recognized in the management of Wilderness areas . Feeding wild animals in national parks, for example, is usually discouraged because the animals may lose the skills they need to fend for themselves. Human interventions may also upset continued natural selection pressures upon the population, producing a version of domestication within wildlife (Peterson et al. 2005). Tameness implies a reduction in wildness, where animals become more easily handled by humans. Some animals are easier to tame than others, and are amenable to domestication. 
In a clinical setting, wildness has been used as a scale to rate the ease with which various strains of laboratory mice can be captured and handled ( Wahlsten et al. 2003 ): in this sense, "wildness" may be interpreted as "tendency to respond with anxiety to handling". There is no necessary connection between this factor and the state of wildness per se , given that some animals in the wild may be handled with little or no cause of anxiety. However, this factor does clearly indicate an animal's resistance to being handled. A classification system can be set out showing the spectrum from wild to domesticated animal states. This classification system does not account for several complicating factors: genetically modified organisms, feral populations, and hybridization . Many species that are farmed or ranched are now being genetically modified. This creates a unique category because it alters the organisms as a group, but in ways unlike traditional domestication. Feral organisms are members of a population that was once raised under human control but is now living and multiplying outside of human control. Examples include mustangs . Hybrids can be wild, domesticated, or both: a liger is a hybrid of two wild animals, a mule is a hybrid of two domesticated animals, and a beefalo is a cross between a wild and a domestic animal. The basic idea of ecopsychology is that while the human mind is shaped by the modern social world, it can be readily inspired and comforted by the wider natural world, because that is the arena in which it originally evolved. Mental health or unhealth cannot be understood in the narrow context of only intrapsychic phenomena or social relations. One also has to include the relationship of humans to other species and ecosystems. These relations have a deep evolutionary history, find a natural affinity within the structure of the human brain, and have deep psychic significance in the present time, in spite of urbanization. 
Humans are dependent on healthy nature not only for their physical sustenance, but for mental health, too. The concept of a state of nature was first posited by the 17th-century English philosopher Thomas Hobbes in Leviathan . Hobbes described the concept in the Latin phrase bellum omnium contra omnes , meaning "the war of all against all." In this state, any person has a natural right to do anything to preserve their own liberty or safety. Famously, he believed that such a condition would lead to a "war of every man against every man" and make life "solitary, poor, nasty, brutish, and short." Hobbes's view was challenged in the eighteenth century by Jean-Jacques Rousseau , who claimed that Hobbes was taking socialized persons and simply imagining them living outside of the society they were raised in. He affirmed instead that people were born neither good nor bad; men knew neither vice nor virtue, since they had almost no dealings with each other. Their bad habits are the products of civilization, specifically social hierarchies, property , and markets. Another criticism, put forth by Karl Marx, rests on his concept of species-being , the unique potential of humans for dynamic, creative, and cooperative relations with each other. For Marx and others in his line of critical theory , alienated and abstracted social relations prevent the fulfillment of this potential (see anomie ). David Hume 's view brings together and challenges the theories of Rousseau and Hobbes. He posits that in the natural state we are born wicked and evil, citing, for instance, the cry of the baby demanding attention. Like Rousseau, he believes that society shapes us, but holds that we are born evil and it is up to society to shape us into who we become. Thoreau made many statements on wildness: In Wildness is the preservation of the World.
— "Walking" I wish to speak a word for Nature, for absolute Freedom and Wildness, as contrasted with a Freedom and Culture merely civil, — to regard man as an inhabitant, or a part and parcel of Nature, rather than a member of society. — "Walking" I long for wildness, a nature which I cannot put my foot through, woods where the wood thrush forever sings, where the hours are early morning ones, and there is dew on the grass, and the day is forever unproved, where I might have a fertile unknown for a soil about me. — Journal, 22 June 1853 As I came home through the woods with my string of fish, trailing my pole, it being now quite dark, I caught a glimpse of a woodchuck stealing across my path, and felt a strange thrill of savage delight, and was strongly tempted to seize and devour him raw; not that I was hungry then, except for that wildness which he represented. — Walden What we call wildness is a civilization other than our own. — Journal, 16 February 1859 We need the tonic of wildness — to wade sometimes in marshes where the bittern and the meadow-hen lurk, and hear the booming of the snipe; to smell the whispering sedge where only some wilder and more solitary fowl builds her nest, and the mink crawls with its belly close to the ground. — Walden It is in vain to dream of a wildness distant from ourselves. There is none such. — Journal, 30 August 1856 The most alive is the wildest. — "Walking" Whatever has not come under the sway of man is wild. In this sense original and independent men are wild — not tamed and broken by society. — Journal, 3 September 1851 Trench says a wild man is a willed man. Well, then, a man of will who does what he wills or wishes, a man of hope and of the future tense, for not only the obstinate is willed, but far more the constant and persevering. The obstinate man, properly speaking, is one who will not.
The perseverance of the saints is positive willedness, not a mere passive willingness. The fates are wild, for they will; and the Almighty is wild above all, as fate is. — Journal, 27 June 1853
https://en.wikipedia.org/wiki/Wildness
Wiles's proof of Fermat's Last Theorem is a proof by British mathematician Sir Andrew Wiles of a special case of the modularity theorem for elliptic curves . Together with Ribet's theorem , it provides a proof for Fermat's Last Theorem . Both Fermat's Last Theorem and the modularity theorem were believed by almost all living mathematicians at the time to be impossible to prove using existing knowledge. [ 1 ] : 203–205, 223, 226 Wiles first announced his proof on 23 June 1993 at a lecture in Cambridge entitled "Modular Forms, Elliptic Curves and Galois Representations". [ 2 ] However, in September 1993 the proof was found to contain an error. One year later, on 19 September 1994, in what he would call "the most important moment of [his] working life", Wiles stumbled upon a revelation that allowed him to correct the proof to the satisfaction of the mathematical community. The corrected proof was published in 1995. [ 3 ] Wiles's proof uses many techniques from algebraic geometry and number theory and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes , significant number-theoretic ideas from Iwasawa theory , and other 20th-century techniques which were not available to Fermat. The proof's method of identification of a deformation ring with a Hecke algebra (now referred to as an R=T theorem ) to prove modularity lifting theorems has been an influential development in algebraic number theory . Together, the two papers which contain the proof are 129 pages long [ 4 ] [ 5 ] and consumed more than seven years of Wiles's research time. John Coates described the proof as one of the highest achievements of number theory, and John Conway called it "the proof of the [20th] century."
[ 6 ] Wiles's path to proving Fermat's Last Theorem, by way of proving the modularity theorem for the special case of semistable elliptic curves , established powerful modularity lifting techniques and opened up entire new approaches to numerous other problems. For proving Fermat's Last Theorem, he was knighted , and received other honours such as the 2016 Abel Prize . When announcing that Wiles had won the Abel Prize, the Norwegian Academy of Science and Letters described his achievement as a "stunning proof". [ 3 ] Fermat's Last Theorem , formulated in 1637, states that no three positive integers a , b , and c can satisfy the equation a n + b n = c n if n is an integer greater than two ( n > 2). Over time, this simple assertion became one of the most famous unproved claims in mathematics. Between its publication and Andrew Wiles's eventual solution more than 350 years later, many mathematicians and amateurs attempted to prove this statement, either for all values of n > 2, or for specific cases. It spurred the development of entire new areas within number theory . Proofs were eventually found for all values of n up to around 4 million, first by hand, and later by computer. However, no general proof was found that would be valid for all possible values of n , nor even a hint of how such a proof could be undertaken. Separately from anything related to Fermat's Last Theorem, in the 1950s and 1960s Japanese mathematician Goro Shimura , drawing on ideas posed by Yutaka Taniyama , conjectured that a connection might exist between elliptic curves and modular forms . These were mathematical objects with no known connection between them. Taniyama and Shimura posed the question of whether, unknown to mathematicians, the two kinds of object were actually identical mathematical objects, just seen in different ways. They conjectured that every rational elliptic curve is also modular . This became known as the Taniyama–Shimura conjecture.
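The case-by-case verifications mentioned above amount, in spirit, to a search for counterexamples. A minimal Python sketch of such a search (purely illustrative; the historical verifications used far more efficient descent arguments, not brute force):

```python
from itertools import product

def fermat_counterexamples(n, limit):
    """Exhaustively search for a^n + b^n == c^n with 1 <= a, b, c <= limit."""
    return [(a, b, c)
            for a, b, c in product(range(1, limit + 1), repeat=3)
            if a**n + b**n == c**n]

# For n = 2 there are solutions (Pythagorean triples), e.g. (3, 4, 5);
# for any n > 2 the search comes up empty, as Wiles's proof guarantees.
print(fermat_counterexamples(2, 5))   # includes (3, 4, 5)
print(fermat_counterexamples(3, 50))  # []
```

Of course, no finite search can prove the theorem; it can only fail to find a counterexample, which is why a structural proof was needed.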
In the West, this conjecture became well known through a 1967 paper by André Weil , who gave conceptual evidence for it; thus, it is sometimes called the Taniyama–Shimura–Weil conjecture. By around 1980, much evidence had been accumulated to form conjectures about elliptic curves, and many papers had been written which examined the consequences if the conjecture were true, but the actual conjecture itself was unproven and generally considered inaccessible—meaning that mathematicians believed a proof of the conjecture was probably impossible using current knowledge. For decades, the conjecture remained an important but unsolved problem in mathematics. Around 50 years after first being proposed, the conjecture was finally proven and renamed the modularity theorem , largely as a result of Andrew Wiles's work described below. On yet another separate branch of development, in the late 1960s, Yves Hellegouarch came up with the idea of associating hypothetical solutions ( a , b , c ) of Fermat's equation with a completely different mathematical object: an elliptic curve. [ 7 ] The curve consists of all points in the plane whose coordinates ( x , y ) satisfy the relation y 2 = x ( x − a n )( x + b n ). Such an elliptic curve would enjoy very special properties due to the appearance of high powers of integers in its equation and the fact that a n + b n = c n would be an n th power as well. In 1982–1985, Gerhard Frey called attention to the unusual properties of this same curve, now called a Frey curve . He showed that it was likely that the curve could link Fermat and Taniyama, since any counterexample to Fermat's Last Theorem would probably also imply that an elliptic curve existed that was not modular . Frey showed that there were good reasons to believe that any set of numbers ( a , b , c , n ) capable of disproving Fermat's Last Theorem could also probably be used to disprove the Taniyama–Shimura–Weil conjecture.
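The "very special properties" can be made concrete: the discriminant of the Frey curve y^2 = x(x − a^n)(x + b^n) works out to 16(a^n b^n (a^n + b^n))^2, so if a^n + b^n equalled c^n it would be 16(abc)^(2n), with every prime factor appearing to an enormous power. A short Python sketch (illustrative only; the inputs below do not satisfy Fermat's equation, since no such triple exists):

```python
def frey_curve_discriminant(a, b, n):
    """Discriminant of y^2 = x(x - A)(x + B) with A = a^n, B = b^n.

    The cubic's roots are 0, A and -B, so its polynomial discriminant is
    (A * B * (A + B))^2; the elliptic-curve discriminant carries an extra
    factor of 16.  If a^n + b^n were equal to some c^n, then A + B = c^n
    and the whole expression would collapse to 16 * (a*b*c)^(2n).
    """
    A, B = a**n, b**n
    return 16 * (A * B * (A + B)) ** 2

# Example with a, b chosen freely (no Fermat solution is implied):
print(frey_curve_discriminant(1, 2, 3))  # 16 * (1 * 8 * 9)^2 = 82944
```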
Therefore, if the Taniyama–Shimura–Weil conjecture were true, no set of numbers capable of disproving Fermat could exist, so Fermat's Last Theorem would have to be true as well. The conjecture says that each elliptic curve with rational coefficients can be constructed in an entirely different way, not by giving its equation but by using modular functions to parametrise coordinates x and y of the points on it. Thus, according to the conjecture, any elliptic curve over Q would have to be a modular elliptic curve , yet if a solution to Fermat's equation with non-zero a , b , c and n greater than 2 existed, the corresponding curve would not be modular, resulting in a contradiction. If the link identified by Frey could be proven, then in turn, it would mean that a disproof of Fermat's Last Theorem would disprove the Taniyama–Shimura–Weil conjecture, or by contraposition, a proof of the latter would prove the former as well. [ 8 ] To complete this link, it was necessary to show that Frey's intuition was correct: that a Frey curve, if it existed, could not be modular. In 1985, Jean-Pierre Serre provided a partial proof that a Frey curve could not be modular. Serre did not provide a complete proof of his proposal; the missing part (which Serre had noticed early on [ 9 ] : 1 ) became known as the epsilon conjecture (sometimes written ε-conjecture; now known as Ribet's theorem ). Serre's main interest was in an even more ambitious conjecture, Serre's conjecture on modular Galois representations , which would imply the Taniyama–Shimura–Weil conjecture. However his partial proof came close to confirming the link between Fermat and Taniyama. In the summer of 1986, Ken Ribet succeeded in proving the epsilon conjecture, now known as Ribet's theorem . His article was published in 1990. 
In doing so, Ribet finally proved the link between the two theorems by confirming, as Frey had suggested, that a proof of the Taniyama–Shimura–Weil conjecture for the kinds of elliptic curves Frey had identified, together with Ribet's theorem, would also prove Fermat's Last Theorem. In mathematical terms, Ribet's theorem showed that if the Galois representation associated with an elliptic curve has certain properties (which Frey's curve has), then that curve cannot be modular, in the sense that there cannot exist a modular form which gives rise to the same Galois representation. [ 10 ] Following the developments related to the Frey curve, and its link to both Fermat and Taniyama, a proof of Fermat's Last Theorem would follow from a proof of the Taniyama–Shimura–Weil conjecture—or at least a proof of the conjecture for the kinds of elliptic curves that included Frey's equation (known as semistable elliptic curves ). However, despite the progress made by Serre and Ribet, this approach to Fermat was widely considered unusable as well, since almost all mathematicians saw the Taniyama–Shimura–Weil conjecture itself as completely inaccessible to proof with current knowledge. [ 1 ] : 203–205, 223, 226 For example, Wiles's ex-supervisor John Coates stated that it seemed "impossible to actually prove", [ 1 ] : 226 and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible". [ 1 ] : 223 Hearing of Ribet's 1986 proof of the epsilon conjecture, English mathematician Andrew Wiles, who had studied elliptic curves and had a childhood fascination with Fermat, decided to begin working in secret towards a proof of the Taniyama–Shimura–Weil conjecture, since it was now professionally justifiable, [ 11 ] as well as because of the enticing goal of proving such a long-standing problem. 
Ribet later commented that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]." [ 1 ] : 223 Wiles initially presented his proof in 1993. It was finally accepted as correct, and published, in 1995, following the correction of a subtle error in one part of his original paper. His work was extended to a full proof of the modularity theorem over the following six years by others, who built on Wiles's work. During 21–23 June 1993, Wiles announced and presented his proof of the Taniyama–Shimura conjecture for semistable elliptic curves, and hence of Fermat's Last Theorem, over the course of three lectures delivered at the Isaac Newton Institute for Mathematical Sciences in Cambridge, England . [ 2 ] There was a relatively large amount of press coverage afterwards. [ 12 ] After the announcement, Nick Katz was appointed as one of the referees to review Wiles's manuscript. In the course of his review, he asked Wiles a series of clarifying questions that led Wiles to recognise that the proof contained a gap. There was an error in one critical portion of the proof which gave a bound for the order of a particular group: the Euler system used to extend Kolyvagin and Flach 's method was incomplete. The error would not have rendered his work worthless—each part of Wiles's work was highly significant and innovative by itself, as were the many developments and techniques he had created in the course of his work, and only one part was affected. [ 1 ] : 289, 296–297 Without this part proved, however, there was no actual proof of Fermat's Last Theorem. Wiles spent almost a year trying to repair his proof, initially by himself and then in collaboration with his former student Richard Taylor , without success. [ 13 ] [ 14 ] [ 15 ] By the end of 1993, rumours had spread that under scrutiny, Wiles's proof had failed, but how seriously was not known. 
Mathematicians were beginning to pressure Wiles to disclose his work whether or not complete, so that the wider community could explore and use whatever he had managed to accomplish. Far from being fixed, the problem, which had originally seemed minor, now seemed very significant and far less easy to resolve. [ 16 ] Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed, and to publishing his work so that others could build on it and find the error. He states that he was having a final look to try to understand the fundamental reasons why his approach could not be made to work, when he had a sudden insight that the specific reason why the Kolyvagin–Flach approach would not work directly also meant that his original attempt using Iwasawa theory could be made to work if he strengthened it using experience gained from the Kolyvagin–Flach approach since then. Each was inadequate by itself, but fixing one approach with tools from the other would resolve the issue and produce a class number formula (CNF) valid for all cases that were not already proven by his refereed paper: [ 13 ] [ 17 ] I was sitting at my desk examining the Kolyvagin–Flach method. It wasn't that I believed I could make it work, but I thought that at least I could explain why it didn't work. Suddenly I had this incredible revelation. I realised that the Kolyvagin–Flach method wasn't working, but it was all I needed to make my original Iwasawa theory work from three years earlier. So out of the ashes of Kolyvagin–Flach seemed to rise the true answer to the problem. It was so indescribably beautiful; it was so simple and so elegant. I couldn't understand how I'd missed it and I just stared at it in disbelief for twenty minutes. Then during the day I walked around the department, and I'd keep coming back to my desk looking to see if it was still there. It was still there.
I couldn't contain myself, I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much. On 6 October Wiles asked three colleagues (including Gerd Faltings ) to review his new proof, [ 19 ] and on 24 October 1994 Wiles submitted two manuscripts, "Modular elliptic curves and Fermat's Last Theorem" [ 4 ] and "Ring theoretic properties of certain Hecke algebras", [ 5 ] the second of which Wiles had written with Taylor and proved that certain conditions were met which were needed to justify the corrected step in the main paper. The two papers were vetted and finally published as the entirety of the May 1995 issue of the Annals of Mathematics . The new proof was widely analysed and became accepted as likely correct in its major components. [ 6 ] [ 10 ] [ 11 ] These papers established the modularity theorem for semistable elliptic curves, the last step in proving Fermat's Last Theorem, 358 years after it was conjectured. Fermat claimed to "... have discovered a truly marvelous proof of this, which this margin is too narrow to contain". [ 20 ] [ 21 ] Wiles's proof is very complex, and incorporates the work of so many other specialists that it was suggested in 1994 that only a small number of people were capable of fully understanding at that time all the details of what he had done. [ 2 ] [ 22 ] The complexity of Wiles's proof motivated a 10-day conference at Boston University ; the resulting book of conference proceedings aimed to make the full range of required topics accessible to graduate students in number theory. [ 9 ] As noted above, Wiles proved the Taniyama–Shimura–Weil conjecture for the special case of semistable elliptic curves, rather than for all elliptic curves. 
Over the following years, Christophe Breuil , Brian Conrad , Fred Diamond , and Richard Taylor (sometimes abbreviated as "BCDT") carried the work further, ultimately proving the Taniyama–Shimura–Weil conjecture for all elliptic curves in a 2001 paper. [ 23 ] Now proven, the conjecture became known as the modularity theorem . In 2005, Dutch computer scientist Jan Bergstra posed the problem of formalizing Wiles's proof in such a way that it could be verified by computer . [ 24 ] Wiles proved the modularity theorem for semistable elliptic curves, from which Fermat's Last Theorem follows by proof by contradiction . In this proof method, one assumes the opposite of what is to be proved, and shows that if that were true, it would create a contradiction. The contradiction shows that the assumption (that the conclusion is wrong) must have been incorrect, requiring the conclusion to hold. The proof falls roughly into two parts: In the first part, Wiles proves a general result about " lifts ", known as the "modularity lifting theorem". This first part allows him to prove results about elliptic curves by converting them to problems about Galois representations of elliptic curves. He then uses this result to prove that all semistable curves are modular, by proving that the Galois representations of these curves are modular. Wiles aims first of all to prove a result about these representations, that he will use later: that if a semistable elliptic curve E has a Galois representation ρ ( E , p ) that is modular, the elliptic curve itself must be modular. Proving this is helpful in two ways: it makes counting and matching easier, and, significantly, to prove the representation is modular, we only have to prove it for one single prime number p , and we can do this using any prime that makes our work easy – it does not matter which prime we use.
This is the most difficult part of the problem – technically it means proving that if the Galois representation ρ ( E , p ) is a modular form, so are all the other related Galois representations ρ ( E , p ∞ ) for all powers of p . [ 3 ] This is the so-called " modular lifting problem", and Wiles approached it using deformations . Together, these allow us to work with representations of curves rather than directly with elliptic curves themselves. Our original goal will have been transformed into proving the modularity of geometric Galois representations of semistable elliptic curves, instead. Wiles described this realization as a "key breakthrough". A Galois representation of an elliptic curve is a homomorphism from the absolute Galois group G to GL 2 ( Z p ) . To show that a geometric Galois representation of an elliptic curve is a modular form, we need to find a normalized eigenform whose eigenvalues (which are also its Fourier series coefficients) satisfy a congruence relationship for all but a finite number of primes. This is Wiles's lifting theorem (or modularity lifting theorem ), a major and revolutionary accomplishment at the time. So we can try to prove all of our elliptic curves are modular by using one prime number as p , but if we do not succeed in proving this for all elliptic curves, perhaps we can prove the rest by choosing different prime numbers as p for the difficult cases. (The proof must cover the Galois representations of all semistable elliptic curves E , but for each individual curve, we only need to prove it is modular using one prime number p .) From above, it does not matter which prime is chosen for the representations. We can use any one prime number that is easiest. 3 is the smallest prime number more than 2, and some work has already been done on representations of elliptic curves using ρ ( E , 3) , so choosing 3 as our prime number is a helpful starting point.
Wiles found that it was easier to prove the representation was modular by choosing a prime p = 3 in the cases where the representation ρ ( E , 3) is irreducible, but that when ρ ( E , 3) is reducible, it was easier to choose p = 5 . So, the proof splits in two at this point. The switch between p = 3 and p = 5 has since opened a significant area of study in its own right (see Serre's modularity conjecture ). Wiles uses his modularity lifting theorem to make short work of this case: This existing result for p = 3 is crucial to Wiles's approach and is one reason for initially using p = 3 . Wiles found that when the representation of an elliptic curve using p = 3 is reducible, it was easier to work with p = 5 and use his new lifting theorem to prove that ρ ( E , 5) will always be modular, than to try to prove directly that ρ ( E , 3) itself is modular (remembering that we only need to prove it for one prime). Wiles showed that in this case, one could always find another semistable elliptic curve F such that the representation ρ ( F , 3) is irreducible and also the representations ρ ( E , 5) and ρ ( F , 5) are isomorphic (they have identical structures). This proves: Wiles opted to attempt to match elliptic curves to a countable set of modular forms. He found that this direct approach was not working, so he transformed the problem by instead matching the Galois representations of the elliptic curves to modular forms. Wiles denotes this matching (or mapping) R → T ; more specifically, it is a ring homomorphism , where R is a deformation ring and T is a Hecke ring . Wiles had the insight that in many cases this ring homomorphism could be a ring isomorphism (Conjecture 2.16 in Chapter 2, §3 of the 1995 paper [ 4 ] ). He realised that the map between R and T is an isomorphism if and only if two abelian groups occurring in the theory are finite and have the same cardinality .
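In the streamlined form found in later expositions of the R = T method (a paraphrase using later notation, not Wiles's own formulation), the criterion comparing those two groups can be stated as follows:

```latex
% Numerical criterion (paraphrased from later expositions of the R = T method).
% Let \mathcal{O} be a complete discrete valuation ring and
%   \varphi : R \twoheadrightarrow \mathbf{T}
% a surjection of local \mathcal{O}-algebras, each augmented to \mathcal{O};
% write \wp_R = \ker(R \to \mathcal{O}), and let
% \eta_{\mathbf{T}} \subseteq \mathcal{O} be the congruence ideal of \mathbf{T}.
\[
  \varphi \text{ is an isomorphism of complete intersections}
  \iff
  \#\bigl(\wp_R/\wp_R^{2}\bigr) \;\le\; \#\bigl(\mathcal{O}/\eta_{\mathbf{T}}\bigr),
\]
% in which case the two sides are in fact equal: these are the two finite
% abelian groups whose cardinalities must match.
```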
This is sometimes referred to as the "numerical criterion". Given this result, Fermat's Last Theorem is reduced to the statement that two groups have the same order. Much of the text of the proof leads into topics and theorems related to ring theory and commutative algebra . Wiles's goal was to verify that the map R → T is an isomorphism and ultimately that R = T . In treating deformations, Wiles defined four cases, with the flat deformation case requiring more effort to prove and treated in a separate article in the same volume entitled "Ring-theoretic properties of certain Hecke algebras". Gerd Faltings , in his bulletin, gives the following commutative diagram (p. 745): or ultimately that R = T , indicating a complete intersection . Since Wiles could not show that R = T directly, he did so through Z 3 , F 3 and T / m via lifts . In order to perform this matching, Wiles had to create a class number formula (CNF). He first attempted to use horizontal Iwasawa theory, but that part of his work had an unresolved issue, such that he could not create a CNF. At the end of the summer of 1991, he learned about an Euler system recently developed by Victor Kolyvagin and Matthias Flach that seemed "tailor made" for the inductive part of his proof, which could be used to create a CNF, and so Wiles set his Iwasawa work aside and began working to extend Kolyvagin and Flach's work instead, in order to create the CNF his proof would require. [ 25 ] By the spring of 1993, his work had covered all but a few families of elliptic curves, and in early 1993, Wiles was confident enough of his nearing success to let one trusted colleague into his secret.
Since his work relied extensively on using the Kolyvagin–Flach approach, which was new to mathematics and to Wiles, and which he had also extended, in January 1993 he asked his Princeton colleague, Nick Katz , to help him review his work for subtle errors. Their conclusion at the time was that the techniques Wiles used seemed to work correctly. [ 1 ] : 261–265 [ 26 ] Wiles's use of Kolyvagin–Flach would later be found to be the point of failure in the original proof submission, and he eventually had to revert to Iwasawa theory and a collaboration with Richard Taylor to fix it. In May 1993, while reading a paper by Mazur, Wiles had the insight that the 3/5 switch would resolve the final issues and would then cover all elliptic curves. Given an elliptic curve E over the field Q of rational numbers, for every prime power ℓ n , there exists a homomorphism from the absolute Galois group Gal( Q̄ / Q ) to the group of invertible 2 by 2 matrices whose entries are integers modulo ℓ n . This is because E ( Q̄ ), the points of E over Q̄ , form an abelian group on which Gal( Q̄ / Q ) acts; the subgroup of elements x such that ℓ n x = 0 is just ( Z / ℓ n Z ) 2 , and an automorphism of this group is a matrix of the type described. Less obvious is that given a modular form of a certain special type, a Hecke eigenform with eigenvalues in Q , one also gets a homomorphism of the same kind, from the absolute Galois group to the invertible 2 by 2 matrices with entries modulo ℓ n . This goes back to Eichler and Shimura.
The idea is that the Galois group acts first on the modular curve on which the modular form is defined, thence on the Jacobian variety of the curve, and finally on the points of ℓ n power order on that Jacobian. The resulting representation is not usually 2-dimensional, but the Hecke operators cut out a 2-dimensional piece. It is easy to demonstrate that these representations come from some elliptic curve, but the converse is the difficult part to prove. Instead of trying to go directly from the elliptic curve to the modular form, one can first pass to the mod ℓ n representation for some ℓ and n , and from that to the modular form. In the case where ℓ = 3 and n = 1, results of the Langlands–Tunnell theorem show that the mod 3 representation of any elliptic curve over Q comes from a modular form. The basic strategy is to use induction on n to show that this is true for ℓ = 3 and any n , and that ultimately there is a single modular form that works for all n . To do this, one uses a counting argument, comparing the number of ways in which one can lift a mod ℓ n Galois representation to one mod ℓ n + 1 and the number of ways in which one can lift a mod ℓ n modular form. An essential point is to impose a sufficient set of conditions on the Galois representation; otherwise, there will be too many lifts and most will not be modular. These conditions should be satisfied for the representations coming from modular forms and those coming from elliptic curves.
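The match between the two kinds of objects can be observed numerically in individual cases. A Python sketch (not part of the proof; it uses the standard conductor-11 example, the elliptic curve y^2 + y = x^3 − x^2 − 10x − 20 and the weight-2 eigenform q ∏(1 − q^n)^2 (1 − q^(11n))^2): for each prime p of good reduction, the trace of Frobenius a_p = p + 1 − #E(F_p) computed from the curve matches the p-th coefficient of the form.

```python
def curve_ap(p):
    """a_p = p + 1 - #E(F_p) for E: y^2 + y = x^3 - x^2 - 10x - 20."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x**3 - x**2 - 10 * x - 20) % p
        for y in range(p):
            if (y * y + y - rhs) % p == 0:
                count += 1
    return p + 1 - count

def eigenform_coeffs(N):
    """Coefficients of q * prod_{n>=1} (1 - q^n)^2 (1 - q^(11n))^2 up to q^N."""
    coeffs = [0] * (N + 1)
    coeffs[1] = 1  # the leading factor q
    for n in range(1, N + 1):
        for exp in (n, 11 * n):
            if exp > N:
                continue
            for _ in range(2):  # each factor appears squared
                # multiply the truncated series by (1 - q^exp)
                for k in range(N, exp - 1, -1):
                    coeffs[k] -= coeffs[k - exp]
    return coeffs

a = eigenform_coeffs(13)
for p in (2, 3, 5, 7, 13):  # good primes (11, the conductor, is excluded)
    print(p, curve_ap(p), a[p])  # the two columns agree: -2, -1, 1, -2, 4
```

Such checks confirm modularity prime by prime for one curve; the content of the proof is that the agreement holds for every semistable curve and every prime at once.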
If the original mod 3 representation has an image which is too small, one runs into trouble with the lifting argument, and in this case, there is a final trick which has since been studied in greater generality in the subsequent work on the Serre modularity conjecture . The idea involves the interplay between the mod 3 and mod 5 representations. In particular, if the mod-5 Galois representation ρ̄ E ,5 associated to a semistable elliptic curve E over Q is irreducible, then there is another semistable elliptic curve E′ over Q such that its associated mod-5 Galois representation ρ̄ E′ ,5 is isomorphic to ρ̄ E ,5 and such that its associated mod-3 Galois representation ρ̄ E′ ,3 is irreducible (and therefore modular by Langlands–Tunnell). [ 27 ] In his 108-page article published in 1995, Wiles divides the subject matter into the following chapters (preceded here by page numbers): Gerd Faltings subsequently provided some simplifications to the 1995 proof, primarily in switching from geometric constructions to rather simpler algebraic ones. [ 19 ] [ 28 ] The book of the Cornell conference also contained simplifications to the original proof. [ 9 ] Wiles's paper is more than 100 pages long and often uses the specialised symbols and notations of group theory , algebraic geometry , commutative algebra , and Galois theory . The mathematicians who helped to lay the groundwork for Wiles often created new specialised concepts and technical jargon .
Among the introductory presentations are an email which Ribet sent in 1993; [ 29 ] [ 30 ] Hesselink's quick review of top-level issues, which gives just the elementary algebra and avoids abstract algebra; [ 24 ] or Daney's web page, which provides a set of his own notes and lists the current books available on the subject. Weston attempts to provide a handy map of some of the relationships between the subjects. [ 31 ] F. Q. Gouvêa's 1994 article "A Marvelous Proof", which reviews some of the required topics, won a Lester R. Ford award from the Mathematical Association of America . [ 32 ] [ 33 ] Faltings' 5-page technical bulletin on the matter is a quick and technical review of the proof for the non-specialist. [ 34 ] For those in search of a commercially available book to guide them, he recommended that those familiar with abstract algebra read Hellegouarch, then read the Cornell book, [ 9 ] which is claimed to be accessible to "a graduate student in number theory". The Cornell book does not cover the entirety of the Wiles proof. [ 12 ]
https://en.wikipedia.org/wiki/Wiles's_proof_of_Fermat's_Last_Theorem
Wiley Interdisciplinary Reviews: Systems Biology and Medicine (abbreviated WIREs Systems Biology and Medicine ) is a bimonthly peer-reviewed interdisciplinary scientific review journal covering systems biology and medicine . It was established in 2009 and is published by John Wiley & Sons as part of its Wiley Interdisciplinary Reviews journal series. The editors-in-chief are Joseph H. Nadeau ( Pacific Northwest Research Institute ) and Shankar Subramaniam ( University of California, San Diego ). According to the Journal Citation Reports , the journal has a 2020 impact factor of 5.000, ranking it 48th out of 140 journals in the category "Medicine, Research & Experimental". [ 1 ]
https://en.wikipedia.org/wiki/Wiley_Interdisciplinary_Reviews:_Systems_Biology_and_Medicine
The Wilfley Table is commonly used for the concentration of heavy minerals from the laboratory up to the industrial scale. It has a traditional shaking (oscillating) table design with a riffled deck. [ 1 ] It is one of several brands of wet tables used for the separation and concentration of heavy ore minerals which include the Deister Table and Holman Table, all built to handle either coarse or fine feeds for mineral processing. [ 2 ] The Wilfley Table became a design used world-wide because it significantly increased the recovery of silver, gold and other precious metals. [ 3 ] Such was the table's widespread use that it was included in Webster's Dictionary , [ 4 ] and has been in constant use by miners and metallurgists since its invention. [ 5 ] The Wilfley Table was conceived by Arthur Wilfley , a mining engineer based in Kokomo, Colorado in the United States. As a silver mine operator, Wilfley spent many years refining his separation table design in order to make the extraction of silver more economic. Rather than using heating processes ( smelting ) to concentrate the ore , Wilfley had been experimenting with mineral separation by exploiting mineral density contrasts. [ 6 ] Wilfley was able to perfect a mechanical solution for the recovery of gold and silver from low-grade ores by means of the Wilfley table. [ 3 ] [ 5 ] The first Wilfley table was built on a preliminary scale in May 1895. [ 7 ] [ 8 ] The first full-sized table was used in Wilfley's own mill in Kokomo, in May 1896, while the first table sold for installation was placed in the Puzzle Mill , Breckinridge , Colorado, in August 1896. [ 7 ] Patented in 1897, [ 9 ] the Wilfley table made mining lower-grade ores profitable. Pulverised ore, suspended in a water solution, was washed across a sloping riffled vibrating table so that metals separated as they drained off. [ 6 ] [ 9 ] The Wilfley Table was said to have revolutionised ore dressing worldwide and more than 25,000 were in service by the 1930s. 
[ 3 ] The Wilfley Table was built to solve a problem common in the recovery of heavy ore minerals ; approximately 90% of gold grains, platinum group minerals , sulphides , arsenides/antimonides and tellurides , in source rocks are silt-sized (<0.063 mm (0.0025 in)). [ 10 ] Concentration of these minerals requires preconcentration techniques that include recovery of this fraction. Preconcentration may involve any number of methods including jigs, spirals, shaking tables, Knelson concentration , dense media separation, panning and hydroseparation. The Wilfley Table performs preconcentration on the basis of density to separate minerals. It can recover silt to coarse sand-sized heavy minerals for a broad spectrum of commodities including diamonds , precious and base metals , and uranium . [ 10 ] The table, like most shaking tables, consists of a riffled deck with a gentle tilt on a stable support to counteract the table's oscillation . A motor, usually mounted to the side, drives a small arm that shakes the table along its length. The riffles are typically less than 10 mm (0.39 inches) high and cover more than half the table's surface. [ 10 ] [ 11 ] Varied riffle designs are available for specific applications. The riffles run longitudinally, parallel to the long dimension of the table. The table's shaking motion is parallel to the riffle pattern. Deck construction varies from wood to hard-wearing fiberglass where the riffles are formed as part of the mold . The decks are lined with high coefficient-of-friction materials ( linoleum , rubber or plastic), which assists in the mineral recovery process. [ 11 ] During operation, a slurry of <2 mm (0.079 inches) sample material consisting of about 25% solids by weight is fed with wash water along the top of the table, perpendicular to the direction of table motion. 
[ 10 ] [ 11 ] [ 12 ] The table is shaken longitudinally, using a slow forward stroke and a rapid return stroke that causes particles to migrate or crawl along the deck parallel to the direction of motion. [ 6 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Particles move diagonally across the deck from the feed end and separate on the table according to size and density. Water flow rate, table tilt angle and intensity of the shaking motion must be properly adjusted for effective mineral recovery. [ 6 ] [ 10 ] The riffles cause mineral particles to stratify in the protected inter-riffle regions. [ 12 ] The finest and heaviest particles are forced to the bottom and the coarsest and lightest particles remain at the top. Particle layers migrate across the riffles with addition of new slurry feed and continued water wash. [ 12 ] [ 14 ] The riffles are tapered and flatten (disappear) towards the concentrate end of the table. The taper of the riffles causes migrating particles of progressively finer size and higher density to be brought into contact with the flowing film of water that tops the riffles; lighter material is washed away as tailings and middlings. Final concentration takes place in the unriffled region at the end of the deck where the layer of material at this stage is usually only a few particles deep. [ 11 ] [ 12 ] [ 14 ] Mineral separation is hampered by several factors, with particle size being particularly important. As the slurry feed grain size increases, the efficiency of separation tends to decrease. [ 11 ] [ 12 ] Separation efficiency is also affected by the stroke of the table (frequency and length); fine feed requires a higher speed and shorter stroke than a coarse feed. A frequency of 200 to 325 strokes per minute is typical. 
[ 11 ] [ 13 ] [ 14 ] When Wilfley tables were originally employed to rework tailing dumps , the tables were found to enhance mineral recovery by some 35–40% compared to existing processes, [ 15 ] though this is not always the case. [ 16 ] Optimisation of table setup can have a significant impact on the recovery of ore. Using magnetite as a synthetic ore to test recovery on a Wilfley Table, Mackay et al. (2015) found that an optimised table setup (i.e. table inclination, wash-water flow rate, material feed rate, table speed, stroke amplitude, feed grade and feed density) increased magnetite recovery by a factor of 3.7. [ 6 ] The Wilfley table, like any wet table, is one of the most metallurgically efficient forms of gravity concentration, being used to treat the smaller, more difficult flow-streams, and to produce finished concentrates from the products of other forms of gravity system. [ 12 ] Additional efficiencies are gained in the treatment of low grade feeds where two or even three decks are stacked one above the other allowing for continuous feeding. [ 12 ] Modern applications of the Wilfley table (and other wet shaking tables) are predominantly observed in the following roles: [ 17 ] Tables are now also being used in the recycling of electronic scrap to recover precious metals. [ 12 ]
https://en.wikipedia.org/wiki/Wilfley_table
In mathematics , specifically combinatorics , a Wilf–Zeilberger pair , or WZ pair , is a pair of functions that can be used to certify certain combinatorial identities . WZ pairs are named after Herbert S. Wilf and Doron Zeilberger , and are instrumental in the evaluation of many sums involving binomial coefficients , factorials , and in general any hypergeometric series . A function's WZ counterpart may be used to find an equivalent and much simpler sum. Although finding WZ pairs by hand is impractical in most cases, Gosper's algorithm provides a method to find a function's WZ counterpart, and can be implemented in a symbolic manipulation program . Two functions F and G form a WZ pair if and only if the following two conditions hold:

F(n + 1, k) − F(n, k) = G(n, k + 1) − G(n, k),
lim_{M → ±∞} G(n, M) = 0.

Together, these conditions ensure that

∑_{k = −∞}^{∞} [F(n + 1, k) − F(n, k)] = 0,

because the function G telescopes . Therefore,

∑_{k = −∞}^{∞} F(n + 1, k) = ∑_{k = −∞}^{∞} F(n, k),

that is,

∑_{k = −∞}^{∞} F(n, k) = const.

The constant does not depend on n . Its value can be found by substituting n = n₀ for a particular n₀. If F and G form a WZ pair, then they satisfy the relation

G(n, k) = R(n, k) F(n, k),

where R ( n , k ) is a rational function of n and k and is called the WZ proof certificate . A Wilf–Zeilberger pair can be used to verify the identity

∑_{k = 0}^{n} C(n, k) = 2^n,

where C(n, k) denotes the binomial coefficient. Divide the identity by its right-hand side:

∑_{k = 0}^{n} C(n, k) / 2^n = 1.

Use the proof certificate to verify that the left-hand side does not depend on n , where

F(n, k) = C(n, k) / 2^n,
G(n, k) = R(n, k) F(n, k) = −C(n, k − 1) / 2^{n+1},
R(n, k) = k / (2(k − n − 1)).

Now F and G form a Wilf–Zeilberger pair. To prove that the constant in the right-hand side of the identity is 1, substitute n = 0, for instance.
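As an illustration, the defining WZ relation can be checked mechanically. The sketch below (an assumed worked example, not part of the article) uses the standard pair for the binomial-theorem identity ∑_k C(n, k) = 2^n, namely F(n, k) = C(n, k)/2^n and G(n, k) = −C(n, k−1)/2^{n+1}, with certificate R(n, k) = k/(2(k − n − 1)); exact rational arithmetic avoids floating-point error:

```python
from fractions import Fraction
from math import comb

def c(n, k):
    # Binomial coefficient extended by 0 outside 0 <= k <= n,
    # so sums over k may range over all integers.
    return comb(n, k) if 0 <= k <= n else 0

def F(n, k):
    # F(n, k) = C(n, k) / 2^n: the identity divided by its right-hand side.
    return Fraction(c(n, k), 2 ** n)

def G(n, k):
    # Closed form G(n, k) = -C(n, k-1) / 2^(n+1), equal to R(n, k) * F(n, k)
    # away from the certificate's pole at k = n + 1.
    return Fraction(-c(n, k - 1), 2 ** (n + 1))

# 1. The defining WZ relation F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k).
for n in range(12):
    for k in range(-3, n + 4):
        assert F(n + 1, k) - F(n, k) == G(n, k + 1) - G(n, k)

# 2. G agrees with the certificate form R(n, k) * F(n, k) for 0 <= k <= n.
for n in range(12):
    for k in range(0, n + 1):
        assert G(n, k) == Fraction(k, 2 * (k - n - 1)) * F(n, k)

# 3. The telescoped sum is therefore independent of n; at n = 0 it is 1.
assert all(sum(F(n, k) for k in range(n + 1)) == 1 for n in range(12))
```

Because G vanishes for large |k|, condition 1 shows that ∑_k F(n, k) never changes with n, so evaluating it at n = 0 pins down the constant for every n.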
https://en.wikipedia.org/wiki/Wilf–Zeilberger_pair
Wilhelm Karl Klemm (5 January 1896 – 24 October 1985) was an inorganic and physical chemist . [ 2 ] [ 5 ] Klemm did extensive work on intermetallic compounds , rare earth metals , transition elements and compounds involving oxygen and fluorine . He and Heinrich Bommer were the first to isolate elemental erbium (1934) [ 6 ] and ytterbium (1936). [ 5 ] [ 7 ] [ 8 ] Klemm refined Eduard Zintl 's ideas about the structure of intermetallic compounds and their connections to develop the Zintl-Klemm concept . [ 9 ] [ 10 ] [ 11 ] Klemm co-authored one of the ten most-cited papers in the history of the journal Zeitschrift für anorganische und allgemeine Chemie . [ 12 ] [ 13 ] His textbooks on inorganic chemistry became standard works for chemists. His Magnetochemie (c1936) is considered foundational to magnetochemistry . [ 14 ] Anorganische Chemie ( Inorganic Chemistry ) by Klemm and Rudolf Hoppe has been described as a legendary work by two titans of solid state chemistry. [ 15 ] Klemm was the second President of the Gesellschaft Deutscher Chemiker (GDCh), serving from 1952 to 1953. [ 16 ] [ 17 ] [ 18 ] He was President of the International Union of Pure and Applied Chemistry (IUPAC) from 1965 to 1967. [ 19 ] [ 20 ] Klemm co-edited the journal Zeitschrift für anorganische und allgemeine Chemie from 1939 to 1965. [ 12 ] Since 1985, the GDCh has awarded the Wilhelm Klemm Prize in his honor. [ 21 ] Klemm was born on 5 January 1896 in Guhrau , Lower Silesia to Wilhelm and Ottilie (John) Klemm. [ 3 ] [ 22 ] His father was a master carpenter and furniture manufacturer. [ 23 ] Klemm attended the Realgymnasium in Grünberg [ 3 ] before serving in the German army from 1914 to 1919. [ 4 ] He was an army liaison in Turkey, where he learned Turkish and Arabic . [ 23 ] From 1919 to 1923 Klemm studied chemistry at University of Breslau . In 1923, Klemm received a doctor of philosophy degree. 
Heinrich Biltz supervised Klemm's dissertation on the chemistry of uric acid , [ 5 ] [ 24 ] [ 2 ] : 3 entitled Aus der Chemie der Harnsäure (1923). [ 25 ] In December 1924 Klemm married Lisabeth Herrmann , who had studied chemistry at Danzig (Gdansk) and at Breslau University with Heinrich Biltz. She received her degree in 1921, completing a doctoral thesis magna cum laude on the methylation of uric acid and its methyl derivatives. Her father was a forestry scientist. [ 2 ] : 3 The Klemms formed a community of which Lisabeth Klemm was the social center, and Wilhelm was the intellectual center. [ 22 ] Heinrich Biltz recommended Klemm to his brother Wilhelm Biltz , who had begun teaching at the Technische Hochschule Hannover in 1921. Klemm habilitated there in the field of inorganic chemistry in 1927. [ 5 ] He was an enthusiastic, inspiring, hard worker, untiring, with unbelievable diligence and determination. – Rudolf Hoppe [ 4 ] From 1927 to 1929 Klemm worked as a Privatdozent at the Technische Hochschule Hannover. In 1929 he was promoted to the position of associate professor. [ 5 ] Klemm was reportedly a Professor for inorganic chemistry in Düsseldorf at some time between 1929 and 1933. [ 25 ] [ 26 ] As of 1 April 1933, Klemm became a full professor and head of the Department of Inorganic Chemistry at the Technische Hochschule Danzig . Klemm replaced Hans Joachim von Wartenberg , [ 22 ] who had taught at the Technische Hochschule Danzig from 1913 to 1932 and served in several senior positions including head of the Department of Inorganic Chemistry. Von Wartenberg left in August 1932 to become director of the Institute of Inorganic Chemistry at the University of Göttingen . [ 27 ] [ 28 ] [ 29 ] The Technische Hochschule Danzig was at that time located in the Free City of Danzig (1920-1939). [ 30 ] [ 31 ] The population of the city was predominantly German and faculty and staff tended to align with National Socialism even before 1933. 
[ 32 ] [ 33 ] The attitudes of scientists at the university have been described in terms of "shades of gray". [ 31 ] Klemm had some involvement with the National Socialists but his motives are not known. [ 34 ] Klemm was not a signatory of the Bekenntnis der Professoren an den deutschen Universitäten (1933). [ 35 ] He did sign the later Aufstellung zu den Unterzeichnern des Appells „An die Gebildeten der Welt“ (11. November 1933) , a list of academics who professed support for Adolf Hitler and National Socialism. [ 36 ] Klemm became a member of the NSDAP (Nazi Party) in 1938, rather later than contemporaries like Adolf Butenandt . [ 31 ] : 313 Following the Invasion of Poland which began 1 September 1939, the Free City of Danzig was annexed by Germany, and anti-Jewish measures escalated. [ 33 ] In a letter to the editorial staff of Chemische Berichte in June 1942 Klemm argued that contributions from chemist Georg-Maria Schwab and other "non-Aryan" authors should not appear in German chemical journals. [ 34 ] Klemm served as head of the Inorganic Chemistry department of the Technische Hochschule Danzig from 1933 to 1945, [ 25 ] and was its last vice-rector. He was responsible for the evacuation of equipment, books, files, and people in 1944–1945, in advance of Soviet troops. [ 37 ] Approximately 500 books and pieces of equipment and 300 staff and family members sailed on the ship Deutschland on 27 January 1945 bound for Kiel. [ 33 ] Much of the university including the chemistry building was destroyed in subsequent months. Following the war Gdańsk became part of Poland. On 24 March 1945, the university was re-established as a Polish institution. [ 33 ] During the period of denazification following the war, Nazi party members and others who were more than nominal participants in Nazi activities were barred from public posts. Those applying for academic positions had to certify their acceptability. 
[ 38 ] Klemm was the lead author for the preparation and publication of the six inorganic chemistry volumes of the FIAT review of German science, 1939-1946 (1948-1949). [ 39 ] FIAT volumes were compiled by leading German scientists in cooperation with the Military Government for Germany, involving Field Information Agencies Technical from the British, French, and U.S. zones, to report on the scientific work done in Germany during the war years. [ 40 ] From 23 May 1947 [ 26 ] to 1951, Klemm led the Inorganic Chemical Institute at University of Kiel (Christian-Albrechts-Universität zu Kiel). [ 41 ] The Institute of Inorganic Chemistry at the University of Kiel has a collection of correspondence and other papers dating from 1947 through the 1960s, relating to Wilhelm Klemm and his successor, Robert Juza . [ 26 ] Klemm's first wife, Lisabeth Klemm (née Herrmann, born 9 October 1895, Eberswalde) died of cancer on 15 October 1948 in Kiel. [ 2 ] In 1949, Klemm married Lina Arndt, a dentist who had been a friend of his first wife. [ 3 ] [ 2 ] By 1951, the Allied Powers were lifting reemployment restrictions against Nazi party members, and it became easier for academics to find or change positions. [ 38 ] Klemm accepted a position as professor and department head at the Westfälische Wilhelms-Universität Münster where he remained from 1951 until he retired as professor emeritus in 1964. [ 3 ] The university was in need of substantial rebuilding after the war. Klemm headed the Institute of Inorganic and Analytical Chemistry. [ 41 ] As rector of the Westfälische Wilhelms-Universität Münster from 1957 to 1958, [ 3 ] Klemm founded its Natural Science Center. [ 41 ] He also served as vice-rector from 1958 to 1960. [ 42 ] Klemm's scientific work focused on the systematic investigation of solids, to understand the properties of substances and how they related to the substances' atomic arrangement. 
[ 41 ] [ 22 ] At a very early stage he recognized the importance of physical methods including crystal structure analysis using X-ray diffraction and magnetochemical measurements for the investigation of solids. His paper with Wilhelm Biltz , "Über die Elektrolytische Leitfähigkeit geschmolzenen Scandiumchlorids" (About the electrolytic conductivity of molten scandium chloride, 1923) became one of the ten most-cited papers in the history of the journal Zeitschrift für anorganische und allgemeine Chemie . [ 12 ] [ 13 ] Klemm has been described as the founder of modern magnetochemistry [ 14 ] for introducing new methods in the 1920s and describing them in detail in his 1936 book, Magnetochemie . [ 23 ] It is considered a "pioneering textbook" and the foundation of much subsequent work in the field. [ 14 ] [ 43 ] Klemm's areas of focus included the intermetallic compounds , rare earth metals , transition elements and compounds involving oxygen and fluorine . [ 41 ] [ 22 ] [ 44 ] His work on the properties of rare elements such as gallium , germanium , indium , rhenium and related compounds was considered authoritative. He was particularly interested in the synthesis of compounds involving unusual degrees of oxidation, and the comparison of compounds with similar structure in order to better understand their properties. [ 45 ] Klemm studied molar volumes and coefficients of expansion of both fused and solid halides . He also examined indium , gallium , germanium , and rhenium , and rare earth elements , determining their heats of formation and studying their reactivity with ammonia . [ 5 ] In 1936, Wilhelm Klemm and Anna Neuber published research on the magnetic properties of triphenylchromium compounds. Their magnetic moment (approx. 1.73 Bohr magnetons ) was found to be inconsistent with the structure determination proposed by Franz Hein for penta-, tetra- and triphenylchromium compounds. 
[ 46 ] [ 47 ] In 1934, Wilhelm Klemm and Heinrich Bommer were the first to achieve pure erbium , by heating erbium chloride with potassium. [ 6 ] In 1936, Wilhelm Klemm and Heinrich Bommer were the first to isolate elemental ytterbium by reducing ytterbium(III) chloride with potassium at 250 °C. They also determined the crystal structure and magnetic properties of the metal. [ 5 ] [ 7 ] [ 48 ] [ 8 ] Klemm's work on transition metal oxides, fluorides and lanthanides was interrupted in 1939 by World War II . [ 23 ] The research school of Wilhelm Klemm (1896-1985) in Danzig specialized in the making of series of oxide and fluorine crystals by slightly changing the chemical composition from one compound to the following in the series. They played around with chemical structures like J. S. Bach made musical variations on a theme in The Art of Fugue - Rudolf Hoppe [ 49 ] Klemm's research led to the identification of systematic relationships among the elements of the periodic system . It also led to a new method for classifying rare earths based on the stability of both completely filled and "half-filled" electron shells which could be applied to both ions and metals. [ 5 ] Klemm identified unusual oxidation states in oxo- and fluoro- complexes and refined the ideas of Eduard Zintl on the structure of intermetallic compounds to develop the Zintl-Klemm concept . [ 22 ] [ 9 ] [ 10 ] [ 11 ] [ 50 ] [ 51 ] One of Klemm's students and coworkers was Rudolf Hoppe . Hoppe worked with Klemm on fluorides, [ 52 ] and in 1962 produced the first noble gas compounds. [ 53 ] [ 54 ] Over the course of his career, Klemm wrote and co-wrote a number of textbooks on inorganic chemistry which became standard textbooks in the field, repeatedly reprinted and translated. 
Klemm was a member of the Academy of Sciences Leopoldina (Deutsche Akademie der Naturforscher Leopoldina) in Halle, Germany ; the Bavarian Academy of Sciences and Humanities (Bayerische Akademie der Wissenschaften) in Munich, Germany ; the Göttingen Academy of Sciences (Akademie der Wissenschaften zu Göttingen) in Göttingen, Germany ; and the Rhine-Westphalian Academy of Sciences in Düsseldorf, Germany . [ 3 ] Klemm was co-editor of Zeitschrift für anorganische und allgemeine Chemie (the journal for inorganic and general chemistry) from 1939 to 1965. [ 5 ] [ 12 ] From 1945 onwards, his central tasks were to reestablish teaching and research in Kiel (1947-1951) and in Münster (1951-) and to help reconstruct chemical institutions at the national and international levels. [ 23 ] Wilhelm Klemm was an influential science organizer. He became the second president of the Gesellschaft Deutscher Chemiker (1952-1953), working to foster communication between chemists in different zones of post-war Germany. [ 16 ] In the 1950s and 1960s, he worked to build communication and cohesion between scientists in the GDR and the Federal Republic . As president of the GDCh he participated in the founding of the Chemical Society of the GDR, formally created on 11 May 1953. [ 16 ] Wilhelm Klemm campaigned for international exchange in the sciences. From 1965 to 1967 he was President of the International Union of Pure and Applied Chemistry (IUPAC). [ 57 ] [ 3 ] [ 58 ] He was the first German scientist to fill such a high international position after World War II . [ 37 ] In 1966 he became the secretary-treasurer of the recently formed Committee on Data for Science and Technology (CODATA) of the International Council of Scientific Unions (ICSU), whose purpose was to encourage the use of international standards of scientific nomenclature, symbols, constants, and data sets. [ 59 ] He served on the committee from 1968 to 1975, also holding the position of vice-president. 
[ 57 ] On 8 July 1977 Wilhelm and Lina Klemm signed a will describing their intention to use the revenue from the eventual sale of their home at Theresiengrund 22 for scholarships for students to travel and present their research internationally. [ 60 ] Lina Klemm died on 4 April 1985. [ 2 ] Wilhelm Klemm died on 24 October 1985 while visiting Gdansk for the first time since the war, to receive commemorative medal no. 467 from the Gdańsk University of Technology . [ 37 ] His body was returned to Münster, where he is buried in the Münster Central Cemetery. [ 61 ] The first scholarships of the Wilhelm-Klemm-Stiftung were awarded in 1987. [ 60 ]
https://en.wikipedia.org/wiki/Wilhelm_Klemm
Wilhelm Normann (16 January 1870, in Petershagen – 1 May 1939, in Chemnitz ) (sometimes also spelled Norman ) was a German chemist who introduced the hydrogenation of fats in 1901. This invention, protected by German patent 141,029 in 1902, had a profound influence on the production of margarine and vegetable shortening . [ 1 ] [ 2 ] His father, Julius Normann, was the principal of the elementary school and Selekta in Petershagen . His mother was Luise Normann, née Siveke. Normann attended primary school from 31 March 1877. At Easter of his sixth grade he moved to the Friedrichs Gymnasium in Herford . After his father applied for a teacher's job at the municipal secondary school in Kreuznach , Wilhelm changed to the Royal Secondary School in Kreuznach. He passed his examinations and left school at the age of 18. Normann began work at the Herford machine fat and oil factory Leprince & Siveke in 1888. The founder of that company was his uncle, Wilhelm Siveke. After running a branch of the company in Hamburg for two years, he started studying chemistry at the laboratory of Professor Carl Remigius Fresenius in Wiesbaden . From April 1892 Normann continued his studies at the department of oil analytics at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin ) under the supervision of Professor D. Holde. From 1895 to 1900 he studied chemistry under supervision of Prof. Claus and Prof. Willgerod and geology under supervision of Prof. Steinmann at the Albert Ludwigs University of Freiburg . There he received his doctorate in 1900 with a thesis entitled Beiträge zur Kenntnis der Reaktion zwischen unterchlorigsauren Salzen und primären aromatischen Aminen ("Contributions to the knowledge of the reaction between hypochlorite salts and primary aromatic amines "). In 1901 Normann was appointed as correspondent of the Federal Geological Institute. 
[ 3 ] From 1901 to 1909 he was head of the laboratory at Leprince & Siveke, where he conducted investigations of the properties of fats and oils. In 1901 Normann heard about an article published by Paul Sabatier , [ 4 ] in which Sabatier stated that catalytic binding of hydrogen was possible only with vaporizable organic compounds, not with liquids such as tar oils. Normann investigated and disproved Sabatier's assertion. He was able to transform liquid oleic acid into solid stearic acid by the use of catalytic hydrogenation with dispersed nickel. This was the precursor of saturated fat hardening . On 27 February 1901 Normann invented what he called fat hardening, which was the process of producing saturated fats. On 14 August 1902 the German Imperial patent office granted patent 141,029 to the Leprince & Siveke Company, and on 21 January 1903 Normann was granted the British patent, GB 190301515 "Process for Converting Unsaturated Fatty Acids or their Glycerides into Saturated Compounds". During the years 1905 to 1910 Normann built a fat hardening facility in the Herford company. In 1908 the patent was bought by Joseph Crosfield & Sons Limited of Warrington , England. From the autumn of 1909 hardened fat was being successfully produced in a large-scale plant in Warrington. The initial year's production was nearly 3,000 tonnes (3,000 long tons; 3,300 short tons). [ 5 ] [ 6 ] When Lever Brothers produced a rival process, Crosfield took them to court over patent infringement, a case which Crosfield lost. From 1911 to 1922, Normann was scientific director of Ölwerke Germania (Germania Oil Factory) in Emmerich am Rhein , which was established by the Dutch Jürgens company. From 1917, Normann built a fat hardening factory in Antwerp for the margarine company SAPA Societe anonyme des grasses, huiles et produits africaines , which operated in India . He served as technical director by order of the Belgian Colonial Society. 
On 25 April 1920 he filed for German patent 407180 Verfahren zur Herstellung von gemischten Glyceriden (Process for the production of mixed glycerides), which was approved on 9 December 1924. [ 7 ] On 26 June 1920 the Firma Oelwerke Germania and Dr Wilhelm Normann filed for German patent 417215, Verfahren zur Umesterung von Fettsäureestern (Process for the transesterification of fatty acid esters), which was approved on 27 September 1925. [ 7 ] From 1924 to 1927 Normann was a consultant for fat hardening facilities for foreign companies. On 30 October 1926 Normann and the Volkmar Haenig & Comp, Metallochemische Werk Rodlebe company filed for German patent 564894, for Elektrisch beheizter Etagenröster (Electrically heated multi-deck roaster), approved 24 November 1932. [ 7 ] On 14 May 1929 he applied for German patent 582266, Verfahren zur Darstellung von Estern (Process for the preparation of esters), which was approved on 11 August 1933. [ 7 ] Normann married Martha Uflerbäumer [ 8 ] of Herford on 12 September 1916. On 1 January 1939 Normann retired, and he died on 1 May 1939 after an illness in the Küchwald Hospital in Chemnitz . He was entombed on 5 May 1939 in the family grave at the old cemetery on Hermannstrasse in Herford. In commemoration of the inventor of fat hardening the DGF donated the Wilhelm Normann Medal on 15 May 1940. Since 1940 it has been irregularly awarded. The Wilhelm-Normann-Berufskolleg (Wilhelm Normann Professional College) in Herford was named after Normann.
https://en.wikipedia.org/wiki/Wilhelm_Normann
The Wilhelm Ostwald Institute for Physical and Theoretical Chemistry at the University of Leipzig , located at Linnéstraße 2 in Leipzig , is the oldest physical chemistry institute in Germany. It is one of seven institutes of the Faculty of Chemistry and Mineralogy of the University of Leipzig. The institute was ceremoniously inaugurated in 1898 by its first director, Nobel Prize winner Wilhelm Ostwald , and has borne the official name "Wilhelm Ostwald Institute for Physical and Theoretical Chemistry" since 1998. [ 1 ] As early as 1870, the Ministry of Culture and Public Education in Dresden had created an appointment in the then-young field of physical chemistry. Gustav Wiedemann accepted the first professorship for physical chemistry in Leipzig in 1871 and, during this time, led the first "Physical-Chemical Laboratory". [ 1 ] On October 25, 2021, the 150th anniversary celebration took place in Leipzig. Wilhelm Ostwald took over this professorship in 1887, while Wiedemann accepted the position to become the chair of physics. [ 1 ] From 1887, the so-called "Second-Chemical Laboratory" under the direction of Wilhelm Ostwald at Brüderstr. 34 in Leipzig had become an internationally important center for physical chemistry. However, the premises could no longer meet this demand. [ 1 ] [ 2 ] For this reason, the Saxon Parliament granted 360,000 marks for the construction of a new institute in February 1896. Construction began immediately and as a result, teaching and research could begin as early as the winter semester of 1897. [ 1 ] On January 3, 1898, the newly built Physical-Chemical Institute was inaugurated. On the occasion of the ceremonial opening, Wilhelm Ostwald gave a keynote lecture and several important physicists and chemists at the time took part in the celebration, including Max Planck , Ernst Otto Beckmann and Max Le Blanc. 
[ 1 ] The U-shaped building had been furnished according to the most modern standards of the time: In the basement there was a large and a small battery, on the 1st floor there was a large workroom and equipment rooms, and on the 2nd floor there were two lecture halls (with 140 and 42 seats, respectively). In the attic there was a geological collection. There was also a director's apartment, located in the middle of the institute building and connected to the rest of the building by a corridor. [ 1 ] The old premises in the Brüderstraße were transferred to Ernst Otto Beckmann , for the establishment of a new professorship for applied chemistry. [ 1 ] Max Le Blanc succeeded Ostwald as director in 1906 and made alterations to the building to provide more space especially for electrochemical and photochemical research. [ 1 ] [ 2 ] During the great air raid on Leipzig on December 4, 1943 (during World War II ), the institute building was destroyed by incendiary bombs. The south wing was particularly badly hit. [ 1 ] The reconstruction of the north wing and parts of the central building were completed in 1951/52 so that work could be resumed. The institute now had a lecture hall (136 seats), three practical rooms and 24 laboratories. The workshop, collections and administrative rooms could also be used again. [ 1 ] At the beginning of the 1990s, the central building and the north wing (including renovation of the lecture hall) were completely refurbished and, in the course of this, the institute was also connected to the municipal district heating supply; until then, the institute operated its own hot water supply. [ 1 ] Wilhelm Ostwald was appointed professor at the University of Leipzig in 1887 and took up this post in October of the same year. [ 1 ] He thus took over the so-called "Second Chemical Laboratory" at Brüderstraße 34 from Gustav Wiedemann . 
The laboratory was divided into three departments: the "Physical-Chemical Department", the "Analytical Department" and the "Pharmaceutical Department". At this time, the laboratory was not yet a purely physico-chemical institute but had a more diverse structure; it was also responsible, for example, for the basic training of chemists, and there were teaching activities for pharmacists and high school teachers. [ 1 ] Research at the end of the 19th century included in particular the theory of solutions, electrical conductivity, the dissociation of acids and bases, the determination of molecular weights, the theory of contact potentials, the theory of electrical chains, polarization, internal friction, diffusion, and the optical, thermal, and volume relationships in chemical reactions. [ 1 ] [ 2 ] Ostwald's dilution law was also published at this laboratory in 1888, after Ostwald had made conductivity measurements of various acids. [ 1 ] [ 2 ] Svante Arrhenius had already been a collaborator of Ostwald in Riga and followed him to Leipzig as an assistant in 1888. Arrhenius conducted research in Leipzig until 1891, and in 1903 he was awarded the Nobel Prize in Chemistry for his theory of electrolytic dissociation. [ 3 ] The development of the Arrhenius equation dates from his time in Leipzig. [ 1 ] Walther Nernst accepted an invitation from Ostwald to come to Leipzig to write his habilitation thesis, which he successfully completed in 1889 on „Die elektromotorische Wirksamkeit der Jonen" ("The Electromotive Effectiveness of Ions"). In this habilitation, Nernst published the Nernst equation named after him. [ 1 ] Nernst received the Nobel Prize in Chemistry for the year 1920 in "recognition of his work in thermochemistry." [ 4 ] Julius Wagner was responsible for the analytical department between 1887 and 1897. Together with Ostwald, he developed a new didactics of the subject, gave lectures and designed new experiments for chemistry classes.
In 1901, he was appointed the first professor of chemistry didactics in Germany. [ 1 ] Wilhelm Ostwald was at the height of his research at the time the institute was founded. Around 1900, he devoted himself in particular to experimental investigations on catalysis and chemical kinetics . In addition, time as an experimental quantity came into focus and with it the beginning of non-equilibrium thermodynamics. He also explored nitric acid production by oxidation of ammonia on a platinum contact and the direct recovery of ammonia from nitrogen and hydrogen , together with Eberhard Brauer. Detailed lists of publications from this period are provided, for example, in the book "Physikalische Chemie in Leipzig" by Ulf Messow and Konrad Krause. [ 1 ] Ostwald left the institute in 1906 after disagreements with the university administration. [ 1 ] Under Ostwald, practical research was carried out at the institute and some apparatus and measuring equipment was built or developed - for example, Ostwald's Urthermostat for controlling temperature and pycnometer for measuring liquid density. In addition, measurements of conductivity, voltage of elements, measurements of viscosity and surface tension were carried out and corresponding apparatuses were refined. The university mechanic Fritz Köhler founded his company on this basis and built these devices for the laboratories independently. Ostwald arranged for his students to complete a practical course in equipment development in this company, which more than 100 students took advantage of. [ 1 ] Max Le Blanc succeeded Ostwald as director. [ 1 ] [ 2 ] Le Blanc was Ostwald's assistant from 1890 to 1896 and habilitated in Leipzig in 1891 with his first studies on decomposition voltage. He held the post of director for 27 years, the longest anyone has ever held this position. At this time, he was also secretary of the Saxon Academy of Sciences , also longer than anyone else. 
[ 1 ] During his time, Le Blanc introduced the oscillograph as a measuring instrument of electrochemistry and continued his work on measuring rapid potential changes at electrodes. He established the following departments, each with an associated professorship: the Photochemical Department, the Chemical Department, the Physical-Chemical Department and the Colloid-Chemical Department. In addition, there were electrochemical exercises and exercises on catalysis. [ 1 ] After Max Le Blanc's retirement, Wilhelm Carl Böttger succeeded him as director on a provisional basis for one year. [ 1 ] However, the temporary succession was extended because Johannes Stark (President of the Physikalisch-Technische Reichsanstalt and Chairman of the Notgemeinschaft der deutschen Wissenschaft) wanted to impose Wolfgang Ostwald (the son of Wilhelm Ostwald) as director, against the faculty's wish to appoint Karl Friedrich Bonhoeffer. [ 1 ] On November 1, 1934, Karl Friedrich Bonhoeffer was finally appointed as the chair of physical chemistry. He remained director of the institute until 1947. Bonhoeffer's research during this time focused on the labeling of atoms in biochemical processes with deuterium and on the reaction kinetics of gases and processes at electrode surfaces. Wolfgang Ostwald accepted the position of chair in colloid chemistry starting in 1935. [ 1 ] Bonhoeffer retained Le Blanc's structure of dividing the institute into departments. However, the "Analytical Department" was renamed the "Department of Applied Physical Chemistry" after Prof. Böttger retired in 1938. During the time of the "Drittes Reich" (Nazi Germany), Bonhoeffer retained his position as director, although his entire family worked against the Nazis and despite the threat of arrest on several occasions. One of Karl Friedrich Bonhoeffer's younger brothers was the theologian Dietrich Bonhoeffer.
From 1941 onwards, all of the institute's research was directed towards the war effort, and research commissions came directly from the War Ministry. Since the orders were subject to secrecy, relatively little is known to this day about the research of this period. The institute (and several surrounding buildings) was destroyed in the air raid of December 4, 1943, and in several that followed. All chemists then moved back to the original building at Brüderstraße 34. In June 1945, many professors of natural sciences from Leipzig were taken to Western Germany by the American occupiers. Bonhoeffer was able to escape this and remained director of the Physical-Chemical Institute until 1947. The Soviet Military Administration in Germany (SMAD) approved the reopening of the university for February 5, 1946. The faculty of the University of Leipzig shrank from 187 professors to 44 between May 8, 1945, and the reopening, due to denazification, compulsory service in the Soviet Union, and the like. Some chemists were also among them: of the previous four professors, only Bonhoeffer and one of the assistants remained in Leipzig after the war. [ 1 ] Chemists familiar with the production and handling of heavy water were particularly affected, as this expertise was of great interest in the Soviet Union. From 1946, some operations at the institute could be resumed, work begun during the war could in part be continued, and three doctoral students defended their dissertations in the same year. [ 1 ] [ 2 ] Bonhoeffer was followed by Herbert Staude as director of the institute from 1947 to 1959. [ 1 ] Staude had studied in Leipzig himself, had been an assistant to Max Le Blanc between 1925 and 1931, and now returned to Leipzig after several other positions. As director, he also oversaw the reconstruction of the north wing of the Physico-Chemical Institute at Linnéstraße 2, in which he was particularly supported by the janitor of the time, Max Schädlich.
From 1952, the institute had a lecture hall with 136 seats, three practical rooms, 24 laboratories, as well as a workshop and administrative rooms. In 1959, the institute again had 41 employees, 18 of whom were scientists. [ 1 ] Research at this time was mainly concerned with photochemistry (e.g. the photochemical properties of silver halides and other light-sensitive substances), thermochemistry (e.g. thermodynamic functions of inorganic substances or heats of mixing in liquid systems), electrochemistry (especially electrode processes), colloid chemistry and X-ray spectroscopy. Studies of exchange adsorption were carried out in an affiliated department. [ 1 ] During this time, there were repeated cases of "Republikflucht" (flight to West Germany). As a result, there were numerous interrogations, accusations and even arrests at the institute. It is believed that for this reason the director, Herbert Staude, did not return to Leipzig from a conference in Austria in 1959; a little later he found employment (and later a professorship) in Frankfurt am Main (West Germany). [ 1 ] He was succeeded in Leipzig in 1960 by Gerhard Geiseler, whose practical experience in industry also shaped the research focus at the institute. Geiseler had habilitated in Leipzig in 1955 and received a chair in physical chemistry in 1960. During his time, there were research groups on kinetics, thermodynamics and molecular spectroscopy, and the two groups on electrochemistry and X-ray spectroscopy also remained. In 1965, Prof. Armin Meisel's "X-ray spectroscopy" group organized an international conference entitled "X-ray spectra and chemical bonding", which was held regularly in the following years. During Geiseler's years, the work at the institute was strongly oriented towards experiments, for which a wide variety of apparatus was developed, built and operated; this required close cooperation with the workshops.
[ 1 ] [ 2 ] Right at the beginning of the GDR period, the universities were to be transformed from "bourgeois universities" into state-controlled and state-organized universities. [ 1 ] This took place through the so-called university reforms, carried out by the State Secretariat for Higher and Technical Education. In the course of the reforms, the State Secretariat significantly changed the content and structure of the universities, which of course also affected the Physico-Chemical Institute in Leipzig. [ 1 ] [ 2 ] On June 15, 1968 (after the Third University Reform), the Chemistry Section was established and the institutes of chemistry were dissolved, making Geiseler the last in the line of directors of the Physico-Chemical Institute that had begun with Wilhelm Ostwald. The aim of this introduction of sections was, on the one hand, to create conditions for easier interdisciplinary cooperation and, on the other, to create clear hierarchies so that the universities could be more easily supervised. Each section had a section director, who in turn reported directly to the head of the university. The first section director of chemistry was Prof. Siegfried Hauptmann (1968–1972). During this time, the study program was very research-oriented and practical, which was further supported by mandatory company internships. [ 1 ] From 1981 onwards, the original 14 research groups of the Chemistry Section were concentrated into 8 scientific areas. Research topics included, for example, modern evaluation methods of X-ray spectroscopy, the characterization of zeolites, and the processing of raw lignite, catalytic high-pressure hydrogenation and the extraction of gasification fuels. [ 1 ] [ 2 ] After the German reunification, a new section director was elected: Prof. Cornelius Weiss, who, however, held this post only from November 1, 1990 to March 4, 1991, until he took up his position as Rector of the University of Leipzig. [ 1 ] He was succeeded in the post by his deputy, Prof. Horst Wilde.
From this time on, the term "Department" was used instead of "Section", as had already been customary before the GDR period. The Department of Chemistry belonged to the Faculty of Mathematics and Natural Sciences and was divided into 7 scientific sections plus a department for the methodology of chemistry teaching. [ 1 ] On December 2, 1993, exactly 584 years after the official opening of the University of Leipzig in 1409, there was a return to the structures that had existed before the university reforms of the GDR era: 64 institutes were created at the University of Leipzig. [ 1 ] As of 1993, the Department of Chemistry consisted of eight institutes, including the Institute of Physical and Theoretical Chemistry. This was founded on December 2, 1993; its first director was Prof. Konrad Quitzsch. At that time, the institute consisted of 6 professors, 31 employees and 26 externally funded staff. [ 1 ] These institutes and the area of chemistry didactics together formed the Faculty of Chemistry and Mineralogy; the founding document for this dates from January 14, 1994. The first dean of the newly founded Faculty of Chemistry and Mineralogy was Prof. Dr. Joachim Reinhold, the vice dean was Prof. Dr. Lothar Beyer, and the dean of studies was Prof. Dr. Horst Wilde. [ 1 ] The Institute for Physical and Theoretical Chemistry has borne the official name "Wilhelm Ostwald Institute for Physical and Theoretical Chemistry" since the 1998 celebrations of the 100th anniversary of its inauguration in 1898. [ 1 ] After the election of Peter Bräuer as head of the scientific division (in German: „Wissenschaftsbereich", WB) "Physical Chemistry" in October 1990, three new research groups (in German: „Forschungsgruppe", FG) were formed within this WB (the term FG was later replaced by "working group", in German „Arbeitsgruppe", AG): [ 1 ] [ 2 ] The FG "Physical Chemistry of Interfaces" was headed by Peter Bräuer and J. Hoffmann.
They did research on porous solids and their use for mass transfer processes, using thermodynamic, kinetic, molecular-spectroscopic and molecular-theoretical methods. In 1993, the FG was renamed "Interfacial Thermodynamics and Kinetics / Adsorbate Structure" and at the same time absorbed the working group "Molecular Spectroscopy". From 1994, H. Böhlig took over the leadership of the group, which was dissolved in 1998. [ 1 ] [ 2 ] The FG "Thermodynamics" was headed by Konrad Quitzsch. Its research was concerned with the physicochemical characterization of surfactant systems, micellar structures and microemulsions, with simultaneous treatment of phase equilibria. In addition, the staff of this FG dealt with liquid–vapor equilibria, interfacial properties, and questions of pollutant removal. After reunification, the FG placed great emphasis on presenting its research in the "old" German states and internationally and on entering into cooperative ventures. [ 2 ] The FG "Electron and X-ray Spectroscopy" was first headed by Armin Meisel and then, from 1991, by Rüdiger Szargan. [ 2 ] The FG benefited greatly from the opening to the West: thanks to state, federal and EU support (through funding bodies such as the BMBF, DFG and DAAD), lecture invitations from abroad and collaborations on synchrotron radiation with laboratories worldwide could be realized. Numerous new instruments were also acquired, including the still-operating ESCALAB 220iXL photoelectron spectrometer, an IM5d electrochemical impedance measurement system, and an ASAP 2010 volumetric gas adsorption system. The group was primarily involved in spectroscopy, electron diffraction, and scanning tunneling and atomic force microscopy. [ 2 ] Overall, the entire field of physical chemistry benefited greatly from the opening after the GDR era, with generous funding and the removal of travel restrictions.
This made publications in internationally respected journals possible, as well as international collaborations and invitations to lecture. From 1993, the department of "Theoretical Chemistry", which Joachim Reinhold had headed since 1991, also belonged to the institute. From 1992, two professorships for Theoretical Chemistry were created, one of which was held by Reinhold. [ 2 ] His research at this time dealt, for example, with electronic and geometric structures, the stability and reactivity of mononuclear and multinuclear coordination compounds, mechanisms of reactions at transition metal centers, and the properties of adsorbate complexes of molecules on surfaces. In addition, the FG conducted research on topics such as the energy spectra and magnetic properties of 1D and 2D molecular ensembles of extended aromatic hydrocarbons with defects, and on ternary and quaternary A(III)-B(V) semiconductor solid solutions. [ 2 ] The second professorship, formally created in 1992, was occupied by Cornelius Weiss. Since he was rector of the University of Leipzig from 1991, this professorship contributed little to research or teaching at the institute. [ 1 ] [ 2 ] After the retirement of Konrad Quitzsch in 1998, the institute was divided into the three working groups Physical Chemistry I, Physical Chemistry II and Theoretical Chemistry. [ 2 ] The professorship Physical Chemistry I was held by Harald Morgner from 1999. [ 2 ] His main field of work was the investigation of liquid surfaces with methods of vacuum-based surface analysis. Through the work of this group, the Gibbs equation could be used for the first time to determine the chemical potential of surfactants as a function of their concentration without model assumptions. A new structural description of the surfaces of solutions was also found, which is useful, for example, for computer simulations of liquids.
The experimental methods were later applied not only to the surfaces of solutions but also to other soft-matter systems. In March 2014, the professorship was filled by Knut R. Asmis, who was director of the Wilhelm Ostwald Institute from 2015 to 2020. The Physical Chemistry II group was headed by Prof. Rüdiger Szargan until 2006. [ 2 ] In this group, the work on electron and X-ray spectroscopy and surface analysis was continued. Various projects brought, for example, new instrumental possibilities for electron evaporation, new techniques for scanning microscopy and insights into adsorption. Electrochemical and enzymatic reactions at laterally structured semiconductor surfaces and interfaces in electrolyte were also explored. In 2001, the institute received a new photoelectron spectrometer, which brought success in the field of spectroscopic surface research, for example in clarifying aspects of charge transport in semiconductor heterostructures with band discontinuities. In 2007, the professorship was filled by Reinhard Denecke. [ 2 ] The professorship for Theoretical Chemistry was held by Joachim Reinhold until his retirement in 2006. [ 2 ] After her appointment to the professorship of Theoretical Chemistry, Barbara Kirchner took over the leadership of this research group in 2007. From 2007 onwards, a methodological reorientation towards first-principles simulations took place. The aim of the research was the development, provision and application of a "theoretical chemistry laboratory" with which theoretical investigations of chemically complex systems can be carried out. Computers are used to describe the microscopic course of chemical processes in very large systems and in the condensed phase, combining traditional molecular dynamics with first-principles quantum chemistry. [ 2 ] From 2015 to 2018, Thomas Heine held the professorship of Theoretical Chemistry. His research group worked on a wide variety of topics.
An important focus was method development, i.e., the development of computational tools to describe chemical and physical phenomena at the atomic level. Another central research topic was theoretical studies of ultrathin materials, which should enable the simplified fabrication of circuits and other complex structures from tailored 2D layers, as well as metal-organic frameworks (MOFs), which among other things show high potential as quantum sieves, for catalysis, as sensors, and as proton and electrical conductors. Furthermore, the Heine group was involved in the development of density-functional-based tight-binding (DFTB) theory. Professor Heine moved to the TU Dresden in 2018. After Heine's move, the professorship for Theoretical Chemistry was not filled again until March 2020; in the meantime, Carsten Baldauf from the Fritz Haber Institute in Berlin had a corresponding teaching assignment. After the fall of the Berlin Wall, in the 1990s, the working group of Prof. O. Brede at the Academy of Sciences of the GDR was transformed into an external Max Planck group in Leipzig on the "Campus Permoserstrasse 15", which was financed by the Max Planck Society until 2007. [ 2 ] After the retirement of Prof. O. Brede, a new professorship for reaction dynamics was created at the Wilhelm Ostwald Institute and filled by Prof. Bernd Abel, who had previously researched and taught in Göttingen. Between 2010 and 2015, Prof. Abel served as director of the Wilhelm Ostwald Institute. From 2012, he also served as department head and deputy director at the Leibniz Institute for Surface Modification (IOM) in a joint appointment between the University of Leipzig and the IOM. During this time, he also accepted the professorship for Technical Chemistry of Polymers. Currently, there are five working groups at the Wilhelm Ostwald Institute. Prof. Reinhard Denecke has been the institute's director since 2019.
Between 1968 and 1991, the institute was part of the Chemistry Section of the University of Leipzig; the directors of this section did not all come from physical chemistry. From 1991, the term "section" was replaced by "department". From 1993, the institute existed independently again. Since then, the institute's directors have been elected and hold the post in an executive capacity for a limited term.
https://en.wikipedia.org/wiki/Wilhelm_Ostwald_Institute
Georg Wilhelm Steinkopf (28 June 1879 – 12 March 1949) was a German chemist. Today he is mostly remembered for his work on the production of mustard gas during World War I. Georg Wilhelm Steinkopf was born on 28 June 1879 in Staßfurt, in the Prussian Province of Saxony in the German Empire, the son of Gustav Friedrich Steinkopf, a merchant, and his wife Elise Steinkopf (née Heine). In 1898 he began studying chemistry and physics at the University of Heidelberg. In 1899 he moved to the Technische Hochschule Karlsruhe (today the Karlsruhe Institute of Technology), where he finished his studies with a degree as Diplomingenieur in 1905. In Karlsruhe, he also met his future colleagues Fritz Haber and Roland Scholl. After receiving his Doctor of Science and eventually his Habilitation in 1909, he worked as an associate professor at the TH Karlsruhe until 1914, when he volunteered for service in World War I. In 1916 Fritz Haber, by then the director of the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry (KWIPC, today the Fritz Haber Institute of the Max Planck Society) in Berlin, invited Steinkopf to join his institute as the head of a team devoted to research on chemical weapons. Together with the chemical engineer Wilhelm Lommel, Steinkopf developed a method for the large-scale production of bis(2-chloroethyl) sulfide, commonly known as mustard gas. Mustard gas was subsequently assigned the acronym LOST (LOmmel/STeinkopf) by the German military. Steinkopf's work on mustard gas and related substances had a negative impact on his health, which caused him to switch to another department of the KWIPC in 1917, supervising the production of gas ammunition. Although Fritz Haber wanted him to stay in Berlin, Steinkopf moved to Dresden after the end of World War I. Succeeding Reinhold von Walther as the associate professor in organic chemistry at the Technische Universität Dresden, he worked there from 1919 until his retirement.
His research focussed on organic arsenic compounds, thiophene compounds, and the formation of petroleum. In 1924, Steinkopf became a member of the Beirat des Heereswaffenamts (advisory council of the Heereswaffenamt), the agency of the German military responsible for weapons research and development. He worked under strict secrecy, and most of his friends and colleagues in Dresden did not know about this activity. After the Machtergreifung of the National Socialists in 1933, Reichswehrminister Werner von Blomberg demanded that the Saxon Volksbildungsministerium (Ministry of the People's Education) show more recognition for Steinkopf's work during World War I. In 1935, Steinkopf was promoted to full professor, and he continued to work at the TU Dresden until his retirement in 1940. His health fragile from his work with mustard gas and related substances, Steinkopf died on 12 March 1949 in Stuttgart. Aside from his scientific research, Steinkopf wrote several poems, novellas, and novels.
https://en.wikipedia.org/wiki/Wilhelm_Steinkopf
A Wilhelmy plate is a thin plate that is used to measure equilibrium surface or interfacial tension at an air–liquid or liquid–liquid interface. In this method, the plate is oriented perpendicular to the interface, and the force exerted on it is measured. Based on the work of Ludwig Wilhelmy, this method finds wide use in the preparation and monitoring of Langmuir films. The Wilhelmy plate consists of a thin plate usually on the order of a few square centimeters in area. The plate is often made from filter paper, glass or platinum, which may be roughened to ensure complete wetting. In fact, the results of the experiment do not depend on the material used, as long as the material is wetted by the liquid. [ 1 ] The plate is cleaned thoroughly and attached to a balance with a thin metal wire. The force F on the plate due to wetting is measured using a tensiometer or microbalance and used to calculate the surface tension γ via the Wilhelmy equation: γ = F / (l cos θ), where l is the wetted perimeter (l = 2w + 2d), w is the plate width, d is the plate thickness, and θ is the contact angle between the liquid phase and the plate. In practice the contact angle is rarely measured; instead, either literature values are used or complete wetting (θ = 0) is assumed. [ 2 ] In general, surface tension may be measured with high sensitivity using very thin plates ranging in thickness from 0.1 to 0.002 mm. The device is calibrated with pure liquids like water and ethanol. The buoyancy correction is minimized by using a thin plate and dipping it as little as feasible. Complete wetting by water on a platinum plate is achieved by using commercially available platinum plates that have been roughened to improve wettability.
[ 3 ] If complete wetting is assumed (contact angle θ = 0), no correction factors are required to calculate surface tensions when using the Wilhelmy plate, unlike for a du Noüy ring. In addition, because the plate is not moved during measurements, the Wilhelmy plate allows accurate determination of surface kinetics on a wide range of timescales, and it displays low operator variance. In a typical plate experiment, the plate is lowered to the surface being analyzed until a meniscus is formed, and then raised so that the bottom edge of the plate lies on the plane of the undisturbed surface. If a buried interface is being measured, the second (less dense) phase is then added on top of the undisturbed primary (denser) phase in such a way as not to disturb the meniscus. The force at equilibrium can then be used to determine the absolute surface or interfacial tension. Because of the large wetted area of the plate, the measurement is less susceptible to measurement errors than with a smaller probe. The method is also described in several international measurement standards.
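As an illustration, the Wilhelmy equation γ = F / (l cos θ) with l = 2w + 2d can be evaluated directly. The sketch below is a minimal example; the plate dimensions and force value are made-up illustrative numbers, not taken from any measurement in the text.

```python
import math

def surface_tension(force_N, width_m, thickness_m, contact_angle_deg=0.0):
    """Wilhelmy equation: gamma = F / (l * cos(theta)),
    with wetted perimeter l = 2w + 2d (complete wetting: theta = 0)."""
    perimeter = 2 * width_m + 2 * thickness_m
    return force_N / (perimeter * math.cos(math.radians(contact_angle_deg)))

# Hypothetical example: a 19.9 mm x 0.1 mm plate (perimeter 40 mm)
# pulling with 2.91 mN, complete wetting assumed.
gamma = surface_tension(2.91e-3, 19.9e-3, 0.1e-3)
print(gamma)  # 0.072750 N/m, close to water at room temperature
```

Note that the result is returned in N/m; assuming θ = 0 slightly overestimates γ whenever wetting is actually incomplete, since cos θ < 1 then appears in the denominator.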
https://en.wikipedia.org/wiki/Wilhelmy_plate
In mathematics, Wilkie's theorem is a result by Alex Wilkie about the theory of ordered fields with an exponential function, or equivalently about the geometric nature of exponential varieties. In terms of model theory, Wilkie's theorem deals with the language L_exp = (+, −, ·, <, 0, 1, e^x), the language of ordered rings with an exponential function e^x. Suppose φ(x_1, ..., x_m) is a formula in this language. Then Wilkie's theorem states that there is an integer n ≥ m and polynomials f_1, ..., f_r ∈ Z[x_1, ..., x_n, e^{x_1}, ..., e^{x_n}] such that φ(x_1, ..., x_m) is equivalent to the existential formula ∃x_{m+1} ... ∃x_n ( f_1(x_1, ..., x_n, e^{x_1}, ..., e^{x_n}) = ⋯ = f_r(x_1, ..., x_n, e^{x_1}, ..., e^{x_n}) = 0 ). Thus, while this theory does not have full quantifier elimination, formulae can be put in a particularly simple form. This result proves that the theory of the structure R_exp, that is the real ordered field with the exponential function, is model complete. [ 1 ] In terms of analytic geometry, the theorem states that any definable set in the above language, in particular the complement of an exponential variety, is in fact a projection of an exponential variety. An exponential variety over a field K is the set of points in K^n where a finite collection of exponential polynomials simultaneously vanish. Wilkie's theorem states that if we have any definable set in an L_exp structure K = (K, +, −, ·, 0, 1, e^x), say X ⊂ K^m, then there will be an exponential variety in some higher dimension K^n such that the projection of this variety down onto K^m will be precisely X. The result can be considered as a variation of Gabrielov's theorem. This earlier theorem of Andrei Gabrielov dealt with sub-analytic sets, or the language L_an of ordered rings with a function symbol for each proper analytic function on R^m restricted to the closed unit cube [0, 1]^m. Gabrielov's theorem states that any formula in this language is equivalent to an existential one, as above.
[ 2 ] Hence the theory of the real ordered field with restricted analytic functions is model complete. Gabrielov's theorem applies to the real field with all restricted analytic functions adjoined, whereas Wilkie's theorem removes the need to restrict the functions, but only allows one to add the exponential function. As an intermediate result, Wilkie asked when the complement of a sub-analytic set can be defined using the same analytic functions that describe the original set. It turns out the required functions are the Pfaffian functions. [ 1 ] In particular, the theory of the real ordered field with restricted, totally defined Pfaffian functions is model complete. [ 3 ] Wilkie's approach for this latter result is somewhat different from his proof of Wilkie's theorem, and the result that allowed him to show that the Pfaffian structure is model complete is sometimes known as Wilkie's theorem of the complement. [ 4 ]
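As a small illustrative example (my own, not from the source): the complement of the exponential variety defined by e^x − y = 0 can itself be written in the existential form guaranteed by the theorem, using the standard trick of introducing a fresh variable to turn an inequality into an equation:

```latex
% The definable set \{(x,y) : e^{x} \neq y\} is the projection onto (x,y)
% of an exponential variety in one additional variable z:
\neg\,(e^{x} = y)
\;\Longleftrightarrow\;
\exists z \,\bigl( z\,(e^{x} - y) - 1 = 0 \bigr),
% with f(x,y,z) = z\,(e^{x} - y) - 1 \in \mathbf{Z}[x,y,z,e^{x},e^{y},e^{z}].
```

Here e^x ≠ y holds exactly when e^x − y has a multiplicative inverse z, so the complement of a one-equation exponential variety is the projection of another exponential variety, as the theorem asserts in general.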
https://en.wikipedia.org/wiki/Wilkie's_theorem
In numerical analysis, Wilkinson's polynomial is a specific polynomial which was used by James H. Wilkinson in 1963 to illustrate a difficulty when finding the root of a polynomial: the location of the roots can be very sensitive to perturbations in the coefficients of the polynomial. The polynomial is

w(x) = ∏_{i=1}^{20} (x − i) = (x − 1)(x − 2) ⋯ (x − 20).

Sometimes, the term Wilkinson's polynomial is also used to refer to some other polynomials appearing in Wilkinson's discussion. Wilkinson's polynomial arose in the study of algorithms for finding the roots of a polynomial

p(x) = ∑_{i=0}^{n} c_i x^i.

It is a natural question in numerical analysis to ask whether the problem of finding the roots of p from the coefficients c_i is well-conditioned. That is, we hope that a small change in the coefficients will lead to a small change in the roots. Unfortunately, this is not the case here. The problem is ill-conditioned when the polynomial has a multiple root. For instance, the polynomial x^2 has a double root at x = 0. However, the polynomial x^2 − ε (a perturbation of size ε) has roots at ±√ε, which is much bigger than ε when ε is small. It is therefore natural to expect that ill-conditioning also occurs when the polynomial has zeros which are very close. However, the problem may also be extremely ill-conditioned for polynomials with well-separated zeros. Wilkinson used the polynomial w(x) to illustrate this point (Wilkinson 1963). In 1984, he described the personal impact of this discovery: "Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst."
[ 1 ] Wilkinson's polynomial is often used to illustrate the undesirability of naively computing eigenvalues of a matrix by first calculating the coefficients of the matrix's characteristic polynomial and then finding its roots, since using the coefficients as an intermediate step may introduce extreme ill-conditioning even if the original problem was well conditioned. [ 2 ] Wilkinson's polynomial w ( x ) = ∏ i = 1 20 ( x − i ) = ( x − 1 ) ( x − 2 ) ⋯ ( x − 20 ) {\displaystyle w(x)=\prod _{i=1}^{20}(x-i)=(x-1)(x-2)\cdots (x-20)} clearly has 20 roots, located at x = 1, 2, ..., 20 . These roots are far apart. However, the polynomial is still very ill-conditioned. Expanding the polynomial, one finds {\displaystyle {\begin{aligned}w(x)={}&x^{20}-210x^{19}+20615x^{18}-1256850x^{17}+53327946x^{16}\\&{}-1672280820x^{15}+40171771630x^{14}-756111184500x^{13}\\&{}+11310276995381x^{12}-135585182899530x^{11}\\&{}+1307535010540395x^{10}-10142299865511450x^{9}\\&{}+63030812099294896x^{8}-311333643161390640x^{7}\\&{}+1206647803780373360x^{6}-3599979517947607200x^{5}\\&{}+8037811822645051776x^{4}-12870931245150988800x^{3}\\&{}+13803759753640704000x^{2}-8752948036761600000x\\&{}+2432902008176640000.\end{aligned}}} If the coefficient of x 19 is decreased from −210 by 2 −23 to −210.0000001192, then the polynomial value w (20) decreases from 0 to −2 −23 · 20 19 ≈ −6.25×10 17 , and the root at x = 20 grows to x ≈ 20.8 .
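The effect of this perturbation is easy to reproduce numerically. A minimal sketch (roots are computed with NumPy's companion-matrix eigenvalue solver, which carries its own rounding error but is accurate enough to show the qualitative behaviour):

```python
import numpy as np

# Monomial coefficients of w(x) = (x - 1)(x - 2) ... (x - 20),
# already rounded to double precision by the expansion itself.
coeffs = np.poly(np.arange(1, 21))

# Wilkinson's perturbation: decrease the x^19 coefficient (-210) by 2**-23.
perturbed = coeffs.copy()
perturbed[1] -= 2.0**-23

roots = np.roots(perturbed)                      # companion-matrix eigenvalues
complex_roots = roots[np.abs(roots.imag) > 0.1]  # conjugate pairs appear
largest_real = roots.real.max()                  # the root near x = 20 moves past 20.5
```

Running this shows several roots acquiring substantial imaginary parts and the largest real root drifting well above 20, in line with the perturbation analysis.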
The roots at x = 18 and x = 19 collide into a double root at x ≈ 18.62 which turns into a pair of complex conjugate roots at x ≈ 19.5 ± 1.9 i as the perturbation increases further. The 20 roots become (to 5 decimals) {\displaystyle {\begin{array}{rrrrr}1.00000&2.00000&3.00000&4.00000&5.00000\\[8pt]6.00001&6.99970&8.00727&8.91725&20.84691\\[8pt]10.09527\pm {}&11.79363\pm {}&13.99236\pm {}&16.73074\pm {}&19.50244\pm {}\\[-3pt]0.64350i&1.65233i&2.51883i&2.81262i&1.94033i\end{array}}} Some of the roots are greatly displaced, even though the change to the coefficient is tiny and the original roots seem widely spaced. Wilkinson showed by the stability analysis discussed in the next section that this behavior is related to the fact that some roots α (such as α = 15) have many roots β that are "close" in the sense that | α − β | is smaller than | α |. Wilkinson chose the perturbation of 2 −23 because his Pilot ACE computer had 30-bit floating point significands , so for numbers around 210, 2 −23 was an error in the first bit position not represented in the computer. The two real numbers, −210 and −210 − 2 −23 , are represented by the same floating point number, which means that 2 −23 is the unavoidable error in representing a real coefficient close to −210 by a floating point number on that computer. The perturbation analysis shows that 30-bit coefficient precision is insufficient for separating the roots of Wilkinson's polynomial. Suppose that we perturb a polynomial p ( x ) = Π ( x − α j ) with roots α j by adding a small multiple t · c ( x ) of a polynomial c ( x ) , and ask how this affects the roots α j . To first order, the change in the roots will be controlled by the derivative
{\displaystyle {d\alpha _{j} \over dt}=-{c(\alpha _{j}) \over p^{\prime }(\alpha _{j})}.} When the derivative is small, the roots will be more stable under variations of t , and conversely if this derivative is large the roots will be unstable. In particular, if α j is a multiple root, then the denominator vanishes. In this case, α j is usually not differentiable with respect to t (unless c happens to vanish there), and the roots will be extremely unstable. For small values of t the perturbed root is given by the power series expansion in t α j + d α j d t t + d 2 α j d t 2 t 2 2 ! + ⋯ {\displaystyle \alpha _{j}+{d\alpha _{j} \over dt}t+{d^{2}\alpha _{j} \over dt^{2}}{t^{2} \over 2!}+\cdots } and one expects problems when | t | is larger than the radius of convergence of this power series, which is given by the smallest value of | t | such that the root α j becomes multiple. A very crude estimate for this radius takes half the distance from α j to the nearest root, and divides by the derivative above. In the example of Wilkinson's polynomial of degree 20, the roots are given by α j = j for j = 1, ..., 20 , and c ( x ) is equal to x 19 . So the derivative is given by d α j d t = − α j 19 ∏ k ≠ j ( α j − α k ) = − ∏ k ≠ j α j α j − α k . {\displaystyle {d\alpha _{j} \over dt}=-{\alpha _{j}^{19} \over \prod _{k\neq j}(\alpha _{j}-\alpha _{k})}=-\prod _{k\neq j}{\alpha _{j} \over \alpha _{j}-\alpha _{k}}.\,\!} This shows that the root α j will be less stable if there are many roots α k close to α j , in the sense that the distance |α j − α k | between them is smaller than |α j |. Example . For the root α 1 = 1, the derivative is equal to 1/19! which is very small; this root is stable even for large changes in t . This is because all the other roots β are a long way from it, in the sense that | α 1 − β | = 1, 2, 3, ..., 19 is larger than | α 1 | = 1. 
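The root sensitivities can be computed directly from this formula. A short sketch, using exact integer arithmetic for the product in the denominator:

```python
from math import factorial, prod

roots = range(1, 21)

def sensitivity(j):
    # dα_j/dt = -α_j^19 / Π_{k≠j} (α_j - α_k): the first-order response
    # of the root α_j = j to the perturbation t·x^19 of w(x).
    return -j**19 / prod(j - k for k in roots if k != j)

s1 = sensitivity(1)    # equals 1/19! ≈ 8.2e-18: extremely stable
s20 = sensitivity(20)  # equals -20**19/19! ≈ -4.3e7: extremely unstable
```

The ratio |s20 / s1| = 20^19 is exactly the stability factor between these two roots discussed below.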
For example, even if t is as large as –10000000000, the root α 1 only changes from 1 to about 0.99999991779380 (which is very close to the first order approximation 1 + t /19! ≈ 0.99999991779365). Similarly, the other small roots of Wilkinson's polynomial are insensitive to changes in t . Example . On the other hand, for the root α 20 = 20, the derivative is equal to −20 19 /19! which is huge (about 43000000), so this root is very sensitive to small changes in t . The other roots β are close to α 20 , in the sense that | β − α 20 | = 1, 2, 3, ..., 19 is less than | α 20 | = 20. For t = −2 − 23 the first-order approximation 20 − t ·20 19 /19! = 25.137... to the perturbed root 20.84... is terrible; this is even more obvious for the root α 19 where the perturbed root has a large imaginary part but the first-order approximation (and for that matter all higher-order approximations) are real. The reason for this discrepancy is that | t | ≈ 0.000000119 is greater than the radius of convergence of the power series mentioned above (which is about 0.0000000029, somewhat smaller than the value 0.00000001 given by the crude estimate) so the linearized theory does not apply. For a value such as t = 0.000000001 that is significantly smaller than this radius of convergence, the first-order approximation 19.9569... is reasonably close to the root 19.9509... At first sight the roots α 1 = 1 and α 20 = 20 of Wilkinson's polynomial appear to be similar, as they are on opposite ends of a symmetric line of roots, and have the same set of distances 1, 2, 3, ..., 19 from other roots. However the analysis above shows that this is grossly misleading: the root α 20 = 20 is less stable than α 1 = 1 (to small perturbations in the coefficient of x 19 ) by a factor of 20 19 = 5242880000000000000000000. The second example considered by Wilkinson is w 2 ( x ) = ∏ i = 1 20 ( x − 2 − i ) = ( x − 2 − 1 ) ( x − 2 − 2 ) ⋯ ( x − 2 − 20 ) . 
{\displaystyle w_{2}(x)=\prod _{i=1}^{20}(x-2^{-i})=(x-2^{-1})(x-2^{-2})\cdots (x-2^{-20}).} The twenty zeros of this polynomial are in a geometric progression with common ratio 2, and hence the quotient α j α j − α k {\displaystyle \alpha _{j} \over \alpha _{j}-\alpha _{k}} cannot be large. Indeed, the zeros of w 2 are quite stable to large relative changes in the coefficients. The expansion p ( x ) = ∑ i = 0 n c i x i {\displaystyle p(x)=\sum _{i=0}^{n}c_{i}x^{i}} expresses the polynomial in a particular basis, namely that of the monomials. If the polynomial is expressed in another basis, then the problem of finding its roots may cease to be ill-conditioned. For example, in a Lagrange form , a small change in one (or several) coefficients need not change the roots too much. Indeed, the basis polynomials for interpolation at the points 0, 1, 2, ..., 20 are ℓ k ( x ) = ∏ i = 0 , … , 20 i ≠ k x − i k − i , for k = 0 , … , 20. {\displaystyle \ell _{k}(x)=\prod _{i=0,\ldots ,20 \atop i\neq k}{\frac {x-i}{k-i}},\qquad {\text{for}}\quad k=0,\ldots ,20.} Every polynomial (of degree 20 or less) can be expressed in this basis: p ( x ) = ∑ i = 0 20 d i ℓ i ( x ) . {\displaystyle p(x)=\sum _{i=0}^{20}d_{i}\ell _{i}(x).} For Wilkinson's polynomial, we find w ( x ) = ( 20 ! ) ℓ 0 ( x ) = ∑ i = 0 20 d i ℓ i ( x ) with d 0 = ( 20 ! ) , d 1 = d 2 = ⋯ = d 20 = 0. {\displaystyle w(x)=(20!)\ell _{0}(x)=\sum _{i=0}^{20}d_{i}\ell _{i}(x)\quad {\text{with}}\quad d_{0}=(20!),\,d_{1}=d_{2}=\cdots =d_{20}=0.} Given the definition of the Lagrange basis polynomial ℓ 0 ( x ) , a change in the coefficient d 0 will produce no change in the roots of w . However, a perturbation in the other coefficients (all equal to zero) will slightly change the roots. Therefore, Wilkinson's polynomial is well-conditioned in this basis. 
Wilkinson discussed "his" polynomial in his 1963 book (Wilkinson 1963), and it is mentioned in standard textbooks on numerical analysis. A high-precision numerical computation of its roots has also been published.
https://en.wikipedia.org/wiki/Wilkinson's_polynomial
The Wilkinson Microwave Anisotropy Probe ( WMAP ), originally known as the Microwave Anisotropy Probe ( MAP and Explorer 80 ), was a NASA spacecraft operating from 2001 to 2010 which measured temperature differences across the sky in the cosmic microwave background (CMB) – the radiant heat remaining from the Big Bang . [ 5 ] [ 6 ] Headed by Professor Charles L. Bennett of Johns Hopkins University , the mission was developed in a joint partnership between the NASA Goddard Space Flight Center and Princeton University . [ 7 ] The WMAP spacecraft was launched on 30 June 2001 from Florida . The WMAP mission succeeded the COBE space mission and was the second medium-class (MIDEX) spacecraft in the NASA Explorer program . In 2003, MAP was renamed WMAP in honor of cosmologist David Todd Wilkinson (1935–2002), [ 7 ] who had been a member of the mission's science team. After nine years of operations, WMAP was switched off in 2010, following the launch of the more advanced Planck spacecraft by European Space Agency (ESA) in 2009. WMAP's measurements played a key role in establishing the current Standard Model of Cosmology: the Lambda-CDM model . The WMAP data are very well fit by a universe that is dominated by dark energy in the form of a cosmological constant . Other cosmological data are also consistent, and together tightly constrain the Model. In the Lambda-CDM model of the universe, the age of the universe is 13.772 ± 0.059 billion years. The WMAP mission's determination of the age of the universe is to better than 1% precision. [ 8 ] The current expansion rate of the universe is (see Hubble constant ) 69.32 ± 0.80 km·s −1 ·Mpc −1 . The content of the universe currently consists of 4.628% ± 0.093% ordinary baryonic matter ; 24.02% +0.88% −0.87% cold dark matter (CDM) that neither emits nor absorbs light; and 71.35% +0.95% −0.96% of dark energy in the form of a cosmological constant that accelerates the expansion of the universe . 
[ 9 ] Less than 1% of the current content of the universe is in neutrinos, but WMAP's measurements have found, for the first time in 2008, that the data prefer the existence of a cosmic neutrino background [ 10 ] with an effective number of neutrino species of 3.26 ± 0.35 . The contents point to a Euclidean flat geometry , with curvature ( Ω k {\displaystyle \Omega _{k}} ) of −0.0027 +0.0039 −0.0038 . The WMAP measurements also support the cosmic inflation paradigm in several ways, including the flatness measurement. The mission has won various awards: according to Science magazine, the WMAP was the Breakthrough of the Year for 2003 . [ 11 ] This mission's results papers were first and second in the "Super Hot Papers in Science Since 2003" list. [ 12 ] Of the all-time most referenced papers in physics and astronomy in the INSPIRE-HEP database, only three have been published since 2000, and all three are WMAP publications. Bennett, Lyman A. Page Jr. , and David N. Spergel, the latter both of Princeton University, shared the 2010 Shaw Prize in astronomy for their work on WMAP. [ 13 ] Bennett and the WMAP science team were awarded the 2012 Gruber Prize in cosmology. The 2018 Breakthrough Prize in Fundamental Physics was awarded to Bennett, Gary Hinshaw, Norman Jarosik, Page, Spergel, and the WMAP science team. In October 2010, the WMAP spacecraft was derelict in a heliocentric graveyard orbit after completing nine years of operations. [ 14 ] All WMAP data are released to the public and have been subject to careful scrutiny. The final official data release was the nine-year release in 2012. [ 15 ] [ 16 ] Some aspects of the data are statistically unusual for the Standard Model of Cosmology. For example, the largest angular-scale measurement, the quadrupole moment , is somewhat smaller than the Model would predict, but this discrepancy is not highly significant. 
[ 17 ] A large cold spot and other features of the data are more statistically significant, and research continues into these. The WMAP objective was to measure the temperature differences in the Cosmic Microwave Background (CMB) radiation . The anisotropies then were used to measure the universe's geometry, content, and evolution ; and to test the Big Bang model, and the cosmic inflation theory. [ 18 ] For that, the mission created a full-sky map of the CMB, with a 13 arcminutes resolution via multi-frequency observation. The map required the fewest systematic errors , no correlated pixel noise, and accurate calibration, to ensure angular-scale accuracy greater than its resolution. [ 18 ] The map contains 3,145,728 pixels, and uses the HEALPix scheme to pixelize the sphere. [ 19 ] The telescope also measured the CMB's E-mode polarization, [ 18 ] and foreground polarization. [ 10 ] Its service life was 27 months; 3 to reach the L 2 position, and 2 years of observation. [ 18 ] The MAP mission was proposed to NASA in 1995, selected for definition study in 1996, and approved for development in 1997. [ 20 ] [ 21 ] The WMAP was preceded by two missions to observe the CMB; (i) the Soviet RELIKT-1 that reported the upper-limit measurements of CMB anisotropies, and (ii) the U.S. COBE satellite that first reported large-scale CMB fluctuations. The WMAP was 45 times more sensitive, with 33 times the angular resolution of its COBE satellite predecessor. [ 22 ] The successor European Planck mission (operational 2009–2013) had a higher resolution and higher sensitivity than WMAP and observed in 9 frequency bands rather than WMAP's 5, allowing improved astrophysical foreground models. The telescope's primary reflecting mirrors are a pair of Gregorian 1.4 × 1.6 m (4 ft 7 in × 5 ft 3 in) dishes (facing opposite directions), that focus the signal onto a pair of 0.9 × 1.0 m (2 ft 11 in × 3 ft 3 in) secondary reflecting mirrors. 
They are shaped for optimal performance: a carbon fibre shell upon a Korex core, thinly-coated with aluminium and silicon oxide . The secondary reflectors transmit the signals to the corrugated feedhorns that sit on a focal plane array box beneath the primary reflectors. [ 18 ] The receivers are polarization -sensitive differential radiometers measuring the difference between two telescope beams. The signal is amplified with High-electron-mobility transistor (HEMT) low-noise amplifiers , built by the National Radio Astronomy Observatory (NRAO). There are 20 feeds, 10 in each direction, from which a radiometer collects a signal; the measure is the difference in the sky signal from opposite directions. The directional separation azimuth is 180°; the total angle is 141°. To improve subtraction of foreground signals from our Milky Way galaxy, the WMAP used five discrete radio frequency bands, from 23 GHz to 94 GHz. [ 18 ] The WMAP's base is a 5.0 m (16.4 ft)-diameter solar panel array that keeps the instruments in shadow during CMB observations, (by keeping the craft constantly angled at 22°, relative to the Sun ). Upon the array sit a bottom deck (supporting the warm components) and a top deck. The telescope's cold components: the focal-plane array and the mirrors, are separated from the warm components with a cylindrical, 33 cm (13 in)-long thermal isolation shell atop the deck. [ 18 ] Passive thermal radiators cool the WMAP to approximately 90 K (−183.2 °C; −297.7 °F); they are connected to the low-noise amplifiers . The telescope consumes 419 W of power. The available telescope heaters are emergency-survival heaters, and there is a transmitter heater, used to warm them when off. The WMAP spacecraft's temperature is monitored with platinum resistance thermometers . [ 18 ] The WMAP's calibration is effected with the CMB dipole and measurements of Jupiter ; the beam patterns are measured against Jupiter. 
The telescope's data are relayed daily via a 2-GHz transponder providing a 667 kbit/s downlink to a 70 m (230 ft) Deep Space Network station. The spacecraft has two transponders, one a redundant backup; they are minimally active – about 40 minutes daily – to minimize radio frequency interference . The telescope's position is maintained, in its three axes, with three reaction wheels , gyroscopes , two star trackers and Sun sensors , and is steered with eight hydrazine thrusters. [ 18 ] The WMAP spacecraft arrived at the Kennedy Space Center on 20 April 2001. After being tested for two months, it was launched via Delta II 7425 launch vehicle on 30 June 2001. [ 20 ] [ 22 ] It began operating on its internal power five minutes before its launching, and continued so operating until the solar panel array deployed. The WMAP was activated and monitored while it cooled. On 2 July 2001, it began working, first with in-flight testing (from launching until 17 August 2001), then began constant, formal work. [ 22 ] Afterwards, it effected three Earth-Moon phase loops, measuring its sidelobes , then flew by the Moon on 30 July 2001, en route to the Sun-Earth L 2 Lagrange point , arriving there on 1 October 2001, becoming the first CMB observation mission posted there. [ 20 ] Locating the spacecraft at Lagrange 2 , (1,500,000 km (930,000 mi) from Earth) thermally stabilizes it and minimizes the contaminating solar, terrestrial, and lunar emissions registered. To view the entire sky, without looking to the Sun, the WMAP traces a path around L 2 in a Lissajous orbit ca. 1.0° to 10°, [ 18 ] with a 6-month period. [ 20 ] The telescope rotates once every 2 minutes 9 seconds (0.464 rpm ) and precesses at the rate of 1 revolution per hour. [ 18 ] WMAP measured the entire sky every six months, and completed its first, full-sky observation in April 2002. 
[ 21 ] The WMAP instrument consists of pseudo-correlation differential radiometers fed by two back-to-back 1.5 m (4 ft 11 in) primary Gregorian reflectors. This instrument uses five frequency bands from 22 GHz to 90 GHz to facilitate rejection of foreground signals from our own Galaxy. The WMAP instrument has a 3.5° x 3.5° field of view (FoV). [ 23 ] The WMAP observed in five frequencies, permitting the measurement and subtraction of foreground contamination (from the Milky Way and extra-galactic sources) of the CMB. The main emission mechanisms are synchrotron radiation and free-free emission (dominating the lower frequencies), and astrophysical dust emissions (dominating the higher frequencies). The spectral properties of these emissions contribute different amounts to the five frequencies, thus permitting their identification and subtraction. [ 18 ] Foreground contamination is removed in several ways. First, subtract extant emission maps from the WMAP's measurements; second, use the components' known spectral values to identify them; third, simultaneously fit the position and spectra data of the foreground emission, using extra data sets. Foreground contamination was reduced by using only the full-sky map portions with the least foreground contamination, while masking the remaining map portions. [ 18 ] On 11 February 2003, NASA published the first-year's worth of WMAP data. The latest calculated age and composition of the early universe were presented. In addition, an image of the early universe, that "contains such stunning detail, that it may be one of the most important scientific results of recent years" was presented. The newly released data surpass previous CMB measurements. [ 7 ] Based upon the Lambda-CDM model , the WMAP team produced cosmological parameters from the WMAP's first-year results. Three sets are given below; the first and second sets are WMAP data; the difference is the addition of spectral indices, predictions of some inflationary models. 
The third data set combines the WMAP constraints with those from other CMB experiments ( ACBAR and CBI ), and constraints from the 2dF Galaxy Redshift Survey and Lyman alpha forest measurements. There are degeneracies among the parameters, the most significant being between n s {\displaystyle n_{s}} and τ {\displaystyle \tau } ; the errors given are at 68% confidence. [ 24 ] Using the best-fit data and theoretical models, the WMAP team determined the times of important universal events, including the redshift of reionization , 17 ± 4 ; the redshift of decoupling , 1089 ± 1 (and the universe's age at decoupling, 379 +8 −7 kyr ); and the redshift of matter/radiation equality, 3233 +194 −210 . They determined the thickness of the surface of last scattering to be 195 ± 2 in redshift, or 118 +3 −2 kyr . They determined the current density of baryons , (2.5 ± 0.1) × 10 −7 cm −3 , and the ratio of baryons to photons, 6.1 +0.3 −0.2 × 10 −10 . The WMAP's detection of an early reionization excluded warm dark matter . [ 24 ] The team also examined Milky Way emissions at the WMAP frequencies, producing a catalogue of 208 point sources . The three-year WMAP data were released on 17 March 2006. The data included temperature and polarization measurements of the CMB, which provided further confirmation of the standard flat Lambda-CDM model and new evidence in support of inflation . The 3-year WMAP data alone show that the universe must have dark matter . Results were computed both using WMAP data alone, and also with a mix of parameter constraints from other instruments, including other CMB experiments ( Arcminute Cosmology Bolometer Array Receiver (ACBAR), Cosmic Background Imager (CBI) and BOOMERANG ), Sloan Digital Sky Survey (SDSS), the 2dF Galaxy Redshift Survey , the Supernova Legacy Survey and constraints on the Hubble constant from the Hubble Space Telescope . [ 25 ] [a] ^ Optical depth to reionization improved due to polarization measurements.
[ 26 ] [b] ^ <0.30 when combined with SDSS data. No indication of non-gaussianity. [ 25 ] The five-year WMAP data were released on 28 February 2008. The data included new evidence for the cosmic neutrino background , evidence that it took over half a billion years for the first stars to reionize the universe, and new constraints on cosmic inflation . [ 27 ] The improvement in the results came both from having an extra two years of measurements (the data set runs from midnight on 10 August 2001 to midnight on 9 August 2006), and from using improved data processing techniques and a better characterization of the instrument, most notably of the beam shapes. They also make use of the 33-GHz observations for estimating cosmological parameters; previously only the 41-GHz and 61-GHz channels had been used. Improved masks were used to remove foregrounds. [ 10 ] Improvements to the spectra were in the 3rd acoustic peak, and the polarization spectra. [ 10 ] The measurements put constraints on the content of the universe at the time that the CMB was emitted; at the time 10% of the universe was made up of neutrinos, 12% of atoms, 15% of photons and 63% dark matter. The contribution of dark energy at the time was negligible. [ 27 ] It also constrained the content of the present-day universe; 4.6% atoms, 23% dark matter and 72% dark energy. [ 10 ] The WMAP five-year data were combined with measurements from Type Ia supernovae (SNe) and baryon acoustic oscillations (BAO). [ 10 ] The elliptical shape of the WMAP skymap is the result of a Mollweide projection . [ 28 ] The data put limits on the value of the tensor-to-scalar ratio, r <0.22 (95% certainty), which determines the level at which gravitational waves affect the polarization of the CMB, and also put limits on the amount of primordial non-gaussianity .
Improved constraints were put on the redshift of reionization, which is 10.9 ± 1.4 , the redshift of decoupling , 1090.88 ± 0.72 (as well as the age of the universe at decoupling, 376.971 +3.162 −3.167 kyr ) and the redshift of matter/radiation equality, 3253 +89 −87 . [ 10 ] The extragalactic source catalogue was expanded to include 390 sources, and variability was detected in the emission from Mars and Saturn . [ 10 ] The seven-year WMAP data were released on 26 January 2010. As part of this release, claims for inconsistencies with the standard model were investigated. [ 29 ] Most were shown not to be statistically significant, and likely due to a posteriori selection (where one sees a weird deviation, but fails to consider properly how hard one has been looking; a deviation with 1:1000 likelihood will typically be found if one tries one thousand times). For the deviations that do remain, there are no alternative cosmological ideas (for instance, there seem to be correlations with the ecliptic pole). It seems most likely these are due to other effects, with the report mentioning uncertainties in the precise beam shape and other possible small remaining instrumental and analysis issues. The other confirmation of major significance is of the total amount of matter/energy in the universe in the form of dark energy – 72.8% (within 1.6%) as non 'particle' background, and dark matter – 22.7% (within 1.4%) of non baryonic (sub-atomic) 'particle' energy. This leaves matter, or baryonic particles (atoms), at only 4.56% (within 0.16%). On 29 December 2012, the nine-year WMAP data and related images were released. The image shows temperature fluctuations of the 13.772 ± 0.059 billion-year-old universe, spanning a range of ± 200 microkelvins.
In addition, the study found that 95% of the early universe is composed of dark matter and dark energy , the curvature of space is less than 0.4% of "flat" and the universe emerged from the cosmic Dark Ages "about 400 million years" after the Big Bang . [ 15 ] [ 16 ] [ 33 ] The main result of the mission is contained in the various oval maps of the CMB temperature differences. These oval images present the temperature distribution derived by the WMAP team from the observations by the telescope during the mission. Measured is the temperature obtained from a Planck's law interpretation of the microwave background. The oval map covers the whole sky. The results are a snapshot of the universe around 375,000 years after the Big Bang , which happened about 13.8 billion years ago. The microwave background is very homogeneous in temperature (the relative variations from the mean, which presently is still 2.7 kelvins, are only of the order of 5 × 10 −5 ). The temperature variations corresponding to the local directions are presented through different colors (the "red" directions are hotter, the "blue" directions cooler than the average). [ citation needed ] The original timeline for WMAP gave it two years of observations; these were completed by September 2003. Mission extensions were granted in 2002, 2004, 2006, and 2008 giving the spacecraft a total of 9 observing years, which ended August 2010 [ 20 ] and in October 2010 the spacecraft was moved to a heliocentric "graveyard" orbit . [ 14 ] The Planck spacecraft also measured the CMB from 2009 to 2013 and aims to refine the measurements made by WMAP, both in total intensity and polarization. Various ground- and balloon-based instruments have also made CMB contributions, and others are being constructed to do so. 
Many are aimed at searching for the B-mode polarization expected from the simplest models of inflation, including The E and B Experiment (EBEX), Spider , BICEP and Keck Array (BICEP2), Keck , QUIET , Cosmology Large Angular Scale Surveyor (CLASS), South Pole Telescope (SPTpol) and others. On 21 March 2013, the European-led research team behind the Planck spacecraft released the mission's all-sky map of the cosmic microwave background. [ 34 ] [ 35 ] The map suggests the universe is slightly older than previously thought. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early, in the existence of the universe, as the first nonillionth (10 −30 ) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter . Based on the 2013 data, the universe contains 4.9% ordinary matter , 26.8% dark matter and 68.3% dark energy . On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years and the Hubble constant is 67.74 ± 0.46 (km/s)/Mpc . [ 36 ]
https://en.wikipedia.org/wiki/Wilkinson_Microwave_Anisotropy_Probe
In the field of microwave engineering and circuit design, the Wilkinson power divider is a specific class of power divider circuit that can achieve isolation between the output ports while maintaining a matched condition on all ports. The Wilkinson design can also be used as a power combiner because it is made up of passive components and hence is reciprocal. First published by Ernest J. Wilkinson in 1960, [ 1 ] this circuit finds wide use in radio frequency communication systems utilizing multiple channels since the high degree of isolation between the output ports prevents crosstalk between the individual channels. It uses quarter wave transformers , which can be easily fabricated as quarter wave lines on printed circuit boards . It is also possible to use other forms of transmission line (e.g. coaxial cable) or lumped circuit elements (inductors and capacitors). [ 2 ] The scattering parameters for the common case of a 2-way equal-split Wilkinson power divider at the design frequency are given by [ 3 ] Inspection of the S matrix reveals that the network is reciprocal ( S i j = S j i {\displaystyle S_{ij}=S_{ji}} ), that the terminals are matched ( S 11 , S 22 , S 33 = 0 {\displaystyle S_{11},S_{22},S_{33}=0} ), that the output terminals are isolated ( S 23 , S 32 {\displaystyle S_{23},S_{32}} = 0), and that equal power division is achieved ( S 21 = S 31 {\displaystyle S_{21}=S_{31}} ). The non-unitary matrix results from the fact that the network is lossy. An ideal Wilkinson divider would yield S 21 = S 31 = − 3 dB = 20 log 10 ⁡ ( 1 2 ) {\displaystyle S_{21}=S_{31}=-3\,{\text{dB}}=20\log _{10}({\frac {1}{\sqrt {2}}})} . Network theory shows that a three-port divider cannot satisfy all three conditions (being matched, reciprocal and lossless) at the same time. The Wilkinson divider satisfies the first two (matched and reciprocal), and cannot satisfy the last one (being lossless). Hence, there is some loss occurring in the network.
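These properties can be checked directly on the matrix. A sketch using one common phase convention for the ideal equal-split S-matrix (S21 = S31 = −j/√2; other references place the phase differently):

```python
import numpy as np

s = -1j / np.sqrt(2)                  # -3 dB transmission coefficient
S = np.array([[0, s, s],
              [s, 0, 0],
              [s, 0, 0]])

reciprocal = np.allclose(S, S.T)                   # S_ij = S_ji
matched = np.allclose(np.diag(S), 0)               # S_11 = S_22 = S_33 = 0
isolated = S[1, 2] == 0 and S[2, 1] == 0           # S_23 = S_32 = 0
split_db = 20 * np.log10(abs(S[1, 0]))             # ≈ -3.01 dB per output
lossless = np.allclose(S.conj().T @ S, np.eye(3))  # False: the network is lossy
```

The failed unitarity check is the power absorbed by the isolation resistor when the output ports are driven out of balance.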
No loss occurs when the signals at ports 2 and 3 are in phase and have equal magnitude. If noise is input at ports 2 and 3, the noise level at port 1 does not increase; half of the noise power is dissipated in the resistor. By cascading, the input power may be divided among any number n {\displaystyle n} of outputs. Unequal (asymmetric) division can also be achieved with a Wilkinson divider: if the arms for ports 2 and 3 are built with unequal impedances, the power splits asymmetrically. When the characteristic impedance is Z 0 {\displaystyle Z_{0}} , and one wants to split the power as P 2 {\displaystyle P_{2}} and P 3 {\displaystyle P_{3}} with P 2 {\displaystyle P_{2}} ≠ P 3 {\displaystyle P_{3}} , the design follows the equations below. A new constant K {\displaystyle K} is defined for ease of expression, where K 2 = P 3 P 2 {\displaystyle K^{2}={\frac {P_{3}}{P_{2}}}} The design guideline is then [ 4 ] : Z 03 = Z 0 1 + K 2 K 3 {\displaystyle Z_{03}=Z_{0}{\sqrt {\frac {1+K^{2}}{K^{3}}}}} Z 02 = Z 0 K ( 1 + K 2 ) = K 2 Z 03 {\displaystyle Z_{02}=Z_{0}{\sqrt {K(1+K^{2})}}=K^{2}Z_{03}} R = Z 0 ( K + 1 K ) {\displaystyle R=Z_{0}(K+{\frac {1}{K}})} The equal-splitting Wilkinson divider is obtained for K = 1 {\displaystyle K=1} .
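The design equations translate directly into code. A sketch (the function name and the 50 Ω example are illustrative choices, not from the original paper):

```python
from math import sqrt

def wilkinson_unequal(Z0, P2, P3):
    """Line impedances and isolation resistor for an unequal-split
    Wilkinson divider, following the design equations with K**2 = P3/P2."""
    K = sqrt(P3 / P2)
    Z03 = Z0 * sqrt((1 + K**2) / K**3)
    Z02 = Z0 * sqrt(K * (1 + K**2))      # equivalently K**2 * Z03
    R = Z0 * (K + 1 / K)
    return Z02, Z03, R

# Equal split (K = 1) recovers the classic design for a 50-ohm system:
# Z02 = Z03 = sqrt(2)*Z0 ≈ 70.7 ohms and R = 2*Z0 = 100 ohms.
Z02, Z03, R = wilkinson_unequal(50.0, 1.0, 1.0)
```

Setting P2 = P3 collapses the formulas to the familiar quarter-wave impedance √2·Z0 and isolation resistor 2·Z0.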
https://en.wikipedia.org/wiki/Wilkinson_power_divider
In statistics , Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test . Statistical tests (such as hypothesis testing ) generally require knowledge of the probability distribution of the test statistic . This is often a problem for likelihood ratios , where the probability distribution can be very difficult to determine. A convenient result by Samuel S. Wilks says that as the sample size approaches ∞ {\displaystyle \infty } , the distribution of the test statistic − 2 log ⁡ ( Λ ) {\displaystyle -2\log(\Lambda )} asymptotically approaches the chi-squared ( χ 2 {\displaystyle \chi ^{2}} ) distribution under the null hypothesis H 0 {\displaystyle H_{0}} . [ 1 ] Here, Λ {\displaystyle \Lambda } denotes the likelihood ratio , and the χ 2 {\displaystyle \chi ^{2}} distribution has degrees of freedom equal to the difference in dimensionality of Θ {\displaystyle \Theta } and Θ 0 {\displaystyle \Theta _{0}} , where Θ {\displaystyle \Theta } is the full parameter space and Θ 0 {\displaystyle \Theta _{0}} is the subset of the parameter space associated with H 0 {\displaystyle H_{0}} . This result means that for large samples and a great variety of hypotheses, a practitioner can compute the likelihood ratio Λ {\displaystyle \Lambda } for the data and compare − 2 log ⁡ ( Λ ) {\displaystyle -2\log(\Lambda )} to the χ 2 {\displaystyle \chi ^{2}} value corresponding to a desired statistical significance as an approximate statistical test. The theorem no longer applies when the true value of the parameter is on the boundary of the parameter space: Wilks’ theorem assumes that the ‘true’ but unknown values of the estimated parameters lie within the interior of the supported parameter space . In practice, one will notice the problem if the estimate lies on that boundary. 
In that event, the likelihood ratio test is still a sensible test statistic and even possesses some asymptotic optimality properties, but the significance (the p -value) cannot be reliably estimated using the chi-squared distribution with the number of degrees of freedom prescribed by Wilks. In some cases, the asymptotic null-hypothesis distribution of the statistic is a mixture of chi-squared distributions with different numbers of degrees of freedom. Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by D ) is twice the logarithm of the likelihood ratio, i.e. , it is twice the difference in the log-likelihoods: The model with more parameters (here alternative ) will always fit at least as well, i.e., have the same or greater log-likelihood, as the model with fewer parameters (here null ). Whether the fit is significantly better, and the larger model should thus be preferred, is determined by deriving how likely ( p -value ) it is to observe such a difference D by chance alone , if the model with fewer parameters were true. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to df alt − df null {\displaystyle \,{\text{df}}_{\text{alt}}-{\text{df}}_{\text{null}}\,} , [ 2 ] where these are respectively the numbers of free parameters of the alternative and null models. For example: If the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the probability of this difference is that of a chi-squared value of 2 × ( − 8012 − ( − 8024 ) ) = 24 {\displaystyle 2\times (-8012-(-8024))=24} with 3 − 1 = 2 {\displaystyle 3-1=2} degrees of freedom, and is equal to 6 × 10 − 6 {\displaystyle 6\times 10^{-6}} . 
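The worked example above can be reproduced with a few lines of standard-library Python; for 2 degrees of freedom the chi-squared survival function reduces to the closed form exp(−x/2), so no statistics package is needed.

```python
import math

# Null model: 1 parameter, log-likelihood -8024.
# Alternative model: 3 parameters, log-likelihood -8012.
ll_null, ll_alt = -8024.0, -8012.0
df = 3 - 1

# Test statistic D: twice the difference in log-likelihoods.
D = 2 * (ll_alt - ll_null)
print(D)  # 24.0

# For df = 2 the chi-squared survival function is exp(-x/2).
p = math.exp(-D / 2)
print(f"p = {p:.1e}")  # ~6.1e-06, matching the value quoted in the text
```

A p-value this small means the improvement in fit is far larger than one would expect by chance if the 1-parameter null model were true.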
Certain assumptions [ 1 ] must be met for the statistic to follow a chi-squared distribution , but empirical p -values may also be computed if those conditions are not met. An example of Pearson's test is a comparison of two coins to determine whether they have the same probability of coming up heads. The observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observations X . Here Θ consists of the possible combinations of values of the parameters p 1 H {\displaystyle p_{\mathrm {1H} }} , p 1 T {\displaystyle p_{\mathrm {1T} }} , p 2 H {\displaystyle p_{\mathrm {2H} }} , and p 2 T {\displaystyle p_{\mathrm {2T} }} , which are the probability that coins 1 and 2 come up heads or tails. In what follows, i = 1 , 2 {\displaystyle i=1,2} and j = H , T {\displaystyle j=\mathrm {H,T} } . The hypothesis space H is constrained by the usual constraints on a probability distribution, 0 ≤ p i j ≤ 1 {\displaystyle 0\leq p_{ij}\leq 1} , and p i H + p i T = 1 {\displaystyle p_{i\mathrm {H} }+p_{i\mathrm {T} }=1} . The space of the null hypothesis H 0 {\displaystyle H_{0}} is the subspace where p 1 j = p 2 j {\displaystyle p_{1j}=p_{2j}} . The dimensionality of the full parameter space Θ is 2 (either of the p 1 j {\displaystyle p_{1j}} and either of the p 2 j {\displaystyle p_{2j}} may be treated as free parameters under the hypothesis H {\displaystyle H} ), and the dimensionality of Θ 0 {\displaystyle \Theta _{0}} is 1 (only one of the p i j {\displaystyle p_{ij}} may be considered a free parameter under the null hypothesis H 0 {\displaystyle H_{0}} ). 
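For the two-coin example, the log-likelihood ratio statistic can be computed directly from a 2×2 table of counts. The sketch below uses made-up counts (purely illustrative, not from the text) and the closed-form χ²(1) survival function erfc(√(x/2)) available in the standard library, matching the single degree of freedom derived above.

```python
import math

# Hypothetical coin-flip counts (rows: coin 1, coin 2; columns: heads, tails).
n = [[43, 57],
     [62, 38]]

row = [sum(r) for r in n]
col = [sum(n[i][j] for i in range(2)) for j in range(2)]
total = sum(row)

# -2 log Lambda = 2 * sum n_ij * log(n_ij / e_ij), where e_ij are the
# expected counts under H0 (both coins share one heads probability).
stat = 2 * sum(
    n[i][j] * math.log(n[i][j] / (row[i] * col[j] / total))
    for i in range(2) for j in range(2)
)

# One degree of freedom (dim Theta = 2, dim Theta_0 = 1);
# chi-squared(1) survival function via the complementary error function.
p = math.erfc(math.sqrt(stat / 2))
print(f"-2 log Lambda = {stat:.3f}, p = {p:.4f}")
```

With these counts the statistic is roughly 7.3 and the p-value is well below 0.05, so the hypothesis that the two coins share the same heads probability would be rejected.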
Writing n i j {\displaystyle n_{ij}} for the best estimates of p i j {\displaystyle p_{ij}} under the hypothesis H , the maximum likelihood estimate is given by Similarly, the maximum likelihood estimates of p i j {\displaystyle p_{ij}} under the null hypothesis H 0 {\displaystyle H_{0}} are given by which does not depend on the coin i . The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H 0 {\displaystyle H_{0}} , the asymptotic distribution for the test will be χ 2 ( 1 ) {\displaystyle \chi ^{2}(1)} , the χ 2 {\displaystyle \chi ^{2}} distribution with one degree of freedom. For the general contingency table, we can write the log-likelihood ratio statistic as Wilks’ theorem assumes that the true but unknown values of the estimated parameters are in the interior of the parameter space . This is commonly violated in random or mixed effects models , for example, when one of the variance components is negligible relative to the others. In some such cases, one variance component can be effectively zero; in other cases, the models can be improperly nested. To be clear: These limitations on Wilks’ theorem do not negate any power properties of a particular likelihood ratio test. [ 3 ] The only issue is that a χ 2 {\displaystyle \chi ^{2}} distribution is sometimes a poor choice for estimating the statistical significance of the result. Pinheiro and Bates (2000) showed that the true distribution of this likelihood ratio chi-square statistic could be substantially different from the naïve χ 2 {\displaystyle \chi ^{2}} – often dramatically so. [ 4 ] The naïve assumptions could give significance probabilities ( p -values) that are, on average, far too large in some cases and far too small in others. 
In general, to test random effects, they recommend using restricted maximum likelihood (REML). For fixed-effects testing, they say, “a likelihood ratio test for REML fits is not feasible”, because changing the fixed effects specification changes the meaning of the mixed effects, and the restricted model is therefore not nested within the larger model. [ 4 ] As a demonstration, they set either one or two random effects variances to zero in simulated tests. In those particular examples, the simulated p -values with k restrictions most closely matched a 50–50 mixture of χ 2 ( k ) {\displaystyle \chi ^{2}(k)} and χ 2 ( k − 1 ) {\displaystyle \chi ^{2}(k-1)} . (With k = 1 , χ 2 ( 0 ) {\displaystyle \chi ^{2}(0)} is 0 with probability 1. This means that a good approximation was 0.5 χ 2 ( 1 ) . {\displaystyle \,0.5\,\chi ^{2}(1)\,.} ) [ 4 ] Pinheiro and Bates also simulated tests of different fixed effects. In one test of a factor with 4 levels ( degrees of freedom = 3), they found that a 50–50 mixture of χ 2 ( 3 ) {\displaystyle \chi ^{2}(3)} and χ 2 ( 4 ) {\displaystyle \chi ^{2}(4)} was a good match for actual p -values obtained by simulation – and the error in using the naïve χ 2 ( 3 ) {\displaystyle \chi ^{2}(3)} “may not be too alarming.” [ 4 ] However, in another test of a factor with 15 levels, they found a reasonable match to χ 2 ( 18 ) {\displaystyle \chi ^{2}(18)} – 4 more degrees of freedom than the 14 that one would get from a naïve (inappropriate) application of Wilks’ theorem, and the simulated p -value was several times the naïve χ 2 ( 14 ) {\displaystyle \chi ^{2}(14)} . They conclude that for testing fixed effects, “it's wise to use simulation.” [ a ]
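The 50–50 mixture just described is easy to apply in practice. A minimal sketch (assuming a single tested variance component, i.e. k = 1) compares the naive χ²(1) p-value with the mixture p-value, which is simply half of it because χ²(0) is a point mass at zero and contributes no tail probability.

```python
import math

def chi2_sf_1(x):
    """Survival function of the chi-squared distribution with 1 df."""
    return math.erfc(math.sqrt(x / 2))

# Testing one variance component (k = 1 restriction): the asymptotic
# null distribution is a 50-50 mixture of chi2(0) (a point mass at 0)
# and chi2(1), so for an observed statistic x > 0 the mixture p-value
# is half the naive chi2(1) p-value.
x = 3.0  # illustrative observed test statistic
naive = chi2_sf_1(x)
mixture = 0.5 * naive
print(f"naive p = {naive:.4f}, mixture p = {mixture:.4f}")
```

Note the direction of the error: the naive p-value is twice too large here, so ignoring the boundary problem makes the test conservative in this particular case.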
https://en.wikipedia.org/wiki/Wilks'_theorem
The will to live ( German : der Wille zum Leben ) is a concept developed by the German philosopher Arthur Schopenhauer , representing an irrational "blind incessant impulse without knowledge" that drives instinctive behaviors, causing an endless insatiable striving in human existence. This is contrasted with the concept of the will to survive under life-threatening conditions used in psychology, [ citation needed ] since Schopenhauer’s notion of the will to live is more broadly understood as the “animal[istic] force to endure, reproduce and flourish.” [ 1 ] There are significant correlations between the will to live and existential, psychological, social, and physical sources of distress . [ 2 ] Many people who have survived near-death experiences without explanation have described the will to live as a direct component of their survival. [ 3 ] The difference between the wish to die and the wish to live is also a unique risk factor for suicide . [ 4 ] In psychology, the will to live is the drive for self-preservation , usually coupled with expectations for future improvement in one's state in life. [ 5 ] The will to live is an important concept for understanding why we do what we do in order to stay alive for as long as we can. It can relate either to one's push for survival on the brink of death, or to someone simply trying to find meaning in continuing their life. Some researchers say that people who have a reason or purpose in life during such dreadful and horrific experiences often appear to fare better than those who find such experiences overwhelming. [ 6 ] Every day, people undergo countless types of negative experiences, some of which may be demoralizing, hurtful, or tragic. An ongoing question is what sustains the will to live in these situations. People who claim to have had experiences involving the will to live offer different explanations for it. 
[ 7 ] The will to live is considered to be a very basic drive in humans, though not necessarily the main driving force. In psychotherapy , Sigmund Freud coined the term pleasure principle , which describes the seeking of pleasure and the avoidance of pain . [ 8 ] Viktor Frankl , who spent time in German concentration camps, developed a form of psychotherapy called logotherapy , which may be translated as therapy focused on the "will to meaning". Maslow's hierarchy of needs highlights the innate appetite that people possess for love and belonging, but before all this comes the very basic and powerful will to live. Psychologists have established that human beings are a goal-oriented species. In assessing the will to live, it should be borne in mind that it can be augmented or diminished by the relative strength of other simultaneously existing drives. Psychologists generally agree [ weasel words ] that there is the will to live, the will to pleasure, the will to superiority and the will to connection. There are also usually varying degrees of curiosity with regard to what may be termed the will to identity, or the establishing of meaningful personal responses. The will to live is a platform without which it would not be possible to satisfy the other drives. However, this overlooks the possibility that there is a commonality among all creatures that drives all other urges. Self-preservation is behavior that ensures the survival of an organism. [ 9 ] Pain and fear are integral parts of this mechanism. Pain motivates the individual to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. [ 10 ] Most pain resolves promptly once the painful stimulus is removed and the body has healed, but sometimes pain persists despite removal of the stimulus and apparent healing of the body; and sometimes pain arises in the absence of any detectable stimulus, damage or disease. 
[ 11 ] Fear causes the organism to seek safety and may cause a release of adrenaline , [ 12 ] [ 13 ] which has the effect of increased strength and heightened senses such as hearing, smell, and sight. Self-preservation may also be interpreted figuratively, in regard to the coping mechanisms one needs to prevent emotional trauma from distorting the mind (see defence mechanism ). Even the simplest of living organisms (for example, single-celled bacteria) are typically under intense selective pressure to evolve responses that help them avoid a damaging environment, if such an environment exists. Organisms also evolve while adapting, even thriving, in a benign environment (for example, a marine sponge modifies its structure in response to current changes in order to better absorb and process nutrients). Self-preservation is therefore an almost universal hallmark of life. However, when introduced to a novel threat, many species will have a self-preservation response that is either too specialised, or not specialised enough, to cope with that particular threat. [ citation needed ] An example is the dodo , which evolved in the absence of natural predators and hence lacked an appropriate, general self-preservation response to heavy predation by humans and rats, showing no fear of them. “Existential, psychiatric, social, and, to a lesser degree, physical variables are highly correlated with the will to live”. [ 14 ] Existential issues found to correlate significantly include hopelessness, the desire for death, sense of dignity, and burden to others. Psychiatric issues found to be strongly associated include depression, anxiety, and lack of concentration. Among physical issues, appetite and appearance showed the strongest associations, though without the same consistent degree of correlation. 
The four main predictor variables of the will to live changing over time are anxiety, shortness of breath, depression, and sense of well-being, [ 15 ] which correlate with the other predictor variables as well. Social variables and quality-of-life measures, such as support and satisfaction with support from family, friends, and health care providers, are also shown to correlate significantly with the will to live. [ 16 ] Findings on the will to live have suggested that psychological variables are replaced by physical mediators of variation as death draws nearer. The will to live has also proven to be highly unstable. [ 17 ] Several studies have been conducted testing the theory of the will to live. These studies varied in their focus, but broadly sought to understand how the will to live differs by demographics , especially as concerns the elderly and the terminally ill . A study conducted in 2005 asked elderly participants to rate their will to live and tracked this data across time. It found that those who reported a high or stable will to live generally lived longer than those who reported a weak will to live. Additionally, this study proposed that women were generally better able to cope with life-altering or life-threatening conditions and situations than men. However, it also noted that the participants may not have been of stable health, and that further study was required before drawing any definite conclusions. [ 18 ] An earlier study conducted in 2002 tested the idea in terminally ill cancer patients, most of whom were elderly. This study found that those with the weakest will to live typically died sooner than those with a moderate will to live. Those with a high will to live could either die sooner or live as long as those with a moderate will to live. The authors went on to specify that further research is required, testing this theory against other terminal illnesses and in different age categories. 
[ 19 ] Anecdotal evidence also suggests a correlation between the individual will to live and survival in traumatic situations that include maltreatment . The Second World War and the Holocaust provide concrete examples of this, where many individuals survived years of malnourishment and mistreatment in concentration camps , and cited their will to live as a key part of their survival. [ 20 ] A study conducted in 2003 suggested that positive thought (i.e., having a positive outlook on one's future and life in general) could lower one's risk for health complications and diseases. This study posited that women who had a more positive outlook were more likely to carry a greater number of antibodies for certain flu strains , which suggested a generally stronger immune system than in those who had a negative outlook. [ 21 ] Further anecdotal evidence can be found through quantitative analysis of death records , which consistently show many people dying shortly after major holidays, suggesting that people will themselves to live until the holiday (or, in other cases, a birthday) and then pass away shortly thereafter. [ 22 ] [ 23 ]
https://en.wikipedia.org/wiki/Will_to_live
Willem P. C. " Pim " Stemmer (12 March 1957 – 2 April 2013) [ 2 ] was a Dutch scientist and entrepreneur who invented numerous biotechnologies . He was the founder and CEO of Amunix Inc., a company that creates " pharmaceutical proteins with extended dosing frequency". [ 3 ] His other prominent inventions include DNA shuffling , now referred to as molecular breeding. He held more than 97 patents . [ 3 ] Stemmer was honored with the Charles Stark Draper Prize in 2011 for his pioneering contributions to directed evolution , a field later recognized with the Nobel Prize in Chemistry in 2018. [ 4 ] He was elected a member of the National Academy of Engineering . Stemmer died of cancer on April 2, 2013. [ 5 ] Stemmer attended the Institut Montana Zugerberg , a boarding and day school on the Zugerberg , Switzerland , in the greater Zurich area, from which he graduated in 1975. He developed an interest in biology at the University of Amsterdam in the Netherlands , [ 6 ] from which he received an M.S. in biology in 1980. [ 7 ] It was not until 1980, however, when he traveled to the University of Wisconsin–Madison , that he was introduced to molecular biology . He received a PhD from the University of Wisconsin for his work on bacterial pili and fimbriae involved in host-pathogen interactions . Afterwards, he conducted postdoctoral research with Professor Fred Blattner on phage display of random peptide libraries and antibody fragment expression in E. coli bacteria. Stemmer initially worked on antibody fragment engineering at Hybritech . He then became a scientist at Affymax , where he invented DNA shuffling (also known as "molecular breeding"). In 1997 he founded Maxygen to commercialize DNA shuffling, which led to the founding of both Verdia and Codexis as spin-offs. Stemmer founded Avidia in 2003 after inventing its Avimer technology. 
[ citation needed ] He co-founded Amunix in 2006 together with Volker Schellenberger; [ citation needed ] its products comprise a "clinically proven pharmaceutical payload, typically a human protein, genetically fused to ‘XTEN’, a long, unstructured, hydrophilic protein chain", which prolongs serum half-life by "increasing the hydrodynamic radius , thus reducing kidney filtration ". [ 3 ] In 2008 he founded Versartis as a spin-off from Amunix; Versartis went public on March 21, 2014. [ citation needed ] In 2011 Stemmer was honored with the Charles Stark Draper Prize , the United States' top engineering honor, for his pioneering contributions to directed evolution , a field later recognized with the Nobel Prize in Chemistry in 2018. Directed evolution is a "method used to engineer novel enzymes and biocatalytic processes" for various pharmaceutical and chemical products, allowing researchers to endow proteins and cells with properties that ultimately enable solutions in food ingredients, pharmaceuticals, toxicology , agricultural products, and biofuels . [ 3 ] His portfolio of patents from Maxygen was ranked as the #1 portfolio in pharma/biotech for 2003 by MIT 's Technology Review , and #2 in a review of the 150 largest pharma and biotechnology companies by The Wall Street Journal in 2006. He received the Doisy Award in 2000 and the David Perlman Award in 2001. [ 7 ] In 2005 he won the NASDAQ -sponsored VCynic Syndicate, a "syndicate of venture capitalists " that rated business case studies based on historical, current, and mock companies. [ 8 ]
https://en.wikipedia.org/wiki/Willem_P._C._Stemmer
Willem Rudolfs (February 13, 1886 – February 20, 1959) was a Dutch-born biochemist in entomology and a pioneer in the field of sanitary sciences . Rudolfs was born in Wageningen , the Netherlands , and moved to the United States. In 1921, he earned his PhD at Rutgers College. [ 1 ] From 1921 to 1925 Rudolfs was a teacher at the Department of Entomology of Rutgers University . [ 2 ] In this period, his research as a biochemist in entomology focused on mosquitoes : repelling them from human skin, attracting them so they could be counted, and their behaviour in different weather conditions. [ 3 ] [ 4 ] In his thirty years at the New Jersey Agricultural Experiment Station of Rutgers, Rudolfs became a world-leading expert on sanitary sciences. [ 5 ] He was a highly involved member of the Federation of Sewage Works Associations. [ 6 ] Today this is the worldwide-operating Water Environment Federation , which honours exceptional publications with the Rudolfs Industrial Waste Management Medal . [ 7 ] [ 8 ] In 1952, Rudolfs retired and moved back to the Netherlands. [ 9 ] He gave several lectures that inspired the Dutch industry to take on industrial waste water treatment collectively. [ 10 ] [ 11 ] [ 12 ] The Rudolfs Industrial Waste Management Medal is an award established in 1949 by the Water Environment Federation (WEF) in honor of Willem Rudolfs, a notable figure in environmental engineering. The medal is awarded to scientists who have made an extraordinary contribution to industrial wastewater management through a significant scientific publication. The aim of the award is to recognize and promote advancements in the treatment and management of industrial wastewater, thereby contributing to public health and environmental sustainability. [ 13 ] [ 14 ] Background The Water Environment Federation (WEF) is a global non-profit organization dedicated to water quality and water management. 
As of 2019, WEF had 33,000 individual members and 75 affiliated member associations worldwide. The federation provides education and training for water professionals to help improve public health and protect the environment. Significance The Rudolfs Medal highlights the importance of scientific innovation in wastewater management, especially in industrial contexts where pollution and resource recovery are pressing challenges. By acknowledging impactful research, the award encourages ongoing development in environmental science and sustainable industry practices.
https://en.wikipedia.org/wiki/Willem_Rudolfs
The Willgerodt rearrangement or Willgerodt reaction is an organic reaction converting an aryl alkyl ketone , alkyne , or alkene to the corresponding amide by reaction with ammonium polysulfide , named after Conrad Willgerodt . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The formation of the corresponding carboxylic acid is a side reaction resulting from hydrolysis of the amide. When the alkyl group is an aliphatic chain (n typically 0 to 5), multiple reactions take place, with the amide group always ending up at the terminal end. The net effect is thus migration of the carbonyl group to the end of the chain and oxidation. An example with modified reagents (sulfur, concentrated ammonium hydroxide and pyridine ) is the conversion of acetophenone to 2-phenylacetamide and phenylacetic acid. [ 5 ] The related Willgerodt–Kindler reaction [ 6 ] takes place with elemental sulfur and an amine such as morpholine . The initial product is a thioamide , for example that of acetophenone, [ 7 ] which can again be hydrolyzed to the amide. The reaction is named after Karl Kindler [ de ] . A possible reaction mechanism for the Kindler variation is depicted below: [ 8 ] The first stage of the reaction is basic imine formation between the ketone group and the amine group of morpholine to give an enamine . This reacts as a nucleophile with electrophilic sulfur, similar to a Stork enamine alkylation reaction. [ verification needed ] The actual rearrangement takes place when the amine group attacks the thiocarbonyl in a nucleophilic addition , temporarily forming an aziridine , and the thioamide forms by tautomerization .
https://en.wikipedia.org/wiki/Willgerodt_rearrangement
The Willi Hennig Society "was founded in 1980 with the expressed purpose of promoting the field of phylogenetic systematics ." [ 1 ] The society is represented by phylogenetic systematists managing and publishing in the peer-reviewed journal titled Cladistics . The society is named after Willi Hennig , a German systematic entomologist who developed the modern methods and philosophical approach to systematics in the 1940s and 1950s. [ 1 ] The society is also involved in reconstructing the tree of life . The current president, Prof. Dr. Stefan Richter of Universität Rostock, was elected in 2022, succeeding Prof. Dr. Christiane Weirauch. The Willi Hennig Society grew out of a philosophical division among systematic biologists in the late 1970s. A debate created a rift between pheneticists, who advocated statistical or numerical methods that grouped taxa by overall similarity, and systematic biologists who adopted a strict cladistic approach to taxonomy, recognizing groups by shared, derived characters alone. The last public clash occurred at the 13th Annual Numerical Taxonomy Conference at the Museum of Comparative Zoology (MCZ), Harvard University in October 1979. Twelve months later, the Willi Hennig Society was founded through invitation by Professor Edward O. Wiley and founding president James S. Farris . Seventy-eight systematists from Great Britain, Sweden, Canada, and the United States gathered at the University of Kansas to inaugurate the Willi Hennig Society. Membership doubled to over 150 by the second meeting. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The Willi Hennig Society is primarily involved in projects aimed at the study and classification of biodiversity using the methods and philosophy originally outlined by Willi Hennig in his book "Phylogenetic Systematics". [ 7 ] In 1994, the society teamed up with the Society of Systematic Biologists and the American Society of Plant Taxonomists to organize the Systematics Agenda 2000. 
[ 8 ] [ 9 ] The Systematics Agenda 2000 is ongoing and has set out to deliver on the following mission statements: The Willi Hennig Society publishes the journal Cladistics . [ 11 ]
https://en.wikipedia.org/wiki/Willi_Hennig_Society
William A. Mitchell (October 21, 1911 – July 26, 2004) was an American food chemist who, while working for General Foods Corporation between 1941 and 1976, was the key inventor behind Pop Rocks , Tang , Cool Whip , and powdered egg whites . [ 1 ] During his career he received over 70 patents . He was born in Raymond, Minnesota . When he was a teenager, he ran the sugar crystallization tanks at the American Sugar Beet Company and slept two hours a night before going to school. He earned an undergraduate degree at Cotner College in Lincoln, Nebraska and then graduated with a master's degree in chemistry from the University of Nebraska . [ 2 ] Mitchell got a research job at an Agricultural Experiment Station in Lincoln, Nebraska. A lab accident there left him with second- and third-degree burns over most of his body. [ 3 ] He joined General Foods in 1941. His first major success came with a tapioca substitute he helped develop during World War II , in response to the disruption of cassava supplies. Because of this, the substitute quickly became known as "Mitchell mud" within the US WW II infantry. [ 3 ] [ 4 ] In 1957, he invented a powdered fruit-flavored vitamin-enhanced drink mix that became known as Tang Flavor Crystals. NASA started using Tang in 1962 in their space program. [ 2 ] [ 5 ] In 1956, he tried to create an instantly self-carbonating soda, which resulted in the creation of Pop Rocks. Although Pop Rocks were not sold until 1975, he received patent 3,012,893 for their manufacturing process in 1961. In 1967, he introduced Cool Whip, which quickly became the largest and most profitable line in its division. [ 2 ] He received 70 patents [ 6 ] in total during his career. [ 2 ] Mitchell was a resident of Lincoln Park, New Jersey for many years before moving out west after his retirement in 1976. [ 7 ] He was married to Ruth Cobbey Mitchell and they had seven children. His daughter, Cheryl Mitchell , also became a food scientist. 
[ 8 ] He moved to Stockton after Ruth's death in 1999. Mitchell died of heart failure on July 26, 2004, at the age of 92 in Stockton, California , where he was living with his daughter. [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/William_A._Mitchell
The William B. Coley Award for Distinguished Research in Basic and Tumor Immunology is presented annually by the Cancer Research Institute , to scientists [ 1 ] who have made outstanding achievements in the fields of basic and tumor immunology and whose work has deepened our understanding of the immune system's response to disease, including cancer. The first awards were made in 1975 to a group of 16 scientists called the "Founders of Cancer Immunology." In 1993, the award was renamed after William B. Coley , [ 2 ] a late-nineteenth century surgeon who made the first attempts at the non-surgical treatment of cancer through stimulation of the immune system. For this reason, Coley has become known as the "Father of Cancer Immunotherapy." Source: [ 3 ]
https://en.wikipedia.org/wiki/William_B._Coley_Award
William Christopher Zeise (15 October 1789 – 12 November 1847) was a Danish organic chemist . He is best known for synthesising one of the first organometallic compounds , named Zeise's salt in his honour. He also performed pioneering studies in organosulfur chemistry , discovering the xanthates in 1823. William Christopher Zeise was born 15 October 1789 in Slagelse , the son of apothecary Frederick Zeise (1754–1836), who was an old friend of physicist Hans Christian Ørsted 's father. Zeise attended Slagelse Latin school with the poet Bernhard Severin Ingemann . [ 1 ] He moved to Copenhagen in 1805 to take up an apprenticeship under Gottfried Becker as a pharmacy assistant ( Apoteksmedhjælper ) at the Royal Court Pharmacy. Becker, an accomplished chemist, was employed as extraordinary professor of chemistry at the University . However, Zeise felt dissatisfied there and returned home after only a few months, complaining of health issues. Around this time, his interest in science ( natural philosophy ) began to develop. He familiarised himself with the new quantitative chemical theory of Antoine Lavoisier and read widely: Nicolai Tychsen's Apothekerkunst ( Theoretical and practical instructions for Pharmacists, 1804); Gren's Chemistry ; Adam Hauch's Principles of Natural Philosophy ; and Ørsted 's papers in Scandinavian Literature and Letters (his treatise on spontaneous combustion having made an especially strong impression upon him). Around this time, he experimented with a homemade voltaic pile . At seventeen years of age, he rearranged his father's pharmacy in accordance with the new pharmacopoeia of 1805, which had imposed the antiphlogistic nomenclature. In the summer of 1806, he noted in his diary "a most remarkable awakening within me for something higher, for scientific creative work in general, but for Science, chiefly Chemistry, deeply and in particular". 
How strong an impression this inner experience had made on him can be judged from the fact that he now wanted to return to Copenhagen, not to the Court Pharmacy, but to pursue the study of chemistry. In autumn 1806, he was welcomed into the family home of Ørsted , where he was given a position as an assistant, helping Ørsted prepare his university lectures. His stay with Ørsted lasted several years. Ørsted later recounted how he had influenced Zeise through conversations and encouraged him when he expressed the desire to take the university entrance examination ( examen artium ). Ørsted spoke fondly of Zeise's independent spirit. Zeise became a university student in 1809. Zeise had at first intended to study medicine, but while attending lectures it became clear that his interests had a broader scientific base; and chemistry remained his favourite subject. He still experimented in Ørsted's laboratory; but since at that time there was no prospect of a teaching position at the university, he took the pharmacist exam in 1815, later a master's degree ( magisterkonferens ), and on 21 October 1817 he defended his doctoral dissertation on The effect of alkalis upon organic substances . He performed the experimental part of this work in a small laboratory which he had fitted out in 1816 in the pharmacy in Slagelse . As the university had no separate chair in chemistry and no scientific laboratory, Zeise decided to work and study abroad in Germany and Paris. [ 1 ] In 1818 he managed to gather travelling money. Zeise arrived in Göttingen , where he spent four months working in Friedrich Stromeyer 's laboratory, one of the few experimental laboratories in Germany at that time. He was trained particularly in analytical chemistry , in which he would become a great master. Zeise next spent nearly a year in Paris.
His diary entries reflect how diligent he was, and depict vividly the impression he got of the famous French scientists he came in contact with. In August 1818, while in Paris, Zeise became personally acquainted with the distinguished Swedish chemist Jöns Jacob Berzelius . Berzelius received the young Danish chemist with great benevolence, expressing his admiration for Zeise's doctoral thesis. They continued a respectful friendship thereafter, despite Zeise's being ten years younger than Berzelius. [ 2 ] Zeise returned to Denmark in the autumn of 1819. The prospects were not bright for an appointment at the University , although he was likely the only scientifically trained chemist in the country at that time. However, he learned at the end of the year that he had received public funds to support his work in science. That same year the university rented an apartment in Nørregade for use as a physics workshop and for physics education. Ørsted converted the apartment kitchen into a modest little laboratory, for which Zeise was made responsible. In this, the so-called Royal Science Laboratory, Zeise received ten students in the first year, to whom he lectured partly in the laboratory and partly in the physics workshop. In June 1822 Zeise was appointed extraordinary professor of Chemistry. In 1824, Professor Ørsted selected a nearby farm as the site for what would later become the Danish Polytechnic Education Institution . It was decided to transform the large stable building in the courtyard into a dedicated chemical laboratory. When the Polytechnic College was eventually founded in 1829, Zeise was instrumental in getting the chemical laboratory expanded and organised. He was professor of organic chemistry at the polytechnic from its opening until his death. [ 2 ] In 1823, while still in the small laboratory at Nørregade , he identified a new family of sulfur-containing compounds.
He called them xanthates (from Greek xanthos "yellow") on account of the predominantly yellow colour of xanthate salts. [ 3 ] Zeise was accordingly awarded a silver medal by the Royal Danish Academy of Sciences and Letters , and he became a member of that body the following year on the recommendation of Ørsted. In 1836 he became a Knight of the Order of the Dannebrog , a very high honour bestowed by the Danish monarch . As a teacher, Zeise demanded strict accuracy, conscientiousness , order, and diligence from his pupils. In February 1842, he married Maren Martine Bjerring and adopted her two children. [ 1 ] Zeise's health was fragile for much of his life and he suffered greatly, possibly due to his handling of noxious chemicals in poorly ventilated rooms. He died of tuberculosis [ 4 ] in Copenhagen on 12 November 1847 and was buried in Assistens Cemetery in the same city. [ 5 ] Zeise made several scientific discoveries. His discovery of mercaptans (thiols) in 1832 and of thioethers in 1833 once provided weighty support for the influential (now obsolete) " Radical Theory " which Berzelius and Liebig developed, provoking important chemical studies. His discovery and work on xanthates led to the widespread use of xanthate salts in synthetic chemistry. In 1830, Zeise attempted to react platinum chloride with ethanol, leading to a series of platinum -based organometallic compounds . [ 6 ] One of these compounds, originally referred to by Zeise himself as “ sal kalicoplatinicus inflammabilis ”, [ 7 ] was subsequently named after him according to the tradition of the day: it is still called Zeise's salt . Zeise's claim that the newly discovered salt contained ethylene was received with distrust by Justus von Liebig , whose attacks on Zeise, though perhaps understandable at the time, were quite unjustified. The complex indeed contains ethylene.
Attempts to establish the correct structure and composition of Zeise's salt drove much basic research during the second half of the 19th century and led to a greater sophistication in organometallic chemistry. The structure of Zeise's salt was definitively resolved only with the advent of X-ray crystallography [ 8 ] [ 9 ] and the nature of its platinum to ethylene bond was not understood until the development of the Dewar–Chatt–Duncanson model in the 1950s. [ 10 ] [ 11 ] [ 12 ] Shortly before he died, Zeise published his attempts to purify the pigment carotene from carrot juice while in the Polytechnic Institute , finding it to be soluble in carbon disulfide and correctly identifying it as a hydrocarbon. [ 13 ] Nearly a quarter of a century later Zeise’s views were fully vindicated by two scientists at the Royal College of Chemistry in London: Peter Griess and Carl Alexander von Martius confirmed Zeise’s formula and proved that ethylene was liberated when Zeise’s salt was decomposed. [ 2 ] Note: This article has been based largely on a biography of William Christopher Zeise, written in Danish by Carl Frederik Bricka , in the first edition of the Dansk Biografisk Lexikon (Danish Biographical Lexicon), Volume XIX (1887-1905). The text in this article, therefore, has mostly been translated from the Danish original. The original work is in the Public Domain and is available through the Projekt Runeberg server. Where no other reference is provided the text of this article derives from this source.
https://en.wikipedia.org/wiki/William_Christopher_Zeise
William E. Bentley is the Robert E. Fischell Distinguished Professor of Engineering, founding Director of the Fischell Institute for Biomedical Devices, and currently the Director of the Maryland Technology Enterprise Institute in the A. James Clark School of Engineering at the University of Maryland . He was previously the Chair of the Fischell Department of Bioengineering, [ 2 ] where he assisted in establishing the department and provided leadership that led to its nationally ranked status. [ 3 ] Dr. Bentley is also appointed in the Department of Chemical and Biomolecular Engineering [ 4 ] at the University of Maryland , College Park and the Institute for Bioscience and Biotechnology Research. [ 5 ] He has served on advisory committees and panels for the NIH, NSF, DOD, DOE, USDA, and several state agencies. He has mentored over 25 Ph.D.s, some of whom are academics at: Cornell University (x2), University of Colorado , Clemson University , University of Connecticut , Tufts University , Postech (Korea), and National Tsing Hua University (Taiwan). Dr. Bentley received his undergraduate (BS, '82) and Master of Engineering degrees ('83) from Cornell University and his Ph.D. ('89) from the University of Colorado, Boulder , all in chemical engineering . Dr. Bentley worked for the International Paper Company , on alternative fuels and recovery process improvement. [ 6 ] At Maryland since 1989, Dr. Bentley has focused his research on the development of molecular tools that facilitate the expression of biologically active proteins, having authored over 200 related archival publications. Recent interests are on deciphering and manipulating signal transduction pathways, including those of bacterial communication networks, for altering cell phenotype. He was an integral component in the creation of the Bioprocess Scale-Up Facility (BSF) at the University of Maryland. 
While associated with the BSF, the facility performed contract research projects for MedImmune (Synagis) and Martek (Life's DHA) [ 7 ]
https://en.wikipedia.org/wiki/William_E._Bentley
William Esco Moerner , also known as W. E. Moerner (born June 24, 1953), is an American physical chemist and chemical physicist with current work in the biophysics and imaging of single molecules. He is credited with achieving the first optical detection and spectroscopy of a single molecule in condensed phases , along with his postdoc, Lothar Kador. [ 1 ] [ 2 ] Optical study of single molecules has subsequently become a widely used technique in chemistry, physics, and biology. [ 3 ] In 2014, he was awarded the Nobel Prize in Chemistry . [ 4 ] [ 5 ] Moerner was born in Pleasanton, California , on June 24, 1953, the son of Bertha Frances (Robinson) and William Alfred Moerner. He attended Washington University in St. Louis for undergraduate studies as an Alexander S. Langsdorf Engineering Fellow, and obtained three degrees: a B.S. in physics, a B.S. in electrical engineering, and an A.B. in mathematics in 1975. [ 6 ] He then pursued graduate study, partially supported by the National Science Foundation , at Cornell University in the group of Albert J. Sievers III. [ 7 ] Here he received an M.S. degree and a Ph.D. degree in physics in 1978 and 1982, respectively. His doctoral thesis was on vibrational relaxation dynamics of an IR-laser-excited molecular impurity mode in alkali halide lattices. [ 8 ] Moerner worked at the IBM Almaden Research Center in San Jose, California , as a research staff member from 1981 to 1988, a manager from 1988 to 1989, and project leader from 1989 to 1995. After an appointment as visiting guest professor of physical chemistry at ETH Zurich (1993–1994), he assumed the distinguished chair in physical chemistry in the department of chemistry and biochemistry at the University of California, San Diego , from 1995 to 1998. In 1997 he was named the Robert Burns Woodward Visiting Professor at Harvard University . His research group moved to Stanford University in 1998, where he became professor of chemistry (1998), Harry S.
Mosher Professor (2003), and professor, by courtesy, of applied physics (2005). [ 9 ] [ 10 ] [ 11 ] Moerner served as chair of the chemistry department from 2011 to 2014. [ 12 ] His current areas of research and interest include: single-molecule spectroscopy and super-resolution microscopy , physical chemistry , chemical physics , biophysics , nanoparticle trapping, nanophotonics , photorefractive polymers, and spectral hole-burning. [ 10 ] [ 13 ] As of May 2014, Moerner was listed as a faculty advisor in 26 theses written by Stanford graduate students. [ 14 ] Recent editorial and advisory boards Moerner has served on include: member of the Board of Scientific Counselors for the National Institute of Biomedical Imaging and Bioengineering (NIBIB); [ 15 ] advisory board member for the Institute of Atomic and Molecular Sciences, Academia Sinica, Taiwan; [ 16 ] advisory editorial board member for Chemical Physics Letters ; [ 17 ] advisory board member for the Center for Biomedical Imaging at Stanford; [ 18 ] and chair of Stanford University's health and safety committee. [ 12 ] Moerner's awards include: the Outstanding Young Professional Award for 1984 (as national winner), [ 19 ] from the electrical engineering honorary society, Eta Kappa Nu , April 22, 1985; IBM Outstanding Technical Achievement Award for Photon-Gated Spectral Hole-Burning, July 11, 1988; [ 20 ] IBM Outstanding Technical Achievement Award for Single-Molecule Detection and Spectroscopy, November 22, 1992; [ 20 ] Earle K. Plyler Prize for Molecular Spectroscopy , American Physical Society , 2001; [ 21 ] Wolf Prize in Chemistry , 2008; [ 22 ] [ 23 ] [ 24 ] Irving Langmuir Award in Chemical Physics, American Physical Society , 2009; [ 25 ] [ 26 ] Pittsburgh Spectroscopy Award, 2012; [ 27 ] Peter Debye Award in Physical Chemistry, American Chemical Society , 2013; [ 28 ] [ 29 ] the Engineering Alumni Achievement Award, Washington University, 2013; [ 6 ] and the Nobel Prize in Chemistry, 2014.
[ 4 ] [ 5 ] Moerner also holds more than a dozen patents. His honorary memberships include Senior Member, IEEE , June 17, 1988, [ 20 ] and Member, National Academy of Sciences , 2007. [ 30 ] [ 31 ] He is also a Fellow of the Optical Society of America , May 28, 1992; [ 20 ] [ 32 ] the American Physical Society , November 16, 1992; [ 33 ] the American Academy of Arts and Sciences , 2001; [ 34 ] and the American Association for the Advancement of Science , 2004. [ 35 ] Moerner was born on June 24, 1953, at Parks Air Force Base in Pleasanton, California . From birth, his family called him by his initials W. E. as a way to distinguish him from his father and grandfather who are also named William. [ 9 ] He grew up in Texas where he attended Thomas Jefferson High School in San Antonio . He participated in many activities during high school: Band, Speech and Debate, Math and Science Contest Team, Bi-Phy-Chem, Masque and Gavel, National Honor Society, Boy Scouts, Amateur Radio Club, Russian Club, Forum Social Club, Toastmasters, "On the Spot" Team and Editor of Each has Spoken. Moerner and his wife, Sharon, have one son, Daniel. [ 36 ]
https://en.wikipedia.org/wiki/William_E._Moerner
William Everett Warner (August 22, 1897 – July 12, 1971) was an American academic, organization founder, and one of the "great leaders" and pioneers of the industrial arts education profession, now known as technology education . He was the founder of Epsilon Pi Tau honorary society and the American Industrial Arts Association (now the International Technology and Engineering Educators Association ). Warner was born in Roanoke, Illinois on August 22, 1897. [ 1 ] [ 2 ] His parents were Eva (née Redmon) and Isaac Newton Warner, a teacher and principal. [ 3 ] He was the oldest of three children, who later became teachers. [ 3 ] His family moved to Elgin, Illinois , followed by Normal, Illinois . [ 3 ] In the spring of 1908, the family moved to Chicago where his father was enrolled in the University of Chicago . [ 3 ] Warner attended the Woodlawn School in Chicago. [ 3 ] After Isaac Warner completed his bachelor's degree in 1910, he became the professor of teacher education at the Platteville Normal School in Platteville, Wisconsin . [ 3 ] Warner, who was thirteen years old, was educated at the Normal High School. [ 3 ] He was most interested in manual training and woodworking , and his goal was to be able to teach manual training. [ 3 ] During the summer, he worked as a farmhand, mowed grass, and ran a crusher at a local quarry. [ 3 ] While in school he worked as a bookkeeper for a local mine. [ 3 ] His earnings helped support his family who was still paying off his father's college loan. [ 3 ] However, he was also able to purchase clothes and an alto horn . [ 3 ] When John Philip Sousa played in Platteville, he hired Warner to play with the Sousa Band. [ 3 ] Warner graduated from Platteville Normal School in 1917. His first teaching position was at a high school in Lodi, Wisconsin . [ 3 ] Unhappy in Lodi, he moved to Stevens Point, Wisconsin, the next year.
[ 3 ] He was drafted for service in World War I , attending officers' training in Waco, Texas . [ 1 ] [ 3 ] After the war, he returned to teaching and was eventually the assistant principal at a vocational school in Wausau, Wisconsin . [ 3 ] However, he resigned when the Smith–Hughes Act for vocational education passed, saying that "he could not work under the narrow accommodations of the act". [ 3 ] Warner enrolled in the University of Wisconsin , earning a B.A. in 1923 and an MS in 1924. [ 1 ] His thesis was The Control of the Continuation School. [ 4 ] While at Wisconsin, he was a member of the Arts and Crafts Club and Square and Compass , an organization for Master Masons . [ 5 ] [ 6 ] He paid for his tuition by playing his alto horn. [ 3 ] Warner then attended the Teachers College, Columbia University where he received his Ph.D. in 1928. [ 7 ] [ 1 ] [ 3 ] At the time, Columbia was the top graduate school in education in the United States. [ 7 ] Warner studied under Frederick Gordon Bonser, along with John Dewey , Ira S. Griffith, Lois Mossman, Charles R. Richards, V. M. Russel, James E. Russell, David Snedden , and William H. Varnum. [ 8 ] [ 7 ] [ 3 ] Warner incorporated industrial arts with his studies; he claimed to be the first person to receive an advanced degree in industrial arts in the United States. [ 7 ] Warner became an assistant professor of industrial arts education at Ohio State University in 1925. [ 8 ] That same year, he established the graduate program in industrial arts at Ohio State. [ 9 ] [ 7 ] [ 3 ] Students came to the program from across the country. [ 3 ] In 1929, he established the American Security Research Foundation. He served as its first chairman. [ 3 ] He also developed a "laboratory of industries" that was installed in county schools in Ohio before World War II, first as an experiment and later as a standard in the field of industrial arts. [ 3 ] Many schools added new buildings to accommodate a new industrial arts laboratory.
[ 3 ] Warner founded Epsilon Pi Tau honorary society at Ohio State University in 1929. [ 9 ] [ 1 ] The organization spread to include more than 125 chapters in North America and the Philippines. He served as its executive secretary for over forty years. [ 9 ] [ 3 ] He directed The Terminological Investigation of Professional and Scientific Terms in Vocational and Practical Arts Education from 1929 to 1933. [ 7 ] This was a project of the Western Arts Association and defined terminology used by educators and in the field of industrial arts. [ 7 ] He was also president of the Western Arts Association from 1932 to 1937. [ 7 ] He published the influential work, Terminological Investigation, in 1933. [ 8 ] In this book and other work, Warner is credited with developing a new curriculum and adding the word "technology" to the industrial arts profession. [ 7 ] In 1934, he was chairman of the committee that published A Prospectus for Industrial Arts in Ohio . [ 7 ] He was promoted to full professor in 1939. [ 8 ] Warner established the American Industrial Arts Association (now the International Technology and Engineering Educators Association ) during the 10th anniversary celebration of Epsilon Pi Tau in 1939. [ 9 ] [ 1 ] He was the association's first president. [ 1 ] During World War II , he rose to the rank of lieutenant colonel, was a member of General Eisenhower's staff in Versailles and London, and received a Purple Heart . [ 1 ] After the war, he returned to Ohio State where he spent the rest of his career. [ 8 ] [ 1 ] However, from 1950 to 1953, he took a leave of absence to be the executive director of The Civil Defense in Ohio. [ 8 ] [ 1 ] Warner was influential in the formation of the National Council on Industrial Arts Teacher Education . [ 3 ] He was the first editor of Industrial Arts Teacher .
[ 9 ] He lectured at more than 100 colleges across the United States and abroad and helped develop industrial arts programs in elementary, secondary, and post-secondary schools. [ 1 ] He became a professor emeritus of Ohio State in 1967. [ 2 ] For his many accomplishments in the field, Warner is considered one of the "great leaders" of the industrial arts profession. [ 7 ] Epsilon Pi Tau named the William E. Warner Awards Program in his honor. [ 10 ] Warner's papers are archived at Kent State University . [ 1 ] On August 14, 1920, Warner married Ellen E. Todd of Stevens Point in Chicago. [ 3 ] [ 2 ] She taught elementary school and supported their household while he was in graduate school. [ 3 ] After they moved to Columbus, Ellen Warner was recognized as an expert in special education for children and served on the University Bureau of Educational Research. [ 3 ] The couple celebrated their 50th wedding anniversary in August 1970. [ 3 ] Warner was a member of the American Legion , the Army and Navy Club of New York, the Newcomen Society of England, and the Rotary . [ 3 ] Warner died in Columbus, Ohio on July 12, 1971. [ 1 ] [ 2 ] His funeral services were held in Columbus and he was buried in the Forest Home Cemetery in Stevens Point. [ 2 ]
https://en.wikipedia.org/wiki/William_E._Warner
William Edward Augustin Aikin (6 February 1807 – 31 May 1888), known professionally as William E. A. Aikin , was an American analytical chemist and natural scientist . He was chair of the chemistry department at the University of Maryland from 1837 to 1883. While most of his work focused on chemistry, he was also accomplished in other fields of the natural sciences. Aikin was born in Rensselaer County, New York on February 6, 1807. He graduated from the Rensselaer Institute in 1829 and shortly thereafter earned a license from the New York State Medical Society . Aikin married twice and had 28 children. He outlived both wives and all but three of his children. [ 1 ] [ 2 ] [ 3 ] Despite completing his training and earning an honorary M.D. degree from the Vermont Academy of Medicine , Aikin turned away from the medical profession and took a position in 1833 teaching natural sciences at the Western Female Collegiate Institute in Pittsburgh . Moving to Baltimore , he became an associate professor teaching chemistry and pharmacy at the University of Maryland for one year until he was elected chair of the chemistry department in October 1837. He filled that role until his withdrawal as Emeritus Professor in 1883. He was Dean of the School of Medicine at the university from 1840–1841 and from 1844 to 1855. Other positions he held included Professor of Natural Philosophy in the School of Arts and Sciences at the University of Maryland, Lecturer at the Maryland Institute , and Professor of Physics, Chemistry, and Natural Philosophy at Mount Saint Mary's College in Emmitsburg. [ 2 ] [ 3 ] [ 4 ] [ 5 ] One assessment of him reads: If you want a pretty good practical mathematician, one of the best botanists in America, an experimental chemist, of the first order, a very superior Geologist, Mineralogist, and Zoologist, you have him in Dr. William Aikin. [ 5 ]
https://en.wikipedia.org/wiki/William_Edward_Augustin_Aikin
The William F. Meggers Award has been awarded annually since 1970 by the Optical Society (originally called the Optical Society of America) for outstanding contributions to spectroscopy. [ 1 ] It was established to honor William Frederick Meggers and his contributions to the fields of spectroscopy and metrology . Source: [ 1 ]
https://en.wikipedia.org/wiki/William_F._Meggers_Award_in_Spectroscopy
William Michael Gelbart (born June 12, 1946) is Distinguished Professor of Chemistry and Biochemistry at the University of California, Los Angeles , and a member of the California NanoSystems Institute and the UCLA Molecular Biology Institute. He obtained his Bachelor of Science degree from Harvard University in 1967, his Master's (1968) and PhD (1970) degrees from the University of Chicago , and did postdoctoral work at the University of Paris (1971) and the University of California, Berkeley (1972). After 30 years of research in theoretical physical chemistry , contributing notably to the fields of gas-phase photophysics, optical properties of simple liquids , and the statistical physics of complex fluids , he started a biophysics laboratory with Charles Knobler in 2002 to investigate the physical aspects of viral infectivity. Gelbart's early interest in science was inspired by his time as an undergraduate researcher in the molecular spectroscopy group of William Klemperer at Harvard . As a graduate student at the University of Chicago , with his mentors Stuart A. Rice , Karl Freed , and Joshua Jortner , he developed the modern theory of non-radiative processes ("radiationless transitions") in molecular photophysics. [ 1 ] [ 2 ] He was a US National Science Foundation/NATO Postdoctoral Fellow at the University of Paris in 1971, and a Miller Institute Postdoctoral Fellow at UC Berkeley in 1972, during which time he switched fields and formulated a general theory of collision-induced optical properties of simple fluids. [ 3 ] He was appointed Assistant Professor of Chemistry at UC Berkeley in 1972, continuing his research on the quantum mechanical theory of molecular spectroscopy [ 4 ] and on the statistical mechanical theory of intermolecular and multiple light scattering in liquids away from and near their critical points .
[ 5 ] [ 6 ] He moved to UCLA as Associate Professor of Chemistry in 1975, and was promoted to full Professor in 1979 and to Distinguished Professor in 1999. He was Chair of the Department of Chemistry and Biochemistry at UCLA from 2000-2004 and has been a member of UCLA's California NanoSystems Institute since 2004 and of its Molecular Biology Institute since 2008. At UCLA he became a leader in the then-emerging fields of " complex fluids " and " soft matter physics ". Shortly after moving there he began a 40-year collaboration with Avinoam Ben-Shaul on statistical-thermodynamic models of liquid crystal systems, polymer and polyelectrolyte (in particular, DNA ) solutions, and colloidal suspensions , and on the self-assembly theory of micelles , surfactant monolayers , and biological membranes. [ 7 ] [ 8 ] During a sabbatical year in 1998-99 at the Institute for Theoretical Physics at UC Santa Barbara and at the Curie Institute in Paris , Gelbart became deeply intrigued by viruses and over the course of the next several years, with his UCLA colleague Charles Knobler, established a laboratory to investigate simple viruses outside their hosts and isolated in test tubes. Early results included: the first measurement of pressure inside DNA viruses , establishing that it is as high as tens of atmospheres depending on genome length and ambient salt concentrations; [ 9 ] and the demonstration that capsid proteins from certain viruses are capable of complete in vitro packaging of a broad range of lengths of heterologous RNA . [ 10 ] This work, along with that of several other groups in the United States and Europe , helped launch the field of "physical virology". Most recently he moved his viruses from test tubes to host cells , and from wildtype viruses to artificial viruses and virus-like particles , engineered for purposes of delivering self-replicating RNA genes , RNA vaccines , and therapeutic microRNA to targeted mammalian cells.
[ 11 ] In 1987 Gelbart was elected a Fellow of the American Physical Society "for his many contributions to the light scattering and phase transition properties of simple fluids, liquid crystals, and surfactant solutions". [ 12 ] He received the 1991 Lennard-Jones Prize of the British Royal Society of Chemistry , a 1998 Guggenheim Fellowship , the 2001 Liquids Prize of the American Chemical Society , election to the American Academy of Arts and Sciences in 2009, and endowed lectureships over the past 25 years at the Curie Institute (Paris) , the University of Leeds (England), Case Western Reserve University , Cornell University , Carnegie Mellon University , the University of Pittsburgh , and the University of Texas at Austin . At UCLA he won the 1996 University Distinguished Teaching Award, served as Chair of the Department of Chemistry and Biochemistry (2000-2004), and was awarded the Glenn T. Seaborg Medal in 2017. In 2016, his 70th birthday was honored by an international symposium on "Self Assembly, from Atoms to Life" at the Meso-American Center for Theoretical Physics, and by a " festschrift " issue of the Journal of Physical Chemistry B .
https://en.wikipedia.org/wiki/William_Gelbart
William H. Peirce (died 1944) was an American civil engineer and metallurgist , who pioneered copper production in the early 20th century. Among his achievements was the Peirce-Smith converter , invented with Elias Anton Cappelen Smith . He joined the Baltimore Copper Smelting & Rolling Company in 1890, becoming vice president in 1895, and later, president of the company. Under his management, the company became one of the major copper producers of the United States . In 1928, the company merged with five other copper companies, to create the Revere Copper Company . Described as "one of the foremost metallurgists of his time", Peirce became the vice president, director and a member of the Executive Committee of Revere from its incorporation in 1928 until his resignation in 1933. [ 1 ] The Peirce–Smith converter , developed in 1908 with Elias Anton Cappelen Smith , significantly improved the converting of copper matte . Before this invention, the converter was a cylindrical barrel, lined with an acid refractory lining, made of sand and clay. It was developed by two French engineers, Pierre Manhès and Paul David , from 1880 to 1884. Their copper-converting process, named the Manhès–David process , was directly derived from the Bessemer process . In this horizontal chemical reactor, air was injected into copper matte, a molten sulfide material containing iron, sulphur and copper, converting it into molten blister, an alloy containing 99% copper. But the basic slag produced during the blowing combined with the acid silica refractory lining, giving the lining a very short lifetime. [ 2 ] By developing a basic refractory material adapted to the matte refining process (magnesia bricks), Peirce and his engineer Smith found a way to drastically increase the lifetime of the lining. It has been stated that, in some cases, the process allowed an increase from 10 to 2,500 tons of copper produced without relining the converters.
[ 3 ] A reduction of the cost of copper converting from 15–20 USD to 4–5 USD has also been reported. [ 4 ] The Peirce-Smith converter quickly replaced the Manhès–David converter: by March 1912, the Peirce-Smith Converting Co claimed that "over 80% of the copper produced in [the U.S.] is being converted either in P-S type converters or on basic lining, under license, in the old acid shells". [ 3 ] It is still in use today, although the process has been significantly improved since then. In 2010, with 250 converters working in the world, the Peirce-Smith converters refined 90% of the copper matte. [ 5 ] In 1931, Peirce, still president of the Baltimore Copper Smelting & Rolling Co. , as well as being President of the Peirce-Smith Converter Company and vice president of the American Smelting & Refining Company , was awarded the James Douglas Medal for "his numerous improvements in devices for smelting, refining, and rolling copper". [ 6 ]
https://en.wikipedia.org/wiki/William_H._Peirce
William Hampson (1854–1926) was the first person to patent a process for liquefying air. William Hampson was born on 14 March 1854, the second son of William Hampson of Puddington, Cheshire, England. [ 1 ] He was educated at Liverpool College , Manchester Grammar School and Trinity College, Oxford , where he matriculated in October 1874. [ 1 ] He studied classics, graduating with a second class degree. He then joined the Inner Temple in London to qualify as a barrister . [ 2 ] There is no record of Hampson's enrolment in a course of physics or engineering; he therefore seems to have educated himself in science and engineering. [ 3 ] In 1895, Hampson filed a preliminary patent for an apparatus to liquefy air. [ 4 ] His apparatus was simple: [ 5 ] A compressor raised the pressure of a quantity of air to 87–150 atmospheres . The high-pressure air was then passed through cylinders that contained material which removed water and carbon dioxide from the air. The dried air then passed through a copper coil and exited through a nozzle at the end of the coil, which reduced the air's pressure to one atmosphere. After expanding through the nozzle, the air's temperature would drop greatly (due to the Joule–Thomson effect ). The cold air then flowed back over the coil, chilling the air that was flowing through the coil. As a result, within 20–25 minutes, the apparatus would begin to produce liquefied air. The apparatus typically measured approximately one cubic metre. [ 6 ] [ 7 ] Hampson made a preliminary filing for a patent on his liquefaction process on 23 May 1895; Carl von Linde , a German engineer, filed for a similar patent on 5 June 1895. [ 8 ] [ 9 ] Hampson's method of liquefying gases was adopted by Brin's Oxygen Company of Westminster , London, England (renamed the "British Oxygen Company" in 1906). [ 10 ] In 1905, the company acquired Hampson's three patents on the liquefaction and separation of atmospheric gases.
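The regenerative (counter-current) cooling cycle Hampson patented can be sketched numerically. This is a deliberately crude model: the Joule–Thomson coefficient, heat-exchanger effectiveness and pressures below are illustrative assumptions, and real effects (the latent heat of liquefaction, the growth of the Joule–Thomson coefficient at low temperature, real-gas behaviour) are ignored.

```python
# Toy model of Hampson's regenerative liquefier: compressed air expands
# through a nozzle (Joule-Thomson cooling) and the cold exhaust pre-cools
# the incoming high-pressure stream.  All constants are illustrative
# assumptions, not Hampson's measured values.

MU_JT = 0.25          # K/atm, rough Joule-Thomson coefficient of air near room temperature
P_HIGH = 120.0        # atm, within the 87-150 atm range Hampson used
P_LOW = 1.0           # atm, pressure after the nozzle
EFFECTIVENESS = 0.9   # assumed fraction of the cooling recovered by the counter-current coil
T_INLET = 300.0       # K, temperature of the air arriving from the compressor

def nozzle_temperatures(n_passes):
    """Temperature of the expanded air after each pass through the coil."""
    temps = []
    t_before_nozzle = T_INLET
    for _ in range(n_passes):
        # Linearized Joule-Thomson drop across the nozzle.
        t_after_nozzle = t_before_nozzle - MU_JT * (P_HIGH - P_LOW)
        temps.append(t_after_nozzle)
        # The returning cold air pre-cools the next charge of compressed air.
        t_before_nozzle = T_INLET - EFFECTIVENESS * (T_INLET - t_after_nozzle)
    return temps

temps = nozzle_temperatures(40)
# Each pass is colder than the last: the temperature ratchets downward,
# which is why the apparatus needed some 20-25 minutes before liquid appeared.
```

A single expansion only cools the air by roughly 30 K in this model; it is the feedback through the coil that makes the temperature fall pass after pass, which is the essential idea of both Hampson's and Linde's machines.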
[ 11 ] Brin's Oxygen Company retained Hampson as a consultant, and through the company Hampson provided William Ramsay with the liquid air that allowed Ramsay to discover neon , krypton , and xenon , for which Ramsay received the 1904 Nobel Prize in Chemistry . [ 12 ] In 1900–1901, Hampson also conducted adult education courses; specifically, a series of lectures at University College London . From these lectures came two books: Radium Explained (1905) provided a lay audience with an account of recent developments in research on radioactivity, while Paradoxes of Nature and of Science (1906) presented scientific curiosities that were contrary to common experience; e.g., how ice could be used as a source of heat. [ 13 ] [ 14 ] Hampson also became interested in medical science. He became a licensed apothecary in 1896, and by 1910 he was practising in several London hospitals. In 1912, he published his research on a crude pacemaker . [ 15 ] The system electrically stimulated large muscles of the body to contract regularly; pulses of blood were thus forced towards the heart, and these pulses would then cause the heart to synchronise with the external electrical stimulator. Hampson also made a minor improvement to X-ray tubes . [ 16 ] Hampson ventured into economics as well, publishing a book on the subject: Modern Thraldom: A New Social Gospel (1907). [ 17 ] Hampson regarded credit—broadly interpreted as debt or borrowing in any form—as responsible for many of an economy's ills. He prescribed a world in which there would be no credit, interest, mortgages, or rents. All sales would be in cash; debts would not be legally recognised; factories would be run as cooperatives of their workers. The national government would be funded by a sales tax, and the national economy would be sheltered from foreign competition. He married Amy Bolton. [ citation needed ]
https://en.wikipedia.org/wiki/William_Hampson
William Joseph Pietro (born 1956) is an American-Canadian research scientist working in quantum chemistry , molecular electronics , and molecular machines . Pietro was born in Jersey City, New Jersey . His education includes a B.S. in chemistry from the Brooklyn Polytechnic Institute of New York, a Ph.D. in chemistry from the University of California, Irvine , and a postdoctoral fellowship at Northwestern University . Pietro was one of the founding authors of both the Gaussian and Spartan electronic structure software packages. [ 1 ] [ 2 ] Pietro and co-workers Robert Hout and Warren Hehre invented the first algorithm for the high-resolution visualization of molecular orbitals . [ 3 ] [ 4 ] [ 5 ] Working in collaboration with John Pople and Warren Hehre, Pietro developed the first split-valence basis sets for transition metals and higher-row main-group elements. [ 6 ] [ 7 ] [ 8 ] Between 1985 and 1991, Pietro was a professor of chemistry at the University of Wisconsin–Madison , where his research group pioneered the first working molecular diode . [ 9 ] [ 10 ] Pietro is a professor of chemistry at York University , researching theoretical aspects of electron transfer reactions in transition metal complexes [ 11 ] [ 12 ] [ 13 ] [ 14 ] and the quantum dynamics of molecular [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] and biomolecular machines. [ 21 ] [ 22 ]
https://en.wikipedia.org/wiki/William_J._Pietro
William Johnson McDonald (December 21, 1844 – February 8, 1926, though some sources give his date of death as February 6) was a banker from Paris, Texas , who left $850,000 (the bulk of his fortune) to the University of Texas System to endow an astronomical observatory. [ 2 ] The bequest was unexpected, and his will was contested, but after prolonged legal disputes the university received the money. [ 3 ] At the time, the university had no astronomy faculty, so in 1932 it formed a collaboration with the University of Chicago , which supplied astronomers under Otto Struve . The McDonald Observatory is named after him, and Otto Struve became its first director. McDonald was the eldest of the three sons of Sarah Johnson and Henry Graham McDonald of Paris, Texas . As a young man, he was a private in the Confederate Army. [ 1 ] He became wealthy through his businesses as a lawyer and a banker, but remained frugal his entire life. He never married and had no children. [ 3 ]
https://en.wikipedia.org/wiki/William_Johnson_McDonald
William Kingdon Clifford (4 May 1845 – 3 March 1879) was a British mathematician and philosopher . Building on the work of Hermann Grassmann , he introduced what is now termed geometric algebra , a special case of the Clifford algebra named in his honour. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modelled to new positions. Clifford algebras in general and geometric algebra in particular have been of ever increasing importance to mathematical physics , [ 1 ] geometry , [ 2 ] and computing . [ 3 ] Clifford was the first to suggest that gravitation might be a manifestation of an underlying geometry. In his philosophical writings he coined the expression mind-stuff . Born in Exeter , William Clifford was educated at Doctor Templeton's Academy on Bedford Circus and showed great promise at school. [ 4 ] He went on to King's College London (at age 15) and Trinity College, Cambridge , where he was elected fellow in 1868, after being Second Wrangler in 1867 and second Smith's prizeman. [ 5 ] [ 6 ] In 1870, he was part of an expedition to Italy to observe the solar eclipse of 22 December 1870. During that voyage he survived a shipwreck along the Sicilian coast. [ 7 ] In 1871, he was appointed professor of mathematics and mechanics at University College London , and in 1874 became a fellow of the Royal Society . [ 5 ] He was also a member of the London Mathematical Society and the Metaphysical Society . Clifford married Lucy Lane on 7 April 1875, with whom he had two children. [ 8 ] Clifford enjoyed entertaining children and wrote a collection of fairy stories, The Little People . [ 9 ] In 1876, Clifford suffered a breakdown, probably brought on by overwork. He taught and administered by day, and wrote by night. A half-year holiday in Algeria and Spain allowed him to resume his duties for 18 months, after which he collapsed again. 
He went to the island of Madeira to recover, but died there of tuberculosis after a few months, leaving a widow with two children. Clifford and his wife are buried in London's Highgate Cemetery , near the graves of George Eliot and Herbert Spencer , just north of the grave of Karl Marx . The academic journal Advances in Applied Clifford Algebras publishes on Clifford's legacy in kinematics and abstract algebra . "Clifford was above all and before all a geometer." The discovery of non-Euclidean geometry opened new possibilities in geometry in Clifford's era. The field of intrinsic differential geometry was born, with the concept of curvature broadly applied to space itself as well as to curved lines and surfaces. Clifford was very much impressed by Bernhard Riemann ’s 1854 essay "On the hypotheses which lie at the bases of geometry". [ 10 ] In 1870, he reported to the Cambridge Philosophical Society on the curved space concepts of Riemann, and included speculation on the bending of space by gravity. Clifford's translation [ 11 ] [ 12 ] of Riemann's paper was published in Nature in 1873. His report at Cambridge, " On the Space-Theory of Matter ", was published in 1876, anticipating Albert Einstein 's general relativity by 40 years. Clifford elaborated elliptic space geometry as a non-Euclidean metric space . Equidistant curves in elliptic space are now said to be Clifford parallels . Clifford's contemporaries considered him acute and original, witty and warm. He often worked late into the night, which may have hastened his death. He published papers on a range of topics including algebraic forms and projective geometry and the textbook Elements of Dynamic . His application of graph theory to invariant theory was followed up by William Spottiswoode and Alfred Kempe . [ 13 ] In 1878, Clifford published a seminal work, building on Grassmann's extensive algebra. 
[ 14 ] He had succeeded in unifying the quaternions , developed by William Rowan Hamilton , with Grassmann's outer product (also known as the exterior product ). He understood the geometric nature of Grassmann's creation, and that the quaternions fit cleanly into the algebra Grassmann had developed. The versors in quaternions facilitate representation of rotation. Clifford laid the foundation for a geometric product, composed of the sum of the inner product and Grassmann's outer product. The geometric product was eventually formalized by the Hungarian mathematician Marcel Riesz . The inner product equips geometric algebra with a metric, fully incorporating distance and angle relationships for lines, planes, and volumes, while the outer product gives those planes and volumes vector-like properties, including a directional bias. Combining the two brought the operation of division into play. This greatly expanded our qualitative understanding of how objects interact in space. Crucially, it also provided the means for quantitatively calculating the spatial consequences of those interactions. The resulting geometric algebra, as he called it, eventually realized the long-sought goal [ i ] of creating an algebra that mirrors the movements and projections of objects in 3-dimensional space. [ 15 ] Moreover, Clifford's algebraic schema extends to higher dimensions: the algebraic operations have the same symbolic form as they do in two or three dimensions. The importance of general Clifford algebras has grown over time, and their isomorphism classes, as real algebras, have been identified in other mathematical systems beyond simply the quaternions. [ 16 ] The realms of real analysis and complex analysis have been expanded through the algebra H of quaternions, thanks to its notion of a three-dimensional sphere embedded in a four-dimensional space. Quaternion versors , which inhabit this 3-sphere, provide a representation of the rotation group SO(3) .
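In modern notation, the product Clifford introduced decomposes, for vectors a and b, into a symmetric (inner) part and an antisymmetric (outer) part:

```latex
ab \;=\; a \cdot b \;+\; a \wedge b,
\qquad
a \cdot b = \tfrac{1}{2}(ab + ba),
\qquad
a \wedge b = \tfrac{1}{2}(ab - ba)
```

For an orthonormal basis this gives \( e_i^2 = 1 \) and \( e_i e_j = -e_j e_i \) for \( i \neq j \); in three dimensions the even products \( e_1 e_2,\ e_2 e_3,\ e_3 e_1 \) each square to \(-1\) and multiply like Hamilton's quaternion units, which is precisely the unification of quaternions with Grassmann's algebra described above.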
Clifford noted that Hamilton's biquaternions were a tensor product H ⊗ C of known algebras, and proposed instead two other tensor products of H : he argued that the "scalars" taken from the complex numbers C might instead be taken from the split-complex numbers D (whose imaginary unit squares to +1) or from the dual numbers N (whose unit squares to 0). In terms of tensor products, H ⊗ D produces split-biquaternions , while H ⊗ N forms dual quaternions . The algebra of dual quaternions is used to express screw displacement , a common mapping in kinematics. As a philosopher, Clifford's name is chiefly associated with two phrases of his coining, mind-stuff and the tribal self . The former symbolizes his metaphysical conception, suggested to him by his reading of Baruch Spinoza , [ 5 ] which Clifford (1878) defined as follows: [ 18 ] That element of which, as we have seen, even the simplest feeling is a complex, I shall call Mind-stuff. A moving molecule of inorganic matter does not possess mind or consciousness; but it possesses a small piece of mind-stuff. When molecules are so combined together as to form the film on the under side of a jelly-fish, the elements of mind-stuff which go along with them are so combined as to form the faint beginnings of Sentience. When the molecules are so combined as to form the brain and nervous system of a vertebrate, the corresponding elements of mind-stuff are so combined as to form some kind of consciousness; that is to say, changes in the complex which take place at the same time get so linked together that the repetition of one implies the repetition of the other. When matter takes the complex form of a living human brain, the corresponding mind-stuff takes the form of a human consciousness, having intelligence and volition.
Regarding Clifford's concept, Sir Frederick Pollock wrote: Briefly put, the conception is that mind is the one ultimate reality; not mind as we know it in the complex forms of conscious feeling and thought, but the simpler elements out of which thought and feeling are built up. The hypothetical ultimate element of mind, or atom of mind-stuff, precisely corresponds to the hypothetical atom of matter, being the ultimate fact of which the material atom is the phenomenon. Matter and the sensible universe are the relations between particular organisms, that is, mind organized into consciousness , and the rest of the world. This leads to results which would in a loose and popular sense be called materialist . But the theory must, as a metaphysical theory, be reckoned on the idealist side. To speak technically, it is an idealist monism . [ 5 ] Tribal self , on the other hand, gives the key to Clifford's ethical view, which explains conscience and the moral law by the development in each individual of a 'self,' which prescribes the conduct conducive to the welfare of the 'tribe.' Much of Clifford's contemporary prominence was due to his attitude toward religion . Animated by an intense love of his conception of truth and devotion to public duty, he waged war on such ecclesiastical systems as seemed to him to favour obscurantism , and to put the claims of sect above those of human society. The alarm was greater, as theology was still unreconciled with Darwinism ; and Clifford was regarded as a dangerous champion of the anti-spiritual tendencies then imputed to modern science. [ 5 ] There has also been debate on the extent to which Clifford's doctrine of ' concomitance ' or ' psychophysical parallelism ' influenced John Hughlings Jackson 's model of the nervous system and, through him, the work of Janet, Freud, Ribot, and Ey. [ 19 ] In his 1877 essay, The Ethics of Belief , Clifford argues that it is immoral to believe things for which one lacks evidence. 
[ 20 ] He describes a ship-owner who planned to send to sea an old and not well-built ship full of passengers. The ship-owner had doubts suggested to him that the ship might not be seaworthy: "These doubts preyed upon his mind, and made him unhappy." He considered having the ship refitted even though it would be expensive. At last, "he succeeded in overcoming these melancholy reflections." He watched the ship depart, "with a light heart…and he got his insurance money when she went down in mid-ocean and told no tales." [ 20 ] Clifford argues that the ship-owner was guilty of the deaths of the passengers even though he sincerely believed the ship was sound: " [H]e had no right to believe on such evidence as was before him ." [ ii ] Moreover, he contends that even if the ship had safely reached its destination, the decision would remain immoral: the morality of a choice is fixed at the moment it is made, and the actual outcome, a matter of blind chance, is irrelevant. The ship-owner would be no less guilty: his wrongdoing would never have been discovered, but he still had no right to make that decision given the information available to him at the time. Clifford famously concludes with what has come to be known as Clifford's principle : "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." [ 20 ] As such, he argues in direct opposition to religious thinkers for whom 'blind faith' (i.e., belief in things in spite of a lack of evidence) was a virtue. The two works are often read and published together as touchstones for the debate over evidentialism , faith , and overbelief , ever since this paper was famously attacked by the pragmatist philosopher William James in his " Will to Believe " lecture.
Though Clifford never constructed a full theory of spacetime and relativity , he made some remarkable observations in print that foreshadowed these modern concepts. In his book Elements of Dynamic (1878), he introduced "quasi-harmonic motion in a hyperbola". He wrote an expression for a parametrized unit hyperbola , which other authors later used as a model for relativistic velocity. A passage elsewhere in the book [ 21 ] makes reference to biquaternions , though Clifford made these into split-biquaternions in his independent development. The book continues with a chapter "On the bending of space", the substance of general relativity . Clifford also discussed his views in On the Space-Theory of Matter in 1876. In 1910, William Barrett Frankland quoted the Space-Theory of Matter in his book on parallelism: "The boldness of this speculation is surely unexcelled in the history of thought. Up to the present, however, it presents the appearance of an Icarian flight." [ 22 ] Years later, after general relativity had been advanced by Albert Einstein , various authors noted that Clifford had anticipated Einstein. Hermann Weyl (1923), for instance, mentioned Clifford as one of those who, like Bernhard Riemann , anticipated the geometric ideas of relativity. [ 23 ] In 1940, Eric Temple Bell published The Development of Mathematics , in which he discusses the prescience of Clifford on relativity. [ 24 ] John Archibald Wheeler , during the 1960 International Congress for Logic, Methodology, and Philosophy of Science (CLMPS) at Stanford , introduced his geometrodynamics formulation of general relativity by crediting Clifford as the initiator. [ 25 ] In The Natural Philosophy of Time (1961), Gerald James Whitrow recalls Clifford's prescience, quoting him in order to describe the Friedmann–Lemaître–Robertson–Walker metric in cosmology.
[ 26 ] Cornelius Lanczos (1970) summarized Clifford's premonitions, [ 27 ] as did Banesh Hoffmann (1973). [ 28 ] In 1990, Ruth Farwell and Christopher Knee examined the record on acknowledgement of Clifford's foresight. [ 29 ] They conclude that "it was Clifford, not Riemann, who anticipated some of the conceptual ideas of General Relativity." To explain the lack of recognition of Clifford's prescience, they point out that he was an expert in metric geometry, and "metric geometry was too challenging to orthodox epistemology to be pursued." [ 29 ] In 1992, Farwell and Knee continued their study of Clifford and Riemann: [ 30 ] [they] hold that once tensors had been used in the theory of general relativity, the framework existed in which a geometrical perspective in physics could be developed and allowed the challenging geometrical conceptions of Riemann and Clifford to be rediscovered. "I…hold that in the physical world nothing else takes place but this variation [of the curvature of space]." "There is no scientific discoverer, no poet, no painter, no musician, who will not tell you that he found ready made his discovery or poem or picture—that it came to him from outside, and that he did not consciously create it from within." "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." "If a man, holding a belief which he was taught in childhood or persuaded of afterwards, keeps down and pushes away any doubts which arise about it in his mind, purposely avoids the reading of books and the company of men that call in question or discuss it, and regards as impious those questions which cannot easily be asked without disturbing it—the life of that man is one long sin against mankind." "I was not, and was conceived. I loved and did a little work. I am not and grieve not."
https://en.wikipedia.org/wiki/William_Kingdon_Clifford
William A. Klemperer (October 6, 1927 – November 5, 2017) was an American chemist , chemical physicist and molecular spectroscopist . Klemperer is most widely known for introducing molecular beam methods into chemical physics research; for greatly increasing the understanding of nonbonding interactions between atoms and molecules through the development of microwave spectroscopy of van der Waals molecules formed in supersonic expansions; and for pioneering astrochemistry , including developing the first gas-phase chemical models of cold molecular clouds, which predicted an abundance of the molecular ion HCO + that was later confirmed by radio astronomy . [ 1 ] Bill Klemperer was born in New York City in 1927 as the child of two physicians. He and his younger brother were raised in New York and New Rochelle. [ 2 ] He graduated from New Rochelle High School in 1944 and then enlisted in the U.S. Navy Air Corps , where he trained as a tail gunner . [ 1 ] [ 2 ] He obtained an A.B. from Harvard University in 1950, majoring in chemistry, and obtained a Ph.D. in physical chemistry under the direction of George C. Pimentel at the University of California, Berkeley , in early 1954. [ 1 ] After one semester as an instructor at Berkeley, Bill returned to Harvard in July 1954. Though his initial appointment was as an instructor of analytical chemistry , a position which was considered unlikely to lead to a faculty position, [ 1 ] [ 2 ] he was appointed full professor in 1965. [ 1 ] He remained associated with Harvard chemistry throughout his long career. He spent 1968–69 on sabbatical at Cambridge University [ 2 ] and 1979–81 as Assistant Director for Mathematical and Physical Sciences at the U.S. National Science Foundation . He was a visiting scientist at Bell Laboratories and also served as an advisor to NASA . [ 1 ] Klemperer became an emeritus professor in 2002 but remained active in both research and teaching.
[ 3 ] Klemperer's early work concentrated on the infrared spectroscopy of small molecules that are only stable in the gas phase at high temperatures. Among these are the alkali halides, for many of which he obtained the first vibrational spectra. The work provided basic structural data for many oxides and fluorides, and gave insight into the details of the bonding. It also led Klemperer to recognize the potential of molecular beams in spectroscopy, and in particular the use of the electric resonance technique to address fundamental problems in structural chemistry. [ 4 ] Klemperer introduced the technique of supersonic cooling as a spectroscopic tool, [ 5 ] which has increased the intensity of molecular beams and also simplified the spectra. Klemperer helped to found the field of interstellar chemistry. In interstellar space, densities and temperatures are extremely low, and all chemical reactions must be exothermic, with no activation barriers. The chemistry is driven by ion-molecule reactions, and Klemperer's modeling [ 6 ] of those that occur in molecular clouds has led to a remarkably detailed understanding of their rich, highly non-equilibrium chemistry. Klemperer assigned HCO + as the carrier of the mysterious but universal "X-ogen" radio-astronomical line at 89.6 GHz, [ 7 ] which had been reported by D. Buhl and L.E. Snyder. [ 8 ] Klemperer arrived at this prediction by taking the data seriously. The radio telescope data showed an isolated transition with no hyperfine splitting; thus there were no nuclei in the carrier of the signal with a spin of one or greater, nor was it a free radical with a magnetic moment. HCN is an extremely stable molecule, and thus its isoelectronic analog, HCO + , whose structure and spectra could be well predicted by analogy, would also be stable, linear, and have a strong but sparse spectrum. Further, the chemical models he was developing predicted that HCO + would be one of the most abundant molecular species.
Laboratory spectra of HCO + (taken later by Claude Woods et al. , [ 9 ] ) proved him right and thereby demonstrated that Herbst and Klemperer's models provided a predictive framework for our understanding of interstellar chemistry. The greatest impact of Klemperer's work has been in the study of intermolecular forces , a field of fundamental importance for all of molecular- and nano-science. Before Klemperer introduced spectroscopy with supersonic beams, the spectra of weakly bound species were almost unknown, having been restricted to dimers of a few very light systems. Scattering measurements provided precise intermolecular potentials for atom–atom systems, but provided at best only limited information on the anisotropy of atom–molecule potentials. He foresaw that he could synthesize dimers of almost any pair of molecules he could dilute in his beam and study their minimum energy structure in exquisite detail by rotational spectroscopy. This was later extended to other spectral regions by Klemperer and many others, and has qualitatively changed the questions that could be asked. Nowadays it is routine for microwave and infrared spectroscopists to follow his "two step synthesis" [ 10 ] to obtain the spectrum of a weakly bound complex: "Buy the components and expand." Klemperer quite literally changed the study of the intermolecular forces between molecules from a qualitative to a quantitative science. The dimer of hydrogen fluoride was the first hydrogen bonded complex to be studied by these new techniques, [ 11 ] and it was a puzzle. Instead of the simple rigid-rotor spectrum, which would have produced a 1 to 0 transition at 12 GHz, the lowest frequency transition was observed at 19 GHz. Arguing by analogy to the well known tunneling-inversion spectrum of ammonia, Klemperer recognized that the key to understanding the spectrum was to recognize that HF–HF was undergoing quantum tunnelling to FH–FH, interchanging the roles of proton donor and acceptor. 
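The effect of this donor–acceptor interchange on the spectrum can be sketched with a minimal rigid-rotor-plus-tunneling model. The constants below (the rotational constant B and the splitting DELTA) are illustrative placeholders chosen to match the 12 GHz / 19 GHz figures quoted here, not the measured (HF)2 values.

```python
# Minimal model of a dimer whose rotational levels are each doubled by
# donor-acceptor interchange tunneling.  B and DELTA are illustrative
# placeholders, not the measured (HF)2 constants.

B = 6.0       # GHz, rotational constant: the rigid-rotor 1<-0 line would sit at 2B = 12 GHz
DELTA = 7.0   # GHz, assumed tunneling splitting of each rotational level

def levels(j_max):
    """Energies (GHz) of levels (J, s); s = -1/+1 labels the tunneling pair."""
    return {(j, s): B * j * (j + 1) + s * DELTA / 2
            for j in range(j_max + 1) for s in (-1, +1)}

def line_positions(j):
    """The J+1 <- J microwave lines; each transition also flips the tunneling label."""
    e = levels(j + 1)
    return sorted(abs(e[(j + 1, -s)] - e[(j, s)]) for s in (-1, +1))

# Instead of one rigid-rotor line at 2B = 12 GHz, the model gives a doublet:
# one component pushed up to 2B + DELTA = 19 GHz, the other down to 5 GHz.
print(line_positions(0))
```

Because every allowed line carries the tunneling energy on top of the rotational energy, measuring the displaced line positions determines the splitting, and hence probes the interconversion barrier.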
Each rotational level was split into two tunneling states, with an energy separation equal to the tunneling rate multiplied by the Planck constant . The observed microwave transitions all involved a simultaneous change in rotational and tunneling energy. The tunneling frequency is extremely sensitive to the height and shape of the inter-conversion barrier, and thus samples the potential in the classically forbidden regions. Resolved tunneling splittings proved to be common in the spectra of weakly bound molecular dimers. Klemperer received many awards and honors.
https://en.wikipedia.org/wiki/William_Klemperer
William Klyne (March 23, 1913, in Enfield, Middlesex – November 13, 1977) was an organic chemist known for his work in steroids and stereochemistry — a field in which he was a "pioneer", [ 1 ] and in which Ernest Eliel and Norman Allinger described him as "one of the world's experts". [ 2 ] In 1946, he gained a PhD from the University of Edinburgh with a thesis entitled “The steroid sulphates: studies on the conjugated sulphates of mare’s pregnancy urine". [ 3 ] Klyne taught at Westfield College , University of London , where he served as dean of science from 1971 to 1973, and as vice-principal from 1973 to 1976. [ 4 ] He also served on the editorial board of the Biochemical Society from 1950 to 1955, [ 4 ] and on IUPAC 's nomenclature committee from 1971 until his death. [ 4 ] As well, he established [ 5 ] and maintained [ 4 ] the Medical Research Council 's Steroid Reference Collection, and wrote several textbooks , including The Chemistry of Steroids (1957) and Atlas of Stereochemical Correlations (1974). [ 6 ] Klyne met Barbara Clayton in 1947 while both were employed at the Medical Research Council; they married in 1949. [ 7 ]
https://en.wikipedia.org/wiki/William_Klyne
Major General William Luther Sibert (October 12, 1860 – October 16, 1935) was a senior United States Army officer who commanded the 1st Division on the Western Front during World War I . Sibert was born in Gadsden, Alabama , on October 12, 1860. [ 1 ] After attending the University of Alabama from 1879 to 1880, he entered the United States Military Academy and was appointed a second lieutenant of Engineers , United States Army , on June 15, 1884. [ 1 ] His appointment was a distinction, as only the top 10 percent of each West Point class was then commissioned into the Engineers. He graduated from the Engineer School of Application in 1887 [ 1 ] and went on to hold several engineer positions in the United States and overseas. In 1899, he was assigned as the chief engineer of the 8th Army Corps and the chief engineer and general manager of the Manila and Dagupan Railroad during the Philippine Insurrection . Later, he returned to the United States, where he was in charge of river and harbor districts and headquarters in Louisville and Pittsburgh . From 1907 through 1914, Sibert was a member of the Panama Canal Commission and was responsible for the building of a number of critical parts of the Panama Canal , including the Gatun Locks and Dam, the West Breakwater in Colon, and the channel from Gatun Lake to the Pacific Ocean. [ 2 ] On March 15, 1915, Sibert, by then a lieutenant colonel , was promoted to the rank of brigadier general . Such a jump in rank, while not an unknown practice in the Regular Army of the time, was still unusual. Congress wanted to make Sibert a brigadier general, but the Engineer Corps was authorized only one, so instead of expanding the Corps, it appointed Sibert to a line officer slot (i.e., Infantry).
The Army, not knowing what to do with an engineer who had never led troops or trained for combat but was suddenly elevated to a general of infantry, decided to assign Sibert, who had been working on canal projects in the Midwest and on advisory missions to China, to command the Pacific Coast's coastal artillery. Here, it was felt, he could do little harm. [ 3 ] Unfortunately for Sibert, when the United States entered World War I in April 1917, he was one of the few senior infantry officers on active duty. He was duly breveted to major general and deployed with the initial four regiments of the American Expeditionary Forces (AEF), which formed the 1st Division (nicknamed "The Big Red One") once in France. The AEF's Commander-in-Chief (C-in-C), General John J. Pershing , a long-serving cavalry officer famous for his exploits at San Juan Hill in the Spanish–American War and recently in charge of the campaign against Pancho Villa , was short on qualified general officers (he himself had only recently been promoted to his position), so Sibert was placed in charge of the 1st Division. To his credit, Sibert had opposed his own promotion as a line officer, protesting his lack of experience. In the peacetime Army prior to 1917, such a mismatch was relatively harmless; in the cauldron of the Western Front , it was a serious problem. The AEF suffered a serious leadership problem throughout the final year of the war, as officers were rapidly promoted to positions for which they had little or no experience. The American Army was singularly unprepared for the war, and the strain of its rapid expansion created many personnel problems like Sibert's. Part of the problem was the Army's promotion system, which continued to cause problems into World War II . The rank a Regular Army officer actually held and his official rank were not always the same; thus a "peacetime rank" and a "wartime rank" could differ.
An officer might start the war as a lieutenant colonel, end the war as a major general, and then revert to being a lieutenant colonel after the war. Incidentally, pay was not necessarily tied to rank, but depended on time in service and an officer's official rank. In the small Regular Army of 1917, most officers were below the rank of colonel, and few serving in general officer billets were actually recognized by Congress as holding the rank of general; rather, they were "breveted" to the higher rank. Actual promotion required Congressional approval, was limited by the number of positions authorized by law, and was based solely on seniority. Breveting allowed the Army to bypass these restrictions, for better or worse. Hence the difficulty of promoting Sibert to brigadier general in the Engineer Corps and the subsequent trouble it caused. [ 4 ] Sibert led the 1st Infantry Division during its initial training by French and British forces. [ 1 ] In October 1917, Pershing wrote an extensive letter to Secretary of War Newton D. Baker expressing his concerns about some of his generals: "I hope you will permit me to speak very frankly and quite confidentially, but I fear that we have some general officers who have neither the experience, the energy, nor the aggressive spirit to prepare their units or to handle them under battle conditions, as they exist today. I shall comment in an enclosure on the individuals to whom I refer particularly." [ 5 ] In January 1918, the first elements of the AEF, part of the 1st Infantry Division, prepared to deploy into the line at Ansauville. MG Sibert was relieved by Pershing before the Division's deployment to the front. Pershing was dissatisfied with the Division's progress and elevated Brigadier General Robert Lee Bullard, a true line officer, to replace Sibert.
[ 6 ] Sibert returned to the United States in January 1918, where he became the commanding general of the Army Corps of Engineers Southeastern Department, located at Charleston, South Carolina . Sibert was not alone in his relief, as Secretary Baker had approved Pershing's removal of a number of individuals. Pershing showed some measure of respect for Sibert, who was pushing 58 years old (a contributing factor to his relief), recognizing that the position Sibert was in was not entirely of his own making. Pershing was not nearly as kind to others he removed from command during the war. When the War Department created the Chemical Warfare Service (CWS) later that spring, Pershing was asked to name a general officer to head it. Pershing recommended Sibert to the War Department, demonstrating his understanding of Sibert's true ability as an engineer and project manager. Following his assignment to the CWS on June 28, 1918, Congress promoted Sibert to the rank of major general , making the earlier brevet promotion official. Sibert led the CWS from May 1918 to February 1920. During that period the CWS in the United States focused on production and equipment. As commander of the CWS he oversaw the production of America's first chemical warfare agent, Lewisite , and the development of the US Army's chemical defense equipment, including the first US protective (or "gas") masks, the M-1 and M-2. The CWS in Europe, part of the AEF, did not fall under Sibert's control; instead, it was led by Colonel Amos Fries , part of Pershing's command staff. When Sibert announced his retirement in 1919, Fries, still in Europe, was selected to replace him. Today the US Army considers Sibert the "father of the US Army Chemical Corps" because he was the first commander of the CWS. He was also the first commanding officer of the 1st Infantry Division, the oldest continually serving division in the United States Army.
Sibert retired from active duty in February 1920 and settled in Bowling Green, Kentucky . Following his retirement from the Army, Sibert led the modernization of the docks and waterways in Mobile, Alabama , and served on the presidential commission that led to the building of Hoover Dam. He was elected to the University of Alabama Engineering Hall of Fame in 1961. For his services during World War I he was awarded the Army Distinguished Service Medal , the citation for which reads: The President of the United States of America, authorized by Act of Congress, July 9, 1918, takes pleasure in presenting the Army Distinguished Service Medal to Major General William Luther Sibert, United States Army, for exceptionally meritorious and distinguished services to the Government of the United States, in a duty of great responsibility during World War I, in the organization and administration of the Chemical Warfare Service, contributory to the successful prosecution of the war. [ 7 ] Sibert married Mary Margaret Cummings in September 1887; they had five sons and one daughter. [ 1 ] After Mary's death in 1915, Sibert married Juliette Roberts in June 1917. She died 15 months later, and in 1922 Sibert married Evelyn Clyne Bairnsfather of Edinburgh, Scotland, who remained his wife until his death on October 16, 1935, in Bowling Green. Sibert is buried at Arlington National Cemetery. Two of his five sons, Edwin L. Sibert and Franklin C. Sibert , each retired as a major general in the Army.
https://en.wikipedia.org/wiki/William_L._Sibert
William Morgan Jackson (born September 24, 1936) is a Distinguished Research and Emeritus Professor of Chemistry at the University of California, Davis and a pioneer in the field of astrochemistry . His work considers cometary astrochemistry and the development of laser photochemistry to understand planetary atmospheres. He is a Fellow of the American Association for the Advancement of Science , the American Physical Society and the American Chemical Society . In addition to his research contributions, he is notable as a mentor and advocate for increasing minority participation in science, and was one of the founders of the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE). [ 1 ] In 2019, he was awarded the Astronomical Society of the Pacific's Arthur B.C. Walker II Award for his research and commitment to promoting diversity. [ 2 ] In 2021, he was awarded the American Physical Society's Julius Edgar Lilienfeld Prize for outstanding contributions to fundamental chemical physics and spectroscopy associated with asteroids and comets, and for exemplary teaching at both the undergraduate and graduate levels, as well as lifelong service and inspiration to a diverse community. [ 3 ] Jackson was born in Birmingham, Alabama, to William Morgan and Claudia H. Jackson on September 24, 1936. [ 4 ] [ 5 ] He grew up in a segregated society and spent part of his childhood on Dynamite Hill, an area in Birmingham that the Ku Klux Klan frequently bombed during the Civil rights movement . [ 6 ] [ 7 ] His father, a Tuskegee University graduate, owned the Apex Cab Company and also taught auto mechanics at Parker High School , whilst his mother, a graduate of Santa Barbara Junior College , worked for the US government . [ 8 ] [ 5 ] At the age of nine, Jackson contracted polio and had to spend a year out of school. [ 5 ] After completing tenth grade, Jackson joined Morehouse College as an early entrance student.
[ 9 ] He was awarded a full scholarship. [ 9 ] At first Jackson considered majoring in mathematics , but decided to study chemistry after meeting Henry Cecil McBay . [ 5 ] He graduated in 1956 and applied to several graduate schools, including Northwestern University and Purdue University . He received a response from Northwestern , which said that it had already filled its three fellowships for African American students. [ 5 ] Eventually he moved to Washington, D.C. , where he got a job and lived with his cousin. [ 5 ] He studied at the Catholic University of America , where he was awarded a postgraduate research fellowship. [ 10 ] During his doctoral studies, he worked in the Harry Diamond Laboratories , where he studied molten salt compounds. [ 5 ] During the final year of his PhD, Jackson's wife became pregnant, and Jackson took time out of graduate school to earn money. [ 5 ] During this time he worked at the National Institute of Standards and Technology . [ 5 ] He returned to the Catholic University of America, where he studied gasoline additives. [ 5 ] After earning his PhD in 1961, he joined Lockheed Martin , where he worked on formaldehyde resins and ways to protect missiles as they re-enter the Earth's atmosphere. [ 5 ] [ 9 ] [ 11 ] He returned to the National Institute of Standards and Technology as a postdoctoral researcher, studying how radiant energy affected chemical structures. [ 5 ] He investigated how radiation affected the coating applied to space vehicles. [ 5 ] In 1964, Jackson joined the Goddard Space Flight Center . [ 9 ] It was here that he became interested in the origins of free radicals in comets. [ 9 ] While at the Goddard Space Flight Center he proposed using the International Ultraviolet Explorer satellite to look for comets. [ 9 ] Using the Haystack Observatory , Jackson made measurements of water emission in comets.
[ 12 ] He joined the faculty at the University of Pittsburgh in 1969, where he spent a year researching and teaching. [ 5 ] [ 13 ] At the University of Pittsburgh , he worked with Wade Fite and Ted Brackman on the detection of electron impact on molecules using mass spectrometry . [ 12 ] A year later he returned to Goddard , where he developed a system to detect free radicals using laser beams. [ 5 ] In 1974, one of Jackson's colleagues, a professor of chemistry at Howard University , died suddenly. [ 5 ] Jackson agreed to teach his course for the rest of the term and was subsequently appointed to a joint position in chemistry and physics. [ 5 ] Here he began working on laser-induced fluorescence (LIF) to study the rovibronic coupling in cyano radicals. [ 12 ] He was the first person to demonstrate that LIF could be used to study molecular photodissociation. [ 14 ] He primarily studied comets using satellites and ground-based telescopes, combining experimental data and theoretical predictions to establish how the free radicals inside comets form. Despite having left the Goddard Space Flight Center , Jackson served as team leader for the International Ultraviolet Explorer telescope, which observed Halley's Comet . [ 9 ] He joined the University of California, Davis in 1985 and was promoted to Distinguished Professor in 1998. [ 6 ] The Jackson laboratory ("Jackson's Photon Crusaders") developed tunable lasers that could be used to detect and characterize free radicals . [ 12 ] These included excimer lasers , nitrogen-pumped lasers and an Alexandrite laser. [ 12 ] By building laser systems in the laboratory, Jackson helped to establish the excited states of molecules that are present in planetary atmospheres. The experiments used one laser for the photodissociation of the parent molecule and another laser to excite the resulting free radical into an excited state. When the excited molecule fluoresced back to the ground state, the fluorescence was captured with a photomultiplier tube .
[ 12 ] He has investigated the photochemistry of carbon monoxide , nitrogen and carbon dioxide . [ 15 ] His laser systems exploit resonant four-wave mixing , which allows them to photodissociate gases observed in planetary atmospheres. [ 15 ] He also showed that it is possible to ionise the resulting atomic fragments using a velocity imaging time-of-flight mass spectrometer . [ 15 ] In 1996 The Planetary Society named asteroid 1081 EE37 as (4322) Billjackson in his honour. [ 16 ] He served as Chair of the Department of Chemistry at the University of California, Davis in 2000. [ 16 ] He retired in 2006, but has continued to research and recruit students. [ 9 ] In 2013, he was made the Emile A. Dickenson Professor at the University of California, Davis . [ 11 ] In 2019 the Journal of Physical Chemistry dedicated a special issue to Jackson. [ 17 ] Jackson has campaigned for equity, diversity and inclusion in science since he started his career. [ 6 ] He was one of the founders of the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE). [ 6 ] [ 16 ] The organization promotes and gives awards to minority scientists and engineers, and encourages high school students to consider studying science or engineering. It was supported by Ted Kennedy and the National Science Foundation . [ 9 ] Jackson served as NOBCChE's first treasurer from 1973. [ 18 ] He stated that he was inspired to start the NOBCChE after attending a meeting of the American Chemical Society and seeing no African Americans there. [ 18 ] He has served in various capacities for the NOBCChE, attending every annual meeting except one ( San Diego , 1999), which he skipped in protest of the 1996 California Proposition 209 . [ 18 ] He provided evidence to Congress in an effort to increase research funding to historically black colleges and universities . [ 6 ] When he arrived at U.C.
Davis, only two students from underrepresented minorities had ever earned chemistry PhDs there. While at UC Davis, he secured funding from the Alfred P. Sloan Foundation and increased the department's minority student population to about 15% of the academic cohort. [ 9 ] Jackson was known for bringing students and researchers to his laboratory "who were the stones the builders rejected, and he made them the cornerstones for future scientific research". [ 12 ]
https://en.wikipedia.org/wiki/William_M._Jackson_(chemist)
William Merriam Burton (November 17, 1865 – December 29, 1954) was an American chemist who developed a widely used thermal cracking process for crude oil . [ 1 ] Burton was born in Cleveland , Ohio . In 1886, he received a Bachelor of Science degree at Western Reserve University . He earned a PhD at Johns Hopkins University in 1889. Burton initially worked for the Standard Oil refinery at Whiting, Indiana . He served as president of Standard Oil from 1918 until his retirement in 1927. The process of thermal cracking invented by Burton, which became U.S. patent 1,049,667 on January 7, 1913, doubled the yield of gasoline that could be extracted from crude oil. The first thermal cracking method , the Shukhov cracking process , had been invented by the Russian engineer Vladimir Shukhov (1853–1939) in the Russian Empire (Patent No. 12926, November 27, 1891). Burton died in Miami , Florida .
https://en.wikipedia.org/wiki/William_Merriam_Burton
William E. Moffitt (9 November 1925 – 19 December 1958 [ 1 ] ) was a British quantum chemist . He died after a heart attack following a squash match. [ 2 ] He had been regarded as one of Britain's most gifted academics. [ 3 ] Moffitt was born in Berlin , Germany, to British parents; his father was working in Berlin on behalf of the British government. [ 3 ] He was educated by private tuition up to the age of 11. [ 3 ] He attended Harrow School from 1936 to 1943. His chemistry master later said of him that "he was undoubtably the most able of a decade of gifted boys ... [and] has a profound effect on all who met him. He did more than anyone to create in the school the intellectual climate so necessary for the stimulation of young minds". [ 2 ] He then studied chemistry at New College, Oxford , under an open scholarship, and graduated with first class honours. His D.Phil. supervisor, Charles Coulson , later wrote: [his] exuberant delight in life remained with him to the end. "Moffit's method of Atoms in Molecules" will remain for many years to remind us of his remarkable ability to initiate new ways of thinking in his professional subject. After receiving his D.Phil. for research in quantum chemistry , he joined the research staff of the British Rubber Producers Research Association . [ 2 ] He was made an assistant professor at Harvard in January 1953, [ 3 ] and was given an A.M. honoris causa in 1955. [ 2 ] His colleague Edgar Bright Wilson said: Few men had as great an impact at so early an age. The reasons are clear. Few have been endowed with such a sparkling, quick and keen intelligence, with such a capacity for spending long hours in the thorough study of fundamental subjects ... His intellectual powers were not only applied to the solution of problems but perhaps even more to their wise selection. He avoided areas where only formal solutions were attainable, with no contact with experience. [ 3 ] Doctoral students who were advised by Moffitt include R.
Stephen Berry and S. M. Blinder . [ 4 ] He married Dorothy Silberman in 1956, and they had a daughter, Alison, in June 1958. [ 3 ] He was a keen rugby player and enjoyed music and the arts, [ 2 ] particularly English literature. [ 3 ] While sharing a cabin with a monk on a voyage to the UK from the US, he discussed the philosophy of religion with him in their only common language, Latin. [ 2 ]
https://en.wikipedia.org/wiki/William_Moffitt
William N. Porter (March 15, 1886 – February 5, 1973) was a United States Army officer who led the Army's Chemical Warfare Service during World War II. [ 1 ] Porter was born in Lima, Ohio on March 15, 1886. [ 2 ] [ 3 ] He attended the United States Naval Academy in Annapolis, Maryland, graduating in 1909. [ 1 ] After less than a year of active naval service, he resigned as a midshipman on February 7, 1910. [ 4 ] (Until 1912, midshipmen who graduated from the academy had to serve two years in the fleet before being commissioned as ensigns.) [ 5 ] He then joined the Army as a second lieutenant in the U.S. Army Coast Artillery Corps . [ 1 ] Porter was transferred to the Army's Chemical Warfare Service in 1921 as a major. [ 1 ] [ 6 ] Porter graduated with distinction in 1927 from the Command and General Staff School and then attended both the Army Industrial College and the Army War College . [ 6 ] As a lieutenant colonel from 1934 to 1937, Porter was assigned to the faculty at the Air Corps Tactical School located at Maxwell Field . [ 7 ] Porter served as Director of the Chemical Warfare School, located at the time in Washington, DC, and, as a colonel, as Chemical Officer, 9th Corps Area. [ 8 ] After the outbreak of World War II, Porter was appointed head of the Army Chemical Warfare Service and promoted to major general. [ 2 ] He led the service from May 1941 until 1945, despite having had no experience as a chemical officer in the First World War. [ 6 ] [ 9 ] Until March 1942, he reported directly to the Army Chief of Staff, but from then on to the Services of Supply, later renamed the Army Service Forces . [ 6 ] The Chemical Warfare Service was responsible for both offensive and defensive chemical weapons usage, including smoke.
Under his command, the service fielded flamethrowers, developed and manufactured incendiary bombs and devices, worked to improve the effectiveness of DDT as an insecticide, and developed treatments for the expected effects of chemical weapons such as respiratory disease, burns, and poisoning. [ 1 ] [ 9 ] In December 1943, after the Battle of Tarawa , Porter and the Chemical Warfare Service urged the use of chemical warfare in the Pacific Theater to reduce U.S. casualties against fierce Japanese resistance. However, President Franklin Roosevelt and American public opinion opposed the use of poison gas and were not persuaded. [ 10 ] Porter was awarded the Army Distinguished Service Medal for his wartime efforts. [ 11 ] Porter retired from active duty on November 13, 1945. [ 6 ] Porter authored articles for professional journals. [ 12 ] Following his retirement from the Army, Porter was president of the Chemical Construction Corporation. [ 1 ] The company's offices were located in New York City. [ 2 ] In 1953, he was elected a director of the Cambridge, MA electronics manufacturer Ultrasonic Corp. [ 13 ] Porter also served as president of the New York Chapter of the Armed Forces Chemical Association. [ 14 ] Porter died of a heart attack at his home in Key West, Florida in 1973 at the age of 86. [ 1 ] He was buried at Arlington National Cemetery . [ 2 ] Porter was married to Mary Porter. At the time of his death, he was survived by a son and a daughter. [ 1 ]
https://en.wikipedia.org/wiki/William_N._Porter
William Nicholas Guy Hitchon (22 October 1957 – 23 July 2023), commonly known as Nick Hitchon , was a nuclear fusion scientist and professor at the University of Wisconsin . [ 1 ] [ 2 ] Hitchon was born in Skipton , West Riding of Yorkshire (now North Yorkshire ), the eldest of three sons of Iona (née Hall) and Guy Hitchon. [ 3 ] [ 4 ] He was educated at Ermysted's Grammar School from 1968 to 1975. [ 3 ] He went on to earn a bachelor's and a master's degree in physics from Oxford University and a D.Phil. in engineering science from the same university. [ 3 ] [ 5 ] Hitchon was married twice, first to Jacqui Bush in 1979, a marriage that ended in divorce, and later to Cryss Brunner in 2001. [ 3 ] In 1964, Hitchon was featured as a child in the Seven Up! documentary for ITV's World in Action series. [ 2 ] [ 3 ] His life was periodically revisited in subsequent episodes by director Michael Apted until 2019. [ 3 ] [ 6 ] In 1982, Hitchon joined the University of Wisconsin , Madison, in the department of electrical and computer engineering . [ 3 ] [ 5 ] He became a professor in 1994 and served as the department chair from 1999 to 2002. [ 3 ] [ 5 ] He retired in 2022. [ 7 ] During his tenure, he authored three books. [ 3 ]
https://en.wikipedia.org/wiki/William_Nicholas_Hitchon
William Otto Frohring (July 1, 1893 – September 13, 1959) was an American biochemical researcher, inventor and business executive. [ 2 ] He was a co-developer of "simulated milk adapted" (SMA), the first infant formula to be distributed in the United States and one of the most widely consumed infant formulas in the world. [ 3 ] Frohring held 15 patents and led research in dairy products and in the refinement, synthesis and manufacture of vitamin products. [ 4 ] [ 3 ] William Frohring was born in Cleveland, Ohio , the son of William Erhardt Frohring, a railroad engineer, and Martha Louise Bliss. [ 5 ] He graduated from East Technical High School in Cleveland. After graduation, he worked as a motorcycle mechanic at the Luna Park, Cleveland Motordrome. In 1911, he received a two-year scholarship to Ohio State Agricultural College, where he majored in bacteriology and dairy technology. He graduated in 1915. [ 3 ] Frohring took a job on the loading dock at Telling-Belle Vernon Dairy, the largest dairy in Ohio. [ 6 ] Eight months later, he was offered an opportunity to run fat tests in the Telling-Belle Vernon labs for Henry J. Gerstenberger, MD, the medical director of Babies Dispensary and Children's Hospital (later, Rainbow Babies & Children's Hospital ), a pediatric hospital in Cleveland. [ 7 ] Gerstenberger, with the assistance of Harold O. Ruh, MD, was investigating the possibility of developing a dairy formula to supplement or replace maternal milk for infants. [ 3 ] William Frohring was brought onto the team with Gerstenberger and Ruh, and eventually became chief chemist on the project. The group's formula was based on diluted skimmed milk , with lactose and potassium chloride added to reach the levels found in human milk. Among their novel contributions was the use of mixtures of fats and oils rather than cream to duplicate human milk fat. [ 3 ] [ 8 ] Experimental batches of SMA distributed to local pediatricians were well received.
[ 9 ] The group began getting orders for more. Frohring stepped forward as the business leader of the group. His idea was to give the patent on the formula to Babies Dispensary and Children's Hospital and license its manufacture to Telling-Belle Vernon Dairy. [ 3 ] [ 8 ] [ 7 ] By 1919, Frohring was director of the laboratory at Telling-Belle Vernon Dairy and had invented several pieces of laboratory and dairy processing equipment. In 1921, he was made a director and placed in charge of Laboratory Products Company, the company's new subsidiary to manufacture SMA, located in Mason, Michigan . [ 3 ] Laboratory Products Company diversified into research on other newly identified biochemicals. Frohring recruited Albert Fredrick Ottomar Germann to study carotene, and the company went on to become the world's major supplier of it. As an employee of the dairy, Frohring accumulated patents on a process for the production of soluble casein, an improved process for lactose production, a vitamin C concentrate from orange and tomato juice for addition to SMA, and a formulation called "Frohs Malted Chocolate Milk". The company was renamed the SMA Corporation, and added carotene concentrate, refined from palm oil, to its product line. Frohring set up a company to process the palm oil and made his younger brother Paul, formerly sales manager for the SMA Corporation, its president and general manager. In 1939, SMA, its subsidiary, and all rights to the infant formula were purchased by American Home Products Corporation (AHPC). Frohring stayed on as president of the SMA Corporation, now owned by AHPC, and later became a director of AHPC. [ 3 ] In the early 1950s, Frohring received patents for Frohring Cement Mixers, a line of compact, portable mixers that could be moved out to a field and operated by hand, electric motor, gasoline motor or tractor motor.
[ 10 ] [ 11 ] In 1953, Frohring patented a neurological research device known as a biothesiometer, used to determine a patient's sensitivity to vibration. [ 12 ] Other patents include a formula for hypoallergenic milk, a process of making liquid malted milk, a method for determining vitamin A deficiency, and a method for extracting carotene. [ 13 ] [ 14 ] [ 15 ] [ 16 ] He was made an honorary doctor of science by McKinley-Roosevelt College in Chicago, Illinois. [ 2 ] Frohring's father, William, a railroad engineer, was born in Bavaria and immigrated to the United States at the age of one, while his mother, Martha, was born in Ohio to German immigrants. [ 17 ] He was married to the former Gertrude Lewis, and had four children. He died of a heart attack at his home on Munn Road in Newbury, Ohio. [ 2 ]
https://en.wikipedia.org/wiki/William_Otto_Frohring
William Paul Byers (born 1943) is a Canadian mathematician and philosopher, and professor emeritus in mathematics and statistics at Concordia University in Montreal, Quebec , Canada. He completed a BSc ('64) and an MSc ('65) at McGill University , and obtained his PhD ('69) from the University of California, Berkeley . His dissertation, Anosov Flows , was supervised by Stephen Smale . [ 1 ] His areas of interest include dynamical systems and the philosophy of mathematics . Byers is the author of three books on mathematics.
https://en.wikipedia.org/wiki/William_P._Byers
Sir William Ramsay KCB FRS FRSE ( / ˈ r æ m z i / ; 2 October 1852 – 23 July 1916) was a Scottish chemist who discovered the noble gases and received the Nobel Prize in Chemistry in 1904 "in recognition of his services in the discovery of the inert gaseous elements in air", along with his collaborator, John William Strutt, 3rd Baron Rayleigh , who received the Nobel Prize in Physics that same year for their discovery of argon . After the two men identified argon, Ramsay investigated other atmospheric gases. His work in isolating argon, helium , neon , krypton , and xenon led to the development of a new section of the periodic table . [ 2 ] Ramsay was born at 2 Clifton Street [ 3 ] in Glasgow on 2 October 1852, the son of the civil engineer and surveyor William C. Ramsay and his wife, Catherine Robertson. [ 4 ] The family lived at 2 Clifton Street in the city centre, a three-storey and basement Georgian townhouse, [ 3 ] and moved to 1 Oakvale Place in the Hillhead district in his youth. [ 5 ] He was a nephew of the geologist Sir Andrew Ramsay . He was educated at Glasgow Academy and then apprenticed to Robert Napier, a shipbuilder in Govan . [ 6 ] However, he instead decided to study chemistry at the University of Glasgow , matriculating in 1866 and graduating in 1869. He then undertook practical training with the chemist Thomas Anderson before going to Germany to study at the University of Tübingen with Wilhelm Rudolph Fittig, where his doctoral thesis was entitled Investigations in the Toluic and Nitrotoluic Acids . [ 7 ] [ 8 ] [ 9 ] Ramsay went back to Glasgow as Anderson's assistant at Anderson College . He was appointed Professor of Chemistry at University College, Bristol in 1879 and married Margaret Buchanan in 1881. In the same year he became the Principal of University College, Bristol, and somehow managed to combine that with active research both in organic chemistry and on gases.
William Ramsay formed pyridine in 1876 from acetylene and hydrogen cyanide in an iron-tube furnace, in what was the first synthesis of a heteroaromatic compound . [ 10 ] In 1887, he succeeded Alexander Williamson as the chair of Chemistry at University College London (UCL). It was here at UCL that his most celebrated discoveries were made. As early as 1885–1890, he published several notable papers on the oxides of nitrogen , developing the skills that he needed for his subsequent work. On the evening of 19 April 1894, Ramsay attended a lecture given by Lord Rayleigh . Rayleigh had noticed a discrepancy between the density of nitrogen made by chemical synthesis and nitrogen isolated from the air by removal of the other known components. After a short conversation, he and Ramsay decided to investigate this. In August Ramsay told Rayleigh he had isolated a new, heavy component of air, which did not appear to have any chemical reactivity . He named this inert gas " argon ", from the Greek word meaning "lazy". [ 2 ] In the following years, working with Morris Travers , he discovered neon , krypton , and xenon . He also isolated helium , which had only been observed in the spectrum of the Sun and had not previously been found on Earth. In 1910 he isolated and characterised radon . [ 11 ] During 1893–1902, Ramsay collaborated with Emily Aston , a British chemist, in experiments on mineral analysis and atomic weight determination. Their work included publications on the molecular surface energies of mixtures of non-associating liquids. [ 12 ] Ramsay was elected an International Member of the American Philosophical Society in 1899. [ 13 ] He was appointed a Knight Commander of the Order of the Bath (KCB) in the 1902 Coronation Honours list published on 26 June 1902, [ 14 ] [ 15 ] and invested as such by King Edward VII at Buckingham Palace on 24 October 1902. [ 16 ] In 1904, Ramsay received the Nobel Prize in Chemistry .
That same year, he was elected an International Member of the United States National Academy of Sciences . [ 17 ] Ramsay's standing among scientists led him to become an adviser to the Indian Institute of Science ; he suggested Bangalore as the location for the institute. In 1905, Ramsay endorsed the Industrial and Engineering Trust Ltd., a company that claimed it could extract gold from seawater . It bought property on the English coast to begin its secret process, but never produced any gold. Ramsay was the president of the British Association in 1911–1912. [ 18 ] In 1881, Ramsay married Margaret Johnstone Marshall (née Buchanan), daughter of George Stevenson Buchanan. They had a daughter, Catherine Elizabeth (Elska), and a son, William George, who died at 40. Ramsay lived in Hazlemere , Buckinghamshire , until his death. He died in High Wycombe , Buckinghamshire, on 23 July 1916 from nasal cancer at the age of 63 and was buried in Hazlemere parish church . A blue plaque at number 12 Arundel Gardens , Notting Hill , commemorates his life and work. The Sir William Ramsay School in Hazlemere and Ramsay grease are named after him. There is a memorial to him by Charles Hartwell in the north aisle of the choir at Westminster Abbey . [ 19 ] In 1923, University College London named its new Chemical Engineering department and chair after Ramsay, both funded by the Ramsay Memorial Fund. [ 20 ] One of Ramsay's former graduates, H. E. Watson , was the third Ramsay Professor of Chemical Engineering. On 2 October 2019, Google celebrated his 167th birthday with a Google Doodle . [ 21 ]
https://en.wikipedia.org/wiki/William_Ramsay
William Snow Burnside (20 December 1839 – 11 March 1920) was an Irish mathematician whose entire career was spent at Trinity College Dublin (TCD). He is chiefly remembered for the book The Theory of Equations: With an Introduction to the Theory of Binary Algebraic Forms (1881) [ 1 ] and his long tenure as Erasmus Smith's Professor of Mathematics at TCD. He is sometimes confused with his rough contemporary, the English mathematician William Burnside . [ 2 ] William Snow Burnside was born at Corcreevy House, near Fivemiletown, Tyrone , to William Smyth Burnside (1810–1884, Chancellor of Clogher Cathedral ) and Anne Henderson (1808–1881). [ 3 ] He studied mathematics under George Salmon at TCD (BA 1861, MA 1866, Fellowship 1871), and taught there until his retirement in 1917. He served as Erasmus Smith's Professor of Mathematics for many decades (1879–1913), and co-authored the influential 1881 book The Theory of Equations: With an Introduction to the Theory of Binary Algebraic Forms with his TCD colleague Arthur William Panton (1843–1906). It ran to at least seven editions, and was reissued by Dover Books in 1960. TCD awarded him a DSc in 1891. He lived one and a half miles away from campus, on Raglan Road , and was allegedly "the last man to regularly arrive in College on horseback". [ 4 ]
https://en.wikipedia.org/wiki/William_S._Burnside
Sir William Schooling KBE FRAS FSS (16 December 1860 – 18 February 1936) was a British expert on insurance and statistics . He was named a CBE in the 1918 Birthday Honours and a KBE in 1920 for his work with the War Savings Committee . [ 1 ] Schooling was the editor of Bourne's Directory , a listing of British insurance companies, and the author of several books on insurance [ 2 ] and on the history of the Hudson's Bay Company . [ 3 ] With Mark Barr , he also did pioneering work on the mathematics of the golden ratio . [ 4 ]
https://en.wikipedia.org/wiki/William_Schooling
William Sharp is a biotechnologist and entrepreneur who holds a PhD in plant cell biology from Rutgers University . He is well known for applying science to business, both creating start-up companies and gaining extensive technology-transfer experience across the Americas and Asia in a broad sector of business ventures. Sharp has authored over seventy original research papers, abstracts and books in the field of plant cell biology, including co-editing Plant Cell and Tissue Culture (The Ohio State University Press, Columbus, Ohio, 1977), the five-volume series Handbook of Plant Cell Culture (Volumes 1–5, Macmillan Publishing Company, New York, 1983–1986), and Reflections & Connections and Personal Journeys Through The Life Sciences (Volumes I & II, ScienceTechPublishers, LLC, Lewes, Delaware, 2014). Sharp serves as a member of The Ohio State University College of Arts and Sciences Advisory Committee, The Ohio State University STEAM Factory, and the Ohio State University, Rutgers University, and University of São Paulo Tripartite Collaborative Program. [ 1 ] Sharp formerly served as Professor and Dean of Research at Cook College ; Director of Research at the New Jersey Agricultural Experiment Station at Rutgers University ; Executive Vice-President of DNA Pharmaceuticals Inc.; Executive Vice-President for Research at DNA Plant Technology Corp; Research Director at Pioneer Research, Campbell Institute for Research & Technology, the Campbell Soup Company; Full Professor at the Ohio State University ; Visiting Professor at the Center for Nuclear Energy in Agriculture, University of São Paulo; and Fellow of the Argonne National Laboratory. Sharp was a Fulbright Grantee during 1971 and 1973. 
He received the distinct honor of having the Rod Sharp Professorship of Microbiology at Ohio State University and the William "Rod" Sharp Biotechnology Conference Room at the University of São Paulo named for him, and is a recipient of the University of São Paulo Eminent Professorship and the Luiz Queiroz Distinguished Service Medal from the University of São Paulo. Sharp was awarded The Board of Trustees Distinguished Service Award, in recognition of his contributions to the Colleges of Arts and Sciences, from The Ohio State University by President Gordon Gee at graduation commencement on December 9, 2007, and more recently the College of Arts and Sciences 2016 Alumni Distinguished Service Award. Sharp continues to mentor and advise his former students. Sharp is the father of film producer Jeff (Jeffrey) Sharp , Executive Director of the Gotham Film and Media Institute, Brooklyn, NY.
https://en.wikipedia.org/wiki/William_Sharp_(scientist)
William Simms (7 December 1793 – 21 June 1860) was a British scientific instrument maker. He was born in Birmingham, the second of nine children of William Simms (1763-1828), a toy maker. Soon after William Simms's birth the family moved to London so that William Simms Sr. could help his ailing father, James Simms, who had a jewellery business in Whitecross Street . This business was soon converted to the manufacture of optical instruments. William Sr. prospered and in 1804 he was elected a Freeman of the City. William Simms Jr. was sent in January 1806 to be educated in mathematics by a Mr. Hayward. After two years' education, in January 1808 he was apprenticed to Thomas Penstone, a member of the Worshipful Company of Goldsmiths . However, William's interests lay elsewhere, and in 1808 he was apprenticed to his father. He was elected a Freeman of the Worshipful Company of Goldsmiths in 1815 and set up in business for himself, working until 1818 at his father's Blackfriars premises. His elder brother James was already establishing his own reputation for navigational instruments. William Simms' chief interest was the division of the circle, the accuracy of which was essential to the manufacture of accurate scientific instruments. He became a correspondent of Thomas Jones , who brought him into contact with the instrument maker Edward Troughton and also persuaded him to join the Royal Society for the encouragement of Arts, Manufactures & Commerce . Here he met the engineer Bryan Donkin , and also Colonel Colby of the Ordnance Survey . In 1825 Simms was asked to repair and redivide an astronomical circle made by Troughton in 1800. [ 1 ] Simms made and replaced parts where required, then redivided the circle. Later Simms wrote a paper for Troughton proposing a new method for dividing circles that was more accurate than an engine and quicker than using a roller. 
[ 2 ] Edward Troughton took on William Simms as a partner in 1826. [ 3 ] On Troughton's retirement Simms took over his business, which had a very good reputation in the manufacture of scientific instruments. Simms specialized in surveying instruments and from 1817 supplied theodolites to the Ordnance Survey and then to the East India Company , including those used by George Everest . On a larger scale he supplied telescopes , mural circles and other astronomical instruments to observatories at Kraków , Madras , Cambridge , Lucknow , Calcutta , Edinburgh , Brussels , Greenwich and other places. By the close of his career he had supplied most of the world's leading observatories with equipment. Simms' work formed the basis of the treatise on mathematical instruments written by his younger brother Frederick Walter Simms , who went on to become an important writer on civil engineering . His reputation was enhanced by the improvements he made to graduating instruments, and his self-acting circular dividing engine reduced the work involved in manufacture from weeks to hours. He also helped standardize the measures of length the yard and chain for the Admiralty . Simms was elected an Associate of the Institution of Civil Engineers in 1828. He was a Fellow of the Royal Astronomical Society , which he joined in 1831, and elected a Fellow of the Royal Society in 1852. Simms died at the family home in Carshalton on 21 June 1860 and was buried at West Norwood Cemetery . His family, most especially his son James Simms, carried on his instrument-making work. Simms Rock in Antarctica is named after William Simms.
https://en.wikipedia.org/wiki/William_Simms_(instrument_maker)
William Webster (1855–1910) was an English chemical engineer credited with developments in gas detection, sewage treatment and medical use of x-rays . A gifted artist and musician, Webster also helped found the Blackheath Concert Halls and the adjacent Conservatoire in Blackheath in south-east London during the 1890s. Webster was the son of William Webster , a successful building contractor who grew wealthy from constructing major civil engineering and building projects in London. The family lived from 1869 in Wyberton House in Lee Terrace, Blackheath. The younger William Webster trained as a chemical engineer. A fellow of the Chemical Society , he patented a system to detect hydrogenous gases in mines in 1876, [ 1 ] and later developed a system for the electrolytic purification of sewage (patent application filed on 22 December 1887; US patent awarded on 19 February 1889), [ 2 ] trialled in 1888 at the Crossness Southern Outfall works [ 3 ] [ 4 ] which had been built by his father's firm in the 1860s. Webster was also a pioneer in x-ray research and a founder member of the Röntgen Society (since 1927 part of the British Institute of Radiology ), assisting surgeon Thomas Moore in producing radiographs in 1896, after which Moore set up an x-ray unit at the Miller General Hospital in Greenwich High Road. [ 5 ] Webster is also believed to be the first person to experience radiation 'sunburn', suffered on his right hand. [ 6 ] [ 5 ] He wrote a letter on the subject of x-ray photography published in the journal Nature in 1897. [ 7 ] Webster was an accomplished violinist, singer, and artist; his paintings were exhibited in the Summer Exhibition at the Royal Academy . In 1881 local residents formed the Blackheath Conservatoire of Music, and Webster founded the company which funded the building of a concert hall, today Blackheath Halls , and its neighbouring schools for art and music, the Blackheath Conservatoire . 
The Conservatoire of Music opened in 1896 and the School of Art in 1897. [ 8 ]
https://en.wikipedia.org/wiki/William_Webster_(chemical_engineer)
William Whewell ( / ˈ h juː əl / HEW -əl ; 24 May 1794 – 6 March 1866) was an English polymath . He was Master of Trinity College, Cambridge . In his time as a student there, he achieved distinction in both poetry and mathematics . The breadth of Whewell's endeavours is his most remarkable feature. In a time of increasing specialisation, Whewell belonged in an earlier era when natural philosophers investigated widely. He published work in mechanics , physics , geology , astronomy , and economics , while also composing poetry , writing a Bridgewater Treatise , translating the works of Goethe , and writing sermons and theological tracts. In mathematics, Whewell introduced what is now called the Whewell equation , defining the shape of a curve without reference to an arbitrarily chosen coordinate system. He also organized thousands of volunteers internationally to study ocean tides , in what is now considered one of the first citizen science projects. He received the Royal Medal for this work in 1837. [ 1 ] One of Whewell's greatest gifts to science was his word-smithing. He corresponded with many in his field and helped them come up with neologisms for their discoveries. Whewell coined, among other terms, scientist, [ 2 ] physicist , linguistics , consilience , catastrophism , uniformitarianism , and astigmatism ; [ 3 ] he suggested to Michael Faraday the terms electrode , ion , dielectric , anode , and cathode . [ 4 ] [ 5 ] Whewell died in Cambridge in 1866 as a result of a fall from his horse. Whewell was born in Lancaster , the son of John Whewell and his wife, Elizabeth Bennison. [ 6 ] His father was a master carpenter , and wished him to follow his trade, but William's success in mathematics at Lancaster Royal Grammar School and Heversham grammar school won him an exhibition (a type of scholarship) at Trinity College, Cambridge in 1812. He was the eldest of seven children, with three brothers and three sisters born after him. 
Two of the brothers died as infants, while the third died in 1812. Two of his sisters married; he corresponded with them in his career as a student and then a professor. His mother died in 1807, when Whewell was 13 years old. His father died in 1816, the year Whewell received his bachelor's degree at Trinity College, but before his most significant professional accomplishments. [ 7 ] Whewell married, firstly, in 1841, Cordelia Marshall, daughter of John Marshall . Within days of his marriage, Whewell was recommended to be master of Trinity College in Cambridge, following Christopher Wordsworth . Cordelia died in 1855. In 1858 he married again, to Everina Frances (née Ellis), sister of Robert Leslie Ellis and widow of Sir Gilbert Affleck, 5th Baronet . [ 8 ] [ 9 ] She died in 1865. He had no children. In 1814 he was awarded the Chancellor's Gold Medal for poetry. [ 10 ] He was Second Wrangler in 1816 and President of the Cambridge Union Society in 1817, and became a fellow and tutor of his college. He was professor of mineralogy from 1828 to 1832 and Knightbridge Professor of Philosophy (then called "moral theology and casuistical divinity") from 1838 to 1855. [ 11 ] [ 12 ] During his years as professor of philosophy, in 1841, Whewell succeeded Christopher Wordsworth as master. Whewell influenced the syllabus of the Mathematical Tripos at Cambridge, which undergraduates studied. He was a proponent of 'mixed mathematics': applied mathematics , descriptive geometry and mathematical physics , in contrast with pure mathematics . Under Whewell, analytic topics such as elliptic integrals were replaced by physical studies of electricity, heat and magnetism. He believed an intuitive geometrical understanding of mathematics, based on Euclid and Newton, was most appropriate. [ 13 ] Whewell died in Cambridge in 1866 as a result of a fall from his horse. 
[ 14 ] He was buried in the chapel of Trinity College, Cambridge, whilst his wives are buried together in the Mill Road Cemetery, Cambridge . A window dedicated to Lady Affleck, his second wife, was installed in her memory in the chancel of All Saints' Church, Cambridge and made by Morris & Co. A list of his writings was prepared after his death by Isaac Todhunter in two volumes, the first being an index of the names of persons with whom Whewell corresponded. [ 15 ] [ 16 ] Another book was published five years later, as a biography of Whewell's life interspersed with his letters to his father, his sisters, and other correspondence, written and compiled by his niece by marriage, Janet Mary Douglas, called Mrs Stair Douglas on the book's title page. [ 7 ] These books are available online in their entirety as part of the Internet Archive. In 1826 and 1828, Whewell was engaged with George Airy in conducting experiments in Dolcoath mine in Cornwall , in order to determine the density of the earth. Their united labours were unsuccessful, and Whewell did little more in the way of experimental science . He was the author, however, of an Essay on Mineralogical Classification , published in 1828, and carried out extensive work on the tides. [ 17 ] [ 12 ] When Whewell started his work on tides , there was a theory explaining the forces causing the tides, based on the work of Newton , Bernoulli , and Laplace . But this explained the forces, not how tides actually propagated in oceans bounded by continents. There was a series of tidal observations for a few ports, such as London and Liverpool , which allowed tide tables to be produced for these ports. However the methods used to create such tables, and in some cases the observations, were closely guarded trade secrets. 
John Lubbock , a former student of Whewell's, had analysed the available historic data (covering up to 25 years) for several ports to allow tables to be generated on a theoretical basis, publishing the methodology. [ 18 ] [ 19 ] This work was supported by Francis Beaufort , Hydrographer of the Navy , and contributed to the publication of the Admiralty Tide Tables starting in 1833. [ 20 ] Whewell built on Lubbock's work to develop an understanding of tidal patterns around the world that could be used to generate predictions for many locations without the need for long series of tidal observations at each port. This required extensive new observations, initially obtained through an informal network, and later through formal projects enabled by Beaufort at the Admiralty. In the first of these, in June 1834, every Coast Guard station in the United Kingdom recorded the tides every fifteen minutes for two weeks. [ 21 ] : 169–173 The second, in June 1835, was an international collaboration, involving Admiralty Surveyors, other Royal Navy and British observers, as well as those from the United States , France , Spain , Portugal , Belgium , Denmark , Norway , and the Netherlands . Islands, such as the Channel Islands , were particularly interesting, adding important detail of the progress of the tides through the ocean. The Admiralty also provided the resources for data analysis, and J.F. Dessiou, an expert calculator on the Admiralty staff, was in charge of the calculations. [ 21 ] : 175–182 Whewell made extensive use of graphical methods, and these became not just ways of displaying results, but tools in the analysis of data. [ 21 ] : 182 He published a number of maps showing cotidal lines (a term coined by himself, but first published by Lubbock, acknowledging the inventor [ 23 ] : 111 [ 24 ] ) – lines joining points where high tide occurred at the same time. These allowed a graphical representation of the progression of tidal waves through the ocean. 
From this, Whewell predicted that there should be a place where there was no tidal rise or fall in the southern part of the North Sea. [ 22 ] Such a "no-tide zone" is now called an amphidromic point . In 1840, the naval surveyor William Hewett confirmed Whewell's prediction. This involved anchoring his ship, HMS Fairy , and taking repeated soundings at the same location with lead and line , precautions being needed to allow for irregularities in the sea bed, and the effects of tidal flow. The data showed a rise of no more than 1 foot (0.30 m), near the limit of accuracy. [ 25 ] [ 23 ] Whewell published about 20 papers over a period of 20 years on his tidal researches. This was his major scientific achievement, and was an important source for his understanding of the process of scientific enquiry, the subject of one of his major works Philosophy of the Inductive Sciences . His best-known works are two voluminous books that attempt to systematize the development of the sciences, History of the Inductive Sciences (1837) and The Philosophy of the Inductive Sciences, Founded Upon Their History (1840, 1847, 1858–60). While the History traced how each branch of the sciences had evolved since antiquity, Whewell viewed the Philosophy as the "Moral" of the previous work as it sought to extract a universal theory of knowledge through history. In the latter, he attempted to follow Francis Bacon 's plan for discovery. He examined ideas ("explication of conceptions") and by the "colligation of facts" endeavored to unite these ideas with the facts and so construct science. [ 12 ] This colligation is an "act of thought", a mental operation consisting of bringing together a number of empirical facts by "superinducing" upon them a conception which unites the facts and renders them capable of being expressed in general laws. 
[ 26 ] As an example, Whewell refers to Kepler's discovery of the elliptical orbit: the orbit's points were colligated by the conception of the ellipse, not by the discovery of new facts. These conceptions are not "innate" (as in Kant), but being the fruits of the "progress of scientific thought (history) are unfolded in clearness and distinctness". [ 27 ] Whewell analyzed inductive reasoning into three steps. Upon these follow special methods of induction applicable to quantity: the method of curves, the method of means, the method of least squares and the method of residues, and special methods depending on resemblance (to which the transition is made through the law of continuity), such as the method of gradation and the method of natural classification. [ 12 ] In Philosophy of the Inductive Sciences Whewell was the first to use the term " consilience " to discuss the unification of knowledge between the different branches of learning. Here, as in his ethical doctrine, Whewell was moved by opposition to contemporary English empiricism . Following Immanuel Kant , he asserted against John Stuart Mill the a priori nature of necessary truth , and by his rules for the construction of conceptions he dispensed with the inductive methods of Mill . [ 12 ] Yet, according to Laura J. Snyder , "surprisingly, the received view of Whewell's methodology in the 20th century has tended to describe him as an anti-inductivist in the Popperian mold, that is it is claimed that Whewell endorses a 'conjectures and refutations' view of scientific discovery. Whewell explicitly rejects the hypothetico-deductive claim that hypotheses discovered by non-rational guesswork can be confirmed by consequentialist testing. Whewell explained that new hypotheses are 'collected from the facts' (Philosophy of Inductive Sciences, 1849, 17)". 
[ 28 ] In sum, scientific discovery is a partly empirical and partly rational process; the "discovery of the conceptions is neither guesswork nor merely a matter of observations"; we infer more than we see. [ 29 ] One of Whewell's greatest gifts to science was his wordsmithing. He corresponded with many in his field and helped them come up with new terms for their discoveries. In fact, Whewell came up with the term scientist itself in 1833, and it was first published in Whewell's anonymous 1834 review of Mary Somerville 's On the Connexion of the Physical Sciences published in the Quarterly Review . [ 30 ] (They had previously been known as "natural philosophers" or "men of science"). Whewell was prominent not only in scientific research and philosophy but also in university and college administration. His first work, An Elementary Treatise on Mechanics (1819), together with the works of George Peacock and John Herschel , helped reform the Cambridge method of mathematical teaching. His work and publications also helped influence the recognition of the moral and natural sciences as an integral part of the Cambridge curriculum. [ 12 ] In general, however, especially in later years, he opposed reform: he defended the tutorial system , and in a controversy with Connop Thirlwall (1834), opposed the admission of Dissenters ; he upheld the clerical fellowship system, the privileged class of "fellow-commoners", and the authority of heads of colleges in university affairs. [ 12 ] He opposed the appointment of the University Commission (1850) and wrote two pamphlets ( Remarks ) against the reform of the university (1855). He stood against the scheme of entrusting elections to the members of the senate and instead advocated the use of college funds and the subvention of scientific and professorial work. [ 12 ] He was elected Master of Trinity College , Cambridge in 1841, and retained that position until his death in 1866. 
The Whewell Professorship of International Law and the Whewell Scholarships were established through the provisions of his will. [ 31 ] [ 32 ] Aside from science, Whewell was also interested in the history of architecture throughout his life. He is best known for his writings on Gothic architecture , specifically his book, Architectural Notes on German Churches (first published in 1830). In this work, Whewell established a strict nomenclature for German Gothic churches and came up with a theory of stylistic development. His work is associated with the "scientific trend" of architectural writers, along with Thomas Rickman and Robert Willis . He paid from his own resources for the construction of two new courts of rooms at Trinity College, Cambridge , built in a Gothic style . The two courts were completed in 1860 and (posthumously) in 1868, and are now collectively named Whewell's Court (in the singular). Between 1835 and 1861 Whewell produced various works on the philosophy of morals and politics , the chief of which, Elements of Morality , including Polity , was published in 1845. The peculiarity of this work, written from what is known as the intuitional point of view , is its fivefold division of the springs of action and of their objects, of the primary and universal rights of man (personal security, property, contract, family rights, and government), and of the cardinal virtues ( benevolence , justice , truth , purity and order ). [ 12 ] Whewell's other works, too numerous to mention in full, included many popular writings. Whewell was one of the Cambridge dons whom Charles Darwin met during his education there , and when Darwin returned from the Beagle voyage he was directly influenced by Whewell, who persuaded Darwin to become secretary of the Geological Society of London . 
The title pages of On the Origin of Species open with a quotation from Whewell's Bridgewater Treatise about science founded on a natural theology of a creator establishing laws: [ 37 ] But with regard to the material world, we can at least go so far as this—we can perceive that events are brought about not by insulated interpositions of Divine power, exerted in each particular case, but by the establishment of general laws. Though Darwin used the concepts of Whewell as he made and tested his hypotheses regarding the theory of evolution , Whewell did not support Darwin's theory itself. "Whewell also famously opposed the idea of evolution. First he published a new book, Indications of the Creator , 1845, composed of extracts from his earlier works to counteract the popular anonymous evolutionary work Vestiges of the Natural History of Creation . Later Whewell opposed Darwin's theories of evolution." [ 38 ] In the 1857 novel Barchester Towers Charlotte Stanhope uses the topic of the theological arguments, concerning the possibility of intelligent life on other planets, between Whewell and David Brewster in an attempt to start up a conversation between her impecunious brother and the wealthy young widow Eleanor Bold. [ 42 ]
https://en.wikipedia.org/wiki/William_Whewell
William M. Williams (25 February 1927 – 28 January 2011) was a Welsh-born metallurgical engineer and Birks professor of metallurgy at McGill University . Williams was born in Tonypandy , Wales , the son of a coal miner. [ 1 ] In 1944, he won a scholarship to study at the University of Bristol , where he earned a bachelor's degree in 1948 and later a Master of Science in physics. In working to earn his master's degree, he studied stereo micro-radiography at the University of Chicago , under the direction of Cyril Stanley Smith . Around the same time, he also took up a position as a metallurgist with the Revere Copper Company in Rome, New York . In 1960, Williams earned his doctorate from the University of Toronto . Williams commenced lecturing at McGill University in 1960, and was selected to be the Chairman of the Department of Mining and Metallurgy in 1966. As Chairman, he was instrumental in expanding the department at a time when only six degree programs in Metallurgical Engineering were being offered in Canadian universities, among which McGill's department was the oldest. During his tenure, seven new faculty members were added to the department, with the new faculty primarily focusing its research on extractive (hydro and pyro), process and physical metallurgy. [ 2 ] Ties between the department and Canadian industries were also strengthened during this time. Throughout his career, Williams conducted research on a variety of topics ranging from esoteric studies of grain shape to the practical aspects of abrasion resistant cast irons for mineral comminution . Williams held the position of Chairman until 1980, and retired from teaching in 1992. From 1972 to 1973, he was President of the Metallurgical Society of CIM . 
[ 3 ] As a specialist in failure analysis , Williams was consulted to investigate numerous engineering failures, including such notable events as the 1965 LaSalle Heights disaster , [ 4 ] the Mississauga train derailment of 1979 , and the crash of Quebecair Flight 255 . From 1990 to 2000, he was a consultant metallurgist for Via Rail . Williams also served as an expert witness in about 40 court cases in Canada and the United States, and was twice appointed Judge's Expert, by justices James K. Hugessen and Antonio Lamer respectively.
https://en.wikipedia.org/wiki/William_Williams_(metallurgist)
William Joseph Wiswesser (December 3, 1914 – December 17, 1989) was an American chemist best known as the creator of the Wiswesser line notation (WLN), which was an innovative way to represent chemical structures in a linear string of characters suitable for computer manipulation. He is also known for the Wiswesser rule , a mathematical formula that predicts the order of atomic orbitals in many-electron atoms. Wiswesser was born in Reading, Pennsylvania , to Louis and Hattie (Flatt) Wiswesser in 1914. [ 1 ] He attended Reading High School in Reading, and graduated from Lehigh University in Bethlehem, Pennsylvania , with a B.S. degree in chemistry in 1936. Following graduation, he worked at Hercules, the Trojan Powder Company, and the Picatinny Arsenal. Wiswesser then was an instructor of chemistry in the Cooper Union 's School of Engineering during the 1940s. [ 2 ] In 1945, he published his paper describing a formula that correctly orders the subshells of atomic orbitals in the manner of the Aufbau principle, known as the Wiswesser rule . [ 3 ] Following his time at Cooper Union, Wiswesser worked for Willson Products , where he was Director of Industrial Hygiene, followed by civilian employment by the U.S. Army at Fort Detrick and finally at the Agricultural Research Service of the USDA . [ 4 ] In 1949, Wiswesser first presented what is now known as the Wiswesser line notation, which was particularly well suited to molecular structure representation within the computing platforms and modalities available. [ 5 ] This work, which was further developed and expanded on by him for many years, had a lasting impact on the field of chemical informatics . Wiswesser was also interested in the history of chemistry and near the end of his life he made a special study of Josef Loschmidt 's work, alone at first [ 6 ] and then together with preeminent chemist Alfred Bader . 
[ 4 ] In 1970 he was awarded the Department of the Army Decoration for Exceptional Civilian Service , the highest honour which can be given by the United States Army to a civilian, in recognition of his "Chemical Line-Formula Notation", the WLN. [ 7 ] That same year, he was awarded an honorary doctorate at Lehigh University . [ 8 ] In 1975 he was awarded the Austin M. Patterson Award for chemical information science. [ 9 ] Wiswesser received the American Chemical Society Division of Chemical Information's Herman Skolnik Award in 1980, with a citation "For pioneering mathematical, physical, and chemical methods of punched-card and computer-stored representation of molecular structures, leading to the creation of the Wiswesser Line Notation (WLN) for concise storage and retrieval of chemical structures ...". [ 10 ] At the end of his life he was working for the United States Department of Agriculture on weed science until his final illness, and he died on 17 December 1989, aged 75, in Wyomissing, Pennsylvania , leaving a widow, Katherine, and a son, daughter and four grandchildren. [ 1 ] His scientific papers were deposited at Lehigh University after his death. [ 11 ]
https://en.wikipedia.org/wiki/William_Wiswesser
In combustion, the Williams diagram refers to a classification diagram of different turbulent combustion regimes in a plane, with the turbulent Reynolds number Re_l as the x-axis and the turbulent Damköhler number Da_l as the y-axis. [1] The diagram is named after Forman A. Williams (1985). [2] The two non-dimensional numbers are defined as [3]

Re_l = u′l/ν,  Da_l = (l/u′)/t_ch,

where u′ is the rms turbulent velocity fluctuation, l is the integral length scale, ν is the kinematic viscosity and t_ch is the chemical time scale. The Reynolds number Re_λ based on the Taylor microscale λ = l/√(Re_l) becomes Re_λ = √(Re_l). The Damköhler number based on the Kolmogorov time scale t_η = √(νl/u′³) is given by Da_η = Da_l/√(Re_l). The Karlovitz number Ka = t_ch/t_η is then Ka = √(Re_l)/Da_l. The Williams diagram is universal in the sense that it is applicable to both premixed and non-premixed combustion. In supersonic combustion and detonations, the diagram becomes three-dimensional due to the addition of the Mach number Ma = u′/c as the z-axis, where c is the sound speed. [4] In premixed combustion, an alternate diagram, known as the Borghi–Peters diagram, is also used to describe different regimes. This diagram is named after Roland Borghi (1985) and Norbert Peters (1986).
[5] [6] The Borghi–Peters diagram uses l/δ_L as the x-axis and u′/S_L as the y-axis, where δ_L and S_L are the thickness and speed of the planar, laminar premixed flame. Since δ_L Pr = ν/S_L, where Pr is the Prandtl number (set Pr = 1), and t_ch = δ_L/S_L in premixed flames, we have

Re_l = (u′/S_L)(l/δ_L),  Da_l = (l/δ_L)/(u′/S_L),

so the two diagrams are related by a coordinate transformation. The limitations of the Borghi–Peters diagram are that (1) it cannot be used for non-premixed combustion and (2) it is not suitable for practically relevant cases where both Re_l and Da_l are increased concurrently, such as increasing the nozzle radius while maintaining constant nozzle exit velocity. [7]
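The relations between these non-dimensional groups can be checked numerically. A minimal sketch in Python; the numerical values of u′, l, ν and t_ch are illustrative assumptions, not from the source:

```python
import math

def combustion_numbers(u_rms, l, nu, t_ch):
    """Turbulent Reynolds, Damkohler and Karlovitz numbers from the rms
    velocity fluctuation, integral length scale, kinematic viscosity and
    chemical time scale."""
    Re_l = u_rms * l / nu                    # turbulent Reynolds number
    Da_l = (l / u_rms) / t_ch                # turbulent Damkohler number
    t_eta = math.sqrt(nu * l / u_rms**3)     # Kolmogorov time scale
    Da_eta = Da_l / math.sqrt(Re_l)          # Damkohler number based on t_eta
    Ka = t_ch / t_eta                        # Karlovitz number
    return Re_l, Da_l, Da_eta, Ka

# Illustrative values: u' = 2 m/s, l = 5 mm, nu = 1.5e-5 m^2/s, t_ch = 0.1 ms
Re_l, Da_l, Da_eta, Ka = combustion_numbers(2.0, 5e-3, 1.5e-5, 1e-4)
# Consistency checks of the identities stated in the text:
assert abs(Da_eta - Da_l / math.sqrt(Re_l)) < 1e-12
assert abs(Ka - math.sqrt(Re_l) / Da_l) < 1e-9
```

The asserts confirm that Da_η and Ka are not independent quantities but follow from Re_l and Da_l alone, which is why the diagram needs only two axes.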
https://en.wikipedia.org/wiki/Williams_diagram
In combustion, the Williams spray equation, also known as the Williams–Boltzmann equation, describes the statistical evolution of sprays contained in another fluid, analogous to the Boltzmann equation for molecules. It is named after Forman A. Williams, who derived the equation in 1958. [1] [2] The spray droplets are assumed to be spherical with radius r; the assumption is also valid for solid particles when their shape has no consequence on the combustion. For liquid droplets to be nearly spherical, the spray has to be dilute (the total volume occupied by the spray is much less than the volume of the gas) and the Weber number We = 2rρ_g|v − u|²/σ, where ρ_g is the gas density, v is the spray droplet velocity, u is the gas velocity and σ is the surface tension of the liquid spray, should satisfy We ≪ 10. The equation is described by a number density function f_j(r, x, v, T, t) dr dx dv dT, which represents the probable number of spray particles (droplets) of chemical species j (of M total species) that one can find with radii between r and r + dr, located in the spatial range between x and x + dx, traveling with a velocity between v and v + dv, and having a temperature between T and T + dT at time t.
Then the spray equation for the evolution of this density function is given by

∂f_j/∂t + ∇_x·(v f_j) + ∂(R_j f_j)/∂r + ∇_v·(F_j f_j) + ∂(E_j f_j)/∂T = Q_j + Γ_j,

where R_j = dr/dt is the rate of change of the droplet radius, F_j = dv/dt is the droplet acceleration, E_j = dT/dt is the rate of change of the droplet temperature, and Q_j and Γ_j are the rates of change of f_j due to droplet nucleation or collisions and due to droplet breakup, respectively. This model for the rocket motor was developed by Probert, [5] Williams [1] [6] and Tanasawa. [7] [8] It is reasonable to neglect Q_j and Γ_j for distances not very close to the spray atomizer, where the major portion of combustion occurs. Consider a one-dimensional liquid-propellant rocket motor situated at x = 0, where fuel is sprayed. Neglecting E_j (the density function is then defined without the temperature, so the dimensions of f_j change accordingly) and using the fact that the mean flow is parallel to the x axis, the steady spray equation reduces to

∂(u_j f_j)/∂x + ∂(R_j f_j)/∂r + ∂(F_j f_j)/∂u_j = 0,

where u_j is the velocity in the x direction. Integrating with respect to the velocity results in

∂/∂x ∫ u_j f_j du_j + ∂/∂r ∫ R_j f_j du_j = 0.

The contribution from the last term (the spray acceleration term) becomes zero (using the divergence theorem) since f_j → 0 when u is very large, which is typically the case in rocket motors. The drop size rate R_j is well modeled using vaporization mechanisms as

R_j = dr/dt = −χ_j / r^{k_j},

where χ_j is independent of r, but can depend on the surrounding gas.
Defining the number of droplets per unit volume per unit radius, G_j(r, x) = ∫ f_j du_j, and average quantities averaged over velocities, ū_j = (1/G_j) ∫ u_j f_j du_j, the equation becomes

∂(G_j ū_j)/∂x + ∂(G_j R̄_j)/∂r = 0.

If it is further assumed that ū_j is independent of r, then with the transformed coordinate

η_j = [r^{k_j+1} + (k_j+1) ∫_0^x (χ_j/ū_j) dx]^{1/(k_j+1)}

the equation can be solved along characteristics of constant η_j. If the combustion chamber has a varying cross-section area A(x), a known function for x > 0 with area A_o at the spraying location, then the solution is given by

G_j(r, x) = G_{j,0}(η_j) (ū_{j,0} A_o)/(ū_j A) (r/η_j)^{k_j},

where G_{j,0} = G_j(r, 0) and ū_{j,0} = ū_j(x = 0) are the number distribution and mean velocity at x = 0, respectively.
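As an illustration, the transformed coordinate can be evaluated for the classical d²-law of vaporization (k_j = 1, constant χ_j and ū_j). All numerical values below are illustrative assumptions, not from the source:

```python
import math

def eta(r, x, k=1.0, chi=1e-9, u_bar=10.0):
    """Transformed coordinate eta_j for constant chi and mean velocity:
    eta = [r^(k+1) + (k+1) * (chi/u_bar) * x]^(1/(k+1)).
    Physically, eta is the radius the droplet had at the injector (x = 0)."""
    return (r**(k + 1) + (k + 1) * (chi / u_bar) * x) ** (1.0 / (k + 1))

r0 = 50e-6                                  # injected radius, 50 um (assumed)
k, chi, u_bar, x = 1.0, 1e-9, 10.0, 0.5     # d^2-law: k = 1; chi in m^2/s (assumed)
# Radius after travelling 0.5 m downstream, from r^2(x) = r0^2 - 2*(chi/u_bar)*x:
r_x = math.sqrt(r0**(k + 1) - (k + 1) * (chi / u_bar) * x)
# Consistency: tracing eta back from (r_x, x) recovers the injected radius.
assert abs(eta(r_x, x, k, chi, u_bar) - r0) < 1e-12
```

The check shows why the coordinate change works: η_j is constant along each droplet trajectory, so the downstream distribution is just the injected distribution relabelled.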
https://en.wikipedia.org/wiki/Williams_spray_equation
The Williamson ether synthesis is an organic reaction, forming an ether from an organohalide and a deprotonated alcohol (alkoxide). This reaction was developed by Alexander Williamson in 1850. [2] Typically it involves the reaction of an alkoxide ion with a primary alkyl halide via an SN2 reaction. This reaction is important in the history of organic chemistry because it helped prove the structure of ethers. The general reaction mechanism is as follows: [3] An example is the reaction of sodium ethoxide with chloroethane to form diethyl ether and sodium chloride: The Williamson ether reaction follows an SN2 (bimolecular nucleophilic substitution) mechanism, in which the nucleophile attacks the electrophilic carbon from the side opposite the leaving group in a single concerted step. For the SN2 reaction to take place there must be a good leaving group, which is strongly electronegative, commonly a halide. [4] In the Williamson ether reaction, an alkoxide ion (RO−) acts as the nucleophile, attacking the electrophilic carbon bearing the leaving group, which in most cases is an alkyl tosylate or an alkyl halide. The leaving site should be a primary carbon, because secondary and tertiary leaving sites generally prefer to proceed via an elimination reaction. Also, this reaction does not favor the formation of bulky ethers like di-tert-butyl ether, due to steric hindrance and the predominant formation of alkenes instead. [5] The Williamson reaction is of broad scope, is widely used in both laboratory and industrial synthesis, and remains the simplest and most popular method of preparing ethers. Both symmetrical and asymmetrical ethers are easily prepared. The intramolecular reaction of halohydrins, in particular, gives epoxides. In the case of asymmetrical ethers there are two possibilities for the choice of reactants, and one is usually preferable either on the basis of availability or reactivity.
The Williamson reaction is also frequently used to prepare an ether indirectly from two alcohols. One of the alcohols is first converted to a leaving group (usually a tosylate), then the two are reacted together. The alkoxide (or aryloxide) may be primary or secondary. Tertiary alkoxides tend to give elimination reactions because of steric hindrance. The alkylating agent, on the other hand, is most preferably primary. Secondary alkylating agents also react, but tertiary ones are usually too prone to side reactions to be of practical use. The leaving group is most often a halide or a sulfonate ester synthesized for the purpose of the reaction. Since the conditions of the reaction are rather forcing, protecting groups are often used to pacify other parts of the reacting molecules (e.g. other alcohols, amines, etc.). The Williamson ether synthesis is a common organic reaction in industrial synthesis and in undergraduate teaching laboratories. Yields for these ether syntheses are traditionally low when reaction times are shortened, which can be the case with undergraduate laboratory class periods. Without allowing the reaction to reflux for the correct amount of time (anywhere from 1 to 8 hours at 50 to 100 °C), the reaction may not proceed to completion, resulting in a poor overall product yield. To help mitigate this issue, microwave-enhanced technology is now being used to speed up reactions such as the Williamson ether synthesis. This technology has transformed reaction times that required reflux of at least 1.5 hours into a quick 10-minute microwave run at 130 °C, and this has increased the yield of ether synthesized from a range of 6–29% to 20–55% (data were compiled from several different lab sections incorporating the technology in their syntheses). [6] There have also been significant strides in the synthesis of ethers using temperatures of 300 °C and up and weaker alkylating agents to facilitate more efficient synthesis.
This methodology helps streamline the synthesis process and makes synthesis on an industrial scale more feasible. The much higher temperature makes the weak alkylating agent more reactive and less likely to produce salts as a byproduct. This method has proved to be highly selective and especially helpful in the production of aromatic ethers such as anisole, which has increasing industrial applications. [7] Since alkoxide ions are highly reactive, they are usually prepared immediately prior to the reaction or are generated in situ. In laboratory chemistry, in situ generation is most often accomplished by the use of a carbonate base or potassium hydroxide, while in industrial syntheses phase-transfer catalysis is very common. A wide range of solvents can be used, but protic solvents and apolar solvents tend to slow the reaction rate strongly, as a result of lowering the availability of the free nucleophile. For this reason, acetonitrile and N,N-dimethylformamide are particularly commonly used. A typical Williamson reaction is conducted at 50 to 100 °C and is complete in 1 to 8 h. Often the complete disappearance of the starting material is difficult to achieve, and side reactions are common. Yields of 50–95% are generally achieved in laboratory syntheses, while near-quantitative conversion can be achieved in industrial procedures. Catalysis is not usually necessary in laboratory syntheses. However, if an unreactive alkylating agent is used (e.g. an alkyl chloride), then the rate of reaction can be greatly improved by the addition of a catalytic quantity of a soluble iodide salt (which undergoes halide exchange with the chloride to yield a much more reactive iodide, a variant of the Finkelstein reaction). In extreme cases, silver compounds such as silver oxide may be added: [8] the silver ion coordinates with the halide leaving group to make its departure more facile. Finally, phase-transfer catalysts are sometimes used (e.g.
tetrabutylammonium bromide or 18-crown-6) in order to increase the solubility of the alkoxide by offering a softer counter-ion. Another example of an etherification reaction in a triphasic system under phase-transfer catalytic conditions is the reaction of benzyl chloride and furfuryl alcohol. [9] The Williamson reaction often competes with the base-catalyzed elimination of the alkylating agent, [3] and the nature of the leaving group as well as the reaction conditions (particularly the temperature and solvent) can have a strong effect on which is favored. In particular, some structures of alkylating agent can be particularly prone to elimination. When the nucleophile is an aryloxide ion, the Williamson reaction can also compete with alkylation on the ring, since the aryloxide is an ambident nucleophile.
https://en.wikipedia.org/wiki/Williamson_ether_synthesis
The Williams–Landel–Ferry equation (or WLF equation) is an empirical equation associated with time–temperature superposition. [1] The WLF equation has the form

log(a_T) = −C_1 (T − T_r) / (C_2 + T − T_r),

where log(a_T) is the decadic logarithm of the WLF shift factor, [2] T is the temperature, T_r is a reference temperature chosen to construct the compliance master curve and C_1, C_2 are empirical constants adjusted to fit the values of the superposition parameter a_T. The equation can be used to fit (regress) discrete values of the shift factor a_T vs. temperature. Here, values of the shift factor a_T are obtained by horizontal shifting by log(a_T) of creep compliance data plotted vs. time or frequency in double-logarithmic scale, so that a data set obtained experimentally at temperature T superposes with the data set at temperature T_r. A minimum of three values of a_T are needed to obtain C_1, C_2, and typically more than three are used. Once constructed, the WLF equation allows for the estimation of the temperature shift factor for temperatures other than those for which the material was tested. In this way, the master curve can be applied to other temperatures. However, when the constants are obtained with data at temperatures above the glass transition temperature (T_g), the WLF equation is applicable to temperatures at or above T_g only; the constants are positive and represent Arrhenius behavior. Extrapolation to temperatures below T_g is erroneous. [3] When the constants are obtained with data at temperatures below T_g, negative values of C_1, C_2 are obtained, which are not applicable above T_g and do not represent Arrhenius behavior. Therefore, the constants obtained above T_g are not useful for predicting the response of the polymer for structural applications, which necessarily must operate at temperatures below T_g.
The WLF equation is a consequence of time–temperature superposition (TTSP), which mathematically is an application of Boltzmann's superposition principle. It is TTSP, not WLF, that allows the assembly of a compliance master curve that spans more time, or frequency, than afforded by the time available for experimentation or the frequency range of the instrumentation, such as a dynamic mechanical analyzer (DMA). While the time span of a TTSP master curve is broad, according to Struik [4] it is valid only if the data sets did not suffer from ageing effects during the test time. Even then, the master curve represents a hypothetical material that does not age, so effective time theory [4] needs to be used to obtain useful long-term predictions. [5] Having data above T_g, it is possible to predict the behavior (compliance, storage modulus, etc.) of viscoelastic materials for temperatures T > T_g, and/or for times/frequencies longer/slower than the time available for experimentation. With the master curve and associated WLF equation it is possible to predict the mechanical properties of the polymer outside the time scale of the machine (typically 10^-2 to 10^2 Hz), thus extrapolating the results of multi-frequency analysis to a broader range, outside the measurement range of the machine. The Williams–Landel–Ferry model, or WLF for short, is usually used for polymer melts or other fluids that have a glass transition temperature. The model is

μ(T) = μ_0 · 10^{−C_1 (T − T_r)/(C_2 + T − T_r)},

where μ is the viscosity, T is the temperature, and C_1, C_2, T_r and μ_0 are empirical parameters (only three of them are independent from each other). If one selects the parameter T_r based on the glass transition temperature, then the parameters C_1, C_2 become very similar for a wide class of polymers.
Typically, if T_r is set to match the glass transition temperature T_g, we get C_1 ≈ 17.44 and C_2 ≈ 51.6 K. Van Krevelen recommends choosing T_r = T_g + 43 K, with C_1 ≈ 8.86 and C_2 ≈ 101.6 K. Using such universal parameters allows one to estimate the temperature dependence of a polymer by knowing the viscosity at a single temperature. In reality the universal parameters are not that universal, and it is much better to fit the WLF parameters to the experimental data, within the temperature range of interest.
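A minimal sketch of the shift-factor calculation in Python, assuming the commonly quoted "universal" constants C_1 = 17.44 and C_2 = 51.6 K with T_r = T_g (as noted above, fitted constants should be preferred for a real material):

```python
def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """Decadic log of the WLF shift factor a_T at temperature T
    (same units as T_ref, typically kelvin), relative to T_ref."""
    dT = T - T_ref
    return -C1 * dT / (C2 + dT)

# At the reference temperature the shift factor is exactly 1 (log a_T = 0).
assert wlf_shift_factor(400.0, 400.0) == 0.0

# 10 K above T_ref the material relaxes much faster: a_T << 1, meaning a
# measurement at T corresponds to a much longer time at T_ref.
log_aT = wlf_shift_factor(410.0, 400.0)
a_T = 10.0 ** log_aT
```

Note the strongly non-linear temperature dependence: the same 10 K step taken far above T_g changes a_T far less, which is the qualitative signature of WLF behavior.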
https://en.wikipedia.org/wiki/Williams–Landel–Ferry_equation
Willis Whitfield (December 6, 1919 – November 12, 2012 [1] [2]) was an American physicist and inventor of the modern cleanroom, a room with a low level of pollutants used in manufacturing or scientific research. His invention earned him the nickname "Mr. Clean" from Time magazine. [3] [4] Whitfield was born in Rosedale, Oklahoma, the son of a cotton farmer. [3] Between 1942 and 1944, he served as a ground radar crew chief in the Signal Corps, later serving in the Navy until the end of World War II. He graduated from Hardin-Simmons University in 1952 with a Bachelor of Science in physics and mathematics, later pursuing graduate studies at George Washington University. [5] An employee of the Sandia National Laboratories in New Mexico, Whitfield created the initial plans for the cleanroom in 1960. [3] Prior to Whitfield's invention, earlier cleanrooms often had problems with particles and unpredictable airflows. [3] Whitfield solved this problem by designing his cleanrooms with a constant, highly filtered air flow to flush out impurities. [3] Within a few years of its invention, Whitfield's modern cleanroom had generated more than $50 billion in sales worldwide. [3] Whitfield retired from Sandia in 1984. [4] Whitfield died in Albuquerque, New Mexico, on November 12, 2012, at the age of 92. His death was announced by officials at Sandia National Laboratories. [3] Two years after his death, he was inducted into the National Inventors Hall of Fame. [6]
https://en.wikipedia.org/wiki/Willis_Whitfield
In differential geometry, the Willmore conjecture is a lower bound on the Willmore energy of a torus. It is named after the English mathematician Tom Willmore, who conjectured it in 1965. [2] A proof by Fernando Codá Marques and André Neves was announced in 2012 and published in 2014. [1] [3] Let v : M → R³ be a smooth immersion of a compact, orientable surface. Giving M the Riemannian metric induced by v, let H : M → R be the mean curvature (the arithmetic mean of the principal curvatures κ₁ and κ₂ at each point). In this notation, the Willmore energy W(M) of M is given by

W(M) = ∫_M H² dA.

It is not hard to prove that the Willmore energy satisfies W(M) ≥ 4π, with equality if and only if M is an embedded round sphere. Calculation of W(M) for a few examples suggests that there should be a better bound than W(M) ≥ 4π for surfaces with genus g(M) > 0. In particular, calculation of W(M) for tori with various symmetries led Willmore to propose in 1965 the following conjecture, which now bears his name: every smooth immersed torus M in R³ satisfies W(M) ≥ 2π². In 1982, Peter Wai-Kwong Li and Shing-Tung Yau proved the conjecture in the non-embedded case, showing that if f : Σ → S³ is an immersion of a compact surface which is not an embedding, then the Willmore energy is at least 8π. [4] In 2012, Fernando Codá Marques and André Neves proved the conjecture in the embedded case, using the Almgren–Pitts min-max theory of minimal surfaces. [3] [1] Martin Schmidt claimed a proof in 2002, [5] but it was not accepted for publication in any peer-reviewed mathematical journal (although it did not contain a proof of the Willmore conjecture, he proved some other important conjectures in it). Prior to the proof of Marques and Neves, the Willmore conjecture had already been proved for many special cases, such as tube tori (by Willmore himself), and for tori of revolution (by Langer & Singer). [6]
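A worked example shows where the bound 2π² comes from. For the torus of revolution with tube radius r and center-circle radius R > r, a direct computation from the definitions above (a standard exercise, sketched here) gives:

```latex
% Principal curvatures of the torus of revolution:
%   \kappa_1 = \frac{1}{r}, \qquad
%   \kappa_2 = \frac{\cos\theta}{R + r\cos\theta},
% with area element dA = r\,(R + r\cos\theta)\,d\theta\,d\varphi.
W = \int_M H^2 \, dA
  = \frac{\pi}{2r} \int_0^{2\pi}
      \frac{(R + 2r\cos\theta)^2}{R + r\cos\theta}\, d\theta
  = \frac{\pi^2 R^2}{r\sqrt{R^2 - r^2}}.
% Writing t = r/R, this is W = \pi^2 / \bigl(t\sqrt{1 - t^2}\bigr),
% minimized at t = 1/\sqrt{2}, where W = 2\pi^2.
```

So among tori of revolution the energy is minimized exactly at the ratio r/R = 1/√2 with value 2π², which is the shape and the bound in the conjecture.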
https://en.wikipedia.org/wiki/Willmore_conjecture
The Willow processor is a 105-qubit superconducting quantum computing processor developed by Google Quantum AI and manufactured in Santa Barbara, California. [1] On December 9, 2024, Google Quantum AI announced Willow in a Nature paper [2] and company blog post, [1] claiming two accomplishments: first, that Willow can reduce errors exponentially as the number of qubits is scaled, achieving below-threshold quantum error correction; [1] [2] second, that Willow completed a Random Circuit Sampling (RCS) benchmark task in 5 minutes that would take today's fastest supercomputers 10 septillion (10^25) years. [3] [4] Willow is constructed with a square grid of superconducting transmon physical qubits. [2] Improvements over past work were attributed to improved fabrication techniques, participation ratio engineering, and circuit parameter optimization. [2] Willow prompted optimism about accelerating applications in pharmaceuticals, materials science, logistics, drug discovery, and energy grid allocation. [3] Popular media responses discussed its risk of breaking cryptographic systems, [3] but a Google spokesman said that they were still at least 10 years out from breaking RSA. [5] [6] Hartmut Neven, founder and lead of Google Quantum AI, told the BBC that Willow would be used in practical applications, [4] and in the announcement blog post expressed the belief that advanced AI will benefit from quantum computing. [1] Willow follows the release of Foxtail in 2017, Bristlecone in 2018, and Sycamore in 2019. Willow has twice as many qubits as Sycamore [3] and improves upon T1 coherence time from Sycamore's 20 microseconds to 100 microseconds. [1] Willow's 105 qubits have an average connectivity of 3.47.
[1] Hartmut Neven, founder of Google Quantum AI, prompted controversy [7] [8] by claiming that the success of Willow "lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch." [1] Per Google's claim, Willow is the first chip to achieve below-threshold quantum error correction. [1] [2] However, a number of critics have pointed out several limitations:
https://en.wikipedia.org/wiki/Willow_processor
The willpower paradox is the idea that people may do things better by focusing less directly on doing them, implying that the direct exertion of volition may not always be the most powerful way to accomplish a goal. Research suggests that intrapersonal communication (talking to oneself) and maintaining a questioning mind are more likely to bring change. [1] One experiment compared the performance of two groups of people doing anagrams. One group thought about their impending anagram task; the other thought about whether or not they would perform anagrams. The second group performed better than those who knew for sure that they would be working on anagrams. The same researcher, Ibrahim Senay (at the University of Illinois in Urbana), found similarly that repeatedly writing the question "Will I?" was more powerful than writing the traditional affirmation "I will". [2] Michael J. Taleff writes, "Willpower in our field (psychology) is a paradox". Addiction-affected patients are told that willfulness is less effective than willingness. [3]
https://en.wikipedia.org/wiki/Willpower_paradox
Wilma K. Olson (born c. 1945) is the Mary I. Bunting professor at the Rutgers Center for Quantitative Biology (CQB) (formerly known as the BioMaPS Institute for Quantitative Biology) [1] at Rutgers University. Olson has her own research group [2] on the New Brunswick campus. Although she is a polymer chemist by training, her research aims to understand the influence of chemical architecture on the conformation, properties, and interactions of nucleic acids. Olson received her bachelor's degree in chemistry at the University of Delaware in 1967, with honors and distinction. During her studies, she received the A.C.S. (Delaware Section) Student Award. Olson obtained her Ph.D. in 1971 at Stanford University, where she studied the configurational statistics of polynucleotide chains. Her advisor was polymer scientist Paul J. Flory, who would win the Nobel Prize in Chemistry in 1974. Olson remained in the Flory group for postdoctoral research, after which she became a Damon Runyon Cancer Research Foundation Postdoctoral Fellow with geneticist Charles R. Cantor at Columbia University. In 1972, Olson became an assistant professor at Rutgers University, and full professor in 1979. During her time at Rutgers, she was a visiting professor at the University of Basel in Switzerland (1979–1980) and at the Polymer Chemistry Department of Jilin University in Changchun, China (1981). Wilma Olson was involved in setting up the Nucleic Acid Database, in collaboration with Helen M. Berman. Olson studies DNA as a polymer, with atoms and chemical bonds. She studies the interaction between DNA and structural proteins, such as histones, which do not bind to the nuclear bases but to the phosphate–sugar backbone. The energy needed to form circular DNA is also investigated. [3] Olson aims to clarify the role of local structure in the overall folding of RNA, for instance the helices and loops in the ribosome.
A second goal is to uncover structural details of nucleic acid structural transitions, such as those involving different DNA duplexes. This information helps in the design of new drugs and materials. During her career, Wilma Olson has won many awards, [4] among others:
https://en.wikipedia.org/wiki/Wilma_Olson
2017: Award of Merit, Association for Information Science and Technology.
Thomas D. Wilson is a researcher in information science and has been contributing to the field since 1961, when he received his Fellowship of the Library Association. His research has focused on information management and information seeking behaviour. Thomas Daniel Wilson was born in 1935 at Shincliffe Station in County Durham, England. He left school at age 16 to work as a library assistant in Durham County Library. Following national service in the Royal Air Force he returned to Durham County Library and took the examinations of the Library Association to qualify as a professional librarian. He then became head of a small academic library. He then worked as a corporate librarian for the Nuclear Research Centre of C. A. Parsons, at which time he became interested in the use of new technology in information science. After completing his Fellowship of the Library Association he began his academic career in 1961, with a move to the library school of the Municipal College of Commerce, now Northumbria University. He subsequently obtained a BSc degree in economics and sociology, and a doctorate in organization theory. [1] He is now Professor Emeritus at the Information School, University of Sheffield, and has been Visiting Professor at Leeds University Business School, and Professor Catedrático Convidado in the Faculty of Engineering, University of Oporto. He has also been Senior Professor at the University of Borås, Sweden, and is now Professor Emeritus of that university. In the area of information behaviour (a term he invented to cover all activities associated with seeking, acquiring, using and sharing information), Dr. Wilson has focused largely on analyzing how individuals and groups gather and communicate information. Dr. Wilson's best-known study on information seeking behaviour was the INISS project, [2] conducted from 1980 to 1985.
The aim of the project was to increase the efficiency of Social Services workers in the management of information. In addition to the traditional methods of surveys and interviews with those seeking the information, Dr. Wilson and his team also observed social workers and their managers in their day-to-day tasks, to see what techniques were actually used to find, use and communicate information. He observed that, in the environment of a Social Services office, the majority of information (60%) was oral, with a further 10% being notes taken on oral communication. That, combined with a lack of training in using the other information sources available, had led to a lack of organized information being used in Social Services offices. He recommended the establishment of a central library for Social Services information, along with training staff to access that information, as well as more communication within each office on information needs. More recently, Dr. Wilson looked at information seeking behaviour for the British Library Research and Innovation Centre. The resulting paper, "Uncertainty in Information Seeking", [3] identified that information seeking is based on a series of uncertainty resolutions which lead to a problem solution. There are four steps in the process: problem identification, problem definition, problem resolution, and solution statement. At each step of the process, more information must be gathered in order to resolve the uncertainty of that step. The research also established that by providing information seekers with a pattern to follow (such as the four-step uncertainty resolution pattern), the accuracy and volume of information they acquired were increased. Recently, Dr. Wilson has been an advocate for the adoption of activity theory in the area of information behaviour [4] and in information systems research. [5] Throughout his research career, Dr.
Wilson has also been active in the field of information management and was the founder and first editor of the International Journal of Information Management . His research in this area included early studies on business use of the World Wide Web, [ 6 ] [ 7 ] [ 8 ] the relationship of information systems and business performance [ 9 ] and the application of mobile information systems in policing. [ 5 ] Wilson's model of information seeking behaviour was born out of a need to focus the field of information and library science on human use of information, rather than the use of sources. Previous studies undertaken in the field were primarily concerned with systems, specifically, how an individual uses a system. Very little had been written that examined an individual's information needs, or how information seeking behaviour related to other task-oriented behaviours. Wilson's first model came from a presentation at the University of Maryland in 1971 when "an attempt was made to map the processes involved in what was known at the time as "user need research". Published in 1981, Wilson's first model outlined the factors leading to information seeking , and the barriers inhibiting action. [ 10 ] It stated that information-seeking was prompted by an individual's physiological , cognitive , or affective needs, which have their roots in personal factors, role demands, or environmental context. [ 11 ] In order to satisfy these needs, an individual makes demands upon various information systems such as the library and the use of technology. The user may also contact an intermediary such as family, friends and colleagues. The information provided by any of the contacted sources is then evaluated to determine if it satisfies the individual's needs. [ 12 ] This first model was based on an understanding of human information-seeking behaviors that are best understood as three interwoven frameworks: The user, the information system, and the information resource. 
[ 12 ] Wilson later built upon his original model in order to understand the personal circumstance, social role, and environmental context in which an information need is created. [ 10 ] This new model, altered in 1994, incorporated Ellis' stages of information-seeking: starting, browsing, differentiating, monitoring, extracting, verifying and ending. [ 13 ] The new model also displayed the physiological, affective, and cognitive needs that give rise to information seeking behaviour. [ 10 ] The model recognized that an information need is not a need in and of itself, but rather one that stems from a previous psychological need. These needs are generated by the interplay of personal habits and political, economic, and technological factors in an individual's environment. The factors that drive needs can also obstruct an individual's search for information. In 1997 Wilson proposed a third, general model that built upon the previous two. This model incorporated several new elements that helped to demonstrate the stages experienced by the 'person in context', or searcher, when looking for information. These included an intermediate stage between the acknowledgement of a need and the initiation of action, [ 14 ] a redefining of the barriers he proposed in his second model as "intervening variables" [ 15 ] to show that factors can be supportive or preventative, [ 15 ] a feedback loop, and an "activating mechanism" stage. [ 14 ] 'Activating mechanisms' identify the relevant impetus that prompts a decision to seek information, and integrate behavioural theories such as 'stress/coping theory', 'risk/reward theory' and 'social learning theory'. In 1999, Wilson developed a nested model that brought together different areas of research in the study of information behavior.
[ 15 ] [ 16 ] The model represented research topics as a series of nested fields, with information behavior as the general area of investigation, information-seeking behavior as its sub-set, and information searching behavior as a further sub-set. [ 15 ] Wilson's model has changed over time, and will continue to evolve as technology and the nature of information change. [ 17 ] The model has been cited and discussed by leaders in the information science field, and can be integrated with other significant theories of information behaviour. [ 10 ] : 35 Wilson describes the model diagrams as elaborating on one another, saying "no one model stands alone and in using the model to guide the development of research ideas, it is necessary to examine and reflect upon all of the diagrams". [ 10 ] : 35 Recently, there has been a shift from theorizing on research already conducted on information behaviour, to pursuing "research within specific theoretical contexts". [ 17 ] Wilson's model is "aimed at linking theories to action"; [ 10 ] : 35 however, it is this move from theory to action that is proving slow. Through numerous qualitative studies, "we now have many in depth investigations into the information seeking behavior of small samples of people". [ 17 ] Despite these studies, there have not been many links made between this research and changes in policy or practice. [ 17 ] In addition to this work, Dr. Wilson also founded Information Research, an online journal for the information sciences. This is a freely available, open access journal, which constitutes an excellent resource for IS students and researchers. He self-published the journal until 2017, when he gave ownership to the Swedish School of Library and Information Science at the University of Borås. It is now hosted by the Publicera site of the Royal Library, the Swedish National Library. On his retirement from the University of Sheffield, Dr.
Wilson continued to engage in research as a member of the AIMTech Research Group at the University of Leeds Business School, which carried out projects on information in policing, and through projects at the Swedish School of Library and Information Science at the University of Borås, where he participated in a number of European Union projects, including the SHAMAN project on long-term digital preservation. [ 18 ] In 2012, together with colleagues at the University of Borås and Gothenburg University, he was awarded a grant of 11.8 million Swedish kronor ($1.7 million) by Vetenskapsrådet (the Swedish Research Council) for a programme of research into the production, distribution and use of e-books in Sweden. The research led to the publication of Books on Screens: Players in the Swedish e-book Market, published by Nordicom, Gothenburg, Sweden. Since his retirement from the University of Borås, Dr. Wilson has continued to pursue research independently and with former colleagues. His most recent publications deal with the motivations for promoting misinformation, [ 19 ] the early study of information seeking behaviour in psychology, [ 20 ] and the role of curiosity in information seeking. [ 21 ]
https://en.wikipedia.org/wiki/Wilson's_model_of_information_behavior
In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is (using the notations of modular arithmetic), the factorial (n − 1)! = 1 × 2 × 3 × ⋯ × (n − 1) satisfies (n − 1)! ≡ −1 (mod n) exactly when n is a prime number. In other words, any integer n > 1 is a prime number if, and only if, (n − 1)! + 1 is divisible by n. [ 1 ] The theorem was first stated by Ibn al-Haytham c. 1000 AD. [ 2 ] Edward Waring announced the theorem in 1770 without proving it, crediting his student John Wilson for the discovery. [ 3 ] Lagrange gave the first proof in 1771. [ 4 ] There is evidence that Leibniz was also aware of the result a century earlier, but never published it. [ 5 ] For each of the values of n from 2 to 30, the following table shows the number (n − 1)! and the remainder when (n − 1)! is divided by n. (In the notation of modular arithmetic, the remainder when m is divided by n is written m mod n.) The background color is blue for prime values of n, gold for composite values. As a biconditional (if and only if) statement, the proof has two halves: to show that equality does not hold when n is composite, and to show that it does hold when n is prime. Suppose that n is composite. Then it is divisible by some prime number q with 2 ≤ q < n. Because q divides n, there is an integer k such that n = qk. Suppose for the sake of contradiction that (n − 1)! were congruent to −1 modulo n. Then (n − 1)! would also be congruent to −1 modulo q: indeed, if (n − 1)!
≡ −1 (mod n), then (n − 1)! = nm − 1 = (qk)m − 1 = q(km) − 1 for some integer m, and consequently (n − 1)! is one less than a multiple of q. On the other hand, since 2 ≤ q ≤ n − 1, one of the factors in the expanded product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 is q. Therefore (n − 1)! ≡ 0 (mod q). This is a contradiction; therefore it is not possible that (n − 1)! ≡ −1 (mod n) when n is composite. In fact, more is true. With the sole exception of the case n = 4, where 3! = 6 ≡ 2 (mod 4), if n is composite then (n − 1)! is congruent to 0 modulo n. The proof can be divided into two cases: First, if n can be factored as the product of two unequal numbers, n = ab, where 2 ≤ a < b < n, then both a and b will appear as factors in the product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1, and so (n − 1)! is divisible by ab = n. If n has no such factorization, then it must be the square of some prime q larger than 2. But then 2q < q² = n, so both q and 2q will be factors of (n − 1)!, and so n divides (n − 1)! in this case as well.
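Both the biconditional and the stronger composite-case statement can be checked directly for the small values tabulated above. A minimal sketch (the helper name `is_prime` is illustrative, not from the source):

```python
from math import factorial

def is_prime(n):
    """Trial division; adequate for the small n checked here."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for n in range(2, 31):
    r = factorial(n - 1) % n
    # Wilson's theorem: (n-1)! ≡ -1 (mod n), i.e. r == n - 1, iff n is prime.
    assert (r == n - 1) == is_prime(n)
    # Stronger statement: for composite n other than 4, (n-1)! ≡ 0 (mod n).
    if not is_prime(n) and n != 4:
        assert r == 0
print("checked n = 2..30")
```

Note that this is exactly why Wilson's theorem is useless as a practical primality test: computing (n − 1)! mod n already takes n − 2 multiplications.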
The first two proofs below use the fact that the residue classes modulo a prime number form a finite field (specifically, a prime field). [ 6 ] The result is trivial when p = 2, so assume p is an odd prime, p ≥ 3. Since the residue classes modulo p form a field, every non-zero residue a has a unique multiplicative inverse a⁻¹. Euclid's lemma implies [ a ] that the only values of a for which a ≡ a⁻¹ (mod p) are a ≡ ±1 (mod p). Therefore, with the exception of ±1, the factors in the expanded form of (p − 1)! can be arranged in disjoint pairs such that the product of each pair is congruent to 1 modulo p. This proves Wilson's theorem. For example, for p = 11, one has 10! = [(1 · 10)] · [(2 · 6)(3 · 4)(5 · 9)(7 · 8)] ≡ [−1] · [1 · 1 · 1 · 1] ≡ −1 (mod 11). Again, the result is trivial for p = 2, so suppose p is an odd prime, p ≥ 3. Consider the polynomial g(x) = (x − 1)(x − 2) ⋯ (x − (p − 1)). It has degree p − 1, leading term x^(p−1), and constant term (p − 1)!. Its p − 1 roots are 1, 2, ..., p − 1. Now consider h(x) = x^(p−1) − 1. It also has degree p − 1 and leading term x^(p−1). Modulo p, Fermat's little theorem says it also has the same p − 1 roots, 1, 2, ..., p − 1. Finally, consider f(x) = g(x) − h(x). It has degree at most p − 2 (since the leading terms cancel), and modulo p also has the p − 1 roots 1, 2, ..., p − 1. But Lagrange's theorem says it cannot have more than p − 2 roots. Therefore, f must be identically zero (mod p), so its constant term is (p − 1)! + 1 ≡ 0 (mod p). This is Wilson's theorem.
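The pairing argument can be made concrete: every residue from 2 to p − 2 pairs off with a distinct inverse, leaving only the factors 1 and p − 1 ≡ −1 unpaired. A short sketch (the function name is illustrative; `pow(a, -1, p)` computes a modular inverse in Python 3.8+):

```python
def inverse_pairs(p):
    """Pair each residue in 2..p-2 with its multiplicative inverse mod p (p an odd prime)."""
    pairs, seen = [], set()
    for a in range(2, p - 1):
        if a in seen:
            continue
        inv = pow(a, -1, p)       # modular inverse of a modulo p
        seen.update({a, inv})
        pairs.append((a, inv))
    return pairs

p = 11
pairs = inverse_pairs(p)
print(pairs)                      # the pairs used in the p = 11 example above
product = 1
for a, b in pairs:
    assert a * b % p == 1         # each pair multiplies to 1 mod p
    product = product * a * b % p
product = product * 1 * (p - 1) % p   # the unpaired factors 1 and p-1
assert product == p - 1               # i.e. (p-1)! ≡ -1 (mod p)
```

For p = 11 this reproduces exactly the grouping (2 · 6)(3 · 4)(5 · 9)(7 · 8) from the worked example.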
It is possible to deduce Wilson's theorem from a particular application of the Sylow theorems. Let p be a prime. It is immediate to deduce that the symmetric group S_p has exactly (p − 1)! elements of order p, namely the p-cycles C_p. On the other hand, each Sylow p-subgroup in S_p is a copy of C_p. Hence it follows that the number of Sylow p-subgroups is n_p = (p − 2)!. The third Sylow theorem implies (p − 2)! ≡ 1 (mod p). Multiplying both sides by (p − 1) gives (p − 1)! ≡ p − 1 ≡ −1 (mod p), that is, the result. In practice, Wilson's theorem is useless as a primality test because computing (n − 1)! modulo n for large n is computationally complex. [ 7 ] Using Wilson's theorem, for any odd prime p = 2m + 1, we can rearrange the left-hand side of 1 · 2 ⋯ (p − 1) ≡ −1 (mod p) to obtain the equality 1 · (p − 1) · 2 · (p − 2) ⋯ m · (p − m) ≡ 1 · (−1) · 2 · (−2) ⋯ m · (−m) ≡ −1 (mod p). This becomes ∏_{j=1}^{m} j² ≡ (−1)^{m+1} (mod p), or (m!)² ≡ (−1)^{m+1} (mod p). We can use this fact to prove part of a famous result: for any prime p such that p ≡ 1 (mod 4), the number (−1) is a square (quadratic residue) mod p. For this, suppose p = 4k + 1 for some integer k. Then we can take m = 2k above, and we conclude that (m!)² is congruent to (−1) (mod p). Wilson's theorem has been used to construct formulas for primes, but they are too slow to have practical value. Wilson's theorem allows one to define the p-adic gamma function.
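The identity (m!)² ≡ (−1)^{m+1} (mod p), and the resulting explicit square root of −1 for p ≡ 1 (mod 4), can be verified numerically. A minimal sketch (the helper `primes_upto` is illustrative):

```python
from math import factorial

def primes_upto(n):
    """All primes below n, by trial division."""
    return [p for p in range(2, n) if all(p % d for d in range(2, int(p**0.5) + 1))]

for p in primes_upto(200):
    if p == 2:
        continue
    m = (p - 1) // 2
    lhs = pow(factorial(m), 2, p)
    rhs = (-1) ** (m + 1) % p
    assert lhs == rhs              # (m!)^2 ≡ (-1)^(m+1) (mod p)
    if p % 4 == 1:
        assert lhs == p - 1        # -1 is a quadratic residue, with square root m!
```

For example, for p = 13 one gets m = 6 and 6! = 720 ≡ 5 (mod 13), with 5² = 25 ≡ −1 (mod 13).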
Gauss proved [ 8 ] [ 9 ] that the product ∏ k, taken over 1 ≤ k ≤ m with gcd(k, m) = 1, is congruent to −1 (mod m) if m = 4, p^α, or 2p^α, and congruent to 1 (mod m) otherwise, where p represents an odd prime and α a positive integer. That is, the product of the positive integers less than m and relatively prime to m is one less than a multiple of m when m is equal to 4, or a power of an odd prime, or twice a power of an odd prime; otherwise, the product is one more than a multiple of m. The values of m for which the product is −1 are precisely the ones where there is a primitive root modulo m. Original: Inoltre egli intravide anche il teorema di Wilson, come risulta dall'enunciato seguente: "Productus continuorum usque ad numerum qui antepraecedit datum divisus per datum relinquit 1 (vel complementum ad unum?) si datus sit primitivus. Si datus sit derivativus relinquet numerum qui cum dato habeat communem mensuram unitate majorem." Egli non giunse pero a dimostrarlo. Translation: In addition, he [Leibniz] also glimpsed Wilson's theorem, as shown in the following statement: "The product of all integers preceding the given integer, when divided by the given integer, leaves 1 (or the complement of 1?) if the given integer be prime. If the given integer be composite, it leaves a number which has a common factor with the given integer [which is] greater than one." However, he didn't succeed in proving it. The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
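Gauss's generalization can also be checked by brute force. The sketch below classifies moduli into those possessing a primitive root (1, 2, 4, p^α, 2p^α) and compares the product of units against ±1; the helper names are illustrative:

```python
from math import gcd

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def has_primitive_root(m):
    """True iff m is 1, 2, 4, p^a, or 2p^a for an odd prime p."""
    if m in (1, 2, 4):
        return True
    if m % 4 == 0:
        return False
    core = m // 2 if m % 2 == 0 else m      # strip a single factor of 2
    # core must now be a power of a single odd prime p
    p = next(d for d in range(3, core + 1, 2) if core % d == 0)
    if not is_prime(p):
        return False
    while core % p == 0:
        core //= p
    return core == 1

for m in range(3, 200):
    prod = 1
    for k in range(1, m):
        if gcd(k, m) == 1:
            prod = prod * k % m
    # Gauss: product of units is -1 mod m iff m has a primitive root, else +1.
    expected = m - 1 if has_primitive_root(m) else 1
    assert prod == expected
```

For instance m = 15 has no primitive root, and indeed 1 · 2 · 4 · 7 · 8 · 11 · 13 · 14 ≡ 1 (mod 15), whereas m = 9 gives 1 · 2 · 4 · 5 · 7 · 8 ≡ −1 (mod 9).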
https://en.wikipedia.org/wiki/Wilson's_theorem
In quantum field theory, Wilson loops are gauge-invariant operators arising from the parallel transport of gauge variables around closed loops. They encode all gauge information of the theory, allowing for the construction of loop representations which fully describe gauge theories in terms of these loops. In pure gauge theory they play the role of order operators for confinement, where they satisfy what is known as the area law. Originally formulated by Kenneth G. Wilson in 1974, they were used to construct links and plaquettes, which are the fundamental parameters in lattice gauge theory. [ 1 ] Wilson loops fall into the broader class of loop operators, with some other notable examples being 't Hooft loops, which are magnetic duals to Wilson loops, and Polyakov loops, which are the thermal version of Wilson loops. To properly define Wilson loops in gauge theory requires considering the fiber bundle formulation of gauge theories. [ 2 ] Here, for each point in the d-dimensional spacetime M there is a copy of the gauge group G, forming what is known as a fiber of the fiber bundle. These fiber bundles are called principal bundles. Locally the resulting space looks like ℝ^d × G, although globally it can have some twisted structure depending on how different fibers are glued together. The issue that Wilson lines resolve is how to compare points on fibers at two different spacetime points. This is analogous to parallel transport in general relativity, which compares tangent vectors that live in the tangent spaces at different points. For principal bundles there is a natural way to compare different fiber points through the introduction of a connection, which is equivalent to introducing a gauge field. This is because a connection is a way to separate out the tangent space of the principal bundle into two subspaces known as the vertical and horizontal subspaces.
[ 3 ] The former consists of all vectors pointing along the fiber G, while the latter consists of vectors that are perpendicular to the fiber. This allows for the comparison of fiber values at different spacetime points by connecting them with curves in the principal bundle whose tangent vectors always live in the horizontal subspace, so the curve is always perpendicular to any given fiber. If the starting fiber is at coordinate x_i with a starting point of the identity g_i = e, then to see how this changes when moving to another spacetime coordinate x_f, one needs to consider some spacetime curve γ : [0, 1] → M between x_i and x_f. The corresponding curve in the principal bundle, known as the horizontal lift of γ(t), is the curve γ̃(t) such that γ̃(0) = g_i and its tangent vectors always lie in the horizontal subspace. The fiber bundle formulation of gauge theory reveals that the Lie-algebra-valued gauge field A_μ(x) = A_μ^a(x) T^a is equivalent to the connection that defines the horizontal subspace, so this leads to a differential equation for the horizontal lift. This has a unique formal solution called the Wilson line between the two points, W[x_i, x_f] = P exp(i ∫_γ A_μ dx^μ), where P is the path-ordering operator, which is unnecessary for abelian theories. The horizontal lift starting at some initial fiber point other than the identity merely requires multiplication by the initial element of the original horizontal lift.
More generally, it holds that if γ̃′(0) = γ̃(0)g then γ̃′(t) = γ̃(t)g for all t ≥ 0. Under a local gauge transformation g(x), the Wilson line transforms as W[x_i, x_f] → g(x_i) W[x_i, x_f] g(x_f)⁻¹. This gauge transformation property is often used to directly introduce the Wilson line in the presence of matter fields φ(x) transforming in the fundamental representation of the gauge group, where the Wilson line is an operator that makes the combination φ(x_i)† W[x_i, x_f] φ(x_f) gauge invariant. [ 4 ] It allows for the comparison of the matter field at different points in a gauge-invariant way. Alternatively, the Wilson lines can also be introduced by adding an infinitely heavy test particle charged under the gauge group. Its charge forms a quantized internal Hilbert space, which can be integrated out, yielding the Wilson line as the world-line of the test particle. [ 5 ] This works in quantum field theory whether or not there actually is any matter content in the theory. However, the swampland conjecture known as the completeness conjecture claims that in a consistent theory of quantum gravity, every Wilson line and 't Hooft line of a particular charge consistent with the Dirac quantization condition must have a corresponding particle of that charge present in the theory. [ 6 ] Decoupling these particles by taking the infinite mass limit no longer works, since this would form black holes. The trace of closed Wilson lines is a gauge-invariant quantity known as the Wilson loop, W[γ] = tr[P exp(i ∮_γ A_μ dx^μ)].
Mathematically, the term within the trace is known as the holonomy, which describes a mapping of the fiber into itself upon horizontal lift along a closed loop. The set of all holonomies itself forms a group, which for principal bundles must be a subgroup of the gauge group. Wilson loops satisfy the reconstruction property, whereby knowing the set of Wilson loops for all possible loops allows for the reconstruction of all gauge-invariant information about the gauge connection. [ 7 ] Formally, the set of all Wilson loops forms an overcomplete basis of solutions to the Gauss' law constraint. The set of all Wilson lines is in one-to-one correspondence with the representations of the gauge group. This can be reformulated in terms of Lie algebra language using the weight lattice Λ_w of the gauge group. In this case the types of Wilson loops are in one-to-one correspondence with Λ_w / W, where W is the Weyl group. [ 8 ] An alternative view of Wilson loops is to consider them as operators acting on the Hilbert space of states in Minkowski signature. [ 5 ] Since the Hilbert space lives on a single time slice, the only Wilson loops that can act as operators on this space are ones formed using spacelike loops. Such operators W[γ] create a closed loop of electric flux, which can be seen by noting that the electric field operator E^i is nonzero on the loop, E^i W[γ]|0⟩ ≠ 0, but it vanishes everywhere else. Using Stokes' theorem, it follows that the spatial loop measures the magnetic flux through the loop.
[ 9 ] Since temporal Wilson lines correspond to the configuration created by infinitely heavy stationary quarks, the Wilson loop associated with a rectangular loop γ, with two temporal components of length T and two spatial components of length r, can be interpreted as a quark–antiquark pair at fixed separation. Over large times the vacuum expectation value of the Wilson loop projects out the state with the minimum energy, which is the potential V(r) between the quarks. [ 10 ] The excited states with energy V(r) + ΔE are exponentially suppressed with time, and so the expectation value goes as ⟨W[γ]⟩ ∝ e^{−V(r)T}, making the Wilson loop useful for calculating the potential between quark pairs. This potential must necessarily be a monotonically increasing and concave function of the quark separation. [ 11 ] [ 12 ] Since spacelike Wilson loops are not fundamentally different from the temporal ones, the quark potential is really directly related to the pure Yang–Mills theory structure, and is a phenomenon independent of the matter content. [ 13 ] Elitzur's theorem ensures that local non-gauge-invariant operators cannot have non-zero expectation values. Instead one must use non-local gauge-invariant operators as order parameters for confinement. The Wilson loop is exactly such an order parameter in pure Yang–Mills theory, where in the confining phase its expectation value follows the area law ⟨W[γ]⟩ ∼ e^{−σA[γ]} [ 14 ] for a loop that encloses an area A[γ]. This is motivated from the potential between infinitely heavy test quarks, which in the confinement phase is expected to grow linearly, V(r) ∼ σr, where σ is known as the string tension.
Meanwhile, in the Higgs phase the expectation value follows the perimeter law ⟨W[γ]⟩ ∼ e^{−bL[γ]}, where L[γ] is the perimeter length of the loop and b is some constant. The area law of Wilson loops can be used to demonstrate confinement in certain low-dimensional theories directly, such as for the Schwinger model, whose confinement is driven by instantons. [ 15 ] In lattice field theory, Wilson lines and loops play a fundamental role in formulating gauge fields on the lattice. The smallest Wilson lines on the lattice, those between two adjacent lattice points, are known as links, with a single link starting from a lattice point n going in the μ direction denoted by U_μ(n). Four links around a single square are known as a plaquette, with their trace forming the smallest Wilson loop. [ 16 ] It is these plaquettes that are used to construct the lattice gauge action known as the Wilson action. Larger Wilson loops are expressed as products of link variables along some loop γ, denoted by W[γ] = tr ∏_{(n,μ) ∈ γ} U_μ(n). [ 17 ] These Wilson loops are used to study confinement and quark potentials numerically. Linear combinations of Wilson loops are also used as interpolating operators that give rise to glueball states. [ 18 ] The glueball masses can then be extracted from the correlation function between these interpolators. [ 19 ] The lattice formulation of the Wilson loops also allows for an analytic demonstration of confinement in the strongly coupled phase, assuming the quenched approximation where quark loops are neglected.
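The gauge invariance of the traced plaquette can be demonstrated numerically. The sketch below puts random SU(2) link variables on a small periodic 2D lattice and applies a random local gauge transformation U_μ(n) → g(n) U_μ(n) g(n + μ̂)†; the lattice size, seed, and function names are illustrative choices, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix built from a normalized quaternion."""
    a, b, c, d = rng.normal(size=4)
    a, b, c, d = np.array([a, b, c, d]) / np.sqrt(a*a + b*b + c*c + d*d)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

L = 4  # 4x4 periodic lattice; links U[x, y, mu] with mu = 0 (x-dir), 1 (y-dir)
U = np.array([[[random_su2() for mu in range(2)] for y in range(L)] for x in range(L)])

def plaquette(links, x, y):
    """Trace of the 1x1 Wilson loop (plaquette) based at site (x, y)."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return np.trace(links[x, y, 0] @ links[xp, y, 1]
                    @ links[x, yp, 0].conj().T @ links[x, y, 1].conj().T)

before = plaquette(U, 1, 2)

# Local gauge transformation: U_mu(n) -> g(n) U_mu(n) g(n + mu)^dagger
g = np.array([[random_su2() for y in range(L)] for x in range(L)])
V = np.empty_like(U)
for x in range(L):
    for y in range(L):
        V[x, y, 0] = g[x, y] @ U[x, y, 0] @ g[(x + 1) % L, y].conj().T
        V[x, y, 1] = g[x, y] @ U[x, y, 1] @ g[x, (y + 1) % L].conj().T

after = plaquette(V, 1, 2)
assert np.allclose(before, after)   # the traced loop is gauge invariant
```

The cancellation works because the g factors at every intermediate site meet their inverses along the loop, leaving g(n) … g(n)† inside the trace, which is cyclic.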
[ 20 ] This is done by expanding out the Wilson action as a power series of traces of plaquettes, where the first non-vanishing term in the expectation value of the Wilson loop in an SU(3) gauge theory gives rise to an area law with a string tension of the form σ = −(1/a²) ln(β/18), [ 21 ] [ 22 ] where β = 6/g² is the inverse coupling constant and a is the lattice spacing. While this argument holds for both the abelian and non-abelian case, compact electrodynamics only exhibits confinement at strong coupling, with there being a phase transition to the Coulomb phase at β ∼ 1.01, leaving the theory deconfined at weak coupling. [ 23 ] [ 24 ] Such a phase transition is not believed to exist for SU(N) gauge theories at zero temperature; instead they exhibit confinement at all values of the coupling constant. Similarly to the functional derivative, which acts on functions of functions, functions of loops admit two types of derivatives, called the area derivative and the perimeter derivative. To define the former, consider a contour γ and another contour γ_{δσ_{μν}} which is the same contour but with an extra small loop at x in the μ–ν plane with area δσ_{μν} = dx_μ ∧ dx_ν.
Then the area derivative of the loop functional F[γ] is defined through the same idea as the usual derivative, as the normalized difference between the functional of the two loops, [ 25 ] δF[γ]/δσ_{μν}(x) = lim_{δσ_{μν} → 0} (F[γ_{δσ_{μν}}] − F[γ])/δσ_{μν}. The perimeter derivative is similarly defined, whereby now γ_{δx_μ} is a slight deformation of the contour γ which at position x has a small extruding loop of length δx_μ in the μ direction and of zero area. The perimeter derivative ∂_μ^x of the loop functional is then defined as ∂_μ^x F[γ] = lim_{δx_μ → 0} (F[γ_{δx_μ}] − F[γ])/δx_μ. In the large-N limit, the Wilson loop vacuum expectation value satisfies a closed functional equation called the Makeenko–Migdal equation, [ 26 ] written in terms of the decomposition γ = γ_{xy} ∪ γ_{yx}, with γ_{xy} being a line that does not close from x to y, with the two points however close to each other. The equation can also be written for finite N, but in this case it does not factorize and instead leads to expectation values of products of Wilson loops, rather than the product of their expectation values. [ 27 ] This gives rise to an infinite chain of coupled equations for different Wilson loop expectation values, analogous to the Schwinger–Dyson equations. The Makeenko–Migdal equation has been solved exactly in two-dimensional U(∞) theory. [ 28 ] Gauge groups that admit fundamental representations in terms of N × N matrices have Wilson loops that satisfy a set of identities called the Mandelstam identities, with these identities reflecting the particular properties of the underlying gauge group.
[ 29 ] The identities apply to loops formed from two or more subloops, with γ = γ₂ ∘ γ₁ being a loop formed by first going around γ₁ and then going around γ₂. The Mandelstam identity of the first kind states that W[γ₁ ∘ γ₂] = W[γ₂ ∘ γ₁], with this holding for any gauge group in any dimension. Mandelstam identities of the second kind are acquired by noting that in N dimensions, any object with N + 1 totally antisymmetric indices vanishes, meaning that δ^{a₁}_{[b₁} δ^{a₂}_{b₂} ⋯ δ^{a_{N+1}}_{b_{N+1}]} = 0. In the fundamental representation, the holonomies used to form the Wilson loops are N × N matrix representations of the gauge groups. Contracting N + 1 holonomies with the delta functions yields a set of identities between Wilson loops. These can be written in terms of the objects M_K, defined iteratively starting from M₁[γ] = W[γ]. In this notation the Mandelstam identities of the second kind state that M_{N+1} vanishes. [ 30 ] For example, for a U(1) gauge group this gives W[γ₁]W[γ₂] = W[γ₁ ∘ γ₂]. If the fundamental representation are matrices of unit determinant, then it also holds that M_N(γ, …, γ) = 1. For example, applying this identity to SU(2) gives W[γ]² = W[γ ∘ γ] + 2, the trace identity tr(U)² = tr(U²) + 2 valid for any 2 × 2 matrix U of unit determinant. Fundamental representations consisting of unitary matrices satisfy W[γ] = W*[γ⁻¹].
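For SU(2), the Mandelstam identity of the second kind for two loops reduces to the trace identity tr(U)tr(V) = tr(UV) + tr(UV⁻¹), a consequence of the Cayley–Hamilton theorem for unit-determinant 2 × 2 matrices (V + V⁻¹ = tr(V)·1). A quick numerical check with random SU(2) holonomies (seed and helper name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random SU(2) matrix built from a normalized quaternion."""
    a, b, c, d = rng.normal(size=4)
    a, b, c, d = np.array([a, b, c, d]) / np.sqrt(a*a + b*b + c*c + d*d)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

# Holonomies of two loops; W[γ] is the trace of the holonomy.
U, V = random_su2(), random_su2()

lhs = np.trace(U) * np.trace(V)                          # W[γ1] W[γ2]
rhs = np.trace(U @ V) + np.trace(U @ np.linalg.inv(V))   # W[γ1∘γ2] + W[γ1∘γ2⁻¹]
assert np.allclose(lhs, rhs)

# Special case γ1 = γ2 = γ: W[γ]² = W[γ∘γ] + 2
assert np.allclose(np.trace(U)**2, np.trace(U @ U) + 2)
```

Setting V = U recovers the single-loop identity W[γ]² = W[γ ∘ γ] + 2, since U⁻¹ has the same trace as U and tr(1) = 2.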
Furthermore, while the equality W[I] = N holds for all gauge groups in the fundamental representations, for unitary groups it moreover holds that |W[γ]| ≤ N. Since Wilson loops are operators of the gauge fields, the regularization and renormalization of the underlying Yang–Mills theory fields and couplings does not prevent the Wilson loops from requiring additional renormalization corrections. In a renormalized Yang–Mills theory, the particular way that the Wilson loops get renormalized depends on the geometry of the loop under consideration: smooth loops acquire a multiplicative renormalization associated with the perimeter of the loop, while loops with cusps or self-intersections require additional renormalization factors. [ 31 ] [ 32 ] [ 33 ] [ 34 ] Wilson loops play a role in the theory of scattering amplitudes, where a set of dualities between them and special types of scattering amplitudes has been found. [ 35 ] These were first suggested at strong coupling using the AdS/CFT correspondence. [ 36 ] For example, in 𝒩 = 4 supersymmetric Yang–Mills theory, maximally helicity violating amplitudes factorize into a tree-level component and a loop-level correction. [ 37 ] This loop-level correction does not depend on the helicities of the particles, but it was found to be dual to certain polygonal Wilson loops in the large N limit, up to finite terms. While this duality was initially only suggested in the maximum helicity violating case, there are arguments that it can be extended to all helicity configurations by defining appropriate supersymmetric generalizations of the Wilson loop. [ 38 ] In compactified theories, zero mode gauge field states that are locally pure gauge configurations but are globally inequivalent to the vacuum are parameterized by closed Wilson lines in the compact direction. The presence of these on a compactified open string theory is equivalent under T-duality to a theory with non-coincident D-branes, whose separations are determined by the Wilson lines.
[ 39 ] Wilson lines also play a role in orbifold compactifications where their presence leads to greater control of gauge symmetry breaking , giving a better handle on the final unbroken gauge group and also providing a mechanism for controlling the number of matter multiplets left after compactification. [ 40 ] These properties make Wilson lines important in compactifications of superstring theories. [ 41 ] [ 42 ] In a topological field theory , the expectation value of Wilson loops does not change under smooth deformations of the loop since the field theory does not depend on the metric . [ 43 ] For this reason, Wilson loops are key observables in these theories and are used to calculate global properties of the manifold . In 2 + 1 {\displaystyle 2+1} dimensions they are closely related to knot theory , with the expectation value of a product of loops depending only on the manifold structure and on how the loops are tied together. This led to the famous connection made by Edward Witten , who used Wilson loops in Chern–Simons theory to relate their expectation values to the Jones polynomials of knot theory. [ 44 ]
https://en.wikipedia.org/wiki/Wilson_loop
The Wilson ratio of a metal is the dimensionless ratio of the zero-temperature magnetic susceptibility to the coefficient of the linear temperature term in the electronic specific heat . The relative value of the Wilson ratio, compared to the Wilson ratio for the non-interacting Fermi gas , can provide insight into the types of interactions present. The Wilson ratio can be used to characterize strongly correlated Fermi liquids. [ 1 ] The Fermi liquid theory explains the behaviour of metals at very low temperatures. Two important features of a metal which obeys this theory are: Both of these quantities, however, are proportional to the electronic density of states at the Fermi energy. Their ratio is a dimensionless quantity called the Wilson (or the Sommerfeld–Wilson) ratio, [ 2 ] defined as: After substituting the values of χ P (Pauli susceptibility) and C elec (electronic contribution to the specific heat), obtained using Sommerfeld theory, the value obtained for R w in the case of a free electron gas is 1. In the case of real Fermi-liquid metals, the ratio can differ significantly from 1. The difference arises due to electron–electron interactions within the system. These tend to change the effective electronic mass , which affects both the specific heat and the magnetic susceptibility. Whether or not this increase in both is given by the same multiplicative factor is shown by the Wilson ratio. In some cases, electron–electron interactions give rise to an additional increase in susceptibility. The converse is also true, i.e. a deviation of the experimental value of R w from 1 may indicate strong electronic correlations. [ 3 ] Very high Wilson ratios (above 2) indicate nearness to ferromagnetism.
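The free-electron result quoted above can be made concrete in code. The sketch below (an illustration, not from the article) evaluates the standard form R_w = π² k_B² χ / (3 μ_B² γ) and confirms that it comes out exactly 1 when the Pauli susceptibility and the Sommerfeld specific-heat coefficient are both built from the same density of states; the density-of-states value itself is an arbitrary assumption.

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
MU_B = 9.2740100783e-24  # Bohr magneton, J/T

def wilson_ratio(chi, gamma):
    """Sommerfeld-Wilson ratio R_w = pi^2 k_B^2 chi / (3 mu_B^2 gamma)."""
    return (math.pi ** 2 * K_B ** 2 * chi) / (3 * MU_B ** 2 * gamma)

# Free-electron check: both quantities are proportional to the density of
# states g at the Fermi energy, so the ratio must come out exactly 1.
g = 1.0e47                                          # arbitrary assumed density of states
chi_pauli = MU_B ** 2 * g                           # Pauli susceptibility
gamma_sommerfeld = (math.pi ** 2 / 3) * K_B ** 2 * g  # specific-heat coefficient
assert abs(wilson_ratio(chi_pauli, gamma_sommerfeld) - 1.0) < 1e-12
```

An experimental R_w significantly above 1 computed this way would, as the text notes, point to strong electronic correlations.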
https://en.wikipedia.org/wiki/Wilson_ratio
WinCustomize is a website that provides content for users to customize Microsoft Windows . The site hosts thousands of skins , themes , icons , wallpapers , and other graphical content to modify the Windows graphical user interface . There is some premium or paid content; however, the vast majority of the content is free for users to download. WinCustomize was launched in March 2001 by Brad Wardell and Pat Ford, both of whom work at Stardock . After the dot-com recession had taken down many popular skin sites, WinCustomize quickly grew in popularity due to a combination of a wide variety of content, uptime reliability, and being the preferred content destination for Stardock customers. The site has grown at a far greater pace than its founders had anticipated. It has managed to avoid having to put many limitations on users or having to resort to pop-up advertising because its corporate patron Stardock subsidizes its costs. This growth has prompted several site redesigns to offer improved functionality and reliability to users. Since launch, WinCustomize has undergone several iterations: WinCustomize 2k5 — Launched at the end of 2004, WinCustomize was redesigned for improved stability and added functionality, such as personal pages for subscribers, an articles system, tutorials, etc. WinCustomize 2k7 — Launched January 15, 2007, WC2k7 was a fundamental rewrite using ASP.NET . The focus was to build a foundation that was easier to maintain and, in the future, expand. WinCustomize v6 — Planned for late 2008/early 2009, the WC v6 project aims to be a major revision to how users navigate and interact with the site and the community as a whole. Where 2k7 was focused on the core codebase, v6 is focused on the user interface and experience. In July 2007 the WinCustomize Wiki was launched. [ 1 ] WinCustomize 2010 — WinCustomize 2010 was launched on April 20, 2010. This revision represents a major change in the site's look and navigation for users.
A guided tour of the new site was published for users. [ 2 ] Programs heavily associated with Windows customization include:
https://en.wikipedia.org/wiki/WinCustomize
winPenPack (often shortened to wPP) is an open-source software application suite for Windows. It is a collection of open source applications that have been modified to be executed directly from a USB flash drive (or any other removable storage device) without prior installation. WinPenPack programs are distributed as free software , and can be downloaded individually or grouped into suites. The creator, Danilo Leggieri, put the site winPenPack.com online on 23 November 2005. The project and the associated community then grew quickly. Since that date, 15 new versions and hundreds of open-source portable applications were released. The project is well known in Italy and abroad. It is hosted on SourceForge . The collections are regularly distributed bundled with popular PC magazines in Italy and worldwide. A thriving community of users is actively contributing to the growth of the project. The site currently hosts various projects created and suggested by forum members, and is also used for bug reporting and suggestions. [ 2 ] Since May 2006, winPenPack has been covered by most major Italian PC publications [ 3 ] including: PC Professionale , Win Magazine , Computer Magazine , Total Computer , Internet Genius , Quale Computer , Computer Week , and many others. All the applications available in the winPenPack suites are portable applications. Portable applications: X-Software is software that has been modified with X-Launcher to be executed as if it were a portable application. X-Launcher is a specific application which executes other applications in "portable mode" by means of recreating their original operating environment. A few examples of X-Software include X-Firefox (counterpart to Mozilla Firefox ), X-Thunderbird ( Mozilla Thunderbird ), X-Gimp ( GIMP ), and others. The winPenPack main menu can be executed from any removable storage device (including, and especially, from USB flash drives ). 
In each different winPenPack suite, the main menu is pre-configured to list all programs available (including programs belonging to other suites), and can be edited at any time. New programs can be added to the menu either manually (by means of the "Add" options or by drag-and-dropping them onto the menu) or automatically (please note that automatic installation is only available for X-Software, as opposed to portable applications ).
https://en.wikipedia.org/wiki/WinPenPack
WinShock is a computer exploit that takes advantage of a vulnerability in the Windows secure channel (SChannel) module and allows for remote code execution. [ 1 ] The exploit was discovered in May 2014 by IBM , who also helped patch the vulnerability. [ 2 ] The vulnerability was present and undetected in Windows software for 19 years, affecting every Windows version from Windows 95 to Windows 8.1 . [ 3 ] WinShock exploits a vulnerability in the Windows secure channel (SChannel) security module that allows for remote control of a PC through a flaw in SSL , which then allows for remote code execution. [ 1 ] [ 4 ] With the execution of remote code, attackers could compromise the computer completely and gain complete control over it. [ 5 ] The vulnerability was given a CVSS 2.0 base score of 10.0, the highest score possible. [ 6 ] The attack exploits a vulnerable function in the SChannel module that handles SSL certificates . [ 7 ] A number of Windows applications such as Microsoft Internet Information Services use the SChannel Security Service Provider to manage these certificates and are vulnerable to the attack. [ 8 ] It was later discovered in November 2014 that the attack could be executed even if the IIS server was set to ignore SSL certificates, as the function was still run regardless. Microsoft Office [ 9 ] and Remote Desktop software in Windows could also be exploited in the same way, even though the latter did not support SSL encryption at the time. [ 10 ] While the attack is covered by a single CVE , and is considered to be a single vulnerability, it is possible to execute a number of different and unique attacks by exploiting the vulnerability, including buffer overflow attacks as well as certificate verification bypasses. [ 11 ] The exploit was discovered and disclosed privately to Microsoft in May 2014 by researchers in IBM's X-Force team who also helped to fix the issue.
[ 3 ] It was later disclosed publicly on 11 November 2014, [ 1 ] with a proof-of-concept released not long after. [ 12 ]
https://en.wikipedia.org/wiki/WinShock
WinWAP was a web browser for Windows CE mobile devices. It was developed by the Finnish company Winwap Technologies. [ 1 ] WinWAP was first released in 1999.
https://en.wikipedia.org/wiki/WinWAP
Wind engineering is a subset of mechanical engineering , structural engineering , meteorology , and applied physics that analyzes the effects of wind in the natural and the built environment and studies the possible damage, inconvenience or benefits which may result from wind. In the field of engineering it includes strong winds, which may cause discomfort, as well as extreme winds, such as in a tornado , hurricane or heavy storm , which may cause widespread destruction. In the fields of wind energy and air pollution it also includes low and moderate winds as these are relevant to electricity production and dispersion of contaminants. Wind engineering draws upon meteorology , fluid dynamics , mechanics , geographic information systems , and a number of specialist engineering disciplines, including aerodynamics and structural dynamics . [ 1 ] The tools used include atmospheric models , atmospheric boundary layer wind tunnels , and computational fluid dynamics models. Wind engineering involves, among other topics: Wind engineering may be considered by structural engineers to be closely related to earthquake engineering and explosion protection . Some sports stadiums such as Candlestick Park and Arthur Ashe Stadium are known for their strong, sometimes swirly winds, which affect the playing conditions. Wind engineering as a separate discipline can be traced to the UK in the 1960s, when informal meetings were held at the National Physical Laboratory , the Building Research Establishment, and elsewhere. The term "wind engineering" was first coined in 1970. [ 2 ] Alan Garnett Davenport was one of the most prominent contributors to the development of wind engineering. [ 3 ] He is well known for developing the Alan Davenport wind-loading chain or in short "wind-loading chain" that describes how different components contribute to the final load calculated on the structure. [ 4 ] The design of buildings must account for wind loads, and these are affected by wind shear . 
For engineering purposes, a power law wind-speed profile may be defined as: [ 5 ] [ 6 ] where: Typically, buildings are designed to resist a strong wind with a very long return period, such as 50 years or more. The design wind speed is determined from historical records using extreme value theory to predict future extreme wind speeds. Wind speeds are generally calculated based on some regional design standard or standards. The design standards for building wind loads include: The advent of high-rise tower blocks led to concerns regarding the wind nuisance caused by these buildings to pedestrians in their vicinity. A number of wind comfort and wind danger criteria were developed from 1971 onwards, based on different pedestrian activities, such as: [ 7 ] Other criteria classified a wind environment as completely unacceptable or dangerous. Building geometries consisting of one and two rectangular buildings have a number of well-known effects: [ 8 ] [ 9 ] For more complex geometries, pedestrian wind comfort studies are required. These can use an appropriately scaled model in a boundary-layer wind tunnel , or, more recently, computational fluid dynamics techniques. [ 10 ] The pedestrian-level wind speeds for a given exceedance probability are calculated to account for regional wind-speed statistics. [ 11 ] The vertical wind profile used in these studies varies according to the terrain in the vicinity of the buildings (which may differ by wind direction), and is often grouped in categories, such as: [ 12 ] Wind turbines are affected by wind shear. Vertical wind-speed profiles result in different wind speeds at the blades nearest to the ground level compared to those at the top of blade travel, and this, in turn, affects the turbine operation. [ 13 ] The wind gradient can create a large bending moment in the shaft of a two-bladed turbine when the blades are vertical.
[ 14 ] The reduced wind gradient over water means shorter and less expensive wind turbine towers can be used in shallow seas. [ 15 ] For wind turbine engineering , wind speed variation with height is often approximated using a power law: [ 13 ] where: The knowledge of wind engineering is used to analyze and design all high-rise buildings, suspension bridges and cable-stayed bridges , electricity transmission towers, telecommunication towers, and all other types of towers and chimneys. The wind load is the dominant load in the analysis of many tall buildings, so wind engineering is essential for their analysis and design. Again, wind load is a dominant load in the analysis and design of all long-span cable bridges .
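The power-law profile mentioned for both building design and wind turbine engineering is commonly written as u(z) = u_ref (z/z_ref)^α. The sketch below (an illustration; the exponent α = 1/7 is a common open-terrain assumption, not a value from the article) extrapolates an anemometer reading to a hypothetical turbine hub height.

```python
def power_law_speed(u_ref, z_ref, z, alpha=1.0 / 7.0):
    """Wind speed at height z given a reference speed u_ref at height z_ref.

    alpha is the terrain-dependent power-law exponent; 1/7 is a common
    open-terrain assumption and would be replaced by a measured value.
    """
    return u_ref * (z / z_ref) ** alpha

# Extrapolate a 10 m anemometer reading of 8 m/s to a 90 m hub height
# (both heights are illustrative assumptions).
u_hub = power_law_speed(8.0, 10.0, 90.0)
assert power_law_speed(8.0, 10.0, 10.0) == 8.0  # profile passes through the reference point
assert u_hub > 8.0                               # speed increases with height for alpha > 0
```

Evaluating the profile at the bottom and top of the rotor disc in the same way gives the wind-shear difference that drives the blade-root bending moments discussed above.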
https://en.wikipedia.org/wiki/Wind_engineering
Wind setup , also known as wind effect or storm effect , refers to the rise in water level in seas, lakes, or other large bodies of water caused by winds pushing the water in a specific direction. As the wind moves across the water’s surface, it applies shear stress to the water, generating a wind-driven current. When this current encounters a shoreline, the water level increases due to the accumulation of water, which creates a hydrostatic counterforce that balances the shear force applied by the wind. [ 1 ] [ 2 ] During storms, wind setup forms part of the overall storm surge . For example, in the Netherlands , wind setup during a storm surge can raise water levels by as much as 3 metres above normal tidal levels. In tropical regions, such as the Caribbean , wind setup during cyclones can elevate water levels by up to 5 metres. This phenomenon becomes especially significant when water is funnelled into shallow or narrow areas, leading to higher storm surges. [ 3 ] Examples of the effects of wind setup include Hurricanes Gamma and Delta in 2020, during which wind setup was a major factor as strong winds and atmospheric pressure drops caused higher-than-expected coastal flooding across the Yucatán Peninsula in Mexico. [ 4 ] Similarly, in California’s Suisun Marsh , wind setup has been shown to be a significant factor affecting local water levels, with strong winds pushing water into levees , contributing to frequent breaches and flooding. [ 5 ] In lakes , wind setup often leads to noticeable fluctuations in water levels. This effect is particularly clear in lakes with well-regulated water levels, such as the IJsselmeer , where the relationship between wind speed, water depth, and fetch length can be accurately measured and observed. [ 6 ] At sea, however, wind setup is typically masked by other factors, such as tidal variations. To measure the wind setup effect in coastal areas, the (calculated) astronomical tide is subtracted from the observed water level.
For instance, during the North Sea flood of 1953 , the highest water level along the Dutch coast was recorded at 2.79 metres at the Vlissingen tidal station, while the highest wind setup—measuring 3.52 metres—was observed at Scheveningen . The highest wind setup ever recorded in the Netherlands, reaching 3.63 metres, occurred in Dintelsas, Steenbergen during the 1953 flood. However, globally, tropical regions like the Gulf of Mexico and the Caribbean often experience even higher wind setups during hurricane events, underscoring the importance of this phenomenon in coastal and flood management strategies. [ 4 ] Based on the equilibrium between the shear stress due to the wind on the water and the hydrostatic back pressure, the following equation is used: [ 7 ] in which: For an open coast, the equation becomes: in which However, this formula is not always applicable, particularly when dealing with open coasts or varying water depths. In such cases, a more complex approach is needed, which involves solving the differential equation using a one- or two-dimensional grid. This method, combined with real-world data, is used in countries like the Netherlands to predict wind setup along the coast during potential storms. [ 8 ] To calculate the wind setup in a lake, the following solution for the differential equation is used: In 1966 the Delta Works Committee recommended using a value of 3.8×10 −6 for κ {\displaystyle \kappa } under Dutch conditions. However, an analysis of measurement data from the IJsselmeer between 2002 and 2013 led to a more reliable value for κ {\displaystyle \kappa } , specifically κ {\displaystyle \kappa } = 2.2×10 −6 . [ 6 ] This study also found that the formula underestimated wind setup at higher wind speeds. As a result, it has been suggested to increase the exponent of the wind speed from 2 to 3 and to further adjust κ {\displaystyle \kappa } to κ {\displaystyle \kappa } = 1.7×10 −7 .
This modified formula can predict the wind setup on the IJsselmeer with an accuracy of approximately 15 centimetres. For confined environments such as marshes or small fetches, a simplified empirical model for wind setup has been proposed by Algra et al. (2023). [ 5 ] This model was designed to estimate wind setup in the Suisun Marsh, where fetch lengths are smaller and shallow water depth conditions apply. The equation is expressed as: Where: This equation assumes that the fetch is small and simplifies the wind setup process by making the wind setup directly proportional to the square of the wind speed. In their 2023 analysis of Van Sickle Island, Algra et al. found this model effective for environments with limited fetch and shallow depth, where the more complex approaches used for open coasts are unnecessary. Unlike the more detailed differential equation formulations used for larger open coasts or lakes, the Van Sickle model provides a practical approximation for confined areas where wind setup may still be significant but where spatial constraints simplify the overall water movement dynamics. [ 5 ] Wind setup should not be mistaken for wave run-up , which refers to the height which a wave reaches on a slope, or wave setup , which is the increase in water level caused by breaking waves. [ 9 ]
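The basin formula discussed above (whose display form was lost in extraction) is commonly taken as a water-surface slope of κu²/(gd), so that over a fetch F the total set-up difference across a basin of constant depth is roughly κu²F/(gd). The sketch below is an illustration under that constant-depth, steady-state assumption, using the IJsselmeer value κ = 2.2×10⁻⁶ quoted in the text; the storm parameters are invented.

```python
G = 9.81  # gravitational acceleration, m/s^2

def wind_setup(u, fetch, depth, kappa=2.2e-6):
    """Approximate set-up difference (m) across a constant-depth basin.

    Integrates the surface slope kappa * u^2 / (g * d) over the fetch.
    kappa = 2.2e-6 is the IJsselmeer value quoted in the text; the
    constant-depth, steady-state form is a simplifying assumption.
    """
    return kappa * u ** 2 * fetch / (G * depth)

# Illustrative storm over a shallow lake: 25 m/s wind, 40 km fetch, 4.5 m depth.
s = wind_setup(25.0, 40_000.0, 4.5)
assert s > 0.0  # on the order of a metre for these values
```

As the text notes, the quadratic wind-speed dependence underestimates the set-up in severe storms, which motivated the proposed cubic variant with a smaller κ.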
https://en.wikipedia.org/wiki/Wind_setup
In physical oceanography and fluid dynamics , the wind stress is the shear stress exerted by the wind on the surface of large bodies of water – such as oceans , seas , estuaries and lakes . When wind is blowing over a water surface, the wind applies a wind force on the water surface. The wind stress is the component of this wind force that is parallel to the surface, per unit area. The wind stress can also be described as the flux of horizontal momentum applied by the wind on the water surface. The wind stress causes a deformation of the water body whereby wind waves are generated. In addition, the wind stress drives ocean currents and is therefore an important driver of the large-scale ocean circulation. [ 1 ] The wind stress is affected by the wind speed , the shape of the wind waves and the atmospheric stratification . It is one of the components of the air–sea interaction, with others being the atmospheric pressure on the water surface, as well as the exchange of energy and mass between the water and the atmosphere . [ 2 ] Stress is the quantity that describes the magnitude of a force that is causing a deformation of an object. Therefore, stress is defined as the force per unit area and its SI unit is the pascal . When the deforming force acts parallel to the object's surface, this force is called a shear force and the stress it causes is called a shear stress . [ 3 ] Wind blowing over an ocean at rest first generates small-scale wind waves, which extract energy and momentum from the wind field. As a result, the momentum flux (the rate of momentum transfer per unit area per unit time) generates a current. These surface currents are able to transport energy (e.g. heat ) and mass (e.g. water or nutrients ) around the globe. The different processes described here are depicted in the sketches shown in figures 1.1 to 1.4. Interactions between wind, wind waves and currents are an essential part of the world ocean dynamics .
Eventually, the wind waves also influence the wind field, leading to a complex interaction between wind and water for which research into a correct theoretical description is ongoing. [ 2 ] The Beaufort scale quantifies the correspondence between wind speed and different sea states . Only the top layer of the ocean ( mixed layer ) is stirred by the wind stress. This upper layer of the ocean has a depth on the order of 10 m. [ 4 ] The wind blowing parallel to a water surface deforms that surface as a result of the shear action of the fast-moving wind over the stagnant water. The wind blowing over the surface applies a shear force on the surface. The wind stress is the component of this force that acts parallel to the surface per unit area. This wind force exerted on the water surface due to shear stress is given by: Here, F represents the shear force per unit mass, ρ {\displaystyle \rho } represents the air density and τ {\displaystyle \tau } represents the wind shear stress. Furthermore, z is the vertical direction, while x corresponds to the zonal direction and y corresponds to the meridional direction . The vertical derivatives of the wind stress components are also called the vertical eddy viscosity . [ 5 ] The equation describes how the force exerted on the water surface decreases for a denser atmosphere or, to be more precise, a denser atmospheric boundary layer (this is the layer of a fluid where the influence of friction is felt). On the other hand, the exerted force on the water surface increases when the vertical eddy viscosity increases. The wind stress can also be described as a downward transfer of momentum and energy from the air to the water.
The magnitude of the wind stress ( τ {\displaystyle \tau } ) is often parametrized as a function of wind speed U h {\displaystyle U_{h}} at a certain height h {\displaystyle h} above the surface in the form Here, ρ air {\displaystyle \rho _{\text{air}}} is the density of the surface air and C D is a dimensionless wind drag coefficient which absorbs all remaining dependencies. An often used value for the drag coefficient is C D = 0.0015 {\displaystyle C_{D}=0.0015} . Since the exchange of energy, momentum and moisture is often parametrized using bulk atmospheric formulae, the equation above is the semi-empirical bulk formula for the surface wind stress. The height at which the wind speed is referred to in wind drag formulas is usually 10 metres above the water surface. [ 6 ] [ 7 ] The formula for the wind stress shows how the stress increases for a denser atmosphere and for higher wind speeds. When the shear force caused by the wind stress is in balance with the Coriolis force , this can be written as: where f is the Coriolis parameter , u and v are respectively the zonal and meridional currents, and + f v {\displaystyle +fv} and − f u {\displaystyle -fu} are respectively the zonal and meridional Coriolis forces . This balance of forces is known as the Ekman balance . Some important assumptions that underlie the Ekman balance are that there are no boundaries, an infinitely deep water layer, constant vertical eddy viscosity, barotropic conditions with no geostrophic flow and a constant Coriolis parameter. The oceanic currents that are generated by this balance are referred to as Ekman currents. In the Northern Hemisphere , Ekman currents at the surface are directed at an angle of 45° to the right of the wind stress direction, and in the Southern Hemisphere they are directed at the same angle to the left of the wind stress direction.
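The bulk formula above, τ = ρ_air C_D U₁₀², can be sketched directly in code. The snippet below is an illustration: C_D = 0.0015 is the value quoted in the text, while the surface air density of 1.225 kg/m³ is an assumed standard-atmosphere value. A moderate breeze reproduces the typical stress magnitude of about 0.1 Pa mentioned later in the article.

```python
RHO_AIR = 1.225  # surface air density, kg/m^3 (assumed standard value)

def wind_stress(u10, c_d=0.0015):
    """Bulk formula tau = rho_air * C_D * U10^2, in Pa.

    u10 is the wind speed at the usual 10 m reference height; C_D is the
    dimensionless drag coefficient quoted in the text.
    """
    return RHO_AIR * c_d * u10 ** 2

# A 7.5 m/s wind gives a stress near the typical 0.1 Pa magnitude.
tau = wind_stress(7.5)
assert 0.09 < tau < 0.12
```

In practice C_D itself varies with wind speed and sea state, which is why the later section on the drag coefficient treats it as an open research question rather than a constant.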
Flow directions of deeper positioned currents are deflected even more to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. This phenomenon is called the Ekman spiral . [ 4 ] [ 8 ] The Ekman transport can be obtained from vertically integrating the Ekman balance, giving: where D is the depth of the Ekman layer . Depth-averaged Ekman transport is directed perpendicular to the wind stress and, again, directed to the right of the wind stress direction in the Northern Hemisphere and to the left of the wind stress direction in the Southern Hemisphere. Alongshore winds therefore generate transport towards or away from the coast. For small values of D , water can return from or to deeper water layers, resulting in Ekman up- or downwelling . Upwelling due to Ekman transport can also happen at the equator due to the change of sign of the Coriolis parameter in the Northern and Southern Hemisphere and the stable easterly winds that are blowing to the North and South of the equator. [ 5 ] [ 1 ] Due to the strong temporal variability of the wind, the wind forcing on the ocean surface is also highly variable. This is one of the causes of the internal variability of ocean flows as these changes in the wind forcing cause changes in the wave field and the thereby generated currents. Variability of ocean flows also occurs because the changes of the wind forcing are disturbances of the mean ocean flow, which leads to instabilities . A well known phenomenon that is caused by changes in surface wind stress over the tropical Pacific is the El Niño-Southern Oscillation (ENSO). [ 1 ] The global annual mean wind stress forces the global ocean circulation. Typical values for the wind stress are about 0.1Pa and, in general, the zonal wind stress is stronger than the meridional wind stress as can be seen in figures 2.1 and 2.2. 
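The depth-integrated Ekman transport described above has magnitude τ/(ρ_water |f|), directed 90° to the right (left) of the stress in the Northern (Southern) Hemisphere. The sketch below is an illustration of that magnitude only; the sea-water density and the example latitude are assumed values.

```python
import math

RHO_SEA = 1025.0  # sea-water density, kg/m^3 (assumed value)
OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def ekman_transport(tau, lat_deg):
    """Magnitude of the depth-integrated Ekman transport, in m^2/s.

    tau: wind stress (Pa); lat_deg: latitude in degrees.
    f = 2 * Omega * sin(latitude) is the Coriolis parameter; the transport
    is perpendicular to the stress, so only the magnitude is returned.
    """
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return tau / (RHO_SEA * abs(f))

# Typical midlatitude values: 0.1 Pa stress at 45 degrees latitude.
m = ekman_transport(0.1, 45.0)
assert m > 0.0  # roughly 1 m^2/s of transport per metre of coastline
```

The divergence of this transport near coasts and at the equator is what produces the upwelling and downwelling discussed in the surrounding text.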
It can also be seen that the largest values of the wind stress occur in the Southern Ocean for the zonal direction with values of about 0.3Pa. Figures 2.3 and 2.4 show that monthly variations in the wind stress patterns are only minor and the general patterns stay the same during the whole year. It can be seen that there are strong easterly winds (i.e. blowing toward the West), called easterlies or trade winds near the equator, very strong westerly winds at midlatitudes (between ±30° and ±60°), called westerlies, and weaker easterly winds at polar latitudes. Also, on a large annual scale, the wind-stress field is fairly zonally homogeneous. Important meridional wind stress patterns are northward (southward) currents on the eastern (western) coasts of continents in the Northern Hemisphere and on the western (eastern) coast in the Southern Hemisphere since these generate coastal upwelling which causes biological activity. Examples of such patterns can be observed in figure 2.2 on the East coast of North America and on the West coast of South America. [ 4 ] [ 1 ] Wind stress is one of the drivers of the large-scale ocean circulation with other drivers being the gravitational pull exerted by the Moon and Sun, differences in atmospheric pressure at sea level and convection resulting from atmospheric cooling and evaporation . However, the contribution of the wind stress to the forcing of the oceanic general circulation is largest. Ocean waters respond to the wind stress because of their low resistance to shear and the relative consistence with which winds blow over the ocean. The combination of easterly winds near the equator and westerly winds at midlatitudes drives significant circulations in the North and South Atlantic Oceans, the North and South Pacific Oceans and the Indian Ocean with westward currents near the equator and eastward currents at midlatitudes. 
This results in characteristic gyre flows in the Atlantic and Pacific consisting of a subpolar and a subtropical gyre. [ 4 ] [ 1 ] The strong westerlies in the Southern Ocean drive the Antarctic Circumpolar Current , which is the dominant current in the Southern Hemisphere and has no comparable counterpart in the Northern Hemisphere. [ 1 ] The equations to describe large-scale ocean dynamics were formulated by Harald Sverdrup and came to be known as Sverdrup dynamics. Of particular importance is the Sverdrup balance , which describes the relation between the wind stress and the vertically integrated meridional transport of water. [ 10 ] Other significant contributions to the description of large-scale ocean circulation were made by Henry Stommel , who formulated the first correct theory for the Gulf Stream [ 11 ] and theories of the abyssal circulation. [ 12 ] [ 13 ] Long before these theories were formulated, mariners had been aware of the major surface ocean currents. As an example, Benjamin Franklin published a map of the Gulf Stream as early as 1770, and European discovery of the Gulf Stream dates back to the 1512 expedition of Juan Ponce de León . [ 14 ] [ 15 ] Apart from such hydrographic measurements, there are two methods to measure the ocean currents directly. First, the Eulerian velocity can be measured using a current meter suspended in the water column . Second, a drifter can be used, which is an object that moves with the currents and whose velocity can be measured. [ 1 ] Wind-driven upwelling brings nutrients from deep waters to the surface, which leads to biological productivity. Therefore, wind stress impacts biological activity around the globe. Two important forms of wind-driven upwelling are coastal upwelling and equatorial upwelling . Coastal upwelling occurs when the wind stress is directed with the coast on its left (right) in the Northern (Southern) Hemisphere.
If so, Ekman transport is directed away from the coast, forcing waters from below to move upward. Well-known coastal upwelling areas are the Canary Current , the Benguela Current , the California Current , the Humboldt Current , and the Somali Current . All of these currents support major fisheries due to the increased biological activity. Equatorial upwelling occurs due to the trade winds blowing towards the west in both the Northern Hemisphere and the Southern Hemisphere. However, the Ekman transport that is associated with these trade winds is directed 90° to the right of the winds in the Northern Hemisphere and 90° to the left of the winds in the Southern Hemisphere. As a result, both to the North and to the South of the equator, water is transported away from the equator. This horizontal divergence of mass has to be compensated and hence upwelling occurs. [ 4 ] [ 1 ] Wind waves are waves at the water surface that are generated by the shear action of the wind stress on the water surface together with the action of gravity, which acts as a restoring force returning the water surface to its equilibrium position. Wind waves in the ocean are also known as ocean surface waves. The wind waves interact with both the air and water flows above and below the waves. Therefore, the characteristics of wind waves are determined by the coupling processes between the boundary layers of both the atmosphere and the ocean. Wind waves also play an important role themselves in the interaction processes between the ocean and the atmosphere. Wind waves in the ocean can travel thousands of kilometers. A description of the physical mechanisms that cause the growth of wind waves that is in accordance with observations has yet to be completed. A necessary condition for wind waves to grow is a minimum wind speed of 0.05 m/s.
[ 2 ] [ 16 ] [ 17 ] [ 18 ] The drag coefficient is a dimensionless quantity which quantifies the resistance of the water surface. Because the drag coefficient depends on the history of the wind, it is expressed differently for different time and spatial scales. A general expression for the drag coefficient does not yet exist, and its value is unknown for unsteady and non-ideal conditions. In general, the drag coefficient increases with increasing wind speed and is greater for shallower waters. [ 2 ] The geostrophic drag coefficient is expressed as C_g = u_* / U_g, [ 2 ] where u_* is the friction velocity and U_g is the geostrophic wind, given by the balance between the Coriolis and horizontal pressure-gradient forces: U_g = |∇_h p| / (ρ f), with f the Coriolis parameter. In global climate models, a drag coefficient appropriate for a spatial scale of 1° by 1° and a monthly time scale is often used. On such a timescale, the wind can fluctuate strongly. Writing the wind as the sum of its monthly mean ⟨U⟩ and the fluctuation U′ about that mean, the monthly mean shear stress can be expressed as ⟨τ⟩ = ρ C_D ⟨(⟨U⟩ + U′)²⟩ = ρ C_D (⟨U⟩² + ⟨U′²⟩), where ρ is the density and C_D is the drag coefficient. [ 2 ] It is not possible to directly measure the wind stress on the ocean surface. To obtain measurements of the wind stress, another easily measurable quantity such as the wind speed is measured, and the wind stress is then obtained via a parametrization. Still, direct measurements of the wind stress are important, as the value of the drag coefficient is not known for unsteady and non-ideal conditions; measurements for such conditions can resolve the issue of the unknown drag coefficient. Four methods of measuring the drag coefficient are the Reynolds stress method, the dissipation method, the profile method, and radar remote sensing. [ 2 ] The wind can also exert a stress force on the land surface, which can lead to erosion of the ground.
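Because the stress is quadratic in the wind, averaging over a fluctuating wind yields a larger mean stress than using the monthly mean wind alone; the variance of the fluctuations contributes. This can be checked numerically (the density and the constant drag coefficient below are illustrative assumptions):

```python
import random

rho = 1.22    # air density near the surface, kg/m^3 (illustrative)
c_d = 1.3e-3  # assumed constant drag coefficient

random.seed(0)
mean_wind = 8.0  # monthly mean wind <U>, m/s
# Synthetic wind record U = <U> + U' with fluctuations of ~3 m/s:
winds = [mean_wind + random.gauss(0.0, 3.0) for _ in range(100_000)]

tau_full = rho * c_d * sum(u * u for u in winds) / len(winds)  # rho*C_D*<U^2>
tau_mean_only = rho * c_d * mean_wind ** 2                     # rho*C_D*<U>^2

# <U^2> = <U>^2 + <U'^2>, so the full average is larger:
print(tau_full > tau_mean_only)  # True
```

This is why a drag coefficient tuned for monthly 1°-by-1° averages differs from one tuned for instantaneous winds.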
https://en.wikipedia.org/wiki/Wind_stress
A wind tunnel is "an apparatus for producing a controlled stream of air for conducting aerodynamic experiments". [ 1 ] The experiment is conducted in the test section of the wind tunnel, and a complete tunnel configuration includes air ducting to and from the test section and a device for keeping the air in motion, such as a fan. Wind tunnel uses include assessing the effects of air on an aircraft in flight or a ground vehicle moving on land, and measuring the effect of wind on buildings and bridges. Wind tunnel test sections range in size from less than a foot across to over 100 feet (30 m), with air speeds from a light breeze to hypersonic. The earliest wind tunnels were invented towards the end of the 19th century, in the early days of aeronautical research, as part of the effort to develop heavier-than-air flying machines. The wind tunnel reversed the usual situation: instead of the air standing still and an aircraft moving, an object would be held still and the air moved around it. In this way, a stationary observer could study the flying object in action, and could measure the aerodynamic forces acting on it. The development of wind tunnels accompanied the development of the airplane. Large wind tunnels were built during World War II, and as supersonic aircraft were developed, supersonic wind tunnels were constructed to test them. Wind tunnel testing was considered of strategic importance during the Cold War for the development of aircraft and missiles. Advances in computational fluid dynamics (CFD) have reduced the demand for wind tunnel testing but have not completely eliminated it; many real-world problems still cannot be modeled accurately enough by CFD to eliminate the need for wind tunnel testing. Moreover, confidence in a numerical simulation tool depends on comparing its results with experimental data, which can be obtained, for example, from wind tunnel tests.
A wind tunnel creates an outdoor environment in a controlled indoor setting, which enables measurements of wind forces on a moving object to be taken while the object is stationary. This is much cheaper and more convenient than taking measurements while the object is moving. The object being tested, such as a scale model of an aircraft, is placed in the test section and restrained from moving. Air is blown around the object and the forces on the model are measured. The measurements taken from the reduced-scale model are applicable to the full-size aircraft. Testing scale models of a new aircraft design before it flies is done to ensure the first flight will be safe, with the aircraft behaving in a predictable manner. Research in wind tunnels produces accurate results and is done rapidly and economically compared to flight testing of full-scale aircraft. Few people ever see a wind tunnel or are aware that they exist, but work done in their test sections affects everyday life. For example, streamlining investigates ways to reduce the air drag on airliners, cars, and racing cyclists. The purpose of streamlining is not always the same: for airliners and cars it is to reduce fuel consumption, while for racing cyclists it is to increase speed. Car fuel consumption is of secondary importance to drivers when starting and driving in extreme cold and wind-driven snow. Such conditions are investigated in a different kind of wind tunnel, the climatic wind tunnel. Its test section subjects cars to a range of extreme environmental conditions to make sure the air conditioning can keep the car comfortable on very hot and very cold days and can keep the windows clear of condensation in very humid and cool weather. English mathematician and physicist Isaac Newton (1642–1726) described a forerunner to the modern wind tunnel in Proposition 36/37 of his book Philosophiæ Naturalis Principia Mathematica.
[ 3 ] [ 4 ] English military engineer and mathematician Benjamin Robins (1707–1751) invented a whirling arm apparatus to determine drag [ 5 ] and did some of the first experiments in aerodynamics. Sir George Cayley (1773–1857) also used a whirling arm to measure the drag and lift of various airfoils. [ 6 ] His whirling arm was 5 feet (1.5 m) long and attained speeds between 10 and 20 feet per second (3 to 6 m/s). Otto Lilienthal used a rotating arm to make measurements on wing airfoils with varying angles of attack, establishing their lift-to-drag ratio polar diagrams, but lacked the notions of induced drag and Reynolds numbers. [ 7 ] A drawback of whirling-arm tests is that they do not produce a reliable flow of air; centrifugal forces and the fact that the object is moving in its own wake also mean that detailed examination of the airflow is difficult. Francis Herbert Wenham (1824–1908), a Council Member of the Aeronautical Society of Great Britain, addressed these issues by inventing, designing, and operating the first enclosed wind tunnel in 1871. [ 8 ] [ 9 ] Once this breakthrough had been achieved, detailed technical data was rapidly extracted by the use of this tool. Wenham and his colleague John Browning are credited with many fundamental discoveries, including the measurement of l/d ratios and the revelation of the beneficial effects of a high aspect ratio. Konstantin Tsiolkovsky built an open-section wind tunnel with a centrifugal blower in 1897, and determined the drag coefficients of flat plates, cylinders, and spheres. Danish inventor Poul la Cour used wind tunnels to develop wind turbines in the early 1890s. Carl Rickard Nyberg used a wind tunnel to design his Flugan starting in 1897. The Englishman Osborne Reynolds (1842–1912) of the University of Manchester demonstrated that the airflow pattern over a scale model would be the same for the full-scale vehicle if a certain flow parameter were the same in both cases.
This parameter, now known as the Reynolds number , is used in the description of all fluid-flow situations, including the shape of flow patterns, the effectiveness of heat transfers, and the onset of turbulence. This comprises the central scientific justification for the use of models in wind tunnels to simulate real-life phenomena. The Wright brothers ' use of a simple wind tunnel in 1901 to study the effects of airflow over various shapes while developing their Wright Flyer was in some ways revolutionary. [ 10 ] However, they were using the accepted technology of the day, though this was not yet a common technology in America. In France , Gustave Eiffel (1832–1923) built his first open-return wind tunnel in 1909, powered by a 67 hp (50 kW) electric motor, at Champs-de-Mars, near the foot of the tower that bears his name. Between 1909 and 1912 Eiffel ran about 4,000 tests in his wind tunnel, and his systematic experimentation set new standards for aeronautical research. In 1912 Eiffel's laboratory was moved to Auteuil, a suburb of Paris, where his wind tunnel with a 7-foot (2 m) test section is still operational today. [ 11 ] Eiffel significantly improved the efficiency of the open-return wind tunnel by enclosing the test section in a chamber, designing a flared inlet with a honeycomb flow straightener, and adding a diffuser between the test section and the fan located at the downstream end of the diffuser; this was an arrangement followed by a number of wind tunnels later built; in fact the open-return low-speed wind tunnel is often called the Eiffel-type wind tunnel. Subsequent use of wind tunnels proliferated as the science of aerodynamics and discipline of aeronautical engineering were established and air travel and power were developed. The US Navy in 1916 built one of the largest wind tunnels in the world at that time at the Washington Navy Yard. The inlet was almost 11 feet (3.4 m) in diameter and the discharge part was 7 feet (2.1 m) in diameter. 
A 500 hp (370 kW) electric motor drove the paddle type fan blades. [ 13 ] In 1931 the NACA built a 30-by-60-foot (9.1 by 18.3 m) full-scale wind tunnel at Langley Research Center in Hampton, Virginia. The tunnel was powered by a pair of fans driven by 4,000 hp (3,000 kW) electric motors. The layout was a double-return, closed-loop format and could accommodate many full-size real aircraft as well as scale models. The tunnel was eventually closed and, even though it was declared a National Historic Landmark in 1995, demolition began in 2010. Until World War II, the world's largest wind tunnel, built in 1932–1934, was located in a suburb of Paris, Chalais-Meudon , France. [ citation needed ] It was designed to test full-size aircraft and had six large fans driven by high powered electric motors. [ 14 ] The Chalais-Meudon wind tunnel was used by ONERA under the name S1Ch until 1976 in the development of, e.g., the Caravelle and Concorde airplanes. Today, this wind tunnel is preserved as a national monument. Ludwig Prandtl was Theodore von Kármán 's teacher at Göttingen University and suggested the construction of a wind tunnel for tests of airships they were designing. [ 15 ] : 44 The vortex street of turbulence downstream of a cylinder was tested in the tunnel. [ 15 ] : 63 When he later moved to Aachen University he recalled use of this facility: I remembered the wind tunnel in Göttingen was started as a tool for studies of Zeppelin behavior, but that it had proven to be valuable for everything else from determining the direction of smoke from a ship's stack, to whether a given airplane would fly. Progress at Aachen, I felt, would be virtually impossible without a good wind tunnel. [ 15 ] : 76 When von Kármán began to consult with Caltech he worked with Clark Millikan and Arthur L. Klein. [ 15 ] : 124 He objected to their design and insisted on a return flow making the device "independent of the fluctuations of the outside atmosphere". 
It was completed in 1930 and used for Northrop Alpha testing. [ 15 ] : 169 In 1939 General Arnold asked what was required to advance the USAF, and von Kármán answered, "The first step is to build the right wind tunnel." [ 15 ] : 226 On the other hand, after the successes of the Bell X-2 and prospect of more advanced research, he wrote, "I was in favor of constructing such a plane because I have never believed that you can get all the answers out of a wind tunnel." [ 15 ] : 302–03 In 1941 the US constructed one of the largest wind tunnels at that time at Wright Field in Dayton, Ohio. This wind tunnel starts at 45 feet (14 m) and narrows to 20 feet (6.1 m) in diameter. Two 40-foot (12 m) fans were driven by a 40,000 hp (30,000 kW) electric motor. Large scale aircraft models could be tested at air speeds of 400 mph (640 km/h). [ 16 ] During WWII, Germany developed different designs of large wind tunnels to further their knowledge of aeronautics. For example, the wind tunnel at Peenemünde was a novel wind tunnel design that allowed for high-speed airflow research, but brought several design challenges regarding constructing a high-speed wind tunnel at scale. However, it successfully used some large natural caves which were increased in size by excavation and then sealed to store large volumes of air which could then be routed through the wind tunnels. By the end of the war, Germany had at least three different supersonic wind tunnels, with one capable of Mach 4.4 heated airflows. A large wind tunnel under construction near Oetztal , Austria would have had two fans directly driven by two 50,000 hp (37,000 kW) hydraulic turbines . The installation was not completed by the end of the war and the dismantled equipment was shipped to Modane , France in 1946 where it was re-erected and is still operated there by the ONERA . With its 26-foot (8 m) test section and airspeed up to Mach 1, it is the largest transonic wind tunnel facility in the world. 
[ 17 ] Frank Wattendorf reported on this wind tunnel for a US response. [ 18 ] On 22 June 1942, Curtiss-Wright financed construction of one of the nation's largest subsonic wind tunnels in Buffalo, New York. The first concrete for building was poured on 22 June 1942 on a site that eventually would become Calspan , where the wind tunnel still operates. [ 19 ] By the end of World War II, the US had built eight new wind tunnels, including the largest one in the world at Moffett Field near Sunnyvale, California, which was designed to test full size aircraft at speeds of less than 250 mph (400 km/h) [ 20 ] and a vertical wind tunnel at Wright Field, Ohio, where the wind stream is upwards for the testing of models in spin situations and the concepts and engineering designs for the first primitive helicopters flown in the US. [ 21 ] Later research into airflows near or above the speed of sound used a related approach. Metal pressure chambers were used to store high-pressure air which was then accelerated through a nozzle designed to provide supersonic flow. The observation or instrumentation chamber ("test section") was then placed at the proper location in the throat or nozzle for the desired airspeed. In the United States, concern over the lagging of American research facilities compared to those built by the Germans led to the Unitary Wind Tunnel Plan Act of 1949, which authorized expenditure to construct new wind tunnels at universities and at government sites. Some German war-time wind tunnels were dismantled for shipment to the United States as part of the plan to exploit German technology developments. [ 22 ] In the United States, many wind tunnels have been decommissioned from 1990 to 2010, including some historic facilities. Pressure is brought to bear on remaining wind tunnels due to declining or erratic usage, high electricity costs, and in some cases the high value of the real estate upon which the facility sits. 
On the other hand, CFD validation still requires wind-tunnel data, and this is likely to be the case for the foreseeable future. Studies have been done and others are underway to assess future military and commercial wind tunnel needs, but the outcome remains uncertain. [ 23 ] More recently, the increasing use of jet-powered, instrumented unmanned vehicles ("research drones") has replaced some of the traditional uses of wind tunnels. [ 24 ] The world's fastest wind tunnel as of 2019 is the LENS-X wind tunnel, located in Buffalo, New York. [ 25 ] Air speed, direction, and pressures are measured in several ways in wind tunnels. Air speed through the test section is determined by Bernoulli's principle. The direction of airflow around a model is shown by fluttering tufts of yarn attached to the aerodynamic surfaces. The direction of airflow approaching and leaving a surface can be seen by mounting tufts in the airflow in front of and behind the model. Smoke or bubbles of liquid can be introduced into the airflow upstream of the model, and their paths around the model recorded using photography (see particle image velocimetry). Aerodynamic forces on the test model are measured with beam balances. [ 26 ] The pressure distribution on a test model has historically been measured by drilling small holes on the surface and connecting them to manometers to measure the pressure at each hole. Pressure distributions can be measured more conveniently using pressure-sensitive paint, [ 27 ] in which pressure is indicated by the fluorescence of the paint. They can also be measured with very small electronic pressure sensors mounted on a flexible strip which is attached to the model. [ 28 ] The aerodynamic properties of an object can vary for a scaled model. [ 29 ] However, by observing certain similarity rules, a very satisfactory correspondence between the aerodynamic properties of a scaled model and a full-size object can be achieved.
The choice of similarity parameters depends on the purpose of the test, but the most important conditions to satisfy are usually matching the Mach number and the Reynolds number. In certain particular test cases, other similarity parameters must be satisfied, such as the Froude number. The model is mounted on a balance which measures forces and moments. Lift, drag, and lateral forces, as well as yaw, roll, and pitching moments, are measured over a range of angles of attack. Common curves such as lift coefficient versus angle of attack are produced. The model must be held stationary, and the external supports create drag and potential turbulence that will affect the measurements. The supporting structures are therefore kept as small as possible and aerodynamically shaped to minimize turbulence. Because air is transparent, it is difficult to observe the air movement itself directly. Instead, multiple quantitative and qualitative flow-visualization methods have been developed for testing in a wind tunnel. High-speed turbulence and vortices can be difficult to see directly, but strobe lights and film cameras or high-speed digital cameras can help to capture events that are a blur to the naked eye. High-speed cameras are also required when the subject of the test is itself moving at high speed, such as an airplane propeller. The camera can capture stop-motion images of how the blade cuts through the particulate streams and how vortices are generated along the trailing edges of the moving blade. There are many different kinds of wind tunnels. They are typically classified by the range of speeds that are achieved in the test section. Wind tunnels are also classified by the orientation of air flow in the test section with respect to gravity. Typically they are oriented horizontally, as happens during level flight.
A different class of wind tunnels is oriented vertically, so that gravity can be balanced by drag instead of lift; these have become a popular form of recreation for simulating sky-diving. Wind tunnels are also classified based on their main use. For those used with land vehicles such as cars and trucks, the type of floor aerodynamics is also important. These vary from stationary floors through to full moving floors, with smaller moving floors and some attempt at boundary-layer control also being important. There are several main subcategories of aeronautical wind tunnels. The Reynolds number is one of the governing similarity parameters for the simulation of flow in a wind tunnel; for Mach numbers less than 0.3, it is the primary parameter that governs the flow characteristics. There are three main ways to simulate a high Reynolds number, since it is not practical to obtain a full-scale Reynolds number by use of a full-scale vehicle. [ citation needed ] V/STOL tunnels require a large cross-sectional area, but only small velocities. Since power varies with the cube of velocity, the power required for operation is also small. An example of a V/STOL tunnel is the NASA Langley 14-by-22-foot (4.3 by 6.7 m) tunnel. [ 32 ] Vertical wind tunnels have a test section with air flowing upwards. Photography is used to record free-flight spin characteristics of aircraft models. Nets are installed above and below the test section to prevent the model from moving too high and to catch it when the air stops flowing. [ 33 ] Automotive wind tunnels fall into two categories. Wind tunnel testing of automobiles began in the 1920s, [ 34 ] on cars such as the Rumpler Tropfenwagen and the Chrysler Airflow. Initially, scale models were tested; then larger wind tunnels were built to test full-scale cars, with the capability to measure aerodynamic drag, which enables improvements to be made for reducing fuel consumption. Wunibald Kamm built the first full-scale wind tunnel for motor vehicles.
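The difficulty of matching a full-scale Reynolds number with a scale model, noted above, can be made concrete with a small sketch (Re = V·L/ν; the vehicle dimensions and speeds below are illustrative):

```python
# Reynolds number Re = V * L / nu governs whether the flow around a scale
# model matches the flow around the full-size vehicle.

NU_AIR = 1.46e-5  # kinematic viscosity of air at ~15 C, m^2/s

def reynolds(velocity, length, nu=NU_AIR):
    return velocity * length / nu

re_full = reynolds(60.0, 3.0)    # full scale: 60 m/s over a 3 m chord
re_model = reynolds(60.0, 0.3)   # 1/10-scale model at the same speed: Re is 10x too low

# Matching Re in the same air requires ten times the speed -- one reason
# pressurized or cryogenic tunnels (which change nu) are used instead.
v_needed = re_full * NU_AIR / 0.3
print(round(v_needed))  # 600 (m/s), well into the compressible regime
```

Raising the air density (pressurization) or lowering ν (cryogenic operation) achieves the same Re without an impractical speed.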
[ 35 ] Wind tunnels have been used to test sporting equipment including golf clubs, golf balls, bobsleds, cyclists, and race-car helmets. Helmet aerodynamics is particularly important in open-cockpit race cars such as IndyCar and Formula One. Aerodynamic forces on the helmet at high speeds can cause considerable neck strain on the driver, and flow separation on the back side of the helmet can cause turbulent buffeting and thus blurred vision for the driver. [ 36 ] Other problems are also studied with wind tunnels. The effects of wind on man-made structures needed to be studied once buildings became tall enough to be significantly affected by the wind. Very tall buildings present large surfaces to the wind, and the resulting forces have to be resisted by the building's internal structure or else the building will collapse. Determining such forces was required before building codes could specify the required strength of such buildings, and these tests continue to be used for large or unusual buildings. Aeroacoustic wind tunnels are used in the study of noise generated by flow and its suppression. A high-enthalpy wind tunnel is intended to study the flow of air around objects moving at speeds much faster than the local speed of sound (hypersonic speeds). "Enthalpy" here refers to the total energy of a gas stream, composed of the internal energy due to temperature, the product of pressure and volume, and the kinetic energy of the flow. Duplication of the conditions of hypersonic flight requires large volumes of high-pressure, heated air; large pressurized hot reservoirs and electric arcs are two techniques used. [ 37 ] The aerodynamic principles of the wind tunnel work equally on watercraft, except that water is more viscous and so exerts greater forces on the object being tested. A looping flume is typically used for underwater aquadynamic testing. The interaction between two different types of fluids means that pure wind tunnel testing is only partly relevant.
However, a similar sort of research is done in a towing tank. Air is not always the best test medium for studying small-scale aerodynamic principles, due to the speed of the air flow and airfoil movement. A study of fruit fly wings designed to understand how the wings produce lift was performed using a large tank of mineral oil and wings 100 times larger than actual size, in order to slow down the wing beats and make the vortices generated by the insect wings easier to see and understand. [ 38 ] Wind tunnel tests are also used to determine wind velocities around buildings and bridges, and the wind forces on them. [ 39 ] Environmental wind tunnels are used to simulate the boundary layer of the atmosphere in windy conditions near the earth's surface. The wind near the ground is highly turbulent. [ 40 ] Whereas vehicle wind tunnels have features to produce steady, straight-line air approaching the test model, environmental tunnels need spires followed by small cubes on the floor to make the air represent the atmospheric boundary layer as it approaches the test object. [ 41 ] The forces caused by wind on high-rise buildings and bridges have to be understood so they can be built using a minimum of construction materials while still being safe in very high winds. Another significant application of boundary-layer wind tunnel modeling is understanding exhaust-gas dispersion patterns for hospitals, laboratories, and other emitting sources. Other examples of boundary-layer wind tunnel applications are assessments of pedestrian comfort and snow drifting. Wind tunnel modeling is accepted as a method for aiding in green building design. For instance, boundary-layer wind tunnel modeling can be used as a credit for Leadership in Energy and Environmental Design (LEED) certification through the US Green Building Council.
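Air speed through the test section, noted earlier to be determined by Bernoulli's principle, follows from an easily measured pressure difference: for low-speed (incompressible) flow, the difference between stagnation and static pressure equals ½ρv². A minimal sketch with illustrative numbers:

```python
import math

def airspeed_from_pressure(delta_p, rho=1.225):
    """Test-section speed from the stagnation-minus-static pressure
    difference (Pa), valid for low-speed (incompressible) flow:
    p0 - p = 0.5 * rho * v**2, so v = sqrt(2 * delta_p / rho)."""
    return math.sqrt(2.0 * delta_p / rho)

v = airspeed_from_pressure(612.5)  # 612.5 Pa of dynamic pressure
print(round(v, 1))  # sqrt(2 * 612.5 / 1.225) = sqrt(1000) ~ 31.6 m/s
```

At higher Mach numbers the incompressible form is no longer valid and compressibility corrections are needed.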
https://en.wikipedia.org/wiki/Wind_tunnel
Windage is a term used in aerodynamics, firearms ballistics, and automobiles that mainly relates to the effects of air (e.g., wind) on an object of interest. The term is also used for the similar effects of liquids, such as oil. Windage is a force created on an object by friction when there is relative movement between air and the object. Windage loss is the reduction in efficiency due to windage forces. For example, electric motors are affected by friction between the rotor and the air. [ 1 ] Large alternators have significant losses due to windage. To reduce these losses, hydrogen gas may be used, since it is less dense. [ 2 ] There are two causes of windage. Aerodynamic streamlining can be used to reduce windage. There is a similar hydrodynamic effect, hydrodynamic drag. In firearms parlance, the word windage refers to the sight adjustment used to compensate for the horizontal deviation of the projectile trajectory from the intended point of impact due to wind drift or the Coriolis effect. By contrast, the adjustment for the vertical deviation is the elevation. The colloquial term "Kentucky windage" refers to the practice of holding the aim to the upwind side of the target (also known as deflection shooting or "leading" the wind) to compensate for wind drift, without actually changing the existing adjustment settings on the gunsight. [ 3 ] In muzzleloading firearms, windage also refers to the difference in diameter between the bore and the ball, especially in muskets and cannons. [ 4 ] The bore gap allows the shot to be loaded quickly, but reduces the efficiency of the weapon's internal ballistics, as it allows gas to leak past the projectile. It also reduces accuracy, as the ball takes a zig-zag path along the barrel, emerging from the muzzle at an unpredictable angle.
[ 5 ] In automotive parlance, windage refers to parasitic drag on the crankshaft due to sump oil splashing on the crank train during rough driving, as well as dissipating energy in turbulence from the crank train moving the crankcase gas and oil mist at high RPM. Windage may also inhibit the migration of oil into the sump and back to the oil pump, creating lubrication problems. Some manufacturers and aftermarket vendors have developed special scrapers to remove excess oil from the counterweights and windage screens to create a barrier between the crankshaft and oil sump. [ 6 ] [ 7 ]
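The benefit of hydrogen cooling mentioned above can be estimated with a first-order sketch: at fixed rotor speed and geometry, aerodynamic drag losses scale roughly linearly with gas density. The densities below are approximate room-temperature values, and real machines also differ in operating pressure and heat-transfer behaviour:

```python
# Windage (drag) loss scales roughly linearly with gas density at a fixed
# rotor speed and geometry, since drag power ~ 0.5 * rho * v**3 * A * C_d.
rho_air = 1.20        # kg/m^3, air at ~20 C
rho_hydrogen = 0.084  # kg/m^3, hydrogen at ~20 C

reduction = 1.0 - rho_hydrogen / rho_air
print(f"{reduction:.0%}")  # roughly 93% lower windage loss in hydrogen
```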
https://en.wikipedia.org/wiki/Windage
A winding engine is a stationary engine used to control a cable, for example to power a mining hoist at a pit head. Electric hoist controllers have replaced proper winding engines in modern mining, but use electric motors that are also traditionally referred to as winding engines. Early winding engines were hand-powered or, more usually, horse-powered. [ 1 ] The first powered winding engines were stationary steam engines. The demand for winding engines was one factor that drove James Watt to develop his rotative beam engine, with its ability to turn a winding drum continuously, rather than the early reciprocating beam engines that were only useful for working pumps. [ 2 ] Winding engines differ from most other stationary steam engines in that, like a steam locomotive, they need to be able to stop frequently and also to reverse. This requires more complex valve gear and other controls than are needed on engines used in mills or to drive pumps.
https://en.wikipedia.org/wiki/Winding_engine
A winding machine or winder is a machine for wrapping string, twine, cord, thread, yarn, rope, wire, ribbon, tape, etc. onto a spool, bobbin, reel, etc. [ 1 ] Winders are used heavily in textile manufacturing, especially in preparation for weaving, where the yarn is wound onto a bobbin and then used in a shuttle. Ball winders, such as the Scottish Liaghra, are another type of winder that winds yarn from skein form into balls. Ball winders are commonly used by knitters and occasionally by spinners. Winders have a center roll (a bobbin, spool, reel, belt-winding shell, etc.) on which the material is wound up. Often there are metal bars that travel through the center of the roll and are shaped according to their intended purpose: a circular bar facilitates greater speed, while a square bar provides greater potential for torque. Edge sensors are used to sense how full the center roll is. They are mounted on adjustable slides to accommodate many different widths, as the width increases as the center roll is filled. The sensitivity of the sensor depends on the required speed of operation. Winding machines are classified based on the materials they are winding [ 2 ] [ 3 ] and on their mode of operation. The benefits of automatic splicing add up to significantly increased productivity, greater quality control, and reduced waste. An automatic splicer consists of a tail grabber and a splice-initiation technique based on the automatically calculated roll diameter. A precision shear wheel and anvil mechanism guarantees a clean cut and no overlap. Splicing techniques are divided into two major categories based on the type of joint: butt splicing, where adhesives are used, and lap splicing, where a lap joint is made by applying heat and pressure. Some winders have sensors built in to monitor the web (the thread, wire, etc. that is being wound). There are three common types of sensors. An optical sensor is placed above or under the web to be controlled.
Standard sensors are not affected by dirt, steam, or temperatures up to 160 degrees Celsius. Optical sensors work by sending a beam of light at the web and detecting whether the light is reflected. The cutting unit consists of a knife blade which is used to cut the web when the maximum roll diameter is reached; the blade is actuated by either pneumatic or electrical actuators. The rolls need to be changed when the maximum roll diameter has been reached. This can be done either manually or automatically. The control system consists of a regulating circuit which detects the web speed and the speed of the carriage. The regulator sets the web speed and the speed of the carriage by comparing the web tension with an adjustable web-tension setpoint. For paper products, modern automated winders are capable of high-speed roll changes and core loading. Toilet-tissue winders are able to change over at running speed, while paper and board winders need to stop the winding process in order to cut the sheet and glue the paper onto a new wind-up core. Top-line winders introduce the paper core through a maintenance pit downstairs, with web interruption times as low as 15 seconds. More traditional winders require an actual web stoppage, for a changeover time of typically 45 seconds. These efficiencies enable modern paper mills to maintain consistent production over each winder for each paper machine, while continuing to maintain both machine health and product quality.
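The regulation problem described here, keeping web speed and tension steady while the roll diameter grows, can be sketched numerically: at constant web speed v the spindle must turn at v/(πD) revolutions per unit time, and D follows from the conserved cross-sectional area of the wound web. The dimensions below are hypothetical:

```python
import math

def spindle_rpm(web_speed, roll_diameter):
    """Spindle speed (rev/min) that winds web at a constant speed (m/s)."""
    return web_speed / (math.pi * roll_diameter) * 60.0

def roll_diameter(core_diameter, wound_length, thickness):
    """Roll diameter (m) after winding a given length of web of given
    thickness, from conserved cross-section: pi/4*(D^2 - d^2) = L * t."""
    return math.sqrt(core_diameter ** 2 + 4.0 * wound_length * thickness / math.pi)

core = 0.10                                  # 10 cm core (hypothetical)
d_full = roll_diameter(core, 2000.0, 60e-6)  # 2 km of 60-micron web
print(round(d_full, 3))              # full-roll diameter grows to ~0.4 m
print(round(spindle_rpm(10.0, core)))    # rpm at the start of the wind
print(round(spindle_rpm(10.0, d_full)))  # rpm near the end: roughly 4x slower
```

A regulator that held spindle speed constant instead would see the web speed, and hence the tension, climb as the roll fills; this is why the circuit compares tension against a setpoint.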
https://en.wikipedia.org/wiki/Winding_machine
In Win32 application programming, WindowProc (or window procedure), also known as WndProc, is a user-defined callback function that processes messages sent to a window. This function is specified when an application registers its window class and can be named anything (not necessarily WindowProc). The window procedure is responsible for handling all messages that are sent to a window. The function prototype of WindowProc is given by LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam). hwnd is a handle to the window to which the message was sent, and uMsg identifies the actual message by its identifier, as specified in winuser.h. wParam and lParam are parameters whose meaning depends on the message. An application should identify the message and take the required action. Hundreds of different messages are produced as a result of various events taking place in the system, and typically an application processes only a small fraction of these messages. To ensure that all messages are processed, Windows provides a default window procedure called DefWindowProc that provides default processing for messages that the application itself does not process. An application usually calls DefWindowProc at the end of its own WindowProc function, so that unprocessed messages can be passed down to the default procedure.
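The dispatch pattern described above, handling the few messages the application cares about and passing everything else to a default handler, can be sketched in a platform-neutral Python analogy (this is not actual Win32 code; the message IDs are the real Win32 values, but the handlers and return values are illustrative):

```python
# Python analogy of the WindowProc / DefWindowProc dispatch pattern.
WM_DESTROY = 0x0002
WM_PAINT = 0x000F
WM_MOUSEMOVE = 0x0200

def def_window_proc(hwnd, msg, wparam, lparam):
    """Stand-in for DefWindowProc: default processing for unhandled messages."""
    return 0

def window_proc(hwnd, msg, wparam, lparam):
    if msg == WM_PAINT:
        return "painted"        # application-specific handling
    if msg == WM_DESTROY:
        return "shutting down"  # application-specific handling
    # Unprocessed messages are passed down to the default procedure.
    return def_window_proc(hwnd, msg, wparam, lparam)

print(window_proc(0, WM_PAINT, 0, 0))      # handled by the application
print(window_proc(0, WM_MOUSEMOVE, 0, 0))  # falls through to the default
```

The final fall-through line mirrors the usual practice of ending a real WindowProc with a call to DefWindowProc.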
https://en.wikipedia.org/wiki/WindowProc