| id (int64, 580 to 79M) | url (string, 31–175 chars) | text (string, 9–245k chars) | source (string, 1–109 chars) | categories (160 classes) | token_count (int64, 3–51.8k) |
|---|---|---|---|---|---|
4,255,637 | https://en.wikipedia.org/wiki/Games%20People%20Play%20%28book%29 | Games People Play: The Psychology of Human Relationships is a 1964 book by psychiatrist Eric Berne. The book was a bestseller at the time of its publication, despite drawing academic criticism for some of the psychoanalytic theories it presented. It popularized Berne's model of transactional analysis among a wide audience, and has been considered one of the first pop psychology books.
Background
The author Eric Berne was a psychiatrist specializing in psychotherapy who began developing alternate theories of interpersonal relationship dynamics in the 1950s. He sought to explain recurring patterns of interpersonal conflicts that he observed, which eventually became the basis of transactional analysis. After being rejected by a local psychoanalytic institute, he focused on writing about his own theories. In 1961, he published Transactional Analysis in Psychotherapy. That book was followed by Games People Play, in 1964. Berne did not intend for Games People Play to explore all aspects of transactional analysis, viewing it instead as an introduction to some of the concepts and patterns he identified. He borrowed money from friends and used his own savings to publish the book.
Summary
In the first half of the book, Berne introduces his theory of transactional analysis as a way of interpreting social interactions. He proposes that individuals encompass three roles or ego states, known as the Parent, the Adult, and the Child, which they switch between. He postulates that while Adult to Adult interactions are largely healthy, dysfunctional interactions can arise when people take on mismatched roles such as Parent and Child or Child and Adult.
The second half of the book catalogues a series of "mind games" identified by Berne, in which people interact through a patterned and predictable series of "transactions" based on these mismatched roles. He states that although these interactions may seem plausible, they are actually a way to conceal hidden motivations under scripted interactions with a predefined outcome. The book uses casual, often humorous phrases such as "See What You Made Me Do," "Why Don't You — Yes But," and "Ain't It Awful" as a way of briefly describing each game. Berne describes the "winner" of these mind games as the person that returns to the Adult ego-state first.
Reception and influence
Commercial performance
The book was a commercial success, and reached fifth place on The New York Times Best Seller list in March 1966. It has been described as one of the first "pop psychology" books. As of 1965, there were eight additional printings after the initial run of 3,000, and a total of 83,000 copies had been published. A Time magazine article titled "The Names of the Games" speculated that the book's popularity was due to its applications for both self-help and "cocktail party talk." Carol M. Taylor, in the Florida Communication Journal, noted that many concepts and terms from transactional analysis had made their way into everyday speech.
The book was republished as an audiobook in 2012.
Critical reception
Despite its popularity among lay readership, Berne's model of interpersonal relationships received criticism from academics. A 1974 article by Roger W. Hite in Speech Teacher noted that although its theoretical basis had inspired numerous subsequent publications, there was little research or scientific support for it. Ben L. Glancy in a review for Quarterly Journal of Speech described Berne's work as "parlor psychiatry and party-time psychoanalysis." He wrote that the book oversimplified interpersonal relationships and was "antithetical" to contemporary psychological research. Some scholars, including proponents of transactional analysis, have expressed concern over the popularization of oversimplified psychological concepts as self-help methods. In his overview of transactional analysis in Interpersonal Communication, Peter Hartley noted the relative lack of academic review of, and interest in, popular mental healthcare as opposed to physical healthcare.
See also
I'm OK – You're OK
References
External links
Official website
Popular psychology books
Transactional analysis
Self-help books
1964 non-fiction books
Books about game theory
Books about games
Play (activity)
1964 quotations
Grove Press books | Games People Play (book) | Biology | 831 |
9,667,001 | https://en.wikipedia.org/wiki/Social%20development%20theory | Social development theory attempts to explain qualitative changes in the structure and framework of society that help the society better realize its aims and objectives. Development can be defined in a manner applicable to all societies at all historical periods as an upward ascending movement featuring greater levels of energy, efficiency, quality, productivity, complexity, comprehension, creativity, mastery, enjoyment and accomplishment. Development is a process of social change, not merely a set of policies and programs instituted for some specific results. During the last five centuries this process has picked up in speed and intensity, and during the last five decades has witnessed a marked surge in acceleration.
The basic mechanism driving social change is increasing awareness leading to better organization. When society senses new and better opportunities for progress it develops new forms of organization to exploit these new openings successfully. The new forms of organization are better able to harness the available social energies and skills and resources to use the opportunities to get the intended results.
Development is governed by many factors that influence the results of developmental efforts. There must be a motive that drives the social change and essential preconditions for that change to occur. The motive must be powerful enough to overcome obstructions that impede that change from occurring. Development also requires resources such as capital, technology, and supporting infrastructure.
Development is the result of society's capacity to organize resources to meet challenges and opportunities. Society passes through well-defined stages in the course of its development. They are nomadic hunting and gathering, rural agrarian, urban, commercial, industrial, and post-industrial societies. Pioneers introduce new ideas, practices, and habits that conservative elements initially resist. At a later stage, innovations are accepted, imitated, organized, and used by other members of the community. Organizational improvements introduced to support the innovations can take place simultaneously at four different levels—physical, social, mental, and psychological. Moreover, four different types of resources are involved in promoting development. Of these four, physical resources are most visible, but least capable of expansion. Productivity of resources increases enormously as the quality of organization and level of knowledge inputs rise.
The pace and scope of development vary according to the stage society is in. The three main stages are physical, vital (vital refers to the dynamic and nervous social energies of humanity that propel individuals to accomplish), and mental.
Terminology
Though the term development usually refers to economic progress, it can apply to political, social, and technological progress as well. These various sectors of society are so intertwined that it is difficult to neatly separate them. Development in all these sectors is governed by the same principles and laws, and therefore the term applies uniformly.
Economic development and human development need not mean the same thing. Strategies and policies aimed at greater growth may produce greater income in a country without improving the average living standard. This happened in oil-producing Middle Eastern countries—a surge in oil prices boosted their national income without much benefit to poorer citizens. Conversely, people-oriented programs and policies can improve health, education, living standards, and other quality-of-life measures with no special emphasis on monetary growth. This occurred in the 30 years of socialist and communist rule in Kerala in India.
Four related but distinct terms and phenomena form successive steps in a graded series: survival, growth, development, and evolution. Survival refers to a subsistence lifestyle with no marked qualitative changes in living standards. Growth refers to horizontal expansion in the existing plane characterized by quantitative expansion—such as a farmer increasing the area under cultivation, or a retailer opening more stores. Development refers to a vertical shift in the level of operations that causes qualitative changes, such as a retailer turning into a manufacturer or an elementary school turning into a high school.
Human development
Development is a human process, in the sense that human beings, not material factors, drive development. The energy and aspiration of people who seek development form the motive force that drives development. People's awareness may decide the direction of development. Their efficiency, productivity, creativity, and organizational capacities determine the level of people's accomplishment and enjoyment. Development is the outer realization of latent inner potentials. The level of people's education, intensity of their aspiration and energies, quality of their attitudes and values, skills and information all affect the extent and pace of development. These factors come into play whether it is the development of the individual, family, community, nation, or the whole world.
Process of emergence of new activities in society
Unconscious vs. conscious development
Human development normally proceeds from experience to comprehension. As society develops over centuries, it accumulates the experience of countless pioneers. The essence of that experience becomes the formula for accomplishment and success. The fact that experience precedes knowledge can be taken to mean that development is an unconscious process that gets carried out first, with knowledge of it becoming conscious only later. Unconscious refers to activities that people carry out without knowing what the end results will be, or where their actions will lead. They carry out the acts without knowing the conditions required for success.
Role of pioneering individuals
The subconscious knowledge gathered by society matures and breaks out on the surface in the form of new ideas—espoused by pioneers who also take new initiatives to give expression to those ideas. Those initiatives may call for new strategies and new organizations, which conservative elements may resist. If the pioneer's initiatives succeed, it encourages imitation and slow propagation in the rest of the community. Later, growing success leads to society assimilating the new practice, and it becomes regularized and institutionalized. This can be viewed in three distinct phases: social preparedness, initiative of pioneers, and assimilation by the society.
The pioneer as such plays an important role in the development process—since through that person, unconscious knowledge becomes conscious. The awakening comes to the lone receptive individual first, and that person spreads the awakening to the rest of the society. Though pioneers appear as lone individuals, they act as conscious representatives of society as a whole, and their role should be viewed in that light. (Cleveland, Harlan, and Jacobs, Garry, "The Genetic Code for Social Development", in: Human Choice, World Academy of Art & Science, USA, 1999, p. 7.)
Imitation of the pioneer
Though a pioneer comes up with innovative ideas, very often the initial response is one of indifference, ridicule, or even outright hostility. If the pioneer persists and succeeds in an initiative, that person's efforts may eventually get the endorsement of the public. That endorsement tempts others to imitate the pioneer. If they also succeed, news spreads and brings wider acceptance. Conscious efforts to lend organizational support to the new initiative help institutionalize the innovation.
Organization of new activities
Organization is the human capacity to harness all available information, knowledge, resources, technology, infrastructure, and human skills to exploit new opportunities—and to face challenges and hurdles that block progress. Development comes through improvements in this capacity for organization. In other words, development comes through the emergence of better organizations that enhance society's capacity to make use of opportunities and face challenges.
The development of organizations may come through the formulation of new laws and regulations or new systems. Each new step of progress brings a corresponding new organization. Increasing European international trade in the 16th and 17th centuries demanded corresponding development in the banking industry and new commercial laws and civil arbitration facilities. New types of business ventures were formed to attract the capital needed to finance expanding trade. As a result, a new business entity appeared—the joint-stock company, which limited the investors' liability to the extent of their personal investment without endangering other properties.
Each new developmental advance is accompanied by new or more suitable organizations that facilitate that advance. Often, existing inadequate organizations must change to accommodate new advances.
Many countries have introduced scores of new reforms and procedures—such as the release of business activities directories, franchising, lease purchase, service, credit rating, collection agencies, industrial estates, free trade zones, and credit cards. Additionally, a diverse range of internet services have formed. Each new facility improves effective use of available social energies for productive purposes. The importance of these facilities for speeding development is apparent when they are absent. When Eastern European countries wanted to transition to market-type economies, they were seriously hampered in their efforts due to the absence of supportive systems and facilities.
Organization matures into institution
At a particular stage, organizations mature into institutions that become part of society. Beyond this point, an organization does not need laws or agencies to foster growth or ensure a continued presence. The transformation of an organization into an institution signifies society's total acceptance of that new organization.
The income tax office is an example of an organization that is actively maintained by the enactment of laws and the formation of an office for procuring taxes. Without active governmental support, this organization would disappear, as it does not enjoy universal public support. On the other hand, the institution of marriage is universally accepted, and would persist even if governments withdrew regulations that demand registration of marriage and impose age restrictions. The institution of marriage is sustained by the weight of tradition, not by government agencies and legal enactments.
Cultural transmission by the family
Families play a major role in the propagation of new activities once they win the support of the society. A family is a miniature version of the larger society—acceptance by the larger entity is reflected in the smaller entity. The family educates the younger generation and transmits social values like self-restraint, responsibility, skills, and occupational training. Though children do not follow their parents' footsteps as much as they once did, parents still mold their children's attitudes and thoughts regarding careers and future occupations. When families propagate a new activity, it signals that the new activity has become an integral part of the society.
Education
One of the most powerful means of propagating and sustaining new developments is the educational system in a society. Education transmits society's collective knowledge from one generation to the next. It equips each new generation to face future opportunities and challenges with knowledge gathered from the past. It shows the young generation the opportunities ahead for them, and thereby raises their aspiration to achieve more. Information imparted by education raises the level of expectations of youth, as well as aspirations for higher income. It also equips youth with the mental capacity to devise ways and means to improve productivity and enhance living standards.
Society can be conceived as a complex fabric that consists of interrelated activities, systems, and organizations. Development occurs when this complex fabric improves its own organization. That organizational improvement can take place simultaneously in several dimensions.
Quantitative expansion in the volume of social activities
Qualitative expansion in the content of all those elements that make up the social fabric
Geographic extension of the social fabric to bring more of the population under the cover of that fabric
Integration of existing and new organizations so the social fabric functions more efficiently
Such organizational innovations occur all the time, as a continuous process. New organizations emerge whenever a new developmental stage is reached, and old organizations are modified to suit new developmental requirements. The impact of these new organizations may be powerful enough to make people believe they are powerful in their own right—but it is society that creates the new organizations required to achieve its objectives.
The direction that the developmental process takes is influenced by the population's awareness of opportunities. Increasing awareness leads to greater aspiration, which releases greater energy that helps bring about greater accomplishment.
Resources
Since the time of the English economist Thomas Malthus, some have thought that capacity for development is limited by availability of natural resources. Resources can be divided into four major categories: physical, social, mental, and human. Land, water, minerals and oil, etc. constitute physical resources. Social resources consist of society's capacity to manage and direct complex systems and activities. Knowledge, information and technology are mental resources. The energy, skill and capacities of people constitute human resources.
The science of economics is much concerned with scarcity of resources. Though physical resources are limited, social, mental, and human resources are not subject to inherent limits. Even if these appear limited, there is no fixity about the limitation, and these resources continue to expand over time. That expansion can be accelerated by the use of appropriate strategies. In recent decades the rate of growth of these three resources has accelerated dramatically.
The role of physical resources tends to diminish as society moves to higher developmental levels. Correspondingly, the role of non-material resources increases as development advances. One of the most important non-material resources is information, which has become a key input. Information is a non-material resource that is not exhausted by distribution or sharing. Greater access to information helps increase the pace of its development. Ready access to information about economic factors helps investors transfer capital to sectors and areas where it fetches a higher return. Greater input of non-material resources helps explain the rising productivity of societies in spite of a limited physical resource base.
Application of higher non-material inputs also raises the productivity of physical inputs. Modern technology has helped increase the proven sources of oil by 50% in recent years—and at the same time, reduced the cost of search operations by 75%. Moreover, technology shows it is possible to reduce the amount of physical inputs in a wide range of activities. Scientific agricultural methods demonstrated that soil productivity could be raised through synthetic fertilizers. Dutch farm scientists have demonstrated that a minimal water consumption of 1.4 liters is enough to raise a kilogram of vegetables, compared to the thousand liters that traditional irrigation methods normally require.
Henry Ford's assembly line techniques reduced the man-hours of labor required to deliver a car from 783 minutes to 93 minutes. These examples show that the greater input of higher non-material resources can raise the productivity of physical resources and thereby extend their limits.
Technological development
When the mind engages in pure creative thinking, it comes up with new thoughts and ideas. When it applies itself to society it can come up with new organizations. When it turns to the study of nature, it discovers nature's laws and mechanisms. When it applies itself to technology, it makes new discoveries and practical inventions that boost productivity. Technical creativity has had an erratic course through history, with some intense periods of creative output followed by some dull and inactive periods. However, the period since 1700 has been marked by an intense burst of technological creativity that is multiplying human capacities exponentially.
Though many reasons can be cited for the accelerating pace of technological inventions, a major cause is the role played by mental creativity in an increasing atmosphere of freedom. Political freedom and liberation from religious dogma had a powerful impact on creative thinking during the Age of Enlightenment. Dogmas and superstitions greatly restricted mental creativity. For example, when the astronomer Copernicus proposed a heliocentric view of the world, the church rejected it because it did not conform to established religious doctrine. When Galileo used a telescope to view the planets, the church condemned the device as an instrument of the devil, as it seemed so unusual. The Enlightenment shattered such obscurantist fetters on freedom of thought. From then on, the spirit of experimentation thrived.
Though technological inventions have increased the pace of development, the tendency to view developmental accomplishments as mainly powered by technology misses the bigger picture. Technological innovation was spurred by general advances in the social organization of knowledge. In the Middle Ages, efforts at scientific progress were few, mainly because there was no effective system to preserve and disseminate knowledge. Since there was no organized protection for patent rights, scientists and inventors were secretive about observations and discoveries. Establishment of scientific associations and scientific journals spurred the exchange of knowledge and created a written record for posterity.
Technological development depends on social organizations. Nobel laureate economist Arthur Lewis observed that the mechanization of factory production in England—the Industrial Revolution—was a direct result of the reorganization of English agriculture. Enclosure of common lands in England generated surplus income for farmers. That extra income generated additional raw materials for industrial processing, and produced greater demand for industrial products that traditional manufacturing processes could not meet.
The opening of sea trade further boosted demand for industrial production for export. Factory production increased many times when production was reorganized to use steam energy, combined with moving assembly lines, specialization, and division of labor. Thus, technological development was both a result of and a contributing factor to the overall development of society.
Individual scientific inventions do not spring out of the blue. They build on past accomplishments in an incremental manner, and give a conscious form to the unconscious knowledge that society gathers over time. As pioneers are more conscious than the surrounding community, their inventions normally meet with initial resistance, which recedes over time as their inventions gain wider acceptance. If opposition is stronger than the pioneer, then the introduction of an invention gets delayed.
In medieval times, when guilds tightly controlled their members, medical progress was slow mainly because physicians were secretive about their remedies. When Denis Papin demonstrated his steam engine, German naval authorities refused to accept it, fearing it would lead to increased unemployment. John Kay, who developed a flying shuttle textile loom, was physically threatened by English weavers who feared the loss of their jobs. He fled to France where his invention was more favorably received.
The widespread use of computers and the application of biotechnology raise similar resistance among the public today. Whether the public receives an invention readily or resists it depends on their awareness of, and willingness to entertain, rapid change. Regardless of the response, technological invention occurs as part of overall social development, not as an isolated field of activity.
Limits to development
The concept of inherent limits to development arose mainly because past development was determined largely by availability of physical resources. Humanity relied more on muscle-power than thought-power to accomplish work. That is no longer the case. Today, mental resources are the primary determinant of development. Where people drove a simple bullock cart, they now design ships and aircraft that carry huge loads across immense distances. Humanity has tamed rivers, cleared jungles and even turned arid desert lands into cultivable lands through irrigation.
By using intelligence, society has turned sand into powerful silicon chips that carry huge amounts of information and form the basis of computers. Since there is no inherent limit to the expansion of society's mental resources, the notion of limits to growth cannot be ultimately binding.
Three stages of development
Society's developmental journey is marked by three stages: physical, vital, and mental. These are not clear-cut stages, but overlap: all three are present in any society at any given time, with one of them predominant while the other two play subordinate roles. The term 'vital' denotes the emotional and nervous energies that empower society's drive towards accomplishment and express most directly in the interactions between human beings. Before the full development of mind, it is these vital energies that predominate in human personality and gradually yield the ground as the mental element becomes stronger. The speed and circumstances of social transition from one stage to another vary.
Physical stage
The physical stage is characterized by the domination of the physical element of the human personality. It is marked by minimal technological advancement and by dependence on manual labour for agricultural practices. During this phase, society is preoccupied with bare survival and subsistence. Societal structures are rigid, with little room for social mobility or advancement beyond one's inherited status or position; inherited wealth and position rule the roost. Land is the main asset and productive resource, and wealth is measured by the size of land holdings. This is the agrarian and feudal phase of society. Economic transactions revolve primarily around barter and the exchange of goods rather than money, and commerce and money play a relatively minor role. Feudal lords and military chiefs function as the leaders of the society. As innovative thinking and experimental approaches are discouraged, people follow tradition unwaveringly and show little inclination to think outside of established guidelines, so there is little innovation and change. Occupational skills are passed down from parent to child by a long process of apprenticeship. Despite its limitations, the physical stage lays the foundation for subsequent phases of development, serving as a crucial starting point for societal evolution and progress.
Guilds restrict the dissemination of trade secrets and technical knowledge. The Church controls the spread of new knowledge and tries to smother new ideas that do not agree with established dogmas. The physical stage comes to an end when the reorganization of agriculture gives scope for commerce and industry to expand. This happened in Europe during the 18th century when political revolutions abolished feudalism and the Industrial Revolution gave a boost to factory production. The shift to the vital and mental stages helps to break the bonds of tradition and inject new dynamism in social life.
Vital stage
The vital stage of society is infused with dynamism and change. The vital activities of society expand markedly. Society becomes curious, innovative and adventurous. During the vital stage emphasis shifts from interactions with the physical environment to social interactions between people. Trade supplants agriculture as the principal source of wealth.
The dawning of this phase in Europe led to exploratory voyages across the seas leading to the discovery of new lands and an expansion of sea trade. Equally important, society at this time began to more effectively harness the power of money. Commerce took over from agriculture, and money replaced land as the most productive resource. The center of life shifted from the countryside to the towns where opportunities for trade and business were in greater abundance.
The center of power shifted from the aristocracy to the business class, which employed the growing power of money to gain political influence. During the vital stage, the rule of law becomes more formal and binding, providing a secure and safe environment for business to flourish. Banks, shipping companies and joint-stock companies increase in numbers to make use of the opportunities. Fresh innovative thinking leads to new ways of life that people accept as they prove beneficial. Science and experimental approaches begin to make a headway as the hold of tradition and dogma weaken. Demand for education rises.
As the vital stage matures through the expansion of the commercial and industrial complex, surplus income arises, which prompts people to spend more on items so far considered out of reach. People begin to aspire for luxury and leisure that was not possible when life was at a subsistence level.
Mental stage
This stage has three essential characteristics: practical, social, and political application of mind. The practical application of mind generates many inventions. The social application of mind leads to new and more effective types of social organization. The political application leads to changes in the political systems that empower the populace to exercise political and human rights in a free and democratic manner. These changes began in the Renaissance and Enlightenment, and gained momentum in the Reformation, which proclaimed the right of individuals to relate directly to God without the mediation of priests. The political application of mind led to the American and French Revolutions, which produced writing that first recognized the rights of the common man and gradually led to the actual enjoyment of these rights.
Organization is a mental invention. Therefore, it is not surprising that the mental stage of development is responsible for the formulation of a great number of organizational innovations. Huge business corporations have emerged that make more money than even the total earnings of some small countries. Global networks for transportation and communication now connect the nations of the world within a common unified social fabric for sea and air travel, telecommunications, weather reporting and information exchange.
In addition to spurring technological and organizational innovation, the mental phase is also marked by the increasing power of ideas to change social life. Ethical ideals have been with humanity since the dawn of civilization. But their practical application in daily social life had to wait for the mental stage of development to emerge. The proclamation of human rights and the recognition of the value of the individual have become effective only after the development of mind and spread of education. The 20th century truly emerged as the century of the common man. Political, social, economic and many other rights were extended to more and more sections of humanity with each succeeding decade.
The relative duration of these three stages and the speed of transition from one to another varies from one society to another. However broadly speaking, the essential features of the physical, vital and mental stages of development are strikingly similar and therefore quite recognizable even in societies separated by great distance and having little direct contact with one another.
Moreover, societies also learn from those who have gone through these transitions before and, therefore, may be able to make the transitions faster and better. When the Netherlands introduced primary education in 1618, it was a pioneering initiative. When Japan did the same thing late in the 19th century, it had the advantage of the experience of the US and other countries. When many Asian countries initiated primary education in the 1950s after winning independence, they could draw on the vast experience of more developed nations. This is a major reason for the quickening pace of progress.
Natural vs. planned development
Natural development is distinct from development achieved through government initiatives and planning. Natural development is the spontaneous and unconscious process of development that normally occurs: an unplanned evolution of social norms and structures shaped by factors including historical legacies, economic circumstances, cultural standards, and the inherent complexity of society. Planned development is the result of deliberate, conscious initiatives by the government to speed development through special programs and policies. Natural development is unconscious in two senses: it results from the behavior of countless individuals acting on their own, rather than from the conscious intention of the community, and society achieves the results without being fully conscious of how it did so.
The natural development of democracy in Europe over the past few centuries can be contrasted with the conscious effort to introduce democratic forms of government in former colonial nations after World War II. Planned development is also largely unconscious: the goals may be conscious, but the most effective means for achieving them may remain poorly understood. Planned development can become fully conscious only when the process of development itself is fully understood; its achievement relies on a comprehensive understanding of fundamental social dynamics and a sophisticated implementation strategy. While in planned development the government is the initiator, in natural development the initiative comes from private individuals, groups, and community organizations. Whoever initiates, the principles and policies are the same, and success is assured only when the right conditions are met and the right principles are followed.
Summary
Social development theory offers a comprehensive framework for understanding the qualitative changes in society over time. It highlights the role of increasing awareness and better organization in driving progress. Through stages of physical, vital, and mental development, societies evolve, embracing innovation and adapting to change.
See also
Idea of Progress
Social change
World systems theory
References
Sociological theories
Economic development
Human development
International development
Technology development | Social development theory | Biology | 5,572 |
152,464 | https://en.wikipedia.org/wiki/Nuclide | Nuclides (or nucleides, from nucleus, also known as nuclear species) are a class of atoms characterized by their number of protons, Z, their number of neutrons, N, and their nuclear energy state.
The word nuclide was coined by the American nuclear physicist Truman P. Kohman in 1947. Kohman defined nuclide as a "species of atom characterized by the constitution of its nucleus" containing a certain number of neutrons and protons. The term thus originally focused on the nucleus.
Nuclides vs isotopes
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, while the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical reactions is negligible for most elements. Even in the case of the very lightest elements, where the ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect, but it matters in some circumstances. For hydrogen, the lightest element, the isotope effect is large enough to affect biological systems strongly. In the case of helium, helium-4 obeys Bose–Einstein statistics, while helium-3 obeys Fermi–Dirac statistics. Since isotope is the older term, it is better known than nuclide, and is still occasionally used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine.
Types of nuclides
Although the words nuclide and isotope are often used interchangeably, being isotopes is actually only one relation between nuclides. The following table names some other relations.
A set of nuclides with equal proton number (atomic number), i.e., of the same chemical element but different neutron numbers, are called isotopes of the element. Particular nuclides are still often loosely called "isotopes", but the term "nuclide" is the correct one in general (i.e., when Z is not fixed). In similar manner, a set of nuclides with equal mass number A, but different atomic number, are called isobars (isobar = equal in weight), and isotones are nuclides of equal neutron number but different proton numbers. Likewise, nuclides with the same neutron excess (N − Z) are called isodiaphers. The name isotone was derived from the name isotope to emphasize that in the first group of nuclides it is the number of neutrons (n) that is constant, whereas in the second the number of protons (p).
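Since these relations reduce to simple comparisons of the proton number Z, neutron number N, mass number A = Z + N, and neutron excess N − Z, they are easy to express in code. The following Python sketch is illustrative only; the function and the example nuclides are not taken from the source:

```python
def relation(z1, n1, z2, n2):
    """Name the relation between two nuclides given by proton number Z and neutron number N."""
    a1, a2 = z1 + n1, z2 + n2          # mass numbers A = Z + N
    if (z1, n1) == (z2, n2):
        return "same Z and N (distinct nuclides only if they are different isomers)"
    if z1 == z2:
        return "isotopes (equal proton number)"
    if n1 == n2:
        return "isotones (equal neutron number)"
    if a1 == a2:
        return "isobars (equal mass number)"
    if n1 - z1 == n2 - z2:
        return "isodiaphers (equal neutron excess N - Z)"
    return "no named relation"

# Carbon-13 (Z=6, N=7) vs carbon-12 (Z=6, N=6): isotopes.
print(relation(6, 7, 6, 6))
# Carbon-14 (Z=6, N=8) vs nitrogen-14 (Z=7, N=7): isobars.
print(relation(6, 8, 7, 7))
```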
See Isotope#Notation for an explanation of the notation used for different nuclide or isotope types.
Nuclear isomers are members of a set of nuclides with equal proton number and equal mass number (thus making them by definition the same isotope), but different states of excitation. An example is the two states of the single isotope technetium-99: each of these two states (technetium-99m and technetium-99) qualifies as a different nuclide, illustrating one way that nuclides may differ from isotopes (an isotope may consist of several different nuclides of different excitation states).
The longest-lived non-ground state nuclear isomer is the nuclide tantalum-180m, which has a half-life in excess of 1,000 trillion years. This nuclide occurs primordially, and has never been observed to decay to the ground state. (In contrast, the ground state nuclide tantalum-180 does not occur primordially, since it decays with a half-life of only 8 hours to 180Hf (86%) or 180W (14%).)
There are 251 nuclides in nature that have never been observed to decay. They occur among the 80 different elements that have one or more stable isotopes. See stable nuclide and primordial nuclide. Unstable nuclides are radioactive and are called radionuclides. Their decay products ('daughter' products) are called radiogenic nuclides.
Origins of naturally occurring radionuclides
Natural radionuclides may be conveniently subdivided into three types. First, those whose half-lives t1/2 are at least 2% as long as the age of the Earth (about 4.5 billion years); for practical purposes, those with half-lives less than about 10% of the age of the Earth are difficult to detect. These are remnants of nucleosynthesis that occurred in stars before the formation of the Solar System. For example, the isotope uranium-238 (t1/2 ≈ 4.5 billion years) of uranium is still fairly abundant in nature, but the shorter-lived isotope uranium-235 (t1/2 ≈ 0.7 billion years) is 138 times rarer. About 34 of these nuclides have been discovered (see List of nuclides and Primordial nuclide for details).
The second group of radionuclides that exist naturally consists of radiogenic nuclides such as radium-226 (t1/2 = 1,600 years), an isotope of radium, which are formed by radioactive decay. They occur in the decay chains of primordial isotopes of uranium or thorium. Some of these nuclides are very short-lived, such as isotopes of francium. There exist about 51 of these daughter nuclides that have half-lives too short to be primordial, and which exist in nature solely due to decay from longer-lived radioactive primordial nuclides.
The third group consists of nuclides that are continuously being made in another fashion that is not simple spontaneous radioactive decay (i.e., only one atom involved with no incoming particle) but instead involves a natural nuclear reaction. These occur when atoms react with natural neutrons (from cosmic rays, spontaneous fission, or other sources), or are bombarded directly with cosmic rays. The latter, if non-primordial, are called cosmogenic nuclides. Other types of natural nuclear reactions produce nuclides that are said to be nucleogenic nuclides.
Examples of nuclides made by nuclear reactions are cosmogenic carbon-14 (radiocarbon), which is made by cosmic-ray bombardment of other elements, and nucleogenic plutonium-239, which is still being created by neutron bombardment of natural uranium-238 as a result of natural fission in uranium ores. Cosmogenic nuclides may be either stable or radioactive. If they are stable, their existence must be deduced against a background of stable nuclides, since every known stable nuclide is present on Earth primordially.
Artificially produced nuclides
Beyond the naturally occurring nuclides, more than 3000 radionuclides of varying half-lives have been artificially produced and characterized.
The known nuclides are shown in Table of nuclides. A list of primordial nuclides is given sorted by element, at List of elements by stability of isotopes. List of nuclides is sorted by half-life, for the 905 nuclides with half-lives longer than one hour.
Summary table for numbers of each class of nuclides
This is a summary table for the 905 nuclides with half-lives longer than one hour, given in list of nuclides. Note that numbers are not exact, and may change slightly in the future, if some "stable" nuclides are observed to be radioactive with very long half-lives.
Nuclear properties and stability
Atomic nuclei other than hydrogen have protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to be bound into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus. For example, although the neutron–proton ratio of helium-3 is 1:2, the neutron–proton ratio of uranium-238 is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (N = Z). The nuclide calcium-40 is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons.
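As a purely illustrative check of the ratios quoted above (not part of the source text), the neutron-to-proton ratios can be computed directly from the proton and neutron counts:

```python
# Illustrative neutron-to-proton ratios for the nuclides mentioned above.
nuclides = {
    "helium-3":    (2, 1),     # (Z protons, N neutrons): ratio 1:2
    "calcium-40":  (20, 20),   # heaviest stable nuclide with N = Z
    "uranium-238": (92, 146),  # ratio greater than 3:2
}
for name, (z, n) in nuclides.items():
    print(f"{name}: N/Z = {n}/{z} = {n / z:.2f}")
```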
Even and odd nucleon numbers
The proton–neutron ratio is not the only factor affecting nuclear stability. It depends also on even or odd parity of its atomic number Z, neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture or more exotic means, such as spontaneous fission and cluster decay.
The majority of stable nuclides are even-proton–even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton–even-neutron, and even-proton–odd-neutron nuclides. Odd-proton–odd-neutron nuclides (and nuclei) are the least common.
See also
Isotope (much more information on abundance of stable nuclides)
List of elements by stability of isotopes
List of nuclides (sorted by half-life)
Table of nuclides
Alpha nuclide
Monoisotopic element
Mononuclidic element
Primordial element
Radionuclide
Hypernucleus
References
External links
Livechart - Table of Nuclides at The International Atomic Energy Agency
Nuclear physics | Nuclide | Physics,Chemistry | 2,140 |
60,575,836 | https://en.wikipedia.org/wiki/Unilalianism | Unilalianism (/junɨˈleɪ.li.ən.ɪzəm/), better known as Unilalia, is a portmanteau combining the Latin unus with the ancient Greek laliá – together, the word translates loosely to "one tongue [language]". It refers to a growing new underground art and aesthetic movement created by Carter Wilson and his brother, Ellis, developing in the Seattle and greater Puget Sound region with its roots in Oakland, California.
List of Unilalian shows
This is a list of shows of Unilalianism.
Washington
2019 - Live Installation 001 (Olympia)
2019 - Live Installation 002 (Olympia)
2019 - Carter Wilson Album Release Celebration & Cafe Red Benefit Show (Seattle)
2019 - Unilalia Live! 001 (Seattle)
2019 - Unilalia Live! 002 (Seattle)
2019 - Unilalia Live! 003 (Olympia)
2020 - Unilalia Live! 004: Blackbox Underground (Ballard)
2020 - Unilalia Pop-Up Show (Seattle)
2020 - Unilalia Live! 005 (Seattle)
2020 - Unilalia Live! 006 (Seattle)
References
External links
The Unilalia Group
https://www.iheart.com/podcast/269-hhhnast-52465442/episode/dj-blake-interviews-carter-and-ellis-52471398/
American artist groups and collectives
Art and design organizations
Art societies
Indigenous art of the Americas
Afrofuturism
Counterculture | Unilalianism | Engineering | 319 |
64,366,263 | https://en.wikipedia.org/wiki/Developable%20roller | In geometry, a developable roller is a convex solid whose surface consists of a single continuous, developable face. While rolling on a plane, most developable rollers develop their entire surface so that all the points on the surface touch the rolling plane. All developable rollers have ruled surfaces. Four families of developable rollers have been described to date: the prime polysphericons, the convex hulls of the two disc rollers (TDR convex hulls), the polycons and the Platonicons.
Construction
Each developable roller family is based on a different construction principle. The prime polysphericons are a subfamily of the polysphericon family. They are based on bodies made by rotating regular polygons around one of their longest diagonals. These bodies are cut in two at their symmetry plane and the two halves are reunited after being rotated at an offset angle relative to each other. All prime polysphericons have two edges made of one or more circular arcs and four vertices. All of them, but the sphericon, have surfaces that consist of one kind of conic surface and one, or more, conical or cylindrical frustum surfaces.
Two-disc rollers are made of two congruent symmetrical circular or elliptical sectors. The sectors are joined to each other such that the planes in which they lie are perpendicular to each other, and their axes of symmetry coincide. The convex hulls of these structures constitute the members of the TDR convex hull family. All members of this family have two edges (the two circular or elliptical arcs). They may have either 4 vertices, as in the sphericon (which is a member of this family as well), or none, as in the oloid.
Like the prime polysphericons, the polycons are based on regular polygons but consist of identical pieces of only one type of cone with no frustum parts. The cone is created by rotating two adjacent edges of a regular polygon (and in most cases their extensions as well) around the polygon's axis of symmetry that passes through their common vertex. A polycon based on an n-gon (a polygon with n edges) has n edges and n + 2 vertices. The sphericon, which is a member of this family as well, has circular edges. The hexacon's edges are parabolic. All other polycons' edges are hyperbolic.
Like the polycons, the Platonicons are made of only one type of conic surface. Their unique feature is that each one of them circumscribes one of the five Platonic solids. Unlike the other families, this family is not infinite. 14 Platonicons have been discovered to date.
Rolling motion
Unlike axially symmetrical bodies that, if unrestricted, can perform a linear rolling motion (like the sphere or the cylinder) or a circular one (like the cone), developable rollers meander while rolling. Their motion is linear only on average. In the case of the polycons and Platonicons, as well as some of the prime polysphericons, the path of their center of mass consists of circular arcs. In the case of the prime polysphericons that have surfaces that contain cylindrical parts the path is a combination of circular arcs and straight lines. A general expression for the shape of the path of the TDR convex hulls center of mass has yet to be derived.
In order to maintain a smooth rolling motion the center of mass of a rolling body must maintain a constant height. All prime polysphericons, polycons, and Platonicons, and some of the TDR convex hulls, share this property. Some of the TDR convex hulls, like the oloid, do not possess this property. In order for a TDR convex hull to maintain constant height, a specific relation must hold between a and b, the semi-minor and semi-major axes of its elliptic arcs, and c, the distance between their centers. For example, in the case where the skeletal structure of the TDR convex hull consists of two circular sectors of radius r, the center of mass is kept at constant height when the distance between the sectors' centers is equal to √2·r.
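For the circular case, the constant-height condition can be recovered with a short support-function argument. The sketch below is illustrative and not taken from the source; it assumes the two circles of radius r lie in perpendicular planes through the common symmetry axis, with centers a distance c apart, and that the center of mass sits midway between them. Writing the unit upward normal of the supporting plane in body coordinates as (x, y, z):

```latex
h_1 = r\sqrt{x^2 + y^2} + \tfrac{c}{2}\,x ,
\qquad
h_2 = r\sqrt{x^2 + z^2} - \tfrac{c}{2}\,x .
% Resting on both arcs requires h_1 = h_2 = h; with x^2 + y^2 + z^2 = 1 this gives
4h^2 = 2r^2 + \left(2r^2 - c^2\right)x^2 ,
% so the height is independent of the rolling phase exactly when c^2 = 2r^2,
% i.e. c = \sqrt{2}\,r, and then h = r/\sqrt{2}.
```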
References
External links
Sphericon series A list of the first members of the polysphericon family and a discussion about their various kinds.
Geometric shapes
Euclidean solid geometry | Developable roller | Physics,Mathematics | 916 |
27,344,508 | https://en.wikipedia.org/wiki/Abstract%20elementary%20class | In model theory, a discipline within mathematical logic, an abstract elementary class, or AEC for short, is a class of models with a partial order similar to the relation of an elementary substructure of an elementary class in first-order model theory. They were introduced by Saharon Shelah.
Definition
$\langle K, \prec_K\rangle$, for $K$ a class of structures in some language $L = L(K)$, is an AEC if it has the following properties:
$\prec_K$ is a partial order on $K$.
If $M \prec_K N$ then $M$ is a substructure of $N$.
Isomorphisms: $K$ is closed under isomorphisms, and if $M \prec_K N$ and $f: N \cong N'$ is an isomorphism, then $f[M] \prec_K N'$.
Coherence: If $M_1 \prec_K M_3$, $M_2 \prec_K M_3$, and $M_1 \subseteq M_2$, then $M_1 \prec_K M_2$.
Tarski–Vaught chain axioms: If $\gamma$ is an ordinal and $\langle M_\alpha : \alpha < \gamma \rangle$ is a $\prec_K$-increasing chain (i.e. $\alpha < \beta < \gamma$ implies $M_\alpha \prec_K M_\beta$), then:
$\bigcup_{\alpha < \gamma} M_\alpha$ is in $K$, and $M_\alpha \prec_K \bigcup_{\beta < \gamma} M_\beta$ for every $\alpha < \gamma$.
If $M_\alpha \prec_K N$, for all $\alpha < \gamma$, then $\bigcup_{\alpha < \gamma} M_\alpha \prec_K N$.
Löwenheim–Skolem axiom: There exists an infinite cardinal $\mu \geq |L(K)|$, such that if $A$ is a subset of the universe of $M \in K$, then there is $M'$ in $K$ whose universe contains $A$ such that $\|M'\| \leq |A| + \mu$ and $M' \prec_K M$. We let $\operatorname{LS}(K)$ denote the least such $\mu$ and call it the Löwenheim–Skolem number of $K$.
Note that we usually do not care about the models of size less than the Löwenheim–Skolem number and often assume that there are none (we will adopt this convention in this article). This is justified since we can always remove all such models from an AEC without influencing its structure above the Löwenheim–Skolem number.
A $K$-embedding is a map $f: M \rightarrow N$, for $M, N \in K$, such that $f[M] \prec_K N$ and $f$ is an isomorphism from $M$ onto $f[M]$. If $K$ is clear from context, we omit it.
Examples
The following are examples of abstract elementary classes:
An elementary class is the most basic example of an AEC: if T is a first-order theory, then the class of models of T together with elementary substructure forms an AEC with Löwenheim–Skolem number |T|.
If $\psi$ is a sentence in the infinitary logic $L_{\omega_1, \omega}$, and $\mathcal{A}$ is a countable fragment of $L_{\omega_1, \omega}$ containing $\psi$, then the class of models of $\psi$, ordered by elementary substructure with respect to formulas in $\mathcal{A}$, is an AEC with Löwenheim–Skolem number $\aleph_0$. This can be generalized to other logics, like $L_{\kappa, \omega}$, or $L_{\omega_1, \omega}(Q)$, where $Q$ expresses "there exists uncountably many".
If T is a first-order countable superstable theory, the set of $\aleph_1$-saturated models of T, together with elementary substructure, is an AEC with Löwenheim–Skolem number $2^{\aleph_0}$.
Zilber's pseudo-exponential fields form an AEC.
Common assumptions
AECs are very general objects and one usually makes some of the assumptions below when studying them:
An AEC has joint embedding if any two models can be embedded inside a common model.
An AEC has no maximal model if every model has a proper extension.
An AEC has amalgamation if for any triple $(M_0, M_1, M_2)$ with $M_0 \prec_K M_1$ and $M_0 \prec_K M_2$, there is $N \in K$ and embeddings of $M_1$ and $M_2$ inside $N$ that fix $M_0$ pointwise.
Note that in elementary classes, joint embedding holds whenever the theory is complete, while amalgamation and no maximal models are well-known consequences of the compactness theorem. These three assumptions allow us to build a universal model-homogeneous monster model $\mathfrak{C}$, exactly as in the elementary case.
Another assumption that one can make is tameness.
Shelah's categoricity conjecture
Shelah introduced AECs to provide a uniform framework in which to generalize first-order classification theory. Classification theory started with Morley's categoricity theorem, so it is natural to ask whether a similar result holds in AECs. This is Shelah's eventual categoricity conjecture. It states that there should be a Hanf number for categoricity:
For every AEC K there should be a cardinal $\mu$ depending only on $\operatorname{LS}(K)$ such that if K is categorical in some $\lambda \geq \mu$ (i.e. K has exactly one (up to isomorphism) model of size $\lambda$), then K is categorical in $\lambda'$ for all $\lambda' \geq \mu$.
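Writing I(K, λ) for the number of models in K of cardinality λ counted up to isomorphism (standard notation, introduced here for illustration rather than taken from the source), the conjecture can be restated as:

```latex
\exists\, \mu = \mu(\operatorname{LS}(K)) \text{ such that }
\Bigl( \exists\, \lambda \ge \mu : I(K, \lambda) = 1 \Bigr)
\;\Longrightarrow\;
\Bigl( \forall\, \lambda' \ge \mu : I(K, \lambda') = 1 \Bigr).
```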
Shelah also has several stronger conjectures: the threshold cardinal for categoricity is the Hanf number of pseudoelementary classes in a language of cardinality LS(K). More specifically, when the class is in a countable language and axiomatizable by an $L_{\omega_1, \omega}$ sentence, the threshold number for categoricity is $\beth_{\omega_1}$. This conjecture dates back to 1976.
Several approximations have been published (see for example the results section below), assuming set-theoretic assumptions (such as the existence of large cardinals or variations of the generalized continuum hypothesis), or model-theoretic assumptions (such as amalgamation or tameness). As of 2014, the original conjecture remains open.
Results
The following are some important results about AECs. Except for the last, all results are due to Shelah.
Shelah's Presentation Theorem: Any AEC $K$ is a pseudoelementary class: it is a reduct of a class of models of a first-order theory omitting at most $2^{\operatorname{LS}(K)}$ types.
Hanf number for existence: Any AEC which has a model of size at least $\beth_{(2^{\operatorname{LS}(K)})^+}$ has models of arbitrarily large sizes.
Amalgamation from categoricity: If K is an AEC categorical in $\lambda$ and $\lambda^+$ with $2^\lambda < 2^{\lambda^+}$, then K has amalgamation for models of size $\lambda$.
Existence from categoricity: If K is a $PC_{\aleph_0}$ AEC with Löwenheim–Skolem number $\aleph_0$ and K is categorical in $\aleph_0$ and $\aleph_1$, then K has a model of size $\aleph_2$. In particular, no sentence of $L_{\omega_1, \omega}(Q)$ can have exactly one uncountable model.
Approximations to Shelah's categoricity conjecture:
Downward transfer from a successor: If K is an abstract elementary class with amalgamation that is categorical in a "high-enough" successor $\lambda^+$, then K is categorical in all high-enough $\mu \leq \lambda^+$.
Shelah's categoricity conjecture for a successor from large cardinals: If there are class-many strongly compact cardinals, then Shelah's categoricity conjecture holds when we start with categoricity at a successor.
See also
Tame abstract elementary class
Notes
References
Model theory
Category theory | Abstract elementary class | Mathematics | 1,207 |
38,090,833 | https://en.wikipedia.org/wiki/Stocks%20%28shipyard%29 | Stocks are an external framework in a shipyard used to support construction of (usually) wooden ships. They are normally associated with a slipway to allow the ship to slide down into the water. In addition to supporting the ship itself, they are typically used to give access to the ship's bottom and sides.
References
Shipbuilding | Stocks (shipyard) | Engineering | 66 |
9,105,950 | https://en.wikipedia.org/wiki/Global%20Ocean%20Data%20Analysis%20Project | The Global Ocean Data Analysis Project (GLODAP) is a synthesis project bringing together oceanographic data, featuring two major releases as of 2018. The central goal of GLODAP is to generate a global climatology of the World Ocean's carbon cycle for use in studies of both its natural and anthropogenically forced states. GLODAP is funded by the National Oceanic and Atmospheric Administration, the U.S. Department of Energy, and the National Science Foundation.
The first GLODAP release (v1.1) was produced from data collected during the 1990s by research cruises on the World Ocean Circulation Experiment, Joint Global Ocean Flux Study and Ocean-Atmosphere Exchange Study programmes. The second GLODAP release (v2) extended the first using data from cruises from 2000 to 2013. The data are available both as individual "bottle data" from sample sites, and as interpolated fields on a standard longitude, latitude, depth grid.
Dataset
The GLODAPv1.1 climatology contains analysed fields of "present day" (1990s) dissolved inorganic carbon (DIC), alkalinity, carbon-14 (14C), CFC-11 and CFC-12. The fields consist of three-dimensional, objectively-analysed global grids at 1° horizontal resolution, interpolated onto 33 standardised vertical intervals from the surface (0 m) to the abyssal seafloor (5500 m). In terms of temporal resolution, the relative scarcity of the source data mean that, unlike the World Ocean Atlas, averaged fields are only produced for the annual time-scale. The GLODAP climatology is missing data in certain oceanic provinces including the Arctic Ocean, the Caribbean Sea, the Mediterranean Sea and Maritime Southeast Asia.
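A minimal sketch of the grid geometry just described may help fix ideas (illustrative only: the variable names are invented, and the 33 standard depths listed follow the commonly cited World Ocean Atlas levels, which is an assumption here rather than a statement about the GLODAP files themselves):

```python
import numpy as np

# 1-degree horizontal grid, 33 standard depth levels from the surface to 5500 m.
lon = np.arange(0.5, 360.5, 1.0)    # longitude cell centres (360 values)
lat = np.arange(-89.5, 90.5, 1.0)   # latitude cell centres (180 values)
depth = np.array([0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250, 300,
                  400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300,
                  1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000, 4500,
                  5000, 5500])       # metres (33 values, assumed WOA levels)

# An annual-mean field such as DIC is then a single 3-D array on this grid;
# provinces without data (e.g. the Arctic Ocean) would simply remain NaN.
dic = np.full((depth.size, lat.size, lon.size), np.nan)
print(dic.shape)  # (33, 180, 360)
```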
Additionally, analysis has attempted to separate natural from anthropogenic DIC, to produce fields of pre-industrial (18th century) DIC and "present day" anthropogenic CO₂. This separation allows estimation of the magnitude of the ocean sink for anthropogenic CO₂, and is important for studies of phenomena such as ocean acidification. However, as anthropogenic DIC is chemically and physically identical to natural DIC, this separation is difficult. GLODAP used a mathematical technique known as C* (C-star) to deconvolute anthropogenic CO₂ from natural DIC (there are a number of alternative methods). This uses information about ocean biogeochemistry and surface disequilibrium together with other ocean tracers including carbon-14, CFC-11 and CFC-12 (which indicate water mass age) to try to separate out natural CO₂ from that added during the ongoing anthropogenic transient. The technique is not straightforward and has associated errors, although it is gradually being refined to improve it. Its findings are generally supported by independent predictions made by dynamic models.
The GLODAPv2 climatology largely repeats the earlier format, but makes use of the large number of observations of the ocean's carbon cycle made over the intervening period (2000–2013). The analysed "present-day" fields in the resulting dataset are normalised to year 2002. Anthropogenic carbon was estimated in GLODAPv2 using a "transit-time distribution" (TTD) method (an approach using a Green's function). In addition to updated fields of DIC (total and anthropogenic) and alkalinity, GLODAPv2 includes fields of seawater pH and calcium carbonate saturation state (Ω; omega). The latter is a non-dimensional number calculated by dividing the local carbonate ion concentration by the ambient saturation concentration for calcium carbonate (for the biomineral polymorphs calcite and aragonite), and relates to an oceanographic property, the carbonate compensation depth. Values of this below 1 indicate undersaturation, and potential dissolution, while values above 1 indicate supersaturation, and relative stability.
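As a minimal illustration of how the saturation state Ω is formed from the two concentrations described above (a sketch only: the function name and the example values are hypothetical and not taken from the GLODAPv2 files):

```python
def saturation_state(co3_observed_umol_kg, co3_saturated_umol_kg):
    """Calcium carbonate saturation state (omega): the local carbonate ion
    concentration divided by the saturation concentration for the chosen
    CaCO3 polymorph (calcite or aragonite)."""
    return co3_observed_umol_kg / co3_saturated_umol_kg

# Hypothetical surface and deep values, in micromol per kg of seawater:
surface_omega = saturation_state(200.0, 65.0)   # > 1: supersaturated, CaCO3 stable
deep_omega = saturation_state(80.0, 110.0)      # < 1: undersaturated, CaCO3 may dissolve

for name, omega in [("surface", surface_omega), ("deep", deep_omega)]:
    status = "supersaturated" if omega > 1 else "undersaturated"
    print(f"{name}: omega = {omega:.2f} ({status})")
```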
Gallery
The following panels show sea surface concentrations of fields prepared by GLODAPv1.1. The "pre-industrial" is the 18th century, while "present-day" is approximately the 1990s.
The following panels show sea surface concentrations of fields prepared by GLODAPv2. The "pre-industrial" is the 18th century, while "present-day" is normalised to 2002. Note that these properties are shown in mass units (per kilogram of seawater) rather than the volume units (per cubic metre of seawater) used in the GLODAPv1.1 panels.
See also
Biogeochemical cycle
Biological pump
Continental shelf pump
Geochemical Ocean Sections Study
Joint Global Ocean Flux Study
Ocean acidification
Solubility pump
World Ocean Atlas
World Ocean Circulation Experiment
References
External links
GLODAP website, Bjerknes Climate Data Centre
GLODAP v1.1 website, National Oceanic and Atmospheric Administration
GLODAP v2 website, National Oceanic and Atmospheric Administration
Biological oceanography
Carbon
Chemical oceanography
Chlorofluorocarbons
Oceanography
Physical oceanography | Global Ocean Data Analysis Project | Physics,Chemistry,Environmental_science | 1,063 |
3,960,285 | https://en.wikipedia.org/wiki/Nebulin | Nebulin is an actin-binding protein which is localized to the thin filament of the sarcomeres in skeletal muscle. Nebulin in humans is coded for by the gene NEB. It is a very large protein (600–900 kDa) and binds as many as 200 actin monomers. Because its length is proportional to thin filament length, it is believed that nebulin acts as a thin filament "ruler" and regulates thin filament length during sarcomere assembly. Other functions of nebulin, such as a role in cell signaling, remain uncertain.
Nebulin has also been shown to regulate actin-myosin interactions by inhibiting ATPase activity in a calcium-calmodulin sensitive manner.
Mutations in nebulin cause some cases of the autosomal recessive disorder nemaline myopathy.
A smaller member of the nebulin protein family, termed nebulette, is expressed in cardiac muscle.
Structure
The structure of the SH3 domain of nebulin was determined by protein nuclear magnetic resonance spectroscopy. The SH3 domain from nebulin is composed of 60 amino acid residues, of which 30 percent is in the beta sheet secondary structure (7 strands; 18 residues).
Knockout phenotype
As of 2007, two knockout mouse models for nebulin have been developed to better understand its in vivo function. Bang and colleagues demonstrated that nebulin-knockout mice die postnatally, have reduced thin filament lengths, and have impaired contractile function. Postnatal sarcomere disorganization and degeneration occurred rapidly in these mice, indicating that nebulin is essential for maintaining the structural integrity of myofibrils. Witt and colleagues had similar results in their mice, which also died postnatally with reduced thin filament length and contractile function. These nebulin-knockout mice are being investigated as animal models of nemaline myopathy.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Nemaline Myopathy
PDBe-KB provides an overview of all the structure information available in the PDB for Human Nebulin
Proteins | Nebulin | Chemistry | 456 |
1,608,987 | https://en.wikipedia.org/wiki/Thoughtworks | Thoughtworks Holding, Inc. is a privately-held, global technology company with 49 offices in 18 countries. It provides software design and delivery, and tools and consulting services. The company is closely associated with the movement for agile software development, and has contributed to open source products. Thoughtworks' business includes Digital Product Development Services, Digital Experience and Distributed Agile software development.
History
1980s–1990s
In the late 1980s, Roy Singham founded Singham Business Services as a management consulting company servicing the equipment leasing industry in a Chicago basement. According to Singham, after two-to-three years, Singham started recruiting additional staff and came up with the name Thoughtworks in 1990. The company was incorporated under the new name in 1993 and focused on building software applications. Over time, Thoughtworks' technology shifted from C++ and Forte 4GL in the mid-1990s to include Java in the late 1990s.
1990s–2010s
Martin Fowler joined the company in 1999 and became its chief scientist in 2000.
In 2001, Thoughtworks agreed to settle a lawsuit by Microsoft for $480,000 for deploying unlicensed copies of office productivity software to employees.
Also in 2001, Fowler, Jim Highsmith, and other key software figures authored the Agile Manifesto. The company began using agile techniques while working on a leasing project. Thoughtworks' technical expertise expanded with the .NET Framework in 2002, C# in 2004, Ruby and the Rails platform in 2006. In 2002, Thoughtworks chief scientist Martin Fowler wrote "Patterns of Enterprise Application Architecture" with contributions by ThoughtWorkers David Rice and Matthew Foemmel, as well as outside contributors Edward Hieatt, Robert Mee, and Randy Stafford.
Thoughtworks Studios was launched as its product division in 2006 and shut down in 2020. The division created, supported and sold agile project management and software development and deployment tools including Mingle, Gauge (formerly Twist), Snap CI and GoCD.
On 2 March 2007, Thoughtworks announced Trevor Mather as the new CEO. Singham became Executive chairman. Also in March 2007, Rebecca Parsons assumed the role of Chief Technical Officer, having been with the company since 1999.
By 2008, Thoughtworks employed 1,000 people and was growing at the rate of 20–30% p.a., with bases around the world. Its clients included Microsoft, Oracle, major banks, and The Guardian newspaper. Singham owned 97% of the common stock of the company. By 2010, its clients included Daimler AG, Siemens and Barclays, and had opened a second headquarters in Bangalore.
In 2010, Singham opened Thoughtworks’ Fifth Agile Software Development Conference in Beijing.
2010s–2020s
In 2010, Jim Highsmith joined Thoughtworks.
In April 2013, Thoughtworks announced a collective leadership structure and appointed four co-Presidents of the global organization. The appointments followed the announcement that the then current CEO, Trevor Mather, was leaving Thoughtworks to take up the role of CEO for the used car sales business Trader Media Group.
In May 2013, Dr. David Walton was hired as Director of Global Health. Walton has done work in Haiti since 1999, including helping establish a 300-room, solar-powered hospital and the establishment of a noncommunicable disease clinic.
In 2015, Guo Xiao, who started as a developer in Thoughtworks China in 1999, became the chief executive officer and President. Also in 2015, Chinese marketing data company AdMaster acquired Chinese online form automation platform JinShuJu from Thoughtworks.
In early 2016, Thoughtworks closed their Toronto offices, the last remaining Canadian office after the closure of their Calgary offices in 2013. They have since reopened the Toronto office.
Singham sold the company to British private equity firm Apax Partners in 2017 for $785 million, by which time it had 4,500 employees across 15 countries, including South Africa. Singham left the company.
After 2017, several members of Thoughtworks senior staff began to work for the People's Support Foundation, founded by Singham's partner Jodie Evans with the support of Chad Wathington, Thoughtworks' chief strategy officer, and Jason Pfetcher, Thoughtworks' former general counsel.
2020s–Present
Thoughtworks announced that it acquired Gemini Solutions Inc. in January 2021. Gemini is a privately held software development consulting services firm, and it is based in Romania. At the end of January 2021, Thoughtworks raised $720 million in funding according to data compiled by Chicago Inno. The following month, Thoughtworks acquired Fourkind, a machine learning and data science consulting company based in Finland. In March 2021, Thoughtworks worked with the Veterans Affairs Department to deploy a centralized mechanism for delivering updates via 'VANotify'.
On September 15, 2021, Thoughtworks IPO'd on the NASDAQ and is listed as $TWKS.
In April 2022, Thoughtworks acquired Connected, a product development company based in Canada.
In May 2024, Guo Xiao stepped down as CEO of Thoughtworks, with the transition becoming official in June 2024. He is succeeded by Mike Sutcliff.
In November 2024, Thoughtworks was taken private by Apax Partners for $4.40 per share.
Corporate philosophy
Thoughtworks launched its Social Impact Program in 2009. This program provided pro-bono or other developmental help for non-profits and organizations with socially-driven missions. Clients included Democracy Now! (mobile content delivery site), Human Network International (mobile data collection), and the Institute for Reproductive Health (SMS-based fertility planner). In 2010, Thoughtworks provided software engineering services for Grameen Foundation's Mifos platform.
Translation Cards is an open source Android app that helps field workers and refugees communicate more effectively and confidently. With the help of Google volunteers, Mercy Corps partnered with Thoughtworks and UNHCR to create the app.
Notable employees
Ola Bini
Zack Exley
Martin Fowler
Jim Highsmith
Aaron Swartz
See also
Software industry in Telangana
References
External links
Indian companies established in 1993
Software companies established in 1993
Companies listed on the Nasdaq
Enterprise architecture
Enterprise application integration
Information technology consulting firms of the United States
Linux companies
Software companies based in Illinois
Software companies of India
Software design
Software development process
Agile software development
Software companies of the United States
2021 initial public offerings
Apax Partners companies | Thoughtworks | Engineering | 1,277 |
45,199,482 | https://en.wikipedia.org/wiki/Erysodienone | Erysodienone is a key precursor in the biosynthesis of many Erythrina-produced alkaloids. Early work was done by Derek Barton and co-workers to illustrate the biosynthetic pathways towards erythrina alkaloids. It was demonstrated that erysodienone could be synthesized from simple starting materials by a similar approach as its biosynthetic pathway, which led to the development of the biomimetic synthesis of erysodienone.
Synthesis
The biosynthesis of erysodienone involves a key step of oxidative phenol coupling. Starting with S-norprotosinomenine precursor A, cyclization via oxidative phenol coupling forms intermediate B, which in turn can be rearranged to form intermediate C. Hydrogenation of C forms the diphenoquinone intermediate E. An intramolecular Michael addition reaction converts E to the final product, erysodienone.
A biomimetic synthesis route for erysodienone was developed based on a similar oxidative phenol coupling mechanism. Barton and co-workers found that treating bisphenolethylamine precursor F with oxidants such as K3Fe(CN)6 initiated oxidative phenol coupling to form the 9-membered ring structure in intermediate D, which itself undergoes a Michael addition to give erysodienone.
References
Further reading
Indolizidines
Isoquinoline alkaloids | Erysodienone | Chemistry | 311 |
68,482,163 | https://en.wikipedia.org/wiki/Sidney%20Hemming | Sidney Hemming is an analytical geochemist known for her work documenting Earth's history through analysis of sediments and sedimentary rocks. She is a professor of earth and environmental sciences at Columbia University.
Education and career
Hemming earned a BS from Midwestern University in 1983 and an MS from Tulane University in 1986. In 1994 she earned her PhD from Stony Brook where she studied lead isotopes in sedimentary rocks. In 1994, Hemming started a postdoc with Wally Broecker at Lamont–Doherty Earth Observatory. As of 2021, Hemming is a professor of earth and environmental sciences at Lamont–Doherty Earth Observatory.
In 2018, Hemming was named a fellow of the American Geophysical Union who cited her "for the development of geochemical and isotopic tracers for sediments to reveal geological processes and events through Earth's history". In 2021, Hemming received a Guggenheim Fellowship which she plans to use to study the time period between the Pliocene and the Pleistocene.
Research
Hemming's research documents Earth's historical changes through the analysis of chemical signals preserved in sedimentary rocks and sediments. She uses geochronology to obtain age estimates of events occurring in the ocean, thereby tracking changes in water circulation, winds, and glaciers. She has used neodymium isotopes to track rapid changes in Antarctic Intermediate Water and changes in North Atlantic Deep Water. In the Southern Ocean, Hemming has used strontium isotopes in sediments to track changes in the strength of the Agulhas current during the Last Glacial Maximum and constrained the location of the Antarctic Circumpolar Current. In California, her research on past climate conditions at Mono Lake revealed that chemical signatures in the sediments recorded the Laschamp event, a global geomagnetic shift. In the North Atlantic Ocean, Hemming's research on Heinrich events has constrained the amount of ice-rafted debris moved by icebergs into the North Atlantic Ocean.
Selected publications
Awards and honors
Fellow, American Geophysical Union (2018)
Fellow, Geological Society of America (2018)
Fellow, Geochemical Society (2020)
Guggenheim Foundation Fellow (2021)
References
External links
Fellows of the American Geophysical Union
Midwestern State University alumni
Tulane University alumni
Stony Brook University alumni
Lamont–Doherty Earth Observatory people
Women geochemists
Paleoclimatologists
Women geologists
Columbia University faculty | Sidney Hemming | Chemistry | 472 |
12,103,432 | https://en.wikipedia.org/wiki/Regression%20%28medicine%29 | Regression in medicine is the partial or complete reversal of a disease's signs and symptoms.
Clinically, regression generally refers to a decrease in severity of symptoms without completely disappearing. At a later point, symptoms may return. These symptoms are then called recidive.
In cancer, regression refers to a specific decrease in the size or extent of a tumour. In histopathology, histological regression is one or more areas within a tumor in which neoplastic cells have disappeared or decreased in number. In melanomas, this means complete or partial disappearance from areas of the dermis (and occasionally from the epidermis), which have been replaced by fibrosis, accompanied by melanophages, new blood vessels, and a variable degree of inflammation.
References
Epidemiology
Medical terminology | Regression (medicine) | Environmental_science | 166 |
2,008,706 | https://en.wikipedia.org/wiki/Union%20Chain%20Bridge | The Union Chain Bridge or Union Bridge is a suspension bridge that spans the River Tweed between Horncliffe, Northumberland, England and Fishwick, Berwickshire, Scotland. It is upstream of Berwick-upon-Tweed. When it opened in 1820 it was the longest wrought iron suspension bridge in the world with a span of , and the first vehicular bridge of its type in the United Kingdom. Although work started on the Menai Suspension Bridge earlier, the Union Bridge was completed first. The suspension bridge, which is a Category A listed building in Scotland, is now the oldest to be still carrying road traffic.
The bridge is also a Grade I listed building in England and an International Historic Civil Engineering Landmark. It lies on Sustrans Route 1 and the Pennine Cycleway. Its chains are represented on the Flag of Berwickshire.
History
Before the opening of the Union Bridge, crossing the river at this point involved an round trip via Berwick downstream or a trip via Coldstream upstream. (Ladykirk and Norham Bridge did not open until 1888.) The Tweed was forded in the vicinity of the bridge site, but the route was impassable during periods of high water. The Berwick and North Durham Turnpike Trust took on responsibility for improving matters by issuing a specification for a bridge.
Construction
The bridge was designed by an Royal Navy officer, Captain Samuel Brown. Brown joined the Navy in 1795, and seeing the need for an improvement on the hemp ropes used, which frequently failed with resulting loss to shipping, he employed blacksmiths to create experimental wrought iron chains. was fitted with iron rigging in 1806, and in a test voyage proved successful enough that in 1808, with his cousin Samuel Lenox, he set up a company that would become Brown Lenox & Co. Brown left the Navy in 1812, and in 1813 he built a prototype suspension bridge of span, using of iron. It was sufficiently strong to support a carriage, and John Rennie and Thomas Telford reported favourably upon it.
Brown took out a patent in 1816 for a method of manufacturing chains, followed by a patent titled Construction of a Bridge by the Formation and Uniting of its Component Parts in July 1817. In around 1817, Brown proposed a span bridge over the River Mersey at Runcorn, but this bridge was not built. It is not known why Brown became involved with the Union Bridge project, but agreed to take on the work based on a specification dated September 1818.
Brown knew little of masonry, and Rennie did this aspect of the work.
The bridge proposal received consent in July 1819, with the authority of an Act of Parliament that had been passed in 1802, and construction began on 2 August 1819.
The bridge, which has its western end in Scotland and its eastern end in England, was built with a single span of . There is a sandstone pier on the Scottish side, while the English side has a sandstone tower built into the side of the river bluffs that support the bridge's chains. The Scottish side has a straight approach road, but across the river, the road turns sharply south due to the river bank's steep sides.
It opened on 26 July 1820, with an opening ceremony attended by the celebrated Scottish civil engineer Robert Stevenson among others. Captain Brown tested the bridge in a curricle towing twelve carts before a crowd of about 700 spectators crossed. Until 1885, tolls were charged for crossing the bridge; the toll cottage on the English side was demolished in 1955.
Usage
With the abolition of turnpike tolls in 1883, maintenance of the bridge passed to the Tweed Bridges Trust. When the Trust was wound up the bridge became the responsibility of Scottish Borders Council and Northumberland County Council and is now maintained by the latter.
In addition to the 1902 addition of cables, the bridge has been strengthened and refurbished on many occasions. The bridge deck was substantially renewed in 1871, and again in 1974, with the chains reinforced at intervals throughout its life.
Maintenance and restoration
The bridge was closed to motor vehicles for several months during 2007 due to one of the bridge hangers breaking. In December 2008 the bridge was closed to traffic as a result of a landslide.
In March 2013 there was a proposal to close the bridge because there was a lack of funds to maintain it. In 2013, the bridge was placed on Historic England's Heritage at Risk register. In October 2014, local enthusiasts and activists started a campaign to have the bridge fully restored in time for its bicentenary in 2020.
In March 2017 Scottish Borders Council and Northumberland County Council agreed to contribute £550,000 each towards a restoration project that was then expected to cost £5 million. Between then and August 2020 further pledges were made by both councils, the National Lottery Heritage Fund and Historic England. The work started in October 2020 and was expected to cost £10.5 million and take around 18 months. The chains were cut in March 2021 and the restored bridge was due to re-open in early 2022. The bridge reopened on 17 April 2023. In July 2023, the bridge was designated as a Historic Civil Engineering Landmark by the American Society of Civil Engineers.
References
External links
The website of the Union Chain Bridge: Crossing Borders, Inspiring Communities project.
The website of the Friends of the Union Chain Bridge.
Film coverage and bridge history
Suspension bridges in England
Bridges in the Scottish Borders
Bridges across the River Tweed
Chain bridges
Grade I listed bridges
Bridges completed in 1820
History of Northumberland
Berwickshire
Scheduled monuments in Northumberland
Berwick Upon Tweed, Union
Category A listed buildings in the Scottish Borders
Listed bridges in Scotland
Historic Civil Engineering Landmarks
Anglo-Scottish border
Former toll bridges in England
Former toll bridges in Scotland
1820 establishments in England
1820 establishments in Scotland | Union Chain Bridge | Engineering | 1,142 |
3,031,477 | https://en.wikipedia.org/wiki/Kalb%E2%80%93Ramond%20field | In theoretical physics in general and string theory in particular, the Kalb–Ramond field (named after Michael Kalb and Pierre Ramond), also known as the Kalb–Ramond B-field or Kalb–Ramond NS–NS B-field, is a quantum field that transforms as a two-form, i.e., an antisymmetric tensor field with two indices.
The adjective "NS" reflects the fact that in the RNS formalism, these fields appear in the NS–NS sector in which all vector fermions are anti-periodic. Both uses of the word "NS" refer to André Neveu and John Henry Schwarz, who studied such boundary conditions (the so-called Neveu–Schwarz boundary conditions) and the fields that satisfy them in 1971.
Details
The Kalb–Ramond field generalizes the electromagnetic potential but it has two indices instead of one. This difference is related to the fact that the electromagnetic potential is integrated over one-dimensional worldlines of particles to obtain one of its contributions to the action, while the Kalb–Ramond field must be integrated over the two-dimensional worldsheet of the string. In particular, while the action for a charged particle moving in an electromagnetic potential is given by q ∫ A_μ dx^μ (the integral taken along the particle's worldline), that for a string coupled to the Kalb–Ramond field has the form (1/2) ∫ B_{μν} dx^μ ∧ dx^ν (the integral taken over the string's worldsheet).
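In a worldsheet parametrization σ^a = (τ, σ), the string coupling just written can be expressed in components as follows (a standard form, stated here up to the sign and normalization conventions, which vary between references):

```latex
\[
S_B \;=\; \frac{1}{2}\int d^2\sigma \,\epsilon^{ab}\,
\partial_a X^{\mu}\,\partial_b X^{\nu}\,
B_{\mu\nu}\bigl(X(\sigma)\bigr),
\]
% X^\mu(\tau,\sigma): embedding coordinates of the worldsheet in spacetime
% \epsilon^{ab}: two-dimensional Levi-Civita symbol on the worldsheet
```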
This term in the action implies that the fundamental string of string theory is a source of the NS–NS B-field, much like charged particles are sources of the electromagnetic field.
The Kalb–Ramond field appears, together with the metric tensor and dilaton, as a set of massless excitations of a closed string.
See also
Curtright field
p-form electrodynamics
Ramond–Ramond field
References
String theory
Gauge bosons | Kalb–Ramond field | Astronomy | 369 |
1,160,971 | https://en.wikipedia.org/wiki/Global%20Information%20Assurance%20Certification | Global Information Assurance Certification (GIAC) is an information security certification entity that specializes in technical and practical certification as well as new research in the form of its GIAC Gold program. SANS Institute founded the certification entity in 1999 and the term GIAC is trademarked by The Escal Institute of Advanced Technologies.
GIAC provides a set of vendor-neutral computer security certifications linked to the training courses provided by SANS. GIAC focuses on the leading edge of IT security technology in order to keep ahead of "black hat" techniques. Papers written by individuals pursuing GIAC certifications are presented in the SANS Reading Room on GIAC's website.
Initially all SANS GIAC certifications required a written paper or "practical" on a specific area of the certification in order to achieve the certification. In April 2005, the SANS organization changed the format of the certification by breaking it into two separate levels. The "silver" level certification is achieved upon completion of a multiple choice exam. The "gold" level certification can be obtained by completing a research paper and has the silver level as a prerequisite.
As of August 27, 2022, GIAC has granted 173,822 certifications worldwide.
SANS GIAC Certifications
Certifications listed as 'unavailable' are not listed in official SANS or GIAC sources, and are found elsewhere. They are not the same as retired courses.
Cyber Defense
Penetration Testing
Management, Audit, Legal
Operations
Developer
Incident Response and Forensics
Industrial Control Systems
GSE
Unobtainable Certifications
The following certifications are no longer issued.
External links
Notes
Computer security qualifications
Digital forensics certification | Global Information Assurance Certification | Technology | 328 |
24,075,605 | https://en.wikipedia.org/wiki/Dictionary%20of%20natural%20phenols%20and%20polyphenols%20molecular%20formulas | Natural polyphenols molecular formulas represent a class of natural aromatic organic compounds in which one or more hydroxy groups are attached directly to the benzene ring, generally formed from C, H and O.
The entries are sorted by mass.
References
Polyphenols
Dictionary | Dictionary of natural phenols and polyphenols molecular formulas | Chemistry | 57 |
46,875,090 | https://en.wikipedia.org/wiki/Phlegmacium%20cremeiamarescens | Phlegmacium cremeiamarescens is a species of fungus in the family Cortinariaceae. It was originally described in 2014 by the mycologists Ilkka Kytövuori, Kare Liimatainen and Tuula Niskanen who classified it as Cortinarius cremeiamarescens. It was placed in the (subgenus Phlegmacium) of the large mushroom genus Cortinarius. The specific epithet cremeiamarescens refers to the fruitbody colour and the bitter-tasting cap cuticle. Phlegmacium gentianeus is a sister species with which it has been previously confused. It is found in southern Europe and western North America, where it grows in coniferous forests.
In 2022 the species was transferred from Cortinarius and reclassified as Phlegmacium cremeiamarescens based on genomic data.
See also
List of Cortinarius species
References
External links
cremeiamarescens
Fungi described in 2014
Fungi of Europe
Fungus species | Phlegmacium cremeiamarescens | Biology | 209 |
33,611,958 | https://en.wikipedia.org/wiki/Dashpot%20timer | The first automatic timer, the dashpot timer has been used in many different machines and has many variations. Pneumatic, hydraulic-action, and mercury displacement timers. Being used in a variety of things such as printing presses, motors, and even irrigation systems, the dashpot timer has seen many applications. Even in modern times with electrical and digital timers, these old mechanical timers are still in use due to their simplicity and ability to function in tough environments.
Types
The dashpot timer is a fluid-operated on-delay ("time-on") timer that can be used in definite-time motor acceleration starters and controllers. A dashpot timer consists of a container, a piston, and a shaft. The dashpot timer functions by having a magnetic field force the piston to move within a cylinder when the coil is energized. The movement of the piston is limited by fluid passing through an orifice on the piston. The amount of fluid passing through the orifice is controlled by a throttle valve, which determines the delay. If the fluid used to move the piston is air, it is known as a pneumatic dashpot. If the fluid is oil, it is known as a hydraulic dashpot. Another kind of dashpot timer is the mercury-displacement timer, which uses mercury to make contact with electrodes.
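As a rough illustration of how the throttle valve setting determines the delay just described, the following sketch treats the delay as the displaced fluid volume divided by the valve-limited flow rate (an idealized model; the function, its parameters and the numbers are hypothetical and not taken from any manufacturer's data):

```python
def dashpot_delay_seconds(displaced_volume_cm3, orifice_flow_cm3_per_s, valve_opening=1.0):
    """Estimate the delay of an idealized dashpot timer: the time needed for
    the trapped fluid to pass through the orifice, with the throttle valve
    scaling the effective flow rate (valve_opening in (0, 1])."""
    if not 0.0 < valve_opening <= 1.0:
        raise ValueError("valve_opening must be in (0, 1]")
    return displaced_volume_cm3 / (orifice_flow_cm3_per_s * valve_opening)

# Closing the throttle valve halfway doubles the delay in this simple model.
print(dashpot_delay_seconds(10.0, 2.0, valve_opening=1.0))  # 5.0 seconds
print(dashpot_delay_seconds(10.0, 2.0, valve_opening=0.5))  # 10.0 seconds
```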
Pneumatic timer
The pneumatic timer consists of a timing disk, filter, diaphragm, solenoid coil, operating spring and a solenoid core. When the pneumatic timer is energized, the solenoid core moves up into the coil. When this occurs, the core applies pressure on the diaphragm. This moves the diaphragm into the top chamber, and air trapped in the chamber is expelled through the needle-valve timing disk. In pneumatic timers, the amount of delay can be altered by adjusting the needle valve. Pneumatic timers are very reliable and have a very long operational life expectancy.
Hydraulic-action timers
Hydraulic dashpots or hydraulic-action timers are similar in appearance and operation to pneumatic timers. Hydraulic-action timers work by energizing the solenoid coil, which pulls the hollow core into the center of the coil. Fluid in the hollow core is then forced through an orifice at the top; a one-way check valve at the bottom of the hollow core prevents the fluid from escaping through the bottom. After the fluid is expelled, the core completes its upward movement and closes an air gap in the core, which in turn increases its electromagnetic field strength. When the coil becomes de-energized, it releases the core, and fluid is forced back into the hollow area of the core through the check valve, so the fluid is used again the next time the coil is energized. Hydraulic-action timers are usually designed for a specific time, which is set in the factory during their manufacture. These timers are also very reliable.
Mercury-displacement timers
Another important classification of dashpot timers is the mercury-displacement timer. These depend on the displacement of a pool of mercury that makes contact with two electrodes. There are two kinds of mercury-displacement timers: delayed-make displacement timers and slow-break displacement timers. Delayed-make displacement timers work by having a plunger floating in a container of mercury; when energy is applied to the coil, it pulls the plunger into its center. The mercury that is displaced by this enters the thimble through an orifice. Inert gas trapped at the top of the thimble prevents the mercury from rising. Eventually the gas escapes through a ceramic plug, and this permits mercury to fill the thimble. When the mercury rises to a certain level it makes contact between the electrodes. The amount of delay that this produces is determined when it is manufactured. Slow-break displacement timers work in the same way as delayed-make displacement timers, except that when the coil is de-energized, the plunger rises to its original position, and mercury flows through the orifice to reach the outside level. When it falls below the lip of the ceramic cup, the electrical contacts open. The timer's physical size is used to regulate the delay time of the connection break. These timers are designed with a fixed delay period, usually to a maximum of 20 minutes.
Application
Most dashpot timers are used in sequential, automatic control applications where the completion of one operation causes the start of another process. Common applications include automatic milling machines, periodic lubrication, animated shop-window displays, staged start-up of pumps, automatic presses, and industrial washing machines. Dashpot timers are also used in motors, blowers, lighting, public restroom faucets, and control valves as well as in banking, retail, irrigation, and general industrial applications. Common problems with dashpot timers were variations in temperature, the entrance of dirt and other matter into the dashpot system, and general wear and tear of the system.
Every kind of dashpot timer has seen use in different technologies, from toasters to automated factory operations. With its many industrial, commercial, household and gardening applications, the dashpot timer is an important invention that changed how things were done during the 20th century and how they are done today. In modern times, even though electrical and digital timers are available, mechanical timers are still used, especially in cases where the environment is not friendly to electronics. Another advantage of mechanical timers is that they are easy to repair. The number of precise automated systems used in modern factories shows the usefulness and precision of these timers, while their presence in many households shows their availability and inexpensiveness. Even with the minor problems present in these systems, they have proven themselves reliable enough to be used in many fields for nearly a century.
References
Herman, Stephen. Industrial Motor Control. 6th ed. N.p.: Cengage Learning, 2009. 44-46. Print.
Institution of Automobile Engineers. The Automobile engineer. Vol. 35. N.p.: IPC Transport Press Ltd, 1945. 191. Print.
Laplante, Phillp A. Comprehensive dictionary of electrical engineering. N.p.: Springer, 1999. 160. Print.
Patrick, Dale R., and Stephen W. Fardo. Industrial electronics: devices and systems. 2nd ed. N.p.: The Fairmont Press, 2000. 474-79. Print.
Sardeson, Robert. "Mechanical Timer." Google Patents. N.p., 16 July 1940. Web. 19 Oct. 2011.
Liptak, Bela G. Process control and optimization. 4th ed. N.p.: CRC Press, 2006. 1036-42. Print.
Control devices | Dashpot timer | Engineering | 1,405 |
10,661,427 | https://en.wikipedia.org/wiki/Blogger%27s%20Code%20of%20Conduct | The Blogger's Code of Conduct was a proposal by Tim O'Reilly for bloggers to adopt a uniform policy for moderation of comments. It was proposed in 2007, in response to controversy involving threats made to blogger Kathy Sierra. The idea of the code was first reported by BBC News, who quoted O'Reilly saying, "I do think we need some code of conduct around what is acceptable behaviour, I would hope that it doesn't come through any kind of regulation it would come through self-regulation."
In Ireland, a proposal for a code was raised in a 2009 Sunday Business Post article, "Time To Raise Above Blog Standard", by Simon Palmer, a radio presenter and PR consultant in Dublin, after false details in relation to a client had appeared on Irish blogs. After his comments he was subjected to sustained online abuse from Irish bloggers and anonymous trolls, and even received death threats.
In Nepal, 10 prominent bloggers signed a Code of Ethics for Bloggers, first proposed by Ujjwal Acharya and finalized after discussion among bloggers, on July 27, 2011.
According to The New York Times, O'Reilly and others based their preliminary list on one developed by the BlogHer women's blogging support network and, working with others, came up with a list of seven proposed ideas:
Take responsibility not just for your own words, but for the comments you allow on your blog.
Label your tolerance level for abusive comments.
Consider eliminating anonymous comments.
Don't feed the trolls.
Take the conversation offline, and talk directly, or find an intermediary who can do so.
If you know someone who is behaving badly, tell them so.
Don't say anything online that you wouldn't say in person.
Reception
Reaction to the proposal was internationally widespread among bloggers and media writers. According to the San Francisco Chronicle, the blogosphere described it as "excessive, unworkable and an open door to censorship." Author Bruce Brown approved of the code, reproducing it in his book on blogging. TechCrunch founder Michael Arrington and entrepreneur and blogger Dave Winer were two notable Americans who wrote against the plan. Technology blogger Robert Scoble stated that the proposed rules "make me feel uncomfortable" and "As a writer, it makes me feel like I live in Iran."
References
External links
"Draft Blogger's Code of Conduct" by Tim O'Reilly
"Code of Conduct: Lessons Learned So Far" by Tim O'Reilly
Blogging
Internet culture
Texts about the Internet
Etiquette
Internet ethics
2007 documents | Blogger's Code of Conduct | Technology,Biology | 526 |
57,099,066 | https://en.wikipedia.org/wiki/Skeletocutis%20fimbriata | Skeletocutis fimbriata is a species of poroid fungus in the family Polyporaceae. Found in China, it was described as new to science in 2008. The holotype collection was made in the Shennongjia nature reserve in northwestern Hubei province, where it was found growing on rotting angiosperm wood. The fungus is distinguished from the other Skeletocutis species by its narrow spores, and its coarsely fimbriate margin on the fruit bodies. The specific epithet fimbriata refers to this latter characteristic.
References
Fungi described in 2008
Fungi of China
fimbriata
Taxa named by Yu-Cheng Dai
Fungus species | Skeletocutis fimbriata | Biology | 139 |
4,665,849 | https://en.wikipedia.org/wiki/Silver%20azide | Silver azide is the chemical compound with the formula . It is a silver(I) salt of hydrazoic acid. It forms a colorless crystals. Like most azides, it is a primary explosive.
Structure and chemistry
Silver azide can be prepared by treating an aqueous solution of silver nitrate with sodium azide. The silver azide precipitates as a white solid, leaving sodium nitrate in solution.
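In terms of a balanced equation, the precipitation is the metathesis reaction:

```latex
\[
\mathrm{AgNO_3(aq)} + \mathrm{NaN_3(aq)} \longrightarrow \mathrm{AgN_3(s)}\downarrow + \mathrm{NaNO_3(aq)}
\]
```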
X-ray crystallography shows that AgN₃ is a coordination polymer with square planar Ag⁺ coordinated by four azide ligands. Correspondingly, each end of each azide ligand is connected to a pair of Ag⁺ centers. The structure consists of two-dimensional layers stacked one on top of the other, with weaker Ag–N bonds between layers. The coordination of Ag⁺ can alternatively be described as highly distorted 4 + 2 octahedral, the two more distant nitrogen atoms being part of the layers above and below.
In its most characteristic reaction, the solid decomposes explosively, releasing nitrogen gas: 2 AgN₃(s) → 2 Ag(s) + 3 N₂(g)
The first step in this decomposition is the production of free electrons and azide radicals; thus the reaction rate is increased by the addition of semiconducting oxides. Pure silver azide explodes at 340 °C, but the presence of impurities lowers this down to 270 °C. This reaction has a lower activation energy and initial delay than the corresponding decomposition of lead azide.
Safety
AgN₃, like most heavy metal azides, is a dangerous primary explosive. Decomposition can be triggered by exposure to ultraviolet light or by impact. Ceric ammonium nitrate is used as an oxidising agent to destroy AgN₃ in spills.
See also
Silver nitride
References
Silver compounds
Azides
Explosive chemicals | Silver azide | Chemistry | 341 |
45,359,822 | https://en.wikipedia.org/wiki/Expedition%2049 | Expedition 49 was the 49th expedition to the International Space Station.
Anatoli Ivanishin, Kathleen Rubins and Takuya Onishi transferred from Expedition 48. Expedition 49 began upon the departure of Soyuz TMA-20M on September 6, 2016 and concluded upon the departure of Soyuz MS-01 in October 2016. The crew of Soyuz MS-02 then transferred to Expedition 50.
Crew
Notes
One US Segment based EVA was planned for Expedition 49, this was later postponed.
A soccer ball belonging to Ellison Onizuka, who was killed in the Space Shuttle Challenger disaster, was brought to the ISS by Shane Kimbrough.
References
External links
NASA's Space Station Expeditions page
Expeditions to the International Space Station
2016 in spaceflight | Expedition 49 | Astronomy | 148 |
60,792,670 | https://en.wikipedia.org/wiki/Lisa%20Alvarez-Cohen | Lisa Alvarez-Cohen is the vice provost for academic planning, Fred and Claire Sauer Professor at the University of California, Berkeley. She was elected a member of the National Academy of Engineering in 2010 for the discovery and application of novel microorganisms and biochemical pathways for microbial degradation of environmental contaminants. She is also a Fellow of the American Society for Microbiology.
Early life and education
Alvarez-Cohen studied engineering and applied science at Harvard University and graduated in 1984. She was a postgraduate student at Stanford University, where she earned her master's degree in 1985 and a PhD in 1991.
Research and career
Alvarez-Cohen works in environmental microbiology and ecology. She joined the faculty at University of California, Berkeley in 1991, and was the first woman to achieve tenure in Berkeley's Civil & Environmental Engineering Department. She is interested in species that can perform environmentally relevant functions, including studies of biotransformation and the fate of environmental water contaminants. Alvarez-Cohen uses omics based molecular tools to optimise bioremediation. Amongst other contaminants, Alvarez-Cohen's lab have studied the remediation of trichloroethene, aqueous film forming foams and arsenic:
Trichloroethene is a contaminant that is routinely found at Superfund sites. Trichloroethene is often included on the United States Environmental Protection Agency priorities list, and is typically dechlorinated using dehalococcoides.
Aqueous film forming foams have been used since the 1960s to extinguish hydrocarbon fuel fires. They contain perfluoroalkoxy alkanes, which can be contaminants due to their impact on human health. Perfluoroalkoxy alkanes are known to bioaccumulate and exhibit toxicity in animals.
Arsenic occurs regularly on the National Priorities List in various chlorinated solvents.
Alvarez-Cohen uses anammox to remove nitrogen from wastewater. Anammox is cheaper and more efficient than conventional nitrogen sequestration. She uses stable isotope traces to study the fundamental mechanisms of anammox.
Academic service
In 2007 Alvarez-Cohen became chair for the Department of Civil and Environmental Engineering, a position she held until 2012. She has served as the diversity director of the Stanford University Engineering Research Center and elected chair of the Berkeley Division of the Academic Senate. She was appointed the vice provost for academic planning in July 2018.
Alvarez-Cohen has appeared on NPR and served on the editorial advisory board of Environmental Science & Technology. She has represented the United States at the National Academy of Engineering Frontiers of Engineering in India, Arlington County, Virginia, and Irvine, California.
Selected publications
Awards and honours
1994 W. M. Keck Foundation Award for Engineering Teaching Excellence
2002 Elected a fellow of the American Society for Microbiology
2003 Association of Environmental Engineering and Science Professors Distinguished Service Award
2010 Elected to the National Academy of Engineering
2014 American Society of Civil Engineers Simon W. Freese Environmental Engineering Award
2018 Association of Environmental Engineering and Science Professors Fellow
Personal life
Alvarez-Cohen is married to Mike Dean Alvarez Cohen, with whom she has two children, Jason and Ryan.
References
Living people
American women environmentalists
American environmentalists
American environmental scientists
American women chemists
Harvard John A. Paulson School of Engineering and Applied Sciences alumni
Stanford University School of Engineering alumni
Members of the United States National Academy of Engineering
Year of birth missing (living people)
Fellows of the Association of Environmental Engineering and Science Professors
Fellows of the American Academy of Microbiology
21st-century American women | Lisa Alvarez-Cohen | Environmental_science | 727 |
4,827,691 | https://en.wikipedia.org/wiki/Leibniz%20operator | In abstract algebraic logic, a branch of mathematical logic, the Leibniz operator is a tool used to classify deductive systems, which have a precise technical definition and capture a large number of logics. The Leibniz operator was introduced by Wim Blok and Don Pigozzi, two of the founders of the field, as a means to abstract the well-known Lindenbaum–Tarski process, that leads to the association of Boolean algebras to classical propositional calculus, and make it applicable to as wide a variety of sentential logics as possible. It is an operator that assigns to a given theory of a given sentential logic, perceived as a term algebra with a consequence operation on its universe, the largest congruence on the algebra that is compatible with the theory.
Formulation
In this article, we introduce the Leibniz operator in the special case of classical propositional calculus, then we abstract it to the general notion applied to an arbitrary sentential logic and, finally, we summarize some of the most important consequences of its use in the theory of abstract algebraic logic.
Let 𝒮 denote the classical propositional calculus. According to the classical Lindenbaum–Tarski process, given a theory T of 𝒮, if ≡_T denotes the binary relation on the set of formulas of 𝒮 defined by φ ≡_T ψ if and only if φ ↔ ψ ∈ T, where ↔ denotes the usual classical propositional equivalence connective, then ≡_T turns out to be a congruence on the formula algebra. Furthermore, the quotient of the formula algebra by ≡_T is a Boolean algebra and every Boolean algebra may be formed in this way.
Thus, the variety of Boolean algebras, which is, in algebraic logic terminology, the equivalent algebraic semantics (algebraic counterpart) of classical propositional calculus, is the class of all algebras formed by taking appropriate quotients of term algebras by those special kinds of congruences.
Notice that the condition φ ↔ ψ ∈ T that defines φ ≡_T ψ is equivalent to the condition: for every formula α, α(φ) ∈ T if and only if α(ψ) ∈ T.
Passing now to an arbitrary sentential logic 𝒮, given a theory T, the Leibniz congruence associated with T is denoted by Ω(T) and is defined, for all formulas φ, ψ, by φ Ω(T) ψ if and only if, for every formula α(x, y₀, …, yₙ₋₁) containing a variable x and possibly other variables in the list y₀, …, yₙ₋₁, and all formulas χ₀, …, χₙ₋₁ forming a list of the same length as that of y₀, …, yₙ₋₁, we have that α(φ, χ₀, …, χₙ₋₁) ∈ T if and only if α(ψ, χ₀, …, χₙ₋₁) ∈ T.
It turns out that this binary relation is a congruence relation on the formula algebra and, in fact, may alternatively be characterized as the largest congruence on the formula algebra that is compatible with the theory T, in the sense that if φ ∈ T and φ Ω(T) ψ, then we must also have ψ ∈ T. It is this congruence that plays the same role as the congruence used in the traditional Lindenbaum–Tarski process described above in the context of an arbitrary sentential logic.
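In symbols, the definition just given can be displayed as follows (Fm denotes the set of formulas, and the vector notation abbreviates the lists of extra variables and substituted formulas):

```latex
\[
\varphi \;\Omega(T)\; \psi
\quad\Longleftrightarrow\quad
\text{for every formula } \alpha(x,\vec{y}) \text{ and all } \vec{\chi} \text{ in } Fm :\;
\alpha(\varphi,\vec{\chi}) \in T \iff \alpha(\psi,\vec{\chi}) \in T .
\]
```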
It is not, however, the case that for arbitrary sentential logics the quotients of the term algebras by these Leibniz congruences over different theories yield all algebras in the class that forms the natural algebraic counterpart of the sentential logic. This phenomenon occurs only in the case of "nice" logics and one of the main goals of abstract algebraic logic is to make this vague notion of a logic being "nice", in this sense, mathematically precise.
The Leibniz operator Ω is the operator that maps a theory T of a given logic to the Leibniz congruence Ω(T) associated with the theory. Thus, formally, Ω : Th(𝒮) → Con(Fm) is a mapping from the collection Th(𝒮) of the theories of a sentential logic 𝒮 to the collection Con(Fm) of all congruences on the formula algebra Fm of the sentential logic.
Hierarchy
The Leibniz operator and the study of various of its properties that may or may not be satisfied for particular sentential logics have given rise to what is now known as the abstract algebraic hierarchy or Leibniz hierarchy of sentential logics. Logics are classified in various levels of this hierarchy depending on how strong a tie exists between the logic and its algebraic counterpart.
The properties of the Leibniz operator that help classify the logics are monotonicity, injectivity, continuity and commutativity with inverse substitutions. For instance, protoalgebraic logics, forming the widest class in the hierarchy – i.e., the one that lies in the bottom of the hierarchy and contains all other classes – are characterized by the monotonicity of the Leibniz operator on their theories.
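For example, the monotonicity condition that characterizes protoalgebraic logics can be displayed as (Th(𝒮) again denotes the set of theories of the logic 𝒮):

```latex
\[
\mathcal{S}\ \text{is protoalgebraic}
\quad\Longleftrightarrow\quad
\forall\, T, T' \in \mathrm{Th}(\mathcal{S}) :\;
T \subseteq T' \;\Rightarrow\; \Omega(T) \subseteq \Omega(T').
\]
```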
Other notable classes are formed by the equivalential logics, the weakly algebraizable logics and the algebraizable logics, among others.
There is a generalization of the Leibniz operator, in the context of categorical abstract algebraic logic, that makes it possible to apply a wide variety of techniques that were previously applicable only in the sentential logic framework to logics formalized as π-institutions.
The π-institution framework is significantly wider in scope than the framework of sentential logics because it allows incorporating multiple signatures and quantifiers in the language and it provides a mechanism for handling logics that are not syntactically based.
References
Font, J. M., Jansana, R., Pigozzi, D., (2003), A survey of abstract algebraic logic, Studia Logica 74: 13–79.
External links
Algebraic logic | Leibniz operator | Mathematics | 1,075 |
46,276,373 | https://en.wikipedia.org/wiki/Journal%20of%20Zhejiang%20University%20Science%20B | The Journal of Zhejiang University Science B: Biomedicine & Biotechnology is a monthly peer-reviewed scientific journal covering biomedicine, biochemistry, and biotechnology. It was established in 2000 and is published by Zhejiang University Press in collaboration with Springer Science+Business Media. The editors-in-chief are Shu-min Duan and De-nian Ba, both of Zhejiang University.
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded, The Zoological Record, BIOSIS Previews, Index Medicus/MEDLINE/PubMed, and Scopus. According to Journal Citation Reports, the journal has a 2016 impact factor of 1.676.
References
External links
Official website (Springer Nature)
Official website (Zhejiang University)
Monthly journals
Academic journals established in 2000
English-language journals
Zhejiang University Press academic journals
Springer Science+Business Media academic journals
Biotechnology journals
General medical journals | Journal of Zhejiang University Science B | Biology | 184 |
5,573,115 | https://en.wikipedia.org/wiki/Siltation | Siltation is water pollution caused by particulate terrestrial clastic material, with a particle size dominated by silt or clay. It refers both to the increased concentration of suspended sediments and to the increased accumulation (temporary or permanent) of fine sediments on bottoms where they are undesirable. Siltation is most often caused by soil erosion or sediment spill.
It is sometimes referred to by the ambiguous term "sediment pollution", which can also refer to a chemical contamination of sediments accumulated on the bottom, or to pollutants bound to sediment particles. Although "siltation" is not perfectly stringent, since it also includes particle sizes other than silt, it is preferred for its lack of ambiguity.
Causes
The origin of the increased sediment transport into an area may be erosion on land or activities in the water.
In rural areas, the erosion source is typically soil degradation by intensive or inadequate agricultural practices, leading to soil erosion, especially in fine-grained soils such as loess. The result will be an increased amount of silt and clay in the water bodies that drain the area. In urban areas, the erosion source is typically construction activities, which involve clearing the original land-covering vegetation and temporarily creating something akin to an urban desert from which fines are easily washed out during rainstorms.
In water, the main pollution source is sediment spill from dredging, the transportation of dredged material on barges, and the deposition of dredged material in or near water. Such deposition may be made to get rid of unwanted material, such as the offshore dumping of material dredged from harbours and navigation channels. The deposition may also be to build up the coastline, for artificial islands, or for beach replenishment.
Climate change also affects siltation rates.
Another important cause of siltation is the septage and other sewage sludges that are discharged from households or business establishments with no septic tanks or wastewater treatment facilities to bodies of water.
Vulnerabilities
While the sediment in transport is in suspension, it acts as a pollutant for those who require clean water, such as for cooling or in industrial processes, and it includes aquatic life that are sensitive to suspended material in the water. While nekton have been found to avoid spill plumes in the water (e.g. the environmental monitoring project during the building of the Øresund Bridge), filtering benthic organisms have no way of escape. Among the most sensitive organisms are coral polyps. Generally speaking, hard bottom communities and mussel banks (including oysters) are more sensitive to siltation than sand and mud bottoms. Unlike in the sea, in a stream, the plume will cover the entire channel, except possibly for backwaters, and so fish will also be directly affected in most cases.
Siltation can also affect navigation channels or irrigation channels. It refers to the undesired accumulation of sediments in channels intended for vessels or for distributing water.
Measurement and monitoring
One may distinguish between measurements at the source, during transport, and within the affected area. Source measurements of erosion may be very difficult since the lost material may be a fraction of a millimeter per year. Therefore, the approach taken is typically to measure the sediment in transport in the stream, by measuring the sediment concentration and multiplying that with the discharge; for example, mg/L times m³/s gives g/s.
Also, sediment spill is better measured in transport than at the source. The sediment transport in open water is estimated by measuring the turbidity, correlating turbidity to sediment concentration (using a regression developed from water samples that are filtered, dried, and weighed), multiplying the concentration with the discharge as above, and integrating over the entire plume. To distinguish the spill contribution, the background turbidity is subtracted from the spill plume turbidity. Since the spill plume in open water varies in space and time, an integration over the entire plume is required, and repeated many times to get acceptably low uncertainty in the results. The measurements are made close to the source, in the order of a few hundred meters.
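The flux estimate described above reduces to multiplying an excess concentration by the discharge. The sketch below illustrates the arithmetic only; the regression coefficients and the turbidity and discharge values are hypothetical examples, not calibration data.

```python
# Minimal sketch of the sediment-flux estimate described above.
# Regression coefficients and measurements are hypothetical examples.

def concentration_from_turbidity(turbidity_ntu, slope=1.8, intercept=0.5):
    """Convert turbidity (NTU) to suspended-sediment concentration (mg/L)
    using a linear regression fitted from filtered, dried, weighed samples."""
    return slope * turbidity_ntu + intercept

def sediment_flux(turbidity_ntu, background_ntu, discharge_m3_s):
    """Spill flux in kg/s: (plume concentration - background concentration) x discharge."""
    excess_mg_l = (concentration_from_turbidity(turbidity_ntu)
                   - concentration_from_turbidity(background_ntu))
    # 1 mg/L equals 1 g/m^3, so divide by 1000 to obtain kg/m^3
    return excess_mg_l / 1000.0 * discharge_m3_s

# Example: 40 NTU in the plume, 5 NTU background, 2.5 m^3/s discharge
print(f"{sediment_flux(40, 5, 2.5):.3f} kg/s")
```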
Anything beyond a work area buffer zone for sediment spill is considered the potential impact area. In the open sea, the impact of concern is almost exclusively with the sessile bottom communities, since empirical data show that fish effectively avoid the impacted area. The siltation affects the bottom community in two main ways: the suspended sediment may interfere with the food gathering of filtering organisms, and the sediment accumulation on the bottom may bury organisms to the point that they starve or even die. Only at extreme concentrations does the suspended sediment decrease the light level enough to impact primary productivity. Even a small accumulation of sediment may kill coral polyps.
While the effect of the siltation on the biota (once the harm is already done) can be studied by repeated inspection of selected test plots, the magnitude of the siltation process in the impact area may be measured directly by monitoring in real time. Parameters to measure are sediment accumulation, turbidity at the level of the filtering biota, and optionally incident light.
Siltation of the magnitude that it affects shipping can also be monitored by repeated bathymetric surveys.
Mitigation
In rural areas, the first line of defense is to maintain land cover and prevent soil erosion in the first place. The second line of defense is to trap the material before it reaches the stream network (known as sediment control). In urban areas, the defenses are to keep land uncovered for as short a time as possible during construction and to use silt screens to prevent the sediment from getting released in water bodies.
During dredging, the spill can be minimized but not eliminated completely by the way the dredger is designed and operated. If the material is deposited on land, efficient sedimentation basins can be constructed. If it is dumped into relatively deep water, there will be a significant spill during dumping but not thereafter, and the spill that arises has minimal impact if there are only fine-sediment bottoms nearby.
One of the most difficult conflicts of interest to resolve, as regards siltation mitigation, is perhaps beach nourishment. When sediments are placed on or near beaches in order to replenish an eroding beach, any fines in the material will continue to be washed out for as long as the sand is being reworked. Since all replenished beaches are eroding or they would not need replenishment, they will contribute to nearshore siltation almost for as long as it takes to erode away what was added, albeit with somewhat decreasing intensity over time. Since the leakage is detrimental to coral reefs, the practice leads to a direct conflict between the public interest of saving beaches, and preserving any nearshore coral reefs. To minimize the conflict, beach replenishment should not be done with sand containing any silt or clay fractions. In practice the sand is often taken from offshore areas, and since the proportion of fines in sediments typically increases in the offshore direction, the deposited sand will inevitably contain a significant percentage of siltation-contributing fines.
It is desirable to minimize the siltation of irrigation channels by hydrologic design, the objective being not to create zones with falling sediment transport capacity, as that is conducive to sedimentation. Once sedimentation has occurred, in irrigation or navigation channels, dredging is often the only remedy.
References
Earth sciences
Sediments
Water pollution | Siltation | Chemistry,Environmental_science | 1,521 |
17,909,920 | https://en.wikipedia.org/wiki/Cilnidipine | Cilnidipine is a calcium channel blocker approved for the treatment of hypertension in Japan, China, India, Nepal, and Korea.
It is a calcium antagonist with both L-type and N-type calcium channel blocking activity. Unlike most other calcium antagonists, which act only on the L-type calcium channel, cilnidipine also acts on the N-type channel.
It was patented in 1984 and approved for medical use in 1995. Cilnidipine is currently being repurposed and developed for use in patients with Raynaud's phenomenon and systemic sclerosis by Aisa Pharma, a US biopharma development company.
Medical uses
Cilnidipine decreases blood pressure and is used to treat hypertension and its comorbidities. Due to its blocking action at the N-type and L-type calcium channel, cilnidipine dilates both arterioles and venules, reducing the pressure in the capillary bed. Cilnidipine is vasoselective and has a weak direct dromotropic effect, a strong vasodepressor effect, and an arrhythmia-inhibiting effect.
The CA-ATTEND study, a large-scale prospective post-marketing surveillance study of Japanese post-stroke hypertensive patients treated with cilnidipine (n = 2667, 60.4% male, mean age 69.0 ± 10.9 years), indicated that cilnidipine was effective in treating uncontrolled blood pressure and was well tolerated in this population.
The Ambulatory Blood Pressure Control and Home Blood Pressure (Morning and Evening) Lowering by N-Channel Blocker Cilnidipine (ACHIEVE-ONE) trial, a large-scale (n = 2319) real-world clinical study of blood pressure (BP) and pulse rate (PR) under cilnidipine treatment, showed that cilnidipine significantly reduced BP and PR in hypertensive patients both at the clinic and at home, especially in patients with higher morning BP and PR. Cilnidipine is currently being studied in the RECONNOITER study in Australia for its effect on Raynaud's phenomenon and other disease manifestations in patients with systemic sclerosis.
Side effects
Serious side effects may include severe dizziness, fast heartbeat, and swelling of the face, lips, tongue, eyelids, hands and feet. Milder side effects include stomach pain, diarrhea and hypotension.
Peripheral edema, a common side effect from the use of amlodipine, was reduced when patients were shifted to cilnidipine.
Brand names
In India, it is sold under brand names including Cinod, Cilacar, Clinblue, and Cilaheart, in doses of 5 mg, 10 mg, and 20 mg.
History
It was jointly developed by Fuji Viscera Pharmaceutical Company and Ajinomoto, and was approved to enter the market and be used as an anti-hypertensive in 1995.
References
Further reading
Calcium channel blockers
Dihydropyridines
3-Nitrophenyl compounds
Carboxylate esters
Ethers | Cilnidipine | Chemistry | 677 |
71,520,312 | https://en.wikipedia.org/wiki/Lagokarpos | Lagokarpos is an extinct plant genus from the Late Paleocene to early Middle Eocene of North America, Germany and China. Its relationship with modern taxa is unclear.
Etymology
Lago- is from lagós (λαγώς), meaning "hare", and -karpos is from karpós (καρπός), meaning "fruit". The name refers to the fruit's rabbit-ear-like wings.
References
Prehistoric angiosperm genera
Angiosperms
Paleocene plants
Eocene plants | Lagokarpos | Biology | 107 |
295,045 | https://en.wikipedia.org/wiki/Bok%20Prize | The Bok Prize is awarded annually by the Astronomical Society of Australia and the Australian Academy of Science to recognise outstanding research in astronomy by honoring a student at an Australian university. The prize consists of the Bok Medal together with an award of $1000 and ASA membership for the following year.
History
The prize is named to commemorate the energetic work of Bart Bok in promoting the undergraduate and graduate study of astronomy in Australia, during his term (1957–1966) as Director of the Mount Stromlo Observatory.
Past winners
Source: Astronomical Society of Australia
See also
List of astronomy awards
Prizes named after people
References
Australian science and technology awards
Astronomy prizes
Awards established in 1989 | Bok Prize | Astronomy,Technology | 133 |
5,768,059 | https://en.wikipedia.org/wiki/Hitsuzend%C5%8D | Hitsuzendō is believed by Zen Buddhists to be a method of achieving samādhi (Japanese: 三昧 sanmai), which is a unification with the highest reality. Hitsuzendo refers specifically to a school of Japanese Zen calligraphy to which the rating system of modern calligraphy (well-proportioned and pleasing to the eye) is foreign. Instead, the calligraphy of Hitsuzendo must breathe with the vitality of eternal experience.
Origins
Yokoyama Tenkei (1885–1966), inspired by the teachings of Yamaoka Tesshu (1836–1888), founded the Hitsuzendo line of thought as a "practice to uncover one's original self through the brush." This was then further developed by Omori Sogen Roshi as a way of Zen practice. Hitsuzendo is practised standing, using a large brush and ink, usually on newspaper roll. In this way, the whole body is used to guide the brush, in contrast to writing at a table.
History
Calligraphy was brought to Japan from China and Chinese masters such as Wang Xizhi 王羲之 (Jp: Ou Gishi; 303-361) have had a profound influence, especially on the karayō style which is still practiced today. The indigenous Japanese wayō tradition (和様書道, wayō-shodō) only appeared towards the end of the Heian era. However, the calligraphy of Zen scholars was often more concerned with spiritual qualities and individual expression and shunned technicalities which led to unique and distinctly personal styles. Japanese calligraphy has three basic styles: Kaisho 楷書, Gyōsho 行書, and Sōsho 草書, adopted from China.
Philosophical background
True creativity is not the product of consciousness but rather the "phenomenon of life itself." True creation must arise from mu-shin 無心, the state of "no-mind," in which thought, emotions, and expectations do not matter. Truly skillful Zen calligraphy is not the product of intense "practice;" rather, it is best achieved as the product of the "no-mind" state, a high level of spirituality, and a heart free of disturbances.
To write Zen calligraphic characters that convey truly deep meaning, one must focus intensely and become one with the meaning of the characters they create. In order to do this, one must free one's mind and heart of disturbances and focus only on the meaning of the character. Becoming one with what you create, essentially, is the philosophy behind Zen Calligraphy and other Japanese arts.
See also
Zenga
Bokuseki
References
Terayama, Tanchu. Zen Brushwork - Focusing The Mind With Calligraphy And Painting
East Asian calligraphy
Japanese calligraphy
Zen Buddhist philosophical concepts
Japanese art
Visual motifs
Zen art and culture
Zenga | Hitsuzendō | Mathematics | 567 |
12,892,875 | https://en.wikipedia.org/wiki/Lepismium%20cruciforme | Lepismium cruciforme is a species of plant in the family Cactaceae. It is found in Argentina, Brazil, and Paraguay. Its natural habitat is subtropical or tropical moist lowland forests. It is threatened by habitat loss.
References
cruciforme
Taxonomy articles created by Polbot
Least concern biota of South America
Epiphytes
Lithophytes | Lepismium cruciforme | Biology | 76 |
44,986,732 | https://en.wikipedia.org/wiki/Topological%20monoid | In topology, a branch of mathematics, a topological monoid is a monoid object in the category of topological spaces. In other words, it is a monoid with a topology with respect to which the monoid's binary operation is continuous. Every topological group is a topological monoid.
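Spelled out, the monoid-object condition amounts to the following standard definition, stated here as a clarifying sketch:

```latex
% A topological monoid, written out explicitly.
A topological monoid is a triple $(M, \mu, e)$ in which $M$ is a topological space,
$e \in M$, and $\mu \colon M \times M \to M$ satisfies
\[
  \mu(x, \mu(y, z)) = \mu(\mu(x, y), z)
  \quad\text{and}\quad
  \mu(e, x) = \mu(x, e) = x
  \qquad \text{for all } x, y, z \in M,
\]
with $\mu$ continuous when $M \times M$ carries the product topology.
A topological group is the special case in which, in addition, every element
has an inverse and inversion is continuous.
```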
See also
H-space
References
External links
topological monoid from symmetric monoidal category
Topological spaces
Algebraic topology | Topological monoid | Mathematics | 81 |
697,229 | https://en.wikipedia.org/wiki/Chemical%20Markup%20Language | Chemical Markup Language (ChemML or CML) is an approach to managing molecular information using tools such as XML and Java. It was the first domain-specific implementation based strictly on XML (the most robust and widely used system for precise information management in many areas), first based on a DTD and later on an XML Schema. It has been developed over more than a decade by Murray-Rust, Rzepa and others and has been tested in many areas and on a variety of machines.
Chemical information is traditionally stored in many different file types which inhibit reuse of the documents. CML uses XML's portability to help CML developers and chemists design interoperable documents. There are a number of tools that can generate, process and view CML documents. Publishers can distribute chemistry within XML documents by using CML, e.g. in RSS documents.
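As a concrete illustration of the kind of document CML describes, the sketch below assembles a minimal molecule record (water) with Python's standard xml.etree module. The element and attribute names (molecule, atomArray, atom, bondArray, elementType, atomRefs2) follow common CML usage, but this is an illustrative sketch only; the normative element set and required attributes are defined by the published CML Schema.

```python
# Minimal sketch of a CML-style molecule document, built with the standard
# library. Element/attribute names follow common CML conventions but are
# illustrative only; consult the CML Schema for the normative form.
import xml.etree.ElementTree as ET

molecule = ET.Element("molecule", id="water")

atom_array = ET.SubElement(molecule, "atomArray")
for atom_id, element, x, y in [("a1", "O", 0.000, 0.000),
                               ("a2", "H", 0.757, 0.586),
                               ("a3", "H", -0.757, 0.586)]:
    ET.SubElement(atom_array, "atom", id=atom_id, elementType=element,
                  x2=str(x), y2=str(y))

bond_array = ET.SubElement(molecule, "bondArray")
ET.SubElement(bond_array, "bond", atomRefs2="a1 a2", order="1")
ET.SubElement(bond_array, "bond", atomRefs2="a1 a3", order="1")

print(ET.tostring(molecule, encoding="unicode"))
```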
CML is capable of supporting a wide range of chemical concepts including:
molecules
reactions
spectra and analytical data
computational chemistry
chemical crystallography and materials
Details of CML and points currently under discussion are now posted on the CML Blog.
Versioning
Versions of the schema are available at SourceForge. As of April 2012, the latest frozen schema is CML v2.4. Some constructs in CML v1 are now deprecated.
Tools
JUMBO began life as the Java Universal Molecular Browser for Objects but is now a Java library that supports validation, reading and writing of CML as well as conversion of several legacy formats to CML and, for example, a reaction in CML to an animated SVG representation of the reaction. JUMBO has evolved into an extensive Java library, CMLDOM, supporting all elements in the schema. Although JUMBO used to be a browser, the preferred approach is to use the Open Source tools Jmol and JChemPaint, some of which use alternative CML libraries. See Blue Obelisk.
Software support
Software importing and exporting a valid CML format
Bioclipse
CDK
JOELib
OpenBabel
Avogadro
XDrawChem
OpenChrom
See also
List of document markup languages
Comparison of document markup languages
JCAMP-DX (another well-known standard, especially for spectroscopic data)
Blue Obelisk community for Open Source chemical software
MathML
References
Further reading
External links
Chemical Markup Language (CML) This includes the CML Schema, links to tools, documentation, and source code
Discussion list
CML Blog
The original (old) site
Markup languages
Industry-specific XML-based standards
Chemical file formats
Cheminformatics
Computational chemistry
Computer-related introductions in 1999 | Chemical Markup Language | Chemistry | 543 |
34,077,780 | https://en.wikipedia.org/wiki/Job-exposure%20matrix | A job-exposure matrix (JEM) is a tool used to assess exposure to potential health hazards in occupational epidemiological studies.
Essentially, a JEM comprises a list of levels of exposure to a variety of harmful (or potentially harmful) agents for selected occupational titles. In large population-based epidemiological studies, JEMs may be used as a quick and systematic means of converting coded occupational data (job titles) into a matrix of possible exposures, eliminating the need to assess each individual's exposure in detail.
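Conceptually, a JEM can be pictured as a lookup table keyed by occupation code and agent. The sketch below is purely illustrative: the occupation codes, agents, and exposure scores are invented, not taken from any published JEM.

```python
# Illustrative sketch of a job-exposure matrix as a lookup table.
# Occupation codes, agents, and exposure levels are invented examples.
JEM = {
    # (occupation code, agent): exposure level (0 = none, 1 = low, 2 = high)
    ("7212", "welding fumes"): 2,
    ("7212", "silica dust"):   1,
    ("5141", "solvents"):      1,
    ("2310", "silica dust"):   0,
}

def exposure(occupation_code, agent):
    """Return the matrix exposure level, or None if the cell is not defined."""
    return JEM.get((occupation_code, agent))

# Convert a study participant's coded job history into possible exposures.
participant_jobs = ["7212", "2310"]
for job in participant_jobs:
    for (code, agent), level in JEM.items():
        if code == job and level > 0:
            print(f"Job {job}: possible exposure to {agent} (level {level})")
```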
Advantages
Assessing exposure by title is less costly than looking at individual cases. JEMs may also reduce differential information bias that might occur when evaluating exposure for individuals from medical records in which their jobs are apparent.
Disadvantages
Variability of exposure within occupational classes in different workplaces, countries, or throughout time is commonly not taken into account, which can result in nondifferential exposure misclassification.
References
Epidemiology
Cohort study methods
Observational study
Occupational safety and health | Job-exposure matrix | Environmental_science | 202 |
28,684,697 | https://en.wikipedia.org/wiki/Ye%C5%9Filyurt%20feces%20case | The Yeşilyurt feces case refers to an incident on the night of January 14–15, 1989, in which Turkish gendarmes under the command of Major Cafer Tayyar Çağlayan, the "security commander of Silopi-Judi", forced villagers to eat feces in Yeşilyurt (Cinibir) village of Cizre District (then in Mardin Province, today in Şırnak Province), and to the related trials.
The villagers applied to the European Court of Human Rights where they were represented by Orhan Dogan. The Court ruled in favor of the villagers and against the Turkish state.
Notes
Sources
Further reading
Zeynel Abidin Kızılyaprak, Hasan Kaya, Şerif Beyaz, Zana Farqini, İstanbul Kürt Enstitüsü, 1900'den 2000'e Kürtler - Kronolojik Albüm, Özgür Bakış Milenyum Armağanı, İstanbul, Ocak 2000, p. 84.
Celal Başlangıç, "Parıldayan çılgın elmas", Radikal, April 28, 2003.
Celal Başlangıç, Korku Tapınağı: Güçlükonak-Silopi-Lice-Tunceli, İletişim Yayınları, 2001, .
1989 in Turkey
Turkish military scandals
European Court of Human Rights cases involving Turkey
History of Şırnak Province
Cizre District
January 1989 events in Turkey
Feces
Gendarmerie (Turkey)
1989 scandals | Yeşilyurt feces case | Biology | 331 |
1,624,188 | https://en.wikipedia.org/wiki/Ozonolysis | In organic chemistry, ozonolysis is an organic reaction in which unsaturated bonds are cleaved with ozone (O3). Multiple carbon–carbon bonds are replaced by carbonyl (C=O) groups, as found in aldehydes, ketones, and carboxylic acids. The reaction is predominantly applied to alkenes, but alkynes and azo compounds are also susceptible to cleavage. The outcome of the reaction depends on the type of multiple bond being oxidized and the work-up conditions.
Detailed procedures have been reported.
Ozonolysis of alkenes
Alkenes can be oxidized with ozone to form alcohols, aldehydes or ketones, or carboxylic acids. In a typical procedure, ozone is bubbled through a solution of the alkene in methanol at low temperature until the solution takes on a characteristic blue color, which is due to unreacted ozone; this color change indicates complete consumption of the alkene. Industrial processes, however, are run at less extreme temperatures. Alternatively, various other reagents can be used as indicators of this endpoint by detecting the presence of ozone. If ozonolysis is performed by introducing a stream of ozone-enriched oxygen through the reaction mixture, the effluent gas can be directed through a potassium iodide solution. When the solution has stopped absorbing ozone, the excess ozone oxidizes the iodide to iodine, which can easily be observed by its violet color. For closer control of the reaction itself, an indicator such as Sudan Red III can be added to the reaction mixture. Ozone reacts with this indicator more slowly than with the intended ozonolysis target. The ozonolysis of the indicator, which causes a noticeable color change, only occurs once the desired target has been consumed. If the substrate has two alkenes that react with ozone at different rates, one can choose an indicator whose own oxidation rate is intermediate between them, and therefore stop the reaction when only the most susceptible alkene in the substrate has reacted. Otherwise, the presence of unreacted ozone in solution (seen as its blue color) or in the bubbles (via iodide detection) only indicates when all alkenes have reacted.
After completing the addition, a reagent is then added to convert the intermediate ozonide to a carbonyl derivative. Reductive work-up conditions are far more commonly used than oxidative conditions.
The use of triphenylphosphine, thiourea, zinc dust, or dimethyl sulfide produces aldehydes or ketones, while the use of sodium borohydride produces alcohols (the R groups may also be hydrogen).
The use of hydrogen peroxide can produce carboxylic acids.
Amine N-oxides produce aldehydes directly. Other functional groups, such as benzyl ethers, can also be oxidized by ozone. It has been proposed that small amounts of acid may be generated during the reaction from oxidation of the solvent, so pyridine is sometimes used to buffer the reaction. Dichloromethane is often used as a 1:1 cosolvent to facilitate timely cleavage of the ozonide. Azelaic acid and pelargonic acid are produced from ozonolysis of oleic acid on an industrial scale.
An example is the ozonolysis of eugenol converting the terminal alkene to an aldehyde:
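Schematically, this can be written as the following simplified overall equation; a reductive work-up is assumed, and formaldehyde is formed as the coproduct from the terminal CH2:

```latex
\[
\underbrace{\mathrm{Ar{-}CH_2{-}CH{=}CH_2}}_{\text{eugenol},\ \mathrm{Ar}\,=\,\text{4-hydroxy-3-methoxyphenyl}}
\;\xrightarrow{\;1)\ \mathrm{O_3};\ 2)\ \text{reductive work-up}\;}\;
\mathrm{Ar{-}CH_2{-}CHO} \;+\; \mathrm{HCHO}
\]
```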
By controlling the reaction/workup conditions, unsymmetrical products can be generated from symmetrical alkenes:
Using TsOH; sodium bicarbonate (NaHCO3); dimethyl sulfide (DMS) gives an aldehyde and a dimethyl acetal
Using acetic anhydride (Ac2O), triethylamine (Et3N) gives a methyl ester and an aldehyde
Using TsOH; Ac2O, Et3N, gives a methyl ester and a dimethyl acetal.
Reaction mechanism
In the generally accepted mechanism proposed by Rudolf Criegee in 1953, the alkene and ozone form an intermediate molozonide in a 1,3-dipolar cycloaddition. Next, the molozonide reverts to its corresponding carbonyl oxide (also called the Criegee intermediate or Criegee zwitterion) and aldehyde or ketone (3) in a retro-1,3-dipolar cycloaddition. The oxide and aldehyde or ketone react again in a 1,3-dipolar cycloaddition, producing a relatively stable ozonide intermediate, known as a trioxolane (4).
Evidence for this mechanism is found in isotopic labeling. When 17O-labelled benzaldehyde reacts with carbonyl oxides, the label ends up exclusively in the ether linkage of the ozonide. There is still dispute over whether the molozonide collapses via a concerted or radical process; this may also exhibit a substrate dependence.
History
Christian Friedrich Schönbein, who discovered ozone in 1840, also performed the first ozonolysis: in 1845, he reported that ethylene reacts with ozone – after the reaction, neither the smell of ozone nor the smell of ethylene was perceivable. The ozonolysis of alkenes is sometimes referred to as "Harries ozonolysis", because some attribute this reaction to Carl Dietrich Harries. Before the advent of modern spectroscopic techniques, ozonolysis was an important method for determining the structure of organic molecules. Chemists would ozonize an unknown alkene to yield smaller and more readily identifiable fragments.
Ozonolysis of alkynes
Ozonolysis of alkynes generally gives an acid anhydride or diketone product, not complete fragmentation as for alkenes. A reducing agent is not needed for these reactions. The mechanism is unknown. If the reaction is performed in the presence of water, the anhydride hydrolyzes to give two carboxylic acids.
Other substrates
Although rarely examined, azo compounds (R−N=N−R′) are susceptible to ozonolysis. Nitrosamines (R2N−N=O) are produced.
Applications
The main use of ozonolysis is for the conversion of unsaturated fatty acids to value-added derivatives. Ozonolysis of oleic acid is an important route to azelaic acid. The coproduct is nonanoic acid:
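Written as a simplified overall equation (an oxidative work-up is assumed and intermediates are omitted):

```latex
\[
\underbrace{\mathrm{CH_3(CH_2)_7CH{=}CH(CH_2)_7COOH}}_{\text{oleic acid}}
\;\xrightarrow{\;\mathrm{O_3},\ \text{oxidative work-up}\;}\;
\underbrace{\mathrm{HOOC(CH_2)_7COOH}}_{\text{azelaic acid}}
\;+\;
\underbrace{\mathrm{CH_3(CH_2)_7COOH}}_{\text{nonanoic acid}}
\]
```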
Erucic acid is a precursor to brassylic acid, a C13-dicarboxylic acid that is used to make specialty polyamides and polyesters. The conversion entails ozonolysis, which selectively cleaves the C=C bond in erucic acid:
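The corresponding simplified equation, under the same assumptions as above, is:

```latex
\[
\underbrace{\mathrm{CH_3(CH_2)_7CH{=}CH(CH_2)_{11}COOH}}_{\text{erucic acid}}
\;\xrightarrow{\;\mathrm{O_3},\ \text{oxidative work-up}\;}\;
\underbrace{\mathrm{HOOC(CH_2)_{11}COOH}}_{\text{brassylic acid}}
\;+\;
\underbrace{\mathrm{CH_3(CH_2)_7COOH}}_{\text{pelargonic (nonanoic) acid}}
\]
```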
A number of drugs and their intermediates have been produced by ozonolysis. The use of ozone in the pharmaceutical industry is difficult to discern owing to confidentiality considerations.
Ozonolysis as an analytical method
Ozonolysis has been used to characterize the structure of some polyolefins. Early experiments showed that the repeat unit in natural rubber is isoprene.
Occurrence
Ozonolysis can be a serious problem, known as ozone cracking, in which traces of the gas in the atmosphere degrade elastomers such as natural rubber, polybutadiene, styrene-butadiene, and nitrile rubber. Ozonolysis produces surface ketone groups that can cause further gradual degradation via Norrish reactions if the polymer is exposed to light. To minimize this problem, many polyolefin-based products are treated with antiozonants.
Ozone cracking is a form of stress corrosion cracking where active chemical species attack products of a susceptible material. The rubber product must be under tension for crack growth to occur. Ozone cracking was once commonly seen in the sidewalls of tires, where it could expand to cause a dangerous blowout, but is now rare owing to the use of modern antiozonants. Other means of prevention include replacing susceptible rubbers with resistant elastomers such as polychloroprene, EPDM or Viton.
Safety
The use of ozone in the pharmaceutical industry is limited by safety considerations.
See also
Polymer degradation
Lemieux–Johnson oxidation – an alternative system using periodate and osmium tetroxide
Trametes hirsuta, a biotechnological alternative to ozonolysis.
References
Organic oxidation reactions
Cycloadditions | Ozonolysis | Chemistry | 1,745 |
50,387,552 | https://en.wikipedia.org/wiki/Corpus%20Corporum | Corpus Corporum (Lat. "the collection of collections") or in full, Corpus Córporum: repositorium operum latinorum apud universitatem Turicensem, is a digital Medieval Latin library developed by the University of Zurich, Institute for Greek and Latin Philology. As of May 2016, the repository contains a total of 137,982,350 words, including the entire Patrologia Latina, the Vulgate, Corpus Thomisticum and numerous other medieval and Neo-Latin collections of religious, literary and scientific texts.
Development
The non-commercial site, still under development (in statu nascendi) at mlat.uzh.ch, is conceived as a text (meta-)repository with a set of research tools. It is being developed under the direction of Phillip Roelli, and it uses only free and open software.
The project aims to provide a platform for standardised (TEI) XML files of copyright-free Latin texts; to make the texts searchable in complex ways; and to function as an online platform for the publication of Latin texts (e.g. the Richard Rufus Project's corpus at Stanford University).
Texts are divided into searchable corpora on specific topics, e.g. one corpus consists of ten Latin translations of Aristotle's Physica.
See also
Christian Classics Ethereal Library
Etymologiae
LacusCurtius
Library of Latin Texts
Perseus Project
Thesaurus Linguae Latinae
References
External links
Project Home page.
Swiss digital libraries
Computing in classical studies
Corpora
Educational projects
Latin-language literature
Medieval Latin literature
Online Scripture Search Engine
Publications of patristic texts
Text Encoding Initiative
Thesauri
University of Zurich | Corpus Corporum | Technology | 355 |
3,510,287 | https://en.wikipedia.org/wiki/IEC%2061499 | The international standard IEC 61499, addressing the topic of function blocks for industrial process measurement and control systems, was initially published by the International Electrotechnical Commission (IEC) in 2005. The specification of IEC 61499 defines a generic model for distributed control systems and is based on the IEC 61131 standard. The concepts of IEC 61499 are also explained by Lewis and Zoitl as well as Vyatkin.
Part 1: Architecture
IEC 61499-1 defines the architecture for distributed systems. In IEC 61499 the cyclic execution model of IEC 61131 is replaced by an event driven execution model. The event driven execution model allows an explicit specification of the execution order of function blocks. If necessary, periodically executed applications can be implemented by using the E_CYCLE function block for the generation of periodic events as described in Annex A of IEC 61499-1.
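The event-driven idea can be illustrated with a small sketch: function blocks execute only when an input event arrives, and a cyclic event source stands in for E_CYCLE. This is a simplified Python illustration of the concept, not an implementation of the standard's execution semantics.

```python
# Simplified illustration of event-driven function block execution.
# Not an implementation of IEC 61499 semantics, only of the idea that blocks
# execute when an input event arrives rather than on a fixed scan cycle.
from collections import deque

class FunctionBlock:
    def __init__(self, name, algorithm):
        self.name = name
        self.algorithm = algorithm      # called when an input event arrives
        self.event_connections = []     # downstream blocks wired to the output event

    def fire(self, data, queue):
        result = self.algorithm(data)
        print(f"{self.name}: {data!r} -> {result!r}")
        for target in self.event_connections:
            queue.append((target, result))   # emit output event with its data

# Build a tiny application: scale a sensor value, then check a limit.
scale = FunctionBlock("SCALE", lambda x: x * 0.1)
alarm = FunctionBlock("ALARM", lambda x: x > 5.0)
scale.event_connections.append(alarm)

# Stand-in for E_CYCLE: inject a periodic event three times with raw readings.
queue = deque((scale, raw) for raw in [12, 48, 73])
while queue:
    block, data = queue.popleft()
    block.fire(data, queue)
```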
IEC 61499 enables an application-centric design, in which one or more applications, defined by networks of interconnected function blocks, are created for the whole system and subsequently distributed to the available devices. All devices within a system are described within a device model. The topology of the system is reflected by the system model. The distribution of an application is described within the mapping model. Therefore, applications of a system are distributable but maintained together.
IEC 61499 is strongly influenced by Erlang, with its shared-nothing model and distribution transparency.
Like IEC 61131-3 function blocks, IEC 61499 function block types specify both an interface and an implementation. In contrast to IEC 61131-3, an IEC 61499 interface contains event inputs and outputs in addition to data inputs and outputs. Events can be associated with data inputs and outputs by WITH constraints. IEC 61499 defines several function block types, all of which can contain a behavior description in terms of service sequences:
Service interface function block – SIFB: The source code is hidden and its functionality is only described by service sequences.
Basic function block - BFB: Its functionality is described in terms of an Execution Control Chart (ECC), which is similar to a state diagram (UML). Every state can have several actions. Each action references one or zero algorithms and one or zero events. Algorithms can be implemented as defined in compliant standards.
Composite function block - CFB: Its functionality is defined by a function block network.
Adapter interfaces: An adapter interface is not a real function block. It combines several events and data connections within one connection and provides an interface concept to separate specification and implementation.
Subapplication: Its functionality is also defined as a function block network. In contrast to CFBs, subapplications can be distributed.
To maintain the applications on a device IEC 61499 provides a management model. The device manager maintains the lifecycle of any resource and manages the communication with the software tools (e.g., configuration tool, agent) via management commands. Through the interface of the software tool and the management commands, online reconfiguration of IEC 61499 applications can be realized.
Part 2: Software tool requirements
IEC 61499-2 defines requirements for software tools to be compliant to IEC 61499. This includes requirements for the representation and the portability of IEC 61499 elements as well as a DTD format to exchange IEC 61499 elements between different software tools.
There are already some IEC 61499 compliant software tools available. Among these are commercial software tools, open-source software tools, and academic and research developments. Usually an IEC 61499 compliant runtime environment and an IEC 61499 compliant development environment is needed.
Part 3: Tutorial information (withdrawn 2008)
IEC 61499-3 was related to an early Publicly Available Specification (PAS) version of the standard and was withdrawn in 2008. This part answered FAQs related to the IEC 61499 standard and described the use of IEC 61499 elements with examples to solve common challenges during the engineering of automation systems.
Among other examples, IEC 61499-3 described the use of SIFBs as communication function blocks for remote access to real-time data and parameters of function blocks; the use of adapter interfaces to implement object oriented concepts; initialization algorithms in function block networks; and the implementation of ECCs for a simplified motor control of hypothetical VCRs.
Additionally, the impact of the mapping on the communication function blocks was explained, as well as device management by management applications and their function blocks, and the principle of the device manager function block (DEV_MGR).
Part 4: Rules for compliance profiles
IEC 61499-4 describes the rules that a system, device or software tool must follow to be compliant to IEC 61499. These rules are related to interoperability, portability and configuration. Two devices are interoperable if they can work together to provide the functionality specified by a system configuration. Applications compliant to IEC 61499 have to be portable, which means that they can be exchanged between software tools of different vendors considering the requirements for software tools described within IEC 61499-2. Devices of any vendor have to be configurable by any IEC 61499 compliant software tool.
Besides these general rules, IEC 61499-4 also defines the structure of compliance profiles. A compliance profile describes how a system conforms to the rules of the IEC 61499 standard. For example, the configurability of a device by a software tool is determined by the supported management commands. The XML exchange format which determines portability of IEC 61499 compliant applications is defined within part 2 and is completed by the compliance profile, for example by declaring the supported file name extensions for exchange of software library elements.
The interoperability between devices of different vendors is defined by the layers of the OSI models. Also status outputs, IP addresses, port numbers as well as the data encoding of function blocks like PUBLISH/SUBSCRIBE and CLIENT/SERVER, which are used for the communication between devices, have to be considered. HOLOBLOC, Inc. defines the "IEC 61499 compliance profile for feasibility demonstrations", which is for example supported by the IEC 61499 compliant software tools FBDK, 4diac IDE, and nxtSTUDIO.
References
Sources
External links
61499 | IEC 61499 | Technology | 1,299 |
23,735,070 | https://en.wikipedia.org/wiki/Photonics%20Society%20of%20Poland | Photonics Society of Poland () is the largest optics/optoelectronics/photonics organization in Poland. It was transformed from the SPIE Poland Chapter on October 18, 2007 during the Extraordinary General Meeting of the SPIE Poland Chapter members.
PSP is a publisher of Photonics Letters of Poland.
Photonics Letters of Poland
Photonics Letters of Poland is a peer-reviewed scientific journal published four times a year by the Photonics Society of Poland in cooperation with SPIE. It was founded in 2009. The journal's editorial scope covers the following divisions: optical technology; information processing; lasers, photonics; environmental optics; and biomedical optics.
See also
European Photonics Industry Consortium
External links
Photonics Society of Poland official web site
PSP publications sites
Photonics Letters of Poland
Physics societies
Optics institutions
Scientific societies based in Poland
Engineering societies
Scientific organizations established in 2007
2007 establishments in Poland | Photonics Society of Poland | Engineering | 179 |
31,389,867 | https://en.wikipedia.org/wiki/Ituran | Ituran Location and Control Ltd. is an Israeli company that provides stolen vehicle recovery and tracking services, and markets GPS wireless communications products. Ituran is traded on NASDAQ and is included in the TA-100 Index. Ituran has over 3,200 employees worldwide and is a market leader in Brazil, Argentina, Israel and the United States. As of June 2020, the company had more than 2 million subscribers.
History
Ituran was established in 1994 by the Tadiran conglomerate to develop and operate a service for locating stolen vehicles, using a technology that was originally developed for military use at Tadiran Telematics, a subsidiary of Tadiran Communications. The core technology was developed by Tadiran under license from Teletrac USA. Teletrac was founded as International Teletrac Systems in 1988. It received initial funding from a unit of AirTouch Communication (formerly known as Pacific Telesis) in exchange for 49% equity of the company. Teletrac contracted Tadiran to build base stations for their US system, which were later used under license in deploying the network in Israel.
In 1995, Tadiran decided to sell the Ituran concept to a group of investors headed by Izzy Sheratzky.
In 1998, the company had an initial public offering on the Tel Aviv Stock Exchange, raising the capital required to develop the service overseas in the United States, Brazil and Argentina.
In November 1999, Ituran acquired Tadiran Telematics, which manufactured the vehicle tracking systems Ituran was using for its services, for $10 million. The acquisition enabled Ituran to reduce the cost of the systems it sold.
Ituran then renamed the acquired company Telematics Wireless.
In 2005, Ituran raised approximately $50 million in an initial public offering on Nasdaq, which gave the company a value of $294 million.
In April 2007, Ituran acquired the Mapa group for $13 million. The Mapa Group consists of three divisions: geographical databases, map publishing in print and online, satellite navigation and location-based services.
In November 2007, Ituran sold Telematics Wireless to Singapore based ST Electronics, part of the ST Engineering corporation, for $90 million.
In 2011, Ituran signed an agreement with Pelephone which allows Ituran to use Pelephone's network to set up an MVNO (mobile virtual network operator) venture.
In 2012, Ituran announced that Ituran Brazil entered into agreement with General Motors Brazil ("GMB") through a company controlled by Ituran (51%).
Ituran Brazil was founded in 1999 and has (as of the end of 2014) more than 310,000 active subscribers.
In May 2018, a severe security vulnerability was discovered in the Ituran system. The vulnerability allowed attackers to easily extract personal information of Ituran customers, including home addresses, phone numbers and car registration numbers. For some users it also allowed real-time tracking using the Ituran application. Following the discovery of the vulnerability, Ituran disabled the self-service portal for its Israeli customers, until the vulnerability is fixed.
Mapa
Mapa - Mapping and Publishing (Hebrew for "map") is an Israeli cartographic and book publishing company, purchased by Ituran in April 2007 for $13 million.
Mapa was founded in 1985 under the name Sifrei Tel Aviv (lit. Tel Aviv Books). It entered the field of cartography in 1994. Mapa eventually moved solely into cartography, becoming the market leader in map making for the public sector in 1995. It controlled close to 80% of the map making market in 2005. Mapa has been expanding its horizons since 1996, and now publishes books in all fields. It also exports some books. Mapa's workforce consists of approximately 70 employees.
See also
Economy of Israel
Vehicle tracking system
Location-based service
Telematics
External links
Ituran USA website
Ituran Argentina website
Ituran Brazil website
Ituran Europe website
Telematics Wireless website
Telematics
Mapa
References
Economy of Israel
Global Positioning System
Science and technology in Israel
Wireless locating
Geographical technology
Companies based in Tel Aviv
Companies listed on the Tel Aviv Stock Exchange
Companies listed on the Nasdaq
Israeli brands | Ituran | Technology,Engineering | 882 |
327,121 | https://en.wikipedia.org/wiki/Quicksilver%20%28novel%29 | Quicksilver is a historical novel by Neal Stephenson, published in 2003. It is the first volume of The Baroque Cycle, his late Baroque historical fiction series, succeeded by The Confusion and The System of the World (both published in 2004). Quicksilver won the Arthur C. Clarke Award and was nominated for the Locus Award in 2004. Stephenson organized the structure of Quicksilver such that chapters have been incorporated into three internal books titled "Quicksilver", "The King of the Vagabonds", and "Odalisque". In 2006, each internal book was released in separate paperback editions, to make the 900 pages more approachable for readers. These internal books were originally independent novels within the greater cycle during composition.
The novel Quicksilver is written in various narrative styles, such as theatrical staging and epistolary, and follows a large group of characters. Though mostly set in England, France, and the United Provinces in the period 1655 through 1673, the first book includes a frame story set in late 1713 Massachusetts. In order to write the novel, Stephenson researched the period extensively and integrates events and historical themes important to historical scholarship throughout the novel. However, Stephenson alters details such as the members of the Cabal ministry, the historical cabinet of Charles II of England, to facilitate the incorporation of his fictional characters. Within the historical context, Stephenson also deals with many themes which pervade his other works, including the exploration of knowledge, communication and cryptography.
The plot of the first and third books focus on Daniel Waterhouse's exploits as a natural philosopher and friend to the young Isaac Newton and his later observations of English politics and religion, respectively. The second book introduces the vagabond Jack Shaftoe ("King of the Vagabonds") and Eliza (a former member of a Turkish harem) as they cross Europe, eventually landing in the Netherlands, where Eliza becomes entangled in commerce and politics. Quicksilver operates in the same fictional universe as Stephenson's earlier novel Cryptonomicon, in which descendants of Quicksilver characters Shaftoe and Waterhouse appear prominently.
Background and development
During the period in which he wrote Cryptonomicon, Stephenson read George Dyson's Darwin Amongst the Machines, which led him to Gottfried Leibniz's interest in a computing machine, the Leibniz–Newton feud, and Newton's work at the Royal Treasury. He considered this "striking when [he] was already working on a book about money and a book about computers," and became inspired to write about the period. Originally intended to be included in Cryptonomicon, Stephenson instead used the material as the foundation for Quicksilver, the first volume of the Baroque Cycle. The research for the sprawling historical novel created what Stephenson called "data management problems", and he resorted to a system of notebooks to record research, track characters, and find material during the writing process.
Historicity
In Quicksilver, Stephenson places the ancestors of Cryptonomicon's characters in Enlightenment Europe alongside a cast of historical individuals from Restoration England and the Enlightenment. Amongst the cast are some of the most prominent natural philosophers, mathematicians and scientists (Newton and Leibniz), and politicians (William of Orange and Nassau) of the age. In an interview, Stephenson explained he deliberately depicted both the historical and fictional characters as authentic representatives of historical classes of people, such as the Vagabonds as personified by Jack, and the Barbary slaves as personified by Eliza. In his research for the characters, he explored the major scholarship about the period.
Stephenson did extensive research on the Age of Enlightenment, noting that it is accessible for English speaking researchers because of the many well documented figures such as Leibniz, Newton and Samuel Pepys. In the course of his research he noted historiographic inconsistencies regarding characters of the period which he had to reconcile. Especially prominent was the deification of Newton, Locke and Boyle and their scientific method by Enlightenment and Victorian scholars. He considers the scientific work done during the Baroque period as crucial to the Enlightenment. From his research he concluded that the Enlightenment in general "is and should be a controversial event because although it led to the flourishing of the sciences and political liberties and a lot of good stuff like that, one can also argue that it played a role in the French Revolution and some of the negative events of the time as well." The portrayal of a confusing and uncertain era develops throughout the book.
Some reviewers commented that Stephenson seems to carry his understanding of the period a little too far at times, delving into too much detail. Nick Hasted of The Independent wrote that this research made "descriptions of Restoration London feel leaden, and intellectual discourses between Newton and his contemporaries textbook-dry." Despite the thorough examination of the period, however, Stephenson does take liberty in depicting the Enlightenment. Both main and secondary fictional characters become prominent members of society who advise the most important figures of the period and affect everything from politics to economics and science. For example, he repopulates the real Cabal Ministry with fictional characters.
Style
Quicksilver is a historical fiction novel that occasionally uses fantasy and science fiction techniques. The book is written in "an omniscient modern presence occasionally given to wisecracks, with extensive use of the continuous present". Mark Sanderson of The Daily Telegraph and Steven Poole of The Guardian both describe the novel as in the picaresque genre, a genre common to 17th- and 18th-century Europe. Humor permeates the text, both situational and in the language itself, which emulates the picaresque style.
The narrative often presents protracted digressions. These digressions follow a multitude of events and subjects related to history, philosophy and science. For example, USA Today commented on the length of discussion of Newton's interest in the nature of gravity. With these digressions, the narrative also rapidly changes between multiple perspectives, first and third person, as well as using multiple writing techniques, both those familiar to the modern reader and those popular during the Early Modern period. These techniques include letters, drama, cryptographic messaging, genealogies and "more interesting footnotes than found in many academic papers."
Stephenson incorporates 17th-century sentence structure and orthography throughout Quicksilver, most apparent in his use of italicization and capitalization. He adapts a combination of period and anachronistic language throughout the books, mostly to good effect, while allowing diction from modern usage, such as "canal rage" an allusion to road rage. Stephenson chose not to adapt period language for the entire text; instead he allowed such language to enter his writing when it was appropriate, often turning to modern English and modern labels for ideas familiar to modern readers. Stephenson said "I never tried to entertain the illusion that I was going to write something that had no trace of the 20th or the 21st century in it."
Plot
Quicksilver
The first book is a series of flashbacks from 1713 to the earlier life of Daniel Waterhouse. It begins as Enoch Root arrives in Boston in October 1713 to deliver a letter to Daniel containing a summons from Princess Caroline. She wants Daniel to return to England and attempt to repair the feud between Isaac Newton and Gottfried Leibniz. While following Daniel's decision to return to England and board a Dutch ship (the Minerva) to cross the Atlantic, the book flashes back to when Enoch and Daniel each first met Newton. During the flashbacks, the book refocuses on Daniel's life between 1661 and 1673.
While attending school at Trinity College, Cambridge, Daniel becomes Newton's companion, ensuring that Newton does not harm his health and assisting in his experiments (including rebuffing a clumsy sexual advance from Newton, exactly as Daniel's descendant Lawrence will rebuff Alan Turing in Cryptonomicon). However, the plague of 1665 forces them apart: Newton returns to his family manor and Daniel to the outskirts of London. Daniel quickly tires of the radical Puritan rhetoric of his father, Drake Waterhouse, and decides to join Reverend John Wilkins and Robert Hooke at John Comstock's Epsom estate.
There Daniel takes part in a number of experiments, including the exploration of the diminishing effects of gravity with changes in elevation, the transfusion of blood between dogs and Wilkins' attempts to create a philosophical language. Daniel soon becomes disgusted with some of the practices of the older natural philosophers (which include vivisection of animals) and visits Newton during his experiments with color and white light. They attempt to return to Cambridge, but again plague expels the students. Daniel returns to his father; however, his arrival on the outskirts of London coincides with the second day of the Fire of London. Drake, taken by religious fervour, dies atop his house as the King blows it up to create a fire break to prevent further spread of the fire. Soon after Drake's death, Newton and Daniel return to Cambridge and begin lecturing.
A flashforward finds Daniel's ship under attack by the fleet of Edward Teach (Blackbeard) in 1713. Then the story returns to the past as Daniel and Newton return to London: Newton is under the patronage of Louis Anglesey, the Earl of Upnor, and Daniel becomes secretary of the Royal Society when Henry Oldenburg is detained by the King for his active foreign correspondence. During his stint in London, Daniel encounters a number of important people from the period. Daniel remains one of the more prominent people in the Royal Society, close to Royal Society members involved in court life and politics. By 1672 both Daniel and Newton become fellows at Trinity College where they build an extensive alchemical laboratory which attracts other significant alchemists including John Locke and Robert Boyle. Daniel convinces Newton to present his work on calculus to the Royal Society.
In 1673, Daniel meets Leibniz in England and acts as his escort, leading him to meetings with important members of British society. Soon, Daniel gains the patronage of Roger Comstock as his architect. While under Roger's patronage, the actress Tess becomes Daniel's mistress both at court and in bed. Finally the book returns to 1713, where Daniel's ship fends off several of Teach's pirate ships. Soon they find out that Teach is after Daniel alone; however, with the application of trigonometry, the ship is able to escape the bay and the pirate band.
The King of the Vagabonds
The King of the Vagabonds focuses on the travels of "Half-Cocked" Jack Shaftoe. It begins by recounting Jack's childhood in the slums outside London where he pursued many disreputable jobs, including hanging from the legs of hanged men to speed their demise. The book then jumps to 1683, when Jack travels to the Battle of Vienna to participate in the European expulsion of the Turks. While attacking the camp, Jack encounters Eliza, a European slave in the sultan's harem, about to be killed by janissaries. He kills the janissaries and loots the area, taking ostrich feathers and acquiring a Turkish warhorse which he calls Turk. The two depart from the camp of the victorious European army and travel through Bohemia into the Palatinate. To sell the ostrich feathers at a high price, they decide to wait until the spring fair in Leipzig. Jack and Eliza spend the winter near a cave warmed by a hot water spring. In the springtime, they travel to the fair dressed as a noblewoman and her bodyguard where they meet Doctor Leibniz. They quickly sell their goods with the help of Leibniz, and agree to accompany him to his silver mine in the Harz Mountains.
Once they arrive at the mine, Jack wanders into the local town where he has a brief encounter with Enoch Root in an apothecary's shop. Jack leaves town but gets lost in the woods, encountering pagan worshippers and witch hunters. He successfully escapes them by finding safe passage through a mine connecting to Leibniz's. Eliza and Jack move on to Amsterdam, where Eliza quickly becomes embroiled in the trade of commodities. Jack goes to Paris to sell the ostrich feathers and Turk, leaving Eliza behind. When he arrives in Paris, he meets and befriends St. George, a professional rat-killer and tamer, who helps him find lodging. While there, he becomes a messenger for bankers between Paris and Marseilles. However, during an attempt to sell Turk Jack is captured by nobles. Luckily, the presence of Jack's former employer, John Churchill, ensures that he is not immediately killed. With Churchill's help, Jack escapes from the barn where he has been held prisoner. During the escape, he rides Turk into a masquerade at the Hotel d'Arcachon in a costume similar to that of King Louis. With the aid of St. George's rats he escapes without injury but destroys the ballroom and removes the hand of Etienne d'Arcachon.
Meanwhile, Eliza becomes heavily involved in the politics of Amsterdam, helping Knott Bolstrood and the Duke of Monmouth manipulate the trade of VOC stock. This causes a panic from which they profit. Afterwards, the French Ambassador in Amsterdam persuades Eliza to go to Versailles and supply him information about the French court. Eliza agrees after a brief encounter and falling-out with Jack. William of Orange learns of Eliza's mission and intercepts her, forcing her to become a double agent for his benefit and to give him oral sex. Meanwhile, Jack, with an injury caused by Eliza, departs on the slaving trip. The ship is captured by Barbary pirates, and the end of the book has Jack as a captured galley-slave.
Odalisque
This book returns to Daniel Waterhouse, who, in 1685, has become a courtier to Charles II because of his role as Secretary of the Royal Society. He warns James II, still Duke of York, of his brother Charles' impending death, following which Daniel quickly becomes an advisor to James II. He continues to be deeply involved with the English court, ensuring the passage of several bills which reduce restrictions on non-conformists despite his detraction from the Francophile court. Meanwhile, Eliza becomes the governess of a widower's two children in Versailles. She catches the eye of the king and becomes the broker of the French nobility. With her help, the French court, supported by King Louis, creates several market trends from which they profit extensively. Her active involvement in the French court gains her a title of nobility: Countess of Zeur.
Daniel and Eliza finally meet during a visit to the Netherlands where Daniel acts as an intermediary between William of Orange and the detracting English nobility. Daniel realizes Eliza's importance during a meeting at the house of Christiaan Huygens. Eliza woos Daniel and uses this connection to gain entrance into the English court and the Royal Society. Daniel also meets Nicholas Fatio while in Amsterdam. Soon after this meeting, Fatio and Eliza prevent the attempted kidnapping of William of Orange by an ambitious French courtier. Upon his return, Daniel is arrested by the notorious judge George Jeffreys, and later imprisoned in the Tower of London. Daniel escapes with the help of Jack Shaftoe's brother Bob, whose infantry unit is stationed there.
After a brief return to Versailles, Eliza joins Elizabeth Charlotte of the Palatinate at her estate before the invasion of the Palatinate in her name. Eliza informs William of Orange of the troop movements caused by the French invasion which frees his forces along the border of the Spanish Netherlands, a region of stalemate between France and the Dutch Republic. During her flight from the Electorate of the Palatinate, Eliza becomes pregnant by Louis's cryptographer, though popular knowledge suggested it was the French nobleman Etienne D'Arcachon's child. Meanwhile, William takes the free troops from the border on the Spanish Netherlands to England, precipitating the Glorious Revolution, including the expulsion of James II. James flees London and Daniel Waterhouse soon encounters him in a bar. Convinced that the Stuart monarchy has collapsed, Daniel returns to London and takes revenge on Jeffreys by inciting a crowd to capture him for trial and later execution. Though he plans to depart for Massachusetts, Daniel's case of bladder stones increasingly worsens during this period. The Royal Society and other family friends are very aware of this and force Daniel to get the stone removed by Robert Hooke at Bedlam.
Major themes
A 2003 interview in Newsweek quotes Stephenson's belief that "science fiction... is fiction in which ideas play an important part." Central to Quicksilver is the importance of the Enlightenment. By placing the reader among a world of ideas that change the course of science, Stephenson explores the development of the scientific method. One theme Stephenson explores in Quicksilver is the advancement of mathematical sciences which in turn led to important applications: Leibniz's theory of binary mathematics became the foundation upon which to develop computers. As he did in Cryptonomicon, Stephenson highlights the importance of networks and codes, which in Quicksilver occur against a "backdrop of staggering diversity and detail", writes Mark Sanderson in his review of the book for the Daily Telegraph. Also, returning to his cyberpunk roots, Stephenson emphasizes the manner in which information and ideas are dispersed in complex societies. Quicksilver uses the "interactions of philosophy, court intrigue, economics, wars, plagues and natural disasters" of the late 17th and early 18th century to create a historical backdrop. From one perspective, the characters are most useful in their roles as "carriers of information". Although the characters use various techniques to disseminate information, the most prominent is cryptography. Elizabeth Weisse writes in USA Today that the use of cryptography is "Stephenson's literary calling card", as she compares Quicksilver to Cryptonomicon.
In Quicksilver Stephenson presents the importance of freedom of thought, the diversity required for new ideas to develop, and the manner in which new ideas are expressed. To explore or accept an idea such as the theory of gravity often resulted in dire consequences or even "grotesque punishment" in the early 17th century. Stephenson also points out that research, particularly as conducted at the Royal Society, resulted in a changing of views in some cases:
If you read the records of the Royal Society and what they were doing in the 1660s, it's clear that at a certain point, some of these people – and I think Hooke was one of them – became a little bit disgusted with themselves and began excusing themselves when one of these vivisections was going to happen. I certainly don't think they turned into hardcore animal rights campaigners, or anything close to that, but I think after a while, they got a little bit sick of it and started to feel conflicted about what they were doing. So I've tried to show that ambivalence and complication in the book.
How to exist during a "time of dualities" is another important theme in Quicksilver, especially in their effects on Daniel Waterhouse, who is torn between "reason versus faith, freedom versus destiny, matter versus math."
Frequent mention of alchemy indicates the shift from an earlier age to a newer transformative age. Newton was an alchemist, and one character compares finance to alchemy: "all goods—silk, coins, shares in mines—lose their hard dull gross forms and liquefy, and give up their true nature, as ores in an alchemist's furnace sweat mercury". The book focuses on a period of social and scientific transmutations, expanding upon the symbolism of the book's title, Quicksilver, because it is a period in which the "principles governing transformation" are investigated and established. A commerce of different goods rapidly changing from one into another is a recurrent theme throughout the book. Also, the title Quicksilver connects the book to the method alchemists used to distill quicksilver, "the pure living essence of God's power and presence in the world", from, as one character put it, "the base, dark, cold, essentially fecal matter of which the world was made."
Characters
Main characters
In order of appearance:
Enoch Root – an elusive and mysterious alchemist who first appears at the beginning of the book and recurs throughout, often in the company of alchemists such as Newton and Locke.
Daniel Waterhouse – son of prominent Puritan Drake Waterhouse, roommate of Isaac Newton, friend of Gottfried Leibniz, and prominent member of the Royal Society. Waterhouse is both a savant and a strict Puritan. As Quicksilver progresses he becomes more and more involved in the inner workings of British politics.
"Half-Cocked" Jack Shaftoe – an English vagabond, known as "The King of the Vagabonds", who rescues Eliza and becomes the enemy of the Duke d'Arcachon.
Eliza – a former harem slave who becomes a French countess, investor, and spy for William of Orange and Gottfried Leibniz. She originally became a slave when she and her mother were kidnapped from their homeland of Qwghlm by a European pirate with breath that smelled of rotten fish.
Historical characters
Robert Boyle, Irish natural philosopher
Caroline of Ansbach, an inquisitive child who loses her mother to smallpox
John Churchill, former employer of Jack and a prominent British politician
William Curtius, German Fellow of the Royal Society, and diplomat for the House of Stuart.
Nicolas Fatio de Duillier
Judge Jeffreys, Lord Chancellor of England
Robert Hooke, English natural philosopher and biologist
Christiaan Huygens, continental natural philosopher
Gottfried Leibniz
Louis XIV, King of France
Isaac Newton
Henry Oldenburg, founding member and secretary of the Royal Society
Bonaventure Rossignol, a French cryptologist
James Scott, 1st Duke of Monmouth
James Stuart, as the Duke of York and as James II, King of England
Edward Teach, aka Blackbeard
John Wilkins, Bishop of Chester, founding member of the Royal Society, and advocate of religious tolerance in Britain
William III of England, as William, Prince of Orange
Benjamin Franklin
Samuel Pepys
Critical reception
The reception of Quicksilver was generally positive. Some reviewers found the length cumbersome; others found the length justified by the book's quality and entertainment value. Paul Boutin at Slate Magazine comments that Quicksilver offers an insight into how advanced and complicated science was during the age of "alchemists and microscope-makers", and that the scientists of the period were "the forerunners of the biotech and nanotech researchers who are today's IT Geeks". Entertainment Weekly rates Quicksilver an A−, stating that the book "makes you ponder concepts and theories you initially thought you'll never understand". The critic finds a parallel between Stephenson's approach and a passage from the book describing an effort to put "all human knowledge ... in a vast Encyclopedia that will be a sort of machine, not only for finding old knowledge but for making new".
The Independent places emphasis on the comparisons between the story that evolves in Quicksilver and Stephenson's earlier novel Cryptonomicon, with the former "shaping up to be a far more impressive literary endeavour than most so-called 'serious' fiction. And it ends on a hell of a cliffhanger. No scholarly, and intellectually provocative, historical novel has been this much fun since The Name of the Rose". Patrick Ness considers Quicksilver to be "entertaining over an impossible distance. This isn't a book; it's a place to move into and raise a family." His review focuses on the scope of the material and the humour inherent in Quicksilver. Mark Sanderson calls the novel an "astonishing achievement", and compares Quicksilver to "Thomas Pynchon's Mason & Dixon and Lawrence Norfolk's Lempriere's Dictionary." Although dense with historical description and extremely long, Quicksilver contains what Sanderson called "more sex and violence ... than any Tarantino movie". Stephenson balances his desire to respect the period with a need to develop a novel which entertains modern readers.
Polly Shulman of The New York Times finds Quicksilver hard to follow and amazingly complex but a good read. However, she notes that the complicated and clunky dialogue between the characters is a distraction. She thinks a full appreciation of the work is only possible within the context of the remaining novels of The Baroque Cycle, and compares the novel to works by Dorothy Dunnett, William Gibson and Bruce Sterling, calling it "history-of-science fiction". In a post-publication review for The New York Times, Edward Rothstein remarks that the scope of the novel is at times detrimental: "Unfortunately, in this novelistic cauldron it can sometimes seem as if mercury's vapors had overtaken the author himself, as if every detail he had learned had to be anxiously crammed into his text, while still leaving the boundaries between fact and invention ambiguous". He considers the novel to be an "experiment in progress", although the historical background is compelling.
Deborah Friedell disliked Quicksilver. She cites Stephenson's poor writing and his lack of knowledge of the literary tradition, which she attributes to the fact that "the greatest influences upon Stephenson's work have been comic books and cartoons". She dislikes his use of anachronism, his failure to be literary, and his general approach to historical fiction. She writes of Stephenson and the reviewers who praised the work:
Stephenson is decidedly not a prodigy; but his babe-in-the-woods routine has proved irresistible for some, who are hailing his seemingly innate ability to meld the products of exhaustive historical research with what they see as a brilliant, idiosyncratic sense of humor and adventure. A Times critic has declared that Stephenson has a "once-in-a-generation gift", and that Quicksilver "will defy any category, genre, precedent or label—except for genius". This is promotional copy disguised as literary criticism. There is nothing category-defying about this ridiculous book.
From the foreign press, the review in the Frankfurter Allgemeine points out that the historical period of Quicksilver is that of the birth of modern science, which coincided with a shift of language as English became the language of science. Moreover, the review focuses on Leibniz's principles of mathematics, which Stephenson claims established the framework for modern computing.
Publication history
Based on the success of Cryptonomicon, a New York Times bestseller with sales of about 300,000 copies, the initial print run for Quicksilver was 250,000 copies. Five months before the release date, a web campaign was initiated to advertise the work. The novel was originally published in a single volume; in 2006 HarperCollins republished it as three separate paperback volumes.
Editions
September 23, 2003, US, William Morrow (), hardback (first edition), 944 pages
October 2, 2003, UK, William Heinemann (), hardback
2003, UK, William Heinemann (), paperback
June 2004, US, William Morrow (), hardback (Special Edition), 968 pages
September 21, 2004, US, HarperCollins Perennial (), trade edition, 927 pages
October 2004, US, HarperCollins (), CD, abridged audiobook, 22 hours 1 minute, narrated by Simon Prebble and Stina Nielson
November 2004, US, HarperCollins (), MP3 release of the abridged audio CD
Split into 3 volumes in 2006
Quicksilver, January 2006, US, HarperCollins (), mass market, 480 pages
The King of the Vagabonds, February 2006, US, HarperCollins (), mass market paperback, 400 pages
Odalisque, March 2006, US, HarperCollins (), mass market paperback, 464 pages
See also
The Age of Unreason cycle by Gregory Keyes has a similar approach to the period.
References
External links
The Metaweb was once an extensive Quicksilver wiki, including many pages written by Stephenson about the historical and fictional persons and events of this book. The old data is mothballed; the website is now the corporate site for a startup spun out of Applied Minds. However, archived pages are still viewable via the Internet Archive's Wayback Machine.
Quicksilver at Complete Review; contains an archive of links to all major newspaper reviews of the book.
2003 American novels
2003 science fiction novels
Novels about alchemy
Fiction set in 1713
Novels set in the 1710s
HarperCollins books
Historical novels
Fiction about mining
Novels about cryptography
Novels set in Early Modern England
Novels set in Early Modern France
Novels set in the 1650s
Novels set in the 1660s
Novels set in the 1670s
Novels set in the Netherlands
The Baroque Cycle
Cultural depictions of James II of England
Cultural depictions of Blackbeard
Great Plague of London
sv:Quicksilver (bok) | Quicksilver (novel) | Astronomy | 6,007 |
44,280,390 | https://en.wikipedia.org/wiki/Productive%20aging | Productive aging refers to the activities that older people engage in on a daily basis. Older adults face both opportunities and constraints in the productive aging process. Communities and society need to develop more options for older adults to choose how they engage in the community and contribute to others. Policy changes and resource commitments are important for promoting productive aging. One example of productive aging is retirement, which moves older adults from paid forms of productivity to unpaid activities. Activities that shape these opportunities and constraints include retirement, employment, economic well-being, leisure, religious participation and spirituality, membership in community associations and volunteerism, education, and political action. Older adults will find many opportunities to engage in activities which contribute to society or to pursue personal creative interests.
References
Hooyman, Nancy R. and H. Asuman Kiyak, Social Gerontology: A Multidisciplinary Perspective, 9th edition
Gerontology | Productive aging | Biology | 198 |
4,538,755 | https://en.wikipedia.org/wiki/6P1P | The 6P1P (Russian: 6П1П) is a Soviet-made miniature 9-pin beam tetrode vacuum tube with ratings similar to the 6AQ5, EL90 and the 6V6. Because of a different pinout (a 9-pin base versus a 7-pin base) from the 6AQ5/EL90, it cannot be used as a plug-in replacement for these types; however, it will work in the same circuit with component values unchanged. Its maximum plate/screen voltage and dissipation ratings are actually slightly higher than those of a 6AQ5. A ruggedized/extended-ratings version of the tube is designated 6P1P-EV (Russian: 6П1П-ЕВ), roughly equivalent to the 6AQ5W. A Chinese-manufactured version of the tube also exists, labeled 6P1.
The type was commonly used in Soviet-built vacuum tube radios and TV sets as an audio output amplifier, until it was replaced by the higher-performance 6P14P (close to the EL84). In some old Soviet TV sets (mainly pre-1960 sets with 70-degree deflection picture tubes), it was also used as a frame output tube, until more specialized tubes for this purpose were developed in the Soviet Union.
The tube is believed to be no longer in production.
See also
6AQ5
6V6
Russian tube designations
References
6P1P pinout and specifications
External links
Russian/Soviet tube manufacturers and their logos
Vacuum tubes
Goods manufactured in the Soviet Union | 6P1P | Physics | 314 |
63,771,705 | https://en.wikipedia.org/wiki/Sound%20scenography | Sound scenography (also known as acoustic scenography) is the process of staging spaces and environments through sound. It combines expertise from the fields of architecture, acoustics, communication, sound design and interaction design to convey artistic, historical, scientific, or commercial content or to establish atmospheres and moods.
Definition
Initially developed as a sub-discipline of scenography, it is now primarily used in the context of exhibitions, museums, media installations and trade fairs, as well as shops, adventure parks, spas, reception areas, and open-plan offices.
In contrast with other applications of sound design, spatial localisation plays a central role in sound scenography. Sound in contexts such as film soundtracks provides a synchronised and standardised listening experience: the sound experience is meant to be the same for every viewer at every position (and in every cinema). Because exhibition spaces are freely traversable and show audio-visual content at various stations across the room, sound scenography aims to provide every visitor with an individual listening experience with distinct start and end points as well as a distinct progression. Thus, the dramaturgy of the sound experience is no longer determined by the timeline of the soundtrack, but by the position and movement of the visitor.
Methods of sound scenography
Spaces can be staged with sound in various ways. Rooms have different tonal properties and acoustics depending on their architecture and interior design. Live musicians can spread across the room or play in motion, which is especially common in spatial music. The reproduction of sounds via loudspeakers offers a wide range of possibilities for integrating sound into spaces and is therefore the most commonly used method. In that context, sound scenography is influenced by various practices in the wider field of sound design and composition, such as generative music, sonic interaction design, and sound masking. Loudspeaker systems used to distribute sound range from standard spatial audio setups to the more customised distributions common in sound installation, such as the Acousmatic Room Orchestration System. The spatial integration of sound delivered via headphones is a defining feature of interactive soundwalks. Using technologies such as geolocation and head tracking, sounds are employed to augment real environments in what the BBC's R&D department calls "Audio AR". In the more controlled environment of an exhibition, this approach has been used to create fully virtual sound environments.
Functions of sound scenography
Sound scenography performs many of the established functions of sound in film soundtracks. It gives emotional connotations to spaces, exhibits or even individual interactions through the use of sound. Soundscapes are used to establish atmospheres and moods with varying degrees of realism. Sound content is also used to evoke memories and associations. Soundscapes and musical accents clarify visual content or re-contextualise it. Content can also be conveyed purely sonically without accompanying visual media. Especially in connection with large-scale video projection, sound is used to direct the viewer's attention. In all these application areas, sound scenography relates the different sonic components of an exhibition to one another in order to create a coherent overall soundscape.
See also
Spatial music
Exhibit design
Sound Art
Acoustical engineering
References
Further reading
Franinović, K. & Serafin, Stefania (2013) Sonic Interaction Design, Cambridge: Massachusetts Institute of Technology
Atelier Brückner (2010) Scenography / Szenografie – Making spaces talk / Narrative Räume, Stuttgart: avedition
Minard, Robin (1993) Sound Environments – music for public spaces, Berlin: Akademie der Künste
Kiefer, Peter (2010) Klangräume der Kunst, Heidelberg: Kehrer Verlag
Cancellaro, Joseph (2006) Sound Design for Interactive Media, New York: Thomson Delmar Learning
Metzger, Christoph (2015) Architektur und Resonanz, Berlin: jovis Verlag GmbH
Plot #10 The Power of Sound, 2013
Design
Sound production
Film sound production | Sound scenography | Engineering | 811 |
24,329,637 | https://en.wikipedia.org/wiki/Thiosulfinate | In organosulfur chemistry, thiosulfinate is a functional group consisting of the linkage R-S(O)-S-R (R are organic substituents). Thiolsulfinates are also named as alkanethiosulfinic (or arenethiosulfinic) acid esters.
They are the first of the series of functional groups containing an oxidized disulfide bond. Other members of this family include thiosulfonates (R-SO2-S-R), α-disulfoxides (R-S(O)-S(O)-R), sulfinyl sulfones (R-S(O)-SO2-R), and α-disulfones (R-SO2-SO2-R), of which all (except α-disulfoxides) are known. The thiosulfinate group can occur in cyclic as well as acyclic structures.
Occurrence
A variety of acyclic and cyclic thiosulfinates are found in plants, or formed when the plants are cut or crushed.
A well-known thiosulfinate is allicin, one of the active ingredients formed when garlic is crushed. Allicin was discovered in 1944 by Chester J. Cavallito and coworkers. Thiosulfinates containing various combinations of the methyl, n-propyl, 1-propenyl, 2-propenyl, n-butyl, 1-butenyl and 2-butenyl groups are formed upon crushing different Allium as well as Brassica species.
Zeylanoxides are cyclic thiosulfinates containing the 1,2-dithiolane-1-oxide ring, isolated from the tropical weed Sphenoclea zeylanica. These heterocyclic thiosulfinates are chiral at carbon as well as at sulfur.
Crushing the roots of Petiveria alliacea affords the thiosulfinates S-(2-hydroxyethyl) (2-hydroxyethane)thiosulfinate, S-(2-hydroxyethyl) phenylmethanethiosulfinate, S-benzyl (2-hydroxyethane)thiosulfinate and S-benzyl phenylmethanethiosulfinate (petivericin; PhCH2S(O)SCH2Ph). Asparagusic acid S-oxide and brugierol are other natural 1,2-dithiolane-1-oxides occurring in Asparagus officinalis and Bruguiera conjugata, respectively.
Properties
Allicin, S-benzyl phenylmethanethiosulfinate, and related thiosulfinates show radical-trapping antioxidant activity associated with easy formation of sulfenic acids. The acyclic thiosulfinates from Allium and Brassica species possess antimicrobial, antiparasitic, antitumor and cysteine protease inhibitory activity, while the natural 1,2-dithiolane-1-oxides are growth inhibitors. The thiosulfinates from Petiveria also exhibit antimicrobial activity.
Thiosulfinates feature a S(IV) center linked to a S(II) center, the former being stereogenic. Conversion of simple disulfides to thiosulfinates results in a considerable weakening of the S–S bond from about 47.8 to 28.0 kcal mol−1 for the S-S bond in PhS(O)SPh and from about 63.2 to 39.3 kcal mol−1 for the S-S bond in MeS(O)SMe, with the consequence that most thiosulfinates are both unstable and quite reactive. For this reason the mixtures of thiosulfinates from Allium plants can best be separated by HPLC at room temperature rather than by gas chromatography (GC), although GC has been used with some low molecular weight thiosulfinates. Thiosulfinates can be distinguished from sulfoxides by infrared spectroscopy since they have a characteristic S=O band at about 1078 cm−1 compared to 1030–1060 cm−1 in sulfoxides.
Formation and reactions
Synthetic thiosulfinates were first reported in 1947 by Cavallito and coworkers by oxidation of the corresponding disulfides.
One example of a moderately stable thiosulfinate is the tert-butyl derivative, (CH3)3CS(O)SC(CH3)3. This thiosulfinate can be obtained in optical purity by catalytic asymmetric oxidation of di-tert-butyl disulfide with hydrogen peroxide. Upon heating, (CH3)3CS(O)SC(CH3)3 decomposes into tert-butanethiosulfoxylic acid (CH3)3CSSOH) as shown by trapping studies.
In a similar manner racemic methyl methanethiosulfinate (CH3S(O)SCH3) can be obtained by peracetic acid oxidation of dimethyl disulfide. Methyl methanethiosulfinate decomposes thermally giving methanesulfenic acid (CH3SOH), the simplest sulfenic acid, as well as thioformaldehyde (CH2=S). Methyl methanethiosulfinate can also disproportionate to a 1:1 mixture of dimethyl disulfide and methyl methanethiosulfonate (CH3SO2SCH3) and rearrange via a Pummerer rearrangement to CH3S(O)CH2SSCH3.
An unusual three-membered ring thiosulfinate (a dithiirane 1-oxide) has been prepared through rearrangement of a 1,3-dithietane. A related compound, 3-(9-triptycyl)dithiirane-1-oxide, was prepared by the reaction of (9-triptycyl)diazomethane and . The X-ray structure of the dithiirane-1-oxide reveals a significantly lengthened sulfur-sulfur bond (211.9(3)pm).
Thiosulfinates have also been invoked as intermediates in the oxidation of thiols to sulfonic acids.
References
Thiosulfinates
Sulfur oxyanions | Thiosulfinate | Chemistry | 1,351 |
31,329,101 | https://en.wikipedia.org/wiki/Polycrystalline%20silicon | Polycrystalline silicon, or multicrystalline silicon, also called polysilicon, poly-Si, or mc-Si, is a high purity, polycrystalline form of silicon, used as a raw material by the solar photovoltaic and electronics industry.
Polysilicon is produced from metallurgical grade silicon by a chemical purification process, called the Siemens process. This process involves distillation of volatile silicon compounds, and their decomposition into silicon at high temperatures. An emerging, alternative process of refinement uses a fluidized bed reactor. The photovoltaic industry also produces upgraded metallurgical-grade silicon (UMG-Si), using metallurgical instead of chemical purification processes. When produced for the electronics industry, polysilicon contains impurity levels of less than one part per billion (ppb), while polycrystalline solar grade silicon (SoG-Si) is generally less pure.
In the 2010s, production shifted toward China, with China-based companies accounting for seven of the top ten producers and around 90% of total worldwide production capacity of approximately 1,400,000 MT. German, US, and South Korean companies account for the remainder.
The polysilicon feedstock – large rods, usually broken into chunks of specific sizes and packaged in clean rooms before shipment – is directly cast into multicrystalline ingots or submitted to a recrystallization process to grow single crystal boules. The boules are then sliced into thin silicon wafers and used for the production of solar cells, integrated circuits and other semiconductor devices.
Polysilicon consists of small crystals, also known as crystallites, giving the material its typical metal flake effect. While polysilicon and multisilicon are often used as synonyms, multicrystalline usually refers to crystals larger than one millimetre. Multicrystalline solar cells are the most common type of solar cells in the fast-growing PV market and consume most of the polysilicon produced worldwide. About 5 tons of polysilicon is required to manufacture 1 megawatt (MW) of conventional solar modules. Polysilicon is distinct from monocrystalline silicon and amorphous silicon.
Vs monocrystalline silicon
In single-crystal silicon, also known as monocrystalline silicon, the crystalline framework is homogeneous, which can be recognized by an even external colouring. The entire sample is one single, continuous and unbroken crystal as its structure contains no grain boundaries. Large single crystals are rare in nature and can also be difficult to produce in the laboratory (see also recrystallisation). In contrast, in an amorphous structure the order in atomic positions is limited to short range.
Polycrystalline and paracrystalline phases are composed of a number of smaller crystals or crystallites. Polycrystalline silicon (or semi-crystalline silicon, polysilicon, poly-Si, or simply "poly") is a material consisting of multiple small silicon crystals. Polycrystalline cells can be recognized by a visible grain, a "metal flake effect". Semiconductor grade (also solar grade) polycrystalline silicon is converted to single-crystal silicon – meaning that the randomly associated crystallites of silicon in polycrystalline silicon are converted to a large single crystal. Single-crystal silicon is used to manufacture most Si-based microelectronic devices. Polycrystalline silicon can be as much as 99.9999% pure. Ultra-pure poly is used in the semiconductor industry, starting from poly rods that are two to three meters in length. In the microelectronics industry (semiconductor industry), poly is used at both the macro and micro scales. Single crystals are grown using the Czochralski, zone melting and Bridgman–Stockbarger methods.
Components
At the component level, polysilicon has long been used as the conducting gate material in MOSFET and CMOS processing technologies. For these technologies it is deposited using low-pressure chemical-vapour deposition (LPCVD) reactors at high temperatures and is usually heavily doped n-type or p-type.
More recently, intrinsic and doped polysilicon is being used in large-area electronics as the active and/or doped layers in thin-film transistors. Although it can be deposited by LPCVD, plasma-enhanced chemical vapour deposition (PECVD), or solid-phase crystallization of amorphous silicon in certain processing regimes, these processes still require relatively high temperatures of at least 300 °C. These temperatures make deposition of polysilicon possible for glass substrates but not for plastic substrates.
The deposition of polycrystalline silicon on plastic substrates is motivated by the desire to be able to manufacture digital displays on flexible screens. Therefore, a relatively new technique called laser crystallization has been devised to crystallize a precursor amorphous silicon (a-Si) material on a plastic substrate without melting or damaging the plastic. Short, high-intensity ultraviolet laser pulses are used to heat the deposited a-Si material to above the melting point of silicon, without melting the entire substrate.
The molten silicon will then crystallize as it cools. By precisely controlling the temperature gradients, researchers have been able to grow very large grains, of up to hundreds of micrometers in size in the extreme case, although grain sizes of 10 nanometers to 1 micrometer are also common. In order to create devices on polysilicon over large-areas, however, a crystal grain size smaller than the device feature size is needed for homogeneity of the devices. Another method to produce poly-Si at low temperatures is metal-induced crystallization where an amorphous-Si thin film can be crystallized at temperatures as low as 150 °C if annealed while in contact of another metal film such as aluminium, gold, or silver.
Polysilicon has many applications in VLSI manufacturing. One of its primary uses is as gate electrode material for MOS devices. A polysilicon gate's electrical conductivity may be increased by depositing a metal (such as tungsten) or a metal silicide (such as tungsten silicide) over the gate. Polysilicon may also be employed as a resistor, a conductor, or as an ohmic contact for shallow junctions, with the desired electrical conductivity attained by doping the polysilicon material.
One major difference between polysilicon and a-Si is that the mobility of the charge carriers of the polysilicon can be orders of magnitude larger and the material also shows greater stability under electric field and light-induced stress. This allows more complex, high-speed circuitry to be created on the glass substrate along with the a-Si devices, which are still needed for their low-leakage characteristics. When polysilicon and a-Si devices are used in the same process, this is called hybrid processing. A complete polysilicon active layer process is also used in some cases where a small pixel size is required, such as in projection displays.
Feedstock for PV industry
Polycrystalline silicon is the key feedstock in the crystalline silicon based photovoltaic industry and used for the production of conventional solar cells. For the first time, in 2006, over half of the world's supply of polysilicon was being used by PV manufacturers. The solar industry was severely hindered by a shortage in supply of polysilicon feedstock and was forced to idle about a quarter of its cell and module manufacturing capacity in 2007. Only twelve factories were known to produce solar-grade polysilicon in 2008; however, by 2013 the number increased to over 100 manufacturers. Monocrystalline silicon is higher priced and a more efficient semiconductor than polycrystalline as it has undergone additional recrystallization via the Czochralski method.
Deposition methods
Polysilicon deposition, or the process of depositing a layer of polycrystalline silicon on a semiconductor wafer, is achieved by the chemical decomposition of silane (SiH4) at high temperatures of 580 to 650 °C. This pyrolysis process releases hydrogen.
SiH4 (g) → Si (s) + 2 H2 (g)   (CVD at 500–800 °C)
Polysilicon layers can be deposited using 100% silane at a pressure of or with 20–30% silane (diluted in nitrogen) at the same total pressure. Both of these processes can deposit polysilicon on 10–200 wafers per run, at a rate of 10–20 nm/min and with thickness uniformities of ±5%. Critical process variables for polysilicon deposition include temperature, pressure, silane concentration, and dopant concentration. Wafer spacing and load size have been shown to have only minor effects on the deposition process. The rate of polysilicon deposition increases rapidly with temperature, since it follows Arrhenius behavior, that is deposition rate = A·exp(–qEa/kT) where q is electron charge and k is the Boltzmann constant. The activation energy (Ea) for polysilicon deposition is about 1.7 eV. Based on this equation, the rate of polysilicon deposition increases as the deposition temperature increases. There will be a minimum temperature, however, wherein the rate of deposition becomes faster than the rate at which unreacted silane arrives at the surface. Beyond this temperature, the deposition rate can no longer increase with temperature, since it is now being hampered by lack of silane from which the polysilicon will be generated. Such a reaction is then said to be "mass-transport-limited". When a polysilicon deposition process becomes mass-transport-limited, the reaction rate becomes dependent primarily on reactant concentration, reactor geometry, and gas flow.
When the rate at which polysilicon deposition occurs is slower than the rate at which unreacted silane arrives, then it is said to be surface-reaction-limited. A deposition process that is surface-reaction-limited is primarily dependent on reactant concentration and reaction temperature. Deposition processes must be surface-reaction-limited because they result in excellent thickness uniformity and step coverage. A plot of the logarithm of the deposition rate against the reciprocal of the absolute temperature in the surface-reaction-limited region results in a straight line whose slope is equal to –qEa/k.
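The Arrhenius relation above lends itself to a quick numerical illustration. The following Python sketch is illustrative only: the pre-exponential factor is an arbitrary assumed value, and the 1.7 eV activation energy is the figure quoted in the text. It shows the strong temperature dependence of the surface-reaction-limited rate and how the slope of ln(rate) versus 1/T recovers –qEa/k.

```python
import numpy as np

A = 1.0e12          # assumed pre-exponential factor (arbitrary units)
E_a = 1.7           # activation energy in eV, as quoted in the text
k_B = 8.617e-5      # Boltzmann constant in eV/K

def deposition_rate(T_kelvin):
    """Surface-reaction-limited rate following Arrhenius behavior."""
    return A * np.exp(-E_a / (k_B * T_kelvin))

# Practical LPCVD window of roughly 575-650 degrees C.
T = np.linspace(575 + 273.15, 650 + 273.15, 50)
rate = deposition_rate(T)

# Slope of ln(rate) vs 1/T equals -E_a/k_B, i.e. -qEa/k in the text's notation.
slope = np.polyfit(1.0 / T, np.log(rate), 1)[0]
print(f"Rate increase from 575 to 650 C: x{rate[-1] / rate[0]:.0f}")
print(f"Recovered activation energy: {-slope * k_B:.2f} eV")
```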
At reduced pressure levels for VLSI manufacturing, polysilicon deposition rate below 575 °C is too slow to be practical. Above 650 °C, poor deposition uniformity and excessive roughness will be encountered due to unwanted gas-phase reactions and silane depletion. Pressure can be varied inside a low-pressure reactor either by changing the pumping speed or changing the inlet gas flow into the reactor. If the inlet gas is composed of both silane and nitrogen, the inlet gas flow, and hence the reactor pressure, may be varied either by changing the nitrogen flow at constant silane flow, or changing both the nitrogen and silane flow to change the total gas flow while keeping the gas ratio constant. Recent investigations have shown that e-beam evaporation, followed by SPC (if needed) can be a cost-effective and faster alternative for producing solar-grade poly-Si thin films. Modules produced by such method are shown to have a photovoltaic efficiency of ~6%.
Polysilicon doping, if needed, is also done during the deposition process, usually by adding phosphine, arsine, or diborane. Adding phosphine or arsine results in slower deposition, while adding diborane increases the deposition rate. The deposition thickness uniformity usually degrades when dopants are added during deposition.
Siemens process
The Siemens process is the most commonly used method of polysilicon production, especially for electronics, with close to 75% of the world's production using this process as of 2005.
The process converts metallurgical-grade Si, of approximately 98% purity, to SiHCl3 and then to silicon in a reactor, thus removing transition metal and dopant impurities. The process is relatively expensive and slow.
It is a type of chemical vapor deposition process.
Upgraded metallurgical-grade silicon
Upgraded metallurgical-grade (UMG) silicon (also known as UMG-Si) for solar cells is being produced as a low-cost alternative to polysilicon created by the Siemens process. UMG-Si greatly reduces impurities in a variety of ways that require less equipment and energy than the Siemens process. It is about 99% pure, which is three or more orders of magnitude less pure than polysilicon, and about 10 times less expensive ($1.70 to $3.20 per kg from 2005 to 2008 compared to $40 to $400 per kg for polysilicon). It has the potential to provide nearly-as-good solar cell efficiency at 1/5 the capital expenditure, half the energy requirements, and less than $15/kg.
In 2008 several companies were touting the potential of UMG-Si, but in 2010 the credit crisis greatly lowered the cost of polysilicon and several UMG-Si producers put plans on hold. The Siemens process will remain the dominant form of production for years to come, owing to increasingly efficient implementations of that process. GT Solar claims a new Siemens process can produce at $27/kg and may reach $20/kg in 5 years. GCL-Poly expects production costs to be $20/kg by end of 2011. Elkem Solar estimates their UMG costs to be $25/kg, with a capacity of 6,000 tonnes by the end of 2010. Calisolar expects UMG technology to produce at $12/kg in 5 years with boron at 0.3 ppm and phosphorus at 0.6 ppm. At $50/kg and 7.5 g/W, module manufacturers spend $0.37/W for the polysilicon. For comparison, if a CdTe manufacturer pays spot price for tellurium ($420/kg in April 2010) and has a 3 μm thickness, their cost would be 10 times less, $0.037/Watt. At 0.1 g/W and $31/ozt for silver, polysilicon solar producers spend $0.10/W on silver.
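The per-watt material costs quoted above follow from a simple unit conversion. A minimal Python sketch (the function name is an assumption for illustration; the figures are those given in the text) reproduces the polysilicon and silver numbers:

```python
def material_cost_per_watt(usage_g_per_watt, price_usd_per_kg):
    """Material cost in $/W from specific usage (g/W) and material price ($/kg)."""
    return usage_g_per_watt / 1000.0 * price_usd_per_kg

TROY_OUNCE_KG = 0.0311035  # one troy ounce in kilograms

print(material_cost_per_watt(7.5, 50))                  # polysilicon at $50/kg: ~0.375 $/W
print(material_cost_per_watt(0.1, 31 / TROY_OUNCE_KG))  # silver at $31/ozt:     ~0.10  $/W
```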
Q-Cells, Canadian Solar, and Calisolar have used Timminco UMG. Timminco is able to produce UMG-Si with 0.5 ppm boron for $21/kg but were sued by shareholders because they had expected $10/kg. RSI and Dow Corning have also been in litigation over UMG-Si technology.
Potential applications
Currently, polysilicon is commonly used for the conducting gate materials in semiconductor devices such as MOSFETs; however, it has potential for large-scale photovoltaic devices. The abundance, stability, and low toxicity of silicon, combined with the low cost of polysilicon relative to single crystals makes this variety of material attractive for photovoltaic production. Grain size has been shown to have an effect on the efficiency of polycrystalline solar cells. Solar cell efficiency increases with grain size. This effect is due to reduced recombination in the solar cell. Recombination, which is a limiting factor for current in a solar cell, occurs more prevalently at grain boundaries, see figure 1.
The resistivity, mobility, and free-carrier concentration in monocrystalline silicon vary with doping concentration of the single crystal silicon. Whereas the doping of polycrystalline silicon does have an effect on the resistivity, mobility, and free-carrier concentration, these properties strongly depend on the polycrystalline grain size, which is a physical parameter that the material scientist can manipulate. Through the methods of crystallization to form polycrystalline silicon, an engineer can control the size of the polycrystalline grains which will vary the physical properties of the material.
Novel ideas
The use of polycrystalline silicon in the production of solar cells requires less material and therefore provides higher profits and increased manufacturing throughput. Polycrystalline silicon does not need to be deposited on a silicon wafer to form a solar cell, rather it can be deposited on other, cheaper materials, thus reducing the cost. Not requiring a silicon wafer alleviates the silicon shortages occasionally faced by the microelectronics industry. An example of not using a silicon wafer is crystalline silicon on glass (CSG) materials.
A primary concern in the photovoltaics industry is cell efficiency. However, sufficient cost savings from cell manufacturing can be suitable to offset reduced efficiency in the field, such as the use of larger solar cell arrays compared with more compact/higher efficiency designs. Designs such as CSG are attractive because of a low cost of production even with reduced efficiency. Higher efficiency devices yield modules that occupy less space and are more compact; however, the 5–10% efficiency of typical CSG devices still makes them attractive for installation in large central-service stations, such as a power station. The issue of efficiency versus cost is a value decision of whether one requires an "energy dense" solar cell or sufficient area is available for the installation of less expensive alternatives. For instance, a solar cell used for power generation in a remote location might require a more highly efficient solar cell than one used for low-power applications, such as solar accent lighting or pocket calculators, or near established power grids.
Manufacturers
Capacity
The polysilicon manufacturing market is growing rapidly. According to DigiTimes, in July 2011, the total polysilicon production in 2010 was 209,000 tons. First-tier suppliers account for 64% of the market while China-based polysilicon firms have 30% of market share. The total production is likely to increase 37.4% to 281,000 tons by end of 2011. For 2012, EETimes Asia predicts 328,000 tons production with only 196,000 tons of demand, with spot prices expected to fall 56%. While good for renewable energy prospects, the subsequent drop in price could be brutal for manufacturers. As of late 2012, SolarIndustryMag reports a capacity of 385,000 tons will be reached by yearend 2012.
As of 2010, as established producers (mentioned below) expand their capacities, additional newcomers – many from Asia – are moving into the market. Even long-time players in the field have recently had difficulties expanding plant production. It is yet unclear which companies will be able to produce at costs low enough to be profitable after the steep drop in spot-prices of the last months.
Leading producers
Wacker projected its total hyperpure-polysilicon production capacity to increase to 67,000 metric tons by 2014, owing to its new polysilicon production facility in Cleveland, Tennessee (US) with an annual capacity of 15,000 metric tons.
Other manufacturers
LDK Solar (2010: 15 kt) China.
Tokuyama Corporation (2009: 8 kt, Jan 2013: 11 kt, 2015: 31 kt) Japan.
MEMC/SunEdison (2010: 8 kt, Jan 2013: 18 kt) USA.
Hankook Silicon (2011: 3.2 kt, 2013: 14.5 kt)
Nitol Solar, (2011: 5 kt, Jan 2011), Russia
Mitsubishi Polysilicon (2008: 4.3 kt)
Osaka Titanium Technologies (2008: 4.2 kt)
Daqo New Energy, (2011: 4.3 kt, under construction 3 kt), China
Beijing Lier High-temperature Materials Co. (2012: 5 kt)
Qatar Solar Technologies, at Ras Laffan, announced an 8 t facility for start in 2013.
Price
Prices of polysilicon are often divided into two categories, contract and spot prices, and higher purity commands higher prices. In times of booming installation, polysilicon prices rally: not only do spot prices surpass contract prices in the market, it also becomes hard to acquire enough polysilicon. Buyers accept down payments and long-term agreements to acquire a large enough volume of polysilicon. Conversely, spot prices fall below contract prices once solar PV installation is in a downtrend. In late 2010, booming installation pushed up the spot prices of polysilicon. In the first half of 2011, prices of polysilicon held strong owing to the FIT policies of Italy. The solar PV price survey and market research firm PVinsights reported that the prices of polysilicon might be dragged down by lack of installation in the second half of 2011. As recently as 2008, prices spiked to over $400/kg from levels around $200/kg, before falling to $15/kg in 2013.
Dumping
The Chinese government accused United States and South Korean manufacturers of predatory pricing or "dumping". As a consequence, in 2013 it imposed import tariffs of as much as 57 percent on polysilicon shipped from these two countries in order to stop the product from being sold below cost.
Waste
Due to the rapid growth in manufacturing in China and the lack of regulatory controls, there have been reports of the dumping of waste silicon tetrachloride. Normally the waste silicon tetrachloride is recycled, but this adds to the cost of manufacture as the material must be reheated to a high temperature.
See also
Amorphous silicon
Cadmium telluride
Metallurgical grade silicon
Nanocrystalline silicon
Photovoltaic module
Photovoltaics
Polycrystal
Solar cell
Thin-film solar cell
Wafer (electronics)
References
External links
Silicon, Polycrystalline
Crystals
Silicon solar cells
Allotropes of silicon | Polycrystalline silicon | Chemistry,Materials_science | 4,509 |
10,940,032 | https://en.wikipedia.org/wiki/Hydrogen%20anion | The hydrogen anion, H−, is a negative ion of hydrogen, that is, a hydrogen atom that has captured an extra electron. The hydrogen anion is an important constituent of the atmosphere of stars, such as the Sun. In chemistry, this ion is called hydride. The ion has two electrons bound by the electromagnetic force to a nucleus containing one proton.
The binding energy of H− equals the binding energy of an extra electron to a hydrogen atom, called the electron affinity of hydrogen. It is measured to be about 0.754 eV, or roughly 72.8 kJ/mol (see Electron affinity (data page)). The total ground state energy thus becomes about −14.35 eV.
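As a minimal worked relation (assuming the standard hydrogen ground-state energy of about −13.6 eV), the total energy of the anion is simply the neutral atom's energy lowered by the electron affinity:

```latex
E(\mathrm{H}^-) \;=\; E(\mathrm{H}) - \mathrm{EA}(\mathrm{H})
\;\approx\; -13.60\ \mathrm{eV} \;-\; 0.75\ \mathrm{eV}
\;\approx\; -14.35\ \mathrm{eV}.
```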
Occurrence
The hydrogen anion is the dominant bound-free opacity source at visible and near-infrared wavelengths in the atmospheres of stars like the Sun and cooler stars; its importance was first noted in the 1930s. The ion absorbs photons with energies in the range 0.75–4.0 eV, which extends from the infrared into the visible spectrum. Most of the electrons in these negative ions come from the ionization of metals with low first ionization potentials, including the alkali metals and alkaline earths. The process which ejects the electron from the ion is properly called photodetachment rather than photoionization, because the result is a neutral atom (rather than an ion) and a free electron.
H− also occurs in the Earth's ionosphere and can be produced in particle accelerators.
Its existence was first proven theoretically by Hans Bethe in 1929. H− is unusual because, in its free form, it has no bound excited states, as was finally proven in 1977.
In chemistry, hydrogen has the formal oxidation state −1 in the hydride anion.
The term hydride is probably most often used to describe compounds of hydrogen with other elements in which the hydrogen is in the formal −1 oxidation state. In most such compounds the bonding between the hydrogen and its nearest neighbor is covalent. An example of a hydride is the borohydride anion ().
See also
Hydron (hydrogen cation)
Electride, another very simple anion
Hydrogen ion
References
Hydrogen physics
Astrophysics
Anions | Hydrogen anion | Physics,Chemistry,Astronomy | 445 |
63,169,268 | https://en.wikipedia.org/wiki/Modes%20of%20variation | In statistics, modes of variation are a continuously indexed set of vectors or functions that are centered at a mean and are used to depict the variation in a population or sample. Typically, variation patterns in the data can be decomposed in descending order of eigenvalues with the directions represented by the corresponding eigenvectors or eigenfunctions. Modes of variation provide a visualization of this decomposition and an efficient description of variation around the mean. Both in principal component analysis (PCA) and in functional principal component analysis (FPCA), modes of variation play an important role in visualizing and describing the variation in the data contributed by each eigencomponent. In real-world applications, the eigencomponents and associated modes of variation aid in interpreting complex data, especially in exploratory data analysis (EDA).
Formulation
Modes of variation are a natural extension of PCA and FPCA.
Modes of variation in PCA
If a random vector $\mathbf{X} \in \mathbb{R}^p$ has the mean vector $\boldsymbol{\mu}$ and the covariance matrix $\boldsymbol{\Sigma}$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and corresponding orthonormal eigenvectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_p$, then by eigendecomposition of a real symmetric matrix, the covariance matrix can be decomposed as
$$\boldsymbol{\Sigma} = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^{\mathsf T},$$
where $\mathbf{Q}$ is an orthogonal matrix whose columns are the eigenvectors of $\boldsymbol{\Sigma}$, and $\boldsymbol{\Lambda}$ is a diagonal matrix whose entries are the eigenvalues of $\boldsymbol{\Sigma}$. By the Karhunen–Loève expansion for random vectors, one can express the centered random vector in the eigenbasis
$$\mathbf{X} - \boldsymbol{\mu} = \sum_{k=1}^{p} \xi_k \mathbf{e}_k,$$
where $\xi_k = \mathbf{e}_k^{\mathsf T}(\mathbf{X} - \boldsymbol{\mu})$ is the principal component associated with the $k$-th eigenvector $\mathbf{e}_k$, with the properties
$$\operatorname{E}(\xi_k) = 0, \quad \operatorname{Var}(\xi_k) = \lambda_k, \quad \text{and} \quad \operatorname{Cov}(\xi_k, \xi_l) = 0 \text{ for } k \neq l.$$
Then the $k$-th mode of variation of $\mathbf{X}$ is the set of vectors, indexed by $\alpha$,
$$\mathbf{m}_{k, \alpha} = \boldsymbol{\mu} + \alpha \sqrt{\lambda_k}\, \mathbf{e}_k, \qquad \alpha \in [-A, A],$$
where $A$ is typically selected as $2$ or $3$.
Modes of variation in FPCA
For a square-integrable random function $X(t)$, $t \in \mathcal{T}$, where typically $\mathcal{T} = [0, 1]$ and $\mathcal{T}$ is an interval, denote the mean function by $\mu(t) = \operatorname{E}(X(t))$, and the covariance function by
$$G(s, t) = \operatorname{Cov}(X(s), X(t)) = \sum_{k=1}^{\infty} \lambda_k \varphi_k(s) \varphi_k(t),$$
where $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$ are the eigenvalues and $\varphi_1, \varphi_2, \ldots$ are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator
$$G: L^2(\mathcal{T}) \to L^2(\mathcal{T}), \quad G(f) = \int_{\mathcal{T}} G(s, t) f(s)\, ds.$$
By the Karhunen–Loève theorem, one can express the centered function in the eigenbasis,
$$X(t) - \mu(t) = \sum_{k=1}^{\infty} \xi_k \varphi_k(t),$$
where
$$\xi_k = \int_{\mathcal{T}} \big(X(t) - \mu(t)\big) \varphi_k(t)\, dt$$
is the $k$-th principal component with the properties
$$\operatorname{E}(\xi_k) = 0, \quad \operatorname{Var}(\xi_k) = \lambda_k, \quad \text{and} \quad \operatorname{Cov}(\xi_k, \xi_l) = 0 \text{ for } k \neq l.$$
Then the $k$-th mode of variation of $X(t)$ is the set of functions, indexed by $\alpha$,
$$m_{k, \alpha}(t) = \mu(t) + \alpha \sqrt{\lambda_k}\, \varphi_k(t), \qquad t \in \mathcal{T},\ \alpha \in [-A, A],$$
that are viewed simultaneously over the range of $\alpha$, usually for $A = 2$ or $3$.
Estimation
The formulation above is derived from properties of the population. Estimation is needed in real-world applications. The key idea is to estimate mean and covariance.
Modes of variation in PCA
Suppose the data $\mathbf{X}_1, \ldots, \mathbf{X}_n$ represent independent drawings from some $p$-dimensional population with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. These data yield the sample mean vector $\bar{\mathbf{X}}$, and the sample covariance matrix $\hat{\boldsymbol{\Sigma}}$ with eigenvalue–eigenvector pairs $(\hat{\lambda}_k, \hat{\mathbf{e}}_k)$. Then the $k$-th mode of variation of $\mathbf{X}$ can be estimated by
$$\hat{\mathbf{m}}_{k, \alpha} = \bar{\mathbf{X}} + \alpha \sqrt{\hat{\lambda}_k}\, \hat{\mathbf{e}}_k, \qquad \alpha \in [-A, A].$$
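The sample-based construction above translates directly into code. The following Python sketch is only illustrative (the function name and the synthetic data are assumptions, not part of the article): it computes the sample mean, the eigendecomposition of the sample covariance matrix, and the k-th estimated mode of variation on a grid of values of the index.

```python
import numpy as np

def modes_of_variation(X, k, alphas=(-2, -1, 0, 1, 2)):
    """Estimate the k-th mode of variation (k = 1, 2, ...) from an (n x p) data matrix X."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]           # re-sort in descending order
    lam, e = eigvals[order[k - 1]], eigvecs[:, order[k - 1]]
    # Mode of variation: sample mean + alpha * sqrt(lambda_k) * e_k for each alpha.
    return np.array([mean + a * np.sqrt(lam) * e for a in alphas])

# Small synthetic example: 200 draws from a 5-dimensional Gaussian with unequal variances.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
first_mode = modes_of_variation(X, k=1)
print(first_mode.shape)   # (5, 5): one p-dimensional vector per value of alpha
```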
Modes of variation in FPCA
Consider $n$ realizations $X_1(t), \ldots, X_n(t)$ of a square-integrable random function $X(t)$ with the mean function $\mu(t)$ and the covariance function $G(s, t)$. Functional principal component analysis provides methods for the estimation of $\mu(t)$ and $G(s, t)$ in detail, often involving pointwise estimation and interpolation. Substituting estimates $\hat{\mu}(t)$, $\hat{\lambda}_k$, and $\hat{\varphi}_k(t)$ for the unknown quantities, the $k$-th mode of variation of $X(t)$ can be estimated by
$$\hat{m}_{k, \alpha}(t) = \hat{\mu}(t) + \alpha \sqrt{\hat{\lambda}_k}\, \hat{\varphi}_k(t), \qquad t \in \mathcal{T},\ \alpha \in [-A, A].$$
Applications
Modes of variation are useful to visualize and describe the variation patterns in the data sorted by the eigenvalues. In real-world applications, modes of variation associated with eigencomponents help interpret complex data, such as the evolution of functional traits and other infinite-dimensional data. To illustrate how modes of variation work in practice, two examples are shown in the graphs to the right, which display the first two modes of variation. The solid curve represents the sample mean function. The dashed, dot-dashed, and dotted curves correspond to modes of variation indexed by different values of α.
The first graph displays the first two modes of variation of female mortality data from 41 countries in 2003. The object of interest is log hazard function between ages 0 and 100 years. The first mode of variation suggests that the variation of female mortality is smaller for ages around 0 or 100, and larger for ages around 25. An appropriate and intuitive interpretation is that mortality around 25 is driven by accidental death, while around 0 or 100, mortality is related to congenital disease or natural death.
Compared to female mortality data, modes of variation of male mortality data shows higher mortality after around age 20, possibly related to the fact that life expectancy for women is higher than that for men.
References
Dimension reduction
Functional analysis
Matrix decompositions | Modes of variation | Mathematics | 914 |
27,712,079 | https://en.wikipedia.org/wiki/Friedrich-Karl%20Thielemann | Friedrich-Karl "Friedel" Thielemann (born 17 April 1951 in Mülheim an der Ruhr) is a German-Swiss theoretical astrophysicist.
Thielemann studied at the TH Darmstadt, where he acquired his Diplom in 1976. In 1980 he earned his PhD in nuclear astrophysics under Wolfgang Hillebrandt (in Garching) and E. R. Hilf. As a post-doc he worked with David Schramm and William David Arnett at the University of Chicago, with William A. Fowler at Caltech, with Hans Klapdor at the Max-Planck-Institut für Kernphysik, at the Max-Planck-Institut für Astrophysik in Garching (with Hillebrandt), and at the University of Illinois (with James W. Truran). Starting in 1986 he was Assistant Professor and from 1991 Associate Professor at the Center for Astrophysics Harvard & Smithsonian and at the Harvard Observatory of Harvard University. In 1994 he became a professor at the University of Basel. In 1995 he was a guest professor at the University of Turin and from 1997 to 2001 a guest scientist at Oak Ridge National Laboratory.
Besides theoretical and computer-simulated astrophysics and nuclear astrophysics (including important nuclear reactions and properties of unstable nuclei, and equations of state of quark matter and nuclear matter at high density), he worked on the modeling of astrophysical plasmas for important subatomic processes. He investigated, among other things, supernovae, X-ray bursts, gamma-ray bursts, mergers of neutron stars, the origin of heavy elements, and the evolution of chemical elements in galaxies.
In 1979 he received the Otto Hahn Medal. In 1998 he was elected a fellow of the American Physical Society for "his work at the interface of nuclear physics and astrophysics and the applications to stellar nucleosynthesis, Type Ia and Type II Supernovae, as well as the r- and rp-process." In 2008 he received the Hans Bethe Prize "for his many outstanding theoretical contributions to the understanding of nucleosynthesis, stellar evolution and stellar explosions." Since 2004 he has been a member of the Swiss Research Council.
References
External links
20th-century German physicists
Swiss astrophysicists
1951 births
Living people
Fellows of the American Physical Society
Theoretical physicists
21st-century German physicists | Friedrich-Karl Thielemann | Physics | 481 |
72,412,764 | https://en.wikipedia.org/wiki/Christopher%20O.%20Barnes | Christopher O. Barnes (born September 23, 1986) is an American chemist who is an assistant professor at Stanford University. During the COVID-19 pandemic, he studied the structure of the coronavirus spike protein and the antibodies that attack them. He was named one of ten "Scientists to watch" by Science News in 2022.
Early life and education
Barnes grew up in Huntersville, North Carolina. He attended North Mecklenburg High School. As a teenager, he competed in the science olympiad. He was an undergraduate at the University of North Carolina at Chapel Hill, where he was involved with the American football team. During his senior year, he was named the top student athlete. Although he had initially applied to study medicine, he changed his mind after being introduced to biophysics by Gary J. Pielak. He was a bachelor's student in psychology and moved to chemistry for his graduate studies. In 2010 he moved to the University of Pittsburgh, where he started researching molecular pharmacology. He studied eukaryotic transcription using crystallographic techniques and electron microscopy. After earning his doctorate, Barnes started investigating the structure of HIV and the antibodies that attack it. He sought to understand how the virus contacts and enters cells in order to better inform the design of therapeutics.
Research and career
Barnes was a postdoctoral researcher at the California Institute of Technology when the COVID-19 pandemic started. He was working alongside Pamela J. Bjorkman, who challenged him to uncover the structure of the immune proteins that attack SARS-CoV-2. Barnes used high-resolution imaging to better understand coronavirus spike proteins and the antibodies that attack them. Using cryo-electron microscopy, he identified several antibodies that attach to the receptor-binding domain of the coronavirus spike protein. He defined an antibody classification system based on where on the receptor-binding domain an antibody attaches.
Barnes continued to work on antibody structure when he established his own laboratory at Stanford University. These antibodies target the N-terminal domain. He is interested in identifying antibodies that can attack all coronaviruses.
In September 2022 Science News named Barnes one of ten "Scientists to watch".
Awards and honors
2017 Howard Hughes Medical Institute Hanna H. Gray Fellow
2022 Rita Allen Foundation Scholar
Selected publications
Personal life
Barnes is married to scientist Naima G. Sharaf and has two sons.
References
1986 births
Living people
People from Huntersville, North Carolina
University of North Carolina at Chapel Hill alumni
Stanford University Department of Chemistry faculty
21st-century American chemists
Structural biologists | Christopher O. Barnes | Chemistry | 515 |
525,887 | https://en.wikipedia.org/wiki/Density%20of%20states | In condensed matter physics, the density of states (DOS) of a system describes the number of allowed modes or states per unit energy range. The density of states is defined as $D(E) = N(E)/V$, where $N(E)\,\delta E$ is the number of states in the system of volume $V$ whose energies lie in the range from $E$ to $E + \delta E$. It is mathematically represented as a distribution by a probability density function, and it is generally an average over the space and time domains of the various states occupied by the system. The density of states is directly related to the dispersion relations of the properties of the system. High DOS at a specific energy level means that many states are available for occupation.
Generally, the density of states of matter is continuous. In isolated systems however, such as atoms or molecules in the gas phase, the density distribution is discrete, like a spectral density. Local variations, most often due to distortions of the original system, are often referred to as local densities of states (LDOSs).
Introduction
In quantum mechanical systems, waves, or wave-like particles, can occupy modes or states with wavelengths and propagation directions dictated by the system. For example, in some systems, the interatomic spacing and the atomic charge of a material might allow only electrons of certain wavelengths to exist. In other systems, the crystalline structure of a material might allow waves to propagate in one direction, while suppressing wave propagation in another direction. Often, only specific states are permitted. Thus, it can happen that many states are available for occupation at a specific energy level, while no states are available at other energy levels.
Looking at the density of states of electrons at the band edge between the valence and conduction bands in a semiconductor, for an electron in the conduction band, an increase of the electron energy makes more states available for occupation. Alternatively, the density of states is discontinuous for an interval of energy, which means that no states are available for electrons to occupy within the band gap of the material. This condition also means that an electron at the conduction band edge must lose at least the band gap energy of the material in order to transition to another state in the valence band.
This determines if the material is an insulator or a metal in the dimension of the propagation. The result of the number of states in a band is also useful for predicting the conduction properties. For example, in a one dimensional crystalline structure an odd number of electrons per atom results in a half-filled top band; there are free electrons at the Fermi level resulting in a metal. On the other hand, an even number of electrons exactly fills a whole number of bands, leaving the rest empty. The Fermi level then lies in a band gap between the highest occupied state and the lowest empty state, and the material will be an insulator or semiconductor.
Depending on the quantum mechanical system, the density of states can be calculated for electrons, photons, or phonons, and can be given as a function of either energy or the wave vector . To convert between the DOS as a function of the energy and the DOS as a function of the wave vector, the system-specific energy dispersion relation between and must be known.
In general, the topological properties of the system such as the band structure, have a major impact on the properties of the density of states. The most well-known systems, like neutron matter in neutron stars and free electron gases in metals (examples of degenerate matter and a Fermi gas), have a 3-dimensional Euclidean topology. Less familiar systems, like two-dimensional electron gases (2DEG) in graphite layers and the quantum Hall effect system in MOSFET type devices, have a 2-dimensional Euclidean topology. Even less familiar are carbon nanotubes, the quantum wire and Luttinger liquid with their 1-dimensional topologies. Systems with 1D and 2D topologies are likely to become more common, assuming developments in nanotechnology and materials science proceed.
Definition
The density of states related to volume V and N countable energy levels is defined as:
D(E) = (1/V) Σ_{i=1}^{N} δ(E − E_i).
Because the smallest allowed change of momentum (wave vector) for a particle in a box of dimension d and length L is Δk = 2π/L, the volume-related density of states for continuous energy levels is obtained in the limit L → ∞ as
D(E) = ∫ d^d k / (2π)^d δ(E − E(k)).
Here, d is the spatial dimension of the considered system and k the wave vector.
For isotropic one-dimensional systems with parabolic energy dispersion, the density of states is proportional to the inverse square root of the energy. In two dimensions the density of states is a constant, while in three dimensions it becomes proportional to the square root of the energy.
Equivalently, the density of states can also be understood as the derivative of the microcanonical partition function (that is, the total number of states with energy less than ) with respect to the energy:
The number of states with energy (degree of degeneracy) is given by:
where the last equality only applies when the mean value theorem for integrals is valid.
Symmetry
There is a large variety of systems and types of states for which DOS calculations can be done.
Some condensed matter systems possess a structural symmetry on the microscopic scale which can be exploited to simplify calculation of their densities of states. In spherically symmetric systems, the integrals of functions are one-dimensional because all variables in the calculation depend only on the radial parameter of the dispersion relation. Fluids, glasses and amorphous solids are examples of a symmetric system whose dispersion relations have a rotational symmetry.
Measurements on powders or polycrystalline samples require evaluation and calculation of functions and integrals over the whole domain, most often a Brillouin zone, of the dispersion relations of the system of interest. Sometimes the symmetry of the system is high, which causes the shape of the functions describing the dispersion relations of the system to appear many times over the whole domain of the dispersion relation. In such cases the effort to calculate the DOS can be reduced by a great amount when the calculation is limited to a reduced zone or fundamental domain. The Brillouin zone of the face-centered cubic lattice (FCC) in the figure on the right has the 48-fold symmetry of the point group Oh with full octahedral symmetry. This configuration means that the integration over the whole domain of the Brillouin zone can be reduced to a 48-th part of the whole Brillouin zone. As a crystal structure periodic table shows, there are many elements with a FCC crystal structure, like diamond, silicon and platinum and their Brillouin zones and dispersion relations have this 48-fold symmetry. Two other familiar crystal structures are the body-centered cubic lattice (BCC) and hexagonal close-packed structures (HCP) with cubic and hexagonal lattices, respectively. The BCC structure has the 24-fold pyritohedral symmetry of the point group Th. The HCP structure has the 12-fold prismatic dihedral symmetry of the point group D3h. A complete list of symmetry properties of a point group can be found in point group character tables.
In general it is easier to calculate a DOS when the symmetry of the system is higher and the number of topological dimensions of the dispersion relation is lower. The DOS of dispersion relations with rotational symmetry can often be calculated analytically. This result is fortunate, since many materials of practical interest, such as steel and silicon, have high symmetry.
In anisotropic condensed matter systems such as a single crystal of a compound, the density of states could be different in one crystallographic direction than in another. This causes the anisotropic density of states to be more difficult to visualize, and might require methods such as calculating the DOS for particular points or directions only, or calculating the projected density of states (PDOS) to a particular crystal orientation.
k-space topologies
The density of states is dependent upon the dimensional limits of the object itself. In a system described by three orthogonal parameters (3 dimensions), the units of DOS are Energy⁻¹·Volume⁻¹; in a two dimensional system, the units of DOS are Energy⁻¹·Area⁻¹; in a one dimensional system, the units of DOS are Energy⁻¹·Length⁻¹. The referenced volume is the volume of k-space; the space enclosed by the constant energy surface of the system derived through a dispersion relation that relates E to k. An example of a 3-dimensional k-space is given in Fig. 1. It can be seen that the dimensionality of the system confines the momentum of particles inside the system.
Density of wave vector states (sphere)
The calculation for DOS starts by counting the allowed states at a certain that are contained within inside the volume of the system. This procedure is done by differentiating the whole k-space volume in n-dimensions at an arbitrary , with respect to . The volume, area or length in 3, 2 or 1-dimensional spherical -spaces are expressed by
for a -dimensional -space with the topologically determined constants
for linear, disk and spherical symmetrical shaped functions in 1, 2 and 3-dimensional Euclidean -spaces respectively.
According to this scheme, the density of wave vector states is, through differentiating with respect to , expressed by
The 1, 2 and 3-dimensional density of wave vector states for a line, disk, or sphere are explicitly written as
One state is large enough to contain particles having wavelength λ. The wavelength is related to k through the relationship k = 2π/λ.
In a quantum system the length of λ will depend on a characteristic spacing of the system L that is confining the particles. Finally the density of states N is multiplied by a factor , where is a constant degeneracy factor that accounts for internal degrees of freedom due to such physical phenomena as spin or polarization. If no such phenomenon is present then . Vk is the volume in k-space whose wavevectors are smaller than the smallest possible wavevectors decided by the characteristic spacing of the system.
Density of energy states
To finish the calculation for DOS find the number of states per unit sample volume at an energy E inside an interval [E, E + dE]. The general form of DOS of a system is given as
The scheme sketched so far only applies to monotonically rising and spherically symmetric dispersion relations. In general the dispersion relation is not spherically symmetric and in many cases it isn't continuously rising either. To express D as a function of E the inverse of the dispersion relation has to be substituted into the expression of as a function of k to get the expression of as a function of the energy. If the dispersion relation is not spherically symmetric or continuously rising and can't be inverted easily then in most cases the DOS has to be calculated numerically. More detailed derivations are available.
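Where the dispersion relation cannot be inverted analytically, a common numerical approach is to sample E(k) on a dense k-grid and histogram the resulting energies. The following is a minimal sketch of that idea (not code from any particular package); the parabolic test band, grid size and bin count are arbitrary illustrative assumptions, and the result is compared against the known free-electron answer as a sanity check.

# A minimal sketch (illustrative assumptions, not from the article): estimating a
# DOS numerically by histogramming band energies sampled on a uniform k-grid.
# Units are chosen so that hbar = m = 1; the parabolic band E = k^2/2 is a test case.
import numpy as np

def dos_histogram(energy_of_k, k_max, n_k=120, n_bins=60):
    """Histogram estimate of D(E) per unit volume (per spin) for a 3D band E(kx, ky, kz)."""
    k = np.linspace(-k_max, k_max, n_k)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    energies = energy_of_k(kx, ky, kz).ravel()
    dk = k[1] - k[0]
    weight = (dk / (2 * np.pi)) ** 3          # k-space volume per grid point divided by (2*pi)^3
    counts, edges = np.histogram(energies, bins=n_bins)
    bin_width = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts * weight / bin_width   # states per unit energy per unit volume

if __name__ == "__main__":
    parabolic = lambda kx, ky, kz: 0.5 * (kx**2 + ky**2 + kz**2)
    E, D = dos_histogram(parabolic, k_max=2.0)
    # For E = k^2/2 the analytic per-spin result is D(E) = sqrt(2E)/(2*pi^2).
    analytic = np.sqrt(2 * np.maximum(E, 0)) / (2 * np.pi ** 2)
    print(np.round(D[:5], 4), np.round(analytic[:5], 4))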
Dispersion relations
The dispersion relation for electrons in a solid is given by the electronic band structure.
The kinetic energy of a particle depends on the magnitude and direction of the wave vector k, the properties of the particle and the environment in which the particle is moving. For example, the kinetic energy of an electron in a Fermi gas is given by E(k) = ħ²k²/(2m),
where m is the electron mass. The dispersion relation is a spherically symmetric parabola and it is continuously rising so the DOS can be calculated easily.
For longitudinal phonons in a string of atoms the dispersion relation of the kinetic energy in a 1-dimensional k-space, as shown in Figure 2, is given by
where is the oscillator frequency, the mass of the atoms, the inter-atomic force constant and inter-atomic spacing. For small values of the dispersion relation is linear:
When the energy is
With the transformation and small this relation can be transformed to
Isotropic dispersion relations
The two examples mentioned here can be expressed like
This expression is a kind of dispersion relation because it interrelates two wave properties and it is isotropic because only the length and not the direction of the wave vector appears in the expression. The magnitude of the wave vector is related to the energy as:
Accordingly, the volume of n-dimensional -space containing wave vectors smaller than is:
Substitution of the isotropic energy relation gives the volume of occupied states
Differentiating this volume with respect to the energy gives an expression for the DOS of the isotropic dispersion relation
Parabolic dispersion
In the case of a parabolic dispersion relation (p = 2), such as applies to free electrons in a Fermi gas, the resulting density of states, D_n(E), for electrons in an n-dimensional system is proportional to (E − E₀)^(n/2 − 1)
for E > E₀, with D_n(E) = 0 for E < E₀.
In 1-dimensional systems the DOS diverges at the bottom of the band as E drops to E₀. In 2-dimensional systems the DOS turns out to be independent of E. Finally for 3-dimensional systems the DOS rises as the square root of the energy.
Including the prefactor, the expression for the 3D DOS is N(E) = (V / (2π²)) (2m/ħ²)^(3/2) √(E − E₀),
where V is the total volume, and the expression includes the 2-fold spin degeneracy.
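As a small illustration of the formula above, the sketch below evaluates the 3D free-electron DOS per unit volume at one energy; the 5 eV evaluation energy is an arbitrary illustrative choice, and the constants are standard SI values.

# A minimal sketch evaluating the 3D free-electron DOS per unit volume,
# g(E) = (spin/(4*pi^2)) * (2m/hbar^2)**1.5 * sqrt(E), with 2-fold spin degeneracy.
# The 5 eV evaluation energy is an arbitrary illustrative choice.
import math

HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J

def free_electron_dos(energy_joules, mass=M_E, spin_degeneracy=2):
    """States per joule per cubic metre for a parabolic band starting at E = 0."""
    if energy_joules <= 0:
        return 0.0
    prefactor = (spin_degeneracy / (4 * math.pi ** 2)) * (2 * mass / HBAR ** 2) ** 1.5
    return prefactor * math.sqrt(energy_joules)

g = free_electron_dos(5.0 * EV)
print(f"g(5 eV) ≈ {g * EV:.3e} states per eV per m^3")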
Linear dispersion
In the case of a linear relation (p = 1), such as applies to photons, acoustic phonons, or to some special kinds of electronic bands in a solid, the DOS in 1, 2 and 3 dimensional systems is related to the energy as:
Distribution functions
The density of states plays an important role in the kinetic theory of solids. The product of the density of states and the probability distribution function is the number of occupied states per unit volume at a given energy for a system in thermal equilibrium. This value is widely used to investigate various physical properties of matter. The following are examples, using two common distribution functions, of how applying a distribution function to the density of states can give rise to physical properties.
Fermi–Dirac statistics: The Fermi–Dirac probability distribution function, Fig. 4, is used to find the probability that a fermion occupies a specific quantum state in a system at thermal equilibrium. Fermions are particles which obey the Pauli exclusion principle (e.g. electrons, protons, neutrons). The distribution function can be written as f_FD(E) = 1 / (exp((E − μ)/(k_B T)) + 1).
Here μ is the chemical potential (also denoted as EF and called the Fermi level when T = 0), k_B is the Boltzmann constant, and T is temperature. Fig. 4 illustrates how the product of the Fermi-Dirac distribution function and the three-dimensional density of states for a semiconductor can give insight to physical properties such as carrier concentration and energy band gaps.
Bose–Einstein statistics: The Bose–Einstein probability distribution function is used to find the probability that a boson occupies a specific quantum state in a system at thermal equilibrium. Bosons are particles which do not obey the Pauli exclusion principle (e.g. phonons and photons). The distribution function can be written as f_BE(E) = 1 / (exp((E − μ)/(k_B T)) − 1).
From these two distributions it is possible to calculate properties such as the internal energy per unit volume , the number of particles , specific heat capacity , and thermal conductivity . The relationships between these properties and the product of the density of states and the probability distribution, denoting the density of states by instead of , are given by
is dimensionality, is sound velocity and is mean free path.
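As a sketch of how a distribution function is combined with the DOS in practice, the following example numerically integrates g(E)·f(E) for a 3D free-electron gas to obtain an electron density; the chemical potential, temperature and integration range are arbitrary illustrative assumptions, not values from the article.

# A minimal sketch (illustrative assumptions): electron density n = integral of
# g(E) * f(E) dE for a 3D free-electron gas with a Fermi-Dirac occupation.
import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # kg
K_B  = 1.380649e-23      # J/K
EV   = 1.602176634e-19   # J

def dos_3d(E):
    # 3D free-electron DOS per unit volume, 2-fold spin degeneracy included; E in joules.
    return (1.0 / (2 * np.pi ** 2)) * (2 * M_E / HBAR ** 2) ** 1.5 * np.sqrt(np.maximum(E, 0.0))

def fermi_dirac(E, mu, T):
    x = np.clip((E - mu) / (K_B * T), -700.0, 700.0)   # clip to keep exp() finite
    return 1.0 / (np.exp(x) + 1.0)

mu, T = 5.0 * EV, 300.0                      # assumed chemical potential and temperature
E = np.linspace(0.0, 10.0 * EV, 200_001)
n = np.sum(dos_3d(E) * fermi_dirac(E, mu, T)) * (E[1] - E[0])
print(f"n ≈ {n:.3e} electrons per m^3")      # of order 1e28-1e29 for metal-like parameters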
Applications
The density of states appears in many areas of physics, and helps to explain a number of quantum mechanical phenomena.
Quantization
Calculating the density of states for small structures shows that the distribution of electrons changes as dimensionality is reduced. For quantum wires, the DOS for certain energies actually becomes higher than the DOS for bulk semiconductors, and for quantum dots the electrons become quantized to certain energies.
Photonic crystals
The photon density of states can be manipulated by using periodic structures with length scales on the order of the wavelength of light. Some structures can completely inhibit the propagation of light of certain colors (energies), creating a photonic band gap: the DOS is zero for those photon energies. Other structures can inhibit the propagation of light only in certain directions to create mirrors, waveguides, and cavities. Such periodic structures are known as photonic crystals. In nanostructured media the concept of local density of states (LDOS) is often more relevant than that of DOS, as the DOS varies considerably from point to point.
Computational calculation
Interesting systems are in general complex, for instance compounds, biomolecules, polymers, etc. Because of the complexity of these systems the analytical calculation of the density of states is in most of the cases impossible. Computer simulations offer a set of algorithms to evaluate the density of states with a high accuracy. One of these algorithms is called the Wang and Landau algorithm.
Within the Wang and Landau scheme no previous knowledge of the density of states is required. One proceeds as follows: the cost function (for example the energy) of the system is discretized. Each time the bin i is reached one updates a histogram for the density of states, g(i), by g(i) → g(i) f,
where f is called the modification factor. As soon as each bin in the histogram is visited a certain number of times (10-15), the modification factor is reduced by some criterion, for instance f_{n+1} = √(f_n),
where n denotes the n-th update step. The simulation finishes when the modification factor is less than a certain threshold, for instance when f is smaller than about 1 + 10⁻⁸.
The Wang and Landau algorithm has some advantages over other common algorithms such as multicanonical simulations and parallel tempering. For example, the density of states is obtained as the main product of the simulation. Additionally, Wang and Landau simulations are completely independent of the temperature. This feature makes it possible to compute the density of states of systems with a very rough energy landscape, such as proteins.
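A minimal sketch of the Wang and Landau scheme described above, applied to a toy model (a small 1D Ising ring, chosen here only to give the algorithm something to sample; none of the parameters come from the text). It stores ln g(E) to avoid overflow and reduces the modification factor as f → √f once every energy bin has been visited a minimum number of times.

# Minimal Wang-Landau sketch for a toy model: a 1D Ising ring with N spins
# (model choice and parameters are illustrative assumptions). ln g(E) is stored
# instead of g(E) to avoid overflow; f -> sqrt(f) after each flat-enough stage.
import math
import random

N = 12
random.seed(0)

def energy(spins):
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))

# Possible ring energies are -N, -N+4, ..., N; index them as bins.
levels = list(range(-N, N + 1, 4))
index = {E: i for i, E in enumerate(levels)}

ln_g = [0.0] * len(levels)               # running estimate of ln g(E)
ln_f = 1.0                               # ln of the modification factor, initially f = e
spins = [random.choice((-1, 1)) for _ in range(N)]
E = energy(spins)

while ln_f > 1e-6:                       # stop once f is very close to 1
    hist = [0] * len(levels)
    while min(hist) < 15:                # "each bin visited a certain number of times"
        i = random.randrange(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        E_new = E + dE
        # Accept the flip with probability min(1, g(E_old) / g(E_new)).
        if math.log(random.random()) < ln_g[index[E]] - ln_g[index[E_new]]:
            spins[i] *= -1
            E = E_new
        ln_g[index[E]] += ln_f            # update the density-of-states estimate
        hist[index[E]] += 1
    ln_f /= 2.0                           # f -> sqrt(f)

# Normalise so that the total number of states is 2**N, then print the estimate.
shift = math.log(sum(math.exp(x - max(ln_g)) for x in ln_g)) + max(ln_g) - N * math.log(2)
for E_level, x in zip(levels, ln_g):
    print(E_level, round(math.exp(x - shift)))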
Mathematically the density of states is formulated in terms of a tower of covering maps.
Local density of states
An important feature of the definition of the DOS is that it can be extended to any system. One of its properties is translational invariance, which means that the density of states is homogeneous and the same at each point of the system. But this is just a particular case, and the LDOS gives a wider description, with a heterogeneous density of states through the system.
Concept
Local density of states (LDOS) describes a space-resolved density of states. In materials science, for example, this term is useful when interpreting the data from a scanning tunneling microscope (STM), since this method is capable of imaging electron densities of states with atomic resolution. According to crystal structure, this quantity can be predicted by computational methods, as for example with density functional theory.
A general definition
In a local density of states the contribution of each state is weighted by the density of its wave function at the point, so that n(E) becomes
n(x, E) = Σ_n |φ_n(x)|² δ(E − ε_n); the factor of |φ_n(x)|² means that each state contributes more in the regions where the density is high. An average over x of this expression will restore the usual formula for a DOS. The LDOS is useful in inhomogeneous systems, where n(x, E) contains more information than n(E) alone.
For a one-dimensional system with a wall, the sine waves give
where .
In a three-dimensional system with the expression is
In fact, we can generalise the local density of states further to
n(x, x′, E) = Σ_n φ_n(x) φ_n*(x′) δ(E − ε_n); this is called the spectral function, and each wave function enters separately through its own variable. In more advanced theory it is connected with the Green's functions and provides a compact representation of some results such as optical absorption.
Solid state devices
LDOS can be used to gain insight into a solid-state device. For example, the figure on the right illustrates the LDOS of a transistor as it turns on and off in a ballistic simulation. The LDOS has a clear boundary in the source and drain, which corresponds to the location of the band edge. In the channel, the DOS increases as the gate voltage increases and the potential barrier goes down.
Optics and photonics
In optics and photonics, the concept of local density of states refers to the states that can be occupied by a photon. For light it is usually measured by fluorescence methods, near-field scanning methods or by cathodoluminescence techniques. Different photonic structures have different LDOS behaviors with different consequences for spontaneous emission. In photonic crystals, near-zero LDOS are expected, inhibiting spontaneous emission.
Similar LDOS enhancement is also expected in plasmonic cavities.
However, in disordered photonic nanostructures, the LDOS behaves differently. It fluctuates spatially, with statistics that are proportional to the scattering strength of the structures.
In addition, the relationship with the mean free path of the scattering is trivial, as the LDOS can still be strongly influenced by the short-range details of strong disorder in the form of a strong Purcell enhancement of the emission. Finally, for plasmonic disorder, this effect is much stronger for LDOS fluctuations, as it can be observed as a strong near-field localization.
See also
References
Further reading
Chen, Gang. Nanoscale Energy Transport and Conversion. New York: Oxford, 2005
Streetman, Ben G. and Sanjay Banerjee. Solid State Electronic Devices. Upper Saddle River, NJ: Prentice Hall, 2000.
Muller, Richard S. and Theodore I. Kamins. Device Electronics for Integrated Circuits. New York: John Wiley and Sons, 2003.
Kittel, Charles and Herbert Kroemer. Thermal Physics. New York: W.H. Freeman and Company, 1980
Sze, Simon M. Physics of Semiconductor Devices. New York: John Wiley and Sons, 1981
External links
Online lecture:ECE 606 Lecture 8: Density of States by M. Alam
Scientists shed light on glowing materials How to measure the Photonic LDOS
Statistical mechanics
Physical quantities
Electronic band structures | Density of states | Physics,Chemistry,Materials_science,Mathematics | 4,365 |
23,979,944 | https://en.wikipedia.org/wiki/C14H18N2O3 |
The molecular formula C14H18N2O3 (molar mass: 262.30 g/mol, exact mass: 262.1317 u) may refer to:
Methohexital, or methohexitone
Reposal
Molecular formulas | C14H18N2O3 | Physics,Chemistry | 72 |
33,455,929 | https://en.wikipedia.org/wiki/Houzz | Houzz is an American website, online community and software for architecture; interior design and decorating; landscape design and home improvement. It was founded in 2009 and is based in Palo Alto, California.
History
Houzz was founded as an online platform in February 2009 by Adi Tatarko and her husband Alon Cohen, in response to the challenges they faced with their own home remodeling project. They found it difficult to communicate their vision for their home, and to find the right professionals for their project. Cohen coded the initial website himself, and they asked a few Bay Area architects to upload their portfolios, to give home renovators ideas for their projects. The site spread by word-of-mouth and they began to receive emails from homeowners and home professionals outside the Bay Area asking them to open more categories on Houzz and expand to other areas. Houzz became a company in the fall of 2010.
In November 2010, Houzz released an app for the iPad. The Android version of the app was released in December 2012.
In July 2013, Houzz introduced the Real Cost Finder, an interactive tool that helps users plan home renovations and total costs. The tool is based on data collected from the Houzz community.
In January 2014, Houzz announced that it had opened offices and hired local managers in the UK, Germany and Australia to accelerate its global expansion; 35% of the company's site traffic already came from outside the U.S. In February, Houzz launched Site Designer, a free website building and publishing tool for home professionals. In December, Houzz announced its expansion to the rest of Europe and into Asia, starting with Japan. It was reported that the Houzz database contained millions of images of home interiors and exteriors.
In May 2015, Houzz introduced HouzzTV, showing homeowners' projects using design professionals. In August, Houzz made its first acquisition, acquiring gardening and home advice site GardenWeb from NBCUniversal. In October, Houzz settled a privacy violation lawsuit for not providing the standard recorded notification that customers' and professionals' calls were being recorded for quality and training purposes.
In January 2016, Houzz introduced Sketch for Android. In February, Houzz introduced View in My Room within its app for iOS and Android that lets users virtually place products from the Houzz Marketplace in their homes before buying. In March, Houzz announced that it had opened its Commerce API to third party partners, to make it easier for merchants to sell and manage their inventory on Houzz. In May, Houzz won "Best App" at Google's inaugural Play Awards. In September, Houzz launched Visual Match, a tool that "uses deep learning technology to analyze more than 11 million home photos on Houzz. Furniture and decor that looks similar to the six million products on the Houzz Marketplace is then surfaced for users to browse (and hopefully buy)."
In May 2017, Houzz introduced a tool for its iPhone and iPad app, View in My Room 3D, that allows people to preview over 300,000 products in 3D within the context of their own rooms. In July, Sketch was made available as a web app. With the launch of iOS 11 from Apple in September, Houzz introduced an upgraded version of its 3D AR tool within its app for iPhone and iPad, with 500,000 products available to view.
In February 2018, Houzz acquired IvyMark, a company that developed business management software and a community platform for interior designers and home design firms. In March, the Houzz App for Android devices was updated with ARCore support, enabling users to "place virtual representations of furniture and other home decor items anywhere in their home to see how they would look."
As of June 2019, the company had reportedly 40 million users.
In April 2020, the company launched Houzz Pro software to help home professionals run their businesses. As of May, Houzz had over 20 million photos of design ideas on its platform.
By March 2021, the company reportedly had 2.7 million professionals in its professional services database.
Products and services
Houzz offers a home design photo database with millions of images of home interiors and exteriors. Homeowners browse photos by room, style and location, and bookmark photos in personal collections the site calls ideabooks. Users can click on an image to learn more about the designer, ask a question, and learn about products tagged in the photos.
Houzz also has a directory of home improvement professionals who use the site to reach homeowners. The platform can be used for consumers to search through professionals for hire, view their previous projects, and ultimately hire them.
The company also develops Houzz Pro software, designed to help designers manage their projects, manage new business and handle their projects' financial details.
HouzzTV
The company's HouzzTV video series shows homeowners' projects being implemented with the help of design professionals. Notable homeowners featured include Kristen Bell, Mila Kunis, and Taraji P. Henson.
Reception
Bloomberg Businessweek called Houzz "An online antidote to the housing bust." Architectural Digest wrote about how the app would encourage imagination. TechCrunch wrote that the idea behind the company was well-executed. The Mercury News wrote about the breadth of ideas for customers and their architects to collaborate with. In 2012, The New York Times noted that the app was one of the few nongame apps on iOS that had a 5 star rating.
References
External links
Real estate companies established in 2009
Online marketplaces of the United States
Interior design
Architectural design
Internet properties established in 2009
Companies based in Palo Alto, California | Houzz | Engineering | 1,183 |
317,900 | https://en.wikipedia.org/wiki/Clothes%20dryer | A clothes dryer (tumble dryer, drying machine, or simply dryer) is a powered household appliance that is used to remove moisture from a load of clothing, bedding and other textiles, usually after they are washed in the washing machine.
Many dryers consist of a rotating drum called a "tumbler" through which heated air is circulated to evaporate moisture while the tumbler is rotated to maintain air space between the articles. Using such a machine may cause clothes to shrink or become less soft (due to loss of short soft fibers). A simpler non-rotating machine called a "drying cabinet" may be used for delicate fabrics and other items not suitable for a tumble dryer. Other machines include steam to de-shrink clothes and avoid ironing.
Tumble dryers
Tumble dryers continuously draw in the ambient air around them and heat it before passing it through the tumbler. The resulting hot, humid air is usually vented outside to make room for more air to continue the drying process.
Tumble dryers are sometimes integrated with a washing machine, in the form of washer-dryer combos, which are essentially a front loading washing machine with an integrated dryer or (in the US) a laundry center, which stacks the dryer on top of the washer and integrates the controls for both machines into a single control panel. Often the washer and dryer functions will have a different capacity, with the dryer usually having a lower capacity than the washer. Tumble dryers can also be top loading, in which the drum is loaded from the top of the machine and the drum's end supports are in the left and right sides, instead of the more conventional front and rear. They can be as thin as in width, and may include detachable stationary racks for drying items like plush toys and footwear.
Ventless dryers
Spin dryers
These centrifuge machines simply spin their drums much faster than a typical washer could, in order to extract additional water from the load. They may remove more water in two minutes than a heated tumbler dryer can in twenty, thus saving significant amounts of time and energy. Although spinning alone will not completely dry clothing, this additional step saves a worthwhile amount of time and energy for large laundry operations such as those of hospitals.
Condenser dryers
Just as in a tumble dryer, condenser or condensation dryers pass heated air through the load. However, instead of exhausting this air, the dryer uses a heat exchanger to cool the air and condense the water vapor into either a drain pipe or a collection tank. The drier air is run through the loop again. The heat exchanger typically uses ambient air as its coolant, therefore the heat produced by the dryer will go into the immediate surroundings instead of the outside, increasing the room temperature. In some designs, cold water is used in the heat exchanger, eliminating this heating, but requiring increased water usage.
In terms of energy use, condenser dryers typically require around 2 kilowatt hours (kW⋅h) of energy per average load.
Because the heat exchange process simply cools the internal air using ambient air (or cold water in some cases), it will not dry the air in the internal loop to as low a level of humidity as typical fresh, ambient air. As a consequence of the increased humidity of the air used to dry the load, this type of dryer requires somewhat more time than a tumble dryer. Condenser dryers are a particularly attractive option where long, intricate ducting would be required to vent the dryer.
Heat pump dryers
A closed-cycle heat pump clothes dryer uses a heat pump to dehumidify the processing air. Such dryers typically use under half the energy per load of a condenser dryer.
Whereas condensation dryers use a passive heat exchanger cooled by ambient air, these dryers use a heat pump. The hot, humid air from the tumbler is passed through a heat pump where the cold side condenses the water vapor into either a drain pipe or a collection tank and the hot side reheats the air afterward for re-use. In this way not only does the dryer avoid the need for ducting, but it also conserves much of its heat within the dryer instead of exhausting it into the surroundings. Heat pump dryers can, therefore, use up to 50% less energy required by either condensation or conventional electric dryers. Heat pump dryers use about 1 kW⋅h of energy to dry an average load instead of 2 kW⋅h for a condenser dryer, or from 3 to 9 kW⋅h, for a conventional electric dryer. Domestic heat pump dryers are designed to work in typical ambient temperatures from . Below , drying times significantly increase.
As with condensation dryers, the heat exchanger will not dry the internal air to as low a level of humidity as the typical ambient air. With respect to ambient air, the higher humidity of the air used to dry the clothes has the effect of increasing drying times; however, because heat pump dryers conserve much of the heat of the air they use, the already-hot air can be cycled more quickly, possibly leading to shorter drying times than tumble dryers, depending on the model.
Mechanical steam compression dryers
A new type of dryer in development, these machines are a more advanced version of heat pump dryers. Instead of using hot air to dry the clothing, mechanical steam compression dryers use water recovered from the clothing in the form of steam. First, the tumbler and its contents are heated to . The wet steam that results purges the system of air and is the only remaining atmosphere in the tumbler.
As wet steam exits the tumbler, it is mechanically compressed (hence the name) to extract water vapor and transfer the heat of vaporization to the remaining gaseous steam. This pressurized, gaseous steam is then allowed to expand, and is superheated before being injected back into the tumbler where its heat causes more water to vaporize from the clothing, creating more wet steam and restarting the cycle.
Like heat pump dryers, mechanical steam compression dryers recycle much of the heat used to dry the clothes, and they operate in a very similar range of efficiency as heat pump dryers. Both types can be over twice as efficient as conventional tumble dryers. The considerably higher temperatures used in mechanical steam compression dryers result in drying times on the order of half as long as those of heat pump dryers.
Convectant drying
Marketed by some manufacturers as a "static clothes drying technique", convectant dryers simply consist of a heating unit at the bottom, a vertical chamber, and a vent at top. The unit heats air at the bottom, reducing its relative humidity, and the natural tendency of hot air to rise brings this low-humidity air into contact with the clothes. This design is slower than conventional tumble dryers, but relatively energy-efficient if well-implemented. It works particularly well in cold and humid environments, where it dries clothes substantially faster than line-drying. In hot and dry weather, the performance delta over line-drying is negligible.
Given that this is a relatively simple and cheap technique to materialize, most consumer products showcase the added benefit of portability and/or modularity. Newer designs implement a fan heater at the bottom to pump hot air into the vertical drying rack chamber. Temperatures in excess of can be reached inside these "hot air balloons," yet lint, static cling, and shrinkage are minimal. Upfront cost is significantly lower than tumble, condenser and heat pump designs.
If used in combination with washing machines featuring fast spin cycles (800+ rpm) or spin dryers, the cost-effectiveness of this technique has the potential to render tumble dryer-like designs obsolete in single-person and small family households. One disadvantage is that the moisture from the clothes is released into the immediate surroundings. Proper ventilation or a complementary dehumidifier is recommended for indoor use. It also cannot compete with the tumble dryer's capacity to dry multiple loads of wet clothing in a single day.
Solar clothes dryer
The solar dryer is a box-shaped stationary construction which encloses a second compartment where the clothes are held. It uses the sun's heat without direct sunlight reaching the clothes. Alternatively, a solar heating box may be used to heat air that is driven through a conventional tumbler dryer.
Microwave dryers
Japanese manufacturers have developed highly efficient clothes dryers that use microwave radiation to dry the clothes (though a vast majority of Japanese air dry their laundry). Most of the drying is done using microwaves to evaporate the water, but the final drying is done by convection heating, to avoid problems of arcing with metal pieces in the laundry. There are a number of advantages: shorter drying times (25% less), energy savings (17–25% less), and lower drying temperatures. Some analysts think that the arcing and fabric damage is a factor preventing microwave dryers from being developed for the US market.
Ultrasonic dryers
Ultrasonic dryers use high-frequency signals to drive piezoelectric actuators in order to mechanically shake the clothes, releasing water in the form of a mist which is then removed from the drum. They have the potential to significantly cut energy consumption while needing only one-third of the time needed by a conventional electric dryer for a given load. They also do not have the same issues related with lint in most other types of dryers.
Hybrid dryers
Some manufacturers, like LG Electronics and Whirlpool, have introduced hybrid dryers, that offer the user the option of using either a heat pump or a traditional electric heating element for drying the user's clothes. Hybrid dryers can also use a heat pump and a heating element at the same time to dry clothes faster.
Static electricity
Clothes dryers can cause static cling through the triboelectric effect. This can be a minor nuisance and is often a symptom of over-drying textiles to below their equilibrium moisture level, particularly when using synthetic materials. Fabric conditioning products such as dryer sheets are marketed to dissipate this static charge, depositing surfactants onto the fabric load by mechanical abrasion during tumbling. Modern dryers often have improved temperature and humidity sensors and electronic controls which aim to stop the drying cycle once textiles are sufficiently dry, avoiding over-drying and the static charge and energy wastage this causes.
Pest control use
Drying at a minimum of heat for thirty minutes kills many parasites including house dust mites, bed bugs, and scabies mites and their eggs; a bit more than ten minutes kills ticks. Simply washing drowns dust mites, and exposure to direct sunlight for three hours kills their eggs.
Lint build-up (tumble dryers)
Moisture and lint are byproducts of the tumble drying process and are pulled from the drum by a fan motor and then pushed through the remaining exhaust conduit to the exterior termination fitting. Typical exhaust conduit comprises flex transition hose found immediately behind the dryer, the rigid galvanized pipe and elbow fittings found within the wall framing, and the vent duct hood found outside the house.
A clean, unobstructed dryer vent improves both the efficiency and safety of the dryer. As the dryer duct pipe becomes partially obstructed and filled with lint, drying time markedly increases and causes the dryer to waste energy. A blocked vent increases the internal temperature and may result in a fire. Clothes dryers are one of the more costly home appliances to operate.
Several factors can contribute to or accelerate rapid lint build-up. These include long or restrictive ducts, bird or rodent nests in the termination, crushed or kinked flex transition hose, terminations with screen-like features, and condensation within the duct due to un-insulated ducts traveling through cold spaces such as a crawl space or attic. If plastic flaps are at the outside end of the duct, one may be able to flex, bend, and temporarily remove the plastic flaps, clean the inside surface of the flaps, clean the last foot or so of the duct, and reattach the plastic flaps. The plastic flaps keep insects, birds, and snakes out of the dryer vent pipe. During cold weather, the warm wet air condenses on the plastic flaps, and minor trace amounts of lint sticks to the wet inside part of the plastic flaps at the outside of the building.
Ventless dryers include multi-stage lint filtration systems and some even include automatic evaporator and condenser cleaning functions that can run even while the dryer is running. The evaporator and condenser are usually cleaned with running water. These systems are necessary, in order to prevent lint from building up inside the dryer and evaporator and condenser coils.
Aftermarket add-on lint and moisture traps can be attached to the dryer duct pipe, on machines originally manufactured as outside-venting, to facilitate installation where an outside vent is not available. Increased humidity at the location of installation is a drawback to this method.
Safety
Dryers expose flammable materials to heat. Underwriters Laboratories recommends cleaning the lint filter after every cycle for safety and energy efficiency, provision of adequate ventilation, and cleaning of the duct at regular intervals. UL also recommends that dryers not be used for glass fiber, rubber, foam or plastic items, or any item that has had a flammable substance spilled on it.
In the United States, an estimate from the US Fire Administration in a 2012 report estimated that from 2008 to 2010, fire departments responded to an estimated 2,900 clothes dryer fires in residential buildings each year across the nation. These fires resulted in an annual average loss of 5 deaths, 100 injuries, and $35 million in property loss. The Fire Administration attributes "Failure to clean" (34%) as the leading factor contributing to clothes dryer fires in residential buildings, and observed that new home construction trends place clothes dryers and washing machines in more hazardous locations away from outside walls, such as in bedrooms, second-floor hallways, bathrooms, and kitchens.
To address the problem of clothes dryer fires, a fire suppression system can be used with sensors to detect the change in temperature when a blaze starts in a dryer drum. These sensors then activate a water vapor mechanism to put out the fire.
Environmental impact
The environmental impact of clothes dryers is especially severe in the US and Canada, where over 80% of all homes have a clothes dryer. According to the US Environmental Protection Agency, if all residential clothes dryers sold in the US were energy efficient, "the utility cost savings would grow to more than $1.5 billion each year and more than 10 billion kilograms (22 billion pounds) of annual greenhouse gas emissions would be prevented”.
Clothes dryers are second only to refrigerators and freezers as the largest residential electrical energy consumers in America.
In the European Union, the EU energy labeling system is applied to dryers; dryers are classified with a label from A+++ (best) to G (worst) according to the amount of energy used per kilogram of clothes (kW⋅h/kg). Sensor dryers can automatically sense that clothes are dry and switch off. This means over-drying is not as frequent. Most of the European market sells sensor dryers now, and they are normally available in condenser and vented dryers.
History
A hand-cranked clothes dryer was created in 1800 by M. Pochon from France. Henry W. Altorfer invented and patented an electric clothes dryer in 1937. J. Ross Moore, an inventor from North Dakota, developed designs for automatic clothes dryers and published his design for an electrically operated dryer in 1938. Industrial designer Brooks Stevens developed an electric dryer with a glass window in the early 1940s.
See also
Laundry-folding machine
List of home appliances
Sheila Maid
Shoe dryer
Surge protector
References
External links
"What You Should Know About Clothes Dryers." Popular Mechanics, December 1954, pp. 170–175, basic principles of dryers even today.
19th-century inventions
Dryers
Home appliances
Laundry drying equipment
Products introduced in 1937 | Clothes dryer | Physics,Chemistry,Technology,Engineering | 3,374 |
18,306,410 | https://en.wikipedia.org/wiki/Sunbreak | A sunbreak is a natural phenomenon in which sunlight obscured over a relatively large area penetrates the obscuring material in a localized space. The typical example is of sunlight shining through a hole in cloud cover. A sunbreak piercing clouds normally produces a visible shaft of light reflected by atmospheric dust and/or moisture, called a sunbeam. Another form of sunbreak occurs when sunlight passes into an area otherwise shadowed by surrounding large buildings through a gap temporarily aligned with the position of the sun.
The word is considered by some to have origins in Pacific Northwest English.
In art
Artists such as cartoonists and filmmakers often use sunbreak to show protection or relief being brought upon an area of land by God or a receding storm.
References
Earth phenomena
Light
Sun | Sunbreak | Physics | 151 |
729,572 | https://en.wikipedia.org/wiki/Bloch%20sphere | In quantum mechanics and computing, the Bloch sphere is a geometrical representation of the pure state space of a two-level quantum mechanical system (qubit), named after the physicist Felix Bloch.
Mathematically each quantum mechanical system is associated with a separable complex Hilbert space H. A pure state of a quantum system is represented by a non-zero vector ψ in H. As the vectors ψ and λψ (with λ ≠ 0) represent the same state, the level of the quantum system corresponds to the dimension of the Hilbert space and pure states can be represented as equivalence classes, or rays, in a projective Hilbert space P(H). For a two-dimensional Hilbert space, the space of all such states is the complex projective line CP¹. This is the Bloch sphere, which can be mapped to the Riemann sphere.
The Bloch sphere is a unit 2-sphere, with antipodal points corresponding to a pair of mutually orthogonal state vectors. The north and south poles of the Bloch sphere are typically chosen to correspond to the standard basis vectors |0⟩ and |1⟩, respectively, which in turn might correspond e.g. to the spin-up and spin-down states of an electron. This choice is arbitrary, however. The points on the surface of the sphere correspond to the pure states of the system, whereas the interior points correspond to the mixed states. The Bloch sphere may be generalized to an n-level quantum system, but then the visualization is less useful.
The natural metric on the Bloch sphere is the Fubini–Study metric. The mapping from the unit 3-sphere in the two-dimensional state space to the Bloch sphere is the Hopf fibration, with each ray of spinors mapping to one point on the Bloch sphere.
Definition
Given an orthonormal basis, any pure state |ψ⟩ of a two-level quantum system can be written as a superposition of the basis vectors |0⟩ and |1⟩, |ψ⟩ = α|0⟩ + β|1⟩, where the coefficient of (or contribution from) each of the two basis vectors is a complex number. This means that the state is described by four real numbers. However, only the relative phase between the coefficients of the two basis vectors has any physical meaning (the phase of the quantum system is not directly measurable), so that there is redundancy in this description. We can take the coefficient of |0⟩ to be real and non-negative. This allows the state to be described by only three real numbers, giving rise to the three dimensions of the Bloch sphere.
We also know from quantum mechanics that the total probability of the system has to be one:
⟨ψ|ψ⟩ = 1, or equivalently |α|² + |β|² = 1.
Given this constraint, we can write |ψ⟩ using the following representation:
|ψ⟩ = cos(θ/2) |0⟩ + e^(iφ) sin(θ/2) |1⟩, where 0 ≤ θ ≤ π and 0 ≤ φ < 2π.
The representation is always unique, because, even though the value of φ is not unique when
|ψ⟩ is one of the states |0⟩ (see Bra-ket notation) or |1⟩, the point represented by θ and φ is unique.
The parameters θ and φ, re-interpreted in spherical coordinates as respectively the colatitude with respect to the z-axis and the longitude with respect to the x-axis, specify a point
a = (sin θ cos φ, sin θ sin φ, cos θ) on the unit sphere in ℝ³.
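As a concrete sketch of this parametrization (using the standard conventions above, not any particular software library), the following converts a normalized qubit state α|0⟩ + β|1⟩ into its Bloch angles and the corresponding Cartesian point on the unit sphere.

# A minimal sketch converting a normalized qubit state a|0> + b|1> into the
# Bloch angles (theta, phi) and the corresponding point on the unit sphere,
# following cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>.
import cmath
import math

def bloch_angles(a, b):
    norm = math.hypot(abs(a), abs(b))
    a, b = a / norm, b / norm
    # Remove the global phase so that the coefficient of |0> is real and non-negative.
    global_phase = cmath.phase(a) if abs(a) > 1e-12 else cmath.phase(b)
    a, b = a * cmath.exp(-1j * global_phase), b * cmath.exp(-1j * global_phase)
    theta = 2 * math.acos(min(1.0, abs(a)))
    phi = cmath.phase(b) if abs(b) > 1e-12 else 0.0
    return theta, phi

def bloch_vector(a, b):
    theta, phi = bloch_angles(a, b)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

print(bloch_vector(1, 0))                                    # |0> -> (0, 0, 1), the north pole
print(bloch_vector(1 / math.sqrt(2), 1j / math.sqrt(2)))     # (|0> + i|1>)/sqrt(2) -> (0, 1, 0)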
For mixed states, one considers the density operator. Any two-dimensional density operator ρ can be expanded using the identity I and the Hermitian, traceless Pauli matrices σ = (σ_x, σ_y, σ_z),
ρ = (1/2)(I + a · σ),
where a ∈ ℝ³ is called the Bloch vector.
It is this vector that indicates the point within the sphere that corresponds to a given mixed state. Specifically, as a basic feature of the Pauli vector, the eigenvalues of a · σ are ±|a|. Density operators must be positive-semidefinite, so it follows that |a| ≤ 1.
For pure states, one then has |a| = 1,
in accordance with the above.
As a consequence, the surface of the Bloch sphere represents all the pure states of a two-dimensional quantum system, whereas the interior corresponds to all the mixed states.
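A complementary sketch for mixed states: the Bloch vector of a 2×2 density matrix can be read off as a_i = Tr(ρσ_i), and its length distinguishes points on the surface from points inside the ball. The example matrices below are arbitrary illustrative choices, not data from the article.

# A minimal sketch (standard conventions, arbitrary example data): recovering the
# Bloch vector of a 2x2 density matrix as a_i = Tr(rho * sigma_i), so that
# rho = (I + a . sigma)/2, and using |a| to distinguish pure from mixed states.
import numpy as np

SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),       # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),     # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]       # sigma_z

def bloch_vector(rho):
    return np.real([np.trace(rho @ s) for s in SIGMA])

def from_bloch_vector(a):
    return 0.5 * (np.eye(2, dtype=complex) + sum(ai * s for ai, s in zip(a, SIGMA)))

pure = np.array([[1, 0], [0, 0]], dtype=complex)                      # |0><0|
mixed = 0.75 * pure + 0.25 * np.array([[0, 0], [0, 1]], dtype=complex)
for rho in (pure, mixed):
    a = bloch_vector(rho)
    print(a, "on the surface" if np.isclose(np.linalg.norm(a), 1) else "inside the ball")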
u, v, w representation
The Bloch vector a = (u, v, w) can be represented in the following basis, with reference to the density operator ρ:
u = ρ_01 + ρ_10, v = i(ρ_01 − ρ_10), w = ρ_00 − ρ_11, where ρ_ij are the matrix elements of ρ in the {|0⟩, |1⟩} basis.
This basis is often used in laser theory, where w is known as the population inversion. In this basis, the numbers u, v and w are the expectations of the three Pauli matrices σ_x, σ_y and σ_z, allowing one to identify the three coordinates with the x, y and z axes.
Pure states
Consider an n-level quantum mechanical system. This system is described by an n-dimensional Hilbert space Hn. The pure state space is by definition the set of rays of Hn.
Theorem. Let U(n) be the Lie group of unitary matrices of size n. Then the pure state space of Hn can be identified with the compact coset space U(n)/(U(n − 1) × U(1)).
To prove this fact, note that there is a natural group action of U(n) on the set of states of Hn. This action is continuous and transitive on the pure states. For any state ψ, the isotropy group of ψ (defined as the set of elements g of U(n) such that gψ = ψ as states) is isomorphic to the product group U(1) × U(n − 1).
In linear algebra terms, this can be justified as follows. Any of U(n) that leaves invariant must have as an eigenvector. Since the corresponding eigenvalue must be a complex number of modulus 1, this gives the U(1) factor of the isotropy group. The other part of the isotropy group is parametrized by the unitary matrices on the orthogonal complement of , which is isomorphic to U(n − 1). From this the assertion of the theorem follows from basic facts about transitive group actions of compact groups.
The important fact to note above is that the unitary group acts transitively on pure states.
Now the (real) dimension of U(n) is n². This is easy to see since the exponential map
is a local homeomorphism from the space of self-adjoint complex matrices to U(n). The space of self-adjoint complex matrices has real dimension n².
Corollary. The real dimension of the pure state space of Hn is 2n − 2.
In fact, the dimension is n² − ((n − 1)² + 1) = 2n − 2.
Let us apply this to consider the real dimension of an m qubit quantum register. The corresponding Hilbert space has dimension 2^m.
Corollary. The real dimension of the pure state space of an m-qubit quantum register is 2^(m+1) − 2.
Plotting pure two-spinor states through stereographic projection
Mathematically the Bloch sphere for a two-spinor state can be mapped to a Riemann sphere , i.e., the projective Hilbert space with the 2-dimensional complex Hilbert space a representation space of SO(3).
Given a pure state
|ψ⟩ = α|0⟩ + β|1⟩,
where α and β are complex numbers which are normalized so that
|α|² + |β|² = 1, and such that ⟨0|1⟩ = 0 and ⟨0|0⟩ = ⟨1|1⟩ = 1,
i.e., such that |0⟩ and |1⟩ form a basis and have diametrically opposite representations on the Bloch sphere, then let
u = β/α be their ratio.
If the Bloch sphere is thought of as being embedded in ℝ³ with its center at the origin and with radius one, then the plane z = 0 (which intersects the Bloch sphere at a great circle; the sphere's equator, as it were) can be thought of as an Argand diagram. Plot point u in this plane — so that in ℝ³ it has coordinates (Re u, Im u, 0).
Draw a straight line through u and through the point on the sphere that represents |1⟩. (Let (0,0,1) represent |0⟩ and (0,0,−1) represent |1⟩.) This line intersects the sphere at another point besides (0,0,−1). (The only exception is when the state is |1⟩ itself, i.e., when α = 0, so that u is undefined.) Call this point P. Point u on the plane z = 0 is the stereographic projection of point P on the Bloch sphere. The vector with tail at the origin and tip at P is the direction in 3-D space corresponding to the spinor |ψ⟩. The coordinates of P are
P = ( 2 Re u / (1 + |u|²), 2 Im u / (1 + |u|²), (1 − |u|²)/(1 + |u|²) ).
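The construction above can be checked numerically: under the conventions used here (|0⟩ at (0,0,1), |1⟩ at (0,0,−1), projection taken from the point representing |1⟩), the stereographic image of u = β/α coincides with the Bloch vector obtained from Pauli expectation values. A minimal sketch with an arbitrary example state:

# A minimal numerical check (standard conventions assumed): the stereographic
# image of u = b/a, projected from the point (0, 0, -1) representing |1>, agrees
# with the Bloch vector obtained from Pauli expectation values.
import numpy as np

def stereographic_point(a, b):
    u = b / a                                        # undefined when a = 0, i.e. for the state |1>
    d = 1 + abs(u) ** 2
    return np.array([2 * u.real / d, 2 * u.imag / d, (1 - abs(u) ** 2) / d])

def expectation_bloch(a, b):
    psi = np.array([a, b], dtype=complex)
    psi = psi / np.linalg.norm(psi)
    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    return np.real([psi.conj() @ s @ psi for s in (sx, sy, sz)])

a, b = 0.6, 0.8j                                     # an arbitrary normalized example state
print(np.allclose(stereographic_point(a, b), expectation_bloch(a, b)))   # True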
Density operators
Formulations of quantum mechanics in terms of pure states are adequate for isolated systems; in general quantum mechanical systems need to be described in terms of density operators. The Bloch sphere parametrizes not only pure states but mixed states for 2-level systems. The density operator describing the mixed-state of a 2-level quantum system (qubit) corresponds to a point inside the Bloch sphere with the following coordinates:
a = Σ_i p_i (x_i, y_i, z_i), where p_i is the probability of the individual states within the ensemble and (x_i, y_i, z_i) are the coordinates of the individual states (on the surface of Bloch sphere). The set of all points on and inside the Bloch sphere is known as the Bloch ball.
For states of higher dimensions there is difficulty in extending this to mixed states. The topological description is complicated by the fact that the unitary group does not act transitively on density operators. The orbits moreover are extremely diverse as follows from the following observation:
Theorem. Suppose A is a density operator on an n level quantum mechanical system whose distinct eigenvalues are μ1, ..., μk with multiplicities n1, ..., nk. Then the group of unitary operators V such that V A V* = A is isomorphic (as a Lie group) to U(n1) × ⋯ × U(nk).
In particular the orbit of A is isomorphic to U(n) / (U(n1) × ⋯ × U(nk)).
It is possible to generalize the construction of the Bloch ball to dimensions larger than 2, but the geometry of such a "Bloch body" is more complicated than that of a ball.
Rotations
A useful advantage of the Bloch sphere representation is that the evolution of the qubit state is describable by rotations of the Bloch sphere. The most concise explanation for why this is the case is that the Lie algebra for the group of unitary and hermitian matrices is isomorphic to the Lie algebra of the group of three dimensional rotations .
Rotation operators about the Bloch basis
The rotations of the Bloch sphere about the Cartesian axes in the Bloch basis are given by R_x(θ) = e^(−iθσ_x/2), R_y(θ) = e^(−iθσ_y/2), R_z(θ) = e^(−iθσ_z/2).
Rotations about a general axis
If n = (n_x, n_y, n_z) is a real unit vector in three dimensions, the rotation of the Bloch sphere about this axis is given by:
R_n(θ) = exp(−iθ n · σ / 2) = cos(θ/2) I − i sin(θ/2) (n · σ).
An interesting thing to note is that this expression is identical under relabelling to the extended Euler formula for quaternions.
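A minimal sketch of the general-axis rotation operator (standard conventions, no particular software package assumed): it builds exp(−iθ n·σ/2) from the Pauli matrices and checks that a quarter turn about the y-axis carries the Bloch vector of |0⟩ from (0, 0, 1) to (1, 0, 0).

# A minimal sketch of the rotation operator R_n(theta) = exp(-i theta n.sigma / 2)
# = cos(theta/2) I - i sin(theta/2) (n . sigma), plus a check that it rotates the
# Bloch vector of a state by theta about the axis n.
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(axis, theta):
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    n_dot_sigma = n[0] * SX + n[1] * SY + n[2] * SZ
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * n_dot_sigma

def bloch_vector(psi):
    return np.real([psi.conj() @ s @ psi for s in (SX, SY, SZ)])

psi = np.array([1, 0], dtype=complex)          # |0>, Bloch vector (0, 0, 1)
R = rotation((0, 1, 0), np.pi / 2)             # quarter turn about the y-axis
print(np.round(bloch_vector(R @ psi), 10))     # expected (1, 0, 0)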
Derivation of the Bloch rotation generator
Ballentine presents an intuitive derivation for the infinitesimal unitary transformation. This is important for understanding why the rotations of Bloch spheres are exponentials of linear combinations of Pauli matrices. Hence a brief treatment on this is given here. A more complete description in a quantum mechanical context can be found here.
Consider a family of unitary operators representing a rotation about some axis. Since the rotation has one degree of freedom, the operator acts on a field of scalars such that:
where
We define the infinitesimal unitary as the Taylor expansion truncated at second order.
By the unitary condition:
Hence
For this equality to hold true (assuming is negligible) we require
.
This results in a solution of the form:
where is any Hermitian transformation, and is called the generator of the unitary family.
Hence
Since the Pauli matrices are unitary Hermitian matrices and have eigenvectors corresponding to the Bloch basis, , we can naturally see how a rotation of the Bloch sphere about an arbitrary axis is described by
with the rotation generator given by
External links
Online Bloch sphere visualization by Konstantin Herb
See also
Atomic electron transition
Gyrovector space
Versors
Specific implementations of the Bloch sphere are enumerated under the qubit article.
Notes
References
Quantum mechanics
Projective geometry | Bloch sphere | Physics | 2,298 |
65,146,708 | https://en.wikipedia.org/wiki/BevQ | BevQ is a queue management mobile application developed by Faircode Technologies of Kochi, Kerala. It is provided by the Kerala State Beverages Corporation under Government of Kerala.
History
This app was released jointly by the Government of Kerala and the Kerala State Beverages Corporation in order to implement social distancing in the liquor stores of Kerala during the COVID-19 pandemic in Kerala and to reduce crowding. The BevQ app was released by Faircode Technologies on 27 May 2020 on the Google Play Store.
In January 2021, the app was withdrawn as bars had opened.
In June 2021, the Kerala CM committed to relaunching the app. It was reported that over 132,000 new users downloaded the app in the 48 hours after the announcement.
Achievements
The BevQ app, which works only in the state of Kerala, beat all other Indian food and drink apps in 2020 to see the highest growth in year-on-year sessions, according to the State of Mobile 2021 report by App Annie. The app even beat the likes of Domino’s, which is used all across India.
Around 300 government liquor shops and 900 private liquor shops were enlisted on the platform. More than 200 million unique users registered on the platform. About 250,000 tokens were issued per day.
References
External links
Developer website
Android (operating system) software
Mobile applications | BevQ | Technology | 284 |
4,404,576 | https://en.wikipedia.org/wiki/Homoaromaticity | Homoaromaticity, in organic chemistry, refers to a special case of aromaticity in which conjugation is interrupted by a single sp3 hybridized carbon atom. Although this sp3 center disrupts the continuous overlap of p-orbitals, traditionally thought to be a requirement for aromaticity, considerable thermodynamic stability and many of the spectroscopic, magnetic, and chemical properties associated with aromatic compounds are still observed for such compounds. This formal discontinuity is apparently bridged by p-orbital overlap, maintaining a contiguous cycle of π electrons that is responsible for this preserved chemical stability.
The concept of homoaromaticity was pioneered by Saul Winstein in 1959, prompted by his studies of the “tris-homocyclopropenyl” cation. Since the publication of Winstein's paper, much research has been devoted to understanding and classifying these molecules, which represent an additional class of aromatic molecules included under the continuously broadening definition of aromaticity.
To date, homoaromatic compounds are known to exist as cationic and anionic species, and some studies support the existence of neutral homoaromatic molecules, though these are less common. The 'homotropylium' cation (C8H9+) is perhaps the best studied example of a homoaromatic compound.
Overview
Naming
The term "homoaromaticity" derives from the structural similarity between homoaromatic compounds and the analogous homo-conjugated alkenes previously observed in the literature. The IUPAC Gold Book requires that Bis-, Tris-, etc. prefixes be used to describe homoaromatic compounds in which two, three, etc. sp3 centers separately interrupt conjugation of the aromatic system.
History
The concept of homoaromaticity has its origins in the debate over the non-classical carbonium ions that occurred in the 1950s. Saul Winstein, a famous proponent of the non-classical ion model, first described homoaromaticity while studying the 3-bicyclo[3.1.0]hexyl cation.
In a series of acetolysis experiments, Winstein et al. observed that the solvolysis reaction occurred empirically faster when the tosyl leaving group was in the equatorial position. The group ascribed this difference in reaction rates to the anchimeric assistance invoked by the "cis" isomer. This result thus supported a non-classical structure for the cation.
Winstein subsequently observed that this non-classical model of the 3-bicyclo[3.1.0]hexyl cation is analogous to the previously well-studied aromatic cyclopropenyl cation. Like the cyclopropenyl cation, positive charge is delocalized over three equivalent carbons containing two π electrons. This electronic configuration thus satisfies Huckel's rule (requiring 4n+2 π electrons) for aromaticity. Indeed, Winstein noticed that the only fundamental difference between this aromatic cyclopropenyl cation and his non-classical hexyl cation was the fact that, in the latter ion, conjugation is interrupted by three –CH2– units. The group thus proposed the name "tris-homocyclopropenyl"—the tris-homo counterpart to the cyclopropenyl cation.
Evidence for homoaromaticity
Criterion for homoaromaticity
The criterion for aromaticity has evolved as new developments and insights continue to contribute to our understanding of these remarkably stable organic molecules. The required characteristics of these molecules have thus remained the subject of some controversy. Classically, aromatic compounds were defined as planar molecules that possess a cyclically delocalized system of (4n+2) π electrons, satisfying Huckel's rule; a short illustrative electron-count sketch of this rule is given after the list below. Most importantly, these conjugated ring systems are known to exhibit enormous thermochemical stability relative to predictions based on localized resonance structures. Three important features seem to characterize aromatic compounds:
molecular structure (i.e. coplanarity: all contributing atoms in the same plane)
molecular energetics (i.e. increased thermodynamic stability)
spectroscopic and magnetic properties (i.e. magnetic field induced ring current)
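As a purely illustrative sketch of the electron counting behind Huckel's rule (a hypothetical helper function, not drawn from any cheminformatics library), the following Python snippet classifies a cyclic, fully conjugated π system by its π-electron count:

```python
# Illustrative electron-count check for Huckel's rule: a cyclic, conjugated pi system
# with 4n + 2 pi electrons is aromatic; one with 4n pi electrons is antiaromatic.
def huckel_classification(pi_electrons: int) -> str:
    if pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0:
        return "aromatic (4n + 2)"       # 2, 6, 10, ...
    if pi_electrons >= 4 and pi_electrons % 4 == 0:
        return "antiaromatic (4n)"       # 4, 8, 12, ...
    return "neither"

# Benzene, the tropylium cation and the homotropylium cation all delocalize 6 pi electrons.
print(huckel_classification(6))   # aromatic (4n + 2)
print(huckel_classification(4))   # antiaromatic (4n)
```

The homotropylium cation keeps its 6 delocalized π electrons despite the interrupting sp3 centre, which is why it retains aromatic-like stability.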
A number of exceptions to these conventional rules exist, however. Many molecules, including Möbius 4nπ electron species, pericyclic transition states, molecules in which delocalized electrons circulate in the ring plane or through σ (rather than π) bonds, many transition-metal sandwich molecules, and others have been deemed aromatic though they somehow deviate from the conventional parameters for aromaticity.
Consequently, the criterion for homoaromatic delocalization remains similarly ambiguous and somewhat controversial. The homotropylium cation, (C8H9+), though not the first example of a homoaromatic compound ever discovered, has proven to be the most studied of the compounds classified as homoaromatic, and is therefore often considered the classic example of homoaromaticity. By the mid-1980s, there were more than 40 reported substituted derivatives of the homotropylium cation, reflecting the importance of this ion in formulating our understanding of homoaromatic compounds.
Early evidence for homoaromaticity
After initial reports of a "homoaromatic" structure for the tris-homocyclopropenyl cation were published by Winstein, many groups began to report observations of similar compounds. One of the best studied of these molecules is the homotropylium cation, the parent compound of which was first isolated as a stable salt by Pettit, et al. in 1962, when the group reacted cyclooctatetraene with strong acids. Much of the early evidence for homoaromaticity comes from observations of unusual NMR properties associated with this molecule.
NMR spectroscopy studies
While characterizing the compound resulting from protonation of cyclooctatetraene by 1H NMR spectroscopy, the group observed that the resonance corresponding to two protons bonded to the same methylene bridge carbon exhibited an astonishing degree of separation in chemical shift.
From this observation, Pettit, et al. concluded that the classical structure of the cyclooctatrienyl cation must be incorrect. Instead, the group proposed the structure of the bicyclo[5.1.0]octadienyl compound, theorizing that the cyclopropane bond located on the interior of the eight-membered ring must be subject to considerable delocalization, thus explaining the dramatic difference in observed chemical shift. Upon further consideration, Pettit was inclined to represent the compound as the "homotropylium ion," which shows the "internal cyclopropane" bond totally replaced by electron delocalization. This structure shows how delocalization is cyclic and involves 6 π electrons, consistent with Huckel's rule for aromaticity. The magnetic field of the NMR could thus induce a ring current in the ion, responsible for the significant differences in resonance between the exo and endo protons of this methylene bridge. Pettit, et al. thus emphasized the remarkable similarity between this compound and the aromatic tropylium ion, describing a new "homo-counterpart" to an aromatic species already known, precisely as predicted by Winstein.
Subsequent NMR studies undertaken by Winstein and others sought to evaluate the properties of metal carbonyl complexes with the homotropylium ion. Comparison between a molybdenum-complex and an iron-complex proved particularly fruitful. Molybdenum tricarbonyl was expected to coordinate to the homotropylium cation by accepting 6 π electrons, thereby preserving the homoaromatic features of the complex. By contrast, iron tricarbonyl was expected to coordinate to the cation by accepting only 4 π electrons from the homotropylium ion, creating a complex in which the electrons of the cation are localized. Studies of these complexes by 1H NMR spectroscopy showed a large difference in chemical shift values for methylene protons of the Mo-complex, consistent with a homoaromatic structure, but detected virtually no comparable difference in resonance for the same protons in the Fe-complex.
UV spectroscopy studies
An important piece of early evidence in support of the homotropylium cation structure that did not rely on the magnetic properties of the molecule involved the acquisition of its UV spectrum. Winstein et al. determined that the absorption maxima for the homotropylium cation exhibited a considerably shorter wavelength than would be predicted for the classical cyclooctatrienyl cation or the bicyclo[5.1.0]octadienyl compound with the fully formed internal cyclopropane bond (and a localized electronic structure). Instead, the UV spectrum most resembled that of the aromatic tropylium ion. Further calculations allowed Winstein to determine that the bond order between the two carbon atoms adjacent to the outlying methylene bridge is comparable to that of the π-bond separating the corresponding carbon atoms in the tropylium cation. Although this experiment proved to be highly illuminating, UV spectra are generally considered to be poor indicators of aromaticity or homoaromaticity.
More recent evidence for homoaromaticity
More recently, work has been done to investigate the structure of the purportedly homoaromatic homotropylium ion by employing various other experimental techniques and theoretical calculations. One key experimental study involved analysis of a substituted homotropylium ion by X-ray crystallography. These crystallographic studies have been used to demonstrate that the internuclear distance between the atoms at the base of the cyclopropenyl structure is indeed longer than would be expected for a normal cyclopropane molecule, while the external bonds appear to be shorter, indicating involvement of the internal cyclopropane bond in charge delocalization.
Molecular orbital description
The molecular orbital explanation of the stability of homoaromaticity has been widely discussed, with numerous diverse theories mostly focused on the homotropenylium cation as a reference. R. C. Haddon initially proposed a Möbius model in which the exterior electrons of the sp3-hybridized methylene bridge carbon (C2) back-donate to the adjacent carbons to stabilize the C1–C3 distance.
Perturbation molecular orbital theory
Homoaromaticity can better be explained using Perturbation Molecular Orbital Theory (PMO) as described in a 1975 study by Robert C. Haddon. The homotropenylium cation can be considered as a perturbed version of the tropenylium cation due to the addition of a homoconjugate linkage interfering with the resonance of the original cation.
First-order effects
The most important factor in influencing homoaromatic character is the addition of a single homoconjugate linkage into the parent aromatic compound. The location of the homoconjugate bond is not important as all homoaromatic species can be derived from aromatic compounds that possess symmetry and equal bond order between all carbons. The insertion of a homoconjugate linkage perturbs the π-electron density an amount δβ, which depending on the ring size, must be greater than 0 and less than 1, where 0 represents no perturbation and 1 represents total loss of aromaticity (destabilization equivalent to the open chain form). It is believed that with increasing ring size, the resonance stabilization of homoaromaticity is offset by the strain in forming the homoconjugate bridge. In fact, the maximum ring size for homoaromaticity is fairly low as a 16-membered annulene ring favours the formation of the aromatic dication over the strained bridged homocation.
Second-order effects
Second homoconjugate linkage
A significant second-order effect on the Perturbation Molecular Orbital model of homoaromaticity is the addition of a second homoconjugate linkage and its influence on stability. The effect is often a doubling of the instability brought about by the addition of a single homoconjugate linkage, although there is an additional term that depends on the proximity of the two linkages. In order to minimize δβ and thus keep the coupling term to a minimum, bishomoaromatic compounds form depending on the conformation of greatest stability by resonance and smallest steric hindrance. The synthesis of the 1,3-bishomotropenylium cation by protonating cis-bicyclo[6.1.0]nona-2,4,6-triene agrees with theoretical calculations and maximizes stability by forming the two methylene bridges at the 1st and 3rd carbons.
Substituents
The addition of a substituent to a homoaromatic compound has a large influence over the stability of the compound. Depending on the relative locations of the substituent and the homoconjugate linkage, the substituent can have either a stabilizing or a destabilizing effect. This interaction is best demonstrated by looking at a substituted homotropenylium cation. If an inductively electron-donating group is attached to the cation at the 1st or 3rd carbon position, it has a stabilizing effect, improving the homoaromatic character of the compound. However, if this same substituent is attached at the 2nd or 4th carbon, the interaction between the substituent and the homoconjugate bridge has a destabilizing effect. Therefore, protonation of methyl- or phenyl-substituted cyclooctatetraenes will result in the 1-substituted isomer of the homotropenylium cation.
Examples of homoaromatic compounds
Following the discovery of the first homoaromatic compounds, research has gone into synthesizing new homoaromatic compounds that possess similar stability to their aromatic parent compounds. There are several classes of homoaromatic compounds, each of which have been predicted theoretically and proven experimentally.
Cationic homoaromatics
The most established and well-known homoaromatic species are cationic homoaromatic compounds. As stated earlier, the homotropenylium cation is one of the most studied homoaromatic compounds. Many homoaromatic cationic compounds use as a basis a cyclopropenyl cation, a tropylium cation, or a cyclobutadiene dication as these compounds exhibit strong aromatic character.
In addition to the homotropylium cation, another well established cationic homoaromatic compound is the norbornen-7-yl cation, which has been shown to be strongly homoaromatic, proven both theoretically and experimentally.
An intriguing case of σ-bishomoaromaticity can be found in the dications of pagodanes. In these 4-center-2-electron systems the delocalization happens in the plane that is defined by the four carbon atoms (prototype for the phenomenon of σ-aromaticity is cyclopropane which gains about 11.3 kcal mol−1 stability from the effect). The dications are accessible either via oxidation of pagodane or via oxidation of the corresponding bis-seco-dodecahedradiene:
Reduction to the corresponding six-electron dianions has so far not been possible.
Neutral homoaromatics
There are many classes of neutral homoaromatic compounds although there is much debate as to whether they truly exhibit homoaromatic character or not.
One class of neutral homoaromatics are called monohomoaromatics, one of which is cycloheptatriene, and numerous complex monohomoaromatics have been synthesized. One particular example is a 60-carbon fulleroid derivative that has a single methylene bridge. UV and NMR analysis have shown that the aromatic character of this modified fulleroid is not disrupted by the addition of a homoconjugate linkage, therefore this compound is definitively homoaromatic.
Substituted neutral barbaralane derivatives (homoannulenes) have been disclosed as stable ground state homoaromatic molecules in 2023. Evidence for the homoaromatic character in this class of molecules stems from bond length analysis (X-Ray structural analysis) as well as shifts in the NMR spectrum. The homoannulenes also act as photoswitches by which means a local 6π homoaromaticity can be switched to a global 10π homoaromaticity.
Bishomoaromatics
It was long considered that the best examples of neutral homoaromatics are bishomoaromatics such as barrelene and semibullvalene. First synthesized in 1966, semibullvalene has a structure that should lend itself well to homoaromaticity, although there has been much debate over whether semibullvalene derivatives can provide a truly delocalized, ground-state neutral homoaromatic compound. In an effort to further stabilize the delocalized transition structure of its degenerate Cope rearrangement by substituting semibullvalene with electron-donating and electron-accepting groups, it has been found that the activation barrier to the rearrangement can be lowered, but not eliminated. However, with the introduction of ring strain into the molecule, aimed at destabilizing the localized ground-state structures through the strategic addition of cyclic annulations, a delocalized homoaromatic ground-state structure can indeed be achieved.
Of the neutral homoaromatics, the compounds best believed to exhibit neutral homoaromaticity are boron containing compounds of 1,2-diboretane and its derivatives. Substituted diboretanes are shown to have a much greater stabilization in the delocalized state over the localized one, giving strong indications of homoaromaticity. When electron-donating groups are attached to the two boron atoms, the compound favors a classical model with localized bonds. Homoaromatic character is best seen when electron-withdrawing groups are bonded to the boron atoms, causing the compound to adopt a nonclassical, delocalized structure.
Trishomoaromatics
As the name suggests, trishomoaromatics contain one additional methylene bridge compared to bishomoaromatics, and therefore three of these homoconjugate bridges in total. As with semibullvalene, there is still much debate as to the extent of the homoaromatic character of trishomoaromatics. While theoretically homoaromatic, these compounds show a delocalization stabilization of no more than 5% of that of benzene.
Anionic homoaromatics
Unlike neutral homoaromatic compounds, anionic homoaromatics are widely accepted to exhibit "true" homoaromaticity. These anionic compounds are often prepared from their neutral parent compounds through lithium metal reduction. 1,2-diboretanide derivatives show strong homoaromatic character through their three-atom (boron, boron, carbon), two-electron bond, which contains shorter C-B bonds than in the neutral classical analogue. These 1,2-diboretanides can be expanded to larger ring sizes with different substituents and all contain some degree of homoaromaticity.
Anionic homoaromaticity can also be seen in dianionic bis-diazene compounds, which contain a four-atom (four nitrogens), six-electron center. Experimental results have shown a shortening of the transannular nitrogen-nitrogen distance, demonstrating that the dianionic bis-diazene is a type of anionic bishomoaromatic compound. A peculiar feature of these systems is that the cyclic electron delocalization takes place in the σ-plane defined by the four nitrogens. These bis-diazene dianions are therefore the first examples of 4-center-6-electron σ-bishomoaromaticity. The corresponding 2-electron σ-bishomoaromatic systems were realized in the form of the pagodane dications (see above).
Antihomoaromaticity
There are also reports of antihomoaromatic compounds. Just as aromatic compounds exhibit exceptional stability, antiaromatic compounds, which deviate from Huckel's rule and contain a closed loop of 4n π electrons, are relatively unstable. The bridged bicyclo[3.2.1]octa-3,6-dien-2-yl cation contains only 4 π electrons, and is therefore "bishomoantiaromatic." A series of theoretical calculations confirm that it is indeed less stable than the corresponding allyl cation.
Similarly, a substituted bicyclo[3.2.1]octa-3,6-dien-2-yl cation (the 2-(4'-fluorophenyl)bicyclo[3.2.1]octa-3,6-dien-2-yl cation) was also shown to be antiaromatic when compared to its corresponding allyl cation, as corroborated by theoretical calculations as well as by NMR analysis.
External links
Homoaromaticity on the Gold Book
References
Physical organic chemistry | Homoaromaticity | Chemistry | 4,348 |
25,007,304 | https://en.wikipedia.org/wiki/History%20of%20botany | The history of botany examines the human effort to understand life on Earth by tracing the historical development of the discipline of botany—that part of natural science dealing with organisms traditionally treated as plants.
Rudimentary botanical science began with empirically based plant lore passed from generation to generation in the oral traditions of paleolithic hunter-gatherers. The first writings that show human curiosity about plants themselves, rather than the uses that could be made of them, appear in ancient Greece and ancient India. In Ancient Greece, the teachings of Aristotle's student Theophrastus at the Lyceum in ancient Athens in about 350 BC are considered the starting point for Western botany. In ancient India, the Vṛkṣāyurveda, attributed to Parashara, is also considered one of the earliest texts to describe various branches of botany.
In Europe, botanical science was soon overshadowed by a medieval preoccupation with the medicinal properties of plants that lasted more than 1000 years. During this time, the medicinal works of classical antiquity were reproduced in manuscripts and books called herbals. In China and the Arab world, the Greco-Roman work on medicinal plants was preserved and extended.
In Europe, the Renaissance of the 14th–17th centuries heralded a scientific revival during which botany gradually emerged from natural history as an independent science, distinct from medicine and agriculture. Herbals were replaced by floras: books that described the native plants of local regions. The invention of the microscope stimulated the study of plant anatomy, and the first carefully designed experiments in plant physiology were performed. With the expansion of trade and exploration beyond Europe, the many new plants being discovered were subjected to an increasingly rigorous process of naming, description, and classification.
Progressively more sophisticated scientific technology has aided the development of contemporary botanical offshoots in the plant sciences, ranging from the applied fields of economic botany (notably agriculture, horticulture and forestry), to the detailed examination of the structure and function of plants and their interaction with the environment over many scales from the large-scale global significance of vegetation and plant communities (biogeography and ecology) through to the small scale of subjects like cell theory, molecular biology and plant biochemistry.
Introduction
Botany (from the Greek botanē, meaning "pasture", "herbs", "grass", or "fodder"; Medieval Latin botanica, a herb or plant) and zoology are, historically, the core disciplines of biology whose history is closely associated with the natural sciences of chemistry, physics and geology. A distinction can be made between botanical science in a pure sense, as the study of plants themselves, and botany as applied science, which studies the human use of plants. Early natural history divided pure botany into three main streams: morphology-classification, anatomy and physiology – that is, external form, internal structure, and functional operation. The most obvious topics in applied botany are horticulture, forestry and agriculture, although there are many others like weed science, plant pathology, floristry, pharmacognosy, economic botany and ethnobotany which lie outside modern courses in botany. Since the origin of botanical science there has been a progressive increase in the scope of the subject as technology has opened up new techniques and areas of study. Modern molecular systematics, for example, entails the principles and techniques of taxonomy, molecular biology, computer science and more.
Within botany, there are a number of sub-disciplines that focus on particular plant groups, each with their own range of related studies (anatomy, morphology etc.). Included here are: phycology (algae), pteridology (ferns), bryology (mosses and liverworts) and palaeobotany (fossil plants) and their histories are treated elsewhere (see side bar). To this list can be added mycology, the study of fungi, which were once treated as plants, but are now ranked as a unique kingdom.
Ancient knowledge
Nomadic hunter-gatherer societies passed on, by oral tradition, what they knew (their empirical observations) about the different kinds of plants that they used for food, shelter, poisons, medicines, for ceremonies and rituals etc. The uses of plants by these pre-literate societies influenced the way the plants were named and classified—their uses were embedded in folk-taxonomies, the way they were grouped according to use in everyday communication. The nomadic life-style was drastically changed when settled communities were established in about twelve centres around the world during the Neolithic Revolution which extended from about 10,000 to 2500 years ago depending on the region. With these communities came the development of the technology and skills needed for the domestication of plants and animals and the emergence of the written word provided evidence for the passing of systematic knowledge and culture from one generation to the next.
Plant lore and plant selection
During the Neolithic Revolution, plant knowledge increased most obviously through the use of plants for food and medicine. All of today's staple foods were domesticated in prehistoric times as a gradual process of selection of higher-yielding varieties took place, possibly unknowingly, over hundreds to thousands of years. Legumes were cultivated on all continents but cereals made up most of the regular diet: rice in East Asia, wheat and barley in the Middle East, and maize in Central and South America. By Greco-Roman times, popular food plants of today, including grapes, apples, figs, and olives, were being listed as named varieties in early manuscripts. Botanical authority William Stearn has observed that "cultivated plants are mankind's most vital and precious heritage from remote antiquity".
It is also from the Neolithic, in about 3000 BC, that we glimpse the first known illustrations of plants and read descriptions of impressive gardens in Egypt. However protobotany, the first pre-scientific written record of plants, did not begin with food; it was born out of the medicinal literature of Egypt, China, Mesopotamia and India. Botanical historian Alan Morton notes that agriculture was the occupation of the poor and uneducated, while medicine was the realm of socially influential shamans, priests, apothecaries, magicians and physicians, who were more likely to record their knowledge for posterity.
Early botany
Ancient India
Early Indian texts, like the Vedas mention plants with magical properties. The Sushruta Samhita, describes over 700 plants used for medicinal purposes. This text reflects a level of medical knowledge and practice comparable to ancient Egypt. Notably, the Sushruta Samhita categorizes food plants based on their utilized parts, taste, and dietary effects. While lacking detailed botanical descriptions beyond occasional habitat or foliage references, the text demonstrates close observation of plants. This is evident in the classification of sugarcane varieties and the listing of fungi based on their growth medium. The Charaka Samhitā, foundational Ayurvedic text, presents the earliest known plant classification system in India, using habitat, presence of flowers/fruits, and reproduction as criteria.
Classical antiquity
Classical Greece
Ancient Athens, of the 6th century BC, was the busy trade centre at the confluence of Egyptian, Mesopotamian and Minoan cultures at the height of Greek colonisation of the Mediterranean. The philosophical thought of this period ranged freely through many subjects. Empedocles (490–430 BC) foreshadowed Darwinian evolutionary theory in a crude formulation of the mutability of species and natural selection. The physician Hippocrates (460–370 BC) avoided the prevailing superstition of his day and approached healing by close observation and the test of experience. At this time, a genuine non-anthropocentric curiosity about plants emerged. The major works written about plants extended beyond the description of their medicinal uses to the topics of plant geography, morphology, physiology, nutrition, growth and reproduction.
Theophrastus and the origin of botanical science
Foremost among the scholars studying botany was Theophrastus of Eressus (c. 371–287 BC), who has been frequently referred to as the "Father of Botany". He was a student and close friend of Aristotle (384–322 BC) and succeeded him as head of the Lyceum (an educational establishment like a modern university) in Athens, with its tradition of peripatetic philosophy. Aristotle's special treatise on plants is now lost, although there are many botanical observations scattered throughout his other writings (these were assembled by Christian Wimmer in 1836), but they give little insight into his botanical thinking. The Lyceum prided itself on a tradition of systematic observation of causal connections, critical experiment and rational theorizing. Theophrastus challenged the superstitious medicine employed by the physicians of his day, called rhizotomi, and also the control over medicine exerted by priestly authority and tradition. Together with Aristotle, he had tutored Alexander the Great, whose military conquests were carried out with all the scientific resources of the day, the Lyceum garden probably containing many botanical trophies collected during his campaigns as well as other explorations in distant lands. It was in this garden that Theophrastus gained much of his plant knowledge.
Enquiry into Plants and Causes of Plants
Theophrastus's major botanical works were the Enquiry into Plants (Historia Plantarum) and Causes of Plants (Causae Plantarum), which were his lecture notes for the Lyceum. The opening sentence of the Enquiry reads like a botanical manifesto.
The Enquiry is 9 books of "applied" botany dealing with the forms and classification of plants and economic botany, examining the techniques of agriculture (relationship of crops to soil, climate, water and habitat) and horticulture. He described some 500 plants in detail, often including descriptions of habitat and geographic distribution, and he recognised some plant groups that can be recognised as modern-day plant families. Some names he used, like Crataegus, Daucus and Asparagus have persisted until today. His second book Causes of Plants covers plant growth and reproduction (akin to modern physiology). Like Aristotle, he grouped plants into "trees", "undershrubs", "shrubs" and "herbs" but he also made several other important botanical distinctions and observations. He noted that plants could be annuals, perennials and biennials, they were also either monocotyledons or dicotyledons and he also noticed the difference between determinate and indeterminate growth and details of floral structure including the degree of fusion of the petals, position of the ovary and more. These lecture notes of Theophrastus comprise the first clear exposition of the rudiments of plant anatomy, physiology, morphology and ecology — presented in a way that would not be matched for another eighteen centuries.
Pedanius Dioscorides
A full synthesis of ancient Greek pharmacology was compiled in De Materia Medica c. 60 AD by Pedanius Dioscorides (c. 40-90 AD) who was a Greek physician with the Roman army. This work proved to be the definitive text on medicinal herbs, both oriental and occidental, for fifteen hundred years until the dawn of the European Renaissance being slavishly copied again and again throughout this period. Though rich in medicinal information with descriptions of about 600 medicinal herbs, the botanical content of the work was extremely limited.
Ancient Rome
The Romans contributed little to the foundations of botanical science laid by the ancient Greeks, but they made a sound contribution to our knowledge of applied botany in the form of agriculture. In works titled De Re Rustica, four Roman writers contributed to a compendium, the Scriptores Rei Rusticae, published from the Renaissance on, which set out the principles and practice of agriculture. These authors were Cato (234–149 BC), Varro (116–27 BC) and, in particular, Columella (4–70 AD) and Palladius (4th century AD).
Pliny the Elder
Roman encyclopaedist Pliny the Elder (23–79 AD) deals with plants in Books 12 to 26 of his highly influential 37-volume Naturalis Historia, in which he frequently quotes Theophrastus but with a lack of botanical insight, although he does, nevertheless, draw a distinction between true botany on the one hand, and farming and medicine on the other. It is estimated that at the time of the Roman Empire between 1300 and 1400 plants had been recorded in the West.
Ancient China
In ancient China, lists of different plants and herb concoctions for pharmaceutical purposes date back to at least the time of the Warring States (481 BC-221 BC). Many Chinese writers over the centuries contributed to the written knowledge of herbal pharmaceutics. The Chinese dictionary-encyclopaedia Erh Ya probably dates from about 300 BC and describes about 334 plants classed as trees or shrubs, each with a common name and illustration. The Han Dynasty (202 BC-220 AD) saw the notable Huangdi Neijing and the work of the famous pharmacologist Zhang Zhongjing.
Medieval knowledge
Medicinal plants of the early Middle Ages
In Western Europe, after Theophrastus, botany passed through a bleak period of 1800 years when little progress was made and, indeed, many of the early insights were lost. As Europe entered the Middle Ages (5th to 15th centuries), China, India and the Arab world enjoyed a golden age.
Medieval China
Chinese philosophy had followed a similar path to that of the ancient Greeks. Between 100 and 1700 AD, many new works on pharmaceutical botany were produced. The 11th century scientists and statesmen Su Song and Shen Kuo compiled learned treatises on natural history, emphasising herbal medicine. Among the pharmaceutical botany works were encyclopaedic accounts and treatises compiled for the Chinese imperial court. These were free of superstition and myth with carefully researched descriptions and nomenclature; they included cultivation information and notes on economic and medicinal uses — and even elaborate monographs on ornamental plants. But there was no experimental method and no analysis of the plant sexual system, nutrition, or anatomy.
Medieval India
In India, simple artificial plant classification became more botanical with the work of Parashara (c. 400 – c. 500 AD), the author of Vṛksayurveda (the science of life of trees). He made close observations of cells and leaves and divided plants into Dvimatrka (Dicotyledons) and Ekamatrka (Monocotyledons). He developed a more elaborate classification based largely on morphological considerations such as floral characters, arranging plants by their resemblances and differences into groupings (ganas) akin to modern floral families: Samiganiya (Fabaceae), Puplikagalniya (Rutaceae), Svastikaganiya (Cruciferae), Tripuspaganiya (Cucurbitaceae), Mallikaganiya (Apocynaceae), and Kurcapuspaganiya (Asteraceae). Important medieval Indian works of plant physiology include the Prthviniraparyam of Udayana, Nyayavindutika of Dharmottara, Saddarsana-samuccaya of Gunaratna, and Upaskara of Sankaramisra.
Islamic Golden Age
The 400-year period from the 9th to 13th centuries AD was the Islamic Renaissance, a time when Islamic culture and science thrived. Greco-Roman texts were preserved, copied and extended although new texts always emphasised the medicinal aspects of plants. Kurdish biologist Ābu Ḥanīfah Āḥmad ibn Dawūd Dīnawarī (828–896 AD) is known as the founder of Arabic botany; his Kitâb al-nabât ('Book of Plants') describes 637 species, discussing plant development from germination to senescence and including details of flowers and fruits. The Mutazilite philosopher and physician Ibn Sina (Avicenna) (c. 980–1037 AD) was another influential figure, his The Canon of Medicine being a landmark in the history of medicine treasured until the Enlightenment.
The Silk Road
Following the fall of Constantinople (1453), the newly expanded Ottoman Empire welcomed European embassies in its capital, which in turn became the sources of plants from those regions to the east which traded with the empire. In the following century, twenty times as many plants entered Europe along the Silk Road as had been transported in the previous two thousand years, mainly as bulbs. Others were acquired primarily for their alleged medicinal value. Initially, Italy benefited from this new knowledge, especially Venice, which traded extensively with the East. From there, these new plants rapidly spread to the rest of Western Europe. By the middle of the sixteenth century, there was already a flourishing export trade of various bulbs from Turkey to Europe.
The Age of Herbals
In the European Middle Ages of the 15th and 16th centuries, the lives of European citizens were based around agriculture but when printing arrived, with movable type and woodcut illustrations, it was not treatises on agriculture that were published, but lists of medicinal plants with descriptions of their properties or "virtues". These first plant books, known as herbals showed that botany was still a part of medicine, as it had been for most of ancient history. Authors of herbals were often curators of university gardens, and most herbals were derivative compilations of classic texts, especially De Materia Medica.
However, the need for accurate and detailed plant descriptions meant that some herbals were more botanical than medicinal.
German Otto Brunfels's (1464–1534) Herbarum Vivae Icones (1530) contained descriptions of about 47 species new to science combined with accurate illustrations. His fellow countryman Hieronymus Bock's (1498–1554) Kreutterbuch of 1539 described plants he found in nearby woods and fields and these were illustrated in the 1546 edition. However, it was Valerius Cordus (1515–1544) who pioneered the formal botanical description that detailed both flowers and fruits, some anatomy including the number of chambers in the ovary, and the type of ovule placentation. He also made observations on pollen and distinguished between inflorescence types. His five-volume Historia Plantarum was published in 1561–1563, about 18 years after his early death at the age of 29. In England, William Turner (1515–1568) in his Libellus De Re Herbaria Novus (1538) published names, descriptions and localities of many native British plants and in Holland Rembert Dodoens (1517–1585), in Stirpium Historiae (1583), included descriptions of many new species from the Netherlands in a scientific arrangement.
Herbals contributed to botany by setting in train the science of plant description, classification, and botanical illustration. Up to the 17th century, botany and medicine were one and the same but those books emphasising medicinal aspects eventually omitted the plant lore to become modern pharmacopoeias; those that omitted the medicine became more botanical and evolved into the modern compilations of plant descriptions we call Floras. These were often backed by specimens deposited in a herbarium which was a collection of dried plants that verified the plant descriptions given in the Floras. The transition from herbal to Flora marked the final separation of botany from medicine.
The Renaissance and Age of Enlightenment (1550–1800)
The revival of learning during the European Renaissance renewed interest in plants. The church, feudal aristocracy and an increasingly influential merchant class that supported science and the arts, now jostled in a world of increasing trade. Sea voyages of exploration returned botanical treasures to the large public, private, and newly established botanic gardens, and introduced an eager population to novel crops, drugs and spices from Asia, the East Indies and the New World.
The number of scientific publications increased. In England, for example, scientific communication and causes were facilitated by learned societies like the Royal Society (founded in 1660) and the Linnean Society (founded in 1788); there were also the support and activities of botanical institutions like the Jardin du Roi in Paris, Chelsea Physic Garden, Royal Botanic Gardens Kew, and the Oxford and Cambridge Botanic Gardens, as well as the influence of renowned private gardens and wealthy entrepreneurial nurserymen. By the early 17th century the number of plants described in Europe had risen to about 6000. The 18th-century Enlightenment values of reason and science, coupled with new voyages to distant lands, instigated another phase of encyclopaedic plant identification, nomenclature, description and illustration, with "flower painting" possibly at its best in this period of history. Plant trophies from distant lands decorated the gardens of Europe's powerful and wealthy in a period of enthusiasm for natural history, especially botany (a preoccupation sometimes referred to as "botanophilia") that is never likely to recur. Often such exotic new plant imports (primarily from Turkey), when they first appeared in print in English, lacked common names in the language.
During the 18th century, botany was one of the few sciences considered appropriate for genteel educated women. Around 1760, with the popularization of the Linnaean system, botany became much more widespread among educated women who painted plants, attended classes on plant classification, and collected herbarium specimens although emphasis was on the healing properties of plants rather than plant reproduction which had overtones of sexuality. Women began publishing on botanical topics and children's books on botany appeared by authors like Charlotte Turner Smith. Cultural authorities argued that education through botany created culturally and scientifically aware citizens, part of the thrust for 'improvement' that characterised the Enlightenment. However, in the early 19th century with the recognition of botany as an official science, women were again excluded from the discipline. Compared to other sciences, however, in botany the number of female researchers, collectors, or illustrators has always been remarkably high.
Botanical gardens and herbaria
Public and private gardens have always been strongly associated with the historical unfolding of botanical science.
Early botanical gardens were physic gardens, repositories for the medicinal plants described in the herbals. As they were generally associated with universities or other academic institutions, the plants were also used for study. The directors of these gardens were eminent physicians with an educational role as "scientific gardeners" and it was staff of these institutions that produced many of the published herbals.
The botanical gardens of the modern tradition were established in northern Italy, the first being at Pisa (1544), founded by Luca Ghini (1490–1556). Although part of a medical faculty, the first chair of simples (medicinal plants), essentially a chair in botany, was established in Padua in 1533. Then in 1534, Ghini became Reader in simples at Bologna University, where Ulisse Aldrovandi established a similar garden in 1568 (see below). Collections of pressed and dried specimens were called a hortus siccus (garden of dry plants) and the first accumulation of plants in this way (including the use of a plant press) is attributed to Ghini. Buildings called herbaria housed these specimens mounted on card with descriptive labels. Stored in cupboards in systematic order, they could be preserved in perpetuity and easily transferred or exchanged with other institutions, a taxonomic procedure that is still used today.
By the 18th century, the physic gardens had been transformed into "order beds" that demonstrated the classification systems that were being devised by botanists of the day — but they also had to accommodate the influx of curious, beautiful and new plants pouring in from voyages of exploration that were associated with European colonial expansion.
From Herbal to Flora
Plant classification systems of the 17th and 18th centuries now related plants to one another and not to man, marking a return to the non-anthropocentric botanical science promoted by Theophrastus over 1500 years before. In England, various herbals in either Latin or English were mainly compilations and translations of continental European works, of limited relevance to the British Isles. This included the rather unreliable work of Gerard (1597). The first systematic attempt to collect information on British plants was that of Thomas Johnson (1629), who was later to issue his own revision of Gerard's work (1633–1636).
However, Johnson was not the first apothecary or physician to organise botanical expeditions to systematise their local flora. In Italy, Ulisse Aldrovandi (1522 – 1605) organised an expedition to the Sibylline mountains in Umbria in 1557, and compiled a local Flora. He then began to disseminate his findings amongst other European scholars, forming an early network of knowledge sharing "molti amici in molti luoghi" (many friends in many places), including Charles de l'Écluse (Clusius) (1526 – 1609) at Montpellier and Jean de Brancion at Malines. Between them, they started developing Latin names for plants, in addition to their common names. The exchange of information and specimens between scholars was often associated with the founding of botanical gardens (above), and to this end Aldrovandi founded one of the earliest at his university in Bologna, the Orto Botanico di Bologna in 1568.
In France, Clusius journeyed throughout most of Western Europe, making discoveries in the vegetable kingdom along the way. He compiled Floras of Spain (1576) and of Austria and Hungary (1583). He was the first to propose dividing plants into classes. Meanwhile, in Switzerland, from 1554, Conrad Gessner (1516 – 1565) made regular explorations of the Swiss Alps from his native Zurich and discovered many new plants. He proposed that there were groups or genera of plants. He said that each genus was composed of many species and that these were defined by similar flowers and fruits. This principle of organization laid the groundwork for future botanists. He wrote his important Historia Plantarum shortly before his death. At Malines, in Flanders, Clusius established and maintained the botanical gardens of Jean de Brancion from 1568 to 1573, and there first encountered tulips.
This approach coupled with the new Linnaean system of binomial nomenclature resulted in plant encyclopaedias without medicinal information called Floras that meticulously described and illustrated the plants growing in particular regions. The 17th century also marked the beginning of experimental botany and application of a rigorous scientific method, while improvements in the microscope launched the new discipline of plant anatomy whose foundations, laid by the careful observations of Englishman Nehemiah Grew and Italian Marcello Malpighi, would last for 150 years.
Botanical exploration
More new lands were opening up to European colonial powers, the botanical riches being returned to European botanists for description. This was a romantic era of botanical explorers, intrepid plant hunters and gardener-botanists. Significant botanical collections came from: the West Indies (Hans Sloane (1660–1753)); China (James Cunningham); the spice islands of the East Indies (Moluccas, George Rumphius (1627–1702)); China and Mozambique (João de Loureiro (1717–1791)); West Africa (Michel Adanson (1727–1806)) who devised his own classification scheme and forwarded a crude theory of the mutability of species; Canada, Hebrides, Iceland, New Zealand by Captain James Cook's chief botanist Joseph Banks (1743–1820).
Classification and morphology
By the middle of the 18th century, the botanical booty resulting from the era of exploration was accumulating in gardens and herbaria – and it needed to be systematically catalogued. This was the task of the taxonomists, the plant classifiers.
Plant classifications have changed over time from "artificial" systems based on general habit and form, to pre-evolutionary "natural" systems expressing similarity using one to many characters, leading to post-evolutionary "natural" systems that use characters to infer evolutionary relationships.
Italian physician Andrea Caesalpino (1519–1603) studied medicine and taught botany at the University of Pisa for about 40 years, eventually becoming Director of the Botanic Garden of Pisa from 1554 to 1558. His sixteen-volume De Plantis (1583) described 1500 plants and his herbarium of 260 pages and 768 mounted specimens still remains. Caesalpino proposed classes based largely on the detailed structure of the flowers and fruit; he also applied the concept of the genus. He was the first to try and derive principles of natural classification reflecting the overall similarities between plants and he produced a classification scheme well in advance of its day. Gaspard Bauhin (1560–1624) produced two influential publications, the Prodromus Theatri Botanici (1620) and the Pinax Theatri Botanici (1623). These brought order to the 6000 species now described, and in the latter he used binomials and synonyms that may well have influenced Linnaeus's thinking. He also insisted that taxonomy should be based on natural affinities.
To sharpen the precision of description and classification, Joachim Jung (1587–1657) compiled a much-needed botanical terminology which has stood the test of time. English botanist John Ray (1623–1705) built on Jung's work to establish the most elaborate and insightful classification system of the day. His observations started with the local plants of Cambridge, where he lived, with a catalogue of Cambridge plants (1660) that he later expanded into a catalogue of English plants, essentially the first British Flora. His Historia Plantarum (1686, 1688, 1704) provided a step towards a world Flora, as he included more and more plants from his travels, first on the continent and then beyond. He extended Caesalpino's natural system with a more precise definition of the higher classification levels, deriving many modern families in the process, and asserted that all parts of plants were important in classification. He recognised that variation arises from both internal (genotypic) and external environmental (phenotypic) causes and that only the former was of taxonomic significance. He was also among the first experimental physiologists. The Historia Plantarum can be regarded as the first botanical synthesis and textbook for modern botany. According to botanical historian Alan Morton, Ray "influenced both the theory and the practice of botany more decisively than any other single person in the latter half of the seventeenth century". Ray's family system was later extended by Pierre Magnol (1638–1715), and Joseph de Tournefort (1656–1708), a student of Magnol, achieved notoriety for his botanical expeditions, his emphasis on floral characters in classification, and for reviving the idea of the genus as the basic unit of classification.
Above all, it was the Swedish naturalist Carl Linnaeus (1707–1778) who eased the task of plant cataloguing. He adopted a sexual system of classification using stamens and pistils as important characters. Among his most important publications were Systema Naturae (1735), Genera Plantarum (1737), and Philosophia Botanica (1751), but it was in his Species Plantarum (1753) that he gave every species a binomial, thus setting the path for the future accepted method of designating the names of all organisms. Linnaean thought and books dominated the world of taxonomy for nearly a century. His sexual system was later elaborated by Bernard de Jussieu (1699–1777), whose nephew Antoine-Laurent de Jussieu (1748–1836) extended it yet again to include about 100 orders (present-day families). Frenchman Michel Adanson (1727–1806), in his Familles des Plantes (1763, 1764), apart from extending the current system of family names, emphasized that a natural classification must be based on a consideration of all characters, even though these may later be given different emphasis according to their diagnostic value for the particular plant group. Adanson's method has, in essence, been followed to this day.
18th century plant taxonomy bequeathed to the 19th century a precise binomial nomenclature and botanical terminology, a system of classification based on natural affinities, and a clear idea of the ranks of family, genus and species — although the taxa to be placed within these ranks remains, as always, the subject of taxonomic research.
Anatomy
In the first half of the 18th century, botany was beginning to move beyond descriptive science into experimental science. Although the microscope was invented in 1590, it was only in the late 17th century that lens grinding provided the resolution needed to make major discoveries. Antony van Leeuwenhoek is a notable example of an early lens grinder who achieved remarkable resolution with his single-lens microscopes. Important general biological observations were made by Robert Hooke (1635–1703), but the foundations of plant anatomy were laid by the Italian Marcello Malpighi (1628–1694) of the University of Bologna in his Anatome Plantarum (1675) and by the Royal Society Englishman Nehemiah Grew (1641–1712) in his The Anatomy of Vegetables Begun (1671) and Anatomy of Plants (1682). These botanists explored what is now called developmental anatomy and morphology by carefully observing, describing and drawing the developmental transition from seed to mature plant, recording stem and wood formation. This work included the discovery and naming of parenchyma and stomata.
Physiology
In plant physiology, research interest was focused on the movement of sap and the absorption of substances through the roots. Jan Helmont (1577–1644), by experimental observation and calculation, noted that the increase in weight of a growing plant cannot be derived purely from the soil, and concluded it must relate to water uptake. Englishman Stephen Hales (1677–1761) established by quantitative experiment that there is uptake of water by plants and a loss of water by transpiration and that this is influenced by environmental conditions: he distinguished "root pressure", "leaf suction" and "imbibition" and also noted that the major direction of sap flow in woody tissue is upward. His results were published in Vegetable Staticks (1727). He also noted that "air makes a very considerable part of the substance of vegetables". English chemist Joseph Priestley (1733–1804) is noted for his discovery of oxygen (as now called) and its production by plants. Later, Jan Ingenhousz (1730–1799) observed that only in sunlight do the green parts of plants absorb air and release oxygen, this being more rapid in bright sunlight while, at night, the air (CO2) is released from all parts. His results were published in Experiments upon vegetables (1779) and with this the foundations for 20th century studies of carbon fixation were laid. From his observations, he sketched the cycle of carbon in nature even though the composition of carbon dioxide was yet to be resolved. Studies in plant nutrition had also progressed. In 1804, Nicolas-Théodore de Saussure (1767–1845) published Recherches chimiques sur la végétation, an exemplary study of scientific exactitude that demonstrated the similarity of respiration in both plants and animals, that the fixation of carbon dioxide includes water, and that just minute amounts of salts and nutrients (which he analyzed in chemical detail from plant ash) have a powerful influence on plant growth.
Plant sexuality
It was Rudolf Camerarius (1665–1721) who was the first to establish plant sexuality conclusively by experiment. He declared in a letter to a colleague, dated 1694 and titled De sexu plantarum epistola, that "no ovules of plants could ever develop into seeds from the female style and ovary without first being prepared by the pollen from the stamens, the male sexual organs of the plant".
Some time later, the German academic and natural historian Joseph Kölreuter (1733–1806) extended this work by noting the function of nectar in attracting pollinators and the role of wind and insects in pollination. He also produced deliberate hybrids, observed the microscopic structure of pollen grains, and noted how the transfer of matter from the pollen to the ovary induces the formation of the embryo.
One hundred years after Camerarius, in 1793, Christian Sprengel (1750–1816) broadened the understanding of flowers by describing the role of nectar guides in pollination, the adaptive floral mechanisms used for pollination, and the prevalence of cross-pollination, even though male and female parts are usually together on the same flower.
Much was learned about plant sexuality by unravelling the reproductive mechanisms of mosses, liverworts and algae. In his Vergleichende Untersuchungen of 1851, Wilhelm Hofmeister (1824–1877), starting with the ferns and bryophytes, demonstrated that the process of sexual reproduction in plants entails an "alternation of generations" between sporophytes and gametophytes. This initiated the new field of comparative morphology which, largely through the combined work of William Farlow (1844–1919), Nathanael Pringsheim (1823–1894), Frederick Bower, Eduard Strasburger and others, established that an "alternation of generations" occurs throughout the plant kingdom.
Nineteenth-century foundations of modern botany
In about the mid-19th century, scientific communication changed. Until this time, ideas were largely exchanged by reading the works of authoritative individuals who dominated in their field: these were often wealthy and influential "gentlemen scientists". Now, research was reported by the publication of "papers" that emanated from research "schools" that promoted the questioning of conventional wisdom. This process had started in the late 18th century when specialist journals began to appear. Even so, botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's (1804–1881) Grundzüge der wissenschaftlichen Botanik, published in English in 1849 as Principles of Scientific Botany. By 1850, an invigorated organic chemistry had revealed the structure of many plant constituents. Although the great era of plant classification had now passed, the work of description continued. Augustin de Candolle (1778–1841) succeeded Antoine-Laurent de Jussieu in managing the botanical project Prodromus Systematis Naturalis Regni Vegetabilis (1824–1841), which involved 35 authors: it contained all the dicotyledons known in his day, some 58,000 species in 161 families, and he doubled the number of recognized plant families, the work being completed by his son Alphonse (1806–1893) in the years from 1841 to 1873.
Plant geography and ecology
The opening of the 19th century was marked by an increase in interest in the connection between climate and plant distribution. Carl Willdenow (1765–1812) examined the connection between seed dispersal and distribution, the nature of plant associations and the impact of geological history. He noticed the similarities between the floras of North America and North Asia, and of the Cape and Australia, and he explored the ideas of "centre of diversity" and "centre of origin". German Alexander von Humboldt (1769–1859) and Frenchman Aimé Bonpland (1773–1858) published a massive and highly influential 30-volume work on their travels; Robert Brown (1773–1852) noted the similarities between the floras of South Africa, Australia and India, while Joakim Schouw (1789–1852) explored more deeply than anyone else the influence on plant distribution of temperature, soil factors, especially soil water, and light, work that was continued by Alphonse de Candolle (1806–1893). Joseph Hooker (1817–1911) pushed the boundaries of floristic studies with his work on Antarctica, India and the Middle East with special attention to endemism. August Grisebach (1814–1879) in Die Vegetation der Erde (1872) examined physiognomy in relation to climate, and in America geographic studies were pioneered by Asa Gray (1810–1888).
Physiological plant geography, or ecology, emerged from floristic biogeography in the late 19th century as environmental influences on plants received greater recognition. Early work in this area was synthesised by Danish professor Eugenius Warming (1841–1924) in his book Plantesamfund (Ecology of Plants, generally taken to mark the beginning of modern ecology), including new ideas on plant communities, their adaptations and environmental influences. This was followed by another grand synthesis, the Pflanzengeographie auf physiologischer Grundlage of Andreas Schimper (1856–1901) in 1898 (published in English in 1903 as Plant-geography upon a physiological basis, translated by W. R. Fischer, Oxford: Clarendon Press, 839 pp.).
Anatomy
During the 19th century, German scientists led the way towards a unitary theory of the structure and life-cycle of plants. Following improvements in the microscope at the end of the 18th century, Charles Mirbel (1776–1854) published a treatise on plant anatomy and physiology in 1802, and Johann Moldenhawer (1766–1827) published his Beyträge zur Anatomie der Pflanzen (1812), in which he described techniques for separating cells from the middle lamella. He identified vascular and parenchymatous tissues, described vascular bundles, observed the cells in the cambium, and interpreted tree rings. He found that stomata were composed of pairs of cells, rather than a single cell with a hole.
Anatomical studies on the stele were consolidated by Carl Sanio (1832–1891), who described the secondary tissues and meristem including cambium and its action. Hugo von Mohl (1805–1872) summarized work in anatomy leading up to 1850 in a treatise of 1851, but this work was later eclipsed by the encyclopaedic comparative anatomy of Heinrich Anton de Bary in 1877. An overview of knowledge of the stele in root and stem was completed by Van Tieghem (1839–1914) and of the meristem by Carl Nägeli (1817–1891). Studies had also begun on the origins of the carpel and flower that continue to the present day.
Water relations
The riddle of water and nutrient transport through the plant remained. Physiologist Von Mohl explored solute transport and the theory of water uptake by the roots using the concepts of cohesion, transpirational pull, capillarity and root pressure. German dominance in the field of experimental physiology, largely influenced by Wilhelm Knop and Julius von Sachs, was underlined by the publication of the definitive textbook on plant physiology synthesising the work of this period, published by Sachs in 1882. There were, however, some advances elsewhere, such as the early exploration of geotropism (the effect of gravity on growth) by Englishman Thomas Knight, and the discovery and naming of osmosis by Frenchman Henri Dutrochet (1776–1847). The American Dennis Robert Hoagland (1884–1949) discovered the dependence of nutrient absorption and translocation by the plant on metabolic energy.
Cytology
The cell nucleus was discovered by Robert Brown in 1831. Demonstration of the cellular composition of all organisms, with each cell possessing all the characteristics of life, is attributed to the combined efforts of botanist Matthias Schleiden and zoologist Theodor Schwann (1810–1882) in the early 19th century, although Moldenhawer had already shown that plants were wholly cellular with each cell having its own wall, and Julius von Sachs had shown the continuity of protoplasm between cell walls.
From 1870 to 1880, it became clear that cell nuclei are never formed anew but always derived from the substance of another nucleus. In 1882, Walther Flemming observed the longitudinal splitting of chromosomes in the dividing nucleus and concluded that each daughter nucleus received half of each of the chromosomes of the mother nucleus; then, by the early 20th century, it was found that the number of chromosomes in a given species is constant. With genetic continuity confirmed and the finding by Eduard Strasburger that the nuclei of reproductive cells (in pollen and embryo) undergo a reducing division (halving of chromosomes, now known as meiosis), the field of heredity was opened up. By 1926, Thomas Morgan was able to outline a theory of the gene and its structure and function. The form and function of plastids received similar attention, the association with starch being noted at an early date. With observation of the cellular structure of all organisms and the process of cell division and continuity of genetic material, the analysis of the structure of protoplasm and the cell wall as well as that of plastids and vacuoles – what is now known as cytology, or cell theory – became firmly established.
Later, the cytological basis of the gene-chromosome theory of heredity, extending from about 1900 to 1944, was initiated by the rediscovery of Gregor Mendel's (1822–1884) laws of plant heredity, first published in 1866 in Experiments on Plant Hybridization and based on the cultivated pea, Pisum sativum: this heralded the opening up of plant genetics. The cytological basis for gene-chromosome theory was explored through the role of polyploidy and hybridization in speciation, and it was becoming better understood that interbreeding populations were the unit of adaptive change in biology.
Developmental morphology and evolution
Until the 1860s, it was believed that species had remained unchanged through time: each biological form was the result of an independent act of creation and therefore absolutely distinct and immutable. But the hard reality of geological formations and strange fossils needed scientific explanation. Charles Darwin's Origin of Species (1859) replaced the assumption of constancy with the theory of descent with modification. Phylogeny became a new principle as "natural" classifications became classifications reflecting, not just similarities, but evolutionary relationships. Wilhelm Hofmeister established that there was a similar pattern of organization in all plants expressed through the alternation of generations and extensive homology of structures.
German writer Johann Wolfgang von Goethe (1749–1832), a polymath, had interests and influence that extended into botany. In his Metamorphosis of Plants (1790), he provided a theory of plant morphology (he coined the word "morphology") and he included within his concept of "metamorphosis" modification during evolution, thus linking comparative morphology with phylogeny. Though the botanical basis of his work has been challenged, there is no doubt that he prompted discussion and research on the origin and function of floral parts. His theory probably stimulated the opposing views of German botanists Alexander Braun (1805–1877) and Matthias Schleiden, who applied the experimental method to the principles of growth and form that were later extended by Augustin de Candolle (1778–1841).
Carbon fixation (photosynthesis)
At the start of the 19th century, the idea that plants could synthesize almost all their tissues from atmospheric gases had not yet emerged. The energy component of photosynthesis, the capture and storage of the Sun's radiant energy in carbon bonds (a process on which all life depends) was first elucidated in 1847 by Mayer, but the details of how this was done would take many more years. Chlorophyll was named in 1818 and its chemistry gradually determined, to be finally resolved in the early 20th century. The mechanism of photosynthesis remained a mystery until the mid-19th century when Sachs, in 1862, noted that starch was formed in green cells only in the presence of light, and in 1882, he confirmed carbohydrates as the starting point for all other organic compounds in plants. The connection between the pigment chlorophyll and starch production was finally made in 1864 but tracing the precise biochemical pathway of starch formation did not begin until about 1915.
Nitrogen fixation
Significant discoveries relating to nitrogen assimilation and metabolism, including ammonification, nitrification and nitrogen fixation (the uptake of atmospheric nitrogen by symbiotic soil microorganisms) had to wait for advances in chemistry and bacteriology in the late 19th century and this was followed in the early 20th century by the elucidation of protein and amino-acid synthesis and their role in plant metabolism. With this knowledge, it was then possible to outline the global nitrogen cycle.
Twentieth century
20th century science grew out of the solid foundations laid by the breadth of vision and detailed experimental observations of the 19th century. A vastly increased research force was now rapidly extending the horizons of botanical knowledge at all levels of plant organization from molecules to global plant ecology. There was now an awareness of the unity of biological structure and function at the cellular and biochemical levels of organisation. Botanical advance was closely associated with advances in physics and chemistry with the greatest advances in the 20th century mainly relating to the penetration of molecular organization. However, at the level of plant communities it would take until mid century to consolidate work on ecology and population genetics.
By 1910, experiments using labelled isotopes were being used to elucidate plant biochemical pathways, opening the line of research leading to gene technology. On a more practical level, research funding was now becoming available from agriculture and industry.
Molecules
In 1903, chlorophylls a and b were separated by chromatography; then, through the 1920s and 1930s, biochemists, notably Hans Krebs (1900–1981) and Carl Cori (1896–1984) and Gerty Cori (1896–1957), began tracing out the central metabolic pathways of life. Between the 1930s and 1950s, it was determined that ATP, located in mitochondria, was the source of cellular chemical energy, and the constituent reactions of photosynthesis were progressively revealed. Then, in 1944, DNA was extracted for the first time. Along with these revelations, there was the discovery of plant hormones or "growth substances", notably auxins (1934), gibberellins (1934) and cytokinins (1964), and the effects of photoperiodism, the control of plant processes, especially flowering, by the relative lengths of day and night.
Following the establishment of Mendel's laws, the gene-chromosome theory of heredity was confirmed by the work of August Weismann, who identified chromosomes as the hereditary material. Also, in observing the halving of the chromosome number in germ cells, he anticipated work to follow on the details of meiosis, the complex process of redistribution of hereditary material that occurs in the germ cells. In the 1920s and 1930s, population genetics combined the theory of evolution with Mendelian genetics to produce the modern synthesis. By the mid-1960s, the molecular basis of metabolism and reproduction was firmly established through the new discipline of molecular biology. Genetic engineering, the insertion of genes into a host cell for cloning, began in the 1970s with the invention of recombinant DNA techniques, and its commercial application to agricultural crops followed in the 1990s. There was now the potential to identify organisms by molecular "fingerprinting" and to estimate the times in the past when critical evolutionary changes had occurred through the use of "molecular clocks".
Computers, electron microscopes and evolution
Increased experimental precision combined with vastly improved scientific instrumentation was opening up exciting new fields. In 1936, Alexander Oparin (1894–1980) demonstrated a possible mechanism for the synthesis of organic matter from inorganic molecules. In the 1960s, it was determined that the Earth's earliest life-forms once treated as plants, the stromatolite-forming cyanobacteria, dated back some 3.5 billion years.
Mid-century transmission and scanning electron microscopy presented another level of resolution to the structure of matter, taking anatomy into the new world of "ultrastructure".
New and revised "phylogenetic" classification systems of the plant kingdom were produced by several botanists, including August Eichler. A massive 23 volume was published by Adolf Engler & Karl Prantl over the period 1887 to 1915. Taxonomy based on gross morphology was now being supplemented by using characters revealed by pollen morphology, embryology, anatomy, cytology, serology, macromolecules and more. The introduction of computers facilitated the rapid analysis of large data sets used for numerical taxonomy (also called taximetrics or phenetics). The emphasis on truly natural phylogenies spawned the disciplines of cladistics and phylogenetic systematics. The grand taxonomic synthesis An Integrated System of Classification of Flowering Plants (1981) of American Arthur Cronquist (1919–1992) was superseded when, in 1998, the Angiosperm Phylogeny Group published a phylogeny of flowering plants based on the analysis of DNA sequences using the techniques of the new molecular systematics which was resolving questions concerning the earliest evolutionary branches of the angiosperms (flowering plants). The exact relationship of fungi to plants had for some time been uncertain. Several lines of evidence pointed to fungi being different from plants, animals and bacteria – indeed, more closely related to animals than plants. In the 1980s-90s, molecular analysis revealed an evolutionary divergence of fungi from other organisms about 1 billion years ago – sufficient reason to erect a unique kingdom separate from plants.
Biogeography and ecology
The publication of Alfred Wegener's (1880–1930) theory of continental drift in 1912 gave additional impetus to comparative physiology and the study of biogeography, while ecology in the 1930s contributed the important ideas of plant community, succession, community change, and energy flows. From 1940 to 1950, ecology matured to become an independent discipline as Eugene Odum (1913–2002) formulated many of the concepts of ecosystem ecology, emphasising relationships between groups of organisms (especially material and energy relationships) as key factors in the field. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) from 1914 to 1940 produced accounts of the geography, centres of origin, and evolutionary history of economic plants.
See also
International Botanical Congress
History of plant systematics
Botanical illustration
History of phycology
List of botanists
List of botanists by author abbreviation
References
Bibliography
Books
History of science
(see also The Jewel House)
History of botany, agriculture and horticulture
Fries, Robert Elias (1950). A short history of botany in Sweden. Uppsala: Almqvist & Wiksells boktr. OCLC 3954193
Antiquity
British botany
Cultural studies
Botanical art and illustration
Historical sources
Bibliographic sources
Articles
Websites
National Library of Medicine
Botany
Botany | History of botany | Biology | 10,981 |
2,173,873 | https://en.wikipedia.org/wiki/Small%20subgroup%20confinement%20attack | In cryptography, a subgroup confinement attack, or small subgroup confinement attack, on a cryptographic method that operates in a large finite group is where an attacker attempts to compromise the method by forcing a key to be confined to an unexpectedly small subgroup of the desired group.
Several methods have been found to be vulnerable to subgroup confinement attack, including some forms or applications of Diffie–Hellman key exchange and DH-EKE.
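A minimal Python sketch can make the idea concrete. All parameters below are invented toy values (p = 991 is prime and p − 1 = 990 has the small prime factor q = 11); a real attack targets much larger groups whose order has a small factor, and succeeds whenever received public values are not validated.

"""Toy sketch of a small subgroup confinement attack on unauthenticated
Diffie-Hellman.  The parameters are deliberately tiny, invented values."""
import hashlib
import random

p, q = 991, 11                 # toy prime and a small prime factor of p - 1

# Bob's secret exponent for the exchange
b = random.randrange(2, p - 1)

# The attacker replaces Alice's public value with an element s of order q:
# any x with x^((p-1)/q) != 1 (mod p) generates the order-q subgroup.
x, s = 2, 1
while s == 1:
    s = pow(x, (p - 1) // q, p)
    x += 1

# Bob unknowingly derives his "shared" key from s; the key is confined to the
# q-element subgroup generated by s.
k_bob = pow(s, b, p)
tag = hashlib.sha256(str(k_bob).encode()).hexdigest()   # stands in for a keyed message

# The attacker needs at most q guesses to recover the key.
for j in range(q):
    if hashlib.sha256(str(pow(s, j, p)).encode()).hexdigest() == tag:
        print(f"key {pow(s, j, p)} recovered after {j + 1} guesses (Bob's key: {k_bob})")
        break

Because Bob's derived key lies in a subgroup of only q elements, exhaustive search recovers it immediately; validating received public values, or working in a group of prime order, prevents the confinement.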
References
Cryptographic attacks
Finite groups | Small subgroup confinement attack | Mathematics,Technology | 93 |
31,627,702 | https://en.wikipedia.org/wiki/Hemithioacetal | In organic chemistry, hemithioacetals (or thiohemiacetals) are organosulfur compounds with the general formula . They are the sulfur analogues of the acetals, , with an oxygen atom replaced by sulfur (as implied by the thio- prefix). Because they consist of four differing substituents on a single carbon, hemithioacetals are chiral. A related family of compounds are the dithiohemiacetals, with the formula . Although they can be important intermediates, hemithioacetals are usually not isolated, since they exist in equilibrium with thiols () and aldehydes ().
Formation and structure
Hemithioacetals are formed by the reaction of a thiol () and an aldehyde ():
R-CHO + R'-SH <=> R-CH(OH)S-R'
Hemithioacetals usually arise via acid catalysis. They typically are intermediates in the formation of dithioacetals ():
R-CH(OH)S-R' + R'-SH <=> R-CH(S-R')2 + H2O
Isolable hemithioacetal
Hemithioacetals ordinarily dissociate readily into thiol and aldehyde; however, some have been isolated. In general, these isolable hemithioacetals are cyclic, which disfavors dissociation, and they can often be further stabilized by the presence of acid. An important class are the S-glycosides, such as octylthioglucoside, which are formed by a reaction between thiols and sugars. Other examples include 2-hydroxytetrahydrothiophene and the anti-HIV drug lamivudine. Another class of isolable hemithioacetals is derived from carbonyl groups that form stable hydrates. For example, thiols react with hexafluoroacetone trihydrate to give hemithioacetals, which can be isolated.
Hemithioacetals in nature
Glyoxalase I, which is part of the glyoxalase system present in the cytosol, catalyzes the conversion of α-oxoaldehydes (RC(O)CHO) and the thiol glutathione (abbreviated GSH) to S-2-hydroxyacylglutathione derivatives [RCH(OH)CO-SG]. The catalytic mechanism involves an intermediate hemithioacetal adduct [RCOCH(OH)-SG]; in the case of methylglyoxal and glutathione, this hemithioacetal forms spontaneously before being processed by human glyoxalase I.
A hemithioacetal is also invoked in the mechanism of prenylcysteine lyase. In the catalytic mechanism, S-farnesylcysteine is oxidized by a flavin to a thiocarbenium ion. The thiocarbenium ion hydrolyzes to form the hemithioacetal:
After formation, the hemithioacetal breaks into hydrogen peroxide, farnesal, and cysteine.
References
Acetals
Functional groups
Organosulfur compounds | Hemithioacetal | Chemistry | 701 |
33,179,436 | https://en.wikipedia.org/wiki/Spin%20angular%20momentum%20of%20light | The spin angular momentum of light (SAM) is the component of angular momentum of light that is associated with the quantum spin and the rotation between the polarization degrees of freedom of the photon.
Introduction
Spin is the fundamental property that distinguishes the two types of elementary particles: fermions, with half-integer spins, and bosons, with integer spins. Photons, which are the quanta of light, have long been recognized as spin-1 gauge bosons. The polarization of the light is commonly accepted as its "intrinsic" spin degree of freedom. However, in free space, only two transverse polarizations are allowed. Thus, the photon spin is always connected only to the two circular polarizations. To construct the full quantum spin operator of light, longitudinally polarized photon modes have to be introduced.
An electromagnetic wave is said to have circular polarization when its electric and magnetic fields rotate continuously around the beam axis during propagation. The circular polarization is left or right depending on the direction in which the fields rotate and on the convention used: either from the point of view of the source or of the receiver. Both conventions are used in science, depending on the context.
When a light beam is circularly polarized, each of its photons carries a spin angular momentum (SAM) of ±ℏ, where ℏ is the reduced Planck constant and the sign is positive for left and negative for right circular polarizations (this is adopting the convention from the point of view of the receiver most commonly used in optics). This SAM is directed along the beam axis (parallel if positive, antiparallel if negative).
The figure shows the instantaneous structure of the electric field of left and right circularly polarized light in space. The green arrows indicate the propagation direction.
The mathematical expressions reported under the figures give the three electric-field components of a circularly polarized plane wave propagating in the z direction, in complex notation.
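A standard form of such expressions, written here for propagation along z with amplitude E0 (the assignment of the ± sign to left or right depends on the viewpoint convention discussed above), is

\mathbf{E}_{L,R}(\mathbf{r},t) = \operatorname{Re}\!\left[\frac{E_0}{\sqrt{2}}\,\bigl(\hat{\mathbf{x}} \pm i\,\hat{\mathbf{y}}\bigr)\, e^{i(kz-\omega t)}\right], \qquad E_z = 0 .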
Mathematical expression
The general expression for the spin angular momentum is
where is the speed of light in free space and is the conjugate canonical momentum of the vector potential . The general expression for the orbital angular momentum of light is
where denotes four indices of the spacetime and Einstein's summation convention has been applied.
To quantize light, the basic equal-time commutation relations have to be postulated,
where is the reduced Planck constant and is the metric tensor of the Minkowski space.
Then, one can verify that both and satisfy the canonical angular momentum commutation relations
and they commute with each other .
After the plane-wave expansion, the photon spin can be re-expressed in a simple and intuitive form in the wave-vector space
where the vector is the field operator of the photon in wave-vector space and the matrix
is the spin-1 operator of the photon with the SO(3) rotation generators
and the two unit vectors denote the two transverse polarizations of light in free space and unit vector denotes the longitudinal polarization.
Because longitudinally polarized photons and scalar photons are involved, neither expression is gauge invariant. To incorporate gauge invariance into the photon angular momenta, a re-decomposition of the total QED angular momentum has to be performed and the Lorenz gauge condition has to be enforced. Finally, the directly observable parts of the spin and orbital angular momenta of light are given by
and
which recover the angular momenta of classical transverse light. Here, () is the transverse part of the electric field (vector potential), is the vacuum permittivity, and we are using SI units.
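In a commonly used notation (the symbols here are standard textbook choices and may differ from those originally intended above), these transverse-field expressions read

\mathbf{S} = \varepsilon_0 \int d^3 r \; \mathbf{E}_\perp \times \mathbf{A}_\perp ,
\qquad
\mathbf{L} = \varepsilon_0 \int d^3 r \; \sum_{i=x,y,z} E_\perp^i \, (\mathbf{r} \times \nabla)\, A_\perp^i ,

where E_⊥ and A_⊥ are the transverse parts of the electric field and vector potential and ε0 is the vacuum permittivity.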
We can define the annihilation operators for circularly polarized transverse photons:
with polarization unit vectors
Then, the transverse-field photon spin can be re-expressed as
For a single plane-wave photon, the spin can only take the two values ±ℏ, which are eigenvalues of the spin operator along the propagation direction. The corresponding eigenfunctions describing photons with well-defined values of SAM are circularly polarized waves:
See also
Helmholtz equation
Orbital angular momentum of light
Polarization (physics)
Photon polarization
Spin polarization
References
Further reading
Angular momentum of light
Light
Physical quantities | Spin angular momentum of light | Physics,Mathematics | 852 |
446,712 | https://en.wikipedia.org/wiki/Josephson%20effect | In physics, the Josephson effect is a phenomenon that occurs when two superconductors are placed in proximity, with some barrier or restriction between them. The effect is named after the British physicist Brian Josephson, who predicted in 1962 the mathematical relationships for the current and voltage across the weak link. It is an example of a macroscopic quantum phenomenon, where the effects of quantum mechanics are observable at ordinary, rather than atomic, scale. The Josephson effect has many practical applications because it exhibits a precise relationship between different physical measures, such as voltage and frequency, facilitating highly accurate measurements.
The Josephson effect produces a current, known as a supercurrent, that flows continuously without any voltage applied, across a device known as a Josephson junction (JJ). These consist of two or more superconductors coupled by a weak link. The weak link can be a thin insulating barrier (known as a superconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-c-S).
Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics. The NIST standard for one volt is achieved by an array of 20,208 Josephson junctions in series.
History
The DC Josephson effect had been seen in experiments prior to 1962, but had been attributed to "super-shorts" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors.
In 1962, Brian Josephson became interested in superconducting tunneling. He was then 23 years old and a second-year graduate student of Brian Pippard at the Mond Laboratory of the University of Cambridge. That year, Josephson took a many-body theory course with Philip W. Anderson, a Bell Labs employee on sabbatical leave for the 1961–1962 academic year. The course introduced Josephson to the idea of broken symmetry in superconductors, and he "was fascinated by the idea of broken symmetry, and wondered whether there could be any way of observing it experimentally". Josephson studied the experiments by Ivar Giaever and Hans Meissner, and theoretical work by Robert Parmenter. Pippard initially believed that the tunneling effect was possible but that it would be too small to be noticeable, but Josephson did not agree, especially after Anderson introduced him to a preprint of "Superconductive Tunneling" by Cohen, Falicov, and Phillips about the superconductor-barrier-normal metal system.
Josephson and his colleagues were initially unsure about the validity of Josephson's calculations. Anderson later remembered:
We were all—Josephson, Pippard and myself, as well as various other people who also habitually sat at the Mond tea and participated in the discussions of the next few weeks—very much puzzled by the meaning of the fact that the current depends on the phase.
After further review, they concluded that Josephson's results were valid. Josephson then submitted "Possible new effects in superconductive tunnelling" to Physics Letters in June 1962. The newer journal Physics Letters was chosen instead of the better-established Physical Review Letters due to their uncertainty about the results. John Bardeen, by then already a Nobel Prize winner, was initially publicly skeptical of Josephson's theory in 1962, but came to accept it after further experiments and theoretical clarifications.
In January 1963, Anderson and his Bell Labs colleague John Rowell submitted to Physical Review Letters the first paper claiming experimental observation of Josephson's effect, "Probable Observation of the Josephson Superconducting Tunneling Effect". The authors were awarded patents on the effects, which were never enforced but also never challenged.
Before Josephson's prediction, it was only known that single (i.e., non-paired) electrons can flow through an insulating barrier, by means of quantum tunneling. Josephson was the first to predict the tunneling of superconducting Cooper pairs. For this work, Josephson received the Nobel Prize in Physics in 1973. John Bardeen was one of the nominators.
Applications
Types of Josephson junction include the φ Josephson junction (of which π Josephson junction is a special example), long Josephson junction, and superconducting tunnel junction. Other uses include:
A "Dayem bridge" is a thin-film Josephson junction where the weak link comprises a superconducting wire measuring a few micrometres or less.
The Josephson junction count is a proxy variable for a device's complexity
SQUIDs, or superconducting quantum interference devices, are very sensitive magnetometers that operate via the Josephson effect
Superfluid helium quantum interference devices (SHeQUIDs) are the superfluid helium analog of a dc-SQUID
In precision metrology, the Josephson effect is a reproducible conversion between frequency and voltage. The Josephson voltage standard takes the caesium standard definition of frequency and gives the standard representation of a volt
Single-electron transistors are often made from superconducting materials and called "superconducting single-electron transistors".
Elementary charge is most precisely measured in terms of the Josephson constant and the von Klitzing constant which is related to the quantum Hall effect
RSFQ digital electronics are based on shunted Josephson junctions. Junction switching emits one magnetic flux quantum . Its presence and absence represents binary 1 and 0.
Superconducting quantum computing uses Josephson junctions as qubits, such as in a flux qubit or other schemes where the phase and charge are conjugate variables.
Superconducting tunnel junction detectors are used in superconducting cameras
The Josephson equations
The Josephson effect can be calculated using the laws of quantum mechanics. A diagram of a single Josephson junction is shown at right. Assume that superconductor A has Ginzburg–Landau order parameter ψ_A, and superconductor B ψ_B, which can be interpreted as the wave functions of Cooper pairs in the two superconductors. If the electric potential difference across the junction is V, then the energy difference between the two superconductors is 2eV, since each Cooper pair has twice the charge of one electron. The Schrödinger equation for this two-state quantum system is therefore:
where the constant is a characteristic of the junction. To solve the above equation, first calculate the time derivative of the order parameter in superconductor A:
and therefore the Schrödinger equation gives:
The phase difference of Ginzburg–Landau order parameters across the junction is called the Josephson phase:
The Schrödinger equation can therefore be rewritten as:
and its complex conjugate equation is:
Add the two conjugate equations together to eliminate :
Since , we have:
Now, subtract the two conjugate equations to eliminate :
which gives:
Similarly, for superconductor B we can derive that:
Noting that the evolution of Josephson phase is and the time derivative of charge carrier density is proportional to current , when , the above solution yields the Josephson equations:
where and are the voltage across and the current through the Josephson junction, and is a parameter of the junction named the critical current. Equation (1) is called the first Josephson relation or weak-link current-phase relation, and equation (2) is called the second Josephson relation or superconducting phase evolution equation. The critical current of the Josephson junction depends on the properties of the superconductors, and can also be affected by environmental factors like temperature and externally applied magnetic field.
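In standard notation (reconstructed here, so the symbols may differ slightly from those used elsewhere in this article), the two relations are

I(t) = I_c \sin\!\bigl(\varphi(t)\bigr) \qquad (1)

\frac{\partial \varphi}{\partial t} = \frac{2 e V(t)}{\hbar} \qquad (2)

with I_c the critical current, V the voltage across the junction, and φ the Josephson phase.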
The Josephson constant is defined as:
and its inverse is the magnetic flux quantum:
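Written out, these are

K_J = \frac{2e}{h} \approx 483\,597.8\ \text{GHz/V}, \qquad \Phi_0 = \frac{h}{2e} \approx 2.068 \times 10^{-15}\ \text{Wb}.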
The superconducting phase evolution equation can be reexpressed as:
If we define:
then the voltage across the junction is:
which is very similar to Faraday's law of induction. But note that this voltage does not come from magnetic energy, since there is no magnetic field in the superconductors; Instead, this voltage comes from the kinetic energy of the carriers (i.e. the Cooper pairs). This phenomenon is also known as kinetic inductance.
Three main effects
There are three main effects predicted by Josephson that follow directly from the Josephson equations:
The DC Josephson effect
The DC Josephson effect is a direct current crossing the insulator in the absence of any external electromagnetic field, owing to tunneling. This DC Josephson current is proportional to the sine of the Josephson phase (phase difference across the insulator, which stays constant over time), and may take values between −I_c and I_c.
The AC Josephson effect
With a fixed voltage across the junction, the phase will vary linearly with time and the current will be a sinusoidal AC (alternating current) with amplitude I_c and frequency 2eV/h. This means a Josephson junction can act as a perfect voltage-to-frequency converter.
The inverse AC Josephson effect
Microwave radiation of a single (angular) frequency can induce quantized DC voltages across the Josephson junction, in which case the Josephson phase takes the form , and the voltage and current across the junction will be:
The DC components are:
This means a Josephson junction can act like a perfect frequency-to-voltage converter, which is the theoretical basis for the Josephson voltage standard.
Josephson inductance
When the current and Josephson phase varies over time, the voltage drop across the junction will also vary accordingly; As shown in derivation below, the Josephson relations determine that this behavior can be modeled by a kinetic inductance named Josephson Inductance.
Rewrite the Josephson relations as:
Now, apply the chain rule to calculate the time derivative of the current:
Rearrange the above result in the form of the current–voltage characteristic of an inductor:
This gives the expression for the kinetic inductance as a function of the Josephson Phase:
Here, is a characteristic parameter of the Josephson junction, named the Josephson Inductance.
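A standard way of writing this result (the notation is a common textbook choice rather than necessarily that of the original derivation) is

L_J(\varphi) = \frac{\Phi_0}{2\pi I_c \cos\varphi}, \qquad L_{J0} \equiv L_J(0) = \frac{\Phi_0}{2\pi I_c} = \frac{\hbar}{2 e I_c},

where Φ0 is the magnetic flux quantum and I_c the critical current.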
Note that although the kinetic behavior of the Josephson junction is similar to that of an inductor, there is no associated magnetic field. This behaviour is derived from the kinetic energy of the charge carriers, instead of the energy in a magnetic field.
Josephson energy
Based on the similarity of the Josephson junction to a non-linear inductor, the energy stored in a Josephson junction when a supercurrent flows through it can be calculated.
The supercurrent flowing through the junction is related to the Josephson phase by the current-phase relation (CPR):
The superconducting phase evolution equation is analogous to Faraday's law:
Assume that at time , the Josephson phase is ; At a later time , the Josephson phase evolved to . The energy increase in the junction is equal to the work done on the junction:
This shows that the change of energy in the Josephson junction depends only on the initial and final state of the junction and not the path. Therefore, the energy stored in a Josephson junction is a state function, which can be defined as:
Here is a characteristic parameter of the Josephson junction, named the Josephson Energy. It is related to the Josephson Inductance by . An alternative but equivalent definition is also often used.
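In a commonly used form, with the zero of energy chosen at φ = 0, the stored energy and the Josephson energy are

E(\varphi) = E_J \bigl(1 - \cos\varphi\bigr), \qquad E_J = \frac{\Phi_0 I_c}{2\pi} = \frac{\hbar I_c}{2e} = \left(\frac{\Phi_0}{2\pi}\right)^{\!2} \frac{1}{L_{J0}},

which makes the stated relation to the Josephson inductance explicit.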
Again, note that a non-linear magnetic coil inductor accumulates potential energy in its magnetic field when a current passes through it; However, in the case of Josephson junction, no magnetic field is created by a supercurrent — the stored energy comes from the kinetic energy of the charge carriers instead.
The RCSJ model
The Resistively and Capacitively Shunted Junction (RCSJ) model, or simply shunted junction model, includes the effect of the AC impedance of an actual Josephson junction on top of the two basic Josephson relations stated above.
As per Thévenin's theorem, the AC impedance of the junction can be represented by a capacitor and a shunt resistor, both parallel to the ideal Josephson Junction. The complete expression for the current drive becomes:
where the first term is the displacement current through the effective capacitance of the junction, the second is the supercurrent, and the third is the normal current through the effective resistance of the junction.
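With these conventions, the current drive referred to above takes the standard form

I_{\text{ext}} = C_J \frac{dV}{dt} + I_c \sin\varphi + \frac{V}{R_N},

where C_J and R_N denote the effective junction capacitance and resistance (the symbols are illustrative choices).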
Josephson penetration depth
The Josephson penetration depth characterizes the typical length over which an externally applied magnetic field penetrates into the long Josephson junction. It is usually denoted as λ_J and is given by the following expression (in SI):
where is the magnetic flux quantum, is the critical supercurrent density (A/m²), and characterizes the inductance of the superconducting electrodes
where is the thickness of the Josephson barrier (usually insulator), and are the thicknesses of superconducting electrodes, and and are their London penetration depths. The Josephson penetration depth usually ranges from a few μm to several mm if the critical current density is very low.
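A commonly quoted SI form of this length (the treatment of the electrode thicknesses varies between references, so the expression for d' below is the thick-electrode limit only) is

\lambda_J = \sqrt{\frac{\Phi_0}{2 \pi \mu_0\, d'\, j_c}}, \qquad d' \approx d_I + \lambda_{L,1} + \lambda_{L,2} \quad (d_1, d_2 \gg \lambda_{L,1}, \lambda_{L,2}),

with Φ0 the magnetic flux quantum, j_c the critical current density, d_I the barrier thickness and λ_{L,1}, λ_{L,2} the London penetration depths of the electrodes.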
See also
Pi Josephson junction
φ Josephson junction
Josephson diode
Andreev reflection
Fractional vortices
Ginzburg–Landau theory
Macroscopic quantum phenomena
Macroscopic quantum self-trapping
Quantum computer
Quantum gyroscope
Rapid single flux quantum (RSFQ)
Semifluxon
Zero-point energy
Josephson vortex
References
Condensed matter physics
Superconductivity
Sensors
Mesoscopic physics
Energy (physics) | Josephson effect | Physics,Chemistry,Materials_science,Mathematics,Technology,Engineering | 2,802 |
23,167,524 | https://en.wikipedia.org/wiki/Kreft%27s%20dichromaticity%20index | Kreft's dichromaticity index (DI) is a measure for quantification of dichromatism. It is defined as the difference in hue angle (Δhab) between the color of the sample at the dilution, where the chroma (color saturation) is maximal, and the color of four times more diluted (or thinner) and four times more concentrated (or thicker) sample. The two hue angle differences are called the dichromaticity index towards lighter (Kreft's DIL) and
dichromaticity index towards darker (Kreft's DID) respectively. Kreft's dichromaticity indexes DIL and DID for pumpkin seed oil, which is one of the most dichromatic substances, are −9 and −44, respectively. This means that pumpkin seed oil changes its color from green-yellow to orange-red (by 44 degrees in Lab color space) when the thickness of the observed layer is increased from about 0.5 mm to 2 mm, and it changes slightly towards green (by 9 degrees) if its thickness is reduced four-fold.
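A minimal Python sketch of the computation, using invented CIELAB readings for a hypothetical dichromatic liquid (real values would come from measured or computed transmission spectra), is:

"""Sketch of Kreft's dichromaticity indices from CIELAB measurements taken at
a series of layer thicknesses.  The data below are invented for illustration."""
import math

# (thickness in mm, a*, b*) -- hypothetical readings
samples = [
    (0.125, -18.0, 42.0),
    (0.25,  -15.0, 68.0),
    (0.5,    -8.0, 95.0),   # chroma is largest here in this fake data set
    (1.0,     6.0, 90.0),
    (2.0,    25.0, 70.0),
]

def chroma(a, b):
    return math.hypot(a, b)

def hue_angle(a, b):
    return math.degrees(math.atan2(b, a)) % 360.0

# thickness at which chroma (colour saturation) is maximal
i_max = max(range(len(samples)), key=lambda i: chroma(*samples[i][1:]))
t_max, a_max, b_max = samples[i_max]
h_max = hue_angle(a_max, b_max)

def hue_at(thickness):
    # nearest available measurement; a real implementation would interpolate
    t, a, b = min(samples, key=lambda s: abs(s[0] - thickness))
    return hue_angle(a, b)

def signed_diff(h1, h2):
    """Smallest signed hue difference h1 - h2 in degrees, in (-180, 180]."""
    return (h1 - h2 + 180.0) % 360.0 - 180.0

DI_L = signed_diff(hue_at(t_max / 4.0), h_max)   # towards lighter (thinner layer)
DI_D = signed_diff(hue_at(t_max * 4.0), h_max)   # towards darker (thicker layer)
print(f"max chroma at {t_max} mm, DI_L = {DI_L:.1f}, DI_D = {DI_D:.1f}")

The sign convention follows the definition above: each index is the hue-angle change measured relative to the colour at maximal chroma.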
The color of pumpkin oil at increasing thickness or concentration presented in CIELAB colorspace diagram. Straight lines are vectors showing hue (angle) and chroma (length) of the color at maximal chroma (toward the square mark), and the colors of four-fold less or more diluted or thick pumpkin oil (DIL and DID). Note that DID is −44.1 degrees and DIL corresponds to −8.97 degrees.
Dichromaticity (DIL and DID) of selected substances, calculated from their VIS absorption spectra by the computer algorithm “Dichromaticity index calculator”:
Maximal chroma: chroma at concentration (thickness) where the color of the substance has maximal chroma (saturation). Angle at maximal chroma: the hue, which is represented by the angle of the vector to the color with maximal chroma in the CIELAB colorspace diagram.
References
External links
Free Dichromaticity Index Calculator software
Optics
Color
Index numbers
Scales
Spectroscopy | Kreft's dichromaticity index | Physics,Chemistry,Mathematics | 447 |
47,986,929 | https://en.wikipedia.org/wiki/Operator%20ideal | In functional analysis, a branch of mathematics, an operator ideal is a special kind of class of continuous linear operators between Banach spaces. If an operator belongs to an operator ideal , then for any operators and which can be composed with as , then is class as well. Additionally, in order for to be an operator ideal, it must contain the class of all finite-rank Banach space operators.
Formal definition
Let denote the class of continuous linear operators acting between arbitrary Banach spaces. For any subclass of and any two Banach spaces and over the same field , denote by the set of continuous linear operators of the form such that . In this case, we say that is a component of . An operator ideal is a subclass of , containing every identity operator acting on a 1-dimensional Banach space, such that for any two Banach spaces and over the same field , the following two conditions for are satisfied:
(1) If then ; and
(2) if and are Banach spaces over with and , and if , then .
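In a standard notation (the symbols below are conventional choices rather than those originally displayed), the definition reads: write \mathcal{J}(X,Y) = \{ T \in \mathcal{L}(X,Y) : T \in \mathcal{J} \} for the component of \mathcal{J}, and require, for all Banach spaces W, X, Y, Z over the same field,

(1)\quad S, T \in \mathcal{J}(X,Y) \;\Longrightarrow\; S + T \in \mathcal{J}(X,Y),

(2)\quad A \in \mathcal{L}(W,X),\; T \in \mathcal{J}(X,Y),\; B \in \mathcal{L}(Y,Z) \;\Longrightarrow\; B T A \in \mathcal{J}(W,Z).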
Properties and examples
Operator ideals enjoy the following nice properties.
Every component of an operator ideal forms a linear subspace of , although in general this need not be norm-closed.
Every operator ideal contains all finite-rank operators. In particular, the finite-rank operators form the smallest operator ideal.
For each operator ideal , every component of the form forms an ideal in the algebraic sense.
Furthermore, some very well-known classes are norm-closed operator ideals, i.e., operator ideals whose components are always norm-closed. These include but are not limited to the following.
Compact operators
Weakly compact operators
Finitely strictly singular operators
Strictly singular operators
Completely continuous operators
References
Pietsch, Albrecht: Operator Ideals, Volume 16 of Mathematische Monographien, Deutscher Verlag d. Wiss., VEB, 1978.
Functional analysis | Operator ideal | Mathematics | 381 |
12,979,395 | https://en.wikipedia.org/wiki/Photoinduced%20electron%20transfer | Photoinduced electron transfer (PET) is an excited state electron transfer process by which an excited electron is transferred from donor to acceptor. Due to PET a charge separation is generated, i.e., redox reaction takes place in excited state (this phenomenon is not observed in Dexter electron transfer).
Breadth
Such materials include semiconductors that can be photoactivated like many solar cells, biological systems such as those used in photosynthesis, and small molecules with suitable absorptions and redox states.
Process
It is common to describe where electrons reside as electron bands in bulk materials and electron orbitals in molecules. For the sake of expedience the following description will be described in molecular terms. When a photon excites a molecule, an electron in a ground state orbital can be excited to a higher energy orbital. This excited state leaves a vacancy in a ground state orbital that can be filled by an electron donor. It produces an electron in a high energy orbital which can be donated to an electron acceptor. In these respects a photoexcited molecule can act as a good oxidizing agent or a good reducing agent.
Photoinduced oxidation
[MLn]2+ + hν → [MLn]2+*
[MLn]2+* + donor → [MLn]+ + donor+
Photoinduced reduction
[MLn]2+ + hν → [MLn]2+*
[MLn]2+* + acceptor → [MLn]3+ + acceptor−
The end result of both reactions is that an electron is delivered to an orbital that is higher in energy than where it previously resided. This is often described as a charge separated electron-hole pair when working with semiconductors.
In the absence of a proper electron donor or acceptor it is possible for such molecules to undergo ordinary fluorescence emission. The electron transfer is one form of photoquenching.
Subsequent processes
In many photo-productive systems this charge separation is kinetically isolated by delivery of the electron to a lower energy conductor attached to the p/n junction or into an electron transport chain. In this case some of the energy can be captured to do work. If the electron is not kinetically isolated thermodynamics will take over and the products will react with each other to regenerate the ground state starting material. This process is called recombination and the photon's energy is released as heat.
Recombination of photoinduced oxidation
[MLn]+ + donor+ → [MLn]2+ + donor
Potential induced photon production
The reverse process to photoinduced electron transfer is displayed by light emitting diodes (LED) and chemiluminescence, where potential gradients are used to create excited states that decay by light emission.
References
Photochemistry
Chemical reactions
Optoelectronics | Photoinduced electron transfer | Chemistry | 588 |
323,646 | https://en.wikipedia.org/wiki/Wilson%20prime | In number theory, a Wilson prime is a prime number such that divides , where "" denotes the factorial function; compare this with Wilson's theorem, which states that every prime divides . Both are named for 18th-century English mathematician John Wilson; in 1770, Edward Waring credited the theorem to Wilson, although it had been stated centuries earlier by Ibn al-Haytham.
The only known Wilson primes are 5, 13, and 563. Costa et al. write that "the case is trivial", and credit the observation that 13 is a Wilson prime to an earlier author. Early work on these numbers included searches by N. G. W. H. Beeger and Emma Lehmer, but 563 was not discovered until the early 1950s, when computer searches could be applied to the problem. If any others exist, they must be greater than 2 × 10^13. It has been conjectured that infinitely many Wilson primes exist, and that the number of Wilson primes in an interval [x, y] is about log(log(y)/log(x)).
Several computer searches have been done in the hope of finding new Wilson primes.
The Ibercivis distributed computing project includes a search for Wilson primes. Another search was coordinated at the Great Internet Mersenne Prime Search forum.
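A brute-force check of the defining condition can be written in a few lines of Python; the sketch below reduces the factorial modulo p² at every step, which keeps the arithmetic small (actual record searches use far more sophisticated methods than this).

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_wilson_prime(p):
    m = p * p
    fact = 1
    for k in range(2, p):          # compute (p - 1)! modulo p^2
        fact = fact * k % m
    return (fact + 1) % m == 0

print([p for p in range(2, 1000) if is_prime(p) and is_wilson_prime(p)])
# expected output: [5, 13, 563]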
Generalizations
Wilson primes of order
Wilson's theorem can be expressed in general as (n − 1)!(p − n)! ≡ (−1)^n (mod p) for every integer n ≥ 1 and prime p ≥ n. Generalized Wilson primes of order n are the primes p such that p² divides (n − 1)!(p − n)! − (−1)^n.
It was conjectured that for every natural number , there are infinitely many Wilson primes of order .
The smallest generalized Wilson primes of order are:
Near-Wilson primes
A prime p satisfying the congruence (p − 1)! ≡ −1 + Bp (mod p²) with small |B| can be called a near-Wilson prime. Near-Wilson primes with B = 0 are bona fide Wilson primes.
Wilson numbers
A Wilson number is a natural number n such that W(n) ≡ 0 (mod n²), where W(n) denotes the product of the positive integers up to n that are relatively prime to n, plus or minus 1; the term is positive if and only if n has a primitive root and negative otherwise. For every natural number n, W(n) is divisible by n, and the quotients are called generalized Wilson quotients.
If a Wilson number is prime, then it is a Wilson prime. There are 13 Wilson numbers up to 5 × 10^13.
See also
PrimeGrid
Table of congruences
Wall–Sun–Sun prime
Wieferich prime
Wolstenholme prime
References
Further reading
External links
The Prime Glossary: Wilson prime
Status of the search for Wilson primes
Classes of prime numbers
Factorial and binomial topics
Unsolved problems in number theory | Wilson prime | Mathematics | 523 |
33,502,579 | https://en.wikipedia.org/wiki/Arthrobotrys | Arthrobotrys is a genus of mitosporic fungi in the family Orbiliaceae. There are 71 species. They are predatory fungi that capture and feed on nematode worms. Firstly, there are rings that form on the hyphae constrict and entrap the worms, then hyphae grow into the worm and digest it.
Species
Arthrobotrys aggregata Mekht. 1979
Arthrobotrys alaskana (Matsush.) Oorschot 1985
Arthrobotrys amerospora S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys anomala G.L. Barron & J.G.N. Davidson 1972
Arthrobotrys apscheronica Mekht. 1973
Arthrobotrys arthrobotryoides (Berl.) Lindau 1906
Arthrobotrys azerbaijanica (Mekht.) Oorschot 1985
Arthrobotrys bakunika Mekht. 1979
Arthrobotrys botryospora G.L. Barron 1979
Arthrobotrys brochopaga (Drechsler) S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys chazarica Mekht. 1998
Arthrobotrys chilensis Allesch. & Henn. 1897
Arthrobotrys cladodes Drechsler 1937
Arthrobotrys clavispora (R.C. Cooke) S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys compacta Mekht. 1973
Arthrobotrys conoides Drechsler 1937
Arthrobotrys constringens Dowsett, J. Reid & Kalkat 1984
Arthrobotrys cylindrospora (R.C. Cooke) S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys dactyloides Drechsler 1937
Arthrobotrys deflectens Bres. 1903
Arthrobotrys dendroides Kuthub., Muid & J. Webster 1985
Arthrobotrys doliiformis Soprunov 1958
Arthrobotrys drechsleri Soprunov 1958
Arthrobotrys elegans (Subram. & Chandrash.) Seifert & W.B. Kendr. 1983
Arthrobotrys ellipsospora Tubaki & K. Yamanaka 1984
Arthrobotrys entomopaga Drechsler 1944
Arthrobotrys ferox Onofri & S. Tosi 1992
Arthrobotrys foliicola Matsush. 1975
Arthrobotrys fruticulosa Mekht. 1979
Arthrobotrys globospora (Soprunov) Sidorova, Gorlenko & Nalepina 1964, (Soprunov) Mekht. 1964
Arthrobotrys haptospora (Drechsler) S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys hertziana M. Scholler & A. Rubner 1999
Arthrobotrys indica (Chowdhry & Bahl) M. Scholler, Hagedorn & A. Rubner 1999
Arthrobotrys irregularis (Matr.) Mekht. 1971
Arthrobotrys javanica (Rifai & R.C. Cooke) Jarow. 1970
Arthrobotrys kirghizica Soprunov 1958
Arthrobotrys longa Mekht. 1973
Arthrobotrys longiphora (Xing Z. Liu & B.S. Lu) M. Scholler, Hagedorn & A. Rubner 1999
Arthrobotrys longiramulifera Matsush. 1995
Arthrobotrys longispora Soprunov, Preuss 1853
Arthrobotrys mangrovispora Swe, Jeewon, Pointing & K.D. Hyde 2008
Arthrobotrys megaspora (Boedijn) Oorschot 1985
Arthrobotrys microscaphoides (Xing Z. Liu & B.S. Lu) M. Scholler, Hagedorn & A. Rubner 1999
Arthrobotrys microspora (Soprunov) Mekht. 1971
Arthrobotrys multisecundaria W.F. Hu & K.Q. Zhang 2006
Arthrobotrys musiformis Drechsler 1937
Arthrobotrys nematopaga (Mekht. & Faizieva) A. Rubner 1996
Arthrobotrys nonseptata Z.F. Yu, S.F. Li & K.Q. Zhang 2009
Arthrobotrys oligospora Fresen. 1850
Arthrobotrys oudemansii M. Scholler, Hagedorn & A. Rubner 2000
Arthrobotrys oviformis Soprunov 1958
Arthrobotrys perpasta (R.C. Cooke) Jarow. 1970
Arthrobotrys polycephala (Drechsler) Rifai 1968
Arthrobotrys pseudoclavata (Z.Q. Miao & Xing Z. Liu) J. Chen, L.L. Xu, B. Liu & Xing Z. Liu 2007
Arthrobotrys pyriformis (Juniper) Schenk, W.B. Kendr. & Pramer 1977
Arthrobotrys recta Preuss 1851
Arthrobotrys robusta Dudd. 1952
Arthrobotrys rosea Massee 1885
Arthrobotrys scaphoides (Peach) S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys sclerohypha (Drechsler) S. Schenck, W.B. Kendr. & Pramer 1977
Arthrobotrys shahriari (Mekht.) M. Scholler, Hagedorn & A. Rubner 1999
Arthrobotrys shizishanna (X.F. Liu & K.Q. Zhang) J. Chen, L.L. Xu, B. Liu & Xing Z. Liu 2007
Arthrobotrys sinensis (Xing Z. Liu & K.Q. Zhang) M. Scholler, Hagedorn & A. Rubner 1999
Arthrobotrys soprunovii Mekht. 1979
Arthrobotrys stilbacea J.A. Mey. 1958
Arthrobotrys straminicola Pidopl. 1948
Arthrobotrys superba Corda 1839
Arthrobotrys tabrizica (Mekht.) M. Scholler, Hagedorn & A. Rubner 1999
Arthrobotrys venusta K.Q. Zhang 1994
Arthrobotrys vermicola (R.C. Cooke & Satchuth.) Rifai 1968
Arthrobotrys yunnanensis M.H. Mo & K.Q. Zhang 2005
References
http://www.indexfungorum.org
Ascomycota genera
Fungal pest control agents
Pezizomycotina | Arthrobotrys | Biology | 1,570 |
11,762,174 | https://en.wikipedia.org/wiki/Ecological%20self | In environmental philosophy, ecological self is central to the school of Experiential Deep Ecology, which, based on the work of Norwegian philosopher Arne Næss, argues that through the process of self-actualisation, one transcends the notions of the individuated "egoic" self and arrives at a position of an ecological self. So long as one is working within the narrower concept of self, Næss argues, environmentally responsible behaviour is a form of altruism, a "doing good for the other", which historically has been a precarious ethical basis, usually involved in exhorting others to "be good". Næss argues that in his Ecosophy, the enlargement of the ego-self to the eco-self results in environmentally responsible behaviour as a form of self-interest.
Warwick Fox argued that Næss's philosophy was based upon a variety of "transpersonal ecology" in which self-interest is firmly embedded within the interests of the ecological community and ecosphere of which the self is a part.
As deep ecologist John Seed has stated, "Deep ecology critiques the idea that we are the crown of creation, the measure of all being: that the world is a pyramid with humanity rightly on top, merely a resource, and that nature has instrumental value only". The concept of the Ecological Self goes beyond anthropocentrism, which, by contrast locates human concerns as the exclusive source of all value. It draws upon the Land Ethic of Aldo Leopold. Leopold argued that within conventional ethics, the land itself was considered only as property, occupying a role analogous to slavery in earlier societies that permitted the ownership of people. By comparison a land ethic enlarges the boundary of moral concern to include "soils, waters, plants, and animals, or collectively: the land". The basis of such a non-anthropocentric ethic, according to Leopold was that "A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise."
Like Thomas Berry and Brian Swimme, ecological philosopher Freya Mathews argues that in considering the ecological self, we need to look beyond the present to the "deep time" of ages past, in the evolution of life and the creation of the cosmos, in order to consider the real roots of human consciousness. Experiential deep ecologist Joanna Macy speaks of the Ecological Self in her book "World as Lover, World as Self", and uses the concept in her work on "Deep Time".
See also
Ecopsychology
Value-action gap
References
External links
"The Land Ethic" by Aldo Leopold
Earthprayer by John Seed
Deep ecology
Environmental ethics
Environmentalism
Self
Arne Næss | Ecological self | Biology,Environmental_science | 581 |
19,817,299 | https://en.wikipedia.org/wiki/Xnee | GNU Xnee is a suite of programs that can record, replay and distribute user actions under the X11 environment. It can be used for testing and demonstrating X11 applications.
Within X11 each user input (mouse click or key press) is an X Window System event. Xnee records these events into a file. Later Xnee is used to play the events back from the file and into an X Window System just as though the user were operating the system.
Xnee can also be used to play or distribute user input events to two or more machines in parallel.
As the target X Window application sees what appears to be physical user input, this has resulted in Xnee being dubbed "Xnee is Not an Event Emulator."
As Xnee is free software, it can be modified to handle special tasks. For example, inserting time stamps as part of the playback.
Software suite
cnee is a command line interface. Its name is a recursive acronym which means in English: “cnee's not an event emulator”.
gnee is a graphical interface (recursive acronym meaning in English “gnee's not an emulator either”).
pnee is a GNOME applet (recursive acronym meaning “pnee's not even emulating”).
libxnee is a software library used by cnee, pnee and gnee. (recursive acronym meaning in English “libxnee is basically xnee”, which can be translated as “libxnee is the very essence of xnee”).
See also
AutoHotkey
AutoIt
Automator (for Macintosh)
Automise
Bookmarklet
External links
X11::GUITest::record - Perl implementation of the X11 record extension
X11::GUITest - X11 Recording / Playbook using Perl script
References
Automation software
Xnee
X Window System | Xnee | Engineering | 388 |
434,326 | https://en.wikipedia.org/wiki/Hydrogeology | Hydrogeology (hydro- meaning water, and -geology meaning the study of the Earth) is the area of geology that deals with the distribution and movement of groundwater in the soil and rocks of the Earth's crust (commonly in aquifers). The terms groundwater hydrology, geohydrology, and hydrogeology are often used interchangeably, though hydrogeology is the most commonly used.
Hydrogeology is the study of the laws governing the movement of subterranean water, the mechanical, chemical, and thermal interaction of this water with the porous solid, and the transport of energy, chemical constituents, and particulate matter by flow (Domenico and Schwartz, 1998).
Groundwater engineering, another name for hydrogeology, is a branch of engineering which is concerned with groundwater movement and design of wells, pumps, and drains. The main concerns in groundwater engineering include groundwater contamination, conservation of supplies, and water quality.
Wells are constructed for use in developing nations, as well as for use in developed nations in places which are not connected to a city water system. Wells are designed and maintained to uphold the integrity of the aquifer, and to prevent contaminants from reaching the groundwater. Controversy arises in the use of groundwater when its usage impacts surface water systems, or when human activity threatens the integrity of the local aquifer system.
Introduction
Hydrogeology is an interdisciplinary subject; it can be difficult to account fully for the chemical, physical, biological, and even legal interactions between soil, water, nature, and society. The study of the interaction between groundwater movement and geology can be quite complex. Groundwater does not always follow the surface topography; groundwater follows pressure gradients (flow from high pressure to low), often through fractures and conduits in circuitous paths. Taking into account the interplay of the different facets of a multi-component system often requires knowledge in several diverse fields at both the experimental and theoretical levels. The following is a more traditional introduction to the methods and nomenclature of saturated subsurface hydrology.
Hydrogeology in relation to other fields
Hydrogeology, as stated above, is a branch of the earth sciences dealing with the flow of water through aquifers and other shallow porous media (typically less than 450 meters below the land surface). The very shallow flow of water in the subsurface (the upper 3 m) is pertinent to the fields of soil science, agriculture, and civil engineering, as well as to hydrogeology. The general flow of fluids (water, hydrocarbons, geothermal fluids, etc.) in deeper formations is also a concern of geologists, geophysicists, and petroleum geologists. Groundwater is a slow-moving, viscous fluid (with a Reynolds number less than unity); many of the empirically derived laws of groundwater flow can be alternately derived in fluid mechanics from the special case of Stokes flow (viscosity and pressure terms, but no inertial term).
The mathematical relationships used to describe the flow of water through porous media are Darcy's law, the diffusion equation, and the Laplace equation, which have applications in many diverse fields. Steady groundwater flow (Laplace equation) has been simulated using electrical, elastic, and heat conduction analogies. Transient groundwater flow is analogous to the diffusion of heat in a solid; therefore, some solutions to hydrological problems have been adapted from the heat transfer literature.
Traditionally, the movement of groundwater has been studied separately from surface water, climatology, and even the chemical and microbiological aspects of hydrogeology (the processes are uncoupled). As the field of hydrogeology matures, the strong interactions between groundwater, surface water, water chemistry, soil moisture, and even climate are becoming more clear.
California and Washington both require special certification of hydrogeologists to offer professional services to the public. Twenty-nine states require professional licensing for geologists to offer their services to the public, which often includes work within the domains of developing, managing, and/or remediating groundwater resources.
For example, aquifer drawdown or overdrafting and the pumping of fossil water may be a contributing factor to sea-level rise.
Subjects
One of the main tasks a hydrogeologist typically performs is the prediction of future behavior of an aquifer system, based on analysis of past and present observations. Some hypothetical, but characteristic questions asked would be:
Can the aquifer support another subdivision?
Will the river dry up if the farmer doubles his irrigation?
Did the chemicals from the dry cleaning facility travel through the aquifer to my well and make me sick?
Will the plume of effluent leaving my neighbor's septic system flow to my drinking water well?
Most of these questions can be addressed through simulation of the hydrologic system (using numerical models or analytic equations). Accurate simulation of the aquifer system requires knowledge of the aquifer properties and boundary conditions. Therefore, a common task of the hydrogeologist is determining aquifer properties using aquifer tests.
In order to further characterize aquifers and aquitards some primary and derived physical properties are introduced below. Aquifers are broadly classified as being either confined or unconfined (water table aquifers), and either saturated or unsaturated; the type of aquifer affects what properties control the flow of water in that medium (e.g., the release of water from storage for confined aquifers is related to the storativity, while it is related to the specific yield for unconfined aquifers).
Aquifers
An aquifer is a collection of water underneath the surface, large enough to be useful in a spring or a well. Aquifers can be unconfined, where the top of the aquifer is defined by the water table, or confined, where the aquifer exists underneath a confining bed.
There are three aspects that control the nature of aquifers: stratigraphy, lithology, and geological formations and deposits. The stratigraphy relates the age and geometry of the many formations that compose the aquifer. The lithology refers to the physical components of an aquifer, such as the mineral composition and grain size. The structural features are the elements that arise due to deformations after deposition, such as fractures and folds. Understanding these aspects is paramount to understanding of how an aquifer is formed and how professionals can utilize it for groundwater engineering.
Hydraulic head
Differences in hydraulic head (h) cause water to move from one place to another; water flows from locations of high h to locations of low h. Hydraulic head is composed of pressure head (ψ) and elevation head (z). The head gradient is the change in hydraulic head per length of flowpath, and appears in Darcy's law as being proportional to the discharge.
Hydraulic head is a directly measurable property that can take on any value (because of the arbitrary datum involved in the z term); ψ can be measured with a pressure transducer (this value can be negative, e.g., suction, but is positive in saturated aquifers), and z can be measured relative to a surveyed datum (typically the top of the well casing). Commonly, in wells tapping unconfined aquifers the water level in a well is used as a proxy for hydraulic head, assuming there is no vertical gradient of pressure. Often only changes in hydraulic head through time are needed, so the constant elevation head term can be left out (Δh = Δψ).
A record of hydraulic head through time at a well is called a hydrograph; the changes in hydraulic head recorded during the pumping of a well in an aquifer test are called drawdown.
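The relationship between hydraulic head, elevation head and pressure head lends itself to a very small worked example. The following Python sketch uses hypothetical well readings; it simply adds the two head components and compares the heads at two wells to show the direction of flow, and is an illustration rather than a field procedure.

```python
# Hydraulic head h is the sum of elevation head z and pressure head psi.
# The values below are hypothetical: z would come from a surveyed datum
# (e.g., the top of the well casing) and psi from a pressure transducer.

def hydraulic_head(z_m: float, psi_m: float) -> float:
    """Return hydraulic head h = z + psi, in metres."""
    return z_m + psi_m

well_a = hydraulic_head(z_m=120.0, psi_m=15.3)   # 135.3 m
well_b = hydraulic_head(z_m=118.0, psi_m=12.1)   # 130.1 m

# Water flows from high head to low head, so flow here is from well A toward well B.
delta_h = well_a - well_b
print(f"h_A = {well_a} m, h_B = {well_b} m, difference = {delta_h:.1f} m")
```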
Porosity
Porosity (n) is a directly measurable aquifer property; it is a fraction between 0 and 1 indicating the amount of pore space between unconsolidated soil particles or within a fractured rock. Typically, the majority of groundwater (and anything dissolved in it) moves through the porosity available to flow (sometimes called effective porosity). Permeability is an expression of the connectedness of the pores. For instance, an unfractured rock unit may have a high porosity (it has many holes between its constituent grains), but a low permeability (none of the pores are connected). An example of this phenomenon is pumice, which, when in its unfractured state, can make a poor aquifer.
Porosity does not directly affect the distribution of hydraulic head in an aquifer, but it has a very strong effect on the migration of dissolved contaminants, since it affects groundwater flow velocities through an inversely proportional relationship.
Darcy's law is commonly applied to study the movement of water, or other fluids through porous media, and constitutes the basis for many hydrogeological analyses.
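As a minimal sketch of how Darcy's law and effective porosity are used together, the following Python fragment computes a specific discharge and the corresponding seepage (average linear) velocity. The hydraulic conductivity, head gradient and effective porosity are assumed, order-of-magnitude values, not data from any particular aquifer.

```python
# Darcy's law: specific discharge q = -K * (dh/dl).
# Seepage (average linear) velocity: v = q / n_e, with n_e the effective porosity.
# All parameter values below are assumed and purely illustrative.

def darcy_flux(K: float, dh: float, dl: float) -> float:
    """Specific discharge in m/s (K in m/s; head change dh over flow length dl, both in m)."""
    return -K * dh / dl

def seepage_velocity(q: float, n_e: float) -> float:
    """Average linear velocity of the water (and anything dissolved in it)."""
    return q / n_e

K = 1e-4                 # hydraulic conductivity of a clean sand, m/s (typical order of magnitude)
dh, dl = -2.0, 100.0     # 2 m of head loss over a 100 m flow path
n_e = 0.25               # effective porosity (fraction)

q = darcy_flux(K, dh, dl)         # 2e-6 m/s
v = seepage_velocity(q, n_e)      # 8e-6 m/s, roughly 0.7 m/day
print(f"q = {q:.1e} m/s, v = {v:.1e} m/s ({v * 86400:.2f} m/day)")
```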
Water content
Water content (θ) is also a directly measurable property; it is the fraction of the total rock which is filled with liquid water. This is also a fraction between 0 and 1, but it must also be less than or equal to the total porosity.
The water content is very important in vadose zone hydrology, where the hydraulic conductivity is a strongly nonlinear function of water content; this complicates the solution of the unsaturated groundwater flow equation.
Hydraulic conductivity
Hydraulic conductivity (K) is a measure of permeability that is a property of both the fluid and the porous medium (i.e., the hydraulic conductivity of water and of oil will not be the same even in the same geologic formation). Transmissivity is the product of hydraulic conductivity and the aquifer thickness (typically used as an indication of the ability of an aquifer to deliver water to a well). Intrinsic permeability (κ) is a property of the porous medium alone and does not change with different fluids (e.g., different density or viscosity); it is used more in the petroleum industry.
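These definitions translate directly into two one-line calculations: transmissivity as conductivity times saturated thickness, and the standard conversion from intrinsic permeability to hydraulic conductivity for a given fluid. The following sketch uses assumed values for water at ordinary temperature; the permeability and thickness are hypothetical.

```python
# Transmissivity T = K * b, and the usual conversion K = kappa * rho * g / mu
# between intrinsic permeability (a property of the medium) and hydraulic
# conductivity (a property of medium plus fluid). Values are illustrative only.

def transmissivity(K: float, b: float) -> float:
    """T in m^2/s from K in m/s and saturated thickness b in m."""
    return K * b

def conductivity_from_permeability(kappa: float,
                                   rho: float = 1000.0,    # water density, kg/m^3
                                   g: float = 9.81,        # gravity, m/s^2
                                   mu: float = 1.0e-3) -> float:   # water viscosity, Pa*s
    """K = kappa * rho * g / mu, with kappa in m^2 and the result in m/s."""
    return kappa * rho * g / mu

kappa = 1.0e-11                                # intrinsic permeability of a sand, m^2 (assumed)
K = conductivity_from_permeability(kappa)      # about 1e-4 m/s for water
T = transmissivity(K, b=20.0)                  # for a 20 m thick aquifer
print(f"K = {K:.2e} m/s, T = {T:.2e} m^2/s")
```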
Specific storage and specific yield
Specific storage (Ss) and its depth-integrated equivalent, storativity (S=Ssb), are indirect aquifer properties (they cannot be measured directly); they indicate the amount of groundwater released from storage due to a unit depressurization of a confined aquifer. They are fractions between 0 and 1.
Specific yield (Sy) is also a ratio between 0 and 1 (Sy ≤ porosity) and indicates the amount of water released due to drainage from lowering the water table in an unconfined aquifer. The value for specific yield is less than the value for porosity because some water will remain in the medium even after drainage due to intermolecular forces. Often the porosity or effective porosity is used as an upper bound to the specific yield. Typically Sy is orders of magnitude larger than Ss.
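The practical difference between storativity and specific yield can be shown with a simple volume calculation: the volume of water released by one metre of head decline over one square kilometre. The storage coefficients used below are assumed, typical orders of magnitude rather than measured values.

```python
# Water released from storage for a head decline dh over a plan-view area A:
#   confined aquifer:   V = S  * A * dh   (S = storativity)
#   unconfined aquifer: V = Sy * A * dh   (Sy = specific yield)
# The coefficients below are assumed, typical orders of magnitude.

A = 1.0e6       # 1 km^2, in m^2
dh = 1.0        # 1 m of head decline (or water-table drop)
S = 5.0e-4      # storativity of a hypothetical confined aquifer
Sy = 0.15       # specific yield of a hypothetical unconfined sand aquifer

v_confined = S * A * dh        # 500 m^3
v_unconfined = Sy * A * dh     # 150,000 m^3
print(f"confined: {v_confined:.0f} m^3 released; unconfined: {v_unconfined:.0f} m^3 released")
```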
Fault zone hydrogeology
Fault zone hydrogeology is the study of how brittlely deformed rocks alter fluid flow in different lithological settings, such as clastic, igneous and carbonate rocks. Fluid movement, which can be quantified in terms of permeability, can be facilitated or impeded by the existence of a fault zone, because different deformation mechanisms and deformed rocks can alter the porosity and hence the permeability within the fault zone. The fluids involved are generally groundwater (fresh and marine waters) and hydrocarbons (oil and gas). Because a fault zone is a zone of weakness, it increases the thickness of the weathered zone and hence aids groundwater recharge. Along with faults, fractures and foliations also facilitate groundwater flow, mainly in hard-rock terrains.
Contaminant transport properties
Often we are interested in how the moving groundwater will transport dissolved contaminants around (the sub-field of contaminant hydrogeology). The contaminants which are man-made (e.g., petroleum products, nitrate, chromium or radionuclides) or naturally occurring (e.g., arsenic, salinity), can be transported through three main mechanisms, advection (transport along the main direction of flow at seepage velocity), diffusion (migration of the contaminant from high to low concentration areas), and hydrodynamic dispersion (due to microscale heterogeneities present in the porous medium and non-uniform velocity distribution relative to seepage velocity). Besides needing to understand where the groundwater is flowing, based on the other hydrologic properties discussed above, there are additional aquifer properties which affect how dissolved contaminants move with groundwater.
Hydrodynamic dispersion
Hydrodynamic dispersivity (αL, αT) is an empirical factor which quantifies how much contaminants stray away from the path of the groundwater which is carrying it. Some of the contaminants will be "behind" or "ahead" the mean groundwater, giving rise to a longitudinal dispersivity (αL), and some will be "to the sides of" the pure advective groundwater flow, leading to a transverse dispersivity (αT). Dispersion in groundwater arises because each water "particle", passing beyond a soil particle, must choose where to go, whether left or right or up or down, so that the water "particles" (and their solute) are gradually spread in all directions around the mean path. This is the "microscopic" mechanism, on the scale of soil particles. More important, over long distances, can be the macroscopic inhomogeneities of the aquifer, which can have regions of larger or smaller permeability, so that some water can find a preferential path in one direction, some other in a different direction, so that the contaminant can be spread in a completely irregular way, like in a (three-dimensional) delta of a river.
Dispersivity is actually a factor which represents our lack of information about the system we are simulating. There are many small details about the aquifer which are effectively averaged when using a macroscopic approach (e.g., tiny beds of gravel and clay in sand aquifers); these manifest themselves as an apparent dispersivity. Because of this, α is often claimed to be dependent on the length scale of the problem — the dispersivity found for transport through 1 m3 of aquifer is different from that for transport through 1 cm3 of the same aquifer material.
Molecular diffusion
Diffusion is a fundamental physical phenomenon, which Albert Einstein characterized as Brownian motion, that describes the random thermal movement of molecules and small particles in gases and liquids. It is an important phenomenon for small distances (it is essential for the achievement of thermodynamic equilibria), but, as the time necessary to cover a distance by diffusion is proportional to the square of the distance itself, it is less effective for spreading a solute over macroscopic distances on a short time scale. The diffusion coefficient is typically quite small, and its effect can often be neglected (unless groundwater flow velocities are extremely low, as they are in clay aquitards).
It is important not to confuse diffusion with dispersion: the former is a physical phenomenon, while the latter is an empirical hydrodynamic factor that is cast into a form similar to diffusion because this is a convenient way to describe and solve the problem mathematically.
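The statement that diffusion time grows with the square of the distance can be made concrete with a rough scaling estimate, t ≈ L²/D. In the sketch below the diffusion coefficient is an assumed, typical order of magnitude for a dissolved species in water, so the resulting times are only indicative.

```python
# Rough diffusion time scale: t ~ L**2 / D.
# D is an assumed, typical order of magnitude for a dissolved species in water.

D = 1.0e-9                       # molecular diffusion coefficient, m^2/s (assumed)
seconds_per_year = 3.15e7

for L in (0.001, 0.1, 1.0):      # 1 mm, 10 cm, 1 m
    t = L**2 / D
    print(f"L = {L:5} m  ->  t ~ {t:.1e} s  (~{t / seconds_per_year:.1e} years)")
```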
Retardation by adsorption
The retardation factor is another very important feature that makes the motion of a contaminant deviate from the average groundwater motion. It is analogous to the retardation factor of chromatography. Unlike diffusion and dispersion, which simply spread the contaminant, the retardation factor changes its global average velocity, so that it can be much slower than that of the water. This is due to a chemico-physical effect: adsorption to the soil, which holds the contaminant back and does not allow it to progress until the quantity corresponding to the chemical adsorption equilibrium has been adsorbed. This effect is particularly important for less soluble contaminants, which can thus move hundreds or thousands of times more slowly than the water. The effect of this phenomenon is that only the more soluble species can cover long distances. The retardation factor depends on the chemical nature of both the contaminant and the aquifer.
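The article does not give a formula for the retardation factor, but a commonly used linear-isotherm form (an assumption here, valid only when sorption is linear and reversible) is R = 1 + (ρb/θ)Kd, and the contaminant's average velocity is the groundwater velocity divided by R. The sketch below uses hypothetical soil and contaminant properties.

```python
# Linear-isotherm retardation factor (an assumed, commonly used form):
#   R = 1 + (rho_b / theta) * Kd
# The contaminant then moves at v_water / R on average.

def retardation_factor(rho_b: float, theta: float, Kd: float) -> float:
    """rho_b: bulk density (kg/L), theta: water-filled porosity (-), Kd: distribution coefficient (L/kg)."""
    return 1.0 + (rho_b / theta) * Kd

rho_b = 1.6     # bulk density, kg/L (illustrative)
theta = 0.30    # water-filled porosity
Kd = 2.0        # sorption distribution coefficient for a hypothetical contaminant, L/kg

R = retardation_factor(rho_b, theta, Kd)   # about 11.7
v_water = 0.5                              # groundwater seepage velocity, m/day (assumed)
print(f"R = {R:.1f}; contaminant velocity ~ {v_water / R:.3f} m/day versus {v_water} m/day for water")
```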
History and development
Henry Darcy: 19th century
Henry Darcy was a French scientist who made advances in flow of fluids through porous materials. He conducted experiments which studied the movement of fluids through sand columns. These experiments led to the determination of Darcy's law, which describes fluid flow through a medium with high levels of porosity. Darcy's work is considered to be the beginning of quantitative hydrogeology.
Oscar Edward Meinzer: 20th century
Oscar Edward Meinzer was an American scientist who is often called the "father of modern groundwater hydrology". He standardized key terms in the field as well as determined principles regarding occurrence, movement, and discharge. He proved that the flow of water obeys Darcy's law. He also proposed the use of geophysical methods and recorders on wells, as well as suggested pumping tests to gather quantitative information on the properties of aquifers. Meinzer also highlighted the importance of studying the geochemistry of water, as well as the impact of high salinity levels in aquifers.
Governing equations
Darcy's law
Darcy's law is a constitutive equation, empirically derived by Henry Darcy in 1856, which states that the amount of groundwater discharging through a given portion of aquifer is proportional to the cross-sectional area of flow, the hydraulic gradient, and the hydraulic conductivity.
Groundwater flow equation
The groundwater flow equation, in its most general form, describes the movement of groundwater in a porous medium (aquifers and aquitards). It is known in mathematics as the diffusion equation, and has many analogs in other fields. Many solutions for groundwater flow problems were borrowed or adapted from existing heat transfer solutions.
It is often derived from a physical basis using Darcy's law and a conservation of mass for a small control volume. The equation is often used to predict flow to wells, which have radial symmetry, so the flow equation is commonly solved in polar or cylindrical coordinates.
The Theis equation is one of the most commonly used and fundamental solutions to the groundwater flow equation; it can be used to predict the transient evolution of head due to the effects of pumping one or a number of pumping wells.
The Thiem equation is a solution to the steady state groundwater flow equation (Laplace's Equation) for flow to a well. Unless there are large sources of water nearby (a river or lake), true steady-state is rarely achieved in reality.
Both of the above equations are used in aquifer tests (pump tests).
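Both solutions reduce to short formulas that are easy to evaluate. The sketch below implements the Theis drawdown, s = Q/(4πT)·W(u) with u = r²S/(4Tt), where the well function W(u) is the exponential integral (available in SciPy as scipy.special.exp1), and the Thiem steady-state drawdown difference between two radii, Δs = Q/(2πT)·ln(r2/r1). The pumping rate and aquifer properties are hypothetical.

```python
# Theis (transient) and Thiem (steady-state) drawdown around a pumping well.
# W(u) is the exponential integral E1, available as scipy.special.exp1.
# All parameter values below are hypothetical.
import math
from scipy.special import exp1

def theis_drawdown(Q: float, T: float, S: float, r: float, t: float) -> float:
    """Drawdown s (m) at radius r (m) and time t (s); Q in m^3/s, T in m^2/s, S dimensionless."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * exp1(u)

def thiem_drawdown_difference(Q: float, T: float, r1: float, r2: float) -> float:
    """Steady-state drawdown difference between observation radii r1 < r2 (m)."""
    return Q / (2.0 * math.pi * T) * math.log(r2 / r1)

Q = 0.01       # pumping rate, m^3/s (864 m^3/day)
T = 5.0e-3     # transmissivity, m^2/s
S = 2.0e-4     # storativity

s = theis_drawdown(Q, T, S, r=30.0, t=86400.0)           # 30 m from the well after one day
ds = thiem_drawdown_difference(Q, T, r1=30.0, r2=300.0)
print(f"Theis drawdown: {s:.2f} m; Thiem drawdown difference (30 m vs 300 m): {ds:.2f} m")
```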
The Hooghoudt equation is a groundwater flow equation applied to subsurface drainage by pipes, tile drains or ditches. An alternative subsurface drainage method is drainage by wells for which groundwater flow equations are also available.
Calculation of groundwater flow
To use the groundwater flow equation to estimate the distribution of hydraulic heads, or the direction and rate of groundwater flow, this partial differential equation (PDE) must be solved. The most common means of analytically solving the diffusion equation in the hydrogeology literature are:
Laplace, Hankel and Fourier transforms (to reduce the number of dimensions of the PDE),
similarity transform (also called the Boltzmann transform) is commonly how the Theis solution is derived,
separation of variables, which is more useful for non-Cartesian coordinates, and
Green's functions, which is another common method for deriving the Theis solution — from the fundamental solution to the diffusion equation in free space.
No matter which method is used to solve the groundwater flow equation, both initial conditions (heads at time t = 0) and boundary conditions (representing either the physical boundaries of the domain, or an approximation of the domain beyond that point) are needed. Often the initial conditions for a transient simulation are supplied by a corresponding steady-state simulation (where the time derivative in the groundwater flow equation is set equal to 0).
There are two broad categories of methods for solving the PDE: analytical methods and numerical methods (with some approaches falling in between). Typically, analytic methods solve the groundwater flow equation exactly under a simplified set of conditions, while numerical methods solve it approximately under more general conditions.
Analytic methods
Analytic methods typically use the structure of mathematics to arrive at a simple, elegant solution, but the required derivation for all but the simplest domain geometries can be quite complex (involving non-standard coordinates, conformal mapping, etc.). Analytic solutions typically are also simply an equation that can give a quick answer based on a few basic parameters. The Theis equation is a very simple (yet still very useful) analytic solution to the groundwater flow equation, typically used to analyze the results of an aquifer test or slug test.
Numerical methods
The topic of numerical methods is quite large, being of use to most fields of engineering and science in general. Numerical methods have been around much longer than computers have (in the 1920s Richardson developed some of the finite difference schemes still in use today, but they were calculated by hand, using paper and pencil, by human "calculators"), but they have become very important through the availability of fast and cheap personal computers. A quick survey of the main numerical methods used in hydrogeology, and some of the most basic principles, is given below and discussed further in the Groundwater model article.
There are two broad categories of numerical methods: gridded or discretized methods and non-gridded or mesh-free methods. In the common finite difference method and finite element method (FEM) the domain is completely gridded ("cut" into a grid or mesh of small elements). The analytic element method (AEM) and the boundary integral equation method (BIEM — sometimes also called BEM, or Boundary Element Method) are only discretized at boundaries or along flow elements (line sinks, area sources, etc.), the majority of the domain is mesh-free.
General properties of gridded methods
Gridded Methods like finite difference and finite element methods solve the groundwater flow equation by breaking the problem area (domain) into many small elements (squares, rectangles, triangles, blocks, tetrahedra, etc.) and solving the flow equation for each element (all material properties are assumed constant or possibly linearly variable within an element), then linking together all the elements using conservation of mass across the boundaries between the elements (similar to the divergence theorem). This results in a system which overall approximates the groundwater flow equation, but exactly matches the boundary conditions (the head or flux is specified in the elements which intersect the boundaries).
Finite differences are a way of representing continuous differential operators using discrete intervals (Δx and Δt), and the finite difference methods are based on these (they are derived from a Taylor series). For example, the first-order time derivative is often approximated using the following forward finite difference, where the subscripts indicate a discrete time location: ∂h/∂t ≈ (h_(t+1) - h_t) / Δt.
The explicit forward difference approximation is only conditionally stable, but it can be used to "march" forward in the time direction, solving one grid node at a time (or possibly in parallel, since each node depends only on its immediate neighbors). The corresponding backward (implicit) difference is unconditionally stable, but it leads to a coupled set of equations that must be solved simultaneously using matrix methods (e.g., LU or Cholesky decomposition). Rather than the finite difference method, sometimes the Galerkin FEM approximation is used in space (this is different from the type of FEM often used in structural engineering) with finite differences still used in time.
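A minimal sketch of the explicit scheme described above is shown below for one-dimensional transient flow in a confined aquifer, ∂h/∂t = (T/S)·∂²h/∂x², with fixed-head boundaries. The grid, time step and aquifer properties are hypothetical, and the time step is kept below the usual explicit stability limit Δt ≤ Δx²/(2D).

```python
# Explicit (forward-in-time, centred-in-space) finite differences for
#   dh/dt = D * d2h/dx2,  with hydraulic diffusivity D = T / S.
# Fixed-head boundary conditions; all values are hypothetical.

T = 1.0e-3                       # transmissivity, m^2/s
S = 1.0e-4                       # storativity
D = T / S                        # hydraulic diffusivity, m^2/s

dx = 10.0                        # grid spacing, m
dt = 0.4 * dx**2 / (2.0 * D)     # stay below the explicit stability limit dx^2 / (2 D)

nx, nsteps = 21, 1000
h = [10.0] * nx                  # initial head of 10 m everywhere
h[0], h[-1] = 12.0, 10.0         # fixed heads at the two boundaries

for _ in range(nsteps):
    h_new = h[:]
    for i in range(1, nx - 1):
        h_new[i] = h[i] + D * dt / dx**2 * (h[i + 1] - 2.0 * h[i] + h[i - 1])
    h = h_new

print(f"dt = {dt:.1f} s; head at the mid-point after {nsteps} steps: {h[nx // 2]:.2f} m")
```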
Application of finite difference models
MODFLOW is a well-known example of a general finite difference groundwater flow model. It is developed by the US Geological Survey as a modular and extensible simulation tool for modeling groundwater flow. It is free software developed, documented and distributed by the USGS. Many commercial products have grown up around it, providing graphical user interfaces to its input file based interface, and typically incorporating pre- and post-processing of user data. Many other models have been developed to work with MODFLOW input and output, making linked models which simulate several hydrologic processes possible (flow and transport models, surface water and groundwater models and chemical reaction models), because of the simple, well documented nature of MODFLOW.
Application of finite element models
Finite element programs are more flexible in design (triangular elements vs. the block elements most finite difference models use), and a number of programs are available: SUTRA, a 2D or 3D density-dependent flow model by the USGS; Hydrus, a commercial unsaturated flow model; FEFLOW, a commercial modelling environment for subsurface flow, solute and heat transport processes; OpenGeoSys, a scientific open-source project for thermo-hydro-mechanical-chemical (THMC) processes in porous and fractured media; COMSOL Multiphysics, a commercial general modelling environment; FEATool Multiphysics, an easy-to-use MATLAB simulation toolbox; and the Integrated Water Flow Model (IWFM). However, they are still not as popular with practicing hydrogeologists as MODFLOW. Finite element models are more popular in university and laboratory environments, where specialized models solve non-standard forms of the flow equation (unsaturated flow, density-dependent flow, coupled heat and groundwater flow, etc.).
Application of finite volume models
The finite volume method is a method for representing and evaluating partial differential equations as algebraic equations. Similar to the finite difference method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages.
PORFLOW software package is a comprehensive mathematical model for simulation of Ground Water Flow and Nuclear Waste Management developed by Analytic & Computational Research, Inc., ACRi.
The FEHM software package is available free from Los Alamos National Laboratory. This versatile porous flow simulator includes capabilities to model multiphase, thermal, stress, and multicomponent reactive chemistry. Current work using this code includes simulation of methane hydrate formation, CO2 sequestration, oil shale extraction, migration of both nuclear and chemical contaminants, environmental isotope migration in the unsaturated zone, and karst formation.
Other methods
These include mesh-free methods like the Analytic Element Method (AEM) and the Boundary Element Method (BEM), which are closer to analytic solutions, but they do approximate the groundwater flow equation in some way. The BEM and AEM exactly solve the groundwater flow equation (perfect mass balance), while approximating the boundary conditions. These methods are more exact and can be much more elegant solutions (like analytic methods are), but have not seen as widespread use outside academic and research groups yet.
Water wells
A water well is a mechanism for accessing groundwater by drilling or digging and bringing it up to the surface with a pump, or by hand using buckets or similar devices. The first historical instance of water wells was in the 52nd century BC in modern-day Austria. Today, wells are used all over the world, from developing nations to suburbs in the United States.
There are three main types of wells: shallow, deep, and artesian. Shallow wells tap into unconfined aquifers and are generally less than 15 meters deep, with a small diameter, usually less than 15 centimeters. Deep wells access confined aquifers and are always drilled by machine; all deep wells bring water to the surface using mechanical pumps. In artesian wells, water flows naturally without the use of a pump or other mechanical device, because the well taps a confined aquifer whose hydraulic head (potentiometric surface) lies above the top of the well.
Water well design and construction
One of the most important aspects of groundwater engineering and hydrogeology is water well design and construction. Proper well design and construction are important to maintain the health of the groundwater and the people which will use the well. Factors which must be considered in well design are:
A reliable aquifer, providing a continuous water supply
The quality of the accessible groundwater
How to monitor the well
Operating costs of the well
Expected yield of the well
Any prior drilling into the aquifer
There are five main areas to be considered when planning and constructing a new water well, along with the factors above. They are:
Aquifer Suitability
"Well Design Considerations
Well Drilling Methods
Well Screen Design and Development
Well Testing"
Aquifer suitability starts with determining possible locations for the well using "USGS reports, well logs, and cross sections" of the aquifer. This information should be used to determine aquifer properties such as depth, thickness, transmissivity, and well yield. In this stage, the quality of the water in the aquifer should also be determined, and screening should occur to check for contaminants.
After factors such as depth and well yield are determined, the well design and drilling approach must be established. Drilling method is selected based on "soil conditions, well depth, design, and costs." At this stage, cost estimates are prepared, and plans are adjusted to meet budgetary needs.
Important parts of a well include the well seals, casings or liners, drive shoes, well screen assemblies, and a sand or gravel pack (optional). Each of these components ensures that the well only draws from one aquifer, and no leakage occurs at any stage of the process.
There are several methods of drilling which can be used when constructing a water well. They include: "Cable tool, Air rotary, Mud rotary, and Flooded reverse circulation dual rotary" drilling techniques. Cable tool drilling is inexpensive and can be used for all types of wells, but the alignment must be constantly checked and it has a slow advance rate. It is not an effective drilling technique for consolidated formations, but does provide a small drilling footprint. Air rotary drilling is cost effective and works well for consolidated formations. It has a fast advance rate, but is not adequate for large diameter wells. Mud rotary drilling is especially cost effective for deep wells. It maintains good alignment, but requires a larger footprint. It has a very fast advance rate. Flooded reverse circulation dual rotary drilling is more expensive, but good for large well designs. It is versatile and maintains alignment. It has a fast advance rate.
Well screens ensure that only water makes it to the surface, and sediments remain beneath the Earth's surface. Screens are placed along the shaft of the well to filter out sediment as water is pumped towards the surface. Screen design can be impacted by the nature of the soil, and natural pack designs can be used to maximize efficiency.
After construction of the well, testing must be done to assess productivity, efficiency and yield of the well, as well as determine the impacts of the well on the aquifer. Several different tests should be completed on the well in order to test all relevant qualities of the well.
Issues in groundwater engineering and hydrogeology
Contamination
Groundwater contamination happens when other fluids seep into the aquifer and mix with existing groundwater. Pesticides, fertilizers, and gasoline are common contaminants of aquifers. Underground storage tanks for chemicals such as gasoline are especially concerning sources of groundwater contamination. As these tanks corrode, they can leak, and their contents can contaminate nearby groundwater. For buildings which are not connected to a wastewater treatment system, septic tanks can be used to dispose of waste at a safe rate. If septic tanks are not built or maintained properly, they can leak bacteria, viruses and other chemicals into the surrounding groundwater. Landfills are another potential source of groundwater contamination. As trash is buried, harmful chemicals can migrate from the garbage into the surrounding groundwater if the protective base layer is cracked or otherwise damaged. Other chemicals, such as road salts and chemicals used on lawns and farms, can run off into local reservoirs and eventually into aquifers. As water goes through the water cycle, contaminants in the atmosphere can contaminate the water. This water can also make its way into groundwater.
Controversy
Fracking
Contamination of groundwater due to fracking has long been debated. Since chemicals commonly used in hydraulic fracturing are not tested by government agencies responsible for determining the effects of fracking on groundwater, laboratories at the United States Environmental Protection Agency, or EPA, have a hard time determining if chemicals used in fracking are present in nearby aquifers. In 2016, the EPA released a report which states that drinking water can be contaminated by fracking. This was a reversal of their previous policies after a $29 million study into the effects of fracking on local drinking water.
California
California sees some of the largest controversies in groundwater usage due to the dry conditions California faces, high population, and intensive agriculture. Conflicts generally occur over pumping groundwater and shipping it out of the area, unfair use of water by a commercial company, and contamination of groundwater by development projects. In Siskiyou County in northern California, the California Superior Court ruled poor groundwater regulations have allowed pumping to diminish the flows in the Scott River and disturbed the natural habitat of salmon. In Owens Valley in central California, groundwater was pumped for use in fish farms, which resulted in the death of local meadows and other ecosystems. This resulted in a lawsuit and settlement against the fish companies. Development in southern California is threatening local aquifers, contaminating groundwater through construction and normal human activity. For example, a solar project in San Bernardino County would allegedly threaten the ecosystem of bird and wildlife species because of its use of up to 1.3 million cubic meters of groundwater, which could impact Harper Lake. In September 2014, California passed the Sustainable Groundwater Management Act, which requires users to manage groundwater appropriately, as it is connected to surface water systems.
Colorado
Due to its arid climate, the state of Colorado gets most of its water from underground. Because of this, there have been issues regarding groundwater engineering practices. As many as 65,000 people were affected when high levels of PFCs were found in the Widefield Aquifer. Groundwater use in Colorado dates back to before the 20th century. Nineteen of Colorado's 63 counties depend mostly on groundwater for supplies and domestic uses. The Colorado Geological Survey has three significant reports on groundwater in the Denver Basin: the first, Geology of Upper Cretaceous, Paleocene and Eocene Strata in the Southwestern Denver Basin; the second, Bedrock Geology, Structure, and Isopach Maps of the Upper Cretaceous to Paleogene Strata between Greeley and Colorado Springs; and the third, Cross Sections of the Freshwater Bearing Strata of the Denver Basin between Greeley and Colorado Springs.
New trends in groundwater engineering/hydrogeology
Since the first wells were made thousands of years ago, groundwater systems have been changed by human activity. 50 years ago, the sustainability of these systems on a larger scale began to come into consideration, becoming one of the main focuses of groundwater engineering. New ideas and research are advancing groundwater engineering into the 21st century, while still considering groundwater conservation.
Topographical mapping
New advancements have arisen in topographical mapping to improve sustainability. Topographic mapping has been updated to include radar, which can penetrate the ground to help pinpoint areas of concern. In addition, large computations can use gathered data from maps to further the knowledge of groundwater aquifers in recent years. This has made highly complex and individualized water cycle models possible, which has helped to make groundwater sustainability more applicable to specific situations.
The role of technology
Technological improvements have advanced topographical mapping, and have also improved the quality of lithosphere, hydrosphere, biosphere, and atmosphere simulations. These simulations are useful on their own; however, when used together, they help to give an even more accurate prediction of the future sustainability of an area, and what changes can be made to ensure stability in the area. This would not be possible without the advancement of technology. As technology continues to progress, the simulations will increase in accuracy and allow for more complex studies and projects in groundwater engineering.
Growing populations
As populations continue to grow, areas which were using groundwater at a sustainable rate are now beginning to face sustainability issues for the future. Populations of the size currently seen in large cities were not taken into consideration when the long-term sustainability of aquifers was first assessed. These large populations are beginning to stress groundwater supplies. This has led to the need for new policies in some urban areas, known as proactive land-use management, under which cities can act proactively to conserve groundwater.
In Brazil, overpopulation caused municipally provided water to run low. Due to the shortage of water, people began to drill wells within the range normally served by the municipal water system. This was a solution for people in high socioeconomic standing, but left much of the underprivileged population without access to water. Because of this, a new municipal policy was created which drilled wells to assist those who could not afford to drill wells of their own. Because the city is in charge of drilling the new wells, they can better plan for the future sustainability of the groundwater in the region, by carefully placing the wells and taking growing populations into consideration.
Dependency on groundwater in the United States
In the United States, 51% of the drinking water comes from groundwater supplies. Around 99% of the rural population depends on groundwater. In addition, 64% of the country's total groundwater is used for irrigation, and some is used for industrial processes and for recharging lakes and rivers. In 2010, 22 percent of the freshwater used in the US came from groundwater and the other 78 percent came from surface water. Groundwater is especially important for states that lack ready access to fresh surface water. Most of the fresh groundwater (65 percent) is used for irrigation, and 21 percent is used for public supply, mostly for drinking.
See also
Environmental engineering is a broad category hydrogeology fits into;
Flownet is an analysis tool for steady-state flow;
Groundwater energy balance: groundwater flow equations based on the energy balance;
Fault zone hydrogeology: field specifically analyzing hydrogeology in fault zones
Hydrogeophysics: field integrating hydrogeology with geophysics
Hydrology (agriculture)
Isotope hydrology is often used to understand sources and travel times in groundwater systems;
Oscar Edward Meinzer is considered the "father of modern groundwater hydrology";
SahysMod is a spatial agro-hydro-salinity model with groundwater flow in a polygonal network;
Spring (hydrology) and water supply network are subjects the hydrogeologist is concerned about;
Water cycle, hydrosphere and water resources are larger concepts which hydrogeology is a part of
Coastal hydrogeology
References
Further reading
General hydrogeology
Domenico, P.A. & Schwartz, W., 1998. Physical and Chemical Hydrogeology Second Edition, Wiley. — Good book for consultants, it has many real-world examples and covers additional topics (e.g. heat flow, multi-phase and unsaturated flow).
Driscoll, Fletcher, 1986. Groundwater and Wells, US Filter / Johnson Screens. — Practical book illustrating the actual process of drilling, developing and utilizing water wells, but it is a trade book, so some of the material is slanted towards the products made by Johnson Well Screens.
Freeze, R.A. & Cherry, J.A., 1979. Groundwater, Prentice-Hall. — A classic text; like an older version of Domenico and Schwartz.
de Marsily, G., 1986. Quantitative Hydrogeology: Groundwater Hydrology for Engineers, Academic Press, Inc., Orlando Florida. — Classic book intended for engineers with mathematical background but it can be read by hydrologists and geologists as well.
Good, accessible overview of hydrogeological processes.
Porges, Robert E. & Hammer, Matthew J., 2001. The Compendium of Hydrogeology, National Ground Water Association. Written by practicing hydrogeologists, this inclusive handbook provides a concise, easy-to-use reference for hydrologic terms, equations, pertinent physical parameters, and acronyms.
Todd, David Keith, 1980. Groundwater Hydrology Second Edition, John Wiley & Sons. — Case studies and real-world problems with examples.
Fetter, C.W. Contaminant Hydrogeology Second Edition, Prentice Hall.
Fetter, C.W. Applied Hydrogeology Fourth Edition, Prentice Hall.
Numerical groundwater modeling
Anderson, Mary P. & Woessner, William W., 1992 Applied Groundwater Modeling, Academic Press. — An introduction to groundwater modeling, a little bit old, but the methods are still very applicable.
Anderson, Mary P., Woessner, William W., & Hunt, Randall J., 2015, Applied Groundwater Modeling, 2nd Edition, Academic Press. — Updates the 1st edition with new examples, new material with respect to model calibration and uncertainty, and online Python scripts (https://github.com/Applied-Groundwater-Modeling-2nd-Ed).
Chiang, W.-H., Kinzelbach, W., Rausch, R. (1998): Aquifer Simulation Model for WINdows – Groundwater flow and transport modeling, an integrated program. - 137 p., 115 fig., 2 tab., 1 CD-ROM; Berlin, Stuttgart (Borntraeger).
Elango, L and Jayakumar, R (Eds.)(2001) Modelling in Hydrogeology, UNESCO-IHP Publication, Allied Publ., Chennai,
Rausch, R., Schäfer W., Therrien, R., Wagner, C., 2005 Solute Transport Modelling – An Introduction to Models and Solution Strategies. - 205 p., 66 fig., 11 tab.; Berlin, Stuttgart (Borntraeger).
Rushton, K.R., 2003, Groundwater Hydrology: Conceptual and Computational Models. John Wiley and Sons Ltd.
Wang H. F., Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology, Princeton Press, (2000).
Waltham T., Foundations of Engineering Geology, 2nd Edition, Taylor & Francis, (2001).
Zheng, C., and Bennett, G.D., 2002, Applied Contaminant Transport Modeling Second Edition, John Wiley & Sons.
Analytic groundwater modeling
Haitjema, Henk M., 1995. Analytic Element Modeling of Groundwater Flow, Academic Press. — An introduction to analytic solution methods, especially the Analytic element method (AEM).
Harr, Milton E., 1962. Groundwater and seepage, Dover. — a more civil engineering view on groundwater; includes a great deal on flownets.
Kovacs, Gyorgy, 1981. Seepage Hydraulics, Developments in Water Science; 10. Elsevier. - Conformal mapping well explained.
Lee, Tien-Chang, 1999. Applied Mathematics in Hydrogeology, CRC Press. — Great explanation of mathematical methods used in deriving solutions to hydrogeology problems (solute transport, finite element and inverse problems too).
Liggett, James A. & Liu, Phillip L.-F., 1983. The Boundary Integral Equation Method for Porous Media Flow, George Allen and Unwin, London. — Book on BIEM (sometimes called BEM) with examples; it makes a good introduction to the method.
External links
International Association of Hydrogeologists — worldwide association for groundwater specialists.
UK Groundwater Forum — Groundwater in the UK
Centre for Groundwater Studies — Groundwater Education and Research.
EPA drinking water standards — the maximum contaminant levels (mcl) for dissolved species in US drinking water.
US Geological Survey water resources homepage — a good place to find free data (for both US surface water and groundwater) and free groundwater modeling software like MODFLOW.
US Geological Survey TWRI index — a series of instructional manuals covering common procedures in hydrogeology. They are freely available online as PDF files.
International Ground Water Modeling Center (IGWMC) — an educational repository of groundwater modeling software which offers support for most software, some of which is free.
The Hydrogeologist Time Capsule — a video collection of interviews of eminent hydrogeologists who have made a material difference to the profession.
IGRAC International Groundwater Resources Assessment Centre
US Army Geospatial Center — For information on OCONUS surface water and groundwater. | Hydrogeology | Environmental_science | 9,338 |
50,849,787 | https://en.wikipedia.org/wiki/Caloboletus%20yunnanensis | Caloboletus yunnanensis is a bolete fungus native to Yunnan province in China, where it grows under Pinus yunnanensis.
References
External links
yunnanensis
Fungi described in 2014
Fungi of China
Fungus species | Caloboletus yunnanensis | Biology | 43 |
9,716,366 | https://en.wikipedia.org/wiki/Romincka%20Forest | Romincka Forest (, ), also known as Krasny Les () or Rominte Heath (), is an extended forest and heath landscape stretching from the southeast of Russian Kaliningrad Oblast to the northeast of Polish Warmian-Masurian Voivodeship.
Etymology
The Polish and German names of the forest, like the Rominta/Rominte river and the settlement of Rominty/Rominten, are derived from the Lithuanian syllable rom, meaning calm, as the forest is in the land called Lithuania Minor. The Russian name, Krasnyy Les, means "Red Forest".
Geography
The total area of the Romincka landscape is about , stretching from the Masurian Lake District in the southwest up to the border with Lithuania at Lake Vištytis in the east. The southern Polish part (about one-third of the area) comprises a protected zone known as Puszcza Romincka Landscape Park.
The Krasnaya River flows through the Romincka Forest. Major settlements in the area include Krasnolesye in Kaliningrad Oblast, as well as Żytkiejmy and Gołdap in Poland.
Flora
The forest is part of the Central European mixed forests ecoregion. Trees in the Polish part of the forest are 40% spruce, 22% oak, 19% pine, 11% birch, 6% alder, and 2% linden and other species. Common plant communities include Tilio-Carpinetum forest on dry ground, composed of oak, spruce, linden, ash, alder, maple, elm, hornbeam, and birch. Undergrowth is generally sparse. Fraxino-Alnetum forest is found in marshy areas, with alder, spruce, linden, ash, and, less frequently, elm. Understory shrubs include bird cherry (Prunus padus), hazel, guelder rose, and saplings of canopy trees. Ground-elder (Aegopodium podagraria) and nettle (Urtica dioica) are common in the ground layer.
History
Part of the historic region of East Prussia, the extended forests were known for their red deer populations and became a popular hunting ground of the Hohenzollern princes who ruled the Duchy of Prussia from 1525. Part of the German Empire from 1871 onwards, a vast estate in Rominter Heide was purchased by Emperor Wilhelm II, who had his Rominten Hunting Lodge, including a chapel dedicated to Saint Hubertus, erected here in 1891. Hunt scenes were portrayed by notable painters such as Richard Friese (1854–1918).
Plundered by Russian forces in World War I, the hunting lodge and grounds were administered by the Free State of Prussia on Wilhelm's abdication in 1918; Minister-President Otto Braun was a regular guest. Later on, the estates were seized by Nazi minister Hermann Göring, whose Reichsjägerhof Rominten was built nearby in 1936. It also served as Göring's headquarters during the German Operation Barbarossa in 1941.
The Allied Potsdam Agreement after World War II divided the region between the re-established Polish Republic and the Soviet Union. The German history of the region is documented at the East Prussian Regional Museum in Lüneburg and at the German Hunting and Fishing Museum in Munich. In recent years, hunting tourism has again become popular.
References
Central European mixed forests
Forests of Poland
Forests of Russia
Biota of Poland
Biota of Russia
Geography of Kaliningrad Oblast
Geography of Warmian-Masurian Voivodeship | Romincka Forest | Biology | 728 |
3,923,230 | https://en.wikipedia.org/wiki/LEED | Leadership in Energy and Environmental Design (LEED) is a green building certification program used worldwide. Developed by the non-profit U.S. Green Building Council (USGBC), it includes a set of rating systems for the design, construction, operation, and maintenance of green buildings, homes, and neighborhoods, which aims to help building owners and operators be environmentally responsible and use resources efficiently.
There were over 195,000 LEED-certified buildings and over 205,000 LEED-accredited professionals in 186 countries worldwide.
In the US, the District of Columbia consistently leads in LEED-certified square footage per capita, followed in 2022 by the top-ranking states of Massachusetts, Illinois, New York, California, and Maryland.
Outside the United States, the top-ranking countries for 2022 were Mainland China, India, Canada, Brazil, and Sweden.
LEED Canada has developed a separate rating system adapted to the Canadian climate and regulations.
Many U.S. federal agencies, state and local governments require or reward LEED certification. Based on certified square feet per capita, the leading five states (after the District of Columbia) were Massachusetts, Illinois, New York, California, and Maryland. Incentives can include tax credits, zoning allowances, reduced fees, and expedited permitting. Offices, healthcare-, and education-related buildings are the most frequent LEED-certified buildings in the US (over 60%), followed by warehouses, distribution centers, retail projects and multifamily dwellings (another 20%).
Studies have found that for-rent LEED office spaces generally have higher rents and occupancy rates and lower capitalization rates.
LEED is a design tool rather than a performance-measurement tool and has tended to focus on energy modeling rather than actual energy consumption. It has been criticized for a point system that can lead to inappropriate design choices and the prioritization of LEED certification points over actual energy conservation; for lacking climate specificity; for not sufficiently addressing issues of climate change and extreme weather; and for not incorporating principles of a circular economy. Draft versions of LEED v5 were released for public comment in 2024, and the final version of LEED v5 is expected to appear in 2025. It may address some of the previous criticisms.
Despite concerns, LEED has been described as a "transformative force in the design and construction industry". LEED is credited with providing a framework for green building, expanding the use of green practices and products in buildings, encouraging sustainable forestry, and helping professionals to consider buildings in terms of the well-being of their occupants and as part of larger systems.
History
In April 1993, the U.S. Green Building Council (USGBC) was founded by Rick Fedrizzi, the head of environmental marketing at Carrier, real estate developer David Gottfried, and environmental lawyer Michael Italiano. Representatives from 60 firms and nonprofits met at the American Institute of Architects to discuss organizing within the building industry to support green building and develop a green building rating system.
Also influential early on was architect Bob Berkebile.
Fedrizzi served as the volunteer founding chair of USGBC from 1993 to 2004, and became its CEO as of 2004. As of November 4, 2016, he was succeeded as president and CEO of USGBC by Mahesh Ramanujam. Ramanujam served as CEO until 2021. Peter Templeton became interim president and CEO of USGBC as of November 1, 2021.
A key player in developing the Leadership in Energy and Environmental Design (LEED) green certification program was Natural Resources Defense Council (NRDC) senior scientist Robert K. Watson. It was Watson, sometimes referred to as the "Founding Father of LEED", who created the acronym.
Over two decades, Watson led a broad-based consensus process, bringing together non-profit organizations, government agencies, architects, engineers, developers, builders, product manufacturers and other industry leaders. The original planning group consisted of Watson, Mike Italiano, architect Bill Reed (founding LEED Technical Committee co-chair 1994–2003), architect Sandy Mendler, builder Gerard Heiber and engineer Richard Bourne.
Tom Paladino and Lynne Barker (formerly King) co-chaired the LEED Pilot Committee from 1996–2001.
Scot Horst chaired the LEED Steering Committee beginning in 2005 and was deeply involved in the development of LEED 2009.
Joel Ann Todd took over as chair of the steering committee from 2009 to 2013, working to develop LEED v4, and introducing social equity credits.
Other steering committee chairs include Chris Schaffner (2019) and Jennifer Sanguinetti (2020).
Chairs of the USGBC's Energy and Atmosphere Technical Advisory Group for LEED technology have included Gregory Kats.
The LEED initiative has been strongly supported by the USGBC Board of Directors, including
Chair of the Board of Directors Steven Winter (1999–2003). The current chair of the Board of Directors is Anyeley Hallová (2023).
LEED has grown from one standard for new construction to a comprehensive system of interrelated standards covering aspects from the design and construction to the maintenance and operation of buildings. LEED has also grown from six committee volunteers to an organization of 122,626 volunteers, professionals and staff.
More than 185,000 LEED projects have been proposed worldwide, and more than 105,000 projects have been certified in 185 countries.
However, lumber, chemical and plastics trade groups have lobbied to weaken the application of LEED guidelines in several southern states. In 2013, the states of Alabama, Georgia and Mississippi effectively banned the use of LEED in new public buildings, in favor of other industry standards that the USGBC considers too lax. LEED is considered a target of a type of disinformation attack known as astroturfing, involving "fake grassroots organizations usually sponsored by large corporations".
Unlike model building codes, such as the International Building Code, only members of the USGBC and specific "in-house" committees may add to, subtract from, or edit the standard, subject to an internal review process. Proposals to modify the LEED standards are offered and publicly reviewed by USGBC's member organizations, of which there were 4551 as of October 2023.
Rating systems
LEED has evolved since 1998 to more accurately represent and incorporate emerging green building technologies. LEED has developed building programs specific to new construction (NC), core and shell (CS), commercial interiors (CI), existing buildings (EB), neighborhood development (ND), homes (LEED for Homes), retail, schools, and healthcare.
The pilot version, LEED New Construction (NC) v1.0, led to LEED NC v2.0, LEED NC v2.2 in 2005, LEED 2009 (LEED v3) in 2009, and LEED v4 in November 2013. LEED 2009 was deprecated for new projects registered from October 31, 2016. LEED v4.1 was released on April 2, 2019.
Draft versions of LEED v5 have been released and revised in response to public comment during 2024. The official final version of LEED v5 is expected to be released in 2025. Future updates to the standard are planned to occur every five years.
LEED forms the basis for other sustainability rating systems such as the U.S. Environmental Protection Agency's (EPA) Labs21 and LEED Canada. The Australian Green Star is based on both LEED and the UK's Building Research Establishment Environmental Assessment Methodology (BREEAM).
LEED v3 (2009)
LEED 2009 encompasses ten rating systems for the design, construction and operation of buildings, homes and neighborhoods. Five overarching categories correspond to the specialties available under the LEED professional program. That suite consists of:
Green building design and construction (BD+C) – for new construction, core and shell, schools, retail spaces (new constructions and major renovations), and healthcare facilities
Green interior design and construction – for commercial and retail interiors
Green building operations and maintenance
Green neighborhood development
Green home design and construction
LEED v3 aligned credits across all LEED rating systems, weighted by environmental priority. It reflects a continuous development process, with a revised third-party certification program and online resources.
Under LEED 2009, an evaluated project scores points to a possible maximum of 100 across six categories: sustainable sites (SS), water efficiency (WE), energy and atmosphere (EA), materials and resources (MR), indoor environment quality (IEQ) and design innovation (INNO). Each of these categories also includes mandatory requirements, which receive no points. Up to 10 additional points may be earned: 4 for regional priority credits and 6 for innovation in design. Additional performance categories for residences (LEED for Homes) recognize the importance of transportation access, open space, and outdoor physical activity, and the need for buildings and settlements to educate occupants.
Buildings can qualify for four levels of certification, according to the point thresholds below (see the sketch after this list):
Certified: 40–49 points
Silver: 50–59 points
Gold: 60–79 points
Platinum: 80 points and above
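The thresholds above amount to a simple lookup from a project's point total to its certification level. The following sketch (the function name is arbitrary) encodes only the LEED 2009 cutoffs listed in this section.

```python
# Map a LEED 2009 point total to a certification level, using the
# thresholds listed above. The function name is arbitrary.

def leed_2009_level(points: int) -> str:
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"

for p in (38, 45, 55, 72, 85):
    print(p, "->", leed_2009_level(p))
```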
The aim of LEED 2009 is to allocate points "based on the potential environmental impacts and human benefits of each credit". These are weighted using the environmental impact categories of the EPA's Tools for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI) and the environmental-impact weighting scheme developed by the National Institute of Standards and Technology (NIST).
Prior to LEED 2009 evaluation and certification, a building must comply with minimum requirements including environmental laws and regulations, occupancy scenarios, building permanence and pre-rating completion, site boundaries and area-to-site ratios. Its owner must share data on the building's energy and water use for five years after occupancy (for new construction) or date of certification (for existing buildings).
The credit weighting process has the following steps: First, a collection of reference buildings are assessed to estimate the environmental impacts of similar buildings. NIST weightings are then applied to judge the relative importance of these impacts in each category. Data regarding actual impacts on environmental and human health are then used to assign points to individual categories and measures. This system results in a weighted average for each rating scheme based upon actual impacts and the relative importance of those impacts to human health and environmental quality.
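The weighting procedure described above amounts to combining per-category impact estimates with relative-importance weights into a single weighted average. A minimal sketch follows; the category names and weight values are hypothetical placeholders, not the actual TRACI or NIST figures.

# Hypothetical impact scores (per category) and NIST-style importance weights
impacts = {"climate": 0.8, "water": 0.5, "air_quality": 0.6}
weights = {"climate": 0.5, "water": 0.2, "air_quality": 0.3}  # weights sum to 1

weighted_score = sum(impacts[c] * weights[c] for c in impacts)
print(round(weighted_score, 3))  # 0.68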
The LEED council also appears to have assigned credit weightings based in part on the market implications of point allocation.
From 2010, buildings can use carbon offsets to achieve green power credits for LEED-NC (new construction certification).
LEED v4 (2014)
For LEED BD+C v4 credit, the IEQ category addresses thermal, visual, and acoustic comfort as well as indoor air quality. Laboratory and field research have directly linked occupants' satisfaction and performance to the building's thermal conditions. Energy reduction goals can be supported while improving thermal satisfaction. For example, providing occupants control over the thermostat or operable windows allows for comfort across a wider range of temperatures.
LEED v4.1 (2019)
On April 2, 2019, the USGBC released LEED v4.1, a new version of the LEED green building program, designed for use with cities, communities and homes. However, LEED v4.1 was never officially balloted.
An update to v4, proposed as of November 22, 2022, took effect on March 1, 2024. Any projects that register under LEED v4 after March 1, 2024 must meet these updated guidelines.
LEED v5 (Draft, 2023)
As of January 2023, USGBC began to develop LEED v5. LEED v5 is the first version of the LEED rating system to be based on the June 2022 Future of LEED principles. The LEED v5 rating system will cover both new construction and existing buildings.
An initial draft version was discussed at Greenbuild 2023. The beta draft of LEED v5 was released for an initial period of public comment on April 3, 2024. Changes were made in response to nearly 6,000 comments. A second public comment period was opened for the revised version, from September 27 to October 28, 2024. The official release of the final version of LEED v5 is expected to occur in 2025. Future updates of the certification system are planned to occur every five years.
LEED v5 reorganizes the credits system and prerequisites, and has a greater focus on decarbonization of buildings. The scorecard expresses three global goals of climate action (worth 50% of the certification points), quality of life (25%) and conservation and ecological restoration (25%) in terms of five principles: decarbonization, ecosystems, equity, health and resilience. One of the responses to public comments was to emphasize a data-driven approach to Operations and Maintenance by more clearly identifying performance-based credits (80% of points) and decoupling them from strategic credits (20%).
LEED Canada
In 2003, the Canada Green Building Council (CAGBC) received permission to create LEED Canada-NC v1.0, which was based upon LEED-NC 2.0. As of 2021, Canada ranked second in the world (not including the USA) in its number of LEED-certified projects and square feet of space. Buildings in Canada such as Winnipeg's Canadian Museum for Human Rights are LEED certified due to practices including the use of rainwater harvesting, green roofs, and natural lighting.
As of March 18, 2022, the Canada Green Building Council took over direct oversight for LEED™ green building certification of projects in Canada, formerly done by GBCI Canada. CAGBC will continue to work with Green Business Certification Inc. (GBCI) and USGBC while consolidating certification and credentialing for CAGBC's Zero Carbon Building Standards, LEED, TRUE, and Investor Ready Energy Efficiency (IREE). IREE is a model supported by CAGBC and the Canada Infrastructure Bank (CIB) for the verification of proposed retrofit projects.
Certification process
LEED certification is granted by the Green Building Certification Institute (GBCI), which arranges third-party verification of a project's compliance with the LEED requirements. The certification process for design teams consists of the design application, under the purview of the architect and the engineer and documented in the official construction drawings, and the construction application, under the purview of the building contractor and documented during the construction and commissioning of the building.
A fee is required to register the building, and to submit the design and construction applications. Total fees are assessed based on building area, ranging from a minimum of $2,900 to over $1 million for a large project. "Soft" costs – i.e., added costs to the building project to qualify for LEED certification – may range from 1% to 6% of the total project cost. The average cost increase was about 2%, or an extra $3–$5 per square foot.
The application review and certification process is conducted through LEED Online, USGBC's web-based service. The GBCI also utilizes LEED Online to conduct their reviews.
LEED energy modeling
Applicants have the option of achieving credit points by building energy models. One model represents the building as designed, and a second model represents a baseline building in the same location, with the same geometry and occupancy. Depending on location (climate) and building size, the standard provides requirements for heating, ventilation and air-conditioning (HVAC) system type, and wall and window definitions. This allows for a comparison with emphasis on factors that heavily influence energy consumption when considering design decisions.
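In practice the two models are compared by the percentage improvement of the proposed design over the baseline, and credit points scale with that improvement. The sketch below illustrates only the arithmetic; the energy figures and the comparison are made up for illustration and are not taken from the standard.

baseline_kwh = 1_200_000   # hypothetical annual energy use of the baseline model
proposed_kwh = 960_000     # hypothetical annual energy use of the design model

improvement = (baseline_kwh - proposed_kwh) / baseline_kwh
print(f"{improvement:.0%} improvement over baseline")  # 20% improvement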
LEED for Homes rating system
The LEED for Homes rating system was first piloted in 2005. It has been available in countries including the U.S., Canada, Sweden, and India. LEED for Homes projects are low-rise residential.
The process of the LEED for Homes rating system differs significantly from the LEED rating system for new construction. Unlike LEED, LEED for Homes requires an on-site inspection. LEED for Homes projects are required to work with either an American or a Canadian provider organization and a green rater. The provider organization helps the project through the process while overseeing the green raters, individuals who conduct two mandatory site inspections: the thermal bypass inspection and the final inspection. The provider and rater assist in the certification process but do not themselves certify the project.
Professional accreditation
In addition to certifying projects pursuing LEED, USGBC's Green Business Certification Inc. (GBCI) offers various accreditations to people who demonstrate knowledge of the LEED rating system, including LEED Accredited Professional (LEED AP), LEED Green Associate, and LEED Fellow.
The Green Building Certification Institute (GBCI) describes its LEED professional accreditation as "demonstrat[ing] current knowledge of green building technologies, best practices" and the LEED rating system, to assure the holder's competency as one of "the most qualified, educated, and influential green building professionals in the marketplace."
Criticism
Critics of LEED certification such as Auden Schendler and Randy Udall have pointed out that the process is slow, complicated, and expensive. In 2005, they published an article titled "LEED is Broken; Let's Fix It", in which they argued that the certification process "makes green building more difficult than it needs to be" and called for changes "to make LEED easier to use and more popular" to better accelerate the transition to green building.
Schendler and Udall also identified a pattern they call "LEED brain", in which participants become focused on "point mongering", picking and choosing design elements that do not work well together or do not fit local conditions simply to gain points. The public relations value of LEED certification then begins to drive the development of buildings rather than the design itself. They give the example of debating whether to add a reflective roof, which counters "heat island" effects in urban areas, to a building high in the Rocky Mountains.
A 2012 USA Today review of 7,100 LEED-certified commercial buildings found that designers tended to choose easier points such as using recycled materials, rather than more challenging ones that could increase the energy efficiency of a building.
Critics such as David Owen and Jeff Speck also point out that LEED certification focuses on the building itself, and does not take into account factors such as the location in which the building stands, or how employee commutes may be affected by a relocation. In Green Metropolis (2009), Owen discusses an environmentally friendly building in San Bruno, California, built by Gap Inc., located far from the company's corporate headquarters in downtown San Francisco and from Gap's corporate campus in Mission Bay. Although the company added shuttle buses between buildings, "no bus is as green as an elevator". Similarly, in Walkable City (2013), Jeff Speck describes the relocation of the Environmental Protection Agency's Region 7 Headquarters from downtown Kansas City, Missouri, to a LEED-certified building in the suburb of Lenexa, Kansas. Kaid Benfield of the Natural Resources Defense Council estimated that the carbon emissions associated with the additional miles driven were almost three times higher than before, a change from 0.39 metric tons of carbon dioxide per person per month to 1.08 metric tons per person per month. Speck writes that "The carbon saved by the new building's LEED status, if any, will be a small fraction of the carbon wasted by its location". Both Speck and Owen make the point that a building-centric standard that doesn't consider location will inevitably undervalue the benefits of people living closer together in cities, compared to the costs of automobile-oriented suburban sprawl.
Assessment
LEED is a design tool and as such has focused on energy modeling, rather than being a performance-measurement tool that measures actual energy consumption.
LEED uses modeling software to predict future energy use based on intended use. Buildings certified under LEED do not have to prove energy or water efficiency in practice to receive LEED certification points. This has led to criticism of LEED's ability to accurately determine the efficiency of buildings, and concerns about the accuracy of its predictive models.
Research papers provide most of what is known about the performance and effectiveness of LEED models and buildings. Much of the available research predates 2014, and therefore applies to buildings that were designed under early versions of the LEED rating and certification systems, LEED v3 (2009) or earlier. Research papers have tended to address performance and effectiveness of LEED in two credit category areas: energy (EA) and indoor environment quality (IEQ).
Many early analyses should be considered as at best preliminary. Studies should be repeated with longer data history and larger building samples, include newer LEED certified buildings, and clearly identify green-building rating schemes and certification levels of individual buildings. Buildings may also need to be grouped according to location, since local conditions and regulation may influence building design and confound assessment results.
Modelling assessment
In 2018, Pushkar examined LEED-NC 2009 (v3) Certified-level certified projects from countries in northern (Finland, Sweden) and southern (Turkey, Spain) regions of Europe to see how different types of credits are understood and applied. Pushkar found that credit achievements were similar within regions and countries for Indoor Environmental Quality (EQ), Materials and Resources (MR), Sustainable Sites (SS), and Water Efficiency (WE), but differed for Energy and Atmosphere (EA). Sustainable Sites (SS) and Water Efficiency (WE) were high achievement areas, scoring 80–100% and 70–75%; Indoor Environmental Quality was intermediate (40–60%); and Materials and Resources (MR) was low (20–40%). Energy and Atmosphere (EA) was intermediate (60–65%) in northern Europe, and low (40%) in southern Europe. These results examine the extent to which different credits have been chosen by modellers.
Energy performance research (EA)
Because LEED focuses on the design of the building and not on its actual energy consumption, it has been suggested that LEED buildings should be tracked to discover whether the potential energy savings from the design are being used in practice.
In 2009, architectural scientist Guy Newsham (et al.) of the National Research Council of Canada (NRC) re-analyzed a dataset of 100 LEED certified (v3 or earlier version) buildings. The data included only "medium use" buildings, and did not include 21 laboratories, data centers and supermarkets which were expected to have higher energy activity. Researchers further attempted to match each building with a conventional building within the Commercial Building Energy Consumption Survey (CBECS) database according to building type and occupancy.
On average, the LEED buildings consumed 18 to 39% less energy by floor area than the conventional buildings. However, 28 to 35% of LEED-certified buildings used more energy. The paper found no correlation between the number of energy points achieved or LEED certification level and measured building performance.
In 2009 physicist John Scofield published an article in response to Newsham et al., analyzing the same database of LEED buildings and arriving at different conclusions. Scofield criticized the earlier analysis for focusing on energy per floor area instead of total energy consumption. Scofield considered source energy (accounting for energy losses during generation and transmission) as well as site energy, and used area-weighted energy use intensities (EUIs) (energy per unit area per year) when comparing buildings, to account for the fact that larger buildings tend to have larger EUIs. Scofield concluded that, collectively, the LEED-certified buildings showed no significant source energy consumption savings or greenhouse gas emission reductions when compared to non-LEED buildings, although they did consume 10–17% less site energy.
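An area-weighted EUI, as used in Scofield's analysis, weights each building's EUI by its floor area, which is equivalent to dividing total energy by total floor area. A minimal sketch with made-up numbers:

# (floor area in m^2, annual site energy in kWh) for three hypothetical buildings
buildings = [(5_000, 900_000), (20_000, 3_200_000), (50_000, 7_000_000)]

total_area = sum(area for area, _ in buildings)
total_energy = sum(energy for _, energy in buildings)

area_weighted_eui = total_energy / total_area   # kWh per m^2 per year
print(round(area_weighted_eui, 1))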
Scofield notes the difficulties of building analysis, given both the lack of a randomly selected sample of LEED buildings, and the diversity of factors involved when selecting a comparison group of non-LEED buildings. In 2013 Scofield identified 21 LEED-certified New York City office buildings with publicly available energy performance data for 2011, out of 953 office buildings in New York City with such data. Results differed with certification level. LEED-Gold buildings were found to use 20% less source energy than conventional buildings. However, buildings at the Silver and Certified levels used 11 to 15% more source energy, on average, than conventional buildings. (Data was not available for Platinum-level buildings.)
An analysis of 132 LEED buildings based on municipal energy benchmarking data from Chicago in 2015 showed that LEED-certified buildings used about 10% less energy on site than comparable conventional buildings. However, the study did not show differences in use of source energy.
In 2014, architect Gwen Fuertes and engineer Stefano Schiavon developed the first study to analyze plug loads using LEED-documented data from certified projects. The study compared plug load assumptions made by 92 energy modeling practitioners against ASHRAE and Title 24 requirements, and evaluated the plug load calculation methodology used by 660 LEED-CI and 429 LEED-NC certified projects. They found that energy modelers only considered the energy consumption of predictable plug loads, such as refrigerators, computers and monitors. Overall the results suggested a disconnection between assumptions in the models and the actual performance of buildings.
Energy modeling might be a source of error during the LEED design phase. Engineers Christopher Stoppel and Fernanda Leite evaluated the predicted and actual energy consumption of two twin buildings using the energy model during the LEED design phase and the utility meter data after one year of occupancy. The study's results suggest that mechanical systems turnover and occupancy assumptions significantly differ from predicted to actual values.
In a 2019 review, Amiri et al. suggest that judging energy efficiency based on source energy may not be appropriate where the availability of energy types depends on city council or government policies. If some types of source energy are not supported locally, there is no opportunity to choose the types of energy promoted by the LEED scoring system. Amiri emphasizes that many studies have weaknesses due to the lack of randomly selected samples of LEED buildings, and the difficulty of selecting comparison groups of non-LEED buildings. Amiri also notes that the standards for building design have changed significantly over time. For example, newer non-LEED buildings may routinely use features such as high-quality windows which were rarely used in older buildings. Comparisons of LEED and non-LEED buildings therefore need to consider age as well as size, use, occupant behavior, and location aspects such as climate zone.
Zhang et al. (2019) examined renewable energy assessment methods and different assessment systems, and noted that LEED-US addresses management problems at the pre-occupancy phase. Interest in Post‐occupancy evaluation (POE), the process of evaluating building performance after occupation, is increasing. This is due in part to concerns about differences between energy models in the design phase and actual use of buildings. POE research emphasizes the need to collect and analyze actual occupancy data from existing buildings, to better understand how people are using spaces and resources.
Asensio and Delmas (2017) carefully matched and compared buildings that did and did not participate in LEED, Energy Star, and Better Buildings Challenge programs in Los Angeles, California. They examined monthly energy consumption data between 2005 and 2012 for more than 175,000 commercial buildings. Buildings from all three programs displayed "high magnitude" energy savings, ranging from 18–19% for Better Buildings and Energy Star to 30% for LEED-rated buildings. The three programs saved 210 million kilowatt-hours, equal to 145 kilotons of CO2-equivalent emissions per year.
IEQ performance research (IEQ)
The Centers for Disease Control and Prevention (CDC) defines indoor environmental quality (IEQ) as "the quality of a building's environment in relation to the health and wellbeing of those who occupy space within it." The USGBC includes the following considerations for attaining IEQ credits: indoor air quality, the level of volatile organic compounds (VOC), lighting, thermal comfort, and daylighting and views. In consideration of a building's indoor environmental quality, published studies have also included factors such as: acoustics, building cleanliness and maintenance, colors and textures, workstation size, ceiling height, window access and shading, surface finishes, furniture adaptability and comfort.
The most widely used method for post-occupancy evaluation (POE) in IEQ-related studies is occupant surveys. In 2013, architectural physicist Sergio Altomonte and Stefano Schiavon used occupant surveys from the database of the Center for the Built Environment at Berkeley to study IEQ occupant satisfaction in 65 LEED buildings and 79 non-LEED buildings. They analyzed 15 IEQ-related factors including ease of interaction, building cleanliness, comfort of furnishing, amount of light, building maintenance, colors and textures, workplace cleanliness, amount of space, furniture adjustability, visual comfort, air quality, visual privacy, noise, temperature, and sound privacy. Occupants reported being slightly more satisfied with air quality in LEED buildings and slightly more dissatisfied with the amount of light. Overall, occupants of both LEED and non-LEED buildings reported equal satisfaction with the building overall and with the workspace. The authors noted that the data may not be representative of the entire building stock and that a randomized approach was not used in the data assessment.
Newsham et al. (2013) carried out an evaluation using both occupant interviews and physical site measurements. Field studies and post-occupancy evaluations (POE) were performed in 12 "green" and 12 conventional buildings across Canada and the northern United States. Most but not all of the "green" buildings were LEED-certified. 2,545 occupants completed a questionnaire. On-site, 974 randomly selected workstations were measured for thermal conditions, air quality, acoustics, lighting, workstation size, ceiling height, window access and shading, and surface finishes. Responses were positive in the areas of environmental satisfaction, satisfaction with thermal conditions, satisfaction with outside views, aesthetic appearance, reduced disturbance from HVAC noise, workplace image, night-time sleep quality, mood, physical symptoms, and reduced number of airborne particulates. The green buildings were rated more highly and, in the case of airborne particulates, exhibited better measured performance than the conventional buildings.
Schiavon and Altomonte (2014) found that occupants have equivalent satisfaction levels in LEED and non-LEED buildings when evaluated independently from the following factors: office type, spatial layout, distance from windows, building size, gender, age, type of work, time at workspace, and weekly working hours. LEED certified buildings may provide higher satisfaction in open spaces than in enclosed offices, in smaller buildings than in larger buildings, and to occupants having spent less than one year in their workspaces rather than to those who have used their workspace longer. This study suggests that the positive value of LEED certification as measured by occupant satisfaction may decrease with time.
In 2015, environmental health scientist Joseph Allen (et al.) reviewed studies of indoor environmental quality and the potential health benefits of green-certified buildings. He concluded that green buildings provide better indoor environmental quality, with direct benefits to the health of occupants, compared to non-green buildings. Statistically significant measures from different studies included decreased symptoms of sick building syndrome, decreased sick days, decreased respiratory symptoms during the daytime and asthma symptoms at night, and lowered levels of PM2.5, NO2, and nicotine. However, Allen noted that the frequent use of subjective health performance indicators was a limitation of many of the studies reviewed. He proposed a framework to encourage the use of direct, objective, and leading "Health Performance Indicators" in building assessment.
The daylight credit was updated in LEED v4 to include a simulation option for daylight analysis that uses spatial daylight autonomy (SDA) and annual sunlight exposure (ASE) metrics to evaluate daylight quality in LEED projects. SDA is a metric that measures the annual sufficiency of daylight levels in interior spaces and ASE describes the potential for visual discomfort by direct sunlight and glare. These metrics are approved by the Illuminating Engineering Society of North America (IES) and codified in the LM-83-12 standard. LEED recommends a minimum of 300 lux for at least 50% of total occupied hours of the year for at least 55% of the occupied floor area. The threshold recommended by LEED for ASE is that no more than 10% of regularly occupied floor area can be exposed to more than 1000 lux of direct sunlight for more than 250 hours per year. Additionally, LEED requires window shades to be closed when more than 2% of a space is subject to direct sunlight above 1000 lux. According to building scientist Christopher Reinhart, the direct sunlight requirement is a very stringent approach that can discourage good daylight design. Reinhart proposed the application of the direct sunlight criterion only in spaces that require stringent control of sunlight (e.g. desks, white boards, etc.).
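The sDA and ASE thresholds quoted above reduce to simple fraction checks once an annual daylight simulation has produced per-point results. The sketch below assumes hypothetical simulation output (one record per analysis point) and only illustrates the threshold logic, not an actual simulation or the full LM-83 procedure.

# Hypothetical per-point simulation results:
#   daylit_hours = occupied hours per year with at least 300 lux of daylight
#   direct_hours = occupied hours per year with more than 1000 lux of direct sun
points = [
    {"daylit_hours": 1600, "direct_hours": 120},
    {"daylit_hours": 1400, "direct_hours": 300},
    {"daylit_hours": 900,  "direct_hours": 40},
]
occupied_hours = 2500  # hypothetical annual occupied hours

sda = sum(p["daylit_hours"] >= 0.5 * occupied_hours for p in points) / len(points)
ase = sum(p["direct_hours"] > 250 for p in points) / len(points)

print(f"sDA(300 lux / 50% hours) = {sda:.0%}  (LEED target: >= 55% of floor area)")
print(f"ASE(1000 lux, 250 h)     = {ase:.0%}  (LEED target: <= 10% of floor area)")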
In 2024, Kent et al. compared satisfaction of people in buildings that had received either WELL certification or LEED certification. Ratings of buildings certified with WELL and LEED were matched on six dimensions: award level, years in building, time in workspace, type of workspace, proximity to a window, and floor height. Satisfaction with the overall building and one's workspace was high under both rating systems. However, satisfaction with LEED-certified buildings (73% and 71%) tended to be lower than that for WELL-certified buildings (94% and 87%). This may be because WELL is a human-centered standard for building design that focuses primarily on comfort, health, and well-being; in contrast, only 10% of the credits in LEED certification relate to indoor environmental quality (IEQ). Differences may also reflect the age of the buildings, which was not matched in the study design.
Water Efficiency (WE)
Water systems involve both water and energy as resources. Outside buildings, the acquisition, treatment, and transportation of water are involved. Inside buildings, onsite water treatment, heating, and wastewater treatment are issues. Data on the energy use of specific water and wastewater systems is becoming increasingly available, and energy use can sometimes be estimated from public sources. LEED v4 includes a number of credits related to Water Efficiency (WE). Points are awarded for Outdoor Water Use Reduction, Indoor Water Use Reduction and Building-level Water Metering based on predetermined percentage reductions in water or energy use.
There has been criticism that the LEED rating system is not sensitive and does not vary enough with regard to local environmental conditions. For example, there are 16 climate zones in California, with unique weather and temperature patterns. The availability of electricity, water and other resources differs widely in different regions, making it important to consider interconnected systems and supply chain issues. Greer et al. (2019) reviewed renewable energy assessment methods and examined the effectiveness of LEED v4 buildings in California. They examined relationships between the climate mitigation points given for water efficiency (WE) and energy efficiency (EA) and used baseline energy and water budgets to calculate the avoided GHG emissions of buildings. Their calculations both demonstrate mitigation of expected climate change and also indicate high variability in environmental outcomes within the state.
While LEED v4 introduced “Impact Categories” as system goals, Greer suggests that closer linkages are needed between design points and outcomes, and that issues like supply chains, infrastructure, and regionalized variability should be considered. They report that impacts like the mitigation of expected climate change pollution can be calculated, and while "LEED points do not equally reward equal impact mitigation", such differences could be reconciled to better align LEED credits and goals.
Innovation in design research (ID)
The rise in LEED certification has also brought a new era of construction and building research and ideation. Architects and designers have begun stressing the importance of occupant health over raw efficiency in new construction and have sought more conversations with health professionals. They also design buildings to perform better and analyze performance data to maintain that performance over time. LEED has further encouraged designers and architects to create spaces that are modular and flexible, to ensure a longer lifespan, while sourcing products that remain resilient under consistent use.
Innovation in LEED architecture is linked with new designs and high-quality construction. One example is use of nanoparticle technology for consolidation and conservation effects in cultural heritage buildings. This practice began with the use of calcium hydroxide nano-particles in porous structures to improve mechanical strength. Titanium, silica, and aluminum-based compounds may also be used.
Material technology and construction techniques could be among the first issues to consider in building design. For the facades of high-rise buildings, such as the Empire State Building, the surface area provides opportunities for design innovation. VOCs released from construction materials into the air are another challenge to address.
In Milan, a university-corporate partnership sought to produce semi-transparent solar panels to take the place of ordinary windows in glass-facade high-rise buildings. Similar concepts are under development elsewhere, with considerable market potential.
The Manzara Adalar skyscraper project in Istanbul, designed by Zaha Hadid, saw considerable innovation through the use of communal rooms, outdoor spaces, and natural lighting as part of the Urban Transformation Project of the Kartal port region.
Sustainable Sites (SS)
Remaining credit areas
Other credit areas include:
Materials and Resources (MR), and
Regional Priority (RP).
Financial considerations
When a LEED rating is pursued, the cost of initial design and construction may rise. Manufactured building components that meet LEED specifications may not be widely available. There are also added costs for USGBC correspondence, LEED design-aide consultants, and the hiring of the required Commissioning Authority, none of which is strictly necessary for an environmentally responsible project unless LEED certification is sought.
Proponents argue that these higher initial costs can be mitigated by the savings incurred over time due to projected lower-than-industry-standard operational costs typical of a LEED certified building. This life cycle costing is a method for assessing the total cost of ownership, taking into account all costs of acquiring, owning and operating, and the eventual disposal of a building. Additional economic payback may come in the form of employee productivity gains incurred as a result of working in a healthier environment. Studies suggest that an initial up-front investment of 2% extra yields over ten times that initial investment over the life cycle of the building.
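The claim that a small up-front premium can pay back many times over the building's life is straightforward to express as a life-cycle comparison. The figures below are invented purely to show the arithmetic and are not taken from any study; real life cycle costing would also discount future cash flows.

project_cost = 10_000_000              # hypothetical construction cost
green_premium = 0.02 * project_cost    # 2% extra up-front investment
annual_savings = 80_000                # hypothetical operational savings per year
lifetime_years = 30

lifetime_savings = annual_savings * lifetime_years   # ignoring discounting for simplicity
print(lifetime_savings / green_premium)              # 12.0 -> savings are ~12x the premium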
LEED has been developed and continuously modified by workers in the green building industry, especially in the ten largest metro areas in the U.S.; however, LEED certified buildings have been slower to penetrate small and middle markets.
From a financial perspective, studies from 2008 and 2009 found that LEED for-rent office spaces generally charged higher rent and had higher occupancy rates. Analysis of CoStar Group property data estimated the extra cost for the minimum benefit at 3%, with an additional 2.5% for silver-certified buildings. More recent studies have confirmed earlier findings that certified buildings achieve significantly higher rents, sale prices and occupancy rates as well as lower capitalization rates, potentially reflecting lower investment risk.
Incentive programs
Many federal, state, and local governments and school districts have adopted various types of LEED initiatives and incentives. LEED incentive programs can include tax credits, tax breaks, density zoning bonuses, reduced fees, priority or expedited permitting, free or reduced-cost technical assistance, grants and low-interest loans.
In the United States, states that have provided incentives include
California,
New York, Delaware,
Hawaii,
Illinois,
Maryland,
Nevada,
New Mexico,
North Carolina,
Pennsylvania, and
Virginia.
Cincinnati, Ohio, provides property tax abatements for newly constructed or rehabilitated commercial or residential properties that are LEED certified.
Beginning in June 2013, USGBC has offered free LEED certification to the first LEED-certified project in a country that doesn't have one.
Notable certifications
Directories of LEED-certified projects
The USGBC and Canada Green Building Council maintain online directories of U.S. LEED-certified and LEED Canada-certified projects. In 2012 the USGBC launched the Green Building Information Gateway (GBIG) to connect green building efforts and projects worldwide. It provides searchable access to a database of activities, buildings, places and collections of green building-related information from many sources and programs, including LEED projects.
A number of sites including the Canada Green Building Council (CaGBC) Project Database list resources relating to LEED buildings in Canada.
Platinum certification
The Philip Merrill Environmental Center in Annapolis, Maryland was the first building to receive a LEED Platinum rating, under version 1.0. When it was completed in 2001, it was recognized as one of the "greenest" buildings constructed in the U.S. Sustainability issues ranging from energy use to material selection were given serious consideration throughout the design and construction of the facility.
The first LEED platinum-rated building outside the U.S. is the CII Sohrabji Godrej Green Business Centre (CII GBC) in Hyderabad, India, certified in 2003 under LEED version 2.0.
The Coastal Maine Botanical Gardens Bosarge Family Education Center, completed in 2011, achieved LEED Platinum certification and became known as "Maine's greenest building".
In October 2011 Apogee Stadium at the University of North Texas became the first newly built stadium in the country to achieve Platinum-level certification.
In Pittsburgh, Sota Construction Services' corporate headquarters earned a LEED Platinum rating in 2012 with one of the highest scores by percentage of total points earned in any LEED category, making it one of the top ten greenest buildings in the world. It featured a super-efficient thermal envelope using cob walls, a geothermal well, radiant heat flooring, a roof-mounted solar panel array, and daylighting features.
When it received LEED Platinum in 2012, Manitoba Hydro Place in downtown Winnipeg was the most energy-efficient office tower in North America and the only office tower in Canada with a Platinum rating. The office tower employs south-facing winter gardens to capture solar energy during the harsh Manitoba winters and uses glass extensively to maximize natural light.
Gold certification
Pittsburgh's David L. Lawrence Convention Center was the first Gold LEED-certified convention center and largest "green" building in the world when it opened in 2003. It earned Platinum certification in 2012, becoming the only convention center with certifications for both the original building and new construction.
The Cashman Equipment building in Henderson, Nevada became the first construction equipment dealership to receive LEED gold certification in 2009. The headquarters of the Caterpillar brand, it is the largest LEED industrial complex in Nevada.
Around 2010, the Empire State Building underwent a $550 million renovation, including $120 million towards energy efficiency and eco-friendliness. It received a gold LEED rating in 2011, and at the time was the tallest LEED-certified building in the United States.
In July 2014, the San Francisco 49ers' Levi's Stadium became the first NFL venue to earn a LEED Gold certification. The Minnesota Vikings' U.S. Bank Stadium equaled this feat with a Gold certification in Building Design and Construction in 2017 as well as a Platinum certification in Operations and Maintenance in 2019, a first for any professional sports stadium.
In San Francisco's Presidio, the Letterman Digital Arts Center earned a Gold certification in 2013. It was built almost entirely from the recycled remains of the Letterman Army Hospital, which previously occupied the site.
Although originally constructed in 1973, Willis Tower, a commercial office building in Chicago, adopted and implemented a new set of sustainable practices in 2018, earning the property LEED Gold certification under the LEED for Existing Buildings: O&M rating system. This made Willis Tower the tallest LEED-certified building in the United States.
Multiple certifications
In September 2012, The Crystal in London became the world's first building awarded LEED Platinum and BREEAM Outstanding status. It generates its own energy using solar power and ground-source heat pumps and utilizes extensive KNX technologies to automate the building's environmental controls.
In Pittsburgh, the visitor's center of Phipps Conservatory & Botanical Gardens received Silver certification, its Center for Sustainable Landscapes received a Platinum certification and fulfilled the Living Building Challenge for net-zero energy, and its greenhouse facility received Platinum certification. It may be the only greenhouse in the world to have achieved such a rating.
Torre Mayor, at one time the tallest building in Mexico, achieved LEED Gold certification for an existing building and eventually reached Platinum certification under LEED v4.1. The building is designed to withstand 8.5-magnitude earthquakes, and has enhanced many of its systems including air handling and water treatment.
In 2017, Kaiser Permanente, the largest integrated health system in the United States, opened California's first LEED Platinum certified hospital, the Kaiser Permanente San Diego Medical Center. By 2020, Kaiser Permanente owned 40 LEED certified buildings.
Its construction of LEED buildings was one of multiple initiatives that enabled Kaiser Permanente to report net-zero carbon emissions in 2020.
As of 2022, University of California, Irvine had 32 LEED-certified buildings across the campus. 21 were LEED Platinum certified, and 11 were LEED Gold.
Extreme structures
Extreme structures that have received LEED certification include the Amorepacific Headquarters in Seoul by David Chipperfield Architects; the SFMOMA expansion in San Francisco, California, by Snøhetta; the Centro Botín in Santander, Spain, by Renzo Piano Building Workshop in collaboration with Luis Vidal + Architects; and an office building in London by Allford Hall Monaghan Morris.
See also
Building enclosure commissioning
BREEAM
Center for the Built Environment
Center for Environmental Innovation in Roofing
Code for Sustainable Homes
Design Impact Measures
Ecological footprint
Energy conservation
Estidama
Environmental design
Green Globes
High-Performance Green Buildings
Home energy rating
Living Building Challenge
NAHBGreen
Passive house
QSAS
Renewable energy
SmartCode
Sustainable architecture
Sustainable refurbishment
U.S. Green Building Council
WELL Building Standard
Notes and references
Explanatory notes
Citations
External links
U.S. Green Building Council (USGBC)
About LEED at USGBC
LEED rating system at USGBC
LEED Project Directory at USGBC
Green Building Information Gateway (GBIG) by USGBC
Canada Green Building Council (CGBC)
Project Database for CGBC
Building energy rating
Building engineering
Energy in the United States
Environment of the United States
Environmental design
Sustainable building in the United States
Sustainable building rating systems | LEED | Engineering | 9,770 |
43,037,038 | https://en.wikipedia.org/wiki/Thaine%27s%20theorem | In mathematics, Thaine's theorem is an analogue of Stickelberger's theorem for real abelian fields, introduced by Thaine. Thaine's method has been used to shorten the proof of the Mazur–Wiles theorem, to prove that some Tate–Shafarevich groups are finite, and in the proof of Mihăilescu's theorem.
Formulation
Let p and q be distinct odd primes with q not dividing p − 1. Let G+ be the Galois group of Q(ζp)+ over Q, let E be its group of units, let C be the subgroup of cyclotomic units, and let Cl+ be its class group. If θ ∈ Z[G+] annihilates E/C, then it annihilates Cl+.
References
See in particular Chapter 14 (pp. 91–94) for the use of Thaine's theorem to prove Mihăilescu's theorem, and Chapter 16 "Thaine's Theorem" (pp. 107–115) for proof of a special case of Thaine's theorem.
See in particular Chapter 15 (pp. 332–372) for Thaine's theorem (section 15.2) and its application to the Mazur–Wiles theorem.
Cyclotomic fields
Theorems in algebraic number theory | Thaine's theorem | Mathematics | 247 |
36,264,457 | https://en.wikipedia.org/wiki/MLS%20Innovation | MLS Innovation Inc. was a Greek software engineering and telecommunications equipment company founded in October 1989 in Thessaloniki. In May 2001, it was officially listed on the Athens Stock Exchange. Its headquarters were in Thessaloniki, and the company maintained a commercial department in Athens. Since 2019 the company has been virtually defunct and its headquarters building is to be sold in an auction.
MAIC (MLS Artificial Intelligence Center)
MLS Artificial Intelligence Center (MAIC) was a voice-command technology that served as the main selling point of MLS and was used in most of its devices.
MAIC used voice recognition to perform actions such as making calls, sending messages, searching the internet, and more.
Products
Educational boards
In 2010 MLS entered the educational technology market and completed the development of its own interactive touch board, the MLS IQBoard, for use in classroom-based instructional technologies. MLS undertook the supply and installation of approximately 1,100 interactive boards, totaling 1.6 million euros.
Smartphones
In 2012 the company expanded its commercial activities in the mobile phone market by launching the first Greek Android smartphone MLS iQTalk.
MLS Diamond
At the end of 2015, MLS announced the creation of a new category of products with the brand name "Diamond", which included the premium smartphone models MLS Diamond 4G, MLS Diamond 5.2 4G and MLS Diamond Fingerprint 4G.
2-in-1 tablet & laptop
In 2016 MLS introduced MLS Magic, a product functioning as a tablet and a laptop which supported dual boot of both Windows and Android.
Corporate bond
MLS announced on Tuesday 19 July 2016 the listing of a total of 400 common registered bond titles, each worth 10,000 euros, which would begin trading in the Fixed Income Alternative Market of the Athens Stock Exchange (ASE). The duration of the corporate bond is four (4) years with an option to extend for one (1) more year. At the end of every coupon period, which will be quarterly, the issuer will deposit the coupon amount to every holder of the corporate bond title(s), calculated based on an annual interest rate of 5.30% (coupon rate).
European projects
MLS Innovation engaged in various partnerships and research programs such as Inlife, Movesmart, Prosperity4All and Choreos.
Market share
At the end of 2015 MLS reached the top of tablet sales in Greece, with 17% market share.
In 2019 MLS had liquidity problems and stopped production.
Prizes and distinctions
1998 European Information Technology Prize
2008 Product of the Year (Τ3&PC Magazine) – MLS Destinator 4800 awarded Product of the Year 2008 Title
2008 Business Innovation Award as part of the event "Money Business Awards 2008"
2009 Product of the Year (Τ3&PC Magazine) – Voice recognition system MLS Destinator Talk&Drive™ awarded Product of the Year 2009 title
2010 Product of the Year (Τ3&PC Magazine) – Live recognition system with voice MLS Destinator Talk&Drive liveTRAFFIC awarded Product of the Year 2010 title
2011 Product of the Year (Τ3&PC Magazine) – The navigation system with voice MLS Destinator Talk&Drive™ liveTRAFFIC 500 won Product of the Year 2011 title
2013 - Greek Exports Awards - Technology-Innovation Award (Greek Exports Forum 2013)
2013 Business Award in the category "Business Innovation Prize" (organization of business Money Awards)
2014 - LIGHTHOUSE RETAIL BUSINESS AWARDS - Supplier of the Year 2014
2014 Greek value - Innovation Award from the institution GREEK VALUE Federation of Industries of Northern Greece (SBBE)
2014 Supplier of the Year - Industry and Retail Trade Retail Magazine Business Awards
2015 Greek Branded Product Honors - Industrial Products (Made in Greece Awards)
2015 High Growth Business Award (organization Ethos events - Money business Awards)
References
Software companies of Greece
Computer hardware companies
Companies listed on the Athens Exchange
Mobile phone manufacturers
Companies based in Thessaloniki
Greek brands | MLS Innovation | Technology | 779 |
5,802,077 | https://en.wikipedia.org/wiki/Prickle%20%28protein%29 | Prickle is also known as REST/NRSF-interacting LIM domain protein, which is a putative nuclear translocation receptor. Prickle is part of the non-canonical Wnt signaling pathway that establishes planar cell polarity. A gain or loss of function of Prickle1 causes defects in the convergent extension movements of gastrulation. In epithelial cells, Prickle2 establishes and maintains cell apical/basal polarity. Prickle1 plays an important role in the development of the nervous system by regulating the movement of nerve cells.
The first prickle protein was identified in Drosophila as a planar cell polarity protein. Vertebrate prickle-1 was first found as a rat protein that binds to a transcription factor, neuron-restrictive silencer factor (NRSF). It was then recognized that other vertebrates including mice and humans have two genes that are related to Drosophila prickle. Mouse prickle-2 was found to be expressed in mature neurons of the brain along with mouse homologs of the Drosophila planar polarity genes flamingo and dishevelled. Prickle interacts with flamingo to regulate sensory axon advance at the transition between the peripheral nervous system and the central nervous system. Also, Prickle1 interacts with RE1-silencing transcription factor (REST) by transporting REST out of the nucleus. REST turns off several critical genes in neurons by binding to particular regions of DNA in the nucleus.
Prickle is recruited to the cell surface membrane by strabismus, another planar cell polarity protein. In the developing Drosophila wing, prickle becomes concentrated at the proximal side of cells. Prickle can compete with the ankyrin-repeat protein Diego for a binding site on Dishevelled.
In Drosophila, prickle is present inside cells in multiple forms due to alternative splicing of the prickle mRNA. The relative levels of the alternate forms may be regulated and involved in the normal control of planar cell polarity.
Mutations in Prickle genes can cause epilepsy in humans by perturbing Prickle function. One mutation in Prickle1 gene can result in Prickle1-Related Progressive Myoclonus Epilepsy-Ataxia Syndrome. This mutation disrupts the interaction between prickle-like 1 and REST, which results in the inability to suppress REST. Gene knockdown of Prickle1 by shRNA or dominant-negative constructs results in decreased axonal and dendritic extension in neurons in the hippocampus. Prickle1 gene knockdown in neonatal retina causes defects in axon terminals of photoreceptors and in inner and outer segments.
References
External links
GeneReview/NCBI/NIH/UW entry on Progressive Myoclonus Epilepsy with Ataxia
Signal transduction | Prickle (protein) | Chemistry,Biology | 588 |
2,527,221 | https://en.wikipedia.org/wiki/Isotopes%20of%20mendelevium | Mendelevium (101Md) is a synthetic element, and thus a standard atomic weight cannot be given. Like all artificial elements, it has no stable isotopes. The first isotope to be synthesized was 256Md (which was also the first isotope of any element produced one atom at a time) in 1955. There are 17 known radioisotopes, ranging in atomic mass from 244Md to 260Md, and 5 isomers. The longest-lived isotope is 258Md with a half-life of 51.3 days, and the longest-lived isomer is 258mMd with a half-life of 57 minutes.
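As an aside not drawn from the article, a quoted half-life translates into a remaining fraction through the usual exponential decay law N(t) = N0 · 2^(−t/T½). A short sketch for 258Md, using the 51.3-day figure quoted above:

half_life_days = 51.3   # half-life of 258Md quoted above

def remaining_fraction(days: float) -> float:
    # Fraction of the original nuclei left after `days` of decay
    return 0.5 ** (days / half_life_days)

print(remaining_fraction(51.3))   # 0.5 after one half-life
print(remaining_fraction(365.0))  # roughly 0.007 after one year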
List of isotopes
|-
| rowspan=2|244Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 143
| rowspan=2|244.08116(40)#
| rowspan=2|0.36(14) s
| α
| 240Es
| rowspan=2|3+#
|-
| β+, SF (<14%)
| (various)
|-id=Mendelevium-244m
| style="text-indent:1em" | 244mMd
| colspan="3" style="text-indent:2em" | 200(150)# keV
| ~9 μs
| IT
| 244Md
| 7+#
|-id=Mendelevium-245
| 245Md
| style="text-align:right" | 101
| style="text-align:right" | 144
| 245.08086(28)#
| 0.38(10) s
| α
| 241Es
| (7/2−)
|-id=Mendelevium-245m
| style="text-indent:1em" | 245mMd
| colspan="3" style="text-indent:2em" | 100(100)# keV
| 0.90(25) ms
| SF
| (various)
| 1/2−#
|-id=Mendelevium-246
| 246Md
| style="text-align:right" | 101
| style="text-align:right" | 145
| 246.08171(28)#
| 0.92(18) s
| α
| 242Es
| 1−#
|-id=Mendelevium-246m
| rowspan=3 style="text-indent:1em" | 246mMd
| rowspan=3 colspan="3" style="text-indent:2em" | 60(60) keV
| rowspan=3|4.4(8) s
| β+ (~67%)
| 246Fm
| rowspan=3|4−#
|-
| α (<23%)
| 242Es
|-
| β+, SF (>10%)
| (various)
|-
| rowspan=2|247Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 146
| rowspan=2|247.08152(22)#
| rowspan=2|1.20(12) s
| α (99.14%)
| 243Es
| rowspan=2|(7/2−)
|-
| SF (0.86%)
| (various)
|-id=Mendelevium-247m
| rowspan=2 style="text-indent:1em" | 247mMd
| rowspan=2 colspan="3" style="text-indent:2em" | 153 keV
| rowspan=2|0.23(3) s
| α (80%)
| 243Es
| rowspan=2|(1/2−)
|-
| SF (20%)
| (various)
|-
| rowspan=3|248Md
| rowspan=3 style="text-align:right" | 101
| rowspan=3 style="text-align:right" | 147
| rowspan=3|248.08261(20)#
| rowspan=3|7(3) s
| β+ (80%)
| 248Fm
| rowspan=3|
|-
| α (20%)
| 244Es
|-
| β+, SF (<0.05%)
| (various)
|-
| rowspan=2|249Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 148
| rowspan=2|249.08286(18)
| rowspan=2|25.6(9) s
| α (75%)
| 245Es
| rowspan=2|(7/2−)
|-
| β+ (25%)
| 249Fm
|-id=Mendelevium-249m
| style="text-indent:1em" | 249mMd
| colspan="3" style="text-indent:2em" | 100(100)# keV
| 1.9(9) s
| α
| 245Es
| (1/2−)
|-
| rowspan=3|250Md
| rowspan=3 style="text-align:right" | 101
| rowspan=3 style="text-align:right" | 149
| rowspan=3|250.084165(98)
| rowspan=3|54(4) s
| β+ (93.0%)
| 250Fm
| rowspan=3|2−#
|-
| α (7.0%)
| 246Es
|-
| β+, SF (0.026%)
| (various)
|-id=Mendelevium-250m
| style="text-indent:1em" | 250mMd
| colspan="3" style="text-indent:2em" | 120(40) keV
| 42.4(45) s
| α
| 246Es
| 7+#
|-
| rowspan=2|251Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 150
| rowspan=2|251.084774(20)
| rowspan=2|4.21(23) min
| β+ (90%)
| 251Fm
| rowspan=2|(7/2−)
|-
| α (10%)
| 247Es
|-id=Mendelevium-251m
| style="text-indent:1em" | 251mMd
| colspan="3" style="text-indent:2em" | 53(8) keV
| 20# s
|
|
| (1/2−)
|-id=Mendelevium-252
| 252Md
| style="text-align:right" | 101
| style="text-align:right" | 151
| 252.086385(98)
| 2.3(8) min
| β+
| 252Fm
| 1+#
|-
| rowspan=2|253Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 152
| rowspan=2|253.087143(34)#
| rowspan=2|12(8) min
| β+ (~99.3%)
| 253Fm
| rowspan=2|(7/2−)
|-
| α (~0.7%)
| 249Es
|-id=Mendelevium-253m
| style="text-indent:1em" | 253mMd
| colspan="3" style="text-indent:2em" | 60(30) keV
| 1# min
|
|
| (1/2−)
|-id=Mendelevium-254
| 254Md
| style="text-align:right" | 101
| style="text-align:right" | 153
| 254.08959(11)#
| 10(3) min
| β+
| 254Fm
| 0−#
|-id=Mendelevium-254m
| style="text-indent:1em" | 254mMd
| colspan="3" style="text-indent:2em" | 50(100)# keV
| 28(8) min
| β+
| 254Fm
| 3−#
|-
| rowspan=2|255Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 154
| rowspan=2|255.0910817(60)
| rowspan=2|27(2) min
| β+ (93%)
| 255Fm
| rowspan=2|7/2−
|-
| α (7%)
| 251Es
|-
| rowspan=3|256Md
| rowspan=3 style="text-align:right" | 101
| rowspan=3 style="text-align:right" | 155
| rowspan=3|256.09389(13)#
| rowspan=3|77.7(18) min
| β+ (90.8%)
| 256Fm
| rowspan=3|(1−)
|-
| α (9.2%)
| 252Es
|-
| SF (<3%)
| (various)
|-
| rowspan=2|257Md
| rowspan=2 style="text-align:right" | 101
| rowspan=2 style="text-align:right" | 156
| rowspan=2|257.0955373(17)
| rowspan=2|5.52(5) h
| EC (85%)
| 257Fm
| rowspan=2|(7/2−)
|-
| α (15%)
| 253Es
|-
| rowspan=3|258Md
| rowspan=3 style="text-align:right" | 101
| rowspan=3 style="text-align:right" | 157
| rowspan=3|258.0984336(37)
| rowspan=3|51.59(29) d
| α
| 254Es
| rowspan=3|(8−)#
|-
| β− (<0.0015%)
| 258No
|-
| β+ (<0.0015%)
| 258Fm
|-id=Mendelevium-258m
| rowspan=3 style="text-indent:1em" | 258mMd
| rowspan=3 colspan="3" style="text-indent:2em" | 0(200)# keV
| rowspan=3|57.0(9) min
| EC (85%)
| 258Fm
| rowspan=3|1−#
|-
| SF (<15%)
| (various)
|-
| α (<1.2%)
| 254Es
|-id=Mendelevium-259
| 259Md
| style="text-align:right" | 101
| style="text-align:right" | 158
| 259.10045(11)#
| 1.60(6) h
| SF
| (various)
| 7/2−#
|-
| rowspan=4|260Md
| rowspan=4 style="text-align:right" | 101
| rowspan=4 style="text-align:right" | 159
| rowspan=4|260.10365(34)#
| rowspan=4|27.8(8) d
| SF
| (various)
| rowspan=4|
|-
| α (<5%)
| 256Es
|-
| EC (<5%)
| 260Fm
|-
| β− (<3.5%)
| 260No
Chronology of isotope discovery
References
Isotope masses from:
Half-life, spin, and isomer data selected from the following sources.
Mendelevium
Mendelevium | Isotopes of mendelevium | Chemistry | 2,635 |
1,498,680 | https://en.wikipedia.org/wiki/It%C3%B4%20calculus | Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations.
The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes:
Y_t = ∫_0^t H_s dX_s,
where H is a locally square-integrable process adapted to the filtration generated by X, which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval.
The main insight is that the integral can be defined as long as the integrand is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time. Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums. Every time we are computing a Riemann sum, we are using a particular instantiation of the integrator. It is crucial which point in each of the small intervals is used to compute the value of the function. The limit then is taken in probability as the mesh of the partition is going to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions. Typically, the left end of the interval is used.
Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms.
In mathematical finance, the described evaluation strategy of the integral is interpreted as first deciding what to do, then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total, including what our stock is worth, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount H_t of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can only make use of the available information at any time. This prevents the possibility of unlimited gains through clairvoyance: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that H is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums.
Notation
The process $Y$ defined before as
$Y_t = \int_0^t H\,dB$
is itself a stochastic process with time parameter $t$, which is also sometimes written as $Y = H \cdot B$. Alternatively, the integral is often written in differential form $dY = H\,dB$, which is equivalent to $Y - Y_0 = H \cdot B$. As Itô calculus is concerned with continuous-time stochastic processes, it is assumed that an underlying filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$ is given.
The σ-algebra $\mathcal{F}_t$ represents the information available up until time $t$, and a process $X$ is adapted if $X_t$ is $\mathcal{F}_t$-measurable. A Brownian motion $B$ is understood to be an $\mathcal{F}_t$-Brownian motion, which is just a standard Brownian motion with the properties that $B_t$ is $\mathcal{F}_t$-measurable and that $B_{t+s} - B_t$ is independent of $\mathcal{F}_t$ for all $s, t \ge 0$.
Integration with respect to Brownian motion
The Itô integral can be defined in a manner similar to the Riemann–Stieltjes integral, that is, as a limit in probability of Riemann sums; such a limit does not necessarily exist pathwise. Suppose that $B$ is a Wiener process (Brownian motion) and that $H$ is a right-continuous (càdlàg), adapted and locally bounded process. If $\pi_n$ is a sequence of partitions of $[0, t]$ with mesh width going to zero, then the Itô integral of $H$ with respect to $B$ up to time $t$ is a random variable
$\int_0^t H\,dB = \lim_{n\to\infty} \sum_{[t_{i-1}, t_i] \in \pi_n} H_{t_{i-1}} \left(B_{t_i} - B_{t_{i-1}}\right).$
It can be shown that this limit converges in probability.
For some applications, such as martingale representation theorems and local times, the integral is needed for processes that are not continuous. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. If $H$ is any predictable process such that $\int_0^t H^2\,ds < \infty$ for every $t \ge 0$ then the integral of $H$ with respect to $B$ can be defined, and $H$ is said to be $B$-integrable. Any such process can be approximated by a sequence $H_n$ of left-continuous, adapted and locally bounded processes, in the sense that
$\int_0^t (H_n - H)^2\,ds \to 0$
in probability. Then, the Itô integral is
$\int_0^t H\,dB = \lim_{n\to\infty} \int_0^t H_n\,dB,$
where, again, the limit can be shown to converge in probability. The stochastic integral satisfies the Itô isometry
$\mathbb{E}\!\left[\left(\int_0^t H\,dB\right)^2\right] = \mathbb{E}\!\left[\int_0^t H^2\,ds\right],$
which holds when $H$ is bounded or, more generally, when the integral on the right hand side is finite.
Itô processes
An Itô process is defined to be an adapted stochastic process that can be expressed as the sum of an integral with respect to Brownian motion and an integral with respect to time,
$X_t = X_0 + \int_0^t \sigma_s\,dB_s + \int_0^t \mu_s\,ds.$
Here, $B$ is a Brownian motion and it is required that σ is a predictable $B$-integrable process, and μ is predictable and (Lebesgue) integrable. That is,
$\int_0^t (\sigma_s^2 + |\mu_s|)\,ds < \infty$
for each $t$. The stochastic integral can be extended to such Itô processes,
$\int_0^t H\,dX = \int_0^t H_s\sigma_s\,dB_s + \int_0^t H_s\mu_s\,ds.$
This is defined for all locally bounded and predictable integrands. More generally, it is required that $H\sigma$ be $B$-integrable and $H\mu$ be Lebesgue integrable, so that
$\int_0^t (H_s^2\sigma_s^2 + |H_s\mu_s|)\,ds < \infty.$
Such predictable processes $H$ are called $X$-integrable.
An important result for the study of Itô processes is Itô's lemma. In its simplest form, for any twice continuously differentiable function $f$ on the reals and Itô process $X$ as described above, it states that $f(X)$ is itself an Itô process satisfying
$df(X_t) = f'(X_t)\,dX_t + \tfrac{1}{2} f''(X_t)\,\sigma_t^2\,dt.$
This is the stochastic calculus version of the change of variables formula and chain rule. It differs from the standard result due to the additional term involving the second derivative of $f$, which comes from the property that Brownian motion has non-zero quadratic variation.
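As a minimal numerical illustration of this form of Itô's lemma, assume the simplest Itô process $X = B$ (that is, σ = 1 and μ = 0) and the test function $f(x) = x^2$. The sketch below checks along a simulated path that $f(B_T) - f(B_0)$ matches the two terms of the lemma; the step count and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 200_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

f = lambda x: x ** 2                        # twice continuously differentiable test function
df = lambda x: 2.0 * x                      # f'
d2f = lambda x: 2.0 * np.ones_like(x)       # f''

# Right-hand side of Ito's lemma: integral of f'(B) dB plus one half the integral of f''(B) dt
rhs = np.sum(df(B[:-1]) * dB) + 0.5 * np.sum(d2f(B[:-1]) * dt)

print(f(B[-1]) - f(B[0]), rhs)              # both are close to B_T^2; the second term contributes T
```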
Semimartingales as integrators
The Itô integral is defined with respect to a semimartingale $X$. These are processes which can be decomposed as $X = M + A$ for a local martingale $M$ and a finite variation process $A$. Important examples of such processes include Brownian motion, which is a martingale, and Lévy processes. For a left-continuous, locally bounded and adapted process $H$ the integral $H \cdot X$ exists, and can be calculated as a limit of Riemann sums. Let $\pi_n$ be a sequence of partitions of $[0, t]$ with mesh going to zero,
$\int_0^t H\,dX = \lim_{n\to\infty} \sum_{[t_{i-1}, t_i] \in \pi_n} H_{t_{i-1}} \left(X_{t_i} - X_{t_{i-1}}\right).$
This limit converges in probability. The stochastic integral of left-continuous processes is general enough for studying much of stochastic calculus. For example, it is sufficient for applications of Itô's Lemma, changes of measure via Girsanov's theorem, and for the study of stochastic differential equations. However, it is inadequate for other important topics such as martingale representation theorems and local times.
The integral extends to all predictable and locally bounded integrands, in a unique way, such that the dominated convergence theorem holds. That is, if $H^n \to H$ and $|H^n| \le J$ for a locally bounded process $J$, then
$\int_0^t H^n\,dX \to \int_0^t H\,dX$
in probability. The uniqueness of the extension from left-continuous to predictable integrands is a result of the monotone class lemma.
In general, the stochastic integral can be defined even in cases where the predictable process $H$ is not locally bounded. Setting $K = 1/(1 + |H|)$, both $K$ and $KH$ are bounded. Associativity of stochastic integration then implies that $H$ is $X$-integrable, with integral $H \cdot X = Y$, if and only if $Y_0 = 0$ and $K \cdot Y = (KH) \cdot X$. The set of $X$-integrable processes is denoted by $L(X)$.
Properties
The following properties can be found in standard treatments of stochastic calculus:
The stochastic integral is a càdlàg process. Furthermore, it is a semimartingale.
The discontinuities of the stochastic integral are given by the jumps of the integrator multiplied by the integrand. The jump of a càdlàg process $X$ at a time $t$ is $X_t - X_{t-}$, and is often denoted by $\Delta X_t$. With this notation, $\Delta(H \cdot X)_t = H_t\,\Delta X_t$. A particular consequence of this is that integrals with respect to a continuous process are always themselves continuous.
Associativity. Let $H$ and $K$ be predictable processes, and let $H$ be $X$-integrable. Then, $K$ is $H \cdot X$-integrable if and only if $KH$ is $X$-integrable, in which case $K \cdot (H \cdot X) = (KH) \cdot X$.
Dominated convergence. Suppose that $H^n \to H$ and $|H^n| \le J$, where $J$ is an $X$-integrable process. Then $H^n \cdot X \to H \cdot X$. Convergence is in probability at each time $t$; in fact, it converges uniformly on compact sets in probability.
The stochastic integral commutes with the operation of taking quadratic covariations. If $X$ and $Y$ are semimartingales then any $X$-integrable process will also be $[X, Y]$-integrable, and $[H \cdot X, Y] = H \cdot [X, Y]$. A consequence of this is that the quadratic variation process of a stochastic integral is equal to an integral of the quadratic variation process, $[H \cdot X] = H^2 \cdot [X]$.
Integration by parts
As with ordinary calculus, integration by parts is an important result in stochastic calculus. The integration by parts formula for the Itô integral differs from the standard result due to the inclusion of a quadratic covariation term. This term comes from the fact that Itô calculus deals with processes with non-zero quadratic variation, which only occurs for infinite variation processes (such as Brownian motion). If $X$ and $Y$ are semimartingales then
$X_t Y_t = X_0 Y_0 + \int_0^t X_{s-}\,dY_s + \int_0^t Y_{s-}\,dX_s + [X, Y]_t,$
where $[X, Y]$ is the quadratic covariation process.
The result is similar to the integration by parts theorem for the Riemann–Stieltjes integral but has an additional quadratic variation term.
Itô's lemma
Itô's lemma is the version of the chain rule or change of variables formula which applies to the Itô integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous $n$-dimensional semimartingale $X = (X^1, \ldots, X^n)$ and a twice continuously differentiable function $f$ from $\mathbb{R}^n$ to $\mathbb{R}$, it states that $f(X)$ is a semimartingale and
$df(X_t) = \sum_i f_{,i}(X_t)\,dX^i_t + \tfrac{1}{2}\sum_{i,j} f_{,ij}(X_t)\,d[X^i, X^j]_t.$
This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation $[X^i, X^j]$. The formula can be generalized to include an explicit time-dependence in $f$ and in other ways (see Itô's lemma).
Martingale integrators
Local martingales
An important property of the Itô integral is that it preserves the local martingale property. If $M$ is a local martingale and $H$ is a locally bounded predictable process then $H \cdot M$ is also a local martingale. For integrands which are not locally bounded, there are examples where $H \cdot M$ is not a local martingale. However, this can only occur when $M$ is not continuous. If $M$ is a continuous local martingale then a predictable process $H$ is $M$-integrable if and only if
$\int_0^t H^2\,d[M] < \infty$
for each $t$, and $H \cdot M$ is always a local martingale.
The most general statement for a discontinuous local martingale $M$ is that if $(H^2 \cdot [M])^{1/2}$ is locally integrable then $H \cdot M$ exists and is a local martingale.
Square integrable martingales
For bounded integrands, the Itô stochastic integral preserves the space of square integrable martingales, which is the set of càdlàg martingales $M$ such that $\mathbb{E}[M_t^2]$ is finite for all $t$. For any such square integrable martingale $M$, the quadratic variation process $[M]$ is integrable, and the Itô isometry states that
$\mathbb{E}\!\left[(H \cdot M_t)^2\right] = \mathbb{E}\!\left[\int_0^t H^2\,d[M]\right].$
This equality holds more generally for any martingale such that $H^2 \cdot [M]_t$ is integrable. The Itô isometry is often used as an important step in the construction of the stochastic integral, by defining $H \cdot M$ to be the unique extension of this isometry from a certain class of simple integrands to all bounded and predictable processes.
p-Integrable martingales
For any $p > 1$, and bounded predictable integrand, the stochastic integral preserves the space of $p$-integrable martingales. These are càdlàg martingales $M$ such that $\mathbb{E}(|M_t|^p)$ is finite for all $t$. However, this is not always true in the case where $p = 1$: there are examples of integrals of bounded predictable processes with respect to martingales which are not themselves martingales.
The maximum process of a càdlàg process $M$ is written as $M^*_t = \sup_{s \le t} |M_s|$. For any $p \ge 1$ and bounded predictable integrand, the stochastic integral preserves the space of càdlàg martingales $M$ such that $\mathbb{E}[(M^*_t)^p]$ is finite for all $t$. If $p > 1$ then this is the same as the space of $p$-integrable martingales, by Doob's inequalities.
The Burkholder–Davis–Gundy inequalities state that, for any given , there exist positive constants , that depend on , but not or on such that
for all càdlàg local martingales . These are used to show that if is integrable and is a bounded predictable process then
and, consequently, is a -integrable martingale. More generally, this statement is true whenever is integrable.
Existence of the integral
Proofs that the Itô integral is well defined typically proceed by first looking at very simple integrands, such as piecewise constant, left continuous and adapted processes where the integral can be written explicitly. Such simple predictable processes are linear combinations of terms of the form for stopping times and -measurable random variables , for which the integral is
This is extended to all simple predictable processes by the linearity of in .
For a Brownian motion , the property that it has independent increments with zero mean and variance can be used to prove the Itô isometry for simple predictable integrands,
By a continuous linear extension, the integral extends uniquely to all predictable integrands satisfying
in such way that the Itô isometry still holds. It can then be extended to all -integrable processes by localization. This method allows the integral to be defined with respect to any Itô process.
For a general semimartingale , the decomposition into a local martingale plus a finite variation process can be used. Then, the integral can be shown to exist separately with respect to and and combined using linearity, , to get the integral with respect to X. The standard Lebesgue–Stieltjes integral allows integration to be defined with respect to finite variation processes, so the existence of the Itô integral for semimartingales will follow from any construction for local martingales.
For a càdlàg square integrable martingale , a generalized form of the Itô isometry can be used. First, the Doob–Meyer decomposition theorem is used to show that a decomposition exists, where is a martingale and is a right-continuous, increasing and predictable process starting at zero. This uniquely defines , which is referred to as the predictable quadratic variation of . The Itô isometry for square integrable martingales is then
which can be proved directly for simple predictable integrands. As with the case above for Brownian motion, a continuous linear extension can be used to uniquely extend to all predictable integrands satisfying . This method can be extended to all local square integrable martingales by localization. Finally, the Doob–Meyer decomposition can be used to decompose any local martingale into the sum of a local square integrable martingale and a finite variation process, allowing the Itô integral to be constructed with respect to any semimartingale.
Many other proofs exist which apply similar methods but which avoid the need to use the Doob–Meyer decomposition theorem, such as the use of the quadratic variation [M] in the Itô isometry, the use of the Doléans measure for submartingales, or the use of the Burkholder–Davis–Gundy inequalities instead of the Itô isometry. The latter applies directly to local martingales without having to first deal with the square integrable martingale case.
Alternative proofs exist that only make use of the fact that $X$ is càdlàg, adapted, and the set $\{H \cdot X_t : |H| \le 1,\ H\ \text{simple previsible}\}$ is bounded in probability for each time $t$, which is an alternative definition for $X$ to be a semimartingale. A continuous linear extension can be used to construct the integral for all left-continuous and adapted integrands with right limits everywhere (caglad or L-processes). This is general enough to be able to apply techniques such as Itô's lemma. Also, a Khintchine inequality can be used to prove the dominated convergence theorem and extend the integral to general predictable integrands.
Differentiation in Itô calculus
The Itô calculus is first and foremost defined as an integral calculus as outlined above. However, there are also different notions of "derivative" with respect to Brownian motion:
Malliavin derivative
Malliavin calculus provides a theory of differentiation for random variables defined over Wiener space, including an integration by parts formula .
Martingale representation
The following result allows martingales to be expressed as Itô integrals: if $M$ is a square-integrable martingale on a time interval $[0, T]$ with respect to the filtration generated by a Brownian motion $B$, then there is a unique adapted square integrable process α on $[0, T]$ such that
$M_t = M_0 + \int_0^t \alpha_s\,dB_s$
almost surely, and for all $t \in [0, T]$. This representation theorem can be interpreted formally as saying that α is the "time derivative" of $M$ with respect to Brownian motion $B$, since α is precisely the process that must be integrated up to time $t$ to obtain $M_t - M_0$, as in deterministic calculus.
Itô calculus for physicists
In physics, stochastic differential equations (SDEs), such as Langevin equations, are usually used rather than stochastic integrals. Here an Itô stochastic differential equation (SDE) is often formulated via
$\dot{x}_k = h_k + g_{kl}\,\xi_l,$
where $\xi_l$ is Gaussian white noise with
$\langle \xi_k(t_1)\,\xi_l(t_2) \rangle = \delta_{kl}\,\delta(t_1 - t_2)$
and Einstein's summation convention is used.
If $y = f(x_k)$ is a function of the $x_k$, then Itô's lemma has to be used:
$\dot{y} = \frac{\partial f}{\partial x_k}\,\dot{x}_k + \frac{1}{2}\,\frac{\partial^2 f}{\partial x_k\,\partial x_l}\,g_{km}\,g_{lm}.$
An Itô SDE as above also corresponds to a Stratonovich SDE which reads
$\dot{x}_k = h_k + g_{kl}\,\xi_l - \frac{1}{2}\,\frac{\partial g_{kl}}{\partial x_m}\,g_{ml}.$
SDEs frequently occur in physics in Stratonovich form, as limits of stochastic differential equations driven by colored noise if the correlation time of the noise term approaches zero.
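A minimal sketch of the practical difference between the two interpretations, assuming the scalar test SDE $dx = \sigma x\,dW$ with no drift (an arbitrary illustrative choice, not from the article): an Euler–Maruyama step realizes the Itô convention, while a Heun-type predictor–corrector step realizes the Stratonovich one.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T, n, paths = 0.5, 1.0, 2_000, 20_000
dt = T / n

x_ito = np.ones(paths)
x_str = np.ones(paths)
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    # Euler-Maruyama: noise coefficient evaluated at the start of the step (Ito)
    x_ito = x_ito + sigma * x_ito * dW
    # Heun predictor-corrector: averages start and predicted end point (Stratonovich)
    pred = x_str + sigma * x_str * dW
    x_str = x_str + 0.5 * sigma * (x_str + pred) * dW

# The Ito solution has mean 1, while the Stratonovich solution has mean exp(sigma^2 T / 2)
print(x_ito.mean(), x_str.mean(), np.exp(0.5 * sigma**2 * T))
```

The gap between the two sample means is exactly the drift correction term appearing in the Itô-to-Stratonovich conversion above.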
For a recent treatment of different interpretations of stochastic differential equations see for example .
See also
Stochastic calculus
Itô's lemma
Otto calculus
Stratonovich integral
Semimartingale
Wiener process
References
Hagen Kleinert (2004). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore); Paperback . Fifth edition available online: PDF-files, with generalizations of Itô's lemma for non-Gaussian processes.
Mathematical Finance Programming in TI-Basic, which implements Ito calculus for TI-calculators.
Definitions of mathematical integration
Stochastic calculus
Mathematical finance
Integral calculus | Itô calculus | Mathematics | 3,911 |
66,669,680 | https://en.wikipedia.org/wiki/Leucopholiota%20lignicola | Leucopholiota lignicola is a species of fungus belonging to the family Agaricaceae.
Synonym:
Lepiota lignicola P.Karst., 1879 (= basionym)
References
Agaricaceae
Fungus species | Leucopholiota lignicola | Biology | 53 |
6,402,491 | https://en.wikipedia.org/wiki/Hydrogen%20compressor | A hydrogen compressor is a device that increases the pressure of hydrogen by reducing its volume resulting in compressed hydrogen or liquid hydrogen.
Traditionally, applications for hydrogen compressors included chlorine electrolysers and many chemical processes such as the production of hydrogen peroxide (HPPO). Newer applications related to green and environmentally friendly technologies include fuel cells and electrolysis for hydrogen production.
Compressor vs pump
Hydrogen compressors are closely related to hydrogen pumps and gas compressors: both increase the pressure on a fluid and both can transport the fluid through a pipe. As gases are compressible, the compressor also reduces the volume of hydrogen gas, whereas the main result of a pump raising the pressure of a liquid is to allow the liquid hydrogen to be transported elsewhere.
Types
Reciprocating piston compressors
A proven method to compress hydrogen is to apply reciprocating piston compressors. Widely used in refineries, they are the backbone of refining crude oil. Reciprocating piston compressors are commonly available as either oil-lubricated or non-lubricated; for high pressures (350–700 bar), non-lubricated compressors are preferred to avoid oil contamination of the hydrogen. Typical drive power is on the order of megawatts (300 kW–15 MW). Expert know-how on piston sealing and packing rings can ensure that reciprocating compressors outperform competing technologies in terms of MTBO (mean time between overhaul).
Ionic liquid piston compressor
An ionic liquid piston compressor is a hydrogen compressor based on an ionic liquid piston instead of a metal piston as in a piston-metal diaphragm compressor.
Electrochemical hydrogen compressor
A multi-stage electrochemical hydrogen compressor incorporates a series of membrane-electrode assemblies (MEAs), similar to those used in proton-exchange membrane fuel cells; this type of compressor has no moving parts and is compact. The electrochemical compressor works similarly to a fuel cell: a voltage is applied across the membrane and the resulting electric current pulls hydrogen through it. With electrochemical compression of hydrogen, a pressure of 14,500 psi (1,000 bar or 100 MPa) has been achieved. A patent is pending claiming an exergy efficiency of 70 to 80% for pressures up to 10,000 psi (700 bar). A single-stage electrochemical compression to 800 bar was reported in 2011.
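The minimum (reversible, isothermal) work of electrochemical hydrogen compression can be estimated from the Nernst relation for a hydrogen concentration cell. The sketch below is a back-of-the-envelope calculation; the inlet and outlet pressures are illustrative values, not figures for any particular device, and all real losses are ignored.

```python
import math

R = 8.314       # J/(mol*K), gas constant
F = 96485.0     # C/mol, Faraday constant
n_e = 2         # electrons transferred per H2 molecule

def compression_voltage(p_in_bar, p_out_bar, T_kelvin=298.15):
    """Reversible cell voltage needed to pump H2 from p_in to p_out (Nernst relation)."""
    return R * T_kelvin / (n_e * F) * math.log(p_out_bar / p_in_bar)

def ideal_work_kwh_per_kg(p_in_bar, p_out_bar, T_kelvin=298.15):
    """Ideal isothermal work per kg of H2 (molar mass 2.016 g/mol), excluding all losses."""
    w_per_mol = n_e * F * compression_voltage(p_in_bar, p_out_bar, T_kelvin)  # J/mol
    return w_per_mol / 0.002016 / 3.6e6                                       # kWh/kg

print(compression_voltage(1, 700))        # about 0.084 V
print(ideal_work_kwh_per_kg(1, 700))      # roughly 2.2 kWh/kg before any inefficiencies
```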
Hydride compressor
In a hydride compressor, thermal and pressure properties of a hydride are used to absorb low-pressure hydrogen gas at ambient temperatures and then release high-pressure hydrogen gas at higher temperatures; the bed of hydride is heated with hot water or an electric coil.
Piston-metal diaphragm compressor
Piston-metal diaphragm compressors are stationary high-pressure compressors, four-staged and water-cooled, typically 11–15 kW, 30–50 Nm3/h at 40 MPa, used for dispensing hydrogen. Since compression generates heat, the compressed gas needs to be cooled between stages, making the compression less adiabatic and more isothermal. The default assumption for diaphragm hydrogen compressors is an adiabatic efficiency of 70%. They are used in hydrogen stations.
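A rough sketch of why multi-stage compression with intercooling helps, assuming ideal-gas isentropic compression per stage, cooling back to the inlet temperature between stages, and the 70% adiabatic efficiency quoted above; the pressures and stage counts are illustrative assumptions, not manufacturer data.

```python
R = 8.314          # J/(mol*K)
gamma = 1.41       # approximate heat capacity ratio of H2 near room temperature
M_H2 = 0.002016    # kg/mol

def stage_work(p_ratio, T_in=298.15, eta=0.70):
    """Isentropic compression work per mol for one stage, divided by efficiency (J/mol)."""
    ideal = gamma / (gamma - 1) * R * T_in * (p_ratio ** ((gamma - 1) / gamma) - 1)
    return ideal / eta

def total_work_kwh_per_kg(p_in, p_out, stages):
    r = (p_out / p_in) ** (1 / stages)          # equal pressure ratio per stage
    w = stages * stage_work(r)                  # intercooled back to T_in before each stage
    return w / M_H2 / 3.6e6

for n in (1, 2, 4):
    print(n, round(total_work_kwh_per_kg(1, 400, n), 2), "kWh/kg")
# More stages -> lower total work, approaching the isothermal limit.
```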
Guided rotor compressor
The guided rotor compressor (GRC) is a positive-displacement rotary compressor based upon an envoluted trochoid geometry which utilizes a parallel trochoid curve to define its basic compression volume. It has a typical 80 to 85% adiabatic efficiency.
Linear compressor
The single-piston linear compressor uses dynamic counterbalancing, where an auxiliary movable mass is flexibly attached to a movable piston assembly and to the stationary compressor casing using auxiliary mechanical springs with zero vibration export at minimum electrical power and current consumed by the motor. It is used in cryogenics
Electric Pressure Machine
2023 saw the invention of a compressor cylinder which heats gas to multiply its pressure. There is no pressure limit, as any starting pressure is multiplied. The first public mention is in Patent Application No. PCT/AU2023/051351.
See also
References
External links
2004-Osti Hybrid Compressor-Expander
Gas compressors
Hydrogen technologies
Industrial gases | Hydrogen compressor | Chemistry | 847 |
53,855,904 | https://en.wikipedia.org/wiki/Evolutionary%20models%20of%20food%20sharing | Evolutionary biologists have developed various theoretical models to explain the evolution of food-sharing behavior—"[d]efined as the unresisted transfer of food" from one food-motivated individual to another—among humans and other animals.
Models of food-sharing are based upon general evolutionary theory. When applied to human behavior, these models are considered a branch of human behavioral ecology. Researchers have developed several types of food-sharing models, involving phenomena such as kin selection, reciprocal altruism, tolerated theft, group cooperation, and costly signaling. Kin-selection and reciprocal-altruism models of food-sharing are based upon evolutionary concepts of kin selection and altruism. Since the theoretical basis of these models involves reproductive fitness, one underlying assumption of these models is that greater resource-accumulation increases reproductive fitness. Food-sharing has been theorized as an important development in early human evolution.
Kin selection models
W. D. Hamilton was among the first to propose an explanation for natural selection of altruistic behaviors among related individuals. According to his model, natural selection will favor altruistic behavior towards kin when the benefit (as a contributing factor to reproductive fitness) towards the recipient (scaled by Wright's coefficient of genetic relatedness between donor and recipient) outweighs the cost of giving. In other words, kin selection implies that food will be given when there is a great benefit to the recipient at low cost to the donor. An example of this would be sharing food among kin during a period of surplus. One implication of kin selection is that natural selection will also favor the development of ways of distinguishing kin from non-kin and close kin from distant kin. When Hamilton's rule is applied to food-sharing behavior, a simple expectation of the model is that close kin should receive food shares either more frequently or in larger quantities than distant relatives and non-kin. Additionally, greater imbalances in the quantities shared are expected between close kin than between non-kin or distantly related individuals. However, if reciprocal altruism is an important factor, then this may not be the case among close kin who are also reciprocity partners (see subsection below). Based on Feinman's extension of Hamilton's model, cheating, such as recipients not sharing food with their donors, is more likely to take place between unrelated (or distantly related) individuals than between closely related individuals.
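A minimal sketch of Hamilton's rule (sharing is favored when r·B > C) applied to a food-sharing decision; the relatedness coefficients and the benefit and cost figures are purely illustrative assumptions.

```python
def share_favoured(benefit, cost, relatedness):
    """Hamilton's rule: kin selection favours sharing when r * B > C."""
    return relatedness * benefit > cost

# Surplus food: large benefit to the recipient, small cost to the donor.
print(share_favoured(benefit=10.0, cost=1.0, relatedness=0.5))    # full sibling -> True
print(share_favoured(benefit=10.0, cost=1.0, relatedness=0.125))  # first cousin -> True
# Scarcer food: higher cost to the donor, so only close kin are favoured.
print(share_favoured(benefit=10.0, cost=4.0, relatedness=0.5))    # full sibling -> True
print(share_favoured(benefit=10.0, cost=4.0, relatedness=0.125))  # first cousin -> False
```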
Reciprocal models
In addition to expanding the ideas of Hamilton and kin-based altruism, Saul Feinman also theorized food-sharing based on Robert Trivers' model of reciprocal altruism. Following Trivers' ideas, reciprocal altruism models of food-sharing generate expectations of when and how a recipient of food from a donor will share food in a future interaction with that donor. Like the kin selection model, food will be shared when the benefit towards the recipient is greater than the costs to the donor, such as a period of surplus.
In 1984, Jim Moore proposed a model for the evolution of reciprocal food-sharing based on evidence from chimpanzees and modern human hunter-gatherer groups. This model was designed to address problems that Trivers' model failed to address. One problem, according to Moore, was that the model did not adequately explain the initial development of reciprocal altruism. If a population started with one reciprocal altruist who only gives benefits to other reciprocal altruists, then this trait would not be selected as there would be no other altruists to interact with.
Moore's model made the following assumptions:
Humans evolved from chimp-like ancestors with a similar hierarchical social structure (i.e., open dominance hierarchy that is male-dominated)
Males were highly motivated to achieve high rank. The short term reward of this was the position itself and the long-term reward was reproductive success, which resulted from greater access to mates with high rank position.
Begging behavior similar to that of chimpanzees existed, such as that between an infant and mother
There is an intermediate range of resource abundance that will favor sharing (e.g., meat for human ancestors that were occasionally-carnivorous herbivores)
With these assumptions, Moore proposed that human ancestors would have learned to hunt and share food if they faced punitive action from other group members if an individual did not share food with the group. An individual who finds a scarce, defensible resource would have faced begging behavior from other group members. Costs and benefits of sharing in this scenario can be sorted into status, fight risk, and nutritional value. Nutritional value is determined by the size of the food portion relative to the nutritional needs of the individuals. The nutritional value remains constant throughout this scenario, or declines in value as portions of it are consumed by the obtaining individual. As another individual begs with increasing intensity, the beggar's social cost increases, until the beggar's social costs exceed that of fighting; this is paralleled by the benefits to the food-obtainer. If the benefits to the food-obtainer exceed the beggar's cost of fighting, then the beggar should attack. At or before this time, the food-obtainer should decide to share or not. This decision is made depending on the expected reaction from the beggar. It is expected that they will decide to share at a time that would maximize benefits and minimize loss in nutritional value. If the beggar was initially dominant, it may follow up the first sharing interaction with further begging or fighting, reciprocation of sharing, or let the incident pass. Further begging and fighting impose risk onto the beggar as the food-obtainer might feel and behave dominantly and become dominant if the beggar loses. Inaction allows the food-obtainer to remain slightly dominant. If other group members witnessed this change in dominance position, then the beggar may demonstrate redirected aggression and attack a lower-rank bystander. Later, when the beggar obtains food, it may choose to reciprocate after getting the initial food-obtainer to beg, but this only occurs if the benefits offered exceed the status cost of the initial food-obtainer's begging. Thus, Moore's model predicted that natural selection would favor aggressive sharing and assertive reciprocation to re-establish status.
Feinman attempted to combine kin selection and reciprocal food-sharing models. Based on inequalities in direct access to food resources among hunter-gatherers, donors are more likely to share foods that their recipients do not have direct access to. Feinman also hypothesizes that, as a donor's reproductive value decreases, the probability of the donor giving food increases, either with or without reciprocal sharing. For example, older individuals (mostly women) in some foraging societies will share food and care of younger unrelated individuals. Further, the probability of a donor giving food will decrease as the reproductive value of a recipient also decreases. Some foods are not as predictable in their distribution and are thus termed less reliable. As foods become less reliable, they are expected to be shared more because they are rarer than more commonly encountered food sources. As food becomes scarcer, donors are expected to become more strict in their sharing habits and may favor close relatives, recipients who actively reciprocate and give food, and recipients with higher reproductive value (such as younger offspring or prospective mates) over others. Donors will also tend to share with those who remain familiar and in close proximity to them. Feinman also hypothesizes that donors are more likely to share with those who are culturally similar to them over those who are culturally different.
Recent studies of chimpanzees and other nonhuman primates have found evidence to support reciprocal sharing; however, other factors such as grooming and begging behaviors may also contribute to patterns of nonhuman primate food sharing.
Tolerated theft
Models of tolerated theft seek to explain why in some hunter-gatherer societies, scrounging behaviour is tolerated by other members of a group. This developed in response to variance-reduction models, as reciprocal behavior might have socially related taboos tied to it (such as rules preventing a person to accumulate more wealth). Variance-reduction models seek to explain why redistribution of large food packages (caches or prey items) occurs in daily practices of hunter-gatherers. According to variance reduction theory, it is more optimal to do so than to let excess food rot or spoil.
Blurton Jones argued from an evolutionary biology perspective that, if these taboos did not exist, then it would be more beneficial to be selfish if one found food and did not share and also accepted shares from others (essentially, a cheater under altruism models). Under this idea, selfishness would be expected to develop as a dominant behavior. Unlike cooperative models that link sharing to hunting, tolerated theft links sharing to arrival of any food package (either hunted or scavenged). Cooperative and reciprocal models, according to Jones, could not explain how food-sharing initially developed in societies. He claimed tolerated theft did, then once it became frequent, reciprocal altruism could develop.
Jones outlines several assumptions for his model of tolerated theft:
Food can be taken by force, unlike rescue or other reciprocal behaviors
Food can be broken apart into portions, like parts of a single carcass
Individuals fight only for fitness gain. In this case, a resource's fitness benefit determines the fight outcome. Individuals with much to gain will fight harder and usually win, gaining the fitness benefit.
Resource values are unequal, but individuals have equal resource holding potential (RHP), a measurement of strength and fighting ability.
Some parts of a food package will have higher value than others (i.e., they follow a diminishing value curve).
Therefore, the individual that obtains a large package of food (in cases where this seldom happens) faces asymmetric contests over food, and the obtainer does not benefit from fighting over less valuable portions. However, latecomers who have not obtained food on their own will have much to gain even from the less valuable portions. It is not worth the effort for a food-obtainer to fight over these portions, and so the obtainer should be expected to relinquish these without a fight since it would likely lose to the latecomer. Thus, another expectation is that natural selection will favor the ability to assess costs and benefits. Among frequently interacting individuals, the roles of obtainer and latecomer/thief are interchanged, balancing out benefits of latecomer gains and obtainer losses, so reciprocation develops to avoid injury. Equal, reciprocal sharing is the eventual consequence from this tolerated theft behavior. When food packages are small to begin with, then theft is less tolerated as portions have lower returns along the assumed diminishing value curve. Hoarding is expected in cases where either food can be defended, there is a season of scarcity, a synchronous glut (ex. salmon run or agricultural harvest), or when accumulated food no longer follows a diminishing curve (as with financial capital). Scrounging is expected to increase among groups when active foragers miss out on sharing opportunities due to their own acts of less successful foraging. Foragers do worse than scroungers because scroungers are able to participate in 100% of sharing events.
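A small sketch of the asymmetry at the heart of the tolerated-theft argument, assuming an arbitrary diminishing-value curve for successive portions of a food package and a fixed cost of fighting; all numbers are illustrative and not drawn from the model's sources.

```python
def portion_value(i, total_portions, package_value=100.0):
    """Diminishing marginal value of the i-th portion (earlier portions are worth the most)."""
    weights = [1.0 / (k + 1) for k in range(total_portions)]
    return package_value * (1.0 / (i + 1)) / sum(weights)

FIGHT_COST = 15.0     # assumed fitness cost of escalating to a fight

for i in range(6):
    v = portion_value(i, 6)
    # The sated obtainer weighs the marginal portion against the fight cost; the hungry
    # latecomer values that same portion from the top of its own diminishing curve.
    obtainer_defends = v > FIGHT_COST
    print(f"portion {i}: value {v:5.1f} -> obtainer defends: {obtainer_defends}")
# Early portions are defended, later ones are relinquished without a fight -- "tolerated theft".
```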
Based on studies by de Waal and others, chimpanzees exhibit tolerated theft behaviors. These are typically between mother and offspring and involve commonly available plant foods.
Costly signaling models
Costly signaling has been interpreted from ethnographic studies in regards to food sharing to explain why certain individuals (primarily men) tend to target difficult-to-acquire foods that sometimes produce less optimal yields. One explanation for this behavior is as a means for men to signal their reproductive quality to prospective mates. This strategy increases reproductive success of hunters who successfully capture these costly resources; however, they risk increasing sexual conflicts with their mates. For high-quality individuals who can both invest less in current partnerships and signal at low cost, this behavior is most profitable.
In their study of turtle hunting by the Meriam people (one of the indigenous Torres Strait Islander peoples), Smith and Bliege Bird frame this collective food-sharing behavior within a costly signaling framework. Among the Meriam, a family will sponsor a turtle-hunting group to collect turtles for feasting purposes associated with funerary activities. Since the hunting group does not receive any of the turtles in return, this behavior does not fit within a reciprocal or cooperative food-sharing framework. Instead, the benefits to hunters is through the signals from hunting. The leaders of these hunting groups signal their skills, specialized knowledge, and charisma which allow them to gain widespread social status when they are successful. This benefit allows hunters to gain priority in other social considerations such as marriages and alliances. The family who sponsored these hunting groups receives all of the captured turtles with no expectation of reciprocal payment in the future. The costs to the hunters are through provisioning the hunt itself, including the purchase of fuel for the boats.
Group cooperation models
Among human societies, groups often target foods that pose some level of difficulty in their acquisition. These activities generally require the cooperation of several individuals. Despite the fact that a number of individuals are involved in this acquisition, ownership of the acquired food often goes to a single individual, such as the finder or the one who successfully kills the animal. Owners may share with nonowners who helped in acquiring a food resource; however, this act of sharing may be a means of ensuring cooperation in future activities. For example, the Inupiat of the Northwest Coast of North America shared specific portions of whales depending on membership and role in the successful whaling party. From this perspective, food-sharing is a form of trade-based reciprocal altruism where privately owned food is used to reward labor.
Another consideration of group cooperation is as a way to increase food yields in comparison to acquiring food individually; this can be considered a form of mutualism. Bruce Winterhalder incorporates food-sharing among groups into the diet breadth model derived from optimal foraging theory as a means to reduce risk. Risk is measured in this model as a measure of the probability that a forager's net acquisition rate (NAR) falls below a minimum value, such as a threshold for starvation. Through the use of a stochastic model, a group of independent foragers are able to meet and pool their resources at the end of a foraging period and divide the total resources equally among the group. Through a series of simulations exploring values of group size and other parameters of the model, the results suggest that risk is effectively reduced by pooling and dividing equal shares of resources. When compared with changes in diet breadth, sharing seems to be better at reducing risk than changing the diet breadth. Examination of various group sizes suggests that small groups are capable of effectively reducing risk.
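A minimal simulation in the spirit of Winterhalder's stochastic pooling model, assuming foragers' daily returns are independent draws from the same skewed distribution and that a pooling group divides the total equally; the distribution, group sizes, and starvation threshold are illustrative assumptions rather than parameters from the original study.

```python
import numpy as np

rng = np.random.default_rng(3)
days, foragers_max = 100_000, 8
threshold = 800            # minimum acceptable net acquisition rate (kcal/day, assumed)

# Skewed daily returns: most days are modest, with occasional large packages.
returns = rng.lognormal(mean=7.0, sigma=0.8, size=(days, foragers_max))

for group_size in (1, 2, 4, 8):
    pooled = returns[:, :group_size].mean(axis=1)   # equal division of the pooled total
    risk = np.mean(pooled < threshold)              # probability of falling below the threshold
    print(f"group of {group_size}: P(NAR < threshold) = {risk:.3f}")
# Larger sharing groups sharply reduce the probability of a shortfall.
```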
See also
Altruism
Human behavioral ecology
Human evolution
Parental investment
References
Human evolution
Evolutionary ecology
Evolutionary psychology
Behavioral ecology
Eating behaviors | Evolutionary models of food sharing | Biology | 3,067 |
8,267 | https://en.wikipedia.org/wiki/Dimensional%20analysis | In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.
Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless.
Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.
The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822.
Formulation
The Buckingham π theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below.
The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary".
There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols:
time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J).
The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of a quantity Q is given by
dim Q = T^a L^b M^c I^d Θ^e N^f J^g,
where a, b, c, d, e, f, g are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI.
A quantity that has only the dimension of length L (with all other exponents zero) is known as a geometric quantity. A quantity that involves only the dimensions L and T is known as a kinematic quantity. A quantity that involves only the dimensions L, T, and M is known as a dynamic quantity.
A quantity that has all exponents null is said to have dimension one.
The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity.
There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis.
Simple cases
As examples, the dimension of the physical quantity speed is T−1L.
The dimension of the physical quantity acceleration is T−2L.
The dimension of the physical quantity force is T−2LM.
The dimension of the physical quantity pressure is T−2L−1M.
The dimension of the physical quantity energy is T−2L2M.
The dimension of the physical quantity power is T−3L2M.
The dimension of the physical quantity electric charge is TI.
The dimension of the physical quantity voltage is T−3L2MI−1.
The dimension of the physical quantity capacitance is T4L−2M−1I2.
Rayleigh's method
In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh.
The method involves the following steps:
Gather all the independent variables that are likely to influence the dependent variable.
If X is a variable that depends upon independent variables X1, X2, X3, ..., Xn, then the functional equation can be written as X = F(X1, X2, X3, ..., Xn).
Write the above equation in the form X = C X1^a X2^b X3^c ... Xn^m, where C is a dimensionless constant and a, b, c, ..., m are arbitrary exponents.
Express each of the quantities in the equation in some base units in which the solution is required.
By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents a, b, c, ..., m.
Solve these equations to obtain the values of the exponents a, b, c, ..., m.
Substitute the values of the exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents.
As a drawback, Rayleigh's method does not provide any information regarding number of dimensionless groups to be obtained as a result of dimensional analysis.
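The exponent-solving step can be mechanized. The sketch below applies the procedure to a standard textbook example (the period t of a simple pendulum, assumed to depend on its length l, mass m, and gravitational acceleration g), writing the dimensional-homogeneity conditions as a linear system in the unknown exponents; the pendulum example is an illustration, not drawn from this article.

```python
import numpy as np

# Assume t = C * l^a * m^b * g^c.  Columns of A give the dimensions of l, m, g.
A = np.array([[0, 1, 0],      # M exponents of l, m, g
              [1, 0, 1],      # L exponents of l, m, g
              [0, 0, -2]])    # T exponents of l, m, g
# The dimension of t is M^0 L^0 T^1:
rhs = np.array([0, 0, 1])

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)   # a = 0.5, b = 0.0, c = -0.5  ->  t = C * sqrt(l / g); the mass drops out
```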
Concrete numbers and base units
Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof.
A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as 1 N = 1 kg⋅m⋅s−2.
Percentages, derivatives and integrals
Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100.
Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus:
position (x) has the dimension L (length);
derivative of position with respect to time (dx/dt, velocity) has dimension T−1L—length from position, time due to the gradient;
the second derivative (d2x/dt2, acceleration) has dimension T−2L.
Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator.
force has the dimension T−2LM (mass multiplied by acceleration);
the integral of force with respect to the distance (s) the object has travelled (∫F ds, work) has dimension T−2L2M.
In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year).
In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
Dimensional homogeneity (commensurability)
The most basic rule of dimensional analysis is that of dimensional homogeneity.
However, the dimensions form an abelian group under multiplication, so:
For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.
The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if mman, mrat and Lman denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression mman + mrat is meaningful, but the heterogeneous expression mman + Lman is meaningless. However, mman/Lman2 is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T−2L2M, they are fundamentally different physical quantities.
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yd = 0.9144 m to convert 35 yards to 32.004 m.
A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.
Conversion factor
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × (100 kPa / 1 bar) = 500 kPa, because 5 × 100 / 1 = 500 and bar/bar cancels out, so 5 bar = 500 kPa.
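A tiny worked illustration of treating a conversion factor as multiplication by 1 (the pressure value is arbitrary):

```python
KPA_PER_BAR = 100.0          # 1 bar = 100 kPa, so (100 kPa / 1 bar) = 1

def bar_to_kpa(pressure_bar):
    # Multiplying by the conversion factor changes the unit, not the physical quantity.
    return pressure_bar * KPA_PER_BAR

print(bar_to_kpa(5.0))       # 5 bar = 500 kPa
```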
Applications
Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well.
Mathematics
A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the (n−1)-sphere: being an n-dimensional figure, the volume scales as the nth power of length, x^n, while the surface area, being (n−1)-dimensional, scales as x^(n−1). Thus the volume of the n-ball in terms of the radius r is C r^n, for some constant C. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.
Finance, economics, and accounting
In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.
For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid".
In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year).
Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year.
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (adimensional quantity) while time is expressed as an adimensional quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are adimensional. (Note that effective interest rates can only be defined as adimensional quantities.)
In financial analysis, bond duration can be defined as , where is the value of a bond (or portfolio), is the continuously compounded interest rate and is a derivative. From the previous point, the dimension of is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because is in the "denominator" of the derivative.
Fluid mechanics
In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include:
Reynolds number (Re), generally important in all types of fluid problems: Re = ρVL/μ.
Froude number (Fr), modeling flow with a free surface: Fr = V/√(gL).
Euler number (Eu), used in problems in which pressure is of interest: Eu = Δp/(ρV2).
Mach number (Ma), important in high speed flows where the velocity approaches or exceeds the local speed of sound: Ma = V/c, where c is the local speed of sound.
History
The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science.
This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually later formalized in the Buckingham π theorem.
Siméon Poisson also treated the same problem of the parallelogram law as Daviet, in his treatise of 1811 and 1833 (vol. I, p. 39). In the second edition of 1833, Poisson explicitly introduces the term dimension instead of Daviet's homogeneity.
In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like should be independent of the units employed to measure the physical variables.
James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant is taken as unity, thereby defining M = T−2L3. By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T−1L3/2M1/2, which, after substituting his equation M = T−2L3 for mass, results in charge having the same dimensions as mass, viz. T−2L3.
Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound.
The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents.
Examples
A simple example: period of a harmonic oscillator
What is the period of oscillation t of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for t of some dimensionless equation in the variables t, m, k, and g.
The four quantities have the following dimensions: t [T]; m [M]; k [M/T2]; and g [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, G1 = t2k/m, and putting G1 = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well.
The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with k, m, and t, because g is the only quantity that involves the dimension L. This implies that in this problem the g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: t = κ√(m/k), for some dimensionless constant κ (equal to √C from the original dimensionless equation).
When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here.
When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
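The same conclusion can be checked mechanically: with the variables t, m, k, g and the dimensions listed above, the dimensionless products correspond to the null space of the dimensional matrix, and g receives a zero exponent. The SymPy sketch below is an illustrative aid (the library choice is mine, not part of the original argument).

```python
import sympy as sp

# Columns correspond to t, m, k, g.  Rows are the exponents of T, M, L respectively.
D = sp.Matrix([[1, 0, -2, -2],    # T
               [0, 1,  1,  0],    # M
               [0, 0,  0,  1]])   # L

print(D.nullspace())
# [Matrix([[2], [-1], [1], [0]])]  ->  the only dimensionless group is t^2 * k / m,
# and g gets exponent 0, confirming that the period is independent of g.
```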
A more complex example: energy of a vibrating wire
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (LM/T2), and we want to know the energy E (L2M/T2) in the wire. Let π1 and π2 be two dimensionless products of powers of the variables chosen, given by
π1 = E/(As) and π2 = ℓ/A.
The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation
F(E/(As), ℓ/A) = 0,
where F is some unknown function, or, equivalently as
E = As f(ℓ/A),
where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form of the unknown function f. But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident.
The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, where the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis.
A third example: demand versus capacity for a rotating disc
Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L3), rotates at an angular velocity ω (T−1) and this leads to a stress σ (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius then the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups:
demand/capacity = ρR2ω2/σ
thickness/radius or aspect ratio = t/R
Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs.
Properties
Mathematical properties
The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; L0 = 1, and the inverse of L is 1/L or L−1. L raised to any integer power p is a member of the group, having an inverse of L−p or 1/Lp. The operation of the group is multiplication, having the usual rules for handling exponents (Ln × Lm = Ln+m). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second).
An abelian group is equivalent to a module over the integers, with the dimensional symbol TiLjMk corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module.
A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa).
The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0).
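This tuple representation is easy to make concrete. The sketch below is an added illustration (the class and method names are ours, not a standard library API); it stores the exponents of (T, L, M) and implements the module operations described above, with addition allowed only between like-dimensioned quantities.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dim: tuple  # exponents of (T, L, M); the tuple (0, 0, 0) is the dimensionless identity

    def __mul__(self, other):
        # Multiplying quantities adds the exponent tuples (the module operation).
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dim, other.dim)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dim, other.dim)))

    def __pow__(self, n: int):
        # Raising to an integer power is scalar multiplication of the exponent tuple.
        return Quantity(self.value ** n, tuple(n * a for a in self.dim))

    def __add__(self, other):
        # Addition is only defined for like-dimensioned quantities.
        if self.dim != other.dim:
            raise TypeError(f"cannot add dimensions {self.dim} and {other.dim}")
        return Quantity(self.value + other.value, self.dim)

metre  = Quantity(1.0, (0, 1, 0))
second = Quantity(1.0, (1, 0, 0))

speed = metre / second          # dim (-1, 1, 0)
area  = metre * metre           # dim (0, 2, 0)
print(speed.dim, area.dim)
# speed + area would raise TypeError, since the dimension tuples differ
```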
In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like L^(1/2). However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.
One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces VM and VL, and can define VML as the tensor product VM ⊗ VL. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar.
The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, π1, ..., πm. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity can be expressed in the general form of a product of powers of the dimensionless quantities π1, ..., πm.
Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form f(π1, π2, ..., πm) = 0.
Knowing this restriction can be a powerful tool for obtaining new insight into the system.
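As an added illustration of this null-space view (a sketch assuming SymPy, with a variable ordering of our own choosing), the dimensional matrix of the vibrating-wire example above has nullity two, and each null-space vector encodes one dimensionless group:

```python
from sympy import Matrix

# Columns: E (energy), s (tension), l (length), A (amplitude), rho (linear density)
# Rows: exponents of the base dimensions T, L, M
dim_matrix = Matrix([
    [-2, -2, 0, 0,  0],   # time
    [ 2,  1, 1, 1, -1],   # length
    [ 1,  1, 0, 0,  1],   # mass
])

for v in dim_matrix.nullspace():
    print(v.T)
# Two independent null vectors are printed; each gives the exponents of (E, s, l, A, rho)
# in one dimensionless product (e.g. s*l/E and s*A/E, equivalent to E/(s*l) and l/A).
# The exponent of rho is zero in both, so the linear density drops out, as noted earlier.
```

That the exponent of the linear density is forced to zero in every null vector is the matrix restatement of the earlier observation that ρ is not involved in the dimensionless description.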
Mechanics
The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent.
For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M.
On the other hand, length, velocity and time (L, V, T) do not form a set of base dimensions for mechanics, for two reasons:
There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space).
Velocity, being expressible in terms of length and time (V = L/T), is redundant (the set is not linearly independent).
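Both points can be checked numerically by writing each candidate base dimension as a vector of exponents over (T, L, M) and testing whether the three vectors have full rank. This is an added sketch assuming NumPy, not part of the original text.

```python
import numpy as np

# Exponent vectors over the base dimensions (T, L, M)
F = np.array([-2, 1, 1])   # force = M L / T^2
L = np.array([ 0, 1, 0])   # length
M = np.array([ 0, 0, 1])   # mass
T = np.array([ 1, 0, 0])   # time
V = np.array([-1, 1, 0])   # velocity = L / T

print(np.linalg.matrix_rank(np.column_stack([F, L, M])))  # 3 -> {F, L, M} spans and is independent
print(np.linalg.matrix_rank(np.column_stack([L, V, T])))  # 2 -> {L, V, T} is not a valid basis
```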
Other fields of physics and chemistry
Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02 × 1023 mol−1) is also defined as a base dimension, N.
In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features.
Polynomials and transcendental functions
Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form.
Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the squares of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log(a) − log(b), where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.
Similarly, while one can evaluate monomials (such as x2) of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for a length x, the expression x2 makes sense (as an area), while the expression x2 + x does not make sense.
However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example,
z = v t − (1/2) g t2.
This is the height to which an object rises in time t if the acceleration of gravity g is 9.8 metres per second per second and the initial upward speed v is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be
v t = 500 m/s × 0.01 min = 5 m·min/s = 5 × 60 m = 300 m,
since 1 minute is 60 seconds.
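A short numeric check of this mixed-unit evaluation (an added sketch; the only conversion used is 1 min = 60 s):

```python
# Height z = v*t - (1/2)*g*t^2, keeping track of units by hand:
v = 500.0      # initial upward speed, in metres per second
g = 9.8        # gravitational acceleration, in metres per second squared
t_min = 0.01   # elapsed time, in minutes

t = t_min * 60.0             # 0.01 min = 0.6 s
z = v * t - 0.5 * g * t**2   # both terms are in metres
print(z)                     # 300.0 - 1.764 = 298.236 (metres)
```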
Combining units and numerical values
The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, {Z}: Z = {Z} × [Z].
When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed:
1 ft = 0.3048 m is identical to 1 = 0.3048 m / 1 ft.
The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted.
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units.
Quantity equations
A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.
In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit.
For example, a quantity equation for the displacement d as speed v multiplied by time difference t would be:
d = v t
for v = 5 m/s, where t and d may be expressed in any units, converted if necessary.
In contrast, a corresponding numerical-value equation would be:
D = 5 T
where T is the numeric value of t when expressed in seconds and D is the numeric value of d when expressed in metres.
Generally, the use of numerical-value equations is discouraged.
Dimensionless concepts
Constants
The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's Law problem and the κ in the spring problem discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
Formalisms
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ξ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/ξd, where d is the dimension of the lattice.
It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants c, ħ, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0, and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
Dimensional equivalences
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.
SI units
Programming languages
Dimensional correctness as part of type checking has been studied since 1977.
Implementations for Ada and C++ were described in 1985 and 1988.
Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, Rust, and Python, and a code checker for Fortran.
Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.
McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.
Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function to find the dimensions of a unit such as 1 J named UnitDimensions, and a function that will find dimensionally equivalent combinations of a subset of physical quantities named DimensionalCombinations. Mathematica can also factor out certain dimensions with UnitDimensions by specifying the UnityDimensions option; for example, angles can be factored out in this way. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.
Geometry: position vs. displacement
Affine quantities
Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change).
Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable:
adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward),
adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection),
subtracting two positions should yield a displacement,
but one may not add two positions.
This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement).
Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity.
Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity.
Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement.
Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis.
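The rules listed above can be encoded directly in code. The following is a minimal added sketch (the class names are ours, not a standard library): displacements add freely, a displacement can be added to a position, and positions can be subtracted but not added.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Displacement:              # vector quantity
    metres: float
    def __add__(self, other):
        if isinstance(other, Displacement):
            return Displacement(self.metres + other.metres)
        return NotImplemented

@dataclass(frozen=True)
class Position:                  # affine quantity, implicitly relative to a chosen origin
    metres: float
    def __add__(self, other):
        if isinstance(other, Displacement):
            return Position(self.metres + other.metres)
        if isinstance(other, Position):
            raise TypeError("cannot add two positions")
        return NotImplemented
    def __sub__(self, other):
        if isinstance(other, Position):
            return Displacement(self.metres - other.metres)
        return NotImplemented

walk = Displacement(10.0) + Displacement(20.0)    # displacement + displacement -> displacement
here = Position(5.0) + Displacement(1.0)          # position + displacement -> position
gap  = Position(8.0) - Position(5.0)              # position - position -> displacement
# Position(5.0) + Position(8.0) would raise TypeError
```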
This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero,
−273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F,
where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated.
For temperature differences,
1 K = 1 °C ≠ 1 °F = 1 °R.
(Here °R refers to the Rankine scale, not the Réaumur scale).
Unit conversion for temperature differences is simply a matter of multiplying by a fixed ratio, e.g., 1.8 °F / 1 K (although the ratio of an absolute temperature expressed in °F to the same temperature expressed in K is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C.
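The distinction shows up in a two-line computation (an added sketch; only the standard Celsius–Fahrenheit relations are used):

```python
def c_to_f_absolute(temp_c):
    # Converting an absolute temperature: the differing origins of the scales matter.
    return temp_c * 9.0 / 5.0 + 32.0

def c_to_f_difference(delta_c):
    # Converting a temperature difference: only the scale factor matters.
    return delta_c * 9.0 / 5.0

print(c_to_f_absolute(1.0))     # 33.8 -> "1 degree C" read as an absolute temperature
print(c_to_f_difference(1.0))   # 1.8  -> "1 degree C" read as a temperature difference
```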
Orientation and frame of reference
Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference.
This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis.
Huntley's extensions
Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix.
He introduced two approaches:
The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent.
Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia.
Directed dimensions
As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component vy and a horizontal velocity component vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L, vx and vy, both dimensioned as T−1L, and g, the downward acceleration of gravity, with dimension T−2L.
With these four quantities, we may conclude that the equation for the range R may be written:
R ∝ vx^a vy^b g^c.
Or dimensionally
L = (T−1L)^(a+b) (T−2L)^c,
from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation.
However, if we use directed length dimensions, then vx will be dimensioned as T−1Lx, vy as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes:
Lx = (T−1Lx)^a (T−1Ly)^b (T−2Ly)^c
and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent.
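The directed-length bookkeeping amounts to solving a small linear system for the exponents. A brief added sketch, assuming SymPy and using the exponent names a, b, c from the text:

```python
from sympy import symbols, linsolve

a, b, c = symbols('a b c')

# R = vx^a * vy^b * g^c with directed dimensions:
#   vx ~ T^-1 Lx,  vy ~ T^-1 Ly,  g ~ T^-2 Ly,  R ~ Lx
equations = [
    a - 1,           # exponent of Lx on both sides: a = 1
    b + c,           # exponent of Ly: b + c = 0
    -a - b - 2*c,    # exponent of T: -a - b - 2c = 0
]
print(linsolve(equations, a, b, c))   # {(1, 1, -1)} -> R is proportional to vx * vy / g
```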
Huntley's concept of directed length dimensions however has some serious limitations:
It does not deal well with vector equations involving the cross product,
nor does it handle well the use of angles as physical variables.
It also is often quite difficult to assign the Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or to the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?
Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems.
Quantity of matter
In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity proportional to inertial mass, but not implicating inertial properties. No further restrictions are added to its definition.
For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables: the mass flow rate ṁ with dimension MT−1, the pressure gradient along the pipe px with dimension ML−2T−2, the density ρ with dimension ML−3, the dynamic fluid viscosity η with dimension ML−1T−1, and the radius of the pipe r with dimension L.
There are three fundamental dimensions, so the above five variables will yield two independent dimensionless groups, for example π1 = ṁ/(ηr) and π2 = px ρ r5/ṁ2.
If we distinguish between inertial mass with dimension Mi and quantity of matter with dimension Mm, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant C, so that the dimensional equation may be written:
ṁ = C ρ px r4 / η
where now only C is an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law.
Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable.
Siano's extension: orientational analysis
Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = Rg/v2, but offers no insight into the relationship between R and θ.
Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10. Thus, Huntley's Lx becomes L·1x, with L specifying the dimension of length and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i−1 = 1i, the following multiplication rules for the orientation symbols result: 1i·1i = 10 for each i, 10 acts as the identity, and 1x·1y = 1z, 1y·1z = 1x, 1z·1x = 1y.
The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the xy-plane. Form a right triangle in the xy-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since (using ~ to indicate orientational equivalence) tan(θ) ~ 1y/1x, we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1z while cos(θ) has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars. An expression such as sin(a + b) is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written:
sin(a + b) = sin(a) cos(b) + cos(a) sin(b),
which for a = θ and b = π/2 yields sin(θ + π/2) = cos(θ). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 10.
The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd.
As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have orientation 1z, and the range of the projectile R will be of the form:
R = g^a v^b θ^c, which means L·1x ~ (L·1y/T2)^a (L/T)^b (1z)^c.
Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1x = (1y)^a (1z)^c = 1y (1z)^c, i.e. 1z^c = 1x·1y = 1z. In other words, c must be an odd integer. In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ.
It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication rules, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical.
Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
See also
Buckingham π theorem
Dimensionless numbers in fluid mechanics
Fermi estimate – used to teach dimensional analysis
Numerical-value equation
Rayleigh's method of dimensional analysis
Similitude – an application of dimensional analysis
System of measurement
Related areas of mathematics
Covariance and contravariance of vectors
Exterior algebra
Geometric algebra
Quantity calculus
Notes
References
As postscript
Wilson, Edwin B. (1920) "Theory of Dimensions", chapter XI of Aeronautics, via Internet Archive
Further reading
External links
List of dimensions for variety of physical quantities
Unicalc Live web calculator doing units conversion by dimensional analysis
A C++ implementation of compile-time dimensional analysis in the Boost open-source libraries
Buckingham's pi-theorem
Quantity System calculator for units conversion based on dimensional approach
Units, quantities, and fundamental constants project dimensional analysis maps
Measurement
Conversion of units of measurement
Chemical engineering
Mechanical engineering
Environmental engineering | Dimensional analysis | Physics,Chemistry,Mathematics,Engineering | 10,504 |
871,659 | https://en.wikipedia.org/wiki/Auricupride | Auricupride is a natural alloy that combines copper and gold. Its chemical formula is Cu3Au. The alloy crystallizes in the cubic crystal system in the L12 structure type and occurs as malleable grains or platey masses. It is an opaque yellow with a reddish tint. It has a hardness of 3.5 and a specific gravity of 11.5.
A variant called tetra-auricupride (CuAu) exists. Silver may be present, resulting in the variety argentocuproauride (Cu3(Au,Ag)).
It was first described in 1950 for an occurrence in the Ural Mountains in Russia. It occurs as low temperature unmixing product in serpentinites and as reduction "halos" in redbed deposits. It is most often found in Chile, Argentina, Tasmania, Russia, Cyprus, Switzerland and South Africa.
References
External links
National Pollutant Inventory - Copper and compounds fact sheet
Copper minerals
Gold minerals
Native element minerals
Cubic minerals
Minerals in space group 221
Minerals described in 1950 | Auricupride | Chemistry | 208 |
66,308,569 | https://en.wikipedia.org/wiki/Doubly%20triangular%20number | In mathematics, the doubly triangular numbers are the numbers that appear within the sequence of triangular numbers, in positions that are also triangular numbers. That is, if denotes the th triangular number, then the doubly triangular numbers are the numbers of the form .
Sequence and formula
The doubly triangular numbers form the sequence
0, 1, 6, 21, 55, 120, 231, 406, 666, 1035, 1540, 2211, ...
The nth doubly triangular number is given by the quartic formula
T(T(n)) = n(n + 1)(n2 + n + 2)/8.
The sums of row sums of Floyd's triangle give the doubly triangular numbers.
Another way of expressing this fact is that the sum of all of the numbers in the first rows of Floyd's triangle is the th doubly triangular number.
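A few lines of code confirm the quartic formula and the Floyd's-triangle description (an added sketch; the function names are ours):

```python
def triangular(n):
    return n * (n + 1) // 2

def doubly_triangular(n):
    return triangular(triangular(n))

closed_form = lambda n: n * (n + 1) * (n * n + n + 2) // 8   # the quartic formula
floyd_sum = lambda n: sum(range(1, triangular(n) + 1))       # total of the first n rows of Floyd's triangle

for n in range(10):
    assert doubly_triangular(n) == closed_form(n) == floyd_sum(n)
print([doubly_triangular(n) for n in range(8)])   # [0, 1, 6, 21, 55, 120, 231, 406]
```

The printed values match the opening terms of the sequence listed above.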
In combinatorial enumeration
Doubly triangular numbers arise naturally as numbers of unordered pairs of unordered pairs of objects, including pairs where both objects are the same:
An example from mathematical chemistry is given by the numbers of overlap integrals between Slater-type orbitals.
Another example of this phenomenon from combinatorics is that the doubly-triangular numbers count the number of two-edge undirected multigraphs on n labeled vertices. In this setting, an edge is an unordered pair of vertices, and a two-edge graph is an unordered pair of edges. The number of possible edges is a triangular number, and the number of pairs of edges (allowing both edges to connect the same two vertices) is a doubly triangular number; a brute-force enumeration check of this count is sketched after this list.
In the same way, the doubly triangular numbers also count the number of distinct ways of coloring the four corners or the four edges of a square with n colors, allowing some colors to be unused and counting two colorings as being the same when they differ from each other only by rotation or reflection of the square. The number of choices of colors for any two opposite features of the square is a triangular number, and a coloring of the whole square combines two of these colorings of pairs of opposite features.
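The multigraph count mentioned above can be verified by direct enumeration. An added sketch using only the standard library; on n labeled vertices there are T(n − 1) simple edges, so the count equals the doubly triangular number T(T(n − 1)).

```python
from itertools import combinations, combinations_with_replacement

def triangular(n):
    return n * (n + 1) // 2

def two_edge_multigraphs(n):
    # An edge is an unordered pair of distinct labeled vertices; a two-edge multigraph
    # is an unordered pair of edges, where the two edges are allowed to coincide.
    edges = list(combinations(range(n), 2))
    return sum(1 for _ in combinations_with_replacement(edges, 2))

for n in range(2, 8):
    assert two_edge_multigraphs(n) == triangular(triangular(n - 1))
print([two_edge_multigraphs(n) for n in range(2, 8)])   # [1, 6, 21, 55, 120, 231]
```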
When pairs with both objects the same are excluded, a different sequence arises, the tritriangular numbers, which are given by the formula C(C(n, 2), 2) = n(n + 1)(n − 1)(n − 2)/8.
In numerology
Some numerologists and biblical studies scholars consider it significant that 666, the number of the beast, is a doubly triangular number.
References
Factorial and binomial topics
Integer sequences | Doubly triangular number | Mathematics | 485 |
47,787,704 | https://en.wikipedia.org/wiki/Sam%20Karunaratne | Samarajeewa "Sam" Karunaratne, FIET, FIEE, FIESL (born in 1937) is an emeritus professor of engineering and a leading Sri Lankan academic who is the founding chancellor and president of the Sri Lanka Institute of Information Technology and the former vice-chancellor of the University of Moratuwa. He has held a number of other appointments in the field of higher education in Sri Lanka, including senior professor of electrical engineering and dean of the Faculty of Engineering and Architecture, president of the Institution of Engineers, Sri Lanka. Karunaratne is a pioneer in the development of the use of computers in the field of engineering and played an important role in the development of information technology education and industry in Sri Lanka.
Early life and education
Karunaratne was born as Samarajeewa Karunaratne to a family of land proprietors, Mr. and Mrs. T. Karunaratne in Makoora, Kegalle. He had his early education at Bandaranaike Maha Vidyalaya, Hettimulla and received his secondary education at St. Mary's College, Kegalle. He then went on to do his higher studies at the University of Ceylon, where he gained a first class honours degree in electrical engineering. He then achieved his MSc degree in engineering from the University of Glasgow and a diploma in electrical engineering from the University of London.
He is a chartered engineer of both Sri Lanka and the United Kingdom, a fellow of the British Institution of Engineering and Technology (IET), fellow of the Institution of Engineers, Sri Lanka (FIESL), a fellow of the Institution of Electrical Engineers, London (FIEE), and a fellow of the National Academy of Sciences Sri Lanka.
Career
Karunaratne took part in many major construction projects in Sri Lanka, pioneering the use of computer aided designing.
An electrical engineer by profession, he took up to university teaching and was a lecturer in electrical engineering at the University of Ceylon, Peradeniya before moving to the University of Moratuwa as a professor of electrical engineering in July 1969, where he held the chair from then until his retirement 9 October 2002. He has been the teacher of over 500 electrical engineers who hold high positions in Sri Lanka and internationally.
He has been in universities in Sri Lanka and abroad since he joined the university as an undergraduate in 1956, except for a two-year period when he was with the State Engineering Corporation. During his time at the State Engineering Corporation, from 1967 to 1968, he was in charge of the country's first digital computer installation and he computerised the design of civil engineering structures, including the Kalutara Cetiya (Kalutara Dagoba), a thick shell design that is the world's only hollow Buddhist shrine. He was responsible for the computerisation of the GCE Ordinary-Level and Advanced-Level examination processing in 1968, with over 350,000 candidates.
As the head of the Department of Electrical Engineering, he spearheaded the establishment of the Department of Computer Engineering. He then became the dean of the Faculty of Engineering, and later the vice-chancellor of the University of Moratuwa. He is considered to be the chief contributor towards the development of the Department of Electrical Engineering to its present status, and has been the teacher of over 500 electrical engineers who hold high positions in Sri Lanka and abroad. These are some of the many notable reasons why he has been awarded the degree of Doctor of Science from the University of Moratuwa.
He has served many institutions as a member of the Governing Board, including the National Engineering Research and Development Centre (NERD); the Natural Resources, Energy and Science Authority of Sri Lanka (NARESA); the Post-Graduate Institute of Management; the Institution of Engineers, Sri Lanka (IESL); the University of Moratuwa, Sri Lanka; the Arthur C. Clarke Centre for Modern Technologies (ACCMT); and the Sri Lanka Broadcasting Corporation (SLBC). He is the recipient of several scholarships and fellowships [Commonwealth Scholarship, Fulbright Scholarship, Commonwealth Fellowship, I.A.E.A Fellowship, Commonwealth Travelling Fellowship, UNESCO Fellowship].
Karunaratne was the President of the Institution of Engineers Sri Lanka. He was also the Director of the Arthur C. Clarke Centre for Modern Technologies, and was a member of the Board of Governors of the United Nations Centre for Space Science and Technology Education Asia-Pacific established in Dehradun, India. His research is mainly in electrical power systems and digital control system, and he has published several papers on these subject. He is a chartered engineer and a Fellow of the Institution of Electrical Engineers, London (FIEE), a Fellow of the Institution of Engineers Sri Lanka (FIESL), and a Fellow of the National Academy of Sciences.
He is the founding President of the Sri Lanka Institute of Information Technology, a leading research and higher education institute in the field of Information Technology, and currently also holds the position of Chancellor and executive head of this institute. He is also a member of the board of directors at the Institute of Technological Studies, Colombo.
Personal life
Karunaratne married Kusuma Ediriweera Jayasooriya in July 1967, who became a renowned professor and Dean of the Faculty of Graduate Studies, University of Colombo. She is a pioneer in the field of Sinhalese Studies and the first female Dean in Sri Lanka.
They have two sons, Savant Kaushalya and Passant Vatsalya, both electrical engineers specialising in Image processing, Graphics, and Video Processing. The elder, Savant Karunaratne has a PhD in Electrical and Computer Engineering from the University of Sydney, Australia. The younger, Passant Karunaratne has a PhD in electrical engineering and computer science from Northwestern University, in Evanston, Illinois, and is a Principal Research Engineer in the United States.
Awards
Karunaratne is the recipient of several scholarships and fellowships, including the Commonwealth Scholarship, the Fulbright Scholarship, the Commonwealth Fellowship, the International Atomic Energy Agency Fellowship, the Commonwealth Travelling Fellowship, and the UNESCO Fellowship.
In 2006 Karunaratne was awarded an honorary Doctorate from the University of Moratuwa.
See also
Sri Lanka Institute of Information Technology
References
Professor Karunaratne's webpage at the Department of Electrical Engineering, University of Moratuwa
External links
Official website of University of Moratuwa
Sri Lankan academic administrators
Sri Lankan electrical engineers
Sri Lankan computer scientists
Alumni of the University of Ceylon (Peradeniya)
Alumni of the University of London
Alumni of the University of Glasgow
University of California, Berkeley alumni
People associated with the Sri Lanka Institute of Information Technology
Fellows of the Institution of Engineering and Technology
Living people
1937 births
Sinhalese people
Sri Lankan academics | Sam Karunaratne | Engineering | 1,374 |
49,151,754 | https://en.wikipedia.org/wiki/All%20Sky%20Automated%20Survey%20for%20SuperNovae | The All Sky Automated Survey for SuperNovae (ASAS-SN) is an automated program to search for new supernovae and other astronomical transients, headed by astronomers from the Ohio State University, including Christopher Kochanek and Krzysztof Stanek. It has 20 robotic telescopes in both the northern and southern hemispheres. It can survey the entire sky approximately once every day.
Initially, there were four ASAS-SN telescopes at Haleakala and another four at Cerro Tololo, a Las Cumbres Observatory site. Twelve more telescopes were deployed in 2017 in Chile, South Africa and Texas, with funds from the Gordon and Betty Moore Foundation, the Ohio State University, the Mount Cuba Astronomical Foundation, China, Chile, Denmark, and Germany. All the telescopes (Nikon telephoto 400mm/F2.8 lenses) have a diameter of 14 cm and ProLine PL230 CCD cameras. The pixel resolution in the cameras is 7.8 arc seconds, so follow-up observations on other telescopes are usually required to get a more accurate location.
The main goal of the project is to look for bright supernovae, and its discoveries have included the most powerful supernova event ever discovered, ASASSN-15lh. However, other transient objects are frequently discovered, including nearby tidal disruption events (TDEs) (e.g., ASASSN-19bt), Galactic novae (e.g., ASASSN-16kt, ASASSN-16ma, and ASASSN-18fv), cataclysmic variables, and stellar flares, including several of the largest flares ever seen. In July 2017 ASAS-SN discovered its first comet, ASASSN1, and in July 2019 it provided crucial data for the near-Earth asteroid 2019 OK. It can detect new objects as dim as apparent magnitude 18.
Objects discovered receive designations starting with ASASSN followed by a dash, a two digit year and letters, for example ASASSN-19bt.
References
External links
— Yes, that Nemesis
Astronomical surveys
Observational astronomy | All Sky Automated Survey for SuperNovae | Astronomy | 440 |
27,680 | https://en.wikipedia.org/wiki/Supernova | A supernova (: supernovae or supernovas) is a powerful and luminous explosion of a star. A supernova occurs during the last evolutionary stages of a massive star, or when a white dwarf is triggered into runaway nuclear fusion. The original object, called the progenitor, either collapses to a neutron star or black hole, or is completely destroyed to form a diffuse nebula. The peak optical luminosity of a supernova can be comparable to that of an entire galaxy before fading over several weeks or months.
The last supernova directly observed in the Milky Way was Kepler's Supernova in 1604, appearing not long after Tycho's Supernova in 1572, both of which were visible to the naked eye. The remnants of more recent supernovae have been found, and observations of supernovae in other galaxies suggest they occur in the Milky Way on average about three times every century. A supernova in the Milky Way would almost certainly be observable through modern astronomical telescopes. The most recent naked-eye supernova was SN 1987A, which was the explosion of a blue supergiant star in the Large Magellanic Cloud, a satellite galaxy of the Milky Way.
Theoretical studies indicate that most supernovae are triggered by one of two basic mechanisms: the sudden re-ignition of nuclear fusion in a white dwarf, or the sudden gravitational collapse of a massive star's core.
In the re-ignition of a white dwarf, the object's temperature is raised enough to trigger runaway nuclear fusion, completely disrupting the star. Possible causes are an accumulation of material from a binary companion through accretion, or by a stellar merger.
In the case of a massive star's sudden implosion, the core of a massive star will undergo sudden collapse once it is unable to produce sufficient energy from fusion to counteract the star's own gravity, which must happen once the star begins fusing iron, but may happen during an earlier stage of metal fusion.
Supernovae can expel several solar masses of material at speeds up to several percent of the speed of light. This drives an expanding shock wave into the surrounding interstellar medium, sweeping up an expanding shell of gas and dust observed as a supernova remnant. Supernovae are a major source of elements in the interstellar medium from oxygen to rubidium. The expanding shock waves of supernovae can trigger the formation of new stars. Supernovae are a major source of cosmic rays. They might also produce gravitational waves.
Etymology
The word supernova has the plural form supernovae or supernovas and is often abbreviated as SN or SNe. It is derived from the Latin word nova, meaning "new", which refers to what appears to be a temporary new bright star. Adding the prefix "super-" distinguishes supernovae from ordinary novae, which are far less luminous. The word supernova was coined by Walter Baade and Fritz Zwicky, who began using it in astrophysics lectures in 1931. Its first use in a journal article came the following year in a publication by Knut Lundmark, who may have coined it independently.
Observation history
Compared to a star's entire history, the visual appearance of a supernova is very brief, sometimes spanning several months, so that the chances of observing one with the naked eye are roughly once in a lifetime. Only a tiny fraction of the 100 billion stars in a typical galaxy have the capacity to become a supernova, the ability being restricted to those having high mass and those in rare kinds of binary star systems with at least one white dwarf.
Early discoveries
The earliest record of a possible supernova, known as HB9, was likely viewed by an unknown prehistoric people of the Indian subcontinent and recorded on a rock carving in the Burzahama region of Kashmir, dated to . Later, SN 185 was documented by Chinese astronomers in 185 AD. The brightest recorded supernova was SN 1006, which was observed in AD 1006 in the constellation of Lupus. This event was described by observers in China, Japan, Iraq, Egypt and Europe. The widely observed supernova SN 1054 produced the Crab Nebula.
Supernovae SN 1572 and SN 1604, the latest Milky Way supernovae to be observed with the naked eye, had a notable influence on the development of astronomy in Europe because they were used to argue against the Aristotelian idea that the universe beyond the Moon and planets was static and unchanging. Johannes Kepler began observing SN 1604 at its peak on 17 October 1604, and continued to make estimates of its brightness until it faded from naked eye view a year later. It was the second supernova to be observed in a generation, after Tycho Brahe observed SN 1572 in Cassiopeia.
There is some evidence that the youngest known supernova in our galaxy, G1.9+0.3, occurred in the late 19th century, considerably more recently than Cassiopeia A from around 1680. Neither was noted at the time. In the case of G1.9+0.3, high extinction from dust along the plane of the galactic disk could have dimmed the event sufficiently for it to go unnoticed. The situation for Cassiopeia A is less clear; infrared light echoes have been detected showing that it was not in a region of especially high extinction.
Telescope findings
With the development of the astronomical telescope, observation and discovery of fainter and more distant supernovae became possible. The first such observation was of SN 1885A in the Andromeda Galaxy. A second supernova, SN 1895B, was discovered in NGC 5253 a decade later. Early work on what was originally believed to be simply a new category of novae was performed during the 1920s. These were variously called "upper-class Novae", "Hauptnovae", or "giant novae". The name "supernovae" is thought to have been coined by Walter Baade and Zwicky in lectures at Caltech in 1931. It was used, as "super-Novae", in a journal paper published by Knut Lundmark in 1933, and in a 1934 paper by Baade and Zwicky. By 1938, the hyphen was no longer used and the modern name was in use.
American astronomers Rudolph Minkowski and Fritz Zwicky developed the modern supernova classification scheme beginning in 1941. During the 1960s, astronomers found that the maximum intensities of supernovae could be used as standard candles, hence indicators of astronomical distances. Some of the most distant supernovae observed in 2003 appeared dimmer than expected. This supports the view that the expansion of the universe is accelerating. Techniques were developed for reconstructing supernovae events that have no written records of being observed. The date of the Cassiopeia A supernova event was determined from light echoes off nebulae, while the age of supernova remnant RX J0852.0-4622 was estimated from temperature measurements and the gamma ray emissions from the radioactive decay of titanium-44.
The most luminous supernova ever recorded is ASASSN-15lh, at a distance of 3.82 gigalight-years. It was first detected in June 2015 and peaked at about 570 billion times the luminosity of the Sun, which is twice the bolometric luminosity of any other known supernova. The nature of this supernova is debated and several alternative explanations, such as tidal disruption of a star by a black hole, have been suggested.
SN 2013fs was recorded three hours after the supernova event on 6 October 2013, by the Intermediate Palomar Transient Factory. This is among the earliest supernovae caught after detonation, and it is the earliest for which spectra have been obtained, beginning six hours after the actual explosion. The star is located in a spiral galaxy named NGC 7610, 160 million light-years away in the constellation of Pegasus.
The supernova SN 2016gkg was detected by amateur astronomer Victor Buso from Rosario, Argentina, on 20 September 2016. It was the first time that the initial "shock breakout" from an optical supernova had been observed. The progenitor star has been identified in Hubble Space Telescope images from before its collapse. Astronomer Alex Filippenko noted: "Observations of stars in the first moments they begin exploding provide information that cannot be directly obtained in any other way."
The James Webb Space Telescope (JWST) has significantly advanced our understanding of supernovae by identifying around 80 new instances through its JWST Advanced Deep Extragalactic Survey (JADES) program. This includes the most distant spectroscopically confirmed supernova at a redshift of 3.6, indicating its explosion occurred when the universe was merely 1.8 billion years old. These findings offer crucial insights into the early universe's stellar evolution and the frequency of supernovae during its formative years.
Discovery programs
Because supernovae are relatively rare events within a galaxy, occurring about three times a century in the Milky Way, obtaining a good sample of supernovae to study requires regular monitoring of many galaxies. Today, amateur and professional astronomers are finding about two thousand every year, some when near maximum brightness, others on old astronomical photographs or plates. Supernovae in other galaxies cannot be predicted with any meaningful accuracy. Normally, when they are discovered, they are already in progress. To use supernovae as standard candles for measuring distance, observation of their peak luminosity is required. It is therefore important to discover them well before they reach their maximum. Amateur astronomers, who greatly outnumber professional astronomers, have played an important role in finding supernovae, typically by looking at some of the closer galaxies through an optical telescope and comparing them to earlier photographs.
Toward the end of the 20th century, astronomers increasingly turned to computer-controlled telescopes and CCDs for hunting supernovae. While such systems are popular with amateurs, there are also professional installations such as the Katzman Automatic Imaging Telescope. The Supernova Early Warning System (SNEWS) project uses a network of neutrino detectors to give early warning of a supernova in the Milky Way galaxy. Neutrinos are subatomic particles that are produced in great quantities by a supernova, and they are not significantly absorbed by the interstellar gas and dust of the galactic disk.
Supernova searches fall into two classes: those focused on relatively nearby events and those looking farther away. Because of the expansion of the universe, the distance to a remote object with a known emission spectrum can be estimated by measuring its Doppler shift (or redshift); on average, more-distant objects recede with greater velocity than those nearby, and so have a higher redshift. Thus the search is split between high redshift and low redshift, with the boundary falling around a redshift range of z=0.1–0.3, where z is a dimensionless measure of the spectrum's frequency shift.
High redshift searches for supernovae usually involve the observation of supernova light curves. These are useful for standard or calibrated candles to generate Hubble diagrams and make cosmological predictions. Supernova spectroscopy, used to study the physics and environments of supernovae, is more practical at low than at high redshift. Low redshift observations also anchor the low-distance end of the Hubble curve, which is a plot of distance versus redshift for visible galaxies.
As survey programmes rapidly increase the number of detected supernovae, collated collections of observations (light decay curves, astrometry, pre-supernova observations, spectroscopy) have been assembled. The Pantheon data set, assembled in 2018, detailed 1048 supernovae. In 2021, this data set was expanded to 1701 light curves for 1550 supernovae taken from 18 different surveys, a 50% increase in under 3 years.
Naming convention
Supernova discoveries are reported to the International Astronomical Union's Central Bureau for Astronomical Telegrams, which sends out a circular with the name it assigns to that supernova. The name is formed from the prefix SN, followed by the year of discovery, suffixed with a one or two-letter designation. The first 26 supernovae of the year are designated with a capital letter from A to Z. Next, pairs of lower-case letters are used: aa, ab, and so on. Hence, for example, SN 2003C designates the third supernova reported in the year 2003. The last supernova of 2005, SN 2005nc, was the 367th (14 × 26 + 3 = 367). Since 2000, professional and amateur astronomers have been finding several hundred supernovae each year (572 in 2007, 261 in 2008, 390 in 2009; 231 in 2013).
Historical supernovae are known simply by the year they occurred: SN 185, SN 1006, SN 1054, SN 1572 (called Tycho's Nova) and SN 1604 (Kepler's Star). Since 1885 the additional letter notation has been used, even if there was only one supernova discovered that year (for example, SN 1885A, SN 1907A, etc.); this last happened with SN 1947A. SN, for SuperNova, is a standard prefix. Until 1987, two-letter designations were rarely needed; since 1988, they have been needed every year. Since 2016, the increasing number of discoveries has regularly led to the additional use of three-letter designations. After zz comes aaa, then aab, aac, and so on. For example, the last supernova retained in the Asiago Supernova Catalogue when it was terminated on 31 December 2017 bears the designation SN 2017jzp.
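The suffix scheme described above can be written as a short routine. This is an illustrative added sketch following the convention stated in the text, not an official IAU tool.

```python
import string

def sn_suffix(k):
    """Designation suffix for the k-th supernova reported in a given year (k >= 1)."""
    if k <= 26:
        return string.ascii_uppercase[k - 1]    # A ... Z for the first 26 discoveries
    # Afterwards: aa, ab, ..., az, ba, ..., zz, aaa, ... (bijective base-26 strings; the
    # values 1-26 would encode to single letters, but those slots are taken by A-Z above,
    # so the lower-case suffixes begin at "aa" for k = 27).
    letters = []
    while k > 0:
        k, r = divmod(k - 1, 26)
        letters.append(string.ascii_lowercase[r])
    return ''.join(reversed(letters))

print(sn_suffix(3))     # 'C'  -> SN 2003C, the third supernova reported in 2003
print(sn_suffix(367))   # 'nc' -> SN 2005nc, the 367th of 2005
```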
Classification
Astronomers classify supernovae according to their light curves and the absorption lines of different chemical elements that appear in their spectra. If a supernova's spectrum contains lines of hydrogen (known as the Balmer series in the visual portion of the spectrum) it is classified Type II; otherwise it is Type I. In each of these two types there are subdivisions according to the presence of lines from other elements or the shape of the light curve (a graph of the supernova's apparent magnitude as a function of time).
Type I
Type I supernovae are subdivided on the basis of their spectra, with type Ia showing a strong ionised silicon absorption line. Type I supernovae without this strong line are classified as type Ib and Ic, with type Ib showing strong neutral helium lines and type Ic lacking them. Historically, the light curves of type I supernovae were seen as all broadly similar, too much so to make useful distinctions. While variations in light curves have been studied, classification continues to be made on spectral grounds rather than light-curve shape.
A small number of type Ia supernovae exhibit unusual features, such as non-standard luminosity or broadened light curves, and these are typically categorised by referring to the earliest example showing similar features. For example, the sub-luminous SN 2008ha is often referred to as SN 2002cx-like or class Ia-2002cx.
A small proportion of type Ic supernovae show highly broadened and blended emission lines which are taken to indicate very high expansion velocities for the ejecta. These have been classified as type Ic-BL or Ic-bl.
Calcium-rich supernovae are a rare type of rapidly evolving supernova with unusually strong calcium lines in their spectra. Models suggest they occur when material is accreted from a helium-rich companion rather than a hydrogen-rich star. Because of helium lines in their spectra, they can resemble type Ib supernovae, but are thought to have very different progenitors.
Type II
The supernovae of type II can also be sub-divided based on their spectra. While most type II supernovae show very broad emission lines which indicate expansion velocities of many thousands of kilometres per second, some, such as SN 2005gl, have relatively narrow features in their spectra. These are called type IIn, where the "n" stands for "narrow".
A few supernovae, such as SN 1987K and SN 1993J, appear to change types: they show lines of hydrogen at early times, but, over a period of weeks to months, become dominated by lines of helium. The term "type IIb" is used to describe the combination of features normally associated with types II and Ib.
Type II supernovae with normal spectra dominated by broad hydrogen lines that remain for the life of the decline are classified on the basis of their light curves. The most common type shows a distinctive "plateau" in the light curve shortly after peak brightness where the visual luminosity stays relatively constant for several months before the decline resumes. These are called type II-P referring to the plateau. Less common are type II-L supernovae that lack a distinct plateau. The "L" signifies "linear" although the light curve is not actually a straight line.
Supernovae that do not fit into the normal classifications are designated peculiar, or "pec".
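As a toy illustration of this taxonomy, the decision logic described above might be sketched as follows; real classification relies on detailed spectra and light-curve fitting, the boolean flags here are simplified stand-ins for observational criteria, and peculiar events fall outside any such scheme.
# Toy decision sketch of the spectral/light-curve taxonomy described above.

def classify_supernova(hydrogen, silicon=False, helium=False,
                       narrow_lines=False, helium_dominates_later=False,
                       plateau=False):
    if hydrogen:                      # Balmer lines present -> type II
        if narrow_lines:
            return "IIn"
        if helium_dominates_later:    # hydrogen early, helium later
            return "IIb"
        return "II-P" if plateau else "II-L"
    if silicon:                       # strong ionised silicon line
        return "Ia"
    return "Ib" if helium else "Ic"   # neutral helium present or absent

print(classify_supernova(hydrogen=False, silicon=True))   # Ia
print(classify_supernova(hydrogen=True, plateau=True))    # II-P
print(classify_supernova(hydrogen=False, helium=False))   # Ic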
Types III, IV and V
Zwicky defined additional supernovae types based on a very few examples that did not cleanly fit the parameters for type I or type II supernovae. SN 1961i in NGC 4303 was the prototype and only member of the type III supernova class, noted for its broad light curve maximum and broad hydrogen Balmer lines that were slow to develop in the spectrum. SN 1961f in NGC 3003 was the prototype and only member of the type IV class, with a light curve similar to a type II-P supernova, with hydrogen absorption lines but weak hydrogen emission lines. The type V class was coined for SN 1961V in NGC 1058, an unusual faint supernova or supernova impostor with a slow rise to brightness, a maximum lasting many months, and an unusual emission spectrum. The similarity of SN 1961V to the Eta Carinae Great Outburst was noted. Supernovae in M101 (1909) and M83 (1923 and 1957) were also suggested as possible type IV or type V supernovae.
These types would now all be treated as peculiar type II supernovae (IIpec), of which many more examples have been discovered, although it is still debated whether SN 1961V was a true supernova following an LBV outburst or an impostor.
Current models
Supernova type codes, as summarised in the table above, are taxonomic: the type number is based on the light observed from the supernova, not necessarily its cause. For example, type Ia supernovae are produced by runaway fusion ignited on degenerate white dwarf progenitors, while the spectrally similar type Ib/c are produced from massive stripped progenitor stars by core collapse.
Thermal runaway
A white dwarf star may accumulate sufficient material from a stellar companion to raise its core temperature enough to ignite carbon fusion, at which point it undergoes runaway nuclear fusion that completely disrupts the star. There are three avenues by which this detonation is theorised to happen: stable accretion of material from a companion, the collision of two white dwarfs, or accretion that causes ignition in a shell that then ignites the core. The dominant mechanism by which type Ia supernovae are produced remains unclear. Despite this uncertainty, type Ia supernovae have very uniform properties and are useful standard candles over intergalactic distances. Some calibrations are required to compensate for the gradual change in properties or the different frequencies of abnormal-luminosity supernovae at high redshift, and for small variations in brightness identified by light curve shape or spectrum.
Normal type Ia
There are several means by which a supernova of this type can form, but they share a common underlying mechanism. If a carbon-oxygen white dwarf accreted enough matter to reach the Chandrasekhar limit of about 1.44 solar masses (for a non-rotating star), it would no longer be able to support the bulk of its mass through electron degeneracy pressure and would begin to collapse. However, the current view is that this limit is not normally attained; increasing temperature and density inside the core ignite carbon fusion as the star approaches the limit (to within about 1%) before collapse is initiated. In contrast, for a core primarily composed of oxygen, neon and magnesium, the collapsing white dwarf will typically form a neutron star. In this case, only a fraction of the star's mass will be ejected during the collapse.
Within a few seconds of the collapse process, a substantial fraction of the matter in the white dwarf undergoes nuclear fusion, releasing enough energy (1–) to unbind the star in a supernova. An outwardly expanding shock wave is generated, with matter reaching velocities on the order of 5,000–20,000 km/s, or roughly 3% of the speed of light. There is also a significant increase in luminosity, reaching an absolute magnitude of −19.3 (or 5 billion times brighter than the Sun), with little variation.
The model for the formation of this category of supernova is a close binary star system. The larger of the two stars is the first to evolve off the main sequence, and it expands to form a red giant. The two stars now share a common envelope, causing their mutual orbit to shrink. The giant star then sheds most of its envelope, losing mass until it can no longer continue nuclear fusion. At this point, it becomes a white dwarf star, composed primarily of carbon and oxygen. Eventually, the secondary star also evolves off the main sequence to form a red giant. Matter from the giant is accreted by the white dwarf, causing the latter to increase in mass. The exact details of initiation and of the heavy elements produced in the catastrophic event remain unclear.
Type Ia supernovae produce a characteristic light curve—the graph of luminosity as a function of time—after the event. This luminosity is generated by the radioactive decay of nickel-56 through cobalt-56 to iron-56. The peak luminosity of the light curve is extremely consistent across normal type Ia supernovae, having a maximum absolute magnitude of about −19.3. This is because typical type Ia supernovae arise from a consistent type of progenitor star by gradual mass acquisition, and explode when they acquire a consistent typical mass, giving rise to very similar supernova conditions and behaviour. This allows them to be used as a secondary standard candle to measure the distance to their host galaxies.
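A minimal sketch of the standard-candle calculation, assuming the peak absolute magnitude of about −19.3 quoted above and an arbitrary example apparent magnitude, uses the distance modulus m − M = 5 log10(d / 10 pc).
# The apparent magnitude below is an arbitrary example, not a real measurement.
import math

ABS_MAG_IA = -19.3  # peak absolute magnitude of a normal type Ia

def distance_parsecs(apparent_mag, absolute_mag=ABS_MAG_IA):
    """Distance from the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

d_pc = distance_parsecs(apparent_mag=16.0)
print(f"{d_pc:.3e} pc  (~{d_pc * 3.26e-6:.0f} million light-years)")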
A second model for the formation of type Ia supernovae involves the merger of two white dwarf stars, with the combined mass momentarily exceeding the Chandrasekhar limit. This is sometimes referred to as the double-degenerate model, as both stars are degenerate white dwarfs. Because of the possible combinations of mass and chemical composition of the pair, there is much variation in this type of event; in many cases there may be no supernova at all, and those events that do occur are expected to have a less luminous light curve than a normal type Ia supernova.
Non-standard type Ia
Abnormally bright type Ia supernovae occur when the white dwarf already has a mass higher than the Chandrasekhar limit, possibly enhanced further by asymmetry, but the ejected material will have less than normal kinetic energy. This super-Chandrasekhar-mass scenario can occur, for example, when the extra mass is supported by differential rotation.
There is no formal sub-classification for non-standard type Ia supernovae. It has been proposed that a group of sub-luminous supernovae that occur when helium accretes onto a white dwarf should be classified as type Iax. This type of supernova may not always completely destroy the white dwarf progenitor and could leave behind a zombie star.
One specific type of supernova originates from exploding white dwarfs, like type Ia, but contains hydrogen lines in their spectra, possibly because the white dwarf is surrounded by an envelope of hydrogen-rich circumstellar material. These supernovae have been dubbed type Ia/IIn, type Ian, type IIa and type IIan.
The quadruple star HD 74438, belonging to the open cluster IC 2391 in the Vela constellation, has been predicted to become a non-standard type Ia supernova.
Core collapse
Very massive stars can undergo core collapse when nuclear fusion becomes unable to sustain the core against its own gravity; passing this threshold is the cause of all types of supernova except type Ia. The collapse may cause violent expulsion of the outer layers of the star resulting in a supernova. However, if the release of gravitational potential energy is insufficient, the star may instead collapse into a black hole or neutron star with little radiated energy.
Core collapse can be caused by several different mechanisms: exceeding the Chandrasekhar limit; electron capture; pair-instability; or photodisintegration.
When a massive star develops an iron core larger than the Chandrasekhar mass it will no longer be able to support itself by electron degeneracy pressure and will collapse further to a neutron star or black hole.
Electron capture by magnesium in a degenerate O/Ne/Mg core (8–10 solar mass progenitor star) removes support and causes gravitational collapse followed by explosive oxygen fusion, with very similar results.
Electron-positron pair production in a large post-helium burning core removes thermodynamic support and causes initial collapse followed by runaway fusion, resulting in a pair-instability supernova.
A sufficiently large and hot stellar core may generate gamma-rays energetic enough to initiate photodisintegration directly, which will cause a complete collapse of the core.
The table below lists the known reasons for core collapse in massive stars, the types of stars in which they occur, their associated supernova type, and the remnant produced. The metallicity is the proportion of elements other than hydrogen or helium, as compared to the Sun. The initial mass is the mass of the star prior to the supernova event, given in multiples of the Sun's mass, although the mass at the time of the supernova may be much lower.
Type IIn supernovae are not listed in the table. They can be produced by various types of core collapse in different progenitor stars, possibly even by type Ia white dwarf ignitions, although it seems that most will be from iron core collapse in luminous supergiants or hypergiants (including LBVs). The narrow spectral lines for which they are named occur because the supernova is expanding into a small dense cloud of circumstellar material. It appears that a significant proportion of supposed type IIn supernovae are supernova impostors, massive eruptions of LBV-like stars similar to the Great Eruption of Eta Carinae. In these events, material previously ejected from the star creates the narrow absorption lines and causes a shock wave through interaction with the newly ejected material.
Detailed process
When a stellar core is no longer supported against gravity, it collapses in on itself with velocities reaching 70,000 km/s (0.23c), resulting in a rapid increase in temperature and density. What follows depends on the mass and structure of the collapsing core, with low-mass degenerate cores forming neutron stars, higher-mass degenerate cores mostly collapsing completely to black holes, and non-degenerate cores undergoing runaway fusion.
The initial collapse of degenerate cores is accelerated by beta decay, photodisintegration and electron capture, which causes a burst of electron neutrinos. As the density increases, neutrino emission is cut off as they become trapped in the core. The inner core eventually reaches typically 30 km in diameter with a density comparable to that of an atomic nucleus, and neutron degeneracy pressure tries to halt the collapse. If the core mass is more than about 15 solar masses then neutron degeneracy is insufficient to stop the collapse and a black hole forms directly with no supernova.
In lower mass cores the collapse is stopped and the newly formed neutron core has an initial temperature of about 100 billion kelvin, 6,000 times the temperature of the Sun's core. At this temperature, neutrino-antineutrino pairs of all flavours are efficiently formed by thermal emission. These thermal neutrinos are several times more abundant than the electron-capture neutrinos. About 10^46 joules, approximately 10% of the star's rest mass, is converted into a ten-second burst of neutrinos, which is the main output of the event. The suddenly halted core collapse rebounds and produces a shock wave that stalls in the outer core within milliseconds as energy is lost through the dissociation of heavy elements. A process that is not fully understood is necessary to allow the outer layers of the core to reabsorb around 10^44 joules (1 foe) from the neutrino pulse, producing the visible brightness, although there are other theories that could power the explosion.
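As a rough order-of-magnitude check, assuming the collapsing core is about 1.4 solar masses (a value not given in this passage), 10% of its rest-mass energy is indeed of order 10^46 joules.
# Illustrative back-of-the-envelope check; the 1.4 solar mass core is an assumption.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

core_mass = 1.4 * M_SUN
rest_energy = core_mass * C ** 2       # ~2.5e47 J
neutrino_burst = 0.10 * rest_energy    # ~2.5e46 J, of order 1e46 J
print(f"{neutrino_burst:.1e} J")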
Some material from the outer envelope falls back onto the neutron star, and, for cores beyond about , there is sufficient fallback to form a black hole. This fallback will reduce the kinetic energy created and the mass of expelled radioactive material, but in some situations, it may also generate relativistic jets that result in a gamma-ray burst or an exceptionally luminous supernova.
The collapse of a massive non-degenerate core will ignite further fusion. When the core collapse is initiated by pair instability (photons turning into electron-positron pairs, thereby reducing the radiation pressure) oxygen fusion begins and the collapse may be halted. For core masses of , the collapse halts and the star remains intact, but collapse will occur again when a larger core has formed. For cores of around , the fusion of oxygen and heavier elements is so energetic that the entire star is disrupted, causing a supernova. At the upper end of the mass range, the supernova is unusually luminous and extremely long-lived due to many solar masses of ejected 56Ni. For even larger core masses, the core temperature becomes high enough to allow photodisintegration and the core collapses completely into a black hole.
Type II
Stars with initial masses less than about never develop a core large enough to collapse and they eventually lose their atmospheres to become white dwarfs. Stars with at least (possibly as much as ) evolve in a complex fashion, progressively burning heavier elements at hotter temperatures in their cores. The star becomes layered like an onion, with the burning of more easily fused elements occurring in larger shells. Although popularly described as an onion with an iron core, the least massive supernova progenitors only have oxygen-neon(-magnesium) cores. These super-AGB stars may form the majority of core collapse supernovae, although less luminous and so less commonly observed than those from more massive progenitors.
If core collapse occurs during a supergiant phase when the star still has a hydrogen envelope, the result is a type II supernova. The rate of mass loss for luminous stars depends on the metallicity and luminosity. Extremely luminous stars at near solar metallicity will lose all their hydrogen before they reach core collapse and so will not form a supernova of type II. At low metallicity, all stars will reach core collapse with a hydrogen envelope but sufficiently massive stars collapse directly to a black hole without producing a visible supernova.
Stars with an initial mass up to about 90 times the Sun, or a little less at high metallicity, result in a type II-P supernova, which is the most commonly observed type. At moderate to high metallicity, stars near the upper end of that mass range will have lost most of their hydrogen when core collapse occurs and the result will be a type II-L supernova. At very low metallicity, stars of around will reach core collapse by pair instability while they still have a hydrogen atmosphere and an oxygen core and the result will be a supernova with type II characteristics but a very large mass of ejected 56Ni and high luminosity.
Type Ib and Ic
These supernovae, like those of type II, arise from massive stars that undergo core collapse. Unlike the progenitors of type II supernovae, the stars which become type Ib and Ic supernovae have lost most of their outer (hydrogen) envelopes due to strong stellar winds or else from interaction with a companion. These stars are known as Wolf–Rayet stars, and they occur at moderate to high metallicity where continuum-driven winds cause sufficiently high mass-loss rates. Observations of type Ib/c supernovae do not match the observed or expected occurrence of Wolf–Rayet stars. Alternate explanations for this type of core collapse supernova involve stars stripped of their hydrogen by binary interactions. Binary models provide a better match for the observed supernovae, with the proviso that no suitable binary helium stars have ever been observed.
Type Ib supernovae are the more common and result from Wolf–Rayet stars of type WC which still have helium in their atmospheres. For a narrow range of masses, stars evolve further before reaching core collapse to become WO stars with very little helium remaining, and these are the progenitors of type Ic supernovae.
A few percent of the type Ic supernovae are associated with gamma-ray bursts (GRB), though it is also believed that any hydrogen-stripped type Ib or Ic supernova could produce a GRB, depending on the geometry of the explosion. The mechanism for producing this type of GRB is the jets produced by the magnetic field of the rapidly spinning magnetar formed at the collapsing core of the star. The jets would also transfer energy into the expanding outer shell, producing a super-luminous supernova.
Ultra-stripped supernovae occur when the exploding star has been stripped (almost) all the way to the metal core, via mass transfer in a close binary. As a result, very little material is ejected from the exploding star (c. ). In the most extreme cases, ultra-stripped supernovae can occur in naked metal cores, barely above the Chandrasekhar mass limit. SN 2005ek might be the first observational example of an ultra-stripped supernova, giving rise to a relatively dim and fast decaying light curve. The nature of ultra-stripped supernovae can be both iron core-collapse and electron capture supernovae, depending on the mass of the collapsing core. Ultra-stripped supernovae are believed to be associated with the second supernova explosion in a binary system, producing for example a tight double neutron star system.
In 2022 a team of astronomers led by researchers from the Weizmann Institute of Science reported the first supernova explosion showing direct evidence for a Wolf-Rayet progenitor star. SN 2019hgp was a type Icn supernova and is also the first in which the element neon has been detected.
Electron-capture supernovae
In 1980, a "third type" of supernova was predicted by Ken'ichi Nomoto of the University of Tokyo, called an electron-capture supernova. It would arise when a star "in the transitional range (~8 to 10 solar masses) between white dwarf formation and iron core-collapse supernovae", and with a degenerate O+Ne+Mg core, imploded after its core ran out of nuclear fuel, causing gravity to compress the electrons in the star's core into their atomic nuclei, leading to a supernova explosion and leaving behind a neutron star. In June 2021, a paper in the journal Nature Astronomy reported that the 2018 supernova SN 2018zd (in the galaxy NGC 2146, about 31 million light-years from Earth) appeared to be the first observation of an electron-capture supernova. The 1054 supernova explosion that created the Crab Nebula in our galaxy had been thought to be the best candidate for an electron-capture supernova, and the 2021 paper makes it more likely that this was correct.
Failed supernovae
The core collapse of some massive stars may not result in a visible supernova. This happens if the initial core collapse cannot be reversed by the mechanism that produces an explosion, usually because the core is too massive. These events are difficult to detect, but large surveys have detected possible candidates. The red supergiant N6946-BH1 in NGC 6946 underwent a modest outburst in March 2009, before fading from view. Only a faint infrared source remains at the star's location.
Light curves
The ejecta gases would dim quickly without some energy input to keep them hot. The source of this energy—which can maintain the optical supernova glow for months—was, at first, a puzzle. Some considered rotational energy from the central pulsar as a source. Although the energy that initially powers each type of supernovae is delivered promptly, the light curves are dominated by subsequent radioactive heating of the rapidly expanding ejecta. The intensely radioactive nature of the ejecta gases was first calculated on sound nucleosynthesis grounds in the late 1960s, and this has since been demonstrated as correct for most supernovae. It was not until SN 1987A that direct observation of gamma-ray lines unambiguously identified the major radioactive nuclei.
It is now known by direct observation that much of the light curve (the graph of luminosity as a function of time) after the occurrence of a type II supernova, such as SN 1987A, is explained by those predicted radioactive decays. Although the luminous emission consists of optical photons, it is the radioactive power absorbed by the ejected gases that keeps the remnant hot enough to radiate light. The radioactive decay of 56Ni through its daughters 56Co to 56Fe produces gamma-ray photons, primarily with energies of and , that are absorbed and dominate the heating and thus the luminosity of the ejecta at intermediate times (several weeks) to late times (several months). Energy for the peak of the light curve of SN 1987A was provided by the decay of 56Ni to 56Co (half-life 6 days) while energy for the later light curve in particular fit very closely with the 77.3-day half-life of 56Co decaying to 56Fe. Later measurements by space gamma-ray telescopes of the small fraction of the 56Co and 57Co gamma rays that escaped the SN 1987A remnant without absorption confirmed earlier predictions that those two radioactive nuclei were the power sources.
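A minimal sketch of this two-step decay chain, using the half-lives quoted above and ignoring gamma-ray escape, positron deposition and the changing transparency of the ejecta, is the Bateman solution for the two nuclide populations.
# Idealised sketch of the Ni-56 -> Co-56 -> Fe-56 chain that powers the late
# light curve; real light-curve modelling includes many additional effects.
import math

T_NI, T_CO = 6.0, 77.3                               # half-lives in days
L_NI, L_CO = math.log(2) / T_NI, math.log(2) / T_CO  # decay constants

def nickel(t, n0=1.0):
    """Remaining fraction of the original Ni-56 at time t (days)."""
    return n0 * math.exp(-L_NI * t)

def cobalt(t, n0=1.0):
    """Bateman solution: Co-56 produced by Ni-56 decay, itself decaying."""
    return n0 * L_NI / (L_CO - L_NI) * (math.exp(-L_NI * t) - math.exp(-L_CO * t))

for t in (10, 50, 100, 200):                         # days after the explosion
    print(t, round(nickel(t), 4), round(cobalt(t), 4))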
The late-time decay phase of visual light curves for different supernova types all depend on radioactive heating, but they vary in shape and amplitude because of the underlying mechanisms, the way that visible radiation is produced, the epoch of its observation, and the transparency of the ejected material. The light curves can be significantly different at other wavelengths. For example, at ultraviolet wavelengths there is an early extremely luminous peak lasting only a few hours corresponding to the breakout of the shock launched by the initial event, but that breakout is hardly detectable optically.
The light curves for type Ia are mostly very uniform, with a consistent maximum absolute magnitude and a relatively steep decline in luminosity. Their optical energy output is driven by radioactive decay of ejected nickel-56 (half-life 6 days), which then decays to radioactive cobalt-56 (half-life 77 days). These radioisotopes excite the surrounding material to incandescence. Modern studies of cosmology rely on 56Ni radioactivity providing the energy for the optical brightness of supernovae of type Ia, which are the "standard candles" of cosmology but whose diagnostic gamma rays were first detected only in 2014. The initial phases of the light curve decline steeply as the effective size of the photosphere decreases and trapped electromagnetic radiation is depleted. The light curve continues to decline in the B band while it may show a small shoulder in the visual at about 40 days, but this is only a hint of a secondary maximum that occurs in the infra-red as certain ionised heavy elements recombine to produce infra-red radiation and the ejecta become transparent to it. The visual light curve continues to decline at a rate slightly greater than the decay rate of the radioactive cobalt (which has the longer half-life and controls the later curve), because the ejected material becomes more diffuse and less able to convert the high energy radiation into visual radiation. After several months, the light curve changes its decline rate again as positron emission from the remaining cobalt-56 becomes dominant, although this portion of the light curve has been little-studied.
Type Ib and Ic light curves are similar to type Ia although with a lower average peak luminosity. The visual light output is again due to radioactive decay being converted into visual radiation, but there is a much lower mass of the created nickel-56. The peak luminosity varies considerably and there are even occasional type Ib/c supernovae orders of magnitude more and less luminous than the norm. The most luminous type Ic supernovae are referred to as hypernovae and tend to have broadened light curves in addition to the increased peak luminosity. The source of the extra energy is thought to be relativistic jets driven by the formation of a rotating black hole, which also produce gamma-ray bursts.
The light curves for type II supernovae are characterised by a much slower decline than type I, on the order of 0.05 magnitudes per day, excluding the plateau phase. The visual light output is dominated by kinetic energy rather than radioactive decay for several months, due primarily to the existence of hydrogen in the ejecta from the atmosphere of the supergiant progenitor star. In the initial destruction this hydrogen becomes heated and ionised. The majority of type II supernovae show a prolonged plateau in their light curves as this hydrogen recombines, emitting visible light and becoming more transparent. This is then followed by a declining light curve driven by radioactive decay although slower than in type I supernovae, due to the efficiency of conversion into light by all the hydrogen.
In type II-L the plateau is absent because the progenitor had relatively little hydrogen left in its atmosphere, sufficient to appear in the spectrum but insufficient to produce a noticeable plateau in the light output. In type IIb supernovae the hydrogen atmosphere of the progenitor is so depleted (thought to be due to tidal stripping by a companion star) that the light curve is closer to a type I supernova and the hydrogen even disappears from the spectrum after several weeks.
Type IIn supernovae are characterised by additional narrow spectral lines produced in a dense shell of circumstellar material. Their light curves are generally very broad and extended, occasionally also extremely luminous and referred to as a superluminous supernova. These light curves are produced by the highly efficient conversion of kinetic energy of the ejecta into electromagnetic radiation by interaction with the dense shell of material. This only occurs when the material is sufficiently dense and compact, indicating that it has been produced by the progenitor star itself only shortly before the supernova occurs.
Large numbers of supernovae have been catalogued and classified to provide distance candles and test models. Average characteristics vary somewhat with distance and type of host galaxy, but can broadly be specified for each supernova type.
Asymmetry
A long-standing puzzle surrounding type II supernovae is why the remaining compact object receives a large velocity away from the epicentre; pulsars, and thus neutron stars, are observed to have high peculiar velocities, and black holes presumably do as well, although they are far harder to observe in isolation. The initial impetus can be substantial, propelling an object of more than a solar mass at a velocity of 500 km/s or greater. This indicates an expansion asymmetry, but the mechanism by which momentum is transferred to the compact object remains a puzzle. Proposed explanations for this kick include convection in the collapsing star, asymmetric ejection of matter during neutron star formation, and asymmetrical neutrino emissions.
One possible explanation for this asymmetry is large-scale convection above the core. The convection can create radial variations in density, giving rise to variations in the amount of energy absorbed from the neutrino outflow. However, analysis of this mechanism predicts only modest momentum transfer. Another possible explanation is that accretion of gas onto the central neutron star can create a disk that drives highly directional jets, propelling matter at a high velocity out of the star, and driving transverse shocks that completely disrupt the star. These jets might play a crucial role in the resulting supernova. (A similar model is used for explaining long gamma-ray bursts.) The dominant mechanism may depend upon the mass of the progenitor star.
Initial asymmetries have also been confirmed in type Ia supernovae through observation. This result may mean that the initial luminosity of this type of supernova depends on the viewing angle. However, the expansion becomes more symmetrical with the passage of time. Early asymmetries are detectable by measuring the polarisation of the emitted light.
Energy output
Although supernovae are primarily known as luminous events, the electromagnetic radiation they release is almost a minor side-effect. Particularly in the case of core collapse supernovae, the emitted electromagnetic radiation is a tiny fraction of the total energy released during the event.
There is a fundamental difference between the balance of energy production in the different types of supernova. In type Ia white dwarf detonations, most of the energy is directed into heavy element synthesis and the kinetic energy of the ejecta. In core collapse supernovae, the vast majority of the energy is directed into neutrino emission, and while some of this apparently powers the observed destruction, 99%+ of the neutrinos escape the star in the first few minutes following the start of the collapse.
Standard type Ia supernovae derive their energy from a runaway nuclear fusion of a carbon-oxygen white dwarf. The details of the energetics are still not fully understood, but the result is the ejection of the entire mass of the original star at high kinetic energy. Around half a solar mass of that mass is 56Ni generated from silicon burning. 56Ni is radioactive and decays into 56Co by beta plus decay (with a half life of six days) and gamma rays. 56Co itself decays by the beta plus (positron) path with a half life of 77 days into stable 56Fe. These two processes are responsible for the electromagnetic radiation from type Ia supernovae. In combination with the changing transparency of the ejected material, they produce the rapidly declining light curve.
Core collapse supernovae are on average visually fainter than type Ia supernovae, but the total energy released is far higher, as outlined in the following table.
In some core collapse supernovae, fallback onto a black hole drives relativistic jets which may produce a brief energetic and directional burst of gamma rays and also transfers substantial further energy into the ejected material. This is one scenario for producing high-luminosity supernovae and is thought to be the cause of type Ic hypernovae and long-duration gamma-ray bursts. If the relativistic jets are too brief and fail to penetrate the stellar envelope then a low-luminosity gamma-ray burst may be produced and the supernova may be sub-luminous.
When a supernova occurs inside a small dense cloud of circumstellar material, it will produce a shock wave that can efficiently convert a high fraction of the kinetic energy into electromagnetic radiation. Even though the initial energy release is entirely normal, the resulting supernova will have high luminosity and extended duration, since it does not rely on exponential radioactive decay. This type of event may cause type IIn hypernovae.
Although pair-instability supernovae are core collapse supernovae with spectra and light curves similar to type II-P, the nature after core collapse is more like that of a giant type Ia with runaway fusion of carbon, oxygen and silicon. The total energy released by the highest-mass events is comparable to other core collapse supernovae but neutrino production is thought to be very low, hence the kinetic and electromagnetic energy released is very high. The cores of these stars are much larger than any white dwarf and the amount of radioactive nickel and other heavy elements ejected from their cores can be orders of magnitude higher, with consequently high visual luminosity.
Progenitor
The supernova classification type is closely tied to the type of progenitor star at the time of the collapse. The occurrence of each type of supernova depends on the star's metallicity, since this affects the strength of the stellar wind and thereby the rate at which the star loses mass.
Type Ia supernovae are produced from white dwarf stars in binary star systems and occur in all galaxy types. Core collapse supernovae are only found in galaxies undergoing current or very recent star formation, since they result from short-lived massive stars. They are most commonly found in type Sc spirals, but also in the arms of other spiral galaxies and in irregular galaxies, especially starburst galaxies.
Type Ib and Ic supernovae are hypothesised to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via strong stellar winds or mass transfer to a companion. They normally occur in regions of new star formation, and are extremely rare in elliptical galaxies. The progenitors of type IIn supernovae also have high rates of mass loss in the period just prior to their explosions. Type Ic supernovae have been observed to occur in regions that are more metal-rich and have higher star-formation rates than average for their host galaxies. The table shows the progenitor for the main types of core collapse supernova, and the approximate proportions that have been observed in the local neighbourhood.
There are a number of difficulties reconciling modelled and observed stellar evolution leading up to core collapse supernovae. Red supergiants are the progenitors for the vast majority of core collapse supernovae, and these have been observed but only at relatively low masses and luminosities, below about and , respectively. Most progenitors of type II supernovae are not detected and must be considerably fainter, and presumably less massive. This discrepancy has been referred to as the red supergiant problem. It was first described in 2009 by Stephen Smartt, who also coined the term. After performing a volume-limited search for supernovae, Smartt et al. found the lower and upper mass limits for type II-P supernovae to form to be and , respectively. The former is consistent with the expected upper mass limits for white dwarf progenitors to form, but the latter is not consistent with massive star populations in the Local Group. The upper limit for red supergiants that produce a visible supernova explosion has been calculated at .
It is thought that higher mass red supergiants do not explode as supernovae, but instead evolve back towards hotter temperatures. Several progenitors of type IIb supernovae have been confirmed, and these were K and G supergiants, plus one A supergiant. Yellow hypergiants or LBVs are proposed progenitors for type IIb supernovae, and almost all type IIb supernovae near enough to observe have shown such progenitors.
Blue supergiants form an unexpectedly high proportion of confirmed supernova progenitors, partly due to their high luminosity and easy detection, while not a single Wolf–Rayet progenitor has yet been clearly identified. Models have had difficulty showing how blue supergiants lose enough mass to reach supernova without progressing to a different evolutionary stage. One study has shown a possible route for low-luminosity post-red supergiant luminous blue variables to collapse, most likely as a type IIn supernova. Several examples of hot luminous progenitors of type IIn supernovae have been detected: SN 2005gy and SN 2010jl were both apparently massive luminous stars, but are very distant; and SN 2009ip had a highly luminous progenitor likely to have been an LBV, but is a peculiar supernova whose exact nature is disputed.
The progenitors of type Ib/c supernovae are not observed at all, and constraints on their possible luminosity are often lower than those of known WC stars. WO stars are extremely rare and visually relatively faint, so it is difficult to say whether such progenitors are missing or just yet to be observed. Very luminous progenitors have not been securely identified, despite numerous supernovae being observed near enough that such progenitors would have been clearly imaged. Population modelling shows that the observed type Ib/c supernovae could be reproduced by a mixture of single massive stars and stripped-envelope stars from interacting binary systems. The continued lack of unambiguous detection of progenitors for normal type Ib and Ic supernovae may be due to most massive stars collapsing directly to a black hole without a supernova outburst. Most of these supernovae are then produced from lower-mass low-luminosity helium stars in binary systems. A small number would be from rapidly rotating massive stars, likely corresponding to the highly energetic type Ic-BL events that are associated with long-duration gamma-ray bursts.
External impact
Supernova events generate heavier elements that are scattered throughout the surrounding interstellar medium. The expanding shock wave from a supernova can trigger star formation. Galactic cosmic rays are generated by supernova explosions.
Source of heavy elements
Supernovae are a major source of elements in the interstellar medium from oxygen through to rubidium, though the theoretical abundances of the elements produced or seen in the spectra vary significantly depending on the various supernova types. Type Ia supernovae produce mainly silicon and iron-peak elements, metals such as nickel and iron. Core collapse supernovae eject much smaller quantities of the iron-peak elements than type Ia supernovae, but larger masses of light alpha elements such as oxygen and neon, and elements heavier than zinc. The latter is especially true with electron capture supernovae. The bulk of the material ejected by type II supernovae is hydrogen and helium. The heavy elements are produced by: nuclear fusion for nuclei up to 34S; silicon photodisintegration rearrangement and quasiequilibrium during silicon burning for nuclei between 36Ar and 56Ni; and rapid capture of neutrons (r-process) during the supernova's collapse for elements heavier than iron. The r-process produces highly unstable nuclei that are rich in neutrons and that rapidly beta decay into more stable forms. In supernovae, r-process reactions are responsible for about half of all the isotopes of elements beyond iron, although neutron star mergers may be the main astrophysical source for many of these elements.
In the modern universe, old asymptotic giant branch (AGB) stars are the dominant source of dust from oxides, carbon and s-process elements. However, in the early universe, before AGB stars formed, supernovae may have been the main source of dust.
Role in stellar evolution
Remnants of many supernovae consist of a compact object and a rapidly expanding shock wave of material. This cloud of material sweeps up surrounding interstellar medium during a free expansion phase, which can last for up to two centuries. The wave then gradually undergoes a period of adiabatic expansion, and will slowly cool and mix with the surrounding interstellar medium over a period of about 10,000 years.
The Big Bang produced hydrogen, helium and traces of lithium, while all heavier elements are synthesised in stars, supernovae, and collisions between neutron stars (thus being indirectly due to supernovae). Supernovae tend to enrich the surrounding interstellar medium with elements other than hydrogen and helium, which usually astronomers refer to as "metals". These ejected elements ultimately enrich the molecular clouds that are the sites of star formation. Thus, each stellar generation has a slightly different composition, going from an almost pure mixture of hydrogen and helium to a more metal-rich composition. Supernovae are the dominant mechanism for distributing these heavier elements, which are formed in a star during its period of nuclear fusion. The different abundances of elements in the material that forms a star have important influences on the star's life, and may influence the possibility of having planets orbiting it: more giant planets form around stars of higher metallicity.
The kinetic energy of an expanding supernova remnant can trigger star formation by compressing nearby, dense molecular clouds in space. The increase in turbulent pressure can also prevent star formation if the cloud is unable to lose the excess energy.
Evidence from daughter products of short-lived radioactive isotopes shows that a nearby supernova helped determine the composition of the Solar System 4.5 billion years ago, and may even have triggered the formation of this system.
Fast radio bursts (FRBs) are intense, transient pulses of radio waves that typically last no more than milliseconds. Many explanations for these events have been proposed; magnetars produced by core-collapse supernovae are leading candidates.
Cosmic rays
Supernova remnants are thought to accelerate a large fraction of galactic primary cosmic rays, but direct evidence for cosmic ray production has only been found in a small number of remnants. Gamma rays from pion-decay have been detected from the supernova remnants IC 443 and W44. These are produced when accelerated protons from the remnant impact on interstellar material.
Gravitational waves
Supernovae are potentially strong galactic sources of gravitational waves, but none have so far been detected. The only gravitational wave events so far detected are from mergers of black holes and neutron stars, probable remnants of supernovae. Like the neutrino emissions, the gravitational waves produced by a core-collapse supernova are expected to arrive without the delay that affects light. Consequently, they may provide information about the core-collapse process that is unavailable by other means. Most gravitational-wave signals predicted by supernova models are short in duration, lasting less than a second, and thus difficult to detect. Using the arrival of a neutrino signal may provide a trigger that can identify the time window in which to seek the gravitational wave, helping to distinguish the latter from background noise.
Effect on Earth
A near-Earth supernova is a supernova close enough to the Earth to have noticeable effects on its biosphere. Depending upon the type and energy of the supernova, it could be as far as 3,000 light-years away.
In 1996 it was theorised that traces of past supernovae might be detectable on Earth in the form of metal isotope signatures in rock strata. Iron-60 enrichment was later reported in deep-sea rock of the Pacific Ocean. In 2009, elevated levels of nitrate ions were found in Antarctic ice, which coincided with the 1006 and 1054 supernovae. Gamma rays from these supernovae could have boosted atmospheric levels of nitrogen oxides, which became trapped in the ice.
Historically, nearby supernovae may have influenced the biodiversity of life on the planet. Geological records suggest that nearby supernova events have led to an increase in cosmic rays, which in turn produced a cooler climate. A greater temperature difference between the poles and the equator created stronger winds, increased ocean mixing, and resulted in the transport of nutrients to shallow waters along the continental shelves. This led to greater biodiversity.
Type Ia supernovae are thought to be potentially the most dangerous if they occur close enough to the Earth. Because these supernovae arise from dim, common white dwarf stars in binary systems, it is likely that a supernova that can affect the Earth will occur unpredictably and in a star system that is not well studied. The closest-known candidate is IK Pegasi (HR 8210), about 150 light-years away, but observations suggest it could be as long as 1.9 billion years before the white dwarf can accrete the critical mass required to become a type Ia supernova.
According to a 2003 estimate, a type II supernova would have to be closer than to destroy half of the Earth's ozone layer, and there are no such candidates closer than about 500 light-years.
Milky Way candidates
The next supernova in the Milky Way will likely be detectable even if it occurs on the far side of the galaxy. It is likely to be produced by the collapse of an unremarkable red supergiant, and it is very probable that it will already have been catalogued in infrared surveys such as 2MASS. There is a smaller chance that the next core collapse supernova will be produced by a different type of massive star such as a yellow hypergiant, luminous blue variable, or Wolf–Rayet. The chances of the next supernova being a type Ia produced by a white dwarf are calculated to be about a third of those for a core collapse supernova. Again it should be observable wherever it occurs, but it is less likely that the progenitor will ever have been observed. It is not even known exactly what a type Ia progenitor system looks like, and it is difficult to detect them beyond a few parsecs. The total supernova rate in the Milky Way is estimated to be between 2 and 12 per century, although one has not actually been observed for several centuries.
Statistically, the most common variety of core-collapse supernova is type II-P, and the progenitors of this type are red supergiants. It is difficult to identify which of those supergiants are in the final stages of heavy element fusion in their cores and which have millions of years left. The most-massive red supergiants shed their atmospheres and evolve to Wolf–Rayet stars before their cores collapse. All Wolf–Rayet stars end their lives from the Wolf–Rayet phase within a million years or so, but again it is difficult to identify those that are closest to core collapse. One class that is expected to have no more than a few thousand years before exploding are the WO Wolf–Rayet stars, which are known to have exhausted their core helium. Only eight of them are known, and only four of those are in the Milky Way.
A number of close or well-known stars have been identified as possible core collapse supernova candidates: the high-mass blue stars Spica and Rigel, the red supergiants Betelgeuse, Antares, and VV Cephei A; the yellow hypergiant Rho Cassiopeiae; the luminous blue variable Eta Carinae that has already produced a supernova impostor; and both components, a blue supergiant and a Wolf–Rayet star, of the Regor or Gamma Velorum system. Mimosa and Acrux, two bright star systems in the southern constellation of Crux, each contain blue stars with sufficient masses to explode as supernovae. Others have gained notoriety as possible, although not very likely, progenitors for a gamma-ray burst; for example WR 104.
Identification of candidates for a type Ia supernova is much more speculative. Any binary with an accreting white dwarf might produce a supernova although the exact mechanism and timescale is still debated. These systems are faint and difficult to identify, but the novae and recurrent novae are such systems that conveniently advertise themselves. One example is U Scorpii.
See also
List of supernovae
List of supernova remnants
References
Further reading
External links
A searchable catalogue
An open-access catalog of supernova light curves and spectra.
Articles containing video clips
Astronomical events
Light sources
Standard candles
Stellar evolution
Stellar phenomena
Concepts in astronomy | Supernova | Physics,Chemistry,Astronomy | 13,160 |
19,135,734 | https://en.wikipedia.org/wiki/Splunk | Splunk Inc. is an American software company based in San Francisco, California, that produces software for searching, monitoring, and analyzing machine-generated data via a web-style interface. Its software helps capture, index and correlate real-time data in a searchable repository, from which it can generate graphs, reports, alerts, dashboards and visualizations.
The firm uses machine data for identifying data patterns, providing metrics, diagnosing problems and providing intelligence for business operations. It is a horizontal technology used for application management, security and compliance, as well as business and web analytics.
In September 2023, it was announced that Splunk would be acquired by Cisco for $28 billion in an all-cash deal. The transaction was completed on March 18, 2024.
History
Founding & early years
Michael Baum, Rob Das and Erik Swan co-founded Splunk Inc in 2003. Venture firms August Capital, Sevin Rosen, Ignition Partners and JK&B Capital backed the company.
By 2007, Splunk had raised . It became profitable in 2009. In 2012, Splunk had its initial public offering, trading under NASDAQ symbol SPLK.
Company growth
In September 2013 the company acquired BugSense, a mobile-device data-analytics company. BugSense provides "a mobile analytics platform used by developers to improve app performance and improve quality." It supplied a "software developer kit" to give developers access to data analytics from mobile devices that it managed from its scalable cloud platform. The acquisition amount was undisclosed.
In December 2013, Splunk acquired Cloudmeter, a provider of network data capture technologies. In June 2015, Splunk acquired the software company Metafor that uses machine learning technology to analyze data generated from IT infrastructure and applications. In July 2015, Splunk acquired Caspida, a cybersecurity startup, for .
In October 2015, Splunk sealed a "cybersecurity alliance" with U.S. government security contractor Booz Allen Hamilton Inc. to offer combined cyber threat detection and intelligence-analysis technology.
In 2016, Splunk pledged to donate $100 million in software licenses, training, support, education, and volunteerism for nonprofits and schools over a 10-year period.
According to Glassdoor, it was the fourth highest-paying company for employees in the United States in April 2017. In May 2017, Splunk acquired Drastin, a software company that provides search-based analytics for enterprises.
In September 2017, Splunk acquired SignalSense which developed cloud-based data collection and breach detection software. Splunk announced it was using machine learning about that time.
In October 2017, Splunk acquired technology and intellectual property from smaller rival Rocana. On April 9, 2018, Splunk acquired Phantom Cyber Corporation for approximately US$350 million. In April 2018, it reached US$14.8 billion of market capitalization. On June 11, 2018, Splunk announced its acquisition of VictorOps, a DevOps incident management startup, for US$120 million. In July 2018, Splunk acquired KryptonCloud, an industrial IoT and analytics SaaS company. In August 2019, Splunk announced the $1.05 billion acquisition of the cloud monitoring company SignalFx, which was completed in October 2019. Two weeks after that announcement, on September 4, 2019, Splunk acquired Omnition—an early-stage startup specializing in distributed tracing—for an undisclosed amount.
Splunk also announced the launch of its corporate venture fund, Splunk Ventures—a $100 million Innovation Fund and a $50 million Social Impact Fund to invest in early-stage startups.
Recent history
Splunk reported its fiscal 2021 fourth-quarter revenue of $745.1 million. For all of fiscal 2021, Splunk reported revenue of $2.23 billion. On November 15, 2021, Douglas Merritt stepped down as president and CEO. Graham Smith, Splunk's chairman since 2019, took over as interim CEO. On March 2, 2022, Splunk named Gary Steele, previously at Proofpoint, as its CEO and the successor to interim chief Graham Smith effective April 2022.
Cisco acquisition
On September 21, 2023 Cisco announced it would acquire Splunk for $28 billion in an all-cash deal. In November 2023, the company announced layoffs affecting 7% or 500 of its employees, following an earlier reduction of 300 staff in the same year. CEO Gary Steele clarified in a letter to employees, filed with the U.S. Securities and Exchange Commission, that the decision was not related to the Cisco deal.
In April 2024, Splunk won an infringement case against Cribl, Inc., a startup competitor, for copying enterprise data analysis software. The jury awarded Splunk $1 in damages.
The acquisition of Splunk was completed in March 2024. It was the largest deal in Cisco's history. At the time, Splunk had 1,100 patents, with clients such as Singapore Airlines, Papa Johns, Heineken, and McLaren. Splunk continued under the same management, with pricing projected to stay the same.
In May 2024, former Splunk CEO Gary Steele was promoted to a Cisco executive, although Splunk continued to report to him. He remained Splunk general manager. Cisco's observability product development including its Cisco AppDynamics software was moved into Splunk after the integration.
Products
Splunk's core offering collects and analyzes high volumes of machine-generated data. It can use a lightweight agent to collect log messages from local files, receive them via the TCP or UDP syslog protocol on an open port (not preferred), or call scripts to collect events from various application programming interfaces (APIs) that connect to applications and devices. It was developed for troubleshooting and monitoring distributed applications based on log messages.
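As a hedged illustration of the syslog collection path only, a client could ship an event over UDP using Python's standard logging module; the indexer host name, port, and event text below are placeholders, not values from any Splunk documentation.
# Minimal sketch of sending one event to an indexer configured with an open
# syslog input (the text above notes this path is not the preferred one).
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("splunk-indexer.example.com", 514))
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("user=alice action=login status=success")  # simple key=value text is easy to search later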
Splunk Enterprise Security (ES) provides security information and event management (SIEM) for machine data generated from security technologies such as network, endpoints, access, malware, vulnerability, and identity information. It is a premium application that is licensed independently.
In 2011, Splunk released Splunk Storm, a cloud-based version of the core Splunk product. Splunk Storm offered a turnkey, managed, and hosted service for machine data. In 2013, Splunk announced that Splunk Storm would become a completely free service and expanded its cloud offering with Splunk Cloud. In 2015, Splunk shut down Splunk Storm.
In 2013, Splunk announced a product called Hunk: Splunk Analytics for Hadoop, which supports accessing, searching, and reporting on external data sets located in Hadoop from a Splunk interface.
In 2015, Splunk announced a Light version of the core Splunk product aimed at smaller IT environments and mid-sized enterprises. Splunk debuted Splunk IT Service Intelligence (ITSI) in September 2015. ITSI leverages Splunk data to provide visibility into IT performance. Software analytics can detect anomalies and determine their causes and the areas it affects.
The free community edition of Splunk Security Orchestration, Automation and Response (SOAR) is limited to 100 actions per day and is used to automate tasks, orchestrate workflows, and reduce incident response times for cloud, on-premises, or hybrid deployments.
Cloud transformation
In 2016, Google announced that its cloud platform would integrate with Splunk to expand in areas like IT operations, security, and compliance. The company also announced additional machine learning capabilities for several of its major product offerings, which are installed on top of the platform. Splunk Cloud received FedRAMP authorization from the General Services Administration FedRAMP Program Management Office at the moderate level in 2019, enabling Splunk to sell to the federal government. To help companies manage the shift to a multi-cloud environment, Splunk launched its Observability Cloud, which combines infrastructure monitoring, application performance monitoring, digital experience monitoring, log investigation, and incident response capabilities. In 2020, the company announced that Splunk Cloud was available on the Google Cloud Platform and launched an initiative with Amazon Web Services to help customers migrate on-premises Splunk workloads to Splunk Cloud on the AWS cloud. The Google Cloud availability allows customers to access Google's AI and machine learning services and power them with data from Splunk, and, by integrating with Google Anthos and Google Cloud Security Command Center, Splunk data can be shared among different cloud-based applications.
In 2017, Splunk introduced Splunk Insights for ransomware, an analytics tool for assessing and investigating potential threats by ingesting event logs from multiple sources. The software is targeted toward smaller organizations like universities. The company also launched Splunk Insights for AWS Cloud Monitoring, a service to facilitate enterprises' migration to Amazon Web Services' cloud.
In 2018, Splunk introduced Splunk Industrial Asset Intelligence, which extracts information from Industrial Internet of Things (IIoT) data from various sources and presents its users with critical alerts.
In 2019, Splunk announced new capabilities for its platform, including the general availability of Data Fabric Search and Data Stream Processor. Data Fabric Search combines datasets across different data stores, including those that are not Splunk-based, into a single view. The required data structure is only created when a query is run.
Data Stream Processor is a real-time processing product that collects data from various sources and then distributes results to Splunk or other destinations. It allows role-based access to create alerts and reports based on data that is relevant for each individual. In 2020, it was updated to allow it to access, process, and route real-time data from multiple cloud services. Also, in 2019, Splunk rolled out Splunk Connected Experiences, which extends its data processing and analytics capabilities to augmented reality (AR), mobile devices, and mobile applications.
In 2020, Splunk announced Splunk Enterprise 8.1 and the Splunk Cloud edition. They include stream processing, machine learning, and multi-cloud capabilities.
In October 2019, Splunk announced the integration of its security tools, including security information and event management (SIEM), user behavior analytics (UBA), and security orchestration, automation, and response (Splunk Phantom), into the new Splunk Mission Control.
In 2019, Splunk introduced an application performance monitoring (APM) platform, SignalFx Microservices APM, that pairs "no-sample" monitoring and analysis features with Omnition's full-fidelity tracing capabilities. Splunk also announced that a capability called Kubernetes Navigator would be available through its product, SignalFx Infrastructure Monitoring.
Splunkbase
Splunkbase is a community hosted by Splunk where users can find apps and add-ons that extend the functionality and usefulness of Splunk and provide a quick and easy interface for specific use cases or vendor products. As of October 2019, more than 2,000 apps were available on the site.
Integrations on Splunkbase include the Splunk App for New Relic, the ForeScout Extended Module for Splunk, and Splunk App for AWS.
Sponsorships
McLaren
Starting in 2020, Splunk announced a partnership with the McLaren Formula One team, sponsoring the team and working with them to provide data analysis and insight on racing performance.
Splunk worked with McLaren Racing for several years, evaluating the performance data pulled from the nearly 300 sensors on every racecar, before becoming McLaren's official technology partner in February 2020. The partnership resulted in Splunk deployed across the McLaren Group. This included using Splunk to interpret data from McLaren's e-sports team. As part of the partnership, Splunk's logo was added to the sidepod and cockpit surrounds of the MCL35 racecar.
Trek-Segafredo
In November 2018, Splunk signed a sponsorship deal with the Trek-Segafredo professional road cycling team; the partnership started in 2019. Splunk replaced CA Industries as the team's technology partner. Splunk provides data analysis for the team, including analysis of riders, coaches, and mechanics. Team jerseys, bikes, and vehicles carry Splunk branding. Splunk also participates in Trek's race hospitality program.
References
External links
2012 initial public offerings
Companies based in San Francisco
Companies formerly listed on the Nasdaq
Software companies based in the San Francisco Bay Area
American companies established in 2003
Software companies established in 2003
Business intelligence companies
Computer security companies
Cloud computing providers
Big data companies
Data security
Software companies of the United States
System administration
Proprietary software
Reporting software
Website monitoring software
2024 mergers and acquisitions
Cisco Systems acquisitions | Splunk | Technology,Engineering | 2,663 |
20,698,768 | https://en.wikipedia.org/wiki/Burr%2C%20Egan%2C%20Deleage%20%26%20Co. | Burr, Egan, Deleage & Co. (BEDCO) was a venture capital firm which focused on investments in information technology, communications, and healthcare/biotechnology companies.
BEDCO was one of the first venture capital firms to set up a bi-coastal operation, with a presence in both Boston, Massachusetts, and Silicon Valley in California.
The firm was founded in 1979 and dissolved in 1996, with Alta Partners and Alta Communications assuming management responsibility for certain existing investments. Other successor firms include Polaris Venture Partners and Alta Berkeley.
History
The firm was founded in 1979 by Craig Burr, William P. Egan, and Dr. Jean Deleage, Ph.D. Burr and Egan had worked together at the pioneering private equity firm TA Associates. Deleage had come from Sofinnova Partners, a venture capital firm in France that he had co-founded in 1971. In 1976, Deleage had founded Sofinnova’s US affiliate, based in San Francisco. At its zenith the company managed over $700 million in assets and today is the direct predecessor of four venture capital firms with aggregate capital in excess of $5.7 billion:
Polaris Venture Partners based in Waltham, Massachusetts
Alta Partners based in California
Alta Communications based in Boston
Alta Berkeley Capital based in London.
Among BEDCO’s most notable investments were Continental Cablevision with H. Irving Grousbeck, Qwest with Philip Anschutz, Cephalon, American Superconductor and SyQuest Technology.
Investment funds
From its founding in 1979 through its dissolution in 1996, BEDCO raised more than a dozen venture capital funds with total investor commitments of approximately $700 million.
1980 - $12m - Alta Company
1982 - $60m - Alta II (Late Stage)
1982 - $10m - Alta Berkeley Venture Partners (Bal. Stage)
1983 - $18m - Alta Berkeley LP Liquidating Trust (Early Stage)
1984 - $15.7m - Alta Berkeley Eurofund (Early Stage)
1986 - $45m - Alta Subordinated Debt (Mezzanine)
1986 - $32.3m - Alta Berkeley L.P. II (Early Stage)
1988 - $35m - Alta Communications IV (Late Stage)
1988 - $124m - Alta IV (Late Stage)
1988 - $68m - Alta Subordinated Debt II (Mezzanine)
1992 - $161m - Alta V
1993 - $96m - Alta Subordinated Debt III (Mezzanine)
1993 - $27m - Alta Berkeley III (Early Stage)
Successor firms
Polaris Venture Partners
Polaris Venture Partners was the first of four successors to BEDCO. One of the contributing factors in the dissolution was the departure of three of BEDCO's junior partners: Jon Flint, Terry McGuire, and Steve Arnold. The three left BEDCO in 1995 and began raising their first independent venture capital fund. Today, Polaris Venture Partners is a venture capital firm specializing in seed and early stage investments, particularly in companies engaged in the information technology and life sciences sectors. Polaris is based in Waltham, Massachusetts, with an additional office in Seattle, Washington.
Since 1995, Polaris has raised five venture capital funds with over $2.6 billion of investor commitments. To date, all of the Polaris funds have been above average performers for their respective vintage years. Polaris Venture Partners V, which was closed in 2006, raised $1 billion of investor commitments.
Alta Partners
Alta Partners was founded by BEDCO co-founder Dr. Jean Deleage, Ph.D., along with Garrett Gruener, a former partner at BEDCO. Today, the firm, which is based in San Francisco, invests in biotechnology and life science companies and has made over 130 investments since its separation from BEDCO.
Since 1996, Alta has raised eight venture capital funds, including four funds in its Alta California Funds series, three funds in its Alta Biopharma Partners series and, most recently in 2006, Alta Partners VIII with $500 million of investor commitments.
Alta Communications
Alta Communications was founded by BEDCO co-founders Craig Burr and William P. Egan. Today the firm, which is based in Waltham, Massachusetts (just outside Boston), invests primarily in later-stage opportunities in the traditional media and telecommunications sectors.
Since 1996, Alta Communications has raised four venture capital funds with over $1.4 billion of investor commitments.
Alta Berkeley
Alta Berkeley was formed from BEDCO’s European operations. In 1982, Bryan Wood, who had been working in corporate finance in Britain, partnered with BEDCO to establish a presence in Europe. Today, Alta Berkeley makes early stage investments in technology companies based in Europe and Israel. Alta Berkeley's sixth and most recent fund raised approximately $145 million of investor commitments.
References
A Generation Gap in Venture Capital
Gupta, Udayan. Done Deals: Venture Capitalists Tell Their Stories, 2000
Venture-Capital Firms Prepare for Next Generation of Partners
Jean Deleage (Forbes Profile)
"The Thrill of Defeat". The Boston Globe, February 2001
Cocktails & Conversation with Bill Egan, Alta Communications. Wharton School of Business, 2006.
External links
Polaris Venture Partners
Alta Partners (California)
Alta Communications (Boston)
Alta Berkeley (UK)
Financial services companies established in 1979
TA Associates
Venture capital firms of the United States
Companies based in Boston
Life sciences industry
Financial services companies disestablished in 1996
American companies established in 1979 | Burr, Egan, Deleage & Co. | Biology | 1,096 |
5,181,818 | https://en.wikipedia.org/wiki/Time-use%20research | Time-use research is an interdisciplinary field of study dedicated to learning how people allocate their time during an average day. Work intensity is the umbrella topic that incorporates time use, specifically time poverty.
The comprehensive approach to time-use research addresses a wide array of political, economic, social, and cultural issues through the use of time-use surveys. Surveys provide geographic data and time diaries that volunteers record, often with the aid of GPS technology. Time-use research investigates human activity inside and outside the paid economy and looks at how these activities change over time.
Time-use research is not to be confused with time management. Time-use research is a social science interested in human behavioural patterns and seeks to build a body of knowledge to benefit a wide array of disciplines interested in how people use their time. Time management is an approach to time allocation with a specific managerial purpose aimed at increasing the efficiency or effectiveness of a given process.
Questions relating to time-use research arise in most professional and academic disciplines, notably:
urban planning and urban design (how does community design impact people's use of time?)
transportation planning (what groups use active transportation and public transit?)
social work (how do people maintain social relationships and who is more likely to spend time alone?)
recreation and active living (which groups are more physically active?)
information technology (what role does information technology play in people's daily lives?)
feminist economics (how does non-market work affect gender inequality and economic well-being in our society?)
Categories of time
Time-use researcher Dagfinn Aas classifies time into four meaningful categories: contracted time; committed time; necessary time; and free time.
Contracted time
Contracted time refers to the time a person allocates toward an agreement to work or study. When a person is using contracted time to commute this person understands that this travel time is directly related to paid work or study and any break in this commute time directly affects job- or school-related performance.
Committed time
Committed time, like contracted time, takes priority over necessary and free time because it is viewed as productive work. It refers to the time allocated to maintain a home and family. When a person is commuting using committed time this person may feel that the commute is a duty to family such as walking children to school or driving a spouse to work. Contracted and committed time users may feel that their commute is more important than the commute of necessary or free time users because their commute is productive work. Therefore, they may be more inclined to choose a motorized mode of travel.
Necessary time
Necessary time refers to the time required to maintain one’s self as it applies to activities such as eating, sleeping, and cleansing and to a large extent exercising. People who commute using necessary time may feel that the commute is an important activity for personal well-being and may also take into account the well-being of the natural and social environment. The person commuting in necessary time may be more inclined to choose an active mode of transportation for personal reasons that include exercise on top of transportation.
Since sleeping is included in this category, necessary time usually constitutes the majority of people’s time.
Free time
Free time refers to what remains of the day after the three other types of time have been subtracted from the 24-hour day. This type of time is not necessarily discretionary time, as the term "free" time may imply, because people tend to plan activities in advance, creating committed free time in lieu of discretionary time. People who commute using free time are more apt to view the commute as a recreational activity. Commuting in free time provides the greatest gains for social capital because a person commuting in free time is more likely to slow down or stop the commute at their discretion to undertake another activity or engage in social interaction. They may also view the commute as part of the destination activity to which they have gladly committed their free time.
Primary vs. secondary time
The distinction between primary and secondary time is a way to include activities when multitasking. Activities that take place at the same time are separated into primary and secondary categories based on the priority assigned to each, with the activity with the highest priority considered the primary one. This distinction plays an important role when evaluating time spent on activities that are often considered secondary when multitasking, as overlooking secondary activities can lead to significant underestimation of the time committed to those activities.
According to research in Australia, approximately two thirds of time spent on childcare is considered secondary time. Research in the United States is more variable ranging from approximately one third to approximately three fourths of time spent on childcare being secondary time.
Primary time
Primary time refers to time spent on a primary activity only. The primary activity is the activity that has the highest priority. For example, the primary task when drinking coffee while working would be working and the time therefore classified as contracted time. Assigning priority to each activity is left up to the person recording their time usage and similar combinations of activities may be treated differently under different circumstances. While eating in front of a television, both eating and watching television could be considered the primary activity depending on the circumstances.
Secondary time
Secondary time is the time spent on secondary or side activities. When drinking coffee while working, drinking coffee would be the secondary activity and would be considered necessary time even though the primary activity, working, would be classified as contracted time. Unlike primary time, secondary time does not necessarily add up to 24 hours each day because there may not always be a secondary activity. It is also important to note that including secondary time may make it appear that a person spends more than 24 hours a day on activities, due to the overlapping nature of primary and secondary time.
Journals
electronic International Journal of Time Use Research
Review of Economics of the Household
Demography — Scope and links to issue contents & abstracts.
Journal of Population Economics — Aims and scope and 20th Anniversary statement, 2006.
Feminist Economics
See also
Time geography
Return on time invested
References
External links
International Association for Time Use Research
Saint Mary's University - Time Use Research Program
University of Oxford - Centre for Time Use Research
The Time Use Institute
Urban planning
Transportation planning
Economics and time
Feminist economics | Time-use research | Physics,Engineering | 1,266 |
61,606,444 | https://en.wikipedia.org/wiki/Supportive%20communication | Supportive communication is the support given, both verbal and nonverbal, in times of stress, heartbreak, physical and emotional distress, and other life stages that cause distress. The intention of this support is to assist those seen as being in need of it. For example, individuals could be struggling with anger, frustration, hurt, or physical distress, and supportive communication becomes a strategy used to help them cope with those feelings and experiences. At times, individuals do not like facing things alone, so they will seek supportive communication from family, friends, and other trusted sources. At other times, family and friends will offer supportive communication to someone they feel is in need of it. The impact of supportive communication has varied across research studies, partially due to how the communication is received: an individual may not receive the support in the intended way, or it may dredge up previous stress emotions and intensify them. The field of social support is still relatively new, with the typologies below having been discussed only as recently as the mid to late 1970s.
Background
Research on the topic of supportive communication, or variations thereof, has fairly recent beginnings, with most of the substantial work starting in the mid to late 1970s. Early research recognized the role of communication in helping others specifically as a form of social support, which also garnered quite a bit of attention in this period. Scholars found that, unlike the sociological and psychological perspectives on social support, the supportive communication aspect concerns the actual communication of support, whereas the psychological perspective covers the perceived belief of support and the sociological perspective is considered more the role of social integration.
Research in supportive communication has utilized a typology of supportive behaviors created in the 1970s and 1980s, which includes emotional, esteem, network, informational, and tangible support. Through these typologies, researchers have been able to better study the impact of each type of support.
Types
Nurturant support
Emotional
This type of supportive communication is used to help those who are experiencing emotional distress. This distress can be due to many environmental factors, some of which are listed above, but all are emotional stressors. The goal is to help alleviate the pain on an emotional level, although such support cannot necessarily help on a physical level.
Esteem
The esteem type of supportive communication, in contrast to emotional supportive communication, encourages the individual in need of support on a different level. This type of support enhances the individual's feelings about themselves. The support highlights the individual's accomplishments, abilities, and/or attributes in an effort to provide support when the supporter recognizes the need for it.
Network
Network support is a type of support that gives the individual a sense of belonging among a group of individuals who may have experienced the same stressors that the individual is currently going through. This can be found by creating a group of individuals the person already knows, or joining a support group specifically categorized by the type of support the individual needs.
Action-facilitating support
Informational
On the other end of the spectrum from emotional or esteem support, informational support focuses more on the practical application of the support that is given. Its practical applicability can range from advice on what to do to feedback on what should not have been done. This type of support is also known as an action-facilitating support type, whose main goal is helping to solve the problem that is causing the stress.
Tangible
This type of support, much like informational support, is considered an action-facilitating support type. The difference between tangible and informational support is the act of assisting rather than just giving advice or verbal support. Tangible support seeks to provide money, housing, transportation, or other such services to help alleviate stressors in the individual's life.
Supportive communication in business
Supportive communication helps employees to communicate accurately and honestly without jeopardizing interpersonal relationships. Supportive communication aims to preserve the relationship employees have even if management or other employees have to correct or point out a mistake in someone's actions.
Social media
Social media has created a platform not only for sharing information, but also for individuals to seek supportive communication. Positive affirmation and communication on social media platforms have been linked to positive psychological benefits, reinforcing the idea that supportive communication helps in an emotional state. Social media has also given individuals the idea of social capital, where individuals believe they have created a network they can rely on when support is needed. Looking at the definition above for social support, we can see how social media can potentially provide emotional, informational, esteem, and even network support.
With the open structure of social media, a world of communication is opened for both positive and negative reinforcement. Bullying has become a prevalent concern in discussions of social media. Cyberbullying can occur because of race, sexual orientation, age, and political preference, among other attributes. Bullied individuals can experience real-life impacts outside the digital world. This experience, without the supportive communication of their network, can lead to stress, anxiety, and other social factors affecting their daily lives. Emotional, informational, esteem, and network supportive communication can be especially beneficial to individuals experiencing bullying, as they receive the message that they are valued and cared for.
See also
Supportive psychotherapy
References
Human communication | Supportive communication | Biology | 1,076 |
21,766,209 | https://en.wikipedia.org/wiki/Koliada | Koliada or koleda (Cyrillic: коляда, коледа, колада, коледе) is the traditional Slavic name for the period from Christmas to Epiphany or, more generally, for Slavic Christmas-related rituals, some dating to pre-Christian times. It represents a festival or holiday, celebrated at the end of December to honor the sun during the Northern-hemisphere winter solstice. It also involves groups of singers who visit houses to sing carols.
Terminology
The word is still used in modern Russian (Коляда́), Ukrainian ("Коляда", Koliadá), Belarusian (Каляда, Kalada, Kaliada), Polish (Szczodre Gody, kolęda), Bulgarian, Macedonian, Serbo-Croatian (Коледа, Коледе, koleda, kolenda), Lithuanian (Kalėdos, Kalėda), Czech, Slovak, Slovene (koleda) and Romanian (Colindă).
The word used in the Old Church Slavonic language (Колѧда, Kolęda) sounds closest to the current Polish pronunciation, as Polish is one of the two Slavic languages which retain the nasal vowels of the Proto-Slavic language (the other being the closely related Kashubian). One theory states that Koliada is the name of a cycle of winter rituals stemming from the ancient calendae, as for example the Kalenda Proclamation.
In modern Belarusian, Ukrainian (koliada), Czech, Slovak, Croatian (koleda, kolenda), Kashubian (kòlãda [kwɛlãda]) and Polish (kolęda, Old Polish kolenda), the meaning has shifted from Christmas itself to denoting the tradition of strolling, singing, and having fun on Christmas Eve, as it has among the Balkan Slavs. It specifically applies to children and teens who walk house to house greeting people, singing, and sifting grain to convey best wishes, receiving candy and small money in return. The action is called kolyadovanye in Russian and kolyaduvannya (колядування) in Ukrainian, and is now applied to similar Old East Slavic celebrations of other significant old holidays, such as Generous Eve, the evening before New Year's Day, as well as the celebration of the arrival of spring. Similarly, in Bulgaria and North Macedonia, in the tradition of koleduvane (коледуване) or koledarenje (коледарење) around Christmas, groups of children visit houses, singing carols and receiving gifts at parting. The children, called koledari or rarely kolezhdani, sing kolyadki (carols).
Koleda is also celebrated across northern Greece by the Slavic speakers of Greek Macedonia, in areas from Florina to Thessaloniki, where it is called Koleda (Κόλιντα, Κόλιαντα) or Koleda Babo (Κόλιντα Μπάμπω) which means "Koleda Grandmother" in Slavic. It is celebrated before Christmas by gathering in the village square and lighting a bonfire, followed by local Macedonian music and dancing.
Croatian composer Jakov Gotovac wrote in 1925 the composition "Koleda", which he called a "folk rite in five parts", for male choir and small orchestra (three clarinets, two bassoons, timpani and drum). Also, Dubrovnik kolenda is one of the oldest recorded traditions of this kind in Croatia (its first mentioned in 13th century). There is also a dance from Dubrovnik called "The Dubrovnik Koleda."
It is celebrated in the Büyükmandıra village of Babaeski district, Kırklareli Province, in Turkey as a Halloween-like festival and dates back a thousand years.
See also
Colindă, a similar Romanian/Moldovan tradition
Korochun
Crăciun (disambiguation)
Twelfth Night (holiday)
Yule
Christmas carol
List of Christmas carols
Ķekatas
Koliadka
Koledari
Mummering
Turoń
Koleda (Koledovanie) in the Serbian tradition
Kalenda Proclamation
Shchedryk (song)
Calennig
Christmas Waits
Beltane, Gaelic festival in honour of the sun
Festive Procession with a Song. Kolyada
References
Intangible Cultural Heritage of Humanity
Slavic culture
Slavic holidays
Folk calendar of the East Slavs
Belarusian traditions
Bulgarian traditions
Czech traditions
Polish traditions
Russian traditions
Serbian traditions
Slovak traditions
Ukrainian traditions
Slavic Christmas traditions
Winter solstice
Intangible Cultural Heritage of Ukraine
Croatian traditions | Koliada | Astronomy | 1,016 |
43,639,818 | https://en.wikipedia.org/wiki/Pier%20Luigi%20Ighina | Pier Luigi Ighina (1908 in Milan – 2004 in Imola), was an Italian researcher. His unorthodox theories on electromagnetism are not recognized by the scientific community.
Biography
Pier Luigi Ighina studied electronics and radio transmission in Milan and worked at Magneti Marelli, CGE (Compagnia Generale di Elettricità), and then at Ansaldo Lorenz in Genova. He claimed to have been an assistant of Guglielmo Marconi for a number of scientific findings. However, no official proof of these collaborations is known.
The Theory of "Magnetic Atom"
His theories involve the concept of the pulsing "magnetic atom" at the base of matter. This magnetic atom could be scattered into magnetic monopoles, whose interactions are at the basis of life and matter. Waves of magnetic atoms are supposedly exchanged continuously between the Sun and the Earth, with a proper frequency and shape. Based on these claims, Ighina built some extravagant machines, in particular one to prevent earthquakes and another to make it rain. His discoveries and inventions were considered mysterious and nothing short of revolutionary by his supporters. He claimed that for years he was an assistant to Guglielmo Marconi, from whom he learned some secrets about physics that would be useful for his future theories. However, Ighina did not patent any of his machines or findings.
Influences in Art
The Italian musician Franco Battiato directly cited Ighina's claims in his album Pollution (1972).
Bibliography
Pier Luigi Ighina, La scoperta dell'atomo magnetico, Imola, Galeati, 1954.
Giusy Zitoli, Io l'ho conosciuto. L'uomo che avrebbe potuto salvare il pianeta da una distruzione "annunciata". Chi è dunque? Scienziato geniale o "extraterrestre" incarnato uomo?, Pogliano Milanese, Atlantide, 1996. .
"Ighina, Pierluigi", in Paolo Albani e Paolo della Bella, Forse Queneau. Enciclopedia delle scienze anomale, Bologna, Zanichelli, 1999, p. 206. .
Massimo Barbieri, comments on L'atomo magnetico by Pier Luigi Ighina, 27 dicembre 2001, in Tecnologie di frontiera.
Alberto Tavanti (a cura di), Pier Luigi Ighina profeta sconosciuto, 2007.
Alberto Tavanti (a cura di), 1908–2008. Centenario della nascita di Pier Luigi Ighina, un uomo venuto dal futuro, Faenza, 2008.
Antonio Castronuovo, "Cent'anni di Ighina, scienziato anomalo", in Università aperta. Terza pagina, n. 11 (2008), pp. 10–11.
References
1908 births
2004 deaths
Experimental physicists
20th-century Italian scientists
Scientists from Milan | Pier Luigi Ighina | Physics | 642 |
30,791,395 | https://en.wikipedia.org/wiki/Flusilazole | Flusilazole (DPX-H6573) is an organosilicon fungicide invented by DuPont, which is used to control fungal infections on a variety of fruit and vegetable crops. It is moderately toxic to animals and has been shown to produce birth defects in high doses.
References
External links
Fungicides
Organosilicon compounds
Embryotoxicants
4-Fluorophenyl compounds | Flusilazole | Chemistry,Biology | 86 |
10,140,499 | https://en.wikipedia.org/wiki/Algorithm%20engineering | Algorithm engineering focuses on the design, analysis, implementation, optimization, profiling and experimental evaluation of computer algorithms, bridging the gap between algorithmics theory and practical applications of algorithms in software engineering.
It is a general methodology for algorithmic research.
Origins
In 1995, a report from an NSF-sponsored workshop "with the purpose of assessing the current goals and directions of the Theory of Computing (TOC) community" identified the slow speed of adoption of theoretical insights by practitioners as an important issue and suggested measures to
reduce the uncertainty by practitioners whether a certain theoretical breakthrough will translate into practical gains in their field of work, and
tackle the lack of ready-to-use algorithm libraries, which provide stable, bug-free and well-tested implementations for algorithmic problems and expose an easy-to-use interface for library consumers.
But also, promising algorithmic approaches have been neglected due to difficulties in mathematical analysis.
The term "algorithm engineering" was first used with specificity in 1997, with the first Workshop on Algorithm Engineering (WAE97), organized by Giuseppe F. Italiano.
Difference from algorithm theory
Algorithm engineering does not intend to replace or compete with algorithm theory, but tries to enrich, refine and reinforce its formal approaches with experimental algorithmics (also called empirical algorithmics).
This way it can provide new insights into the efficiency and performance of algorithms in cases where
the algorithm at hand is less amenable to algorithm theoretic analysis,
formal analysis pessimistically suggests bounds which are unlikely to appear on inputs of practical interest,
the algorithm relies on the intricacies of modern hardware architectures like data locality, branch prediction, instruction stalls, instruction latencies which the machine model used in Algorithm Theory is unable to capture in the required detail,
the crossover between competing algorithms with different constant costs and asymptotic behaviors needs to be determined.
Methodology
Some researchers describe algorithm engineering's methodology as a cycle consisting of algorithm design, analysis, implementation and experimental evaluation, joined by further aspects like machine models or realistic inputs.
They argue that equating algorithm engineering with experimental algorithmics is too limited, because viewing design and analysis, implementation and experimentation as separate activities ignores the crucial feedback loop between those elements of algorithm engineering.
Realistic models and real inputs
While specific applications are outside the methodology of algorithm engineering, they play an important role in shaping realistic models of the problem and the underlying machine, and supply real inputs and other design parameters for experiments.
Design
Compared to algorithm theory, which usually focuses on the asymptotic behavior of algorithms, algorithm engineers need to keep further requirements in mind: Simplicity of the algorithm, implementability in programming languages on real hardware, and allowing code reuse.
Additionally, constant factors of algorithms have such a considerable impact on real-world inputs that sometimes an algorithm with worse asymptotic behavior performs better in practice due to lower constant factors.
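As a hedged illustration of this point, the following sketch compares insertion sort (O(n²)) against mergesort (O(n log n)) on a deliberately tiny input; the input size and repetition count are illustrative assumptions only, and actual results depend on the machine and runtime, but on many systems the asymptotically worse algorithm wins this micro-benchmark because of its lower constant factors. This is also why hybrid sorting routines commonly switch to insertion sort for short runs.

```python
# Micro-benchmark sketch: on a very small input, insertion sort (O(n^2))
# can outperform mergesort (O(n log n)) due to lower constant factors.
# Input size and repetition count are illustrative choices, not recommendations.
import random
import timeit

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

if __name__ == "__main__":
    data = [random.random() for _ in range(16)]  # deliberately tiny input
    print("insertion:", timeit.timeit(lambda: insertion_sort(data), number=10000))
    print("merge:    ", timeit.timeit(lambda: merge_sort(data), number=10000))
```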
Analysis
Some problems can be solved with heuristics and randomized algorithms in a simpler and more efficient fashion than with deterministic algorithms. Unfortunately, this makes even simple randomized algorithms difficult to analyze because there are subtle dependencies to be taken into account.
Implementation
Huge semantic gaps between theoretical insights, formulated algorithms, programming languages and hardware pose a challenge to efficient implementations of even simple algorithms, because small implementation details can have rippling effects on execution behavior.
The only reliable way to compare several implementations of an algorithm is to spend a considerable amount of time on tuning and profiling, running those algorithms on multiple architectures, and looking at the generated machine code.
Experiments
See: Experimental algorithmics
Application engineering
Implementations of algorithms used for experiments differ in significant ways from code usable in applications.
While the former prioritizes fast prototyping, performance and instrumentation for measurements during experiments, the latter requires thorough testing, maintainability, simplicity, and tuning for particular classes of inputs.
Algorithm libraries
Stable, well-tested algorithm libraries like LEDA play an important role in technology transfer by speeding up the adoption of new algorithms in applications.
Such libraries reduce the required investment and risk for practitioners, because it removes the burden of understanding and implementing the results of academic research.
Conferences
Two main conferences on Algorithm Engineering are organized annually, namely:
Symposium on Experimental Algorithms (SEA), established in 1997 (formerly known as WEA).
SIAM Meeting on Algorithm Engineering and Experiments (ALENEX), established in 1999.
The 1997 Workshop on Algorithm Engineering (WAE'97) was held in Venice (Italy) on September 11–13, 1997. The Third International Workshop on Algorithm Engineering (WAE'99) was held in London, UK in July 1999.
The first Workshop on Algorithm Engineering and Experimentation (ALENEX99) was held in Baltimore, Maryland on January 15–16, 1999. It was sponsored by DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science (at Rutgers University), with additional support from SIGACT, the ACM Special Interest Group on Algorithms and Computation Theory, and SIAM, the Society for Industrial and Applied Mathematics.
References
Algorithms
Theoretical computer science | Algorithm engineering | Mathematics | 1,023 |
14,920,392 | https://en.wikipedia.org/wiki/Optics%20and%20Spectroscopy | Optics and Spectroscopy is a monthly peer-reviewed scientific journal. It is the English version of the Russian journal () that was established in 1956. The journal was aided in development by Patricia Wakeling through a grant to her from the National Science Foundation. It covers research on spectroscopy of electromagnetic waves, from radio waves to X-rays, and related topics in optics, including quantum optics.
External links
Optics journals
Spectroscopy journals
Science and technology in Russia
Science and technology in the Soviet Union
Academic journals established in 1956
Monthly journals
English-language journals | Optics and Spectroscopy | Physics,Chemistry,Astronomy | 108 |
34,048,151 | https://en.wikipedia.org/wiki/International%20Society%20for%20Biological%20and%20Environmental%20Repositories | The International Society for Biological and Environmental Repositories (ISBER) is a professional society of individuals and organizations involved in biospecimen banking. Its main activities include creating educational and training opportunities, providing an online forum service, showcasing related products and services, and creating opportunities for networking. It also has published works.
Membership
Membership includes organizations and individuals from over 30 countries involved in long-term preservation and storage of animal, environmental, human, microorganism culture, museum, and plant/seed collections. A complete list of members is available on the ISBER website.
Meetings
ISBER holds one international meeting each year. Lectures, workshops, poster presentations, and working group discussions focus on technical issues and challenges such as quality assurance and control, regulations, human subject privacy and confidentiality issues, and provide information about sources of equipment and expertise.
ISBER Annual Meeting Locations
Source:
May 7-10, 2019 - Shanghai, China
May 20-24, 2018 - Dallas, TX, USA
May 9-12, 2017 - Toronto, ONT, Canada
May 20–24, 2014 - Orlando, FL, USA
2013 - Sydney, NSW, Australia
2012 - Vancouver, BC, Canada
2011 - Arlington, VA, USA
2010 - Rotterdam, SH, Netherlands
2009 - Portland, OR, USA
2008 - Bethesda, MD, USA
2007 - Singapore
2006 - Bethesda, MD, USA
2005 - Bellevue, WA, USA
2004 - New York, NY, USA
Best Practices
The ISBER Best Practices are publications periodically reviewed and revised to reflect advances in research and technology. The fourth edition (2018) of the Best Practices builds on the foundation established in the first, second, and third editions which were published in 2005, 2008, and 2012 respectively. The fifth edition is currently being written.
Current Best Practices
ISBER Best Practices: Recommendations for Repositories provides repository professionals with standardized guidelines for the management of biobank specimen collections and repositories. The most current version of the ISBER Best Practices was published in the February 2018 issue of Biopreservation and Biobanking (BIO).
Self-Assessment Tool (SAT)
SAT Information
This testing tool allows individuals to evaluate their knowledge of the Best Practices. It contains 158 questions, which may be answered in a single session or over multiple sessions. Each page of the survey corresponds to a section of the ISBER Best Practices. Results from pilot tests indicated that the SAT takes about an hour to complete if all information is available at the time of completing the survey. The tool is free to ISBER members, but non-members may participate for a fee.
After completion of the SAT, a personalized e-mail is sent to the participant which includes a "risk-balanced assessment score" and notification of top deviation areas to help the participant evaluate how their current practices conform to the ISBER Best Practices. The score is based on possible risk to the specimens, frequency of implementation of each practice, and the ease with which deviations can be detected.
Biorepository Proficiency Testing Program
Developed in collaboration with the Integrated Biobank of Luxembourg (IBBL), the Biorepository Proficiency Testing Program is designed to allow biorepositories to assess the accuracy of their quality control assays and characterization of biospecimens. Participants can compare their results with those obtained in other laboratories and can identify testing issues that may be related to individual staff performance or calibration of instrumentation used in biospecimen quality control. The program provides guidance to biorepositories so they can take appropriate remedial action to be in compliance with ISO/IEC 17043:2010, providing a necessary External Quality Assessment tool for biorepositories who wish to seek accreditation (ISO 17025, CLIA or equivalent).
References
External links
Environmental organizations based in British Columbia
Biorepositories
Biobank organizations
Organizations based in Vancouver
Professional associations based in Canada | International Society for Biological and Environmental Repositories | Biology | 793 |
16,970,848 | https://en.wikipedia.org/wiki/Median%20graph | In graph theory, a division of mathematics, a median graph is an undirected graph in which every three vertices a, b, and c have a unique median: a vertex m(a,b,c) that belongs to shortest paths between each pair of a, b, and c.
The concept of median graphs has long been studied, for instance by or (more explicitly) by , but the first paper to call them "median graphs" appears to be . As Chung, Graham, and Saks write, "median graphs arise naturally in the study of ordered sets and discrete distributive lattices, and have an extensive literature". In phylogenetics, the Buneman graph representing all maximum parsimony evolutionary trees is a median graph. Median graphs also arise in social choice theory: if a set of alternatives has the structure of a median graph, it is possible to derive in an unambiguous way a majority preference among them.
Additional surveys of median graphs are given by , , and .
Examples
Every tree is a median graph. To see this, observe that in a tree, the union of the three shortest paths between pairs of the three vertices a, b, and c is either itself a path, or a subtree formed by three paths meeting at a single central node with degree three. If the union of the three paths is itself a path, the median m(a,b,c) is equal to one of a, b, or c, whichever of these three vertices is between the other two in the path. If the subtree formed by the union of the three paths is not a path, the median of the three vertices is the central degree-three node of the subtree.
Additional examples of median graphs are provided by the grid graphs. In a grid graph, the coordinates of the median m(a,b,c) can be found as the median of the coordinates of a, b, and c. Conversely, it turns out that, in every median graph, one may label the vertices by points in an integer lattice in such a way that medians can be calculated coordinatewise in this way.
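A minimal sketch of this coordinatewise rule for grid graphs, assuming vertices are represented as tuples of integer coordinates (the function name grid_median is hypothetical):

```python
# Sketch: in a grid graph, the median of three vertices can be computed
# coordinatewise, taking the numerical median of each coordinate.
def grid_median(a, b, c):
    """a, b, c are equal-length tuples of integer grid coordinates."""
    return tuple(sorted(coords)[1] for coords in zip(a, b, c))

# Example: the median of (0, 0), (2, 1) and (1, 3) in the grid is (1, 1).
print(grid_median((0, 0), (2, 1), (1, 3)))  # -> (1, 1)
```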
Squaregraphs, planar graphs in which all interior faces are quadrilaterals and all interior vertices have four or more incident edges, are another subclass of the median graphs. A polyomino is a special case of a squaregraph and therefore also forms a median graph.
The simplex graph κ(G) of an arbitrary undirected graph G has a vertex for every clique (complete subgraph) of G; two vertices of κ(G) are linked by an edge if the corresponding cliques differ by one vertex of G. The simplex graph is always a median graph, in which the median of a given triple of cliques may be formed by using the majority rule to determine which vertices of the cliques to include.
No cycle graph of length other than four can be a median graph. Every such cycle has three vertices a, b, and c such that the three shortest paths wrap all the way around the cycle without having a common intersection. For such a triple of vertices, there can be no median.
Equivalent definitions
In an arbitrary graph, for each two vertices a and b, the minimal number of edges between them is called their distance, denoted by d(x,y). The interval of vertices that lie on shortest paths between a and b is defined as
I(a,b) = {v | d(a,b) = d(a,v) + d(v,b)}.
A median graph is defined by the property that, for every three vertices a, b, and c, these intervals intersect in a single point:
For all a, b, and c, |I(a,b) ∩ I(a,c) ∩ I(b,c)| = 1.
Equivalently, for every three vertices a, b, and c one can find a vertex m(a,b,c) such that the unweighted distances in the graph satisfy the equalities
d(a,b) = d(a,m(a,b,c)) + d(m(a,b,c),b)
d(a,c) = d(a,m(a,b,c)) + d(m(a,b,c),c)
d(b,c) = d(b,m(a,b,c)) + d(m(a,b,c),c)
and m(a,b,c) is the only vertex for which this is true.
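The distance-based characterization above suggests a direct, if inefficient, way to compute medians. The following sketch assumes the graph is given as an adjacency dictionary and uses breadth-first search for unweighted distances; the names bfs_distances and median are hypothetical, and the brute-force scan over all vertices is meant only to illustrate the three equalities, not to be an efficient algorithm.

```python
# Brute-force sketch: find the median of three vertices of an unweighted graph
# by checking the three distance equalities from the definition above.
from collections import deque

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def median(graph, a, b, c):
    da, db, dc = (bfs_distances(graph, s) for s in (a, b, c))
    hits = [v for v in graph
            if da[b] == da[v] + db[v]
            and da[c] == da[v] + dc[v]
            and db[c] == db[v] + dc[v]]
    # In a median graph exactly one vertex satisfies all three equalities.
    return hits[0] if len(hits) == 1 else None

# Example: a star-shaped tree with centre 'm' (trees are median graphs).
tree = {'a': ['m'], 'b': ['m'], 'c': ['m'], 'm': ['a', 'b', 'c']}
print(median(tree, 'a', 'b', 'c'))  # -> 'm'
```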
It is also possible to define median graphs as the solution sets of 2-satisfiability problems, as the retracts of hypercubes, as the graphs of finite median algebras, as the Buneman graphs of Helly split systems, and as the graphs of windex 2; see the sections below.
Distributive lattices and median algebras
In lattice theory, the graph of a finite lattice has a vertex for each lattice element and an edge for each pair of elements in the covering relation of the lattice. Lattices are commonly presented visually via Hasse diagrams, which are drawings of graphs of lattices. These graphs, especially in the case of distributive lattices, turn out to be closely related to median graphs.
In a distributive lattice, Birkhoff's self-dual ternary median operation
m(a,b,c) = (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) ∧ (b ∨ c),
satisfies certain key axioms, which it shares with the usual median of numbers in the range from 0 to 1 and with median algebras more generally:
Idempotence: m(a,a,b) = a for all a and b.
Commutativity: m(a,b,c) = m(a,c,b) = m(b,a,c) = m(b,c,a) = m(c,a,b) = m(c,b,a) for all a, b, and c.
Distributivity: m(a,m(b,c,d),e) = m(m(a,b,e),c,m(a,d,e)) for all a, b, c, d, and e.
Identity elements: m(0,a,1) = a for all a.
The distributive law may be replaced by an associative law:
Associativity: m(x,w,m(y,w,z)) = m(m(x,w,y),w,z)
The median operation may also be used to define a notion of intervals for distributive lattices:
I(a,b) = {x | m(a,x,b) = x} = {x | a ∧ b ≤ x ≤ a ∨ b}.
The graph of a finite distributive lattice has an edge between vertices a and b whenever I(a,b) = {a,b}. For every two vertices a and b of this graph, the interval defined in lattice-theoretic terms above consists of the vertices on shortest paths from a to b, and thus coincides with the graph-theoretic intervals defined earlier. For every three lattice elements a, b, and c, m(a,b,c) is the unique intersection of the three intervals I(a,b), I(a,c), and I(b,c). Therefore, the graph of an arbitrary finite distributive lattice is a median graph. Conversely, if a median graph G contains two vertices 0 and 1 such that every other vertex lies on a shortest path between the two (equivalently, m(0,a,1) = a for all a), then we may define a distributive lattice in which a ∧ b = m(a,0,b) and a ∨ b = m(a,1,b), and G will be the graph of this lattice.
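A small sketch verifying, on one concrete distributive lattice, that the two expressions for Birkhoff's median operation agree. Here the lattice is the power set of a finite set ordered by inclusion, with meet as intersection and join as union; the function names are hypothetical and chosen only for illustration.

```python
# Sketch: in a distributive lattice the two expressions for the median agree.
# Lattice: subsets of {1,2,3,4} with meet = intersection, join = union.
def median_by_meet_join(a, b, c):
    return (a & b) | (a & c) | (b & c)

def median_by_join_meet(a, b, c):
    return (a | b) & (a | c) & (b | c)

a, b, c = frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3, 4})
assert median_by_meet_join(a, b, c) == median_by_join_meet(a, b, c)
print(sorted(median_by_meet_join(a, b, c)))  # -> [1, 2, 3]
```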
Graphs of distributive lattices have been characterized directly as diameter-preserving retracts of hypercubes. More generally, every median graph gives rise to a ternary operation m satisfying idempotence, commutativity, and distributivity, but possibly without the identity elements of a distributive lattice. Every ternary operation on a finite set that satisfies these three properties (but that does not necessarily have 0 and 1 elements) gives rise in the same way to a median graph.
Convex sets and Helly families
In a median graph, a set S of vertices is said to be convex if, for every two vertices a and b belonging to S, the whole interval I(a,b) is a subset of S. Equivalently, given the two definitions of intervals above, S is convex if it contains every shortest path between two of its vertices, or if it contains the median of every set of three points at least two of which are from S. Observe that the intersection of every pair of convex sets is itself convex.
The convex sets in a median graph have the Helly property: if F is an arbitrary family of pairwise-intersecting convex sets, then all sets in F have a common intersection. For, if F has only three convex sets S, T, and U in it, with a in the intersection of the pair S and T, b in the intersection of the pair T and U, and c in the intersection of the pair S and U, then every shortest path from a to b must lie within T by convexity, and similarly every shortest path between the other two pairs of vertices must lie within the other two sets; but m(a,b,c) belongs to paths between all three pairs of vertices, so it lies within all three sets, and forms part of their common intersection. If F has more than three convex sets in it, the result follows by induction on the number of sets, for one may replace an arbitrary pair of sets in F by their intersection, using the result for triples of sets to show that the replaced family is still pairwise intersecting.
A particularly important family of convex sets in a median graph, playing a role similar to that of halfspaces in Euclidean space, are the sets
Wuv = {w | d(w,u) < d(w,v)}
defined for each edge uv of the graph. In words, Wuv consists of the vertices closer to u than to v, or equivalently the vertices w such that some shortest path from v to w goes through u.
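A brief sketch of how a set Wuv can be computed from breadth-first-search distances, assuming the graph is given as an adjacency dictionary; the names dists and w_set are hypothetical, and the star-shaped tree is just a small example.

```python
# Sketch: Wuv = vertices strictly closer to u than to v, via BFS distances.
from collections import deque

def dists(graph, s):
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def w_set(graph, u, v):
    du, dv = dists(graph, u), dists(graph, v)
    return {w for w in graph if du[w] < dv[w]}

# Star-shaped tree with centre 'm' and leaves 'a', 'b', 'c':
tree = {'a': ['m'], 'b': ['m'], 'c': ['m'], 'm': ['a', 'b', 'c']}
print(w_set(tree, 'a', 'm'))  # -> {'a'}
print(w_set(tree, 'm', 'a'))  # -> {'m', 'b', 'c'}
```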
To show that Wuv is convex, let w1w2...wk be an arbitrary shortest path that starts and ends within Wuv; then w2 must also lie within Wuv, for otherwise the two points m1 = m(u,w1,wk) and m2 = m(m1,w2,wk) could be shown (by considering the possible distances between the vertices) to be distinct medians of u, w1, and wk, contradicting the definition of a median graph which requires medians to be unique. Thus, each successive vertex on a shortest path between two vertices of Wuv also lies within Wuv, so Wuv contains all shortest paths between its nodes, one of the definitions of convexity.
The Helly property for the sets Wuv plays a key role in the characterization of median graphs as the solution of 2-satisfiability instances, below.
2-satisfiability
Median graphs have a close connection to the solution sets of 2-satisfiability problems that can be used both to characterize these graphs and to relate them to adjacency-preserving maps of hypercubes.
A 2-satisfiability instance consists of a collection of Boolean variables and a collection of clauses, constraints on certain pairs of variables requiring those two variables to avoid certain combinations of values. Usually such problems are expressed in conjunctive normal form, in which each clause is expressed as a disjunction and the whole set of constraints is expressed as a conjunction of clauses, such as (x0 ∨ x1) ∧ (¬x1 ∨ ¬x2) ∧ (x0 ∨ ¬x2).
A solution to such an instance is an assignment of truth values to the variables that satisfies all the clauses, or equivalently that causes the conjunctive normal form expression for the instance to become true when the variable values are substituted into it. The family of all solutions has a natural structure as a median algebra, where the median of three solutions is formed by choosing each truth value to be the majority function of the values in the three solutions; it is straightforward to verify that this median solution cannot violate any of the clauses. Thus, these solutions form a median graph, in which the neighbor of each solution is formed by negating a set of variables that are all constrained to be equal or unequal to each other.
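The majority construction can be checked directly on a small instance. The sketch below uses an illustrative 2-CNF formula with clauses (x0 ∨ x1), (¬x1 ∨ ¬x2), and (x0 ∨ ¬x2), represents each clause as a pair of (variable, required value) literals, and confirms that the coordinatewise majority of three satisfying assignments again satisfies every clause; the encoding, names, and instance are all assumptions made for the example.

```python
# Sketch: the coordinatewise majority of three satisfying assignments of a
# 2-SAT instance is again a satisfying assignment.
def majority(s1, s2, s3):
    return {x: (s1[x] + s2[x] + s3[x]) >= 2 for x in s1}

def satisfies(assignment, clauses):
    return all(assignment[x] == vx or assignment[y] == vy
               for (x, vx), (y, vy) in clauses)

# (x0 or x1) and (not x1 or not x2) and (x0 or not x2)
clauses = [(('x0', True), ('x1', True)),
           (('x1', False), ('x2', False)),
           (('x0', True), ('x2', False))]
s1 = {'x0': True,  'x1': False, 'x2': False}
s2 = {'x0': True,  'x1': True,  'x2': False}
s3 = {'x0': False, 'x1': True,  'x2': False}
assert all(satisfies(s, clauses) for s in (s1, s2, s3))
m = majority(s1, s2, s3)
print(m)                 # -> {'x0': True, 'x1': True, 'x2': False}
assert satisfies(m, clauses)
```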
Conversely, every median graph G may be represented in this way as the solution set to a 2-satisfiability instance. To find such a representation, create a 2-satisfiability instance in which each variable describes the orientation of one of the edges in the graph (an assignment of a direction to the edge causing the graph to become directed rather than undirected) and each constraint allows two edges to share a pair of orientations only when there exists a vertex v such that both orientations lie along shortest paths from other vertices to v. Each vertex v of G corresponds to a solution to this 2-satisfiability instance in which all edges are directed towards v. Each
solution to the instance must come from some vertex v in this way, where v is the common intersection of the sets Wuw for edges directed from w to u; this common intersection exists due to the Helly property of the sets Wuw. Therefore, the solutions to this 2-satisfiability instance correspond one-for-one with the vertices of G.
Retracts of hypercubes
A retraction of a graph G is an adjacency-preserving map from G to one of its subgraphs. More precisely, it is a graph homomorphism φ from G to itself such that φ(v) = v for each vertex v in the subgraph φ(G). The image of the retraction is called a retract of G.
Retractions are examples of metric maps: the distance between φ(v) and φ(w), for every v and w, is at most equal to the distance between v and w, and is equal whenever v and w both belong to φ(G). Therefore, a retract must be an isometric subgraph of G: distances in the retract equal those in G.
If G is a median graph, and a, b, and c are an arbitrary three vertices of a retract φ(G), then φ(m(a,b,c)) must be a median of a, b, and c, and so must equal m(a,b,c). Therefore, φ(G) contains medians of all triples of its vertices, and must also be a median graph. In other words, the family of median graphs is closed under the retraction operation.
A hypercube graph, in which the vertices correspond to all possible k-bit bitvectors and in which two vertices are adjacent when the corresponding bitvectors differ in only a single bit, is a special case of a k-dimensional grid graph and is therefore a median graph. The median of three bitvectors a, b, and c may be calculated by computing, in each bit position, the majority function of the bits of a, b, and c. Since median graphs are closed under retraction, and include the hypercubes, every retract of a hypercube is a median graph.
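In code, the bitwise-majority median of hypercube vertices is a one-liner when bitvectors are represented as Python integers (the function name is hypothetical):

```python
# Sketch: the median of three hypercube vertices (bitvectors, here Python
# ints) is the bitwise majority of the three.
def hypercube_median(a, b, c):
    return (a & b) | (a & c) | (b & c)

a, b, c = 0b1100, 0b1010, 0b0110
print(format(hypercube_median(a, b, c), '04b'))  # -> 1110
```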
Conversely, every median graph must be the retract of a hypercube. This may be seen from the connection, described above, between median graphs and 2-satisfiability: let G be the graph of solutions to a 2-satisfiability instance; without loss of generality this instance can be formulated in such a way that no two variables are always equal or always unequal in every solution. Then the space of all truth assignments to the variables of this instance forms a hypercube. For each clause, formed as the disjunction of two variables or their complements, in the 2-satisfiability instance, one can form a retraction of the hypercube in which truth assignments violating this clause are mapped to truth assignments in which both variables satisfy the clause, without changing the other variables in the truth assignment. The composition of the retractions formed in this way for each of the clauses gives a retraction of the hypercube onto the solution space of the instance, and therefore gives a representation of G as the retract of a hypercube. In particular, median graphs are isometric subgraphs of hypercubes, and are therefore partial cubes. However, not all partial cubes are median graphs; for instance, a six-vertex cycle graph is a partial cube but is not a median graph.
An isometric embedding of a median graph into a hypercube may be constructed in time O(m log n), where n and m are the numbers of vertices and edges of the graph respectively.
Triangle-free graphs and recognition algorithms
The problems of testing whether a graph is a median graph, and whether a graph is triangle-free, both had been well studied when it was observed that, in some sense, they are computationally equivalent. Therefore, the best known time bound for testing whether a graph is triangle-free, O(m^1.41), applies as well to testing whether a graph is a median graph, and any improvement in median graph testing algorithms would also lead to an improvement in algorithms for detecting triangles in graphs.
In one direction, suppose one is given as input a graph G, and must test whether G is triangle-free. From G, construct a new graph H having as vertices each set of zero, one, or two adjacent vertices of G. Two such sets are adjacent in H when they differ by exactly one vertex. An equivalent description of H is that it is formed by splitting each edge of G into a path of two edges, and adding a new vertex connected to all the original vertices of G. This graph H is by construction a partial cube, but it is a median graph only when G is triangle-free: if a, b, and c form a triangle in G, then {a,b}, {a,c}, and {b,c} have no median in H, for such a median would have to correspond to the set {a,b,c}, but sets of three or more vertices of G do not form vertices in H. Therefore, G is triangle-free if and only if H is a median graph. In the case that G is triangle-free, H is its simplex graph. An algorithm to test efficiently whether H is a median graph could by this construction also be used to test whether G is triangle-free. This transformation preserves the computational complexity of the problem, for the size of H is proportional to that of G.
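A sketch of this construction, with a brute-force median test on the result (the function and variable names are illustrative only): starting from a triangle, the three two-element sets have no median in H, while starting from a triangle-free graph such as a path, every triple of vertices of H has a unique median.

```python
from collections import deque
from itertools import combinations

def build_h(vertices, edges):
    """Vertices of H: the empty set, singletons, and adjacent pairs of G.
    Two sets are adjacent in H when they differ by exactly one vertex."""
    h_vertices = ([frozenset()] + [frozenset([v]) for v in vertices]
                  + [frozenset(e) for e in edges])
    h_adj = {s: [t for t in h_vertices if len(s ^ t) == 1]  # differ by one vertex
             for s in h_vertices}
    return h_vertices, h_adj

def distances(adj, source):
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_median_graph(vertices, adj):
    """Brute force: every triple of vertices must have exactly one median."""
    dist = {v: distances(adj, v) for v in vertices}
    for a, b, c in combinations(vertices, 3):
        meds = [m for m in vertices
                if dist[a][m] + dist[m][b] == dist[a][b]
                and dist[a][m] + dist[m][c] == dist[a][c]
                and dist[b][m] + dist[m][c] == dist[b][c]]
        if len(meds) != 1:
            return False
    return True

triangle = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
path = ([0, 1, 2], [(0, 1), (1, 2)])
print(is_median_graph(*build_h(*triangle)))  # False: G contains a triangle
print(is_median_graph(*build_h(*path)))      # True: G is triangle-free
```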
The reduction in the other direction, from triangle detection to median graph testing, is more involved and depends on an earlier median graph recognition algorithm, which tests several necessary conditions for median graphs in near-linear time. The key new step involves using a breadth-first search to partition the graph's vertices into levels according to their distances from some arbitrarily chosen root vertex, forming a graph from each level in which two vertices are adjacent if they share a common neighbor in the previous level, and searching for triangles in these graphs. The median of any such triangle must be a common neighbor of the three triangle vertices; if this common neighbor does not exist, the graph is not a median graph. If all triangles found in this way have medians, and the earlier algorithm finds that the graph satisfies all the other conditions for being a median graph, then it must actually be a median graph. This algorithm requires not just the ability to test whether a triangle exists, but a list of all triangles in the level graphs. In arbitrary graphs, listing all triangles sometimes requires Ω(m^(3/2)) time, as some graphs have that many triangles; however, Hagauer et al. show that the number of triangles arising in the level graphs of this reduction is near-linear, allowing the fast matrix-multiplication-based technique of Alon et al. for finding triangles to be used.
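The level-graph step of this reduction can be sketched as follows; this illustrates only the partition-and-level-graph idea, not the full recognition algorithm. Vertices are grouped by breadth-first distance from a chosen root, and within each level two vertices become adjacent exactly when they have a common neighbor in the previous level.

```python
from collections import deque
from itertools import combinations

def level_graphs(adj, root):
    """Partition vertices into BFS levels from the root, then build, for each
    level, a graph joining two vertices of that level whenever they share a
    neighbor in the previous level."""
    dist, queue = {root: 0}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    levels = {}
    for v, d in dist.items():
        levels.setdefault(d, []).append(v)
    graphs = {}
    for d in sorted(levels):
        if d == 0:
            continue
        prev = set(levels[d - 1])
        edges = [(u, v) for u, v in combinations(levels[d], 2)
                 if prev & set(adj[u]) & set(adj[v])]  # common neighbor below
        graphs[d] = (levels[d], edges)
    return graphs

# Example: the 3-dimensional hypercube rooted at vertex 000.
adj = {u: [u ^ (1 << i) for i in range(3)] for u in range(8)}
for d, (vs, es) in level_graphs(adj, 0).items():
    print(d, [format(v, "03b") for v in vs], es)
# Each level graph here is a triangle whose median is a common neighbor
# of its three vertices, as required for a median graph.
```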
Evolutionary trees, Buneman graphs, and Helly split systems
Phylogeny is the inference of evolutionary trees from observed characteristics of species; such a tree must place the species at distinct vertices, and may have additional latent vertices, but the latent vertices are required to have three or more incident edges and must also be labeled with characteristics. A characteristic is binary when it has only two possible values, and a set of species and their characteristics exhibit perfect phylogeny when there exists an evolutionary tree in which the vertices (species and latent vertices) labeled with any particular characteristic value form a contiguous subtree. If a tree with perfect phylogeny is not possible, it is often desired to find one exhibiting maximum parsimony, or equivalently, minimizing the number of times the endpoints of a tree edge have different values for one of the characteristics, summed over all edges and all characteristics.
Buneman described a method for inferring perfect phylogenies for binary characteristics, when they exist. His method generalizes naturally to the construction of a median graph for any set of species and binary characteristics, which has been called the median network or Buneman graph and is a type of phylogenetic network. Every maximum parsimony evolutionary tree embeds into the Buneman graph, in the sense that tree edges follow paths in the graph and the number of characteristic value changes on the tree edge is the same as the number in the corresponding path. The Buneman graph will be a tree if and only if a perfect phylogeny exists; this happens when there are no two incompatible characteristics for which all four combinations of characteristic values are observed.
To form the Buneman graph for a set of species and characteristics, first, eliminate redundant species that are indistinguishable from some other species and redundant characteristics that are always the same as some other characteristic. Then, form a latent vertex for every combination of characteristic values such that every two of the values exist in some known species. For example, suppose there are small brown tailless mice, small silver tailless mice, small brown tailed mice, large brown tailed mice, and large silver tailed mice; the Buneman graph method would form a latent vertex corresponding to an unknown species of small silver tailed mice, because every pairwise combination (small and silver, small and tailed, and silver and tailed) is observed in some other known species. However, the method would not infer the existence of large brown tailless mice, because no mice are known to have both the large and tailless traits. Once the latent vertices are determined, form an edge between every pair of species or latent vertices that differ in a single characteristic.
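A sketch of this construction on the mouse example above (the trait names and data layout here are just one possible encoding, not taken from any source):

```python
from itertools import product, combinations

# Binary characteristics and the values observed in the known species.
characteristics = ["size", "coat", "tail"]
species = [
    {"size": "small", "coat": "brown",  "tail": "tailless"},
    {"size": "small", "coat": "silver", "tail": "tailless"},
    {"size": "small", "coat": "brown",  "tail": "tailed"},
    {"size": "large", "coat": "brown",  "tail": "tailed"},
    {"size": "large", "coat": "silver", "tail": "tailed"},
]
values = {c: sorted({s[c] for s in species}) for c in characteristics}

def pair_observed(c1, v1, c2, v2):
    """Is this pair of characteristic values seen together in a known species?"""
    return any(s[c1] == v1 and s[c2] == v2 for s in species)

# Vertices of the Buneman graph: all combinations of values in which every
# two of the values occur together in some known species.
vertices = []
for combo in product(*(values[c] for c in characteristics)):
    assignment = dict(zip(characteristics, combo))
    if all(pair_observed(c1, assignment[c1], c2, assignment[c2])
           for c1, c2 in combinations(characteristics, 2)):
        vertices.append(assignment)

# Edges join vertices that differ in exactly one characteristic.
edges = [(a, b) for a, b in combinations(vertices, 2)
         if sum(a[c] != b[c] for c in characteristics) == 1]

latent = [v for v in vertices if v not in species]
print(latent)  # the single inferred (latent) small silver tailed mouse
print(len(vertices), "vertices,", len(edges), "edges")
```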
One can equivalently describe a collection of binary characteristics as a split system, a family of sets having the property that the complement set of each set in the family is also in the family. This split system has a set for each characteristic value, consisting of the species that have that value. When the latent vertices are included, the resulting split system has the Helly property: every pairwise intersecting subfamily has a common intersection. In some sense median graphs are characterized as coming from Helly split systems: the pairs (Wuv, Wvu) defined for each edge uv of a median graph form a Helly split system, so if one applies the Buneman graph construction to this system no latent vertices will be needed and the result will be the same as the starting graph.
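The split system formed by the pairs (Wuv, Wvu) and its Helly property can be checked by brute force on a small median graph. The sketch below uses a small tree (which is a median graph) and exhaustively tests every pairwise-intersecting subfamily of the Wuv sets; the graph and names are illustrative only.

```python
from collections import deque
from itertools import combinations

# A small median graph: the star with center 0 and leaves 1, 2, 3.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3)]
adj = {v: [] for v in vertices}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def dist_from(source):
    d, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                queue.append(w)
    return d

dist = {v: dist_from(v) for v in vertices}

# W_uv: the vertices strictly closer to u than to v, one set per orientation.
splits = []
for u, v in edges:
    splits.append(frozenset(x for x in vertices if dist[x][u] < dist[x][v]))
    splits.append(frozenset(x for x in vertices if dist[x][v] < dist[x][u]))

# Helly property: every pairwise-intersecting subfamily has a common element.
for r in range(2, len(splits) + 1):
    for family in combinations(splits, r):
        if all(a & b for a, b in combinations(family, 2)):
            assert frozenset.intersection(*family), "Helly property violated"
print("Helly property holds for all", len(splits), "splits")
```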
Techniques for simplified hand calculation of the Buneman graph have been described, and this construction has been used to visualize human genetic relationships.
Additional properties
The Cartesian product of every two median graphs is another median graph. Medians in the product graph may be computed by independently finding the medians in the two factors, just as medians in grid graphs may be computed by independently finding the median in each linear dimension.
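In a grid graph, for example, the median of three vertices can be computed coordinate by coordinate; a minimal sketch:

```python
def grid_median(a, b, c):
    """Median of three grid-graph vertices (tuples of integer coordinates):
    take the middle value independently in each dimension."""
    return tuple(sorted(coords)[1] for coords in zip(a, b, c))

print(grid_median((1, 5), (4, 2), (3, 3)))  # (3, 3)
```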
The windex of a graph measures the amount of lookahead needed to optimally solve a problem in which one is given a sequence of graph vertices si, and must find as output another sequence of vertices ti minimizing the sum of the distances d(si, ti) and d(ti−1, ti). Median graphs are exactly the graphs that have windex 2. In a median graph, the optimal choice is to set ti = m(ti−1, si, si+1).
The property of having a unique median is also called the unique Steiner point property. An optimal Steiner tree for three vertices a, b, and c in a median graph may be found as the union of three shortest paths, from a, b, and c to m(a,b,c). More generally, the problem of finding the vertex minimizing the sum of distances to each of a given set of vertices has been studied for median graphs, and it has a unique solution for any odd number of vertices. This median of a set S of vertices in a median graph also satisfies the Condorcet criterion for the winner of an election: compared to any other vertex, it is closer to a majority of the vertices in S.
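Both properties can be checked directly on a small median graph. The sketch below uses a path graph (a median graph) and a three-element set S, finds the vertex minimizing the total distance to S, and confirms the Condorcet condition against every other vertex; the particular graph and set are made-up examples.

```python
# A path graph on vertices 0..6 is a median graph; distances are |u - v|.
vertices = range(7)
S = [1, 2, 6]  # an odd-size set of "voter" locations

def d(u, v):
    return abs(u - v)

# The vertex minimizing the sum of distances to S (unique for odd |S|).
best = min(vertices, key=lambda v: sum(d(v, s) for s in S))
print("median of S:", best)  # 2, the middle element of S

# Condorcet criterion: compared with any other vertex, the median is
# strictly closer to a majority of the members of S.
for other in vertices:
    if other != best:
        closer = sum(d(best, s) < d(other, s) for s in S)
        assert closer > len(S) / 2
print("Condorcet criterion satisfied")
```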
As with partial cubes more generally, every median graph with n vertices has at most (n/2) log2 n edges. However, the number of edges cannot be too small: it has been proved that in every median graph the inequality 2n − m − k ≤ 2 holds, where m is the number of edges and k is the dimension of the hypercube that the graph is a retract of. This inequality is an equality if and only if the median graph contains no cubes. This is a consequence of another identity for median graphs: the Euler characteristic Σ (−1)^dim(Q) is always equal to one, where the sum is taken over all hypercube subgraphs Q of the given median graph.
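As a quick check of both formulas on the hypercubes themselves, one can use the standard fact (assumed here, not stated in this article) that the number of d-dimensional subcubes of the k-dimensional hypercube is C(k, d)·2^(k−d):

```python
from math import comb

for k in range(1, 7):
    n = 2 ** k            # vertices of the hypercube Q_k
    m = k * 2 ** (k - 1)  # edges of Q_k
    # Euler characteristic: alternating sum over all subcube dimensions,
    # which equals (2 - 1)^k = 1 in every dimension.
    euler = sum((-1) ** d * comb(k, d) * 2 ** (k - d) for d in range(k + 1))
    print(k, "2n - m - k =", 2 * n - m - k, " Euler characteristic =", euler)
    # 2n - m - k stays at most 2, with equality only for k <= 2,
    # the hypercubes that contain no 3-dimensional cubes.
```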
The only regular median graphs are the hypercubes.
Every median graph is a modular graph. The modular graphs are a class of graphs in which every triple of vertices has a median but the medians are not required to be unique.
External links
Median graphs, Information System for Graph Class Inclusions.
Network, Free Phylogenetic Network Software. Network generates evolutionary trees and networks from genetic, linguistic, and other data.
PhyloMurka, open-source software for median network computations from biological data.
Graph families
Bipartite graphs
Lattice theory
Social choice theory
Phylogenetics