The Prix Jules Janssen is the highest award of the Société astronomique de France (SAF), the French astronomical society. This annual prize is given to a professional French astronomer, or to an astronomer of another nationality, in recognition of astronomical work in general or of services rendered to astronomy. [ 1 ] The first recipient of the prize was Camille Flammarion, the founder of the Société astronomique de France, in 1897. The prize has been awarded continuously since then, with the exception of the two World Wars. Non-French recipients have come from various countries including the United States, the United Kingdom, Canada, Switzerland, the Netherlands, Germany, Belgium, Sweden, Italy, Spain, Hungary, India, the former Czechoslovakia, and the former Soviet Union. The prize was established by the French astronomer Pierre Jules César Janssen (known as Jules Janssen) during his tenure as president of the SAF from 1895 to 1897. [ 2 ] Janssen announced the creation of the new prize at a meeting of the Société astronomique de France on 2 December 1896. [ 3 ] The medal was designed in 1896 by the Parisian engraver Alphée Dubois (1831–1905) [ 4 ] and is minted by the Monnaie de Paris. This prize is distinct from the Janssen Medal (created in 1886), which is awarded by the French Academy of Sciences and is also named for Janssen.
https://en.wikipedia.org/wiki/Prix_Jules_Janssen
The Prix Michel-Sarrazin is awarded annually in the Canadian province of Quebec by the Club de Recherches Clinique du Québec to a celebrated Québécois scientist who, through their dynamism and productivity, has contributed in an important way to the advancement of biomedical research. It is named in honour of Michel Sarrazin (1659–1734), the first Canadian scientist.
https://en.wikipedia.org/wiki/Prix_Michel-Sarrazin
The Prix Wilder-Penfield is an award given by the government of Quebec as part of the Prix du Québec; it "goes to scientists whose research aims fall within the field of biomedicine. These fields include the medical sciences, the natural sciences, and engineering". [ 1 ] It is named in honour of Wilder Penfield.
https://en.wikipedia.org/wiki/Prix_Wilder-Penfield
Pro-Test was a British group that promoted and supported animal testing in medical research. It was founded on 29 January 2006 to counter SPEAK, an animal-rights campaign opposing the construction by Oxford University of a biomedical and animal-research facility, [ 1 ] which SPEAK believed might include a primate-testing centre. [ 2 ] Pro-Test held its first rally on 25 February 2006, attracting hundreds of supporters of the research facility, opposed by a smaller number of anti-lab demonstrators. [ 3 ] The group was founded by Laurie Pycroft from Swindon when he was 16. After he formed the group, British newspapers described Pycroft as a "sixth form drop-out", "bedroom blogger" [ 4 ] and "campaigning hero". [ 5 ] It was run by a committee of ten: the academics Tipu Aziz, John Stein and David Priestman; five Oxford graduate and undergraduate students; the medical writer Alison Eden; and Pycroft. [ 6 ] Pro-Test said that it stood for "science, reasoned debate and, above all, the welfare of mankind. … We support only non-violent protest and we condemn those using violence or intimidation to further their goals. We strongly support animal testing as crucially necessary to further medical science." [ 7 ] In February 2011, five years after its first rally, Pro-Test wound up its activities, saying it had "successfully met its goals of defending the construction of the Oxford Lab, increasing awareness of the importance of animal research, and bringing the public on-side in support of life-saving medical research." Its US-based spin-off, Speaking of Research, remained active in the UK and US. The construction site of the Oxford research centre is located on South Parks Road behind a five-metre (16 ft) barrier. Construction work was carried out by workmen wearing balaclavas and using unmarked vehicles after the first contractor, Walter Lilly, owned by Montpellier plc, pulled out in the face of threats. [ 8 ] The facility was intended to become the "centre for all animal research at Oxford", according to Mark Matfield, former director of the Research Defence Society, [ 9 ] resulting in "the closure of a number of existing animal facilities". [ 10 ] The formation of Pro-Test coincided with threats made by the Animal Liberation Front against Oxford staff and students on the Bite Back website. [ 11 ] ALF spokesman Robin Webb confirmed that "high-level student groups working against SPEAK protesters may be targeted." [ 12 ] Pycroft described in his blog, hosted on LiveJournal, how he set up Pro-Test after visiting his girlfriend in Oxford on 28 January 2006 and watching a SPEAK demonstration from the window of a coffee shop. [ 13 ] [ 14 ] Pycroft, his girlfriend and one other staged a personal counter-demonstration. Pycroft has said that, within days of his writing about the experience on his blog, it was receiving 300 hits an hour, [ 15 ] and after attracting interest from the media, Oxford students and the pro-animal-testing movement, he decided to schedule a second demonstration to coincide with a SPEAK protest on 25 February 2006. According to The Times, "Pro-Test's tactics mirror those of animal rights activists, with about 150 students using websites and chat forums to organise protests."
[ 16 ] According to the Daily Telegraph, [ 17 ] over 800 students, academics and members of the public took part in the 25 February 2006 protest in the centre of Oxford, which passed without violent incident, [ 18 ] marching at the same time as more than 150 SPEAK protesters, [ 19 ] who demonstrated in various locations across the city. A number of politicians and scientists addressed the Pro-Test demonstrators. These included Evan Harris, the Liberal Democrat science spokesperson and MP for Oxford West and Abingdon; the Radcliffe Hospital neurosurgeon and Pro-Test committee member Professor Tipu Aziz, [ 6 ] whose research into Parkinson's disease "involves the use of primates" [ 8 ] and who had recently spoken out in support of testing cosmetics on animals; [ 20 ] Simon Festing of the Research Defence Society, a lobby group funded by the pharmaceutical industry and universities; and Pro-Test committee member Professor John Stein, [ 6 ] an Oxford neurophysiologist who "induces Parkinson's disease in monkeys and then attaches electrodes to their brains to test therapies which may help human sufferers", according to The Guardian. [ 4 ] In his speech to the crowd, Stein declared, "This is a historic day; we are drawing a line in the sand." [ 21 ] Supporters of Pro-Test marched through Oxford again on Saturday, 3 June 2006. Their route led them through Radcliffe Square and the High Street and ended near the laboratory in the university's science area. Speakers included Colin Blakemore (then chief executive of the Medical Research Council), Evan Harris MP and Alan Duncan MP (the Shadow Trade and Industry Secretary). David Priestman, a researcher of genetic disorders in children at Oxford University, told the Oxford Mail his reasons for joining the rally: [ 22 ] "I have worked in animal research for nearly 30 years and at last I can speak out about what I do. I'm exceptionally proud of my work. What right have animal rights activists to say my work is not scientific?" Pro-Test held a third rally in Oxford on 9 February 2008. According to the BBC, around 200 people marched in protest at "fear and intimidation" from animal rights groups. [ 23 ] Towards the start of the event, a lone animal rights protester started to shout in counter-protest but was escorted away by the police. [ 23 ] Speakers at the rally included Robin Lovell-Badge, a stem cell researcher at the National Institute for Medical Research, Evan Harris and Laurie Pycroft. Peter Hollins, chief executive of the British Heart Foundation and chair of the Coalition for Medical Progress, was also scheduled to attend but was unable to because of illness. [ 24 ] In spring 2008, Pro-Test spokesman Tom Holder set up Speaking of Research, a group based in the US with goals similar to those of Pro-Test. [ 25 ] On 22 April 2009, more than 700 staff, students and Los Angeles residents led by the neuroscientist Professor David Jentsch held a rally to launch the UCLA chapter of Pro-Test, and to stand up to the animal rights extremists who had targeted Prof. Jentsch and other scientists in a campaign of harassment and arson. [ 26 ] [ 27 ] At the event, Tom Holder announced the launch of the Pro-Test Petition, which aims to give people in the US the "opportunity to show [their] support for the scientists and [their] opposition to the use of threats and violence". [ 28 ] This petition, in defence of animal research, is similar to The People's Petition, which gained over 20,000 signatures in the United Kingdom.
An unnamed Oxford academic told the BBC that "a war is looming over 'scientific freedom' and the 'future of progress'", and suggested that the Pro-Test campaign was part of a wider reaction against animal-rights activism. [ 29 ] Pro-Test took the case for animal research to Parliament, participating in a debate at the Associate Parliamentary Group for Animal Welfare (APGAW). The debate focused specifically on whether the Oxford biomedical research lab should be built, and involved both MPs and members of the public. The principal speakers were Iain Simpson, press officer for Pro-Test, and Dr. Jarrod Bailey of Europeans for Medical Progress. [ 30 ] Pro-Test handed out doughnuts and cakes to workers on the South Parks Road site on 31 March 2006 to show support for their work. [ 31 ] Pro-Test fielded Pycroft for a debate at the Oxford Union on the motion "This house would not test on animals". Supporting the motion were Dr Gill Langley, Dr Andrew Knight, Uri Geller and Alistair Currie. On the opposing side were Pycroft, Professor Colin Blakemore, Professor John Stein and Professor Lord Robert Winston. The motion was defeated, with 273 of the Union members voting against it and 48 in favour. A cross-college student referendum proposed by Pro-Test was held on 16 November 2006. It proposed support for the Oxford lab's construction and for animal testing in general, and found support from approximately 90% of voters. [2] On 9 May 2006, the BBC reported that Pro-Test had bought ten shares in GlaxoSmithKline (GSK) as a "gesture of solidarity" with the company and its investors. An animal rights group had earlier sent letters to individual shareholders threatening to reveal personal details unless their shares were sold. The letters explained that GSK's investors were targeted because of the company's association with Huntingdon Life Sciences. Pro-Test announced that its share purchase was to demonstrate that "intimidation has no place in the UK". [ 32 ] British Prime Minister Tony Blair gave his support to Pro-Test and The People's Petition in an article for the Sunday Telegraph, citing "the Pro-Test demonstration in Oxford, which... deserves support" as an example of the change in public attitudes in the UK. [3] [4] [5] The BBC programme Newsnight hosted a debate on animal testing on 24 July 2006. Tipu Aziz, John Stein and Iain Simpson of Pro-Test featured in the debate, as did members of SPEAK and Europeans for Medical Progress. [ 33 ] In February 2011, five years after its first rally, Pro-Test announced that it had wound up its activities, claiming to have "successfully met its goals of defending the construction of the Oxford Lab, increasing awareness of the importance of animal research, and bringing the public on-side in support of life-saving medical research." Its initially US-based spin-off, Speaking of Research, "continues to be active in the UK and US." [ 34 ] In September 2012, an Italian spin-off of Pro-Test was created and named "Pro-Test Italia". [ 35 ] It was founded by a group of scientists and students concerned about the spiralling violence and the pressure on government and public opinion against animal testing; these circumstances led to the closure of "Green Hill", a beagle-breeding facility in Northern Italy, in July 2012, [ 36 ] after several raids by animal-rights activists during the previous months, one of which included the theft of some dogs from the facility on 28 April 2012.
On 20 April 2013, another raid on an animal-testing facility took place at the University of Milan, [ 37 ] which led to the release of mice and rabbits and substantial damage to research that had been carried out over many years. [ 38 ] It was carried out by the same group of activists, united under the banner of "Stop Green Hill". Following this event, Pro-Test Italia called for a rally in defense of animal testing on 1 June in Milan. [ 39 ] It was meant to condemn the animal-rights activists' actions and to raise awareness of the importance of animal testing in medical research. [ 40 ] The protest also received positive press coverage in international scientific journals such as Nature [ 40 ] and The Scientist. [ 39 ] Some animal-rights activists tried to interfere, but the police prevented any escalation. On 8 June 2013, Pro-Test Italia organized in various Italian cities [ 41 ] the event "Italia unita per la corretta informazione scientifica" (Italy united for correct scientific information). [ 42 ] [ 43 ] On 19 September 2013 a second demonstration took place, [ 44 ] this time in Rome, to persuade the Italian government to revise the national transposition of the European Directive 2010/63/EU, which could put biomedical research in Italy at risk. In May 2015, a group of students and scientists in Germany decided to follow the example of their colleagues in the UK and Italy and founded "Pro-Test Deutschland". [ 45 ] Pro-Test Deutschland is a non-profit organization that began as a reaction to the decision by Nikos Logothetis, director of the Max Planck Institute for Biological Cybernetics in Tübingen, to discontinue his research with nonhuman primates. [ 46 ] Logothetis's decision came after an undercover animal rights activist had filmed in the monkey facility of the Tübingen institute. The film was broadcast on national television in September 2014, leading to protests and hostility against the institute and against animal research in general. After these events, the scientific community largely failed to come out publicly in support of basic animal research like that conducted at the Tübingen institute, and many officials seemed unprepared for such a situation. Pro-Test Deutschland therefore decided to promote the education of its members and the public in how to communicate effectively about animal research. Pro-Test Deutschland issued a mission statement pointing out that scientists do not lack moral fibre but rather a voice to speak about science; the group intends to lend its voice so that the public and scientists can engage in an informed and fair debate. [ 47 ] Unlike Pro-Test UK and Pro-Test Italia, which take a very vocal position for animal research and raise support through public actions and demonstrations, Pro-Test Deutschland is more interested in sharing information and engendering an open, educated and unbiased debate. To date, Pro-Test Deutschland mostly focuses its activities on maintaining an informative and well-balanced website containing FAQ and fact-checking sections, as well as on community outreach and media communication. Additionally, Pro-Test Deutschland engages with the Tübingen public more directly by such means as information booths in the Market Square. Since journalists in Germany wishing to report on animal research had previously lacked reliable information in German, Pro-Test Deutschland quickly received a lot of attention, with national newspapers printing interviews [ 48 ] and national radio inviting one of its speakers to panel discussions. [ 49 ] Pro-Test Deutschland, initially based in Tübingen, has since grown to include students and scientists in other German towns and cities such as Frankfurt, Bonn, Münster, Göttingen, Leipzig and Berlin. [ 50 ]
https://en.wikipedia.org/wiki/Pro-Test
The Pro-Truth Pledge is an initiative promoting truth-seeking and rational thinking, particularly in politics. [ 1 ] [ 2 ] [ 3 ] The pledge itself reads: "I pledge my earnest efforts to: share truth, honor truth, encourage truth." First published in December 2016, the pledge is a movement and initiative of the Rational Politics project of Intentional Insights, a nonprofit organization dedicated to promoting rational thinking and good decision making in various areas of life. [ 4 ] The Pro-Truth Pledge is partly a reaction (and a would-be answer) to recent political trends in the US and UK, such as alternative facts, the growth of fake news, and post-truth politics, all of which are seen by pledge-takers as acute problems. [ 5 ] [ 6 ] [ 7 ] The founders of the Pro-Truth Pledge come from its parent organization, Intentional Insights. The behavioral and social science methodologies behind the Pro-Truth Pledge were applied to the topic by Gleb Tsipursky, one of the founders of Intentional Insights. [ 8 ] [ 9 ] According to the project's home page, as of August 26, 2018, there were 8,374 signatories to the pledge, including 85 organizations, 625 government officials, and 850 public figures [ 10 ] (including Jonathan Haidt, Michael Shermer, Steven Pinker and Pierre Whalon). [ 11 ] [ 12 ] The Pro-Truth Pledge has received media coverage. [ 7 ] At least two peer-reviewed studies have been conducted to determine the effectiveness of taking the Pro-Truth Pledge. A study published in the journal Behavior and Social Issues examined the sharing of news-related content on Facebook before and after taking the pledge. The findings "suggest that taking the PTP had a statistically significant effect on behavior change in favor of more truthful sharing on Facebook." [ 13 ] Another study, published in the Journal of Social and Political Psychology, used a different methodology and reached a similar conclusion: "taking the pledge results in a statistically significant increase in alignment with the behaviors of the pledge." [ 14 ] The pledge has been translated into Spanish, Hungarian, Russian, Ukrainian, Portuguese and German, but the map of pledge-takers shows that most (above 90%) live in North America, mainly in the US. [ 15 ]
https://en.wikipedia.org/wiki/Pro-Truth_Pledge
Pro-aging trance, also known as pro-aging edifice, [ 1 ] is a term coined by the British author and biomedical gerontologist Aubrey de Grey to describe the broadly positive and fatalistic attitude toward aging in society. According to de Grey, the pro-aging trance explains why many people gloss over aging through irrational thought patterns. [ 2 ] [ 3 ] The concept holds that the thought of one's own body slowly but ceaselessly deteriorating is so burdensome that it seems most sensible, from a psychological point of view, to try to put it out of one's mind. [ 4 ] Since aging has been present throughout human history, this coping strategy would be deeply rooted in human thinking. [ 5 ] It is striking that, in defending their point of view, those affected often commit fallacies which, from experience, would not be expected of them in a different context. [ 6 ] The name, according to de Grey, comes from the similarity of those affected to hypnotised people, whose subconscious minds in the trance state prefer to resort to illogical explanations rather than abandon a deeply held belief. [ 7 ] The pro-aging trance consists both in the belief that the aging process is inevitable and will not be prevented even by future developments, and in the view that any success in the fight against aging would have mainly negative societal effects. [ 8 ] Examples cited include boredom, overpopulation, unresolved problems regarding current pension systems, and dictators living forever, [ 9 ] but there is no nuanced and factual discussion of counter-arguments and proposed solutions [ 10 ] and no weighing of these potential disadvantages against the benefits of eliminating aging (such as saving about 100,000 lives per day). [ 11 ] De Grey assumes that robust mouse rejuvenation will bring about a paradigm shift in society in this regard. [ 12 ] The phenomenon of the pro-aging trance is a hurdle to the rapid development of anti-aging medicine. [ 13 ] The reason is that it takes time for people to break out of it, and the resulting lack of public support means low research funding. [ 14 ] [ 15 ] Furthermore, aging is not socially perceived as a disease to be fought, [ 12 ] [ 15 ] which is why it is more difficult to get support for fighting it than for fighting cancer, Alzheimer's disease, or similar illnesses. De Grey sees the reason for this in the rhetoric of many gerontologists during the 1950s, 1960s, and 1970s, who in public communication usually drew a line between age-related diseases and "aging itself", even though the former are merely late stages of aging and therefore should not be viewed independently of the aging process. [ 16 ] Moreover, he argues that the post-aging world is portrayed predominantly as dystopian in fiction, reinforcing people in their assumption that defeating aging is undesirable. [ 17 ] The American philosopher Benjamin Ross criticises de Grey's approach to aging in his dissertation, arguing that de Grey's activism and the associated intention to wake people up from the pro-aging trance are themselves, whether he realises it or not, defined by aging. Ross contends that de Grey and other anti-aging activists build almost their entire lives around the fact of age-related death; by achieving their goal of defeating the pro-aging trance and, by extension, aging, they would therefore also abolish an important aspect of their identity and the very circumstance that currently gives meaning to their lives.
[ 18 ] Other works are also critical of using the term "trance" to condemn opposition to anti-aging. For example, it has been argued that this, just like the "deathism" denounced by Nick Bostrom, prevents an evaluation of the discussion beyond the binary view of "death bad, extended life good". [ 19 ] The German bioethicist Mark Schweda argues that far-reaching interventions in the aging process must always be carefully weighed, but that in the meantime no one can invoke the image of aging as a "totally unavailable natural reality", if only because scientific and cultural developments have already made it obsolete. At the same time, however, he criticises the modern "naturalistic" view of aging, which reduces it to physical decay and ignores all other aspects. [ 20 ] Another bioethicist, Gregor Wolbring, agreed that longevity researchers reject the rhetoric of ending aging entirely, but contended that the ramifications of the proposal raised complications. [ 21 ] Arthur Diamond, author of Openness to Creative Destruction: Sustaining Innovative Dynamism, embraced the concept as something needing to be conquered if death is to be overcome. [ 22 ] The described pro-aging attitude is compared to Stockholm syndrome by anti-aging advocates in the context of examining possible reasons for rejecting life-prolonging technologies: just as hostages sympathise with their captors after a certain period of time, people come to terms with the idea that they will age and eventually die. [ 23 ] [ 24 ] The Russian computer scientist and biotechnologist Alex Zhavoronkov assumes that the cause of the pro-aging trance lies in the tendency of people not to want to get their hopes up unnecessarily. He also posits that once the possibility of a dramatic extension of the healthy human lifespan is raised, it can trigger feelings of guilt about doing nothing to hasten its implementation, which is why it is easier to block it out. [ 25 ] The American social psychologist Tom Pyszczynski, one of the founding psychologists of terror management theory, explains the opposition to life-prolonging therapies with exactly this model. According to him, the cause of that opposition is, paradoxically, that the critics fear death and actually long for radical life extension. However, since they do not consider it feasible or likely within their remaining lifetime, they try to deal with the terror caused by their own mortality by investing in a cultural worldview, in the hope of achieving literal or symbolic immortality. The actual possibility of life extension challenges the beliefs and values that serve as their protection from death-related thoughts. It thus generates the need to defend those beliefs and to object to treatments that would actually extend lifespan. This goes hand in hand with the mortality salience hypothesis. [ 26 ] According to representatives of the anti-aging movement, learned helplessness could also play a role in why many people resign themselves to aging. [ 27 ] In 1967, the psychologist and behavioural scientist Martin Seligman showed that dogs exposed to mild electric shocks, having learned that they could do nothing about them, tended to continue to endure the shocks afterwards, even when given the opportunity to avoid them.
Proponents of life extension compare this to the attitude which many people show toward their own aging process: in their opinion, these people have learned that any attempt to fight against aging is in vain and will therefore disregard new possibilities. [ 27 ]
https://en.wikipedia.org/wiki/Pro-aging_trance
Pro-gastrin-releasing peptide, also known as Pro-GRP, is the precursor of gastrin-releasing peptide (GRP), a neurotransmitter that belongs to the bombesin-related neuromedin B family. GRP stimulates the secretion of gastrin in order to increase the acidity of the gastric acid. Pro-GRP is a peptide composed of 125 amino acids, expressed in the nervous system and digestive tract. [ 1 ] [ 2 ] It is distinct from progastrin, a peptide of 80 amino acids that is the precursor of gastrin in its intracellular form and an oncogene in its extracellular form (hPG80). [ 3 ] [ 4 ] The presence of GRP in lung cancer samples was identified in 1983. [ 5 ] In pathological situations, GRP has mitogenic activity in vitro in many cancers including pancreatic cancer, small cell lung carcinoma, prostate cancer, kidney cancer, breast cancer and colorectal cancer. [ 6 ] [ 7 ] [ 8 ] [ 9 ] GRP could operate as an autocrine growth factor. In cancers, GRP induces cell growth and inhibits apoptosis by shutting down the endoplasmic reticulum stress pathway. [ 10 ] The mechanisms of the affected signaling pathways have not been established. [ 11 ] As early as 1994, research began on Pro-GRP as a biomarker for small-cell lung carcinoma. [ 12 ] Because of the very short half-life of GRP (2 minutes), Pro-GRP is used instead for measurement and analysis. Since then, Pro-GRP has been used as a tumor marker for patients with small-cell lung carcinoma in limited and extended stages. [ 13 ]
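To see why the precursor is the practical analyte, consider simple first-order decay. This worked example is an illustration added for clarity, not a calculation from the cited sources; it assumes only the 2-minute half-life quoted above:

$$ \frac{C(t)}{C_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} = 2\ \text{min} $$

If a blood sample is processed, say, 30 minutes after the draw, the GRP concentration has fallen through 15 half-lives, leaving a fraction $(1/2)^{15} \approx 3\times10^{-5}$ (about 0.003%) of the original, whereas the more stable Pro-GRP remains at a measurable level.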
https://en.wikipedia.org/wiki/Pro-gastrin-releasing-peptide
Proponents of nuclear energy contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on imported energy sources. Nuclear energy is often considered to be a controversial area of public policy. [ 1 ] [ 2 ] The debate about nuclear power peaked during the 1970s and 1980s, when in some countries it "reached an intensity unprecedented in the history of technology controversies". [ 3 ] [ 4 ] Proponents of nuclear energy point to the fact that nuclear power produces very little conventional air pollution, greenhouse gases, and smog, in contrast to fossil fuel sources of energy. [ 5 ] Proponents also argue that the perceived risks of storing waste are exaggerated, and point to an operational safety record in the Western world which is excellent in comparison with the other major kinds of power plants. [ 6 ] Historically, there have been numerous proponents of nuclear energy, including Georges Charpak, Glenn T. Seaborg, Edward Teller, Alvin M. Weinberg, Eugene Wigner, Ted Taylor, and Jeff Eerkens. There are also scientists who write favorably about nuclear energy in terms of the broader energy landscape, including Robert B. Laughlin, Michael McElroy, and Vaclav Smil. In particular, Laughlin writes in Powering the Future (2011) that expanded use of nuclear power will be nearly inevitable, either because of a political choice to leave fossil fuels in the ground or because fossil fuels become depleted. Globally, there are dozens of companies with an interest in the nuclear industry, including Areva, BHP, Cameco, China National Nuclear Corporation, EDF, Iberdrola, Nuclear Power Corporation of India, Ontario Power Generation, Rosatom, Tokyo Electric Power Company, and Vattenfall. Many of these companies lobby politicians and others about nuclear power expansion, undertake public relations activities, petition government authorities, and influence public policy through referendum campaigns and involvement in elections. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] The nuclear industry has "tried a variety of strategies to persuade the public to accept nuclear power", including the publication of numerous "fact sheets" that discuss issues of public concern. [ 12 ] Nuclear proponents have worked to boost public support by offering newer, safer reactor designs, including designs that incorporate passive safety and small modular reactors. Since 2000 the nuclear industry has undertaken an international media and lobbying campaign to promote nuclear power as a solution to the greenhouse effect and climate change. Though reactor operation is free of carbon dioxide emissions, other stages of the nuclear fuel chain – from uranium mining to reactor decommissioning and radioactive waste management – use fossil fuels and hence emit carbon dioxide. The Nuclear Energy Institute has formed various sub-groups to promote nuclear power. These include the Washington-based Clean and Safe Energy Coalition, which was formed in 2006 and led by Patrick Moore. Christine Todd Whitman, former head of the US EPA, has also been involved. Clean Energy America is another group sponsored by the NEI. [ 13 ] In Britain, James Lovelock, well known for his Gaia hypothesis, began to support nuclear power in 2004. He is a patron of Supporters of Nuclear Energy (SONE). SONE also recognise that there are serious technical challenges associated with an electric grid reliant on intermittent and low-density sources of energy.
The main nuclear lobby group in Europe is FORATOM. [ 13 ] In 2014, the U.S. nuclear industry began a new lobbying effort, hiring three former senators – Evan Bayh, a Democrat; Judd Gregg, a Republican; and Spencer Abraham, a Republican – as well as William M. Daley, a former staffer to President Obama. The initiative is called Nuclear Matters, and it has begun a newspaper advertising campaign. [ 14 ] In March 2017, a bipartisan group of eight senators, five Republicans and three Democrats, introduced S. 512, the Nuclear Energy Innovation and Modernization Act (NEIMA). The legislation would help to modernize the Nuclear Regulatory Commission (NRC), support the advancement of the nation's nuclear industry, and develop the regulatory framework to enable the licensing of advanced nuclear reactors, while improving the efficiency of uranium regulation. Letters of support for this legislation were provided by thirty-six organizations, including for-profit enterprises, non-profit organizations and educational institutions. A number of prominent entities from that group, along with other well-known organizations, actively support the continued or expanded use of nuclear power as a solution for providing clean, reliable energy. The United States generates about 19% of its electricity from nuclear power plants. Nearly 60% of all clean energy generated in the U.S. comes from nuclear power. [ citation needed ] Studies have shown that closing a nuclear power plant results in greatly increased carbon emissions, as only burning coal or natural gas can make up for the massive amount of energy lost from a nuclear power plant. [ citation needed ] Even though there have long been protests against nuclear power, the effect of long-term scrutiny has elevated safety within the industry, making nuclear power the safest form of energy in operation today, despite the fact that many continue to fear it. [ citation needed ] Nuclear power plants create thousands of jobs, many of them in health and safety, and seldom experience protests from area residents, as they bring large amounts of economic activity, attract educated employees and leave the air clean and safe, unlike oil, coal or gas plants, which bring disease and environmental damage to their workers and neighbors. [ citation needed ] Nuclear engineers have traditionally worked, directly or indirectly, in the nuclear power industry, in academia or for national laboratories. More recently, young nuclear engineers have started to innovate and launch new companies, becoming entrepreneurs in order to bring their enthusiasm for using the power of the atom to address the climate crisis. In June 2015, Third Way released a report identifying 48 nuclear start-ups or projects organized to work on nuclear innovations in what is being called "advanced nuclear" design. [ 21 ] Current research in the industry is directed at producing economical, proliferation-resistant reactor designs with passive safety features. Although government labs research the same areas as industry, they also study a myriad of other issues such as nuclear fuels and nuclear fuel cycles, advanced reactor designs, and nuclear weapon design and maintenance. A principal pipeline for trained personnel for US reactor facilities is the Navy Nuclear Power Program.
The job outlook for nuclear engineering from 2012 to 2022 is predicted to grow 9%, owing to many older nuclear engineers retiring, safety systems needing to be updated in power plants, and the advances made in nuclear medicine. [ 22 ] A pragmatic need for secure energy supply is a leading reason for many to support nuclear energy. Many people, including former opponents of nuclear energy, now say that nuclear energy is necessary for reducing carbon dioxide emissions, regarding the threat to humanity from climate change as far worse than any risk associated with nuclear energy. Many nuclear energy supporters, but not all, acknowledge that renewable energy is also important to the effort to eliminate emissions. Early environmentalists who publicly voiced support for nuclear power include James Lovelock, originator of the Gaia hypothesis; Patrick Moore, an early member of Greenpeace and former president of Greenpeace Canada; George Monbiot; and Stewart Brand, creator of the Whole Earth Catalog. [ 23 ] [ 24 ] Lovelock goes further, disputing claims about the danger of nuclear energy and its waste products. [ 25 ] In a January 2008 interview, Moore said that "It wasn't until after I'd left Greenpeace and the climate change issue started coming to the forefront that I started rethinking energy policy in general and realized that I had been incorrect in my analysis of nuclear as being some kind of evil plot." [ 26 ] There are increasing numbers of scientists and laymen who are environmentalists with views that depart from the mainstream environmental stance rejecting a role for nuclear power in the climate fight (once labelled "Nuclear Greens", [ 27 ] some now consider themselves Ecomodernists). Other academics and professionals, alarmed by the exaggerated impact of media coverage of nuclear accidents, have formed a group called Scientists for Accurate Radiation Information (SARI). [ 28 ] The group was formed after the 2011 tsunami in Japan caused an accidental release at Fukushima Daiichi, in the wake of which local people were unnecessarily relocated and psychologically stressed by false fears. The evacuation is estimated to have produced increased mortality rates equivalent to 2,313 deaths. [ 29 ] Such suffering reflects the "nocebo" effect, in which a negative outcome occurs because of a belief that an intervention will cause harm. Others who have spoken publicly on the benefits of nuclear power include climate and energy scientists, who stated in 2013 that there is no credible path to climate stabilization that does not include a substantial role for nuclear power, [ 69 ] [ 70 ] [ 71 ] [ 72 ] and conservation biologists, who stated in 2014 that to replace the burning of fossil fuels, if we are to have any chance of mitigating severe climate change, "[…we] need to accept a substantial role for advanced nuclear power systems with complete fuel recycling". [ 73 ] [ 74 ] [ 75 ] [ 76 ] The International Thermonuclear Experimental Reactor (ITER), located in France, is the world's largest and most advanced experimental tokamak nuclear fusion reactor project. A collaboration between the European Union (EU), India, Japan, China, Russia, South Korea and the United States, the project aims to make the transition from experimental studies of plasma physics to electricity-producing fusion power plants. However, the World Nuclear Association says that nuclear fusion "presents so far insurmountable scientific and engineering challenges".
[ 78 ] Construction of the ITER facility began in 2007, but the project has run into many delays and budget overruns . The facility is now not expected to begin operations until the year 2027 – 11 years after initially anticipated. [ 79 ] Another nuclear power program is the Energy Impact Center 's OPEN100 project. OPEN100 was launched in 2020 and has published open-source blueprints for a nuclear power plant with a 100-megawatt pressurized water reactor . [ 80 ] The project aims to minimize the costs and duration of construction to increase nuclear power supply and potentially reverse the effects of climate change. [ 81 ]
https://en.wikipedia.org/wiki/Pro-nuclear_energy_movement
Pro-oxidants are chemicals that induce oxidative stress, either by generating reactive oxygen species or by inhibiting antioxidant systems. [ 1 ] The oxidative stress produced by these chemicals can damage cells and tissues; for example, an overdose of the analgesic paracetamol (acetaminophen) can fatally damage the liver, partly through its production of reactive oxygen species. [ 2 ] [ 3 ] Some substances can serve as either antioxidants or pro-oxidants, depending on conditions. [ 4 ] Important conditions include the concentration of the chemical and whether oxygen or transition metals are present. While thermodynamically very favored, the reduction of molecular oxygen or peroxide to superoxide or hydroxyl radical, respectively, is spin forbidden. This greatly reduces the rates of these reactions, thus allowing aerobic life to exist. As a result, the reduction of oxygen typically involves either the initial formation of singlet oxygen, or spin–orbit coupling through the reduction of a transition-series metal such as manganese, iron, or copper. The reduced metal then transfers a single electron to molecular oxygen or peroxide. [ citation needed ] Transition metals can serve as pro-oxidants. For example, chronic manganism is a classic "pro-oxidant" disease. [ 5 ] Another disease associated with the chronic presence of a pro-oxidant transition-series metal is hemochromatosis, associated with elevated iron levels; similarly, Wilson's disease is associated with elevated tissue levels of copper. Such syndromes tend to share a common symptomology. Diabetes, for instance, is an occasional symptom of hemochromatosis, another name for which is "bronze diabetes". The pro-oxidant herbicide paraquat, Wilson's disease, and striatal iron have similarly been linked to human Parkinsonism; paraquat also produces Parkinsonian-like symptoms in rodents. [ citation needed ] Fibrosis, or scar formation, is another pro-oxidant-related symptom: intraocular copper (vitreous chalcosis) is associated with severe vitreous fibrosis, as is intraocular iron, and liver cirrhosis is a major symptom of Wilson's disease. The pulmonary fibrosis produced by paraquat and the antitumor agent bleomycin is also thought to be induced by the pro-oxidant properties of these agents. It may be that oxidative stress produced by such agents mimics a normal physiological signal for fibroblast conversion to myofibroblasts. [ citation needed ] Vitamins that are reducing agents can be pro-oxidants. Vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide; [ 6 ] however, it can also reduce metal ions, which leads to the generation of free radicals through the Fenton reaction. [ 7 ] [ 8 ] The metal ion in this reaction can be reduced, oxidized, and then re-reduced, in a process called redox cycling that can generate reactive oxygen species. [ citation needed ] The relative importance of the antioxidant and pro-oxidant activities of antioxidant vitamins is an area of current research, but vitamin C, for example, appears to have a mostly antioxidant action in the body. [ 7 ] [ 9 ] However, less data is available for other dietary antioxidants, such as polyphenol antioxidants, [ 10 ] zinc, [ 11 ] and vitamin E. [ 12 ] Several important anticancer agents both bind to DNA and generate reactive oxygen species. These include adriamycin and other anthracyclines, bleomycin, and cisplatin.
These agents may show specific toxicity towards cancer cells because of the low level of antioxidant defenses found in tumors. Recent research demonstrates that redox dysregulation originating from metabolic alterations and dependence on mitogenic and survival signaling through reactive oxygen species represents a specific vulnerability of malignant cells that can be selectively targeted by pro-oxidant non-genotoxic redox chemotherapeutics. [ 13 ] Photodynamic therapy is used to treat some cancers as well as other conditions. It involves the administration of a photosensitizer followed by exposing the target to appropriate wavelengths of light. The light excites the photosensitizer, causing it to generate reactive oxygen species, which can damage or destroy diseased or unwanted tissue. [ citation needed ]
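As a concrete illustration of the redox cycling described above (added here for clarity; the stoichiometry is textbook Fenton chemistry, not drawn from the cited references):

$$ \mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + {}^{\bullet}OH + OH^-} \quad \text{(Fenton reaction)} $$
$$ \mathrm{Fe^{3+} + AscH^- \rightarrow Fe^{2+} + Asc^{\bullet-} + H^+} \quad \text{(re-reduction by ascorbate)} $$

Each turn of the cycle regenerates Fe²⁺ and releases a hydroxyl radical, so a catalytic amount of iron plus a reducing agent such as vitamin C can sustain radical production for as long as peroxide is available, which is why a nominally antioxidant vitamin can act as a pro-oxidant in the presence of free transition metals.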
https://en.wikipedia.org/wiki/Pro-oxidant
ProBiS is computer software that predicts binding sites and their corresponding ligands for a given protein structure. ProBiS was initially developed as the ProBiS algorithm by Janez Konc and Dušanka Janežič in 2010 [ 1 ] and is now available as the ProBiS server, the ProBiS CHARMMing server, the ProBiS algorithm and the ProBiS plugin. The name ProBiS derives from the purpose of the software itself: to predict, for a given Protein structure, Binding Sites and their corresponding ligands. ProBiS began as an algorithm that detects structurally similar sites on protein surfaces by local surface structure alignment using a fast maximum clique algorithm. The ProBiS algorithm was followed by the ProBiS server, which provides access to the program and detects protein binding sites based on local structural alignments. There are two ProBiS servers available: the ProBiS server and the ProBiS CHARMMing server. The latter connects the ProBiS and CHARMMing servers into one functional unit that enables prediction of protein–ligand complexes and allows for their geometry optimization and interaction energy calculation. The ProBiS CHARMMing server with these additional functions can only be used at the National Institutes of Health, USA; otherwise it acts as a regular ProBiS server. Additionally, a ProBiS PyMOL plugin and a ProBiS UCSF Chimera plugin have been made. Both plugins are connected via the internet to a newly prepared database of pre-calculated binding site comparisons to allow fast prediction of binding sites in existing proteins from the Protein Data Bank. They enable viewing of predicted binding sites and ligand poses in three-dimensional graphics.
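The maximum-clique formulation mentioned above can be sketched as follows. This is a simplified illustration of the general product-graph idea behind structural alignment, not ProBiS's actual code or data structures; residue compatibility is reduced here to a single distance tolerance, and the function and variable names are hypothetical:

    # Local structural alignment cast as a maximum-clique search on a
    # "product graph": each node pairs a residue of protein A with one of
    # protein B, and two nodes are connected when the paired residues are
    # geometrically consistent (similar intra-protein distances). The
    # largest clique is then the largest mutually consistent alignment.
    import itertools
    import networkx as nx
    import numpy as np

    def max_clique_alignment(coords_a, coords_b, tol=1.0):
        g = nx.Graph()
        pairs = list(itertools.product(range(len(coords_a)), range(len(coords_b))))
        g.add_nodes_from(pairs)
        for (i, k), (j, l) in itertools.combinations(pairs, 2):
            if i == j or k == l:
                continue  # each residue may appear in the alignment only once
            d_a = np.linalg.norm(coords_a[i] - coords_a[j])
            d_b = np.linalg.norm(coords_b[k] - coords_b[l])
            if abs(d_a - d_b) < tol:  # distances agree within tolerance
                g.add_edge((i, k), (j, l))
        # Exhaustive clique enumeration; fine for toy inputs only.
        return max(nx.find_cliques(g), key=len)

    a = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0], [11.4, 0, 0]])
    b = np.array([[1.0, 2, 0], [4.8, 2, 0], [8.6, 2, 0], [50.0, 2, 0]])
    print(max_clique_alignment(a, b))  # one maximal alignment of three residue pairs

ProBiS itself uses a much faster dedicated maximum-clique algorithm and richer surface descriptors; an exact enumeration like the one above is exponential in the worst case and is shown only to make the formulation concrete.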
https://en.wikipedia.org/wiki/ProBiS
ProGlycProt is a database of experimentally verified glycosites and glycoproteins of prokaryotes. [ 1 ]
https://en.wikipedia.org/wiki/ProGlycProt
ProGuard is an open-source command-line tool that shrinks, optimizes and obfuscates Java code. It is able to optimize bytecode as well as detect and remove unused instructions. [ 4 ] ProGuard is free software and is distributed under the GNU General Public License, version 2. [ 3 ] ProGuard was distributed as part of the Android SDK and ran when building applications in release mode. [ 5 ] ProGuard obfuscates Java and Android programs by renaming classes, fields, and methods using meaningless names (an implementation of security through obscurity), making it more difficult to reverse-engineer the final application. [ 6 ] Besides removing unused instructions from the compiled bytecode, ProGuard optimizes it using techniques such as control-flow analysis, data-flow analysis, partial evaluation, static single assignment, global value numbering, and liveness analysis. [ 6 ] ProGuard can remove many types of unused and duplicated code, perform over 200 peephole optimizations, reduce variable allocation, inline constants and short methods, simplify tail-recursive calls, and remove logging code, among other things. [ 6 ]
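As a rough illustration of how these features are driven in practice, here is a minimal ProGuard configuration of the kind a build might use. The option names are standard ProGuard options, but the jar and class names are hypothetical placeholders:

    -injars  app.jar
    -outjars app-min.jar
    -libraryjars <java.home>/jmods/java.base.jmod

    # Keep the entry point so shrinking and renaming do not break startup.
    -keep public class com.example.Main {
        public static void main(java.lang.String[]);
    }

    # Treat logging calls as side-effect-free so the optimizer can strip them.
    -assumenosideeffects class android.util.Log {
        public static int d(...);
        public static int v(...);
    }

Everything not reachable from the -keep roots is removed, and the surviving classes and members are renamed to short, meaningless identifiers.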
https://en.wikipedia.org/wiki/ProGuard
Program for Monitoring Emerging Diseases (also known as ProMED-mail, abbreviated ProMED) is among the largest publicly available emerging diseases and outbreak reporting systems in the world. [ 1 ] The purpose of ProMED is to promote communication among the international infectious disease community, including scientists, physicians, veterinarians, epidemiologists, public health professionals, and others interested in infectious diseases on a global scale. Founded in 1994, ProMED pioneered the concept of electronic, Internet-based emerging disease and outbreak detection reporting. [ 2 ] In 1999, ProMED became a program of the International Society for Infectious Diseases. As of 2016, ProMED has more than 75,000 subscribers in over 185 countries. [ 3 ] With an average of 13 posts per day, ProMED provides users with up-to-date information concerning infectious disease outbreaks on a global scale. [ citation needed ] One of the essential global health priorities is the timely recognition and reporting of emerging and re-emerging infectious diseases. Early recognition can enable coordinated and rapid responses to an outbreak, preventing catastrophic morbidity and mortality. Additionally, early detection can alleviate the grave economic hardship brought on by pandemics and emerging diseases. Burgeoning globalization of commerce, finance, manufacturing, and services has fostered ever-increasing movement of people, animals, plants, food, and animal feed. Other factors contributing to the risk of new pathogens emerging and known pathogens re-emerging include climate change, urbanization, land use changes, and political instability. Outbreaks that begin in the most remote parts of the world now spread swiftly to urban centres in countries far away. The epidemiological data in ProMED posts has been used to estimate mortality rates and demographic parameters for specific diseases. [ 4 ] [ 5 ] The severe acute respiratory syndrome (SARS) outbreak in 2003 and the Middle East respiratory syndrome (MERS) outbreak in 2012 demonstrated the importance of early identification of emerging disease occurrences. The initial outbreak reports in both events were posted by astute clinicians. The use of non-traditional information sources can provide prompt information to the international community on emerging infectious disease problems that have yet to be officially reported. [ 6 ] The early dissemination of information may lead to rapid official confirmation of ongoing outbreaks. The EpiCore programme, launched in March 2016 by various organizations including the patrons of ProMED-mail, makes use of volunteers throughout the world to find and report outbreaks using non-traditional methods. [ 7 ] Under the auspices of the Federation of American Scientists, ProMED-mail was founded in 1994 by Dr. Stephen Morse, then of Rockefeller University; Dr. Barbara Rosenberg of the State University of New York at Purchase; and Dr. Jack Woodall, then of the New York State Department of Health. [ 8 ] Originally envisioned as a direct scientist-to-scientist network, ProMED rapidly grew into a prototype outbreak reporting and discussion list, especially after the 1995 Ebola outbreak. The idea of a global network was first proposed by Donald A. Henderson in 1989. [ 9 ] ProMED played a crucial role in identifying the SARS outbreak early in 2003. [ 10 ] An astute physician in Silver Spring, Maryland, Stephen O.
Cunnion, MD, PhD, MPH, submitted the first report to the database for this emerging outbreak, allowing the ProMED community to track the outbreak nearly two months in advance of the worldwide alert. [ 11 ] The email read: "Have you heard of an epidemic in Guangzhou? An acquaintance of mine . . . reports that the hospitals there have been closed and people are dying." It was another month before the Chinese government officially acknowledged the outbreak. In 2012, the users of ProMED were again some of the first to identify the outbreak of MERS. A physician was responsible for identifying the outbreak and reporting the event to the ProMED community, which posted it online. [ 12 ] Eight days after the initial report, the Saudi Health Ministry announced the diagnosis of a new form of coronavirus. [ citation needed ] From 1994, the ProMED-mail system grew in parallel with the Public Health Agency of Canada's similar Global Public Health Intelligence Network (GPHIN). [ 13 ] At 23:59 on 30 December 2019, ProMED-mail received its first communication about the COVID-19 outbreak. [ 14 ] [ 15 ] On 3 August 2023, 21 of the 38 paid moderators and editors for the service went on strike, citing lack of support from ISID. [ 16 ] [ 17 ] The importance of using unofficial sources of information for public health surveillance has become increasingly recognized. [ 19 ] Sometimes referred to as "event-based surveillance" or "epidemic intelligence", informal disease reporting services, pioneered by ProMED, have become a crucial component of the overall global infectious disease surveillance picture. [ 20 ] According to the WHO Epidemic & Pandemic Alert & Response, more than 60% of initial outbreak reports come from informal sources, including ProMED-mail. [ citation needed ] ProMED-mail publishes reports to the website ProMEDmail.org in addition to sending e-mails to service subscribers. There is no cost to subscribe to the reporting network. ProMED encourages subscribers to contribute data, respond to requests for information, and collaborate in outbreak investigations and prevention efforts. [ citation needed ] Posts are published on a real-time basis. Reports detail infectious disease outbreaks with geotags and links to related media articles and other ProMED posts. Each report is screened and edited by a staff of expert moderators, ensuring the validity of the published information. There are 45 moderators for ProMED stationed throughout the world. Moderators review reports submitted within their region of oversight, drawing on their knowledge of the region and its infrastructure to provide accurate descriptions of the outbreak event. [ citation needed ] Because ProMED-mail is an independent program of the non-governmental, nonprofit International Society for Infectious Diseases, disease reporting is not subject to delay or suppression by governments for bureaucratic or strategic reasons. To provide access to a global audience, versions of ProMED are available in several languages and cover different regions, including ProMED-ESP (Spanish), ProMED-PORT (Portuguese), ProMED-RUS (Russian), ProMED-FRA (French), ProMED-MENA (Middle East and North Africa), ProMED-SoAs (South Asia and Arabic Summaries), as well as the English-language versions ProMED-MBDS (Mekong Basin region of Southeast Asia) and ProMED-EAFR (Africa). One Health is the principle that human, animal, and environmental health are inextricably linked and should no longer be studied in a siloed manner.
[ 21 ] ProMED has embodied this concept in the sphere of infectious disease reporting since its inception. It is estimated that 70% of emerging human diseases originate in other animal species – these are termed zoonotic diseases. As diseases in animal and agricultural species have health implications for humans, ProMED includes posts on emerging animal diseases and on diseases of agriculturally important plants, given their impact on human survival. [ citation needed ] ProMED-mail and HealthMap were awarded a grant from Google.org's Predict and Prevent initiative in October 2008. This collaboration combined ProMED-mail's global network of human, animal, and ecosystem health specialists with HealthMap's digital detection efforts. [ citation needed ]
https://en.wikipedia.org/wiki/ProMED-mail
ProMax is a chemical process simulator for process troubleshooting and design, developed and sold by Bryan Research and Engineering, Inc. Initially released in late 2005, ProMax is the continuation of two previous process simulators, PROSIM and TSWEET. ProMax is considered the industry standard for designing amine gas treating and glycol dehydration units. [ 1 ] In 1974 Bryan Research and Engineering (BR&E) began developing simulation software for sulfur recovery units with a command-line interface. In 1976 this program was released under the name SULFUR. Amine sweetening, for which BR&E is best known, was added in 1978, and the simulation package was renamed TSWEET. A second product, DEHY, was released in 1980 for modeling glycol dehydration units. Natural gas processing was added to the DEHY program in 1983 and the package was renamed PROSIM. In 1988 BR&E introduced a graphical user interface to both programs, a novelty for chemical process simulators at the time. TSWEET and PROSIM were both MS-DOS-based programs, and both were incorporated into ProMax. ProMax is a later-generation Windows application that uses Microsoft Visio as its graphical user interface. Capabilities beyond those already available in TSWEET and PROSIM were included in ProMax, enabling it to model almost any process in the oil and gas industry. [ 2 ] Bryan Research & Engineering, Inc. (BR&E) is a privately owned provider of software and engineering solutions to the oil, gas, refining and chemical industries. Since the company's inception in 1974, BR&E has combined research and development in process simulation to provide clients with simulation tools.
https://en.wikipedia.org/wiki/ProMax
ProSAS is a database describing the effects of splicing on the structure of a protein. [ 1 ]
https://en.wikipedia.org/wiki/ProSAS
ProVerif is a software tool for automated reasoning about the security properties of cryptographic protocols. The tool has been developed by Bruno Blanchet and others. Support is provided for cryptographic primitives including symmetric and asymmetric cryptography, digital signatures, hash functions, bit-commitment, and signature proofs of knowledge. The tool is capable of evaluating reachability properties, correspondence assertions and observational equivalence. These reasoning capabilities are particularly useful to the computer security domain since they permit the analysis of secrecy and authentication properties. Emerging properties such as privacy, traceability and verifiability can also be considered. Protocol analysis is considered with respect to an unbounded number of sessions and an unbounded message space. The tool is capable of attack reconstruction: when a property cannot be proved, an execution trace which falsifies the desired property is constructed. ProVerif has been used in case studies that include the security analysis of actual network protocols. Alternative analysis tools include AVISPA (for reachability and correspondence assertions), KISS and YAPA (for static equivalence), and CryptoVerif (for verification of security against polynomial-time adversaries in the computational model). The Tamarin Prover is a modern alternative to ProVerif, with excellent support for Diffie–Hellman equational reasoning and verification of observational equivalence properties.
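To give a flavor of what a ProVerif input looks like, here is a minimal model in the tool's applied-pi-calculus syntax: a secret is published under a symmetric encryption whose key never leaves the process, and a secrecy query asks whether an attacker can learn it. This is a generic tutorial-style sketch, not drawn from any particular case study:

    (* Can the attacker learn the secret s? *)
    type key.
    free c: channel.
    free s: bitstring [private].
    free k: key [private].

    (* Symmetric encryption as constructor/destructor *)
    fun senc(bitstring, key): bitstring.
    reduc forall m: bitstring, kk: key; sdec(senc(m, kk), kk) = m.

    query attacker(s).

    process
        out(c, senc(s, k))

ProVerif proves that attacker(s) is not derivable, so secrecy holds for an unbounded number of sessions; if k were instead made public, the tool would reconstruct an execution trace showing how the attacker decrypts s.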
https://en.wikipedia.org/wiki/ProVerif
Pro Tools is a digital audio workstation (DAW) developed and released by Avid Technology (formerly Digidesign ) [ 1 ] for Microsoft Windows and macOS . [ 2 ] It is used for music creation and production, sound for picture ( sound design , audio post-production and mixing ) [ 3 ] and, more generally, sound recording , editing, and mastering processes. Pro Tools operates both as standalone software and in conjunction with a range of external analog-to-digital converters and PCIe cards with on-board digital signal processors (DSP). The DSP is used to provide additional processing power to the host computer for processing real-time effects , such as reverb , equalization , and compression , [ 4 ] and to obtain lower-latency audio performance. [ 5 ] Like all digital audio workstation software, Pro Tools can perform the functions of a multitrack tape recorder and a mixing console , along with additional features that can only be performed in the digital domain, such as non-linear [ 6 ] and non-destructive editing (most audio handling is done without overwriting the source files), track compositing with multiple playlists, [ 7 ] time compression and expansion , pitch shifting , and faster-than-real-time mixdown. Audio, MIDI , and video tracks are graphically represented on a timeline. Audio effects , virtual instruments , and hardware emulators—such as microphone preamps or guitar amplifiers—can be added, adjusted, and processed in real-time in a virtual mixer . 16-bit, 24-bit, and 32-bit float audio bit depths at sample rates up to 192 kHz are supported. Pro Tools supports mixed bit depths and audio formats in a session: BWF / WAV (including WAVE Extensible, RF64 and BW64) and AIFF . It imports and exports MOV video files [ 8 ] and ADM BWF files (audio files with Dolby Atmos metadata); [ 9 ] it also imports MXF , ACID and REX files and the lossy formats MP3 , AAC , M4A , and audio from video files ( MOV , MP4 , M4V ). [ 10 ] The legacy SDII format was dropped with Pro Tools 10, [ 11 ] although SDII conversion is still possible on macOS. [ 10 ] Pro Tools has incorporated video-editing capabilities, so users can import and manipulate high-definition video file formats such as XDCAM, MJPG-A, PhotoJPG, DV25, and QuickTime . It features time code , tempo maps, elastic audio, and automation , and supports mixing in surround sound , Dolby Atmos and VR sound using Ambisonics . [ 12 ] The Pro Tools TDM mix engine, supported until 2011 with version 10, employed 24-bit fixed-point arithmetic for plug-in processing and 48-bit for mixing. Current HDX hardware systems, HD Native and native systems use 32-bit floating-point resolution for plug-ins and 64-bit floating-point summing. [ 4 ] The software and the audio engine were adapted to 64-bit architecture from version 11. [ 13 ] In 2022, Avid switched Pro Tools from a perpetual license to a subscription model. New users choose among three plans: Pro Tools Artist, which costs $9.99 per month or $99 per year; Pro Tools Studio, which costs $39.99 per month or $299 per year; and Pro Tools Flex, which costs $99.99 per month or $999 per year. [ 14 ] Later in 2022, Avid launched a free version: Pro Tools Intro. [ 15 ] In 2004, Pro Tools was inducted into the TECnology Hall of Fame , an honor given to "products and innovations that have had an enduring impact on the development of audio technology." [ 16 ] Pro Tools was developed by UC Berkeley graduates Evan Brooks, who majored in electrical engineering and computer science , and Peter Gotcher.
[ 17 ] In 1983, the two friends, sharing an interest in music and in electronic and software engineering, decided to study the memory mapping of the newly released E-mu Drumulator drum machine in order to create EPROM sound-replacement chips. The Drumulator was quite popular at the time, although it was limited to its built-in samples. [ 18 ] They started selling the upgrade chips one year later under their new Digidrums label. [ 19 ] Five different upgrade chips were available, offering different alternate drum styles. The chips, easily swapped with the original ones, enjoyed remarkable success among Drumulator users, selling 60,000 units overall. [ 20 ] When Apple released its first Macintosh computer in 1984, the pair set out to design a more functional and flexible solution that could take advantage of a graphical interface. [ 21 ] In collaboration with E-Mu , they developed a Mac-based visual sample editing system for the Emulator II keyboard, called Sound Designer, released under the Digidesign brand [ 22 ] and inspired by the interface of the Fairlight CMI . [ 23 ] This system, the first ancestor of Pro Tools, was released in 1985 at the price of US$995. [ 18 ] Brooks and Gotcher rapidly ported Sound Designer to many other sampling keyboards, such as the E-mu Emax , Akai S900 , Sequential Prophet 2000 , Korg DSS-1 , and Ensoniq Mirage . [ 23 ] Thanks to the universal file specification subsequently developed by Brooks with version 1.5, [ 23 ] Sound Designer files could be transferred via MIDI between sampling keyboards of different manufacturers. [ 24 ] This universal file specification, along with the printed source code to a 68000 assembly language interrupt-driven MIDI driver, was distributed through Macintosh MIDI interface manufacturer Assimilation, which manufactured the first MIDI interface for the Mac in 1985. Starting the same year, a dial-up service provided by Beaverton Digital Systems, called MacMusic, allowed Sound Designer users to download and install the entire Emulator II sound library on other, less expensive samplers: sample libraries could be shared across different manufacturers' platforms without copyright infringement. MacMusic contributed to Sound Designer's success by leveraging the universal file format and by operating the first online sample-file download site, many years before use of the World Wide Web soared. The service used 2400- baud modems and 100 MB of disk space with Red Ryder host on a 1 MB Macintosh Plus . [ 23 ] With the release of the Apple Macintosh II in 1987, which provided card slots, a hard disk, and more capable memory, Brooks and Gotcher saw the possibility of evolving Sound Designer into a full-featured digital audio workstation . They discussed with E-mu the opportunity of using the Emulator III as a platform for their updated software, but E-mu rejected this offer. They therefore decided to design both the software and the hardware themselves. Motorola , which was working on its 56K series of digital signal processors , invited the two to participate in its development. Brooks designed a circuit board for the processor, then developed the software to make it work with Sound Designer. A beta version of the DSP was ready by December 1988. [ 21 ] The combination of the hardware and the software was called Sound Tools. Advertised as the "first tapeless studio", [ 21 ] it was presented on January 20, 1989, at the NAMM International Music & Sound Expo .
The system relied on a NuBus card called Sound Accelerator, equipped with one Motorola 56001 processor. The card provided 16-bit playback and 44.1/48 kHz recording through a two-channel A/D converter (AD In), while the DSP handled signal processing, which included a ten-band graphic equalizer , a parametric equalizer , time stretching with pitch preservation, fade-in/fade-out envelopes, and crossfades ("merging") between two sound files. [ 25 ] [ 26 ] Sound Tools was bundled with Sound Designer II software, which was, at this time, a simple mono or stereo audio editor running on the Mac SE or Mac II ; digital audio acquisition from DAT was also possible. [ 27 ] A two-channel digital interface (DAT-I/O) with AES/EBU and S/PDIF connections was made available later in 1989, while the Pro I/O interface came out in 1990 with 18-bit converters. [ 1 ] The file format used by Sound Designer II (SDII) eventually became a standard for digital audio file exchange, until the WAV file format supplanted it a decade later. Since audio streaming and non-destructive editing were performed on hard drives, the software was still limited by their performance; densely edited tracks could cause glitches. [ 28 ] However, rapidly evolving computer technology allowed development towards a multi-track sequencer. The core engine and much of the user interface of the first iteration of Pro Tools were based on Deck. Deck, published in 1990, was the first multi-track digital recorder based on a personal computer. It was developed by OSC, a small San Francisco company founded the same year, in conjunction with Digidesign, and ran on Digidesign's hardware. [ 29 ] Deck could run four audio tracks with automation; MIDI sequencing was possible during playback and record, and one effect combination could be assigned to each audio track (2-band parametric equalizer, 1-band EQ with delay , 1-band EQ with chorus , delay with chorus). [ 30 ] The first Pro Tools system was launched on June 5, 1991. It was based on an adapted version of Deck (ProDeck) along with Digidesign's new editing software, ProEdit, created by Mark Jeffery; [ 31 ] Sound Designer II was still supplied for two-channel editing. [ 32 ] Pro Tools relied on Digidesign's Audiomedia card, mounting one Motorola 56001 processor [ 33 ] with a clock rate of 22.58 MHz [ 34 ] and offering two analog and two digital channels of I/O , and on the Sound Accelerator card. External synchronization with audio and video tape machines was possible with SMPTE timecode and the Video Slave drivers. [ 32 ] The complete system sold for US$6,000. [ 35 ] Sound Tools II was launched in 1992 with a new DSP card. Two interfaces were also released: Pro Master 20, providing 20-bit A/D conversion, [ 32 ] and Audiomedia II, with improved digital converters and one Motorola 56001 processor running at 33.86 MHz. [ 36 ] In 1993, Josh Rosen, Mats Myrberg and John Dalton, the OSC engineers who had developed Deck, split from Digidesign to focus on releasing lower-cost multi-track software that would run on computers with no additional hardware. This software was known as Session (for stereo-only audio cards) and Session 8 (for multichannel audio interfaces) and sold for US$399. [ 37 ] [ 29 ] Peter Gotcher felt that the software needed a significant rewrite. Pro Tools II, the first software release fully developed by Digidesign, followed in the same year and addressed its predecessor's weaknesses.
[ 20 ] The editor and the mixer were merged into a single Pro Tools application that utilized the Digidesign Audio Engine (DAE) created by Peter Richert. DAE was also provided as a separate application to favor hardware support from third-party developers, enabling the use of Pro Tools hardware and plug-ins on other DAWs. [ 18 ] [ 38 ] Selling more than 8,000 systems worldwide, Pro Tools II became the best-selling digital audio workstation. [ 20 ] In 1994, Pro Tools 2.5 implemented Digidesign's newly developed time-division multiplexing technology, which allowed routing of multiple digital audio streams between DSP cards. With TDM, up to four NuBus cards could be linked, obtaining a 16-track system, while multiple DSP-based plug-ins could be run simultaneously and in real-time. [ 39 ] The wider bandwidth required to run the larger number of tracks was achieved with a SCSI expansion card developed by Grey Matter Response, called System Accelerator. [ 32 ] In the same year, Digidesign announced that it would merge into the American multimedia company Avid , [ 40 ] developer of the digital video editing platform Media Composer and one of Digidesign's major customers (25% of the Sound Accelerator and Audiomedia cards produced were being bought by Avid). The operation was finalized in 1995. [ 39 ] With a redesigned Disk I/O card, Pro Tools III was able to provide 16 tracks with a single NuBus card; [ 41 ] the system could be expanded using TDM to up to three Disk I/O cards, achieving 48 tracks. [ 39 ] DSP Farm cards were introduced to provide the processing power needed for more extensive real-time audio processing; each card was equipped with three Motorola 56001 chips running at 40 MHz. [ 42 ] Multiple DSP cards could be added for additional processing power; each card could handle the playback of 16 tracks. [ 33 ] A dedicated SCSI card was still required to provide the bandwidth needed to support multiple-card systems. [ 41 ] Along with Pro Tools III, Digidesign launched the 888 interface, with eight channels of analog and digital I/O, and the cheaper 882 interface. [ 41 ] The Session 8 system included a control surface with eight faders. [ 43 ] A series of TDM plug-ins were bundled with the software, including dynamics processing , EQ, delay, modulation, and reverb . [ 39 ] In 1996, following Apple's decision to drop NuBus in favor of the PCI bus , Digidesign added PCI support with Pro Tools 3.21. The PCI version of the Disk I/O card incorporated a high-speed SCSI controller along with DSP chips, [ 41 ] while the upgraded DSP Farm PCI card included four Motorola 56002 chips running at 66 MHz. [ 44 ] This change of architecture allowed the convergence of Macintosh computers with Intel -based PCs, for which PCI had become the standard internal communication bus. [ 33 ] With the PCI version of Digidesign's Audiomedia card in 1997 (Audiomedia III), [ 45 ] Sound Tools and Pro Tools could be run on Windows platforms for the first time. [ 33 ] With the release of Pro Tools | 24 in 1997, Digidesign introduced a new 24-bit interface (the 888|24) and a new PCI card (the d24). The d24 relied on Motorola 56301 processors, offering increased processing power and 24 tracks of 24-bit audio [ 46 ] (later increased to 32 tracks with a DAE software update). A SCSI accelerator was required to keep up with the increased data throughput . Digidesign dropped its proprietary SCSI controller in favor of commercially available ones.
[ 39 ] 64 tracks with dual d24 support were introduced with Pro Tools 4.1.1 in 1998, [ 47 ] while the updated Pro Tools | 24 MIX system provided three times more DSP power with the MIX Core DSP cards. MIXplus systems combined a MIX Core with a MIX Farm, obtaining a performance increase of 700% compared to a Pro Tools | 24 system. [ 39 ] Pro Tools 5 saw two substantial software developments: extended MIDI functionality and integration in 1999 (an editable piano-roll view in the editor; MIDI automation, quantize and transpose) [ 39 ] and the introduction of surround sound mixing and multichannel plug-ins—up to the 7.1 format —with Pro Tools TDM 5.1 [ 48 ] in 2001. [ 47 ] The migration from traditional, tape-based analog studio technology to the Pro Tools platform took place within the industry: [ 21 ] Ricky Martin 's " Livin' la Vida Loca " (1999) was the first Billboard Hot 100 number-one single to be recorded, edited, and mixed entirely within the Pro Tools environment, [ 49 ] allowing a more meticulous and effortless editing workflow (especially on vocals). [ 50 ] While consolidating its presence in professional studios, Digidesign began to target the mid-range consumer market in 1999 by introducing the Digi001 bundle, consisting of a rack-mount audio interface with eight inputs and outputs with 24-bit, 44.1/48 kHz capability and MIDI connections. The package was distributed with Pro Tools LE, a specific version of the software without DSP support, limited to 24 mixing tracks. [ 18 ] Following the launch of the Mac OS X operating system in 2001, Digidesign undertook a substantial redesign of Pro Tools hardware and software. Pro Tools | HD was launched in 2002, replacing the Pro Tools | 24 system and relying on a new range of DSP cards (HD Core and HD Process, replacing MIX Core and MIX Farm), new interfaces running at up to 192 kHz or 96 kHz sample rates (HD 192 and 96, replacing 888 and 882), along with an updated version of the software (Pro Tools 6) with new features and a redesigned GUI, developed for OS X and Windows XP . [ 51 ] Two HD interfaces could be linked together for increased I/O through a proprietary connection. The base system sold for US$12,000, while the full system sold for US$20,000. [ 21 ] Both HD Core and Process cards mounted nine Motorola 56361 chips running at 100 MHz, each providing 25% more processing power than the Motorola 56301 chips mounted on MIX cards; this translated to about twice the power for a single card. A system could combine one HD Core card with up to two HD Process cards, supporting playback for 96/48/12 tracks at 48/96/192 kHz sample rates (with a single HD Core card installed) and 128/64/24 tracks at 48/96/192 kHz sample rates (with one or two HD Process cards). [ 52 ] When Apple changed the expansion slot architecture of the Mac G5 to PCI Express , Digidesign launched a line of PCIe DSP cards that both adopted the new card slot format and slightly changed the combination of chips. HD Process cards were replaced with HD Accel cards, each mounting nine Motorola 56321 chips running at 200 MHz and each providing twice the power of an HD Process card; track count for systems mounting an HD Accel card was extended to 192/96/36 tracks at 48/96/192 kHz sample rates. [ 53 ] The use of the PCI Express connection reduced round-trip delay time , while DSP audio processing allowed the use of smaller hardware buffer sizes during recording, assuring stable performance with extremely low latency.
[ 5 ] Pro Tools, offering a solid and reliable alternative to analog recording and mixing, eventually became a standard in professional studios throughout the decade, while editing features such as Beat Detective (introduced with Pro Tools 5.1 in 2001) [ 48 ] and Elastic Audio (introduced with Pro Tools 7.4 in 2007) [ 54 ] redefined the workflow adopted in contemporary music production. [ 18 ] Other software milestones were background task processing (such as fade rendering, file conversion or relinking), real-time insertion of TDM plug-ins during playback, and a browser/database environment, introduced with Pro Tools 6 in 2003; [ 51 ] Automatic plug-in Delay Compensation (ADC), introduced with Pro Tools 6.4 in 2004 and only available on TDM systems with HD Accel; [ 55 ] a new implementation of RTAS with multi-threading support and improved performance, Region groups, Instrument tracks, and real-time MIDI processing, introduced with Pro Tools 7 in 2006; [ 56 ] VCA and volume trim, introduced with Pro Tools 7.2 in 2006; [ 57 ] and support for ten track inserts, a MIDI Editor, and a MIDI Score editor, introduced with Pro Tools 8 in 2009. [ 58 ] Pro Tools | MIX hardware support was dropped with version 6.4.1. Pro Tools LE, first introduced and distributed in 1999 with the Digi 001 interface, [ 59 ] was a specific Pro Tools version in which the signal processing relied entirely on the host CPU. The software required a Digidesign interface to run, which acted as a copy-protection mechanism. The Mbox was the entry-level range of the available interfaces; the Digi 001 and Digi 002/003, which also provided a control surface, were the upper range. The Eleven Rack, which also ran on Pro Tools LE, included in-box DSP processing via an FPGA chip, offloading guitar amp/speaker emulation and guitar-effects plug-in processing to the interface and allowing them to run without taxing the host system. Pro Tools LE shared the same interface as Pro Tools HD but had a smaller track count (24 tracks with Pro Tools 5, extended to 32 tracks with Pro Tools 6 [ 51 ] and 48 tracks with Pro Tools 8) [ 60 ] and supported a maximum sample rate of 96 kHz [ 61 ] (depending on the interface used). Some advanced software features, such as Automatic Delay Compensation, surround mixing, multi-track Beat Detective, OMF/AAF support, and SMPTE Timecode , were omitted. Some of them, as well as support for 48 tracks/96 voices (extended to 64 tracks/128 voices with Pro Tools 8) and additional plug-ins, were made available through an expansion package called "Music Production Toolkit". [ 62 ] The "Complete Production Toolkit", introduced with Pro Tools 8, added support for surround mixing and 128 tracks (while the system was still limited to 128 voices). [ 60 ] With the acquisition of M-Audio in 2004–2005, Digidesign released a specific variant of Pro Tools, called M-Powered , which was equivalent to Pro Tools LE and could be run with M-Audio interfaces. [ 63 ] The Pro Tools LE/ M-Powered line was discontinued with the release of Pro Tools 9. Pro Tools 9, released in November 2010, dropped the requirement of proprietary hardware to run the software. Any audio device could be used through Core Audio on macOS or an ASIO driver on Windows. Core Audio allowed device aggregation, enabling the use of more than one interface simultaneously. Some Pro Tools HD software features, such as automatic plug-in delay compensation, OMF/AAF file import, the Timecode ruler, and multi-track Beat Detective, were included in the standard version of Pro Tools 9.
[ 64 ] When operating on a machine containing one or more HD Core, Accel, or Native cards, the software ran as Pro Tools HD with the complete HD feature set. In all other cases, it ran as standard Pro Tools 9, with a smaller track count and some advanced features turned off. In response to Apple's decision to include Emagic 's complete line of virtual instruments in Logic Pro in 2004, and following Avid 's acquisition of the German virtual instruments developer Wizoo in 2005, Pro Tools 8 was supplied with its first built-in virtual instrument library, the AIR Creative Collection, as well as with some new plug-ins, to make it more appealing for music production. [ 60 ] An expansion was also available, called the AIR Complete Collection. In October 2011, Avid introduced Pro Tools 10 and a new series of DSP PCIe cards named HDX. Each card mounted 18 DSP processors, manufactured by Texas Instruments, allowing increased computational precision ( 32-bit floating-point resolution for audio processing and 64-bit floating-point summing, versus the previous 24-bit and 48-bit fixed-point resolution of the TDM engine), [ 4 ] thus improving dynamic range performance. Signal processing could be run on the embedded DSP, providing additional computational power and enabling near-zero latency for DSP-reliant plug-ins. Two FPGA chips handled track playback, monitoring, and internal routing, providing a lower round-trip latency. A second line of PCIe cards, called HD Native, provided low latency with a single FPGA chip but did not mount DSP (audio processing relied on the host system's CPU). [ 65 ] Round-trip latency at 96 kHz was 0.7 ms for HDX and 1.7 ms for HD Native (with a 64-sample buffer). [ 66 ] To maintain performance consistency, HDX products were specified with a fixed maximum number of voices (each voice representing a monophonic channel). Each HDX card enabled 256 simultaneous voices at 44.1/48 kHz; the voice count halved when the sample rate doubled (128 voices at 88.2/96 kHz, 64 voices at 176.4/192 kHz). Up to three HDX cards could be installed in a single system, for a maximum of 768/384/192 total voices and increased processing power. On Native systems, the voice count was limited to 96/48/24 voices with the standard version of Pro Tools and 256/128/64 voices with the Pro Tools HD software. [ 4 ] With Pro Tools 10, Avid deployed a new plug-in format for both Native and HDX systems called AAX (an acronym for Avid Audio eXtension). [ 67 ] AAX Native replaced RTAS plug-ins, and AAX DSP, a specific format running on HDX systems, replaced TDM plug-ins. AAX was developed to enable the future implementation of 64-bit plug-ins, although 32-bit versions of AAX were still used in Pro Tools 10. TDM support was dropped with HDX, [ 68 ] while Pro Tools 10 would be the final release for Pro Tools | HD Process and Accel systems. Notable software features introduced with Pro Tools 10 were editable clip-based gain automation (Clip gain), the ability to load the session's audio data into RAM to improve transport responsiveness (Disk caching), quadrupled Automatic Delay Compensation length, audio fades processed in real-time, a timeline length extended to 24 hours, support for 32-bit float audio and mixed audio formats within the session, and the addition of the Avid Channel Strip plug-in (based on the Euphonix System 5 console's channel strip, following Avid's acquisition of Euphonix in 2010).
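The voice-count arithmetic described above is easy to make concrete. The following is a minimal sketch in Python, assuming only the figures quoted in this section (256 voices per HDX card at 44.1/48 kHz, halving as the sample rate doubles, at most three cards); the function name is illustrative and not part of any Avid API:

    def hdx_voices(cards: int, sample_rate_khz: float) -> int:
        """Maximum simultaneous voices for an HDX system (sketch)."""
        assert 1 <= cards <= 3, "up to three HDX cards per system"
        if sample_rate_khz <= 48:      # 44.1/48 kHz
            per_card = 256
        elif sample_rate_khz <= 96:    # 88.2/96 kHz: half the voices
            per_card = 128
        else:                          # 176.4/192 kHz: a quarter
            per_card = 64
        return cards * per_card

    print(hdx_voices(1, 48), hdx_voices(1, 96), hdx_voices(1, 192))  # 256 128 64
    print(hdx_voices(3, 48), hdx_voices(3, 96), hdx_voices(3, 192))  # 768 384 192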
[ 69 ] [ 48 ] Pro Tools 11, released in June 2013, switched from 32-bit to 64-bit software architecture with new audio and video engines, enabling the application and plug-ins to take full advantage of system memory. The new audio engine (AAE) introduced support for offline bouncing and for simultaneous mixdown of multiple sources; dynamic plug-in processing reduced CPU usage when active native plug-ins received no input. Two separate buffers were used for playback and for monitoring of record-enabled or input-monitored tracks. The new video engine (AVE) improved performance and the handling of multiple CPU cores. Support for HD Accel systems, legacy HD interfaces, TDM, and 32-bit AAX plug-ins was dropped due to their incompatibility with the 64-bit architecture. [ 13 ] A free starter edition providing the essential features of Pro Tools, called "First", was launched in 2015 and discontinued in December 2021 for being "unviable to continue on a technical level". [ 70 ] The Pro Tools workflow is organized into two main windows: the timeline is shown in the Edit window, while the mixer is shown in the Mix window. The MIDI and Score Editor windows provide a dedicated environment for editing MIDI. [ 71 ] Different window layouts, along with shown and hidden tracks and their width settings, can be stored and recalled from the Window configuration list. [ 72 ] The timeline provides a graphical representation of all types of tracks: the audio envelope or waveform (when zoomed in) for audio tracks, a piano roll showing MIDI notes and controller values for MIDI and Instrument tracks, a sequence of frame thumbnails for video tracks, and audio levels for auxiliary, master and VCA master tracks. [ 73 ] Alternate audio and MIDI content can be recorded, shown, and edited in multiple layers for each track (called playlists), which can be used for track compositing. [ 74 ] All the mixer parameters (such as track and send volume, pan, and mute status) and plug-in parameters can be changed over time through automation . [ 75 ] Any automation type can be shown and edited in multiple lanes for each track. [ 76 ] Track-based volume automation can be converted to clip-based automation and vice versa; [ 77 ] automation of any type can also be copied and pasted to any other automation type. [ 78 ] Time can be measured and displayed on the timeline in different scales: bars and beats, time or SMPTE timecode (with selectable frame rates), audio samples, or film-stock feet for audio-for-film referencing (based on the 35 mm film format). [ 79 ] Tempo and meter changes can also be programmed; both MIDI and audio clips can move or time-stretch to follow tempo changes ("tick-based" tracks) or maintain their absolute position ("sample-based" tracks). Elastic Audio must be enabled to allow time stretching of audio clips. [ 80 ] Audio and MIDI clips can be moved, cut, and duplicated non-destructively on the timeline (edits change the clip organization on the timeline, but source files are not overwritten). [ 81 ] Time stretching (TCE), pitch shifting , equalization, and dynamics processing can be applied to audio clips non-destructively and in real-time with Elastic Audio [ 82 ] and Clip Effects; [ 83 ] gain can be adjusted statically or dynamically on individual clips with Clip Gain; [ 84 ] fades and crossfades can be applied and adjusted, and are processed in real-time. All other types of audio processing can be rendered on the timeline with the AudioSuite (non-real-time) version of AAX plug-ins.
[ 85 ] Audio clips can be converted to MIDI data using the Celemony Melodyne engine; pitches with timing and velocities are extracted through melodic, polyphonic, or rhythmic analysis algorithms. [ 86 ] The pitch and rhythm of audio tracks can also be viewed and manipulated with the bundled Melodyne Essential. MIDI notes, velocities, and controllers can be edited directly on the timeline, with each MIDI track showing an individual piano roll, or in a dedicated window, where several MIDI and Instrument tracks can be shown together in a single piano roll with color-coding. Multiple MIDI controllers for each track can be viewed and edited on different lanes. [ 87 ] MIDI tracks can also be shown in musical notation within a score editor. [ 88 ] MIDI data such as note quantization, duration, transposition, delay, and velocity can also be altered non-destructively and in real-time on a track-per-track basis. [ 89 ] Video files can be imported to one or more video tracks and organized in multiple playlists. Multiple video files can be edited together and played back in real-time. Video processing is GPU-accelerated and managed by the Avid Video Engine (AVE). Video output from one video track is provided in a separate window or can be viewed full screen. [ 90 ] The virtual mixer shows the controls and components of all tracks, including inserts , sends , input and output assignments , automation read/write controls, panning , solo/mute buttons, record-arm buttons, the volume fader , the level meter , and the track name. It can also show additional controls for the inserted virtual instrument , mic preamp gain, HEAT settings, and the EQ curve of supported plug-ins. [ 91 ] Each track's inputs and outputs can have different channel depths: mono , stereo , or multichannel (LCR, LCRS , Quad , 5.0/5.1 , 6.0/6.1, 7.0/7.1 ); Dolby Atmos and Ambisonics formats are also available for mixing. [ 92 ] Audio can be routed to and from different outputs and inputs, both physical and internal. Internal routing is achieved using busses and auxiliary tracks; each track can have multiple output assignments. [ 93 ] Virtual instruments are loaded on Instrument tracks—a specific type of track that receives MIDI data as input and returns audio as output. [ 94 ] Plug-ins are processed in real-time with dedicated DSP chips (AAX DSP format) or using the host computer's CPU (AAX Native format). [ 95 ] Audio, auxiliary, and Instrument tracks (or MIDI tracks routed to a virtual instrument plug-in) can be committed to new tracks containing their rendered output. Virtual instruments can be committed to audio to prepare an arrangement project for mixing; track commit is also used to free up system resources during mixing or when the session is shared with systems that do not have some plug-ins installed. Multiple tracks can be rendered at a time; it is also possible to render a specific timeline selection and to define which range of inserts to render. [ 96 ] Similarly, tracks can be frozen with their output rendered at the end of the plug-in chain or at a specific insert of the chain. Editing is suspended on frozen tracks, but they can subsequently be unfrozen if further adjustments are needed. For example, virtual instruments can be frozen to free up system memory and improve performance while keeping the possibility to unfreeze them to make arrangement changes.
[ 97 ] The main mix of the session—or any internal mix bus or output path—can be bounced to disk in real-time (if hardware inserts from analog hardware are used, or if any audio or MIDI source is monitored live into the session) or offline (faster-than-real-time). The selected source can be mixed to mono, stereo, or any other multichannel format. Multichannel mixdowns can be written as an interleaved audio file or in multiple mono files. Up to 24 sources of up to 10 channels each can be mixed down simultaneously—for example, to deliver audio stems . [ 98 ] Audio and video can be bounced together to a MOV file; video is transcoded with the DNxHD, DNxHR, Apple ProRes, and H.264 video codecs. [ 99 ] Session data can be partially or entirely exchanged with other DAWs or video editing software that support AAF , OMF , or MXF . AAF and OMF sequences embed audio and video files with their metadata; when opened by the destination application, session structure is rebuilt with the original clip placement, edits, and basic track and clip automation. [ 100 ] Track contents and any of its properties can be selectively exchanged between Pro Tools sessions with Import Session Data (for example, importing audio clips from an external session to a designated track while keeping track settings or importing track inserts while keeping audio clips). [ 101 ] Similarly, the same track data for any track set—a given processing chain, a collection of clips, or a group of tracks with their assignments—can be stored and recalled as Track Presets. [ 102 ] Pro Tools projects can be synchronized to the Avid Cloud and shared with other users on a track-by-track basis. Different users can simultaneously work on the project and upload new tracks or any changes to existing tracks (such as audio and MIDI clips, automation, inserted plug-ins, and mixer status) or alterations to the project structure (such as tempo, meter, or key). [ 103 ] Pro Tools reads embedded metadata in media files to manage multichannel recordings made by field recorders in production sound. All stored metadata (such as scene and take numbers, tape or sound roll name, or production comments) can be accessed in the Workspace browser. [ 104 ] Analogous audio clips are identified by overlapping longitudinal timecode (LTC) and by one or more user-defined criteria (such as matching file length, file name, or scene and take numbers). An audio segment can be replaced from matching channels (for example, to replace audio from a boom microphone with the audio from a lavalier microphone ) while maintaining edits and fades in the timeline, or any matching channels can be added to new tracks. [ 105 ] Up to twelve Pro Tools Ultimate systems with dedicated hardware can be linked together over an Ethernet network—for example, in multi-user mixing environments where different mix components (such as dialog, ADR, effects, and music) reside on different systems, or if a larger track count or processing power is needed. Transport, solo, and mute are controlled by a single system and with a single control surface. [ 106 ] One system can also be designated for video playback to optimize performance. [ 107 ] Pro Tools can synchronize to external devices using SMPTE/EBU timecode or MIDI timecode . [ 108 ] Pro Tools software is available in three subscription-based paid versions (Artist, Studio and Ultimate) and one free version (Intro). 
Before 2022, two different perpetual licenses could be purchased: a standard edition for US$599 (informally called "Vanilla"), [ 109 ] which provided all the key features for audio mixing and post-production, and a complete edition for US$2599 (officially called "Ultimate" and known as "HD" between 2002 and 2018), which unlocked functionality for advanced workflows and a higher track count. In the mid-1990s, Digidesign started working on a studio device that could replace classic analog consoles and provide integration with Pro Tools. ProControl (1998) was the first Digidesign control surface, providing motorized, touch-sensitive faders and an analog control-room communication section, and connecting to the host computer via Ethernet . ProControl could later be expanded by adding up to five fader packs, each providing eight additional fader strips and controls. [ 39 ] Control 24 (2001) added 5.1 monitoring support and included 16 class-A preamps designed by Focusrite . Icon D-Control (2004) incorporated an HD Accel system and was developed with larger TV and film productions in mind. Command|8 (2004) and D-Command (2005) were the smaller counterparts of Control 24 and D-Control, connected to the host computer via USB; Venue (2005) was a similar system specifically designed for live sound applications. [ 48 ] C|24 (2007) was a revision of Control 24 with improved preamps, while Icon D-Control ES (2008) and Icon D-Command ES (2009) were redesigns of Icon D-Control and D-Command. [ 48 ] In 2010 Avid acquired Euphonix, manufacturer of the Artist Series and System 5 control surfaces, which were integrated with Pro Tools along with the EuCon protocols. The Avid S6 (2013) and Avid S3 (2014) control surfaces followed, merging the Icon and System 5 series. Pro Tools Dock (2015) was an iPad-based control surface running the Pro Tools Control software. [ 114 ]
https://en.wikipedia.org/wiki/Pro_Tools
In information systems, proactive information delivery (PID) is a paradigm of supporting users in their work by delivering information related to the current working situation. Unlike an information search process, where the user has to initiate the search, PID tries to identify the user's current information need. PID can be used for Just-In-Time learning embedded in working processes. Intellext Watson was a prominent tool illustrating the concept of lightweight PID. A classification of PID tools is given in [ 1 ].
https://en.wikipedia.org/wiki/Proactive_information_delivery
Proactive maintenance is the maintenance philosophy that supplants “failure reactive” practice with “failure proactive” practice, through activities that avoid the underlying conditions that lead to machine faults and degradation. Unlike predictive or preventive maintenance , proactive maintenance commissions corrective actions aimed at failure root causes, not failure symptoms. Its central theme is to extend the life of machinery, as opposed to accepting failure as routine and making repairs after the fact. Proactive maintenance depends on rigorous machine inspection and condition monitoring. In mechanical machinery it seeks to detect and eradicate failure root causes such as the wrong lubricant , degraded lubricant, contaminated lubricant, botched repairs, misalignment, unbalance and operator error . [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Proactive_maintenance
Proactor is a software design pattern for event handling in which long-running activities run asynchronously; a completion handler is called after the asynchronous part has terminated. The proactor pattern can be considered an asynchronous variant of the synchronous reactor pattern . [ 1 ]
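A minimal sketch of the pattern, assuming Python's asyncio (whose default event loop on Windows is itself completion-based, built on I/O completion ports); the function names are illustrative:

    import asyncio

    async def long_running_read() -> bytes:
        # Stands in for a long-running asynchronous operation (e.g. socket I/O).
        await asyncio.sleep(0.1)
        return b"payload"

    def on_completion(task: asyncio.Task) -> None:
        # Completion handler: dispatched by the event loop (the proactor)
        # after the asynchronous part has terminated.
        print("completed with:", task.result())

    async def main() -> None:
        task = asyncio.create_task(long_running_read())  # initiate the operation
        task.add_done_callback(on_completion)            # register the handler
        print("initiator continues without blocking")
        await asyncio.sleep(0.5)                         # keep the loop alive

    asyncio.run(main())

The initiating code returns immediately; the event loop invokes the handler only once the operation's result is available, which is the defining difference from the reactor pattern's readiness-based dispatch.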
https://en.wikipedia.org/wiki/Proactor_pattern
Proanthocyanidins are a class of polyphenols found in many plants, such as cranberry , blueberry , and grape seeds . Chemically, they are oligomeric flavonoids . Many are oligomers of catechin and epicatechin and their gallic acid esters . More complex polyphenols, having the same polymeric building block, form the group of condensed tannins . Proanthocyanidins were discovered in 1947 by Jacques Masquelier, who developed and patented techniques for the extraction of oligomeric proanthocyanidins from pine bark and grape seeds . [ 1 ] Proanthocyanidins are under preliminary research into whether consuming cranberries, grape seeds or red wine can reduce the risk of urinary tract infections (UTIs). [ 2 ] [ 3 ] Proanthocyanidins, including the less bioactive and less bioavailable polymers (four or more catechins), represent a group of condensed flavan-3-ols, such as procyanidins , prodelphinidins and propelargonidins. They can be found in many plants, most notably apples , maritime pine bark and that of most other pine species, cinnamon , [ 4 ] aronia fruit, cocoa beans , grape seed, grape skin (procyanidins and prodelphinidins ), [ 5 ] and red wines of Vitis vinifera (the European wine grape). However, bilberry , cranberry , black currant , green tea , black tea , and other plants also contain these flavonoids. Cocoa beans contain the highest concentrations. [ 6 ] Proanthocyanidins also may be isolated from Quercus petraea and Q. robur heartwood (wine barrel oaks ). [ 7 ] Açaí oil , obtained from the fruit of the açaí palm ( Euterpe oleracea ), is rich in numerous procyanidin oligomers. [ 8 ] Apples contain on average per serving about eight times the amount of proanthocyanidin found in wine, with some of the highest amounts found in the Red Delicious and Granny Smith varieties. [ 9 ] An extract of maritime pine bark called Pycnogenol bears 65–75 percent proanthocyanidins (procyanidins); [ 10 ] thus a 100 mg serving would contain 65 to 75 mg of proanthocyanidins (procyanidins). Proanthocyanidin glycosides can be isolated from cocoa liquor . [ 11 ] The seed testas of field beans ( Vicia faba ) contain proanthocyanidins [ 12 ] that affect digestibility in piglets [ 13 ] and could have an inhibitory activity on enzymes. [ 14 ] Cistus salviifolius also contains oligomeric proanthocyanidins. [ 15 ] Condensed tannins may be characterised by a number of techniques, including depolymerisation , asymmetric flow field flow fractionation and small-angle X-ray scattering . DMACA is a dye that is particularly useful for the localization of proanthocyanidin compounds in plant histology; its use results in blue staining. [ 17 ] It can also be used to titrate proanthocyanidins. Proanthocyanidins from field beans ( Vicia faba ) [ 18 ] or barley [ 19 ] have been estimated using the vanillin-HCl method , which produces a red color in the presence of catechins or proanthocyanidins. Proanthocyanidins can be titrated using the Procyanidolic Index (also called the Bates-Smith Assay ), a testing method that measures the change in color when the product is mixed with certain chemicals: the greater the color change, the higher the PCO content. However, the Procyanidolic Index is a relative value that can measure well over 100. Unfortunately, a Procyanidolic Index of 95 was erroneously taken by some to mean 95% PCO and began appearing on the labels of finished products.
All current methods of analysis suggest that the actual PCO content of these products is much lower than 95%. [ 20 ] [ unreliable medical source? ] Gel permeation chromatography (GPC) analysis allows separation of monomers from larger proanthocyanidin molecules. [ 21 ] Monomers of proanthocyanidins can be characterized by analysis with HPLC and mass spectrometry . [ 22 ] Condensed tannins can undergo acid-catalyzed cleavage in the presence of a nucleophile like phloroglucinol (a reaction called phloroglucinolysis), thioglycolic acid (thioglycolysis), benzyl mercaptan or cysteamine (processes called thiolysis [ 23 ] ), leading to the formation of oligomers that can be further analyzed. [ 24 ] Tandem mass spectrometry can be used to sequence proanthocyanidins. [ 25 ] Oligomeric proanthocyanidins (OPC) strictly refer to dimer and trimer polymerizations of catechins. OPCs are found in most plants and thus are common in the human diet. The skin , seeds, and seed coats of purple- or red-pigmented plants in particular contain large amounts of OPCs. [ 6 ] They are dense in grape seeds and skin, and therefore in red wine and grape seed extract, in cocoa, nuts and all Prunus fruits (most concentrated in the skin), and in the bark of Cinnamomum ( cinnamon ) [ 4 ] and Pinus pinaster (pine bark; formerly known as Pinus maritima ), along with many other pine species. OPCs also can be found in blueberries , cranberries (notably procyanidin A2 ), [ 26 ] aronia , [ 27 ] hawthorn , rosehip , and sea buckthorn . [ 28 ] Oligomeric proanthocyanidins can also be extracted from in vitro cell cultures of Vaccinium pahalae . [ 29 ] The US Department of Agriculture maintains a database of botanical and food sources of proanthocyanidins. [ 6 ] In nature, proanthocyanidins serve, among other constitutive and induced chemical defense mechanisms, as protection against plant pathogens and predators , as occurs in strawberries . [ 30 ] Proanthocyanidin has low bioavailability, with 90% remaining unabsorbed in the intestines until metabolized by gut flora into more bioavailable metabolites. [ 16 ] Condensed tannins can undergo acid-catalyzed cleavage in the presence of (or an excess of) a nucleophile [ 31 ] like phloroglucinol (a reaction called phloroglucinolysis), benzyl mercaptan (a reaction called thiolysis ), thioglycolic acid (a reaction called thioglycolysis) or cysteamine . Flavan-3-ol compounds used with methanol produce short-chain procyanidin dimers , trimers , or tetramers , which are more absorbable. [ 32 ] These techniques are generally called depolymerisation and give information such as the average degree of polymerisation or the percentage of galloylation. They are SN1 reactions , a type of substitution reaction in organic chemistry , involving a carbocation intermediate under strongly acidic conditions in polar protic solvents like methanol. The reaction leads to the formation of free and derived monomers that can be further analyzed or used to enhance procyanidin absorption and bioavailability . [ 32 ] The free monomers correspond to the terminal units of the condensed tannin chains. In general, reactions are carried out in methanol (especially thiolysis, as benzyl mercaptan has low solubility in water) and involve moderate (50 to 90 °C) heating for a few minutes. Epimerisation may happen. Phloroglucinolysis can be used, for instance, for proanthocyanidin characterisation in wine or in grape seeds and skin. [ 33 ] Thioglycolysis can be used to study proanthocyanidins [ 34 ] or the oxidation of condensed tannins.
[ 35 ] It is also used for lignin quantitation . [ 36 ] Reaction on condensed tannins from Douglas fir bark produces epicatechin and catechin thioglycolates . [ 37 ] Condensed tannins from Lithocarpus glaber leaves have been analysed through acid-catalyzed degradation in the presence of cysteamine . [ 38 ] Cranberries have A2-type proanthocyanidins (PACs), which may be important for the ability of PACs to bind to proteins, such as the adhesins present on E. coli fimbriae, and were thought to inhibit bacterial infections, such as urinary tract infections (UTIs). [ 39 ] Clinical trials have produced mixed results on whether PACs, particularly from cranberries, are an alternative to antibiotic prophylaxis for UTIs: 1) a 2014 scientific opinion by the European Food Safety Authority rejected physiological evidence that cranberry PACs have a role in inhibiting bacterial pathogens involved in UTIs; [ 2 ] 2) an updated 2023 Cochrane Collaboration review supported the use of cranberry products for the prevention of UTIs in certain groups. [ 3 ] A 2017 systematic review concluded that cranberry products significantly reduced the incidence of UTIs, indicating that cranberry products may be effective particularly for individuals with recurrent infections. [ 40 ] In 2019, the American Urological Association released guidelines stating that a moderate level of evidence supports the use of cranberry products containing PACs for possible prevention of recurrent UTIs. [ 41 ] Proanthocyanidins are the principal polyphenols in red wine under research for their possible effects on the risk of coronary heart disease and on overall mortality. [ 42 ] Together with tannins , they also influence the aroma, flavor, mouth-feel and astringency of red wines. [ 43 ] [ 44 ] In red wines, total OPC content, including flavan-3-ols ( catechins ), is substantially higher (177 mg/L) than in white wines (9 mg/L). [ 45 ] As of 2012, proanthocyanidins in the proprietary maritime pine bark extract Pycnogenol had not been found to be effective as a treatment for any disease. Proanthocyanidins are present in fresh grapes, juice, red wine , and other darkly pigmented fruits such as cranberry , blackcurrant , elderberry , and aronia . [ 47 ] Although red wine may contain more proanthocyanidins by mass per unit of volume than red grape juice does, red grape juice contains more proanthocyanidins per average serving size. An eight US fluid ounce (240 ml) serving of grape juice averages 124 milligrams of proanthocyanidins, whereas a five US fluid ounce (150 ml) serving of red wine averages 91 milligrams ( i.e. , 145.6 milligrams per 8 fl. oz. or 240 ml). [ 6 ] Many other foods and beverages may also contain proanthocyanidins, but few attain the levels found in red grape seeds and skins, [ 6 ] with a notable exception being aronia , which has the highest recorded level of proanthocyanidins among fruits assessed to date (664 milligrams per 100 g). [ 47 ]
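The serving-size comparison in the preceding paragraph is a simple rescaling, sketched below in Python with the averages quoted above (the helper name is illustrative):

    def per_240_ml(mg: float, serving_ml: float) -> float:
        """Scale a per-serving amount to a common 240 ml serving."""
        return mg * 240.0 / serving_ml

    print(per_240_ml(124, 240))  # grape juice: 124.0 mg per 240 ml
    print(per_240_ml(91, 150))   # red wine:    145.6 mg per 240 ml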
https://en.wikipedia.org/wiki/Proanthocyanidin
Probabilistic argumentation refers to different formal frameworks pertaining to probabilistic logic . All share the idea that qualitative aspects can be captured by an underlying logic, while quantitative aspects of uncertainty can be accounted for by probabilistic measures. The framework of "probabilistic labellings" refers to probability spaces where the sample space is a set of labellings of argumentation graphs ( Riveret et al. 2018 ). A labelling of an argumentation graph associates every argument of the graph with a label reflecting the acceptability of the argument within the graph. For example, an argument can be associated with the label "in" (the argument is accepted), "out" (the argument is rejected), or "und" (the status of the argument is undecided — neither accepted nor rejected). Consequently, the approach of probabilistic labellings associates each argument with a probability for each label, reflecting how likely the argument is to be labelled as such. The name "probabilistic argumentation" has also been used to refer to a particular theory of reasoning that encompasses uncertainty and ignorance, combining probability theory and deductive logic ( Haenni, Kohlas & Lehmann 2000 ). OpenPAS is an open-source implementation of such a probabilistic argumentation system. Probabilistic argumentation systems encounter a problem when used to determine the occurrence of black swan events , since, by definition, those events are so improbable as to seem impossible; probabilistic arguments applied in this way risk the fallacy known as an appeal to probability .
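A minimal sketch of a probabilistic labelling in Python, assuming a toy two-argument graph and a made-up distribution over three labellings:

    # Sample space: labellings of arguments A and B, each with a probability.
    labellings = [
        ({"A": "in",  "B": "out"}, 0.6),
        ({"A": "out", "B": "in"},  0.3),
        ({"A": "und", "B": "und"}, 0.1),
    ]

    def label_probability(argument: str, label: str) -> float:
        """Probability that an argument receives a given label."""
        return sum(p for labelling, p in labellings if labelling[argument] == label)

    print(label_probability("A", "in"))   # 0.6
    print(label_probability("B", "und"))  # 0.1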
https://en.wikipedia.org/wiki/Probabilistic_argumentation
A probabilistic logic network ( PLN ) is a conceptual, mathematical and computational approach to uncertain inference . It was inspired by logic programming and it uses probabilities in place of crisp (true/false) truth values, and fractional uncertainty in place of crisp known/unknown values . In order to carry out effective reasoning in real-world circumstances, artificial intelligence software must handle uncertainty. Previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN encompasses uncertain logic with such ideas as induction, abduction , analogy , fuzziness and speculation, and reasoning about time and causality . [ 1 ] PLN was developed by Ben Goertzel , Matt Ikle, Izabela Lyon Freire Goertzel, and Ari Heljakka for use as a cognitive algorithm by MindAgents within the OpenCog Core. PLN was developed originally for use within the Novamente Cognition Engine. [ 2 ] The basic goal of PLN is to provide accurate probabilistic inference in a way that is compatible with both term logic and predicate logic and that scales up to operate in real-time on large dynamic knowledge bases . [ 2 ] The goal underlying the theoretical development of PLN has been the creation of practical software systems carrying out complex inferences based on uncertain knowledge and drawing uncertain conclusions. PLN has been designed to allow basic probabilistic inference to interact with other kinds of inference, such as intensional inference, fuzzy inference , and higher-order inference using quantifiers, variables, and combinators, and to be a more convenient approach than Bayesian networks (or other conventional approaches) for the purpose of interfacing basic probabilistic inference with these other sorts of inference. In addition, the inference rules are formulated in such a way as to avoid the paradoxes of Dempster–Shafer theory . PLN begins with a term logic foundation and then adds on elements of probabilistic and combinatory logic, as well as some aspects of predicate logic and autoepistemic logic , to form a complete inference system, tailored for easy integration with software components embodying other (not explicitly logical) aspects of intelligence. PLN represents truth values as intervals, but with different semantics than in imprecise probability theory . In addition to the interpretation of truth in a probabilistic fashion, a truth value in PLN also has an associated amount of certainty . This generalizes the notion of truth values used in autoepistemic logic , where truth values are either known or unknown and, when known, are either true or false. The current version of PLN has been used in narrow-AI applications such as the inference of biological hypotheses from knowledge extracted from biological texts via language processing, and to assist the reinforcement learning of an embodied agent, in a simple virtual world , as it is taught to play "fetch".
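As an illustration of strength-and-confidence truth values, here is a minimal Python sketch; the (strength, confidence) representation follows the description above, while the deduction formula is the independence-based heuristic often quoted in connection with PLN, not a faithful implementation of the published rule set:

    from dataclasses import dataclass

    @dataclass
    class TruthValue:
        strength: float    # probability-like degree of truth
        confidence: float  # amount of certainty attached to the strength

    def deduce(ab: TruthValue, bc: TruthValue, sB: float, sC: float) -> TruthValue:
        # Strength of A->C from A->B and B->C under an independence assumption.
        sAC = (ab.strength * bc.strength
               + (1 - ab.strength) * (sC - sB * bc.strength) / (1 - sB))
        # Chaining uncertain inferences can only lower confidence.
        return TruthValue(sAC, ab.confidence * bc.confidence)

    print(deduce(TruthValue(0.9, 0.8), TruthValue(0.8, 0.9), sB=0.5, sC=0.6))
    # strength ~ 0.76, confidence ~ 0.72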
https://en.wikipedia.org/wiki/Probabilistic_logic_network
In mathematics , the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős , for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain , without any possible error. This method has now been applied to other areas of mathematics such as number theory , linear algebra , and real analysis , as well as in computer science (e.g. randomized rounding ) and information theory . If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Thus, by contraposition , if the probability that a random object chosen from the collection has that property is nonzero, then some object in the collection must possess the property. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. Another way to use the probabilistic method is by calculating the expected value of some random variable . If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such an element would imply that every element of the sample space is less than the expected value, a contradiction. Common tools used in the probabilistic method include Markov's inequality , the Chernoff bound , and the Lovász local lemma . Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles ), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number $R(r, r)$. Suppose we have a complete graph on $n$ vertices . We wish to show (for small enough values of $n$) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on $r$ vertices which is monochromatic (every edge colored the same color). To do so, we color the graph randomly. Color each edge independently with probability 1/2 of being red and 1/2 of being blue. We calculate the expected number of monochromatic subgraphs on $r$ vertices as follows: For any set $S_r$ of $r$ vertices from our graph, define the variable $X(S_r)$ to be 1 if every edge amongst the $r$ vertices is the same color, and 0 otherwise. Note that the number of monochromatic $r$-subgraphs is the sum of $X(S_r)$ over all possible subsets $S_r$.
For any individual set $S_r^i$, the expected value of $X(S_r^i)$ is simply the probability that all of the $\binom{r}{2}$ edges in $S_r^i$ are the same color:

$$E[X(S_r^i)] = 2 \cdot 2^{-\binom{r}{2}} = 2^{1-\binom{r}{2}}$$

(the factor of 2 comes because there are two possible colors). This holds true for any of the $\binom{n}{r}$ possible subsets we could have chosen, i.e. $i$ ranges from 1 to $\binom{n}{r}$. So we have that the sum of $E[X(S_r^i)]$ over all $S_r^i$ is

$$\binom{n}{r} 2^{1-\binom{r}{2}}.$$

The sum of expectations is the expectation of the sum ( regardless of whether the variables are independent ), so the expectation of the sum (the expected number of all monochromatic $r$-subgraphs) is the same quantity. Consider what happens if this value is less than 1. Since the expected number of monochromatic $r$-subgraphs is strictly less than 1, there exists a coloring satisfying the condition that the number of monochromatic $r$-subgraphs is strictly less than 1. The number of monochromatic $r$-subgraphs in this random coloring is a non-negative integer , hence it must be 0 (0 is the only non-negative integer less than 1). It follows that if

$$\binom{n}{r} 2^{1-\binom{r}{2}} < 1$$

(which holds, for example, for $n = 5$ and $r = 4$), there must exist a coloring in which there are no monochromatic $r$-subgraphs. [ a ] By definition of the Ramsey number , this implies that $R(r, r)$ must be bigger than $n$. In particular, $R(r, r)$ must grow at least exponentially with $r$. A weakness of this argument is that it is entirely nonconstructive . Even though it proves (for example) that almost every coloring of the complete graph on $(1.1)^r$ vertices contains no monochromatic $r$-subgraph, it gives no explicit example of such a coloring. The problem of finding such a coloring has been open for more than 50 years. A 1959 paper of Erdős (see reference cited below) addressed the following problem in graph theory : given positive integers $g$ and $k$, does there exist a graph $G$ containing only cycles of length at least $g$, such that the chromatic number of $G$ is at least $k$? It can be shown that such a graph exists for any $g$ and $k$, and the proof is reasonably simple. Let $n$ be very large and consider a random graph $G$ on $n$ vertices, where every edge in $G$ exists with probability $p = n^{1/g - 1}$. We show that with positive probability, $G$ satisfies the following two properties: Property 1. $G$ contains at most $n/2$ cycles of length less than $g$. Proof. Let $X$ be the number of cycles of length less than $g$. The number of cycles of length $i$ in the complete graph on $n$ vertices is

$$\frac{n!}{2 \cdot i \cdot (n-i)!}$$

and each of them is present in $G$ with probability $p^i$. Hence by Markov's inequality we have

$$\Pr\left(X \geq \tfrac{n}{2}\right) \leq \frac{2}{n} E[X] = \frac{1}{n} \sum_{i=3}^{g-1} \frac{n!}{i \cdot (n-i)!}\, p^i \leq \frac{1}{n} \sum_{i=3}^{g-1} n^i p^i = \frac{1}{n} \sum_{i=3}^{g-1} n^{i/g} \leq \frac{g\, n^{(g-1)/g}}{n} = g\, n^{-1/g},$$

which tends to 0 as $n \to \infty$. Property 2. $G$ contains no independent set of size $\lceil n/2k \rceil$. Proof. Let $Y$ be the size of the largest independent set in $G$. Clearly, we have

$$\Pr(Y \geq y) \leq \binom{n}{y} (1-p)^{\binom{y}{2}} \leq n^y e^{-p y (y-1)/2} = e^{y \left(\ln n - p(y-1)/2\right)},$$

which tends to 0 when $y = \lceil \tfrac{3}{p} \ln n \rceil$; since $p = n^{1/g-1}$, this $y$ is $o(n)$, so for sufficiently large $n$ the same bound holds for $y = \lceil n/2k \rceil$. For sufficiently large $n$, the probability that a graph from the distribution has both properties is positive, as the events for these properties cannot be disjoint (if they were, their probabilities would sum up to more than 1). Here comes the trick: since $G$ has these two properties, we can remove at most $n/2$ vertices from $G$ (one from each short cycle) to obtain a new graph $G'$ on $n' \geq n/2$ vertices that contains only cycles of length at least $g$. Any independent set in $G'$ is also independent in $G$, so $G'$ has no independent set of size $\lceil n'/k \rceil \geq \lceil n/2k \rceil$. Every proper coloring of $G'$ partitions its vertices into independent sets; since each such set has fewer than $\lceil n'/k \rceil$ vertices, at least $k$ color classes are needed, and hence $G'$ has chromatic number at least $k$.
This result gives a hint as to why the computation of the chromatic number of a graph is so difficult: even when there are no local reasons (such as small cycles) for a graph to require many colors, the chromatic number can still be arbitrarily large.
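As a concrete check of the Ramsey counting bound above, the following minimal sketch (not part of the original article; names are illustrative) evaluates C(n,r)·2^(1−C(r,2)) and reports the largest n this bound certifies for a given r:

    from math import comb

    # The argument guarantees a 2-coloring with no monochromatic K_r
    # whenever C(n, r) * 2**(1 - C(r, 2)) < 1.
    def expected_monochromatic(n: int, r: int) -> float:
        return comb(n, r) * 2 ** (1 - comb(r, 2))

    print(expected_monochromatic(5, 4))    # 0.15625 < 1, so R(4,4) > 5

    r = 10                                 # largest n certified for r = 10
    n = r
    while expected_monochromatic(n + 1, r) < 1:
        n += 1
    print(n)                               # grows roughly like 2**(r/2)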
https://en.wikipedia.org/wiki/Probabilistic_method
In mathematics, probabilistic number theory is a subfield of number theory, which explicitly uses probability to answer questions about the integers and integer-valued functions. One basic idea underlying it is that different prime numbers are, in some serious sense, like independent random variables. However, this idea does not have a unique useful formal expression. The founders of the theory were Paul Erdős, Aurel Wintner and Mark Kac during the 1930s, one of the periods of investigation in analytic number theory. Foundational results include the Erdős–Wintner theorem, the Erdős–Kac theorem on additive functions and the DDT theorem. This number theory-related article is a stub. You can help Wikipedia by expanding it.
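As an illustration of the heuristic that divisibility by different primes behaves like independent events, here is a small simulation sketch (illustrative, not from the article): the Erdős–Kac theorem predicts that ω(n), the number of distinct prime factors of a random n ≤ N, is approximately normal with mean and variance both close to log log N.

    import random
    from math import log
    from statistics import mean, stdev

    def omega(n: int) -> int:
        # count distinct prime factors by trial division
        count, p = 0, 2
        while p * p <= n:
            if n % p == 0:
                count += 1
                while n % p == 0:
                    n //= p
            p += 1
        return count + (1 if n > 1 else 0)

    N = 10 ** 7
    sample = [omega(random.randrange(2, N)) for _ in range(2000)]
    # sample mean and variance should both be near log(log(N)) ~ 2.78
    print(mean(sample), stdev(sample) ** 2, log(log(N)))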
https://en.wikipedia.org/wiki/Probabilistic_number_theory
Probabilistic numerics is an active field of study at the intersection of applied mathematics, statistics, and machine learning centering on the concept of uncertainty in computation. In probabilistic numerics, tasks in numerical analysis such as finding numerical solutions for integration, linear algebra, optimization, simulation and differential equations are seen as problems of statistical, probabilistic, or Bayesian inference. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] A numerical method is an algorithm that approximates the solution to a mathematical problem (examples below include the solution to a linear system of equations, the value of an integral, the solution of a differential equation, the minimum of a multivariate function). In a probabilistic numerical algorithm, this process of approximation is thought of as a problem of estimation, inference or learning and realised in the framework of probabilistic inference (often, but not always, Bayesian inference). [ 6 ] Formally, this means casting the setup of the computational problem in terms of a prior distribution, formulating the relationship between numbers computed by the computer (e.g. matrix-vector multiplications in linear algebra, gradients in optimization, values of the integrand or the vector field defining a differential equation) and the quantity in question (the solution of the linear problem, the minimum, the integral, the solution curve) in a likelihood function, and returning a posterior distribution as the output. In most cases, numerical algorithms also take internal adaptive decisions about which numbers to compute, which form an active learning problem. Many of the most popular classic numerical algorithms can be re-interpreted in the probabilistic framework. This includes the method of conjugate gradients, [ 7 ] [ 8 ] [ 9 ] Nordsieck methods, Gaussian quadrature rules, [ 10 ] and quasi-Newton methods. [ 11 ] In all these cases, the classic method is based on a regularized least-squares estimate that can be associated with the posterior mean arising from a Gaussian prior and likelihood. In such cases, the variance of the Gaussian posterior is then associated with a worst-case estimate for the squared error. Probabilistic numerical methods promise several conceptual advantages over classic, point-estimate based approximation techniques. These advantages are essentially the equivalent of similar functional advantages that Bayesian methods enjoy over point-estimates in machine learning, applied or transferred to the computational domain. Probabilistic numerical methods have been developed for the problem of numerical integration, with the most popular method called Bayesian quadrature. [ 15 ] [ 16 ] [ 17 ] [ 18 ] In numerical integration, function evaluations f(x_1), …, f(x_n) at a number of points x_1, …, x_n are used to estimate the integral ∫ f(x) ν(dx) of a function f against some measure ν. Bayesian quadrature consists of specifying a prior distribution over f and conditioning this prior on f(x_1), …, f(x_n) to obtain a posterior distribution over f, then computing the implied posterior distribution on ∫ f(x) ν(dx).
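The following is a minimal sketch of Bayesian quadrature on [0, 1] against the uniform (Lebesgue) measure, assuming a Gaussian process prior with an RBF kernel (the common choice, as discussed next). The kernel, length-scale, nodes and integrand are all illustrative assumptions, and the posterior variance is omitted for brevity.

    import numpy as np
    from math import erf, pi, sqrt

    ell = 0.2                                        # illustrative length-scale
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

    def kernel_mean(a):
        # z_i = integral_0^1 k(x, a_i) dx, in closed form for the RBF kernel
        c = sqrt(2) * ell
        return ell * sqrt(pi / 2) * np.array([erf((1 - ai) / c) + erf(ai / c) for ai in a])

    f = lambda x: np.sin(3 * x) + x ** 2             # integrand with known integral
    x = np.linspace(0, 1, 8)                         # evaluation nodes
    K = k(x, x) + 1e-10 * np.eye(len(x))             # jitter for numerical stability
    weights = np.linalg.solve(K, kernel_mean(x))
    posterior_mean = weights @ f(x)                  # posterior mean of the integral
    print(posterior_mean, (1 - np.cos(3)) / 3 + 1 / 3)   # compare with exact value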
The most common choice of prior is a Gaussian process, as this allows us to obtain a closed-form posterior distribution on the integral which is a univariate Gaussian distribution. Bayesian quadrature is particularly useful when the function f is expensive to evaluate and the dimension of the data is small to moderate. Probabilistic numerics has also been studied for mathematical optimization, which consists of finding the minimum or maximum of some objective function f given (possibly noisy or indirect) evaluations of that function at a set of points. Perhaps the most notable effort in this direction is Bayesian optimization, [ 20 ] a general approach to optimization grounded in Bayesian inference. Bayesian optimization algorithms operate by maintaining a probabilistic belief about f throughout the optimization procedure; this often takes the form of a Gaussian process prior conditioned on observations. This belief then guides the algorithm in obtaining observations that are likely to advance the optimization process. Bayesian optimization policies are usually realized by transforming the objective function posterior into an inexpensive, differentiable acquisition function that is maximized to select each successive observation location. One prominent approach is to model optimization via Bayesian sequential experimental design, seeking to obtain a sequence of observations yielding the most optimization progress as evaluated by an appropriate utility function. A welcome side effect from this approach is that uncertainty in the objective function, as measured by the underlying probabilistic belief, can guide an optimization policy in addressing the classic exploration vs. exploitation tradeoff. Probabilistic numerical methods have been developed in the context of stochastic optimization for deep learning, in particular to address main issues such as learning rate tuning and line searches, [ 21 ] batch-size selection, [ 22 ] early stopping, [ 23 ] pruning, [ 24 ] and first- and second-order search directions. [ 25 ] [ 26 ] In this setting, the optimization objective is often an empirical risk of the form {\displaystyle \textstyle L(\theta )={\frac {1}{N}}\sum _{n=1}^{N}\ell (y_{n},f_{\theta }(x_{n}))} defined by a dataset {\displaystyle \textstyle {\mathcal {D}}=\{(x_{n},y_{n})\}_{n=1}^{N}}, and a loss ℓ(y, f_θ(x)) that quantifies how well a predictive model f_θ(x) parameterized by θ performs on predicting the target y from its corresponding input x. Epistemic uncertainty arises when the dataset size N is large and cannot be processed at once, meaning that local quantities (given some θ) such as the loss function L(θ) itself or its gradient ∇L(θ) cannot be computed in reasonable time. Hence, generally mini-batching is used to construct estimators of these quantities on a random subset of the data. Probabilistic numerical methods model this uncertainty explicitly and allow for automated decisions and parameter tuning.
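Returning to Bayesian optimization, here is a one-step sketch (illustrative throughout, not any particular library's API): a Gaussian process posterior on a one-dimensional objective and an expected-improvement acquisition maximized over a grid to pick the next evaluation point.

    import numpy as np
    from math import erf, sqrt, pi, exp

    def norm_pdf(z): return exp(-z * z / 2) / sqrt(2 * pi)
    def norm_cdf(z): return 0.5 * (1 + erf(z / sqrt(2)))

    ell, noise = 0.3, 1e-8                           # illustrative hyperparameters
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

    f = lambda x: np.sin(5 * x) + 0.5 * x            # objective to be minimized
    X = np.array([0.1, 0.5, 0.9])                    # evaluations made so far
    y = f(X)
    grid = np.linspace(0, 1, 200)

    Kinv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    ks = k(grid, X)
    mu = ks @ Kinv @ y                               # posterior mean on the grid
    var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
    sd = np.sqrt(np.clip(var, 1e-12, None))

    best = y.min()                                   # expected improvement (minimization)
    z = (best - mu) / sd
    ei = (best - mu) * np.vectorize(norm_cdf)(z) + sd * np.vectorize(norm_pdf)(z)
    print("next evaluation at x =", grid[np.argmax(ei)])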
Probabilistic numerical methods for linear algebra [ 7 ] [ 8 ] [ 27 ] [ 9 ] [ 28 ] [ 29 ] have primarily focused on solving systems of linear equations of the form Ax = b and the computation of determinants |A|. [ 30 ] [ 31 ] A large class of methods are iterative in nature and collect information about the linear system to be solved via repeated matrix-vector multiplication v ↦ Av with the system matrix A with different vectors v. Such methods can be roughly split into a solution- [ 8 ] [ 28 ] and a matrix-based perspective, [ 7 ] [ 9 ] depending on whether belief is expressed over the solution x of the linear system or the (pseudo-)inverse of the matrix H = A†. The belief update exploits the fact that the inferred object is linked to matrix multiplications y = Av or z = A⊺v via b⊺z = x⊺v and v = A⁻¹y. Methods typically assume a Gaussian distribution, due to its closedness under linear observations of the problem. While conceptually different, these two views are computationally equivalent and inherently connected via the right-hand side through x = A⁻¹b. [ 27 ] Probabilistic numerical linear algebra routines have been successfully applied to scale Gaussian processes to large datasets. [ 31 ] [ 32 ] In particular, they enable exact propagation of the approximation error to a combined Gaussian process posterior, which quantifies the uncertainty arising from both the finite number of data observed and the finite amount of computation expended. [ 32 ] Probabilistic numerical methods for ordinary differential equations ẏ(t) = f(t, y(t)) have been developed for initial and boundary value problems. Many different probabilistic numerical methods designed for ordinary differential equations have been proposed, and these can broadly be grouped into two categories: those based on randomisation of classical methods and those based on Gaussian process regression, the same division drawn for partial differential equations below. The boundary between these two categories is not sharp; indeed, a Gaussian process regression approach based on randomised data was developed as well. [ 40 ] These methods have been applied to problems in computational Riemannian geometry, [ 41 ] inverse problems, latent force models, and to differential equations with a geometric structure such as symplecticity. A number of probabilistic numerical methods have also been proposed for partial differential equations. As with ordinary differential equations, the approaches can broadly be divided into those based on randomisation, generally of some underlying finite-element mesh, [ 33 ] [ 42 ] and those based on Gaussian process regression. [ 4 ] [ 3 ] [ 43 ] [ 44 ] Probabilistic numerical PDE solvers based on Gaussian process regression recover classical methods on linear PDEs for certain priors, in particular methods of mean weighted residuals, which include Galerkin methods, finite element methods, as well as spectral methods. [ 44 ] The interplay between numerical analysis and probability is touched upon by a number of other areas of mathematics, including average-case analysis of numerical methods, information-based complexity, game theory, and statistical decision theory. Precursors to what is now being called "probabilistic numerics" can be found as early as the late 19th and early 20th century.
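Returning to the solution-based linear-algebra perspective above, here is a minimal sketch (all choices, including the prior, the probe directions and the problem size, are illustrative assumptions): a Gaussian prior is placed on the solution x of Ax = b and conditioned on noiseless projections obtained from matrix-vector products with A.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    A = rng.normal(size=(n, n))
    A = A @ A.T + n * np.eye(n)                      # a well-conditioned SPD matrix
    b = rng.normal(size=n)

    mean = np.zeros(n)                               # prior mean over x
    cov = np.eye(n)                                  # prior covariance over x
    for _ in range(n):
        s = rng.normal(size=n)                       # probe direction
        c = A.T @ s                                  # observation functional: c @ x = s @ b
        denom = c @ cov @ c
        if denom < 1e-12:                            # direction carries no new information
            continue
        gain = cov @ c / denom
        mean = mean + gain * (s @ b - c @ mean)      # Gaussian conditioning update
        cov = cov - np.outer(gain, c @ cov)

    # after n informative noiseless observations the posterior mean is the solution
    print(np.allclose(mean, np.linalg.solve(A, b), atol=1e-6))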
The origins of probabilistic numerics can be traced to a discussion of probabilistic approaches to polynomial interpolation by Henri Poincaré in his Calcul des Probabilités. [ 45 ] In modern terminology, Poincaré considered a Gaussian prior distribution on a function f: ℝ → ℝ, expressed as a formal power series with random coefficients, and asked for "probable values" of f(x) given this prior and n ∈ ℕ observations f(a_i) = B_i for i = 1, …, n. A later seminal contribution to the interplay of numerical analysis and probability was provided by Albert Suldin in the context of univariate quadrature. [ 46 ] The statistical problem considered by Suldin was the approximation of the definite integral ∫ u(t) dt of a function u: [a, b] → ℝ, under a Brownian motion prior on u, given access to pointwise evaluation of u at nodes t_1, …, t_n ∈ [a, b]. Suldin showed that, for given quadrature nodes, the quadrature rule with minimal mean squared error is the trapezoidal rule; furthermore, this minimal error is proportional to the sum of cubes of the inter-node spacings. As a result, one can see the trapezoidal rule with equally spaced nodes as statistically optimal in some sense, an early example of the average-case analysis of a numerical method. Suldin's point of view was later extended by Mike Larkin. [ 47 ] Note that Suldin's Brownian motion prior on the integrand u is a Gaussian measure and that the operations of integration and of pointwise evaluation of u are both linear maps. Thus, the definite integral ∫ u(t) dt is a real-valued Gaussian random variable. In particular, after conditioning on the observed pointwise values of u, it follows a normal distribution with mean equal to the trapezoidal rule and variance equal to (1/12) Σ_{i=2}^{n} (t_i − t_{i−1})³. This viewpoint is very close to that of Bayesian quadrature, seeing the output of a quadrature method not just as a point estimate but as a probability distribution in its own right. As noted by Houman Owhadi and collaborators, [ 3 ] [ 48 ] interplays between numerical approximation and statistical inference can also be traced back to Palasti and Renyi, [ 49 ] Sard, [ 50 ] Kimeldorf and Wahba [ 51 ] (on the correspondence between Bayesian estimation and spline smoothing/interpolation) and Larkin [ 47 ] (on the correspondence between Gaussian process regression and numerical approximation). Although the approach of modelling a perfectly known function as a sample from a random process may seem counterintuitive, a natural framework for understanding it can be found in information-based complexity (IBC), [ 52 ] the branch of computational complexity founded on the observation that numerical implementation requires computation with partial information and limited resources. In IBC, the performance of an algorithm operating on incomplete information can be analyzed in the worst-case or the average-case (randomized) setting with respect to the missing information.
Moreover, as Packel [ 53 ] observed, the average case setting could be interpreted as a mixed strategy in an adversarial game obtained by lifting a (worst-case) minmax problem to a minmax problem over mixed (randomized) strategies. This observation leads to a natural connection [ 54 ] [ 3 ] between numerical approximation and Wald's decision theory , evidently influenced by von Neumann's theory of games . To describe this connection consider the optimal recovery setting of Micchelli and Rivlin [ 55 ] in which one tries to approximate an unknown function from a finite number of linear measurements on that function. Interpreting this optimal recovery problem as a zero-sum game where Player I selects the unknown function and Player II selects its approximation, and using relative errors in a quadratic norm to define losses, Gaussian priors emerge [ 3 ] as optimal mixed strategies for such games, and the covariance operator of the optimal Gaussian prior is determined by the quadratic norm used to define the relative error of the recovery.
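Suldin's quadrature result above can be restated numerically; in this sketch the nodes and data are illustrative, the posterior mean under the Brownian-motion prior is the trapezoidal rule, and the posterior variance is (1/12) times the sum of cubed inter-node spacings.

    import numpy as np

    t = np.array([0.0, 0.3, 0.55, 0.8, 1.0])          # quadrature nodes in [a, b]
    u = np.sin(2 * np.pi * t) + t                     # observed pointwise values

    posterior_mean = np.sum(np.diff(t) * (u[:-1] + u[1:]) / 2)   # trapezoidal rule
    posterior_var = np.sum(np.diff(t) ** 3) / 12                 # Suldin's variance
    print(posterior_mean, posterior_var)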
https://en.wikipedia.org/wiki/Probabilistic_numerics
A probabilistic proposition is a proposition with a measured probability of being true for an arbitrary person at an arbitrary time. They may be contrasted with deterministic propositions, which assert that something is certain with no element of chance. Probabilistic propositions may be either categorical or conditional. [ 1 ] This statistics-related article is a stub. You can help Wikipedia by expanding it. This logic-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Probabilistic_proposition
In quantum mechanics , a probability amplitude is a complex number used for describing the behaviour of systems. The square of the modulus of this quantity at a point in space represents a probability density at that point. Probability amplitudes provide a relationship between the quantum state vector of a system and the results of observations of that system, a link that was first proposed by Max Born , in 1926. Interpretation of values of a wave function as the probability amplitude is a pillar of the Copenhagen interpretation of quantum mechanics. In fact, the properties of the space of wave functions were being used to make physical predictions (such as emissions from atoms being at certain discrete energies) before any physical interpretation of a particular function was offered. Born was awarded half of the 1954 Nobel Prize in Physics for this understanding, and the probability thus calculated is sometimes called the "Born probability". These probabilistic concepts, namely the probability density and quantum measurements , were vigorously contested at the time by the original physicists working on the theory, such as Schrödinger and Einstein . It is the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics —topics that continue to be debated even today. Neglecting some technical complexities, the problem of quantum measurement is the behaviour of a quantum state, for which the value of the observable Q to be measured is uncertain . Such a state is thought to be a coherent superposition of the observable's eigenstates , states on which the value of the observable is uniquely defined, for different possible values of the observable. When a measurement of Q is made, the system (under the Copenhagen interpretation ) jumps to one of the eigenstates , returning the eigenvalue belonging to that eigenstate. The system may always be described by a linear combination or superposition of these eigenstates with unequal "weights" . Intuitively it is clear that eigenstates with heavier "weights" are more "likely" to be produced. Indeed, which of the above eigenstates the system jumps to is given by a probabilistic law: the probability of the system jumping to the state is proportional to the absolute value of the corresponding numerical weight squared. These numerical weights are called probability amplitudes, and this relationship used to calculate probabilities from given pure quantum states (such as wave functions) is called the Born rule . Clearly, the sum of the probabilities, which equals the sum of the absolute squares of the probability amplitudes, must equal 1. This is the normalization requirement. If the system is known to be in some eigenstate of Q (e.g. after an observation of the corresponding eigenvalue of Q ) the probability of observing that eigenvalue becomes equal to 1 (certain) for all subsequent measurements of Q (so long as no other important forces act between the measurements). In other words, the probability amplitudes are zero for all the other eigenstates, and remain zero for the future measurements. If the set of eigenstates to which the system can jump upon measurement of Q is the same as the set of eigenstates for measurement of R , then subsequent measurements of either Q or R always produce the same values with probability of 1, no matter the order in which they are applied. The probability amplitudes are unaffected by either measurement, and the observables are said to commute . 
By contrast, if the eigenstates of Q and R are different, then measurement of R produces a jump to a state that is not an eigenstate of Q. Therefore, if the system is known to be in some eigenstate of Q (all probability amplitudes zero except for one eigenstate), then when R is observed the probability amplitudes are changed. A second, subsequent observation of Q no longer certainly produces the eigenvalue corresponding to the starting state. In other words, the probability amplitudes for the second measurement of Q depend on whether it comes before or after a measurement of R, and the two observables do not commute. In a formal setup, the state of an isolated physical system in quantum mechanics is represented, at a fixed time t, by a state vector |Ψ⟩ belonging to a separable complex Hilbert space. Using bra–ket notation, the relation between the state vector and the "position basis" {|x⟩} of the Hilbert space can be written as ψ(x) = ⟨x|Ψ⟩. [ 1 ] Its relation with an observable can be elucidated by generalizing the quantum state ψ to a measurable function and its domain of definition to a given σ-finite measure space (X, 𝒜, μ). This allows for a refinement of Lebesgue's decomposition theorem, decomposing μ into three mutually singular parts μ = μ_ac + μ_sc + μ_pp, where μ_ac is absolutely continuous with respect to the Lebesgue measure, μ_sc is singular with respect to the Lebesgue measure and atomless, and μ_pp is a pure point measure. [ 2 ] [ 3 ] A usual presentation of the probability amplitude is that of a wave function ψ belonging to the L² space of (equivalence classes of) square integrable functions, i.e., ψ belongs to L²(X) if and only if ∫_X |ψ(x)|² dμ(x) is finite. If the norm is equal to 1, so that ∫_X |ψ(x)|² dμ(x) = 1, then |ψ(x)|² is the probability density function for a measurement of the particle's position at a given time, defined as the Radon–Nikodym derivative with respect to the Lebesgue measure (e.g. on the set ℝ of all real numbers). As probability is a dimensionless quantity, |ψ(x)|² must have the inverse dimension of the variable of integration x. For example, the above amplitude has dimension [L^(−1/2)], where L represents length. Whereas a Hilbert space is separable if and only if it admits a countable orthonormal basis, the range of a continuous random variable x is an uncountable set (i.e. the probability that the system is "at position x" will always be zero). As such, eigenstates of an observable need not necessarily be measurable functions belonging to L²(X) (see normalization condition below). A typical example is the position operator x̂, defined as (x̂ψ)(x) = xψ(x), whose eigenfunctions are Dirac delta functions, which clearly do not belong to L²(X). By replacing the state space by a suitable rigged Hilbert space, however, the rigorous notion of eigenstates from the spectral theorem as well as spectral decomposition is preserved. [ 4 ] Let μ_pp be atomic (i.e. the set A ⊂ X in 𝒜 is an atom); specifying the measure of any discrete variable x ∈ A equal to 1. The amplitudes are composed of state vector |Ψ⟩ indexed by A; its components are denoted by ψ(x) for uniformity with the previous case.
If the ℓ²-norm of |Ψ⟩ is equal to 1, then |ψ(x)|² is a probability mass function. A convenient configuration space X is such that each point x produces some unique value of the observable Q. For discrete X it means that all elements of the standard basis are eigenvectors of Q. Then ψ(x) is the probability amplitude for the eigenstate |x⟩. If it corresponds to a non-degenerate eigenvalue of Q, then |ψ(x)|² gives the probability of the corresponding value of Q for the initial state |Ψ⟩. |ψ(x)| = 1 if and only if |x⟩ is the same quantum state as |Ψ⟩. ψ(x) = 0 if and only if |x⟩ and |Ψ⟩ are orthogonal. Otherwise the modulus of ψ(x) is between 0 and 1. A discrete probability amplitude may be considered as a fundamental frequency in the probability frequency domain (spherical harmonics) for the purposes of simplifying M-theory transformation calculations. [ citation needed ] Discrete dynamical variables are used in such problems as a particle in an idealized reflective box and the quantum harmonic oscillator. [ clarification needed ] An example of the discrete case is a quantum system that can be in two possible states, e.g. the polarization of a photon. When the polarization is measured, it could be the horizontal state |H⟩ or the vertical state |V⟩. Until its polarization is measured the photon can be in a superposition of both these states, so its state |ψ⟩ could be written as |ψ⟩ = α|H⟩ + β|V⟩, with α and β the probability amplitudes for the states |H⟩ and |V⟩ respectively. When the photon's polarization is measured, the resulting state is either horizontal or vertical. But in a random experiment, the probability of being horizontally polarized is |α|², and the probability of being vertically polarized is |β|². Hence, a photon in a state |ψ⟩ = √(1/3)|H⟩ − i√(2/3)|V⟩ would have a probability of 1/3 to come out horizontally polarized, and a probability of 2/3 to come out vertically polarized when an ensemble of measurements are made. The order of such results is, however, completely random. Another example is quantum spin. If a spin-measuring apparatus is pointing along the z-axis and is therefore able to measure the z-component of the spin (σ_z), the following must be true for the measurement of spin "up" and "down": |⟨u|ψ⟩|² + |⟨d|ψ⟩|² = 1. If one assumes that the system is prepared so that +1 is registered in σ_x, and the apparatus is then rotated to measure σ_z, the probability amplitude of measuring spin up is given by ⟨r|u⟩, since the system had the initial state |r⟩. The probability of measuring |u⟩ is given by |⟨r|u⟩|² = 1/2, which agrees with experiment. In the example above, the measurement must give either |H⟩ or |V⟩, so the total probability of measuring |H⟩ or |V⟩ must be 1. This leads to a constraint that |α|² + |β|² = 1; more generally the sum of the squared moduli of the probability amplitudes of all the possible states is equal to one.
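A direct numeric check of the polarization example (a sketch; numpy is used only for convenience) computes the Born-rule probabilities from the two amplitudes and confirms the normalization just stated:

    import numpy as np

    psi = np.array([np.sqrt(1 / 3), -1j * np.sqrt(2 / 3)])   # (alpha, beta)

    probs = np.abs(psi) ** 2
    print(probs)          # approximately [1/3, 2/3]
    print(probs.sum())    # the squared moduli sum to 1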
If "all the possible states" is understood as an orthonormal basis, which makes sense in the discrete case, then this condition is the same as the norm-1 condition explained above. One can always divide any non-zero element of a Hilbert space by its norm and obtain a normalized state vector. Not every wave function belongs to the Hilbert space L²(X), though. Wave functions that fulfill this constraint are called normalizable. The Schrödinger equation, describing states of quantum particles, has solutions that describe a system and determine precisely how the state changes with time. Suppose a wave function ψ(x, t) gives a description of the particle (position x at a given time t). A wave function is square integrable if ∫ |ψ(x, t)|² dx is finite. After normalization the wave function still represents the same state and is therefore equal by definition to ψ(x, t) / (∫ |ψ(x, t)|² dx)^(1/2). [ 5 ] [ 6 ] Under the standard Copenhagen interpretation, the normalized wavefunction gives probability amplitudes for the position of the particle. Hence, ρ(x) = |ψ(x, t)|² is a probability density function and the probability that the particle is in the volume V at fixed time t is given by ∫_V |ψ(x, t)|² dV. The probability density function does not vary with time as the evolution of the wave function is dictated by the Schrödinger equation and is therefore entirely deterministic. [ 7 ] This is key to understanding the importance of this interpretation: for a given particle of constant mass, initial ψ(x, t₀) and potential, the Schrödinger equation fully determines subsequent wavefunctions. The above then gives probabilities of locations of the particle at all subsequent times. Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, in the classic double-slit experiment, electrons are fired randomly at two slits, and one asks for the probability distribution of detecting electrons at all parts of a large screen placed behind the slits. An intuitive answer is that P(through either slit) = P(through first slit) + P(through second slit), where P(event) is the probability of that event. This is obvious if one assumes that an electron passes through either slit. When no measurement apparatus that determines through which slit the electrons travel is installed, the observed probability distribution on the screen reflects the interference pattern that is common with light waves. If one assumes the above law to be true, then this pattern cannot be explained. The particles cannot be said to go through either slit and the simple explanation does not work. The correct explanation is, however, by the association of probability amplitudes to each event. The complex amplitudes which represent the electron passing each slit (ψ_first and ψ_second) follow the law of precisely the form expected: ψ_total = ψ_first + ψ_second. This is the principle of quantum superposition. The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex:
{\displaystyle P=\left|\psi _{\text{first}}+\psi _{\text{second}}\right|^{2}=\left|\psi _{\text{first}}\right|^{2}+\left|\psi _{\text{second}}\right|^{2}+2\left|\psi _{\text{first}}\right|\left|\psi _{\text{second}}\right|\cos(\varphi _{1}-\varphi _{2}).} Here, φ₁ and φ₂ are the arguments of ψ_first and ψ_second respectively. A purely real formulation has too few dimensions to describe the system's state when superposition is taken into account. That is, without the arguments of the amplitudes, we cannot describe the phase-dependent interference. The crucial term 2|ψ_first||ψ_second| cos(φ₁ − φ₂) is called the "interference term", and this would be missing if we had added the probabilities. However, one may choose to devise an experiment in which the experimenter observes which slit each electron goes through. Then, due to wavefunction collapse, the interference pattern is not observed on the screen. One may go further in devising an experiment in which the experimenter gets rid of this "which-path information" by a "quantum eraser". Then, according to the Copenhagen interpretation, the case A applies again and the interference pattern is restored. [ 8 ] Intuitively, since a normalised wave function stays normalised while evolving according to the wave equation, there will be a relationship between the change in the probability density of the particle's position and the change in the amplitude at these positions. Define the probability current (or flux) j as j = (ℏ/m) Im(ψ*∇ψ), measured in units of (probability)/(area × time). Then the current satisfies the continuity equation ∂ρ/∂t + ∇·j = 0, where ρ = |ψ|² is the probability density. This is exactly the continuity equation appearing in many situations in physics where we need to describe the local conservation of quantities. The best example is in classical electrodynamics, where j corresponds to current density corresponding to electric charge, and the density is the charge-density. The corresponding continuity equation describes the local conservation of charges. For two quantum systems with spaces L²(X₁) and L²(X₂) and given states |Ψ₁⟩ and |Ψ₂⟩ respectively, their combined state |Ψ₁⟩ ⊗ |Ψ₂⟩ can be expressed as ψ₁(x₁)ψ₂(x₂), a function on X₁ × X₂, which gives the product of the respective probability measures. In other words, amplitudes of a non-entangled composite state are products of original amplitudes, and respective observables on the systems 1 and 2 behave on these states as independent random variables. This strengthens the probabilistic interpretation explicated above. The concept of amplitudes is also used in the context of scattering theory, notably in the form of S-matrices. Whereas moduli of vector components squared, for a given vector, give a fixed probability distribution, moduli of matrix elements squared are interpreted as transition probabilities just as in a random process. Just as a finite-dimensional unit vector specifies a finite probability distribution, a finite-dimensional unitary matrix specifies transition probabilities between a finite number of states. The "transitional" interpretation may be applied to L²s on non-discrete spaces as well. [ clarification needed ]
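A numerical check of the two-slit composition law (the magnitudes and phases below are arbitrary illustrative values): adding amplitudes and then squaring reproduces the interference term, while adding the probabilities alone does not.

    import numpy as np

    r1, phi1 = 0.6, 0.0
    r2, phi2 = 0.5, 1.2
    psi1 = r1 * np.exp(1j * phi1)
    psi2 = r2 * np.exp(1j * phi2)

    P = abs(psi1 + psi2) ** 2
    classical = r1 ** 2 + r2 ** 2
    interference = 2 * r1 * r2 * np.cos(phi1 - phi2)
    print(P, classical + interference)    # equal; 'classical' alone misses the term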
https://en.wikipedia.org/wiki/Probability_amplitude
Probability and statistics are two closely related fields in mathematics that are sometimes combined for academic purposes. [ 1 ] They are covered in multiple articles and lists: Publications named for both fields include the following:
https://en.wikipedia.org/wiki/Probability_and_statistics
The standard probability axioms are the foundations of probability theory introduced by Russian mathematician Andrey Kolmogorov in 1933. [ 1 ] These axioms remain central and have direct contributions to mathematics, the physical sciences, and real-world probability cases. [ 2 ] There are several other (equivalent) approaches to formalising probability. Bayesians will often motivate the Kolmogorov axioms by invoking Cox's theorem or the Dutch book arguments instead. [ 3 ] [ 4 ] The assumptions as to setting up the axioms can be summarised as follows: Let ( Ω , F , P ) {\displaystyle (\Omega ,F,P)} be a measure space such that P ( E ) {\displaystyle P(E)} is the probability of some event E {\displaystyle E} , and P ( Ω ) = 1 {\displaystyle P(\Omega )=1} . Then ( Ω , F , P ) {\displaystyle (\Omega ,F,P)} is a probability space , with sample space Ω {\displaystyle \Omega } , event space F {\displaystyle F} and probability measure P {\displaystyle P} . [ 1 ] The probability of an event is a non-negative real number: where F {\displaystyle F} is the event space. It follows (when combined with the second axiom) that P ( E ) {\displaystyle P(E)} is always finite, in contrast with more general measure theory . Theories which assign negative probability relax the first axiom. This is the assumption of unit measure : that the probability that at least one of the elementary events in the entire sample space will occur is 1. This is the assumption of σ-additivity : Some authors consider merely finitely additive probability spaces, in which case one just needs an algebra of sets , rather than a σ-algebra . [ 5 ] Quasiprobability distributions in general relax the third axiom. From the Kolmogorov axioms, one can deduce other useful rules for studying probabilities. The proofs [ 6 ] [ 7 ] [ 8 ] of these rules are a very insightful procedure that illustrates the power of the third axiom, and its interaction with the prior two axioms. Four of the immediate corollaries and their proofs are shown below: If A is a subset of, or equal to B, then the probability of A is less than, or equal to the probability of B. Source: [ 6 ] In order to verify the monotonicity property, we set E 1 = A {\displaystyle E_{1}=A} and E 2 = B ∖ A {\displaystyle E_{2}=B\setminus A} , where A ⊆ B {\displaystyle A\subseteq B} and E i = ∅ {\displaystyle E_{i}=\varnothing } for i ≥ 3 {\displaystyle i\geq 3} . From the properties of the empty set ( ∅ {\displaystyle \varnothing } ), it is easy to see that the sets E i {\displaystyle E_{i}} are pairwise disjoint and E 1 ∪ E 2 ∪ ⋯ = B {\displaystyle E_{1}\cup E_{2}\cup \cdots =B} . Hence, we obtain from the third axiom that Since, by the first axiom, the left-hand side of this equation is a series of non-negative numbers, and since it converges to P ( B ) {\displaystyle P(B)} which is finite, we obtain both P ( A ) ≤ P ( B ) {\displaystyle P(A)\leq P(B)} and P ( ∅ ) = 0 {\displaystyle P(\varnothing )=0} . In many cases, ∅ {\displaystyle \varnothing } is not the only event with probability 0. 
The probability of the empty set is zero: P(∅ ∪ ∅) = P(∅) since ∅ ∪ ∅ = ∅; applying the third axiom to the left-hand side (note that ∅ is disjoint with itself) gives P(∅) + P(∅) = P(∅), and so P(∅) = 0 by subtracting P(∅) from each side of the equation. The complement rule states that P(A∁) = P(Ω − A) = 1 − P(A). Proof: given that A and A∁ are mutually exclusive and that A ∪ A∁ = Ω, we have P(A ∪ A∁) = P(A) + P(A∁) (by axiom 3) and P(A ∪ A∁) = P(Ω) = 1 (by axiom 2); hence P(A) + P(A∁) = 1 and therefore P(A∁) = 1 − P(A). It immediately follows from the monotonicity property that 0 ≤ P(E) ≤ 1. Proof: given the complement rule P(E∁) = 1 − P(E) and axiom 1, P(E∁) ≥ 0, so 1 − P(E) ≥ 0 and hence P(E) ≤ 1. Another important property is the addition law of probability, or the sum rule: P(A ∪ B) = P(A) + P(B) − P(A ∩ B). That is, the probability that an event in A or B will happen is the sum of the probability of an event in A and the probability of an event in B, minus the probability of an event that is in both A and B. The proof of this is as follows. Firstly, P(A ∪ B) = P(A) + P(B ∖ A) by the third axiom, since A and B ∖ A are disjoint. So, noting that B ∖ A = B ∖ (A ∩ B), we have P(A ∪ B) = P(A) + P(B ∖ (A ∩ B)). Also, P(B) = P(B ∖ (A ∩ B)) + P(A ∩ B), and eliminating P(B ∖ (A ∩ B)) from both equations gives us the desired result. An extension of the addition law to any number of sets is the inclusion–exclusion principle. Setting B to the complement A^c of A in the addition law gives 1 = P(Ω) = P(A) + P(A^c). That is, the probability that any event will not happen (or the event's complement) is 1 minus the probability that it will. Consider a single coin-toss, and assume that the coin will either land heads (H) or tails (T) (but not both). No assumption is made as to whether the coin is fair or as to whether or not any bias depends on how the coin is tossed. [ 9 ] We may define the sample space as Ω = {H, T}, with the event space consisting of its subsets. Kolmogorov's axioms imply that: the probability of neither heads nor tails is 0; the probability of either heads or tails is 1; and the sum of the probability of heads and the probability of tails is 1.
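These corollaries can be checked mechanically on a toy finite probability space; the following sketch (illustrative outcome names and weights) verifies the first two axioms, the zero probability of the empty set, and the addition law over all pairs of events:

    from itertools import chain, combinations

    omega = {"a": 0.2, "b": 0.3, "c": 0.5}           # sample space with P on atoms

    def P(event):                                    # an event is a set of outcomes
        return sum(omega[x] for x in event)

    events = [set(s) for s in chain.from_iterable(
        combinations(omega, r) for r in range(len(omega) + 1))]

    assert abs(P(set(omega)) - 1.0) < 1e-12          # axiom 2: unit measure
    assert all(P(E) >= 0 for E in events)            # axiom 1: non-negativity
    assert P(set()) == 0                             # corollary: P(empty set) = 0
    for A in events:                                 # addition law (sum rule)
        for B in events:
            assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12
    print("axioms and addition law hold on the toy space")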
https://en.wikipedia.org/wiki/Probability_axioms
Probability bounds analysis ( PBA ) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes , and constrain cumulative probability distributions (rather than densities or mass functions ). This bounding approach permits analysts to make calculations without requiring overly precise assumptions about parameter values, dependence among variables, or even distribution shape. Probability bounds analysis is essentially a combination of the methods of standard interval analysis and classical probability theory . Probability bounds analysis gives the same answer as interval analysis does when only range information is available. It also gives the same answers as Monte Carlo simulation does when information is abundant enough to precisely specify input distributions and their dependencies. Thus, it is a generalization of both interval analysis and probability theory. The diverse methods comprising probability bounds analysis provide algorithms to evaluate mathematical expressions when there is uncertainty about the input values, their dependencies, or even the form of mathematical expression itself. The calculations yield results that are guaranteed to enclose all possible distributions of the output variable if the input p-boxes were also sure to enclose their respective distributions. In some cases, a calculated p-box will also be best-possible in the sense that the bounds could be no tighter without excluding some of the possible distributions. P-boxes are usually merely bounds on possible distributions. The bounds often also enclose distributions that are not themselves possible. For instance, the set of probability distributions that could result from adding random values without the independence assumption from two (precise) distributions is generally a proper subset of all the distributions enclosed by the p-box computed for the sum. That is, there are distributions within the output p-box that could not arise under any dependence between the two input distributions. The output p-box will, however, always contain all distributions that are possible, so long as the input p-boxes were sure to enclose their respective underlying distributions. This property often suffices for use in risk analysis and other fields requiring calculations under uncertainty. The idea of bounding probability has a very long tradition throughout the history of probability theory. Indeed, in 1854 George Boole used the notion of interval bounds on probability in his The Laws of Thought . [ 1 ] [ 2 ] Also dating from the latter half of the 19th century, the inequality attributed to Chebyshev described bounds on a distribution when only the mean and variance of the variable are known, and the related inequality attributed to Markov found bounds on a positive variable when only the mean is known. Kyburg [ 3 ] reviewed the history of interval probabilities and traced the development of the critical ideas through the 20th century, including the important notion of incomparable probabilities favored by Keynes . 
Of particular note is Fréchet's derivation in the 1930s of bounds on calculations involving total probabilities without dependence assumptions. Bounding probabilities has continued to the present day (e.g., Walley's theory of imprecise probability [ 4 ]). The methods of probability bounds analysis that could be routinely used in risk assessments were developed in the 1980s. Hailperin [ 2 ] described a computational scheme for bounding logical calculations extending the ideas of Boole. Yager [ 5 ] described the elementary procedures by which bounds on convolutions can be computed under an assumption of independence. At about the same time, Makarov, [ 6 ] and independently, Rüschendorf [ 7 ] solved the problem, originally posed by Kolmogorov, of how to find the upper and lower bounds for the probability distribution of a sum of random variables whose marginal distributions, but not their joint distribution, are known. Frank et al. [ 8 ] generalized the result of Makarov and expressed it in terms of copulas. Since that time, formulas and algorithms for sums have been generalized and extended to differences, products, quotients and other binary and unary functions under various dependence assumptions. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Arithmetic expressions involving operations such as additions, subtractions, multiplications, divisions, minima, maxima, powers, exponentials, logarithms, square roots, absolute values, etc., are commonly used in risk analyses and uncertainty modeling. Convolution is the operation of finding the probability distribution of a sum of independent random variables specified by probability distributions. We can extend the term to finding distributions of other mathematical functions (products, differences, quotients, and more complex functions) and other assumptions about the intervariable dependencies. There are convenient algorithms for computing these generalized convolutions under a variety of assumptions about the dependencies among the inputs. [ 5 ] [ 9 ] [ 10 ] [ 14 ] Let 𝔻 denote the space of distribution functions on the real numbers ℝ, i.e., the space of non-decreasing functions D: ℝ → [0, 1] with limits 0 at −∞ and 1 at +∞. A p-box is a quintuple {F̲, F̄, m, v, F}, where F̄ and F̲ ∈ 𝔻, m and v are real intervals, and F ⊂ 𝔻. This quintuple denotes the set of distribution functions F ∈ F ⊂ 𝔻 such that: F̲(x) ≤ F(x) ≤ F̄(x) for all x, the mean of F lies in the interval m, and the variance of F lies in the interval v. If a function satisfies all the conditions above it is said to be inside the p-box. In some cases, there may be no information about the moments or distribution family other than what is encoded in the two distribution functions that constitute the edges of the p-box. Then the quintuple representing the p-box {B₁, B₂, [−∞, ∞], [0, ∞], 𝔻} can be denoted more compactly as [B₁, B₂]. This notation harkens to that of intervals on the real line, except that the endpoints are distributions rather than points. The notation X ~ F denotes the fact that X ∈ ℝ is a random variable governed by the distribution function F, that is, P(X ≤ x) = F(x). Let us generalize the tilde notation for use with p-boxes. We will write X ~ B to mean that X is a random variable whose distribution function is unknown except that it is inside B.
Thus, X ~ F ∈ B can be contracted to X ~ B without mentioning the distribution function explicitly. If X and Y are independent random variables with distributions F and G respectively, then X + Y = Z ~ H given by the convolution H(z) = ∫ F(z − y) dG(y). This operation is called a convolution on F and G. The analogous operation on p-boxes is straightforward for sums. Suppose X ~ A = [A₁, A₂] and Y ~ B = [B₁, B₂]. If X and Y are stochastically independent, then the distribution of Z = X + Y is inside the p-box obtained by convolving the respective bounding distributions, [A₁ * B₁, A₂ * B₂]. Finding bounds on the distribution of sums Z = X + Y without making any assumption about the dependence between X and Y is actually easier than the problem assuming independence. Makarov [ 6 ] [ 8 ] [ 9 ] showed that the distribution of Z is bounded below by sup_{x+y=z} max(F(x) + G(y) − 1, 0) and above by inf_{x+y=z} min(F(x) + G(y), 1). These bounds are implied by the Fréchet–Hoeffding copula bounds. The problem can also be solved using the methods of mathematical programming. [ 13 ] The convolution under the intermediate assumption that X and Y have positive dependence is likewise easy to compute, as is the convolution under the extreme assumptions of perfect positive or perfect negative dependency between X and Y. [ 14 ] Generalized convolutions for other operations such as subtraction, multiplication, division, etc., can be derived using transformations. For instance, p-box subtraction A − B can be defined as A + (−B), where the negative of a p-box B = [B₁, B₂] is [B₂(−x), B₁(−x)]. Logical or Boolean expressions involving conjunctions (AND operations), disjunctions (OR operations), exclusive disjunctions, equivalences, conditionals, etc. arise in the analysis of fault trees and event trees common in risk assessments. If the probabilities of events are characterized by intervals, as suggested by Boole [ 1 ] and Keynes [ 3 ] among others, these binary operations are straightforward to evaluate. For example, if the probability of an event A is in the interval P(A) = a = [0.2, 0.25], and the probability of the event B is in P(B) = b = [0.1, 0.3], then the probability of the conjunction is surely in the interval P(A & B) = a × b = [0.02, 0.075] so long as A and B can be assumed to be independent events. If they are not independent, we can still bound the conjunction using the classical Fréchet inequality. In this case, we can infer at least that the probability of the joint event A & B is surely within the interval P(A & B) = env(max(0, a + b − 1), min(a, b)) = [0, 0.25], where env([x₁, x₂], [y₁, y₂]) is [min(x₁, y₁), max(x₂, y₂)]. Likewise, the probability of the disjunction is surely in the interval P(A ∨ B) = 1 − (1 − a)(1 − b) = [0.28, 0.475] if A and B are independent events. If they are not independent, the Fréchet inequality bounds the disjunction by P(A ∨ B) = env(max(a, b), min(1, a + b)) = [0.2, 0.55]. It is also possible to compute interval bounds on the conjunction or disjunction under other assumptions about the dependence between A and B. For instance, one might assume they are positively dependent, in which case the resulting interval is not as tight as the answer assuming independence but tighter than the answer given by the Fréchet inequality. Comparable calculations are used for other logical functions such as negation, exclusive disjunction, etc. When the Boolean expression to be evaluated becomes complex, it may be necessary to evaluate it using the methods of mathematical programming [ 2 ] to get best-possible bounds on the expression. A similar problem arises in the case of probabilistic logic (see for example Gerla 1994). If the probabilities of the events are characterized by probability distributions or p-boxes rather than intervals, then analogous calculations can be done to obtain distributional or p-box results characterizing the probability of the top event.
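The interval calculations for conjunction and disjunction above can be carried out directly; this pure-Python sketch uses the same numbers as the text, under independence and under the Fréchet bounds:

    a = (0.2, 0.25)                                              # interval P(A)
    b = (0.1, 0.3)                                               # interval P(B)

    and_indep = (a[0] * b[0], a[1] * b[1])                       # [0.02, 0.075]
    and_frechet = (max(0.0, a[0] + b[0] - 1), min(a[1], b[1]))   # [0, 0.25]
    or_indep = (1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1]))
    or_frechet = (max(a[0], b[0]), min(1.0, a[1] + b[1]))        # [0.2, 0.55]
    print(and_indep, and_frechet, or_indep, or_frechet)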
The probability that an uncertain number represented by a p-box D is less than zero is the interval Pr( D < 0) = [ F (0), F̅ (0)], where F̅ (0) is the left bound of the probability box D and F (0) is its right bound, both evaluated at zero. Two uncertain numbers represented by probability boxes may then be compared for numerical magnitude with the following encodings: Thus the probability that A is less than B is the same as the probability that their difference is less than zero, and this probability can be said to be the value of the expression A < B . Like arithmetic and logical operations, these magnitude comparisons generally depend on the stochastic dependence between A and B , and the subtraction in the encoding should reflect that dependence. If their dependence is unknown, the difference can be computed without making any assumption using the Fréchet operation. Some analysts [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] use sampling-based approaches to computing probability bounds, including Monte Carlo simulation , Latin hypercube methods or importance sampling . These approaches cannot assure mathematical rigor in the result because such simulation methods are approximations, although their performance can generally be improved simply by increasing the number of replications in the simulation. Thus, unlike the analytical theorems or methods based on mathematical programming, sampling-based calculations usually cannot produce verified computations . However, sampling-based methods can be very useful in addressing a variety of problems which are computationally difficult to solve analytically or even to rigorously bound. One important example is the use of Cauchy-deviate sampling to avoid the curse of dimensionality in propagating interval uncertainty through high-dimensional problems. [ 21 ] PBA belongs to a class of methods that use imprecise probabilities to simultaneously represent aleatoric and epistemic uncertainties . PBA is a generalization of both interval analysis and probabilistic convolution such as is commonly implemented with Monte Carlo simulation . PBA is also closely related to robust Bayes analysis , which is sometimes called Bayesian sensitivity analysis . PBA is an alternative to second-order Monte Carlo simulation . P-boxes and probability bounds analysis have been used in many applications spanning many disciplines in engineering and environmental science, including:
https://en.wikipedia.org/wiki/Probability_bounds_analysis
In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current (i.e. the probability current density) is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation. The concept of probability current is also used outside of quantum mechanics, when dealing with probability density functions that change over time, for instance in Brownian motion and the Fokker–Planck equation. [ 1 ] The relativistic equivalent of the probability current is known as the probability four-current. In non-relativistic quantum mechanics, the probability current j of the wave function Ψ of a particle of mass m in one dimension is defined as [ 2 ] {\displaystyle j={\frac {\hbar }{2mi}}\left(\Psi ^{*}{\frac {\partial \Psi }{\partial x}}-\Psi {\frac {\partial \Psi ^{*}}{\partial x}}\right)={\frac {\hbar }{m}}\Re \left\{\Psi ^{*}{\frac {1}{i}}{\frac {\partial \Psi }{\partial x}}\right\}={\frac {\hbar }{m}}\Im \left\{\Psi ^{*}{\frac {\partial \Psi }{\partial x}}\right\},} where ℏ is the reduced Planck constant and Ψ* is the complex conjugate of the wave function. Note that the probability current is proportional to a Wronskian W(Ψ, Ψ*). In three dimensions, this generalizes to {\displaystyle \mathbf {j} ={\frac {\hbar }{2mi}}\left(\Psi ^{*}\mathbf {\nabla } \Psi -\Psi \mathbf {\nabla } \Psi ^{*}\right)={\frac {\hbar }{m}}\Re \left\{\Psi ^{*}{\frac {\nabla }{i}}\Psi \right\}={\frac {\hbar }{m}}\Im \left\{\Psi ^{*}\nabla \Psi \right\}\,,} where ∇ denotes the del or gradient operator. This can be simplified in terms of the kinetic momentum operator, {\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla } to obtain {\displaystyle \mathbf {j} ={\frac {1}{2m}}\left(\Psi ^{*}\mathbf {\hat {p}} \Psi +\Psi \left(\mathbf {\hat {p}} \Psi \right)^{*}\right)\,.} These definitions use the position basis (i.e. for a wavefunction in position space), but a momentum-space formulation is also possible. In fact, one can write the probability current operator as {\displaystyle \mathbf {\hat {j}} (\mathbf {r} )={\frac {\mathbf {\hat {p}} |\mathbf {r} \rangle \langle \mathbf {r} |+|\mathbf {r} \rangle \langle \mathbf {r} |\mathbf {\hat {p}} }{2m}}} which does not depend on a particular choice of basis. The probability current is then the expectation of this operator, {\displaystyle \mathbf {j} (\mathbf {r} ,t)=\langle \Psi (t)|{\hat {\mathbf {j} }}(\mathbf {r} )|\Psi (t)\rangle .} The above definition should be modified for a system in an external electromagnetic field.
In SI units, the probability current of a charged particle of mass m and electric charge q includes a term due to the interaction with the electromagnetic field; [ 3 ] {\displaystyle \mathbf {j} ={\frac {1}{2m}}\left[\left(\Psi ^{*}\mathbf {\hat {p}} \Psi -\Psi \mathbf {\hat {p}} \Psi ^{*}\right)-2q\mathbf {A} |\Psi |^{2}\right]} where A = A(r, t) is the magnetic vector potential. The term qA has dimensions of momentum. Note that {\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla } used here is the canonical momentum and is not gauge invariant, unlike the kinetic momentum operator {\displaystyle \mathbf {\hat {P}} =-i\hbar \nabla -q\mathbf {A} }. In Gaussian units: {\displaystyle \mathbf {j} ={\frac {1}{2m}}\left[\left(\Psi ^{*}\mathbf {\hat {p}} \Psi -\Psi \mathbf {\hat {p}} \Psi ^{*}\right)-2{\frac {q}{c}}\mathbf {A} |\Psi |^{2}\right]} where c is the speed of light. If the particle has spin, it has a corresponding magnetic moment, so an extra term needs to be added incorporating the spin interaction with the electromagnetic field. According to Landau–Lifschitz's Course of Theoretical Physics the electric current density is in Gaussian units: [ 4 ] {\displaystyle \mathbf {j} _{e}={\frac {q}{2m}}\left[\left(\Psi ^{*}\mathbf {\hat {p}} \Psi -\Psi \mathbf {\hat {p}} \Psi ^{*}\right)-{\frac {2q}{c}}\mathbf {A} |\Psi |^{2}\right]+{\frac {\mu _{S}c}{s\hbar }}\nabla \times (\Psi ^{*}\mathbf {S} \Psi )} And in SI units: {\displaystyle \mathbf {j} _{e}={\frac {q}{2m}}\left[\left(\Psi ^{*}\mathbf {\hat {p}} \Psi -\Psi \mathbf {\hat {p}} \Psi ^{*}\right)-2q\mathbf {A} |\Psi |^{2}\right]+{\frac {\mu _{S}}{s\hbar }}\nabla \times (\Psi ^{*}\mathbf {S} \Psi )} Hence the probability current (density) is in SI units: {\displaystyle \mathbf {j} =\mathbf {j} _{e}/q={\frac {1}{2m}}\left[\left(\Psi ^{*}\mathbf {\hat {p}} \Psi -\Psi \mathbf {\hat {p}} \Psi ^{*}\right)-2q\mathbf {A} |\Psi |^{2}\right]+{\frac {\mu _{S}}{qs\hbar }}\nabla \times (\Psi ^{*}\mathbf {S} \Psi )} where S is the spin vector of the particle with corresponding spin magnetic moment μ_S and spin quantum number s. It is doubtful whether this formula is valid for particles with an interior structure. [ citation needed ] The neutron has zero charge but non-zero magnetic moment, so the coefficient μ_S/(qsℏ) would be undefined (except that ∇ × (Ψ*SΨ) would also be zero in this case). For composite particles with a non-zero charge, like the proton, which has spin quantum number s = 1/2 and μ_S = 2.7927·μ_N, or the deuteron (H-2 nucleus), which has s = 1 and μ_S = 0.8574·μ_N, [ 5 ] it is mathematically possible but doubtful. The wave function can also be written in the complex exponential (polar) form: Ψ = R e^{iS/ℏ}, where R and S are real functions of r and t.
Written this way, the probability density is $\rho = \Psi^*\Psi = R^2$ and the probability current is:

$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right) = \frac{\hbar}{2mi}\left(R e^{-iS/\hbar}\,\nabla R e^{iS/\hbar} - R e^{iS/\hbar}\,\nabla R e^{-iS/\hbar}\right) = \frac{\hbar}{2mi}\left[R e^{-iS/\hbar}\left(e^{iS/\hbar}\nabla R + \frac{i}{\hbar} R e^{iS/\hbar}\nabla S\right) - R e^{iS/\hbar}\left(e^{-iS/\hbar}\nabla R - \frac{i}{\hbar} R e^{-iS/\hbar}\nabla S\right)\right].$

The exponentials and $R\nabla R$ terms cancel:

$\mathbf{j} = \frac{\hbar}{2mi}\left[\frac{i}{\hbar} R^2 \nabla S + \frac{i}{\hbar} R^2 \nabla S\right].$

Finally, combining and cancelling the constants, and replacing $R^2$ with $\rho$,

$\mathbf{j} = \rho \frac{\nabla S}{m}.$

Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Compare with the familiar formula for the mass flux in hydrodynamics:

$\mathbf{j} = \rho\mathbf{v},$

where $\rho$ is the mass density of the fluid and $\mathbf{v}$ is its velocity (also the group velocity of the wave). In the classical limit, we can associate the velocity with $\frac{\nabla S}{m}$, which is the same as equating $\nabla S$ with the classical momentum $\mathbf{p} = m\mathbf{v}$; however, this does not represent a physical velocity or momentum at a point, since simultaneous measurement of position and velocity violates the uncertainty principle . This interpretation fits with Hamilton–Jacobi theory , in which the momentum in Cartesian coordinates is given by $\mathbf{p} = \nabla S$, where $S$ is Hamilton's principal function . The de Broglie–Bohm theory , an interpretation of quantum mechanics, equates the velocity with $\frac{\nabla S}{m}$ in general (not only in the classical limit), so it is always well defined.
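A minimal sketch, assuming $\hbar = m = 1$, can confirm numerically that the direct formula and the polar form $\mathbf{j} = \rho\nabla S/m$ agree. The amplitude $R$ and phase $S$ below are arbitrary smooth test functions, not taken from any particular physical system.

```python
import numpy as np

# Sketch: check that the direct current Im{psi* dpsi/dx} equals the polar
# form rho * dS/dx (with hbar = m = 1). R and S are arbitrary test choices.
x = np.linspace(-5, 5, 4001)
R = np.exp(-x**2 / 2)               # amplitude
S = 0.3 * x**3 + 2.0 * x            # phase (S/hbar with hbar = 1)
psi = R * np.exp(1j * S)

j_direct = np.imag(np.conj(psi) * np.gradient(psi, x))
j_polar = R**2 * np.gradient(S, x)  # rho * grad(S) / m with m = 1

assert np.allclose(j_direct, j_polar, atol=1e-3)
```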
The definition of probability current and Schrödinger's equation can be used to derive the continuity equation , which has exactly the same form as those of hydrodynamics and electromagnetism . [ 6 ] For some wave function $\Psi$, let

$\rho(\mathbf{r}, t) = |\Psi|^2 = \Psi^*(\mathbf{r}, t)\Psi(\mathbf{r}, t)$

be the probability density (probability per unit volume, * denotes the complex conjugate ). Then,

$\frac{d}{dt}\int_{\mathcal{V}} dV\,\rho = \int_{\mathcal{V}} dV \left(\frac{\partial\psi}{\partial t}\psi^* + \psi\frac{\partial\psi^*}{\partial t}\right) = \int_{\mathcal{V}} dV \left[-\frac{i}{\hbar}\left(-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi\right)\psi^* + \frac{i}{\hbar}\left(-\frac{\hbar^2}{2m}\nabla^2\psi^* + V\psi^*\right)\psi\right] = \int_{\mathcal{V}} dV\,\frac{i\hbar}{2m}\left[(\nabla^2\psi)\psi^* - \psi(\nabla^2\psi^*)\right] = \int_{\mathcal{V}} dV\,\nabla\cdot\left(\frac{i\hbar}{2m}(\psi^*\nabla\psi - \psi\nabla\psi^*)\right) = \int_{\mathcal{S}} d\mathbf{a}\cdot\left(\frac{i\hbar}{2m}(\psi^*\nabla\psi - \psi\nabla\psi^*)\right),$

where $\mathcal{V}$ is any volume and $\mathcal{S}$ is the boundary of $\mathcal{V}$. This is the conservation law for probability in quantum mechanics.

The integral form is stated as:

$\int_V \left(\frac{\partial|\Psi|^2}{\partial t}\right) dV + \int_V \left(\nabla\cdot\mathbf{j}\right) dV = 0,$

where

$\mathbf{j} = \frac{1}{2m}\left(\Psi^*\hat{\mathbf{p}}\Psi - \Psi\hat{\mathbf{p}}\Psi^*\right) = -\frac{i\hbar}{2m}(\psi^*\nabla\psi - \psi\nabla\psi^*) = \frac{\hbar}{m}\operatorname{Im}(\psi^*\nabla\psi)$

is the probability current or probability flux (flow per unit area). Here, equating the terms inside the integral gives the continuity equation for probability:

$\frac{\partial}{\partial t}\rho(\mathbf{r}, t) + \nabla\cdot\mathbf{j} = 0,$

and the integral equation can also be restated using the divergence theorem as:

$\frac{d}{dt}\int_V |\Psi|^2\,dV + \oint_S \mathbf{j}\cdot d\mathbf{a} = 0.$

In particular, if $\Psi$ is a wavefunction describing a single particle, the integral in the first term of the preceding equation, sans time derivative, is the probability of obtaining a value within $V$ when the position of the particle is measured. The second term is then the rate at which probability is flowing out of the volume $V$. Altogether the equation states that the time derivative of the probability of the particle being measured in $V$ is equal to the rate at which probability flows into $V$.

By taking the limit of the volume integral to include all regions of space, a well-behaved wavefunction that goes to zero at infinity in the surface-integral term implies that the time derivative of the total probability is zero, i.e. the normalization condition is conserved. [ 7 ] This result is in agreement with the unitary nature of time-evolution operators, which preserve the length of a vector by definition.
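The continuity equation can be checked numerically. The sketch below, assuming $\hbar = m = 1$ and a free particle ($V = 0$), evolves a Gaussian packet exactly in momentum space and verifies that $\partial_t\rho + \partial_x j \approx 0$; the grid and packet parameters are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: verify d(rho)/dt + d(j)/dx = 0 for a free particle (V = 0),
# using exact momentum-space evolution. hbar = m = 1 by assumption.
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

fft, ifft = np.fft.fft, np.fft.ifft
ddx = lambda f: ifft(1j * k * fft(f))                    # spectral d/dx
evolve = lambda psi0, t: ifft(fft(psi0) * np.exp(-1j * k**2 * t / 2))

psi0 = np.exp(-x**2) * np.exp(2j * x)                    # packet, momentum 2
t, dt = 0.5, 1e-5

psi = evolve(psi0, t)
rho_dot = (np.abs(evolve(psi0, t + dt))**2
           - np.abs(evolve(psi0, t - dt))**2) / (2 * dt)
j = np.imag(np.conj(psi) * ddx(psi))                     # j = Im{psi* dpsi/dx}
div_j = np.real(ddx(j))

print(np.max(np.abs(rho_dot + div_j)))                   # ~ 0: continuity holds
```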
The probability (4-)current arises from Noether's theorem as applied to the Klein–Gordon Lagrangian density

$\mathcal{L} = \partial_\mu\phi^*\,\partial^\mu\phi + V(\phi^*\phi)$

of the complex scalar field $\phi: \mathbb{R}^{n+1} \to \mathbb{C}$. This Lagrangian is invariant under the symmetry transformation

$\phi \mapsto \phi' = \phi\,e^{i\alpha}.$

Defining $\delta\phi = \phi' - \phi$, we find the Noether current

$j^\mu := \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\left.\frac{d(\delta\phi)}{d\alpha}\right|_{\alpha=0} + \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi^*)}\left.\frac{d(\delta\phi^*)}{d\alpha}\right|_{\alpha=0} = i\,\phi\,(\partial^\mu\phi^*) - i\,\phi^*\,(\partial^\mu\phi),$

which satisfies the continuity equation $\partial_\mu j^\mu = 0$. Here the derivatives of $\delta\phi$ with respect to $\alpha$ play the role of the generator $\mathbf{Q}_r = \frac{d(\delta\mathbf{q})}{d\alpha_r}$ of the symmetry, in the case of a single parameter $\alpha$. However, note that the analog of the probability density is now not $\phi\phi^*$ but rather $\phi^*\partial_t\phi - \phi\partial_t\phi^*$. As this quantity can be negative, we must interpret it as a charge density, with an associated current density and 4-current .

In regions where a step potential or potential barrier occurs, the probability current is related to the transmission and reflection coefficients, respectively $T$ and $R$; they measure the extent to which particles reflect from the potential barrier or are transmitted through it. Both satisfy

$T + R = 1,$

where $T$ and $R$ can be defined by

$T = \frac{|\mathbf{j}_{\mathrm{trans}}|}{|\mathbf{j}_{\mathrm{inc}}|}, \quad R = \frac{|\mathbf{j}_{\mathrm{ref}}|}{|\mathbf{j}_{\mathrm{inc}}|},$

where $\mathbf{j}_{\mathrm{inc}}$, $\mathbf{j}_{\mathrm{ref}}$ and $\mathbf{j}_{\mathrm{trans}}$ are the incident, reflected and transmitted probability currents respectively, and the vertical bars indicate the magnitudes of the current vectors. The relation between $T$ and $R$ can be obtained from probability conservation:

$\mathbf{j}_{\mathrm{trans}} + \mathbf{j}_{\mathrm{ref}} = \mathbf{j}_{\mathrm{inc}}.$

In terms of a unit vector $\mathbf{n}$ normal to the barrier, these are equivalently

$T = \left|\frac{\mathbf{j}_{\mathrm{trans}}\cdot\mathbf{n}}{\mathbf{j}_{\mathrm{inc}}\cdot\mathbf{n}}\right|, \qquad R = \left|\frac{\mathbf{j}_{\mathrm{ref}}\cdot\mathbf{n}}{\mathbf{j}_{\mathrm{inc}}\cdot\mathbf{n}}\right|,$

where the absolute values are required to prevent $T$ and $R$ from being negative.
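As a concrete instance, a minimal sketch (assuming $\hbar = m = 1$ and energy $E > V_0$) computes $T$ and $R$ for a potential step $V(x) = V_0$ for $x > 0$ from the probability currents of the plane-wave solutions on each side.

```python
import numpy as np

# Sketch: transmission and reflection at a potential step V(x) = V0 (x > 0),
# computed from probability currents. Assumes hbar = m = 1 and E > V0.
E, V0 = 2.0, 1.0
k1 = np.sqrt(2 * E)            # wavenumber on the incident side
k2 = np.sqrt(2 * (E - V0))     # wavenumber on the transmitted side

# Matching psi and psi' at x = 0 gives the standard amplitudes:
r = (k1 - k2) / (k1 + k2)      # reflected amplitude
t = 2 * k1 / (k1 + k2)         # transmitted amplitude

# A plane wave A exp(ikx) carries current j = k |A|^2, so:
R = (k1 * r**2) / k1           # |j_ref| / |j_inc|
T = (k2 * t**2) / k1           # |j_trans| / |j_inc|

print(f"R = {R:.4f}, T = {T:.4f}, T + R = {T + R:.4f}")  # T + R = 1
```

Note that $T \ne |t|^2$ here: the ratio of currents carries the extra factor $k_2/k_1$, which is exactly what makes $T + R = 1$ come out.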
For a plane wave propagating in space:

$\Psi(\mathbf{r}, t) = A e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$

the probability density is constant everywhere,

$\rho(\mathbf{r}, t) = |A|^2 \quad\Rightarrow\quad \frac{\partial|\Psi|^2}{\partial t} = 0$

(that is, plane waves are stationary states ), but the probability current is nonzero – the square of the absolute amplitude of the wave times the particle's speed:

$\mathbf{j}(\mathbf{r}, t) = |A|^2 \frac{\hbar\mathbf{k}}{m} = \rho\frac{\mathbf{p}}{m} = \rho\mathbf{v},$

illustrating that the particle may be in motion even if its spatial probability density has no explicit time dependence.

For a particle in a box , in one spatial dimension and of length $L$, confined to the region $0 < x < L$, the energy eigenstates are

$\Psi_n = \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi}{L}x\right)$

and zero elsewhere. The associated probability currents are

$j_n = \frac{i\hbar}{2m}\left(\Psi_n^*\frac{\partial\Psi_n}{\partial x} - \Psi_n\frac{\partial\Psi_n^*}{\partial x}\right) = 0,$

since $\Psi_n = \Psi_n^*$.

For a particle in one dimension on $\ell^2(\mathbb{Z})$, we have the Hamiltonian $H = -\Delta + V$, where $-\Delta \equiv 2I - S - S^*$ is the discrete Laplacian, with $S$ being the right shift operator on $\ell^2(\mathbb{Z})$. Then the probability current is defined as

$j \equiv 2\,\Im\{\bar{\Psi}\, i v\, \Psi\},$

with $v$ the velocity operator, equal to $v \equiv -i[X, H]$, where $X$ is the position operator on $\ell^2(\mathbb{Z})$. Since $V$ is usually a multiplication operator on $\ell^2(\mathbb{Z})$, we can safely write

$-i[X, H] = -i[X, -\Delta] = -i[X, -S - S^*] = iS - iS^*.$

As a result, we find:

$j(x) \equiv 2\,\Im\{\bar{\Psi}(x)\, i v\, \Psi(x)\} = 2\,\Im\{\bar{\Psi}(x)\left((-S\Psi)(x) + (S^*\Psi)(x)\right)\} = 2\,\Im\{\bar{\Psi}(x)\left(-\Psi(x-1) + \Psi(x+1)\right)\}.$
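The lattice formula is easy to evaluate directly. The sketch below, a hypothetical example on a finite chunk of the integer lattice, uses a Bloch wave $\Psi(x) = e^{ikx}/\sqrt{N}$ (an arbitrary test choice) for which the current reduces to a constant.

```python
import numpy as np

# Sketch: the discrete probability current on a truncated lattice,
# j(x) = 2 Im{ conj(psi)(x) * (psi(x+1) - psi(x-1)) }.
N, kk = 64, 0.7
sites = np.arange(N)
psi = np.exp(1j * kk * sites) / np.sqrt(N)     # Bloch-wave test state

# Interior sites only; the truncation breaks the formula at the two ends.
j = 2 * np.imag(np.conj(psi[1:-1]) * (psi[2:] - psi[:-2]))

# For a Bloch wave the current is the constant 4 sin(k) |psi|^2.
assert np.allclose(j, 4 * np.sin(kk) * (1 / N))
```

The $\sin(k)$ factor is the lattice group velocity, the discrete analogue of $\hbar k/m$ in the plane-wave example above.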
https://en.wikipedia.org/wiki/Probability_current
In probability theory , a probability density function ( PDF ), density function , or density of an absolutely continuous random variable , is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. [ 2 ] [ 3 ] Probability density is the probability per unit length; in other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other. More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values , as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range – that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1.

The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values, or it may refer to the cumulative distribution function , or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. [ 4 ] In general, though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.

Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous one. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.

In this example, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour −1 ). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour −1 . This quantity 2 hour −1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour −1 ) dt . This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour −1 )×(1 nanosecond) ≈ 6 × 10 −13 (using the unit conversion 3.6 × 10 12 nanoseconds = 1 hour). There is a probability density function f with f (5 hours) = 2 hour −1 . The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
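The limiting ratio in this example is easy to reproduce numerically. The sketch below uses a hypothetical lifetime distribution (exponential with rate 0.2 per hour, chosen arbitrarily and not matching the bacteria numbers above) to show that P(5 < X < 5 + w)/w approaches the density f(5) as the window w shrinks.

```python
import numpy as np

# Sketch: (probability of dying in a window) / (window length) converges
# to the density as the window shrinks. Exponential lifetime assumed.
lam = 0.2                                # hypothetical rate, per hour
F = lambda t: 1 - np.exp(-lam * t)       # CDF
f = lambda t: lam * np.exp(-lam * t)     # PDF

for w in [0.01, 0.001, 0.0001]:
    print(f"w = {w:7}: P/w = {(F(5 + w) - F(5)) / w:.6f}")
print(f"density f(5) = {f(5):.6f}")
```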
A probability density function is most commonly associated with absolutely continuous univariate distributions . A random variable $X$ has density $f_X$, where $f_X$ is a non-negative Lebesgue-integrable function, if:

$\Pr[a \le X \le b] = \int_a^b f_X(x)\,dx.$

Hence, if $F_X$ is the cumulative distribution function of $X$, then:

$F_X(x) = \int_{-\infty}^x f_X(u)\,du,$

and (if $f_X$ is continuous at $x$)

$f_X(x) = \frac{d}{dx}F_X(x).$

Intuitively, one can think of $f_X(x)\,dx$ as being the probability of $X$ falling within the infinitesimal interval $[x, x + dx]$.

(This definition may be extended to any probability distribution using the measure-theoretic definition of probability .) A random variable $X$ with values in a measurable space $(\mathcal{X}, \mathcal{A})$ (usually $\mathbb{R}^n$ with the Borel sets as measurable subsets) has as probability distribution the pushforward measure $X_*P$ on $(\mathcal{X}, \mathcal{A})$: the density of $X$ with respect to a reference measure $\mu$ on $(\mathcal{X}, \mathcal{A})$ is the Radon–Nikodym derivative :

$f = \frac{dX_*P}{d\mu}.$

That is, $f$ is any measurable function with the property that

$\Pr[X \in A] = \int_{X^{-1}A} dP = \int_A f\,d\mu$

for any measurable set $A \in \mathcal{A}$.

In the continuous univariate case above , the reference measure is the Lebesgue measure . The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers , or some subset thereof). It is not possible to define a density with reference to an arbitrary measure (e.g. one cannot choose the counting measure as a reference for a continuous random variable). Furthermore, when the density does exist, it is almost unique, meaning that any two such densities coincide almost everywhere .

Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f ( x ) = 2 for 0 ≤ x ≤ 1/2 and f ( x ) = 0 elsewhere. The standard normal distribution has probability density

$f(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}.$

If a random variable $X$ is given and its distribution admits a probability density function $f$, then the expected value of $X$ (if the expected value exists) can be calculated as

$\operatorname{E}[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx.$
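The PDF-as-derivative-of-the-CDF relationship is simple to check numerically. A minimal sketch for the standard normal (whose CDF can be written with the error function):

```python
import math

# Sketch: for the standard normal, f(x) = F'(x); check with a central
# difference on the CDF.
F = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))     # standard normal CDF
f = lambda x: math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

x, h = 0.7, 1e-5
deriv = (F(x + h) - F(x - h)) / (2 * h)
print(deriv, f(x))   # agree to ~1e-10
```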
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution , even though it has no discrete component, i.e., it does not assign positive probability to any individual point.

A distribution has a density function if its cumulative distribution function $F(x)$ is absolutely continuous . In this case, $F$ is almost everywhere differentiable , and its derivative can be used as a probability density:

$\frac{d}{dx}F(x) = f(x).$

If a probability distribution admits a density, then the probability of every one-point set { a } is zero; the same holds for finite and countable sets. Two probability densities $f$ and $g$ represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero .

In the field of statistical physics , a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following: if $dt$ is an infinitely small number, the probability that $X$ is included within the interval $(t, t + dt)$ is equal to $f(t)\,dt$, or:

$\Pr(t < X < t + dt) = f(t)\,dt.$

It is possible to represent certain discrete random variables, as well as random variables involving both a continuous and a discrete part, with a generalized probability density function using the Dirac delta function . (This is not possible with a probability density function in the sense defined above; it may be done with a distribution .) For example, consider a binary discrete random variable having the Rademacher distribution – that is, taking −1 or 1 for values, with probability 1⁄2 each. The density of probability associated with this variable is:

$f(t) = \frac{1}{2}(\delta(t+1) + \delta(t-1)).$

More generally, if a discrete variable can take $n$ different values among real numbers, then the associated probability density function is:

$f(t) = \sum_{i=1}^n p_i\,\delta(t - x_i),$

where $x_1, \ldots, x_n$ are the discrete values accessible to the variable and $p_1, \ldots, p_n$ are the probabilities associated with these values.

This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean , variance , and kurtosis ), starting from the formulas given for a continuous distribution of the probability.

It is common for probability density functions (and probability mass functions ) to be parametrized – that is, to be characterized by unspecified parameters . For example, the normal distribution is parametrized in terms of the mean and the variance , denoted by $\mu$ and $\sigma^2$ respectively, giving the family of densities

$f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}.$
Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density – the probability of something in the domain occurring – equals 1). This normalization factor is outside the kernel of the distribution. Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family means simply substituting the new parameter values into the formula in place of the old ones.

For continuous random variables $X_1, \ldots, X_n$, it is also possible to define a probability density function associated to the set as a whole, often called the joint probability density function . This density function is defined as a function of the $n$ variables, such that, for any domain $D$ in the $n$-dimensional space of the values of the variables $X_1, \ldots, X_n$, the probability that a realisation of the set of variables falls inside the domain $D$ is

$\Pr(X_1, \ldots, X_n \in D) = \int_D f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)\,dx_1\cdots dx_n.$

If $F(x_1, \ldots, x_n) = \Pr(X_1 \le x_1, \ldots, X_n \le x_n)$ is the cumulative distribution function of the vector $(X_1, \ldots, X_n)$, then the joint probability density function can be computed as a partial derivative

$f(x) = \left.\frac{\partial^n F}{\partial x_1 \cdots \partial x_n}\right|_x.$

For $i = 1, 2, \ldots, n$, let $f_{X_i}(x_i)$ be the probability density function associated with variable $X_i$ alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables $X_1, \ldots, X_n$ by integrating over all values of the other $n - 1$ variables:

$f_{X_i}(x_i) = \int f(x_1, \ldots, x_n)\,dx_1 \cdots dx_{i-1}\,dx_{i+1} \cdots dx_n.$

Continuous random variables $X_1, \ldots, X_n$ admitting a joint density are all independent from each other if

$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = f_{X_1}(x_1)\cdots f_{X_n}(x_n).$

If the joint probability density function of a vector of $n$ random variables can be factored into a product of $n$ functions of one variable,

$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = f_1(x_1)\cdots f_n(x_n)$

(where each $f_i$ is not necessarily a density), then the $n$ variables in the set are all independent from each other, and the marginal probability density function of each of them is given by

$f_{X_i}(x_i) = \frac{f_i(x_i)}{\int f_i(x)\,dx}.$
This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call $\vec{R}$ a 2-dimensional random vector of coordinates $(X, Y)$: the probability to obtain $\vec{R}$ in the quarter plane of positive $x$ and $y$ is

$\Pr(X > 0, Y > 0) = \int_0^\infty \int_0^\infty f_{X,Y}(x, y)\,dx\,dy.$

If the probability density function of a random variable (or vector) $X$ is given as $f_X(x)$, it is possible (but often not necessary; see below) to calculate the probability density function of some variable $Y = g(X)$. This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape $f_{g(X)} = f_Y$ using a known (for instance, uniform) random number generator.

It is tempting to think that in order to find the expected value $E(g(X))$, one must first find the probability density $f_{g(X)}$ of the new random variable $Y = g(X)$. However, rather than computing

$\operatorname{E}(g(X)) = \int_{-\infty}^{\infty} y f_{g(X)}(y)\,dy,$

one may find instead

$\operatorname{E}(g(X)) = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx.$

The values of the two integrals are the same in all cases in which both $X$ and $g(X)$ actually have probability density functions. It is not necessary that $g$ be a one-to-one function . In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician .

Let $g: \mathbb{R} \to \mathbb{R}$ be a monotonic function ; then the resulting density function is [ 5 ]

$f_Y(y) = f_X\big(g^{-1}(y)\big)\left|\frac{d}{dy}\big(g^{-1}(y)\big)\right|.$

Here $g^{-1}$ denotes the inverse function . This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,

$|f_Y(y)\,dy| = |f_X(x)\,dx|,$

or

$f_Y(y) = \left|\frac{dx}{dy}\right| f_X(x) = \left|\frac{d}{dy}(x)\right| f_X(x) = \left|\frac{d}{dy}\big(g^{-1}(y)\big)\right| f_X\big(g^{-1}(y)\big) = \left|\left(g^{-1}\right)'(y)\right| \cdot f_X\big(g^{-1}(y)\big).$

For functions that are not monotonic, the probability density function for $y$ is

$\sum_{k=1}^{n(y)} \left|\frac{d}{dy} g_k^{-1}(y)\right| \cdot f_X\big(g_k^{-1}(y)\big),$

where $n(y)$ is the number of solutions in $x$ of the equation $g(x) = y$, and the $g_k^{-1}(y)$ are these solutions.
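The monotonic change-of-variables formula can be checked by simulation. In the sketch below, the map $g(X) = e^X$ applied to a standard normal $X$ is a hypothetical choice; the formula then yields the lognormal density, which is compared against a histogram of transformed samples.

```python
import numpy as np

# Sketch: verify f_Y(y) = f_X(g^{-1}(y)) |d g^{-1}/dy| by simulation,
# with X standard normal and Y = exp(X) (an arbitrary monotonic choice).
rng = np.random.default_rng(0)
samples = np.exp(rng.standard_normal(1_000_000))

f_X = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f_Y = lambda y: f_X(np.log(y)) / y      # |d(ln y)/dy| = 1/y

counts, edges = np.histogram(samples, bins=200, range=(0.1, 5.0))
centers = (edges[:-1] + edges[1:]) / 2
est = counts / (len(samples) * (edges[1] - edges[0]))   # empirical density

print(np.max(np.abs(est - f_Y(centers))))               # small sampling error
```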
Suppose $\mathbf{x}$ is an $n$-dimensional random variable with joint density $f$. If $\mathbf{y} = G(\mathbf{x})$, where $G$ is a bijective , differentiable function , then $\mathbf{y}$ has density $p_Y$:

$p_Y(\mathbf{y}) = f\big(G^{-1}(\mathbf{y})\big)\left|\det\left[\left.\frac{dG^{-1}(\mathbf{z})}{d\mathbf{z}}\right|_{\mathbf{z}=\mathbf{y}}\right]\right|,$

with the differential regarded as the Jacobian of the inverse of $G(\cdot)$, evaluated at $\mathbf{y}$. [ 6 ]

For example, in the 2-dimensional case $\mathbf{x} = (x_1, x_2)$, suppose the transform $G$ is given as $y_1 = G_1(x_1, x_2)$, $y_2 = G_2(x_1, x_2)$ with inverses $x_1 = G_1^{-1}(y_1, y_2)$, $x_2 = G_2^{-1}(y_1, y_2)$. The joint distribution for $\mathbf{y} = (y_1, y_2)$ has density [ 7 ]

$p_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}\big(G_1^{-1}(y_1, y_2), G_2^{-1}(y_1, y_2)\big)\left|\frac{\partial G_1^{-1}}{\partial y_1}\frac{\partial G_2^{-1}}{\partial y_2} - \frac{\partial G_1^{-1}}{\partial y_2}\frac{\partial G_2^{-1}}{\partial y_1}\right|.$

Let $V: \mathbb{R}^n \to \mathbb{R}$ be a differentiable function and let $X$ be a random vector taking values in $\mathbb{R}^n$; let $f_X$ be the probability density function of $X$ and let $\delta(\cdot)$ be the Dirac delta function. It is possible to use the formulas above to determine $f_Y$, the probability density function of $Y = V(X)$, which will be given by

$f_Y(y) = \int_{\mathbb{R}^n} f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big)\,d\mathbf{x}.$

This result leads to the law of the unconscious statistician :

$\operatorname{E}_Y[Y] = \int_{\mathbb{R}} y f_Y(y)\,dy = \int_{\mathbb{R}} y \int_{\mathbb{R}^n} f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big)\,d\mathbf{x}\,dy = \int_{\mathbb{R}^n} \int_{\mathbb{R}} y f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big)\,dy\,d\mathbf{x} = \int_{\mathbb{R}^n} V(\mathbf{x}) f_X(\mathbf{x})\,d\mathbf{x} = \operatorname{E}_X[V(X)].$

Proof: Let $Z$ be a collapsed random variable with probability density function $p_Z(z) = \delta(z)$ (i.e., a constant equal to zero). Let the random vector $\tilde{X}$ and the transform $H$ be defined as

$H(Z, X) = \begin{bmatrix} Z + V(X) \\ X \end{bmatrix} = \begin{bmatrix} Y \\ \tilde{X} \end{bmatrix}.$
It is clear that $H$ is a bijective mapping, and the Jacobian of $H^{-1}$ is given by

$\frac{dH^{-1}(y, \tilde{\mathbf{x}})}{dy\,d\tilde{\mathbf{x}}} = \begin{bmatrix} 1 & -\frac{dV(\tilde{\mathbf{x}})}{d\tilde{\mathbf{x}}} \\ \mathbf{0}_{n\times 1} & \mathbf{I}_{n\times n} \end{bmatrix},$

which is an upper triangular matrix with ones on the main diagonal; therefore its determinant is 1. Applying the change-of-variable theorem from the previous section, we obtain

$f_{Y,X}(y, x) = f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big),$

which, if marginalized over $x$, leads to the desired probability density function.

The probability density function of the sum of two independent random variables $U$ and $V$, each of which has a probability density function, is the convolution of their separate density functions:

$f_{U+V}(x) = \int_{-\infty}^{\infty} f_U(y) f_V(x - y)\,dy = (f_U * f_V)(x).$

It is possible to generalize the previous relation to a sum of $N$ independent random variables $U_1, \ldots, U_N$, each having a density:

$f_{U_1+\cdots+U_N}(x) = (f_{U_1} * \cdots * f_{U_N})(x).$

This can be derived from a two-way change of variables involving $Y = U + V$ and $Z = V$, similarly to the example below for the quotient of independent random variables.

Given two independent random variables $U$ and $V$, each of which has a probability density function, the density of the product $Y = UV$ and quotient $Y = U/V$ can be computed by a change of variables. To compute the quotient $Y = U/V$ of two independent random variables $U$ and $V$, define the following transformation:

$Y = U/V, \qquad Z = V.$

Then, the joint density $p(y, z)$ can be computed by a change of variables from $U, V$ to $Y, Z$, and $Y$ can be derived by marginalizing out $Z$ from the joint density. The inverse transformation is

$U = YZ, \qquad V = Z.$

The absolute value of the Jacobian matrix determinant $J(U, V \mid Y, Z)$ of this transformation is:

$\left|\det\begin{bmatrix} \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \end{bmatrix}\right| = \left|\det\begin{bmatrix} z & y \\ 0 & 1 \end{bmatrix}\right| = |z|.$

Thus:

$p(y, z) = p(u, v)\,J(u, v \mid y, z) = p(u)\,p(v)\,J(u, v \mid y, z) = p_U(yz)\,p_V(z)\,|z|.$

And the distribution of $Y$ can be computed by marginalizing out $Z$:

$p(y) = \int_{-\infty}^{\infty} p_U(yz)\,p_V(z)\,|z|\,dz.$

This method crucially requires that the transformation from $U, V$ to $Y, Z$ be bijective . The above transformation meets this because $Z$ can be mapped directly back to $V$, and for a given $V$ the quotient $U/V$ is monotonic . This is similarly the case for the sum $U + V$, difference $U - V$ and product $UV$. Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.
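The convolution formula for sums is straightforward to check on a grid. In this sketch, the choice of two Uniform(0, 1) variables is arbitrary; their sum has the well-known triangular density on [0, 2].

```python
import numpy as np

# Sketch: the density of U + V for independent U, V ~ Uniform(0, 1) is the
# convolution of the two uniform densities (the triangular density on [0, 2]).
dx = 0.001
x = np.arange(0, 1, dx)
f_U = np.ones_like(x)            # Uniform(0,1) density on its support
f_V = np.ones_like(x)

f_sum = np.convolve(f_U, f_V) * dx           # numerical (f_U * f_V)
s = np.arange(len(f_sum)) * dx               # support grid of the sum, [0, 2)

triangle = np.where(s < 1, s, 2 - s)         # exact triangular density
print(np.max(np.abs(f_sum - triangle)))      # small discretization error
```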
Given two standard normal variables $U$ and $V$, the quotient can be computed as follows. First, the variables have the following density functions:

$p(u) = \frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}, \qquad p(v) = \frac{1}{\sqrt{2\pi}}\,e^{-v^2/2}.$

We transform as described above:

$Y = U/V, \qquad Z = V.$

This leads to:

$p(y) = \int_{-\infty}^{\infty} p_U(yz)\,p_V(z)\,|z|\,dz = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}y^2 z^2}\,\frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}z^2}\,|z|\,dz = \int_{-\infty}^{\infty} \frac{1}{2\pi}\,e^{-\frac{1}{2}(y^2+1)z^2}\,|z|\,dz = 2\int_0^{\infty} \frac{1}{2\pi}\,e^{-\frac{1}{2}(y^2+1)z^2}\,z\,dz = \int_0^{\infty} \frac{1}{\pi}\,e^{-(y^2+1)u}\,du \quad (u = \tfrac{1}{2}z^2) = \left.-\frac{1}{\pi(y^2+1)}\,e^{-(y^2+1)u}\right|_{u=0}^{\infty} = \frac{1}{\pi(y^2+1)}.$

This is the density of a standard Cauchy distribution .
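A quick Monte Carlo sketch confirms the result: the empirical density of a ratio of independent standard normals matches the standard Cauchy density.

```python
import numpy as np

# Sketch: Monte Carlo check that the ratio of two independent standard
# normal variables follows the standard Cauchy density 1 / (pi (1 + y^2)).
rng = np.random.default_rng(1)
ratio = rng.standard_normal(1_000_000) / rng.standard_normal(1_000_000)

counts, edges = np.histogram(ratio, bins=200, range=(-5.0, 5.0))
centers = (edges[:-1] + edges[1:]) / 2
est = counts / (len(ratio) * (edges[1] - edges[0]))   # empirical density

cauchy = 1 / (np.pi * (1 + centers**2))
print(np.max(np.abs(est - cauchy)))                   # small sampling error
```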
https://en.wikipedia.org/wiki/Probability_density_function
In probability theory , a probability space or a probability triple $(\Omega, \mathcal{F}, P)$ is a mathematical construct that provides a formal model of a random process or "experiment". For example, one can define a probability space which models the throwing of a die . A probability space consists of three elements: [ 1 ] [ 2 ] a sample space $\Omega$, the set of all possible outcomes; an event space $\mathcal{F}$, which is a set of events, an event being a set of outcomes in the sample space; and a probability function $P$, which assigns to each event in the event space a probability between 0 and 1. In order to provide a model of probability, these elements must satisfy the probability axioms . In the example of the throw of a standard die, the sample space is typically the set {1, 2, 3, 4, 5, 6}, the event space can be the set of all subsets of the sample space, and the probability function maps each event to the number of outcomes in that event divided by 6.

When an experiment is conducted, it results in exactly one outcome $\omega$ from the sample space $\Omega$. All the events in the event space $\mathcal{F}$ that contain the selected outcome $\omega$ are said to "have occurred". The probability function $P$ must be so defined that if the experiment were repeated arbitrarily many times, the number of occurrences of each event as a fraction of the total number of experiments would most likely tend towards the probability assigned to that event.

The Soviet mathematician Andrey Kolmogorov introduced the notion of a probability space and the axioms of probability in the 1930s. In modern probability theory, there are alternative approaches for axiomatization, such as the algebra of random variables .

A probability space is a mathematical triplet $(\Omega, \mathcal{F}, P)$ that presents a model for a particular class of real-world situations. As with other models, its author ultimately defines which elements $\Omega$, $\mathcal{F}$, and $P$ will contain. Not every subset of the sample space $\Omega$ must necessarily be considered an event: some of the subsets are simply not of interest, others cannot be "measured" . This is not so obvious in a case like a coin toss. In a different example, one could consider javelin throw lengths, where the events typically are intervals like "between 60 and 65 meters" and unions of such intervals, but not sets like the "irrational numbers between 60 and 65 meters".

In short, a probability space is a measure space such that the measure of the whole space is equal to one. The expanded definition is the following: a probability space is a triple $(\Omega, \mathcal{F}, P)$ consisting of: a sample space $\Omega$, the set of all possible outcomes; a σ-algebra $\mathcal{F}$ of subsets of $\Omega$, whose elements are called events; and a probability measure $P$ on $\mathcal{F}$ with $P(\Omega) = 1$.

Discrete probability theory needs only at most countable sample spaces $\Omega$. Probabilities can be ascribed to points of $\Omega$ by the probability mass function $p: \Omega \to [0, 1]$ such that $\sum_{\omega\in\Omega} p(\omega) = 1$. All subsets of $\Omega$ can be treated as events (thus, $\mathcal{F} = 2^\Omega$ is the power set ). The probability measure takes the simple form

$P(A) = \sum_{\omega \in A} p(\omega) \quad \text{for all } A \subseteq \Omega. \qquad (⁎)$

The greatest σ-algebra $\mathcal{F} = 2^\Omega$ describes the complete information. In general, a σ-algebra $\mathcal{F} \subseteq 2^\Omega$ corresponds to a finite or countable partition $\Omega = B_1 \cup B_2 \cup \ldots$, the general form of an event $A \in \mathcal{F}$ being $A = B_{k_1} \cup B_{k_2} \cup \ldots$. See also the examples.
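The discrete construction above is small enough to write out in full for the die example. The following sketch enumerates the power set as the event space and assigns P(A) = |A|/6; the helper name `events` is an illustrative choice, not standard terminology.

```python
from itertools import chain, combinations
from fractions import Fraction

# Sketch: the discrete probability space for one throw of a fair die,
# with the power set as event space and P(A) = |A| / 6.
omega = {1, 2, 3, 4, 5, 6}

def events(sample_space):
    """All subsets of the sample space (the power set; here 2^6 = 64 events)."""
    s = list(sample_space)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

P = {A: Fraction(len(A), 6) for A in events(omega)}

print(P[frozenset({2, 4, 6})])   # "an even number is thrown" -> 1/2
print(P[frozenset(omega)])       # P(Omega) = 1
```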
The case $p(\omega) = 0$ is permitted by the definition, but rarely used, since such $\omega$ can safely be excluded from the sample space.

If Ω is uncountable , it may still happen that P({ω}) ≠ 0 for some ω; such ω are called atoms . They form an at most countable (maybe empty ) set, whose probability is the sum of the probabilities of all atoms. If this sum is equal to 1, then all other points can safely be excluded from the sample space, returning us to the discrete case. Otherwise, if the sum of the probabilities of all atoms is between 0 and 1, then the probability space decomposes into a discrete (atomic) part (maybe empty) and a non-atomic part.

If P({ω}) = 0 for all ω ∈ Ω (in this case, Ω must be uncountable, because otherwise P(Ω) = 1 could not be satisfied), then equation ( ⁎ ) fails: the probability of a set is not necessarily the sum over the probabilities of its elements, as summation is only defined for countable numbers of elements. This makes the theory of probability spaces much more technical. A formulation stronger than summation, namely measure theory, is applicable. Initially the probabilities are ascribed to some "generator" sets (see the examples). Then a limiting procedure allows assigning probabilities to sets that are limits of sequences of generator sets, or limits of limits, and so on. All these sets together form the σ-algebra $\mathcal{F}$. For technical details see Carathéodory's extension theorem . Sets belonging to $\mathcal{F}$ are called measurable . In general they are much more complicated than generator sets, but much better behaved than non-measurable sets .

A probability space $(\Omega, \mathcal{F}, P)$ is said to be a complete probability space if for all $B \in \mathcal{F}$ with $P(B) = 0$ and all $A \subset B$ one has $A \in \mathcal{F}$. Often, the study of probability spaces is restricted to complete probability spaces.

If the experiment consists of just one flip of a fair coin , then the outcome is either heads or tails: $\Omega = \{\text{H}, \text{T}\}$. The σ-algebra $\mathcal{F} = 2^\Omega$ contains $2^2 = 4$ events, namely: {H} ("heads"), {T} ("tails"), {} ("neither heads nor tails"), and {H, T} ("either heads or tails"); in other words, $\mathcal{F} = \{\{\}, \{\text{H}\}, \{\text{T}\}, \{\text{H}, \text{T}\}\}$. There is a fifty percent chance of tossing heads and fifty percent for tails, so the probability measure in this example is P({}) = 0, P({H}) = 0.5, P({T}) = 0.5, P({H, T}) = 1.

The fair coin is tossed three times. There are 8 possible outcomes: Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT} (here "HTH", for example, means that the first time the coin landed heads, the second time tails, and the last time heads again). The complete information is described by the σ-algebra $\mathcal{F} = 2^\Omega$ of 2⁸ = 256 events, where each of the events is a subset of Ω.

Alice knows the outcome of the second toss only.
Thus her incomplete information is described by the partition Ω = A₁ ⊔ A₂ = {HHH, HHT, THH, THT} ⊔ {HTH, HTT, TTH, TTT}, where ⊔ is the disjoint union , and the corresponding σ-algebra is $\mathcal{F}_{\text{Alice}} = \{\{\}, A_1, A_2, \Omega\}$. Bryan knows only the total number of tails. His partition contains four parts: Ω = B₀ ⊔ B₁ ⊔ B₂ ⊔ B₃ = {HHH} ⊔ {HHT, HTH, THH} ⊔ {TTH, THT, HTT} ⊔ {TTT}; accordingly, his σ-algebra $\mathcal{F}_{\text{Bryan}}$ contains 2⁴ = 16 events. The two σ-algebras are incomparable : neither $\mathcal{F}_{\text{Alice}} \subseteq \mathcal{F}_{\text{Bryan}}$ nor $\mathcal{F}_{\text{Bryan}} \subseteq \mathcal{F}_{\text{Alice}}$; both are sub-σ-algebras of 2^Ω.

If 100 voters are to be drawn randomly from among all voters in California and asked whom they will vote for governor, then the set of all sequences of 100 Californian voters would be the sample space Ω. We assume that sampling without replacement is used: only sequences of 100 different voters are allowed. For simplicity an ordered sample is considered, that is, a sequence (Alice, Bryan) is different from (Bryan, Alice). We also take for granted that each potential voter knows exactly his/her future choice, that is, he/she does not choose randomly.

Alice knows only whether or not Arnold Schwarzenegger has received at least 60 votes. Her incomplete information is described by the σ-algebra $\mathcal{F}_{\text{Alice}}$ that contains: (1) the set of all sequences in Ω where at least 60 people vote for Schwarzenegger; (2) the set of all sequences where fewer than 60 vote for Schwarzenegger; (3) the whole sample space Ω; and (4) the empty set ∅. Bryan knows the exact number of voters who are going to vote for Schwarzenegger. His incomplete information is described by the corresponding partition Ω = B₀ ⊔ B₁ ⊔ ⋯ ⊔ B₁₀₀, and his σ-algebra $\mathcal{F}_{\text{Bryan}}$ consists of 2¹⁰¹ events. In this case, Alice's σ-algebra is a subset of Bryan's: $\mathcal{F}_{\text{Alice}} \subset \mathcal{F}_{\text{Bryan}}$. Bryan's σ-algebra is in turn a subset of the much larger "complete information" σ-algebra 2^Ω consisting of 2^(n(n−1)⋯(n−99)) events, where n is the number of all potential voters in California.

A number between 0 and 1 is chosen at random, uniformly. Here Ω = [0,1], $\mathcal{F}$ is the σ-algebra of Borel sets on Ω, and P is the Lebesgue measure on [0,1]. In this case, the open intervals of the form (a, b), where 0 < a < b < 1, could be taken as the generator sets. Each such set can be ascribed the probability P((a, b)) = (b − a), which generates the Lebesgue measure on [0,1] and the Borel σ-algebra on Ω.

A fair coin is tossed endlessly. Here one can take Ω = {0,1}^∞, the set of all infinite sequences of the numbers 0 and 1. Cylinder sets {(x₁, x₂, ...) ∈ Ω : x₁ = a₁, ..., xₙ = aₙ} may be used as the generator sets. Each such set describes an event in which the first n tosses have resulted in a fixed sequence (a₁, ..., aₙ), and the rest of the sequence may be arbitrary. Each such event can be naturally given the probability 2^−n.

These two non-atomic examples are closely related: a sequence (x₁, x₂, ...) ∈ {0,1}^∞ leads to the number 2⁻¹x₁ + 2⁻²x₂ + ⋯ ∈ [0,1].
This is not a one-to-one correspondence between {0,1}^∞ and [0,1], however: it is an isomorphism modulo zero , which allows for treating the two probability spaces as two forms of the same probability space. In fact, all non-pathological non-atomic probability spaces are the same in this sense. They are the so-called standard probability spaces . Basic applications of probability spaces are insensitive to standardness. However, non-discrete conditioning is easy and natural on standard probability spaces; otherwise it becomes obscure.

A random variable X is a measurable function X : Ω → S from the sample space Ω to another measurable space S called the state space . If A ⊂ S, the notation Pr(X ∈ A) is a commonly used shorthand for P({ω ∈ Ω : X(ω) ∈ A}).

If Ω is countable , we almost always define $\mathcal{F}$ as the power set of Ω, i.e. $\mathcal{F} = 2^\Omega$, which is trivially a σ-algebra and the biggest one we can create using Ω. We can therefore omit $\mathcal{F}$ and just write (Ω, P) to define the probability space. On the other hand, if Ω is uncountable and we use $\mathcal{F} = 2^\Omega$, we get into trouble defining our probability measure P because $\mathcal{F}$ is too "large", i.e. there will often be sets to which it will be impossible to assign a unique measure. In this case, we have to use a smaller σ-algebra $\mathcal{F}$, for example the Borel algebra of Ω, which is the smallest σ-algebra that makes all open sets measurable.

Kolmogorov's definition of probability spaces gives rise to the natural concept of conditional probability. Every set A with non-zero probability (that is, P(A) > 0) defines another probability measure

$P(B \mid A) = \frac{P(B \cap A)}{P(A)}$

on the space. This is usually pronounced as "the probability of B given A". For any event A such that P(A) > 0, the function Q defined by Q(B) = P(B | A) for all events B is itself a probability measure.

Two events, A and B, are said to be independent if P(A ∩ B) = P(A)P(B). Two random variables, X and Y, are said to be independent if any event defined in terms of X is independent of any event defined in terms of Y. Formally, they generate independent σ-algebras, where two σ-algebras G and H, which are subsets of F, are said to be independent if any element of G is independent of any element of H.

Two events, A and B, are said to be mutually exclusive or disjoint if the occurrence of one implies the non-occurrence of the other, i.e., their intersection is empty. This is a stronger condition than the probability of their intersection being zero. If A and B are disjoint events, then P(A ∪ B) = P(A) + P(B). This extends to a (finite or countably infinite) sequence of events. However, the probability of the union of an uncountable set of events is not the sum of their probabilities. For example, if Z is a normally distributed random variable, then P(Z = x) is 0 for any x, but P(Z ∈ ℝ) = 1. The event A ∩ B is referred to as "A and B", and the event A ∪ B as "A or B".
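Conditional probability and independence are easy to exercise on a tiny discrete space. The sketch below uses the two-toss fair-coin space, a hypothetical example chosen for brevity.

```python
from fractions import Fraction

# Sketch: conditional probability and independence on the two-toss
# fair-coin space Omega = {HH, HT, TH, TT}, with P uniform on outcomes.
omega = {"HH", "HT", "TH", "TT"}
P = lambda A: Fraction(len(A), len(omega))

first_heads = {w for w in omega if w[0] == "H"}
second_heads = {w for w in omega if w[1] == "H"}

# P(B | A) = P(A and B) / P(A)
cond = P(first_heads & second_heads) / P(first_heads)
print(cond)                                                 # 1/2

# Independence: P(A and B) == P(A) P(B)
print(P(first_heads & second_heads)
      == P(first_heads) * P(second_heads))                  # True
```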
https://en.wikipedia.org/wiki/Probability_space
In United States criminal law , probable cause is the legal standard by which police authorities have reason to obtain a warrant for the arrest of a suspected criminal and for a court's issuing of a search warrant . [ 1 ] One definition of the standard derives from the U.S. Supreme Court decision in the case of Beck v. Ohio (1964): probable cause exists when “at [the moment of arrest] the facts and circumstances within [the] knowledge [of the police], and of which they had reasonably trustworthy information, [are] sufficient to warrant a prudent [person] in believing that [a suspect] had committed or was committing an offense.” [ 2 ] Moreover, the grand jury uses the probable cause standard to determine whether or not to issue a criminal indictment.

The principle behind the probable cause standard is to limit the power of authorities to conduct unlawful search and seizure of person and property, and to promote formal, forensic procedures for gathering lawful evidence for the prosecution of the arrested criminal. [ 3 ] In the case of Berger v. New York (1967), the Supreme Court said that the purpose of the probable-cause requirement of the Fourth Amendment is to keep the state out of constitutionally protected areas until the state has reason to believe that a specific crime is being committed or has been committed. [ 4 ]

As a term of criminal law, the probable cause standard is stipulated in the text of the Fourth Amendment to the U.S. Constitution: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause , supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Moreover, in U.S. immigration law, the term “reason to believe” is equivalent to the probable cause standard of criminal law, [ 5 ] and should not be confused with reasonable suspicion , which is the legal criterion required to perform a Terry stop in the U.S.

The usual definition of the probable cause standard includes “a reasonable amount of suspicion, supported by circumstances sufficiently strong to justify a prudent and cautious person’s belief that certain facts are probably true.” [ 6 ] Notably, this definition does not require that the person making the determination hold a public office or have public authority, which admits the citizenry’s common-sense understanding of the legal standard of probable cause for arrest.

Regarding the issuance of a warrant for arrest, probable cause is the “information sufficient to warrant a prudent person’s belief that the wanted individual had committed a crime (for an arrest warrant) or that evidence of a crime or contraband would be found in a search (for a search warrant)”. As a legal standard, probable cause is stronger than reasonable suspicion , but weaker than the requirement of evidence to secure a criminal conviction . Moreover, according to the Aguilar–Spinelli test , a criminal court can choose to accept hearsay as a source of probable cause if the source-person is of reliable character or if other evidence supports the hearsay. In the case of Brinegar v.
United States (1949), the Supreme Court defined probable cause as “where the facts and [the] circumstances within the officers’ knowledge, and of which they have reasonably trustworthy information, are sufficient, in themselves, to warrant a belief, by a man of reasonable caution, that a crime is being committed.” [ 7 ]

The use of probable cause in the United States and its integration in the Fourth Amendment has roots in English common law and the old saying that "a man's home is his castle". This is the idea that someone has the right to defend their "castle" or home from unwanted "attacks" or intrusion. In the 1600s, this saying started to apply legally to landowners, protecting them from casual searches by government officials. [ 8 ]

In the 1700s, the British use of writs of assistance and general warrants in the American colonies – instruments that allowed the authorities to search wherever and whenever they chose, sometimes without an expiration date – was challenged in several court cases. The first was in Massachusetts in 1761, when a customs agent applied for a new writ of assistance and Boston merchants challenged its legality. In the case, the lawyer for the merchants, James Otis, argued that the writs of assistance violated the fundamentals of English law and were unconstitutional. John Adams , a lawyer at the time who later wrote the Massachusetts provision on which the Fourth Amendment heavily relied, was strongly influenced by James Otis's argument. [ 9 ]

A case against general warrants was the English case Entick v. Carrington (1765). In that case, Lord Camden, the chief judge, said that general warrants were not the same as specific warrants and that neither parliament nor case law could authorize general warrants. Along with these statements, Lord Camden also affirmed that the needs of the state were more important than the individual's rights. This upheld the ideology of the social contract while holding to the idea that the government's purpose was to protect the property of the people. [ 8 ] He called for the government to seek reasonable means in order to search private property, as well as a cause.

In early cases in the United States, the Supreme Court held that when a person is on probation, the standard required for a search to be lawful is lowered from "probable cause" to "reasonable grounds" [ 10 ] or "reasonable suspicion". Specifically, the degree of individualized suspicion required of a search was a determination of when there is a sufficiently high probability that criminal conduct is occurring to make the intrusion on the individual's privacy interest reasonable. The Supreme Court held in United States v. Knights : Although the Fourth Amendment ordinarily requires the degree of probability embodied in the term "probable cause," a lesser degree satisfies the Constitution when the balance of governmental and private interests makes such a standard reasonable ... When an officer has reasonable suspicion that a probationer subject to a search condition is engaged in criminal activity, there is enough likelihood that criminal conduct is occurring that an intrusion on the probationer's significantly diminished privacy interests is reasonable. [ 11 ] Later, in Samson v.
California , the Supreme Court ruled that reasonable suspicion is not even necessary: The California Legislature has concluded that, given the number of inmates the State paroles and its high recidivism rate, a requirement that searches be based on individualized suspicion would undermine the State's ability to effectively supervise parolees and protect the public from criminal acts by reoffenders. This conclusion makes eminent sense. Imposing a reasonable suspicion requirement, as urged by petitioner, would give parolees greater opportunity to anticipate searches and conceal criminality. The court held that reasonableness, not individualized suspicion, is the touchstone of the Fourth Amendment. [ 12 ] It has been proposed that Fourth Amendment rights be extended to probationers and parolees, but such proposals have not gained traction. [ 13 ] Little remains of the Fourth Amendment rights of probationers once they have waived their right to be free from unreasonable searches and seizures. [ 14 ] An essay called "They Released Me from My Cage...But They Still Keep Me Handcuffed" was written in response to the Samson decision. [ 15 ] It has been argued that the requirement that a police officer have individualized suspicion before searching a parolee's person and home was long considered a foundational element of the Court's analysis of Fourth Amendment questions, and that abandoning it in the name of crime prevention represents an unprecedented blow to individual liberties. [ 16 ] In the United States, the use of a trained dog to smell for narcotics has been ruled in several court cases to be sufficient probable cause. A K-9 sniff in a public area is not a search, according to the Supreme Court's 1983 ruling in United States v. Place . In that case, Place was in LaGuardia Airport in New York City when DEA agents took his luggage, even though he had refused to have his bag searched. A trained dog alerted the agents that the luggage smelled of drugs; a dog alerting its officer provides enough probable cause for the officer to obtain a warrant. The DEA then procured a warrant and found a sizable amount of drugs in Place's luggage. The sniff was not considered a search, because a trained dog can detect the smell of narcotics without the luggage having to be opened and looked through. However, in Florida v. Jardines , [ 17 ] the Court ruled that a police officer and a narcotics-sniffing dog entering the porch of a home constitutes a search, which invokes the requirement of probable cause or a valid search warrant. The use of drug-sniffing K-9 units to establish probable cause is not limited to airports; it extends to schools, public parking lots, high-crime neighborhood streets, mail, visitors in prisons, traffic stops, and so on. If a dog alerts its officer, the probable cause from the dog is considered enough to conduct a search, as long as one of the exceptions to the warrant requirement is present, such as search incident to arrest , the automobile exception, exigency , or a stop and frisk . During a traffic stop or at a checkpoint, it is legal for police to allow a drug dog to sniff the exterior of the car, as long as this does not make the traffic stop any longer than it would have been without the dog. If the dog finds a scent, its alert again establishes probable cause.
[ 18 ] Under the 2001 USA Patriot Act , law enforcement officials did not need probable cause to access communications records, credit cards, bank numbers, and stored emails held by third parties; they needed only reasonable suspicion that the information they were accessing was part of criminal activities. On that basis, officers could obtain a court order to access the communication information. Only certain information could be accessed under the Act (such as names, addresses, and phone numbers). Probable cause was, and is, needed for more detailed information, because law enforcement needs a warrant to access it. Generally, law enforcement was not required to notify the suspect. [ 19 ] However, the text of the Patriot Act limits the application of that statute to issues that clearly involve the national security of the United States. [ 20 ] The relevant sections of the Act expired on June 1, 2015. [ 21 ] If voluntary consent is given, and the individual giving the consent has authority over the search area (such as a car, house, or business), then a law enforcement officer needs neither probable cause nor reasonable suspicion. If the person does not give voluntary consent, then the officer needs probable cause, and in some cases a search warrant may be required to search the premises. Unless another exception to the Fourth Amendment of the U.S. Constitution applies, when the person withdraws their consent to the search, the officer has to stop looking immediately. [ 22 ] In the United States, the term probable cause is also used in accident investigation to describe the conclusions reached by the investigating body as to the factor or factors that caused the accident. This usage is primarily seen in reports on aircraft accidents, but the term is used for the conclusions about diverse types of transportation accidents investigated in the United States by the National Transportation Safety Board or its predecessor, the Civil Aeronautics Board . In the various states, a probable cause hearing is the preliminary hearing typically taking place before arraignment and before a serious crime goes to trial . The judge is presented with the basis of the prosecution 's case, and the defendant is afforded the full right of cross-examination and the right to be represented by legal counsel . If the prosecution cannot make a case of probable cause, the court must dismiss the case against the accused. In the criminal code of some European countries, notably Sweden , probable cause is the higher level of suspicion in a two-level system of formal suspicion; the lower level is "justifiable grounds". The latter refers only to the suspect having been able, and sometimes having had a motive, to commit the crime, and in some cases to witness accounts, whereas probable cause generally requires a higher degree of physical evidence and allows for longer periods of detention before trial. See häktning . In England and Wales, powers of arrest without a warrant can be exercised by a constable who 'has reasonable grounds' to suspect that an individual is "about to commit an offence" or is "committing an offence", in accordance with the Serious Organised Crime and Police Act 2005 and the partially repealed Police and Criminal Evidence Act 1984 . [ 30 ] [ 31 ] The concept of "reasonable grounds for suspecting" is used throughout the law dealing with police powers.
In Scotland, the legal language that provides the police with powers pertaining to stopping, arresting, and searching a person – who "has committed or is committing an offence", [ 32 ] or is in possession of an offensive article, or an article used in connection with an offence – is similar to that in England and Wales. The powers are provided by the Criminal Procedure (Scotland) Act 1995 and the Police, Public Order and Criminal Justice (Scotland) Act 2005.
https://en.wikipedia.org/wiki/Probable_cause
Probe electrospray ionization ( PESI ) is an electrospray -based ambient ionization technique which is coupled with mass spectrometry for sample analysis. [ 1 ] [ 2 ] Unlike traditional mass spectrometry ion sources, which must be maintained in a vacuum, ambient ionization techniques permit sample ionization under ambient conditions, allowing for the high-throughput analysis of samples in their native state, often with minimal or no sample pre-treatment. [ 3 ] The PESI ion source simply consists of a needle to which a high voltage is applied following sample pick-up, initiating electrospray directly from the solid needle. Probe electrospray ionization is an ambient ionization mass spectrometry technique developed by Kenzo Hiraoka et al. at the University of Yamanashi , Japan. [ 4 ] The technique was developed to address some of the issues associated with traditional electrospray ionization (ESI), including clogging of the capillary and contamination, whilst providing a means of rapid and direct sample analysis. Since its initial conception, various modified forms of the PESI ion source have been developed, and the PESI-MS system has been commercialized by the instrument manufacturer Shimadzu . The PESI ion source consists of a solid needle or wire which acts as both the sampling probe and the electrospray emitter. [ 5 ] The needle is moved up and down along a vertical axis, a process which can be either automated or manual. When the needle is lowered to the sampling stage, the tip of the needle briefly touches the surface of the (typically liquid) sample. During this stage, the needle is held at ground potential. The needle is then raised to be level with the mass spectrometer inlet, where a high voltage of 2–3 kV is applied. Electrospray is induced at the tip of the needle, producing analyte ions which are drawn into the mass spectrometer for analysis. The mechanism by which ions are formed is believed to be identical to that of traditional electrospray ionization. As a result, in positive ion mode analytes are often observed as the protonated, sodiated, and potassiated ions, depending on the sample and analyte type. Although the amount of sample picked up by the needle is largely dependent on sample viscosity, it has been estimated that just a few picolitres of the sample solution are typically used. [ 6 ] Because of this, the technique can be applied to small sample sizes, which is particularly useful when only limited sample amounts are available. As such a small sample amount is picked up and completely exhausted during the ionization process, issues of contamination are greatly reduced. Furthermore, the process of sampling and ionization takes just a few seconds, so PESI-MS is suitable for high-throughput analysis. A phenomenon observed with probe electrospray ionization is the sequential and exhaustive ionization of analytes with different surface activities. During the development of PESI, it was discovered that analytes could be sequentially ionized throughout the electrospray, thus enabling a temporal separation of components within a sample. [ 7 ] In normal ESI, the sample solution is typically supplied continuously through a capillary and the charged droplets contain all sample components, with more surface-active analytes constantly being preferentially ionized. In PESI, surface-active analytes are also preferentially ionized.
However, as a finite droplet exists on the tip of the needle, following the depletion of surface-active analytes the remaining components in the droplet can then be ionized and observed. This can result in the production of distinctively different mass spectra from a single sample over the application of the high voltage for just a few seconds. This effect offers a particular advantage in the analysis of analytes suffering from ion suppression effects. The presence of surface-active analytes or charged solvent additives can result in the suppressed ionization of analytes of interest, resulting in low sensitivity or the complete absence of the analyte. [ 7 ] The effects of ion suppression can be minimized by reducing the complexity of the sample, for instance through sample extraction techniques such as solid phase extraction , or by separating analytes of interest using chromatographic separation. However, these sample preparation steps can be laborious, time-consuming, and expensive. PESI enables a reduction in ion suppression without the need for sample pre-treatment. By separating the ionization of different analytes, components causing ion suppression can be exhausted before the components of interest are ionized. This has been demonstrated in a number of scenarios, including in the analysis of raw urine, with concentrated components such as creatinine ionizing initially, followed by the appearance of previously undetected metabolites. [ 8 ] As the PESI needle is only applicable to liquid or penetrable solid samples, it cannot be used for the analysis of the majority of dry solid materials. To circumvent this limitation, sheath-flow probe electrospray ionization (sfPESI), a modification of the traditional PESI technique, was developed. The sfPESI ion source consists of a solid needle housed within a plastic sheath (typically a gel-loading tip) filled with a small amount of solvent. The needle protrudes from the base of the sheath by approximately 0.1 mm, where a minute solvent droplet is held. The end of the probe bearing the solvent droplet is briefly touched to the sample surface, where a convex solvent meniscus forms between the probe and the sample, wetting the sample and enabling analyte extraction. [ 9 ] The chemistry of the solvent can be modified to induce the extraction of particular analytes of interest. After application to the sample, the sfPESI probe is then raised to be level with the mass spectrometer inlet, with solubilised analytes held in the droplet at the tip of the needle, and a high voltage is applied. sfPESI offers the same advantages as standard PESI, including the sequential and exhaustive ionization phenomenon, whilst enabling the direct analysis of dry samples. PESI-MS has proven to be particularly effective in the metabolic analysis of biological materials, having been applied to the analysis of cancerous and non-cancerous breast tissue, [ 10 ] as well as brain and liver tissue removed from mice. [ 11 ] [ 12 ] PESI-MS has also recently been applied to the direct analysis of living animals for real-time metabolic profiling. [ 13 ] [ 14 ] Due to the narrow diameter of the PESI needle and the brief sample introduction time, PESI is reasonably non-invasive. As a result, the technique has been used to sample from the organs of living anaesthetized animals, specifically to analyse metabolites in the brain, spleen, liver, and kidney of a living mouse.
In addition, PESI-MS has been applied to the on-site analysis of food products for quality control, to the detection of herbicides in body fluids to demonstrate exposure, and to the detection of illicit drugs in bodily fluids to indicate drug use. Several groups have also harnessed the small size of the PESI probe to achieve single-cell analysis, demonstrating the capability of rapidly detecting metabolites at cellular and subcellular levels. [ 15 ] [ 16 ] [ 17 ] The PESI modification known as sheath-flow PESI has been applied to the analysis of various solid samples in their native state, including pharmaceutical tablets, [ 9 ] illicit drugs, [ 5 ] food and agricultural products, [ 18 ] and pesticides. [ 19 ] In addition, sfPESI has been utilised in the field of forensic science for the analysis and identification of fresh and dried body fluids of forensic interest. [ 8 ] In this work, sfPESI was also coupled with tandem mass spectrometry (MS/MS), demonstrating the capability of ion fragmentation for the identification of unknown components.
https://en.wikipedia.org/wiki/Probe_electrospray_ionization
Problem Solving Through Recreational Mathematics is a textbook in mathematics on problem solving techniques and their application to problems in recreational mathematics , intended as a textbook for general education courses in mathematics for liberal arts education students. It was written by Bonnie Averbach and Orin Chein, published in 1980 by W. H. Freeman and Company , and reprinted in 2000 by Dover Publications . Problem Solving Through Recreational Mathematics is based on mathematics courses taught by the authors, who were both mathematics professors at Temple University . [ 1 ] [ 2 ] It follows a principle in mathematics education popularized by George Pólya , of focusing on techniques for mathematical problem solving, motivated by the idea that by doing mathematics rather than being told about its "history, culture, or applications", liberal arts education students (for whom this might be their only college-level mathematics course) can gain a better idea of the nature of mathematics. [ 1 ] [ 3 ] By concentrating on problems in recreational mathematics , Averbach and Chein hope to motivate students through the fun aspect of these problems. However, this approach may also lead students to lose sight of the important applications of the mathematics they learn, [ 3 ] and the book contains little to no material on mathematical proof . [ 2 ] [ 4 ] The book's exercises include some with detailed solutions, some with less-detailed answers, and some that provide only hints to the solution, providing flexibility to instructors in using this book as a textbook. [ 1 ] [ 5 ] Cartoons and other illustrations of the concepts help make the material more inviting to students. [ 1 ] As well as for general education at the college level, this book could also be used to help prepare students going into mathematics education, [ 1 ] and for mathematics appreciation for secondary school students. It could also be used as a reference by secondary school mathematics teachers in providing additional examples for their students, [ 5 ] [ 6 ] or as personal reading for anyone teenaged or older who is interested in mathematics. [ 6 ] Alternatively, reviewer Murray Klamkin suggests using the books of Pólya for these purposes, but adding Problem Solving Through Recreational Mathematics as a supplement to them. [ 3 ] The book begins with an introductory chapter on problem-solving techniques in general, [ 4 ] including six problems to motivate these techniques. [ 1 ] The rest of the book is organized into eight thematic chapters, each of which can stand alone or be read in an arbitrary order. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Problem_Solving_Through_Recreational_Mathematics
Problem analysis or the problem frames approach is an approach to software requirements analysis . It was developed by British software consultant Michael A. Jackson in the 1990s. The problem frames approach was first sketched by Jackson in his book Software Requirements & Specifications (1995) and in a number of articles in various journals devoted to software engineering. It has received its fullest description in his Problem Frames: Analysing and Structuring Software Development Problems (2001). A session on problem frames was part of the 9th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ) held in Klagenfurt/Velden, Austria in 2003. [ 1 ] The First International Workshop on Applications and Advances in Problem Frames [ 2 ] was held as part of ICSE’04 in Edinburgh, Scotland. One outcome of that workshop was a 2005 special issue on problem frames in the International Journal of Information and Software Technology . The Second International Workshop on Applications and Advances in Problem Frames [ 3 ] was held as part of ICSE 2006 in Shanghai, China. The Third International Workshop on Applications and Advances in Problem Frames (IWAAPF) [ 4 ] was held as part of ICSE 2008 in Leipzig, Germany. In 2010, the IWAAPF workshops were replaced by the International Workshop on Applications and Advances of Problem-Orientation (IWAAPO). IWAAPO broadens the focus of the workshops to include alternative and complementary approaches to software development that share an emphasis on problem analysis. [ 5 ] IWAAPO-2010 was held as part of ICSE 2010 in Cape Town, South Africa. [ 6 ] Today, research on the problem frames approach is being conducted at a number of universities, most notably at the Open University in the United Kingdom as part of its Relating Problem & Solution Structures research theme. [ 7 ] [ 8 ] The ideas in the problem frames approach have been generalized into the concepts of problem-oriented development (POD) and problem-oriented engineering (POE), of which problem-oriented software engineering (POSE) is a particular sub-category. The first International Workshop on Problem-Oriented Development was held in June 2009. Problem analysis, or the problem frames approach, is a set of concepts to be used when gathering requirements and creating specifications for computer software. Its basic philosophy is strikingly different from that of other software requirements methods in its insistence that: It is more helpful ... to recognize that the solution is located in the computer and its software, and the problem is in the world outside. ... The computers can provide solutions to these problems because they are connected to the world outside. [ 10 ] The moral is clear: to study and analyse a problem you must focus on studying and analysing the problem world in some depth, and in your investigations you must be willing to travel some distance away from the computer. ... [In a call forwarding problem...] You need to describe what's there – people and offices and holidays and moving office and delegating responsibility – and what effects [in the problem world] you would like the system to achieve – calls to A's number must reach A, and [when B is on vacation, and C is temporarily working at D's desk] calls to B's or C's number must reach C. [ 11 ] None of these appear in the interface with the computer.... They are all deeper into the world than that. [ 12 ] The approach uses three sets of conceptual tools.
Concepts used for describing specific problems include: phenomena (of various kinds, including events ), problem context , problem domain , solution domain (aka the machine ), shared phenomena (which exist in domain interfaces ), domain requirements (which exist in the problem domains), and specifications (which exist at the problem domain:machine interface). The graphical tools for describing problems are the context diagram and the problem diagram . The problem frames approach also includes concepts for describing classes of problems. A recognized class of problems is called a problem frame (roughly analogous to a design pattern ). In a problem frame, domains are given general names and described in terms of their important characteristics. A domain, for example, may be classified as causal (reacts in a deterministic, predictable way to events) or biddable (can be bid, or asked, to respond to events, but cannot be expected always to react to events in any predictable, deterministic way). (A biddable domain usually consists of people.) The graphical tool for representing a problem frame is a frame diagram . A frame diagram looks generally like a problem diagram, except for a few minor differences: domains have general, rather than specific, names, and the rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain. The first group of problem frames identified by Jackson comprised five basic frames: required behaviour, commanded behaviour, information display, simple workpieces, and transformation. Subsequently, other researchers have described or proposed additional problem frames. Problem analysis considers a software application to be a kind of software machine . A software development project aims to change the problem context by creating a software machine and adding it to the problem context, where it will bring about certain desired effects. The particular portion of the problem context that is of interest in connection with a particular problem — the portion that forms the context of the problem — is called the application domain . After the software development project has been finished and the software machine has been inserted into the problem context, the problem context will contain both the application domain and the machine. The machine interface is where the machine and the application domain meet and interact. The same situation can be shown in a different kind of diagram, a context diagram . The problem analyst's first task is to truly understand the problem. That means understanding the context in which the problem is set, and that means drawing a context diagram. Jackson compares examining the problem context to a bridge engineer examining the site for a bridge to be built: an analyst trying to understand a software development problem must go through the same process as the bridge engineer. He starts by examining the various problem domains in the application domain. These domains form the context into which the planned machine must fit. Then he imagines how the machine will fit into this context. And then he constructs a context diagram showing his vision of the problem context with the machine installed in it. The context diagram shows the various problem domains in the application domain, their connections, and the machine and its connections to (some of) the problem domains.
A domain is simply a part of the world that we are interested in. It consists of phenomena: individuals, events, states of affairs, relationships, and behaviors. A domain interface is an area where domains connect and communicate. Domain interfaces are not data flows or messages. An interface is a place where domains partially overlap, so that the phenomena in the interface are shared phenomena: they exist in both of the overlapping domains. You can imagine domains as being like primitive one-celled organisms (like amoebas). They are able to extend parts of themselves into pseudopods. Imagine that two such organisms extend pseudopods toward each other in a sort of handshake, and that the cellular material in the area where they are shaking hands mixes, so that it belongs to both of them. That is an interface: if X is the interface between domains A and B, individuals that exist or events that occur in X exist or occur in both A and B. Shared individuals, states, and events may look different to the domains that share them. Consider, for example, an interface between a computer and a keyboard. Where the keyboard domain sees the event "keyboard operator presses the spacebar", the computer will see the same event as "byte hex("20") appears in the input buffer". The problem analyst's basic tool for describing a problem is a problem diagram . In addition to the kinds of things shown on a context diagram, a problem diagram shows the requirement, together with the requirement references that connect it to the problem domains. An interface that connects a problem domain to the machine is called a specification interface , and the phenomena in the specification interface are called specification phenomena . The goal of the requirements analyst is to develop a specification for the behavior that the machine must exhibit at the machine interface in order to satisfy the requirement. As an example of a real, if simple, problem diagram, consider a problem that might be part of a computer system in a hospital. In the hospital, patients are connected to sensors that can detect and measure their temperature and blood pressure. The requirement is to construct a machine that can display information about patient conditions on a panel in the nurses' station. The name of the requirement is "Display ~ Patient Condition". The tilde (~) indicates that the requirement is about a relationship or correspondence between the panel display and patient conditions. The arrowhead indicates that the requirement reference connected to the Panel Display domain is also a requirement constraint; that means the requirement contains some kind of stipulation that the panel display must meet. In short, the requirement is that the panel display must show information that matches and accurately reports the condition of the patients. A problem frame is a description of a recognizable class of problems, where the class of problems has a known solution. In a sense, problem frames are problem patterns. Each problem frame has its own frame diagram . A frame diagram looks essentially like a problem diagram, but instead of showing specific domains and requirements, it shows types of domains and types of requirements; domains have general, rather than specific, names, and the rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain. In Problem Frames , Jackson discussed variants of the five basic problem frames that he had identified. A variant typically adds a domain to the problem context.
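The structure of a problem diagram can be made concrete with a small data model. The following Python sketch is illustrative only: the class names and the way the patient-monitoring example is encoded are assumptions made for exposition, not part of Jackson's notation or of any published tool.

from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    kind: str  # "machine", "causal", or "biddable"

@dataclass
class Interface:
    # Shared phenomena exist in both connected domains at once.
    domains: tuple
    shared_phenomena: list

@dataclass
class Requirement:
    name: str
    references: list   # domains the requirement refers to
    constrains: list   # domains the requirement stipulates conditions on

# The patient-monitoring example described above (names are illustrative).
machine = Domain("Monitor Machine", "machine")
patients = Domain("Patients", "causal")   # bodies react predictably to sensors
panel = Domain("Panel Display", "causal")

# Specification interfaces: phenomena shared between machine and problem domains.
sensor_iface = Interface((machine, patients),
                         ["temperature reading", "blood pressure reading"])
panel_iface = Interface((machine, panel), ["display command"])

requirement = Requirement(
    "Display ~ Patient Condition",
    references=[patients],  # dashed reference: the patients' conditions
    constrains=[panel],     # arrowhead: the display must match the conditions
)

for iface in (sensor_iface, panel_iface):
    names = " and ".join(d.name for d in iface.domains)
    print(f"{names} share: {iface.shared_phenomena}")

Encoding a diagram this way makes the key distinction visible: the machine never touches the requirement directly; it can only affect the constrained domain through the shared phenomena at its specification interfaces.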
Jackson also discusses certain kinds of concerns that arise when working with problem frames, including particular concerns and composition concerns. When problem analysis is incorporated into the software development process , the software development lifecycle starts with the problem analyst, who studies the situation and decomposes the problem. At that point, problem analysis (problem decomposition) is complete. The next step is to reverse the process and to build the desired software system through a process of solution composition . The solution composition process is not yet well understood and is still very much a research topic. Extrapolating from hints in Software Requirements & Specifications , we can guess that the software development process would continue with the developers, who would compose solutions to the decomposed subproblems. There are a few other software development ideas that are similar in some ways to problem analysis.
https://en.wikipedia.org/wiki/Problem_frames_approach
Future contingent propositions (or simply, future contingents ) are statements about states of affairs in the future that are contingent : neither necessarily true nor necessarily false. The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation ( De Interpretatione ), using the famous sea-battle example. [ 1 ] Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious master argument . [ 2 ] The problem was later discussed by Leibniz . The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow. Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case in the future was also true in the past. But all past truths are now necessary truths; therefore it is now necessarily true, in the past and up to the original statement "A sea battle will not be fought tomorrow", that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore, it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. "For a man may predict an event ten thousand years beforehand, and another may predict the reverse; that which was truly predicted at the moment in the past will of necessity take place in the fullness of time" ( De Int. 18b35). This conflicts with the idea of our own free choice : that we have the power to determine or control the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen. As Aristotle says, if so there would be no need "to deliberate or to take trouble, on the supposition that if we should adopt a certain course, a certain result would follow, while, if we did not, the result would not follow". Aristotle solved the problem by asserting that the principle of bivalence finds its exception in this paradox of the sea battle: in this specific case, what is impossible is that both alternatives be possible at the same time. Either there will be a battle, or there won't; the two options cannot both be taken. Today, they are neither true nor false; but if one is true, then the other becomes false. According to Aristotle, it is impossible to say today whether the proposition is correct: we must wait for the contingent realization (or not) of the battle; logic realizes itself afterwards. For Diodorus, the future battle was either impossible or necessary. Aristotle added a third term, contingency , which saves logic while at the same time leaving room for indeterminacy in reality. What is necessary is not that there will be a battle tomorrow, nor that there will not be one, but the dichotomy itself: necessarily, there either will or will not be a battle. What exactly al-Farabi posited on the question of future contingents is contentious. Nicholas Rescher argues that al-Farabi's position is that the truth value of future contingents is already distributed in an "indefinite way", whereas Fritz Zimmermann argues that al-Farabi endorsed Aristotle's solution that the truth value of future contingents has not been distributed yet. [ 3 ] Peter Adamson claims they are both correct, as al-Farabi endorses both perspectives at different points in his writing, depending on how far he is engaging with the question of divine foreknowledge.
[ 3 ] Al-Farabi's argument about "indefinite" truth values centers on the idea that "from premises that are contingently true, a contingently true conclusion necessarily follows". [ 3 ] This means that even though a future contingent will occur, it may not have done so according to present contingent facts; as such, the truth value of a proposition concerning that future contingent is true, but true in a contingent way. Al-Farabi uses the following example: if we argue truly that Zayd will take a trip tomorrow, then he will, but crucially: There is in Zayd the possibility that he stays home....if we grant that Zayd is capable of staying home or of making the trip, then these two antithetical outcomes are equally possible [ 3 ] Al-Farabi's argument deals with the dilemma of future contingents by denying that the proposition P, "it is true at $t_1$ that Zayd will travel at $t_2$", and the proposition Q, "it is true at $t_2$ that Zayd travels", [ 3 ] would lead us to conclude that, necessarily, if P then necessarily Q. He denies this by arguing that "the truth of the present statement about Zayd's journey does not exclude the possibility of Zayd’s staying at home: it just excludes that this possibility will be realized". [ 3 ] Leibniz gave another response to the paradox in §6 of Discourse on Metaphysics : "That God does nothing which is not orderly, and that it is not even possible to conceive of events which are not regular." Thus even a miracle , the event par excellence, does not break the regular order of things. What is seen as irregular is only a defect of perspective, and does not appear so in relation to universal order; possibility thus exceeds human logic. Leibniz encounters this paradox because, according to him, everything that happens to Alexander derives from the haecceity of Alexander, and fatalism then threatens his construction. Against Aristotle's separation between the subject and the predicate , Leibniz states that the predicate (what happens to Alexander) must be completely included in the subject (Alexander) "if one understands perfectly the concept of the subject". Leibniz therefore distinguishes two types of necessity: necessary necessity and contingent necessity, or universal necessity versus singular necessity. Universal necessity concerns universal truths, while singular necessity concerns something necessary that could have not been (it is thus a "contingent necessity"). Leibniz here uses the concept of compossible worlds. According to Leibniz, contingent acts such as "Caesar crossing the Rubicon" or "Adam eating the apple" are necessary: that is, they are singular necessities, contingent and accidental, but governed by the principle of sufficient reason . Furthermore, this leads Leibniz to conceive of the subject not as a universal, but as a singular: it is true that "Caesar crosses the Rubicon", but it is true only of this Caesar at this time , not of any dictator, nor of Caesar at any time (§8, 9, 13). Thus Leibniz conceives of substance as plural: there is a plurality of singular substances, which he calls monads . Leibniz hence creates a concept of the individual as such, and attributes events to it. There is a universal necessity, which is universally applicable, and a singular necessity, which applies to each singular substance, or event.
There is one proper noun for each singular event: Leibniz creates a logic of singularity, which Aristotle thought impossible (he considered that there could only be knowledge of the general). One of the early motivations for the study of many-valued logics was precisely this issue. In the early 20th century, the Polish formal logician Jan Łukasiewicz proposed three truth-values: the true, the false, and the as-yet-undetermined. This approach was later developed by Arend Heyting and L. E. J. Brouwer ; [ 4 ] see Łukasiewicz logic . Issues such as this have also been addressed in various temporal logics , where one can assert that " Eventually , either there will be a sea battle tomorrow, or there won't be." (Which is true if "tomorrow" eventually occurs.) By asserting " A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow ", Aristotle is simply claiming "necessarily (a or not-a)", which is correct. However, to conclude from this that "if a is the case, then necessarily, a is the case" is to commit what is known as the modal fallacy . [ 5 ] Expressed in another way, the fallacious reading implies that there are no contingent propositions: every proposition would be either necessarily true or necessarily false. The fallacy arises in the ambiguity of the first premise. If we interpret it in a way close to the English surface form, the necessity attaches to each disjunct taken by itself. However, if we recognize that the English expression is potentially misleading, assigning a necessity to what is nothing more than a necessary condition, then the premises assert only the necessity of the conditionals, and from those latter premises the fatalist conclusion cannot be validly inferred.
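The distinction can be made explicit in modal notation (with $\Box$ read as "necessarily"); the following is a standard reconstruction of the argument rather than a quotation of Aristotle or of the cited sources:

$\Box\,(a \lor \lnot a)$: the necessity of the disjunction, which is what Aristotle asserts and which is correct.

$(a \to \Box a)$ and $(\lnot a \to \Box\lnot a)$: the fallacious reading, which together with the first line yields $\Box a \lor \Box\lnot a$, i.e. that no proposition is contingent.

$\Box\,(a \to a)$ and $\Box\,(\lnot a \to \lnot a)$: the corrected premises, which assign necessity only to the conditionals; from these, $\Box a \lor \Box\lnot a$ cannot be validly inferred.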
https://en.wikipedia.org/wiki/Problem_of_future_contingents
The problem of multiple generality names a failure in traditional logic to describe valid inferences that involve multiple quantifiers . For example, it is intuitively clear that if some cat is feared by every mouse, then it follows logically that all mice are afraid of at least one cat. The syntax of traditional logic (TL) permits exactly one quantifier, i.e. there are four sentence types: "All A's are B's", "No A's are B's", "Some A's are B's" and "Some A's are not B's". Since the sentences above each contain two quantifiers ('some' and 'every' in the first sentence and 'all' and 'at least one' in the second sentence), they cannot be adequately represented in TL. The best TL can do is to incorporate the second quantifier from each sentence into the second term, thus rendering the artificial-sounding terms 'feared-by-every-mouse' and 'afraid-of-at-least-one-cat'. This in effect "buries" these quantifiers, which are essential to the inference's validity, within the hyphenated terms. Hence the sentence "Some cat is feared by every mouse" is allotted the same logical form as the sentence "Some cat is hungry", and the inference takes the TL form "Some cats are feared-by-every-mouse; therefore, all mice are afraid-of-at-least-one-cat", which is clearly invalid. The first logical calculus capable of dealing with such inferences was Gottlob Frege 's Begriffsschrift (1879), the ancestor of modern predicate logic , which dealt with quantifiers by means of variable bindings. Modestly, Frege did not argue that his logic was more expressive than extant logical calculi, but commentators on Frege's logic regard this as one of his key achievements. Using modern predicate calculus , we quickly discover that the statement is ambiguous. "Some cat is feared by every mouse" could mean "(some cat is feared) by every mouse", paraphrasable as "every mouse fears some cat", i.e. $\forall y\,(\text{Mouse}(y) \to \exists x\,(\text{Cat}(x) \land \text{Fears}(y,x)))$, in which case the conclusion is trivial. But it could also mean "some cat is (feared by every mouse)", paraphrasable as "there is a cat feared by all mice", i.e. $\exists x\,(\text{Cat}(x) \land \forall y\,(\text{Mouse}(y) \to \text{Fears}(y,x)))$. This example illustrates the importance of specifying the scope of such quantifiers as for all and there exists .
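The relationship between the two readings can also be checked mechanically over finite domains. The following Python sketch is purely illustrative (the two-cat, two-mouse domain, the relation encoding, and the function names are invented for the example): it enumerates every possible fear relation and confirms that the strong reading entails the weak one, then exhibits a relation where the converse fails.

from itertools import product

cats = ["c1", "c2"]
mice = ["m1", "m2"]

def every_mouse_fears_some_cat(fears):
    # Weak reading: for all y (Mouse(y) -> exists x (Cat(x) & Fears(y, x)))
    return all(any((m, c) in fears for c in cats) for m in mice)

def some_cat_feared_by_every_mouse(fears):
    # Strong reading: exists x (Cat(x) & for all y (Mouse(y) -> Fears(y, x)))
    return any(all((m, c) in fears for m in mice) for c in cats)

# Enumerate all 2^4 possible fear relations and check the entailment.
pairs = [(m, c) for m in mice for c in cats]
for bits in product([False, True], repeat=len(pairs)):
    fears = {p for p, b in zip(pairs, bits) if b}
    if some_cat_feared_by_every_mouse(fears):
        assert every_mouse_fears_some_cat(fears)  # strong implies weak

# The converse fails: here each mouse fears a different cat.
fears = {("m1", "c1"), ("m2", "c2")}
print(every_mouse_fears_some_cat(fears))      # True
print(some_cat_feared_by_every_mouse(fears))  # False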
https://en.wikipedia.org/wiki/Problem_of_multiple_generality
In theoretical physics , the problem of time is a conceptual conflict between quantum mechanics and general relativity . Quantum mechanics regards the flow of time as universal and absolute, whereas general relativity regards the flow of time as malleable and relative. [ 1 ] [ 2 ] This problem raises the question of what time really is in a physical sense and whether it is truly a real, distinct phenomenon. It also involves the related question of why time seems to flow in a single direction, despite the fact that no known physical laws at the microscopic level seem to require a single direction. [ 3 ] In classical mechanics , a special status is assigned to time in the sense that it is treated as a classical background parameter, external to the system itself. This special role is seen in the standard Copenhagen interpretation of quantum mechanics: all measurements of observables are made at certain instants of time, and probabilities are only assigned to such measurements. Furthermore, the Hilbert space used in quantum theory relies on a complete set of observables which commute at a specific time. [ 4 ] : 759 In general relativity, time is no longer a unique background parameter, but a general coordinate. The field equations of general relativity are not parameterized by time but formulated in terms of spacetime. Many of the issues related to the problem of time exist within general relativity itself; at the cosmic scale, general relativity describes a closed universe with no external time. These two very different roles of time are incompatible. [ 4 ] Quantum gravity describes theories that attempt to reconcile or unify quantum mechanics and general relativity , the current theory of gravity. [ 5 ] The problem of time is central to these theoretical attempts. It remains unclear how time is related to quantum probability, whether time is fundamental or a consequence of processes, and whether time is approximate, among other issues. Different theories offer different answers to these questions, but no clear solution has emerged. [ 6 ] The most commonly discussed aspect of the problem of time is the frozen formalism problem. The non-relativistic Schrödinger equation of quantum mechanics includes time evolution: $i\hbar \frac{\partial}{\partial t}\psi(t) = H\psi(t)$, where $H$ is an energy operator characterizing the system and the wave function $\psi(t)$ over space evolves in time $t$. In general relativity, the energy operator becomes a constraint in the Wheeler–DeWitt equation : $\hat{H}(x)\,|\psi\rangle = 0$, where the operator varies throughout space, but the wavefunction here, called the wavefunction of the universe, is constant. Consequently, this cosmic universal wavefunction is frozen and does not evolve. Somehow, at a smaller scale, the laws of physics, including a concept of time, apply within the universe while the cosmic level is static. [ 4 ] : 762 Work started by Don Page and William Wootters [ 7 ] [ 8 ] [ 9 ] suggests that the universe appears to evolve for observers on the inside because of energy entanglement between an evolving system and a clock system, both within the universe. [ 10 ] In this way the overall system can remain timeless while parts experience time via entanglement. The issue remains an open question closely related to attempted theories of quantum gravity.
[ 11 ] [ 12 ] In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks, or of any objects usable as clocks) into the same history. In 2013, at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, Ekaterina Moreva, together with Giorgio Brida, Marco Gramegna, Vittorio Giovannetti, Lorenzo Maccone, and Marco Genovese performed the first experimental test of Page and Wootters' ideas. They confirmed for photons that time is an emergent phenomenon for internal observers of a quantum system but is absent for external observers, which is consistent with the predictions of the Wheeler–DeWitt equation . [ 10 ] [ 13 ] [ 14 ] The consistent discretizations approach developed by Jorge Pullin and Rodolfo Gambini has no constraints. These are lattice approximation techniques for quantum gravity. In the canonical approach, if one discretizes the constraints and equations of motion, the resulting discrete equations are inconsistent: they cannot be solved simultaneously. To address this problem, one uses a technique based on discretizing the action of the theory and working with the discrete equations of motion, which are automatically guaranteed to be consistent. Most of the hard conceptual questions of quantum gravity are related to the presence of constraints in the theory. Consistent discretized theories are free of these conceptual problems and can be straightforwardly quantized, providing a solution to the problem of time. The situation is a bit more subtle than this: although the theory is without constraints and has "general evolution", the latter is only in terms of a discrete parameter that is not physically accessible. The way out is addressed in a manner similar to the Page–Wootters approach: the idea is to pick one of the physical variables to be a clock and to ask relational questions. These ideas, in which the clock is also quantum mechanical, have led to a new interpretation of quantum mechanics, the Montevideo interpretation of quantum mechanics. [ 15 ] [ 16 ] This interpretation addresses the problems with using environmental decoherence as a solution to the measurement problem in quantum mechanics by invoking fundamental limitations, due to the quantum mechanical nature of clocks, in the process of measurement. These limitations are very natural in the context of generally covariant theories such as quantum gravity, where the clock must be taken as one of the degrees of freedom of the system itself. Pullin and Gambini have also put forward this fundamental decoherence as a way to resolve the black hole information paradox . [ 17 ] [ 18 ] In certain circumstances, a matter field is used to de-parametrize the theory and introduce a physical Hamiltonian. This generates physical time evolution, rather than a constraint. In reduced phase-space quantization, the constraints are solved first and then the theory is quantized. This approach was considered for some time to be impossible, as it seems to require first finding the general solution to Einstein's equations. However, with the use of ideas involved in Dittrich's approximation scheme (built on ideas of Carlo Rovelli ), a way to explicitly implement, at least in principle, a reduced phase-space quantization has been made viable. [ 19 ] Avshalom Elitzur and Shahar Dolev argue that quantum-mechanical experiments such as the "quantum liar" [ 20 ] provide evidence of inconsistent histories, and that spacetime itself may therefore be subject to change affecting entire histories.
[ 21 ] Elitzur and Dolev also believe that an objective passage of time and relativity can be reconciled, and that this would resolve many of the issues with the block universe and the conflict between relativity and quantum mechanics. [ 22 ] One solution to the problem of time proposed by Lee Smolin is that there exists a "thick present" of events, in which two events in the present can be causally related to each other, in contrast to the block-universe view of time in which all time exists eternally . [ 23 ] Marina Cortês and Lee Smolin argue that certain classes of discrete dynamical systems demonstrate time asymmetry and irreversibility, which is consistent with an objective passage of time. [ 24 ] Motivated by the Immirzi ambiguity in loop quantum gravity and the near-conformal invariance of the standard model of elementary particles, [ 25 ] Charles Wang and co-workers have argued that the problem of time may be related to an underlying scale invariance of gravity–matter systems. [ 26 ] [ 27 ] [ 28 ] Scale invariance has also been proposed to resolve the hierarchy problem of fundamental couplings. [ 29 ] As a global continuous symmetry, scale invariance generates a conserved Weyl current [ 26 ] [ 27 ] according to Noether’s theorem . In scale-invariant cosmological models, this Weyl current naturally gives rise to a harmonic time. [ 30 ] In the context of loop quantum gravity, Charles Wang et al. suggest that scale invariance may lead to the existence of a quantized time. [ 26 ] The thermal time hypothesis, put forward by Carlo Rovelli and Alain Connes , is a possible solution to the problem of time in classical and quantum theory. In this hypothesis, physical time flow is modeled not as a fundamental ingredient of the theory, but as a macroscopic feature of thermodynamical origin . [ 31 ] [ 32 ]
https://en.wikipedia.org/wiki/Problem_of_time
A problem solving environment (PSE) is a complete, integrated, and specialised piece of computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problems, selecting algorithms , running numerical simulations, and viewing and analysing results. Many PSEs were introduced in the 1990s. They use the language of the respective field and often employ modern graphical user interfaces . The goal is to make the software easy to use for specialists in fields other than computer science . PSEs are available for generic problems like data visualization or large systems of equations, and for narrow fields of science or engineering like gas turbine design. [ 1 ] The first problem solving environments were released a few years after the release of Fortran and Algol 60 . People thought that such high-level systems would lead to the elimination of professional programmers; surprisingly, however, PSEs were accepted, even though scientists used them to write programs. [ 2 ] The first PSEs for parallel scientific computation were introduced in the 1960s as organised collections of software with minor standardisation. [ 2 ] In 1970, PSE research initially focused on providing higher-level programming languages than Fortran, [ citation needed ] and plotting-package libraries appeared. Library development continued, and computational packages and graphical systems for data visualisation were introduced. By the 1990s, hypertext and point-and-click interfaces had moved towards inter-operability , and finally a "software parts" industry existed. [ 2 ] In recent decades, many PSEs have been developed to solve problems and to support users from different categories, including education, general programming, CSE software learning, job execution, and grid / cloud computing . [ citation needed ] The shell software GOSPEL is an example of how a PSE can be designed for EHL modelling using a grid resource. With the PSE, one can visualise the optimisation progress, as well as interact with other simulations. [ 3 ] The PSE parallelises and embeds many individual numerical calculations in an industrial serial optimisation code. It is built on NAG's IRIS Explorer package to solve EHL and parallelism problems, and it uses the gViz libraries to handle all the communication between the PSE and the simulation. It also uses MPI (part of the NAG libraries), which gives significantly quicker and better solutions by combining the maximum levels of continuation. [ 3 ] Moreover, the system is designed to allow users to steer simulations using the visualised output; for example, a user can home in on a local minimum, or layer additional detail around one, while the simulation runs, visualising the information produced while still steering the simulation. [ 4 ] PSEs require a large amount of resources that strain even the most powerful computers of today. Translating PSEs into software that can be used on mobile devices is an important challenge facing programmers today. [ 5 ] Grid computing is seen as a solution to the resource issues of PSEs on mobile devices. This is made possible through a "brokering service". This service is started by an initiating device that sends the necessary information for the PSE to resolve the task.
The brokering service then breaks this down into subtasks and distributes the information to various subordinate devices that perform these subtasks. [ 5 ] The brokering necessitates an Active Agent Repository (AAR) and a Task Allocation Table (TAT), which both work to manage the subtasks. A Keep-Alive Server is tapped to handle communication between the brokering service and the subordinate devices; it relies on a lightweight client application installed on the participating mobile devices. Security, transparency, and dependability are issues that may arise when using the grid for mobile device-based PSEs. [ 5 ] Network-based learning and e-learning have transformed education, but it is very difficult to collect education data and data on student activities. TSUNA-TASTE, developed by T. Teramoto, is a PSE to support education and learning processes. This system may create a new approach to e-learning by supporting teachers and students in computer-related education. It consists of four parts: student agents, an education support server, a database system, and a Web server. The system makes e-learning more convenient, as information is easier to store and collect for students and teachers. [ citation needed ] P-NCAS, a PSE for computer-assisted parallel program generation support, offers a new way to reduce the hard task of computer programming. It can avoid or reduce the chance of large computer software breaking down, restricting uncertainty and major accidents in society. Moreover, partial differential equation (PDE) problems can be solved by parallel programs generated with P-NCAS support. P-NCAS employs the Single Program Multiple Data (SPMD) model and uses a decomposition method for the parallelisation. These enable users of P-NCAS to input problems described by PDEs, an algorithm, a discretisation scheme, etc., and to view and edit all details through visualisation and editing windows. Finally, a parallel program is output in the C language by P-NCAS, together with documents that show everything that was input at the beginning. [ 6 ] At first it was difficult to tackle 2-D EHL problems because of the expense and the computer power available. The development of parallel 2-D EHL codes and faster computers have now paved the way for 2-D EHL problem solving to be possible. Friction and lubricant data need a higher level of security given their sensitivity. Accounting for simulations may be difficult because these are done rapidly and in the thousands; this can be solved by a registration system or a 'directory'. Collaborative PSEs with multiple users will encounter difficulties tracking changes, especially which specific changes were made and when they were made. This may also be solved with a directory of the changes made. [ 3 ] Secondly, for future improvement of grid-based PSEs for mobile devices, the group aims to generate new scenarios through manipulation of the available control variables. By changing those control variables, the simulation software is able to create scenarios that differ from each other, allowing for more scrutiny of the conditions in each scenario. It is expected that manipulation of three variables will generate twelve different scenarios. [ 5 ] The variables of interest are network stability and device mobility, as these are expected to have the greatest impact on grid performance.
The study will measure performance using task completion time as the primary outcome. [ 5 ] As PSEs grow more complex, the need for computing resources has risen dramatically. Conversely, with PSE applications venturing into fields and environments of growing complexity, the creation of PSEs has become tedious and difficult. Hirumichi Kobashi and his colleagues have designed a PSE meant to create other PSEs. This has been dubbed a 'meta-PSE', or a PSE of PSEs. This was how PSE Park was born. [ citation needed ] The architecture of PSE Park emphasises flexibility and extensibility. These characteristics make it an attractive platform for varied levels of expertise, from entry-level users to developers. [ citation needed ] PSE Park provides these through its repository of functions. The repository contains the modules required to build PSEs. Some of the most basic modules, called cores, are used as the foundation of PSEs. More complex modules are available for use by programmers. Users access PSE Park through a console. Once the user is registered, he or she has access to the repository. A PIPE server is used as the mediator between the user and PSE Park. It grants access to modules and constructs the selected functions into a PSE. [ citation needed ] Developers can develop functions, or even whole PSEs, for inclusion in the repository. Entry-level and expert users can access these pre-made PSEs for their own purposes. Given this architecture, PSE Park requires a cloud computing environment to support the enormous data sharing that occurs during PSE use and development. [ citation needed ] The PIPE server differs from other servers in how it handles intermediate results. Since the PIPE server acts as a mediator in a meta-PSE, any results or variables generated by a core module are retrieved as global variables to be used by the next core. The sequence, or hierarchy, is defined by the user. In this way, variables with the same name are revised to the new set of values. [ citation needed ] Another important characteristic of the PIPE server is that it executes each module or core independently. This means that the language of each module does not have to be the same as that of the others in the PSE. Modules are implemented depending on the defined hierarchy. This feature brings enormous flexibility for developers and users who have varied backgrounds in programming. The modular format also means that existing PSEs can be extended and modified easily. [ citation needed ] In order to be registered, a core must be fully defined. The input and output definitions allow the PIPE server to determine its compatibility with other cores and modules. Any lack of definition is flagged by the PIPE server as an incompatibility. [ citation needed ] The registration engine keeps track of all cores that may be used in PSE Park. A history of use is also kept. A core map may be developed in order to help users understand a core or module better. The console is the users' main interface with PSE Park. It is highly visual and diagrammatic, allowing users to better understand the linkages between modules and cores for the PSEs that they are working on. [ citation needed ]
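As a rough illustration of the PIPE-server idea described above (registered cores that declare their inputs and outputs, executed in a user-defined sequence, with intermediate results shared as global variables), consider the following Python sketch. All names and the API shape are invented for exposition; this is not PSE Park's actual interface.

class Core:
    def __init__(self, name, inputs, outputs, run):
        self.name, self.inputs, self.outputs, self.run = name, inputs, outputs, run

class PipeServer:
    def __init__(self):
        self.registry = {}   # fully defined cores, indexed by name
        self.variables = {}  # intermediate results shared between cores

    def register(self, core):
        # A core must be fully defined to be registered; an undeclared
        # interface would be flagged as an incompatibility.
        if core.inputs is None or core.outputs is None:
            raise ValueError(f"core {core.name} is not fully defined")
        self.registry[core.name] = core

    def execute(self, sequence):
        # Run cores in the user-defined hierarchy; same-name outputs revise
        # the previously stored values, as described for the PIPE server.
        for name in sequence:
            core = self.registry[name]
            args = {k: self.variables[k] for k in core.inputs}
            self.variables.update(core.run(**args))
        return self.variables

server = PipeServer()
server.register(Core("load", [], ["data"], lambda: {"data": [3.0, 1.0, 2.0]}))
server.register(Core("sort", ["data"], ["data"], lambda data: {"data": sorted(data)}))
server.register(Core("stats", ["data"], ["mean"],
                     lambda data: {"mean": sum(data) / len(data)}))
print(server.execute(["load", "sort", "stats"]))
# {'data': [1.0, 2.0, 3.0], 'mean': 2.0}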
https://en.wikipedia.org/wiki/Problem_solving_environment
Problems involving arithmetic progressions are of interest in number theory , [ 1 ] combinatorics , and computer science , both from theoretical and applied points of view. Find the cardinality (denoted by A_k(m)) of the largest subset of {1, 2, ..., m} which contains no progression of k distinct terms. The elements of the forbidden progressions are not required to be consecutive. For example, A_4(10) = 8, because {1, 2, 3, 5, 6, 8, 9, 10} has no arithmetic progression of length 4, while all 9-element subsets of {1, 2, ..., 10} have one. In 1936, Paul Erdős and Pál Turán posed a question related to this number, [ 2 ] and Erdős set a $1000 prize for an answer to it. The prize was collected by Endre Szemerédi for a solution published in 1975, in what has become known as Szemerédi's theorem . Szemerédi's theorem states that a set of natural numbers of non-zero upper asymptotic density contains arithmetic progressions of any finite length k. Erdős made a more general conjecture from which it would follow that the sequence of primes contains arithmetic progressions of any length. This result was proven by Ben Green and Terence Tao in 2004 and is now known as the Green–Tao theorem . [ 3 ] See also Dirichlet's theorem on arithmetic progressions . As of 2020, the longest known arithmetic progression of primes has length 27. [ 4 ] As of 2011, the longest known arithmetic progression of consecutive primes has length 10. It was found in 1998. [ 5 ] [ 6 ] That progression starts with a 93-digit number and has common difference 210. The prime number theorem for arithmetic progressions deals with the asymptotic distribution of prime numbers in an arithmetic progression.
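Small values such as A_4(10) = 8 can be checked by brute force. The sketch below enumerates subsets of {1, ..., m} from largest to smallest and rejects any containing a k-term arithmetic progression; it is exponential in m and practical only for tiny cases.

```python
from itertools import combinations

def has_progression(subset, k):
    """True if the subset contains an arithmetic progression of k terms."""
    s = set(subset)
    for a in subset:
        for d in range(1, max(s)):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

def A(k, m):
    """Size of the largest subset of {1..m} with no k-term progression."""
    for size in range(m, 0, -1):
        for subset in combinations(range(1, m + 1), size):
            if not has_progression(subset, k):
                return size
    return 0

print(A(4, 10))  # prints 8, matching the example in the text
```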
https://en.wikipedia.org/wiki/Problems_involving_arithmetic_progressions
Procalcitonin ( PCT ) is a peptide precursor of the hormone calcitonin , the latter being involved with calcium homeostasis . It arises once preprocalcitonin is cleaved by endopeptidase . [ 1 ] It was first identified by Leonard J. Deftos and Bernard A. Roos in the 1970s. [ 2 ] It is composed of 116 amino acids and is produced by parafollicular cells (C cells) of the thyroid and by the neuroendocrine cells of the lung and the intestine. The level of procalcitonin in the blood stream of healthy individuals is below the limit of detection (0.01 μg/L) of clinical assays. [ 3 ] The level of procalcitonin rises in response to a pro-inflammatory stimulus, especially of bacterial origin; it is therefore often classed as an acute phase reactant . [ 4 ] The induction period for procalcitonin ranges from 4–12 hours, with a half-life spanning anywhere from 22–35 hours. [ 5 ] It does not rise significantly with viral or non-infectious inflammation. In the case of viral infections, this is because one of the cellular responses to a viral infection is to produce interferon gamma , which also inhibits the initial formation of procalcitonin. [ 6 ] With the inflammatory cascade and systemic response that a severe infection brings, blood levels of procalcitonin may rise by multiple orders of magnitude, with higher values correlating with more severe disease. [ 7 ] However, the high procalcitonin levels produced during infections are not followed by a parallel increase in calcitonin or a decrease in serum calcium levels. [ 8 ] PCT is a member of the calcitonin (CT) superfamily of peptides . It is a peptide of 116 amino acids with an approximate molecular weight of 14.5 kDa, and its structure can be divided into three sections (see Figure 1): [ 9 ] the amino terminus (represented by the ball and stick model in Figure 1), immature calcitonin (shown in Figure 1 from PDB, as the crystal structure of procalcitonin is not yet available), and calcitonin carboxyl-terminus peptide 1. [ 9 ] Under normal physiological conditions, active CT is produced and secreted in the C-cells of the thyroid gland after proteolytic cleavage of PCT, meaning that, in a healthy individual, PCT levels in circulation are very low (<0.05 ng/mL). [ citation needed ] The pathways for production of PCT under normal and inflammatory conditions are shown in Figure 2. [ 10 ] During inflammation, LPS, microbial toxins, and inflammatory mediators, such as IL-6 or TNF-α, induce the CALC-1 gene in adipocytes, but PCT never gets cleaved to produce CT. [ 10 ] In a healthy individual, PCT in endocrine cells is produced from CALC-1 in response to elevated calcium levels, glucocorticoids, CGRP, glucagon, or gastrin, and is cleaved to form CT, which is released into the blood. [ 10 ] PCT is encoded by the CALC-1 gene on chromosome 11. [ 9 ] Bacterial infections induce a universal increase in CALC-1 gene expression and a release of PCT (>1 μg/mL). [ 11 ] Expression of this hormone occurs in a site-specific manner. [ 9 ] In healthy and non-infected individuals, transcription of PCT only occurs in neuroendocrine tissue, notably the C cells of the thyroid. The formed PCT then undergoes post-translational modifications, resulting in the production of small peptides and mature CT by removal of the C-terminal glycine from the immature CT by peptidylglycine α-amidating monooxygenase (PAM). [ 12 ] In a microbially infected individual, non-neuroendocrine tissue also secretes PCT through expression of CALC-1.
A microbial infection induces a substantial increase in the expression of CALC-1, leading to the production of PCT in all differentiated cell types. [ 13 ] The function of PCT synthesized in non-neuroendocrine tissue due to a microbial infection is currently unknown, but its detection aids in the differentiation of inflammatory processes. [ 9 ] Because PCT levels differ between microbial infections and healthy states, it has become a marker to improve identification of bacterial infection and guide antibiotic therapy. [ 14 ] The table below is a summary from Schuetz, Albrich, and Mueller, [ 14 ] summarizing the current data of selected, relevant studies investigating PCT in different types of infections. Legend: ✓ = Moderate evidence in favor of PCT ✓✓ = Good evidence in favor of PCT ✓✓✓ = Strong evidence in favor of PCT ~ = Evidence in favor or against the use of PCT, or still undefined Measurement of procalcitonin can be used as a marker of severe sepsis caused by bacteria and generally grades well with the degree of sepsis, [ 51 ] although levels of procalcitonin in the blood are very low. PCT has the greatest sensitivity (90%) and specificity (91%) for differentiating patients with systemic inflammatory response syndrome (SIRS) from those with sepsis, when compared with IL-2 , IL-6 , IL-8 , CRP and TNF-alpha . [ 52 ] Evidence is emerging that procalcitonin levels can reduce unnecessary antibiotic prescribing to people with lower respiratory tract infections . [ 53 ] Currently, procalcitonin assays are widely used in the clinical environment. [ 54 ] A meta-analysis reported a sensitivity of 76% and specificity of 70% for bacteremia. [ 55 ] A 2018 systematic review comparing PCT and C-reactive protein (CRP) found PCT to have a sensitivity of 80% and a specificity of 77% in identifying septic patients; in that study, PCT outperformed CRP in diagnostic accuracy for predicting sepsis. [ 56 ] In a 2018 meta-analysis of randomized trials of over 4400 ICU patients with sepsis, researchers concluded that PCT-guided therapy resulted in lower mortality and lower antibiotic administration. [ 57 ] Immune responses to both organ rejection and severe bacterial infection can lead to similar symptoms, such as swelling and fever, that can make initial diagnosis difficult. To differentiate between acute rejection of an organ transplant and bacterial infections, plasma procalcitonin levels have been proposed as a potential diagnostic tool. [ 58 ] Typically the levels of procalcitonin in the blood remain below 0.5 ng/mL in cases of acute organ rejection, which, as stated previously, is well below the 1 μg/mL typically seen in bacterial infection. [ 6 ] Given that procalcitonin is a blood marker for bacterial infections, evidence shows that it is a useful tool in guiding the initiation and duration of antibiotics in patients with bacterial pneumonia and other acute respiratory infections. [ 59 ] The use of procalcitonin-guided antibiotic therapy leads to lower mortality, less antibiotic usage and fewer side effects due to antibiotics, and promotes good antibiotic stewardship . [ 59 ] The value of these protocols is evident, since a high PCT level correlates with increased mortality in critically ill pneumonia patients, especially those with a low CURB-65 pneumonia risk factor score.
[ 60 ] In adults with acute respiratory infections, a 2017 systematic review found that PCT-guided therapy reduced mortality, reduced antibiotic use (2.4 fewer days of antibiotics) and led to decreased adverse drug effects across a variety of clinical settings (ED, ICU, primary care clinic). [ 59 ] Procalcitonin-guided treatment limits antibiotic exposure with no increased mortality in patients with acute exacerbation of chronic obstructive pulmonary disease . [ 61 ] Using procalcitonin-guided protocols in acute asthma exacerbation led to a reduction in prescriptions of antibiotics in primary care clinics, emergency departments and during hospital admission, without an increase in ventilator days or risk of intubation. Given that acute asthma exacerbation is one condition that leads to overuse of antibiotics worldwide, researchers concluded that PCT could help curb over-prescribing. [ 62 ] PCT serves as a marker to help differentiate acute respiratory illness, such as infection, from an acute cardiovascular concern. It also has value as a prognostic lab value in patients with atherosclerosis or coronary heart disease, as its levels correlate with the severity of the illness. [ 63 ] The European Society of Cardiology recently released a PCT-guided algorithm for administering antibiotics in patients with dyspnea and suspected acute heart failure . The guidelines use a cutoff point of 0.2 ng/mL and above as the point at which to give antibiotics. [ 64 ] This coincides with a 2017 review of the literature, which concluded that PCT can help reduce antibiotic overuse in patients presenting with acute heart failure. [ 65 ] With regard to mortality, a meta-analysis of over 5000 patients with heart failure concluded that elevated PCT was reliable in predicting short-term mortality. [ 66 ] Blood procalcitonin levels can help confirm bacterial meningitis and, if negative, can effectively rule out bacterial meningitis. This was shown in a review of over 2000 patients, in which cerebrospinal fluid PCT had a sensitivity of 86% and a specificity of 80%. Blood PCT measurements proved superior to cerebrospinal fluid PCT, with a sensitivity of 95% and a specificity of 97% as a marker for bacterial meningitis. [ 67 ] In acute meningitis, serum PCT is useful as a biomarker for sepsis. It can also be of use in distinguishing viral from bacterial meningitis. These findings are the result of a 2018 literature review. [ 68 ] This followed a 2015 meta-analysis showing that PCT had a sensitivity of 90% and a specificity of 98% in distinguishing viral from bacterial meningitis; PCT also outperformed other biomarkers such as C-reactive protein. [ 69 ] Evidence shows that an elevated PCT above 0.5 ng/mL can help diagnose infectious complications of inflammatory bowel disease, such as abdominal abscesses and bacterial enterocolitis. PCT can be effective in early recognition of infections in IBD patients and in decisions on whether to prescribe antibiotics. [ 70 ] Patients with chronic kidney disease and end-stage renal disease are at higher risk for infections, and procalcitonin has been studied in these populations, who often have higher baseline levels. Procalcitonin can be dialyzed, so levels depend on when patients receive hemodialysis . While there is no formally accepted cutoff value for patients undergoing HD, using a value of greater than or equal to 0.5 ng/mL yielded a sensitivity of 97–98% and a specificity of 70–96%.
[ 71 ] PCT, possibly together with CRP , is used to corroborate the MELD score . [ 72 ] [ 73 ] PCT at a cutoff value of 0.5 ng/mL was effective at ruling in septic arthritis in an analysis of over 8000 patients across 10 prospective studies, with a sensitivity of 54% and a specificity of 95%. The study also concluded that PCT outperforms C-reactive protein in differentiating septic arthritis from non-septic arthritis. [ 74 ] A 2016 literature review showed that PCT has good value in diagnosing infections in oncologic patients. Moreover, it is especially effective in diagnosing major life-threatening episodes in cancer patients, such as bacteremia and sepsis. [ 75 ] Procalcitonin is reliable for monitoring recurrence of medullary thyroid carcinoma ; in detecting cancer recurrence, PCT had a sensitivity and a specificity of 96% each. [ 76 ] In a meta-analysis of 17 studies, PCT had a sensitivity of 85% and a specificity of 54% in diagnosing sepsis in neonates and children, using a PCT cutoff between 2 and 2.5 ng/mL. [ 77 ] In children presenting with fever without an apparent source, a PCT level of 0.5 ng/mL had a sensitivity of 82% and a specificity of 86%; at a 5 ng/mL cutoff, the sensitivity and specificity were 61% and 94%. PCT can support clinical decision-making when identifying invasive bacterial infection in children with unexplained fever. [ 78 ] PCT levels correlate with the degree of illness in pediatric patients with sepsis or urinary tract infections, making PCT effective as a prognostic lab value in these patients. [ 79 ] Procalcitonin-guided cessation of antibiotic use reduces the duration of antibiotic exposure and lowers mortality in critically ill patients in the intensive care unit . [ 80 ] In adult emergency department patients with respiratory tract illnesses, PCT-guided treatment groups had reduced antibiotic use. [ 81 ] PCT reference ranges are also used to determine the likelihood that a patient has systemic infection (sepsis), thereby reducing the incidence of unnecessary antibiotic use in cases where sepsis is unlikely. [ 82 ] Although the literature differs somewhat on antibiotic cessation requirements, the general consensus is to stop antibiotics when procalcitonin levels fall 80% below their peak, or drop below 0.5 μg/L, at day five or later of antibiotic therapy. [ 83 ] Overdose of amphetamine or its analogs can induce systemic inflammation; in a case report of amphetamine overdose without bacterial infection, significant elevations in procalcitonin were observed. [ 84 ]
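As a worked illustration of the consensus cessation rule just described (stop when PCT falls 80% below peak, or below 0.5 μg/L, at day five or later), the following sketch encodes the thresholds directly. It is a simplified illustration of the published rule, not clinical decision software.

```python
def may_stop_antibiotics(pct_series_ug_per_l, day):
    """Apply the consensus PCT-guided cessation rule from the text:
    stop when PCT falls 80% below its peak or drops below 0.5 ug/L,
    at day five or later of antibiotic therapy."""
    if day < 5:
        return False
    peak = max(pct_series_ug_per_l)
    current = pct_series_ug_per_l[-1]
    # "80% below peak" means the current level is at most 20% of the peak.
    return current < 0.5 or current <= 0.2 * peak

# Example: a peak of 8.0 ug/L falling to 1.2 ug/L by day 6 (an 85% drop).
print(may_stop_antibiotics([2.0, 8.0, 5.5, 3.0, 1.9, 1.2], day=6))  # True
```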
https://en.wikipedia.org/wiki/Procalcitonin
The Proceedings of the Combustion Institute are the proceedings of the biennial Combustion Symposium organized by The Combustion Institute . The publication contains the most significant contributions to the fundamentals and applications of combustion science and combustion phenomena . Research papers and invited topical reviews are included on topics of reaction kinetics; soot, PAH and other large molecules; diagnostics; laminar flames; turbulent flames; heterogeneous combustion; spray and droplet combustion; detonations, explosions and supersonic combustion; fire research; stationary combustion systems; internal combustion engine and gas turbine combustion; and new technology concepts. The editors-in-chief are Daniel C. Haworth ( Pennsylvania State University ) and Terese Løvås ( Norwegian University of Science and Technology ). The need for development of automotive engines, fuels, and aviation formed the basis for the organization which became The Combustion Institute. The first three symposia were held in 1928, 1937, and 1948; since 1952, symposia have been held every second year. The first combustion symposium with published proceedings was in 1948. The journal is abstracted and indexed in: According to the Journal Citation Reports , the journal has a 2015 impact factor of 4.120. [ 1 ]
https://en.wikipedia.org/wiki/Proceedings_of_the_Combustion_Institute
The Journal of Engineering Tribology , Part J of the Proceedings of the Institution of Mechanical Engineers (IMechE), is a peer-reviewed academic journal that publishes research on engineering science associated with tribology and its applications. The journal was first published in 1994 and is published by SAGE Publications on behalf of IMechE. [ 1 ] The Journal of Engineering Tribology is abstracted and indexed in Scopus and the Science Citation Index . According to the Journal Citation Reports , its 2013 impact factor is 0.631, ranking it 81st out of 126 journals in the category "Engineering, Mechanical". [ 2 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Proceedings_of_the_Institution_of_Mechanical_Engineers,_Part_J
The Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability is a quarterly peer-reviewed academic journal that covers risk analysis and reliability engineering , including engineering, mathematical modelling and statistical analysis. The journal was established in 2006 and is published by SAGE Publications on behalf of the Institution of Mechanical Engineers . [ 1 ] According to the Journal Citation Reports , its 2013 impact factor is 0.775. [ 2 ] The journal is abstracted and indexed in Scopus and the Science Citation Index Expanded .
https://en.wikipedia.org/wiki/Proceedings_of_the_Institution_of_Mechanical_Engineers,_Part_O
A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic. Things called a process include:
https://en.wikipedia.org/wiki/Process
Process-centered design ( PCD ) is a design methodology that proposes a business-centric approach to designing user interfaces . Because multi-stage business analysis is involved right from the beginning of the PCD life cycle, it is believed to achieve the highest level of business-IT alignment possible through the UI. The method is aimed at enterprise applications where a business process is involved. Unlike content-oriented systems such as websites or portals, enterprise applications are built to enable a company's business processes. Enterprise applications often have a clear business goal and a set of specific objectives, such as improving employee productivity or increasing business performance by a certain percentage. Although there are proven UI design methodologies (such as the popular " user-centered design ", which helps produce highly usable interfaces), PCD differentiates itself by catering specifically to business-process-intensive software, which other UI design methodologies do not. Process-UI alignment is a component of PCD that ensures tight alignment between the business process and the enterprise application being developed. [ 1 ] UI design activities are affected by PCD. For example, call-center software used by a customer support agent, if designed for high process-UI alignment, can deliver substantial improvements in agent productivity and call-center performance that are unlikely if it were designed only for user satisfaction, ease of use, and similar criteria.
https://en.wikipedia.org/wiki/Process-centered_design
A process-data diagram (PDD) , also known as a process-deliverable diagram, is a diagram that describes processes and the data that act as output of these processes. On the left side the meta-process model can be viewed, and on the right side the meta-data model can be viewed. [ 1 ] A process-data diagram can be seen as a combination of a business process model and a data model . The process-data diagram depicted at the right gives an overview of all of these activities/processes and deliverables. The four gray boxes depict the four main implementation phases, each of which contains several processes that are in this case all sequential. The boxes at the right show all the deliverables/concepts that result from the processes. Boxes without a shadow have no further sub-concepts. Boxes with a black shadow depict complex closed concepts, i.e. concepts that have sub-concepts which will not be described in any more detail. Boxes with a white shadow (a box behind it) depict open concepts, whose sub-concepts are expanded in greater detail. The lines with diamonds show a has-a relationship between concepts. The SAP implementation process is made up of four main phases: project preparation, where a vision of the future state of the SAP solution is created; a sizing and blueprinting phase, where a software stack is acquired and training is performed; a functional development phase; and finally a final preparation phase, when the last tests are performed before the actual go-live. For each phase, the vital activities are addressed and the deliverables/products are explained. Sequential activities are activities that need to be carried out in a pre-defined order. The activities are connected with an arrow, implying that they have to be followed in that sequence. Both activities and sub-activities can be modeled in a sequential way. In Figure 1 an activity diagram is illustrated with one activity and two sequential sub-activities. A special kind of sequential activities are the start and stop states, which are also illustrated in Figure 1. In Figure 2 an example from practice is illustrated. The example is taken from the requirements capturing workflow in UML-based Web Engineering. The main activity, user & domain modeling, consists of three activities that need to be carried out in a predefined order. Unordered activities are used when the sub-activities of an activity do not have a pre-defined sequence in which they need to be carried out. Only sub-activities can be unordered. Unordered activities are represented as sub-activities without transitions within an activity, as represented in Figure 3. Sometimes an activity consists of both sequential and unordered sub-activities. The solution to this modeling issue is to divide the main activity into different parts. In Figure 4 an example is illustrated which clarifies the need to be able to model unordered activities. The example is taken from the requirements analysis workflow of the Unified Process. The main activity, “describe candidate requirements”, is divided into two parts. The first part is a sequential activity. The second part consists of four activities that do not need any particular sequence in order to be carried out correctly. Activities can occur concurrently. This is handled with forking and joining. By drawing the activities parallel in the diagram, connected with a synchronization bar, one can fork several activities.
Later on these concurrent activities can join again by using the same synchronization bar. Both activities and sub-activities can occur concurrently. In the example of Figure 5, Activity 2 and Activity 3 are concurrent activities. In Figure 6, a fragment of a requirements capturing process is depicted. Two activities, defining the actors and defining the use cases, are carried out concurrently. The reason for carrying out these activities concurrently is that defining the actors influences the use cases greatly, and vice versa. Conditional activities are activities that are only carried out if a pre-defined condition is met. This is graphically represented by using a branch. Branches are illustrated with a diamond and can have incoming and outgoing transitions. Every outgoing transition has a guard expression, the condition. This guard expression is a Boolean expression, used to choose which direction to take. Both activities and sub-activities can be modeled as conditional activities. In Figure 7 two conditional activities are illustrated. In Figure 8 an example from practice is illustrated. A requirements analysis starts with studying the material. Based on this study, a decision is taken whether or not to do an extensive requirements elicitation session. The condition for not carrying out this requirements session is represented at the left of the branch, namely [requirements clear]. If this condition is not met, [else], the other arrow is followed. The integration of both types of diagrams is quite straightforward. Each action or activity results in a concept. They are connected with a dotted arrow to the produced artifacts, as demonstrated in Figure 9. The concepts and activities are abstract in this picture. In Table 1 a generic table is presented with the description of activities, sub-activities and their relations to the concepts. In section 5, examples are given of both a process-data diagram and an activity table. In Figure 10 an example of a process-data diagram is illustrated. It concerns an example from the orientation phase of a complex project in a Web Engineering method. [ 1 ] Notable is the use of open and closed concepts. Since project management is not within the scope of this research, the concept CONTROL MANAGEMENT has not been expanded. However, in a complex project, RISK MANAGEMENT is of great importance; therefore, the choice was made to expand the RISK MANAGEMENT concept. In Table 2 the activities and sub-activities, and their relation to the concepts, are described.
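The activity types described above (sequential, unordered, concurrent, conditional) and their dotted-arrow links to concepts can be captured in a simple data model. The sketch below is a hypothetical illustration of that structure; it is not part of any PDD tool, and all names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    produces: str                      # concept/deliverable, per the dotted arrow
    kind: str = "sequential"           # sequential | unordered | concurrent | conditional
    guard: str | None = None           # Boolean guard expression for conditional activities
    sub_activities: list = field(default_factory=list)

# Modeled after the "describe candidate requirements" example in the text:
# a sequential part followed by a conditional part with a guard expression.
requirements = Activity(
    name="describe candidate requirements",
    produces="CANDIDATE REQUIREMENTS",
    sub_activities=[
        Activity("study material", "NOTES"),
        Activity("hold requirements elicitation session", "REQUIREMENTS LIST",
                 kind="conditional", guard="[requirements not clear]"),
    ],
)

for sub in requirements.sub_activities:
    label = f" when {sub.guard}" if sub.guard else ""
    print(f"{sub.name} -> {sub.produces}{label}")
```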
https://en.wikipedia.org/wiki/Process-data_diagram
In anatomy , a process ( Latin : processus ) is a projection or outgrowth of tissue from a larger body. [ 1 ] For instance, in a vertebra , a process may serve for muscle attachment and leverage (as in the case of the transverse and spinous processes ), or to fit with another vertebra, forming a synovial joint (as in the case of the articular processes ). [ 2 ] The word is also used at the microanatomic level, where cells can have processes such as cilia or pedicels . Depending on the tissue, processes may also be called by other terms, such as apophysis , tubercle , or protuberance . Examples of processes include: This anatomy article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Process_(anatomy)
Process Lasso is Windows process automation and optimization software developed by Jeremy Collake of Bitsum Technologies. It features a graphical user interface that allows for automating various process-related tasks, and several novel algorithms to control how processes are run. The original and headline algorithm is ProBalance, which works to retain system responsiveness during high CPU loads by dynamically adjusting process priority classes. [ 3 ] More recently, algorithms such as the CPU Limiter, [ 4 ] Instance Balancer, [ 5 ] and Group Extender [ 6 ] were added. These algorithms help to control how processes are allocated to CPU cores . Numerous additional automation capabilities exist, including disallowed processes and application power plans. The paid (Pro) version has some extra features, such as the ability to run the core engine (Process Governor) as a system service . [ 7 ] Among this program's features are the following: [ 8 ] Users who take advantage of the program's advanced features, such as assigning persistent priority classes and CPU affinities to CPU-intensive services or programs, should fully familiarize themselves with Process Lasso's documentation. While pinning specific services and programs to particular CPU cores and fine-tuning priority classes can enhance system performance, a user could lock the system into "full load" by incorrectly elevating a multi-threaded service or program, making the system, including mouse and keyboard input, unresponsive. The program was featured on FreewareBB, [ 9 ] and received an "Excellent" rating from Softpedia , as well as a certification for containing no malware . [ 10 ] The application has a 4.63 rating (out of a possible 5) at MajorGeeks.com. Editors at CNET gave it 'Outstanding', 4.5 of a possible 5 stars. [ 11 ]
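The general idea behind priority-adjustment schemes like ProBalance, temporarily demoting processes that monopolize the CPU, can be illustrated with a rough sketch built on the third-party psutil library. This is a concept demonstration under stated assumptions (a fixed threshold, Windows-specific priority constants, an invented exclusion list); it is not Process Lasso's actual algorithm or code.

```python
# Concept sketch only: demote processes that hog the CPU, roughly in the
# spirit of priority-adjustment schemes such as ProBalance.
# Requires the third-party 'psutil' package; the priority constant used
# below is available on Windows builds of psutil.
import psutil

CPU_THRESHOLD = 50.0   # percent of one core; illustrative value

def demote_cpu_hogs(exclude=("explorer.exe",)):
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] in exclude:
                continue
            # Sample this process's CPU usage over a short interval.
            if proc.cpu_percent(interval=0.1) > CPU_THRESHOLD:
                # Lower the priority class of the offender.
                proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
                print(f"demoted {proc.info['name']} (pid {proc.pid})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it

if __name__ == "__main__":
    demote_cpu_hogs()
```

A real implementation would also restore the original priority once the load subsides, which is part of what distinguishes a responsiveness scheme from a blunt one-way demotion.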
https://en.wikipedia.org/wiki/Process_Lasso
Process analytical chemistry (PAC) is the application of analytical chemistry , with specialized techniques, algorithms, and sampling equipment, to solving problems related to chemical processes. It is a specialized form of analytical chemistry used for process manufacturing, similar to process analytical technology (PAT) used in the pharmaceutical industry. The chemical processes are for production and quality control of manufactured products, and process analytical technology is used to determine the physical and chemical composition of the desired products during a manufacturing process. The field was first mentioned in the chemical literature in 1946. [ 1 ] Process analysis initially involved sampling the variety of process streams or webs and transporting samples to quality control or central analytical service laboratories. Time delays for analytical results due to sample transport and analytical preparation steps negated the value of many chemical analyses for purposes other than product release. Over time it was understood that real-time measurements provided timely information about a process, which was far more useful for high efficiency and quality. The development of real-time process analysis has provided information for process optimization during any manufacturing process. The journal Analytical Chemistry publishes a biennial review of the most recent developments in the field. [ 2 ] The first real-time measurements in a production environment were made with modified laboratory instrumentation; more recently, specialized process and handheld instrumentation has been developed for immediate analysis. Process analytical chemistry involves the following sub-disciplines of analytical chemistry : microanalytical systems , nanotechnology , chemical detection, electrochemistry or electrophoresis , chromatography , spectroscopy , mass spectrometry , process chemometrics , process control , flow injection analysis , ultrasound , and handheld sensors.
https://en.wikipedia.org/wiki/Process_analytical_chemistry
The Process and General Workers' Union was a British trade union representing workers involved in mining and processing salt, and related industries, mostly in Cheshire . The union was founded in November 1888, as the Northwich Amalgamated Society of Salt Workers, Rock Salt Miners, Alkali Workers, Mechanics and General Labourers . Six months later, William Yarwood took over as its general secretary, resolving numerous industrial disputes. He brought the union into the Trades Union Congress , and the National Transport Workers' Federation . [ 1 ] It was based at the Vine Tavern in Northwich , then in the 1920s moved to the George and Dragon. [ 2 ] In 1951, the union had 2,196 members, and renamed itself as the Mid-Cheshire Salt and Chemical Industries Allied Workers' Union , and in 1966 it became the Process and General Workers' Union . Three years later, it merged into the Transport and General Workers' Union . [ 2 ]
https://en.wikipedia.org/wiki/Process_and_General_Workers'_Union
Process chemistry is the arm of pharmaceutical chemistry concerned with the development and optimization of a synthetic scheme and pilot plant procedure to manufacture compounds for the drug development phase. Process chemistry is distinguished from medicinal chemistry , which is the arm of pharmaceutical chemistry tasked with designing and synthesizing molecules on small scale in the early drug discovery phase. Medicinal chemists are largely concerned with synthesizing a large number of compounds as quickly as possible from easily tunable chemical building blocks (usually for SAR studies). In general, the repertoire of reactions utilized in discovery chemistry is somewhat narrow (for example, the Buchwald-Hartwig amination , Suzuki coupling and reductive amination are commonplace reactions). [ 1 ] In contrast, process chemists are tasked with identifying a chemical process that is safe, cost and labor efficient, “green,” and reproducible, among other considerations. Oftentimes, in searching for the shortest, most efficient synthetic route, process chemists must devise creative synthetic solutions that eliminate costly functional group manipulations and oxidation/reduction steps. This article focuses exclusively on the chemical and manufacturing processes associated with the production of small molecule drugs. Biological medical products (more commonly called “biologics”) represent a growing proportion of approved therapies, but the manufacturing processes of these products are beyond the scope of this article. Additionally, the many complex factors associated with chemical plant engineering (for example, heat transfer and reactor design) and drug formulation will be treated cursorily. Cost efficiency is of paramount importance in process chemistry and, consequently, is a focus in the consideration of pilot plant synthetic routes. The drug substance that is manufactured, prior to the formulation, is commonly referred to as the active pharmaceutical ingredient (API) and will be referred to as such herein. API production cost can be broken into two components: the “material cost” and the “conversion cost.” [ 2 ] The ecological and environmental impact of a synthetic process should also be evaluated by an appropriate metric (e.g. the EcoScale). An ideal process chemical route will score well in each of these metrics, but inevitably tradeoffs are to be expected. Most large pharmaceutical process chemistry and manufacturing divisions have devised weighted quantitative schemes to measure the overall attractiveness of a given synthetic route over another. As cost is a major driver, material cost and volume-time output are typically weighted heavily. The material cost of a chemical process is the sum of the costs of all raw materials, intermediates, reagents, solvents, and catalysts procured from external vendors. Material costs may influence the selection of one synthetic route over another or the decision to outsource production of an intermediate. The conversion cost of a chemical process is a factor of that procedure's overall efficiency, both in materials and time, and its reproducibility. The efficiency of a chemical process can be quantified by its atom economy, yield, volume-time output, and environmental factor (E-factor), and its reproducibility can be evaluated by the Quality Service Level (QSL) and Process Excellence Index (PEI) metrics. The atom economy of a reaction is defined as the number of atoms from the starting materials that are incorporated into the final product. 
Atom economy can be viewed as an indicator of the “efficiency” of a given synthetic route. [ 3 ] For example, the Claisen rearrangement and the Diels-Alder cycloaddition are examples of reactions that are 100 percent atom economical. On the other hand, a prototypical Wittig reaction has an especially poor atom economy (merely 20 percent in the example shown). Process synthetic routes should be designed such that atom economy is maximized for the entire synthetic scheme. Consequently, “costly” reagents such as protecting groups and high molecular weight leaving groups should be avoided where possible. An atom economy value in the range of 70 to 90 percent for an API synthesis is ideal, but it may be impractical or impossible to access certain complex targets within this range. Nevertheless, atom economy is a good metric for comparing two routes to the same molecule. Yield is defined as the amount of product obtained in a chemical reaction. The yield of practical significance in process chemistry is the isolated yield, i.e. the yield of the isolated product after all purification steps. In a final API synthesis, isolated yields of 80 percent or above for each synthetic step are expected. The definition of an acceptable yield depends entirely on the importance of the product and the ways in which available technologies come together to allow their efficient application; yields approaching 100% are termed quantitative, and yields above 90% are broadly understood as excellent. [ 4 ] There are several strategies employed in the design of a process route to ensure an adequate overall yield of the pharmaceutical product. The first is the concept of convergent synthesis . Assuming a very good to excellent yield in each synthetic step, the overall yield of a multistep reaction can be maximized by independently preparing several key intermediates and combining them at a late stage. Another strategy to maximize isolated yield (as well as time efficiency) is the concept of telescoping synthesis (also called one-pot synthesis). This approach describes the process of eliminating workup and purification steps from a reaction sequence, typically by simply adding reagents sequentially to a reactor. In this way, unnecessary losses from these steps can be avoided. Finally, to minimize overall cost, synthetic steps involving expensive reagents, solvents, or catalysts should be designed into the process route as late as possible, to minimize the amount of reagent used. In a pilot plant or manufacturing plant setting, yield can have a profound effect on the material cost of an API synthesis, so the careful planning of a robust route and the fine-tuning of reaction conditions are crucially important. After a synthetic route has been selected, process chemists will subject each step to exhaustive optimization in order to maximize the overall yield. Low yields are typically indicative of unwanted side-product formation, which can raise red flags in the regulatory process as well as pose challenges for reactor cleaning operations. The volume-time output (VTO) of a chemical process represents the cost of occupancy of a chemical reactor for a particular process or API synthesis. For example, a high VTO indicates that a particular synthetic step is costly in terms of “reactor hours” used for a given output.
Mathematically, the VTO for a particular process is calculated as the total volume of all occupied reactors (in m³) multiplied by the hours per batch, divided by the output of that batch of API or intermediate (measured in kg). The process chemistry group at Boehringer Ingelheim, for example, targets a VTO of less than 1 for any given synthetic step or chemical process. Additionally, the raw conversion cost of an API synthesis (in dollars per batch) can be calculated from the VTO, given the operating cost and usable capacity of a particular reactor. Oftentimes, for large-volume APIs, it is economical to build a dedicated production plant rather than to use space in general pilot plants or manufacturing plants. The next two measures, the E-factor and the process mass intensity (PMI), capture the environmental impact of a synthetic reaction and are intended to reflect the significant and rising cost of waste disposal in the manufacturing process. The E-factor for an entire API process is computed as the ratio of the total mass of waste generated in the synthetic scheme to the mass of product isolated. The similar PMI is the ratio of the total mass of materials to the mass of the isolated product. For both metrics, all materials used in all synthetic steps, including reaction and workup solvents, reagents, and catalysts, are counted, even if solvents or catalysts are recycled in practice. Inconsistencies in E-factor or PMI computations may arise when choosing whether to consider the waste associated with the synthesis of outsourced intermediates or common reagents. Additionally, the environmental impact of the generated waste is ignored in this calculation; therefore, the environmental quotient (EQ) metric was devised, which multiplies the E-factor by an “unfriendliness quotient” associated with various waste streams. A reasonable target for the E-factor or PMI of a single synthetic step is any value between 10 and 40. The final two "conversion cost" considerations involve the reproducibility of a given reaction or API synthesis route. The quality service level (QSL) is a measure of the reproducibility of the quality of the isolated intermediate or final API. While the details of computing this value are slightly nuanced and unimportant for the purposes of this article, in essence the calculation involves the ratio of satisfactory-quality batches to the total number of batches. A reasonable QSL target is 98 to 100 percent. Like the QSL, the process excellence index (PEI) is a measure of process reproducibility. Here, however, the robustness of the procedure is evaluated in terms of the yield and cycle time of various operations. The PEI yield can be understood as the ratio of the average yield achieved to an aspiration-level yield. In practice, if a process is high-yielding and has a narrow distribution of yield outcomes, then the PEI should be very high. Processes that are not easily reproducible may have a higher aspiration-level yield and a lower average yield, lowering the PEI yield. Similarly, a PEI cycle time may be defined with the terms inverted, as the ratio of the aspiration-level cycle time to the average cycle time, to reflect the desirability of shorter cycle times (as opposed to higher yields). The reproducibility of cycle times for critical processes such as reaction, centrifugation, or drying may be critical if these operations are rate-limiting in the manufacturing plant setting. For example, if an isolation step is particularly difficult or slow, it could become the bottleneck for API synthesis, in which case the reproducibility and optimization of that operation become critical.
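Because these cost metrics are simple ratios, they are easy to compute. The sketch below evaluates atom economy, VTO, and E-factor from the definitions above; the example numbers reproduce the metathesis case study discussed below (6 m³ reactor, 48 h and 35 kg unoptimized versus 13 h and 799 kg optimized), while the function names themselves are illustrative.

```python
def atom_economy(product_mw, reactant_mws):
    """Percent of reactant mass incorporated into the product."""
    return 100.0 * product_mw / sum(reactant_mws)

def vto(reactor_volume_m3, hours_per_batch, output_kg):
    """Volume-time output: reactor volume (m3) x hours per batch / kg product."""
    return reactor_volume_m3 * hours_per_batch / output_kg

def e_factor(total_waste_kg, product_kg):
    """Mass of waste generated per mass of isolated product."""
    return total_waste_kg / product_kg

# Unoptimized metathesis step: 6 m3 reactor, 48 h per batch, 35 kg of product.
print(round(vto(6, 48, 35), 1))    # ~8.2, far above the target of < 1
# Optimized step: 13 h total per batch and 799 kg of product.
print(round(vto(6, 13, 799), 2))   # ~0.1, comfortably below 1
```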
For an API manufacturing process, all PEI metrics (yield and cycle times) should be targeted at 98 to 100 percent. In 2006, Van Aken et al. [ 5 ] developed a quantitative framework to evaluate the safety and ecological impact of a chemical process, with minor weighting of practical and economic considerations. Others have modified this EcoScale by adding, subtracting and adjusting the weighting of various metrics. Among other factors, the EcoScale takes into account the toxicity, flammability, and explosive stability of the reagents used, any nonstandard or potentially hazardous reaction conditions (for example, elevated pressure or inert atmosphere), and the reaction temperature. Some EcoScale criteria are redundant with previously considered criteria (e.g. the E-factor). Macrocyclization is a recurrent challenge for process chemists, and large pharmaceutical companies have necessarily developed creative strategies to overcome its inherent limitations. An interesting case study in this area involves the development of novel NS3 protease inhibitors to treat Hepatitis C patients by scientists at Boehringer Ingelheim . [ 6 ] The process chemistry team at BI was tasked with developing a cheaper and more efficient route to the active NS3 inhibitor BI 201302, a close analog of BILN 2061. Two significant shortcomings were immediately identified with the initial scale-up route to BILN 2061, depicted in the scheme below. [ 7 ] The macrocyclization step posed four challenges inherent to the cross-metathesis reaction. Additionally, the final double S_N2 sequence to install the quinoline heterocycle was identified as a secondary inefficiency in the synthetic route. Analysis of the cross-metathesis reaction revealed that the conformation of the acyclic precursor had a profound impact on the formation of dimers and oligomers in the reaction mixture. By installing a Boc protecting group at the C-4 amide nitrogen, the Boehringer Ingelheim chemists were able to shift the site of initiation from the vinylcyclopropane moiety to the nonenoic acid moiety, improving the rate of the intramolecular reaction and decreasing the risk of epimerization. Additionally, the catalyst employed was switched from the expensive first-generation Hoveyda catalyst to the more reactive, less expensive Grela catalyst. [ 9 ] These modifications allowed the process chemists to run the reaction at a standard reaction dilution of 0.1-0.2 M, given that the rates of the competing dimerization and oligomerization reactions were so dramatically reduced. Additionally, the process chemistry team envisioned an S_NAr strategy to install the quinoline heterocycle, instead of the S_N2 strategy they had employed for the synthesis of BILN 2061. This modification eliminated the need for an inefficient double inversion by proceeding with retention of stereochemistry at the C-4 position of the hydroxyproline moiety. [ 10 ] It is instructive to examine this case study from a VTO perspective. For the unoptimized cross-metathesis reaction using the Grela catalyst at 0.01 M diene, the reaction yield was determined to be 82 percent after a reaction and workup time of 48 hours. A 6-cubic-meter reactor filled to 80% capacity afforded 35 kg of the desired product. For the unoptimized reaction, VTO = 6 m³ × 48 h / 35 kg ≈ 8.2. This VTO value was considered prohibitively high, and a steep investment in a dedicated plant would have been necessary even before launching Phase III trials with this API, given its large projected annual demand.
But after reaction development and optimization, the process team was able to improve the reaction yield to 93 percent after just 1 hour (plus 12 hours for workup and reactor cleaning time) at a diene concentration of 0.2 M. With these modifications, a 6-cubic-meter reactor filled to 80% capacity afforded 799 kg of the desired product. For this optimized reaction, VTO = 6 m³ × 13 h / 799 kg ≈ 0.1. Thus, after optimization, this synthetic step became less costly in terms of equipment and time and more practical to perform in a standard manufacturing facility, eliminating the need for costly investment in a new dedicated plant. Recently, large pharmaceutical process chemistry groups have relied heavily on the development of enzymatic reactions to produce important chiral building blocks for API synthesis. Many varied classes of naturally occurring enzymes have been co-opted and engineered for process pharmaceutical chemistry applications. The widest range of applications comes from ketoreductases and transaminases , but there are isolated examples from hydrolases , aldolases , oxidative enzymes, esterases and dehalogenases , among others. [ 11 ] One of the most prominent uses of biocatalysis in process chemistry today is in the synthesis of Januvia®, a DPP-4 inhibitor developed by Merck for the management of type II diabetes . The traditional process synthetic route involved a late-stage enamine formation followed by rhodium-catalyzed asymmetric hydrogenation to afford the API sitagliptin . This process suffered from a number of limitations, including the need to run the reaction under a high-pressure hydrogen environment, the high cost of the transition-metal catalyst, the difficult process of carbon treatment to remove trace amounts of catalyst, and insufficient stereoselectivity, which required a subsequent recrystallization step before final salt formation. [ 12 ] [ 13 ] Merck's process chemistry department contracted Codexis , a medium-sized biocatalysis firm, to develop a large-scale biocatalytic reductive amination for the final step of its sitagliptin synthesis. Codexis engineered a transaminase enzyme from the bacterium Arthrobacter through 11 rounds of directed evolution. The engineered transaminase contained 27 individual point mutations and displayed activity four orders of magnitude greater than the parent enzyme. Additionally, the enzyme was engineered to handle high substrate concentrations (100 g/L) and to tolerate the organic solvents, reagents and byproducts of the transamination reaction. This biocatalytic route successfully avoided the limitations of the chemocatalyzed hydrogenation route: the requirements to run the reaction under high pressure, to remove excess catalyst by carbon treatment and to recrystallize the product due to insufficient enantioselectivity were obviated by the use of a biocatalyst. Merck and Codexis were awarded the Presidential Green Chemistry Challenge Award in 2010 for the development of this biocatalytic route toward Januvia®. [ 14 ] In recent years, much progress has been made in the development and optimization of flow reactors for small-scale chemical synthesis (the Jamison Group at MIT and the Ley Group at Cambridge University, among others, have pioneered efforts in this field). The pharmaceutical industry, however, has been slow to adopt this technology for large-scale synthetic operations. For certain reactions, continuous processing may possess distinct advantages over batch processing in terms of safety, quality, and throughput.
A case study of particular interest involves the development of a fully continuous process by the process chemistry group at Eli Lilly and Company for an asymmetric hydrogenation to access a key intermediate in the synthesis of LY500307, [ 15 ] a potent ERβ agonist that is entering clinical trials for the treatment of patients with schizophrenia , in addition to a regimen of standard antipsychotic medications. In this key synthetic step, a chiral rhodium catalyst is used for the enantioselective reduction of a tetrasubstituted olefin. After extensive optimization, it was found that, to reduce the catalyst loading to a commercially practical level, the reaction required hydrogen pressure of up to 70 atm. The pressure limit of a standard chemical reactor is about 10 atm, although high-pressure batch reactors may be acquired at significant capital cost for reactions up to 100 atm. Especially for an API in the early stages of chemical development, such an investment clearly bears a large risk. An additional concern was that the hydrogenation product has an unfavorable eutectic point , so it was impossible to isolate the crude intermediate in more than 94 percent ee by a batch process. Because of this limitation, the process chemistry route toward LY500307 necessarily involved a kinetically controlled crystallization step after the hydrogenation to upgrade the enantiopurity of this penultimate intermediate to >99 percent ee. The process chemistry team at Eli Lilly successfully developed a fully continuous process to this penultimate intermediate, including reaction, workup and kinetically controlled crystallization modules (the engineering considerations implicit in these efforts are beyond the scope of this article). An advantage of flow reactors is that high-pressure tubing can be utilized for hydrogenation and other hyperbaric reactions; because the headspace of a batch reactor is eliminated, many of the safety concerns associated with running high-pressure reactions are obviated by the use of a continuous process reactor. Additionally, a two-stage mixed suspension-mixed product removal (MSMPR) module was designed for the scalable, continuous, kinetically controlled crystallization of the product, so it was possible to isolate the intermediate in >99 percent ee, eliminating the need for an additional batch crystallization step. This continuous process afforded 144 kg of the key intermediate in 86 percent yield, comparable with the 90 percent isolated yield of the batch process. This 73-liter pilot-scale flow reactor (occupying less than 0.5 m³ of space) achieved the same weekly throughput as theoretical batch processing in a 400-liter reactor. The continuous flow process therefore demonstrates advantages in safety, efficiency (eliminating the need for batch crystallization), and throughput, compared with a theoretical batch process.
https://en.wikipedia.org/wiki/Process_chemistry
Process Decision Program Chart (PDPC) is a technique designed to help prepare contingency plans . The emphasis of the PDPC is to identify the consequential impact of failure on activity plans, and create appropriate contingency plans to limit risks. Process diagrams and planning tree diagrams are extended by a couple of levels when the PDPC is applied to the bottom level tasks on those diagrams. From the bottom level of some activity box, the PDPC adds levels for:
https://en.wikipedia.org/wiki/Process_decision_program_chart
Process development execution systems ( PDES ) are software systems used to guide the development of high-tech manufacturing technologies like semiconductor manufacturing, MEMS manufacturing, photovoltaics manufacturing, biomedical devices or nanoparticle manufacturing. Software systems of this kind have similarities to product lifecycle management (PLM) systems: they guide the development of new or improved technologies from conception, through development and into manufacturing. Furthermore, they borrow concepts from manufacturing execution systems (MES) but tailor them for R&D rather than for production. PDES integrate people (with different backgrounds from potentially different legal entities), data (from diverse sources), information, knowledge and business processes. Documented benefits of process development execution systems include: A process development execution system (PDES) is a system used by companies to perform development activities for high-tech manufacturing processes. Software systems of this kind leverage diverse concepts from other software categories, like PLM, manufacturing execution systems (MES) and ECM, but focus on tools to speed up technology development rather than production. A PDES is similar to a manufacturing execution system (MES) in several ways. The key distinguishing factor of a PDES is that it is tailored for steering the development of a manufacturing process, while an MES is tailored for executing volume production using the developed process. Therefore, the toolset and focus of a PDES is on lower volume but higher flexibility and experimentation freedom. The tools of an MES are more focused on less variance, higher volumes, tighter control and logistics. Both types of application software increase the traceability, productivity, and quality of the delivered result. For PDESs, quality refers to the capability of the process to perform without failure under a wide range of conditions, i.e. the robustness of the developed manufacturing process. For MESs, quality refers to the quality of the manufactured good or commodity. Additionally, both software types share functions including equipment tracking, product genealogy, labour and item tracking, costing, electronic signature capture, defect and resolution monitoring, executive dashboards , and various other reporting solutions. In contrast to PLM systems, PDES typically address the collaboration and innovation challenges with a bottom-up approach. They start out with the details of manufacturing technologies (like PPLM ): a single manufacturing step with all its physically aware parameterization, then integrate steps into sequences, sequences into devices, devices into systems, etc. Other rather similar software categories are laboratory information management systems (LIMS) and laboratory information systems (LIS). PDESs offer a wider set of functionalities, e.g. virtual manufacturing techniques, while they are typically not integrated with the equipment in the laboratory. PDESs have many parts and can be deployed on various scales, from simple work-in-progress tracking to a complex solution integrated throughout an enterprise development infrastructure. The latter connects with other enterprise systems like enterprise resource planning (ERP) systems, manufacturing execution systems (MESs), product lifecycle management (PLM), supervisory control and data acquisition (SCADA) solutions and scheduling and planning systems (both long-term and short-term tactical).
New ideas for manufacturing processes (for new goods/commodities or improved manufacturing) are often based on, or can at least benefit from, previous developments and recipes already in use. The same is true when developing new devices, for example a MEMS sensor or actuator. A PDES offers an easy way to access these previous developments in a structured manner. Information can be retrieved faster, and previous results can be taken into account more efficiently. A PDES typically offers means to display and search for result data from different viewpoints, and to categorise the data according to different aspects. These functionalities are applied to all result data, such as materials, process steps, machines, experiments, documents and pictures. The PDES also provides a way to relate entities belonging to the same or a similar context and to explore the resulting information. In the assembly phase from process steps to process flows, a PDES helps to easily build, store, print, and transfer new process flows. By providing access to previously assembled process flows, the designer is able to use those as building blocks or modules in the newly developed flow. The use of standard building blocks can dramatically reduce the design time and the probability of errors. A PDES demonstrates its real benefits in the verification phase. Knowledge (for example, in semiconductor device fabrication: clean before deposition; after polymer spin-on, no temperature higher than 100 °C until the resist is removed) is provided in a format that can be interpreted by a computer as rules. If a domain expert enters the rules for his or her process steps, all engineers can later use these rules to check newly developed process flows, even if the domain expert is not available. For a PDES, this means it has to be able to store such rules and evaluate them against newly assembled process flows. The processing rule check gives no indication about the functionality or even the structure of the produced good or device. In the area of semiconductor device fabrication , the techniques of semiconductor process simulation / TCAD can provide an idea of the produced structures. To support this 'virtual fabrication', a PDES is able to manage simulation models for process steps. Usually the simulation results are seen as standalone data; to rectify this, PDESs are able to manage the resulting files in combination with the process flow. This enables the engineer to easily compare the expected results with the simulated outcome. The knowledge gained from the comparison can again be used to improve the simulation model. After virtual verification, the device is produced in an experimental fabrication environment. A PDES allows a transfer of the process flow to the fabrication environment (for example, in semiconductors, the FAB ). This can be done by simply printing out a runcard for the operator or by interfacing to the manufacturing execution system (MES) of the facility. On the other hand, a PDES is able to manage and document last-minute changes to the flow, such as parameter adjustments made during fabrication. During and after processing, many measurements are taken. The results of these measurements are often produced in the form of files, such as images or simple text files containing rows and columns of data. The PDES is able to manage these files, to link related results together, and to manage different versions of certain files, for example reports.
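A design-rule check of the kind described above (for example, "after polymer spin-on, no step hotter than 100 °C until the resist is removed") can be expressed as a simple scan over a process flow. The following sketch is a hypothetical illustration; real PDESs use richer rule languages and data models, and the step names below are invented.

```python
# Hypothetical sketch: checking a process flow against a machine-readable rule.
# Each step is (name, attributes); the rule guards the window between
# polymer spin-on and resist removal.
flow = [
    ("clean",           {"temp_c": 25}),
    ("polymer_spin_on", {"temp_c": 25}),
    ("soft_bake",       {"temp_c": 90}),
    ("deposition",      {"temp_c": 300}),   # violates the rule
    ("resist_strip",    {"temp_c": 80}),
]

def check_max_temp_window(flow, start, end, max_temp_c):
    violations, in_window = [], False
    for name, attrs in flow:
        if name == start:
            in_window = True
        elif name == end:
            in_window = False
        elif in_window and attrs["temp_c"] > max_temp_c:
            violations.append((name, attrs["temp_c"]))
    return violations

print(check_max_temp_window(flow, "polymer_spin_on", "resist_strip", 100))
# -> [('deposition', 300)]
```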
Paired with flexible textual and graphical retrieval and search methods, a PDES provides the mechanism to view and assess the accumulated data, information and knowledge from different perspectives. It provides insight into both the information aspects and the time aspects of previous developments. Development activities within high-tech industries are an increasingly collaborative effort. This leads to the need to exchange information between partners or to transfer process intellectual property from a vendor to a customer. PDESs support this transfer while being selective, to protect the intellectual property rights of the company.
https://en.wikipedia.org/wiki/Process_development_execution_system
Process duct work conveys large volumes of hot, dusty air from processing equipment to mills, baghouses and other process equipment. Process duct work may be round or rectangular. Although round duct work costs more to fabricate than rectangular duct work, it requires fewer stiffeners and is favored in many applications over rectangular ductwork. The air in process duct work may be at ambient conditions or may operate at up to 900 °F (482 °C). Process ductwork varies in size from 2 ft diameter to 20 ft diameter, or to perhaps 20 ft by 40 ft rectangular. Large process ductwork may fill with dust, depending on slope, to up to 30% of cross section, which can weigh 2 to 4 tons per linear foot. Round ductwork is subject to suction collapse and requires stiffeners to minimize this, but is more efficient in material than rectangular duct work. There are no comprehensive design references for process duct work design. The ASCE reference for power plant duct design gives some general guidance, but does not give designers sufficient information to design process duct work. Structural process ductwork carries large volumes of high-temperature, dusty air between pieces of process equipment. The design of this ductwork requires an understanding of the interaction of heat softening of metals, the potential effects of dust buildup in large ductwork, and structural design principles. There are two basic shapes for structural process ductwork: rectangular and round. Rectangular ductwork is covered by the ASCE publication "The Structural Design of Air & Gas Ducts for Process Power Stations and Industrial Applications". In the practical design of primarily round structural process ductwork in the cement, lime and lead industries, the duct size involved ranges from 18 inches (46 cm) to 30 feet (9.1 m). The air temperature may vary from ambient to 1,000 °F (538 °C). Process ductwork is subject to large loads due to dust buildup, fan suction pressure, wind, and earthquake forces. As of 2009, 30 ft diameter process ductwork may cost $7,000 per ton. Failure to properly integrate design forces may lead to catastrophic duct collapse; overdesign of ductwork is expensive. [ 1 ] The structural design of ductwork plate is based on buckling of the plate element. Round ductwork plate design is based on diameter-to-plate-thickness ratios, and the allowable stresses are contained in multiple references such as US Steel Plate, ASME/ANSI STS-1, SMACNA, Tubular Steel Structures, and others. In actuality, round ductwork in bending is approximately 30% stronger than a similar shape in compression; however, the same allowable stresses are used in bending as in compression. Round ducts typically require stiffeners at roughly 3-diameter spacing, or roughly 20 ft on center, for wind ovaling, fabrication, and truck shipping requirements. Round ducts larger than 6 feet 6 inches (1.98 m) in diameter (with 1/4" plate) require support ring stiffeners. Smaller-diameter ducts may not require support ring stiffeners, but may be designed with saddle supports. When stiffener rings are required, they are traditionally designed based on "Roark", although this reference is quite conservative. Round duct elbow allowable stresses are lower than the allowable stresses for straight duct by a factor K = 1.65/h^(2/3), where h = t·R/r², t is the duct plate thickness, R is the elbow radius, and r is the duct radius. This equation, or similar equations, is found in Tubular Steel Structures, section 9.9.
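As a numeric illustration of the elbow factor just quoted (a sketch only; the duct size, plate thickness, and elbow radius are assumed example values):

```python
# Elbow reduction factor K = 1.65 / h^(2/3), h = t*R / r^2, as quoted above
# from Tubular Steel Structures. Inputs in consistent units (inches here).
def elbow_reduction_factor(t_plate_in, elbow_radius_in, duct_radius_in):
    h = t_plate_in * elbow_radius_in / duct_radius_in ** 2
    return 1.65 / h ** (2.0 / 3.0)

# Example: 8 ft (96 in) diameter duct, 1/4 in plate, elbow radius of 1.5 diameters.
r = 96.0 / 2.0        # duct radius, in
R = 1.5 * 96.0        # elbow radius, in
K = elbow_reduction_factor(0.25, R, r)
print(f"K = {K:.1f}")  # ~26: the effective elbow section property is I/K
```

The large factor for a thin, large-diameter elbow is why elbow allowable stresses are treated separately from those of straight duct.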
Rectangular ductwork design properties are based on width-to-thickness ratios. This is normally simplified to an effective flange width of 16 times the plate thickness (16·t) taken from the corner elements or corner angle stiffeners, although in reality the entire duct top and side plate does participate somewhat in the duct section properties. Duct logic is the process of planning for duct thermal movement, combined with planning to minimize duct dust dropout. Ducts move with changes in internal temperature. Ducts are assumed to have the same temperature as their internal gases, which may be up to 900 °F. If the internal duct temperature exceeds 1000 °F, refractory lining is used to minimize the duct surface temperature. At 1000 °F, ducts may grow approximately 5/8 inch per 10 feet of length. This movement must be carefully planned for, with cloth (or metal) expansion joints at each equipment flange and one joint for each straight section of ductwork. Sloping ductwork at or above the dust angle of repose will minimize dust buildup; therefore, many ducts carrying high dust loads slope at 30 degrees or steeper. To minimize pressure loss in duct elbows, the typical elbow radius is 1 1/2 times the duct diameter. In cases where this elbow radius is not feasible, turning vanes are added to the duct. Process ductwork is often large (6-foot to 18-foot diameter), carrying large volumes of hot, dirty gases at velocities of 3,000 to 4,500 feet per minute. The fans used to move these gases are also large, 250 to 4,000 horsepower. Therefore, minimizing duct pressure drop by minimizing turbulence at elbows and transitions is important. Duct elbow radius is usually 1 1/2 to 2 times the duct size, and the side slopes of transitions are typically 10 to 30 degrees. Note: the duct gas velocity is chosen to minimize duct dust dropout. Cement and lime plant duct velocity at normal operations is 3,000 to 3,200 feet per minute; lead plant velocities are 4,000 to 4,500 feet per minute, as the dust is heavier. Other industries, such as grain, have lower gas velocities. Higher duct gas velocities may require more powerful fans than lower velocities. For cement plant and lime plant process ductwork, duct loads are a combination of:
1) Duct internal dust loads, which depend on slope. For ducts sloping 0 to 30 degrees, internal dust is taken as 25% of the duct cross section. For ducts sloping 30 to 45 degrees, dust loads are reduced to 15% of the cross section, plus internal duct coating loads. For ducts sloping 45 to 85 degrees, internal dust is 5% of the cross section, plus internal duct coating loads. For ducts sloping over 85 degrees, only internal duct coating loads normally apply. Because of the potential for high dust loading, most process ductwork is run at a 30 to 45 degree slope.
2a) Dust loading in non-process ducts (2-foot diameter and smaller), such as conveyor venting ducts, which are sometimes run horizontally and can fill to 100% of cross section.
2b) Power plant internal duct dust loads, which are coordinated with the client and are sometimes taken as 1 to 2 feet of internal ash loading.
3) Internal coating dust loads, which are sometimes taken as a 2 in (51 mm) coating of dust on the internal perimeter.
4) Duct suction pressure loads. Most process ducts have design pressures of 25 to 40 inches (640 to 1,020 mm) of water. This suction pressure acts to collapse the duct side walls. It also acts perpendicular to the duct expansion joints, creating an additional load on the duct supports that adds to the dead and live loads.
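The suction pressure loads in item 4 translate into a simple end thrust at each expansion joint: the pressure, converted from inches of water to pounds per square foot, acts on the full duct cross section. A minimal sketch with assumed numbers (the conversion factor of roughly 5.2 psf per inch of water is standard):

```python
import math

PSF_PER_IN_H2O = 5.2  # pounds per square foot per inch of water column

def end_thrust_lb(suction_in_h2o, duct_diameter_ft):
    """Axial force on the duct cross section at an expansion joint."""
    pressure_psf = suction_in_h2o * PSF_PER_IN_H2O
    area_ft2 = math.pi * (duct_diameter_ft / 2.0) ** 2
    return pressure_psf * area_ft2

# An 8 ft diameter duct across the 25 to 40 in design pressure range quoted above:
print(round(end_thrust_lb(25, 8)))  # ~6,500 lb
print(round(end_thrust_lb(40, 8)))  # ~10,500 lb
```

Figures of this size are consistent with the roughly 8,000 lb end load cited for an 8-foot duct below.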
Please note: duct pressure loads vary with temperature, as the gas density varies with temperature. A duct pressure of 25 inches of H2O at room temperature may become 12 to 6 inches at duct operating temperatures.
5) Duct wind loads.
6) Duct seismic loads.
7) Duct snow loads, normally inconsequential, as snow will melt quickly unless the plant is in shutdown mode.
8) Top-of-duct dust loads, often taken as zero, since plant dust generation is much less now than in the past.
9) Duct suction pressure end loads, which act perpendicular to the end of the duct cross section and can be significant. For a duct designed for 25" of water at a startup temperature of 70 °F, on an 8-foot diameter duct, this is equal to roughly 8,000 pounds at each end of the duct.
The majority of cement plant process ductwork is round. This is because the round duct shape does not bend between circumferential stiffeners; therefore bending stiffeners are not required, and round ductwork requires fewer and lighter intermediate stiffeners than rectangular ductwork. Round cement plant duct stiffeners are sometimes about 5% of duct plate weight; rectangular cement plant duct stiffeners are 15 to 20% of duct plate weight. Power plant ductwork is often larger. Power plant ductwork is usually rectangular, with stiffener weights of 50% (or more) of duct plate weight (this is based on personal experience, and may vary with loads, duct size, and industry standards). Large, round process ductwork is usually fabricated from 1/4-inch (6.4 mm) mild steel plate, with ovaling stiffening rings at 15 to 20 ft (4.6 to 6.1 m) on center, regardless of diameter. These lengths allow for resistance to wind ovaling and to out-of-round distortion when shipping by truck, and also work well with fabricator equipment. The typical intermediate rings are designed for wind bending stresses, reduced as required by the yield stress reduction at working temperatures. The typical rings are fabricated from rolled steel plate, angles or tees welded together to create the required ring cross section; rings are fabricated from any combination of plate, tee or W shape that the shop can roll, usually in mild carbon steel, ASTM A36 plate, or equivalent. The location of ring butt welds should preferably be offset roughly 15 degrees from the point of maximum stress, to minimize the effect of weld porosity on the weld allowable stress. See US Steel Plate, volume II, for the empirical ring spacing and wind bending stress: spacing Ls = 60·sqrt[Do (ft) · t plate (in) / wind pressure (psf)], and required ring section = p · Ls (ft) · Do (ft) · Do (ft) / Fb (with Fb about 20,000 psi at ambient temperature). This reference is older, but a good starting point for duct design. SMACNA (2nd edition), chapter 4, has many useful formulas for round ducts: allowable stresses, ring spacing, and the effects of dust, ice, and live loads. The basic factor of safety in SMACNA, 3, is larger than the 1.6 typically used on structural engineering projects. Under SMACNA, the critical ring spacing is L = 1.25·D (ft)·sqrt(D (ft)/t (in)), which is similar to the Tubular Steel Structures spacing L = 3.13·R·sqrt(R/t). In effect, using the spacing Ls = 60·sqrt[Do · t / wind pressure] is conservative.
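A short numeric sketch comparing the two spacing rules just quoted (the diameter, plate thickness, and wind pressure are assumed example values):

```python
import math

def ring_spacing_us_steel_ft(diameter_ft, t_plate_in, wind_psf):
    # Ls = 60 * sqrt(Do * t / q), as quoted above from US Steel Plate, volume II
    return 60.0 * math.sqrt(diameter_ft * t_plate_in / wind_psf)

def critical_spacing_smacna_ft(diameter_ft, t_plate_in):
    # L = 1.25 * D * sqrt(D / t), as quoted above from SMACNA
    return 1.25 * diameter_ft * math.sqrt(diameter_ft / t_plate_in)

D, t, q = 8.0, 0.25, 30.0  # 8 ft duct, 1/4 in plate, 30 psf wind (assumed)
print(f"US Steel Plate spacing:  {ring_spacing_us_steel_ft(D, t, q):.1f} ft")  # ~15.5 ft
print(f"SMACNA critical spacing: {critical_spacing_smacna_ft(D, t):.1f} ft")   # ~56.6 ft
```

With these assumed numbers the empirical spacing falls well below the SMACNA critical spacing, consistent with the statement that the US Steel Plate rule is conservative, and it matches the 15 to 20 ft on-center practice quoted earlier.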
Allowable bending and compression stress in ducts can come from several sources. See API 560 for the design of wind ovaling stiffeners. See Tubular Steel Structures, chapters 2, 9 and 12, for the allowable stresses of thin round ducts, elbows and elbow softening coefficients, and for some procedures for the design of duct support rings. These allowable stresses can be verified with selected chapters of US Steel Plate, Blodgett's Design of Plate Structures, Roark & Young, or API 650. Round duct support rings are often spaced at three diameters, or as required, at up to about 50 ft (15 m) centers. At this spacing the main support rings are designed for the sum of the suction pressure stresses and the support bending moments. Round ductwork allowable compressive stress is Fa = 662/(D/t) + 0.399·Fy (Tubular Steel Structures, chapter 2); other references use similar equations. Typical cement plant ductwork pressure drops are as follows: 60% to 80% of the high-temperature process ductwork pressure drop occurs in the process equipment (baghouses, mills and cyclones). Since one motor horsepower costs roughly $1,000 per year (US$, 2005), duct efficiency is important, and minimizing duct pressure drop can reduce plant operating costs. Most non-equipment pressure drop occurs at transitions and changes of direction (elbows). The best way to minimize duct pressure drop, and so plant operating costs, is to use elbows with a ratio of elbow radius to duct diameter of 1.5 or more (for a 15-foot duct, the elbow radius would therefore equal or exceed 22.5 ft). Process duct pressure drops (US practice) are usually measured in inches of water. A typical duct operates at about -25 inches (roughly 130 psf) total suction pressure, with roughly 75% of the pressure loss in the baghouse, 10% lost in duct friction, and 15% (nominal) lost in elbow turbulence. A major consideration in duct design is to minimize pressure losses and turbulence, as poor duct geometry increases turbulence and plant electrical usage. Round duct suction pressure collapse, in ducts over 6 feet in diameter, is prevented with rings at supports and at roughly 3-diameter centers. Round duct support rings are traditionally designed from the formulas found in Roark & Young. However, this reference is based on point loads on rings, while actual duct ring loads come from nearly uniform bottom dust; therefore, these formulas can be shown, with RAM or other analysis methods, to have a conservatism factor of roughly 2 above the stresses given in Roark. The duct ring dead, live and dust forces need to be combined with the suction pressure stresses. Suction pressure forces concentrate on the rings, as they are the stiffest element present. Round ductwork elbow allowable stresses are reduced due to the elbow curvature, and various references give similar results for this reduction. Tubular Steel Structures, section 9.9, gives the (Beskin) reduction factor K = 1.65/h^(2/3), where h = t (plate) · R (elbow) / r² (duct) (where suction pressures are smaller); this K reduces the moment of inertia of the duct to I effective = I/K. Round duct rings are fabricated from rolled tees, angles, or plates, welded into the shape required; typically these are designed with ASTM A36 properties. The typical round duct plate factor of safety (traditional factor of safety) should be 1.6, because duct plate bending and buckling are mostly controlled by the typical intermediate ring design. The typical intermediate ring factor of safety should be 1.6, because there is ample evidence in various codes (API 560, etc.)
that intermediate rings designed for wind ovaling and suction pressure combinations are safe. The typical main support ring factor of safety, if designed by the "Roark" formulas, should be 1.6 (if constructed to the normal Roark 1% out-of-round tolerance), because it can be shown by various methods that these formulas are at least a factor of two above three-dimensional duct ring analysis results. The typical duct elbow factor of safety should be above 1.6, because it can be difficult to show that shipping out-of-round for elbows corresponds to the normal 1% out-of-round tolerance (various code and reference notes). Round structural tubes are sometimes used to support and contain conveyors transporting coal, lead concentrate, or other dusty material over county roads, plant access roads, or river barge loading facilities. When tubes are used for these purposes they may be 10'-6" to 12 feet in diameter and up to 250 feet long, using up to 1/2" plate and ovaling ring stiffeners at 8-foot to 20-foot centers. On one such project, the author's firm added an L8x8x3/4 at the top 45-degree location to stiffen the plate near the point of maximum stress for tubes (per Timoshenko and others). Some vendors provide conveyor galleries for the same purpose. Rectangular cement plant ductwork is often 1/4 in (6.4 mm) duct plate, with stiffeners spaced at about 2'-6", depending on suction pressure and temperature; thinner plate requires closer stiffener spacing. The stiffeners are usually considered pinned-end. Power plant ductwork can be 5/16" duct plate, with "fixed-end" W stiffeners at roughly 2'-5" spacing. Because rectangular duct plate bends, stiffeners are required at reasonably close spacing. Duct plate 3/16" or thinner may dishpan or make noise, and should be avoided. Rectangular duct section properties are calculated from the distance between the upper and lower duct corners of the ductwork. The flange areas are based on the size of the corner angles plus a duct plate width based on the plate thickness ratio of 16·t (see AISC structural duct design below); for section properties the "web" plate is ignored. The typical stiffener spacing for cement plant duct work is usually based on simple-span duct plate bending, M = w·L²/8, because creating a fixed-fixed condition requires difficult-to-design plate attachments. Power plant and other larger ductwork usually goes through the expense of creating "fixed-end" corner moments. All stiffeners for rectangular ductwork require consideration of lateral-torsional bracing. Ducts are usually designed as if the duct plate and stiffener temperatures match the internal duct gas temperatures. For mild carbon steel (ASTM A36), the design yield stress at 300 °F is 84% of the room-temperature value; at 500 °F it is 77%; and at 700 °F it is about 71%. Temperatures above 800 °F may cause mild carbon steel to warp, because in this range the crystal lattice structure of mild carbon steel changes (reference: US Steel Plate, elevated temperature steel). For ductwork operating above 800 °F, the duct plate material should resist warping. Either Cor-ten or Type 304 stainless steel may be used for duct plate between 800 °F and 1200 °F; Cor-ten plate is less expensive than stainless steel. Cor-ten steels have essentially the same yield stress ratios as mild carbon steel through 700 °F.
At 900 °F, the yield stress ratio is 63%; at 1100 °F, it is 58% (AISC tables). Cor-ten steels should not be used above 1100 °F. Unless the duct and its stiffeners are insulated, the stiffeners can be designed in ASTM A36 steel, even at a duct temperature of 1000 °F. This is because the stiffener temperature is cooler than the duct gas temperature by several hundred degrees Fahrenheit; duct stiffener temperatures are assumed to drop about 100 °F per inch of depth (when uninsulated) (no reference available). As heat-recovery practice at plants has changed over the years, ductwork now connects more pieces of equipment than ever before. Care needs to be taken to avoid condensation of moisture in plant ductwork. Once condensation occurs, it may absorb CO2 and other components in the gas stream and become corrosive to low carbon steel. Several methods are used to avoid this problem; sulfuric acid attack may require stainless steel ducts, fiberglass ducts, etc. Many plant exhaust gases contain dusts with high wear potential. Typically, wear-resistant steels are not useful in resisting duct wear, particularly at higher temperatures: wear-resistant steel ducts are hard to fabricate, and refractory coatings are usually less expensive than wear-resistant steel ductwork. Each industry may have different approaches to resisting duct wear. Cement plant clinker dust is more abrasive than sand. In high-temperature ducts, or ducts with wear potential, 2 1/2-inch refractory is often anchored to the duct plate with V anchors at roughly 6" on center to resist (a) temperature, (b) wear at elbows, or a combination of these effects. Occasionally, ceramic tiles or ceramic mortars are anchored to ductwork to resist temperature and wear. Grain hulls are also very abrasive; sometimes plastic liners are used to resist wear in grain facilities, where temperatures are lower than in mineral processing facilities. Duct segments are typically separated with metal or fabric expansion joints. These joints are designed and detailed for the duct suction pressure, temperatures, and movements between duct segments. Fabric joints are often chosen to separate the duct segments because they usually cost 40% less than metal joints; metal joints also place additional loads onto duct segments, since they prefer axial movement and impose significant lateral loads. Fabric joints cost $100 to $200 per square foot of joint (2010); metal joints can cost twice this amount. Fabric expansion joint forces on the duct are assumed to be 0 lb per inch. Metal expansion joint forces for a 24-inch diameter duct are on the order of 850 lb per inch of movement for the axial spring rate, and 32,500 lb per inch for lateral movement; these coefficients vary with duct size and joint thickness, and become larger for rectangular ducts (based on one recent job). Fabric expansion joint life is about 5 years under field conditions, and many plants prefer access platforms near the joints for replacing the joint fabric. Software is currently available to model ductwork in 3D. This software needs to be used with care, as the design rules for width-to-thickness ratios, elbow softening coefficients, etc., may not be input into the design program; it is easy to draw ducts in 3D without correct dimensioning, so drawings should be laid out with care. Special duct loading conditions may occur outside of the dead, live, dust and temperature conditions.
Ductwork associated with coal mills, coke grinding facilities, and to some extent grain processing facilities may be subject to explosive dusts. Ductwork designed for explosive dust is typically designed for 50 psi internal pressure and will typically have one explosion relief vent per duct section. The likelihood of a dust explosion on an indirect coal mill system is, over time, 100%. Such an explosion can generate a plume of fire 5 to 15 ft in diameter and 20 to 30 ft long; therefore, access to areas surrounding explosion vents should be limited with locked access. Ducts are shipped from the fabricating facility to job sites on trucks, by rail, or on barges, in lengths accommodating the mode of transport, often in 20-foot sections. These sections are connected with flanges or weld straps. Flanges are provided at expansion joints or to join low-stress duct sections. Flanges may be difficult to design for the duct plate forces: flange gaskets add flexibility to the flanges, which makes their ability to carry forces problematic. Therefore, weld straps (short steel straps) are commonly used for higher-stress duct plate connections. A close look at the fixed duct support photo shows several properties of round ring supports. There are stiffeners at roughly 60 degrees on center. This duct ring is fabricated from two rolled WTs, welded at the center. This is a smaller duct, with light loads, so the bottom flange was slightly modified by support clearance requirements. A small gap is shown for placing the duct PTFE slide bearing, although a fixed support could also be inserted in this gap. In the background of this photo is a duct flange. The duct flange normally has 3/4" bolts at 6" nominal spacing. Duct flange angle thickness needs to be designed for duct plate tensile stresses, as flanges will bend; 5/16" or 3/8" angle thicknesses are common. The photo of round duct elbows, transitions, and stiffeners shows the duct elbow radius at 1 1/2 to 2 times the duct diameter. The round duct has ovaling and shipping rings at 20-foot nominal spacing, and larger support rings at supports. The Y split has suction stiffeners at the duct intersection. Note the 3,000 HP fan inlet transition and the stack inlet transition also shown in this photo. The adjacent photo also shows several principles of process ductwork: a large baghouse inlet duct, tapered to minimize dust dropout. A shallow taper such as this also reduces pressure losses when changing duct diameters. Note that the rectangular duct ring spacing is roughly 2'-6" on center, and the round duct is stiffened near each branch duct. There are several references for process duct work; these references can be used together to review duct design, and other references often used for duct design give similar results. Finite element design of process duct work is possible, but an understanding of design theory and allowable stresses is required to properly interpret the finite element model. Cement, lime and lead industry accepted dust loads (for structural loading) are as follows. Process ductwork is intended to convey large volumes of dust, and some of this dust will settle to the bottom of the duct during power outages and normal operation. The percentage of duct cross section filled with dust is often assumed as given earlier: 25% for slopes below 30 degrees, 15% for slopes of 30 to 45 degrees, and 5% for slopes above 45 degrees. To minimize the buildup of dust, each material has a minimum carrying velocity: lime about 2,800 fpm, cement about 3,200 fpm, and lead dust about 4,200 fpm.
Dust density depends on the industry. Normally these are: cement dust, 94 pcf; lime dust, 50 pcf; lead oxide dust, 200 pcf. Duct wear: high-temperature ductwork often carries large volumes of hot, abrasive dust. Often the design temperature of the duct, or the abrasiveness of the dust, prevents the use of abrasion-resisting steels. In these cases refractory can be anchored inside the duct, or abrasion-resisting tiles, with weld nuts, are welded to the inside of the ductwork. Duct thermal movement: duct steels expand with temperature. Each type of steel may have a different coefficient of thermal expansion; typical mild carbon steels expand with a coefficient of about 0.0000065 per °F (see AISC).
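That coefficient reproduces the growth rule of thumb quoted earlier in the article; a minimal sketch (the installation temperature of 70 °F is an assumed value):

```python
# Thermal growth of a duct run using the mild-steel coefficient quoted above.
ALPHA = 6.5e-6  # in/in per deg F, mild carbon steel

def thermal_growth_in(length_ft, t_operating_f, t_install_f=70.0):
    return ALPHA * (t_operating_f - t_install_f) * length_ft * 12.0

# 10 ft of duct heated from 70 F to 1000 F:
print(f"{thermal_growth_in(10, 1000):.2f} in")  # ~0.73 in, near the 5/8 in per 10 ft rule of thumb
```

This is the movement the expansion joints described above must absorb at each straight duct section.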
https://en.wikipedia.org/wiki/Process_duct_work
Process engineering is a field of study focused on the development and optimization of industrial processes. It consists of the understanding and application of the fundamental principles and laws of nature to allow humans to transform raw material and energy into products that are useful to society, at an industrial level. [ 1 ] By taking advantage of the driving forces of nature such as pressure, temperature and concentration gradients, as well as the law of conservation of mass, process engineers can develop methods to synthesize and purify large quantities of desired chemical products. [ 1 ] Process engineering focuses on the design, operation, control, optimization and intensification of chemical, physical, and biological processes. The work of process engineers involves analyzing the chemical makeup of various ingredients and determining how they might react with one another. A process engineer can specialize in a number of areas. Process engineering involves the utilization of multiple tools and methods. Depending on the exact nature of the system, processes need to be simulated and modeled using mathematics and computer science. Processes where phase change and phase equilibria are relevant require analysis using the principles and laws of thermodynamics to quantify changes in energy and efficiency. In contrast, processes that focus on the flow of material and energy as they approach equilibria are best analyzed using the disciplines of fluid mechanics and transport phenomena. Disciplines within the field of mechanics need to be applied in the presence of fluids or porous and dispersed media. Materials engineering principles also need to be applied, when relevant. [ 1 ] Manufacturing in the field of process engineering involves an implementation of process synthesis steps. [ 2 ] Regardless of the exact tools required, the process design is then captured in a process flow diagram (PFD), in which material flow paths, storage equipment (such as tanks and silos), transformations (such as distillation columns, receiver/head tanks, mixing, separations, pumping, etc.) and flowrates are specified, as well as a list of all pipes and conveyors and their contents, material properties such as density, viscosity and particle-size distribution, flowrates, pressures, temperatures, and materials of construction for the piping and unit operations. [ 1 ] The process flow diagram is then used to develop a piping and instrumentation diagram (P&ID), which graphically displays the actual process occurring. P&IDs are meant to be more complex and specific than a PFD [ 3 ] and represent a less muddled approach to the design. The P&ID is then used as a basis of design for developing the "system operation guide" or "functional design specification", which outlines the operation of the process. [ 4 ] It guides the process through the operation of machinery, safety in design, programming and effective communication between engineers. [ 5 ] From the P&ID, a proposed layout (general arrangement) of the process can be shown from an overhead view (plot plan) and a side view (elevation), and other engineering disciplines become involved, such as civil engineers for site work (earth moving), foundation design, concrete slab design work, structural steel to support the equipment, etc.
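The stream data a PFD carries can be pictured as a simple table. A minimal sketch (not any particular tool's format; all names and numbers are invented for illustration), including the overall mass-balance check such tables must satisfy at steady state:

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    mass_flow_kg_h: float
    temperature_c: float
    pressure_bar: float
    density_kg_m3: float

def mass_balance_error(inlets, outlets):
    """Overall steady-state mass balance residual across one unit operation."""
    return sum(s.mass_flow_kg_h for s in inlets) - sum(s.mass_flow_kg_h for s in outlets)

feed = Stream("feed", 1000.0, 25.0, 1.0, 998.0)
tops = Stream("tops", 400.0, 80.0, 1.0, 790.0)
bottoms = Stream("bottoms", 600.0, 110.0, 1.1, 960.0)
print(mass_balance_error([feed], [tops, bottoms]))  # 0.0 -> the balance closes
```

Flowsheeting tools automate exactly this kind of bookkeeping across every unit in the diagram.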
All previous work is directed toward defining the scope of the project, then developing a cost estimate to get the design installed, and a schedule to communicate the timing needs for engineering, procurement, fabrication, installation, commissioning, startup, and ongoing production of the process. Depending on the accuracy required of the cost estimate and schedule, several design iterations are generally provided to customers or stakeholders, who feed back their requirements. The process engineer incorporates these additional instructions (scope revisions) into the overall design, and additional cost estimates and schedules are developed for funding approval. Following funding approval, the project is executed via project management. [ 6 ] Process engineering activities can be divided into several disciplines. [ 7 ] Various chemical techniques have been used in industrial processes since time immemorial. However, it was not until the advent of thermodynamics and the law of conservation of mass in the 1780s that process engineering was properly developed and implemented as its own discipline. The body of knowledge that is now known as process engineering was then forged out of trial and error throughout the industrial revolution. [ 1 ] The term process, as it relates to industry and production, dates back to the 18th century. During this time period, demands for various products began to increase drastically, and process engineers were required to optimize the processes by which these products were created. [ 1 ] By 1980, the concept of process engineering had emerged from the fact that chemical engineering techniques and practices were being used in a variety of industries. By this time, process engineering had been defined as "the set of knowledge necessary to design, analyze, develop, construct, and operate, in an optimal way, the processes in which the material changes". [ 1 ] By the end of the 20th century, process engineering had expanded from chemical engineering-based technologies to other applications, including metallurgical engineering, agricultural engineering, and product engineering.
https://en.wikipedia.org/wiki/Process_engineering
Process flowsheeting is the use of computer aids to perform steady-state heat and mass balancing, sizing and costing calculations for a chemical process. It is an essential and core component of process design. The process design effort may be split into three basic steps: synthesis, analysis, and optimization. Synthesis is the step in which the structure of the flowsheet is chosen; it is also in this step that one initializes values for the variables one is free to set. Analysis is usually made up of three steps. Optimization involves both structural optimization of the flowsheet itself and optimization of parameters within a given flowsheet. In the former, one may alter the equipment used and/or its connections with other equipment; in the latter, one can change the values of parameters such as temperature and pressure. Parameter optimization is at a more advanced stage of theory than flowsheet (structural) optimization. The first step in the sequence leading to the construction of a process plant and its use in the manufacture of a product is the conception of a process. The concept is embodied in the form of a flow sheet, and process design then proceeds on the basis of the flow sheet chosen. Physical property data are the other component needed for process design, apart from a flow sheet. The result of process design is a process flow diagram (PFD). Detailed engineering for the project and vessel specifications then begin. Process flowsheeting ends at the point of generation of a suitable PFD. [ 1 ] General-purpose flowsheeting programs became usable and reliable around 1965-1970.
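The steady-state mass balancing such programs perform can be illustrated with a toy recycle loop solved by successive substitution, a standard tear-stream technique in flowsheeting; every number and name below is an assumed example, not drawn from the article:

```python
# A reactor with single-pass conversion X and a separator that returns
# unreacted feed:  fresh -> [mix] -> reactor -> separator -> product
#                     ^------------ recycle ------------|
FRESH = 100.0   # kg/h fresh feed (assumed)
X = 0.6         # single-pass conversion (assumed)
ETA = 0.95      # separator recovery of unreacted feed to recycle (assumed)

recycle = 0.0
for _ in range(100):                      # successive substitution on the tear stream
    reactor_in = FRESH + recycle
    unreacted = reactor_in * (1.0 - X)
    new_recycle = ETA * unreacted
    if abs(new_recycle - recycle) < 1e-9:
        break
    recycle = new_recycle

print(f"recycle = {recycle:.2f} kg/h, reactor feed = {FRESH + recycle:.2f} kg/h")
```

Real flowsheeting packages iterate in the same spirit over far larger networks, together with the energy balances, sizing and costing named above.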
https://en.wikipedia.org/wiki/Process_flowsheeting
In thermodynamics, a quantity that is well defined so as to describe the path of a process through the equilibrium state space of a thermodynamic system is termed a process function, [ 1 ] or, alternatively, a process quantity or a path function. As an example, mechanical work and heat are process functions because they describe quantitatively the transition between equilibrium states of a thermodynamic system. Path functions depend on the path taken to reach one state from another; different routes give different quantities. Examples of path functions include work, heat and arc length. In contrast to path functions, state functions are independent of the path taken. Thermodynamic state variables are point functions, differing from path functions. For a given state, considered as a point, there is a definite value for each state variable and state function. Infinitesimal changes in a process function X are often indicated by δX to distinguish them from infinitesimal changes in a state function Y, which is written dY. The quantity dY is an exact differential, while δX is not; it is an inexact differential. Infinitesimal changes in a process function may be integrated, but the integral between two states depends on the particular path taken between the two states, whereas the integral of a state function is simply the difference of the state functions at the two points, independent of the path taken. In general, a process function X may be either holonomic or non-holonomic. For a holonomic process function, an auxiliary state function (or integrating factor) λ may be defined such that Y = λX is a state function. For a non-holonomic process function, no such function may be defined. In other words, for a holonomic process function, λ may be defined such that dY = λδX is an exact differential. For example, thermodynamic work is a holonomic process function since the integrating factor λ = 1/p (where p is pressure) will yield the exact differential of the volume state function, dV = δW/p. The second law of thermodynamics as stated by Carathéodory essentially amounts to the statement that heat is a holonomic process function, since the integrating factor λ = 1/T (where T is temperature) will yield the exact differential of an entropy state function, dS = δQ/T. [ 1 ]
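A standard worked illustration (textbook ideal-gas results, not taken from the article itself): take an ideal gas between two states A = (p₁, V₁) and B = (p₂, V₂) at the same temperature T. Along path 1 the gas expands isothermally; along path 2 it first expands at constant pressure p₁ and is then cooled at constant volume V₂ back to T:

```latex
% Path 1: isothermal expansion.   Path 2: isobaric, then isochoric.
\[
  W_1 = \int_{V_1}^{V_2} p \,\mathrm{d}V = nRT\ln\frac{V_2}{V_1},
  \qquad
  W_2 = p_1\,(V_2 - V_1),
  \qquad
  W_1 \neq W_2 \ \text{in general}.
\]
\[
  \Delta U = U_B - U_A = 0 \ \text{(same $T$ at both ends)},
  \qquad
  \Delta S = \int_A^B \frac{\delta Q_{\mathrm{rev}}}{T}
  \ \text{(path independent)}.
\]
```

So δW integrates to different values on the two paths (an inexact differential), while V = ∫δW/p and S = ∫δQ_rev/T are recovered as state functions via the integrating factors named in the text.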
https://en.wikipedia.org/wiki/Process_function
Process integration is a term in chemical engineering which has two possible meanings. In the context of chemical engineering, process integration can be defined as a holistic approach to process design and optimization which exploits the interactions between different units in order to employ resources effectively and minimize costs. Process integration is not limited to the design of new plants: it also covers retrofit design (e.g. new units to be installed in an old plant) and the operation of existing systems. Nick Hallale (2001) explains that with process integration, industries are making more money from their raw materials and capital assets while becoming cleaner and more sustainable. [ 1 ] The main advantage of process integration is that it considers a system as a whole (i.e. an integrated or holistic approach) in order to improve its design and/or operation. In contrast, an analytical approach would attempt to improve or optimize process units separately without necessarily taking advantage of potential interactions among them. For instance, by using process integration techniques it might be possible to identify that a process can use the heat rejected by another unit and reduce the overall energy consumption, even if the units are not running at optimum conditions on their own. Such an opportunity would be missed with an analytical approach, as it would seek to optimize each unit, after which it would no longer be possible to re-use the heat internally. Typically, process integration techniques are employed at the beginning of a project (e.g. a new plant or the improvement of an existing one) to screen out promising options for optimizing the design and/or operation of a process plant. They are also often employed, in conjunction with simulation and mathematical optimization tools, to identify opportunities to better integrate a system (new or existing) and reduce capital and/or operating costs. Most process integration techniques employ pinch analysis or pinch tools to evaluate several processes as a whole system; therefore, strictly speaking, the two concepts are not the same, even if in certain contexts they are used interchangeably. The review by Nick Hallale (2001) notes several trends to be expected in the field. First, it seems probable that the boundary between targets and design will be blurred and that these will be based on more structural information regarding the process network. Second, it is likely that we will see a much wider range of applications of process integration. There is still much work to be carried out in the area of separation, not only in complex distillation systems but also in mixed types of separation systems, including processes involving solids, such as flotation and crystallization. The use of process integration techniques for reactor design has seen rapid progress, but is still in its early stages. Third, a new generation of software tools is expected: the emergence of commercial software for process integration is fundamental to its wider application in process design.
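The heat-reuse idea in the example above can be made concrete for a single hot/cold stream pair. The sketch below computes the maximum counter-current heat recovery subject to a minimum approach temperature, the basic bound used in pinch analysis; all stream data are assumed illustrative values:

```python
def max_recovery(cp_hot, th_in, th_target, cp_cold, tc_in, tc_target, dt_min):
    """Max heat recovery (kW) for one counter-current hot/cold match.

    With constant heat-capacity flowrates (kW/K) the temperature difference
    varies linearly along the exchanger, so checking both ends suffices.
    """
    q_hot_need  = cp_hot  * (th_in - th_target)         # full cooling duty of hot stream
    q_cold_need = cp_cold * (tc_target - tc_in)         # full heating duty of cold stream
    q_hot_feas  = cp_hot  * (th_in - (tc_in + dt_min))  # hot outlet >= tc_in + dt_min
    q_cold_feas = cp_cold * ((th_in - dt_min) - tc_in)  # cold outlet <= th_in - dt_min
    return max(0.0, min(q_hot_need, q_cold_need, q_hot_feas, q_cold_feas))

# Hot: CP = 2 kW/K, 150 -> 40 C.  Cold: CP = 3 kW/K, 20 -> 120 C.  dt_min = 10 K.
print(max_recovery(2.0, 150, 40, 3.0, 20, 120, 10))  # 220 kW recovered
```

Here the hot stream's full 220 kW cooling duty is recoverable and the cold stream's remaining 80 kW must come from hot utility; a whole-plant pinch study applies the same logic across many streams at once.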
https://en.wikipedia.org/wiki/Process_integration
In manufacturing engineering, process layout is a design for the floor plan of a plant which aims to improve efficiency by arranging equipment according to its function. [ 1 ] The production line should ideally be designed to eliminate waste in material flows, inventory handling and management. [ 2 ] In process layout, the work stations and machinery are not arranged according to a particular production sequence; instead, there is an assembly of similar operations or similar machinery in each department (for example, a drill department, a paint department, etc.). It is also known as function layout. In this layout, machining operations are grouped together and are not arranged according to any sequence. A common criticism of this layout is that the work can be monotonous for staff, especially if they are involved in only one stage of the process. This criticism can, however, be eliminated if the staff are rotated to different departments (involving different processes), thus developing a multi-skilled body of staff.
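One common quantitative aid when comparing candidate process layouts (a standard facility-layout tool, offered here as an illustration rather than something the article describes) is a load-distance score: material-handling trips between departments weighted by the distance between their assigned locations. All data below are invented:

```python
# Load-distance score for a candidate assignment of departments to locations.
trips = {("drill", "paint"): 40, ("drill", "assembly"): 25, ("paint", "assembly"): 60}
location = {"drill": (0, 0), "paint": (30, 0), "assembly": (30, 20)}  # metres

def load_distance(trips, location):
    total = 0.0
    for (a, b), n in trips.items():
        (x1, y1), (x2, y2) = location[a], location[b]
        total += n * (abs(x1 - x2) + abs(y1 - y2))  # rectilinear travel distance
    return total

print(load_distance(trips, location))  # lower is better when comparing layouts
```

Rearranging which department sits at which location and recomputing the score gives a simple way to rank alternative function layouts.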
https://en.wikipedia.org/wiki/Process_layout
A process map is a global-system process model that is used to outline the processes that make up the business system and how they interact with each other. A process map shows the processes as objects, which means it is a static and non-algorithmic view of the processes. It should be differentiated from a detailed process model, which shows a dynamic and algorithmic view of the processes, usually known as a process flow diagram. [ 1 ] There are different notation standards that can be used for modelling process maps, but the most notable ones are the TOGAF Event Diagram, the Eriksson-Penker notation, and the ARIS Value Added Chain. [ 2 ] Global characteristics of the business system are captured by global or system models. Global process models are presented using different methodologies and sometimes under different names. Most notably, they are named process map in Visual Paradigm [ 3 ] and MMABP, [ 2 ] value-added chain in ARIS, [ 4 ] and process diagram in the Eriksson-Penker notation [ 5 ] – which can easily lead to confusion with the process flow (detailed process model). [ 1 ] Global models are mainly object-oriented and present a static view of the business system; they do not describe dynamic aspects of processes. A process map shows the presence of processes and their mutual relationships. The requirement for a global perspective of the system, as a supplement to the internal process logic description, results from the necessity of taking into consideration not only the internal process logic but also its significant surroundings. The algorithmic process model cannot take the place of this perspective, since it represents the system model of the process. The detailed process model and the global process model represent different perspectives on the same business system, so these models must be mutually consistent. [ 2 ] A macro process map represents the major processes required to deliver a product or service to the customer. These macro process maps can be further detailed in sub-diagrams. It is often the case that process maps cross different functional areas of the organization. [ 6 ] Process maps are used by many companies to gain a holistic view of all processes and the connections between them. Maps help in navigating the sub-processes and make understanding of the organization's operations easier. The process map shows relationships and dependencies between processes, and its focus should be on the core business processes of the organization. [ 7 ] A process map can be seen as the most abstract level of the process architecture, and it acts as the introduction to the more detailed levels. A process map that is correctly designed is able to provide a general understanding of a company's operations. Designing the process map is an important and strategic step for the organization, and it is followed by further business process modelling implementation. [ 8 ] Methodology for Modelling and Analysis of Business Process (MMABP) is a business process modelling methodology developed at the Department of Information Technology, Faculty of Informatics and Statistics of the Prague University of Economics and Business. The methodology is defined as a "general methodology for modelling business systems using informatics methods and approaches". [ 2 ] The methodology is used to analyse business processes and to develop a comprehensive model of the system. The goal of developing such a model is its use for process optimization.
The model should be created following the characteristics and specifics of the organization in question, and following external influences that can affect the organization. The model should be optimal from an economic perspective, but it should also be optimal from a factual perspective, meaning that it should be as simple as possible while maintaining complete functionality. [ 1 ] Business system modelling is based on a two-dimensional approach. [ 9 ] Additionally, there are also two views of the system. [ 9 ] This results in the need to model the system from four different perspectives in order to achieve a complete and comprehensive view of the business system. MMABP also proposes which notation languages can be used for modelling each perspective, and it suggests some improvements to the notation languages in order to fit the purpose. [ 1 ] The Eriksson-Penker diagram is a tool used in business model analysis and design. It is named after Hans-Erik Eriksson and Magnus Penker, who developed the concept in their book "Business Modelling with UML: Business Patterns at Work". [ 5 ] Eriksson-Penker diagrams are used to map out the key components of a business model and how they interact with one another. The diagrams typically consist of a series of boxes and lines that represent the different elements of the business model, such as the value proposition, customer segments, channels, revenue streams, and key resources. The lines between the boxes represent the relationships and dependencies between the different elements of the business model. These diagrams are useful for visualizing and understanding the various components of a business model, and can help organizations identify potential areas for improvement or areas of risk. They can also be used as a communication tool to help stakeholders understand the business model and its underlying assumptions. [ 5 ] It is possible to use Eriksson-Penker diagrams to create a global process view of a business. In this case, a diagram would be used to map out the key processes and activities that are involved in the business, as well as the relationships and dependencies between these processes. [ 5 ] For example, an Eriksson-Penker diagram could be used to depict the various steps involved in the product development process, from concept development to market launch. It could also be used to show how different functions within the organization, such as marketing, sales, and production, interact and depend on one another to support the overall business. The Eriksson-Penker diagram is one of the most popular de facto standards that can be used for an object-oriented global view of business processes. [ 1 ] It is developed as an extension of the UML, [ 10 ] and it is often used together with BPMN to compensate for the lack of a possibility to model the global view with this widely accepted standard. [ 1 ] TOGAF (The Open Group Architecture Framework) is a framework for enterprise architecture that provides a common language and set of standards for designing, planning, implementing, and governing an enterprise's IT architecture.
TOGAF event diagrams are diagrams used in the TOGAF framework to represent the flow of events within a system or process. [ 11 ] The TOGAF Event Diagram is a visual representation of the events within an organization or system. It can be used to show the sequence of events that occur in a particular process, as well as the relationships between the events and the stakeholders involved. TOGAF Event Diagrams can be useful in creating a global process view because they provide a visual representation of the events, which can be helpful in understanding how the process fits into the larger context of the organization. [ 11 ] The TOGAF Event Diagram is the most promising standard for the system view of processes today. It is used to represent the system of processes as well as their connections to the functional organizational structure. [ 1 ] ARIS (Architecture of Integrated Information Systems) is a methodology and a set of tools for designing and managing business processes. It is based on the idea that business processes are the core of an organization and that they can be modelled and optimized to improve efficiency and effectiveness. The ARIS methodology provides a framework for understanding and analysing business processes, as well as for designing and implementing improvements to those processes. It includes a set of graphical modelling languages and tools for creating process models, as well as a database for storing and managing process information. [ 4 ] In the context of the ARIS methodology, a value added chain (VAC) diagram is a specific type of process model that is created using the ARIS modelling languages and tools. [ 4 ] The ARIS methodology recognizes the distinction between a global view of the system of processes and a detailed view of one process; VAC notation can be used in ARIS for modelling the global view. [ 1 ] Business process models should be consistent, both within a single model and in terms of mutual consistency with other models. Consistency applies to both global (process map) and detailed (process diagram) views. In order to be considered consistent, models should satisfy two consistency criteria: completeness and correctness. [ 1 ] The specific requirements differ between consistency within a single model and mutual consistency with other models. [ 9 ]
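Whatever the notation, the global view boils down to processes as objects plus typed relationships, which can be captured as a small static graph. A minimal sketch (process names and relation types are invented for illustration), with a trivial completeness check in the spirit of the consistency criteria above:

```python
# A process map as a static graph: processes are nodes, relationships are
# typed edges. No control flow is modelled -- that belongs to the detailed
# process model (process flow diagram), not the map.
processes = {"Order handling", "Invoicing", "Shipping"}
relations = [
    ("Order handling", "Invoicing", "triggers"),
    ("Order handling", "Shipping", "triggers"),
    ("Shipping", "Invoicing", "provides delivery data to"),
]

for src, dst, kind in relations:
    assert src in processes and dst in processes  # every relation endpoint must exist
    print(f"{src} --{kind}--> {dst}")
```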
https://en.wikipedia.org/wiki/Process_map
Chemical process miniaturization refers to a philosophical concept within the discipline of process design that challenges the notion of "economy of scale" or "bigger is better". In this context, process design refers to the discipline taught primarily to chemical engineers. However, the emerging discipline of process miniaturization will involve integrated knowledge from many areas, for example systems engineering and design, remote measurement and control using intelligent sensors, biological process systems engineering, and advanced manufacturing robotics. One of the challenges of chemical engineering has been to design processes based on chemical laboratory-scale methods, and to scale up processes so that products can be manufactured at an economically affordable cost. As a process becomes larger, more product can be produced per unit time, so when a process technology becomes established or mature, and operates consistently without upsets or "downtime", more economic efficiency can be gained from scale-up. Given a fixed price for the feedstock (e.g. the price per barrel of crude oil), the product cost can be decreased using a larger-scale process because the capital investment and operational costs do not normally increase linearly with scale. For example, the capacity or volume of a cylindrical vessel used to produce a product increases in proportion to the square of the radius of the cylinder, while the material in the vessel wall grows roughly in proportion to the radius, so the cost of materials per unit of capacity decreases. The costs to design and fabricate the vessel have also traditionally been less sensitive to scale; in other words, one can design a small vessel and fabricate it for about the same cost as a larger vessel. In addition, the cost to control and operate a process (or a process unit component) does not change substantially with scale: if it takes one operator to operate a small process, that same operator can probably operate the larger process. The economy-of-scale concept, as taught to chemical engineers, has led to the notion that one of the objectives of process development and design is to achieve "economy of scale" by scaling up to the largest possible processing plant so that the product cost can be economically affordable. This disciplinary philosophy has been reinforced by example designs in the petroleum refining and petrochemical industries, where feedstocks have been transported as fluids in pipelines, large tanker ships, and railcars. Fluids, by definition, are materials that flow and can be transferred using pumps or gravity; therefore, large pumps, valves, and pipelines exist to transfer large amounts of fluids in the process industries. Process miniaturization, in contrast, will involve processing of large amounts of solids from renewable biomass resources; therefore, new thinking towards process designs optimized for solids processing will be required. The concept of a microprocess was defined by S. S. Sofer while a professor at the New Jersey Institute of Technology; a microprocess has a number of defining characteristics. [ 1 ] The microprocess design philosophy has been largely envisioned through historical analysis of the role that component miniaturization has played in the information technology industry. It is the evolution of the miniaturization of computer hardware that has enabled thinking about process miniaturization in the chemical engineering design context. Rather than the traditional design objective of "scale-up" of processing to one centralized large processing plant (e.g.
the mainframe), one can envision achieving the economic objectives using a "scale-out" philosophy (e.g. multiple microcomputers). Electrical and electronic devices have always played an important role in chemical process plant automation. Initially, however, simple thermometers, such as those containing mercury, and pressure gauges, which were completely mechanical in nature, were used to monitor process conditions (such as the temperature, pressure and level in a chemical reactor). Process conditions were adjusted based largely on a human operator's heuristic knowledge of the process behavior. Even with electronic automation installed, many processes still require substantial operator interaction, particularly during the start-up phase of the process or during deployment of a new technology. Process control of the future will involve the widespread utilization of intelligent sensors and mass-produced intelligent miniaturized devices, such as programmable logic controllers, that communicate wirelessly with process actuators. Since these devices will be miniaturized to reduce manufacturing cost, they can be embedded in structures so that they become invisible to the casual observer. The cost of such sensors will likely be reduced to a point where they either "function or don't function". When that cost threshold has been reached, the repair procedure will be to disable the sensor and to actuate a redundant working sensor. In other words, entire complex control systems will become so low-cost that repair will not be economically viable. The intelligence of the process will be developed using process simulation models based on scientific fundamentals. Heuristic rules will be programmed into the micro-controllers, which will largely eliminate the need for constant monitoring based on human heuristic knowledge of the process behavior. Processes which can automatically self-optimize, through advanced algorithms developed by microprocess engineers, will be embedded and only accessible to the knowledge-owner. This will enable the construction of large networks of autonomous microprocesses. Advanced process control systems for process miniaturization will increase the need for controlling the security and ownership of process intelligence in a knowledge-based business. It will become more difficult to control intellectual property through the traditional method of patents; therefore, trademarks, brand recognition, and copyright laws will play a more important role in value security for knowledge-based businesses of the future. Techno-economic analysis, as taught in traditional chemical process design, will also dramatically shift away from a conservative viewpoint based on historical trend economics and cash flow analysis. The economic viability of a given enterprise will be more closely linked to the acquisition of real-time economic information, which can change rapidly based on empirical observations created by an emerging discipline of microprocess development systems; the models will therefore be based more on "what can be?" rather than "what has the past shown?" Rather than one large central plant that has to be fed a large amount of feedstock, such as a refinery that can unload a tanker shipment of petroleum if located next to an ocean, the discipline of process miniaturization envisions the distribution of the process technology to areas where the feedstock is not readily transportable in large quantities to a large centralized processing plant.
The miniaturized process technology may simply involve transformation of solid biomass materials from multiple distributed microprocesses into more easily manageable fluids. The fluids can then be transported or distributed to larger-scale intelligent processing nodes using conventional fluid transport technology. Historically, small processes or microprocesses per se have always existed. For example, small vineyards and breweries have produced feedstock, processed it, and stored product in what could be considered a "microprocess" when compared to processes designed on the petrochemical industry model or, for example, large-scale production of beer. Small villages in India and other places in the world have learned to produce biogas from animal manure in what could be considered small-scale microprocesses for the production of energy. However, microprocesses and process miniaturization as a design philosophy include the notion of approaching total automation, and constitute a new technology enabled by computer hardware miniaturization, for example the microprocessor. It is easy to envision processes which can be mass-produced and transported. For example, many appliances, such as air conditioners, domestic washing machines, and refrigerators, could be considered microprocesses. The design philosophy of process miniaturization envisions that "scale-down" of complex processes involving multiple process unit operations can be achieved, and that economy of scale will be more related to the size of a network of distributed autonomous microprocesses. Since failure of one autonomous microprocess does not cause shutdown of the entire network, microprocesses will lead to more economically efficient, robust, and stable production of products that have traditionally been produced for a petroleum-based society. Since fossil fuels by definition are being consumed and are non-renewable, future fuels and materials will be based on renewable biomass. The conversion of biomass into energy is perhaps more challenging to the technologist than energy from fossil fuels: water, dissolved organic and inorganic compounds, and solid particulates of various sizes can be present in biomass processes. It is perhaps in the development of microbial fuel cells that the philosophical thinking of process miniaturization will play a wider role. Distribution of knowledge, in a fashionable, intriguing style through miniaturized devices, can be substantially enhanced (accelerated) by low-power devices such as smartphones. A rethinking of "what is a power plant?" can create enormous innovations, given recent advances in membrane materials of construction, immobilized whole-cell methodologies, metabolic engineering, and nanotechnology. The challenges of microbial fuel cells relate mainly to finding lower-cost manufacturing methods, materials of construction, and systems designs. Bruce Logan of Penn State University has described these challenges in several research articles and reviews. However, even with existing designs, which generate low power, there are applications in the distribution of electrical recharging systems to remote areas of Africa, where smartphones can enable access to the vast information of the internet and provide lighting. These systems can run on agricultural, animal and human waste streams using naturally occurring bacteria.
Nuclear power is considered "green technology" in that it does not produce carbon dioxide, a greenhouse gas, as traditional natural gas or coal-fired power plants do. The economics of deploying mini nuclear reactors have been discussed in an article in " The Economist ". The advantages of mini nuclear reactors have also been discussed by Secretary of Energy Steven Chu . [ 2 ] As discussed by Chu, the reactors would be manufactured in a factory-like setting and then transported intact by rail or ship to different parts of the country or world. Economy of scale by size is replaced by economy of scale by number. Many companies are not willing to accept the risk of investing $8 billion to $9 billion in a single large reactor, so one of the most attractive features of process miniaturization is a reduction in the risk of capital investment, along with the possibility of recovering the investment by reselling and relocating a functional turn-key microprocess to a new owner - a major economic advantage of the portability of microprocesses.
https://en.wikipedia.org/wiki/Process_miniaturization
The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model . Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done, in contrast to the process itself, which is really what happens. A process model is roughly an anticipation of what the process will look like; what the process shall be will be determined during actual system development. [ 2 ] The goals of a process model are to be: From a theoretical point of view, meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, meta-process modeling is aimed at providing guidance for method engineers and application developers. [ 1 ] The activity of modeling a business process usually predicates a need to change processes or to identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are typically required to put the revised processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include the Unified Modeling Language (UML), model-driven architecture , and service-oriented architecture . Process modeling addresses the process aspects of an enterprise business architecture , leading to an all-encompassing enterprise architecture . The relationships of business processes to the rest of the enterprise (its systems, data, organizational structure, strategies, etc.) create greater capability in analyzing and planning change. One real-world example is corporate mergers and acquisitions : understanding the processes in both companies in detail allows management to identify redundancies, resulting in a smoother merger. Process modeling has always been a key aspect of business process reengineering and of continuous improvement approaches such as Six Sigma . There are five types of coverage where the term process model has been defined differently: [ 3 ] Processes can be of different kinds. [ 2 ] These definitions "correspond to the various ways in which a process can be modelled". Granularity refers to the level of detail of a process model and affects the kind of guidance, explanation, and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail, whereas fine granularity provides more detailed capability. The nature of granularity needed depends on the situation at hand. [ 2 ] Project managers, customer representatives, and general, top-level, or middle management require a rather coarse-grained process description, as they want to gain an overview of time, budget, and resource planning for their decisions. 
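The type-versus-instance distinction drawn above lends itself to a compact illustration in code. The following Python sketch is hypothetical (the class names and the three-step model are invented); it shows one process model yielding many process instances.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProcessModel:
    """Type level: a reusable description of how things should be done."""
    name: str
    steps: list  # ordered step names

    def instantiate(self):
        # Each instantiation yields one concrete process that can be enacted.
        return ProcessInstance(model=self, started=datetime.now())

@dataclass
class ProcessInstance:
    """Instance level: one actual enactment of the model."""
    model: ProcessModel
    started: datetime
    completed_steps: list = field(default_factory=list)

    def complete(self, step):
        assert step in self.model.steps, "instances follow their model"
        self.completed_steps.append(step)

# The same model is used repeatedly, producing many instantiations.
review = ProcessModel("document review", ["draft", "review", "approve"])
first, second = review.instantiate(), review.instantiate()
first.complete("draft")
```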
In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model, where the details of the model can provide them with instructions and important execution dependencies, such as the dependencies between people. While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver). [ 2 ] [ 7 ] It was found that while process models were prescriptive, in actual practice departures from the prescription can occur. [ 6 ] Thus, frameworks for adapting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situational method engineering . Method construction approaches can be organized on a flexibility spectrum ranging from 'low' to 'high'. [ 8 ] At the 'low' end of this spectrum lie rigid methods, whereas at the 'high' end there is modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs. [ 9 ] When discussing the quality of process models, there is a need to elaborate on the quality of modeling techniques as an important ingredient in the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. The discussion here concentrates on both the quality of process modeling techniques and the quality of process models, to clearly differentiate the two. Various frameworks have been developed to help in understanding the quality of process modeling techniques; one example is the quality-based modeling evaluation framework, known as the Q-Me framework, which is argued to provide a set of well-defined quality properties and procedures that make an objective assessment of these properties possible. [ 10 ] This framework also has the advantage of providing a uniform and formal description of model elements within one or several model types using one modeling technique. [ 10 ] In short, this enables assessment of both the product quality and the process quality of modeling techniques with regard to a set of previously defined properties. Quality properties that relate to business process modeling techniques discussed in [ 10 ] are: To assess the quality of the Q-ME framework, it was used to illustrate the quality of the dynamic essentials modeling of organizations (DEMO) business modeling technique. It is stated that this evaluation revealed shortcomings of Q-ME. One in particular is that it does not include a quantifiable metric for expressing the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating. 
There is also a systematic approach for quality measurement of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Meta-model techniques are used as a basis for the computation of these complexity metrics. In comparison to the quality framework proposed by Krogstie , this quality measurement focuses more on the technical level than on the individual model level. [ 11 ] Cardoso, Mendling, Neuman and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research done by Mendling et al., who argued that without quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance cost, and perhaps inefficient execution of the process in question. [ 12 ] The quality of the modeling technique is important in creating models that are themselves of quality and that contribute to the correctness and usefulness of models. The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints, and so on. [ 13 ] An enormous amount of research has been done on the quality of models, but much less attention has been directed towards the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main guidelines and frameworks in practice for doing so: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines. [ 14 ] Hommes quoted Wang et al. (1994) [ 11 ] to the effect that all the main characteristics of model quality can be grouped under two headings, namely the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon being modeled to its correspondence to the syntactical rules of the modeling, and it is independent of the purpose for which the model is used. Usefulness, in contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity). A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often applied. A broader approach is to base the framework on semiotics rather than linguistics, as was done by Krogstie using the top-down quality framework known as SEQUAL. [ 15 ] [ 16 ] It defines several quality aspects based on relationships between a model, knowledge externalization, domain, a modeling language, and the activities of learning, taking action, and modeling. The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests. [ 17 ] According to previous research done by Moody et al. [ 18 ] with use of the conceptual model quality framework proposed by Lindland et al. 
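As an illustration of the bottom-up count metrics mentioned above (such as the number of tasks or splits), the following Python sketch computes a few simple size metrics over a toy process graph. The graph encoding and metric names are hypothetical simplifications, not the exact definitions used by Cardoso or Mendling et al.

```python
# A toy process model: each node maps to the list of its successor nodes.
process = {
    "start":        ["check order"],
    "check order":  ["split"],
    "split":        ["pack goods", "send invoice"],  # parallel split (2 branches)
    "pack goods":   ["join"],
    "send invoice": ["join"],
    "join":         ["end"],
    "end":          [],
}

def count_metrics(graph):
    """Simple count-based size metrics, in the spirit of bottom-up quality metrics."""
    num_nodes = len(graph)
    num_arcs = sum(len(succ) for succ in graph.values())
    num_splits = sum(1 for succ in graph.values() if len(succ) > 1)
    num_joins = sum(1 for node in graph
                    if sum(node in succ for succ in graph.values()) > 1)
    return {"nodes": num_nodes, "arcs": num_arcs,
            "splits": num_splits, "joins": num_joins}

print(count_metrics(process))
# Larger counts generally correlate with lower understandability of the model.
```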
(1994) to evaluate the quality of process models, three levels of quality [ 19 ] were identified. From the research it was noticed that the quality framework was found to be both easy to use and useful in evaluating the quality of process models; however, it had limitations in regard to reliability and difficulty in identifying defects. These limitations led to a refinement of the framework through subsequent research done by Krogstie . The refined framework is the SEQUAL framework by Krogstie et al. 1995 (refined further by Krogstie & Jørgensen, 2002), which included three more quality aspects. Dimensions of the conceptual quality framework: [ 20 ] Modeling domain is the set of all statements that are relevant and correct for describing a problem domain; language extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. Model externalization is the conceptual representation of the problem domain; it is defined as the set of statements about the problem domain that are actually made. Social actor interpretation and technical actor interpretation are the sets of statements that actors (human model users and the tools that interact with the model, respectively) 'think' the conceptual representation of the problem domain contains. Finally, participant knowledge is the set of statements that the human actors involved in the modeling process believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with the physical and social aspects of the model. In later work, Krogstie et al. [ 15 ] stated that while the extension of the SEQUAL framework has fixed some of the limitations of the initial framework, other limitations remain. In particular, the framework is too static in its view of semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain. Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters. The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain, so a change to the model may also change the problem domain directly. That work discusses the quality framework in relation to active process models and suggests a revised framework based on this. Further work by Krogstie et al. (2006) revised the SEQUAL framework to make it more appropriate for active process models by redefining physical quality with a narrower interpretation than in previous research. [ 15 ] The other framework in use is the Guidelines of Modeling (GoM), [ 21 ] based on general accounting principles, which includes six principles: correctness, clarity, relevance, comparability, economic efficiency, and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems. Comprehensibility relates to the graphical arrangement of the information objects and therefore supports the understandability of a model. Relevance relates to the model and the situation being presented. 
Comparability involves the ability to compare models, that is, semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed use in cost cuttings and revenue increases; since the purpose of organizations in most cases is the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires an accepted differentiation between diverse views within modeling. Correctness, relevance, and economic efficiency are prerequisites of model quality and must be fulfilled, while the remaining guidelines are optional but beneficial. The two frameworks SEQUAL and GoM share the limitation that they cannot easily be used by people who are not competent in modeling: they provide major quality metrics but are not readily applicable by non-experts. The use of bottom-up metrics related to quality aspects of process models tries to bridge this gap for non-experts in modeling, but it is mostly theoretical, and no empirical tests have been carried out to support its use. Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models; [ 22 ] Cardoso validates the correlation between control flow complexity and perceived complexity; and Mendling et al. use metrics to predict control flow errors such as deadlocks in process models. [ 12 ] [ 23 ] The results reveal that an increase in the size of a model appears to reduce its quality and comprehensibility. Further work by Mendling et al. investigates the connection between metrics and understanding. [ 24 ] [ 25 ] While some metrics are confirmed regarding their effect, personal factors of the modeler, such as competence, are also revealed as important for understanding the models. Several empirical surveys carried out still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is necessary to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice. Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically. From this research, [ 26 ] the value of process models is not only dependent on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that this labelling style results in models that are better understood than with alternative labelling styles. From the earlier research and ways to evaluate process model quality, it has been seen that a process model's size, structure, modularity, and the expertise of the modeler affect its overall comprehensibility. [ 24 ] [ 27 ] Based on these findings, a set of guidelines, the Seven Process Modeling Guidelines (7PMG), was presented. [ 28 ] This guideline uses the verb–object style, as well as guidelines on the number of elements in a model, the application of structured modeling, and the decomposition of a process model. 
The guidelines are as follows: 7PMG nevertheless has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented. It suggests ways of organizing different structures of the process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out. The second limitation relates to the prioritizing guideline: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers. This could be seen, on the one hand, as a need for wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritizing guideline. [ 28 ]
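As a toy illustration of the verb–object labeling style that 7PMG recommends, the sketch below flags activity labels that do not start with a verb from a small hand-made list. This is a hypothetical heuristic for illustration only; it is not part of 7PMG itself, and a real check would need proper part-of-speech tagging.

```python
# Hypothetical, minimal check for "verb-object" activity labels.
KNOWN_VERBS = {"check", "create", "send", "approve", "pack", "register", "archive"}

def follows_verb_object_style(label):
    words = label.lower().split()
    # Verb-object style: an imperative verb followed by at least one object word.
    return len(words) >= 2 and words[0] in KNOWN_VERBS

labels = ["Check invoice", "Invoice checking", "Send reminder", "Order"]
for label in labels:
    verdict = "ok" if follows_verb_object_style(label) else "rework"
    print(f"{label!r}: {verdict}")
# "Invoice checking" and "Order" would be flagged for relabeling.
```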
https://en.wikipedia.org/wiki/Process_modeling
The Process Molecular Gene Concept is an alternative definition of a gene which states that, for the synthesis of a polypeptide to occur, non-DNA factors and regulatory regions are needed to regulate gene expression on DNA and derived mRNA . This is important because a DNA sequence can code for multiple polypeptides , [ 1 ] so these non-DNA factors help determine which polypeptide is made. The definition was first proposed by Eva M. Neumann-Held , who suggested a redefinition of our view of the "gene" in relation to developmental genetics. This concept claims that the classical definition is too general; we therefore need either to clarify the definition or to stop using the term "gene". [ 2 ] In Cycles of Contingency , Neumann-Held states, [ 3 ] "This empirical evidence shows that it is not only the presence of DNA sequence that determines the course of events that lead to the synthesis of a polypeptide but, in addition, specific non-DNA factors must act on DNA and derived mRNA to determine the particular processing mechanisms." The developmental state and tissue determine the outcome of the DNA . An example Neumann-Held gives of this is RNA editing . Depending on the environmental and developmental state of the organism, editing might alter, delete, or even add nucleotides to create a different mRNA . So, according to Neumann-Held, the "gene" is the process that brings together the non-DNA elements and DNA in order to create a specific polypeptide . This process involves specific interactions between certain DNA segments and certain non-DNA segments, and specific mechanisms for the resulting mRNA 's interactions with non-DNA entities, which in turn create a specific polypeptide . This gene article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Process_molecular_gene_concept
Process network synthesis (PNS) is a method for representing a process structure as a directed bipartite graph . Process network synthesis uses the P-graph method to create the process structure; the scientific aim of the method is to find optimum structures. Process network synthesis uses the bipartite graph method P-graph [ 1 ] and employs combinatorial rules to find all feasible network solutions (the maximum structure), linking raw materials to the desired products of the given problem. With a branch-and-bound optimisation routine, and by defining a target value, an optimum structure can be generated that optimises a chosen target function. Process network synthesis was originally developed to solve chemical process engineering problems. The target value as well as the structure can be changed depending on the field of application, so many more fields of application have followed. At Pannon University, the software tools PNS Editor and PNS Studio were developed to generate the maximum structure of processes. This software includes the P-graph method and the MSG, SSG, and ABB branch-and-bound algorithms to detect optimum structures within the maximum available process flows. [ 2 ] PNS is used in various fields of application to find optimum process structures.
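The combinatorial core of PNS can be sketched compactly. The following Python fragment implements a simplified maximal-structure-style reduction over a bipartite material/operating-unit network: units whose inputs cannot be produced from the raw materials are removed until a fixed point is reached. The example network and function names are hypothetical; the real MSG algorithm in the P-graph framework includes further axioms and pruning rules.

```python
# Operating units map a set of input materials to a set of output materials.
units = {
    "gasifier":   ({"biomass"}, {"syngas"}),
    "methanator": ({"syngas"}, {"methane"}),
    "reformer":   ({"naphtha"}, {"syngas"}),   # naphtha is not available here
}
raw_materials = {"biomass"}
desired_products = {"methane"}

def reduce_to_feasible(units, raw_materials):
    """Keep only units whose inputs are raw materials or producible outputs."""
    active = dict(units)
    changed = True
    while changed:
        producible = raw_materials | {m for _, outs in active.values() for m in outs}
        infeasible = [u for u, (ins, _) in active.items() if not ins <= producible]
        # Removing a unit may make other units infeasible, so iterate to a fixed point.
        changed = bool(infeasible)
        for u in infeasible:
            del active[u]
    return active

feasible = reduce_to_feasible(units, raw_materials)
print(sorted(feasible))              # ['gasifier', 'methanator']
produced = {m for _, outs in feasible.values() for m in outs}
print(desired_products <= produced)  # True: methane is reachable from biomass
```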
https://en.wikipedia.org/wiki/Process_network_synthesis
Process of elimination is a logical method to identify an entity of interest among several candidates by excluding all the others. In educational testing , it is a process of eliminating options whose probability of being correct is close to zero or significantly lower than that of the other options. This version of the process does not guarantee success, even if only one option remains, since it eliminates possibilities merely as improbable. The process of elimination can only narrow the possibilities down; thus, if the correct option is not among the known options, it will not arrive at the truth. The method of elimination is iterative : one looks at the answers, determines that several answers are unfit, eliminates these, and repeats until one cannot eliminate any more. This iteration is most effectively applied when there is logical structure between the answers, that is to say, when eliminating one answer allows one to eliminate several others. In this case one can find the answers that cannot be eliminated by eliminating any other answers and test them alone; the others are eliminated as a logical consequence . This is the idea behind optimizations for computerized searches when the input is sorted, as, for instance, in binary search . In order for the method to work, it is necessary to list all possibilities, even improbable ones; any omission renders the method invalid as a logical method. A process of elimination can be used to reach a diagnosis of exclusion . It is an underlying method in performing a differential diagnosis . This philosophy -related article is a stub . You can help Wikipedia by expanding it . This logic -related article is a stub . You can help Wikipedia by expanding it .
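The link to binary search mentioned above can be made explicit: each comparison against the middle element eliminates half of the remaining candidates as a logical consequence of the ordering. Below is a minimal Python sketch of standard textbook binary search, not drawn from any source cited here.

```python
def binary_search(sorted_items, target):
    """Process of elimination on a sorted list: each probe rules out half."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # everything at or below mid is eliminated
        else:
            hi = mid - 1   # everything at or above mid is eliminated
    return None  # the target was not among the listed possibilities

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
print(binary_search([2, 3, 5, 7, 11, 13], 6))   # None: elimination exhausts all options
```

Note the caveat from the text: if the target is not in the listed possibilities at all, elimination correctly exhausts every option rather than "finding" a wrong one.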
https://en.wikipedia.org/wiki/Process_of_elimination
Process safety is an interdisciplinary engineering domain focusing on the study, prevention, and management of large-scale fires , explosions , and chemical accidents (such as toxic gas clouds) in process plants or other facilities dealing with hazardous materials , such as refineries and oil and gas ( onshore and offshore ) production installations. Thus, process safety is generally concerned with the prevention of, control of, mitigation of, and recovery from unintentional hazardous materials releases that can have a serious effect on people (onsite and offsite), plant, and/or the environment. [ 1 ] [ 2 ] [ 3 ] The American Petroleum Institute defines process safety as follows: A disciplined framework for managing the integrity of hazardous operating systems and processes by applying good design principles, engineering, and operating and maintenance practices. It deals with the prevention and control of events that have the potential to release hazardous materials or energy. Such events can cause toxic effects, fire or explosion and could ultimately result in serious injuries, property damage, lost production, and environmental impact. [ 4 ] The same definition is given by the International Association of Oil & Gas Producers (IOGP). [ 2 ] The Center for Chemical Process Safety (CCPS) of the American Institute of Chemical Engineers (AIChE) gives the following: A discipline that focuses on the prevention of fires, explosions, and accidental chemical releases at chemical process facilities. [ 5 ] The scope of process safety is usually contrasted with occupational safety and health (OSH). While both domains deal with dangerous conditions and hazardous events occurring at work sites and/or while carrying out one's job duties, they differ on several levels. Process safety is primarily concerned with events which involve hazardous materials and which are, or have the potential to escalate into, major accidents. A major accident is usually defined as an event causing multiple fatalities, extensive environmental impact, and/or significant financial consequences. The consequences of major accidents, while typically limited to the work site, can extend beyond the plant or installation boundaries, thus causing significant offsite impact. In contrast, occupational safety and health focuses on events that cause harm to a limited number of workers (usually one or two per event), have consequences limited to well within the work site boundaries, and do not necessarily involve unintended contact with a hazardous material. [ 6 ] Thus, for example, a gasoline storage tank loss of containment resulting in a fire is a process safety event, while a fall from height occurring while inspecting the tank is an OSH event. Although they may result in far higher impact to people, assets, and the environment, process safety accidents are significantly less frequent than OSH events, with the latter accounting for the majority of workplace fatalities. [ 7 ] However, the impact of a single major process safety event on such aspects as regional environmental resources, company reputation, or the societal perception of the chemical and process industries can be very considerable and is usually given prominent visibility in the media. The pivotal step in a process safety accident, around which a chain of accident causation and escalation can be built (including preventative and control/mitigative safety barriers), is generally the loss of containment of a hazardous material. 
[ 8 ] It is this occurrence that frees the chemical energy available for the harmful consequences to materialize. Inadequate isolation, overflow, runaway or unplanned chemical reaction , defective equipment, human error , procedural violation, inadequate procedures, blockage, corrosion , degradation of material properties, excessive mechanical stress, fatigue , vibration , overpressure, and incorrect installation are the usual proximate causes for such loss of containment. [ 9 ] If the material is flammable and encounters a source of ignition, a fire will take place. Under particular conditions, such as local congestion (e.g., arising from structures and piping in the area where the release occurred or the flammable gas cloud migrated), the flame front of a flammable gas cloud can accelerate and transition to an explosion , which can cause overpressure damage to nearby equipment and structures and harm to people. If the released chemical is a toxic gas, or a liquid whose vapors are toxic, then a toxic gas cloud occurs, which may harm or kill people locally at the release source or remotely, if its size and the atmospheric conditions do not immediately result in its dilution to below hazardous concentration thresholds. Fires, explosions, and toxic clouds are the main types of accidents with which process safety is concerned. [ 10 ] In the domain of offshore oil and gas extraction, production, and subsea pipelines, the discipline of process safety is sometimes understood to extend to major accidents not directly associated with hazardous materials processing, storage, or transport. In this context, the potential for accidents such as ship collisions against oil platforms, loss of FPSO hull stability, or crew transportation accidents (such as helicopter or boating events) is analyzed and managed with tools typical of process safety. [ 11 ] Process safety is usually associated with fixed onshore process and storage facilities, as well as fixed and floating offshore production and/or storage installations. However, process safety tools can be, and often are, used (although to varying degrees) to analyze and manage the bulk transportation of hazardous materials, such as by road tankers , rail tank cars , sea-going tankers , and onshore and offshore pipelines . Industrial domains that share similarities with the chemical process industries, and to which process safety concepts often apply, are nuclear power , fossil fuel power production , mining , steelmaking , foundries , etc. Some of these industries, notably nuclear power, follow an approach very similar to process safety's, usually referred to as system safety . In the early chemical industry, processes were relatively simple and societal expectations regarding safety were low by today's standards. As chemical technology evolved and increased in complexity, and societal expectations for safety in industrial activities simultaneously increased, it became clear that there was a need for increasingly specialized expertise and knowledge in safety and loss prevention for the chemical industry. [ 12 ] Organizations in the process industries originally relied for their safety reviews on the experience and expertise of the people conducting the review. In the mid-20th century, more formal review techniques began to appear. These included the hazard and operability (HAZOP) review, developed by ICI in the 1960s, failure mode and effects analysis (FMEA), checklists, and what-if reviews . 
These were mostly qualitative techniques for identifying the hazards of a process. [ 13 ] Quantitative analysis techniques, such as fault tree analysis (FTA, which had been in use by the nuclear industry ), quantified risk assessment (QRA, also referred to as quantitative risk analysis), and layer-of-protection analysis (LOPA), also began to be used in the process industries in the 1970s, 1980s, and 1990s. Modeling techniques were developed for analyzing the consequences of spills and releases, explosions, and toxic exposure. [ 13 ] The expression "process safety" began to be used increasingly to define this engineering field of study. It was generally understood to be a branch of chemical engineering , as it primarily relied on the understanding of industrial chemical processes, as exemplified in the HAZOP technique. In time, it absorbed a range of elements from other disciplines (such as chemistry and physics for the mathematical modelling of releases, fires, and explosions, instrumentation engineering , asset management , human factors and ergonomics , reliability engineering , etc.), thus becoming a relatively interdisciplinary engineering domain, although at its core it remains strongly connected with the understanding of industrial process chemical technology. "Process safety" gradually prevailed over alternative terms; for example, Frank P. Lees in his monumental work Loss Prevention in the Process Industries [ 14 ] either used the titular expression or "safety and loss prevention", and so did Trevor Kletz , [ 15 ] a central figure in the development of this discipline. One of the first publications to use the term in its current sense is the Process Safety Guide by the Dow Chemical Company . [ 16 ] By the mid to late 1970s, process safety was a recognized technical specialty. The American Institute of Chemical Engineers (AIChE) formed its Safety and Health Division in 1979. [ 13 ] In 1985, AIChE established the Center for Chemical Process Safety (CCPS), partly in response to the Bhopal tragedy, which had occurred the previous year. [ 17 ] Lessons learnt from past events have been key in determining advances in process safety. Some of the major accidents that shaped it as an engineering discipline are: [ 10 ] The following is a list of topics covered in process safety. [ 10 ] There are some overlaps with equivalent domains from other disciplines, especially occupational safety and health (OSH), although the focus in process safety will always be specifically on the loss of control in the handling of hazardous materials at industrial scale. Strictly related to process safety, although for historical reasons usually not considered to belong to its domain, is the design of the following systems (note, however, that their selection is often the responsibility of a specialized process safety engineer): Companies whose business relies heavily on the extraction, processing, storage, and/or transport of hazardous materials usually integrate elements of process safety management (PSM) within their health and safety management system. PSM was notably regulated by the United States' OSHA in 1992. [ 19 ] The OSHA model for PSM is still widely used, not only in the US but also internationally. Other equivalent models and regulations have become available since, notably by the EPA , [ 20 ] the Center for Chemical Process Safety (CCPS), [ 21 ] and the UK's Energy Institute . [ 22 ] PSM schemes are organized in 'elements'. Different schemes are based on different lists of elements. 
This is the CCPS scheme for risk-based process safety, which can be reconciled with most other established PSM schemes: [ 21 ] While originally designed primarily for plants in their operations phase, elements of PSM can and should be implemented through the entire lifecycle of a project, wherever applicable. This includes design (from front-end loading to detailed design), procurement of equipment, commissioning , operations, material and organizational changes , and decommissioning. A common model used to represent and explain the various different but connected systems related to achieving process safety is James T. Reason 's Swiss cheese model . [ 8 ] [ 23 ] In this model, barriers that prevent, detect, control, and mitigate a major accident are depicted as slices, each having a number of holes. The holes represent imperfections in the barrier, which can be defined against specific performance standards. The better managed the barrier, the smaller these holes will be. When a major accident happens, it is invariably because all the imperfections in the barriers (the holes) have lined up. It is the multiplicity of barriers that provides the protection.
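The Swiss cheese picture has a simple quantitative counterpart in the layer-of-protection analysis (LOPA) mentioned earlier: if barriers fail independently, the frequency of the major accident is the initiating-event frequency multiplied by each barrier's probability of failure on demand. The Python sketch below uses made-up illustrative numbers; the scenario and probabilities are hypothetical, and a real LOPA study must justify the independence assumption for every layer it credits.

```python
# Hypothetical scenario: overfilling a storage tank leading to a major release.
initiating_event_per_year = 0.1          # e.g. level-control failure, 1-in-10 years

barriers_pfd = {
    "operator response to alarm": 0.1,   # probability of failure on demand
    "independent high-level trip": 0.01,
    "bund / dike containment":    0.1,
}

mitigated_frequency = initiating_event_per_year
for name, pfd in barriers_pfd.items():
    mitigated_frequency *= pfd           # all the "holes" must line up

print(f"{mitigated_frequency:.1e} major releases per year")  # 1.0e-05
```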
https://en.wikipedia.org/wiki/Process_safety
Process simulation is used for the design, development, analysis, and optimization of technical processes such as chemical plants , chemical processes , environmental systems, power stations , complex manufacturing operations, biological processes, and similar technical functions. Process simulation is a model -based representation of chemical , physical , biological , and other technical processes and unit operations in software. Basic prerequisites for the model are the chemical and physical properties [ 1 ] of pure components and mixtures and of reactions, together with mathematical models which, in combination, allow the calculation of process properties by the software. Process simulation software describes processes in flow diagrams where unit operations are positioned and connected by product or educt streams. The software solves the mass and energy balance to find a stable operating point for the specified parameters. The goal of a process simulation is to find optimal conditions for a process; this is essentially an optimization problem which has to be solved iteratively. In a typical distillation-column example, the feed stream to the column is defined in terms of its chemical and physical properties. This includes the composition of individual molecular species in the stream, the overall mass flowrate, and the stream's pressure and temperature. For hydrocarbon systems, the vapor–liquid equilibrium ratios (K-values), or the models used to define them, are specified by the user. The properties of the column are defined, such as the inlet pressure and the number of theoretical plates. The duties of the reboiler and the overhead condenser are calculated by the model to achieve a specified composition or other parameter of the bottom and/or top product. The simulation calculates the chemical and physical properties of the product streams; each is assigned a unique number which is used in the mass and energy diagram. Process simulation uses models which introduce approximations and assumptions but allow the description of a property over a wide range of temperatures and pressures which might not be covered by available real data. Models also allow interpolation and extrapolation, within certain limits, and enable the search for conditions outside the range of known properties. The development of models [ 2 ] for a better representation of real processes is the core of the further development of simulation software. Model development is done through the principles of chemical engineering, but also of control engineering and of the improvement of mathematical simulation techniques. Process simulation is therefore a field where practitioners from chemistry , physics , computer science , mathematics , and engineering work together. Efforts are made to develop new and improved models for the calculation of properties. There are two main types of models: equations and correlations whose parameters are fitted to experimental data, and predictive methods that estimate properties. The equations and correlations are normally preferred because they describe the property (almost) exactly. To obtain reliable parameters it is necessary to have experimental data, which are usually obtained from factual data banks [ 3 ] [ 4 ] or, if no data are publicly available, from measurements . Using predictive methods is more cost-effective than experimental work, and also than obtaining data from data banks. 
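The iterative solution of a mass balance can be illustrated with a deliberately tiny example: a reactor with partial conversion and a recycle stream, solved by successive substitution until the torn stream stops changing. This sketch is illustrative only; the flowsheet, conversion, and separator split are made-up numbers, and commercial simulators use far more sophisticated thermodynamics and convergence methods.

```python
# Steady-state mass balance for: fresh feed -> [mixer] -> reactor -> separator.
# The separator sends unreacted material back to the mixer (recycle).
fresh_feed = 100.0       # kmol/h of reactant A
conversion = 0.40        # fraction of A converted per reactor pass
recycle_split = 0.90     # fraction of unreacted A recycled (rest is purged)

recycle = 0.0            # initial guess for the torn recycle stream
for iteration in range(100):
    reactor_inlet = fresh_feed + recycle
    unreacted = reactor_inlet * (1.0 - conversion)
    new_recycle = unreacted * recycle_split
    if abs(new_recycle - recycle) < 1e-9:    # converged operating point
        break
    recycle = new_recycle                    # successive substitution

print(f"converged after {iteration} iterations")
print(f"recycle = {recycle:.2f} kmol/h, reactor inlet = {reactor_inlet:.2f} kmol/h")
```

Convergence here follows from the recycle loop attenuating the stream on every pass (each iteration multiplies the change by 0.6 x 0.9 = 0.54); real flowsheets with tighter recycles often need acceleration methods such as Wegstein's.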
Despite this advantage, predicted properties are normally used only in the early stages of process development, to find first approximate solutions and to exclude false pathways, because these estimation methods normally introduce higher errors than correlations obtained from real data. Process simulation has also encouraged the further development of mathematical models in the field of numerics and the solving of complex problems. [ 5 ] [ 6 ] The history of process simulation is related to the development of computer science, computer hardware, and programming languages. Early implementations of partial aspects of chemical processes were introduced in the 1970s, when suitable hardware and software (here mainly the programming languages FORTRAN and C ) became available. The modelling of chemical properties began much earlier; notably, cubic equations of state and the Antoine equation were precursory developments of the 19th century. Initially, process simulation was used to simulate steady-state processes. Steady-state models perform a mass and energy balance of a steady-state process (a process in an equilibrium state), independent of time. Dynamic simulation is an extension of steady-state process simulation whereby time dependence is built into the models via derivative terms, i.e. the accumulation of mass and energy. The advent of dynamic simulation means that the time-dependent description, prediction, and control of real processes in real time has become possible. This includes the description of starting up and shutting down a plant, changes of conditions during a reaction, holdups, thermal changes, and more. Dynamic simulations require increased calculation time and are mathematically more complex than a steady-state simulation. A dynamic simulation can be seen as a repeated steady-state simulation (based on a fixed time step) with constantly changing parameters. Dynamic simulation can be used both online and offline. The online case is model predictive control, where the real-time simulation results are used to predict the changes that would occur for a control input change, and the control parameters are optimised based on the results. Offline process simulation can be used in the design, troubleshooting, and optimisation of process plants, as well as in the conduct of case studies to assess the impact of process modifications. Dynamic simulation is also used for operator training .
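The derivative (accumulation) term that distinguishes dynamic from steady-state simulation can be shown with a single tank: dM/dt = inflow - outflow, integrated with an explicit Euler step. The tank parameters and the linear outflow law below are made-up illustrative choices, not taken from any particular simulator.

```python
# Dynamic mass balance for a liquid tank: dM/dt = m_in - m_out.
# Outflow is assumed proportional to holdup (a simple linear valve law).
m_in = 2.0        # kg/s, constant inflow
k_out = 0.05      # 1/s, outflow coefficient: m_out = k_out * M
M = 10.0          # kg, initial holdup
dt = 1.0          # s, fixed time step

for step in range(200):
    m_out = k_out * M
    M += dt * (m_in - m_out)      # explicit Euler integration of the accumulation

print(f"holdup after {200 * dt:.0f} s: {M:.1f} kg")
# The holdup approaches the steady state m_in / k_out = 40 kg,
# i.e. the operating point where a steady-state simulator would start.
```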
https://en.wikipedia.org/wiki/Process_simulation
Process supervision is a form of operating system service management in which a master process remains the parent of the service processes it starts. Benefits [ 1 ] compared to traditional process launchers and system boot mechanisms, like System V init , include:
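Whatever the particular benefits, the core mechanism is simple enough to sketch: because the supervisor remains the parent, it learns of a service's exit directly from wait() and can restart it immediately, with no PID files. The following is a minimal, hypothetical Python toy, not the implementation of daemontools, runit, or any other real supervision suite.

```python
import subprocess
import time

def supervise(command, restart_delay=1.0):
    """Keep `command` running. The supervisor remains the parent process,
    so a child's exit is observed directly via wait(), with no PID files."""
    while True:
        child = subprocess.Popen(command)
        returncode = child.wait()          # blocks until the service exits
        print(f"service exited with {returncode}; restarting in {restart_delay}s")
        time.sleep(restart_delay)          # avoid a tight respawn loop

# Example: supervise a short-lived placeholder "service".
supervise(["sleep", "5"])
```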
https://en.wikipedia.org/wiki/Process_supervision
In a network based on packet switching , processing delay is the time it takes routers to process the packet header . Processing delay is a key component in network delay . During processing of a packet, routers may check for bit-level errors in the packet that occurred during transmission, as well as determine the packet's next destination. Processing delays in high-speed routers are typically on the order of microseconds or less. After this nodal processing, the router directs the packet to the queue, where further delay can occur ( queuing delay ). In the past, processing delay was ignored as insignificant compared to the other forms of network delay. However, in some systems the processing delay can be quite large, especially where routers are performing complex encryption algorithms and examining or modifying packet content. [ 1 ] Deep packet inspection , done by some networks, examines packet content for security, legal, or other reasons; it can cause very large delays and thus is only done at selected inspection points. Routers performing network address translation also have higher than normal processing delay because they need to examine and modify both incoming and outgoing packets. This article related to telecommunications is a stub . You can help Wikipedia by expanding it .
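Processing delay is one of the four standard per-hop components of network delay found in networking textbooks; the total nodal delay decomposes as

```latex
d_{\mathrm{nodal}} = d_{\mathrm{proc}} + d_{\mathrm{queue}} + d_{\mathrm{trans}} + d_{\mathrm{prop}}
```

where d_proc is the processing delay discussed here, d_queue is the queuing delay, d_trans = L/R is the transmission delay for a packet of L bits on a link of rate R bits per second, and d_prop is the propagation delay across the physical medium.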
https://en.wikipedia.org/wiki/Processing_delay
In industrial engineering , a processing medium is a gaseous, vaporous, fluid, or shapeless solid material that plays an active role in manufacturing processes, comparable to that of a tool. A processing medium for washing is a soap solution, a processing medium for steel melting is a plasma, and a processing medium for steam drying is superheated steam . This article about a mechanical engineering topic is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Processing_medium
Data processing modes or computing modes are classifications of different types of computer processing. [ 1 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Processing_mode
In stereochemistry , prochiral molecules are those that can be converted from achiral to chiral in a single step , such as by changing one atom. [ 1 ] [ 2 ] An achiral species which can be converted to a chiral one in two steps is called proprochiral . [ 2 ] A molecule having only one plane of symmetry, or an inversion point and no plane of symmetry, is prochiral if it is possible to change one of the two sides or to destroy the symmetry in another way. But a molecule with more symmetry, such as ethane , may require two substitutions to become chiral, and is thus proprochiral; methane requires three substitutions to become chiral. If two identical substituents are attached to an sp 3 -hybridized atom , the descriptors pro -R and pro -S are used to distinguish between the two. Promoting the pro -R substituent to higher priority than the other identical substituent results in an R chirality center at the original sp 3 -hybridized atom, and analogously for the pro -S substituent. A trigonal planar sp 2 -hybridized atom can be converted to a chiral center when a substituent is added to the re or si (from Latin rectus ' right ' and sinister ' left ' ) face of the molecule. A face is labeled re if, when looking at that face, the substituents at the trigonal atom are arranged in increasing Cahn–Ingold–Prelog priority order (1 to 2 to 3) in a clockwise order, and si if the priorities increase in an anticlockwise order; note that the designation of the resulting chiral center as S or R depends on the priority of the incoming group. [ 3 ] [ 4 ] The concept of prochirality is necessary for understanding some aspects of enzyme stereospecificity . Alexander Ogston [ 5 ] pointed out that when a symmetrical molecule is placed in an asymmetric environment, such as the surface of an enzyme , supposedly identically placed groups become distinguishable. In this way he showed that the earlier exclusion of non-chiral citrate as a possible intermediate in the tricarboxylate cycle was mistaken. Another biochemical example of prochirality is glycerol . It is achiral, but when it is phosphorylated (at carbon number 3 in stereospecific numbering ) the molecule becomes the chiral glycerol 3-phosphate , also called L-α-glycerophosphoric acid. A triacylglycerol having the same fatty acid at carbon 1 and carbon 3 is achiral, but when one of these two is released by hydrolysis, the resulting diacylglycerol is chiral.
https://en.wikipedia.org/wiki/Prochirality
Prochloraz , brand name Sportak , is an imidazole fungicide that was introduced in 1978 [ 3 ] and is widely used in Europe , Australia , Asia , and South America in gardening and agriculture to control the growth of fungi . [ 4 ] [ 5 ] It is not registered for use in the United States . [ 5 ] Like other azole fungicides, prochloraz is an inhibitor of the enzyme lanosterol 14α-demethylase (CYP51A1), which is necessary for the production of ergosterol – an essential component of the fungal cell membrane – from lanosterol . [ 6 ] The agent is a broad-spectrum , protective and curative fungicide, effective against Alternaria spp., Botrytis spp., Erysiphe spp., Helminthosporium spp., Fusarium spp., Pseudocerosporella spp., Pyrenophora spp., Rhynchosporium spp., and Septoria spp. [ 5 ] [ 2 ] Like many imidazole and triazole fungicides and antifungal medications, prochloraz is not particularly selective in its actions. [ 4 ] [ 6 ] In addition to inhibiting lanosterol 14α-demethylase, prochloraz has also been found to act as an antagonist of the androgen and estrogen receptors , as an agonist of the aryl hydrocarbon receptor , and as an inhibitor of enzymes in the steroidogenesis pathway, such as CYP17A1 and aromatase . [ 4 ] [ 6 ] In accordance with this, it has been shown to produce reproductive malformations in mice. [ 4 ] [ 6 ] As such, prochloraz is considered to be an endocrine disruptor . [ 4 ] [ 6 ]
https://en.wikipedia.org/wiki/Prochloraz
Prochlorococcus is a genus of very small (0.6 μm ) marine cyanobacteria with an unusual pigmentation ( chlorophyll a2 and b2 ). These bacteria belong to the photosynthetic picoplankton and are probably the most abundant photosynthetic organism on Earth. Prochlorococcus microbes are among the major primary producers in the ocean, responsible for a large percentage of the photosynthetic production of oxygen . [ 1 ] [ 2 ] Prochlorococcus strains, called ecotypes, have physiological differences enabling them to exploit different ecological niches. [ 3 ] Analysis of the genome sequences of Prochlorococcus strains shows that 1,273 [ 4 ] genes are common to all strains, and the average genome size is about 2,000 genes . [ 1 ] In contrast, eukaryotic algae have over 10,000 genes. [ 4 ] Although there had been several earlier records of very small chlorophyll- b -containing cyanobacteria in the ocean, [ 5 ] [ 6 ] Prochlorococcus was discovered in 1986 [ 7 ] by Sallie W. (Penny) Chisholm of the Massachusetts Institute of Technology , Robert J. Olson of the Woods Hole Oceanographic Institution , and other collaborators in the Sargasso Sea using flow cytometry . Chisholm was awarded the Crafoord Prize in 2019 for the discovery. [ 8 ] The first culture of Prochlorococcus was isolated in the Sargasso Sea in 1988 ( strain SS120), and shortly afterwards another strain was obtained from the Mediterranean Sea (strain MED). The name Prochlorococcus [ 9 ] originated from the fact that it was originally assumed that Prochlorococcus was related to Prochloron and other chlorophyll- b -containing bacteria, called prochlorophytes, but it is now known that prochlorophytes form several separate phylogenetic groups within the cyanobacteria subgroup of the bacteria domain. The only described species within the genus is Prochlorococcus marinus , although two subspecies have been named for low-light and high-light adapted niche variations. [ 10 ] Marine cyanobacteria are to date the smallest known photosynthetic organisms; Prochlorococcus is the smallest, at just 0.5 to 0.7 micrometres in diameter. [ 11 ] [ 2 ] The coccoid-shaped cells are non-motile and free-living. Their small size and large surface-area-to-volume ratio give them an advantage in nutrient-poor water; indeed, it is assumed that Prochlorococcus have very small nutrient requirements. [ 12 ] Moreover, Prochlorococcus have adapted to use sulfolipids instead of phospholipids in their membranes to survive in phosphate-deprived environments. [ 13 ] This adaptation allows them to avoid competition with heterotrophs that are dependent on phosphate for survival. [ 13 ] Typically, Prochlorococcus divide once a day in the subsurface layer of oligotrophic waters. [ 12 ] Prochlorococcus is abundant in the euphotic zone of the world's tropical oceans. [ 14 ] It is possibly the most plentiful genus on Earth: a single millilitre of surface seawater may contain 100,000 cells or more. Worldwide, the average yearly abundance is (2.8 to 3.0) × 10 27 individuals [ 15 ] (for comparison, that is approximately the number of atoms in a ton of gold ). Prochlorococcus is ubiquitous between 40°N and 40°S and dominates in the oligotrophic (nutrient-poor) regions of the oceans. [ 12 ] Prochlorococcus is mostly found in a temperature range of 10–33 °C, and some strains can grow at depths with low light (<1% surface light). [ 1 ] These strains are known as LL (Low Light) ecotypes, with strains that occupy shallower depths in the water column known as HL (High Light) ecotypes. [ 16 ] 
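The gold comparison above can be verified with quick arithmetic: using the molar mass of gold (about 197 g/mol) and Avogadro's constant, a metric ton (10^6 g) of gold contains

```latex
N_{\mathrm{Au}} \approx \frac{10^{6}\ \mathrm{g}}{197\ \mathrm{g\,mol^{-1}}} \times 6.022 \times 10^{23}\ \mathrm{mol^{-1}} \approx 3.1 \times 10^{27}\ \text{atoms,}
```

the same order of magnitude as the (2.8 to 3.0) × 10^27 Prochlorococcus cells estimated worldwide.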
Furthermore, Prochlorococcus are more plentiful in the presence of heterotrophs that have catalase abilities. [ 17 ] Prochlorococcus do not have mechanisms to degrade reactive oxygen species and rely on heterotrophs to protect them. [ 17 ] The bacterium accounts for an estimated 13–48% of the global photosynthetic production of oxygen , and forms part of the base of the ocean food chain . [ 18 ] Prochlorococcus is closely related to Synechococcus , another abundant photosynthetic cyanobacterium, which contains the light-harvesting antennae called phycobilisomes . However, Prochlorococcus has evolved to use a unique light-harvesting complex, consisting predominantly of divinyl derivatives of chlorophyll a (Chl a2) and chlorophyll b (Chl b2) and lacking monovinyl chlorophylls and phycobilisomes. [ 19 ] Prochlorococcus is the only known wild-type oxygenic phototroph that does not contain Chl a as a major photosynthetic pigment, and is the only known prokaryote with α-carotene. [ 20 ] The genomes of several strains of Prochlorococcus have been sequenced. [ 21 ] [ 22 ] Twelve complete genomes have been sequenced, revealing physiologically and genetically distinct lineages of Prochlorococcus marinus that are 97% similar in the 16S rRNA gene. [ 23 ] Research has shown that a massive genome reduction occurred during the Neoproterozoic Snowball Earth , which was followed by population bottlenecks . [ 24 ] The high-light ecotype has the smallest genome (1,657,990 base pairs, 1,716 genes) of any known oxygenic phototroph, but the genome of the low-light type is much larger (2,410,873 base pairs, 2,275 genes). [ 21 ] Marine Prochlorococcus cyanobacteria have several genes that function in DNA recombination , repair, and replication. These include the recBCD gene complex, whose product, exonuclease V, functions in the recombinational repair of DNA, and the umuCD gene complex, whose product, DNA polymerase V , functions in error-prone DNA replication. [ 25 ] These cyanobacteria also have the gene lexA , which regulates an SOS response system, probably a system like the well-studied E. coli SOS system that is employed in the response to DNA damage . [ 25 ] Ancestors of Prochlorococcus contributed to the production of early atmospheric oxygen. [ 26 ] Despite Prochlorococcus being one of the smallest types of marine phytoplankton in the world's oceans, its substantial numbers make it responsible for a major part of the ocean's photosynthesis and the world's oxygen production. [ 2 ] The size of Prochlorococcus (0.5 to 0.7 μm) [ 12 ] and the adaptations of its various ecotypes allow the organism to grow abundantly in low-nutrient waters such as those of the tropics and subtropics (c. 40°N to 40°S); [ 27 ] however, it can be found at latitudes as high as 60° north, though at fairly minimal concentrations, and the bacterium's distribution across the oceans suggests that colder waters could be fatal to it. This wide range of latitude, along with the bacterium's ability to survive at depths of up to 100 to 150 metres (i.e. the average depth of the mixed layer of the surface ocean), allows it to grow to enormous numbers, up to 3 × 10 27 individuals worldwide. [ 15 ] This enormous number means that Prochlorococcus plays an important role in the global carbon cycle and oxygen production. 
Along with Synechococcus (another genus of cyanobacteria that co-occurs with Prochlorococcus ), these cyanobacteria are responsible for approximately 50% of marine carbon fixation, making them an important carbon sink via the biological carbon pump (i.e. the transfer of organic carbon from the surface ocean to the deep via several biological, physical, and chemical processes). [ 28 ] The abundance, distribution, and other characteristics of Prochlorococcus make it a key organism in oligotrophic waters, serving as an important primary producer for open-ocean food webs. Prochlorococcus has different "ecotypes" occupying different niches, which can vary by pigments, light requirements, nitrogen and phosphorus utilization, copper tolerance, and virus sensitivity. [ 29 ] [ 11 ] [ 21 ] It is thought that Prochlorococcus may comprise potentially 35 different ecotypes and sub-ecotypes within the world's oceans. They can be differentiated on the basis of the sequence of the ribosomal RNA gene. [ 11 ] [ 29 ] NCBI Taxonomy divides the species into two subspecies, low-light adapted (LL) and high-light adapted (HL). [ 10 ] There are six clades within each subspecies. [ 11 ] Prochlorococcus marinus subsp. marinus is associated with low-light adapted types. [ 10 ] It is further classified into sub-ecotypes LLI–LLVII, where LLII/III has not yet been phylogenetically uncoupled. [ 11 ] [ 30 ] LLV species are found in highly iron-scarce locations around the equator and, as a result, have lost several ferric proteins. [ 31 ] The low-light adapted subspecies is otherwise known to have a higher ratio of chlorophyll b2 to chlorophyll a2, [ 29 ] which aids its ability to absorb blue light. [ 32 ] Blue light is able to penetrate ocean waters deeper than the rest of the visible spectrum and can reach depths of >200 m, depending on the turbidity of the water. The ability to photosynthesize at depths where blue light penetrates allows the low-light types to inhabit depths between 80 and 200 m. [ 23 ] [ 33 ] Their genomes can range from 1,650,000 to 2,600,000 base pairs in size. [ 30 ] Prochlorococcus marinus subsp. pastoris is associated with high-light adapted types. [ 10 ] It can be further classified into sub-ecotypes HLI–HLVI. [ 30 ] [ 11 ] HLIII, like LLV, is also located in an iron-limited environment near the equator, with similar ferric adaptations. [ 31 ] The high-light adapted subspecies is otherwise known to have a low ratio of chlorophyll b2 to chlorophyll a2. [ 29 ] High-light adapted strains inhabit depths between 25 and 100 m. [ 23 ] Their genomes can range from 1,640,000 to 1,800,000 base pairs in size. [ 30 ] Most cyanobacteria are known to have an incomplete tricarboxylic acid (TCA) cycle. [ 34 ] [ 35 ] In this variant, 2-oxoglutarate decarboxylase (2OGDC) and succinic semialdehyde dehydrogenase (SSADH) replace the enzyme 2-oxoglutarate dehydrogenase (2-OGDH). [ 35 ] Normally, via this enzyme complex together with NADP+, 2-oxoglutarate (2-OG) can be converted to succinate. [ 35 ] This pathway is non-functional in Prochlorococcus , [ 35 ] as succinate dehydrogenase has been lost evolutionarily, conserving energy that might otherwise have been spent on phosphate metabolism. [ 36 ] Table modified from [ 30 ]
https://en.wikipedia.org/wiki/Prochlorococcus
Prochymal is a stem cell therapy made by Osiris Therapeutics . It is the first stem cell therapy approved by Canada, and the first therapy approved by Canada for acute graft-versus-host disease (GvHD). [ 1 ] Also known as remestemcel-L, Prochymal was sold to Australia-based Mesoblast in 2013, [ 2 ] at which time its brand name was changed to Ryoncil. It is an allogeneic stem cell therapy based on mesenchymal stem cells (also called medicinal signalling cells, mesenchymal stromal cells , or MSCs [ 3 ] ) derived from the bone marrow of adult donors. MSCs are purified from the marrow, cultured and packaged, with up to 10,000 doses derived from a single donor. The doses are stored frozen until needed. [ 4 ] In May 2012 Health Canada approved the use of remestemcel-L (Prochymal) for the management of acute GvHD in children who are unresponsive to steroids, with the approval conditional upon further trials being conducted. [ 1 ] [ 5 ] Separately, a pilot study of Ryoncil on ventilator-assisted COVID-19 patients with acute respiratory distress syndrome provided sufficient evidence for the FDA to approve a Phase 2/3 placebo-controlled trial on 300 patients. That trial commenced enrollment on May 5, 2020. It is being overseen by Mount Sinai Hospital and the Cardiothoracic Clinical Trials Network and funded by the National Institutes of Health . [ 6 ] Preliminary results of a phase III trial for GvHD were released in September 2009. [ 7 ] This biotechnology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Prochymal
Proclus Lycius ( / ˈ p r ɒ k l ə s l aɪ ˈ s iː ə s / ; 8 February 412 – 17 April 485), called Proclus the Successor ( Ancient Greek : Πρόκλος ὁ Διάδοχος , Próklos ho Diádokhos ), was a Greek Neoplatonist philosopher , one of the last major classical philosophers of late antiquity . He set forth one of the most elaborate and fully developed systems of Neoplatonism and, through later interpreters and translators, exerted an influence on Byzantine philosophy , early Islamic philosophy , scholastic philosophy , and German idealism , especially G. W. F. Hegel , who called Proclus's Platonic Theology "the true turning point or transition from ancient to modern times, from ancient philosophy to Christianity." [ 1 ] The primary source for the life of Proclus is the eulogy Proclus, or On Happiness , written for him upon his death by his successor, Marinus . [ 2 ] Marinus' biography set out to prove that Proclus reached the peak of virtue and attained eudaimonia . [ 2 ] A few details about the time in which he lived can also be found in the similarly structured Life of Isidore , written by the philosopher Damascius in the following century. [ 2 ] According to Marinus, [ 3 ] Proclus was born in 412 AD in Constantinople to a family of high social status from Lycia , and was raised in Xanthus . He studied rhetoric , philosophy and mathematics in Alexandria , with the intent of pursuing a judicial position like his father. Before completing his studies, he returned to Constantinople when his rector, his principal instructor (one Leonas), had business there. [ 4 ] Proclus became a successful practicing lawyer. However, the experience of legal practice made him realize that he truly preferred philosophy. He returned to Alexandria and began determinedly studying the works of Aristotle under Olympiodorus the Elder . During this period he also began studying mathematics with a teacher named Heron (no relation to Hero of Alexandria , who was also known as Heron). As a gifted student, he eventually became dissatisfied with the level of philosophical instruction available in Alexandria and went in 431 to Athens , the philosophical center of the day, to study at the Neoplatonic successor of the New Academy , where he was taught by Plutarch of Athens (not to be confused with Plutarch of Chaeronea ), Syrianus , and Asclepigenia . He succeeded Syrianus [ 5 ] as head of the Academy in 437 and would in turn be succeeded on his death by Marinus of Neapolis . He lived in Athens as a vegetarian bachelor, prosperous and generous to his friends, until the end of his life, except for a one-year exile undertaken to avoid pressure from Christian authorities. [ 2 ] Marinus reports that he was writing seven hundred lines each day. One challenge in determining Proclus' specific doctrines is that the Neoplatonists of his time did not consider themselves innovators; they believed themselves to be the transmitters of the correct interpretations of Plato himself. [ 6 ] Although Neoplatonic doctrines differ considerably from those in Plato's dialogues, it is often difficult to distinguish between different Neoplatonic thinkers and to determine what is original to each one. [ 6 ] In Proclus' case, such comparison is largely possible only with Plotinus , the only other Neoplatonic writer for whom a significant body of writings survives.
[ 6 ] Proclus, like Plotinus and many of the other Neoplatonists , agreed on the three hypostases of Neoplatonism: The One ( hen ), The Intellect ( nous ) and The Soul ( psyche ), and wrote a commentary on the Enneads , of which unfortunately only fragments survive. At other times he criticizes Plotinus' views, such as his account of the prime mover . [ 6 ] Unlike Plotinus, Proclus also did not hold that matter was evil, an idea that caused contradictions in the system of Plotinus. [ 6 ] It is difficult to determine what, if anything, is different between the doctrines of Proclus and Syrianus: for the latter, only a commentary on Aristotle's Metaphysics survives, and Proclus never criticizes his teacher in any of his preserved writings. [ 6 ] The particular characteristic of Proclus's system is his elaboration of a level of individual ones, called henads , between the One, which is before being, and the intelligible divinity. [ 6 ] The henads exist "superabundantly", also beyond being, but they stand at the head of chains of causation ( seirai ) and in some manner give to these chains their particular character. [ 6 ] He identifies them with the Greek gods, so one henad might be Apollo and be the cause of all things apollonian, while another might be Helios and be the cause of all sunny things. Each henad participates in every other henad, according to its character. What appears to be multiplicity is not multiplicity at all, because any henad may rightly be considered the center of the polycentric system. [ citation needed ] According to Proclus, philosophy is the activity which can liberate the soul from a subjection to bodily passions, remind it of its origin in Soul, Intellect, and the One, and prepare it not only to ascend to the higher levels while still in this life, but to avoid falling immediately back into a new body after death. [ citation needed ] Because the soul's attention, while inhabiting a body, is turned so far away from its origin in the intelligible world, Proclus thinks that we need to make use of bodily reminders of our spiritual origin. [ citation needed ] In this he agrees with the doctrines of theurgy put forward by Iamblichus . Theurgy is possible because the powers of the gods (the henads ) extend through their series of causation even down to the material world. [ citation needed ] And by certain power-laden words, acts, and objects, the soul can be drawn back up the series, so to speak. Proclus himself was a devotee of many of the religions in Athens, considering that the power of the gods could be present in these various approaches. [ citation needed ] The majority of Proclus's works are commentaries on dialogues of Plato ( Alcibiades , Cratylus , Parmenides , Republic , Timaeus ). [ 5 ] In these commentaries, he presents his own philosophical system as a faithful interpretation of Plato, and in this he did not differ from other Neoplatonists, as he considered that "nothing in Plato's corpus is unintended or there by chance", that "Plato's writings were divinely inspired" (ὁ θεῖος Πλάτων ho theios Platon —the divine Plato, inspired by the gods), that "the formal structure and the content of Platonic texts imitated those of the universe", [ 7 ] and therefore that they spoke often of things under a veil, hiding the truth from the philosophically uninitiated. Proclus was however a close reader of Plato, and quite often makes very astute points about his Platonic sources.
In his commentary on Plato's Timaeus , Proclus explains the role that the Soul, as a principle, plays in mediating the Forms in Intellect to the body of the material world as a whole. The Soul is constructed through certain proportions, described mathematically in the Timaeus , which allow it to make Body as a divided image of its own arithmetical and geometrical ideas. In addition to his commentaries, Proclus wrote two major systematic works. [ 8 ] The Elements of Theology (Στοιχείωσις θεολογική) consists of 211 propositions, each followed by a proof, beginning from the existence of the One (divine Unity) and ending with the descent of individual souls into the material world. The Platonic Theology (Περὶ τῆς κατὰ Πλάτωνα θεολογίας) is a systematization of material from Platonic dialogues, showing from them the characteristics of the divine orders, the part of the universe which is closest to the One. We also have three essays, extant only in Latin translation: Ten doubts concerning providence ( De decem dubitationibus circa providentiam ); On providence and fate ( De providentia et fato ); On the existence of evils ( De malorum subsistentia ). [ 8 ] Proclus, the scholiast to Euclid, knew Eudemus of Rhodes ' History of Geometry well, and gave a short sketch of the early history of geometry, which appeared to be founded on the older, lost book of Eudemus. The passage has been referred to as "the Eudemian summary," and determines some approximate dates, which otherwise might have remained unknown. [ 9 ] His commentary on the first book of Euclid 's Elements is one of the most valuable sources we have for the history of ancient mathematics, [ 10 ] and its Platonic account of the status of mathematical objects was influential. In this work, Proclus also listed the first mathematicians associated with Plato: a mature set of mathematicians ( Leodamas of Thasos , Archytas of Taras , and Theaetetus ), a second set of younger mathematicians ( Neoclides , Eudoxus of Cnidus ), and a third yet younger set ( Amyntas , Menaechmus and his brother Dinostratus , Theudius of Magnesia , Hermotimus of Colophon and Philip of Opus ). Some of these mathematicians were influential in arranging the Elements that Euclid later published. Proclus also authored a theology of Plato, a text concerned with the divine hierarchies and their complex ramifications. He wrote a commentary on the Works and Days of Hesiod (incomplete) and some scholia on Homer . [ 5 ] A number of his Platonic commentaries are lost. In addition to the Alcibiades, the Cratylus, the Timaeus, and the Parmenides, he also wrote commentaries on the remainder of the dialogues in the Neoplatonic curriculum. [ 11 ] He also wrote a commentary on the Organon , as well as prolegomena to both Plato and Aristotle. [ 11 ] Proclus exerted a great deal of influence on Medieval philosophy , though largely indirectly, through the works of the commentator Pseudo-Dionysius the Areopagite . [ 12 ] This late-5th- or early-6th-century Christian Greek author wrote under the pseudonym Dionysius the Areopagite , the figure converted by St. Paul in Athens. Because of this pseudonym, his writings were taken to have almost apostolic authority. He is an original Christian writer, and in his works can be found a great number of Proclus's metaphysical principles. [ 13 ] Another important source for the influence of Proclus on the Middle Ages is Boethius 's Consolation of Philosophy , which contains a number of Proclean principles and motifs.
[ citation needed ] The central poem of Book III is a summary of Proclus's Commentary on the Timaeus , [ citation needed ] and Book V contains the important principle of Proclus that things are known not according to their own nature, but according to the character of the knowing subject. [ 12 ] A summary of Proclus's Elements of Theology circulated under the name Liber de Causis ( Book of Causes ). [ 12 ] This book is of uncertain origin, but circulated in the Arabic world as a work of Aristotle, and was translated into Latin as such. [ 12 ] It had great authority because of its supposed Aristotelian origin, and it was only when Proclus's Elements were translated into Latin that Thomas Aquinas realised its true origin. [ 12 ] Proclus's works also exercised an influence during the Renaissance through figures such as Nicholas of Cusa and Marsilio Ficino . The most significant early scholar of Proclus in the English-speaking world was Thomas Taylor , who produced English translations of most of his works. [ 12 ] The crater Proclus on the Moon is named after him.
https://en.wikipedia.org/wiki/Proclus
Proclus ( Greek : Πρόκλος ) or Proculeius , son of the physician Themison , was a hierophant at Laodiceia in Syria . According to the Suda , he was the author of several works. [ 1 ] He is also mentioned by Damascius in a commentary on Plato . [ 2 ] Although a commentary on the Pythagorean Golden Verses , known through a translation into Arabic (in the El Escorial library as manuscript 888), has sometimes been attributed to this Proclus (following a theory promoted by Leendert Gerrit Westerink [ nl ] ), this is disputed, and a more widely accepted theory is that the commentary is instead by Proclus Diadochus . [ 2 ] This article incorporates text from a publication now in the public domain : Mason, Charles Peter (1870). "Proclus (Πρόκλος), literary". In Smith, William (ed.). Dictionary of Greek and Roman Biography and Mythology . Vol. 3. p. 533.
https://en.wikipedia.org/wiki/Proclus_of_Laodicea
A Procrustes transformation is a geometric transformation that involves only translation , rotation , uniform scaling , or a combination of these transformations . Hence, it may change the size, position, and orientation of a geometric object, but not its shape . The Procrustes transformation is named after the mythical Greek robber Procrustes [ 1 ] who made his victims fit his bed either by stretching their limbs or cutting them off. This geometry-related article is a stub . You can help Wikipedia by expanding it .
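As an illustration, the optimal Procrustes transformation between two sets of corresponding points can be computed in closed form with the singular value decomposition. The following is a minimal Python/NumPy sketch (the function name and test values are invented for illustration); it recovers the translation, rotation, and uniform scale that best map one point set onto another in the least-squares sense:

import numpy as np

def procrustes_align(A, B):
    # Find scale s, rotation R, translation t minimizing ||s * A @ R + t - B||.
    muA, muB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - muA, B - muB                 # translation: center both sets
    U, S, Vt = np.linalg.svd(A0.T @ B0)       # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))        # +1 for rotation, -1 for reflection
    D = np.diag([1.0] * (len(S) - 1) + [d])   # flip last axis if needed
    R = U @ D @ Vt                            # optimal proper rotation
    s = (S * np.diag(D)).sum() / (A0 ** 2).sum()  # optimal uniform scale
    t = muB - s * muA @ R                     # optimal translation
    return s, R, t

# Recover a known transform: rotate 90 degrees, scale by 2, then translate.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 2))
B = 2.0 * A @ np.array([[0.0, -1.0], [1.0, 0.0]]) + np.array([3.0, 1.0])
s, R, t = procrustes_align(A, B)
print(np.allclose(s * A @ R + t, B))  # True

The determinant check restricts the solution to proper rotations, excluding reflections, so the recovered transformation preserves shape as described above.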
https://en.wikipedia.org/wiki/Procrustes_transformation
The Procurement G6 [ 1 ] is an informal group of six national central purchasing bodies. It is also known as the Multilateral Meeting on Government Procurement (MMGP). Its member bodies share experiences about government procurement and have held periodic meetings.
https://en.wikipedia.org/wiki/Procurement_G6
Procyanidins are members of the proanthocyanidin (or condensed tannin ) class of flavonoids . They are oligomeric compounds formed from catechin and epicatechin molecules, and they yield cyanidin when depolymerized under oxidative conditions. Procyanidins, including the less bioactive and less bioavailable polymers (four or more catechin units), represent a group of condensed flavan-3-ols that can be found in many plants, most notably apples , maritime pine bark, cinnamon , aronia fruit, cocoa beans , grape seed , grape skin , [ 1 ] and red wines of Vitis vinifera (the common grape). [ 2 ] However, bilberry , cranberry , black currant , green tea , black tea , and other plants also contain these flavonoids. [ 3 ] Procyanidins can also be isolated from Quercus petraea and Q. robur heartwood (wine barrel oaks ). [ 4 ] Açaí oil , obtained from the fruit of the açaí palm ( Euterpe oleracea ), is rich in numerous procyanidin oligomers . [ 5 ] Apples contain on average per serving about eight times the amount of procyanidin found in wine, with some of the highest amounts found in the Red Delicious and Granny Smith varieties. [ 6 ] The seed testas of field beans ( Vicia faba ) contain procyanidins [ 7 ] that affect digestibility in piglets [ 8 ] and could have inhibitory activity on enzymes . [ 9 ] Cistus salviifolius also contains oligomeric procyanidins. [ 10 ] Condensed tannins can be characterised by a number of techniques, including depolymerisation , asymmetric flow field flow fractionation and small-angle X-ray scattering . DMACA is a dye used for localization of procyanidin compounds in plant histology ; use of the reagent results in blue staining. [ 11 ] It can also be used to titrate procyanidins. Total phenols (or antioxidant effect) can be measured using the Folin-Ciocalteu reaction . Results are typically expressed as gallic acid equivalents (GAE). Procyanidins from field beans ( Vicia faba ) [ 12 ] or barley [ 13 ] have been estimated using the vanillin-HCl method , which produces a red color in the presence of catechin or proanthocyanidins. Procyanidins can be titrated using the Procyanidolic Index (also called the Bates-Smith Assay ), a testing method that measures the change in color when the product is mixed with certain chemicals: the greater the color change, the higher the PCO content. However, the Procyanidolic Index is a relative value that can measure well over 100. Unfortunately, a Procyanidolic Index of 95 was erroneously taken by some to mean 95% PCO and began appearing on the labels of finished products. All current methods of analysis suggest that the actual PCO content of these products is much lower than 95%. [ 14 ] [ unreliable medical source? ] An improved colorimetric test, called the Porter Assay or butanol-HCl-iron method , is the most common PCO assay currently in use. [ 15 ] [ self-published source? ] The unit of measurement of the Porter Assay is the PVU (Porter Value Unit). The Porter Assay is a chemical test that helps determine the potency of procyanidin-containing compounds, such as grape seed extract. It is an acid hydrolysis that splits larger chain units (dimers and trimers) into single-unit monomers and oxidizes them. This leads to a color change, which can be measured using a spectrophotometer . The greater the absorbance at a certain wavelength of light, the greater the potency.
Ranges for grape seed extract are from 25 PVU for low-grade material to over 300 for premium grape seed extracts. [ 16 ] [ unreliable medical source? ] Gel permeation chromatography (GPC) analysis allows separation of monomers from larger PCO molecules. Monomers of procyanidins can be characterized by HPLC analysis. Condensed tannins can undergo acid-catalyzed cleavage in the presence of a nucleophile like phloroglucinol (a reaction called phloroglucinolysis), thioglycolic acid (thioglycolysis), benzyl mercaptan or cysteamine (processes called thiolysis [ 17 ] ), leading to the formation of oligomers that can be further analyzed. [ 18 ] Phloroglucinolysis can be used, for instance, for procyanidin characterisation in wine [ 19 ] or in grape seed and skin tissues. [ 20 ] Thioglycolysis can be used to study procyanidins [ 21 ] or the oxidation of condensed tannins. [ 22 ] It is also used for lignin quantitation . [ 23 ] Reaction on condensed tannins from Douglas fir bark produces epicatechin and catechin thioglycolates . [ 24 ] Condensed tannins from Lithocarpus glaber leaves have been analysed through acid-catalyzed degradation in the presence of cysteamine . [ 25 ] Procyanidin content in dietary supplements has not been well documented. [ 26 ] Pycnogenol is a dietary supplement derived from extracts of maritime pine bark that contains 70% procyanidins; it is marketed with claims that it can treat many conditions, but the medical evidence is insufficient to support its use for the treatment of seven different chronic disorders . [ 27 ]
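In practice, colorimetric assays like the Folin-Ciocalteu method mentioned above are quantified by reading sample absorbance against a standard curve. The following Python sketch shows the gallic-acid-equivalent calculation in minimal form; all concentrations and absorbance values are invented for illustration and are not taken from this article:

import numpy as np

# Hypothetical gallic acid standard curve: concentration (mg/L) vs absorbance.
std_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Least-squares fit: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def gallic_acid_equivalents(sample_abs, dilution=1.0):
    """Convert a sample absorbance to gallic acid equivalents (mg/L GAE)."""
    return (sample_abs - intercept) / slope * dilution

print(f"{gallic_acid_equivalents(0.35, dilution=10.0):.0f} mg/L GAE")

The same linear-calibration pattern applies to the other colorimetric assays described above, with different standards and measurement wavelengths.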
https://en.wikipedia.org/wiki/Procyanidin
Procymidone is a pesticide . It is often used for killing unwanted ferns and nettles, and as a dicarboximide fungicide , for example as a seed dressing, pre-harvest spray, or post-harvest dip for lupins, grapes, stone fruit and strawberries. [ 1 ] It is a known endocrine disruptor ( androgen receptor antagonist) [ citation needed ] which interferes with the sexual differentiation of male rats. [ 2 ] It is considered to be a poison. [ 3 ] This article about an organic compound is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Procymidone
Produced water is a term used in the oil and geothermal industries to describe water that is produced as a byproduct during the extraction of oil and natural gas , [ 1 ] or used as a medium for heat extraction. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Water that is produced along with the hydrocarbons is generally brackish and saline in nature. [ 6 ] Oil and gas reservoirs often contain water as well as hydrocarbons, sometimes in a zone that lies under the hydrocarbons, and sometimes in the same zone with the oil and gas. In geothermal plants, the produced water is usually hot. It contains steam with dissolved solutes and gases, providing important information on the geological, chemical, and hydrological characteristics of geothermal systems. [ 2 ] Oil wells sometimes produce large volumes of water with the oil, while gas wells tend to produce water in smaller proportions. As an oilfield ages, its natural drive to produce hydrocarbons decreases, leading to a decline in production. To achieve maximum oil recovery , waterflooding is often implemented, in which water is injected into the reservoirs to help force the oil to the production wells. In offshore areas, sea water is used. In onshore installations, the injected water is obtained from rivers, treated produced water, or underground sources. Injected water is treated with many chemicals to make it suitable for injection. The injected water eventually reaches the production wells, and so in the later stages of waterflooding, the produced water's proportion ("cut") of the total production increases. [ 7 ] The water composition ranges widely from well to well and even over the life of the same well. Much produced water is brine , and most formations yield total dissolved solids too high for beneficial reuse . In oil fields, almost all produced water contains oil and suspended solids. [ 8 ] Some produced water contains heavy metals and traces of naturally occurring radioactive material (NORM), which over time deposits radioactive scale in the piping at the well. [ 9 ] [ 10 ] Metals found in produced water include zinc , lead , manganese , iron , and barium . [ 11 ] In geothermal fields, produced waters are classified into three chemical types: HCO3-Ca⋅Mg, HCO3-Na and SO4⋅Cl-Na. [ 2 ] The U.S. Environmental Protection Agency (EPA) indicated in 1987 and 1999 that, during drilling and operations, additives may be used to reduce solid deposition on equipment and casings. Water produced from underground formations for geothermal electric power generation often exceeds primary and secondary drinking water standards for total dissolved solids, fluoride, chloride, and sulfate. Water is required for both traditional geothermal systems and EGS throughout the life cycle of a power plant. For traditional projects, the water available at the resource is typically used for energy generation during plant operations. [ 12 ] Historically, produced water was disposed of in large evaporation ponds . However, this has become an increasingly unacceptable disposal method from both environmental and social perspectives. Produced water is considered industrial waste . [ 13 ] The broad management options for re-use are direct injection , environmentally acceptable direct use of untreated water, or treatment to a government-issued standard before disposal or supply to users. Treatment requirements vary throughout the world. In the United States, these standards are issued by the U.S.
Environmental Protection Agency (EPA) for underground injection [ 14 ] [ 15 ] and discharges to surface waters . [ 16 ] Although beneficial reuse for drinking water and agriculture has been researched, the industry has not adopted these measures due to cost, water availability, and social acceptance. [ 17 ] Gravity separators , hydrocyclones , plate coalescers , dissolved gas flotation , and nut shell filters are some of the technologies used in treating wastes from produced water. [ 18 ] The use of produced water for road deicing has been criticized as unsafe. [ 19 ] In January 2020, Rolling Stone magazine published an extensive report about the radioactivity of produced water and its effects on workers and communities across the United States. It was reported that brine sampled from a plant in Ohio was tested in a University of Pittsburgh laboratory and registered radium levels above 3,500 pCi/L; the Nuclear Regulatory Commission requires industrial discharges to remain below 60 pCi/L for each of the most common isotopes of radium, radium-226 and radium-228. [ 20 ]
https://en.wikipedia.org/wiki/Produced_water
Producer gas is fuel gas that is manufactured by blowing air and steam simultaneously through a bed of hot coke or coal. [ 1 ] It mainly consists of carbon monoxide (CO) and hydrogen (H2), as well as substantial amounts of nitrogen (N2). The calorific value of producer gas is low (mainly because of its high nitrogen content), and the technology is obsolete. Improvements over producer gas, themselves also obsolete, include water gas , in which the solid fuel is treated intermittently with air and steam, and, far more efficiently, synthesis gas , in which the solid fuel is replaced with methane. In the US, producer gas may also be referred to by other names based on the fuel used for production, such as wood gas . Producer gas may also be referred to as suction gas , the term suction referring to the way the air was drawn into the gas generator by an internal combustion engine. Wood gas is produced in a gasifier . Producer gas is generally made from coke or other carbonaceous material [ 2 ] such as anthracite . Air is passed over the red-hot carbonaceous fuel and carbon monoxide is produced. The overall reaction is exothermic and proceeds as follows. Formation of producer gas from air and carbon: C + O2 → CO2, followed by CO2 + C → 2 CO (overall, 2 C + O2 → 2 CO). Reactions between steam and carbon: C + H2O → CO + H2 and C + 2 H2O → CO2 + 2 H2. Reaction between steam and carbon monoxide (the water-gas shift): CO + H2O → CO2 + H2. The average composition of ordinary producer gas according to Latta was: CO2 5.8%; O2 1.3%; CO 19.8%; H2 15.1%; CH4 1.3%; N2 56.7%; gross heating value 136 BTU per cubic foot. [ 3 ] [ 4 ] The "ideal" producer gas was considered to be 34.7% carbon monoxide (carbonic oxide) and 65.3% nitrogen. [ 5 ] After "scrubbing", to remove tar , the gas may be used to power gas turbines (which are well-suited to fuels of low calorific value ), spark-ignited engines (where 100% petrol fuel replacement is possible) or diesel internal combustion engines (where 15% to 40% of the original diesel fuel requirement is still used to ignite the gas [ 6 ] ). During World War II in Britain, plants were built in the form of trailers for towing behind commercial vehicles, especially buses, to supply gas as a replacement for petrol (gasoline) fuel. [ 7 ] A range of about 80 miles for every charge of anthracite was achieved. [ 8 ] In old movies and stories, when there is a description of suicide by "turning on the gas" and leaving an oven door open without lighting the flame, the reference was to coal gas or town gas. As this gas contained a significant amount of carbon monoxide it was quite toxic. Most town gas was also odorized, if it did not have its own odor. Modern natural gas used in homes is far less toxic, and has a mercaptan added to it for odor to aid in identifying leaks. Various names are used for producer gas, air gas and water gas, generally depending on the fuel source, process, or end use.
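As a rough cross-check of the figures quoted above, the heating value of a gas mixture can be estimated by weighting each combustible component by its heating value. A minimal Python sketch; the per-component gross heating values are approximate textbook figures (an assumption, not data from this article):

# Average producer gas composition (volume fractions) from Latta, above.
composition = {"CO2": 0.058, "O2": 0.013, "CO": 0.198,
               "H2": 0.151, "CH4": 0.013, "N2": 0.567}

# Approximate gross heating values in BTU per cubic foot (assumed textbook
# figures); only the combustible components contribute.
gross_btu = {"CO": 322, "H2": 325, "CH4": 1013}

hv = sum(frac * gross_btu.get(gas, 0) for gas, frac in composition.items())
print(f"estimated gross heating value: {hv:.0f} BTU per cubic foot")
# Prints roughly 126 BTU, in the neighbourhood of the 136 BTU quoted above.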
https://en.wikipedia.org/wiki/Producer_gas
Producing Great Sound for Film and Video: Expert Tips from Preproduction to Final Mix is a non-fiction filmmaking handbook. It covers the process of acquiring quality sound for motion picture productions. Author Jay Rose is an Emmy Award-winning sound professional. [ 1 ] He has won over 150 major awards including 12 Clios , and he has contributed to nearly 15,000 commercials. [ 2 ] His work includes the MGM release Two Weeks . [ 3 ] The book is published by Focal Press , a media and technology publishing company. [ 4 ] Focal Press is an imprint of the academic press Taylor & Francis . [ 5 ] [ 6 ] The book was first published in 1999 under the title Producing Great Sound for Digital Video by Miller Freeman Books and was 375 pages. [ 7 ] [ 8 ] Seventeen years later, as of 2016, the book is in its fourth edition and stands at 520 pages. [ 9 ] It has been part of required reading at many film schools, including the University of Southern California (USC) . [ 10 ] The book was also awarded five out of five stars by Videomaker Magazine . [ 11 ] When it was released, Millimeter Magazine noted that the book was one of very few publications extensively covering the art of capturing motion picture sound. [ 12 ] Producing Great Sound for Film and Video has been called "... the book on the subject." [ 13 ] Producing Great Sound for Film and Video is broken into four main sections, ordered to reflect real-world filming situations. Subjects covered include analog versus digital audio , recording and using sound effects , microphone techniques, ADR , mixing , and mastering . One section highlighted as unique by Videomaker Magazine was that on "editing voices." [ 11 ] Rose breaks down how human speech works, and how that translates to film and video productions. Tips include stealing unvoiced sounds from other characters or people speaking in a scene and using them to replace problematic recordings of others. The "editing voices" section also discusses sounds with "hard attacks" and training the ear to hear phonemes , which helps in isolating and correcting speech recording issues. Numerous "recipes" for dealing with common sound issues, such as reducing or eliminating echo on sets, and removing line hum and buzz from recorded audio, are also provided. Eliva Silva, writing for the San Antonio Express-News , said of the book: [It is] the whole theory -- and beautiful theory -- on the science of audio and the way that audio is recorded through voltage, converted into digital information, back to voltage into sound. [ 14 ] Rose states in the book that he wishes to appeal to technical and non-technical people alike, adding that he hopes to keep the book approachable and conversational in tone, dispelling the idea that audio needs to be difficult to understand. He states that audio is not "rocket science." While the book does contain math and science, Rose points out that the math is at an elementary school level and the physics is "common sense." The current version of the book provides downloadable files including sample sounds and music, diagnostic tools and additional tutorials allowing the reader to practice the principles explained. Earlier versions of the book included a CD-ROM of similar assets. In 2003, Millimeter Magazine wrote about the book: Digital artists are very much hands-on, and Rose is the right man to write audio books for this new generation of filmmakers.
Rose operates his own boutique sound studio and bridges the analog and digital eras - he's made the discoveries and mistakes that no one should have to learn on the job. This direct experience with DV equipment and projects is apparent throughout the book. [ 12 ] Major universities and film schools have used the book as a textbook. A 33-page instructor guide is also provided by the publisher. [ 20 ] Covering the first edition of the book, Videomaker Magazine noted its high price tag, but said: With PBS and Turner Network Television production experience under his belt, author Jay Rose brings a wealth of experience to Producing Great Sound for Digital Video ...the book is replete with facts and useful information. [ 7 ] Of the second edition, Millimeter Magazine said: Shortchanging film sound is typical of new filmmakers, and the emphasis on picture over sound is a bias running through film schools and film publications - articles, books, and courses on visual subjects far outnumber those on film sound. Author Jay Rose is single-handedly addressing the problem. [ 12 ] John Hartney, writing for Creative COW , said of the second edition: ...it offers such a wide range of usable information about hands-on digital audio production, that by reading it, the reader is empowered with production skills and enlightened by an appreciation of how the experience of audio enriches video. [ 21 ] In 2008, covering the third edition of the book, Videomaker Magazine awarded it "five out of five stars" and said: I have been looking for this book for 20 years - no exaggeration...The chapter on Editing Voices alone is worth the price tag. [ 11 ] Academy Award winner Randy Thom (director of Sound Design at Skywalker Sound , [ 22 ] Oscar winner for The Right Stuff and The Incredibles [ 23 ] ) wrote in praise of the author and book: Jay Rose is one of the leaders in spreading the gospel of using sound creatively. He presents cutting-edge ideas about the collaboration of sound and image, and also covers the basics… all in an easy to read, easy to understand style. [ 9 ] Academy Award-nominated sound mixer Jeff Wexler [ 24 ] ( Independence Day , The Last Samurai , Fight Club ) wrote a blurb for the book stating: "This is the definitive book. It should be mandatory reading for anyone who is seriously considering a career making movies." [ 9 ]
https://en.wikipedia.org/wiki/Producing_Great_Sound_for_Film_and_Video
The product-determining step is the step of a chemical reaction that determines the ratio of products formed via differing reaction mechanisms that start from the same reactants . The product-determining step is not rate-limiting if the rate-limiting step of each mechanism is the same. [ 1 ] This chemical reaction article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Product-determining_step
Product-family engineering ( PFE ), also known as product-line engineering , is based on the ideas of " domain engineering " created by the Software Engineering Institute , a term coined by James Neighbors in his 1980 dissertation [ 1 ] at the University of California, Irvine . Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product-family engineering. Product-family engineering can be defined as a method that creates an underlying architecture for an organization's product platform . It provides an architecture that is based on commonality as well as planned variabilities. The various product variants can be derived from the basic product family, which creates the opportunity to reuse and differentiate products in the family. Product-family engineering is conceptually similar to the widespread use of vehicle platforms in the automotive industry. Product-family engineering is a relatively new approach to the creation of new products. It focuses on the process of engineering new products in such a way that it is possible to reuse product components and apply variability with decreased costs and time. Product-family engineering is all about reusing components and structures as much as possible. Several studies have shown that using a product-family engineering approach for product development can have several benefits; [ 2 ] the Nokia case mentioned below illustrates some of them. The product-family engineering process consists of several phases. The three main phases are product management, domain engineering, and product engineering. The process has been modeled at a higher abstraction level, which has the advantage that it can be applied to all kinds of product lines and families, not only software . The model can be applied to any product family. Figure 1 (below) shows a model of the entire process. Below, the process is described in detail. The process description contains elaborations of the activities and the important concepts being used. All concepts printed in italic are explained in Table 1. The first phase is the starting up of the whole process. In this phase some important aspects are defined, especially with regard to economic aspects. This phase is responsible for outlining market strategies and defining a scope , which tells what should and should not be inside the product family. During this first activity all context information relevant for defining the scope of the product line is collected and evaluated. It is important to define a clear market strategy and take external market information into account, such as consumer demands. The activity should deliver a context document that contains guidelines , constraints and the product strategy . Scoping techniques are applied to define which aspects are within the scope. This is based upon the previous step in the process, where external factors have been taken into account. The output is a product portfolio description, which includes a list of current and future products and also a product roadmap . It can be argued whether phase 1, product management, is part of the product-family-engineering process, because it could be seen as an individual business process that is focused more on management aspects than on the product itself.
So from this point of view it is important to include the product-management phase (phase 1) in the entire process as a base for the domain-engineering process. During the domain-engineering phases, the variable and common requirements are gathered for the whole product line. The goal is to establish a reusable platform. The output of this phase is a set of common and variable requirements for all products in the product line. The first activity includes all activities for analyzing the domain with regard to concept requirements. The requirements are categorized and split up into two new activities. The output is a document with the domain analysis . As can be seen in Figure 1, defining common requirements and defining variable requirements are parallel activities that take place at the same time: the former includes all activities for eliciting and documenting the common requirements of the product line, resulting in a document with reusable common requirements , while the latter includes all activities for eliciting and documenting the variable requirements of the product line, resulting in a document with variable requirements . The next process step consists of activities for defining the reference architecture of the product line, which generates an abstract structure for all products in the product line. During this step a detailed design of the reusable components and the implementation of these components are created. Domain testing then validates and verifies the reusability of components, which are tested against their specifications. After successful testing of all components in different use cases and scenarios, the domain-engineering phase has been completed. In the final phase a product X is engineered. This product uses the commonalities and variability from the domain-engineering phase, so product X is derived from the platform established in that phase. It basically takes all common requirements and similarities from the preceding phase plus its own variable requirements. Using the base from the domain-engineering phase and the individual requirements of the product-engineering phase, a complete new product can be built. After the product has been fully tested and approved, product X can be delivered. The product-engineering phase comprises several activities. First, the product requirements specification for the individual product is developed, reusing the requirements from the preceding phase. Next, the product architecture is produced; this makes use of the reference architecture from the "design domain" step, selecting and configuring its required parts and incorporating product-specific adaptations. The product is then built, using selections and configurations of the reusable components . Finally, the product is verified and validated against its specifications; a test report gives information about all tests that were carried out, giving an overview of possible errors in the product. If the product is not accepted, the process loops back to "build product"; in Figure 1 this is indicated as "[unsatisfied]". The final step is the acceptance of the final product. If it has been successfully tested and approved as complete, it can be delivered; if the product does not satisfy the specifications, it has to be rebuilt and tested again. The next figure shows the overall process of product-family engineering as described above. It is a full process overview with all concepts attached to the different steps.
On the left side the entire process from top to bottom has been drawn. All activities on the left side are linked to the concepts on the right side through dotted lines. Every concept has a number, which reflects the association with other concepts. The concepts are explained in the list below. Most concept definitions are extracted from Pohl, Bockle, & Linden (2005); some new definitions have been added.

Domain analysis: document containing an analysis of the domain, through which common and variable requirements can be split up.
Reusable common requirements: document containing requirements that are common to all products in the product line.
Variable requirements: document containing the derivation of customised requirements for different products.
Reference architecture: determines the static and dynamic decomposition that is valid for all products of the product line, together with the collection of common rules guiding the design, the realisation of the parts, and how they are combined to form products.
Variability model: defines the variability of the product line.
Design & implementation assets of reusable components: the major components for the design and implementation aspects, relevant for the whole product family.
Test results: the output of the tests performed in domain testing.
Reusable test artifacts: test artifacts including the domain test plan, the domain test cases, and the domain test case scenarios.
Requirements specifications: the requirements for a particular product.
Product architecture: comparable to the reference architecture, but containing the product-specific architecture.
Running application: a working application that can be tested later on.
Detailed design artifacts: the different kinds of models that capture the static and dynamic structure of each component.
Test report: document with all test results of the product.
Problem report: document listing all problems encountered while testing the product.
Final product: the delivery of the completed product.
Family model: the overlapping concept of all family members with all sub-products.
Family member: the concept of the individual product.
Context document: document containing important information for determining the scope, including guidelines, constraints and the production strategy.
Guidelines: market/business/product guidelines.
Constraints: market/business/product constraints.
Product strategy: product strategy with regard to markets.
Product portfolio description: portfolio containing all available products, with important properties.
List of current & future products: a list of all current products and the products that will be produced in the future.
Product roadmap: describes the features of all products of the product line and categorises the features into common features that are part of each product and variable features that are only part of some products.

Table 1: List of concepts

There are some good examples of the use of product-family engineering, which were quite successful. The abstract model of product-family engineering allows different kinds of uses; most of them are related to the consumer electronics market. Below an example is given of an application of the product-line engineering process, based on a real experience of Nokia. Nokia produces different types of products, among them a mobile-phone product family currently containing 25 to 30 new products every year. These products are sold all over the world, which makes it necessary to support many different languages and user interfaces.
A main problem here is that several different user interfaces must be supported, and because new products succeed each other very quickly, this should be done as efficiently as possible. Product-family engineering makes it possible to create software for the different products and use variability to customize the software to each different mobile phone. The Nokia case is comparable to a normal software product line . During the first phase, product management , it is possible to define the scope of the different mobile-phone series. During the second phase, domain engineering , requirements are defined for the family and for the individual types of phones, e.g., the 6100/8300 series. In this phase the software requirements are made, which can serve as a base for the whole product family. This speeds up the overall development process for the software. The last phase, product engineering , is more focused on the individual types of phones. The requirements from the preceding phase are used to create individual software for the type of phone then being developed. The use of a product line gave Nokia the opportunity to increase its production of new mobile-phone models from 5–10 to around 30. [ 3 ]
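The division of labour described above, in which domain engineering builds a reusable platform and product engineering derives individual products from it, can be sketched in a few lines of code. A minimal illustration in Python; the class design and all feature names are invented, and the phone models are placeholders echoing the Nokia example:

from dataclasses import dataclass, field

@dataclass
class ProductLine:
    # Domain engineering output: requirements common to every product.
    common: set
    # Product engineering output: derived products and their requirements.
    products: dict = field(default_factory=dict)

    def derive(self, name, variable):
        """Derive a product by combining platform commonality with
        product-specific variable requirements."""
        self.products[name] = self.common | variable
        return self.products[name]

phones = ProductLine(common={"calls", "sms", "contacts"})
phones.derive("6100", {"monochrome_ui", "t9_input"})
phones.derive("8300", {"color_ui", "camera"})
print(sorted(phones.products["8300"]))  # platform plus 8300-specific features

The design choice mirrors the process model: the common set is built once (the reusable platform), while each derive call corresponds to a pass through the product-engineering phase.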
https://en.wikipedia.org/wiki/Product-family_engineering
Products are the species formed from chemical reactions . [ 1 ] During a chemical reaction, reactants are transformed into products after passing through a high-energy transition state . This process results in the consumption of the reactants. A reaction can be spontaneous or mediated by catalysts , which lower the energy of the transition state, and by solvents , which provide the chemical environment necessary for the reaction to take place. When represented in chemical equations , products are by convention drawn on the right-hand side, even in the case of reversible reactions . [ 2 ] The properties of products, such as their energies, help determine several characteristics of a chemical reaction, such as whether the reaction is exergonic or endergonic . Additionally, the properties of a product can make it easier to extract and purify following a chemical reaction, especially if the product has a different state of matter than the reactants. Much of chemistry research is focused on the synthesis and characterization of beneficial products, as well as the detection and removal of undesirable products. Synthetic chemists can be subdivided into research chemists, who design new chemicals and pioneer new methods for synthesizing chemicals, and process chemists, who scale up chemical production and make it safer, more environmentally sustainable, and more efficient. [ 3 ] Other fields include natural-product chemists, who isolate products created by living organisms and then characterize and study these products. The products of a chemical reaction influence several aspects of the reaction. If the products are lower in energy than the reactants, then the reaction will give off excess energy, making it an exergonic reaction . Such reactions are thermodynamically favorable and tend to happen on their own. If the kinetic barrier of the reaction is high enough, however, then the reaction may occur too slowly to be observed, or not occur at all. This is the case with the conversion of diamond to lower-energy graphite at atmospheric pressure; in this reaction diamond is considered metastable and will not be observed converting into graphite. [ 4 ] [ 5 ] If the products are higher in chemical energy than the reactants, then the reaction requires energy to be performed and is therefore an endergonic reaction. Additionally, if the product is less stable than a reactant, then Leffler's assumption holds that the transition state will more closely resemble the product than the reactant. [ 6 ] Sometimes the product will differ significantly enough from the reactant that it is easily purified following the reaction, such as when a product is insoluble and precipitates out of solution while the reactants remain dissolved. Ever since the mid-nineteenth century, chemists have been increasingly preoccupied with synthesizing chemical products. [ 7 ] Disciplines focused on the isolation and characterization of products, such as natural-products chemistry, remain important to the field, and the combination of their contributions alongside those of synthetic chemists has resulted in much of the framework through which chemistry is understood today. [ 7 ] Much of synthetic chemistry is concerned with the synthesis of new chemicals, as occurs in the design and creation of new drugs, as well as the discovery of new synthetic techniques.
Beginning in the early 2000s, process chemistry began emerging as a distinct field of synthetic chemistry focused on scaling up chemical synthesis to industrial levels, as well as finding ways to make these processes more efficient, safer, and environmentally responsible. [ 3 ] In biochemistry , enzymes act as biological catalysts to convert substrate to product. [ 8 ] For example, the products of the enzyme lactase are galactose and glucose , which are produced from the substrate lactose . Some enzymes display a form of promiscuity whereby they convert a single substrate into multiple different products. This occurs when the reaction proceeds via a high-energy transition state that can be resolved into a variety of different chemical products. [ 9 ] Some enzymes are inhibited by the product of their reaction, which binds to the enzyme and reduces its activity. [ 10 ] This can be important in the regulation of metabolism as a form of negative feedback controlling metabolic pathways . [ 11 ] Product inhibition is also an important topic in biotechnology , as overcoming this effect can increase the yield of a product. [ 12 ]
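The product inhibition described above is commonly modeled with a competitive-inhibition form of the Michaelis-Menten rate law. A minimal Python sketch using that textbook expression; the parameter values are invented for illustration:

# Michaelis-Menten kinetics with competitive product inhibition:
#   v = Vmax * [S] / (Km * (1 + [P]/Ki) + [S])
def rate(S, P, Vmax=1.0, Km=0.5, Ki=0.2):
    return Vmax * S / (Km * (1.0 + P / Ki) + S)

S = 1.0  # fixed substrate concentration (arbitrary units)
for P in (0.0, 0.2, 1.0):
    print(f"[P] = {P}: v = {rate(S, P):.3f}")
# The rate falls as product accumulates, the negative feedback noted above.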
https://en.wikipedia.org/wiki/Product_(chemistry)
Product analysis involves examining product features, costs, availability, quality, appearance and other aspects. Product analysis is conducted by potential buyers, by product managers attempting to understand competitors, and by third-party reviewers. [ 1 ] [ 2 ] Product analysis can also be used as part of product design to convert a high-level product description into project deliverables and requirements. It involves all facets of the product: its purpose, its operation, and its characteristics. Related techniques include product breakdown, systems analysis, systems engineering, value engineering, value analysis and functional analysis. [ 3 ] Technological analysis is sometimes applied in decision-making, often related to investments, policy decisions [ 4 ] and public spending. Such analyses can be done by a variety of organization types, such as for-profit companies, [ 5 ] non-profit think tanks , research institutes , public platforms and government agencies, [ 6 ] and evaluate established, emerging and potential future technologies on a variety of measures and metrics related to ideals and goals such as minimal global greenhouse gas emissions, including life-cycle sustainability , openness , performance , control, [ 7 ] financial costs, resource costs, health impacts and more. Results are sometimes published as public reports or as scientific peer-reviewed studies. [ additional citation(s) needed ] Based on such reports, standardization can enable interventions or efforts which balance competition and cooperation [ 8 ] and improve sustainability, reduce waste and redundancy, [ 9 ] or accelerate innovation. They can also be used for the creation of standardized system designs that integrate a variety of technologies as their components. [ 10 ] Other applications include risk assessment and research into defense applications. [ 11 ] Technological analyses can also be used or created to determine hypothetical or existing optimal solutions [ 7 ] and to identify challenges, innovation directions and applications. [ 12 ] Technological analysis can encompass or overlap with analysis of infrastructures and non-technological products. Standard-setting organizations can "spearhead convergence around standards". [ 13 ] A study found that, in many cases, a greater variety of standards can lead to higher innovativeness only in administration. [ 14 ] Tools of technology analysis include analytical frameworks that describe individual technological artefacts, chart technological limits, and determine the socio-technical preference profile. [ 15 ] Governments can coordinate or resolve conflicting interests in standardisation. [ 16 ] Moreover, potentials-assessment studies, including potential analyses , can investigate potentials, trade-offs, requirements and complications of existing, hypothetical and novel variants of technologies and inform the development of design criteria and parameters and deployment strategies. [ 17 ] [ 18 ] [ 19 ] [ 20 ]
https://en.wikipedia.org/wiki/Product_analysis
Within agile project management , product backlog refers to a prioritized list of functionality which a product should contain. It is sometimes referred to as a to-do list , [ 1 ] and is considered an 'artifact' (a form of documentation) within the scrum software development framework. [ 2 ] The product backlog is referred to with different names in different project management frameworks, such as product backlog in scrum, [ 2 ] [ 3 ] work item list in disciplined agile , [ 3 ] [ 4 ] and option pool in lean . [ 3 ] In the scrum framework, creation and continuous maintenance of the product backlog is part of the responsibility of the product owner . [ 5 ] A sprint backlog [ 6 ] consists of selected elements from the product backlog which are planned to be developed within that particular sprint . In scrum, coherence is defined as a measure of the relationships between backlog items which make them worthy of consideration as a whole. [ 7 ] The agile product backlog in scrum is a prioritized features list, containing short descriptions of all functionality desired in the product. When applying scrum or another agile development methodology, it is not necessary to start a project with a lengthy, upfront effort to document all requirements, as is more common with traditional project management methods following the waterfall model . [ citation needed ] Instead, a scrum team and its product owner will typically begin by writing down every relevant feature they can think of for the project's agile backlog prioritization, and the initial agile product backlog is almost always more than enough for a first sprint. The scrum product backlog is then allowed to grow further throughout the project life cycle and change as more is learned about the product and its customers. A typical scrum backlog comprises features , bugs , technical work and knowledge acquisition. [ clarification needed ] This software-engineering -related article is a stub . You can help Wikipedia by expanding it .
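As an illustration, a product backlog can be modeled as a priority-ordered collection from which the sprint backlog draws the top items. A minimal Python sketch; the item names and priorities are invented:

import heapq

backlog = []  # heap of (priority, item); lower number = higher priority

def add_item(priority, title):
    heapq.heappush(backlog, (priority, title))

for priority, title in [(1, "user login"), (3, "dark mode"), (2, "password reset")]:
    add_item(priority, title)

# Sprint planning: pull the two highest-priority items into the sprint backlog.
sprint_backlog = [heapq.heappop(backlog)[1] for _ in range(2)]
print(sprint_backlog)  # ['user login', 'password reset']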
https://en.wikipedia.org/wiki/Product_backlog
Product ecosystem theory is an emerging theory that describes how the design of manufactured products evolves over time, drawing parallels with how species evolve within a natural ecosystem . Fundamental to this theory is the idea that manufactured product lines respond to external threats and opportunities in much the same way that species do. Competition and other environmental pressures may cause a species to become extinct; an example of the parallel in consumer products is the way in which the typewriter was displaced (made extinct) by pressure from the personal computer. Product lines can be seen to change and branch incrementally over time, following the principle of phyletic gradualism , or to have periods of stasis followed by disruptive innovation, following the principle of punctuated equilibrium . Product ecosystem theory provides a conceptual framework that helps designers and others understand the mechanisms underpinning product innovation in a tangible and visual way. Technology change is one of the environmental variables that provide both opportunity and threat for products, much as environmental variables such as climate provide opportunity and threat for species. Evaluating designed products from this perspective is useful, as it shows that the value of a product is not contained entirely within the product itself; value is also obtained from the rest of the ecosystem. The term was first used in this context by Tim Williams in 2013 in a paper he wrote with Marianella Chamorro-Koc. [ 1 ] This paper proposed a methodology for understanding the difficulties of implementation for disruptive innovation, based on a case study of the MIT City Car . The term has been used non-specifically prior to this on a number of occasions. [ 2 ] [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Product_ecosystem_theory
Manufacturing engineering or production engineering is a branch of professional engineering that shares many concepts and ideas with other fields of engineering such as mechanical, chemical, electrical, and industrial engineering. Manufacturing engineering requires the ability to plan the practices of manufacturing; to research and develop tools, processes, machines, and equipment; and to integrate the facilities and systems for producing quality products with the optimum expenditure of capital.[1] The manufacturing or production engineer's primary focus is to turn raw material into an updated or new product in the most effective, efficient, and economical way possible. For example, a company might adopt computer-integrated technology so that it can produce its product faster and with less human labor. Manufacturing engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics, and business management. The field also deals with integrating different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the results of manufacturing systems studies. Manufacturing engineers develop and create physical artifacts, production processes, and technology. It is a very broad area that includes the design and development of products. Manufacturing engineering is considered a subdiscipline of industrial engineering/systems engineering and has very strong overlaps with mechanical engineering. Manufacturing engineers' success or failure directly impacts the advancement of technology and the spread of innovation. The field emerged from the tool and die discipline in the early 20th century. It expanded greatly from the 1960s, when industrialized countries introduced factories with: 1. Numerically controlled machine tools and automated systems of production. 2. Advanced statistical methods of quality control, pioneered by the American electrical engineer William Edwards Deming, who was initially ignored by his home country; the same methods later turned Japanese factories into world leaders in cost-effectiveness and production quality. 3. Industrial robots on the factory floor, introduced in the late 1970s: these computer-controlled welding arms and grippers could perform simple tasks, such as attaching a car door, quickly and flawlessly 24 hours a day, cutting costs and improving production speed. The history of manufacturing engineering can be traced to factories in the mid-19th-century United States and 18th-century Britain. Although large home production sites and workshops were established in China, ancient Rome, and the Middle East, the Venice Arsenal provides one of the first examples of a factory in the modern sense of the word. Founded in 1104 in the Republic of Venice, several hundred years before the Industrial Revolution, this factory mass-produced ships on assembly lines using manufactured parts; it apparently produced nearly one ship every day and, at its height, employed 16,000 people. Many historians regard Matthew Boulton's Soho Manufactory (established in 1761 in Birmingham) as the first modern factory, though similar claims can be made for John Lombe's silk mill in Derby (1721) and Richard Arkwright's Cromford Mill (1771).
The Cromford Mill was purpose-built to accommodate the equipment it held and to take the material through the various manufacturing processes. One historian, Jack Weatherford, contends that the first factory was in Potosí: the Potosí factory took advantage of the abundant silver mined nearby and processed silver ingot slugs into coins. British colonies in the 19th century built factories simply as buildings where a large number of workers gathered to perform hand labor, usually in textile production. This proved more efficient for the administration and distribution of materials to individual workers than earlier methods of manufacturing, such as cottage industries or the putting-out system. Cotton mills used inventions such as the steam engine and the power loom to pioneer the industrial factories of the 19th century, where precision machine tools and replaceable parts allowed greater efficiency and less waste. This experience formed the basis for the later studies of manufacturing engineering. Between 1820 and 1850, non-mechanized factories supplanted traditional artisan shops as the predominant form of manufacturing institution. Henry Ford further revolutionized the factory concept, and thus manufacturing engineering, in the early 20th century with the innovation of mass production: highly specialized workers situated alongside a series of rolling ramps would build up a product such as (in Ford's case) an automobile. This concept dramatically decreased production costs for virtually all manufactured goods and brought about the age of consumerism. Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components. Some industries, such as semiconductor and steel manufacturers, use the term "fabrication" for these processes. Automation is used in different manufacturing processes such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. The main advantages of automated manufacturing, realized with effective implementation of automation, include higher consistency and quality, reduced lead times, simplified production, reduced handling, improved workflow, and improved worker morale. Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering: they allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications. Manufacturing engineers focus on the design, development, and operation of integrated systems of production to obtain high-quality and economically competitive products.
[2] These systems may include material handling equipment, machine tools, robots, or even computers or networks of computers. Manufacturing engineers typically hold an associate's or bachelor's degree in engineering with a major in manufacturing engineering. The length of study for such a degree is usually two to five years, followed by five more years of professional practice to qualify as a professional engineer. Working as a manufacturing engineering technologist involves a more applications-oriented qualification path. Academic degrees for manufacturing engineers are usually the Associate or Bachelor of Engineering (BE or BEng) and the Associate or Bachelor of Science (BS or BSc). For manufacturing technologists the required degrees are the Associate or Bachelor of Technology (B.Tech) or the Associate or Bachelor of Applied Science (BASc) in Manufacturing, depending on the university. Master's degrees in manufacturing include the Master of Engineering (ME or MEng) in Manufacturing, the Master of Science (MSc) in Manufacturing Management, the Master of Science (MSc) in Industrial and Production Management, and the Master of Science (MSc) as well as the Master of Engineering (ME) in Design, a subdiscipline of manufacturing. Doctoral (PhD or DEng) level courses in manufacturing are also available, depending on the university. The undergraduate curriculum generally includes courses in physics, mathematics, computer science, project management, and specific topics in mechanical and manufacturing engineering. Initially, such topics cover most, if not all, of the subdisciplines of manufacturing engineering; students then choose to specialize in one or more subdisciplines towards the end of their degree work. The foundational curriculum for a bachelor's degree in manufacturing or production engineering is closely related to that of industrial and mechanical engineering, but it differs by placing more emphasis on manufacturing or production science. A degree in manufacturing engineering typically differs from one in mechanical engineering in only a few specialized classes; mechanical engineering degrees focus more on the product design process and on complex products, which requires more mathematical expertise. Certification and licensure: in some countries, "professional engineer" is the term for registered or licensed engineers who are permitted to offer their professional services directly to the public. Professional Engineer, abbreviated PE (USA) or PEng (Canada), is the designation for licensure in North America. To qualify for this license, a candidate needs a bachelor's degree from an ABET-recognized university in the USA, a passing score on a state examination, and four years of work experience, usually gained via a structured internship. In the USA, more recent graduates have the option of dividing this licensure process into two segments: the Fundamentals of Engineering (FE) exam is often taken immediately after graduation, and the Principles and Practice of Engineering exam is taken after four years of working in a chosen engineering field. Society of Manufacturing Engineers (SME) certification (USA): the SME administers qualifications specifically for the manufacturing industry. These are not degree-level qualifications and are not recognized at the professional engineering level. The following discussion deals with qualifications in the USA only.
Qualified candidates for the Certified Manufacturing Technologist (CMfgT) certificate must pass a three-hour, 130-question multiple-choice exam covering math, manufacturing processes, manufacturing management, automation, and related subjects. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience. Certified Manufacturing Engineer (CMfgE) is an engineering qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for the Certified Manufacturing Engineer credential must pass a four-hour, 180-question multiple-choice exam covering more in-depth topics than the CMfgT exam. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience. Certified Engineering Manager (CEM): the Certified Engineering Manager certificate is likewise designed for engineers with eight years of combined education and manufacturing experience. The test is four hours long and has 160 multiple-choice questions, covering business processes, teamwork, responsibility, and other management-related categories. Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid-modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and ease of use in designing mating interfaces and tolerances. Other CAE programs commonly used by product manufacturers include product life cycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict a product's response to expected loads, as well as its fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated instead of relatively few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows. Just as manufacturing engineering is linked with other disciplines such as mechatronics, multidisciplinary design optimization (MDO) is also being used together with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to explore possible designs more intelligently, often finding better, innovative solutions to difficult multidisciplinary design problems. On the business side of manufacturing engineering, enterprise resource planning (ERP) tools can overlap with PLM tools and use connector programs with CAD tools to share drawings, synchronize revisions, and serve as the master source for certain data used in the other tools mentioned above, such as part numbers and descriptions.
Manufacturing engineering is an extremely important discipline worldwide, and it goes by different names in different countries: in the United States and the continental European Union it is commonly known as industrial engineering, while in the United Kingdom and Australia it is called manufacturing engineering.[3] Mechanics, in the most general sense, is the study of forces and their effects on matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include statics, dynamics, mechanics of materials, and fluid mechanics. If the engineering project were to design a vehicle, statics might be employed to design the frame of the vehicle and evaluate where the stresses will be most intense; dynamics might be used when designing the car's engine to evaluate the forces in the pistons and cams as the engine cycles; mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine; and fluid mechanics might be used to design a ventilation system for the vehicle or the intake system for the engine. Kinematics is the study of the motion of bodies (objects) and systems (groups of objects) while ignoring the forces that cause the motion. The movement of a crane and the oscillations of a piston in an engine are both simple kinematic systems: the crane is a type of open kinematic chain, while the piston is part of a closed four-bar linkage. Engineers typically use kinematics in the design and analysis of mechanisms. Kinematics can be used to find the possible range of motion for a given mechanism or, working in reverse, to design a mechanism that has a desired range of motion (a worked sketch of a simple two-link mechanism appears below). Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or a hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming increasingly rare with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manufacture parts manually in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine. Drafting is used in nearly every subdiscipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD). Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and providing a guided movement of the parts of the machine. Metal fabrication is the building of metal structures by cutting, bending, and assembling processes.
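To make the kinematics discussion above concrete, here is a minimal Python sketch of forward kinematics for a two-link planar arm, a simple open kinematic chain like an idealized crane. The link lengths and joint-angle sweep are illustrative assumptions; real mechanism analysis would also account for joint limits, velocities, and (outside kinematics proper) forces.

```python
import math

def forward_kinematics(l1: float, l2: float, theta1: float, theta2: float) -> tuple[float, float]:
    """End-effector position of a two-link planar arm.

    l1, l2: link lengths; theta1: angle of link 1 from the x-axis;
    theta2: angle of link 2 relative to link 1 (angles in radians).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Sweep both joints to trace the reachable workspace: the "range of motion"
# that kinematic analysis determines for a mechanism.
points = [
    forward_kinematics(1.0, 0.7, t1, t2)
    for t1 in (i * math.pi / 18 for i in range(37))
    for t2 in (j * math.pi / 18 for j in range(37))
]
radii = [math.hypot(x, y) for x, y in points]
print(f"reach: {min(radii):.2f} to {max(radii):.2f}")  # annulus between |l1-l2| and l1+l2
```

Running the sweep prints "reach: 0.30 to 1.70", confirming that the workspace is the annulus between |l1 - l2| and l1 + l2; working in reverse (choosing link lengths to achieve a desired workspace) is the design use of kinematics mentioned above.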
Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries. Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical, and manufacturing systems. Such combined systems are known as electromechanical systems and are widespread; examples include automated manufacturing systems; heating, ventilation, and air-conditioning systems; and various aircraft and automobile subsystems. The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to initiate the deployment of airbags, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In the future, it is hoped that such devices will be used in tiny implantable medical devices and to improve optical communication. Textile engineering courses deal with the application of scientific and engineering principles to the design and control of all aspects of fiber, textile, and apparel processes, products, and machinery. These include natural and man-made materials, the interaction of materials with machines, safety and health, energy conservation, and waste and pollution control. Additionally, students are given experience in plant design and layout, machine and wet-process design and improvement, and designing and creating textile products. Throughout the textile engineering curriculum, students take classes from other engineering disciplines, including mechanical, chemical, materials, and industrial engineering. Advanced composite materials (ACMs), also known as advanced polymer matrix composites, are generally characterized by unusually high-strength fibres with unusually high stiffness (modulus of elasticity), bound together by weaker matrices. Advanced composite materials have broad, proven applications in the aircraft, aerospace, and sports equipment sectors, and they are especially attractive for aircraft and aerospace structural parts. Manufacturing ACMs is a multibillion-dollar industry worldwide. Composite products range from skateboards to components of the space shuttle, and the industry can be generally divided into two basic segments: industrial composites and advanced composites. Manufacturing engineering is just one facet of the engineering manufacturing industry. Manufacturing engineers enjoy improving the production process from start to finish; they have the ability to keep the whole production process in mind as they focus on a particular portion of it. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk, produced efficiently and economically. Manufacturing engineers are closely connected with engineering and industrial design efforts. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation, Ford Motor Company, Chrysler, Boeing, Gates Corporation, and Pfizer.
Examples in Europe include Airbus, Daimler, BMW, Fiat, Navistar International, and Michelin. Manufacturing engineers are employed across a wide range of industries. A flexible manufacturing system (FMS) is a manufacturing system with some amount of flexibility, allowing the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, both of which have numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and to change the order of operations executed on a part. The second category, routing flexibility, consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability (a routing sketch appears below). Most FMS installations comprise three main systems: the work machines, which are often automated CNC machines; a material handling system that connects them to optimize the flow of parts; and a central control computer, which controls material movements and machine flow. The main advantage of an FMS is its high flexibility in managing manufacturing resources, such as time and effort, in order to manufacture a new product; the best application of an FMS is found in the production of small sets of products of the kind otherwise made by mass production. Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separate process methods are joined through a computer by CIM. This integration allows the processes to exchange information and to initiate actions; through it, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing. Friction stir welding was invented in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials that were previously unweldable, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Uses of this technology to date include: welding the seams of the aluminum main space shuttle external tank, the Orion crew vehicle test article, the Boeing Delta II and Delta IV expendable launch vehicles, and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing range of uses. Other areas of research include product design, MEMS (micro-electro-mechanical systems), lean manufacturing, intelligent manufacturing systems, green manufacturing, precision engineering, and smart materials.
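The routing flexibility described above can be illustrated with a small dispatching sketch in Python. The machine names, capabilities, and the least-loaded dispatch rule are illustrative assumptions, not a standard FMS control algorithm; real central control computers also handle material transport, tooling, and scheduling constraints.

```python
# Toy model of routing flexibility in an FMS: several machines can perform
# the same operation, so the controller routes each operation to any capable,
# lightly loaded machine instead of a fixed station.
machines = {
    "cnc_mill_1": {"ops": {"mill", "drill"}, "load": 0.0},
    "cnc_mill_2": {"ops": {"mill", "drill"}, "load": 0.0},
    "lathe_1":    {"ops": {"turn"},          "load": 0.0},
}

def route(operation: str, duration: float) -> str:
    """Assign an operation to the least-loaded machine that supports it."""
    capable = [name for name, m in machines.items() if operation in m["ops"]]
    if not capable:
        raise ValueError(f"no machine can perform {operation!r}")
    chosen = min(capable, key=lambda name: machines[name]["load"])
    machines[chosen]["load"] += duration
    return chosen

# A part's process plan: (operation, processing time in minutes)
plan = [("mill", 10), ("drill", 4), ("turn", 6), ("mill", 10)]
for op, minutes in plan:
    print(f"{op:>5} -> {route(op, minutes)}")
```

Because both mills support milling and drilling, the second milling operation is routed to whichever mill is less loaded, which is exactly the ability "to use multiple machines to perform the same operation on a part".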
https://en.wikipedia.org/wiki/Product_engineering
A product fit analysis (PFA) is a form of requirements analysis of the gap between an IT product's functionality and the functions a business requires. It is a document that maps all of the business requirements to the product or application: each requirement is stated explicitly, and the application is designed accordingly. A PFA document covers all the functionality required by the business and how it is addressed in the application, including all data inputs, data processing, and data outputs.
https://en.wikipedia.org/wiki/Product_fit_analysis
In manufacturing engineering, a product layout refers to a production system where the workstations and equipment are located along the line of production, as with assembly lines. Usually, work units are moved along the line (not necessarily a geometric line, but a set of interconnected workstations) by a conveyor. Work is done in small amounts at each of the workstations on the line. To use a product layout, the total work to be performed must be divisible into small tasks that can be assigned to the workstations. Because each workstation does only a small amount of work, it can use specific techniques and equipment tailored to its individual job, which can lead to a higher production rate. A worked example of dividing the total work across stations is sketched below.
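The following Python snippet sketches the task-division requirement just described: it computes a line's takt time and a lower bound on the number of workstations, then groups tasks greedily. The demand figures and task times are made-up illustrations, and the greedy grouping (which ignores precedence constraints) is only one of many line-balancing heuristics.

```python
import math

shift_minutes = 450          # available production time per shift (assumed)
demand_per_shift = 90        # units the line must produce per shift (assumed)
takt = shift_minutes / demand_per_shift        # 5.0 min: max work content per station
tasks = [2.5, 1.0, 3.0, 2.0, 1.5, 4.0, 2.0]    # task times in minutes (assumed)

# Lower bound on stations: total work divided by takt time, rounded up.
min_stations = math.ceil(sum(tasks) / takt)

# Greedy grouping: fill each station up to the takt time, in task order.
stations, current = [], []
for t in tasks:
    if sum(current) + t > takt:
        stations.append(current)
        current = []
    current.append(t)
stations.append(current)

print(f"takt time: {takt} min, lower bound: {min_stations} stations")
print(f"greedy assignment uses {len(stations)} stations: {stations}")
```

With these numbers the lower bound is 4 stations, while the naive in-order grouping needs 5; the gap is why line balancing, i.e. choosing how the small tasks are assigned, matters for the production rate.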
https://en.wikipedia.org/wiki/Product_layout
In mathematics, given two measurable spaces and measures on them, one can obtain a product measurable space and a product measure on that space. Conceptually, this is similar to defining the Cartesian product of sets and the product topology of two topological spaces, except that there can be many natural choices for the product measure. Let $(X_1, \Sigma_1)$ and $(X_2, \Sigma_2)$ be two measurable spaces, that is, $\Sigma_1$ and $\Sigma_2$ are sigma algebras on $X_1$ and $X_2$ respectively, and let $\mu_1$ and $\mu_2$ be measures on these spaces. Denote by $\Sigma_1 \otimes \Sigma_2$ the sigma algebra on the Cartesian product $X_1 \times X_2$ generated by subsets of the form $B_1 \times B_2$, where $B_1 \in \Sigma_1$ and $B_2 \in \Sigma_2$:
$$\Sigma_1 \otimes \Sigma_2 = \sigma\left(\{ B_1 \times B_2 \mid B_1 \in \Sigma_1,\ B_2 \in \Sigma_2 \}\right).$$
This sigma algebra is called the tensor-product σ-algebra on the product space. A product measure $\mu_1 \times \mu_2$ (also denoted by $\mu_1 \otimes \mu_2$ by many authors) is defined to be a measure on the measurable space $(X_1 \times X_2, \Sigma_1 \otimes \Sigma_2)$ satisfying the property
$$(\mu_1 \times \mu_2)(B_1 \times B_2) = \mu_1(B_1)\,\mu_2(B_2)$$
for all $B_1 \in \Sigma_1$ and $B_2 \in \Sigma_2$. (In multiplying measures, some of which are infinite, we define the product to be zero if any factor is zero.) In fact, when the spaces are $\sigma$-finite, the product measure is uniquely defined, and for every measurable set $E$,
$$(\mu_1 \times \mu_2)(E) = \int_{X_2} \mu_1(E^y)\,\mathrm{d}\mu_2(y) = \int_{X_1} \mu_2(E_x)\,\mathrm{d}\mu_1(x),$$
where $E_x = \{ y \in X_2 \mid (x, y) \in E \}$ and $E^y = \{ x \in X_1 \mid (x, y) \in E \}$, which are both measurable sets. The existence of this measure is guaranteed by the Hahn–Kolmogorov theorem. The uniqueness of the product measure is guaranteed only in the case that both $(X_1, \Sigma_1, \mu_1)$ and $(X_2, \Sigma_2, \mu_2)$ are σ-finite. The Borel measures on the Euclidean space $\mathbf{R}^n$ can be obtained as the product of $n$ copies of Borel measure on the real line $\mathbf{R}$. Even if the two factors of the product space are complete measure spaces, the product space may not be; consequently, the completion procedure is needed to extend the Borel measure to the Lebesgue measure, or to extend the product of two Lebesgue measures to give the Lebesgue measure on the product space. The opposite construction to the formation of the product of two measures is disintegration, which in some sense "splits" a given measure into a family of measures that can be integrated to give the original measure. This article incorporates material from Product measure on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
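As a concrete check of the defining property and the iterated-integral formula above, here is a short worked example (standard material, written out here purely for illustration) using two copies of Lebesgue measure $\lambda$ on $\mathbf{R}$.

```latex
% On a measurable rectangle, the defining property gives the area directly:
(\lambda \times \lambda)\bigl([0,3] \times [1,2]\bigr)
    = \lambda([0,3]) \, \lambda([1,2]) = 3 \cdot 1 = 3.

% For a non-rectangular set, use the iterated-integral formula with slices.
% Let E = \{ (x,y) : 0 \le x \le 1,\ 0 \le y \le x \}, so that E_x = [0,x]:
(\lambda \times \lambda)(E)
    = \int_{X_1} \lambda(E_x) \, \mathrm{d}\lambda(x)
    = \int_0^1 x \, \mathrm{d}x = \tfrac{1}{2},
% which recovers the familiar area of the triangle.
```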
https://en.wikipedia.org/wiki/Product_measure
Given a Hilbert space with a tensor product structure, a product numerical range is defined as a numerical range with respect to the subset of product vectors. In some situations, especially in the context of quantum mechanics, the product numerical range is known as the local numerical range. Let $X$ be an operator acting on an $N$-dimensional Hilbert space $\mathcal{H}_N$. Let $\Lambda(X)$ denote its numerical range, i.e. the set of all $\lambda$ such that there exists a normalized state $|\psi\rangle \in \mathcal{H}_N$, $\|\psi\| = 1$, which satisfies $\langle\psi| X |\psi\rangle = \lambda$. An analogous notion can be defined for operators acting on a composite Hilbert space with a tensor product structure. Consider first a bipartite Hilbert space, $\mathcal{H}_N = \mathcal{H}_K \otimes \mathcal{H}_M$, of composite dimension $N = KM$. Let $X$ be an operator acting on the composite Hilbert space. We define the product numerical range $\Lambda^{\otimes}(X)$ of $X$, with respect to the tensor product structure of $\mathcal{H}_N$, as
$$\Lambda^{\otimes}(X) = \left\{ \langle\psi_A \otimes \psi_B| X |\psi_A \otimes \psi_B\rangle : |\psi_A\rangle \in \mathcal{H}_K,\ |\psi_B\rangle \in \mathcal{H}_M \right\},$$
where $|\psi_A\rangle \in \mathcal{H}_K$ and $|\psi_B\rangle \in \mathcal{H}_M$ are normalized. Let $\mathcal{H}_N = \mathcal{H}_K \otimes \mathcal{H}_M$ be a tensor product Hilbert space. We define the product numerical radius $r^{\otimes}(X)$ of $X$, with respect to this tensor product structure, as
$$r^{\otimes}(X) = \max\{ |z| : z \in \Lambda^{\otimes}(X) \}.$$
The notion of the numerical range of a given operator, also called the "field of values", has been extensively studied during the last few decades, and its usefulness in quantum theory has been emphasized. Several generalizations of the numerical range are known. In particular, Marcus introduced the notion of the decomposable numerical range, the properties of which are a subject of considerable interest. The product numerical range can be considered a particular case of the decomposable numerical range, defined for operators acting on a tensor product Hilbert space. This notion may also be considered a numerical range relative to the proper subgroup $U(K) \times U(M)$ of the full unitary group $U(KM)$. It is not difficult to establish the basic properties of the product numerical range that are independent of the partition of the Hilbert space and of the structure of the operator; we list them below, leaving some simple items without proof. Among the topological facts concerning the product numerical range of general operators: the product numerical range does not need to be convex. Consider the following simple example.
Let $A = \mathrm{diag}(1, 0, 0, i)$, a diagonal matrix of order four regarded as an operator on $\mathcal{H}_2 \otimes \mathcal{H}_2$. The matrix $A$ defined above has eigenvalues $0, 1, i$. It is easy to see that $1 \in \Lambda^{\otimes}(A)$ and $i \in \Lambda^{\otimes}(A)$, but $(1+i)/2 \notin \Lambda^{\otimes}(A)$. Actually, by direct computation we have
$$\Lambda^{\otimes}(A) = \left\{ x + yi : 0 \le x,\ 0 \le y,\ \sqrt{x} + \sqrt{y} \le 1 \right\},$$
which is not convex (a numerical illustration of this set is given below). The product numerical range forms a nonempty set for a general operator; in particular, it contains the barycenter of the spectrum: the product numerical range of $A \in \mathbb{M}_{K \times M}$ includes the barycenter of the spectrum,
$$\frac{1}{KM}\,\mathrm{tr}\,A \in \Lambda^{\otimes}(A).$$
The product numerical radius is a vector norm on matrices, but it is not a matrix norm. The product numerical radius is invariant with respect to local unitaries, i.e. unitaries with a tensor product structure.
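The nonconvexity example above is easy to probe numerically. The following Python sketch (an illustration added here, not part of the source) samples random product states for $A = \mathrm{diag}(1, 0, 0, i)$ and verifies that the sampled values $\langle\psi_A \otimes \psi_B| A |\psi_A \otimes \psi_B\rangle$ stay inside the set $\{ x + yi : \sqrt{x} + \sqrt{y} \le 1 \}$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1, 0, 0, 1j])

def random_state(dim: int) -> np.ndarray:
    """Random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

samples = []
for _ in range(10_000):
    psi = np.kron(random_state(2), random_state(2))  # product vector in C^2 (x) C^2
    samples.append(np.vdot(psi, A @ psi))            # <psi| A |psi>

z = np.array(samples)
x, y = z.real, z.imag
# Every sample should satisfy sqrt(x) + sqrt(y) <= 1 (up to rounding error).
s = np.sqrt(np.clip(x, 0, None)) + np.sqrt(np.clip(y, 0, None))
assert np.all(s <= 1 + 1e-9)
print("max of sqrt(x) + sqrt(y) over samples:", s.max())   # approaches 1
```

Sampling only product vectors is what distinguishes this computation from the ordinary numerical range, which would be sampled over all normalized vectors of $\mathcal{H}_4$.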
https://en.wikipedia.org/wiki/Product_numerical_range
Product of experts (PoE) is a machine learning technique. It models a probability distribution by combining the outputs of several simpler distributions. It was proposed by Geoffrey Hinton in 1999,[1] along with an algorithm for training the parameters of such a system. The core idea is to combine several probability distributions ("experts") by multiplying their density functions, making the PoE classification similar to an "and" operation. This allows each expert to make decisions on the basis of a few dimensions without having to cover the full dimensionality of a problem:
$$P(y \mid \{x_k\}) = \frac{1}{Z} \prod_{j=1}^{M} f_j(y \mid \{x_k\}),$$
where the $f_j$ are unnormalized expert densities and
$$Z = \int \mathrm{d}y \prod_{j=1}^{M} f_j(y \mid \{x_k\})$$
is a normalization constant (see partition function (statistical mechanics)). This is related to (but quite different from) a mixture model, where several probability distributions $p_j(y \mid \{x_k\})$ are combined via an "or" operation, which is a weighted sum of their density functions:
$$P(y \mid \{x_k\}) = \sum_{j=1}^{M} \alpha_j\, p_j(y \mid \{x_k\}), \qquad \text{with } \sum_j \alpha_j = 1.$$
The experts may be understood as each being responsible for enforcing a constraint in a high-dimensional space: a data point is considered likely if and only if none of the experts says that the point violates a constraint. To optimize a PoE, Hinton proposed the contrastive divergence minimization algorithm.[2] This algorithm is most often used for learning restricted Boltzmann machines.
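As a small illustration of the product rule above (an added sketch, not from the source), the following Python snippet combines two one-dimensional Gaussian experts. For Gaussians, the product of densities is again Gaussian, with precisions that add and a precision-weighted mean, so the normalization constant $Z$ has a closed form; the numeric grid check below confirms it.

```python
import numpy as np

def gaussian(y, mean, var):
    """Normalized Gaussian density, used here as an expert f_j."""
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Two experts, each enforcing a soft constraint on the same variable y.
m1, v1 = 0.0, 1.0
m2, v2 = 3.0, 0.5

y = np.linspace(-5.0, 8.0, 2001)
dy = y[1] - y[0]
unnormalized = gaussian(y, m1, v1) * gaussian(y, m2, v2)  # product of experts
Z = unnormalized.sum() * dy                               # numeric partition function
poe = unnormalized / Z

# Closed form for a product of Gaussians: precisions add,
# and the mean is the precision-weighted average.
prec = 1.0 / v1 + 1.0 / v2
mean = (m1 / v1 + m2 / v2) / prec
print("closed-form mean, var:", mean, 1.0 / prec)         # 2.0, 0.333...
print("numeric mean:", (y * poe).sum() * dy)              # ~2.0
```

Note how the PoE concentrates its mass where both experts agree (around y = 2), whereas a mixture of the same two densities would be bimodal; this is the "and" versus "or" contrast described above.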
https://en.wikipedia.org/wiki/Product_of_experts