Originally registered as the Institution of Welding Engineers in 1923, The Welding Institute has grown and changed over the intervening decades, yet maintains a specialisation in welding, joining and allied technologies.
The formation in 1923 of the professional institution, later to become The Welding Institute, and the establishment of the British Welding Research Association (BWRA) in 1946 provided the basis of the company group as it is today.
The Welding Institute Group now encompasses a professional membership institution (‘The Welding Institute’) and an engineering research, consultancy and technology organisation (‘TWI Ltd’), as well as an international training school (‘TWI Training’), the National Structural Integrity Research Centre (‘NSIRC’), a series of collaborative enterprises with academia (‘The TWI Innovation Network’), and more.
It has been headquartered near Cambridge, England, since 1946, and has other facilities across the UK and around the world.
Descended from the British Welding Research Association (BWRA), TWI Ltd is now a global independent research and technology organisation.
As a membership-based organisation, TWI Ltd works across all industry sectors and in all aspects of manufacturing, fabrication and whole-life integrity management technologies, where it provides services such as consultancy, technical advice, research and investigation for industrial member companies and public funding bodies.
TWI Ltd provides impartial advice, know-how and safety assurance through engineering, materials and joining technologies – spanning innovation, knowledge transfer and problem resolution across all aspects of welding, joining, surface engineering, inspection and whole-life integrity management.
TWI's R&D work has delivered a number of inventions and developments to industry, including advances in MIG and TIG welding, the creation of CTOD testing and methods for understanding brittle fracture, fatigue design S-N curves, linear friction and friction stir welding, local vacuum electron beam welding, and many more.
Along with wider research programmes, TWI Ltd works directly with industrial member companies through single client projects to provide bespoke solutions. Much of the work is confidential, with the outcomes and associated intellectual property owned exclusively by the client.
Through TWI Training, it also offers training and examination services in NDT, welding and inspection across the globe.
While TWI Ltd works for its industrial member companies, The Welding Institute has a separate membership of around 4,500 individual professionals, who receive a range of support in their careers and professional development. [ 1 ]
The Welding Institute is a professional engineering institution established in 1923 to support the development of engineering professionals in the fields of welding, joining and allied technologies.
The Welding Institute is a membership organisation as well as a licensed member of the Engineering Council, which allows it to assess and nominate eligible members to become registered as a Chartered Engineer (CEng), Incorporated Engineer (IEng) or Engineering Technician (EngTech).
The Institute also provides guidance to statutory bodies such as the British Standards Institution, the Engineering Council, and the UK government.
The professional division of the organisation (The Welding Institute) is a licensed member of the Engineering Council. It is situated at Granta Park, near Duxford Museum.
Both industrial and professional members are represented on the Council that oversees TWI's business and operational activities.
TWI has several facilities both in the UK and overseas:
The organisation has international branches in Australia, Bahrain, Canada, India, Indonesia, Malaysia, Pakistan, Thailand, Turkey, and the United Arab Emirates. [ 1 ]
The Welding Institute (TWI Professional Group) is a direct descendant of the Institution of Welding Engineers Limited, which began when 20 men gathered on 26 January 1922 in the Holborn Restaurant in London and resolved to establish an association to bring together acetylene welders and those interested in electric arc welding . The date of registration under the Companies Act was 15 February 1923. Slow growth over the next ten years saw membership grow to 600 with an income of £800 per annum.
In April 1934, the Institution merged with the British Advisory Welding Council to form a new organisation – the Institute of Welding.
This was important to the later creation of TWI Ltd as it took the scope of the Institute beyond personal professional membership to also include industrial member companies in order to further support research activities.
A symposium that same year, Welding of Iron and Steel, held in conjunction with the Iron and Steel Institute, showed the need for a research programme. It took the threat of war, the Welding Research Council and modest funding from the Department of Scientific and Industrial Research (DSIR) to generate the will and ability to commence such a programme in 1937. [ 2 ] The Institute had no laboratories of its own and supported work mainly in UK universities.
In the late 1940s, a move was made to transform the Welding Research Council into the recently established status of Research Association, thereby giving it access to DSIR funding in proportion to that raised from industry. At the time, professional institutions were debarred from acting as Research Associations, [ 3 ] which forced the two arms of the Institute to split in 1946, leading to the creation of the British Welding Research Association (BWRA) as a separate entity from The Institute of Welding.
In 1946 the BWRA bought Abington Hall, near Cambridge, UK, a country house and grounds in poor repair, for £3850 and commenced business under Allan Ramsay Moon as its director of research. The first welding shop was established in stables adjoining the house, and fatigue research commenced under Dr. Richard Weck.
BWRA also occupied a house in London, 29 Park Crescent, which it converted into a metallurgical laboratory, with the butler's pantry becoming the polishing room and the coachman's quarters, the machine shop.
Ramsay Moon left after one year, disillusioned at the grant of only £30,000 from DSIR, and it fell to Dr. Harry Taylor to grow the organisation into a viable business. [ 4 ]
In 1948, The Welding Institute celebrated its silver jubilee with the award of a Grant of Arms by the College of Arms. The coat of arms depicts a joint being made through the application of heat with a Latin motto that translates as ‘out of two, one.’
Meanwhile, The Institute of Welding had bought property in London very close to the Imperial College of Science and Technology. It ran an expanding training programme through its School of Welding Technology and later the School of Non-Destructive Testing in what is a clear forerunner to today's TWI Training.
The first course, on the welding of pressure vessels, saw nearly 100 applicants for the 40 places, demonstrating a need for such courses.
In 1957, Richard Weck became Director of BWRA. The 1960s saw significant growth in the size and scope of BWRA, including its involvement in training. [ 5 ] In general, these activities complemented those of the Institute of Welding, but it became apparent that the two organisations would serve industry better by merging. The successor to DSIR, the Ministry of Technology, put forward no objection, so a merger was agreed and a new body, The Welding Institute, was created on 28 March 1968.
These earliest years were the foundation of The Welding Institute and what would later become TWI Ltd.
Direct support from Government departments ceased in the 1970s, but TWI not only survived this funding crisis but grew rapidly. The original individual professional membership envisaged in 1922 developed into a body of more than 7000 engineers.
In 1988 Bevan Braithwaite was appointed as chief executive of The Welding Institute. By 2008, the organisation had opened offices and laboratories at three further sites within the UK (in Middlesbrough, Port Talbot and the Advanced Manufacturing Park, South Yorkshire) and operated facilities in North America, China, Southeast Asia, India and the Middle East.
In 2012, it launched the National Structural Integrity Research Centre (NSIRC) for postgraduate education.
By 2015, TWI had established a further UK base in Aberdeen and 12 international branches.
In 2016, TWI formed the Tipper Group, "a group designed especially for women in the engineering profession." It was created with the aim of supporting and inspiring female engineers in welding, joining and associated technologies, but has since extended its remit to support wider diversity and inclusion within the profession and at TWI.
David Wrathmall was appointed as interim Chief Executive of The Welding Institute Group in April 2024. | https://en.wikipedia.org/wiki/The_Welding_Institute |
The World in the Model: How Economists Work and Think is a work by Mary S. Morgan published by Cambridge University Press in 2012.
Mary S. Morgan, described by Robert Sugden as "a major philosopher and historian of economics", [ 1 ] analyzes with examples how economists work and think using models. [ 2 ] Her book reconstructs the path taken by models to become economists' "natural way of doing economics." [ 2 ] [ 3 ] : 17
For Morgan, both the "method of mathematical postulation and proof" and that of modelling emerged in the late 19th century, [ 1 ] [ 3 ] : 18 and models became economists' main tools only from the 1930s, replacing a classical economics that relied on "universal laws". [ 1 ]
A concept stressed throughout this work is that:
Economic modelling is not primarily a method of proof, but rather a method of inquiry. [ 3 ] : 239
The pragmatic orientation of her work is stated right at the beginning, [ 4 ] [ 1 ] noting that "Science is messy", [ 3 ] : xv thus
Asking: What qualities do models need to make them useful in a science? and What functions do models play in a science? are more fruitful than asking What are models? [ 3 ] : xvi
According to Morgan, [ 3 ] : 225 who draws a parallel with physics, [ 5 ] economists follow four general steps in their work with models: [ 3 ] : 17 [ 6 ] [ 7 ]
For the Author, economists [ 3 ] : 37
... reason about the small world in the model and reason about the big economic world with the model.
The existence of a world inside the model itself is a central feature of the work. [ 2 ] Modelling is not just an activity of abstraction, simplification, idealization, and mathematization, but entails the creation of new artifacts to be explored and reasoned with, in a play where the economist and the model are "jointly active participants". [ 2 ] [ 3 ] : 256
As noted by Sugden, [ 1 ] Morgan repeats at several points this double function of models:
Models are objects to enquire into and to enquire with: economists enquire into the world of the economic model, and use them to enquire with into the economic world that the model represents [ 3 ] : 217
The case studies span a period going from the early nineteenth century to the second half of the twentieth century, though the earlier Tableau économique of Quesnay is mentioned among the antecedents of modelling (Chapter 1). [ 4 ] The models proper start with Ricardo's "model farm" (Chapter 2), used to study, for example, how an increase in grain prices would affect rents, [ 7 ] and continue with the Edgeworth box for the trading of goods, later the subject of developments by Vilfredo Pareto [ 7 ] (Chapter 3), with the rational agent (Chapter 4), and with the Newlyn–Phillips hydraulic machine of the economy (Chapter 5). [ 4 ] The book continues with the business cycle work of Ragnar Frisch and Jan Tinbergen (the latter the father of econometrics), and the macroeconomic models of Meade, Samuelson and Hicks (Chapter 6), with supply and demand models (Chapter 7), all the way to modern simulation modelling (Chapter 8). [ 4 ] Monte Carlo methods are included, and a full chapter (Chapter 9) is devoted to the Prisoner's Dilemma, [ 6 ] [ 4 ] where the Author discusses the Nash equilibrium using both the classical example of the two prisoners confronted with the choice to either confess or stay silent in ignorance of their companion's choice, and a less familiar example drawn from Puccini's Tosca, with illustrations from a paper by Anatol Rapoport. [ 8 ]
As noted by economist Gene Callahan, [ 7 ] Morgan is attentive to the specialized talents that are needed in economic modelling, including a tacit, craft-based knowledge that can only be acquired via apprenticeship. [ 3 ] : 15
In discussing what makes a model "fruitful", Morgan notes that while models must have enough internal resources to operate, including some salient aspects of "the economic world", [ 2 ] they need to be able to generate variety in their outcomes, so as to potentially surprise the analyst, even when the surprise is that too many solutions are possible, as noted by Paul Samuelson when trying to translate Keynes's General Theory of Employment, Interest and Money into a model. [ 3 ] : 229 [ 2 ] Also, size in relation to content matters, [ 3 ] : 237 so models must be small enough to be manipulable. [ 6 ]
Morgan aims to provide an account of "modelling (in economics) as an autonomous epistemic genre". [ 3 ] : xvi In this, the use of narratives in relation to models is not just rhetorical; it is foremost epistemological. [ 6 ] Ragnar Frisch's image of the business cycle as a rocking horse randomly hit by a boy with a club successfully conveyed to the economists of the 1930s the concept of a harmonic process driven by shocks. [ 6 ] [ 3 ] : 239 The centrality of narration for models in Morgan's work is noted by François Claveau [ 4 ] and Gene Callahan. The latter notes that this is well illustrated by the prisoner's dilemma, which would be incomprehensible if offered only in terms of payoffs, without the accompanying story. [ 7 ] [ 3 ] : 372
Chapter 7 offers a discussion of what makes models different from physical experiments. [ 4 ] Morgan sees a difference in that experiments are "made of the same stuff" as the world, while in the case of models "there is no shared stuff". [ 3 ] : 287
Coming to how economic models influence policy thanks to their scientific authority, Morgan, citing Nancy Cartwright , [ 9 ] offers a word of caution: [ 2 ]
the looseness of the criteria of plausibility always make it doubtful, difficult, and potentially dangerous, to use these little mathematical models to intervene directly in the economic world. [ 3 ] : 248
The book is praised by Verena Halsmayer [ 2 ] and by Maxime Desmarais-Tremblay [ 6 ] for showing the variety of tools mobilized by economists, such as the hydraulic system of the Newlyn–Philips Machine made of pipes, valves and tanks, graphs such as those of the Edgeworth box , pen and paper tabulated records such as Ricardo's ideal farm, conceptual games such as the Prisoner dilemma , and modern day equations.
For François Claveau it is disappointing that Morgan, though the author of The History of Econometric Ideas (1990), does not discuss econometric models, nor the distinction between these models, which are embedded in real-world data and statistical testing, and non-econometric models. Claveau also laments the absence of a discussion of the difference between theories and models. [ 4 ]
A limit of Morgan's work that is flagged by Verena Halsmayer is that in examining a selected set of successful models the book discounts alternative traditions, the "verbal economics" and other historical traditions, evading the strategic development leading to the dominance of ( neoclassical ) economic modeling, thus ignoring "what was lost by adopting modeling as the dominant mode of economic knowledge production". [ 2 ]
Robert Sugden praises Morgan's vivid portrait of Ricardo. Morgan details how the numerical examples of his model farm were inspired by agricultural experiments run by other gentleman farmers. [ 1 ] The same reviewer notes that Morgan's book, as the title implies, has more to say about how economists work inside the model than about how they look at the world outside it. As a modeller, Sugden suggests that Morgan's emphasis on how economists manipulate their models to obtain insights may discount models' autonomy, and the ways in which models may speak with their own voice.
For this work Morgan won in 2013 the best book award [ 6 ] from The European Society for the History of Economic Thought. [ 10 ] | https://en.wikipedia.org/wiki/The_World_in_the_Model:_How_Economists_Work_and_Think |
The World of Null-A , sometimes written The World of Ā , is a 1948 science fiction novel by Canadian-American writer A. E. van Vogt . It was originally published as a three-part serial in 1945 in Astounding Stories . It incorporates concepts from the general semantics of Alfred Korzybski . The name Ā refers to non-Aristotelian logic .
Gilbert Gosseyn (pronounced go-sane), a man living in an apparent utopia where those with superior understanding and mental control rule the rest of humanity, wants to be tested by the giant Machine that determines such superiority. However, he finds that his memories are false. In his search for his real identity, he discovers that he has extra bodies that are activated when he dies (so that, in a sense, he cannot be killed), that a galactic society of humans exists outside the Solar System, that a large interstellar empire wishes to conquer both the Earth and Venus (inhabited by masters of non-Aristotelian logic), and that he has extra brain matter which, when properly trained, can allow him to move matter with his mind.
The novel originally appeared as a serial entitled "The World of Ā" in the August 1945 to October 1945 issues of the magazine Astounding Science Fiction , which was edited by John W. Campbell, Jr.
Van Vogt significantly revised and shortened the tale for the 1948 novel release. Like the serial, the 1948 hardcover ( Simon & Schuster ) and the 1950 hardcover ( Grosset & Dunlap ) editions were entitled The World of Ā . To reduce printing costs, the 1953 and 1964 Ace Books paperback editions were entitled The World of Null-A , and the symbol Ā was replaced with "null-A" throughout the text. The 1970 revision kept this change, added some brief new passages to chapters 10, 24, and 35, and also included a new introduction in which van Vogt defended the controversial work, but also admitted that the original serial had been flawed.
It won the Manuscripters Club Award. It was listed by the New York area library association among the hundred best novels of 1948. The World of Null-A has been translated into nine languages and, when first published, created the French science fiction market all by itself, according to Jacques Sadoul, editor of Editions OPTA. The World of Null-A finished second in the Retro Hugo award voting for Best Novel of 1945, presented in 1996 at L.A.con III.
For many years, two quotes appeared on the paperback editions of this novel. "Without doubt one of the most exciting, continuously complex and richly patterned science fiction novels ever written!" - Groff Conklin; and "One of those once-in-a-decade classics!" - John Campbell.
In 1945, the novel was the subject of an extended critical essay by fellow author and critic Damon Knight . [ 1 ] In the review, which was later expanded into "Cosmic Jerrybuilder: A. E. van Vogt", [ 2 ] Knight writes that "far from being a 'classic' by any reasonable standard, The World of Ā is one of the worst allegedly-adult science fiction stories ever published." Knight criticizes the novel on four main levels:
In his author's introduction to the 1970 revised edition, van Vogt acknowledges that he took Knight's criticisms seriously, which is why he revised the novel so many years after its original publication.
In 1974 Damon Knight walked back his original criticisms:
Van Vogt has just revealed, for the first time as far as I know, that during this period he made a practice of dreaming about his stories and waking himself up every ninety minutes to take notes. This explains a good deal about the stories, and suggests that it is really useless to attack them by conventional standards. If the stories have a dream consistency which affects readers powerfully, it is probably irrelevant that they lack ordinary consistency. [ 3 ]
The World of Null-A was followed by the sequel, The Pawns of Null-A (also known as The Players of Null-A , 1956), and much later by a follow-up, Null-A Three (1984).
In 2008 John C. Wright wrote a new chapter to the story of Gilbert Gosseyn, the novel Null-A Continuum, written in the style of van Vogt. | https://en.wikipedia.org/wiki/The_World_of_Null-A |
The World of Robert Burns is educational software which teaches about the life and times of Robert Burns . It was launched to coincide with the 200th anniversary of Burns's death. The software was awarded Gold by Acorn User magazine. [ 2 ]
It was developed by Cambridge Software House in association with Galloway Education Authority, and featured poems, letters, music, photographs and videos. [ 3 ]
| https://en.wikipedia.org/wiki/The_World_of_Robert_Burns |
The Zoologist's Guide to the Galaxy. What Animals on Earth Reveal about Aliens – and Ourselves is a 2020 popular science book by the Cambridge University zoologist Arik Kershenbaum. It discusses the possible nature of life on other planets , based on the study of animal life on Earth.
The book argues that the evolutionary processes that are observed operating on Earth are universal, and a necessary requirement for the presence of complex life on any planet. As a result, many aspects of animal behavior are likely to be present in the equivalent lifeforms on alien planets. This includes certain features of social behavior, communication, and movement, the evolutionary origin of which on Earth is underpinned by universal processes.
The book has been praised by critics for its accessibility and engaging conversational tone, [ 1 ] and described by Richard Dawkins as "A wonderfully insightful sidelong look at Earthly biology". [ 2 ]
Kershenbaum is a College Lecturer at Girton College , University of Cambridge , [ 3 ] [ 4 ] and an academic visitor at the Department of Zoology . [ 5 ] He studies animal communication [ 6 ] and particularly the vocal communication of wolves [ 7 ] and dolphins. [ 8 ]
Although the field of astrobiology usually investigates possibilities of simple lifeforms that may exist on alien planets, The Zoologist's Guide to the Galaxy considers the possibilities of complex life, and in particular, life that might be considered as animal life. The book begins by laying out the argument that evolution by natural selection is the only mechanism by which complex life can evolve. It then examines the implications of natural selection for life on other planets. The book ends by examining the question of whether humanity is a parochial Earth-centric concept, or whether intelligent alien life should also be considered human .
The book draws on the work of paleontologist Simon Conway Morris on convergent evolution , [ 9 ] and on Universal Darwinism , popularised by Richard Dawkins . [ 10 ]
The Zoologist's Guide to the Galaxy was featured as one of the New York Times Editors' Choice of books. [ 14 ]
Professor Lewis Dartnell , writing in The Times , summarised, "Pondering scientifically on the concept of the extraterrestrial, of universalities and alternatives, is to hold a full-length mirror up to ourselves. This allows us to deconstruct everything from our physiology to psychology, and so explore why humans are the way we are. To comprehend the alien is to know thyself." [ 15 ]
In The Sunday Times, in a review titled Using Darwinism to imagine what extraterrestrials may really be like, James McConnachie wrote, "Arik Kershenbaum is a Cambridge zoologist who wants to prepare us for first contact. When we finally discover aliens, what might they be like?... Where much writing on astrobiology is joyously speculative, Kershenbaum is doggedly cautious, building his case from first evolutionary principles." [ 16 ]
Primatologist Frans de Waal wrote, "If you don't want to be surprised by extraterrestrial life, look no further than this lively overview of the laws of evolution that have produced life on earth. Assuming these laws to be universal, Arik Kershenbaum predicts what alien organisms might look like." [ 1 ] | https://en.wikipedia.org/wiki/The_Zoologist's_Guide_to_the_Galaxy |
Zuckerberg Institute for Water Research (ZIWR) is one of three research institutes constituting the Jacob Blaustein Institutes for Desert Research , a faculty of Ben-Gurion University of the Negev (BGU). The ZIWR is located on BGU's Sede Boqer Campus in Midreshet Ben-Gurion in Israel's Negev Desert , and hosts researchers who focus on developing new technologies to provide drinking water and water for agricultural and industrial use and to promote the sustainable use of water resources. [ 1 ] The ZIWR encompasses the Department of Environmental Hydrology and Microbiology, and the Department of Desalination and Water Treatment. [ 2 ]
The Zuckerberg Institute for Water Research was founded in 2002 and was named for Roy J. Zuckerberg, Senior Director of the Goldman Sachs Group and a philanthropist, based in New York City. [ 3 ] [ 4 ] The ZIWR is one of three institutes currently constituting the Jacob Blaustein Institutes for Desert Research, which were originally established in 1974. [ 5 ] In 2016, the estate of Dr. Howard and Lottie Marcus made a donation of $400 million to Ben-Gurion University, believed to be the largest gift ever to a university in Israel, with a portion of it going to the Zuckerberg Institute for Water Research for research into water resources and desalination technologies. [ 6 ]
The Institute runs two departments: the Department of Environmental Hydrology and Microbiology, and the Department of Desalination and Water Treatment. [ 7 ] It also offers an MSc degree in Hydrology and Water Quality, in collaboration with the Albert Katz International School for Desert Studies, which is located at BGU's Sede Boqer Campus. [ 8 ]
The Department of Environmental Hydrology and Microbiology hosts researchers who specialize in hydrology, hydrogeology, chemistry, and microbiology. [ 9 ] Some of their particular research areas include flow and transport processes, remediation of contaminated water, and biological treatment of wastewater. [ 10 ]
The Department of Desalination and Water Treatment employs researchers who focus on various aspects of desalination and water treatment processes including the improvement and development of membranes for reverse osmosis, forward osmosis, and nanofiltration processes; processes to eliminate toxic materials from industrial effluents and polluted groundwater; and brine concentrate management. [ 11 ]
This master's degree program, offered through the Albert Katz International School for Desert Studies, aims to introduce students to research in water sciences with the goal of improving human life in drylands and the development of policies for the sustainable use of water resources. The program offers the following tracks of study: 1. Water Resources, 2. Desalination and Water Treatment, and 3. Microbiology and Water Quality. [ 12 ]
Researchers from the ZIWR were involved in studies related to the COVID-19 pandemic. The first, led by a team of researchers from the ZIWR and published in Nature Sustainability, found that coronaviruses can persist in wastewater for several days, possibly leading to the spread of these viruses to humans. [ 13 ] [ 14 ] In another study, ZIWR researchers, in cooperation with scientists from Rice University in Houston, Texas, developed a laser-induced graphene technology that can filter airborne COVID-19 particles. [ 15 ] | https://en.wikipedia.org/wiki/The_Zuckerberg_Institute_for_Water_Research |
Alle Dinge sind Gift, und nichts ist ohne Gift; allein die Dosis macht, dass ein Ding kein Gift ist. All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison.
" The dose makes the poison " ( Latin : dosis sola facit venenum 'only the dose makes the poison') is an adage intended to indicate a basic principle of toxicology . It is credited to Paracelsus who expressed the classic toxicology maxim "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison." This is often condensed to: "The dose makes the poison" or in Latin, "Sola dosis facit venenum" . It means that a substance can produce the harmful effect associated with its toxic properties only if it reaches a susceptible biological system within the body in a high enough concentration (i.e., dose ). [ 2 ]
The principle relies on the finding that all chemicals—even water and oxygen —can be toxic if too much is eaten, drunk, or absorbed. "The toxicity of any particular chemical depends on many factors, including the extent to which it enters an individual’s body." [ 3 ] This finding also provides the basis for public health standards, which specify maximum acceptable concentrations of various contaminants in food, public drinking water, and the environment. [ 3 ]
The idea also describes the phenomenon in which a poisonous substance, such as digitalis , can be medicinal ( digoxin ) in small, controlled, doses. | https://en.wikipedia.org/wiki/The_dose_makes_the_poison |
The eSync Alliance is a global automotive initiative established to build a secure, multi-vendor platform for end-to-end over-the-air (OTA) [ 1 ] [ 2 ] [ 3 ] updating and data services for the connected car, with a global network of participating suppliers. [ 4 ] [ 5 ] [ 6 ]
In June 2017, Excelfore publicly announced [ 7 ] it would work with several partner companies to form the eSync Alliance as an independent trade association. The aim of the eSync Alliance is to bring automakers, Tier-1 integrators, module and software suppliers into a mutually beneficial partnership to build eSync compliant solutions for the entire vehicle. [ 8 ] [ 9 ]
In February 2018, Excelfore announced [ 10 ] that Rick Kreifeldt, industry executive and former founding chairman of AVNU, joined the eSync Alliance as Executive Director. [ 11 ] [ 12 ]
In August 2018, the eSync Alliance was incorporated as a non-profit consortium, with 5 founding member companies: Alpine, Excelfore, Hella, Molex and ZF. In September 2018 the eSync Alliance announced [ 13 ] the election of officers and management for 2018/2019, and the formation of its first two working groups: Technical Working Group (TWG), and Marketing Working Group (MWG).
In April 2019, the eSync Alliance announced [ 14 ] the release of Version 1.0 of the eSync Compliance Specifications. The specifications total nearly 400 pages and consist of Architecture, Requirements, Interfaces and Security.
In June 2019, the eSync Alliance joined the Connected Vehicle Trade Association (CVTA) as an Associate Member. [ 15 ]
In June 2020, the eSync Alliance announced that Mike Gardner, Founder and President of mG Consulting, was appointed as Executive Director. [ 16 ]
In March 2021, the eSync Alliance released v2.0 of the eSync Specifications for Automotive OTA, expanding the specifications in the areas of cyber security and data gathering. [ 17 ]
In April 2021, the eSync Alliance and GENIVI Alliance, now COVESA, announced collaboration in the area of data standardization, as part of the Common Vehicle Interface Initiative (CVII) between GENIVI and W3C. [ 18 ] (Note: GENIVI has since rebranded as COVESA - the Connected Vehicle Systems Alliance.)
In November 2021, the eSync Alliance and the Autoware Foundation announced a joint working group to address integration of OTA and data gathering into the software stack for the next generation of autonomous vehicles.
The eSync platform has components in the cloud and in the vehicle. The eSync Server is in the cloud, the eSync Client is in the vehicle and multiple eSync Agents for end devices are in the vehicle. [ 19 ] [ 20 ]
The five founding companies of the eSync Alliance each hold one seat on the Board of Directors. [ 21 ] Additional board members may be elected by the membership during the Alliance annual general meeting. Current members of the alliance include Alpine, DSA, Excelfore, Faurecia , Hella, Joynext, Mobica, Molex , R Systems and ZF. [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/The_eSync_Alliance |
The mold, protozoan, and coelenterate mitochondrial code and the mycoplasma/spiroplasma code (translation table 4 ) is the genetic code used by various organisms, in some cases with slight variations, notably the use of UGA as a tryptophan codon rather than a stop codon .
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V).
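The practical effect of the codon reassignment described above can be shown with a minimal Python sketch. It is illustrative only: the helper function and its parameters are ad hoc inventions, and a real translator would need the complete 64-codon table for each code.

```python
# Minimal illustrative sketch: the defining difference of translation table 4,
# as described above, is that the codon UGA encodes tryptophan (Trp, W)
# rather than acting as a stop signal.  Only the affected codon is modelled here.
def read_uga(table_id: int) -> str:
    """Return how the codon UGA is interpreted under a given translation table."""
    return "Trp (W)" if table_id == 4 else "stop"

print("UGA under the standard code (table 1):", read_uga(1))  # stop
print("UGA under translation table 4:", read_uga(4))          # Trp (W)
```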
(Pritchard et al. , 1990)
This article incorporates text from the United States National Library of Medicine , which is in the public domain . [ 3 ] | https://en.wikipedia.org/wiki/The_mold,_protozoan,_and_coelenterate_mitochondrial_code_and_the_mycoplasma/spiroplasma_code |
The monkey and the coconuts is a mathematical puzzle in the field of Diophantine analysis that originated in a short story involving five sailors and a monkey on a desert island who divide up a pile of coconuts ; the problem is to find the number of coconuts in the original pile (fractional coconuts not allowed). The problem is notorious for its confounding difficulty to unsophisticated puzzle solvers, though with the proper mathematical approach, the solution is trivial. The problem has become a staple in recreational mathematics collections.
The problem can be expressed as:

Five men and a monkey are shipwrecked on an island, where they spend a day gathering a large pile of coconuts to be divided up in the morning. During the night one man wakes, divides the pile into five equal shares with one coconut left over, which he gives to the monkey; he hides his share, puts the rest back together and returns to sleep. Each of the other four men does the same in turn, each giving one leftover coconut to the monkey. In the morning the remaining pile is divided into five equal shares, and this time the division comes out even. How many coconuts were in the original pile?
The monkey and the coconuts is the best known representative of a class of puzzle problems requiring integer solutions structured as recursive division or fractionating of some discretely divisible quantity, with or without remainders, and a final division into some number of equal parts, possibly with a remainder. The problem is so well known that the entire class is often referred to broadly as "monkey and coconut type problems", though most are not closely related to the problem.
Another example: "I have a whole number of pounds of cement, I know not how many, but after addition of a ninth and an eleventh, it was partitioned into 3 sacks, each with a whole number of pounds. How many pounds of cement did I have?"
Problems ask for either the initial or terminal quantity. Stated or implied is the smallest positive number that could be a solution. There are two unknowns in such problems, the initial number and the terminal number, but only one equation which is an algebraic reduction of an expression for the relation between them. Common to the class is the nature of the resulting equation, which is a linear Diophantine equation in two unknowns. Most members of the class are determinate, but some are not (the monkey and the coconuts is one of the latter). Familiar algebraic methods are unavailing for solving such equations.
The origin of the class of such problems has been attributed to the Indian mathematician Mahāvīra in chapter VI, § 131 1 ⁄ 2 , 132 1 ⁄ 2 of his Ganita-sara-sangraha (“Compendium of the Essence of Mathematics”), circa 850 CE, which dealt with serial division of fruit and flowers with specified remainders. [ 1 ] That would make progenitor problems over 1,000 years old before their resurgence in the modern era. Problems involving division which invoke the Chinese remainder theorem appeared in Chinese literature as early as the first century CE. Sun Tzu asked: Find a number which leaves the remainders 2, 3 and 2 when divided by 3, 5 and 7, respectively. Diophantus of Alexandria first studied problems requiring integer solutions in the 3rd century CE. The Euclidean algorithm for the greatest common divisor, which underlies the solution of such problems, was discovered by the Greek geometer Euclid and published in his Elements in 300 BC.
Prof. David Singmaster , a historian of puzzles, traces a series of less plausibly related problems through the middle ages, with a few references as far back as the Babylonian empire circa 1700 BC. They involve the general theme of adding or subtracting fractions of a pile or specific numbers of discrete objects and asking how many there could have been in the beginning. The next reference to a similar problem is in Jacques Ozanam 's Récréations mathématiques et physiques , 1725. In the realm of pure mathematics, Lagrange in 1770 expounded his continued fraction theorem and applied it to solution of Diophantine equations.
The first description of the problem in close to its modern wording appears in Lewis Carroll 's diaries in 1888: it involves a pile of nuts on a table serially divided by four brothers, each time with remainder of one given to a monkey, and the final division coming out even. The problem never appeared in any of Carroll's published works, though from other references [ which? ] it appears the problem was in circulation in 1888. An almost identical problem appeared in W.W. Rouse Ball 's Elementary Algebra (1890). [ citation needed ] The problem was mentioned in works of period mathematicians, with solutions, mostly wrong, indicating that the problem was new and unfamiliar at the time. [ citation needed ]
The problem became notorious when American novelist and short story writer Ben Ames Williams modified an older problem and included it in a story, "Coconuts", in the October 9, 1926, issue of the Saturday Evening Post . [ 2 ]
Williams had not included an answer in the story. The magazine was inundated by more than 2,000 letters pleading for an answer to the problem. The Post editor, Horace Lorimer , famously fired off a telegram to Williams saying: "FOR THE LOVE OF MIKE, HOW MANY COCONUTS? HELL POPPING AROUND HERE". Williams continued to get letters asking for a solution or proposing new ones for the next twenty years. [ 3 ]
Martin Gardner featured the problem in his April 1958 Mathematical Games column in Scientific American . According to Gardner, Williams had modified an older problem to make it more confounding. In the older version there is a coconut for the monkey on the final division; in Williams's version the final division in the morning comes out even. But the available historical evidence does not indicate which versions Williams had access to. [ 4 ] Gardner once told his son Jim that it was his favorite problem. [ 5 ] He said that the Monkey and the Coconuts is "probably the most worked on and least often solved" Diophantine puzzle. [ 2 ] Since that time the Williams version of the problem has become a staple of recreational mathematics . [ 6 ] The original story containing the problem was reprinted in full in Clifton Fadiman 's 1962 anthology The Mathematical Magpie , [ 7 ] a book that the Mathematical Association of America recommends for acquisition by undergraduate mathematics libraries. [ 8 ]
Numerous variants which vary the number of sailors, monkeys, or coconuts have appeared in the literature. [ 9 ]
Diophantine analysis is the study of equations with rational coefficients requiring integer solutions. In Diophantine problems, there are fewer equations than unknowns. The "extra" information required to solve the equations is the condition that the solutions be integers. Any solution must satisfy all equations. Some Diophantine equations have no solution, some have one or a finite number, and others have infinitely many solutions.
The monkey and the coconuts reduces to a two-variable linear Diophantine equation of the form

ax + by = c

where d is the greatest common divisor of a and b. [ 10 ] By Bézout's identity, the equation is solvable if and only if d divides c. If it does, the equation has infinitely many periodic solutions of the form

x = x0 + (b/d)·t,  y = y0 − (a/d)·t

where (x0, y0) is a particular solution and t is a parameter that can be any integer. The problem is not intended to be solved by trial-and-error; there are deterministic methods for finding (x0, y0) in this case (see text).
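The background above can be made concrete with a short Python sketch (illustrative only; it is not drawn from the article's references). It finds one particular solution (x0, y0) of ax + by = c with the extended Euclidean algorithm, which also underlies the continued-fraction method used later in the article.

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b), for non-negative a, b."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    return g, y1, x1 - (a // b) * y1

def solve_linear_diophantine(a, b, c):
    """One integer solution (x0, y0) of a*x + b*y = c, or None if gcd(a, b) does not divide c."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None
    return x * (c // g), y * (c // g)   # all solutions: (x0 + (b//g)*t, y0 - (a//g)*t)

# Small check: 6x + 10y = 8 is solvable since gcd(6, 10) = 2 divides 8.
print(solve_linear_diophantine(6, 10, 8))   # (8, -4), since 6*8 + 10*(-4) = 8
```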
Numerous solutions starting as early as 1928 have been published both for the original problem and Williams modification. [ 11 ] [ 12 ] [ 13 ] [ 14 ]
Before entering upon a solution to the problem, a couple of things may be noted. If there were no remainders, then given that there are 6 divisions by 5, 5^6 = 15,625 coconuts must be in the pile; on the 6th and last division, each sailor receives 1024 coconuts. No smaller positive number will result in all 6 divisions coming out even. That means that in the problem as stated, any multiple of 15,625 may be added to the pile, and it will still satisfy the problem conditions. It also means that the number of coconuts in the original pile is smaller than 15,625, else subtracting 15,625 would yield a smaller solution. But the number in the original pile is not trivially small, like 5 or 10 (that is why this is a hard problem) – it may be in the hundreds or thousands. Unlike trial and error in the case of guessing a polynomial root, trial and error for a Diophantine root will not result in any obvious convergence. There is no simple way of estimating what the solution will be.
Martin Gardner's 1958 Mathematical Games column begins its analysis by solving the original problem (with one coconut also remaining in the morning) because it is easier than Williams's version. Let F be the number of coconuts received by each sailor after the final division into 5 equal shares in the morning. Then the number of coconuts left before the morning division is 5F + 1; the number present when the fifth sailor awoke was (5/4)(5F + 1) + 1 = (25/4)F + 9/4; the number present when the fourth sailor awoke was (5/4)((25/4)F + 9/4) + 1 = (125/16)F + 61/16; and so on. We find that the size N of the original pile satisfies the Diophantine equation [ 3 ]

1024N = 15625F + 11529

Gardner points out that this equation is "much too difficult to solve by trial and error," [ 3 ] but presents a solution he credits to J. H. C. Whitehead (via Paul Dirac ): [ 3 ] the equation also has solutions in negative integers. Trying out a few small negative numbers, it turns out N = −4, F = −1 is a solution. [ 15 ] Adding 15625 to N and 1024 to F gives the smallest positive solution: N = 15621, F = 1023.
Trial and error fails to solve Williams's version, so a more systematic approach is needed.
The search space can be reduced by a series of increasingly larger factors by observing the structure of the problem so that a bit of trial and error finds the solution. The search space is much smaller if one starts with the number of coconuts received by each man in the morning division, because that number is much smaller than the number in the original pile.
If F is the number of coconuts each sailor receives in the final division in the morning, the pile in the morning is 5F, which must also be divisible by 4, since the last sailor in the night combined 4 piles for the morning division. So the morning pile, call the number n, is a multiple of 20. The pile before the last sailor woke up must have been 5/4(n) + 1. If only one sailor woke up in the night, then 5/4(20) + 1 = 26 works for the minimum number of coconuts in the original pile. But if two sailors woke up, 26 is not divisible by 4, so the morning pile must be some multiple of 20 that yields a pile divisible by 4 before the last sailor wakes up. It so happens that 3·20 = 60 works for two sailors: applying the recursion formula for n twice yields 96 as the smallest number of coconuts in the original pile. 96 is divisible by 4 once more, so for 3 sailors awakening, the pile could have been 121 coconuts. But 121 is not divisible by 4, so for 4 sailors awakening, one needs to make another leap. At this point, the analogy becomes obtuse, because in order to accommodate 4 sailors awakening, the morning pile must be some multiple of 60: if one is persistent, it may be discovered that 17·60 = 1020 does the trick and the minimum number in the original pile would be 2496. A last iteration on 2496 for 5 sailors awakening, i.e. 5/4(2496) + 1, brings the original pile to 3121 coconuts.
Another device is to use extra objects to clarify the division process. Suppose that in the evening we add four blue coconuts to the pile. Then the first sailor to wake up will find the pile to be evenly divisible by five, instead of having one coconut left over. The sailor divides the pile into fifths such that each blue coconut is in a different fifth; then he takes the fifth with no blue coconut, gives one of his coconuts to the monkey, and puts the other four fifths (including all four blue coconuts) back together. Each sailor does the same. During the final division in the morning, the blue coconuts are left on the side, belonging to no one. Since the whole pile was evenly divided by 5 five times in the night, it must have contained 5^5 coconuts: 4 blue coconuts and 3121 ordinary coconuts.
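Both answers can be checked mechanically. The short Python sketch below (illustrative only; the function name and parameters are ad hoc) simulates the night-time divisions and the morning split, confirming 3,121 for Williams's version and 15,621 for the original version, and finds the former by brute force.

```python
def pile_works(n, sailors=5, morning_remainder=0):
    """Simulate the night divisions (one coconut to the monkey each time) and the morning split."""
    for _ in range(sailors):
        if n % sailors != 1:               # each night division must leave exactly one for the monkey
            return False
        n = (n - 1) * (sailors - 1) // sailors
    return n % sailors == morning_remainder  # Williams's version: 0; the original version: 1

print(pile_works(3121))                              # True  (Williams's version)
print(pile_works(15621, morning_remainder=1))        # True  (original version)
print(next(n for n in range(1, 20000) if pile_works(n)))   # 3121, found by brute force
```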
The device of using additional objects to aid in conceptualizing a division appeared as far back as 1912 in a solution due to Norman H. Anning . [ 3 ] [ 16 ]
A related device appears in the 17-animal inheritance puzzle : A man wills 17 horses to his three sons, specifying that the eldest son gets half, the next son one-third, and the youngest son, one-ninth of the animals. The sons are confounded, so they consult a wise horse trader. He says, "here, borrow my horse." The sons duly divide the horses, discovering that all the divisions come out even, with one horse left over, which they return to the trader.
A simple solution appears when the divisions and subtractions are performed in base 5.
Consider the subtraction when the first sailor takes his share (and the monkey's). Let n0, n1, ... represent the base-5 digits of N, the number of coconuts in the original pile, and s0, s1, ... the base-5 digits of the sailor's share S. After the monkey's coconut is removed, the least significant digit of N must be 0; after the subtraction, the least significant digit of the number N' left by the first sailor must be 1, hence the following (the actual number of digits in N as well as S is unknown, but they are irrelevant just now):
The digit subtracted from 0 base 5 to yield 1 is 4, so s0 = 4. But since S is (N − 1)/5, and dividing by 5 in base 5 just shifts the number right one position, n1 = s0 = 4. So now the subtraction looks like:
Since the next sailor is going to do the same thing on N', the least significant digit of N' becomes 0 after tossing one to the monkey, and the LSD of S' must be 4 for the same reason; the next digit of N' must also be 4. So now it looks like:
Borrowing 1 from n1 (which is now 4) leaves 3, so s1 must be 4, and therefore n2 as well. So now it looks like:
But the same reasoning again applies to N' as applied to N, so the next digit of N' is 4, so s2 and n3 are also 4, etc. There are 5 divisions; the first four must leave an odd number (base 5) in the pile for the next division, but the last division must leave an even number (base 5) so the morning division will come out even (in 5s). So there are four 4s in N following a LSD of 1: N = 44441 (base 5) = 3121 (base 10)
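The base-5 result is easy to verify computationally; the following Python sketch (illustrative only) converts 44441 from base 5 and replays the divisions, printing each intermediate pile in base 5.

```python
def to_base5(n):
    """Render a non-negative integer in base 5."""
    digits = []
    while n:
        digits.append(str(n % 5))
        n //= 5
    return "".join(reversed(digits)) or "0"

n = int("44441", 5)          # the digit pattern derived above
print(n)                      # 3121
for _ in range(5):
    n = (n - 1) * 4 // 5      # one night-time division: one to the monkey, four fifths kept
    print(to_base5(n))        # the first four piles end in 1 (base 5), the last ends in 0
```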
A straightforward numeric analysis goes like this: If N is the initial number, each of the 5 sailors transitions the original pile thus:

N → 4(N − 1)/5

Repeating this transition 5 times gives the number left in the morning (using the identity 4(n − 1)/5 + 4 = 4(n + 4)/5):

(4/5)^5·(N + 4) − 4 = 1024(N + 4)/3125 − 4
Since that number must be an integer and 1024 is relatively prime to 3125, N+4 must be a multiple of 3125. The smallest such multiple is 3125 · 1, so N = 3125 – 4 = 3121; the number left in the morning comes to 1020, which is evenly divisible by 5 as required.
A simple succinct solution can be obtained by directly utilizing the recursive structure of the problem: There were five divisions of the coconuts into fifths, each time with one left over (putting aside the final division in the morning). The pile remaining after each division must contain an integral number of coconuts. If there were only one such division, then it is readily apparent that 5 · 1 + 1 = 6 is a solution. In fact any multiple of five plus one is a solution, so a possible general formula is 5 · k – 4, since a multiple of 5 plus 1 is also a multiple of 5 minus 4. So 11, 16, etc. also work for one division. [ 17 ]
If two divisions are done, a multiple of 5 · 5 = 25 rather than 5 must be used, because 25 can be divided by 5 twice. So the number of coconuts that could be in the pile is k · 25 – 4; k = 1, yielding 21, is the smallest positive number that can be successively divided by 5 twice with remainder 1. If there are 5 divisions, then multiples of 5^5 = 3125 are required; the smallest such number is 3125 – 4 = 3121. After 5 divisions, there are 1020 coconuts left over, a number divisible by 5 as required by the problem. In fact, after n divisions, it can be proven that the remaining pile is divisible by n, a property made convenient use of by the creator of the problem.
A formal way of stating the above argument is:
The original pile of coconuts will be divided by 5 a total of 5 times with a remainder of 1, not considering the last division in the morning. Let N = number of coconuts in the original pile. Each division must leave the number of nuts in the same congruence class (mod 5). So,

N ≡ −4 (mod 5); and since each sailor's transition n → 4(n − 1)/5 satisfies 4(n − 1)/5 + 4 = 4(n + 4)/5, if n + 4 is divisible by 5^k then the next pile plus 4 is divisible by 5^(k−1).

So if we began in modulo class –4 nuts then we will remain in modulo class –4. Since ultimately we have to divide the pile by 5 five times, i.e. by 5^5, the original pile was 5^5 – 4 = 3121 coconuts. The remainder of 1020 coconuts conveniently divides evenly by 5 in the morning. This solution essentially reverses how the problem was (probably) constructed.
The equivalent Diophantine equation for this version is:

1024N = 15625F + 8404    (1)

where N is the original number of coconuts, and F is the number received by each sailor on the final division in the morning. This is only trivially different from the equation above for the predecessor problem, and solvability is guaranteed by the same reasoning.

Reordering,

1024N − 15625F = 8404    (2)
This Diophantine equation has a solution which follows directly from the Euclidean algorithm; in fact, it has infinitely many periodic solutions, positive and negative. If (x0, y0) is a solution of 1024x − 15625y = 1, then N0 = x0 · 8404, F0 = y0 · 8404 is a solution of (2), which means any solution must have the form

N = N0 + 15625·t,  F = F0 + 1024·t    (3)

where t is an arbitrary parameter that can have any integral value.
One can take both sides of (1) above modulo 1024, so

15625F + 8404 ≡ 0 (mod 1024)

Another way of thinking about it is that in order for N to be an integer, the RHS of the equation must be an integral multiple of 1024; that property will be unaltered by factoring out as many multiples of 1024 as possible from the RHS. Reducing both sides by multiples of 1024,

1024N − 1024(15F + 8) = 15625F + 8404 − 1024(15F + 8)

subtracting,

1024(N − 15F − 8) = 265F + 212

factoring,

1024(N − 15F − 8) = 53(5F + 4)

The RHS must still be a multiple of 1024; since 53 is relatively prime to 1024, 5F + 4 must be a multiple of 1024. The smallest such multiple is 1 · 1024, so 5F + 4 = 1024 and F = 204. Substituting into (1),

1024N = 15625 · 204 + 8404 = 3,195,904, so N = 3121.
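The same reduction can be carried out in one step with modular arithmetic. The Python sketch below (illustrative only; it relies on Python 3.8+ support for modular inverses via pow) solves the congruence 15625F + 8404 ≡ 0 (mod 1024) directly.

```python
# Solve 15625*F + 8404 ≡ 0 (mod 1024), then recover N from 1024*N = 15625*F + 8404.
F = (-8404 * pow(15625, -1, 1024)) % 1024   # pow(a, -1, m) is the modular inverse of a mod m
N = (15625 * F + 8404) // 1024
print(F, N)   # 204 3121
```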
The Euclidean algorithm is quite tedious, but it is a general methodology for solving equations ax + by = c with rational coefficients that require integral answers. From (2) above, it is evident that 1024 (2^10) and 15625 (5^6) are relatively prime and therefore their GCD is 1, but we need the reduction equations for back substitution to obtain N and F in terms of these two quantities:
First, obtain successive remainders until GCD remains:
15625 = 15·1024 + 265 (a)
1024 = 3·265 + 229 (b)
265 = 1·229 + 36 (c)
229 = 6·36 + 13 (d)
36 = 2·13 + 10 (e)
13 = 1·10 + 3 (f)
10 = 3·3 + 1 (g) (remainder 1 is GCD of 15625 and 1024)
1 = 10 – 3(13–1·10) = 4·10 – 3·13 (reorder (g), substitute for 3 from (f) and combine)
1 = 4·(36 – 2·13) – 3·13 = 4·36 – 11·13 (substitute for 10 from (e) and combine)
1 = 4·36 – 11·(229 – 6·36) = –11·229 + 70·36 (substitute for 13 from (d) and combine)
1 = –11·229 + 70·(265 – 1·229) = –81·229 + 70·265 (substitute for 36 from (c) and combine)
1 = –81·(1024 – 3·265) + 70·265 = –81·1024 + 313·265 (substitute for 229 from (b) and combine)
1 = –81·1024 + 313·(15625 – 15·1024) = 313·15625 – 4776·1024 (substitute for 265 from (a) and combine)
So the pair (N0, F0) = (–4776·8404, –313·8404); the smallest t (see (3) in the previous subsection) that will make both N and F positive is 2569, so:

N = –4776·8404 + 2569·15625 = 3121,  F = –313·8404 + 2569·1024 = 204

Alternately, one may use a continued fraction, whose construction is based on the Euclidean algorithm. The continued fraction for 1024 ⁄ 15625 (0.065536 exactly) is [0; 15, 3, 1, 6, 2, 1, 3, 3]; [ 18 ] its penultimate convergent is 313 ⁄ 4776, giving x0 = –4776 and y0 = –313. The least value of t for which both N and F are non-negative is 2569, yielding the same result, N = 3121 and F = 204.
This is the smallest positive number that satisfies the conditions of the problem.
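The continued-fraction route is also easy to mechanize. The following Python sketch (illustrative only) computes the continued-fraction terms of 1024/15625 and its convergents; the penultimate convergent supplies the pair used above.

```python
from fractions import Fraction

def cf_terms(num, den):
    """Continued-fraction terms of num/den, via the Euclidean algorithm."""
    terms = []
    while den:
        terms.append(num // den)
        num, den = den, num % den
    return terms

def convergents(terms):
    """Yield the successive convergents of a continued fraction."""
    h_prev, h, k_prev, k = 1, terms[0], 0, 1
    yield Fraction(h, k)
    for a in terms[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)

terms = cf_terms(1024, 15625)
print(terms)                          # [0, 15, 3, 1, 6, 2, 1, 3, 3]
print(list(convergents(terms))[-2])   # 313/4776, the penultimate convergent
```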
When the number of sailors is a parameter, let it be m, rather than a computational value, careful algebraic reduction of the relation between the number of coconuts in the original pile and the number allotted to each sailor in the morning yields an analogous Diophantine relation whose coefficients are expressions in m.

The first step is to obtain an algebraic expansion of the recurrence relation corresponding to each sailor's transformation of the pile, n_i being the number left by the i-th sailor:

n_i = ((m − 1)/m)·(n_(i−1) − 1)

where n_0 ≡ N, the number originally gathered, and n_m is the number left in the morning. Expanding the recurrence by substituting n_i for n_(i−1) m times yields:

n_m = ((m − 1)/m)^m·N − [ ((m − 1)/m) + ((m − 1)/m)^2 + ⋯ + ((m − 1)/m)^m ]

Factoring the latter term,

n_m = ((m − 1)/m)^m·N − ((m − 1)/m)·[ ((m − 1)/m)^(m−1) + ⋯ + ((m − 1)/m) + 1 ]

The power series polynomial in brackets, of the form x^(m−1) + ... + x + 1, sums to (1 − x^m)/(1 − x), so,

n_m = ((m − 1)/m)^m·N − ((m − 1)/m)·(1 − ((m − 1)/m)^m)/(1 − (m − 1)/m)

which simplifies to:

n_m = ((m − 1)^m/m^m)·(N + m − 1) − (m − 1)

But n_m is the number left in the morning, which is a multiple of m (i.e. m·F, with F the number allotted to each sailor in the morning):

m·F = ((m − 1)^m/m^m)·(N + m − 1) − (m − 1)

Solving for n_0 (= N),

N = (m^m·(m·F + m − 1))/(m − 1)^m − (m − 1)
The equation is a linear Diophantine equation in two variables, N {\displaystyle N} and F {\displaystyle F} . m {\displaystyle m} is a parameter that can be any integer. The nature of the equation and the method of its solution do not depend on m {\displaystyle m} .
Number theoretic considerations now apply. For N {\displaystyle N} to be an integer, it is sufficient that m − 1 + m ⋅ F ( m − 1 ) m {\displaystyle {\frac {m-1+m\cdot F}{(m-1)^{m}}}} be an integer, so let it be r {\displaystyle r} :
The equation must be transformed into the form a x + b y = ± 1 {\displaystyle ax+by=\pm 1} whose solutions are formulaic. Hence:
Because m {\displaystyle m} and m − 1 {\displaystyle m-1} are relatively prime, there exist integer solutions ( r , s ) {\displaystyle (r,s)} by Bézout's identity. This equation can be restated as:
But ( m –1) m is a polynomial Z · m –1 if m is odd and Z · m +1 if m is even, where Z is a polynomial with monomial basis in m . Therefore r 0 =1 if m is odd and r 0 =–1 if m is even is a solution.
Bézout's identity gives the periodic solution r = r 0 + k ⋅ m {\displaystyle r=r_{0}+k\cdot m} , so substituting for r {\displaystyle r} in the Diophantine equation and rearranging:
where r 0 = 1 {\displaystyle r_{0}=1} for m {\displaystyle m} odd and r 0 = − 1 {\displaystyle r_{0}=-1} for m {\displaystyle m} even and k {\displaystyle k} is any integer. [ 19 ] For a given m {\displaystyle m} , the smallest positive k {\displaystyle k} will be chosen such that N {\displaystyle N} satisfies the constraints of the problem statement.
In the William's version of the problem, m {\displaystyle m} is 5 sailors, so r 0 {\displaystyle r_{0}} is 1, and k {\displaystyle k} may be taken to be zero to obtain the lowest positive answer, so N = 1 · 5 5 – 4 = 3121 for the number of coconuts in the original pile. (It may be noted that the next sequential solution of the equation for k =–1, is –12504, so trial and error around zero will not solve the Williams version of the problem, unlike the original version whose equation, fortuitously, had a small magnitude negative solution).
Here is a table of the positive solutions N for the first few m (k is any non-negative integer):
Other variants, including the putative predecessor problem, have related general solutions for an arbitrary number of sailors.
When the morning division also has a remainder of one, the solution is:

N = k·m^(m+1) − (m − 1)

For m = 5 and k = 1 this yields 15,621 as the smallest positive number of coconuts for the pre-Williams version of the problem.
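These closed forms are easy to sanity-check by simulating the night-time divisions directly. The following Python sketch (illustrative only) verifies N = 3121 for Williams's version (morning division even) and N = 15621 for the pre-Williams version (morning division leaves one for the monkey):

```python
def night_divisions(n, m):
    """Each of the m sailors gives 1 coconut to the monkey, takes 1/m of
    the rest, and leaves the remainder; returns the morning pile, or None
    if some night-time division fails to come out even."""
    for _ in range(m):
        if (n - 1) % m:
            return None
        n = (n - 1) * (m - 1) // m
    return n

# Williams's version: N = 3121; the morning pile divides evenly five ways.
assert night_divisions(3121, 5) == 1020 and 1020 % 5 == 0

# Pre-Williams version: N = 15621; the morning division leaves 1 coconut.
assert night_divisions(15621, 5) == 5116 and 5116 % 5 == 1
```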
In some earlier alternate forms of the problem, the divisions came out even, and nuts (or items) were allocated from the remaining pile after the division. In these forms, the recursion relation is:

n_i = ((m − 1)/m)·n_{i−1} − 1
The alternate form also had two endings, when the morning division comes out even, and when there is one nut left over for the monkey.
When the morning division comes out even, the general solution reduces via a similar derivation to:

N = k·m^(m+1) − m
For example, when m = 4 {\displaystyle m=4} and k = 1 {\displaystyle k=1} , the original pile has 1020 coconuts, and after four successive even divisions in the night with a coconut allocated to the monkey after each division, there are 80 coconuts left over in the morning, so the final division comes out even with no coconut left over.
When the morning division results in a nut left over, the general solution is:

N = k·m^(m+1) + r0·m^m − m
where r0 = −1 if m is odd, and r0 = 1 if m is even. For example, when m = 3, r0 = −1 and k = 1, the original pile has 51 coconuts, and after three successive divisions in the night, with a coconut allocated to the monkey after each division, there are 13 coconuts left over in the morning, so the final division has a coconut left over for the monkey.
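The two endings of this alternate form can be checked the same way; a short Python sketch (illustrative only) simulates the even-division recursion for the examples above:

```python
def even_divisions(n, m):
    """Variant recursion: each sailor divides the pile evenly into m parts,
    takes one part, then gives 1 nut from the remainder to the monkey;
    returns the morning pile, or None if a division fails to come out even."""
    for _ in range(m):
        if n % m:
            return None
        n = n * (m - 1) // m - 1
    return n

# m = 4, k = 1: N = 4**5 - 4 = 1020; the morning pile of 320 divides evenly.
assert even_divisions(1020, 4) == 320 and 320 % 4 == 0   # 80 per sailor

# m = 3, r0 = -1, k = 1: N = 81 - 27 - 3 = 51; one nut left in the morning.
assert even_divisions(51, 3) == 13 and 13 % 3 == 1
```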
Other post-Williams variants, which specify different remainders, including positive ones (i.e. the monkey adds coconuts to the pile), have been treated in the literature. The solution is:

N = k·m^(m+1) + c·(r0·m^m − (m − 1))
where r0 = 1 for m odd and r0 = −1 for m even, c is the remainder after each division (or the number of monkeys), and k is any integer (c is negative if the monkeys add coconuts to the pile).
Other variants, in which the number of men or the remainders vary between divisions, are generally outside the class of problems associated with the monkey and the coconuts, though these similarly reduce to linear Diophantine equations in two variables. Their solutions yield to the same techniques and present no new difficulties. | https://en.wikipedia.org/wiki/The_monkey_and_the_coconuts |
The purpose of a system is what it does ( POSIWID ) is a systems thinking heuristic coined by Stafford Beer , [ 1 ] who stated that there is "no point in claiming that the purpose of a system is to do what it constantly fails to do". [ 2 ] The term is widely used by systems theorists , and is generally invoked to counter the notion that the purpose of a system can be read from the intentions of those who design , operate, or promote it. When a system's side effects or unintended consequences reveal that its behavior is poorly understood, then the POSIWID perspective can balance political understandings of system behavior with a more straightforwardly descriptive view.
Stafford Beer coined the term POSIWID in his books [ 3 ] and subsequently used it many times in public addresses. In his address to the University of Valladolid , Spain, in October 2001, he said: [ 1 ]
According to the cybernetician , the purpose of a system is what it does. This is a basic dictum . It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment , or sheer ignorance of circumstances.
From a cybernetic perspective, complex systems are not controllable by simple notions of management, and interventions in a system can best be understood by looking at how they affect observed system behavior. The term is used in many other fields as well, including biology [ 4 ] and management . [ 5 ] Whereas a cybernetician may apply the principle to the results inexorably produced by the mechanical dynamics of an activity system, a management scientist may apply it to the results produced by the self-interest of actors who play roles in a business or other institution. | https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does |
The spider and the fly problem is a recreational mathematics problem with an unintuitive solution, asking for a shortest path or geodesic between two points on the surface of a cuboid . It was originally posed by Henry Dudeney .
In the typical version of the puzzle, an otherwise empty cuboid room 30 feet long, 12 feet wide and 12 feet high contains a spider and a fly. The spider is 1 foot below the ceiling and horizontally centred on one 12′×12′ wall. The fly is 1 foot above the floor and horizontally centred on the opposite wall. The problem is to find the minimum distance the spider must crawl along the walls, ceiling and/or floor to reach the fly, which remains stationary. [ 1 ]
A naive solution is for the spider to remain horizontally centred, and crawl up to the ceiling, across it and down to the fly, giving a distance of 42 feet. Instead, the shortest path, 40 feet long, spirals around five of the six faces of the cuboid. Alternatively, it can be described by unfolding the cuboid into a net and finding a shortest path (a line segment) on the resulting unfolded system of six rectangles in the plane. Different nets produce different segments with different lengths, and the question becomes one of finding a net whose segment length is minimum. [2] Another path, of intermediate length √1658 ≈ 40.7, crosses diagonally through four faces instead of five. [3]
For a room of length l, width w and height h, with the spider a distance b below the ceiling and the fly a distance a above the floor, the length of the spiral path is √((w + h)² + (b + l + a)²), while the naive solution has length l + h − |b − a|. [1] Depending on the dimensions of the cuboid, and on the initial positions of the spider and fly, one or another of these paths, or of four other paths, may be the optimal solution. [4] However, there is no rectangular cuboid, and two points on the cuboid, for which the shortest path passes through all six faces of the cuboid. [5]
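These path lengths are straightforward to evaluate. A small Python sketch (illustrative only) computes them for the stated room dimensions:

```python
from math import hypot, sqrt

l, w, h = 30, 12, 12      # room length, width, height in feet
b, a = 1, 1               # spider below the ceiling, fly above the floor

naive  = l + h - abs(b - a)        # up, over the ceiling, down, staying centred
spiral = hypot(w + h, b + l + a)   # unfolding whose segment crosses five faces
other  = sqrt(1658)                # the four-face diagonal route cited above

print(naive, spiral, other)        # 42, 40.0, 40.718...
```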
A different lateral thinking solution, beyond the stated rules of the puzzle, involves the spider attaching dragline silk to the wall to lower itself to the floor, and crawling 30 feet across it and 1 foot up the opposite wall, giving a crawl distance of 31 feet. Similarly, it can climb to the ceiling, cross it, then attach the silk to lower itself 11 feet, also a 31-foot crawl. [ 6 ]
The problem was originally posed by Henry Dudeney in the English newspaper Weekly Dispatch on 14 June 1903 and collected in The Canterbury Puzzles (1907). Martin Gardner calls it "Dudeney's best-known brain-teaser". [ 7 ]
A version of the problem was recorded by Adolf Hurwitz in his diary in 1908. Hurwitz stated that he heard it from L. Gustave du Pasquier , who in turn had heard it from Richard von Mises . [ 8 ] | https://en.wikipedia.org/wiki/The_spider_and_the_fly_problem |
In biology , a theca ( pl. : thecae ) is a sheath or a covering.
In botany, the theca relates to the anatomy of a plant's flower. The theca of an angiosperm consists of a pair of microsporangia that are adjacent to each other and share a common area of dehiscence called the stomium. [1] Any part of a microsporophyll that bears microsporangia is called an anther. Most anthers are formed on the apex of a filament. An anther and its filament together form a typical (or filantherous) stamen, part of the male floral organ.
The typical anther is bilocular, i.e. it consists of two thecae. Each theca contains two microsporangia, also known as pollen sacs. The microsporangia produce the microspores , which for seed plants are known as pollen grains.
If the pollen sacs are not adjacent, or if they open separately, then no thecae are formed. In Lauraceae, for example, the pollen sacs are spaced apart and open independently.
The tissue between the locules and the cells is called the connective and the parenchyma. The two pollen sacs are separated by the stomium. When the anther dehisces, it opens at the stomium.
The outer cells of the theca form the epidermis . Below the epidermis, the somatic cells form the tapetum . These support the development of microspores into mature pollen grains. However, little is known about the underlying genetic mechanisms, which play a role in male sporo- and gametogenesis.
The thecal arrangement of a typical stamen can be as follows:
In biology, the theca of a follicle can also refer to the site of androgen production in females. The theca of the spinal cord is called the thecal sac, and intrathecal injections are made there or into the subarachnoid space of the skull.
In human embryogenesis , the theca cells form a corpus luteum after a Graafian follicle has expelled its secondary oocyte arrested in second meiosis .
Thecal shape is also important in graptolite and pterobranch taxonomy.
In armoured dinoflagellates, the covering is made up of thecal plates.
In microbiology and planktology , a theca is a subcellular structural component out of which the frustules of diatoms and dinoflagellates are constructed.
| https://en.wikipedia.org/wiki/Theca |
The Theil index is a statistic primarily used to measure economic inequality [1] and other economic phenomena, though it has also been used to measure racial segregation. [2][3] The Theil index T_T is the same as redundancy in information theory, which is the maximum possible entropy of the data minus the observed entropy. It is a special case of the generalized entropy index. It can be viewed as a measure of redundancy, lack of diversity, isolation, segregation, inequality, non-randomness, and compressibility. It was proposed by the Dutch econometrician Henri Theil (1924–2000) of the Erasmus University Rotterdam. [3]
Henri Theil himself said (1967): "The (Theil) index can be interpreted as the expected information content of the indirect message which transforms the population shares as prior probabilities into the income shares as posterior probabilities." [ 4 ] Amartya Sen noted, "But the fact remains that the Theil index is an arbitrary formula, and the average of the logarithms of the reciprocals of income shares weighted by income is not a measure that is exactly overflowing with intuitive sense." [ 4 ]
For a population of N "agents" each with characteristic x , the situation may be represented by the list x i ( i = 1,..., N ) where x i is the characteristic of agent i . For example, if the characteristic is income, then x i is the income of agent i .
The Theil T index is defined as [5]

T_T = (1/N) Σ_{i=1}^{N} (x_i/μ) ln(x_i/μ)

and the Theil L index is defined as [5]

T_L = (1/N) Σ_{i=1}^{N} ln(μ/x_i)

where μ is the mean income:

μ = (1/N) Σ_{i=1}^{N} x_i
Theil L is an income distribution's dis-entropy per person, measured with respect to maximum entropy (which is achieved with complete equality).

(In an alternative interpretation, Theil L is the natural logarithm of the geometric mean of the ratio (mean income)/(income i) over all incomes. The related Atkinson(1) index is just 1 minus the geometric mean of (income i)/(mean income) over the income distribution.)

Because a transfer between a larger income and a smaller one changes the smaller income's ratio more than it changes the larger income's ratio, the transfer principle is satisfied by this index.
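A direct transcription of these definitions into code can make them concrete; the following Python sketch (illustrative only, with made-up incomes) computes both indices:

```python
from math import log

def theil_t(incomes):
    mu = sum(incomes) / len(incomes)
    return sum((x / mu) * log(x / mu) for x in incomes) / len(incomes)

def theil_l(incomes):
    mu = sum(incomes) / len(incomes)
    return sum(log(mu / x) for x in incomes) / len(incomes)

sample = [10, 20, 30, 100]            # hypothetical incomes
print(theil_t(sample), theil_l(sample))
print(theil_t([50, 50, 50]))          # 0.0 under complete equality
```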
Equivalently, if the situation is characterized by a discrete distribution function f_k (k = 0, ..., W), where f_k is the fraction of the population with income k and W = Nμ is the total income, then Σ_{k=0}^{W} f_k = 1 and the Theil index is:

T_T = Σ_{k=0}^{W} f_k (k/μ) ln(k/μ)

where μ is again the mean income:

μ = Σ_{k=0}^{W} k f_k
Note that in this case income k is an integer and k=1 represents the smallest increment of income possible (e.g., cents).
If the situation is characterized by a continuous distribution function f(k) (supported from 0 to infinity), where f(k) dk is the fraction of the population with income k to k + dk, then the Theil index is:

T_T = ∫₀^∞ f(k) (k/μ) ln(k/μ) dk

where the mean is:

μ = ∫₀^∞ k f(k) dk
Theil indices for some common continuous probability distributions are given in the table below:
If everyone has the same income, then T_T equals 0. If one person has all the income, then T_T gives the result ln(N), which is maximum inequality. Dividing T_T by ln(N) normalizes the index to range from 0 to 1, but then the independence axiom is violated: T[x ∪ x] ≠ T[x], so the normalized index does not qualify as a measure of inequality.
The Theil index measures an entropic "distance" between the population and the egalitarian state in which everyone has the same income. The numerical result is in terms of negative entropy, so that a higher number indicates more order, i.e. a state further away from complete equality. Formulating the index to represent negative entropy instead of entropy allows it to be a measure of inequality rather than equality.
The Theil index can be transformed into an Atkinson index , which has a range between 0 and 1 (0% and 100%), where 0 indicates perfect equality and 1 (100%) indicates maximum inequality. (See Generalized entropy index for the transformation.)
The Theil index is derived from Shannon's measure of information entropy S, where entropy is a measure of randomness in a given set of information. In information theory, physics, and the Theil index, the general form of entropy is

S = −Σ_i p_i ln(p_i)

When looking at the distribution of income in a population, p_i is equal to the ratio of a particular individual's income to the total income of the entire population, p_i = x_i/(Nμ). This gives the observed entropy S_Theil of a population to be:

S_Theil = −Σ_{i=1}^{N} (x_i/(Nμ)) ln(x_i/(Nμ))

The Theil index T_T measures how far the observed entropy (S_Theil, which represents how randomly income is distributed) is from the highest possible entropy (S_max = ln(N), [note 3] which represents income being maximally distributed amongst individuals in the population, a distribution analogous to the most likely outcome of an infinite number of random coin tosses: an equal distribution of heads and tails). Therefore, the Theil index is the difference between the theoretical maximum entropy and the observed entropy:

T_T = S_max − S_Theil = ln(N) − S_Theil
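This relation is easy to confirm numerically; a minimal Python sketch (with hypothetical incomes and illustrative variable names) checks that the direct definition of T_T agrees with ln(N) minus the observed entropy:

```python
from math import log

x = [10, 20, 30, 100]                 # hypothetical incomes
N, mu = len(x), sum(x) / len(x)

# Observed entropy with p_i = x_i / (N * mu)
s_theil = -sum((xi / (N * mu)) * log(xi / (N * mu)) for xi in x)
# Direct definition of the Theil T index
t_direct = sum((xi / mu) * log(xi / mu) for xi in x) / N

assert abs((log(N) - s_theil) - t_direct) < 1e-12   # T_T = ln N - S_Theil
```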
When x is in units of population/species, S_Theil is a measure of biodiversity and is called the Shannon index. If the Theil index is used with x = population/species, it is a measure of inequality of population among a set of species, or "bio-isolation", as opposed to "wealth isolation".

The Theil index measures what is called redundancy in information theory. [5] It is the left-over "information space" that was not utilized to convey information, which reduces the effectiveness of the price signal. [original research?] The Theil index is a measure of the redundancy of income (or another measure of wealth) in some individuals. Redundancy in some individuals implies scarcity in others. A high Theil index indicates that total income is not distributed evenly among individuals, in the same way that an uncompressed text file does not assign a similar number of byte locations to each of the available unique byte characters.
According to the World Bank ,
"The best-known entropy measures are Theil’s T ( T T {\displaystyle T_{T}} ) and Theil’s L ( T L {\displaystyle T_{L}} ), both of which allow one to decompose inequality into the part that is due to inequality within areas (e.g. urban, rural) and the part that is due to differences between areas (e.g. the rural-urban income gap). Typically at least three-quarters of inequality in a country is due to within-group inequality, and the remaining quarter to between-group differences." [ 7 ]
If the population is divided into m subgroups, where s_k is the income share of group k, N_k is the population of group k, and T_k is the Theil T index computed within group k,

then Theil's T index is:

T_T = Σ_{k=1}^{m} s_k T_k + Σ_{k=1}^{m} s_k ln(s_k/(N_k/N))

The first sum is the within-group component and the second the between-group component.
For example, inequality within the United States is the average inequality within each state, weighted by state income, plus the inequality between states.
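As an illustration, the following Python sketch (with hypothetical urban and rural subgroups) verifies that the within-group and between-group terms of the decomposition sum to the overall index:

```python
from math import log

def theil_t(incomes):
    mu = sum(incomes) / len(incomes)
    return sum((x / mu) * log(x / mu) for x in incomes) / len(incomes)

urban, rural = [40, 50, 90], [10, 20, 30]   # hypothetical subgroups
everyone = urban + rural
N, total = len(everyone), sum(everyone)

within = between = 0.0
for group in (urban, rural):
    s = sum(group) / total                  # income share of the group
    within += s * theil_t(group)
    between += s * log(s / (len(group) / N))

assert abs(theil_t(everyone) - (within + between)) < 1e-12
```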
The decomposition of the Theil index, which identifies the share attributable to the between-region component, becomes a helpful tool for the positive analysis of regional inequality, as it suggests the relative importance of the spatial dimension of inequality. [8]
Both Theil's T and Theil's L are decomposable. The difference between them is based on the part of the outcomes distribution that each is used for. Indexes of inequality in the generalized entropy (GE) family are more sensitive to differences in income shares among the poor or among the rich depending on a parameter that defines the GE index. The smaller the parameter value for GE, the more sensitive it is to differences at the bottom of the distribution. [ 9 ]
The decomposability is a property of the Theil index which the more popular Gini coefficient does not offer. The Gini coefficient is more intuitive to many people since it is based on the Lorenz curve . However, it is not easily decomposable like the Theil.
In addition to a multitude of economic applications, the Theil index has been applied to assess the performance of irrigation systems [10] and the distribution of software metrics. [11] | https://en.wikipedia.org/wiki/Theil_index |
Theistic rationalism is a hybrid of natural religion , Christianity , and rationalism , in which rationalism is the predominant element. [ 1 ] According to Henry Clarence Thiessen, the concept of theistic rationalism first developed during the eighteenth century as a form of English and German Deism . [ 2 ] The term "theistic rationalism" occurs as early as 1856, in the English translation of a German work on recent religious history. [ 3 ] Some scholars have argued that the term properly describes the beliefs of some of the prominent Founding Fathers of the United States , including George Washington , John Adams , Benjamin Franklin , James Wilson , and Thomas Jefferson . [ 4 ] [ 5 ]
Theistic rationalists believe natural religion, Christianity, and rationalism typically coexist compatibly, with rational thought balancing the conflicts between the first two aspects. [ 4 ] They often assert that the primary role of a person's religion should be to bolster morality , a fixture of daily life. [ 4 ]
Theistic rationalists believe that God plays an active role in human life, rendering prayer effective. [ 4 ] [ 5 ] They accept parts of the Bible as divinely inspired, using reason as their criterion for what to accept or reject. [ 6 ] Their belief that God intervenes in human affairs and their approving attitude toward parts of the Bible distinguish theistic rationalists from Deists. [ 7 ]
Anthony Ashley-Cooper, 3rd Earl of Shaftesbury (1671–1713), has been described [ by whom? ] as an early theistic rationalist. [ 8 ] According to Stanley Grean,
Both Shaftesbury and the Deists wanted to preserve theology while freeing it from supernaturalism; both denied the occurrence of miracles; both called for free criticism of the Bible and questioned the absoluteness of its authority; both shared a distrust of sacramental and priestly religion; and both stressed the importance of morality in religion. However, despite this broad area of agreement, Shaftesbury did not identify himself unreservedly with the developing Deistic movement, and he expressed some serious doubts about certain aspects of it... The Deists were wrong if they relegated God to the status of a Prime Mover without subsequent contact with the universe; Deity must be conceived as being in constant and living interaction with the creation; otherwise the concept is "dry and barren." [ 9 ] | https://en.wikipedia.org/wiki/Theistic_rationalism |
The Themis programme is an ongoing European Space Agency programme, carried out by prime contractor ArianeGroup, that aims to develop a prototype reusable rocket first stage and to conduct demonstration flights. The prototype rockets will also be called Themis. As of 2025, two prototypes are being developed: Themis 1-Engine Hop (T1H) and Themis 1-Engine Evolution (T1E). Later, a three-engine variant (T3) will be built. [1][2]
Themis is expected to provide valuable information on the economic value of reusability for the European government space program and develop technologies for potential use on future European launch vehicles. [ 3 ] [ non-primary source needed ] Themis will be powered by the ESA's Prometheus rocket engine. [ 3 ] Eventually, lessons learned with Themis' development will pave the way for developing the European reusable launcher Ariane Next , which should first fly in the 2030s. [ 4 ]
Themis is distinct from a similar project CALLISTO under development by CNES , DLR , and JAXA .
Two possible landing sites have been mentioned in discussions surrounding the project: [ 5 ]
The estimated program timeline, as of December 2020, was as follows: [7]
Suborbital flight tests were slated to begin as early as 2023 at Europe's Spaceport in Kourou , French Guiana , but have been delayed. [ 5 ] | https://en.wikipedia.org/wiki/Themis_programme |
Theodor Förster (15 May 1910 – 20 May 1974) was a German physical chemist known for theoretical work on light-matter interaction in molecular systems such as fluorescence and resonant energy transfer .
Förster was born in Frankfurt am Main and studied physics and mathematics at the University of Frankfurt from 1929 to 1933. [ 1 ] He received his Ph.D. at the age of only 23 under Erwin Madelung in 1933. In the same year he joined the Nazi Party and the SA . [ 2 ] He then joined Karl-Friedrich Bonhoeffer as a research assistant at the Leipzig University , where he worked closely with Peter Debye , Werner Heisenberg , and Hans Kautzky . Förster obtained his habilitation in 1940 and became a lecturer at the Leipzig University . [ 3 ] Following his research and teaching activities in Leipzig, he became a professor at the Poznań University in occupied Poland (1942). [ 3 ] [ 4 ]
From 1947 to 1951 he worked at the Max Planck Institute for Physical Chemistry in Göttingen as a department head. In 1951, he became a professor at the University of Stuttgart. [3] He died of a heart attack in 1974. [4]
Among Förster's greatest achievements is his contribution to the understanding of FRET ( Förster resonance energy transfer ). The term Förster radius , which is related to the FRET phenomenon, is named after him. [ 3 ] He also proposed the Förster cycle to predict the acid dissociation constant of a photoacid . [ 3 ] He also discovered excimer formation in solutions of pyrene . [ 3 ] [ 5 ] | https://en.wikipedia.org/wiki/Theodor_Förster |
Freiherr Christian Johann Dietrich Theodor von Grotthuss (20 January 1785 – 26 March 1822) was a Baltic German scientist known for establishing the first theory of electrolysis in 1806 and formulating the first law of photochemistry in 1817. [ 1 ] His theory of electrolysis is considered the first description of the so-called Grotthuss mechanism . [ 2 ]
Grotthuss was born in 1785 in Leipzig , Electorate of Saxony , Holy Roman Empire , during an extended stay of his parents away from their home in northern Grand Duchy of Lithuania . He showed interest in natural sciences and went to study first in Leipzig and later in Paris at the École Polytechnique . Several renowned scientists taught at the École Polytechnique at that time, including Antoine François, comte de Fourcroy , Claude Louis Berthollet and Louis Nicolas Vauquelin .
Because of tensions in the relations between Russia and France, Grotthuss had to leave for Italy, where he stayed at Naples for one year. The discovery of the first electric cell in 1800 by Alessandro Volta provided scientists with a source of electricity, which was used in various laboratory experiments around Europe. The electrolysis of water, acids and salt solutions was reported, but a good explanation was missing. Grotthuss actively contributed to this area, both in terms of electrolysis experiments and their interpretation. During his stay in Italy, he published his work on electrolysis in 1806. [1] His idea that the charge is not transported by the movement of particles but by the breaking and reformation of bonds was the first basically correct concept for charge transport in electrolytes; it is still valid for the charge transport in water, and the current proton hopping mechanism is a modified version of the original Grotthuss mechanism. [3]
The following two years Grotthuss spent in Rome , some other Italian cities, and Paris, and then went back to Russia via Munich and Vienna . From 1808 on he lived at the estate of his mother in northern Lithuania. There he conducted research on electricity and light with the limited research equipment he could assemble. Grotthuss committed suicide in the spring of 1822 during a depression caused by health problems. [ 4 ] | https://en.wikipedia.org/wiki/Theodor_Grotthuss |
Theodore Cohen (May 11, 1929 – December 13, 2017) was an American organic chemist and chemistry professor at University of Pittsburgh . [ 1 ] [ 2 ] [ 3 ] [ 4 ] He is known for his research on organic chemistry , [ 1 ] and particularly on organosulfur compounds , [ 3 ] [ 5 ] on organometallic chemistry , [ 4 ] [ 5 ] and on the synthesis of phenols . [ 6 ]
Cohen was born in Boston, the son of a furrier from England, and was the first in his family with a college education. [ 3 ] He graduated from Tufts University in 1951. [ 1 ] [ 2 ] He was guided towards science instead of medicine in a chance encounter with Isaac Asimov while working a summer job as a waiter, [ 3 ] and completed his Ph.D. at the University of Southern California in 1955, [ 1 ] [ 2 ] helping to support his graduate studies by working as an extra in the movies of Katharine Hepburn and Spencer Tracy . [ 3 ] His doctoral research, supervised by Jerome A. Berson , concerned the synthesis of alkaloids found in ipecac , and the chemical properties of pyridines . [ 5 ]
After postdoctoral research as a Fulbright scholar at the University of Glasgow , working with Derek Barton , [ 6 ] he joined the University of Pittsburgh chemistry faculty in 1956, [ 1 ] and became one of the first professors at the university to bring in federal grant money for his research. [ 1 ] [ 4 ] He retired as a professor emeritus in 1999, but continued to do research in his laboratory, often working 80-hour weeks. [ 1 ] [ 5 ]
At the University of Pittsburgh, he was the doctoral advisor to over 40 students. [ 4 ]
He was the 2009 winner of the Pittsburgh Award of the Pittsburgh section of the American Chemical Society . [ 4 ] [ 5 ]
Cohen worked at a holiday camp in Massachusetts while he was a student at Tufts. While waiting tables, he courted Pearl Silverman, a bookish woman from New York. The biochemist and author, Isaac Asimov , also vacationed there and became friends with Cohen. Observing the romance, Asimov wrote songs about it for the camp show, "Poor Ted's in bed. He's lonely but well read"; the couple were later married and went on to have two children, Bret and Rima. [ 3 ]
He died of chronic lymphocytic leukemia . [ 3 ] | https://en.wikipedia.org/wiki/Theodore_Cohen_(chemist) |
In mathematics and formal logic , a theorem is a statement that has been proven , or can be proven. [ a ] [ 2 ] [ 3 ] The proof of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems.
In mainstream mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), or of a less powerful theory, such as Peano arithmetic . [ b ] Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as theorems only the most important results, and use the terms lemma , proposition and corollary for less important theorems.
In mathematical logic , the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language . A theory consists of some basis statements called axioms , and some deducing rules (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. [ c ] This formalization led to proof theory , which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is they cannot be proved inside the theory).
As the axioms are often abstractions of properties of the physical world , theorems may be considered as expressing some truth, but in contrast to the notion of a scientific law , which is experimental , the justification of the truth of a theorem is purely deductive . [ 6 ] [ d ] A conjecture is a tentative proposition that may evolve to become a theorem if proven true.
Until the end of the 19th century and the foundational crisis of mathematics , all mathematical theories were built from a few basic properties that were considered as self-evident; for example, the facts that every natural number has a successor, and that there is exactly one line that passes through two given distinct points. These basic properties that were considered as absolutely evident were called postulates or axioms ; for example Euclid's postulates . All theorems were proved by using implicitly or explicitly these basic properties, and, because of the evidence of these basic properties, a proved theorem was considered as a definitive truth, unless there was an error in the proof. For example, the sum of the interior angles of a triangle equals 180°, and this was considered as an undoubtable fact.
One aspect of the foundational crisis of mathematics was the discovery of non-Euclidean geometries that do not lead to any contradiction, although, in such geometries, the sum of the angles of a triangle is different from 180°. So, the property "the sum of the angles of a triangle equals 180°" is either true or false, depending whether Euclid's fifth postulate is assumed or denied. Similarly, the use of "evident" basic properties of sets leads to the contradiction of Russell's paradox . This has been resolved by elaborating the rules that are allowed for manipulating sets.
This crisis has been resolved by revisiting the foundations of mathematics to make them more rigorous . In these new foundations, a theorem is a well-formed formula of a mathematical theory that can be proved from the axioms and inference rules of the theory. So, the above theorem on the sum of the angles of a triangle becomes: Under the axioms and inference rules of Euclidean geometry , the sum of the interior angles of a triangle equals 180° . Similarly, Russell's paradox disappears because, in an axiomatized set theory, the set of all sets cannot be expressed with a well-formed formula. More precisely, if the set of all sets can be expressed with a well-formed formula, this implies that the theory is inconsistent , and every well-formed assertion, as well as its negation, is a theorem.
In this context, the validity of a theorem depends only on the correctness of its proof. It is independent from the truth, or even the significance of the axioms. This does not mean that the significance of the axioms is uninteresting, but only that the validity of a theorem is independent from the significance of the axioms. This independence may be useful by allowing the use of results of some area of mathematics in apparently unrelated areas.
An important consequence of this way of thinking about mathematics is that it allows defining mathematical theories and theorems as mathematical objects, and proving theorems about them. Examples are Gödel's incompleteness theorems. In particular, there are well-formed assertions that can be proved not to be theorems of the ambient theory, although they can be proved in a wider theory. An example is Goodstein's theorem, which can be stated in Peano arithmetic, but is proved to be not provable in Peano arithmetic. However, it is provable in some more general theories, such as Zermelo–Fraenkel set theory.
Many mathematical theorems are conditional statements, whose proofs deduce conclusions from conditions known as hypotheses or premises . In light of the interpretation of proof as justification of truth, the conclusion is often viewed as a necessary consequence of the hypotheses. Namely, that the conclusion is true in case the hypotheses are true—without any further assumptions. However, the conditional could also be interpreted differently in certain deductive systems , depending on the meanings assigned to the derivation rules and the conditional symbol (e.g., non-classical logic ).
Although theorems can be written in a completely symbolic form (e.g., as propositions in propositional calculus ), they are often expressed informally in a natural language such as English for better readability. The same is true of proofs, which are often expressed as logically organized and clearly worded informal arguments, intended to convince readers of the truth of the statement of the theorem beyond any doubt, and from which a formal symbolic proof can in principle be constructed.
In addition to the better readability, informal arguments are typically easier to check than purely symbolic ones—indeed, many mathematicians would express a preference for a proof that not only demonstrates the validity of a theorem, but also explains in some way why it is obviously true. In some cases, one might even be able to substantiate a theorem by using a picture as its proof.
Because theorems lie at the core of mathematics, they are also central to its aesthetics . Theorems are often described as being "trivial", or "difficult", or "deep", or even "beautiful". These subjective judgments vary not only from person to person, but also with time and culture: for example, as a proof is obtained, simplified or better understood, a theorem that was once difficult may become trivial. [ 7 ] On the other hand, a deep theorem may be stated simply, but its proof may involve surprising and subtle connections between disparate areas of mathematics. Fermat's Last Theorem is a particularly well-known example of such a theorem. [ 8 ]
Logically , many theorems are of the form of an indicative conditional : If A, then B . Such a theorem does not assert B — only that B is a necessary consequence of A . In this case, A is called the hypothesis of the theorem ("hypothesis" here means something very different from a conjecture ), and B the conclusion of the theorem. The two together (without the proof) are called the proposition or statement of the theorem (e.g. " If A, then B " is the proposition ). Alternatively, A and B can be also termed the antecedent and the consequent , respectively. [ 9 ] The theorem "If n is an even natural number , then n /2 is a natural number" is a typical example in which the hypothesis is " n is an even natural number", and the conclusion is " n /2 is also a natural number".
In order for a theorem to be proved, it must be in principle expressible as a precise, formal statement. However, theorems are usually expressed in natural language rather than in a completely symbolic form—with the presumption that a formal statement can be derived from the informal one.
It is common in mathematics to choose a number of hypotheses within a given language and declare that the theory consists of all statements provable from these hypotheses. These hypotheses form the foundational basis of the theory and are called axioms or postulates. The field of mathematics known as proof theory studies formal languages, axioms and the structure of proofs.
Some theorems are " trivial ", in the sense that they follow from definitions, axioms, and other theorems in obvious ways and do not contain any surprising insights. Some, on the other hand, may be called "deep", because their proofs may be long and difficult, involve areas of mathematics superficially distinct from the statement of the theorem itself, or show surprising connections between disparate areas of mathematics. [ 10 ] A theorem might be simple to state and yet be deep. An excellent example is Fermat's Last Theorem , [ 8 ] and there are many other examples of simple yet deep theorems in number theory and combinatorics , among other areas.
Other theorems have a known proof that cannot easily be written down. The most prominent examples are the four color theorem and the Kepler conjecture . Both of these theorems are only known to be true by reducing them to a computational search that is then verified by a computer program. Initially, many mathematicians did not accept this form of proof, but it has become more widely accepted. The mathematician Doron Zeilberger has even gone so far as to claim that these are possibly the only nontrivial results that mathematicians have ever proved. [ 11 ] Many mathematical theorems can be reduced to more straightforward computation, including polynomial identities, trigonometric identities [ e ] and hypergeometric identities. [ 12 ] [ page needed ]
Theorems in mathematics and theories in science are fundamentally different in their epistemology . A scientific theory cannot be proved; its key attribute is that it is falsifiable , that is, it makes predictions about the natural world that are testable by experiments . Any disagreement between prediction and experiment demonstrates the incorrectness of the scientific theory, or at least limits its accuracy or domain of validity. Mathematical theorems, on the other hand, are purely abstract formal statements: the proof of a theorem cannot involve experiments or other empirical evidence in the same way such evidence is used to support scientific theories. [ 6 ]
Nonetheless, there is some degree of empiricism and data collection involved in the discovery of mathematical theorems. By establishing a pattern, sometimes with the use of a powerful computer, mathematicians may have an idea of what to prove, and in some cases even a plan for how to set about doing the proof. It is also possible to find a single counter-example and so establish the impossibility of a proof for the proposition as-stated, and possibly suggest restricted forms of the original proposition that might have feasible proofs.
For example, both the Collatz conjecture and the Riemann hypothesis are well-known unsolved problems; they have been extensively studied through empirical checks, but remain unproven. The Collatz conjecture has been verified for start values up to about 2.88 × 10^18. The Riemann hypothesis has been verified to hold for the first 10 trillion non-trivial zeroes of the zeta function. Although most mathematicians can tolerate supposing that the conjecture and the hypothesis are true, neither of these propositions is considered proved.
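Such empirical verification is easy to sketch in code. A minimal Python routine (checking only a small range, far short of the records quoted above) for the Collatz conjecture:

```python
def collatz_reaches_one(n, limit=10**6):
    """Iterate the Collatz map from n; return True once 1 is reached."""
    for _ in range(limit):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# Empirical check over a small range (the published record is ~2.88e18).
assert all(collatz_reaches_one(n) for n in range(1, 100_000))
```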
Such evidence does not constitute proof. For example, the Mertens conjecture is a statement about natural numbers that is now known to be false, but no explicit counterexample (i.e., a natural number n for which the Mertens function M(n) equals or exceeds the square root of n) is known: all numbers less than 10^14 have the Mertens property, and the smallest number that does not have this property is only known to be less than the exponential of 1.59 × 10^40, which is approximately 10 to the power 6.9 × 10^39. Since the number of particles in the universe is generally considered less than 10 to the power 100 (a googol), there is no hope to find an explicit counterexample by exhaustive search.
The word "theory" also exists in mathematics, to denote a body of mathematical axioms, definitions and theorems, as in, for example, group theory (see mathematical theory ). There are also "theorems" in science, particularly physics, and in engineering, but they often have statements and proofs in which physical assumptions and intuition play an important role; the physical axioms on which such "theorems" are based are themselves falsifiable.
A number of different terms for mathematical statements exist; these terms indicate the role statements play in a particular subject. The distinction between different terms is sometimes rather arbitrary, and the usage of some terms has evolved over time.
Other terms may also be used for historical or customary reasons, for example:
A few well-known theorems have even more idiosyncratic names, for example, the division algorithm , Euler's formula , and the Banach–Tarski paradox .
A theorem and its proof are typically laid out as follows:
The end of the proof may be signaled by the letters Q.E.D. ( quod erat demonstrandum ) or by one of the tombstone marks, such as "□" or "∎", meaning "end of proof", introduced by Paul Halmos following their use in magazines to mark the end of an article. [ 15 ]
The exact style depends on the author or publication. Many publications provide instructions or macros for typesetting in the house style .
It is common for a theorem to be preceded by definitions describing the exact meaning of the terms used in the theorem. It is also common for a theorem to be preceded by a number of propositions or lemmas which are then used in the proof. However, lemmas are sometimes embedded in the proof of a theorem, either with nested proofs, or with their proofs presented after the proof of the theorem.
Corollaries to a theorem are either presented between the theorem and the proof, or directly after the proof. Sometimes, corollaries have proofs of their own that explain why they follow from the theorem.
It has been estimated that over a quarter of a million theorems are proved every year. [ 16 ]
The well-known aphorism , "A mathematician is a device for turning coffee into theorems" , is probably due to Alfréd Rényi , although it is often attributed to Rényi's colleague Paul Erdős (and Rényi may have been thinking of Erdős), who was famous for the many theorems he produced, the number of his collaborations, and his coffee drinking. [ 17 ]
The classification of finite simple groups is regarded by some to be the longest proof of a theorem. It comprises tens of thousands of pages in 500 journal articles by some 100 authors. These papers are together believed to give a complete proof, and several ongoing projects hope to shorten and simplify this proof. [ 18 ] Another theorem of this type is the four color theorem whose computer generated proof is too long for a human to read. It is among the longest known proofs of a theorem whose statement can be easily understood by a layman. [ citation needed ]
In mathematical logic, a formal theory is a set of sentences within a formal language. A sentence is a well-formed formula with no free variables. A sentence that is a member of a theory is one of its theorems, and the theory is the set of its theorems. Usually a theory is understood to be closed under the relation of logical consequence. Some accounts define a theory to be closed under the semantic consequence relation (⊨), while others define it to be closed under the syntactic consequence, or derivability, relation (⊢). [19][20][21][22][23][24][25][26][27][28]
For a theory to be closed under a derivability relation, it must be associated with a deductive system that specifies how the theorems are derived. The deductive system may be stated explicitly, or it may be clear from the context. The closure of the empty set under the relation of logical consequence yields the set that contains just those sentences that are the theorems of the deductive system.
In the broad sense in which the term is used within logic, a theorem does not have to be true, since the theory that contains it may be unsound relative to a given semantics, or relative to the standard interpretation of the underlying language. A theory that is inconsistent has all sentences as theorems.
The definition of theorems as sentences of a formal language is useful within proof theory , which is a branch of mathematics that studies the structure of formal proofs and the structure of provable formulas. It is also important in model theory , which is concerned with the relationship between formal theories and structures that are able to provide a semantics for them through interpretation .
Although theorems may be uninterpreted sentences, in practice mathematicians are more interested in the meanings of the sentences, i.e. in the propositions they express. What makes formal theorems useful and interesting is that they may be interpreted as true propositions and their derivations may be interpreted as a proof of their truth. A theorem whose interpretation is a true statement about a formal system (as opposed to within a formal system) is called a metatheorem .
Some important theorems in mathematical logic are:
The concept of a formal theorem is fundamentally syntactic, in contrast to the notion of a true proposition, which introduces semantics . Different deductive systems can yield other interpretations, depending on the presumptions of the derivation rules (i.e. belief , justification or other modalities ). The soundness of a formal system depends on whether or not all of its theorems are also validities . A validity is a formula that is true under any possible interpretation (for example, in classical propositional logic, validities are tautologies ). A formal system is considered semantically complete when all of its theorems are also tautologies. | https://en.wikipedia.org/wiki/Theorem |
In mathematics, the theorem of the cube is a condition for a line bundle over a product of three complete varieties to be trivial. It was a principle discovered, in the context of linear equivalence, by the Italian school of algebraic geometry. The final version of the theorem of the cube was first published by Lang (1959), who credited it to André Weil. A discussion of the history has been given by Kleiman (2005). A treatment by means of sheaf cohomology, and a description in terms of the Picard functor, was given by Mumford (2008).
The theorem states that for any complete varieties U , V and W over an algebraically closed field, and given points u , v and w on them, any invertible sheaf L which has a trivial restriction to each of U × V × { w }, U × { v } × W , and { u } × V × W , is itself trivial. (Mumford p. 55; the result there is slightly stronger, in that one of the varieties need not be complete and can be replaced by a connected scheme.)
On a ringed space X , an invertible sheaf L is trivial if isomorphic to O X , as an O X -module. If the base X is a complex manifold , then an invertible sheaf is (the sheaf of sections of) a holomorphic line bundle , and trivial means holomorphically equivalent to a trivial bundle , not just topologically equivalent.
Weil's result has been restated in terms of biextensions , a concept now generally used in the duality theory of abelian varieties . [ 1 ]
The theorem of the square ( Lang 1959 ) ( Mumford 2008 , p.59) is a corollary (also due to Weil) applying to an abelian variety A . One version of it states that the function φ L taking x ∈ A to T * x L ⊗ L −1 is a group homomorphism from A to Pic ( A ) (where T * x is translation by x on line bundles). | https://en.wikipedia.org/wiki/Theorem_of_the_cube |
In civil engineering and structural analysis, Clapeyron's theorem of three moments (named after Émile Clapeyron) is a relationship among the bending moments at three consecutive supports of a horizontal beam.
Let A, B and C be the three consecutive points of support, and denote by l the length of span AB and by l′ the length of span BC; let w and w′ be the weight per unit length in these segments. Then [1] the bending moments M_A, M_B, M_C at the three points are related by:

M_A l + 2 M_B (l + l′) + M_C l′ = −(w l³)/4 − (w′ l′³)/4

This equation can also be written as [2]

M_A l + 2 M_B (l + l′) + M_C l′ = −(6 a1 x1)/l − (6 a2 x2)/l′

where a1 is the area of the bending moment diagram due to vertical loads on AB, a2 is the area due to loads on BC, x1 is the distance from A to the centroid of the bending moment diagram of beam AB, and x2 is the distance from C to the centroid of the bending moment diagram of beam BC.
The second equation is more general as it does not require that the weight of each segment be distributed uniformly.
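As a worked example of the uniform-load form (a sketch with illustrative numbers, not from the source): for a two-span beam with simply supported ends, M_A = M_C = 0, so the equation can be solved directly for the moment over the middle support.

```python
def middle_support_moment(l1, l2, w1, w2):
    """Clapeyron's equation with M_A = M_C = 0 (hogging moments negative):
    2*M_B*(l1 + l2) = -(w1*l1**3 + w2*l2**3)/4, solved for M_B."""
    return -(w1 * l1**3 + w2 * l2**3) / (8 * (l1 + l2))

# Two equal 6 m spans carrying 10 kN/m: the classic result M_B = -w*l**2/8.
print(middle_support_moment(6, 6, 10, 10))   # -45.0 kN*m
```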
Christian Otto Mohr's theorem [ 3 ] can be used to derive the three moment theorem [ 4 ] (TMT).
The change in slope of a deflection curve between two points of a beam is equal to the area of the M/EI diagram between those two points.(Figure 02)
Consider two points k1 and k2 on a beam . The deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03)
The three moment equation expresses the relation between the bending moments at three successive supports of a continuous beam subject to loading on two adjacent spans, with or without settlement of the supports.
According to Figure 04,

PB'Q is the tangent drawn at B' to the final elastic curve A'B'C' of the beam ABC. RB'S is a horizontal line drawn through B'.

Consider triangles RB'P and QB'S.
From (1), (2), and (3),
Draw the M/EI diagram to find the PA' and QC'.
From Mohr's Second Theorem PA' = First moment of area of M/EI diagram between A and B about A.
QC' = First moment of area of M/EI diagram between B and C about C.
Substituting PA' and QC' into equation (a), the three moment theorem (TMT) can be obtained. | https://en.wikipedia.org/wiki/Theorem_of_three_moments |
In algebra, the theorem of transition is said to hold between commutative rings A ⊂ B if [1][2]
Given commutative rings A ⊂ B such that B dominates A and, for each maximal ideal m of A, length_B(B/mB) is finite, the natural inclusion A → B is a faithfully flat ring homomorphism if and only if the theorem of transition holds between A ⊂ B. [2]
| https://en.wikipedia.org/wiki/Theorem_of_transition |
The theorem on friends and strangers is a mathematical theorem in an area of mathematics called Ramsey theory .
Suppose a party has six people. Consider any two of them. They might be meeting for the first time—in which case we will call them mutual strangers; or they might have met before—in which case we will call them mutual acquaintances. The theorem says:
A proof of the theorem requires nothing but a three-step logic. It is convenient to phrase the problem in graph-theoretic language.
Suppose a graph has 6 vertices and every pair of (distinct) vertices is joined by an edge. Such a graph is called a complete graph (because there cannot be any more edges). A complete graph on n vertices is denoted by the symbol K_n.

Now take a K_6. It has 15 edges in all. Let the 6 vertices stand for the 6 people in our party. Let the edges be coloured red or blue depending on whether the two people represented by the vertices connected by the edge are mutual strangers or mutual acquaintances, respectively. The theorem now asserts:
Choose any one vertex; call it P. There are five edges leaving P. They are each coloured red or blue. The pigeonhole principle says that at least three of them must be of the same colour; for if there are fewer than three of one colour, say red, then there are at least three that are blue.
Let A , B , C be the other ends of these three edges, all of the same colour, say blue. If any one of AB , BC , CA is blue, then that edge together with the two edges from P to the edge's endpoints forms a blue triangle. If none of AB , BC , CA is blue, then all three edges are red and we have a red triangle, namely, ABC .
The utter simplicity of this argument, which so powerfully produces a very interesting conclusion, is what makes the theorem appealing. In 1930, in a paper entitled 'On a Problem of Formal Logic,' Frank P. Ramsey proved a very general theorem (now known as Ramsey's theorem ) of which this theorem is a simple case. This theorem of Ramsey forms the foundation of the area known as Ramsey theory in combinatorics .
The conclusion to the theorem does not hold if we replace the party of six people by a party of fewer than six. To show this, we give a coloring of K_5 with red and blue that does not contain a triangle with all edges the same color. We draw K_5 as a pentagon surrounding a star (a pentagram). We color the edges of the pentagon red and the edges of the star blue.
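Both halves of this argument, the K_5 counterexample and the claim for K_6, are small enough to confirm by exhaustive search. An illustrative Python sketch (checking every red/blue colouring):

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """Check whether a red/blue edge colouring of K_n (edges listed in
    lexicographic order) contains a monochromatic triangle."""
    colour = dict(zip(combinations(range(n), 2), colouring))
    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
               for a, b, c in combinations(range(n), 3))

def forces_mono_triangle(n):
    """True if every 2-colouring of K_n has a monochromatic triangle."""
    m = n * (n - 1) // 2                      # number of edges in K_n
    return all(has_mono_triangle(n, c) for c in product((0, 1), repeat=m))

print(forces_mono_triangle(5))  # False: e.g. red pentagon, blue pentagram
print(forces_mono_triangle(6))  # True, so R(3, 3) = 6
```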
Thus, 6 is the smallest number for which we can claim the conclusion of the theorem. In Ramsey theory, we write this fact as: R(3, 3) = 6. | https://en.wikipedia.org/wiki/Theorem_on_friends_and_strangers |
Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy 's Almagest , although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model . Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws . The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy , astrometry , astrochemistry , and astrophysics . Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation , considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe , but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
"Contrary to the belief generally held by laboratory physicists, astronomy has contributed to the growth of our understanding of physics." [ 1 ] Physics has helped in the elucidation of astronomical phenomena, and astronomy has helped in the elucidation of physical phenomena:
Integrating astronomy with physics involves:
The aim of astronomy is to understand the physics and chemistry from the laboratory that is behind cosmic events so as to enrich our understanding of the cosmos and of these sciences as well. [ 1 ]
Astrochemistry , the overlap of the disciplines of astronomy and chemistry , is the study of the abundance and reactions of chemical elements and molecules in space, and their interaction with radiation. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds , is of special interest because it is from these clouds that solar systems form.
Infrared astronomy, for example, has revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polycyclic aromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium (2H) and isotopes of carbon, nitrogen, and oxygen that are very rare on Earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying carbon-rich red giant stars).
The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H3+ ion. Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, the consequences for stellar evolution, as well as stellar 'generations'. Indeed, the nuclear reactions in stars produce every naturally occurring chemical element. As the stellar 'generations' advance, the mass of the newly formed elements increases. A first-generation star uses elemental hydrogen (H) as a fuel source and produces helium (He). Hydrogen is the most abundant element, and it is the basic building block for all other elements as its nucleus has only one proton. Gravitational pull toward the center of a star creates massive amounts of heat and pressure, which cause nuclear fusion. Through this process of merging nuclear mass, heavier elements are formed. Lithium, carbon, nitrogen and oxygen are examples of elements that form in stellar fusion. After many stellar generations, very heavy elements are formed (e.g. iron and lead).
Theoretical astronomers use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star ) and computational numerical simulations . Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen. [ 2 ] [ 3 ]
Astronomy theorists endeavor to create theoretical models and figure out the observational consequences of those models. This helps observers look for data that can refute a model or help in choosing between several alternate or conflicting models. [ citation needed ]
Theorists also try to generate or modify models to take into account new data. Consistent with the general scientific approach, in the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. [ citation needed ]
Topics studied by theoretical astronomers include:
Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro) physics and the study of gravitational waves .
Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model are the Big Bang , Cosmic inflation , dark matter , and fundamental theories of physics .
A few examples of this process:
Dark matter and dark energy are the current leading topics in astronomy, [ 4 ] as their discovery and controversy originated during the study of the galaxies.
Of the topics approached with the tools of theoretical physics, particular consideration is often given to stellar photospheres, stellar atmospheres, the solar atmosphere, planetary atmospheres, gaseous nebulae, nonstationary stars, and the interstellar medium. Special attention is given to the internal structure of stars. [ 5 ]
The observation of a neutrino burst within 3 h of the associated optical burst from Supernova 1987A in the Large Magellanic Cloud (LMC) gave theoretical astrophysicists an opportunity to test that neutrinos and photons follow the same trajectories in the gravitational field of the galaxy. [ 6 ]
A general form of the first law of thermodynamics for stationary black holes can be derived from the microcanonical functional integral for the gravitational field. [ 7 ] The boundary data
are the thermodynamical extensive variables, including the energy and angular momentum of the system. [ 7 ] For the simpler case of nonrelativistic mechanics as is often observed in astrophysical phenomena associated with a black hole event horizon, the density of states can be expressed as a real-time functional integral and subsequently used to deduce Feynman's imaginary-time functional integral for the canonical partition function. [ 7 ]
Reaction equations and large reaction networks are an important tool in theoretical astrochemistry, especially as applied to the gas-grain chemistry of the interstellar medium. [ 8 ] Theoretical astrochemistry offers the prospect of being able to place constraints on the inventory of organics for exogenous delivery to the early Earth.
"An important goal for theoretical astrochemistry is to elucidate which organics are of true interstellar origin, and to identify possible interstellar precursors and reaction pathways for those molecules which are the result of aqueous alterations." [ 9 ] One of the ways this goal can be achieved is through the study of carbonaceous material as found in some meteorites. Carbonaceous chondrites (such as C1 and C2) include organic compounds such as amines and amides; alcohols, aldehydes, and ketones; aliphatic and aromatic hydrocarbons; sulfonic and phosphonic acids; amino, hydroxycarboxylic, and carboxylic acids; purines and pyrimidines; and kerogen -type material. [ 9 ] The organic inventories of primitive meteorites display large and variable enrichments in deuterium, carbon-13 ( 13 C), and nitrogen-15 ( 15 N), which is indicative of their retention of an interstellar heritage. [ 9 ]
The chemical composition of comets should reflect both the conditions in the outer solar nebula some 4.5 × 10^9 years ago and the nature of the natal interstellar cloud from which the Solar System was formed. [ 10 ] While comets retain a strong signature of their ultimate interstellar origins, significant processing must have occurred in the protosolar nebula. [ 10 ] Early models of coma chemistry showed that reactions can occur rapidly in the inner coma, where the most important reactions are proton transfer reactions. [ 10 ] Such reactions can potentially cycle deuterium between the different coma molecules, altering the initial D/H ratios released from the nuclear ice, and necessitating the construction of accurate models of cometary deuterium chemistry, so that gas-phase coma observations can be safely extrapolated to give nuclear D/H ratios. [ 10 ]
While the lines of conceptual understanding between theoretical astrochemistry and theoretical chemical astronomy often become blurred so that the goals and tools are the same, there are subtle differences between the two sciences. Theoretical chemistry as applied to astronomy seeks to find new ways to observe chemicals in celestial objects, for example. This often leads to theoretical astrochemistry having to seek new ways to describe or explain those same observations.
The new era of chemical astronomy had to await the clear enunciation of the chemical principles of spectroscopy and the applicable theory. [ 11 ]
Supernova radioactivity dominates light curves and the chemistry of dust condensation is also dominated by radioactivity. [ 12 ] Dust is usually either carbon or oxides depending on which is more abundant, but Compton electrons dissociate the CO molecule in about one month. [ 12 ] The new chemical astronomy of supernova solids depends on the supernova radioactivity:
Like theoretical chemical astronomy, the lines of conceptual understanding between theoretical astrophysics and theoretical physical astronomy are often blurred, but, again, there are subtle differences between these two sciences. Theoretical physics as applied to astronomy seeks to find new ways to observe physical phenomena in celestial objects and what to look for, for example. This often leads to theoretical astrophysics having to seek new ways to describe or explain those same observations, with hopefully a convergence to improve our understanding of the local environment of Earth and the physical Universe .
Nuclear matrix elements of relevant operators as extracted from data and from a shell-model and theoretical approximations both for the two-neutrino and neutrinoless modes of decay are used to explain the weak interaction and nuclear structure aspects of nuclear double beta decay. [ 13 ]
New neutron-rich isotopes, 34 Ne, 37 Na, and 43 Si have been produced unambiguously for the first time, and convincing evidence for the particle instability of three others, 33 Ne, 36 Na, and 39 Mg has been obtained. [ 14 ] These experimental findings compare with recent theoretical predictions. [ 14 ]
Until recently, all the time units that appear natural to us were defined by astronomical phenomena:
High precision appears problematic:
Some of these time standard scales are sidereal time , solar time , and universal time .
From the Système International (SI) comes the second, defined as the duration of 9 192 631 770 cycles of a particular hyperfine structure transition in the ground state of caesium-133 (133Cs). [ 15 ] For practical usability a device is required that attempts to produce the SI second (s), such as an atomic clock. But not all such clocks agree. The weighted mean of many clocks distributed over the whole Earth defines the Temps Atomique International, i.e., the Atomic Time TAI. [ 15 ] From the general theory of relativity, the time measured depends on the altitude on Earth and the spatial velocity of the clock, so that TAI refers to a location on sea level that rotates with the Earth. [ 15 ]
Since the Earth's rotation is irregular, any time scale derived from it such as Greenwich Mean Time led to recurring problems in predicting the Ephemerides for the positions of the Moon , Sun , planets and their natural satellites . [ 15 ] In 1976 the International Astronomical Union (IAU) resolved that the theoretical basis for ephemeris time (ET) was wholly non-relativistic, and therefore, beginning in 1984 ephemeris time would be replaced by two further time scales with allowance for relativistic corrections. Their names, assigned in 1979, [ 16 ] emphasized their dynamical nature or origin, Barycentric Dynamical Time (TDB) and Terrestrial Dynamical Time (TDT). Both were defined for continuity with ET and were based on what had become the standard SI second, which in turn had been derived from the measured second of ET.
During the period 1991–2006, the TDB and TDT time scales were both redefined and replaced, owing to difficulties or inconsistencies in their original definitions. The current fundamental relativistic time scales are Geocentric Coordinate Time (TCG) and Barycentric Coordinate Time (TCB). Both of these have rates that are based on the SI second in respective reference frames (and hypothetically outside the relevant gravity well), but due to relativistic effects, their rates would appear slightly faster when observed at the Earth's surface, and therefore diverge from local Earth-based time scales using the SI second at the Earth's surface. [ 17 ]
The currently defined IAU time scales also include Terrestrial Time (TT) (replacing TDT, and now defined as a re-scaling of TCG, chosen to give TT a rate that matches the SI second when observed at the Earth's surface), [ 18 ] and a redefined Barycentric Dynamical Time (TDB), a re-scaling of TCB to give TDB a rate that matches the SI second at the Earth's surface.
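As a rough numerical illustration of how small, yet systematic, these rate differences are (a sketch, not from the source; it assumes the IAU defining constant L_G ≈ 6.969290134 × 10⁻¹⁰ that fixes the rate of TT relative to TCG), the accumulated divergence between TCG and TT amounts to roughly two hundredths of a second per year:

```python
# Sketch: accumulated divergence between TCG and TT over one Julian year.
# L_G is assumed here to be the IAU defining constant for the TT/TCG rate ratio.
L_G = 6.969290134e-10           # dimensionless (assumed value)
JULIAN_YEAR = 365.25 * 86400.0  # seconds

divergence = L_G * JULIAN_YEAR
print(f"TCG runs ahead of TT by about {divergence * 1e3:.0f} ms per year")  # ~22 ms
```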
For a star , the dynamical time scale is defined as the time that would be taken for a test particle released at the surface to fall under the star 's potential to the centre point, if pressure forces were negligible. In other words, the dynamical time scale measures the amount of time it would take a certain star to collapse in the absence of any internal pressure . By appropriate manipulation of the equations of stellar structure this can be found to be
τ_dynamical ≃ R / v = √( R³ / (2GM) ) ∼ 1 / √(Gρ)
where R is the radius of the star, G is the gravitational constant, M is the mass of the star, ρ is the gas density of the star (assumed constant here) and v is the escape velocity. As an example, the Sun's dynamical time scale is approximately 1133 seconds. Note that the actual time it would take a star like the Sun to collapse is greater because internal pressure is present.
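As a quick numerical check (a sketch, not from the source; the gravitational constant and the solar mass and radius below are assumed textbook values), the formula reproduces the quoted figure of roughly 1133 seconds:

```python
import math

# Sketch: solar dynamical time scale, tau ~ sqrt(R^3 / (2 G M)).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (assumed)
M_SUN = 1.989e30    # solar mass, kg (assumed)
R_SUN = 6.957e8     # solar radius, m (assumed)

tau = math.sqrt(R_SUN**3 / (2 * G * M_SUN))
print(f"Solar dynamical time scale: about {tau:.0f} s")  # ~1.1e3 s, close to 1133 s
```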
The 'fundamental' oscillatory mode of a star will be at approximately the dynamical time scale. Oscillations at this frequency are seen in Cepheid variables .
The basic characteristics of applied astronomical navigation are
The superiority of satellite navigation systems to astronomical navigation is currently undeniable, especially with the development and use of GPS/NAVSTAR. [ 19 ] This global satellite system
Geodetic astronomy is the application of astronomical methods to the networks and technical projects of geodesy for
Astronomical algorithms are the algorithms used to calculate ephemerides , calendars , and positions (as in celestial navigation or satellite navigation ).
Many astronomical and navigational computations use the Figure of the Earth as a surface representing the Earth.
The International Earth Rotation and Reference Systems Service (IERS), formerly the International Earth Rotation Service, is the body responsible for maintaining global time and reference frame standards, notably through its Earth Orientation Parameter (EOP) and International Celestial Reference System (ICRS) groups.
The Deep Space Network , or DSN , is an international network of large antennas and communication facilities that supports interplanetary spacecraft missions, and radio and radar astronomy observations for the exploration of the Solar System and the universe . The network also supports selected Earth-orbiting missions. DSN is part of the NASA Jet Propulsion Laboratory (JPL).
An observer becomes a deep space explorer upon escaping Earth's orbit. [ 20 ] While the Deep Space Network maintains communication and enables data download from an exploratory vessel, any local probing performed by sensors or active systems aboard usually requires astronomical navigation, since no enclosing network of satellites to ensure accurate positioning is present. | https://en.wikipedia.org/wiki/Theoretical_astronomy
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. [ 1 ] [ 2 ] As one of the founders of the discipline, James Keeler , said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space— what they are, rather than where they are", [ 3 ] which is studied in celestial mechanics .
Among the subjects studied are the Sun ( solar physics ), other stars , galaxies , extrasolar planets , the interstellar medium , and the cosmic microwave background . [ 4 ] [ 5 ] Emissions from these objects are examined across all parts of the electromagnetic spectrum , and the properties examined include luminosity , density , temperature , and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics , electromagnetism , statistical mechanics , thermodynamics , quantum mechanics , relativity , nuclear and particle physics , and atomic and molecular physics .
In practice, modern astronomical research often involves substantial work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include the properties of dark matter , dark energy , black holes , and other celestial bodies ; and the origin and ultimate fate of the universe . [ 4 ] Topics also studied by theoretical astrophysicists include Solar System formation and evolution ; stellar dynamics and evolution ; galaxy formation and evolution ; magnetohydrodynamics ; large-scale structure of matter in the universe; origin of cosmic rays ; general relativity , special relativity , and quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics .
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal . Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato , or Aether as maintained by Aristotle . [ 6 ] [ 7 ] During the 17th century, natural philosophers such as Galileo , [ 8 ] Descartes , [ 9 ] and Newton [ 10 ] began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws . [ 11 ] Their challenge was that the tools had not yet been invented with which to prove these assertions. [ 12 ]
For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. [ 13 ] [ 14 ] A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum . [ 15 ] By 1860 the physicist, Gustav Kirchhoff , and the chemist, Robert Bunsen , had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements . [ 16 ] Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere. [ 17 ] In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.
Among those who extended the study of solar and stellar spectra was Norman Lockyer , who in 1868 detected radiant, as well as dark lines in solar spectra. Working with chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium , after the Greek Helios , the Sun personified. [ 18 ] [ 19 ]
In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922. [ 20 ]
In 1895, George Ellery Hale and James E. Keeler , along with a group of ten associate editors from Europe and the United States, [ 21 ] established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics . [ 22 ] It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories. [ 21 ]
Around 1920, following the discovery of the Hertzsprung–Russell diagram still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars , in his paper The Internal Constitution of the Stars . [ 23 ] [ 24 ] At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc 2 . This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity ), had not yet been discovered. [ 25 ]
In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars. [ 26 ] Most significantly, she discovered that hydrogen and helium were the principal components of stars, quite unlike the composition of Earth. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery. [ 27 ] [ 28 ]
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths. [ 29 ] In the 21st century, they expanded further to include observations based on gravitational waves.
Observational astronomy is a division of the astronomical science that is concerned with recording and interpreting data, in contrast with theoretical astrophysics , which is mainly concerned with finding out the measurable implications of physical models . It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.
Most astrophysical observations are made using the electromagnetic spectrum .
Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays consisting of very high-energy particles can be observed hitting the Earth's atmosphere.
Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia . On the other hand, radio observations may look at events on a millisecond timescale ( millisecond pulsars ) or combine years of data ( pulsar deceleration studies). The information obtained from these different timescales is very different.
The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding of other stars.
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram , which can be viewed as representing the state of a stellar object, from birth to destruction.
Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations . Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen. [ 30 ] [ 31 ]
Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole ( astro )physics and the study of gravitational waves .
Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model , are the Big Bang , cosmic inflation , dark matter, dark energy and fundamental theories of physics.
The roots of astrophysics can be found in the seventeenth century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. [ 11 ] There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as prominent professors Lawrence Krauss , Subrahmanyan Chandrasekhar , Stephen Hawking , Hubert Reeves , Carl Sagan and Patrick Moore . The efforts of the early, late, and present scientists continue to attract young people to study the history and science of astrophysics. [ 32 ] [ 33 ] [ 34 ] The television sitcom show The Big Bang Theory popularized the field of astrophysics with the general public, and featured some well known scientists like Stephen Hawking and Neil deGrasse Tyson . | https://en.wikipedia.org/wiki/Theoretical_astrophysics |
Theoretical chemistry is the branch of chemistry which develops theoretical generalizations that are part of the theoretical arsenal of modern chemistry: for example, the concepts of chemical bonding, chemical reaction, valence, the potential energy surface, molecular orbitals, orbital interactions, and molecule activation.
Theoretical chemistry unites principles and concepts common to all branches of chemistry. Within the framework of theoretical chemistry, chemical laws, principles and rules are systematized, refined and detailed, and a hierarchy among them is constructed. The central place in theoretical chemistry is occupied by the doctrine of the interconnection of the structure and properties of molecular systems. It uses mathematical and physical methods to explain the structures and dynamics of chemical systems and to correlate, understand, and predict their thermodynamic and kinetic properties. In the most general sense, it is the explanation of chemical phenomena by the methods of theoretical physics. In contrast to theoretical physics, owing to the high complexity of chemical systems, theoretical chemistry often supplements approximate mathematical methods with semi-empirical and empirical methods.
In recent years, it has consisted primarily of quantum chemistry , i.e., the application of quantum mechanics to problems in chemistry. Other major components include molecular dynamics , statistical thermodynamics and theories of electrolyte solutions , reaction networks , polymerization , catalysis , molecular magnetism and spectroscopy .
Modern theoretical chemistry may be roughly divided into the study of chemical structure and the study of chemical dynamics. The former includes studies of: electronic structure, potential energy surfaces, and force fields; vibrational-rotational motion; equilibrium properties of condensed-phase systems and macro-molecules. Chemical dynamics includes: bimolecular kinetics and the collision theory of reactions and energy transfer; unimolecular rate theory and metastable states; condensed-phase and macromolecular aspects of dynamics.
Historically, the major field of application of theoretical chemistry has been in the following fields of research:
Hence, theoretical chemistry has emerged as a branch of research. With the rise of the density functional theory and other methods like molecular mechanics , the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics, including biochemistry , condensed matter physics , nanotechnology or molecular biology . | https://en.wikipedia.org/wiki/Theoretical_chemistry |
Theoretical oxygen demand ( ThOD ) is the calculated amount of oxygen required to oxidize a compound to its final oxidation products. [ 1 ] However, there are some differences between standard methods that can influence the results obtained: for example, some calculations assume that nitrogen released from organic compounds is generated as ammonia , whereas others allow for ammonia oxidation to nitrate . Therefore, in expressing results, the calculation assumptions should always be stated.
To determine the ThOD for glycine (CH2(NH2)COOH), the following assumptions are made:
We can then calculate it by the following steps:
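As an illustration (a sketch, not taken from the source), assume carbon is oxidized to CO2, hydrogen to H2O, and the nitrogen is released as ammonia rather than oxidized to nitrate. The balanced reaction is then CH2(NH2)COOH + 1.5 O2 → 2 CO2 + H2O + NH3, which gives a ThOD of roughly 0.64 g of O2 per gram of glycine:

```python
# Sketch: ThOD of glycine under the ammonia (no nitrification) assumption.
# Balanced reaction: CH2(NH2)COOH + 1.5 O2 -> 2 CO2 + H2O + NH3
M_GLYCINE = 75.07     # molar mass of glycine, g/mol
M_O2 = 32.00          # molar mass of O2, g/mol
O2_PER_GLYCINE = 1.5  # moles of O2 required per mole of glycine

thod = O2_PER_GLYCINE * M_O2 / M_GLYCINE
print(f"ThOD of glycine ≈ {thod:.2f} g O2 per g glycine")  # ≈ 0.64
```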
The theoretical oxygen demand represents the worst-case scenario. The actual oxygen demand of any compound depends on the biodegradability of the compound and the specific organism metabolizing the compound. The actual oxygen demand can be measured experimentally and is called the biochemical oxygen demand (BOD). | https://en.wikipedia.org/wiki/Theoretical_oxygen_demand |
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena . This is in contrast to experimental physics , which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory . In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. [ a ] For example, while developing special relativity , Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth 's drift through a luminiferous aether . [ 1 ] Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect , previously an experimental result lacking a theoretical formulation. [ 2 ]
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms , judgment of mathematical applicability is not based on agreement with any experimental results. [ 3 ] [ 4 ] A physical theory similarly differs from a mathematical theory , in the sense that the word "theory" has a different meaning in mathematical terms. [ b ]
Ric = kg, the equation for an Einstein manifold, used in general relativity to describe the curvature of spacetime
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. [ 5 ] [ 6 ] Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable.
Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-)empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. [ c ] "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than on experimental data), or apply the techniques of mathematical modeling to physics problems. [ d ] Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. [ e ] Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; [ f ] e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. [ 7 ] [ 8 ] Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity [ 9 ] are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle.
Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the 14th-century English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). [ 10 ] They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method.
Physical theories can be grouped into three categories: mainstream theories , proposed theories and fringe theories .
Theoretical physics began at least 2,300 years ago, under Pre-Socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts: the trivium of grammar, logic, and rhetoric and the quadrivium of arithmetic, geometry, music and astronomy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution.
The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, who wrote the Principia Mathematica. [ 11 ] It contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century. Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. [ 12 ] They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras.
Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat , electricity and magnetism , and then light . The laws of thermodynamics , and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics ) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory , unifying the previously separate phenomena of electricity, magnetism and light.
The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed was an original motivation for the theory) and of anomalies in the specific heats of solids, and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, further progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 1970s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel with the application of relativity to problems in astronomy and cosmology.
All of these achievements depended on the theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz ), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series . [ 13 ]
Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe , from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models.
Mainstream theories (sometimes referred to as central theories ) are the body of knowledge of both factual and scientific views and possess a usual scientific quality of the tests of repeatability, consistency with existing well-established science and experimentation. There do exist mainstream theories that are generally accepted theories based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate.
The proposed theories of physics are usually relatively new theories which deal with the study of physics which include scientific approaches, means for determining the validity of models and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and the theory of everything.
Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. It can include speculative sciences. This includes physics fields and physical theories presented in accordance with known evidence, and a body of associated predictions have been made according to that theory.
Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience . The falsification of the original theory sometimes leads to reformulation of the theory.
"Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat , the EPR thought experiment , simple illustrations of time dilation , and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities , which were then tested to various degrees of rigor , leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis . | https://en.wikipedia.org/wiki/Theoretical_physics |
A theoretical plate in many separation processes is a hypothetical zone or stage in which two phases, such as the liquid and vapor phases of a substance, establish an equilibrium with each other. Such equilibrium stages may also be referred to as an equilibrium stage, an ideal stage, or a theoretical tray. The performance of many separation processes depends on having a series of equilibrium stages and is enhanced by providing more such stages. In other words, having more theoretical plates increases the efficiency of the separation process, be it a distillation, absorption, chromatographic, adsorption or similar process. [ 1 ] [ 2 ]
The concept of theoretical plates and trays or equilibrium stages is used in the design of many different types of separation. [ 1 ] [ 2 ]
The concept of theoretical plates in designing distillation processes has been discussed in many reference texts. [ 2 ] [ 3 ] Any physical device that provides good contact between the vapor and liquid phases present in industrial-scale distillation columns or laboratory-scale glassware distillation columns constitutes a "plate" or "tray". Since an actual, physical plate can never be a 100% efficient equilibrium stage, the number of actual plates is more than the required theoretical plates.
This is commonly expressed as N_a = N_t / E, where N_a is the number of actual, physical plates or trays, N_t is the number of theoretical plates or trays and E is the plate or tray efficiency.
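For example (a minimal sketch, not from the source; the stage count and efficiency are illustrative assumptions), a separation requiring 20 theoretical stages on trays with an overall efficiency of 70% needs roughly 29 actual trays:

```python
import math

# Sketch: actual trays from theoretical stages, N_a = N_t / E (illustrative values).
N_t = 20     # theoretical stages required by the separation
E = 0.70     # overall tray efficiency

N_a = math.ceil(N_t / E)
print(f"Install about {N_a} actual trays")  # 29
```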
So-called bubble-cap or valve-cap trays are examples of the vapor and liquid contact devices used in industrial distillation columns. Another example of vapor and liquid contact devices are the spikes in laboratory Vigreux fractionating columns .
The trays or plates used in industrial distillation columns are fabricated of circular steel plates and usually installed inside the column at intervals of about 60 to 75 cm (24 to 30 inches) up the height of the column. That spacing is chosen primarily for ease of installation and ease of access for future repair or maintenance.
An example of a very simple tray is a perforated tray. The desired contacting between vapor and liquid occurs as the vapor, flowing upwards through the perforations, comes into contact with the liquid flowing downwards through the perforations. In current modern practice, as shown in the adjacent diagram, better contacting is achieved by installing bubble-caps or valve caps at each perforation to promote the formation of vapor bubbles flowing through a thin layer of liquid maintained by a weir on each tray.
To design a distillation unit or a similar chemical process, the number of theoretical trays or plates (that is, hypothetical equilibrium stages), N t , required in the process should be determined, taking into account a likely range of feedstock composition and the desired degree of separation of the components in the output fractions. In industrial continuous fractionating columns, N t is determined by starting at either the top or bottom of the column and calculating material balances, heat balances and equilibrium flash vaporizations for each of the succession of equilibrium stages until the desired end product composition is achieved. The calculation process requires the availability of a great deal of vapor–liquid equilibrium data for the components present in the distillation feed, and the calculation procedure is very complex. [ 2 ] [ 3 ]
In an industrial distillation column, the N t required to achieve a given separation also depends upon the amount of reflux used. Using more reflux decreases the number of plates required and using less reflux increases the number of plates required. Hence, the calculation of N t is usually repeated at various reflux rates. N t is then divided by the tray efficiency, E, to determine the actual number of trays or physical plates, N a , needed in the separating column. The final design choice of the number of trays to be installed in an industrial distillation column is then selected based upon an economic balance between the cost of additional trays and the cost of using a higher reflux rate.
There is a very important distinction between the theoretical plate terminology used in discussing conventional distillation trays and the theoretical plate terminology used in the discussions below of packed bed distillation or absorption or in chromatography or other applications. The theoretical plate in conventional distillation trays has no "height". It is simply a hypothetical equilibrium stage. However, the theoretical plate in packed beds, chromatography and other applications is defined as having a height.
The empirical formula known as Van Winkle's Correlation can be used to predict the Murphree plate efficiency for distillation columns separating binary systems. [ 4 ]
Distillation and absorption separation processes using packed beds for vapor and liquid contacting have an equivalent concept referred to as the plate height or the height equivalent to a theoretical plate (HETP). [ 2 ] [ 3 ] [ 5 ] HETP arises from the same concept of equilibrium stages as does the theoretical plate and is numerically equal to the absorption bed length divided by the number of theoretical plates in the absorption bed (and in practice is measured in this way).
That is, N_t = H / HETP, where N_t is the number of theoretical plates (also called the "plate count"), H is the total bed height and HETP is the height equivalent to a theoretical plate.
The material in packed beds can either be random dumped packing (1-3" wide) such as Raschig rings or structured sheet metal . Liquids tend to wet the surface of the packing and the vapors contact the wetted surface, where mass transfer occurs.
The theoretical plate concept was also adapted for chromatographic processes by Martin and Synge . [ 6 ] The IUPAC 's Gold Book provides a definition of the number of theoretical plates in a chromatography column. [ 7 ]
The same equation applies in chromatography as for the packed bed processes, namely N_t = H / HETP, with H taken as the length of the column.
In packed column chromatography, the HETP may also be calculated with the Van Deemter equation .
In capillary column chromatography HETP is given by the Golay equation.
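The following sketch (not from the source; the peak, column and Van Deemter coefficients are illustrative assumptions) shows how a plate count is commonly estimated from a Gaussian peak, converted to an HETP, and how the Van Deemter expression H(u) = A + B/u + Cu is minimized at u = sqrt(B/C):

```python
import math

# Sketch: plate count, HETP, and the Van Deemter optimum (illustrative values only).

# Plate count from retention time and peak width at half height.
t_R = 6.0        # retention time, min
w_half = 0.12    # peak width at half height, min
N = 5.54 * (t_R / w_half) ** 2
print(f"Plate count N ≈ {N:.0f}")                      # ≈ 13,850

# Height equivalent to a theoretical plate for a 250 mm column.
L = 0.250        # column length, m
print(f"HETP ≈ {L / N * 1e6:.1f} µm")                  # ≈ 18 µm

# Van Deemter: H(u) = A + B/u + C*u, minimized at u_opt = sqrt(B/C).
A, B, C = 6e-6, 3e-8, 2e-3                             # assumed coefficients (SI units)
u_opt = math.sqrt(B / C)
H_min = A + 2 * math.sqrt(B * C)
print(f"u_opt ≈ {u_opt * 1e3:.1f} mm/s, H_min ≈ {H_min * 1e6:.1f} µm")
```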
The concept of theoretical plates or trays applies to other processes as well, such as capillary electrophoresis and some types of adsorption . | https://en.wikipedia.org/wiki/Theoretical_plate |
The theoretical strength of a solid is the maximum possible stress a perfect solid can withstand. It is often much higher than what current real materials can achieve. The lowered fracture stress is due to defects, such as interior or surface cracks. One of the goals for the study of mechanical properties of materials is to design and fabricate materials exhibiting strength close to the theoretical limit.
When a solid is in tension, its atomic bonds stretch, elastically. Once a critical strain is reached, all the atomic bonds on the fracture plane rupture and the material fails mechanically. The stress at which the solid fractures is the theoretical strength, often denoted as σ_th. After fracture, the stretched atomic bonds return to their initial state, except that two surfaces have formed.
The theoretical strength is often approximated as: [ 1 ] [ 2 ]
where
The stress–displacement (σ versus x) relationship during fracture can be approximated by a sine curve, σ = σ_th sin(2πx/λ), up to λ/4. The initial slope of the σ versus x curve can be related to Young's modulus through the relationship (dσ/dx)|x=0 = E (dε/dx)|x=0,
where E is Young's modulus.
The strain ε can be related to the displacement x by ε = x / a_0, where a_0 is the equilibrium inter-atomic spacing. The strain derivative is therefore given by (dε/dx)|x=0 = 1 / a_0.
The relationship of the initial slope of the σ versus x curve with Young's modulus thus becomes (dσ/dx)|x=0 = E / a_0.
The sinusoidal relationship of stress and displacement gives the derivative dσ/dx = (2πσ_th/λ) cos(2πx/λ), which at x = 0 equals 2πσ_th/λ.
By setting the two expressions for (dσ/dx)|x=0 equal, the theoretical strength becomes σ_th = Eλ / (2πa_0).
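As a rough numerical illustration (a sketch, not from the source; the modulus is an assumed textbook value and λ is taken to be comparable to a_0, so that σ_th ≈ E/2π), this estimate places the theoretical strength of a stiff metal in the tens of gigapascals, far above what real, defect-containing samples achieve:

```python
import math

# Sketch: theoretical strength sigma_th = E * lam / (2 * pi * a0),
# simplified with the assumption lam ≈ a0, i.e. sigma_th ≈ E / (2 * pi).
E = 200e9                        # Young's modulus of a typical steel, Pa (assumed)
sigma_th = E / (2 * math.pi)
print(f"sigma_th ≈ {sigma_th / 1e9:.0f} GPa")  # ≈ 32 GPa, vs ~1-2 GPa for real steels
```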
The theoretical strength can also be approximated using the fracture work per unit area, which results in slightly different numbers. However, the above derivation and final approximation constitute a commonly used metric for evaluating the advantages of a material's mechanical properties. [ 3 ] | https://en.wikipedia.org/wiki/Theoretical_strength_of_a_solid
Theories of cloaking discusses various theories, based on science and research, for producing an electromagnetic cloaking device. The theories presented employ transformation optics, event cloaking, dipolar scattering cancellation, tunneling light transmittance, sensors and active sources, and acoustic cloaking.
A cloaking device is one where the purpose of the transformation is to hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (see Metamaterial cloaking [ 1 ] [ 2 ]) or sound waves. Objects in the defined location are still present, but incident waves are guided around them without being affected by the object itself. Along with this basic "cloaking device", other related concepts have been proposed in peer-reviewed scientific articles, and are discussed here. Naturally, some of the theories discussed here also employ metamaterials, either electromagnetic or acoustic, although often in a different manner than the original demonstration and its successor, the broad-band cloak.
The first electromagnetic cloaking device was produced in 2006, using gradient-index metamaterials . This has led to the burgeoning field of transformation optics (and now transformation acoustics ), where the propagation of waves is precisely manipulated by controlling the behaviour of the material through which the light (sound) is travelling.
Waves and the host material in which they propagate have a symbiotic relationship: both act on each other. A simple spatial cloak relies on fine tuning the properties of the propagation medium in order to direct the flow smoothly around an object, like water flowing past a rock in a stream, but without reflection, or without creating turbulence. Another analogy is that of a flow of cars passing a symmetrical traffic island – the cars are temporarily diverted, but can later reassemble themselves into a smooth flow that holds no information about whether the traffic island was small or large, or whether flowers or a large advertising billboard might have been planted on it.
Although both analogies given above have an implied direction (that of the water flow, or of the road orientation), cloaks are often designed so as to be isotropic , i.e. to work equally well for all orientations. However, they do not need to be so general, and might only work in two dimensions, as in the original electromagnetic demonstration, or only from one side, as for the so-called carpet cloak .
Spatial cloaks have other characteristics: whatever they contain can (in principle) be kept invisible forever, since an object inside the cloak may simply remain there. Signals emitted by the objects inside the cloak that are not absorbed can likewise be trapped forever by its internal structure. If a spatial cloak could be turned off and on again at will, the objects inside would then appear and disappear accordingly.
The event cloak is a means of manipulating electromagnetic radiation in space and time in such a way that a certain collection of happenings, or events, is concealed from distant observers. Conceptually, a safecracker can enter a scene, steal the cash and exit, whilst a surveillance camera records the safe door locked and undisturbed all the time. The concept utilizes the science of metamaterials in which light can be made to behave in ways that are not found in naturally occurring materials. [ 3 ]
The event cloak works by designing a medium in which different parts of the light illuminating a certain region can be either slowed or accelerated. A leading portion of the light is accelerated so that it arrives before the events occur, whilst a trailing part is slowed and arrives too late. After their occurrence, the light is reformed by slowing the leading part and accelerating the trailing part. The distant observer only sees a continuous illumination, whilst the events that occurred during the dark period of the cloak's operation remain undetected. The concept can be related to traffic flowing along a highway: at a certain point some cars are accelerated up, whilst the ones behind are slowed. The result is a temporary gap in the traffic allowing a pedestrian to cross. After this, the process can be reversed so that the traffic resumes its continuous flow without a gap. Regarding the cars as light particles (photons), the act of the pedestrian crossing the road is never suspected by the observer down the highway, who sees an uninterrupted and unperturbed flow of cars. [ 3 ] [ 4 ]
For absolute concealment, the events must be non-radiating. If they do emit light during their occurrence (e.g. by fluorescence), then this light is received by the distant observer as a single flash. [ 3 ]
Applications of the event cloak include the possibility of achieving 'interrupt-without-interrupt' in data channels that converge at a node. A primary calculation can be temporarily suspended to process priority information from another channel. Afterwards the suspended channel can be resumed in such a way as to appear as though it was never interrupted. [ 3 ]
The idea of the event cloak was first proposed by a team of researchers at Imperial College London (UK) in 2010, and published in the Journal of Optics. [ 3 ] An experimental demonstration of the basic concept using nonlinear optical technology has been presented in a preprint on the Cornell physics arXiv. [ 5 ] This uses time lenses to slow down and speed up the light, and thereby improves on the original proposal from McCall et al., [ 3 ] which instead relied on the nonlinear refractive index of optical fibres. The experiment claims a cloaked time interval of about 10 picoseconds, and suggests that extension into the nanosecond and microsecond regimes should be possible.
An event cloaking scheme that requires a single dispersive medium (instead of two successive media with opposite dispersion) has also been proposed, based on accelerating wavepackets. [ 6 ] The idea is to modulate part of a monochromatic light wave with a discontinuous nonlinear frequency chirp, so that two opposite accelerating caustics are created in space-time as the different frequency components propagate at different group velocities in the dispersive medium. Due to the structure of the frequency chirp, the expansion and contraction of the time gap happen continuously in the same medium, thus creating a biconvex time gap that conceals the enclosed events. [ 6 ]
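A hedged sketch of the kinematics involved: in a dispersive medium each frequency component $\omega$ travels at its own group velocity $v_g(\omega)$, so a component launched into the medium at time $t_0(\omega)$ arrives at depth $z$ at

$t(z, \omega) = t_0(\omega) + \dfrac{z}{v_g(\omega)}.$

A discontinuous chirp $\omega(t_0)$ that launches faster components just ahead of the discontinuity and slower components just behind it makes the two families of trajectories separate with increasing depth, opening the gap; a chirp structured so that this ordering is eventually undone closes the gap again, and the envelopes of the two families are the opposite accelerating caustics referred to above.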
In 2006, the same year as the first metamaterial cloak, another type of cloak was proposed. This type of cloaking exploits the resonance of light waves by matching the resonance of another object. In particular, a particle placed near a superlens would seem to disappear, as the light surrounding the particle resonates at the same frequency as the superlens. The resonance would effectively cancel out the light reflecting from the particle, rendering the particle electromagnetically invisible. [ 7 ]
In 2009, a passive cloaking device was designed as an 'external invisibility device' that leaves the concealed object out in the open so that it can 'see' its surroundings. This is based on the premise that cloaking research had not adequately addressed an inherent problem: because no electromagnetic radiation can enter or leave the cloaked space, a concealed object is left unable to see, or otherwise detect, anything outside the cloaked space. [ 8 ] [ 9 ]
Such a cloaking device is also capable of ‘cloaking’ only parts of an object, such as opening a virtual peep hole on a wall so as to see the other side. [ 10 ]
The traffic analogy used above for the spatial cloak can be adapted (albeit imperfectly) to describe this process. Imagine that a car has broken down in the vicinity of the roundabout, and is disrupting the traffic flow, causing cars to take different routes or creating a traffic jam . This exterior cloak corresponds to a carefully misshapen roundabout which manages to cancel or counteract the effect of the broken down car – so that as the traffic flow departs, there is again no evidence in it of either the roundabout or of the broken down car.
The plasmonic cover, mentioned alongside metamaterial covers (see plasmonic metamaterials), theoretically utilizes plasmonic resonance effects to reduce the total scattering cross section of spherical and cylindrical objects. These are lossless metamaterial covers operating near their plasma resonance, which could induce a dramatic drop in the scattering cross section, making the covered objects nearly "invisible" or "transparent" to an outside observer. Such low-loss, even lossless, passive covers do not require high dissipation, but rely on a completely different mechanism from absorption or interference cancellation. [ 11 ]
Materials with either negative or low-value constitutive parameters are required for this effect. Certain metals near their plasma frequency, or metamaterials with negative parameters, could fill this need. For example, several noble metals satisfy this requirement, having the needed electrical permittivity at infrared or visible wavelengths with relatively low loss. [ 11 ]
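The mechanism can be sketched in the quasi-static (deeply subwavelength) limit; the condition below is a standard textbook reconstruction rather than a formula quoted from the cited work. For a spherical core of permittivity $\varepsilon_1$ and radius $a_1$, coated by a shell of permittivity $\varepsilon_2$ and outer radius $a_2$ in a background of permittivity $\varepsilon_0$, the dipolar polarizability, and with it the leading contribution to the scattering cross section, vanishes when

$\left(\dfrac{a_1}{a_2}\right)^3 = \dfrac{(\varepsilon_2 - \varepsilon_0)(2\varepsilon_2 + \varepsilon_1)}{(\varepsilon_2 - \varepsilon_1)(2\varepsilon_2 + \varepsilon_0)}.$

A physically meaningful solution with $0 < a_1/a_2 < 1$ generally requires $\varepsilon_2$ to be low or negative, which is exactly what a cover operating near its plasma resonance provides.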
Currently only microscopically small objects could possibly appear transparent. [ 11 ]
These materials are further described as homogeneous, isotropic metamaterial covers near the plasma frequency that dramatically reduce the fields scattered by a given object. Furthermore, they do not require any absorptive process, any anisotropy or inhomogeneity, nor any interference cancellation. [ 11 ]
The "classical theory" of metamaterial covers works with light of only one specific frequency.
More recent research by Kort-Kamp et al., [ 12 ] whose work won a prize at the 2013 "School on Nonlinear Optics and Nanophotonics", shows that it is possible to tune the metamaterial to different light frequencies.
As implied by the nomenclature, this is a type of light transmission. Transmission of light (EM radiation) through an object such as a metallic film occurs with the assistance of tunnelling between resonating inclusions. This effect can be created, for example, by embedding a periodic configuration of dielectrics in a metal. In the resulting transmission peaks, interactions between the dielectrics and interference effects cause mixing and splitting of resonances. With an effective permittivity close to unity, the results can be used to propose a method for rendering the resulting materials invisible. [ 2 ]
There are other proposals for use of the cloaking technology.
In 2007, cloaking with metamaterials was reviewed and its deficiencies presented, together with theoretical solutions that could improve the capability to cloak objects. [ 13 ] [ 14 ] [ 15 ] [ 16 ] Later in 2007, a mathematical improvement of the cylindrical shielding, producing an electromagnetic "wormhole", was analyzed in three dimensions. [ 17 ] Electromagnetic wormholes, as optical (not gravitational) devices derived from cloaking theories, have potential applications for advancing some current technology. [ 18 ] [ 19 ] [ 20 ]
Other advances may be realized with an acoustic superlens. In addition, acoustic metamaterials have realized negative refraction for sound waves. Possible advances include enhanced ultrasound scans, sharper sonic medical imaging, more detailed seismic maps, and buildings less susceptible to earthquakes. Underground imaging may also be improved with finer detail. The acoustic superlens, acoustic cloaking, and acoustic metamaterials translate into novel applications for focusing or steering sonic waves. [ 21 ]
Acoustic cloaking technology could be used to stop a sonar-using observer from detecting the presence of an object that would normally be detectable as it reflects or scatters sound waves. Ideally, the technology would encompass a broad spectrum of vibrations on a variety of scales, from miniature electronic or mechanical components up to large earthquakes. Although most progress has been made on mathematical and theoretical solutions, a laboratory metamaterial device for evading sonar has recently been demonstrated. It can be applied to sound frequencies from 40 to 80 kHz. [ 21 ] [ 22 ] [ 23 ]
Cloaking concepts also extend to waves in bodies of water. A theory has been developed for a cloak that could "hide", or protect, man-made platforms, ships, and natural coastlines from destructive ocean waves, including tsunamis. [ 22 ] [ 24 ] [ 25 ] | https://en.wikipedia.org/wiki/Theories_of_cloaking |
In set theory and logic, Buchholz's ID hierarchy is a hierarchy of subsystems of first-order arithmetic. The systems/theories $ID_\nu$ are referred to as "the formal theories of ν-times iterated inductive definitions". $ID_\nu$ extends PA by ν iterated least fixed points of monotone operators.
The formal theory $ID_\omega$ (and $ID_\nu$ in general) is an extension of Peano Arithmetic, formulated in the language $L_{ID}$, by the following axioms: [ 1 ]
The theory $ID_\nu$ with $\nu \neq \omega$ is defined as:
A set $I \subseteq \mathbb{N}$ is called inductively defined if for some monotonic operator $\Gamma : P(\mathbb{N}) \to P(\mathbb{N})$, $LFP(\Gamma) = I$, where $LFP(f)$ denotes the least fixed point of $f$. The language of $ID_1$, $L_{ID_1}$, is obtained from that of first-order number theory, $L_{\mathbb{N}}$, by the addition of a set (or predicate) constant $I_A$ for every X-positive formula $A(X, x)$ in $L_{\mathbb{N}}[X]$ that only contains $X$ (a new set variable) and $x$ (a number variable) as free variables. The term X-positive means that $X$ only occurs positively in $A$ ($X$ is never on the left of an implication). We allow ourselves a bit of set-theoretic notation:
Then $ID_1$ contains the axioms of first-order number theory (PA) with the induction scheme extended to the new language, as well as these axioms:
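In the standard formulation, writing $A(F, x)$ for the result of replacing every subformula $t \in X$ of $A(X, x)$ by $F(t)$, the two axiom schemes read:

$(ID_1)^1$: $\forall x \, (A(I_A, x) \rightarrow x \in I_A)$

$(ID_1)^2$: $\forall x \, (A(F, x) \rightarrow F(x)) \rightarrow \forall x \, (x \in I_A \rightarrow F(x))$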
where $F(x)$ ranges over all $L_{ID_1}$ formulas.
Note that $(ID_1)^1$ expresses that $I_A$ is closed under the arithmetically definable set operator $\Gamma_A(S) = \{ x \in \mathbb{N} \mid \mathbb{N} \models A(S, x) \}$, while $(ID_1)^2$ expresses that $I_A$ is the least such (at least among sets definable in $L_{ID_1}$).
Thus, $I_A$ is meant to be the least pre-fixed-point, and hence the least fixed point, of the operator $\Gamma_A$.
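A standard example illustrating why leastness matters: for a primitive recursive binary relation $\prec$ on $\mathbb{N}$, the X-positive formula $A(X, x) \equiv \forall y \, (y \prec x \rightarrow y \in X)$ yields as $I_A$ the accessible (well-founded) part of $\prec$; since all of $\mathbb{N}$ is also a fixed point of the corresponding operator, it is the leastness scheme $(ID_1)^2$ that pins $I_A$ down to the intended set.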
To define the system of ν-times iterated inductive definitions, where ν is an ordinal, let $\prec$ be a primitive recursive well-ordering of order type ν. We use Greek letters to denote elements of the field of $\prec$. The language of $ID_\nu$, $L_{ID_\nu}$, is obtained from $L_{\mathbb{N}}$ by the addition of a binary predicate constant $J_A$ for every X-positive $L_{\mathbb{N}}[X, Y]$ formula $A(X, Y, \mu, x)$ that contains at most the shown free variables, where $X$ is again a unary (set) variable, and $Y$ is a fresh binary predicate variable. We write $x \in J_A^\mu$ instead of $J_A(\mu, x)$, thinking of $x$ as a distinguished variable in the latter formula.
The system $ID_\nu$ is now obtained from the system of first-order number theory (PA) by expanding the induction scheme to the new language and adding the scheme $(TI_\nu) : TI(\prec, F)$ expressing transfinite induction along $\prec$ for an arbitrary $L_{ID_\nu}$ formula $F$, as well as the axioms:
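In the standard formulation, using the abbreviation $A^\mu(F)$ explained in the next paragraph, these read, for each $\mu \prec \nu$:

$(ID_\nu)^1$: $\forall x \, (A^\mu(J_A^\mu) \rightarrow x \in J_A^\mu)$

$(ID_\nu)^2$: $\forall x \, (A^\mu(F) \rightarrow F(x)) \rightarrow \forall x \, (x \in J_A^\mu \rightarrow F(x))$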
where $F(x)$ is an arbitrary $L_{ID_\nu}$ formula. In $(ID_\nu)^1$ and $(ID_\nu)^2$ we used the abbreviation $A^\mu(F)$ for the formula $A(F, (\lambda \gamma y;\, \gamma \prec \mu \land y \in J_A^\gamma), \mu, x)$, where $x$ is the distinguished variable. We see that these express that each $J_A^\mu$, for $\mu \prec \nu$, is the least fixed point (among definable sets) of the operator $\Gamma_A^\mu(S) = \{ n \in \mathbb{N} \mid (\mathbb{N}, (J_A^\gamma)_{\gamma \prec \mu}) \models A(S, \cdot, \mu, n) \}$. Note how all the previous sets $J_A^\gamma$, for $\gamma \prec \mu$, are used as parameters.
We then define $ID_{\prec \nu} = \bigcup_{\xi \prec \nu} ID_\xi$.
$\widehat{\mathsf{ID}}_\nu$ is a weakened version of $\mathsf{ID}_\nu$. In the system $\widehat{\mathsf{ID}}_\nu$, a set $I \subseteq \mathbb{N}$ is instead called inductively defined if for some monotonic operator $\Gamma : P(\mathbb{N}) \to P(\mathbb{N})$, $I$ is a fixed point of $\Gamma$, rather than the least fixed point. This subtle difference makes the system significantly weaker: $PTO(\widehat{\mathsf{ID}}_1) = \psi(\Omega^{\varepsilon_0})$, while $PTO(\mathsf{ID}_1) = \psi(\varepsilon_{\Omega+1})$.
$\mathsf{ID}_\nu\#$ is $\widehat{\mathsf{ID}}_\nu$ weakened even further: not only does it use fixed points rather than least fixed points, it also has induction only for positive formulas. This again subtle difference makes the system even weaker: $PTO(\mathsf{ID}_1\#) = \psi(\Omega^\omega)$, while $PTO(\widehat{\mathsf{ID}}_1) = \psi(\Omega^{\varepsilon_0})$.
$\mathsf{W\text{-}ID}_\nu$ is the weakest of all variants of $\mathsf{ID}_\nu$, based on W-types. The amount of weakening compared to regular iterated inductive definitions is identical to removing bar induction given a certain subsystem of second-order arithmetic: $PTO(\mathsf{W\text{-}ID}_1) = \psi_0(\Omega \times \omega)$, while $PTO(\mathsf{ID}_1) = \psi(\varepsilon_{\Omega+1})$.
$\mathsf{U(ID}_\nu\mathsf{)}$ is an "unfolding" strengthening of $\mathsf{ID}_\nu$. It is not exactly a first-order arithmetic system, but captures what one can get by predicative reasoning based on ν-times iterated generalized inductive definitions. The amount of increase in strength is identical to the increase from $\varepsilon_0$ to $\Gamma_0$: $PTO(\mathsf{ID}_1) = \psi(\varepsilon_{\Omega+1})$, while $PTO(\mathsf{U(ID}_1\mathsf{)}) = \psi(\Gamma_{\Omega+1})$. | https://en.wikipedia.org/wiki/Theories_of_iterated_inductive_definitions |
Theories of technological change and innovation attempt to explain the factors that shape technological innovation as well as the impact of technology on society and culture. Most contemporary theories of technological change reject two earlier views: the linear model of technological innovation and technological determinism. To challenge the linear model, they point to evidence from the history of technology that technological innovation often gives rise to new scientific fields, and they emphasize the important role that social networks and cultural values play in creating and shaping technological artifacts. To challenge technological determinism, they emphasize that the scope for technical choice is greater than most laypeople realize; as scholars in the philosophy of science and in science and technology studies like to say, "It could have been different." For this reason, theorists who take these positions often argue for greater public involvement in technological decision-making.
Sociological theories and research focus on how humans and technology actually interact and may even affect each other. Some theories concern how political decisions are made for both humans and technology, with humans and technology treated as equal participants in the political decision: humans make, use, and advance technology through innovation. Most theories on this topic examine individual human interactions with technological equipment, but a sub-group examines groups of people interacting with technology. According to some critiques, these theories are purposefully vague and ambiguous, since the circumstances they describe change with human culture and with technological change and innovation.
Social constructivism applied to technology argues that technology does not determine human action, but rather human action shapes technological use. Key concepts here include:
Key authors here include MacKenzie and Wajcman (1985).
What is important here are the gradients and connectivity of actors' actions and their technological competencies, as well as the degree to which we choose to have "figurative" representations. Key concepts include the inscription of beliefs, practices, and relations into technology, which is then said to embody them. Key authors include Bruno Latour (1997) [ 3 ] and Callon (1999). [ 4 ]
Critical theory attempts, according to some, to go beyond descriptive accounts of how things are, to examine and question why they have come to be that way and how they might otherwise be. Critical theory asks whose interests are being served by the status quo and assesses the potential of future alternatives to better serve both technological development and social justice. Here Geuss's [ 8 ] definition is given, where "a critical theory, then, is a reflective theory which gives agents a kind of knowledge inherently productive of enlightenment and emancipation" (1964). Thus Marcuse argued that while technological design choices are often presented as neutral and technical, they in fact manifest political or moral values. Critical theory is seen as a "form of archaeology" that attempts to get beneath common-sense understandings in order to reveal the power relationships and interests determining particular technological configurations and uses.
Perhaps the most developed contemporary critical theory of technology is contained in the works of Andrew Feenberg, including his book Transforming Technology (2002).
There are also a number of science and society theories that address how media affect group development and other processes. Broadly speaking, some of these theories (e.g., media richness) are concerned with questions of media choice, that is, when to use which medium effectively. Other theories (social presence and "media naturalness") are concerned with the consequences of those media choices, i.e., with the social effects of using particular communication media.
Additionally, many authors have theorized technology in particular ways so as to critique or emphasize aspects of technology as addressed by the mainline theories. For example, Steve Woolgar (1991) [ 19 ] considers technology as text in order to critique the sociology of scientific knowledge as applied to technology, and to distinguish between three responses to that notion: the instrumental response (interpretive flexibility), the interpretivist response (environmental/organizational influences), and the reflexive response (a double hermeneutic). Pfaffenberger (1992) [ 20 ] treats technology as drama to argue that a recursive structuring of technological artifacts and their social structure discursively regulates the technological construction of political power. A technological drama is a discourse of technological "statements" and "counterstatements" within the processes of technological regularization, adjustment, and reconstitution.
An important philosophical approach to technology has been taken by Bernard Stiegler , [ 21 ] whose work has been influenced by other philosophers and historians of technology including Gilbert Simondon and André Leroi-Gourhan .
In the Schumpeterian and Neo-Schumpeterian theories technologies are critical factors of economic growth ( Carlota Perez ). [ 22 ]
There are also theories of technological change and innovation which are not defined or claimed by a proponent, but are used by authors to describe the existing literature, in contrast to their own work, or as a review of the field.
For example, Markus and Robey (1988) [ 23 ] propose a general theory of technology consisting of the causal structures of agency (technological imperative, organizational imperative, emergent), structure (variance, process), and level (micro, macro) of analysis.
Orlikowski (1992) [ 24 ] notes that previous conceptualizations of technology typically differ over scope (is technology more than hardware?) and role (is it an external objective force, interpreted human action, or an impact moderated by humans?) and identifies three models:
DeSanctis and Poole (1994) similarly write of three views of technology's effects:
Bimber (1998) [ 25 ] addresses the determinacy of technology effects by distinguishing between the: | https://en.wikipedia.org/wiki/Theories_of_technology |
Planning theory is the body of scientific concepts, definitions, behavioral relationships, and assumptions that define the body of knowledge of urban planning. Urban planning is the strategic process of designing and managing the growth and development of human settlements, from small towns to sprawling metropolitan areas. Various planning theories guide urban development decisions and policies, and over time different schools of thought have emerged, evolving in response to shifts in society, economy, and technology. This article explores the key theories and movements that have shaped urban planning. There is no single unified planning theory; rather, there are several. Whittemore identifies nine procedural theories that dominated the field between 1959 and 1983: the Rational-Comprehensive approach, the Incremental approach, the Transformative Incremental (TI) approach, the Transactive approach, the Communicative approach, the Advocacy approach, the Equity approach, the Radical approach, and the Humanist or Phenomenological approach. [ 1 ] [ 2 ]
Urban planning can include urban renewal, by adapting urban planning methods to existing cities suffering from decline. Alternatively, it can concern the massive challenges associated with urban growth, particularly in the Global South. [ 3 ] All in all, urban planning exists in various forms and addresses many different issues. [ 4 ] The modern origins of urban planning lie in the movement for urban reform that arose as a reaction against the disorder of the industrial city in the mid-19th century. Many of the early influencers were inspired by anarchism, which was popular at the turn of the 19th and 20th centuries. [ 5 ] The new imagined urban form was meant to go hand-in-hand with a new society, based upon voluntary co-operation within self-governing communities. [ 5 ]
In the late 20th century, the term sustainable development came to represent an ideal outcome in the sum of all planning goals. [ 6 ] Sustainable architecture involves renewable materials and energy sources and is increasing in importance as an environmentally friendly solution. [ 7 ]
Since at least the Renaissance and the Age of Enlightenment , urban planning had generally been assumed to be the physical planning and design of human communities. [ 8 ] Therefore, it was seen as related to architecture and civil engineering, and thereby to be carried out by such experts. [ 8 ] This kind of planning was physicalist and design-orientated, and involved the production of masterplans and blueprints which would show precisely what the 'end-state' of land use should be, similar to architectural and engineering plans. [ 9 ] Similarly, the theory of urban planning was mainly interested in visionary planning and design which would demonstrate how the ideal city should be organised spatially. [ 10 ]
Although it can be seen as an extension of the sort of civic pragmatism seen in Oglethorpe's plan for Savannah or William Penn's plan for Philadelphia, the roots of the rational planning movement lie in Britain's Sanitary Movement (1800–1890). [ 11 ] During this period, advocates such as Charles Booth argued for centrally organized, top-down solutions to the problems of industrializing cities. The Sanitary Movement itself arose as a direct response to the appalling living conditions in rapidly industrializing cities, focusing on improving public health through the creation of better sanitation infrastructure, including sewage systems and clean water supplies. [ 12 ]
The Garden City Movement was founded by Ebenezer Howard (1850-1928). It advocated the development of self-sufficient, planned communities that combined the best aspects of both urban and rural living, surrounded by green belts to provide residents with access to nature while maintaining proximity to urban amenities (Hall, Cities of Tomorrow: An Intellectual History of Urban Planning and Design Since 1880, John Wiley & Sons, 2014, p. 90). Howard's ideas were expressed in the book Garden Cities of To-morrow (1898). [ 13 ] His influences included Benjamin Walter Richardson, who had published a pamphlet in 1876 calling for low population density, good housing, wide roads, an underground railway and open space; Thomas Spence, who had supported common ownership of land and the sharing of the rents it would produce; Edward Gibbon Wakefield, who had pioneered the idea of colonizing planned communities to house the poor in Adelaide (including starting new cities separated by green belts at a certain point); James Silk Buckingham, who had designed a model town with a central place, radial avenues and industry in the periphery; as well as Alfred Marshall, Peter Kropotkin and the back-to-the-land movement, which had all called for the moving of masses to the countryside. [ 14 ]
Howard aimed to merge urban convenience with rural tranquility to promote healthier communities called Town-Country. [ 15 ] To make this happen, a group of individuals would establish a limited-dividend company to buy cheap agricultural land, which would then be developed with investment from manufacturers and housing for the workers. [ 15 ] No more than 32,000 people would be housed in a settlement, spread over 1,000 acres. [ 15 ] Around it would be a permanent green belt of 5,000 acres, with farms and institutions (such as mental institutions) which would benefit from the location. [ 16 ] After reaching the limit, a new settlement would be started, connected by an inter-city rail , with the polycentric settlements together forming the "Social City". [ 16 ] The lands of the settlements would be jointly owned by the inhabitants, who would use rents received from it to pay off the mortgage necessary to buy the land and then invest the rest in the community through social security . [ 17 ] Actual garden cities were built by Howard in Letchworth , Brentham Garden Suburb , and Welwyn Garden City . The movement would also inspire the later New towns movement . [ 18 ]
Arturo Soria y Mata's idea of the Linear City (1882) [ 19 ] replaced the traditional idea of the city as a centre and a periphery with the idea of constructing linear sections of infrastructure (roads, railways, gas, water, etc.) along an optimal line and then attaching the other components of the city along the length of this line. Compared to the concentric diagrams of Ebenezer Howard and others of the same period, Soria's linear city creates the infrastructure for a controlled process of expansion that joins one growing city to the next in a rational way, instead of letting them both sprawl. The linear city was meant to 'ruralize the city and urbanize the countryside', and to be universally applicable: as a ring around existing cities, as a strip connecting two cities, or as an entirely new linear town across an unurbanized region. [ 20 ] The idea was later taken up by Nikolay Alexandrovich Milyutin in the planning circles of the 1920s Soviet Union. The Ciudad Lineal was a practical application of the concept.
Patrick Geddes (1864-1932) was the founder of regional planning. [ 21 ] His main influences were the geographers Élisée Reclus and Paul Vidal de La Blache, as well as the sociologist Pierre Guillaume Frédéric le Play. [ 22 ] From these he received the idea of the natural region. [ 23 ] According to Geddes, planning must start by surveying such a region, crafting a "Valley Section" which shows the general slope from mountains to the sea that can be identified across scales and places in the world, including both the natural environment and the cultural environments produced by it. [ 24 ] This was encapsulated in the motto "Survey before Plan". [ 25 ] He saw cities as being changed by technology into more regional settlements, for which he coined the term conurbation. [ 26 ] Similar to the garden city movement, he also believed in adding green areas to these urban regions. [ 26 ] The Regional Planning Association of America advanced his ideas, coming up with the 'regional city' which would have a variety of urban communities across a green landscape of farms, parks and wilderness with the help of telecommunication and the automobile. [ 27 ] This had major influence on the County of London Plan, 1944. [ 28 ]
The City Beautiful movement was inspired by 19th century European capital cities such as Georges-Eugène Haussmann's Paris or the Vienna Ring Road. [ 29 ] An influential figure was Daniel Burnham (1846-1912), who was the chief of construction of the World's Columbian Exposition in 1893. [ 29 ] Urban problems such as the 1886 Haymarket affair in Chicago had created a perceived need among some of the elites to reform the morality of the city. [ 30 ] Burnham's greatest achievement was the Chicago plan of 1909. [ 31 ] His aim was "to restore to the city a lost visual and aesthetic harmony, thereby creating the physical prerequisite for the emergence of a harmonious social order", essentially pursuing social reform through slum clearance and the creation of public space, which also earned it the support of the Progressivist movement. [ 32 ] This was also believed to be economically advantageous by drawing in tourists and wealthy migrants. [ 32 ] Because of this it has been referred to as "trickle-down" urban development and as "centrocentrist" for focusing only on the core of the city. [ 33 ] Other major cities planned according to the movement's principles included the British colonial capitals of New Delhi, Harare, Lusaka, Nairobi and Kampala, [ 34 ] [ 35 ] as well as Canberra in Australia, [ 36 ] and Albert Speer's plan for the Nazi capital Germania. [ 37 ]
Le Corbusier (1887–1965) pioneered a new urban form called towers in the park. His approach was based on defining the house as 'a machine to live in'. [ 38 ] The Plan Voisin he devised for Paris, which was never fulfilled, would have involved the demolition of much of historic Paris in favour of 18 uniform 700-foot tower blocks. [ 39 ] Ville Contemporaine and the Ville Radieuse formulated his basic principles, including decongestion of the city through increased density and open space, achieved by building taller on a smaller footprint. [ 39 ] Wide avenues were also to be built to the city centre by demolishing old structures, an approach criticized for its lack of environmental awareness. [ 39 ] His generic ethos of planning was based on the rule of experts who would "work out their plans in total freedom from partisan pressures and special interests" and that "once their plans are formulated, they must be implemented without opposition". [ 40 ] His influence on the Soviet Union helped inspire the 'urbanists' who wanted to build planned cities full of massive apartment blocks in the Soviet countryside. [ 40 ] The only city which he ever actually helped plan was Chandigarh in India. [ 41 ] Brasília, planned by Oscar Niemeyer, was also heavily influenced by his thought. [ 42 ] Both cities suffered from the issue of unplanned settlements growing outside them. [ 43 ]
In the United States, Frank Lloyd Wright similarly identified vehicular mobility as a principal planning metric. Car-based suburbs had already been developed in the Country Club District in 1907-1908 (including later the world's first car-based shopping centre of Country Club Plaza ), as well as in Beverly Hills in 1914 and Palos Verdes Estates in 1923. [ 44 ] Wright began to idealise this vision in his Broadacre City starting in 1924, with similarities to the garden city and regional planning movements. [ 45 ] The fundamental idea was for technology to liberate individuals. [ 45 ] In his Usonian vision , he described the city as
"spacious, well-landscaped highways, grade crossings eliminated by a new kind of integrated by-passing or over- or under-passing all traffic in cultivated or living areas … Giant roads, themselves great architecture, pass public service stations . . . passing by farm units, roadside markets, garden schools, dwelling places, each on its acres of individually adorned and cultivated ground". [ 46 ]
This was justified as a democratic ideal, as "Democracy is the ideal of reintegrated decentralization … many free units developing strength as they learn by function and grow together in spacious mutual freedom." [ 46 ] This vision was however criticized by Herbert Muschamp as being contradictory in its call for individualism while relying on the master-architect to design it all. [ 46 ]
After World War II, suburbs similar to Broadacre City spread throughout the US, but without the social or economic aspects of his ideas. [ 47 ] A notable example was that of Levittown, built 1947 to 1951. [ 48 ] The suburban designs were criticized for their lack of form by Lewis Mumford, as they lacked clear boundaries, and by Ian Nairn because "Each building is treated in isolation, nothing binds it to the next one". [ 49 ]
In the Soviet Union too, the so-called deurbanists (such as Moisei Ginzburg and Mikhail Okhitovich) advocated for the use of electricity and new transportation technologies (especially the car) to disperse the population from the cities to the countryside, with the ultimate aim of a "townless, fully decentralized, and evenly populated country". [ 44 ] However, in 1931 the Communist Party ruled such views forbidden. [ 45 ]
Throughout both the United States and Europe, the rational planning movement declined in the latter half of the 20th century. [ 50 ] Key events in the United States include the demolition of the Pruitt-Igoe housing project in St. Louis and the national backlash against urban renewal projects, particularly urban expressway projects. [ 51 ] An influential critic of such planning was Jane Jacobs, whose 1961 book The Death and Life of Great American Cities has been claimed to be "one of the most influential books in the short history of city planning". [ 52 ] She attacked the garden city movement because its "prescription for saving the city was to do the city in" and because it "conceived of planning also as essentially paternalistic, if not authoritarian". [ 52 ] The Corbusians, on the other hand, she claimed to be egoistic. [ 52 ] In contrast, she defended dense traditional inner-city neighborhoods like Brooklyn Heights or North Beach, San Francisco, and argued that an urban neighbourhood required about 200-300 people per acre, as well as a high net ground coverage at the expense of open space. [ 53 ] She also advocated for a diversity of land uses and building types, with the aim of having a constant churn of people throughout the neighbourhood across the times of the day. [ 53 ] This essentially meant defending urban environments as they were before modern planning had aimed to start changing them. [ 53 ] As she believed that such environments were essentially self-organizing, her approach was effectively one of laissez-faire, and has been criticized for not being able to guarantee "the development of good neighbourhoods". [ 54 ]
The most radical opposition to blueprint planning was declared in 1969 in a manifesto published in New Society, with the words:
The whole concept of planning (the town-and-country kind at least) has gone cockeyed … Somehow, everything must be watched; nothing must be allowed simply to “happen.” No house can be allowed to be commonplace in the way that things just are commonplace: each project must be weighed, and planned, and approved, and only then built, and only after that discovered to be commonplace after all. [ 55 ]
Another form of opposition came from the advocacy planning movement, which opposed traditional top-down and technical planning. [ 56 ]
Cybernetics and modernism inspired the related theories of rational process and systems approaches to urban planning in the 1960s, [ 57 ] both imported into planning from other disciplines. [ 57 ] The systems approach was a reaction to the issues associated with the traditional view of planning, which failed to grasp the social and economic sides of cities and the complexity and interconnectedness of urban life, and which lacked flexibility. [ 58 ] The 'quantitative revolution' of the 1960s also created a drive for more scientific and precise thinking, while the rise of ecology made the approach seem more natural. [ 59 ]
Systems theory is based on the conception of phenomena as 'systems', which are themselves coherent entities composed of interconnected and interdependent parts. [ 60 ] A city can in this way be conceptualised as a system with interrelated parts of different land uses, connected by transport and other communications. [ 60 ] The aim of urban planning thereby becomes that of planning and controlling the system. [ 61 ] Similar ideas had been put forward by Geddes, who had seen cities and their regions as analogous to organisms, though they did not receive much attention while planning was dominated by architects. [ 61 ]
The idea of the city as a system meant that it became critical for planners to understand how cities functioned. [ 61 ] It also meant that a change to one part of a city would have effects on other parts as well. [ 61 ] Doubts were also raised about the goal of producing detailed blueprints of what cities should look like in the end, suggesting instead the need for more flexible plans with trajectories rather than fixed futures. [ 62 ] Planning should also be an ongoing process of monitoring and taking action in the city, rather than just producing a blueprint at one time. [ 62 ] The systems approach also necessitated taking into account the economic and social aspects of cities, beyond just the aesthetic and physical ones. [ 62 ]
The focus on the procedural aspect of planning had already been pioneered by Geddes in his Survey-Analysis-Plan approach. [ 63 ] However, this approach had several shortcomings. It did not consider the reasons for doing a survey in the first place. [ 63 ] It also suggested that there should be simply a single plan to be considered. [ 63 ] Finally, it did not take into account the implementation stage of the plan. [ 64 ] There should also be further action in monitoring the outcomes of the plan after that. [ 64 ] The rational process, in contrast, identified five different stages: (1) the definition of problems and aims; (2) the identification of alternatives; (3) the evaluation of alternatives; (4) implementation; (5) monitoring. [ 64 ] This new approach represented a rejection of blueprint planning. [ 65 ]
Beginning in the late 1950s and early 1960s, critiques of the rational paradigm began to emerge and formed into several different schools of planning thought. The first of these schools is Lindblom's incrementalism. Lindblom describes planning as "muddling through" and thought that practical planning required decisions to be made incrementally. This incremental approach meant choosing from a small number of policy approaches that can have only a small number of consequences and are firmly bounded by reality, constantly adjusting the objectives of the planning process and using multiple analyses and evaluations. [ 66 ]
The mixed scanning model, developed by Etzioni, takes an approach similar to Lindblom's. Etzioni (1968) suggested that organizations plan on two different levels: the tactical and the strategic. He posited that organizations could accomplish this by essentially scanning the environment on multiple levels and then choosing different strategies and tactics to address what they found there. While Lindblom's approach operated only on the functional level, Etzioni argued, the mixed scanning approach would allow planning organizations to work on both the functional and the more big-picture oriented levels. [ 67 ]
Modernism sought to design and plan cities that followed the logic of the new model of industrial mass production, resorting to large-scale solutions, aesthetic standardization, and prefabricated design solutions. [ 68 ] This approach was found to have eroded urban living by its failure to recognize differences and its aim towards homogeneous landscapes. [ 69 ] Jane Jacobs's 1961 book The Death and Life of Great American Cities [ 70 ] was a sustained critique of urban planning as it had developed within modernism, [ 71 ] and played a major role in turning public opinion against modernist planners, notably Robert Moses. [ 72 ]
Postmodern urban planning involves theories that embrace and aim to create diversity, elevating uncertainty, flexibility, and change, and rejecting utopianism while embracing a utopian way of thinking and acting. [ 73 ] The postmodernity of "resistance" seeks to deconstruct modernism, a critique of the origins without necessarily returning to them. [ 74 ] As a result, planners are much less inclined to lay a firm or steady claim to there being one single "right way" of engaging in urban planning and are more open to different styles and ideas of "how to plan". [ 75 ]
This postmodern reaction is often compared with the modernist Chicago School, the then dominant movement founded at the University of Chicago in the 1920s. Sociologist Ernest Burgess's prominent concentric circle model depicted urban areas as a series of concentric functional zones that sorted population groups. [ 76 ] [ 77 ] It proposed a central business core, circled by transitional immigrant and working class areas, then by more affluent outer commuter rings. [ 78 ] In contrast, for example, the postmodernist Los Angeles School, primarily associated with the University of California, Los Angeles, viewed Los Angeles as a prototypical postmodern city, a "multi-nucleated megacity encompassing hundreds of municipalities", sprawling and centerless. The LA School analysis emphasized the global-local connection, pervasive social fragmentation, and a "reterritorialization of the urban process in which hinterland organizes the center (in direct contradiction to the Chicago model)". [ 79 ]
A review of postmodern urbanism literature, published in 2018 in the Journal of Architectural and Planning Research , examined coverage of style, epoch, and method, noting a general lack of cohesive definition, and the use of questionable interpretation to form conceptual statements. The review concluded that as a theoretical construct, postmodern urbanism "is relevant to planning and design theory insofar as it rejects modernist 'rational' planning." However, given that urban planning and design are grounded in practice, postmodern theoretical ideas offer "little insight that professionals can use." [ 80 ]
In the 1960s, a view emerged of planning as an inherently normative and political activity. [ 81 ] Advocates of this approach included Norman Dennis , Martin Meyerson , Edward C. Banfield , Paul Davidoff , and Norton E. Long , the latter remarking that:
Plans are policies and policies, in a democracy at any rate, spell politics. The question is not whether planning will reflect politics but whose politics it will reflect. What values and whose values will planners seek to implement? . . . No longer can the planner take refuge in the neutrality of the objectivity of the personally uninvolved scientist. [ 82 ]
The choice between alternative end points in planning was a key issue, and one which was seen as political. [ 83 ]
Participatory planning is an urban planning paradigm that emphasizes involving the entire community in the strategic and management processes of urban planning; or, community-level planning processes, urban or rural. It is often considered part of community development. [ 84 ] Participatory planning aims to harmonize views among all of its participants as well as prevent conflict between opposing parties. In addition, marginalized groups have an opportunity to participate in the planning process. [ 85 ]
Patrick Geddes had first advocated for the "real and active participation" of citizens when working in the British Raj, arguing against the "Dangers of Municipal Government from above" which would cause "detachment from public and popular feeling, and consequently, before long, from public and popular needs and usefulness". [ 86 ] Further on, self-build was researched by Raymond Unwin in the 1930s in his Town Planning in Practice. [ 87 ] The Italian anarchist architect Giancarlo De Carlo then argued in 1948 that "The housing problem cannot be solved from above. It is a problem of the people, and it will not be solved, or even boldly faced, except by the concrete will and action of the people themselves", and that planning should exist "as the manifestation of communal collaboration". [ 88 ] Through the Architectural Association School of Architecture, his ideas caught the attention of John Turner, who started working in Peru with Eduardo Neira. [ 88 ] Turner would go on working in Lima from the mid-'50s to the mid-'60s. [ 89 ] There he found that the barrios were not slums, but were rather highly organised and well-functioning. [ 90 ] As a result, he came to the conclusion that:
"When dwellers control the major decisions and are free to make their own contributions in the design, construction or management of their housing, both this process and the environment produced stimulate individual and social well-being. When people have no control over nor responsibility for key decisions in the housing process, on the other hand, dwelling environments may instead become a barrier to personal fulfillment and a burden on the economy." [ 91 ]
The role of the government was to provide a framework within which people would be able to work freely, for example by providing them the necessary resources, infrastructure and land. [ 91 ] Self-build was later again taken up by Christopher Alexander , who led a project called People Rebuild Berkeley in 1972, with the aim to create "self-sustaining, self-governing" communities, though it ended up being closer to traditional planning. [ 92 ]
After the "fall" of blueprint planning in the late 1950s and early 1960s, the synoptic model began to emerge as a dominant force in planning. Lane (2005) describes synoptic planning as having four central elements:
Public participation was first introduced into this model, and it was generally integrated into the system process described above. However, the problem was that the idea of a single public interest still dominated attitudes, effectively devaluing the importance of participation, because it suggested that the public interest is relatively easy to identify and therefore requires only the most minimal form of participation. [ 93 ]
Transactive planning was a radical break from previous models. Instead of considering public participation as a method to be used in addition to the normal planning process, participation became a central goal. For the first time, the public was encouraged to take on an active role in the policy-setting process, while the planner took on the role of a distributor of information and a feedback source. [ 93 ] Transactive planning focuses on interpersonal dialogue that develops ideas, which will be turned into action. One of its central goals is mutual learning, in which the planner gains more information about the community while citizens become more educated about planning issues. [ 94 ]
Formulated in the 1960s by lawyer and planning scholar Paul Davidoff, the advocacy planning model takes the perspective that there are large inequalities in the political system and in the bargaining process between groups, which leave large numbers of people unorganized and unrepresented in the process. It concerns itself with ensuring that all people are equally represented in the planning process by advocating for the interests of the underprivileged and seeking social change. [ 95 ] [ 96 ] Again, public participation is a central tenet of this model. A plurality of public interests is assumed, and the role of the planner is essentially that of a facilitator who either advocates directly for underrepresented groups or encourages them to become part of the process. [ 93 ]
Radical planning is a stream of urban planning which seeks to manage development in an equitable and community-based manner. The seminal text of the radical planning movement is Foundations for a Radical Concept in Planning (1973) by Stephen Grabow and Allen Heskin. Grabow and Heskin provided a critique of planning as elitist, centralizing and change-resistant, and proposed a new paradigm based upon systems change, decentralization, communal society, facilitation of human development and consideration of ecology. They were joined by Shean McConnell, head of the Department of Town Planning at the Polytechnic of the South Bank, with his 1981 work Theories for Planning.
In 1987 John Friedmann entered the fray with Planning in the Public Domain: From Knowledge to Action , promoting a radical planning model based on "decolonization", "democratization", "self-empowerment" and "reaching out". Friedmann described this model as an "Agropolitan development" paradigm, emphasizing the re-localization of primary production and manufacture . In "Toward a Non- Euclidian Mode of Planning" (1993) Friedmann further promoted the urgency of decentralizing planning, advocating a planning paradigm that is normative, innovative, political, transactive and based on a social learning approach to knowledge and policy.
The bargaining model views planning as the result of give and take on the part of a number of interests who are all involved in the process. It argues that this bargaining is the best way to conduct planning within the bounds of legal and political institutions. [ 97 ] A distinctive feature of this model is that it makes public participation the central dynamic in the decision-making process. Decisions are made first and foremost by the public, and the planner plays a more minor role. [ 93 ]
The communicative approach to planning is perhaps the most difficult to explain. It focuses on using communication to help the different interests in the process understand each other. The idea is that each individual will approach a conversation with his or her own subjective experience in mind, and that from that conversation shared goals and possibilities will emerge. Again, participation plays a central role in this model. The model seeks to include a broad range of voices to enhance the debate and negotiation that is supposed to form the core of actual plan making. In this model, participation is actually fundamental to the planning process happening: without the involvement of concerned interests, there is no planning. [ 93 ] Looking at each of these models, it becomes clear that participation is shaped not only by the public in a given area or by the attitude of the planning organization or the planners that work for it. In fact, public participation is largely influenced by how planning is defined, how planning problems are defined, the kinds of knowledge that planners choose to employ, and how the planning context is set. [ 93 ] Though some might argue that it is too difficult to involve the public through transactive, advocacy, bargaining and communicative models because transportation is in some ways more technical than other fields, it is important to note that transportation is perhaps unique among planning fields in that its systems depend on the interaction of a number of individuals and organizations. [ 98 ]
Strategic urban planning over the past decades has witnessed a metamorphosis of the role of the urban planner in the planning process. Growing numbers of citizens calling for democratic planning and development processes have played a huge role in allowing the public to make important decisions as part of the planning process. Community organizers and social workers are now deeply involved in planning from the grassroots level. [ 99 ] The term advocacy planning was coined by Paul Davidoff in his influential 1965 paper "Advocacy and Pluralism in Planning", which acknowledged the political nature of planning, urged planners to acknowledge that their actions are not value-neutral, and encouraged minority and underrepresented voices to be part of planning decisions. [ 100 ] Benveniste argued that planners had a political role to play and had to bend some truth to power if their plans were to be implemented. [ 101 ]
Developers have also played huge roles in development, particularly by planning projects. Many recent developments were results of large and small-scale developers who purchased land, designed the district and constructed the development from scratch. The Melbourne Docklands , for example, was largely an initiative pushed by private developers to redevelop the waterfront into a high-end residential and commercial district.
Recent theories of urban planning, espoused, for example, by Salingaros, see the city as an adaptive system that grows according to processes similar to those of plants. They say that urban planning should thus take its cues from such natural processes. [ 102 ] Such theories also advocate participation by inhabitants in the design of the urban environment, as opposed to simply leaving all development to large-scale construction firms. [ 103 ]
In the process of creating an urban plan or urban design, carrier-infill is one mechanism of spatial organization in which the city's figure and ground components are considered separately. The urban figure, namely buildings, is represented as total possible building volumes, which are left to be designed by architects in the following stages. The urban ground, namely in-between spaces and open areas, is designed to a higher level of detail. The carrier-infill approach is defined by an urban design performing as the carrying structure that creates the shape and scale of the spaces, including future building volumes that are then infilled by architects' designs. The contents of the carrier structure may include street pattern, landscape architecture, open space, waterways, and other infrastructure. The infill structure may contain zoning, building codes, quality guidelines, and solar access based upon a solar envelope. [ 104 ] [ 105 ] Carrier-infill urban design is differentiated from complete urban design, such as in the monumental axis of Brasília, in which the urban design and architecture were created together.
In carrier-infill urban design or urban planning, the negative space of the city, including landscape, open space, and infrastructure, is designed in detail. The positive space, typically building sites for future construction, is represented only as unresolved volumes. The volumes represent the total possible building envelope, which can then be infilled by individual architects.
| https://en.wikipedia.org/wiki/Theories_of_urban_planning |
A theory is a systematic and rational form of abstract thinking about a phenomenon, or the conclusions derived from such thinking. It involves contemplative and logical reasoning , often supported by processes such as observation, experimentation, and research. Theories can be scientific, falling within the realm of empirical and testable knowledge, or they may belong to non-scientific disciplines, such as philosophy, art, or sociology. In some cases, theories may exist independently of any formal discipline.
In modern science, the term "theory" refers to scientific theories , a well-confirmed type of explanation of nature , made in a way consistent with the scientific method , and fulfilling the criteria required by modern science . Such theories are described in such a way that scientific tests should be able to provide empirical support for them or empirical contradiction (" falsification ") of them. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge, [ 1 ] in contrast to more common uses of the word "theory" that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis ). [ 2 ] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures , and from scientific laws , which are descriptive accounts of the way nature behaves under certain conditions.
Theories guide the enterprise of finding facts rather than of reaching goals, and are neutral concerning alternatives among values. [ 3 ] : 131 A theory can be a body of knowledge , which may or may not be associated with particular explanatory models . To theorize is to develop this body of knowledge. [ 4 ] : 46
The word theory or "in theory" is sometimes used outside of science to refer to something which the speaker did not experience or test before. [ 5 ] In science, this same concept is referred to as a hypothesis , and the word "hypothetically" is used both inside and outside of science. In its usage outside of science, the word "theory" is very often contrasted to " practice " (from Greek praxis , πρᾶξις), a term for doing , which is opposed to theory. [ 6 ] A "classical example" of the distinction between "theoretical" and "practical" uses the discipline of medicine: medical theory involves trying to understand the causes and nature of health and sickness, while the practical side of medicine is trying to make people healthy. These two things are related but can be independent, because it is possible to research health and sickness without curing specific patients, and it is possible to cure a patient without knowing how the cure worked. [ a ]
The English word theory derives from a technical term in philosophy in Ancient Greek . As an everyday word, theoria , θεωρία , meant "looking at, viewing, beholding", but in more technical contexts it came to refer to contemplative or speculative understandings of natural things , such as those of natural philosophers , as opposed to more practical ways of knowing things, like that of skilled orators or artisans. [ b ] English-speakers have used the word theory since at least the late 16th century. [ 7 ] Modern uses of the word theory derive from the original definition, but have taken on new shades of meaning, still based on the idea of a theory as a thoughtful and rational explanation of the general nature of things.
Although it has more mundane meanings in Greek, the word θεωρία apparently developed special uses early in the recorded history of the Greek language . In the book From Religion to Philosophy , Francis Cornford suggests that the Orphics used the word theoria to mean "passionate sympathetic contemplation". [ 8 ] Pythagoras changed the word to mean "the passionless contemplation of rational, unchanging truth" of mathematical knowledge, because he considered this intellectual pursuit the way to reach the highest plane of existence. [ 9 ] Pythagoras emphasized subduing emotions and bodily desires to help the intellect function at the higher plane of theory. Thus, it was Pythagoras who gave the word theory the specific meaning that led to the classical and modern concept of a distinction between theory (as uninvolved, neutral thinking) and practice. [ 10 ]
Aristotle's terminology, as already mentioned, contrasts theory with praxis or practice, and this contrast persists to this day. For Aristotle, both practice and theory involve thinking, but the aims are different. Theoretical contemplation considers things humans do not move or change, such as nature , so it has no human aim apart from itself and the knowledge it helps create. On the other hand, praxis involves thinking, but always with an aim to desired actions, whereby humans cause change or movement themselves for their own ends. Any human movement that involves no conscious choice and thinking could not be an example of praxis or doing. [ c ]
Theories are analytical tools for understanding , explaining , and making predictions about a given subject matter. There are theories in many and varied fields of study, including the arts and sciences. A formal theory is syntactic in nature and is only meaningful when given a semantic component by applying it to some content (e.g., facts and relationships of the actual historical world as it is unfolding). Theories in various fields of study are often expressed in natural language , but can be constructed in such a way that their general form is identical to a theory as it is expressed in the formal language of mathematical logic . Theories may be expressed mathematically, symbolically, or in common language, but are generally expected to follow principles of rational thought or logic .
A theory is constructed of a set of sentences that are thought to be true statements about the subject under consideration. However, the truth of any one of these statements is always relative to the whole theory. Therefore, the same statement may be true with respect to one theory and not true with respect to another. This is, in ordinary language, the case with statements such as "He is a terrible person", which cannot be judged as true or false without reference to some interpretation of who "He" is and, for that matter, of what a "terrible person" is under the theory. [ 11 ]
Sometimes two theories have exactly the same explanatory power because they make the same predictions. A pair of such theories is called indistinguishable or observationally equivalent , and the choice between them reduces to convenience or philosophical preference. [ citation needed ]
The form of theories is studied formally in mathematical logic, especially in model theory . When theories are studied in mathematics, they are usually expressed in some formal language and their statements are closed under application of certain procedures called rules of inference . A special case of this, an axiomatic theory, consists of axioms (or axiom schemata) and rules of inference. A theorem is a statement that can be derived from those axioms by application of these rules of inference. Theories used in applications are abstractions of observed phenomena and the resulting theorems provide solutions to real-world problems. Obvious examples include arithmetic (abstracting concepts of number), geometry (concepts of space), and probability (concepts of randomness and likelihood).
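As an illustration, the axiom/inference-rule picture can be made concrete in a proof assistant. The following minimal Lean 4 sketch (with invented proposition names, not drawn from any source cited here) declares two axioms and derives a theorem from them by modus ponens:

```lean
-- A toy axiomatic theory: the axioms below, together with the
-- inference rules of Lean's logic, determine the set of theorems.
axiom Rain : Prop                -- an atomic proposition (illustrative)
axiom Wet  : Prop
axiom it_rains : Rain            -- axiom 1: Rain holds
axiom rain_wet : Rain → Wet      -- axiom 2: Rain implies Wet

-- A theorem of the theory, derived from the axioms by modus ponens
-- (which in Lean is simply function application).
theorem ground_is_wet : Wet := rain_wet it_rains
```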
Gödel's incompleteness theorem shows that no consistent, recursively enumerable theory (that is, one whose theorems form a recursively enumerable set) in which the concept of natural numbers can be expressed, can include all true statements about them. As a result, some domains of knowledge cannot be formalized, accurately and completely, as mathematical theories. (Here, formalizing accurately and completely means that all true propositions—and only true propositions—are derivable within the mathematical system.) This limitation, however, in no way precludes the construction of mathematical theories that formalize large bodies of scientific knowledge.
A theory is underdetermined (also called indeterminacy of data to theory ) if a rival, inconsistent theory is at least as consistent with the evidence. Underdetermination is an epistemological issue about the relation of evidence to conclusions. [ citation needed ]
A theory that lacks supporting evidence is generally, more properly, referred to as a hypothesis . [ 12 ]
If a new theory better explains and predicts a phenomenon than an old theory (i.e., it has more explanatory power ), we are justified in believing that the newer theory describes reality more correctly. This is called an intertheoretic reduction because the terms of the old theory can be reduced to the terms of the new one. For instance, our historical understandings of sound , light and heat have been reduced to wave compressions and rarefactions , electromagnetic waves , and molecular kinetic energy , respectively. These terms, which are identified with each other, are called intertheoretic identities. When an old and new theory are parallel in this way, we can conclude that the new one describes the same reality, only more completely.
When a new theory uses new terms that do not reduce to terms of an older theory, but rather replace them because they misrepresent reality, it is called an intertheoretic elimination. For instance, the obsolete scientific theory that put forward an understanding of heat transfer in terms of the movement of caloric fluid was eliminated when a theory of heat as energy replaced it. Also, the theory that phlogiston is a substance released from burning and rusting material was eliminated with the new understanding of the reactivity of oxygen.
Theories are distinct from theorems . A theorem is derived deductively from axioms (basic assumptions) according to a formal system of rules, sometimes as an end in itself and sometimes as a first step toward being tested or applied in a concrete situation; theorems are said to be true in the sense that the conclusions of a theorem are logical consequences of the axioms. Theories are abstract and conceptual, and are supported or challenged by observations in the world. They are 'rigorously tentative', meaning that they are proposed as true and expected to satisfy careful examination to account for the possibility of faulty inference or incorrect observation. Sometimes theories are incorrect, meaning that an explicit set of observations contradicts some fundamental claim or application of the theory, but more often theories are corrected to conform to new observations, by restricting the class of phenomena the theory applies to or changing the assertions made. An example of the former is the restriction of classical mechanics to phenomena involving macroscopic length scales and particle speeds much lower than the speed of light.
Theory is often distinguished from practice or praxis. The question of whether theoretical models of work are relevant to work itself is of interest to scholars of professions such as medicine, engineering, law, and management. [ 13 ] : 802
The gap between theory and practice has been framed as a problem of knowledge transfer : there is a task of translating research knowledge so that it can be applied in practice, and of ensuring that practitioners are made aware of it. Academics have been criticized for not attempting to transfer the knowledge they produce to practitioners. [ 13 ] : 804 [ 14 ] Another framing supposes that theory and practice seek to understand different problems and model the world in different terms (using different ontologies and epistemologies ). Another framing says that research does not produce theory that is relevant to practice. [ 13 ] : 803
In the context of management, Van de Ven and Johnson propose a form of engaged scholarship in which scholars examine problems that occur in practice, in an interdisciplinary fashion, producing results that create both new practical outcomes and new theoretical models, while targeting the theoretical results to be shared in an academic fashion. [ 13 ] : 815 They use a metaphor of "arbitrage" of ideas between disciplines, distinguishing it from collaboration. [ 13 ] : 803
In science, the term "theory" refers to "a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment." [ 15 ] [ 16 ] Theories must also meet further requirements, such as the ability to make falsifiable predictions with consistent accuracy across a broad area of scientific inquiry, and production of strong evidence in favor of the theory from multiple independent sources ( consilience ).
The strength of a scientific theory is related to the diversity of phenomena it can explain, which is measured by its ability to make falsifiable predictions with respect to those phenomena. Theories are improved (or replaced by better theories) as more evidence is gathered, so that accuracy in prediction improves over time; this increased accuracy corresponds to an increase in scientific knowledge. Scientists use theories as a foundation to gain further scientific knowledge, as well as to accomplish goals such as inventing technology or curing diseases.
The United States National Academy of Sciences defines scientific theories as follows:
The formal scientific definition of "theory" is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics) ... One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed. [ 17 ]
From the American Association for the Advancement of Science :
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory." It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact. [ 16 ]
The term theory is not appropriate for describing scientific models or untested but intricate hypotheses.
The logical positivists thought of scientific theories as deductive theories —that a theory's content is based on some formal system of logic and on basic axioms . In a deductive theory, any sentence which is a logical consequence of one or more of the axioms is also a sentence of that theory. [ 11 ] This is called the received view of theories .
In the semantic view of theories , which has largely replaced the received view, [ 18 ] [ 19 ] theories are viewed as scientific models . A model is an abstract and informative representation of reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country. In this approach, theories are a specific category of models that fulfill the necessary criteria. (See Theories as models for further discussion.)
In physics the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries, like equality of locations in space or in time, or identity of electrons, etc.)—which is capable of producing experimental predictions for a given category of physical systems. One good example is classical electromagnetism , which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in a form of a few equations called Maxwell's equations . The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered adequately tested, with new ones always in the making and perhaps untested.
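For reference, the "few equations" mentioned above are standardly written, in SI differential form (added here for illustration; this presentation is not taken from the sources cited), as

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$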
Certain tests may be infeasible or technically difficult. As a result, theories may make predictions that have not been confirmed or proven incorrect. These predictions may be described informally as "theoretical". They can be tested later, and if they are incorrect, this may lead to revision, invalidation, or rejection of the theory. [ 20 ]
In mathematics, the term theory is used differently than in science; necessarily so, since mathematics contains no explanations of natural phenomena per se , even though it may help provide insight into natural systems or be inspired by them. In the general sense, a mathematical theory is a branch of mathematics devoted to some specific topics or methods, such as set theory , number theory , group theory , probability theory , game theory , control theory , perturbation theory , etc., such as might be appropriate for a single textbook.
In mathematical logic , a theory has a related but different sense: it is the collection of the theorems that can be deduced from a given set of axioms , together with a given set of inference rules .
A theory can be either descriptive as in science, or prescriptive ( normative ) as in philosophy. [ 21 ] The latter are those whose subject matter consists not of empirical data, but rather of ideas . At least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation .
A field of study is sometimes named a "theory" because its basis is some initial set of assumptions describing the field's approach to the subject. These assumptions are the elementary theorems of the particular theory, and can be thought of as the axioms of that field. Some commonly known examples include set theory and number theory ; however, literary theory , critical theory , and music theory are also of the same form.
One form of philosophical theory is a metatheory or meta-theory . A metatheory is a theory whose subject matter is some other theory or set of theories. In other words, it is a theory about theories. Statements made in the metatheory about the theory are called metatheorems .
A political theory is an ethical theory about law and government. Often, the term "political theory" refers to a general view, or specific ethic, political belief, or attitude about politics.
In social science, jurisprudence is the philosophical theory of law. Contemporary philosophy of law addresses problems internal to law and legal systems, and problems of law as a particular social institution.
Most of the following are scientific theories. Some are not, but rather encompass a body of knowledge or art, such as music theory and visual arts theories. | https://en.wikipedia.org/wiki/Theory
In algebra , the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial . The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution . This problem was completely solved in 1830 by Évariste Galois , by introducing what is now called Galois theory .
Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics , to avoid confusion between old and new meanings of "algebra".
Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown . The fact that a complex solution always exists is the fundamental theorem of algebra , which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve in terms of radicals , that is, to express the solutions by a formula which is built with the four operations of arithmetic and with nth roots . This was done up to degree four during the 16th century. Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations . Gerolamo Cardano published them in his 1545 book Ars Magna , together with a solution for the quartic equations , discovered by his student Lodovico Ferrari . In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations.
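To make "solving in terms of radicals" concrete, the degree-two case is the familiar quadratic formula (a standard illustration, not drawn from the sources cited here): for $ax^2 + bx + c = 0$ with $a \neq 0$,

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$

a formula built entirely from the four operations of arithmetic and a square root. Cardano's and Ferrari's solutions play the same role for degrees three and four, using nested square and cube roots.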
The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that some fifth degree equations cannot be solved in radicals followed by Niels Henrik Abel 's complete proof in 1824 (now known as the Abel–Ruffini theorem ). Évariste Galois later introduced a theory (presently called Galois theory ) to decide which equations are solvable by radicals.
Other classical problems of the theory of equations are the following: | https://en.wikipedia.org/wiki/Theory_of_equations |
The theory of functional systems is a model that describes the structure of conduct , which was established by Russian and Soviet biologist and physiologist Pyotr Anokhin .
Functional systems were put forward by Anokhin as an alternative to the predominant concept of reflexes . Contrary to reflexes, the endpoints of functional systems are not actions themselves but adaptive results of these actions .
In contrast to reflexes, which are based on linear spread of information from receptors to executive organs through the central nervous system, functional systems are self-organizing non-linear systems composed of synchronized distributed elements. [ 1 ]
"The principle of functional systems": association of private mechanisms of the body in a holistic system of adaptive behavioral act, the establishment of "integrative unity".
There are two types of functional systems:
Choice of targets and of the methods of achieving them are the key factors that regulate behavior. According to Anokhin, within the structure of the behavioral act, afferent feedback is compared with the acceptor of the result of the action; this comparison produces positive or negative situational emotions, which affect the correction or termination of the action (another type of emotion, leading emotions, is associated with the satisfaction or dissatisfaction of needs in general and with the formation of the target). In addition, behavior is affected by memories of positive and negative emotions.
In general, the behavioral act is characterized by the meaningful and active role of the subject. | https://en.wikipedia.org/wiki/Theory_of_functional_systems
The theory of impetus , [ 1 ] developed in the Middle Ages, attempts to explain the forced motion of a body, what it is, and how it comes about or ceases. It is important to note that in ancient and medieval times, motion was always considered absolute, relative to the Earth as the center of the universe.
The theory of impetus is an auxiliary or secondary theory of Aristotelian dynamics , put forth initially to explain projectile motion against gravity . Aristotelian dynamics of forced (in antiquity called “unnatural”) motion states that a body (without a moving soul) only moves when an external force is constantly driving it. The greater the force acting, the proportionally greater the speed of the body. If the force stops acting, the body immediately returns to the natural state of rest. As we know today, this idea is wrong. It also states—as clearly formulated by John of Jandun in his work Quaestiones super 8 libros Physicorum Aristotelis from 1586—that not only motion but also force is transmitted to the medium, [ 2 ] such that this force propagates continuously from layer to layer of air, becoming weaker and weaker until it finally dies out. This is how the body finally comes to rest.
Although the medieval philosophers, beginning with John Philoponus , held to the intuitive idea that only a direct application of force could cause and maintain motion, they recognized that Aristotle's explanation of unnatural motion could not be correct. They therefore developed the concept of impetus. Impetus was understood to be a force inherent in a moving body that had previously been transferred to it by an external force during a previous direct contact.
The explanation of modern mechanics is completely different. First of all, motion is not absolute but relative, namely relative to a reference frame (observer), which in turn can move itself relative to another reference frame. For example, the speed of a bird flying relative to the earth is completely different than if you look at it from a moving car. Second, the observed speed of a body that is not subject to an external force never changes, regardless of who is observing it. The permanent state of a body is therefore uniform motion. Its continuity requires no external or internal force, but is based solely on the inertia of the body. If a force acts on a moving or stationary body, this leads to a change in the observed speed. The state of rest is merely a limiting case of motion. The term “impetus” as a force that maintains motion therefore has no equivalence in modern mechanics. At most, it comes close to the modern term “linear momentum” of a mass. This is because it is linear momentum as the product of mass and velocity that maintains motion due to the inertia of the mass (conservation of linear momentum). But momentum is not a force; rather, a force is the cause of a change in the momentum of a body, and vice versa.
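In modern notation (a standard formulation added here for contrast; it is not part of the medieval sources), linear momentum and the force law read

$$\mathbf{p} = m\mathbf{v}, \qquad \mathbf{F} = \frac{d\mathbf{p}}{dt},$$

so a body subject to no net force keeps its momentum, and hence its velocity, unchanged; a force changes the motion rather than sustaining it.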
Impetus theory was introduced by John Philoponus in the 6th century, [ 3 ] [ 4 ] and elaborated by Nur ad-Din al-Bitruji at the end of the 12th century. [ 5 ] The theory was modified by Avicenna in the 11th century and Abu'l-Barakāt al-Baghdādī in the 12th century, before it was later established in Western scientific thought by Jean Buridan in the 14th century. It is the intellectual precursor to the concepts of inertia , momentum and acceleration in classical mechanics .
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics , Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion, quantitative change, qualitative change, and substantial change.
Aristotle describes two kinds of motion: "violent" or "unnatural motion", such as that of a thrown stone, in Physics (254b10), and "natural motion", such as of a falling object, in On the Heavens (300a20). In violent motion, as soon as the agent stops causing it, the motion stops also: in other words, the natural state of an object is to be at rest, since Aristotle does not address friction .
In the 2nd century, Hipparchus assumed that the throwing force is transferred to the body at the time of the throw, and that the body dissipates it during the subsequent up-and-down motion of free fall. This is according to the Neoplatonist Simplicius of Cilicia , who quotes Hipparchus in his book Aristotelis De Caelo commentaria 264, 25 as follows: "Hipparchus says in his book On Bodies Carried Down by Their Weight that the throwing force is the cause of the upward motion of [a lump of] earth thrown upward as long as this force is stronger than that of the thrown body; the stronger the throwing force, the faster the upward motion. Then, when the force decreases, the upward motion continues at a decreased speed until the body begins to move downward under the influence of its own weight, while the throwing force still continues in some way. As this decreases, the velocity of the fall increases and reaches its highest value when this force is completely dissipated." Thus, Hipparchus does not speak of a continuous contact between the moving force and the moving body, or of the function of air as an intermediate carrier of motion, as Aristotle claims.
In the 6th century, John Philoponus partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force," but modified it to include his idea that the hurled body acquires a motive power or inclination for forced movement from the agent producing the initial motion and that this power secures the continuation of such motion. However, he argued that this impressed virtue was temporary: that it was a self-expending inclination, and thus the violent motion produced comes to an end, changing back into natural motion. [ 6 ]
In his book On Aristotle Physics 641, 12; 641, 29; 642, 9 Philoponus first argues explicitly against Aristotle's explanation that a thrown stone, after leaving the hand, cannot be propelled any further by the air behind it. Then he continues: "Instead, some immaterial kinetic force must be imparted to the projectile by the thrower. Whereby the pushed air contributes either nothing or only very little to this motion. But if moving bodies are necessarily moved in this way, it is clear that the same process will take place much more easily if an arrow or a stone is thrown necessarily and against its tendency into empty space, and that nothing is necessary for this except the thrower." This last sentence is intended to show that in empty space—which Aristotle rejects—and contrary to Aristotle's opinion, a moving body would continue to move. It should be pointed out that Philoponus in his book uses two different expressions for impetus: kinetic capacity (dynamis) and kinetic force (energeia). Both expressions designate in his theory a concept, which is close to the today's concept of energy, but they are far away from the Aristotelian conceptions of potentiality and actuality.
Philoponus' theory of imparted force cannot yet be understood as a principle of inertia. For while he rightly says that the driving quality is no longer imparted externally but has become an internal property of the body, he still accepts the Aristotelian assertion that the driving quality is a force (power) that now acts internally and to which velocity is proportional. In modern physics since Newton, however, velocity is a quality that persists in the absence of forces.
The first one to grasp this persistent motion by itself was William of Ockham . In his Commentary on the Sentences , Book 2, Question 26, M, written in 1318, he first argues: "If someone standing at point C were to fire a projectile aimed at point B, while another person standing at point F were to throw a projectile at point C, so that at some point M the two projectiles would meet, it would be necessary, according to the Aristotelian explanation, for the same portion of air at point M to be moved simultaneously in two different directions." The impossibility of this, according to Ockham, invalidates the Aristotelian explanation of the movement of projectiles. So Ockham goes on to say: "I say therefore that that which moves (ipsum movens) ... after the separation of the moving body from the original projector, is the body moved by itself (ipsum motum secundum se) and not by any power in it or relative to it (virtus absoluta in eo vel respectiva), ... ." It has been claimed by some historians that by rejecting the basic Aristotelian principle "Everything that moves is moved by something else." (Omne quod moventur ab alio movetur.), Ockham took the first step toward the principle of inertia.
Around 1320, Francis de Marchia developed a detailed and elaborate theory of his virtus derelicta . Marchia described virtus derelicta as force impressed on a projectile that gradually passes away and is consumed by the movement it generates. It is a form that is "not simply permanent, nor simply fluent, but almost medial", staying for some time in the body, but then fading away. This is different from Buridan's impetus (see below), which is a permanent state (res permanens) that is only diminished or destroyed by an opposing force—the resistance of the medium or the gravity of the projectile, which tends in a direction opposite to its motion. Buridan rightly says that without these opposing forces, the projectile would continue to move at constant speed forever.
In the 11th century, Avicenna (Ibn Sīnā) discussed Philoponus' theory in The Book of Healing , in Physics IV.14 he says: [ 7 ]
When we independently verify the issue (of projectile motion), we find the most correct doctrine is the doctrine of those who think that the moved object acquires an inclination from the mover
Ibn Sīnā agreed that an impetus is imparted to a projectile by the thrower, but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it. [ 8 ] [ 9 ] [ 10 ] Ibn Sina made distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gained mayl when the object is in opposition to its natural motion. Therefore, he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that object will be in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, which is consistent with Newton's concept of inertia. [ 11 ] This idea (which dissented from the Aristotelian view) was later described as "impetus" by Jean Buridan , who may have been influenced by Ibn Sina. [ 12 ] [ 13 ]
In the 12th century, Hibat Allah Abu'l-Barakat al-Baghdaadi adopted Philoponus' theory of impetus. In his Kitab al-Mu'tabar , Abu'l-Barakat stated that the mover imparts a violent inclination ( mayl qasri ) on the moved and that this diminishes as the moving object distances itself from the mover. [ 14 ] Like Philoponus, and unlike Ibn Sina, al-Baghdaadi believed that the mayl self-extinguishes itself. [ 15 ]
He also proposed an explanation of the acceleration of falling bodies where "one mayl after another" is successively applied, because it is the falling body itself which provides the mayl, as opposed to shooting a bow, where only one violent mayl is applied. [ 15 ] According to Shlomo Pines , al-Baghdaadi's theory was
the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]. [ 15 ]
Jean Buridan and Albert of Saxony later refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. [ 14 ]
In the 14th century, Jean Buridan postulated the notion of motive force, which he named impetus.
When a mover sets a body in motion he implants into it a certain impetus, that is, a certain force enabling a body to move in the direction in which the mover starts it, be it upwards, downwards, sidewards, or in a circle. The implanted impetus increases in the same ratio as the velocity. It is because of this impetus that a stone moves on after the thrower has ceased moving it. But because of the resistance of the air (and also because of the gravity of the stone) which strives to move it in the opposite direction to the motion caused by the impetus, the latter will weaken all the time. Therefore the motion of the stone will be gradually slower, and finally the impetus is so diminished or destroyed that the gravity of the stone prevails and moves the stone towards its natural place. In my opinion one can accept this explanation because the other explanations prove to be false whereas all phenomena agree with this one. [ 16 ]
Buridan gives his theory a mathematical value: impetus = weight × velocity. [ citation needed ]
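In modern symbols (an illustrative transcription, not Buridan's own notation), this reads

$$I = W \cdot v,$$

which parallels, but is not identical to, the modern momentum $p = m v$, since weight rather than mass enters the product.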
Buridan's pupil Dominicus de Clavasio put it in his 1357 De Caelo as follows:
When something moves a stone by violence, in addition to imposing on it an actual force, it impresses in it a certain impetus. In the same way gravity not only gives motion itself to a moving body, but also gives it a motive power and an impetus, ...
Buridan's position was that a moving object would only be arrested by the resistance of the air and the weight of the body which would oppose its impetus. [ 17 ] Buridan also maintained that impetus was proportional to speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum . Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle.
Buridan pointed out that neither Aristotle's unmoved movers nor Plato's souls are in the Bible, so he applied impetus theory to the eternal rotation of the celestial spheres by extension of a terrestrial example of its application to rotary motion in the form of a rotating millwheel that continues rotating for a long time after the originally propelling hand is withdrawn, driven by the impetus impressed within it. [ 18 ] He wrote on the celestial impetus of the spheres as follows:
God, when He created the world, moved each of the celestial orbs as He pleased, and in moving them he impressed in them impetuses which moved them without his having to move them any more...And those impetuses which he impressed in the celestial bodies were not decreased or corrupted afterwards, because there was no inclination of the celestial bodies for other movements. Nor was there resistance which would be corruptive or repressive of that impetus. [ 19 ]
However, by discounting the possibility of any resistance either due to a contrary inclination to move in any opposite direction or due to any external resistance, he concluded their impetus was therefore not corrupted by any resistance. Buridan also discounted any inherent resistance to motion in the form of an inclination to rest within the spheres themselves, such as the inertia posited by Averroes and Aquinas. For otherwise that resistance would destroy their impetus, as the anti-Duhemian historian of science Anneliese Maier maintained the Parisian impetus dynamicists were forced to conclude because of their belief in an inherent inclinatio ad quietem or inertia in all bodies.
This raised the question of why the motive force of impetus does not therefore move the spheres with infinite speed. One impetus dynamics answer seemed to be that it was a secondary kind of motive force that produced uniform motion rather than infinite speed, [ 20 ] rather than producing uniformly accelerated motion like the primary force did by producing constantly increasing amounts of impetus. However, in his Treatise on the heavens and the world in which the heavens are moved by inanimate inherent mechanical forces, Buridan's pupil Oresme offered an alternative Thomist inertial response to this problem. His response was to posit a resistance to motion inherent in the heavens (i.e. in the spheres), but which is only a resistance to acceleration beyond their natural speed, rather than to motion itself, and was thus a tendency to preserve their natural speed. [ 21 ]
Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390), by writers in Poland such as John Cantius , and the Oxford Calculators . Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.
The Buridan impetus theory gave rise to one of the most important thought experiments in the history of science, the 'tunnel-experiment'. This experiment incorporated oscillatory and pendulum motion into dynamical analysis and the science of motion for the first time. It also established one of the important principles of classical mechanics. The pendulum was crucially important to the development of mechanics in the 17th century. The tunnel experiment also gave rise to the more generally important axiomatic principle of Galilean, Huygenian and Leibnizian dynamics, namely that a body rises to the same height from which it has fallen, a principle of gravitational potential energy . As Galileo Galilei expressed this fundamental principle of his dynamics in his 1632 Dialogo :
The heavy falling body acquires sufficient impetus [in falling from a given height] to carry it back to an equal height. [ 22 ]
This imaginary experiment predicted that a cannonball dropped down a tunnel going straight through the Earth's centre and out the other side would pass the centre and rise on the opposite surface to the same height from which it had first fallen, driven upwards by the gravitationally created impetus it had continually accumulated in falling to the centre. This impetus would require a violent motion correspondingly rising to the same height past the centre for the now opposing force of gravity to destroy it all in the same distance which it had previously required to create it. At this turning point the ball would then descend again and oscillate back and forth between the two opposing surfaces about the centre infinitely in principle. The tunnel experiment provided the first dynamical model of oscillatory motion, specifically in terms of A-B impetus dynamics. [ 23 ]
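Read through modern mechanics, the tunnel prediction can be checked numerically: inside a uniform-density Earth the restoring acceleration is proportional to the distance from the centre, giving simple harmonic motion. The sketch below is a minimal illustration under that uniform-density assumption (the constants and step size are illustrative), not a reconstruction of any medieval argument.

```python
import math

g, R = 9.81, 6.371e6    # surface gravity (m/s^2) and Earth's radius (m)
dt = 0.5                # integration time step (s)
r, v, t = R, 0.0, 0.0   # ball released from rest at one tunnel mouth

# Inside a uniform-density Earth, a(r) = -g * r / R: a linear
# restoring acceleration toward the centre (simple harmonic motion).
while True:
    a = -g * r / R
    v += a * dt         # semi-implicit Euler step
    r += v * dt
    t += dt
    if t > dt and v >= 0.0:   # momentarily at rest again: far surface
        break

print(f"turning point at r = {r / R:+.4f} R after {t / 60:.1f} min")
# Analytic half-period of the oscillation: pi * sqrt(R / g)
print(f"analytic half-period: {math.pi * math.sqrt(R / g) / 60:.1f} min")
```

The turning point comes out at about -1.0 R after roughly 42 minutes, matching the prediction that the ball rises on the opposite surface to the height from which it fell.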
This thought-experiment was then applied to the dynamical explanation of a real world oscillatory motion, namely that of the pendulum. The oscillating motion of the cannonball was compared to the motion of a pendulum bob by imagining it to be attached to the end of an immensely long cord suspended from the vault of the fixed stars centred on the Earth. The relatively short arc of its path through the distant Earth was practically a straight line along the tunnel. Real world pendula were then conceived of as just micro versions of this 'tunnel pendulum', but with far shorter cords and bobs oscillating above the Earth's surface in arcs corresponding to the tunnel as their oscillatory midpoint was dynamically assimilated to the tunnel's centre.
Through such ' lateral thinking ', the bob's lateral horizontal motion was conceived of as a case of gravitational free-fall followed by violent motion in a recurring cycle, with the bob repeatedly travelling through and beyond the motion's vertically lowest but horizontally middle point that substituted for the Earth's centre in the tunnel pendulum. The lateral motions of the bob, first towards and then away from the normal in the downswing and upswing, become lateral downward and upward motions in relation to the horizontal rather than to the vertical.
The orthodox Aristotelians saw pendulum motion as a dynamical anomaly, as 'falling to rest with difficulty.' Thomas Kuhn wrote in his 1962 The Structure of Scientific Revolutions that, on the impetus theory's novel analysis, the bob was not falling with any dynamical difficulty at all in principle, but was rather falling in repeated and potentially endless cycles of alternating downward gravitationally natural motion and upward gravitationally violent motion. [ 24 ] Galileo eventually appealed to pendulum motion to demonstrate that the speed of gravitational free-fall is the same for all unequal weights by virtue of dynamically modelling pendulum motion in this manner as a case of cyclically repeated gravitational free-fall along the horizontal in principle. [ 25 ]
The tunnel experiment was a crucial experiment in favour of impetus dynamics against both orthodox Aristotelian dynamics without any auxiliary impetus theory and Aristotelian dynamics with its H-P variant. According to the latter two theories, the bob cannot possibly pass beyond the normal. In orthodox Aristotelian dynamics there is no force to carry the bob upwards beyond the centre in violent motion against its own gravity that carries it to the centre, where it stops. When conjoined with the Philoponus auxiliary theory, in the case where the cannonball is released from rest, there is no such force because either all the initial upward force of impetus originally impressed within it to hold it in static dynamical equilibrium has been exhausted, or if any remained it would act in the opposite direction and combine with gravity to prevent motion through and beyond the centre. The cannonball being positively hurled downwards could not possibly result in an oscillatory motion either. Although it could then possibly pass beyond the centre, it could never return to pass through it and rise back up again. It would be logically possible for it to pass beyond the centre if upon reaching the centre some of the constantly decaying downward impetus remained and still was sufficiently stronger than gravity to push it beyond the centre and upwards again, eventually becoming weaker than gravity. The ball would then be pulled back towards the centre by its gravity but could not then pass beyond the centre to rise up again, because it would have no force directed against gravity to overcome it. Any possibly remaining impetus would be directed 'downwards' towards the centre, in the same direction it was originally created.
Thus pendulum motion was dynamically impossible for both orthodox Aristotelian dynamics and also for H-P impetus dynamics on this 'tunnel model' analogical reasoning. It was predicted by the impetus theory's tunnel prediction because that theory posited that a continually accumulating downwards force of impetus directed towards the centre is acquired in natural motion, sufficient to then carry it upwards beyond the centre against gravity, rather than the bob only having an initially upwards force of impetus away from the centre as in the theory of natural motion. So the tunnel experiment constituted a crucial experiment between three alternative theories of natural motion.
Impetus dynamics was to be preferred if the Aristotelian science of motion was to incorporate a dynamical explanation of pendulum motion. It was also to be preferred more generally if it was to explain other oscillatory motions, such as the to and fro vibrations around the normal of musical strings in tension, such as those of a guitar. The analogy made with the gravitational tunnel experiment was that the tension in the string pulling it towards the normal played the role of gravity, and thus when plucked (i.e. pulled away from the normal) and then released, it was the equivalent of pulling the cannonball to the Earth's surface and then releasing it. Thus the musical string vibrated in a continual cycle of the alternating creation of impetus towards the normal and its destruction after passing through the normal until this process starts again with the creation of fresh 'downward' impetus once all the 'upward' impetus has been destroyed.
This positing of a dynamical family resemblance of the motions of pendula and vibrating strings with the paradigmatic tunnel-experiment, the origin of all oscillations in the history of dynamics, was one of the greatest imaginative developments of medieval Aristotelian dynamics in its increasing repertoire of dynamical models of different kinds of motion.
Shortly before Galileo's theory of impetus, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
... [Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path. [ 26 ]
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. | https://en.wikipedia.org/wiki/Theory_of_impetus |
Theory of mind in animals is an extension to non-human animals of the philosophical and psychological concept of theory of mind (ToM) , sometimes known as mentalisation or mind-reading . It involves an inquiry into whether non-human animals have the ability to attribute mental states (such as intention , desires , pretending , knowledge ) to themselves and others, including recognition that others have mental states that are different from their own. [ 1 ] [ 2 ] [ 3 ] To investigate this issue experimentally, researchers place non-human animals in situations where their resulting behavior can be interpreted as supporting ToM or not.
The existence of theory of mind in non-human animals is controversial. On the one hand, one hypothesis proposes that some non-human animals have complex cognitive processes which allow them to attribute mental states to other individuals, sometimes called "mind-reading", while another proposes that non-human animals lack these skills and depend on simpler learning processes such as associative learning ; [ 4 ] in other words, they are simply behaviour-reading.
Several studies have been designed specifically to test whether non-human animals possess theory of mind by using interspecific or intraspecific communication. Several taxa have been tested including primates, birds and canines. Positive results have been found; however, these are often qualified as showing only low-grade ToM, or rejected as not convincing by other researchers.
The term "theory of mind" was originally proposed by Premack and Woodruff in 1978. [ 2 ] [ 5 ] Early studies focused almost entirely on studying if chimpanzees could understand the knowledge of humans. This approach turned out not to be particularly fruitful and 20 years later, Heyes, reviewing all the extant data, observed that there had been "no substantial progress" in the subject area. [ 6 ]
A 2000 paper [ 7 ] approached the issue differently by examining competitive foraging behaviour between primates of the same species ( conspecifics ). This led to the rather limited conclusion that "chimpanzees know what conspecifics do and do not see". [ 8 ]
In 2007, Penn and Povinelli wrote "there is still little consensus on whether or not nonhuman animals understand anything about unobservable mental states or even what it would mean for a non-verbal animal to understand the concept of a 'mental state'." They went on further to suggest that ToM was " any cognitive system, whether theory-like or not, that predicts or explains the behaviour of another agent by postulating that unobservable inner states particular to the cognitive perspective of that agent causally modulate that agent's behaviour". [ 9 ]
In 2010, an article in Scientific American acknowledged that dogs are considerably better at using social direction cues (e.g. pointing by humans) than are chimpanzees. [ 10 ] In the same year, Towner wrote, "the issue may have evolved beyond whether or not there is theory of mind in non-human primates to a more sophisticated appreciation that the concept of mind has many facets and some of these may exist in non-human primates while others may not." [ 5 ] Horowitz, working with dogs, agreed. [ 11 ]
In 2013, Whiten reviewed the literature and concluded that regarding the question "Are chimpanzees truly mentalists, like we are?", he stated he could not offer an affirmative or negative answer. [ 8 ] A similarly equivocal view was stated in 2014 by Brauer, who suggested that many previous experiments on ToM could be explained by the animals possessing other abilities. They went on further to make reference to several authors who suggest it is pointless to ask a "yes or no" question, rather, it makes more sense to ask which psychological states animals understand and to what extent. [ 12 ] At the same time, it was suggested that a "minimal theory of mind" may be "what enables those with limited cognitive resources or little conceptual sophistication, such as infants, chimpanzees, scrub-jays and human adults under load, to track others' perceptions, knowledge states and beliefs." [ 13 ]
In 2015, Cecilia Heyes , Professor of Psychology at the University of Oxford, wrote about research on ToM, "Since that time [2000], many enthusiasts have become sceptics, empirical methods have become more limited, and it is no longer clear what research on animal mindreading is trying to find" and "However, after some 35 years of research on mindreading in animals, there is still nothing resembling a consensus about whether any animal can ascribe any mental state" (Heyes' emphasis). Heyes further suggested that "In combination with the use of inanimate control stimuli, species that are unlikely to be capable of mindreading, and the 'goggles method' [see below], these approaches could restore both vigour and rigour to research on animal mindreading." [ 1 ]
Specific categories of behaviour are sometimes used as evidence of animal ToM, including imitation, self-recognition, social relationships, deception, role-taking (empathy), perspective-taking, teaching and co-operation, [ 5 ] however, this approach has been criticised. [ 6 ] Some researchers focus on animals' understanding of intention, gaze, perspective, or knowledge, i.e. what another being has seen. Several experimental methods have been developed which are widely used or suggested as appropriate tests for nonhuman animals possessing ToM. Some studies look at communication between individuals of the same species ( intraspecific ) whereas others investigate behaviour between individuals of different species ( interspecific ).
The Knower-Guesser method has been used in many studies relating to animal ToM. [ 6 ]
The competitive feeding paradigm approach is considered by some as evidence that animals have some understanding of the relationship between "seeing" and "knowing". [ 1 ]
In one suggested protocol, chimpanzees are given first-hand experience of wearing two mirrored visors. One of the visors is transparent whereas the other is not. The visors themselves are of markedly different colours or shapes. During the subsequent test session, the chimpanzees are given the opportunity to use their species-typical begging behaviour to request food from one of the two humans, one wearing the transparent visor and the other wearing the opaque. If chimpanzees possess ToM, it would be expected they would beg more often from the human wearing the transparent visor.
A method used to test ToM in human children has been adapted for testing non-human animals. The basis of the test is to track the gaze of the animal. One human hides an object in view of a second human who then leaves the room. The object is then removed. [ 14 ]
Many ToM studies have used nonhuman primates (NHPs). One study that examined the understanding of intention in orangutans ( Pongo pygmaeus ), chimpanzees ( Pan troglodytes ) and children showed that all three species understood the difference between accidental and intentional acts. [ 15 ]
There is controversy over the interpretation of evidence purporting to show ToM in chimpanzees. [ 16 ]
Chimpanzees were unable to follow a human's gaze to find food hidden under opaque bowls, but were able to do so when food was hidden in tubes that the experimenter was able to look into. This seems to suggest that chimpanzees can infer another individual's perception depending on the clarity of the mechanism through which the individual has gained that knowledge. [ 17 ]
Attempts to use the "Goggles Method" (see above) on highly human- enculturated chimpanzees failed to demonstrate they possess ToM. [ 9 ]
In contrast, chimpanzees use the gaze of other chimpanzees to gain information about whether food is accessible. [ 7 ] Subordinate chimpanzees are able to use the knowledge state of dominant chimpanzees to determine which container has hidden food. [ 18 ]
Young chimpanzees were shown to reliably help researchers perform tasks that involved reaching (such as picking up dropped items that the researcher struggled to retrieve), without specific prompting. This suggests that these chimpanzees were able to understand the researcher's intentions in these cases and acted upon them. [ 19 ] [ 20 ]
In a similar study, chimps were provided with a preference box with two compartments, one containing a picture of food, the other containing a picture of nothing. Neither were actually related to the contents of the box. In a foraging competition game, chimpanzees avoided the chamber with the picture of food when their competitor had chosen one of the chambers before them. [ 21 ] [ 22 ]
Captive bonobos such as Kanzi have been reported to show concern for their handlers’ well-being. [ 23 ] Bonobos also console other bonobos who are victims of aggressive conflicts and reconcile after participating in these conflicts. [ 24 ] Both of these behaviors suggest some semblance of ToM through an attribution of mental states to another individual.
Chimpanzees have passed the False Belief Test (see above) involving anticipating the gaze of humans when objects have been removed. Infrared eye-tracking showed that the chimpanzee subjects' gaze was focused on where the experimenter would falsely believe the object/subject to be, rather than on its actual location, of which the chimps were aware. This seems to suggest that the chimpanzees were capable of ascribing false belief to the experimenter. [ 25 ]
In one approach testing monkeys, rhesus macaques ( Macaca mulatta ) are able to "steal" a contested grape from one of two human competitors. In six experiments, the macaques selectively stole the grape from a human who was incapable of seeing the grape, rather than from the human who was visually aware. [ 26 ] Similarly, free ranging rhesus macaques preferentially choose to steal food items from locations where they can be less easily observed by humans, or where they will make less noise. [ citation needed ]
The authors also reported that at least one individual of each of the species showed (weak) evidence of ToM. [ 27 ]
In a multi-species study, it was shown that chimpanzees, bonobos and orangutans passed the False Belief Test (see above). [ 25 ]
In 2009, a summary of the ToM research, particularly emphasising an extensive comparison of humans, chimpanzees and orang-utans, [ 28 ] concluded that great apes do not exhibit understanding of human referential intentions expressed in communicative gestures, such as pointing. [ 29 ]
Grey parrots ( Psittacus erithacus ) have demonstrated high levels of intelligence. Irene Pepperberg did experiments with these and her most accomplished parrot, Alex , demonstrated behaviour which seemed to manipulate the trainer, possibly indicating theory of mind. [ 30 ]
Ravens are members of the family Corvidae and are widely regarded as having complex cognitive abilities. [ 31 ] [ 32 ] Other studies indicate that ravens not only recall who was watching them during caching, but also know the effects of visual barriers on what competitors can and cannot see, and how this affects their pilfering. [ 33 ]
Ravens have been tested for their understanding of "seeing" as a mental state in other ravens. [ 34 ] The researchers further suggested that their findings could be considered in terms of the "minimal" (as opposed to "full-blown") ToM recently suggested. [ 13 ]
Using the Knower-Guesser approach, ravens observing a human hiding food are capable of predicting the behaviour of bystander ravens that had been visible at both, one, or neither of two baiting events. The visual field of the competitors was manipulated independently of the view of the test raven. [ 35 ]
Scrub jays are also corvids. Western scrub jays ( Aphelocoma californica ) both cache food and pilfer other scrub jays' caches. They use a range of tactics to minimise the possibility that their own caches will be pilfered. One of these tactics is to remember which individual scrub jay watched them during particular caching events and adjust their re-caching behaviour accordingly. [ 36 ] One study with particularly interesting results found that only scrub jays which had themselves pilfered would re-cache when they had been observed making the initial cache. [ 37 ] This has been interpreted as the re-caching bird projecting its own experiences of pilfering intent onto those of another potential pilferer, and taking appropriate action. [ 8 ] [ 38 ]
Domestic dogs ( Canis familiaris ) show an impressive ability to use human behaviours such as pointing and gazing to find food and toys. The performance of dogs in these studies is superior to that of NHPs; [ 39 ] however, some have stated categorically that dogs do not possess a human-like ToM. [ 12 ] [ 40 ]
Similarly, dogs preferentially use the behaviour of the human Knower to indicate the location of food. This is unrelated to the sex or age of the dog. In another study, 14 of 15 dogs preferred the location indicated by the Knower on the first trial, whereas chimpanzees require approximately 100 trials to reliably exhibit the preference. [ 39 ] [ 29 ]
An experiment at the University of Bristol found that one out of ten pigs was possibly able to understand what other pigs can see. That pig observed another pig which had view of a maze in which food was being hidden, and trailed that pig through the maze to the food. The other pigs involved in the experiment did not. [ 41 ] [ 42 ]
A 2006 study found that goats exhibited intricate social behaviours indicative of high-level cognitive processes, particularly in competitive situations. The study included an experiment in which a subordinate animal was allowed to choose between food that a dominant animal could also see and food that it could not; those who were subject to aggressive behaviour selected the food that the dominant animal could not see, suggesting that they are able to perceive a threat based on being within the dominant animal's view – in other words, visual perspective taking. [ 43 ] | https://en.wikipedia.org/wiki/Theory_of_mind_in_animals |
The theory of solar cells explains the process by which light energy in photons is converted into electric current when the photons strike a suitable semiconductor device . The theoretical studies are of practical use because they predict the fundamental limits of a solar cell , and give guidance on the phenomena that contribute to losses and solar cell efficiency .
When a photon hits a piece of semiconductor, one of three things can happen: the photon can pass straight through the material (this generally happens for lower-energy photons), it can reflect off the surface, or it can be absorbed if its energy is higher than the band gap value, generating an electron-hole pair.
When a photon is absorbed, its energy is given to an electron in the crystal lattice. Usually this electron is in the valence band . The energy given to the electron by the photon "excites" it into the conduction band where it is free to move around within the semiconductor. The network of covalent bonds that the electron was previously a part of now has one fewer electron. This missing electron is known as a hole, and it has a positive charge. The presence of a missing covalent bond allows the bonded electrons of neighboring atoms to move into the "hole", leaving another hole behind, thus propagating holes throughout the lattice in the opposite direction to the movement of the negatively charged electrons. It can be said that photons absorbed in the semiconductor create electron-hole pairs.
A photon only needs to have energy greater than that of the band gap in order to excite an electron from the valence band into the conduction band. However, the solar frequency spectrum approximates a black body spectrum at about 5,800 K, [ 1 ] and as such, much of the solar radiation reaching the Earth is composed of photons with energies greater than the band gap of silicon (1.12 eV), which is near to the ideal value for a terrestrial solar cell (1.4 eV). These higher energy photons will be absorbed by a silicon solar cell, but the difference in energy between these photons and the silicon band gap is converted into heat (via lattice vibrations — called phonons ) rather than into usable electrical energy.
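To illustrate the thermalisation loss described above, the following sketch (not from the article; the wavelengths are arbitrary examples) compares the photon energy hc/λ with the 1.12 eV silicon band gap quoted in the text:

```python
# Minimal sketch: energy lost to heat when a photon with energy above the
# silicon band gap is absorbed.  All numbers are illustrative.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
q = 1.602e-19   # joules per electronvolt

E_GAP_SI = 1.12  # silicon band gap in eV, as quoted in the text

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given vacuum wavelength in nanometres."""
    return h * c / (wavelength_nm * 1e-9) / q

def thermalisation_loss_ev(wavelength_nm):
    """Excess energy (eV) converted to heat (phonons) after absorption.
    Photons below the band gap are assumed not to be absorbed at all."""
    excess = photon_energy_ev(wavelength_nm) - E_GAP_SI
    return max(excess, 0.0)

if __name__ == "__main__":
    for wl in (400, 600, 800, 1000, 1200):   # nm, spanning the solar spectrum
        print(f"{wl} nm: E = {photon_energy_ev(wl):.2f} eV, "
              f"heat loss = {thermalisation_loss_ev(wl):.2f} eV")
```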
The most commonly known solar cell is configured as a large-area p–n junction made from silicon. As a simplification, one can imagine bringing a layer of n-type silicon into direct contact with a layer of p-type silicon. n-type doping produces mobile electrons (leaving behind positively charged donors) while p-type doping produces mobile holes (and negatively charged acceptors). In practice, p–n junctions of silicon solar cells are not made in this way, but rather by diffusing an n-type dopant into one side of a p-type wafer (or vice versa).
If a piece of p-type silicon is placed in close contact with a piece of n-type silicon, then a diffusion of electrons occurs from the region of high electron concentration (the n-type side of the junction) into the region of low electron concentration (the p-type side of the junction). When the electrons diffuse into the p-type side, each one annihilates a hole, making that side net negatively charged (because the number of mobile positive holes is now less than the number of negative acceptors). Similarly, holes diffusing to the n-type side make it more positively charged. However (in the absence of an external circuit) this diffusion current of carriers does not go on indefinitely, because the charge build-up on either side of the junction produces an electric field that opposes further diffusion of more charges. Eventually, an equilibrium is reached where the net current is zero, leaving a region on either side of the junction, in which electrons and holes have diffused across and annihilated each other, called the depletion region because it contains practically no mobile charge carriers. It is also known as the space charge region , although space charge extends a bit further in both directions than the depletion region.
Once equilibrium is established, electron-hole pairs generated in the depletion region are separated by the electric field, with electrons attracted to the positive n-type side and holes to the negative p-type side, reducing the charge (and the electric field) built up by the diffusion just described. If the device is unconnected (or the external load is very high), diffusion current would eventually restore the equilibrium charge by bringing the electrons and holes back across the junction, but if the connected load is small enough, the electrons prefer to go around the external circuit in their attempt to restore equilibrium, doing useful work on the way.
There are two causes of charge carrier motion and separation in a solar cell: drift of carriers, driven by the electric field, and diffusion of carriers, driven by their concentration gradients.
These two "forces" may work one against the other at any given point in the cell. For instance, an electron moving through the junction from the p region to the n region (as in the diagram at the beginning of this article) is being pushed by the electric field against the concentration gradient. The same goes for a hole moving in the opposite direction.
It is easiest to understand how a current is generated when considering electron-hole pairs that are created in the depletion zone, which is where there is a strong electric field. The electron is pushed by this field toward the n side and the hole toward the p side. (This is opposite to the direction of current in a forward-biased diode, such as a light-emitting diode in operation.) When the pair is created outside the space charge zone, where the electric field is smaller, diffusion also acts to move the carriers, but the junction still plays a role by sweeping any electrons that reach it from the p side to the n side, and by sweeping any holes that reach it from the n side to the p side, thereby creating a concentration gradient outside the space charge zone.
In thick solar cells there is very little electric field in the active region outside the space charge zone, so the dominant mode of charge carrier separation is diffusion. In these cells the diffusion length of minority carriers (the length that photo-generated carriers can travel before they recombine) must be large compared to the cell thickness. In thin film cells (such as amorphous silicon), the diffusion length of minority carriers is usually very short due to the existence of defects, and the dominant charge separation is therefore drift, driven by the electrostatic field of the junction, which extends to the whole thickness of the cell. [ 2 ]
Once the minority carrier enters the drift region, it is 'swept' across the junction and, at the other side of the junction, becomes a majority carrier. This reverse current is a generation current, fed both thermally and (if present) by the absorption of light. On the other hand, majority carriers are driven into the drift region by diffusion (resulting from the concentration gradient), which leads to the forward current; only the majority carriers with the highest energies (in the so-called Boltzmann tail; cf. Maxwell–Boltzmann statistics ) can fully cross the drift region. Therefore, the carrier distribution in the whole device is governed by a dynamic equilibrium between reverse current and forward current.
Ohmic metal-semiconductor contacts are made to both the n-type and p-type sides of the solar cell, and the electrodes are connected to an external load. Electrons that are created on the n-type side, or created on the p-type side, "collected" by the junction and swept onto the n-type side, may travel through the wire, power the load, and continue through the wire until they reach the p-type semiconductor-metal contact. Here, they recombine with a hole that was either created as an electron-hole pair on the p-type side of the solar cell, or a hole that was swept across the junction from the n-type side after being created there.
The voltage measured is equal to the difference in the quasi Fermi levels of the majority carriers (electrons in the n-type portion and holes in the p-type portion) at the two terminals. [ 3 ]
An equivalent circuit model of an ideal solar cell's p–n junction uses an ideal current source (whose photogenerated current I L increases with light intensity) in parallel with a diode (whose current I D represents recombination losses). To account for resistive losses , a shunt resistance R SH and a series resistance R S are added as lumped elements . [ 4 ] The resulting output current I out equals the photogenerated current minus the currents through the diode and shunt resistor: [ 5 ] [ 6 ]
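Spelled out in the same notation, this current balance is simply (a sketch of the relation just described, not quoted from a source):

$$ I_\text{out} = I_\text{L} - I_\text{D} - I_\text{SH} $$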
The junction voltage (across both the diode and shunt resistance) is:
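In the lumped model above, the junction voltage is the terminal voltage plus the drop across the series resistance; as a sketch:

$$ V_\text{j} = V_\text{out} + I_\text{out} R_\text{S} $$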
where V out is the voltage across the output terminals. The leakage current I SH through the shunt resistor is proportional to the junction's voltage V j , according to Ohm's law :
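That is, under the stated Ohm's-law assumption,

$$ I_\text{SH} = \frac{V_\text{j}}{R_\text{SH}} $$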
By the Shockley diode equation , the current diverted through the diode is:
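In this notation the Shockley expression takes its standard form (stated here for reference, with the symbols defined in the next sentence):

$$ I_\text{D} = I_0 \left[ \exp\!\left( \frac{V_\text{j}}{n V_\text{T}} \right) - 1 \right] $$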
where I 0 is the reverse saturation current, n is the diode ideality factor, and V T = kT/q is the thermal voltage (with k the Boltzmann constant, T the absolute temperature and q the elementary charge).
Substituting these into the first equation produces the characteristic equation of a solar cell, which relates solar cell parameters to the output current and voltage:
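Although the characteristic equation itself is not reproduced here, the description above corresponds to the implicit relation I_out = I_L − I_0[exp((V_out + I_out·R_S)/(n·V_T)) − 1] − (V_out + I_out·R_S)/R_SH. As an illustrative sketch (the parameter values are hypothetical and the helper function is not from any particular library), it can be solved numerically for the output current at a given terminal voltage:

```python
import math
from scipy.optimize import brentq

def cell_current(v_out, i_l=5.0, i_0=1e-10, n=1.0, r_s=0.02, r_sh=500.0, t=298.15):
    """Solve the single-diode characteristic equation for I_out at terminal
    voltage v_out.  All parameter values are illustrative, not measured data."""
    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    v_t = k * t / q      # thermal voltage

    def residual(i_out):
        v_j = v_out + i_out * r_s                        # junction voltage
        i_d = i_0 * (math.exp(v_j / (n * v_t)) - 1.0)    # diode (recombination) current
        i_sh = v_j / r_sh                                # shunt leakage current
        return i_l - i_d - i_sh - i_out                  # characteristic equation

    # The physical solution lies between -I_L and slightly above I_L.
    return brentq(residual, -i_l, i_l + 1.0)

if __name__ == "__main__":
    for v in (0.0, 0.3, 0.5, 0.6):
        print(f"V = {v:.2f} V -> I = {cell_current(v):.3f} A")
```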
An alternative derivation produces an equation similar in appearance, but with V out on the left-hand side. The two alternatives are identities ; that is, they yield precisely the same results.
Since the parameters I 0 , n , R S , and R SH cannot be measured directly, the most common application of the characteristic equation is nonlinear regression to extract the values of these parameters on the basis of their combined effect on solar cell behavior.
When R S is not zero, the above equation does not give I out directly, but it can then be solved using the Lambert W function :
When an external load is used with the cell, its resistance can simply be added to R S and V out {\displaystyle V_{\text{out}}} set to zero in order to find the current.
When I 0 R S / n V T is small, we can use the approximation W(xy)/x → y as x → 0 to produce something much easier to work with:
Several further simplifications are now possible, such as when R S ≪ R SH , which leads to:
When the current generated by the PV is large compared with the current in the shunt, i.e. I L ≫ V out / R SH (because the shunt resistance is large), there is an analytical solution for V out for any I out less than I L + I 0 :
Otherwise one can solve for V out using the Lambert W function:
However, when R SH is large it's better to solve the original equation numerically.
The general form of the solution is a curve with I out decreasing as V out increases (see graphs lower down). The slope at small or negative V out (where the W function is near zero) approaches −1/( R S + R SH ), whereas the slope at high V out approaches −1/ R S . Therefore, for high optimum output power P out = I out V out , it is desirable for R SH to be large and R S to be small.
When the cell is operated at open circuit , I out = 0 and the voltage across the output terminals is defined as the open-circuit voltage . Assuming the shunt resistance is high enough to neglect the final term of the characteristic equation, the open-circuit voltage V OC is:
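Neglecting the shunt term and setting I out = 0 in the characteristic equation gives the standard logarithmic expression (a sketch of the result referred to here):

$$ V_\text{OC} \approx n V_\text{T} \ln\!\left( \frac{I_\text{L}}{I_0} + 1 \right) $$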
Similarly, when the cell is operated at short circuit , V out = 0 and the current I SC through the terminals is defined as the short-circuit current . It can be shown that for a high-quality solar cell (low R S and I 0 , and high R SH ) the short-circuit current is:
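For such a cell the diode and shunt terms are negligible at zero terminal voltage, so, as a sketch of the stated result:

$$ I_\text{SC} \approx I_\text{L} $$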
It is not possible to extract any power from the device when operating at either open circuit or short circuit conditions.
The values of I L , I 0 , R S , and R SH are dependent upon the physical size of the solar cell. In comparing otherwise identical cells, a cell with twice the junction area of another will, in principle, have double the I L and I 0 because it has twice the area where photocurrent is generated and across which diode current can flow. By the same argument, it will also have half the R S of the series resistance related to vertical current flow; however, for large-area silicon solar cells, the scaling of the series resistance encountered by lateral current flow is not easily predictable since it will depend crucially on the grid design (it is not clear what "otherwise identical" means in this respect). Depending on the shunt type, the larger cell may also have half the R SH because it has twice the area where shunts may occur; on the other hand, if shunts occur mainly at the perimeter, then R SH will decrease according to the change in circumference, not area.
Since the changes in the currents are the dominating ones and are balancing each other, the open-circuit voltage is practically the same; V OC starts to depend on the cell size only if R SH becomes too low. To account for the dominance of the currents, the characteristic equation is frequently written in terms of current density , or current produced per unit cell area:
where J is the current density (current per unit cell area), J L is the photogenerated current density, J 0 is the reverse saturation current density, and r S and r SH are the specific series and shunt resistances (the corresponding resistances scaled by the cell area).
This formulation has several advantages. One is that since cell characteristics are referenced to a common cross-sectional area they may be compared for cells of different physical dimensions. While this is of limited benefit in a manufacturing setting, where all cells tend to be the same size, it is useful in research and in comparing cells between manufacturers. Another advantage is that the density equation naturally scales the parameter values to similar orders of magnitude, which can make numerical extraction of them simpler and more accurate even with naive solution methods.
There are practical limitations of this formulation. For instance, certain parasitic effects grow in importance as cell sizes shrink and can affect the extracted parameter values. Recombination and contamination of the junction tend to be greatest at the perimeter of the cell, so very small cells may exhibit higher values of J 0 or lower values of R SH than larger cells that are otherwise identical. In such cases, comparisons between cells must be made cautiously and with these effects in mind.
This approach should only be used for comparing solar cells with a comparable layout. For instance, a comparison between primarily square solar cells, like typical crystalline silicon solar cells, and narrow but long solar cells, like typical thin film solar cells, can lead to erroneous conclusions caused by the different kinds of current paths and therefore the influence of, for instance, a distributed series resistance contribution to r S . [ 8 ] [ 9 ] The macro-architecture of the solar cells can result in different surface areas being placed in a fixed volume, particularly for thin film solar cells and flexible solar cells, which may allow for highly convoluted folded structures. If volume is the binding constraint, then efficiency density based on surface area may be of less relevance.
Transparent conducting electrodes are essential components of solar cells. Such an electrode is either a continuous film of indium tin oxide or a conducting wire network, in which the wires are charge collectors while the voids between wires are transparent to light. An optimum wire-network density is essential for maximum solar cell performance, as a higher wire density blocks light transmittance while a lower wire density leads to high recombination losses because of the greater distance traveled by the charge carriers. [ 10 ]
Temperature affects the characteristic equation in two ways: directly, via T in the exponential term, and indirectly via its effect on I 0 (strictly speaking, temperature affects all of the terms, but these two far more significantly than the others). While increasing T reduces the magnitude of the exponent in the characteristic equation, the value of I 0 increases exponentially with T . The net effect is to reduce V OC (the open-circuit voltage) linearly with increasing temperature. The magnitude of this reduction is inversely proportional to V OC ; that is, cells with higher values of V OC suffer smaller reductions in voltage with increasing temperature. For most crystalline silicon solar cells the change in V OC with temperature is about −0.50%/°C, though the rate for the highest-efficiency crystalline silicon cells is around −0.35%/°C. By way of comparison, the rate for amorphous silicon solar cells is −0.20 to −0.30%/°C, depending on how the cell is made.
The amount of photogenerated current I L increases slightly with increasing temperature because of an increase in the number of thermally generated carriers in the cell. This effect is slight, however: about 0.065%/°C for crystalline silicon cells and 0.09%/°C for amorphous silicon cells.
The overall effect of temperature on cell efficiency can be computed using these factors in combination with the characteristic equation. However, since the change in voltage is much stronger than the change in current, the overall effect on efficiency tends to be similar to that on voltage. Most crystalline silicon solar cells decline in efficiency by 0.50%/°C and most amorphous cells decline by 0.15−0.25%/°C. The figure above shows I-V curves that might typically be seen for a crystalline silicon solar cell at various temperatures.
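As a rough illustration of the coefficients quoted above, the sketch below scales a hypothetical crystalline-silicon cell's open-circuit voltage and efficiency linearly with temperature; the 25 °C reference values are invented for the example, and the linear scaling is itself an approximation:

```python
def voc_at_temperature(voc_25c, temp_c, coeff_per_c=-0.0050):
    """Scale an open-circuit voltage measured at 25 degC to another temperature
    using a relative coefficient (default -0.50%/degC, as quoted for crystalline
    silicon in the text)."""
    return voc_25c * (1.0 + coeff_per_c * (temp_c - 25.0))

def efficiency_at_temperature(eta_25c, temp_c, coeff_per_c=-0.0050):
    """Same linear scaling applied to cell efficiency."""
    return eta_25c * (1.0 + coeff_per_c * (temp_c - 25.0))

if __name__ == "__main__":
    # Hypothetical crystalline-silicon cell: 0.65 V and 20% efficiency at 25 degC.
    for t in (25, 45, 65):
        print(f"{t} degC: V_OC = {voc_at_temperature(0.65, t):.3f} V, "
              f"efficiency = {efficiency_at_temperature(0.20, t):.1%}")
```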
As series resistance increases, the voltage drop between the junction voltage and the terminal voltage becomes greater for the same current. The result is that the current-controlled portion of the I-V curve begins to sag toward the origin, producing a significant decrease in V out {\displaystyle V_{\text{out}}} and a slight reduction in I SC , the short-circuit current. Very high values of R S will also produce a significant reduction in I SC ; in these regimes, series resistance dominates and the behavior of the solar cell resembles that of a resistor. These effects are shown for crystalline silicon solar cells in the I-V curves displayed in the figure to the right.
Power lost through the series resistance is I out ² R S . During illumination, when I D and I SH are small relative to the photocurrent I L , power loss also increases quadratically with I L . Series resistance losses are therefore most important at high illumination intensities.
As shunt resistance decreases, the current diverted through the shunt resistor increases for a given level of junction voltage. The result is that the voltage-controlled portion of the I-V curve begins to sag far from the origin, producing a significant decrease in I out {\displaystyle I_{\text{out}}} and a slight reduction in V OC . Very low values of R SH will produce a significant reduction in V OC . Much as in the case of a high series resistance, a badly shunted solar cell will take on operating characteristics similar to those of a resistor. These effects are shown for crystalline silicon solar cells in the I-V curves displayed in the figure to the right.
If one assumes infinite shunt resistance, the characteristic equation can be solved for V OC :
Thus, an increase in I 0 produces a reduction in V OC proportional to the logarithm of the factor by which I 0 increases. This explains mathematically the reason for the reduction in V OC that accompanies increases in temperature described above. The effect of reverse saturation current on the I-V curve of a crystalline silicon solar cell is shown in the figure to the right. Physically, reverse saturation current is a measure of the "leakage" of carriers across the p–n junction in reverse bias. This leakage is a result of carrier recombination in the neutral regions on either side of the junction.
The ideality factor (also called the emissivity factor) is a fitting parameter that describes how closely the diode's behavior matches that predicted by theory, which assumes the p–n junction of the diode is an infinite plane and no recombination occurs within the space-charge region. A perfect match to theory is indicated when n = 1 . When recombination in the space-charge region dominates other recombination, however, n = 2 . The effect of changing the ideality factor independently of all other parameters is shown for a crystalline silicon solar cell in the I-V curves displayed in the figure to the right.
Most solar cells, which are quite large compared to conventional diodes, well approximate an infinite plane and will usually exhibit near-ideal behavior under standard test conditions ( n ≈ 1 ). Under certain operating conditions, however, device operation may be dominated by recombination in the space-charge region. This is characterized by a significant increase in I 0 as well as an increase in ideality factor to n ≈ 2 . The latter tends to increase solar cell output voltage while the former acts to erode it. The net effect, therefore, is a combination of the increase in voltage shown for increasing n in the figure to the right and the decrease in voltage shown for increasing I 0 in the figure above. Typically, I 0 is the more significant factor and the result is a reduction in voltage.
Sometimes, the ideality factor is observed to be greater than 2, which is generally attributed to the presence of a Schottky diode or a heterojunction in the solar cell. [ 11 ] The presence of a heterojunction offset reduces the collection efficiency of the solar cell and may contribute to a low fill factor.
While the above model is most common, other models have been proposed, like the d1MxP discrete model. [ 12 ] | https://en.wikipedia.org/wiki/Theory_of_solar_cells |
The theory of tides is the application of continuum mechanics to interpret and predict the tidal deformations of planetary and satellite bodies and their atmospheres and oceans (especially Earth's oceans) under the gravitational loading of another astronomical body or bodies (especially the Moon and Sun ).
The tides received relatively little attention in the civilizations around the Mediterranean Sea , as the tides there are relatively small, and the areas that experience tides do so unreliably. [ 1 ] [ 2 ] [ 3 ] A number of theories were advanced, however, from comparing the movements to breathing or blood flow to theories involving whirlpools or river cycles. [ 2 ] A similar "breathing earth" idea was considered by some Asian thinkers. [ 4 ] Plato reportedly believed that the tides were caused by water flowing in and out of undersea caverns. [ 1 ] Crates of Mallus attributed the tides to "the counter-movement (ἀντισπασμός) of the sea” and Apollodorus of Corcyra to "the refluxes from the Ocean". [ 5 ] An ancient Indian Purana text dated to 400-300 BC refers to the ocean rising and falling because of heat expansion from the light of the Moon. [ a ] [ 6 ] The Yolngu people of northeastern Arnhem Land in the Northern Territory of Australia identified a link between the Moon and the tides, which they mythically attributed to the Moon filling with water and emptying out again. [ 7 ] [ 8 ]
Ultimately the link between the Moon (and Sun ) and the tides became known to the Greeks , although the exact date of discovery is unclear; references to it are made in sources such as Pytheas of Massilia in 325 BC and Pliny the Elder 's Natural History in 77 AD. Although the schedule of the tides and the link to lunar and solar movements was known, the exact mechanism that connected them was unclear. [ 2 ] The classicist Thomas Little Heath claimed that both Pytheas and Posidonius connected the tides with the moon, "the former directly, the latter through the setting up of winds". [ 5 ] Seneca mentions in De Providentia the periodic motion of the tides controlled by the lunar sphere. [ 9 ] Eratosthenes (3rd century BC) and Posidonius (1st century BC) both produced detailed descriptions of the tides and their relationship to the phases of the Moon , Posidonius in particular making lengthy observations of the sea on the Spanish coast, although little of their work survived. The influence of the Moon on tides was mentioned in Ptolemy 's Tetrabiblos as evidence of the reality of astrology . [ 1 ] [ 10 ] Seleucus of Seleucia is thought to have theorized around 150 BC that tides were caused by the Moon as part of his heliocentric model. [ 11 ] [ 12 ]
Aristotle , judging from discussions of his beliefs in other sources, is thought to have believed the tides were caused by winds driven by the Sun's heat, and he rejected the theory that the Moon caused the tides. An apocryphal legend claims that he committed suicide in frustration with his failure to fully understand the tides. [ 1 ] Heraclides also held "the sun sets up winds, and that these winds, when they blow, cause the high tide and, when they cease, the low tide". [ 5 ] Dicaearchus also "put the tides down to the direct action of the sun according to its position". [ 5 ] Philostratus discusses tides in Book Five of Life of Apollonius of Tyana (circa 217-238 AD); he was vaguely aware of a correlation of the tides with the phases of the Moon but attributed them to spirits moving water in and out of caverns, which he connected with the legend that spirits of the dead cannot move on at certain phases of the Moon. [ b ]
The Venerable Bede discusses the tides in The Reckoning of Time and shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. [ 13 ] However, he made no progress regarding the question of how exactly the Moon created the tides. [ 2 ]
Medieval rule-of-thumb methods for predicting tides were said to allow one "to know what Moon makes high water" from the Moon's movements. [ 14 ] Dante references the Moon's influence on the tides in his Divine Comedy . [ 15 ] [ 1 ]
Medieval European understanding of the tides was often based on works of Muslim astronomers that became available through Latin translation starting from the 12th century. [ 16 ] Abu Ma'shar al-Balkhi , in his Introductorium in astronomiam , taught that ebb and flood tides were caused by the Moon. [ 16 ] Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. [ 16 ] In the 12th century, al-Bitruji contributed the notion that the tides were caused by the general circulation of the heavens. [ 16 ] Medieval Arabic astrologers frequently referenced the Moon's influence on the tides as evidence for the reality of astrology; some of their treatises on the topic influenced western Europe. [ 10 ] [ 1 ] Some theorized that the influence was caused by lunar rays heating the ocean's floor. [ 3 ]
Simon Stevin in his 1608 De spiegheling der Ebbenvloet (The Theory of Ebb and Flood ) dismisses a large number of misconceptions that still existed about ebb and flood. Stevin pleads for the idea that the attraction of the Moon was responsible for the tides and writes in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made. [ 17 ] [ 18 ] In 1609, Johannes Kepler correctly suggested that the gravitation of the Moon causes the tides, [ c ] which he compared to magnetic attraction [ 20 ] [ 2 ] [ 21 ] [ 22 ] basing his argument upon ancient observations and correlations.
In 1616, Galileo Galilei wrote Discourse on the Tides . [ 23 ] He strongly and mockingly rejects the lunar theory of the tides, [ 21 ] [ 2 ] and tries to explain the tides as the result of the Earth 's rotation and revolution around the Sun , believing that the oceans moved like water in a large basin: as the basin moves, so does the water. [ 24 ] But his contemporaries noticed that this made predictions that did not fit observations. [ 25 ]
René Descartes theorized that the tides (alongside the movement of planets, etc.) were caused by aetheric vortices , without reference to Kepler's theories of gravitation by mutual attraction; this was extremely influential, with numerous followers of Descartes expounding on this theory throughout the 17th century, particularly in France. [ 26 ] However, Descartes and his followers acknowledged the influence of the Moon, speculating that pressure waves from the Moon via the aether were responsible for the correlation. [ 3 ] [ 27 ] [ 4 ] [ 28 ]
Newton , in the Principia , provides a correct explanation for the tidal force , which can be used to explain tides on a planet covered by a uniform ocean but which takes no account of the distribution of the continents or ocean bathymetry . [ 29 ]
While Newton explained the tides by describing the tide-generating forces and Daniel Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides , developed by Pierre-Simon Laplace in 1775, [ 30 ] describes the ocean's real reaction to tidal forces. [ 31 ] Laplace's theory of ocean tides takes into account friction , resonance and natural periods of ocean basins. It predicts the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed. [ 32 ]
The equilibrium theory—based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects—could not explain the real ocean tides. [ 33 ] Since measurements have confirmed the dynamic theory, many phenomena now have explanations, such as how tides interacting with deep sea ridges and chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. [ 34 ] The equilibrium tide theory predicts a tide wave less than half a meter in height, while the dynamic theory explains why tides can reach up to 15 meters. [ 35 ]
Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. [ 36 ] [ 37 ] Measurements from the CHAMP satellite closely match the models based on the TOPEX data. [ 38 ] [ 39 ] [ 40 ] Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels. [ 41 ]
In 1776, Laplace formulated a single set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity . Laplace obtained these equations by simplifying the fluid dynamics equations, but they can also be derived from energy integrals via Lagrange's equation .
For a fluid sheet of average thickness D , the vertical tidal elevation ζ , as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively) satisfy Laplace's tidal equations : [ 42 ]
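For reference, one common way of writing these three equations is sketched below; the exact form varies between texts, and the symbols are those defined in the following sentence:

$$ \frac{\partial \zeta}{\partial t} + \frac{1}{a\cos\varphi}\left[\frac{\partial}{\partial\lambda}(uD) + \frac{\partial}{\partial\varphi}\left(vD\cos\varphi\right)\right] = 0, $$

$$ \frac{\partial u}{\partial t} - v\left(2\Omega\sin\varphi\right) + \frac{1}{a\cos\varphi}\frac{\partial}{\partial\lambda}\left(g\zeta + U\right) = 0, $$

$$ \frac{\partial v}{\partial t} + u\left(2\Omega\sin\varphi\right) + \frac{1}{a}\frac{\partial}{\partial\varphi}\left(g\zeta + U\right) = 0, $$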
where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential .
William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity . Under certain conditions this can be further rewritten as a conservation of vorticity.
Laplace's improvements in theory were substantial, but they still left prediction in an approximate state. This position changed in the 1860s when the local circumstances of tidal phenomena were more fully brought into account by Lord Kelvin 's application of Fourier analysis to the tidal motions as harmonic analysis . Thomson's work in this field was further developed and extended by George Darwin , applying the lunar theory current in his time. Darwin's symbols for the tidal harmonic constituents are still used, for example: M : moon/lunar; S : sun/solar; K : moon-sun/lunisolar.
Darwin's harmonic developments of the tide-generating forces were later improved when A.T. Doodson , applying the lunar theory of E.W. Brown , [ 43 ] developed the tide-generating potential (TGP) in harmonic form, distinguishing 388 tidal frequencies. [ 44 ] Doodson's work was carried out and published in 1921. [ 45 ] Doodson devised a practical system for specifying the different harmonic components of the tide-generating potential, the Doodson numbers , a system still in use.
Since the mid-twentieth century further analysis has generated many more terms than Doodson's 388. About 62 constituents are of sufficient size to be considered for possible use in marine tide prediction, but sometimes many fewer can predict tides to useful accuracy. The calculations of tide predictions using the harmonic constituents are laborious, and from the 1870s to about the 1960s they were carried out using a mechanical tide-predicting machine , a special-purpose form of analog computer . More recently digital computers, using the method of matrix inversion, are used to determine the tidal harmonic constituents directly from tide gauge records.
Tidal constituents combine to give an endlessly varying aggregate because of their different and incommensurable frequencies: the effect is visualized in an animation of the American Mathematical Society illustrating the way in which the components used to be mechanically combined in the tide-predicting machine. Amplitudes (half of peak-to-peak amplitude ) of tidal constituents are given below for six example locations: Eastport, Maine (ME), [ 46 ] Biloxi, Mississippi (MS), San Juan, Puerto Rico (PR), Kodiak, Alaska (AK), San Francisco, California (CA), and Hilo, Hawaii (HI).
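As a minimal sketch of how such constituents combine into a predicted water level, the following example sums two cosine constituents; the amplitudes and phase lags are invented for illustration (they are not the tabulated values referred to above), and real predictions use many more constituents fitted to tide gauge records:

```python
import math

# Each constituent: (amplitude in metres, speed in degrees per hour, phase lag in degrees).
# Amplitudes and phases are hypothetical; the speeds are nominal values for M2 and S2.
CONSTITUENTS = {
    "M2": (1.00, 28.9841042, 0.0),   # principal lunar semidiurnal
    "S2": (0.35, 30.0000000, 30.0),  # principal solar semidiurnal
}

def tide_height(hours, mean_level=0.0):
    """Predicted water level (m) above the mean at a time in hours from the epoch."""
    height = mean_level
    for amplitude, speed_deg_per_hr, phase_deg in CONSTITUENTS.values():
        angle = math.radians(speed_deg_per_hr * hours - phase_deg)
        height += amplitude * math.cos(angle)
    return height

if __name__ == "__main__":
    for h in range(0, 25, 6):
        print(f"t = {h:2d} h -> {tide_height(h):+.2f} m")
```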
In order to specify the different harmonic components of the tide-generating potential, Doodson devised a practical system which is still in use, involving what are called the Doodson numbers based on the six Doodson arguments or Doodson variables. The number of different tidal frequency components is large, but each corresponds to a specific linear combination of six frequencies using small-integer multiples, positive or negative. In principle, these basic angular arguments can be specified in numerous ways; Doodson's choice of his six "Doodson arguments" has been widely used in tidal work. In terms of these Doodson arguments, each tidal frequency can then be specified as a sum made up of a small integer multiple of each of the six arguments. The resulting six small integer multipliers effectively encode the frequency of the tidal argument concerned, and these are the Doodson numbers: in practice all except the first are usually biased upwards by +5 to avoid negative numbers in the notation. (In the case that the biased multiple exceeds 9, the system adopts X for 10, and E for 11.) [ 47 ]
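A small sketch of the decoding convention just described, using the M 2 and S 2 constituents discussed below as examples; the helper function and its name are hypothetical:

```python
def doodson_multipliers(doodson):
    """Decode a Doodson number such as '255.555' into the six small-integer
    multipliers of the Doodson arguments.  All digits except the first are
    biased by +5; 'X' stands for 10 and 'E' for 11, as described in the text."""
    digit_values = {str(d): d for d in range(10)}
    digit_values.update({"X": 10, "E": 11})
    digits = [digit_values[ch] for ch in doodson if ch in digit_values]
    if len(digits) != 6:
        raise ValueError("expected six coded digits")
    return [digits[0]] + [d - 5 for d in digits[1:]]

if __name__ == "__main__":
    print("M2:", doodson_multipliers("255.555"))  # -> [2, 0, 0, 0, 0, 0]
    print("S2:", doodson_multipliers("273.555"))  # -> [2, 2, -2, 0, 0, 0]
```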
The Doodson arguments are specified in the following way, in order of decreasing frequency: [ 47 ]
In these expressions, the symbols l , l′ , F and D refer to an alternative set of fundamental angular arguments (usually preferred for use in modern lunar theory): l is the mean anomaly of the Moon, l′ is the mean anomaly of the Sun, F is the Moon's mean argument of latitude, and D is the mean elongation of the Moon from the Sun.
It is possible to define several auxiliary variables on the basis of combinations of these.
In terms of this system, each tidal constituent frequency can be identified by its Doodson numbers. The strongest tidal constituent, "M 2 ", has a frequency of 2 cycles per lunar day; its Doodson numbers are usually written 255.555, meaning that its frequency is composed of twice the first Doodson argument and zero times all of the others. The second strongest tidal constituent, "S 2 ", is influenced by the sun, and its Doodson numbers are 273.555, meaning that its frequency is composed of twice the first Doodson argument, +2 times the second, -2 times the third, and zero times each of the other three. [ 48 ] This aggregates to the angular equivalent of mean solar time +12 hours. These two strongest component frequencies have simple arguments for which the Doodson system might appear needlessly complex, but each of the hundreds of other component frequencies can be briefly specified in a similar way, showing in the aggregate the usefulness of the encoding. | https://en.wikipedia.org/wiki/Theory_of_tides |
Theorycraft (or theorycrafting ) is the mathematical analysis of game mechanics (usually in video games ) to discover optimal strategies and tactics . Theorycraft involves analyzing statistics , hidden systems or underlying game code in order to glean information that is not apparent during normal gameplay . [ 1 ] Theorycraft is similar to analyses performed in sports or other games such as baseball's sabermetrics . The term has been said to come from StarCraft players as a portmanteau of " game theory " and "StarCraft". [ 2 ]
Theorycraft is prominent in competitive gaming (such as multiplayer games , speedrunning and racing events), where players attempt to gain competitive advantage by analyzing game systems. As a result, theorycraft can lower barriers between players and game designers . Game designers must consider that players will have a comprehensive understanding of game systems and players can influence design by exploiting game systems and discovering dominant or unintended strategies. [ 3 ]
Theorycrafting that proves potent is usually incorporated into the metagame . Theorycrafting knowledge is often communicated through online communities . [ 2 ] [ 4 ]
The way players theorycraft varies from game to game but often games under the same genres will have similar theorycrafting methods. Communities develop standardized ways to communicate their findings, including use of specialized tools to measure and record game data; terminology; and simulations to represent certain data.
The term theorycraft can be used in a pejorative sense. [ 3 ] In this sense, "theorycraft" refers to naïve or impractical theorizing that would not succeed during actual gameplay. | https://en.wikipedia.org/wiki/Theorycraft |
Theranostics , also known as theragnostics , [ 1 ] is a technique commonly used in personalised medicine . For example, in nuclear medicine , one radioactive drug is used to identify ( diagnose ) and a second radioactive drug is used to treat (therapy) cancerous tumors . [ 2 ] [ 3 ] [ 4 ] In other words, theranostics combines radionuclide imaging and radiation therapy that targets specific biological pathways .
Technologies used for theranostic imaging include radiotracers , contrast agents , positron emission tomography , and magnetic resonance imaging . [ 3 ] [ 5 ] It has been used to treat thyroid cancer and neuroblastomas . [ 3 ]
The term "theranostic" is a portmanteau of two words, thera peutic and diag nostic , thus referring to a combination of diagnosis and treatment that also allows for continuing medical assessment of a patient. The first known use of the term is attributed to John Funkhouser, a consultant for the company Cardiovascular Diagnostic, who used it in a press release in August 1998. [ 6 ]
Theranostics originated in the field of nuclear medicine ; iodine isotope 131 for the diagnostic study and treatment of thyroid cancer was one of its earliest applications. [ 7 ] Nuclear medicine encompasses various substances, either alone or in combination, that can be used for diagnostic imaging and targeted therapy. These substances may include ligands of receptors present on the target tissue or compounds, like iodine , that are internalized by the target through metabolic processes. By using these mechanisms, theranostics enables the localization of pathological tissues with imaging and the targeted destruction of these tissues using high doses of radiation . [ 7 ]
Contrast agents with therapeutic properties have been under development for several years. [ 8 ] One example is the design of contrast agents capable of releasing a chemotherapeutic agent locally at the target site, triggered by a stimulus provided by the operator. This localized approach aims to increase treatment efficacy and minimize side effects. For instance, ultrasound -based contrast media, such as microbubbles , can accumulate in hypervascularized tissues and release the active ingredient in response to ultrasound waves, thus targeting a specific area chosen by the sonographer . [ 8 ]
Another approach involves linking monoclonal antibodies (capable of targeting different molecular targets) to nanoparticles . This strategy enhances the drug's affinity and specificity towards the target and enables visualization of the treatment area, such as using superparamagnetic iron oxide particles detectable by magnetic resonance imaging . [ 9 ] Additionally, these particles can be designed to release chemotherapy agents specifically at the site of binding, producing a local synergistic effect with antibody action. Integrating these methods with medical-nuclear techniques, which offer greater imaging sensitivity, may aid in target identification and treatment monitoring. [ 10 ]
Positron emission tomography (PET) imaging in theranostics provides insight into metabolic and molecular processes within the body. The PET scanner detects photons and creates three-dimensional images that enable visualization and quantification of physiological and biochemical processes. [ 11 ] PET imaging uses radiotracers that target specific molecules or processes. For example, [18F] fluorodeoxyglucose (FDG) is commonly used to assess glucose metabolism, as cancer cells exhibit increased glucose uptake. Other radiotracers target specific receptors, enzymes, or transporters, allowing the evaluation of various physiological and pathological processes. [ 11 ]
PET imaging plays a role in both diagnosis and treatment planning. It aids in the identification and staging of diseases, such as cancer, by visualizing the extent and metabolic activity of tumors. PET scans can also guide treatment decisions by assessing treatment response and monitoring disease progression. [ citation needed ] Additionally, PET imaging is used to determine the suitability of patients for targeted therapies based on specific molecular characteristics, enabling personalized treatment approaches. [ 12 ]
Single-photon emission computed tomography (SPECT) is employed in theranostics, using gamma rays emitted by a radiotracer to generate three-dimensional images of the body. SPECT imaging involves the injection of a radiotracer that emits single photons, which are detected by a gamma camera rotating around the person undergoing imaging. [ 7 ]
SPECT provides functional and anatomical information, allowing the assessment of organ structure, blood flow, and specific molecular targets. It is useful in evaluating diseases that involve altered blood flow or specific receptor expression. For example, SPECT imaging with technetium-99m (Tc-99m) radiopharmaceuticals may be able to assess myocardial perfusion and identify areas of ischemia or infarction in patients with cardiovascular diseases. [ 13 ]
SPECT imaging helps in identifying disease localization, staging, and assessing the response to therapy. Moreover, SPECT imaging is employed in targeted radionuclide therapy, where the same radiotracer used for diagnostic imaging can be used to deliver therapeutic doses of radiation to the diseased tissue. [ 13 ]
Magnetic resonance imaging (MRI) is a non-invasive imaging technique that uses strong magnetic fields and radiofrequency pulses to generate detailed anatomical and functional images of the body. MRI provides excellent soft tissue contrast and is widely used in theranostics for its ability to visualize anatomical structures and assess physiological processes. [ 8 ]
In theranostics, MRI allows for the detection and characterization of tumors, assessment of tumor extent, and evaluation of treatment response. MRI can provide information on tissue perfusion , diffusion, and metabolism, aiding in the selection of appropriate therapies and monitoring their effectiveness. [ 14 ]
Advancements in MRI technology have expanded its capabilities in theranostics. Techniques such as functional MRI (fMRI) enable the assessment of brain activation and connectivity, while diffusion-weighted imaging (DWI) provides insights into tissue microstructure. The development of molecular imaging agents, such as superparamagnetic iron oxide nanoparticles , allows for targeted imaging and tracking of specific molecular entities. [ 14 ]
Theranostics encompasses a range of therapeutic approaches that are designed to target and treat diseases with enhanced precision.
Targeted drug delivery systems facilitate the selective delivery of therapeutic agents to specific disease sites while minimizing off-target effects. These systems employ strategies, such as nanoparticles , liposomes , and micelles , to encapsulate drugs and enhance their stability, solubility, and bioavailability. [ 15 ] By incorporating diagnostic components, such as imaging agents or targeting ligands , into these delivery systems, clinicians can monitor drug distribution and accumulation in real-time, ensuring effective treatment and reducing systemic toxicity. Targeted drug delivery systems hold promise in the treatment of cancer, cardiovascular diseases, and other conditions, as they allow for personalized and site-specific therapy. [ 15 ]
Gene therapy is a therapeutic approach that involves modifying or replacing faulty genes to treat or prevent diseases. In theranostics, gene therapy can be combined with diagnostic imaging to monitor the delivery, expression, and activity of therapeutic genes. [ 16 ] Imaging techniques such as MRI , PET , and optical imaging enable non-invasive assessment of gene transfer and expression, providing valuable insights into the efficacy and safety of gene-based treatments. [ 15 ] Gene therapy has shown potential in treating genetic disorders , cancer, and cardiovascular diseases, and its integration with diagnostic imaging offers a comprehensive approach for monitoring and optimizing treatment outcomes. [ 16 ]
Radiotherapy can be integrated with imaging techniques to guide treatment planning, monitor radiation dose distribution, and assess treatment response. Molecular imaging methods, such as PET and SPECT , can be employed to visualize and quantify tumor characteristics, such as hypoxia or receptor expression , aiding in personalized radiation dose optimization. [ 10 ]
Additionally, theranostic approaches involving radiolabeled therapeutic agents, known as radiotheranostics , combine the therapeutic effects of radiation with diagnostic capabilities. Radiotheranostics, including Peptide receptor radionuclide therapy (PRRT), hold promise for targeted radiotherapy, enabling precise tumor targeting and dose escalation, while sparing healthy tissues. [ 17 ] For example, PRRT based on Lutetium -177 combinations (known as radioligands ) has emerged as a treatment option for inoperable metastatic neuroendocrine tumours (NET). [ 18 ]
Immunotherapy harnesses the body's immune system to recognize and attack cancer cells or other disease targets. In theranostics, immunotherapeutic approaches can be coupled with diagnostic imaging to assess immune cell infiltration , tumor immunogenicity , and treatment response. [ 7 ] Imaging techniques, such as PET and MRI, can provide valuable information about the tumor microenvironment, immune cell dynamics, and response to immunotherapies. Furthermore, theranostic strategies involving the use of radiolabeled immunotherapeutic agents allow for simultaneous imaging and therapy, aiding in patient selection, treatment monitoring, and optimization of immunotherapeutic regimens. [ 15 ]
Nanomedicine refers to the use of nanoscale materials for medical applications. In theranostics, nanomedicine offers opportunities for targeted drug delivery, imaging, and therapy. [ 7 ] Nanoparticles can be engineered to carry therapeutic payloads, imaging agents, and targeting ligands, allowing for multimodal theranostic approaches. These nanocarriers can enhance drug stability, improve drug solubility, and enable controlled release at the disease site. Additionally, nanomaterials with inherent imaging properties, such as quantum dots or gold nanoparticles, can serve as contrast agents for imaging. [ 19 ]
Theranostics has been applied in oncology, contributing to new approaches in the diagnosis, treatment, and monitoring of cancers. By integrating diagnostic imaging and targeted therapies, theranostics offers personalized approaches that improve treatment outcomes and patient care. In oncology, theranostics encompasses a wide range of applications, including the management of various types of cancers such as breast, lung, prostate, and colorectal cancer. [ 8 ] Molecular imaging techniques, such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT), enable the visualization and characterization of cancerous lesions, aiding in early detection, staging, and assessment of treatment response. [ better source needed ] [ 20 ] This allows for more accurate and tailored treatment planning, including the selection of appropriate targeted therapies or the optimization of radiation therapy.
Despite the significant progress, the translation of theranostics into routine clinical practice faces challenges, including the need for standardized imaging protocols, biomarker validation, and regulatory considerations. Additionally, there is a continuous need for research and development to further enhance the effectiveness and accessibility of theranostic approaches in oncology. [ 19 ]
Theranostics extends beyond oncology and holds potential in the fields of neurology and cardiology . [ 21 ] [ 22 ] In neurology, theranostic approaches offer new avenues for the diagnosis and treatment of various neurodegenerative diseases, such as Alzheimer's disease , Parkinson's disease , and multiple sclerosis . Advanced imaging techniques, including magnetic resonance imaging (MRI) and positron emission tomography (PET), allow for the visualization of neuroanatomy , functional connectivity, and molecular changes in the brain. This enables early detection, precise diagnosis, and monitoring of disease progression, facilitating the development of targeted therapeutic interventions.
Similarly, in cardiology, theranostics play a significant role in the diagnosis and treatment of cardiovascular conditions. Non-invasive imaging modalities like MRI and computed tomography (CT) provide detailed information about cardiac structure, function, and blood flow, aiding in the assessment of heart disease and the guidance of interventions. Theranostic approaches in cardiology involve targeted drug delivery systems for the treatment of conditions such as atherosclerosis and restenosis , as well as image-guided interventions for precise stenting or catheter-based therapies. [ 21 ]
Several challenges remain to be addressed for widespread adoption and integration of theranostics into routine clinical practice. Regulatory considerations will play a role in ensuring the safety, efficacy, and quality of theranostic agents and technologies. Harmonization of regulations across different countries and regions is necessary to facilitate global implementation. [ 23 ] Cost-effectiveness is a significant challenge, as theranostic approaches can be expensive. [ 23 ] Strategies to optimize resource utilization and reimbursement models have been discussed. Technical limitations, such as the development of more specific and sensitive imaging agents, improvement of imaging resolution and quality, and the integration of different imaging modalities, require ongoing research and technological advancements. [ better source needed ] [ 24 ] Ethical considerations surrounding patient privacy, data security, and the responsible use of patient information need to be addressed. [ 24 ] | https://en.wikipedia.org/wiki/Theranostics |
The Therapeutic Goods Administration ( TGA ) is the medicine and therapeutic regulatory agency of the Australian Government . [ 4 ] As part of the Department of Health, Disability and Ageing , the TGA regulates the safety, quality, efficacy and advertising in Australia of therapeutic goods (which comprise medicines, medical devices, biologicals and certain other therapeutic goods). Therapeutic goods include goods that are represented to have a therapeutic effect, are included in a class of goods the sole or principal use of which is (or ordinarily is) a therapeutic use, or are otherwise determined to be a therapeutic good through a legislative instrument under the Therapeutic Goods Act 1989 . [ 5 ] Goods that are therapeutic goods must be entered on the Australian Register of Therapeutic Goods (ARTG), or otherwise be the subject of an exemption, approval or authority by the TGA under the Therapeutic Goods Act 1989 , Therapeutic Goods Regulations 1990 or Therapeutic Goods (Medical Devices) Regulations 2002 before they can be imported, supplied, exported or manufactured in Australia.
Regulation of therapeutic goods in Australia was initially undertaken at a state level, [ 6 ] with regular conferences held between state and federal governments aimed at achieving uniform standards for drugs. [ 7 ] Although federal legislation was sought from an early period, there was considerable uncertainty as to the extent of the federal parliament's ability to legislate in that area . [ 8 ] Commonwealth Serum Laboratories was created by the federal government in 1916 and in 1932 was nominated as the local repository for biological standards set by the League of Nations . [ 9 ]
The first federal legislation relating to therapeutic goods was the Lyons government 's Therapeutic Substances Act 1937 , which gave the federal Minister for Health the power to regulate the import and export of therapeutic goods and bring Australian standards into line with League of Nations standards. [ 10 ] The act was amended in 1938 to better define which substances were therapeutic. [ 11 ] Ultimately the provisions of the legislation were never brought into effect due to changes of government and the onset of World War II. [ 12 ] The National Health and Medical Research Council (NHMRC) took a more active role in co-ordinating regulation during war time, including uniformity of labelling, medical nomenclature, and dangerous drugs legislation. [ 13 ]
In 1952, following lobbying from the NHMRC, the federal government organised a Therapeutic Substances Conference attended by representatives of federal and state governments. The conference passed a series of resolutions calling for major changes to therapeutic goods regulation in Australia, including national legislation for drug purity standards, uniform state legislation for drug manufacture, marketing and labelling, and the establishment of a national standards laboratory for testing new drugs. [ 14 ] It has been suggested the increased focus on national standards was prompted by concerns that the Pharmaceutical Benefits Scheme (PBS) was allowing for government subsidy of sub-standard medicines. [ 15 ]
The Menzies government 's Therapeutic Substances Act 1953 , passed in the wake of the National Health Act 1953 , was the first effective federal therapeutic goods legislation and allowed for federal control over drug imports, drugs sold to the federal government, drugs subject to interstate trade, and drugs supplied under the PBS. [ 16 ] The new act allowed for the replacement of the British Pharmacopoeia (BP) as the primary source of quality standards for drugs in Australia, following concerns that the BP had not kept up to date with new medications. In introducing the legislation, federal health minister Earle Page – a former surgeon – referred to the existing poor standard of drug quality in Australia and stated it "would be criminal to allow such a state of affairs to exist and continue merely through lack of appropriate action". [ 17 ]
In 1956, regulations under the 1953 act established the Therapeutic Substances Advisory Committee, the Biological Standards Committee, and the Therapeutic Substances Standards Committee to advise the Minister for Health on drug regulations. The National Biological Standards Laboratory (NBSL) was created in 1958 to test the quality of imported drugs and drugs to be supplied under the PBS. [ 18 ] One notable absence from the 1953 legislation was the lack of any penalty for possessing or dealing with a sub-standard therapeutic substance. [ 19 ]
The Thalidomide scandal of the early 1960s prompted a reconsideration of federal regulation of drugs. [ 20 ] In 1963, the Australian Drug Evaluation Committee was formed to consider legislative reforms and expressed its "grave concern at the continuing lack of adequate statutory control over the importation of new therapeutic substances". [ 21 ] The committee's lobbying resulted in the Therapeutic Goods Act 1966 , which significantly expanded the Minister for Health's powers in that area. However, regulations under the 1966 act were not proclaimed until 1970, amid significant criticism that the act gave the minister too much power and reduced parliamentary oversight. [ 22 ] The new legislation also failed to resolve the issue of uniform state regulations. [ 23 ]
A separate Therapeutic Substances Section was created in the Department of Health in 1963, under the Division of National Health, with regulations having previously been overseen by the director of the NBSL. [ 24 ] The National Therapeutic Goods Committee was established in 1971 and in 1974 the Department of Health was restructured to create a separate Therapeutics Division in place of the Therapeutic Substances Section. The NBSL was merged into the Therapeutics Division in 1985. [ 25 ] In 1981, the Fraser government 's Health Acts Amendment Bill 1981 significantly broadened the scope of the Therapeutic Goods Act 1966 to include a wide range of medical devices, updated standards and monitoring of manufacturing and testing, established a new National Register of Therapeutic Goods, and increased penalties for contraventions of the act. [ 26 ]
Following a series of government and parliamentary inquiries in the 1980s, the Hawke government 's Therapeutic Goods Act 1989 came into effect in 1991, repealing the preceding 1966 act. The effect of the new legislation was to establish a comprehensive national framework for therapeutic goods, including a mandatory Australian Register of Therapeutic Goods. [ 27 ] The existing Therapeutics Division within the Department of Health was reorganised and replaced by the Therapeutic Goods Administration (TGA). [ 25 ] Some functions of the NHMRC were transferred to the TGA later in the 1990s. [ 28 ]
In Australia, medical products are regulated by the TGA and, for controlled drugs such as cannabis, by the Office of Drug Control (ODC). Together, the TGA and ODC form the Health Products Regulation Group within the Department of Health and Aged Care. The Health Products Regulation Group comprises 14 regulatory branches and one legal branch, organised into four divisions.
The TGA also includes seven specialised statutory committees, which the agency can call upon for assistance on technical or scientific issues. [ 30 ] Four other committees also exist to give guidance on annual influenza vaccines, industry consultation matters, and the Therapeutic Goods Advertising Code. [ 31 ]
In September 2003, the Australian and New Zealand Governments signed a treaty to establish a common therapeutic regulatory agency for the two countries. The Australia New Zealand Therapeutic Products Agency, as it was to be called, would replace the TGA and Medsafe , the national regulator in New Zealand. In June 2011, eight years after the original treaty, Australian Prime Minister Julia Gillard and New Zealand Prime Minister John Key signed a letter of intent, reaffirming plans to create such an agency. [ 32 ]
In November 2014, both Australia and New Zealand agreed to cease plans to create a shared regulator, citing "a comprehensive review of progress and assessment of the costs and benefits to each country". The joint statement announcing the cessation outlined that both the TGA and Medsafe would continue to cooperate on medicine regulation and that the New Zealand Government would still participate in the now-defunct Council of Australian Governments Health Council. [ 33 ]
On 25 January 2021, the TGA provisionally approved the two-dose Pfizer–BioNTech COVID-19 vaccine , named COMIRNATY , for use within Australia. The provisional approval only recommends the vaccine for patients over the age of 16, pending ongoing submission of clinical data from the vaccine sponsors (the manufacturers, Pfizer and BioNTech). [ 34 ] Additionally, every batch of vaccines has its composition and documentation verified by TGA laboratories before being distributed to medical providers. [ 35 ]
The Department of Health and Aged Care planned the administration of COVID-19 vaccinations in five phases, organised by the risk of exposure. Border, quarantine, and front-line health and aged care workers were vaccinated first, followed by people aged over 70, other health care workers, and essential emergency service members. Following the provisional approval of COMIRNATY, Prime Minister Scott Morrison said that the first group was planned to begin vaccinations by February 2021, six weeks earlier than originally planned. [ 36 ]
The first public COVID-19 vaccination in Australia took place on 21 February 2021 with the Pfizer–BioNTech vaccine at Castle Hill in Sydney. An 84-year-old aged care resident was the first Australian to receive the vaccine. To show confidence in the national vaccine rollout, Prime Minister Morrison and Chief Medical Officer Professor Paul Kelly also received vaccinations. [ 37 ]
On 23 February 2021, Australia's second shipment of the Pfizer vaccine arrived at Sydney airport. Health Minister Greg Hunt confirmed the arrival of 166,000 doses, with a further 120,000 doses expected to arrive in the following week. [ 38 ]
On 9 April 2021, Prime Minister Morrison announced that Australia had secured another 20 million doses of the Pfizer vaccine on top of the 20 million already on order, meaning 40 million doses should be available to Australians in 2021. This came amid concerns that the AstraZeneca vaccine could, in rare cases, cause blood clots (see the Oxford–AstraZeneca vaccine section below). The additional Pfizer doses were expected to arrive in Australia in the last quarter of 2021. [ 39 ] [ 40 ]
On 23 July 2021, the TGA approved the Pfizer COVID-19 vaccine for teenagers between 12 and 15 years old. [ 41 ]
On 5 December 2021, the TGA provisionally approved the Pfizer COVID-19 vaccine access for five to 11-year-olds. [ 42 ] [ 43 ]
On 16 February 2021, the Oxford–AstraZeneca vaccine was approved by the TGA for use in Australia. The administration of this vaccine was scheduled to start in March. [ 44 ] Two weeks later, on 28 February, the first shipment of the vaccine, around 300,000 doses, arrived in Sydney for rollout from 8 March. [ 45 ] On 5 March 2021, Italy stopped the export of AstraZeneca vaccine to Australia amid the slower rollout of that vaccine in the EU. [ 46 ] On 23 March, the TGA approved the first batch of AstraZeneca vaccine manufactured locally by CSL-Seqirus in Melbourne, and 832,200 doses were ready for rollout in the following weeks. [ 47 ]
On 17 June 2021, Federal Health Minister Greg Hunt announced a rise in the age limit for administration of the AstraZeneca vaccine. After new advice from the Australian Technical Advisory Group on Immunisation (ATAGI), the vaccine was no longer recommended for people aged under 60 years. This advice followed new cases of blood clotting (thrombosis with thrombocytopenia syndrome, TTS) in people under 60 after AstraZeneca vaccination. [ 40 ]
On 23 June 2021, the Federal government released vaccine allocation projections and forecast that the Oxford-AstraZeneca vaccine would be in "little need" past October 2021 when all Australians over 60 years were expected to be fully vaccinated. [ 48 ]
On 9 February 2022, the TGA approved the Oxford–AstraZeneca vaccine as a booster for individuals in Australia (still pending ATAGI approval), joining the Pfizer and Moderna booster vaccines approved months earlier. [ 49 ]
On 25 June 2021, provisional approval was given by the TGA to the Janssen COVID-19 vaccine , the third vaccine approved for potential use in Australia. Strict conditions were imposed on Janssen, including the regular provision to the TGA of further documents relating to efficacy, long-term effects and safety concerns. The vaccine has not been included in the vaccination programme. [ 50 ] | https://en.wikipedia.org/wiki/Therapeutic_Goods_Administration |
Therapeutic Target Database ( TTD ) is a pharmaceutical and medical repository [ 1 ] constructed by the Innovative Drug Research and Bioinformatics Group (IDRB) at Zhejiang University, China and the Bioinformatics and Drug Design Group at the National University of Singapore . It provides information about known and explored therapeutic protein and nucleic acid targets, [ 2 ] the targeted disease, [ 3 ] pathway information [ 4 ] and the corresponding drugs directed at each of these targets. [ 1 ] It also provides detailed knowledge about target function, sequence, 3D structure, ligand binding properties, enzyme nomenclature and drug structure, therapeutic class, and clinical development status. [ 1 ] TTD is freely accessible without any login requirement at https://idrblab.org/ttd/ .
This database contains 3,730 therapeutic targets (532 successful, 1,442 clinical trial, 239 preclinical/patented and 1,517 research targets) and 39,862 drugs (2,895 approved, 11,796 clinical trial, 5,041 preclinical/patented and 20,130 experimental drugs). The targets and drugs in TTD cover 583 protein biochemical classes and 958 drug therapeutic classes, respectively. [ 1 ] The latest version of the International Classification of Diseases (ICD-11) codes released by WHO are incorporated in TTD to facilitate the clear definition of disease/disease class. [ 5 ]
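TTD data can also be downloaded as flat files for offline analysis. As a minimal sketch of how such a dump might be filtered by development status, the following assumes a tab-separated file; the file name (ttd_targets.tsv) and the column names are hypothetical placeholders, not the actual TTD download schema.

```python
import csv

# Hypothetical file and columns; adjust to the actual TTD download format.
# Each row is assumed to hold a target ID, a name, and a development
# status such as "successful", "clinical trial", or "research".
def load_targets_by_status(path, status):
    """Return all target rows whose development status equals `status`."""
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        return [row for row in reader if row["status"] == status]

successful = load_targets_by_status("ttd_targets.tsv", "successful")
print(len(successful))  # TTD reports 532 successful targets
```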
Target validation normally requires the determination that the target is expressed in the disease-relevant cells/tissues, [ 6 ] that it can be directly modulated by a drug or drug-like molecule with adequate potency in a biochemical assay, [ 7 ] and that target modulation in cell and/or animal models ameliorates the relevant disease phenotype. [ 8 ] Therefore, TTD collects target validation data of these three types (target expression, direct modulation, and phenotypic amelioration). [ 9 ]
The therapeutic targets in TTD are categorized into successful target, clinical trial target, preclinical target, patented target, and literature-reported target, which are defined by the highest status of their corresponding drugs.
The molecular types of therapeutic targets in TTD include protein, nucleic acid, and other molecules.
The main drug types in TTD include small molecule, antibody, nucleic acid drug, cell therapy, gene therapy and vaccine.
■ Target druggability illustrated by molecular interactions or regulations;
■ Target druggability characterized by different human system features;
■ Target druggability reflected by diverse cell-based expression variations;
■ Structure-based activity landscape and drug-like property profile of targets;
■ Prodrugs together with their parent drug and target;
■ Co-targets modulated by approved/clinical trial drugs;
■ Poor binders and non-binders of targets;
■ Target regulators (microRNAs & transcription factors) and target-interacting proteins;
■ Patented agents and their targets (structures and experimental activity values if available);
■ Differential expression profiles and downloadable data of targets in patients and healthy individuals;
■ Target combination of multitarget drugs and combination therapies;
■ Cross-links of most TTD target and drug entries to the corresponding pathway entries;
■ Access of the multiple targets and drugs cross-linked to each of these pathway entries;
■ Biomarkers for disease conditions;
■ Drug scaffolds for drugs/leads;
■ Target validation information (drug-target-disease);
■ Quantitative structure activity relationship models (QSAR) for compounds;
■ Clinical trial drugs and their targets;
■ Similarity target and drug search. | https://en.wikipedia.org/wiki/Therapeutic_Targets_Database |
Therapeutic angiogenesis is an experimental area in the treatment of ischemia , the condition associated with decrease in blood supply to certain organs, tissues, or body parts. This is usually caused by constriction or obstruction of the blood vessels. Angiogenesis is the natural healing process by which new blood vessels are formed to supply the organ or part in deficit with oxygen-rich blood. The goal of therapeutic angiogenesis is to stimulate the creation of new blood vessels in ischemic organs, tissues, or parts with the hope of increasing the level of oxygen-rich blood reaching these areas.
| https://en.wikipedia.org/wiki/Therapeutic_angiogenesis |
Therapeutic gene modulation refers to the practice of altering the expression of a gene at one of various stages, with a view to alleviate some form of ailment. It differs from gene therapy in that gene modulation seeks to alter the expression of an endogenous gene (perhaps through the introduction of a gene encoding a novel modulatory protein) whereas gene therapy concerns the introduction of a gene whose product aids the recipient directly.
Modulation of gene expression can be mediated at the level of transcription by DNA-binding agents (which may be artificial transcription factors ), small molecules , or synthetic oligonucleotides . It may also be mediated post-transcriptionally through RNA interference .
An approach to therapeutic modulation utilizes agents that modulate endogenous transcription by specifically targeting those genes at the gDNA level. The advantage to this approach over modulation at the mRNA or protein level is that every cell contains only a single gDNA copy. Thus the target copy number is significantly lower, theoretically allowing the drugs to be administered at much lower doses.
This approach also offers several advantages over traditional gene therapy . Directly targeting endogenous transcription should yield correct relative expression of splice variants. In contrast, traditional gene therapy typically introduces a gene which can express only one transcript, rather than a set of stoichiometrically-expressed spliced transcript variants. Additionally, virally-introduced genes can be targeted for gene silencing by methylation which can counteract the effect of traditional gene therapy. [ 1 ] This is not anticipated to be a problem for transcriptional modulation as it acts on endogenous DNA.
There are three major categories of agents that act as transcriptional gene modulators: triplex-forming oligonucleotides (TFOs), synthetic polyamides (SPAs), and DNA binding proteins . [ 2 ]
Triplex-forming oligonucleotides (TFO) are one potential method to achieve therapeutic gene modulation. TFOs are approximately 10-40 base pairs long and can bind in the major groove of duplex DNA, creating a third strand and thus a triple helix. [ 2 ] [ 3 ] The binding occurs at polypurine or polypyrimidine regions via Hoogsteen hydrogen bonds to the purine (A / G) bases of the double-stranded DNA that is already in the form of the Watson-Crick helix . [ 4 ]
TFOs can be either polypurine or polypyrimidine molecules and bind to one of the two strands in the double helix in either parallel or antiparallel orientation to target polypurine or polypyrimidine regions. Since the DNA-recognition codes are different for the parallel and the anti-parallel fashion of TFO binding, TFOs composed of pyrimidines (C / T) bind to the purine-rich strand of the target double helix via Hoogsteen hydrogen bonds in a parallel fashion. [ 3 ] TFOs composed of purines (A / G), or mixed purine and pyrimidine, bind to the same purine-rich strand via reverse Hoogsteen bonds in an anti-parallel fashion. Thus, TFOs can recognize purine-rich target strands of duplex DNA. [ 2 ]
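Because TFO binding is restricted to purine-rich tracts, a first step in TFO design is locating sufficiently long polypurine or polypyrimidine runs in a target sequence. The sketch below illustrates such a scan; the minimum run length of 15 bases is an arbitrary illustrative threshold, not a value drawn from the TFO literature.

```python
import re

def find_tfo_sites(sequence, min_len=15):
    """Find candidate TFO target sites: runs of purines (A/G) or
    pyrimidines (C/T) of at least min_len bases on the given strand.
    Returns (start, end, run) tuples, 0-indexed with end exclusive."""
    sites = []
    for pattern in (r"[AG]{%d,}" % min_len, r"[CT]{%d,}" % min_len):
        for match in re.finditer(pattern, sequence.upper()):
            sites.append((match.start(), match.end(), match.group()))
    return sorted(sites)

# Toy example: a 16-base polypurine tract embedded in mixed sequence.
seq = "ttcgatAGGAAGGAAGGAAGGAcgattc"
for start, end, run in find_tfo_sites(seq):
    print(start, end, run)  # 6 22 AGGAAGGAAGGAAGGA
```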
In order for TFO motifs to bind in a parallel fashion and create hydrogen bonds , the nitrogen atom at position 3 on the cytosine residue needs to be protonated , but at physiological pH levels it is not, which could prevent parallel binding. [ 2 ]
Another limitation is that TFOs can only bind to purine-rich target strands, and this limits the choice of endogenous gene target sites to polypurine-polypyrimidine stretches in duplex DNA. If a method were developed to also allow TFOs to bind to pyrimidine bases, TFOs could target any part of the genome . The human genome is also rich in polypurine and polypyrimidine sequences, which could affect the specificity of TFO binding to a target DNA region. An approach to overcome this limitation is to develop TFOs with modified nucleotides that act as locked nucleic acids to increase the affinity of the TFO for specific target sequences. [ 5 ]
Other limitations include concerns regarding binding affinity and specificity, in vivo stability, and uptake into cells. Researchers are attempting to overcome these limitations by improving TFO characteristics through chemical modifications , such as modifying the TFO backbone to reduce electrostatic repulsions between the TFO and the DNA duplex. Due to their high molecular weight, uptake into cells is limited; strategies to overcome this include DNA condensing agents , coupling of the TFO to hydrophobic residues like cholesterol , or cell permeabilization agents. [ 2 ]
Scientists are still refining the technology to turn TFOs into a therapeutic product, and much of this revolves around their potential applications in antigene therapy. In particular, they have been used as inducers of site-specific mutations , as reagents that selectively and specifically cleave target DNA, and as modulators of gene expression . [ 6 ] One such gene sequence modification method is targeting DNA with TFOs to activate a target gene . If a target sequence is located between two inactive copies of a gene, DNA ligands such as TFOs can bind to the target site and be recognized as DNA lesions. To fix these lesions, DNA repair complexes are assembled on the targeted sequence and the DNA is repaired. Damage of the intramolecular recombination substrate can then be repaired and detected if resection goes far enough to produce compatible ends on both sides of the cleavage site; the 3' overhangs are then ligated, leading to the formation of a single active copy of the gene and the loss of all sequences between the two copies. [ 4 ]
In model systems TFOs can inhibit gene expression at the DNA level as well as induce targeted mutagenesis in the model. [ 6 ] TFO-induced inhibition of transcription elongation on endogenous targets has been tested on cell cultures with success. [ 7 ] However, despite much in vitro success, there has been limited achievement in cellular applications, potentially due to target accessibility.
TFOs have the potential to silence genes by targeting transcription initiation or elongation, arresting at the triplex binding sites, or introducing permanent changes in a target sequence via stimulating a cell's inherent repair pathways. These applications can be relevant in creating cancer therapies that inhibit gene expression at the DNA level. Since aberrant gene expression is a hallmark of cancer, modulating these endogenous genes' expression levels could potentially act as a therapy for multiple cancer types.
Synthetic polyamides are a set of small molecules that form specific hydrogen bonds to the minor groove of DNA. They can exert an effect either directly, by binding a regulatory region or transcribed region of a gene to modify transcription, or indirectly, by designed conjugation with another agent that makes alterations around the DNA target site.
Specific bases in the minor groove of DNA can be recognized and bound by small synthetic polyamides (SPAs). DNA-binding SPAs have been engineered to contain three polyamide amino acid components: hydroxypyrrole (Hp), imidazole (Im), and pyrrole (Py). [ 10 ] Chains of these amino acids loop back on themselves in a hairpin structure. The amino acids on either side of the hairpin form a pair which can specifically recognize both sides of a Watson-Crick base pair . This occurs through hydrogen bonding within the minor groove of DNA. The amide pairs Py/Im, Py/Hp, Hp/Py, and Im/Py recognize the Watson-Crick base pairs C-G, A-T, T-A, and G-C, respectively. SPAs have low toxicity, but have not yet been used in human gene modulation.
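The pairing rules above amount to a lookup code from primary-strand bases to amide pairs. The sketch below simply restates that code programmatically; it does not model structural constraints such as the five-base-pair recognition limit discussed next.

```python
# Recognition code from the text, keyed by the primary-strand base:
# Py/Im reads C-G, Py/Hp reads A-T, Hp/Py reads T-A, Im/Py reads G-C.
BASE_TO_AMIDE_PAIR = {
    "C": ("Py", "Im"),
    "A": ("Py", "Hp"),
    "T": ("Hp", "Py"),
    "G": ("Im", "Py"),
}

def spa_amide_pairs(target):
    """Return the ordered amide pairs that would read the given
    primary-strand sequence under the hairpin recognition code."""
    return [BASE_TO_AMIDE_PAIR[base] for base in target.upper()]

# Example: reading the sequence 5'-GTAC-3'.
for base, (first, second) in zip("GTAC", spa_amide_pairs("GTAC")):
    print(f"{base}: {first}/{second}")  # G: Im/Py, T: Hp/Py, A: Py/Hp, C: Py/Im
```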
The major structural drawback to unmodified SPAs as gene modulators is that their recognition sequence cannot be extended beyond 5 Watson-Crick base pairings. The natural curvature of the DNA minor groove is too tight a turn for the hairpin structure to match. There are several groups with proposed workarounds to this problem. [ 8 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] SPAs can be made to better follow the curvature of the minor groove by inserting beta-alanine which relaxes the structure. [ 10 ] Another approach to extending the recognition length is to use several short hairpins in succession. [ 15 ] [ 16 ] This approach has increased the recognition length to up to eleven Watson-Crick base pairs.
SPAs may inhibit transcription through binding within a transcribed region of a target gene. This inhibition occurs through blocking of elongation by an RNA polymerase.
SPAs may also modulate transcription by targeting a transcription regulator binding site. If the regulator is an activator of transcription, this will decrease transcriptional levels. As an example, SPA targeting to the binding site for the activating transcription factor TFIIIA has been demonstrated to inhibit transcription of the downstream 5S RNA. [ 17 ] In contrast, if the regulator is a repressor, this will increase transcriptional levels. As an example, SPA targeting to the host factor LSF, which represses expression of the human immunodeficiency virus (HIV) type 1 long terminal repeat (LTR), blocks binding of LSF and consequently de-represses expression of the LTR. [ 18 ]
SPAs have not been shown to directly modify DNA or have activity other than direct blocking of other factors or processes. However, modifying agents can be bound to the tail ends of the hairpin structure. The specific binding of the SPA to DNA allows for site-specific targeting of the conjugated modifying agent.
SPAs have been paired with the DNA-alkylating moieties cyclopropylpyrroloindole [ 19 ] and chlorambucil [ 20 ] that were able to damage and crosslink SV40 DNA. This effect inhibited cell cycling and growth. Chlorambucil, a chemotherapeutic agent, was more effective when conjugated to an SPA than without.
In 2012, SPAs were conjugated to SAHA, a potent histone deacetylase (HDAC) inhibitor. [ 21 ] SPAs with conjugated SAHA were targeted to Oct-3/4 and Nanog which induced epigenetic remodeling and consequently increased expression of multiple pluripotency related genes in mouse embryonic fibroblasts.
Designer zinc-finger proteins are engineered proteins used to target specific areas of DNA . These proteins capitalize on the DNA-binding capacity of natural zinc-finger domains to modulate specific target areas of the genome . [ 22 ] In both designer and natural zinc-finger motifs, the protein consists of two β-sheets and one α-helix . Two histidine residues on the α-helix and two cysteine residues on the β-sheets are bonded to a zinc atom, which serves to stabilize the protein domain as a whole. This stabilization particularly benefits the α-helix in its function as the DNA-recognition and -binding domain. Transcription factor TFIIIA is an example of a naturally-occurring protein with zinc-finger motifs. [ 23 ]
Zinc-finger motifs bind into the major groove of helical DNA, [ 23 ] where the amino acid residue sequence on the α-helix gives the motif its target sequence specificity. The domain binds to a seven- nucleotide sequence of DNA (positions 1 through 6 on the primary strand of DNA, plus positions 0 and 3 on the complementary strand ), thereby ensuring that the protein motif is highly selective of its target. [ 22 ] In engineering a designer zinc-finger protein, researchers can utilize techniques such as site-directed mutagenesis followed by randomized trials for binding capacity, [ 22 ] [ 24 ] or the in vitro recombination of motifs with known target specificity to produce a library of sequence-specific final proteins. [ 25 ]
Designer zinc-finger proteins can modulate genome expression in a number of ways. Ultimately, two factors are primarily responsible for the result on expression: whether the targeted sequence is a regulatory region or a coding region of DNA, and whether and what types of effector domains are bound to the zinc-finger domain. If the target sequence for an engineered designer protein is a regulatory domain - e.g., a promoter or a repressor of replication - the binding site for naturally-occurring transcription factors will be obscured, leading to a corresponding decrease or increase, respectively, in transcription for the associated gene . [ 26 ] Similarly, if the target sequence is an exon , the designer zinc-finger will obscure the sequence from RNA polymerase transcription complexes, resulting in a truncated or otherwise nonfunctional gene product. [ 22 ]
Effector domains bound to the zinc-finger can also have comparable effects. It is the function of these effector domains which are arguably the most important with respect to the use of designer zinc-finger proteins for therapeutic gene modulation. If a methylase domain is bound to the designer zinc-finger protein, when the zinc-finger protein binds to the target DNA sequence an increase in methylation state of DNA in that region will subsequently result. Transcription rates of genes so-affected will be reduced. [ 27 ] Many of the effector domains function to modulate either the DNA directly - e.g. via methylation, cleaving, [ 28 ] or recombination of the target DNA sequence [ 29 ] - or by modulating its transcription rate - e.g. inhibiting transcription via repressor domains that block transcriptional machinery, [ 30 ] promoting transcription with activation domains that recruit transcriptional machinery to the site, [ 31 ] or histone - or other epigenetic -modification domains that affect chromatin state and the ability of transcriptional machinery to access the affected genes. [ 32 ] Epigenetic modification is a major theme in determining varying expression levels for genes, as explained by the idea that how tightly-wound the DNA strand is - from histones at the local level up to chromatin at the chromosomal level - can influence the accessibility of sequences of DNA to transcription machinery, thereby influencing the rate at which it can be transcribed. [ 23 ] If, instead of impacting the DNA strand directly, as described above, a designer zinc-finger protein instead affects epigenetic modification state for a target DNA region, modulation of gene expression could similarly be accomplished.
In the first case to successfully demonstrate the use of designer zinc-finger proteins to modulate gene expression in vivo , Choo et al. [ 26 ] designed a protein consisting of three zinc-finger domains that targeted a specific sequence on a BCR-ABL fusion oncogene . This specific oncogene is implicated in acute lymphoblastic leukemia . The oncogene typically enables leukemia cells to proliferate in the absence of specific growth factors, a hallmark of cancer . By including a nuclear localization signal with the tri-domain zinc-finger protein in order to facilitate binding of the protein to genomic DNA in the nucleus, Choo et al. were able to demonstrate that their engineered protein could block transcription of the oncogene in vivo. Leukemia cells became dependent on regular growth factors, bringing the cell cycle back under the control of normal regulation . [ 26 ]
The major approach to post-transcriptional gene modulation is via RNA interference (RNAi). The primary problem with using RNAi in gene modulation is drug delivery to target cells. [ 33 ] [ 34 ] RNAi gene modulation has been successfully applied to mice toward the treatment of a mouse model for inflammatory bowel disease. [ 35 ] This treatment utilized liposome-based, beta-7 integrin-targeted, stabilized nanoparticles entrapping short interfering RNAs (siRNAs). There are several other forms of RNAi delivery, including polyplex delivery, ligand-siRNA conjugates, naked delivery, inorganic particle delivery using gold nanoparticles, and site-specific local delivery. [ 36 ]
Designer zinc-finger proteins, on the other hand, have undergone some trials in the clinical arena . The efficacy and safety of EW-A-401, an engineered zinc-finger transcription factor, as a pharmacologic agent for treating claudication , a cardiovascular ailment, has been investigated in clinical trials. [ 37 ] The agent consists of an engineered plasmid DNA that prompts the patient's cells to produce an engineered transcription factor, the target of which is the vascular endothelial growth factor-A (VEGF-A) gene, which positively influences blood vessel development. Although not yet approved by the U.S. Food and Drug Administration (FDA), two Phase I clinical studies have been completed which identify this zinc-finger protein as a promising and safe potential therapeutic agent for treatment of peripheral arterial disease in humans. [ 38 ] | https://en.wikipedia.org/wiki/Therapeutic_gene_modulation |
The therapeutic index ( TI ; also referred to as therapeutic ratio ) is a quantitative measurement of the relative safety of a drug with regard to risk of overdose. It is a comparison of the amount of a therapeutic agent that causes toxicity to the amount that causes the therapeutic effect . [ 1 ] The related terms therapeutic window or safety window refer to a range of doses optimized between efficacy and toxicity, achieving the greatest therapeutic benefit without resulting in unacceptable side-effects or toxicity.
Classically, for clinical indications of an approved drug, TI refers to the ratio of the dose of the drug that causes adverse effects at an incidence/severity not compatible with the targeted indication (e.g. toxic dose in 50% of subjects, TD 50 ) to the dose that leads to the desired pharmacological effect (e.g. efficacious dose in 50% of subjects, ED 50 ). In contrast, in a drug development setting TI is calculated based on plasma exposure levels. [ 2 ]
In the early days of pharmaceutical toxicology, TI was frequently determined in animals as lethal dose of a drug for 50% of the population ( LD 50 ) divided by the minimum effective dose for 50% of the population (ED 50 ). In modern settings, more sophisticated toxicity endpoints are used.
For many drugs, severe toxicities in humans occur at sublethal doses, which limit their maximum dose. A higher safety-based therapeutic index is preferable to a lower one: an individual would have to take a much higher dose of a drug to reach the lethal threshold than the dose taken to induce the therapeutic effect of the drug. Conversely, a lower efficacy-based therapeutic index is preferable to a higher one: an individual would have to take a higher dose of a drug to reach the toxic threshold than the dose taken to induce the therapeutic effect of the drug.
Generally, a drug or other therapeutic agent with a narrow therapeutic range (i.e. having little difference between toxic and therapeutic doses) may have its dosage adjusted according to measurements of its blood levels in the person taking it. This may be achieved through therapeutic drug monitoring (TDM) protocols. TDM is recommended for use in the treatment of psychiatric disorders with lithium due to its narrow therapeutic range. [ 3 ]
Based on efficacy and safety of drugs, there are two types of therapeutic index:
$TI_{\text{safety}} = \frac{LD_{50}}{ED_{50}}$

It is desirable for the value of LD 50 to be as large as possible, to decrease the risk of lethal effects and increase the therapeutic window. In the above formula, TI safety increases as the difference between LD 50 and ED 50 increases; hence, a higher safety-based therapeutic index indicates a larger therapeutic window, and vice versa.
$TI_{\text{efficacy}} = \frac{ED_{50}}{TD_{50}}$

Ideally the ED 50 is as low as possible, for faster drug response and a larger therapeutic window, whereas a drug's TD 50 is ideally as large as possible, to decrease the risk of toxic effects. In the above equation, the greater the difference between ED 50 and TD 50 , the smaller the value of TI efficacy . Hence, a lower efficacy-based therapeutic index indicates a larger therapeutic window.
Similar to safety-based therapeutic index, the protective index uses TD 50 (median toxic dose) in place of LD 50 .
$\text{Protective index} = \frac{TD_{50}}{ED_{50}} = \frac{1}{TI_{\text{efficacy}}}$
For many substances, toxicity can occur at levels far below lethal effects (that cause death), and thus, if toxicity is properly specified, the protective index is often more informative about a substance's relative safety. Nevertheless, the safety-based therapeutic index ( $TI_{\text{safety}}$ ) is still useful as it can be considered an upper bound of the protective index, and the former also has the advantages of objectivity and easier comprehension.
Since the protective index (PI) is calculated as TD 50 divided by ED 50 , it follows directly that $TI_{\text{efficacy}}$ is the reciprocal of the protective index.
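As a worked illustration of the two indices and their reciprocal relationship, the short sketch below computes them from hypothetical median doses; the numbers are invented purely to show the arithmetic.

```python
def ti_safety(ld50, ed50):
    """Safety-based therapeutic index: LD50 / ED50 (larger is safer)."""
    return ld50 / ed50

def ti_efficacy(ed50, td50):
    """Efficacy-based therapeutic index: ED50 / TD50 (smaller is better)."""
    return ed50 / td50

def protective_index(td50, ed50):
    """Protective index: TD50 / ED50, the reciprocal of TI_efficacy."""
    return td50 / ed50

# Hypothetical median doses in mg/kg, invented for the arithmetic only.
ed50, td50, ld50 = 5.0, 50.0, 500.0
print(ti_safety(ld50, ed50))         # 100.0
print(ti_efficacy(ed50, td50))       # 0.1
print(protective_index(td50, ed50))  # 10.0, i.e. 1 / 0.1
```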
All the above types of therapeutic index can be used in both pre-clinical trials and clinical trials .
A low efficacy-based therapeutic index ( $TI_{\text{efficacy}}$ ) and a high safety-based therapeutic index ( $TI_{\text{safety}}$ ) are preferable for a drug to have a favorable efficacy vs safety profile. At the early discovery/development stage, the clinical TI of a drug candidate is unknown. However, understanding the preliminary TI of a drug candidate is of utmost importance as early as possible since TI is an important indicator of the probability of successful development. Recognizing drug candidates with potentially suboptimal TI at the earliest possible stage helps to initiate mitigation or potentially re-deploy resources.
TI is the quantitative relationship between pharmacological efficacy and toxicological safety of a drug, without considering the nature of pharmacological or toxicological endpoints themselves. However, to convert a calculated TI into something useful, the nature and limitations of pharmacological and/or toxicological endpoints must be considered. Depending on the intended clinical indication, the associated unmet medical need and/or the competitive situation, more or less weight can be given to either the safety or efficacy of a drug candidate in order to create a well balanced indication-specific efficacy vs safety profile.
In general, it is the exposure of a given tissue to drug (i.e. drug concentration over time), rather than dose, that drives the pharmacological and toxicological effects. For example, at the same dose there may be marked inter-individual variability in exposure due to polymorphisms in metabolism, drug-drug interactions (DDIs) or differences in body weight or environmental factors. These considerations emphasize the importance of using exposure instead of dose to calculate TI. To account for delays between exposure and toxicity, the TI for toxicities that occur after multiple dose administrations should be calculated using the exposure to drug at steady state rather than after administration of a single dose.
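To make the exposure-based calculation concrete, the sketch below computes a TI as a ratio of steady-state exposures (areas under the concentration-time curve) rather than doses. The trapezoidal integration is standard, but the concentration-time values are invented for illustration.

```python
def auc_trapezoid(times, concentrations):
    """Exposure as the area under the concentration-time curve over one
    dosing interval, computed with the trapezoidal rule."""
    pairs = zip(zip(times, concentrations), zip(times[1:], concentrations[1:]))
    return sum((t1 - t0) * (c0 + c1) / 2 for (t0, c0), (t1, c1) in pairs)

# Illustrative steady-state profiles (hours, mg/L) at the toxic and the
# efficacious exposure levels; all values are invented for the example.
t = [0, 2, 4, 8, 12]
c_toxic = [12.0, 9.0, 7.0, 4.0, 3.0]
c_effective = [3.0, 2.4, 1.8, 1.0, 0.7]

ti_exposure = auc_trapezoid(t, c_toxic) / auc_trapezoid(t, c_effective)
print(f"Exposure-based TI = {ti_exposure:.1f}")  # about 3.9
```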
A review published by Muller PY and Milton MN in Nature Reviews Drug Discovery critically discusses TI determination and interpretation in a translational drug development setting for both small molecules and biotherapeutics. [ 2 ]
The therapeutic index varies widely among substances, even within a related group.
For instance, the opioid painkiller remifentanil is very forgiving, offering a therapeutic index of 33,000:1, while diazepam , a benzodiazepine sedative-hypnotic and skeletal muscle relaxant , has a less forgiving therapeutic index of 100:1. [ 9 ] Morphine is less forgiving still, with a therapeutic index of 70.
Less safe are cocaine (a stimulant and local anaesthetic ) and ethanol (a sedative ): the therapeutic indices for these substances are 15:1 and 10:1, respectively. [ 10 ] Paracetamol , alternatively known by its trade names Tylenol or Panadol, also has a therapeutic index of 10. [ 11 ]
Even less safe are drugs such as digoxin , a cardiac glycoside ; its therapeutic index is approximately 2:1. [ 12 ]
Other examples of drugs with a narrow therapeutic range, which may require drug monitoring both to achieve therapeutic levels and to minimize toxicity, include dimercaprol , theophylline , warfarin and lithium carbonate .
Some antibiotics and antifungals require monitoring to balance efficacy with minimizing adverse effects , including: gentamicin , vancomycin , amphotericin B (nicknamed 'amphoterrible' for this very reason), and polymyxin B .
Radiotherapy aims to shrink tumors and kill cancer cells using high-energy radiation, delivered as x-rays , gamma rays , or charged or heavy particles. The therapeutic ratio in radiotherapy for cancer treatment is determined by the maximum radiation dose for killing cancer cells and the minimum radiation dose causing acute or late morbidity in cells of normal tissues. [ 13 ] Both of these parameters have sigmoidal dose–response curves . In a favorable case, the dose–response of tumor tissue is greater than that of normal tissue at the same dose, meaning that the treatment is effective on tumors and does not cause serious morbidity to normal tissue. Conversely, if the responses of the two tissues overlap, treatment is highly likely to cause serious morbidity to normal tissue while remaining ineffective against the tumor. The mechanism of radiation therapy is categorized as direct or indirect radiation; both induce DNA mutation or chromosomal rearrangement during the repair process. Direct radiation creates a DNA free radical through radiation energy deposition, which damages DNA. Indirect radiation arises from radiolysis of water, creating a free hydroxyl radical , hydronium and an electron; the hydroxyl radical transfers its radical to DNA, or, together with hydronium and the electron, can damage the base region of DNA. [ 14 ]
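The sigmoidal dose-response relationship described above can be made concrete with logistic curves for tumor control and normal-tissue complications. The parameters below (half-maximal doses and slope) are hypothetical values chosen only to show how separation between the two curves opens a usable therapeutic window.

```python
import math

def sigmoid_response(dose, d50, k):
    """Logistic dose-response: probability of the effect at a given dose,
    with half-maximal dose d50 and steepness k."""
    return 1.0 / (1.0 + math.exp(-k * (dose - d50)))

# Hypothetical parameters: tumor control half-maximal at 50 Gy,
# normal-tissue complications half-maximal at 70 Gy.
for dose in range(40, 81, 10):
    tcp = sigmoid_response(dose, d50=50, k=0.25)   # tumor control probability
    ntcp = sigmoid_response(dose, d50=70, k=0.25)  # complication probability
    print(f"{dose} Gy: tumor control {tcp:.2f}, complications {ntcp:.2f}")
```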
Cancer cells cause an imbalance of signals in the cell cycle . Irradiation studies of human cells have identified G1 and G2/M arrest as the major checkpoints. G1 arrest delays the repair mechanism before synthesis of DNA in S phase and mitosis in M phase, suggesting it is a key checkpoint for survival of cells. G2/M arrest occurs when cells need to repair after S phase but before mitotic entry. S phase is known to be the most resistant to radiation and M phase the most sensitive. p53 , a tumor suppressor protein that plays a role in G1 and G2/M arrest, has enabled understanding of the cell cycle's response to radiation. For example, irradiation of myeloid leukemia cells leads to an increase in p53 and a decrease in the level of DNA synthesis. Patients with ataxia telangiectasia have hypersensitivity to radiation due to delayed accumulation of p53. [ 15 ] In this case, cells are able to replicate without repair of their DNA, becoming prone to incidence of cancer. Most cells are in G1 and S phase. Irradiation at G2 phase showed increased radiosensitivity, and thus G1 arrest has been a focus for therapeutic treatment.
Irradiation of a tissue induces a response in both irradiated and non-irradiated cells. It was found that even cells up to 50–75 cell diameters distant from irradiated cells exhibit a phenotype of enhanced genetic instability such as micronucleation. [ 16 ] This suggests an effect on cell-to-cell communication such as paracrine and juxtacrine signaling . Normal cells do not lose their DNA repair mechanism, whereas cancer cells often lose it during radiotherapy. However, high-energy radiation can override the ability of damaged normal cells to repair, leading to an additional risk of carcinogenesis . This suggests a significant risk associated with radiation therapy. Thus, it is desirable to improve the therapeutic ratio during radiotherapy. Employing IG-IMRT, protons and heavy ions is likely to minimize the dose to normal tissues by altered fractionation. Molecular targeting of the DNA repair pathway can lead to radiosensitization or radioprotection. Examples are direct and indirect inhibitors of DNA double-strand breaks . Direct inhibitors target proteins (the PARP family ) and kinases (ATM, DNA-PKCs) that are involved in DNA repair. Indirect inhibitors target tumor cell signaling proteins such as EGFR and insulin-like growth factor . [ 13 ]
The effective therapeutic index can be affected by targeting , in which the therapeutic agent is concentrated in its desirable area of effect. For example, in radiation therapy for cancerous tumors, shaping the radiation beam precisely to the profile of a tumor in the "beam's eye view" can increase the delivered dose without increasing toxic effects, though such shaping might not change the therapeutic index. Similarly, chemotherapy or radiotherapy with infused or injected agents can be made more efficacious by attaching the agent to an oncophilic substance, as in peptide receptor radionuclide therapy for neuroendocrine tumors and in chemoembolization or radioactive microspheres therapy for liver tumors and metastases. This concentrates the agent in the targeted tissues and lowers its concentration in others, increasing efficacy and lowering toxicity.
Sometimes the term safety ratio is used, particularly when referring to psychoactive drugs used for non-therapeutic purposes, e.g. recreational use. [ 10 ] In such cases, the effective dose is the amount and frequency that produces the desired effect, which can vary, and can be greater or less than the therapeutically effective dose.
The Certain Safety Factor , also referred to as the Margin of Safety (MOS) , is the ratio of the dose that is lethal to 1% of the population to the dose that is effective in 99% of the population (LD 1 /ED 99 ). [ 17 ] This is a better safety index than the LD 50 for materials that have both desirable and undesirable effects, because it factors in the ends of the spectrum, where a dose may be necessary to produce a response in one person but can, at the same dose, be lethal in another.
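A worked sketch of this calculation follows, under the common simplifying assumption that individual dose thresholds are log-normally distributed, so that extreme percentiles can be derived from a median dose and a log-scale slope parameter; all values are invented for illustration.

```python
import math
from statistics import NormalDist

def dose_percentile(d50, sigma_log, p):
    """Dose at which a fraction p of the population responds, assuming
    log-normally distributed individual thresholds with median d50 and
    log-scale standard deviation sigma_log (a probit-style model)."""
    z = NormalDist().inv_cdf(p)
    return d50 * math.exp(sigma_log * z)

# Illustrative medians (mg/kg) and a shared slope parameter.
ed50, ld50, sigma = 5.0, 500.0, 0.6

ed99 = dose_percentile(ed50, sigma, 0.99)  # dose effective in 99%
ld1 = dose_percentile(ld50, sigma, 0.01)   # dose lethal to 1%
print(f"Margin of safety (LD1/ED99) = {ld1 / ed99:.1f}")  # about 6.1
```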
A therapeutic index does not consider drug interactions or synergistic effects. For example, the risk associated with benzodiazepines increases significantly when taken with alcohol, [ 18 ] [ 19 ] [ 20 ] depressants, [ 18 ] opiates, [ 19 ] [ 21 ] [ 22 ] [ 20 ] [ 23 ] or stimulants [ 24 ] when compared with being taken alone. Therapeutic index also does not take into account the ease or difficulty of reaching a toxic or lethal dose. This is more of a consideration for recreational drug users, as the purity can be highly variable.
The therapeutic window (or pharmaceutical window) of a drug is the range of drug dosages which can treat disease effectively without having toxic effects. [ 25 ] Medication with a small therapeutic window must be administered with care and control, frequently measuring blood concentration of the drug, to avoid harm. Medications with narrow therapeutic windows include theophylline , digoxin , lithium , and warfarin .
Optimal biological dose (OBD) is the quantity of a drug that will most effectively produce the desired effect while remaining in the range of acceptable toxicity.
The maximum tolerated dose (MTD) refers to the highest dose of a radiological or pharmacological treatment that will produce the desired effect without unacceptable toxicity . [ 26 ] [ 27 ] The purpose of administering MTD is to determine whether long-term exposure to a chemical might lead to unacceptable adverse health effects in a population, when the level of exposure is not sufficient to cause premature mortality due to short-term toxic effects . The maximum dose is used, rather than a lower dose, to reduce the number of test subjects (and, among other things, the cost of testing), to detect an effect that might occur only rarely. This type of analysis is also used in establishing chemical residue tolerances in foods. Maximum tolerated dose studies are also done in clinical trials .
MTD is an essential aspect of a drug's profile. All modern healthcare systems dictate a maximum safe dose for each drug, and generally have numerous safeguards (e.g. insurance quantity limits and government-enforced maximum quantity/time-frame limits) to prevent the prescription and dispensing of quantities exceeding the highest dosage which has been demonstrated to be safe for members of the general patient population.
Patients are often unable to tolerate the theoretical MTD of a drug due to the occurrence of side-effects which are not innately a manifestation of toxicity (not considered to severely threaten a patient's health) but cause the patient sufficient distress and/or discomfort to result in non-compliance with treatment. Such examples include emotional "blunting" with antidepressants, pruritus with opiates , and blurred vision with anticholinergics . | https://en.wikipedia.org/wiki/Therapeutic_index |
A therapeutic interfering particle is an antiviral preparation that reduces the replication rate and pathogenesis of a particular viral infectious disease. A therapeutic interfering particle is typically a biological agent (i.e., nucleic acid) engineered from portions of the viral genome being targeted. Similar to Defective Interfering Particles (DIPs) , the agent competes with the pathogen within an infected cell for critical viral replication resources, reducing the viral replication rate and resulting in reduced pathogenesis. [ 1 ] [ 2 ] But, in contrast to DIPs, TIPs are engineered to have an in vivo basic reproductive ratio ( R 0 ) that is greater than 1 ( R 0 >1). [ 3 ] The term "TIP" was first introduced in 2011 [ 4 ] based on models of its mechanism-of-action from 2003. [ 3 ] Given their unique R 0 >1 mechanism of action, TIPs exhibit high barriers to the evolution of antiviral resistance [ 5 ] and are predicted to be resistance proof. [ 4 ] Intervention with therapeutic interfering particles can be prophylactic (to prevent or ameliorate the effects of a future infection), or a single-administration therapeutic (to fight a disease that has already occurred, such as HIV or COVID-19). [ 6 ] [ 4 ] [ 3 ] [ 7 ] [ 5 ] Synthetic DIPs that rely on stimulating innate antiviral immune responses (i.e., interferon) were proposed for influenza in 2008 [ 8 ] and shown to protect mice to differing extents [ 9 ] [ 10 ] [ 11 ] but are technically distinct from TIPs due to their alternate molecular mechanism of action which has not been predicted to have a similarly high barrier to resistance. [ 12 ] Subsequent work tested the pre-clinical efficacy of TIPs against HIV, [ 6 ] a synthetic DIP for SARS-CoV-2 (in vitro), [ 7 ] and a TIP for SARS-CoV-2 (in vivo). [ 5 ] [ 13 ]
Therapeutic Interfering Particles, often referred to as TIPs, are typically synthetic, engineered versions of naturally occurring defective interfering particles (DIPs), in which critical portions of the virus genome are deleted, rendering the TIP unable to replicate on its own. Often a TIP has the vast majority of the virus genome deleted. [ 5 ] However, TIPs are engineered to retain specific elements of the genome that allow them to efficiently compete with the wild-type virus for critical replication resources inside an infected cell. TIPs thereby deprive wild-type virus of replication material through competitive inhibition , [ 14 ] and therapeutically reduce viral load. [ 6 ] Competitive inhibition enables TIPs to conditionally replicate and efficiently mobilize between cells, essentially "piggybacking" on wild-type virus, to act as single-administration antivirals with a high genetic barrier to the evolution of resistance. [ 15 ] TIPs have been engineered for HIV [ 6 ] [ 14 ] and SARS-CoV-2, [ 7 ] and do not induce innate immune responses such as interferon. [ 5 ]
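The competitive-inhibition mechanism can be illustrated with a toy within-host model in which TIP genomes replicate only by parasitizing the machinery of wild-type virus in co-infected cells. The sketch below is a deliberately simplified illustration with invented parameters and functional forms, not any published TIP model.

```python
# Toy within-host competition model: in co-infected cells, TIP genomes
# divert replication resources away from the wild-type virus.
# Purely illustrative; parameters and functional forms are invented.

def simulate(steps=50, dt=0.1, r=1.0, k=2.0, c=0.3):
    """Integrate wild-type (v) and TIP (tip) burdens with simple Euler
    steps; k scales how strongly TIPs capture replication resources."""
    v, tip = 1.0, 0.1  # initial burdens in arbitrary units
    for _ in range(steps):
        shared = v / (1.0 + k * tip)  # resources left to the wild-type virus
        dv = r * shared - c * v       # wild-type growth minus clearance
        # TIPs replicate only via wild-type machinery (the v factor).
        dtip = r * k * tip * v / (1.0 + k * tip) - c * tip
        v = max(v + dv * dt, 0.0)
        tip = max(tip + dtip * dt, 0.0)
    return v, tip

print(simulate()[0])       # wild-type burden with the TIP present
print(simulate(k=0.0)[0])  # wild-type burden without TIP interference
```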
Three mechanistic criteria define a TIP: (1) it is unable to replicate on its own, replicating only in cells infected by the wild-type virus; (2) it interferes with wild-type replication by competing for critical replication resources; and (3) it has an in vivo basic reproductive ratio greater than 1 ( R 0 >1).
As a result of these mechanistic criteria, TIPs have been referred to as "piggyback" [ 17 ] or alternatively as "virus hijackers". [ 18 ] [ 19 ]
TIPs do not stimulate or function through the induction of innate cellular immune responses (such as interferon). In fact, stimulation of innate cellular antiviral mechanisms has been shown to contravene criterion (#3) (i.e., R 0 >1), as innate immune mechanisms inhibit efficient mobilization of TIPs. [ 3 ] As such, several VLP-based therapy proposals for influenza and other viruses [ 20 ] that do not satisfy these criteria are DIPs, but not TIPs.
TIPs are built off the phenomenon of defective interfering particles (DIPs) discovered by Preben Von Magnus in the early 1950s, during his work on influenza viruses. [ 21 ] [ 22 ] [ 23 ] [ 2 ] DIPs are spontaneously arising virus mutants, first described by von Magnus as "incomplete" viruses, in which a critical portion of the viral genome has been lost. Direct evidence for DIPs was only found in the 1960s by Hackett, who observed the presence of "stumpy" particles of vesicular stomatitis virus in electron micrographs, [ 24 ] and the DIP terminology was formalized in 1970 by Huang and Baltimore. [ 25 ] DIPs have been reported for many classes of DNA and RNA viruses in clinical and laboratory settings.
Whereas DIPs had been proposed as potential therapeutics that would act via stimulation of the immune system [ 20 ] – a concept [ 8 ] [ 26 ] tested in influenza with mixed results [ 9 ] [ 10 ] – the TIP R 0 >1 mechanism of action was first proposed in 2003, [ 3 ] with the term "TIP" introduced and the unique benefits of the R 0 >1 mechanism shown in 2011. [ 4 ]
In 2016 the US government launched a major funding initiative ( DARPA INTERCEPT, [ 26 ] [ 27 ] [ 28 ] ) to discover and engineer antiviral TIPs for diverse viruses, based on prior investments from the US National Institutes of Health . [ 29 ] This program led to renewed interest in the concept of interfering particles as therapies with the development of technologies to isolate DIPs for influenza [ 30 ] [ 31 ] [ 32 ] and engineer TIPs for HIV and Zika virus. [ 14 ] The first successful experimental demonstration of the TIP concept was reported in 2019 [ 6 ] for HIV, and the discovery of a TIP for SARS-CoV-2 was reported in 2020 [ 7 ] and results on the effect on hamsters in 2021. [ 33 ] In 2020, the US government funded first-in-human clinical trials of TIPs. [ 34 ] [ 35 ] | https://en.wikipedia.org/wiki/Therapeutic_interfering_particle |
A therapy or medical treatment is the attempted remediation of a health problem, usually following a medical diagnosis . Both words, treatment and therapy , are often abbreviated tx , Tx , or T x .
As a rule, each therapy has indications and contraindications . There are many different types of therapy. Not all therapies are effective . Many therapies can produce unwanted adverse effects .
Treatment and therapy are often synonymous, especially in the usage of health professionals . However, in the context of mental health , the term therapy may refer specifically to psychotherapy .
The words care , therapy , treatment , and intervention overlap in a semantic field , and thus they can be synonymous depending on context . Moving rightward through that order, the connotative level of holism decreases and the level of specificity (to concrete instances) increases. Thus, in health-care contexts (where its senses are always noncount ), the word care tends to imply a broad idea of everything done to protect or improve someone's health (for example, as in the terms preventive care and primary care , which connote ongoing action), although it sometimes implies a narrower idea (for example, in the simplest cases of wound care or postanesthesia care , a few particular steps are sufficient, and the patient's interaction with the provider of such care is soon finished). In contrast, the word intervention tends to be specific and concrete, and thus the word is often countable ; for example, one instance of cardiac catheterization is one intervention performed, and coronary care (noncount) can require a series of interventions (count). At the extreme, the piling on of such countable interventions amounts to interventionism , a flawed model of care lacking holistic circumspection —merely treating discrete problems (in billable increments) rather than maintaining health. Therapy and treatment , in the middle of the semantic field, can connote either the holism of care or the discreteness of intervention , with context conveying the intent in each use. Accordingly, they can be used in both noncount and count senses (for example, therapy for chronic kidney disease can involve several dialysis treatments per week ).
The words aceology and iamatology are obscure and obsolete synonyms referring to the study of therapies.
The English word therapy comes via Latin therapīa from Ancient Greek : θεραπεία and literally means "curing" or "healing". [ 1 ] The term therapeusis is a somewhat archaic doublet of the word therapy .
Therapy as a treatment for physical or mental condition is based on knowledge usually from one of three separate fields (or a combination of them): conventional medicine (allopathic, Western biomedicine, relying on scientific approach and evidence-based practice), traditional medicine (age-old cultural practices), and alternative medicine (healthcare procedures "not readily integrated into the dominant healthcare model"). [ 2 ]
Levels of care classify health care into categories of chronology, priority, or intensity, as follows:
Treatment decisions often follow formal or informal algorithmic guidelines. Treatment options can often be ranked or prioritized into lines of therapy : first-line therapy , second-line therapy , third-line therapy , and so on. First-line therapy (sometimes referred to as induction therapy , primary therapy , or front-line therapy ) [ 10 ] is the first therapy that will be tried. Its priority over other options is usually either: (1) formally recommended on the basis of clinical trial evidence for its best-available combination of efficacy, safety, and tolerability or (2) chosen based on the clinical experience of the physician. If a first-line therapy either fails to resolve the issue or produces intolerable side effects , additional (second-line) therapies may be substituted or added to the treatment regimen, followed by third-line therapies, and so on.
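As a hedged illustration of how such a ranking might be traversed, the sketch below steps through hypothetical lines of therapy, advancing to the next line when a line fails or produces intolerable side effects. The therapy names and the decision function are invented placeholders, not clinical guidance.

```python
# Hypothetical illustration of "lines of therapy" sequencing (not clinical guidance).
lines_of_therapy = ["first-line therapy A", "second-line therapy B", "third-line therapy C"]

def treat(patient_responds_to):
    for line, therapy in enumerate(lines_of_therapy, start=1):
        outcome = patient_responds_to(therapy)   # placeholder decision function
        if outcome == "resolved":
            return f"resolved on line {line}: {therapy}"
        # "failed" or "intolerable side effects" -> move to the next line
    return "all listed lines exhausted; further options needed"

# Example: a hypothetical patient who only responds to the second-line option.
print(treat(lambda therapy: "resolved" if "B" in therapy else "failed"))
```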
An example of a context in which the formalization of treatment algorithms and the ranking of lines of therapy is very extensive is chemotherapy regimens . Because of the great difficulty in successfully treating some forms of cancer, one line after another may be tried. In oncology the count of therapy lines may reach 10 or even 20.
Often multiple therapies may be tried simultaneously ( combination therapy or polytherapy). Thus combination chemotherapy is also called polychemotherapy, whereas chemotherapy with one agent at a time is called single-agent therapy or monotherapy. Single-agent therapy is a care algorithm that focuses on one specific drug or procedure, utilizing a single therapeutic agent rather than combining multiple ones. [ 11 ] Multiagent therapy is treatment with two or more drugs or procedures. Comprehensive therapy combines various forms of medical treatment to provide the most effective care for patients. [ 12 ]
Adjuvant therapy is therapy given in addition to the primary, main, or initial treatment, but simultaneously (as opposed to second-line therapy). Neoadjuvant therapy is therapy that is begun before the main therapy. Thus one can consider surgical excision of a tumor as the first-line therapy for a certain type and stage of cancer even though radiotherapy is used before it; the radiotherapy is neoadjuvant (chronologically first but not primary in the sense of the main event). Premedication is conceptually not far from this, but the words are not interchangeable; cytotoxic drugs to put a tumor "on the ropes" before surgery delivers the "knockout punch" are called neoadjuvant chemotherapy, not premedication, whereas things like anesthetics or prophylactic antibiotics before dental surgery are called premedication.
Step therapy or stepladder therapy is a specific type of prioritization by lines of therapy. It is controversial in American health care because unlike conventional decision-making about what constitutes first-line, second-line, and third-line therapy, which in the U.S. reflects safety and efficacy first and cost only according to the patient's wishes, step therapy attempts to mix cost containment by someone other than the patient (third-party payers) into the algorithm.
Therapy freedom refers to the prescription of an unlicensed medicine (one without a marketing authorization issued by the licensing authority of the country) [ 13 ] and involves a negotiation between individual and group rights. Comprehensive research covering Australia, the Czech Republic, India, Israel, Italy, the Netherlands, Spain, Serbia, Sweden, the UK, and the USA showed that the reported rate of unlicensed medicine prescription ranges from 0.3% to 35% depending on the country. [ 13 ] In many jurisdictions, therapy freedom is limited to cases in which no treatment exists that is both well-established and more efficacious. [ 14 ]
Treatments can be classified according to the method of treatment: | https://en.wikipedia.org/wiki/Therapy |
Theresa M. Reineke (born January 1, 1972) is an American chemist and Distinguished McKnight University Professor at the University of Minnesota . She designs sustainable, environmentally friendly polymer-based delivery systems for targeted therapeutics. She is the associate editor of ACS Macro Letters .
Reineke earned her bachelor's degree at University of Wisconsin–Eau Claire . [ 1 ] She moved to Arizona State University for her graduate studies and earned a master's degree in 1998. [ 1 ] [ 2 ] Reineke was a PhD student at the University of Michigan , where she was supervised by Michael O'Keeffe and Omar M. Yaghi . [ 1 ] [ 3 ] She was awarded the Wirt and Mary Cornell Prize for Outstanding Graduate Research. Reineke joined the California Institute of Technology as a National Institutes of Health postdoctoral fellow in 2000. [ 1 ]
Reineke joined the University of Minnesota in 2011. Her research group focuses on the design, characterisation and functionalisation of macromolecular systems. [ 4 ] [ 5 ] These macromolecules include biocompatible polymers that can deliver DNA for regenerative medicine as well as targeted therapeutic treatments. [ 4 ] She was made a Lloyd H. Reyerson Professor with tenure at the University of Minnesota in 2011. [ 1 ] Reineke has published over 140 papers. [ 6 ]
Nucleic acids can have an unparalleled specificity for targets inside a cell, but need to be compacted into nanostructures ( polyplexes ) that can enter cells. [ 7 ] Reineke designs polymer-based transportation systems for nucleic acids. [ 7 ] These polymer vehicles can improve the solubility and bioavailability of drugs. [ 8 ] These often incorporate carbohydrates , which have an affinity for polyplexes and are non-toxic. [ 7 ] She is a member of the University of Minnesota Centre for Sustainable Polymers, synthesising polymers from sustainable ingredients. The carbohydrate units within her polymer drug delivery systems are a widely available, renewable resource. [ 9 ] The sustainable polymers designed by Reineke include poly(ester - thioethers ). [ 9 ]
Reineke used reversible addition−fragmentation chain-transfer polymerization for the synthesis of diblock terpolymers that can be used for targeted drug delivery. [ 10 ] She used spray dried dispersions of the polymer with the drug probucol . [ 11 ]
Reineke was made a University of Minnesota Distinguished McKnight University Professor in 2017. [ 1 ] She is the associate editor of ACS Macro Letters and on the Advisory Board of Biomacromolecules , Bioconjugate Chemistry and Polymer Chemistry . [ 1 ] She is a member of the American Chemical Society Polymer Division. [ 12 ] Her work has been supported by a National Science Foundation CAREER Award , a Sloan Research Fellowship , the National Institutes of Health and the National Academy of Sciences . [ 13 ] | https://en.wikipedia.org/wiki/Theresa_M._Reineke
Thermal-transfer printing is a digital printing method in which material is applied to paper (or some other material) by melting a coating of ribbon so that it stays glued to the material on which the print is applied. It contrasts with direct thermal printing , where no ribbon is present in the process.
Thermal transfer is preferred over direct thermal printing on surfaces that are heat-sensitive or when higher durability of printed matter (especially against heat) is desired. Thermal transfer is a popular print process particularly used for the printing of identification labels. It is the most widely used printing process in the world for the printing of high-quality barcodes. Printers like label makers can laminate the print for added durability.
Thermal transfer printing was invented by SATO corporation. The world's first thermal-transfer label printer SATO M-2311 was produced in 1981. [ 1 ]
Thermal-transfer printing is done by melting wax within the print heads of a specialized printer. The thermal-transfer print process utilises three main components: a non-movable print head, a carbon ribbon (the ink) and a substrate to be printed, which would typically be paper, synthetics, card or textile materials. These three components effectively form a sandwich with the ribbon in the middle. A thermally compliant print head, in combination with the electrical properties of the ribbon and the correct rheological properties of the ribbon ink are all essential in producing a high-quality printed image.
Print heads are available in 203 dpi, 300 dpi and 600 dpi resolution options. Each dot is addressed independently, and when a dot is electronically addressed, it immediately heats up to a pre-set (adjustable) temperature. The heated element immediately melts the wax- or resin-based ink on the side of the ribbon film facing the substrate, and this process, in combination with the constant pressure being applied by the print-head locking mechanism immediately transfers it onto the substrate. When a dot "turns off", that element of the print head immediately cools down, and that part of the ribbon thereby stops melting/printing. As the substrate comes out of the printer, it is completely dry and can be used immediately.
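A rough sketch of the dot addressing described above is given below. The resolution, print width, and bit pattern are assumed example values; real printer firmware drives the heating elements through hardware rather than a Python list.

```python
# Sketch: mapping a print width and resolution to individually addressed dots.
dpi = 203                      # assumed print-head resolution (dots per inch)
width_inches = 4.0             # assumed print width
num_elements = int(dpi * width_inches)   # ~812 heating elements across the head

# One raster line: True = element heated (ink transferred), False = element idle.
line = [False] * num_elements
for i in range(0, num_elements, 8):      # hypothetical pattern, e.g. barcode bars
    for j in range(i, min(i + 4, num_elements)):
        line[j] = True

print(f"{num_elements} elements per line, {sum(line)} heated for this raster line")
```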
Carbon ribbons are on rolls and are fitted onto a spindle or reel holder within the printer. The used ribbon is rewound by a take-up spindle, forming a roll of "used" ribbon. It is termed a "one-trip" ribbon because once it has been rewound, the used roll is discarded and replaced with a new one. If one were to hold a strip of used carbon ribbon up to the light, one would see an exact negative of the images that have been printed. The main benefit of using a one-trip thermal transfer ribbon is that providing the correct settings are applied prior to printing, a 100% density of printed image is guaranteed, in contrast to a pre-inked ribbon on a dot-matrix impact printer ribbon, which gradually fades with usage.
Thermal-printing technology can be used to produce color images by adhering a wax-based ink onto paper. As the paper and ribbon travel in unison beneath the thermal print head, the wax-based ink from the transfer ribbon melts onto the paper. When cooled, the wax is permanently adhered to the paper. This type of thermal printer uses a like-sized panel of ribbon for each page to be printed, regardless of the contents of the page. Monochrome printers have a black panel for each page to be printed, while color printers have either three ( CMY ) or four ( CMYK ) colored panels for each page. Unlike dye-sublimation printers , these printers cannot vary the dot intensity, which means that images must be dithered . Although acceptable in quality, the printouts from these printers cannot compare with modern inkjet printers and color laser printers . Currently, this type of printer is rarely used for full-page printing, but is now employed for industrial label printing due to its waterfastness and speed. These printers are considered highly reliable due to their small number of moving parts. Printouts from color thermal printers using wax are sensitive to abrasion, as the wax ink can be scraped, rubbed off, or smeared. However, wax-resin compounds and full resins can be used on materials such as polypropylene or polyester in order to increase durability.
So-called " solid ink " or "phaser" printers were developed by Tektronix and later by Xerox (who acquired Tektronix's printer division). Printers like the Xerox Phaser 8400 use 1 cubic inch (16 cm 3 ) rectangular solid-state ink blocks (similar in consistency to candle wax), which are loaded into a system similar to a stapler magazine in the top of the printer. The ink blocks are melted, and the ink is transferred onto a rotating oil-coated print drum using a piezo inkjet head. The paper then passes over the print drum, at which time the image is transferred, or transfixed, to the page. This system is similar to water-based inkjets, provided that the ink has low viscosity at the jetting temperature 60 °C (140 °F). Printout properties are similar to those mentioned above, although these printers can be configured to produce extremely high-quality results and are far more economical, as they only use the ink needed for the printout, rather than an entire ribbon panel. Costs of upkeep and ink are comparable to color laser printers, while "standby" power usage can be very high, about 200 W.
MicroDry is a computer printing system developed by the Alps Electric of Japan. It is a wax/resin-transfer system using individual colored thermal ribbon cartridges and can print in process color using cyan, magenta, yellow, and black cartridges, as well as such spot-color cartridges as white, metallic silver, and metallic gold, on a wide variety of paper and transparency stock. Certain MicroDry printers can also operate in dye-sublimation mode, using special cartridges and paper.
Usage of TT printers in industry includes:
Barcode printers typically come in fixed sizes of 4, 6 or 8 inches (100, 150 or 200 mm) wide. Although a number of manufacturers have made differing sizes in the past, most have now standardised on these sizes. The main application of these printers is to produce barcode labels for product and shipping identification. | https://en.wikipedia.org/wiki/Thermal-transfer_printing |
In solid-state physics , the thermal Hall effect , also known as the Righi–Leduc effect , named after independent co-discoverers Augusto Righi and Sylvestre Anatole Leduc , [ 1 ] is the thermal analog of the Hall effect . Given a thermal gradient across a solid, this effect describes the appearance of an orthogonal temperature gradient when a magnetic field is applied.
For conductors , a significant portion of the thermal current is carried by the electrons. In particular, the Righi–Leduc effect describes the heat flow resulting from a perpendicular temperature gradient and vice versa. The Maggi–Righi–Leduc effect , named after Gian Antonio Maggi [ it ] , describes changes in thermal conductivity when placing a conductor in a magnetic field . [ 2 ]
A thermal Hall effect has also been measured in paramagnetic insulators, called the " phonon Hall effect". [ 3 ] In this case, there are no charged currents in the solid, so the magnetic field cannot exert a Lorentz force . Phonon thermal Hall effects have been measured in various classes of non-magnetic insulating solids, [ 4 ] [ 5 ] [ 6 ] [ 7 ] but the exact mechanism giving rise to this phenomenon is largely unknown. An analogous thermal Hall effect for neutral particles exists in polyatomic gases, known as the Senftleben–Beenakker effect .
Measurements of the thermal Hall conductivity are used to distinguish between the electronic and lattice contributions to thermal conductivity. These measurements are especially useful when studying superconductors . [ 8 ]
Given a conductor or semiconductor with a temperature difference in the x-direction and a magnetic field B perpendicular to it in the z-direction, a temperature difference can occur in the transverse y-direction.
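One commonly quoted form of this relation (a sketch, assuming the standard sign convention and using the thermal Hall coefficient discussed below, whose tesla⁻¹ units make the expression dimensionally consistent) is

\[ \frac{\partial T}{\partial y} \;=\; R_{\mathrm{TH}}\, B_{z}\, \frac{\partial T}{\partial x} . \]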
The Righi–Leduc effect is a thermal analogue of the Hall effect. With the Hall effect, an externally applied electrical voltage causes an electrical current to flow. The mobile charge carriers (usually electrons) are transversely deflected by the magnetic field due to the Lorentz force . In the Righi–Leduc effect, the temperature difference causes the mobile charge carriers to flow from the warmer end to the cooler end. Here, too, the Lorentz force causes a transverse deflection. Since the electrons transport heat, one side is heated more than the other.
The thermal Hall coefficient R_TH (sometimes also called the Righi–Leduc coefficient) depends on the material and has units of tesla⁻¹. It is related to the Hall coefficient R_H through the electrical conductivity σ.
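A hedged sketch of this relation, assuming the convention in which the product of the Hall coefficient and the electrical conductivity carries the tesla⁻¹ units quoted above, is

\[ R_{\mathrm{TH}} \;=\; R_{\mathrm{H}}\,\sigma . \]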
This condensed matter physics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thermal_Hall_effect |
Thermal Monitor 2 (TM2) is a throttling control method used on LGA 775 versions of the Core 2 , Pentium Dual-Core , Pentium D , Pentium 4 and Celeron processors and also on the Pentium M series of processors. [ 1 ] TM2 reduces processor temperature by lowering the CPU clock multiplier , and thereby the processor core speed. [ 2 ]
In contrast, Thermal Monitor 1 inserts an idle cycle into the CPU for thermal control without decreasing multipliers.
TM1 and TM2 are associated with DTS/PECI — Digital Temperature Sensor/ Platform Environment Control Interface . [ 3 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thermal_Monitor_2 |
Thermal analysis is a branch of materials science where the properties of materials are studied as they change with temperature . Several methods are commonly used – these are distinguished from one another by the property which is measured:
Simultaneous thermal analysis generally refers to the simultaneous application of thermogravimetry and differential scanning calorimetry to one and the same sample in a single instrument. The test conditions are perfectly identical for the thermogravimetric analysis and differential scanning calorimetry signals (same atmosphere, gas flow rate, vapor pressure of the sample, heating rate, thermal contact to the sample crucible and sensor, radiation effect, etc.). The information gathered can even be enhanced by coupling the simultaneous thermal analysis instrument to an Evolved Gas Analyzer like Fourier transform infrared spectroscopy or mass spectrometry .
Other, less common, methods measure the sound or light emission from a sample, or the electrical discharge from a dielectric material, or the mechanical relaxation in a stressed specimen. The essence of all these techniques is that the sample's response is recorded as a function of temperature (and time).
It is usual to control the temperature in a predetermined way – either by a continuous increase or decrease in temperature at a constant rate (linear heating/cooling) or by carrying out a series of determinations at different temperatures (stepwise isothermal measurements). More advanced temperature profiles have been developed which use an oscillating (usually sine or square wave) heating rate (Modulated Temperature Thermal Analysis) or modify the heating rate in response to changes in the system's properties (Sample Controlled Thermal Analysis).
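As a rough sketch of such a temperature programme, the snippet below generates a linear heating ramp with a superimposed sinusoidal modulation; the underlying heating rate, modulation amplitude, and period are assumed example values.

```python
import math

# Sketch of a modulated-temperature programme: linear ramp + sinusoidal oscillation.
T0 = 25.0          # start temperature, deg C (assumed)
beta = 5.0 / 60.0  # underlying heating rate, deg C per second (5 K/min, assumed)
amplitude = 1.0    # modulation amplitude, deg C (assumed)
period = 60.0      # modulation period, seconds (assumed)

def setpoint(t_seconds):
    return T0 + beta * t_seconds + amplitude * math.sin(2 * math.pi * t_seconds / period)

for t in range(0, 301, 30):
    print(f"t = {t:3d} s  ->  setpoint {setpoint(t):6.2f} deg C")
```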
In addition to controlling the temperature of the sample, it is also important to control its environment (e.g. atmosphere). Measurements may be carried out in air or under an inert gas (e.g. nitrogen or helium). Reducing or reactive atmospheres have also been used and measurements are even carried out with the sample surrounded by water or other liquids. Inverse gas chromatography is a technique which studies the interaction of gases and vapours with a surface - measurements are often made at different temperatures so that these experiments can be considered to come under the auspices of Thermal Analysis.
Atomic force microscopy uses a fine stylus to map the topography and mechanical properties of surfaces to high spatial resolution. By controlling the temperature of the heated tip and/or the sample a form of spatially resolved thermal analysis can be carried out.
Thermal analysis is also often used as a term for the study of heat transfer through structures. Many of the basic engineering data for modelling such systems comes from measurements of heat capacity and thermal conductivity .
Polymers represent another large area in which thermal analysis finds strong applications. Thermoplastic polymers are commonly found in everyday packaging and household items, and analysis of the raw materials, the effects of the many additives used (including stabilisers and colours) and fine-tuning of the moulding or extrusion processing can be achieved using differential scanning calorimetry. An example is oxidation induction time by differential scanning calorimetry, which can determine the amount of oxidation stabiliser present in a thermoplastic (usually a polyolefin) polymer material. Compositional analysis is often made using thermogravimetric analysis, which can separate fillers, polymer resin and other additives. Thermogravimetric analysis can also give an indication of thermal stability and the effects of additives such as flame retardants. (See J. H. Flynn and L. A. Wall, "General Treatment of the Thermogravimetry of Polymers", J. Res. Nat. Bur. Standards, Part A, 1966, Vol. 70A, No. 5, p. 487.)
Thermal analysis of composite materials, such as carbon fibre composites or glass epoxy composites, is often carried out using dynamic mechanical analysis, which can measure the stiffness of materials by determining the modulus and damping (energy absorbing) properties of the material. Aerospace companies often employ these analysers in routine quality control to ensure that products being manufactured meet the required strength specifications. Formula 1 racing car manufacturers also have similar requirements. Differential scanning calorimetry is used to determine the curing properties of the resins used in composite materials, and can also confirm whether a resin can be cured and how much heat is evolved during that process. Application of predictive kinetics analysis can help to fine-tune manufacturing processes. Another example is that thermogravimetric analysis can be used to measure the fibre content of composites by heating a sample to remove the resin and then determining the mass remaining.
Production of many metals ( cast iron , grey iron , ductile iron , compacted graphite iron , 3000 series aluminium alloys , copper alloys , silver , and complex steels ) are aided by a production technique also referred to as thermal analysis. [ 2 ] A sample of liquid metal is removed from the furnace or ladle and poured into a sample cup with a thermocouple embedded in it. The temperature is then monitored, and the phase diagram arrests ( liquidus , eutectic , and solidus ) are noted. From this information chemical composition based on the phase diagram can be calculated, or the crystalline structure of the cast sample can be estimated especially for silicon morphology in hypo-eutectic Al-Si cast alloys. [ 3 ] Strictly speaking these measurements are cooling curves and a form of sample controlled thermal analysis whereby the cooling rate of the sample is dependent on the cup material (usually bonded sand) and sample volume which is normally a constant due to the use of standard sized sample cups. To detect phase evolution and corresponding characteristic temperatures, cooling curve and its first derivative curve should be considered simultaneously. Examination of cooling and derivative curves is done by using appropriate data analysis software. The process consists of plotting, smoothing and curve fitting as well as identifying the reaction points and characteristic parameters. This procedure is known as Computer-Aided Cooling Curve Thermal Analysis. [ 4 ]
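The numerical side of the procedure can be sketched as below: the cooling curve is smoothed, its first derivative is taken, and characteristic arrest temperatures are identified where the cooling rate approaches zero. The synthetic data, smoothing window, and threshold are invented example values rather than any particular commercial implementation.

```python
import numpy as np

# Sketch of computer-aided cooling curve analysis on synthetic data.
t = np.linspace(0, 300, 601)                       # time, s
T = 700 - 0.8 * t                                   # baseline cooling, deg C
T = np.where((t > 100) & (t < 140), T[200], T)      # synthetic thermal arrest (plateau)

# Smooth with a simple moving average, then take the first derivative dT/dt.
kernel = np.ones(11) / 11.0
T_smooth = np.convolve(T, kernel, mode="same")
dTdt = np.gradient(T_smooth, t)

# Arrest points: where the cooling rate is close to zero (threshold is arbitrary).
arrest_mask = np.abs(dTdt) < 0.1
if arrest_mask.any():
    print("arrest plateau near", round(float(T_smooth[arrest_mask].mean()), 1), "deg C")
```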
Advanced techniques use differential curves to locate endothermic inflection points, such as gas holes and shrinkage, or exothermic phases, such as carbides, beta crystals, intercrystalline copper, magnesium silicide, iron phosphides and other phases, as they solidify. Detection limits seem to be around 0.01% to 0.03% of volume.
In addition, integration of the area between the zero curve and the first derivative is a measure of the specific heat of that part of the solidification which can lead to rough estimates of the percent volume of a phase. (Something has to be either known or assumed about the specific heat of the phase versus the overall specific heat.) In spite of this limitation, this method is better than estimates from two dimensional micro analysis, and a lot faster than chemical dissolution.
Most foods are subjected to variations in their temperature during production, transport, storage, preparation and consumption, e.g., pasteurization , sterilization , evaporation , cooking , freezing , chilling, etc. Temperature changes cause alterations in the physical and chemical properties of food components which influence the overall properties of the final product, e.g., taste, appearance, texture and stability. Chemical reactions such as hydrolysis , oxidation or reduction may be promoted, or physical changes, such as evaporation, melting , crystallization , aggregation or gelation may occur. A better understanding of the influence of temperature on the properties of foods enables food manufacturers to optimize processing conditions and improve product quality. It is therefore important for food scientists to have analytical techniques to monitor the changes that occur in foods when their temperature varies. These techniques are often grouped under the general heading of thermal analysis. In principle, most analytical techniques can be used, or easily adapted, to monitor the temperature-dependent properties of foods, e.g., spectroscopic ( nuclear magnetic resonance , UV -visible, infrared spectroscopy , fluorescence ), scattering ( light , X-rays , neutrons ), physical (mass, density, rheology , heat capacity ) etc. Nevertheless, at present the term thermal analysis is usually reserved for a narrow range of techniques that measure changes in the physical properties of foods with temperature (TG/DTG, [ clarification needed ] differential thermal analysis, differential scanning calorimetry and transition temperature).
Power dissipation is an important issue in present-day PCB [ clarification needed ] design. Power dissipation will result in temperature difference and pose a thermal problem to a chip. In addition to the issue of reliability, excess heat will also negatively affect electrical performance and safety. The working temperature of an IC should therefore be kept below the maximum allowable limit of the worst case. In general, the temperatures of junction and ambient are 125 °C and 55 °C, respectively.
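A back-of-the-envelope check of this constraint, following the standard T_J = T_A + P·θ_JA relation, is sketched below; the package thermal resistance and power values are assumed for illustration only.

```python
# Sketch: junction-temperature check using T_J = T_A + P * theta_JA.
T_ambient = 55.0        # worst-case ambient, deg C (from the text)
T_junction_max = 125.0  # allowable junction temperature, deg C (from the text)
theta_ja = 20.0         # junction-to-ambient thermal resistance, deg C/W (assumed)

def junction_temp(power_w):
    return T_ambient + power_w * theta_ja

for p in (1.0, 2.5, 4.0):
    tj = junction_temp(p)
    status = "OK" if tj <= T_junction_max else "exceeds limit"
    print(f"P = {p:.1f} W -> T_J = {tj:.1f} deg C ({status})")
```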
The ever-shrinking chip size causes the heat to concentrate within a small area and leads to high power density. Furthermore, denser transistors gathering in a monolithic chip and higher operating frequency cause a worsening of the power dissipation. Removing the heat effectively becomes the critical issue to be resolved. | https://en.wikipedia.org/wiki/Thermal_analysis |
A thermal bar is a hydrodynamic feature that forms around the edges of holomictic lakes during the seasonal transition to stratified conditions, due to the shorter amount of time required for shallow areas of the lake to stratify.
During the process of lake stratification , shallow areas generally become stratified before deeper areas. In large lakes this condition may persist for weeks, during which a temperature front known as a thermal bar forms between the stratified and unstratified areas of the lake. The thermal bar generally forms parallel to shore and moves toward the lake center as deeper areas of the lake stratify. While thermal bars can form in both fall and spring, most studies of the thermal bar have investigated aspects of the feature in the spring, when the lake is warming up and the summer thermocline is beginning to form.
At the lake surface, the thermal bar may be visible as a foam line between the stratified water shoreward of the thermal bar and unstratified water on the offshore side. At this convergence, waters mix and sink when they reach the temperature of maximum density , roughly 4 degrees Celsius for freshwater, a process known as cabbeling .
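The cabbeling mechanism can be illustrated with a simple parabolic approximation to the freshwater density curve near its maximum; the coefficient below is a rough fit used only for illustration, not a reference equation of state.

```python
# Sketch of cabbeling: mixing water from either side of ~4 deg C produces denser water.
RHO_MAX = 999.975   # approx. maximum density of fresh water, kg/m^3
T_MAX = 3.98        # temperature of maximum density, deg C
A = 0.0083          # rough parabolic coefficient, kg/m^3 per (deg C)^2 (assumed)

def rho(temp_c):
    return RHO_MAX - A * (temp_c - T_MAX) ** 2

warm, cold = 6.0, 2.0                 # parcels on either side of the thermal bar
mixture = 0.5 * (warm + cold)         # equal-parts mixture, ~4 deg C
print(f"rho(warm)={rho(warm):.3f}, rho(cold)={rho(cold):.3f}, "
      f"rho(mix)={rho(mixture):.3f} kg/m^3")
# The mixture is denser than either parent parcel, so it sinks at the convergence.
```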
The downwelling of dense water at the thermal bar acts as a barrier to horizontal mixing. In spring, this concentrates warm water and suspended materials in the near shore waters around the edge of the lake. Satellite imagery has been used to identify thermal bars using their thermal characteristics as well as the concentration of suspended materials on their shoreward side, typically due to surface runoff to the lake.
Isotherms on the stratified side of the thermal bar slope away from the bar, producing a pressure gradient force that when balanced by the Coriolis force produces a cyclonic coastal geostrophic current that transports water and suspended matter along the shore.
The thermal bar phenomenon was first described by François-Alphonse Forel in his study of Lac Leman . Additional studies have been carried out in Lake Ladoga , [ 1 ] Lake Baikal [ 2 ] and the Laurentian Great Lakes . [ 3 ]
Although a temporary seasonal feature, the thermal bar plays an important role in lake ecology by restricting mixing between coastal and offshore waters. This role is particularly evident during the spring runoff period when the retention of coastal waters may benefit aquatic organisms by providing warmer water temperatures and elevated nutrient concentrations, or may threaten coastal environments by retaining pollutants in coastal waters. [ 4 ]
The dictionary definition of cabbeling at Wiktionary | https://en.wikipedia.org/wiki/Thermal_bar
Thermal barrier coatings ( TBCs ) are advanced materials systems usually applied to metallic surfaces on parts operating at elevated temperatures, such as gas turbine combustors and turbines, and in automotive exhaust heat management . These 100 μm to 2 mm thick coatings of thermally insulating materials serve to insulate components from large and prolonged heat loads and can sustain an appreciable temperature difference between the load-bearing alloys and the coating surface. [ 1 ] In doing so, these coatings can allow for higher operating temperatures while limiting the thermal exposure of structural components, extending part life by reducing oxidation and thermal fatigue . In conjunction with active film cooling, TBCs permit working fluid temperatures higher than the melting point of the metal airfoil in some turbine applications. Due to increasing demand for more efficient engines running at higher temperatures with better durability/lifetime and thinner coatings to reduce parasitic mass for rotating/moving components, there is significant motivation to develop new and advanced TBCs. The material requirements of TBCs are similar to those of heat shields , although in the latter application emissivity tends to be of greater importance. [ citation needed ]
An effective TBC needs to meet certain requirements to perform well in aggressive thermo-mechanical environments. [ 2 ] To deal with thermal expansion stresses during heating and cooling, adequate porosity is needed, as well as appropriate matching of thermal expansion coefficients with the metal surface that the TBC is coating. Phase stability is required to prevent significant volume changes (which occur during phase changes), which would cause the coating to crack or spall . In air-breathing engines, oxidation resistance is necessary, as well as decent mechanical properties for rotating/moving parts or parts in contact. Therefore, the general requirements for an effective TBC can be summarized as: (1) a high melting point; (2) no phase transformation between room temperature and operating temperature; (3) low thermal conductivity ; (4) chemical inertness; (5) a thermal expansion coefficient similar to that of the metallic substrate; (6) good adherence to the substrate; and (7) a low sintering rate for a porous microstructure. These requirements severely limit the number of materials that can be used, with ceramic materials usually being able to satisfy the required properties. [ 3 ]
Thermal barrier coatings typically consist of four layers: the metal substrate, metallic bond coat, thermally-grown oxide (TGO) , and ceramic topcoat. The ceramic topcoat is typically composed of yttria-stabilized zirconia (YSZ), which has very low conductivity while remaining stable at the nominal operating temperatures typically seen in TBC applications. This ceramic layer creates the largest thermal gradient of the TBC and keeps the lower layers at a lower temperature than the surface. However, above 1200 °C, YSZ suffers from unfavorable phase transformations, changing from t'-tetragonal to tetragonal to cubic to monoclinic. Such phase transformations lead to crack formation within the top coating. Recent efforts to develop an alternative to the YSZ ceramic topcoat have identified many novel ceramics (e.g., rare earth zirconates) exhibiting superior performance at temperatures above 1200 °C, but with inferior fracture toughness compared to that of YSZ. In addition, such zirconates may have a high concentration of oxygen-ion vacancies, which may facilitate oxygen transport and exacerbate the formation of the TGO. With a thick enough TGO, spalling of the coating may occur, which is a catastrophic mode of failure for TBCs. The use of such coatings would require additional coatings that are more oxidation resistant, such as alumina or mullite. [ 4 ]
The bond coat is an oxidation-resistant metallic layer which is deposited directly on top of the metal substrate. It is typically 75-150 μm thick and made of a NiCrAlY or NiCoCrAlY alloy, though other bond coats made of Ni and Pt aluminides also exist. The primary purpose of the bond coat is to protect the metal substrate from oxidation and corrosion, particularly from oxygen and corrosive elements that pass through the porous ceramic top coat.
At peak operating conditions found in gas-turbine engines with temperatures in excess of 700 °C, oxidation of the bond-coat leads to the formation of a thermally-grown oxide (TGO) layer. Formation of the TGO layer is inevitable for many high-temperature applications, so thermal barrier coatings are often designed so that the TGO layer grows slowly and uniformly. Such a TGO will have a structure that has a low diffusivity for oxygen, so that further growth is controlled by diffusion of metal from the bond-coat rather than the diffusion of oxygen from the top-coat. [ 5 ]
The TBC can also be locally modified at the interface between the bond coat and the thermally grown oxide so that it acts as a thermographic phosphor , which allows for remote temperature measurement.
In general, failure mechanisms of TBCs are very complex and can vary significantly from TBC to TBC and depending on the environment in which the thermal cycling takes place. For this reason, the failure mechanisms are still not yet fully understood. [ 5 ] [ 6 ] Despite this multitude of failure mechanisms and their complexity, though, three of the most important failure mechanisms have to do with the growth of the thermally-grown oxide (TGO) layer, thermal shock , and sintering of the top coat (TC), discussed below. Additional factors contributing to failure of TBCs include mechanical rumpling of the bond coat during thermal cyclic exposure (especially coatings in aircraft engines), accelerated oxidation at high temperatures, hot corrosion , and molten deposit degradation.
The growth of the thermally-grown oxide (TGO) layer is the most important cause of TBC spallation failure. [ 5 ] When the TGO forms as the TBC is heated, it causes a compressive growth stress associated with volume expansion. When it is cooled, a lattice mismatch strain arises between TGO and the top coat (TC) due to differing thermal expansion coefficients . Lattice mismatch strain refers to the strain that comes about when two crystalline lattices at an interface have different lattice constants and must nonetheless match one another where they meet at the interface. These growth stresses and lattice mismatch stresses, which increase with increasing cycling number, lead to plastic deformation , crack nucleation, and crack propagation, ultimately contributing to TBC failure after many cycles of heating and cooling. For this reason, in order to make a TBC that lasts a long time before failure, the thermal expansion coefficients between all layers should match well. [ 5 ] [ 7 ] Whereas a high BC creep rate increases the tensile stresses present in the TC due to TGO growth, a high TGO creep rate actually decreases these tensile stresses. [ 7 ]
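The scale of the mismatch stress can be estimated with the standard biaxial thermal-stress expression σ ≈ E·Δα·ΔT/(1 − ν); the material values in the sketch below are rough, assumed numbers for a porous YSZ top coat on a superalloy substrate, intended only to show the order of magnitude.

```python
# Order-of-magnitude estimate of thermal-expansion mismatch stress in a TBC top coat.
E_topcoat = 50e9          # Young's modulus of porous YSZ, Pa (assumed)
nu = 0.2                  # Poisson's ratio (assumed)
alpha_topcoat = 10.5e-6   # CTE of YSZ, 1/K (assumed)
alpha_substrate = 14.5e-6 # CTE of Ni-based superalloy, 1/K (assumed)
delta_T = 1000.0          # temperature change on cooling, K (assumed)

sigma = E_topcoat * abs(alpha_substrate - alpha_topcoat) * delta_T / (1 - nu)
print(f"mismatch stress ~ {sigma / 1e6:.0f} MPa")   # roughly a few hundred MPa
```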
Because the TGO is made of Al 2 O 3 , and the metallic bond coat (BC) is normally made of an aluminum-containing alloy , TGO formation tends to deplete the Al in the bond coat. If the BC runs out of aluminum to supply to the growing TGO, it is possible for compounds other than Al 2 O 3 to enter the TGO (such as Y 2 O 3 , for example), which weakens the TGO, making it easier for the TBC to fail. [ 5 ]
Because the purpose of TBCs is to insulate metallic substrates such that they can be used for prolonged times at high temperatures, they often undergo thermal shock , which is a stress that arises in a material when it undergoes a rapid temperature change. This thermal shock is a major contributor to the failure of TBCs, since the thermal shock stresses can cause cracking in the TBC if they are sufficiently strong. In fact, the repeated thermal shocks associated with turning the engine on and off many times is a main contributor to failure of TBC-coated turbine blades in airplanes. [ 6 ]
Over the course of repeated cycles of rapid heating and cooling, thermal shock leads to significant tensile strains perpendicular to the interface between the BC and the TC, reaching a maximum magnitude at the BC/TC interface, as well as a periodic strain field in the direction parallel to the BC/TC interface. Especially after many cycles of heating and cooling, these strains can lead to nucleation and propagation of cracks both parallel and perpendicular to the BC/TC interface. These linked-up horizontal and vertical cracks due to thermal shock ultimately contribute to the failure of the TBC via delamination of the TC. [ 6 ]
A third major contributor to TBC failure is sintering of the TC. [ 8 ] In TBC applications, YSZ has a columnar structure. These columns start out with a feathery structure, but become smoother with heating due to atomic diffusion at high temperature in order to minimize surface energy. The undulations on adjacent smoother columns eventually touch one another and begin to coalesce. As the YSZ sinters and becomes more dense in this fashion, it shrinks in size, leading to the formation of cracks via a mechanism analogous to the formation of mudcracks , where the top layer shrinks but the bottom layer (the BC in the case of TBCs, or the earth in the case of mud) remains the same size. [ 9 ]
This mud-cracking effect can be exacerbated if the underlying substrate is rough, or if it roughens upon heating, for the following reason. If the surface under the columns is curvy and if the columns can be modeled as straight rods normal to the surface underneath them, then column density will necessarily be high above valleys in the surface and low above peaks in the surface due to the tilting of the straight rods. This leads to a non-uniform columnar density throughout the TBC and promotes crack development in low-density regions. [ 9 ]
In addition to this mud-cracking effect, sintering increases the Young's modulus of the TC as the columns become attached to one another. This in turn increases the lattice mismatch strain at the interface between the TC and BC or TGO. The TC's increased Young's modulus makes it more difficult for its lattice to bend to meet that of the substrate under it; this is the origin of the increased lattice mismatch strain. In turn, this increased mismatch strain adds with the other previously mentioned strain fields in the TC to promote crack formation and propagation, leading to failure of the TBC. [ 10 ]
YSZ is the most widely studied and used TBC because it provides excellent performance in applications such as diesel engines and gas turbines. Additionally, it was one of the few refractory oxides that could be deposited as thick films using the then-known technology of plasma spraying. [ 2 ] As for properties, it has low thermal conductivity, a high thermal expansion coefficient, and high thermal shock resistance. However, it has a fairly low operating limit of 1200 °C due to phase instability, and can corrode due to its oxygen transparency.
Mullite is a compound of alumina and silica, with the formula 3Al2O3-2SiO2. It has a low density, along with good mechanical properties, high thermal stability, low thermal conductivity, and is corrosion and oxidation resistant. However, it suffers from crystallization and volume contraction above 800 °C, which leads to cracking and delamination . Therefore, this material is suitable as a zirconia alternative for applications such as diesel engines , where surface temperatures are relatively low and temperature variations across the coating may be large.
Only α-phase Al2O3 is stable among aluminum oxides. With a high hardness and chemical inertness, but high thermal conductivity and low thermal expansion coefficient, alumina is often used as an addition to an existing TBC coating. By incorporating alumina in YSZ TBC, oxidation and corrosion resistance can be improved, as well as hardness and bond strength without significant change in the elastic modulus or toughness. One challenge with alumina is applying the coating through plasma spraying, which tends to create a variety of unstable phases, such as γ-alumina. When these phases eventually transform into the stable α-phase through thermal cycling, a significant volume change of ~15% (γ to α) follows, which can lead to microcrack formation in the coating.
CeO2 (Ceria) has a higher thermal expansion coefficient and lower thermal conductivity than YSZ. Adding ceria into a YSZ coating can significantly improve the TBC performance, especially in thermal shock resistance. This is most likely due to less bond coat stress due to better insulation and a better net thermal expansion coefficient. Some negative effects of the addition of ceria include the decrease of hardness and accelerated rate of sintering of the coating (less porous).
La 2 Zr 2 O 7 , also referred to as LZ, is an example of a rare-earth zirconate that shows potential for use as a TBC. This material is phase stable up to its melting point and can largely tolerate vacancies on any of its sublattices. Along with the ability for site-substitution with other elements, this means that thermal properties can potentially be tailored. Although it has a very low thermal conductivity compared to YSZ, it also has a low thermal expansion coefficient and low toughness.
Single and mixed phase materials consisting of rare earth oxides represent a promising low-cost approach towards TBCs. Coatings of rare earth oxides (e.g.: La2O3, Nb2O5, Pr2O3, CeO2 as main phases) have lower thermal conductivity and higher thermal expansion coefficients when compared to YSZ. The main challenge to overcome is the polymorphic nature of most rare earth oxides at elevated temperatures, as phase instability tends to negatively impact thermal shock resistance. Another advantage of rare earth oxides as TBCs is their tendency to exhibit intrinsic hydrophobicity , [ 11 ] which provides various advantages for systems that undergo intermittent use and may otherwise suffer from moisture adsorption or surface ice formation.
A powder mixture of metal and normal glass can be plasma-sprayed in vacuum, with a suitable composition resulting in a TBC comparable to YSZ. Additionally, metal-glass composites have superior bond-coat adherence, higher thermal expansion coefficients, and no open porosity, which prevents oxidation of the bond-coat.
Thermal barrier ceramic coatings are becoming more common in automotive applications. They are specifically designed to reduce heat loss from engine exhaust system components including exhaust manifolds , turbocharger casings, exhaust headers, downpipes and tailpipes. This process is also known as " exhaust heat management ". When used under-bonnet, these have the positive effect of reducing engine bay temperatures, therefore reducing the intake air temperature.
Although most ceramic coatings are applied to metallic parts directly related to the engine exhaust system, technological advances now allow thermal barrier coatings to be applied via plasma spray onto composite materials. It is now commonplace to find ceramic-coated components in modern engines and on high-performance components in race series such as Formula 1 . As well as providing thermal protection, these coatings are also used to prevent physical degradation of the composite material due to friction. This is possible because the ceramic material bonds with the composite (instead of merely sticking on the surface with paint), thereby forming a tough coating that doesn't chip or flake easily.
Although thermal barrier coatings have been applied to the insides of exhaust system components, problems have been encountered because of the difficulty in preparing the internal surface prior to coating.
Thermal barrier coatings are commonly used to protect nickel-based superalloys from both melting and thermal cycling in aviation turbines. Combined with cool air flow, TBCs increase the allowable gas temperature above that of the superalloy melting point. [ 12 ]
To avoid the difficulties associated with the melting point of superalloys, many researchers are investigating ceramic-matrix composites (CMCs) as high-temperature alternatives. Generally, these are made from fiber-reinforced SiC. Rotating parts are especially good candidates for the material change due to the enormous fatigue that they endure. Not only do CMCs have better thermal properties, but they are also lighter meaning that less fuel would be needed to produce the same thrust for the lighter aircraft. [ 13 ] The material change is, however, not without consequences. At high temperatures, these CMCs are reactive with water and form gaseous silicon hydroxide compounds that corrode the CMC.
SiO 2 + H 2 O = SiO(OH) 2
SiO 2 + 2H 2 O = Si(OH) 4
2SiO 2 + 3H 2 O = Si 2 O(OH) 6 [ 14 ]
The thermodynamic data for these reactions has been experimentally determined over many years to determine that Si(OH) 4 is generally the dominant vapor species. [ 15 ] Even more advanced environmental barrier coatings are required to protect these CMCs from water vapor as well as other environmental degradants. For instance, as the gas temperatures increase towards 1400 K-1500 K, sand particles begin to melt and react with coatings. The melted sand is generally a mixture of calcium oxide, magnesium oxide, aluminum oxide, and silicon oxide (commonly referred to as CMAS). Many research groups are investigating the harmful effects of CMAS on turbine coatings and how to prevent damage. CMAS is a large barrier to increasing the combustion temperature of gas turbine engines and will need to be solved before turbines see a large increase in efficiency from temperature increase. [ 16 ]
In industry, thermal barrier coatings are produced in a number of ways:
Additionally, the development of advanced coatings and processing methods is a field of active research. One such example is the solution precursor plasma spray process, which has been used to create TBCs with some of the lowest reported thermal conductivities without sacrificing thermal cyclic durability. [ citation needed ] | https://en.wikipedia.org/wiki/Thermal_barrier_coating |
A thermal blanket is a device used in thermal desorption to clean soil contamination . The primary function of a thermal blanket is to heat the soil to the boiling point of the contaminants (usually 100 to 325 °C and as high as 900 °C [ 1 ] ) so that they break down. A vacuum pulls the resulting gas (along with volatilized contaminants) into a separate air cleaner that may use various methods, such as carbon filters and high-heat ovens, to completely destroy the contaminants. Aside from evaporation and volatilization, the contaminants may also be removed from the soil through other mechanisms such as steam distillation , pyrolysis , oxidation , and other chemical reactions . [ 2 ]
Due to its placement at the surface, a thermal blanket can only be used for shallow contamination, down to a depth of around 1 meter. The process can take more than 24 hours to treat 6 inches of soil and up to 4 days for contaminated areas with depths of 12 to 18 inches. [ 3 ]
Deep contamination (contamination at depths greater than 1 meter) is handled using a similar method but with a deep penetrating heat source. This is commonly referred to as an in situ thermal desorption (ISTD) thermal well and it uses heater elements that consist of nichrome wires in a ceramic insulator. Like the thermal blankets, the heating temperature could reach as high as 900 °C, heating adjacent regions through heat conduction. [ 1 ] Vacuum is also applied to withdraw broken contaminants.
It is reported that the application of the thermal blanket is limited [ 3 ] and this could be attributed to a number of concerns. For example, as a contaminant becomes heated, it may leak outside of the area of the thermal blanket. Therefore, the blanket must completely cover the contaminated area and have a strong enough vacuum to prohibit the spread of contamination. Incomplete destruction of contaminants may also lead to the introduction of dioxins and furans into the air.
The thermal blanket method has not been effectively tested on organic contaminants. The technology is presently commercially available. Shell Technology Ventures, Inc., for instance, has developed its own ISTD thermal blanket solution that can treat or remove contaminants on near-surface soil or pavements without excavation. [ 2 ] | https://en.wikipedia.org/wiki/Thermal_blanket |
A thermal bridge , also called a cold bridge , heat bridge , or thermal bypass , is an area or component of an object which has higher thermal conductivity than the surrounding materials, [ 1 ] creating a path of least resistance for heat transfer . [ 2 ] Thermal bridges result in an overall reduction in thermal resistance of the object. The term is frequently discussed in the context of a building's thermal envelope where thermal bridges result in heat transfer into or out of conditioned space.
Thermal bridges in buildings may impact the amount of energy required to heat and cool a space, cause condensation (moisture) within the building envelope, [ 3 ] and result in thermal discomfort. In colder climates (such as the United Kingdom), thermal heat bridges can result in additional heat losses and require additional energy to mitigate.
There are strategies to reduce or prevent thermal bridging, such as limiting the number of building members that span from unconditioned to conditioned space and applying continuous insulation materials to create thermal breaks .
Heat transfer occurs through three mechanisms: convection , radiation , and conduction . [ 4 ] A thermal bridge is an example of heat transfer through conduction. The rate of heat transfer depends on the thermal conductivity of the material and the temperature difference experienced on either side of the thermal bridge. When a temperature difference is present, heat flow will follow the path of least resistance through the material with the highest thermal conductivity and lowest thermal resistance; this path is a thermal bridge. [ 5 ] Thermal bridging describes a situation in a building where there is a direct connection between the outside and inside through one or more elements that possess a higher thermal conductivity than the rest of the envelope of the building.
Surveying buildings for thermal bridges is performed using passive infrared thermography (IRT) according to the International Organization for Standardization (ISO). Infrared thermography of buildings can reveal thermal signatures that indicate heat leaks. IRT detects thermal abnormalities that are linked to the movement of fluids through building elements, highlighting variations in the thermal properties of the materials that cause a significant change in surface temperature. The drop shadow effect, a situation in which the surrounding environment casts a shadow on the facade of the building, can lead to accuracy issues in measurements through inconsistent facade sun exposure. An alternative analysis method, Iterative Filtering (IF), can be used to address this problem.
In thermographic building inspections, thermal image interpretation is typically performed by a human operator, which involves a high level of subjectivity and depends on the expertise of the operator. Automated analysis approaches, such as laser scanning technologies, can provide thermal imaging on three-dimensional CAD model surfaces and add metric information to thermographic analyses. [ 6 ] Surface temperature data in 3D models can identify and measure thermal irregularities such as thermal bridges and insulation leaks. Thermal imaging can also be acquired through the use of unmanned aerial vehicles (UAVs), fusing thermal data from multiple cameras and platforms. The UAV uses an infrared camera to generate a thermal field image of recorded temperature values, where every pixel represents radiative energy emitted by the surface of the building. [ 7 ]
Frequently, thermal bridging is used in reference to a building’s thermal envelope, which is a layer of the building enclosure system that resists heat flow between the interior conditioned environment and the exterior unconditioned environment. Heat will transfer through a building’s thermal envelope at different rates depending on the materials present throughout the envelope. Heat transfer will be greater at thermal bridge locations than where insulation exists because there is less thermal resistance. [ 8 ] In the winter, when exterior temperature is typically lower than interior temperature, heat flows outward and will flow at greater rates through thermal bridges. At a thermal bridge location, the surface temperature on the inside of the building envelope will be lower than the surrounding area. In the summer, when the exterior temperature is typically higher than the interior temperature, heat flows inward, and at greater rates through thermal bridges. [ 9 ] This causes winter heat losses and summer heat gains for conditioned spaces in buildings. [ 10 ]
Despite the insulation requirements specified by various national regulations, thermal bridging in a building's envelope remains a weak spot in the construction industry. Moreover, in many countries building design practice implements only part of the insulation measures foreseen by regulations. [ 11 ] As a result, thermal losses are greater in practice than anticipated during the design stage.
An assembly such as an exterior wall or insulated ceiling is generally classified by a U-factor , in W/m 2 ·K, that reflects the overall rate of heat transfer per unit area for all the materials within an assembly, not just the insulation layer. Heat transfer via thermal bridges reduces the overall thermal resistance of an assembly, resulting in an increased U-factor. [ 12 ]
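The effect of a repeating thermal bridge on an assembly U-factor can be approximated with the area-weighted (parallel-path) method sketched below; the framing fraction and R-values are assumed example numbers, and standards such as ISO 6946 prescribe more detailed procedures.

```python
# Sketch: area-weighted (parallel-path) U-factor for a framed wall with thermal bridging.
# RSI values in m^2*K/W (assumed example numbers).
common_layers = 0.11 + 0.08 + 0.18   # interior film + gypsum + sheathing/exterior film
R_cavity = common_layers + 3.5       # path through the insulated cavity
R_stud = common_layers + 1.1         # path through the wood stud (thermal bridge)
framing_fraction = 0.23              # share of wall area taken up by framing (assumed)

U = framing_fraction / R_stud + (1 - framing_fraction) / R_cavity
U_no_bridge = 1 / R_cavity
print(f"U with framing: {U:.3f} W/m^2K  vs. insulation-only: {U_no_bridge:.3f} W/m^2K")
```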
Thermal bridges can occur at several locations within a building envelope; most commonly, they occur at junctions between two or more building elements. Common locations include:
Structural elements remain a weak point in construction, commonly leading to thermal bridges that result in high heat loss and low surface temperatures in a room.
While thermal bridges exist in various types of building enclosures, masonry walls experience significantly increased U-factors caused by thermal bridges. Comparing thermal conductivities between different building materials allows for assessment of performance relative to other design options. Brick materials, which are usually used for facade enclosures, typically have higher thermal conductivities than timber, depending on the brick density and wood type. [ 15 ] Concrete, which may be used for floors and edge beams in masonry buildings, is a common source of thermal bridges, especially at corners. Depending on the physical makeup of the concrete, its thermal conductivity can be greater than that of brick materials. [ 15 ] In addition to heat transfer, if the indoor environment is not adequately ventilated, thermal bridging may allow the brick material to absorb rainwater and humidity into the wall, which can result in mold growth and deterioration of building envelope materials.
Similar to masonry walls, curtain walls can experience significantly increased U-factors due to thermal bridging. Curtain wall frames are often constructed with highly conductive aluminum, which has a typical thermal conductivity above 200 W/m·K. In comparison, wood framing members are typically between 0.68 and 1.25 W/m·K. [ 15 ] The aluminum frame for most curtain wall constructions extends from the exterior of the building through to the interior, creating thermal bridges. [ 16 ]
Thermal bridging can result in increased energy required to heat or cool a conditioned space due to winter heat loss and summer heat gain. At interior locations near thermal bridges, occupants may experience thermal discomfort due to the difference in temperature. [ 17 ] Additionally, when the temperature difference between indoor and outdoor space is large and the indoor air is warm and humid, such as the conditions experienced in the winter, there is a risk of condensation in the building envelope due to the cooler temperature on the interior surface at thermal bridge locations. [ 17 ] Condensation can ultimately result in mold growth, with consequent poor indoor air quality and insulation degradation, reducing the insulation performance and causing insulation to perform inconsistently throughout the thermal envelope. [ 18 ]
There are several methods that have been proven to reduce or eliminate thermal bridging, depending on the cause, location, and construction type. The objective of these methods is either to create a thermal break where a building component would otherwise span from exterior to interior, or to reduce the number of building components spanning from exterior to interior. These strategies include structural thermal breaks, continuous exterior insulation, and framing layouts that reduce the number of components crossing the envelope.
Due to their significant impacts on heat transfer, correctly modeling the impacts of thermal bridges is important to estimate overall energy use. Thermal bridges are characterized by multi-dimensional heat transfer, and therefore they cannot be adequately approximated by steady-state one-dimensional (1D) models of calculation typically used to estimate the thermal performance of buildings in most building energy simulation tools. [ 21 ] Steady state heat transfer models are based on simple heat flow where heat is driven by a temperature difference that does not fluctuate over time so that heat flow is always in one direction. This type of 1D model can substantially underestimate heat transfer through the envelope when thermal bridges are present, resulting in lower predicted building energy use. [ 22 ]
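One practical workaround, used in simplified standards calculations such as the EN ISO 14683 convention, is to fold the extra bridge heat flow into the overall transmission heat loss coefficient as linear thermal transmittances (ψ-values) added to the plane-element U·A terms. A minimal Python sketch with assumed component values:

```python
# Whole-envelope transmission heat loss coefficient, combining plane
# (U * A) terms with linear thermal bridge (psi * length) terms:
#     H = sum(U_i * A_i) + sum(psi_k * l_k)
# All component values below are illustrative assumptions.

walls = [(0.30, 85.0), (0.25, 40.0)]      # (U in W/m2K, area in m2)
bridges = [(0.50, 24.0), (0.08, 36.0)]    # (psi in W/mK, length in m)

h_plane = sum(u * a for u, a in walls)            # plane-element losses
h_bridge = sum(psi * l for psi, l in bridges)     # junction losses

h_total = h_plane + h_bridge
print(f"Plane elements only:  {h_plane:.1f} W/K")
print(f"With thermal bridges: {h_total:.1f} W/K "
      f"(+{100 * h_bridge / h_plane:.0f}%)")
```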
The currently available solutions are to enable two-dimensional (2D) and three-dimensional (3D) heat transfer capabilities in modeling software or, more commonly, to use a method that translates multi-dimensional heat transfer into an equivalent 1D component to use in building simulation software. This latter method can be accomplished through the equivalent wall method in which a complex dynamic assembly, such as a wall with a thermal bridge, is represented by a 1D multi-layered assembly that has equivalent thermal characteristics. [ 23 ] | https://en.wikipedia.org/wiki/Thermal_bridge |
Thermal motion is able to produce capillary waves at the molecular scale. At this scale, gravity and hydrodynamics can be neglected, and only the surface tension contribution is relevant.
Capillary wave theory (CWT) is a classic account of how thermal fluctuations distort an interface.
It starts from some intrinsic surface h(x, y, t) that is distorted. Its energy will be proportional to its area:

E = σ ∫ dx dy √(1 + (∂h/∂x)² + (∂h/∂y)²) ≈ σ ∫ dx dy [1 + ½((∂h/∂x)² + (∂h/∂y)²)],

where the first equality is the area in this (de Monge) representation, and the second applies for small values of the derivatives (surfaces not too rough). The constant of proportionality, σ, is the surface tension.
By performing a Fourier analysis treatment, normal modes are easily found. Each contributes an energy proportional to the square of its amplitude; therefore, according to classical statistical mechanics, equipartition holds, and the mean energy of each mode will be kT/2. Surprisingly, this result leads to a divergent surface (the width of the interface is bound to diverge with its area). This divergence is nevertheless very mild: even for displacements on the order of meters, the deviation of the surface is comparable to the size of the molecules. Moreover, the introduction of an external field removes the divergence: the action of gravity is sufficient to keep the width fluctuation on the order of one molecular diameter for areas larger than about 1 mm² (Ref. 2). [ 1 ]
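The equipartition argument can be made explicit. The LaTeX sketch below reconstructs the standard CWT estimate; the wavevector cutoffs q_min and q_max are assumptions introduced for the calculation:

```latex
% Quadratic energy of each Fourier mode h_q of the surface, and the
% resulting (logarithmically divergent) interface width.
E \approx \frac{\sigma A}{2} \sum_{\mathbf q} q^{2}\,|h_{\mathbf q}|^{2}
\quad\xrightarrow{\text{equipartition}}\quad
\langle |h_{\mathbf q}|^{2} \rangle = \frac{k_{B}T}{\sigma A\,q^{2}},
\qquad
\langle h^{2} \rangle
  = \sum_{\mathbf q} \langle |h_{\mathbf q}|^{2} \rangle
  \approx \frac{k_{B}T}{2\pi\sigma}\,\ln\!\frac{q_{\max}}{q_{\min}} .
% Gravity adds \Delta\rho\,g to \sigma q^{2}, so q_{\min} is effectively
% replaced by the inverse capillary length, removing the divergence.
```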
| https://en.wikipedia.org/wiki/Thermal_capillary_wave |
The thermal center is a concept used in applied mechanics and engineering . When a solid body is exposed to a thermal variation, an expansion will occur, changing the dimensions and potentially the shape of the body and the position of its points. Under certain circumstances it may happen that one point belonging to the space associated with the body has no displacement at all: this point is called the thermal center (TC). [ 1 ]
The thermal center position is not affected by thermal expansion: this property makes the TC a very interesting point in those applications where it is important that thermal variations have no effect on a certain process. Photolithography machines and high-precision optical instruments are examples of applications of this concept.
The thermal center is defined under the following hypotheses: the body has homogeneous and isotropic thermal properties, and it is subject to a uniform thermal variation ΔT.
The thermal variation will produce an expansion of the body: this means that for each pair of points P and Q the distance between them will become:
d₂(P, Q) = K · d₁(P, Q)

where K = 1 + α·ΔT and α is the coefficient of thermal expansion.
Analyzing this phenomenon in an absolute coordinate system, the transformation of the solid body is a geometrical similarity .
It is possible that, choosing the constraints conveniently, one point belonging to the space associated with the body will not move during the thermal variation: this point is called the thermal center. In this case, the geometrical similarity becomes a homothety and the thermal center is the center of the homothety itself.
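A quick numerical illustration in Python: under a homothety about an assumed thermal center c, every point maps to p' = c + K(p − c), pairwise distances scale by the same K = 1 + α·ΔT, and the center itself does not move. All values are invented for the example:

```python
import math

alpha, d_t = 23e-6, 80.0            # aluminium-like alpha (1/K), +80 K
K = 1 + alpha * d_t                 # homothety ratio

c = (0.0, 0.0)                      # assumed thermal center
p = (0.10, 0.00)                    # two material points, metres
q = (0.00, 0.20)

def expand(pt, center, k):
    """Homothety about the thermal center: pt' = c + k * (pt - c)."""
    return tuple(ci + k * (pi - ci) for pi, ci in zip(pt, center))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

p2, q2 = expand(p, c, K), expand(q, c, K)
print(f"K = {K:.6f}, distance ratio = {dist(p2, q2) / dist(p, q):.6f}")
print("thermal center stays fixed:", expand(c, c, K) == c)   # True
```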
The thermal center does not always exist. The TC is a function of the constraints. Picture B shows an example where the geometry of the constraints does not lead to a unique point.
The TC may exist even if the body has non-homogeneous isotropic thermal properties, but when this condition is not verified, it's not possible to determine its position by using the simple geometric method shown above, and the transformation will not be a homothety. | https://en.wikipedia.org/wiki/Thermal_center |
Thermal cleaning is a combined process involving pyrolysis and oxidation . As an industrial application, thermal cleaning is used to remove organic substances such as polymers , plastics and coatings from parts, products or production components like extruder screws, spinnerets [ 1 ] and static mixers . Thermal cleaning is the most common cleaning method in industrial environments. [ 2 ] A variety of different methods have been developed so far for a wide range of applications.
Heat is supplied for pyrolysis and air is supplied for oxidation. Depending on the procedure, pyrolysis and oxidation can be applied consecutively or simultaneously. During thermal cleaning, organic material is converted into volatile organic compounds , hydrocarbons and carbonized gas. [ 3 ] Inorganic elements remain. [ 2 ] Typical process temperatures range between 400 and 540 °C (752 and 1,004 °F). [ 4 ]
Several types of industrial thermal cleaning systems are available:
Fluidized bed systems [ 5 ] use sand or aluminium oxide (alumina) as the heating medium. They apply pyrolysis and oxidation simultaneously. [ 6 ] These systems clean quickly, with process times from 30 minutes up to two hours. The medium does not melt or boil, nor emit any vapors or odors. [ 4 ] Thermal shock can be a problem with some parts. [ 2 ] Pollution control devices may be needed to protect the environment. [ 4 ]
Vacuum ovens use pyrolysis in a vacuum . [ 7 ] This method is very safe because uncontrolled combustion inside the cleaning chamber is avoided. [ 4 ] The cleaning process in this relatively new approach takes 8 [ 3 ] to 30 hours. [ 8 ] Vacuum pyrolysis is the only method that applies pyrolysis and oxidation consecutively. In two-chamber versions, molten plastic drains into an unheated chamber to capture the bulk of the polymer to reduce the fumes. [ 7 ] Vacuum ovens are also electrically powered. [ 2 ]
Burn-off ovens, also known as heat-cleaning ovens, are gas-fired and used for removing organics from heavy and large metal parts. [ 9 ] The process time is moderate, approximately 4 hours. Fires can occur from the fumes created during cleaning. [ 4 ] The design is simple and inexpensive. Different types are available. Modern types contain an additional afterburner that operates at a minimum of 1,500 °F (820 °C) and consumes any smoke created by the process. [ 2 ]
Molten salt baths belong to the oldest thermal cleaning systems. Cleaning with molten salt is fast: 15 to 30 minutes process time. [ 2 ] [ 4 ] The process has the risk of dangerous splatters due to chemical reactivity, [ 4 ] or other potential hazards, like explosions or toxic hydrogen cyanide gas. Another disadvantage is that parts can be warped or altered in design tolerances. [ 2 ] Molten salt baths can be environmentally unfriendly. Due to their disadvantages, they are rarely used today. | https://en.wikipedia.org/wiki/Thermal_cleaning |
Thermal comfort is the condition of mind that expresses subjective satisfaction with the thermal environment. [ 1 ] The human body can be viewed as a heat engine where food is the input energy. The human body will release excess heat into the environment, so the body can continue to operate. The heat transfer is proportional to temperature difference. In cold environments, the body loses more heat to the environment and in hot environments the body does not release enough heat. Both the hot and cold scenarios lead to discomfort. [ 2 ] Maintaining this standard of thermal comfort for occupants of buildings or other enclosures is one of the important goals of HVAC ( heating , ventilation , and air conditioning ) design engineers.
Thermal neutrality is maintained when the heat generated by human metabolism is allowed to dissipate, thus maintaining thermal equilibrium with the surroundings. The main factors that influence thermal neutrality are those that determine heat gain and loss, namely metabolic rate , clothing insulation , air temperature , mean radiant temperature , air speed and relative humidity . Psychological parameters, such as individual expectations, and physiological parameters also affect thermal neutrality. [ 3 ] Neutral temperature is the temperature that can lead to thermal neutrality and it may vary greatly between individuals and depending on factors such as activity level, clothing, and humidity. People are highly sensitive to even small differences in environmental temperature. At 24 °C, a difference of 0.38 °C can be detected between the temperature of two rooms. [ 4 ]
The Predicted Mean Vote (PMV) model stands among the most recognized thermal comfort models. It was developed using principles of heat balance and experimental data collected in a controlled climate chamber under steady state conditions. [ 5 ] The adaptive model, on the other hand, was developed based on hundreds of field studies with the idea that occupants dynamically interact with their environment. Occupants control their thermal environment by means of clothing, operable windows, fans, personal heaters, and sun shades. [ 3 ] [ 6 ] The PMV model can be applied to air-conditioned buildings, while the adaptive model can be applied only to buildings where no mechanical systems have been installed. [ 1 ] There is no consensus about which comfort model should be applied for buildings that are partially air-conditioned spatially or temporally.
Thermal comfort calculations in accordance with the ANSI/ASHRAE Standard 55 , [ 1 ] the ISO 7730 Standard [ 7 ] and the EN 16798-1 Standard [ 8 ] can be freely performed with either the CBE Thermal Comfort Tool for ASHRAE 55, [ 9 ] with the Python package pythermalcomfort [ 10 ] or with the R package comf.
Satisfaction with the thermal environment is important because thermal conditions are potentially life-threatening for humans if the core body temperature reaches conditions of hyperthermia , above 37.5–38.3 °C (99.5–100.9 °F), [ 11 ] [ 12 ] or hypothermia , below 35.0 °C (95.0 °F). [ 13 ] Buildings modify the conditions of the external environment and reduce the effort that the human body needs to do in order to stay stable at a normal human body temperature , important for the correct functioning of human physiological processes .
The Roman writer Vitruvius actually linked this purpose to the birth of architecture. [ 14 ] David Linden also suggests that we associate tropical beaches with paradise because those are the environments where human bodies need to expend the least metabolic effort to maintain their core temperature. [ 15 ] Temperature not only supports human life; coolness and warmth have also become, in different cultures, symbols of protection, community and even the sacred. [ 16 ]
In building science studies, thermal comfort has been related to productivity and health. Office workers who are satisfied with their thermal environment are more productive. [ 17 ] [ 18 ] The combination of high temperature and high relative humidity reduces thermal comfort and indoor air quality . [ 19 ]
Although a single static temperature can be comfortable, people are attracted by thermal changes, such as campfires and cool pools. Thermal pleasure is caused by varying thermal sensations from a state of unpleasantness to a state of pleasantness, and the scientific term for it is positive thermal alliesthesia . [ 20 ] From a state of thermal neutrality or comfort any change will be perceived as unpleasant. [ 21 ] This challenges the assumption that mechanically controlled buildings should deliver uniform temperatures and comfort, if it is at the cost of excluding thermal pleasure. [ 22 ]
Since there are large variations from person to person in terms of physiological and psychological satisfaction, it is hard to find an optimal temperature for everyone in a given space. Laboratory and field data have been collected to define conditions that will be found comfortable for a specified percentage of occupants. [ 1 ]
There are numerous factors that directly affect thermal comfort, and they can be grouped in two categories: personal factors (metabolic rate and clothing insulation) and environmental factors (air temperature, mean radiant temperature, air speed and relative humidity).
Even if all these factors may vary with time, standards usually refer to a steady state to study thermal comfort, just allowing limited temperature variations.
People have different metabolic rates that can fluctuate due to activity level and environmental conditions. [ 23 ] [ 24 ] [ 25 ] ASHRAE 55-2017 defines metabolic rate as the rate of transformation of chemical energy into heat and mechanical work by metabolic activities of an individual, per unit of skin surface area. [ 1 ] : 3
Metabolic rate is expressed in units of met, equal to 58.2 W/m² (18.4 Btu/h·ft²). One met is equal to the energy produced per unit surface area of an average person seated at rest.
ASHRAE 55 provides a table of metabolic rates for a variety of activities. Some common values are 0.7 met for sleeping, 1.0 met for a seated and quiet position, 1.2–1.4 met for light activities standing, 2.0 met or more for activities that involve movement, walking, lifting heavy loads or operating machinery. For intermittent activity, the standard states that it is permissible to use a time-weighted average metabolic rate if individuals are performing activities that vary over a period of one hour or less. For longer periods, different metabolic rates must be considered. [ 1 ]
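As an illustration of the time-weighted averaging the standard permits, the Python sketch below combines tabulated met values over a one-hour activity mix; the mix itself is an assumption of the example:

```python
# Time-weighted average metabolic rate over activities within one hour,
# as ASHRAE 55 permits for intermittent activity. The met values match
# the tabulated examples above; the activity mix is assumed.

activities = [
    (1.0, 30),   # seated, quiet: 1.0 met for 30 minutes
    (1.4, 20),   # light standing activity: 1.4 met for 20 minutes
    (2.0, 10),   # walking about: 2.0 met for 10 minutes
]

total_min = sum(minutes for _, minutes in activities)
met_avg = sum(met * minutes for met, minutes in activities) / total_min
print(f"Time-weighted metabolic rate: {met_avg:.2f} met "
      f"({met_avg * 58.2:.0f} W/m2)")
```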
According to ASHRAE Handbook of Fundamentals, estimating metabolic rates is complex, and for levels above 2 or 3 met – especially if there are various ways of performing such activities – the accuracy is low. Therefore, the standard is not applicable for activities with an average level higher than 2 met. Met values can also be determined more accurately than the tabulated ones, using an empirical equation that takes into account the rate of respiratory oxygen consumption and carbon dioxide production. Another physiological yet less accurate method is related to the heart rate, since there is a relationship between the latter and oxygen consumption. [ 26 ]
The Compendium of Physical Activities is used by physicians to record physical activities. It has a different definition of met that is the ratio of the metabolic rate of the activity in question to a resting metabolic rate. [ 27 ] As the formulation of the concept is different from the one that ASHRAE uses, these met values cannot be used directly in PMV calculations, but it opens up a new way of quantifying physical activities.
Food and drink habits may have an influence on metabolic rates, which indirectly influences thermal preferences. These effects may change depending on food and drink intake. [ 28 ]
Body shape is another factor that affects metabolic rate and hence thermal comfort. Heat dissipation depends on body surface area. The surface area of an average person is 1.8 m² (19 ft²). [ 1 ] A tall and skinny person has a larger surface-to-volume ratio, can dissipate heat more easily, and can tolerate higher temperatures better than a person with a rounded body shape. [ 28 ]
The amount of thermal insulation worn by a person has a substantial impact on thermal comfort, because it influences the heat loss and consequently the thermal balance. Layers of insulating clothing prevent heat loss and can either help keep a person warm or lead to overheating. Generally, the thicker the garment is, the greater insulating ability it has. Depending on the type of material the clothing is made out of, air movement and relative humidity can decrease the insulating ability of the material. [ 29 ] [ 30 ]
1 clo is equal to 0.155 m²·K/W (0.88 °F·ft²·h/Btu). This corresponds to trousers, a long-sleeved shirt, and a jacket. Clothing insulation values for other common ensembles or single garments can be found in ASHRAE 55. [ 1 ]
Skin wetness is defined as "the proportion of the total skin surface area of the body covered with sweat". [ 31 ] The wetness of skin in different areas also affects perceived thermal comfort. Humidity can increase wetness in different areas of the body, leading to a perception of discomfort. This is usually localized in different parts of the body, and local thermal comfort limits for skin wetness differ by locations of the body. [ 32 ] The extremities are much more sensitive to thermal discomfort from wetness than the trunk of the body. Although local thermal discomfort can be caused by wetness, the thermal comfort of the whole body will not be affected by the wetness of certain parts.
The air temperature is the average temperature of the air surrounding the occupant, with respect to location and time. According to the ASHRAE 55 standard, the spatial average takes into account the ankle, waist and head levels, which vary for seated or standing occupants. The temporal average is based on three-minute intervals with at least 18 equally spaced points in time. Air temperature is measured with a dry-bulb thermometer and for this reason it is also known as the dry-bulb temperature .
The radiant temperature is related to the amount of radiant heat transferred from a surface, and it depends on the material's ability to absorb or emit heat, or its emissivity . The mean radiant temperature depends on the temperatures and emissivities of the surrounding surfaces as well as the view factor , or the amount of the surface that is “seen” by the object. So the mean radiant temperature experienced by a person in a room with the sunlight streaming in varies based on how much of their body is in the sun.
Air speed is defined as the rate of air movement at a point, without regard to direction. According to ANSI/ASHRAE Standard 55 , it is the average speed of the air surrounding a representative occupant, with respect to location and time. The spatial average is for three heights as defined for average air temperature. For an occupant moving in a space the sensors shall follow the movements of the occupant. The air speed is averaged over an interval not less than one and not greater than three minutes. Variations that occur over a period greater than three minutes shall be treated as multiple different air speeds. [ 33 ]
Relative humidity (RH) is the ratio of the amount of water vapor in the air to the amount of water vapor that the air could hold at the specific temperature and pressure. While the human body has thermoreceptors in the skin that enable perception of temperature, relative humidity is detected indirectly. Sweating is an effective heat loss mechanism that relies on evaporation from the skin. However at high RH, the air has close to the maximum water vapor that it can hold, so evaporation, and therefore heat loss, is decreased. On the other hand, very dry environments (RH < 20–30%) are also uncomfortable because of their effect on the mucous membranes. The recommended level of indoor humidity is in the range of 30–60% in air conditioned buildings, [ 34 ] [ 35 ] but new standards such as the adaptive model allow lower and higher humidity, depending on the other factors involved in thermal comfort.
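Relative humidity can be computed from its definition once saturation vapor pressure is estimated. A minimal Python sketch using one common set of Magnus-formula coefficients (the coefficient choice and the sample temperatures are assumptions of this example):

```python
import math

def saturation_vp_hpa(t_c: float) -> float:
    """Saturation vapor pressure over water, hPa, via the Magnus
    approximation (one common coefficient set; accurate to well under
    1% near room temperature)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_air_c: float, t_dew_c: float) -> float:
    """RH (%) = 100 * actual vapor pressure / saturation vapor pressure,
    with the actual vapor pressure taken at the dew point."""
    return 100.0 * saturation_vp_hpa(t_dew_c) / saturation_vp_hpa(t_air_c)

print(f"RH at 24 C air with a 12 C dew point: "
      f"{relative_humidity(24.0, 12.0):.0f}%")   # ~47%
```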
Recently, the effects of low relative humidity and high air velocity were tested on humans after bathing. Researchers found that low relative humidity engendered thermal discomfort as well as the sensation of dryness and itching. It is recommended to keep relative humidity levels higher in a bathroom than other rooms in the house for optimal conditions. [ 36 ]
Various types of apparent temperature have been developed to combine air temperature and air humidity.
For higher temperatures, there are quantitative scales, such as the heat index .
For lower temperatures, a related interplay was identified only qualitatively:
There has been controversy over why damp cold air feels colder than dry cold air. Some believe it is because when the humidity is high, our skin and clothing become moist and are better conductors of heat, so there is more cooling by conduction. [ 39 ]
The influence of humidity can be exacerbated with the combined use of fans (forced convection cooling). [ 40 ]
Many buildings use an HVAC unit to control their thermal environment. Other buildings are naturally ventilated (for example, through cross ventilation ) and do not rely on mechanical systems to provide thermal comfort. Depending on the climate, this can drastically reduce energy consumption. It is sometimes seen as a risk, though, since indoor temperatures can be too extreme if the building is poorly designed. Properly designed, naturally ventilated buildings keep indoor conditions within the range where opening windows and using fans in the summer, and wearing extra clothing in the winter, can keep people thermally comfortable. [ 41 ]
There are several different models or indices that can be used to assess thermal comfort conditions indoors as described below.
The PMV/PPD model was developed by P.O. Fanger using heat-balance equations and empirical studies about skin temperature to define comfort. Standard thermal comfort surveys ask subjects about their thermal sensation on a seven-point scale from cold (−3) to hot (+3). Fanger's equations are used to calculate the predicted mean vote (PMV) of a group of subjects for a particular combination of air temperature , mean radiant temperature , relative humidity , air speed, metabolic rate, and clothing insulation. [ 5 ] A PMV of zero represents thermal neutrality, and the comfort zone is defined by the combinations of the six parameters for which the PMV is within the recommended limits (−0.5 < PMV < +0.5). [ 1 ] Although predicting the thermal sensation of a population is an important step in determining what conditions are comfortable, it is more useful to consider whether or not people will be satisfied. Fanger developed another equation to relate the PMV to the Predicted Percentage of Dissatisfied (PPD). This relation was based on studies that surveyed subjects in a chamber where the indoor conditions could be precisely controlled. [ 5 ]
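Fanger's PMV-to-PPD relation is a closed-form curve standardized in ISO 7730, so it can be evaluated directly. A minimal Python sketch of that published formula (only the sample PMV values are assumptions):

```python
import math

def ppd_from_pmv(pmv: float) -> float:
    """Fanger's relation between PMV and the Predicted Percentage of
    Dissatisfied (PPD), as standardized in ISO 7730 / ASHRAE 55."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

# At thermal neutrality (PMV = 0) a residual 5% remain dissatisfied,
# and the comfort-zone edges (PMV = +/-0.5) correspond to ~10% PPD.
for pmv in (0.0, 0.5, 1.0, 2.0):
    print(f"PMV {pmv:+.1f} -> PPD {ppd_from_pmv(pmv):.1f}%")
```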
The PMV/PPD model is applied globally but does not directly take into account the adaptation mechanisms and outdoor thermal conditions. [ 3 ] [ 42 ] [ 43 ]
ASHRAE Standard 55-2017 uses the PMV model to set the requirements for indoor thermal conditions. It requires that at least 80% of the occupants be satisfied. [ 1 ]
The CBE Thermal Comfort Tool for ASHRAE 55 [ 9 ] allows users to input the six comfort parameters to determine whether a certain combination complies with ASHRAE 55. The results are displayed on a psychrometric or a temperature-relative humidity chart and indicate the ranges of temperature and relative humidity that will be comfortable given the values input for the remaining four parameters. [ 44 ]
The PMV/PPD model has a low prediction accuracy. [ 45 ] Using the world's largest thermal comfort field survey database, [ 46 ] the accuracy of PMV in predicting occupants' thermal sensation was only 34%, meaning that the thermal sensation is correctly predicted one out of three times. The PPD overestimated subjects' thermal unacceptability outside the thermal neutrality ranges (−1 ≤ PMV ≤ 1). The PMV/PPD accuracy varies strongly between ventilation strategies, building types and climates. [ 45 ]
ASHRAE 55-2013 accounts for air speeds above 0.2 metres per second (0.66 ft/s) separately from the baseline model. Because air movement can provide direct cooling to people, particularly if they are not wearing much clothing, higher temperatures can be more comfortable than the PMV model predicts. Air speeds up to 0.8 m/s (2.6 ft/s) are allowed without local control, and 1.2 m/s is possible with local control. This elevated air movement increases the maximum temperature for an office space in the summer from 27.5 °C to 30 °C (81.5–86.0 °F). [ 1 ]
"Virtual Energy for Thermal Comfort" is the amount of energy that will be required to make a non-air-conditioned building relatively as comfortable as one with air-conditioning . This is based on the assumption that the home will eventually install air-conditioning or heating. [ 47 ] Passive design improves thermal comfort in a building, thus reducing demand for heating or cooling. In many developing countries , however, most occupants do not currently heat or cool, due to economic constraints, as well as climate conditions which border lines comfort conditions such as cold winter nights in Johannesburg (South Africa) or warm summer days in San Jose, Costa Rica. At the same time, as incomes rise, there is a strong tendency to introduce cooling and heating systems. If we recognize and reward passive design features that improve thermal comfort today, we diminish the risk of having to install HVAC systems in the future, or we at least ensure that such systems will be smaller and less frequently used. Or in case the heating or cooling system is not installed due to high cost, at least people should not suffer from discomfort indoors. To provide an example, in San Jose, Costa Rica, if a house were being designed with high level of glazing and small opening sizes, the internal temperature would easily rise above 30 °C (86 °F) and natural ventilation would not be enough to remove the internal heat gains and solar gains. This is why Virtual Energy for Comfort is important.
The World Bank 's assessment tool, the EDGE software ( Excellence in Design for Greater Efficiencies ), illustrates the potential issues with discomfort in buildings and has created the concept of Virtual Energy for Comfort, which provides a way to present potential thermal discomfort. This approach is used to reward design solutions that improve thermal comfort even in a fully free-running building.
Despite the inclusion of requirements for overheating in CIBSE, overcooling has not been assessed. However, overcooling can be an issue, mainly in the developing world, for example in cities such as Lima (Peru), Bogota, and Delhi, where cooler indoor temperatures can occur frequently. This may be a new area for research and design guidance for reduction of discomfort.
ASHRAE 55-2017 defines the Cooling Effect (CE) at elevated air speed (above 0.2 metres per second (0.66 ft/s)) as the value that, when subtracted from both the air temperature and the mean radiant temperature, yields the same SET value under still air (0.1 m/s) as in the first SET calculation under elevated air speed. [ 1 ]
The CE can be used to determine the PMV adjusted for an environment with elevated air speed using the adjusted temperature, the adjusted radiant temperature and still air (0.1 metres per second (0.33 ft/s)), where the adjusted temperatures are equal to the original air and mean radiant temperatures minus the CE.
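Because the Cooling Effect is defined implicitly, it is usually found by root-finding: search for the offset that makes the still-air SET match the elevated-speed SET. The Python sketch below uses bisection around a set_index function whose name, signature, and toy stand-in are assumptions of this example; a real SET implementation (such as the one in the pythermalcomfort package mentioned above) would be substituted:

```python
def cooling_effect(set_index, tdb, tr, v, rh, met, clo,
                   lo=0.0, hi=20.0, tol=1e-3):
    """Bisect for CE so that SET(tdb-CE, tr-CE, v=0.1) == SET(tdb, tr, v)."""
    target = set_index(tdb, tr, v, rh, met, clo)      # elevated-speed SET
    while hi - lo > tol:
        ce = (lo + hi) / 2
        still = set_index(tdb - ce, tr - ce, 0.1, rh, met, clo)
        if still > target:   # still-air SET is still too warm: subtract more
            lo = ce
        else:
            hi = ce
    return (lo + hi) / 2

def toy_set(tdb, tr, v, rh, met, clo):
    """Toy stand-in for demonstration only: rises with temperature and
    falls with air speed (a real SET comes from the two-node model)."""
    return 0.5 * (tdb + tr) - 3.0 * (v - 0.1)

print(f"CE ~ {cooling_effect(toy_set, 28.0, 28.0, 0.8, 50, 1.1, 0.5):.2f} K")
```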
Avoiding local thermal discomfort, whether caused by a vertical air temperature difference between the feet and the head, by an asymmetric radiant field, by local convective cooling (draft), or by contact with a hot or cold floor, is essential to providing acceptable thermal comfort. People are generally more sensitive to local discomfort when their thermal sensation is cooler than neutral, while they are less sensitive to it when their body is warmer than neutral. [ 33 ]
Large differences in the thermal radiation of the surfaces surrounding a person may cause local discomfort or reduce acceptance of the thermal conditions. ASHRAE Standard 55 sets limits on the allowable temperature differences between various surfaces. Because people are more sensitive to some asymmetries than others, for example that of a warm ceiling versus that of hot and cold vertical surfaces, the limits depend on which surfaces are involved. The ceiling is not allowed to be more than +5 °C (9.0 °F) warmer, whereas a wall may be up to +23 °C (41 °F) warmer than the other surfaces. [ 1 ]
While air movement can be pleasant and provide comfort in some circumstances, it is sometimes unwanted and causes discomfort. This unwanted air movement is called "draft" and is most prevalent when the thermal sensation of the whole body is cool. People are most likely to feel a draft on uncovered body parts such as their head, neck, shoulders, ankles, feet, and legs, but the sensation also depends on the air speed, air temperature, activity, and clothing. [ 1 ]
Floors that are too warm or too cool may cause discomfort, depending on footwear. ASHRAE 55 recommends that floor temperatures stay in the range of 19–29 °C (66–84 °F) in spaces where occupants will be wearing lightweight shoes. [ 1 ]
Standard effective temperature (SET) is a model of human response to the thermal environment. Developed by A.P. Gagge and accepted by ASHRAE in 1986, [ 48 ] it is also referred to as the Pierce Two-Node model. [ 49 ] Its calculation is similar to PMV because it is a comprehensive comfort index based on heat-balance equations that incorporates the personal factors of clothing and metabolic rate. Its fundamental difference is that it uses a two-node method to represent human physiology, measuring skin temperature and skin wettedness. [ 48 ]
The SET index is defined as the equivalent dry bulb temperature of an isothermal environment at 50% relative humidity in which a subject, while wearing clothing standardized for activity concerned, would have the same heat stress (skin temperature) and thermoregulatory strain (skin wettedness) as in the actual test environment. [ 48 ]
Research has tested the model against experimental data and found it tends to overestimate skin temperature and underestimate skin wettedness. [ 49 ] [ 50 ] Fountain and Huizenga (1997) developed a thermal sensation prediction tool that computes SET. [ 51 ] The SET index can also be calculated using either the CBE Thermal Comfort Tool for ASHRAE 55, [ 9 ] the Python package pythermalcomfort, [ 10 ] or the R package comf.
The adaptive model is based on the idea that outdoor climate might be used as a proxy of indoor comfort because of a statistically significant correlation between them. The adaptive hypothesis predicts that contextual factors, such as having access to environmental controls, and past thermal history can influence building occupants' thermal expectations and preferences. [ 3 ] Numerous researchers have conducted field studies worldwide in which they survey building occupants about their thermal comfort while taking simultaneous environmental measurements. Analyzing a database of results from 160 of these buildings revealed that occupants of naturally ventilated buildings accept and even prefer a wider range of temperatures than their counterparts in sealed, air-conditioned buildings because their preferred temperature depends on outdoor conditions. [ 3 ] These results were incorporated in the ASHRAE 55-2004 standard as the adaptive comfort model. The adaptive chart relates indoor comfort temperature to prevailing outdoor temperature and defines zones of 80% and 90% satisfaction. [ 1 ]
The ASHRAE 55-2010 Standard introduced the prevailing mean outdoor temperature as the input variable for the adaptive model. It is based on the arithmetic average of the mean daily outdoor temperatures over no fewer than 7 and no more than 30 sequential days prior to the day in question. [ 1 ] It can also be calculated by weighting the temperatures with different coefficients, assigning increasing importance to the most recent temperatures. If this weighting is used, there is no need to respect the upper limit on the number of days. In order to apply the adaptive model, there should be no mechanical cooling system for the space, occupants should be engaged in sedentary activities with metabolic rates of 1–1.3 met, and the prevailing mean temperature should be between 10 and 33.5 °C (50.0–92.3 °F). [ 1 ]
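A minimal Python sketch of the weighted running mean and the resulting comfort band. The α = 0.8 weighting and the 0.31·t + 17.8 °C neutral-temperature line follow commonly published adaptive-model formulations; normalizing the weights over a finite window and the outdoor temperature series are assumptions of this example:

```python
def running_mean(daily_means, alpha=0.8):
    """Weighted running mean of outdoor temperature; daily_means runs
    from yesterday backwards. Weights are normalized over the finite
    window (an adaptation made here so short series stay unbiased)."""
    weights = [alpha**i for i in range(len(daily_means))]
    return sum(w * t for w, t in zip(weights, daily_means)) / sum(weights)

outdoor = [22.0, 21.5, 23.0, 24.0, 22.5, 21.0, 20.0]  # last 7 days, deg C
t_pma = running_mean(outdoor)

t_comf = 0.31 * t_pma + 17.8          # adaptive neutral temperature
print(f"Prevailing mean outdoor temperature: {t_pma:.1f} C")
print(f"Neutral {t_comf:.1f} C; 80% acceptability band "
      f"{t_comf - 3.5:.1f} to {t_comf + 3.5:.1f} C")
```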
This model applies especially to occupant-controlled, natural-conditioned spaces, where the outdoor climate can actually affect the indoor conditions and so the comfort zone. In fact, studies by de Dear and Brager showed that occupants in naturally ventilated buildings were tolerant of a wider range of temperatures. [ 3 ] This is due to both behavioral and physiological adjustments, since there are different types of adaptive processes. [ 52 ] ASHRAE Standard 55-2010 states that differences in recent thermal experiences, changes in clothing, availability of control options, and shifts in occupant expectations can change people's thermal responses. [ 1 ]
Adaptive models of thermal comfort are implemented in other standards, such as European EN 15251 and ISO 7730 standard. While the exact derivation methods and results are slightly different from the ASHRAE 55 adaptive standard, they are substantially the same. A larger difference is in applicability. The ASHRAE adaptive standard only applies to buildings without mechanical cooling installed, while EN15251 can be applied to mixed-mode buildings, provided the system is not running. [ 53 ]
There are basically three categories of thermal adaptation, namely: behavioral, physiological, and psychological.
An individual's comfort level in a given environment may change and adapt over time due to psychological factors. Subjective perception of thermal comfort may be influenced by the memory of previous experiences. Habituation takes place when repeated exposure moderates future expectations, and responses to sensory input. This is an important factor in explaining the difference between field observations and PMV predictions (based on the static model) in naturally ventilated buildings. In these buildings, the relationship with the outdoor temperatures has been twice as strong as predicted. [ 3 ]
Psychological adaptation is subtly different in the static and adaptive models. Laboratory tests of the static model can identify and quantify non-heat transfer (psychological) factors that affect reported comfort. The adaptive model is limited to reporting differences (called psychological) between modeled and reported comfort.
Thermal comfort as a "condition of mind" is defined in psychological terms. Among the factors that affect the condition of mind (in the laboratory) are a sense of control over the temperature, knowledge of the temperature and the appearance of the (test) environment. A thermal test chamber that appeared residential "felt" warmer than one which looked like the inside of a refrigerator. [ 54 ]
The body has several thermal adjustment mechanisms to survive in drastic temperature environments. In a cold environment the body utilizes vasoconstriction ; which reduces blood flow to the skin, skin temperature and heat dissipation. In a warm environment, vasodilation will increase blood flow to the skin, heat transport, and skin temperature and heat dissipation. [ 55 ] If there is an imbalance despite the vasomotor adjustments listed above, in a warm environment sweat production will start and provide evaporative cooling. If this is insufficient, hyperthermia will set in, body temperature may reach 40 °C (104 °F), and heat stroke may occur. In a cold environment, shivering will start, involuntarily forcing the muscles to work and increasing the heat production by up to a factor of 10. If equilibrium is not restored, hypothermia can set in, which can be fatal. [ 55 ] Long-term adjustments to extreme temperatures, of a few days to six months, may result in cardiovascular and endocrine adjustments. A hot climate may create increased blood volume, improving the effectiveness of vasodilation, enhanced performance of the sweat mechanism, and the readjustment of thermal preferences. In cold or underheated conditions, vasoconstriction can become permanent, resulting in decreased blood volume and increased body metabolic rate. [ 55 ]
In naturally ventilated buildings, occupants take numerous actions to keep themselves comfortable when the indoor conditions drift towards discomfort. Operating windows and fans, adjusting blinds/shades, changing clothing, and consuming food and drinks are some of the common adaptive strategies. Among these, adjusting windows is the most common. [ 56 ] Those occupants who take these sorts of actions tend to feel cooler at warmer temperatures than those who do not. [ 57 ]
The behavioral actions significantly influence energy simulation inputs, and researchers are developing behavior models to improve the accuracy of simulation results. For example, there are many window-opening models that have been developed to date, but there is no consensus over the factors that trigger window opening. [ 56 ]
People might adapt to seasonal heat by becoming more nocturnal, doing physical activity and even conducting business at night.
The thermal sensitivity of an individual is quantified by the descriptor F_S, which takes on higher values for individuals with lower tolerance to non-ideal thermal conditions. [ 58 ] This group includes pregnant women, the disabled, and individuals whose age falls outside the adult range of fourteen to sixty. Existing literature provides consistent evidence that sensitivity to hot and cold surfaces usually declines with age. There is also some evidence of a gradual reduction in the effectiveness of the body's thermoregulation after the age of sixty. [ 58 ] This is mainly due to a more sluggish response of the counteraction mechanisms in lower parts of the body that are used to maintain the core temperature of the body at ideal values. [ 58 ] Seniors prefer warmer temperatures than young adults (76 vs. 72 °F, or 24.4 vs. 22.2 °C). [ 54 ]
Situational factors include the health, psychological, sociological, and vocational activities of the persons.
While differences in thermal comfort preferences between sexes seem to be small, there are some average differences. Studies have found that males on average report discomfort due to rises in temperature much earlier than females. Males on average also estimate higher levels of their sensation of discomfort than females. One recent study tested males and females in the same cotton clothing, performing mental jobs while using a dial vote to report their thermal comfort to the changing temperature. [ 59 ] Many times, females preferred higher temperatures than males. But while females tend to be more sensitive to temperatures, males tend to be more sensitive to relative-humidity levels. [ 60 ] [ 61 ]
An extensive field study was carried out in naturally ventilated residential buildings in Kota Kinabalu, Sabah, Malaysia. This investigation explored the sexes' thermal sensitivity to the indoor environment in non-air-conditioned residential buildings. Multiple hierarchical regression for a categorical moderator was selected for data analysis; the result showed that, as a group, females were slightly more sensitive than males to the indoor air temperatures, whereas under thermal neutrality males and females were found to have similar thermal sensations. [ 62 ]
In different areas of the world, thermal comfort needs may vary based on climate. Parts of China have hot, humid summers and cold winters, creating a need for efficient thermal comfort. Energy conservation in relation to thermal comfort has become a large issue in China in the last several decades due to rapid economic and population growth. [ 63 ] Researchers are now looking into ways to heat and cool buildings in China at lower cost and with less harm to the environment.
In tropical areas of Brazil , urbanization is creating urban heat islands (UHI). These are urban areas that have risen over the thermal comfort limits due to a large influx of people and only drop within the comfortable range during the rainy season. [ 64 ] Urban heat islands can occur over any urban city or built-up area with the correct conditions. [ 65 ] [ 66 ]
In the hot, humid region of Saudi Arabia , the issue of thermal comfort has been important in mosques ; because they are very large open buildings that are used only intermittently (very busy for the noon prayer on Fridays) it is hard to ventilate them properly. The large size requires a large amount of ventilation, which requires a lot of energy since the buildings are used only for short periods of time. Temperature regulation in mosques is a challenge due to the intermittent demand, leading to many mosques being either too hot or too cold. The stack effect also comes into play due to their large size and creates a large layer of hot air above the people in the mosque. New designs have placed the ventilation systems lower in the buildings to provide more temperature control at ground level. [ 67 ] New monitoring steps are also being taken to improve efficiency. [ 68 ]
The concept of thermal comfort is closely related to thermal stress. Thermal stress indices attempt to predict the impact of solar radiation , air movement, and humidity for military personnel undergoing training exercises or athletes during competitive events. Several such indices have been proposed, such as the Predicted Heat Strain (PHS) or the humidex . [ 69 ] Generally, humans do not perform well under thermal stress: people's performance under thermal stress is about 11% lower than their performance in normal thermal wet conditions. Human performance under thermal stress also varies greatly by the type of task the individual is completing. Some of the physiological effects of thermal heat stress include increased blood flow to the skin, sweating, and increased ventilation. [ 70 ] [ 71 ]
The PHS model, developed by the International Organization for Standardization (ISO) committee, allows the analytical evaluation of the thermal stress experienced by a working subject in a hot environment. [ 72 ] It describes a method for predicting the sweat rate and the internal core temperature that the human body will develop in response to the working conditions. The PHS is calculated as a function of several physical parameters, consequently it makes it possible to determine which parameter or group of parameters should be modified, and to what extent, in order to reduce the risk of physiological strains. The PHS model does not predict the physiological response of an individual subject, but only considers standard subjects in good health and fit for the work they perform. The PHS can be determined using either the Python package pythermalcomfort [ 10 ] or the R package comf.
ACGIH has established Action Limits and Threshold Limit Values for heat stress based upon the estimated metabolic rate of a worker and the environmental conditions the worker is subjected to.
This methodology has been adopted by the Occupational Safety and Health Administration (OSHA) as an effective method of assessing heat stress within workplaces. [ 73 ]
The factors affecting thermal comfort were explored experimentally in the 1970s. Many of these studies led to the development and refinement of ASHRAE Standard 55 and were performed at Kansas State University by Ole Fanger and others. Perceived comfort was found to be a complex interaction of these variables. It was found that the majority of individuals would be satisfied by an ideal set of values. As the range of values deviated progressively from the ideal, fewer and fewer people were satisfied. This observation could be expressed statistically as the percent of individuals who expressed satisfaction by comfort conditions and the predicted mean vote (PMV). This approach was challenged by the adaptive comfort model, developed from the ASHRAE 884 project, which revealed that occupants were comfortable in a broader range of temperatures. [ 3 ]
This research is applied to create Building Energy Simulation (BES) programs for residential buildings. Residential buildings in particular can vary much more in thermal comfort than public and commercial buildings. This is due to their smaller size, the variations in clothing worn, and different uses of each room. The main rooms of concern are bathrooms and bedrooms. Bathrooms need to be at a temperature comfortable for a human with or without clothing. Bedrooms are of importance because they need to accommodate different levels of clothing and also different metabolic rates of people asleep or awake. [ 74 ] Discomfort hours is a common metric used to evaluate the thermal performance of a space.
Thermal comfort research in clothing is currently being done by the military. New air-ventilated garments are being researched to improve evaporative cooling in military settings. Some models are being created and tested based on the amount of cooling they provide. [ 75 ]
In the last twenty years, researchers have also developed advanced thermal comfort models that divide the human body into many segments, and predict local thermal discomfort by considering heat balance. [ 76 ] [ 77 ] [ 78 ] This has opened up a new arena of thermal comfort modeling that aims at heating/cooling selected body parts.
Another area of study is the hue-heat hypothesis, which states that an environment with warm colors (red, orange, yellow hues) will feel warmer in terms of temperature and comfort, while an environment with cold colors (blue, green hues) will feel cooler. [ 79 ] [ 80 ] [ 81 ] The hue-heat hypothesis has both been investigated scientifically [ 82 ] and become ingrained in popular culture through the terms warm and cold colors. [ 83 ]
Whenever the referenced studies tried to discuss the thermal conditions for different groups of occupants in one room, they ended up simply presenting comparisons of thermal comfort satisfaction based on subjective studies. No study has tried to reconcile the different thermal comfort requirements of different types of occupants who must stay in the same room. It therefore appears necessary to investigate the different thermal conditions required by different groups of occupants in hospitals in order to reconcile their different requirements. To reconcile the differences in the required thermal comfort conditions, it is recommended to test the possibility of using different ranges of local radiant temperature in one room via a suitable mechanical system.
Although various studies have been undertaken on thermal comfort for patients in hospitals, it is also necessary to study the effects of thermal comfort conditions on the quality and quantity of healing for patients in hospitals. There is also original research showing the link between thermal comfort for staff and their levels of productivity, but no such studies have been produced for hospitals specifically. Research into coverage and methods for this subject is therefore recommended, as is research on cooling and heating delivery systems for patients with low levels of immune-system protection (such as HIV patients, burned patients, etc.). Important areas that still need attention include thermal comfort for staff and its relation to their productivity, and the use of different heating systems to prevent hypothermia in the patient while simultaneously improving thermal comfort for hospital staff.
Finally, the interaction between people, systems and architectural design in hospitals is a field that requires further work to improve the knowledge of how to design buildings and systems that reconcile the many conflicting factors affecting the people occupying these buildings. [ 84 ]
Personal comfort systems (PCS) refer to devices or systems which heat or cool a building occupant personally. [ 85 ] This concept is best appreciated in contrast to central HVAC systems which have uniform temperature settings for extensive areas. Personal comfort systems include fans and air diffusers of various kinds (e.g. desk fans, nozzles and slot diffusers, overhead fans, high-volume low-speed fans etc.) and personalized sources of radiant or conductive heat (footwarmers, legwarmers, hot water bottles etc.). PCS has the potential to satisfy individual comfort requirements much better than current HVAC systems, as interpersonal differences in thermal sensation due to age, sex, body mass, metabolic rate, clothing and thermal adaptation can amount to an equivalent temperature variation of 2–5 °C (3.6–9 °F), which is impossible for a central, uniform HVAC system to cater to. [ 85 ] Moreover, research has shown that the perceived ability to control one's thermal environment tends to widen one's range of tolerable temperatures. [ 3 ] Traditionally, PCS devices have been used in isolation from one another. However, it has been proposed by Andersen et al. (2016) that a network of PCS devices which generate well-connected microzones of thermal comfort, and report real-time occupant information and respond to programmatic actuation requests (e.g. a party, a conference, a concert etc.), can combine with occupant-aware building applications to enable new methods of comfort maximization. [ 86 ] | https://en.wikipedia.org/wiki/Thermal_comfort |
In heat transfer , thermal engineering , and thermodynamics , thermal conductance and thermal resistance are fundamental concepts that describe the ability of materials or systems to conduct heat and the opposition they offer to the heat current . The ability to manipulate these properties allows engineers to control temperature gradient , prevent thermal shock , and maximize the efficiency of thermal systems . Furthermore, these principles find applications in a multitude of fields, including materials science , mechanical engineering , electronics , and energy management . Knowledge of these principles is crucial in various scientific, engineering, and everyday applications, from designing efficient temperature control , thermal insulation , and thermal management in industrial processes to optimizing the performance of electronic devices .
Thermal conductance ( G ) measures the ability of a material or system to conduct heat. It provides insights into the ease with which heat can pass through a particular system. It is measured in units of watts per kelvin (W/K). It is essential in the design of heat exchangers , thermally efficient materials , and various engineering systems where the controlled movement of heat is vital.
Conversely, thermal resistance ( R ) measures the opposition to the heat current in a material or system. It is measured in units of kelvins per watt (K/W) and indicates how much temperature difference (in kelvins) is required to transfer a unit of heat current (in watts) through the material or object. It is essential to optimize the building insulation , evaluate the efficiency of electronic devices, and enhance the performance of heat sinks in various applications.
Objects made of insulators like rubber tend to have very high resistance and low conductance, while objects made of conductors like metals tend to have very low resistance and high conductance. This relationship is quantified by resistivity or conductivity . However, the nature of a material is not the only factor as it also depends on the size and shape of an object because these properties are extensive rather than intensive . The relationship between thermal conductance and resistance is analogous to that between electrical conductance and resistance in the domain of electronics.
Thermal insulance ( R-value ) is a measure of a material's resistance to the heat current. It quantifies how effectively a material can resist the transfer of heat through conduction, convection, and radiation. It has the units square metre kelvins per watt (m²·K/W) in SI units or square foot degree Fahrenheit–hours per British thermal unit (ft²·°F·h/Btu) in imperial units . The higher the thermal insulance, the better a material insulates against heat transfer. It is commonly used in construction to assess the insulation properties of materials such as walls, roofs, and insulation products.
Thermal conductance and resistance have several practical applications in various fields, including building insulation, electronics cooling and thermal management, heat exchanger design, and energy-efficient industrial processes.
Absolute thermal resistance is the temperature difference across a structure when a unit of heat energy flows through it in unit time . It is the reciprocal of thermal conductance . The SI unit of absolute thermal resistance is kelvins per watt (K/W) or the equivalent degrees Celsius per watt (°C/W) – the two are the same since the intervals are equal: Δ T = 1 K = 1 °C.
The thermal resistance of materials is of great interest to electronic engineers because most electrical components generate heat and need to be cooled. Electronic components malfunction or fail if they overheat, and some parts routinely need measures taken in the design stage to prevent this.
Electrical engineers are familiar with Ohm's law and so often use it as an analogy when doing calculations involving thermal resistance. Mechanical and structural engineers are more familiar with Hooke's law and so often use it as an analogy when doing calculations involving thermal resistance.
The heat flow can be modelled by analogy to an electrical circuit where heat flow is represented by current, temperatures are represented by voltages, heat sources are represented by constant current sources, absolute thermal resistances are represented by resistors and thermal capacitances by capacitors.
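Because of this analogy, thermal networks can be computed exactly like resistor networks: series resistances add, parallel conductances add, and heat flow plays the role of current. A minimal Python sketch with invented component values:

```python
# Thermal "circuit" analogy: series resistances add, parallel
# conductances add, and Q = dT / R_total, just as I = V / R.
# All component values below are illustrative assumptions.

def series(*resistances):
    return sum(resistances)

def parallel(*resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

# Junction -> case -> (heat sink path in parallel with PCB path) -> ambient
r_jc = 1.5                        # K/W, junction to case
r_sink = series(0.3, 2.0)         # K/W, interface plus heat sink to air
r_pcb = 25.0                      # K/W, conduction through the board
r_total = series(r_jc, parallel(r_sink, r_pcb))

dT = 60.0                         # K, junction rise above ambient
print(f"R_total = {r_total:.2f} K/W, Q = {dT / r_total:.1f} W")
```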
The diagram shows an equivalent thermal circuit for a semiconductor device with a heat sink .
Consider a component such as a silicon transistor that is bolted to the metal frame of a piece of equipment. The transistor's manufacturer will specify parameters in the datasheet called the absolute thermal resistance from junction to case (symbol: R_θJC), and the maximum allowable temperature of the semiconductor junction (symbol: T_Jmax). The specification for the design should include a maximum temperature at which the circuit should function correctly. Finally, the designer should consider how the heat from the transistor will escape to the environment: this might be by convection into the air, with or without the aid of a heat sink , or by conduction through the printed circuit board . For simplicity, let us assume that the designer decides to bolt the transistor to a metal surface (or heat sink ) that is guaranteed to be less than ΔT_HS above the ambient temperature.
Given all this information, the designer can construct a model of the heat flow from the semiconductor junction, where the heat is generated, to the outside world. In our example, the heat has to flow from the junction to the case of the transistor, then from the case to the metalwork. We do not need to consider where the heat goes after that, because we are told that the metalwork will conduct heat fast enough to keep the temperature less than Δ T H S {\displaystyle \Delta T_{\rm {HS}}} above ambient: this is all we need to know.
Suppose the engineer wishes to know how much power can be put into the transistor before it overheats. The calculations are as follows.
where R θ B {\displaystyle R_{\theta {\rm {B}}}} is the absolute thermal resistance of the bond between the transistor's case and the metalwork. This figure depends on the nature of the bond - for example, a thermal bonding pad or thermal transfer grease might be used to reduce the absolute thermal resistance.
We use the general principle that the temperature drop ΔT across a given absolute thermal resistance R_θ with a given heat flow Q̇ through it is: ΔT = Q̇ × R_θ .
Substituting our own symbols into this formula gives:
and, rearranging,
The designer now knows Q ˙ m a x {\displaystyle {\dot {Q}}_{\rm {max}}} , the maximum power that the transistor can be allowed to dissipate, so they can design the circuit to limit the temperature of the transistor to a safe level.
Let us substitute some sample numbers:
The result is then:
This means that the transistor can dissipate about 18 watts before it overheats. A cautious designer would operate the transistor at a lower power level to increase its reliability .
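As an illustration, the following Python sketch reproduces this calculation under stated assumptions. The article's original sample numbers are not reproduced here; the values below are hypothetical and are chosen only so that the standard rearrangement of ΔT = Q̇ × R_θ for this series thermal path, Q̇_max = (T_Jmax − T_amb − ΔT_HS) / (R_θJC + R_θB) with T_amb the maximum ambient temperature from the specification, gives roughly the 18 W quoted above.

```python
# Maximum transistor dissipation from a junction-to-ambient thermal model.
# All numbers are hypothetical example values (the article's original sample
# numbers are not reproduced here); they are chosen to give roughly ~18 W.

T_J_MAX = 150.0      # maximum allowable junction temperature, deg C (assumed)
T_AMB_MAX = 50.0     # maximum ambient temperature per the design spec, deg C (assumed)
DT_HS = 10.0         # guaranteed maximum rise of the heat sink above ambient, K (assumed)
R_THETA_JC = 4.0     # junction-to-case thermal resistance, K/W (assumed)
R_THETA_B = 1.0      # case-to-heat-sink bond thermal resistance, K/W (assumed)

def max_dissipation(t_j_max, t_amb_max, dt_hs, r_jc, r_b):
    """Rearranged form of dT = Q_dot * R_theta for the series thermal path."""
    return (t_j_max - t_amb_max - dt_hs) / (r_jc + r_b)

q_max = max_dissipation(T_J_MAX, T_AMB_MAX, DT_HS, R_THETA_JC, R_THETA_B)
print(f"Maximum allowable dissipation: {q_max:.1f} W")   # -> 18.0 W
```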
This method can be generalized to include any number of layers of heat-conducting materials, simply by adding together the absolute thermal resistances of the layers and the temperature drops across the layers.
From Fourier's law for heat conduction , the following equation can be derived, and is valid as long as all of the parameters (x and k) are constant throughout the sample.
where:
In terms of the temperature gradient across the sample and heat flux through the sample, the relationship is:
where:
A 2008 review paper written by Philips researcher Clemens J. M. Lasance notes that: "Although there is an analogy between heat flow by conduction (Fourier's law) and the flow of an electric current (Ohm’s law), the corresponding physical properties of thermal conductivity and electrical conductivity conspire to make the behavior of heat flow quite unlike the flow of electricity in normal situations. [...] Unfortunately, although the electrical and thermal differential equations are analogous, it is erroneous to conclude that there is any practical analogy between electrical and thermal resistance. This is because a material that is considered an insulator in electrical terms is about 20 orders of magnitude less conductive than a material that is considered a conductor, while, in thermal terms, the difference between an "insulator" and a "conductor" is only about three orders of magnitude. The entire range of thermal conductivity is then equivalent to the difference in electrical conductivity of high-doped and low-doped silicon." [ 3 ]
The junction-to-air thermal resistance can vary greatly depending on the ambient conditions. [ 4 ] (A more sophisticated way of expressing the same fact is saying that junction-to-ambient thermal resistance is not Boundary-Condition Independent (BCI). [ 3 ] ) JEDEC has a standard (number JESD51-2) for measuring the junction-to-air thermal resistance of electronics packages under natural convection and another standard (number JESD51-6) for measurement under forced convection .
A JEDEC standard for measuring the junction-to-board thermal resistance (relevant for surface-mount technology ) has been published as JESD51-8. [ 5 ]
A JEDEC standard for measuring the junction-to-case thermal resistance (JESD51-14) is a relative newcomer, having been published in late 2010; it concerns only packages having a single heat flow and an exposed cooling surface. [ 6 ] [ 7 ] [ 8 ]
When resistances are in series, the total resistance is the sum of the resistances:
R t o t = R A + R B + R C + . . . {\displaystyle R_{\rm {tot}}=R_{A}+R_{B}+R_{C}+...}
Similarly to electrical circuits, the total thermal resistance for steady state conditions can be calculated as follows.
The total thermal resistance
Simplifying the equation, we get
With terms for the thermal resistance for conduction, we get
It is often reasonable to assume one-dimensional conditions even though the actual heat flow is multidimensional. Two different equivalent circuits may then be used. For case (a) (shown in the figure), we presume isothermal surfaces normal to the x-direction, whereas for case (b) we presume adiabatic surfaces parallel to the x-direction. The two circuits may give different results for the total resistance R_tot , and the actual heat transfer rate q is bracketed by the corresponding values. These differences increase with increasing |k_f − k_g| as multidimensional effects become more significant. [ 9 ]
Spherical and cylindrical systems may be treated as one-dimensional, due to the temperature gradients in the radial direction. The standard method can be used for analyzing radial systems under steady state conditions, starting with the appropriate form of the heat equation, or the alternative method, starting with the appropriate form of Fourier's law . For a hollow cylinder in steady state conditions with no heat generation, the appropriate form of the heat equation is [ 9 ] (1/r) d/dr ( k r dT/dr ) = 0 .
where k is treated as a variable. Considering the appropriate form of Fourier's law, the physical significance of treating k as a variable becomes evident when considering the rate at which energy is conducted across a cylindrical surface; this is represented as q_r = −kA (dT/dr) = −k (2πrL) (dT/dr) ,
where A = 2πrL is the area normal to the direction of heat transfer. The heat equation above implies that the quantity kr (dT/dr) is independent of the radius r ; it then follows from Fourier's law that the heat transfer rate q_r is constant in the radial direction.
In order to determine the temperature distribution in the cylinder, the heat equation above can be solved by applying the appropriate boundary conditions , with the assumption that k is constant.
Using the following boundary conditions, the constants C 1 {\displaystyle {C_{1}}} and C 2 {\displaystyle {C_{2}}} can be computed
The general solution gives us T(r) = C_1 ln r + C_2 .
Solving for C 1 {\displaystyle {C_{1}}} and C 2 {\displaystyle {C_{2}}} and substituting into the general solution, we obtain
The logarithmic distribution of the temperature is sketched in the inset of the thumbnail figure.
Assuming that the temperature distribution above is used with Fourier's law, the heat transfer rate can be expressed in the following form: Q̇_r = 2πLk (T_1 − T_2) / ln(r_2/r_1) , where T_1 and T_2 are the temperatures at the inner and outer surfaces of the cylinder.
Finally, for radial conduction in a cylindrical wall, the thermal resistance is of the form R_cond = ln(r_2/r_1) / (2πLk) .
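A minimal numerical sketch of this radial-resistance formula, using hypothetical pipe-insulation values (the radii, conductivity, length and temperature drop below are all assumed), might look as follows.

```python
import math

# Radial thermal resistance of a pipe-insulation layer (hypothetical values).
r1 = 0.025   # inner radius of the insulation, m (assumed)
r2 = 0.050   # outer radius of the insulation, m (assumed)
k = 0.040    # thermal conductivity of the insulation, W/(m*K) (assumed)
L = 10.0     # pipe length, m (assumed)

R_cyl = math.log(r2 / r1) / (2.0 * math.pi * k * L)   # K/W
dT = 60.0    # inner-to-outer surface temperature difference, K (assumed)
Q_dot = dT / R_cyl                                     # W

print(f"R = {R_cyl:.3f} K/W, heat loss = {Q_dot:.1f} W")
```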
There is a large amount of literature on this topic. In general, works using the term "thermal resistance" are more engineering-oriented, whereas works using the term thermal conductivity are more [pure-]physics-oriented. The following books are representative, but may be easily substituted. | https://en.wikipedia.org/wiki/Thermal_conductance_and_resistance |
In physics, the thermal conductance quantum g 0 {\displaystyle g_{0}} describes the rate at which heat is transported through a single ballistic phonon channel with temperature T {\displaystyle T} .
It is given by g_0 = π² k_B² T / (3h) ≈ (9.46×10⁻¹³ W/K²) T , where k_B is the Boltzmann constant and h is the Planck constant.
The thermal conductance of any electrically insulating structure that exhibits ballistic phonon transport is a positive integer multiple of g 0 . {\displaystyle g_{0}.} The thermal conductance quantum was first measured in 2000. [ 1 ] These measurements employed suspended silicon nitride ( Si 3 N 4 ) nanostructures that exhibited a constant thermal conductance of 16 g 0 {\displaystyle g_{0}} at temperatures below approximately 0.6 kelvin .
For ballistic electrical conductors, the electron contribution to the thermal conductance is also quantized as a result of the electrical conductance quantum and the Wiedemann–Franz law , which has been quantitatively measured at both cryogenic (~20 mK) [ 2 ] and room temperature (~300K). [ 3 ] [ 4 ]
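The following short Python sketch simply evaluates g_0 = π² k_B² T / (3h) numerically; the 0.6 K temperature is chosen only because it is roughly the regime of the measurement mentioned above.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def thermal_conductance_quantum(temperature_k):
    """g0 = pi^2 * k_B^2 * T / (3 h): conductance of one ballistic phonon channel."""
    return math.pi**2 * K_B**2 * temperature_k / (3.0 * H)

T = 0.6  # kelvin, roughly the regime of the 2000 measurement
g0 = thermal_conductance_quantum(T)
print(f"g0 at {T} K: {g0:.3e} W/K")          # ~5.7e-13 W/K
print(f"16 channels: {16 * g0:.3e} W/K")     # conductance of the measured nanostructure
```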
The thermal conductance quantum, also called quantized thermal conductance, may be understood from the Wiedemann-Franz law, which shows that κ / σ = L T ,
where L is a universal constant called the Lorenz factor , L = π² k_B² / (3 e²) ≈ 2.44×10⁻⁸ W⋅Ω⋅K⁻².
In the regime with quantized electric conductance, one may have
where n {\displaystyle n} is an integer, also known as TKNN number. Then
where g 0 {\displaystyle g_{0}} is the thermal conductance quantum defined above. | https://en.wikipedia.org/wiki/Thermal_conductance_quantum |
Thermal conduction is the diffusion of thermal energy (heat) within one material or between materials in contact. The higher temperature object has molecules with more kinetic energy ; collisions between molecules distribute this kinetic energy until the object has the same kinetic energy throughout. Thermal conductivity , frequently represented by k , is a property that relates the rate of heat loss per unit area of a material to its rate of change of temperature. Essentially, it is a value that accounts for any property of the material that could change the way it conducts heat. [ 1 ] Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform.
Every process involving heat transfer takes place by only three methods:
A region with greater thermal energy (heat) corresponds with greater molecular agitation. Thus when a hot object touches a cooler surface, the highly agitated molecules from the hot object bump the calm molecules of the cooler surface, transferring the microscopic kinetic energy and causing the colder part or object to heat up. Mathematically, thermal conduction works just like diffusion: the rate of conduction increases as the temperature difference increases, as the distance the heat must travel decreases, or as the cross-sectional area increases:
Where:
Conduction is the main mode of heat transfer for solid materials because the strong inter-molecular forces allow the vibrations of particles to be easily transmitted, in comparison to liquids and gases. Liquids have weaker inter-molecular forces and more space between the particles, which makes the vibrations of particles harder to transmit. Gases have even more space, and therefore infrequent particle collisions. This makes liquids and gases poor conductors of heat. [ 1 ]
Thermal contact conductance is the study of heat conduction between solid bodies in contact. A temperature drop is often observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Interfacial thermal resistance is a measure of an interface's resistance to thermal flow. This thermal resistance differs from contact resistance, as it exists even at atomically perfect interfaces. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of their thermal properties. Interfaces often contribute significantly to the observed properties of the materials.
The inter-molecular transfer of energy could be primarily by elastic impact, as in fluids, or by free-electron diffusion, as in metals, or phonon vibration , as in insulators. In insulators , the heat flux is carried almost entirely by phonon vibrations.
Metals (e.g., copper, platinum, gold, etc.) are usually good conductors of thermal energy. This is due to the way that metals bond chemically: metallic bonds (as opposed to covalent or ionic bonds ) have free-moving electrons that transfer thermal energy rapidly through the metal. The electron fluid of a conductive metallic solid conducts most of the heat flux through the solid. Phonon flux is still present but carries less of the energy. Electrons also conduct electric current through conductive solids, and the thermal and electrical conductivities of most metals have about the same ratio. A good electrical conductor, such as copper , also conducts heat well. Thermoelectricity is caused by the interaction of heat flux and electric current. Heat conduction within a solid is directly analogous to diffusion of particles within a fluid, in the situation where there are no fluid currents.
In gases, heat transfer occurs through collisions of gas molecules with one another. In the absence of convection, which relates to a moving fluid or gas phase, thermal conduction through a gas phase is highly dependent on the composition and pressure of this phase, and in particular, the mean free path of gas molecules relative to the size of the gas gap, as given by the Knudsen number K n {\displaystyle K_{n}} . [ 3 ]
To quantify the ease with which a particular medium conducts, engineers employ the thermal conductivity , also known as the conductivity constant or conduction coefficient, k . In thermal conductivity , k is defined as "the quantity of heat, Q , transmitted in time ( t ) through a thickness ( L ), in a direction normal to a surface of area ( A ), due to a temperature difference (Δ T ) [...]". Thermal conductivity is a material property that is primarily dependent on the medium's phase , temperature, density, and molecular bonding. Thermal effusivity is a quantity derived from conductivity, which is a measure of its ability to exchange thermal energy with its surroundings.
Steady-state conduction is the form of conduction that happens when the temperature difference(s) driving the conduction are constant, so that (after an equilibration time), the spatial distribution of temperatures (temperature field) in the conducting object does not change any further. Thus, all partial derivatives of temperature with respect to space may either be zero or have nonzero values, but all derivatives of temperature at any point with respect to time are uniformly zero. In steady-state conduction, the amount of heat entering any region of an object is equal to the amount of heat coming out (if this were not so, the temperature would be rising or falling, as thermal energy was tapped or trapped in a region).
For example, a bar may be cold at one end and hot at the other, but after a state of steady-state conduction is reached, the spatial gradient of temperatures along the bar does not change any further, as time proceeds. Instead, the temperature remains constant at any given cross-section of the rod normal to the direction of heat transfer, and this temperature varies linearly in space in the case where there is no heat generation in the rod. [ 4 ]
In steady-state conduction, all the laws of direct current electrical conduction can be applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances . In such cases, temperature plays the role of voltage, and heat transferred per unit time (heat power) is the analog of electric current. Steady-state systems can be modeled by networks of such thermal resistances in series and parallel, in exact analogy to electrical networks of resistors. See purely resistive thermal circuits for an example of such a network.
During any period in which temperatures change in time at any place within an object, the mode of thermal energy flow is termed transient conduction. Another term is "non-steady-state" conduction, referring to the time-dependence of temperature fields in an object. Non-steady-state situations appear after an imposed change in temperature at a boundary of an object. They may also occur with temperature changes inside an object, as a result of a new source or sink of heat suddenly introduced within an object, causing temperatures near the source or sink to change in time.
When a new perturbation of temperature of this type happens, temperatures within the system change in time toward a new equilibrium with the new conditions, provided that these do not change. After equilibrium, heat flow into the system once again equals the heat flow out, and temperatures at each point inside the system no longer change. Once this happens, transient conduction is ended, although steady-state conduction may continue if heat flow continues.
If changes in external temperatures or internal heat generation changes are too rapid for the equilibrium of temperatures in space to take place, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state.
An example of a new source of heat "turning on" within an object, causing transient conduction, is an engine starting in an automobile. In this case, the transient thermal conduction phase for the entire machine is over, and the steady-state phase appears, as soon as the engine reaches steady-state operating temperature . In this state of steady-state equilibrium, temperatures vary greatly from the engine cylinders to other parts of the automobile, but at no point in space within the automobile does temperature increase or decrease. After establishing this state, the transient conduction phase of heat transfer is over.
New external conditions also cause this process: for example, the copper bar in the example steady-state conduction experiences transient conduction as soon as one end is subjected to a different temperature from the other. Over time, the field of temperatures inside the bar reaches a new steady-state, in which a constant temperature gradient along the bar is finally set up, and this gradient then stays constant in time. Typically, such a new steady-state gradient is approached exponentially with time after a new temperature-or-heat source or sink, has been introduced. When a "transient conduction" phase is over, heat flow may continue at high power, so long as temperatures do not change.
An example of transient conduction that does not end with steady-state conduction, but rather no conduction, occurs when a hot copper ball is dropped into oil at a low temperature. Here, the temperature field within the object begins to change as a function of time, as the heat is removed from the metal, and the interest lies in analyzing this spatial change of temperature within the object over time until all gradients disappear entirely (the ball has reached the same temperature as the oil). Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all.
The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). [ 5 ] However, most often, because of complicated shapes with varying thermal conductivities within the shape (i.e., most complex objects, mechanisms or machines in engineering) often the application of approximate theories is required, and/or numerical analysis by computer. One popular graphical method involves the use of Heisler Charts .
Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified, for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model , as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity . Such regions warm or cool, but show no significant temperature variation across their extent, during the process (as compared to the rest of the system). This is due to their far higher conductance. During transient conduction, therefore, the temperature across their conductive regions changes uniformly in space, and as a simple exponential in time. An example of such systems is those that follow Newton's law of cooling during transient cooling (or the reverse during heating). The equivalent thermal circuit consists of a simple capacitor in series with a resistor. In such cases, the remainder of the system with a high thermal resistance (comparatively low conductivity) plays the role of the resistor in the circuit.
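As a rough illustration of the lumped-capacitance ("thermal RC") model, the sketch below cools a small body through a single thermal resistance; the mass, specific heat, resistance and temperatures are hypothetical example values.

```python
import math

# Lumped-capacitance ("thermal RC") cooling of a small copper ball (hypothetical values).
m = 0.050          # mass, kg (assumed)
c_p = 385.0        # specific heat of copper, J/(kg*K) (approximate)
R_th = 20.0        # thermal resistance to the surroundings, K/W (assumed)
C_th = m * c_p     # thermal capacitance, J/K

T_env = 25.0       # surrounding temperature, deg C (assumed)
T0 = 200.0         # initial ball temperature, deg C (assumed)
tau = R_th * C_th  # time constant of the equivalent RC circuit, s

for t in (0.0, tau, 3 * tau):
    T = T_env + (T0 - T_env) * math.exp(-t / tau)   # exponential approach to T_env
    print(f"t = {t:7.1f} s  ->  T = {T:6.1f} degC")
```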
The theory of relativistic heat conduction is a model that is compatible with the theory of special relativity. For most of the last century, it was recognized that the Fourier equation is in contradiction with the theory of relativity because it admits an infinite speed of propagation of heat signals. For example, according to the Fourier equation, a pulse of heat at the origin would be felt at infinity instantaneously. The speed of information propagation is faster than the speed of light in vacuum, which is physically inadmissible within the framework of relativity.
Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave -like motion, rather than by the more usual mechanism of diffusion . Heat takes the place of pressure in normal sound waves. This leads to a very high thermal conductivity . It is known as "second sound" because the wave motion of heat is similar to the propagation of sound in air. This is called Quantum conduction.
The law of heat conduction, also known as Fourier's law (compare Fourier's heat equation ), states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area, at right angles to that gradient, through which the heat flows. We can state this law in two equivalent forms: the integral form, in which we look at the amount of energy flowing into or out of a body as a whole, and the differential form, in which we look at the flow rates or fluxes of energy locally.
Newton's law of cooling is a discrete analogue of Fourier's law, while Ohm's law is the electrical analogue of Fourier's law and Fick's laws of diffusion is its chemical analogue.
The differential form of Fourier's law of thermal conduction shows that the local heat flux density q {\displaystyle \mathbf {q} } is equal to the product of thermal conductivity k {\displaystyle k} and the negative local temperature gradient − ∇ T {\displaystyle -\nabla T} . The heat flux density is the amount of energy that flows through a unit area per unit time. q = − k ∇ T , {\displaystyle \mathbf {q} =-k\nabla T,} where (including the SI units)
The thermal conductivity k {\displaystyle k} is often treated as a constant, though this is not always true. While the thermal conductivity of a material generally varies with temperature, the variation can be small over a significant range of temperatures for some common materials. In anisotropic materials, the thermal conductivity typically varies with orientation; in this case k {\displaystyle k} is represented by a second-order tensor . In non-uniform materials, k {\displaystyle k} varies with spatial location.
For many simple applications, Fourier's law is used in its one-dimensional form, for example, in the x direction: q x = − k d T d x . {\displaystyle q_{x}=-k{\frac {dT}{dx}}.}
In an isotropic medium, Fourier's law leads to the heat equation ∂ T ∂ t = α ( ∂ 2 T ∂ x 2 + ∂ 2 T ∂ y 2 + ∂ 2 T ∂ z 2 ) {\displaystyle {\frac {\partial T}{\partial t}}=\alpha \left({\frac {\partial ^{2}T}{\partial x^{2}}}+{\frac {\partial ^{2}T}{\partial y^{2}}}+{\frac {\partial ^{2}T}{\partial z^{2}}}\right)} with a fundamental solution famously known as the heat kernel .
By integrating the differential form over the material's total surface S {\displaystyle S} , we arrive at the integral form of Fourier's law:
where (including the SI units):
The above differential equation , when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as Q = − k A Δ t L Δ T , {\displaystyle Q=-k{\frac {A\Delta t}{L}}\Delta T,} where
One can define the (macroscopic) thermal resistance of the 1-D homogeneous material: R = 1 k L A {\displaystyle R={\frac {1}{k}}{\frac {L}{A}}}
With a simple 1-D steady heat conduction equation which is analogous to Ohm's law for a simple electric resistance : Δ T = R Q ˙ {\displaystyle \Delta T=R\,{\dot {Q}}}
This law forms the basis for the derivation of the heat equation .
Writing U = k Δ x , {\displaystyle U={\frac {k}{\Delta x}},} where U is the conductance, in W/(m 2 K).
Fourier's law can also be stated as: Δ Q Δ t = U A ( − Δ T ) . {\displaystyle {\frac {\Delta Q}{\Delta t}}=UA\,(-\Delta T).}
The reciprocal of conductance is resistance, R {\displaystyle {\big .}R} is given by: R = 1 U = Δ x k = A ( − Δ T ) Δ Q Δ t . {\displaystyle R={\frac {1}{U}}={\frac {\Delta x}{k}}={\frac {A\,(-\Delta T)}{\frac {\Delta Q}{\Delta t}}}.}
Resistance is additive when several conducting layers lie between the hot and cool regions, because A and Q are the same for all layers. In a multilayer partition, the total conductance is related to the conductance of its layers by: R = R 1 + R 2 + R 3 + ⋯ {\displaystyle R=R_{1}+R_{2}+R_{3}+\cdots } or equivalently 1 U = 1 U 1 + 1 U 2 + 1 U 3 + ⋯ {\displaystyle {\frac {1}{U}}={\frac {1}{U_{1}}}+{\frac {1}{U_{2}}}+{\frac {1}{U_{3}}}+\cdots }
So, when dealing with a multilayer partition, the following formula is usually used: Δ Q Δ t = A ( − Δ T ) Δ x 1 k 1 + Δ x 2 k 2 + Δ x 3 k 3 + ⋯ . {\displaystyle {\frac {\Delta Q}{\Delta t}}={\frac {A\,(-\Delta T)}{{\frac {\Delta x_{1}}{k_{1}}}+{\frac {\Delta x_{2}}{k_{2}}}+{\frac {\Delta x_{3}}{k_{3}}}+\cdots }}.}
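A small Python sketch of the multilayer formula above, with hypothetical layer thicknesses, conductivities, wall area and temperatures (the temperature drop is written here as T_hot − T_cold), might look like this.

```python
# Steady heat flow through a multilayer wall:
# dQ/dt = A * (T_hot - T_cold) / sum(dx_i / k_i).
# Layer data are hypothetical example values.

layers = [
    ("brick",        0.100, 0.72),   # (name, thickness m, conductivity W/(m*K))
    ("mineral wool", 0.080, 0.04),
    ("plasterboard", 0.012, 0.25),
]

A = 12.0        # wall area, m^2 (assumed)
T_hot = 20.0    # indoor temperature, deg C (assumed)
T_cold = -5.0   # outdoor temperature, deg C (assumed)

resistance_per_area = sum(dx / k for _, dx, k in layers)   # m^2*K/W
q_dot = A * (T_hot - T_cold) / resistance_per_area         # W

print(f"R per unit area = {resistance_per_area:.3f} m^2*K/W")
print(f"Heat flow       = {q_dot:.1f} W")
```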
For heat conduction from one fluid to another through a barrier, it is sometimes important to consider the conductance of the thin film of fluid that remains stationary next to the barrier. This thin film of fluid is difficult to quantify because its characteristics depend upon complex conditions of turbulence and viscosity —but when dealing with thin high-conductance barriers it can sometimes be quite significant.
The previous conductance equations, written in terms of extensive properties , can be reformulated in terms of intensive properties . Ideally, the formulae for conductance should produce a quantity with dimensions independent of distance, like Ohm's law for electrical resistance, R = V / I {\displaystyle R=V/I\,\!} , and conductance, G = I / V {\displaystyle G=I/V\,\!} .
From the electrical formula: R = ρ x / A {\displaystyle R=\rho x/A} , where ρ is resistivity, x is length, and A is cross-sectional area, we have G = k A / x {\displaystyle G=kA/x\,\!} , where G is conductance, k is conductivity, x is length, and A is cross-sectional area.
For heat, U = k A Δ x , {\displaystyle U={\frac {kA}{\Delta x}},} where U is the conductance.
Fourier's law can also be stated as: Q ˙ = U Δ T , {\displaystyle {\dot {Q}}=U\,\Delta T,} analogous to Ohm's law, I = V / R {\displaystyle I=V/R} or I = V G . {\displaystyle I=VG.}
The reciprocal of conductance is resistance, R , given by: R = Δ T Q ˙ , {\displaystyle R={\frac {\Delta T}{\dot {Q}}},} analogous to Ohm's law, R = V / I . {\displaystyle R=V/I.}
The rules for combining resistances and conductances (in series and parallel) are the same for both heat flow and electric current.
Conduction through cylindrical shells (e.g. pipes) can be calculated from the internal radius, r 1 {\displaystyle r_{1}} , the external radius, r 2 {\displaystyle r_{2}} , the length, ℓ {\displaystyle \ell } , and the temperature difference between the inner and outer wall, T 2 − T 1 {\displaystyle T_{2}-T_{1}} .
The surface area of the cylinder is A r = 2 π r ℓ {\displaystyle A_{r}=2\pi r\ell }
When Fourier's equation is applied: Q ˙ = − k A r d T d r = − 2 k π r ℓ d T d r {\displaystyle {\dot {Q}}=-kA_{r}{\frac {dT}{dr}}=-2k\pi r\ell {\frac {dT}{dr}}} and rearranged: Q ˙ ∫ r 1 r 2 1 r d r = − 2 k π ℓ ∫ T 1 T 2 d T {\displaystyle {\dot {Q}}\int _{r_{1}}^{r_{2}}{\frac {1}{r}}\,dr=-2k\pi \ell \int _{T_{1}}^{T_{2}}dT} then the rate of heat transfer is: Q ˙ = 2 k π ℓ T 1 − T 2 ln ( r 2 / r 1 ) {\displaystyle {\dot {Q}}=2k\pi \ell {\frac {T_{1}-T_{2}}{\ln(r_{2}/r_{1})}}} the thermal resistance is: R c = Δ T Q ˙ = ln ( r 2 / r 1 ) 2 π k ℓ {\displaystyle R_{c}={\frac {\Delta T}{\dot {Q}}}={\frac {\ln(r_{2}/r_{1})}{2\pi k\ell }}} and Q ˙ = 2 π k ℓ r m T 1 − T 2 r 2 − r 1 {\textstyle {\dot {Q}}=2\pi k\ell r_{m}{\frac {T_{1}-T_{2}}{r_{2}-r_{1}}}} , where r m = r 2 − r 1 ln ( r 2 / r 1 ) {\textstyle r_{m}={\frac {r_{2}-r_{1}}{\ln(r_{2}/r_{1})}}} . It is important to note that this is the log-mean radius.
The conduction through a spherical shell with internal radius, r 1 {\displaystyle r_{1}} , and external radius, r 2 {\displaystyle r_{2}} , can be calculated in a similar manner as for a cylindrical shell.
The surface area of the sphere is: A = 4 π r 2 . {\displaystyle A=4\pi r^{2}.}
Solving in a similar manner as for a cylindrical shell (see above) produces: Q ˙ = 4 k π T 1 − T 2 1 / r 1 − 1 / r 2 = 4 k π ( T 1 − T 2 ) r 1 r 2 r 2 − r 1 {\displaystyle {\dot {Q}}=4k\pi {\frac {T_{1}-T_{2}}{1/{r_{1}}-1/{r_{2}}}}=4k\pi {\frac {(T_{1}-T_{2})r_{1}r_{2}}{r_{2}-r_{1}}}}
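The spherical-shell result can be evaluated directly; the radii, conductivity and surface temperatures below are hypothetical example values.

```python
import math

# Conduction through a hollow spherical shell (hypothetical values).
r1 = 0.10   # inner radius, m (assumed)
r2 = 0.12   # outer radius, m (assumed)
k = 0.05    # shell conductivity, W/(m*K) (assumed)
T1 = 80.0   # inner surface temperature, deg C (assumed)
T2 = 20.0   # outer surface temperature, deg C (assumed)

Q_dot = 4.0 * math.pi * k * (T1 - T2) * r1 * r2 / (r2 - r1)   # W
R_sphere = (T1 - T2) / Q_dot        # equivalently (1/r1 - 1/r2) / (4*pi*k), K/W
print(f"Q = {Q_dot:.1f} W, R = {R_sphere:.2f} K/W")
```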
The heat transfer at an interface is considered a transient heat flow. To analyze this problem, the Biot number is important for understanding how the system behaves. The Biot number is determined by Bi = hL/k , where h is the heat transfer coefficient , measured in W/(m²⋅K) (equivalently J/(m²⋅s⋅K)); it represents the transfer of heat at an interface between two materials, its value is different at every interface, and it is an important concept in understanding heat flow at an interface. If the system has a Biot number of less than 0.1, the material behaves according to Newtonian cooling, i.e. with negligible temperature gradient within the body. [ 6 ] If the Biot number is greater than 0.1, there is a noticeable temperature gradient within the material, and a series solution is required to describe the temperature profile. The cooling equation is q = −h ΔT , which leads to the dimensionless form of the temperature profile as a function of time: (T − T_f)/(T_i − T_f) = exp( −hAt / (ρ C_p V) ). This equation shows that the temperature decreases exponentially over time, with the rate governed by the properties of the material and the heat transfer coefficient. [ 7 ]
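The following sketch applies the Biot-number test and the exponential (Newtonian) cooling expression above to a small steel sphere; the heat transfer coefficient, sphere radius and material properties are approximate or assumed values.

```python
import math

# Biot-number check and lumped (Newtonian) cooling for a small steel sphere.
# All property values are approximate or assumed.
h = 50.0        # heat transfer coefficient, W/(m^2*K) (assumed)
k = 45.0        # steel conductivity, W/(m*K) (approximate)
rho = 7800.0    # density, kg/m^3 (approximate)
c_p = 490.0     # specific heat, J/(kg*K) (approximate)
r = 0.01        # sphere radius, m (assumed)

V = 4.0 / 3.0 * math.pi * r**3
A = 4.0 * math.pi * r**2
L_c = V / A                       # characteristic length (= r/3 for a sphere)
Bi = h * L_c / k
print(f"Biot number = {Bi:.4f}")  # << 0.1, so lumped (Newtonian) cooling applies

T_i, T_f = 300.0, 25.0            # initial and fluid temperatures, deg C (assumed)
for t in (0.0, 60.0, 300.0):      # seconds
    theta = math.exp(-h * A * t / (rho * c_p * V))
    T = T_f + (T_i - T_f) * theta
    print(f"t = {t:5.0f} s  ->  T = {T:6.1f} degC")
```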
The series solution can be analyzed with a nomogram . A nomogram has a relative temperature as the y coordinate and the Fourier number, which is calculated by Fo = α t L 2 . {\displaystyle {\text{Fo}}={\frac {\alpha t}{L^{2}}}.}
The Biot number increases as the Fourier number decreases. There are five steps to determine a temperature profile in terms of time.
Splat cooling is a method for quenching small droplets of molten materials by rapid contact with a cold surface. The particles undergo a characteristic cooling process, with the heat profile at t = 0 {\displaystyle t=0} for initial temperature as the maximum at x = 0 {\displaystyle x=0} and T = 0 {\displaystyle T=0} at x = − ∞ {\displaystyle x=-\infty } and x = ∞ {\displaystyle x=\infty } , and the heat profile at t = ∞ {\displaystyle t=\infty } for − ∞ ≤ x ≤ ∞ {\displaystyle -\infty \leq x\leq \infty } as the boundary conditions. Splat cooling rapidly ends in a steady state temperature, and is similar in form to the Gaussian diffusion equation. The temperature profile, with respect to the position and time of this type of cooling, varies with: T ( x , t ) − T i = T i Δ X 2 π α t exp ( − x 2 4 α t ) {\displaystyle T(x,t)-T_{i}={\frac {T_{i}\Delta X}{2{\sqrt {\pi \alpha t}}}}\exp \left(-{\frac {x^{2}}{4\alpha t}}\right)}
Splat cooling is a fundamental concept that has been adapted for practical use in the form of thermal spraying . The thermal diffusivity coefficient, represented as α {\displaystyle \alpha } , can be written as α = k ρ C p {\displaystyle \alpha ={\frac {k}{\rho C_{p}}}} . This varies according to the material. [ 8 ] [ 9 ]
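As a quick illustration of thermal diffusivity, the sketch below evaluates α = k/(ρ C_p) for a few materials using rounded room-temperature property values, together with the corresponding diffusion width √(4αt) after one second.

```python
# Thermal diffusivity alpha = k / (rho * c_p) for representative materials.
# Property values are rounded room-temperature figures (approximate).
materials = {
    "copper":    (401.0, 8960.0, 385.0),   # k [W/(m*K)], rho [kg/m^3], c_p [J/(kg*K)]
    "aluminium": (237.0, 2700.0, 897.0),
    "glass":     (1.0,   2500.0, 840.0),
}

for name, (k, rho, c_p) in materials.items():
    alpha = k / (rho * c_p)                 # m^2/s
    width_1s = (4.0 * alpha * 1.0) ** 0.5   # Gaussian-like diffusion width after 1 s, m
    print(f"{name:10s} alpha = {alpha:.2e} m^2/s, "
          f"diffusion width after 1 s ~ {width_1s * 1000:.1f} mm")
```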
Metal quenching is a transient heat transfer process in terms of the time temperature transformation (TTT). It is possible to manipulate the cooling process to adjust the phase of a suitable material. For example, appropriate quenching of steel can convert a desirable proportion of its content of austenite to martensite , creating a very hard and strong product. To achieve this, it is necessary to quench at the "nose" (or eutectic ) of the TTT diagram. Since materials differ in their Biot numbers , the time it takes for the material to quench, or the Fourier number , varies in practice. [ 10 ] In steel, the quenching temperature range is generally from 600 °C to 200 °C. To control the quenching time and to select suitable quenching media, it is necessary to determine the Fourier number from the desired quenching time, the relative temperature drop, and the relevant Biot number. Usually, the correct figures are read from a standard nomogram . [ citation needed ] By calculating the heat transfer coefficient from this Biot number, one can find a liquid medium suitable for the application. [ 11 ]
One statement of the so-called zeroth law of thermodynamics is directly focused on the idea of conduction of heat. Bailyn (1994) writes that "the zeroth law may be stated: All diathermal walls are equivalent". [ 12 ]
A diathermal wall is a physical connection between two bodies that allows the passage of heat between them. Bailyn is referring to diathermal walls that exclusively connect two bodies, especially conductive walls.
This statement of the "zeroth law" belongs to an idealized theoretical discourse, and actual physical walls may have peculiarities that do not conform to its generality.
For example, the material of the wall must not undergo a phase transition , such as evaporation or fusion, at the temperature at which it must conduct heat. But when only thermal equilibrium is considered and time is not urgent, so that the conductivity of the material does not matter too much, one suitable heat conductor is as good as another. Conversely, another aspect of the zeroth law is that, subject again to suitable restrictions, a given diathermal wall is indifferent to the nature of the heat bath to which it is connected. For example, the glass bulb of a thermometer acts as a diathermal wall whether exposed to a gas or a liquid, provided that they do not corrode or melt it.
These differences are among the defining characteristics of heat transfer . In a sense, they are symmetries of heat transfer.
Thermal conduction property of any gas under standard conditions of pressure and temperature is a fixed quantity. This property of a known reference gas or known reference gas mixtures can, therefore, be used for certain sensory applications, such as the thermal conductivity analyzer.
The working of this instrument is based in principle on a Wheatstone bridge containing four filaments whose resistances are matched. Whenever a certain gas is passed over such a network of filaments, their resistance changes due to the altered thermal conductivity of the surrounding gas, thereby changing the net voltage output from the Wheatstone bridge. This voltage output is then correlated with a database to identify the gas sample.
The principle of thermal conductivity of gases can also be used to measure the concentration of a gas in a binary mixture of gases.
Working: if the same gas is present around all the Wheatstone bridge filaments, then the same temperature is maintained in all the filaments and hence the same resistances are also maintained, resulting in a balanced Wheatstone bridge. However, if a dissimilar gas sample (or gas mixture) is passed over one set of two filaments and the reference gas over the other set of two filaments, then the Wheatstone bridge becomes unbalanced, and the resulting net voltage output of the circuit can be correlated with a database to identify the constituents of the sample gas.
Using this technique, many unknown gas samples can be identified by comparing their thermal conductivity with that of a reference gas of known thermal conductivity. The most commonly used reference gas is nitrogen, as the thermal conductivities of most common gases (except hydrogen and helium) are similar to that of nitrogen. | https://en.wikipedia.org/wiki/Thermal_conduction
The thermal conductivity of a material is a measure of its ability to conduct heat . It is commonly denoted by k {\displaystyle k} , λ {\displaystyle \lambda } , or κ {\displaystyle \kappa } and is measured in W·m −1 ·K −1 .
Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals typically have high thermal conductivity and are very efficient at conducting heat, while the opposite is true for insulating materials such as mineral wool or Styrofoam . Correspondingly, materials of high thermal conductivity are widely used in heat sink applications, and materials of low thermal conductivity are used as thermal insulation . The reciprocal of thermal conductivity is called thermal resistivity .
The defining equation for thermal conductivity is q = − k ∇ T {\displaystyle \mathbf {q} =-k\nabla T} , where q {\displaystyle \mathbf {q} } is the heat flux , k {\displaystyle k} is the thermal conductivity, and ∇ T {\displaystyle \nabla T} is the temperature gradient . This is known as Fourier's law for heat conduction. Although commonly expressed as a scalar , the most general form of thermal conductivity is a second-rank tensor . However, the tensorial description only becomes necessary in materials which are anisotropic .
Consider a solid material placed between two environments of different temperatures. Let T 1 {\displaystyle T_{1}} be the temperature at x = 0 {\displaystyle x=0} and T 2 {\displaystyle T_{2}} be the temperature at x = L {\displaystyle x=L} , and suppose T 2 > T 1 {\displaystyle T_{2}>T_{1}} . An example of this scenario is a building on a cold winter day; the solid material in this case is the building wall, separating the cold outdoor environment from the warm indoor environment.
According to the second law of thermodynamics , heat will flow from the hot environment to the cold one as the temperature difference is equalized by diffusion. This is quantified in terms of a heat flux q , which gives the rate, per unit area, at which heat flows in a given direction (in this case the minus x-direction). In many materials, q is observed to be directly proportional to the temperature difference and inversely proportional to the separation distance L : [ 1 ] q = −k (T_2 − T_1) / L .
The constant of proportionality k {\displaystyle k} is the thermal conductivity; it is a physical property of the material. In the present scenario, since T 2 > T 1 {\displaystyle T_{2}>T_{1}} heat flows in the minus x-direction and q {\displaystyle q} is negative, which in turn means that k > 0 {\displaystyle k>0} . In general, k {\displaystyle k} is always defined to be positive. The same definition of k {\displaystyle k} can also be extended to gases and liquids, provided other modes of energy transport, such as convection and radiation , are eliminated or accounted for.
The preceding derivation assumes that the k {\displaystyle k} does not change significantly as temperature is varied from T 1 {\displaystyle T_{1}} to T 2 {\displaystyle T_{2}} . Cases in which the temperature variation of k {\displaystyle k} is non-negligible must be addressed using the more general definition of k {\displaystyle k} discussed below.
Thermal conduction is defined as the transport of energy due to random molecular motion across a temperature gradient. It is distinguished from energy transport by convection and molecular work in that it does not involve macroscopic flows or work-performing internal stresses.
Energy flow due to thermal conduction is classified as heat and is quantified by the vector q ( r , t ), which gives the heat flux at position r and time t . According to the second law of thermodynamics, heat flows from high to low temperature. Hence, it is reasonable to postulate that q ( r , t ) is proportional to the gradient of the temperature field T ( r , t ), i.e. q ( r , t ) = −k ∇T ( r , t ),
where the constant of proportionality, k > 0 {\displaystyle k>0} , is the thermal conductivity. This is called Fourier's law of heat conduction. Despite its name, it is not a law but a definition of thermal conductivity in terms of the independent physical quantities q ( r , t ) {\displaystyle \mathbf {q} (\mathbf {r} ,t)} and T ( r , t ) {\displaystyle T(\mathbf {r} ,t)} . [ 2 ] [ 3 ] As such, its usefulness depends on the ability to determine k {\displaystyle k} for a given material under given conditions. The constant k {\displaystyle k} itself usually depends on T ( r , t ) {\displaystyle T(\mathbf {r} ,t)} and thereby implicitly on space and time. An explicit space and time dependence could also occur if the material is inhomogeneous or changing with time. [ 4 ]
In some solids, thermal conduction is anisotropic , i.e. the heat flux is not always parallel to the temperature gradient. To account for such behavior, a tensorial form of Fourier's law must be used: q ( r , t ) = − κ ⋅ ∇T ( r , t ),
where κ is a symmetric, second-rank tensor called the thermal conductivity tensor. [ 5 ]
An implicit assumption in the above description is the presence of local thermodynamic equilibrium , which allows one to define a temperature field T ( r , t ) {\displaystyle T(\mathbf {r} ,t)} . This assumption could be violated in systems that are unable to attain local equilibrium, as might happen in the presence of strong nonequilibrium driving or long-ranged interactions.
In engineering practice, it is common to work in terms of quantities which are derivative to thermal conductivity and implicitly take into account design-specific features such as component dimensions.
For instance, thermal conductance is defined as the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k {\displaystyle k} , area A {\displaystyle A} and thickness L {\displaystyle L} , the conductance is k A / L {\displaystyle kA/L} , measured in W⋅K −1 . [ 6 ] The relationship between thermal conductivity and conductance is analogous to the relationship between electrical conductivity and electrical conductance .
Thermal resistance is the inverse of thermal conductance. [ 6 ] It is a convenient measure to use in multicomponent design since thermal resistances are additive when occurring in series . [ 7 ]
There is also a measure known as the heat transfer coefficient : the quantity of heat that passes per unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. [ 8 ] In ASTM C168-15, this area-independent quantity is referred to as the "thermal conductance". [ 9 ] The reciprocal of the heat transfer coefficient is thermal insulance . In summary, for a plate of thermal conductivity k , area A and thickness L , the thermal conductance is kA/L (measured in W⋅K⁻¹), the thermal resistance is L/(kA) (K⋅W⁻¹), the heat transfer coefficient is k/L (W⋅K⁻¹⋅m⁻²), and the thermal insulance is L/k (K⋅m²⋅W⁻¹).
The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow. [ 10 ]
An additional term, thermal transmittance , quantifies the thermal conductance of a structure along with heat transfer due to convection and radiation . It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance . The term U-value is also used.
Finally, thermal diffusivity α {\displaystyle \alpha } combines thermal conductivity with density and specific heat : [ 11 ]
As such, it quantifies the thermal inertia of a material, i.e. the relative difficulty in heating a material to a given temperature using heat sources applied at the boundary. [ 12 ]
In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin ( W /( m ⋅ K )). Some papers report in watts per centimeter-kelvin [W/(cm⋅K)].
However, physicists use other convenient units as well, e.g., in cgs units , where esu/(cm-sec-K) is used. [ 13 ] The Lorentz number , defined as L=κ/σT is a quantity independent of the carrier density and the scattering mechanism. Its value for a gas of non-interacting electrons (typical carriers in good metallic conductors) is 2.72×10 −13 esu/K 2 , or equivalently, 2.44×10 −8 Watt-Ohm/K 2 .
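For example, the Wiedemann–Franz relation κ = LσT gives a quick estimate of a metal's thermal conductivity from its electrical conductivity; the copper conductivity below is an approximate room-temperature figure.

```python
# Wiedemann-Franz estimate of metallic thermal conductivity: kappa = L * sigma * T.
L = 2.44e-8          # Lorenz number, W*Ohm/K^2
sigma_cu = 5.96e7    # electrical conductivity of copper, S/m (approximate)
T = 300.0            # temperature, K

kappa = L * sigma_cu * T
print(f"Estimated kappa for copper: {kappa:.0f} W/(m*K)")  # ~436, vs ~400 measured
```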
In imperial units , thermal conductivity is measured in BTU /( h ⋅ ft ⋅ °F ). [ note 1 ] [ 14 ]
The dimension of thermal conductivity is M 1 L 1 T −3 Θ −1 , expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ).
Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of measures such as the R-value (resistance) and the U-value (transmittance or conductance). Although related to the thermal conductivity of a material used in an insulation product or assembly, R- and U-values are measured per unit area, and depend on the specified thickness of the product or assembly. [ note 2 ]
Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry.
There are several ways to measure thermal conductivity; each is suitable for a limited range of materials. Broadly speaking, there are two categories of measurement techniques: steady-state and transient . Steady-state techniques infer the thermal conductivity from measurements on the state of a material once a steady-state temperature profile has been reached, whereas transient techniques operate on the instantaneous state of a system during the approach to steady state. Lacking an explicit time component, steady-state techniques do not require complicated signal analysis (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed, and the time required to reach steady state precludes rapid measurement.
In comparison with solid materials, the thermal properties of fluids are more difficult to study experimentally. This is because in addition to thermal conduction, convective and radiative energy transport are usually present unless measures are taken to limit these processes. The formation of an insulating boundary layer can also result in an apparent reduction in the thermal conductivity. [ 15 ] [ 16 ]
The thermal conductivities of common substances span at least four orders of magnitude. [ 17 ] Gases generally have low thermal conductivity, and pure metals have high thermal conductivity. For example, under standard conditions the thermal conductivity of copper is over 10 000 times that of air.
Of all materials, allotropes of carbon, such as graphite and diamond , are usually credited with having the highest thermal conductivities at room temperature. [ 18 ] The thermal conductivity of natural diamond at room temperature is several times higher than that of a highly conductive metal such as copper (although the precise value varies depending on the diamond type ). [ 19 ]
Thermal conductivities of selected substances are tabulated below; an expanded list can be found in the list of thermal conductivities . These values are illustrative estimates only, as they do not account for measurement uncertainties or variability in material definitions.
The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law , thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins ) times electrical conductivity. In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply. [ 23 ] In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K.
On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations ( phonons ). Except for high-quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature , thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects. [ 23 ]
When a material undergoes a phase change (e.g. from solid to liquid), the thermal conductivity may change abruptly. For instance, when ice melts to form liquid water at 0 °C, the thermal conductivity changes from 2.18 W/(m⋅K) to 0.56 W/(m⋅K). [ 24 ]
Even more dramatically, the thermal conductivity of a fluid diverges in the vicinity of the vapor-liquid critical point . [ 25 ]
Some substances, such as non- cubic crystals , can exhibit different thermal conductivities along different crystal axes. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the c axis and 32 W/(m⋅K) along the a axis. [ 26 ] Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing , laminated materials, cables, the materials used for the Space Shuttle thermal protection system , and fiber-reinforced composite structures. [ 27 ]
When anisotropy is present, the direction of heat flow may differ from the direction of the thermal gradient.
In metals, thermal conductivity is approximately correlated with electrical conductivity according to the Wiedemann–Franz law , as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond , which is an electrical insulator but conducts heat via phonons due to its orderly array of atoms.
The influence of magnetic fields on thermal conductivity is known as the thermal Hall effect or Righi–Leduc effect.
In the absence of convection, air and other gases are good insulators. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which obstruct heat conduction pathways. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel , as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by trapping air in pores, pockets, or voids.
Low-density gases, such as hydrogen and helium , typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. An exception is sulfur hexafluoride , a dense gas that has a relatively high thermal conductivity due to its high heat capacity . Argon and krypton , gases denser than air, are often used in insulated glazing (double paned windows) to improve their insulation characteristics.
The thermal conductivity through bulk materials in porous or granular form is governed by the type of gas in the gaseous phase, and its pressure. [ 28 ] At low pressures, the thermal conductivity of a gaseous phase is reduced, with this behaviour governed by the Knudsen number , defined as K n = l / d {\displaystyle K_{n}=l/d} , where l {\displaystyle l} is the mean free path of gas molecules and d {\displaystyle d} is the typical gap size of the space filled by the gas. In a granular material d {\displaystyle d} corresponds to the characteristic size of the gaseous phase in the pores or intergranular spaces. [ 28 ]
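A short sketch of the Knudsen-number estimate follows, using the kinetic-theory mean free path and a hypothetical pore size; the effective molecular diameter of air is an approximate value.

```python
import math

# Knudsen number Kn = l / d for a gas-filled pore (hypothetical pore size).
K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_free_path(T, p, d_molecule):
    """Kinetic-theory mean free path l = k_B T / (sqrt(2) * pi * d^2 * p)."""
    return K_B * T / (math.sqrt(2.0) * math.pi * d_molecule**2 * p)

T = 300.0            # temperature, K
p = 101325.0         # pressure, Pa (atmospheric)
d_air = 3.7e-10      # effective molecular diameter of air, m (approximate)
pore = 1e-6          # pore size, m (assumed)

l = mean_free_path(T, p, d_air)
Kn = l / pore
print(f"mean free path = {l * 1e9:.0f} nm, Kn = {Kn:.3f}")
```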
The thermal conductivity of a crystal can depend strongly on isotopic purity, assuming other lattice defects are negligible. A notable example is diamond: at a temperature of around 100 K the thermal conductivity increases from 10,000 W · m −1 · K −1 for natural type IIa diamond (98.9% 12 C ), to 41,000 for 99.9% enriched synthetic diamond. A value of 200,000 is predicted for 99.999% 12 C at 80 K, assuming an otherwise pure crystal. [ 29 ] The thermal conductivity of 99% isotopically enriched cubic boron nitride is ~ 1400 W · m −1 · K −1 , [ 30 ] which is 90% higher than that of natural boron nitride .
The molecular mechanisms of thermal conduction vary among different materials, and in general depend on details of the microscopic structure and molecular interactions. As such, thermal conductivity is difficult to predict from first-principles. Any expressions for thermal conductivity which are exact and general, e.g. the Green-Kubo relations , are difficult to apply in practice, typically consisting of averages over multiparticle correlation functions . [ 31 ] A notable exception is a monatomic dilute gas, for which a well-developed theory exists expressing thermal conductivity accurately and explicitly in terms of molecular parameters.
In a gas, thermal conduction is mediated by discrete molecular collisions. In a simplified picture of a solid, thermal conduction occurs by two mechanisms: 1) the migration of free electrons and 2) lattice vibrations ( phonons ). The first mechanism dominates in pure metals and the second in non-metallic solids. In liquids, by contrast, the precise microscopic mechanisms of thermal conduction are poorly understood. [ 32 ]
In a simplified model of a dilute monatomic gas, molecules are modeled as rigid spheres which are in constant motion, colliding elastically with each other and with the walls of their container. Consider such a gas at temperature T {\displaystyle T} and with density ρ {\displaystyle \rho } , specific heat c v {\displaystyle c_{v}} and molecular mass m {\displaystyle m} . Under these assumptions, an elementary calculation yields for the thermal conductivity
where β {\displaystyle \beta } is a numerical constant of order 1 {\displaystyle 1} , k B {\displaystyle k_{\text{B}}} is the Boltzmann constant , and λ {\displaystyle \lambda } is the mean free path , which measures the average distance a molecule travels between collisions. [ 33 ] Since λ {\displaystyle \lambda } is inversely proportional to density, this equation predicts that thermal conductivity is independent of density for fixed temperature. The explanation is that increasing density increases the number of molecules which carry energy but decreases the average distance λ {\displaystyle \lambda } a molecule can travel before transferring its energy to a different molecule: these two effects cancel out. For most gases, this prediction agrees well with experiments at pressures up to about 10 atmospheres . [ 34 ] At higher densities, the simplifying assumption that energy is only transported by the translational motion of particles no longer holds, and the theory must be modified to account for the transfer of energy across a finite distance at the moment of collision between particles, as well as the locally non-uniform density in a high density gas . This modification has been carried out, yielding revised Enskog theory , which predicts a density dependence of the thermal conductivity in dense gases. [ 35 ]
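Since the exact elementary expression (with its constant β) is not reproduced above, the sketch below uses the common textbook estimate k ≈ (1/3) ρ c_v v̄ λ for nitrogen at room conditions; all property values are approximate, and the result only illustrates the order of magnitude.

```python
import math

# Rough kinetic-theory estimate k ~ (1/3) * rho * c_v * v_mean * lambda for a dilute gas.
# This is the common textbook form, not the article's exact expression.
K_B = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23    # Avogadro constant, 1/mol

# Approximate values for nitrogen at 300 K and 1 atm:
M = 0.028              # molar mass, kg/mol
T = 300.0              # temperature, K
p = 101325.0           # pressure, Pa
d = 3.7e-10            # molecular diameter, m (approximate)
c_v = 743.0            # specific heat at constant volume, J/(kg*K) (approximate)

m = M / N_A                                          # molecular mass, kg
n = p / (K_B * T)                                    # number density, 1/m^3
rho = n * m                                          # mass density, kg/m^3
v_mean = math.sqrt(8.0 * K_B * T / (math.pi * m))    # mean molecular speed, m/s
mfp = 1.0 / (math.sqrt(2.0) * math.pi * d**2 * n)    # mean free path, m

k_est = rho * c_v * v_mean * mfp / 3.0
# Prints roughly 0.009 W/(m*K): the right order of magnitude, but below the
# measured ~0.026 W/(m*K) for nitrogen; the elementary estimate is crude.
print(f"Estimated k for N2: {k_est:.4f} W/(m*K)")
```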
Typically, experiments show a more rapid increase with temperature than $k \propto \sqrt{T}$ (here, $\lambda$ is independent of $T$). This failure of the elementary theory can be traced to the oversimplified hard-sphere model, which both ignores the "softness" of real molecules and neglects the attractive forces present between real molecules, such as dispersion forces.
To incorporate more complex interparticle interactions, a systematic approach is necessary. One such approach is provided by Chapman–Enskog theory, which derives explicit expressions for thermal conductivity starting from the Boltzmann equation. The Boltzmann equation, in turn, provides a statistical description of a dilute gas for generic interparticle interactions. For a monatomic gas, expressions for $k$ derived in this way take the form

$k = \frac{25}{32} \frac{\sqrt{\pi m k_{\text{B}} T}}{\pi \sigma^2 \Omega(T)} \frac{c_v}{m},$
where $\sigma$ is an effective particle diameter and $\Omega(T)$ is a function of temperature whose explicit form depends on the interparticle interaction law. [36][34] For rigid elastic spheres, $\Omega(T)$ is independent of $T$ and very close to $1$. More complex interaction laws introduce a weak temperature dependence. The precise nature of the dependence is not always easy to discern, however, as $\Omega(T)$ is defined as a multi-dimensional integral which may not be expressible in terms of elementary functions and must instead be evaluated numerically. However, for particles interacting through a Mie potential (a generalisation of the Lennard-Jones potential), highly accurate correlations for $\Omega(T)$ in terms of reduced units have been developed. [37]
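A minimal numerical sketch of the hard-sphere limit ($\Omega = 1$) is shown below; the value $\sigma \approx 3.4$ Å for argon is an assumed, illustrative diameter, not a fitted parameter from the cited references.

```python
import math

KB = 1.380649e-23  # J/K

def k_chapman_enskog(T, m, sigma, omega=1.0):
    """First-order Chapman-Enskog thermal conductivity of a monatomic gas,
    k = (25/32) * sqrt(pi*m*kB*T) / (pi*sigma^2*Omega(T)) * (c_v/m),
    with c_v = (3/2)*kB per molecule. omega=1 is the rigid-sphere limit."""
    c_v_per_mass = 1.5 * KB / m
    return (25.0 / 32.0) * math.sqrt(math.pi * m * KB * T) \
           / (math.pi * sigma**2 * omega) * c_v_per_mass

# Argon with a Lennard-Jones-like diameter (sigma ~ 3.4 Angstrom):
print(k_chapman_enskog(300.0, 6.63e-26, 3.4e-10))  # ~0.02 W/(m K); measured ~0.018
```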
An alternate, equivalent way to present the result is in terms of the gas viscosity $\mu$, which can also be calculated in the Chapman–Enskog approach:

$k = f \mu c_v,$
where $f$ is a numerical factor which in general depends on the molecular model. For smooth spherically symmetric molecules, however, $f$ is very close to $2.5$, not deviating by more than $1\%$ for a variety of interparticle force laws. [38] Since $k$, $\mu$, and $c_v$ are each well-defined physical quantities which can be measured independently of each other, this expression provides a convenient test of the theory. For monatomic gases, such as the noble gases, the agreement with experiment is fairly good. [39]
For gases whose molecules are not spherically symmetric, the expression $k = f \mu c_v$ still holds. In contrast with spherically symmetric molecules, however, $f$ varies significantly depending on the particular form of the interparticle interactions: this is a result of the energy exchanges between the internal and translational degrees of freedom of the molecules. An explicit treatment of this effect is difficult in the Chapman–Enskog approach. Alternately, the approximate expression $f = \tfrac{1}{4}(9\gamma - 5)$ was suggested by Arnold Eucken, where $\gamma$ is the heat capacity ratio of the gas. [38][40]
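Both statements are easy to check against approximate room-temperature handbook values; the numbers below are illustrative, not authoritative.

```python
# Check k = f * mu * c_v using approximate room-temperature data.
gases = {
    # name: (k [W/(m K)], mu [Pa s], c_v [J/(kg K)], gamma)
    "argon (monatomic)":   (0.0177, 2.23e-5, 312.0, 5.0 / 3.0),
    "nitrogen (diatomic)": (0.0259, 1.78e-5, 743.0, 7.0 / 5.0),
}

for name, (k, mu, c_v, gamma) in gases.items():
    f_measured = k / (mu * c_v)
    f_eucken = 0.25 * (9.0 * gamma - 5.0)   # Eucken's approximation
    print(f"{name}: f = {f_measured:.2f}  (Eucken predicts {f_eucken:.2f})")
```

For argon this gives f close to 2.5, as the theory predicts for spherically symmetric molecules, while for nitrogen both the measured ratio and the Eucken estimate come out near 1.9.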
The entirety of this section assumes the mean free path $\lambda$ is small compared with macroscopic (system) dimensions. In extremely dilute gases this assumption fails, and thermal conduction is described instead by an apparent thermal conductivity which decreases with density. Ultimately, as the density goes to $0$ the system approaches a vacuum, and thermal conduction ceases entirely.
The exact mechanisms of thermal conduction are poorly understood in liquids: there is no molecular picture which is both simple and accurate. An example of a simple but very rough theory is that of Bridgman, in which a liquid is ascribed a local molecular structure similar to that of a solid, i.e. with molecules located approximately on a lattice. Elementary calculations then lead to the expression

$k = 3 (N_{\text{A}} / V)^{2/3} k_{\text{B}} v_{\text{s}},$
where $N_{\text{A}}$ is the Avogadro constant, $V$ is the volume of a mole of liquid, and $v_{\text{s}}$ is the speed of sound in the liquid. This is commonly called Bridgman's equation. [41]
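For water, Bridgman's equation can be evaluated directly; the molar volume and speed of sound used below are approximate room-temperature values.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def k_bridgman(V_molar, v_sound):
    """Bridgman's equation: k = 3 * (N_A / V)^(2/3) * kB * v_s."""
    return 3.0 * (N_A / V_molar) ** (2.0 / 3.0) * KB * v_sound

# Water near room temperature: molar volume ~18 cm^3/mol, sound speed ~1480 m/s
print(k_bridgman(18e-6, 1480.0))  # ~0.64 W/(m K)
```

The result, roughly 0.64 W/(m·K), is close to the measured value of about 0.6 W/(m·K), which is typical of the rough but useful accuracy of the model.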
For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity, which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections, which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity $c$, which, in this case, is proportional to $T$. So

$k = k_0 T,$
with $k_0$ a constant. For pure metals $k_0$ is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of impurities is very high, so $l$ and, consequently, $k$ are small. Therefore, alloys such as stainless steel can be used for thermal insulation.
Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (i.e., phonons ). This transport mechanism is theorized to be limited by the elastic scattering of acoustic phonons at lattice defects. This has been confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were found to be limited by "internal boundary scattering" to length scales of 10 −2 cm to 10 −3 cm. [ 42 ] [ 43 ]
The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If $V_g$ is the group velocity of a phonon wave packet, then the relaxation length $l$ is defined as

$l = V_g t,$
where $t$ is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, [44] $V_{\text{long}}$ is much greater than $V_{\text{trans}}$, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons. [42][45]
Regarding the dependence of wave velocity on wavelength or frequency ( dispersion ), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering . This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering . [ 46 ] [ 47 ] [ 48 ] [ 49 ]
Phonons in the acoustical branch dominate phonon heat conduction, as they have greater energy dispersion and therefore a greater distribution of phonon velocities. Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity $\lambda_L$ ($\kappa_L$) is small. [50]
Each phonon mode can be split into one longitudinal and two transverse polarization branches. By extrapolating the phenomenology of lattice points to the unit cells, it is seen that the total number of degrees of freedom is $3pq$, where $p$ is the number of primitive cells with $q$ atoms per unit cell. Of these, only $3p$ are associated with the acoustic modes; the remaining $3p(q-1)$ are accommodated through the optical branches. This implies that structures with larger $p$ and $q$ contain a greater number of optical modes and a reduced $\lambda_L$.
From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms per primitive unit cell), decreases $\lambda_L$. [51] This was done by assuming that the relaxation time $\tau$ decreases with increasing number of atoms in the unit cell and then scaling the parameters of the high-temperature expression for thermal conductivity accordingly. [50]
Describing anharmonic effects is complicated because an exact treatment as in the harmonic case is not possible, and phonons are no longer exact eigensolutions to the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as the phonon decay. The two most important anharmonic effects are the thermal expansion and the phonon thermal conductivity.
Only when the phonon number $\langle n\rangle$ deviates from the equilibrium value $\langle n\rangle^0$ can a thermal current arise, as stated in the following expression

$Q_x = \frac{1}{V} \sum_{q,j} \hbar\omega \left( \langle n\rangle - \langle n\rangle^0 \right) v_x(q,j),$
where $v$ is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of $\langle n\rangle$ in a particular region: the number of phonons that diffuse into the region from neighboring regions differs from the number that diffuse out, or phonons decay inside the region into other phonons. A special form of the Boltzmann equation,

$\frac{d\langle n\rangle}{dt} = \left(\frac{\partial \langle n\rangle}{\partial t}\right)_{\text{diff}} + \left(\frac{\partial \langle n\rangle}{\partial t}\right)_{\text{decay}},$
states this. When steady-state conditions are assumed, the total time derivative of the phonon number is zero, because the temperature is constant in time and therefore the phonon number also stays constant. Time variation due to phonon decay is described with a relaxation time ($\tau$) approximation,

$\left(\frac{\partial \langle n\rangle}{\partial t}\right)_{\text{decay}} = -\frac{\langle n\rangle - \langle n\rangle^0}{\tau},$
which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. When steady-state conditions and local thermal equilibrium are assumed, we obtain the following equation

$\left(\frac{\partial \langle n\rangle}{\partial t}\right)_{\text{diff}} = -v_x \frac{\partial \langle n\rangle^0}{\partial T} \frac{\partial T}{\partial x}.$
Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity $\lambda_L$ can be determined. The temperature dependence of $\lambda_L$ originates from a variety of processes, whose significance for $\lambda_L$ depends on the temperature range of interest. The mean free path is one factor that determines the temperature dependence of $\lambda_L$, as stated in the following equation

$\lambda_L = \frac{1}{3V} \sum_{q,j} v(q,j)\, \Lambda(q,j)\, \frac{\partial}{\partial T} \epsilon\big(\omega(q,j), T\big),$
where $\Lambda$ is the phonon mean free path and $\frac{\partial}{\partial T}\epsilon$ denotes the heat capacity. This equation is a result of combining the four previous equations with each other and knowing that $\langle v_x^2\rangle = \tfrac{1}{3}v^2$ for cubic or isotropic systems and $\Lambda = v\tau$. [52]
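In the gray (single effective mode) approximation, the sum collapses to the familiar kinetic estimate $\lambda_L = \tfrac{1}{3} C v \Lambda$. The sketch below evaluates this estimate with assumed, silicon-like values chosen for illustration only.

```python
def k_phonon(C_vol, v_sound, mfp):
    """Kinetic (gray) estimate of the lattice thermal conductivity,
    k = (1/3) * C * v * Lambda, with C the volumetric heat capacity
    (J/(m^3 K)), v an average phonon (sound) velocity (m/s) and
    Lambda an effective phonon mean free path (m)."""
    return C_vol * v_sound * mfp / 3.0

# Silicon-like numbers at room temperature (illustrative):
# C ~ 1.66e6 J/(m^3 K), v ~ 6400 m/s, Lambda ~ 40 nm
print(k_phonon(1.66e6, 6400.0, 40e-9))  # ~140 W/(m K), close to bulk Si
```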
At low temperatures (< 10 K) the anharmonic interaction does not influence the mean free path, and therefore the thermal resistivity is determined only by processes for which q-conservation does not hold. These processes include the scattering of phonons by crystal defects, or scattering from the surface of the crystal in the case of a high-quality single crystal. Therefore, thermal conductance depends on the external dimensions of the crystal and the quality of the surface, and the temperature dependence of $\lambda_L$ is determined by the specific heat and is therefore proportional to $T^3$. [52]
Phonon quasimomentum is defined as $\hbar q$ and differs from normal momentum because it is only defined within an arbitrary reciprocal lattice vector. At higher temperatures (10 K < $T$ < $\Theta$), the conservation of energy $\hbar\omega_1 = \hbar\omega_2 + \hbar\omega_3$ and quasimomentum $\mathbf{q}_1 = \mathbf{q}_2 + \mathbf{q}_3 + \mathbf{G}$, where $\mathbf{q}_1$ is the wave vector of the incident phonon and $\mathbf{q}_2$, $\mathbf{q}_3$ are the wave vectors of the resultant phonons, may also involve a reciprocal lattice vector $\mathbf{G}$, complicating the energy transport process. These processes can also reverse the direction of energy transport.
Therefore, these processes are also known as Umklapp (U) processes; they can only occur when phonons with sufficiently large $q$-vectors are excited, because unless the sum of $\mathbf{q}_2$ and $\mathbf{q}_3$ points outside the Brillouin zone, the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon having energy $E$ is given by the Boltzmann distribution $P \propto e^{-E/kT}$. For a U-process to occur, the decaying phonon must have a wave vector $\mathbf{q}_1$ that is roughly half the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved.
Therefore, these phonons have to possess an energy of $\sim k\Theta/2$, which is a significant fraction of the Debye energy needed to generate new phonons. The probability for this is proportional to $e^{-\Theta/bT}$, with $b = 2$. The temperature dependence of the mean free path therefore has an exponential form $e^{\Theta/bT}$. The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport, resulting in a finite $\lambda_L$, [50] as it means that momentum is not conserved. Only momentum non-conserving processes can cause thermal resistance. [52]
At high temperatures ($T > \Theta$), the mean free path and therefore $\lambda_L$ have a $T^{-1}$ temperature dependence, to which one arrives from the formula $e^{\Theta/bT}$ by making the rough approximation $e^x \propto x$ for $x < 1$ and writing $x = \Theta/bT$. This dependency is known as Eucken's law and originates from the temperature dependence of the probability for a U-process to occur. [50][52]
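The crossover from exponential growth at low temperature to roughly 1/T behaviour at high temperature can be visualized by tabulating the Umklapp-limited mean free path; the Debye temperature below is an assumed, illustrative value and the prefactor is normalized away.

```python
import math

def mfp_umklapp(T, theta_debye, b=2.0, l0=1.0):
    """Relative Umklapp-limited mean free path, l ~ l0 * exp(theta/(b*T))."""
    return l0 * math.exp(theta_debye / (b * T))

theta = 640.0  # Debye temperature of silicon, K (illustrative)
for T in (20.0, 50.0, 100.0, 300.0, 600.0, 1200.0):
    l = mfp_umklapp(T, theta)
    # at T >> theta, exp(theta/(b*T)) ~ 1 + theta/(b*T): roughly 1/T variation
    print(f"T = {T:6.0f} K  ->  l/l0 = {l:12.3f}")
```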
Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation, in which phonon scattering is a limiting factor. Another approach is to use analytic models, molecular dynamics, or Monte Carlo based methods to describe thermal conductivity in solids.
Short-wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid- and long-wavelength phonons are less affected. Mid- and long-wavelength phonons carry a significant fraction of the heat, so to further reduce the lattice thermal conductivity one has to introduce structures that scatter these phonons. This is achieved by introducing an interface scattering mechanism, which requires structures whose characteristic length is longer than that of an impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures.
Because thermal conductivity depends continuously on quantities like temperature and material composition, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available under the physical conditions of interest. This capability is important in thermophysical simulations, where quantities like temperature and pressure vary continuously with space and time, and may encompass extreme conditions inaccessible to direct measurement. [ 53 ]
For the simplest fluids, such as monatomic gases and their mixtures at low to moderate densities, ab initio quantum mechanical computations can accurately predict thermal conductivity in terms of fundamental atomic properties—that is, without reference to existing measurements of thermal conductivity or other transport properties. [ 54 ] This method uses Chapman-Enskog theory or Revised Enskog Theory to evaluate the thermal conductivity, taking fundamental intermolecular potentials as input, which are computed ab initio from a quantum mechanical description.
For most fluids, such high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing thermal conductivity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that material. Reference correlations have been published for many pure materials; examples are carbon dioxide, ammonia, and benzene. [55][56][57] Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases.
Thermophysical modeling software often relies on reference correlations for predicting thermal conductivity at user-specified temperature and pressure. Examples of such software are REFPROP [58] (proprietary) and CoolProp [59] (open-source).
Thermal conductivity can also be computed using the Green-Kubo relations , which express transport coefficients in terms of the statistics of molecular trajectories. [ 60 ] The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics . An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules. [ 61 ]
In a 1780 letter to Benjamin Franklin , Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities: [ 62 ]
You remembre you gave me a wire of five metals all drawn thro the same hole Viz. one, of gould, one of silver, copper steel and iron. I supplyed here the two others Viz. the one of tin the other of lead. I fixed these seven wires into a wooden frame at an equal distance of one an other ... I dipt the seven wires into this melted wax as deep as the wooden frame ... By taking them out they were cov[e]red with a coat of wax ... When I found that this crust was there about of an equal thikness upon all the wires, I placed them all in a glased earthen vessel full of olive oil heated to some degrees under boiling, taking care that each wire was dipt just as far in the oil as the other ... Now, as they had been all dipt alike at the same time in the same oil, it must follow, that the wire, upon which the wax had been melted the highest, had been the best conductor of heat. ... Silver conducted heat far the best of all other metals, next to this was copper, then gold, tin, iron, steel, Lead.
The thermal conductivity detector ( TCD ), also known as a katharometer , is a bulk property detector and a chemical specific detector commonly used in gas chromatography. [ 1 ] This detector senses changes in the thermal conductivity of the column eluent and compares it to a reference flow of carrier gas. Since most compounds have a thermal conductivity much less than that of the common carrier gases of helium or hydrogen, when an analyte elutes from the column the effluent thermal conductivity is reduced, and a detectable signal is produced.
The TCD consists of an electrically heated filament in a temperature-controlled cell. Under normal conditions there is a stable heat flow from the filament to the detector body. When an analyte elutes and the thermal conductivity of the column effluent is reduced, the filament heats up and changes resistance. This resistance change is often sensed by a Wheatstone bridge circuit which produces a measurable voltage change. The column effluent flows over one of the resistors while the reference flow is over a second resistor in the four-resistor circuit.
A schematic of a classic thermal conductivity detector design utilizing a Wheatstone bridge circuit is shown. The reference flow across resistor 4 of the circuit compensates for drift due to flow or temperature fluctuations. Changes in the thermal conductivity of the column effluent flow across resistor 3 will result in a temperature change of the resistor and therefore a resistance change which can be measured as a signal.
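A rough numerical sketch of such a bridge is given below; the resistor values, supply voltage and wiring are assumptions chosen only to show the order of magnitude of the signal, not a description of any particular instrument.

```python
def bridge_output(v_supply, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge built from two voltage
    dividers (r1-r2 and r4-r3); one plausible wiring of the four-resistor
    TCD circuit described above."""
    return v_supply * (r3 / (r3 + r4) - r2 / (r1 + r2))

R = 100.0                               # nominal filament resistance, ohms
print(bridge_output(5.0, R, R, R, R))   # balanced bridge: 0 V

# An analyte band lowers the effluent's thermal conductivity, the sample
# filament (r3) runs hotter, and its resistance rises by, say, 0.5%:
v = bridge_output(5.0, R, R, R * 1.005, R)
print(f"{v * 1000:.2f} mV")             # small but measurable offset
```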
Since all compounds, organic and inorganic, have a thermal conductivity different from that of helium or hydrogen, virtually all compounds can be detected; for this reason the TCD is often called a universal detector.
Used after a separation column in a gas chromatograph, a TCD measures the concentration of each compound contained in the sample. The TCD signal changes when a compound passes through the detector, producing a peak on a baseline. The position of the peak on the baseline reflects the compound type, while the peak area (computed by integrating the TCD signal over time) is representative of the compound's concentration. A sample whose compound concentrations are known is used to calibrate the TCD: concentrations are related to peak areas through a calibration curve.
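The area-to-concentration step can be sketched as follows, assuming a simple linear calibration (area proportional to concentration); the synthetic peak and the calibration numbers are illustrative only.

```python
import numpy as np

# Integrate a detector peak above its baseline and convert the area to a
# concentration through a linear calibration (area = slope * concentration).
t = np.linspace(0.0, 60.0, 601)                       # time, s
baseline = 0.02                                        # arbitrary signal units
peak = 1.5 * np.exp(-0.5 * ((t - 30.0) / 2.0) ** 2)    # synthetic TCD peak
signal = baseline + peak

# Trapezoidal integration of the baseline-corrected signal:
y = signal - baseline
area = float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(t)))

# Calibration from a standard of known concentration (hypothetical values):
area_std, conc_std = 7.52, 2.0                         # e.g. 2.0 vol% CO2
conc = conc_std * area / area_std
print(f"peak area = {area:.2f}, estimated concentration = {conc:.2f} vol%")
```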
The TCD is a good general-purpose detector for initial investigations with an unknown sample, in contrast to the FID, which reacts only to combustible compounds (e.g., hydrocarbons). Moreover, the TCD is a non-specific and non-destructive technique. The TCD is also used in the analysis of permanent gases (argon, oxygen, nitrogen, carbon dioxide) because it responds to all these substances, unlike the FID, which cannot detect compounds that do not contain carbon–hydrogen bonds.
In terms of detection limit, both TCD and FID reach low concentration levels (below ppm or even ppb). [2]
Both require pressurized carrier gas (typically H₂ for FID, He for TCD), but due to the risk associated with storing H₂ (high flammability; see Hydrogen safety), a TCD with He should be considered in locations where safety is crucial.
One thing to be aware of when operating a TCD is that gas flow must never be interrupted when the filament is hot, as doing so may cause the filament to burn out. While the filament of a TCD is generally chemically passivated to prevent it from reacting with oxygen, the passivation layer can be attacked by halogenated compounds, so these should be avoided wherever possible. [ 3 ]
If analyzing for hydrogen, the peak will appear as negative when helium is used as the reference gas. This problem can be avoided if another reference gas is used, for example argon or nitrogen , although this will significantly reduce the detector's sensitivity towards any compounds other than hydrogen.
A katharometer functions by means of two parallel tubes, both containing gas and heating coils. The gases are compared via the rate of heat loss from the heating coils into the gas. The coils are arranged in a bridge circuit so that resistance changes due to unequal cooling can be measured. One channel normally holds a reference gas, and the mixture to be tested is passed through the other channel.
Katharometers are used medically in lung function testing equipment and in gas chromatography. Results are slower to obtain than with a mass spectrometer, but the device is inexpensive and offers good accuracy when the gases in question are known and only their proportions must be determined.
Monitoring of hydrogen purity in hydrogen-cooled turbogenerators .
Detection of helium loss from the helium vessel of an MRI superconducting magnet.
Used within the brewing industry to quantify the amount of carbon dioxide within beer samples.
Used within the energy industry to quantify the amount (calorific value) of methane within biogas samples.
Used within the food and drink industry to quantify and/or validate food packaging gases.
Used within the oil and gas industry to quantify the percentage of hydrocarbons when drilling into a formation.
There are a number of possible ways to measure thermal conductivity , each of them suitable for a limited range of materials, depending on the thermal properties and the medium temperature. Three classes of methods exist to measure the thermal conductivity of a sample: steady-state, time-domain, and frequency-domain methods.
In general, steady-state techniques perform a measurement when the temperature of the material measured does not change with time. This makes the signal analysis straightforward (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed.
Steady-state methods, in general, work by applying a known heat flux, $\dot{Q}$ (W/m²), to a sample of surface area $A$ (m²) and thickness $x$ (m); once the sample's steady-state temperature is reached, the temperature difference $\Delta T$ across the thickness of the sample is measured. After assuming one-dimensional heat flow and an isotropic medium, Fourier's law is then used to calculate the measured thermal conductivity $k$:

$k = \frac{\dot{Q}\, x}{\Delta T}.$
Major sources of error in steady-state measurements include radiative and convective heat losses in the setup, as well as errors in the thickness of the sample propagating to the thermal conductivity.
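The data reduction itself is a one-line application of Fourier's law; the sample dimensions and heat flux below are illustrative.

```python
def k_steady_state(q_flux, thickness, delta_T):
    """Fourier's law for one-dimensional steady-state conduction:
    k = q'' * x / dT, with q'' the applied heat flux in W/m^2,
    x the sample thickness in m and dT the temperature drop in K."""
    return q_flux * thickness / delta_T

# Example: 500 W/m^2 through a 20 mm sample with a 5 K temperature drop
print(k_steady_state(500.0, 0.020, 5.0))  # 2.0 W/(m K)
```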
In geology and geophysics, the most common method for consolidated rock samples is the divided bar. There are various modifications of these devices depending on the temperatures and pressures needed as well as the sample sizes. A sample of unknown conductivity is placed between two samples of known conductivity (usually brass plates). The setup is usually vertical, with the hot brass plate at the top, the sample in between, and the cold brass plate at the bottom. Heat is supplied at the top and made to move downwards to prevent convection within the sample. Measurements are taken after the sample has reached steady state (with zero thermal gradient or constant heat flow over the entire sample); this usually takes about 30 minutes or more.
For good conductors of heat, Searle's bar method can be used. [ 1 ] For poor conductors of heat, Lee's disc method can be used. [ 2 ]
The transient techniques perform a measurement during the process of heating up. The advantage is that measurements can be made relatively quickly. Transient methods are usually carried out by needle probes.
Non-steady-state methods to measure the thermal conductivity do not require the signal to obtain a constant value. Instead, the signal is studied as a function of time. The advantage of these methods is that they can in general be performed more quickly, since there is no need to wait for a steady-state situation. The disadvantage is that the mathematical analysis of the data is generally more difficult.
The transient hot wire method (THW) is a very popular, accurate and precise technique for measuring the thermal conductivity of gases, liquids, [3] solids, [4] nanofluids [5] and refrigerants [6] over a wide temperature and pressure range. The technique is based on recording the transient temperature rise of a thin vertical metal wire of effectively infinite length when a step voltage is applied to it. The wire is immersed in a fluid and can act both as an electrical heating element and as a resistance thermometer. The transient hot wire method has an advantage over other thermal conductivity methods: the theory behind it is fully developed, and no calibration (or at most a single-point calibration) is required. Furthermore, because the measuring time is very short (1 s), no convection develops during the measurement, and only the thermal conductivity of the fluid is measured, with very high accuracy.
Most of the THW sensors used in academia consist of two identical very thin wires that differ only in length. [3] Sensors using a single wire [7][8] are used both in academia and industry; their advantage over two-wire sensors is the ease of handling the sensor and of changing the wire.
An ASTM standard is published for the measurements of engine coolants using a single-transient hot wire method. [ 9 ]
The transient plane source method utilizes a plane sensor and a special mathematical model describing the heat conduction which, combined with suitable electronics, enables the method to be used to measure thermal transport properties. It covers a thermal conductivity range of at least 0.01–500 W/(m·K) (in accordance with ISO 22007-2) and can be used for measuring various kinds of materials, such as solids, liquids, pastes and thin films. In 2008 it was approved as an ISO standard for measuring the thermal transport properties of polymers (November 2008). This TPS standard also covers the use of this method to test both isotropic and anisotropic materials.
The transient plane source technique typically employs two sample halves, between which the sensor is sandwiched. Normally the samples should be homogeneous, but extended use of transient plane source testing of heterogeneous materials is possible with proper selection of sensor size to maximize sample penetration. This method can also be used in a single-sided configuration, with the introduction of a known insulation material used as sensor support.
The flat sensor consists of a continuous double spiral of electrically conducting nickel (Ni) metal, etched out of a thin foil. The nickel spiral is situated between two layers of thin polyimide film (Kapton), which provide electrical insulation and mechanical stability. The sensor is placed between two halves of the sample to be measured. During the measurement a constant electrical power is passed through the conducting spiral, increasing the sensor temperature. The heat generated dissipates into the sample on both sides of the sensor, at a rate depending on the thermal transport properties of the material. By recording the temperature vs. time response of the sensor, the thermal conductivity, thermal diffusivity and specific heat capacity of the material can be calculated. For highly conducting materials, very large samples are needed (some litres in volume).
A variation of the above method is the Modified Transient Plane Source Method (MTPS) developed by Dr. Nancy Mathis . The device uses a one-sided, interfacial, heat reflectance sensor that applies a momentary, constant heat source to the sample. The difference between this method and traditional transient plane source technique described above is that the heating element is supported on a backing, which provides mechanical support, electrical insulation and thermal insulation. This modification provides a one-sided interfacial measurement in offering maximum flexibility in testing liquids, powders, pastes and solids.
The physical model behind this method is the infinite line source with constant power per unit length, $q$. The temperature rise $\Delta T(t,r)$ at a distance $r$ from the source at time $t$ is

$\Delta T(t,r) = \frac{q}{4\pi\lambda}\, E_1\!\left(\frac{r^2}{4 a t}\right),$

where $\lambda$ is the thermal conductivity and $a$ the thermal diffusivity of the medium, and $E_1$ denotes the exponential integral. When performing an experiment, one measures the temperature at a point at a fixed distance and follows that temperature in time. For large times, the exponential integral can be approximated by making use of the relation

$E_1(x) \approx -\gamma - \ln x \quad (x \ll 1),$

where $\gamma$ is the Euler–Mascheroni constant. This leads to the expression

$\Delta T(t,r) \approx \frac{q}{4\pi\lambda}\left[\ln\!\left(\frac{4a}{r^2}\right) - \gamma + \ln t\right].$
Note that the first two terms in the brackets on the RHS are constants. Thus, if the probe temperature is plotted versus the natural logarithm of time, the thermal conductivity can be determined from the slope, given knowledge of $q$. Typically this means ignoring the first 60 to 120 seconds of data and measuring for 600 to 1200 seconds. Typically, this method is used for gases and liquids whose thermal conductivities are between 0.1 and 50 W/(m·K). If the thermal conductivity is too high, the plot often deviates from linearity and no evaluation is possible. [10]
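The slope-based data reduction can be sketched as follows; the heater power, fluid properties and time window are assumed, illustrative values, and the data are synthesized from the line-source expression above.

```python
import numpy as np

# Recover thermal conductivity from the late-time slope of dT vs ln(t),
# using the line-source result d(dT)/d(ln t) = q / (4*pi*lambda).
q_per_len = 10.0          # heater power per unit length, W/m
lam_true = 0.60           # W/(m K), value used to synthesise the data
a, r = 1.4e-7, 1.0e-3     # thermal diffusivity (m^2/s) and probe radius (m)
gamma = 0.5772156649      # Euler-Mascheroni constant

t = np.linspace(120.0, 1200.0, 200)    # ignore early-time data, as advised
dT = q_per_len / (4 * np.pi * lam_true) * (np.log(4 * a * t / r**2) - gamma)

slope = np.polyfit(np.log(t), dT, 1)[0]
print(q_per_len / (4 * np.pi * slope))  # ~0.60 W/(m K), as synthesised
```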
A variation of the transient line source method is used for measuring the thermal conductivity of a large mass of the earth for geothermal heat pump (GHP/GSHP) system design. This is generally called ground thermal response testing (TRT) by the GHP industry. [11][12][13] Understanding ground conductivity and thermal capacity is essential to proper GHP design, and using TRT to measure these properties was first presented in 1983 (Mogensen). The now commonly used procedure, introduced by Eklöf and Gehlin in 1996 and since approved by ASHRAE, involves inserting a pipe loop deep into the ground (in a borehole), filling the annulus of the bore with a grout of known thermal properties, heating the fluid in the pipe loop, and measuring the temperature drop in the loop between the inlet and return pipes in the bore. The ground thermal conductivity is estimated using the line source approximation method: plotting a straight line on the log of the thermal response measured. A very stable thermal source and pumping circuit are required for this procedure.
More advanced ground TRT methods are currently under development. The DOE is now validating a new advanced thermal conductivity test said to require half the time of the existing approach, while also eliminating the requirement for a stable thermal source. [14] This new technique is based on multi-dimensional, model-based TRT data analysis.
The laser flash method is used to measure the thermal diffusivity of a thin disc in the thickness direction. This method is based upon the measurement of the temperature rise at the rear face of the thin-disc specimen produced by a short energy pulse on the front face. With a reference sample, the specific heat can be obtained, and with known density the thermal conductivity follows as

$\lambda = a \cdot c_p \cdot \rho,$

where $\lambda$ is the thermal conductivity, $a$ the thermal diffusivity, $c_p$ the specific heat capacity, and $\rho$ the density.
It is suitable for a multiplicity of different materials over a broad temperature range (−120 °C to 2800 °C). [ 15 ]
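A minimal reduction of laser flash data is sketched below. It assumes the classic Parker relation $a = 0.1388\, L^2 / t_{1/2}$ for the ideal adiabatic, uniform-pulse case; the sample properties are illustrative, alumina-like values, not measured data.

```python
def laser_flash_conductivity(L, t_half, c_p, rho):
    """Laser flash analysis: thermal diffusivity from Parker's relation
    a = 0.1388 * L^2 / t_half (ideal adiabatic case), with L the sample
    thickness (m) and t_half the rear-face half-rise time (s); then
    lambda = a * c_p * rho."""
    a = 0.1388 * L**2 / t_half
    return a * c_p * rho

# Illustrative alumina-like sample: 2 mm thick disc, half-rise time 0.060 s,
# c_p = 880 J/(kg K), rho = 3950 kg/m^3
print(laser_flash_conductivity(2e-3, 0.060, 880.0, 3950.0))  # ~32 W/(m K)
```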
Time-domain thermoreflectance is a method by which the thermal properties of a material can be measured, most importantly thermal conductivity. This method can be applied most notably to thin film materials, which have properties that vary greatly when compared to the same materials in bulk. The idea behind this technique is that once a material is heated up, the change in the reflectance of the surface can be utilized to derive the thermal properties. The change in reflectivity is measured with respect to time, and the data received can be matched to a model which contain coefficients that correspond to thermal properties.
One popular technique for electro-thermal characterization of materials is the 3ω method, in which a thin metal structure (generally a wire or a film) is deposited on the sample to function as a resistive heater and a resistance temperature detector (RTD). The heater is driven with AC current at frequency ω, which induces periodic Joule heating at frequency 2ω due to the oscillation of the AC signal during a single period. There will be some delay between the heating of the sample and the temperature response, which depends upon the thermal properties of the sensor and sample. This temperature response is measured by logging the amplitude and phase delay of the AC voltage signal from the heater across a range of frequencies (generally accomplished using a lock-in amplifier). Note that the phase delay of the signal is the lag between the heating signal and the temperature response. The measured voltage will contain both the fundamental and third harmonic components (ω and 3ω respectively), because the Joule heating of the metal structure induces oscillations in its resistance at frequency 2ω due to the temperature coefficient of resistance (TCR) of the metal heater/sensor, as stated in the following equation:
where $C_0$ is a constant. Thermal conductivity is determined from the linear slope of the ΔT vs. log(ω) curve. The main advantages of the 3ω method are the minimization of radiation effects and easier acquisition of the temperature dependence of the thermal conductivity than in steady-state techniques. Although some expertise in thin film patterning and microlithography is required, this technique is considered the best pseudo-contact method available. [16] (ch. 23)
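One common data reduction is the slope method, in which, for a narrow line heater on a semi-infinite substrate, the amplitude of the 2ω temperature oscillation varies linearly with ln(ω) and the slope equals $-P/(2\pi L k)$. The sketch below synthesizes such data and recovers the conductivity from the slope; the heater geometry, power and offset constant are assumptions for illustration, not a full 3ω model.

```python
import numpy as np

# Slope method for the 3-omega technique: in the linear regime the
# 2-omega temperature oscillation amplitude obeys
#   dT(omega) = (P / (pi * L * k)) * (-0.5 * ln(omega) + const),
# so k follows from the slope of dT versus ln(omega).
P, L = 0.030, 1.0e-3     # heater power (W) and heater line length (m)
k_true = 1.4             # W/(m K), value used to synthesise the data

omega = np.logspace(2, 4, 40)                     # angular frequency, rad/s
dT = P / (np.pi * L * k_true) * (-0.5 * np.log(omega) + 9.0)

slope = np.polyfit(np.log(omega), dT, 1)[0]       # = -P / (2*pi*L*k)
print(-P / (2 * np.pi * L * slope))               # ~1.4 W/(m K)
```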
The transient hot wire method can be combined with the 3ω-method to accurately measure the thermal conductivity of solid and molten compounds from room temperature up to 800 °C. In high temperature liquids, errors from convection and radiation make steady-state and time-domain thermal conductivity measurements vary widely; [ 17 ] this is evident in the previous measurements for molten nitrates. [ 18 ] By operating in the frequency-domain, the thermal conductivity of the liquid can be measured using a 25 μm diameter hot-wire while rejecting the influence of ambient temperature fluctuations, minimizing error from radiation, and minimizing errors from convection by keeping the probed volume below 1 μL. [ 19 ]
The freestanding sensor-based 3ω technique [20][21] has been proposed and developed as an alternative to the conventional 3ω method for thermophysical property measurement. The method covers the characterization of solids, powders and fluids from cryogenic temperatures to around 400 K. [22] For solid samples, the method is applicable to both bulk materials and wafers or membranes tens of micrometers thick, [23] with dense or porous surfaces. [24] The thermal conductivity and thermal effusivity can be measured using appropriately selected sensors. Two basic forms are available: the linear source freestanding sensor and the planar source freestanding sensor. The range of thermophysical properties covered depends on the form of the technique; the recommended range in which the highest precision can be attained is 0.01 to 150 W/(m·K) in thermal conductivity for the linear source freestanding sensor and 500 to 8000 J/(m²·K·s⁰·⁵) in thermal effusivity for the planar source freestanding sensor.
A thermal conductance tester, one of the instruments of gemology , determines if gems are genuine diamonds using diamond's uniquely high thermal conductivity.
For an example, see Measuring Instrument of Heat Conductivity of ITP-MG4 "Zond" (Russia). [25]
In heat transfer and thermodynamics , a thermodynamic system is said to be in thermal contact with another system if it can exchange energy through the process of heat . Perfect thermal isolation is an idealization as real systems are always in thermal contact with their environment to some extent.
When two solid bodies are in contact, a resistance to heat transfer exists between the bodies. The study of heat conduction between such bodies is called thermal contact conductance (or thermal contact resistance).
In physics, thermal contact conductance is the study of heat conduction between solid or liquid bodies in thermal contact. The thermal contact conductance coefficient, $h_c$, is a property indicating the thermal conductivity, or ability to conduct heat, between two bodies in contact. The inverse of this property is termed thermal contact resistance.
When two solid bodies come in contact, such as A and B in Figure 1, heat flows from the hotter body to the colder body. From experience, the temperature profile along the two bodies varies, approximately, as shown in the figure. A temperature drop is observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Thermal contact resistance is defined as the ratio between this temperature drop and the average heat flow across the interface. [ 1 ]
According to Fourier's law, the heat flow between the bodies is found by the relation

$q = -k A \frac{dT}{dx},$
where $q$ is the heat flow, $k$ is the thermal conductivity, $A$ is the cross-sectional area and $dT/dx$ is the temperature gradient in the direction of flow.
From considerations of conservation of energy, the heat flow between the two bodies in contact, bodies A and B, is found as

$q = \frac{T_1 - T_3}{\Delta x_A / (k_A A) + 1/(h_c A) + \Delta x_B / (k_B A)},$

where $T_1 - T_3$ is the overall temperature drop across the joint and $\Delta x_A$, $\Delta x_B$ are the thicknesses of the two bodies.
One may observe that the heat flow is directly related to the thermal conductivities of the bodies in contact, $k_A$ and $k_B$, the contact area $A$, and the thermal contact resistance, $1/h_c$, which, as previously noted, is the inverse of the thermal contact conductance coefficient, $h_c$.
Most experimentally determined values of the thermal contact resistance fall between 0.000005 and 0.0005 m²·K/W (the corresponding range of thermal contact conductance is 200,000 to 2,000 W/(m²·K)). To know whether the thermal contact resistance is significant or not, the magnitudes of the thermal resistances of the layers are compared with typical values of thermal contact resistance. Thermal contact resistance is significant and may dominate for good heat conductors such as metals, but can be neglected for poor heat conductors such as insulators. [2] Thermal contact conductance is an important factor in a variety of applications, largely because many physical systems contain a mechanical combination of two materials. Some of the fields where contact conductance is of importance are: [3][4][5]
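A quick series-resistance comparison makes the point concrete; the plate thicknesses and contact conductance below are assumed, typical-order values, not measured data.

```python
# Compare the interface resistance with the bulk resistance of the two
# layers for a metal-metal joint (per unit area, units m^2 K / W).
def total_resistance(x_a, k_a, x_b, k_b, h_c):
    """Series sum of two conductive layers and one contact resistance."""
    return x_a / k_a + 1.0 / h_c + x_b / k_b

# Two 10 mm aluminium plates (k ~ 205 W/(m K)) with a typical contact
# conductance of 20,000 W/(m^2 K):
r_layers = 2 * (0.010 / 205.0)
r_contact = 1.0 / 20_000.0
print(f"layers:  {r_layers:.2e} m^2 K/W")
print(f"contact: {r_contact:.2e} m^2 K/W")  # same order -> not negligible
```

For these good conductors the interface contributes a resistance of the same order as the plates themselves, consistent with the statement above that contact resistance may dominate for metals.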
Thermal contact conductance is a complicated phenomenon, influenced by many factors. Experience shows that the most important ones are as follows:
For thermal transport between two contacting bodies, such as particles in a granular medium, the contact pressure, and the true contact area that arises from it, are the factors of most influence on the overall contact conductance. [6] Governed by an interface's normal contact stiffness, as the contact pressure grows, the true contact area increases and the contact conductance grows (the contact resistance becomes smaller). [7]
Since the contact pressure is the most important factor, most studies, correlations and mathematical models for measurement of contact conductance are done as a function of this factor.
The thermal contact resistance of certain sandwich-type materials manufactured by rolling at high temperatures may sometimes be ignored, because the decrease in thermal conductivity between the layers is negligible.
No truly smooth surfaces exist; surface imperfections are visible under a microscope. As a result, when two bodies are pressed together, contact occurs only at a finite number of points, separated by relatively large gaps, as shown in Fig. 2. Since the actual contact area is reduced, another resistance to heat flow exists. The gases or fluids filling these gaps may largely influence the total heat flow across the interface. The thermal conductivity of the interstitial material and its pressure, examined through reference to the Knudsen number, are the two properties governing its influence on contact conductance, and on thermal transport in heterogeneous materials in general. [7]
In the absence of interstitial materials, as in a vacuum , the contact resistance will be much larger, since flow through the intimate contact points is dominant.
A surface that has undergone certain finishing operations can be characterised by three main properties: roughness, waviness, and fractal dimension. Among these, roughness and fractality are of most importance, with roughness often expressed as an rms value, $\sigma$, and surface fractality denoted generally by $D_f$. The effect of surface structures on thermal conductivity at interfaces is analogous to the concept of electrical contact resistance (ECR), involving contact-patch-restricted transport of phonons rather than electrons.
When the two bodies come into contact, surface deformation may occur on both bodies. This deformation may be either plastic or elastic, depending on the material properties and the contact pressure. When a surface undergoes plastic deformation, contact resistance is lowered, since the deformation causes the actual contact area to increase. [8][9]
The presence of dust particles , acids , etc., can also influence the contact conductance.
Returning to Formula 2, calculation of the thermal contact conductance may prove difficult, even impossible, due to the difficulty in measuring the contact area $A$ (a product of surface characteristics, as explained earlier). Because of this, contact conductance/resistance is usually found experimentally, using a standard apparatus. [10]
The results of such experiments are usually published in the engineering literature, in journals such as the Journal of Heat Transfer, the International Journal of Heat and Mass Transfer, etc. Unfortunately, a centralized database of contact conductance coefficients does not exist, a situation which sometimes leads companies to use outdated, irrelevant data, or to neglect contact conductance altogether.
CoCoE (Contact Conductance Estimator), a project founded to solve this problem by creating a centralized database of contact conductance data and a computer program that uses it, was started in 2006.
While a finite thermal contact conductance is due to voids at the interface, surface waviness, surface roughness, etc., a finite conductance exists even at near-ideal interfaces. This conductance, known as thermal boundary conductance, is due to the differences in electronic and vibrational properties between the contacting materials. This conductance is generally much higher than thermal contact conductance, but becomes important in nanoscale material systems.
A thermal copper pillar bump, also known as a "thermal bump", is a thermoelectric device made from thin-film thermoelectric material embedded in flip chip interconnects (in particular copper pillar solder bumps) for use in electronics and optoelectronic packaging, including flip chip packaging of CPU and GPU integrated circuits (chips), laser diodes, and semiconductor optical amplifiers (SOA). Unlike conventional solder bumps that provide an electrical path and a mechanical connection to the package, thermal bumps act as solid-state heat pumps and add thermal management functionality locally on the surface of a chip or another electrical component. A thermal bump is 238 μm in diameter and 60 μm high.
Thermal bumps use the thermoelectric effect , which is the direct conversion of temperature differences to electric voltage and vice versa. Simply put, a thermoelectric device creates a voltage when there is a different temperature on each side, or when a voltage is applied to it, it creates a temperature difference. This effect can be used to generate electricity, to measure temperature, to cool objects, or to heat them.
For each bump, thermoelectric cooling (TEC) occurs when a current is passed through the bump. The thermal bump pulls heat from one side of the device and transfers it to the other as current is passed through the material. This is known as the Peltier effect. [ 1 ] The direction of heating and cooling is determined by the direction of current flow and the sign of the majority electrical carrier in the thermoelectric material. Thermoelectric power generation (TEG) on the other hand occurs when the thermal bump is subjected to a temperature gradient (i.e., the top is hotter than the bottom). In this instance, the device generates current, converting heat into electrical power. This is termed the Seebeck effect. [ 1 ]
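The standard textbook heat balance for a Peltier element makes the cooling trade-off explicit: the net heat pumped from the cold side is the Peltier term minus half the Joule heating and the back-conduction through the element. The device parameters in the sketch below are illustrative assumptions, not measured thermal-bump data.

```python
def peltier_net_cooling(S, R, K, I, T_c, T_h):
    """Net heat pumped from the cold side of a thermoelectric couple:
    Q_c = S*T_c*I - 0.5*I^2*R - K*(T_h - T_c)
    (Peltier pumping minus half the Joule heat minus back-conduction),
    with S the Seebeck coefficient (V/K), R the electrical resistance
    (ohm) and K the thermal conductance (W/K) of the couple."""
    return S * T_c * I - 0.5 * I**2 * R - K * (T_h - T_c)

# Illustrative single-couple values (not measured device data):
S, R, K = 400e-6, 0.01, 0.002      # V/K, ohm, W/K
for current in (0.5, 1.0, 2.0, 4.0):
    q = peltier_net_cooling(S, R, K, current, T_c=300.0, T_h=310.0)
    print(f"I = {current:3.1f} A  ->  Q_c = {q * 1000:7.2f} mW")
```

Note how the Joule term, quadratic in current, eventually overwhelms the Peltier term, which is linear, so there is an optimum drive current.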
The thermal bump was developed by Nextreme Thermal Solutions as a method for integrating active thermal management functionality at the chip level in the same manner that transistors, resistors and capacitors are integrated in conventional circuit designs today. Nextreme chose the copper pillar bump as an integration strategy due to its widespread acceptance by Intel , Amkor and other industry leaders as the method for connecting microprocessors and other advanced electronics devices to various surfaces during a process referred to as “flip-chip” packaging. The thermal bump can be integrated as a part of the standard flip-chip process (Figure 1) or integrated as discrete devices.
The efficiency of a thermoelectric device is measured by the heat moved (or pumped) divided by the amount of electrical power supplied to move this heat. This ratio is termed the coefficient of performance (COP) and is a measured characteristic of a thermoelectric device. The COP is inversely related to the temperature difference the device produces: as a cooling device is moved further away from the heat source, parasitic losses between the cooler and the heat source necessitate additional cooling power, so the greater the distance between source and cooler, the more cooling is required. For this reason, the cooling of electronic devices is most efficient when it occurs closest to the source of the heat generation.
Use of the thermal bump does not displace system level cooling, which is still needed to move heat out of the system; rather it introduces a fundamentally new methodology for achieving temperature uniformity at the chip and board level. In this manner, overall thermal management of the system becomes more efficient. In addition, while conventional cooling solutions scale with the size of the system (bigger fans for bigger systems, etc.), the thermal bump can scale at the chip level by using more thermal bumps in the overall design.
Solder bumping technology (the process of joining a chip to a substrate without shorting using solder) was first conceived and implemented by IBM in the early 1960s. Three versions of this type of solder joining were developed. The first was to embed copper balls in the solder bumps to provide a positive stand-off. The second solution, developed by Delco Electronics (General Motors) in the late 1960s, was similar to embedding copper balls except that the design employed a rigid silver bump. The bump provided a positive stand-off and was attached to the substrate by means of solder that was screen-printed onto the substrate. The third solution was to use a screened glass dam near the electrode tips to act as a ‘‘stop-off’’ to prevent the ball solder from flowing down the electrode. By then the Ball Limiting Metallurgy (BLM) with a high-lead (Pb) solder system and a copper ball had proven to work well. Therefore, the ball was simply removed and the solder evaporation process extended to form pure solder bumps that were approximately 125 μm high. This system became known as the controlled collapse chip connection (C3 or C4).
Until the mid-1990s, this type of flip-chip assembly was practiced almost exclusively by IBM and Delco. Around this time, Delco sought to commercialize its technology and formed Flip Chip Technologies with Kulicke & Soffa Industries as a partner. At the same time, MCNC (which had developed a plated version of IBM’s C4 process) received funding from DARPA to commercialize its technology. These two organizations, along with APTOS (Advanced Plating Technologies on Silicon), formed the nascent out-sourcing market.
During this same time, companies began to look at reducing or streamlining their packaging, from the earlier multi-chip-on-ceramic packages that IBM had originally developed C4 to support, to what were referred to as Chip Scale Packages (CSP). There were a number of companies developing products in this area. These products could usually be put into one of two camps: either they were scaled down versions of the multi-chip on ceramic package (of which the Tessera package would be one example); or they were the streamlined versions developed by Unitive Electronics, et al. (where the package wiring had been transferred to the chip, and after bumping, they were ready to be placed).
One of the issues with the CSP type of package (which was intended to be soldered directly to an FR4 or flex circuit) was that for high-density interconnects, the soft solder bump provided less of a stand-off as the solder bump diameter and pitch were decreased. Different solutions were employed including one developed by Focus Interconnect Technology (former APTOS engineers), which used a high aspect ratio plated copper post to provide a larger fixed standoff than was possible for a soft solder collapse joint.
Today, flip chip is a well established technology and collapsed soft solder connections are used in the vast majority of assemblies. The copper post stand-off developed for the CSP market has found a home in high-density interconnects for advanced micro-processors and is used today by IBM for its CPU packaging.
Trends in high-density interconnects have led to the use of copper pillar solder bumps (CPB) for CPU and GPU packaging. [ 2 ] CPBs are an attractive replacement for traditional solder bumps because they provide a fixed stand-off independent of pitch. This is extremely important as most of the high-end products are underfilled and a smaller standoff may create difficulties in getting the underfill adhesive to flow under the die.
Figure 2 shows an example of a CPB fabricated by Intel and incorporated into their Presler line of microprocessors, among others. The cross section shows copper and a copper pillar (approximately 60 μm high) electrically connected through an opening (or via) in the chip passivation layer at the top of the picture. At the bottom is another copper trace on the package substrate, with solder between the two copper layers.
Thin films are thin material layers ranging from fractions of a nanometer to several micrometers in thickness. Thin-film thermoelectric materials are grown by conventional semiconductor deposition methods and fabricated using conventional semiconductor micro-fabrication techniques.
Thin-film thermoelectrics have been demonstrated to provide heat pumping capacity that far exceeds that of traditional bulk pellet TE products. [3] The benefit of thin films versus bulk materials for thermoelectric manufacturing is expressed in Equation 1, where Qmax (the maximum heat pumped by a module) is shown to be inversely proportional to the thickness of the film, L.
$Q_{max} = \frac{S^2 T^2}{2 \cdot R_{Total}} = \frac{S^2 T^2 A}{2 p_c L}$   (Eq. 1)
As such, TE coolers manufactured with thin films can easily have 10x–20x higher Qmax values for a given active area A. This makes thin-film TECs ideally suited for applications involving high heat fluxes. In addition to the increased heat pumping capability, the use of thin films allows for truly novel implementations of TE devices. Instead of a bulk module 1–3 mm in thickness, a thin-film TEC can be fabricated less than 100 μm thick.
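The 1/L scaling of Eq. 1 can be made concrete with a short calculation; all parameter values below (Seebeck coefficient S, temperature T, area A and resistivity p_c, interpreted here as an electrical resistivity) are assumed, illustrative numbers, not device data.

```python
def q_max(S, T, A, p_c, L):
    """Eq. 1: Q_max = S^2 * T^2 * A / (2 * p_c * L)."""
    return S**2 * T**2 * A / (2.0 * p_c * L)

# Identical illustrative material parameters, varying only the element
# thickness L, from a bulk pellet down to a thin film:
S, T, A, p_c = 400e-6, 300.0, 1e-6, 1e-5   # V/K, K, m^2, ohm*m
for L in (1.5e-3, 100e-6, 10e-6):
    print(f"L = {L * 1e6:7.1f} um  ->  Q_max = {q_max(S, T, A, p_c, L):7.4f} W")
```

Thinning the element from millimetre to tens-of-microns scale raises Qmax by one to two orders of magnitude for the same active area, which is the scaling argument made above.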
In its simplest form, the P or N leg of a TE couple (the basic building block of all thin-film TE devices) is a layer of thin-film TE material with a solder layer above and below, providing electrical and thermal functionality.
The thermal bump is compatible with the existing flip-chip manufacturing infrastructure, extending the use of conventional solder bumped interconnects to provide active, integrated cooling of a flip-chipped component using the widely accepted copper pillar bumping process. The result is higher performance and efficiency within the existing semiconductor manufacturing paradigm. The thermal bump also enables power generating capabilities within copper pillar bumps for energy recycling applications.
Thermal bumps have been shown to achieve a temperature differential of 60 °C between the top and bottom headers, have demonstrated power pumping capabilities exceeding 150 W/cm², and, when subjected to heat, have demonstrated the capability to generate up to 10 mW of power per bump.
Figure 3 shows an SEM cross-section of a TE leg. Here it is demonstrated that the thermal bump is structurally identical to a CPB with an extra layer, the TE layer, incorporated into the stack-up. The addition of the TE layer transforms a standard copper pillar bump into a thermal bump. This element, when properly configured electrically and thermally, provides active thermoelectric heat transfer from one side of the bump to the other side. The direction of heat transfer is dictated by the doping type of the thermoelectric material (either a P-type or N-type semiconductor) and the direction of electric current passing through the material. This type of thermoelectric heat transfer is known as the Peltier effect. Conversely, if heat is allowed to pass from one side of the thermoelectric material to the other, a current will be generated in the material in a phenomenon known as the Seebeck effect. The Seebeck effect is essentially the reverse of the Peltier effect. In this mode, electrical power is generated from the flow of heat in the TE element. The structure shown in Figure 3 is capable of operating in both the Peltier and Seebeck modes, though not simultaneously.
Figure 4 shows a schematic of a typical CPB and a thermal bump for comparison. These structures are similar, with both having copper pillars and solder connections. The primary difference between the two is the introduction of either a P- or N-type thermoelectric layer between two solder layers. The solders used with CPBs and thermal bumps can be any one of a number of commonly used solders including, but not limited to, Sn, SnPb eutectic, SnAg or AuSn.
Figure 5 shows a device equipped with a thermal bump. The thermal flow is shown by the arrows labeled “heat.” Metal traces, which can be several micrometres high, can be stacked or interdigitated to provide highly conductive pathways for collecting heat from the underlying circuit and funneling that heat to the thermal bump.
The metal traces shown in the figure for conducting electric current into the thermal bump may or may not be directly connected to the circuitry of the chip. In the case where there are electrical connections to the chip circuitry, on-board temperature sensors and driver circuitry can be used to control the thermal bump in a closed-loop fashion to maintain optimal performance. In addition, the heat that is pumped by the thermal bump, plus the extra heat created by the thermal bump in the course of pumping that heat, must be rejected into the substrate or board. Since the performance of the thermal bump can be improved by providing a good thermal path for the rejected heat, it is beneficial to provide highly conductive thermal pathways on the backside of the thermal bump. The substrate could be a highly conductive ceramic substrate like AlN or a metal (e.g., Cu, CuW, CuMo, etc.) with a dielectric. In this case, the high thermal conductance of the substrate will act as a natural pathway for the rejected heat. The substrate might also be a multilayer substrate like a printed wiring board (PWB) designed to provide a high-density interconnect. In this case, the thermal conductivity of the PWB may be relatively poor, so adding thermal vias (e.g. metal plugs) can provide excellent pathways for the rejected heat.
Thermal bumps can be used in a number of different ways to provide chip cooling and power generation.
Thermal bumps can be evenly distributed across the surface of a chip to provide a uniform cooling effect. In this case, the thermal bumps may be interspersed with standard bumps that are used for signal, power and ground. This allows the thermal bumps to be placed directly under the active circuitry of the chip for maximum effectiveness. The number and density of thermal bumps are based on the heat load from the chip. Each P/N couple can provide a specific heat pumping (Q) at a specific temperature differential (ΔT) at a given electric current. Temperature sensors on the chip (“on board” sensors) can provide direct measurement of the thermal bump performance and provide feedback to the driver circuit.
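The sizing logic described in this paragraph can be sketched as follows; the heat load, per-couple pumping capacity and area below are hypothetical values chosen purely for illustration.

```python
import math

# Illustrative sizing sketch: given a chip heat load and an assumed per-couple
# heat-pumping capacity at the operating current and dT, estimate how many
# thermal bumps (P/N couples) are needed and their density.
chip_heat_load_W = 20.0      # heat to be pumped from the region of interest (assumed)
q_per_couple_W = 0.05        # assumed heat pumped per P/N couple at the chosen I and dT
region_area_mm2 = 100.0      # area over which the bumps are distributed (assumed)

n_couples = math.ceil(chip_heat_load_W / q_per_couple_W)
density = n_couples / region_area_mm2

print(f"couples needed: {n_couples}, density: {density:.1f} per mm^2")
```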
Since thermal bumps can either cool or heat the chip depending on the current direction, they can be used to provide precision control of temperature for chips that must operate within specific temperature ranges irrespective of ambient conditions. For example, this is a common problem for many optoelectronic components.
In microprocessors, graphics processors, and other high-end chips, hotspots can occur as power densities vary significantly across a chip. [ 4 ] These hotspots can severely limit the performance of the devices. Because of the small size of the thermal bumps and the relatively high density at which they can be placed on the active surface of the chip, these structures are ideally suited for cooling hotspots. In such a case, the distribution of the thermal bumps may not need to be even. Rather, the thermal bumps would be concentrated in the area of the hotspot while areas of lower heat density would have fewer thermal bumps per unit area. In this way, cooling from the thermal bumps is applied only where needed, thereby reducing the added power necessary to drive the cooling and reducing the general thermal overhead on the system.
In addition to chip cooling, thermal bumps can also be applied to high heat-flux interconnects to provide a constant, steady source of power for energy scavenging applications. Such a source of power, typically in the mW range, can trickle charge batteries for wireless sensor networks and other battery operated systems. | https://en.wikipedia.org/wiki/Thermal_copper_pillar_bump |
The thermal cycler (also known as a thermocycler , PCR machine or DNA amplifier ) is a laboratory apparatus most commonly used to amplify segments of DNA via the polymerase chain reaction (PCR). [ 1 ] Thermal cyclers may also be used in laboratories to facilitate other temperature-sensitive reactions, including restriction enzyme digestion or rapid diagnostics. [ 2 ] The device has a thermal block with holes where tubes holding the reaction mixtures can be inserted. The cycler then raises and lowers the temperature of the block in discrete, pre-programmed steps.
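To make the idea of discrete, pre-programmed steps concrete, the sketch below represents a cycling program as data and expands it into a block temperature schedule. The step temperatures, hold times and cycle count are generic textbook PCR values, not settings taken from this article.

```python
# Minimal sketch of a thermal cycler program expanded into a step schedule.
program = {
    "initial_denaturation": (95, 120),   # (°C, seconds) - generic values
    "cycle": [("denature", 95, 30), ("anneal", 55, 30), ("extend", 72, 60)],
    "cycles": 30,
    "final_extension": (72, 300),
    "hold": (4, None),                   # hold at 4 °C indefinitely
}

def expand(program):
    steps = [("initial_denaturation", *program["initial_denaturation"])]
    for i in range(program["cycles"]):
        for name, temp, secs in program["cycle"]:
            steps.append((f"cycle {i+1}: {name}", temp, secs))
    steps.append(("final_extension", *program["final_extension"]))
    steps.append(("hold", *program["hold"]))
    return steps

schedule = expand(program)
print(len(schedule), "programmed steps; first three:", schedule[:3])
```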
The earliest thermal cyclers were designed for use with the Klenow fragment of DNA polymerase I . Since this enzyme is destroyed during each heating step of the amplification process, new enzyme had to be added every cycle. This led to a cumbersome machine based on an automated pipettor , with open reaction tubes. Later, the PCR process was adapted to the use of thermostable DNA polymerase from Thermus aquaticus , which greatly simplified the design of the thermal cycler. While in some old machines the block is submerged in an oil bath to control temperature, in modern PCR machines a Peltier element is commonly used. Quality thermal cyclers often contain silver blocks to achieve fast temperature changes and uniform temperature throughout the block. Other cyclers have multiple blocks with high heat capacity, each of which is kept at a constant temperature, and the reaction tubes are moved between them by means of an automated process. Miniaturized thermal cyclers have been created in which the reaction mixture moves via a channel through hot and cold zones on a microfluidic chip. Thermal cyclers designed for quantitative PCR have optical systems which enable fluorescence to be monitored during reaction cycling.
Modern thermal cyclers are equipped with a heated lid that presses against the lids of the reaction tubes. This prevents condensation of water from the reaction mixtures on the insides of the lids. Traditionally, a layer of mineral oil was used for this purpose. Some thermal cyclers are equipped with a fully adjustable heated lid to allow for nonstandard or diverse types of PCR plasticware. [ 3 ]
Some thermal cyclers are equipped with multiple blocks allowing several different PCRs to be carried out simultaneously. Some models also have a gradient function to allow for different temperatures in different parts of the block. This is particularly useful when testing suitable annealing temperatures for PCR primers . | https://en.wikipedia.org/wiki/Thermal_cycler |
In physics , the thermal de Broglie wavelength ($\lambda_{\text{th}}$, sometimes also denoted by $\Lambda$) is a measure of the uncertainty in location of a particle of thermodynamic average momentum in an ideal gas. [ 1 ] It is roughly the average de Broglie wavelength of particles in an ideal gas at the specified temperature.
We can take the average interparticle spacing in the gas to be approximately $(V/N)^{1/3}$ where V is the volume and N is the number of particles. When the thermal de Broglie wavelength is much smaller than the interparticle distance, the gas can be considered to be a classical or Maxwell–Boltzmann gas. On the other hand, when the thermal de Broglie wavelength is on the order of or larger than the interparticle distance, quantum effects will dominate and the gas must be treated as a Fermi gas or a Bose gas , depending on the nature of the gas particles. The critical temperature is the transition point between these two regimes, and at this critical temperature, the thermal wavelength will be approximately equal to the interparticle distance. That is, the quantum nature of the gas will be evident for $$\frac{V}{N\lambda_{\text{th}}^{3}} \leq 1, \quad \text{or} \quad \left(\frac{V}{N}\right)^{1/3} \leq \lambda_{\text{th}},$$
i.e., when the interparticle distance is less than the thermal de Broglie wavelength; in this case the gas will obey Bose–Einstein statistics or Fermi–Dirac statistics , whichever is appropriate. This is for example the case for electrons in a typical metal at T = 300 K , where the electron gas obeys Fermi–Dirac statistics , or in a Bose–Einstein condensate . On the other hand, for $$\frac{V}{N\lambda_{\text{th}}^{3}} \gg 1, \quad \text{or} \quad \left(\frac{V}{N}\right)^{1/3} \gg \lambda_{\text{th}},$$
i.e., when the interparticle distance is much larger than the thermal de Broglie wavelength, the gas will obey Maxwell–Boltzmann statistics . [ 2 ] Such is the case for molecular or atomic gases at room temperature, and for thermal neutrons produced by a neutron source .
For massive, non-interacting particles, the thermal de Broglie wavelength can be derived from the calculation of the partition function . Assuming a 1-dimensional box of length L , the partition function (using the energy states of the 1D particle in a box ) is $$Z = \sum_{n} \exp\left(-\frac{E_{n}}{k_{\text{B}}T}\right) = \sum_{n} \exp\left(-\frac{h^{2}n^{2}}{8mL^{2}k_{\text{B}}T}\right).$$
Since the energy levels are extremely close together, we can approximate this sum as an integral: [ 3 ] $$Z = \int_{0}^{\infty} \exp\left(-\frac{h^{2}n^{2}}{8mL^{2}k_{\text{B}}T}\right)\,\mathrm{d}n = \sqrt{\frac{2\pi m k_{\text{B}}T}{h^{2}}}\,L \equiv \frac{L}{\lambda_{\text{th}}}.$$
Hence, $$\lambda_{\text{th}} = \frac{h}{\sqrt{2\pi m k_{\text{B}}T}},$$ where $h$ is the Planck constant , m is the mass of a gas particle, $k_{\text{B}}$ is the Boltzmann constant , and T is the temperature of the gas. [ 2 ] This can also be expressed using the reduced Planck constant $\hbar = h/2\pi$ as $$\lambda_{\text{th}} = \sqrt{\frac{2\pi \hbar^{2}}{m k_{\text{B}}T}}.$$
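As a quick numerical check of this formula, the sketch below evaluates $\lambda_{\text{th}}$ for an electron and a proton at 300 K and compares it with a typical interparticle spacing; the number density used for the spacing is an assumed round value for a gas near atmospheric conditions.

```python
import math
from scipy.constants import h, k, m_e, m_p

# Sketch: evaluate lambda_th = h / sqrt(2*pi*m*kB*T) and compare it with a
# typical interparticle spacing (V/N)^(1/3).
def thermal_wavelength(mass_kg, temp_K):
    return h / math.sqrt(2 * math.pi * mass_kg * k * temp_K)

T = 300.0
n_gas = 2.5e25                       # assumed number density of a gas near 1 atm, 300 K (1/m^3)
spacing = (1.0 / n_gas) ** (1 / 3)

print(f"electron lambda_th at 300 K: {thermal_wavelength(m_e, T):.3e} m")
print(f"proton   lambda_th at 300 K: {thermal_wavelength(m_p, T):.3e} m")
print(f"typical interparticle spacing: {spacing:.3e} m")
```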
For massless (or highly relativistic ) particles, the thermal wavelength is defined as $$\lambda_{\text{th}} = \frac{hc}{2\pi^{1/3} k_{\text{B}}T} = \frac{\pi^{2/3}\hbar c}{k_{\text{B}}T},$$
where c is the speed of light. As with the thermal wavelength for massive particles, this is of the order of the average wavelength of the particles in the gas and defines a critical point at which quantum effects begin to dominate. For example, when observing the long-wavelength spectrum of black body radiation, the classical Rayleigh–Jeans law can be applied, but when the observed wavelengths approach the thermal wavelength of the photons in the black body radiator, the quantum Planck's law must be used.
A general definition of the thermal wavelength for an ideal gas of particles having an arbitrary power-law relationship between energy and momentum (dispersion relationship), in any number of dimensions, can be introduced. [ 1 ] If n is the number of dimensions, and the relationship between energy ( E ) and momentum ( p ) is given by $E = ap^{s}$ (with a and s being constants), then the thermal wavelength is defined as $$\lambda_{\text{th}} = \frac{h}{\sqrt{\pi}}\left(\frac{a}{k_{\text{B}}T}\right)^{1/s}\left[\frac{\Gamma(n/2+1)}{\Gamma(n/s+1)}\right]^{1/n},$$ where Γ is the Gamma function . This definition retains the following simple form for the chemical potential in the dilute (classical ideal gas) limit: [ 1 ]
for internal degeneracy g (such as spin degeneracy g = 2 for electrons), and also provides clean expressions for the thermodynamics of Fermi and Bose gases. [ 1 ]
In particular, for a 3-D ( n = 3 ) gas of massive or massless particles we have $E = p^{2}/2m$ ($a = 1/2m$, $s = 2$) and $E = pc$ ($a = c$, $s = 1$), respectively, yielding the expressions listed in the previous sections. Note that for massive non-relativistic particles ( s = 2), the expression does not depend on n . This explains why the 1-D derivation above agrees with the 3-D case.
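The sketch below is an illustrative consistency check (not from the source): it evaluates the general formula numerically and confirms that it reproduces the massive ($a = 1/2m$, $s = 2$) and massless ($a = c$, $s = 1$) special cases for $n = 3$.

```python
import math
from scipy.constants import h, k, hbar, c
from scipy.special import gamma

# General thermal wavelength for dispersion E = a*p^s in n dimensions.
def lambda_general(a, s, n, T):
    return (h / math.sqrt(math.pi)) * (a / (k * T)) ** (1 / s) * (
        gamma(n / 2 + 1) / gamma(n / s + 1)
    ) ** (1 / n)

m, T = 9.109e-31, 300.0   # electron mass as a test particle, arbitrary temperature
massive_special = h / math.sqrt(2 * math.pi * m * k * T)
massless_special = math.pi ** (2 / 3) * hbar * c / (k * T)

# Both relative differences should be at the level of floating-point error.
print(abs(lambda_general(1 / (2 * m), 2, 3, T) - massive_special) / massive_special)
print(abs(lambda_general(c, 1, 3, T) - massless_special) / massless_special)
```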
Some examples of the thermal de Broglie wavelength at 298 K are given below. | https://en.wikipedia.org/wiki/Thermal_de_Broglie_wavelength |
Thermal death time is how long it takes to kill a specific bacterium at a specific temperature . It was originally developed for food canning and has found applications in cosmetics , producing salmonella-free feeds for animals (e.g. poultry) and pharmaceuticals .
In 1895, William Lyman Underwood of the Underwood Canning Company , a food company founded in 1822 at Boston, Massachusetts and later relocated to Watertown, Massachusetts , approached William Thompson Sedgwick , chair of the biology department at the Massachusetts Institute of Technology , about losses his company was suffering due to swollen and burst cans despite the newest retort technology available. Sedgwick gave his assistant, Samuel Cate Prescott , a detailed assignment on what needed to be done. Prescott and Underwood worked on the problem every afternoon from late 1895 to late 1896, focusing on canned clams . They first discovered that the clams contained heat-resistant bacterial spores that were able to survive the processing; then that these spores' presence depended on the clams' living environment; and finally that these spores would be killed if processed at 250 ˚F (121 ˚C) for ten minutes in a retort.
These studies prompted the similar research of canned lobster , sardines , peas , tomatoes , corn , and spinach . Prescott and Underwood's work was first published in late 1896, with further papers appearing from 1897 to 1926. This research, though important to the growth of food technology , was never patented. It would pave the way for thermal death time research that was pioneered by Bigelow and C. Olin Ball from 1921 to 1936 at the National Canners Association (NCA).
Bigelow and Ball's research focused on the thermal death time of Clostridium botulinum ( C. botulinum ) that was determined in the early 1920s. Research continued with inoculated canning pack studies that were published by the NCA in 1968.
Thermal death time can be determined one of two ways: 1) by using graphs or 2) by using mathematical formulas.
This is usually expressed in minutes at the temperature of 250 °F (121 °C) and is designated as $F_{0}$. Each 18 °F (10 °C) change results in a time change by a factor of 10. This would be shown either as $F_{121}^{10}$ = 10 minutes (Celsius) or $F_{250}^{18}$ = 10 minutes (Fahrenheit).
A lethal ratio ( L ) is the sterilizing effect of 1 minute at some other temperature T , relative to 1 minute at the reference temperature; it is given by $L = 10^{(T - T_{\text{Ref}})/z}$,
where $T_{\text{Ref}}$ is the reference temperature, usually 250 °F (121 °C); z is the z-value ; and T is the temperature at the slowest-heating point of the product.
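A common use of the lethal ratio is to sum it over a product's time–temperature history to obtain the delivered F0. The sketch below follows that standard calculation; the temperature profile is invented purely for illustration.

```python
# Sketch of the standard F0 calculation: sum lethal rates L = 10^((T - Tref)/z)
# over the product's time-temperature history. Tref and z follow the text
# (121 °C and 10 °C); the cold-spot temperature profile below is hypothetical.
T_REF_C = 121.0
Z_C = 10.0

def lethal_rate(temp_c):
    return 10 ** ((temp_c - T_REF_C) / Z_C)

# (time step in minutes, product cold-spot temperature in °C)
profile = [(1.0, 100.0)] * 5 + [(1.0, 115.0)] * 5 + [(1.0, 121.0)] * 8 + [(1.0, 110.0)] * 4

F0 = sum(dt * lethal_rate(T) for dt, T in profile)
print(f"F0 = {F0:.2f} equivalent minutes at 121 °C")
```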
Prior to the advent of computers, this was plotted on semilogarithmic paper though it can also be done on spreadsheet programs. The time would be shown on the x-axis while the temperature would be shown on the y -axis. This simple heating curve can also determine the lag factor ( j ) and the slope ( f h ). It also measures the product temperature rather than the can temperature.
where I = RT (Retort Temperature) − IT (Initial Temperature) and where j is constant for a given product.
It can also be determined using Ball's formula, $B_{B} = f_{h}\log(jI/g)$,
where g is the number of degrees below the retort temperature on a simple heating curve at the end of the heating period, B B is the time in minutes from the beginning of the process to the end of the heating period, and f h is the time in minutes required for the straight-line portion of the heating curve plotted semilogarithmically on paper or a computer spreadsheet to pass through a log cycle.
A broken heating curve is also used in this method when dealing with different products in the same process such as chicken noodle soup in having to dealing with the meat and the noodles having different cooking times as an example. It is more complex than the simple heating curve for processing.
In the food industry, it is important to reduce the number of microbes in products to ensure proper food safety . This is usually done by thermal processing and finding ways to reduce the number of bacteria in the product. The time–temperature measure of bacterial reduction is the D-value, the time required to reduce the bacterial population by 90% (one log 10 ) at a given temperature. The reference D-value (D R ) is taken at 250 °F (121 °C).
The z-value is used to relate D -values at different temperatures, according to the equation $z = \frac{T_{2} - T_{1}}{\log D_{1} - \log D_{2}}$,
where $T_{1}$ and $T_{2}$ are temperatures in °F or °C at which the D -values $D_{1}$ and $D_{2}$ were measured.
The D -value is affected by the pH of the product; low-pH foods generally show shorter D -values. The D -value at an unknown temperature can be calculated [1] from the D -value at a given temperature provided the z -value is known.
The target of reduction in canning is the 12- D reduction of C. botulinum, which means that processing will reduce the population of this bacterium by a factor of $10^{12}$. The D R for C. botulinum is 0.21 minutes (12.6 seconds), so a 12- D reduction takes 2.52 minutes (151 seconds).
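The sketch below reproduces this 12-D figure and shows how the same D-value would be adjusted to another retort temperature using the z-value; the alternative temperature is chosen only for illustration.

```python
# Sketch relating D-values at different temperatures via the z-value, and
# reproducing the 12-D figure quoted above. D_R and z follow the text.
D_REF_MIN = 0.21     # D-value of C. botulinum at 250 °F (121 °C), minutes
Z_F = 18.0           # z-value, °F
T_REF_F = 250.0

def d_value_at(temp_f):
    return D_REF_MIN * 10 ** ((T_REF_F - temp_f) / Z_F)

print(f"12-D time at 250 °F: {12 * d_value_at(250):.2f} min")   # 2.52 min
print(f"12-D time at 240 °F: {12 * d_value_at(240):.2f} min")   # longer at a lower temperature
```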
This is taught in university courses in food science and microbiology and is applicable to cosmetic and pharmaceutical manufacturing.
In 2001, the Purdue University Computer Integrated Food Manufacturing Center and Pilot Plant put Ball's formula online for use. | https://en.wikipedia.org/wiki/Thermal_death_time |
Thermal decomposition , or thermolysis , is a chemical decomposition of a substance caused by heat. The decomposition temperature of a substance is the temperature at which the substance chemically decomposes. Because heat must be supplied to break the chemical bonds in the compound undergoing decomposition, heat acts as a reactant and the reaction is usually endothermic . [ 1 ] If decomposition is sufficiently exothermic , a positive feedback loop is created, producing thermal runaway and possibly an explosion or other chemical reaction.
A simple substance (like water ) may exist in equilibrium with its thermal decomposition products, effectively halting the decomposition. The equilibrium fraction of decomposed molecules increases with the temperature.
Since thermal decomposition is a kinetic process, the observed temperature of its beginning in most instances will be a function of the experimental conditions and sensitivity of the experimental setup. For a rigorous depiction of the process, the use of thermokinetic modeling is recommended. [ 2 ]
In other words, thermal decomposition is the breakdown of a compound into two or more different substances using heat, in an endothermic reaction.
When metals are near the bottom of the reactivity series , their compounds generally decompose easily at high temperatures. This is because stronger bonds form between atoms towards the top of the reactivity series, and strong bonds are difficult to break. For example, copper is near the bottom of the reactivity series, and copper sulfate (CuSO4) begins to decompose at about 200 °C (473 K; 392 °F), increasing rapidly at higher temperatures to about 560 °C (833 K; 1,040 °F). In contrast, potassium is near the top of the reactivity series, and potassium sulfate (K2SO4) does not decompose at its melting point of about 1,069 °C (1,342 K; 1,956 °F), nor even at its boiling point.
Many scenarios in the real world are affected by thermal degradation. One of the things affected is fingerprints. When anyone touches something, there is residue left from the fingers. If fingers are sweaty, or contain more oils, the residue contains many chemicals. De Paoli and her colleagues conducted a study testing thermal degradation on certain components found in fingerprints. For heat exposure, the amino acid and urea samples started degradation at 100 °C (373 K; 212 °F) and for lactic acid, the decomposition process started around 50 °C (323 K; 122 °F). [ 4 ] These components are necessary for further testing, so in the forensics discipline, decomposition of fingerprints is significant. | https://en.wikipedia.org/wiki/Thermal_decomposition |
In polymers , such as plastics , thermal degradation refers to a type of polymer degradation where damaging chemical changes take place at elevated temperatures, without the simultaneous involvement of other compounds such as oxygen . [ 1 ] [ 2 ] Simply put, even in the absence of air, polymers will begin to degrade if heated high enough. It is distinct from thermal-oxidation, which can usually take place at less elevated temperatures. [ 3 ]
The onset of thermal degradation dictates the maximum temperature at which a polymer can be used. It is an important limitation in how the polymer is manufactured and processed. For instance, polymers become less viscous at higher temperatures, which makes injection moulding easier and faster, but thermal degradation places a ceiling temperature on this. Polymer devolatilization is similarly affected.
At high temperatures, the components of the long chain backbone of the polymer can break ( chain scission ) and react with one another ( cross-link ) to change the properties of the polymer. These reactions result in changes to the molecular weight (and molecular weight distribution ) of the polymer and can affect its properties by causing reduced ductility and increased embrittlement, chalking, scorch, colour changes, cracking and general reduction in most other desirable physical properties. [ 4 ]
Under thermal stress, the chain end of the polymer detaches and forms a free radical of low reactivity. Through a chain reaction mechanism, the polymer then loses its monomers one by one, although the length of the molecular chain changes little over a short time. The reaction is shown below. [ 5 ] This process is common for poly(methyl methacrylate) (Perspex).
~CH2-C(CH3)(COOCH3)-CH2-C•(CH3)(COOCH3) → ~CH2-C•(CH3)(COOCH3) + CH2=C(CH3)COOCH3
Groups that are attached to the side of the backbone are held by bonds which are weaker than the bonds connecting the chain. When the polymer is heated, the side groups are stripped off from the chain before it is broken into smaller pieces.
For example, PVC eliminates HCl at around 100–120 °C.
~CH2-CH(Cl)-CH2-CH(Cl)~ → ~CH=CH-CH=CH~ + 2 HCl
Side group elimination can also proceed in a radical manner. For instance, methyl groups in polypropylene are susceptible to homolysis at high temperatures, leaving radicals on the polymer backbone. [ 6 ]
Radicals formed on the polymer backbone by either hydrogen abstraction or side-group elimination can cause the chain to break by beta scission . As a result, the molecular weight decreases rapidly. Because the new free radicals formed are highly reactive, monomer is not a product of this reaction; intermolecular chain transfer and disproportionation termination reactions can also occur.
~CH2-CH2-CH2-CH2-CH2-CH2-CH2• →
~CH2-CH2-CH=CH2 + CH3-CH2-CH2• or
~CH2• + CH2=CH-CH2-CH2-CH2-CH3
As polymers approach their ceiling temperature scission starts to take place randomly on the backbone.
Although thermal degradation is defined as an oxygen-free process, in practice it is difficult to exclude oxygen completely. Where oxygen is present, thermal oxidation is to be expected, leading to the formation of free radicals by way of hydroperoxides . These may then participate in thermal degradation reactions, accelerating the rate of breakdown.
Thermogravimetric analysis (TGA) refers to techniques in which a sample is heated in a controlled atmosphere at a defined heating rate while the sample's mass is measured. When a polymer sample degrades, its mass decreases due to the production of gaseous products such as carbon monoxide, water vapour and carbon dioxide.
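A minimal sketch of what a TGA trace of a single-step degradation can look like under a constant heating rate is given below, assuming simple first-order Arrhenius kinetics; the kinetic parameters are placeholders, not data for any particular polymer.

```python
import numpy as np

# Sketch: simulate a TGA run by integrating first-order Arrhenius mass loss,
# dm/dt = -m * A * exp(-Ea / (R*T)), under a constant heating rate.
R = 8.314          # J/(mol K)
A = 1e13           # pre-exponential factor, 1/s (assumed)
Ea = 180e3         # activation energy, J/mol (assumed)
beta = 10 / 60.0   # heating rate: 10 K/min expressed in K/s

T = np.arange(300.0, 900.0, 0.5)   # temperature grid, K
dt = 0.5 / beta                     # time spent in each temperature step, s
mass = np.empty_like(T)
m = 1.0                             # normalized initial mass
for i, Ti in enumerate(T):
    k_T = A * np.exp(-Ea / (R * Ti))
    m *= np.exp(-k_T * dt)          # exact solution for a constant rate over the step
    mass[i] = m

# Temperature at 5 % mass loss, a value often reported from TGA curves
T_onset = T[np.argmax(mass <= 0.95)]
print(f"approximate 5% mass-loss temperature: {T_onset:.0f} K")
```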
Differential thermal analysis (DTA) and differential scanning calorimetry (DSC) analyze the thermal response of the polymer during physical changes such as the glass transition and melting. [ 7 ] These techniques also measure the heat flow associated with oxidation. | https://en.wikipedia.org/wiki/Thermal_degradation_of_polymers
Thermal depolymerization ( TDP ) is the process of converting a polymer into a monomer or a mixture of monomers, [ 1 ] by predominantly thermal means. It may be catalyzed or un-catalyzed and is distinct from other forms of depolymerization which may rely on the use of chemicals or biological action. This process is associated with an increase in entropy .
For most polymers, thermal depolymerization is a chaotic process, giving a mixture of volatile compounds. Materials may be depolymerized in this way during waste management , with the volatile components produced being burnt as a form of synthetic fuel in a waste-to-energy process. For other polymers, thermal depolymerization is an ordered process giving a single product, or a limited range of products; these transformations are usually more valuable and form the basis of some plastic recycling technologies. [ 2 ]
For most polymeric materials, thermal depolymerization proceeds in a disordered manner, with random chain scission giving a mixture of volatile compounds. The result is broadly akin to pyrolysis , although at higher temperatures gasification takes place. These reactions can be seen during waste management , with the products being burnt as synthetic fuel in a waste-to-energy process. In comparison to simply incinerating the starting polymer, depolymerization gives a material with a higher heating value , which can be burnt more efficiently and may also be sold. Incineration can also produce harmful dioxins and dioxin-like compounds and requires specially designed reactors and emission control systems in order to be performed safely. As the depolymerization step requires heat, it is energy-consuming; thus, the ultimate balance of energy efficiency compared to straight incineration can be very tight and has been the subject of criticism. [ 3 ]
Many agricultural and animal wastes can be processed, but these are often already used as fertilizer , animal feed, and, in some cases, as feedstocks for paper mills or as low-quality boiler fuel. Thermal depolymerization can convert these into more economically valuable materials. Numerous biomass to liquid technologies have been developed. In general, biochemicals contain oxygen atoms, which are retained during pyrolysis, giving liquid products rich in phenols and furans . [ 4 ] These can be viewed as partially oxidized and make for low-grade fuels. Hydrothermal liquefaction technologies dehydrate the biomass during thermal processing to produce a more energy-rich product stream. [ 5 ] Similarly, gasification produces hydrogen, a very high-energy fuel.
Plastic waste consists mostly of commodity plastics and may be actively sorted from municipal waste . Pyrolysis of mixed plastics can give a fairly broad mix of chemical products (between about 1 and 15 carbon atoms), including gases and aromatic liquids. [ 6 ] Catalysts can give a better-defined product with a higher value. [ 7 ] Likewise, hydrocracking can be employed to give LPG products. The presence of PVC can be problematic, as its thermal depolymerization generates large amounts of HCl , which can corrode equipment and cause undesirable chlorination of the products. It must be either excluded or compensated for by installing dechlorination technologies. [ 8 ] Polyethylene and polypropylene account for just less than half of global plastic production and, being pure hydrocarbons , have a higher potential for conversion to fuel. [ 9 ] Plastic-to-fuel technologies have historically struggled to be economically viable due to the costs of collecting and sorting the plastic and the relatively low value of the fuel produced. [ 9 ] Large plants are seen as being more economical than smaller ones, [ 10 ] [ 11 ] but require more investment to build.
The method can, however, result in a mild net decrease in greenhouse gas emissions, [ 12 ] though other studies dispute this. For example, a 2020 study released by Reynolds on its own Hefty EnergyBag program showed net greenhouse gas emissions. The study showed that, when all cradle-to-grave energy costs are tallied, burning the plastic in a cement kiln was far superior: cement kiln fuel scored −61.1 kg CO 2 equivalents compared to +905 kg CO 2 eq. for pyrolysis. Pyrolysis also fared far worse than kiln fuel in terms of landfill reduction. [ 13 ] Other studies have confirmed that plastics pyrolysis-to-fuel programs are also more energy intensive. [ 14 ] [ 15 ]
For tire waste management, tire pyrolysis is also an option. Oil derived from tire rubber pyrolysis contains high sulfur content, which gives it high potential as a pollutant and requires hydrodesulfurization before use. [ 16 ] [ 17 ] The area faces legislative, economic, and marketing obstacles. [ 18 ] In most cases, tires are simply incinerated as tire-derived fuel .
Thermal treatment of municipal waste can involve the depolymerization of a very wide range of compounds, including plastics and biomass. Technologies can include simple incineration as well as pyrolysis, gasification , and plasma gasification . All of these are able to accommodate mixed and contaminated feedstocks. The main advantage is the reduction in volume of the waste, particularly in densely populated areas lacking suitable sites for new landfills . In many countries, incineration with energy recovery remains the most common method, with more advanced technologies being hindered by technical and cost hurdles. [ 19 ] [ 20 ]
Some materials thermally decompose in an ordered manner to give a single or limited range of products. By virtue of being pure materials, they are usually more valuable than the mixtures produced by disordered thermal depolymerization. For plastics this is usually the starting monomer , and when this is recycled back into fresh polymer, it is called feedstock recycling. In practice, not all depolymerization reactions are completely efficient, and some competitive pyrolysis is often observed.
Biorefineries convert low-value agricultural and animal waste into useful chemicals. The industrial production of furfural by the acid-catalyzed thermal treatment of hemicellulose has been in operation for over a century. Lignin has been the subject of significant research for the potential production of BTX and other aromatic compounds, [ 21 ] although such processes have not yet been commercialized with any lasting success. [ 22 ]
Certain polymers like PTFE , Nylon 6 , polystyrene , and PMMA [ 23 ] undergo depolymerization to give their starting monomers . These can be converted back into new plastic, a process called chemical or feedstock recycling. [ 24 ] [ 25 ] [ 26 ] In theory, this offers infinite recyclability, but it is more expensive and has a higher carbon footprint than other forms of plastic recycling; in practice, because of contamination, it still yields an inferior product at a higher energy cost than virgin polymer production.
Although rarely employed presently, coal gasification has historically been performed on a large scale. Thermal depolymerization is similar to other processes which use superheated water as a major phase to produce fuels, such as direct hydrothermal liquefaction . [ 27 ] These are distinct from processes using dry materials to depolymerize, such as pyrolysis . The term thermochemical conversion (TCC) has also been used for conversion of biomass to oils, using superheated water, although it is more usually applied to fuel production via pyrolysis. [ 28 ] [ 29 ] A demonstration plant due to start up in the Netherlands is said to be capable of processing 64 tons of biomass ( dry basis ) per day into oil. [ 30 ] Thermal depolymerization differs in that it contains a hydrous process followed by an anhydrous cracking / distillation process.
Condensation polymers bearing cleavable groups such as esters and amides can also be completely depolymerized by hydrolysis or solvolysis ; this can be a purely chemical process but may also be promoted by enzymes. [ 31 ] Such technologies are less well developed than those of thermal depolymerization but have the potential for lower energy costs. Thus far, [ as of? ] polyethylene terephthalate has been the most heavily studied polymer. [ 32 ] It has been suggested that waste plastic could be converted into other valuable chemicals (not necessarily monomers) by microbial action, [ 33 ] [ 34 ] but such technology is still in its infancy. | https://en.wikipedia.org/wiki/Thermal_depolymerization |
Thermal Design Power ( TDP ), also known as thermal design point , is the maximum amount of heat that a computer component (like a CPU , GPU or system on a chip ) can generate and that its cooling system is designed to dissipate during normal operation at a non-turbo clock rate (base frequency).
Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating. [ 1 ]
The average CPU power (ACP) is the power consumption of central processing units , especially server processors, under "average" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture ( Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the energy consumption under high workload; it is numerically somewhat higher than the "average" ACP rating of the same processor.
According to AMD the ACP rating includes the power consumption when running several benchmarks, including TPC-C , SPECcpu2006 , SPECjbb2005 and STREAM Benchmark [ 3 ] (memory bandwidth), [ 4 ] [ 5 ] [ 6 ] which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures.
The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system. In this case, CPUs either cause a system failure (a "therm-trip") or throttle their speed down. [ 7 ] Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a no longer operational fan or an incorrectly mounted heat sink.
For example, a laptop 's CPU cooling system may be designed for a 20 W TDP, which means that it can dissipate up to 20 watts of heat without exceeding the maximum junction temperature for the laptop's CPU. A cooling system can do this using an active cooling method (e.g. conduction coupled with forced convection) such as a heat sink with a fan , or any of the two passive cooling methods: thermal radiation or conduction . Typically, a combination of these methods is used.
Since safety margins and the definition of what constitutes a real application vary among manufacturers, TDP values between different manufacturers cannot be accurately compared (a processor with a TDP of, for example, 100 W will almost certainly use more power at full load than processors with a fraction of said TDP, and very probably more than processors with lower TDP from the same manufacturer, but it may or may not use more power than a processor from a different manufacturer with a not excessively lower TDP, such as 90 W). Additionally, TDPs are often specified for families of processors, with the low-end models usually using significantly less power than those at the high end of the family.
Until around 2006 AMD used to report the maximum power draw of its processors as TDP. Intel changed this practice with the introduction of its Conroe family of processors. [ 8 ] Intel calculates a specified chip's TDP according to the amount of power the computer's fan and heatsink need to be able to dissipate while the chip is under sustained load. Actual power usage can be higher or (much) lower than TDP, but the figure is intended to give guidance to engineers designing cooling solutions for their products. [ 9 ] In particular, Intel's measurement also does not fully take into account Intel Turbo Boost due to the default time limits, while AMD does because AMD Turbo Core always tries to push for the maximum power. [ 10 ]
TDP specifications for some processors may allow them to work under multiple different power levels, depending on the usage scenario, available cooling capacities and desired power consumption. Technologies that provide such variable TDPs include Intel 's configurable TDP (cTDP) and scenario design power (SDP), and AMD 's TDP power cap .
Configurable TDP ( cTDP ), also known as programmable TDP or TDP power cap , is an operating mode of later generations of Intel mobile processors (as of January 2014 [update] ) and AMD processors (as of June 2012 [update] ) that allows adjustments in their TDP values. By modifying the processor behavior and its performance levels, power consumption of a processor can be changed altering its TDP at the same time. That way, a processor can operate at higher or lower performance levels, depending on the available cooling capacities and desired power consumption. [ 11 ] : 69–72 [ 12 ] [ 13 ]
cTDP typically provides (but is not limited to) three operating modes: [ 11 ] : 71–72
For example, some of the mobile Haswell processors support cTDP up, cTDP down, or both modes. [ 14 ] As another example, some of the AMD Opteron processors and Kaveri APUs can be configured for lower TDP values. [ 13 ] IBM's POWER8 processor implements a similar power capping functionality through its embedded on-chip controller (OCC). [ 15 ]
Intel introduced scenario design power (SDP) for some low power Y-series processors since 2013. [ 16 ] [ 17 ] It is described as "an additional thermal reference point meant to represent thermally relevant device usage in real-world environmental scenarios." [ 18 ] [ promotional source? ] As a power rating, SDP is not an additional power state of a processor; it states the average power consumption of a processor using a certain mix of benchmark programs to simulate "real-world" scenarios. [ 16 ] [ 19 ] [ 20 ]
As some authors and users have observed, the Thermal Design Power (TDP) rating is an ambiguous parameter. [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] In fact, different manufacturers define the TDP using different calculation methods and different operating conditions, keeping these details largely undisclosed (with very few exceptions). This makes it highly problematic (if not impossible) to reasonably compare similar devices made by different manufacturers based on their TDP, and to optimize the design of a cooling system in terms of both heat management and cost.
To better understand the problem we must remember the basic concepts underlying Thermal management and Computer cooling . [ 26 ] Consider the thermal conduction path from the CPU case to the ambient air through a Heat sink , with: Pd, the thermal power to be dissipated; Tc, the temperature of the CPU case; Ta, the temperature of the ambient air entering the cooling system; and Rca, the case-to-ambient thermal resistance of the heat sink.
All these parameters are linked together by the following equation: Pd = (Tc − Ta) / Rca.
Hence, once we know the thermal power to be dissipated (Pd), the maximum allowed case temperature (Tc) of the CPU and the maximum expected ambient temperature (Ta) of the air entering the cooling fans, we can determine the fundamental characteristic of the required Heat sink , i.e. its thermal resistance Rca, as: Rca = (Tc − Ta) / Pd.
This equation can be rearranged as Tc = Ta + Rca · Pd,
where Pd can be replaced by the Thermal Design Power (TDP).
Note that the heat dissipation path going from the CPU to the ambient air flowing through the printed circuit of the motherboard has a thermal resistance that is orders of magnitude greater than that of the Heat sink , therefore it can be neglected in these computations.
Once all the input data is known, the previous formula allows one to choose a CPU heat sink with a suitable thermal resistance Rca between case and ambient air, sufficient to keep the maximum case temperature at or below a predefined value Tc.
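The heat sink selection step just described can be sketched in a few lines; the TDP and temperature limits below are illustrative values, not ratings of any specific CPU.

```python
# Sketch of the heat-sink selection calculation: given the TDP (used as Pd),
# the maximum allowed case temperature Tc and the worst-case ambient
# temperature Ta, compute the required case-to-ambient thermal resistance.
def required_rca(tdp_w, t_case_max_c, t_ambient_max_c):
    return (t_case_max_c - t_ambient_max_c) / tdp_w

rca = required_rca(tdp_w=95.0, t_case_max_c=70.0, t_ambient_max_c=40.0)
print(f"required Rca <= {rca:.3f} °C/W")   # choose a heat sink at or below this value
```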
By contrast, when dealing with the Thermal Design Power (TDP), ambiguities arise because CPU manufacturers usually do not disclose the exact conditions under which this parameter has been defined. The maximum acceptable case temperature Tc needed to obtain the rated performance is usually missing, as is the corresponding ambient temperature Ta and, last but not least, details about the specific computational test workload.
For instance, an Intel general support page states briefly that the TDP refers to "the power consumption under the maximum theoretical load" . [ 27 ] Intel also notes that, starting from the 12th generation of its CPUs, the term Thermal Design Power (TDP) has been replaced with Processor Base Power (PBP) . [ 28 ] In a support page dedicated to the Core i7 -7700 processor, Intel defines the TDP as the maximum amount of heat that a processor can produce when running real life applications , [ 29 ] without stating what these "real life applications" are.
Another example: in a 2011 white paper where the Xeon processors are compared with AMD ’s competing devices, Intel defines TDP as the upper point of the thermal profile measured at maximum case temperature , but without specifying what this temperature should be (nor the computing load). [ 30 ] It is important to note that all these definitions imply that the CPU is running at the base clock rate (non-turbo).
In conclusion:
In October 2019, the GamersNexus Hardware Guides [ 25 ] [ 34 ] showed a table with case and ambient temperature values obtained directly from AMD , describing the TDPs of some Ryzen 5, 7 and 9 CPUs . The formula relating all these parameters, given by AMD , is the usual TDP = (Tc − Ta) / Rca.
The declared TDPs of these devices range from 65 W to 105 W; the ambient temperature considered by AMD is +42 °C , and the case temperatures range from +61.8 °C to +69.3 °C , while the case-to-ambient thermal resistances range from 0.189 to 0.420 °C/W. | https://en.wikipedia.org/wiki/Thermal_design_power
Thermal desorption is an environmental remediation technology that utilizes heat to increase the volatility of contaminants such that they can be removed (separated) from the solid matrix (typically soil, sludge or filter cake sediment). The volatilized contaminants are then either collected or thermally destroyed. A thermal desorption system therefore has two major components; the desorber itself and the offgas treatment system .
Thermal desorption is not incineration .
Thermal desorption first appeared as an environmental treatment technology in 1985 when it was specified in the Record of Decision for the McKin Company Superfund site within the Royal River watershed in Maine. [ 1 ]
It is frequently referred to as "low temp" thermal desorption to differentiate it from high temperature incineration. An early direct fired thermal desorption project was the treatment of 8000 tons of toxaphene (a chlorinated pesticide) contaminated sandy soil at the S&S Flying Services site in Marianna Florida in 1990, with later projects exceeding 170,000 tons at the Cape Fear coal tar site in 1999. A status report from the United States Environmental Protection Agency shows that thermal desorption has been used at 69 Superfund sites through FY2000. In addition, hundreds of remediation projects have been completed using thermal desorption at non-Superfund sites.
Among in-situ and on-site treatment options, only incineration and stabilization have been used at more Superfund sites. Incineration suffers from poor public acceptance. Stabilization does not provide a permanent remedy, since the contaminants are still on site. Thermal desorption is a widely accepted technology that provides a permanent solution at an economically competitive cost.
The world's first large-scale thermal desorption plant for the treatment of mercury -containing wastes was erected in Wölsau for the remediation of the Chemical Factory Marktredwitz, which, founded in 1788, was considered to be the oldest chemical factory in Germany. Operation commenced in October 1993, including the first optimising phase. 50,000 tons of mercury-contaminated solid wastes were treated successfully between August 1993 and June 1996, and 25 metric tons of mercury were recovered from soil and rubble. The Marktredwitz plant is, however, often mischaracterized in the literature as only a pilot-scale plant.
Numerous desorber types are available today. Some of the more common types are listed below.
Most indirect fired rotary systems use an inclined rotating metallic cylinder to heat the feed material. The heat transfer mechanism is usually conduction through the cylinder wall. In this type of system neither the flame nor the products of combustion can contact the feed solids or the offgas. Think of it as a rotating pipe inside a furnace with both ends sticking outside of the furnace. The cylinder for full-scale transportable systems is typically five to eight feet in diameter with heated lengths ranging from twenty to fifty feet. With a carbon steel shell, the maximum solids temperature is around 1,000 °F, while temperatures of 1,800 °F with special alloy cylinders are attainable. Total residence time in this type of desorber normally ranges from 30 to 120 minutes. Treatment capacities can range from 2 to 30 tons per hour for transportable units.
Direct-fired rotary desorbers have been used extensively over the years for petroleum contaminated soils and soils contaminated with Resource Conservation and Recovery Act hazardous wastes as defined by the United States Environmental Protection Agency. A 1992 paper on treating petroleum contaminated soils estimated that between 20 and 30 contractors have 40 to 60 rotary dryer systems available. Today, it is probably closer to 6 to 10 contractors with 15 to 20 portable systems commercially available. The majority of these systems utilize a secondary combustion chamber (afterburner) or catalytic oxidizer to thermally destroy the volatilized organics. A few of these systems also have a quench and scrubber after the oxidizer which allows them to treat soils containing chlorinated organics such as solvents and pesticides . The desorbing cylinder for full-scale transportable systems is typically four to ten feet in diameter with heated lengths ranging from twenty to fifty feet. The maximum practical solids temperature for these systems is around 750 to 900 °F depending on the material of construction of the cylinder. Total residence time in this type of desorber normally ranges from 3 to 15 minutes. Treatment capacities can range from 6 to over 100 tons per hour for transportable units.
Heated screw systems are also an indirect heated system. Typically they use a jacketed trough with a double auger that intermeshes. The augers themselves frequently contain passages for the heating medium to increase the heat transfer surface area. Some systems use electric resistance heaters instead of a heat transfer media and may employ a single auger in each housing. The augers can range from 12 to 36 inches in diameter for full-scale systems, with lengths up to 20 feet. The auger/trough assemblies can be connected in parallel and/or series to increase throughput. Full scale capabilities up to 4 tons per hour have been demonstrated. This type of system has been most successful treating refinery wastes.
In the early days, there was a continuous infrared system that is no longer in common use. In theory, microwaves would be an excellent technical choice since uniform and accurately controlled heating can be achieved with no heat transfer surface fouling problems. One can only guess that capital and/or energy costs have prevented the development of a microwave thermal desorber at the commercial scale.
There are only three basic options for offgas treatment available. The volatilized contaminants in the offgas can either be discharged to atmosphere, collected or destroyed. In some cases, both a collection and destruction system are employed. In addition to managing the volatilized components, the particulate solids (dust) that exit the desorber must also be removed from the offgas.
When a collection system is used, the offgas must be cooled to condense the bulk of the volatilized components into a liquid. The offgas will exit most desorbers in the 350–900 °F range. The offgas is then typically cooled to somewhere between 120 and 40 °F to condense the bulk of the volatilized water and organic contaminants. Even at 40 °F, there may be measurable amounts of non-condensed organics. For this reason, after the condensation step, further treatment of the offgas is usually required. The cooled offgas may be treated by carbon adsorption, or thermal oxidation. Thermal oxidation can be accomplished using a catalytic oxidizer, an afterburner or by routing the offgas to the combustion heat source for the desorber. The volume of gas requiring treatment for indirect fired desorbers is a fraction of that required for a direct fired desorber. This requires smaller air pollution control trains for the gaseous process vent emissions. Some thermal desorption systems recycle the carrier gas, thereby further reducing the volume of gaseous emissions.
The condensed liquid from cooling the offgas is separated into organic and aqueous fractions. The water is either disposed of or used to cool the treated solids and prevent dusting. The condensed liquid organic is removed from the site. Depending on its composition, the liquid is either recycled as a supplemental fuel or destroyed in a fixed base incinerator. A thermal desorber removing 500 mg/kg of organic contaminants from 20,000 tons of soil will produce less than 3,000 US gallons (11,000 L) of liquid organic. In essence 20,000 tons of contaminated soil could be reduced to less than one tank truck of extracted liquid residue for off-site disposal.
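A rough arithmetic check of the figure quoted above (under 3,000 US gallons of liquid organics from 20,000 tons of soil at 500 mg/kg) is sketched below; the liquid density is an assumed typical value rather than a figure from the source.

```python
# Rough check: 500 mg/kg of organics removed from 20,000 (short) tons of soil,
# condensed to a liquid of assumed density ~0.9 kg/L.
soil_mass_kg = 20_000 * 907.185          # 20,000 short tons in kg
organics_kg = soil_mass_kg * 500e-6      # 500 mg/kg = 500e-6 kg per kg of soil
density_kg_per_L = 0.9                   # assumed liquid density
volume_L = organics_kg / density_kg_per_L
volume_gal = volume_L / 3.785

print(f"~{organics_kg:.0f} kg of organics, roughly {volume_gal:.0f} US gallons")
```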
Desorbers using offgas destruction systems use combustion to thermally destroy the volatilized organics components forming CO , CO 2 , NOx , SOx and HCl . The destruction unit may be called an afterburner, secondary combustion chamber, or thermal oxidizer. Catalytic oxidizers may also be used if the organic halide content of the contaminated media is low enough. Regardless of the name, the destruction unit is used to thermally destroy the hazardous organic constituents that were removed (volatilized) from the soil or waste.
T. McGowan, R. Carnes and P. Hulon, "Incineration of Pesticide-Contaminated Soil on a Superfund Site", paper on the S&S Flying Services Superfund Site remediation project, Marianna, FL, presented at HazMat '91 Conference, Atlanta, GA, October 1991 | https://en.wikipedia.org/wiki/Thermal_desorption
Temperature programmed desorption ( TPD ) is the method of observing desorbed molecules from a surface when the surface temperature is increased. When experiments are performed using well-defined surfaces of single-crystalline samples in a continuously pumped ultra-high vacuum (UHV) chamber, then this experimental technique is often also referred to as thermal desorption spectroscopy or thermal desorption spectrometry ( TDS ). [ 1 ] [ 2 ]
When molecules or atoms come in contact with a surface, they adsorb onto it, minimizing their energy by forming a bond with the surface. The binding energy varies with the combination of the adsorbate and surface. If the surface is heated, at one point, the energy transferred to the adsorbed species will cause it to desorb. The temperature at which this happens is known as the desorption temperature. Thus TPD shows information on the binding energy.
Since TPD observes the mass of desorbed molecules, it shows what molecules are adsorbed on the surface. Moreover, TPD distinguishes between different adsorption conditions of the same molecule from the differences between the desorption temperatures of molecules desorbing from different sites at the surface, e.g. terraces vs. steps. TPD also obtains the amounts of adsorbed molecules on the surface from the intensity of the peaks of the TPD spectrum, and the total amount of adsorbed species is given by the integral of the spectrum.
To measure TPD, one needs a mass spectrometer, such as a quadrupole mass spectrometer or a time-of-flight (TOF) mass spectrometer, under ultrahigh vacuum (UHV) conditions. The amount of adsorbed molecules is measured by increasing the temperature at a heating rate of typically 2 K/s to 10 K/s. Several masses may be simultaneously measured by the mass spectrometer, and the intensity of each mass as a function of temperature is obtained as a TDS spectrum.
The heating procedure is often controlled by the PID control algorithm, with the controller being either a computer or specialised equipment such as a Eurotherm .
Other methods of measuring desorption are Thermal Gravimetric Analysis (TGA) or using infrared detectors, thermal conductivity detectors etc.
TDS spectrum 1 and 2 are typical examples of a TPD measurement. Both are examples of NO desorbing from a single crystal in high vacuum. The crystal was mounted on a titanium filament and heated with current. The desorbing NO was measured using a mass spectrometer monitoring the atomic mass of 30.
Before 1990, analysis of a TPD spectrum was usually done using a so-called simplified method, the "Redhead" method, [ 3 ] assuming the exponential prefactor and the desorption energy to be independent of the surface coverage. After 1990, and with the use of computer algorithms, TDS spectra were analyzed using the "complete analysis method" [ 4 ] or the "leading edge method", [ 5 ] which allow the exponential prefactor and the desorption energy to depend on the surface coverage. Several available methods of analyzing TDS are described and compared in an article by A. M. de Jong and J. W. Niemantsverdriet. [ 6 ] During parameter optimization/estimation, using the integral has been found to create a more well behaved objective function than the differential. [ 7 ]
Thermal desorption is described by the Polanyi–Wigner equation , derived from the Arrhenius equation : $$r(\sigma) = -\frac{\mathrm{d}\sigma}{\mathrm{d}t} = \nu_{n}\,\sigma^{n}\exp\!\left(-\frac{E_{\text{act}}}{RT}\right),$$
where $\sigma$ is the surface coverage, $\nu_{n}$ the pre-exponential factor, $n$ the order of desorption, $E_{\text{act}}$ the activation energy of desorption, $R$ the gas constant and $T$ the temperature.
This equation is difficult to apply in practice because several of its variables are functions of the coverage and influence each other. [ 8 ] The "complete analysis method" calculates the pre-exponential factor and the activation energy at several coverages. This calculation can be simplified. First we assume the pre-exponential factor and the activation energy to be independent of the coverage.
We also assume a linear heating rate: $T(t) = T_{0} + \beta t$ (equation 1),
where $T_{0}$ is the initial temperature and $\beta$ the heating rate.
We assume that the pump rate of the system is indefinitely large, so that no gases adsorb during the desorption. The change in pressure during desorption is described as: (equation 2)
where:
We assume that $S$ is indefinitely large, so molecules do not re-adsorb during the desorption process, and we assume that $P/\alpha$ is indefinitely small compared to $\mathrm{d}P/\mathrm{d}t$, and thus: (equation 3)
Equations 2 and 3 lead to the conclusion that the desorption rate is a function of the change in pressure. One can therefore use experimental data that are a function of the pressure, such as the intensity recorded by a mass spectrometer, to determine the desorption rate.
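To make the relationship between the rate equation, the linear temperature ramp and the recorded spectrum concrete, the sketch below numerically integrates a first-order desorption rate and locates the peak temperature; all kinetic parameters are assumed values, not data from a real experiment.

```python
import numpy as np

# Sketch: generate a synthetic first-order TPD trace by integrating
# -d(sigma)/dt = nu * sigma * exp(-Ea/(R*T)) with a linear ramp T = T0 + beta*t.
# The desorption rate (proportional to the measured signal) peaks at Tm.
R = 8.314        # J/(mol K)
nu = 1e13        # pre-exponential factor, 1/s (assumed)
Ea = 120e3       # desorption activation energy, J/mol (assumed)
beta = 2.0       # heating rate, K/s
T0 = 300.0

t = np.arange(0.0, 150.0, 0.01)           # time grid, s
T = T0 + beta * t
sigma = np.empty_like(t)
rate = np.empty_like(t)
cov = 1.0                                 # initial relative coverage
for i, Ti in enumerate(T):
    k_des = nu * np.exp(-Ea / (R * Ti))
    r = k_des * cov                       # first-order desorption rate
    cov = max(cov - r * 0.01, 0.0)        # explicit Euler step, clamped at zero
    sigma[i], rate[i] = cov, r

T_m = T[np.argmax(rate)]
print(f"simulated desorption maximum at Tm ≈ {T_m:.0f} K")
print(f"coverage remaining at end of ramp: {sigma[-1]:.3f}")
```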
Since we assumed the pre-exponential factor and the activation energy to be independent of the coverage, thermal desorption can be described with a simplified Arrhenius equation : (equation 4)
where:
Using the aforementioned Redhead method (a method less precise than the "complete analysis" or the "leading edge" method) and the temperature $T_{m}$ of the desorption maximum, one can determine the activation energy: (equation 5) for n = 1
(equation 6) for n=2
M. Ehasi and K. Christmann [ 9 ] [ 10 ] described a simple method to determine the activation energy of the second order.
Equation 6 can be changed into: (equation 6a)
where $\sigma_{0}$ is the area of a TDS or TPD peak.
A graph of ln ( σ 0 T m 2 ) {\displaystyle \ln(\sigma _{0}{T_{m}}^{2})} versus 1 / T m {\displaystyle 1/T_{m}} results in a straight line with a slope equal to − E act / R {\displaystyle -E_{\text{act}}/R} .
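A minimal sketch of this Arrhenius-type analysis, assuming a set of peak areas $\sigma_0$ and peak temperatures $T_m$ measured at several initial coverages. The numbers are purely illustrative (not real data), and the sign of the slope depends on the convention used, so only its magnitude is taken.

```python
import numpy as np

# Hypothetical peak temperatures (K) and integrated peak areas (arb. units)
# from a series of TPD runs at increasing initial coverage.
t_m = np.array([430.0, 421.0, 413.0, 406.0])
sigma0 = np.array([0.25, 0.50, 0.75, 1.00])

R = 8.314  # J/(mol K)

x = 1.0 / t_m
y = np.log(sigma0 * t_m**2)

slope, intercept = np.polyfit(x, y, 1)
# The text quotes the slope as -E_act/R; with the opposite sign convention
# the magnitude is the same, so we take abs(slope).
e_act = abs(slope) * R
print(f"E_act ≈ {e_act / 1000:.0f} kJ/mol")
```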
Thus, in a first-order reaction $T_m$ is independent of the surface coverage, so by changing the surface coverage and observing whether $T_m$ shifts, one can determine $n$. Usually a fixed value of the pre-exponential factor is assumed and $\beta$ is known; with these values one can derive $E_{\text{act}}$ iteratively from $T_m$. | https://en.wikipedia.org/wiki/Thermal_desorption_spectroscopy
Thermal destratification is the process of mixing the internal air in a building to eliminate stratified layers and achieve temperature equalization throughout the building envelope .
Destratification is the reverse of the natural process of thermal stratification, which is the layering of differing (typically increasing) air temperatures from floor to ceiling. Stratification is caused by hot air rising up to the ceiling or roof space because it is lighter than the surrounding cooler air. Conversely, cool air falls to the floor as it is heavier than the surrounding warmer air. [ citation needed ]
In a stratified building, temperature differentials of up to 1.5°C per vertical foot are common, and the higher a building's ceiling, the more extreme this temperature differential can be. [ 2 ] In extreme cases, temperature differentials of 10°C have been found over a height of 1 meter. Other variables that influence the level of thermal stratification include heat generated by people and processes present in the building, insulation of the space from outside weather conditions, solar gain, specification of the HVAC system, location of supply and return ducts, and vertical air movement inside the space, usually supplied by destratification fans. Computational fluid dynamics can be used to predict the level of stratification in a space. [ citation needed ]
In a study conducted by the Building Services Research and Information Association (BSRIA), the wasted energy due to stratification increased consistently with the temperature differential from floor to ceiling (ΔT). [ 3 ] The study indicates that stratified buildings tend to overheat or overcool based on the temperature at the thermostat, which tends to be lower than the overall heat energy present in the room. The study also showed that energy waste due to stratification was present at ceiling heights ranging from 20 ft to 40 ft, and that higher ceilings caused higher energy waste, even at the same ΔT. Since ΔT tends to be higher under taller ceilings, the effect of stratification is compounded, causing substantial energy waste in high-ceiling buildings.
Since stratification and its associated costs scale linearly with ΔT, the definition of destratification will differ based on opinion and use case. Full destratification, or a 0° ΔT from floor to ceiling, is unlikely to occur in any building. Since the costs of stratification decrease linearly as ΔT approaches 5.4°F, and no study has yet looked at the effects of stratification below 5.4°F, it is not uncommon to consider any space with a ΔT below 5°F to be destratified. In the United States, ASHRAE Standard 55 prescribes 3°C as the limit for the vertical air temperature difference between head and ankle levels, but has no standard recommending an ideal ΔT between floor and ceiling. [ 4 ]
Reducing thermal stratification can be accomplished by controlling the variables that are associated with increased stratification. Since many of the variables, including ceiling height, people and processes, solar gain, and outside weather conditions, cannot be controlled, the most common technologies used are related to the building's HVAC (heating, ventilation, and air conditioning) system. Among the cheapest, most effective, and easiest-to-install technologies are destratification fans, including both axial destratification fans and HVLS (high-volume low-speed) fans. [ citation needed ]
Axial destratification fans are self-contained units installed in an array at the ceiling with the goal of blowing conditioned air at the ceiling down to the floor, where people live and work. Because axial fans are designed to blow air straight down at the floor, they can be used in ceiling and roof structures over 100 ft tall. Although axial destratification fans can achieve destratification with low airflow volumes (CFM), it is imperative that the air leaving the nozzle reach an air speed at the floor of between 0.2 and 0.5 m/s. The result of this level of air movement is the integration of conditioned air from the ceiling with air at the floor level. Failing to reach the floor will destratify intermediate layers of air but not achieve destratification at floor level. Since the area around the thermostat will not be destratified in this instance, it is hypothesized that there will be little or no cost savings, as the thermostat will continue to overheat or overcool the room.
An experiment in a room with a 21 ft. ceiling yielded a savings of 23.5% with the use of axial destratification fans. [ 5 ]
Because of their size, HVLS fans are normally installed in new construction rather than retrofits, as the roof structure may have to be redesigned to accommodate the increased weight and size. It is not uncommon to require the relocation of lights, due to strobing as the large fan blades pass under them, and of sprinkler systems, which typically require unobstructed access to the floor to meet fire code. When used in the summer to encourage evaporative cooling , HVLS fans are run forward, blowing air at the floor. When used for destratification in the winter, the fans are run in reverse, blowing air towards the ceiling, which then circulates around the room. The height at which HVLS fans can be effective is limited compared to axial destratification fans.
This method has its greatest benefits in the heating, ventilation, and air conditioning (HVAC) industry and in heating and cooling for buildings; it has been found that "stratification is the single biggest waste of energy in buildings today." [ 6 ]
By incorporating thermal destratification technology into buildings, energy requirements are reduced: heating systems no longer over-deliver to constantly replace the heat that rises away from the floor area, because the already-heated air is redistributed from the unoccupied ceiling space back down to floor level until temperature equalisation is achieved. For cooling, destratification systems ensure the cooled air supplied is circulated fully and distributed evenly throughout internal environments, eliminating hot and cold spots and satisfying thermostats for longer periods of time. As a result, destratification technology has great potential for carbon emission reductions due to the reduced energy requirement, and is in turn capable of cutting costs for businesses, sometimes by up to 50%. [ 7 ] This is supported by the Carbon Trust, which recommends destratification in buildings as one of its top three methods to reduce carbon dioxide emissions. [ 8 ]
Destratification naturally increases air movement at the floor, reducing "hot spots" and "cold spots" in a room. It can be used in typically cold areas, like grocery store freezer cases, to warm patrons shopping nearby. In addition, air movement from destratification fans can be used to help meet ASHRAE Standard 62.1 by increasing the amount of air movement at the floor. | https://en.wikipedia.org/wiki/Thermal_destratification |
In thermodynamics , thermal diffusivity is the thermal conductivity divided by density and specific heat capacity at constant pressure. [ 1 ] It is a measure of the rate of heat transfer inside a material and has SI units of m²/s. It is an intensive property . Thermal diffusivity is usually denoted by lowercase alpha ($\alpha$), but $a$, $h$, $\kappa$ (kappa), [ 2 ] $K$, [ 3 ] $D$, and $D_T$ are also used.
The formula is [ 4 ] $\alpha = \frac{k}{\rho c_p}$, where $k$ is the thermal conductivity (W/(m·K)), $\rho$ is the density (kg/m³), and $c_p$ is the specific heat capacity (J/(kg·K)).
Together, $\rho c_p$ can be considered the volumetric heat capacity (J/(m³·K)).
Thermal diffusivity is a positive coefficient in the heat equation : [ 5 ] $\frac{\partial T}{\partial t} = \alpha \nabla^{2} T.$
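As a rough illustration of the diffusivity's role in this equation, the following sketch integrates the 1D heat equation with an explicit finite-difference scheme; the material value (copper), rod size, and grid are arbitrary choices for demonstration.

```python
import numpy as np

# 1D rod, ends held at ambient, with an initial hot spot in the middle.
alpha = 1.11e-4   # thermal diffusivity of copper, roughly 1.1e-4 m^2/s
L, n = 0.1, 101   # rod length (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha  # time step within the stability limit dt <= dx^2/(2*alpha)

T = np.zeros(n)           # temperature relative to ambient (K)
T[n // 2] = 100.0         # initial temperature spike

for _ in range(500):
    # Explicit update of dT/dt = alpha * d2T/dx2 at the interior points
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"peak after diffusion: {T.max():.2f} K above ambient")
```

A larger $\alpha$ lets the same smoothing happen in proportionally less time, which is the "concavity is smoothed out" picture described next.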
One way to view thermal diffusivity is as the ratio of the time derivative of temperature to its curvature , quantifying the rate at which temperature concavity is "smoothed out". In a substance with high thermal diffusivity, heat moves rapidly through it because the substance conducts heat quickly relative to its energy storage capacity or "thermal bulk".
Thermal diffusivity and thermal effusivity are related concepts and quantities used to simulate non-equilibrium thermodynamics . Diffusivity is the more fundamental concept and describes the stochastic process of heat spread throughout some local volume of a substance. Effusivity describes the corresponding transient process of heat flow through some local area of interest. Upon reaching a steady state , where the stored energy distribution stabilizes, the thermal conductivity ( k ) may be sufficient to describe heat transfers inside solid or rigid bodies by applying Fourier's law . [ 6 ] [ 7 ]
Thermal diffusivity is often measured with the flash method . [ 8 ] [ 9 ] It involves heating a strip or cylindrical sample with a short energy pulse at one end and analyzing the temperature change (reduction in amplitude and phase shift of the pulse) a short distance away. [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Thermal_diffusivity |
The term " thermal diode " can refer to:
A thermal diode in this sense is a device whose thermal resistance differs for heat flow in one direction and the other. That is, when the thermal diode's first terminal is hotter than the second, heat flows easily from the first to the second, but when the second terminal is hotter than the first, little heat flows from the second to the first.
Such an effect was first observed in a copper – cuprous-oxide interface by Chauncey Starr in the 1930s. Beginning in 2002, theoretical models were proposed to explain this effect. In 2006 the first microscopic solid-state thermal diodes were built. [ 1 ] In April 2015 Italian researchers at CNR announced development of a working thermal diode, [ 2 ] publishing results in Nature Nanotechnology . [ 3 ]
Thermosiphons can act as one-way heat conductors. Heat pipes operating in gravity may also have this effect.
A sensor device embedded on microprocessors used to monitor the temperature of the processor's die is also known as a "thermal diode".
This application of the thermal diode is based on the property of electrical diodes that the voltage across them varies linearly with temperature: as the temperature increases, a diode's forward voltage decreases. Microprocessors with high clock rates encounter high thermal loads, and thermal diodes are used to monitor their temperature limits. They are usually placed in the part of the processor core where the highest temperature is encountered, and the voltage developed across the diode varies with its temperature. All modern AMD and Intel CPUs, as well as AMD and Nvidia GPUs, have on-chip thermal diodes. As the sensor is located directly on the processor die, it provides the most local and relevant CPU and GPU temperature readings. Silicon diodes have a temperature dependency of about −2 mV per degree Celsius; thus the junction temperature can be determined by passing a set current through the diode and then measuring the voltage developed across it. In addition to processors, the same technology is widely used in dedicated temperature-sensor ICs.
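A minimal sketch of this readout principle; the calibration point of 0.700 V at 25 °C and the −2 mV/°C coefficient below are illustrative assumptions, not the trimmed calibration data a real sensor IC would use.

```python
def junction_temperature(v_forward, v_cal=0.700, t_cal=25.0, coeff=-0.002):
    """Estimate diode junction temperature from its forward voltage.

    v_forward: measured forward voltage (V) at a fixed bias current.
    v_cal/t_cal: assumed calibration pair, 0.700 V at 25 C (illustrative).
    coeff: forward-voltage temperature coefficient, about -2 mV/C for silicon.
    """
    return t_cal + (v_forward - v_cal) / coeff

# A reading of 0.630 V implies roughly 60 C at the junction.
print(f"{junction_temperature(0.630):.1f} C")
```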
There are two types. One uses semiconductors (or, less efficiently, metals, i.e. thermocouples ), working on the principles of the Peltier–Seebeck effect . The other relies on vacuum tubes and the principles of thermionic emission .
As of 2009, a team at MIT was working on thermal diodes that convert heat to electricity at lower temperatures than previously possible. [ 4 ] These could be used in the construction of engines or in electricity production. The efficiency of present thermal diodes is about 18% over the temperature range of 200–300 degrees Celsius. [ 5 ] | https://en.wikipedia.org/wiki/Thermal_diode
Thermal dissolution is a method of liquefaction of solid fossil fuels; it is a hydrogen-donor solvent refining process. It may be used for shale oil extraction and coal liquefaction . [ 1 ] Other processes for extracting liquids from solid fuels are pyrolysis and hydrogenation . [ 2 ] Compared to hydrogenation, thermal dissolution operates under milder conditions, uses a simpler process, and consumes no catalyst. [ 3 ] | https://en.wikipedia.org/wiki/Thermal_dissolution
Thermal ecology is the study of the interactions between temperature and organisms. Such interactions include the effects of temperature on an organism's physiology, behavioral patterns, and relationship with its environment. While being warmer is usually associated with greater fitness, maintaining this level of heat costs a significant amount of energy. Organisms will make various trade-offs so that they can continue to operate at their preferred temperatures and optimize metabolic functions. With the emergence of climate change, scientists are investigating how species will be affected and what changes they will undergo in response.
While it is not known exactly when thermal ecology began being recognized as a distinct branch of science, in 1969 the Savannah River Ecology Laboratory (SREL) developed a research program on thermal stress caused by heated water, previously used to cool nuclear reactors, being released into various nearby bodies of water. The SREL, alongside the DuPont Company's Savannah River Laboratory and the Atomic Energy Commission, sponsored the first scientific symposium on thermal ecology in 1974 to discuss this issue and similar instances; a second symposium was held the following year, in 1975. [ 1 ]
Temperature has a notable effect on animals, contributing to body growth and size, and behavioral and physical adaptations. Ways that animals can control their body temperature include generating heat through daily activity and cooling down through prolonged inactivity at night. Because this cannot be done by marine animals, they have adapted to have traits such as a small surface-area-to-volume ratio to minimize heat transfer with their environment and the creation of antifreeze in the body for survival in extreme cold conditions. [ 2 ]
Endotherms expend a large amount of energy keeping their body temperatures warm and therefore require a large energy intake to make up for it. There are several ways that they have evolved to solve this issue. For instance, following Bergmann's Rule , endotherms in colder climates tend to be larger than those in warmer climates as a way to conserve internal heat. [ 3 ] Other methods include reducing internal temperatures and metabolic rates through daily torpor and hibernation. [ 4 ]
Strix occidentalis , the California spotted owl , has a preferred temperature range of around 18.20–35.20 °C and is less tolerant of heat than most other birds, exhibiting behaviors such as wing drooping and increased breathing at 30–34 °C. Because of this, they tend to live in environments that are resistant to temperature change, such as old-growth forests. [ 5 ]
Because the main source of heat for ectotherms comes from their environment, thermal requirements change from species to species depending on geographical location. Because some species maintain a static preferred body temperature across generations, they tend to respond to drastic environmental change with behavioral adjustments, resorting to physiological adjustments only as a last option. In addition, ectotherms, similarly to endotherms, are generally larger in size when living in colder climates, following the temperature-size rule . [ 3 ]
Podarcis siculus , otherwise known as the Italian wall lizard, has a preferred temperature range of around 28.40–31.57 °C for both males and females. A strong direct relationship has been observed between their body temperatures and air temperature in the summer, and a weak correlation has been observed in the spring. To control their internal temperature, seeking shade under rocks and leaves has proven effective. [ 6 ]
Many processes during plant reproduction operate at specific temperature ranges making temperature important for reproductive success. Increasing the temperature of the reproductive organs in plants results in more frequent visitations from pollinators and an increase in the rate of metabolic processes. [ 7 ] Factors that affect the capture and maintaining of heat in plants include flower orientation, size and shape, coloration, opening and closure, pubescence, and thermogenesis . [ 8 ]
Due to recent global climate change, thermal ecology has become a topic of interest for scientists concerned with ecological response. Through observation it has been found that organisms typically respond to changes in weather and temperature either by moving to an environment in which these factors match what they are already accustomed to, or by staying in their current environment and becoming acclimated to the new conditions. [ 3 ] In a study of the fish species Galaxias platei , it was concluded that the direct impacts of climate change, such as increased temperatures, would most likely not pose a significant threat; however, indirect impacts such as habitat loss may be detrimental. [ 9 ] | https://en.wikipedia.org/wiki/Thermal_ecology
The thermal effective mass of electrons in a metal is the apparent mass due to interactions with the periodic potential of the crystal lattice , with phonons (e.g. phonon drag ), and with other electrons. The resulting effective mass of electrons contributes to the electronic heat capacity of the metal, leading to deviations from the heat capacity of a free electron gas . | https://en.wikipedia.org/wiki/Thermal_effective_mass
In thermodynamics , the thermal efficiency ($\eta_{\rm th}$) is a dimensionless performance measure of a device that uses thermal energy , such as an internal combustion engine , steam turbine , steam engine , boiler , furnace , refrigerator , or air conditioner.
For a heat engine , thermal efficiency is the ratio of the net work output to the heat input; in the case of a heat pump , thermal efficiency (known as the coefficient of performance or COP) is the ratio of net heat output (for heating), or net heat removed (for cooling), to the energy input (external work). The efficiency of a heat engine is fractional, as the output is always less than the input, while the COP of a heat pump is more than 1. These values are further restricted by the Carnot theorem .
In general, energy conversion efficiency is the ratio between the useful output of a device and the input, in energy terms. For thermal efficiency, the input, $Q_{\rm in}$, to the device is heat , or the heat-content of a fuel that is consumed. The desired output is mechanical work , $W_{\rm out}$, or heat, $Q_{\rm out}$, or possibly both. Because the input heat normally has a real financial cost, a memorable, generic definition of thermal efficiency is [ 1 ]
$\eta_{\rm th} \equiv \frac{\text{benefit}}{\text{cost}}.$
From the first law of thermodynamics , the energy output cannot exceed the input, and by the second law of thermodynamics it cannot be equal in a non-ideal process, so $0 \leq \eta_{\rm th} < 1$.
When expressed as a percentage, the thermal efficiency must be between 0% and 100%. Efficiency must be less than 100% because there are inefficiencies such as friction and heat loss that convert the energy into alternative forms. For example, a typical gasoline automobile engine operates at around 25% efficiency, and a large coal-fuelled electrical generating plant peaks at about 46%. However, advances in Formula 1 motorsport regulations have pushed teams to develop highly efficient power units which peak around 45–50% thermal efficiency. The largest diesel engine in the world peaks at 51.7%. In a combined cycle plant, thermal efficiencies approach 60%. [ 2 ] Such a real-world value may be used as a figure of merit for the device.
For engines where a fuel is burned, there are two types of thermal efficiency: indicated thermal efficiency and brake thermal efficiency. [ 3 ] This form of efficiency is only appropriate when comparing similar types or similar devices.
For other systems, the specifics of the calculations of efficiency vary, but the non-dimensional input is still the same: Efficiency = Output energy / input energy.
Heat engines transform thermal energy , or heat, $Q_{\rm in}$, into mechanical energy , or work , $W_{\rm net}$. They cannot do this task perfectly, so some of the input heat energy is not converted into work, but is dissipated as waste heat $Q_{\rm out} < 0$ into the surroundings: $W_{\rm net} = Q_{\rm in} + Q_{\rm out}$
The thermal efficiency of a heat engine is the percentage of heat energy that is transformed into work . Thermal efficiency is defined as $\eta_{\rm th} \equiv \frac{W_{\rm net}}{Q_{\rm in}}$
The efficiency of even the best heat engines is low; usually below 50% and often far below. So the energy lost to the environment by heat engines is a major waste of energy resources. Since a large fraction of the fuels produced worldwide go to powering heat engines, perhaps up to half of the useful energy produced worldwide is wasted in engine inefficiency, although modern cogeneration , combined cycle and energy recycling schemes are beginning to use this heat for other purposes. This inefficiency can be attributed to three causes. There is an overall theoretical limit to the efficiency of any heat engine due to temperature, called the Carnot efficiency. Second, specific types of engines have lower limits on their efficiency due to the inherent irreversibility of the engine cycle they use. Thirdly, the nonideal behavior of real engines, such as mechanical friction and losses in the combustion process causes further efficiency losses.
The second law of thermodynamics puts a fundamental limit on the thermal efficiency of all heat engines. Even an ideal, frictionless engine can't convert anywhere near 100% of its input heat into work. The limiting factors are the temperature at which the heat enters the engine, $T_{\rm H}$, and the temperature of the environment into which the engine exhausts its waste heat, $T_{\rm C}$, measured on an absolute scale, such as the Kelvin or Rankine scale. From Carnot's theorem , for any engine working between these two temperatures: [ 4 ] $\eta_{\rm th} \leq 1 - \frac{T_{\rm C}}{T_{\rm H}}$
This limiting value is called the Carnot cycle efficiency because it is the efficiency of an unattainable, ideal, reversible engine cycle called the Carnot cycle . No device converting heat into mechanical energy, regardless of its construction, can exceed this efficiency.
Examples of $T_{\rm H}$ are the temperature of hot steam entering the turbine of a steam power plant , or the temperature at which the fuel burns in an internal combustion engine . $T_{\rm C}$ is usually the ambient temperature where the engine is located, or the temperature of a lake or river into which the waste heat is discharged. For example, if an automobile engine burns gasoline at a temperature of $T_{\rm H} = 816\,^{\circ}\mathrm{C} = 1500\,^{\circ}\mathrm{F} = 1089\ \mathrm{K}$ and the ambient temperature is $T_{\rm C} = 21\,^{\circ}\mathrm{C} = 70\,^{\circ}\mathrm{F} = 294\ \mathrm{K}$, then its maximum possible efficiency is: $\eta_{\rm th} \leq 1 - \frac{294\ \mathrm{K}}{1089\ \mathrm{K}} \approx 73\%$
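A one-line calculation of this limit; the function below simply evaluates the Carnot bound quoted above for any pair of absolute temperatures, using the gasoline-engine numbers from the text.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum (Carnot) efficiency of a heat engine between two absolute temperatures (K)."""
    return 1.0 - t_cold_k / t_hot_k

# The gasoline-engine example from the text: 1089 K source, 294 K ambient.
print(f"{carnot_efficiency(1089, 294):.1%}")  # -> 73.0%
```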
It can be seen that since $T_{\rm C}$ is fixed by the environment, the only way for a designer to increase the Carnot efficiency of an engine is to increase $T_{\rm H}$, the temperature at which the heat is added to the engine. The efficiency of ordinary heat engines also generally increases with operating temperature , and the development of advanced structural materials that allow engines to operate at higher temperatures is an active area of research.
Due to the other causes detailed below, practical engines have efficiencies far below the Carnot limit. For example, the average automobile engine is less than 35% efficient.
Carnot's theorem applies to thermodynamic cycles, where thermal energy is converted to mechanical work. Devices that convert a fuel's chemical energy directly into electrical work, such as fuel cells , can exceed the Carnot efficiency. [ 5 ] [ 6 ]
The Carnot cycle is reversible and thus represents the upper limit on efficiency of an engine cycle. Practical engine cycles are irreversible and thus have inherently lower efficiency than the Carnot efficiency when operated between the same temperatures $T_{\rm H}$ and $T_{\rm C}$. One of the factors determining efficiency is how heat is added to the working fluid in the cycle, and how it is removed. The Carnot cycle achieves maximum efficiency because all the heat is added to the working fluid at the maximum temperature $T_{\rm H}$, and removed at the minimum temperature $T_{\rm C}$. In contrast, in an internal combustion engine, the temperature of the fuel-air mixture in the cylinder is nowhere near its peak temperature as the fuel starts to burn, and only reaches the peak temperature as all the fuel is consumed, so the average temperature at which heat is added is lower, reducing efficiency.
An important parameter in the efficiency of combustion engines is the specific heat ratio of the air-fuel mixture, γ . This varies somewhat with the fuel, but is generally close to the air value of 1.4. This standard value is usually used in the engine cycle equations below, and when this approximation is made the cycle is called an air-standard cycle .
One should not confuse thermal efficiency with other efficiencies that are used when discussing engines. The above efficiency formulas are based on simple idealized mathematical models of engines, with no friction and working fluids that obey simplified thermodynamic models. Real engines have many departures from ideal behavior that waste energy, reducing actual efficiencies below the theoretical values given above. Examples are:
These factors may be accounted for when analyzing thermodynamic cycles; however, discussion of how to do so is outside the scope of this article.
For a device that converts energy from another form into thermal energy (such as an electric heater, boiler, or furnace), the thermal efficiency is $\eta_{\rm th} = \frac{Q_{\rm out}}{Q_{\rm in}}$
where the $Q$ quantities are heat-equivalent values.
So, for a boiler that produces 210 kW (or 700,000 BTU/h) output for each 300 kW (or 1,000,000 BTU/h) heat-equivalent input, its thermal efficiency is 210/300 = 0.70, or 70%. This means that 30% of the energy is lost to the environment.
An electric resistance heater has a thermal efficiency close to 100%. [ 8 ] When comparing heating units, such as a highly efficient electric resistance heater to an 80% efficient natural gas-fuelled furnace, an economic analysis is needed to determine the most cost-effective choice.
The heating value of a fuel is the amount of heat released during an exothermic reaction (e.g., combustion ) and is a characteristic of each substance. It is measured in units of energy per unit of the substance, usually per unit mass (e.g., kJ/kg) or per mole (e.g., J/mol).
The heating value for fuels is expressed as the HHV, LHV, or GHV to distinguish treatment of the heat of phase changes:
Which definition of heating value is being used significantly affects any quoted efficiency. Not stating whether an efficiency is HHV or LHV renders such numbers very misleading.
Heat pumps , refrigerators and air conditioners use work to move heat from a colder to a warmer place, so their function is the opposite of a heat engine. The work energy ($W_{\rm in}$) that is applied to them is converted into heat, and the sum of this energy and the heat energy that is taken up from the cold reservoir ($Q_{\rm C}$) is equal to the magnitude of the total heat energy given off to the hot reservoir ($|Q_{\rm H}|$): $|Q_{\rm H}| = Q_{\rm C} + W_{\rm in}$
Their efficiency is measured by a coefficient of performance (COP). Heat pumps are measured by the efficiency with which they give off heat to the hot reservoir, $\mathrm{COP}_{\rm heating}$; refrigerators and air conditioners by the efficiency with which they take up heat from the cold space, $\mathrm{COP}_{\rm cooling}$: $\mathrm{COP}_{\rm heating} = \frac{|Q_{\rm H}|}{W_{\rm in}}, \qquad \mathrm{COP}_{\rm cooling} = \frac{Q_{\rm C}}{W_{\rm in}}$
The reason the term "coefficient of performance" is used instead of "efficiency" is that, since these devices are moving heat, not creating it, the amount of heat they move can be greater than the input work, so the COP can be greater than 1 (100%). Therefore, heat pumps can be a more efficient way of heating than simply converting the input work into heat, as in an electric heater or furnace.
Since these devices are essentially heat engines operated in reverse, they are also limited by Carnot's theorem . The limiting value of the Carnot 'efficiency' for these processes, with the equality theoretically achievable only with an ideal 'reversible' cycle, is: $\mathrm{COP}_{\rm heating} \leq \frac{T_{\rm H}}{T_{\rm H} - T_{\rm C}}, \qquad \mathrm{COP}_{\rm cooling} \leq \frac{T_{\rm C}}{T_{\rm H} - T_{\rm C}}$
The same device used between the same temperatures is more efficient when considered as a heat pump than when considered as a refrigerator, since $\mathrm{COP}_{\rm heating} = \mathrm{COP}_{\rm cooling} + 1.$
This is because when heating, the work used to run the device is converted to heat and adds to the desired effect, whereas if the desired effect is cooling, the heat resulting from the input work is just an unwanted by-product. Sometimes, the term efficiency is used for the ratio of the achieved COP to the Carnot COP, which cannot exceed 100%. [ 9 ]
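A brief numeric check of these relations, evaluating the Carnot COP limits for an illustrative heat pump working between 273 K outdoor air and a 293 K room (the temperatures are arbitrary example values):

```python
def carnot_cop_heating(t_hot_k, t_cold_k):
    """Carnot limit for a heat pump's heating COP."""
    return t_hot_k / (t_hot_k - t_cold_k)

def carnot_cop_cooling(t_hot_k, t_cold_k):
    """Carnot limit for a refrigerator's or air conditioner's cooling COP."""
    return t_cold_k / (t_hot_k - t_cold_k)

t_h, t_c = 293.0, 273.0  # 20 C room, 0 C outdoor air
cop_h = carnot_cop_heating(t_h, t_c)
cop_c = carnot_cop_cooling(t_h, t_c)
print(f"heating COP limit: {cop_h:.2f}")   # 14.65
print(f"cooling COP limit: {cop_c:.2f}")   # 13.65
print(f"difference: {cop_h - cop_c:.2f}")  # 1.00, as stated in the text
```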
The 'thermal efficiency' is sometimes called the energy efficiency . In the United States, in everyday usage the SEER is the more common measure of energy efficiency for cooling devices, as well as for heat pumps when in their heating mode. For energy-conversion heating devices their peak steady-state thermal efficiency is often stated, e.g., 'this furnace is 90% efficient', but a more detailed measure of seasonal energy effectiveness is the annual fuel use efficiency (AFUE). [ 10 ]
The role of a heat exchanger is to transfer heat between two mediums, so the performance of the heat exchanger is closely related to energy or thermal efficiency. [ 11 ] A counter-flow heat exchanger is the most efficient type of heat exchanger in transferring heat energy from one circuit to the other [ citation needed ] . However, for a more complete picture of heat exchanger efficiency, exergetic considerations must be taken into account. Thermal efficiencies of internal combustion engines are typically higher than those of external combustion engines. | https://en.wikipedia.org/wiki/Thermal_efficiency
In thermodynamics , a material's thermal effusivity , also known as thermal responsivity , is a measure of its ability to exchange energy with its surroundings. It is an intensive quantity defined as the square root of the product of the material's thermal conductivity ($\lambda$) and its volumetric heat capacity ($\rho c_p$), or equivalently as the ratio of thermal conductivity to the square root of thermal diffusivity ($\alpha$): $e = \sqrt{\lambda \rho c_p} = \lambda/\sqrt{\alpha}$. [ 1 ] [ 2 ] [ 3 ]
Some authors use the symbol $e$ to denote thermal effusivity, although its usage alongside an exponential becomes difficult. The SI units for thermal effusivity are $\mathrm{W\,s^{1/2}/(m^{2}\,K)}$ or, equivalently, $\mathrm{J/(m^{2}\,K\,s^{1/2})}$.
Thermal effusivity can also be a measure of a solid or rigid material's thermal inertia . [ 4 ]
Thermal effusivity is a parameter that emerges upon applying solutions of the heat equation to heat flow through a thin surface-like region. [ 3 ] It becomes particularly useful when the region is selected adjacent to a material's actual surface. Knowing the effusivity and equilibrium temperature of each of two material bodies then enables an estimate of their interface temperature $T_m$ when placed into thermal contact . [ 5 ] If $T_1$ and $T_2$ are the temperatures of the two bodies, then upon contact, the temperature of the contact interface (assumed to be a smooth surface) becomes [ 6 ] $T_m = \frac{e_1 T_1 + e_2 T_2}{e_1 + e_2}$, where $e_1$ and $e_2$ are the effusivities of the two bodies.
Specialty sensors have also been developed based on this relationship to measure effusivity.
Thermal effusivity and thermal diffusivity are related quantities; respectively a product versus a ratio of a material's intensive heat transport and storage properties. The diffusivity appears explicitly in the heat equation, which is an energy conservation equation , and measures the speed at which thermal equilibrium can be reached by a body. [ 2 ] By contrast a body's effusivity (also sometimes called inertia, accumulation, responsiveness etc.) is its ability to resist a temperature change when subjected to a time-periodic, or similarly perturbative, forcing function . [ 4 ] [ 7 ]
If two semi-infinite [ i ] bodies initially at temperatures $T_1$ and $T_2$ are brought into perfect thermal contact, the temperature at the contact surface $T_m$ will be a weighted mean based on their relative effusivities. [ 5 ] This relationship can be demonstrated with a very simple "control volume" back-of-the-envelope calculation:
Consider the following 1D heat conduction problem. Region 1 is material 1, initially at uniform temperature $T_1$, and region 2 is material 2, initially at uniform temperature $T_2$. Given some period of time $\Delta t$ after being brought into contact, heat will have diffused across the boundary between the two materials. The thermal diffusivity of a material is $\alpha = \lambda/(\rho c_p)$. From the heat equation (or diffusion equation ), a characteristic diffusion length $\Delta x_1$ into material 1 is $\Delta x_1 \sim \sqrt{\alpha_1 \Delta t}$
Similarly, a characteristic diffusion length $\Delta x_2$ into material 2 is $\Delta x_2 \sim \sqrt{\alpha_2 \Delta t}$
Assume that the temperature within the characteristic diffusion length on either side of the boundary between the two materials is uniformly at the contact temperature $T_m$ (this is the essence of a control-volume approach). Conservation of energy then dictates that $\rho_1 c_{p,1}\,\Delta x_1\,(T_1 - T_m) = \rho_2 c_{p,2}\,\Delta x_2\,(T_m - T_2)$
Substitution of the expressions above for $\Delta x_1$ and $\Delta x_2$ and elimination of $\Delta t$ yields an expression for the contact temperature: $T_m = \frac{e_1 T_1 + e_2 T_2}{e_1 + e_2}$, with $e_i = \sqrt{\lambda_i \rho_i c_{p,i}} = \rho_i c_{p,i}\sqrt{\alpha_i}$.
This expression is valid for all times for semi-infinite bodies in perfect thermal contact. It is also a good first guess for the initial contact temperature for finite bodies.
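A small numeric sketch of this result, using approximate room-temperature property values; the skin-like numbers in particular are rough illustrative estimates, not reference data.

```python
import math

def effusivity(k, rho, cp):
    """Thermal effusivity e = sqrt(k * rho * cp), in W s^0.5 / (m^2 K)."""
    return math.sqrt(k * rho * cp)

def contact_temperature(t1, e1, t2, e2):
    """Interface temperature of two semi-infinite bodies in perfect contact."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

# Rough room-temperature property values (k, rho, cp), for illustration only:
e_skin = effusivity(0.37, 1100, 3500)   # human skin-like tissue
e_oak  = effusivity(0.17, 750, 2400)    # oak wood
e_cu   = effusivity(401, 8960, 385)     # copper

# Touching 20 C objects with 33 C skin:
print(f"wood interface:   {contact_temperature(33, e_skin, 20, e_oak):.1f} C")
print(f"copper interface: {contact_temperature(33, e_skin, 20, e_cu):.1f} C")
```

The metal's large effusivity pulls the interface temperature close to the metal's own 20 °C, which is why it feels much colder than the wood at the same temperature, anticipating the thermoception discussion below.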
Even though the underlying heat equation is parabolic and not hyperbolic (i.e. it does not support waves), if we in some rough sense allow ourselves to think of a temperature jump as two materials are brought into contact as a "signal", then the transmission of the temperature signal from 1 to 2 is $e_1/(e_1 + e_2)$. Clearly, this analogy must be used with caution; among other caveats, it only applies in a transient sense, to media which are large enough (or time scales short enough) to be considered effectively infinite in extent.
An application of thermal effusivity is the quasi-qualitative measurement of coolness or warmth "feel" of materials, also known as thermoception . It is a particularly important metric for textiles, fabrics, and building materials. Rather than temperature, skin thermoreceptors are highly responsive to the inward or outward flow of heat. Thus, despite having similar temperatures near room temperature , a high effusivity metal object is detected as cool while a low effusivity fabric is sensed as being warmer. [ 2 ]
For a diathermal wall having a stepped "constant heat" boundary condition imposed at $t = 0$ onto one side, the thermal effusivity $e$ performs nearly the same role in limiting the initial dynamic thermal response (rigorously, during times less than the heat diffusion time to transit the wall) as the insulation U-factor $U$ plays in defining the static temperature obtained by the side after a long time. A dynamic U-factor $U_{dyn}$ and a diffusion time $t_L$ for the wall of thickness $L$, thermal diffusivity $\alpha$ and thermal conductivity $\lambda$ are specified in reference [ 8 ].
For planetary surfaces, thermal inertia is a key phenomenon controlling the diurnal and seasonal surface temperature variations. The thermal inertia of a terrestrial planet such as Mars can be approximated from the thermal effusivity of its near-surface geologic materials. In remote sensing applications, thermal inertia represents a complex combination of particle size, rock abundance, bedrock outcropping and the degree of induration (i.e. thickness and hardness). [ 9 ]
A rough approximation to thermal inertia is sometimes obtained from the amplitude of the diurnal temperature curve (i.e. maximum minus minimum surface temperature). [ 4 ] The temperature of a material with low thermal effusivity changes significantly during the day, while the temperature of a material with high thermal effusivity does not change as drastically. Deriving and understanding the thermal inertia of the surface can help to recognize small-scale features of that surface. In conjunction with other data, thermal inertia can help to characterize surface materials and the geologic processes responsible for forming these materials. [ 10 ]
On Earth, thermal inertia of the global ocean is a major factor influencing climate inertia . Ocean thermal inertia is much greater than land inertia because of convective heat transfer , especially through the upper mixed layer . [ 11 ] The thermal effusivities of stagnant and frozen water underestimate the vast thermal inertia of the dynamic and multi-layered ocean. [ 12 ]
Thermographic inspection encompasses a variety of nondestructive testing methods that utilize the wave-like characteristics of heat propagation through a transfer medium. These methods include Pulse-echo thermography and thermal wave imaging . Thermal effusivity and diffusivity of the materials being inspected can serve to simplify the mathematical modelling of, and thus interpretation of results from these techniques. [ 13 ]
When a material is measured from the surface with short test times by any transient method or instrument, the heat transfer mechanisms generally include thermal conduction , convection , radiation and phase changes . The diffusive process of conduction may dominate the thermal behavior of solid bodies near and below room temperature.
A contact resistance (due to surface roughness, oxidation, impurities, etc.) between the sensor and sample may also exist. Evaluations with high heat dissipation (driven by large temperature differentials) can likewise be influenced by an interfacial thermal resistance . All of these factors, along with the body's finite dimensions, must be considered during execution of measurements and interpretation of results.
This is a list of the thermal effusivity of some common substances, evaluated at room temperature unless otherwise indicated; entries marked (*) assume minimal advection. | https://en.wikipedia.org/wiki/Thermal_effusivity
In crystallography , thermal ellipsoids , more formally termed atomic displacement parameters or anisotropic displacement parameters , are ellipsoids used to indicate the magnitudes and directions of the thermal vibration of atoms in crystal structures . Since the vibrations are usually anisotropic (different magnitudes in different directions in space), an ellipsoid is a convenient way of visualising the vibration and therefore the symmetry and time averaged position of an atom in a crystal. Their theoretical framework was introduced by D. W. J. Cruickshank in 1956 and the concept was popularized through the program ORTEP (Oak Ridge Thermal-Ellipsoid Plot Program), first released in 1965. [ 4 ]
Thermal ellipsoids can be defined by a tensor , a mathematical object which allows the definition of magnitude and orientation of vibration with respect to three mutually perpendicular axes . The three principal axes of the thermal vibration of an atom are denoted $U_1$, $U_2$, and $U_3$, and the corresponding thermal ellipsoid is based on these axes. The size of the ellipsoid is scaled so that it occupies the space in which there is a particular probability of finding the electron density of the atom. The particular probability is usually 50%. [ 5 ] | https://en.wikipedia.org/wiki/Thermal_ellipsoid
Thermal emittance or thermal emissivity ($\varepsilon$) is the ratio of the radiant emittance of heat of a specific object or surface to that of a standard black body . Emissivity and emittivity are both dimensionless quantities given in the range of 0 to 1, representing the comparative/relative emittance with respect to a blackbody operating in similar conditions, but emissivity refers to a material property (of a homogeneous material), while emittivity refers to specific samples or objects. [ 1 ] [ 2 ]
For building products, thermal emittance measurements are taken at infrared wavelengths. Determining the thermal emittance and solar reflectance of building materials, especially roofing materials , can be very useful for reducing heating and cooling energy costs in buildings. The combined Solar Reflectance Index (SRI) is often used to quantify the overall ability to reflect solar heat and release thermal heat. A roofing surface with high solar reflectance and high thermal emittance will reflect solar heat and release absorbed heat readily. A high thermal emittance material radiates thermal heat back into the atmosphere more readily than one with a low thermal emittance. In common construction applications, the thermal emittance of a surface is usually higher than 0.8–0.85. [ 1 ]
High thermal emittance materials are essential to passive daytime radiative cooling , which uses surfaces high in thermal emittance and solar reflectance to lower surface temperatures by dissipating heat to outer space . It has been proposed as a solution to energy crises and global warming . [ 3 ] | https://en.wikipedia.org/wiki/Thermal_emittance
The term " thermal energy " is often used ambiguously in physics and engineering. [ 1 ] It can denote several different physical concepts, including:
Mark Zemansky (1970) has argued that the term “thermal energy” is best avoided due to its ambiguity. He suggests using more precise terms such as “internal energy” and “heat” to avoid confusion. [ 1 ] The term is, however, used in some textbooks. [ 2 ]
In thermodynamics , heat is energy in transfer to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. [ 3 ] [ 4 ] Heat refers to a quantity in transfer between systems, not to a property of any one system, or "contained" within it; on the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurs. In contrast, internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there. [ 5 ]
In addition to the microscopic kinetic energies of its molecules, the internal energy of a body includes chemical energy belonging to distinct molecules, and the global joint potential energy involved in the interactions between molecules and suchlike. [ 6 ] Thermal energy may be viewed as contributing to internal energy or to enthalpy.
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that "the converted chemical potential energy has simply become internal energy". It is, however, sometimes convenient to say that "the chemical potential energy has been converted into thermal energy". This is expressed in ordinary traditional language by talking of 'heat of reaction' . [ 7 ]
In a body of material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body. Still, they are not immediately apparent in the kinetic energies of molecules, as manifest in temperature. Such energies of interaction may be thought of as contributions to the global internal microscopic potential energies of the body. [ 8 ]
In a statistical mechanical account of an ideal gas , in which the molecules move independently between instantaneous collisions, the internal energy is just the sum total of the gas's independent particles' kinetic energies , and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. For a gas that does not have particle interactions except for instantaneous collisions, the term "thermal energy" is effectively synonymous with " internal energy ". [ 9 ]
In many statistical physics texts, "thermal energy" refers to $kT$, the product of the Boltzmann constant and the absolute temperature , also written as $k_{\text{B}}T$. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
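For concreteness, a two-line evaluation of this quantity at room temperature; the constants are the exact 2019 SI values, and the 300 K temperature is just a conventional choice.

```python
k_B = 1.380649e-23       # Boltzmann constant, J/K (exact, 2019 SI)
e_charge = 1.602176634e-19  # elementary charge, C (exact)

def thermal_energy(t_kelvin):
    """Return kT in joules and electronvolts at the given temperature."""
    kt_joule = k_B * t_kelvin
    return kt_joule, kt_joule / e_charge

kt_j, kt_ev = thermal_energy(300.0)
print(f"kT at 300 K: {kt_j:.3e} J = {kt_ev * 1000:.1f} meV")  # ~4.14e-21 J, ~25.9 meV
```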
When there is no accompanying flow of matter, the term "thermal energy" is also applied to the energy carried by a heat flow. [ 14 ] | https://en.wikipedia.org/wiki/Thermal_energy |
Thermal engineering is a specialized sub-discipline of mechanical engineering that deals with the movement and transfer of heat energy . The energy can be transferred between two mediums or transformed into other forms of energy. A thermal engineer will have knowledge of thermodynamics and the process of converting generated energy from thermal sources into chemical , mechanical , or electrical energy . Many process plants use a wide variety of machines with components that rely on heat transfer in some way; heat exchangers are a common example. A thermal engineer must allow the proper amount of energy to be transferred for the correct use: too much and the components could fail, too little and the system will not function at all. Thermal engineers must have an understanding of economics and of the components that they will be servicing or interacting with. Some components that a thermal engineer could work with include heat exchangers, heat sinks , bimetallic strips , and radiators . Some systems that require a thermal engineer include boilers , heat pumps , water pumps , and engines .
Part of a thermal engineer's role is to improve existing systems and make them more efficient. Many industries employ thermal engineers; some of the main ones are automotive manufacturing, commercial construction, and the heating, ventilation and cooling industry. Job opportunities for thermal engineers are broad and promising.
Thermal engineering may be practiced by mechanical engineers and chemical engineers .
One or more of the following disciplines may be involved in solving a particular thermal engineering problem: thermodynamics , fluid mechanics , heat transfer , or mass transfer . One branch of knowledge used frequently in thermal engineering is that of thermofluids . | https://en.wikipedia.org/wiki/Thermal_engineering
In fluid dynamics , the entrance length is the distance a flow travels after entering a pipe before the flow becomes fully developed. [ 1 ] Entrance length refers to the length of the entry region , the area following the pipe entrance where effects originating from the interior wall of the pipe propagate into the flow as an expanding boundary layer . When the boundary layer expands to fill the entire pipe, the developing flow becomes a fully developed flow, where flow characteristics no longer change with increased distance along the pipe. Many different entrance lengths exist to describe a variety of flow conditions. Hydrodynamic entrance length describes the formation of a velocity profile caused by viscous forces propagating from the pipe wall. Thermal entrance length describes the formation of a temperature profile. [ 2 ] Awareness of entrance length may be necessary for the effective placement of instrumentation, such as fluid flow meters . [ 3 ]
The hydrodynamic entrance region refers to the area of a pipe where fluid entering a pipe develops a velocity profile due to viscous forces propagating from the interior wall of a pipe. [ 1 ] This region is characterized by a non-uniform flow. [ 1 ] The fluid enters a pipe at a uniform velocity, then fluid particles in the layer in contact with the surface of the pipe come to a complete stop due to the no-slip condition . Due to viscous forces within the fluid, the layer in contact with the pipe surface resists the motion of adjacent layers and slows adjacent layers of fluid down gradually, forming a velocity profile. [ 4 ] For the conservation of mass to hold true, the velocity of layers of the fluid in the center of the pipe increases to compensate for the reduced velocities of the layers of fluid near the pipe surface. This develops a velocity gradient across the cross-section of the pipe. [ 5 ]
The layer in which the shearing viscous forces are significant is called the boundary layer . [ 6 ] This boundary layer is a hypothetical concept. It divides the flow in the pipe into two regions: [ 6 ] the boundary layer region, in which viscous effects and velocity changes are significant, and the irrotational (core) flow region, in which viscous effects and velocity changes are negligible.
When the fluid just enters the pipe, the thickness of the boundary layer gradually increases from zero, moving in the direction of fluid flow, until it eventually reaches the pipe center and fills the entire pipe. This region from the entrance of the pipe to the point where the boundary layer covers the entire pipe is termed the hydrodynamic entrance region, and the length of the pipe in this region is termed the hydrodynamic entry length. In this region the velocity profile develops, and the flow is thus called hydrodynamically developing flow. After this region the velocity profile is fully developed and continues unchanged; this region is called the hydrodynamically fully developed region. The flow is not fully developed in the complete sense, however, until the normalized temperature profile also becomes constant. [ 6 ]
In the case of laminar flow , the velocity profile in the fully developed region is parabolic, but in the case of turbulent flow it is somewhat flatter due to vigorous mixing in the radial direction and eddy motion.
The velocity profile remains unchanged in the fully developed region.
Hydrodynamically fully developed velocity profile (laminar flow):
$\frac{\partial u(r,x)}{\partial x} = 0 \quad \Rightarrow \quad u = u(r)$ [ 6 ]
where $x$ is the flow direction.
In the hydrodynamic entrance region, the wall shear stress , $\tau_w$, is highest at the pipe inlet, where the boundary layer thickness is smallest, and it decreases along the flow direction. [ 6 ] That is why the pressure drop is highest in the entrance region of a pipe, which increases the average friction factor for the whole pipe; this increase in the friction factor is negligible for long pipes. [ 6 ] In the fully developed region, the pressure gradient and the shear stress in the flow are in balance. [ 6 ]
The length of the hydrodynamic entry region along the pipe is called the hydrodynamic entry length. It is a function of the Reynolds number of the flow. In the case of laminar flow, this length is given by:
$L_{h,\mathrm{laminar}} = 0.0575\, Re_D\, D$ [ 2 ]
where $Re_D$ is the Reynolds number and $D$ is the diameter of the pipe. In the case of turbulent flow, however,
$L_{h,\mathrm{turbulent}} = 1.359\, D\, Re_D^{1/4}.$ [ 8 ]
Thus, the entry length in turbulent flow is much shorter than in the laminar case. In most practical engineering applications, this entrance effect becomes insignificant beyond a pipe length of 10 times the diameter, and hence it is approximated as:
$L_{h,\mathrm{turbulent}} \approx 10D$ [ 6 ] Other authors give much longer entrance lengths.
In the case of a non-circular cross-section of a pipe, the same formula can be used to find the entry length with a little modification. A new parameter, the " hydraulic diameter ", relates the flow in a non-circular pipe to that in a circular pipe. This is valid as long as the cross-sectional area shape is not too exaggerated. The hydraulic diameter is defined as:
$D_h = \frac{4A}{P}$ [ 6 ]
where $A$ is the cross-sectional area and $P$ is the wetted perimeter of the pipe.
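A short sketch applying these correlations; the laminar/turbulent threshold of Re = 2300 and the water-flow numbers are conventional illustrative choices, not values from the text.

```python
def reynolds_number(velocity, diameter, kinematic_viscosity):
    """Re = V * D / nu for pipe flow."""
    return velocity * diameter / kinematic_viscosity

def hydrodynamic_entry_length(re, diameter):
    """Entry length estimates from the correlations quoted above."""
    if re < 2300:                       # commonly used laminar threshold
        return 0.0575 * re * diameter   # laminar correlation
    return 1.359 * diameter * re**0.25  # turbulent correlation

# Water (nu ~ 1e-6 m^2/s) at 0.5 m/s in a 25 mm pipe:
re = reynolds_number(0.5, 0.025, 1e-6)  # -> 12500, turbulent
print(f"Re = {re:.0f}, L_h = {hydrodynamic_entry_length(re, 0.025):.3f} m")
```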
By doing a force balance on a small volume element in the fully developed flow region of the pipe (laminar flow), we obtain the velocity as a function of radius only, i.e. it does not depend upon the axial distance from the entry point. [ 6 ] The velocity as a function of radius is:
$u(r) = -\frac{R^{2}}{4\mu}\frac{\mathrm{d}P}{\mathrm{d}x}\left(1 - \frac{r^{2}}{R^{2}}\right)$ [ 6 ] where $\frac{\mathrm{d}P}{\mathrm{d}x}$ is constant.
By definition, the average velocity is given by $V_{avg} = \frac{\int u\,\mathrm{d}A}{A_c}$, where $A_c$ is the cross-sectional area.
Thus, $V_{avg} = \frac{2}{R^{2}}\int_{0}^{R} u(r)\, r\,\mathrm{d}r = -\frac{2}{R^{2}}\int_{0}^{R} \frac{R^{2}}{4\mu}\frac{\mathrm{d}P}{\mathrm{d}x}\left(1 - \frac{r^{2}}{R^{2}}\right) r\,\mathrm{d}r = -\frac{R^{2}}{8\mu}\frac{\mathrm{d}P}{\mathrm{d}x}$ [ 6 ]
For fully developed flow, the maximum velocity is at $r = 0$. Thus, $U_{max} = 2V_{avg}.$ [ 6 ]
The thermal entrance length is the distance for incoming flow in a pipe to form a temperature profile with a stable shape. The shape of the fully developed temperature profile is determined by temperature and heat flux conditions along the inside wall of the pipe, as well as fluid properties. [ 2 ]
Fully developed heat flow in a pipe can be considered in the following situation. If the wall of the pipe is constantly heated or cooled so that the heat flux from the wall to the fluid via convection is a fixed value, then the bulk temperature of the fluid steadily increases or decreases respectively at a fixed rate along the flow direction.
An example can be a pipe entirely covered by an electrical heating pad with the flow being introduced after a uniform heat flux from the pad is achieved. At some distance away from the entrance of the fluid, fully developed heat flow is achieved when the heat transfer coefficient of the fluid becomes constant and the temperature profile has the same shape along the flow. [ 11 ] This distance is defined as the thermal entrance length, which is important for engineers to design efficient heat transfer processes.
For laminar flow, the thermal entrance length is a function of pipe diameter and the dimensionless Reynolds number and Prandtl number . [ 2 ]
$\left(\frac{x_{fd,t}}{D}\right)_{\mathrm{laminar}} \approx 0.05\, Re_D\, Pr$ [ 2 ]
where $x_{fd,t}$ is the thermal entrance length, $D$ the pipe diameter, $Re_D$ the Reynolds number and $Pr$ the Prandtl number.
The Prandtl number modifies the hydrodynamic entrance length to determine thermal entrance length. The Prandtl number is the dimensionless number for the ratio of momentum diffusivity to thermal diffusivity . [ 5 ] The thermal entrance length for a fluid with a Prandtl number greater than one will be longer than the hydrodynamic entrance length, and shorter if the Prandtl number is less than one. For example, molten sodium has a low Prandtl number of 0.004, [ 12 ] so the thermal entrance length will be significantly shorter than the hydraulic entrance length.
For turbulent flows, thermal entrance length may be approximated solely based on pipe diameter. [ 2 ]
\left(\frac{x_{fd,t}}{D}\right)_{turbulent} \approx 10 [ 2 ]
where x_{fd,t} is the thermal entrance length and D is the pipe diameter.
The development of the temperature profile in the flow is driven by the heat transfer conditions at the inside surface of the pipe and by the fluid properties. [ 2 ] Heat transfer may be a result of a constant heat flux or a constant surface temperature. Constant heat flux may be caused by joule heating from a heat source, like heat tape, wrapped around the pipe. [ 13 ] Constant temperature conditions may be produced by a phase transition, such as condensation of saturated steam on a pipe surface. [ 14 ]
Newton's law of cooling describes convection, the main form of heat transport between the fluid and the pipe:
q''_{s} = h(T_{s} - T_{m}) [ 2 ]
where q''_s is the surface heat flux, h is the convection heat transfer coefficient, T_s is the surface temperature of the pipe, and T_m is the mean (bulk) temperature of the fluid.
Constant surface heat flux results in T_s − T_m becoming a constant as the flow develops, and constant surface temperature results in T_s − T_m approaching zero. [ 2 ]
Unlike hydrodynamic developed flow, a constant profile shape is used to define thermally fully developed flow because temperature continually approaches ambient temperature. [ 2 ] Dimensionless analysis of change in profile shape defines when a flow is thermally fully developed.
Requirement for thermally fully developed flow:
\frac{\partial}{\partial x}\left(\frac{T_{s}-T}{T_{s}-T_{m}}\right)_{fd,t} = 0 [ 2 ]
Thermally developed flow results in reduced heat transfer compared to developing flow because the difference between the surface temperature of the pipe and the mean temperature of the flow is greater than the temperature difference between surface temperature of the pipe and the temperature of the fluid near the pipe boundary. [ 2 ]
The concentration entrance length describes the length needed for the concentration profile in a flow to be fully developed. The concentration entrance length can be determined by relating it to the hydrodynamic entrance length with the Schmidt number or by experimental techniques. [ 15 ] The Schmidt number describes the ratio of momentum diffusivity to mass diffusivity. [ 2 ]
x_{fd,c} \approx 0.05\, D\, Re_{D}\, Sc [ 2 ]
where x_{fd,c} is the concentration entrance length, D is the pipe diameter, Re_D is the Reynolds number, and Sc is the Schmidt number.
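A short sketch of this relation with assumed values (the Schmidt numbers below are typical orders of magnitude only, not from the text):

# Minimal sketch: concentration entrance length x_fd,c ~ 0.05 D Re_D Sc.
# Pipe diameter, Reynolds number and Schmidt numbers are assumed for illustration.

def concentration_entrance_length(D: float, Re_D: float, Sc: float) -> float:
    return 0.05 * D * Re_D * Sc

D, Re_D = 0.02, 1000.0   # assumed pipe diameter (m) and Reynolds number
print(concentration_entrance_length(D, Re_D, 1.0))      # gas-phase species, Sc ~ 1 -> 1 m
print(concentration_entrance_length(D, Re_D, 1000.0))   # dilute species in a liquid, Sc ~ 1000 -> 1000 m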
Understanding the entrance length is important for the design and analysis of flow systems. The entrance region will have different velocity, temperature, and other profiles than exist in the fully developed region of the pipe.
Many types of flow instrumentation, such as flow meters , require a fully developed flow to function properly. [ 3 ] Common flow meters, including vortex flow meters and differential-pressure flow meters, require hydrodynamically fully developed flow. Hydraulically fully developed flow is commonly achieved by having long, straight sections of pipe before the flow meter. Alternatively, flow conditioners and straightening devices may be used to produce the desired flow. [ 17 ]
Wind tunnels use an inviscid flow of air to test the aerodynamics of an object. Flow straighteners, which consist of many parallel ducts which limit turbulence, are used to produce inviscid flow. [ 18 ] Entrance length must be considered in the design of wind tunnels, because the object being tested must be located in the irrotational flow region, between the flow straighteners and the entrance length. [ 19 ]
Similar to the development of flow at the entrance of the pipe , the flow velocity profile changes before the exit of a pipe. The exit length is much shorter than the entrance length, and is not significant at moderate to high Reynolds numbers. [ 20 ]
Hydraulic exit length for laminar flows may be approximated as: [ 20 ]
\left(\frac{x}{D}\right)_{Lam} \approx \begin{cases} \frac{1}{2} & \text{low } Re \\ 0 & Re > 100 \end{cases}
where x is the exit length, D is the pipe diameter, and Re is the Reynolds number. | https://en.wikipedia.org/wiki/Thermal_entrance_length
Thermal expansion is the tendency of matter to increase in length , area , or volume , changing its size and density , in response to an increase in temperature (usually excluding phase transitions ). [ 1 ] Substances usually contract with decreasing temperature ( thermal contraction ), with rare exceptions within limited temperature ranges ( negative thermal expansion ).
Temperature is a monotonic function of the average molecular kinetic energy of a substance. As energy in particles increases, they start moving faster and faster, weakening the intermolecular forces between them and therefore expanding the substance.
When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves.
The relative expansion (also called strain ) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature. [ 2 ]
If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures , along with many other state functions .
A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion , rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C (39.169 °F) and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather.
Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvins (−255 and −153 °C; −427 and −244 °F). [ 3 ] ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures. [ 4 ]
Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion.
Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is slightly higher compared to that of crystals. [ 5 ] At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of coefficient of thermal expansion and specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass. [ 6 ]
Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent.
Thermal expansion changes the space between particles of a substance, which changes the volume of the substance while negligibly changing its mass (the negligible amount comes from mass–energy equivalence ), thus changing its density, which has an effect on any buoyant forces acting on it. This plays a crucial role in convection of unevenly heated fluid masses, notably making thermal expansion partly responsible for wind and ocean currents .
The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure, such that lower coefficients describe lower propensity for change in size. Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area.
The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic . For isotropic materials, the area and volumetric thermal expansion coefficients are approximately two and three times the linear thermal expansion coefficient, respectively.
In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by
\alpha = \alpha_{V} = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{p}
The subscript " p " to the derivative indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law .
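Where an equation of state is available, the coefficient can also be evaluated numerically. A minimal Python sketch, assuming the ideal gas law as the equation of state and an illustrative state point of 1 mol at 1 atm and 300 K, estimates α_V by a finite-difference derivative at constant pressure:

# Minimal sketch: estimating alpha_V = (1/V)(dV/dT)_p numerically from an equation of state,
# using the ideal gas law V = nRT/p as the example. The state point is illustrative.

R = 8.314          # gas constant, J/(mol K)

def V_ideal(T, p, n=1.0):
    return n * R * T / p

def alpha_V(eos_volume, T, p, dT=1e-3):
    """Central finite-difference estimate of (1/V)(dV/dT) at constant pressure."""
    V = eos_volume(T, p)
    dVdT = (eos_volume(T + dT, p) - eos_volume(T - dT, p)) / (2 * dT)
    return dVdT / V

T, p = 300.0, 101325.0
print(alpha_V(V_ideal, T, p))   # ~1/300 = 3.33e-3 K^-1, i.e. alpha_V = 1/T for an ideal gas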
This section summarizes the coefficients for some common materials.
For isotropic materials the coefficients of linear thermal expansion α and volumetric thermal expansion α V are related by α V = 3 α .
For liquids usually the coefficient of volumetric expansion is listed and linear expansion is calculated here for comparison.
For common materials like many metals and compounds, the thermal expansion coefficient is inversely proportional to the melting point . [ 7 ] In particular, for metals the relation is:
\alpha \approx \frac{0.020}{T_{m}}
while for halides and oxides
\alpha \approx \frac{0.038}{T_{m}} - 7.0 \cdot 10^{-6}\ \mathrm{K}^{-1}
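As an illustration of this rule of thumb, the short sketch below estimates a metal's expansion coefficient from its melting point, assuming aluminium (melting point about 933 K, a value not given in the text):

# Minimal sketch: rough estimate of a metal's expansion coefficient from its melting point
# via alpha ~ 0.020 / T_m. Aluminium is used as an assumed example.
T_m = 933.0                      # melting point of aluminium, K (assumed)
alpha_estimate = 0.020 / T_m
print(alpha_estimate)            # ~2.1e-5 K^-1, close to the measured ~2.3e-5 K^-1 for aluminium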
In the table below, the range for α is from 10 −7 K −1 for hard solids to 10 −3 K −1 for organic liquids. The coefficient α varies with the temperature and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressure, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). The highest linear coefficient in a solid has been reported for a Ti-Nb alloy. [ 8 ]
The formula α V ≈ 3 α is usually used for solids. [ 9 ] Volumetric coefficients shown which do not follow that rule are highlighted.
When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion.
If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus . In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object and so it is not usually necessary to consider the effect of pressure changes.
Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average, value of the coefficient of expansion.
Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion).
To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a coefficient of linear thermal expansion (CLTE). It is the fractional change in length per degree of temperature change. Assuming a negligible effect of pressure, one may write: \alpha_{L} = \frac{1}{L}\,\frac{dL}{dT} where L is a particular length measurement and dL/dT is the rate of change of that linear dimension per unit change in temperature.
The change in the linear dimension can be estimated to be: \frac{\Delta L}{L} = \alpha_{L}\,\Delta T
This estimation works well as long as the linear-expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in length is small, ΔL/L ≪ 1. If either of these conditions does not hold, the exact differential equation (using dL/dT) must be integrated.
For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain , given by ε_thermal and defined as: \varepsilon_{\mathrm{thermal}} = \frac{L_{\mathrm{final}} - L_{\mathrm{initial}}}{L_{\mathrm{initial}}}
where L_initial is the length before the change of temperature and L_final is the length after the change of temperature.
For most solids, thermal expansion is proportional to the change in temperature: \varepsilon_{\mathrm{thermal}} \propto \Delta T Thus, the change in either the strain or temperature can be estimated by: \varepsilon_{\mathrm{thermal}} = \alpha_{L}\,\Delta T where \Delta T = T_{\mathrm{final}} - T_{\mathrm{initial}} is the difference of the temperature between the two recorded strains, measured in degrees Fahrenheit , degrees Rankine , degrees Celsius , or kelvins, and α_L is the linear coefficient of thermal expansion in "per degree Fahrenheit", "per degree Rankine", "per degree Celsius", or "per kelvin", denoted by °F −1 , °R −1 , °C −1 , or K −1 , respectively. In the field of continuum mechanics , thermal expansion and its effects are treated as eigenstrain and eigenstress.
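A minimal worked example in Python, assuming a 10 m carbon-steel rod with a typical coefficient of 12 × 10⁻⁶ K⁻¹ (values not from the text):

# Minimal sketch: linear expansion Delta L = alpha_L * L * Delta T for a steel rod.
# The coefficient and geometry below are typical assumed values.
alpha_L = 12e-6     # linear expansion coefficient of carbon steel, K^-1 (assumed)
L = 10.0            # initial rod length, m (assumed)
dT = 50.0           # temperature rise, K (assumed)

dL = alpha_L * L * dT
strain = dL / L
print(dL)           # 0.006 m = 6 mm of elongation
print(strain)       # 6e-4, the thermal strain epsilon_thermal = alpha_L * dT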
The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, one may write: \alpha_{A} = \frac{1}{A}\,\frac{dA}{dT} where A is some area of interest on the object, and dA/dT is the rate of change of that area per unit change in temperature.
The change in the area can be estimated as: \frac{\Delta A}{A} = \alpha_{A}\,\Delta T
This equation works well as long as the area expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in area is small, ΔA/A ≪ 1. If either of these conditions does not hold, the equation must be integrated.
For a solid, one can ignore the effects of pressure on the material, and the volumetric (or cubical) thermal expansion coefficient can be written: [ 28 ] \alpha_{V} = \frac{1}{V}\,\frac{dV}{dT} where V is the volume of the material, and dV/dT is the rate of change of that volume with temperature.
This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If a block of steel has a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K −1 .
If the expansion coefficient is known, the change in volume can be calculated: \frac{\Delta V}{V} = \alpha_{V}\,\Delta T where ΔV/V is the fractional change in volume (e.g., 0.002) and ΔT is the change in temperature (50 K).
The above example assumes that the expansion coefficient did not change as the temperature changed and the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation will have to be integrated: \ln\left(\frac{V + \Delta V}{V}\right) = \int_{T_i}^{T_f} \alpha_{V}(T)\,dT or, equivalently, \frac{\Delta V}{V} = \exp\left(\int_{T_i}^{T_f} \alpha_{V}(T)\,dT\right) - 1 where α_V(T) is the volumetric expansion coefficient as a function of temperature T, and T_i and T_f are the initial and final temperatures respectively.
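A short numerical sketch of this integration, using a hypothetical temperature-dependent coefficient purely for illustration:

# Minimal sketch: volume change with a temperature-dependent coefficient,
# Delta V / V = exp( integral of alpha_V(T) dT ) - 1, integrated numerically.
# The linear form of alpha_V(T) below is hypothetical, chosen only to illustrate the method.
import numpy as np

def alpha_V(T):
    # hypothetical coefficient rising gently with temperature, K^-1
    return 5e-5 + 2e-8 * (T - 300.0)

T_i, T_f = 300.0, 600.0
T = np.linspace(T_i, T_f, 10001)
integral = np.trapz(alpha_V(T), T)

exact_fraction = np.exp(integral) - 1.0
linear_fraction = 5e-5 * (T_f - T_i)   # what a constant-coefficient estimate would give
print(exact_fraction, linear_fraction)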
For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient: \alpha_{V} = 3\alpha_{L}
This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be V = L^{3} and the new volume, after a temperature increase, will be V + \Delta V = (L + \Delta L)^{3} = L^{3} + 3L^{2}\Delta L + 3L\,\Delta L^{2} + \Delta L^{3} \approx L^{3} + 3L^{2}\Delta L = V + 3V\,\frac{\Delta L}{L}.
We can ignore the higher-order terms, since ΔL is a small quantity which becomes much smaller on squaring and smaller still on cubing.
So \frac{\Delta V}{V} = 3\,\frac{\Delta L}{L} = 3\alpha_{L}\,\Delta T.
The above approximation holds for small temperature and dimensional changes (that is, when ΔT and ΔL are small), but it does not hold if trying to go back and forth between volumetric and linear coefficients using larger values of ΔT. In this case, the third term (and sometimes even the fourth term) in the expression above must be taken into account.
Similarly, the area thermal expansion coefficient is two times the linear coefficient: \alpha_{A} = 2\alpha_{L}
This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just L^{2}. Also, the same considerations must be made when dealing with large values of ΔT.
Put more simply, if the length of a cubic solid expands from 1.00 m to 1.01 m, then the area of one of its sides expands from 1.00 m² to 1.02 m² and its volume expands from 1.00 m³ to 1.03 m³.
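The sketch below checks this 1% / 2% / 3% example against the exact powers, showing how close the approximate doubling and tripling of the linear strain are:

# Minimal sketch: checking the 1-D / 2-D / 3-D expansion relations on the 1.00 m -> 1.01 m example.
dL_over_L = 0.01
exact_area   = (1 + dL_over_L)**2 - 1        # 0.0201
exact_volume = (1 + dL_over_L)**3 - 1        # 0.030301
print(exact_area,   2 * dL_over_L)           # ~2x the linear strain
print(exact_volume, 3 * dL_over_L)           # ~3x the linear strain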
Materials with anisotropic structures, such as crystals (with less than cubic symmetry, for example martensitic phases) and many composites (with the homogenization of microstructure), [ 29 ] will generally have different linear expansion coefficients α L {\displaystyle \alpha _{L}} in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by x-ray powder diffraction . The thermal expansion coefficient tensor for the materials possessing cubic symmetry (for e.g. FCC, BCC) is isotropic. [ 30 ]
Thermal expansion coefficients of solids usually show little dependence on temperature (except at very low temperatures) whereas liquids can expand at different rates at different temperatures. There are some exceptions: for example, cubic boron nitride exhibits significant variation of its thermal expansion coefficient over a broad range of temperatures. [ 31 ] Another example is paraffin which in its solid form has a thermal expansion coefficient that is dependent on temperature. [ 32 ]
Since gases fill the entirety of the container which they occupy, the volumetric thermal expansion coefficient at constant pressure, α V {\displaystyle \alpha _{V}} , is the only one of interest.
For an ideal gas , a formula can be readily obtained by differentiation of the ideal gas law , pV_m = RT. This yields p\,dV_{m} + V_{m}\,dp = R\,dT where p is the pressure, V_m is the molar volume (V_m = V/n, with n the total number of moles of gas), T is the absolute temperature and R is the gas constant .
For an isobaric thermal expansion, dp = 0, so that p\,dV_{m} = R\,dT and the isobaric thermal expansion coefficient is: \alpha_{V} \equiv \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{p} = \frac{1}{V_{m}}\left(\frac{\partial V_{m}}{\partial T}\right)_{p} = \frac{1}{V_{m}}\,\frac{R}{p} = \frac{R}{pV_{m}} = \frac{1}{T} which is a strong function of temperature; doubling the temperature will halve the thermal expansion coefficient.
From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton , [ 33 ] and Joseph Louis Gay-Lussac [ 34 ] that, at constant pressure, ideal gases expanded or contracted their volume linearly ( Charles's law ) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0° and 100 °C. This suggested that the volume of a gas cooled at about −273 °C would reach zero.
In October 1848, William Thomson, a 24-year-old professor of Natural Philosophy at the University of Glasgow , published the paper On an Absolute Thermometric Scale . [ 35 ] [ 36 ] [ 37 ]
In a footnote Thomson calculated that "infinite cold" ( absolute zero ) was equivalent to −273 °C (he referred to temperatures on the Celsius scale of the "air thermometers" of the time). This value of "−273" was considered to be the temperature at which the ideal gas volume reaches zero. By assuming a thermal expansion linear with temperature (i.e. a constant coefficient of thermal expansion), the value of absolute zero was linearly extrapolated as the negative reciprocal of 0.366/100 °C – the accepted average coefficient of thermal expansion of an ideal gas in the temperature interval 0–100 °C – remarkably consistent with the currently accepted value of −273.15 °C.
The thermal expansion of liquids is usually higher than in solids because the intermolecular forces present in liquids are relatively weak, and their constituent molecules are more mobile. [ 38 ] [ 39 ] Unlike solids, liquids have no definite shape and they take the shape of the container. Consequently, liquids have no definite length and area, so linear and areal expansions of liquids only have significance in that they may be applied to topics such as thermometry and estimates of sea level rising due to global climate change . [ 40 ] Sometimes, α L is still calculated from the experimental value of α V .
In general, liquids expand on heating, except cold water: below 4 °C water contracts on heating, leading to a negative thermal expansion coefficient. At higher temperatures it shows more typical behavior, with a positive thermal expansion coefficient. [ 41 ]
The expansion of liquids is usually measured in a container. When a liquid expands in a vessel, the vessel expands along with the liquid. Hence the observed increase in volume (as measured by the liquid level) is not the actual increase in its volume. The expansion of the liquid relative to the container is called its apparent expansion , while the actual expansion of the liquid is called real expansion or absolute expansion . The ratio of apparent increase in volume of the liquid per unit rise of temperature to the original volume is called its coefficient of apparent expansion . The absolute expansion can be measured by a variety of techniques, including ultrasonic methods. [ 42 ]
Historically, this phenomenon complicated the experimental determination of thermal expansion coefficients of liquids, since a direct measurement of the change in height of a liquid column generated by thermal expansion is a measurement of the apparent expansion of the liquid. Thus the experiment simultaneously measures two coefficients of expansion and measurement of the expansion of a liquid must account for the expansion of the container as well. For example, when a flask with a long narrow stem, containing enough liquid to partially fill the stem itself, is placed in a heat bath, the height of the liquid column in the stem will initially drop, followed immediately by a rise of that height until the whole system of flask, liquid and heat bath has warmed through. The initial drop in the height of the liquid column is not due to an initial contraction of the liquid, but rather to the expansion of the flask as it contacts the heat bath first.
Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically have a greater percent expansion than solids for the same temperature change, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the level of liquid in the flask to rise. For small and equal rises in temperature, the increase in volume (real expansion) of a liquid is equal to the sum of the apparent increase in volume (apparent expansion) of the liquid and the increase in volume of the containing vessel. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel. [ 43 ]
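A minimal worked example, with assumed coefficients for the liquid and a glass vessel (values chosen for illustration only):

# Minimal sketch: real (absolute) expansion of a liquid = apparent expansion + expansion of
# the containing vessel, for equal small temperature rises. Coefficients are assumed values.
gamma_apparent = 1.5e-4   # apparent volumetric expansion coefficient of the liquid, K^-1 (assumed)
alpha_V_vessel = 2.7e-5   # volumetric expansion coefficient of the glass vessel, K^-1 (assumed)
dT = 10.0                 # temperature rise, K (assumed)

real_fractional_expansion = (gamma_apparent + alpha_V_vessel) * dT
print(real_fractional_expansion)   # ~1.8e-3, the absolute expansion of the liquid per unit volume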
The expansion and contraction of the materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected.
Thermal expansion is also used in mechanical applications to fit parts over one another, e.g. a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. Induction shrink fitting is a common industrial method to pre-heat metal components between 150 °C and 300 °C thereby causing them to expand and allow for the insertion or removal of another component.
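A short sketch of the shrink-fit calculation, assuming an illustrative steel bushing, interference and assembly clearance (none of these values are from the text):

# Minimal sketch: temperature rise needed for a shrink fit, from Delta d = alpha_L * d * Delta T.
# Interference, clearance and coefficient below are assumed, illustrative values.
alpha_L = 12e-6      # linear expansion coefficient of the steel bushing, K^-1 (assumed)
d = 0.050            # nominal bore diameter of the bushing, m (assumed)
interference = 50e-6 # bore is 50 micrometres smaller than the shaft (assumed)
clearance = 20e-6    # extra clearance wanted for assembly, m (assumed)

dT = (interference + clearance) / (alpha_L * d)
print(dT)            # ~117 K of heating above ambient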
There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with expansion approximately equal to 0.6 × 10 −6 K −1 . These alloys are useful in aerospace applications where wide temperature swings may occur.
Pullinger's apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam. The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in a steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer .
To determine the coefficient of linear thermal expansion of a metal, a pipe made of that metal is heated by passing steam through it. One end of the pipe is fixed securely and the other rests on a rotating shaft, the motion of which is indicated by a pointer. A suitable thermometer records the pipe's temperature. This enables calculation of the relative change in length per degree temperature change.
The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle, and uneven temperature causes uneven expansion, which in turn causes thermal stress that may lead to fracture. Ceramics need to be joined to, or work in concert with, a wide range of materials and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type), their thermal expansion must be tuned to 'fit' the body so that crazing or shivering does not occur. Good examples of products whose thermal expansion is the key to their success are CorningWare and the spark plug . The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition, or instead, the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so adjusting for thermal expansion must be done with an eye to other properties that will be affected, and generally trade-offs are necessary.
Thermal expansion can have a noticeable effect on gasoline stored in above-ground storage tanks, which can cause gasoline pumps to dispense gasoline which may be more compressed than gasoline held in underground storage tanks in winter, or less compressed than gasoline held in underground storage tanks in summer. [ 45 ]
Heat-induced expansion has to be taken into account in most areas of engineering. A few examples are: | https://en.wikipedia.org/wiki/Thermal_expansion |
All values refer to 25 °C unless noted.
As quoted from this source in an online version of: David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition . CRC Press. Boca Raton, Florida, 2003; Section 12, Properties of Solids; Thermal and Physical Properties of Pure Metals
As quoted in an online version of:
which further refers to:
As quoted from this source in an online version of: J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 4; Table 4.1, Electronic Configuration and Properties of the Elements
As quoted at http://www.webelements.com/ from these sources: | https://en.wikipedia.org/wiki/Thermal_expansivities_of_the_elements |
In statistical mechanics , thermal fluctuations are random deviations of an atomic system from its average state, that occur in a system at equilibrium. [ 1 ] All thermal fluctuations become larger and more frequent as the temperature increases, and likewise they decrease as temperature approaches absolute zero .
Thermal fluctuations are a basic manifestation of the temperature of systems: A system at nonzero temperature does not stay in its equilibrium microscopic state, but instead randomly samples all possible states, with probabilities given by the Boltzmann distribution .
Thermal fluctuations generally affect all the degrees of freedom of a system: There can be random vibrations ( phonons ), random rotations ( rotons ), random electronic excitations, and so forth.
Thermodynamic variables , such as pressure, temperature, or entropy , likewise undergo thermal fluctuations. For example, for a system that has an equilibrium pressure, the system pressure fluctuates to some extent about the equilibrium value.
Only the 'control variables' of statistical ensembles (such as the number of particles N , the volume V and the internal energy E in the microcanonical ensemble ) do not fluctuate.
Thermal fluctuations are a source of noise in many systems. The random forces that give rise to thermal fluctuations are a source of both diffusion and dissipation (including damping and viscosity ). The competing effects of random drift and resistance to drift are related by the fluctuation-dissipation theorem . Thermal fluctuations play a major role in phase transitions and chemical kinetics .
The volume of phase space \mathcal{V}, occupied by a system of 2m degrees of freedom, is the product of the configuration volume V and the momentum space volume. Since the energy is a quadratic form of the momenta for a non-relativistic system, the radius of momentum space will be \sqrt{E}, so that the volume of a hypersphere will vary as (\sqrt{E})^{2m} = E^{m}, giving a phase volume of
where C {\displaystyle C} is a constant depending upon the specific properties of the system and Γ {\displaystyle \Gamma } is the Gamma function. In the case that this hypersphere has a very high dimensionality, 2 m {\displaystyle 2m} , which is the usual case in thermodynamics, essentially all the volume will lie near to the surface
where we used the recursion formula m\Gamma(m) = \Gamma(m+1).
The surface area Ω ( E ) {\displaystyle \Omega (E)} has its legs in two worlds: (i) the macroscopic one in which it is considered a function of the energy, and the other extensive variables, like the volume, that have been held constant in the differentiation of the phase volume, and (ii) the microscopic world where it represents the number of complexions that is compatible with a given macroscopic state. It is this quantity that Planck referred to as a 'thermodynamic' probability. It differs from a classical probability inasmuch as it cannot be normalized; that is, its integral over all energies diverges—but it diverges as a power of the energy and not faster. Since its integral over all energies is infinite, we might try to consider its Laplace transform
which can be given a physical interpretation. The exponential decreasing factor, where β {\displaystyle \beta } is a positive parameter, will overpower the rapidly increasing surface area so that an enormously sharp peak will develop at a certain energy E ⋆ {\displaystyle E^{\star }} . Most of the contribution to the integral will come from an immediate neighborhood about this value of the energy. This enables the definition of a proper probability density according to
whose integral over all energies is unity on the strength of the definition of Z ( β ) {\displaystyle {\mathcal {Z}}(\beta )} , which is referred to as the partition function, or generating function. The latter name is due to the fact that the derivatives of its logarithm generate the central moments, namely,
and so on, where the first term is the mean energy and the second one is the dispersion in energy.
The fact that Ω ( E ) {\displaystyle \Omega (E)} increases no faster than a power of the energy ensures that these moments will be finite. [ 2 ] Therefore, we can expand the factor e − β E Ω ( E ) {\displaystyle e^{-\beta E}\Omega (E)} about the mean value ⟨ E ⟩ {\displaystyle \langle E\rangle } , which will coincide with E ⋆ {\displaystyle E^{\star }} for Gaussian fluctuations (i.e. average and most probable values coincide), and retaining lowest order terms result in
This is the Gaussian, or normal, distribution, which is defined by its first two moments. In general, one would need all the moments to specify the probability density, f ( E ; β ) {\displaystyle f(E;\beta )} , which is referred to as the canonical, or posterior, density in contrast to the prior density Ω {\displaystyle \Omega } , which is referred to as the 'structure' function. [ 2 ] This is the central limit theorem as it applies to thermodynamic systems. [ 3 ]
If the phase volume increases as E m {\displaystyle E^{m}} , its Laplace transform, the partition function, will vary as β − m {\displaystyle \beta ^{-m}} . Rearranging the normal distribution so that it becomes an expression for the structure function and evaluating it at E = ⟨ E ⟩ {\displaystyle E=\langle E\rangle } give
It follows from the expression of the first moment that β ( ⟨ E ⟩ ) = m / ⟨ E ⟩ {\displaystyle \beta (\langle E\rangle )=m/\langle E\rangle } , while from the second central moment, ⟨ ( Δ E ) 2 ⟩ = ⟨ E ⟩ 2 / m {\displaystyle \langle (\Delta E)^{2}\rangle =\langle E\rangle ^{2}/m} . Introducing these two expressions into the expression of the structure function evaluated at the mean value of the energy leads to
The denominator is exactly Stirling's approximation for m ! = Γ ( m + 1 ) {\displaystyle m!=\Gamma (m+1)} , and if the structure function retains the same functional dependency for all values of the energy, the canonical probability density,
will belong to the family of exponential distributions known as gamma densities. Consequently, the canonical probability density falls under the jurisdiction of the local law of large numbers which asserts that a sequence of independent and identically distributed random variables tends to the normal law as the sequence increases without limit.
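A small Monte Carlo sketch, assuming β = 1 and illustrative values of m, samples this gamma-family canonical density and checks the mean and dispersion quoted above; the relative fluctuation shrinks as 1/√m, in line with the law of large numbers:

# Minimal sketch: sampling the canonical (gamma-family) energy density for a system whose phase
# volume grows as E^m, and checking <E> = m/beta and <(dE)^2> = <E>^2/m quoted above.
# beta and the values of m are illustrative, assumed values.
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
for m in (10, 1000):
    E = rng.gamma(shape=m, scale=1.0 / beta, size=200_000)   # density ~ E^(m-1) e^(-beta E)
    mean, var = E.mean(), E.var()
    print(m, mean, var, np.sqrt(var) / mean)   # mean ~ m/beta, var ~ mean^2/m, spread ~ 1/sqrt(m)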
The expressions given below are for systems that are close to equilibrium and have negligible quantum effects. [ 4 ]
Suppose x {\displaystyle x} is a thermodynamic variable. The probability distribution w ( x ) d x {\displaystyle w(x)dx} for x {\displaystyle x} is determined by the entropy S {\displaystyle S} :
If the entropy is Taylor expanded about its maximum (corresponding to the equilibrium state), the lowest order term is a Gaussian distribution :
The quantity ⟨ x 2 ⟩ {\displaystyle \langle x^{2}\rangle } is the mean square fluctuation. [ 4 ]
The above expression has a straightforward generalization to the probability distribution w ( x 1 , x 2 , … , x n ) d x 1 d x 2 … d x n {\displaystyle w(x_{1},x_{2},\ldots ,x_{n})dx_{1}dx_{2}\ldots dx_{n}} :
where ⟨ x i x j ⟩ {\displaystyle \langle x_{i}x_{j}\rangle } is the mean value of x i x j {\displaystyle x_{i}x_{j}} . [ 4 ]
In the table below are given the mean square fluctuations of the thermodynamic variables T , V , P {\displaystyle T,V,P} and S {\displaystyle S} in any small part of a body. The small part must still be large enough, however, to have negligible quantum effects. | https://en.wikipedia.org/wiki/Thermal_fluctuations |
A thermal history coating (THC) is a robust coating containing various non-toxic chemical compounds whose crystal structures irreversibly change at high temperatures. This allows for temperature measurements and thermal analysis to be performed on intricate and inaccessible components, which operate in harsh environments. Like thermal barrier coatings , THCs provide protection from intense heat to the surfaces on which they are applied. The temperature range that THCs provide accurate temperature measurements in is 900 °C to 1400 °C with an accuracy of ±10 °C. [ 1 ]
THCs are applied by atmospheric plasma spraying, which is a thermal spraying technique. This ensures that the coatings are robust enough for long lifetimes in harsh environments, such as on jet engine components, which experience temperatures in excess of 1000 °C [ 2 ] and rotational speeds of up to 10,000 rpm (revolutions per minute). [ 3 ]
THCs are composed of phosphor materials, whose luminescent characteristics are temperature- and duration-dependent. Phosphor thermometry is the measurement technique used for determining the past temperatures of THCs, [ 4 ] whereby the luminescent characteristics of the coatings are exploited and matched to calibration tables.
The phosphorescence of THCs is excited by use of an external light source such as a laser pen. An optical system then collects a reflected light signal, whose characteristics provide information on the crystal structure of the THC. Crystal structure properties are then converted into the temperatures previously experienced by the coatings. This allows point measurements to be made across the coated surfaces of components and allows thermal analysis to be carried out.
THCs are used in high temperature applications where temperature knowledge is essential in research and development programmes, for example in identifying hot spots, which could lead to structural damage of components.
As the THCs provide historic temperature information, they can be used as warranty tools, where certain components, such as valves or particular engine or machinery components must not exceed certain temperatures. | https://en.wikipedia.org/wiki/Thermal_history_coating |