[Dataset schema: text (string, 900–3.13k chars) · subdomain_id (string, 10 classes) · similarity_score (float64, 0.6–0.78) · token_count (int64, 256–512) · source_dataset (string, 1 value) · source_id (string, 47 chars) · chunk_index (int64, 0–216) · filtering_threshold (float64, 0.6) · created_at (date, 2025-12-25 18:39:09 to 2025-12-26 19:10:12)]
A quick note on the Wikipedia article on particle physics. I was told that Wikipedia's definition of particle physics was very bad, and indeed, it read like this: "Particle physics is a branch of physics that studies the elementary subatomic constituents of matter and radiation, and their interactions. The field is also called high energy physics, because many elementary particles do not occur under ambient conditions on Earth. They can only be created artificially during high energy collisions with other particles in particle accelerators. Particle physics has evolved out of its parent field of nuclear physics and is typically still taught in close association with it. Scientific research in this area has produced a long list of particles." What? Particles that can only be created in accelerators? Particle physics is taught together with nuclear physics? Research produces particles (that one is great!)? What world does this person live in? I rewrote it: "Particle physics is a branch of physics that studies the existence and interactions of particles, which are the constituents of what is usually referred as matter or radiation. In our current understanding, particles are excitations of quantum fields and interact following their dynamics. Most of the interest in this area is in fundamental fields, those that cannot be described as a bound state of other fields. The set of fundamental fields and their dynamics are summarized in a model called the Standard Model and, therefore, particle physics is largely the study of the Standard Model particle content and its possible extensions." I think it came out much better. Let's see how long it takes some hot-headed Wikipedia editor to revert it. These days it's a drag to contribute to Wikipedia because of people like that.
[source: HuggingFaceFW/fineweb-edu <urn:uuid:e7f0a003-07f1-4148-a77c-6e0cb215fc0e>, chunk 0 · subdomain_quantum_field_theory · similarity 0.698267 · 400 tokens · threshold 0.6 · created 2025-12-25T18:39:09.309741]
Belgian physicist Francois Englert, left, speaks with British physicist … (Fabrice Coffrini / AFP / Getty …)

For physicists, it was a moment like landing on the moon or the discovery of DNA. The focus was the Higgs boson, a subatomic particle that exists for a mere fraction of a second. Long theorized but never glimpsed, the so-called God particle is thought to be key to understanding the existence of all mass in the universe. The revelation Wednesday that it, or some version of it, had almost certainly been detected amid hundreds of trillions of high-speed collisions in a 17-mile track near Geneva prompted a group of normally reserved scientists to erupt with joy.

[For the record: Los Angeles Times, Friday, July 06, 2012, home edition, main news, part A, page 4. Correction, Large Hadron Collider: in some copies of the July 5 edition, an article in Section A about the machine used by physicists at the European Organization for Nuclear Research to search for the Higgs boson referred to the $5-billion Large Hadron Collider. The correct amount is $10 billion.]

Peter Higgs, one of the scientists who first hypothesized the existence of the particle, reportedly shed tears as the data were presented in a jam-packed and applause-heavy seminar at CERN, the European Organization for Nuclear Research. "It's a gigantic triumph for physics," said Frank Wilczek, an MIT physicist and Nobel laureate. "It's a tremendous demonstration of a community dedicated to understanding nature." The achievement, nearly 50 years in the making, confirms physicists' understanding of how mass, the stuff that makes stars, planets and even people, arose in the universe, they said.

It also points the way toward a new path of scientific inquiry into the mass-generating mechanism that was never before possible, said UCLA physicist Robert Cousins, a member of one of the two research teams that has been chasing the Higgs boson at CERN. "I compare it to turning the corner and walking around a building: there's a whole new set of things you can look at," he said. "It is a beginning, not an end." Leaders of the two teams reported independent results that suggested the existence of a previously unseen subatomic particle with a mass of about 125 to 126 billion electron volts. Both groups got
[source: HuggingFaceFW/fineweb-edu <urn:uuid:fb237ffb-9cc0-4077-99d5-56c6fce1ca5f>, chunk 0 · subdomain_quantum_field_theory · similarity 0.624252 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:09.357393]
varying densities. The particles within matter are kinetic and in constant motion. The slower the motion of the particles, the denser the matter becomes. Also, as the particles are pushed closer together, the matter becomes denser. The best way to slow down kinetic molecules is to cool the matter; the best way to get them to move closer together is to add pressure. Inversely, when you remove the pressure from or heat any material, the molecules within it move faster and farther apart, making the material less dense. The least dense form of matter is, of course, gas. If a gas is cooled and compressed, at some point it will become a liquid. If that liquid is then cooled further, at some point it will become a solid. Also, when you take the pressure off any gas or liquid, that material will grow less dense and expand. This is essentially what happens to the gaseous molecules of our atmosphere. Our atmosphere contains approximately 79% nitrogen and 21% oxygen, a constant ratio until you reach an altitude of about 270,000 feet. So the question that always comes up is: "If I have 21% oxygen at sea level and 21% at 40,000 feet, why do I succumb to the effects of hypoxia within 20 seconds at that altitude?" The answer is atmospheric pressure! If you could picture all the gaseous nitrogen and oxygen molecules in the atmosphere, they would stack up from the surface of the earth to the fringe of space. All these molecules stacking on top of each other create a great deal of weight, or pressure. At sea level, one square inch of any surface has about 15 pounds of air sitting on top of it. At 18,000 feet, that same square inch has only 7.5 pounds per square inch (psi) exerted on it. What has caused this atmospheric pressure drop? The answer is simple: there is more air stacked up at sea level than above 18,000 feet, and therefore more weight.

As you recall, when molecules are subjected to this pressure, they move closer together. This makes the air denser with oxygen and nitrogen molecules. For example, if at sea level you take in a breath of air at an atmospheric pressure of 15 psi, that air may contain 500 billion molecules of oxygen (a fictitious number used only as an example); if you go to 18,000 feet and take the same breath, where atmospheric pressure
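The figures above (15 psi at sea level, 7.5 psi at 18,000 feet) match a common rule of thumb: atmospheric pressure roughly halves every 18,000 feet. A minimal sketch under that assumption (the halving rule is an approximation, not a formula stated in the text):

```python
# Sketch (not from the article): pressure approximated by the rule of
# thumb that it halves every 18,000 ft of altitude.
def pressure_psi(altitude_ft, sea_level_psi=15.0, halving_ft=18000.0):
    """Approximate atmospheric pressure from the halving rule."""
    return sea_level_psi * 0.5 ** (altitude_ft / halving_ft)

# Reproduces the article's numbers:
# pressure_psi(0) -> 15.0, pressure_psi(18000) -> 7.5
```

At 36,000 feet the same rule gives about 3.75 psi, a quarter of the sea-level value, which is why breathing unpressurized air at airliner altitudes delivers far fewer oxygen molecules per breath.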
[source: HuggingFaceFW/fineweb-edu <urn:uuid:ac6a19cf-dd31-4352-bb69-1c00f45050a7>, chunk 1 · subdomain_quantum_materials · similarity 0.602904 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:09.377814]
Menezes, Pradeep L and Kishore and Kailas, Satish V (2006) Studies on friction and transfer layer using inclined scratch. In: Tribology International, 39 (2), pp. 175-183.

Friction influences the nature of the transfer layer formed at the interface between die and sheet during forming. In the present investigation, basic studies were conducted using the "inclined scratch test" to understand the mechanism of transfer layer formation during sliding of pins made of an Al-Mg alloy on EN8 steel flats of different surface roughness under dry and lubricated conditions. The surfaces produced can be categorized into three types: (a) unidirectional, (b) 8-ground, and (c) random. Rubbing the EN8 flat in a unidirectional manner and a criss-cross manner on emery sheets produced the unidirectional and 8-ground surfaces. The random surfaces were produced by polishing the EN8 flats using various abrasive powders. The influence of the nature of surface roughness on material transfer and coefficient of friction was investigated. Scanning electron microscopy studies were performed on the contact surfaces of the Al-Mg alloy pins and EN8 steel flats to reveal the morphology of the transfer layer obtained. It was seen that the transfer layer is dependent on the coefficient of friction. The coefficient of friction, which has two components, the adhesion component and the plowing component, is controlled by the nature of the surface. A surface that promotes plane strain conditions near the surface increases the plowing component of friction.

Item type: Journal article
Additional information: Copyright for this article belongs to Elsevier.
Keywords: friction; nature of surface; inclined scratch
Department/Centre: Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy); Division of Mechanical Sciences > Mechanical Engineering
Date deposited: 19 Jan 2006
Last modified: 19 Sep 2010 04:23
[source: HuggingFaceFW/fineweb-edu <urn:uuid:29ad99f8-17dd-4bf4-9973-88d9fa050e74>, chunk 0 · subdomain_quantum_materials · similarity 0.62561 · 433 tokens · threshold 0.6 · created 2025-12-25T18:39:09.900409]
development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.

Chromatography terms
- The analyte is the substance to be separated during chromatography.
- Analytical chromatography is used to determine the existence and possibly also the concentration of analyte(s) in a sample.
- A bonded phase is a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
- A chromatogram is the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture.
- Plotted on the x-axis is the retention time, and plotted on the y-axis is a signal (for example, obtained by a spectrophotometer, mass spectrometer, or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system, the signal is proportional to the concentration of the specific analyte separated.
- A chromatograph is equipment that enables a sophisticated separation, e.g. a gas chromatographic or liquid chromatographic separation.
- Chromatography is a physical method of separation that distributes the components to be separated between two phases, one stationary (the stationary phase) while the other (the mobile phase) moves in a definite direction.
- The eluate is the mobile phase leaving the column.
- The eluent is the solvent that carries the analyte.
- An eluotropic series is a list of solvents ranked according to their eluting power.
- An immobilized phase is a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing.
- The mobile phase is the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography (CEC)), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography,
[source: HuggingFaceFW/fineweb-edu <urn:uuid:51ca50ec-be73-4d62-b6f9-64c6eb0ad47f>, chunk 1 · subdomain_quantum_materials · similarity 0.604772 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.315851]
maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.

Techniques by physical state of mobile phase

Gas chromatography
Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatography is always carried out in a column, which is typically "packed" or "capillary" (see below). Gas chromatography is based on a partition equilibrium of analyte between a solid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter glass tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for the high molecular weight biopolymers or proteins (heat denatures them) frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.

Liquid chromatography
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. Liquid chromatography can be carried out either in a column or on a plane. Present-day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high performance liquid chromatography (HPLC). In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane.
[source: HuggingFaceFW/fineweb-edu <urn:uuid:51ca50ec-be73-4d62-b6f9-64c6eb0ad47f>, chunk 5 · subdomain_quantum_materials · similarity 0.600338 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.319397]
HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC), and the opposite (e.g., a water-methanol mixture as the mobile phase and C18 (octadecylsilyl) as the stationary phase) is termed reversed phase liquid chromatography (RPLC). Ironically, the "normal phase" has fewer applications, and RPLC is therefore used considerably more. Specific techniques under this broad heading are listed below.

Affinity chromatography
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin, or antigens, which bind to the stationary phase specifically. After purification, some of these tags are usually removed and the pure protein is obtained. Affinity chromatography often utilizes a biomolecule's affinity for a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules. However, HPLC techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful for separating the aforementioned molecules based on their relative affinity for the metal (i.e. Dionex IMAC). Often these columns can be loaded with different metals to create a column with a targeted affinity.

Supercritical fluid chromatography
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.

Techniques by separation mechanism

Ion exchange chromatography
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in
[source: HuggingFaceFW/fineweb-edu <urn:uuid:51ca50ec-be73-4d62-b6f9-64c6eb0ad47f>, chunk 6 · subdomain_quantum_materials · similarity 0.608798 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.320214]
Deep-space communication improved with electromagnetic radiation antenna
Robert C. Dye, Technology Transfer, (505) 667-3404

Electromagnetic radiation antenna has potential for deep-space communication:
- directed energy
- long-range communications
- medicine (oncology)
- radar imaging applications are countermeasure-resistant
- communications can be spatially encrypted
- 4-dimensional volumes of energy can be aimed at a single space-time point for directed energy applications
- nonspherical decay of the cusp enables low-power communications and propagation over great distances

Los Alamos National Laboratory (LANL) researchers have developed the LightSlinger, a completely new type of antenna that produces tightly focused packets of electromagnetic radiation fundamentally different from the emissions of conventional transmitters. The device has potential applications in radar, directed energy (non-kinetic kill), secure communications, ultra-long-range communications (e.g., deep space), medicine (oncology) and astrophysics. The LightSlinger functions by producing a moving polarization pattern in a ring of alumina. By careful timing of voltages applied to electrodes that surround the alumina, the polarization pattern can be made to move superluminally, i.e., faster than the speed of light in a vacuum. Nobel laureate Vitaly Ginzburg showed both that such superluminal polarization patterns do not violate the principles of special relativity and that they emit electromagnetic radiation. Once a source travels faster than the waves that it emits, it can make contributions at multiple retarded times to a signal received instantaneously at a distance. This effect is already well known in acoustics: when a supersonic airplane accelerates through the speed of sound, a violent "sonic boom" is heard many miles away, even if the airplane itself is rather quiet.

The LightSlinger enables the same thing to be done with electromagnetic radiation; i.e., a relatively low-power source can make an "electromagnetic boom", an intense concentration of radio waves at a great distance. The "electromagnetic boom" is due to temporal focusing, that is, focusing in the time domain. Because of this effect, part of the emitted radiation possesses an intensity that decays with distance r as 1/r rather than as the conventional inverse square law, 1/r². These nonspherically decaying wavepackets represent a game-changing technology in the applications of electromagnetic radiation.

Development stage: working prototype
Patent status: patent pending
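To see why 1/r decay matters at long range, here is a purely illustrative sketch comparing the two decay laws; the unit source intensity and dimensionless distances are assumptions, not device parameters from the announcement:

```python
# Illustrative only: relative received intensity for a signal decaying as
# 1/r (the claimed temporally focused component) versus the conventional
# inverse-square law 1/r^2, for the same source intensity i0.
def intensity_1_over_r(r, i0=1.0):
    return i0 / r

def intensity_1_over_r2(r, i0=1.0):
    return i0 / r ** 2

# At r = 1000 reference distances, the 1/r component is about 1000x
# stronger than the inverse-square signal.
advantage = intensity_1_over_r(1000.0) / intensity_1_over_r2(1000.0)
```

The advantage grows linearly with distance, which is why a 1/r component would be most interesting for very long links such as deep-space communication.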
[source: HuggingFaceFW/fineweb-edu <urn:uuid:79bc5d65-38cf-489f-b8c5-6800ff88c6f7>, chunk 0 · subdomain_quantum_optics · similarity 0.615479 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.363005]
Refraction and acceleration

Name: Christopher S.

Why is it that when light travels from a more dense to a less dense medium, its speed is higher? I've read answers to this question in your archives but, sadly, still don't get it. One answer (Jasjeet S Bagla) says that we must not ask the question, because light is massless, hence questions of acceleration don't make sense. It does, however, seem to be OK to talk about different speeds of light. If you start at one speed and end at a higher one, why is one not allowed to talk about acceleration? Bagla goes on to say that it depends on how the EM fields behave in a given medium. It begs the question: what is it about, say, perspex and air that makes light accelerate, oops, travel at different speeds? If you're dealing with the same ray of light, one is forced to speak of acceleration, no? What other explanation is there for final velocity > initial velocity? Arthur Smith mentioned a very small "evanescent" component that travels ahead at c. Where can I learn more about this? Sorry for the long question. I understand that F = ma and that if there is no m, you cannot talk about a, but, again, you have one velocity higher than another for the same thing. I need to know more than "that's just the way EM fields are!"

An explanation that satisfies me relates to travel through an interactive medium. When light interacts with an atom, the photon of light is absorbed and then emitted. For a moment, the energy of the light is within the atom. This causes a slight delay. Light travels at the standard speed of light until interacting with another atom. It is absorbed and emitted, causing another slight delay. The average effect is taking more time to travel a meter through glass than through air. This works like a slower speed. An individual photon does not actually slow down; it gets delayed repeatedly by the atoms of the medium. A more dense medium has more atoms per meter.

Dr. Ken Mellendorf
Illinois Central College

Congratulations on not being willing to accept "that is just the way EM fields are!" The answer to your inquiry is not all that simple (my opinion), but I won't try to give it in the limited space allowed here, not to say my own limitations of knowledge.
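The delay picture above can be made quantitative with the refractive index: the cumulative absorption and re-emission delays make the transit time through a medium n times the vacuum time, even though between interactions the photon moves at c. A minimal sketch; the value n = 1.5 for glass is a typical textbook figure, not a number given in the answer:

```python
# Sketch of the delay picture: effective transit speed is c/n because the
# per-atom delays accumulate; n = 1.5 for glass is an assumed typical value.
C = 299_792_458.0  # speed of light in vacuum, m/s

def transit_time(length_m, n=1.0):
    """Time for light to cross length_m of a medium with refractive index n."""
    return length_m * n / C

t_vacuum = transit_time(1.0)        # 1 m of vacuum
t_glass = transit_time(1.0, n=1.5)  # 1 m of glass takes 1.5x as long
```

The photon is delayed, not slowed: the extra 50% of transit time through the glass is spent in the absorbed-and-re-emitted episodes, not in slower flight between atoms.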
[source: HuggingFaceFW/fineweb-edu <urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>, chunk 0 · subdomain_quantum_optics · similarity 0.616927 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.440811]
Like so many "simple" physics questions, I find the most lucid but accurate explanation in Richard Feynman's "Lectures on Physics," which most libraries will have: Volume I, Sections 31-1 through 31-6, which describe refraction, dispersion, and diffraction. The "answer" has to do with how matter alters the electric field of incident radiation, but I won't pretend to be able to do a better job than Feynman.

The answer is that you are not dealing with the same ray of light. In vacuum a photon just keeps going at the speed of light. In a medium, however, it interacts with the atoms, often being absorbed while bumping an atomic or molecular motion into a higher energy state. The excited atom or molecule can then jump to a lower energy state, emitting a photon as it does so. This can obviously make light appear to travel slower in a medium. In detail, it is a very complicated question, requiring at least a graduate course in electromagnetism to begin to understand. Why, for example, do the emitted photons tend to travel in the same direction?

Best, Richard J. Plano

Update: June 2012
[source: HuggingFaceFW/fineweb-edu <urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>, chunk 1 · subdomain_quantum_optics · similarity 0.64135 · 290 tokens · threshold 0.6 · created 2025-12-25T18:39:10.442451]
Topics covered: ideal solutions
Instructors/speakers: Moungi Bawendi, Keith Nelson

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: So. In the meantime, you've started looking at two-phase equilibrium. So now we're starting to look at mixtures, and so now we have more than one constituent, and we have more than one phase present. Right? So you've started to look at things that look like this, where you've got, let's say, two components, both in the gas phase, and now you try to figure out what the phase equilibria look like. Of course it's now a little bit more complicated than what you went through before, where you can get pressure-temperature phase diagrams with just a single component. Now we want to worry about the composition of each of the components in each of the phases, and the temperature and the pressure, total and partial pressures and all of that, so you can really figure out everything about both phases. And there are all sorts of important reasons to do that; obviously lots of chemistry happens in liquid mixtures, some in gas mixtures, some where they're in equilibrium, all sorts of chemical processes. Distillation, for example, takes advantage of the properties of liquid and gas mixtures, where one of them will be richer in the more volatile of the components. That can be used as a basis for purification. You mix ethanol and water together, so you've got a liquid with a certain composition of each. The gas is going to be richer in the more volatile of the two, the ethanol. So in a distillation, where you put things up into the gas, more of the ethanol comes up. You could then collect that gas, right?

And re-condense it, and make a new liquid, which is much richer in ethanol than the original liquid was. Then you could put some of that up into the gas phase, where it will be still richer in ethanol, and then you could collect that and repeat the process. So the point is that the properties of liquid-gas, two-component or multi-component mixtures like this can
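The repeat-the-process idea can be sketched numerically. Assuming an ideal solution (Raoult's law), each vaporize-and-recondense cycle enriches the mixture in the more volatile component; the pure-component vapor pressures below are approximate room-temperature values I am assuming, not numbers from the lecture, and a real ethanol-water system is non-ideal and stops enriching at the azeotrope:

```python
# Sketch of repeated distillation under the ideal-solution assumption.
# Vapor pressures are approximate 25 C values (assumed, not lecture data).
P_ETHANOL = 59.0  # torr, more volatile component
P_WATER = 23.8    # torr, less volatile component

def equilibrium_vapor_fraction(x_ethanol):
    """Ethanol mole fraction in the vapor above an ideal liquid mixture."""
    p_e = x_ethanol * P_ETHANOL          # Raoult's law, ethanol
    p_w = (1.0 - x_ethanol) * P_WATER    # Raoult's law, water
    return p_e / (p_e + p_w)             # Dalton's law

# Each cycle: vaporize, collect the vapor, recondense it into a new liquid.
x = 0.10
for _ in range(3):
    x = equilibrium_vapor_fraction(x)
# x climbs with every cycle, which is exactly the enrichment the
# lecture describes.
```

Because P_ETHANOL > P_WATER, the vapor fraction is always greater than the liquid fraction it came from, so iterating the map drives the composition toward pure ethanol in this idealized picture.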
[source: HuggingFaceFW/fineweb-edu <urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>, chunk 0 · subdomain_quantum_materials · similarity 0.629434 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.489849]
And that's what I want to spend some of today doing: just walking through what's happening physically with a container with a mixture of the two, and how that corresponds to what gets read off the diagram under different conditions. So, let's just start somewhere on a phase diagram like this. Let's start up here at some point one. So we're in the, well, not pure, we're in the all-liquid phase. It's still a mixture; it's not a pure substance. pA*, pB*. There's the gas phase. So if we start at one, there's some total pressure, and now we're going to reduce it. What happens? We start with an all-liquid mixture, no gas, and now we're going to bring down the pressure, allowing some of the liquid to go up into the gas phase. So we can do that. And once we reach point two, we find a coexistence curve. Now the liquid and gas are going to coexist. So this is the liquid phase, and that means this must be xB. And it's xB at one, but it's also xB at two, and I want to emphasize that. So let's put our pressure for two. And if we go over here, this is telling us about the mole fraction in the gas phase; that's what these curves are, remember. So this is the one that's showing us the mole fraction in the liquid phase; this nonlinear one, the mole fraction in the gas phase. So just reading off it, this is xB, the liquid mole fraction, and here's yB, the gas mole fraction. They're not the same, because of course the components have different volatility. A is more volatile. So the mole fraction of B in the liquid phase is higher than the mole fraction of B in the gas phase, because A is the more volatile component: relatively more of A, so the mole fraction of A is going to be higher up in the gas phase, which means the mole fraction of B is lower in the gas phase. So yB < xB if A is more volatile. OK, so now what's happening physically?
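The inequality yB < xB for a more volatile A can be checked with a one-step ideal-solution calculation; the pure-component vapor pressures below are made-up illustrative numbers, not values from the lecture:

```python
# Ideal-solution sketch: Raoult's law gives the partial pressures above the
# liquid, and Dalton's law gives the vapor-phase mole fraction y_b.
def vapor_composition(x_b, p_a_star, p_b_star):
    """Return (total pressure, y_b) above a liquid with mole fraction x_b of B."""
    x_a = 1.0 - x_b
    p_a = x_a * p_a_star      # Raoult's law, component A
    p_b = x_b * p_b_star      # Raoult's law, component B
    p_total = p_a + p_b
    y_b = p_b / p_total       # Dalton's law
    return p_total, y_b

# With A more volatile (illustrative 100 vs 40 torr), the gas is poorer in B:
p_total, y_b = vapor_composition(x_b=0.5, p_a_star=100.0, p_b_star=40.0)
# p_total = 70.0 torr and y_b = 2/7, well below x_b = 0.5
```

This is exactly the geometry of the two coexistence curves on the diagram: at a given pressure, the liquid-phase curve sits at a higher xB than the gas-phase curve sits at yB.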
[source: HuggingFaceFW/fineweb-edu <urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>, chunk 3 · subdomain_quantum_materials · similarity 0.628322 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.492425]
Well, we started at a point where we only had the liquid present. So at our initial pressure, we just have all liquid. There's some xB at one; that's all there is, there isn't any gas yet. Now, what happened here? Well, we lowered the pressure. So you could imagine, well, we made the box bigger. If the liquid was under pressure, being squeezed by the box, then you could make the box a little bit bigger and there's still no gas. That's moving down like this. But then you get to a point where there's just barely any pressure on top of the liquid, and then you keep expanding the box. Now some gas is going to form. So now we go to case two. We've got a bigger box, and right around where this was, this is going to be liquid, and there's gas up here. So up here is yB at pressure two; here's xB at pressure two. Liquid and gas. So that's where we are at point two. Now, what happens if we keep going? Let's lower the pressure some more. Well, we can lower it and do this. But really, if we want to see what's happening in each of the phases, we have to stay on the coexistence curves. Those are what tell us what the pressures, the partial pressures, are going to be in each of the two phases, the liquid and the gas. So let's say we lower the pressure a little more. What's going to happen is we'll end up somewhere over here in the liquid, and that'll correspond to something over here in the gas. So here's three. So now that's going to be xB at pressure three, and over here is going to be yB at pressure three. And all we've done, of course, is expanded this further. So now we've got a still taller box, and the liquid is going to be a little lower because some of it has evaporated and formed the gas phase.
[source: HuggingFaceFW/fineweb-edu <urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>, chunk 4 · subdomain_quantum_materials · similarity 0.606147 · 512 tokens · threshold 0.6 · created 2025-12-25T18:39:10.494104]
So here's xB at three; here's yB at three; here's our gas phase. Now we could decrease even further. And this is the sort of thing that you maybe can't do in real life, but I can do on a blackboard. I'm going to give myself more room on this curve to finish this illustration. There. Beautiful. So now we can lower a little bit further, and what I want to illustrate is that if we keep going down, eventually we get to a pressure where, if we look over in the gas phase, we're at the same mole fraction that we had originally in the liquid phase. So let's make four an even lower pressure. What does that mean? It means we're running out of liquid. So what's happening is, A is the more volatile component, so as we start opening up some room for gas to form, you get more of A in the gas phase, and the liquid is richer in B. But of course, eventually you run out of liquid. You make the box pretty big, and you run out, or you have the very last drop of liquid. So what's the mole fraction of B in the gas phase? It has to be the same as what it started at in the liquid phase, because after all, the total number of moles of A and B hasn't changed at all. So if you take them all from the liquid and put them all up into the gas phase, it must be the same. So yB at four, once you have just the last drop, is basically equal to xB at one, because everything is now up in the gas phase. In principle, there's still a tiny, tiny bit of liquid, xB at pressure four. Well, we could keep lowering the pressure; we could make the box a little bigger. Then the very last of the liquid is going to be gone.
subdomain_quantum_materials
0.608849
512
HuggingFaceFW/fineweb-edu
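The mole-balance argument in the transcript above (an ideal binary mixture where, once the last drop of liquid vanishes, the gas composition equals the starting liquid composition) follows from Raoult's law. A minimal numeric sketch; the pure-component vapor pressures below (pa* = 100, pb* = 40, arbitrary units) are illustrative assumptions, not values from the lecture:

```python
# Raoult's law for an ideal binary liquid (a = more volatile, so pa* > pb*).
# Vapor pressures are hypothetical illustration values, not from the lecture.
PA_STAR, PB_STAR = 100.0, 40.0

def total_pressure(xb):
    """Total vapor pressure over a liquid with mole fraction xb of b."""
    return (1.0 - xb) * PA_STAR + xb * PB_STAR

def gas_mole_fraction_b(xb):
    """yb from Dalton's law: partial pressure of b over total pressure."""
    return xb * PB_STAR / total_pressure(xb)

xb = 0.5
yb = gas_mole_fraction_b(xb)
# The gas is richer in the more volatile component a, so yb < xb,
# which is why the liquid left behind keeps getting richer in b.
print(round(total_pressure(xb), 2), round(yb, 4))
```

This is the linear coexistence relationship the lecture refers to: total pressure is linear in xb, while yb(xb) is not.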
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
5
0.6
2025-12-25T18:39:10.495003
going to be gone. and what'll happen then is, we're all here. there's no more liquid. we're not going down on the coexistence curve any more. we don't have a liquid-gas coexistence any more. we just have a gas phase. of course, we can continue to lower the pressure, and then what we're doing is just going down here. so there's five. and five is the same as this, only bigger. and so forth. ok, any questions about how this works? it's really important to just gain facility in reading these things and seeing, ok, what is it that this is telling you. and you can see it's not complicated to do it, but it takes a little bit of practice. ok. now, of course, we could do exactly the same thing starting from the gas phase and raising the pressure. and although you may anticipate that it's kind of pedantic, i really do want to illustrate something by it. so let me just imagine that we're going to do that. let's start all in the gas phase. up here's the liquid. pa star, pb star. and now let's start somewhere here. so we're down somewhere in the gas phase with some composition. so it's the same story, except now we're starting here. it's all gas. and we're going to start squeezing. we're increasing the pressure. and eventually, here's one, we'll reach two, so of course here's our yb. we started with all gas, no liquid. so this is yb of one. it's the same as yb of two; i'm just raising the pressure enough to just reach the coexistence curve. and of course, out here tells us xb of two, right? so what is it saying? we've squeezed and started to form some liquid. and the liquid is richer in component b. maybe it's ethanol-water again. and we squeeze, and now we've got more water in the liquid phase than in the gas phase, because water's the less volatile component. it's what's going to condense first. so the liquid is rich in the less volatile of the components.
now, obviously, we can continue in doing exactly the reverse of what i showed you. but all i want to
subdomain_quantum_materials
0.608921
512
HuggingFaceFW/fineweb-edu
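The compression experiment above can also be put in numbers: for an ideal gas mixture, the dew-point pressure (where the first drop of liquid appears) and the composition of that first drop follow from Raoult's law. A sketch with the same hypothetical vapor pressures as before (pa* = 100, pb* = 40, not from the lecture):

```python
# Dew point of an ideal binary gas: compress until the first drop of liquid
# condenses. Vapor pressures are hypothetical values for illustration only.
PA_STAR, PB_STAR = 100.0, 40.0

def dew_pressure(yb):
    """Pressure at which liquid first condenses from gas of composition yb."""
    ya = 1.0 - yb
    return 1.0 / (ya / PA_STAR + yb / PB_STAR)

def first_liquid_xb(yb):
    """Composition of that first drop, from Raoult's law for component b."""
    return yb * dew_pressure(yb) / PB_STAR

yb = 0.5
# The first liquid is richer in b, the less volatile component: xb > yb,
# exactly the "water condenses first" point made in the transcript.
print(round(dew_pressure(yb), 2), round(first_liquid_xb(yb), 3))
```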
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
6
0.6
2025-12-25T18:39:10.495853
component. it's what's going to condense first. so the liquid is rich in the less volatile of the components. now, obviously, we can continue doing exactly the reverse of what i showed you. but all i want to really illustrate is, this is a strategy for purification of the less volatile component. once you've done this, well, now you've got some liquid. now you could collect that liquid in a separate vessel. so let's collect the liquid mixture with xb of two. so it's got some mole fraction of b. so we've purified that. but now we're going to start, we've got pure liquid. now let's make the vessel big, so it all goes into the gas phase. then lower p. all gas. so we start with yb of three, which equals xb of two. in other words, it's the same mole fraction. so let's reconstruct that. so here's p of two. and now we're going to go to some new pressure. and the point is, now we're going to start, since the mole fraction in the gas phase that we're starting from is the same number as this was. so it's around here somewhere. that's yb of three equals xb of two. and we're down here. in other words, all we've done is make the container big enough so the pressure's low and it's all in the gas phase. that's all we have, is the gas. but the composition is whatever the composition is that we extracted here from the liquid. so this xb, which is the liquid mole fraction, is now yb, the gas mole fraction. of course, the pressure is different, lower than it was before. great. now let's increase. so here's three. and now let's increase the pressure to four. and of course what happens, now we've got coexistence. so here's liquid. here's gas. so now we're over here again. there's xb at pressure four, purer still in component b. we can repeat the same procedure. collect it, all liquid, put it in a new vessel. expand it, lower the pressure, all goes back into the gas phase. do it all again.
and the point is, what you're doing is walking along
subdomain_quantum_materials
0.606914
512
HuggingFaceFW/fineweb-edu
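The collect / re-expand / re-compress cycle described above can be iterated numerically: full vaporization makes the new gas composition equal the old liquid composition, and the next dew-point compression gives a liquid still richer in b. A sketch with the same hypothetical vapor pressures (pa* = 100, pb* = 40):

```python
# Repeated purification of the less volatile component b, as in the lecture:
# vaporize the collected liquid completely, compress to the dew point, and
# keep only the first liquid that condenses. Hypothetical vapor pressures.
PA_STAR, PB_STAR = 100.0, 40.0

def condense_step(yb):
    """One cycle: dew-point pressure, then Raoult's law for the new liquid."""
    p_dew = 1.0 / ((1.0 - yb) / PA_STAR + yb / PB_STAR)
    return yb * p_dew / PB_STAR

xb = 0.5                      # starting liquid composition
history = [xb]
for _ in range(6):
    xb = condense_step(xb)    # full vaporization makes yb equal the old xb
    history.append(xb)
# Each cycle walks further along the coexistence curve toward pure b.
print([round(x, 3) for x in history])
```

Each cycle multiplies the odds xb/(1 - xb) by pa*/pb*, so the walk converges to pure b.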
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
7
0.6
2025-12-25T18:39:10.496690
phases? so at the end of the day, you can figure out, ok, now when i reach a certain degree of purification, here's how much of the stuff i end up with. well, that turns out to be reasonably straightforward to do. and so what i'll go through is a simple mathematical derivation. and it turns out that it allows you to just read right off the diagram how much of each material you're going to end up with. so, here's what happens. this is something called the lever rule. how much of each component is there in each phase? so let's consider a case like this. let me draw yet once again, just to get the numbering consistent with how we'll treat this. so we're going to start here. and i want to draw it right in the middle, so i've got plenty of room. and we're going to go up to some pressure. and somewhere out there, now i can go to my coexistence curves. liquid. and gas. and i can read off my values. so this is the liquid xb. so i'm going to go up to some point two; here's xb of two. here's yb of two. great. now let's get these written in. so let's just define terms a little bit. na, nb, or just our total number of moles. ng and n liquid, of course, total number of moles in the gas and liquid phases. so let's just do the calculation for each of these two cases. we'll start with one. that's the easier case, because then we have only the gas. so at one, all gas. it says pure gas in the notes, but of course that isn't the pure gas. it's the mixture of the two components. so, how many moles of a? well, it's the mole fraction of a in the gas times the total number of moles in the gas. let me put one in here, just to be clear. and since we have all gas, the number of moles in the gas is just the total number of moles. so this is just ya at one times n total. let's just write that in. and of course n total is equal to na plus nb. so now let's look at condition two. now we have to look a little more carefully
subdomain_quantum_materials
0.610639
512
HuggingFaceFW/fineweb-edu
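The mole bookkeeping the transcript sets up leads to the lever rule: given an overall composition zb between the tie-line endpoints xb (liquid) and yb (gas), the amounts of the two phases follow from conservation of b. A sketch with illustrative numbers (the specific compositions below are assumptions, not from the lecture):

```python
# Lever rule: split n_total moles of overall composition zb between a liquid
# phase (composition xb) and a gas phase (composition yb) on one tie line.
def lever_rule(zb, xb, yb, n_total=1.0):
    """Return (n_gas, n_liquid). zb must lie between xb and yb."""
    f_gas = (zb - xb) / (yb - xb)   # fraction of all moles in the gas phase
    return f_gas * n_total, (1.0 - f_gas) * n_total

# Illustrative tie line: liquid at xb = 0.6, gas at yb = 0.2, overall 0.5.
n_gas, n_liq = lever_rule(0.5, xb=0.6, yb=0.2)
# Sanity check: the two phase compositions must average back to zb.
print(round(n_gas, 3), round(n_liq, 3))
```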
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
9
0.6
2025-12-25T18:39:10.498343
mole fraction in that case has to be the same. and what this is just telling us mathematically is, when that happens, this is zero. that means i don't have any gas left. yeah. professor: no. because, so it's the mole fraction in the gas phase. but you've started with some amount, and it's only going to go down from there. professor: yeah. yeah. any other questions? ok. well, now what i want to do is just put up a slightly different kind of diagram, but different in an important way. namely, instead of showing the mole fractions as a function of the pressure. and i haven't written it in, but all of these are at constant temperature, right? i've assumed the temperature is constant in all these things. now let's consider the other possibility, the other simple possibility, which is, let's hold the pressure constant and vary the temperature. of course, you know in the lab, that's usually what's easiest to do. now, unfortunately, the arithmetic gets more complicated. it's not monumentally complicated, but here in this case, you have one linear relationship, which is very convenient, from raoult's law, and then you have one non-linear relationship there for the mole fraction of the gas. in the case of temperature, neither one is linear. nevertheless, we can just sketch what the diagram looks like. and of course it's very useful to do that, and see how to read off it. and i should say the derivation of the curves isn't particularly complicated. it's not particularly more complicated than what i think you saw last time to derive this. there's no complicated math involved. but the point is, the derivation doesn't yield a linear relationship for either the gas or the liquid part of the coexistence curve. ok, so we're going to look at temperature and mole fraction phase diagrams. again, a little more complicated mathematically, but more practical in real use. and this is t.
and here is the, sort of, form that these things take. so again, neither one is linear. up here, now, of course, if you raise the temperature, that's where you end up with gas. if you lower the temperature, you condense and get the liquid. so, this is ta star. tb star. so now i want to stick with a as
subdomain_quantum_materials
0.604529
512
HuggingFaceFW/fineweb-edu
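Because neither branch of the T-x diagram is linear, the bubble-point temperature at fixed pressure has to be found numerically. A sketch using bisection; the lecture names ethanol-water, but here benzene/toluene with Antoine constants taken from standard tables (mmHg, degrees C) stand in as an assumed example pair:

```python
# Bubble-point temperature at fixed total pressure, by bisection on
# Raoult's law. Benzene/toluene Antoine constants (mmHg, deg C) are
# assumed from standard tables; they are not values from the lecture.
def p_sat(T, A, B, C):
    """Antoine equation: saturation pressure in mmHg at T in deg C."""
    return 10.0 ** (A - B / (C + T))

BENZENE = (6.90565, 1211.033, 220.790)   # more volatile component "a"
TOLUENE = (6.95464, 1344.800, 219.480)   # less volatile component "b"

def bubble_T(xa, p_total=760.0, lo=0.0, hi=200.0):
    """Find T where xa * pa_sat(T) + xb * pb_sat(T) = p_total."""
    for _ in range(60):                  # bisection; plenty of precision
        mid = 0.5 * (lo + hi)
        p = xa * p_sat(mid, *BENZENE) + (1 - xa) * p_sat(mid, *TOLUENE)
        if p < p_total:
            lo = mid                     # too cold: vapor pressure too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The mixture boils between the two pure boiling points.
print(round(bubble_T(0.5), 1))
```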
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
12
0.6
2025-12-25T18:39:10.500890
course if you raise the temperature, that's where you end up with gas. if you lower the temperature, you condense and get the liquid. so, this is ta star. tb star. so now i want to stick with a as the more volatile component. at constant temperature, that meant that pa star is bigger than pb star. in other words, the vapor pressure over pure liquid a is higher than the vapor pressure over pure liquid b. similarly, now i've got constant pressure, and really what i'm looking at, let's say i'm at the limit where i've got the pure liquid, or the pure a. and now i'm going to, let's say, raise the temperature until i'm at the liquid-gas equilibrium. that's just the boiling point. so if a is the more volatile component, it has the lower boiling point. and that's what this reflects. so higher pa star corresponds to lower ta star, which is just the boiling point of pure a. so, this is called the bubble line. that's called the dew line. all that means is, let's say i'm at high temperature. i've got all gas. right, no coexistence, no liquid yet. and i start to cool things off, just to where i just barely start to get liquid. what you see that as is, dew starts forming. a little bit of condensation. if you're outside, it means on the grass a little bit of dew is forming. similarly, if i start at low temperature, all liquid, now i start raising the temperature until i just start to boil. i just start to see the first bubbles forming. and so that's why these things have those names. so now let's just follow along what happens when i do the same sort of thing that i illustrated there. i want to start at one point in this phase diagram and then start changing the conditions. so let's start here. so i'm going to start all in the liquid phase. that is, the temperature is low. here's xb and my original temperature. now i'm going to raise it. so if i raise it a little bit, i reach a point at which i first start to boil.
start to find some gas above the liquid. and if i look right here, that'll be my composition. let me raise it a little farther
subdomain_quantum_materials
0.607422
512
HuggingFaceFW/fineweb-edu
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
13
0.6
2025-12-25T18:39:10.501751
more. still get a substantial amount of enrichment. and now i've got, in the gas phase, i'm further enriched in component a. and again i can collect the gas. condense it. now i'm out here somewhere; i've got all liquid, and i'll raise the temperature again. and i can again keep walking my way over. and that's what happens during an ordinary distillation. each step of the distillation walks along in the phase diagram at some selected point. and of course what you're doing is, you're always condensing the gas and starting with fresh liquid that now is enriched in the more volatile of the components. so of course, if you're really purifying, say, ethanol from an ethanol-water mixture, that's how you do it. ethanol is the more volatile component. so a still is set up. it will boil the stuff, collect the gas, and condense it. and boil it again, and so forth. and the whole thing can be set up in a very efficient way, so you have essentially continuous distillation, where you have a whole sequence of collection and condensation and reheating and so forth. so then, in a practical way, it's possible to walk quite far along the distillation, the coexistence curve, and distill to really a high degree of purification. any questions about how that works? ok. i'll leave till next time the discussion of the chemical potentials. but what we'll do, just to foreshadow a little bit, what i'll do at the beginning of the next lecture is what's at the end of your notes here. which is just to say, ok, now if we look at raoult's law, it's straightforward to say what is the chemical potential for each of the substances in the liquid and the gas phase. of course, it has to be equal. given that, that's for an ideal solution. we can gain some insight from that. and then look at real solutions, non-ideal solutions, and understand a lot of their behavior as well.
just from starting from our understanding of what the chemical potential does even in a simple ideal mixture. so we'll look at the chemical potentials. and then we'll look at non-ideal solution mixtures next time. see you then.
subdomain_quantum_thermodynamics
0.614114
502
HuggingFaceFW/fineweb-edu
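The stage-by-stage walk of an ordinary distillation can be sketched with a constant relative volatility, a standard simplification in which each boil/condense stage maps the liquid composition through the equilibrium curve. The value alpha = 2.5 below is an illustrative assumption, not a number from the lecture:

```python
# Multi-stage distillation walk: each stage boils the liquid, takes the
# equilibrium vapor, and condenses it into the next stage's liquid.
# A constant relative volatility alpha = 2.5 is an assumed illustration.
ALPHA = 2.5

def vapor_fraction(xa):
    """Equilibrium vapor composition of the more volatile component a."""
    return ALPHA * xa / (1.0 + (ALPHA - 1.0) * xa)

xa = 0.10                      # dilute starting liquid
stages = [xa]
for _ in range(8):
    xa = vapor_fraction(xa)    # condensate of this stage feeds the next
    stages.append(xa)
# The composition walks monotonically toward the pure volatile component.
print([round(x, 3) for x in stages])
```

Each stage multiplies the odds xa/(1 - xa) by alpha, which is why a modest number of stages gives a high degree of purification.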
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
15
0.6
2025-12-25T18:39:10.503426
topics covered: encapsulation, inheritance, shadowing. instructors: prof. eric grimson, prof. john guttag. operator: the following content is provided under a creative commons license. your support will help mit opencourseware continue to offer high quality educational resources for free. to make a donation or view additional materials from hundreds of mit courses, visit mit opencourseware at ocw.mit.edu. professor: last lecture we were talking about classes and object-oriented programming, and we're going to come back to it today. i'm going to remind you, we were talking about it because we suggested it is a really powerful way of structuring systems, and that's really why we want to use it; it's a very common way of structuring systems. so today i'm going to pick up on a bunch of more nuanced, or more complex if you like, ways of leveraging the power of classes. we're going to see a bunch of examples that are going to give us a sense. i'm going to talk about inheritance, we're going to talk about shadowing, we're going to talk about iterators. but before we get to it, i want to start by just highlighting, sort of, what was the point of classes? so i'll remind you. a class, i said, was basically a template for an abstract data type. and this was really to drive home this idea of modularity. i want the ability to say, i've got a set of things that naturally belong together, i'm going to cluster them together, i want to treat it like it's a primitive, i want to treat it like it's a float or an int or a string. is this going to be a point or a segment or something different like that? so it's really a way, as i said, of just trying to cluster data together. and this is a notion of modularity slash abstraction, where i'm treating them as primitives. but the second thing we talked about is that we also have a set of methods. we use the special name method because we're talking about classes,
but basically these are functions that are designed to deal with this data structure. we're trying to group those together as well. so we cluster data and methods. second key thing we said was, in the ideal case, which unfortunately python isn't, but we'll come back
subdomain_quantum_field_theory
0.610129
512
HuggingFaceFW/fineweb-edu
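The lecture's two headline topics, inheritance and shadowing, fit in a minimal sketch. The `Shape`/`Circle` names are made up for illustration; they are not the classes from the course notes:

```python
# Minimal sketch of inheritance and shadowing, in the spirit of the lecture.
# The Shape/Circle names are illustrative, not taken from the course notes.
class Shape:
    def __init__(self, name):
        self.name = name

    def describe(self):
        return "shape: " + self.name

class Circle(Shape):
    def __init__(self, radius):
        Shape.__init__(self, "circle")   # reuse the parent's initializer
        self.radius = radius

    def describe(self):
        # This method *shadows* Shape.describe: attribute lookup walks from
        # the instance's class upward and stops at the first match.
        return Shape.describe(self) + " r=" + str(self.radius)

c = Circle(2)
print(c.describe())          # the subclass method wins the lookup
print(isinstance(c, Shape))  # a Circle is still a Shape
```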
<urn:uuid:356021a3-01be-42dc-ae50-e22e74e8edfd>
0
0.6
2025-12-25T18:39:10.533244
we had a running joke in science ed that kids get so overexposed to discrepant events involving density and air pressure that they tend to try to explain anything and everything they don't understand with respect to science in terms of those two concepts. why do we have seasons? ummm... air pressure? why did dr. smith use that particular research design? ummm... density? i think we need another catch-all explanation. i suggest index of refraction. to simplify greatly, index of refraction describes the amount of bending a light ray will undergo as it passes from one medium to another (it's also related to the velocity of light in both media, but i do want to keep this simple). if the two media have significantly different indices, light passing from one to the other at an angle (not perpendicularly, in which case there is no bending) will be bent more than if the indices of the two are similar. the first four data points are from hyperphysics, the final one from wikipedia... glass has a wide range of compositions and thus indices of refraction. water at 20 °c: 1.33. typical soda-lime glass: close to 1.5. since glycerine and glass have similar indices of refraction, light passing from one to the other isn't bent; as long as both are transparent and similarly colored, each will be effectively "invisible" against the other. so, why does it rain? umm... index of refraction? a bright moon impact 12 hours ago
subdomain_quantum_optics
0.610774
317
HuggingFaceFW/fineweb-edu
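The "similar indices means little bending" claim above can be quantified with Snell's law. The glycerine index of 1.47 below is an assumed typical value (the post doesn't list it), and 30 degrees is an arbitrary incidence angle:

```python
import math

# Snell's law: n1 * sin(t1) = n2 * sin(t2). Compare how much a ray bends
# entering soda-lime glass (n = 1.5) from water (1.33) versus glycerine
# (1.47, an assumed typical value) at 30 degrees incidence.
def bending_deg(n1, n2, incidence_deg):
    """Angular deviation of the refracted ray from the incident direction."""
    t1 = math.radians(incidence_deg)
    t2 = math.asin(n1 * math.sin(t1) / n2)
    return incidence_deg - math.degrees(t2)

water_to_glass = bending_deg(1.33, 1.50, 30.0)
glyc_to_glass = bending_deg(1.47, 1.50, 30.0)
# Matched indices barely deflect the ray, which is why glass "vanishes"
# in glycerine while staying clearly visible in water.
print(round(water_to_glass, 2), round(glyc_to_glass, 2))
```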
<urn:uuid:7eeb7ef3-3122-42f0-86c8-01da8f3d7396>
0
0.6
2025-12-25T18:39:10.578236
gallium metal is silver-white and melts at approximately body temperature (wikipedia image).
| atomic number: | 31 |
| atomic symbol: | ga |
| atomic weight: | 69.72 |
| atomic radius: | 187 pm (van der waals) |
| melting point: | 29.76 °c |
| boiling point: | 2204 °c |
| electron configuration: | [ar] 4s2 3d10 4p1 |
| oxidation states: | 3 |
from the latin word gallia, france; also from latin gallus, a translation of "lecoq," a cock. predicted and described by mendeleev as ekaaluminum, and discovered spectroscopically by lecoq de boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in koh. gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. some flue dusts from burning coal have been shown to contain as much as 1.5 percent gallium. it is one of four metals (along with mercury, cesium, and rubidium) that can be liquid near room temperature and, thus, can be used in high-temperature thermometers. it has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures. there is a strong tendency for gallium to supercool below its freezing point; therefore, seeding may be necessary to initiate solidification. ultra-pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. the metal expands 3.1 percent on solidifying; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies. high-purity gallium is attacked only slowly by mineral acids. gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. it is widely used in doping semiconductors and producing solid-state devices such as transistors. magnesium gallate containing divalent impurities, such as mn+2, is finding use in commercial ultraviolet-activated powder phosphors.
gallium arsenide is capable of converting electricity directly into coherent light. gallium readily alloys with most metals, and has been used as a component in low-melting alloys.
subdomain_quantum_materials
0.622286
512
HuggingFaceFW/fineweb-edu
<urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048>
0
0.6
2025-12-25T18:39:10.590374
professor of electrical engineering at the university of california, berkeley, predicted the existence of a fourth fundamental device, which he called a memristor. he proved that memristor behavior could not be duplicated by any circuit built using only the other three elements, which is why the memristor is truly fundamental. memristor is a contraction of "memory resistor," because that is exactly its function: to remember its history. a memristor is a two-terminal device whose resistance depends on the magnitude and polarity of the voltage applied to it and the length of time that voltage has been applied. when you turn off the voltage, the memristor remembers its most recent resistance until the next time you turn it on, whether that happens a day later or a year later. think of a resistor as a pipe through which water flows. the water is electric charge. the resistor's obstruction of the flow of charge is comparable to the diameter of the pipe: the narrower the pipe, the greater the resistance. for the history of circuit design, resistors have had a fixed pipe diameter. but a memristor is a pipe that changes diameter with the amount and direction of water that flows through it. if water flows through this pipe in one direction, it expands (becoming less resistive). but send the water in the opposite direction and the pipe shrinks (becoming more resistive). further, the memristor remembers its diameter when water last went through. turn off the flow and the diameter of the pipe "freezes" until the water is turned back on. that freezing property suits memristors brilliantly for computer memory. the ability to indefinitely store resistance values means that a memristor can be used as a nonvolatile memory. that might not sound like very much, but go ahead and pop the battery out of your laptop, right now: no saving, no quitting, nothing. you'd lose your work, of course.
but if your laptop were built using a memory based on memristors, when you popped the battery back in, your screen would return to life with everything exactly as you left it: no lengthy reboot, no half-dozen auto-recovered files. but the memristor's potential goes far beyond instant-on computers to embrace one of the grandest technology challenges: mimicking the functions of a brain. within a decade, memristors could let us emulate
subdomain_quantum_materials
0.61544
512
HuggingFaceFW/fineweb-edu
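The pipe analogy above maps directly onto a charge-controlled resistance. A much-simplified sketch (linear in net charge, with saturation at the two ends of the range); the parameter values are assumptions for illustration, not the HP device's numbers:

```python
# Much-simplified charge-controlled memristor, mirroring the pipe analogy:
# resistance drifts between R_OFF and R_ON with the net charge that has
# flowed through, and "freezes" when the current stops. All parameter
# values are assumptions for illustration, not actual device values.
R_ON, R_OFF = 100.0, 16000.0   # fully open / fully closed "pipe"
Q_MAX = 1e-4                   # net charge that sweeps the full range

class Memristor:
    def __init__(self):
        self.q = 0.0           # net charge passed: the remembered history

    def push_charge(self, dq):
        """Positive charge widens the pipe; negative charge narrows it."""
        self.q = min(Q_MAX, max(0.0, self.q + dq))

    def resistance(self):
        x = self.q / Q_MAX     # internal state between 0 and 1
        return R_OFF + (R_ON - R_OFF) * x

m = Memristor()
r0 = m.resistance()            # starts fully "off"
m.push_charge(5e-5)            # forward flow: resistance drops
r1 = m.resistance()
m.push_charge(-5e-5)           # reverse flow: resistance recovers
r2 = m.resistance()
print(r0, r1, r2)
```

Because the state variable is the integral of the current, doing nothing (zero current) leaves the resistance unchanged, which is the nonvolatile-memory property the article emphasizes.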
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
1
0.6
2025-12-25T18:39:10.842922
recovered files. but the memristor's potential goes far beyond instant-on computers to embrace one of the grandest technology challenges: mimicking the functions of a brain. within a decade, memristors could let us emulate, instead of merely simulate, networks of neurons and synapses. many research groups have been working toward a brain in silico: ibm's blue brain project, howard hughes medical institute's janelia farm, and harvard's center for brain science are just three. however, even a mouse brain simulation in real time involves solving an astronomical number of coupled partial differential equations. a digital computer capable of coping with this staggering workload would need to be the size of a small city, and powering it would require several dedicated nuclear power plants. memristors can be made extremely small, and they function like synapses. using them, we will be able to build analog electronic circuits that could fit in a shoebox and function according to the same physical principles as a brain. a hybrid circuit, containing many connected memristors and transistors, could help us research actual brain function and disorders. such a circuit might even lead to machines that can recognize patterns the way humans can, in those critical ways computers can't: for example, picking a particular face out of a crowd even if it has changed significantly since our last memory of it. the story of the memristor is truly one for the history books. when leon chua, now an ieee fellow, wrote his seminal paper predicting the memristor, he was a newly minted and rapidly rising professor at uc berkeley. chua had been fighting for years against what he considered the arbitrary restriction of electronic circuit theory to linear systems. he was convinced that nonlinear electronics had much more potential than the linear circuits that dominate electronics technology to this day.
chua discovered a missing link in the pairwise mathematical equations that relate the four circuit quantities (charge, current, voltage, and magnetic flux) to one another. these can be related in six ways. two are connected through the basic physical laws of electricity and magnetism, and three are related by the known circuit elements: resistors connect voltage and current, inductors connect flux and current, and capacitors connect voltage and charge. but one equation is missing from this group: the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit, or more subtly, a mathematical doppelganger defined by faraday’
subdomain_quantum_computing
0.613291
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
2
0.6
2025-12-25T18:39:10.844830
and capacitors connect voltage and charge. but one equation is missing from this group: the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit, or more subtly, a mathematical doppelganger defined by faraday's law as the time integral of the voltage across the circuit. this distinction is the crux of a raging internet debate about the legitimacy of our memristor [see sidebar, "resistance to memristance"]. chua's memristor was a purely mathematical construct that had more than one physical realization. what does that mean? consider a battery and a transformer. both provide identical voltages, for example, 12 volts of direct current, but they do so by entirely different mechanisms: the battery by a chemical reaction going on inside the cell, and the transformer by taking a 110 v ac input, stepping that down to 12 v ac, and then transforming that into 12 v dc. the end result is mathematically identical; both will run an electric shaver or a cellphone, but the physical source of that 12 v is completely different. conceptually, it was easy to grasp how electric charge could couple to magnetic flux, but there was no obvious physical interaction between charge and the integral over the voltage. chua demonstrated mathematically that his hypothetical device would provide a relationship between flux and charge similar to what a nonlinear resistor provides between voltage and current. in practice, that would mean the device's resistance would vary according to the amount of charge that passed through it. and it would remember that resistance value even after the current was turned off. he also noticed something else: that this behavior reminded him of the way synapses function in a brain.
even before chua had his eureka moment, however, many researchers were reporting what they called "anomalous" current-voltage behavior in the micrometer-scale devices they had built out of unconventional materials, like polymers and metal oxides. but the idiosyncrasies were usually ascribed to some mystery electrochemical reaction, electrical breakdown, or other spurious phenomenon attributed to the high voltages that researchers were applying to their devices. as it turns out, a great many of these reports were unrecognized examples of memristance. after chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at hp labs, and we only really understood the device about two years ago
subdomain_quantum_materials
0.623517
512
HuggingFaceFW/fineweb-edu
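The six pairwise relations described above can be written compactly. Two pairings are definitions from the basic laws (current as the rate of charge flow, Faraday's law for flux and voltage), three are the classic element laws, and the fourth element law is Chua's memristor completing the square:

```latex
% definitions from the basic physical laws:
i = \frac{dq}{dt}, \qquad v = \frac{d\varphi}{dt}
% the four element laws (the last is Chua's memristor):
dv = R\,di \ \text{(resistor)}, \quad
d\varphi = L\,di \ \text{(inductor)}, \quad
dq = C\,dv \ \text{(capacitor)}, \quad
d\varphi = M\,dq \ \text{(memristor)}
```

Dividing the memristor relation through by $dt$ gives $v = M(q)\,i$, which is why memristance looks like a charge-dependent resistance.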
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
3
0.6
2025-12-25T18:39:10.845865
examples of memristance. after chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at hp labs, and we only really understood the device about two years ago. so what took us so long? it's all about scale. we now know that memristance is an intrinsic property of any electronic circuit. its existence could have been deduced by gustav kirchhoff or by james clerk maxwell, if either had considered nonlinear circuits in the 1800s. but the scales at which electronic devices have been built for most of the past two centuries have prevented experimental observation of the effect. it turns out that the influence of memristance obeys an inverse square law: memristance is a million times as important at the nanometer scale as it is at the micrometer scale, and it's essentially unobservable at the millimeter scale and larger. as we build smaller and smaller devices, memristance is becoming more noticeable and in some cases dominant. that's what accounts for all those strange results researchers have described. memristance has been hidden in plain sight all along. but in spite of all the clues, our finding the memristor was completely serendipitous. in 1995, i was recruited to hp labs to start up a fundamental research group that had been proposed by david packard. he decided that the company had become large enough to dedicate a research group to long-term projects that would be protected from the immediate needs of the business units. packard had an altruistic vision that hp should "return knowledge to the well of fundamental science from which hp had been withdrawing for so long." at the same time, he understood that long-term research could be the strategic basis for technologies and inventions that would directly benefit hp in the future. hp gave me a budget and four researchers.
but beyond the comment that "molecular-scale electronics" would be interesting and that we should try to have something useful in about 10 years, i was given carte blanche to pursue any topic we wanted. we decided to take on moore's law. at the time, the dot-com bubble was still rapidly inflating its way toward a resounding pop, and the existing semiconductor road map didn't extend past 2010. the critical feature size for the transistors on an integrated circuit was 350 nanometers; we had a long way to go before atomic sizes would become a
subdomain_quantum_materials
0.636138
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
4
0.6
2025-12-25T18:39:10.846766
that a crossbar array is basically a storage system, with an open switch representing a zero and a closed switch representing a one. you read the data by probing the switch with a small voltage. like everything else at the nanoscale, the switches and wires of a crossbar are bound to be plagued by at least some nonfunctional components. these components will be only a few atoms wide, and the second law of thermodynamics ensures that we will not be able to completely specify the position of every atom. however, a crossbar architecture builds in redundancy by allowing you to route around any parts of the circuit that don ’ t work. because of their simplicity, crossbar arrays have a much higher density of switches than a comparable integrated circuit based on transistors. but implementing such a storage system was easier said than done. many research groups were working on such a cross - point memory — and had been since the 1950s. even after 40 years of research, they had no product on the market. still, that didn ’ t stop them from trying. that ’ s because the potential for a truly nanoscale crossbar memory is staggering ; picture carrying around the entire library of congress on a thumb drive. one of the major impediments for prior crossbar memory research was the small off - to - on resistance ratio of the switches ( 40 years of research had never produced anything surpassing a factor of 2 or 3 ). by comparison, modern transistors have an off - to - on resistance ratio of 10 000 to 1. we calculated that to get a high - performance memory, we had to make switches with a resistance ratio of at least 1000 to 1. in other words, in its off state, a switch had to be 1000 times as resistive to the flow of current as it was in its on state. what mechanism could possibly give a nanometer - scale device a three - orders - of - magnitude resistance ratio? we found the answer in scanning tunneling microscopy ( stm ), an area of research i had been pursuing for a decade. 
A tunneling microscope generates atomic-resolution images by scanning a very sharp needle across a surface and measuring the electric current that flows between the atoms at the tip of the needle and the surface the needle is probing. The general rule of thumb in STM is that moving that tip 0.1 nm closer to a surface increases the tunneling current by one order of magnitude.
subdomain_quantum_materials
0.637334
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
6
0.6
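The rule of thumb quoted in the passage above (one order of magnitude of tunneling current per 0.1 nm of gap) implies an exponential distance dependence, which is exactly what makes a sub-nanometer mechanical change yield a 1000:1 electrical ratio. A minimal sketch of that arithmetic (the function name and the exact exponential form are my own framing of the stated rule, not the authors' model):

```python
# Rule of thumb from the text: moving the STM tip 0.1 nm closer to the
# surface multiplies the tunneling current by 10. Equivalently, the
# current depends exponentially on the gap:
#   I(d) = I0 * 10**(-d / 0.1)   with d in nanometers.

def current_ratio(delta_nm: float) -> float:
    """Factor by which tunneling current grows when the gap shrinks by delta_nm."""
    return 10 ** (delta_nm / 0.1)

# A 0.3 nm change in effective spacing gives the 1000:1 switching ratio
# the authors calculated they needed (up to float rounding).
print(current_ratio(0.3))  # ~1000
```

This is why the group went looking for a material whose effective electrode spacing could change by about 0.3 nm.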
2025-12-25T18:39:10.848465
We needed some similar mechanism by which we could change the effective spacing between two wires in our crossbar by 0.3 nm. If we could do that, we would have the 1000:1 electrical switching ratio we needed. Our constraints were getting ridiculous. Where would we find a material that could change its physical dimensions like that? That is how we found ourselves in the realm of molecular electronics. Conceptually, our device was like a tiny sandwich. Two platinum electrodes (the intersecting wires of the crossbar junction) functioned as the "bread" on either end of the device. We oxidized the surface of the bottom platinum wire to make an extremely thin layer of platinum dioxide, which is highly conducting. Next, we assembled a dense film, only one molecule thick, of specially designed switching molecules. Over this "monolayer" we deposited a 2- to 3-nm layer of titanium metal, which bonds strongly to the molecules and was intended to glue them together. The final layer was the top platinum electrode. The molecules were supposed to be the actual switches. We built an enormous number of these devices, experimenting with a wide variety of exotic molecules and configurations, including rotaxanes, special switching molecules designed by James Heath and Fraser Stoddart at the University of California, Los Angeles. The rotaxane is like a bead on a string, and with the right voltage, the bead slides from one end of the string to the other, causing the electrical resistance of the molecule to rise or fall, depending on the direction it moves.
Heath and Stoddart's devices used silicon electrodes, and they worked, but not well enough for technological applications: the off-to-on resistance ratio was only a factor of 10, the switching was slow, and the devices tended to switch themselves off after 15 minutes. Our platinum devices yielded results that were nothing less than frustrating. When a switch worked, it was spectacular: our off-to-on resistance ratios shot past the 1000 mark, the devices switched too fast for us to even measure, and having switched, the device's resistance state remained stable for years (we still have some early devices we test every now and then, and we have never seen a significant change in resistance). But our fantastic results were inconsistent.
subdomain_quantum_materials
0.634363
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
7
0.6
2025-12-25T18:39:10.849318
Worse yet, the success or failure of a device never seemed to depend on the same thing. We had no physical model for how these devices worked. Instead of rational engineering, we were reduced to performing huge numbers of Edisonian experiments, varying one parameter at a time and attempting to hold all the rest constant. Even our switching molecules were betraying us; it seemed like we could use anything at all. In our desperation, we even turned to long-chain fatty acids (essentially soap) as the molecules in our devices. There's nothing in soap that should switch, and yet some of the soap devices switched phenomenally. We also made control devices with no molecule monolayers at all. None of them switched. We were frustrated and burned out. Here we were, in late 2002, six years into our research. We had something that worked, but we couldn't figure out why, we couldn't model it, and we sure couldn't engineer it. That's when Greg Snider, who had worked with Kuekes on the Teramac, brought me the Chua memristor paper from the September 1971 IEEE Transactions on Circuit Theory. "I don't know what you guys are building," he told me, "but this is what I want." To this day, I have no idea how Greg happened to come across that paper. Few people had read it, fewer had understood it, and fewer still had cited it. At that point, the paper was 31 years old and apparently headed for the proverbial dustbin of history. I wish I could say I took one look and yelled "Eureka!", but in fact the paper sat on my desk for months before I even tried to read it. When I did study it, I found the concepts and the equations unfamiliar and hard to follow.
But I kept at it because something had caught my eye, as it had Greg's: Chua had included a graph that looked suspiciously similar to the experimental data we were collecting. The graph described the current-voltage (i-v) characteristics that Chua had plotted for his memristor. Chua had called them "pinched-hysteresis loops"; we called our i-v characteristics "bow ties."
subdomain_quantum_materials
0.626304
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
8
0.6
2025-12-25T18:39:10.850191
A pinched hysteresis loop looks like a diagonal infinity symbol with the center at the zero axis, when plotted on a graph of current against voltage. The voltage is first increased from zero to a positive maximum value, then decreased to a minimum negative value, and finally returned to zero. The bow ties on our graphs were nearly identical [see graphic, "Bow Ties"]. That's not all. The total change in the resistance we had measured in our devices also depended on how long we applied the voltage: the longer we applied a positive voltage, the lower the resistance, until it reached a minimum value. And the longer we applied a negative voltage, the higher the resistance became, until it reached a maximum limiting value. When we stopped applying the voltage, whatever resistance characterized the device was frozen in place, until we reset it by once again applying a voltage. The loop in the i-v curve is called hysteresis, and this behavior is startlingly similar to how synapses operate: synaptic connections between neurons can be made stronger or weaker depending on the polarity, strength, and length of a chemical or electrical signal. That's not the kind of behavior you find in today's circuits. Looking at Chua's graphs was maddening. We now had a big clue that memristance had something to do with our switches. But how? Why should our molecular junctions have anything to do with the relationship between charge and magnetic flux? I couldn't make the connection. Two years went by. Every once in a while I would idly pick up Chua's paper, read it, and each time I understood the concepts a little more. But our experiments were still pretty much trial and error. The best we could do was to make a lot of devices and find the ones that worked.
But our frustration wasn't for nothing: by 2004, we had figured out how to do a little surgery on our little sandwiches. We built a gadget that ripped the tiny devices open so that we could peer inside them and do some forensics. When we pried them apart, the little sandwiches separated at their weakest point: the molecule layer. For the first time, we could get a good look at what was going on inside.
subdomain_quantum_materials
0.647119
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
9
0.6
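The pinched loop and the duration-dependent resistance described above can be reproduced with a toy memristor model. The sketch below uses the linear-dopant-drift form the HP group later published, not the authors' actual device equations; all parameter values (`R_ON`, `R_OFF`, `MU`, the drive frequency) are illustrative assumptions:

```python
import math

# Toy memristor with linear dopant drift. State x in [0, 1] is the
# fraction of the oxide film that is oxygen-deficient (conductive).
# Parameters are illustrative, chosen only to make the loop visible.

R_ON, R_OFF = 100.0, 16000.0  # bounding resistances in ohms (assumed)
MU = 3e5                      # drift coefficient setting switching speed (assumed)
DT = 1e-4                     # integration time step in seconds

def simulate(freq_hz=10.0, v_amp=1.0, cycles=2):
    x = 0.1
    vs, cur, xs = [], [], []
    for k in range(int(cycles / (freq_hz * DT))):
        v = v_amp * math.sin(2 * math.pi * freq_hz * k * DT)
        r = R_ON * x + R_OFF * (1.0 - x)  # two oxide layers in series
        i = v / r
        vs.append(v); cur.append(i); xs.append(x)
        # Positive current drives the state up (resistance falls); negative
        # current drives it back down, and the state freezes when i = 0.
        x = min(1.0, max(0.0, x + MU * i * DT))
    return vs, cur, xs

vs, cur, xs = simulate()
# Plotting cur against vs would trace a "pinched" loop: whenever v
# crosses zero, i is also ~zero, but the slope (conductance) differs
# between the up-sweep and the down-sweep.
```

The longer the voltage is applied in one polarity, the further the state drifts, which is exactly the duration dependence the passage describes.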
2025-12-25T18:39:10.851045
We were in for a shock: what we had was not what we had built. Recall that we had built a sandwich with two platinum electrodes as the bread, filled with three layers: the platinum dioxide, the monolayer film of switching molecules, and the film of titanium. But that's not what we found. Under the molecular layer, instead of platinum dioxide, there was only pure platinum. Above the molecular layer, instead of titanium, we found an unexpected and unusual layer of titanium dioxide. The titanium had sucked the oxygen right out of the platinum dioxide! The oxygen atoms had somehow migrated through the molecules and been consumed by the titanium. This was especially surprising because the switching molecules had not been significantly perturbed by this event; they were intact and well ordered, which convinced us that they must be doing something important in the device. The chemical structure of our devices was not at all what we had thought it was. The titanium dioxide (a stable compound found in sunscreen and white paint) was not just regular titanium dioxide. It had split itself up into two chemically different layers. Adjacent to the molecules, the oxide was stoichiometric TiO2, meaning the ratio of oxygen to titanium was perfect, exactly 2 to 1. But closer to the top platinum electrode, the titanium dioxide was missing a tiny amount of its oxygen, between 2 and 3 percent. We called this oxygen-deficient titanium dioxide TiO2-x, where x is about 0.05. Because of this misunderstanding, we had been performing the experiment backward: every time I had tried to create a switching model, I had reversed the switching polarity. In other words, I had predicted that a positive voltage would switch the device off and a negative voltage would switch it on.
In fact, exactly the opposite was true. It was time to get to know titanium dioxide a lot better. They say three weeks in the lab will save you a day in the library every time. In August of 2006 I did a literature search and found about 300 relevant papers on titanium dioxide. I saw that each of the many different communities researching titanium dioxide had its own way of describing the compound. By the end of the month, the pieces had fallen into place. I finally knew how our device worked.
subdomain_quantum_materials
0.617463
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
10
0.6
2025-12-25T18:39:10.851913
I knew why we had a memristor. The exotic molecule monolayer in the middle of our sandwich had nothing to do with the actual switching. Instead, what it did was control the flow of oxygen from the platinum dioxide into the titanium to produce the fairly uniform layers of TiO2 and TiO2-x. The key to the switching was this bilayer of the two different titanium dioxide species [see diagram, "How Memristance Works"]. The TiO2 is electrically insulating (actually a semiconductor), but the TiO2-x is conductive, because its oxygen vacancies are donors of electrons, which makes the vacancies themselves positively charged. The vacancies can be thought of like bubbles in a glass of beer, except that they don't pop; they can be pushed up and down at will in the titanium dioxide material because they are electrically charged. Now I was able to predict the switching polarity of the device. If a positive voltage is applied to the top electrode of the device, it will repel the (also positive) oxygen vacancies in the TiO2-x layer down into the pure TiO2 layer. That turns the TiO2 layer into TiO2-x and makes it conductive, thus turning the device on. A negative voltage has the opposite effect: the vacancies are attracted upward and back out of the TiO2, and thus the thickness of the TiO2 layer increases and the device turns off. This switching polarity is what we had been seeing for years but had been unable to explain.
On 20 August 2006, I solved the two most important equations of my career: one detailing the relationship between current and voltage for this equivalent circuit, and another describing how the application of the voltage causes the vacancies to move. In doing so, I wrote down, for the first time, an equation for memristance in terms of the physical properties of a material. This provided a unique insight: memristance arises in a semiconductor when both electrons and charged dopants are forced to move simultaneously by applying a voltage to the system.
subdomain_quantum_materials
0.654351
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
11
0.6
2025-12-25T18:39:10.852748
The memristance did not actually involve magnetism in this case; the integral over the voltage reflected how far the dopants had moved, and thus how much the resistance of the device had changed. We finally had a model we could use to engineer our switches, which we had by now positively identified as memristors. Now we could use all the theoretical machinery Chua had created to help us design new circuits with our devices. Triumphantly, I showed the group my results and immediately declared that we had to take the molecule monolayers out of our devices. Skeptical after years of false starts and failed hypotheses, my team reminded me that we had run control samples without molecule layers for every device we had ever made, and that those devices had never switched. And getting the recipe right turned out to be tricky indeed: we needed to find the exact amounts of titanium and oxygen to get the two layers to do their respective jobs. By that point we were all getting impatient. In fact, it took so long to get the first working device that in my discouragement I nearly decided to put the molecule layers back in. A month later, it worked. We not only had working devices, but we were also able to improve and change their characteristics at will. But here is the real triumph: the resistance of these devices stayed constant whether we turned off the voltage or just read their states (interrogating them with a voltage so small it left the resistance unchanged). The oxygen vacancies didn't roam around; they remained absolutely immobile until we again applied a positive or negative voltage. That's memristance: the devices remembered their current history. We had coaxed Chua's mythical memristor off the page and into being. Emulating the behavior of a single memristor, Chua showed, requires a circuit with at least 15 transistors and other passive elements.
The implications are extraordinary: just imagine how many kinds of circuits could be supercharged by replacing a handful of transistors with one single memristor. The most obvious benefit is to memories. In its initial state, a crossbar memory has only open switches, and no information is stored. But once you start closing switches, you can store vast amounts of information compactly and efficiently. Because memristors remember their state, they can store data indefinitely, using energy only when you toggle or read
subdomain_quantum_materials
0.654082
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
12
0.6
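The two equations the passage alludes to (a state-dependent Ohm's law and a drift equation for the vacancies) can be written out explicitly. This is the linear-drift form published by the HP group (Strukov et al., Nature 2008); the symbols are from that formulation, not from the passage itself: $w(t)$ is the thickness of the conductive TiO2-x region, $D$ the total film thickness, and $\mu_v$ the average vacancy mobility.

```latex
% State-dependent Ohm's law: the device is the series combination of a
% doped (R_on) and an undoped (R_off) region.
\begin{align}
  v(t) &= \left( R_{\mathrm{on}}\,\frac{w(t)}{D}
        + R_{\mathrm{off}}\left(1 - \frac{w(t)}{D}\right) \right) i(t) \\
  \frac{dw}{dt} &= \mu_v\,\frac{R_{\mathrm{on}}}{D}\, i(t)
\end{align}
% Integrating the drift equation makes w a function of the charge
% q(t) = \int i\,dt, so (for R_on << R_off) the resistance depends on
% charge alone:
\begin{equation}
  M(q) = R_{\mathrm{off}}\left(1 - \frac{\mu_v R_{\mathrm{on}}}{D^2}\, q(t)\right)
\end{equation}
```

The "integral over the voltage" mentioned in the text corresponds to this charge integral: it records how far the dopants have moved, and therefore what resistance the device currently has.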
2025-12-25T18:39:10.853658
Now that we've said a lot about individual operators on vector spaces, I want to go back and consider some other sorts of structures we can put on the space itself. Foremost among these is the idea of a bilinear form. This is really nothing but a bilinear function to the base field: $B : V \times V \to \mathbb{F}$. Of course, this means that it's equivalent to a linear function from the tensor square: $V \otimes V \to \mathbb{F}$. Instead of writing this as a function, we will often use a slightly different notation: we write a bracket $\langle v, w \rangle$, or sometimes $\langle v, w \rangle_B$, if we need to specify which of multiple different inner products is under consideration. Another viewpoint comes from recognizing that we've got a duality for vector spaces. This lets us rewrite our bilinear form as a linear transformation $B_1 : V \to V^*$. We can view this as saying that once we pick one of the vectors, the bilinear form reduces to a linear functional, which is a vector in the dual space: $B_1(v) = \langle v, \cdot \rangle$. Or we could focus on the other slot and define $B_2(v) = \langle \cdot, v \rangle$. We know that the dual space of a finite-dimensional vector space has the same dimension as the space itself, which raises the possibility that $B_1$ or $B_2$ is an isomorphism from $V$ to $V^*$. If either one is, then both are, and we say that the bilinear form is nondegenerate. We can also note that there is a symmetry on the category of vector spaces: a linear transformation defined by $T(v \otimes w) = w \otimes v$. This makes it natural to ask what effect this has on our form. Two obvious possibilities are that $\langle v, w \rangle = \langle w, v \rangle$ and that $\langle v, w \rangle = -\langle w, v \rangle$. In the first case we'll call the bilinear form "symmetric", and in the second we'll call it "antisymmetric". In terms of the maps $B_1$ and $B_2$, composing with the symmetry swaps the roles of these two functions. For symmetric bilinear forms, $B_1 = B_2$, while for antisymmetric bilinear forms we have $B_1 = -B_2$. This leads us to consider nondegenerate bilinear forms a little more. If $B_1$ is an isomorphism it has an inverse $B_1^{-1}$. Then we can form the composite $B_1^{-1} \circ B_2 : V \to V$. If the form is symmetric then this composition is the identity transformation on $V$.
On the other hand, if the form is antisymmetric then this composition is the negative of the identity transformation. Thus, the composite transformation measures how much the bilinear transformation diverges from symmetry. Accordingly, we call it the asymmetry of the form. Finally, if we're working over a finite-dimensional vector space we can pick a basis for $V$, and get a matrix for $B$.
subdomain_quantum_field_theory
0.616527
512
HuggingFaceFW/fineweb-edu
<urn:uuid:3bf09a24-c60d-45a0-b8e6-cc02ddac7ed6>
0
0.6
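The "asymmetry" composite described above is easy to see concretely in coordinates. Picking a basis turns the form $B$ into a matrix $M$ with $B(v, w) = v^{T} M w$, and the two induced maps correspond to $M$ and $M^{T}$; composing one with the inverse of the other gives $(M^{T})^{-1} M$, which is the identity for a symmetric form and minus the identity for an antisymmetric one. A pure-Python sketch with 2x2 matrices (the helper functions and example matrices are mine, for illustration only):

```python
# Coordinate version of the asymmetry of a bilinear form:
# with B(v, w) = v^T M w, the composite becomes (M^T)^{-1} M.

def transpose(m):
    return [[m[j][i] for j in range(len(m))] for i in range(len(m[0]))]

def inv2(m):
    # Inverse of a 2x2 matrix; nonzero determinant = nondegenerate form.
    (a, b), (c, d) = m
    det = a * d - b * c
    assert det != 0, "form must be nondegenerate"
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def asymmetry(m):
    """(M^T)^{-1} M: identity iff symmetric, -identity iff antisymmetric."""
    return matmul(inv2(transpose(m)), m)

sym = [[2.0, 1.0], [1.0, 3.0]]    # symmetric, nondegenerate
alt = [[0.0, 1.0], [-1.0, 0.0]]   # antisymmetric (symplectic) form

print(asymmetry(sym))  # ~identity matrix (up to float rounding)
print(asymmetry(alt))  # ~negative identity
```

A form that is neither symmetric nor antisymmetric yields an asymmetry operator that is neither $\pm$ identity, which is exactly the sense in which it "measures" the divergence from symmetry.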
2025-12-25T18:39:10.969618
The Gram-Schmidt process. Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be "orthonormal": each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection $\{e_i\}$ is orthonormal if $\langle e_i, e_j \rangle = \delta_{ij}$. These can be useful things to have, but how do we get our hands on them? It turns out that if we have a linearly independent collection of vectors $v_1, \dots, v_n$ then we can come up with an orthonormal collection $e_1, \dots, e_n$ spanning the same subspace of $V$. Even better, we can pick it so that the first $k$ vectors $e_1, \dots, e_k$ span the same subspace as $v_1, \dots, v_k$. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt. We proceed by induction on the number of vectors in the collection. If $n = 1$, then we simply set $e_1 = v_1 / \lVert v_1 \rVert$. This "normalizes" the vector to have unit length, but doesn't change its direction. It spans the same one-dimensional subspace, and since it's alone it forms an orthonormal collection. Now, let's assume the procedure works for collections of size $n - 1$ and start out with a linearly independent collection of $n$ vectors. First, we can orthonormalize the first $n - 1$ vectors using our inductive hypothesis. This gives a collection $e_1, \dots, e_{n-1}$ which spans the same subspace as $v_1, \dots, v_{n-1}$ (and so on down, as noted above). But $v_n$ isn't in the subspace spanned by the first $n - 1$ vectors (or else the original collection wouldn't have been linearly independent), so it points at least somewhat in a new direction. To find this new direction, we define $w_n = v_n - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle e_i$. This vector will be orthogonal to all the vectors from $e_1$ to $e_{n-1}$, since for any such $e_j$ we can check $\langle e_j, w_n \rangle = \langle e_j, v_n \rangle - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle \langle e_j, e_i \rangle = \langle e_j, v_n \rangle - \langle e_j, v_n \rangle = 0$, where we use the orthonormality of the collection to show that most of these inner products come out to be zero. So we've got a vector orthogonal to all the ones we've collected so far, but it might not have unit length. So we normalize it: $e_n = w_n / \lVert w_n \rVert$, and we're done.
subdomain_quantum_field_theory
0.635934
434
HuggingFaceFW/fineweb-edu
<urn:uuid:4a2ad899-7ba0-4bfc-9276-c5c5c0845fe6>
0
0.6
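The inductive procedure above translates directly into code. A minimal sketch for real vectors with the standard dot product (pure Python, no libraries; the function names and the dependence tolerance are my own choices):

```python
# Gram-Schmidt orthonormalization for real vectors under the dot product.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of real vectors.

    As in the text, the first k outputs span the same subspace as the
    first k inputs.
    """
    basis = []
    for v in vectors:
        # w = v minus its components along the vectors found so far
        w = list(v)
        for e in basis:
            c = dot(e, v)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        n = norm(w)
        if n < 1e-12:  # w ~ 0 means v was in the span already
            raise ValueError("vectors are linearly dependent")
        basis.append([wi / n for wi in w])
    return basis

e = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# e is now an orthonormal basis: dot(e[i], e[j]) ~ delta_ij
```

For a complex inner product the same loop works with a conjugating `dot`; the projection coefficient must use the slot the product is conjugate-linear in.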
2025-12-25T18:39:10.971443
The scientific world is abuzz with news of the ratification of the existence of the subatomic particle called the Higgs boson, or more colloquially, the "God particle." This subatomic particle's existence, which was verified recently (with virtually near certainty) by experiments at the Large Hadron Collider in Switzerland, lends credence to several long-standing physical theories such as the so-called Standard Model and the Big Bang theory. The nickname "God particle" is ironic for two reasons. First, generally, the nuclear physicists who deal with these matters, postulating the fundamental physical laws of the universe and then setting about to either verify or refute them, tend not to be regular churchgoers. While there are some highly prominent scientists who balance personal, religious beliefs with professional, scientific quests, most probably go along with the thoughts of the world-famous physicist Stephen Hawking: "I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark." [interview in The Guardian, 7/9/12] "Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God..." [from his book The Grand Design, 2010] So it is a bit ironic that physics' most famous quest has resulted in the discovery of the "God particle." Most physicists are quite comfortable having their names associated with famous, even if dead, humans like Newton, Einstein, or the aforementioned Hawking. One will find few, if any, attributions to deities in the objects that physicists discover and name or the theories they propose. Second, and more importantly, the discovery that the God particle really exists does not, as the name suggests, imply that God played some role in the creation of the universe. In fact, quite the opposite.
The matter is discussed at some length in the July 9 Daily Beast by Lawrence Krauss, a well-known physicist and cosmologist from Arizona State University: "This term [God particle] appeared first in the unfortunate title of a book written by physicist Leon Lederman two decades ago, and while to my knowledge it was never used by any scientist (including Lederman) before or since, it has captured the media's imagination. What makes this term particularly unfortunate is that nothing could be further from the truth."
subdomain_quantum_field_theory
0.625768
512
HuggingFaceFW/fineweb-edu
<urn:uuid:ed184b23-5659-4b91-97c0-fd818297d417>
0
0.6
2025-12-25T18:39:11.281889
"Assuming the particle in question is indeed the Higgs, it validates an unprecedented revolution in our understanding of fundamental physics and brings science closer to dispensing with the need for any supernatural shenanigans all the way back to the beginning of the universe... If these bold, some would say arrogant, notions derive support from the remarkable results at the Large Hadron Collider, they may reinforce two potentially uncomfortable possibilities: first, that many features of our universe, including our existence, may be accidental consequences of conditions associated with the universe's birth; and second, that creating 'stuff' from 'no stuff' seems to be no problem at all; everything we see could have emerged as a purposeless quantum burp in space or perhaps a quantum burp of space itself. Humans, with their remarkable tools and their remarkable brains, may have just taken a giant step toward replacing metaphysical speculation with empirically verifiable knowledge. The Higgs particle is now arguably more relevant than God." So the term "God particle" was first used by a scientist, but was picked up and popularized by the media. It's catchy and enhances interest in the subject among the public. But like so much else that the media promotes, it is misleading and inappropriate.
subdomain_quantum_field_theory
0.632446
308
HuggingFaceFW/fineweb-edu
<urn:uuid:ed184b23-5659-4b91-97c0-fd818297d417>
1
0.6
2025-12-25T18:39:11.282454
Pointelle: an open-work knitting pattern used on garments to add texture; typically a cooler, general knit sweater.
Polyester: a fabric made from synthetic fibers; quick drying, easy to wash, and holds its shape well.
Ponte: a knit fabric where the fibers are looped in an interlock; the material is very strong and firm.
Poplin: a strong woven fabric, heavier in weight, with ribbing.
Rayon: a manufactured fiber developed originally as an alternative to silk; drapes well and looks luxurious.
Sateen: a cotton fabric with a sheen that resembles satin.
Seersucker: a slack-tension weave where yarn is bunched together in certain areas and pulled taut in others to create this summery mainstay.
Shirring: similar to ruching; gathers material to create folds.
Silk: one of the most luxurious fabrics; soft, warm, and lustrous. It is obtained from the cocoons of the silkworm's larvae.
Silk shantung: a rough plain-weave fabric made of uneven yarns to produce a textured effect, made of fibers such as silk, in which all knots and lumps are retained.
Space dyed: a technique of yarn dyeing to produce a multi-color effect on the yarn itself; also known as dip-dyed yarn.
Spandex: also known as Lycra (trademark), this material is able to expand 600% and still snap back to its original shape and form. Spandex fibers are woven with cotton and other materials to make fabrics stretch.
Tipping: similar to edging; embellishing a garment at the edges of the piece, hems, collars, etc.
Tissue linen: a type of linen specifically made for blouses or shirts due to its thinness and sheerness.
Tweed: a loose weave of heavy wool that provides warmth and comfort.
Twill: a fabric woven in a diagonal weave; commonly used for chinos and denim.
Variegated: multi-colored fabrics where colors are splotched or in patches.
Velour: a stretchy knit fabric that looks similar to velvet; very soft to the touch.
Velvet: a soft, silky woven fabric similar to velour. Velvet is much more expensive than velour due to the amount of thread and steps it takes to
subdomain_quantum_materials
0.604953
512
HuggingFaceFW/fineweb-edu
<urn:uuid:04a048d3-152b-45eb-ac6e-e7717919a899>
3
0.6
2025-12-25T18:39:11.293095
Brookhaven National Laboratory was established in 1947 on the eastern end of Long Island at the former site of the U.S. Army's Camp Upton. Originally built out of a post-World War II desire to explore the peaceful applications of atomic energy, the laboratory now has a broader mission: to perform basic and applied research at the frontiers of science, including nuclear and high-energy physics; physics and chemistry of materials; nanoscience; energy and environmental research; national security and nonproliferation; neurosciences and medical imaging; structural biology; and computational sciences. Over its history, Brookhaven Lab has housed three research reactors, numerous one-of-a-kind particle accelerators, and other cutting-edge research facilities responsible for discoveries leading to many advances for science and society, as well as seven Nobel Prizes. Brookhaven was originally conceived, in part, to establish a national laboratory in the northeastern United States to design, construct, and operate large scientific machines that individual institutions could not afford to develop on their own. Throughout the years, Brookhaven's scientists and visiting researchers have used these unique facilities to make discoveries in biology, physics, chemistry, geophysics, medicine, and materials science. Since Brookhaven opened its doors, countless innovations and inventions by staff and visiting scientists have contributed to research in many fields. Discoveries made here have shaped our understanding of the atom and the universe, advanced medical imaging techniques, and created new technology and tools for studying microbiology, climate and pollutants, energy storage, and more.
subdomain_quantum_field_theory
0.602608
306
HuggingFaceFW/fineweb-edu
<urn:uuid:ed9dbb98-4768-4a07-84b7-372f728fdb7b>
0
0.6
2025-12-25T18:39:11.439297
Acrylic: a synthetic fabric often used as a wool substitute; warm, soft, holds colors well, and often stain- and wrinkle-resistant.
Angora rabbit hair: a soft fiber knit from the fur of the Angora rabbit. Angora wool is often combined with cashmere or another fiber to strengthen its delicate structure; dry cleaning is recommended for angora products.
Bedford: a strong, raised corded fabric (similar to corduroy) that wears well and is usually washable.
Boot: footwear which covers the entire foot and extends to the height of the anklebone or up to the thigh.
Bootie: a shoe that resembles a boot in style but is not as high.
Brocade: an all-over floral, raised pattern produced in a similar fashion to embroidery.
Cable knit: patterns, typically used in sweaters, where flat knit columns, otherwise known as cables, are overlapped vertically.
Cashmere: a soft, strong, silky, lightweight wool spun from the Kashmir goat; commonly used in sweaters, shawls, outerwear, gloves, and scarves for its warmth and soft feel.
Chiffon: a common evening-wear fabric made from silk, cotton, rayon, or nylon; delicate in nature and sheer.
Chintz: a printed and glazed cotton fabric known for its bright colors and bold patterns.
Circumference: the measurement around the shaft of a boot, taken at the widest part.
Corduroy: cotton-blend fibers twisted as they are woven to create long parallel grooves, called wales, in the fabric. This is a very durable material and, depending on the width of the wales, can be extremely soft.
Cotton: a natural fiber that grows in the seed pod of the cotton plant; an inelastic fiber.
Crepe: used as a description of fabric surfaces; usually designates a fabric that is crimped or crinkled.
Crinoline: a lightweight, plain-weave, stiffened fabric with a low yarn count; used to create volume beneath evening or wedding dresses.
Crochet: looping threads with a hooked needle to create a wide, open lace.
Typically used on sweaters for warm seasons.
Cushioning: padding on the sole of a shoe for added comfort and stabilization.
Denim: a cotton-blend fabric created with a twill weave to make a sturdy fabric; used as the primary material of blue jeans.
Dobby: a woven fabric where the weave of the fabric actually produces the garment's design.
Embroidery: detailed needlework,
subdomain_quantum_materials
0.600387
512
HuggingFaceFW/fineweb-edu
<urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6>
0
0.6
2025-12-25T18:39:12.603005
Jacquard: a fabric of intricate variegated weave or pattern, typically shown on elegant and more expensive pieces.
Jersey: a type of knit material known to be flexible, stretchy, soft, and very warm; created using tight stitches.
Knit: a fabric made by interlocking loops of one or more yarns, either by hand with knitting needles or by machine.
Linen: an exquisite material created from the fibers of the flax plant. Some linens contain slubs, or small knots, on the fabric; a light fabric perfect for warm weather.
Lining: the leather, fabric, or synthetic material used on the inside of a shoe.
Lamé: a metallic or plastic fiber woven into material to give the garment shine.
Lycra (trademark): spandex fibers that add stretch to fabric when woven with other fiber blends; these materials are lightweight, comfortable, and breathable, and the stretch will not wear away.
Madras: originating from Madras, India, this fabric is a lightweight cotton material used for summer clothing. Madras usually has a checked pattern but also comes in plaid or with stripes; typically made from 100% cotton.
Marled: typically found in sweaters; marled yarn occurs when two colored yarns are twisted together.
Matte: a matte finish has a lusterless surface.
Merino wool: wool sheared from the Merino sheep and spun into yarn that is fine but strong.
Modal: a type of rayon made from natural fibers but chemically treated to ensure a high threshold of breakage; soft and breathable, which is why it's used as a cotton replacement.
Non-iron: a treated cotton that allows easy-care shirts to stay crisp throughout the day without ironing after washing and drying.
Nylon: a synthetic fiber that is versatile, fast drying, and strong, with a high resistance to damage.
Ombré: a color technique that shades a color from light to dark.
Paisley: a pattern consisting of crooked teardrop designs in a repetitive manner.
patent leather leather made from cattle hide that has been varnished to give a hard and glossy finish. placket the piece of fabric or cloth that is used as a concealing flap to cover buttons, fasteners or attachments. most commonly seen in the front of button - down shirts. also used to reinforce openings or slits in garments. piping binding a seam with decoration. piping is similar to tipping or edging where a decorative material is sewn into the seams
subdomain_quantum_materials
0.609317
512
HuggingFaceFW/fineweb-edu
<urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6>
2
0.6
2025-12-25T18:39:12.619752
most commonly seen on the front of button-down shirts. also used to reinforce openings or slits in garments. piping binding a seam with decoration. piping is similar to tipping or edging, where a decorative material is sewn into the seams. pointelle an open-work knitting pattern used on garments to add texture. typically a cooler and general knit sweater. polyester a fabric made from synthetic fibers. polyester is quick drying, easy to wash and holds its shape well. ponte a knit fabric where the fibers are looped in an interlock. the material is very strong and firm. poplin a strong woven fabric, heavier in weight, with ribbing. pump classically a high, medium, or low heeled, totally enclosed shoe. variations include an open toe or ornament. rayon a manufactured fiber developed originally as an alternative for silk. rayon drapes well and looks luxurious. sateen a fabric woven with sheen that resembles satin. seersucker slack-tension weave where yarn is bunched together in certain areas and then pulled taut in others to create this summery mainstay. shaft height measurement of the shaft of the boot, which is from the top of the boot to the inside seam where the instep and the sole meet. shirring similar to ruching, shirring gathers material to create folds. silk one of the most luxurious fibers, silk is soft, warm and has shine. it is obtained from the cocoons of the silkworm's larvae. sole the outsole, or bottom part of a shoe. space dyed technique of yarn dyeing to produce a multi-color effect on the yarn itself. also known as dip dyed yarn. spandex an elastomeric fiber, this material is able to expand 600% and still snap back to its original shape and form. spandex fibers are woven with cotton and other fibers to make fabrics stretch. stacked heel a heel made of leather or a wood covering that gives the appearance of wood. synthetic materials man-made materials designed to look or function like leather. 
tipping similar to edging, tipping includes embellishing a garment at the edges of the piece, hems, collars, etc. tissue linen a type of linen, which is specifically made for blouses or shirts due to its thinness and sheerness. tweed a loose weave of heavy wool makes up tweed, which provides warmth and comfort. twill a fabric woven in a diagonal weave. commonly used for chinos and denim. variegated multi-
subdomain_quantum_materials
0.617684
512
HuggingFaceFW/fineweb-edu
<urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6>
3
0.6
2025-12-25T18:39:12.647863
bootie a shoe that resembles a boot in style but is not as high. brocade an all-over floral, raised pattern produced in a similar fashion to embroidery. circumference the measurement around the shaft of a boot taken at the widest part. cotton a natural fiber that grows in the seed pod of the cotton plant. it is an inelastic fiber. cushioning padding on the sole of a shoe for added comfort and stabilization. dobby a woven fabric where the weave of the fabric actually produces the garment's design. embroidery detailed needlework, usually raised and created by yarn, thread or embroidery floss. faille a slightly ribbed, woven fabric of silk, cotton, or rayon. houndstooth a classic design containing two colors in jagged / slanted checks. similar to glen plaid. lining the leather, fabric or synthetic material used on the inside of a shoe. lame a metallic or plastic fiber woven into material to give the garment shine. marled typically found in sweaters, marled yarn occurs when two colored yarns are twisted together. matte a matte finish has a lusterless surface. merino wool wool sheared from the merino sheep and spun into yarn that is fine but strong. ombre a color technique that shades a color from light to dark. paisley a pattern that consists of crooked teardrop designs in a repetitive manner. poplin a strong woven fabric, heavier in weight, with ribbing. sateen a fabric woven with sheen that resembles satin. shirring similar to ruching, shirring gathers material to create folds. sole the outsole, or bottom part of a shoe. stacked heel a heel made of leather or a wood covering that gives the appearance of wood. synthetic materials man-made materials designed to look or function like leather. tweed a loose weave of heavy wool makes up tweed, which provides warmth and comfort. twill a fabric woven in a diagonal weave. commonly used for chinos and denim. variegated multi-colored fabrics where colors are splotched or in patches. 
viscose a cellulosic man-made fiber, viscose is soft and supple but can wrinkle easily. wedge heel a heel that lies flat to the ground and extends from the shank to the back of the shoe. woven a woven fabric is formed by interlacing threads, yarns, strands, or strips of some material.
subdomain_quantum_materials
0.61646
491
HuggingFaceFW/fineweb-edu
<urn:uuid:34eaf969-a050-46fc-9917-ce3a0e03647a>
0
0.6
2025-12-25T18:39:12.741751
this operation is particularly well suited for finding the spikes in fourier transform power spectra, as illustrated previously. the top hat is also good for locating any features of a known size by adjusting the radius of the crown. objects too large to fit into the crown of the hat are selectively removed. reversing the logic to use the darkest values in both regions enables the same procedure to isolate dust or other dark features. by replacing the interior value by the mean of the surroundings, the dust can be selectively removed. in this application, shown in the rolling ball filter interactive java tutorial, the method is called a rolling ball filter. john c. russ - materials science and engineering dept., north carolina state university, raleigh, north carolina, 27695. matthew parry-hill and michael w. davidson - national high magnetic field laboratory, 1800 east paul dirac dr., the florida state university, tallahassee, florida, 32310. © 1998-2009 by michael w. davidson, john russ, olympus america inc., and the florida state university.
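The top-hat idea above can be sketched in a few lines. This is a minimal pure-Python 1-D version, not the tutorial's implementation: erosion and dilation with a flat structuring element of radius r, and the white top-hat as the signal minus its opening, which keeps only features too narrow to fit under the "crown".

```python
# Minimal 1-D grayscale morphology sketch (illustrative, not the
# tutorial's code). The white top-hat keeps only features narrower
# than the structuring element of radius r.

def erode(sig, r):
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, r):
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_tophat(sig, r):
    opening = dilate(erode(sig, r), r)  # opening = dilation of erosion
    return [s - o for s, o in zip(sig, opening)]

# A slow ramp (background) with one narrow spike at index 10:
signal = [0.1 * i for i in range(21)]
signal[10] += 5.0
peaks = white_tophat(signal, r=2)
# peaks is ~0 along the ramp and ~5 at the spike
```

Swapping min and max gives the dual (black) top-hat, matching the text's note about reversing the logic to isolate dark features such as dust.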
subdomain_quantum_metrology
0.606683
280
HuggingFaceFW/fineweb-edu
<urn:uuid:5a975454-3da4-4fb7-b80e-35755220af39>
2
0.6
2025-12-25T18:39:14.822413
nov. 27, 2009 physicists from the japanese-led multi-national t2k neutrino collaboration have just announced that over the weekend they detected the first neutrino events generated by their newly built neutrino beam at the j-parc (japan proton accelerator research complex) accelerator laboratory in tokai, japan. protons from the 30-gev main ring synchrotron were directed onto a carbon target, where their collisions produced charged particles called pions. these pions travelled through a helium-filled volume where they decayed to produce a beam of the elusive particles called neutrinos. these neutrinos then flew 200 metres through the earth to a sophisticated detector system capable of making detailed measurements of their energy, direction, and type. the data from the complex detector system are still being analysed, but the physicists have seen at least 3 neutrino events, in line with the expectation based on the current beam and detector performance. this detection therefore marks the beginning of the operational phase of the t2k experiment, a 474-physicist, 13-nation collaboration to measure new properties of the ghostly neutrino. neutrinos interact only weakly with matter, and thus pass effortlessly through the earth (and mostly through the detectors!). neutrinos exist in three types, called electron, muon, and tau; linked by particle interactions to their more familiar charged cousins like the electron. measurements over the last few decades, notably by the super kamiokande and kamland neutrino experiments in western japan, have shown that neutrinos possess the strange property of neutrino oscillations, whereby one type of neutrino will turn into another as they propagate through space. 
neutrino oscillations, which require neutrinos to have mass and therefore were not allowed in our previous theoretical understanding of particle physics, probe new physical laws and are thus of great interest in the study of the fundamental constituents of matter. they may even be related to the mystery of why there is more matter than anti-matter in the universe, and thus are the focus of intense study worldwide. precision measurements of neutrino oscillations can be made using artificial neutrino beams, as pioneered by the k2k neutrino experiment where neutrinos from the kek laboratory were detected using the vast super kamiokande neutrino detector near toyama. t2k is a more powerful and sophisticated version of
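The oscillation the article describes is often quoted in the standard two-flavor approximation, P = sin²(2θ)·sin²(1.27·Δm²·L/E) with Δm² in eV², L in km and E in GeV. The numeric inputs below are illustrative values in the T2K ballpark, assumed for the sketch rather than taken from the article.

```python
import math

# Two-flavor neutrino oscillation probability (standard approximation):
#   P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# dm2 in eV^2, L in km, E in GeV. Parameter values below are
# illustrative, not quoted from the article.

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_gev):
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return sin2_2theta * math.sin(phase) ** 2

p = osc_prob(sin2_2theta=1.0, dm2_ev2=2.4e-3, L_km=295.0, E_gev=0.6)
# close to 1: this baseline/energy combination sits near an
# oscillation maximum, which is why such beams are sensitive probes
```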
subdomain_quantum_materials
0.640419
512
HuggingFaceFW/fineweb-edu
<urn:uuid:73f94bf7-72a9-431b-90ac-37db05302858>
0
0.6
2025-12-25T18:39:15.333962
by i. peterson unlike an ordinary, incandescent bulb, a laser produces light of a single wavelength. moreover, the emitted light waves are coherent, meaning that all of the energy peaks and troughs are precisely in step. now, a team at the massachusetts institute of technology has demonstrated experimentally that a cloud consisting of millions of atoms can also be made coherent. instead of flying about and colliding randomly, the atoms display coordinated behavior, acting as if the entire assemblage were a single entity. according to quantum mechanics, atoms can behave like waves. thus, two overlapping clouds made up of atoms in coherent states should produce a zebra-striped interference pattern of dark and light fringes, just like those generated when two beams of ordinary laser light overlap. by detecting such a pattern, the researchers proved that the clouds' atoms are coherent and constitute an "atom laser," says physicist wolfgang ketterle, who heads the mit group. these matter waves, in principle, can be focused just like light. ketterle and his coworkers describe their observations in the jan. 31 science. the demonstration of coherence involving large numbers of atoms is the latest step in a series of studies of a remarkable state of matter called a bose-einstein condensate. chilled to temperatures barely above absolute zero, theory predicted, the atoms would collectively enter the same quantum state and behave like a single unit, or superparticle, with a specific wavelength. first created in the laboratory in 1995 by eric a. cornell and his collaborators at the university of colorado and the national institute of standards and technology, both in boulder, bose-einstein condensates have been the subject of intense investigation ever since (sn: 7/15/95, p. 36; 5/25/96, p. 327). at mit, ketterle and his colleagues cool sodium atoms to temperatures below 2 microkelvins. 
the frigid atoms are then confined in a special magnetic trap inside a vacuum chamber. to determine whether the atoms in the resulting condensate are indeed as coherent as photons in a laser beam, the researchers developed a novel method of extracting a clump of atoms from the trap. in effect, they manipulate the magnetic states of the atoms to expel an adjustable fraction of the original cloud; under the influence of gravity, the released clump falls. the method can produce a sequence of descending clumps, with each containing 100,000 to several million coherent atoms. the apparatus acts like
subdomain_quantum_optics
0.717416
512
HuggingFaceFW/fineweb-edu
<urn:uuid:5a667bf7-c324-483a-8231-ce8448d754f3>
0
0.6
2025-12-25T18:39:15.348857
expel an adjustable fraction of the original cloud; under the influence of gravity, the released clump falls. the method can produce a sequence of descending clumps, with each containing 100,000 to several million coherent atoms. the apparatus acts like a dripping faucet, ketterle says. he and his colleagues describe the technique in the jan. 27 physical review letters. to demonstrate interference, the mit group created a double magnetic trap so that two pulses of coherent atoms could be released at the same time. as the two clumps fell, they started to spread and overlap. the researchers could then observe interference between the atomic waves of the droplets. "the signal was almost too good to be true," ketterle says. "we saw a high-contrast, very regular pattern." "it's a beautiful result," cornell remarks. "this work really shows that bose-einstein condensation is an atom laser." from the pattern, the mit researchers deduced that the condensate of sodium atoms has a wavelength of about 30 micrometers, considerably longer than the 0.04-nanometer wavelength typical of individual atoms at room temperature. ketterle and his colleagues are already planning several improvements to their primitive atom laser, including getting more atoms into the emitted pulses and going from pulses to a continuous beam. practical use of an atom laser for improving the precision of atomic clocks and for manipulating atoms is still distant, however, cornell notes.
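The room-temperature figure quoted above can be reproduced from the thermal de Broglie wavelength, λ = h/√(2π·m·kB·T). This is a back-of-the-envelope sketch, not the article's calculation; note the 30-micrometer figure for the condensate itself is a coherence length set by the trapped cloud, not this thermal formula.

```python
import math

# Thermal de Broglie wavelength, lambda = h / sqrt(2*pi*m*kB*T),
# evaluated for sodium-23. SI constants; a rough order-of-magnitude
# check, not the article's own computation.

h = 6.626e-34           # Planck constant, J*s
kB = 1.381e-23          # Boltzmann constant, J/K
m_na = 23 * 1.661e-27   # sodium-23 mass, kg

def thermal_debroglie(T):
    return h / math.sqrt(2 * math.pi * m_na * kB * T)

lam_room = thermal_debroglie(300.0)   # ~2e-11 m, i.e. a few hundredths of a nm
lam_cold = thermal_debroglie(2e-6)    # ~3e-7 m at 2 microkelvin
```

The room-temperature value lands at the ~0.04 nm scale the article cites, and cooling by eight orders of magnitude in temperature stretches the wavelength by the expected factor of √(T_room/T_cold).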
subdomain_quantum_metrology
0.664385
299
HuggingFaceFW/fineweb-edu
<urn:uuid:5a667bf7-c324-483a-8231-ce8448d754f3>
1
0.6
2025-12-25T18:39:15.349471
a modem self-test in which data from the keyboard or an internal test pattern is sent to the modem's transmitter, turned into analog form, looped back to the receiver, and converted back into digital form. a variety of signals and wavelengths that can be transmitted over communications lines, such as the sound of a voice over the phone line. the mode used by your modem when answering an incoming call from an originating modem. the transmit/receive frequencies are the reverse of the originating modem, which is in originate mode. a computer program designed to perform a specific task or set of tasks. examples include word processing and spreadsheet applications. automatic repeat request. a function that allows your modem to detect flawed data and request that it be retransmitted. see mnp and v.42. american standard code for information interchange. a code used to represent letters, numbers, and special characters such as $, !, and /. data transmission in which the length of time between transmitted characters may vary. because characters may not be transmitted at set intervals, start/stop bits are used to mark the beginning and end of each character. sets the modem to pick up the phone line when it detects a certain number of rings. see s-register s0 in the technical reference section of this guide. a process where your modem dials a call for you. the dialing process is initiated by sending an atdt (dial tone) or atdp (dial pulse) command followed by the telephone number. auto-dial is used to dial voice numbers. see basic data command dn in the technical reference section of this guide. a term used to measure the speed of an analog transmission from one point to another. although not technically accurate, baud rate is commonly used to mean bit rate. a 0 or 1, reflecting the use of the binary numbering system. used because the computer recognizes either of two states, off or on. 
bit is the shortened form of binary digit. also referred to as transmission rate: the number of binary digits, or bits, transmitted per second (bps). communications channels using analog modems are established at set bit rates, commonly 2400, 4800, 9600, 14,400, 28,800, 33,600, and higher. bits per second (bps) the bits (binary digits) per second rate. thousands
subdomain_quantum_information_theory
0.657495
512
HuggingFaceFW/fineweb-edu
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
0
0.6
2025-12-25T18:39:15.729623
are established at set bit rates, commonly 2400, 4800, 9600, 14,400, 28,800, 33,600, and higher. bits per second (bps) the bits (binary digits) per second rate. thousands of bits per second are expressed as kilobits per second (kbps). a temporary memory area used as storage during input and output operations. an example is the modem's command buffer. a group of binary digits stored and operated upon as a unit. most often the term refers to 8-bit units or characters. one kilobyte (kb) is equal to 1,024 bytes or characters; 640 kb is equal to 655,360 bytes or characters. the basic signal altered or modulated by the modem in order to carry information. a representation, coded in binary digits, of a letter, number, or other symbol. characters per second (cps) a data transfer rate generally estimated from the bit rate and the character length. for example, at 2400 bps, 8-bit characters with start/stop bits (for a total of ten bits per character) will be transmitted at a rate of approximately 240 characters per second (cps). some protocols, such as error-control protocols, employ advanced techniques such as longer transmission frames and data compression to increase cps. class 1 and 2.0 international standards used by fax application programs and faxmodems for sending and receiving faxes. cyclic redundancy checking (crc) an error-detection technique consisting of a test performed on each block or frame of data by both sending and receiving modems. the sending modem inserts the results of its tests in each data block in the form of a crc code. the receiving modem compares its results with the received crc code and responds with either a positive or negative acknowledgment. the transmission or sharing of data between computers via an electronic medium. data compression table a table containing values assigned for each character during a call under mnp5 data compression. 
default values in the table are continually altered and built during each call: the longer the table, the more efficient throughput gained. mode used by a modem when sending and receiving data files. data communications (or circuit-terminating) equipment, such as dial-up modems that establish and control the data link via the telephone network. any setting assumed, at startup or reset, by the computer's software and attached devices.
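The characters-per-second arithmetic in the glossary (2400 bps with ten bits on the wire per character giving about 240 cps) is a one-line calculation; this small sketch just makes it explicit.

```python
# cps estimate from bit rate: each asynchronous character carries
# 8 data bits plus start/stop framing bits, ten bits per character,
# as in the glossary's 2400 bps -> ~240 cps example.

def chars_per_second(bit_rate, data_bits=8, framing_bits=2):
    return bit_rate / (data_bits + framing_bits)

cps_2400 = chars_per_second(2400)   # 240.0
cps_9600 = chars_per_second(9600)   # 960.0
```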
subdomain_quantum_information_theory
0.647277
512
HuggingFaceFW/fineweb-edu
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
1
0.6
2025-12-25T18:39:15.731320
files. data communications ( or circuit - terminating ) equipment, such as dial - up modems that establish and control the data link via the telephone network. any setting assumed, at startup or reset, by the computer ' s software and attached devices. the computer or software will use these settings until changed by the user or other software. a test that checks the modem ' s rs - 232 interface and the cable that connects the terminal or computer and the modem. the modem receives data ( in the form of digital signals ) from the computer or terminal and immediately returns the data to the screen for verification. discrete, uniform signals. in this guide, the term refers to the binary digits 0 and 1. data terminal ( or terminating ) equipment. a computer that generates or is the final destination of data. indicates a communications channel capable of carrying signals in both directions. see half - duplex, full - duplex. electronic industries association ( eia ) group which defines electronic standards in the u. s. various techniques that check the reliability of characters ( parity ) or blocks of data. v. 42 and mnp error - control protocols use error detection ( crc ) and retransmission of flawed frames ( arq ). a method for transmitting the image on a page from one point to another. commonly referred to as fax. the mode used by a modem to send and receive data in facsimile format. see definitions for v. 17, v. 27 ter, v. 29. a mechanism that compensates for differences in the flow of data into and out of a modem or other device. see extended data commands & hn, & in, & rn in the technical reference section of this guide. a data communications term for a block of data with header and trailer information attached. the added information usually includes a frame number, block size data, error - check codes, and start / end indicators. signals can flow in both directions at the same time over one line. 
in microcomputer communications, this may refer to the suppression of the online local echo. signals can flow in both directions, but only one way at a time. in microcomputer communications, may refer to activation of the online local echo, which causes the modem to send a copy of the transmitted data to the screen of the sending computer. hertz, a frequency measurement unit used internationally to indicate cycles per second. an electronic communications network that connects computer networks and organizational computer facilities around the world. internet service
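The CRC scheme the glossary describes (both ends compute a check over each frame and compare) can be illustrated with one common variant. The glossary does not say which polynomial the modem protocols use; CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF) is chosen here purely for illustration.

```python
# Bitwise CRC-16/CCITT-FALSE sketch: poly 0x1021, init 0xFFFF,
# no reflection, no final xor. One illustrative CRC variant, not
# necessarily the one v.42 or mnp uses.

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

good = crc16_ccitt(b"123456789")   # 0x29B1, this variant's standard check value

corrupted = bytearray(b"123456789")
corrupted[0] ^= 0x01               # single-bit transmission error
bad = crc16_ccitt(bytes(corrupted))
# bad differs from good, so the receiver would send a negative
# acknowledgment and request retransmission (the "arq" entry above)
```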
subdomain_quantum_information_theory
0.633606
512
HuggingFaceFW/fineweb-edu
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
2
0.6
2025-12-25T18:39:15.733118
next higher speed. the mode used by your modem when initiating an outgoing call to a destination modem. the transmit / receive frequencies are the reverse of the called modem, which is in answer mode. a simple error - detection method that checks the validity of a transmitted character. character checking has been surpassed by more reliable and efficient forms of error checking, including v. 42 and mnp 2 - 4 protocols. either the same type of parity must be used by two communicating computers, or both may omit parity. a system of rules and procedures governing communications between two or more devices. protocols vary, but communicating devices must follow the same protocol in order to exchange data. the format of the data, readiness to receive or send, error detection and error correction are some of the operations that may be defined in protocols. random access memory. memory that is available for use when the modem is turned on, but that clears of all information when the power is turned off. the modem ' s ram holds the current operational settings, a flow control buffer, and a command buffer. remote digital loopback a test that checks the phone link and a remote modem ' s transmitter and receiver. a copy of the data received by the remote system, returned to the sending system, and displayed on the screen. remote echoing is a function of the remote system. read only memory. permanent memory, not user - programmable. the consecutive flow of data in a single channel. compare to parallel transmissions where data flows simultaneously in multiple channels. the signaling bits attached to a character before and after the character is transmitted during asynchronous transmission. a device whose keyboard and display are used for sending and receiving data over a communications link. differs from a microcomputer or a mainframe in that it has little or no internal processing capabilities. software mode that allows direct communication with the modem. 
also known as command mode. the amount of actual user data transmitted per second without the overhead of protocol information such as start/stop bits or frame headers and trailers. compare with characters per second. the itu-t standard specification that covers the initial handshaking process. an itu-t standard for making facsimile connections at 14,400 bps, 12,000 bps, 9,600 bps, and 7,200 bps. an itu-t standard for modems operating in asynchronous mode at speeds up to 300 bps, full-duplex, on public switched telephone
subdomain_quantum_information_theory
0.640448
512
HuggingFaceFW/fineweb-edu
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
4
0.6
2025-12-25T18:39:15.735173
robb godshaw, from syynlabs, has created a haptic cube that gives you an impression of what the temperature will be like tomorrow. the cube, which godshaw has named the cryoscope, consists of an aluminium shell surrounding a peltier element, heatsink, cooling fan and an led, all controlled by an arduino. the cube is heated to a "neutral" state of 30°c, and then adjusted by the number of degrees that the next day's forecast differs from room temperature (23°c). it takes into account wind chill and humidity to give an idea what the following day will "feel" like, rather than merely reflecting air temperature. so, for example, if the forecast for the next day is for 18°c, once those factors are all taken into account, the cube's temperature will decrease five degrees from 30°c to 25°c, resulting in it being slightly cool to the touch. godshaw describes it in the video above as a "haptic weathervane", adding: "users enter their location into a web app. the cube then automatically adjusts to the forecasted temperature. by touching the cryoscope, the user is able to feel tomorrow's air." you can see the cryoscope in action over on godshaw's website. updated 08:29, 09/05/2012: godshaw has redesigned the cryoscope and is raising money for full production over on kickstarter.
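The temperature mapping described above (idle at a neutral 30, offset by the difference between the "feels like" forecast and room temperature) reduces to one line. The names and the linear mapping below are my reading of the article, not Godshaw's actual firmware.

```python
# Sketch of the cryoscope's target-temperature mapping as described
# in the article; the constants and function names are assumptions,
# not taken from the device's firmware.

NEUTRAL_C = 30.0   # the cube's idle "neutral" shell temperature
ROOM_C = 23.0      # the reference room temperature

def cube_temperature(feels_like_c):
    return NEUTRAL_C + (feels_like_c - ROOM_C)

t = cube_temperature(18.0)   # 25.0, matching the article's example
```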
subdomain_quantum_metrology
0.609802
298
HuggingFaceFW/fineweb-edu
<urn:uuid:dcbce5d9-dd4b-4753-953d-726754de7973>
0
0.6
2025-12-25T18:39:15.903681
first mechanical calculator. this machine is considered to be the forerunner of the modern computer, though none of them were built in his lifetime. study of the molecules and proteins that are the basis of biological functions has led to the concept of a molecular machine. for example, current models describe the operation of the kinesin molecule that transports vesicles inside the cell, as well as the myosin molecule that operates against actin to cause muscle contraction; these molecules control movement in response to chemical stimuli. researchers in nano-technology are working to construct molecules that perform movement in response to a specific stimulus. in contrast to molecules such as kinesin and myosin, these nano-machines or molecular machines are constructions like traditional machines that are designed to perform a task. machines are assembled from standardized types of components. these elements consist of mechanisms that control movement in various ways such as gear trains, transistor switches, belt or chain drives, linkages, cam and follower systems, brakes and clutches, and structural components such as frame members and fasteners. modern machines include sensors, actuators and computer controllers. the shape, texture and color of covers provide a styling and operational interface between the mechanical components of a machine and its users. assemblies within a machine that control movement are often called "mechanisms." mechanisms are generally classified as gears and gear trains, cam and follower mechanisms, and linkages, though there are other special mechanisms such as clamping linkages, indexing mechanisms and friction devices such as brakes and clutches. controllers combine sensors, logic, and actuators to maintain the performance of components of a machine. perhaps the best known is the flyball governor for a steam engine. 
examples of these devices range from a thermostat that opens a valve to cooling water as temperature rises, to speed controllers such as the cruise control system in an automobile. the programmable logic controller replaced relays and specialized control mechanisms with a programmable computer. servomotors that accurately position a shaft in response to an electrical command are the actuators that make robotic systems possible. design plays an important role in all three of the major phases of a product lifecycle. the industrial revolution was a period from 1750 to 1850 where changes in agriculture, manufacturing, mining, transportation, and technology had a profound effect on the social, economic and cultural conditions of the times. it began in the united kingdom, then subsequently spread throughout western europe, north america, japan, and eventually the rest of the world. starting in the later part of
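The sensor/logic/actuator loop described above can be sketched as a toy bang-bang (thermostat-style) controller with hysteresis. All dynamics, thresholds and names here are invented for illustration, not drawn from any real controller in the text.

```python
# Toy thermostat: the "sensor" is the temperature reading, the
# "logic" is an on/off rule with hysteresis, the "actuator" is the
# heater term. The dynamics are made up for illustration.

AMBIENT = 15.0
ON_BELOW, OFF_ABOVE = 20.0, 22.0   # hysteresis band

def simulate(steps):
    temp, heater = AMBIENT, False
    history = []
    for _ in range(steps):
        if temp < ON_BELOW:
            heater = True            # switch the actuator on
        elif temp > OFF_ABOVE:
            heater = False           # switch it off
        temp += 0.5 if heater else -0.2
        history.append(temp)
    return history

history = simulate(100)
# after an initial warm-up the temperature oscillates inside the band
```

The hysteresis gap between the on and off thresholds is what keeps the actuator from chattering, the same reason real thermostats and the flyball governor use a deadband rather than a single setpoint.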
subdomain_quantum_materials
0.612033
512
HuggingFaceFW/fineweb-edu
<urn:uuid:28a7c1f3-c595-43bb-b7f3-2960d5ccb10f>
3
0.6
2025-12-25T18:39:16.137829
date: december 2004 creator: habel, agnieszka description: this problem in lieu of thesis is a discussion of two topics: brownian movement and quantum computers. brownian movement is a physical phenomenon in which the particle velocity is constantly undergoing random fluctuations. chapters 2, 3 and 4 describe brownian motion from three different perspectives. the next four chapters are devoted to the subject of quantum computers, which signal a new era of technology and science combined. in the first chapter i present to the reader the two topics of my problem in lieu of thesis. in the second chapter i explain the idea of brownian motion and its interpretation as a stochastic process, and i find its distribution function. the next chapter illustrates the probabilistic picture of brownian motion, where the statistical averages over trajectories are related to the probability distribution function. chapter 4 shows how to derive the langevin equation, introduced in chapter 1, using a hamiltonian picture of a bath with an infinite number of harmonic oscillators. chapter 5 explains how the idea of quantum computers was developed and how, step by step, all the puzzles for the field of quantum computers were created. the next chapter, chapter 6, discusses the basic quantum unit of information... contributing partner: unt libraries
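The diffusive behaviour the abstract's Brownian-motion chapters treat analytically can be seen in a toy discrete model: for a 1-D random walk with unit ±1 steps, the displacement variance grows linearly with the number of steps, Var[x(t)] ≈ t. This sketch is an illustration of that signature, not the thesis's own treatment.

```python
import random

# Toy 1-D random walk as a discrete stand-in for brownian motion:
# each walker takes n_steps unit steps of random sign, and the
# displacement variance across walkers grows like n_steps.

def random_walks(n_walkers, n_steps, seed=0):
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(n_steps))
            for _ in range(n_walkers)]

positions = random_walks(n_walkers=2000, n_steps=100)
mean = sum(positions) / len(positions)
variance = sum((x - mean) ** 2 for x in positions) / len(positions)
# mean stays near 0, variance comes out near n_steps = 100
```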
subdomain_quantum_simulation
0.743965
265
HuggingFaceFW/fineweb-edu
<urn:uuid:36a21367-2041-439f-8700-4349c0abc5be>
0
0.6
2025-12-25T18:39:16.352741
real form ( lie theory ) in mathematics, the notion of a real form relates objects defined over the field of real and complex numbers. a real lie algebra g0 is called a real form of a complex lie algebra g if g is the complexification of g0 : real forms for lie groups and algebraic groups using the lie correspondence between lie groups and lie algebras, the notion of a real form can be defined for lie groups. in the case of linear algebraic groups, the notions of complexification and real form have a natural description in the language of algebraic geometry. just as complex semisimple lie algebras are classified by dynkin diagrams, the real forms of a semisimple lie algebra are classified by satake diagrams, which are obtained from the dynkin diagram of the complex form by labeling some vertices black ( filled ), and connecting some other vertices in pairs by arrows, according to certain rules. it is a basic fact in the structure theory of complex semisimple lie algebras that every such algebra has two special real forms : one is the compact real form and corresponds to a compact lie group under the lie correspondence ( its satake diagram has all vertices blackened ), and the other is the split real form and corresponds to a lie group that is as far as possible from being compact ( its satake diagram has no vertices blackened and no arrows ). in the case of the complex special linear group sl ( n, c ), the compact real form is the special unitary group su ( n ) and the split real form is the real special linear group sl ( n, r ). the classification of real forms of semisimple lie algebras was accomplished by elie cartan in the context of riemannian symmetric spaces. in general, there may be more than two real forms. suppose that g0 is a semisimple lie algebra over the field of real numbers. by cartan ' s criterion, the killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries + 1 or - 1. 
by sylvester ' s law of inertia, the number of positive entries, or the positive index of inertia, is an invariant of the bilinear form, i. e. it does not depend on the choice of the diagonalizing basis. this is a number between 0 and the dimension of g which is an important invariant of the real lie algebra, called its index. split real form : a real form g0 of a complex semisimple lie algebra g
subdomain_quantum_field_theory
0.602044
512
HuggingFaceFW/fineweb-edu
<urn:uuid:307c1388-d49d-46d7-a722-4f52f24df709>
0
0.6
2025-12-25T18:39:16.549795
choice of the diagonalizing basis. this is a number between 0 and the dimension of g which is an important invariant of the real lie algebra, called its index. split real form : a real form g0 of a complex semisimple lie algebra g is said to be split, or normal, if in each cartan decomposition g0 = k0 ⊕ p0, the space p0 contains a maximal abelian subalgebra of g0, i. e. its cartan subalgebra. elie cartan proved that every complex semisimple lie algebra g has a split real form, which is unique up to isomorphism. it has maximal index among all real forms. the split form corresponds to the satake diagram with no vertices blackened and no arrows. compact real form : a real lie algebra g0 is called compact if the killing form is negative definite, i. e. the index of g0 is zero. in this case g0 = k0 is a compact lie algebra. it is known that under the lie correspondence, compact lie algebras correspond to compact lie groups. the compact form corresponds to the satake diagram with all vertices blackened. construction of the compact real form : in general, the construction of the compact real form uses structure theory of semisimple lie algebras. for classical lie algebras there is a more explicit construction. let g0 be a real lie algebra of matrices over r that is closed under the transpose map. then g0 decomposes into its skew - symmetric part k0 and its symmetric part p0, and the complexification g of g0 decomposes into the direct sum of g0 and ig0. the real vector space of matrices u0 = k0 ⊕ ip0 is a subspace of the complex lie algebra g that is closed under the commutators and consists of skew - hermitian matrices. it follows that u0 is a real lie subalgebra of g, that its killing form is negative definite ( making it a compact lie algebra ), and that the complexification of u0 is g. therefore, u0 is a compact form of g. ( helgason 1978, p. 426 )
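the transpose - map recipe can be checked numerically for g0 = sl ( 2, r ). the sketch below ( my own, not from the source ) splits g0 into a skew - symmetric part k0 and a symmetric part p0, forms u0 = k0 ⊕ ip0 — the standard construction for this case — and verifies that u0 consists of traceless skew - hermitian matrices ( i. e. lies in su ( 2 ) ) with negative definite killing form, using the standard identity b ( x, y ) = 4 tr ( xy ) for sl ( 2 ).

```python
import numpy as np

# basis of sl(2, R), separated into symmetric (p0) and skew-symmetric (k0) parts
p0 = [np.array([[1., 0.], [0., -1.]]),   # H
      np.array([[0., 1.], [1., 0.]])]    # E + F
k0 = [np.array([[0., 1.], [-1., 0.]])]   # E - F

# compact form u0 = k0 ⊕ i*p0
u0 = [X.astype(complex) for X in k0] + [1j * X for X in p0]

for X in u0:
    assert np.allclose(X, -X.conj().T)   # skew-hermitian
    assert abs(np.trace(X)) < 1e-12      # traceless, so X lies in su(2)

# Killing form of sl(2): B(X, Y) = 4 tr(XY); restricted to u0 it is negative definite
B = np.array([[4 * np.trace(X @ Y).real for Y in u0] for X in u0])
print(np.linalg.eigvalsh(B))  # [-8. -8. -8.]
```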
subdomain_quantum_field_theory
0.60119
418
HuggingFaceFW/fineweb-edu
<urn:uuid:307c1388-d49d-46d7-a722-4f52f24df709>
1
0.6
2025-12-25T18:39:16.550449
hypotheses 1, 2, 3, 4, 5, 6 and 7 will be rejected. in the variance analysis, the significance level is 0. 05 with n = 30. - ii. literature review : the use of brainstorming. brainstorming is a group creativity technique designed to generate a large number of ideas for the solution of a problem. in 1953 the method was popularized by alex faickney osborn in a book called applied imagination. osborn proposed that groups could double their creative output with brainstorming. the oxford dictionary defines brainstorming as a way of making a group of people all think about something at the same time, often in order to solve a problem or to create good ideas. brainstorming is the name given to a situation when a group of people meet to generate new ideas around a specific area of interest. using rules which remove inhibitions, people are able to think more freely and move into new areas of thought and so create numerous new ideas and solutions. the participants shout out ideas as they occur to them and then build on the ideas raised by others. all the ideas are noted down and are not criticized. only when the brainstorming session is over are the ideas evaluated. under another definition, to brainstorm is to use a set of specific rules and techniques which encourage and spark off new ideas which would never have happened under normal circumstances. so there you have it : brainstorming will help you come up with new ideas. and not only will you come up with new ideas but you will do so with surprisingly little effort. brainstorming makes the generation of new ideas easy and is a tried - and - tested process. exactly what you apply brainstorming techniques to depends on what you want to achieve. you can apply them to develop new products, services and processes in your job, or you can apply them to develop your personal life.
there are two models of brainstorming - traditional brainstorming the normal view of brainstorming is where a group of people sit in a room and shout out ideas as they occur to them. they are told to lose their inhibitions and that no ideas will be judged so that people are free to shout out any ideas at all without feeling uncomfortable. people should build on the ideas called out by other participants. the purpose of this is to gain as many ideas as possible for later analysis. out of the many ideas suggested there will be some of great value. because of the free - thinking environment, the session will help promote radical new ideas which break free from
subdomain_quantum_field_theory
0.612087
512
HuggingFaceFW/fineweb-edu
<urn:uuid:6fd5ee54-081f-40f3-bfae-bd27fc2c153b>
2
0.6
2025-12-25T18:39:16.808092
decoupled mar 27, 2013 - sizing things up : the evolutionary neurobiology of scale invariance feb 28, 2013 - classical and quantum mechanics via lie algebras apr 15, 2011 : i ' d like to open a discussion thread for version 2 of the draft of my book ' ' classical and quantum mechanics via lie algebras ' ', available online at http : / / lanl. arxiv. org / abs / 0810. 1019, and for the... - more from physics forums - independent research. more news stories : no new human cases of the h7n9 virus have been recorded in china for a week, national health authorities said, for the first time since the outbreak began in march. a nobel prize - winning scientist tuesday played down " shock - horror scenarios " that a new virus strain will emerge with the potential to kill millions of people. bacteria resistant to the antibiotic colistin are also commonly resistant to antimicrobial substances made by the human body, according to a study in mbio, the online open - access journal of the american society for microb... ( ap ) — federal investigators probing the hantavirus outbreak blamed for three deaths at yosemite national park recommend that design changes to tent cabins and other lodging run by private concessionaires first be reviewed... a new diagnostic test for a worm infection that can lead to severe enlargement and deformities of the legs and genitals is far more sensitive than the currently used test, according to results of a field...
( medical xpress ) — a three - year multinational study has tracked and detailed the progression of huntington ' s disease ( hd ), predicting clinical decline in people carrying the hd gene more than 10 years before... ( medical xpress ) — a research team, led by jeremy barr, a biology post - doctoral fellow, unveils a new
subdomain_quantum_field_theory
0.604922
512
HuggingFaceFW/fineweb-edu
<urn:uuid:5639c852-ee99-4994-8b0d-957aaa025883>
2
0.6
2025-12-25T18:39:17.001260
and applications, nasa / marshall space flight center, huntsville, alabama 35812 ; # department of chemical engineering, university of alabama in huntsville, huntsville, alabama 35899 ; § biochemistry department, michigan state university, east lansing, michigan 48825 ; and ¶ biophysics sd48, nasa / marshall space flight center, huntsville, alabama 35812 usa part of the challenge of macromolecular crystal growth for structure determination is obtaining crystals with a volume suitable for x - ray analysis. in this respect an understanding of the effect of solution conditions on macromolecule nucleation rates is advantageous. this study investigated the effects of supersaturation, temperature, and ph on the nucleation rate of tetragonal lysozyme crystals. batch crystallization plates were prepared at given solution concentrations and incubated at set temperatures over 1 week. the number of crystals per well with their size and axial ratios were recorded and correlated with solution conditions. crystal numbers were found to increase with increasing supersaturation and temperature. the most significant variable, however, was ph ; crystal numbers changed by two orders of magnitude over the ph range 4. 0 - 5. 2. crystal size also varied with solution conditions, with the largest crystals obtained at ph 5. 2. having optimized the crystallization conditions, we prepared a batch of crystals under the same initial conditions, and 50 of these crystals were analyzed by x - ray diffraction techniques. the results indicate that even under the same crystallization conditions, a marked variation in crystal properties exists. more space science headlines - nasa research on the web life and microgravity sciences and applications information from nasa hq on science in space microgravity research programs office headquartered at marshall space flight center microgravity news online version of nasa ' s latest in microgravity advancements, published quarterly. 
subdomain_quantum_materials
0.604544
438
HuggingFaceFW/fineweb-edu
<urn:uuid:2a7ca019-7b31-4e9b-8c46-9219b443a12f>
5
0.6
2025-12-25T18:39:17.371778
elements | blogs wednesday, september 7, 2011 is there oxygen in space? yes, this summer astronomers using the herschel telescope identified oxygen molecules in space. they found these molecules in the orion nebula, 1, 344 light years away. oxygen is the third most abundant element in the universe. until now, scientists have only seen individual oxygen atoms in space. we do not breathe individual oxygen atoms, but rather oxygen molecules. ( a molecule is a group of atoms banded together and it is the smallest unit of chemical compound that can take part in a chemical reaction. ) oxygen molecules make up 20 % of the air we breathe. scientists theorize that the oxygen molecules were locked up in water ice that... thursday, march 10, 2011 i ' m atoms ( scientific cover of jason mraz ' s i ' m yours ) here in chicago it has been gray for the last three weeks – no sun, just melting snow and rain. this song made our day. it has sunshine, great music and atoms! the lyrics include fabulous lines such as : “ atoms bond together to form molecules most of what ’ s surrounding me and you … ” this science verse has been set to the music of jason mraz ’ s “ i ’ m yours ”. this is a must watch! saturday, february 26, 2011 the deep carbon observatory here at supersmart carbon, we love learning about carbon. apparently, we are not alone. there is a project being launched called the deep carbon observatory that is being funded by the alfred p. sloan foundation. the purpose of this group is to study carbon deep inside the earth. carbon makes up somewhere from 0. 7 % to 3. 2 % of the earth ’ s elements. we know that there is carbon trapped under the earth ’ s crust, but we don ’ t know how much. the deep carbon observatory is going to study how much carbon there is in the earth and what happens to it. another question is what form is the... friday, february 25, 2011 where does gas come from? carbon! ( we always love it when the answer is carbon. 
) the gas we use to power our cars comes from decomposing organic matter. what does that mean? all life has carbon in it - - this includes everything living from you and me to zebras, tapeworms, tulips and seaweed. since all living things have carbon in them, they are referred to as organic matter. non - organic matter includes things like rocks, water and metals. when something organic dies
subdomain_quantum_materials
0.651341
512
HuggingFaceFW/fineweb-edu
<urn:uuid:b5177112-be1e-4086-9d85-858522f9c4b9>
0
0.6
2025-12-25T18:39:17.433772
so write it offline in an editor ( e. g., notepad ) and paste it in your little post box, viz. : from wikipedia, the free encyclopedia this article is about the general notion of determinism in philosophy. for other uses, see determinism ( disambiguation ). not to be confused with fatalism, predeterminism, or predictability. determinism is a metaphysical philosophical position stating that for everything that happens there are conditions such that, given those conditions, nothing else could happen. " there are many determinisms, depending upon what pre - conditions are considered to be determinative of an event. " determinism throughout the history of philosophy has sprung from diverse considerations, some of which overlap. some forms of determinism can be tested empirically with ideas stemming from physics and the philosophy of physics. the opposite of determinism is some kind of indeterminism ( otherwise called nondeterminism ). determinism is often contrasted with free will. determinism often is taken to mean simply causal determinism, that is, basing determinism upon the idea of cause - and - effect. it is the concept that events within a given paradigm are bound by causality in such a way that any state ( of an object or event ) is completely determined by prior states. this meaning can be distinguished from other varieties of determinism mentioned below. the introduction of " cause - and - effect " introduces unnecessary complications related to what is meant by a ' cause ' and how the presence of a ' cause ' might be established, the interpretation of which varies from one physical theory to another. these complications are avoided by a more general formulation based upon connections between ' events ' supplied by a theory : " a theory is deterministic if, and only if, given its state variables for some initial period, the theory logically determines a unique set of values for those variables for any other period. 
" — ernest nagel, alternative descriptions of physical state p. 292 this quote replaces the idea of ' cause - and - effect ' with that of ' logical implication ' according to one or another theory that connects events. in addition, an ' event ' is related by the theory itself to formalized states described using the parameters defined by that theory. thus, the details of interpretation are placed where they belong, fitted to the context in which the chosen theory applies. other debates often concern the scope of determined systems, with some maintaining that
subdomain_quantum_field_theory
0.615754
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
0
0.6
2025-12-25T18:39:17.570585
to formalized states described using the parameters defined by that theory. thus, the details of interpretation are placed where they belong, fitted to the context in which the chosen theory applies. other debates often concern the scope of determined systems, with some maintaining that the entire universe ( or multiverse ) is a single determinate system and others identifying other more limited determinate systems. for example, using the definition of physical determinism above, the limitations of a theory to some particular domain of experience also limits the associated definition of ' determinism ' to that same domain. there are numerous historical debates involving many philosophical positions and varieties of determinism. they include debates concerning determinism and free will, technically denoted as compatibilistic ( allowing the two to coexist ) and incompatibilistic ( denying their coexistence is a possibility ). determinism should not be confused with self - determination of human actions by reasons, motives, and desires. determinism rarely requires that perfect prediction be practically possible – merely that events be predictable in theory. many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path. causal determinism is " the idea that every event is necessitated by antecedent events and conditions together with the laws of nature ". however, causal determinism is a broad enough term to consider that " one ' s deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. in other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way ".
causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. the relation between events may not be specified, nor the origin of that universe. causal determinists believe that there is nothing uncaused or self - caused. historical determinism ( a sort of path dependence ) can also be synonymous with causal determinism. nomological determinism ( sometimes called ' scientific ' determinism, although that is a misnomer ) is the most common form of causal determinism. it is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. quantum mechanics and
subdomain_quantum_field_theory
0.661324
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
1
0.6
2025-12-25T18:39:17.571669
misnomer ) is the most common form of causal determinism. it is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. quantum mechanics and various interpretations thereof pose a serious challenge to this view. nomological determinism is sometimes illustrated by the thought experiment of laplace ' s demon. physical determinism holds that all physical events occur as described by physical laws. depending upon definitions, there is some room here for the view that not everything in the universe must be tied to some physical state, but that view is not usually emphasized by adherents of physical determinism because of the widely accepted scientific view that the operation of all physical systems ( often unnecessarily taken to mean everything ) can be explained entirely in physical terms, the assumed causal closure of physics. necessitarianism is closely related to the causal determinism described above. it is a metaphysical principle that denies all mere possibility ; there is exactly one way for the world to be. leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity. predeterminism is the idea that all events are determined in advance. the concept of predeterminism is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. in the case of predeterminism, this chain of events has been pre - established, and human actions cannot interfere with the outcomes of this pre - established chain. predeterminism can be used to mean such pre - established causal determinism, in which case it is categorised as a specific type of determinism. it can also be used interchangeably with causal determinism - in the context of its capacity to determine future events.
despite this, predeterminism is often considered as independent of causal determinism. the term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism. fatalism is normally distinguished from " determinism ". fatalism is the idea that everything is fated to happen, so that humans have no control over their future. fate has arbitrary power, and need not follow any causal or otherwise deterministic laws. types of fatalism include hard theological determinism and the idea of predestination
subdomain_quantum_mechanics
0.636139
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
2
0.6
2025-12-25T18:39:17.572688
the past, present, or future, are either true or false. note that one can support causal determinism without necessarily supporting logical determinism and vice versa ( depending on one ' s views on the nature of time, but also randomness ). the problem of free will is especially salient now with logical determinism : how can choices be free, given that propositions about the future already have a truth value in the present ( i. e. it is already determined as either true or false )? this is referred to as the problem of future contingents. adequate determinism focuses on the fact that, even without a full understanding of microscopic physics, we can predict the distribution of 1000 coin tosses. often synonymous with logical determinism are the ideas behind spatio - temporal determinism or eternalism : the view of special relativity. j. j. c. smart, a proponent of this view, uses the term " tenselessness " to describe the simultaneous existence of past, present, and future. in physics, the " block universe " of hermann minkowski and albert einstein assumes that time is a fourth dimension ( like the three spatial dimensions ). in other words, all the other parts of time are real, like the city blocks up and down a street, although the order in which they appear depends on the driver ( see rietdijk – putnam argument ). adequate determinism is the idea that quantum indeterminacy can be ignored for most macroscopic events. this is because of quantum decoherence. random quantum events " average out " in the limit of large numbers of particles ( where the laws of quantum mechanics asymptotically approach the laws of classical mechanics ). stephen hawking explains a similar idea : he says that the microscopic world of quantum mechanics is one of determined probabilities. that is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate ( albeit still not perfectly certain ) at larger scales.
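the coin - toss remark can be made quantitative with a short simulation ( my own sketch, not from the text ) : each individual toss is unpredictable, yet the count of heads in 1000 tosses is all but determined.

```python
import random

random.seed(0)
trials = 2000
# number of heads in each run of 1000 fair coin tosses
counts = [sum(random.random() < 0.5 for _ in range(1000)) for _ in range(trials)]

mean = sum(counts) / trials
# standard deviation of 1000 fair tosses is sqrt(1000 * 0.25) ≈ 15.8,
# so 500 ± 50 is a band of more than three standard deviations
within = sum(abs(c - 500) <= 50 for c in counts) / trials
print(round(mean), within)
```

essentially every run lands within 500 ± 50 heads : the micro - level randomness " averages out " into an aggregate that is, in the article ' s phrase, adequately determined.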
something as large as an animal cell, then, would be " adequately determined " ( even in light of quantum indeterminacy ). nature and nurture interact in humans. a scientist looking at a sculpture after some time does not ask whether we are seeing the effects of the starting materials or environmental influences. although some of the above forms of determinism concern human behaviors and cognition, others frame themselves as an answer to the nature or nurture debate. they will suggest that one factor will entirely determine behavior. as scientific understanding has grown, however
subdomain_quantum_mechanics
0.654034
512
HuggingFaceFW/fineweb-edu
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
4
0.6
2025-12-25T18:39:17.574510
science fair project encyclopedia. the sampling frequency or sampling rate defines the number of samples per second taken from a continuous signal to make a discrete signal. the inverse of the sampling frequency is the sampling period or sampling time, which is the time between samples. the sampling frequency applies only to samplers in which samples are taken periodically ; there is no rule that prevents a sampler from taking samples at a non - periodic rate. if a signal has a bandwidth of 100 hz then to avoid aliasing the sampling frequency must be greater than 200 hz. in some cases, it is desirable to have a sampling frequency more than twice the bandwidth so that a digital filter can be used in exchange for a weaker analog anti - aliasing filter. this process is known as oversampling. in digital audio, common sampling rates are : - 8, 000 hz - telephone, adequate for human speech - 11, 025 hz - 22, 050 hz - radio - 44, 100 hz - compact disc - 48, 000 hz - digital sound used for films and professional audio - 96, 000 or 192, 000 hz - dvd - audio, some lpcm dvd audio tracks, bd - rom ( blu - ray disc ) audio tracks, and hd - dvd ( high - definition dvd ) audio tracks. in digital video, which uses a ccd as the sensor, the sampling rate is defined as the frame / field rate, rather than the notional pixel clock. all modern tv cameras use ccds, and the image sampling frequency is the repetition rate of the ccd integration period. - 13. 5 mhz - ccir 601, d1 video - continuous signal vs. discrete signal - digital control - sample and hold - sample ( signal ) - sampling ( information theory ) - signal ( information theory ). the contents of this article are licensed from www. wikipedia. org under the gnu free documentation license.
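the " greater than twice the bandwidth " rule can be illustrated with the folding arithmetic behind aliasing. the sketch below is my own ( the function name is an assumption ) : it maps a pure tone to the frequency it appears to have after periodic sampling.

```python
def alias(f_signal, f_sample):
    # fold the tone into the representable band [0, f_sample / 2]
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(alias(100, 250))     # 100 -- sampled above the Nyquist rate, preserved
print(alias(300, 250))     # 50  -- undersampled: a 300 hz tone masquerades as 50 hz
print(alias(5000, 44100))  # 5000 -- cd-rate sampling keeps audible tones intact
```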
subdomain_quantum_metrology
0.601603
390
HuggingFaceFW/fineweb-edu
<urn:uuid:d25b5562-8f30-4fd1-bc51-46f94956427e>
0
0.6
2025-12-25T18:39:17.632543
the life - giving ideas of chemistry are not reducible to physics. or, if one tries to reduce them, they wilt at the edges, lose not only much of their meaning, but interest too. and, most importantly, they lose their chemical utility — their ability to relate seemingly disparate compounds to each other, their fecundity in inspiring new experiments. i ' m thinking of concepts such as the chemical bond, a functional group and the logic of substitution, aromaticity, steric effects, acidity and basicity, electronegativity and oxidation - reduction. as well as some theoretical ideas i ' ve been involved in personally — through - bond coupling, orbital symmetry control, the isolobal analogy. consider the notion of oxidation state. if you had to choose two words to epitomize the same - and - not - the - same nature of chemistry, would you not pick ferrous and ferric? the concept evolved at the end of the 19th century ( not without confusion with " valency " ), when the reality of ions in solution was established. as did a multiplicity of notations — ferrous iron is iron in an oxidation state of + 2 ( or is it 2 +? ) or fe ( ii ). schemes for assigning oxidation states ( sometimes called oxidation numbers ) adorn every introductory chemistry text. they begin with the indisputable : in compounds, the oxidation states of the most electronegative elements ( those that hold on most tightly to their valence electrons ), oxygen and fluorine for example, are – 2 and – 1, respectively. after that the rules grow ornate, desperately struggling to balance wide applicability with simplicity. the oxidation - state scheme had tremendous classificatory power ( for inorganic compounds, not organic ones ) from the beginning. think of the sky blue color of chromium ( ii ) versus the violet or green of chromium ( iii ) salts, the four distinctly colored oxidation states of vanadium. oliver sacks writes beautifully of the attraction of these colors for a boy starting out in chemistry. 
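the bookkeeping in those schemes reduces to one constraint : the oxidation states in a species, weighted by atom counts, must sum to its overall charge. a toy sketch ( my own ; the function name is an assumption, and it handles only the simple case of one unknown element ) :

```python
def oxidation_state(n_unknown, n_partner, partner_state, charge=0):
    # oxidation states, weighted by atom counts, must sum to the overall charge
    return (charge - n_partner * partner_state) / n_unknown

print(oxidation_state(2, 3, -2))      # Fe2O3: iron comes out +3.0 (ferric)
print(oxidation_state(1, 1, -2))      # FeO: iron comes out +2.0 (ferrous)
print(oxidation_state(1, 4, -1, -1))  # AgF4(-): silver comes out +3.0, Ag(III)
```

the last line anticipates the silver fluoride chemistry discussed later in the essay : with fluorine fixed at - 1, the agf4 - anion forces silver into the + 3 state.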
and not only boys. but there was more to oxidation states than just describing color. or balancing equations. chemistry is transformation. the utility of oxidation states dovetailed with the logic of oxidizing and reducing agents — molecules and ions that with ease removed or added electrons to other molecules. between electron transfer and proton transfer you have much of reaction chemistry. i want to tell you how this logic leads to quite
subdomain_quantum_materials
0.690693
512
HuggingFaceFW/fineweb-edu
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
0
0.6
2025-12-25T18:39:17.733733
. people in the trade will recognize that i ' m talking about " mulliken population analysis " or " natural bond analysis " or richard bader ' s beautifully worked out scheme for dividing up space in a molecule. what about experiment? is there an observable that might gauge a charge on an atom? i think photoelectron spectroscopies ( esca or auger ) come the closest. here one measures the energy necessary to promote an inner - core electron to a higher level or to ionize it. atoms in different oxidation states do tend to group themselves at certain energies. but the theoretical framework that relates these spectra to charges depends on the same assumptions that bedevil the definition of a charge on an atom. an oxidation state bears little relation to the actual charge on the atom ( except in the interior of the sun, where ligands are gone, there is plenty of energy, and you can have iron in oxidation states up to + 26 ). this doesn ' t stop the occasional theoretician today from making a heap of a story when the copper in a formal cu ( iii ) complex comes out of a calculation bearing a charge of, say, + 0. 51. nor does it stop oxidation states from being just plain useful. many chemical reactions involve electron transfer, with an attendant complex of changes in chemical, physical and biological properties. oxidation state, a formalism and not a representation of the actual electron density at a metal center, is a wonderful way to " bookkeep " electrons in the course of a reaction. even if that electron, whether added or removed, spends a good part of its time on the ligands. but enough theory, or, as some of my colleagues would sigh, anthropomorphic platitudes. let ' s look at some beautiful chemistry of extreme oxidation states. incredible, but true recently, a young polish postdoctoral associate, wojciech grochala, led me to look with him at the chemical and theoretical design of novel high - temperature superconductors. 
we focused on silver ( ag ) fluorides ( f ) with silver in oxidation states ii and iii. the reasoning that led us there is described in our forthcoming paper. for now let me tell you about some chemistry that i learned in the process. i can only characterize this chemistry as incredible but true. ( some will say that i should have known about it, since it was hardly hidden, but the fact is i didn ' t. ) here is what ag ( ii
subdomain_quantum_materials
0.663824
512
HuggingFaceFW/fineweb-edu
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
2
0.6
2025-12-25T18:39:17.735791
as Teflon and Kel-F, synthetic sapphire and platinum, manipulation of, and physicochemical investigation of, HF solutions in closed systems is now reasonably straightforward." For this we must thank the pioneers in the field: generations of fluorine chemists, but especially Bartlett and Boris Zemva of the University of Ljubljana. Bartlett reports the oxidation of AgF2 to AgF4− (as KAgF4) using photochemical irradiation of F2 in anhydrous HF (made less acidic by adding KF to the HF). And Zemva used Kr(II) (in KrF2) to react with AgF2 in anhydrous HF in the presence of XeF6 to make XeF5+AgF4−. What a startling list of reagents! To appreciate the difficulty and the inspiration of this chemistry, one must look at the original papers, or at the informal letters of the few who have tried it. You can find some of Neil Bartlett's commentary in the article that Wojciech and I wrote, and in an interview with him.

Charge it, please

Chemists are always changing things. How to tune the propensity of a given oxidation state to oxidize or reduce? One way to do it is by changing the charge on the molecule that contains the oxidizing or reducing center. The syntheses of the silver fluorides cited above contain some splendid examples of this strategy. Let me use Bartlett's words again, just explaining that "electronegativity" gauges in some rough way the tendency of an atom to hold on to electrons. (High electronegativity means the electron is strongly held; low electronegativity, that it is weakly held.) It is easy to make a high oxidation state in an anion, because an anion is electron-rich. The electronegativity is lower for a given oxidation state in an anion than it is in a neutral molecule. That, in turn, is lower than it is in a cation. "If I take silver and I expose it to fluorine in the presence of fluoride ion, in HF, and expose it to light to break up F2 into atoms, I convert the silver to silver(III), AgF4−.
This is easy because the Ag(III) is in an anion. I can then pass in boron trifluoride and precipitate silver
subdomain_quantum_materials
0.622219
512
HuggingFaceFW/fineweb-edu
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
4
0.6
2025-12-25T18:39:17.737947
to atoms, I convert the silver to silver(III), AgF4−. This is easy because the Ag(III) is in an anion. I can then pass in boron trifluoride and precipitate silver trifluoride, which is now a much more potent oxidizer than AgF4−, because the electronegativity in the neutral AgF3 is much higher than it is in the anion. If I can now take away a fluoride ion and make a cation, I drive the electronegativity even further up. With such a cation, for example AgF2+, I can steal the electron from PtF6− and make PtF6.... This is an oxidation that even Kr(II) is unable to bring about." Simple, but powerful reasoning. And it works.

A world record?

Finally, a recent oxidation-state curiosity: what is the highest oxidation state one could get in a neutral molecule? Pekka Pyykkö and coworkers suggest cautiously, but I think believably, that octahedral UO6, that is U(XII), may exist. There is evidence from other molecules that uranium 6p orbitals can get involved in bonding, which is what they would have to do in UO6. What wonderful chemistry has come, and still promises to come, from the imperfect logic of oxidation states!

© Roald Hoffmann. I am grateful to Wojciech Grochala, Robert Fay and Debra Rolison for corrections and comments. Thanks to Stan Marcus for suggesting the title of this column.
subdomain_quantum_materials
0.611249
340
HuggingFaceFW/fineweb-edu
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
5
0.6
2025-12-25T18:39:17.738522
Quantum time waits for no quantum

Quantum theory, also called quantum mechanics: in physics, a theory based on using the concept of the quantum unit to describe the dynamic properties of subatomic particles and the interactions of matter and radiation. The foundation was laid by the German physicist Max Planck, who postulated in 1900 that energy can be emitted or absorbed by matter only in small, discrete units called quanta. Fundamental to the development of quantum mechanics was the uncertainty principle, formulated by the German physicist Werner Heisenberg in 1927, which states that the position and momentum of a subatomic particle cannot be specified simultaneously.

Spectral lines of atomic hydrogen: when an electron makes a transition from one energy level to another, the electron emits a photon with a particular energy. These photons are then observed as emission lines using a spectroscope. The Lyman series involves transitions to the lowest, or ground-state, energy level. Transitions to the second energy level are called the Balmer series. These transitions involve frequencies in the visible part of the spectrum. In this frequency range each transition is characterized by a […]

In the 18th and 19th centuries, Newtonian, or classical, mechanics appeared to provide a wholly accurate description of the motions of bodies, for example planetary motion. In the late 19th and early 20th centuries, however, experimental findings raised doubts about the completeness of Newtonian theory. Among the newer observations were the lines that appear in the spectra of light emitted by heated gases, or gases in which electric discharges take place. From the
model of the atom developed in the early 20th century by the British physicist Ernest Rutherford, in which negatively charged electrons circle a positive nucleus in orbits prescribed by Newton's laws of motion, scientists had also expected that the electrons would emit light over a broad frequency range, rather than in the narrow frequency ranges that form the lines in a spectrum. Another puzzle for physicists was the coexistence of two theories of light: the corpuscular theory, which explains light as a stream of particles, and the wave theory, which views light as electromagnetic waves. A third problem was the absence of a molecular basis for thermodynamics. In his book Elementary Principles in Statistical Mechanics (1902), the American mathematical physicist J. Willard Gibbs conceded the impossibility of framing a theory of molecular action that reconciled thermodynamics, radiation, and electrical phenomena as they were then understood. At the turn of the century, physicists did not yet clearly recognize that these and other difficulties in physics were in any way related. The first development that led to the solution of these difficulties
subdomain_quantum_mechanics
0.783484
512
HuggingFaceFW/fineweb-edu
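The Lyman and Balmer series described in the chunk above follow the Rydberg formula, 1/λ = R(1/n1² − 1/n2²), which the source alludes to but does not state. A minimal numerical sketch (not part of the source text; the constant and function name are standard/illustrative choices):

```python
# Hydrogen emission-line wavelengths from the Rydberg formula.
# R is the Rydberg constant (approximate standard value, not from the article).
R = 1.0973731568e7  # in 1/m

def wavelength_nm(n1, n2):
    """Wavelength of the photon emitted in the n2 -> n1 transition, in nm."""
    inv_lambda = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda  # metres -> nanometres

# Balmer series (transitions to n = 2) lies in the visible range:
for n in (3, 4, 5):
    print(f"n={n} -> 2: {wavelength_nm(2, n):.1f} nm")  # ~656, ~486, ~434 nm
# Lyman series (transitions to n = 1) lies in the ultraviolet:
print(f"n=2 -> 1: {wavelength_nm(1, 2):.1f} nm")        # ~121.5 nm
```

This reproduces the article's claim that the Balmer lines are visible while the Lyman lines are not.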
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
0
0.6
2025-12-25T18:39:17.939590
, radiation, and electrical phenomena as they were then understood. At the turn of the century, physicists did not yet clearly recognize that these and other difficulties in physics were in any way related. The first development that led to the solution of these difficulties was Planck's introduction of the concept of the quantum, as a result of physicists' studies of blackbody radiation during the closing years of the 19th century. (The term blackbody refers to an ideal body or surface that absorbs all radiant energy without any reflection.) A body at a moderately high temperature, a "red heat," gives off most of its radiation in the low-frequency (red and infrared) regions; a body at a higher temperature, a "white heat," gives off comparatively more radiation in higher frequencies (yellow, green, or blue). During the 1890s physicists conducted detailed quantitative studies of these phenomena and expressed their results in a series of curves or graphs. The classical, or prequantum, theory predicted an altogether different set of curves from those actually observed. What Planck did was to devise a mathematical formula that described the curves exactly; he then deduced a physical hypothesis that could explain the formula. His hypothesis was that energy is radiated only in quanta of energy hν, where ν is the frequency and h is the quantum of action, now known as Planck's constant.

The next important developments in quantum mechanics were the work of the German-born American physicist and Nobel laureate Albert Einstein. He used Planck's concept of the quantum to explain certain properties of the photoelectric effect, an experimentally observed phenomenon in which electrons are emitted from metal surfaces when radiation falls on these surfaces. According to classical theory, the energy, as measured by the voltage of the emitted electrons, should be proportional to the intensity of the radiation.
The energy of the electrons, however, was found to be independent of the intensity of radiation, which determined only the number of electrons emitted, and to depend solely on the frequency of the radiation. The higher the frequency of the incident radiation, the greater is the electron energy; below a certain critical frequency no electrons are emitted. These facts were explained by Einstein by assuming that a single quantum of radiant energy ejects a single electron from the metal. The energy of the quantum is proportional to the frequency, and so the energy of the electron depends on the frequency. In 1911 Rutherford established the existence of the atomic nucleus. He assumed, on the basis of experimental evidence obtained from the scattering of alpha particles by the nuclei of gold atoms, that every atom consists of a
subdomain_quantum_optics
0.714606
512
HuggingFaceFW/fineweb-edu
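Einstein's explanation described in the chunk above amounts to the relation E_kin = hν − W, where W is the metal's work function. A small sketch of the threshold behaviour (illustrative only: the work function value is a typical textbook figure for sodium, not a number from the article):

```python
# Photoelectric effect: kinetic energy of an ejected electron, E = h*nu - W.
h = 6.626e-34   # Planck's constant, J*s (approximate standard value)
e = 1.602e-19   # J per electron-volt
W = 2.3 * e     # assumed work function (roughly that of sodium), in J

def ejected_electron_energy_eV(freq_hz):
    """Electron kinetic energy in eV, or None below the critical frequency."""
    E = h * freq_hz - W
    return E / e if E > 0 else None

print(ejected_electron_energy_eV(4.0e14))  # red light: below threshold, None
print(ejected_electron_energy_eV(1.0e15))  # ultraviolet: ~1.8 eV electrons
```

Raising the intensity at 4.0e14 Hz would eject no electrons at all, while any light above the threshold frequency ejects electrons whose energy grows with frequency, exactly the puzzle the classical theory could not explain.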
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
1
0.6
2025-12-25T18:39:17.940592
the energy of the electron depends on the frequency. In 1911 Rutherford established the existence of the atomic nucleus. He assumed, on the basis of experimental evidence obtained from the scattering of alpha particles by the nuclei of gold atoms, that every atom consists of a dense, positively charged nucleus, surrounded by negatively charged electrons revolving around the nucleus as planets revolve around the Sun. The electromagnetic theory developed by the British physicist James Clerk Maxwell unequivocally predicted that an electron revolving around a nucleus will continuously radiate electromagnetic energy until it has lost all its energy, and eventually will fall into the nucleus. Thus, according to classical theory, an atom, as described by Rutherford, is unstable. This difficulty led the Danish physicist Niels Bohr, in 1913, to postulate that in an atom the classical theory does not hold, and that electrons move in fixed orbits. Every change in orbit by the electron corresponds to the absorption or emission of a quantum of radiation. The application of Bohr's theory to atoms with more than one electron proved difficult. The mathematical equations for the next simplest atom, the helium atom, were solved during the 1910s and 1920s, but the results were not entirely in accordance with experiment. For more complex atoms, only approximate solutions of the equations are possible, and these are only partly concordant with observations.

The French physicist Louis Victor de Broglie suggested in 1924 that because electromagnetic waves show particle characteristics, particles should, in some cases, also exhibit wave properties. This prediction was verified experimentally within a few years by the American physicists Clinton Joseph Davisson and Lester Halbert Germer and the British physicist George Paget Thomson. They showed that a beam of electrons scattered by a crystal produces a diffraction pattern characteristic of a wave (see diffraction).
The wave concept of a particle led the Austrian physicist Erwin Schrödinger to develop a so-called wave equation to describe the wave properties of a particle and, more specifically, the wave behavior of the electron in the hydrogen atom. Although this differential equation was continuous and gave solutions for all points in space, the permissible solutions of the equation were restricted by certain conditions expressed by mathematical equations called eigenfunctions (German eigen, "own"). The Schrödinger wave equation thus had only certain discrete solutions; these solutions were mathematical expressions in which quantum numbers appeared as parameters. (Quantum numbers are integers developed in particle physics to give the magnitudes of certain characteristic quantities of particles or systems.) The Schrödinger equation was solved for the hydrogen atom and
subdomain_quantum_optics
0.66659
512
HuggingFaceFW/fineweb-edu
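De Broglie's suggestion, verified by Davisson and Germer as described above, assigns a particle the wavelength λ = h/p. A quick sketch for an electron accelerated through 54 V, the energy used in the Davisson-Germer experiment (constants are approximate standard values, not figures from the article):

```python
# De Broglie wavelength of an electron accelerated through a given voltage,
# using the non-relativistic momentum p = sqrt(2*m*e*V).
import math

h  = 6.626e-34   # Planck's constant, J*s
me = 9.109e-31   # electron mass, kg
e  = 1.602e-19   # elementary charge, C

def de_broglie_wavelength_m(volts):
    p = math.sqrt(2 * me * e * volts)  # momentum gained from the potential
    return h / p

print(de_broglie_wavelength_m(54.0))  # ~1.67e-10 m
```

The result is comparable to the atomic spacing in a nickel crystal, which is why a crystal lattice can diffract such electrons at all.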
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
2
0.6
2025-12-25T18:39:17.941516
; these solutions were mathematical expressions in which quantum numbers appeared as parameters. (Quantum numbers are integers developed in particle physics to give the magnitudes of certain characteristic quantities of particles or systems.) The Schrödinger equation was solved for the hydrogen atom and gave conclusions in substantial agreement with earlier quantum theory. Moreover, it was solvable for the helium atom, which earlier theory had failed to explain adequately, and here also it was in agreement with experimental evidence. The solutions of the Schrödinger equation also indicated that no two electrons could have the same four quantum numbers, that is, be in the same energy state. This rule, which had already been established empirically by the Austro-American physicist and Nobel laureate Wolfgang Pauli in 1925, is called the exclusion principle.

What is matter

In the 20th century, physicists discovered that matter behaved as both a wave and a particle. The Austrian physicist and Nobel Prize winner Erwin Schrödinger discussed this apparent paradox in a lecture in Geneva, Switzerland, in 1952. A condensed and translated version of his lecture appeared in Scientific American the following year.

What is matter? The wave-particle dualism afflicting modern physics is best resolved in favor of waves, believes the author, but there is no clear picture of matter on which physicists can agree.

Fifty years ago science seemed on the road to a clear-cut answer to the ancient question which is the title of this article. It looked as if matter would be reduced at last to its ultimate building blocks, to certain submicroscopic but nevertheless tangible and measurable particles. But it proved to be less simple than that. Today a physicist no longer can distinguish significantly between matter and something else. We no longer contrast matter with forces or fields of force as different entities; we know now that these concepts must be merged.
It is true that we speak of "empty" space (i.e., space free of matter), but space is never really empty, because even in the remotest voids of the universe there is always starlight, and that is matter. Besides, space is filled with gravitational fields, and according to Einstein gravity and inertia cannot very well be separated. Thus the subject of this article is in fact the total picture of space-time reality as envisaged by physics. We have to admit that our conception of material reality today is more wavering and uncertain than it has been for a long time. We know a great many interesting details, learn new ones every week. But to construct a clear, easily comprehensible
subdomain_quantum_field_theory
0.689082
512
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
3
0.6
2025-12-25T18:39:17.942432
have to admit that our conception of material reality today is more wavering and uncertain than it has been for a long time. We know a great many interesting details, learn new ones every week. But to construct a clear, easily comprehensible picture on which all physicists would agree, that is simply impossible. Physics stands at a grave crisis of ideas. In the face of this crisis, many maintain that no objective picture of reality is possible. However, the optimists among us (of whom I consider myself one) look upon this view as a philosophical extravagance born of despair. We hope that the present fluctuations of thinking are only indications of an upheaval of old beliefs which in the end will lead to something better than the mess of formulas which today surrounds our subject.

Since the picture of matter that I am supposed to draw does not yet exist, since only fragments of it are visible, some parts of this narrative may be inconsistent with others. Like Cervantes' tale of Sancho Panza, who loses his donkey in one chapter but a few chapters later, thanks to the forgetfulness of the author, is riding the dear little animal again, our story has contradictions. We must start with the well-established concept that matter is composed of corpuscles or atoms, whose existence has been quite "tangibly" demonstrated by many beautiful experiments, and with Max Planck's discovery that energy also comes in indivisible units, called quanta, which are supposed to be transferred abruptly from one carrier to another. But then Sancho Panza's donkey will return. For I shall have to ask you to believe neither in corpuscles as permanent individuals nor in the suddenness of the transfer of an energy quantum. Discreteness is present, but not in the traditional sense of discrete single particles, let alone in the sense of abrupt processes. Discreteness arises merely as a structure from the laws governing the phenomena.
These laws are by no means fully understood; a probably correct analogue from the physics of palpable bodies is the way the various partial tones of a bell derive from its shape and from the laws of elasticity, to which, of themselves, nothing discontinuous adheres. The idea that matter is made up of ultimate particles was advanced as early as the fifth century B.C. by Leucippus and Democritus, who called these particles atoms. The corpuscular theory of matter was lifted to physical reality in the theory of gases developed during the 19th century by James Clerk Maxwell
subdomain_quantum_materials
0.696116
512
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
4
0.6
2025-12-25T18:39:17.943626
as the fifth century B.C. by Leucippus and Democritus, who called these particles atoms. The corpuscular theory of matter was lifted to physical reality in the theory of gases developed during the 19th century by James Clerk Maxwell and Ludwig Boltzmann. The concept of atoms and molecules in violent motion, colliding and rebounding again and again, led to full comprehension of all the properties of gases: their elastic and thermal properties, their viscosity, heat conductivity and diffusion. At the same time it led to a firm foundation of the mechanical theory of heat, namely, that heat is the motion of these ultimate particles, which becomes increasingly violent with rising temperature.

Within one tremendously fertile decade at the turn of the century came the discoveries of X-rays, of electrons, of the emission of streams of particles and other forms of energy from the atomic nucleus by radioactive decay, and of the electric charges on the various particles. The masses of these particles, and of the atoms themselves, were later measured very precisely, and from this was discovered the mass defect of the atomic nucleus as a whole. The mass of a nucleus is less than the sum of the masses of its component particles; the lost mass becomes the binding energy holding the nucleus firmly together. This is called the packing effect. The nuclear forces of course are not electrical forces (those are repellent) but are much stronger and act only within very short distances, about 10⁻¹³ centimeter.

Here I am already caught in a contradiction. Didn't I say at the beginning that we no longer assume the existence of force fields apart from matter? I could easily talk myself out of it by saying: well, the force field of a particle is simply considered a part of it. But that is not the fact. The established view today is rather that everything is at the same time both particle and field.
Everything has the continuous structure with which we are familiar in fields, as well as the discrete structure with which we are equally familiar in particles. This concept is supported by innumerable experimental facts and is accepted in general, though opinions differ on details, as we shall see. In the particular case of the field of nuclear forces, the particle structure is more or less known. Most likely the continuous force field is represented by the so-called pi mesons. On the other hand, the protons and neutrons, which we think of as discrete particles, indisputably also have a continuous wave structure, as is shown by the interference patterns
subdomain_quantum_field_theory
0.658248
512
HuggingFaceFW/fineweb-edu
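The "packing effect" described in the chunk above can be put in numbers via E = Δm·c². A sketch for helium-4 (the masses are approximate textbook values in atomic mass units, and the example nucleus is my choice, not the article's):

```python
# Binding energy of He-4 from its mass defect: the nucleus weighs less than
# its two protons and two neutrons, and the difference is the binding energy.
c   = 2.998e8      # speed of light, m/s
u   = 1.6605e-27   # kg per atomic mass unit
MeV = 1.602e-13    # J per MeV

m_p, m_n, m_he4 = 1.00728, 1.00866, 4.00151  # proton, neutron, He-4 nucleus (u)

delta_m = (2 * m_p + 2 * m_n - m_he4) * u    # mass lost on binding, in kg
E_bind_MeV = delta_m * c**2 / MeV
print(E_bind_MeV)                            # ~28 MeV
```

About 28 MeV, roughly 0.75% of the total mass, illustrates why nuclear forces dwarf the electrical repulsion mentioned in the text.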
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
5
0.6
2025-12-25T18:39:17.945447
field is represented by the so-called pi mesons. On the other hand, the protons and neutrons, which we think of as discrete particles, indisputably also have a continuous wave structure, as is shown by the interference patterns they form when diffracted by a crystal. The difficulty of combining these two so very different character traits in one mental picture is the main stumbling block that causes our conception of matter to be so uncertain.

Neither the particle concept nor the wave concept is hypothetical. The tracks in a photographic emulsion or in a Wilson cloud chamber leave no doubt of the behavior of particles as discrete units. The artificial production of nuclear particles is being attempted right now with terrific expenditure, defrayed in the main by the various state ministries of defense. It is true that one cannot kill anybody with one such racing particle, or else we should all be dead by now. But their study promises, indirectly, a hastened realization of the plan for the annihilation of mankind which is so close to all our hearts. You can easily observe particles yourself by looking at a luminous numeral of your wristwatch in the dark with a magnifying glass. The luminosity surges and undulates, just as a lake sometimes twinkles in the sun. The light consists of sparklets, each produced by a so-called alpha particle (helium nucleus) expelled by a radioactive atom, which in this process is transformed into a different atom. A specific device for detecting and recording single particles is the Geiger-Müller counter. In this short resume I cannot possibly exhaust the many ways in which we can observe single particles.

Now to the continuous field or wave character of matter. Wave structure is studied mainly by means of diffraction and interference, phenomena which occur when wave trains cross each other.
For the analysis and measurement of light waves the principal device is the ruled grating, which consists of a great many fine, parallel, equidistant lines, closely engraved on a specular metallic surface. Light impinging from one direction is scattered by them and collected in different directions, depending on its wavelength. But even the finest ruled gratings we can produce are too coarse to scatter the very much shorter waves associated with matter. The fine lattices of crystals, however, which Max von Laue first used as gratings to analyze the very short X-rays, will do the same for "matter waves." Directed at the surface of a crystal, high-velocity streams of particles manifest their wave nature. With crystal
subdomain_quantum_optics
0.696719
512
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
6
0.6
2025-12-25T18:39:17.946591
Max von Laue first used as gratings to analyze the very short X-rays, will do the same for "matter waves." Directed at the surface of a crystal, high-velocity streams of particles manifest their wave nature. With crystal gratings physicists have diffracted and measured the wavelengths of electrons, neutrons and protons.

What does Planck's quantum theory have to do with all this? Planck told us in 1900 that he could comprehend the radiation from red-hot iron, or from an incandescent star such as the Sun, only if this radiation was produced in discrete portions and transferred in such discrete quantities from one carrier to another (e.g., from atom to atom). This was extremely startling, because up to that time energy had been a highly abstract concept. Five years later Einstein told us that energy has mass and mass is energy; in other words, that they are one and the same. Now the scales begin to fall from our eyes: our dear old atoms, corpuscles, particles are Planck's energy quanta. The carriers of those quanta are themselves quanta. One gets dizzy. Something quite fundamental must lie at the bottom of this, but it is not surprising that the secret is not yet understood. After all, the scales did not fall suddenly. It took 20 or 30 years. And perhaps they still have not fallen completely.

The next step was not quite so far-reaching, but important enough. By an ingenious and appropriate generalization of Planck's hypothesis, Niels Bohr taught us to understand the line spectra of atoms and molecules, and how atoms were composed of heavy, positively charged nuclei with light, negatively charged electrons revolving around them. Each small system, atom or molecule, can harbor only definite discrete energy quantities, corresponding to its nature or its constitution. In transition from a higher to a lower "energy level" it emits the excess energy as a radiation quantum of definite wavelength, inversely proportional to the quantum given off.
This means that a quantum of given magnitude manifests itself in a periodic process of definite frequency which is directly proportional to the quantum; the frequency equals the energy quantum divided by the famous Planck's constant, h. According to Einstein a particle has the energy mc², m being the mass of the particle and c the velocity of light. In 1925 Louis de Broglie drew the inference, which rather suggests itself, that a particle might have associated with it a wave process of frequency mc² divided by h. The particle for which he postulated
subdomain_quantum_optics
0.692524
512
HuggingFaceFW/fineweb-edu
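De Broglie's inference quoted above, a wave of frequency mc²/h associated with a particle, can be evaluated for the electron. A one-line sketch (constants are approximate standard values; the computation is mine, not the lecture's):

```python
# Frequency of the matter wave de Broglie associated with a particle of
# mass m: nu = m * c^2 / h. Evaluated here for the electron.
h  = 6.626e-34   # Planck's constant, J*s
c  = 2.998e8     # speed of light, m/s
me = 9.109e-31   # electron mass, kg

nu = me * c**2 / h   # frequency of the electron's associated wave process
print(f"{nu:.3e} Hz")  # on the order of 1e20 Hz
```

The enormous value, around 10²⁰ Hz, shows why this wave process is not directly observable and had to be inferred from diffraction instead.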
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
7
0.6
2025-12-25T18:39:17.947790
particle and c the velocity of light. In 1925 Louis de Broglie drew the inference, which rather suggests itself, that a particle might have associated with it a wave process of frequency mc² divided by h. The particle for which he postulated such a wave was the electron. Within two years the "electron waves" required by his theory were demonstrated by the famous electron diffraction experiment of C. J. Davisson and L. H. Germer. This was the starting point for the cognition that everything, anything at all, is simultaneously particle and wave field. Thus de Broglie's dissertation initiated our uncertainty about the nature of matter. Both the particle picture and the wave picture have truth value, and we cannot give up either one or the other. But we do not know how to combine them. That the two pictures are connected is known in full generality, with great precision and down to amazing details. But concerning the unification to a single, concrete, palpable picture, opinions are so strongly divided that a great many deem it altogether impossible. I shall briefly sketch the connection. But do not expect that a uniform, concrete picture will emerge before you; and do not blame the lack of success either on my ineptness in exposition or your own denseness, for nobody has yet succeeded.

One distinguishes two things in a wave. First of all, a wave has a front, and a succession of wave fronts forms a system of surfaces like the layers of an onion. You are familiar with the two-dimensional analogue, the beautiful wave circles that form on the smooth surface of a pond when a stone is thrown in. The second characteristic of a wave, less intuitive, is the path along which it travels, a system of imagined lines perpendicular to the wave fronts. These lines are known as the wave "normals" or "rays." We can make the provisional assertion that these rays correspond to the trajectories of particles.
Indeed, if you cut a small piece out of a wave, approximately 10 or 20 wavelengths along the direction of propagation and about as much across, such a "wave packet" would actually move along a ray with exactly the same velocity and change of velocity as we might expect from a particle of this particular kind at this particular place, taking into account any force fields acting on the particle. Here I falter. For what I must say now, though correct, almost contradicts this provisional assertion. Although the behavior of the wave packet gives us a more or less intuitive picture of a particle,
subdomain_quantum_optics
0.680575
512
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
8
0.6
2025-12-25T18:39:17.948719
any force fields acting on the particle. Here I falter. For what I must say now, though correct, almost contradicts this provisional assertion. Although the behavior of the wave packet gives us a more or less intuitive picture of a particle, which can be worked out in detail (e.g., the momentum of a particle increases as the wavelength decreases; the two are inversely proportional), yet for many reasons we cannot take this intuitive picture quite seriously. For one thing, it is, after all, somewhat vague, the more so the greater the wavelength. For another, quite often we are dealing not with a small packet but with an extended wave. For still another, we must also deal with the important special case of very small "packelets" which form a kind of "standing wave" which can have no wave fronts or wave normals.

One interpretation of wave phenomena which is extensively supported by experiments is this: at each position of a uniformly propagating wave train there is a twofold structural connection of interactions, which may be distinguished as "longitudinal" and "transversal." The transversal structure is that of the wave fronts and manifests itself in diffraction and interference experiments; the longitudinal structure is that of the wave normals and manifests itself in the observation of single particles. However, these concepts of longitudinal and transversal structures are not sharply defined and absolute, since the concepts of wave front and wave normal are not. Moreover, the interpretation breaks down completely in the special case of the standing waves mentioned above. Here the whole wave phenomenon is reduced to a small region of the dimensions of a single or very few wavelengths. You can produce standing water waves of a similar nature in a small basin if you dabble with your finger rather uniformly in its center, or else just give it a little push so that the water surface undulates.
In this situation we are not dealing with uniform wave propagation; what catches the interest are the normal frequencies of these standing waves. The water waves in the basin are an analogue of a wave phenomenon associated with electrons, which occurs in a region just about the size of the atom. The normal frequencies of the wave group washing around the atomic nucleus are universally found to be exactly equal to Bohr's atomic "energy levels" divided by Planck's constant h. Thus the ingenious yet somewhat artificial assumptions of Bohr's model of the atom, as well as of the older quantum theory in general, are superseded by the far more natural idea of
subdomain_quantum_field_theory
0.678647
512
HuggingFaceFW/fineweb-edu
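The identification above, energy levels equal to h times normal-mode frequencies, can be sketched numerically for hydrogen. The code is illustrative only: the level formula E_n = −13.6 eV / n² and the constants are standard textbook values, not figures from the lecture.

```python
# Bohr energy levels of hydrogen read as normal-mode frequencies, |E_n| / h.
h = 6.626e-34   # Planck's constant, J*s
e = 1.602e-19   # J per electron-volt

def level_frequency_hz(n):
    """Normal-mode frequency corresponding to the n-th level of hydrogen."""
    return 13.6 * e / (n**2 * h)

# A transition between two modes radiates at the difference of their frequencies:
f = level_frequency_hz(2) - level_frequency_hz(3)
print(f"{f:.3e} Hz")  # ~4.6e14 Hz, the red Balmer (H-alpha) line
```

This is Schrödinger's point made concrete: the radiated frequency is a beat between two simultaneously excited modes, with no need for a sudden "jump."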
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
9
0.6
2025-12-25T18:39:17.949653
energy levels" divided by Planck's constant h. Thus the ingenious yet somewhat artificial assumptions of Bohr's model of the atom, as well as of the older quantum theory in general, are superseded by the far more natural idea of de Broglie's wave phenomenon. The wave phenomenon forms the "body" proper of the atom. It takes the place of the individual pointlike electrons which in Bohr's model are supposed to swarm around the nucleus. Such pointlike single particles are completely out of the question within the atom, and if one still thinks of the nucleus itself in this way, one does so quite consciously, for reasons of expediency.

What seems to me particularly important about the discovery that "energy levels" are virtually nothing but the frequencies of normal modes of vibration is that now one can do without the assumption of sudden transitions, or quantum jumps, since two or more normal modes may very well be excited simultaneously. The discreteness of the normal frequencies fully suffices, so I believe, to support the considerations from which Planck started, and many similar and just as important ones; in short, to support all of quantum theory. The theory of quantum jumps is becoming more and more unacceptable, at least to me personally, as the years go on. Its abandonment has, however, far-reaching consequences. It means that one must give up entirely the idea of the exchange of energy in well-defined quanta and replace it with the concept of resonance between vibrational frequencies. Yet we have seen that because of the identity of mass and energy, we must consider the particles themselves as Planck's energy quanta. This is at first frightening. For the substituted theory implies that we can no longer consider the individual particle as a well-defined permanent entity. That it is, in fact, no such thing can be reasoned in other ways.
For one thing, there is Werner Heisenberg's famous uncertainty principle, according to which a particle cannot simultaneously have a well-defined position and a sharply defined velocity. This uncertainty implies that we cannot be sure that the same particle could ever be observed twice. Another conclusive reason for not attributing identifiable sameness to individual particles is that we must obliterate their individualities whenever we consider two or more interacting particles of the same kind, e.g., the two electrons of a helium atom. Two situations which are distinguished only by the interchange of the two electrons must be counted as one and the same; if they are counted as two equal situations,
nonsense obtains. This circumstance holds for any kind of particle in arbitrary numbers without exception. Most theoreticians will probably accept the foregoing reasoning and admit that the individual particle is not a well-defined permanent entity of detectable identity or sameness. Nevertheless this inadmissible concept of the individual particle continues to play a large role in their ideas and discussions. Even deeper rooted is the belief in "quantum jumps," which is now surrounded with a highly abstruse terminology whose common-sense meaning is often difficult to grasp. For instance, an important word in the standing vocabulary of quantum theory is "probability," referring to transition from one level to another. But, after all, one can speak of the probability of an event only assuming that, occasionally, it actually occurs. If it does occur, the transition must indeed be sudden, since intermediate stages are disclaimed. Moreover, if it takes time, it might conceivably be interrupted halfway by an unforeseen disturbance. This possibility leaves one completely at sea. The wave v. corpuscle dilemma is supposed to be resolved by asserting that the wave field merely serves for the computation of the probability of finding a particle of given properties at a given position if one looks for it there. But once one deprives the waves of reality and assigns them only a kind of informative role, it becomes very difficult to understand the phenomena of interference and diffraction on the basis of the combined action of discrete single particles. It certainly seems easier to explain particle tracks in terms of waves than to explain the wave phenomenon in terms of corpuscles.
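The counting rule above, that configurations differing only by an interchange of identical particles count as one, can be made concrete with a toy enumeration. This is an illustrative sketch; the two-level system and its labels are invented for the example:

```python
from itertools import product, combinations_with_replacement

levels = ["ground", "excited"]

# Classical (distinguishable) counting: each of two labelled particles
# independently occupies a level, giving 2 * 2 = 4 arrangements.
classical = list(product(levels, repeat=2))

# Quantum (indistinguishable) counting: only the *multiset* of occupied
# levels matters, as in the helium example above, giving 3 arrangements.
quantum = list(combinations_with_replacement(levels, 2))

print(len(classical), len(quantum))
```

The discrepancy (4 versus 3) is exactly the "nonsense" the essay warns about: treating interchanged electrons as distinct situations overcounts the states.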
" real existence " is, to be sure, an expression which has been virtually chased to death by many philosophical hounds. its simple, naive meaning has almost become lost to us. therefore i want to recall something else. i spoke of a corpuscle ’ s not being an individual. properly speaking, one never observes the same particle a second time — very much as heraclitus says of the river. you cannot mark an electron, you cannot paint it red. indeed, you must not even think of it as marked ; if you do, your " counting " will be false and you will get wrong results at every step — for the structure of line spectra, in thermod
A wave, on the other hand, can easily be imprinted with an individual structure by which it can be recognized beyond doubt. Think of the beacon fires that guide ships at sea. The light shines according to a definite code; for example: three seconds light, five seconds dark, one second light, another pause of five seconds, and again light for three seconds — the skipper knows that is San Sebastián. Or you talk by wireless telephone with a friend across the Atlantic; as soon as he says, "Hello there, Edward Meier speaking," you know that his voice has imprinted on the radio wave a structure which can be distinguished from any other. But one does not have to go that far. If your wife calls, "Francis!" from the garden, it is exactly the same thing, except that the structure is printed on sound waves and the trip is shorter (though it takes somewhat longer than the journey of radio waves across the Atlantic). All our verbal communication is based on imprinted individual wave structures. And, according to the same principle, what a wealth of details is transmitted to us in rapid succession by the movie or the television picture! This characteristic, the individuality of the wave phenomenon, has already been found to a remarkable extent in the very much finer waves of particles. One example must suffice. A limited volume of gas, say helium, can be thought of either as a collection of many helium atoms or as a superposition of elementary wave trains of matter waves. Both views lead to the same theoretical results as to the behavior of the gas upon heating, compression, and so on.
When you attempt to apply certain somewhat involved enumerations to the gas, you must carry them out in different ways according to the mental picture with which you approach it. If you treat the gas as consisting of particles, then no individuality must be ascribed to them, as I said. If, however, you concentrate on the matter wave trains instead of on the particles, every one of the wave trains has a well-defined structure which is different from that of any other. It is true that there are many pairs of waves which are so similar to each other that they could change roles without any noticeable effect on the gas.
But if you should count the very many similar states formed in this way as merely a single one, the result would be quite wrong. In spite of everything we cannot completely banish the concepts of quantum jump and individual corpuscle from the vocabulary of physics. We still require them to describe many details of the structure of matter. How can one ever determine the weight of a carbon nucleus and of a hydrogen nucleus, each to the precision of several decimals, and detect that the former is somewhat lighter than the 12 hydrogen nuclei combined in it, without accepting for the time being the view that these particles are something quite concrete and real? This view is so much more convenient than the roundabout consideration of wave trains that we cannot do without it, just as the chemist does not discard his valence-bond formulas, although he fully realizes that they represent a drastic simplification of a rather involved wave-mechanical situation. If you finally ask me: "Well, what are these corpuscles, really?" I ought to confess honestly that I am almost as little prepared to answer that as to tell where Sancho Panza's second donkey came from. At the most, it may be permissible to say that one can think of particles as more or less temporary entities within the wave field whose form and general behavior are nevertheless so clearly and sharply determined by the laws of waves that many processes take place as if these temporary entities were substantial permanent beings. The mass and the charge of particles, defined with such precision, must then be counted among the structural elements determined by the wave laws.
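The carbon-versus-hydrogen weighing mentioned above comes down to simple arithmetic. As an illustration (using rounded standard atomic masses in unified mass units, and atomic rather than bare-nuclear masses for simplicity):

```python
# Illustrative arithmetic only. Atomic masses in unified mass units (u):
# hydrogen-1 ~ 1.007825 u; carbon-12 = 12 u exactly, by definition of u.
M_H1 = 1.007825      # u, mass of a hydrogen atom (rounded standard value)
M_C12 = 12.0         # u
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

# Carbon-12 really is lighter than 12 hydrogens combined:
defect_u = 12 * M_H1 - M_C12
defect_mev = defect_u * U_TO_MEV
print(f"mass defect: {defect_u:.4f} u ~ {defect_mev:.1f} MeV")
```

The deficit, roughly 0.094 u or close to 90 MeV, is the binding energy released in assembling the nucleus; detecting it "to the precision of several decimals" is exactly the kind of measurement the essay appeals to.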
The conservation of charge and mass in the large must be considered as a statistical effect, based on the "law of large numbers." Simultaneously with the development of wave mechanics, Heisenberg evolved a different mathematical analysis known as matrix mechanics. According to Heisenberg's theory, which was developed in collaboration with the German physicists Max Born and Ernst Pascual Jordan, the formula was not a differential equation but a matrix: an array consisting of an infinite number of rows, each row consisting of an infinite number of quantities. Matrix mechanics introduced infinite matrices to represent the position and momentum of an electron inside an atom. Also, different matrices exist, one for each observable physical property associated with the motion of an electron,
such as energy, position, momentum, and angular momentum. These matrices, like Schrödinger's differential equations, could be solved; in other words, they could be manipulated to produce predictions as to the frequencies of the lines in the hydrogen spectrum and other observable quantities. Like wave mechanics, matrix mechanics was in agreement with the earlier quantum theory for processes in which the earlier quantum theory agreed with experiment; it was also useful in explaining phenomena that earlier quantum theory could not explain. Schrödinger subsequently succeeded in showing that wave mechanics and matrix mechanics are different mathematical versions of the same theory, now called quantum mechanics. Even for the simple hydrogen atom, which consists of two particles, both mathematical interpretations are extremely complex. The next simplest atom, helium, has three particles, and even in the relatively simple mathematics of classical dynamics, the three-body problem (that of describing the mutual interactions of three separate bodies) is not exactly solvable. The energy levels can be calculated accurately, however, even if not exactly. In applying quantum-mechanical mathematics to relatively complex situations, a physicist can use one of a number of mathematical formulations. The choice depends on the convenience of the formulation for obtaining suitable approximations. Although quantum mechanics describes the atom purely in terms of mathematical interpretations of observed phenomena, a rough verbal description can be given of what the atom is now thought to be like.
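Heisenberg's infinite matrices can be sampled by truncating them. The sketch below builds small position and momentum matrices in the harmonic-oscillator basis (a standard textbook construction assumed here, not described in the text) and checks the canonical commutation relation [x, p] = iħ on the top-left element, with ħ set to 1:

```python
import math

N = 6  # truncation size; the true matrices are infinite

def matmul(A, B):
    """Plain-Python product of two N x N matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# Ladder operator in the oscillator basis: a[i][j] = sqrt(j) when i == j - 1.
a = [[math.sqrt(j) if i == j - 1 else 0.0 for j in range(N)] for i in range(N)]
ad = [[a[j][i] for j in range(N)] for i in range(N)]  # a is real, so transpose = adjoint

# Position and momentum matrices with hbar = m = omega = 1:
x = [[(a[i][j] + ad[i][j]) / math.sqrt(2) for j in range(N)] for i in range(N)]
p = [[1j * (ad[i][j] - a[i][j]) / math.sqrt(2) for j in range(N)] for i in range(N)]

xp, px = matmul(x, p), matmul(p, x)
commutator_00 = xp[0][0] - px[0][0]   # approximately 1j, i.e. [x, p] = i*hbar here
print(commutator_00)
```

Truncation spoils the relation only in the bottom-right corner of the matrix; everywhere else the infinite-matrix algebra survives intact, which is why small numerical samples of matrix mechanics are usable at all.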
Surrounding the nucleus is a series of stationary waves; these waves have crests at certain points, each complete standing wave representing an orbit. The absolute square of the amplitude of the wave at any point is a measure of the probability that an electron will be found at that point at any given time. Thus, an electron can no longer be said to be at any precise point at any given time. The impossibility of pinpointing an electron at any precise time was analyzed by Heisenberg, who in 1927 formulated the uncertainty principle. This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, the more accurately a particle's momentum is measured and known, the less accuracy there can be in the measurement and knowledge of its position. This principle is also fundamental to the understanding of quantum mechanics as it is generally accepted today: the wave and particle character of electromagnetic radiation can be understood
as two complementary properties of radiation. Another way of expressing the uncertainty principle is that the wavelength of a quantum mechanical particle is inversely proportional to its momentum. As atoms are cooled they slow down and their corresponding wavelength grows larger. At a low enough temperature this wavelength is predicted to exceed the spacing between particles, causing atoms to overlap, becoming indistinguishable, and melding into a single quantum state. In 1995 a team of Colorado scientists, led by National Institute of Standards and Technology physicist Eric Cornell and University of Colorado physicist Carl Wieman, cooled rubidium atoms to a temperature so low that the particles entered this merged state, known as a Bose-Einstein condensate. The condensate essentially behaves like one atom even though it is made up of thousands. Physicists condense supercooled atoms, forming a new state of matter: a team of Colorado physicists has cooled atoms of gas to a temperature so low that the particles entered a merged state, known as a "Bose-Einstein condensate." This phenomenon was first predicted about 70 years ago by the theories of German-born American physicist Albert Einstein and Indian physicist Satyendra Nath Bose. The condensed particles are considered a new state of matter, different from the common states of matter — gas, liquid, and solid — and from plasma, a high-temperature, ionized form of matter that is found in the Sun and other stars. Physicists have great expectations for the application of this discovery.
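The claim that a particle's wavelength is inversely proportional to its momentum, and therefore grows as atoms are cooled, can be illustrated with the thermal de Broglie wavelength λ = h / √(2π m k T). The rubidium-87 mass and the two sample temperatures below are assumptions chosen for illustration, not figures from the text:

```python
import math

# Thermal de Broglie wavelength for rubidium-87; constants are rounded
# standard values, and the temperatures are illustrative choices.
H = 6.62607e-34               # Planck's constant, J*s
K_B = 1.380649e-23            # Boltzmann's constant, J/K
M_RB = 86.909 * 1.66054e-27   # kg, one rubidium-87 atom

def thermal_wavelength(T):
    """lambda = h / sqrt(2*pi*m*k*T), the typical matter-wave length at temperature T."""
    return H / math.sqrt(2 * math.pi * M_RB * K_B * T)

room = thermal_wavelength(300.0)    # room temperature: ~1e-11 m, far below atomic spacing
cold = thermal_wavelength(100e-9)   # 100 nanokelvin: micron scale, so waves overlap
print(f"{room:.2e} m -> {cold:.2e} m ({cold / room:.0f}x larger)")
```

Cooling from room temperature to 100 nK stretches the wavelength by a factor of tens of thousands, which is why only such extreme cold lets neighboring atomic waves overlap.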
Because the condensate essentially behaves like one atom even though it is made up of thousands, investigators should be able to measure interactions at the atomic and subatomic level that were previously extremely difficult, if not impossible, to study. The condensate was detected June 5 by a Colorado team led by National Institute of Standards and Technology physicist Eric Cornell and University of Colorado physicist Carl Wieman. Their discovery was reported in the journal Science on July 14. Cornell and Wieman formed their condensate from rubidium gas. Several groups of physicists, including the teams in Texas and Colorado and a group at the Massachusetts Institute of Technology, have been working to form pure condensate in recent years. The goal of the investigations has been to create a pure chunk of condensate out of atoms in
an inert medium, such as a diffuse, nonreactive gas. The effort began when methods of cooling and trapping became refined enough that it seemed possible to reach the required conditions of temperature and density. The Colorado team used two techniques: first laser cooling and then evaporative cooling. The laser technique used laser light whose frequency was carefully tuned to interact with the rubidium atoms and gently reduce their speeds. A number of lasers were aimed at the gas to slow the motion of the atoms in different directions. The Colorado physicists then switched to evaporative cooling. In this method, the gas is "trapped" by a magnetic field that dwindles to zero at its center. Atoms that are moving wander out of the field, while the coldest atoms cluster at the center. Because a few very cold atoms could still escape at the zero field point of the trap, the physicists perfected their system by adding a second slowly circling magnetic field so that the zero point moved, not giving the atoms the chance to escape through it. Physicists will now begin to explore the properties of the condensate and see what other materials they can use to form it. One unusual characteristic of the condensate is that it is composed of atoms that have lost their individual identities. This is analogous to laser light, which is composed of light particles, or photons, that similarly have become indistinguishable and all behave in exactly the same manner. The laser has found a myriad of uses both in practical applications and in theoretical research, and the Bose-Einstein condensate may turn out to be just as important.
Some scientists speculate that if a condensate can be readily produced and sustained, it could be used to miniaturize and speed up computer components to a scale and quickness not possible before. The prediction that a merged form of matter will emerge at extremely low temperatures is based on a number of aspects of the quantum theory. This theory governs the interaction of particles on a subatomic scale. The basic principle of quantum theory is that particles can only exist in certain discrete energy states. The exact "quantum state" of a particle takes into consideration such factors as the position of the particle and its "spin," which can only have certain discrete values.
A particle's spin categorizes it as either a boson or a fermion. Those two groups of particles behave according to different sets of statistical rules. Bosons have spins that are a constant number multiplied by an integer (e.g., 0, 1, 2, 3). Fermions have spins that are that same constant multiplied by an odd half-integer (1/2, 3/2, 5/2, etc.). Examples of fermions are the protons and neutrons that make up an atom's nucleus. Composite particles, such as nuclei and atoms, are classified as bosons or fermions based on the sum of the spins of their constituent particles. For instance, an isotope of helium called helium-4 turns out to be a Bose particle. Helium-4 is made up of six Fermi particles: two electrons orbiting a nucleus made up of two protons and two neutrons. Adding up six odd half-integers will yield a whole integer, making helium-4 a boson. The atoms of rubidium used in the Colorado experiment are Bose particles as well. Only Bose atoms may form a condensate, but they do so only at a sufficiently low temperature and high density. At their lab in Colorado, Cornell and Wieman cooled a rubidium gas down to a temperature as close to absolute zero, the temperature at which particles stop moving, as they could get. The slower the particles, the lower their momentum. In essence, the cooling brought the momentum of the gas particles closer and closer to precisely zero, as the temperature decreased to within a few billionths of a degree Kelvin. (Kelvin degrees are on the scale of degrees Celsius, but zero Kelvin is absolute zero, while zero Celsius is the freezing point of water.
) As the temperature, and thus the momentum, of the gas particles dropped to an infinitesimal amount, the possible locations of the atom at any given moment increased proportionally. The goal of the experiment was to keep the gas atoms packed together closely enough that during this process — as their momentum got lower and lower, and their wavelengths got larger and larger — their waves would begin to overlap. This interplay of position and movement in three dimensions with the
relative distances between particles is known as the phase-space density and is the key factor in forming a condensate. In essence, the momentum of the atoms would become so precisely pinpointed (near zero) that their position would become less and less certain and there would be a relatively large amount of space that would define each atom's position. As the atoms slowed to almost a stop, their positions became so fuzzy that each atom came to occupy the same position as every other atom, losing their individual identity. This odd phenomenon is a Bose-Einstein condensate. As their experimental conditions neared the realm of Bose-Einstein condensation, Cornell and Wieman noticed an abrupt rise in the peak density of their sample, a type of discontinuity that strongly indicates a phase transition. The Colorado physicists estimated that after progressive evaporative cooling of the rubidium, they were left with a nugget of about 2,000 atoms of pure condensate. Cornell and Wieman then released the atoms from the "trap" in which they had been cooling and sent a pulse of laser light at the condensate, basically blowing it apart. They recorded an image of the expanding cloud of atoms. Prior to the light pulse, when the density dropped after the atoms were released, the physicists believed the temperature of the condensate fell to an amazing frigidity of 20 nanokelvins (20 billionths of one degree above absolute zero). The image showed a larger, expanding sphere of particles with a smaller, more concentrated elliptical-looking center.
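The phase-space density described above has a standard quantitative form: condensation sets in roughly when n·λ³ ≥ 2.612 (the Riemann zeta value ζ(3/2)), where n is the number density and λ the thermal de Broglie wavelength. A sketch at the 20 nK quoted above, with an assumed trap density that is not reported in the text:

```python
import math

# Bose-Einstein condensation criterion: n * lambda^3 >= 2.612.
# Constants are rounded standard values; the density n is an ASSUMED,
# merely plausible trap density, not a figure from the text.
H = 6.62607e-34               # J*s
K_B = 1.380649e-23            # J/K
M_RB = 86.909 * 1.66054e-27   # kg, rubidium-87 atom

T = 20e-9                     # 20 nanokelvin, as quoted above
n = 2.5e18                    # atoms per cubic meter (assumption)

lam = H / math.sqrt(2 * math.pi * M_RB * K_B * T)   # thermal de Broglie wavelength
phase_space_density = n * lam**3
print(f"n*lambda^3 = {phase_space_density:.2f} (condensation needs >= 2.612)")
```

With these assumed numbers the phase-space density comfortably exceeds the 2.612 threshold, which is consistent with the abrupt density rise the Colorado team observed.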
Cornell and Wieman observed that when a gas is constrained and then released (in an extreme example, as in a bomb), thermodynamics specifies that it will expand outward equally in all directions regardless of the shape in which it had been contained. This occurs because the particles in that gas, even if the gas was very cold, were moving in all different directions with various energies when the gas was released. This rule of uniform expansion does not hold for a Bose-Einstein condensate. Because the particles were all acting in exactly the same manner at the time of the light pulse, their expansion should give some indication of the shape of the space they had previously inhabited. The uneven, elliptical-looking clump of atoms in the
In the 1930s the application of quantum mechanics and special relativity to the theory of the electron (see quantum electrodynamics) allowed the British physicist Paul Dirac to formulate an equation that referred to the existence of the spin of the electron. It further led to the prediction of the existence of the positron, which was experimentally verified by the American physicist Carl David Anderson. The application of quantum mechanics to the subject of electromagnetic radiation led to explanations of many phenomena, such as bremsstrahlung (German, "braking radiation," the radiation emitted by electrons slowed down in matter) and pair production (the formation of a positron and an electron when electromagnetic energy interacts with matter). It also led to a grave problem, however, called the divergence difficulty: certain parameters, such as the so-called bare mass and bare charge of electrons, appear to be infinite in Dirac's equations. (The terms bare mass and bare charge refer to hypothetical electrons that do not interact with any matter or radiation; in reality, electrons interact with their own electric field.) This difficulty was partly resolved in 1947-49 in a program called renormalization, developed by the Japanese physicist Shin'ichirō Tomonaga, the American physicists Julian S. Schwinger and Richard Feynman, and the British physicist Freeman Dyson. In this program, the bare mass and charge of the electron are chosen to be infinite in such a way that other infinite physical quantities are canceled out in the equations. Renormalization greatly increased the accuracy with which the structure of atoms could be calculated from first principles. Theoretical physicist C. Llewellyn Smith discusses the discoveries that scientists have made to date about the electron and other elementary particles — subatomic particles that scientists believe cannot be split into smaller units of matter.
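The equation Dirac formulated can be stated compactly. In modern notation (a standard form supplied here for reference, not taken from the text), the free-electron Dirac equation in natural units reads:

```latex
% Free-electron Dirac equation, natural units (\hbar = c = 1):
(i\gamma^{\mu}\partial_{\mu} - m)\,\psi = 0
```

Its spinor structure accounts for the electron's spin, and its negative-energy solutions are what led to the positron prediction mentioned above.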
Scientists have discovered what Smith refers to as sibling and cousin particles to the electron, but much about the nature of these particles is still unknown. One way scientists learn about these particles is to accelerate them to high energies, smash them together, and then study what happens when they collide. By observing the behavior of these particles, scientists hope to learn more about the fundamental structures of the universe. Electrons: the first hundred years. The discovery of the electron was announced by J. J. Thomson just over 100 years ago, on April 30, 1897. In the intervening years we have come to understand the mechanics that describe the behavior of electrons — and indeed of all matter on a small scale — which is called quantum mechanics.
By exploiting this knowledge, we have learned to manipulate electrons and make devices of tremendous practical and economic importance, such as transistors and lasers. Meanwhile, what have we learned of the nature of the electron itself? From the start, electrons were found to behave as elementary particles, and this is still the case today. We know that if the electron has any structure, it is on a scale of less than 10⁻¹⁸ m, i.e. less than 1 billionth of 1 billionth of a meter. However, a major complication has emerged. We have discovered that the electron has a sibling and cousins that are apparently equally fundamental. The sibling is an electrically neutral particle, called the neutrino, which is much lighter than the electron. The cousins are two electrically charged particles, called the mu and the tau, which also have neutral siblings. The mu and the tau seem to be identical copies of the electron, except that they are respectively 200 and 3,500 times heavier. Their role in the scheme of things and the origin of their different masses remain mysteries — just the sort of mysteries that particle physicists, who study the constituents of matter and the forces that control their behavior, wish to resolve. We therefore know of six seemingly fundamental particles: the electron, the mu, the tau and their neutral siblings, which — like the electron — do not feel the nuclear force, and incidentally are known generically as leptons. What about the constituents of atomic nuclei, which of course do feel the nuclear force? At first sight, nuclei are made of protons and neutrons, but these particles turned out not to be elementary. It was found that when protons and neutrons are smashed together, new particles are created.
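The "roughly 200 and 3,500 times heavier" figures above follow directly from the measured lepton masses (rounded standard values, not quoted in the text):

```python
# Measured charged-lepton masses in MeV/c^2 (rounded standard values):
M_E = 0.51100
M_MU = 105.658
M_TAU = 1776.86

mu_ratio = M_MU / M_E      # ~207: the "roughly 200 times heavier" above
tau_ratio = M_TAU / M_E    # ~3477: the "roughly 3,500 times heavier" above
print(f"mu/e = {mu_ratio:.0f}, tau/e = {tau_ratio:.0f}")
```

Why the ratios take these particular values is exactly the unexplained "origin of their different masses" that the text describes.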
We now know that all these particles are made of more elementary entities, called quarks. In a collision, pairs of quarks and their antiparticles, called antiquarks, can be created: part of the energy (E) of the incoming particles is turned into mass (m) of these new particles, thanks to the famous equivalence E = mc². The quarks in the projectiles and the created quark-antiquark pairs can then rearrange themselves to make various different sorts of new particles.
Today, six types of quarks are known which, like the leptons (the electron and its relations), have simple properties, and could be elementary. In the past 30 years a recipe that describes the behavior of these particles has been developed. It is called the "standard model" of particle physics. However, we lack a real understanding of the nature of these particles, and the logic behind the standard model. What is wrong with the standard model? First, it does not consistently combine Einstein's theory of the properties of space (called general relativity) with a quantum mechanical description of the properties of matter. It is therefore incomplete. Second, it contains too many apparently arbitrary features — it is too baroque, too byzantine — to be complete. It does not explain the role of the mu and the tau, or answer the question whether the fact that the numbers of leptons and quarks are the same — six each — is a coincidence, or an indication of a deep connection between these different types of particles. On paper, we can construct theories that give better answers and explanations, and in which there are such connections, but we do not know which, if any, of these theories is correct. Third, it has a missing, untested element. This is not some minor detail, but a central element, namely a mechanism to generate the observed masses of the known particles, and hence also the different ranges of the known forces (long range for gravity and electromagnetism, as users of magnetic compasses know, but very short range for the nuclear and the so-called weak forces, although in every other respect these forces appear very similar). On paper, a possible mechanism is known, called the Higgs mechanism, after the British physicist Peter Higgs who invented it.
But there are alternative mechanisms, and in any case the Higgs mechanism is a generic idea. We not only need to know if nature uses it, but if so, how it is realized in detail. Luckily the prospects of developing a deeper understanding are good. The way forward is to perform experiments that can distinguish the different possibilities. We know that the answer to the mystery of the origin of mass, and the different ranges of forces, and certain other very important questions, must lie in an energy
range that will be explored in experiments at the Large Hadron Collider, a new accelerator now under construction at CERN [also known as the European Laboratory for Particle Physics] near Geneva. The fundamental tools on which experimental particle physics depends are large accelerators, like the Large Hadron Collider, which accelerate particles to very high energies and smash them together. By studying what happens in the collisions of these particles, which are typically electrons or protons (the nuclei of hydrogen atoms), we can learn about their natures. The conditions that are created in these collisions of particles existed just after the birth of the universe, when it was extremely hot and dense. Knowledge derived from experiments in particle physics is therefore essential input for those who wish to understand the structure of the universe as a whole, and how it evolved from an initial fireball into its present state. The Large Hadron Collider will therefore not only open up a large new window on the nature of matter, when it comes into operation in 2005, but also advance our understanding of the structure of the universe. However, although it will undoubtedly resolve some major questions and greatly improve our knowledge of nature, it would be very surprising if it established a "final theory." The only candidate theory currently known which appears to have the potential to resolve all the problems mentioned above — the reason for the existence of the mu and tau, reconciliation of general relativity with quantum mechanics, etc. — describes the electron and its relatives and the quarks, not as pointlike objects, but as different vibrating modes of tiny strings.
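The link between collision energy and early-universe conditions can be roughed out with the equivalence k·T ~ E. The 14 TeV figure below is the LHC's eventual design collision energy, an assumption added for illustration rather than a number from the text:

```python
# Rough temperature equivalent of a collider energy via k_B * T ~ E.
# 14 TeV is an ASSUMED design collision energy for the LHC, not from the text;
# the constants are rounded standard values.
K_B = 1.380649e-23      # Boltzmann's constant, J/K
EV = 1.602177e-19       # joules per electronvolt

energy_j = 14e12 * EV   # 14 TeV expressed in joules
temperature = energy_j / K_B
print(f"equivalent temperature ~ {temperature:.1e} K")
```

The answer, on the order of 10¹⁷ kelvin, gives a feel for why such collisions recreate conditions from just after the birth of the universe.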
however, these strings are so small ( 10^-35 m ) that they will never be observed. if this is so, the electron and the other known particles will continue forever to appear to be fundamental pointlike objects, even if the — currently very speculative — " string theory " scores enough successes to convince us that this is not the case! future prospects : quantum mechanics underlies current attempts to account for the strong nuclear force and to develop a unified theory for all the fundamental interactions. nevertheless, doubts exist about the completeness of quantum theory. the divergence difficulty, for example, is only partly resolved. just as newtonian mechanics was eventually amended by quantum mechanics and relativity, many scientists — and einstein was among them — are convinced
subdomain_quantum_field_theory
0.701484
512
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
23
0.6
2025-12-25T18:39:17.964249
nevertheless, doubts exist about the completeness of quantum theory. the divergence difficulty, for example, is only partly resolved. just as newtonian mechanics was eventually amended by quantum mechanics and relativity, many scientists — and einstein was among them — are convinced that quantum theory will also undergo profound changes in the future. great theoretical difficulties exist, for example, between quantum mechanics and chaos theory, which began to develop rapidly in the 1980s. ongoing efforts are being made by theorists such as the british physicist stephen hawking, to develop a system that encompasses both relativity and quantum mechanics. breakthroughs occurred in the area of quantum computing in the late 1990s. quantum computers under development use components of a chloroform molecule ( a combination of chlorine and hydrogen atoms ) and a variation of a medical procedure called magnetic resonance imaging ( mri ) to compute at a molecular level. scientists used a branch of physics called quantum mechanics, which describes the activity of subatomic particles ( particles that make up atoms ), as the basis for quantum computing. quantum computers may one day be thousands to millions of times faster than current computers, because they take advantage of the laws that govern the behavior of subatomic particles. these laws allow quantum computers to examine all possible answers to a query at one time. future uses of quantum computers could include code breaking and large database queries. quantum time waits for no cosmos the intriguing notion that time might run backwards when the universe collapses has run into difficulties. raymond laflamme, of the los alamos national laboratory in new mexico, has carried out a new calculation which suggests that the universe cannot start out uniform, go through a cycle of expansion and collapse, and end up in a uniform state. it could start out disordered, expand, and then collapse back into disorder. 
but, since the cobe data show that our universe was born in a smooth and uniform state, this symmetric possibility cannot be applied to the real universe. physicists have long puzzled over the fact that two distinct " arrows of time " both point in the same direction. in the everyday world, things wear out - - cups fall from tables and break, but broken cups never re - assemble themselves spontaneously. in the expanding universe at large, the future is the direction of time in which galaxies are further apart. many years ago, thomas gold suggested that these two arrows might be linked. that would mean that if and when the expansion of the universe were to reverse, then the everyday arrow of time would also reverse, with broken cups re - assem
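The quantum computing passage above says such machines "examine all possible answers to a query at one time". A minimal statevector sketch of what that loosely refers to, added by the editor as an illustration (not taken from the source): applying a Hadamard gate to each of n qubits turns |0...0> into an equal superposition of all 2**n basis states, although a measurement still yields only one outcome.

```python
import math

def uniform_superposition(n_qubits):
    """Amplitudes after applying a Hadamard to each of n qubits starting from |0...0>.

    H|0> = (|0> + |1>)/sqrt(2); tensoring n copies gives every basis state
    the same amplitude 1/sqrt(2**n).
    """
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

def measurement_probabilities(state):
    # Born rule: probability of each outcome is |amplitude|**2.
    return [abs(a) ** 2 for a in state]

state = uniform_superposition(3)
probs = measurement_probabilities(state)
# All 8 outcomes are equally likely and the probabilities sum to 1;
# the speedup claims in the text come from interference between these
# amplitudes in a real algorithm, not from reading all branches at once.
```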
subdomain_quantum_simulation
0.699247
512
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
24
0.6
2025-12-25T18:39:17.965124
apart. many years ago, thomas gold suggested that these two arrows might be linked. that would mean that if and when the expansion of the universe were to reverse, then the everyday arrow of time would also reverse, with broken cups re - assembling themselves. more recently, these ideas have been extended into quantum physics. there, the arrow of time is linked to the so - called " collapse of the wave function ", which happens, for example, when an electron wave moving through a tv tube collapses into a point particle on the screen of the tv. some researchers have tried to make the quantum description of reality symmetric in time, by including both the original state of the system ( the tv tube before the electron passes through ) and the final state ( the tv tube after the electron has passed through ) in one mathematical description. murray gell - mann and james hartle recently extended this idea to the whole universe. they argued that if, as many cosmologists believe likely, the universe was born in a big bang, will expand out for a finite time and then recollapse into a big crunch, the time - neutral quantum theory could describe time running backwards in the contracting half of its life. unfortunately, laflamme has now shown that this will not work. he has proved that if there are only small inhomogeneities present in the big bang, then they must get larger throughout the lifetime of the universe, in both the expanding and the contracting phases. " a low entropy universe at the big bang cannot come back to low entropy at the big crunch " ( classical and quantum gravity, vol 10 p l79 ). he has found time - asymmetric solutions to the equations - - but only if both big bang and big crunch are highly disordered, with the universe more ordered in the middle of its life. observations of the cosmic microwave background radiation show that the universe emerged from the big bang in a very smooth and uniform state. this rules out the time - symmetric solutions. 
is that even if the present expansion of the universe does reverse, time will not run backwards and broken cups will not start re -
subdomain_quantum_field_theory
0.622868
433
HuggingFaceFW/fineweb-edu
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
25
0.6
2025-12-25T18:39:17.966797
exchange of energy. when we put warm wet coffee beans in a room with very cold surfaces we call it freeze - drying. the moisture leaves the coffee beans and accumulates on the cold surfaces. there is a huge exchange of energy. the greater the rate of energy exchange the greater the rate of moisture movement. drying cannot happen without an exchange of energy. when my mom and dad bought their first house in toronto, canada in 1957 there was no insulation in the walls and the house was leaky to air — it had a high air change driven by a traditional chimney. we lived in a 1,200 square foot house and in january, when the outside temperature dropped to 0 degrees f., momma cranked up the 300,000 btu oil furnace to maintain an interior temperature of 70 degrees f. the energy flow across the building enclosure was enormous, but oil was cheap, and we were comfortable and happy. the energy flow was so enormous the building enclosure was simultaneously kiln dried and freeze - dried. in fact, the drying potential was so high, we were uncomfortably dry. as a result poppa insisted that the furnace have a newfangled gadget attached to it — called a humidifier. how things have changed. well what changed? we ’ ve begun to insulate — and insulate exceptionally well — and we ’ re getting the assemblies “ tighter ” to air change and convection. that results in two things — less energy exchange therefore less drying potential — and things on the exterior side of the enclosure are colder in the winter. things being colder on the outside lead to something most folks don ’ t consider. many building materials are hygroscopic ( figure 1 ). this means they absorb moisture based on relative humidity. even more strangely, they don ’ t care about vapor pressure except if it affects relative humidity. this is a big deal. in fact it is a huge deal. figure 1 : sorption curve for common building materials — note that moisture content goes up as relative humidity goes up.
there is no temperature dependence or vapor pressure dependence except where temperature affects relative humidity or where vapor pressure affects relative humidity. quick, snap quiz, psychrometric chart stuff …. as the temperature drops, and vapor pressure is kept constant, what happens to the relative humidity? buzz / clang / bell. yes, folks, you are correct, the relative humidity goes up. the implications are staggering. just making things on the outside of your building cold, hygroscopic things, makes them wetter
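The snap quiz above can be checked numerically. This is an editor-added sketch; it assumes the Magnus approximation for saturation vapor pressure over water, which the article itself does not quote.

```python
import math

def saturation_vapor_pressure_pa(temp_c):
    """Magnus approximation (assumed here), roughly valid for -40..50 degC over water."""
    return 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def relative_humidity(vapor_pressure_pa, temp_c):
    # RH is the actual vapor pressure divided by the saturation value at that temperature.
    return vapor_pressure_pa / saturation_vapor_pressure_pa(temp_c)

# Hold the actual vapor pressure constant and drop the temperature:
p_v = 1000.0  # Pa, an arbitrary fixed moisture level
rh_warm = relative_humidity(p_v, 20.0)
rh_cold = relative_humidity(p_v, 5.0)
# rh_cold > rh_warm: at the same vapor pressure, colder surfaces sit at
# higher relative humidity, so hygroscopic materials on the cold side
# of the enclosure absorb more moisture, exactly the article's point.
```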
subdomain_quantum_thermodynamics
0.603247
512
HuggingFaceFW/fineweb-edu
<urn:uuid:2b82307d-66d9-4c83-aa8d-50b37690a3e2>
1
0.6
2025-12-25T18:39:18.025736
the national science foundation available languages : english, spanish this classroom - tested learning module gives a condensed, easily - understood view of the development of atomic theory from the late 19th through early 20th century. the key idea was the discovery that the atom is not an " indivisible " particle, but consists of smaller constituents : the proton, neutron, and electron. it discusses the contributions of john dalton, j. j. thomson, ernest rutherford, and james chadwick, whose experiments revolutionized the world view of atomic structure. see related materials for a link to part 2 of this series. atomic structure, cathode ray experiment, electron, helium atom, history of atom, history of the atom, hydrogen atom, neutron, proton metadata instance created july 12, 2011 by caroline hall october 10, 2012 by caroline hall last update when cataloged : january 1, 2006 aaas benchmark alignments ( 2008 version ) 4. the physical setting 4d. the structure of matter 6 - 8 : 4d / m1a. all matter is made up of atoms, which are far too small to see directly through a microscope. 9 - 12 : 4d / h1. atoms are made of a positively charged nucleus surrounded by negatively charged electrons. the nucleus is a tiny fraction of the volume of an atom but makes up almost all of its mass. the nucleus is composed of protons and neutrons which have roughly the same mass but differ in that protons are positively charged while neutrons have no electric charge. 9 - 12 : 4d / h2. the number of protons in the nucleus determines what an atom ' s electron configuration can be and so defines the element. an atom ' s electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. atoms form bonds to other atoms by transferring or sharing electrons. 10. historical perspectives 10f. understanding fire 9 - 12 : 10f / h1. 
in the late 1700s and early 1800s, the idea of atoms reemerged in response to questions about the structure of matter, the nature of fire, and the basis of chemical phenomena. 9 - 12 : 10f / h3. in the early 1800s, british chemist and physicist john dalton united the concepts of atoms and elements. he proposed two ideas that laid the groundwork for modern chemistry : first, that elements are formed from small, indivisible particles called atoms, which are identical for a given element but different from any other element ; and second, that chemical compounds are
subdomain_quantum_materials
0.677499
512
HuggingFaceFW/fineweb-edu
<urn:uuid:e5d364b6-d557-47e6-b078-62ea4b57c2d1>
0
0.6
2025-12-25T19:01:59.564756
proposed two ideas that laid the groundwork for modern chemistry : first, that elements are formed from small, indivisible particles called atoms, which are identical for a given element but different from any other element ; and second, that chemical compounds are formed from atoms by combining a definite number of each type of atom to form one molecule of the compound. 9 - 12 : 10f / h4. dalton figured out how the relative weights of the atoms could be determined experimentally. his idea that every substance had a unique atomic composition provided an explanation for why substances were made up of elements in specific proportions. this resource is part of a physics front topical unit. topic : particles and interactions and the standard model unit title : history and discovery this classroom - tested learning module gives a condensed, easily - understood view of the development of atomic theory from the late 19th through early 20th century. the key idea was the discovery that the atom is not an " indivisible " particle, but consists of smaller constituents : the proton, neutron, and electron. it discusses the contributions of john dalton, j. j. thomson, ernest rutherford, and james chadwick, whose experiments revolutionized the world view of atomic structure. % 0 electronic source % a carpi, anthony % d january 1, 2006 % t visionlearning : atomic theory i % i visionlearning % v 2013 % n 21 may 2013 % 8 january 1, 2006 % 9 text / html % u http://www.visionlearning.com/library/module_viewer.php?mid=50&l= disclaimer : compadre offers citation styles as a guide only. we cannot offer interpretations about citations as this is an automated procedure. please refer to the style manuals in the citation source information area for clarifications.
subdomain_quantum_materials
0.656569
371
HuggingFaceFW/fineweb-edu
<urn:uuid:e5d364b6-d557-47e6-b078-62ea4b57c2d1>
1
0.6
2025-12-25T19:01:59.565435
an electron is a subatomic particle of spin 1/2. it couples with photons and, thus, is electrically charged. it is a lepton with a rest mass of 9.109 × 10^−31 kg and an electric charge of −1.602 × 10^−19 C, which is the smallest known charge possible for an isolated particle ( confined quarks have fractional charge ). the electric charge of the electron e is used as a unit of charge in much of physics. electron pairs within an orbital system have opposite spins due to the pauli exclusion principle ; this characteristic spin pairing allows electrons to exist in the same quantum orbital, as the opposing magnetic dipole moments induced by each of the electrons ensure that they are attracted together. current theories consider the electron as a point particle, as no evidence for internal structure has been observed. as a theoretical construct, electrons have been able to explain other observed phenomena, such as the shell - like structure of an atom, energy distribution around an atom, and energy beams ( electron and positron beams ). - ↑ massimi, m. ( 2005 ). pauli ' s exclusion principle, the origin and validation of a scientific principle. cambridge university press. pp. 7 – 8 - ↑ mauritsson, j.. " electron filmed for the first time ever ". lunds universitet. retrieved 2008-09-17. http://www.atomic.physics.lu.se/research/attosecond_physics - ↑ chao, a. w. ; tigner, m. ( 1999 ). handbook of accelerator physics and engineering. world scientific. pp. 155, 188. isbn 981-02-3500-3.
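As a quick arithmetic check on the two constants quoted above (an editor-added aside): dividing the charge magnitude by the rest mass reproduces the classic Thomson charge-to-mass ratio of roughly 1.76 × 10^11 C/kg.

```python
# Values exactly as quoted in the passage above.
m_e = 9.109e-31   # kg, electron rest mass
e   = 1.602e-19   # C, magnitude of the electron charge

# e/m ratio, historically the first property of the electron ever measured.
charge_to_mass = e / m_e   # ~1.76e11 C/kg
```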
subdomain_quantum_materials
0.720369
354
HuggingFaceFW/fineweb-edu
<urn:uuid:e1790b63-dd2a-43d8-ae60-c3a435647df2>
0
0.6
2025-12-25T19:01:59.566990
heat is a sad fact of life for current generation electronics. any android, iphone, or blackberry user can tell you that smartphones tend to get pretty hot at times. and by today ' s standards a balmy 85 degrees celsius, while hot enough to cook an egg, is a pretty " good " operating temperature for a high - powered pc graphics processing unit. but that could all soon change, according to the results of a new study by researchers at the university of illinois. examining graphene transistors, a team led by mechanical science and engineering professor william king [ profile ] and electrical and computer engineering professor eric pop [ profile ] made a remarkable discovery - - graphene appears to self - cool. i. what is graphene? graphene is somewhat like a miniature " fence " of carbon. the material consists of a single - atom thick layer composed of hexagonal units. at each point of the hexagon sits a carbon atom that is bonded to its three close neighbors. the material behaves like a semiconductor, despite being made of organic atoms. it offers remarkable performance at an incredibly small scale, thus the electronics industry views it as a potential material to power electronic devices of the future. a variety of methods exist for producing graphene. the earliest method was an exfoliation technique that involved stripping individual graphene layers off a layer of graphite ( the material found in pencil lead ) - - this technique ( as of 2008 ) cost as much as $ 100m usd to produce a single cubic centimeter of material. however, rapid advances in production have allowed manufacturers to begin scaling up production to the point where tons of exfoliated graphene can now be produced. techniques promise to drop the price even further. one method, epitaxial growth on silicon cost $ 100 per cubic centimeter in 2009. its limitation is that, obviously, it requires silicon ( eliminating some desirable properties like flexibility ). 
south korean researchers have tested another promising method, nickel metal transfer. graphene is fascinating from a physics perspective. in 2005 physicists at the university of manchester and the philip kim group from columbia university demonstrated that quasiparticles inside graphene were massless dirac fermions. these unusual particles help give rise to the material ' s unique characteristics. ii. graphene as a self - cooling device despite the extreme interest in the material, a great deal of mystery still surrounds graphene. because it is so extremely thin, it is difficult to test and measure accurately certain properties of the material. overcoming technical challenges,
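The massless Dirac fermions mentioned above can be seen in the standard nearest-neighbour tight-binding model of graphene's hexagonal lattice. This is an editor-added textbook sketch, not something from the article, and the hopping energy t = 2.8 eV is an assumed, commonly quoted value: the band energy is E(k) = ±t|f(k)|, and |f(k)| vanishing at the Dirac points is what makes the low-energy quasiparticles behave like massless relativistic particles.

```python
import cmath
import math

T_HOP = 2.8  # eV, assumed nearest-neighbour hopping energy

# Vectors to the three nearest-neighbour carbon atoms,
# in units of the carbon-carbon distance.
DELTAS = [(0.0, 1.0),
          (math.sqrt(3) / 2, -0.5),
          (-math.sqrt(3) / 2, -0.5)]

def band_energy(kx, ky, t=T_HOP):
    """Magnitude of the tight-binding band energy E(k) = t * |f(k)|."""
    f = sum(cmath.exp(1j * (kx * dx + ky * dy)) for dx, dy in DELTAS)
    return t * abs(f)

# At the Dirac point K = (4*pi/(3*sqrt(3)), 0) the gap closes:
# f(K) = 1 + exp(2i*pi/3) + exp(-2i*pi/3) = 0.
k_dirac = (4 * math.pi / (3 * math.sqrt(3)), 0.0)
gap_at_k = band_energy(*k_dirac)      # ~0: conduction and valence bands touch
gap_at_gamma = band_energy(0.0, 0.0)  # 3*t at the zone centre
```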
subdomain_quantum_materials
0.653176
512
HuggingFaceFW/fineweb-edu
<urn:uuid:65b55048-f9ba-4db3-bbce-136fce4b77fb>
0
0.6
2025-12-25T19:01:59.601499