Datasets:
text
stringlengths 558
3.17k
| subdomain_id
stringclasses 10
values | similarity_score
float64 0.6
0.78
| token_count
int64 256
512
| source_dataset
stringclasses 1
value | source_id
stringlengths 47
47
| chunk_index
int64 0
216
| filtering_threshold
float64 0.6
0.6
| created_at
stringdate 2025-12-22 14:56:09
2025-12-25 11:19:33
|
|---|---|---|---|---|---|---|---|---|
wikipedia sobre fisica de particulas rapidinho. me falaram que a definicao de fisica de particulas da wikipedia era muito ruim. e de fato, era assim : particle physics is a branch of physics that studies the elementary particle | elementary subatomic constituents of matter and radiation, and their interactions. the field is also called high energy physics, because many elementary particles do not occur under ambient conditions on earth. they can only be created artificially during high energy collisions with other particles in particle accelerators. particle physics has evolved out of its parent field of nuclear physics and is typically still taught in close association with it. scientific research in this area has produced a long list of particles. mas hein? particulas que so podem ser criadas em aceleradores? fisica de particulas e ensinada junto com fisica nuclear? a pesquisa produz particulas ( essa e otima! )? em que mundo essa pessoa vive? reescrevi : particle physics is a branch of physics that studies the existence and interactions of particles, which are the constituents of what is usually referred as matter or radiation. in our current understanding, particles are excitations of quantum fields and interact following their dynamics. most of the interest in this area is in fundamental fields, those that cannot be described as a bound state of other fields. the set of fundamental fields and their dynamics are summarized in a model called the standard model and, therefore, particle physics is largely the study of the standard model particle content and its possible extensions. eu acho que ficou bem melhor. vamos ver em quanto tempo algum editor esquentado da wikipedia vai demorar para reverter. atualmente esta um saco participar da wikipedia por causa dessas pessoas.
|
subdomain_quantum_field_theory
| 0.698267
| 400
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:e7f0a003-07f1-4148-a77c-6e0cb215fc0e>
| 0
| 0.6
|
2025-12-22T14:56:09.458937
|
belgian physicist francois englert, left, speaks with british physicist … ( fabrice coffrini / afp / getty … ) for physicists, it was a moment like landing on the moon or the discovery of dna. the focus was the higgs boson, a subatomic particle that exists for a mere fraction of a second. long theorized but never glimpsed, the so - called god particle is thought to be key to understanding the existence of all mass in the universe. the revelation wednesday that it - - or some version of it - - had almost certainly been detected amid more than hundreds of trillions of high - speed collisions in a 17 - mile track near geneva prompted a group of normally reserved scientists to erupt with joy. for the record los angeles times friday, july 06, 2012 home edition main news part a page 4 news desk 1 inches ; 48 words type of material : correction large hadron collider : in some copies of the july 5 edition, an article in section a about the machine used by physicists at the european organization for nuclear research to search for the higgs boson referred to the $ 5 - billion large hadron collider. the correct amount is $ 10 billion. peter higgs, one of the scientists who first hypothesized the existence of the particle, reportedly shed tears as the data were presented in a jampacked and applause - heavy seminar at cern, the european organization for nuclear research. " it ' s a gigantic triumph for physics, " said frank wilczek, an mit physicist and nobel laureate. " it ' s a tremendous demonstration of a community dedicated to understanding nature. " the achievement, nearly 50 years in the making, confirms physicists ' understanding of how mass - - the stuff that makes stars, planets and even people - - arose in the universe, they said. it also points the way toward a new path of scientific inquiry into the mass - generating mechanism that was never before possible, said ucla physicist robert cousins, a member of one of the two research teams that has been chasing the higgs boson at cern. " i compare it to turning the corner and walking around a building - - there ' s a whole new set of things you can look at, " he said. " it is a beginning, not an end. " leaders of the two teams reported independent results that suggested the existence of a previously unseen subatomic particle with a mass of about 125 to 126 billion electron volts. both groups got
|
subdomain_quantum_field_theory
| 0.624252
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fb237ffb-9cc0-4077-99d5-56c6fce1ca5f>
| 0
| 0.6
|
2025-12-22T14:56:09.510751
|
varying densities. these particles within the matter are kinetic and in constant motion. the slower the motion of the particles, the more dense the matter becomes. also, as the particles are pushed closer together, the matter also becomes more dense. the best way to slow down kinetic molecules is to cool the matter. the best way to get them to move closer together is to add pressure to the matter. inversely, when you remove the pressure or heat any material, the molecules within the material moves faster and further apart, thus making the material less dense. the least dense form of matter is, of course, gas. if a gas is cooled and compressed, at some point it will become a liquid. if that liquid is then cooled further, then at some point it will become a solid. also, when you take the pressure off any gas or liquid, that material will grow less dense and expand. this is essentially what happens to the gaseous molecules of our atmosphere. our atmosphere contains approximately 79 % nitrogen and 21 % oxygen, a constant ratio until you reach an altitude of about 270, 000 feet. so the question that always comes up is ; " if i have 21 % oxygen at sea level and 21 % at 40, 000 feet, why do i succumb to the effects of hypoxia within 20 seconds at that altitude? " the answer is, atmospheric pressure! if you could picture all the gaseous nitrogen and oxygen molecules in the atmosphere, they would stack up from the surface of the earth to the fringe of space. all these molecules stacking on top each other create a great deal of weight, or pressure. at sea level, one square - inch of any surface has about 15 pounds of air sitting on top of it. at 18, 000 feet, that same square inch has only 7. 5 pounds per square - inch ( psi ) exerted on it. what has caused this atmospheric pressure drop? the answer is simple : there is more air stacked up at sea level than above 18, 000 feet, and therefore, more weight. as you recall, when molecules are subjected to this pressure, they are going to move closer together. this will make the air more dense with oxygen and nitrogen molecules. for example, if at sea level you take in a breath of air that has an atmospheric pressure of 15 psi, then that air may contain 500 billion molecules of oxygen ( this a fictitious number to be used only as an example ) ; if you go to 18, 000 feet and take the same breath where atmospheric pressure
|
subdomain_quantum_materials
| 0.602904
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:ac6a19cf-dd31-4352-bb69-1c00f45050a7>
| 1
| 0.6
|
2025-12-22T14:56:09.536008
|
menezes, pradeep l and kishore, * and kailas, satish v ( 2006 ) studies on friction and transfer layer using inclined scratch. in : tribology international, 39 ( 2 ). pp. 175 - 183. restricted to registered users only download ( 562kb ) | request a copy friction influences the nature of transfer layer formed at the interface between die and sheet during forming. in the present investigation, basic studies were conducted using ' inclined scratch test ' to understand the mechanism of transfer layer formation during sliding of pins made of an al - mg alloy on en8 steel flats of different surface roughness under dry and lubricated conditions. the surfaces produced can be categorized into three different types : ( a ) uni - directional ( b ) 8 - ground and ( c ) random. rubbing the en8 flat in a uni - directional manner and a criss - cross manner on emery sheets produced the uni - directional and 8 ground surfaces. the random surfaces were produced by polishing the en8 flats using various abrasive powders. the influence of the ' nature of surface roughness ' on material transfer and coefficient of friction were investigated. scanning electron microscopy studies were performed on the contact surfaces of the al - mg alloy pins and en8 steel flats to reveal the morphology of the transfer layer obtained. it was seen that the transfer layer is dependant on the coefficient of friction. the coefficient of friction, which has two components - the adhesion component and the plowing component, is controlled by the ' nature of surface '. a surface that promotes plane strain conditions near the surfaces increases the plowing component of friction. | item type : | | journal article | | additional information : | | copyright for this article belongs to elsevier. | | keywords : | | friction ; nature of surface ; inclined scratch | | department / centre : | | division of mechanical sciences > materials engineering ( formerly metallurgy ) division of mechanical sciences > mechanical engineering | date deposited : | | 19 jan 2006 | | last modified : | | 19 sep 2010 04 : 23 | actions ( login required )
|
subdomain_quantum_materials
| 0.62561
| 433
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:29ad99f8-17dd-4bf4-9973-88d9fa050e74>
| 0
| 0.6
|
2025-12-22T14:56:10.286286
|
development of several chromatographic methods : paper chromatography, gas chromatography, and what would become known as high performance liquid chromatography. since then, the technology has advanced rapidly. researchers found that the main principles of tsvet ' s chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules. chromatography terms - the analyte is the substance to be separated during chromatography. - analytical chromatography is used to determine the existence and possibly also the concentration of analyte ( s ) in a sample. - a bonded phase is a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing. - a chromatogram is the visual output of the chromatograph. in the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. - plotted on the x - axis is the retention time and plotted on the y - axis a signal ( for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors ) corresponding to the response created by the analytes exiting the system. in the case of an optimal system the signal is proportional to the concentration of the specific analyte separated. - a chromatograph is equipment that enables a sophisticated separation e. g. gas chromatographic or liquid chromatographic separation. - chromatography is a physical method of separation that distributes components to separate between two phases, one stationary ( stationary phase ), while the other ( the mobile phase ) moves in a definite direction. - the eluate is the mobile phase leaving the column. - the eluent is the solvent that carries the analyte. - an eluotropic series is a list of solvents ranked according to their eluting power. - an immobilized phase is a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing. - the mobile phase is the phase that moves in a definite direction. it may be a liquid ( lc and capillary electrochromatography ( cec ) ), a gas ( gc ), or a supercritical fluid ( supercritical - fluid chromatography,
|
subdomain_quantum_materials
| 0.604772
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:51ca50ec-be73-4d62-b6f9-64c6eb0ad47f>
| 1
| 0.6
|
2025-12-22T14:56:10.830449
|
maximize the effect of this difference. in many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than “ peaks ”. because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations. techniques by physical state of mobile phase gas chromatography gas chromatography ( gc ), also sometimes known as gas - liquid chromatography, ( glc ), is a separation technique in which the mobile phase is a gas. gas chromatography is always carried out in a column, which is typically " packed " or " capillary " ( see below ). gas chromatography is based on a partition equilibrium of analyte between a solid stationary phase ( often a liquid silicone - based material ) and a mobile gas ( most often helium ). the stationary phase is adhered to the inside of a small - diameter glass tube ( a capillary column ) or a solid matrix inside a larger metal tube ( a packed column ). it is widely used in analytical chemistry ; though the high temperatures used in gc make it unsuitable for high molecular weight biopolymers or proteins ( heat denatures them ), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. it is also used extensively in chemistry research. liquid chromatography liquid chromatography ( lc ) is a separation technique in which the mobile phase is a liquid. liquid chromatography can be carried out either in a column or a plane. present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high performance liquid chromatography ( hplc ). in hplc the sample is forced by a liquid at high pressure ( the mobile phase ) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. hplc is historically divided into two
|
subdomain_quantum_materials
| 0.600338
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:51ca50ec-be73-4d62-b6f9-64c6eb0ad47f>
| 5
| 0.6
|
2025-12-22T14:56:10.836880
|
a liquid at high pressure ( the mobile phase ) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. hplc is historically divided into two different sub - classes based on the polarity of the mobile and stationary phases. methods in which the stationary phase is more polar than the mobile phase ( e. g., toluene as the mobile phase, silica as the stationary phase ) are termed normal phase liquid chromatography ( nplc ) and the opposite ( e. g., water - methanol mixture as the mobile phase and c18 = octadecylsilyl as the stationary phase ) is termed reversed phase liquid chromatography ( rplc ). ironically the " normal phase " has fewer applications and rplc is therefore used considerably more. specific techniques under this broad heading are listed below. affinity chromatography affinity chromatography is based on selective non - covalent interaction between an analyte and specific molecules. it is very specific, but not very robust. it is often used in biochemistry in the purification of proteins bound to tags. these fusion proteins are labeled with compounds such as his - tags, biotin or antigens, which bind to the stationary phase specifically. after purification, some of these tags are usually removed and the pure protein is obtained. affinity chromatography often utilizes a biomolecule ' s affinity for a metal ( zn, cu, fe, etc. ). columns are often manually prepared. traditional affinity columns are used as a preparative step to flush out unwanted biomolecules. however, hplc techniques exist that do utilize affinity chromatogaphy properties. immobilized metal affinity chromatography ( imac ) is useful to separate aforementioned molecules based on the relative affinity for the metal ( i. e. dionex imac ). often these columns can be loaded with different metals to create a column with a targeted affinity. supercritical fluid chromatography supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure. techniques by separation mechanism ion exchange chromatography ion exchange chromatography ( usually referred to as ion chromatography ) uses an ion exchange mechanism to separate analytes based on their respective charges. it is usually performed in columns but can also be useful in
|
subdomain_quantum_materials
| 0.608798
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:51ca50ec-be73-4d62-b6f9-64c6eb0ad47f>
| 6
| 0.6
|
2025-12-22T14:56:10.838078
|
deep - space communication improved with electromagnetic radiation antenna - robert c. dye - technology transfer - ( 505 ) 667 - 3404 electromagnetic radiation antenna has potential for deep - space communication - directed energy - long - range communications - medicine ( oncology ) - radar imaging applications are countermeasure - resistant - communications can be spatially - encrypted - 4 - dimensional volumes of energy can be aimed at a single space - time point for directed energy applications - nonspherical decay of the cusp enables low - power communications and propagation over great distances los alamos national laboratory ( lanl ) researchers have developed the lightslinger, a completely new type of antenna that produces tightly - focused packets of electromagnetic radiation fundamentally different from the emissions of conventional transmitters. the device has potential applications in radar, directed - energy ( non - kinetic kill ), secure communications, ultra - long - range communications ( e. g., deep - space ), medicine ( oncology ) and astrophysics. the lightslinger functions by producing a moving polarization pattern in a ring of alumina. by careful timing of voltages applied to electrodes that surround the alumina, the polarization pattern can be made to move superluminally, i. e., faster than the speed of light in a vacuum. nobel laureate vitaly ginzberg showed both that such superluminal polarization patterns do not violate the principles of special relativity and that they emit electromagnetic radiation. once a source travels faster than the waves that it emits, it can make contributions at multiple retarded times to a signal received instantaneously at a distance. this effect is already well known in acoustics ; when a supersonic airplane accelerates through the speed of sound, a violent “ sonic boom ” is heard many miles away, even if the airplane itself is rather quiet. the lightslinger enables the same thing to be done with electromagnetic radiation ; i. e., a relatively low - power source can make an “ electromagnetic boom ”, an intense concentration of radiowaves at a great distance. the “ electromagnetic boom ” is due to temporal focusing, that is, focusing in the time domain. because of this effect, part of the emitted radiation possesses an intensity that decays with distance r as 1 / r rather than as the conventional inverse square law, 1 / r2. these nonspherically - decaying wavepackets represent a game - changing technology in the applications of electromagnetic radiation. development stage : working prototype patent status : patent pending
|
subdomain_quantum_optics
| 0.615479
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:79bc5d65-38cf-489f-b8c5-6800ff88c6f7>
| 0
| 0.6
|
2025-12-22T14:56:10.885361
|
refraction and acceleration name : christopher s. why is it that when light travels from a more dense to a less dense medium, its speed is higher? i ' ve read answers to this question in your archives but, sadly, still don ' t get it. one answer ( jasjeet s bagla ) says that we must not ask the question because light is massless, hence questions of acceleration don ' t make sense. it does, however, seem to be ok to talk about different speeds of light. if you start at one speed and end at a higher one, why is one not allowed to talk about acceleration? bagla goes on to say that it depends on how the em fields behave in a given medium. it begs the question : what is it about, say, perspex and air that makes light accelerate, oops, travel at different speeds? if you ' re dealing with the same ray of light, one is forced to speak of acceleration, no? what other explanation is there for final velocity > initial velocity? arthur smith mentioned a very small " evanescent " component that travels ahead at c. where can i learn more about this? sorry for the long question. i understand that f = ma and if there is no m, you cannot talk about a, but, again, you have one velocity higher than another for the same thing. i need to know more than " that ' s just the way em fields are! " an explanation that satisfies me relates to travel through an interactive medium. when light interacts with an atom, the photon of light is absorbed and then emitted. for a moment, the energy of the light is within the atom. this causes a slight delay. light travels at the standard speed of light until interacting with another atom. it is absorbed and emitted, causing another slight delay. the average effect is taking more time to travel a meter through glass than through air. this works like a slower speed. an individual photon does not actually slow down. it gets delayed repeatedly by the atoms of the medium. a more dense medium has more atoms per meter to dr. ken mellendorf illinois central college congratulations! on not being willing to accept " that is just the way em fields are! " the answer to your inquiry is not all that simple ( my opinion ), but i won ' t try to do so in the limited space allowed here, not to say my own limitations of knowledge. like so many " simple " physics questions, i find the most lucid
|
subdomain_quantum_optics
| 0.616927
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>
| 0
| 0.6
|
2025-12-22T14:56:10.927133
|
is not all that simple ( my opinion ), but i won ' t try to do so in the limited space allowed here, not to say my own limitations of knowledge. like so many " simple " physics questions, i find the most lucid, but accurate, explanation in richard feynman ' s, " lectures on physics " which most libraries will have. volume i, chapter 31 - 1 through 31 - 6, which describes refraction, dispersion, diffraction. the " answer " has to do with how matter alters the electric field of incident radiation, but i won ' t pretend to be able to do a better job than feynman. the answer is that you are not dealing with the same ray of light. in vacuum a photon just keeps going at the speed of light. in a medium, however, it interacts with the atoms, often being absorbed while bumping an atomic or molecular motion into a higher energy state. the excited atom / molecule then can jump to a lower energy state, emitting a photon while doing so. this can obviously make light appear to travel slower in a in detail, it is a very complicated question, requiring at least a graduate course in electromagnetism to begin to understand. why, for example do the emitted photons tend to travel in the same direction? best, richard j. plano click here to return to the physics archives update : june 2012
|
subdomain_quantum_optics
| 0.64135
| 290
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>
| 1
| 0.6
|
2025-12-22T14:56:10.927778
|
topics covered : ideal solutions instructor / speaker : moungi bawendi, keith nelson the following content is provided under a creative commons license. your support will help mit opencourseware continue to offer high quality educational resources for free. to make a donation or view additional materials from hundreds of mit courses, visit mit opencourseware at ocw. mit. edu. professor : so. in the meantime, you ' ve started looking at two phase equilibrium. so now we ' re starting to look at mixtures. and so now we have more than one constituent. and we have more than one phase present. right? so you ' ve started to look at things that look like this, where you ' ve got, let ' s say, two components. both in the gas phase. and now to try to figure out what the phase equilibria look like. of course it ' s now a little bit more complicated than what you went through before, where you can get pressure temperature phase diagrams with just a single component. now we want to worry about what ' s the composition. of each of the components. in each of the phases. and what ' s the temperature and the pressure. total and partial pressures and all of that. so you can really figure out everything about both phases. and there are all sorts of important reasons to do that, obviously lots of chemistry happens in liquid mixtures. some in gas mixtures. some where they ' re in equilibrium. all sorts of chemical processes. distillation, for example, takes advantage of the properties of liquid and gas mixtures. where one of them might be richer, will be richer, and the more volatile of the components. that can be used as a basis for purification. you mix ethanol and water together so you ' ve got a liquid with a certain composition of each. the gas is going to be richer and the more volatile of the two, the ethanol. so in a distillation, where you put things up in the gas, more of the ethanol comes up. you could then collect that gas, right? and re - condense it, and make a new liquid. which is much richer in ethanol than the original liquid was. then you could make, then you could put some of them up into the gas phase. where it will be still richer in ethanol. and then you could collect that and repeat the process. so the point is that properties of liquid gas, two - component or multi - component mixtures like this can
|
subdomain_quantum_materials
| 0.629434
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 0
| 0.6
|
2025-12-22T14:56:11.010099
|
. and that ' s what i want to spend some of today doing. is just, walking through what ' s happening physically, with a container with a mixture of the two. and how does that correspond to what gets read off the diagram under different conditions. so. let ' s just start somewhere on a phase diagram like this. let ' s start up here at some point one, so we ' re in the pure - well, not pure, you ' re in the all liquid phase. it ' s still a mixture. it ' s not a pure substance. pa star, pb star. there ' s the gas phase. so, if we start at one, and now there ' s some total pressure. and now we ' re going to reduce it. what happens? we start with a pure - with an all - liquid mixture. no gas. and now we ' re going to bring down the pressure. allowing some of the liquid to go up into the gas phase. so, we can do that. and once we reach point two, then we find a coexistence curve. now the liquid and gas are going to coexist. so this is the liquid phase. and that means that this must be xb. and it ' s xb at one, but it ' s also xb at two, and i want to emphasize that. so let ' s put our pressure for two. and if we go over here, this is telling us about the mole fraction in the gas phase. that ' s what these curves are, remember. so this is the one that ' s showing us the mole fraction in the liquid phase. this nonlinear one in the gas phase. so that means just reading off it, this is xb, that ' s the liquid mole fraction. here ' s yb. the gas mole fraction. they ' re not the same, right, because of course the components have different volatility. a ' s more volatile. so that means that the mole fraction of b in the liquid phase is higher than the mole fraction of b in the gas phase. because a is the more volatile component. so more, relatively more, of a, the mole fraction of a is going to be higher up in the gas phase. which means the mole fraction of b is lower in the gas phase. so, yb less than xb if a is more volatile. ok, so now what ' s happening physically? well, we started at a point where we only had the
|
subdomain_quantum_materials
| 0.628322
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 3
| 0.6
|
2025-12-22T14:56:11.017120
|
which means the mole fraction of b is lower in the gas phase. so, yb less than xb if a is more volatile. ok, so now what ' s happening physically? well, we started at a point where we only had the liquid present. so at our initial pressure, we just have all liquid. there ' s some xb at one. that ' s all there is, there isn ' t any gas yet. now, what happened here? well, now we lowered the pressure. so you could imagine, well, we made the box bigger. now, if the liquid was under pressure, being squeezed by the box, right then you could make the box a little bit bigger. and there ' s still no gas. that ' s moving down like this. but then you get to a point where there ' s just barely any pressure on top of the liquid. and then you keep expanding the box. now some gas is going to form. so now we ' re going to go to our case two. we ' ve got a bigger box. and now, right around where this was, this is going to be liquid. and there ' s gas up here. so up here is yb at pressure two. here ' s xb at pressure two. liquid and gas. so that ' s where we are at point two here. now, what happens if we keep going? let ' s lower the pressure some more. well, we can lower it and do this. but really if we want to see what ' s happening in each of the phases, we have to stay on the coexistence curves. those are what tell us what the pressures are. what the partial pressure are going to be in each of the phases. in each of the two, in the liquid and the gas phases. so let ' s say we lower the pressure a little more. what ' s going to happen is, then we ' ll end up somewhere over here. in the liquid, and that ' ll correspond to something over here in the gas. so here ' s three. so now we ' re going to have, that ' s going to be xb at pressure three. and over here is going to be yb at pressure three. and all we ' ve done, of course, is we ' ve just expanded this further. so now we ' ve got a still taller box. and the liquid is going to be a little lower because some of it has evaporated, formed the gas phase
|
subdomain_quantum_materials
| 0.606147
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 4
| 0.6
|
2025-12-22T14:56:11.018188
|
' ve done, of course, is we ' ve just expanded this further. so now we ' ve got a still taller box. and the liquid is going to be a little lower because some of it has evaporated, formed the gas phase. so here ' s xb at three. here ' s yb at three, here ' s our gas phase. now we could decrease even further. and this is the sort of thing that you maybe can ' t do in real life. but i can do on a blackboard. i ' m going to give myself more room on this curve, to finish this illustration. there. beautiful. so now we can lower a little bit further, and what i want to illustrate is, if we keep going down, eventually we get to a pressure where now if we look over in the gas phase, we ' re at the same pressure, mole fraction that we had originally in the liquid phase. so let ' s make four even lower pressure. what does that mean? what it means is, we ' re running out of liquid. so what ' s supposed to happen is a is the more volatile component. so as we start opening up some room for gas to form, you get more of a in the gas phase. but of course, and the liquid is richer in b. but of course, eventually you run out of liquid. you make the box pretty big, and you run out, or you have the very last drop of liquid. so what ' s the mole fraction of b in the gas phase? it has to be the same as what it started in in the liquid phase. because after all the total number of moles of a and b hasn ' t changed any. so if you take them all from the liquid and put them all up into the gas phase, it must be the same. so yb of four. once you just have the last drop. so then yb of four is basically equal to xb of one. because everything ' s now up in the gas phase. so in principle, there ' s still a tiny, tiny bit of xb at pressure four. well, we could keep lowering the pressure. we could make the box a little bigger. then the very last of the liquid is going to be gone. and what ' ll happen then is, we ' re all here. there ' s no more liquid. we ' re not going down on the coexistence curve any more. we don ' t have a liquid gas
|
subdomain_quantum_materials
| 0.608849
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 5
| 0.6
|
2025-12-22T14:56:11.019095
|
going to be gone. and what ' ll happen then is, we ' re all here. there ' s no more liquid. we ' re not going down on the coexistence curve any more. we don ' t have a liquid gas coexistence any more. we just have a gas phase. of course, we can continue to lower the pressure. and then what we ' re doing is just going down here. so there ' s five. and five is the same as this only bigger. and so forth. ok, any questions about how this works? it ' s really important to just gain facility in reading these things and seeing, ok, what is it that this is telling you. and you can see it ' s not complicated to do it, but it takes a little bit of practice. ok. now, of course, we could do exactly the same thing starting from the gas phase. and raising the pressure. and although you may anticipate that it ' s kind of pedantic, i really do want to illustrate something by it. so let me just imagine that we ' re going to do that. let ' s start all in the gas phase. up here ' s the liquid. pa star, pb star. and now let ' s start somewhere here. so we ' re down somewhere in the gas phase with some composition. so it ' s the same story, except now we ' re starting here. it ' s all gas. and we ' re going to start squeezing. we ' re increasing the pressure. and eventually here ' s one, will reach two, so of course here ' s our yb. we started with all gas, no liquid. so this is yb of one. it ' s the same as yb of two, i ' m just raising the pressure enough to just reach the coexistence curve. and of course, out here tells us xb of two, right? so what is it saying? we ' ve squeezed and started to form some liquid. and the liquid is richer in component b. maybe it ' s ethanol water again. and we squeeze, and now we ' ve got more water in the liquid phase than in the gas phase. because water ' s the less volatile component. it ' s what ' s going to condense first. so the liquid is rich in the less volatile of the components. now, obviously, we can continue in doing exactly the reverse of what i showed you. but all i want to
|
subdomain_quantum_materials
| 0.608921
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 6
| 0.6
|
2025-12-22T14:56:11.020177
|
component. it ' s what ' s going to condense first. so the liquid is rich in the less volatile of the components. now, obviously, we can continue in doing exactly the reverse of what i showed you. but all i want to really illustrate is, this is a strategy for purification of the less volatile component. once you ' ve done this, well now you ' ve got some liquid. now you could collect that liquid in a separate vessel. so let ' s collect the liquid mixture with xb of two. so it ' s got some mole fraction of b. so we ' ve purified that. but now we ' re going to start, we ' ve got pure liquid. now let ' s make the vessel big. so it all goes into the gas phase. then lower p. all gas. so we start with yb of three, which equals xb of two. in other words, it ' s the same mole fraction. so let ' s reconstruct that. so here ' s p of two. and now we ' re going to go to some new pressure. and the point is, now we ' re going to start, since the mole fraction in the gas phase that we ' re starting from is the same number as this was. so it ' s around here somewhere. that ' s yb of three equals xb of two. and we ' re down here. in other words, all we ' ve done is make the container big enough so the pressure ' s low and it ' s all in the gas phase. that ' s all we have, is the gas. but the composition is whatever the composition is that we extracted here from the liquid. so this xb, which is the liquid mole fraction, is now yb, the gas mole fraction. of course, the pressure is different. lower than it was before. great. now let ' s increase. so here ' s three. and now let ' s increase the pressure to four. and of course what happens, now we ' ve got coexistence. so here ' s liquid. here ' s gas. so, now we ' re over here again. there ' s xb at pressure four. pure still in component b. we can repeat the same procedure. collect it. all liquid, put it in a new vessel. expand it, lower the pressure, all goes back into the gas phase. do it all again. and the point is, what you ' re doing is walking along
|
subdomain_quantum_materials
| 0.606914
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 7
| 0.6
|
2025-12-22T14:56:11.021263
|
phases? so at the end of the day, you can figure out, ok, now when i reach a certain degree of purification, here ' s how much of the stuff i end up with. well, that turns out to be reasonably straightforward to do. and so what i ' ll go through is a simple mathematical derivation. and it turns out that it allows you to just read right off the diagram how much of each material you ' re going to end up with. so, here ' s what happens. this is something called the lever rule. how much of each component is there in each phase? so let ' s consider a case like this. let me draw yet once again, just to get the numbering consistent. with how we ' ll treat this. so we ' re going to start here. and i want to draw it right in the middle, so i ' ve got plenty of room. and we ' re going to go up to some pressure. and somewhere out there, now i can go to my coexistence curves. liquid. and gas. and i can read off my values. so this is the liquid xb. so i ' m going to go up to some point two, here ' s xb of two. here ' s yb of two. great. now let ' s get these written in. so let ' s just define terms a little bit. na, nb. or just our total number of moles. ng and n liquid, of course, total number of moles. in the gas and liquid phases. so let ' s just do the calculation for each of these two cases. we ' ll start with one. that ' s the easier case. because then we have only the gas. so at one, all gas. it says pure gas in the notes, but of course that isn ' t the pure gas. it ' s the mixture of the two components. so. how many moles of a? well it ' s the mole fraction of a in the gas. times the total number of moles in the gas. let me put one in here. just to be clear. and since we have all gas, the number of moles in the gas is just the total number of moles. so this is just ya at one times n total. let ' s just write that in. and of course n total is equal to na plus nb. so now let ' s look at condition two. now we have to look a little more carefully
|
subdomain_quantum_materials
| 0.610639
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 9
| 0.6
|
2025-12-22T14:56:11.023366
|
mole fraction in that case has to be the same. and what this is just telling us mathematically is, when that happens this is zero. that means i don ' t have any gas left. yeah. professor : no. because, so it ' s the mole fraction in the gas phase. but you ' ve started with some amount that it ' s only going to go down from there. professor : yeah. yeah. any other questions? ok. well, now what i want to do is just put up a slightly different kind of diagram, but different in an important way. namely, instead of showing the mole fractions as a function of the pressure. and i haven ' t written it in, but all of these are at constant temperature, right? i ' ve assumed the temperature is constant in all these things. now let ' s consider the other possibility, the other simple possibility, which is, let ' s hold the pressure constant and vary the temperature. of course, you know in the lab, that ' s usually what ' s easiest to do. now, unfortunately, the arithmetic gets more complicated. it ' s not monumentally complicated, but here in this case, where you have one linear relationship, which is very convenient. from raoult ' s law. and then you have one non - linear relationship there for the mole fraction of the gas. in the case of temperature, they ' re both, neither one is linear. nevertheless, we can just sketch what the diagram looks like. and of course it ' s very useful to do that, and see how to read off it. and i should say the derivation of the curves isn ' t particularly complicated. it ' s not particularly more complicated than what i think you saw last time to derive this. there ' s no complicated math involved. but the point is, the derivation doesn ' t yield a linear relationship for either the gas or the liquid part of the coexistence curve. ok, so we ' re going to look at temperature and mole fraction phase diagrams. again, a little more complicated mathematically but more practical in real use. and this is t. and here is the, sort of, form that these things take. so again, neither one is linear. up here, now, of course if you raise the temperatures, that ' s where you end up with gas. if you lower the temperature, you condense and get the liquid. so, this is ta star. tb star. so now i want to stick with a as
|
subdomain_quantum_materials
| 0.604529
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 12
| 0.6
|
2025-12-22T14:56:11.026264
|
course if you raise the temperatures, that ' s where you end up with gas. if you lower the temperature, you condense and get the liquid. so, this is ta star. tb star. so now i want to stick with a as the more volatile component. at constant temperature, that meant that pa star is bigger than pb star. in other words, the vapor pressure over pure liquid a is higher than the vapor pressure over pure liquid b. similarly, now i ' ve got constant pressure and really what i ' m looking at, let ' s say i ' m at the limit where i ' ve got the pure liquid. or the pure a. and now i ' m going to, let ' s say, raise the temperature until i ' m at the liquid - gas equilibrium. that ' s just the boiling point. so if a is the more volatile component, it has the lower boiling point. and that ' s what this reflects. so higher pb star a corresponds to lower ta star a. which is just the boiling point of pure a. so, this is called the bubble line. that ' s called the dew line. all that means is, let ' s say i ' m at high temperature. i ' ve got all gas. right no coexistence, no liquid yet. and i start to cool things off. just to where i just barely start to get liquid. what you see that as is, dew starts forming. a little bit of condensation. if you ' re outside, it means on the grass a little bit of dew is forming. similarly, if i start at low temperature, all liquid now i start raising the temperature until i just start to boil. i just start to see the first bubbles forming. and so that ' s why these things have those names. so now let ' s just follow along what happens when i do the same sort of thing that i illustrated there. i want to start at one point in this phase diagram. and then start changing the conditions. so let ' s start here. so i ' m going to start all in the liquid phase. that is, the temperature is low. here ' s xb. and my original temperature. now i ' m going to raise it. so if i raise it a little bit, i reach a point at which i first start to boil. start to find some gas above the liquid. and if i look right here, that ' ll be my composition. let me raise it a little farther
|
subdomain_quantum_materials
| 0.607422
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 13
| 0.6
|
2025-12-22T14:56:11.027147
|
more. still get a substantial amount of enrichment. and now i ' ve got, in the gas phase, i ' ll further enriched in component a. and again i can collect the gas. condense it. now i ' m out here somewhere, i ' ve got all liquid and i ' ll raise the temperature again. and i can again keep walking my way over. and that ' s what happens during an ordinary distillation. each step of the distillation walks along in the phase diagram at some selected point. and of course what you ' re doing is, you ' re always condensing the gas. and starting with fresh liquid that now is enriched in more volatile of the components. so of course if you ' re really purifying, say, ethanol from an ethanol water mixture, that ' s how you do it. ethanol is the more volatile component. so a still is set up. it will boil the stuff and collect the gas and and condense it. and boil it again, and so forth. and the whole thing can be set up in a very efficient way. so you have essentially continuous distillation. where you have a whole sequence of collection and condensation and reheating and so forth events. so then, in a practical way, it ' s possible to walk quite far along the distillation, the coexistence curve, and distill to really a high degree of purification. any questions about how that works? ok. i ' ll leave till next time the discussion of the chemical potentials. but what we ' ll do, just to foreshadow a little bit, what i ' ll do at the beginning of the next lecture is what ' s at the end of your notes here. which is just to say ok, now if we look at raoult ' s law, it ' s straightforward to say what is the chemical potential for each of the substances in the liquid and the gas phase. of course, it has to be equal. given that, that ' s for an ideal solution. we can gain some insight from that. and then look at real solutions, non - ideal solutions, and understand a lot of their behavior as well. just from starting from our understanding of what the chemical potential does even in a simple ideal mixture. so we ' ll look at the chemical potentials. and then we ' ll look at non - ideal solution mixtures next time. see you then.
|
subdomain_quantum_thermodynamics
| 0.614114
| 502
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
| 15
| 0.6
|
2025-12-22T14:56:11.031034
|
topics covered : encapsulation, inheritance, shadowing instructor : prof. eric grimson, prof. john guttag operator : the following content is provided under a creative commons license. your support will help mit opencourseware continue to offer high quality educational resources for free. to make a donation or view additional materials from hundreds of mit courses, visit mit opencourseware at ocw. mit. edu. professor : last lecture we were talking about classes, and object - oriented programming, and we ' re going to come back to it today. i ' m going to remind you, we were talking about it because we suggested it is a really powerful way of structuring systems, and that ' s really why we want to use it, it ' s a very common way of structuring systems. so today i ' m going to pick up on a bunch of more nuanced, or more complex if you like, ways of leveraging the power of classes. but we ' re going to see a bunch of examples that are going to give us a sense. i ' m going to talk about inheritance, we ' re going to talk about shadowing, we ' re going to talk about iterators. but before get to it, i want to start by just highlighting, sort of, what was the point of classes? so i ' ll remind you. a class, i said, was basically a template for an abstract data type. and this was really to drive home this idea of modularity. i want the ability to say, i ' ve got a set of things that naturally belong together, i ' m going to cluster them together, i want to treat it like it ' s a primitive, i want to treat it like it ' s a float or an int or a string. is this going to be a point or a segment or something different like that. so it ' s really a way, as i said, of just trying to cluster data together. and this is a notion of modularity slash abstraction where i ' m treating them as primitives. but the second thing we talked about is that we also have a set of methods, using the special name method because we ' re talking classes. but basically functions that are designed to deal with this data structure. we ' re trying to group those together as well. so we cluster data and methods. second key thing we said was, in the ideal case, which unfortunately python isn ' t, but we ' ll come back
|
subdomain_quantum_field_theory
| 0.610129
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:356021a3-01be-42dc-ae50-e22e74e8edfd>
| 0
| 0.6
|
2025-12-22T14:56:11.089463
|
we had a running joke in science ed that kids get so overexposed to discrepant events involving density and air pressure that they tend to try to explain anything and everything they don ' t understand with respect to science in terms of those two concepts. why do we have seasons? ummm... air pressure? why did dr. smith use that particular research design? ummm... density? i think we need another catch - all explanation. i suggest index of refraction. to simplify greatly, index of refraction describes the amount of bending a light ray will undergo as it passes from one medium to another ( it ' s also related to the velocity of light in both media, but i do want to keep this simple ). if the two media have significantly different indices, light passing from one to the other at an angle ( not perpendicularly, in which case there is no bending ) will be bent more than if indices of the two are similar. the first four data points are from hyperphysics, the final one from wikipedia... glass has a wide range of compositions and thus indices of refraction. water at 20 c : 1. 33 typical soda - lime glass : close to 1. 5 since glycerine and glass have similar ior, light passing from one to the other isn ' t bent ; as long as both are transparent and similarly colored, each will be effectively " invisible " against the other. so, why does it rain? umm... index of refraction? a bright moon impact 12 hours ago
|
subdomain_quantum_optics
| 0.610774
| 317
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:7eeb7ef3-3122-42f0-86c8-01da8f3d7396>
| 0
| 0.6
|
2025-12-22T14:56:11.153088
|
| gallium metal is silver - white and melts at approximately body temperature ( wikipedia image ). | | atomic number : | | 31 | | atomic radius : | | 187 pm ( van der waals ) | | atomic symbol : | | ga | | melting point : | | 29. 76 °c | | atomic weight : | | 69. 72 | | boiling point : | | 2204 °c | | electron configuration : | | [ ar ] 4s23d104p1 | | oxidation states : | | 3 | from the latin word gallia, france ; also from latin, gallus, a translation of " lecoq, " a cock. predicted and described by mendeleev as ekaaluminum, and discovered spectroscopically by lecoq de boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in koh. gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. some flue dusts from burning coal have been shown to contain as much 1. 5 percent gallium. it is one of four metals - - mercury, cesium, and rubidium - - which can be liquid near room temperature and, thus, can be used in high - temperature thermometers. it has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures. there is a strong tendency for gallium to supercool below its freezing point. therefore, seeding may be necessary to initiate solidification. ultra - pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. the metal expands 3. 1 percent on solidifying ; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies. high - purity gallium is attacked only slowly by mineral acids. gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. it is widely used in doping semiconductors and producing solid - state devices such as transistors. magnesium gallate containing divalent impurities, such as mn + 2, is finding use in commercial ultraviolet - activated powder phosphors. gallium arsenide is capable of converting electricity directly into coherent light. gallium readily alloys with most metals, and has been used as a component in low - melting alloys.
|
subdomain_quantum_materials
| 0.622286
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048>
| 0
| 0.6
|
2025-12-22T14:56:11.168802
|
professor of electrical engineering at the university of california, berkeley, predicted the existence of a fourth fundamental device, which he called a memristor. he proved that memristor behavior could not be duplicated by any circuit built using only the other three elements, which is why the memristor is truly fundamental. memristor is a contraction of ” memory resistor, ” because that is exactly its function : to remember its history. a memristor is a two - terminal device whose resistance depends on the magnitude and polarity of the voltage applied to it and the length of time that voltage has been applied. when you turn off the voltage, the memristor remembers its most recent resistance until the next time you turn it on, whether that happens a day later or a year later. think of a resistor as a pipe through which water flows. the water is electric charge. the resistor ’ s obstruction of the flow of charge is comparable to the diameter of the pipe : the narrower the pipe, the greater the resistance. for the history of circuit design, resistors have had a fixed pipe diameter. but a memristor is a pipe that changes diameter with the amount and direction of water that flows through it. if water flows through this pipe in one direction, it expands ( becoming less resistive ). but send the water in the opposite direction and the pipe shrinks ( becoming more resistive ). further, the memristor remembers its diameter when water last went through. turn off the flow and the diameter of the pipe ” freezes ” until the water is turned back on. that freezing property suits memristors brilliantly for computer memory. the ability to indefinitely store resistance values means that a memristor can be used as a nonvolatile memory. that might not sound like very much, but go ahead and pop the battery out of your laptop, right now — no saving, no quitting, nothing. you ’ d lose your work, of course. but if your laptop were built using a memory based on memristors, when you popped the battery back in, your screen would return to life with everything exactly as you left it : no lengthy reboot, no half - dozen auto - recovered files. but the memristor ’ s potential goes far beyond instant - on computers to embrace one of the grandest technology challenges : mimicking the functions of a brain. within a decade, memristors could let us emulate
|
subdomain_quantum_materials
| 0.61544
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 1
| 0.6
|
2025-12-22T14:56:11.699748
|
recovered files. but the memristor ’ s potential goes far beyond instant - on computers to embrace one of the grandest technology challenges : mimicking the functions of a brain. within a decade, memristors could let us emulate, instead of merely simulate, networks of neurons and synapses. many research groups have been working toward a brain in silico : ibm ’ s blue brain project, howard hughes medical institute ’ s janelia farm, and harvard ’ s center for brain science are just three. however, even a mouse brain simulation in real time involves solving an astronomical number of coupled partial differential equations. a digital computer capable of coping with this staggering workload would need to be the size of a small city, and powering it would require several dedicated nuclear power plants. memristors can be made extremely small, and they function like synapses. using them, we will be able to build analog electronic circuits that could fit in a shoebox and function according to the same physical principles as a brain. a hybrid circuit — containing many connected memristors and transistors — could help us research actual brain function and disorders. such a circuit might even lead to machines that can recognize patterns the way humans can, in those critical ways computers can ’ t — for example, picking a particular face out of a crowd even if it has changed significantly since our last memory of it. the story of the memristor is truly one for the history books. when leon chua, now an ieee fellow, wrote his seminal paper predicting the memristor, he was a newly minted and rapidly rising professor at uc berkeley. chua had been fighting for years against what he considered the arbitrary restriction of electronic circuit theory to linear systems. he was convinced that nonlinear electronics had much more potential than the linear circuits that dominate electronics technology to this day. chua discovered a missing link in the pairwise mathematical equations that relate the four circuit quantities — charge, current, voltage, and magnetic flux — to one another. these can be related in six ways. two are connected through the basic physical laws of electricity and magnetism, and three are related by the known circuit elements : resistors connect voltage and current, inductors connect flux and current, and capacitors connect voltage and charge. but one equation is missing from this group : the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit — or more subtly, a mathematical doppelganger defined by faraday ’
|
subdomain_quantum_computing
| 0.613291
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 2
| 0.6
|
2025-12-22T14:56:11.700900
|
and capacitors connect voltage and charge. but one equation is missing from this group : the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit — or more subtly, a mathematical doppelganger defined by faraday ’ s law as the time integral of the voltage across the circuit. this distinction is the crux of a raging internet debate about the legitimacy of our memristor [ see sidebar, ” resistance to memristance ” ]. chua ’ s memristor was a purely mathematical construct that had more than one physical realization. what does that mean? consider a battery and a transformer. both provide identical voltages — for example, 12 volts of direct current — but they do so by entirely different mechanisms : the battery by a chemical reaction going on inside the cell and the transformer by taking a 110a ¿ ¿ v ac input, stepping that down to 12 v ac, and then transforming that into 12 v dc. the end result is mathematically identical — both will run an electric shaver or a cellphone, but the physical source of that 12 v is completely different. conceptually, it was easy to grasp how electric charge could couple to magnetic flux, but there was no obvious physical interaction between charge and the integral over the voltage. chua demonstrated mathematically that his hypothetical device would provide a relationship between flux and charge similar to what a nonlinear resistor provides between voltage and current. in practice, that would mean the device ’ s resistance would vary according to the amount of charge that passed through it. and it would remember that resistance value even after the current was turned off. he also noticed something else — that this behavior reminded him of the way synapses function in a brain. even before chua had his eureka moment, however, many researchers were reporting what they called ” anomalous ” current - voltage behavior in the micrometer - scale devices they had built out of unconventional materials, like polymers and metal oxides. but the idiosyncrasies were usually ascribed to some mystery electrochemical reaction, electrical breakdown, or other spurious phenomenon attributed to the high voltages that researchers were applying to their devices. as it turns out, a great many of these reports were unrecognized examples of memristance. after chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at hp labs, and we only really understood the device about two years ago
|
subdomain_quantum_materials
| 0.623517
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 3
| 0.6
|
2025-12-22T14:56:11.702002
|
examples of memristance. after chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at hp labs, and we only really understood the device about two years ago. so what took us so long? it ’ s all about scale. we now know that memristance is an intrinsic property of any electronic circuit. its existence could have been deduced by gustav kirchhoff or by james clerk maxwell, if either had considered nonlinear circuits in the 1800s. but the scales at which electronic devices have been built for most of the past two centuries have prevented experimental observation of the effect. it turns out that the influence of memristance obeys an inverse square law : memristance is a million times as important at the nanometer scale as it is at the micrometer scale, and it ’ s essentially unobservable at the millimeter scale and larger. as we build smaller and smaller devices, memristance is becoming more noticeable and in some cases dominant. that ’ s what accounts for all those strange results researchers have described. memristance has been hidden in plain sight all along. but in spite of all the clues, our finding the memristor was completely serendipitous. in 1995, i was recruited to hp labs to start up a fundamental research group that had been proposed by david packard. he decided that the company had become large enough to dedicate a research group to long - term projects that would be protected from the immediate needs of the business units. packard had an altruistic vision that hp should ” return knowledge to the well of fundamental science from which hp had been withdrawing for so long. ” at the same time, he understood that long - term research could be the strategic basis for technologies and inventions that would directly benefit hp in the future. hp gave me a budget and four researchers. but beyond the comment that ” molecular - scale electronics ” would be interesting and that we should try to have something useful in about 10 years, i was given carte blanche to pursue any topic we wanted. we decided to take on moore ’ s law. at the time, the dot - com bubble was still rapidly inflating its way toward a resounding pop, and the existing semiconductor road map didn ’ t extend past 2010. the critical feature size for the transistors on an integrated circuit was 350 nanometers ; we had a long way to go before atomic sizes would become a
|
subdomain_quantum_materials
| 0.636138
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 4
| 0.6
|
2025-12-22T14:56:11.703109
|
that a crossbar array is basically a storage system, with an open switch representing a zero and a closed switch representing a one. you read the data by probing the switch with a small voltage. like everything else at the nanoscale, the switches and wires of a crossbar are bound to be plagued by at least some nonfunctional components. these components will be only a few atoms wide, and the second law of thermodynamics ensures that we will not be able to completely specify the position of every atom. however, a crossbar architecture builds in redundancy by allowing you to route around any parts of the circuit that don ’ t work. because of their simplicity, crossbar arrays have a much higher density of switches than a comparable integrated circuit based on transistors. but implementing such a storage system was easier said than done. many research groups were working on such a cross - point memory — and had been since the 1950s. even after 40 years of research, they had no product on the market. still, that didn ’ t stop them from trying. that ’ s because the potential for a truly nanoscale crossbar memory is staggering ; picture carrying around the entire library of congress on a thumb drive. one of the major impediments for prior crossbar memory research was the small off - to - on resistance ratio of the switches ( 40 years of research had never produced anything surpassing a factor of 2 or 3 ). by comparison, modern transistors have an off - to - on resistance ratio of 10 000 to 1. we calculated that to get a high - performance memory, we had to make switches with a resistance ratio of at least 1000 to 1. in other words, in its off state, a switch had to be 1000 times as resistive to the flow of current as it was in its on state. what mechanism could possibly give a nanometer - scale device a three - orders - of - magnitude resistance ratio? we found the answer in scanning tunneling microscopy ( stm ), an area of research i had been pursuing for a decade. a tunneling microscope generates atomic - resolution images by scanning a very sharp needle across a surface and measuring the electric current that flows between the atoms at the tip of the needle and the surface the needle is probing. the general rule of thumb in stm is that moving that tip 0. 1 nm closer to a surface increases the tunneling current by one order of magnitude. we needed some similar mechanism by which we could change the effective spacing
|
subdomain_quantum_materials
| 0.637334
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 6
| 0.6
|
2025-12-22T14:56:11.705608
|
needle is probing. the general rule of thumb in stm is that moving that tip 0. 1 nm closer to a surface increases the tunneling current by one order of magnitude. we needed some similar mechanism by which we could change the effective spacing between two wires in our crossbar by 0. 3 nm. if we could do that, we would have the 1000 : 1 electrical switching ratio we needed. our constraints were getting ridiculous. where would we find a material that could change its physical dimensions like that? that is how we found ourselves in the realm of molecular electronics. conceptually, our device was like a tiny sandwich. two platinum electrodes ( the intersecting wires of the crossbar junction ) functioned as the ” bread ” on either end of the device. we oxidized the surface of the bottom platinum wire to make an extremely thin layer of platinum dioxide, which is highly conducting. next, we assembled a dense film, only one molecule thick, of specially designed switching molecules. over this ” monolayer ” we deposited a 2 - to 3 - nm layer of titanium metal, which bonds strongly to the molecules and was intended to glue them together. the final layer was the top platinum electrode. the molecules were supposed to be the actual switches. we built an enormous number of these devices, experimenting with a wide variety of exotic molecules and configurations, including rotaxanes, special switching molecules designed by james heath and fraser stoddart at the university of california, los angeles. the rotaxane is like a bead on a string, and with the right voltage, the bead slides from one end of the string to the other, causing the electrical resistance of the molecule to rise or fall, depending on the direction it moves. heath and stoddart ’ s devices used silicon electrodes, and they worked, but not well enough for technological applications : the off - to - on resistance ratio was only a factor of 10, the switching was slow, and the devices tended to switch themselves off after 15 minutes. our platinum devices yielded results that were nothing less than frustrating. when a switch worked, it was spectacular : our off - to - on resistance ratios shot past the 1000 mark, the devices switched too fast for us to even measure, and having switched, the device ’ s resistance state remained stable for years ( we still have some early devices we test every now and then, and we have never seen a significant change in resistance ). but our fantastic results were inconsistent. worse yet, the success or failure of a
|
subdomain_quantum_materials
| 0.634363
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 7
| 0.6
|
2025-12-22T14:56:11.706674
|
’ s resistance state remained stable for years ( we still have some early devices we test every now and then, and we have never seen a significant change in resistance ). but our fantastic results were inconsistent. worse yet, the success or failure of a device never seemed to depend on the same thing. we had no physical model for how these devices worked. instead of rational engineering, we were reduced to performing huge numbers of edisonian experiments, varying one parameter at a time and attempting to hold all the rest constant. even our switching molecules were betraying us ; it seemed like we could use anything at all. in our desperation, we even turned to long - chain fatty acids — essentially soap — as the molecules in our devices. there ’ s nothing in soap that should switch, and yet some of the soap devices switched phenomenally. we also made control devices with no molecule monolayers at all. none of them switched. we were frustrated and burned out. here we were, in late 2002, six years into our research. we had something that worked, but we couldn ’ t figure out why, we couldn ’ t model it, and we sure couldn ’ t engineer it. that ’ s when greg snider, who had worked with kuekes on the teramac, brought me the chua memristor paper from the september 1971 ieee transactions on circuits theory. ” i don ’ t know what you guys are building, ” he told me, ” but this is what i want. ” to this day, i have no idea how greg happened to come across that paper. few people had read it, fewer had understood it, and fewer still had cited it. at that point, the paper was 31 years old and apparently headed for the proverbial dustbin of history. i wish i could say i took one look and yelled, ” eureka! ” but in fact, the paper sat on my desk for months before i even tried to read it. when i did study it, i found the concepts and the equations unfamiliar and hard to follow. but i kept at it because something had caught my eye, as it had greg ’ s : chua had included a graph that looked suspiciously similar to the experimental data we were collecting. the graph described the current - voltage ( i - v ) characteristics that chua had plotted for his memristor. chua had called them ” pinched - hysteresis loops ” ; we called our i - v characteristics ” bow ties. ” a pinched hyst
|
subdomain_quantum_materials
| 0.626304
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 8
| 0.6
|
2025-12-22T14:56:11.707635
|
voltage ( i - v ) characteristics that chua had plotted for his memristor. chua had called them ” pinched - hysteresis loops ” ; we called our i - v characteristics ” bow ties. ” a pinched hysteresis loop looks like a diagonal infinity symbol with the center at the zero axis, when plotted on a graph of current against voltage. the voltage is first increased from zero to a positive maximum value, then decreased to a minimum negative value and finally returned to zero. the bow ties on our graphs were nearly identical [ see graphic, ” bow ties ” ]. that ’ s not all. the total change in the resistance we had measured in our devices also depended on how long we applied the voltage : the longer we applied a positive voltage, the lower the resistance until it reached a minimum value. and the longer we applied a negative voltage, the higher the resistance became until it reached a maximum limiting value. when we stopped applying the voltage, whatever resistance characterized the device was frozen in place, until we reset it by once again applying a voltage. the loop in the i - v curve is called hysteresis, and this behavior is startlingly similar to how synapses operate : synaptic connections between neurons can be made stronger or weaker depending on the polarity, strength, and length of a chemical or electrical signal. that ’ s not the kind of behavior you find in today ’ s circuits. looking at chua ’ s graphs was maddening. we now had a big clue that memristance had something to do with our switches. but how? why should our molecular junctions have anything to do with the relationship between charge and magnetic flux? i couldn ’ t make the connection. two years went by. every once in a while i would idly pick up chua ’ s paper, read it, and each time i understood the concepts a little more. but our experiments were still pretty much trial and error. the best we could do was to make a lot of devices and find the ones that worked. but our frustration wasn ’ t for nothing : by 2004, we had figured out how to do a little surgery on our little sandwiches. we built a gadget that ripped the tiny devices open so that we could peer inside them and do some forensics. when we pried them apart, the little sandwiches separated at their weakest point : the molecule layer. for the first time, we could get a good look at what was going on inside. we were in for
|
subdomain_quantum_materials
| 0.647119
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 9
| 0.6
|
2025-12-22T14:56:11.708845
|
them and do some forensics. when we pried them apart, the little sandwiches separated at their weakest point : the molecule layer. for the first time, we could get a good look at what was going on inside. we were in for a shock. what we had was not what we had built. recall that we had built a sandwich with two platinum electrodes as the bread and filled with three layers : the platinum dioxide, the monolayer film of switching molecules, and the film of titanium. but that ’ s not what we found. under the molecular layer, instead of platinum dioxide, there was only pure platinum. above the molecular layer, instead of titanium, we found an unexpected and unusual layer of titanium dioxide. the titanium had sucked the oxygen right out of the platinum dioxide! the oxygen atoms had somehow migrated through the molecules and been consumed by the titanium. this was especially surprising because the switching molecules had not been significantly perturbed by this event — they were intact and well ordered, which convinced us that they must be doing something important in the device. the chemical structure of our devices was not at all what we had thought it was. the titanium dioxide — a stable compound found in sunscreen and white paint — was not just regular titanium dioxide. it had split itself up into two chemically different layers. adjacent to the molecules, the oxide was stoichiometric tio 2, meaning the ratio of oxygen to titanium was perfect, exactly 2 to 1. but closer to the top platinum electrode, the titanium dioxide was missing a tiny amount of its oxygen, between 2 and 3 percent. we called this oxygen - deficient titanium dioxide tio 2 - x, where x is about 0. 05. because of this misunderstanding, we had been performing the experiment backward. every time i had tried to create a switching model, i had reversed the switching polarity. in other words, i had predicted that a positive voltage would switch the device off and a negative voltage would switch it on. in fact, exactly the opposite was true. it was time to get to know titanium dioxide a lot better. they say three weeks in the lab will save you a day in the library every time. in august of 2006 i did a literature search and found about 300 relevant papers on titanium dioxide. i saw that each of the many different communities researching titanium dioxide had its own way of describing the compound. by the end of the month, the pieces had fallen into place. i finally knew how our device worked. i knew why we had a
|
subdomain_quantum_materials
| 0.617463
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 10
| 0.6
|
2025-12-22T14:56:11.709879
|
. i saw that each of the many different communities researching titanium dioxide had its own way of describing the compound. by the end of the month, the pieces had fallen into place. i finally knew how our device worked. i knew why we had a memristor. the exotic molecule monolayer in the middle of our sandwich had nothing to do with the actual switching. instead, what it did was control the flow of oxygen from the platinum dioxide into the titanium to produce the fairly uniform layers of tio 2 and tio 2 - x. the key to the switching was this bilayer of the two different titanium dioxide species [ see diagram, ” how memristance works ” ]. the tio 2 is electrically insulating ( actually a semiconductor ), but the tio 2 - x is conductive, because its oxygen vacancies are donors of electrons, which makes the vacancies themselves positively charged. the vacancies can be thought of like bubbles in a glass of beer, except that they don ’ t pop — they can be pushed up and down at will in the titanium dioxide material because they are electrically charged. now i was able to predict the switching polarity of the device. if a positive voltage is applied to the top electrode of the device, it will repel the ( also positive ) oxygen vacancies in the tio 2 - x layer down into the pure tio 2 layer. that turns the tio 2 layer into tio 2 - x and makes it conductive, thus turning the device on. a negative voltage has the opposite effect : the vacancies are attracted upward and back out of the tio 2, and thus the thickness of the tio 2 layer increases and the device turns off. this switching polarity is what we had been seeing for years but had been unable to explain. on 20 august 2006, i solved the two most important equations of my career — one equation detailing the relationship between current and voltage for this equivalent circuit, and another equation describing how the application of the voltage causes the vacancies to move — thereby writing down, for the first time, an equation for memristance in terms of the physical properties of a material. this provided a unique insight. memristance arises in a semiconductor when both electrons and charged dopants are forced to move simultaneously by applying a voltage to the system. the memristance did not actually involve magnetism in this case ; the integral over the voltage reflected how far the dopants had moved and thus how
|
subdomain_quantum_materials
| 0.654351
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 11
| 0.6
|
2025-12-22T14:56:11.710830
|
both electrons and charged dopants are forced to move simultaneously by applying a voltage to the system. the memristance did not actually involve magnetism in this case ; the integral over the voltage reflected how far the dopants had moved and thus how much the resistance of the device had changed. we finally had a model we could use to engineer our switches, which we had by now positively identified as memristors. now we could use all the theoretical machinery chua had created to help us design new circuits with our devices. triumphantly, i showed the group my results and immediately declared that we had to take the molecule monolayers out of our devices. skeptical after years of false starts and failed hypotheses, my team reminded me that we had run control samples without molecule layers for every device we had ever made and that those devices had never switched. and getting the recipe right turned out to be tricky indeed. we needed to find the exact amounts of titanium and oxygen to get the two layers to do their respective jobs. by that point we were all getting impatient. in fact, it took so long to get the first working device that in my discouragement i nearly decided to put the molecule layers back in. a month later, it worked. we not only had working devices, but we were also able to improve and change their characteristics at will. but here is the real triumph. the resistance of these devices stayed constant whether we turned off the voltage or just read their states ( interrogating them with a voltage so small it left the resistance unchanged ). the oxygen vacancies didn ’ t roam around ; they remained absolutely immobile until we again applied a positive or negative voltage. that ’ s memristance : the devices remembered their current history. we had coaxed chua ’ s mythical memristor off the page and into being. emulating the behavior of a single memristor, chua showed, requires a circuit with at least 15 transistors and other passive elements. the implications are extraordinary : just imagine how many kinds of circuits could be supercharged by replacing a handful of transistors with one single memristor. the most obvious benefit is to memories. in its initial state, a crossbar memory has only open switches, and no information is stored. but once you start closing switches, you can store vast amounts of information compactly and efficiently. because memristors remember their state, they can store data indefinitely, using energy only when you toggle or read
|
subdomain_quantum_materials
| 0.654082
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
| 12
| 0.6
|
2025-12-22T14:56:11.711842
|
now that we ’ ve said a lot about individual operators on vector spaces, i want to go back and consider some other sorts of structures we can put on the space itself. foremost among these is the idea of a bilinear form. this is really nothing but a bilinear function to the base field :. of course, this means that it ’ s equivalent to a linear function from the tensor square :. instead of writing this as a function, we will often use a slightly different notation. we write a bracket, or sometimes, if we need to specify which of multiple different inner products under consideration. another viewpoint comes from recognizing that we ’ ve got a duality for vector spaces. this lets us rewrite our bilinear form as a linear transformation. we can view this as saying that once we pick one of the vectors, the bilinear form reduces to a linear functional, which is a vector in the dual space. or we could focus on the other slot and define. we know that the dual space of a finite - dimensional vector space has the same dimension as the space itself, which raises the possibility that or is an isomorphism from to. if either one is, then both are, and we say that the bilinear form is nondegenerate. we can also note that there is a symmetry on the category of vector spaces. that is, we have a linear transformation defined by. this makes it natural to ask what effect this has on our form. two obvious possibilities are that and that. in the first case we ’ ll call the bilinear form “ symmetric ”, and in the second we ’ ll call it “ antisymmetric ”. in terms of the maps and, we see that composing with the symmetry swaps the roles of these two functions. for symmetric bilinear forms,, while for antisymmetric bilinear forms we have. this leads us to consider nondegenerate bilinear forms a little more. if is an isomorphism it has an inverse. then we can form the composite. if is symmetric then this composition is the identity transformation on. on the other hand, if is antisymmetric then this composition is the negative of the identity transformation. thus, the composite transformation measures how much the bilinear transformation diverges from symmetry. accordingly, we call it the asymmetry of the form. finally, if we ’ re working over a finite - dimensional vector space we can pick a basis for, and get a matrix for.
|
subdomain_quantum_field_theory
| 0.616527
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:3bf09a24-c60d-45a0-b8e6-cc02ddac7ed6>
| 0
| 0.6
|
2025-12-22T14:56:11.793155
|
the gram - schmidt process now that we have a real or complex inner product, we have notions of length and angle. this lets us define what it means for a collection of vectors to be “ orthonormal ” : each pair of distinct vectors is perpendicular, and each vector has unit length. in formulas, we say that the collection is orthonormal if. these can be useful things to have, but how do we get our hands on them? it turns out that if we have a linearly independent collection of vectors then we can come up with an orthonormal collection spanning the same subspace of. even better, we can pick it so that the first vectors span the same subspace as. the method goes back to laplace and cauchy, but gets its name from jørgen gram and erhard schmidt. we proceed by induction on the number of vectors in the collection. if, then we simply set this “ normalizes ” the vector to have unit length, but doesn ’ t change its direction. it spans the same one - dimensional subspace, and since it ’ s alone it forms an orthonormal collection. now, lets assume the procedure works for collections of size and start out with a linearly independent collection of vectors. first, we can orthonormalize the first vectors using our inductive hypothesis. this gives a collection which spans the same subspace as ( and so on down, as noted above ). but isn ’ t in the subspace spanned by the first vectors ( or else the original collection wouldn ’ t have been linearly independent ). so it points at least somewhat in a new direction. to find this new direction, we define this vector will be orthogonal to all the vectors from to, since for any such we can check where we use the orthonormality of the collection to show that most of these inner products come out to be zero. so we ’ ve got a vector orthogonal to all the ones we collected so far, but it might not have unit length. so we normalize it : and we ’ re done.
|
subdomain_quantum_field_theory
| 0.635934
| 434
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:4a2ad899-7ba0-4bfc-9276-c5c5c0845fe6>
| 0
| 0.6
|
2025-12-22T14:56:11.798323
|
the scientific world is abuzz with news of the ratification of the existence of the subatomic particle called the higgs boson - or more colloquially, the ' god particle. ' this subatomic particle ' s existence - which was verified recently ( with virtually near certainty ) by experiments at the large hadron collider in switzerland - lends credence to several long - standing physical theories such as the so - called standard model and the big bang theory. the nickname god particle is ironic for two reasons. first, generally, the nuclear physicists who deal with these matters - postulating the fundamental physical laws of the universe and then setting about to either verify or refute them - tend not to be regular church - goers. while there are some highly prominent scientists who balance personal, religious beliefs with professional, scientific quests, most probably go along with the thoughts of the world - famous physicist, stephen hawking : i regard the brain as a computer which will stop working when its components fail. there is no heaven or afterlife for broken down computers ; that is a fairy story for people afraid of the dark. [ interview in the guardian, 7 / 9 / 12 ] spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. it is not necessary to invoke god... [ from his book ; the grand design, 2010 ] so it is a bit ironic that physics ' most famous quest has resulted in the discovery of the ' god particle. ' most physicists are quite comfortable having their names associated with famous - even if dead - humans like newton, einstein or the afore - mentioned hawking. one will find few, if any, attributions to deities in the objects that physicists discover and name or the theories they propose. second, and more importantly, the discovery that the god particle really exists does not - as the name suggests - imply that god played some role in the creation of the universe. in fact, quite the opposite. the matter is discussed at some length in the july 9 daily beast by lawrence kraus, a well - known physicist / cosmologist from arizona state university : this term [ god particle ] appeared first in the unfortunate title of a book written by physicist leon lederman two decades ago, and while to my knowledge it was never used by any scientist ( including lederman ) before or since, it has captured the media ' s imagination. what makes this term particularly unfortunate is that nothing could be further from the truth.
|
subdomain_quantum_field_theory
| 0.625768
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:ed184b23-5659-4b91-97c0-fd818297d417>
| 0
| 0.6
|
2025-12-22T14:56:12.038351
|
two decades ago, and while to my knowledge it was never used by any scientist ( including lederman ) before or since, it has captured the media ' s imagination. what makes this term particularly unfortunate is that nothing could be further from the truth. assuming the particle in question is indeed the higgs, it validates an unprecedented revolution in our understanding of fundamental physics and brings science closer to dispensing with the need for any supernatural shenanigans all the way back to the beginning of the universe... if these bold, some would say arrogant, notions derive support from the remarkable results at the large hadron collider, they may reinforce two potentially uncomfortable possibilities : first, that many features of our universe, including our existence, may be accidental consequences of conditions associated with the universe ' s birth ; and second, that creating " stuff " from " no stuff " seems to be no problem at all - everything we see could have emerged as a purposeless quantum burp in space or perhaps a quantum burp of space itself. humans, with their remarkable tools and their remarkable brains, may have just taken a giant step toward replacing metaphysical speculation with empirically verifiable knowledge. the higgs particle is now arguably more relevant than god. so the term god particle was first used by a scientist, but was picked up and popularized by the media. it ' s catchy and enhances interest in the subject among the public. but like so much else that the media promotes, it is misleading and inappropriate.
|
subdomain_quantum_field_theory
| 0.632446
| 308
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:ed184b23-5659-4b91-97c0-fd818297d417>
| 1
| 0.6
|
2025-12-22T14:56:12.038912
|
commonly seen in the front of button - down shirts. also used to reinforce openings or slits in garments. piping binding a seam with decoration. piping is similar to tipping or edging where a decorative material is sewn into the seams. pointelle an open - work knitting pattern used on garments to add texture. typically a cooler and general knit sweater. polyester a fabric made from synthetic fibers. polyester is quick drying, easy to wash and holds its shape well. ponte a knit fabric where the fibers are looped in an interlock. the material is very strong and firm. poplin a strong woven fabric, heavier in weight, with ribbing. rayon a manufactured fiber developed originally as an alternative for silk. rayon drapes well and looks luxurious. sateen a cotton fabric with sheen that resembles satin. seersucker slack - tension weave where yarn is bunched together in certain areas and then pulled taught in others to create this summery mainstay. shirring similar to ruching, shirring gathers material to create folds. silk one of the most luxurious fabrics, silk is soft, warm and has shine. it is created from female silkworm ' s eggs. silk shantung a rough plain weave fabric made of uneven yarns to produce a textured effect, made of fibers such silk in which all knots and lumps are retained. space dyed technique of yarn dyeing to produce a multi - color effect on the yarn itself. also known as dip dyed yarn. spandex also known as lycra ( trademark symbol ), this material is able to expand 600 % and still snap back to its original shape and form. spandex fibers are woven with cotton and other materials to make fabrics stretch. tipping similar to edging, tipping includes embellishing a garment at the edges of the piece, hems, collars etc. tissue linen a type of linen that is specifically made for blouses or shirts due to its thinness and sheerness. tweed a loose weave of heavy wool makes up tweed which provides warmth and comfortability. twill a fabric woven in a diagonal weave. commonly used for chinos and denim. variegated multi - colored fabrics where colors are splotched or in patches. velour a stretchy knit fabric that looks similar to velvet. very soft to the touch. velvet a soft, silky woven fabric that is similar to velour. velvet is much more expensive than velour due to the amount of thread and steps it takes to
|
subdomain_quantum_materials
| 0.604953
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:04a048d3-152b-45eb-ac6e-e7717919a899>
| 3
| 0.6
|
2025-12-22T14:56:12.053405
|
brookhaven national laboratory was established in 1947 on the eastern end of long island at the former site of the u. s. army ’ s camp upton. originally built out of a post - world war ii desire to explore the peaceful applications of atomic energy, the laboratory now has a broader mission : to perform basic and applied research at the frontiers of science, including nuclear and high - energy physics ; physics and chemistry of materials ; nanoscience ; energy and environmental research ; national security and nonproliferation ; neurosciences and medical imaging ; structural biology ; and computational sciences. over its history, brookhaven lab has housed three research reactors, numerous one - of - a - kind particle accelerators, and other cutting - edge research facilities responsible for discoveries leading to many advances for science and society as well as seven nobel prizes. brookhaven was originally conceived, in part, to establish a national laboratory in the northeastern united states to design, construct and operate large scientific machines that individual institutions could not afford to develop on their own. throughout the years, brookhaven ’ s scientists and visiting researchers have used these unique facilities to make discoveries in biology, physics, chemistry, geophysics, medicine, and materials science. since brookhaven opened its doors, countless innovations and inventions by staff and visiting scientists have contributed to research in many fields. discoveries made here have shaped our understanding of the atom and the universe, advanced medical imaging techniques, and created new technology and tools for studying microbiology, climate and pollutants, energy storage and more.
|
subdomain_quantum_field_theory
| 0.602608
| 306
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:ed9dbb98-4768-4a07-84b7-372f728fdb7b>
| 0
| 0.6
|
2025-12-22T14:56:12.285687
|
acrylic a synthetic fabric often used as a wool substitute. it is warm, soft, holds colors well and often is stain and wrinkle resistant. angora rabbit hair a soft fiber knit from fur of the angora rabbit. angora wool is often combined with cashmere or another fiber to strengthen the delicate structure. dry cleaning is recommended for angora products. bedford a strong material that is a raised corded fabric ( similar to corduroy ). bedford fabric wears well and is usually washable. boot footwear which covers the entire foot and extends to the height of the anklebone or up to the thigh. bootie a shoe that resembles a boot in style but is not as high. brocade an all - over floral, raised pattern produced in a similar fashion to embroidery. cable knit patterns, typically used in sweaters, where flat knit columns otherwise known as cables are overlapped vertically. cashmere a soft, strong and silky, lightweight wool spun from the kashmir goat. cashmere is commonly used in sweaters, shawls, outerwear, gloves and scarves for its warmth and soft feel. chiffon a common evening wear fabric made from silk, cotton, rayon or nylon. it ' s delicate in nature and sheer. chintz a printed and glazed fabric made of cotton. chintz is known for its bright colors and bold patterns. circumference the measurement around the shaft of a boot taken at the widest part. corduroy cotton blend fibers twisted as they are woven to create long, parallel grooves, called wales, in the fabric. this is a very durable material and depending on the width of the wales, can be extremely soft. cotton a natural fiber that grows in the seed pod of the cotton plant. it is an inelastic fiber. crepe used as a description of surfaces of fabrics. usually designates a fabric that is crimped or crinkled. crinoline a lightweight, plain weave, stiffened fabric with a low yarn count. used to create volume beneath evening or wedding dresses. crochet looping threads with a hooked needle that creates a wide, open lace. typically used on sweaters for warm seasons. cushioning padding on the sole of a shoe for added comfort and stabilization. denimcotton blend fabric created with a twill weave to create a sturdy fabric. used as the primary material of blue jeans. dobbywoven fabric where the weave of the fabric actually produces the garment ' s design. embroidery detailed needlework,
|
subdomain_quantum_materials
| 0.600387
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6>
| 0
| 0.6
|
2025-12-22T14:56:13.585486
|
. jacquard a fabric of intricate variegated weave or pattern. typically shown on elegant and more expensive pieces. jersey a type of knit material known to be flexible, stretchy, soft and very warm. it is created using tight stitches. knit a knit fabric is made by interlocking loops of one or more yarns either by hand with knitting needles or by machine. linenan exquisite material created from the fibers of the flax plant. some linen contain slubs or small knots on the fabric. the material is a light fabric perfect for warm weather. liningthe leather, fabric or synthetic material used on the inside of a shoe. lame a metallic or plastic fiber woven into material to give the garment shine. lycra ®tmspandex fibers add stretch to fabric when the fibers are woven with other fiber blends. these materials are lightweight, comfortabletm and breathable, and the stretch will not wear away. madras originating from madras, india, this fabric is a lightweight, cotton material used for summer clothing. madras usually has a checked pattern but also comes in plaid or with stripes. typically made from 100 % cotton. marled typically found in sweaters, marled yarn occurs when two colored yards are twisted together. matte a matte finish has a lusterless surface. merino wool wool sheered from the merino sheep and spun into yarn that is fine but strong. modal a type of rayon that is made from natural fibers but goes through a chemical treatment to ensure it has a high threshold of breakage. modal is soft and breathable which is why it ' s used as a cotton replacement. non - iron a treated cotton that allows our easy care shirts to stay crisp throughout the day and does not need ironing after washing / drying. nylon a synthetic fiber that is versatile, fast drying and strong. it has a high resistance to damage. ombre a color technique that shades a color from light to dark. paisley a pattern that consists of crooked teardrop designs in a repetitive manner. patent leather leather made from cattle hide that has been varnished to give a hard and glossy finish. placket the piece of fabric or cloth that is used as a concealing flap to cover buttons, fasteners or attachments. most commonly seen in the front of button - down shirts. also used to reinforce openings or slits in garments. piping binding a seam with decoration. piping is similar to tipping or edging where a decorative material is sewn into the seams
|
subdomain_quantum_materials
| 0.609317
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6>
| 2
| 0.6
|
2025-12-22T14:56:13.588696
|
most commonly seen in the front of button - down shirts. also used to reinforce openings or slits in garments. piping binding a seam with decoration. piping is similar to tipping or edging where a decorative material is sewn into the seams. pointelle an open - work knitting pattern used on garments to add texture. typically a cooler and general knit sweater. polyester a fabric made from synthetic fibers. polyester is quick drying, easy to wash and holds its shape well. ponte a knit fabric where the fibers are looped in an interlock. the material is very strong and firm. poplin a strong woven fabric, heavier in weight, with ribbing. pump classically a high, medium, or low heeled, totally enclosed shoe. variations include an open toe or ornament. rayon a manufactured fiber developed originally as an alternative for silk. rayon drapes well and looks luxurious. sateen a fabric woven with sheen that resembles satin. seersucker slack - tension weave where yarn is bunched together in certain areas and then pulled taught in others to create this summery mainstay. shaft height measurement of the shaft of the boot, which is from the top of the boot to the inside seam where the instep and the sole meet. shirring similar to ruching, shirring gathers material to create folds. silk one of the most luxurious fibers, silk is soft, warm and has shine. it is obtained from the cocoons of the silkworm ' s larvae. sole the outsole, or bottom part of a shoe. space dyed technique of yarn dyeing to produce a multi - color effect on the yarn itself. also known as dip dyed yarn. spandexelastomeric fiber, this material is able to expand 600 % and still snap back to its original shape and form. spandex fibers are woven with cotton and other fibers to make fabrics stretch. stacked heel a heel made of leather or leawood covering that gives the appearance of wood. synthetic materials man - made materials designed to look or function like leather. tipping similar to edging, tipping includes embellishing a garment at the edges of the piece, hems, collars etc. tissue linen a type of linen, which is specifically made for blouses or shirts due to its thinness and sheerness. tweed a loose weave of heavy wool makes up tweed, which provides warmth and comfort. twill a fabric woven in a diagonal weave. commonly used for chinos and denim. variegated multi -
|
subdomain_quantum_materials
| 0.617684
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:70ec883d-7f47-4172-8115-7a1124765db6>
| 3
| 0.6
|
2025-12-22T14:56:13.591139
|
bootie a shoe that resembles a boot in style but is not as high. brocade an all - over floral, raised pattern produced in a similar fashion to embroidery. circumference the measurement around the shaft of a boot taken at the widest part. cotton a natural fiber that grows in the seed pod of the cotton plant. it is an inelastic fiber. cushioning padding on the sole of a shoe for added comfort and stabilization. dobbywoven fabric where the weave of the fabric actually produces the garment ' s design. embroidery detailed needlework, usually raised and created by yarn, thread or embroidery floss. faille a slightly ribbed, woven fabric of silk, cotton, or rayon. houndstooth a classic design containing two colors in jagged / slanted checks. similar to glen plaid. liningthe leather, fabric or synthetic material used on the inside of a shoe. lame a metallic or plastic fiber woven into material to give the garment shine. marled typically found in sweaters, marled yarn occurs when two colored yards are twisted together. matte a matte finish has a lusterless surface. merino wool wool sheered from the merino sheep and spun into yarn that is fine but strong. ombre a color technique that shades a color from light to dark. paisley a pattern that consists of crooked teardrop designs in a repetitive manner. poplin a strong woven fabric, heavier in weight, with ribbing. sateen a fabric woven with sheen that resembles satin. shirring similar to ruching, shirring gathers material to create folds. sole the outsole, or bottom part of a shoe. stacked heel a heel made of leather or leawood covering that gives the appearance of wood. synthetic materials man - made materials designed to look or function like leather. tweed a loose weave of heavy wool makes up tweed, which provides warmth and comfort. twill a fabric woven in a diagonal weave. commonly used for chinos and denim. variegated multi - colored fabrics where colors are splotched or in patches. viscosea cellulosic man - made fibers, viscose is soft and supple but can wrinkle easily. wedge heel a heel that lies flat to the ground and extends from the shank to the back of the shoe. woven a woven fabric is formed by interlacing threads, yarns, strands, or strips of some material.
|
subdomain_quantum_materials
| 0.61646
| 491
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:34eaf969-a050-46fc-9917-ce3a0e03647a>
| 0
| 0.6
|
2025-12-22T14:56:13.605478
|
this operation is particularly well suited for finding the spikes in fourier transform power spectra, as illustrated previously. the top hat is also good for locating any features of a known size by adjusting the radius of the crown. objects too large to fit into the crown of the hat are selectively removed. reversing the logic to use the darkest values in both regions enables the same procedure to isolate dust or other dark features. by replacing the interior value by the mean of the surroundings, the dust can be selectively removed. in this application, shown in the rolling ball filter interactive java tutorial, the method is called a rolling ball filter. john c. russ - materials science and engineering dept., north carolina state university, raleigh, north carolina, 27695. matthew parry - hill and michael w. davidson - national high magnetic field laboratory, 1800 east paul dirac dr., the florida state university, tallahassee, florida, 32310. questions or comments? send us an email. © 1998 - 2009 by michael w. davidson, john russ, olympus america inc., and the florida state university. all rights reserved. no images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the copyright holders. use of this website means you agree to all of the legal terms and conditions set forth by the owners. this website is maintained by our
|
subdomain_quantum_metrology
| 0.606683
| 280
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:5a975454-3da4-4fb7-b80e-35755220af39>
| 2
| 0.6
|
2025-12-22T14:56:14.154518
|
nov. 27, 2009 physicists from the japanese - led multi - national t2k neutrino collaboration have just announced that over the weekend they detected the first neutrino events generated by their newly built neutrino beam at the j - parc ( japan proton accelerator research complex ) accelerator laboratory in tokai, japan. protons from the 30 - gev main ring synchrotron were directed onto a carbon target, where their collisions produced charged particles called pions. these pions travelled through a helium - filled volume where they decayed to produce a beam of the elusive particles called neutrinos. these neutrinos then flew 200 metres through the earth to a sophisticated detector system capable of making detailed measurements of their energy, direction, and type. the data from the complex detector system is still being analysed, but the physicists have seen at least 3 neutrino events, in line with the expectation based on the current beam and detector performance. this detection therefore marks the beginning of the operational phase of the t2k experiment, a 474 - physicist, 13 - nation collaboration to measure new properties of the ghostly neutrino. neutrinos interact only weakly with matter, and thus pass effortlessly through the earth ( and mostly through the detectors! ). neutrinos exist in three types, called electron, muon, and tau ; linked by particle interactions to their more familiar charged cousins like the electron. measurements over the last few decades, notably by the super kamiokande and kamland neutrino experiments in western japan, have shown that neutrinos possess the strange property of neutrino oscillations, whereby one type of neutrino will turn into another as they propagate through space. neutrino oscillations, which require neutrinos to have mass and therefore were not allowed in our previous theoretical understanding of particle physics, probe new physical laws and are thus of great interest in the study of the fundamental constituents of matter. they may even be related to the mystery of why there is more matter than anti - matter in the universe, and thus are the focus of intense study worldwide. precision measurements of neutrino oscillations can be made using artificial neutrino beams, as pioneered by the k2k neutrino experiment where neutrinos from the kek laboratory were detected using the vast super kamiokande neutrino detector near toyama. t2k is a more powerful and sophisticated version of
|
subdomain_quantum_materials
| 0.640419
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:73f94bf7-72a9-431b-90ac-37db05302858>
| 0
| 0.6
|
2025-12-22T14:56:14.950182
|
by i. peterson unlike an ordinary, incandescent bulb, a laser produces light of a single wavelength. moreover, the emitted light waves are coherent, meaning that all of the energy peaks and troughs are precisely in step. now, a team at the massachusetts institute of technology has demonstrated experimentally that a cloud consisting of millions of atoms can also be made coherent. instead of flying about and colliding randomly, the atoms display coordinated behavior, acting as if the entire assemblage were a single entity. according to quantum mechanics, atoms can behave like waves. thus, two overlapping clouds made up of atoms in coherent states should produce a zebra - striped interference pattern of dark and light fringes, just like those generated when two beams of ordinary laser light overlap. by detecting such a pattern, the researchers proved that the clouds ' atoms are coherent and constitute an " atom laser, " says physicist wolfgang ketterle, who heads the mit group. these matter waves, in principle, can be focused just like light. ketterle and his coworkers describe their observations in the jan. 31 science. the demonstration of coherence involving large numbers of atoms is the latest step in a series of studies of a remarkable state of matter called a bose - einstein condensate. chilled to temperatures barely above absolute zero, theory predicted, the atoms would collectively enter the same quantum state and behave like a single unit, or superparticle, with a specific wavelength. first created in the laboratory in 1995 by eric a. cornell and his collaborators at the university of colorado and the national institute of standards and technology, both in boulder, bose - einstein condensates have been the subject of intense investigation ever since ( sn : 7 / 15 / 95, p. 36 ; 5 / 25 / 96, p. 327 ). at mit, ketterle and his colleagues cool sodium atoms to temperatures below 2 microkelvins. the frigid atoms are then confined in a special magnetic trap inside a vacuum chamber. to determine whether the atoms in the resulting condensate are indeed as coherent as photons in a laser beam, the researchers developed a novel method of extracting a clump of atoms from the trap. in effect, they manipulate the magnetic states of the atoms to expel an adjustable fraction of the original cloud ; under the influence of gravity, the released clump falls. the method can produce a sequence of descending clumps, with each containing 100, 000 to several million coherent atoms. the apparatus acts like
|
subdomain_quantum_optics
| 0.717416
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:5a667bf7-c324-483a-8231-ce8448d754f3>
| 0
| 0.6
|
2025-12-22T14:56:14.960792
|
expel an adjustable fraction of the original cloud ; under the influence of gravity, the released clump falls. the method can produce a sequence of descending clumps, with each containing 100, 000 to several million coherent atoms. the apparatus acts like a dripping faucet, ketterle says. he and his colleagues describe the technique in the jan. 27 physical review letters. to demonstrate interference, the mit group created a double magnetic trap so that two pulses of coherent atoms could be released at the same time. as the two clumps fell, they started to spread and overlap. the researchers could then observe interference between the atomic waves of the droplets. " the signal was almost too good to be true, " ketterle says. " we saw a high - contrast, very regular pattern. " " it ' s a beautiful result, " cornell remarks. " this work really shows that bose - einstein condensation is an atom laser. " from the pattern, the mit researchers deduced that the condensate of sodium atoms has a wavelength of about 30 micrometers, considerably longer than the 0. 04 - nanometer wavelength typical of individual atoms at room temperature. ketterle and his colleagues are already planning several improvements to their primitive atom laser, including getting more atoms into the emitted pulses and going from pulses to a continuous beam. practical use of an atom laser for improving the precision of atomic clocks and for manipulating atoms is still distant, however, cornell notes.
|
subdomain_quantum_metrology
| 0.664385
| 299
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:5a667bf7-c324-483a-8231-ce8448d754f3>
| 1
| 0.6
|
2025-12-22T14:56:14.961342
|
a b c d e f g h i j k l m n o p q r s t u v w x y z a modem self - test in which data from the keyboard or an internal test pattern is sent to the modem ' s transmitter, turned into analog form, looped back to the receiver, and converted back into digital form. a variety of signals and wavelengths that can be transmitted over communications lines such as the sound of a voice over the phone line. the mode used by your modem when answering an incoming call from an originating modem. the transmit / receive frequencies are the reverse of the originating modem, which is in originate mode. a computer program designed to perform a specific task or set of tasks. examples include word processing and spreadsheet applications. automatic repeat request. a function that allows your modem to detect flawed data and request that it be retransmitted. see mnp and v. 42. american standard code for information interchange. a code used to represent letters, numbers, and special characters such as $,!, and /. data transmission in which the length of time between transmitted characters may vary. because characters may not be transmitted at set intervals, start / stop bits are used to mark the beginning and end of each character. sets the modem to pick up the phone line when it detects a certain number of rings. see s - register s0 in the technical reference section of this guide. a process where your modem dials a call for you. the dialing process is initiated by sending an atdt ( dial tone ) or atdp ( dial pulse ) command followed by the telephone number. auto - dial is used to dial voice numbers. see basic data command dn in the technical reference section of this guide. a term used to measure the speed of an analog transmission from one point to another. although not technically accurate, baud rate is commonly used to mean bit rate. a 0 or 1, reflecting the use of the binary numbering system. used because the computer recognizes either of two states, off or on. shortened form of binary digit is bit. also referred to as transmission rate. the number of binary digits, or bits, transmitted per second ( bps ). communications channels using analog modems are established at set bit rates, commonly 2400, 4800, 9600, 14, 400, 28, 800, 33, 600, and higher. bits per second ( bps ) the bits ( binary digits ) per second rate. thousands
|
subdomain_quantum_information_theory
| 0.657495
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
| 0
| 0.6
|
2025-12-22T14:56:15.486958
|
are established at set bit rates, commonly 2400, 4800, 9600, 14, 400, 28, 800, 33, 600, and higher. bits per second ( bps ) the bits ( binary digits ) per second rate. thousands of bits per second are expressed as kilobits per second ( kbps ). a temporary memory area used as storage during input and output operations. an example is the modem ' s command buffer. a group of binary digits stored and operated upon as a unit. most often the term refers to 8 - bit units or characters. one kilobyte ( kb ) is equal to 1, 024 bytes or characters ; 640 kb is equal to 655, 360 bytes or characters. the basic signal altered or modulated by the modem in order to carry information. a representation, coded in binary digits, of a letter, number, or other symbol. characters per second ( cps ) a data transfer rate generally estimated from the bit rate and the character length. for example, at 2400 bps, 8 - bit characters with start / stop bits ( for a total of ten bits per character ) will be transmitted at a rate of approximately 240 characters per second ( cps ). some protocols, such as error - control protocols, employ advanced techniques such as longer transmission frames and data compression to increase cps. class 1 and 2. 0 international standards used by fax application programs and faxmodems for sending and receiving faxes. cyclic redundancy checking ( crc ) an error - detection technique consisting of a test performed on each block or frame of data by both sending and receiving modems. the sending modem inserts the results of its tests in each data block in the form of a crc code. the receiving modem compares its results with the received crc code and responds with either a positive or negative acknowledgment. the transmission or sharing of data between computers via an electronic medium. data compression table a table containing values assigned for each character during a call under mnp5 data compression. default values in the table are continually altered and built during each call : the longer the table, the more efficient throughput gained. mode used by a modem when sending and receiving data files. data communications ( or circuit - terminating ) equipment, such as dial - up modems that establish and control the data link via the telephone network. any setting assumed, at startup or reset, by the computer ' s software and attached devices.
|
subdomain_quantum_information_theory
| 0.647277
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
| 1
| 0.6
|
2025-12-22T14:56:15.488183
|
files. data communications ( or circuit - terminating ) equipment, such as dial - up modems that establish and control the data link via the telephone network. any setting assumed, at startup or reset, by the computer ' s software and attached devices. the computer or software will use these settings until changed by the user or other software. a test that checks the modem ' s rs - 232 interface and the cable that connects the terminal or computer and the modem. the modem receives data ( in the form of digital signals ) from the computer or terminal and immediately returns the data to the screen for verification. discrete, uniform signals. in this guide, the term refers to the binary digits 0 and 1. data terminal ( or terminating ) equipment. a computer that generates or is the final destination of data. indicates a communications channel capable of carrying signals in both directions. see half - duplex, full - duplex. electronic industries association ( eia ) group which defines electronic standards in the u. s. various techniques that check the reliability of characters ( parity ) or blocks of data. v. 42 and mnp error - control protocols use error detection ( crc ) and retransmission of flawed frames ( arq ). a method for transmitting the image on a page from one point to another. commonly referred to as fax. the mode used by a modem to send and receive data in facsimile format. see definitions for v. 17, v. 27 ter, v. 29. a mechanism that compensates for differences in the flow of data into and out of a modem or other device. see extended data commands & hn, & in, & rn in the technical reference section of this guide. a data communications term for a block of data with header and trailer information attached. the added information usually includes a frame number, block size data, error - check codes, and start / end indicators. signals can flow in both directions at the same time over one line. in microcomputer communications, this may refer to the suppression of the online local echo. signals can flow in both directions, but only one way at a time. in microcomputer communications, may refer to activation of the online local echo, which causes the modem to send a copy of the transmitted data to the screen of the sending computer. hertz, a frequency measurement unit used internationally to indicate cycles per second. an electronic communications network that connects computer networks and organizational computer facilities around the world. internet service
|
subdomain_quantum_information_theory
| 0.633606
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
| 2
| 0.6
|
2025-12-22T14:56:15.489297
|
next higher speed. the mode used by your modem when initiating an outgoing call to a destination modem. the transmit / receive frequencies are the reverse of the called modem, which is in answer mode. a simple error - detection method that checks the validity of a transmitted character. character checking has been surpassed by more reliable and efficient forms of error checking, including v. 42 and mnp 2 - 4 protocols. either the same type of parity must be used by two communicating computers, or both may omit parity. a system of rules and procedures governing communications between two or more devices. protocols vary, but communicating devices must follow the same protocol in order to exchange data. the format of the data, readiness to receive or send, error detection and error correction are some of the operations that may be defined in protocols. random access memory. memory that is available for use when the modem is turned on, but that clears of all information when the power is turned off. the modem ' s ram holds the current operational settings, a flow control buffer, and a command buffer. remote digital loopback a test that checks the phone link and a remote modem ' s transmitter and receiver. a copy of the data received by the remote system, returned to the sending system, and displayed on the screen. remote echoing is a function of the remote system. read only memory. permanent memory, not user - programmable. the consecutive flow of data in a single channel. compare to parallel transmissions where data flows simultaneously in multiple channels. the signaling bits attached to a character before and after the character is transmitted during asynchronous transmission. a device whose keyboard and display are used for sending and receiving data over a communications link. differs from a microcomputer or a mainframe in that it has little or no internal processing capabilities. software mode that allows direct communication with the modem. also known as command mode. the amount of actual user data transmitted per second without the overhead of protocol information such as start / stop bits or frame headers and trailers. compare with characters per second. the itu - t standard specification that covers the initial handshaking process. an itu - t standard for making facsimile connections at 14, 400 bps, 12, 000 bps, 9, 600 bps, and 7, 200 bps. an itu - t standard for modems operating in asynchronous mode at speeds up to 300 bps, full - duplex, on public switched telephone
|
subdomain_quantum_information_theory
| 0.640448
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:8b850b22-852e-4826-a9cd-72324510d250>
| 4
| 0.6
|
2025-12-22T14:56:15.492673
|
robb godshaw, from syynlabs, has created a haptic cube that gives you an impression of what the temperature will be like tomorrow. the cube, which godshaw has named the cryoscope, consists of an aluminium shell surrounding a peltier element, heatsink, cooling fan and an led, all controlled by an arduino. the cube is heated to a " neutral " state of 30c, and then adjusted by the number of degrees that the next day ' s forecast differs from room temperature ( 23c ). it takes into account wind chill and humidity to give an idea what the following day will " feel " like, rather than merely reflecting air temperature. so, for example, if the forecast for the next day is for 18c, once those factors are all taken into account, the cube ' s temperature will decrease five degrees from 30c to 25c, resulting in it being slightly cool to the touch. godshaw describes it in the video above as a " haptic weathervane ", adding : " users enter their location into a web app. the cube then automatically adjusts to the forecasted temperature. by touching the cryoscope, the user is able to feel tomorrow ' s air you can see the cryoscope in action over on godshaw ' s website. updated 08 : 29 09 / 05 / 2012 : godshaw has redesigned the cryoscope and is raising money for full production over on kickstarter.
|
subdomain_quantum_metrology
| 0.609802
| 298
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:dcbce5d9-dd4b-4753-953d-726754de7973>
| 0
| 0.6
|
2025-12-22T14:56:15.717609
|
first mechanical calculator. this machine is considered to be the forerunner of the modern computer though none of them were built in his lifetime. study of the molecules and proteins that are the basis of biological functions has led to the concept of a molecular machine. for example, current models of the operation of the kinesin molecule that transports vesicles inside the cell as well as the myocin molecule that operates against actin to cause muscle contraction ; these molecules control movement in response to chemical stimuli. researchers in nano - technology are working to construct molecules that perform movement in response to a specific stimulus. in contrast to molecules such as kinesin and myosin, these nano - machines or molecular machines are constructions like traditional machines that are designed to perform in a task. machines are assembled from standardized types of components. these elements consist of mechanisms that control movement in various ways such as gear trains, transistor switches, belt or chain drives, linkages, cam and follower systems, brakes and clutches, and structural components such as frame members and fasteners. modern machines include sensors, actuators and computer controllers. the shape, texture and color of covers provide a styling and operational interface between the mechanical components of a machine and its users. assemblies within a machine that control movement are often called " mechanisms. " mechanisms are generally classified as gears and gear trains, cam and follower mechanisms, and linkages, though there are other special mechanisms such as clamping linkages, indexing mechanisms and friction devices such as brakes and clutches. controllers combine sensors, logic, and actuators to maintain the performance of components of a machine. perhaps the best known is the flyball governor for a steam engine. examples of these devices range from a thermostat that as temperature rises opens a valve to cooling water to speed controllers such the cruise control system in an automobile. the programmable logic controller replaced relays and specialized control mechanisms with a programmable computer. servomotors that accurately position a shaft in response to an electrical command are the actuators that make robotic systems possible. design plays an important role in all three of the major phases of a product lifecycle : the industrial revolution was a period from 1750 to 1850 where changes in agriculture, manufacturing, mining, transportation, and technology had a profound effect on the social, economic and cultural conditions of the times. it began in the united kingdom, then subsequently spread throughout western europe, north america, japan, and eventually the rest of the world. starting in the later part of
|
subdomain_quantum_materials
| 0.612033
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:28a7c1f3-c595-43bb-b7f3-2960d5ccb10f>
| 3
| 0.6
|
2025-12-22T14:56:16.410541
|
date : december 2004 creator : habel, agnieszka description : this problem in lieu of thesis is a discussion of two topics : brownian movement and quantum computers. brownian movement is a physical phenomenon in which the particle velocity is constantly undergoing random fluctuations. chapters 2, 3 and 4, describe brownian motion from three different perspectives. the next four chapters are devoted to the subject of quantum computers, which are the signal of a new era of technology and science combined together. in the first chapter i present to a reader the two topics of my problem in lieu of thesis. in the second chapter i explain the idea of brownian motion, its interpretation as a stochastic process and i find its distribution function. the next chapter illustrates the probabilistic picture of brownian motion, where the statistical averages over trajectories are related to the probability distribution function. chapter 4 shows how to derive the langevin equation, introduced in chapter 1, using a hamiltonian picture of a bath with infinite number of harmonic oscillators. the chapter 5 explains how the idea of quantum computers was developed and how step - by - step all the puzzles for the field of quantum computers were created. the next chapter, chapter 6, discus the basic quantum unit of information... contributing partner : unt libraries
|
subdomain_quantum_simulation
| 0.743965
| 265
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:36a21367-2041-439f-8700-4349c0abc5be>
| 0
| 0.6
|
2025-12-22T14:56:17.008104
|
real form ( lie theory ) in mathematics, the notion of a real form relates objects defined over the field of real and complex numbers. a real lie algebra g0 is called a real form of a complex lie algebra g if g is the complexification of g0 : real forms for lie groups and algebraic groups using the lie correspondence between lie groups and lie algebras, the notion of a real form can be defined for lie groups. in the case of linear algebraic groups, the notions of complexification and real form have a natural description in the language of algebraic geometry. just as complex semisimple lie algebras are classified by dynkin diagrams, the real forms of a semisimple lie algebra are classified by satake diagrams, which are obtained from the dynkin diagram of the complex form by labeling some vertices black ( filled ), and connecting some other vertices in pairs by arrows, according to certain rules. it is a basic fact in the structure theory of complex semisimple lie algebras that every such algebra has two special real forms : one is the compact real form and corresponds to a compact lie group under the lie correspondence ( its satake diagram has all vertices blackened ), and the other is the split real form and corresponds to a lie group that is as far as possible from being compact ( its satake diagram has no vertices blackened and no arrows ). in the case of the complex special linear group sl ( n, c ), the compact real form is the special unitary group su ( n ) and the split real form is the real special linear group sl ( n, r ). the classification of real forms of semisimple lie algebras was accomplished by elie cartan in the context of riemannian symmetric spaces. in general, there may be more than two real forms. suppose that g0 is a semisimple lie algebra over the field of real numbers. by cartan ' s criterion, the killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries + 1 or - 1. by sylvester ' s law of inertia, the number of positive entries, or the positive index of intertia, is an invariant of the bilinear form, i. e. it does not depend on the choice of the diagonalizing basis. this is a number between 0 and the dimension of g which is an important invariant of the real lie algebra, called its index. split real form a real form g0 of a complex semisimple lie algebra g
|
subdomain_quantum_field_theory
| 0.602044
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:307c1388-d49d-46d7-a722-4f52f24df709>
| 0
| 0.6
|
2025-12-22T14:56:17.177721
|
choice of the diagonalizing basis. this is a number between 0 and the dimension of g which is an important invariant of the real lie algebra, called its index. split real form a real form g0 of a complex semisimple lie algebra g is said to be split, or normal, if in each cartan decomposition g0 = k0 ⊕ p0, the space p0 contains a maximal abelian subalgebra of g0, i. e. its cartan subalgebra. elie cartan proved that every complex semisimple lie algebra g has a split real form, which is unique up to isomorphism. it has maximal index among all real forms. the split form corresponds to the satake diagram with no vertices blackened and no arrows. compact real form a real lie algebra g0 is called compact if the killing form is negative definite, i. e. the index of g0 is zero. in this case g0 = k0 is a compact lie algebra. it is known that under the lie correspondence, compact lie algebras correspond to compact lie groups. the compact form corresponds to the satake diagram with all vertices blackened. construction of the compact real form in general, the construction of the compact real form uses structure theory of semisimple lie algebras. for classical lie algebras there is a more explicit construction. let g0 be a real lie algebra of matrices over r that is closed under the transpose map, the complexification g of g0 decomposes into the direct sum of g0 and ig0. the real vector space of matrices is a subspace of the complex lie algebra g that is closed under the commutators and consists of skew - hermitian matrices. it follows that u0 is a real lie subalgebra of g, that its killing form is negative definite ( making it a compact lie algebra ), and that the complexification of u0 is g. therefore, u0 is a compact form of g. see also - helgason 1978, p. 426
|
subdomain_quantum_field_theory
| 0.60119
| 418
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:307c1388-d49d-46d7-a722-4f52f24df709>
| 1
| 0.6
|
2025-12-22T14:56:17.179044
|
hypothesis 1, 2, 3, 4, 5, 6 and 7 will be rejected. in variance analysis, at the significance 0. 05 is with n = 30. - ii. literature review the use of brainstorming brainstorming is a group creativity technique designed to generate a large number of ideas for the solution of a problem. in 1953 the method was popularized by alex faickney osborn in a book called applied imagination. osborn proposed that groups could double their creative output with brainstorming. oxford defined that brainstorming is a way of making a group of people all think about something at the same time, often in order to solve a problem or to create good ideas. brainstorming is the name given to a situation when a group of people meet to generate new ideas around a specific area of interest. using rules which remove inhibitions, people are able to think more freely and move into new areas of thought and so create numerous new ideas and solutions. the participants shout out ideas as they occur to them and then build on the ideas raised by others. all the ideas are noted down and are not criticized. only when the brainstorming session is over are the ideas evaluated. the other meaning of brainstorming is to brainstorm is to use a set of specific rules and techniques which encourage and spark off new ideas which would never have happened under normal circumstances. so there you have it : brainstorming will help you come up with new ideas. and not only will you come up with new ideas but you will do so with surprisingly little effort. brainstorming makes the generation of new ideas easy and is a tried - and - tested process. exactly what you apply brainstorming techniques to depends on what you want to achieve. you can apply them to develop new products, services and processes in your job, or you can apply them to develop your personal life. there are two models of brainstorming - traditional brainstorming the normal view of brainstorming is where a group of people sit in a room and shout out ideas as they occur to them. they are told to lose their inhibitions and that no ideas will be judged so that people are free to shout out any ideas at all without feeling uncomfortable. people should build on the ideas called out by other participants. the purpose of this is to gain as many ideas as possible for later analysis. out of the many ideas suggested there will be some of great value. because of the free - thinking environment, the session will help promote radical new ideas which break free from
|
subdomain_quantum_field_theory
| 0.612087
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:6fd5ee54-081f-40f3-bfae-bd27fc2c153b>
| 2
| 0.6
|
2025-12-22T14:56:17.467813
|
decoupled mar 27, 2013 | 4. 9 / 5 ( 8 ) | 0 - sizing things up : the evolutionary neurobiology of scale invariance feb 28, 2013 | 4. 8 / 5 ( 10 ) | 14 classical and quantum mechanics via lie algebras apr 15, 2011 i ' d like to open a discussion thread for version 2 of the draft of my book ' ' classical and quantum mechanics via lie algebras ' ', available online at http : / / lanl. arxiv. org / abs / 0810. 1019, and for the... - more from physics forums - independent research more news stories no new human cases of the h7n9 virus have been recorded in china for a week, national health authorities said, for the first time since the outbreak began in march. diseases, conditions, syndromes 33 minutes ago | not rated yet | 0 a nobel prize - winning scientist tuesday played down " shock - horror scenarios " that a new virus strain will emerge with the potential to kill millions of people. diseases, conditions, syndromes 51 minutes ago | 5 / 5 ( 1 ) | 0 bacteria resistant to the antibiotic colistin are also commonly resistant to antimicrobial substances made by the human body, according to a study in mbio, the online open - access journal of the american society for microb... diseases, conditions, syndromes 5 hours ago | 5 / 5 ( 1 ) | 0 ( ap ) — federal investigators probing the hantavirus outbreak blamed for three deaths at yosemite national park recommend that design changes to tent cabins and other lodging run by private concessionaires first be reviewed... diseases, conditions, syndromes 11 hours ago | not rated yet | 0 a new diagnostic test for a worm infection that can lead to severe enlargement and deformities of the legs and genitals is far more sensitive than the currently used test, according to results of a field... diseases, conditions, syndromes 12 hours ago | not rated yet | 0 | ( medical xpress ) — a three - year multinational study has tracked and detailed the progression of huntington ' s disease ( hd ), predicting clinical decline in people carrying the hd gene more than 10 years before... 44 seconds ago | not rated yet | 0 1 hour ago | not rated yet | 0 | ( medical xpress ) — a research team, led by jeremy barr, a biology post - doctoral fellow, unveils a new
|
subdomain_quantum_field_theory
| 0.604922
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:5639c852-ee99-4994-8b0d-957aaa025883>
| 2
| 0.6
|
2025-12-22T14:56:17.724883
|
and applications, nasa / marshall space flight center, huntsville, alabama 35812 ; # department of chemical engineering, university of alabama in huntsville, huntsville, alabama 35899 ; § biochemistry department, michigan state university, east lansing, michigan 48825 ; and ¶ biophysics sd48, nasa / marshall space flight center, huntsville, alabama 35812 usa part of the challenge of macromolecular crystal growth for structure determination is obtaining crystals with a volume suitable for x - ray analysis. in this respect an understanding of the effect of solution conditions on macromolecule nucleation rates is advantageous. this study investigated the effects of supersaturation, temperature, and ph on the nucleation rate of tetragonal lysozyme crystals. batch crystallization plates were prepared at given solution concentrations and incubated at set temperatures over 1 week. the number of crystals per well with their size and axial ratios were recorded and correlated with solution conditions. crystal numbers were found to increase with increasing supersaturation and temperature. the most significant variable, however, was ph ; crystal numbers changed by two orders of magnitude over the ph range 4. 0 - 5. 2. crystal size also varied with solution conditions, with the largest crystals obtained at ph 5. 2. having optimized the crystallization conditions, we prepared a batch of crystals under the same initial conditions, and 50 of these crystals were analyzed by x - ray diffraction techniques. the results indicate that even under the same crystallization conditions, a marked variation in crystal properties exists. more space science headlines - nasa research on the web life and microgravity sciences and applications information from nasa hq on science in space microgravity research programs office headquartered at marshall space flight center microgravity news online version of nasa ' s latest in microgravity advancements, published quarterly. join our growing list of subscribers - sign up for our express news delivery and you will receive a mail message every time we post a new story!!! for more information, please contact : | dr. john m. horack, director of science communications curator : linda porter nasa official : m. frank rose
|
subdomain_quantum_materials
| 0.604544
| 438
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:2a7ca019-7b31-4e9b-8c46-9219b443a12f>
| 5
| 0.6
|
2025-12-22T14:56:18.003876
|
elements | blogs wednesday, september 7, 2011 is there oxygen in space? yes, this summer astronomers using the herschel telescope identified oxygen molecules in space. they found these molecules in the orion nebula, 1, 344 light years away. oxygen is the third most abundant element in the universe. until now, scientists have only seen individual oxygen atoms in space. we do not breathe individual oxygen atoms, but rather oxygen molecules. ( a molecule is a group of atoms banded together and it is the smallest unit of chemical compound that can take part in a chemical reaction. ) oxygen molecules make up 20 % of the air we breathe. scientists theorize that the oxygen molecules were locked up in water ice that... thursday, march 10, 2011 i ' m atoms ( scientific cover of jason mraz ' s i ' m yours ) here in chicago it has been gray for the last three weeks – no sun, just melting snow and rain. this song made our day. it has sunshine, great music and atoms! the lyrics include fabulous lines such as : “ atoms bond together to form molecules most of what ’ s surrounding me and you … ” this science verse has been set to the music of jason mraz ’ s “ i ’ m yours ”. this is a must watch! saturday, february 26, 2011 the deep carbon observatory here at supersmart carbon, we love learning about carbon. apparently, we are not alone. there is a project being launched called the deep carbon observatory that is being funded by the alfred p. sloan foundation. the purpose of this group is to study carbon deep inside the earth. carbon makes up somewhere from 0. 7 % to 3. 2 % of the earth ’ s elements. we know that there is carbon trapped under the earth ’ s crust, but we don ’ t know how much. the deep carbon observatory is going to study how much carbon there is in the earth and what happens to it. another question is what form is the... friday, february 25, 2011 where does gas come from? carbon! ( we always love it when the answer is carbon. ) the gas we use to power our cars comes from decomposing organic matter. what does that mean? all life has carbon in it - - this includes everything living from you and me to zebras, tapeworms, tulips and seaweed. since all living things have carbon in them, they are referred to as organic matter. non - organic matter includes things like rocks, water and metals. when something organic dies
|
subdomain_quantum_materials
| 0.651341
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:b5177112-be1e-4086-9d85-858522f9c4b9>
| 0
| 0.6
|
2025-12-22T14:56:18.085516
|
so write it offline in an editor ( e. g., notepad ) and paste it in your little post box, viz. : from wikipedia, the free encyclopedia this article is about the general notion of determinism in philosophy. for other uses, see determinism ( disambiguation ). not to be confused with fatalism, predeterminism, or predictability. determinism is a metaphysical philosophical position stating that for everything that happens there are conditions such that, given those conditions, nothing else could happen. " there are many determinisms, depending upon what pre - conditions are considered to be determinative of an event. " determinism throughout the history of philosophy has sprung from diverse considerations, some of which overlap. some forms of determinism can be tested empirically with ideas stemming from physics and the philosophy of physics. the opposite of determinism is some kind of indeterminism ( otherwise called nondeterminism ). determinism is often contrasted with free will. determinism often is taken to mean simply causal determinism, that is, basing determinism upon the idea of cause - and - effect. it is the concept that events within a given paradigm are bound by causality in such a way that any state ( of an object or event ) is completely determined by prior states. this meaning can be distinguished from other varieties of determinism mentioned below. the introduction of " cause - and - effect " introduces unnecessary complications related to what is meant by a ' cause ' and how the presence of a ' cause ' might be established, the interpretation of which varies from one physical theory to another. these complications are avoided by a more general formulation based upon connections between ' events ' supplied by a theory : " a theory is deterministic if, and only if, given its state variables for some initial period, the theory logically determines a unique set of values for those variables for any other period. " — ernest nagel, alternative descriptions of physical state p. 292 this quote replaces the idea of ' cause - and - effect ' with that of ' logical implication ' according to one or another theory that connects events. in addition, an ' event ' is related by the theory itself to formalized states described using the parameters defined by that theory. thus, the details of interpretation are placed where they belong, fitted to the context in which the chosen theory applies. other debates often concern the scope of determined systems, with some maintaining that
|
subdomain_quantum_field_theory
| 0.615754
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
| 0
| 0.6
|
2025-12-22T14:56:18.468181
|
to formalized states described using the parameters defined by that theory. thus, the details of interpretation are placed where they belong, fitted to the context in which the chosen theory applies. other debates often concern the scope of determined systems, with some maintaining that the entire universe ( or multiverse ) is a single determinate system and others identifying other more limited determinate systems. for example, using the definition of physical determinism above, the limitations of a theory to some particular domain of experience also limits the associated definition of ' determinism ' to that same domain. there are numerous historical debates involving many philosophical positions and varieties of determinism. they include debates concerning determinism and free will, technically denoted as compatibilistic ( allowing the two to coexist ) and incompatibilistic ( denying their coexistence is a possibility ). determinism should not be confused with self - determination of human actions by reasons, motives, and desires. determinism rarely requires that perfect prediction be practically possible – merely predictable in theory. many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path causal determinism is " the idea that every event is necessitated by antecedent events and conditions together with the laws of nature ". however, causal determinism is a broad enough term to consider that " one ' s deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. in other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way ". causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. the relation between events may not be specified, nor the origin of that universe. causal determinists believe that there is nothing uncaused or self - caused. historical determinism ( a sort of path dependence ) can also be synonymous with causal determinism. nomological determinism ( sometimes called ' scientific ' determinism, although that is a misnomer ) is the most common form of causal determinism. it is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. quantum mechanics and
|
subdomain_quantum_field_theory
| 0.661324
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
| 1
| 0.6
|
2025-12-22T14:56:18.469207
|
misnomer ) is the most common form of causal determinism. it is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. quantum mechanics and various interpretations thereof pose a serious challenge to this view. nomological determinism is sometimes illustrated by the thought experiment of laplace ' s demon. physical determinism holds holds that all physical events occur as described by physical laws. depending upon definitions, there is some room here for the view that not everything in the universe must be tied to some physical state, but that view is not usually emphasized by adherents of physical determinism because of the widely accepted scientific view that the operation of all physical systems ( often unnecessarily taken to mean everything ) can be explained entirely in physical terms, the assumed causal closure of physics. necessitarianism is very related to the causal determinism described above. it is a metaphysical principle that denies all mere possibility ; there is exactly one way for the world to be. leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity. predeterminism is the idea that all events are determined in advance. the concept of predeterminism is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. in the case of predeterminism, this chain of events has been pre - established, and human actions cannot interfere with the outcomes of this pre - established chain. predeterminism can be used to mean such pre - established causal determinism, in which case it is categorised as a specific type of determinism. it can also be used interchangeably with causal determinism - in the context of its capacity to determine future events. despite this, predeterminism is often considered as independent of causal determinism. the term predeterminism is also frequently used in the context of biology and hereditary, in which case it represents a form of biological determinism. fatalism is normally distinguished from " determinism ". fatalism is the idea that everything is fated to happen, so that humans have no control over their future. fate has arbitrary power, and need not follow any causal or otherwise deterministic laws. types of fatalism include hard theological determinism and the idea of predestination
|
subdomain_quantum_mechanics
| 0.636139
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
| 2
| 0.6
|
2025-12-22T14:56:18.470334
|
the past, present, or future, are either true or false. note that one can support causal determinism without necessarily supporting logical determinism and vice versa ( depending on one ' s views on the nature of time, but also randomness ). the problem of free will is especially salient now with logical determinism : how can choices be free, given that propositions about the future already have a truth value in the present ( i. e. it is already determined as either true or false )? this is referred to as the problem of future contingents. adequate determinism focuses on the fact that, even without a full understanding of microscopic physics, we can predict the distribution of 1000 coin tosses often synonymous with logical determinism are the ideas behind spatio - temporal determinism or eternalism : the view of special relativity. j. j. c. smart, a proponent of this view, uses the term " tenselessness " to describe the simultaneous existence of past, present, and future. in physics, the " block universe " of hermann minkowski and albert einstein assumes that time is a fourth dimension ( like the three spatial dimensions ). in other words, all the other parts of time are real, like the city blocks up and down a street, although the order in which they appear depends on the driver ( see rietdijk – putnam argument ). adequate determinism is the idea that quantum indeterminacy can be ignored for most macroscopic events. this is because of quantum decoherence. random quantum events " average out " in the limit of large numbers of particles ( where the laws of quantum mechanics asymptotically approach the laws of classical mechanics ). stephen hawking explains a similar idea : he says that the microscopic world of quantum mechanics is one of determined probabilities. that is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate ( albeit still not perfectly certain ) at larger scales. something as large as an animal cell, then, would be " adequately determined " ( even in light of quantum indeterminacy ). nature and nurture interact in humans. a scientist looking at a sculpture after some time does not ask whether we are seeing the effects of the starting materials or environmental influences. although some of the above forms of determinism concern human behaviors and cognition, others frame themselves as an answer to the nature or nurture debate. they will suggest that one factor will entirely determine behavior. as scientific understanding has grown, however
|
subdomain_quantum_mechanics
| 0.654034
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:fa5b85ba-af47-43af-a2ff-d30a8b594bab>
| 4
| 0.6
|
2025-12-22T14:56:18.472547
|
science fair project encyclopedia the sampling frequency or sampling rate defines the number of samples per second taken from a continuous signal to make a discrete signal. the inverse of the sampling frequency is the sampling period or sampling time, which is the time between samples. the sampling frequency can only be applied to samplers in which each sample is periodically taken. there is no rule that limits a sampler from taking a sample at a non - periodic rate. if a signal has a bandwidth of 100 hz then to avoid aliasing the sampling frequency must be greater than 200 hz. in some cases, it is desirable to have a sampling frequency more than twice the bandwidth so that a digital filter can be used in exchange for a weaker analog anti - aliasing filter. this process is known as oversampling. in digital audio, common sampling rates are : - 8, 000 hz - telephone, adequate for human speech - 11, 025 hz - 22, 050 hz - radio - 44, 100 hz - compact disc - 48, 000 hz - digital sound used for films and professional audio - 96, 000 or 192, 400 hz - dvd - audio, some lpcm dvd audio tracks, bd - rom ( blu - ray disc ) audio tracks, and hd - dvd ( high - definition dvd ) audio tracks in digital video, which uses a ccd as the sensor, the sampling rate is defined the frame / field rate, rather than the notional pixel clock. all modern tv cameras use ccds, and the image sampling frequency is the repetition rate of the ccd integration period. - 13. 5 mhz - ccir 601, d1 video - continuous signal vs. discrete signal - digital control - sample and hold - sample ( signal ) - sampling ( information theory ) - signal ( information theory ) the contents of this article is licensed from www. wikipedia. org under the gnu free documentation license. click here to see the transparent copy and copyright details
|
subdomain_quantum_metrology
| 0.601603
| 390
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:d25b5562-8f30-4fd1-bc51-46f94956427e>
| 0
| 0.6
|
2025-12-22T14:56:18.532679
|
the life - giving ideas of chemistry are not reducible to physics. or, if one tries to reduce them, they wilt at the edges, lose not only much of their meaning, but interest too. and, most importantly, they lose their chemical utility — their ability to relate seemingly disparate compounds to each other, their fecundity in inspiring new experiments. i ' m thinking of concepts such as the chemical bond, a functional group and the logic of substitution, aromaticity, steric effects, acidity and basicity, electronegativity and oxidation - reduction. as well as some theoretical ideas i ' ve been involved in personally — through - bond coupling, orbital symmetry control, the isolobal analogy. consider the notion of oxidation state. if you had to choose two words to epitomize the same - and - not - the - same nature of chemistry, would you not pick ferrous and ferric? the concept evolved at the end of the 19th century ( not without confusion with " valency " ), when the reality of ions in solution was established. as did a multiplicity of notations — ferrous iron is iron in an oxidation state of + 2 ( or is it 2 +? ) or fe ( ii ). schemes for assigning oxidation states ( sometimes called oxidation numbers ) adorn every introductory chemistry text. they begin with the indisputable : in compounds, the oxidation states of the most electronegative elements ( those that hold on most tightly to their valence electrons ), oxygen and fluorine for example, are – 2 and – 1, respectively. after that the rules grow ornate, desperately struggling to balance wide applicability with simplicity. the oxidation - state scheme had tremendous classificatory power ( for inorganic compounds, not organic ones ) from the beginning. think of the sky blue color of chromium ( ii ) versus the violet or green of chromium ( iii ) salts, the four distinctly colored oxidation states of vanadium. oliver sacks writes beautifully of the attraction of these colors for a boy starting out in chemistry. and not only boys. but there was more to oxidation states than just describing color. or balancing equations. chemistry is transformation. the utility of oxidation states dovetailed with the logic of oxidizing and reducing agents — molecules and ions that with ease removed or added electrons to other molecules. between electron transfer and proton transfer you have much of reaction chemistry. i want to tell you how this logic leads to quite
|
subdomain_quantum_materials
| 0.690693
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
| 0
| 0.6
|
2025-12-22T14:56:18.645122
|
. people in the trade will recognize that i ' m talking about " mulliken population analysis " or " natural bond analysis " or richard bader ' s beautifully worked out scheme for dividing up space in a molecule. what about experiment? is there an observable that might gauge a charge on an atom? i think photoelectron spectroscopies ( esca or auger ) come the closest. here one measures the energy necessary to promote an inner - core electron to a higher level or to ionize it. atoms in different oxidation states do tend to group themselves at certain energies. but the theoretical framework that relates these spectra to charges depends on the same assumptions that bedevil the definition of a charge on an atom. an oxidation state bears little relation to the actual charge on the atom ( except in the interior of the sun, where ligands are gone, there is plenty of energy, and you can have iron in oxidation states up to + 26 ). this doesn ' t stop the occasional theoretician today from making a heap of a story when the copper in a formal cu ( iii ) complex comes out of a calculation bearing a charge of, say, + 0. 51. nor does it stop oxidation states from being just plain useful. many chemical reactions involve electron transfer, with an attendant complex of changes in chemical, physical and biological properties. oxidation state, a formalism and not a representation of the actual electron density at a metal center, is a wonderful way to " bookkeep " electrons in the course of a reaction. even if that electron, whether added or removed, spends a good part of its time on the ligands. but enough theory, or, as some of my colleagues would sigh, anthropomorphic platitudes. let ' s look at some beautiful chemistry of extreme oxidation states. incredible, but true recently, a young polish postdoctoral associate, wojciech grochala, led me to look with him at the chemical and theoretical design of novel high - temperature superconductors. we focused on silver ( ag ) fluorides ( f ) with silver in oxidation states ii and iii. the reasoning that led us there is described in our forthcoming paper. for now let me tell you about some chemistry that i learned in the process. i can only characterize this chemistry as incredible but true. ( some will say that i should have known about it, since it was hardly hidden, but the fact is i didn ' t. ) here is what ag ( ii
|
subdomain_quantum_materials
| 0.663824
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
| 2
| 0.6
|
2025-12-22T14:56:18.650384
|
as teflon and kel - f, synthetic sapphire and platinum, manipulation of and physicochemical investigation of hf solutions in closed systems is now reasonably straightforward. " for this we must thank the pioneers in the field — generations of fluorine chemists, but especially bartlett and boris zemva of the university of ljubljana. bartlett reports the oxidation of agf2 to agf4 – ( as kagf4 ) using photochemical irradiation of f2 in anhydrous hf ( made less acidic by adding kf to the hf ). and zemva used kr2 + ( in krf2 ) to react with agf2 in anhydrous hf in the presence of xef6 to make xef5 + agf4 –. what a startling list of reagents! to appreciate the difficulty and the inspiration of this chemistry, one must look at the original papers, or at the informal letters of the few who have tried it. you can find some of neil bartlett ' s commentary in the article that wojciech and i wrote, and in an interview with him. charge it, please chemists are always changing things. how to tune the propensity of a given oxidation state to oxidize or reduce? one way to do it is by changing the charge on the molecule that contains the oxidizing or reducing center. the syntheses of the silver fluorides cited above contain some splendid examples of this strategy. let me use bartlett ' s words again, just explaining that " electronegativity " gauges in some rough way the tendency of an atom to hold on to electrons. ( high electronegativity means the electron is strongly held, low electronegativity that it is weakly held. ) it ' s easy to make a high oxidation state in an anion because an anion is electron - rich. the electronegativity is lower for a given oxidation state in an anion than it is in a neutral molecule. that in turn, is lower than it is in a cation. if i take silver and i expose it to fluorine in the presence of fluoride ion, in hf, and expose it to light to break of f2 to atoms, i convert the silver to silver ( iii ), agf4 -. this is easy because the ag ( iii ) is in an anion. i can then pass in boron trifluoride and precipitate silver
|
subdomain_quantum_materials
| 0.622219
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
| 4
| 0.6
|
2025-12-22T14:56:18.652139
|
to atoms, i convert the silver to silver ( iii ), agf4 -. this is easy because the ag ( iii ) is in an anion. i can then pass in boron trifluoride and precipitate silver trifluoride, which is now a much more potent oxidizer than agf4 - because the electronegativity in the neutral agf3 is much higher than it is in the anion. if i can now take away a fluoride ion, and make a cation, i drive the electronegativity even further up. with such a cation, for example, agf2 +, i can steal the electron from ptf6 - and make ptf6.... this is an oxidation that even kr ( ii ) is unable to bring about. simple, but powerful reasoning. and it works. a world record? finally, a recent oxidation - state curiosity : what is the highest oxidation state one could get in a neutral molecule? pekka pyykko and coworkers suggest cautiously, but i think believably, that octahedral uo6, that is u ( xii ), may exist. there is evidence from other molecules that uranium 6p orbitals can get involved in bonding, which is what they would have to do in uo6. what wonderful chemistry has come — and still promises to come — from the imperfect logic of oxidation states! © roald hoffmann i am grateful to wojciech grochala, robert fay and debra rolison for corrections and comments. thanks to stan marcus for suggesting the title of this column.
|
subdomain_quantum_materials
| 0.611249
| 340
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113>
| 5
| 0.6
|
2025-12-22T14:56:18.652681
|
quantum time waits for no quantum theory, also quantum mechanics, in physics, a theory based on using the concept of the quantum unit to describe the dynamic properties of subatomic particles and the interactions of matter and radiation. the foundation was laid by the german physicist max planck, who postulated in 1900 that energy can be emitted or absorbed by matter only in small, discrete units called quanta. fundamental to the development of quantum mechanics was the uncertainty principle, formulated by the german physicist werner heisenberg in 1927, which states that the position and momentum of a subatomic particle cannot be specified simultaneously. spectral lines of atomic hydrogen : when an electron makes a transition from one energy level to another, the electron emits a photon with a particular energy. these photons are then observed as emission lines using a spectroscope. the lyman series involves transitions to the lowest or ground state energy level. to the second energy level are called the balmer series. these transitions involve frequencies in the visible part of the spectrum. in this frequency range each transition is characterized by a in the 18th and 19th centuries, newtonian, or classical, mechanics appeared to provide a wholly accurate description of the motions of bodies — for example, planetary motion. in the late 19th and early 20th centuries, however, experimental findings raised doubts about the completeness of newtonian theory. among the newer observations were the lines that appear in the spectra of light emitted by heated gases, or gases in which electric discharges take place. model of the atom developed in the early 20th century by the english physicist ernest rutherford, in which negatively charged electrons circle a positive nucleus in orbits prescribed by newton ’ s laws of motion, scientists had also expected that the electrons would emit light over a broad frequency range, rather than in the narrow frequency ranges that form the lines in a spectrum. another puzzle for physicists was the coexistence of two theories of light : the corpuscular theory, which explains light as a stream of particles, and the wave theory, which views light as electromagnetic waves. a third problem was the absence of a molecular basis for in his book elementary principles in statistical mechanics ( 1902 ), the american mathematical physicist j. willard gibbs conceded the impossibility of framing a theory of molecular action that reconciled thermodynamics, radiation, and electrical phenomena as they were then understood. at the turn of the century, physicists did not yet clearly recognize that these and other difficulties in physics were in any way related. the first development that led to the solution of these difficulties
|
subdomain_quantum_mechanics
| 0.783484
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
| 0
| 0.6
|
2025-12-22T14:56:19.086267
|
, radiation, and electrical phenomena as they were then understood. at the turn of the century, physicists did not yet clearly recognize that these and other difficulties in physics were in any way related. the first development that led to the solution of these difficulties was planck ’ s introduction of the concept of the quantum, as a result of physicists ’ studies of blackbody radiation during the closing years of the 19th century. ( the term blackbody refers to an ideal body or surface that absorbs all radiant energy without any reflection. ) a body at a moderately high temperature — a " red heat " — gives off most of its radiation in the low frequency ( red and infrared ) regions ; a body at a higher temperature — " white heat " — gives off comparatively more radiation in higher frequencies ( yellow, green, or blue ). during the 1890s physicists conducted detailed quantitative studies of these phenomena and expressed their results in a series of curves or graphs. the classical, or prequantum, theory predicted an altogether different set of curves from those actually observed. what planck did was to devise a mathematical formula that described the curves exactly ; he then deduced a physical hypothesis that could explain the formula. his hypothesis was that energy is radiated only in quanta of energy hu, where u is the frequency and h is the quantum action, now known as the next important developments in quantum mechanics were the work of german - born american physicist and nobel laureate albert einstein. he used planck ’ s concept of the quantum to explain certain properties of the photoelectric effect — an experimentally observed phenomenon in which electrons are emitted from metal surfaces when radiation falls on these surfaces. according to classical theory, the energy, as measured by the voltage of the emitted electrons, should be proportional to the intensity of the radiation. the energy of the electrons, however, was found to be independent of the intensity of radiation — which determined only the number of electrons emitted — and to depend solely on the frequency of the radiation. the higher the frequency of the incident radiation, the greater is the electron energy ; below a certain critical frequency no electrons are emitted. these facts were explained by einstein by assuming that a single quantum of radiant energy ejects a single electron from the metal. of the quantum is proportional to the frequency, and so the energy of the electron depends on the frequency. in 1911 rutherford established the existence of the atomic nucleus. he assumed, on the basis of experimental evidence obtained from the scattering of alpha particles by the nuclei of gold atoms, that every atom consists of a
|
subdomain_quantum_optics
| 0.714606
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
| 1
| 0.6
|
2025-12-22T14:56:19.087342
|
the energy of the electron depends on the frequency. in 1911 rutherford established the existence of the atomic nucleus. he assumed, on the basis of experimental evidence obtained from the scattering of alpha particles by the nuclei of gold atoms, that every atom consists of a dense, positively charged nucleus, surrounded by negatively charged electrons revolving around the nucleus as planets revolve around the sun. electromagnetic theory developed by the british physicist james clerk maxwell unequivocally predicted that an electron revolving around a nucleus will continuously radiate electromagnetic energy until it has lost all its energy, and eventually will fall into the nucleus. thus, according to classical theory, an atom, as described by rutherford, is unstable. this difficulty led the danish physicist niels bohr, in 1913, to postulate that in an atom the classical theory does not hold, and that electrons move in fixed orbits. every change in orbit by the electron corresponds to the absorption or emission of a quantum of radiation. the application of bohr ’ s theory to atoms with more than one electron proved difficult. the mathematical equations for the next simplest atom, the helium atom, were solved during the 1910s and 1920s, but the results were not entirely in accordance with for more complex atoms, only approximate solutions of the equations are possible, and these are only partly concordant the french physicist louis victor de broglie suggested in 1924 that because electromagnetic waves show particle characteristics, particles should, in some cases, also exhibit wave properties. this prediction was verified experimentally within a few years by the american physicists clinton joseph davisson and lester halbert germer and the british physicist george paget thomson. that a beam of electrons scattered by a crystal produces a diffraction pattern characteristic of a wave ( see diffraction ). the wave concept of a particle led the austrian physicist erwin schrodinger to develop a so - called wave equation to describe the wave properties of a particle and, more specifically, the wave behavior of the electron in the hydrogen atom. although this differential equation was continuous and gave solutions for all points in space, the permissible solutions of the equation were restricted by certain conditions expressed by mathematical equations called eigenfunctions ( german eigen, " own " ). the schrodinger wave equation thus had only certain discrete solutions ; these solutions were mathematical expressions in which quantum numbers appeared as parameters. ( quantum numbers are integers developed in particle physics to give the magnitudes of certain characteristic quantities of particles or systems. ) schrodinger equation was solved for the hydrogen atom and
|
subdomain_quantum_optics
| 0.66659
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
| 2
| 0.6
|
2025-12-22T14:56:19.088381
|
; these solutions were mathematical expressions in which quantum numbers appeared as parameters. ( quantum numbers are integers developed in particle physics to give the magnitudes of certain characteristic quantities of particles or systems. ) schrodinger equation was solved for the hydrogen atom and gave conclusions in substantial agreement with earlier quantum theory. moreover, it was solvable for the helium atom, which earlier theory had failed to explain adequately, and here also it was in agreement with experimental evidence. the solutions of the schrodinger equation also indicated that no two electrons could have the same four quantum numbers — that is, be in the same energy state. rule, which had already been established empirically by austro - american physicist and nobel laureate wolfgang pauli in 1925, is called the exclusion principle. what is matter in the 20th century, physicists discovered that matter behaved as both a wave and a particle. austrian physicist and nobel prize winner erwin schrodinger discussed this apparent paradox in a lecture in geneva, switzerland, in 1952. a condensed and translated version of his lecture appeared in scientific american the following what is matter? the wave - particle dualism afflicting modern physics is best resolved in favor of waves, believes the author, but there is no clear picture of matter on which physicists can agree fifty years ago science seemed on the road to a clear - cut answer to the ancient question which is the title of this article. it looked as if matter would be reduced at last to its ultimate building blocks — to certain submicroscopic but nevertheless tangible and measurable particles. but it proved to be less simple than that. today a physicist no longer can distinguish significantly between matter and something else. we no longer contrast matter with forces or fields of force as different entities ; we know now that these concepts must be merged. it is true that we speak of " empty " space ( i. e., space free of matter ), but space is never really empty, because even in the remotest voids of the universe there is always starlight — and that is matter. besides, space is filled with gravitational fields, and according to einstein gravity and inertia cannot very well be separated. thus the subject of this article is in fact the total picture of space - time reality as envisaged by physics. we have to admit that our conception of material reality today is more wavering and uncertain than it has been for a long time. we know a great many interesting details, learn new ones every week. but to construct a clear, easily comprehensible
|
subdomain_quantum_field_theory
| 0.689082
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
| 3
| 0.6
|
2025-12-22T14:56:19.089426
|
have to admit that our conception of material reality today is more wavering and uncertain than it has been for a long time. we know a great many interesting details, learn new ones every week. but to construct a clear, easily comprehensible picture on which all physicists would agree — that is simply impossible. physics stands at a grave crisis of ideas. in the face of this crisis, many maintain that no objective picture of reality is possible. however, the optimists among us ( of whom i consider myself one ) look upon this view as a philosophical extravagance born of despair. we hope that the present fluctuations of thinking are only indications of an upheaval of old beliefs which in the end will lead to something better than the mess of formulas which today surrounds our subject. since the picture of matter that i am supposed to draw does not yet exist, since only fragments of it are visible, some parts of this narrative may be inconsistent with others. like cervantes ’ tale of sancho panza, who loses his donkey in one chapter but a few chapters later, thanks to the forgetfulness of the author, is riding the dear little animal again, our story has contradictions. we must start with the well - established concept that matter is composed of corpuscles or atoms, whose existence has been quite " tangibly " demonstrated by many beautiful experiments, and with max planck ’ s discovery that energy also comes in indivisible units, called quanta, which are supposed to be transferred abruptly from one carrier to another. but then sancho panza ’ s donkey will return. for i shall have to ask you to believe neither in corpuscles as permanent individuals nor in the suddenness of the transfer of an energy quantum. discreteness is present, but not in the traditional sense of discrete single particles, let alone in the sense of abrupt processes. discreteness arises merely as a structure from the laws governing the phenomena. these laws are by no means fully understood ; a probably correct analogue from the physics of palpable bodies is the way various partial tones of a bell derive from its shape and from the laws of elasticity to which, of themselves, nothing discontinuous adheres. the idea that matter is made up of ultimate particles was advanced as early as the fifth century b. c. by leucippus and democritus, who called these particles atoms. the corpuscular theory of matter was lifted to physical reality in the theory of gases developed during the 19th century by james clerk maxwell
|
subdomain_quantum_materials
| 0.696116
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
| 4
| 0.6
|
2025-12-22T14:56:19.090401
|
as the fifth century b. c. by leucippus and democritus, who called these particles atoms. the corpuscular theory of matter was lifted to physical reality in the theory of gases developed during the 19th century by james clerk maxwell and ludwig boltzmann. the concept of atoms and molecules in violent motion, colliding and rebounding again and again, led to full comprehension of all the properties of gases : their elastic and thermal properties, their viscosity, heat conductivity and diffusion. at the same time it led to a firm foundation of the mechanical theory of heat, namely, that heat is the motion of these ultimate particles, which becomes increasingly violent with rising temperature. within one tremendously fertile decade at the turn of the century came the discoveries of x - rays, of electrons, of the emission of streams of particles and other forms of energy from the atomic nucleus by radioactive decay, of the electric charges on the various particles. the masses of these particles, and of the atoms themselves, were later measured very precisely, and from this was discovered the mass defect of the atomic nucleus as a whole. mass of a nucleus is less than the sum of the masses of its component particles ; the lost mass becomes the binding energy holding the nucleus firmly together. this is called the packing effect. the nuclear forces of course are not electrical forces — those are repellent — but are much stronger and act only within very short distances, about 10 - 13 centimeter. here i am already caught in a contradiction. didn ’ t i say at the beginning that we no longer assume the existence of force fields apart from matter? i could easily talk myself out of it by saying : well, the force field of a particle is simply considered a part of it. but that is not the fact. the established view today is rather that everything is at the same time both particle and field. everything has the continuous structure with which we are familiar in fields, as well as the discrete structure with which we are equally familiar in particles. this concept is supported by innumerable experimental facts and is accepted in general, though opinions differ on details, as we shall see. in the particular case of the field of nuclear forces, the particle structure is more or less known. most likely the continuous force field is represented by the so - called pi mesons. on the other hand, the protons and neutrons, which we think of as discrete particles, indisputably also have a continuous wave structure, as is shown by the interference patterns
|
subdomain_quantum_field_theory
| 0.658248
| 512
|
HuggingFaceFW/fineweb-edu
|
<urn:uuid:44c88cfb-a6f6-4068-86e0-5247fe04dc45>
| 5
| 0.6
|
2025-12-22T14:56:19.091343
|
End of preview. Expand
in Data Studio
quantum-physics-0.6-corpus
Dataset Description
This is a domain-specific corpus created using ontology-guided filtering from FineWeb-Edu.
Dataset Creation
- Source: HuggingFaceFW/fineweb-edu
- Filtering Method: Semantic similarity to subdomain centroids (embedding-based)
- Pipeline: Ontology-Guided Domain Corpus Builder
Dataset Structure
Each chunk contains:
text: The text content (256-512 tokens)subdomain_id: Assigned subdomainsimilarity_score: Cosine similarity to subdomain centroidtoken_count: Number of tokenssource_dataset: Original dataset namesource_id: Original document IDchunk_index: Position within source document
Usage
from datasets import load_dataset
dataset = load_dataset("konsman/quantum-physics-0.6-corpus")
# Access filtered chunks
for chunk in dataset['train']:
print(chunk['text'])
print(chunk['subdomain_id'])
print(chunk['similarity_score'])
License
MIT License
Citation
Generated using the Ontology-Guided Domain Corpus Builder pipeline.
- Downloads last month
- 397