Dataset columns: text (string, 558 to 3.22k characters); subdomain_id (string, 10 classes); similarity_score (float64, 0.6 to 0.84); token_count (int64, 256 to 512); source_dataset (string, 1 value); source_id (string, 47 characters); chunk_index (int64, 0 to 216); filtering_threshold (float64, 0.6); created_at (string date, 2025-12-25 18:39:09 to 2026-01-10 15:00:44).
Ginsberg. The latter was rejected because causal information cannot be encoded as a set of beliefs, and the former because it is difficult to fine-tune Lewis's similarity measure to match causal intuition. Pearl defines counterfactuals directly in terms of a "structural equation model": a set of equations in which each variable is assigned a value that is an explicit function of other variables in the system. Given such a model, the sentence "Y would be y had X been x" (formally, X = x > Y = y) is defined as the assertion: if we replace the equation currently determining X with the constant X = x and solve the set of equations for the variable Y, the solution obtained will be Y = y. This definition has been shown to be compatible with the axioms of possible-world semantics, and it forms the basis for causal inference in the natural and social sciences, since each structural equation in those domains corresponds to a familiar causal mechanism that investigators can reason about meaningfully. See also: English conditional sentences, indicative conditional, irrealis moods, logical consequence, material conditional, optative mood, principle of explosion, subjunctive mood, thought experiment, possible world semantics.
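The replace-and-solve definition above can be made concrete in a few lines of code. This is a minimal sketch on a toy structural equation model; the variables (U, X, Y) and their equations are illustrative assumptions, not drawn from Pearl.

    # Pearl-style counterfactual on a toy structural equation model.
    # The model U -> X -> Y and its equations are purely illustrative.

    def solve(equations, exogenous):
        """Evaluate each structural equation in dependency order."""
        values = dict(exogenous)
        for var, fn in equations.items():
            values[var] = fn(values)
        return values

    # Structural equations: X := U, Y := 2*X + 1 (hypothetical).
    equations = {
        "X": lambda v: v["U"],
        "Y": lambda v: 2 * v["X"] + 1,
    }

    factual = solve(equations, {"U": 3})          # X = 3, Y = 7

    # Counterfactual "what would Y be had X been 5": replace the equation
    # currently determining X with the constant X = 5, then re-solve for Y.
    intervened = dict(equations, X=lambda v: 5)
    counterfactual = solve(intervened, {"U": 3})  # X = 5, Y = 11

    print(factual["Y"], counterfactual["Y"])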
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.639601 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:70f1e3a3-ea34-48ea-91cb-6cd47fdd730e> | chunk_index: 6 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:38.390640
The set of Hamming codes are called "forward error correction" and give the receiving station the ability to correct a transmission error. While this takes more bits to send the information, it means fewer retransmits and thus can actually speed up a noisy connection. The number of parity bits in the Hamming code is given by the Hamming rule. This is a function of the number of bits of information transmitted in a block and is represented by the following inequality: d + p + 1 <= 2^p, where 'd' is the number of data bits and 'p' is the number of parity bits. Hamming codes are identified by the ordered set (c, d), where 'c' = 'd' + 'p'. The Hamming code (7, 4) is the classic example: it describes a word of 4 data bits and 3 error-check bits, which satisfies the inequality, since 4 + 3 + 1 = 8 <= 2^3 = 8. The Hamming code word is created by multiplying the data bits by a generator matrix using modulo-2 arithmetic. The result is called a code word vector, which consists of the original data bits followed by the parity bits. The generator matrix used in constructing the Hamming code consists of I (the identity matrix) and a parity generation matrix A. For a data size of 4 the following matrix is created:

        | 1 0 0 0 | 1 1 1 |
    G = | 0 1 0 0 | 0 1 1 |
        | 0 0 1 0 | 1 0 1 |
        | 0 0 0 1 | 1 1 0 |

Multiplying a 4-bit vector (d1, d2, d3, d4) by G results in a 7-bit vector of the form (d1, d2, d3, d4, p1, p2, p3). The A portion is what generates the parity bits. If the columns of A are distinct, then (p1, p2, p3) are the parity calculations of three distinct subsets of the original data bits. To validate a received code word r, it is multiplied by the parity-check matrix H = [A^T | I] (the transpose of A alongside the identity), which forms the parity-check, or syndrome, vector s = H * r (mod 2):

        | 1 0 1 1 | 1 0 0 |
    H = | 1 1 0 1 | 0 1 0 |
        | 1 1 1 0 | 0 0 1 |
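The encoding step described above (data bits times G, modulo 2) is easy to check numerically. A minimal sketch, assuming NumPy is available; the example data word is arbitrary.

    import numpy as np

    # Encode a 4-bit data word with the (7,4) Hamming generator matrix
    # G = [I | A] from the passage; all arithmetic is modulo 2.
    A = np.array([[1, 1, 1],
                  [0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])
    G = np.hstack([np.eye(4, dtype=int), A])

    def encode(data_bits):
        d = np.array(data_bits)
        return (d @ G) % 2          # (d1, d2, d3, d4, p1, p2, p3)

    print(encode([1, 0, 1, 0]))     # -> [1 0 1 0 0 1 0], matching the table below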
subdomain_id: subdomain_quantum_cryptography | similarity_score: 0.604794 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:720bdb8d-81e2-40ec-b246-8ae6e494c893> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:38.545144
For example, for the valid code word r = (1, 0, 1, 0, 0, 1, 0), the product H * r works out to the zero syndrome:

    | 1 0 1 1 | 1 0 0 |   | 1 |   | 0 |
    | 1 1 0 1 | 0 1 0 | * | 0 | = | 0 |
    | 1 1 1 0 | 0 0 1 |   | 1 |   | 0 |
                          | 0 |
                          | 0 |
                          | 1 |
                          | 0 |

If all the elements of s are 0, then the entire word has been received correctly. If there are any 1s in s, then there is an error, which can be located by looking at which parity bits have failed: the syndrome matches the column of H corresponding to the bit that has the error. For example, if bit 3 of the word above is flipped, s = (1, 0, 1), which matches the third column of H, so bit 3 is the bit in error. The (7, 4) Hamming code, while good for demonstrations, is not the best choice for practical communications: it has a lot of overhead and a non-standard length. The number of parity bits goes up with the log of the number of data bits, so there is less overhead for longer words than for shorter words. The Hamming code can detect and fix single-bit errors, and detect double-bit errors. For the (7, 4) Hamming code the full table is as follows (the last three bits of each code word are the parity bits):

    Decimal  Data  Hamming (7,4)
    0        0000  0000000
    1        0001  0001110
    2        0010  0010101
    3        0011  0011011
    4        0100  0100011
    5        0101  0101101
    6        0110  0110110
    7        0111  0111000
    8        1000  1000111
    9        1001  1001001
    10       1010  1010010
    11       1011  1011100
    12       1100  1100100
    13       1101  1101010
    14       1110  1110001
    15       1111  1111111

The Hamming distance from any valid code word to any other valid code word is three. This means that it would take three bit errors to go from one valid message to another. Example: starting from the valid word 0100011, one flipped bit gives 0100010 (not valid, correctable); two flipped bits give 0100000 (not valid, and not reliably correctable, since it is closer to the valid word 0000000). It is left as an exercise to the reader to check all pairs of valid code words and confirm that the minimum Hamming distance between any two valid messages is three.
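The syndrome check and the minimum-distance claim can both be verified by brute force. A short sketch assuming NumPy; the helper names are ours, not part of any standard library.

    import numpy as np
    from itertools import product, combinations

    A = np.array([[1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
    G = np.hstack([np.eye(4, dtype=int), A])
    H = np.hstack([A.T, np.eye(3, dtype=int)])

    def syndrome(received):
        return (H @ np.array(received)) % 2

    def correct(received):
        r = np.array(received)
        s = syndrome(r)
        if s.any():                               # nonzero syndrome: flip the matching bit
            bit = next(i for i in range(7) if (H[:, i] == s).all())
            r[bit] ^= 1
        return r

    print(correct([0, 1, 0, 0, 0, 1, 0]))         # -> [0 1 0 0 0 1 1]

    # Brute-force check: the minimum Hamming distance between any two of
    # the 16 valid code words is three.
    codewords = [(np.array(d) @ G) % 2 for d in product([0, 1], repeat=4)]
    min_dist = min(int(np.sum(a != b)) for a, b in combinations(codewords, 2))
    print(min_dist)                               # -> 3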
subdomain_id: subdomain_quantum_cryptography | similarity_score: 0.601443 | token_count: 444 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:720bdb8d-81e2-40ec-b246-8ae6e494c893> | chunk_index: 1 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:38.546703
david m. lane values of pearson ' s correlation, variance sum law, measures of variability the collection of data involves measurement. measurement of some characteristics such as height and weight are relatively straightforward. the measurement of psychological attributes such as self esteem can be complex. a good measurement scale should be both reliable and valid. these concepts will be discussed in turn. the notion of reliability revolves around whether you would get at least approximately the same result if you measure something twice with the same measurement instrument. a common way to define reliability is the correlation between parallel forms of a test. letting " test " represent a parallel form of the test, the symbol rtest, test is used to denote the reliability of the test. true scores and error assume you wish to measure a person ' s mean response time to the onset of a stimulus. for simplicity, assume that there is no learning over tests which, of course, is not really true. the person is given 1, 000 trials on the task and you obtain the response time on each trial. the mean response time over the 1, 000 trials can be thought of as the person ' s " true " score, or at least a very good approximation of it. theoretically, the true score is the mean that would be approached as the number of trials increases indefinitely. an individual response time can be thought of as being composed of two parts : the true score and the error of measurement. thus if the person ' s true score were 345 and their response on one of the trials was 358, then the error of measurement would be 13. similarly, if the response time were 340, the error of measurement would be - 5. now consider the more realistic example of a class of students taking a 100 - point true / false exam. let ' s assume that each student knows the answer to some of the questions and has no idea about the other questions. for the sake of simplicity, we are assuming there is no partial knowledge of any of the answers and for a given question a student either knows the answer or guesses. finally, assume the test is scored such that a student receives one point for a correct answer and loses a point for an incorrect answer. in this example, a student ' s true score is the number of questions they know the answer to and their error score is their score on the questions they guessed on. for example, assume a student knew 90 of the answers and guessed correctly on 7 of the remaining 10 ( and therefore incorrectly on 3 ). their true score would be 90 since that is the number
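The true-score-plus-error model and the parallel-forms definition of reliability can be illustrated with a small simulation. This is a sketch under assumed distributions and sample size, not an analysis of real data.

    import numpy as np

    # Classical true-score model: each observed score is a true score plus
    # independent measurement error. The correlation between two parallel
    # forms estimates the reliability of the test.
    rng = np.random.default_rng(0)
    n = 10_000
    true_scores = rng.normal(loc=345, scale=30, size=n)   # e.g. mean response times
    form_a = true_scores + rng.normal(scale=15, size=n)   # true score + error
    form_b = true_scores + rng.normal(scale=15, size=n)   # parallel form, new error

    r_test_test = np.corrcoef(form_a, form_b)[0, 1]
    print(round(r_test_test, 3))   # close to var(true) / (var(true) + var(error)) = 0.8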
subdomain_id: subdomain_quantum_metrology | similarity_score: 0.600732 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:5d0beb3d-9d78-4303-bb9a-f1872b73baa9> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:39.156683
liquid crystals, the state of matter that makes possible the flat screen technology now commonly used in televisions and computers, may have some new technological tricks in store. writing today ( may 3, 2012 ) in the journal nature, an international team of researchers led by university of wisconsin - madison professor of chemical and biological engineering juan j. de pablo reports the results of a computational study that shows liquid crystals, manipulated at the smallest scale, can unexpectedly induce the molecules they interact with to self - organize in ways that could lead to entirely new classes of materials with new properties. " from an applied perspective, once we get to very small scales, it becomes incredibly difficult to pattern the structure of materials. but here we show it is possible to use liquid crystals to spontaneously create nanoscale morphologies we didn ' t know existed, " says de pablo of computer simulations that portray liquid crystals self - organizing at the molecular scale in ways that could lead to remarkable new materials with scores of technological applications. as their name implies, liquid crystals exhibit the order of a solid crystal but flow like a liquid. used in combination with polarizers, optical filters and electric fields, liquid crystals underlie the pixels that make sharp pictures on thin computer or television displays. liquid crystal displays alone are a multibillion dollar industry. the technology has also been used to make ultrasensitive thermometers and has even been deployed in lasers, among other applications. the new study modeled the behavior of thousands of rod - shaped liquid crystal molecules packed into nano - sized liquid droplets. it showed that the confined molecules self organize as the droplets are cooled. " at elevated temperatures, the droplets are disordered and the liquid is isotropic, " de pablo explains. " as you cool them down, they become ordered and form a liquid crystal phase. the liquid crystallinity within the droplets, surprisingly, induces water and other molecules at the interface of the droplets, known as surfactants, to organize into ordered nanodomains. this is a behavior that was not known. " in the absence of a liquid crystal, the molecules at the interface of the droplet adopt a homogeneous distribution. in the presence of a liquid crystal, however, they form an ordered nanostructure. " you have two things going on at the same time : confinement of the liquid crystals and an interplay of their structure with the interface of the droplet, " notes de pablo. " as you lower the temperature the liquid crystal starts to become organized and imprints that order into the
subdomain_id: subdomain_quantum_materials | similarity_score: 0.66746 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:725cafcd-e957-4ce9-ab40-14fa529641a6> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:39.243837
at the same time: confinement of the liquid crystals and an interplay of their structure with the interface of the droplet," notes de Pablo. "As you lower the temperature the liquid crystal starts to become organized and imprints that order into the surfactant itself, causing it to self-assemble." It was well known that interfaces influence the order or morphology of liquid crystals. The new study shows the opposite to be true as well. "Now you can think of forming these ordered nanophases, controlling them through droplet size or surfactant concentration, and then decorating them to build up structures and create new classes of materials," says de Pablo. As an example, de Pablo suggested that surfactants coupled to DNA molecules could be added to the surface of liquid crystal droplets, which could then assemble through the hybridization of DNA. Such nanoscale engineering, he notes, could also form the basis for liquid-crystal-based detection of toxins, biological molecules, or viruses. A virus or protein binding to the droplet would change the way the surfactants and the liquid crystals within the droplet are organized, triggering an optical signal. Such a technology would have important uses in biosecurity, health care and biology research settings.
subdomain_id: subdomain_quantum_materials | similarity_score: 0.618235 | token_count: 269 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:725cafcd-e957-4ce9-ab40-14fa529641a6> | chunk_index: 1 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:39.244392
there are two different questions at work here, that you ' ve kind of mashed together. the first question is " what is the speed at which a change in the electric field propagates? " the answer to that is the speed of light. in qed terms, the electromagnetic interaction that we see as the electric field is mediated by photons, so any change in an established field ( say, due to shifting the position of the charge creating the field ) won ' t be felt by a distant object until enough time has passed for a photon from the source to make it to the observation point. the second question is " what is the speed of propagation of electric current? " this speed is slower than the speed of light, but still on about that order of magnitude - - the exact value depends a little on the arrangement of wires and so on, but you won ' t be far off if you assume that electrical signals propagate down a cable at the speed of light. this relates to electric field in that the charge moving through a circuit to light a light bulb has to be driven by some electric field, so you can reasonably ask how that field is established, and how much time it takes. qualitatively, the necessary field is established by excess charge on the surface of the wires, with the surface charge being generally positive near the positive terminal of a battery and generally negative near the negative terminal, and dropping off smoothly from one to the other so that the electric field is more or less piecewise constant ( that is, the field is the same everywhere inside a wire, and the field is the same everywhere inside a resistor, but the two field values are not the same ). when the circuit is first connected, there is a rapid redistribution of the charge on the surface of the wires which establishes the surface charge gradients that drive the steady - state current that will eventually do whatever it is you want it to do. the time required to establish the gradients and settle in to the steady - state condition is very fast, most likely on the order of nanoseconds for a normal circuit. there ' s a good discussion of the business of how, exactly, charges get moved around to drive a current in the textbook that we use for our introductory classes, matter and interactions, by chabay and sherwood. it doesn ' t go into enough detail to let you calculate the relevant times directly, but it lays out the basic science pretty well. ( it ' s a textbook for a first - year introductory physics class
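A back-of-envelope calculation makes the two timescales concrete. The cable length and the 0.7c velocity factor below are illustrative assumptions (typical cables fall roughly in the 0.6c to 0.9c range).

    # Propagation times for the two "speeds" discussed in the passage.
    c = 3.0e8                      # speed of light, m/s
    length = 10.0                  # metres of cable (assumed)

    t_field  = length / c          # a change in the field propagating at c
    t_signal = length / (0.7 * c)  # a signal in the cable at an assumed 0.7c

    print(f"{t_field * 1e9:.1f} ns at c, {t_signal * 1e9:.1f} ns at 0.7c")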
subdomain_id: subdomain_quantum_optics | similarity_score: 0.637225 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:49279033-9e98-43e4-ba87-afe02bc68b49> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:39.272278
Authors: J. Marvin Herndon. Ours is a time of unparalleled richness in astronomical observations, but understanding seems to be absent throughout broad areas of astrophysics. Among some groups of astrophysicists there appears to be measured degrees of consensus, as indicated by the prevalence of so-called "standard models", but in science consensus is nonsense; science is a logical process, not a democratic process, and logical connections in many instances seem to be lacking. So the question astrophysicists should ask is this: "What's wrong with astrophysics?" Finding out what's wrong is not only the necessary precursor to righting what's wrong, but will open the way to new advances in astrophysics. Toward that end, one may question the basic assumptions upon which astrophysics is founded, as well as question the approaches astrophysicists currently employ. Here I describe one methodology and provide specific examples, the details of which are set forth elsewhere [1-3]. In doing so, I place into a logical sequence seemingly unrelated astronomical observations, including certain Hubble Space Telescope images, so that causal relationships become evident and understanding becomes possible; as a consequence, profound new implications follow, for example bearing on the origin of diverse galactic structures and the origin of the heavy elements. Comments: recovered from sciprint.org, [v1] 2 Apr 2008.
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.630877 | token_count: 295 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:c5e63dcc-4dd5-4354-acf2-fbb2aaac06fc> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:39.660443
center of science, policy and society programs : aaas dialogue on science, ethics and religion aaas dialogue on science, ethics and religion physics & the cosmos the field of physics attempts to make sense of the universe at all scales, from the impossibly small particles from which we are comprised to the inconceivably large structures within which we exist. the miniscule yet fundamentally important realm of quarks, photons and protons ( among many others ) is articulated by quantum mechanics through such concepts as the simultaneously wave and particle nature of light and the inherent uncertainty in the physical universe. at the other extreme, einstein ’ s general relativity provides a framework for understanding our cosmos on the largest possible scale and accounts for the large - scale gravitational effects of all matter on space and time. from quarks to quasars, physics and astronomy address an enormous variety of objects and phenomena, many of which provoke intriguing physical and metaphysical questions. since the beginning of human history we have been looking up at the night sky, wondering about the countless points of lights and what might lie beyond. ancient astronomers observed that the visible heavens are relatively ordered and predictable, yet also peculiar and vast. such universally experienced mystery has engendered tremendous philosophical, religious, and scientific inquiry. the last several hundred years in particular have witnessed revolutions in the way we understand the universe. copernicus re - envisioned the cosmos as sun - centered, not earth - centered. galileo observed jupiter ’ s orbiting moons and the sun ’ s “ imperfect ” spots that led him to challenge the traditional greek conceptions of the heavens. indeed, both religious and scientific communities have had to regularly revise their understanding of the cosmos as more discoveries come to light. modern astronomy and physics continue to reveal many unanticipated features of the universe ’ s structure and evolution. astrophysicists theorize that all space, matter and energy expanded explosively from an extremely dense soup of subatomic particles in an event called the big bang. after a process of cooling and coalescing, cloudlike nebulae of gas and dust collapsed to form stars, and these stars clustered to comprise galaxies. eventually, terrestrial planets and moons were forged from the heavier material expelled from dying stars. over an unimaginably long span of time — approximately 13. 7 billion years — the components of the universe gradually formed, and today the universe continues to evolve as space itself dramatically expands. in addition to piecing together the intricate history of the universe and explaining the various objects we see around us
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.6445 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:52f8fa32-d67c-4e6e-9403-7ca8ef26b03c> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:39.899740
the search for certainty : a philosophical account of foundations of mathematics. marcus giaquinto. xii + 286 pp. oxford university press, 2002. $ 45. david hilbert ( 1862 – 1943 ) was arguably the leading mathematician of his time. in struggles over how mathematics was to accommodate new understandings of the infinite, the dutch mathematician l. e. j. brouwer was his most fervent opponent. when hilbert ' s favorite student, hermann weyl, went over to the enemy, saying " brouwer, that is the revolution, " hilbert was incensed. in a passionate address delivered in 1922, he proclaimed : weyl and brouwer... seek to provide a foundation for mathematics by pitching overboard whatever discomforts them and declaring an embargo.... but this would mean dismembering and mutilating our science, and, should we follow such reformers, we would run the risk of losing a large part of our most valued treasures. weyl and brouwer outlaw the general notion of irrational number, of function, even of number - theoretic function, cantor ' s [ ordinal ] numbers of higher number classes, etc. the theorem that among infinitely many natural numbers there is always a least, and even the logical law of the excluded middle, e. g., in the assertion that either there are only finitely many prime numbers or there are infinitely many : these are examples of forbidden theorems and modes of inference. i believe that impotent as kronecker was to abolish irrational numbers..., no less impotent will their efforts prove today. no! brouwer ' s [ program ] is not as weyl thinks, the revolution, but only a repetition of an attempted putsch with old methods, that in its day was undertaken with greater verve yet failed utterly. especially today, when the state power is thoroughly armed and fortified by the work of frege, dedekind, and cantor, these efforts are foredoomed to failure. a decade later hilbert ' s own program for the foundations of mathematics lay in tatters, destroyed in an investigation by the young logician kurt godel, which had initially been undertaken in an effort to contribute to that very program. today, passions have cooled, and working mathematicians show little interest in foundational matters. the infinitary set theoretic methods that occasioned such controversy are casually absorbed in passing by the beginning graduate student and used unhesitatingly
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.618693 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:d0c3274e-596e-4fe9-aaf5-1f23b6006049> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.120251
axioms provided the basis of a formal system rivaling that of principia. in an important paper appearing in 1930, zermelo proposed what came to be called the iterative notion of set, in which a hierarchy of sets is built from some initial collection of things by iterating indefinitely the operation of forming the set of all subsets of a given set. he observed that his axioms could be construed as being about just this notion. a few years later, in an address on the foundations of mathematics, kurt godel emphasized that rather than being seen as a rival to principia, when viewed from the perspective of the iterative notion of set zermelo ' s system could be seen as the result of eliminating unnecessary complications and artificial restrictions from the whitehead - russell system. by the 1940s and ' 50s, set - theoretic methods had become a crucial part of the mathematician ' s toolbox. back in the 1920s, when passions were aflame, hilbert developed an ingenious strategy by which he intended to overcome his opponents. he would establish the legitimacy of methods that brouwer and weyl considered dubious by encapsulating those methods in formal systems whose consistency would then be proved using only methods of which they approved. in a revolutionary paper in 1931, the young godel demonstrated not only that consistency could not be proved using only these restrictive methods, but also that the same negative conclusion held even if the entire panoply of methods encapsulated in the systems in question was brought to bear. after godel, the foundations of mathematics were seen as inevitably open - ended, with more and more propositions becoming provable as ever more powerful methods were employed. godel liked to emphasize that these more powerful methods could be thought of as being essentially a matter of venturing sufficiently far out in the iterative hierarchy of sets. giaquinto has provided a careful and judicious discussion and analysis of these matters, supplying needed technical background for readers who are not mathematicians. although foundational questions have ceased to be of much importance to most mathematicians, controversies among specialists continue. readers of this book will be well prepared to follow the current literature on foundations of mathematics.
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.623476 | token_count: 448 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:d0c3274e-596e-4fe9-aaf5-1f23b6006049> | chunk_index: 2 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.123087
in the experiments michelson had begun in berlin. the new apparatus was similar in basic design to his previous ones, but much more sensitive. it used extra mirrors to allow the light beams to bounce back and forth, creating a much longer path length. michelson and morley conducted the experiments in a basement lab, and to minimize vibrations, the setup rested atop a huge stone block, which floated in a pool of mercury that allowed the entire apparatus to rotate. even with this exquisitely sensitive design, michelson and morley couldn ’ t detect evidence of motion through the ether. they reported their null result in november 1887 in the american journal of science, in a paper titled “ on the relative motion of the earth and the luminiferous ether. ” ( the paper is online at www. aip. org / history / gap / michelson / michelson. html. ) though disappointing to michelson and morley, the experiment revolutionized physics. some scientists initially tried to explain the results while keeping the ether concept. for instance, george fitzgerald and hendrik lorentz independently proposed that moving objects contract along their direction of motion, making the speed of light appear the same for all observers. then in 1905 albert einstein, with his groundbreaking theory of special relativity, abandoned the ether and explained the michelson - morley result, though it is uncertain whether einstein was actually influenced by their experiment. michelson and morley nonetheless both continued to believe that light must be a vibration in the ether, though michelson did acknowledge the importance of einstein ’ s work on relativity. although it couldn ’ t detect the non - existent ether, the michelson interferometer proved useful for other measurements. michelson used his interferometer to measure the length of the international standard meter in terms of wavelengths of cadmium light, and in 1920 he was the first to measure the angular diameter of a distant star, also using an interferometer. in 1901 michelson was the second president of the aps, and he became the first american to win the nobel prize in 1907, for his precision optical instruments and measurements made with them. in 1889 michelson moved to clark university in worcester, massachusetts, and then in 1892 to the university of chicago. he returned to his work refining measurements of the speed of light, and continued making more and more precise measurements right up to his death in 1931. ©1995 - 2013, american physical society aps encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated
subdomain_id: subdomain_quantum_metrology | similarity_score: 0.618562 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:0a58269b-cea8-4a0a-899f-17cfb5879699> | chunk_index: 1 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.192483
copyright © 2001 – 2008 jsd i set up some spreadsheets to solve laplace ’ s equation, with more - or - less any boundary conditions you want. the spreadsheet becomes, essentially, a 2d cellular automaton that directly emulates the physics. this version handles objects in a d = 2 universe in rectangular coordinates. in flatland, i. e. d = 2, the z direction simply does not exist. alas many people are unfamiliar with the laws of physics in flatland. therefore it might be better to think of this as a d = 3 universe in which all d = 3 objects are infinitely tall and translationally invariant along the z axis. in this case, the z direction exists, but is uninteresting, and the essential physics is the same as the d = 2 case. ( this is not the same as considering a thin flat “ d = 2 ” object embedded in the d = 3 universe! ) in any case, each cell represents an area dx∧dy in the xy plane. the spreadsheet to handle this case can be found in reference 1. occupying a large area near the upper left of the spreadsheet is a grid that i call the potential grid. you can set boundary conditions for the problem by choosing cells that you want to represent electrodes, and specifying the potential on these electrodes. for example, reference 1 contains three electrodes : within the universe, cells that are not electrodes are called vacuum cells. they contain a formula that will be used to calculate their potential, in accordance with laplace ’ s equation, subject to the specified boundary conditions. if you want to “ erase ” part of an electrode, you should use the copy - and - paste function to fill those cells with the vacuum formula. just to the right of the " potential " grid there is second grid that i call the | field | grid because it calculates and displays the magnitude of the electric field at each point. farther right is a third grid that calculates the charge density ( charge per unit volume ). if you add up all the cells in a given area, you get a charge per unit length. this means length in the z direction ; it is the charge per unit length of the object rooted in the given area and extending infinitely far perpendicular to the screen. principle of operation : consider a cross - shaped group of 5 elements somewhere on the spreadsheet, and label them as shown in figure 1. now the discrete approximation to the second derivative in
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.626066 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.282826
the given area and extending infinitely far perpendicular to the screen. principle of operation : consider a cross - shaped group of 5 elements somewhere on the spreadsheet, and label them as shown in figure 1. now the discrete approximation to the second derivative in the horizontal direction is b + c−2w, and in the vertical direction it is a + d−2w. the laplacian vanishes if w = ( a + b + c + d ) / 4, i. e. if the central element is equal to the average of its four neighbors. recall we are assuming ( d / dz ) is zero. this leads to an algorithm that says that for each cell in the vacuum, we want to equal the average of its four neighbors. so the basic step of the algorithm is to run through the grid and just set each cell to the average of the neighboring cells. that does not immediately solve the problem, because whenever we change a cell it requires us to change all the neighbors. however, each basic step brings us closer to a good solution, so we just repeat the basic step several times. this is called the relaxation algorithm. another way to motivate the same algorithm is to consider the electrostatic field energy. it depends on the square of the electric field, i. e. the square of the first derivatives of the potential. this energy is minimized when the central cell is equal to the average of its four neighbors. therefore each step of the update algorithm lowers the local energy. tangential remark : you can say that the field energy serves as a lyapunov function for the relaxation algorithm... but if this doesn ’ t mean anything to you, don ’ t worry about it. reference 1 has 841 cells arranged as a 29x29 grid. for a grid of this size, the relaxation algorithm converges in a few seconds. that ’ s fast enough that it ’ s not boring, but slow enough that you can observe the propagation of changes if you fiddle with the boundary conditions. there is a cell just above the top right of the potential grid, labeled object potential. if you change the value of this cell, you can watch how the charge distribution responds. while the algorithm is running, i. e. after you have changed something but before the algorithm has converged to a solution, the grid contains an approximate solution that doesn ’ t exactly satisfy laplace ’ s equation. that is, during this phase, there will be nonzero charge in the “ vacuum ”. this
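The relaxation algorithm described here is straightforward to reproduce outside a spreadsheet. A minimal sketch in Python, assuming NumPy; the grid size, electrode placement and potentials are illustrative, and np.roll gives the same periodic wrap-around as the spreadsheet's boundary cells.

    import numpy as np

    # Plain relaxation: each vacuum cell is repeatedly set to the average of
    # its four neighbours while electrode cells stay fixed.
    n = 29
    phi = np.zeros((n, n))
    electrode = np.zeros((n, n), dtype=bool)

    electrode[5, 5:25] = True;  phi[5, 5:25] = +1.0    # electrode held at +1 (assumed)
    electrode[23, 5:25] = True; phi[23, 5:25] = -1.0   # electrode held at -1 (assumed)

    for _ in range(500):                               # repeat the basic step many times
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(electrode, phi, avg)            # vacuum relaxes, electrodes stay put

    print(phi[14, 14])                                 # potential at the centre cell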
subdomain_id: subdomain_quantum_simulation | similarity_score: 0.642048 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 1 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.285578
but before the algorithm has converged to a solution, the grid contains an approximate solution that doesn ’ t exactly satisfy laplace ’ s equation. that is, during this phase, there will be nonzero charge in the “ vacuum ”. this is unavoidable ; because the spreadsheet strictly enforces local conservation of charge, as discussed in section 2. 2. that means there is no way for the objects to acquire the correct charge unless charge flows through the vacuum somehow. the algorithm gradually moves all this charge to the boundaries. the “ manual recalculation ” mode ( using the “ f9 ” key ) may help you observe this, as discussed in section 5. excel evaluates cells in a sequence that it chooses. the sequence defies simple description, and it has nothing to do with the physics. ( remember, this is an electrostatic problem ; there is no physically - significant timescale. ) unfortunately, this sequencing means charge propagates quickly in certain directions across the grid, and slowly in the opposite directions. if you were writing in a computer language that gave you more control than excel does, you could get rid of this unphysical asymmetry by evaluating things in checkerboard - sequence ( all the black squares, then all the white squares ) or in randomized order. as mentioned above, just outside the edge of the potential grid is a layer of cells that implement the boundary conditions. in this example, they implement born / von - karman periodic boundary conditions. that is, given a universe of n rows by m columns, row n + 1 is constrained to equal row 1, and column m + 1 is constrained to equal column 1. you can think of this as a torus, where the top edge of the n×m grid joins the bottom, and the left edge joins the right. equivalently, you can imagine tiling an infinite region with copies of the n×m grid, subject to the constraint that corresponding cells have the same value in every tile. below the potential grid is a graph with many traces ; each trace shows the potential as a function of x, while different traces show different y values ( rows ). clicking on one of the traces highlights the corresponding row. this may help you locate extremal values. below the field grid is a similar graph with many traces. you can make the universe bigger by adding more rows and columns if you like ; use the " fill across " and " fill down " features to propagate the vacuum formula into
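The checkerboard sequence mentioned above (update all the "black" cells, then all the "white" cells) is simple to express once you control the update order yourself. A sketch reusing the phi array and electrode mask from the previous snippet; the function name is ours.

    import numpy as np

    # Red-black (checkerboard) update: sweep one colour, then the other, so
    # information spreads symmetrically rather than following the
    # spreadsheet's arbitrary evaluation order.
    def checkerboard_step(phi, electrode):
        rows, cols = np.indices(phi.shape)
        for parity in (0, 1):                      # black squares, then white squares
            mask = ((rows + cols) % 2 == parity) & ~electrode
            avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
            phi[mask] = avg[mask]
        return phi

    # usage: phi = checkerboard_step(phi, electrode)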
subdomain_id: subdomain_quantum_simulation | similarity_score: 0.612685 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 2 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.286995
values. below the field grid is a similar graph with many traces. you can make the universe bigger by adding more rows and columns if you like ; use the " fill across " and " fill down " features to propagate the vacuum formula into the new cells. beware : you must fill from a vacuum cell that is not adjacent to the newly - added cells or the results will be incorrect. you could extend this calculation to d = 3, removing any assumption of translational symmetry. one possible brute - force solution would be to make a spreadsheet with 29 different 29x29 grids and put the appropriately - generalized formula in them. on the other hand, when the problem gets this complicated, you ’ re probably better off using a more sophisticated programming language, such as c + +. reference 2 is similar to reference 1, but has several additional features. for one thing, it uses a fancier formula in the vacuum cells. it uses a technique called “ over - relaxation ” to improve the speed of convergence. this is described at e. g. reference 3. basically the idea is to figure out how big a step the simple relaxation algorithm would have ‘ taken, and take a step larger than that by a factor of gamma, in hopes of moving more quickly towards the final result. gamma = 1 corresponds to the plain old relaxation algorithm, with no over - relaxation. values between 1 and 2 make sense. ( if gamma were set greater than 2, the electrostatic energy would increase at every step, so the algorithm would not converge. ) the value of gamma is controlled by a cell near the top right of the potential grid. more generally, reference 4 describes a fancy fortran program for doing calculations of this sort. if you ’ re interested in such things, take a look there. reference 2 has another cute little feature, the “ gate ” cell at the lower right of the potential grid. setting it to zero sets the vacuum potential to zero everywhere. setting it back to a nonzero value allows the potentials to be recalculated. this is convenient if you just want to watch how the solution propagates. it is also invaluable for recovering from the following situation : if you enter an invalid expression into a cell in or near the vacuum, the spreadsheet will be unable to calculate the neighboring cell values, and the problem will spread from cell to cell like a disease. as mentioned above, all the potential grids in reference 1 and reference 2 implement periodic boundary conditions – also
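Over-relaxation as described (take the step the plain relaxation algorithm would have taken and scale it by gamma between 1 and 2) can be bolted onto the same checkerboard sweep. A sketch; gamma = 1.8 is an arbitrary choice within the stated range.

    import numpy as np

    # Over-relaxation on a red-black sweep: gamma = 1 reproduces plain
    # relaxation, values between 1 and 2 speed convergence, gamma >= 2 diverges.
    def sor_step(phi, electrode, gamma=1.8):
        rows, cols = np.indices(phi.shape)
        for parity in (0, 1):                       # update in place, colour by colour
            mask = ((rows + cols) % 2 == parity) & ~electrode
            avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
            phi[mask] += gamma * (avg - phi)[mask]  # plain step scaled by gamma
        return phi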
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.63584 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 3 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.288105
vacuum, the spreadsheet will be unable to calculate the neighboring cell values, and the problem will spread from cell to cell like a disease. as mentioned above, all the potential grids in reference 1 and reference 2 implement periodic boundary conditions – also known as born / von - karman boundary conditions. periodic boundary conditions are not the only possible choice. another option to have a hall of mirrors. that is, imagine that just to the left of the model universe there is a mirror - image copy of itself. then impose periodic boundary conditions on the pair ( with the appropriate double - length period ). do the same in the vertical direction. you can turn on this feature in the advanced spreadsheet by putting a nonzero value in the cell labeled “ hall of mirrors ” near the lower - right corner of the potential grid. the hall - of - mirrors condition has an interesting property : it causes the directional derivative of the potential, in the direction perpendicular to the edge, to be zero at the edge of the universe. for some applications, for instance if you are trying to model the “ self - capacitance ” of some object, the hall - of - mirrors boundary condition may approximate the desired physics better than periodic boundary conditions would. in reference 2, over on the lower right below the main charge - density grid, there is a pair of smallish grids labeled “ charge conservation ”. they serve to illustrate the principle of global charge neutrality and local charge conservation. the pair consists of a potential grid and the corresponding charge - density grid. in this potential grid, you can put an arbitrary arrangement of values in the cells. no matter what you do, no matter how weird the potential - arrangement is, the total charge ( i. e. the sum over the charge - density grid ) comes out zero, provided you don ’ t mess with the periodic boundary conditions. it is easy to see why this must be so : we calculate the charge by convolving the operator ( a + b + c + d−4w ) with the potential grid. every nonzero potential cell contributes to the convolution grid five times : once as a, once as b, once as c, once as d, and once ( weighted by - 4 ) as w. if you add those five contributions, you get zero every time. ( there may be small discrepancies due to roundoff errors, which we ignore. ) the cells in this little grid are just numbers. we do not run the relaxation algorithm
subdomain_id: subdomain_quantum_materials | similarity_score: 0.626757 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 4 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.289194
if you add those five contributions, you get zero every time. ( there may be small discrepancies due to roundoff errors, which we ignore. ) the cells in this little grid are just numbers. we do not run the relaxation algorithm on them. this should make it clear that the global charge neutrality, in this model system, has nothing to do with the relaxation algorithm. you could use potential values from the relaxation algorithm, or from some other algorithm, or from a random - number generator, and the total charge in the universe would still be zero. no algorithm can change this zero. this zero can be seen as a manifestation of gauss ’ s law. we can consider the edge of the universe to be a gaussian pillbox. the periodic boundary condition ensures that whatever field lines leave the top of the universe re - enter the bottom of the universe. therefore there is no net flux flowing into the universe. ( in the example, the field happens to be zero at the edge, making it extra - obvious that there is no net flux. ) since there is no net flux, the net charge on the interior must be zero. the validity of gauss ’ s law depends on the structure of the operator ( a + b + c + d−4w ) and not much else. its applicability depends on the boundary condition for the universe itself. global charge neutrality automatically implies global conservation of charge. global conservation is vaguely interesting, but it is important in physics, however, to have a local conservation law. here ’ s why : suppose some charge unaccountably disappeared from my lab. it would give me little comfort to be told that it reappeared in some unknowable distant part of the universe ; i would be unable to distinguish non - local conservation from from non - conservation. fortunately, our model system does have a local conservation law. if you increase the potential in any one cell, it causes an increase in the charge - density in the corresponding cell — but this increase is exactly counterbalanced by a decrease in the four neighboring cells ( not in some goofy distant cells ). again, this depends on the structure of the laplacian, not on the update algorithm. just below the aforementioned pair of grids is yet another pair of smallish grids, labeled “ gauge invariance ”. as in most of the other grids, i have imposed born / von - karman periodic boundary conditions. as before, this exhibits global charge neutrality and local charge conservation. this grid is
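The global-neutrality argument (every potential cell enters the convolution with weights +1, +1, +1, +1 and -4, which sum to zero) is easy to confirm numerically, even for random potentials. A small sketch assuming NumPy and periodic boundaries.

    import numpy as np

    # For any potential grid with periodic boundary conditions, the total
    # "charge" from the discrete Laplacian (a + b + c + d - 4w) sums to zero
    # up to roundoff, no matter how the potentials were produced.
    rng = np.random.default_rng(1)
    phi = rng.normal(size=(29, 29))                 # arbitrary, even random, potentials

    charge = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
              np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)

    print(charge.sum())    # ~1e-14: zero up to roundoff, as the passage argues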
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.614564 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 5 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.290180
namely d = 3 with translational symmetry in the z direction, where the laplacian was ( d / dx ) 2 + ( d / dyf ) 2 ; we knew the z - derivative was zero. ) in the cells of the spreadsheet, i have simplified the formula by observing that ( 1 / r ) ( d / dr ) is equal to ( 1 / x ) ( d / dx ) on the slice of interest, by cancellation of a factor of sign ( x ). in this spreadsheet there is a fourth grid, just to the right of the grid that shows the charge per unit volume. it shows the charge per unit area ( dr∧dz ) in a ring. you can find the total charge on an object by summing the numbers in this grid. there is no point in summing the numbers in the charge - per - unit - volume grid ; that doesn ’ t make sense for several reasons, including dimensional analysis. to improve the accuracy, i use a smart estimate of the quantity ( 1 / r ) ( d / dr ). in particular, i take the arithmetic mean of the left - hand difference ( w−b ) / x1 and the right - hand difference ( c−w ) / x2 ; this accounts for an important nonlinearity because the radius is different in the two denominators. validity checks : i verified that a region with a log ( r ) potential produces zero charge density, with high accuracy. i also checked that the field calculation and charge calculation are automatically gauge invariant, because of the structure of the lapacian operator. i implemented periodic boundary conditions in the z direction, and this is the default behavior. i also implemented hall - of - mirrors boundary conditions, which you can optionally use instead. in the r direction, there is only one choice : the perpendicular component of the electric field vanishes on this boundary. this is reminiscent of the hall - of - mirrors boundary condition, but there is no physical interpretation in terms of tiling the universe. instead, this can be viewed as surrounding the region of interest, at each z level, with an annulus extending to infinity. the potential on this annulus depends on z but is independent of r. this means that outside the region of interest, there will be zero charge, although there will be nonzero fields. these fields seem a bit unphysical. to make these fields go away, you can arrange that the potential at the large - r boundary is independent of
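The "smart estimate" of (1/r) dphi/dr, that is, the arithmetic mean of the left-hand difference over its radius and the right-hand difference over its radius, can be written as a tiny helper. The cell values and radii in the example call are made up.

    # Mean of the left-hand difference (w - b)/x1 and the right-hand
    # difference (c - w)/x2, which accounts for the different radii in the
    # two denominators, as described in the passage.
    def radial_term(b, w, c, x1, x2):
        left = (w - b) / x1
        right = (c - w) / x2
        return 0.5 * (left + right)

    print(radial_term(b=0.9, w=1.0, c=1.2, x1=1.5, x2=2.5))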
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.631462 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | chunk_index: 8 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.293352
anti de - sitter space bubbles, filaments, voids and sheets condensed matter system cosmic microwave background deep field survey degrees of freedom grand unification theory heisenberg uncertainty principle hubble ' s law and constant intercommuting and loop production laws of thermodynamics nematic liquid crystal quantum field theory speed of light strong and electroweak forces surface of last scattering the great attractor theory of everything when dealing with geometries that take place within the universe, we deal not with conventional three - dimensional euclidean geometry, we have to adapt it to represent a four - dimensional spacetime. this results in what is known as a lorentzian manifold. within this geometry, we deal with three types of space, de - sitter space, anti de - sitter space and minkowski space. they are analogues of spherical, hyperbolic and euclidean space with regards to four - dimensional spacetime. this is a type of hypothetical particle of zero electrical charge that has come out of the framework of quantum chromodynamics. it is hypothesised that these were created during the very early universe. they have little mass and do not easily interact with normal matter. no experimental evidence for them exists as of yet, but they are one of the possible contenders for dark matter. a baryon is a category of subatomic particle which is composed of three quarks. this is opposed to a meson, which is composed of one quark and one antiquark. baryons include protons and neutrons and make up the majority of the mass of visible matter in the universe ( i. e. the mass of the universe that is not dark matter or dark energy ). they participate in the strong nuclear force. about thirteen billion years ago, the universe began in a gigantic explosion. every particle started rushing apart from every other particle in an early super - dense phase. the fact that galaxies are receding from us in all directions is a consequence of this initial explosion. projecting galaxy trajectories backwards in time means that they converge to a high - density state. this is one of the possible ends to the universe as we know it. cosmic inflation is expands the universe and gravitation brings matter together. depending on the density of the universe, one of these forces may overcome the other, or alternatively the universe may be of critical density, which would result in a " flat " universe. if the universe has a density higher than this critical density, then gravitation will eventually overcome the forces working to
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.647172 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d> | chunk_index: 0 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.830855
. it must emit this radiation across all possible wavelengths and frequencies and is must also absorb all possible wavelengths and frequencies, which means that it can emit radiation at infinite wavelength. named after indian physicist satyendra nath bose, these are particles with full integer spin, i. e. 1, 2, 3. ( as opposed to fermions, which possess half integer spin ). there are two categories of fundamental boson ( bosons not composed of a combination of other particles ) ; gauge bosons, which mediate the fundamental forces of nature ; and scalar bosons, which are constituents of a scalar field, and include the elusive higgs boson. bosons can also be created from other particles whose spin totals an integer, for example, any meson. brane inflation uses fundamental object of string theory, called branes. in this theory, the universe is a three dimensional slide ( a brane ) in a high dimensional space ( the bulk ), which may also contain other branes. these slices of spacetime have mass and can attract each other by gravity, so two almost parallel branes separated by some distance will start moving towards each other. in brane inflation, the closer the two branes get to each other, the more the branes expand, giving rise to inflation. the process ends with the violent collision of the branes, leading to the copious production of radiation and relativistic particles. hence, the new brane resulting from the collision is filled with a hot plasma, which is the starting point of the standard big bang model. there is another prediction in the model : the collision is also accompanied by the production of cosmic strings. these are all types of large - scale structure formed from galactic distribution in the universe. galaxies form clusters and superclusters which arrange into sheets and filaments through the universe. between these sheets of galaxies, there is very low galaxy density, which leads to voids. these fill approximately 90 % of space. bubble nucleation is a form of first - order phase transition. a phase transition occurs when temperatures and densities increase such that matter changes its form and properties, such as in the very early universe, during the big bang. a simple analogy is water, which melts from ice to liquid, and then boils to gas as temperatures increase. for physicists, it is important to note that as the temperature increases, the symmetry of the matter increases. thinking this through, we know that gas is more symmetry than
subdomain_id: subdomain_quantum_field_theory | similarity_score: 0.641615 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d> | chunk_index: 2 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.833233
colour is a degree of freedom that allows quarks to exist together to form hadrons, such as protons or neutrons, in otherwise identical quantum states. this is necessary as otherwise they would be in violation of the pauli exclusion principle, which states that no two identical fermions may occupy identical quantum states simultaneously. comoving distance is the distance between two objects as it appears if the expansion of the universe is factored out. at any given time, it is equal to the proper distance, which is the actual distance between two objects, and will change over time due to the expansion of the universe. the comoving horizon is therefore the actual distance to the edge of what we can see at any given time. condensed matter systems deal with, as the name suggests, condensed matter. this includes matter in the liquid, solid, superconducting phases. condensed matter systems can be used to study the effects of phase transitions on matter. around 370, 000 years after the big bang, the temperature of the universe dropped sufficiently for electrons and protons to combine into hydrogen atoms : p + e = h. from this time onwards, radiation was effectively unable to interact with the background gas, so it has propagated freely ever since, while constantly losing energy as its wavelength is being stretched by the expansion of the universe. originally, the radiation temperature was about 3000 degrees kelvin ( i. e. approximately 3300 degrees celsius, 5000 degrees fahrenheit ), whereas today it has fallen to only 3k. observers detecting this radiation today are able to see the universe at a very early stage. photons in the cmb have been travelling towards us for over ten billion years, and have covered a distance of about a million, billion, billion miles. the cmb was discovered in 1964. these are one - dimensional ( that is, line - like ) objects which form when an axial or cylindrical symmetry is broken. strings can be associated with grand unified particle physics models, or they can form at the electroweak scale. they are very thin and may stretch across the visible universe. a typical gut ( grand unified theory ) string has a thickness that is less then a trillion times smaller that the radius of an hydrogen atom. still, a 10 km length of one such string will weigh as much as the earth itself! originally proposed by einstein as a modification to general relativity to result in a universe which would neither expand nor contract. he later famously called it his greatest mistake after hubble discovered that other galaxies were moving away
subdomain_id: subdomain_quantum_materials | similarity_score: 0.681443 | token_count: 512 | source_dataset: HuggingFaceFW/fineweb-edu | source_id: <urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d> | chunk_index: 5 | filtering_threshold: 0.6 | created_at: 2025-12-25T21:32:40.838789
certain particles with the same values for their varying degrees of freedom ( i. e. spin, charge etc. ) cannot exist in the same place at the same time. an object which generates a magnetic field emanates that field from two opposite poles, an example of this would be a bar magnet, which has a north and south pole. each of these poles is a magnetic monopole. the magnet itself, having two of these poles, is a dipole. this is similar to an electric field, in which the field emanates from positive and negative charges. whereas in electricity, negative and positive charges can be easily isolated in the form of electrons and positrons, magnetic monopole particles have yet to be discovered. for example, when you break up a bar magnet, you do not isolate the two monopoles, you simply have two bar magnets half the size of the previous one. these are two - dimensional objects that form when a discrete symmetry is broken at a phase transition. a network of domain walls effectively partitions the universe into various ' cells '. domain walls have some rather peculiar properties. for example, the gravitational field of a domain wall is repulsive rather than attractive. when the source of a wave moves away from us, we observe a change of frequency of that wave. an example would be an ambulance or fire - truck - we hear a lower pitch in its siren once it has passed us by. this is the doppler - shift. it is not, however, limited to sound waves, but any kind of waves, including electromagnetic. ( b. 1879 d. 1955 ), was a german theoretical physicist who spent much of his career at the kaiser wilhelm institute for physics and princeton university. he is regarded as one of the greatest physicists of the 20th century, and indeed, one of the most academically brilliant minds of all time. awarded the nobel prize in physics in 1921, for his work on the photoelectric effect where he described photons as discrete packets, known as quanta. this was in direct conflict with previous, classical descriptions of physics which defined photons as wave. his theories are now the basis of modern physics. these theories, whilst too numerous to list here, include special relativity, which describes how relative motion can result in different laws of physics being experienced by different observers as well as the energy - mass equivalence relationship, e = mc2, and general relativity, which generalises special relativity with respect to gravity and incorporates this with newtonian laws of gravity to describe a
subdomain_quantum_materials
0.650745
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
8
0.6
2025-12-25T21:32:40.842891
in different measurements of time and distance being made by different observers, as well as the energy - mass equivalence relationship, e = mc2, and general relativity, which generalises special relativity with respect to gravity and incorporates this with newtonian laws of gravity to describe how gravity is a geometric property of spacetime. when hitler came to power in 1933, he was on a trip to america and did not return to germany, instead opting to become an american citizen. his warning to president roosevelt about the german research into nuclear weapons led to the eventual development of the atomic bomb, a weapon he later denounced and crusaded against. such was einstein ' s genius that upon his death his brain was removed for future study. an elementary particle carrying a negative elementary electric charge ( that is, the most fundamental electric charge, particles do not carry charge smaller than this ). a fermion with spin 1 / 2. it is a lepton and therefore is a constituent of matter, but does not participate in the strong nuclear force. it does interact with electromagnetism, gravitation and the weak nuclear force. energy unit equal to approximately 1. 6 x 10^-19 joules. it is the amount of energy gained by the charge of one electron as it moves across a one volt electric potential difference. a period in time. in cosmology it is used to refer to different time periods in the chronology of the universe. these include the planck epoch ; the grand unification epoch ; the electroweak epoch ; the quark epoch ; the hadron epoch ; the lepton epoch ; and the photon epoch ( all of the epochs prior to the photon epoch occurred within the first 10 seconds of time! ). time periods after this include nucleosynthesis, recombination and reionization. this is the speed required for any object to break free of another object ' s gravitational field. for the earth, this is approximately 7 miles per second ( about 11. 2 kilometres per second ). mathematically, it is described as the velocity at which the escaping object ' s kinetic energy and gravitational potential energy sum to zero. as the gravitational force exerted by an object on another object increases as the distance between the two decreases, the further away the escaping object is, the lower the escape velocity. for black holes, at the distance known as the event horizon, the escape velocity is greater than the speed of light, and therefore nothing can escape. eternal inflation refers to a series of models by which at least one region of the universe is undergoing inflation at any one point in
subdomain_quantum_field_theory
0.649792
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
9
0.6
2025-12-25T21:32:40.843906
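the escape velocity quoted above can be checked with the usual formula v = sqrt ( 2gm / r ), which follows from setting kinetic plus gravitational potential energy to zero. the python sketch below uses standard approximate values for the gravitational constant and for the mass and radius of the earth ; it is an illustration under those assumptions, not a precise calculation.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (approximate)
M_EARTH = 5.972e24   # mass of the earth, kg (approximate)
R_EARTH = 6.371e6    # mean radius of the earth, m (approximate)

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """speed at which kinetic energy + gravitational potential energy = 0."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"earth escape velocity ~ {v / 1000:.1f} km/s "
      f"(~ {v / 1609.34:.1f} miles per second)")

# the electron volt conversion mentioned above, for reference:
EV_IN_JOULES = 1.602e-19
print(f"1 ev ~ {EV_IN_JOULES} j")
```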
the distance known as the event horizon, the escape velocity is greater than the speed of light, and therefore nothing can escape. eternal inflation refers to a series of models by which at least one region of the universe is undergoing inflation at any one point in time. due to the exponential increase in volume during these periods of inflation, it is theorised that at any given point the majority of the volume of the universe is still expanding. this creates a multiverse, whereby each expanding area of the universe appears to be its own universe, with the beginning of its period of expansion equivalent to the big bang. in eternal inflation it is possible for these expanding areas of space to decay into a lower energy phase, resulting in inflation ceasing. named after euclid, a greek mathematician of the third century bc. it is a system of geometry based around the geometry of the three dimensions that we are all taught at school ; x, y and z. points within the system can be described by a set of cartesian coordinates. it is described by a system of postulates, or premises, for example, the parallel postulate, which states that " if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles ". in contrast to this is non - euclidean geometry, which deals with curved space. the event horizon is the boundary that marks the point where the escape velocity of a black hole exceeds the speed of light. once the event horizon has been crossed, nothing can escape from the black hole ' s gravitational pull, not even light. exotic particles are those made up of theorised particles not currently part of the standard model. an example of this would be the heavier partners of the current set of particles that make up the standard model, that are described within the theory of supersymmetry. full title : the fermi national accelerator laboratory. located near chicago, il., it is a united states department of energy laboratory focussed on high - energy physics. until 2011, it housed the tevatron particle accelerator, which, until the opening of the large hadron collider at cern, was the largest in the world. in 1995 work done at the tevatron led to the discovery of the top quark, one of the six different flavours of quark, and the most massive of them all. these are particles with half integer spin. this is
subdomain_quantum_field_theory
0.612084
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
10
0.6
2025-12-25T21:32:40.844972
world. in 1995 work done at the tevatron led to the discovery of the top quark, one of the six different flavours of quark, and the most massive of them all. these are particles with half integer spin. this is opposed to bosons, which have full integer spin. no two identical fermions can occupy the same quantum state and space at any given time ; this is known as the pauli exclusion principle, and does not apply to the other class of particles, bosons. elementary fermions ( those not composed of other particles ) are constituents of visible matter in the universe, and include electrons and quarks. particles composed of fundamental fermions, however, can have full integer spin and therefore can be classed as bosons. a ferromagnet is an object which exhibits the property of ferromagnetism. ferromagnetism is the strongest type of magnetism, and as such ferromagnets are the magnets that the average reader will be familiar with. they are the ones used in physics classes at school, they are the ones used to pick up scrap metal, they are the magnets on your fridge. ferromagnetism is the only type of magnetism that has the strength to produce a force that can be felt. a ferromagnet can be defined as a material that can exhibit a net magnetic moment in the absence of an external magnetic field. ( b. 1918 - d. 1988 ), was a physicist who spent most of his life working at the california institute of technology ( caltech ). also worked on the manhattan project at los alamos national laboratory, where he helped develop the atomic bomb. won the nobel prize in physics in 1965 for his work in quantum electrodynamics ( qed ). developed the path integral formulation that we use today, and developed an illustrative representation scheme for the behaviour of subatomic particles which has become known as feynman diagrams. caltech has a named chair of physics in his honour. outside of his life in physics, he also was a member of the panel that investigated the space shuttle challenger disaster, and wrote two popular science books : " surely you ' re joking, mr. feynman! " and " what do you care what other people think? ". there are four fundamental forces in nature. they are electromagnetism, the weak nuclear force, the strong nuclear force and gravitation. the weak nuclear force is associated with radioactivity
subdomain_quantum_materials
0.71751
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
11
0.6
2025-12-25T21:32:40.845982
what do you care what other people think? ". there are four fundamental forces in nature. they are electromagnetism, the weak nuclear force, the strong nuclear force and gravitation. the weak nuclear force is associated with radioactivity in unstable nuclei, specifically the decay of a neutron into a proton in the form of beta radiation. the gauge bosons that mediate the force are the w and z bosons. this interaction can cause quarks to change flavours. the strong nuclear force binds together quarks to form nucleons, in turn, it also acts to bind these nucleons together, forming atomic nuclei. the force is mediated by an exchange of gluons, which are a type of gauge boson. the charge associated with this force, analogous to the electric charge associated with electromagnetism, is the colour charge, of which there are three varieties ; red, green and blue. the mathematical theory describing the elementary particles interacting with this force, quarks and gluons, is known as quantum chromodynamics ( qcd ). at atomic levels, it is by far the strongest of all forces, but only interacts on a scale on the order of 10^-15 m, and therefore, whilst incredibly important for the formation of matter, does not play any observable role in day to day life. electromagnetism is a force associated with the electric charge carried by certain particles. along with gravitation, it is one of the four forces that has a major noticeable effect on day to day human life. it manifests as two different fields, electric fields and magnetic fields, although they are aspects of the same force and therefore interact with each other through electromagnetic induction. the gauge boson that mediates this force is the photon, which is also the quanta ( discrete packet ) of light and other forms of electromagnetic radiation, such as infra - red radiation ( most thermal radiation ), x - rays, ultraviolet radiation etc. gravitation is a force of attraction between two massive bodies. objects on earth are attracted to the earth via gravitation, which is why, when an apple falls from a tree, it falls down towards the earth, instead of in any other direction. gravitation also gives weight to objects, weight being the mass of an object multiplied by the gravitational acceleration acting upon it due to another object. gravitation on a universal scale is described by einstein ' s theory of general relativity, where it is described as being
subdomain_quantum_field_theory
0.667692
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
12
0.6
2025-12-25T21:32:40.846939
allows us to build up a three dimensional map of the sky, which allows us to gain insight into the large - scale structures within the universe. a gauge group is a set of gauge transformations which affect a system in similar manners. a gauge transformation is a transformation that acts on redundant degrees of freedom within a system, that is, it affects a property that does not really have any physical significance at the level at which the system operates. a gauge transformation which is globally symmetric affects all points of space in the same way. an example of this would be a transformation of voltage that states that voltage1 = voltage2 + c ( a constant ). if we substitute the left hand side of the equation with the right in classical equations dealing with electromagnetism, there is no difference in the outcome and therefore this will hold across any difference in voltage. if we impose a local symmetry on the gauge transformation, also known as gauge invariance, then these transformations become very significant. this is because the transformation holds true, but the transformation is now a function of the position in space and time. through introducing these conditions of gauge invariance into quantum equations, one can extrapolate that for particles that interact with fundamental forces, such as the electron, which carries electrical charge and is acted upon by electromagnetism, there is an underlying field which is also undergoing a gauge transformation. in the case of the electron, it is the electromagnetic field, which physicists were already aware of, however, gauge invariance has postulated the gluon field which is the basis for quantum chromodynamics, the theory which explains the strong nuclear force. this is the modern geometric description of gravity. it says that the gravitational force is related to the curvature of spacetime itself, i. e. to its geometry. to this end, it generalises einstein ' s theory of special relativity, and links it to newton ' s laws of gravity. unlike for non - gravitational physics, spacetime is not just an arena in which physical processes take place, but it is a dynamical field. the gravitational field at a fixed time can be described by the geometry of the three spatial dimensions at that time. these are gauge bosons which mediate the strong nuclear force, one of the fundamental forces. similar to the photons which mediate the electromagnetic force, gluons have no rest mass and so travel at the speed of light. although unlike photons, which whilst mediating the electromagnetic force, are themselves electrically neutral, gluons
subdomain_quantum_field_theory
0.732701
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
14
0.6
2025-12-25T21:32:40.849188
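the voltage example above can be made concrete : shifting every potential in a circuit by the same constant ( a global gauge transformation ) leaves all physically meaningful potential differences unchanged. the snippet below illustrates only that global symmetry, with made - up numbers.

```python
# global 'gauge' shift: v -> v + c leaves every potential difference unchanged.

def potential_differences(voltages):
    """pairwise differences, the physically meaningful quantities."""
    return [v2 - v1 for v1 in voltages for v2 in voltages]

voltages = [1.5, 9.0, 12.0]               # arbitrary node potentials, volts
shifted = [v + 100.0 for v in voltages]   # add the same constant everywhere

assert potential_differences(voltages) == potential_differences(shifted)
print("potential differences are unchanged by a constant shift")
```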
. similar to the photons which mediate the electromagnetic force, gluons have no rest mass and so travel at the speed of light. although unlike photons, which whilst mediating the electromagnetic force, are themselves electrically neutral, gluons have charge associated with the strong nuclear force, or colour. there are 8 different colours of gluon. gluons are confined within hadrons, particles made up of quarks ( which have a colour charge ), and are limited in interaction to a distance of approximately 10^-15 metres. see grand unified theory. in the aftermath of the big bang, the universe was extremely hot and extremely dense. at these energies, the laws of nature that we know were changed. the fundamental forces that we see in nature were unified - the universe was in a state of grand unification - it is only as the universe expanded and cooled that gravitation, electromagnetism and the strong and weak nuclear forces all ceased to be as one. electroweak theory describes the unification of the weak nuclear force and electromagnetism. a grand unified theory will marry up electroweak theory with the strong nuclear force, bringing us closer to a unification of the four fundamental forces. gravitational waves are propagating disturbances in spacetime. the effect of a passing gravitational wave is to periodically stretch and compress space in the two directions perpendicular to the direction of propagation. the expected strain on the earth due to these disturbances, which can be caused by black holes merging, is very small, making detection extremely difficult. this is an as yet undiscovered particle that is believed to mediate the force of gravitation. much like the photon, which mediates the electromagnetic force and the gluon which mediates the strong nuclear force, it has no mass, and therefore travels at the speed of light. it has a spin quantum number of 2, and is the only massless particle with that spin number. it has zero electrical charge. experimentally, the graviton is incredibly difficult to observe, and is beyond the reach of current physics. the detection of gravitational waves may lead to some further information about gravitons, but these have not yet been detected. theories of quantum gravity are one of the largest standing issues in cosmology, and there are currently few mathematically consistent theories that can explain it. one of these theories is m - theory, which we believe to be the best explanation at this point in time. this is a type of blackbody radiation emitted by
subdomain_quantum_field_theory
0.67392
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
15
0.6
2025-12-25T21:32:40.850063
should contain at most one degree of freedom per planck area. within m - theory, the holographic principle suggests we are the shadows on the wall. the ' room ' is some larger, five - dimensional spacetime and our four - dimensional world is just the boundary of this larger space. if we try to move away from the wall, we are moving into an extra dimension of space - a fifth dimension. ( b. 1889 - d. 1953 ) was one of the main figures of astronomy in the 20th century. using the hooker 100 inch telescope at mount wilson observatory in california, discovered that galaxies are receding away from us and from each other via the changes in frequency that they exhibit - the shifting of frequency of electromagnetic emissions to the red end of the spectrum. this realisation was crucial as evidence for an expanding universe, which, if reversed, supports the notion of a big bang at the beginning of the universe. famously not awarded the nobel prize on the basis that at the time, research in astronomy was not eligible for the nobel prize in physics. hubble ' s law states that all objects in deep space ( i. e. galaxies ) are receding away from us and each other ( as can be seen by the fact that they are doppler - shifted ), and that the velocity of this recession is proportional to their distance away from the earth and other astral bodies. it is summarised mathematically by the equation : v = h0 d, where v is the recession velocity, h0 is the hubble constant and d is the distance away from us that the body is. h0 has an approximate value of 70 km s^-1 mpc^-1 ( kilometers per second, per megaparsec ), but there is disagreement over its precise value. according to the theory of inflation, the early universe expanded exponentially fast for a fraction of a second after the big bang. a simple model for the expansion of the universe is to consider the inflation of a balloon. a person at any point on the balloon might consider himself or herself to be at the centre of the expansion, as all neighbouring points are getting further away. during inflation the universe expanded by a factor of about e^60, which is roughly 10^26. this number is a one followed by 26 zeros. it transcends normal political / economic discussions of inflation. this is a hypothetical particle and scalar field associated with the inflation of the universe that occurred moments after the big bang. it is theorized that this occurred because of a phase transition
subdomain_quantum_field_theory
0.628647
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
17
0.6
2025-12-25T21:32:40.854765
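hubble ' s law is simple enough to evaluate directly. the sketch below uses the approximate value of h0 quoted above ( about 70 km s^-1 mpc^-1 ) ; measured values differ between methods, so the numbers are purely illustrative.

```python
# hubble's law: v = h0 * d, with velocity in km/s and distance in mpc.

H0 = 70.0  # km s^-1 mpc^-1, approximate value as quoted above

def recession_velocity_km_s(distance_mpc: float, h0: float = H0) -> float:
    """recession velocity of a galaxy at the given comoving distance."""
    return h0 * distance_mpc

for d in (1.0, 10.0, 100.0):
    print(f"galaxy at {d:6.1f} mpc recedes at ~ {recession_velocity_km_s(d):8.1f} km/s")
```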
surfaces of spheres with the three geometry slicing the sphere in half. they can be used to calculate the quantum process of universe creation, which cannot be described using classical general relativity. they only usually exist for small three geometries, corresponding to the creation of a small universe. note that the concept of time does not arise in this process. universe creation is not something that takes place inside some bigger spacetime arena - the instanton describes the spontaneous appearance of a universe from literally nothing. once the universe exists, quantum cosmology can be approximated by general relativity so time appears. these are properties exhibited by cosmic strings. intercommuting refers to a process whereby strings exchange ends whenever they meet. a loop is produced whenever a string intercommutes with itself. although cosmic strings have not been detected, this process of intercommuting can be seen in certain liquid crystals. an interferometer is a machine that uses a process of wave interference to learn about the waves in question. that is, the waves are superimposed upon themselves to discover their properties. kaluza - klein theory is a theory that seeks to unify two of the four fundamental interactions ; gravitation and electromagnetism. a similar theory, electroweak theory, already unifies the weak nuclear force and electromagnetism. its proposals extend general relativity into five - dimensional spacetime. the si ( or base ) unit for temperature measurement. kelvin and celsius have the same magnitude of scale ( a change of one kelvin equals a change of one degree celsius ), therefore you can convert a temperature in celsius into kelvin by adding 273. 15 to the number. whereas the celsius scale was created by dividing the difference in temperature between water freezing and boiling by one hundred and labelling the freezing point of water as 0, 0 kelvin is the point described by lord kelvin ( after whom the unit is named ) as ' infinite cold ', or absolute zero. this is the mechanism by which cosmic topological defects form during a phase transition. causal effects in the early universe can only propagate at the speed of light. this means that at a time t, regions of the universe separated by more than a distance d = ct can know nothing about each other. in a symmetry breaking phase transition, different regions of the universe will choose to fall into different minima in the set of possible states. topological defects are precisely the ' boundaries ' between these regions with different choices of minima, and their formation is therefore an inevitable consequence of the fact that different regions cannot agree on their choices. these are laws which define the
subdomain_quantum_field_theory
0.688993
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
19
0.6
2025-12-25T21:32:40.857042
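since the kelvin and celsius scales differ only by an offset of 273. 15, the conversion is a one - line calculation ; a minimal python sketch :

```python
# kelvin and celsius differ only by an offset of 273.15.

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

def kelvin_to_celsius(t_k: float) -> float:
    return t_k - 273.15

print(celsius_to_kelvin(0.0))     # 273.15 (freezing point of water)
print(celsius_to_kelvin(100.0))   # 373.15 (boiling point of water)
print(kelvin_to_celsius(0.0))     # -273.15 (absolute zero)
```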
the set of possible states. topological defects are precisely the ' boundaries ' between these regions with different choices of minima, and their formation is therefore an inevitable consequence of the fact that different regions cannot agree on their choices. these are laws which define the fundamental physical properties which characterize thermodynamic systems. these are temperature, energy and entropy ( a property that drives systems towards equilibrium ). they are : the zeroth law : if two systems are in thermal equilibrium with a third, they must be in thermal equilibrium with each other also. the first law : heat and work are forms of energy transfer. this is the law of the conservation of energy. internal energy in a closed system may change if heat or work are transferred in or out of the system. the second law : the entropy of any isolated system not in thermal equilibrium almost always increases. that is, an isolated system will work towards thermal equilibrium. the third law : the entropy of a system approaches a constant value as the temperature approaches zero. this is not, despite the name, a measure of time, but rather a measure of length. it is the length that light will travel in a vacuum in a year, that is 365. 25 days. its exact value is 9, 460, 730, 472, 580, 800 metres, but is approximately given by 9. 4607 x 10^15 m. this is calculated by multiplying the number of days ( 365. 25 ) by the number of seconds in each day ( 86, 400 ) and then multiplying that by the speed of light in a vacuum, which is 299, 792, 458 metres per second. in a mathematical function, the highest and lowest values of that function, over the domain of said function, are defined as the maximum or minimum points respectively. a local minimum or maximum value is defined by taking the highest or lowest value in the function over only part of the domain. an example of a function with several local minima and maxima would be a graph of sin ( x ), which has no overall maximum or minimum value, but several local maxima and minima of equal respective values. an object ' s ( in our context, an astronomical object ) brightness as measured by the flux, or intensity of electromagnetic radiation, that the object gives out. during the radiation era, shortly after the big bang, the universe consisted of free moving protons, neutrons and electrons and other particles, including helium ions. all radiation was absorbed by these free electrons, making the universe opaque
subdomain_quantum_thermodynamics
0.708957
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
20
0.6
2025-12-25T21:32:40.858060
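the light year calculation described above is easy to reproduce ; the sketch below multiplies out the same three numbers and recovers the exact figure quoted.

```python
# light year = days per year * seconds per day * speed of light.

C = 299_792_458           # speed of light in a vacuum, m/s
DAYS_PER_YEAR = 365.25
SECONDS_PER_DAY = 86_400

light_year_m = DAYS_PER_YEAR * SECONDS_PER_DAY * C
print(f"{light_year_m:,.0f} m")   # 9,460,730,472,580,800 m
print(f"{light_year_m:.4e} m")    # ~ 9.4607e+15 m
```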
the object gives out. during the radiation era, shortly after the big bang, the universe consisted of free moving protons, neutrons and electrons and other particles, including helium ions. all radiation was absorbed by these free electrons, making the universe opaque. when the universe was sufficiently expanded the radiation could no longer interact with the electrons, causing the universe to become transparent. this process is called decoupling, and it marked the beginning of the matter era. electrons, now no longer absorbing radiation, instead joined with ions to form neutral atoms. through gravity, these atoms clumped together, eventually forming stars, galaxies and other stellar bodies. these are zero - dimensional ( point - like ) objects which form when a spherical symmetry is broken. monopoles are predicted to be supermassive and carry magnetic charge. the existence of monopoles is an inevitable prediction of grand unified theories ( guts ) ; this is one of the puzzles of the standard cosmology. we have five consistent string theories that can describe both the forces and the matter in our universe. we do not, however, have the tools to explore these theories over all possible values of their parameters. over the past few years, however, we have been able to explore these theories more thoroughly, and we now believe that these five string theories are all different aspects of the same underlying theory : m - theory. m - theory goes beyond string theory, in that it predicts not ten, but eleven dimensions of spacetime. the theory could have a membrane as its fundamental object, as opposed to a string ; such a membrane would look like a string when curled up in the eleventh dimension. it is for this reason that the m in m - theory originally referred to a membrane. nowadays, however, the m doesn ’ t specifically refer to anything, and can stand for mystery, or “ mother of all ”, because m - theory is still largely unknown. vast clouds of interstellar dust, hydrogen, helium and ionized gas. as the mass of a nebula grows due to the slight gravitational attraction of dust particles towards each other, the mass compacts enough to form stars. other material within the nebula, such as dust, can clump together to form planets and other planetary objects. originally, any large astronomical object was referred to as a nebula - other galaxies, in particular. a liquid crystal is a phase of matter which exhibits properties somewhere between those exhibited by a liquid and solid crystal. when viewed in high resolution, they can appear to be textured, as the molecules may be free
subdomain_quantum_materials
0.671663
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
21
0.6
2025-12-25T21:32:40.859144
nebula - other galaxies, in particular. a liquid crystal is a phase of matter which exhibits properties somewhere between those exhibited by a liquid and solid crystal. when viewed in high resolution, they can appear to be textured, as the molecules may be free to flow in a limited manner around, provided that they stay within a crystal like structure. liquid crystals are used extensively in televisions and computer screens. the nematic phase of a liquid crystal is temperature dependent. when in this phase, calamitic ( rod - like ) molecules align themselves individually roughly parallel to each other on their long - side axis, in a similar way to cigarettes in a package. the result of this is that the molecules are free - flowing within this directional order. in this phase, the crystals can show signs of intercommuting and loop production, which are properties expected to be exhibited by cosmic strings. a neutron star is formed from the collapse of a larger star which has undergone supernova. these stars, as the name suggests, are composed mostly of neutrons. neutron stars are extremely hot. they typically have masses between about 1 and 2 solar masses ( 1 solar mass is approximately 2 x 10^30 kg, which is about 333, 000 times the mass of the earth ), despite being somewhere on the order of 10^5 times smaller in radius than the sun, which makes them extremely dense. the more compact a neutron star is, the more likely it is to form a black hole. this occurs when the star ' s density becomes so great that the gravitational force it exerts on itself is greater than its internal pressure, causing a collapse into a black hole. this was developed in 1983 by stephen hawking and james hartle. describes a situation whereby the universe can spontaneously come into existence from literally nothing. once the universe exists, quantum cosmology can be approximated by general relativity so that time appears. a particle is a nucleon if it is a particle that forms an atomic nucleus. there are two nucleons : protons and neutrons. these are complicated manifolds which, like calabi - yau manifolds, may be the space in which the six extra dimensions proposed by certain string theories are found. the study of the universe up to around 10^-11 seconds after the big bang. during this time, the electroweak and strong forces were unified in a grand unified phase, which quickly changed to separate out strong and the electroweak forces. further on in time the electroweak interaction separated to become electromagnetism
subdomain_quantum_materials
0.718531
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
22
0.6
2025-12-25T21:32:40.860194
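the density comparison in the neutron star entry can be checked with a little arithmetic. the sketch below assumes a typical neutron star radius of about 10 km and a mass of 1. 4 solar masses ; these are textbook figures rather than values taken from the text.

```python
import math

M_SUN = 2.0e30       # kg, approximate solar mass (as quoted above)
R_SUN = 6.96e8       # m, solar radius (approximate)
R_NEUTRON = 1.0e4    # m, ~10 km, a typical neutron star radius (assumed)

def mean_density(mass_kg: float, radius_m: float) -> float:
    """average density of a sphere of the given mass and radius."""
    return mass_kg / (4.0 / 3.0 * math.pi * radius_m ** 3)

print(f"sun:          ~ {mean_density(M_SUN, R_SUN):.0f} kg/m^3")
print(f"neutron star: ~ {mean_density(1.4 * M_SUN, R_NEUTRON):.3e} kg/m^3")
print(f"radius ratio: ~ {R_SUN / R_NEUTRON:.0f} (order 10^5)")
```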
. during this time, the electroweak and strong forces were unified in a grand unified phase, which quickly changed to separate out strong and the electroweak forces. further on in time the electroweak interaction separated to become electromagnetism and the weak nuclear force. it is possible to reach the temperature regimes of this cosmology experimentally, allowing us to test theories. speculation, however, is still required within this time period. a mathematical approach to non - gravitational quantum theory, introduced by richard feynman of caltech. in the path integral approach, the probability that a system in an initial state a will evolve to a final state b is given by adding up a contribution from every possible history of the system that starts in a and ends in b. for this reason a path integral is often referred to as a ' sum over histories '. for large systems, contributions from similar histories cancel each other in the sum and only one history is important. this history is the history that classical physics would predict. for example, consider a system whose starting position is a ball on a non - symmetrical hill. the probability that the system will end up in the final position of the ball at the bottom of the hill on the side that is steepest is given by the summation of the probabilities of all paths that the ball could take, including going down the other side of the hill. for mathematical reasons, path integrals are formulated in a background with four spatial dimensions rather than three spatial dimensions and one time dimension. there is a procedure known as ' analytic continuation ' which is used to convert results expressed in terms of four spatial dimensions into results expressed in terms of three spatial dimensions and one time dimension. this effectively converts one of the spatial dimensions into the time dimension. this spatial dimension is sometimes referred to as ' imaginary ' time because it involves the use of so - called imaginary numbers. the path integral formulation of quantum gravity has many mathematical problems. it is also not clear how it relates to more modern attempts at constructing a theory of quantum gravity such as string / m - theory. however it can be used to correctly calculate quantities that can be calculated independently in other ways e. g. black hole temperatures and entropies. a phase transition is the change in properties and form of matter due to temperature changes. for example, water changes from solid ice to liquid water to gaseous steam or vapour. as temperature drops and phase transitions occur, the symmetry of the resulting matter is reduced - again, vapour is more symmetric than water
subdomain_quantum_field_theory
0.722694
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
23
0.6
2025-12-25T21:32:40.861729
due to temperature changes. for example, water changes from solid ice to liquid water to gaseous steam or vapour. as temperature drops and phase transitions occur, the symmetry of the resulting matter is reduced - again, vapour is more symmetric than water, which is more symmetric than ice. in terms of cosmology, when a phase transition in the early universe occurs, topological defects are formed. some of the symmetries that were broken in the early universe led to the four fundamental forces becoming discrete forces. at higher temperatures, they reunite in a unified state. the photon is an elementary particle. it is a gauge boson, in that it mediates one of the fundamental forces. in the case of the photon, it is the electromagnetic force. as mediators of the electromagnetic force, they allow us to see things through the visible light part of the electromagnetic spectrum, and are therefore often interchanged with " light ". as they have no rest mass, they are able to travel at the fastest possible speed, which is known as the " speed of light " ( 299, 792, 458 metres per second ) in a perfect vacuum. their spin is 1 and they carry no electrical charge. this is simply the planck length squared. given that the planck length is a fundamental unit of length, so too is the planck area a fundamental unit of area. this constant sets the size of energy quanta ( discrete packets of energy ) in quantum mechanics - for a wave of a given frequency, it determines the smallest amount of energy that can be exchanged. it is the proportionality constant between the energy of a photon and the frequency of the associated electromagnetic wave, as denoted in the planck - einstein equation which links the two : e = hv, where v is frequency, h is planck ' s constant and e is energy. its value is approximately 6. 62606957 x 10^-34 j s. this is the earliest period of time, from the beginning of time to 10^-43 seconds after the beginning of time. during this period, the fundamental forces of nature were all unified due to the unimaginable temperature of the universe, and it is believed that gravity was as strong as the other forces ( it is now by far the weakest of the forces ). a very, very small unit of length. its value is approximately 1. 616199 x 10^-35 m. it is a base unit within the planck unit system and it is calculated using the speed of light, c, planck ' s constant, h and the gravitational
subdomain_quantum_optics
0.692274
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
24
0.6
2025-12-25T21:32:40.862910
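the planck - einstein relation e = hv can be evaluated directly. the sketch below uses the current si value of planck ' s constant ( which differs slightly in its trailing digits from the older value quoted above ) and, as an assumed example, a green photon of wavelength 500 nm.

```python
# planck-einstein relation: e = h * v (v being frequency).

H = 6.62607015e-34        # planck's constant, j s (exact si value since 2019)
C = 299_792_458           # speed of light, m/s
EV = 1.602176634e-19      # joules per electron volt

def photon_energy_joules(frequency_hz: float) -> float:
    """energy of a single photon of the given frequency."""
    return H * frequency_hz

# example (assumed): a green photon with wavelength ~ 500 nm
wavelength = 500e-9
freq = C / wavelength
e = photon_energy_joules(freq)
print(f"frequency ~ {freq:.3e} hz, energy ~ {e:.3e} j ~ {e / EV:.2f} ev")
```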
. its value is approximately 1. 616199 x 10^-35 m. it is a base unit within the planck unit system and it is calculated using the speed of light, c, planck ' s constant, h and the gravitational constant, g. specifically, it is given by the square root of ħg / c^3 where ħ is the reduced planck ' s constant, or planck ' s constant divided by 2π. it is the shortest measurable length in existence. to discuss length on scales shorter than this would be meaningless because it is a physical impossibility to measure below this length. a theory that could describe physical laws at this level would be of great use in the search for a theory of everything. this is the energy that exists in a body due to its position within a system. forces act upon the body to restore it to a lower energy state or configuration, this difference in the energy states is the potential energy. when the force acts upon the body, the energy held within the body is converted into some other form of energy, this occurs because the conservation of energy law states that energy cannot be created or destroyed. an example of potential energy being converted into other energy would be in someone skydiving. the position of the person ( the body in the system ) in the system ( the earth ), i. e. being high up in the air in a plane, gives the person gravitational potential energy. once they leap from the plane, this gravitational potential is turned into kinetic energy as the person falls toward earth. once they have landed, their position, at the surface of the earth, means that they have lower amounts of gravitational potential energy, and they have been restored to a lower energy state. this is the theory that explains the strong nuclear force that is mediated by gluons between different quarks. the charge of this force is known as colour. the force, which occurs due to an exchange of these gluons, does not weaken over distance, as say gravity does, but rather remains roughly constant, at a value on the order of 100, 000 newtons. this means that at no point does any quark separate from another one, and so quarks can only be observed on a hadron level. this property is called confinement. another property within qcd is asymptotic freedom. this results in a very weak interaction between quarks and gluons during extremely high energy reactions. this is the study of cosmology at temperature regimes where all four fundamental forces were unified. this
subdomain_quantum_field_theory
0.643993
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
25
0.6
2025-12-25T21:32:40.864063
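the planck length and planck area follow directly from the formula given above, the square root of ħg / c^3. a minimal sketch, using standard approximate values of the constants :

```python
import math

HBAR = 1.054571817e-34   # reduced planck constant, j s (approximate)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2 (approximate)
C = 299_792_458          # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C ** 3)
planck_area = planck_length ** 2

print(f"planck length ~ {planck_length:.3e} m")   # ~ 1.6e-35 m
print(f"planck area   ~ {planck_area:.3e} m^2")   # ~ 2.6e-70 m^2
```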
qcd is asymptotic freedom. this results in a very weak interaction between quarks and gluons during extremely high energy reactions. this is the study of cosmology at temperature regimes where all four fundamental forces were unified. this unification, it is theorised, occurred from the big bang to some 10^-43 seconds after the big bang. due to the temperatures involved all quantum cosmology is theoretical and highly speculative. quantum field theory is a framework that allows for the extension of quantum mechanics, which deals with individual particles, to field systems operating relativistically. quantum field theories have been used to describe how three of the four fundamental forces act, being mediated by an exchange of particles called bosons. the photon and the gluon, for example, are exchanged between electrons and quarks in the case of electromagnetism and the strong nuclear force respectively. with quantum field theory, these natural fields pervade an area of space. particles that mediate these fields, the gauge bosons associated with the field ( like the aforementioned photon with electromagnetism ), are quanta of these fields, that is, ripples in the field carrying small amounts of energy. other particles that act within a field, for example the electron within its own electron field, are thought of in a similar manner, albeit as different ripples and excitations. these fields are of variable range. the colour field within the quantum chromodynamic field theory, for example, acts in a range between quarks within a nucleon. other fields, such as the electromagnetic field, are infinite in scope and range. the search for a theory of quantum gravity is the search for a theory that can explain the effects of the fundamental force of gravity as explained by general relativity at a quantum level, and marry these up with quantum mechanics, which is a series of models which explain the other fundamental forces ; the strong nuclear ; weak nuclear and electromagnetic forces. examples of quantum gravity include string theory, loop quantum gravity and m - theory. this phase transition occurred approximately one millionth of a second after the big bang. this was when quark - gluon plasma underwent a phase transition, resulting in quarks forming into hadronic matter, i. e. nucleons. quintessence is a theory of dark energy, given to explain the acceleration of the universe ’ s expansion. it is a dynamic equation, resulting in an attractive or repulsive force depending on the amount of kinetic energy
subdomain_quantum_field_theory
0.734612
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
26
0.6
2025-12-25T21:32:40.864992
e. nucleons. quintessence is a theory of dark energy, given to explain the acceleration of the universe ’ s expansion. it is a dynamic equation, resulting in an attractive or repulsive force depending on the amount of kinetic energy relative to potential energy in the universe. as a repulsive force, it overcomes gravity ’ s attraction over large scales, resulting in an accelerated expansion. quintessence is hypothesised to have become repulsive approximately 10 billion years ago. this refers to a period of time from just after big bang to approximately 300, 000 years after its beginning. during this time, the universe consisted of free moving protons, neutrons and electrons and other particles. all radiation was absorbed by these free electrons, making the universe opaque. protons and neutrons were combining to form deuterium, a heavy isotope of hydrogen, and then helium, however, the temperature of the universe was so high that these existed as free ions in the plasma that was the universe. it was only when the universe was sufficiently expanded that the electrons no longer absorbed the radiation and instead joined with the ions to form neutral atoms. this forms the beginning of the matter era, in which we still exist. recombination was a time period, approximately 300, 000 years after the big bang, when electrons and protons bound to form atoms of hydrogen. before 300, 000 years had passed, the universe was still too hot for atoms of hydrogen to form. only after the universe had expanded sufficiently did it cool down enough to make the formation of hydrogen possible. when the source of a wave moves away from us, we observe a change of frequency of that wave. an example would be an ambulance or fire - truck - we hear a lower pitch in its siren once it has passed us by. this is the doppler effect. it is not, however, limited to sound waves, but any kind of waves, including electromagnetic. this means that as an electromagnetic wave source is moving away from us, the frequency of the wave will decrease. as frequency and wavelength are inversely related, one goes up and the other goes down, the wavelength will increase. this shifts the wavelength closer to the red end of the spectrum ( when talking about the visible part of the electromagnetic spectrum ; of course, the wavelength may not be in the visible part ). this is redshift, and it is something we detect from far away galaxies and other electromagnetic sources. this leads us to the conclusion that the universe is expanding. these
subdomain_quantum_field_theory
0.623249
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
27
0.6
2025-12-25T21:32:40.866143
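redshift is usually quantified as z = ( observed wavelength - emitted wavelength ) / emitted wavelength, and for small z the recession velocity is roughly c times z. the sketch below uses a made - up observation of the hydrogen line emitted at 656. 3 nm purely as an example.

```python
# redshift: z = (observed wavelength - emitted wavelength) / emitted wavelength.
# for small z, the recession velocity is approximately v ~ c * z.

C = 299_792_458  # speed of light, m/s

def redshift(lambda_observed: float, lambda_emitted: float) -> float:
    """fractional shift of the wavelength towards the red."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

# example (assumed): a hydrogen line emitted at 656.3 nm, observed at 670.0 nm
z = redshift(670.0e-9, 656.3e-9)
print(f"z ~ {z:.4f}, v ~ {C * z / 1000:.0f} km/s (non-relativistic approximation)")
```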
electromagnetic spectrum, of course the wavelength may not be in the visible part ). this is redshift, and it is something we detect from far away galaxies and other electromagnetic sources. this leads us to the conclusion that the universe is expanding. these associate a scalar ( either a number, or a physical quantity ) value to every point in a space within the field. examples of scalar fields include pressure distribution, temperature variation, and the gravitational potential. this is a point in spacetime where the curvature of spacetime becomes infinite. it is an area of extremely high density into which matter or light is attracted. singularities can be found both at the centre of black holes and on their own. inside a singularity, the laws of physics are distorted to the point that they are no longer applicable. spacetime is the concept of space and time being part of the same continuum. we use the typical three dimensions that are everyday and commonplace - the x, y and z dimensions used in geometry - ascribing a fourth dimension of time. this allows us to map out any event that takes place in the universe by a set of coordinates ; three of space to give us the location, and one of time to give us when the event occurred. this merging of time and space is important and must be accounted for, because relativity tells us that the observed rate of time passing changes with respect to an object ' s velocity relative to the observer. gravitational fields can also change the passage of time. in relativistic settings, therefore, it is important to account for time within theoretical frameworks, whereas in classical, non - relativistic physics this is unnecessary. the structure of spacetime is detailed in einstein ' s theory of special relativity. this theory lays out the structure of spacetime. it draws on the principle of relativity as laid out by galileo, which states that there is no absolute state of rest, and that all motion is relative to other motion. there are two principles that are laid out in the theory ; that the laws of physics are the same for observers whose motion is uniform relative to each other and that the speed of light in a vacuum is the same for all observers, regardless of any relative motion. this means that observers with different relative velocities will measure different values for quantities such as time intervals and lengths. effects of these principles can be seen in various manners. one of the most interesting is time dilation. a clock that is sitting stationary in front of you will tick faster than a clock which is moving away from you. this has been shown to be true for astronauts, who come
subdomain_quantum_field_theory
0.676192
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
28
0.6
2025-12-25T21:32:40.867152
in various manners. one of the most interesting is time dilation. a clock that is sitting stationary in front of you will tick faster than a clock which is moving away from you. this has been shown to be true for astronauts, who come back from space very slightly younger than they would have been had they remained on earth. another well known consequence of the theory is the energy - mass equivalence relationship, as defined by the equation e = mc2, probably the most famous equation of all time. this states that energy and mass are interchangeable and are related by a function of the speed of light in a vacuum, c. the speed of light in a vacuum, c, is shown not to be just a speed that photons travel at, it is a key physical constant that is related to the nature of space and time. special relativity shows us that any object with rest mass cannot travel at the speed of light. it is the speed that photons, or indeed any particle with zero rest mass, will travel at in a vacuum ( as energy and mass are equivalent, as shown by the equation e = mc2, a particle that is travelling will have kinetic energy and therefore more mass than an identical particle at rest ). its value is 299, 792, 458 metres per second ( m s^-1 ). as explained in the theory of special relativity, the speed of light is the fastest that any form of energy or information can travel in the universe. an intrinsic quantum property of particles that is defined by a spin number that can be either a whole integer ( 1, 2, 3 etc. ) or a half integer ( 1 / 2, 3 / 2, 5 / 2 etc. ), and can be positive or negative. it is a property that all particles exhibit ; the sole known particle with zero spin is the higgs boson, although other particles with zero spin, such as the inflaton, have been hypothesised. to an extent, it is easy to make an analogy of quantum spin with the classic rotational spin that we encounter in everyday life, for example with a spinning top. particles that are electrically charged, such as electrons or positrons, will generate a magnetic field through their spin, as movement of an electric charge will automatically generate magnetic fields. this analogy, however, only takes us so far. different spin quantum numbers can give us ideas as to the symmetry of these particles. a particle with zero spin looks exactly the same from all sides. a particle with spin will look different if rotated, but will regain its symmetry if it
subdomain_quantum_optics
0.67199
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
29
0.6
2025-12-25T21:32:40.869038
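the time dilation and energy - mass equivalence mentioned above can both be put into numbers with the lorentz factor gamma = 1 / sqrt ( 1 - v^2 / c^2 ) and e = mc^2. the sketch below evaluates gamma for a few assumed speeds ( an orbital speed of about 7. 7 km / s, half the speed of light, and 0. 99 of it ) and the rest energy of one kilogram.

```python
import math

C = 299_792_458  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """time dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# a moving clock ticks slower by the factor gamma.
for v in (7_700.0, 0.5 * C, 0.99 * C):   # orbital speed, half c, 0.99 c (assumed)
    print(f"v = {v:12.0f} m/s  ->  gamma = {lorentz_factor(v):.12f}")

# energy-mass equivalence e = m c^2: rest energy of one kilogram
print(f"rest energy of 1 kg ~ {1.0 * C ** 2:.2e} j")
```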
us so far. different spin quantum numbers can give us ideas as to the symmetry of these particles. a particle with zero spin looks exactly the same from all sides. a particle with spin will look different if rotated, but will regain its symmetry if it is rotated by a certain amount. in this instance, an analogy of a deck of cards is useful. consider any face card, these are symmetrical every time you spin them half way around, or 180 degrees. consider now the ace of spades. this card, if placed with the point of the spade facing up as you look at it, will require a full 360 degree rotation until it looks the same again. a particle with spin 1 will act like an ace of spades, requiring a full rotation, whereas a spin 2 particle will be symmetrical through 180 degree rotations. a half spin particle will require two rotations to be symmetrical. this kind of rotational symmetry does not have an analogue in the macroscopic world. crucially, whether a particle has half or whole integer spin tells us about how it reacts. particles with half integer spin, or fermions, obey a set of statistics known as fermi - dirac statistics. particles with whole integer spin, or bosons, obey a set called bose - einstein statistics. one of the key differences between these two sets of statistics is that those particles which obey fermi - dirac statistics are subject to the pauli exclusion principle. this states that particles may not occupy the same quantum state as each other. crucially, this means that you cannot make fermions of the same quantum state occupy the same space. this is why fermions are the particles which make up the matter of the universe. they include quarks, which combine to make protons and neutrons, and leptons, a set of particles that include electrons. bosons, which do not obey fermi - dirac statistics and are consequently not subject to the pauli exclusion principle, fulfill other roles : some mediate the fundamental forces of nature ( these are the gauge bosons ), and the higgs boson gives rise to mass in other particles. also known as the λcdm or lambda - cdm model, this is the best and most widely used model to explain the expansion of the universe, origins of the cosmic microwave background, nucleosynthesis of light elements and the formation of galaxies and large - scale structure. this is a set of mathematical tools that allow us to study thermodynamical properties, such as work
subdomain_quantum_materials
0.684878
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
30
0.6
2025-12-25T21:32:40.870794
, origins of the cosmic microwave background, nucleosynthesis of light elements and the formation of galaxies and large - scale structure. this is a set of mathematical tools that allow us to study thermodynamical properties, such as work, heat and entropy, of a large number of particles, allowing us to look at both atomic level and macroscopic level detail of the system. this allows us to explain thermodynamics in ways that apply to both classical and quantum physics, and allows us to extrapolate macroscopic predictions from microscopic properties. in the standard model of particle physics, particles are considered to be points moving through space, tracing out a line called the world line. to take into account the different interactions observed in nature, one has to provide particles with more degrees of freedom than only their position and velocity. these include mass, electric charge, colour ( which is the “ charge ” associated with the strong interaction ) and spin. this model was designed within a framework known as quantum field theory ( qft ), which allows us to build theories consistent with both quantum mechanics and the special theory of relativity. these theories describe with great success three of the four known interactions in nature : electromagnetism, the strong and weak nuclear forces. unfortunately, gravity, as described by einstein ’ s general relativity, does not fit into this scheme. string theory replaces these different particle types with a single fundamental building block : a “ string ”. these can be closed, like loops, or open, like a hair. as the string moves through time it traces out a tube or a sheet ( depending on whether it is closed or open ). this string is free to vibrate, and different vibrational modes of the string represent the different particle types, as different modes are seen as different masses or spins. one mode of vibration, or ‘ note ’, makes the string appear as an electron, another as a photon. there is even a mode describing the graviton, the particle carrying the force of gravity. this means we can make sense of the interaction of gravitons in a way we could not in qft. it is this ability of string theory to create a valid model that includes all four fundamental interactions that has led to it being dubbed a ‘ theory of everything ’. the problem is that there are five different versions of string theory. this is why we now look to m - theory, which has a place for all five theories, as the greatest solution to our ‘ theory of everything ’. as a point of
subdomain_quantum_field_theory
0.721181
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
31
0.6
2025-12-25T21:32:40.871911
’. the problem is that there are five different versions of string theory. this is why we now look to m - theory, which has a place for all five theories, as the greatest solution to our ‘ theory of everything ’. as a point of note, string theory predicts that spacetime has ten dimensions. although we only have three dimensions of space and one of time, we can assume that six of these dimensions are curled up very tightly, so that we may never be aware of their existence. having these so - called compact dimensions is very beneficial, as we can suggest that the degrees of freedom, such as electric charge of an electron, can simply arise as motion in the extra compact dimensions. there are four fundamental forces in nature. they are electromagnetism, the weak nuclear force, the strong nuclear force and gravitation. the weak nuclear force is associated with radioactivity in unstable nuclei, specifically the decay of a neutron into a proton. when the temperature is hot enough, such as that of the universe shortly after the big bang, electromagnetism and the weak nuclear force will merge to form the electroweak force. the strong nuclear force binds together neutrons and protons inside nuclei. the mathematical theory describing the elementary particles in this theory, quarks and gluons, is known as quantum chromodynamics ( qcd ). theories that unify the strong nuclear force with electroweak theory are known as grand unified theories, or guts. a supercluster is a vast grouping of smaller galaxy clusters and groups ( they are some of the largest structures in the universe ). they can span between several hundred million light years to over one billion light years. superclusters can contain galaxy bubbles, sheets, voids and filaments, which are smaller structures within the supercluster. nearly all galaxies are found within superclusters, and in between superclusters there are usually large voids. our own supercluster, called the virgo supercluster, contains the local group, the virgo cluster and some 100 other galactic groups and clusters. its diameter is approximately 100 million light years. supergravity is a theory which follows on from supersymmetry. it is theorised that in the same way that photons mediate electromagnetism, gluons the strong nuclear force and w and z bosons the weak nuclear force, so too does the as - yet undiscovered gravi
subdomain_quantum_field_theory
0.679682
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
32
0.6
2025-12-25T21:32:40.873677
theorised that in the same way that photons mediate electromagnetism, gluons the strong nuclear force and w and z bosons the weak nuclear force, so too does the as - yet undiscovered graviton mediate the gravitational force. in supergravity, the graviton has a heavier superpartner whose spin differs by 1 / 2. so far, as with supersymmetry, there has been no observational evidence for supergravity. this is a very powerful stellar explosion that can quite often outshine galaxies. a star undergoes a supernova either when a very old massive star undergoes sudden gravitational collapse, releasing vast quantities of gravitational energy, or through the reignition of the nuclear fusion reaction in the core of a degenerate star ( such as a white dwarf ). the explosion releases huge quantities of the star ' s matter, resulting in a supernova remnant. certain types of supernova have luminosities of known quantity, such that they can be used as ' standard candles ', which means that we can detect how far away the object is by comparing its known luminosity with our observed brightness. string theory states that all particles are representations of different vibrations on a fundamental building block ; a string. as a theory, it is able to describe the interactions of the particle that mediates gravitation : the graviton. in this way, and by being able to describe all other particles and interactions thereof, it is able to unite the four fundamental forces in nature, and is therefore a ‘ theory of everything ’. the original string theory only described particles with integer spins, called bosons. these include the particles that mediate the fundamental forces, such as the photon, gluon and graviton. the other class of particle, which have half integer spin, called fermions, were not described. these are particles that constitute matter as we know it, such as quarks and electrons. by introducing supersymmetry to bosonic string theory, we obtain a new theory that describes both the forces and the matter that make up the universe. this is superstring theory. there are three different superstring theories that have no mathematical inconsistencies. in two of them, the fundamental object is a closed string, whilst in the third, the string is open. by mixing the best aspects of bosonic string theory and superstring theory, we can create two other consistent theories of strings, heterotic
subdomain_quantum_field_theory
0.664344
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
33
0.6
2025-12-25T21:32:40.874611
of them, the fundamental object is a closed string, whilst in the third, the string is open. by mixing the best aspects of bosonic string theory and superstring theory, we can create two other consistent theories of strings, the heterotic string theories. supersymmetry is a theory which postulates that for every elementary particle there is a more massive " superpartner " whose spin differs by 1 / 2. the theory was introduced to solve mathematical difficulties in quantum field theory and in reconciling general relativity with quantum field theory. these inconsistencies arise because the higgs boson, the scalar boson whose interaction with other particles gives them mass, appears to gain large amounts of mass through interactions with itself. solving these inconsistencies would give physicists a way to marry quantum mechanics and gravity at the smallest scales. these superpartners are a possible candidate for dark matter. no superpartners have yet been detected and no evidence exists as yet to support supersymmetry. this is because in order to produce particles of this mass we need incredible amounts of energy, which so far we have been unable to generate. it is hoped that the large hadron collider at cern might detect evidence of supersymmetric particles. the surface of last scattering is the set of points in space where decoupling occurred, approximately 380, 000 years after the big bang, at the right distance so that we are now seeing these photons reach us as part of the cosmic microwave background relic radiation. symmetry breaking occurs when a system in some state of symmetry moves into a different configuration, resulting in the loss of that symmetry. consider a ball on a hill. the ball is symmetrical. the hill is also symmetrical. if the ball is on top of the hill, the ball and hill as a system are symmetrical. if the ball rolls down the hill, the ball and hill are individually still symmetrical, but the system of the ball and the hill is now asymmetrical. this is symmetry breaking. in a cosmological context, this happened as the universe cooled down after the big bang. as it did so, elementary particles changed state in what is known as a phase transition, and symmetry that previously was exhibited by these particles was broken. these symmetries are associated with different fundamental forces. this is why some particles are acted upon by these forces, and others not. these symmetries are restored at higher temperatures, however. these are a type of topological defect that
subdomain_quantum_field_theory
0.65784
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
34
0.6
2025-12-25T21:32:40.875681
was broken. these symmetries are associated with different fundamental forces. this is why some particles are acted upon by these forces, and others not. these symmetries are restored at higher temperatures, however. these are a type of topological defect that is hypothesised to form when large symmetries are broken. they are unstable and prone to collapse. unlike certain other topological defects, such as magnetic monopoles, these are delocalized and occur over large areas. no evidence has been found of them as yet. this gravitational anomaly, known as the great attractor, is located in the centaurus supercluster. it is a localized concentration of mass of unknown origin that is equivalent to tens of thousands of galaxies. its mass is so large that ( as the name suggests ) its gravitational attraction is altering the motion of galaxies and galaxy clusters in a region hundreds of millions of light years across. in the aftermath of the big bang, the universe was extremely hot and extremely dense. at these energies, the laws of nature as we know them were different. the fundamental forces that we see in nature were unified - it is only as the universe expanded and cooled that gravitation, electromagnetism and the strong and weak nuclear forces all ceased to be as one. electroweak theory describes the unification of the weak nuclear force and electromagnetism. a theory of everything will marry up all the fundamental forces. the issue with this is that whilst quantum chromodynamics and electroweak theory describe the strong and weak nuclear forces and electromagnetism on a well understood quantum basis, there is no consistent theory for describing gravity on such a basis. m - theory, and the associated string theories behind it, are being explored as possible candidates. these are configurations of matter that form during phase transitions and symmetry breakings, such as occurred during the very early universe. they are configurations of matter in the old, symmetrical phase that remain stable in the new phase where the symmetry that was previously held is now broken. examples of these defects include monopoles, cosmic strings, domain walls and textures. within quantum field theory, particles may move from higher to lower energy states, such as occurred in the very early universe as it expanded and thus cooled. these lower energy states, or vacuum states, may be different whilst possessing the same amount of energy. this means these states are degenerate. the particle, therefore, has a chance of falling into any of these degenerate vacuum states, unless there is something outside the system described here which
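The ball-on-a-hill picture and the degenerate vacuum states described above can be made concrete with a toy potential. The sketch below is purely illustrative (the potential, its parameters, and the variable names are assumptions, not anything taken from the source text): a symmetric double well whose two minima have the same energy, so that settling into either one breaks the symmetry of the potential.

```python
import numpy as np

# Illustrative double-well potential V(phi) = -mu2 * phi**2 + lam * phi**4.
# The symmetric point phi = 0 (the ball on top of the hill) is unstable;
# the two minima at phi = +/- sqrt(mu2 / (2 * lam)) have equal energy,
# i.e. they are degenerate vacuum states, and falling into either one
# breaks the phi -> -phi symmetry.
mu2, lam = 1.0, 0.25          # arbitrary illustrative parameters

def potential(phi):
    return -mu2 * phi**2 + lam * phi**4

phi = np.linspace(-3.0, 3.0, 2001)
v = potential(phi)

phi_min = np.sqrt(mu2 / (2.0 * lam))       # analytic location of the minima
print(f"degenerate minima at phi = +/- {phi_min:.3f}")
print(f"V(0) = {potential(0.0):.3f}, "
      f"V(+phi_min) = {potential(phi_min):.3f}, "
      f"V(-phi_min) = {potential(-phi_min):.3f}")
# numerical check that the sampled minimum agrees with the analytic one
print(f"numerical minimum near phi = {phi[np.argmin(v)]:.3f}")
```

Running it shows the two minima sitting at the same depth on either side of the unstable symmetric point, which is the essence of the "ball rolling off the hill" analogy.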
subdomain_quantum_field_theory
0.688218
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4f70319b-344c-4f28-8b53-f7830cc63f5d>
35
0.6
2025-12-25T21:32:40.877013
what is a retrofit? main entry : ret · ro · fit pronunciation : \ ’ re - tro - ’ fit, ‘ re - tro - ’ fit \ function : transitive verb 1 : to furnish ( as a computer, airplane, or building ) with new or modified parts or equipment not available or considered necessary at the time of manufacture 2 : to install ( new or modified parts or equipment ) in something previously manufactured or constructed 3 : to adapt to a new purpose or need 4 : to save a lot of money on energy costs! 5 : to update your current lighting system innovation and continuous improvement in the field of lighting have given rise to tremendous energy - saving opportunities. lighting is an area in which there is enormous energy - efficient potential, starting at the design stage by incorporating modern energy - efficient lamps and luminaries. following responsible operational practices also can significantly reduce associated energy costs. lighting is not only a very high priority when considering facility retrofitting, but also is a high - return, low - risk investment. by installing new lighting technologies such as dimmers, photo sensors, occupancy sensors, and timers, facilities can reduce the amount of electricity consumed and energy costs associated with lighting. there are several types of energy efficient lighting and affordable lighting technology : compact fluorescents lights, light - emitting diodes ( leds ), and lighting controls. below are a few examples of energy - saving opportunities with efficient lighting! • installation of energy - efficient fluorescent lamps in place of conventional fluorescent lamps for example converting to t8 or t5 lamps from t12 lamps. • installation of compact fluorescent lamps ( cfls ) in place of incandescent lamps. • installation of high pressure sodium ( hps ) lamps for applications where color rendering is not critical. metal halide lamps should also be considered when correct color is important. • installation of led exit signs to replace incandescents. • installation of high frequency ( hf ) electronic ballasts in place of conventional ballasts. • installation of occupancy sensors, an inexpensive way to ensure that unused lights do not remain on. • installation of microprocessor - based controllers. • installation of photocells, devices that automatically detect the natural light level in a room and adjust the intensity of the artificial light accordingly. • replacing incandescent wall lights and exit sign lighting with cfl or led - lit units will not only save a considerable amount of energy, it also will significantly reduce labor costs associated with changing light bulbs,
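The retrofit opportunities listed above boil down to a simple energy and cost calculation. The sketch below is a rough, hypothetical estimate of a T12-to-T8 conversion; every number in it (fixture count, wattages, burn hours, electricity tariff) is an assumption for illustration only and should be replaced with audited values from your own facility.

```python
# Rough, illustrative estimate of annual savings from a lighting retrofit.
# All figures below are assumptions for demonstration, not measured data.
FIXTURES       = 200     # number of 2-lamp fixtures retrofitted (assumed)
OLD_WATTS      = 96      # assumed 2 x T12 lamps + magnetic ballast, per fixture
NEW_WATTS      = 59      # assumed 2 x T8 lamps + electronic ballast, per fixture
HOURS_PER_YEAR = 3500    # assumed annual burn hours
TARIFF_PER_KWH = 0.12    # assumed electricity price, $/kWh

def annual_kwh(watts_per_fixture: float) -> float:
    # total energy use of the whole installation over a year, in kWh
    return FIXTURES * watts_per_fixture * HOURS_PER_YEAR / 1000.0

saved_kwh = annual_kwh(OLD_WATTS) - annual_kwh(NEW_WATTS)
print(f"estimated energy saved : {saved_kwh:,.0f} kWh/year")
print(f"estimated cost saved   : ${saved_kwh * TARIFF_PER_KWH:,.0f}/year")
```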
subdomain_quantum_optics
0.601015
512
HuggingFaceFW/fineweb-edu
<urn:uuid:47e51cdd-f93b-4628-b750-b9cb8236313d>
0
0.6
2025-12-25T21:32:41.195612
i stand at the edge of a dream, my breath a cloud. snow has transformed my familiar landscape and chilled my toes. this careless arm of the east verde where my grandchildren splashed away the summer has frozen over and snow has rendered abstract the shapes of rocks and junipers. water gleams in all its forms about me — still flowing in the stream, gathered in vapor on my breath, crystallized into snow on every hand, frozen into ice underfoot. i wish i knew the proper prayer — the light step of the ritual dance — the intonation of the chant — to offer at such a moment. instead, i kneel at the edge of the stream and study the ice, perhaps the most unlikely of water ' s forms. here ' s a nugget to suck on : chill any other liquid and the jittery molecules will slow down — bouncing about less as the temperature drops. eventually, the liquid will settle into a stable crystal lattice — which takes up less space than the liquid did. that ' s why all other liquids most sensibly contract when they freeze. but not water, thank the lord. water ' s made of two atoms of hydrogen linked to one atom of oxygen. these amiable molecular companions actually share electrons to keep everyone happy. moreover, a water molecule has a slight positive electrical charge at one end and a faint negative electrical charge at the other end. this accounts for the nearly miraculous chemistry of water — on which life on the planet depends utterly. for starters, as water cools below 32 degrees f the molecules slip into a strange and counter - intuitive crystalline lattice. once they click into place, they actually take up about 9 percent more space than they did as a warm liquid. now, that didn ’ t work out so well for folks in rim country who left the water on in empty houses during the big freeze, since the expanding ice in the neglected pipes can split open even copper or steel. but water ’ s demented determination to expand when it ought to contract makes life on the planet possible. if water contracted as it froze, then sea ice would form at the surface every winter and sink to the bottom. over time, the oceans would freeze solid — and we could not be here. we could go on and on about the fortunate strangeness of water. for instance, the positive and negative ends of water molecules account for surface tension — so useful to water skiers, stone - skippers and water bugs. but it also explains what ’ s called “ capillary action, ”
subdomain_quantum_materials
0.620831
512
HuggingFaceFW/fineweb-edu
<urn:uuid:d2623cfc-039b-41be-8086-3759eaf87146>
0
0.6
2025-12-25T21:32:42.330150
may 4, 2012 researchers in spain have found that at least some of the individuals claiming to see the so - called aura of people actually have the neuropsychological phenomenon known as " synesthesia " ( specifically, " emotional synesthesia " ). this might be a scientific explanation of their alleged ability. in synesthetes, the brain regions responsible for the processing of each type of sensory stimuli are intensely interconnected. synesthetes can see or taste a sound, feel a taste, or associate people or letters with a particular color. the study was conducted by the university of granada department of experimental psychology oscar iborra, luis pastor and emilio gomez milan, and has been published in the journal consciousness and cognition. this is the first time that a scientific explanation has been provided for the esoteric phenomenon of the aura, a supposed energy field of luminous radiation surrounding a person as a halo, which is imperceptible to most human beings. in basic neurological terms, synesthesia is thought to be due to cross - wiring in the brain of some people ( synesthetes ) ; in other words, synesthetes present more synaptic connections than " normal " people. " these extra connections cause them to automatically establish associations between brain areas that are not normally interconnected, " professor gomez milan explains. new research suggests that many healers claiming to see the aura of people might have this condition. the case of the " santon de baza " one of the university of granada researchers remarked that " not all ' healers ' are synesthetes, but there is a higher prevalence of this phenomenon among them. the same occurs among painters and artists, for example. " to carry out this study, the researchers interviewed some synesthetes including a ' healer ' from granada, " esteban sanchez casas, " known as " el santon de baza ". many local people attribute " paranormal powers " to el santon, because of his supposed ability to see the aura of people " but, in fact, it is a clear case of synesthesia, " the researchers explained. according to the researchers, el santon has face - color synesthesia ( the brain region responsible for face recognition is associated with the color - processing region ) ; touch - mirror synesthesia ( when the synesthete observes a person who is being touched or is experiencing pain, s / he experiences the same ) ; high empathy ( the ability to feel what other person is feeling ), and schizoty
subdomain_quantum_optics
0.619281
512
HuggingFaceFW/fineweb-edu
<urn:uuid:dc499d3a-56ed-43f5-9f0b-d9a91ff2196e>
0
0.6
2025-12-25T21:32:42.556317
it security is, generally, defined as a defensive approach to protect a company and its assets from unauthorized access by an intruder. it security efforts include network security appliances, honeypots, robust authentication, limiting authorization to least necessary privileges, as well as other perimeter security defenses. however, these approaches do not provide definitive protection of the company ' s most valuable asset, its data, because a single intrusion could result in sensitive data being compromised. additionally, in today ' s workplace culture the disgruntled employee may be as much of a threat as any external threat. data encryption is a direct response to internal and external security threats that may also meet compliance regulations. encryption provides strong security for data " at - rest " ; in our case, the data stored in the database, but to be effective should be implemented as a part of a broader security plan. there are many issues involved with the implementation of encryption, details that require decisions and actions to ensure the success of the implementation and the security of the data. this document will discuss the issues associated with database encryption implemented using sql server ' s native transparent database encryption ( tde ) mechanism. encryption has been integral to human history beginning with the babylonian use of intaglio other historical examples include the caesar cipher, scytale transposition cipher, enigma, and even jimkryptos sculpture. throughout history our society has enjoyed the ability to protect information using cryptographic methods including steganography, microdots, invisible ink, digital watermarks, and encryption which may be defined as the conversion of data so as to keep its meaning private. as the amount of sensitive data collected by commercial entities continues to grow the regulatory requirements for protecting the sensitive data will become more robust ; meeting the regulatory requirements will necessarily require the continued use of data encryption methods. encryption requires the application of an algorithm to transform the target data into a form that is unusable to anyone that does not have access to the encryption process used. in practical terms encryption applies a cryptographic algorithm with a " key " to the target data producing the encrypted form of the data which cannot be accessed without the key used to encrypt the data. the two primary forms of key encryption are symmetric and asymmetric which are distinguished by the number of keys used in the encryption / decryption process. symmetric encryption uses a single key while asymmetric encryption uses a pair of keys generally referred to as public and private keys. while asymmetric encryption appears ideal for implementation because only the public key need ever be shared
subdomain_quantum_cryptography
0.6533
512
HuggingFaceFW/fineweb-edu
<urn:uuid:82b7ca05-74da-41d5-a887-ccc152237365>
0
0.6
2025-12-25T21:32:42.664727
encryption / decryption process. symmetric encryption uses a single key while asymmetric encryption uses a pair of keys generally referred to as public and private keys. while asymmetric encryption appears ideal for implementation because only the public key need ever be shared there are disadvantages with regard to performance. a sampling of asymmetric algorithms includes rsa, dsa, elgamal, ecdsa, and xtr. figure 1 demonstrates the asymmetric encryption process. figure 1 asymmetric key encryption / decryption process symmetric algorithms require a single key for both encryption and decryption which allows for high - performance ; however, with this approach the strength of the encryption is dependent on the security of the key. common symmetric algorithms include aes / rijndael, blowfish, des, triple des, serpent, and idea to name only a few. figure 2 demonstrates the symmetric encryption process. figure 2 symmetric key encryption process both symmetric and asymmetric encryption approaches are vulnerable to brute force attacks and cryptanalysis. brute force is an attack during which every possible permutation of the key value is attempted. cryptanalysis, on the other hand, applies computational techniques to circumvent the encryption. in general, the use of sufficiently long keys will mitigate these attacks. in summary, a symmetric key algorithm is fast but less secure than an asymmetric algorithm. another approach is a hybrid wherein a symmetric key is used to encrypt the data while an asymmetric key is used to encrypt the symmetric key. it may be important to know in order to maintain perspective that there is only one encryption algorithm that is impossible to crack, one - time pad ( otp ), any other algorithm may be broken given sufficient time and / or computer resources. security concerns, in general, and encryption, specifically, are new concepts for most it professionals ; therefore, a glossary of security / encryption terms is included as an appendix for reference. overview of transparent database encryption the primary benefit of transparent database encryption ( tde ) is the ability to encrypt data without affecting any application that uses the data while providing security for the entire database. tde is implemented at the database - level, unlike cell - level encryption tde does not require modification to applications or database column data types ; furthermore, database - level encryption allows for higher performance than cell - level encryption. however, tde may allow more data leakage because encrypted data is decrypted when read into the buffer
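The hybrid approach described above, a symmetric key for the data and an asymmetric pair to protect that key, can be sketched in a few lines. The example below uses the third-party Python `cryptography` package; the algorithm and key-size choices are illustrative assumptions, not recommendations drawn from the source.

```python
# Minimal sketch of a hybrid scheme: a symmetric key encrypts the data (fast),
# and an asymmetric key pair protects the symmetric key (only the public key
# ever needs to be shared). Key sizes and algorithms are illustrative choices.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# 1. symmetric key encrypts the payload
data_key = Fernet.generate_key()                       # AES-based symmetric key
ciphertext = Fernet(data_key).encrypt(b"sensitive row contents")

# 2. asymmetric key pair protects ("wraps") the symmetric key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = private_key.public_key().encrypt(data_key, oaep)

# 3. the recipient unwraps the symmetric key with the private key, then decrypts
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"sensitive row contents"
```

The design point is the one made in the text: the bulk work is done with the fast symmetric cipher, while the slower asymmetric operation is applied only to the short key material.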
subdomain_quantum_cryptography
0.670002
512
HuggingFaceFW/fineweb-edu
<urn:uuid:82b7ca05-74da-41d5-a887-ccc152237365>
1
0.6
2025-12-25T21:32:42.666201
to applications or database column data types ; furthermore, database - level encryption allows for higher performance than cell - level encryption. however, tde may allow more data leakage because encrypted data is decrypted when read into the buffer pool ; therefore, the data is not protected if the operating system writes data from memory to disk during paging operations, or during hibernation, or memory dumps, nor is the data protected while in memory. database encryption is achieved by leveraging the data protection api ( dpapi ) in windows® which protects the service master key ( smk ) which protects the database master key ( dmk ) which is used to protect the certificate or asymmetric keys which are used to protect the database encryption key ( dek ). these dependencies create a security chain from the operating system to the data eliminating user interaction thus strengthening security. the relationships and dependencies between keys is represented in figure 3 below : figure 3 sql server encryption key hierarchy with tde and ekm ( source : bol - http : / / msdn. microsoft. com / en - us / library / cc278098. aspx ) the hierarchy of keys in tde is protected from the dpapi to the dek allowing the server to manage encryption and decryption automatically. the dmk and the certificate are stored in the master database while the dek is stored in the user database. this hierarchy and the key management chain provide tde the capability to transparently encrypt and decrypt the database. the process for encrypting a database is conceptually simple : - create a master key - obtain an authentication certificate - create dek - enable tde on the database however, significant complexity will be introduced if the database encryption strategy is undertaken without proper planning that addresses important implementation issues. those issues are discussed in the following section. the level of security necessary to protect the database should be documented during the planning phase. individually and in combination the following encryption mechanisms are available to secure the database : - encrypting file system ( efs ) - transparent database encryption ( tde ) discussion of the benefits and performance implications of each mechanism and their combinations is beyond the scope of this paper. data encryption must address two equally important issues : encryption technology and cryptographic key ( key ) management. encryption technology provides for variable granularity of data protection, performance, and integration with existing applications, as well as ease of implementation and management. however,
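The four conceptual steps listed above can also be scripted. The sketch below drives them from Python via `pyodbc`; the server, database, certificate name, and password are placeholders, and the T-SQL follows the commonly documented TDE pattern rather than anything specific to this paper, so verify it against the documentation for your SQL Server version before relying on it.

```python
# Sketch of scripting the four TDE steps (master key, certificate, DEK,
# enable encryption) via pyodbc. Connection string, database name,
# certificate name and password are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "Trusted_Connection=yes;", autocommit=True)
cur = conn.cursor()

steps = [
    # 1. create the database master key in master
    "USE master;",
    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword!42>';",
    # 2. create (or obtain) a certificate protected by that master key
    "CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';",
    # 3. create the database encryption key (DEK) in the user database
    "USE MySensitiveDb;",
    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY SERVER CERTIFICATE TdeCert;",
    # 4. enable TDE on the database
    "ALTER DATABASE MySensitiveDb SET ENCRYPTION ON;",
]
for sql in steps:
    cur.execute(sql)

# Back up the certificate and its private key; without them the encrypted
# database cannot be restored to another instance.
```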
subdomain_quantum_cryptography
0.617309
512
HuggingFaceFW/fineweb-edu
<urn:uuid:82b7ca05-74da-41d5-a887-ccc152237365>
2
0.6
2025-12-25T21:32:42.668502
encryption must address two equally important issues : encryption technology and cryptographic key ( key ) management. encryption technology provides for variable granularity of data protection, performance, and integration with existing applications, as well as ease of implementation and management. however, the success of the selected encryption strategy may depend most on key management policies and processes. key management issues include : key access, key storage, and cryptographic algorithm. key management is one of many important issues that must be considered when planning the encryption project. the important issues to consider during the planning phase of the encryption project are listed below : - encryption algorithm : des, triple des, triple _ des _ 3key, rc2, rc4, 128 - bit rc4, desx, 128 - bit aes, 192 - bit aes, and 256 - bit aes - key management : key storage, hardware security module ( hsm ), key scheduling, key availability / mobility / security - performance impact. encryption / decryption - microsoft claims 3 - 5 % ; however, independent tests indicate 6 - 12 %.. - tempdb encryption - encryption of any one db will encrypt tempdb. - transaction log is encrypted. - log shipping implementation changes - encrypted database log shipping requires the recipient database to possess the key in order to apply the logs. - backup and recovery plan changes - encrypted databases cannot be recovered to a different instance without the key. - disaster recovery plan changes - encrypted databases cannot be recovered to a different instance without the key. - increased disk space requirements - no sql server native backup compression. third party tools may be available ; however, in general, encrypted data cannot be significantly compressed. - tde operates during i / o ; therefore, any data written to disk outside of the buffer pool is not protected - no support for filestream data - type the diagram in figure 4 represents a nominal encryption project planning process with each major area of consideration represented. the end result of the planning process is to produce a document detailing the decisions made that address the issues related to encrypting the database. figure 4 encryption planning process a comprehensive it security policy provides a layered defense against threats to the system. however, even the most thorough perimeter network and physical defenses do not obviate the vulnerability of plaintext data stored in databases. data encryption provides a means to protect sensitive data from unauthorized access as a part of a coordinated it security policy that includes network security, robust authentication
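One practical point during planning and rollout is verifying encryption state and background-scan progress once TDE is enabled. A minimal sketch, assuming a `pyodbc` connection (the connection string is a placeholder) and the documented `sys.dm_database_encryption_keys` view; confirm the column names and state codes against your SQL Server build.

```python
# Sketch: checking TDE status and scan progress after enabling encryption.
# State codes follow the documented values for sys.dm_database_encryption_keys.
import pyodbc

STATE = {0: "no key present", 1: "unencrypted", 2: "encryption in progress",
         3: "encrypted", 4: "key change in progress",
         5: "decryption in progress", 6: "protection change in progress"}

conn = pyodbc.connect("DSN=MySqlServer;Trusted_Connection=yes;")
cur = conn.cursor()
cur.execute("SELECT DB_NAME(database_id), encryption_state, percent_complete "
            "FROM sys.dm_database_encryption_keys;")
for db_name, state, pct in cur.fetchall():
    print(f"{db_name}: {STATE.get(state, state)} ({pct:.0f}% complete)")
```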
subdomain_quantum_cryptography
0.62748
512
HuggingFaceFW/fineweb-edu
<urn:uuid:82b7ca05-74da-41d5-a887-ccc152237365>
3
0.6
2025-12-25T21:32:42.669848
most thorough perimeter network and physical defenses do not obviate the vulnerability of plaintext data stored in databases. data encryption provides a means to protect sensitive data from unauthorized access as a part of a coordinated it security policy that includes network security, robust authentication and authorization, as well as other physical security considerations. sql server and windows® provide several mechanisms for the protection of data either at the file, database, or data levels. transparent database encryption ( tde ) is a new technology available in sql server 2008 enterprise edition which provides a simplified data encryption option. tde is a database - level encryption mechanism that reduces implementation complexity by removing the need to modify the data and / or the client applications. however, the benefits of performance and simplicity are balanced by tde ' s potential for data leakage ; therefore, for the most sensitive data tde alone may not suffice as a data security strategy. any data protection strategy must weigh the costs and benefits of implementation to arrive at a usable solution that meets the security requirements defined by the business. tde ' s protection of sensitive data in low to moderate threat environments may be sufficient for some business requirements, while highly sensitive data or data in high threat environments will require the combination of tde with other encryption mechanisms such as cell - level encryption, efs, or bitlocker.
subdomain_quantum_cryptography
0.611521
269
HuggingFaceFW/fineweb-edu
<urn:uuid:82b7ca05-74da-41d5-a887-ccc152237365>
4
0.6
2025-12-25T21:32:42.670431
with implications similar to cyclical views of time and history. ( left ) an event is visible through time, like a pebble thrown into a bond creates outward waves / / / ( right ) an event in the present can only be perceived in a certain region of space - time. knowing that our time in the sun is limited, sometimes we try to capture time and light with images. albrecht durer ’ s etching, “ melancholia i ” associates light with order and darkness with chaos. the composition places the products of the imagination – geometry, mathematics, tools, and architecture – within the timeframe of an hourglass running out. in this picture, the imagination succeeds in creating a mental zone that overrides both astrophysics and religion – it holds together past, present and future with rays of perpetual sunlight – messengers of time etched in metal. / / / next week : skin, shell and skeleton part i nuclear bomb test, bikini atoll, 1946. the small black figures just outside the cloud are decommissioned world war ii battleships from the us navy. / / in addition to an urban investigation into the power structures of moscow, i ' ve been looking at ideas about the sublime and its relation to... - - yesterday china launched tiangong - 1 ( heavenly palace - 1 ), its first step towards a manned orbital space station. i remembered reading that the last space shuttle mission, sts - 135, finished earlier this year, signalling an end to america ’ s utopian dream of colonizing space. as i read more... i ' ve just started on the cooper union march ii course, and am excited to be here in new york to say the least! i ' ll be sharing my project work for studio, sculpture, and elsewhere over the upcoming year. for this first post i thought i ' d share something i wrote for a course appropriately titled...
subdomain_quantum_optics
0.61869
382
HuggingFaceFW/fineweb-edu
<urn:uuid:4b2f7c8f-e8be-4df1-98e2-626322aa078f>
1
0.6
2025-12-25T21:32:43.350131
with somewhat bizarre behaviors like the tunneling effect, which are governed by the laws of quantum mechanics and relativity. the orderly, deterministic world of classical physics gives way to a world of wave functions, probability distributions, uncertainty principles, and wave - particle dualities. instead of a deterministic world, we now have a world based on probabilities. you cannot predict all the future states of an object or a particle based on its present state. you can map out its behavior, but only as probability distributions of all the possible states it could be at. moreover, the heisenberg uncertainty principle tells you that it is impossible to know the exact state of a particle. you cannot simultaneously determine its exact position and velocity with any great degree of accuracy no matter how good your measurement tools are. the world is intrinsically unpredictable. in addition, there is no such thing as absolute reality. in classical mechanics something either has the properties of a particle, e. g., a planet, a baseball ; or of a wave, e. g, light, sound. in quantum mechanics all objects exhibit both kinds of properties. the concept of wave - particle duality explains that reality depends on what question you are asking and what experiment you perform to answer the question. the very act of observing an object will change the object being observed. any instruments used to measure its properties will invariable alter the properties being measured. this transition, from a world view based on scientific determinism to one based on probability distributions, uncertainty principles and subjective reality is not intuitive and difficult to get used to. even albert einstein had trouble accepting it, and famously said “ god does not play dice with the universe. ” stephen hawking, one of world ’ s top theoretical physicists, concluded in this brilliant lecture : “... it seems einstein was doubly wrong when he said, god does not play dice. not only does god definitely play dice, but he sometimes confuses us by throwing them where they can ’ t be seen... the universe does not behave according to our pre - conceived ideas. it continues to surprise us. ” but, the worlds of the very small, as well as the very large, are not the only ones that exhibit counter - intuitive, seemingly magical behaviors. so is the world of highly complex systems, especially those systems whose components and interrelationships are themselves quite complex, as is the case with systems biology and evolution. such is also the case with organizational and sociotechnical systems whose main components
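The uncertainty principle mentioned above can be put into rough numbers. A minimal sketch, assuming an electron confined to roughly an atom's width; the figures are order-of-magnitude illustrations only and are not tied to any particular experiment.

```python
# Quick numerical illustration of the Heisenberg relation dx * dp >= hbar / 2
# for an electron confined to about one angstrom.
HBAR = 1.054_571_817e-34    # reduced Planck constant, J*s
M_E  = 9.109_383_7015e-31   # electron mass, kg

dx = 1.0e-10                # assumed position uncertainty: ~1 angstrom
dp_min = HBAR / (2.0 * dx)  # minimum momentum uncertainty allowed
dv_min = dp_min / M_E       # corresponding velocity uncertainty

print(f"minimum dp : {dp_min:.2e} kg m/s")
print(f"minimum dv : {dv_min:.2e} m/s  (~{dv_min / 3.0e8:.1%} of c)")
```

Even this crude estimate shows why the trade-off matters: pinning the electron down to atomic dimensions already forces a velocity spread of hundreds of kilometres per second.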
subdomain_quantum_mechanics
0.700852
512
HuggingFaceFW/fineweb-edu
<urn:uuid:9417a42f-32e3-43bb-a2e5-914406f9e3f6>
1
0.6
2025-12-25T21:32:43.548956
. so is the world of highly complex systems, especially those systems whose components and interrelationships are themselves quite complex, as is the case with systems biology and evolution. such is also the case with organizational and sociotechnical systems whose main components are people. even though these chaotic systems are in principle deterministic, their dynamic, non - linear nature renders them increasingly unpredictable and accounts for their emergent behavior. new terms, like long tails, freakonomics and black swan theory, – every bit as fanciful as quarks, charm and strangeness, – have begun to enter our lexicon. artificial intelligence ( ai ) is an example of a discipline that has transitioned from its original classical, deterministic approach to an approach more suitable to a highly complex, inherently unpredictable topic like intelligence. ai was one of the hottest areas in computer sciences, in the 1960s and 1970s. many of the ai leaders in those days were convinced that you could build a machine as intelligent as a human being based on logical deductions and the kind of step - by - step reasoning that humans use when solving puzzles or proving theorems. they obtained considerable government funding in the us, uk and japan to implement their vision. but eventually it became clear that all these various projects had grossly underestimated the difficulties of developing any kind of ai system based on logic programming and deductive reasoning. the field went through a so - called ai winter in the 1980s. but things started to change in the 1990s when ai switched paradigms and embraced data mining and information analytics, the precursors of today ’ s big data. instead of trying to program computers to act intelligently, ai embraced a statistical, brute force approach based on analyzing vast amounts of information using powerful computers and sophisticated algorithms. we discovered that such a statistical, information - based approach produced something akin to intelligence or knowledge. moreover, unlike the earlier programming - based projects, the statistical approaches scaled very nicely. the more information you had, the more powerful the supercomputers, the more sophisticated the algorithms, the better the results. deep blue ibm ' s chess playing supercomputer, demonstrated the power of such a statistical approach by beating then reigning chess champion gary kasparov in a celebrated match in may of 1997. since that time, analyzing or searching large amounts of information has become increasingly important and commonplace in a wide variety of disciplines. today, most of us use search engines as the primary mechanism for finding information in the world wide web. researchers have been developing sophisticated question -
subdomain_quantum_simulation
0.606436
512
HuggingFaceFW/fineweb-edu
<urn:uuid:9417a42f-32e3-43bb-a2e5-914406f9e3f6>
2
0.6
2025-12-25T21:32:43.551217
radiates 632.8 nm. without helium, the neon atoms would be excited mostly to lower excited states responsible for non - laser lines. a neon laser with no helium can be constructed but it is much more difficult without this means of energy coupling. therefore, a hene laser that has lost enough of its helium ( e.g., due to diffusion through the seals or glass ) will most likely not lase at all since the pumping efficiency will be too low. the energy or pump source of the laser is provided by a high voltage electrical discharge passed through the gas between electrodes ( anode and cathode ) within the tube. a dc current of 3 to 20 ma is typically required for cw operation. the optical cavity of the laser usually consists of two concave mirrors or one plane and one concave mirror, one having very high ( typically 99.9 % ) reflectance and the output coupler mirror allowing approximately 1 % transmission. commercial hene lasers are relatively small devices, among gas lasers, having cavity lengths usually ranging from 15 cm to 50 cm ( but sometimes up to about 1 meter to achieve the highest powers ), and optical output power levels ranging from 0.5 to 50 mw. the red hene laser wavelength of 633 nm has an actual vacuum wavelength of 632.991 nm, or about 632.816 nm in air. the wavelengths of the lasing modes lie within about 0.001 nm above or below this value, and the wavelengths of those modes shift within this range due to thermal expansion and contraction of the cavity. frequency - stabilized versions enable the wavelength of a single mode to be specified to within 1 part in 10⁸ by the technique of comparing the powers of two longitudinal modes in opposite polarizations. absolute stabilization of the laser ' s frequency ( or wavelength ) as fine as 2.5 parts in 10¹¹ can be obtained through use of an iodine absorption cell. the mechanism producing population inversion and light amplification in a hene laser plasma originates with inelastic collision of energetic electrons with ground state helium atoms in the gas mixture. as shown in the accompanying energy level diagram, these collisions excite helium atoms from the ground state to higher energy excited states, among them the 2³s₁ and 2¹s₀ long - lived metastable states. because of a fortuitous near coincidence between the energy levels of the two he metastable states, and the 3s₂ and 2s₂ ( paschen notation ) levels of neon, collisions between these helium
subdomain_quantum_optics
0.600996
512
HuggingFaceFW/fineweb-edu
<urn:uuid:65f608c4-8f14-4ab4-8d56-f0829dab41bc>
1
0.6
2025-12-25T21:32:43.925816
- lived metastable states. because of a fortuitous near coincidence between the energy levels of the two he metastable states, and the 3s₂ and 2s₂ ( paschen notation ) levels of neon, collisions between these helium metastable atoms and ground state neon atoms result in a selective and efficient transfer of excitation energy from the helium to the neon. this excitation energy transfer process is given by the reaction equations he * ( 2³s₁ ) + ne ( ¹s₀ ) → he ( ¹s₀ ) + ne * ( 2s₂ ) + δe, and he * ( 2¹s₀ ) + ne ( ¹s₀ ) + δe → he ( ¹s₀ ) + ne * ( 3s₂ ), where ( * ) represents an excited state, and δe is the small energy difference between the energy states of the two atoms, of the order of 0.05 ev or 387 cm⁻¹, which is supplied by kinetic energy. excitation energy transfer increases the population of the neon 2s₂ and 3s₂ levels manyfold. when the population of these two upper levels exceeds that of the corresponding lower level neon state, 2p₄, to which they are optically connected, population inversion is present. the medium becomes capable of amplifying light in a narrow band at 1.15 μm ( corresponding to the 2s₂ to 2p₄ transition ) and in a narrow band at 632.8 nm ( corresponding to the 3s₂ to 2p₄ transition ). the 2p₄ level is efficiently emptied by fast radiative decay to the 1s state, eventually reaching the ground state. the remaining step in utilizing optical amplification to create an optical oscillator is to place highly reflecting mirrors at each end of the amplifying medium so that a wave in a particular spatial mode will reflect back upon itself, gaining more power in each pass than is lost due to transmission through the mirrors and diffraction. when these conditions are met for one or more longitudinal modes then radiation in those modes will rapidly build up until gain saturation occurs, resulting in a stable continuous laser beam output through the front ( typically 99 % reflecting ) mirror. the gain bandwidth of the hene laser is dominated by doppler broadening rather than pressure broadening due to the low gas pressure, and is thus quite narrow : only about 1.5 ghz full width for the 633 nm transition. with cavities having typical lengths of 15 cm to
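The quoted cavity lengths and the roughly 1.5 GHz Doppler-broadened gain bandwidth together fix how many longitudinal modes can oscillate at once. A quick sketch of that arithmetic follows; the specific cavity lengths are just the representative values mentioned in the text.

```python
# Back-of-envelope check: longitudinal mode spacing c / (2 L) for typical
# HeNe cavity lengths, compared with the ~1.5 GHz gain bandwidth quoted above.
C = 2.998e8                 # speed of light, m/s
GAIN_BANDWIDTH_HZ = 1.5e9   # approximate full gain bandwidth of the 633 nm line

for length_cm in (15, 30, 50, 100):
    spacing_hz = C / (2 * length_cm / 100.0)       # free spectral range of the cavity
    n_modes = GAIN_BANDWIDTH_HZ / spacing_hz       # modes that fit under the gain curve
    print(f"L = {length_cm:3d} cm : mode spacing = {spacing_hz / 1e6:6.0f} MHz, "
          f"~{n_modes:.1f} modes under the gain curve")
```

The output shows why a short 15 cm tube tends to run on only one or two longitudinal modes, while a metre-long cavity supports around ten.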
subdomain_quantum_optics
0.654742
512
HuggingFaceFW/fineweb-edu
<urn:uuid:65f608c4-8f14-4ab4-8d56-f0829dab41bc>
2
0.6
2025-12-25T21:32:43.926815
line - of - sight propagation refers to electro - magnetic radiation or acoustic wave propagation. electromagnetic transmission includes light emissions traveling in a straight line. the rays or waves may be diffracted, refracted, reflected, or absorbed by atmosphere and obstructions with material and generally cannot travel over the horizon or behind obstacles. at low frequencies ( below approximately 2 mhz or so ) radio signals travel as ground waves, which follow the earth ' s curvature due to diffraction with the layers of atmosphere. this enables am radio signals in low - noise environments to be received well after the transmitting antenna has dropped below the horizon. additionally, frequencies between approximately 1 and 30 mhz can be reflected by the f1 / f2 layer, thus giving radio transmissions in this range a potentially global reach ( see shortwave radio ), again along multiple deflected straight lines. the effects of multiple diffraction or reflection lead to macroscopically " quasi - curved paths ". however, at higher frequencies and in lower levels of the atmosphere, neither of these effects are significant. thus any obstruction between the transmitting antenna and the receiving antenna will block the signal, just like the light that the eye may sense. therefore, since the ability to visually see a transmitting antenna ( disregarding the limitations of the eye ' s resolution ) roughly corresponds to the ability to receive a radio signal from it, the propagation characteristic of high - frequency radio is called " line - of - sight ". the farthest possible point of propagation is referred to as the " radio horizon ". in practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal ( a function of both the transmitter and the antenna characteristics ). broadcast fm radio, at comparatively low frequencies of around 100 mhz, are less affected by the presence of buildings and forests. radio horizon the radio horizon is the locus of points at which direct rays from an antenna are tangential to the surface of the earth. if the earth were a perfect sphere and there were no atmosphere, the radio horizon would be a circle. the radio horizon of the transmitting and receiving antennas can be added together to increase the effective communication range. antenna heights above 1, 000, 000 feet ( 189 miles ; 305 kilometres ) will cover the entire hemisphere and not increase the radio horizon. radio wave propagation is affected by atmospheric conditions, ionospheric absorption, and the presence of obstructions, for example mountains or trees. simple formulas that include the effect of the atmosphere give the range as : the simple formulas
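The range formula is cut off in the text above. As a stand-in, the sketch below uses the common engineering approximation with a 4/3 effective earth radius to account for atmospheric refraction (roughly d in km ≈ 4.12 √h with h in metres); this may differ from whatever expression the original article quoted.

```python
# Radio horizon under the standard 4/3 effective-earth-radius approximation.
# The formula and constants are the usual engineering convention, assumed here
# rather than taken from the (truncated) source text.
from math import sqrt

EARTH_RADIUS_KM = 6371.0
K_FACTOR = 4.0 / 3.0        # effective earth radius factor for refraction

def radio_horizon_km(antenna_height_m: float) -> float:
    # distance to the radio horizon for a single antenna
    return sqrt(2.0 * K_FACTOR * EARTH_RADIUS_KM * antenna_height_m / 1000.0)

def max_line_of_sight_km(tx_height_m: float, rx_height_m: float) -> float:
    # the horizons of the transmitting and receiving antennas add together
    return radio_horizon_km(tx_height_m) + radio_horizon_km(rx_height_m)

print(f"30 m tower        -> horizon ~ {radio_horizon_km(30):.1f} km")
print(f"30 m to 10 m link -> range   ~ {max_line_of_sight_km(30, 10):.1f} km")
```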
subdomain_quantum_optics
0.607683
512
HuggingFaceFW/fineweb-edu
<urn:uuid:73f782c1-7cad-44a0-9556-b3f7e3645dbb>
0
0.6
2025-12-25T21:32:44.010708
this might be a rare case about which einstein was wrong. more than 60 years ago, the great physicist scoffed at the idea that anything could travel faster than light, even though quantum mechanics had suggested such a condition. now four swiss researchers have brought the possibility closer to reality. testing a concept called " spooky action at a distance " - - a phrase used by einstein in criticizing the phenomenon - - they have shown that two subatomic particles can communicate nearly instantaneously, even if they are separated by cosmic distances. alice ' s wonderland had nothing on quantum physics, which describes a bizarre state of matter and energy. not only can the same atom exist in two locations at once, but merely attempting to observe a particle will alter its properties. perhaps least intuitive is the characteristic called entanglement. as described by quantum mechanics, it means that two entangled particles can keep tabs on each other no matter how far apart they are. physicists have been trying for decades to determine whether this property is real and what might cause it. in the process, they ' ve uncovered evidence for it but not much about its properties. physicist nicolas gisin and colleagues at the university of geneva in switzerland split off pairs of quantum - entangled photons and sent them from the university ' s campus through two fiber - optic cables to two swiss villages located 18 kilometers apart. thinking of the photons like traffic lights, each passed through specially designed detectors that determined what " color " they were when entering the cable and what color they appeared to be when they reached the terminus. the experiments revealed two things : first, the physical properties of the photons changed identically during their journey, just as predicted by quantum theory - - when one turned " red, " so did the other. second, there was no detectable time difference between when those changes occurred in the photons, as though an imaginary traffic controller had signaled them both. the result, the team reports in tomorrow ' s issue of nature, is that whatever was affecting the photons seems to have happened nearly instantaneously and that according to their calculations, the phenomenon influencing the particles had to be traveling at least 10, 000 times faster than light. given einstein ' s standard speed limit on light traveling within conventional spacetime, the experiments show that entanglement might be controlled by something existing beyond it. gisin says that once the scientific community " accepts that nature has this ability, we should try to create models that explain it. " although the research doesn ' t demonstrate spooky action
subdomain_quantum_optics
0.673151
512
HuggingFaceFW/fineweb-edu
<urn:uuid:46289f9b-357a-4cc5-84f9-63d88b754926>
0
0.6
2025-12-25T21:56:09.416285
a theoretical analysis of recent experiments suggests that a key feature of a topological quantum computer — the unusual statistics of quasiparticles in the quantum hall effect — may finally have been observed. by exploiting the concept of particle - hole duality, one can realize a point junction between integer and fractional quantum hall phases, which constitutes a crucial building block towards possible applications of the quantum hall effect. the fractional quantum hall effect, thought to be special to two dimensions, may also flourish in three, providing a possible explanation for anomalies observed in certain 3d materials in high magnetic fields. physics2, 24 ( 2009 ) – published march 30, 2009 the surprising prediction that currents can flow forever in small normal metal rings was confirmed almost twenty years ago. highly precise new experiments find good agreement with theory that was not seen till now. h. a. fertig, physics2, 15 ( 2009 ) – published february 23, 2009 measurements of the heat transport at the edges of two - dimensional electron systems appear to provide explanations about the quantum hall state that have not been forthcoming via charge transport experiments. crystalline structures have been observed in nanoislands of electrons floating above superfluid helium. the energy required to add or subtract an electron from these quantum - dot - like islands agrees well with theory. physics1, 36 ( 2008 ) – published november 24, 2008 the esoteric concept of “ axions ” was born thirty years ago to describe the strong interaction between quarks. it appears that the same physics — though in a much different context — applies to an unusual class of insulators. graphene has been idealized as a two - dimensional electron system in which the electrons behave like massless fermions, but how “ perfect ” is it? scientists now show they can prepare free - standing sheets of graphene that have some of the highest electron mobilities of any inorganic semiconductor. a decade ago, experimentalists showed that persistent currents can flow in nonsuperconducting mesoscopic metal rings, but there was no theory that correctly explained the magnitude or direction of the unexpectedly large currents. theorists are now proposing a simple idea that may at last explain these results. electrons in graphene can be described by the relativistic dirac equation for massless fermions and exhibit a host of unusual properties. the surfaces of certain band insulators — called topological insulators — can be described in a similar way, leading to an exotic metallic surface on an otherwise “ ordinary ” insulator.
subdomain_quantum_materials
0.739219
511
HuggingFaceFW/fineweb-edu
<urn:uuid:a7f014e1-dfcd-46b7-926a-0554c521be0c>
0
0.6
2025-12-25T21:56:09.570136
nicholas wolterstorff, reason and belief in god, in faith and rationality, 78 - 91. see also john hick, philosophy of religion, 76. 191 the possibility of public access to that experience. there is much philosophical debate concerning precisely how perception is to be analyzed. in particular, questions are raised concerning the status of the phenomenon. but there is general agreement that in perception, objects present themselves to us in ways that enable us to know them. similarly, in religious experience god presents himself in ways that enable us to know him and his actions. for alston there are, it seems, important differences between ordinary perceptual or sense experience and religious experience. sense perception is a common experience, whereas religious experience is less common, perhaps, even rare, sense perception yields a great deal of information about the world, whereas religious experience yields apparently little information about god, all humans have the capacity for sense perception, but many seem not to have the capacity for religious experience. these differences, however, do not show that religious experience has a structure unlike perception. for one thing, neither the frequency of an experience nor the amount of information it yields tells us anything about its structure. on the other hand, the limitation of the rationalist way is that the only truths capable of being strictly proved are analytic and ultimately tautological. but we cannot by logic alone demonstrate any matter of fact and existence ; these must be known through experience. for sure if nothing were given through experience in its various modes, we should never have anything to reason about. this is as true in religion as in other fields. if god exists, god is not an idea but a reality outside us, in order to be known to men and women, god must therefore become manifest in some way within their experience. this conclusion is in line with the contemporary revolt against the rationalist assumptions which have dominated much of western philosophy since the time of descartes. 516 descartes held that we can properly be said to know only truths that are self - evident or that can be reached by logical inferences from self - evident premises. therefore, those who stress faith and attack reason often place a great deal of emphasis on religious experience. however, religious experience is by no means a purely emotional “ happening ” ; rather, it involves concepts and beliefs about the being that is experienced. if we tried to separate religious experiences from such concepts and beliefs - from the religious belief - system, as we shall call it - then there would be no way saying who or what is
subdomain_quantum_optics
0.600491
512
HuggingFaceFW/fineweb-edu
<urn:uuid:95cb84d8-be98-44b4-bedc-a19017694c81>
7
0.6
2025-12-25T21:56:09.626699
( this is reposted from the dictionary help page ) can ’ t understand some parts of the dictionary? fear no more! what do these things mean? n. – noun. usually definitions starting with “ a, ” “ an ” or “ the ” except for “ the state of being. ” used to name a person, place, thing, or idea. v. – verb. usually definitions starting with “ to ” except for “ to be. ” a word or combination of words that expresses an action or says something about the existence or condition of a noun or pronoun. adj. – adjective. usually definitions starting with “ to be ” or “ the state of being. ” a word that modifies a noun or pronoun. modify means to limit, qualify, or make partial changes. adv. – adverb. usually starts with “ the state of ” with a verb and the description of that action. a word that modifies a verb, an adjective, or another verb ( when, where, how, how often, to what extent ). many adverbs en in - ly. contraction. – a shortened word acronym. – the first letters of a bunch of words or phrase interjection. – a word that you would use to say something out loud, like “ damn ” or “ fuck. ” a word or phrase used to express pain, surprise, anger, pleasure, or some other emotions. stands apart from other words in sentences. klingon. – a word from the klingon language location. – a location phrase. – a phrase that has more than 1 word in it ebonics. – a word or series of words used in ebonics preposition. – a word like “ a. ” a word that shows a relation between the word following it and some other word or group of words in a sentence. conjunction. – a word that combines two parts of a sentence, such as “ but, ” “ and, ” and “ yet ” pronoun. – a word that replaces another noun. such as he, she, it. it stands for or takes place of a noun and functions in most ways as a noun.?. – we don ’ t know what kind of word it is ex. – example ; } – separates one definition from another definition for the same word. similar to the numbering system used in actual dictionaries. the reason we don ’ t use numbers in our definitions is because we use numbers sometimes to define
subdomain_quantum_optics
0.6027
512
HuggingFaceFW/fineweb-edu
<urn:uuid:03a8031f-9332-4bc2-b55a-c0e0bc509b2c>
0
0.6
2025-12-25T21:56:09.871751
the universe may be a deterministic system, but that doesn ' t mean random chance doesn ' t exist, or that you can determine the exact path the future will take in advance. for example, heisenburg ' s uncertainty principle shows that you can affect an electron ' s position by measuring its momentum, and vice versa. that ' s because electrons are so small that the act of observing them causes a change in their position or momentum, depending on whether you ' re measuring their momentum or position. there ' s a well - known experiment where you shoot electrons at a double - slit in a screen and then see what pattern they form ; if you don ' t observe the electrons going through the slit, they generate a standard wave interference pattern ( meaning the electrons are seemingly interfering with themselves ), but if you do, the pattern changes to one generated by particles. furthermore, if you delay the observation ( i. e., by using a removable detector screen ), you can cause a retroactive change from a wave pattern to a particle one, and if you make it possible to destroy the measurement of which slit the electron goes through, you can cause a second retroactive change, from a particle pattern to a wave one. here ' s something interesting to think about. let ' s say you have two people, essentially identical, except one believes that free will somehow exists, and the other believes that it doesn ' t. the two people will act differently based on whether they believe in free will or not. furthermore, if they later change their minds ( in other words, make themselves believe the opposite of what they believed before ), it will change their behavior. to make the point even clearer, if you had a third person who had never heard of free will, they ' d act in a completely different way than the other two - but once they heard of it, depending on whether it was " free will exists " or " free will doesn ' t exist ", it would instantly change their behavior from then on. in other words, yes, i believe it ' s possible to change your own behavior by making yourself change what you believe. i don ' t know whether that would actually be considered free will, but i do know that it ' s close enough to count for me.
subdomain_quantum_mechanics
0.622737
463
HuggingFaceFW/fineweb-edu
<urn:uuid:c454f157-a765-49c1-a486-4a816026db7b>
0
0.6
2025-12-25T21:56:10.149739
key : " s : " = show synset ( semantic ) relations, " w : " = show word ( lexical ) relations display options for sense : ( gloss ) " an example sentence " - s : ( n ) swing ( a state of steady vigorous action that is characteristic of an activity ) " the party went with a swing " ; " it took time to get into the swing of things " - s : ( n ) swing ( mechanical device used as a plaything to support someone swinging back and forth ) - s : ( n ) swing ( a sweeping blow or stroke ) " he took a wild swing at my head " - s : ( n ) swing, swinging, vacillation ( changing location by moving back and forth ) - s : ( n ) swing, swing music, jive ( a style of jazz played by big bands popular in the 1930s ; flowing rhythms but less complex than later styles of jazz ) - s : ( n ) lilt, swing ( a jaunty rhythm in music ) - s : ( n ) golf stroke, golf shot, swing ( the act of swinging a golf club at a golf ball and ( usually ) hitting it ) - s : ( n ) baseball swing, swing, cut ( in baseball ; a batter ' s attempt to hit a pitched ball ) " he took a vicious cut at the ball " - s : ( n ) swing ( a square dance figure ; a pair of dancers join hands and dance around a point between them ) - s : ( v ) swing ( move in a curve or arc, usually with the intent of hitting ) " he swung his left fist " ; " swing a bat " - s : ( v ) swing, sway ( move or walk in a swinging or swaying manner ) " he swung back " - s : ( v ) swing ( change direction with a swinging motion ; turn ) " swing back " ; " swing forward " - s : ( v ) swing, swing over ( influence decisively ) " this action swung many votes over to his side " - s : ( v ) swing, sweep, swing out ( make a big sweeping gesture or movement ) - s : ( v ) dangle, swing, drop ( hang freely ) " the ornaments dangled from the tree " ; " the light dropped from the ceiling " - s : ( v ) swing ( hit or aim at with a sweeping arm movement ) " the soccer player began to swing at the referee " - s : ( v ) swing ( alternate dramatically between high
subdomain_quantum_optics
0.627839
512
HuggingFaceFW/fineweb-edu
<urn:uuid:bea841e1-0977-4e22-8ea9-4c09e568e019>
0
0.6
2025-12-25T21:56:10.174890
, the search can go twice as deep with the same amount of computation. the explanation of b * 1 * b * 1 *... is that all the first player ' s moves must be studied to find the best one, but for each, only the best second player ' s move is needed to refute all but the first ( and best ) first player move – alpha - beta ensures no other second player moves need be considered. if b = 40 ( as in chess ), and the search depth is 12 ply, the ratio between optimal and pessimal sorting is a factor of nearly 40 ^ 6 or about 4 billion times. normally during alpha - beta, the subtrees are temporarily dominated by either a first player advantage ( when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try and find a refutation ), or vice versa. this advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. as the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. an improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. in practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening. the algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of respectively. initially alpha is negative infinity and beta is positive infinity. as the recursion progresses the " window " becomes smaller. when beta becomes less than alpha, it means that the current position cannot be the result of best play by both players and hence need not be explored further. function alphabeta ( node, depth, α, β ) ( * β represents previous player best choice - doesn ' t want it if α would worsen it * ) if node is a terminal node or depth = 0 return the heuristic value of node
subdomain_quantum_simulation
0.614395
512
HuggingFaceFW/fineweb-edu
<urn:uuid:c914da0f-b539-4cf8-9273-83eb7fb2af6f>
2
0.6
2025-12-25T21:56:10.711702
further.

    function alphabeta ( node, depth, α, β )
        ( * β represents the previous player ' s best choice - doesn ' t want it if α would worsen it * )
        if node is a terminal node or depth = 0
            return the heuristic value of node
        foreach child of node
            α := max ( α, - alphabeta ( child, depth - 1, - β, - α ) ) ( * use symmetry, - β becomes the subsequently pruned α * )
            if β ≤ α
                break ( * beta cut - off * )
        return α

further improvement can be achieved without sacrificing accuracy, by using ordering heuristics to search parts of the tree that are likely to force alpha - beta cutoffs early. for example, in chess, moves that take pieces may be examined before moves that do not, or moves that have scored highly in earlier passes through the game - tree analysis may be evaluated before others. another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta - cutoff at the same level in the tree search is always examined first. this idea can be generalized into a set of refutation tables. alpha - beta search can be made even faster by considering only a narrow search window ( generally determined by guesswork based on experience ). this is known as aspiration search. in the extreme case, the search is performed with alpha and beta equal ; a technique known as zero - window search, null - window search, or scout search. this is particularly useful for win / loss searches near the end of a game where the extra depth gained from the narrow window and a simple win / loss evaluation function may lead to a conclusive result. if an aspiration search fails, it is straightforward to detect whether it failed high ( high edge of window was too low ) or low ( lower edge of window was too high ). this gives information about what window values
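a minimal runnable sketch of the negamax form of alpha - beta shown in the pseudocode above ; the game object and its children / is_terminal / heuristic methods are illustrative assumptions, not names from the original text :

    import math

    def alphabeta(node, depth, alpha, beta, game):
        # negamax convention: the returned value is always from the viewpoint of the player to move
        if depth == 0 or game.is_terminal(node):
            return game.heuristic(node)
        for child in game.children(node):
            # the child's (alpha, beta) window is the parent's window negated and swapped
            alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha, game))
            if beta <= alpha:
                break  # beta cut-off: the opponent already has a better alternative elsewhere
        return alpha

    # usage (hypothetical): best = alphabeta(root, 12, -math.inf, math.inf, game)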
subdomain_quantum_simulation
0.600056
512
HuggingFaceFW/fineweb-edu
<urn:uuid:c914da0f-b539-4cf8-9273-83eb7fb2af6f>
3
0.6
2025-12-25T21:56:10.712645
posted : jun 28th, 2010 novel maskless e - beam technique : a promising tool for engineering metallic nanostructures ( nanowerk spotlight ) the manufacture of certain types of nanostructures – nanotubes, graphene, nanoparticles, etc. – has already entered industrial - scale mass production. however, the controlled fabrication of nanostructures with arbitrary shape and defined chemical composition is still a major challenge in nanotechnology applications. it appears that electron beams from electron microscopes ( em ) – nowadays routinely focused down to the nanometer regime – are ideal candidates for versatile tools for nanotechnology ( see our recent nanowerk spotlight : " direct - write process brings nanotechnology fabrication closer to mass production " ). however, their usage is mostly restricted by the conditions in the corresponding electron microscopes : since most ems are housed in high vacuum chambers, the unintended electron - beam - induced deposition of residual gases is a problem, as is the maintenance of well - defined sample conditions. researchers in germany have now presented a novel way to use a highly focused electron beam to lithographically fabricate clean iron nanostructures. this new technique expands the application field for focused electron beams in nanotechnology. " we have developed a novel two - step process to locally generate iron nanostructures on a commercial 300 nm silicon oxide substrate at room temperature, " hubertus marbach, a researcher at the universitat erlangen - nurnberg, tells nanowerk. " in the first step, the surface is locally activated by a 3 nm wide electron beam. the second step comprises the development of the activated structures by dosing an organometallic precursor, which then decomposes and grows autocatalytically to form pure iron nanocrystals until the precursor supply is stopped. " using a more vivid picture, marbach says that one might think of the whole process as writing with invisible ink in the irradiation step, which is then made visible by the development step. " besides the fantasy - stimulating application to write secret nanomessages in ultrahigh vacuum, the described effect might be the starting point for a whole new way to generate nanostructures. " electrons as invisible ink. a siox surface can be locally activated with a focused electron beam ( 1 ) such that subsequently dosed [ fe ( co ) 5 ] decomposes ( 2 ) and auto
subdomain_quantum_materials
0.619258
512
HuggingFaceFW/fineweb-edu
<urn:uuid:ef3f121e-0523-4e2c-99bf-7396c4f065d5>
0
0.6
2025-12-25T21:56:11.665266
to generate nanostructures. " electrons as invisible ink. a siox surface can be locally activated with a focused electron beam ( 1 ) such that subsequently dosed [ fe ( co ) 5 ] decomposes ( 2 ) and autocatalytically grows to pure fe nanocrystals ( 3 ) at predefined positions until the precursor supply is stopped. a 3d representation of the sem data is in the background. ( reprinted with permission from wiley - vch verlag ) the major new aspect of this work is the local chemical activation, i. e. catalytic activation of an oxidic surface. the researchers use this process to locally dissociate adsorbed precursor molecules and then generate nanostructures with an electron beam ( a process that can be categorized as focused electron beam induced processing or febip, where the injection or removal of electrons can be used to trigger chemical processes, such as bond formation or dissociation ). the starting point of the present investigations was the so called electron beam induced deposition or ebid technique a special case of febip, where already adsorbed precursor molecules are locally dissociated with a focused electron beam, leaving a deposit of the nonvolatile dissociation products. to minimize the complications of unintended ebid of residual gases, the team followed a ' surface science approach ' where they worked under ultra high vacuum ( uhv ) conditions. this resulted in deposits with high purity. the cleanliness of the whole process, namely uhv conditions plus a well - defined surface, was identified as the key factor for the purity of the metallic nanostructures. in a previous paper, marbach and his team have described this technique ( " electron - beam - induced deposition in ultrahigh vacuum : lithographic fabrication of clean iron nanostructures " ) marbach explains that, in conventional applications, the high energetic primary electrons of the em beam are scattered in the sample. eventually, scattered electrons exit the surface again close to the impact of the electron beam. " in ebid, this effectively leads to a widening of the deposit compared to the size of the beam " he says. " this ( proximity ) effect increases with an increase of the local electron dose. since our fabrication technique relies on catalytic and autocatalytic effects, the electron dose needed as a ' seed ' for the growth of the iron nanostructures can be minimized, thus reducing the mentioned proximity effect. in other words, our
subdomain_quantum_materials
0.618655
512
HuggingFaceFW/fineweb-edu
<urn:uuid:ef3f121e-0523-4e2c-99bf-7396c4f065d5>
1
0.6
2025-12-25T21:56:11.666331
our fabrication technique relies on catalytic and autocatalytic effects, the electron dose needed as a ' seed ' for the growth of the iron nanostructures can be minimized, thus reducing the mentioned proximity effect. in other words, our approach might be suitable to produce smaller structures. " ebid allows almost every combination of deposit material and substrate to be targeted since there is a large variety of precursor molecules and there are nearly no restrictions in regard to the substrate. in this specific work, the researchers ' aim was to generate clean iron nanostructures with potential applications in the field of data storage, sensor or information processing devices or as seeds for the localized growth of other nanostructures like carbon nanotubes or silicon wires. with their novel febip process they are now moving on to explore other oxide materials and precursor molecules. " we propose our technique to pre - structure the surface by a local chemical modification as a general route to fabricate nanostructures, e. g. to locally anchor or activate functional molecules, " says marbach. one challenge of the novel process is the rather low writing speed. marbach points out though, that there are considerable efforts underway to develop multibeam instruments which would boost the throughput of electron - beam - based techniques, e. g. at the tu delft ( mapper lithography ) and the european charpan project located in vienna.
subdomain_quantum_materials
0.600515
288
HuggingFaceFW/fineweb-edu
<urn:uuid:ef3f121e-0523-4e2c-99bf-7396c4f065d5>
2
0.6
2025-12-25T21:56:11.666875
a plan is typically any diagram or list of steps with timing and resources, used to achieve an objective. see also strategy. it is commonly understood as a temporal set of intended actions through which one expects to achieve a goal. for spatial or planar topologic or topographic sets see map. plans can be formal or informal : - structured and formal plans, used by multiple people, are more likely to occur in projects, diplomacy, careers, economic development, military campaigns, combat, or in the conduct of other business. in most cases, the absence of a well - laid plan can have adverse effects : for example, a non - robust project plan can cost the organization time and money. - informal or ad - hoc plans are created by individuals in all of their pursuits. the most popular ways to describe plans are by their breadth, time frame, and specificity ; however, these planning classifications are not independent of one another. for instance, there is a close relationship between the short - and long - term categories and the strategic and operational categories. it is common for less formal plans to be created as abstract ideas, and remain in that form as they are maintained and put to use. more formal plans as used for business and military purposes, while initially created with and as an abstract thought, are likely to be written down, drawn up or otherwise stored in a form that is accessible to multiple people across time and space. this allows more reliable collaboration in the execution of the plan.
subdomain_quantum_field_theory
0.615734
512
HuggingFaceFW/fineweb-edu
<urn:uuid:8705b413-7063-412c-a320-887aa255cf28>
0
0.6
2025-12-25T21:56:11.879691
science of breath, by yogi ramacharaka, pseud. william atkinson, at sacred - texts. com the science of breath, like many other teachings, has its esoteric or inner phase, as well as its exoteric or external. the physiological phase may be termed the outer or exoteric side of the subject, and the phase which we will now consider may be termed its esoteric or inner side. occultists, in all ages and lands, have always taught, usually secretly to a few followers, that there was to be found in the air a substance or principle from which all activity, vitality and life was derived. they differed in their terms and names for this force, as well as in the details of the theory, but the main principle is to be found in all occult teachings and philosophies, and has for centuries formed a portion of the teachings of the oriental yogis. in order to avoid misconceptions arising from the various theories regarding this great principle, which theories are usually attached to some name given the principle, we, in this work, will speak of the principle as " prana, " this word being the sanskrit term meaning " absolute energy. " many occult authorities teach that the principle which the hindus term " prana " is the universal principle of energy or force, and that all energy or force is derived from that principle, or, rather, is a particular form of manifestation of that principle. these theories do not concern us in the consideration of the subject matter of this work, and we will therefore confine ourselves to an understanding of prana as the principle of energy exhibited in all living things, which distinguishes them from a lifeless thing. we may consider it as the active principle of life, vital force, if you please. it is found in all forms of life, from the amoeba to man, from the most elementary form of plant life to the highest form of animal life. prana is all pervading. it is found in all things having life, and as the occult philosophy teaches that life is in all things, in every atom, the apparent lifelessness of some things being only a lesser degree of manifestation, we may understand their teachings that prana is everywhere, in everything. prana must not be confounded with the ego, that bit of divine spirit in every soul, around which clusters matter and energy. prana is merely a form of energy used by the ego in its material manifestation. when the ego leaves the body, the pr
subdomain_quantum_field_theory
0.616108
512
HuggingFaceFW/fineweb-edu
<urn:uuid:5601dbfc-518f-4d18-9f0d-7053c7203d33>
0
0.6
2025-12-25T21:56:11.945457
june 8, 2007 concerned that current methods for making computer chips might become stymied as components keep shrinking, many engineers are looking for circuit building blocks with improved electrical properties. among the most promising are stringy carbon nanotubes that capably form transistors to switch current on and off. but the nanotubes tend to grow with unpredictable kinks and bends that could cause bad wiring connections. this week at the design automation conference in san diego, a group of stanford engineers will present a way to design circuits that should work even when many of the nanotubes in them are twisted and misaligned. " the question is what ' s next in chip technologies, " says subhasish mitra, an assistant professor of electrical engineering and computer science. " that ' s why nanotechnology is important. but you want to make sure that you are not in a lab making something that chip designers cannot actually use. " to prevent that, he and electrical engineering professor h. - s. philip wong, working with chemistry professor chongwu zhou at the university of southern california, have been looking closely at how nanotubes end up resting on the surfaces of experimental chips. " it ' s not as bad as a plate of noodles, " mitra says. " you want to create transistors out of these things, and hook up these transistors and make them turn on and off independently. but if twisted carbon nanotubes, for example, short out the circuit, you lose the opportunity to do that. " making messy workable what mitra, wong and graduate students nishant patil and jie deng have realized is that if nanotubes are always going to be somewhat askew, engineers will have to design circuits that can work regardless of where and how the tubes lie. they started by coming up with a single circuit element, a nand gate, that was immune from the vagaries of its underlying nanotube layout. from that single element that could function despite misalignments, they abstracted and generalized the math to come up with an algorithm that can guarantee a working design for any circuit element, mitra says, even when a large number of nanotubes are misaligned. using simulations developed by wong and deng, the group has been able to show that not only do the algorithm ' s designs work, but they also don ' t appear to exact a significant financial, speed or energy price compared to traditional designs, mitra says. the key to determining whether a circuit
subdomain_quantum_materials
0.611308
512
HuggingFaceFW/fineweb-edu
<urn:uuid:b298a488-e67d-4d39-83ca-1c41385ec8c0>
0
0.6
2025-12-25T21:56:12.035815
group has been able to show that not only do the algorithm ' s designs work, but they also don ' t appear to exact a significant financial, speed or energy price compared to traditional designs, mitra says. the key to determining whether a circuit element is immune to nanotube misalignment is breaking up each circuit element into a fine grid that can be analyzed mathematically. doing this in the abstract with models allows engineers to determine which grid squares nanotubes must pass through and which they shouldn ' t traverse to make a design work correctly. to eliminate unwanted connections, nanotubes in so - called " illegal " regions can then be either chemically etched away or rendered electrically irrelevant in other ways. the stanford algorithm takes this all several steps further, applying sophisticated mathematics to automatically determine where the legal and illegal regions should be in the design of a circuit element with a particular function. " you not only determine whether something is immune or not, but can automatically generate circuit designs that are guaranteed to be immune, " mitra says. while the algorithm can overcome all the bad connections that errant nanotubes make, it cannot guarantee that a nanotube will always make a desired connection. nanotubes also have other problems that remain unsolved, mitra points out. some, for example, always conduct electricity instead of switching on and off like a semiconductor should. the group ' s next step is to move beyond simulation to build and test real circuit elements according to the algorithm ' s output. while more work is necessary to deliver the promise of nanotube technology, solving the misalignment problem would be a significant step. " carbon nanotube transistors show great promise as extensions to silicon transistors due to their fast speed, small size and lower energy consumption, " patil says. " using this technique, we can make larger and more complex circuit blocks with them. " wong speculates that the advance could eventually spill over from chips to assist engineers facing analogous challenges. " a similar methodology can be applied to many emerging technologies, " he says. " the concept of not having to define everything with high precision is germane to engineering robust systems. " the microelectronics advanced research corporation supported the research. other social bookmarking and sharing tools : the above story is reprinted from materials provided by stanford university. the original article was written by david orenstein, communications and public relations manager at the stanford school of engineering.. note : materials may be edited for content and length. for
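a toy sketch of the legal / illegal - region idea described above : nanotubes are modelled as paths over a grid, and any tube that crosses an " illegal " cell is flagged for removal ( e. g. by etching ). the data representation and function names are assumptions made for illustration only ; the stanford group ' s actual algorithm is not reproduced here :

    def tubes_to_etch(tube_paths, illegal_cells):
        # return the indices of nanotubes that cross a forbidden grid region
        illegal = set(illegal_cells)
        return [i for i, path in enumerate(tube_paths)
                if any(cell in illegal for cell in path)]

    # usage: the second, misaligned tube wanders through the illegal cell (1, 2) and is flagged
    paths = [[(0, 0), (0, 1), (0, 2)],
             [(1, 0), (1, 1), (1, 2), (0, 2)]]
    print(tubes_to_etch(paths, illegal_cells=[(1, 2)]))  # -> [1]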
subdomain_quantum_simulation
0.606372
512
HuggingFaceFW/fineweb-edu
<urn:uuid:b298a488-e67d-4d39-83ca-1c41385ec8c0>
1
0.6
2025-12-25T21:56:12.037150
june 10, 2008 researchers in sweden and japan report development of a new type of paper that resists breaking when pulled almost as well as cast iron. the new material, called " cellulose nanopaper, " is made of sub - microscopic particles of cellulose and may open the way for expanded use of paper as a construction material and in other applications, they suggest. in the new study, lars a. berglund and colleagues note that cellulose - - a tough, widely available substance obtained from plants - - has potential as a strong, lightweight ingredient in composites and other materials in a wide range of products. although cellulose - based composites have high strength, existing materials are brittle and snap easily when pulled. the study described a solution to this problem. it involves exposing wood pulp to certain chemicals to produce cellulose nanopaper. their study found that its tensile strength - - a material ' s ability to resist pull before snapping - - exceeded that of cast iron. they also were able to adjust the paper ' s strength by changing its internal structure. other social bookmarking and sharing tools : note : materials may be edited for content and length. for further information, please contact the source cited above. - henriksson et al. cellulose nanopaper structures of high toughness. biomacromolecules, 2008 ; 9 ( 6 ) : 1579 doi : 10. 1021 / bm800038n note : if no author is given, the source is cited instead.
subdomain_quantum_materials
0.640665
310
HuggingFaceFW/fineweb-edu
<urn:uuid:32637a04-ffa2-4d39-9250-69647a3a9442>
0
0.6
2025-12-25T21:56:12.038673
s ] this early part of this time frame might be characterized as the pre - pubescence of laminated decorative surface materials. it was a time of great growth and opportunity, as well as some awkwardness and misunderstanding. new techniques were developed to expand the usage of decorative surface materials. some of them improved the performance of plastic laminates, such as the 20 - fold increase of durability that evolved with the introduction of laminate flooring products. a classic example is vertical surfaces. hpl is an extremely high performance product. it is perfect for usage in horizontal work surface and high - traffic areas. “ value - engineered ” new products were designed and introduced to meet the needs of similar applications with lower manufacturing costs. a good example of this is thermally fused melamine ( tfm ), which is essentially the top layers of hpl ( decor paper impregnated with melamine resins ), thermally fused to particleboard or mdf forming a stand - alone decorative panel. as these derivative products emerged they were not always specified based on the value of their performance, but often purely on cost. this had a negative effect on the perception of laminated decorative surfaces and the term “ laminate ” took on the connotation of a low quality imitation product, an unfortunate misconception. in a sense, engineered products were victims of their own genius, particularly considering how quickly technology advanced in the information age. but all was not lost, and savvy professionals knew that to maximize the use of any material it was important to understand its strengths and limitations. this explains the resurgence of design interest in the potential of laminated decorative surface materials. in addition to specialized performance, decorative surfaces were undergoing a paradigm shift in visual realism during this period. computers and digital scanning technology now allow decor designers to replicate any material with unprecedented fidelity and dimensionality. imaging software has made it possible to bring any design that can be imagined into being. laser engraving of rotogravure cylinders enables sharper contrast and more subtle tonal gradients than was previously possible. it has also expedited the process of decor development and sampling. new digital ink - jet printing technologies are driving decor development to move beyond commodity designs and into experimental boutique fashions and customized surfaces such as logos and murals. advanced surface treatments and overlay technologies also play important roles in the development of decorative surface materials, enhancing both the visual and tactile qualities of the products. one technique uses engineered press plates to create embossed texture “ in register ” with the
subdomain_quantum_materials
0.633394
512
HuggingFaceFW/fineweb-edu
<urn:uuid:42b449d7-e602-4d93-9a4b-803af0cbd59c>
3
0.6
2025-12-25T21:56:12.132666
| life depends on an essentially continuous exchange of mass and energy between living organisms and their environment. human impact on this vital exchange has occurred on a global or macroclimate scale. understanding the physical principles involved in heat transfer and absorption in the atmosphere is critical to understanding how these physical factors affect living organisms. the specific objectives of this section are to explain the properties of heat transfer, and to describe laboratory activities that can be used at a variety of academic levels with only slight modification. described below are three series of experiments performed in the laboratory to address questions that emphasize the underlying principles of heat transfer. these hands - on experiments focused on principles that relate to conduction and convection. the object was to identify the method of heat transfer through solids, liquids, gases, and between boundaries. understanding these concepts gave us a better understanding of how heat is transferred between our environment and living organisms. these experiments were used as an integral part of the workshop, which consisted of reflections on redesigning or modifying lab exercises to fit personal needs of workshop teachers. these exercises could be adapted for middle school, high school, and college level courses. the methods utilized for the three experiments involved increasing or decreasing the temperature of a solid or liquid, and where applicable, observing the motion of a dye caused by the changes in temperature and density of the medium.
| modes of heat transfer :
- conduction : heat transfer resulting from direct contact between substances of different temperatures ; heat is transferred from the high - temperature substance to the low by direct molecular interaction.
- convection : heat transport by a moving fluid ( gas or liquid ). the heat is first transferred to the fluid by conduction, but the fluid motion carries the heat away.
- radiative exchange : heat transfer via electromagnetic waves ; the amount of radiant energy emitted, transmitted, or absorbed. ( figure from microsoft encarta )
laboratory apparatus for labs 1 - 3 | lab 1 : heating from below : convection in this experiment, water was heated from below to produce convection. although the atmosphere is composed of air, this experiment was relevant to atmospheric motion as well. the lower atmosphere ( troposphere ) is mostly heated from below because the oceans and continents absorb radiation from the sun and then transfer some of the resulting heat energy to the lower atmosphere. in lab 1, a beaker was heated ( see figure below ). thermometers were placed 1 / 2 cm below the water surface and 1 / 2 cm above the bottom of the beaker. the temperature was recorded at 30 second intervals. drops of dye were added to the bottom
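a minimal sketch of the governing relation for each of the three modes ; the property values used in the example are typical handbook figures assumed for illustration, not data from the labs :

    SIGMA = 5.670e-8  # stefan - boltzmann constant, W m^-2 K^-4

    def conduction_flux(k, t_hot, t_cold, thickness):
        # fourier's law for a plane layer: q = k * (t_hot - t_cold) / thickness, in W/m^2
        return k * (t_hot - t_cold) / thickness

    def convection_flux(h, t_surface, t_fluid):
        # newton's law of cooling: q = h * (t_surface - t_fluid); h is an empirical coefficient
        return h * (t_surface - t_fluid)

    def radiative_flux(emissivity, t_surface, t_surroundings):
        # net exchange between a small grey surface and its surroundings (temperatures in kelvin)
        return emissivity * SIGMA * (t_surface ** 4 - t_surroundings ** 4)

    # e.g. a 5 cm water layer (k ~ 0.6 W/m/K) held between 353 K and 293 K conducts about 720 W/m^2
    print(conduction_flux(k=0.6, t_hot=353.0, t_cold=293.0, thickness=0.05))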
subdomain_quantum_materials
0.62173
512
HuggingFaceFW/fineweb-edu
<urn:uuid:77831d47-ed83-47e1-aa8f-16bc431d8619>
0
0.6
2025-12-25T21:56:12.433963
( see figure below ). thermometers were placed in 1 / 2 cm below water surface and 1 / 2 cm above the bottom of the beaker. the temperature was recorded at 30 second intervals. drops of dye were added to the bottom of the beaker between intervals. after three minutes the beaker was removed from the hot plate and temperature reading recorded for another five minutes. convection was visualized by observing the motion of the the motion of the dye was circular from bottom to top and returning to the bottom of the beaker. the energy from heating created a less dense liquid at the bottom, thus causing the upward motion of the dye. upon reaching the surface, the dye was now in the denser medium and therefore returned to the bottom. this motion is an example of convection. this phenomenon is evident in the motion of wind. the difference in densities and kinetic movement of the water molecules driven by temperature change resulted in the movement of air molecules. this lab can be used at lower levels to demonstrate simple properties of heat transfer and convection. at higher levels, this lab illustrates these basic principles, and could be extended to address more complex applications related to convection such as the coriolis 1. explain the process by which the water is heated. 2. describe the motion of the water as made visible by the 3. why does convection occur? 4. did convection cease? when? why? environmental applications of principles of radiative exchange, conduction and convection ( figure from e. zerba, princeton university ; email @ example. com ) return to top | lab 2 : conduction comparison of this experiment with the first illustrated the difference between the rate of heat transfer by conduction and that of convection. it also illustrated the difference in heat capacities between water and the solid materials of the lab 2 was configured similarly to lab 1, but looked at the effect of heating and cooling temperature difference using sand of equal weight as water used in experiment 1. no dye was used in this experiment, as convection was not a the temperature difference between the top and bottom layers of sand indicated that sand heats and cools at a faster rate compared to water. when the beaker was removed from the heat, the temperature continued to increase via conduction from the bottom of the beaker. this lab exercise is useful for demonstrating the concept of conduction to lower level students. upper level students can use this lab to make the connections between conduction and heat capacity of various substances related to heat transfer that occurs between the earth ' s surfaces and the surface
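a short worked example of the heat - capacity point, using typical values of roughly 4. 18 J / g · K for water and about 0. 8 J / g · K for dry sand ; these figures are assumptions for illustration, not measurements from this lab :

    def delta_t(heat_joules, mass_grams, specific_heat):
        # temperature change from q = m * c * dT, rearranged to dT = q / (m * c)
        return heat_joules / (mass_grams * specific_heat)

    q, m = 1000.0, 100.0             # add 1 kJ to 100 g of each material
    print(delta_t(q, m, 4.18))       # water:    ~2.4 K rise
    print(delta_t(q, m, 0.8))        # dry sand: ~12.5 K rise - sand heats (and cools) faster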
subdomain_quantum_materials
0.61747
512
HuggingFaceFW/fineweb-edu
<urn:uuid:77831d47-ed83-47e1-aa8f-16bc431d8619>
1
0.6
2025-12-25T21:56:12.436447
exercise is useful for demonstrating the concept of conduction to lower level students. upper level students can use this lab to make the connections between conduction and heat capacity of various substances related to heat transfer that occurs between the earth ' s surfaces and the surface of living organisms. 1. is there any convection in the sand? explain. 2. why did the temperature recorded by the lower thermometer continue to rise dramatically after the heating ceased? 3. on the basis of heat capacity, explain why the temperature changes for the sand and water were different. 4. using what you have observed in the two experiments, predict whether a cold front will lower temperatures more at inland locations or on the coast. explain your answer. return to top | lab 3 : cooling from above in lakes and oceans, convection is generally the result of cooling from above rather than heating from below. this was demonstrated by adding ice to the water. using an experimental setup that allowed measurement of temperature at the top and the bottom of a beaker of water, ice was added to the top of the beaker. this experiment illustrated the concept that at 4 °c, water has higher density and sinks. convection was visualized by the movement of dye added to the bottom of the beaker which was displaced by the cooler more dense water. this lab demonstrates several physical principles associated with heat transfer, including density, kinetic molecular theory, and convection. on a larger scale, this laboratory exercise demonstrates the process by which seasonal turnovers occur in ponds and lakes. at lower levels, teachers may choose to discuss physical principles of heat transfer only, while at upper levels, teachers may choose to integrate this small - scale investigation with the study of climate processes and lake nutrient stratification and mixing. 1. why does ice float? 2. is there any evidence of convection? why does or does it not occur? 3. draw a diagram to explain how seasonal turnover occurs in a return to top to the passerine birds home
subdomain_quantum_materials
0.63098
395
HuggingFaceFW/fineweb-edu
<urn:uuid:77831d47-ed83-47e1-aa8f-16bc431d8619>
2
0.6
2025-12-25T21:56:12.438784
applied mathematics department secretary : tel : 01 69 33 46 01. fax : 01 69 33 46 46. scientific computing is the art of the engineer devoted to producing numerical simulations based on scientific analysis and carried out with computers. most problems that can be formalised with mathematical equations are too complicated to be solved with elementary methods or with methods of formal calculus. the objective of scientific computing is to propose approximate numerical solutions for problems that can be modelled with a mathematical equation. the development of scientific computing is related to the increase in computer power. it is an applied science in continuous evolution. the industries that use and develop scientific computing are first the main partners of state technical administrations in charge of the conception and development of complex systems : space and aeronautics, nuclear, automotive industry, petroleum industry, civil engineering. limited only a few years ago to the development and certification of complex systems, numerical simulation now allows significant reductions in development time over design cycles and the production of more sophisticated products. the option " scientific computing " is devoted to students needing training in scientific computing, either for the analysis of an industrial problem, or as an initiation to scientific research, whatever the future choice of career orientation may be. for those who wish to enter the master program " mathematical modeling " in applied mathematics of the ecole polytechnique ( co - organised with paris 6 university ), the training period can be an important first step. examples of subjects studied in recent years :
- adaptive and multi - scale methods
- assessment and design of optical fiber systems
- inverse problem in electromagnetism
requirements : some knowledge of numerical analysis and / or optimization. evaluation mechanism : written report and oral defense. last modification : monday 8 april 2013
subdomain_quantum_simulation
0.626594
343
HuggingFaceFW/fineweb-edu
<urn:uuid:9caf26e4-4e0c-4d9c-9d29-dbf1944fc54a>
0
0.6
2025-12-25T21:56:12.694679
process sheet and plate materials, accelerating the production and availability of low - cost magnesium a lightweight metal. commercial use of magnesium has been limited because of the high cost associated with its multistep production process. this technology is likely to reduce processing steps, thereby reducing the cost of finished magnesium components and allowing for the replacement of aluminum with magnesium in many commercial goods. the widespread use of magnesium instead of aluminum in cars would reduce vehicle weight and lead to improvements in transportation by improving fuel economy. low frequency rf plasma source ( lfrf - 501 ), co - developed with structured materials industries, inc. ( oak ridge national laboratory ) : lfrf - 501 is a low - cost plasma generator for research, development and production of nanometer scale materials at lower temperatures, faster rates and with enhanced properties. these materials are enabling new developments in many technologies, including microelectronics, renewable energy, sensors and leds. advanced manufacturing and geothermal : nanoshield coatings ( oak ridge national laboratory ) : nanoshield is a protective coating that can extend the life of costly cutting and manufacturing tools by more than 20 %, potentially saving millions of dollars over the course of a project. it is created by laser fusing a unique iron - based powder to any type of steel, which forms a strong metallurgical bond that provides wear resistance between two and 10 times greater than conventional coatings. nanoshield was designed to protect high - wear tools used for tunnel boring and construction, but its potential for navy applications and geothermal drilling tools also is being explored. desiccant - enhanced evaporative air - conditioning ( national renewable energy laboratory ) : developed with ail research and synapse product development llc : devap systems cool commercial buildings at a small fraction of the energy use of a traditional cooler, provides superior comfort in any climate, releases far less carbon dioxide, and could cut costly peak electricity demand by 80 %. the sandia cooler ( sandia national laboratories ) : also known as the " air bearing heat exchanger, " this technology will significantly reduce the energy needed to cool the processor chips in data centers and large - scale computing environments. the sandia cooler also offers benefits in other applications where thermal management and energy efficiency are important, particularly heating, ventilation and air - conditioning ( hvac ). hydrogen and fuel platinum monolayer electrocatalysts for fuel cell cathodes ( brookhaven national laboratory ) : platinum is the most efficient electrocatalyst for fuel cells, but platinum - based catalysts
subdomain_quantum_materials
0.638469
512
HuggingFaceFW/fineweb-edu
<urn:uuid:dbb089d4-a6d7-43e6-a0c9-771c925711e8>
1
0.6
2025-12-25T21:56:12.842235
air - conditioning ( hvac ). hydrogen and fuel platinum monolayer electrocatalysts for fuel cell cathodes ( brookhaven national laboratory ) : platinum is the most efficient electrocatalyst for fuel cells, but platinum - based catalysts are expensive, unstable, and have low durability. the new electrocatalysts have high activity, stability, and durability, while containing only about one - tenth the platinum of conventional catalysts used in fuel cells, significantly reducing overall costs. sj3 solar cell ( national renewable energy laboratory ) : co - developed with solar junction, the cell achieves a world - record conversion efficiency of 43. 5 % with potential to reach 50 %. like a three - blade safety razor that uses all its blades for a closer shave, the three - layered sj3 cell captures different light frequencies, ensuring the best conversion of photons to electrons. the 43. 5 % efficiency occurs under lens - focused light having 418 times the intensity of the sun. microsystems enabled photovoltaics ( sandia national laboratories ) : tiny, glitter - sized pv cells are created using microdesign and microfabrication techniques, released into a solution and " printed " onto a low - cost substrate. the technology has potential applications in buildings, houses, clothing, portable electronics, vehicles and other contoured structures. high - energy concentration - gradient cathode material for plug - in hybrids and all - electric vehicles ( argonne national laboratory ) : argonne and several partners have developed a novel high - energy and high - power cathode material for use in lithium ion ( li - ion ) batteries especially suited for plug - in hybrids and all - electric vehicles. it provides much higher energy and longer life than any other li - ion cathode material, and as such is also ideal for batteries in hybrid vehicles and a wide range of consumer electronics applications. graphene nanostructures for lithium batteries, co - developed with vorbeck materials corp. of jessup md. and princeton university ( pacific northwest national laboratory ) : small quantities of graphene — ultra - thin sheets of carbon atoms — can dramatically improve the performance and power of lithium - ion batteries. graphene nanostructures could lead to the development of batteries that last longer and recharge quickly, drastically reducing the time it takes to charge a smartphone to as little as ten minutes and charging an electric vehicle in just a few hours. the energy department ' s office of energy efficiency and
subdomain_quantum_materials
0.645143
512
HuggingFaceFW/fineweb-edu
<urn:uuid:dbb089d4-a6d7-43e6-a0c9-771c925711e8>
2
0.6
2025-12-25T21:56:12.843548
in once for each of the two charges ) is the charge of the electron. notice the strength of the force drops with the distance between the charges in a way identical to gravity. also, if we were talking about an electron and an anti - electron ( which has the opposite charge ), then there would be a minus sign indicating the force between opposite charges is attractive. we can compare the strength of the gravitational force to the electromagnetic force on two electrons by taking the ratio between the two forces. the distance - squared cancels out and we are left with : f ( gravity ) / f ( em ) = ( g · m · m ) / ( c · e · e ). i intentionally dropped the minus sign ; i will simply remember that the gravitational force between the electrons is attractive and the electromagnetic force between the two electrons is repulsive. anyway, when i plug in the values for g, m, c, and e, the ratio is 2. 4x10 ^ ( - 43 ). in words that is pronounced two - point - four times ten to the minus forty - three. that is a very small number. in other words, the gravitational force between two electrons is feeble compared to the electromagnetic force. the reason that you feel the force of gravity, even though it is so weak, is that every atom in the earth is attracting every one of your atoms and there are a lot of atoms in both you and the earth. the reason you aren ' t buffeted around by electromagnetic forces is that you have almost the same number of positive charges as negative ones, so you are ( essentially ) electrically neutral. the weak force is misnamed. it ' s thought to be just as strong as the em force but, unlike the em force, it ' s a short - ranged force. in fact, the range is only about 1 / 100 the size of an atomic nucleus. the weak force is outside the realm of our everyday experience. we study it at fermilab by using the accelerator to produce the particles which transmit the force. these are real particles called the w - boson and the z - boson. because they are very massive, we need a high - energy accelerator to produce them. the large mass of the w - boson and the z - boson is also the reason the force has a short range. incidentally, the particle which carries the em force is called the photon ( yes, light ). because photons are massless, the em force has a long range as i described above. the weak force and
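a quick numerical check of the quoted ratio. the constant values below are standard textbook figures, and c is taken to be the coulomb force constant from the electric force law above ; both are assumptions of this sketch rather than numbers given in the text :

    G   = 6.674e-11   # gravitational constant, N m^2 kg^-2
    k_c = 8.988e9     # coulomb constant ("c" in the text), N m^2 C^-2
    m_e = 9.109e-31   # electron mass, kg
    e   = 1.602e-19   # elementary charge, C

    ratio = (G * m_e * m_e) / (k_c * e * e)   # the distance - squared factors cancel
    print(ratio)                              # ~2.4e-43, matching the value quoted above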
subdomain_quantum_optics
0.627869
512
HuggingFaceFW/fineweb-edu
<urn:uuid:68df6dec-0c9d-44aa-8f93-eb17e7711579>
1
0.6
2025-12-25T21:56:13.049973
the force has a short range. incidentally, the particle which carries the em force is called the photon ( yes, light ). because photons are massless, the em force has a long range as i described above. the weak force and the em force have been found to be linked at high - energy or, equivalently, short range. they both can be described by one set of equations which we call the " electro - weak " theory. this was discovered in 1967 - 1971 by steven weinberg, sheldon glashow, and abdus salam. they got the nobel prize in physics for unifying those forces. finally i am ready to talk about the strong force. this is way out of the experience we get in everyday life ( not that it doesn ' t have everyday life consequences ), so i will be a little more long - winded in describing it. remember that a proton or neutron is composed of three quarks? these quarks have strong charge and are bound together by the strong force. unlike the case of the em force, where there is one electric charge and one anti - charge ( plus and minus charges ) there are three strong force charges and three anti - charges. we call the strong force charges " red ", " blue ", and " yellow " and the anti - charges are called " anti - red " and so forth. the particles which transmit the force are called gluons. gluons are massless, like the photon. but unlike the photon, which is electrically neutral, the gluons carry strong charge and a different strong anti - charge. a gluon could be " red - anti - blue ", for example, and there are eight kinds of gluons. we call the three charges " colors " even though they have nothing to do with how we see. because the gluon is massless, at first you might think the range of the strong force is infinite, like the em force. but if you study the behavior of the strong force, you find that the three quarks in a proton or neutron behave almost as if they were bouncing around freely in a relaxed, elastic spherical container. none of the quarks can escape the container because when the quark reaches the boundary of the proton or neutron, the force begins to act and gets stronger and stronger the further away the that quark gets from the others. that is very different from the other forces which get weaker at longer distances and it occurs because the gluons
subdomain_quantum_field_theory
0.629623
512
HuggingFaceFW/fineweb-edu
<urn:uuid:68df6dec-0c9d-44aa-8f93-eb17e7711579>
2
0.6
2025-12-25T21:56:13.051093
of the proton or neutron, the force begins to act and gets stronger and stronger the further away the that quark gets from the others. that is very different from the other forces which get weaker at longer distances and it occurs because the gluons have the color and anti - color charge. the strong force also acts between protons and neutrons in an atomic nucleus much in the same way that simple chemicals are held together by the electric force. a nucleus such as helium, which has two ( positively em - charged ) protons, is stable because the strong force overcomes the electromagnetic forces. the strong force binds the two protons with about 25 - 35 mev of energy. the electromagnetic forces try to push the protons apart. the net result is that approximately 1 million electron - volts of energy are needed to separate the two protons. in contrast, an electron is bound to a proton in a hydrogen atom by only a few electron - volts. by now you know enough to consider the size of the nucleus in comparison to the size of an atom to judge if this is truly a fair comparison! the strong force is, indeed, strong. we think that if we could study the electroweak and strong forces at high enough energy we would find out they were linked together somehow, like electricity and magnetism are to form em, and like em and the weak force are to form electro - weak. such a theory would be called a grand - unified theory. and we also think that it may be possibe to include gravity with the other three. such a theory would be called a super - grand - unified theory and there is a candidate for that called " superstrings ". so you asked a simple question : " how strong is the strong force? ". the answer is that it depends on the range. at short distances it is weak and at long distances it is strong. that effect is completely different from the other three forces and arises because the forces transmitters, called gluons, are massless and have strong - charge and different strong anti - charge. if you want to learn more about particle physics and the work we do at fermilab, the book " the god particle " by leon lederman and dick teresi gives a very good and readable explanation. | last modified 1 / 11 / 1999 firstname. lastname @ example. org |
subdomain_quantum_field_theory
0.612839
485
HuggingFaceFW/fineweb-edu
<urn:uuid:68df6dec-0c9d-44aa-8f93-eb17e7711579>
3
0.6
2025-12-25T21:56:13.052145
gaping hole in the standard model of particle physics, a conceptual framework for understanding the nuts - and - bolts of the cosmos. one idea is that the higgs was born when the new universe cooled after the big bang some 14 billion years ago. it is believed to act like a fork dipped in honey and held up in dusty air. most of the dust particles interact with the honey, acquiring some of its mass to varying degrees, but a few slip through and do not acquire any. with mass comes gravity — and its pulling power brings particles together. supersymmetry, meanwhile, is the notion that there are novel particles which are the opposite number of each of the known particle actors in the standard model. this may, in turn, explain the existence of dark matter — a hypothetical construct that can only be perceived indirectly via its gravitational pull, yet is thought to make up around 25 percent of the universe. at a cost of 6. 03 billion swiss francs ( 4. 9 billion euros, $ 6. 56 billion dollars ), the lhc was constructed in a 26. 6 - kilometre ( 16. 5 - mile ) circular tunnel originally occupied by its predecessor, the large electron positron ( lep ). that had run in cycles of about seven months followed by a five - month shutdown, but the lhc, opened in 2008, has been pushed well beyond. " we ' ve had full operations for three years, 2010, 2011 and 2012, " said bordry. " initially we thought we ' d have the long shutdown in 2012, but in 2011, with some good results and with the perspective of discovering the boson, we pushed the long shutdown back by a year. but we said that in 2013 we must do it. " unlike the lep, which was used to accelerate electrons or positrons, the lhc crashes together protons, which are part of the hadron family. " the game is about smashing the particles together to transform this energy into mass. with high energy, they are transformed into new particles and we observe these new particles and try to understand things, " bordry explained. " it ' s about recreating the first microsecond of the universe, the big bang. we are reproducing in a lab the conditions we had at the start of the big bang. " over the past three years, cern has slammed protons together more than six million billion times. five billion collisions yielded results deemed worthy of further research and data from only
subdomain_quantum_field_theory
0.63556
512
HuggingFaceFW/fineweb-edu
<urn:uuid:d348472a-a0ad-45b4-ac28-b6f820112b17>
1
0.6
2025-12-25T21:56:13.554524
relation of wholes to parts, infinity, and eternity. the second half deals with the three kinds of true causes within reality recognized by proclus : gods ( which he calls henads or “ unities, ” see below ), intellects, and souls. this elaborate metaphysical framework makes it possible for proclus to develop a scientific theology, i. e., a demonstration of the procession and properties of the different classes of gods. in what follows we will only discuss some characteristic features of proclus ' metaphysics ( see further steel 2011 ). on the whole, proclus ’ doctrine of first principles is a further development of plotinus ' innovative interpretation of platonic philosophy. with plotinus, proclus recognizes three fundamental levels of reality called ‘ hypostases ’ ( or self - subsistent entities ) : one, intellect, and soul. however, following a concern of his predecessor iamblichus for greater precision in the relationship and distinction between the one and intellect, proclus distinguishes between the intelligible being ( to noeton — what is the object of intellectual intuition ) and the intellective ( to noeron — what is intelligizing ), and introduces between both, as an intermediary level, the noeton - noeron ( what is being intelligized and intelligizing ). these three ontological levels thus correspond to the triad of being, life, and intellect, which already play an important role in plotinus ' and porphyry ' s speculations about the procession or ‘ emanation ’ of the intelligible world from the one, without, however, being hypostasized. since zeller ( influenced by hegel ) the application of the triadic structure to reality has been seen as the characteristic feature of proclus ' system, but see dodds 19632, pp. xxii and 220, on possible sources of the doctrine. although the distinction of aspects of reality as distinct hupostases and the multiplication of triads might suggest a loss of plotinus ’ intuition of the unity of reality, it is important to stress that each part of the triad of being, life and intellect, mirrors within itself their triadic relationship. this redoubled triadic structure must be understood as expressing an intrinsic and essential relation between successive levels of being. the intimate relation between being, life, and intellect is the origin of the basic structure uniting all causes to their effects, namely the relation of immanence,
subdomain_quantum_field_theory
0.602757
512
HuggingFaceFW/fineweb-edu
<urn:uuid:b4bac2db-2f3f-46ab-a7e5-e1af870b6e7a>
11
0.6
2025-12-25T21:56:13.667324
something. this distinction can also be rephrased in terms of concepts, implying a distinction between factual concepts that allow us to identify or recognise certain objects, and concepts that fulfil an explanatory role. on the whole, proclus ' reading and systematisation of plato ' s doctrine of learning as recollection makes platonic recollection not only concerned with higher learning, since already on the level of object recognition we employ concepts that originate from the innate logoi of the soul ( helmig 2011 ). proclus argues at length that the human soul has to contain innate knowledge. therefore, one should not consider it an empty writing tablet, as aristotle does ( aristotle, de anima iii 4 ). he is wrong in asserting that the soul contains all things potentially. according to proclus, the soul contains all things ( i. e., all logoi ) in actuality, though due to the ‘ shock of birth ’ it may seem as if the soul has fallen to potentiality. at in crat. § 61, proclus asserts that the soul does not resemble an empty writing tablet ( agraphon grammateion ) and does not possess all things in potentiality, but in act. in eucl. 16. 8 – 13 expresses the same idea : “ the soul is not a writing tablet void of logoi, but it is always written upon and always writing itself and being written on by the intellect. ” as with his philosophy of mathematics, proclus presents a detailed criticism of the view that universal concepts are derived from sensible objects ( by abstraction, induction, or collection ). in the fourth book of his commentary on plato ' s parmenides and in the two prologues of the commentary on euclid we find the most comprehensive criticism of abstractionism in antiquity ( see helmig 2010 and 2011 ). proclus devoted three entire books or ‘ monographs ’ ( monobiblia ) to problems of providence, fate, free choice, and evil. the first treatise ( ten problems concerning providence ) examines ten different problems on providence that were commonly discussed in the platonic school. for proclus providence ( pronoia ) is the beneficent activity of the first principle ( the ‘ source of goods ’ ) and the gods ( henads ), who have their existence before intellect ( pro - nou ). one of the problems discussed is the question of how divine foreknowledge and human free choice can be reconciled. for if god
subdomain_quantum_materials
0.616371
512
HuggingFaceFW/fineweb-edu
<urn:uuid:b4bac2db-2f3f-46ab-a7e5-e1af870b6e7a>
20
0.6
2025-12-25T21:56:13.677490
the demiurge. in that respect the physiology seems also to be a sort of theology, since also natural things have somehow a divine existence insofar as they are produced by the gods. ( in tim. i 217. 18 – 27 ) before offering an explanation of the generation of the world, timaeus sets out the fundamental principles that will govern his whole explanation of the physical world ( tim. 27d5 – 28b5 ). as proclus observes, it is the task of a scientist to formulate at the start of his project the principles proper to the science in question, and not just to assume some general axioms. the science of nature too is based on specific axioms and assumptions, which must be clarified before we can move to the demonstration. in order to make phusiologia a real science, the philosopher must deduce his explanation, as does the geometer, from a set of fundamental propositions or axioms. if i may say what i think, it seems to me that plato proceeds here in the manner of the geometers, assuming before the demonstrations the definitions and hypotheses through which he will make his demonstrations, thus laying the foundations of the whole science of nature. ( in tim. i. 217. 18 – 27 ) starting from these fundamental propositions, proclus argues, plato deduces the different types of causality that are required for a truly scientific understanding of nature ( efficient, exemplary, and final cause ; see steel 2003 and above 3. 2 ). time and eternity proclus discusses eternity and time in his commentary on the timaeus and in propositions 53 – 55 of the elements of theology ( see steel 2001 ). aristotle had defined time as a “ measure of movement according to the before and after. ” therefore, anything measured by time must have a form of existence or activity in which a past and a future state can be distinguished. in fact, an entity in time is never wholly and simultaneously what it is, but has an existence extended in a process of before and after. opposed to it stands the eternal, which exists as a simultaneous whole and admits of no composition or change. “ there is no part of it, ” writes proclus, “ which has already subsisted and another that will subsist later, but as yet is not. all that it is capable of being, already possesses it in entirety without losing it or without accumulating ” ( elem. theol. § 52
subdomain_quantum_field_theory
0.602413
512
HuggingFaceFW/fineweb-edu
<urn:uuid:b4bac2db-2f3f-46ab-a7e5-e1af870b6e7a>
24
0.6
2025-12-25T21:56:13.681533
june 22, 1976. north atlantic. at 21 : 13 gmt a pale orange glow behind a bank of towering cumulus to the west was observed. two minutes later a white disc was observed while the glow from behind the cloud persisted. high probability that this may have been caused by interferometry using 3 - dimensional artificial scalar wave? fourier expansions? as the interferers. marine observer. 47 ( 256 ), apr. 1977. p. 66 - 68. " unidentified phenomenon, off barbados, west indies. " august 22, 1969. west indies. luminous area bearing 310 degrees grew in size and rose in altitude, then turned into an arch or crescent. high probability that this may have been caused by interferometry using artificial scalar wave ( fourier expansions ). marine observer. 40 ( 229 ), july, 1970. p. 107 - 108. " optical phenomenon : caribbean sea ; western north atlantic. " mar. 20, 1969. caribbean sea and western north atlantic. at 23 : 15 gmt, a semicircle of bright, milky - white light became visible in the western sky and rapidly expanded upward and outward during the next 10 minutes, dimming as it expanded. high probability that this may be caused by interferometry using artificial scalar wave? fourier expansions? marine observer, 40 ( 227 ), jan. 1970. p. 17 ; p. 17 - 18.
subdomain_quantum_optics
0.607139
431
HuggingFaceFW/fineweb-edu
<urn:uuid:5ff0ec8a-893b-4fc8-b51a-53a907c3402b>
0
0.6
2025-12-25T21:56:13.714206
360 or just over 180 degrees in conventional machines, 220 degrees in ebt. ct is used in medicine as a diagnostic tool and as a guide for interventional procedures. sometimes contrast materials such as intravenous iodinated contrast are used. this is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. using contrast material can also help to obtain functional information about tissues. pixels in an image obtained by ct scanning are displayed in terms of relative radiodensity. the pixel itself is displayed according to the mean attenuation of the tissue ( s ) that it corresponds to on a scale from - 1024 to + 3071 on the hounsfield scale. a pixel is a two dimensional unit based on the matrix size and the field of view. when the ct slice thickness is also factored in, the unit is known as a voxel, which is a three dimensional unit. the phenomenon that one part of the detector cannot distinguish between different tissues is called the partial volume effect. that means that a large amount of cartilage and a thin layer of compact bone can cause the same attenuation in a voxel as hyperdense cartilage alone. water has an attenuation of 0 hounsfield units ( hu ) while air is - 1000 hu, cancellous bone is typically + 400 hu, and cranial bone can reach 2000 hu or more ( os temporale ) and can cause artefacts. the attenuation of metallic implants depends on the atomic number of the element used : titanium usually has a value of about + 1000 hu, while iron or steel can completely extinguish the x - ray beam and is therefore responsible for the well - known line artefacts in computed tomogrammes. windowing is the process of using the calculated hounsfield units to make an image. the various radiodensity amplitudes are mapped to 256 shades of gray. these shades of gray can be distributed over a wide range of hu values to get an overview of structures that attenuate the beam to widely varying degrees. alternatively, these shades of gray can be distributed over a narrow range of hu values ( called a narrow window ) centered over the average hu value of a particular structure to be evaluated. in this way, subtle variations in the internal makeup of the structure can be discerned. this is a commonly used image processing technique known as contrast compression. for example, to evaluate the abdomen in order to find subtle masses in the liver, one might use liver windows. choosing 70 hu as an
subdomain_quantum_materials
0.601769
512
HuggingFaceFW/fineweb-edu
<urn:uuid:4ea77a6a-d6a0-46e9-9fe4-c392f9e613ba>
5
0.6
2025-12-25T21:56:13.775854
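The windowing procedure described in the chunk above maps a chosen range of Hounsfield units onto the available grey levels. The following is a minimal sketch, not taken from the source: it assumes the slice is already available as a NumPy array of HU values, and the window centre of 70 HU (echoing the liver-window value the chunk breaks off on) and width of 150 HU are illustrative assumptions, not prescribed by the text.

```python
# Hedged sketch (not from the source): window/level display of CT data.
import numpy as np

def apply_window(hu, center=70.0, width=150.0):
    """Map Hounsfield units to 8-bit grey levels using a window centre/width."""
    lo = center - width / 2.0          # everything below maps to black
    hi = center + width / 2.0          # everything above maps to white
    scaled = (hu - lo) / (hi - lo)     # linear ramp inside the window
    return np.uint8(np.clip(scaled, 0.0, 1.0) * 255)

# Example: voxels roughly spanning air, water, soft tissue and cancellous bone.
print(apply_window(np.array([-1000.0, 0.0, 60.0, 400.0])))
```

A wide window spreads the 256 grey levels over a large HU range for an overview; a narrow window, as above, spends them all on a small range around the structure of interest.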
an algorithm is a finite set of instructions which, given a set of input data, will produce a recognizable final result ; it stands in contrast to a heuristic. the concept of an algorithm is often illustrated by the example of a recipe, although many algorithms are far more complex ; algorithms often have steps that repeat ( iterate ) or require decisions ( such as logic or comparisons ) until the task is completed. some algorithms can accomplish the same task with a different set of instructions in more or less time, space, or effort than others. for example, given two different recipes for making potato salad, one may have " peel the potato " before " boil the potato " while the other presents the steps in the reverse order, yet they both call for these steps to be repeated for all potatoes and end when the potato salad is ready to be eaten. correctly performing an algorithm will not solve a problem if the algorithm is flawed or not appropriate to the problem. for example, performing the potato salad algorithm will fail if there are no potatoes present, even if all the motions of preparing the salad are performed as if the potatoes were there. in some countries, such as the usa, some algorithms can effectively be patented if an embodiment is possible ( for example, a multiplication algorithm embodied in the arithmetic unit of a microprocessor ).
formalized algorithms
algorithms are essential to the way computers process information, because a computer program is essentially an algorithm that tells the computer what specific steps to perform ( in what specific order ) in order to carry out a specified task, such as calculating employees ’ paychecks or printing students ’ report cards. thus, an algorithm can be considered to be any sequence of operations which can be performed by a turing -
subdomain_quantum_simulation
0.616292
512
HuggingFaceFW/fineweb-edu
<urn:uuid:db5fd47f-6135-409f-b5f8-161d228a850a>
0
0.6
2025-12-25T21:56:13.990709
in what specific order ) in order to carry out a specified task, such as calculating employees ’ paychecks or printing students ’ report cards. thus, an algorithm can be considered to be any sequence of operations which can be performed by a turing - complete system. typically, when an algorithm is associated with processing information, data is read from an input source or device, written to an output sink or device, and / or stored for further use. stored data is regarded as part of the internal state of the entity performing the algorithm. for any such computational process, the algorithm must be rigorously defined : specified in the way it applies in all possible circumstances that could arise. that is, any conditional steps must be systematically dealt with, case - by - case ; the criteria for each case must be clear ( and computable ). because an algorithm is a precise list of precise steps, the order of computation will almost always be critical to the functioning of the algorithm. instructions are usually assumed to be listed explicitly, and are described as starting ' from the top ' and going ' down to the bottom ', an idea that is described more formally by flow of control. so far, this discussion of the formalisation of an algorithm has assumed the premises of imperative programming. this is the most common conception, and it attempts to describe a task in discrete, ' mechanical ' means. unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. it derives from the intuition of ' memory ' as a scratchpad. there is an example below of such an assignment.
implementing algorithms
algorithms are sometimes implemented as computer programs but are more often implemented by other means, such as in a biological neural network ( for example, the human brain implementing arithmetic or an insect relocating food ), or in electric circuits or in a mechanical device. the analysis and study of algorithms is one discipline of computer science, and is often practiced abstractly ( without the use of a specific programming language or other implementation ). in this sense, it resembles other mathematical disciplines in that the analysis focuses on the underlying principles of the algorithm, and not on any particular implementation. one way to embody ( or sometimes codify ) an algorithm is the writing of pseudocode. some writers restrict the definition of algorithm to procedures that eventually finish. others include procedures that could run forever without stopping, arguing that some entity may be required to carry out such permanent tasks. in the latter case, success can no longer be
subdomain_quantum_computing
0.618967
512
HuggingFaceFW/fineweb-edu
<urn:uuid:db5fd47f-6135-409f-b5f8-161d228a850a>
1
0.6
2025-12-25T21:56:13.991859
pseudocode. some writers restrict the definition of algorithm to procedures that eventually finish. others include procedures that could run forever without stopping, arguing that some entity may be required to carry out such permanent tasks. in the latter case, success can no longer be defined in terms of halting with a meaningful output. instead, terms of success that allow for unbounded output sequences must be defined. for example, an algorithm that verifies if there are more zeros than ones in an infinite random binary sequence must run forever to be effective. if it is implemented correctly, however, the algorithm ' s output will be useful : for as long as it examines the sequence, the algorithm will give a positive response while the number of examined zeros outnumbers the ones, and a negative response otherwise. success for this algorithm could then be defined as eventually outputting only positive responses if there are actually more zeros than ones in the sequence, and in any other case outputting any mixture of positive and negative responses. here is a simple example of an algorithm. imagine you have a list of random numbers that is not sorted, and the goal is to find the largest number in that list. the solution necessarily requires looking at every value in the list, but at each value only once. expressed as a computation, a simple algorithm for this is the following :
- pretend the first number in the list is the largest number.
- look at the next number, and compare it with this largest number.
- only if this next number is larger, then keep that as the new largest number.
- repeat steps 2 and 3 until you have gone through the whole list.
given : a list " list "
largest = list [ 1 ]
counter = 2
while counter < = length ( list ) :
    if list [ counter ] > largest :
        largest = list [ counter ]
    counter = counter + 1
print largest
notes on notation :
- = as used here indicates assignment. that is, the value on the right - hand side of the expression is assigned to the container ( or variable ) on the left - hand side of the expression.
- list [ counter ] as used here indicates the
subdomain_quantum_computing
0.64043
512
HuggingFaceFW/fineweb-edu
<urn:uuid:db5fd47f-6135-409f-b5f8-161d228a850a>
2
0.6
2025-12-25T21:56:13.993007
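A runnable rendering of the find-the-largest-number pseudocode in the chunk above, offered as a sketch rather than the article's own code: the function name, the explicit error raised for the empty-list precondition, and the sample list are my additions.

```python
def largest_number(numbers):
    # precondition from the text: the list must contain at least one number
    if not numbers:
        raise ValueError("the list is empty")
    largest = numbers[0]           # pretend the first number is the largest
    for candidate in numbers[1:]:  # look at each remaining number exactly once
        if candidate > largest:    # keep it only if it is larger
            largest = candidate
    return largest

print(largest_number([7, 1, 4, 1, 5, 9, 2, 6]))  # prints 9
```

As the following chunk notes, this runs in O(n) time for a list of length n, since each element is examined once.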
= as used here indicates assignment. that is, the value on the right - hand side of the expression is assigned to the container ( or variable ) on the left - hand side of the expression. - list [ counter ] as used here indicates the counterth element of the list. for example : if the value of counter is 5, then list [ counter ] refers to the 5th element of the list. - < = as used here indicates ' less than or equal to '. note also that the algorithm assumes that the list contains at least one number. it will fail when presented with an empty list. most algorithms have similar assumptions on their inputs, called pre - conditions. as it happens, most people who implement algorithms want to know how much of a particular resource ( such as time or storage ) a given algorithm requires. methods have been developed for the analysis of algorithms to obtain such quantitative answers ; for example, the algorithm above has a time requirement of o ( n ), using the big o notation with n representing the length of the list. the word algorithm comes ultimately from the name of the 9th - century mathematician abu abdullah muhammad bin musa al - khwarizmi. the word algorism originally referred only to the rules of performing arithmetic using arabic numerals but evolved into algorithm by the 18th century. the word has now evolved to include all definite procedures for solving problems or performing tasks. the first case of an algorithm written for a computer was ada byron ' s notes on the analytical engine written in 1842, for which she is considered by many to be the world ' s first programmer. however, since charles babbage never completed his analytical engine, the algorithm was never implemented on it. the lack of mathematical rigor in the " well - defined procedure " definition of algorithms posed some difficulties for mathematicians and logicians of the 19th and early 20th centuries. this problem was largely solved with the description of the turing machine, an abstract model of a computer formulated by alan turing, and the demonstration that every method yet found for describing " well - defined procedures " advanced by other mathematicians could be emulated on a turing machine ( a statement known as the church - turing thesis ). nowadays, a formal criterion for an algorithm is that it is a procedure that can be implemented on a completely - specified turing machine or one of the equivalent formalisms. turing ' s initial interest was in the halting problem : deciding when an algorithm describes a terminating procedure. in practical terms computational complexity theory matters more : it includes the puzzling problem of the algorithms
subdomain_quantum_computing
0.625214
512
HuggingFaceFW/fineweb-edu
<urn:uuid:db5fd47f-6135-409f-b5f8-161d228a850a>
3
0.6
2025-12-25T21:56:13.994088
specified turing machine or one of the equivalent formalisms. turing ' s initial interest was in the halting problem : deciding when an algorithm describes a terminating procedure. in practical terms computational complexity theory matters more : it includes the puzzling problem of the algorithms called np - complete, which are generally presumed to take more than polynomial time.
classes of algorithms
there are several ways to classify algorithms, and the merits of each classification have been the subject of ongoing debate. one way of classifying algorithms is by their design methodology or paradigm. there are a certain number of paradigms, each different from the other. furthermore, each of these categories will include many different types of algorithm. some commonly found paradigms include :
- divide and conquer. a divide - and - conquer algorithm reduces an instance of a problem to one or more smaller instances of the same problem ( usually recursively ), until the instances are small enough to be directly expressible in the programming language employed ( what is ' direct ' is often discretionary ) ; see the sketch below.
- dynamic programming. when a problem shows optimal substructure, i. e. when the optimal solution to a problem consists of optimal solutions to subproblems ( for instance, the shortest path between two vertices on a weighted graph consists of the shortest paths between the vertices in between ), you solve such a problem bottom - up by solving the simplest problems first and then proceeding to increasingly difficult problems until you have solved the original problem. this is called a dynamic programming algorithm.
- the greedy method. a greedy algorithm is similar to a dynamic programming algorithm, but the difference is that at each stage you don ' t have to have the solutions to the subproblems ; you can make a " greedy " choice of what looks best for the moment.
- linear programming. when you solve a problem using linear programming you put the problem into a number of linear inequalities and then try to maximize ( or minimize ) the inputs. many problems ( such as the maximum flow for directed graphs ) can be stated in a linear programming way, and then be solved by a ' generic ' algorithm such as the simplex algorithm.
- search and enumeration. many problems ( such as playing chess ) can be modelled as problems on graphs. a graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. this category also includes the search algorithms and backtracking.
- the probabilistic and heuri
subdomain_quantum_simulation
0.650486
512
HuggingFaceFW/fineweb-edu
<urn:uuid:db5fd47f-6135-409f-b5f8-161d228a850a>
4
0.6
2025-12-25T21:56:13.995057
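To make the divide-and-conquer entry in the paradigm list above concrete (the sketch referred to there), here is a hedged illustration that applies the paradigm to the same find-the-maximum task used earlier in the document. It is an expository sketch under the assumption of a non-empty list, not the canonical way to find a maximum.

```python
def max_divide_and_conquer(numbers):
    # assumes a non-empty list, matching the earlier example's precondition
    if len(numbers) == 1:
        return numbers[0]                       # small enough to answer directly
    mid = len(numbers) // 2                     # reduce to two smaller instances
    left = max_divide_and_conquer(numbers[:mid])
    right = max_divide_and_conquer(numbers[mid:])
    return left if left > right else right      # combine the two sub-answers

print(max_divide_and_conquer([3, 41, 5, 26, 9, 53, 8]))  # prints 53
```

The recursion bottoms out once an instance is "small enough to be directly expressible", exactly as the paradigm description puts it.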
playing chess ) can be modelled as problems on graphs. a graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. this category also includes the search algorithms and backtracking. - the probabilistic and heuristic paradigm. algorithms belonging to this class fit the definition of an algorithm more loosely. probabilistic algorithms are those that make some choices randomly ( or pseudo - randomly ) ; for some problems, it can in fact be proved that the fastest solutions must involve some randomness. genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of ' solutions '. thus, they emulate reproduction and " survival of the fittest ". in genetic programming, this approach is extended to algorithms, by regarding the algorithm itself as a ' solution ' to a problem. also there are heuristic algorithms, whose general purpose is not to find a final solution, but an approximate solution where the time or resources to find a perfect solution are not practical. an example of this would be simulated annealing algorithms, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount ( see the sketch below ). the name ' simulated annealing ' alludes to the metallurgic term meaning the heating and cooling of metal to achieve freedom from defects. the purpose of the random variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that the random element will be decreased as the algorithm settles down to a solution. another way to classify algorithms is by implementation. a recursive algorithm is one that invokes ( makes reference to ) itself repeatedly until a certain condition matches, which is a method common to functional programming. algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. those computers are sometimes called serial computers. an algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms, which take advantage of computer architectures where several processors can work on a problem at the same time. the various heuristic algorithms would probably also fall into this category, as their names ( e. g. a genetic algorithm ) describe their implementation. a list of algorithms discussed in wikipedia is available.
see also
- bulletproof algorithms
- numerical analysis
- cryptographic algorithms
- sort algorithms
- search algorithms
- merge algorithms
- string algorithms
- list of algorithms
- timeline of algorithms
- data structures
- genetic algorithms
- randomised
subdomain_quantum_simulation
0.673528
512
HuggingFaceFW/fineweb-edu
<urn:uuid:db5fd47f-6135-409f-b5f8-161d228a850a>
5
0.6
2025-12-25T21:56:13.996068
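The simulated-annealing description in the chunk above (the sketch it points to) can be illustrated with a small example: perturb a candidate solution by a random amount, and accept worse candidates with a probability that shrinks as a "temperature" parameter cools. The objective function, step size, and cooling schedule below are illustrative assumptions of mine, not taken from the text.

```python
import math
import random

def anneal(objective, x0, steps=10000, start_temp=1.0):
    x = best = x0
    for i in range(steps):
        temp = start_temp * (1.0 - i / steps) + 1e-9  # simple linear cooling schedule
        candidate = x + random.uniform(-0.5, 0.5)     # vary the solution by a random amount
        delta = objective(candidate) - objective(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate                             # sometimes accept a worse move early on
            if objective(x) < objective(best):
                best = x                              # remember the best solution seen
    return best

# toy objective with several local minima; its global minimum lies near x = 2.2
print(anneal(lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x), x0=-3.0))
```

Accepting occasional uphill moves while the temperature is high is what lets the method escape local minima, which is exactly the "close to globally optimal rather than simply locally optimal" behaviour the chunk describes.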
knot theory is a branch of topology that was inspired by observations, as the name suggests, of knots. but progress in the field no longer depends on experiments with twine. knot theory concerns itself with abstract properties of theoretical knots — the spatial arrangements that in principle could be assumed by a loop of string. in mathematical jargon, knots are embeddings of the closed circle in three - dimensional space. an ordinary knot is converted to a mathematical knot by splicing its ends together. the topological theory of knots asks whether two such knots can be rearranged to match, without opening the splice. the question of untying an ordinary knot has to do with unwedging tangles of rope pulled tight. a knot can be untied in the topological theory of knots if and only if it is equivalent to the unknot, a circle in 3 - space. knot theory originated in an idea of lord kelvin ' s ( 1867 ), that atoms were knots of swirling vortices in the æther ( also known as ' ether ' ). he believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light at only the discrete wavelengths that they do ( i. e. explain what we now understand to depend on quantum energy levels ). scottish physicist peter tait spent many years listing unique knots under the belief that he was creating a table of elements. when ether was discredited through the michelson - morley experiment, vortex theory became completely obsolete, and knot theory fell out of scientific interest. only in the past 100 years, with the rise of topology, have knots become a popular field of study. today, knot theory is inextricably linked to particle physics, dna replication and recombination, and to areas of statistical mechanics.
an introduction to knot theory
creating a knot is easy. begin with a one - dimensional line segment, wrap it around itself arbitrarily, and then fuse its two free ends together to form a closed loop. one of the biggest unresolved problems in knot theory is to describe the different ways in which this may be done, or conversely to decide whether two such embeddings are different or the same. [ figure : the unknot, and a knot equivalent to it ] before we can do this, we must decide what it means for embeddings to be " the same ". we consider two embeddings of a loop to be the same if we can get from one to the other by
subdomain_quantum_field_theory
0.667614
512
HuggingFaceFW/fineweb-edu
<urn:uuid:5224c8e3-2023-4490-bcb0-4fefe395a832>
0
0.6
2025-12-25T21:56:14.295838
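The knot-theory chunk above breaks off just before saying when two embeddings count as " the same ". As a hedged supplement (standard textbook material, not recovered from the truncated sentence), the usual definition can be stated as follows: a knot is an embedding of the circle in 3-space, and two knots are equivalent when an ambient isotopy of 3-space carries one onto the other.

```latex
% a knot is an embedding of the circle in 3-space
\[ K : S^1 \hookrightarrow \mathbb{R}^3 \]
% two knots $K_0$ and $K_1$ are equivalent if there is an ambient isotopy
% $H : \mathbb{R}^3 \times [0,1] \to \mathbb{R}^3$, i.e. a continuous family of
% homeomorphisms $H_t = H(\cdot, t)$ with $H_0 = \mathrm{id}$, such that
\[ H_1\bigl(K_0(S^1)\bigr) = K_1(S^1). \]
```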