{"text": "called a biconditional. \" iff \" joins two sentences to form a new sentence. it should not be confused with logical equivalence which is a description of a relation between two sentences. the biconditional \" a iff the sentences a, describing a relation between the states of affairs a describe. by contrast \" a is logically equivalent to b \" mentions both sentences : it describes a relation between those two sentences, and not between whatever matters they describe. the distinction is a very confusing one, and has led many a philosopher astray. certainly it is the case that when a is logically equivalent to b, \" a iff b \" is true. but the converse does not hold. reconsidering the sentence : - madison will eat pudding if and only if it is custard. there is clearly no logical equivalence between the two halves of this particular biconditional. for more on the distinction, see w. v. quine ' s mathematical logic, section 5. one way of looking at \" a if and only if b \" is that it means \" a if b \" ( b implies a ) and \" a only when b \" ( not b implies not a ). \" not b implies not a \" means a implies b, so then we get two way implication. in philosophy and logic, \" iff \" is used to indicate definitions, since definitions are supposed to be universally quantified biconditionals. in mathematics and elsewhere, however, the word \" if \" is normally used in definitions, rather than \" iff \". this is due to the observation that \" if \" in the english language has a definitional meaning, separate from its meaning as a propositional conjunction. this separate meaning can be explained by noting that a definition ( for instance : a group is \" abelian \" if it satisfies the commutative law ; or : a grape is a \" raisin \" if it is well dried ) is not an equivalence to be proved, but a rule for interpreting the term defined. ( some authors, nevertheless, explicitly indicate that the \" if \" of a definition means \" iff \"! ) here are some examples of true statements that use \" iff \" - true biconditionals ( the first is an example of a definition, so it should normally have been written with \" if \" ) : - a person is a bachelor iff that person is a marriageable man who has never married. - \" snow is white \" ( in english ) is true if", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6061379374234606, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:37.131585"} {"text": "the basic forces in nature contemporary physics education project the interactions in the universe are governed by four forces ( strong, weak, electromagnetic and gravitational ). physicists are trying to find one theory that would describe all the forces in nature as a single law. so far they have succeeded in producing a single theory that describes the weak and electromagnetic forces ( called electroweak force ). the strong and gravitational forces are not yet described by this theory. table courtesy of university of guelph, guelph, ontario ( cananda ) shop windows to the universe science store! cool it! is the new card game from the union of concerned scientists that teaches kids about the choices we have when it comes to climate change \u2014 and how policy and technology decisions made today will matter. cool it! is available in our online store you might also be interested in : the neutrino is an extremely light particle. it has no electric charge. 
The Basic Forces in Nature (Contemporary Physics Education Project). The interactions in the universe are governed by four forces (strong, weak, electromagnetic and gravitational). Physicists are trying to find one theory that would describe all the forces in nature as a single law. So far they have succeeded in producing a single theory that describes the weak and electromagnetic forces (called the electroweak force). The strong and gravitational forces are not yet described by this theory. Table courtesy of the University of Guelph, Guelph, Ontario (Canada).

…curved space.

- Flat space does not mean the metric tensor is diagonal with the entries (-1, 1, 1, 1); that is just the case in a very specific coordinate system. Flat space means the curvature tensor identically vanishes (which is independent of the coordinate system).
- Of course one can describe accelerated observers in special relativity.

That leads me now directly to the equivalence principle, the cornerstone of general relativity. Googling 'equivalence principle' is somehow depressing. Wikipedia isn't wrong, but too specific (the equivalence principle doesn't have anything to do with standing on the surface of the Earth). The second hit is a NASA website which I find mostly confusing (saying all objects react equally to gravity doesn't tell you anything about the relation of gravitational to inertial mass). The third and fourth hits get it right; the fifth is wrong (the locality is a crucial ingredient). So here it is:

- The equivalence principle: locally, the effects of gravitation (motion in a curved space) are the same as those of an accelerated observer in flat space.

That is what Einstein explains in his thought experiment with the elevator. If you are standing in the elevator (that is just a local patch, theoretically infinitesimally small) you can't tell whether you are pulled down because there is a planet underneath your feet, or because there is a flying pig pulling up the elevator. This website has two very nice mini-movies depicting the situation. If you could make your elevator larger, you could however eventually distinguish between flat and curved space, because you could measure geodesic deviation, i.e. the curvature.
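Both remarks above, flatness and geodesic deviation, refer to the Riemann curvature tensor; in standard notation (supplied here, not spelled out in the original post), the coordinate-independent statement of flatness is

    R^\rho{}_{\sigma\mu\nu} = 0 \quad \text{everywhere,}

whereas a metric written in curvilinear coordinates can look very different from diag(-1, 1, 1, 1) and still describe flat space.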
If you think of particles, the equivalence principle means that the inertial mass is equal to the gravitational mass, which has been measured with impressive precision. But the above formulation makes the mathematical consequences much clearer. To formulate your theory, you will have to introduce a tangent bundle on your curved manifold where you can deal with the 'local' quantities, and you will have to figure out how the sections of this bundle (tensors) transform under a change of coordinates. If you want your theory to be independent of that choice of coordinates, it will have to be formulated in tensor equations. The next thing to ask is then how to transport tensors from one point to another, which leads you to a 'covariant' derivative. The equivalence principle is thus a very central ingredient of general relativity and, despite its simplicity, the base of a large mathematical apparatus; it's the kind of insight every theoretical physicist dreams of. It gives you a notion of a 'straightest line' in curved space (a geodesic) on which a test particle moves. This curve, most notably, is independent of the mass of that particle: heavy and light things fall alike even in general relativity (well, we already knew this to be the case in the Newtonian limit). For a very nice demonstration see the video on the NASA website.
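The 'straightest line' just described can be stated as an equation; in standard notation (with \Gamma^\mu{}_{\alpha\beta} the Christoffel symbols of the metric and \tau the proper time; the formula is supplied here, not quoted from the post), a geodesic x^\mu(\tau) satisfies

    \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu{}_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0

Note that the mass of the test particle never appears, which is the formal counterpart of 'heavy and light things fall alike'.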
89\u00b0c ; boiling point 688\u00b0c ; specific gravity ( solid ) 1. 532 ; valence 1, 2, 3, 4. see periodic table. rb / r - b - l / abbreviation : \" realtime blackhole list \". a service that allows people to blacklist sites for emitting spam, and makes the blacklist available in real time to electronic - mail transport programs that know how to use rbl so they can filter out mail from those sites. drastic ( and controversial ) but effective. there is an rbl home page ( http : / / maps. vix. com / rbl / usage. html ). chemical element of group 1 ( also called group ia ) in the periodic table, the alkali metal group. rubidium is the second most reactive metal and is very soft, with a silvery - white lustre. a brief treatment of rubidium follows. for full treatment, see alkali metal. learn more about rb with a free trial on britannica. com.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6756665189587943, "token_count": 338, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:38.188824"} {"text": "recursion is a programming paradigm as well as a problem solving strategy thought to be very challenging to grasp for university students. this article outlines a pilot study, which expands the age range of students exposed to the concept of recursion in computer science through instruction in a series of interesting and engaging activities. in this study, a small number of students ( n = 9 ) aged 11 to 13 years, were presented with a new and unique recursion curriculum involving hands - on experiences over a seven - week period at the university of victoria, canada. the curriculum was comprised of a series of progressively challenging recursion activities \u2014 roughly based upon the ideas of \u2018 computer science unplugged \u2019 ( bell, witten, & fellows, 2009 ) \u2014 and included programming applications with microworlds ex, a programming language based on logo. through this engagement, an increased number of students recognized and understood the concepts covered. we hypothesize that through experiences for youth with activities such as those outlined here, the number of students who understand fundamental computer science applications and who might potentially pursue computer science in post - secondary education will increase. we hypothesis further that through an earlier encounter of \u201c challenging \u201d concepts the learning and understanding of those will become easier at the university level. in this paper, the curriculum, classroom experiences, preliminary, largely descriptive and qualitative results and next steps in the research are discussed. gunion, katherine ; milford, todd ; and stege, ulrike \" the paradigm recursion : is it more accessible when introduced in middle school?, \" the journal of problem solving : 2, article 8.", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.6439335061349979, "token_count": 330, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:38.202407"} {"text": "1. 0 x 10 - 3 mole each of h2 and i2 had been used, together with 2. 0 x 10 - 3 mole of hi, would more hi have been produced spontaneously? | you can verify that the reaction quotient is q = 4. 0. because this is less than keq, the forward reaction is still spontaneous. | if the conditions of example 7 are changed so that the hi concentration is increased to 2. 
…1.0 x 10^-3 mole each of H2 and I2 had been used, together with 2.0 x 10^-3 mole of HI, would more HI have been produced spontaneously? You can verify that the reaction quotient is Q = 4.0. Because this is less than Keq, the forward reaction is still spontaneous.

If the conditions of Example 7 are changed so that the HI concentration is increased to 2.0 x 10^-2 mole liter^-1, what happens to the reaction? The reaction quotient now is Q = 400. This is greater than Keq: there are now too many product molecules and too few reactant molecules for equilibrium to exist. Thus the reverse reaction occurs more rapidly than the forward reaction. Equilibrium is reached only by converting some of the HI to H2 and I2, so the reverse reaction is spontaneous.

If the conditions of Example 7 are changed so that the HI concentration is 7.1 x 10^-3 mole liter^-1, in which direction is the reaction spontaneous? Under these conditions, since Q equals Keq within the limits of accuracy of the data, the system as described is at equilibrium, and neither the forward nor the backward reaction is spontaneous. (Both reactions are still taking place at the molecular level, of course, but they are balanced so their net effects cancel.)

The second use for equilibrium constants is to calculate the concentrations of reactants and products that will be present at equilibrium. If a 1-liter flask contains 1.0 x 10^-3 mole each of H2 and I2 at 448°C, what amount of HI is present when the gas mixture is at equilibrium? The Keq expression is treated as an ordinary algebraic equation and solved for the HI concentration:

    [HI] = sqrt(Keq [H2][I2]) = sqrt(50.51 x (1.0 x 10^-3)^2) = 7.1 x 10^-3 mole liter^-1

You can verify that in Example 7 the HI concentration was less than this equilibrium value; in Example 8 it was more; and in Example 9 it was just this value.

One-tenth of a mole, 0.10 mole, of hydrogen iodide is placed in an otherwise empty 5.0-liter flask at 448°C. When the contents have come to equilibrium, how much hydrogen and iodine will be in the flask? From the stoichiometry of the reaction, the concentrations of H2 and I2 must be the same. For every mole of H2 and I2 formed, 2 moles of HI must decompose. Let y equal the number of moles of H2 (and of I2) present at equilibrium, so that 2y moles of HI have decomposed.
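These quotient checks are easy to reproduce; a small sketch (Python; Keq = 50.51 is the value quoted later in this chapter for H2 + I2 <=> 2 HI at 448°C, and the concentrations are those of Examples 7-9):

    # Compare the reaction quotient Q = [HI]^2 / ([H2][I2]) against Keq.
    KEQ = 50.51  # taken from the chapter's value for 448 C

    def reaction_quotient(hi, h2, i2):
        return hi ** 2 / (h2 * i2)  # concentrations in mole/liter

    for hi in (2.0e-3, 2.0e-2, 7.1e-3):
        q = reaction_quotient(hi, 1.0e-3, 1.0e-3)
        if abs(q - KEQ) / KEQ < 0.01:          # equal within the data's accuracy
            verdict = "at equilibrium"
        elif q < KEQ:
            verdict = "forward reaction spontaneous"
        else:
            verdict = "reverse reaction spontaneous"
        print(f"[HI] = {hi:.1e} M -> Q = {q:.1f} ({verdict})")

Running it prints Q = 4.0, 400.0 and 50.4, reproducing the three verdicts above.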
…dissociation to hydrogen and iodine is favored much more. The hydrogen iodide-producing reaction is exothermic, or heat-emitting. (If you check this figure against Appendix 3, remember that this reaction involves gaseous iodine, not solid.) If the external temperature of this reaction is lowered, the equilibrium is shifted in favor of the heat-emitting, or forward, reaction; conversely, if the temperature is increased, the reverse reaction, producing H2 and I2, is favored. The equilibrium shifts so as to counteract to some extent the effect of adding heat externally (raising the temperature) or removing it (lowering the temperature).

The temperature dependence of the equilibrium point is one example of a more general principle, known as Le Chatelier's principle: if an external stress is applied to a system at chemical equilibrium, then the equilibrium point will change in such a way as to counteract the effects of that stress. If the forward half of an equilibrium reaction is exothermic, then Keq will decrease as the temperature increases; if it is endothermic, Keq will increase. Only for a heat-absorbing reaction can the equilibrium yield of products be improved by increasing the temperature. A good way to remember this is to write the reaction explicitly with a heat term:

    H2 + I2 <=> 2 HI + heat

Then it is clear that adding heat, just like adding HI, shifts the reaction to the left. (See Figure 4-3.)

Le Chatelier's principle is true for other kinds of stress, such as pressure changes. The equilibrium constant, Keq, is not altered by a pressure change at constant temperature. However, the relative amounts of reactants and products will change in a way that can be predicted from Le Chatelier's principle. The hydrogen-iodine reaction involves an equal number (2) of moles of reactants and product. Therefore, if we double the pressure at constant temperature, the volume of the mixture of gases will be halved. All concentrations in moles liter^-1 will be doubled, but their ratio will be the same. In Example 12, doubling the concentrations of the reactants and product does not change the equilibrium constant:

    Keq = [HI]^2 / ([H2][I2]) = 50.51

Thus the hydrogen-iodine equilibrium is not sensitive to pressure changes. Notice that in this case Keq does not have units, since the concentration units in the numerator and denominator cancel. In contrast, the dissociation of ammonia is affected by changes in pressure, because the number of moles of gas changes as the reaction proceeds.
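The temperature rule just stated is the qualitative content of the van't Hoff relation, which this excerpt does not quote; in standard notation, with ΔH° the standard reaction enthalpy:

    \frac{d \ln K_{eq}}{dT} = \frac{\Delta H^\circ}{R T^2}

Keq therefore increases with temperature exactly when ΔH° > 0, i.e. for a heat-absorbing reaction, matching the statement above.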
…a perpetual-motion machine that would deliver power without an energy source. From common sense and experience we know this to be impossible. This "common sense" is stated scientifically as the first law of thermodynamics, which will be discussed in Chapter 15. A mathematician would call this a proof by contradiction: if we assume that a catalyst can alter Keq, then we must assume the existence of a perpetual-motion machine. However, a perpetual-motion machine cannot exist; therefore our initial assumption was wrong, and we must conclude that a catalyst cannot alter Keq. In summary, Keq is a function of temperature, but it is not a function of reactant or product concentrations, total pressure, or the presence or absence of catalysts. The relative amounts of substances at equilibrium can be changed by applying an external stress to the equilibrium mixture of reactants and products, and the change is one that will relieve this stress. This last statement, Le Chatelier's principle, enables us to predict what will happen to a reaction when external factors are changed, without having to make exact calculations.

A spontaneous reaction is one that will take place, given enough time, without outside assistance. Some spontaneous reactions are rapid, but time is not an element in the definition of spontaneity. A reaction can be almost infinitely slow and still be spontaneous. The net reaction that we observe is the result of competition between forward and reverse steps. If the forward process is faster, then products accumulate, and we say that the reaction is spontaneous in the forward direction. If the reverse process is faster, then reactants accumulate, and we say that the reverse reaction is the spontaneous one. If both forward and reverse processes take place at the same rate, then no net change is observed in any of the reaction components. This is the condition of chemical equilibrium. The ratio of products to reactants, each concentration term being raised to a power corresponding to the coefficient of that substance in the balanced chemical equation, is called the equilibrium constant, Keq. (See Equation 4-8.) It can be used to predict whether a given reaction under specified conditions will be spontaneous, and to calculate the concentrations of reactants and products at equilibrium. The reaction quotient, Q, has a form that is identical with that of the equilibrium constant, Keq, but Q applies under nonequilibrium conditions as well. For a given set of conditions, if Q is smaller than Keq, the forward reaction is spontaneous; if Q is greater than Keq, the reverse reaction is spontaneous; and if Q = Keq, the system is at equilibrium.

The equilibrium constant can be used with any convenient set of concentration units: moles liter^-1, pressure in atmospheres, or others. Its numerical value will depend on the units of concentration, so one must be careful to match the proper values of Keq and units when solving problems. If gas concentrations are expressed in moles liter^-1, the equilibrium constant is designated by Kc; if in atmospheres, by Kp. Just as the partial pressure of the jth component of a gas mixture is related to moles per liter by pj = cjRT, so Kp and Kc are related by Kp = Kc(RT)^Δn, in which Δn is the net change in the number of moles of gas during the reaction. When some of the reactants or products are pure solids or liquids, they act as infinite reservoirs of material as long as some solid or liquid is left. Their effect on equilibrium depends only on their presence, not on how much of the solid or liquid is present. Their effective concentrations are constant, and can be incorporated into Keq. In practice, this simply means omitting concentration terms for pure solids and liquids from the equilibrium-constant expression. Evaporation of a liquid can be treated formally as a chemical reaction with the liquid as reactant and vapor as product. These conventions for writing concentration terms for a liquid permit us to write the equilibrium constant for evaporation as Kp = pj, where pj is the equilibrium vapor pressure of substance j. Le Chatelier's principle states that if stress is applied to a system at equilibrium, the amounts of reactants and products will shift in such a manner as to minimize the stress. This means that for a heat-absorbing, or endothermic, reaction, Keq increases as the temperature is increased, since carrying out more of the reaction is a way of absorbing some of the added heat. Similarly, cooling increases Keq for a heat-emitting, or exothermic, reaction. Although the equilibrium constant Keq is independent of pressure, and changing the total pressure on a reacting system does not alter Keq directly, an increase in pressure does cause the reaction to shift in the direction that reduces the total number of moles of gas.
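The relation between the two unit conventions can be written compactly (a short restatement in LaTeX notation of the rule quoted above, using the same ideal-gas relation):

    p_j = c_j R T \quad\Rightarrow\quad K_p = K_c \,(RT)^{\Delta n}

where Δn is the net change in moles of gas. For H2 + I2 <=> 2 HI, Δn = 0, so Kp = Kc and the constant carries no units, as noted earlier.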
…Ginsberg. The latter was rejected because causal information cannot be encoded as a set of beliefs, and the former because it is difficult to fine-tune Lewis's similarity measure to match causal intuition. Pearl defines counterfactuals directly in terms of a "structural equation model": a set of equations, in which each variable is assigned a value that is an explicit function of other variables in the system. Given such a model, the sentence "Y would be y had X been x" (formally, X = x > Y = y) is defined as the assertion: if we replace the equation currently determining X with a constant X = x, and solve the set of equations for the variable Y, the solution obtained will be Y = y. This definition has been shown to be compatible with the axioms of possible-world semantics, and it forms the basis for causal inference in the natural and social sciences, since each structural equation in those domains corresponds to a familiar causal mechanism that can be meaningfully reasoned about by investigators.
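Pearl's definition is mechanical enough to execute directly; a minimal sketch (Python; the three-variable model and its equations are hypothetical, chosen only to show "replace the equation for X with a constant and re-solve"):

    # Structural equation model: each variable is an explicit function of others.
    # Hypothetical example: U is exogenous, X := U, Y := 2*X + 1.
    def solve(u=1, intervene_x=None):
        # X's structural equation, unless it has been replaced by a constant.
        x = u if intervene_x is None else intervene_x
        y = 2 * x + 1  # Y's equation is left untouched by the intervention
        return x, y

    print(solve())               # factual world: X = 1, Y = 3
    print(solve(intervene_x=3))  # counterfactual: had X been 3, Y would be 7

The second call is exactly the assertion "X = 3 > Y = 7" under this toy model.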
Hamming codes are a form of "forward error correction": they give the receiving station the ability to correct a transmission error. While this takes more bits to send the information, it means fewer retransmits and thus can actually speed up a noisy connection. The number of parity bits in the Hamming code is given by the Hamming rule. This is a function of the number of bits of information transmitted in a block and is represented by the following inequality:

    2^p >= d + p + 1

where 'd' is the number of data bits and 'p' is the number of parity bits. Hamming codes are identified by the ordered pair (c, d), where c = d + p. The Hamming code (7, 4) is the classic example, describing a word of 4 data bits plus 3 error-check bits. It satisfies the inequality:

    2^3 >= 4 + 3 + 1

The Hamming code word is created by multiplying the data bits by a generator matrix using modulo-2 arithmetic. The result is called a code word vector, which consists of the original data bits and the parity bits.

The generator matrix used in constructing the Hamming code consists of I (the identity matrix) and a parity generation matrix A. For a data size of 4 the following matrix is created:

        | 1 0 0 0 | 1 1 1 |
    G = | 0 1 0 0 | 0 1 1 |
        | 0 0 1 0 | 1 0 1 |
        | 0 0 0 1 | 1 1 0 |

Multiplying a 4-bit vector (d1, d2, d3, d4) by G results in a 7-bit vector of the form (d1, d2, d3, d4, p1, p2, p3). The A portion is what generates the parity bits; because the columns of A are distinct, (p1, p2, p3) are the parity calculations of three distinct subsets of the original data. To validate a received word r, it is necessary to multiply it by H, the [A-transpose | I] check matrix, to form the parity check vector s:

        | 1 0 1 1 | 1 0 0 |
    H = | 1 1 0 1 | 0 1 0 |
        | 1 1 1 0 | 0 0 1 |

For example, for the received vector r = (1, 0, 1, 0, 0, 1, 0), the code word for data 1010, H * r = s = (0, 0, 0). If all the elements of s are 0, then the word has been received correctly. If there are any 1s in s, then there is an error, whose position can be determined from the parity bits that have failed: if r = (1, 0, 0, 0, 0, 1, 0), the same code word with bit 3 flipped, then s = (1, 0, 1), which matches the third column of H, and that column corresponds to the bit that has the error.

The (7, 4) Hamming code, while good for demonstrations, is not the best choice for practical communications: it has a lot of overhead and a non-standard length. The number of parity bits goes up with the log of the number of data bits, hence there is less overhead for longer words than for shorter words. The Hamming code can detect and fix single-bit errors, and detect double-bit errors. The full table for the (7, 4) Hamming code follows (the last three bits of each code word are the error-correcting bits):

    Decimal  Binary  Hamming (7,4)
    0        0000    0000000
    1        0001    0001110
    2        0010    0010101
    3        0011    0011011
    4        0100    0100011
    5        0101    0101101
    6        0110    0110110
    7        0111    0111000
    8        1000    1000111
    9        1001    1001001
    10       1010    1010010
    11       1011    1011100
    12       1100    1100100
    13       1101    1101010
    14       1110    1110001
    15       1111    1111111

The Hamming distance from one valid error-correcting code word to another is three. This means that it would take three bit errors to go from one valid message to another. Example (starting from the valid code word 0100011):

    0100010  (not valid; correctable)
    0100000  (not valid; not correctable)

It is left as an exercise to the reader to demonstrate that the minimum Hamming distance between any two valid messages is three.
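A direct translation of the encode/check procedure above into Python (a minimal sketch; bit order and matrices follow the text, with bits kept in plain lists rather than a packed format):

    # Hamming (7,4) following the matrices above: G = [I | A], H = [A^T | I].
    A = [(1, 1, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # one row per data bit

    def encode(d):
        # Code word = (d1, d2, d3, d4, p1, p2, p3), parity computed mod 2.
        parity = [sum(d[i] * A[i][j] for i in range(4)) % 2 for j in range(3)]
        return list(d) + parity

    H = [[1, 0, 1, 1, 1, 0, 0],
         [1, 1, 0, 1, 0, 1, 0],
         [1, 1, 1, 0, 0, 0, 1]]

    def correct(r):
        # Syndrome s = H * r (mod 2); a nonzero s equals the H-column of the bad bit.
        s = [sum(H[row][k] * r[k] for k in range(7)) % 2 for row in range(3)]
        if any(s):
            bad = next(c for c in range(7) if [H[row][c] for row in range(3)] == s)
            r = r[:]
            r[bad] ^= 1  # flip the single erroneous bit
        return r

    word = encode((1, 0, 1, 0))   # [1, 0, 1, 0, 0, 1, 0], matching the table
    word[2] ^= 1                  # corrupt bit 3
    print(correct(word) == encode((1, 0, 1, 0)))  # True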
David M. Lane. Prerequisites: values of Pearson's correlation, variance sum law, measures of variability. The collection of data involves measurement. Measurement of some characteristics, such as height and weight, is relatively straightforward; the measurement of psychological attributes such as self-esteem can be complex. A good measurement scale should be both reliable and valid. These concepts will be discussed in turn.

The notion of reliability revolves around whether you would get at least approximately the same result if you measured something twice with the same measurement instrument. A common way to define reliability is as the correlation between parallel forms of a test. Letting test' represent a parallel form of the test, the symbol r_test,test' is used to denote the reliability of the test.

True scores and error. Assume you wish to measure a person's mean response time to the onset of a stimulus. For simplicity, assume that there is no learning over tests (which, of course, is not really true). The person is given 1,000 trials on the task and you obtain the response time on each trial. The mean response time over the 1,000 trials can be thought of as the person's "true" score, or at least a very good approximation of it. Theoretically, the true score is the mean that would be approached as the number of trials increases indefinitely. An individual response time can be thought of as being composed of two parts: the true score and the error of measurement. Thus if the person's true score were 345 and their response on one of the trials was 358, then the error of measurement would be 13. Similarly, if the response time were 340, the error of measurement would be -5.

Now consider the more realistic example of a class of students taking a 100-point true/false exam. Let's assume that each student knows the answer to some of the questions and has no idea about the other questions. For the sake of simplicity, we are assuming there is no partial knowledge of any of the answers, and that for a given question a student either knows the answer or guesses. Finally, assume the test is scored such that a student receives one point for a correct answer and loses a point for an incorrect answer. In this example, a student's true score is the number of questions they know the answer to, and their error score is their score on the questions they guessed on. For example, assume a student knew 90 of the answers and guessed correctly on 7 of the remaining 10 (and therefore incorrectly on 3). Their true score would be 90, since that is the number of questions they actually knew.
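The true-score-plus-error decomposition lends itself to a quick simulation; a sketch (Python; the means and spreads are arbitrary assumptions, with the response-time mean of 345 borrowed from the example above):

    import random
    random.seed(1)

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    # Each person has a fixed true score; each parallel form adds fresh error.
    true_scores = [random.gauss(345, 30) for _ in range(500)]
    form_a = [t + random.gauss(0, 10) for t in true_scores]   # test
    form_b = [t + random.gauss(0, 10) for t in true_scores]   # parallel form test'

    # r_test,test' should approach var(true)/(var(true)+var(error)) = 900/1000.
    print(round(pearson_r(form_a, form_b), 2))  # close to 0.90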
Liquid crystals, the state of matter that makes possible the flat-screen technology now commonly used in televisions and computers, may have some new technological tricks in store. Writing today (May 3, 2012) in the journal Nature, an international team of researchers led by University of Wisconsin-Madison professor of chemical and biological engineering Juan J. de Pablo reports the results of a computational study which shows that liquid crystals, manipulated at the smallest scale, can unexpectedly induce the molecules they interact with to self-organize in ways that could lead to entirely new classes of materials with new properties.

"From an applied perspective, once we get to very small scales, it becomes incredibly difficult to pattern the structure of materials. But here we show it is possible to use liquid crystals to spontaneously create nanoscale morphologies we didn't know existed," says de Pablo of computer simulations that portray liquid crystals self-organizing at the molecular scale in ways that could lead to remarkable new materials with scores of technological applications.

As their name implies, liquid crystals exhibit the order of a solid crystal but flow like a liquid. Used in combination with polarizers, optical filters and electric fields, liquid crystals underlie the pixels that make sharp pictures on thin computer or television displays. Liquid crystal displays alone are a multibillion-dollar industry. The technology has also been used to make ultrasensitive thermometers and has even been deployed in lasers, among other applications.

The new study modeled the behavior of thousands of rod-shaped liquid crystal molecules packed into nano-sized liquid droplets. It showed that the confined molecules self-organize as the droplets are cooled. "At elevated temperatures, the droplets are disordered and the liquid is isotropic," de Pablo explains. "As you cool them down, they become ordered and form a liquid crystal phase. The liquid crystallinity within the droplets, surprisingly, induces water and other molecules at the interface of the droplets, known as surfactants, to organize into ordered nanodomains. This is a behavior that was not known." In the absence of a liquid crystal, the molecules at the interface of the droplet adopt a homogeneous distribution. In the presence of a liquid crystal, however, they form an ordered nanostructure. "You have two things going on at the same time: confinement of the liquid crystals and an interplay of their structure with the interface of the droplet," notes de Pablo. "As you lower the temperature the liquid crystal starts to become organized and imprints that order into the surfactant itself, causing it to self-assemble."

It was well known that interfaces influence the order or morphology of liquid crystals. The new study shows the opposite to be true as well. "Now you can think of forming these ordered nanophases, controlling them through droplet size or surfactant concentration, and then decorating them to build up structures and create new classes of materials," says de Pablo. As an example, de Pablo suggested that surfactants coupled to DNA molecules could be added to the surface of liquid crystal droplets, which could then assemble through the hybridization of DNA. Such nanoscale engineering, he notes, could also form the basis for liquid-crystal-based detection of toxins, biological molecules, or viruses. A virus or protein binding to the droplet would change the way the surfactants and the liquid crystals within the droplet are organized, triggering an optical signal. Such a technology would have important uses in biosecurity, health care and biology research settings.
There are two different questions at work here, which you've kind of mashed together. The first question is "What is the speed at which a change in the electric field propagates?" The answer to that is the speed of light. In QED terms, the electromagnetic interaction that we see as the electric field is mediated by photons, so any change in an established field (say, due to shifting the position of the charge creating the field) won't be felt by a distant object until enough time has passed for a photon from the source to make it to the observation point.

The second question is "What is the speed of propagation of electric current?" This speed is slower than the speed of light, but still of about that order of magnitude; the exact value depends a little on the arrangement of wires and so on, but you won't be far off if you assume that electrical signals propagate down a cable at the speed of light. This relates to the electric field in that the charge moving through a circuit to light a light bulb has to be driven by some electric field, so you can reasonably ask how that field is established, and how much time that takes. Qualitatively, the necessary field is established by excess charge on the surface of the wires, with the surface charge being generally positive near the positive terminal of a battery and generally negative near the negative terminal, and dropping off smoothly from one to the other, so that the electric field is more or less piecewise constant (that is, the field is the same everywhere inside a wire, and the field is the same everywhere inside a resistor, but the two field values are not the same). When the circuit is first connected, there is a rapid redistribution of the charge on the surface of the wires, which establishes the surface-charge gradients that drive the steady-state current that will eventually do whatever it is you want it to do. The time required to establish the gradients and settle into the steady-state condition is very fast, most likely on the order of nanoseconds for a normal circuit. There's a good discussion of the business of how, exactly, charges get moved around to drive a current in the textbook that we use for our introductory classes, Matter and Interactions, by Chabay and Sherwood. It doesn't go into enough detail to let you calculate the relevant times directly, but it lays out the basic science pretty well. (It's a textbook for a first-year introductory physics class.)

Authors: J. Marvin Herndon. Ours is a time of unparalleled richness in astronomical observations, but understanding seems to be absent throughout broad areas of astrophysics.
Among some groups of astrophysicists there appear to be measured degrees of consensus, as indicated by the prevalence of so-called "standard models", but in science consensus is nonsense; science is a logical process, not a democratic process, and logical connections in many instances seem to be lacking. So the question astrophysicists should ask is this: "What's wrong with astrophysics?" Finding out what's wrong is not only the necessary precursor to righting what's wrong, but will open the way to new advances in astrophysics. Toward that end, one may question the basic assumptions upon which astrophysics is founded, as well as question the approaches astrophysicists currently employ. Here I describe one methodology and provide specific examples, the details of which are set forth elsewhere [1-3]. In doing so, I place into a logical sequence seemingly unrelated astronomical observations, including certain Hubble Space Telescope images, so that causal relationships become evident and understanding becomes possible; as a consequence, profound new implications follow, for example bearing on the origin of diverse galactic structures and the origin of the heavy elements.

AAAS Dialogue on Science, Ethics and Religion: Physics & the Cosmos. The field of physics attempts to make sense of the universe at all scales, from the impossibly small particles of which we are comprised to the inconceivably large structures within which we exist. The miniscule yet fundamentally important realm of quarks, photons and protons (among many others) is articulated by quantum mechanics, through such concepts as the simultaneously wave-and-particle nature of light and the inherent uncertainty in the physical universe. At the other extreme, Einstein's general relativity provides a framework for understanding our cosmos on the largest possible scale and accounts for the large-scale gravitational effects of all matter on space and time. From quarks to quasars, physics and astronomy address an enormous variety of objects and phenomena, many of which provoke intriguing physical and metaphysical questions. Since the beginning of human history we have been looking up at the night sky, wondering about the countless points of light and what might lie beyond. Ancient astronomers observed that the visible heavens are relatively ordered and predictable, yet also peculiar and vast. Such universally experienced mystery has engendered tremendous philosophical, religious, and scientific inquiry. The last several hundred years in particular have witnessed revolutions in the way we understand the universe. Copernicus re-envisioned the cosmos as sun-centered, not earth-centered. Galileo observed Jupiter's orbiting moons and the sun's "imperfect" spots, which led him to challenge the traditional Greek conceptions of the heavens.
Indeed, both religious and scientific communities have had to regularly revise their understanding of the cosmos as more discoveries come to light. Modern astronomy and physics continue to reveal many unanticipated features of the universe's structure and evolution. Astrophysicists theorize that all space, matter and energy expanded explosively from an extremely dense soup of subatomic particles in an event called the Big Bang. After a process of cooling and coalescing, cloudlike nebulae of gas and dust collapsed to form stars, and these stars clustered to comprise galaxies. Eventually, terrestrial planets and moons were forged from the heavier material expelled from dying stars. Over an unimaginably long span of time (approximately 13.7 billion years) the components of the universe gradually formed, and today the universe continues to evolve as space itself dramatically expands. In addition to piecing together the intricate history of the universe and explaining the various objects we see around us…

The Search for Certainty: A Philosophical Account of Foundations of Mathematics. Marcus Giaquinto. xii + 286 pp. Oxford University Press, 2002. $45.

David Hilbert (1862-1943) was arguably the leading mathematician of his time. In struggles over how mathematics was to accommodate new understandings of the infinite, the Dutch mathematician L. E. J. Brouwer was his most fervent opponent. When Hilbert's favorite student, Hermann Weyl, went over to the enemy, saying "Brouwer, that is the revolution," Hilbert was incensed. In a passionate address delivered in 1922, he proclaimed:

Weyl and Brouwer... seek to provide a foundation for mathematics by pitching overboard whatever discomforts them and declaring an embargo.... But this would mean dismembering and mutilating our science, and, should we follow such reformers, we would run the risk of losing a large part of our most valued treasures. Weyl and Brouwer outlaw the general notion of irrational number, of function, even of number-theoretic function, Cantor's [ordinal] numbers of higher number classes, etc. The theorem that among infinitely many natural numbers there is always a least, and even the logical law of the excluded middle, e.g., in the assertion that either there are only finitely many prime numbers or there are infinitely many: these are examples of forbidden theorems and modes of inference. I believe that impotent as Kronecker was to abolish irrational numbers..., no less impotent will their efforts prove today. No! Brouwer's [program] is not, as Weyl thinks, the revolution, but only a repetition of an attempted putsch with old methods, that in its day was undertaken with greater verve yet failed utterly. Especially today, when the state power is thoroughly armed and fortified by the work of Frege, Dedekind, and Cantor, these efforts are foredoomed to failure.

A decade later Hilbert's own program for the foundations of mathematics lay in tatters, destroyed in an investigation by the young logician Kurt Gödel, which had initially been undertaken in an effort to contribute to that very program. Today, passions have cooled, and working mathematicians show little interest in foundational matters.
The infinitary set-theoretic methods that occasioned such controversy are casually absorbed in passing by the beginning graduate student and used unhesitatingly…

Zermelo's axioms provided the basis of a formal system rivaling that of Principia. In an important paper appearing in 1930, Zermelo proposed what came to be called the iterative notion of set, in which a hierarchy of sets is built from some initial collection of things by iterating indefinitely the operation of forming the set of all subsets of a given set. He observed that his axioms could be construed as being about just this notion. A few years later, in an address on the foundations of mathematics, Kurt Gödel emphasized that rather than being seen as a rival to Principia, when viewed from the perspective of the iterative notion of set, Zermelo's system could be seen as the result of eliminating unnecessary complications and artificial restrictions from the Whitehead-Russell system. By the 1940s and '50s, set-theoretic methods had become a crucial part of the mathematician's toolbox. Back in the 1920s, when passions were aflame, Hilbert developed an ingenious strategy by which he intended to overcome his opponents. He would establish the legitimacy of methods that Brouwer and Weyl considered dubious by encapsulating those methods in formal systems whose consistency would then be proved using only methods of which they approved. In a revolutionary paper in 1931, the young Gödel demonstrated not only that consistency could not be proved using only these restrictive methods, but also that the same negative conclusion held even if the entire panoply of methods encapsulated in the systems in question was brought to bear. After Gödel, the foundations of mathematics were seen as inevitably open-ended, with more and more propositions becoming provable as ever more powerful methods were employed. Gödel liked to emphasize that these more powerful methods could be thought of as being essentially a matter of venturing sufficiently far out in the iterative hierarchy of sets. Giaquinto has provided a careful and judicious discussion and analysis of these matters, supplying needed technical background for readers who are not mathematicians. Although foundational questions have ceased to be of much importance to most mathematicians, controversies among specialists continue. Readers of this book will be well prepared to follow the current literature on foundations of mathematics.

…the experiments Michelson had begun in Berlin. The new apparatus was similar in basic design to his previous ones, but much more sensitive. It used extra mirrors to allow the light beams to bounce back and forth, creating a much longer path length. Michelson and Morley conducted the experiments in a basement lab, and to minimize vibrations, the setup rested atop a huge stone block, which floated in a pool of mercury that allowed the entire apparatus to rotate.
Even with this exquisitely sensitive design, Michelson and Morley couldn't detect evidence of motion through the ether. They reported their null result in November 1887 in the American Journal of Science, in a paper titled "On the Relative Motion of the Earth and the Luminiferous Ether." (The paper is online at www.aip.org/history/gap/michelson/michelson.html.) Though disappointing to Michelson and Morley, the experiment revolutionized physics. Some scientists initially tried to explain the results while keeping the ether concept. For instance, George FitzGerald and Hendrik Lorentz independently proposed that moving objects contract along their direction of motion, making the speed of light appear the same for all observers. Then in 1905 Albert Einstein, with his groundbreaking theory of special relativity, abandoned the ether and explained the Michelson-Morley result, though it is uncertain whether Einstein was actually influenced by their experiment. Michelson and Morley nonetheless both continued to believe that light must be a vibration in the ether, though Michelson did acknowledge the importance of Einstein's work on relativity. Although it couldn't detect the non-existent ether, the Michelson interferometer proved useful for other measurements. Michelson used his interferometer to measure the length of the international standard meter in terms of wavelengths of cadmium light, and in 1920 he was the first to measure the angular diameter of a distant star, also using an interferometer. In 1901 Michelson was the second president of the APS, and he became the first American to win the Nobel Prize in 1907, for his precision optical instruments and the measurements made with them. In 1889 Michelson moved to Clark University in Worcester, Massachusetts, and then in 1892 to the University of Chicago. He returned to his work refining measurements of the speed of light, and continued making more and more precise measurements right up to his death in 1931.

Copyright © 2001-2008 JSD. I set up some spreadsheets to solve Laplace's equation, with more-or-less any boundary conditions you want. The spreadsheet becomes, essentially, a 2D cellular automaton that directly emulates the physics. This version handles objects in a d = 2 universe in rectangular coordinates. In flatland, i.e. d = 2, the z direction simply does not exist. Alas, many people are unfamiliar with the laws of physics in flatland; therefore it might be better to think of this as a d = 3 universe in which all d = 3 objects are infinitely tall and translationally invariant along the z axis. In this case, the z direction exists, but is uninteresting, and the essential physics is the same as in the d = 2 case. (This is not the same as considering a thin flat "d = 2" object embedded in the d = 3 universe!) In any case, each cell represents an area dx∧dy in the xy plane. The spreadsheet to handle this case can be found in reference 1.
Occupying a large area near the upper left of the spreadsheet is a grid that I call the potential grid. You can set boundary conditions for the problem by choosing cells that you want to represent electrodes, and specifying the potential on these electrodes. For example, reference 1 contains three electrodes. Within the universe, cells that are not electrodes are called vacuum cells. They contain a formula that will be used to calculate their potential, in accordance with Laplace's equation, subject to the specified boundary conditions. If you want to "erase" part of an electrode, you should use the copy-and-paste function to fill those cells with the vacuum formula. Just to the right of the potential grid there is a second grid that I call the |field| grid, because it calculates and displays the magnitude of the electric field at each point. Farther right is a third grid that calculates the charge density (charge per unit volume). If you add up all the cells in a given area, you get a charge per unit length. This means length in the z direction; it is the charge per unit length of the object rooted in the given area and extending infinitely far perpendicular to the screen.

Principle of operation: consider a cross-shaped group of 5 elements somewhere on the spreadsheet, and label them as shown in figure 1 (a central cell w, with vertical neighbors a and d, and horizontal neighbors b and c). Now the discrete approximation to the second derivative in the horizontal direction is b + c - 2w, and in the vertical direction it is a + d - 2w. The Laplacian vanishes if w = (a + b + c + d)/4, i.e. if the central element is equal to the average of its four neighbors. (Recall we are assuming d/dz is zero.) This leads to an algorithm that says that for each cell in the vacuum, we want it to equal the average of its four neighbors. So the basic step of the algorithm is to run through the grid and just set each cell to the average of the neighboring cells. That does not immediately solve the problem, because whenever we change a cell it requires us to change all the neighbors. However, each basic step brings us closer to a good solution, so we just repeat the basic step several times. This is called the relaxation algorithm.

Another way to motivate the same algorithm is to consider the electrostatic field energy. It depends on the square of the electric field, i.e. the square of the first derivatives of the potential. This energy is minimized when the central cell is equal to the average of its four neighbors; therefore each step of the update algorithm lowers the local energy. Tangential remark: you can say that the field energy serves as a Lyapunov function for the relaxation algorithm... but if this doesn't mean anything to you, don't worry about it.

Reference 1 has 841 cells arranged as a 29x29 grid. For a grid of this size, the relaxation algorithm converges in a few seconds.
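The basic step is easy to reproduce outside a spreadsheet; a minimal sketch (Python; the 9x9 size, electrode positions and sweep count are arbitrary assumptions, and the edges wrap around, matching the periodic boundary conditions described below):

    # Relaxation solution of Laplace's equation on a small grid.
    # Electrode cells are pinned; every vacuum cell is repeatedly replaced
    # by the average of its four neighbors (periodic, torus-style edges).
    N = 9
    phi = [[0.0] * N for _ in range(N)]
    electrodes = {(2, 2): 1.0, (6, 6): -1.0}     # (row, col): fixed potential

    for (r, c), v in electrodes.items():
        phi[r][c] = v

    for sweep in range(500):                     # plenty of sweeps for this size
        for r in range(N):
            for c in range(N):
                if (r, c) in electrodes:
                    continue
                phi[r][c] = 0.25 * (phi[(r - 1) % N][c] + phi[(r + 1) % N][c]
                                    + phi[r][(c - 1) % N] + phi[r][(c + 1) % N])

    print(round(phi[2][3], 3))                   # potential next to the + electrode

Because the cells are updated in place, changes propagate faster in the sweep direction than against it, which is the same sequencing artifact described below for Excel's recalculation order.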
that \u2019 s fast enough that it \u2019 s not boring, but slow enough that you can observe the propagation of changes if you fiddle with the boundary conditions. there is a cell just above the top right of the potential grid, labeled object potential. if you change the value of this cell, you can watch how the charge distribution responds. while the algorithm is running, i. e. after you have changed something but before the algorithm has converged to a solution, the grid contains an approximate solution that doesn \u2019 t exactly satisfy laplace \u2019 s equation. that is, during this phase, there will be nonzero charge in the \u201c vacuum \u201d. this", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.6420483909761134, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:40.285578"} {"text": "but before the algorithm has converged to a solution, the grid contains an approximate solution that doesn \u2019 t exactly satisfy laplace \u2019 s equation. that is, during this phase, there will be nonzero charge in the \u201c vacuum \u201d. this is unavoidable ; because the spreadsheet strictly enforces local conservation of charge, as discussed in section 2. 2. that means there is no way for the objects to acquire the correct charge unless charge flows through the vacuum somehow. the algorithm gradually moves all this charge to the boundaries. the \u201c manual recalculation \u201d mode ( using the \u201c f9 \u201d key ) may help you observe this, as discussed in section 5. excel evaluates cells in a sequence that it chooses. the sequence defies simple description, and it has nothing to do with the physics. ( remember, this is an electrostatic problem ; there is no physically - significant timescale. ) unfortunately, this sequencing means charge propagates quickly in certain directions across the grid, and slowly in the opposite directions. if you were writing in a computer language that gave you more control than excel does, you could get rid of this unphysical asymmetry by evaluating things in checkerboard - sequence ( all the black squares, then all the white squares ) or in randomized order. as mentioned above, just outside the edge of the potential grid is a layer of cells that implement the boundary conditions. in this example, they implement born / von - karman periodic boundary conditions. that is, given a universe of n rows by m columns, row n + 1 is constrained to equal row 1, and column m + 1 is constrained to equal column 1. you can think of this as a torus, where the top edge of the n\u00d7m grid joins the bottom, and the left edge joins the right. equivalently, you can imagine tiling an infinite region with copies of the n\u00d7m grid, subject to the constraint that corresponding cells have the same value in every tile. below the potential grid is a graph with many traces ; each trace shows the potential as a function of x, while different traces show different y values ( rows ). clicking on one of the traces highlights the corresponding row. this may help you locate extremal values. below the field grid is a similar graph with many traces. 
You can make the universe bigger by adding more rows and columns if you like; use the "fill across" and "fill down" features to propagate the vacuum formula into the new cells. Beware: you must fill from a vacuum cell that is not adjacent to the newly added cells, or the results will be incorrect. You could extend this calculation to d = 3, removing any assumption of translational symmetry. One possible brute-force solution would be to make a spreadsheet with 29 different 29×29 grids and put the appropriately generalized formula in them. On the other hand, when the problem gets this complicated, you're probably better off using a more sophisticated programming language, such as C++.

Reference 2 is similar to reference 1, but has several additional features. For one thing, it uses a fancier formula in the vacuum cells: a technique called "over-relaxation" to improve the speed of convergence, described in e.g. reference 3. Basically the idea is to figure out how big a step the simple relaxation algorithm would have taken, and take a step larger than that by a factor of gamma, in hopes of moving more quickly toward the final result. Gamma = 1 corresponds to the plain old relaxation algorithm, with no over-relaxation. Values between 1 and 2 make sense. (If gamma were set greater than 2, the electrostatic energy would increase at every step, so the algorithm would not converge.) The value of gamma is controlled by a cell near the top right of the potential grid. More generally, reference 4 describes a fancy Fortran program for doing calculations of this sort; if you're interested in such things, take a look there.

Reference 2 has another cute little feature, the "gate" cell at the lower right of the potential grid. Setting it to zero sets the vacuum potential to zero everywhere; setting it back to a nonzero value allows the potentials to be recalculated. This is convenient if you just want to watch how the solution propagates. It is also invaluable for recovering from the following situation: if you enter an invalid expression into a cell in or near the vacuum, the spreadsheet will be unable to calculate the neighboring cell values, and the problem will spread from cell to cell like a disease. As mentioned above, all the potential grids in reference 1 and reference 2 implement periodic boundary conditions, also known as Born/von Karman boundary conditions.
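In code, over-relaxation is a one-line change to the sketches above. This is a sketch of the idea, not the exact formula in reference 2's cells; gamma is the tunable factor described above.

    import numpy as np

    def overrelaxed_step(V, electrode, gamma=1.8):
        # Take the step plain relaxation would have taken, scaled by gamma.
        avg = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
               np.roll(V, 1, 1) + np.roll(V, -1, 1)) / 4.0
        step = avg - V                      # the plain-relaxation step
        return np.where(electrode, V, V + gamma * step)

Gamma = 1 reproduces plain relaxation; values between 1 and 2 usually converge faster. In practice over-relaxation behaves best when combined with a checkerboard-style sweep rather than the simultaneous update shown here.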
Periodic boundary conditions are not the only possible choice. Another option is to have a hall of mirrors. That is, imagine that just to the left of the model universe there is a mirror-image copy of itself; then impose periodic boundary conditions on the pair (with the appropriate double-length period). Do the same in the vertical direction. You can turn on this feature in the advanced spreadsheet by putting a nonzero value in the cell labeled "hall of mirrors" near the lower-right corner of the potential grid. The hall-of-mirrors condition has an interesting property: it causes the directional derivative of the potential, in the direction perpendicular to the edge, to be zero at the edge of the universe. For some applications, for instance if you are trying to model the "self-capacitance" of some object, the hall-of-mirrors boundary condition may approximate the desired physics better than periodic boundary conditions would.

In reference 2, over on the lower right below the main charge-density grid, there is a pair of smallish grids labeled "charge conservation". They serve to illustrate the principles of global charge neutrality and local charge conservation. The pair consists of a potential grid and the corresponding charge-density grid. In this potential grid you can put an arbitrary arrangement of values in the cells. No matter what you do, no matter how weird the potential arrangement is, the total charge (i.e. the sum over the charge-density grid) comes out zero, provided you don't mess with the periodic boundary conditions. It is easy to see why this must be so: we calculate the charge by convolving the operator (a + b + c + d − 4w) with the potential grid. Every nonzero potential cell contributes to the convolution grid five times: once as a, once as b, once as c, once as d, and once (weighted by −4) as w. If you add those five contributions, you get zero every time. (There may be small discrepancies due to roundoff errors, which we ignore.) The cells in this little grid are just numbers; we do not run the relaxation algorithm on them. This should make it clear that the global charge neutrality, in this model system, has nothing to do with the relaxation algorithm. You could use potential values from the relaxation algorithm, or from some other algorithm, or from a random-number generator, and the total charge in the universe would still be zero. No algorithm can change this zero.

This zero can be seen as a manifestation of Gauss's law. We can consider the edge of the universe to be a Gaussian pillbox. The periodic boundary condition ensures that whatever field lines leave the top of the universe re-enter the bottom of the universe, so there is no net flux flowing into the universe. (In the example, the field happens to be zero at the edge, making it extra-obvious that there is no net flux.) Since there is no net flux, the net charge on the interior must be zero.
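That claim is easy to verify numerically. In the sketch below, a random grid stands in for whatever weird potential arrangement you might type in; the charge density is the operator (a + b + c + d − 4w) applied with periodic wrap-around, and its sum vanishes up to roundoff.

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.normal(size=(8, 8))         # arbitrary potential values

    # Discrete charge density: (a + b + c + d - 4w), periodic boundaries.
    rho = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
           np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4 * V)

    print(rho.sum())                    # ~1e-15, i.e. zero up to roundoff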
The validity of Gauss's law depends on the structure of the operator (a + b + c + d − 4w) and not much else. Its applicability depends on the boundary condition for the universe itself. Global charge neutrality automatically implies global conservation of charge. Global conservation is vaguely interesting, but in physics it is important to have a local conservation law. Here's why: suppose some charge unaccountably disappeared from my lab. It would give me little comfort to be told that it reappeared in some unknowable distant part of the universe; I would be unable to distinguish non-local conservation from non-conservation. Fortunately, our model system does have a local conservation law. If you increase the potential in any one cell, it causes an increase in the charge density in the corresponding cell, but this increase is exactly counterbalanced by a decrease in the four neighboring cells (not in some goofy distant cells). Again, this depends on the structure of the Laplacian, not on the update algorithm.

Just below the aforementioned pair of grids is yet another pair of smallish grids, labeled "gauge invariance". As in most of the other grids, I have imposed Born/von Karman periodic boundary conditions. As before, this exhibits global charge neutrality and local charge conservation.

A further spreadsheet handles the cylindrically symmetric case. (Compare the earlier spreadsheets, namely d = 3 with translational symmetry in the z direction, where the Laplacian was (d/dx)² + (d/dy)²; we knew the z derivative was zero.) In the cells of the spreadsheet, I have simplified the formula by observing that (1/r)(d/dr) is equal to (1/x)(d/dx) on the slice of interest, by cancellation of a factor of sign(x). In this spreadsheet there is a fourth grid, just to the right of the grid that shows the charge per unit volume. It shows the charge per unit area (dr∧dz) in a ring. You can find the total charge on an object by summing the numbers in this grid. There is no point in summing the numbers in the charge-per-unit-volume grid; that doesn't make sense, for several reasons, including dimensional analysis. To improve the accuracy, I use a smart estimate of the quantity (1/r)(d/dr). In particular, I take the arithmetic mean of the left-hand difference (w − b)/x₁ and the right-hand difference (c − w)/x₂; this accounts for an important nonlinearity, because the radius is different in the two denominators.

Validity checks: I verified that a region with a log(r) potential produces zero charge density, with high accuracy. I also checked that the field calculation and the charge calculation are automatically gauge invariant, because of the structure of the Laplacian operator. I implemented periodic boundary conditions in the z direction, and this is the default behavior. I also implemented hall-of-mirrors boundary conditions, which you can optionally use instead. In the r direction there is only one choice: the perpendicular component of the electric field vanishes on this boundary. This is reminiscent of the hall-of-mirrors boundary condition, but there is no physical interpretation in terms of tiling the universe.
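Here is a sketch of that smart estimate in code. The names are made up (V holds the potential along one radial row, r the cell-center radii), the cell spacing is taken as one unit, and the edge-radius convention is a guess; the point is only that the left-hand and right-hand differences are divided by different radii before being averaged.

    import numpy as np

    def radial_term(V, r):
        # Estimate (1/r)(dV/dr) at interior cells of one radial row,
        # assuming unit cell spacing (hypothetical convention).
        w, b, c = V[1:-1], V[:-2], V[2:]
        x1 = 0.5 * (r[:-2] + r[1:-1])   # radius between b and w
        x2 = 0.5 * (r[1:-1] + r[2:])    # radius between w and c
        left = (w - b) / x1             # left-hand difference
        right = (c - w) / x2            # right-hand difference
        return 0.5 * (left + right)     # arithmetic mean of the two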
Instead, this can be viewed as surrounding the region of interest, at each z level, with an annulus extending to infinity. The potential on this annulus depends on z but is independent of r. This means that outside the region of interest there will be zero charge, although there will be nonzero fields. These fields seem a bit unphysical. To make these fields go away, you can arrange that the potential at the large-r boundary is independent of z.

When dealing with geometries that take place within the universe, we deal not with conventional three-dimensional Euclidean geometry; we have to adapt it to represent a four-dimensional spacetime. This results in what is known as a Lorentzian manifold. Within this geometry we deal with three types of space: de Sitter space, anti de Sitter space and Minkowski space. They are the analogues of spherical, hyperbolic and Euclidean space for four-dimensional spacetime.

The axion is a hypothetical particle of zero electric charge that has come out of the framework of quantum chromodynamics. It is hypothesised that axions were created during the very early universe. They have little mass and do not easily interact with normal matter. No experimental evidence for them exists as of yet, but they are one of the possible contenders for dark matter.

A baryon is a category of subatomic particle composed of three quarks, as opposed to a meson, which is composed of one quark and one antiquark. Baryons include protons and neutrons, and make up the majority of the mass of visible matter in the universe (i.e. the mass of the universe that is not dark matter or dark energy). They participate in the strong nuclear force.

About thirteen billion years ago, the universe began in a gigantic explosion. Every particle started rushing apart from every other particle in an early super-dense phase. The fact that galaxies are receding from us in all directions is a consequence of this initial explosion. Projecting galaxy trajectories backwards in time means that they converge to a high-density state.

The big crunch is one of the possible ends to the universe as we know it. Cosmic expansion drives matter apart while gravitation brings it together. Depending on the density of the universe, one of these may overcome the other; alternatively the universe may be of critical density, which would result in a "flat" universe.
If the universe has a density higher than this critical density, then gravitation will eventually overcome the forces working to expand it.

A blackbody must emit radiation across all possible wavelengths and frequencies, and must also absorb all possible wavelengths and frequencies; it can emit radiation at any wavelength.

Named after the Indian physicist Satyendra Nath Bose, bosons are particles with integer spin, e.g. 0, 1, 2 (as opposed to fermions, which possess half-integer spin). There are two categories of fundamental boson (bosons not composed of a combination of other particles): gauge bosons, which mediate the fundamental forces of nature, and scalar bosons, which are constituents of a scalar field and include the elusive Higgs boson. Bosons can also be built from other particles whose spins total an integer, for example any meson.

Brane inflation uses fundamental objects of string theory called branes. In this theory the universe is a three-dimensional slice (a brane) in a higher-dimensional space (the bulk), which may also contain other branes. These slices of spacetime have mass and can attract each other by gravity, so two almost parallel branes separated by some distance will start moving towards each other. In brane inflation, the closer the two branes get to each other, the more the branes expand, giving rise to inflation. The process ends with the violent collision of the branes, leading to the copious production of radiation and relativistic particles. Hence the new brane resulting from the collision is filled with a hot plasma, which is the starting point of the standard big bang model. There is another prediction in the model: the collision is also accompanied by the production of cosmic strings.

Bubbles, filaments, voids and sheets are all types of large-scale structure formed from the distribution of galaxies in the universe. Galaxies form clusters and superclusters, which arrange into sheets and filaments through the universe. Between these sheets of galaxies there is very low galaxy density, which leads to voids; these fill approximately 90% of space.

Bubble nucleation is a form of first-order phase transition. A phase transition occurs when temperatures and densities change such that matter changes its form and properties, as in the very early universe during the big bang. A simple analogy is water, which melts from ice to liquid and then boils to gas as the temperature increases. For physicists it is important to note that as the temperature increases, the symmetry of the matter increases: gas is more symmetric than liquid, which is more symmetric than solid.

Colour is a degree of freedom that allows quarks to exist together to form hadrons, such as protons or neutrons, in otherwise identical quantum states. This is necessary as otherwise they would be in violation of the Pauli exclusion principle, which states that no two identical fermions may occupy identical quantum states simultaneously.
Comoving distance is the distance between two objects as it appears if the expansion of the universe is factored out. At the present epoch it is equal to the proper distance, the actual distance between two objects, which changes over time due to the expansion of the universe. The comoving horizon is therefore the actual distance to the edge of what we can see at any given time.

Condensed matter systems deal with, as the name suggests, condensed matter: matter in the liquid, solid and superconducting phases. Condensed matter systems can be used to study the effects of phase transitions on matter.

Around 370,000 years after the big bang, the temperature of the universe dropped sufficiently for electrons and protons to combine into hydrogen atoms: p + e = H. From this time onwards, radiation was effectively unable to interact with the background gas, so it has propagated freely ever since, constantly losing energy as its wavelength is stretched by the expansion of the universe. Originally the radiation temperature was about 3000 kelvin (roughly 2,700 degrees Celsius, or 4,900 degrees Fahrenheit), whereas today it has fallen to only about 3 K. Observers detecting this radiation today are able to see the universe at a very early stage: photons in the CMB have been travelling towards us for over ten billion years, and have covered a distance of about a million billion billion miles. The CMB was discovered in 1964.

Cosmic strings are one-dimensional (that is, line-like) objects which form when an axial or cylindrical symmetry is broken. Strings can be associated with grand unified particle physics models, or they can form at the electroweak scale. They are very thin and may stretch across the visible universe. A typical GUT (grand unified theory) string has a thickness more than a trillion times smaller than the radius of a hydrogen atom. Still, a 10 km length of one such string would weigh as much as the Earth itself!

The cosmological constant was originally proposed by Einstein as a modification to general relativity, to produce a universe which would neither expand nor contract. He later famously called it his greatest mistake, after Hubble discovered that other galaxies were moving away from us.

Certain particles with the same values for their varying degrees of freedom (i.e. spin, charge, etc.) cannot exist in the same place at the same time.

An object which generates a magnetic field emanates that field from two opposite poles; an example would be a bar magnet, which has a north and a south pole. Each of these poles is a magnetic monopole; the magnet itself, having two of them, is a dipole. This is similar to an electric field, in which the field emanates from positive and negative charges. Whereas in electricity negative and positive charges can easily be isolated, in the form of electrons and positrons, magnetic monopole particles have yet to be discovered. For example, when you break up a bar magnet you do not isolate the two monopoles; you simply have two bar magnets half the size of the previous one.

Domain walls are two-dimensional objects that form when a discrete symmetry is broken at a phase transition. A network of domain walls effectively partitions the universe into various "cells".
Domain walls have some rather peculiar properties. For example, the gravitational field of a domain wall is repulsive rather than attractive.

When the source of a wave moves away from us, we observe a change in the frequency of that wave. An example would be an ambulance or fire truck: we hear a lower pitch in its siren once it has passed us by. This is the Doppler shift. It is not limited to sound waves, but applies to waves of any kind, including electromagnetic waves.

Albert Einstein (b. 1879, d. 1955) was a German theoretical physicist who spent much of his career at the Kaiser Wilhelm Institute for Physics and Princeton University. He is regarded as one of the greatest physicists of the 20th century, and indeed one of the most academically brilliant minds of all time. He was awarded the Nobel Prize in Physics in 1921 for his work on the photoelectric effect, in which he described light as discrete packets known as quanta; this was in direct conflict with previous, classical descriptions of physics, which treated light as a wave. His theories are now the basis of modern physics. These theories, whilst too numerous to list here, include special relativity, which describes how observers in relative motion can measure different times and lengths while experiencing the same laws of physics, together with the energy-mass equivalence relationship E = mc²; and general relativity, which generalises special relativity with respect to gravity, incorporating Newtonian gravity into a description of gravity as a geometric property of spacetime. When Hitler came to power in 1933, Einstein was on a trip to America and did not return to Germany, instead opting to become an American citizen. His warning to President Roosevelt about German research into nuclear weapons led to the eventual development of the atomic bomb, a weapon he later denounced and crusaded against. Such was Einstein's genius that upon his death his brain was removed for future study.

The electron is an elementary particle carrying a negative elementary electric charge (that is, the most fundamental electric charge; particles do not carry charge smaller than this). It is a fermion with spin 1/2. It is a lepton and therefore a constituent of matter, but it does not participate in the strong nuclear force; it does interact with electromagnetism, gravitation and the weak nuclear force.

The electronvolt is an energy unit equal to approximately 1.6×10⁻¹⁹ joules. It is the amount of energy gained by the charge of one electron as it moves across a one-volt electric potential difference.

An epoch is a period in time. In cosmology the term refers to different periods in the chronology of the universe. These include the Planck epoch, the grand unification epoch, the electroweak epoch, the quark epoch, the hadron epoch, the lepton epoch and the photon epoch (all of the epochs prior to the photon epoch occurred within the first 10 seconds of time!). Periods after this include nucleosynthesis, recombination and reionization.
Escape velocity is the speed required for any object to break free of another object's gravitational field. For the Earth, this is approximately 7 miles per second (about 11.2 km/s). Mathematically, it is the velocity at which the escaping object's kinetic energy and gravitational potential energy sum to zero. Because the gravitational force exerted by an object on another object increases as the distance between the two decreases, the further away the escaping object is, the lower the escape velocity. For black holes, at the distance known as the event horizon, the escape velocity exceeds the speed of light, and therefore nothing can escape.

Eternal inflation refers to a series of models in which at least one region of the universe is undergoing inflation at any one point in time. Due to the exponential increase in volume during these periods of inflation, it is theorised that at any given time the majority of the volume of the universe is still expanding. This creates a multiverse, whereby each expanding area appears to be its own universe, with its beginning period of expansion equivalent to a big bang. In eternal inflation it is possible for these expanding areas of space to decay into a lower-energy phase, resulting in inflation ceasing.

Euclidean geometry is named after Euclid, a Greek mathematician of the third century BC. It is the system of geometry based on the three dimensions that we are all taught at school: x, y and z. Points within the system can be described by a set of Cartesian coordinates. It is described by a system of postulates, or premises, for example the parallel postulate, which states that "if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles". In contrast to this is non-Euclidean geometry, which deals with curved space.

The event horizon is the boundary that marks the point where the escape velocity of a black hole exceeds the speed of light. Once the event horizon has been crossed, nothing can escape from the black hole's gravitational pull, not even light.

Exotic particles are those made up of theorised particles not currently part of the standard model. An example would be the heavier partners of the current set of standard-model particles that are described within the theory of supersymmetry.

Fermilab (full title: the Fermi National Accelerator Laboratory), located near Chicago, Illinois, is a United States Department of Energy laboratory focused on high-energy physics. Until 2011 it housed the Tevatron particle accelerator, which, until the opening of the Large Hadron Collider at CERN, was the largest in the world. In 1995, work done at the Tevatron led to the discovery of the top quark, one of the six flavours of quark, and the most massive of them all.
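A quick check of the figure in the escape-velocity entry above (a throwaway sketch with standard textbook constants; v = sqrt(2GM/R) follows from setting kinetic plus gravitational potential energy to zero):

    import math

    G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24                # mass of the Earth, kg
    R = 6.371e6                 # radius of the Earth, m

    v = math.sqrt(2 * G * M / R)
    print(v, v / 1609.344)      # ~11186 m/s, i.e. about 7 miles per second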
Fermions are particles with half-integer spin, as opposed to bosons, which have integer spin. No two identical fermions can occupy the same quantum state at the same time; this is known as the Pauli exclusion principle, and it does not apply to the other class of particles, the bosons. Elementary fermions (those not composed of other particles) are constituents of visible matter in the universe, and include electrons and quarks. Particles composed of fundamental fermions, however, can have integer total spin and can therefore be classed as bosons.

A ferromagnet is an object which exhibits the property of ferromagnetism. Ferromagnetism is the strongest type of magnetism, and as such ferromagnets are the magnets the average reader will be familiar with: they are the ones used in physics classes at school, the ones used to pick up scrap metal, the magnets on your fridge. Ferromagnetism is the only type of magnetism that produces a force strong enough to be felt. A ferromagnet can be defined as a material that can exhibit a net magnetic moment in the absence of an external magnetic field.

Richard Feynman (b. 1918, d. 1988) was a physicist who spent most of his life working at the California Institute of Technology (Caltech). He also worked on the Manhattan Project at Los Alamos National Laboratory, where he helped develop the atomic bomb. He won the Nobel Prize in Physics in 1965 for his work in quantum electrodynamics (QED), developed the path integral formulation that we use today, and devised an illustrative representation scheme for the behaviour of subatomic particles which has become known as Feynman diagrams. Caltech has a named chair of physics in his honour. Outside of his life in physics, he was a member of the panel that investigated the space shuttle Challenger disaster, and wrote two popular science books: "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?".

There are four fundamental forces in nature: electromagnetism, the weak nuclear force, the strong nuclear force and gravitation. The weak nuclear force is associated with radioactivity in unstable nuclei, specifically the decay of a neutron into a proton in the form of beta radiation. The gauge bosons that mediate this force are the W and Z bosons. This interaction can cause quarks to change flavour. The strong nuclear force binds quarks together to form nucleons; in turn, it also acts to bind these nucleons together, forming atomic nuclei.
The force is mediated by an exchange of gluons, which are a type of gauge boson. The charge associated with this force, analogous to the electric charge of electromagnetism, is the colour charge, of which there are three varieties: red, green and blue. The mathematical theory describing the elementary particles that interact through this force, quarks and gluons, is known as quantum chromodynamics (QCD). At atomic scales it is by far the strongest of all the forces, but it only acts over distances of the order of 10⁻¹⁵ m, and therefore, whilst incredibly important for the formation of matter, it plays no observable role in day-to-day life.

Electromagnetism is the force associated with the electric charge carried by certain particles. Along with gravitation, it is one of the forces that have a major, noticeable effect on day-to-day human life. It manifests as two different fields, electric and magnetic, although these are aspects of the same force and interact with each other through electromagnetic induction. The gauge boson that mediates this force is the photon, which is also the quantum (discrete packet) of light and of other forms of electromagnetic radiation, such as infrared radiation (most thermal radiation), X-rays and ultraviolet radiation.

Gravitation is a force of attraction between two massive bodies. Objects on Earth are attracted to the Earth via gravitation, which is why, when an apple falls from a tree, it falls down towards the Earth instead of in any other direction. Gravitation also gives weight to objects, weight being the mass of an object multiplied by the gravitational acceleration imposed on it by another object. Gravitation on a universal scale is described by Einstein's theory of general relativity, where it is described as a consequence of the curvature of spacetime.

Galaxy surveys allow us to build up a three-dimensional map of the sky, which gives us insight into the large-scale structures within the universe.

A gauge group is a set of gauge transformations which affect a system in similar ways. A gauge transformation is a transformation that acts on redundant degrees of freedom within a system; that is, it affects a property that has no physical significance at the level at which the system operates. A gauge transformation which is globally symmetric affects all points of space in the same way. An example would be a transformation of voltage that states voltage1 = voltage2 + c (a constant): if we substitute the left-hand side of the equation with the right in the classical equations of electromagnetism, there is no difference in the outcome, so the physics holds across any constant difference in voltage. If we impose a local symmetry on the gauge transformation, also known as gauge invariance, these transformations become very significant, because the transformation still holds but is now a function of position in space and time.
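The global case is easy to illustrate numerically. In this throwaway sketch (not part of the glossary), shifting a one-dimensional potential by a constant leaves the electric field, which depends only on differences of the potential, unchanged:

    import numpy as np

    x = np.linspace(0.0, 1.0, 101)
    V1 = np.sin(2 * np.pi * x)          # some potential profile
    V2 = V1 + 5.0                       # globally shifted: V2 = V1 + c

    E1 = -np.gradient(V1, x)            # field computed from each potential
    E2 = -np.gradient(V2, x)

    print(np.allclose(E1, E2))          # True: the constant drops out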
Through introducing these conditions of gauge invariance into quantum equations, one can deduce that for particles that interact with fundamental forces, such as the electron (which carries electric charge and is acted upon by electromagnetism), there is an underlying field which is also undergoing a gauge transformation. In the case of the electron it is the electromagnetic field, which physicists were already aware of; but gauge invariance also postulated the gluon field, which is the basis for quantum chromodynamics, the theory that explains the strong nuclear force.

General relativity is the modern geometric description of gravity. It says that the gravitational force is related to the curvature of spacetime itself, i.e. to its geometry. To this end it generalises Einstein's theory of special relativity and links it to Newton's laws of gravity. Unlike in non-gravitational physics, spacetime is not just an arena in which physical processes take place; it is a dynamical field. The gravitational field at a fixed time can be described by the geometry of the three spatial dimensions at that time.

Gluons are gauge bosons which mediate the strong nuclear force, one of the fundamental forces. Like the photons which mediate the electromagnetic force, gluons have no rest mass and so travel at the speed of light. But unlike photons, which are themselves electrically neutral whilst mediating the electromagnetic force, gluons carry the charge associated with the strong nuclear force, i.e. colour. There are 8 different colours of gluon. Gluons are confined within hadrons, particles made up of quarks (which carry colour charge), and their interactions are limited to distances of approximately 10⁻¹⁵ metres.

GUT: see grand unified theory.

In the aftermath of the big bang, the universe was extremely hot and extremely dense. At these energies the laws of nature that we know were changed: the fundamental forces that we see in nature were unified, and the universe was in a state of grand unification. It is only as the universe expanded and cooled that gravitation, electromagnetism and the strong and weak nuclear forces ceased to be as one. Electroweak theory describes the unification of the weak nuclear force and electromagnetism. A grand unified theory would marry up electroweak theory with the strong nuclear force, bringing us closer to a unification of the four fundamental forces.

Gravitational waves are propagating disturbances in spacetime. The effect of a passing gravitational wave is to periodically stretch and compress space in the two directions perpendicular to the direction of propagation. The expected strain on the Earth due to these disturbances, which can be caused for instance by black holes merging, is very small, making detection extremely difficult.

The graviton is an as yet undiscovered particle that is believed to mediate the force of gravitation.
Much like the photon, which mediates the electromagnetic force, and the gluon, which mediates the strong nuclear force, it has no mass and therefore travels at the speed of light. It has a spin quantum number of 2, and is the only massless particle with that spin number. It has zero electric charge. Experimentally the graviton is incredibly difficult to observe, and is beyond the reach of current physics. The detection of gravitational waves may lead to further information about gravitons, but gravitational waves have not yet been detected. Theories of quantum gravity are among the largest standing issues in cosmology, and there are currently few mathematically consistent theories that can explain it. One of these theories is M-theory, which we believe to be the best explanation at this point in time.

Hawking radiation is a type of blackbody radiation emitted by black holes.

The holographic principle suggests that a region of space should contain at most one degree of freedom per Planck area. Within M-theory, the holographic principle suggests we are the shadows on the wall: the "room" is some larger, five-dimensional spacetime, and our four-dimensional world is just the boundary of this larger space. If we try to move away from the wall, we are moving into an extra dimension of space, a fifth dimension.

Edwin Hubble (b. 1889, d. 1953) was one of the main figures of astronomy in the 20th century. Using the Hooker 100-inch telescope at Mount Wilson Observatory in California, he discovered that galaxies are receding from us and from each other, via the shifting of the frequency of their electromagnetic emissions towards the red end of the spectrum. This realisation was crucial evidence for an expanding universe which, run in reverse, supports the notion of a big bang at the beginning of the universe. He was famously not awarded the Nobel Prize, on the basis that at the time research in astronomy was not eligible for the Nobel Prize in Physics.

Hubble's law states that all objects in deep space (i.e. galaxies) are receding from us and from each other (as can be seen from the fact that they are Doppler-shifted), and that the velocity of this recession is proportional to their distance from the Earth and other astral bodies. It is summarised mathematically by the equation v = H₀d, where v is the recession velocity, H₀ is the Hubble constant and d is the distance to the body. H₀ has an approximate value of 70 km s⁻¹ Mpc⁻¹ (kilometres per second per megaparsec), but there is disagreement over its precise value.

According to the theory of inflation, the early universe expanded exponentially fast for a fraction of a second after the big bang. A simple model for the expansion of the universe is the inflation of a balloon. A person at any point on the balloon might consider himself or herself to be at the centre of the expansion, as all neighbouring points are getting further away. During inflation the universe expanded by a factor of about e⁶⁰ ≈ 10²⁶, a one followed by 26 zeros. It transcends normal political/economic discussions of inflation.

The inflaton is a hypothetical particle and scalar field associated with the inflation of the universe that occurred moments after the big bang.
It is theorised that this inflation occurred because of a phase transition.

Instantons are four-geometries, surfaces of spheres, with the three-geometry slicing the sphere in half. They can be used to calculate the quantum process of universe creation, which cannot be described using classical general relativity. They usually exist only for small three-geometries, corresponding to the creation of a small universe. Note that the concept of time does not arise in this process: universe creation is not something that takes place inside some bigger spacetime arena; the instanton describes the spontaneous appearance of a universe from literally nothing. Once the universe exists, quantum cosmology can be approximated by general relativity, so time appears.

Intercommuting and loop production are properties exhibited by cosmic strings. Intercommuting refers to a process whereby strings exchange ends whenever they meet. A loop is produced whenever a string intercommutes with itself. Although cosmic strings have not been detected, this process of intercommuting can be seen in certain liquid crystals.

An interferometer is a machine that uses wave interference to learn about the waves in question: the waves are superimposed upon themselves to reveal their properties.

Kaluza-Klein theory seeks to unify two of the four fundamental interactions, gravitation and electromagnetism. (A similar theory, electroweak theory, already unifies the weak nuclear force and electromagnetism.) Its proposals extend general relativity into five-dimensional spacetime.

The kelvin is the SI (base) unit of temperature. Kelvin and Celsius degrees have the same magnitude, so a temperature in kelvin can be converted to Celsius by subtracting 273.15. Whereas the Celsius scale was created by dividing the difference in temperature between water freezing and boiling by one hundred and labelling the freezing point of water as 0, 0 kelvin is the point described by Lord Kelvin (after whom the unit is named) as "infinite cold", or absolute zero.

The Kibble mechanism is the mechanism by which cosmic topological defects form during a phase transition. Causal effects in the early universe can only propagate at the speed of light, which means that at a time t, regions of the universe separated by more than a distance d = ct can know nothing about each other. In a symmetry-breaking phase transition, different regions of the universe will choose to fall into different minima in the set of possible states. Topological defects are precisely the "boundaries" between these regions with different choices of minima, and their formation is therefore an inevitable consequence of the fact that different regions cannot agree on their choices.
These are laws which define the fundamental physical properties that characterise thermodynamic systems: temperature, energy and entropy (a property that drives systems towards equilibrium). They are:
- The zeroth law: if two systems are in thermal equilibrium with a third, they must also be in thermal equilibrium with each other.
- The first law: heat and work are forms of energy transfer. This is the law of conservation of energy: the internal energy of a closed system may change if heat or work is transferred in or out of the system.
- The second law: the entropy of any isolated system not in thermal equilibrium almost always increases; that is, an isolated system works towards thermal equilibrium.
- The third law: the entropy of a system approaches a constant value as the temperature approaches zero.

The light year is not, despite the name, a measure of time, but a measure of length: the distance light travels in a vacuum in a year of 365.25 days. Its exact value is 9,460,730,472,580,800 metres, or approximately 9.4607×10¹⁵ m. This is calculated by multiplying the number of days (365.25) by the number of seconds in each day (86,400), and then multiplying by the speed of light in a vacuum, 299,792,458 metres per second.

In a mathematical function, the highest and lowest values of that function over its domain are the maximum and minimum respectively. A local maximum or minimum is the highest or lowest value of the function over only part of the domain. An example of a function with several local maxima and minima is sin(x), which has infinitely many local maxima and minima of equal respective values.

Luminosity is an (astronomical) object's brightness as measured by the flux, or intensity of electromagnetic radiation, that the object gives out.

During the radiation era, shortly after the big bang, the universe consisted of free-moving protons, neutrons, electrons and other particles, including helium ions. All radiation was absorbed by the free electrons, making the universe opaque. When the universe had expanded sufficiently, the radiation could no longer interact with the electrons, and the universe became transparent. This process is called decoupling, and it marked the beginning of the matter era. Electrons, no longer absorbing radiation, instead joined with ions to form neutral atoms. Through gravity these atoms clumped together, eventually forming stars, galaxies and other stellar bodies.

Monopoles are zero-dimensional (point-like) objects which form when a spherical symmetry is broken. Monopoles are predicted to be supermassive and to carry magnetic charge. The existence of monopoles is an inevitable prediction of grand unified theories (GUTs); this is one of the puzzles of the standard cosmology.
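As a sanity check on the light-year arithmetic above (a throwaway sketch, nothing more):

    # Light year = days x seconds per day x speed of light.
    seconds_per_year = int(365.25 * 86_400)    # 31,557,600 seconds
    c = 299_792_458                            # speed of light, m/s

    print(seconds_per_year * c)                # 9460730472580800 metres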
We have five consistent string theories that can describe both the forces and the matter in our universe. We do not, however, have the tools to explore these theories over all possible values of their parameters. Over the past few years we have been able to explore them more thoroughly, and we now believe that the five string theories are all different aspects of the same underlying theory: M-theory. M-theory goes beyond string theory in that it predicts not ten but eleven dimensions of spacetime. The theory could have a membrane, as opposed to a string, as its fundamental object; curled up in the eleventh dimension, membranes would look like strings. It is for this reason that the M in M-theory originally referred to "membrane". Nowadays the M doesn't specifically refer to anything, and can stand for "mystery" or "mother of all", because M-theory is still largely unknown.

Nebulae are vast clouds of interstellar dust, hydrogen, helium and ionised gas. As the mass of a nebula grows, due to the slight gravitational attraction of dust particles towards each other, the mass compacts enough to form stars. Other material within the nebula, such as dust, can clump together to form planets and other planetary objects. Originally any large astronomical object was referred to as a nebula, other galaxies in particular.

A liquid crystal is a phase of matter which exhibits properties somewhere between those of a liquid and a solid crystal. Viewed at high resolution, liquid crystals can appear textured, as the molecules may be free to flow in a limited manner provided they stay within a crystal-like structure. Liquid crystals are used extensively in televisions and computer screens. The nematic phase of a liquid crystal is temperature dependent. In this phase, calamitic (rod-like) molecules align themselves individually, roughly parallel to each other along their long axes, rather like cigarettes in a packet. The result is that the molecules are free-flowing but retain this directional order. In this phase the crystals can show signs of intercommuting and loop production, properties expected to be exhibited by cosmic strings.

A neutron star is formed from the collapse of a larger star which has undergone a supernova. These stars, as the name suggests, are composed mostly of neutrons. Neutron stars are extremely hot. They typically have masses between about 1 and 2 solar masses (1 solar mass is approximately 2×10³⁰ kg, about 333,000 times the mass of the Earth), despite having radii roughly 10⁵ times smaller than the Sun's, which makes them extremely dense. The more compact a neutron star is, the more likely it is to form a black hole: this occurs when the star's density becomes so great that the gravitational force it exerts on itself exceeds its internal pressure, causing a collapse into a black hole.
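To see just how dense, here is a rough back-of-envelope computation; the 10 km radius is an assumed typical figure, not from the glossary:

    import math

    M = 1.4 * 2e30                      # mass: 1.4 solar masses, in kg
    R = 10_000                          # assumed radius: 10 km, in metres

    volume = (4 / 3) * math.pi * R**3
    print(M / volume)                   # ~6.7e17 kg per cubic metre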
The no-boundary proposal was developed in 1983 by Stephen Hawking and James Hartle. It describes a situation whereby the universe can spontaneously come into existence from literally nothing. Once the universe exists, quantum cosmology can be approximated by general relativity, so that time appears.

A particle is a nucleon if it is a particle that forms an atomic nucleus. There are two nucleons: protons and neutrons.

These are complicated manifolds which, like Calabi-Yau manifolds, may be the space in which the six extra dimensions proposed by certain string theories are found.

Particle cosmology is the study of the universe up to around 10⁻¹¹ seconds after the big bang. During this time the electroweak and strong forces were unified in a grand unified phase, which quickly changed to separate the strong and electroweak forces. Further on in time, the electroweak interaction separated to become electromagnetism and the weak nuclear force. It is possible to reach the temperature regimes of this era experimentally, allowing us to test theories, though speculation is still required within this time period.

The path integral is a mathematical approach to non-gravitational quantum theory, introduced by Richard Feynman of Caltech. In the path integral approach, the probability that a system in an initial state A will evolve to a final state B is given by adding up a contribution from every possible history of the system that starts in A and ends in B. For this reason a path integral is often referred to as a "sum over histories". For large systems, contributions from similar histories cancel each other in the sum and only one history is important: the history that classical physics would predict. For example, take a system whose starting position is a ball on a non-symmetrical hill. The probability that the system will end up with the ball at the bottom of the steeper side is given by summing the contributions of all paths the ball could take, including going down the other side of the hill. For mathematical reasons, path integrals are formulated in a background with four spatial dimensions rather than three spatial dimensions and one time dimension. A procedure known as "analytic continuation" is used to convert results expressed in terms of four spatial dimensions into results expressed in terms of three spatial dimensions and one time dimension; this effectively converts one of the spatial dimensions into the time dimension. This spatial dimension is sometimes referred to as "imaginary" time because the conversion involves so-called imaginary numbers. The path integral formulation of quantum gravity has many mathematical problems, and it is not clear how it relates to more modern attempts at constructing a theory of quantum gravity, such as string/M-theory. However, it can be used to correctly calculate quantities that can be calculated independently in other ways, e.g. black hole temperatures and entropies.
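As a toy illustration of a sum over histories (a discrete caricature with entirely made-up numbers, not Feynman's actual formulation), one can enumerate every lattice path of N steps, assign each a phase exp(iS) from a simple action, and add up the amplitudes for each endpoint:

    import cmath
    from itertools import product

    N = 8                                # number of time steps
    amplitude = {}                       # final position -> summed amplitude

    # Every history: a sequence of steps dx in {-1, 0, +1} starting at x = 0.
    for steps in product((-1, 0, 1), repeat=N):
        S = sum(0.5 * dx * dx for dx in steps)     # toy action, unit mass/step
        x_final = sum(steps)
        amplitude[x_final] = amplitude.get(x_final, 0) + cmath.exp(1j * S)

    # Relative probabilities: |amplitude|^2 for each final position.
    for x in sorted(amplitude):
        print(x, abs(amplitude[x]) ** 2)

Every one of the 3⁸ histories contributes; histories whose phases disagree largely cancel, which is the mechanism by which, for large systems, only the classical history survives.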
A phase transition is a change in the properties and form of matter due to temperature changes. For example, water changes from solid ice to liquid water to gaseous steam or vapour. As temperature drops and phase transitions occur, the symmetry of the resulting matter is reduced - again, vapour is more symmetric than water, which is more symmetric than ice. In terms of cosmology, when phase transitions occurred in the early universe, topological defects were formed. Some of the symmetries that were broken in the early universe led to the four fundamental forces becoming distinct forces. At higher temperatures, they reunite in a unified state.

The photon is an elementary particle. It is a gauge boson, in that it mediates one of the fundamental forces - in the case of the photon, the electromagnetic force. As mediators of the electromagnetic force, photons allow us to see things through the visible part of the electromagnetic spectrum, and the word is therefore often used interchangeably with "light". As they have no rest mass, they travel at the fastest possible speed, known as the "speed of light" (299,792,458 metres per second in a perfect vacuum). They have spin 1 and no electric charge.

This is simply the Planck length squared. Given that the Planck length is a fundamental unit of length, so too is the Planck area a fundamental unit of area.

This constant sets the size of energy quanta (discrete packets of energy) in quantum mechanics. It is not itself an energy, but the proportionality constant between the energy of a photon and the frequency of the associated electromagnetic wave, as expressed in the Planck-Einstein equation which links the two: E = hν, where ν is frequency, h is Planck's constant and E is energy. Its value is 6.62606957(29)×10⁻³⁴ J·s.
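A quick worked application of the Planck-Einstein relation just quoted (my own example; the 550 nm wavelength is an assumed illustrative value for green light):

```python
# Photon energy from E = h*nu, with nu = c / wavelength.
H = 6.62606957e-34     # Planck's constant, J*s
C = 299_792_458        # speed of light, m/s

wavelength = 550e-9            # m, assumed green-light wavelength
frequency = C / wavelength     # Hz, ~5.5e14
energy = H * frequency         # J,  ~3.6e-19 (roughly 2.3 eV)
print(f"nu ~ {frequency:.3g} Hz, E ~ {energy:.3g} J")
```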
This is the earliest period of time, from the beginning of time to 10⁻⁴³ seconds after the beginning of time. During this period, the fundamental forces of nature were all unified, due to the unimaginable temperature of the universe, and it is believed that gravity was as strong as the other forces (it is now by far the weakest of the forces).

A very, very small unit of length. Its precise value is 1.616199(97)×10⁻³⁵ m. It is a base unit within the Planck unit system, and it is calculated from the speed of light c, Planck's constant h, and the gravitational constant G. Specifically, it is given by the square root of ħG/c³, where ħ is the reduced Planck's constant, or Planck's constant divided by 2π. It is held to be the shortest measurable length: to discuss lengths on a scale shorter than this is generally taken to be meaningless, because measuring below this length is regarded as physically impossible. A theory that could describe physical laws at this level would be of great use in the search for a theory of everything.

This is the energy that a body possesses due to its position within a system. Forces act upon the body to restore it to a lower energy state or configuration; the difference between the energy states is the potential energy. When the force acts upon the body, the energy held within the body is converted into some other form of energy; this happens because the law of conservation of energy states that energy cannot be created or destroyed. An example of potential energy being converted into other forms is a skydiver. The position of the person (the body in the system) within the system (the Earth) - that is, being high up in the air in a plane - gives the person gravitational potential energy. Once they leap from the plane, this gravitational potential energy is turned into kinetic energy as they fall toward Earth. Once they have landed, their position at the surface of the Earth means that they have less gravitational potential energy, and they have been restored to a lower energy state.

This is the theory that explains the strong nuclear force, which is mediated by gluons exchanged between quarks. The charge of this force is known as colour. The force, which arises from the exchange of these gluons, does not weaken with distance as, say, gravity does, but rather remains roughly constant, on the order of 10⁵ newtons. This means that at no point does any quark separate from another, and so quarks can only be observed at the hadron level. This property is called confinement. Another property of QCD is asymptotic freedom: the interaction between quarks and gluons becomes very weak in extremely high-energy reactions.

This is the study of cosmology at temperature regimes where all four fundamental forces were unified. This unification, it is theorised, lasted from the Big Bang until some 10⁻⁴³ seconds after the Big Bang. Due to the temperatures involved, all quantum cosmology is theoretical and highly speculative.
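Recomputing the Planck scale quoted above from the definition √(ħG/c³), together with the corresponding Planck time t_P = l_P/c, which sets the 10⁻⁴³ s figure used in these entries (my own check, using CODATA-era constant values):

```python
# Planck length and Planck time from l_P = sqrt(hbar*G/c^3), t_P = l_P/c.
import math

HBAR = 1.054571726e-34   # reduced Planck constant, J*s
G = 6.67384e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458          # speed of light, m/s

l_planck = math.sqrt(HBAR * G / C ** 3)
t_planck = l_planck / C
print(f"l_P ~ {l_planck:.4g} m, t_P ~ {t_planck:.4g} s")
# ~1.616e-35 m and ~5.39e-44 s, matching the values quoted above
```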
Quantum field theory is a framework that extends quantum mechanics, which deals with individual particles, to field systems treated relativistically. Quantum field theories have been used to describe how three of the four fundamental forces act, mediated by an exchange of particles called bosons. The photon and the gluon, for example, are exchanged between electrons and quarks in the case of electromagnetism and the strong nuclear force respectively. In quantum field theory, these natural fields pervade a region of space. The particles that mediate the fields - the gauge bosons associated with each field, like the aforementioned photon with electromagnetism - are quanta of those fields: ripples in the field carrying small amounts of energy. Other particles that act within a field, for example the electron within the electromagnetic field, are thought of in a similar manner, albeit as different ripples and excitations. These fields are of variable range. The colour field of quantum chromodynamics, for example, acts over the range between quarks within a nucleon, while other fields, such as the electromagnetic field, are infinite in range.

The search for a theory of quantum gravity is the search for a theory that can explain the effects of the fundamental force of gravity, as described by general relativity, at the quantum level, and marry this up with quantum mechanics, the framework of models that explains the other fundamental forces: the strong nuclear, weak nuclear and electromagnetic forces. Candidate theories of quantum gravity include string theory, loop quantum gravity and M-theory.

This phase transition occurred approximately one millionth of a second after the Big Bang. This was when the quark-gluon plasma underwent a phase transition, resulting in quarks forming into hadronic matter, i.e. nucleons.

Quintessence is a theory of dark energy, proposed to explain the acceleration of the universe's expansion. It is dynamical, resulting in an attractive or repulsive force depending on the amount of kinetic energy relative to potential energy in the universe. As a repulsive force, it overcomes gravity's attraction over large scales, resulting in an accelerated expansion. Quintessence is hypothesised to have become repulsive approximately 10 billion years ago.

This refers to the period of time from just after the Big Bang to approximately 300,000 years after its beginning. During this time, the universe consisted of free-moving protons, neutrons, electrons and other particles. All radiation was absorbed by the free electrons, making the universe opaque. Protons and neutrons were combining to form deuterium, a heavy isotope of hydrogen, and then helium; however, the temperature of the universe was so high that these existed as free ions in the plasma that was the universe. Only when the universe had expanded sufficiently did the electrons stop absorbing the radiation and instead join with the ions to form neutral atoms. This marks the beginning of the matter era, in which we still live.

Recombination was the time, approximately 300,000 years after the Big Bang, when electrons and protons bound together to form atoms of hydrogen. Before then, the universe was still too hot for atoms of hydrogen to form. Only after the universe had expanded, and therefore cooled, sufficiently did the formation of hydrogen become possible.

When the source of a wave moves away from us, we observe a change in the frequency of that wave.
An example would be an ambulance or fire truck - we hear a lower pitch in its siren once it has passed us. This is the Doppler effect. It is not limited to sound waves, but applies to any kind of wave, including electromagnetic waves. This means that as an electromagnetic wave source moves away from us, the frequency of the wave decreases. As frequency and wavelength are inversely related - one goes up as the other goes down - the wavelength increases. This shifts the wavelength closer to the red end of the spectrum (speaking in terms of the visible part of the electromagnetic spectrum; the wavelength may of course not lie in the visible part at all). This is redshift, and it is something we detect in the light from far-away galaxies and other electromagnetic sources. It leads us to the conclusion that the universe is expanding.
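A small worked example of the redshift just described (my own sketch; the observed wavelength is an assumed value chosen for illustration):

```python
# Redshift z from emitted vs observed wavelength, using hydrogen's
# H-alpha line (rest wavelength 656.3 nm) as the reference.
lambda_emitted = 656.3    # nm, rest wavelength
lambda_observed = 689.1   # nm, assumed observed value for illustration

z = (lambda_observed - lambda_emitted) / lambda_emitted
print(f"redshift z ~ {z:.3f}")            # ~0.05

# For small z the recession velocity is approximately v = z * c.
C = 299_792_458  # m/s
print(f"v ~ {z * C / 1000:.0f} km/s")     # ~15,000 km/s
```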
Scalar fields associate a scalar value (a number, or a physical quantity) with every point in a space. Examples of scalar fields include pressure distributions, temperature variations, and the gravitational potential.

This is a point in spacetime where the curvature of spacetime becomes infinite. It is a region of extremely high density into which matter or light is attracted. Singularities can be found both at the centre of black holes and on their own. Inside a singularity, the laws of physics are distorted to the point that they are no longer applicable.

Spacetime is the concept of space and time being part of the same continuum. We take the three everyday, commonplace dimensions - the x, y and z dimensions used in geometry - and ascribe to them a fourth dimension of time. This allows us to map out any event that takes place in the universe by a set of coordinates: three of space, to give us the location, and one of time, to tell us when the event occurred. This merging of time and space is important and must be accounted for, because relativity tells us that the observed rate at which time passes changes with an object's velocity relative to the observer. Gravitational fields can also change the passage of time. On quantum scales, therefore, it is important to account for time within theoretical frameworks, whereas in classical physics this is unnecessary.

The structure of spacetime is detailed in Einstein's theory of special relativity. The theory draws on the principle of relativity as laid out by Galileo, which states that there is no absolute state of rest, and that all motion is relative to other motion. Two principles are laid out in the theory: that the laws of physics are the same for observers whose motion is uniform relative to each other, and that the speed of light in a vacuum is the same for all observers, regardless of any relative motion. A consequence is that observers with different relative velocities can measure different time intervals and lengths for the same events, even though the laws of physics are the same for all of them. The effects of these principles can be seen in various ways. One of the most interesting is time dilation. A clock sitting stationary in front of you will tick faster than a clock which is moving away from you. This has been shown to be true for astronauts, who come back from space very slightly younger than they would have been had they remained on Earth. Another well-known consequence of the theory is the energy-mass equivalence relationship, as expressed by the equation E = mc², probably the most famous equation of all time. This states that energy and mass are interchangeable and are related by a function of the speed of light in a vacuum, c.

The speed of light in a vacuum, c, is shown to be not just the speed that photons travel at; it is a key physical constant related to the nature of space and time. Special relativity shows that no object with rest mass can travel at the speed of light. It is the speed at which photons, or indeed any particle with zero rest mass, travel in a vacuum (since energy and mass are equivalent, as E = mc² shows, a travelling particle has kinetic energy and therefore more energy than the same particle at rest). Its value is 299,792,458 metres per second (m s⁻¹). As explained in the theory of special relativity, the speed of light is the fastest that any form of energy or information can travel in the universe.
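Putting rough numbers to the time dilation and E = mc² claims above (my own sketch; the ISS speed is an assumed typical value, and only the special-relativistic effect is computed, ignoring the gravitational contribution of general relativity):

```python
# Special-relativistic time dilation for an orbiting clock,
# plus the rest energy of one gram of matter from E = m*c^2.
import math

C = 299_792_458      # m/s
v = 7660.0           # m/s, assumed approximate ISS orbital speed

gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
seconds_per_year = 3.156e7
lag = (gamma - 1.0) * seconds_per_year
print(f"orbiting clock lags ~ {lag * 1e3:.1f} ms per year")  # ~10 ms

print(f"E = mc^2 for 1 g: {1e-3 * C ** 2:.3g} J")            # ~9e13 J
```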
Spin is an intrinsic quantum property of particles, defined by a spin number that can be either a whole integer (1, 2, 3, etc.) or a half integer (1/2, 3/2, 5/2, etc.), and can be positive or negative. All particles carry a spin quantum number; the Higgs boson is the only known particle with spin zero, although other zero-spin particles, such as the inflaton, have been hypothesised. To an extent, it is easy to draw an analogy between quantum spin and the classical rotational spin we encounter in everyday life, for example in a spinning top. Electrically charged particles with spin, such as electrons or positrons, possess a magnetic moment, since a moving electric charge generates a magnetic field. This analogy, however, only takes us so far. The spin quantum number tells us about the rotational symmetry of a particle. A particle with zero spin looks exactly the same from all sides. A particle with spin will look different if rotated, but will regain its symmetry if it is rotated a certain number of times. Here, an analogy with a deck of cards is useful. Consider any face card: these are symmetrical every time you spin them half way around, or 180 degrees. Consider now the ace of spades. This card, if placed with the point of the spade facing up, requires a full 360-degree rotation until it looks the same again. A particle with spin 1 acts like the ace of spades, requiring a full rotation, whereas a spin-2 particle is symmetrical under 180-degree rotations, like the face card. A half-integer-spin particle requires two full rotations to return to itself. This kind of rotational symmetry has no analogue in the macroscopic world. Crucially, whether a particle has half-integer or whole-integer spin tells us how it behaves. Particles with half-integer spin, or fermions, obey a set of statistics known as Fermi-Dirac statistics. Particles with whole-integer spin, or bosons, obey a set called Bose-Einstein statistics. One of the key differences between these two sets of statistics is that particles which obey Fermi-Dirac statistics are subject to the Pauli exclusion principle. This states that such particles may not occupy the same quantum state as each other. Crucially, this means that you cannot make fermions of the same quantum state occupy the same space. This is why fermions are the particles that make up the matter of the universe. They include quarks, which combine to make protons and neutrons, and leptons, a set of particles that includes electrons. Bosons, which do not obey Fermi-Dirac statistics and are consequently not subject to the Pauli exclusion principle, fulfil other roles: some mediate the fundamental forces of nature (these are the gauge bosons), and the Higgs boson gives rise to mass in other particles.
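The card-rotation rule and the fermion/boson split above can be captured in a few lines of Python (a toy illustration of the pattern described, not anything from the source):

```python
# For spin s != 0, the 'looks the same again' rotation is 360/s degrees:
# spin 1 -> 360, spin 2 -> 180, spin 1/2 -> 720 (two full turns).
from fractions import Fraction

def symmetry_angle_degrees(spin):
    # Spin 0 looks the same from every angle; handle it separately.
    return None if spin == 0 else 360 / spin

for s in [Fraction(0), Fraction(1, 2), Fraction(1), Fraction(2)]:
    kind = "boson" if s % 1 == 0 else "fermion"
    angle = symmetry_angle_degrees(s)
    label = "any angle" if angle is None else f"{float(angle):.0f} degrees"
    print(f"spin {s}: {kind}, symmetric after {label}")
```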
Also known as the ΛCDM or Lambda-CDM model, this is the best and most widely used model to explain the expansion of the universe, the origin of the cosmic microwave background, the nucleosynthesis of light elements, and the formation of galaxies and large-scale structure.

This is a set of mathematical tools that allows us to study the thermodynamic properties, such as work, heat and entropy, of a large number of particles, letting us look at the system in both atomic-level and macroscopic-level detail. It allows us to explain thermodynamics in ways that apply to both classical and quantum physics, and to extrapolate macroscopic predictions from microscopic properties.

In the Standard Model of particle physics, particles are considered to be points moving through space, tracing out a line called the world line. To take into account the different interactions observed in nature, one has to provide particles with more degrees of freedom than only their position and velocity: mass, electric charge, colour (the "charge" associated with the strong interaction) and spin. This model was designed within a framework known as quantum field theory (QFT), which allows us to build theories consistent with both quantum mechanics and the special theory of relativity. These theories describe with great success three of the four known interactions in nature: electromagnetism and the strong and weak nuclear forces. Unfortunately, gravity, as described by Einstein's general relativity, does not fit into this scheme.

String theory replaces these different particle types with a single fundamental building block: a "string". Strings can be closed, like loops, or open, like a hair. As a string moves through time it traces out a tube or a sheet (depending on whether it is closed or open). The string is free to vibrate, and different vibrational modes of the string represent the different particle types, as different modes are seen as different masses or spins. One mode of vibration, or 'note', makes the string appear as an electron, another as a photon. There is even a mode describing the graviton, the particle carrying the force of gravity. This means we can make sense of the interactions of gravitons in a way we could not in QFT. It is this ability of string theory to provide a consistent model that includes all four fundamental interactions that has led it to be dubbed a 'theory of everything'. The problem is that there are five different versions of string theory. This is why we now look to M-theory, which has a place for all five theories, as the strongest candidate for our 'theory of everything'. As a point of note, string theory predicts that spacetime has ten dimensions. Although we only observe three dimensions of space and one of time, we can assume that six of these dimensions are curled up very tightly, so that we may never be aware of their existence. Having these so-called compact dimensions is very beneficial, as we can suggest that degrees of freedom, such as the electric charge of an electron, simply arise as motion in the extra compact dimensions.

There are four fundamental forces in nature: electromagnetism, the weak nuclear force, the strong nuclear force and gravitation. The weak nuclear force is associated with radioactivity in unstable nuclei, specifically the decay of a neutron into a proton. When the temperature is hot enough, as in the universe shortly after the Big Bang, electromagnetism and the weak nuclear force merge to form the electroweak force. The strong nuclear force binds together neutrons and protons inside nuclei. The mathematical theory describing the elementary particles of this force, quarks and gluons, is known as quantum chromodynamics (QCD). Theories that unify the strong nuclear force with electroweak theory are known as grand unified theories, or GUTs.

A supercluster is a vast grouping of smaller galaxy clusters and groups; superclusters are some of the largest structures in the universe. They can span from several hundred million light years to over one billion light years. Superclusters can contain galaxy bubbles, sheets, voids and filaments, which are smaller structures within the supercluster.
Nearly all galaxies are found within superclusters, and in between superclusters there are usually large voids. Our own supercluster, called the Virgo Supercluster, contains the Local Group, the Virgo Cluster and some 100 other galactic groups and clusters. Its diameter is approximately 100 million light years.

Supergravity is a theory which follows on from supersymmetry. It is theorised that, in the same way that photons mediate electromagnetism, gluons the strong nuclear force, and the W and Z bosons the weak nuclear force, so too does the as-yet-undiscovered graviton mediate the gravitational force. In supergravity, the graviton has a heavier superpartner whose spin differs by 1/2. So far, as with supersymmetry, there has been no observational evidence for supergravity.

A supernova is a very powerful stellar explosion that can often outshine an entire galaxy. A star undergoes a supernova either when a very old massive star undergoes sudden gravitational collapse, releasing vast quantities of gravitational energy, or through the reignition of nuclear fusion in the core of a degenerate star such as a white dwarf. The explosion ejects a huge quantity of the star's matter, resulting in a supernova remnant. Certain types of supernova have known luminosities, so they can be used as 'standard candles': we can work out how far away such an object is by comparing its known luminosity with its observed brightness.
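The 'standard candle' logic above is just the inverse-square law run backwards. A minimal sketch (my own example; the luminosity and measured flux below are assumed illustrative numbers, not data from the source):

```python
# Distance from a standard candle: F = L / (4*pi*d^2)  =>  d = sqrt(L / (4*pi*F)).
import math

L_SUN = 3.828e26                  # W
L_supernova = 5e9 * L_SUN         # assumed peak luminosity, ~5 billion suns
F_measured = 1e-14                # assumed measured flux, W/m^2

d = math.sqrt(L_supernova / (4 * math.pi * F_measured))
LIGHT_YEAR = 9.461e15             # m
print(f"distance ~ {d / LIGHT_YEAR:.3g} light years")   # ~4e8 ly here
```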
String theory states that all particles are representations of different vibrations of a fundamental building block: a string. As a theory, it is able to describe the interactions of the particle that mediates gravitation, the graviton. In this way, and by being able to describe all other particles and their interactions, it is able to unite the four fundamental forces of nature, and is therefore a 'theory of everything'. The original string theory described only particles with integer spins, called bosons - the particles that mediate the fundamental forces, including the photon, the gluon and the graviton. The other class of particles, which have half-integer spin, called fermions, was not described. These are the particles that constitute matter as we know it, such as quarks and electrons. By introducing supersymmetry to bosonic string theory, we obtain a new theory that describes both the forces and the matter that make up the universe. This is superstring theory. There are three different superstring theories that have no mathematical inconsistencies. In two of them, the fundamental object is a closed string, whilst in the third, the string is open. By mixing the best aspects of bosonic string theory and superstring theory, we can create two other consistent theories of strings, the heterotic string theories.

Supersymmetry is a theory which postulates that for every elementary particle there is a more massive "superpartner" whose spin differs by 1/2. The theory arose to solve mathematical difficulties in quantum field theory and in reconciling general relativity with quantum field theory. These inconsistencies arise because the Higgs boson, the boson whose interaction with other particles gives them mass, appears to gain large amounts of mass through interactions with itself. Solving these inconsistencies would give physicists a way to marry quantum mechanics and gravity at the smallest scales. Superpartners are also a possible candidate for dark matter. No superpartners have yet been detected, and no evidence yet exists to support supersymmetry. This is because, in order to observe particles of this mass, we need incredible amounts of energy, which so far we have been unable to generate. It is hoped that the Large Hadron Collider at CERN might detect evidence of supersymmetric particles.

This is the set of points in space where decoupling occurred, approximately 380,000 years after the Big Bang, at just the right distance that we are now seeing those photons reach us as the cosmic microwave background relic radiation.

Symmetry breaking occurs when a system in some state of symmetry moves into a different configuration, resulting in the loss of that symmetry. Consider a ball on a hill. The ball is symmetrical. The hill is also symmetrical. If the ball is on top of the hill, the ball-and-hill system is symmetrical. If the ball rolls down the hill, the ball and the hill are individually still symmetrical, but the system of the ball and the hill is now asymmetrical. This is symmetry breaking. In a cosmological context, this happened as the universe cooled down after the Big Bang: elementary particles changed state in what is known as a phase transition, and symmetry that these particles previously exhibited was broken. These symmetries are associated with different fundamental forces; this is why some particles are acted upon by these forces, and others not. The symmetries are restored at higher temperatures, however.

These are a type of topological defect that is hypothesised to form when large symmetries are broken.
They are unstable and prone to collapse. Unlike certain other topological defects, such as magnetic monopoles, they are delocalized and occur over large regions. No evidence of them has been found as yet.

This is a gravitational anomaly located in the Centaurus Supercluster. It is a localized concentration of mass, of unknown origin, equivalent to tens of thousands of galaxies. Its mass is so large that (as the name suggests) its gravitational attraction is altering the motion of galaxies and galaxy clusters in a region hundreds of millions of light years across.

In the aftermath of the Big Bang, the universe was extremely hot and extremely dense. At these energies, the laws of nature as we know them were changed. The fundamental forces that we see in nature were unified - it is only as the universe expanded and cooled that gravitation, electromagnetism and the strong and weak nuclear forces ceased to be as one. Electroweak theory describes the unification of the weak nuclear force and electromagnetism. A theory of everything will marry up all the fundamental forces. The difficulty is that, whilst quantum chromodynamics and electroweak theory describe the strong and weak nuclear forces and electromagnetism on a well-understood quantum basis, there is no consistent theory for describing gravity on such a basis. M-theory, and the string theories behind it, are being explored as possible candidates.

These are configurations of matter that form during phase transitions and symmetry breakings, such as those that occurred in the very early universe. They are configurations of matter in the old, symmetrical phase that remain stable in the new phase, in which the symmetry that previously held is now broken. Examples of these defects include monopoles, cosmic strings, domain walls and textures.

Within quantum field theory, particles may move from higher to lower energy states, as happened in the very early universe as it expanded and cooled. These lower energy states, or vacuum states, may be different whilst possessing the same amount of energy; such states are degenerate. The particle therefore has a chance of falling into any of these degenerate vacuum states, unless there is something outside the system that singles one of them out.

What is a retrofit? Main entry: ret·ro·fit. Pronunciation: \'re-trō-fit\. Function: transitive verb. 1: to furnish (as a computer, airplane, or building) with new or modified parts or equipment not available or considered necessary at the time of manufacture. 2: to install (new or modified parts or equipment) in something previously manufactured or constructed. 3: to adapt to a new purpose or need. 4: to save a lot of money on energy costs! 5: to update your current lighting system.

Innovation and continuous improvement in the field of lighting have given rise to tremendous energy-saving opportunities. Lighting is an area with enormous energy-efficiency potential, starting at the design stage by incorporating modern energy-efficient lamps and luminaires.
Following responsible operational practices can also significantly reduce associated energy costs. Lighting is not only a very high priority when considering facility retrofitting, but also a high-return, low-risk investment. By installing new lighting technologies such as dimmers, photo sensors, occupancy sensors, and timers, facilities can reduce the amount of electricity consumed and the energy costs associated with lighting. There are several types of energy-efficient, affordable lighting technology: compact fluorescent lamps, light-emitting diodes (LEDs), and lighting controls. Below are a few examples of energy-saving opportunities with efficient lighting:
• Installation of energy-efficient fluorescent lamps in place of conventional fluorescent lamps - for example, converting from T12 lamps to T8 or T5 lamps.
• Installation of compact fluorescent lamps (CFLs) in place of incandescent lamps.
• Installation of high-pressure sodium (HPS) lamps for applications where color rendering is not critical; metal halide lamps should be considered when correct color is important.
• Installation of LED exit signs to replace incandescent ones.
• Installation of high-frequency (HF) electronic ballasts in place of conventional ballasts.
• Installation of occupancy sensors, an inexpensive way to ensure that unused lights do not remain on.
• Installation of microprocessor-based controllers.
• Installation of photocells, devices that automatically detect the natural light level in a room and adjust the intensity of the artificial light accordingly.
• Replacing incandescent wall lights and exit-sign lighting with CFL- or LED-lit units will not only save a considerable amount of energy, it will also significantly reduce the labor costs associated with changing light bulbs.
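The payback arithmetic behind one of the swaps listed above is straightforward. A minimal sketch (my own example; the wattages, burn hours, and electricity price are all assumptions, not figures from the source):

```python
# Annual savings from replacing a 60 W incandescent with a 13 W CFL.
watts_old, watts_new = 60.0, 13.0
hours_per_year = 3000.0          # assumed annual burn hours
price_per_kwh = 0.12             # assumed electricity price, $/kWh

kwh_saved = (watts_old - watts_new) * hours_per_year / 1000.0
savings = kwh_saved * price_per_kwh
print(f"{kwh_saved:.0f} kWh and ${savings:.2f} saved per lamp per year")
# 141 kWh and about $16.92 per lamp per year under these assumptions
```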
I stand at the edge of a dream, my breath a cloud. Snow has transformed my familiar landscape and chilled my toes. This careless arm of the East Verde where my grandchildren splashed away the summer has frozen over, and snow has rendered abstract the shapes of rocks and junipers. Water gleams in all its forms about me - still flowing in the stream, gathered in vapor on my breath, crystallized into snow on every hand, frozen into ice underfoot. I wish I knew the proper prayer - the light step of the ritual dance - the intonation of the chant - to offer at such a moment. Instead, I kneel at the edge of the stream and study the ice, perhaps the most unlikely of water's forms. Here's a nugget to suck on: chill almost any other liquid and the jittery molecules will slow down, bouncing about less as the temperature drops. Eventually, the liquid will settle into a stable crystal lattice, which takes up less space than the liquid did. That's why other liquids most sensibly contract when they freeze. But not water, thank the Lord. Water is made of two atoms of hydrogen linked to one atom of oxygen. These amiable molecular companions actually share electrons to keep everyone happy. Moreover, a water molecule has a slight positive electrical charge at one end and a faint negative electrical charge at the other end. This accounts for the nearly miraculous chemistry of water, on which life on the planet utterly depends. For starters, as water cools below 32 degrees F, the molecules slip into a strange and counter-intuitive crystalline lattice. Once they click into place, they actually take up about 9 percent more space than they did as a warm liquid. Now, that didn't work out so well for folks in Rim Country who left the water on in empty houses during the big freeze, since the expanding ice in neglected pipes can split open even copper or steel. But water's demented determination to expand when it ought to contract makes life on the planet possible. If water contracted as it froze, then sea ice would form at the surface every winter and sink to the bottom. Over time, the oceans would freeze solid - and we could not be here. We could go on and on about the fortunate strangeness of water. For instance, the positive and negative ends of water molecules account for surface tension - so useful to water skiers, stone-skippers and water bugs. But it also explains what's called "capillary action."
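The "about 9 percent" figure above can be sanity-checked in one line (my own check; it implies a density for ice that matches the commonly quoted value):

```python
# If water at 1.000 g/cm^3 expands ~9% on freezing, the implied ice density is:
rho_water = 1.000                 # g/cm^3
rho_ice = rho_water / 1.09
print(f"{rho_ice:.3f} g/cm^3")    # ~0.917, consistent with ice's measured density
```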
May 4, 2012 - Researchers in Spain have found that at least some of the individuals claiming to see the so-called aura of people actually have the neuropsychological condition known as synesthesia (specifically, emotional synesthesia). This might be a scientific explanation of their alleged ability. In synesthetes, the brain regions responsible for processing each type of sensory stimulus are intensely interconnected. Synesthetes can see or taste a sound, feel a taste, or associate people or letters with a particular color. The study was conducted by Oscar Iborra, Luis Pastor and Emilio Gomez Milan of the University of Granada Department of Experimental Psychology, and has been published in the journal Consciousness and Cognition. This is the first time that a scientific explanation has been offered for the esoteric phenomenon of the aura, a supposed energy field of luminous radiation surrounding a person like a halo, which is imperceptible to most human beings. In basic neurological terms, synesthesia is thought to be due to cross-wiring in the brain of some people (synesthetes); in other words, synesthetes have more synaptic connections than "normal" people. "These extra connections cause them to automatically establish associations between brain areas that are not normally interconnected," Professor Gomez Milan explains. The new research suggests that many healers claiming to see people's auras might have this condition. One of the University of Granada researchers remarked that "not all 'healers' are synesthetes, but there is a higher prevalence of this phenomenon among them. The same occurs among painters and artists, for example." To carry out the study, the researchers interviewed a number of synesthetes, including a 'healer' from Granada, Esteban Sanchez Casas, known as "El Santon de Baza". Many local people attribute "paranormal powers" to El Santon because of his supposed ability to see the aura of people, "but, in fact, it is a clear case of synesthesia," the researchers explained. According to the researchers, El Santon has face-color synesthesia (the brain region responsible for face recognition is associated with the color-processing region); touch-mirror synesthesia (when the synesthete observes a person being touched or experiencing pain, he or she experiences the same); high empathy (the ability to feel what another person is feeling); and schizotypy.

IT security is generally defined as a defensive approach to protect a company and its assets from unauthorized access by an intruder. IT security efforts include network security appliances, honeypots, robust authentication, limiting authorization to the least necessary privileges, and other perimeter security defenses. However, these approaches do not provide definitive protection of the company's most valuable asset, its data, because a single intrusion could result in sensitive data being compromised. Additionally, in today's workplace culture, the disgruntled employee may be as much of a threat as any external attacker. Data encryption is a direct response to internal and external security threats that may also meet compliance regulations. Encryption provides strong security for data "at rest" - in our case, the data stored in the database - but to be effective it should be implemented as part of a broader security plan. There are many issues involved in implementing encryption: details that require decisions and actions to ensure the success of the implementation and the security of the data. This document discusses the issues associated with database encryption implemented using SQL Server's native transparent database encryption (TDE) mechanism.

Encryption has been integral to human history, beginning with the Babylonian use of intaglio; other historical examples include the Caesar cipher, the scytale transposition cipher, Enigma, and even Jim Sanborn's Kryptos sculpture. Throughout history our society has enjoyed the ability to protect information using cryptographic methods including steganography, microdots, invisible ink, digital watermarks, and encryption, which may be defined as the conversion of data so as to keep its meaning private. As the amount of sensitive data collected by commercial entities continues to grow, the regulatory requirements for protecting that data will become more robust; meeting the regulatory requirements will necessarily require the continued use of data encryption methods.

Encryption requires the application of an algorithm to transform the target data into a form that is unusable to anyone who does not have access to the encryption process used. In practical terms, encryption applies a cryptographic algorithm with a "key" to the target data, producing the encrypted form of the data, which cannot be accessed without the key used to encrypt it. The two primary forms of key encryption are symmetric and asymmetric, which are distinguished by the number of keys used in the encryption/decryption process. Symmetric encryption uses a single key, while asymmetric encryption uses a pair of keys, generally referred to as public and private keys.
While asymmetric encryption appears ideal for implementation, because only the public key need ever be shared, it has disadvantages with regard to performance. A sampling of asymmetric algorithms includes RSA, DSA, ElGamal, ECDSA, and XTR. Figure 1 demonstrates the asymmetric encryption process. [Figure 1: asymmetric key encryption/decryption process]

Symmetric algorithms require a single key for both encryption and decryption, which allows for high performance; however, with this approach the strength of the encryption depends on the security of the key. Common symmetric algorithms include AES/Rijndael, Blowfish, DES, Triple DES, Serpent, and IDEA, to name only a few. Figure 2 demonstrates the symmetric encryption process. [Figure 2: symmetric key encryption process]

Both symmetric and asymmetric encryption approaches are vulnerable to brute-force attacks and cryptanalysis. Brute force is an attack during which every possible permutation of the key value is attempted. Cryptanalysis, on the other hand, applies computational techniques to circumvent the encryption. In general, the use of sufficiently long keys will mitigate these attacks. In summary, a symmetric key algorithm is fast but less secure than an asymmetric algorithm. Another approach is a hybrid, wherein a symmetric key is used to encrypt the data while an asymmetric key is used to encrypt the symmetric key. It may be important to know, in order to maintain perspective, that there is only one encryption algorithm that is impossible to crack, the one-time pad (OTP); any other algorithm may be broken given sufficient time and/or computing resources. Security concerns in general, and encryption specifically, are new concepts for most IT professionals; therefore, a glossary of security/encryption terms is included as an appendix for reference.
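The hybrid approach described above can be sketched in a few lines of Python (my own illustration, not part of the original paper); it uses the third-party `cryptography` package, with Fernet standing in for the symmetric layer and RSA-OAEP for the key-wrapping layer.

```python
# Hybrid encryption sketch: a symmetric key encrypts the data; an asymmetric
# (RSA) key pair protects that symmetric key. Requires 'pip install cryptography'.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric layer: fast bulk encryption of the data itself.
data_key = Fernet.generate_key()                 # random Fernet key (AES + HMAC)
ciphertext = Fernet(data_key).encrypt(b"sensitive row contents")

# Asymmetric layer: wrap (encrypt) the symmetric key with the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = private_key.public_key().encrypt(data_key, oaep)

# Decryption: unwrap the key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"sensitive row contents"
```

Only the wrapped symmetric key and the ciphertext need to be stored together; the private key stays elsewhere, which mirrors the key-hierarchy idea the TDE discussion below relies on.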
Overview of transparent database encryption. The primary benefit of transparent database encryption (TDE) is the ability to encrypt data without affecting any application that uses the data, while providing security for the entire database. TDE is implemented at the database level; unlike cell-level encryption, TDE does not require modification of applications or of database column data types. Furthermore, database-level encryption allows for higher performance than cell-level encryption. However, TDE may allow more data leakage, because encrypted data is decrypted when read into the buffer pool; the data is therefore not protected if the operating system writes data from memory to disk during paging operations, hibernation, or memory dumps, nor is the data protected while in memory.

Database encryption is achieved by leveraging the Data Protection API (DPAPI) in Windows, which protects the service master key (SMK), which protects the database master key (DMK), which is used to protect the certificate or asymmetric keys, which are used to protect the database encryption key (DEK). These dependencies create a security chain from the operating system to the data, eliminating user interaction and thus strengthening security. The relationships and dependencies between keys are represented in Figure 3. [Figure 3: SQL Server encryption key hierarchy with TDE and EKM (source: BOL - http://msdn.microsoft.com/en-us/library/cc278098.aspx)]

The hierarchy of keys in TDE is protected from the DPAPI down to the DEK, allowing the server to manage encryption and decryption automatically. The DMK and the certificate are stored in the master database, while the DEK is stored in the user database. This hierarchy and the key-management chain give TDE the capability to transparently encrypt and decrypt the database. The process for encrypting a database is conceptually simple:
- Create a master key
- Obtain an authentication certificate
- Create the DEK
- Enable TDE on the database
However, significant complexity will be introduced if the database encryption strategy is undertaken without proper planning that addresses the important implementation issues discussed in the following section.

The level of security necessary to protect the database should be documented during the planning phase. Individually and in combination, the following encryption mechanisms are available to secure the database: the Encrypting File System (EFS) and transparent database encryption (TDE). Discussion of the benefits and performance implications of each mechanism and their combinations is beyond the scope of this paper.

Data encryption must address two equally important issues: encryption technology and cryptographic key management. Encryption technology provides for variable granularity of data protection, performance, and integration with existing applications, as well as ease of implementation and management. However, the success of the selected encryption strategy may depend most on key-management policies and processes. Key-management issues include key access, key storage, and the choice of cryptographic algorithm. Key management is one of many important issues that must be considered when planning the encryption project.
The important issues to consider during the planning phase of the encryption project are listed below:
- Encryption algorithm: DES, Triple DES, TRIPLE_DES_3KEY, RC2, RC4, 128-bit RC4, DESX, 128-bit AES, 192-bit AES, and 256-bit AES.
- Key management: key storage, hardware security modules (HSMs), key scheduling, and key availability/mobility/security.
- Performance impact of encryption/decryption: Microsoft claims 3-5%; however, independent tests indicate 6-12%.
- tempdb encryption: encrypting any one database on an instance will encrypt tempdb.
- The transaction log is encrypted.
- Log shipping implementation changes: shipping logs from an encrypted database requires the recipient instance to possess the key in order to apply the logs.
- Backup and recovery plan changes: encrypted databases cannot be recovered to a different instance without the key.
- Disaster recovery plan changes: encrypted databases cannot be recovered to a different instance without the key.
- Increased disk space requirements: there is no SQL Server native backup compression; third-party tools may be available, but, in general, encrypted data cannot be significantly compressed.
- TDE operates during I/O; therefore, any data written to disk outside of the buffer pool is not protected.
- There is no support for the FILESTREAM data type.
The diagram in Figure 4 represents a nominal encryption-project planning process, with each major area of consideration represented. The end result of the planning process is a document detailing the decisions made to address the issues related to encrypting the database. [Figure 4: encryption planning process]
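To put the key lengths in the algorithm list above in perspective, here is a back-of-envelope keyspace comparison (my own sketch; the attack rate is an assumed hypothetical, and nominal key length is used, ignoring e.g. Triple DES's lower effective strength):

```python
# Average brute-force time for various nominal key lengths, assuming a
# hypothetical attacker testing 1e12 keys per second.
SECONDS_PER_YEAR = 3.156e7

def brute_force_years(key_bits, keys_per_second=1e12):
    # On average, half the keyspace must be searched.
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

for name, bits in [("DES", 56), ("Triple DES (3-key)", 168),
                   ("AES-128", 128), ("AES-256", 256)]:
    print(f"{name:>18}: {brute_force_years(bits):.3g} years on average")
```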
A comprehensive IT security policy provides a layered defense against threats to the system. However, even the most thorough perimeter network and physical defenses do not remove the vulnerability of plaintext data stored in databases. Data encryption provides a means to protect sensitive data from unauthorized access as part of a coordinated IT security policy that includes network security, robust authentication and authorization, and other physical security considerations. SQL Server and Windows provide several mechanisms for the protection of data at the file, database, or data level. Transparent database encryption (TDE) is a new technology, available in SQL Server 2008 Enterprise Edition, which provides a simplified data encryption option. TDE is a database-level encryption mechanism that reduces implementation complexity by removing the need to modify the data and/or the client applications. However, the benefits of performance and simplicity are balanced by TDE's potential for data leakage; therefore, for the most sensitive data, TDE alone may not suffice as a data security strategy. Any data protection strategy must weigh the costs and benefits of implementation to arrive at a usable solution that meets the security requirements defined by the business. TDE's protection of sensitive data in low-to-moderate threat environments may be sufficient for some business requirements, while highly sensitive data, or data in high-threat environments, will require the combination of TDE with other encryption mechanisms such as cell-level encryption, EFS, or BitLocker.

...with implications similar to cyclical views of time and history. [Figure: (left) an event is visible through time, as a pebble thrown into a pond creates outward waves; (right) an event in the present can only be perceived in a certain region of space-time.] Knowing that our time in the sun is limited, sometimes we try to capture time and light with images. Albrecht Dürer's etching "Melencolia I" associates light with order and darkness with chaos. The composition places the products of the imagination - geometry, mathematics, tools, and architecture - within the timeframe of an hourglass running out. In this picture, the imagination succeeds in creating a mental zone that overrides both astrophysics and religion: it holds together past, present and future with rays of perpetual sunlight, messengers of time etched in metal. [Image caption: nuclear bomb test, Bikini Atoll, 1946. The small black figures just outside the cloud are decommissioned World War II battleships from the US Navy.]

The quantum world comes with somewhat bizarre behaviors, like the tunneling effect, which are governed by the laws of quantum mechanics and relativity. The orderly, deterministic world of classical physics gives way to a world of wave functions, probability distributions, uncertainty principles, and wave-particle dualities. Instead of a deterministic world, we now have a world based on probabilities. You cannot predict all the future states of an object or a particle based on its present state. You can map out its behavior, but only as probability distributions over all the possible states it could be in. Moreover, the Heisenberg uncertainty principle tells you that it is impossible to know the exact state of a particle: you cannot simultaneously determine its exact position and velocity to any great degree of accuracy, no matter how good your measurement tools are.
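The uncertainty principle just mentioned can be put in numbers with Δx · Δp ≥ ħ/2 (my own worked example; the 1 ångström localization is an assumed atom-scale value):

```python
# Minimum momentum and velocity uncertainty for an electron confined
# to within 1 angstrom, from Heisenberg's relation dx * dp >= hbar / 2.
HBAR = 1.054571817e-34          # reduced Planck constant, J*s
M_ELECTRON = 9.1093837015e-31   # electron mass, kg

delta_x = 1e-10                 # position uncertainty, m (about an atom's size)
delta_p = HBAR / (2 * delta_x)  # minimum momentum uncertainty, kg*m/s
delta_v = delta_p / M_ELECTRON  # corresponding velocity uncertainty, m/s
print(f"dp >= {delta_p:.3g} kg*m/s,  dv >= {delta_v:.3g} m/s")   # ~5.8e5 m/s
```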
The world is intrinsically unpredictable. In addition, there is no such thing as absolute reality. In classical mechanics something has either the properties of a particle (e.g., a planet, a baseball) or those of a wave (e.g., light, sound). In quantum mechanics all objects exhibit both kinds of properties. The concept of wave-particle duality explains that reality depends on what question you are asking and what experiment you perform to answer it. The very act of observing an object will change the object being observed: any instrument used to measure its properties will invariably alter the properties being measured. This transition, from a worldview based on scientific determinism to one based on probability distributions, uncertainty principles, and subjective reality, is not intuitive and is difficult to get used to. Even Albert Einstein had trouble accepting it, and famously said “God does not play dice with the universe.” Stephen Hawking, one of the world's top theoretical physicists, concluded in a brilliant lecture: “... it seems Einstein was doubly wrong when he said, God does not play dice. Not only does God definitely play dice, but He sometimes confuses us by throwing them where they can't be seen... The universe does not behave according to our preconceived ideas. It continues to surprise us.” But the world of the very small, like that of the very large, is not the only one that exhibits counter-intuitive, seemingly magical behaviors.", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.7008520722896106, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:43.548956"} {"text": "So is the world of highly complex systems, especially those whose components and interrelationships are themselves quite complex, as is the case with systems biology and evolution. Such is also the case with organizational and sociotechnical systems whose main components are people. Even though these chaotic systems are in principle deterministic, their dynamic, non-linear nature renders them increasingly unpredictable and accounts for their emergent behavior. New terms like long tails, freakonomics, and black swan theory (every bit as fanciful as quarks, charm, and strangeness) have begun to enter our lexicon. Artificial intelligence (AI) is an example of a discipline that has transitioned from its original classical, deterministic approach to one more suitable to a highly complex, inherently unpredictable topic like intelligence. AI was one of the hottest areas in computer science in the 1960s and 1970s. Many of the AI leaders in those days were convinced that you could build a machine as intelligent as a human being based on logical deductions and the kind of step-by-step reasoning that humans use when solving puzzles or proving theorems. They obtained considerable government funding in the US, UK, and Japan to implement their vision.
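To make that logic-programming style concrete, here is a toy sketch of the step-by-step deductive reasoning those early systems were built on: a hypothetical forward-chaining rule engine, with facts and rules invented purely for illustration and drawn from no real system.

```python
# Toy forward chaining, in the spirit of 1960s-70s symbolic AI (illustrative only).
facts = {"socrates is human"}
rules = [
    ({"socrates is human"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

derived = True
while derived:  # keep firing rules until no new fact can be deduced
    derived = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived = True

print(sorted(facts))
# ['socrates is human', 'socrates is mortal', 'socrates will die']
```

Every fact and every rule must be hand-coded in this style, which is precisely the scaling problem described next.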
But eventually it became clear that all these projects had grossly underestimated the difficulty of building any kind of AI system on logic programming and deductive reasoning, and the field went through a so-called AI winter in the 1980s. Things started to change in the 1990s, when AI switched paradigms and embraced data mining and information analytics, the precursors of today's big data. Instead of trying to program computers to act intelligently, AI embraced a statistical, brute-force approach based on analyzing vast amounts of information using powerful computers and sophisticated algorithms. We discovered that such a statistical, information-based approach produced something akin to intelligence or knowledge. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely: the more information you had, the more powerful the supercomputers, and the more sophisticated the algorithms, the better the results. Deep Blue, IBM's chess-playing supercomputer, demonstrated the power of such a brute-force approach by beating then-reigning world chess champion Garry Kasparov in a celebrated match in May 1997. Since that time, analyzing and searching large amounts of information has become increasingly important and commonplace in a wide variety of disciplines. Today, most of us use search engines as the primary mechanism for finding information on the World Wide Web. Researchers have been developing sophisticated question -", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.60643575903917, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:43.551217"} {"text": "radiates 632.8 nm. Without helium, the neon atoms would be excited mostly to lower excited states responsible for non-laser lines. A neon laser with no helium can be constructed, but it is much more difficult without this means of energy coupling; therefore, a HeNe laser that has lost enough of its helium (e.g., by diffusion through the seals or glass) will most likely not lase at all, since the pumping efficiency will be too low. The energy or pump source of the laser is a high-voltage electrical discharge passed through the gas between electrodes (anode and cathode) within the tube. A DC current of 3 to 20 mA is typically required for CW operation. The optical cavity of the laser usually consists of two concave mirrors, or one plane and one concave mirror: one having very high (typically 99.9%) reflectance, and the output-coupler mirror allowing approximately 1% transmission. Commercial HeNe lasers are relatively small devices among gas lasers, with cavity lengths usually ranging from 15 cm to 50 cm (but sometimes up to about 1 meter to achieve the highest powers) and optical output power levels ranging from 0.5 to 50 mW. The red HeNe laser wavelength, nominally 633 nm, has an actual vacuum wavelength of 632.991 nm, or about 632.816 nm in air. The wavelengths of the lasing modes lie within about 0.001 nm above or below this value, and shift within this range due to thermal expansion and contraction of the cavity. Frequency-stabilized versions enable the wavelength of a single mode to be specified to within 1 part in 10⁸ by comparing the powers of two longitudinal modes in opposite polarizations.
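A fractional stability translates directly into absolute frequency units. The back-of-the-envelope sketch below is an added illustration; its only inputs are the 632.8 nm line and the two stability figures quoted just above and in the sentence that follows. It converts the wavelength into an optical frequency of roughly 474 THz and scales it:

```python
# Convert HeNe fractional frequency stabilities into absolute linewidths.
c = 299_792_458.0  # speed of light, m/s
wavelength = 632.8e-9  # nominal red HeNe line, m
f = c / wavelength  # optical frequency, about 4.74e14 Hz (474 THz)

for method, fraction in [("two-mode polarization comparison", 1e-8),
                         ("iodine absorption cell", 2.5e-11)]:
    print(f"{method}: {fraction:.1e} x {f / 1e12:.1f} THz "
          f"= {fraction * f / 1e3:.1f} kHz")
```

So the polarization-comparison lock holds the line to a few megahertz, while iodine stabilization narrows that to roughly 12 kHz.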
Absolute stabilization of the laser's frequency (or wavelength) as fine as 2.5 parts in 10¹¹ can be obtained through the use of an iodine absorption cell. The mechanism producing population inversion and light amplification in a HeNe laser plasma originates with inelastic collisions of energetic electrons with ground-state helium atoms in the gas mixture. As shown in the accompanying energy-level diagram, these collisions excite helium atoms from the ground state to higher-energy excited states, among them the 2³S₁ and 2¹S₀ long-lived metastable states.", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6009957306737255, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:43.925816"} {"text": "Because of a fortuitous near coincidence between the energy levels of the two He metastable states and the 3s₂ and 2s₂ (Paschen notation) levels of neon, collisions between these helium metastable atoms and ground-state neon atoms result in a selective and efficient transfer of excitation energy from the helium to the neon. This excitation energy transfer is given by the reaction equations:
- He*(2³S₁) + Ne(1S₀) → He(1S₀) + Ne*(2s₂) + ΔE
- He*(2¹S₀) + Ne(1S₀) + ΔE → He(1S₀) + Ne*(3s₂)
where (*) represents an excited state and ΔE is the small energy difference between the energy states of the two atoms, of the order of 0.05 eV, or 387 cm⁻¹, which is supplied by kinetic energy. Excitation energy transfer increases the population of the neon 2s₂ and 3s₂ levels manyfold. When the population of these two upper levels exceeds that of the corresponding lower-level neon state, 2p₄, to which they are optically connected, population inversion is present. The medium becomes capable of amplifying light in a narrow band at 1.15 μm (corresponding to the 2s₂ → 2p₄ transition) and in a narrow band at 632.8 nm (corresponding to the 3s₂ → 2p₄ transition). The 2p₄ level is efficiently emptied by fast radiative decay to the 1s state, eventually reaching the ground state. The remaining step in using optical amplification to create an optical oscillator is to place highly reflecting mirrors at each end of the amplifying medium, so that a wave in a particular spatial mode reflects back upon itself, gaining more power in each pass than is lost to transmission through the mirrors and to diffraction. When these conditions are met for one or more longitudinal modes, radiation in those modes rapidly builds up until gain saturation occurs, resulting in a stable continuous laser beam emitted through the front (typically 99% reflecting) mirror. The gain bandwidth of the HeNe laser is dominated by Doppler broadening rather than pressure broadening, owing to the low gas pressure, and is thus quite narrow: only about 1.5 GHz full width for the 633 nm transition.
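The sentence that follows breaks off while heading toward the longitudinal-mode count, so here is a sketch of that standard calculation, added as an illustration. It uses only the 15 cm and 50 cm cavity lengths and the 1.5 GHz gain bandwidth quoted above; the mode count is approximate.

```python
# Longitudinal mode spacing c/(2L) versus the ~1.5 GHz Doppler-broadened gain.
c = 299_792_458.0  # speed of light, m/s
gain_fwhm = 1.5e9  # HeNe gain bandwidth at 633 nm, Hz (figure from the text)

for cavity_len in (0.15, 0.50):  # typical cavity lengths from the text, m
    fsr = c / (2 * cavity_len)  # free spectral range between adjacent modes
    modes = int(gain_fwhm / fsr) + 1  # rough count under the gain curve
    print(f"L = {cavity_len:.2f} m: spacing {fsr / 1e6:.0f} MHz, ~{modes} modes")
```

A 15 cm cavity therefore runs on only one or two longitudinal modes, which is consistent with the two-mode stabilization technique mentioned earlier.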
with cavities having typical lengths of 15 cm to", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6547420211687818, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:43.926815"} {"text": "Line-of-sight propagation refers to electromagnetic radiation or acoustic waves traveling in a direct path. Electromagnetic transmission includes light emissions traveling in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by material obstructions, and generally cannot travel over the horizon or behind obstacles. At low frequencies (below approximately 2 MHz) radio signals travel as ground waves, which follow the Earth's curvature due to diffraction. This enables AM radio signals in low-noise environments to be received well after the transmitting antenna has dropped below the horizon. Additionally, frequencies between approximately 1 and 30 MHz can be reflected by the ionosphere's F1/F2 layers, giving radio transmissions in this range a potentially global reach (see shortwave radio), again along multiply deflected straight lines. The effects of multiple diffraction or reflection lead to macroscopically \"quasi-curved paths\". At higher frequencies, however, and in the lower levels of the atmosphere, neither of these effects is significant: any obstruction between the transmitting antenna and the receiving antenna will block the signal, just as it would block visible light. Therefore, since the ability to see a transmitting antenna (disregarding the limitations of the eye's resolution) roughly corresponds to the ability to receive a radio signal from it, the propagation characteristic of high-frequency radio is called \"line-of-sight\". The farthest possible point of propagation is referred to as the \"radio horizon\". In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at the comparatively low frequency of around 100 MHz, is less affected by the presence of buildings and forests. Radio horizon: the radio horizon is the locus of points at which direct rays from an antenna are tangential to the surface of the Earth. If the Earth were a perfect sphere and there were no atmosphere, the radio horizon would be a circle. The radio horizons of the transmitting and receiving antennas can be added together to increase the effective communication range. Antenna heights above 1,000,000 feet (189 miles; 305 kilometres) will cover the entire hemisphere and not increase the radio horizon further. Radio wave propagation is also affected by atmospheric conditions, ionospheric absorption, and the presence of obstructions such as mountains or trees. Simple formulas that include the effect of the atmosphere give the range as roughly 4.12·√h kilometres for an antenna h metres high. The simple formulas", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6076826410213851, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:32:44.010708"}
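That combined-range rule is easy to turn into a calculator. The sketch below is an added illustration assuming the standard approximation d ≈ 4.12·√h (d in kilometres, antenna height h in metres; the function names are mine); it adds the horizons of the two antennas as described above.

```python
import math

def radio_horizon_km(height_m: float, k: float = 4.12) -> float:
    """Distance to the radio horizon for an antenna height in metres.

    k = 4.12 includes standard atmospheric refraction; k = 3.57 would
    give the purely geometric horizon with no atmosphere.
    """
    return k * math.sqrt(height_m)

def max_range_km(tx_height_m: float, rx_height_m: float) -> float:
    # The horizons of the transmitting and receiving antennas add.
    return radio_horizon_km(tx_height_m) + radio_horizon_km(rx_height_m)

# Example: a 100 m broadcast mast to a 10 m rooftop antenna.
print(f"{max_range_km(100, 10):.0f} km")  # about 54 km
```

As noted above, the actual range is further reduced by atmospheric conditions, ionospheric absorption, and obstructions such as mountains or trees.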