X-rays are high-frequency, and thus high-energy, electromagnetic radiation. They have wavelengths ranging from 0.01 to 10 nanometres, and thus frequencies from 3×10^19 down to 3×10^16 Hz. They lie between ultraviolet radiation and gamma rays on the electromagnetic spectrum. Astrophysical sources of X-rays include plasmas with temperatures of 1 to 100 million degrees Celsius, such as the solar corona, supernova remnants and gas in galaxy clusters. In addition to this thermal radiation from hot gas, high-energy events involving charged particles moving at high speeds within a magnetic field can also generate X-rays. Examples include the aurorae of Jupiter, compact galactic objects like neutron stars, cataclysmic variable stars and X-ray binaries, and active galactic nuclei and quasars. X-rays are commonly regarded as having first been discovered in the laboratory in 1895, when Wilhelm Röntgen conducted experiments with a partially evacuated tube enclosed in thick cardboard. He passed electricity through the tube, and whenever he did so, he noticed that a chemically coated screen on the other side of his laboratory would glow. Within a week he had created an X-ray image of his wife's hand, showing the bones and her wedding ring. Röntgen dubbed this new form of radiation "X" rays, to denote its mysterious nature, and the name stuck. Although Röntgen received the 1901 Nobel Prize in Physics for his discovery, others had been experimenting with X-rays before him. In 1892 Heinrich Hertz and his student Philipp Lenard were generating X-rays (although they probably did not know it) from cathode-ray tubes, and investigating their penetrating ability through different materials. In the preceding year, at Stanford University, Fernando Sanford had created and detected X-rays, as detailed in his January 6, 1893, Physical Review letter.
Prior to this, there is evidence to suggest that Nikola Tesla, from 1887 onwards, had worked with X-rays, and before him, from 1881 onwards, the Ukrainian-born Ivan Pulyui had already pioneered the invention and use of X-rays.

isolate patterns of brain activity in subjects who regularly practice Transcendental Meditation. "Can there be a sense of self without mental content, which is just aware of its own structure without perception or thinking?" he asked. This state of consciousness in experienced meditators was characterized by EEG data that Travis presented, which showed brain patterns of wakeful awareness (so-called theta and alpha activity) that appeared even when the subjects were in deep sleep. These findings were also consistent with meditators' claims. "Subjects report a permanent integration of transcendental experiences with waking, sleeping and dreaming," Travis said. Basic forms of awareness can be studied in the absence of conscious awareness, said Randolph Blake of Vanderbilt University. He presented a series of results involving subjects who were shown different images in each eye. The brain, when presented with an image from the left eye that is completely different from the image in the right eye, cycles its conscious attention between the eyes. Thus, at a moment when one eye is dominant, the images appearing before the other eye lie outside a subject's visual consciousness.
This laboratory trick, called "binocular rivalry," allows researchers to provoke mental responses to changing images in one eye, even though the mind may be focused on the input coming from the other. For instance, Blake summarized the results of a study in which subjects watched a rotating pinwheel pattern and then trained their sight on a still image that appeared to move. This optical illusion, his lab found, could even be provoked when the spinning pinwheel was observed only by the unconscious eye. Subsequent studies, including brain-imaging studies, indicate that the brain's more basic regions for visual processing (including the primary visual cortex) handle these images, even though the pinwheels are suppressed from a person's awareness. Yet when the researchers presented the subjects' temporarily "blinded" eye with images that required advanced visual or verbal processing (tasks beyond the range of the visual cortex), they could not provoke unconscious awareness. Blake said that binocular rivalry is a useful tool for probing some of the rudiments of awareness, but the "knife is not sharp enough" to slice into the root cause of awareness. To that end, he cited the early 20th-century psychologist William James. "We know what consciousness is," James famously wrote, "as long as no one asks us to define it."

You will recall that the first law of thermodynamics, expressed as ΔU = q + w, is essentially a statement of the law of conservation of energy.
The significance of this law is that it tells us that any proposed process that would violate this condition can be dismissed as impossible, without even inquiring further into the details of the process. For simple mechanical operations on macroscopic objects, the first law (conservation of energy) is all we usually need to determine such things as how many joules of energy are required to lift a weight or to boil some water, how many grams of glucose you must metabolize in order to climb a hill, or how much fuel your car needs to drive a given distance. But if you think about it, there are a number of "simple mechanical operations" that never occur, even though they would not violate energy conservation. What do all these scenarios that conform to the first law, but are nevertheless never seen to occur, have in common? In every case, energy becomes less spread out, less "diluted." In the first two examples, thermal energy (dispersed) gets concentrated into organized kinetic energy of a macroscopic object: a book, a propeller. In the third case, the thermal energy gets concentrated into a smaller volume as the gas contracts. A briefer statement of the second law (for those who know the meaning of "entropy") appears below; the more formal and historical ways of stating the second law will be presented farther on, after we introduce the topic of heat engines. It is also worth knowing this important consequence of the second law: in the first lesson of this series, we explained how processes that take place spontaneously always proceed in a direction that leads to the spreading and sharing of thermal energy.
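This kind of first-law bookkeeping can be sketched in a few lines. The masses, temperature change and height below are made-up illustrative values; the constants are the standard specific heat of water and g:

```python
# First-law energy bookkeeping: heating water and lifting a mass both
# just require supplying the corresponding amount of energy.

def heat_required(mass_g, dT_K, c=4.184):
    """Joules needed to warm mass_g grams of water by dT_K kelvins
    (c = specific heat of water, 4.184 J g^-1 K^-1)."""
    return mass_g * c * dT_K

def lift_work(mass_kg, height_m, g=9.8):
    """Joules of work to raise mass_kg kilograms by height_m metres."""
    return mass_kg * g * height_m

# Warm 250 g of water from 20 degrees C to 100 degrees C:
print(heat_required(250, 80))   # ~83,680 J

# Lift a 70 kg hiker through 100 m of elevation:
print(lift_work(70, 100))       # ~68,600 J
```

The point of the comparison is only that the first law sets the energy price of each process; it says nothing about which processes actually occur.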
Because all natural processes lead to the spreading and sharing of thermal energy, and because entropy is a measure of the extent to which energy is dispersed in the world, it follows that all natural processes that allow the free exchange of thermal energy among chemically significant numbers of particles are accompanied by a spreading or dilution of energy that leaves the world forever changed. In other words, all spontaneous change leads to an increase in the entropy of the world. At first sight, this might seem inconsistent with our observations of very common instances in which there is a clear decrease in entropy, such as the freezing of a liquid, the formation of a precipitate, or the growth of an organism... but it is the entropy of the system plus surroundings that counts! It is important to understand that the criterion for spontaneous change is the entropy change of the system and the surroundings, that is, of the world, which we denote by ΔS_total:

ΔS_total = ΔS_system + ΔS_surroundings    (3-1)

The only way the entropy of the surroundings can be affected is by exchange of heat with the system:

ΔS_surroundings = q_surr / T    (3-2)

Thus the freezing of water is accompanied by a flow of heat (the heat of fusion) into the surroundings, causing ΔS_surr to increase.
At temperatures below the freezing point, this increase more than offsets the decrease in the entropy of the water itself, so ΔS_world exceeds zero and the process is spontaneous. The problem example below works this out in detail for a specific case. Note that it does not matter whether the change in the system occurs reversibly or irreversibly; as mentioned previously, it is always possible to define an alternative (reversible) pathway in which the amount of heat exchanged with the surroundings is the same as q_rev; because ΔS is a state function, the entropy change of the surroundings will have the same value as for the unrealizable reversible pathway. If there is no flow of heat into or out of the surroundings, the entropy change of the system and that of the world are identical. Examples of such processes, which are always spontaneous, are the free expansion of an ideal gas into a vacuum, and the mixing of two ideal gases. In practice, almost all processes involving mixing and diffusion can be regarded as driven exclusively by the entropy increase of the system. Most processes involving chemical and phase changes involve the exchange of heat with the surroundings, so their tendency to occur cannot always be predicted by focusing attention on the system alone. Further, owing to the q/T term in ΔS_surroundings, the spontaneity of all such processes will depend on the temperature, as we illustrated for the dissociation of H2 previously. As a quantitative example, let us consider the freezing of water.
We know that liquid water will spontaneously change into ice when the temperature drops below 0°C at 1 atm pressure. Since the entropy of the solid is less than that of the liquid, we know the entropy of the water (the system here) will decrease on freezing. The amount of the decrease is found by dividing the heat of fusion of ice by the temperature of the reversible pathway, which occurs at the normal freezing point:

ΔS_system = -q_rev/T = -(6000 J)/(273 K) = -21.978 J K^-1 mol^-1

If the process is actually carried out at 0°C, then the heat of fusion is transferred to the surroundings at the same temperature, and the entropy of the surroundings increases by

ΔS_surroundings = +(6000 J)/(273 K) = +21.978 J K^-1 mol^-1

so that ΔS_total = 0. Under these conditions the process can proceed in either direction (freezing or melting) without affecting the entropy of the world; this means that both ice and liquid water can be present simultaneously without any change occurring; the system is said to be in equilibrium. Suppose now that the water is supercooled to -1°C before it freezes. The entropy change of the water still corresponds to the reversible value q_rev/T = -(6000 J)/(273 K). The entropy change of the surroundings, however, is now given by

ΔS_surroundings = +(6000 J)/(272 K) = +22.059 J K^-1 mol^-1

The total entropy change is now

ΔS_total = (-21.978 + 22.059) J K^-1 mol^-1 = +0.081 J K^-1 mol^-1

indicating that the process can now occur (is spontaneous) only in the one direction. Why did we use 273 K when evaluating ΔS_system but 272 K for calculating ΔS_surroundings?
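The entropy bookkeeping just worked out can be reproduced directly; a minimal sketch, taking the heat of fusion as 6000 J/mol as in the text:

```python
# Total entropy change when one mole of water freezes, for surroundings
# held at a given temperature.

Q_FUS = 6000.0  # J/mol, heat released to the surroundings on freezing

def delta_S_total(T_surr_K, T_fus_K=273.0):
    """ΔS_total in J K^-1 mol^-1 for freezing one mole of water."""
    dS_system = -Q_FUS / T_fus_K         # state function: always uses 273 K
    dS_surroundings = +Q_FUS / T_surr_K  # heat delivered at the surroundings' T
    return dS_system + dS_surroundings

print(round(delta_S_total(273.0), 3))  # 0.0    -> equilibrium at 0 degrees C
print(round(delta_S_total(272.0), 3))  # 0.081  -> spontaneous at -1 degrees C
```

Note that only the surroundings' term changes with temperature, which is exactly what makes the supercooled case spontaneous.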
In the latter case it is possible to formulate a reversible pathway by which heat can be transferred to the surroundings at any temperature. ΔS_system, however, is a state function of the water, and will vary with temperature only slightly. Note that in order to actually freeze water, it must be cooled to very slightly below its normal freezing point, a condition known as supercooling. Freezing of supercooled water is of course an irreversible process (once it starts, it cannot be stopped except by raising the temperature by a finite amount), and the positive value of ΔS_total tells us that this process will occur spontaneously at temperatures below 273 K. Under these conditions, the process is driven by the entropy increase of the surroundings resulting from the flow of the heat of fusion of water into the surroundings. The principle that thermal energy (and the molecules carrying it) tends to spread out is based on simple statistics. It must be remembered, however, that the laws of probability have meaningful application only to systems made up of large numbers of independent actors. If you trap a hundred flies in a bottle, they will generally distribute themselves more or less uniformly throughout the container; if there are only four flies, however, it is quite likely that all of them will occasionally be located in one particular half of the bottle.

Why the sky is blue.
Similarly, you can trust with complete certainty that the spontaneous movement of half the molecules of the air to one side of the room you now occupy will not occur, even though the molecules are moving randomly and independently. On the other hand, if we consider a box whose dimensions are only a few molecular diameters, then we would expect that the random and short-term displacement of the small number of particles it contains to one side of the box would occur quite frequently. This is, in fact, the cause of the blueness of the sky: random fluctuations in the air density over tiny volumes of space, whose dimensions are comparable with the wavelength of light, result in selective scattering of the shorter wavelengths, so that blue light is scattered out, leaving the red light for the enjoyment of sunset-watchers to the east.

Brownian motion. This refers to the irregular zigzag movement of extremely small particles, such as plant pollen, when they are suspended in a drop of liquid. Any such particle is continually being buffeted by the thermal motions of the surrounding liquid molecules. If the size of the particle is very large compared with that of the liquid molecules, the forces that result from collisions of these molecules with the particle will cancel out and the particle remains undisturbed. If the particle is very small, however (perhaps only a thousand times larger than a molecule of the liquid), then the chances that it will undergo sufficiently more hits from one direction than from another during a brief interval of time become significant. In these two examples, the entropy of the system decreases without any compensating flow of heat into the surroundings, leading to a net (but only temporary) decrease in the entropy of the world.
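The fly-in-a-bottle statistics above can be made concrete: each independent particle sits in a given half with probability 1/2, so the chance of finding all of them in one particular half at any instant is (1/2)^N:

```python
# Probability that all n independent particles occupy one chosen half
# of the container at a given instant.

def prob_all_one_side(n):
    """(1/2)**n: vanishingly small for macroscopic n."""
    return 0.5 ** n

print(prob_all_one_side(4))    # 0.0625   -> happens routinely for 4 flies
print(prob_all_one_side(100))  # ~7.9e-31 -> effectively never for 100
```

For the roughly 10^27 molecules in a room, the probability is so small that "will not occur" is, for all practical purposes, a certainty.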
This does not represent a failure of the second law, however, because no one has ever devised a way to extract useful work from these processes.

The Industrial Revolution of the 19th century was largely driven by the invention of the steam engine. The first major use of such engines was to pump water out of mines, whose flooding from natural seepage seriously limited the depths to which they could be driven, and thus the availability of the metal ores that were essential to the expansion of industrial activities. The steam engine is a type of heat engine, a device that converts heat, provided by burning a fuel, into mechanical work, typically delivered through the motion of a piston in opposition to an opposing force. An engine is therefore an energy-conversion device in which, ideally, every joule of heat released by combustion of the fuel could be extracted as work at the output shaft; such an engine would operate at 100 percent efficiency. However, engineers of the time were perplexed to find that the efficiencies of steam engines were rather low (usually around 20%), with most of the heat being exhausted uselessly to the environment.
Everyone understood that an efficiency exceeding 100% would be impossible (that would violate conservation of energy, and thus the first law), but it was not clear why efficiencies could not rise significantly beyond the small values observed even as mechanical designs improved. The answer was found by a young French engineer, Sadi Carnot, who in 1824 published an analysis of an idealized heat engine that is generally considered to be the foundation of the science of thermodynamics, notwithstanding the fact that Carnot still accepted the belief that heat is a fluid-like substance called caloric. We will not replicate his analysis here (this is normally done in more advanced courses in physical chemistry), but will simply state his conclusion in his own [translated] words: "The production of motive power is then due in steam-engines not to an actual consumption of caloric, but to its transportation from a warm body to a cold body... The production of heat alone is not sufficient to give birth to the impelling power: it is necessary that there should also be cold; without it, the heat would be useless. The ultimate attainable efficiency of any heat engine will depend on the temperatures at which heat is supplied to and removed from it."

Efficiency of a heat engine. Left: common schematic representation of a heat engine. Right: diagrammatic representation of Eq.
14 (below); the efficiency is the ratio of the temperature intervals a : b.

The left side of the figure represents a generalized heat engine into which a quantity of heat q_h, extracted from a source or reservoir at temperature T_h, is partly converted into work w. The remainder of the heat, q_l, is exhausted to a reservoir at a lower temperature T_l. In practice, T_h would be the temperature of the steam in a steam engine, or the temperature of the combustion mixture in an internal-combustion or turbine engine. The low-temperature reservoir is ordinarily that of the local environment. The efficiency ε (epsilon) of a heat engine is the fraction of the heat abstracted from the high-temperature reservoir that can be converted into work:

ε = w / q_h    (3-3)

Carnot's crucial finding (for which he would certainly have deserved a Nobel Prize if these had existed at the time) is that the efficiency is proportional to the "distance" in temperature that the heat can fall as it passes through the engine:

ε = (T_h - T_l) / T_h

This is illustrated graphically in the right half of the figure just above, in which the efficiency is simply the fraction of the complete fall (in temperature) to absolute zero (arrow b) that the heat undergoes in the engine (arrow a). Clearly, the only way to attain 100% efficiency would be to set the temperature of the exhaust reservoir to 0 K, which would be impossible. For most terrestrial heat engines, T_l is just the temperature of the environment, normally around 300 K, so the only practical way to improve the efficiency is to make T_h as high as possible. This is the reason that high-pressure (superheated) steam is favored in commercial thermal power plants. The highest temperatures (and the greatest operating efficiencies) are obtained in gas-turbine engines.
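Carnot's result, that the ideal efficiency depends only on the two reservoir temperatures, takes one line of code:

```python
# Carnot limit: the maximum fraction of q_h convertible to work is
# (T_h - T_l) / T_h, with both temperatures in kelvin.

def carnot_efficiency(T_hot, T_cold):
    """Ideal heat-engine efficiency between two reservoirs."""
    return (T_hot - T_cold) / T_hot

# Superheated steam at 600 K exhausting to a 300 K environment:
print(carnot_efficiency(600.0, 300.0))  # 0.5
```

Doubling T_h relative to T_l already caps the efficiency at 50 percent, which anticipates the practical limit discussed next.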
However, as operating temperatures rise, the costs of dealing with higher steam pressures and the ability of materials such as turbine blades to withstand high temperatures become significant factors, placing an upper limit of around 600 K on T_h, and thus imposing a maximum of around 50 percent efficiency on thermal power generation. For nuclear plants, in which safety considerations require lower steam pressures, the efficiency is lower. One consequence of this is that a larger fraction of the heat is exhausted to the environment, which may result in greater harm to aquatic organisms when the cooling water is returned to a stream or estuary. Several proposals have been made for building a heat engine that makes use of the temperature differential between the surface waters of the ocean and the cooler, denser waters that reside at greater depth. If the exhaust temperature is 5°C, what is the maximum amount of work that could be extracted from 1000 L of surface water at 10°C? (The specific heat capacity of water is 4.184 J g^-1 K^-1.)

Solution: The amount of heat (q_h) that must be extracted to cool the water by 5 K is (4.184 J g^-1 K^-1)(10^6 g)(5 K) = 2.09 × 10^7 J. The ideal thermodynamic efficiency is

ε = (283 K - 278 K)/(283 K) = 0.018

so the amount of work that could be done is (0.018)(2.09 × 10^7 J) = 3.7 × 10^5 J.

Comment: It may be only 1.8% efficient, but it's free!
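The ocean-thermal problem above can be checked in code; a sketch using the same numbers (1000 L = 10^6 g of water, reservoirs at 283 K and 278 K):

```python
# Maximum work extractable by an ideal heat engine running between ocean
# surface water (T_hot) and deep water (T_cold).

C_WATER = 4.184  # specific heat of water, J g^-1 K^-1

def otec_max_work(mass_g, dT_K, T_hot, T_cold):
    q_h = C_WATER * mass_g * dT_K          # heat withdrawn from the water
    efficiency = (T_hot - T_cold) / T_hot  # ideal Carnot efficiency
    return q_h, efficiency, q_h * efficiency

q_h, eff, work = otec_max_work(1.0e6, 5.0, 283.0, 278.0)
print(q_h)                   # ~2.09e7 J
print(round(eff, 3))         # 0.018
print(round(work / 1e5, 1))  # 3.7  (i.e. ~3.7e5 J)
```

The tiny efficiency is set entirely by the 5 K temperature drop; the attraction of the scheme is that the heat source costs nothing.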
Few toys illustrate as many principles of physical science as this popular device, which has been around for many years. At first glance it might appear to be a perpetual-motion machine, but it is really just a simple heat engine. Modern "dippy birds" (as they are sometimes called) utilize dichloromethane as the working fluid. This liquid boils at 39°C, and therefore has a rather high vapor pressure at room temperature. The liquid (to which a dye is often added for dramatic effect) is stored in a reservoir at the bottom of the bird. The bird's beak is covered with felt which, when momentarily dipped in water, creates a cooling effect as the water evaporates. This causes some of the CH2Cl2 vapor to condense in the head, reducing the pressure inside the device and causing more liquid to boil off and re-condense in the head. The redistribution of fluid upsets the balance, causing the bird to dip its beak back into the water. Once the head fills with liquid, it drains back into the bottom, tipping the bird upright to repeat the cycle. We will leave it to you to relate this to the heat-engine diagram above by identifying the heat source and sink, and to estimate the thermodynamic efficiency of the engine. If a heat engine is run in reverse by performing work on it (that is, changing "work out" to "work in" in Fig. 8), it becomes a device for transporting heat against a thermal gradient.
Refrigerators and air conditioners are the most commonly encountered heat pumps. A heat pump can also be used to heat the interior of a building. In this application, the low-temperature reservoir can be a heat exchanger buried in the earth or immersed in a well. Used in this way, heat pumps are more efficient than furnaces or electric heating, but the capital cost is rather high. It was the above observation by Carnot that eventually led to the formulation of the second law of thermodynamics near the end of the 19th century. One statement of this law (by Kelvin and Planck) is as follows: It is impossible for a cyclic process connected to a reservoir at one temperature to produce a positive amount of work in the surroundings. To help you understand this statement and how it applies to heat engines, consider the schematic heat engine in the figure, in which a working fluid (combustion gases or steam) expands against the restraining force of a weight that is mechanically linked to the piston. From a thermodynamic perspective, the working fluid is the system and everything else is the surroundings. Expansion of the fluid occurs when it absorbs heat from the surroundings; return of the system to its initial state requires that the surroundings do work on the system. Now re-read the above statement of the second law, paying special attention to the italicized phrases, which are explained below. Note carefully that the second law applies only to a cyclic process. Isothermal expansion of a gas against a non-zero pressure always does work on the surroundings, but an engine must repeat this process continually; to do so it must be returned to its initial state at the end of every cycle.
When operating isothermally, the work -w it does on the surroundings in the expansion step (the power stroke) is nullified by the work +w the surroundings must do on the system in order to complete the cycle. (Review the plots shown in the previous lesson comparing the work associated with expansion and compression for single- and multistep processes.) The second law can also be stated in an alternative way: It is impossible to construct a machine operating in cycles that will convert heat into work without producing any other changes. (Max Planck) Thus the second law does allow an engine to convert heat into work, but only if "other changes" (transfer of a portion of the heat directly to the surroundings) are allowed. And since heat can only flow spontaneously from a source at a higher temperature to a sink at a lower temperature, the impossibility of isothermal conversion of heat into work is implied. A device that violates the second law of thermodynamics is formally known as a perpetual-motion machine of the second kind. (A perpetual-motion machine of the first kind is one that would violate the first law.) The U.S. Patent Office frequently receives applications to patent devices whose operation would not be in accord with the second law; in the majority of cases the inventor appears to be unaware of this fact or, for that matter, of the second law itself.
For some time, it has been the practice of the Patent Office to require that a working model of the device be made available to verify its operation. The search for perpetual-motion machines is a rich history of human folly that goes back to the 13th century and continues to this day in the form of goofy schemes and not a few frauds. Many early designs employed rotating unbalanced weights. A 17th-century closed-cycle mill: the Archimedes' screw lifts the water up as it is turned by the water wheel. Perpetual-motion art by David Gockel: at least it is nice to look at!

Page last modified: 10.12.2007

Among the most common tools in electrical engineering and computer science are rectangular grids of numbers known as matrices. The numbers in a matrix can represent data: the rows, for instance, could represent temperature, air pressure and humidity, and the columns could represent different locations where those three measurements were taken. But matrices can also represent mathematical equations. If the expressions t + 2p + 3h and 4t + 5p + 6h described two different mathematical operations involving temperature, pressure and humidity measurements, they could be represented as a matrix with two rows, [1 2 3] and [4 5 6]. Multiplying the two matrices together means performing both mathematical operations on every column of the data matrix and entering the results in a new matrix. In many time-sensitive engineering applications, multiplying matrices can give quick but good approximations of much more complicated calculations.
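The row-as-operation idea can be sketched directly. Only the [1 2 3] / [4 5 6] operations matrix comes from the passage; the measurement values below are invented for illustration:

```python
# Apply a two-row "operations" matrix to a data matrix whose columns are
# (t, p, h) readings taken at different locations.

def matmul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x k), as nested lists."""
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

ops = [[1, 2, 3],
       [4, 5, 6]]    # rows encode t + 2p + 3h and 4t + 5p + 6h

data = [[1, 2],      # t at two locations (hypothetical values)
        [0, 1],      # p at two locations
        [2, 0]]      # h at two locations

print(matmul(ops, data))  # [[7, 4], [16, 13]]
```

Each column of the result holds both expressions evaluated at one location, which is exactly the "every column of the data matrix" behavior described above.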
In a paper published in the July 13 issue of Proceedings of the National Academy of Sciences, MIT math professor Gilbert Strang describes a new way to split certain types of matrices into simpler matrices. The result could have implications for software that processes video or audio data, for compression software that squeezes down digital files so that they take up less space, or even for systems that control mechanical devices. Strang's analysis applies to so-called "banded matrices." Most of the numbers in a banded matrix are zeroes; the only exceptions fall along diagonal bands, at or near the central diagonal of the matrix. This may sound like an esoteric property, but it often has practical implications. Some applications that process video or audio signals, for instance, use banded matrices in which each band represents a different time slice of the signal. By analyzing local properties of the signal, the application could, for instance, sharpen frames of video, or look for redundant information that can be removed to save memory or bandwidth. Since most of the entries in a banded matrix (maybe 99 percent, Strang says) are zero, multiplying it by another matrix is a very efficient procedure: you can ignore all the zero entries. After a signal has been processed, however, it has to be converted back into its original form. That requires multiplying it by the "inverse" of the processing matrix: if multiplying matrix A by matrix B yields matrix C, multiplying C by the inverse of B yields A. But the fact that a matrix is banded doesn't mean that its inverse is. In fact, Strang says, the inverse of a banded matrix is

notions in complex numbers.
- Discrete and continuous time systems. Linear, time-invariant systems (LTI). Representation of signals as series of pulses, convolution. Describing systems using differential and difference equations.
- Continuous-time signals and their frequency analysis: periodic signals, Fourier series, coefficients; non-periodic signals, Fourier transform, spectral function. Spectra of typical signals. Signal energy: Parseval's theorem.
- Continuous-time systems: Laplace transform, transfer function, frequency response, stability. Example of a simple analog filter.
- Sampling and reconstruction: ideal sampling, aliasing, sampling theorem. Spectrum of a sampled signal, ideal reconstruction. Normalized time and frequency. Quantization.
- Discrete-time signals and their frequency analysis: discrete Fourier series, discrete-time Fourier transform. Circular convolution.
- Discrete Fourier transform (DFT) and what it really computes. Fast Fourier transform.
- Discrete systems: z-transform, finite and infinite impulse response systems (FIR and IIR), transfer function, frequency response, stability. Example of a digital filter: Matlab and C.
- Discrete systems cont'd: design of simple digital filters, sampling of the frequency response, windowing. Links between continuous-time and discrete-time systems.
- Two-dimensional (2D) signals and systems: spatial frequency, spectral analysis (2D Fourier transform), filtering using a mask. Example: JPEG.
- Random signals: random variable, realization, distribution function, probability density function (PDF). Stationarity and ergodicity. Parameters of a random signal: mean, etc. Estimation: ensemble and temporal.
- Random signals cont'd: correlation function, power spectral density (PSD). Processing of random signals by LTI systems.
- Summary of basic notions, systematic organization of signal processing knowledge. Examples.
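Several of the topics above (the DFT computed via the FFT, Parseval's theorem, spectra of typical signals) can be illustrated in a few lines. This is a sketch in Python/NumPy rather than the course's Matlab, and the test signal is invented for illustration:

```python
import numpy as np

# A 64-sample test signal: two sinusoids landing exactly on DFT bins 4 and 10.
n = np.arange(64)
x = np.sin(2 * np.pi * 4 * n / 64) + 0.5 * np.sin(2 * np.pi * 10 * n / 64)

# The DFT, computed with the fast Fourier transform.
X = np.fft.fft(x)

# Parseval's theorem: time-domain energy equals frequency-domain energy
# (with a 1/N factor under NumPy's unnormalized DFT convention).
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)

# The magnitude spectrum peaks at bins 4 and 10, plus their mirror
# images at 60 and 54 (the signal is real-valued).
peaks = np.argsort(np.abs(X))[-4:]
print(energy_time, energy_freq, sorted(int(p) for p in peaks))
```

Because each sinusoid completes a whole number of cycles in the 64-sample window, there is no spectral leakage and the energy check holds to machine precision.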
Syllabus of computer exercises:
- Introduction to Matlab
- Projection onto a basis, Fourier series
- Processing of sounds
- Image processing
- Random signals
- Sampling, quantization and aliasing

Syllabus: others, projects and individual work of students. The individual project aims at image processing; see http://www.fit.vutbr.cz/study/courses/iss/public/#proj
- Oppenheim, A. V., Willsky, A. S.: Signals and Systems, Prentice Hall, 1997
- Jan, J., Kozumplik, J.: Systemy, proces

This application is a continuation of U.S. application Ser. No. 09/282,141, filed Mar. 31, 1999, now U.S. Pat. No. 7,472,215. Field of invention: the present invention is directed to portable computers having at least two power modes of operation. In particular, an embodiment of the present invention is directed to a portable computer having a high and a low power mode of operation, more particularly in association with a docking station, wherein the portable computer operates in a low power mode when not engaged in the docking station and in a high power mode when engaged in the docking station, which has cooling systems to cool the high power mode of operation. The power consumption of laptop computers, especially the power of the CPUs used in laptop computers, is increasing. For instance, the total power of a laptop computer is usually around 10 watts, and now it is becoming 20 to 30 W. The CPU power has increased from 2 to 8 W and in the future could be in the 15 W range and higher. Most of this power will eventually be dissipated as heat to the surroundings. Getting heat out of the laptop computer is becoming a critical factor in the laptop computer business.
Portable computers, such as laptop computers, are designed to be compact and small. Thus there is limited space to incorporate cooling systems, and portable computers cannot operate using the fastest CPU chips available. This presents a problem when the portable computer is used as a workstation, as a desktop computer or in place of a desktop computer. Typically a portable computer is used as a workstation by inserting the portable computer into a frame, referred to as a docking station. The docking station provides additional functionality to the portable computer, such as additional disk drives and CD drives. The docking station has ports through which a large keyboard and a large-screen monitor can be connected to the portable computer. The portable computer, when engaged with a docking station and used as a workstation, has the disadvantage, compared with a desktop computer, of not functioning as fast as the desktop computer. This is because the desktop computer has a cooling system which can cool a CPU that runs too hot to be included in the portable computer. Applications running on the portable computer engaged with a docking station have slower performance than on the desktop, and some applications either cannot run on a portable computer engaged with a docking station or run so slowly as to be effectively unusable. Applicants' invention solves this problem.
A portable computer is intended to be transported around by a user. As described above, the portable computer is commonly used as a workstation by inserting it into a docking station. A user typically has a docking station in their office and typically takes the portable computer on business trips or for use at home. If the user takes the portable computer home and forgets to bring it into the office, the user has no computer to use in the office. This prevents the user from accessing systems such as e-mail and the Internet and from using word processors. Applicants' invention solves this problem. A broad aspect of the present invention is a system having: a portable computer; a docking station; the portable computer having a low power mode of operation and a high power mode of operation; and a sensor for sensing whether the portable computer is engaged in the docking station or not. Another broad aspect of the present invention is a system having: a portable computer; a docking station; the portable computer comprising a low power mode of operation and a high power mode of operation; and a signal generator for switching the computer between the high power mode of operation and the low power mode of operation. Another broad aspect of the present invention is a system having: a computer; the computer having a low power mode of operation and a high power mode of operation; and a signal generator for switching said computer between the high power mode of operation and the low power mode of operation in response to an input. Another broad aspect of the present invention is a system having: a portable computer; a docking station; the portable computer having a first CPU; the docking station having a second CPU; and the docking station, without the portable computer engaged to it, being operable through the second CPU.
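The claims above describe a dock sensor plus a signal generator that selects between the two power modes. A hypothetical sketch of that control logic (the names and the enum are invented for illustration; the patent text specifies no code):

```python
from enum import Enum

class PowerMode(Enum):
    LOW = "low"    # undocked: limited cooling, CPU held to low power
    HIGH = "high"  # docked: the docking station's cooling permits full power

def select_power_mode(docked: bool) -> PowerMode:
    """Hypothetical controller: the dock sensor's reading alone decides
    between the high and low power modes described in the claims."""
    return PowerMode.HIGH if docked else PowerMode.LOW

print(select_power_mode(True), select_power_mode(False))
```

The point of the sketch is only the decision structure: the sensor's engaged/not-engaged reading is the single input that the signal generator translates into a mode switch.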
Another broad aspect of the present invention is a system to increase the cooling capability of a portable computer when it is in a docking base. Brief description of the drawings: these and other objects, features and advantages of the present invention will become apparent upon consideration of the following detailed description of the invention when read in conjunction with the drawing figures, in which: Fig. 1 is a schematic view of a laptop computer sitting on the tray of a docking base, waiting to be docked. Fig. 2 is a schematic view of a laptop computer docked into the base.

A statement which cannot be false. Reality; actuality; truth; as, he, in fact, excelled all the rest; the fact is, he was beaten. The assertion or statement of a thing done or existing; sometimes, even when false, improperly put, by a transfer of meaning, for the thing done, or supposed to be done; a thing supposed or asserted to be done; as, history abounds with false facts. An item of information asserted to be true, e.g. something proven to exist, an event that has happened, or a statement true on the basis of reasoning; in that sense, a fact is not merely a datum (or data), because it involves the criteria of truth. A reliable piece of information that can be verified through independent sources or procedures; a fact differs from an opinion (see opinion). A phenomenon that has an actual, objective existence, regardless of whether it was observed or not. When our observations are confirmed and found to be repeatable, they become facts. The repeatability of an observation means there is little doubt about it, though it cannot be accepted with absolute certainty.
For example, everyone can observe that a student in the classroom is smiling; therefore, this is a fact. An observation that has been repeatedly confirmed; for example, there are 23 pairs of chromosomes in human cells. (n.) In the context of logic programming, a fact is a Horn clause with a head but no body: a kind of (or subtype of) clause; a Prolog clause without a clause body; a fact concerning the properties and names of objects. A fact describes a clause whose body is true. In PeopleSoft applications, facts are numeric data values from fields from a source database as well as an analytic application. A fact can be anything you want to measure your business by, for example revenue, actual or budget data, or sales numbers. A fact is stored in a fact table. Data, usually numeric and additive, that can be examined and analyzed; examples include sales, cost, and profit. Fact and measure are synonymous; fact is more commonly used with relational environments, measure with multidimensional environments. See also: derived fact (or measure). A measurement regarding an organization, typically numeric and additive, that is stored in a fact table; see measure, and see also additive, derived fact (or measure). A relationship between two objects or ideas in the world. A world-state, nothing

New silicon-germanium nanowires could lead to smaller, more powerful electronic devices. By Wileen Wong Kromhout, December 9, 2009. Category: Research. Microchip manufacturers have long faced challenges miniaturizing transistors, the key active components in nearly every modern electronic device, which are used to amplify or switch electronic signals.
Now, researchers from the UCLA Henry Samueli School of Engineering and Applied Science, Purdue University and IBM have successfully grown silicon-germanium semiconducting nanowires for potential use in next-generation transistors. These nanowires, which measure from a few tens to a few hundreds of nanometers in diameter and up to several millimeters in length, could help speed the development of smaller, faster and more powerful electronics, according to study co-author Suneel Kodambaka, a UCLA professor of materials science and engineering. The team's research appears in the Nov. 27 issue of the journal Science. "We are excited for two reasons," said Frances Ross, manager of IBM's nanoscale materials analysis department and corresponding author of the study. "One is that we have extended our knowledge of the fundamental physics of the process by which nanowires grow. The other is the improved prospect of using nanowires in high-performance electronic devices." "The nanowires are so small you can place them in virtually anything," Kodambaka said. "Because of their small size, they are capable of having distinctly different properties, compared to their bulk counterparts." The team showed they could create nanowires with layers of different materials, specifically silicon and germanium, that were defect-free and atomically sharp at the junction, critical requirements for making efficient transistors out of the tiny structures. The "sharper" the interface between the material layers (in this case, just one atom, or close to one atom, thick), the better the electronic properties. "We think this study is significant because it provides a solution to the problem of growing sharp interfaces in nanowires, thereby addressing an important limitation in the growth of nanowires," Ross said.
According to Kodambaka, silicon-germanium nanostructures also have thermoelectric applications, in which heat is converted into electricity. "The Jet Propulsion Laboratory uses bulk chunks of silicon-germanium to power their satellites, and now there is a lot of interest in using a similar technology in automobiles. These nanowire

Engineers at Yale University have developed a new breed of micro fuel cell that could serve as a long-lasting, low-cost, and eco-friendly power source for portable electronic devices, such as tablet computers, smartphones, and remote sensors. The researchers describe the novel device online in Small. An alternative to a battery, a fuel cell is an electrochemical device that combines hydrogen and oxygen to produce energy, giving off only water and heat as byproducts. But the materials and methods commonly used for making micro fuel cells are fragile, inefficient, and expensive. Major components of the new device are made of bulk metallic glasses (BMGs), extremely pliable metal alloys that nonetheless are more durable than the metals typically used in micro fuel cells. BMGs can be finely shaped and molded using a comparatively efficient and inexpensive fabrication process akin to processes used in shaping plastics. "These amorphous metal alloys are amazing materials that can be easily shaped into both large and small nanostructures, yet retain suitable properties for a wide range of electrochemical applications," says Andre D. Taylor, an assistant professor of chemical and environmental engineering at the Yale University School of Engineering & Applied Science and a principal investigator of the research. Ryan C.
Sekol, a doctoral student in Taylor's laboratory, is lead author. Silicon and stainless steel are the materials typically used in micro fuel cells. But silicon is brittle and a poor conductor of electricity, and stainless steel is prone to corrosion. This means they require special coatings, which drives up production costs. Fabricating metal components on the nanoscale is also complex and time-consuming. Using bulk metallic glasses solves these problems, the researchers say. BMGs are metal alloys with randomly arranged atoms rather than the orderly, crystalline makeup of ordinary metals. The random atomic arrangement results in a tough but elastic substance, as strong as steel yet malleable and good at conducting electricity, and thus superior to silicon and steel for micro fuel cells. "Using thermoplastic processing, a process we invented at Yale, we can form metallic glasses like plastics, dramatically reducing fabrication costs," says Jan Schroers, a professor of mechanical engineering and materials science at Yale and also a principal investigator of the project. He has pioneered the technique and used it to create complex shapes, including seamless metallic bottles, watchcases, miniature resonators, and biomedical implants. The BMG components of the Yale team's micro fuel cell (the entirety of which measures three

My students often stare in wonder at the radiometer that sits on the window sill of my classroom. A lot of them think it's the temperature that causes the different spinning rates. Others think it's the amount of light. I decided to attempt to see which factor determines it.
This could be a good inquiry activity for students, and it also demonstrates how smartphones are becoming useful data collection devices. I have two lava lamps that rest on one of the window sills in my classroom. They are both a great distraction and a source of fascination and curiosity for my students. It also helps that my students have to learn the concept of density. One of my lava lamps always seems lethargic and, to be frank, quite a disappointment for viewing. I had been thinking that maybe it's a really good observation that might lead to a science inquiry for my students. I did some video and time-lapse filming which may result in an inquiry for my students to do. Take a look at the video and see if it's a good idea. A little late for the holiday season, but better late than never: one of the standards my students have to learn is the repeating pattern of the crystalline lattice. With a little bit of time before break (and after a unit test), my students were able to make some borax crystal holiday ornaments (and they took their ornaments home). The video shows the making-of process, which my students did. After they return from break, we will go into the nitty-gritty science of the crystal formation. This past unit in science we covered states of matter and how they change. Students have to understand how molecules move in each phase and the energy involved. There are a ton of demos that show the phase changes, and this is one of my favorites. All that you need is a large flask, water, a water balloon, a hot plate and tongs. I have my students draw diagrams of how the molecules are arranged and moving in each phase and the transitions in between. They also have to determine if heat energy is being added or taken away in each change. Even in the digital age, I think students benefit from simple pencil-and-paper drawings. The drawings are really models that explain the scientific phenomena.
When the balloon gets pushed into the flask, it is a very dramatic demonstration of a liquid taking up less space than a gas. Discrepant events are the cornerstone of a constructivist science education. A good

processes that would lead to [potassium] enrichment," Mulkidjanian says. The only such places extant today are so-called "vapor-dominated" geothermal systems: locales where water, heated deep within Earth until it becomes steam, reaches the surface, cools and condenses back into elementally enriched liquid pools. Condensed geothermal steam in these pools can have ratios of potassium to sodium ions as high as 75 to 1, and is rich in the other elements of life that have been leached from rock by the hot water. Thus, Mulkidjanian and his colleagues argue that these pools may have been the "hatcheries" of the first cells. The argument matches a perhaps prescient suggestion from Charles Darwin in an 1871 letter: "But if (and oh what a big if) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, etcetera present, that a protein compound was chemically formed, ready to undergo still more complex changes."
\" nobel laureate and geneticist jack szostak of harvard university has also argued that the first cells probably had leaky membranes and that early oceans were not a favorable environment for the origin of life.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6049558742090502, "token_count": 256, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:40.043004"} {"text": "this message will be pushed to the admin ' s iphone instantly. science fiction has suggested artificial limbs stronger than biological counterparts. we are beginning to see this reality in present day research. \u201c carbon - nanotube - based muscles that are 100 times stronger than natural muscle \u201d have already been produced by the research of ray baughman, director of the nanotech institute at the university of texas at dallas. now, he has produced a yarn from carbon nanotubes that can twist 1000 times more than other materials. it is this twisting motion that opens new possibilities and gives the material a potential as strong as a commercial electric motor. what can they do? small - sized nanotech muscles may be useful in some of the tiniest applications, where electrical energy is turned into mechanical energy. normally, we think of artificial muscles like electrical devices that produce the linear motion solenoids and actuators. these new materials have a much wider operating range than devices we now use. some of them use fuel rather than electricity as an energy supply. some are as light as air. artificial muscles are also potentially useful in devices that mimic nature ; robotics ( such as vehicles that stay aloft by flapping wings ) and as potential replacements for damaged human muscles like the heart. due to their wide temperature range, space and exploration also include possible applications. this is \u201c game changing \u201d potential. how do they do it? 
Electricity and an electrolyte are used to produce this effect in the carbon nanotubes: to create the twisting motion, the yarn is connected to an electrode and immersed in an electrolyte. Ions from the electrolyte enter the yarn, which causes it first to swell and then to contract and rotate along its length. Issues and comments: "This is a fascinating new way to provide torsion," says Baughman. What remains to be disclosed is the relative energy cost and efficiency of this material and device; this would give us a benchmark for comparison with present technology, and allow us to evaluate utility and future improvement. Primary source: MIT Technology Review. Photo credit: NASA.

Density Functional Theory: A Practical Introduction. Copyright © 2009 John Wiley & Sons, Inc. Authors: David S. Sholl, Janice A. Steckel. Published online: 11 Aug 2009. Print ISBN: 9780470373170. Online ISBN: 9780470447710. About this book: demonstrates how anyone in math, science, and engineering can master DFT calculations. Density functional theory (DFT) is one of the most frequently used computational tools for studying and predicting the properties of isolated molecules, bulk solids, and material interfaces, including surfaces.
Although the theoretical underpinnings of DFT are quite complicated, this book demonstrates that the basic concepts underlying the calculations are simple enough to be understood by anyone with a background in chemistry, physics, engineering, or mathematics. The authors show how the widespread availability of powerful DFT codes makes it possible for students and researchers to apply this important computational technique to a broad range of fundamental and applied problems. Density Functional Theory: A Practical Introduction offers a concise, easy-to-follow introduction to the key concepts and practical applications of DFT, focusing on plane-wave DFT. The authors have many years of experience introducing DFT to students from a variety of backgrounds. The book therefore offers several features that have proven helpful in enabling students to master the subject, including: problem sets in each chapter that give readers the opportunity to test their knowledge by performing their own calculations; worked examples that demonstrate how DFT calculations are used to solve real-world problems; and further readings listed in each chapter enabling readers to investigate specific topics in greater depth. This text is written at a level suitable for individuals from a variety of scientific, mathematical, and engineering backgrounds. No previous experience working with DFT calculations is needed.

Vizzini: He didn't fall? Inconceivable!
Inigo: You keep using that word. I do not think it means what you think it means.
- William Goldman, The Princess Bride

Excuse me while I temporarily interrupt the genome sequencing series to define a word. Artifacts in the classroom: it's disorienting.
You learn a word in a certain context. You're sure of its meaning, and then you end up in a situation where people use the word in a completely unexpected way and no one else seems bothered by this! I had this happen once with the word "artifact." I had organized a conference, and some workshop presenters were talking about students and protein gels. It was a dark room and there were things on my mind, which, I confess, began to wander a bit. At least it did, until the presenter said that students could use a photocopier and take home artifacts to show their parents. In molecular biology, an experimental artifact is a bad thing. It's something that you see sometimes in experiments, and it isn't a real, meaningful result. In fact, it can be very misleading. For example, imagine that you're trying to extract ancient DNA from a bug caught in amber, and someone's pet fly buzzes through the lab and drowns in your test tube. You would get a result from this experiment all right: you would find that ancient amber contains DNA from Drosophila. That result would be an artifact. So, I was stunned to hear someone, in a science education workshop no less, speaking of artifacts as if they were good things. We were using different definitions. It blows my mind, but it turns out that the education world defines an artifact as a piece of evidence. Lost in translation? Vector is another word that makes me think we need a special dictionary that translates science words between science disciplines. This would work like an English-French dictionary that takes an English word (or vice versa) and finds the French counterpart. Except in this case, we would take a perfectly respectable word that's used in both disciplines and find what the word means in the other discipline. Heck, you can see from our efforts to define a gene that sometimes we need a contextual dictionary to translate words within the same discipline.
Such a dictionary would allow us to take a perfectly respectable molecular biology word, like "vector," and find out what it means in physics or epidemiology. What is a vector

(Science: chemical) The name of a group of minerals characterised by highly perfect cleavage, so that they readily separate into very thin leaves, more or less elastic. They differ widely in composition, and vary in colour from pale brown or yellow to green or black. The transparent forms are used in lanterns, the doors of stoves, etc., being popularly called isinglass. Formerly called also cat-silver, and glimmer. The important species of the mica group are: muscovite, common or potash mica, pale brown or green, often silvery, including damourite (also called hydromica); biotite, iron-magnesia mica, dark brown, green, or black; lepidomelane, iron mica, black; phlogopite, magnesia mica, colourless, yellow, brown; lepidolite, lithia mica, rose-red, lilac. Mica (usually muscovite, also biotite) is an essential constituent of granite, gneiss, and mica slate; biotite is common in many eruptive rocks; phlogopite in crystalline limestone and serpentine. Mica diorite: a schistose rock, consisting of mica and quartz with, usually, some feldspar. Origin: L. mica, crumb, grain, particle; cf. F. mica.

Glossary of eye terms. Accommodation: adjustment of the focusing power of the eye to see objects clearly over a range of distances.
Achieved by a change in the shape of the crystalline lens. There is a reduction in focusing ability with age (presbyopia).
Astigmatism: a refractive error usually caused by toric (like a rugby ball) curvature of the front surface of the eye. Instead of being brought to a focus at one point at the back of the eye, light is focused in two lines at right angles to each other. Astigmatism is easily corrected with glasses to bring the two lines to one point focus.
Bifocal lens: a spectacle lens having two portions with different focusing power. The top portion is usually larger and used for distance vision, while the lower portion is smaller and used for near vision.
Blepharitis: inflammation of the eyelids, due to infection, usually accompanied by crusts or scales on the eyelid margin. Eyes may become red, sore and itchy.
Binocular vision anomalies: eye problems affecting the way a pair of eyes work together.
Cataract: loss of transparency ('clouding') of the crystalline lens of the eye. Mainly associated with age, but also with systemic conditions, trauma, exposure to UV light and Down's syndrome. Treatment, which involves surgical replacement of the cloudy lens, has a very high success rate.
Contrast sensitivity: most tests for contrast sensitivity measure the ability to see low-contrast letters or pictures. Reduced contrast sensitivity is usually associated with difficulty with day-to-day visual tasks.
Dispensing optician: a person who helps choose and fit glasses.
Fundus: the name given to the structures at the back of the eye; includes the retina, optic disc, macula, and retinal and choroidal blood vessels.
Glaucoma: a group of eye conditions causing progressive vision loss. These often symptom-free conditions result in damage to the optic nerve at the back of the eye. The three main eye tests for glaucoma are eye pressure, optic disc appearance and visual fields.
Hypermetropia: also known as 'long sight'.
Hypermetropia is a refractive error of the eye in which light is brought to a focus behind the retina (corrected using positive lenses). Uncorrected hypermetropia may cause problems with near work, for example headaches when reading. Significant uncorrected hypermetropia in childhood may cause other, more …

Wildman, Wesley J. and Robert John Russell. Chaos: A Mathematical Introduction with Philosophical Reflections. Wesley J. Wildman and Robert John Russell's article surveys the mathematical details of a single equation, the logistic equation, which has become a hallmark of this field, at least within the circles of theology and science. The logistic equation displays many of the generic features of chaotic dynamical systems: the transition from regular to apparently random behavior, the presence of period-doubling bifurcation cascades, the influence of attractors, and the underlying characteristics of a fractal. The authors then raise philosophical questions based on the mathematical analysis and conclude with possible theological implications. The logistic equation is a simple quadratic equation, or map, x_{n+1} = k x_n (1 - x_n), which iteratively generates a sequence of states of the system represented by the variable x. The tuning constant k represents the influence of the environment on the system. One starts from an initial state x_0 and a specified value of the tuning constant k to generate x_1; substituting x_1 back into the map generates x_2, and so on. Although incredibly simple at face value, the logistic map actually displays remarkably complex behavior, much of which is still the focus of active scientific research.
The behavior of the iterated sequence produced by the logistic map can be divided into five regimes; the constant k determines which regime the sequence occupies, as well as much of the behavior within that regime. In regime I, the sequence converges to 0. In regime II, the sequence converges to a single positive limit that depends on k. In regime III, bifurcations set in and increase in powers of two as k increases; moreover, the initial conditions have a significant permanent effect on the system in the form of phase shifts. Chaos sets in in regime IV. Here chaotic sequences are separated by densely packed bifurcation regions, and there is maximal dependence on initial conditions. For most values of k the sequences seem to fluctuate at random, and the periodic points found in previous regimes appear to be absent. Nevertheless, for almost all values of k we actually find highly intricate bifurcation structures, and the sequences fall within broad bands, suggesting an underlying orderliness to the system. Finally, in regime V, chaos is found on a Cantor subset of x. There is no universally accepted mathematical definition of chaos capturing all cases of interest. Defining chaos simply as randomness proves too vague, because this term acquires new and more precise shades of meaning in the mathematics of chaos theory.
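The regimes described above are easy to observe numerically. The following sketch is not from the article; it simply iterates x_{n+1} = k x_n (1 - x_n) for a few illustrative values of k (chosen here as examples, not taken from the text) and prints the settled behavior.

```python
# Exploratory sketch: iterate the logistic map x_{n+1} = k*x*(1-x) for several
# values of the tuning constant k, to see the regimes described above.

def logistic_orbit(k, x0=0.2, skip=500, keep=8):
    """Iterate the logistic map, discard a transient, return the next `keep` states."""
    x = x0
    for _ in range(skip):          # let the orbit settle onto its attractor
        x = k * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = k * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(0.8))   # regime I: collapses to 0
print(logistic_orbit(2.8))   # regime II: a single positive fixed point, 1 - 1/k
print(logistic_orbit(3.2))   # regime III: a period-2 oscillation
print(logistic_orbit(3.9))   # regime IV: apparently random fluctuation
```

At k = 2.8 the orbit settles on the fixed point 1 - 1/k ≈ 0.6429, and at k = 3.2 it alternates between exactly two values, the first rung of the period-doubling cascade.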
Defining chaos in terms of sensitive dependence on initial conditions (the butterfly effect) results in the inclusion of many maps that otherwise display no chaotic behavior. The definition adopted here requires a chaotic map to meet three conditions: mixing (the effect of repeated stretching and folding), density of periodic points (a condition suggesting orderliness), and sensitive dependence. Interestingly, in the case of the logistic map and many similar chaotic maps, mixing is the fundamental condition, as it entails the other two. The paper also addresses the question of the predictability of chaotic systems. On the one hand, a chaotic system such as the logistic map is predictable in principle, since the sequence of iterations is generated by a strict governing equation. On the other hand, chaotic systems are eventually unpredictable in practice, since most values of the initial conditions cannot be specified precisely, and even if they could, the information necessary to specify them could not be stored physically. Yet these systems are also temporarily predictable in practice, since one can predict the amount of time that will elapse before mathematical calculations cease to match the state of the system. This leads to a definition of chaotic randomness as a tertium quid between strict randomness (as in one common interpretation of quantum physics) and the complete absence of randomness. What implications does mathematical chaos have for a philosophy of nature? It is superficial to say that the mathematical determinism of chaotic equations requires metaphysical determinism in nature, because of complexities in the experimental testing of the mathematical models used in chaos theory. In particular, it may be very difficult to distinguish phenomenologically between chaos, sufficiently complicated periodicity, and strict randomness, even though these are entirely distinct mathematically.
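The "temporarily predictable" point above can be made concrete. The sketch below is illustrative, not from the paper: it starts two orbits of the logistic map (at k = 4, a standard chaotic parameter, chosen here as an assumption) a distance 1e-10 apart and counts the steps until they visibly disagree.

```python
# Illustrative sketch: sensitive dependence on initial conditions in the
# logistic map at k = 4. Two orbits started 1e-10 apart stay close for a
# while, then diverge to order-one separation.

def diverge_time(x0, delta=1e-10, k=4.0, threshold=0.1, max_steps=1000):
    """Steps until two orbits started `delta` apart separate by `threshold`."""
    a, b = x0, x0 + delta
    for step in range(max_steps):
        if abs(a - b) > threshold:
            return step
        a = k * a * (1 - a)
        b = k * b * (1 - b)
    return max_steps

# The separation roughly doubles each step at k = 4, so on the order of
# log2(0.1 / 1e-10), i.e. a few tens of steps, are expected before the
# prediction breaks down.
print(diverge_time(0.3))
```

The horizon of predictability grows only logarithmically with the precision of the initial condition, which is the practical content of the butterfly effect.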
There are additional practical limitations to the testing of chaotic models of natural systems, including sensitivity to the effects of the environment (such as heat noise or long-range interactions), and the fact that the development of the physical system eventually outpaces even the fastest calculations. Conclusions are drawn from this. On the one hand, the causal whole-part relations between environment and system, the causal connectedness implied in the butterfly effect, and the fact that much of the apparent randomness of nature can now be brought under the umbrella of chaos, are best seen as supporting evidence for the hypothesis of metaphysical determinism. On the other hand, however, there are profound epistemic and explanatory limitations on the testing of chaos theory, due to the peculiar nature of chaotic randomness. In this sense, chaos theory places a fundamental and unexpected new limit on how well the hypothesis of metaphysical determinism can be supported. On the basis of these philosophical conclusions, what relevance does chaos theory have for theology? On the one hand, it will be bad news to those who simply assume that nature is open to the free actions of God and people, and particularly bad news to those who mistakenly appeal to chaos theory to establish this.
On the other hand, chaos theory will be irrelevant to theologians operating with a supervening solution to the problem of divine action, such as Kant's, that is able to affirm human freedom and divine action even in the presence of strict metaphysical determinism. At still another level, chaos theory is good news for the theological project and bad news for polemical determinists. Owing to the fundamental new limitation on the testability of chaos theory, one can never fully exclude the possibility that classical physics as we now have it, including chaos theory, will be replaced by a better model of the world at the classical level, one which allows for divine causality in some way. This opens a window of hope for speaking intelligibly about special, natural-law-conforming divine acts, and it is a window that seems to be impossible in principle to close. The article includes an extended bibliography of textbooks, key technical articles, experimental applications, useful introductions and surveys, and selected works on chaos theory and theology. Contributed by: CTNS / Vatican Observatory.

1. Basic terminology

Bit depth is determined by the number of bits used to define each pixel. The greater the bit depth, the greater the number of tones (grayscale or color) that can be represented. Digital images may be produced in black and white (bitonal), grayscale, or color. A bitonal image is represented by pixels consisting of 1 bit each, which can represent two tones (typically black and white), using the value 0 for black and 1 for white, or vice versa.
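The arithmetic behind these tone counts is simply powers of two: n bits distinguish 2^n tones. A minimal sketch (standard binary arithmetic, not code from the tutorial):

```python
# The number of representable tones is 2 raised to the bit depth; a 24-bit
# color pixel is commonly split into 8 bits each for red, green and blue.

def tones(bit_depth: int) -> int:
    """Number of distinct tones representable at a given bit depth."""
    return 2 ** bit_depth

for bits in (1, 2, 8, 24):
    print(f"{bits:2d}-bit image: {tones(bits):,} tones")

# Packing a 24-bit pixel from three 8-bit channels (an example pixel):
r, g, b = 255, 128, 0
pixel = (r << 16) | (g << 8) | b
print(hex(pixel))   # 0xff8000
```

At 24 bits this gives 16,777,216 values, the "16.7 million" colors quoted for 24-bit images.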
A grayscale image is composed of pixels represented by multiple bits of information, typically ranging from 2 to 8 bits or more. A color image is typically represented by a bit depth ranging from 8 to 24 or higher. With a 24-bit image, the bits are often divided into three groupings: 8 for red, 8 for green, and 8 for blue; combinations of those bits are used to represent other colors. A 24-bit image offers 16.7 million (2^24) color values. Increasingly, scanners capture 10 bits or more per color channel, often outputting 8 bits to compensate for "noise" in the scanner and to present an image that more closely mimics human perception. [Figure: left to right, a 1-bit bitonal, an 8-bit grayscale, and a 24-bit color image.] [Table: binary calculations for the number of tones represented by common bit depths.] © 2000-2003 Cornell University Library / Research Department.

This essay is based on a talk given by the first author to students and staff of the Departamento de Geometría y Topología at the University of Seville in November 1993. The issues presented there have been part of a continued debate and discussion at Bangor over many years, which explains why this is a joint paper. The aim of the talk, and the reason for discussing these topics, was to give students an understanding of, and a sense of pride in, the aims and achievements of their subject, and so help them explain these aims and achievements to their friends and relatives. This pride would itself be expected to contribute to their enjoyment of the subject, whatever their own level of achievement. Because of this, and because of its origin, the tone of the article is principally that of an address to students.
We do not claim to be alone in addressing these questions. Dr Allan Muir (City University) has organised a "How Mathematics Works" group for some years, and there is a similar group in the U.S.A. Many of these issues are discussed in the books by Davis and Hersh [2, 3]. We start with some general questions to which we believe it is helpful for students to be able to formulate some kind of answer. The question for teachers of mathematics at all levels is to what extent, if at all, the training of mathematicians should involve professional discussion of, and assessment in, possible answers to these questions, such as those given or suggested here. It may be thought by some that these questions are beside the point, a waste of time, and not what a real mathematician should be considering. Against this we would like to give a quotation from Albert Einstein (1916): How does a normally talented research scientist come to concern himself with the theory of knowledge? Is there not more valuable work to be done in his field? I hear this from many of my professional colleagues; or rather, I sense in the case of many more of them that this is what they feel. I cannot share this opinion. When I think of the ablest students whom I have encountered in teaching, i.e., those who have distinguished themselves by their independence and judgement and not only mere agility, I find that they have a concern for the theory of knowledge. They like to start discussions concerning the aims and methods of the sciences, and showed unequivocally, by the obstinacy with which they defend their views, that this subject seemed important to them. This is not really astonishing.

… a place marker.
The lack of this concept of zero held up the progress of mathematics for centuries. On a higher level, without the mathematics of error-correcting codes we would not have had the beautiful pictures of Jupiter from Voyager 2. This mathematics is also essential in many aspects of telecommunications and of computers, and in particular for CD players. There is an amusing story about this last application. Negotiations between Sony and the Dutch company Philips about the standards for the CD were held by top management. The Japanese considered Philips' proposal for error correction inferior to theirs, and in the end the Japanese proposal was accepted. Back in Eindhoven, the embarrassed managers called in their science directors to declare that the company did not have sufficient expertise in this area called "coding theory", and to find out where in Europe the real experts could be found. To their dismay, the answer was: "In Eindhoven!", in the person of the Dutch number theorist van Lint! Without the mathematics of cryptography, the current level of electronic financial transactions crossing the world, involving billions of dollars, would not be possible. Currently, the mathematics of category theory, a theory of mathematical structures, is being used to give new insights into future logics and algebras for the design of the next generation of programs and software. The enormous applications of mathematics in engineering, in statistics and in physics are common knowledge. It is also imagined that the role of mathematics is being taken over by the use of supercomputers. It is not so generally realised that these supercomputers are the servants of mathematical and conceptual formulations: the electronics is marvellous in that it does the calculations so quickly and accurately.
For example, body scanners are an application, a realisation, of a piece of 19th-century mathematics expressing how to reconstruct a solid object of varying density from X-ray views through it, where the only measurement is the change in intensity of the ray as it passes through the body, for a large number of varying positions of the ray. The theories of the big bang and of fundamental particles would not be possible without mathematics. There is here a mystery. The Nobel prize-winner E. Wigner has written a famous essay, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". For us, the key word is "unreasonable". He is talking about the surprise that the use of mathematics is able to give predictions which are in accord with experiment to the extent of nine significant figures. How is such astonishing accuracy possible? It seems likely that a full "explanation" of the success of mathematics would need more understanding of language, of psychology, of the structure of the brain and its action, than is at present conceivable. Even worse, the development of such understanding might need, indeed must need, a new kind and type of mathematics. It is still important to analyse the scope and limitations of mathematics. It is also reasonable that such an analysis should be a necessary part of the education and assessment of a student of mathematics. Of what use is a student who does not know such things?
Here then are some quotations from this article: "... the enormous usefulness of mathematics in the physical sciences is something bordering on the mysterious, and ... there is no rational explanation for it." "Mathematics is the science of skilful operations with concepts and rules invented just for this purpose [this purpose being the skilful operation ...]. The principal emphasis is on the invention of concepts." "The depth of thought which goes into the formation of mathematical concepts is later justified by the skill with which these concepts are used." "The statement that the laws of nature are written in the language of mathematics was properly made three hundred years ago [it is attributed to Galileo]; it is now more true than ever before." "The observation which comes closest to an explanation for the mathematical concepts' cropping up in physics which I know is Einstein's statement that the only physical theories which we are willing to accept are the beautiful ones. It stands to argue that the concepts of mathematics, which invite the exercise of so much wit, have the quality of beauty." In order to discuss this, it is of interest to compare mathematics with other subjects, and to link this with the question of the objects of study of a subject, and of its importance. Suppose we ask a few of our fellow scientists why one should study their subject. Answers might run as follows. The astronomer: In astronomy we study the beginnings of the universe, and the flow of time over billions of years, as well as the furthest distances of space. What could be more enthralling? We have some money for this study, with various telescopes over the world, but of course not enough.
The physicist: In physics …

… glory of an apparent solution to a longstanding problem. These questions are not idle. Resources are limited. Any one person's interests are limited. We need a more convincing answer for the support of our subject, and to persuade people to study it. Here is our try. The mathematician: Mathematics is about the study of pattern and structure, and the logical analysis of and calculation with patterns and structures. In our search for understanding of the world, driven by the need for survival and simply by the wish to know what is there and to make sense of it, we need a science of structure in the abstract, and a method of knowing what is true, and what is interesting, about these structures. Thus mathematics in the end underlies, and is necessary for, all these other subjects. This is part of our claim for your attention, and for the support of our studies. Another part of this claim is the fascination and wonder at the new patterns and structures, the surprising relationships, which our study has found. Mathematics also brings humility: we know how hard it can be to decide the truth of but one apparently simple and clear statement. We are aware of the limitations of mathematical truth, that not all that is true can be proved, as shown by the undecidability results of Gödel. You will not find a mathematician writing that the final solution, the unified theory which will solve everything, is at hand. Rather, we are looking for the surprises which show us a new view of the world, and new riches to explore. Experience leads us to expect these to appear. For the mathematician, the world is not only stranger than you imagine, but stranger than you can now imagine.
It is our job to investigate this strangeness. This has already been answered to some extent: mathematics does not study things, but the relations between things. A description of such a relation is what we mean by a "concept". Thus we talk about the distance between towns, and might feel this is less "real" than the towns themselves. Nonetheless, relations between things, and our understanding of these relations, are crucial for our operation in and interaction with the world. In this sense, mathematics has the form of a language. It must be supposed that our ability to operate with concepts, with relationships, had, and maybe continues to have, an evolutionary value. It is also curious in this respect that the achievements of mathematics are generally held by mathematicians to be the solutions of famous problems. Certainly such a solution will bring the solver fame and fortune, or at any rate a certain fame within the world of mathematicians. Yet the history of mathematics and its applications shows that it is the language, methods and concepts of mathematics which bring its lasting value and everyday use. We have mentioned some examples of this earlier. At a more advanced level, we can say that without this language, for example that of groups and of Hilbert spaces, fundamental particle physics would be inconceivable.
Some of the great concepts which have been given rigorous treatment through this mathematicisation are: number, length, area, volume, rate of change, randomness, computation and computability, symmetry, motion, force, energy, curvature, space, continuity, infinity, deduction. Very often the problem in making some new mathematics is, in the words of a master of new concepts, Alexander Grothendieck, "to bring new concepts out of the dark". It is these new concepts that make the difficult easy, which show us what has to be done, which lead the way. More important is the way mathematics deals with and defines concepts by combining them into mathematical structures. These structures, these patterns, show the relations between concepts and their structural behaviour. As said before, the objects of study of mathematics are patterns and structures. These patterns and structures are abstract, a notion discussed below. Here again is a subject which is rarely and not widely studied. There is the comment of Paul Erdős that mathematics is a means of turning coffee into theorems. Perhaps, though, this does not help the beginner too much. So let us look at some of the issues discussed in the books by P. Davis and R. Hersh [2, 3], "The Mathematical Experience" and "Descartes' Dream", particularly the section of the first book on "Inner Issues". This deals with a number of themes. The use of symbols and symbolic notations is one of the characteristics of mathematics, and one which puts off the general public. People will say they were able to do mathematics till it got onto x and y. The manipulation of symbols according to rules is still an important part of the craft of mathematics.
We find we have to teach people who wish to master, say, economics but who are unable to deduce from x + 2 = 4 that x = 2. This makes the understanding of the concepts of economics very difficult. Very complicated relations can be expressed symbolically in a way which can hardly be conveyed in words. This economy which symbols allow is improving continually, as symbols are used in the denotation of advanced concepts and the rules of symbol manipulation are used to model the rules for the concepts. It has been said, in an exaggerated way, that the history of mathematics is the history of improved notation. This reflects the finite nature of intelligence, which requires props and metaphors to help and guide it. Some symbols are in themselves metaphors: examples are =, <, ≤, ⊆, →, ∠, ∫, and so on. Others have acquired strong associations, so that we can use them as metaphors. Symbols are able to express "with economy and precision", to use the words of A. N. Whitehead. The use of particular symbols is something that changes with time, as mathematicians become accustomed to, and find appropriate, a new notation. In some cases a notation, brought about by the laziness of mathematicians, leads to a new theory. For example, expressions of the type (a_11 x_1 + ... + a_1n x_n, ..., a_m1 x_1 + ... + a_mn x_n) get abbreviated over time to Ax, which, as you can see, is simpler to write even without understanding what the displayed line means.
In order to allow for the correct manipulation of this abbreviation, it turned out that the rules for what are called matrices had to be worked out, and these are now widely used in mathematics, science and engineering. To give an example close to the heart of some of our research: the first author has been concerned for many years with whether the linear notation for mathematics is a necessity or a historical accident, based on the needs of printing. The analysis of this linguistic point has led to a new kind of "higher dimensional algebra", in which symbols are related not just to those to the left and to the right, but also up and down, or out of the page, as well. This algebra then becomes closer to, and more able to model, some geometric situations, and this leads to the formulation and proofs of new theorems, to new calculations and insights. This is an essential part of mathematics, and again is one part of what makes mathematics incomprehensible to the general public. As said above, mathematical structures are abstract: they are defined by the relations within them, and they are thought of as non-sensual. The advantages of abstraction are at least threefold. Generalisation has some features in common with abstraction, but usually applies differently.
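The abbreviation Ax discussed above, and the matrix rules that make it behave correctly, can be sketched directly. This is an illustrative example, not from the essay; the matrix and vector values are arbitrary.

```python
# Ax abbreviates the m-tuple (a_11*x_1 + ... + a_1n*x_n, ..., a_m1*x_1 + ... + a_mn*x_n).
# The matrix rules guarantee, for instance, linearity: A(x + y) = Ax + Ay.

def mat_vec(A, x):
    """Multiply an m-by-n matrix (given as a list of rows) by an n-vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2],
     [3, 4],
     [5, 6]]            # a 3-by-2 matrix
x = [10, 1]

print(mat_vec(A, x))    # [12, 34, 56]
```

The point of the notation is exactly that one can write and manipulate Ax without ever displaying the full tuple of sums it stands for.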
Thus a generalisation of the (3, 4, 5) right-angled triangle is Pythagoras' theorem, while an extension is Fermat's last theorem, which says that the equation x^n + y^n = z^n has no solutions in positive integers x, y, z if n > 2. This was recently thought to have been settled, but it seemed there was still a gap in the proof. (This gap has now been filled.) The rigour of the notion of proof is a particular feature of mathematics. It is why mathematics is essential in engineering, safety, physics and so on. The notion of proof, of validity, in mathematics is an aspect of the general question: what is the notion of validity in an area of study? Each area, from social sciences, economics, chemistry, biology, education, law, literature, and so on, has its notion of validity, and the contrast and uses of this notion are of particular interest. The question of what is acceptable as a valid argument in mathematics is still subject to argument and discussion, particularly given the existence of very long proofs (for example, of 15,000 pages), and the use of computers for visualisation, experimentation and calculation. A great mathematician has urged that the major problem of mathematical education is to teach the reality of mathematical objects. What is this reality? In what way do these objects exist? This question has been a matter of major interest to many philosophers of mathematics, but its interest is perhaps in the process of being downgraded. Mathematics is often about processes. The question of the existence of a mathematical structure is maybe like asking whether the game of chess exists.
Clearly it does not exist in the way that tables and chairs exist, but nonetheless it influences many lives, and it passes the cash test. (Does it earn money? The answer is clearly yes, for some, for example world champions and makers of chess equipment.) The relation of mathematical concepts and methods to processes is indicated by the way that memory of muscular action and rhythm are important aspects of mathematical work. A lot of mathematics is concerned with the realisation and understanding of the effects of repetitive processes and methods. Mathematicians are good at understanding and imagining moving things around, such as moving a term from one side of an equation to another, or changing a pattern in space; they use movements of their hands and arms to convey what is happening. The objects and ideas of which mathematicians talk are sometimes a kind of concatenation of a variety of such remembered processes. The representation of these ideas in writing is, by contrast, often bare and sparse, and this is part of the difficulty of learning the use and application of these objects and ideas. On the other hand, it also allows each person to make the interpretation and internalisation most appropriate to themselves. The taming of the infinite, or the enlargement of the imagination to include infinite operations, is one of the joys of mathematics, and also one of its scandals. Are these infinite objects real?
The surprise is that these infinite, possibly unreal, objects can be used to prove finite, real things, and this again is an aspect of the mystery of the subject. Suppose, for example, that these infinite objects are used to prove the safety of a nuclear installation, or of an aircraft landing system. What credence should be placed on such a proof? These are real issues. Those who wish for a practical test should look at the change in Mathematical Reviews since it was started in 1940. This monthly journal contains abstracts of mathematical papers; roughly speaking, a few paragraphs are enough for a five-page paper. The growth in the number of pages over these years is about elevenfold: each month there are now published about 400 large pages of abstracts of mathematical papers. This is indeed the golden age of mathematics, both in quantity and in quality. The aims of this research lie at various levels. One is the advancement of knowledge about particular types of structures which are already well defined. Another is the introduction of the study of new structures, as they have appeared and been shown to be relevant. The importance of such a line of research can hardly be judged till the theory is worked out, and such a theory does not emerge, like Venus Anadyomene, fully formed from the sea. A theory accumulates over a journey of years, and a gut feeling for the importance of a line of investigation is necessary to motivate travel on a long road. We have both been working on this kind of research, as well as on other kinds, for decades. The first author formulated the theme of higher dimensional algebra in the mid-1960s.
In this algebra, symbols are related not just to those to the left or right on a line, but also to those up or down, or even out of the page. The aim was an algebra more closely related to the geometry, and allowing a more general type of composition. The expectation was that this algebra would yield formulations and proofs of new theorems, which would automatically lead to new methods of calculation. This in the end has proved right, with a lot of people joining in the project. For a long time, though (five years, say), all that could be said was that it was possible to draw pictures which suggested that the ideas would have to work. The problem was a lack of framework to express the algebra corresponding to the pictures, to the geometry. This framework was built up gradually, and it became ever more amazing to see how natural and fitting it was, once the ideas were thought about in the \"correct\" manner. Thus, as suggested by Wigner in the quotation given earlier, the aesthetic criteria for a proper theory were nicely satisfied, and the theory became better than the vision which had prompted it. It has to be said that, paradoxically, the secret of success in research is the successful management of failure. For if you never fail, then it is likely that the tasks you have set yourself are simply too easy. Interesting research must have an element of risk. You need strategies for dealing with situations when things go wrong: the problem may have proved too hard, or too easy. What comes next? The analysis of the reasons for failure, and the comparison of these reasons with the reasons for wanting to do the problem in the first place, becomes instructive for future work. We would not like to attempt to give any final answer to this, but all of us should try to formulate some of the aspects that we are looking for.
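One standard way to make "composition in two directions" precise is the interchange law for double composition. This is an illustrative sketch, not the authors' own formulation: for squares a, b, c, d that compose both horizontally and vertically,

```latex
% Interchange law for double composition (illustrative sketch):
% composing a 2 x 2 array of squares row-first or column-first
% gives the same composite.
(a \circ_h b) \circ_v (c \circ_h d) \;=\; (a \circ_v c) \circ_h (b \circ_v d)
```

Both sides describe the same 2 x 2 array of squares; the law says that reading the array row by row or column by column yields the same result, which is exactly the kind of "out of the line" composition the pictures suggest.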
Indeed, as editors of journals, we have to make judgements on this question on a daily basis", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6586761823541041, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 13, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.849160"} {"text": "This is occurring on two fronts. There is first the computational revolution. For computation with numbers, or for graphics presentation, this revolution is well known. Less well known publicly is the computer software which can manipulate symbols and axioms, and other software which can carry out automated reasoning. In principle, these should give mathematicians the power to calculate and reason a millionfold more than they can at present, and to deal with the complexities of systems previously thought to be intractable. The prospective effect of these on the teaching of mathematics has yet to be properly understood and assessed, although a lot of work is in progress. The effect on research has already been considerable and is likely to grow in its influence. A more subtle revolution is the conceptual one. The emphasis on mathematics as the study of structures is finding its mathematicisation in category theory, the mathematical and algebraic study of structures. Category theory has revealed new approaches to the basic concepts of mathematics, such as logic and set theory, and indeed has made respectable the idea that the practice of mathematics needs not one foundation, as traditionally sought, but alternative environments, and a framework for their comparison. These ideas are also important for the progress of computer science, as for example in showing new approaches to data structures. One of the pleasures of mathematics is the way it operates on various levels, which then interact. So the algebraic study of mathematical structures has itself led to new mathematical structures.
Some of these structures have had notable applications in mathematics and in physics. Nevertheless, there are still many current dangers for mathematics. There is a general lack of appreciation of what mathematicians have accomplished, and of the importance of mathematics. Some of this has come about through mathematicians themselves failing to define and explain their subject in a global sense to their students, to the public, and to government and industry. It is possible for a student to get a good degree in mathematics without any awareness that research is going on in the subject. Another danger is the growing reliance on computers as a black box to give the answer, without understanding of the processes involved, or of the concepts which are intended to be manipulated. So both the scope and the limitations of the computer fail to be understood, the mathematical basis is neglected and perhaps fails to be developed, and the computer may be used in ways which are inappropriate, or simply limited by the software design. It is said that some engineering firms are dispensing with their mathematical research departments in favour of engineers manipulating software packages. Will this ensure the safety or reliability of the product, and will it allow the use", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6055386660764039, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 15, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.851931"} {"text": "or simply limited by the software design. It is said that some engineering firms are dispensing with their mathematical research departments in favour of engineers manipulating software packages. Will this ensure the safety or reliability of the product, and will it allow the use of the most advanced mathematical concepts?
If these dangers are to be averted, then an increased understanding and appreciation of the questions with which we started are essential. There may be ways of speeding up the process of transfer from the conceptual foresight of the mathematician to the realisation in a scientific or technological application. To find them, we need in society a real understanding of the work of mathematicians, and of the way mathematics has played a role in the society in which we live. It is our responsibility to the subject we love to find ways of developing this understanding. Acknowledgements: many of the questions raised in this article were discussed with students of the final-year \"Maths in Context\" course we ran together, and also with students in the course \"Ideas in Maths\" for first-year honours mathematics students. The contributions of these students through discussions and essays have strongly influenced our thinking. We would also like to thank Roger Bowers and Brian Denton, who have run a course on \"Mathematics in Society\" at Liverpool University. For a discussion of art and mathematics, go to Symbolic Sculptures and Mathematics.", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6170679443824842, "token_count": 283, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 16, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.852564"} {"text": "This is a high-school-level course based on the California State Standards and can help prepare for the AP Biology and SAT II Biology Subject Tests. The biology sequence at QuantumCamp focuses on the basic concepts and theories of modern biology.
Through a series of hypothesis-driven experiments, hands-on activities, and active independent studies, students will develop ideas on the following topics. Using optical microscopes with magnification up to 1600x, students will investigate major cell types and their cellular structure. They are expected to unveil the cell cycle themselves, without any information given by the instructor, just as scientists did when making discoveries from the 1920s to the 1950s! The class will also explore major cell activities, including proliferation, photosynthesis and respiration, differentiation, and cell aging and death. From this course, students will find out why carbon is so unique in the chemistry of life. With a good understanding of carbon-centered chemical bonds, students will construct common macromolecules found in living things and explore the properties of these molecules. In this course, students will follow Gregor Mendel's and Thomas Morgan's discoveries in classic genetics into modern molecular genetics. Classic genetics studies Mendelian inheritance and the chromosome theory of inheritance developed by Morgan. DNA, RNA, protein, and phenotype are the core concepts of molecular genetics, which studies the structure and function of genes at a molecular level. The advanced biology sequence consists of three 30-hour workshops, for a total of 90 hours of in-class material. Academic-year sequences are held in 10-week classes, each 3 hours long. This sequence is designed to prepare students for success on the AP Biology exam and/or the SAT II Biology Subject Test. Classes consist of hands-on activities, practice problems, and concept synthesis. Academic-year students are required to complete a minimal amount of practice problems.
Students seeking A-G fulfillment will need to work with their parent schools in order to have this course satisfy A-G requirements.", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.6310185414900892, "token_count": 397, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.922342"} {"text": "Molecules are built up from the atom, which is the basic unit of any chemical element. The atom in turn is made from the proton, neutron, and electron. It turns out that protons and neutrons are made of varieties of a still smaller particle called the quark. At this time it appears that the two basic constituents of matter are the lepton (of which the electron is one type) and the quark; there are believed to be six types of each. Each type of lepton and quark also has a corresponding antiparticle: a particle that has the same mass but opposite electrical charge and magnetic moment. An isolated quark has never been found \u2014 quarks appear almost always to be found in pairs or triplets with other quarks and antiquarks (the resulting particles being classed as hadrons, more than 200 of which have been identified). Two theoretically predicted five-quark particles, called pentaquarks, have been produced in the laboratory. Four- and six-quark particles are also predicted but have not been found. The most familiar lepton is the electron; the other five leptons are the muon, the tau particle, and the three types of neutrino associated with each: the electron neutrino, the muon neutrino, and the tau neutrino. The six quarks have been whimsically named up, down, charm, strange, top (or truth), and bottom (or beauty); the top quark, which has a mass greater than an entire atom of gold, is about 35 times heavier than the next biggest quark and may be the heaviest particle nature has ever created.
The quarks found in ordinary matter are the up and down quarks, from which protons and neutrons are made. A proton, for instance, consists of two up quarks and a down quark, and a neutron consists of two down quarks and an up quark. The pentaquark consists of two up quarks, two down quarks, and the strange antiquark. (Quarks have fractional charges of one-third or two-thirds of the basic charge of the electron or proton.) The elementary particles of matter interact with one another through four distinct types of force: gravitation, electromagnetism, and the forces from strong interactions and weak interactions. A given particle experiences certain of these forces, while it may be immune to others. The", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.660518652733058, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.982084"} {"text": "of matter interact with one another through four distinct types of force: gravitation, electromagnetism, and the forces from strong interactions and weak interactions. A given particle experiences certain of these forces, while it may be immune to others. The gravitational force is experienced by all particles. The electromagnetic force is experienced only by charged particles, such as the electron and muon. The strong nuclear force is responsible for the structure of the nucleus, and only particles made up of quarks participate in the strong nuclear interaction or force. Other particles, including the electron, muon, and the three neutrinos, do not participate in the strong nuclear interactions, but only in the weak nuclear interactions associated with particle decay. Each force is carried by an elementary particle. The electromagnetic force, for instance, is mediated by the photon, the basic quantum of electromagnetic radiation.
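The quark compositions above can be checked by summing fractional charges. A minimal sketch (an illustration, not from the source; the single-letter quark labels are assumed names):

```python
from fractions import Fraction

# Quark charges in units of the proton charge e (s_bar is the
# strange antiquark, with charge opposite to the strange quark).
CHARGE = {
    "u": Fraction(2, 3),      # up
    "d": Fraction(-1, 3),     # down
    "s_bar": Fraction(1, 3),  # strange antiquark
}

def total_charge(quarks):
    """Sum the fractional charges of a list of constituent quarks."""
    return sum(CHARGE[q] for q in quarks)

print(total_charge(["u", "u", "d"]))                 # proton -> 1
print(total_charge(["d", "d", "u"]))                 # neutron -> 0
print(total_charge(["u", "u", "d", "d", "s_bar"]))   # pentaquark -> 1
```

Exact rational arithmetic makes it easy to see why only integer total charges appear for the hadrons listed in the text.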
The strong force is mediated by the gluon, the weak force by the W and Z particles, and gravity is thought to be mediated by the graviton. Quantum field theory applied to the understanding of the electromagnetic force is called quantum electrodynamics, and applied to the understanding of the strong interactions it is called quantum chromodynamics. In 1979 Sheldon Glashow, Steven Weinberg, and Abdus Salam were awarded the Nobel Prize in Physics for their work in demonstrating that the electromagnetic and weak forces are really manifestations of a single electroweak force. A unified theory that would explain all four forces as manifestations of a single force is being sought. The behavior of all known subatomic particles can be described within a single theoretical framework called the Standard Model. This model incorporates the quarks and leptons as well as their interactions through the strong, weak, and electromagnetic forces. Only gravity remains outside the Standard Model. The force-carrying particles are called gauge bosons, and they differ fundamentally from the quarks and leptons. The fundamental forces appear to behave very differently in ordinary matter, but the Standard Model indicates that they are basically very similar when matter is in a high-energy environment. Although the Standard Model does a credible job of explaining the interactions among quarks, leptons, and bosons, the theory does not include an important property of elementary particles: their mass. The lightest particle is the electron, and the heaviest particle is believed to be the top quark, which weighs at least 200,000 times as much as an electron. In 1964 Scottish physicist Peter W.
Higgs of Edinburgh University proposed a mechanism", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.7131207168545031, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.983099"} {"text": ". The lightest particle is the electron, and the heaviest particle is believed to be the top quark, which weighs at least 200,000 times as much as an electron. In 1964 Scottish physicist Peter W. Higgs of Edinburgh University proposed a mechanism that provided a way to explain how the fundamental particles could have mass. Higgs theorized that the whole of space is permeated by a field, now called the Higgs field, similar in some ways to the electromagnetic field. As particles move through space they travel through this field, and if they interact with it they acquire what appears to be mass. A basic part of quantum theory is wave-particle duality: all fields have particles associated with them. The particle associated with the Higgs field is the Higgs boson, a particle with no intrinsic spin or electrical charge. Although it is called a boson, it does not mediate force as the other bosons do (see below). The Higgs boson has not yet been observed. Finding it is the key to discovering whether the Higgs field exists, whether Higgs's hypothesis for the origin of mass is indeed correct, and whether the Standard Model will survive. Two types of statistics are used to describe elementary particles, and the particles are classified on the basis of which statistics they obey. Fermi-Dirac statistics apply to those particles restricted by the Pauli exclusion principle; particles obeying Fermi-Dirac statistics are known as fermions. Leptons and quarks are fermions. Two fermions are not allowed to occupy the same quantum state.
Bose-Einstein statistics apply to all particles not covered by the exclusion principle, and such particles are known as bosons. The number of bosons in a given quantum state is not restricted. In general, fermions compose nuclear and atomic structure, while bosons act to transmit forces between fermions; the photon, gluon, and the W and Z particles are bosons. Basic categories of particles have also been distinguished according to other particle behavior. The strongly interacting particles were classified as either mesons or baryons; it is now known that mesons consist of quark-antiquark pairs and that baryons consist of quark triplets. The meson class members are more massive than the leptons but generally less massive than the proton and neutron, although some mesons are heavier than these particles. The lightest members of the baryon class are the proton and neutron,", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6908595077307453, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.984001"} {"text": "used to describe ordinary spin (the intrinsic angular momentum of elementary particles). Isotopic spin actually has nothing to do with spin, but is represented by a vector that can have various orientations in an imaginary space known as isotopic spin space. Isotopic spin is conserved only in the strong interactions. Closely related to conservation laws are three symmetry principles that apply to changing the total circumstances of an event rather than changing a particular quantity. The three symmetry operations associated with these principles are: charge conjugation (C), which is equivalent to exchanging particles and antiparticles; parity (P), which is a kind of mirror-image symmetry involving the exchange of left and right; and time reversal (T), which reverses the order in which events occur.
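The Fermi-Dirac and Bose-Einstein statistics described earlier can be contrasted numerically with the standard mean-occupation formulas. A minimal sketch (an illustration, not from the source), with energies measured from the chemical potential in units of kT:

```python
import math

def fermi_dirac(e):
    # Mean occupation of a fermion state at energy e (units of kT above
    # the chemical potential): never exceeds 1, reflecting the Pauli
    # exclusion principle.
    return 1.0 / (math.exp(e) + 1.0)

def bose_einstein(e):
    # Mean occupation of a boson state (e > 0 required): unbounded as
    # e -> 0, since any number of bosons may share a quantum state.
    return 1.0 / (math.exp(e) - 1.0)

for e in [0.1, 0.5, 1.0, 2.0]:
    assert fermi_dirac(e) < 1.0  # at most one fermion per state
    print(f"E={e:.1f}  FD={fermi_dirac(e):.3f}  BE={bose_einstein(e):.3f}")
```

The fermion occupation stays below 1 at every energy, while the boson occupation can be arbitrarily large near zero energy, which is the numerical face of the "not restricted" statement above.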
According to the symmetry principles (or invariance principles), performing one of these symmetry operations on a possible particle reaction should result in a second reaction that is also possible. However, it was found in 1956 that parity is not conserved in the weak interactions, i.e., there are some possible particle decays whose mirror-image counterparts do not occur. Although the symmetries are not conserved individually, the combination of all three operations performed successively is conserved; this law is known as the CPT theorem. The first subatomic particle to be discovered was the electron, identified in 1897 by J. J. Thomson. After the nucleus of the atom was discovered in 1911 by Ernest Rutherford, the nucleus of ordinary hydrogen was recognized to be a single proton. In 1932 the neutron was discovered. An atom was seen to consist of a central nucleus \u2014 containing protons and, except for ordinary hydrogen, neutrons \u2014 surrounded by orbiting electrons. However, other elementary particles not found in ordinary atoms immediately began to appear. In 1928 the relativistic quantum theory of P. A. M. Dirac hypothesized the existence of a positively charged electron, or positron, which is the antiparticle of the electron; it was first detected in 1932. Difficulties in explaining beta decay (see radioactivity) led to the prediction of the neutrino in 1930, and by 1934 the existence of the neutrino was firmly established in theory (although it was not actually detected until 1956). Another particle was also added to the list: the photon, which had first been suggested by Einstein in 1905 as part of his quantum theory of the photoelectric effect.
The next particles discovered were related to attempts to explain the strong interactions,", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6479688879091322, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 4, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.986300"} {"text": "). Another particle was also added to the list: the photon, which had first been suggested by Einstein in 1905 as part of his quantum theory of the photoelectric effect. The next particles discovered were related to attempts to explain the strong interactions, or strong nuclear force, binding nucleons (protons and neutrons) together in an atomic nucleus. In 1935 Hideki Yukawa suggested that a meson (a charged particle with a mass intermediate between those of the electron and the proton) might be exchanged between nucleons. The meson emitted by one nucleon would be absorbed by another nucleon; this would produce a strong force between the nucleons, analogous to the force produced by the exchange of photons between charged particles interacting through the electromagnetic force. (It is now known, of course, that the strong force is mediated by the gluon.) The following year a particle of approximately the required mass (about 200 times that of the electron) was discovered and named the mu meson, or muon. However, its behavior did not conform to that of the theoretical particle. In 1947 the particle predicted by Yukawa was finally discovered and named the pi meson, or pion. Both the muon and the pion were first observed in cosmic rays. Further studies of cosmic rays turned up more particles. By the 1950s these elementary particles were also being observed in the laboratory as a result of particle collisions produced by a particle accelerator. One of the current frontiers in the study of elementary particles concerns the interface between that discipline and cosmology.
The known quarks and leptons, for instance, are typically grouped in three families (where each family contains two quarks and two leptons); investigators have wondered whether additional families of elementary particles might be found. Recent work in cosmology pertaining to the evolution of the universe has suggested that there could be no more than four families, and the cosmological theory has been substantiated by experimental work at the Stanford Linear Accelerator and at CERN, which indicates that there are no families of elementary particles other than the three that are known today. See S. Glashow, Interactions: A Journey Through the Mind of a Particle Physicist and the Matter of This World (1988); L. M. Lederman and D. N. Schramm, From Quarks to the Cosmos (1989). [Table headers: particle | symbol | mass (MeV/c^2) | electric charge]", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6351447499946189, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 5, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:44.987235"} {"text": "Definition: (1) A method to detect and correct errors by adding bits derived from a block or string of bits to the block. (2) An algorithm to compute bits characteristic of a block, based on the algebra of polynomials over the integers modulo 2. (3) The characteristic bits of a block. Also known as CRC. Note: Large blocks may be probabilistically compared by precalculating the CRC for each block, then comparing their CRCs. If the CRCs are different, the blocks are different. If the CRCs match, there is a small chance that the blocks are actually different; this probability may be made arbitrarily smaller with more CRC bits. Many transmission errors may be detected, and some corrected, by recalculating the CRC and comparing it with the transmitted CRC. Contributed by Arvind <firstname.lastname@example.org> May 2002.
If you have suggestions, corrections, or comments, please get in touch with Paul E. Black. Entry modified 3 August 2009. HTML page formatted Tue Dec 6 16:16:32 2011. Cite this as: Paul E. Black, \"cyclic redundancy check\", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 3 August 2009. (Accessed today.) Available from: http://www.nist.gov/dads/html/cyclicredundancycheck.html", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6669069333489588, "token_count": 314, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-26T15:45:45.802518"}
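The polynomial-over-GF(2) view in definition (2) of the CRC entry can be illustrated with bitwise long division. This is a minimal sketch, not the dictionary entry's own code; the 4-bit generator polynomial x^3 + x + 1 (bits 1011) is an assumed example:

```python
def crc_remainder(data_bits, poly_bits):
    """Compute CRC check bits by polynomial long division over GF(2).

    The message is multiplied by x^n (n zero bits appended, where n is
    the CRC width) and XOR-divided by the generator; the remainder is
    the CRC.
    """
    n = len(poly_bits) - 1
    bits = list(data_bits) + [0] * n
    for i in range(len(data_bits)):
        if bits[i]:
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p
    return bits[-n:]

def crc_check(data_bits, crc, poly_bits):
    """Receiver side: (message || crc) must leave a zero remainder."""
    n = len(poly_bits) - 1
    bits = list(data_bits) + list(crc)
    for i in range(len(data_bits)):
        if bits[i]:
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p
    return all(b == 0 for b in bits[-n:])

# Assumed example: generator x^3 + x + 1 -> bits 1011.
poly = [1, 0, 1, 1]
msg = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0]
crc = crc_remainder(msg, poly)
print(crc)                        # -> [1, 0, 0]
print(crc_check(msg, crc, poly))  # -> True
```

A flipped message bit changes the remainder, so recomputing the CRC at the receiver detects the error, as the entry's note describes; more CRC bits (a longer generator) shrink the chance that two different blocks share a CRC.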