{"text": "this article was originally published in the july / august 1992 issue of home energy magazine. some formatting inconsistencies may be evident in older archive content. | home energy home page | back issues of home energy | home energy magazine online july / august 1992 trends in energy trends in energy is a bulletin of residential energy conservation issues. it covers items ranging from the latest policy issues to the newest energy technologies. if you have items that would be of interest, please send them to : trends department, home energy, 2124 kittredge st., no. 95, berkeley, ca 94704. radiant barrier updatewhen home energy last covered radiant barriers ( may / june ' 89 and nov / dec ' 89 ), the level of confusion and outright misstatements associated with radiant barriers were at an all - time high. some of the misinformation came from manufacturers who extrapolated limited test information to validate erroneous claims. the research by that point was too scanty to support any claims to speak of. major funding by the radiant barrier industry for an independent study evaporated shortly afterward. to this day, much less is known about the effectiveness of radiant barriers - the thin sheets or coatings of reflective materials that virtually stop transfer of infrared energy - than about any other type of insulation. but since 1989, the u. s. department of energy has sponsored more research into some of the nagging questions about dust accumulation and moisture, and has finally released the long - awaited radiant barrier fact sheet to help consumers and contractors understand the knowledge accumulated to date. still, no simple ratings ( such as r - values for conventional insulation ) have yet been developed to aid the public in decision - making about radiant barriers. until recently, part of the problem with studying dust and moisture has been that most radiant barrier testing has been field testing. 
Now, controlled environments such as the Large Scale Climate Simulator at Oak Ridge National Laboratory (see "Convective Loss in Loose-Fill Attic Insulation," HE, May/June '92, p. 27-30) are available and are being effectively used to analyze the complex nature of building systems and thermal envelopes. When the radiant barrier is placed on the attic floor, dust does become a problem. As dust builds up, insulating value is reduced by as much as half over a period of one to ten years. For reflective barriers attached to rafters, shiny side down, dust appears to be little problem. These findings are integrated into the fact sheet in the form of tables.

...results and implement a process to obtain such protection. Besides being the center director, Dr. Subiyanto together with his team of researchers at LIPI dedicates much of their time to researching and developing bio(nano)composites from natural fibers and the waste of agrobusiness (such as rice husks), especially for structural-materials applications. It is one of his interests and dreams to be able to construct earthquake-friendly buildings with the materials his team is developing. His team has developed laminated veneer lumber (LVL) from old rubber trees for panel/partition and structural building materials. This LVL has been characterized to possess strength equivalent to second-class lumber such as teakwood, meranti and keruing. A factory has been built to scale up production of the LVL, and the team is currently searching for industry partnerships to commercialize their research in construction-materials applications.
Another material, based on bamboo composite, is also being developed to substitute for nails and bolts in earthquake-friendly building applications. Another interesting ongoing project, in collaboration with Kyoto University, is the development of cellulose-nanofiber-reinforced composites, believed to be as strong as steel, as thermally stable as glass, and as flexible as plastic. Together with Kyoto University in Japan, the team has developed a transparent polymeric nanocomposite using a web-like bacterial cellulose nanofiber network as the mechanical reinforcing agent. Eventually, this material could be used for many applications, including OLEDs and other electronic components, building materials, and automotive parts. The team has targeted a 20% reduction in fuel consumption when cellulose nanofiber composite is used as reinforcement for the automobile body; this automotive application is being developed together with a prominent automotive manufacturer in Japan. It is encouraging to know that developing countries such as Indonesia are actively pursuing nanotech R&D and application development in collaboration with developed countries such as Japan to accelerate the commercialization of nanotechnology. Japan offers advanced R&D capability and commercialization experience, while Indonesia provides natural and human resources. This partnership will accelerate Indonesian nanotechnology development and the advancement of Indonesian R&D capabilities. [Photo: samples of the bio-composite]

Describing Motion with Diagrams

Visit The Physics Classroom's Flickr galleries and take a visual overview of 1-D kinematics.
Introduction to Diagrams

Throughout the course, there will be a persistent appeal to your ability to represent physical concepts in a visual manner. You will quickly notice that this effort to provide visual representations of physical concepts permeates much of the discussion in The Physics Classroom Tutorial. The world that we study in physics is a physical world, a world that we can see. And if we can see it, we certainly ought to visualize it. And if we seek to understand it, then that understanding ought to involve visual representations. So as you continue your pursuit of physics understanding, always be mindful of your ability (or lack of ability) to visually represent it. Monitor your study and learning habits, asking whether your knowledge has become abstracted to a series of vocabulary words that have (at least in your own mind) no relation to the physical world they seek to describe. Your understanding of physics should be intimately tied to the physical world, as demonstrated by your visual images. Like the study of all of physics, our study of 1-dimensional kinematics will be concerned with the multiple means by which the motion of objects can be represented. Such means include the use of words, graphs, numbers, equations, and diagrams. Lesson 2 focuses on the use of diagrams to describe motion and on the two types of diagrams most commonly used to do so. Begin cultivating your visualization skills early in the course. Spend some time on the rest of Lesson 2, seeking to connect the visuals and graphics with the words and the physical reality.
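One common diagram of this kind plots an object's position at equal time intervals, so that the spacing between marks reveals speeding up or slowing down. As a minimal sketch (the function name, numbers, and text rendering are all illustrative assumptions, not part of the tutorial):

```python
# A minimal text rendering of a motion diagram: one '*' per snapshot of the
# object's position at equal time intervals. For an accelerating object the
# dots start bunched together and spread out over time.
def dot_diagram(x0=0.0, v0=0.0, a=2.0, dt=0.5, steps=6, scale=2):
    """Render x(t) = x0 + v0*t + (1/2)*a*t^2 at equal time steps as text."""
    positions = [x0 + v0 * (i * dt) + 0.5 * a * (i * dt) ** 2 for i in range(steps)]
    width = int(scale * positions[-1]) + 1
    line = [" "] * width
    for x in positions:
        line[int(scale * x)] = "*"  # mark each snapshot position
    return "".join(line)

print(dot_diagram())  # dots bunch up at the start, spread out later
```

Reading the output left to right is reading the motion in time: widening gaps between marks mean increasing speed, the visual signature of acceleration.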
And as you proceed through the remainder of the Unit 1 lessons, continue to make these same connections.

...and epidermal irritations. As sand systems are being explored to meet these requirements, the metalcasting industry has pressed refractory coating manufacturers to produce coatings capable of enhancing the performance of resin/sand systems. To date, a refractory coating's primary function has been to enhance casting surface finish and minimize metal-penetration defects. In most U.S. core applications, the final dry coating deposit typically is limited to 0.004-0.010 in. (0.10-0.25 mm) in order to compensate for dimensional changes associated with coating cores. In Europe, heavier dry coating deposits ranging from 0.010-0.015 in. (0.25-0.4 mm) have demonstrated some anti-veining characteristics, which implies that heavier coating deposits may reduce thermo-mechanical stress development in resin-bonded sand systems. One approach to help prevent thermal distortion is to reduce the heat transfer through the coating and into the core. The primary method for this is to lower the thermal conductance of the refractory coating, that is, to increase its insulation. (Thermal conductance is a measure of a material's ability to transfer heat per unit time through a unit area of the material under a temperature gradient across its thickness; it is measured in watts per square meter per kelvin.) Refractories with insulating characteristics are necessary to reduce heat transfer.
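The role thermal conductance plays here can be sketched with one-dimensional steady-state conduction. This is an illustrative calculation under assumed values, not measured data from the study; the conductivities, area, and temperature difference below are made up to show the contrast between a conductive and an insulating refractory.

```python
# Sketch of how a coating's thermal conductance governs heat flow into a
# core (1-D, steady state). All numeric values are illustrative assumptions.
def conductance(k, thickness):
    """Thermal conductance C = k / L, in W/(m^2 K), for conductivity k
    in W/(m K) and layer thickness L in meters."""
    return k / thickness

def heat_flow(k, thickness, area, dT):
    """Steady-state conduction through the layer: Q = (k/L) * A * dT, in watts."""
    return conductance(k, thickness) * area * dT

L = 0.2e-3   # 0.2 mm dry coating deposit (the heavier thickness in the study)
A = 0.01     # 0.01 m^2 of coated surface (assumed)
dT = 1000.0  # hot metal vs. core interior, K (assumed)

q_graphite = heat_flow(100.0, L, A, dT)  # high-conductivity refractory (assumed k)
q_de = heat_flow(0.1, L, A, dT)          # insulating refractory (assumed k)
```

With these assumed conductivities, the insulating layer passes a thousand times less heat for the same thickness, which is the qualitative point of substituting diatomaceous earth for graphite.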
The insulating characteristics of the refractory particles in a coating are controlled by multiple factors, including:
* particle chemistry, shape, crystalline structure and density;
* the thermal conductivity of the refractory (the heat energy transferred per unit time, per unit area, across a unit temperature gradient);
* the thickness and alignment of the particles in the deposit.

Investigations were performed to compare diatomaceous earth (DE), a low-thermal-conductance refractory (high-insulation coating), to graphite (G), a high-thermal-conductance refractory (low-insulation coating), at temperatures and pressures that simulated pouring cast iron against a coated phenolic urethane
coldbox (PUCB) disc (Fig. 1). The effectiveness of the two types and thicknesses of refractories in preventing heat transfer and thermal expansion in the discs was measured through changes in the thermal distortion curves of the discs and by comparing the heat transfer ([T.sub.transfer]) from the thermal source to the experimental discs. [Figure 1 omitted]

Calculating Heat Transfer

Coating thicknesses of 0.004 and 0.008 in. (0.1 and 0.2 mm) were selected as representative of industry practices and because they are the thinnest and heaviest surface deposits that could be applied with the experimental one-refractory-component coatings. As seen in Fig. 2, refractory layers are not strictly limited to the surface of the discs, but also penetrate into the interstitial space between the sand grains of the disc. [Figure 2 omitted] In addition, the transfer of heat from the contact point at the thermal hot surface used to heat the discs contributes a thermal joint conductance factor to the heat transfer. As a consequence, no attempt was made to calculate the actual thermal conductance of the two refractories. Instead, the rate of heat loss from the thermal hot surface per unit of time was recorded during each disc heating cycle to determine and compare the amount and rate of heat transfer. Heat loss from ...

...(0.1-mm) DE demonstrated slight expansion for 110 sec.
with a slight plastic deformation at the end of the test. [DE.sub.0.008] showed only slight plastic deformation. The thermal distortion was reduced by 42.2% at the 0.004-in. (0.1-mm) deposit and by 62.2% at the 0.008-in. (0.2-mm) deposit. With respect to the systems tested, there was a significant difference in thermal distortion (Table 4, Fig. 3). The G specimens had more thermal distortion than the controls, and the DE specimens had less thermal distortion than the control discs.

Comparing Hot Surface Temperature

Certain trends were observed while comparing the data for average hot surface temperature. The [G.sub.0.004], [G.sub.0.008] and control samples caused a thermal gradient at the hot surface and, after 20 sec., were reduced to 2,039, 2,075 and 2,102F (1,115, 1,135 and 1,150C). The temperatures then began the return to the steady-state condition (Fig. 4). The G refractory increased the heat transfer (relative to the control) by 10.6% at the 0.004-in. (0.1-mm) deposit and by 12% at the 0.008-in. (0.2-mm) deposit. Both G samples demonstrated a maximum heat loss at 19 sec., with a change in temperature of -276 and -245F (-171 and -154C). The average calculated heat transfer for graphite was -8.55C [s.sup.-1]. The graphite systems demonstrated a 12% increase in heat transfer from the hot surface when compared to the uncoated control disc. [Figure 4 omitted] The thermal gradient at the hot surface after 20 sec. for the DE specimens showed an insulative
property and only dropped to 2,246 and 2,210F (1,230 and 1,...

At HA International, Toledo, Ohio. Professor at Western Michigan Univ., Kalamazoo, Mich.

Table 1. Refractory coating properties relative to coated discs.

Coating | % solids | Brookfield visc. (cP) | Thixotropic index | Median particle size (µm) | Surface tension (dyn/cm²)
A [G.sub.0.004] | 22.13 | 330 | 1.74 | 9.38 | 31.64
B [G.sub.0.008] | 24.72 | 609 | 1.74 | 9.38 | 30.94
C [DE.sub.0.004] | 23.32 | 259 | 1.76 | 9.28 | 34.32
D [DE.sub.0.008] | 25.07 | 460 | 1.76 | 9.28 | 34.58

Table 2. Properties of sand.

Source: MI | AFS/GFN: 48 | Shape: subangular | Screens: 4 | % resin: 1.25 | Roundness/sphericity (Krumbein): 0.5/0.7 | pH: 7.2-8.4

Table 3. Changes in disc weight caused by dry coating.

Refractory coating | Dry coating thickness (in.) | Mean coated disc wt. (g) | Refractory penetration (sand grains)
G (high thermal conductance) | 0.004 | 25.967 | 5
G (high thermal conductance) | 0.008 | 26.123 | 2
DE (low thermal conductance) | 0.004 | 26.151 | 5
DE (low thermal conductance) | 0.008 | 26.260 | 2
Control (uncoated) | 0 | 25.616 | 0

Table 4.
Physical and thermo-mechanical properties of the ref...

Continuous flow: a method of analysis where sample material is moved through a series of conversion and purification steps within a continuously flowing stream of carrier gas, typically helium. In most cases, this method allows for a single measurement of a sample.

Continuous flow interface: the carrier gas flow rate from a continuous flow system is typically an order of magnitude higher than an isotope ratio mass spectrometer (IRMS) can handle. A reduction in flow immediately upstream of the IRMS is achieved with a plumbing interface.

Dual inlet: a method of analysis where a sample gas and a reference gas are alternately admitted into the isotope ratio mass spectrometer (IRMS). This method typically allows for many measurements of a single sample and is generally considered the most precise way to use an IRMS.

Elemental analyzer: an instrument used to combust solid or liquid material in a controlled excess-oxygen reaction column. A carrier gas (typically helium) is used to move the combustion products through a series of oxidation or reduction as well as purification steps.

Light stable isotopes: isotopes are forms of an element with varying numbers of neutrons but identical numbers of protons and electrons. Some isotopes, with increasing numbers of neutrons, are unstable, or radioactive. Elements with a relatively low mass are considered light. For example, carbon has a mass of 12 and is considered light; lead, with a mass of 207, would be considered heavy. In the environmental stable isotope community, the light isotopes are generally considered to be those of hydrogen, carbon, nitrogen, oxygen, and sulfur.
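The δ notation defined in this glossary is conventionally reported in per mil (parts per thousand). A quick numeric sketch, under the usual convention δ = (R_sample/R_standard − 1) × 1000; the reference ratio below is the commonly quoted VPDB ¹³C/¹²C value, and the sample ratio is made up for the example:

```python
# Illustrative delta-notation calculation (per mil). The sample ratio is a
# hypothetical value invented for this example.
def delta_per_mil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.0112372    # 13C/12C of the VPDB standard (commonly quoted value)
r_sample = 0.0110951  # hypothetical sample 13C/12C ratio

d13c = delta_per_mil(r_sample, R_VPDB)
print(f"d13C = {d13c:.2f} per mil")
```

A sample whose ratio exactly matches the standard gives δ = 0; ratios below the standard give negative values, as for most organic carbon.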
Mass dependence: multiple isotopes of the same species following a physical fractionation that depends on mass. For example, fractionation in 17O is empirically about half that in 18O.

Symbols δ and Δ: these are the lower-case and capital Greek letter delta. The lower-case delta (δ) indicates the ratio of heavy to light isotope in a sample relative to the same ratio in a standard; for example, the 13C to 12C ratio of some sample material relative to the 13C to 12C ratio of an internationally recognized standard. The capital delta (Δ) expresses the difference of one δ from another (for example, Δ17O of an oxygen-containing species expresses the difference between δ17O and δ18O).

Why Use Niobium to Make the Cavities?

Each one of Jefferson Lab's 338 acceleration cavities cost about $30,000. A good portion of this expense comes from the use of niobium, a relatively rare and expensive metal. Why were the cavities made from niobium when they could have been made from a less expensive metal, such as copper? Most metals are good conductors of electricity, but they aren't perfect. A property of metals known as resistance opposes the flow of electrical currents and converts electrical energy to heat. Unless the metal is part of something that is supposed to get hot (an oven, toaster, space heater or similar device), the heat is a waste product and adds to the expense of operating the device. Jefferson Lab's cavities work by moving charges back and forth over their surfaces. This movement of charges is an electrical current. If Jefferson Lab's cavities were made from copper, the cavities would heat, expand and eventually melt.
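The waste heat described here is ordinary Joule heating, P = I²R. A minimal sketch of why driving resistance to zero matters; the current and resistance figures are illustrative assumptions, not Jefferson Lab numbers:

```python
# Joule heating in a conductor: P = I^2 * R. The values below are
# illustrative assumptions, not Jefferson Lab figures.
def joule_heating_watts(current_amps, resistance_ohms):
    """Power dissipated as heat for current I through resistance R."""
    return current_amps**2 * resistance_ohms

surface_current = 100.0  # amps of oscillating surface current (assumed)

copper_like = joule_heating_watts(surface_current, 0.01)    # small but nonzero R
superconductor = joule_heating_watts(surface_current, 0.0)  # R = 0 below Tc

print(copper_like, superconductor)
```

Because the dissipated power scales with the square of the current, even a tiny residual resistance produces substantial heat at accelerator currents, while a superconductor dissipates none at all.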
This would not be good for the accelerator. There are two ways to avoid this problem. The first is to run the accelerator for very short periods of time and allow much longer periods in between for the cavities to cool. Unfortunately, this greatly increases the amount of time it takes to conduct an experiment. The second way is to eliminate the electrical resistance within the cavities. This is why Jefferson Lab's cavities are made from niobium. Niobium at room temperature has electrical resistance and behaves just like copper. If, however, niobium is cooled to very low temperatures, it loses all electrical resistance and becomes what scientists call a superconductor. Since superconductors have no electrical resistance, electrical currents flowing through them do not lose any energy and do not produce any waste heat. If no heat is created, the cavities cannot heat up and the accelerator does not need to shut down to allow them to cool. The use of superconducting niobium cavities allows the accelerator to provide a continuous beam of electrons to the experiments. In order for niobium to become superconducting, it must be cooled far below the freezing point of water. The cavities are immersed in a bath of liquid helium at a temperature of -271°C (-456°F). This is only about 2°C above absolute zero, the coldest possible temperature. The cavities and liquid...

...this a defining element of mature socialism; however, many socialists are of the opinion that such an arrangement will follow a transitional phase of economic and social development, such as market socialism.
Critical Views

The anarchist and communist concept of free association is often considered utopian or too abstract to guide a transforming society. However, it is valued by present-day movements such as the free software movement, and is considered a basic principle in the relationships among developers of free software. Others reply to this critique by asserting that free association is not a utopia but an emancipatory exigency that necessarily arises from the very material condition of the proletariat (i.e., deprivation of property and a constant social struggle against the submission and deprivation it causes, which sets them against the state and capital). However, the trends that advocate a transition (especially social democracy and Marxism-Leninism) postpone it to a more or less remote future, pushing free association increasingly into the background in exchange for the task of establishing a transitional phase. And since the proletariat can have no interest in an emancipation postponed to the indefinite future, the search for a "transition" is necessarily a task assumed not by the proletariat themselves but by an intelligentsia or political professionals. This culminated in Stalinism (for example, the so-called socialist countries such as Cuba, the USSR and China) and in the present social democratic parties, in which the concept of free association was virtually abandoned.
In contrast, the present trends derived from anarchism and council communism understand free association as the practical basis for the fundamental transformation of society at all levels, from the everyday level (the search for libertarian interpersonal relationships; critique of the family, of consumerism, and of conformist and obedient behavior) to the level of world society as a whole (the fight against the state and against the ruling class in all countries, the destruction of national borders, support for the self-organized struggle of the oppressed, attacks on property, and support for wildcat strikes and for workers' and unemployed people's autonomous struggles). Since anarchists, some libertarian Marxists (mainly the Situationists) and other libertarian socialists consider free association an immediate task for the introduction and maintenance of stateless socialism, most theorists of these ideologies have gone into great detail about how it will operate, unlike most Leninists and democratic socialists, who tend to be more concerned with the "transition" than with the final goal. Some of the most important works:

...the integral totality of ordinary cause-effects, and that there is no super-cause independent of ordinary causes and effects. God thus includes the world; he is, in fact, the totality of world parts, which are indifferently causes and effects. Now AR [absolute perfection in some respects, relative perfection in all others] is equally far from either of these doctrines; thanks to its two-aspect view of God, it is able consistently to embrace all that is positive in either deism or pandeism.
AR means that God is, in one aspect of himself, the integral totality of all ordinary causes and effects, but that in another aspect, his essence (which is A), he is conceivable in abstraction from any one or any group of particular, contingent beings (though not from the requirement and the power always to provide himself with some particulars or other, sufficient to constitute in their integrated totality the R aspect of himself at the given moment). These distinctions make sense only when AR [absolute perfection in some respects, relative perfection in all others] is assumed (hence the failure of Spinoza, who assumed mere A). Just as AR is the whole positive content of perfection, so CW, the conception of the creator-and-the-whole-of-what-he-has-created as constituting one life, the super-whole which in its everlasting essence is uncreated (and does not necessitate just the parts which the whole has) but in its de facto concreteness is created, is a panentheistic doctrine containing all of deism and pandeism except their arbitrary negations. Thus ARCW, or absolute-relative panentheism, is the one doctrine that really states the whole of what all theists, if not all atheists as well, are implicitly talking about.

I think Pandeism was system; and that when I say the country or kingdom of Pandæa, I express myself in a manner similar to what I should do if I said the popish kingdom or the kingdoms of popery; or again, the Greeks have many idle ceremonies in their church, meaning the Greeks of all nations; or, the countries of the Pope are superstitious, &c.
At the same time, I beg to be understood as not denying that there was such a kingdom as that of Pandæa, the daughter of Cristna, any more than I would deny that there...

By Deepak Chopra, MD, FACP; Menas Kafatos, Ph.D., Fletcher Jones Endowed Professor in Computational Physics, Chapman University; and Rudolph E. Tanzi, Ph.D., Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard University and director of the Genetics and Aging Research Unit at Massachusetts General Hospital (MGH).

The greatest mystery of existence is existence itself. There is the existence of the universe, and there is the existence of the awareness of the existence of the universe. Were it not for this awareness, even if the universe existed as an external reality, we would not be aware of its existence, so it would for all practical purposes not exist. Traditional science assumes, for the most part, that an objective, observer-independent reality exists; the universe, stars, galaxies, sun, moon and earth would still be there if no one were looking. However, modern quantum theory, the most successful of all scientific creations of the human mind, disagrees. The properties of a particle, quantum theory tells us, do not even exist until an observation takes place. Quantum theory disagrees with traditional, Newtonian physics. Most scientists, although respecting quantum theory, do not follow its implications. The result is a kind of schizophrenia between what scientists believe and what they practice. When we examine this hypothesis of traditional science, we find it more a metaphysical assumption than a scientific assertion.
How can we assert that an observer-independent reality exists if the assertion itself depends on the existence of a conscious observer? This raises the additional dilemma of who or what the observer is and where this observer is located. When scientists describe empirical facts and formulate scientific theories, they forget that neither facts nor theories are an insight into the true nature of fundamental reality apart from any observer. What we consider to be empirical facts are entirely dependent on observation, in agreement with quantum theory. The scientific observer in this case is an activity of the universe called Homo sapiens, usually with a Ph.D. in physics. However, many scientists have never really asked the question "Who am I?" Most neuroscientists, who still don't believe that quantum theory has anything to do with the brain, would assert that "I", the conscious observer, is solely an epiphenomenon of the brain; that consciousness is produced by the brain, just as gastric juices are produced by the stomach and bile is produced by the gall bladder. The problem with this, of course, is that any neuroscientist worth his or her tenure will tell you that there is no satisfactory theory in neuroscience that explains how neurochemistry translates into conscious experience.
How do electrochemical phenomena in the brain create the appreciation of the beauty of a red rose, the taste of garlic, the smell of onions, the feeling of love, compassion, joy, insight, intuition, imagination, creativity, free will, or awareness of the existence of self and the universe? There is no physicalist theory based on classical physics to explain these subjective experiences, nor any obvious means of coming up with one. When traditional science finds itself in such an impasse, it might be time to question some of the basic assumptions about so-called independently existing reality. We must revisit the idea that science is a methodology and not an ontology. Current science, however, is based on a physicalist ontology: the basic belief that reality is physical and mind is an epiphenomenon of matter (the nervous system). Nonetheless, we are baffled when asked to explain how matter becomes mind. We suggest here a fundamental revision of our most cherished scientific assumptions. We boldly suggest that matter, force fields, particles, waves, even the fabric of space and time are not denizens of fundamental reality but perceptual and cognitive experiences in consciousness. What we propose would actually be in agreement with what the great physicists who founded quantum theory almost a hundred years ago would hold. But we are also going beyond, taking the statements of quantum theory to the next level: all of physical reality is a perceptual experience in consciousness alone. The experience may turn out to be different for different species. What is physical reality to a bat, a honey bee, a nematode, a whale, a dolphin, an eagle, an insect with numerous eyes? There is no fixed physical reality, no single perception of the world, just numerous ways of interpreting world views as dictated by one's nervous system and the specific environment of our planetary existence.
We propose that the worldview of current science as it is being practiced, which operates from the assumption that human perception, and particularly facts emanating from observations made with human scientific methods, is the only fundamental truth, is clearly flawed. Furthermore, the subject/object split that is the basic premise of current scientific methods has led to the creation of arguably detrimental technologies, including mechanized death, petroleum products in our food, genetically modified foods, global warming, extinction of species, and even the possible extinction of the human species. Building on the quantum view of the cosmos, which accepts a non-local, entangled reality that includes observers as fundamental, we suggest the next natural step: a new science rooted in consciousness, one that strives to interpret the entire universe, with all its observers, all modes of observation, and all objects observed, as nothing other than consciousness and its manifestations. Rejecting what we believe is the most reasonable and rational approach proposed here will lead nowhere and force us to accept randomness and lack of purpose as the hallmarks of the universe. Such a view ultimately leaves no meaning for our very existence. We suggest that perceptual objects experienced in consciousness, including our very brains, are not the source of consciousness.
We suggest rigorous testing of this radically different ontology. We feel a holistic science that does not separate observer from that which is observed would lead to the unraveling of mysteries of the universe that at present seem beyond reach, leading to an understanding of a conscious universe in which all are differentiated activities of a single field that is an undivided wholeness and in some sense bridges external reality with inner being. Deepak Chopra, MD, FACP, is the author of the forthcoming book God: A Story of Revelation. Menas Kafatos, Ph.D., Fletcher Jones Endowed Professor in Computational Physics, Chapman University, is co-author of the forthcoming book Who Made God and Other Cosmic Riddles. Rudolph E. Tanzi, Ph.D., Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard University and director of the Genetics and Aging Research Unit at Massachusetts General Hospital (MGH), is co-author of the forthcoming book Super Brain: Unleashing the Explosive Power of Your Mind to Maximize Health, Happiness, and Spiritual Well-Being.

Lifeboat Foundation ParticleAcceleratorShield, by the Lifeboat Foundation Scientific Advisory Board. This is an ongoing program, so you may submit suggestions to email@example.com. Overview: our goal is to prevent, and also make plans for surviving when possible, particle accelerator mishaps, including quantum vacuum collapse, mining of the quantum vacuum, formation of a stable strangelet, and the creation of artificial mini black holes. The ATLAS detector of the Large Hadron Collider (LHC) under construction. The LHC is 27 kilometers (16.7 miles) long and spans two countries.
Lords of the Ring. The ATLAS Experiment Movie. ATLAS – Episode I: A New Hope. ATLAS – Episode II: The Particles Strike Back. Anatomy of a Black Hole (interactive animation). Large Hadron Rap: rappin' about CERN's Large Hadron Collider! The Higgs boson is a hypothetical massive scalar elementary particle predicted to exist by the Standard Model of particle physics. It is the only Standard Model particle not yet observed, but it plays a key role in explaining the origins of the mass of other elementary particles, in particular the difference between the massless photon and the very heavy W and Z bosons. Elementary particle masses, and the differences between electromagnetism (carried by the photon) and the weak force (carried by the W and Z bosons), are critical to many aspects of the structure of microscopic (and hence macroscopic) matter; thus, if it exists, the Higgs boson has an enormous effect on the world around us. Artificial mini black holes: some people are worried about the world's next-generation particle collider, the Large Hadron Collider, which is now operating at CERN's facility on the Franco-Swiss border. The LHC site says: "According to some theoretical models, tiny black holes could be produced in collisions at the LHC. They would then very quickly decay into what is known as Hawking radiation (the tinier the black hole, the faster it evaporates), which would be detected by experiments." But some scientists have pointed out that Hawking radiation may not exist, as documented in the scientific papers "Do black holes radiate?" and "On the universality of the Hawking effect." Dangers: Sir Martin Rees says: "It is not inconceivable that physics could be dangerous too. Some experiments are designed to generate conditions more extreme than ever occur naturally.
Nobody then knows exactly what will happen. Indeed, there would be no point in doing any experiments if their outcomes could be fully predicted in advance. Some theorists have conjectured that certain types of experiment could conceivably unleash a runaway process that destroyed not just us but Earth itself." Nick Bostrom says: "There have been speculations that future high-energy particle accelerator experiments may cause a breakdown of a metastable vacuum state that our part of the cosmos might be in, converting it into a 'true' vacuum of lower energy density. This would result in an expanding bubble of total destruction that would sweep through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds." Although neither Sir Martin Rees nor Nick Bostrom is greatly concerned by this potential problem, there is a reason they call the actions done at the LHC "experiments": if the outcome of an experiment were known beforehand, it would not be called an experiment. What are your thoughts on the experiments that will be conducted at the LHC?

the eye element.
That presents an unusual problem for the trilobite, since a simple thick spherical lens of calcite could not have resolved the light into an image. The trilobite optical element is a compound lens composed of two lenses of differing refractive indices joined along a Huygens surface. In order for such an eye to correctly focus light on the receptors, it would have to have exactly this shape of lens. The most amazing fact is that so complex an eye was present in one of the first animals to appear in the fossil record. In the book Darwin's Black Box, Behe goes beyond eye morphology and shows that, even in animals that possess the simplest kinds of eyes, i.e. the light-sensitive spots of jellyfish, vision is an extremely complex biochemical process. The evolution of such a complex system cannot yet be explained (Behe, 1996, p. 22). Creation seems to him a better explanation. Behe (1996, p. 39) considers the origin of irreducibly complex systems by mutation and natural selection impossible. An irreducibly complex system is a single system composed of several well-matched, interacting parts contributing to its basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced gradually (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. The same author (Behe, 1996, p. 72) analyzes the structure and functioning of flagella and cilia, locomotory structures present in bacteria and also in protozoa, which are considered ancestral to all the animal phyla. Flagella and cilia are also present in several kinds of cells in multicellular animals.
An exhaustive biochemical analysis shows that the cilium contains more than two hundred kinds of different proteins, and its complexity is much greater than was thought. The bacterial flagellum needs more than 40 proteins to work, and the exact roles of most of these proteins are unknown. The author considers the two systems irreducibly complex, and he reflects that the probability of gradually assembling these systems is virtually nil. Behe (1996, p. 67) makes an exhaustive review of scientific papers, searching for research that tries to explain cilium and

The development of steam engines during the Industrial Revolution went hand in hand with the formulation of one of the most powerful laws of physics. The second law of thermodynamics provides insight into everything from the workings of refrigeration systems and such oddities as black holes to why time only runs forward. The second law is frequently misunderstood, ignored, or misstated in ways that obscure its meaning. Consequently, countless would-be inventors have expended energy in fruitless attempts to build perpetual-motion machines, which the second law demonstrates are not possible. The second law has many implications that have led to its being stated in a number of ways. Some formulations are rigorous and make reference to an unfamiliar quantity, while others are more pragmatic; ultimately, they are all equivalent. One formulation states that, in a closed system, entropy always increases. Entropy is a thermodynamic parameter, like heat and temperature, which is necessary to characterize the thermal properties of a system.
First, one must understand that the thermal energy of a physical object is simply the sum of all the energy of motion, or kinetic energy, of the atoms that make it up. As temperature increases, the atoms of a solid vibrate more vigorously. In gases, atoms or molecules fly about at ever higher speeds. Most things have structure of some kind. Crystals are examples of very regularly shaped objects; the outward appearance of crystals is the result of their atoms being arranged in a repeating lattice. Imagine an ice cube in a warm room that is otherwise isolated from outside influences. Water ice is crystalline, but, over time, its structure disintegrates: it melts and becomes a liquid. Water molecules in a liquid state attract one another sufficiently to give the liquid self-adhesion. As a result, though it is less structured than ice, a drop of water takes on a structure tending toward a spherical shape. Liquid water proceeds to evaporate, becoming a gas. In this phase, water and air molecules behave independently of one another, flying about randomly, colliding and ricocheting off each other. Eventually, all structure disappears. With the passage of time, water molecules have gone from being in a regular, rigid structure, to a loose aggregation in a liquid, to whizzing about freely in gaseous form. Ordered states have yielded to progressively more disordered states. At the start, thermal energy in the room was partitioned.
The ice cube, air, and walls were at different temperatures; eventually, everything is at the same temperature. The thermal energy is now dispersed. Individual atoms or molecules have a range of kinetic energies, but, on average, one region has no more thermal energy than another. Entropy is a measure of the extent to which a system moves from distinctly identifiable regions, that is, displays patterns, to uniformity. It is a measure of the loss of structure, or increase in disorder. Steam engines do work by exploiting temperature differences between reservoirs. A fire boils water in a chamber (the hot reservoir). The resulting steam expands, applying pressure and doing work on a piston that drives a mechanism. Steam, with its residual heat, is vented to the surroundings (the cold reservoir) at the end of each cycle. The efficiency of the engine depends on how much heat is available to do work versus heat lost to the surroundings; losses include those due to friction between the moving mechanical parts. If the engine were to become a closed system, that is, if it were left alone and isolated, the engine's fire would burn out and it would cease functioning. Eventually, as in the ice-cube example, the entire system attains a uniform temperature. The complex molecules making up its fuel have broken apart, leaving ashes. To maintain operation, the system needs to remain open: a supply of new fuel and the dissipation of lost heat are needed to maintain the temperature difference between the reservoirs. Suppose an engine in a closed system is used to build a structure. Looking narrowly at a structure built from raw materials, one would say order increased and entropy decreased. However, this is but a small corner of a larger system. More broadly, taking into account losses in the chemical structure of the fuel and the lost partitioning of heat reservoirs, the system as a whole increased in entropy. Structure can be built in a closed system.
But, to the extent you create order in one location, you create more disorder elsewhere. Entropy can decrease locally, but only at the expense of a larger increase on a broader scale. The second law was stated more succinctly, without explicit reference to the concept of entropy, by Lord Kelvin in the mid-19th century: no process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work. In any system there are always energy losses that cannot be used to do work. As a result, you can't build something that does work and at the same time reenergizes the reservoir it draws upon to do that work. Perpetual-motion machines are impossible. Though our solar system approximates a closed system, the Earth does not. Many structures on Earth exist because the planet is an open system. Weather patterns and all of life owe their existence to the sun providing a continuous supply of radiant (electromagnetic) energy to the planet. To the extent structure arises on Earth as a result of solar radiation, larger amounts of disorder are created in the sun. The sun will eventually burn out, leaving the equivalent of a heap of ash. When that occurs, the Earth will truly be a closed system, and the structures the sun's energy makes possible on Earth will then disintegrate. The second law is very profound.
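The ice-cube bookkeeping described above can be made quantitative with the relation dS = Q/T: heat leaving the warm room decreases its entropy by less than the melting ice gains, so the total rises. A minimal sketch; the mass, temperatures, and latent heat below are illustrative numbers I am supplying, not figures from the article:

```python
# Entropy bookkeeping for an ice cube melting in a warm room:
# heat Q flows from the room (T_ROOM) into the ice (T_ICE).

L_FUSION = 334_000.0  # J/kg, latent heat of fusion of water
T_ICE = 273.15        # K, melting point of ice
T_ROOM = 293.15       # K, a 20 degree C room (assumed)

def entropy_change(q_joules: float, temperature_k: float) -> float:
    """dS = Q / T for heat absorbed at a fixed temperature."""
    return q_joules / temperature_k

mass = 0.1                              # kg of ice (assumed)
q = mass * L_FUSION                     # heat absorbed while melting
ds_ice = entropy_change(+q, T_ICE)      # ice gains entropy (order lost)
ds_room = entropy_change(-q, T_ROOM)    # room loses entropy, but less
ds_total = ds_ice + ds_room             # net change is positive

print(f"dS(ice)   = {ds_ice:+.1f} J/K")
print(f"dS(room)  = {ds_room:+.1f} J/K")
print(f"dS(total) = {ds_total:+.1f} J/K")  # positive, as the second law requires
```

Because the ice sits at a lower temperature than the room, the same quantity of heat Q carries more entropy into the ice than it removes from the room, which is the arithmetic behind "entropy can decrease locally, but only at the expense of a larger increase on a broader scale."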
I have made frequent references to how, in a closed system, a property called entropy increases with the passage of time. This arises because of irreversible processes that take place in the physical world. The second law affirms that events run forward, not backward, in time. Without work being done, a pile of sand on the beach won't spontaneously reassemble itself as a sand castle, and liquid water won't spontaneously crystallize as ice. The second law is said to demonstrate the arrow of time, which fires in only one direction: forward. Steve Luckstead is a medical physicist in the radiation oncology department at St. Mary Medical Center. He can be reached at firstname.lastname@example.org.

Authors: Charles B. Leffert. Abstract: The success of quantum theory shows that the universe is much more complicated than most have supposed. How did our universe get started? What is energy? What is gravity? The late Richard Feynman, in volume 1 of his Lectures on Physics, said that no one had come up with the machinery of either energy or gravity. However, the machinery of both has been presented by the author in recent issues of this viXra archive and other publications. For the machinery of the expansion of the universe, a complete spatial-condensation theory, with no free parameters, has been under development for the past 25 years. In that development, one important contact has been made with quantum theory: the expansion theory predicts exactly the same value of vacuum energy as quantum theory, a factor of 10^123 greater than Einstein's mass energy, mc^2.
The new concepts, such as a fourth spatial dimension, and our ordinary space of three spatial dimensions as the surface of a four-dimensional ball, indicate that there are even more complexities needed to accomplish unification with quantum theory. Present physics uses a symmetric time, and yet we know from our subjective concepts of past-present-future that there is also an "arrow" of time. This conundrum was solved by two different productions of space operating under two different times. Our universe started in the first, radiation-dominated era, with the arrow of time producing four-dimensional space. Then, as radiation cooled and four-dimensional space continued, geometric production of our three-dimensional space with symmetric time increased in the matter-dominated era. Some additional plots of important parameters are presented, as well as a new agreement with measurements of passive separation of galaxies. But in general the aim of this paper is to alert the reader to the background vision of a greater epi-universe as the source of quantum interaction with the present mass of matter, and the vision of how our universe came to be. Comments: 17 pages, 9 figures. [v1] 2012-06-26 15:32:20.

Soldering and brazing. Soldering is a process in which two or more metal items are joined together by melting and flowing a filler metal into the joint, the filler metal having a relatively low melting point. Soft soldering is characterized by the melting point of the filler metal, which is below 400 °C (752 °F). The filler metal used in the process is called solder.
Soldering is distinguished from brazing by the use of a lower-melting-temperature filler metal; it is distinguished from welding in that the base metals are not melted during the joining process. Brazing is a joining process whereby a filler metal or alloy is heated to a melting temperature above 450 °C (840 °F) or, by the traditional definition in the United States, above 800 °F (427 °C), and distributed between two or more close-fitting parts by capillary action. By definition, the melting temperature of the braze alloy is lower (sometimes substantially) than the melting temperature of the materials being joined. The brazed joint becomes a sandwich of different layers, each metallurgically linked to the adjacent layers.

Science Fair Project Encyclopedia. Astrology (from Greek: αστρολογια = αστρον, astron, "star" + λογος, logos, "word") is any of several traditions or systems in which knowledge of the apparent positions of celestial bodies is held to be useful in understanding, interpreting, and organizing knowledge about reality and human existence on Earth. All are based on the relative positions and movements of various real and construed celestial bodies, chiefly the Sun, Moon, planets, ascendant and midheaven axes, and lunar nodes, as seen at the time and place of the birth or other event being studied.
A practitioner of astrology is termed an astrologer, though they are sometimes referred to as an astrologist. Many of those who practice astrology believe that the positions of certain celestial bodies either influence or correlate with people's personality traits, important events in their lives, and even physical characteristics. Astrology is not considered to be a science but is more appropriately a spiritual discipline, and it is therefore separate from astronomy, the scientific study of the heavens. For many astrologers the purported relationship between the celestial bodies and events on Earth need not be causal, nor even scientific. Although there are astrologers who try to put astrology on sound scientific grounds, for many more it is a technology and an art that merges calculations with intuitive perceptions. The core principles of astrology reflect general principles, universally accepted in the ancient world, that events in the heavens must have analogies on Earth. From China to Babylon, the apparently untoward movement of a comet across the otherwise orderly heavens was taken as a portent of disaster: the very word still contains its "star" root, aster. Such ancient beliefs are epitomized in the hermetic maxim: as above, so below. The famous astronomer and astrologer Tycho Brahe used a similar phrase to justify his studies in astrology: suspiciendo despicio, "by looking up I see downward." In past centuries astrology often relied on close observation of astronomical objects and the charting of their movements, and might be considered a protoscience in this regard. In modern times astrologers have tended to rely on data drawn up by astronomers and set out in a set of tables called an ephemeris, which shows the changing positions of the heavenly bodies through time.
Central to all

An exact value for Avogadro's number: untangling this constant from Le Grand K could provide a new definition of the gram. Avogadro's number, N_A, is the fundamental physical constant that links the macroscopic physical world of objects that we can see and feel with the submicroscopic, invisible world of atoms. In theory, N_A specifies the exact number of atoms in a palm-sized specimen of a physical element such as carbon or silicon. The name honors the Italian mathematical physicist Amedeo Avogadro (1776-1856), who proposed that equal volumes of all gases at the same temperature and pressure contain the same number of molecules. Long after Avogadro's death, the concept of the mole was introduced, and it was experimentally observed that one mole (the molecular weight in grams) of any substance contains the same number of molecules. This number is Avogadro's number, although Avogadro himself knew nothing of moles or the eponymous number. Today, Avogadro's number is formally defined to be the number of carbon-12 atoms in 12 grams of unbound carbon-12 atoms at rest in their lowest-energy electronic state. The current state of the art estimates the value of N_A not by experiments using carbon-12, but by using x-ray diffraction in crystal silicon lattices in the shape of a sphere, or by a watt-balance method. According to the National Institute of Standards and Technology (NIST), the current accepted value is N_A = (6.0221415 ± 0.0000010) × 10^23. This definition of N_A, and the current experiments to estimate it, both rely on the precise definition of a gram. Originally the mass of one cubic centimeter of water at exactly 3.98 degrees Celsius and atmospheric pressure, for the past 117 years the definition of one gram has been one-thousandth of the mass of "Le Grand K," a single precious platinum-iridium cylinder stored in a vault in Sèvres, France. The problem is that the mass of Le Grand K is known to be unstable in time. Periodic cleanings and calibration measurements result in abrasion of platinum-iridium and accretion of cleaning chemicals. These changes cannot be measured exactly, simply because there is no "perfect" reference against which to measure them: Le Grand K is always exactly one kilogram, by definition. It is estimated that Le Grand K may have

Communications systems glossary. Telegraphy: literally, telegraphy means "writing at a distance." Telephony: literally, telephony means "speaking at a distance." Ter: a postscript used to note the third version of an ITU standard (ter is French for three). Ternary: having three states. Throughput: the amount of information carried by a communication system; data rate equals information rate plus overhead. Time-division duplex (TDD): refers to duplex communications links where the uplink is separated from the downlink by the allocation of different time slots in the same frequency band. Time-division multiple access (TDMA): a mechanism for sharing a channel whereby a number of users have access to the whole channel bandwidth for a small period of time (a time slot).
Time-division multiplexing (TDM): a technique that shares a transmission channel between users by dividing transmission time, allotting to each device a time slot during which it can send or receive data. Token bus: a medium access technique using token passing on a logical bus local area network (LAN) topology. Token passing: a media access technique in which a small set of bits called the token is passed between network devices. Token ring: a medium access technique using token passing on a logical ring local area network (LAN) topology. Topology: describes the way in which the devices on a LAN are connected together. Transceiver: a contraction of transmitter/receiver; a device that is able to act as both a transmitter and a receiver. Transmission medium: the physical path between transmitters and receivers in a communications system. Transponder: in satellite communications, a transponder receives the transmission from Earth (uplink), amplifies the signal, changes frequency, and retransmits the data to a receiving Earth station (downlink). Transport layer (layer 4): layer 4 of the OSI model, which provides reliable, transparent transfer of data between endpoints; the transport layer is responsible for generating the end user's address and for the integrity of the receipt of message blocks, and is among the most complex of protocols. Twisted pair.
A copper transmission line consisting of two insulated wires twisted together.

Silicon is a chemical element with the symbol Si and atomic number 14. A tetravalent metalloid, it is less reactive than its chemical analog carbon, the nonmetal directly above it in the periodic table, but more reactive than germanium, the metalloid directly below it in the table. Controversy about silicon's character dates to its discovery: silicon was first prepared and characterized in pure form in 1824 and given the name silicium (from Latin silex, flint), with an -ium word ending to suggest a metal. However, its final name, suggested in 1831, reflects the more physically similar elements carbon and boron. Silicon is the eighth most common element in the universe by mass, but it very rarely occurs as the pure free element in nature. It is most widely distributed in dusts, sands, planetoids, and planets as various forms of silicon dioxide (silica) or silicates. Over 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust (about 28% by mass) after oxygen. Most silicon is used commercially without being separated, and indeed often with little processing of compounds from nature. These uses include direct industrial building use of clays, silica sand, and stone. Silica is used in ceramic brick. Silicate goes into Portland cement for mortar and stucco and, when combined with silica sand and gravel, into concrete. Silicates are also in whiteware ceramics such as porcelain, and in traditional quartz-based soda-lime glass.
More modern silicon compounds such as silicon carbide form abrasives and high-strength ceramics. Silicon is the basis of the ubiquitous synthetic silicon-based polymers called silicones. Elemental silicon also has a large impact on the modern world economy. Although most free silicon is used in the steel refining, aluminum-casting, and fine chemical industries (often to make fumed silica), the relatively small portion of very highly purified silicon that is used in semiconductor electronics (< 10%) is perhaps even more critical. Because of the wide use of silicon in integrated circuits, the basis of most computers, a great deal of modern technology depends on it. Silicon is an essential element in biology, although only tiny traces of it appear to be required by animals.

Absorber. The blackened surface in a solar collector that absorbs incoming solar radiation, converts it to thermal energy, and feeds it into the heat transfer fluid.
Air change. A measure of the air exchange in a building, where one air change is an exchange of a volume of air equal to the interior volume of the building in question.
Amorphous silicon. A type of silicon made up of atoms with no fixed order, as in glass, so it is not crystalline. Also known as thin-film silicon.
Array. An assembly of PV modules connected in series and/or parallel.
Atmosphere. The least massive, yet the most important, part of the Earth for life.
Through the atmosphere pass nearly all the elements that form living organisms. The atmosphere protects life from the rigours of space and establishes the climate.
Auxiliary energy. The conventional (i.e. non-solar) contribution to the total load (e.g. gas boiler, etc.).
Black water. Wastewater generated by a household, including toilet wastes, that is entirely non-useable. See grey water.
Calorific value. Describes the energy content of a unit mass or volume of a fuel (kWh kg⁻¹, J kg⁻¹, kWh m⁻³, J m⁻³).
Carbon dioxide, CO2. A gas used by plants and produced by respiration and burning. It has a faintly pungent smell and is present in the air at 280 parts per million, but in our breath at 4%. Carbon dioxide in the air helps, through the greenhouse effect, to keep the Earth warm, but too much can lead to overheating.
Chimney or stack effect. The tendency of air or gas in a vertical passage, such as a duct, to rise when heated, due to its lower density in comparison with that of the surrounding air or gas. In buildings, there is a tendency towards displacement caused by the difference in temperature between the internal heated air and the external unheated air, and therefore the difference …
Collector, flat plate. An assembly containing a panel of metal or other suitable material, usually a flat black colour on its sun side, that absorbs sunlight and converts it into heat (see absorber). It is usually in an insulated box, covered with glass or plastic on the sun side to take advantage of the greenhouse effect. In the collector, …

… life, such as potassium, carbon, and hydrogen, have naturally occurring isotopes that are radioactive.
Reflectance. The ability of a surface to reflect radiation incident upon it.
Solar radiation. The energy-carrying electromagnetic radiation emitted by the sun. This radiation comprises many frequencies, each relating to a particular class of radiation.
Solute. The substance dissolved in a solvent. Together the solute and the solvent form a solution.
Stratosphere. The layer of the Earth's atmosphere immediately above the troposphere, where the air is stratified. It is warmer than the upper troposphere and contains the ozone layer.
Thermal transmittance. The thermal transmission through a 1 m² area of a given structure (e.g. a wall consisting of bricks, thermal insulation, cavities, etc.) divided by the difference between the environmental temperatures on either side of the structure. Usually called the 'U-value' (W m⁻² K⁻¹).
Trace gases. Gases such as methane, nitrous oxide, ozone, carbon monoxide, dimethyl sulphide, and methyl chloride that exist at low concentrations in the atmosphere. Their low concentration belies their importance.
Tracking. The process of altering the tilt of a module through the day in order to face the sun and thus maximize the power output.
Transmittance. The ratio of radiant energy transmitted through a substance to the total radiant energy incident on its surface. In solar technology, transmittance is always affected by the thickness and composition of the glass cover plates on the collector or window, and to a major extent by the angle of incidence between the sun's rays and a line normal to the …
Trombe wall. A black-painted masonry (earth, brick, block, stone, or mass concrete) wall with glazing on the southerly side. The wall acts as absorber, heat store, and emitter. Félix Trombe was the French designer for whom the wall is named.
Troposphere. The layer of the Earth's atmosphere up to about 10 km from the surface. The air in the troposphere is well mixed, and it is where the clouds are mostly to be found.
Ultraviolet. Electromagnetic radiation in the range of frequencies immediately above visible light and below X-rays. The ultraviolet radiation reaching the Earth's surface has both harmful and beneficial effects.
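The thermal-transmittance (U-value) entry above amounts to a simple working formula: heat flow equals U times area times temperature difference. A minimal sketch with made-up illustrative values (the wall size, U-value, and temperatures below are assumptions, not figures from this glossary):

```python
def heat_loss_watts(u_value, area_m2, t_inside_c, t_outside_c):
    """Steady-state heat flow through a building element.

    u_value : thermal transmittance in W m^-2 K^-1
    area_m2 : element area in m^2
    Returns the heat flow in watts (positive means heat is lost outwards).
    """
    return u_value * area_m2 * (t_inside_c - t_outside_c)

# Hypothetical example: a 10 m^2 wall with U = 0.3 W m^-2 K^-1,
# 20 degC inside and 0 degC outside, loses 60 W.
loss = heat_loss_watts(0.3, 10.0, 20.0, 0.0)
print(loss)  # 60.0
```

The same formula, summed over walls, roof, glazing, and floor, is the basis of a whole-building fabric heat-loss estimate.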
Waste. Material sent to landfill or incineration.
Water consumption. Mains, surface, and groundwater consumption.
Watershed. An area of land that drains to a common outlet, such as the outflow of a lake, the mouth of a river, or any point along a stream channel.
Watt. Unit of power: the rate of flow of energy, whether electrical, light, or heat.

A Conversation with Physicist Y. C. Chen

A technological breakthrough in imaging that has applications in early diagnosis of eye ailments was the fruit of a recent collaboration between Professor Y. C. Chen, a laser specialist who is chairman of the Hunter physics department, and Dr. Ronald H. Silverman, then of the Weill Cornell Medical College ophthalmology department. Silverman, a specialist in ultrasound imaging of the eye who is now at Columbia, was exploring the possibilities of combining ultrasound with light to improve image resolution; Chen wanted to apply his laser knowledge to the biomedical field. Together, the two invented a device that creates a detailed image of the eye, perhaps 20 times better than the images available with ultrasound alone, allowing doctors to see early signs of macular degeneration, tumors, oxygen shortages, and other danger signals. The collaboration was a result of Hunter's partnership in the consortium of institutions headed by Weill Cornell Medical College in the Clinical and Translational Science Center (CTSC). Established in 2008 with the aid of a $49 million grant from the National Institutes of Health, the center will translate research advances made in labs into new clinical methods used at the bedside. The men hope to have a workable device in five years.

How does the photoacoustic imaging process work?
The way ultrasound works, you direct an ultrasound pulse at a tissue. It is reflected, and from the delay of the echo, you know where the reflection is taking place. A computer processes the data and constructs the image. Ultrasound is well developed: it can penetrate where light cannot. But ultrasound has a limitation: the smallest thing you can see is about the size of a human hair. Doctors need to see much smaller things, such as microvasculature and the optic nerve, and ultrasound cannot reveal enough detail. What we came up with uses a technique called photoacoustics. We generate ultrasound by light: we direct a focused laser pulse toward the tissue. When the tissue absorbs the light, it warms up, expands, and generates an ultrasound pulse. The laser beams can be focused better than the ultrasound. This way, we improved the resolution at least 20 times. With photoacoustic imaging, not only can you tell fine details, you can also tell what kind of tissue it is.

Could you say a few words about the collaboration?

The sciences in the 21st century will be driven by interdisciplinary research. The CTSC supports interdisciplinary and cross-institutional research. I have limited knowledge about the human body and ultrasound, while my collaborator needs my laser technology to help break the limitations of his technology.
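The echo-ranging principle Chen describes (pulse out, echo back, the delay tells you where the reflection happened) reduces to one line of arithmetic. A minimal sketch; the 1540 m/s speed of sound in soft tissue is a standard textbook value, not a figure from the interview:

```python
def echo_depth_mm(round_trip_delay_s, speed_m_per_s=1540.0):
    """Depth of a reflector inferred from the round-trip echo delay.

    The pulse travels to the reflector and back, so the one-way
    distance is half of (speed * delay). Result in millimetres.
    """
    return 0.5 * speed_m_per_s * round_trip_delay_s * 1000.0

# A 26-microsecond round trip places the reflector about 20 mm deep.
depth = echo_depth_mm(26e-6)
```

A scanner builds an image by applying this conversion to the echoes received along many beam directions.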
The collaboration not only solves the problem that brought us together, but also opens up many new ideas and opportunities.

What else could your technique be used for?

Dermatology. It's good for anything that is a few millimeters deep, or else for the eye, because it's transparent. We are also working to shrink the device so it may be used to examine places that can be reached by an endoscope.

What's next in your lab?

We are developing techniques that will let us focus the laser beam through a non-transparent medium. It is like seeing through ground glass. Among the people in my lab are two students who just graduated with their doctoral degrees, Fanting Kong and Liping Liu.

How did you come to Hunter?

I used to work in industry, but quite often, in industry, if something doesn't work in six months, then you will be asked to change direction. It is hard to do research if management has only six months of patience. So I came to academics, which allows me to set my own direction.

What do you like about Hunter?

Hunter is very supportive. We have a very good academic program and smart students. And we offer very personal attention to students. In physics, we pretty much know every student, and my students helped in this project. We have a very close bond, and it's very enjoyable to see students starting productive careers. Also, New York City, of course, can't be beat.

Is this technique patented, and do you have other patents?

We submitted an "invention disclosure" through Cornell. I have nine patents, mostly from industry: various mini-lasers, some already commercial products, because a large part of my work in the past involved shrinking lasers from a bulky benchtop model to a little millimeter-sized chip without compromising performance.

What is your favorite thing about teaching?

Teaching physics makes me understand physics better. You think about the problem more and from different angles, and you understand it much better.
What are your pastimes outside of work?

Hiking. I have hiked parts of the Appalachian Trail. Wherever I go, to a conference, for example, I find …

The discovery was made by HGST Labs, a company owned by Western Digital (WD), using a technique called nanolithography, which is used to imprint patterns on the thin film of hard drive platters where data is to be stored. The process overcomes the challenges associated with photolithography, a semiconductor technology used for making successively smaller circuit features in shorter wavelengths of light, among other things. The discovery allows for twice the bit density of today's disk drives. Nanolithography was used to make dense patterns of "magnetic islands" that appear as small dots in the about 100,000 circular tracks required for disk drives. The pattern has 1.2 trillion dots per square inch, produced using such emerging nanotechnologies as self-assembling molecules, line doubling, and nanoimprinting; each dot can store a single bit of data (source: HGST). To make the magnetic islands, HGST Labs used these nanotechnologies to create dense patterns of even smaller 10-nanometer structures, each only about 50 atoms wide.
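As a consistency check on the figures quoted above, the 1.2 trillion dots per square inch fixes the centre-to-centre spacing of the islands. The short sketch below (an illustration, not an HGST calculation) recovers a pitch of roughly 23 nm, consistent with 10 nm islands separated by gaps of similar size:

```python
import math

DOTS_PER_SQ_INCH = 1.2e12   # areal density quoted by HGST
NM_PER_INCH = 25.4e6        # 1 inch = 25.4 mm = 25.4e6 nm

# Assuming the dots sit on a uniform square grid, the number of dots
# along one inch is the square root of the areal density, and the
# centre-to-centre pitch is one inch divided by that count.
dots_per_inch = math.sqrt(DOTS_PER_SQ_INCH)
pitch_nm = NM_PER_INCH / dots_per_inch

print(round(pitch_nm, 1))  # 23.2
```

One bit per dot at this density gives about 1.2 terabits, or roughly 150 gigabytes, per square inch before any formatting overhead.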
\" we made our ultra - small features without using any conventional photolithography, \" tom albrecht, an hgst fellow, said in a statement. \" with the proper chemistry and surface preparations, we believe this work is extendible to ever - smaller dimensions. \" hgst said it is the first company to combine self - assembling molecules, line doubling and nanoimprinting to make rectangular features as small as 10 nanometers in the radial and circular paths necessary for rotating disk storage. the company expects bit - patterned media similar to its discovery to become a cost - effective means of increasing data densities in magnetic hard disk drives before the end of the decade. \" the emerging techniques of self - assembling molecules and nanoimprinting utilized at the hgst labs will have an enormous impact on nanoscale manufacturing, enabling bit - patterned media to become a cost - effective means of increasing data densities in magnetic hard disk drives before the end of the decade, \"", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.631646570716224, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-26T14:05:07.817613"} {"text": "possible hints of the higgs remain in latest analyses : not strong enough to claim discovery december 13, 2011 two experiments at the large hadron collider have nearly eliminated the space in which the higgs boson could dwell, scientists announced in a seminar held at cern today. however, the atlas and cms experiments see modest excesses in their data that could soon uncover the famous missing piece of the physics puzzle. theorists have predicted that some subatomic particles gain mass by interacting with other particles called higgs bosons. the higgs boson is the only undiscovered part of the standard model of physics, which describes the basic building blocks of matter and their interactions. 
The experiments' main conclusion is that the Standard Model Higgs boson, if it exists, is most likely to have a mass constrained to the range 116–130 GeV by the ATLAS experiment, and 115–127 GeV by CMS. Tantalizing hints have been seen by both experiments in this mass region, but these are not yet strong enough to claim a discovery. Higgs bosons, if they exist, are short-lived and can decay in many different ways. Just as a vending machine might return the same amount of change using different combinations of coins, the Higgs can decay into different combinations of particles. Discovery relies on observing statistically significant excesses of the particles into which they decay rather than observing the Higgs itself. Both ATLAS and CMS have analyzed several decay channels, and the experiments see small excesses in the low-mass region that has not yet been excluded. Taken individually, none of these excesses is any more statistically significant than rolling a die and coming up with two sixes in a row. What is interesting is that there are multiple independent measurements pointing to the region of 124 to 126 GeV. It's far too early to say whether ATLAS and CMS have discovered the Higgs boson, but these updated results are generating a lot of interest in the particle physics community. Hundreds of scientists from U.S. universities and institutions are heavily involved in the search for the Higgs boson at LHC experiments, said CMS physicist Boaz Klima of the Department of Energy's Fermi National Accelerator Laboratory near Chicago. "U.S. scientists are definitely in the thick of things in all aspects and at all levels," he said. More than 1,600 scientists, students, engineers, and technicians from more than 90 U.S. universities and five U.S. …

Appendix B: Reference Paper

Improving Student Learning in Science Through Discipline-Based Education Research

Lillian C. McDermott
Department of Physics, University of Washington

Introduction

I would like to thank the Council of Scientific Society Presidents for the 2000 Award for Achievement in Educational Research. The accomplishments recognized by this honor are the result of many contributions by faculty, postdocs, graduate students, K–12 teachers, and undergraduates in the Physics Education Group at the University of Washington. Perhaps my "most seminal research achievement" has been to demonstrate, in the context of physics, the value of discipline-based education research. This type of research differs from traditional education research in that the emphasis is not on educational theory or methodology in the general sense but rather on student understanding of science content.
For both intellectual and practical reasons, discipline-based education research must be conducted by science faculty within science departments. I shall present some evidence that this is an effective approach for improving student learning (K–20). The emphasis here will be on introductory university students and K–12 teachers.

Context for Research

A brief description of the Physics Education Group can set a context for our research. Our group is an entity within the physics department in the same sense that there are groups in other subfields of physics. The courses in the department provide the primary environment for our investigations. Most of our work involves two populations: undergraduates in the introductory calculus-based course, and prospective and practicing K–12 teachers who are taking special courses designed to prepare them to teach physics and physical science by inquiry. Our investigations also include students in engineering and in advanced undergraduate and graduate physics courses. As part of our research on how to improve student learning in physics, we try to identify specific difficulties that students encounter in the study of various topics.
The results are used to design instructional materials that target these difficulties and help guide students through the reasoning required to overcome them and to develop a coherent conceptual framework. Assessment of effectiveness with students is an integral part of the iterative process through which the Physics Education Group develops curriculum. To ensure applicability beyond our own university, our materials are also tested at pilot sites (e.g., Georgetown, Harvard, Illinois, Maryland, Purdue). Our two major curriculum projects are Physics by Inquiry (McDermott, Shaffer, and Rosenquist, 1996) and Tutorials in Introductory Physics (McDermott, Shaffer, and the Physics Education Group, 1998). The development of both is guided by research. The first is a self-contained, laboratory-based curriculum for the preparation of K–12 teachers; the second is a supplementary curriculum that can be used in conjunction with any standard text.

Perspective on Teaching as a Science

The perspective that teaching is a science, as well as an art, motivates our work. Considered as a science, teaching is an appropriate field for scholarly inquiry by scientists. This view is in marked contrast to that held by many science faculty. A more traditional view was expressed in 1933 in the first article in the first journal published by the American Association of Physics Teachers (AAPT). In "Physics is Physics," F. K. Richtmyer (Cornell University) argued that teaching is an art and not a science. He quoted R. A. Millikan (California Institute of Technology) in characterizing science as comprising "a body of factual knowledge accepted as correct by all workers in the field." Richtmyer went on to say: "Without a reasonable foundation of accepted fact, no subject can lay claim to the appellation 'science.
' If this definition of a science be accepted, and it seems to me very sound, then I believe that one must admit that in no sense can teaching be considered a science." Although this is a somewhat limited definition of science, I would like to challenge the implication that it is not possible to build "a reasonable foundation of accepted fact" for the teaching of physics (and, by extension, other sciences). For example, we have found that most people encounter many of the same conceptual and reasoning difficulties in learning a given body of material. These difficulties can be identified, analyzed, and effectively addressed through an iterative process of research, curriculum development, and instruction. Both the learning difficulties of students and effective means for addressing them are often generalizable beyond a particular course, instructor, or institution. If one documents intellectual outcomes for student learning, teaching can be treated as a science. If the criteria for success are clearly stated and the results are reproducible, findings from research can contribute to "a reasonable foundation of accepted fact." This foundation is represented by a rapidly growing research base. The personal qualities and style of an instructor contribute to the aspect of teaching that can be viewed as an art (a benefit confined to the instructor's class).
However, when student learning is used as the criterion (as distinct from student enthusiasm), we have found that effective teaching is not as tightly linked as is often assumed either to self-assessment of learning by students or to their evaluation of the course or instructor.

Focus on the Student as a Learner

The focus of our research is on the student as a learner, rather than on the instructor as a teacher. We try to determine the intellectual state of the student throughout the process of instruction. To the degree possible, we try to follow the procedures and rules of evidence of an experimental science. We conduct our investigations in a systematic manner and record our procedures so that they can be replicated. We use two general methods: individual demonstration interviews (which allow deep probing into the nature of student difficulties) and written tests (which provide information on prevalence). Continuous pre-testing and post-testing enable us to judge the effectiveness of instruction. Although experienced instructors know there is a gap between what they say and what students learn, most do not recognize how large the gap can be. The usual means of evaluation in physics courses, the ability to solve standard quantitative problems, is not adequate as a criterion for a functional understanding and unfortunately reinforces the perception of physics as a collection of facts and formulas. Success on numerical problems does not provide adequate feedback for improving instruction. Questions that require qualitative reasoning and verbal explanations are essential.
Our investigations have shown that on certain types of qualitative questions, student performance in physics is essentially the same before and after standard …

Figure B-1. Pretest: (a) students were asked to sketch what they would see on the screen; (b) correct answer. Source: Wosilait et al. (1998) and Heron and McDermott (1998). Reprinted with permission of the American Association of Physics Teachers and Optical Society of America.

… small section of about 24 students, in which groups of three or four work together. The structure in these 50-minute sessions is provided by worksheets that guide students through a series of exercises and simple experiments by asking questions. With results from questions like the one described above as a guide, we designed a tutorial entitled Light and Shadow. The tutorial begins by having students predict the images formed by point and line sources with apertures of various sizes and shapes. After making predictions and explaining their reasoning to one another, the students observe what actually happens and try to resolve any discrepancies with their predictions. They are then asked to predict and explain up-down and left-right inversions of images formed by asymmetric sources. These and other exercises help students recognize how the shape and relative size of the source and aperture and the distances involved affect the image. Systematic monitoring in the classroom helped us improve the tutorial. One exercise that was added had a pronounced effect on student understanding of the geometric model for light.
The students are asked to predict what they would see on the screen when a frosted light bulb is placed in front of a mask with a triangular hole. Many are surprised to see the inverted image of the bulb. Eventually, they realize that the entire bulb can be considered as a collection of point sources.

Table B-1. Results from pretest and posttest questions administered in introductory physics courses and graduate teaching seminars.

                                     Introductory course              Graduate seminar
                                     Pretests         Posttests       Pretests
                                     (before          (after          (before
                                     tutorial)        tutorial)       tutorial)
                                     (N = 1215)       (N = 360)       (N = 110)
  Correct or nearly correct             20%              80%             65%
  Incorrect: image mimics
  shape of hole in mask                 70%              10%             30%

Source: Wosilait et al. (1998) and Heron and McDermott (1998). Reprinted with permission of the American Association of Physics Teachers and Optical Society of America.

The students recognize that superposition of the images from the continuum of point sources produces an image that closely resembles the extended source, but is affected by the shape of the aperture. They also note that whether a light source can be treated as a point or extended source depends on a variety of factors.

Posttest

Throughout the development of the tutorial, assessment played a critical role. In Figure B-2 is one of several posttest questions that we administered on examinations to about 360 students in several introductory courses.
The percentage of correct or nearly correct responses was 80 percent, an increase from 20 percent on the pretest. Only 10 percent drew images the same shape as the aperture, in sharp contrast to the 70 percent who made this error on the pretest. (See Table B-1.) The teaching assistants and postdocs who lead the tutorial sessions participate in a weekly graduate teaching seminar in which they work through the pretests and tutorials. About 65 percent have given a correct, or nearly correct, response for the question described above. This result is consistent with our experience that advanced study may not increase student understanding of basic topics. We consider the pretest performance of graduate students to be a reasonable posttest goal for introductory students. As shown in Table B-1, the latter demonstrate a better functional understanding than the graduate students initially had.

Figure B-2. Posttest question: (a) students were asked to sketch what they would see on the screen; (b) correct answer. Source: Wosilait et al. (1998) and Heron and McDermott (1998). Reprinted with permission of the American Association of Physics Teachers and Optical Society of America.

Commentary

It is tempting for instructors to think that the rectilinear propagation of light is such a simple concept that only a brief discussion of the topic is needed. Evidence to the contrary comes not only from our own research but from the experience of colleagues in our department. Recently, instructors of an honors section and a regular section of the calculus-based course used other approaches to teach this concept. Their students did not work through the tutorial. In the honors section, the instructor demonstrated the image that is formed when light from an object passes through a pinhole. He asked questions to guide the students in explaining what they saw. He assigned homework based on equipment similar to that used in the tutorial.
Only about 30 percent of the students responded correctly on the homework. The instructor then distributed solutions. In…

…a lesser extent in Tutorials in Introductory Physics, we try to help students learn to answer, and to ask, the kinds of questions that are necessary to assess and improve their understanding. This ability is critical for all students, but especially for those who plan to teach. Learning to reflect on one's own thinking transcends the learning of physics or any other science.

Our group has demonstrated that, in the context of physics, discipline-based education research can help improve student learning. Recently, there has been a steady increase in the number of physicists who are pursuing this type of research. The results are reported at professional meetings and in articles in refereed journals that are readily accessible to physics faculty (McDermott and Redish, 1999). Thus, colleagues who are not involved in education research have a rich resource from which to draw in developing print and computer-based instructional materials. Our experience indicates that it is difficult to develop effective curriculum that yields consistent positive results. Therefore, unless faculty can devote a long-term effort to the development and refinement of their own instructional materials, they should take advantage of already existing curriculum that has been carefully designed and thoroughly assessed. Without a research base on student learning, we lack the knowledge necessary to make cumulative progress in improving instruction.
There is a need in all the sciences for research on the intellectual development of students as they progress through a given body of material. Investigations of this type demand a depth of understanding that ordinarily is found only among specialists in a field. Therefore, such research must be conducted by science faculty in the context of courses offered by science departments. The American Physical Society has issued a statement in support of research in physics education as a scholarly activity by faculty in physics departments. By taking similar action, other scientific societies could help strengthen efforts to improve student learning in their disciplines.

References

Heron, P. R. L., and McDermott, L. C. (1998). Bridging the gap between teaching and learning in geometrical optics: The role of research. Optics & Photonics News, 9(9), 30–36.

McDermott, L. C., and Redish, E. F. (1999). Resource letter: PER-1: Physics education research. American Journal of Physics, 67(9), 755.

McDermott, L. C., Shaffer, P. S., and the Physics Education Group. (1998). Tutorials in Introductory Physics. Upper Saddle River, NJ: Prentice-Hall.

McDermott, L. C., Shaffer, P. S., and Rosenquist, M. L. (1996). Physics by Inquiry (Vols. I–II). New York: Wiley.

Wosilait, K., Heron, P. R. L., Shaffer, P. S., and McDermott, L. C. (1998). Development and assessment of a research-based tutorial on light and shadow. American Journal of Physics, 66(10), 906–913.
Acknowledgments

Special thanks are due to the current faculty in the Physics Education Group: Paula R. L. Heron, Peter S. Shaffer, and Stamatis Vokos. In addition to past and present members of our group, I want to express my appreciation to the past and present leadership of the Physics Department and the University of Washington. I would like to recognize the early intellectual influence of Arnold B. Arons and the contributions of our physics colleagues here and elsewhere. I am also grateful to the National Science Foundation for enabling our group to do the research for which this CSSP award is being given.

…the game pieces (chess pieces, tanks, aircraft, etc.), their actions, and the application of sensors. More than 180 papers on LG have been published. Dr. Stilman wrote the first scholarly book on LG, Linguistic Geometry: From Search to Construction, published by Kluwer (now Springer) in February 2000. Since 2000, major theoretical advances have been made in the LG-based representation and solution of real-world defense problems. More details about LG and Dr. Stilman, including demonstration movies and brochures, can be found at the Stilman Advanced Strategies web site. A number of draft papers on LG and information on Dr. Stilman's research and teaching can be found at www.stilman-strategies.com/bstilman.

Theory and applications of Linguistic Geometry (LG)

LG is a new type of game theory, which allows us to solve classes of adversarial games of practical scale and complexity. Like every new theory, it has many topics that require experimental and theoretical research. Dr.
Boris Stilman has shown that LG is applicable to a wide class of games with concurrently moving agents. Later, he proved that, for several classes of games, LG generates optimal strategies in polynomial time. This groundbreaking result also suggests that for a much wider class of games LG strategies are optimal or close to optimal. The latest version of LG dispenses with tree search altogether by defining explicit game states, the so-called game boards. Such game boards must have a sufficiently rich structure that a "projection" of the game tree onto the board can be defined. Considered in its entirety, this projection essentially forms the graph of the game, such that each node in the graph represents multiple nodes of the game tree. However, although the resultant graph is much smaller than the game tree, it can still be too large for meaningful analysis. Within the LG approach, only portions of the projected game tree are constructed, and only those portions that represent a meaningful flow of events, the so-called trajectories. Moreover, such "flows" are not constructed in isolation, but are intertwined as action-reaction-counteraction constructs, the so-called zones. Essentially, in LG, search is replaced by the construction of strategies out of several types of constructs or blocks (an attack zone, a domination zone, a retreat zone, etc.) whose combinations reflect the entire set of winning strategies in abstract board games.
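As a rough sketch of the trajectory idea (an illustrative toy, not Stilman's actual construction; the board size, move set, and obstacles are invented for the example), one can build a path for a piece directly on the board graph with breadth-first search rather than by expanding a game tree:

```python
from collections import deque

def trajectory(start, goal, obstacles, size=8):
    """Toy 'trajectory': one shortest admissible path for a piece that
    moves one square orthogonally on a size x size board, built directly
    on the board graph (BFS) instead of by game-tree search."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk back through parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

# An "attack zone" would bundle a main trajectory like this with the
# opponent's interception trajectories; here we build only the main one.
path = trajectory((0, 0), (3, 3), obstacles={(1, 0), (1, 1), (1, 2)})
```

The point of the sketch is structural: the strategy element is constructed on the board graph in time polynomial in the board size, whereas enumerating move sequences in a game tree grows exponentially with depth.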
Informally, we can say that LG reveals the "genetic structure" of abstract board games.

Boris Stilman, PhD

Professor Stilman received an MS in mathematics from Moscow State University in 1972, and a PhD in computer science and a PhD in electrical engineering from the National Research Institute for Electrical Engineering, Moscow, in 1984. Office: 1200 Larimer Street, North Classroom, Room 2404B. Mailing address: Campus Box 109, P.O. Box 173364, Denver, CO 80217-3364.

In 1972–1988, in Moscow, USSR, Dr. Stilman was involved in the advanced research project PIONEER, led by former world chess champion Professor Mikhail Botvinnik. The goal of the project was to discover and formalize the approach used by the most advanced chess experts to solve chess problems almost without search. Dr. Stilman developed the experimental and mathematical foundations of this new approach to search problems in artificial intelligence (AI). In 1990–91, while at McGill University, Montreal, Canada, Dr. Stilman built on this approach to originate linguistic geometry (LG), a new theory for solving abstract board games. LG allows us to overcome combinatorial explosion, and it scales to complex real-world problems that are considered intractable by conventional approaches. Since 1991, Dr. Stilman has been developing the theory and applications of LG at the University of Colorado Denver as a professor of computer science.
A leap in the development of LG was made in 1999, when Dr. Stilman, with a group of scientists and engineers, founded Stilman Advanced Strategies, LLC (Stilman). Since then, he has combined his professorship at UC Denver with his leadership role as chairman and CEO at Stilman. Dr. Stilman has led a number of national projects in the former Soviet Union (until 1990), several government-funded projects at UC Denver, and all of the projects developed at Stilman. A number of LG applications developed at Stilman have passed comprehensive testing and are considered vital for U.S. national defense; they are currently being transitioned to the U.S. armed forces. Dr. Stilman has published several books and contributions to books, and more than 200 research papers. He has received numerous R&D awards, including the top research awards at UC Denver, grants from…

Heredity (the adjective is hereditary) is the transfer of characteristics from parent to offspring through their genes, or the transfer of a title, style, or social status through the social convention known as inheritance (for example, a hereditary title may be passed down according to relevant customs and/or laws). It was apparent to ancient humans that offspring resembled their parents. For example, Genesis 30–46 tells how Jacob and Laban split their sheep into white and speckled varieties so they could distinguish the two and ensure that none were later stolen. Although it was clear that traits were hereditary, the precise mechanism of heredity was not clear.
Ancient concepts of heredity

The Greek philosophers had a variety of ideas about heredity. Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that "seeds" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female semen mixed at conception. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a "nurse for the young life sown within her." Various hereditary mechanisms were envisaged without being properly tested or quantified, including blending inheritance and the inheritance of acquired traits. Nevertheless, people were able to develop domestic breeds of animals, as well as crops, through artificial selection. The inheritance of acquired traits also formed part of early Lamarckian ideas on evolution.

In the 1670s, the Dutch microscopist Antonie van Leeuwenhoek (1632–1723) discovered "animalcules" in the sperm of humans and other animals. Some scientists speculated that they saw a "little man" (homunculus) inside each sperm. These scientists formed a school of thought known as the "spermists." They contended that the only contributions of the female to the next generation were the womb in which the homunculus grew and the prenatal influences of the womb. An opposing school of thought, the ovists, believed that the future human was in the egg, and that sperm merely stimulated the growth of the egg. Ovists thought women carried eggs containing boy and girl children, and that the gender of the offspring was determined well before conception.

Pangenesis was the idea that males and females formed "pangenes" in every organ. These pangenes subsequently moved through the blood to the genitals and then to the children. The concept originated with the ancient Greeks and influenced biology until little over 100 years ago.
The…

A light source that's not only mathematically perfect, but perhaps a little magical.

People appear symmetrical, but even the most perfect human face shows irregularities if we compare the left side with the right. Perhaps this is why the absolute, rigid symmetry of crystals seems beautiful yet alien to us. Unlike DNA's soft spiral, a crystal's molecular bonds align themselves to form regular three-dimensional structures, which the Greeks considered mathematically pure. The most fundamental of these shapes are known as the five Platonic solids. If you assemble equal-sided triangles (all the same size, at the same angles to each other) you can create three possible solids: a tetrahedron (4 faces), an octahedron (8 faces), and an icosahedron (20 faces). If you use squares instead of triangles, you can create only a hexahedron, commonly known as a cube. Pentagons create a dodecahedron (12 faces), and that's as far as we can go: no other solids can be built from identical, equal-sided, equal-angled polygons. The Platonic solids have always fascinated me. My favorite is the dodecahedron, which is why I used it in this project as the basis for a table lamp. By extending its edges to form points, we make something that looks not only mathematically perfect, but perhaps a little magical.

So what is SHA-1?
From Wikipedia: The SHA (Secure Hash Algorithm) family is a set of related cryptographic hash functions. The most commonly used function in the family, SHA-1, is employed in a large variety of popular security applications and protocols, including TLS, SSL, PGP, SSH, S/MIME, and IPsec. SHA-1 is considered to be the successor to MD5, an earlier, widely used hash function. The SHA algorithms were designed by the National Security Agency (NSA) and published as a US government standard. The first member of the family, published in 1993, is officially called SHA; however, it is often called SHA-0 to avoid confusion with its successors. Two years later, SHA-1, the first successor to SHA, was published. Four more variants have since been issued with increased output ranges and a slightly different design: SHA-224, SHA-256, SHA-384, and SHA-512, sometimes collectively referred to as SHA-2.

From w3c.org: The Secure Hash Algorithm takes a message of less than 2^64 bits in length and produces a 160-bit message digest, designed so that it should be computationally expensive to find a text which matches a given hash. That is, if you have a hash for document A, H(A), it is difficult to find a document B which has the same hash, and even more difficult to arrange that document B says what you want it to say.

Some months ago a team of Chinese researchers found an algorithm that could produce collisions in SHA-1, i.e., different messages that produce the same hash, which could in theory be used to forge certificates. SHA-1 is supposed to require at least 2^80 operations to produce a collision, which would be enough to keep it squarely out of supercomputer realm. The researchers initially managed to produce collisions in 2^69 operations, and now they were able to do it in 2^63. The lower it gets, the faster it is to break :D For now, this is only a paper... until someone implements it, and then the fun begins.
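The 160-bit digest is easy to inspect with Python's standard hashlib module (a quick illustration; hashlib is not mentioned in the original text):

```python
import hashlib

# SHA-1 produces a 160-bit (20-byte) digest regardless of input size.
digest = hashlib.sha1(b"abc").hexdigest()
print(digest)  # 40 hex characters = 160 bits
# FIPS 180 test vector for "abc": a9993e364706816aba3e25717850c26c9cd0d89d

# A collision would be two *different* inputs with the same digest.
# Brute force should cost roughly 2^80 hash operations; the 2^69 and
# 2^63 attacks mentioned above undercut that bound.
assert hashlib.sha1(b"abc").hexdigest() != hashlib.sha1(b"abd").hexdigest()
```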
Although the US is recommending a move to SHA-2, there's this interesting quote from NIST security technology group manager William Burr, in Federal Computer Week: "SHA-1 is not broken, and there is not much reason to suspect that it will be soon." Should become an interesting tag.

Okay, here goes. I theorize that black holes may just be a naturally occurring form of Bose-Einstein condensation.

Background on BEC: All states of matter discovered so far occur naturally, so it stands to reason that the BEC will also occur naturally somewhere. Bose-Einstein condensation is a state of matter that occurs when the temperature of a substance approaches absolute zero. All atoms in a substance are fluctuating between different energy levels. An explanation of this, and an introduction to Bose-Einstein condensation, can be found at http://www.colorado.edu/physics/2000/bec/t...emperature.html. These fluctuations are what we call temperature: faster-moving atoms are considered a higher temperature. The site above describes temperature as a range of speeds; that's because of the fluctuations. Anyway, when a substance is cooled to within a billionth of a degree of absolute zero, all the atoms jump to the lowest energy level. The fact that they are all within this energy level makes it impossible to tell the difference between the atoms. To quote the link I'm about to give you, "it means that all the atoms are absolutely identical. There is no possible measurement that can tell them apart." This link will further explain the energy levels and why all the atoms are indistinguishable from one another: http://www.colorado.edu/physics/2000/bec/w...what_is_it.html.

When you think "atoms," you probably think of small dots or spheres. You imagine them as well defined, like a baseball. However, atoms are not well defined. They are blurry, like a smudge. When supercooled, they get even blurrier, until they reach BEC. Some very cold atoms: http://www.colorado.edu/physics/2000/bec/i.../cold_atoms.gif. Atoms in a BEC: http://www.colorado.edu/physics/2000/bec/i.../super_atom.gif. They all just sort of merge into a "super atom," and as long as they are this cold, they will stay in this form. After being heated above this low, low temperature, they spread out into whatever form they were in before. It hasn't been discovered whether a solid in BEC would re-form into its original shape, but all of its component substances would be there.

Background on black holes: A black hole is an infinitely dense object with an infinitely powerful gravitational pull within its event horizon. Black holes are formed when a dying star's gravity causes the star to collapse in on itself; it then begins to pull in everything within a certain distance. A basic overview is all that is needed here, as a very large working knowledge of black holes is not necessary. You will either understand my theory or you won't.

Getting to the point: So we have a working knowledge of what a BEC is and what a black hole is, right?
Or maybe you skipped reading that and jumped to where I said I was getting to the point. If that's the case, you may not understand what I'm about to talk about. My theory is that a black hole is a Bose-Einstein condensate. I believe that the temperatures within the actual "hole" (though it isn't a hole) are cold enough to form a Bose-Einstein condensate. So far, scientists have said that nothing in the universe can be colder than 3 degrees above absolute zero, due to residual heat left over from the Big Bang. However, we also didn't think that black holes really existed 20 years ago, and now we know that they do. I believe that the gravity is so intense near the "core" of the black hole that it slows the atoms down to within a few billionths of a degree of absolute zero, causing it to become a BEC. Much in the same way that putting an ice cube on your hand will cool your hand, I believe that the base BEC would cool any incoming matter as it entered the black hole. The closer the new matter gets to the hole, the colder it gets, until it reaches the "core" at the right temperature and becomes part of the BEC. This gradual cooling would keep something hot from going into the BEC, heating it up, and causing it to separate. This massive amount of mass would have such great density that it would create a large gravitational field around it.

…clearly show structures at two depths close to the CMB, and the existence of new phase transitions in the mantle. The team's methodology involves a rich mix of physics, mathematics, and statistics to extract information from seismic wave data through "inverse scattering." Whereas in the past, existing knowledge of geophysical structures was used to interpret scattering patterns, this method allows researchers to take scattered wave data and reconstruct an image of the subsurface without relying on existing knowledge. Combined with considerably better data coverage, this advance in imaging is leading to a rapid expansion in our knowledge of the subsurface and the inner workings of the planet.

Materials and metallurgical engineering professor Ivar Reimanis recently discovered a unique material behavior in which particles are ejected from the surface of an indented ceramic over periods lasting up to a few minutes. Because many of the ejected particles are submicron in size, it looks to the unaided eye as if the ceramic is smoking. The key ingredient in the ceramic is a lithium aluminum silicate called β-eucryptite, a strange material that has a negative coefficient of thermal expansion. It is thought that a high compressive stress, such as that experienced under an indenter, stimulates a transformation to a denser ceramic phase. Upon release, a reverse transformation leads to a popcorn-like effect in which particles ranging from submicron to 50 microns are ejected violently from the material. There is no known report of this phenomenon in any other material. With assistance from undergraduate students Chris Seick and Kyle Fitzpatrick, Reimanis is exploring whether this discovery can inform the development of a toughened ceramic composite: the phase transformation may be able to shut cracks before they can propagate through a composite. In fact, this latter idea has already been submitted for a United States patent. To better understand the phenomenon, the Mines researchers have involved collaborators at the National Institute of Standards and Technology in Gaithersburg, the Los Alamos National Laboratory, and the Indian Institute of Science. The work is being supported by the U.S. Department of Energy, Office of Basic Energy Sciences.

Enhanced imaging of the subsurface

Paul Sava, who joined the Department of Geophysics faculty in the fall of 2006, is working on increasing the accuracy of seismic imaging. When computers generate a visual representation of the subsurface from a seismic dataset, finer details are often obscured by background "noise." While imaging may provide a coherent overall picture, it emerges from a fuzzy background, much as a badly oriented TV…

Two types of rationality

"On est toujours libre de ne rien comprendre à rien" ("One is always free to understand nothing about anything"). — Gabriel Marcel

Experimental natural science is grounded in careful observation. Each investigation must proceed from observed facts. In physical and biological science these observed facts are usually inert facts; that is to say, they are grasped from the exterior by an observer who is not disturbed by them and does not disturb them by his process of observation. Even in microphysics, where the uncertainty principle tells us that observational procedures disturb the field of the observed, there are mathematical techniques which maintain the observer in some sort of relation of exteriority to the observed, and indeed to his observing techniques themselves. In a science of personal interaction, on the other hand, mutual disturbance of the observer and the observed is not only inevitable in every case, but it is this mutual disturbance which gives rise to the primary facts on which theory is based, and not the disturbed or disturbing personal entities.
The facts which are the observational data of the anthropological sciences do not differ from the facts from which science proceeds in the sense that facts for biology differ from facts for physics: they differ in ontological status from natural-scientific facts. Put another way, the observer-observed relation in a science of persons is ontologically continuous (subject-object vis-à-vis subject-object), whereas in the natural sciences it is discontinuous (subject vis-à-vis object), permitting a purely exterior description of the field of the observed.

A long-known but little-discussed vulnerability in the modern internet's design was highlighted yesterday by a report that hackers traced to Iran spoofed the encryption procedures used to secure connections to Google, Yahoo, Microsoft, and other major web sites. This design, pioneered by Netscape in the early and mid-1990s, allows the creation of encrypted channels to web sites, an important security feature typically identified by a closed lock icon in a browser. The system relies on third parties to issue so-called certificates that prove that a web site is legitimate when making an "https://" connection. The problem, however, is that the list of certificate issuers has ballooned over the years to approximately 650 organizations, which may not always follow the strictest security procedures. And each one has a copy of the web's master keys.
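The structural weakness here (any trusted issuer can vouch for any name) can be sketched as a toy trust model. The CA names and the dictionary-based "certificates" below are invented for illustration and are nothing like real X.509 validation:

```python
# Toy model of browser certificate validation (illustration only; real
# validation uses X.509 chains and digital signatures, not dictionaries).
TRUST_STORE = {"GoodRootCA", "SmallRegionalCA", "CompromisedCA"}  # hypothetical names

def make_cert(domain, issuer):
    """In this sketch, a 'certificate' is just an (issuer, domain) claim."""
    return {"issuer": issuer, "domain": domain}

def browser_accepts(cert, requested_domain):
    # The browser only checks that *some* trusted CA vouched for the name;
    # it has no notion of which CA is 'supposed to' certify a given domain.
    return cert["issuer"] in TRUST_STORE and cert["domain"] == requested_domain

legit = make_cert("example.com", "GoodRootCA")
forged = make_cert("example.com", "CompromisedCA")  # attacker-controlled issuer

# Both pass: any one of the ~650 trusted issuers can impersonate any site.
assert browser_accepts(legit, "example.com")
assert browser_accepts(forged, "example.com")
```

This is why the compromise of a single issuer matters globally: proposed mitigations such as pinning the expected issuer for a given domain add exactly the binding that the basic model lacks.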
"There is this problem that exists today where there are a very large number of certificate authorities that are trusted by everyone and everything," says Peter Eckersley, senior staff technologist at the Electronic Frontier Foundation, who has compiled a list of them. This has resulted in a bizarre situation in which companies like Etisalat, a wireless carrier in the United Arab Emirates that implanted spyware on customers' BlackBerry devices, possess the master keys that can be used to impersonate any web site on the internet, even the U.S. Treasury, bankofamerica.com, and google.com. So do more than 100 German universities, the U.S. Department of Homeland Security, and random organizations like the Gemini Observatory, which operates a pair of 8.1-meter-diameter telescopes in Hawaii and Chile.

It's a situation that nobody would have anticipated nearly two decades ago, when the cryptographic protection known as SSL (Secure Sockets Layer) began to be embedded into web browsers. At the time, the focus was on securing the connections, not on securing the certificate authorities themselves, or on limiting their numbers. "It was the '90s," says security researcher Dan Kaminsky, who discovered a serious Domain Name System flaw in 2008. "We didn't realize how this system would grow." Today, there are about 1,500 master keys, or signing certificates, trusted by Internet Explorer and Firefox.

The vulnerability of today's authentication infrastructure came to light after Comodo, a Jersey City, N.J.-based firm that issues SSL certificates, alerted web browser makers that an unnamed European partner had its systems compromised…

Stretched molecule puts a new spin on electrons
Jun 15, 2010

Physicists in the US have invented a way of measuring the magnetic properties of a single molecule as it is being stretched. The technique provides a new approach for studying quantum chemistry and how the spin of an electron affects its passage through tiny structures. It could one day even be adapted for use in spintronic devices, which use the spin of the electron to process and store information.

The technique explores an effect first explained in 1964 by the Japanese physicist Jun Kondo. He showed that, at very low temperatures, a conduction electron in a metal such as gold can pair up with an electron of opposite spin associated with a magnetic impurity (such as iron). This tie-up curtails the electron's ability to conduct current, resulting in a drop in the conductivity of the metal in these chilly conditions. Physicists have observed this low-temperature fall in conductivity, known as the "Kondo effect," in a number of bulk materials. However, something very different can happen when electrons confront just a single magnetic impurity, such as a magnetic molecule or a magnetic quantum dot. Studies of electrons flowing from one metallic electrode to another via the magnetic impurity reveal a sharp peak in the conductance of the dot or molecule at zero voltage, dubbed a "Kondo resonance."
Jumping the barrier

For non-magnetic molecules or quantum dots, conduction is governed by the repulsive electrostatic force between an electron in the metal and an electron in the molecule or dot. Any electron wishing to hop from an electrode into a molecule or dot must overcome this barrier. In a magnetic system, however, the same pairing interaction described by Kondo lowers this barrier, allowing an electron to jump onto the molecule or dot, and then jump off the other side. Although this effect has already been seen in dots and molecules with one magnetic electron (spin-1/2 systems), studying it in higher-spin systems could shed further light on how conduction electrons behave in magnetic materials.

Now, a team led by Dan Ralph at Cornell University has studied a Kondo resonance for the first time in a spin-1 molecule. The researchers have also shown that the resonance can be modified by stretching the molecule along one direction. In the experiment, the team used lithography to first create a gold bridge just 500 nm long and a few tens of nanometres thick and wide on a silicon substrate. A section in the middle of the bridge was removed, and a single molecule, comprising one magnetic atom (cobalt) and six pyridine rings, was put in its place. Cobalt has two magnetic electrons that arrange themselves into a triplet state: a set of three quantum states with identical energies.
Both spins point in the same direction, giving cobalt a total spin of 1. When the sample was cooled to about 1.6 K, the team noticed a big drop in the electrical resistance of the molecule at zero applied voltage – the hallmark of a Kondo resonance. Ralph and colleagues then stretched the molecule by as much as 0.08 nm by bending the silicon substrate. The Kondo resonance was seen to split into two peaks, one on either side of zero applied voltage. According to Ralph, this splitting occurs because stretching the molecule breaks the cubic symmetry that is responsible for the triplet states all having the same energy. Instead, one state drops in energy, and the size of the drop is related to the size of the splitting.

The team confirmed the magnetic nature of the splitting by repeating the experiment in an applied magnetic field. When the field was applied perpendicular to the direction of stretch, the splitting gradually got bigger as the field strength was turned up. However, when the field was applied parallel to the stretch, the size of the splitting changed significantly as the field strength was varied. According to Ralph, this behaviour confirms that they are observing a Kondo resonance in a spin-1 molecule.

Ralph and colleagues also looked at how the conductance at zero voltage changes as the sample is warmed from 1.6 K to about 30 K. The drop in conductance was that expected for a spin-1 Kondo resonance.

Pablo Jarillo-Herrero of the Massachusetts Institute of Technology describes the work as "an important experiment" that could lead to better quantum-chemistry calculations, which yield the spin states of a molecule. He also believes that the work could result in the development of tiny magnetic memories that store information in terms of the spin state of the molecule. The work could even lead to the development of new sources of spin-polarized electrons and switches that can turn spin currents on and off.
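For a sense of the energy scales at play, the Zeeman splitting of a spin state in an applied field is ΔE = g·μ_B·B per unit of spin projection. A back-of-the-envelope sketch (assuming a free-electron g-factor of 2, which the article does not quote for this molecule) compares that splitting with the thermal energy at the 1.6 K operating temperature:

```python
MU_B = 5.788381806e-5   # Bohr magneton, eV/T
K_B = 8.617333262e-5    # Boltzmann constant, eV/K
g = 2.0                 # assumed g-factor (illustrative; not given in the article)

def zeeman_split_ev(B_tesla):
    """Zeeman splitting g * mu_B * B between adjacent spin projections, in eV."""
    return g * MU_B * B_tesla

thermal_ev = K_B * 1.6          # thermal energy scale at 1.6 K
split_at_1T = zeeman_split_ev(1.0)

# At ~1 T the Zeeman splitting (~0.12 meV) is comparable to k_B * 1.6 K (~0.14 meV),
# which is why tesla-scale fields visibly reshape the resonance at this temperature.
print(split_at_1T, thermal_ev)
```

The comparison is only an order-of-magnitude check, but it shows why such experiments combine millikelvin-to-kelvin temperatures with tesla-scale fields: the two energy scales have to be commensurate for field sweeps to resolve the splitting.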
Ralph told physicsworld.com that the team is now trying to repeat the experiment using electrodes made from a magnetic metal rather than gold. This would allow spin-polarized

Interhemispheric communication via direct connections for alternative meanings of ambiguous words

Collins, M. A. (2002). Interhemispheric communication via direct connections for alternative meanings of ambiguous words. Brain and Language, 80(1), pp. 77–96.

A priming experiment was used to investigate Burgess and Simpson's (1988) claim that interhemispheric cooperation plays an essential role in the interpretation of ambiguous text. In doing so, the merits of two models of interhemispheric cooperation, the homotopic inhibition theory (Cook, 1986) and the direct connections model (Collins & Coney, 1998), were examined. Priming of alternative meanings of ambiguous words was measured using homographs and their dominant (e.g., bark–dog) and subordinate meanings (e.g., bark–tree) as related pairs in a lexical decision task, with normal university students as subjects. Stimulus pairs were temporally separated by stimulus onset asynchronies (SOAs) of 180 and 350 ms and were independently projected to the left or right visual fields (LVF or RVF). At the shorter SOA, priming was restricted to LVF–RVF presentations, with homograph primes directed to the LVF equally facilitating responses to RVF targets associated with their dominant and subordinate meanings. This suggests that within 180 ms, a homograph projected to the right hemisphere activates a range of alternative meanings in the left hemisphere.
At an SOA of 350 ms, LVF–RVF priming was obtained along with RVF–LVF and RVF–RVF priming. Evidently at this stage of processing, an ambiguous word directed to either hemisphere activates a range of alternative meanings in the contralateral hemisphere, while RVF primes also activate subordinate, but not dominant, meanings in the left hemisphere. A homograph directed to the LVF did not activate dominant or subordinate meanings within the right hemisphere at either SOA. Generally, ambiguous words directed to either hemisphere activated a more extensive array of meanings in the contralateral hemisphere than in the hemisphere to which the prime was directed. This confirms the importance of interhemispheric cooperation in generating alternative meanings of ambiguous words. Strong support was found for the direct connections model (Collins & Coney, 1998), but no support for the homotopic inhibition theory (Cook, 1986).

Publication type: Journal article.

…on the meter spades and diverts the current flow from the meter to the internal bypass. This allows for the exchange, inspection or repair of a meter without service interruption to the customer. Note: the meter cover cannot be reinstalled or sealed with the lever handle in the upward "bypass" position.

Capacity: The capability to generate electrical power, measured in megawatts (MW) or kilowatts (kW).

Circuit: A conductor or a system of conductors through which electric current flows.

Circuit breaker(s): A safety device designed to automatically stop the flow of electricity whenever a circuit becomes overloaded, i.e. exceeds the number of amps that the wiring can accommodate.
Codes: Various government bodies have adopted minimum safety standards, or "codes," for electrical wiring. See also NEC and NESC.

Commission: The agency responsible for regulating the regulated portions of investor-owned public utilities; in Missouri, the Missouri Public Service Commission (PSC) or any duly constituted successor.

Company: Union Electric Company dba Ameren Missouri, acting through its duly authorized officers, agents or employees within the scope of their respective duties and authorities.

Conductors: Materials that carry electrical current, usually wires or cables.

Conduit: A channel for holding and protecting conductors and cables, made of metal or an insulating material, usually circular in cross section like a pipe.

Conduit system: The joining of multiple conduits to provide a channel for holding and protecting conductors and cables from one piece of electrical equipment to another.

Corps of Engineers: Refers to the United States Army Corps of Engineers.

Customer: Any person, developer, firm, organization, association, corporation or other entity legally receiving service at a premise, or whose facilities are connected for utilizing service at the premises.

Current transformer (CT): A transformer used to measure the amount of current flowing in a circuit. Its primary winding is rated in excess of the expected current of the circuit, and the secondary will normally be rated at 5 amps, equal to the nominal full primary current.

Delivery voltage: The voltage of the Company's lines at the point of delivery.

Developer: The entity responsible for constructing and making improvements to a tract of land (streets, sidewalks, storm sewers, utilities, etc.) for the creation of a residential and/or non-residential development.
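The CT definition implies a simple ratio relationship: a CT rated, say, 400:5 steps 400 A of primary current down to 5 A on the secondary, so the meter sees the primary current scaled by 5/rating. A minimal sketch (the 400:5 rating and 300 A load are illustrative numbers, not taken from this tariff):

```python
def ct_secondary_amps(primary_amps, ct_primary_rating, ct_secondary_rating=5.0):
    """Secondary current of an ideal current transformer.

    The secondary is conventionally rated 5 A at nominal full primary current,
    so the effective ratio is ct_primary_rating : ct_secondary_rating.
    """
    return primary_amps * ct_secondary_rating / ct_primary_rating

# A 400:5 CT carrying 300 A on the primary drives 3.75 A through the meter.
print(ct_secondary_amps(300, 400))  # 3.75
```

This is the idealized (lossless, unsaturated) relationship; a real CT also has an accuracy class and burden rating that this sketch ignores.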
Development: A lot, tract, or parcel of land divided into two or more lots, plots, sites or other divisions for use for two or more new residential or non-residential buildings or

…conductors to a building or other structure, or otherwise defined area, and intended to constitute the main control and means of cutoff of the supply.

Service supply type: Indicates whether electric service is supplied by overhead or underground lines.

Single-phase: Implies a power supply or a load that uses only two wires for power. Some "grounded" single-phase devices also have a third wire used only for a safety ground, but not connected to the electrical supply or load in any other way except for safety grounding.

Span: The distance between two poles of a transmission or distribution line.

SSN/FEIN: An individual's Social Security number or a company's Federal Employer Identification Number.

Subdivision: A lot, tract, or parcel of land divided into two or more lots, plots, sites or other divisions for use for two or more new residential buildings, or the land on which is constructed new multiple-occupancy residential buildings, per a recorded plat.

Substation: Facility equipment that switches, changes, or regulates electric voltage.

Surge protector: An electrical device that protects equipment from a sudden, high fluctuation in the level of voltage normally flowing during a period of time.

Tariff: Schedule of Ameren Missouri rates, charges, and general rules and regulations for providing electric service. Tariffs are available at www.ameren.com and are on file with the Missouri Public Service Commission.
Taxing area: The geographical area where the premise is located and where taxes are paid. Generally speaking, taxing areas are jurisdiction boundaries such as cities or towns and may have inspection authority jurisdiction.

Temporary service: A service intended to be used for a limited period, such as for construction purposes. The temporary meter structure will be removed after completion of the project.

Three-phase: A multiple-phase power supply or load that uses at least three wires, where a different voltage phase from a common generator is carried between each pair of wires. The voltage level may be identical, but the voltages will vary in phase relationship to each other by 120 degrees.

Transmission system: An interconnected group of electric transmission lines and associated equipment for moving or transferring electric energy in bulk between points of supply and points at which it is transformed for delivery over the distribution system lines to consumers, or is delivered to other electric systems.

Triplex service: Three separate wires insulated and twisted around one another (2 hot wires, 1 neutral).

Villa: A residential building consisting of two units that share a common wall.

Voltage/volts: The measure of electrical pressure in a circuit
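The 120-degree relationship in the three-phase definition has a well-known consequence: the three instantaneous phase voltages of a balanced supply always sum to zero. A small sketch (the 230 V nominal and 50 Hz figures are illustrative values, not from this document):

```python
import math

V_PEAK = 230 * math.sqrt(2)   # illustrative peak voltage for a 230 V RMS phase
FREQ = 50.0                   # illustrative supply frequency, Hz

def phase_voltages(t):
    """Instantaneous voltages of a balanced three-phase supply at time t.

    The three phases are offset from each other by 120 degrees (2*pi/3 rad).
    """
    omega = 2 * math.pi * FREQ
    return [V_PEAK * math.cos(omega * t - k * 2 * math.pi / 3) for k in range(3)]

# At every instant the balanced phases cancel: their sum is (numerically) zero.
for t in (0.0, 0.001, 0.0042, 0.01):
    assert abs(sum(phase_voltages(t))) < 1e-9
```

That cancellation is why a balanced three-phase load draws no current through the neutral, and why three-phase delivery moves power with less conductor material than three independent single-phase circuits.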
snowmass village, colo., july 6 by observing millions of subatomic particles called b mesons, a team of scientists working at the stanford linear accelerator center in california has found new evidence of a basic but subtle lopsidedness in nature that may explain why the universe contains mostly matter, rather than being virtually empty and devoid of stars, planets and people. the results found by a multinational team of about 600 physicists and engineers were announced today in stanford. the lopsidedness is \" extraordinarily tiny, \" dr. jonathan dorfan, the director of the center, said at a meeting of physicists here. nonetheless, he said, it may explain \" a spectacularly interesting phenomenon, namely why we are here. \" the asymmetry was first seen in 1964 by dr. val fitch of princeton university and dr. james cronin of the university of chicago in an experiment for which they later received a nobel prize. the effect was revealed in slight differences in the behavior of a given type of particle so - called kaons or neutral k ' s in the fitch - cronin experiment and its antimatter counterparts. such differences may explain why the big bang explosion thought to have created the universe did not produce equal amounts of matter and antimatter, which then would have annihilated each other and left nothing but light. but if matter and antimatter behaved differently in the cooling that followed the big bang, matter might have gradually begun to predominate in the universe. physicists call these theorized differences charge - parity violation or cp violation. but despite nearly 40 years of searching since the original experiment, physicists had been unable to show definitively that any other particles displayed cp violation. this left physicists to wonder whether the original discovery reflected some unexplained quirk or a basic natural law. the new findings show with a statistical certainty of 99. 
997 percent that the effect also occurs in another type of particle, the b meson, the team says. \" people have looked for this or the equivalent under every rock \" since the fitch - cronin experiment, said dr. stewart smith, a princeton physicist who is the spokesman for the team. dr. fitch, who was not involved in the new work, agreed that despite strong hints from earlier experiments at the fermi national accelerator laboratory the new results are the first", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6365943174882824, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-26T14:05:11.290969"} {"text": "dr. stewart smith, a princeton physicist who is the spokesman for the team. dr. fitch, who was not involved in the new work, agreed that despite strong hints from earlier experiments at the fermi national accelerator laboratory the new results are the first solid demonstration of cp violation in another particle. the experiment produced millions of b mesons and their antimatter counterparts, anti - b mesons, and observed the rates at which they decayed into other particles. ( anti - b ' s are also called b - bars ; the experiment is called babar, after the elephant in the children ' s books. ) slight differences in the decay rates of the matter and antimatter particles established cp violation \" with a very high probability, \" said dr. gautier hamel de monchenault, a physicist at the atomic energy commission in saclay, france, and the physics analysis coordinator of babar. dr. hamel de monchenault said the degree of cp violation observed seemed generally in line with what physicists expected in their standard model of particle physics, the framework that physicists use to describe the fundamental particles and forces in the universe. 
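The "99.997 percent" statistical certainty can be translated into the "sigma" language particle physicists favor: find n such that a Gaussian measurement falls within ±n standard deviations of the mean with probability 0.99997, i.e. solve erf(n/√2) = 0.99997. A hedged sketch (treating the quoted figure as a two-sided Gaussian probability, which is an assumption about how the number was defined):

```python
import math

def two_sided_sigma(confidence, lo=0.0, hi=10.0):
    """Solve erf(n / sqrt(2)) = confidence for n by bisection.

    For a Gaussian, erf(n / sqrt(2)) is the probability of landing
    within +/- n standard deviations of the mean.
    """
    for _ in range(100):
        mid = (lo + hi) / 2
        if math.erf(mid / math.sqrt(2)) < confidence:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = two_sided_sigma(0.99997)
print(round(n, 2))  # roughly a 4-sigma effect under this reading
```

Under this two-sided reading the quoted certainty sits a little above four standard deviations, short of the five-sigma convention later adopted for discovery claims, which is consistent with the article's cautious "first solid demonstration" framing.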
A closely related experiment called Belle, at the KEK accelerator laboratory in Tsukuba, Japan, is also measuring the effect with B mesons and, like BaBar, presented preliminary results last year. But last year, neither group could demonstrate an effect. Despite the new results, most theorists agree that the degree of CP violation predicted by the Standard Model and observed in the new experiment is probably not enough to explain all the matter that is seen in the universe. Dr. Natalie Roe, a BaBar collaborator at the Lawrence Berkeley National Laboratory, said at the meeting here that further experiments at the Stanford laboratory would both check the latest results and test the Standard Model's predictions in new ways. "This is the beginning of the story," Dr. Roe said. "What we want to find is a crack in the Standard Model."

File date: 07.07.01. This data file may be reproduced in its entirety for non-commercial use. A return link to the Access Research Network website would be appreciated. Documents on this site which have been reproduced from a previous publication are copyrighted through the individual publication. See the body of the above document for specific copyright information.