{"text": "This might be a rare case about which Einstein was wrong. More than 60 years ago, the great physicist scoffed at the idea that anything could travel faster than light, even though quantum mechanics had suggested such a condition. Now four Swiss researchers have brought the possibility closer to reality. Testing a concept called \"spooky action at a distance\" -- a phrase used by Einstein in criticizing the phenomenon -- they have shown that two subatomic particles can communicate nearly instantaneously, even if they are separated by cosmic distances. Alice's Wonderland had nothing on quantum physics, which describes a bizarre state of matter and energy. Not only can the same atom exist in two locations at once, but merely attempting to observe a particle will alter its properties. Perhaps least intuitive is the characteristic called entanglement. As described by quantum mechanics, it means that two entangled particles can keep tabs on each other no matter how far apart they are. Physicists have been trying for decades to determine whether this property is real and what might cause it. In the process, they've uncovered evidence for it but not much about its properties. Physicist Nicolas Gisin and colleagues at the University of Geneva in Switzerland split off pairs of quantum-entangled photons and sent them from the university's campus through two fiber-optic cables to two Swiss villages located 18 kilometers apart. Thinking of the photons like traffic lights, each passed through specially designed detectors that determined what \"color\" they were when entering the cable and what color they appeared to be when they reached the terminus. The experiments revealed two things: first, the physical properties of the photons changed identically during their journey, just as predicted by quantum theory -- when one turned \"red,\" so did the other.
Second, there was no detectable time difference between when those changes occurred in the photons, as though an imaginary traffic controller had signaled them both. The result, the team reports in tomorrow's issue of Nature, is that whatever was affecting the photons seems to have happened nearly instantaneously and that, according to their calculations, the phenomenon influencing the particles had to be traveling at least 10,000 times faster than light. Given Einstein's standard speed limit on light traveling within conventional spacetime, the experiments show that entanglement might be controlled by something existing beyond it. Gisin says that once the scientific community \"accepts that nature has this ability, we should try to create models that explain it.\" Although the research doesn't demonstrate spooky action", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6731505919639854, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:09.416285"} {"text": "A theoretical analysis of recent experiments suggests that a key feature of a topological quantum computer \u2014 the unusual statistics of quasiparticles in the quantum Hall effect \u2014 may finally have been observed. By exploiting the concept of particle-hole duality, one can realize a point junction between integer and fractional quantum Hall phases, which constitutes a crucial building block towards possible applications of the quantum Hall effect. The fractional quantum Hall effect, thought to be special to two dimensions, may also flourish in three, providing a possible explanation for anomalies observed in certain 3D materials in high magnetic fields. Physics 2, 24 (2009) \u2013 Published March 30, 2009. The surprising prediction that currents can flow forever in small normal metal rings was confirmed almost twenty years ago.
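The "at least 10,000 times faster than light" figure in the entanglement article above is, at bottom, a separation divided by a timing uncertainty. A minimal sanity check in Python: the 18 km separation is from the article, but the nanosecond-scale timing resolution below is a hypothetical round number chosen for illustration, not a figure from the study.

```python
# Sanity check of the ">= 10,000 c" bound quoted in the article above.
# The 18 km separation is from the text; the timing resolution is an
# ASSUMED illustrative value, not a number reported by the experiment.
C = 299_792_458.0  # speed of light in vacuum, m/s

def min_influence_speed(separation_m: float, timing_resolution_s: float) -> float:
    """Lower bound on the speed of any signal that could have correlated
    the two detection events: it would have to cover the separation
    within the experiment's timing uncertainty."""
    return separation_m / timing_resolution_s

# 18 km apart, assumed 6 ns timing resolution.
bound = min_influence_speed(18e3, 6e-9)
print(f"bound = {bound:.2e} m/s = {bound / C:,.0f} c")
```

A tighter (more realistic) timing bound only raises the inferred minimum speed, which is the direction of the paper's claim.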
Highly precise new experiments find good agreement with theory that was not seen till now. H. A. Fertig, Physics 2, 15 (2009) \u2013 Published February 23, 2009. Measurements of the heat transport at the edges of two-dimensional electron systems appear to provide explanations about the quantum Hall state that have not been forthcoming via charge transport experiments. Crystalline structures have been observed in nanoislands of electrons floating above superfluid helium. The energy required to add or subtract an electron from these quantum-dot-like islands agrees well with theory. Physics 1, 36 (2008) \u2013 Published November 24, 2008. The esoteric concept of \u201caxions\u201d was born thirty years ago to describe the strong interaction between quarks. It appears that the same physics \u2014 though in a much different context \u2014 applies to an unusual class of insulators. Graphene has been idealized as a two-dimensional electron system in which the electrons behave like massless fermions, but how \u201cperfect\u201d is it? Scientists now show they can prepare free-standing sheets of graphene that have some of the highest electron mobilities of any inorganic semiconductor. A decade ago, experimentalists showed that persistent currents can flow in nonsuperconducting mesoscopic metal rings, but there was no theory that correctly explained the magnitude or direction of the unexpectedly large currents. Theorists are now proposing a simple idea that may at last explain these results. Electrons in graphene can be described by the relativistic Dirac equation for massless fermions and exhibit a host of unusual properties.
The surfaces of certain band insulators \u2014 called topological insulators \u2014 can be described in a similar way, leading to an exotic metallic surface on an otherwise \u201cordinary\u201d insulator.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.7392192700934153, "token_count": 511, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:09.570136"} {"text": "Nicholas Wolterstorff, Reason and Belief in God, in Faith and Rationality, 78-91. See also John Hick, Philosophy of Religion, 76. The possibility of public access to that experience. There is much philosophical debate concerning precisely how perception is to be analyzed. In particular, questions are raised concerning the status of the phenomenon. But there is general agreement that in perception, objects present themselves to us in ways that enable us to know them. Similarly, in religious experience God presents himself in ways that enable us to know him and his actions. For Alston there are, it seems, important differences between ordinary perceptual or sense experience and religious experience. Sense perception is a common experience, whereas religious experience is less common, perhaps even rare; sense perception yields a great deal of information about the world, whereas religious experience yields apparently little information about God; all humans have the capacity for sense perception, but many seem not to have the capacity for religious experience. These differences, however, do not show that religious experience has a structure unlike perception. For one thing, neither the frequency of an experience nor the amount of information it yields tells us anything about its structure. On the other hand, the limitation of the rationalist way is that the only truths capable of being strictly proved are analytic and ultimately tautological.
But we cannot by logic alone demonstrate any matter of fact and existence; these must be known through experience. For sure, if nothing were given through experience in its various modes, we should never have anything to reason about. This is as true in religion as in other fields. If God exists, God is not an idea but a reality outside us; in order to be known to men and women, God must therefore become manifest in some way within their experience. This conclusion is in line with the contemporary revolt against the rationalist assumptions which have dominated much of Western philosophy since the time of Descartes. Descartes held that we can properly be said to know only truths that are self-evident or that can be reached by logical inferences from self-evident premises. Therefore, those who stress faith and attack reason often place a great deal of emphasis on religious experience. However, religious experience is by no means a purely emotional \u201chappening\u201d; rather, it involves concepts and beliefs about the being that is experienced. If we tried to separate religious experiences from such concepts and beliefs - from the religious belief-system, as we shall call it - then there would be no way of saying who or what is", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.600490886848682, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 7, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:09.626699"} {"text": "(This is reposted from the dictionary help page.) Can\u2019t understand some parts of the dictionary? Fear no more! What do these things mean? n. \u2013 noun. Usually definitions starting with \u201ca,\u201d \u201can\u201d or \u201cthe,\u201d except for \u201cthe state of being.\u201d Used to name a person, place, thing, or idea. v. \u2013 verb. Usually definitions starting with \u201cto,\u201d except for \u201cto be.
\u201d A word or combination of words that expresses an action or says something about the existence or condition of a noun or pronoun. adj. \u2013 adjective. Usually definitions starting with \u201cto be\u201d or \u201cthe state of being.\u201d A word that modifies a noun or pronoun; modify means to limit, qualify, or make partial changes. adv. \u2013 adverb. Usually starts with \u201cthe state of\u201d with a verb and the description of that action. A word that modifies a verb, an adjective, or another adverb (when, where, how, how often, to what extent). Many adverbs end in -ly. contraction. \u2013 a shortened word. acronym. \u2013 the first letters of a bunch of words or a phrase. interjection. \u2013 a word that you would use to say something out loud, like \u201cdamn\u201d or \u201cfuck.\u201d A word or phrase used to express pain, surprise, anger, pleasure, or some other emotion; it stands apart from other words in sentences. klingon. \u2013 a word from the Klingon language. location. \u2013 a location. phrase. \u2013 a phrase that has more than 1 word in it. ebonics. \u2013 a word or series of words used in Ebonics. preposition. \u2013 a word like \u201cat.\u201d A word that shows a relation between the word following it and some other word or group of words in a sentence. conjunction. \u2013 a word that combines two parts of a sentence, such as \u201cbut,\u201d \u201cand,\u201d and \u201cyet.\u201d pronoun. \u2013 a word that replaces another noun, such as he, she, it; it stands for or takes the place of a noun and functions in most ways as a noun. ?. \u2013 we don\u2019t know what kind of word it is. ex. \u2013 example. ; } \u2013 separates one definition from another definition for the same word, similar to the numbering system used in actual dictionaries.
The reason we don\u2019t use numbers in our definitions is because we sometimes use numbers to define", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6026995554978457, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:09.871751"} {"text": "The universe may be a deterministic system, but that doesn't mean random chance doesn't exist, or that you can determine the exact path the future will take in advance. For example, Heisenberg's uncertainty principle shows that measuring an electron's momentum disturbs its position, and vice versa. That's because electrons are so small that the act of observing them causes a change in their position or momentum, depending on whether you're measuring their momentum or position. There's a well-known experiment where you shoot electrons at a double slit in a screen and then see what pattern they form; if you don't observe the electrons going through the slits, they generate a standard wave interference pattern (meaning the electrons are seemingly interfering with themselves), but if you do, the pattern changes to one generated by particles. Furthermore, if you delay the observation (i.e., by using a removable detector screen), you can cause a retroactive change from a wave pattern to a particle one, and if you make it possible to destroy the measurement of which slit the electron goes through, you can cause a second retroactive change, from a particle pattern to a wave one. Here's something interesting to think about. Let's say you have two people, essentially identical, except one believes that free will somehow exists, and the other believes that it doesn't. The two people will act differently based on whether they believe in free will or not.
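The wave-versus-particle contrast in the double-slit discussion above can be sketched numerically. This is a minimal far-field model with two equal point slits; all parameters (electron wavelength, slit separation, screen distance) are illustrative assumptions, not values from the text. With both paths coherent the amplitudes add and fringes appear; with which-path information the cross term vanishes and the screen pattern is flat.

```python
import math

# Illustrative parameters (NOT from the text): electron wavelength lam,
# slit separation d, slit-to-screen distance L.
lam, d, L = 50e-12, 1e-6, 0.5

def intensity(x: float, which_path_known: bool) -> float:
    """Relative intensity at screen position x for two equal point slits.
    Coherent paths: amplitudes add, |A1 + A2|^2 keeps the cross term.
    Which-path known: probabilities add, |A1|^2 + |A2|^2, no fringes."""
    phase = math.pi * d * x / (lam * L)  # half the path-difference phase
    if which_path_known:
        return 1.0                       # flat particle pattern
    return 2.0 * math.cos(phase) ** 2    # fringed wave pattern

# First dark fringe sits where the path difference is half a wavelength:
x_min = lam * L / (2 * d)
print(intensity(0.0, False), intensity(x_min, False), intensity(x_min, True))
```

At the fringe minimum the coherent intensity drops to (numerically) zero while the which-path version stays flat, which is the qualitative change the passage describes.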
Furthermore, if they later change their minds (in other words, make themselves believe the opposite of what they believed before), it will change their behavior. To make the point even clearer, if you had a third person who had never heard of free will, they'd act in a completely different way than the other two - but once they heard of it, depending on whether it was \"free will exists\" or \"free will doesn't exist\", it would instantly change their behavior from then on. In other words, yes, I believe it's possible to change your own behavior by making yourself change what you believe. I don't know whether that would actually be considered free will, but I do know that it's close enough to count for me.", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6227370196618509, "token_count": 463, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:10.149739"} {"text": "Key: \"s:\" = show synset (semantic) relations, \"w:\" = show word (lexical) relations. Display options for sense: (gloss) \"an example sentence\". - s: (n) swing (a state of steady vigorous action that is characteristic of an activity) \"the party went with a swing\"; \"it took time to get into the swing of things\" - s: (n) swing (mechanical device used as a plaything to support someone swinging back and forth) - s: (n) swing (a sweeping blow or stroke) \"he took a wild swing at my head\" - s: (n) swing, swinging, vacillation (changing location by moving back and forth) - s: (n) swing, swing music, jive (a style of jazz played by big bands popular in the 1930s; flowing rhythms but less complex than later styles of jazz) - s: (n) lilt, swing (a jaunty rhythm in music) - s: (n) golf stroke, golf shot, swing (the act of swinging a golf club at a golf ball and (usually) hitting it) - s: (n) baseball swing, swing, cut (
in baseball; a batter's attempt to hit a pitched ball) \"he took a vicious cut at the ball\" - s: (n) swing (a square dance figure; a pair of dancers join hands and dance around a point between them) - s: (v) swing (move in a curve or arc, usually with the intent of hitting) \"he swung his left fist\"; \"swing a bat\" - s: (v) swing, sway (move or walk in a swinging or swaying manner) \"he swung back\" - s: (v) swing (change direction with a swinging motion; turn) \"swing back\"; \"swing forward\" - s: (v) swing, swing over (influence decisively) \"this action swung many votes over to his side\" - s: (v) swing, sweep, swing out (make a big sweeping gesture or movement) - s: (v) dangle, swing, drop (hang freely) \"the ornaments dangled from the tree\"; \"the light dropped from the ceiling\" - s: (v) swing (hit or aim at with a sweeping arm movement) \"the soccer player began to swing at the referee\" - s: (v) swing (alternate dramatically between high", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6278387578537195, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:10.174890"} {"text": ", the search can go twice as deep with the same amount of computation. In mathematics, a square root of a number x is a number r such that r^2 = x, or in words, a number r whose square is x. The explanation of b * 1 * b * 1 * ... is that all the first player's moves must be studied to find the best one, but for each, only the best second player's move is needed to refute all but the first (and best) first player move \u2013 alpha-beta ensures no other second player moves need be considered. If b = 40 (as in chess), and the search depth is 12 ply, the ratio between optimal and pessimal sorting is a factor of nearly 40^6, or about 4 billion times.
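The 40^6 figure above can be checked directly: with perfect move ordering, alpha-beta visits roughly b^(d/2) leaves (the best case, simplified here for even depth) instead of the b^d that plain minimax visits, so the optimal-to-pessimal ratio at depth d is about b^(d/2). A quick check with the b and d values from the text:

```python
# Best-case vs worst-case leaf counts for alpha-beta pruning.
# b = branching factor, d = depth in ply; both values are from the text.
b, d = 40, 12

pessimal = b ** d          # plain minimax: every leaf is visited
optimal = b ** (d // 2)    # perfect ordering: ~ b^(d/2) leaves for even d
ratio = pessimal // optimal  # = b^(d/2) = 40^6

print(f"optimal={optimal:,} pessimal={pessimal:,} ratio={ratio:,}")
```

The ratio is 40^6 = 4,096,000,000, which is the "about 4 billion times" quoted above.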
Normally during alpha-beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening. (Iterative deepening depth-first search, IDDFS, is a state space search strategy in which a depth-limited search is run repeatedly with an increasing depth limit.) The algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. Initially alpha is negative infinity and beta is positive infinity. As the recursion progresses the \"window\" becomes smaller. When beta becomes less than alpha, it means that the current position cannot be the result of best play by both players and hence need not be explored
function alphabeta(node, depth, \u03b1, \u03b2) (* \u03b2 represents the previous player's best choice - doesn't want it if \u03b1 would worsen it *) if node is a terminal node or depth = 0, return the heuristic", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.6143952749525858, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:10.711702"} {"text": "further. function alphabeta(node, depth, \u03b1, \u03b2) (* \u03b2 represents the previous player's best choice - doesn't want it if \u03b1 would worsen it *) if node is a terminal node or depth = 0, return the heuristic value of node; foreach child of node: \u03b1 := max(\u03b1, -alphabeta(child, depth - 1, -\u03b2, -\u03b1)) (* use symmetry: -\u03b2 becomes the subsequently pruned \u03b1 *); if \u03b2 \u2264 \u03b1, break (* beta cut-off *); return \u03b1. Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search parts of the tree that are likely to force alpha-beta cutoffs early. (Pseudocode is a compact and informal high-level description of a computer programming algorithm that uses the structural conventions of some programming language; a heuristic (hyu-\u02c8ris-tik) is a method to help solve a problem, commonly an informal method.) For example, in chess, moves that take pieces may be examined before moves that do not, or moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta-cutoff at the same level in the tree search is always examined first.
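The negamax pseudocode above can be made runnable. A minimal Python sketch; the nested-list tree encoding and the example values are my own illustration, not from the text. Leaves hold scores for the player to move at that leaf:

```python
import math

def alphabeta(node, depth, alpha, beta):
    """Negamax alpha-beta, mirroring the pseudocode above: a child's value
    is negated, and the (alpha, beta) window is negated and swapped, so
    the same code serves both players."""
    if not isinstance(node, list):
        return node  # terminal node: its score for the side to move here
    if depth == 0:
        raise ValueError("depth cut-off on an internal node needs a heuristic")
    for child in node:
        alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha))
        if beta <= alpha:  # beta cut-off: opponent will never allow this line
            break
    return alpha

# Tiny 2-ply example: each inner list is one root move; its leaves are
# scores for the opponent, who moves next.
tree = [[-3, -5], [-6, -9], [-1, -2]]
best = alphabeta(tree, 2, -math.inf, math.inf)
print(best)  # -2: the third root move is best for the side to move
```

Note that the second subtree is cut off after its first leaf (it can only be worse than the first move), and reordering the root moves so the strongest came first would produce even more cut-offs, which is exactly the point of the ordering heuristics the text describes.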
(In competitive two-player games, the killer heuristic is a technique for improving the efficiency of alpha-beta pruning, which in turn improves the efficiency of the minimax algorithm.) This idea can be generalized into a set of refutation tables. Alpha-beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as aspiration search. In the extreme case, the search is performed with alpha and beta equal; a technique known as zero-window search, null-window search, or scout search. This is particularly useful for win/loss searches near the end of a game, where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failed high (high edge of window was too low) or low (lower edge of window was too high). This gives information about what window values", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.6000557886705716, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:10.712645"} {"text": "Behind the buzz and beyond the hype: our Nanowerk-exclusive feature articles. Posted: Jun 28th, 2010. Novel maskless e-beam technique a promising tool for engineering metallic nanostructures. (Nanowerk Spotlight) The manufacture of certain types of nanostructures \u2013 nanotubes, graphene, nanoparticles, etc. \u2013 has already entered industrial-scale mass production. However, the controlled fabrication of nanostructures with arbitrary shape and defined chemical composition is still a major challenge in nanotechnology applications.
It appears that electron beams from electron microscopes (EMs) \u2013 nowadays routinely focused down to the nanometer regime \u2013 are ideal candidates for versatile tools for nanotechnology (see our recent Nanowerk Spotlight: \"Direct-write process brings nanotechnology fabrication closer to mass production\"). However, their usage is mostly restricted by the conditions in the corresponding electron microscopes: since most EMs are housed in high vacuum chambers, the unintended electron-beam-induced deposition of residual gases is a problem, as is the maintenance of well-defined sample conditions. Researchers in Germany have now presented a novel way to use a highly focused electron beam to lithographically fabricate clean iron nanostructures. This new technique expands the application field for focused electron beams in nanotechnology. \"We have developed a novel two-step process to locally generate iron nanostructures on a commercial 300 nm silicon oxide substrate at room temperature,\" Hubertus Marbach, a researcher at the Universit\u00e4t Erlangen-N\u00fcrnberg, tells Nanowerk. \"In the first step, the surface is locally activated by a 3 nm wide electron beam. The second step comprises the development of the activated structures by dosing an organometallic precursor, which then decomposes and grows autocatalytically to form pure iron nanocrystals until the precursor supply is stopped.\" Using a more vivid picture, Marbach says that one might think of the whole process as writing with invisible ink in the irradiation step, which is then made visible by the development step. \"Besides the fantasy-stimulating application to write secret nanomessages in ultrahigh vacuum, the described effect might be the starting point for a whole new way to generate nanostructures.\" Electrons as invisible ink.
A SiOx surface can be locally activated with a focused electron beam (1) such that subsequently dosed [Fe(CO)5] decomposes (2) and auto", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6192581296468528, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:11.665266"} {"text": "to generate nanostructures.\" Electrons as invisible ink. A SiOx surface can be locally activated with a focused electron beam (1) such that subsequently dosed [Fe(CO)5] decomposes (2) and autocatalytically grows to pure Fe nanocrystals (3) at predefined positions until the precursor supply is stopped. A 3D representation of the SEM data is in the background. (Reprinted with permission from Wiley-VCH Verlag.) The major new aspect of this work is the local chemical activation, i.e. catalytic activation of an oxidic surface. The researchers use this process to locally dissociate adsorbed precursor molecules and then generate nanostructures with an electron beam (a process that can be categorized as focused electron beam induced processing, or FEBIP, where the injection or removal of electrons can be used to trigger chemical processes, such as bond formation or dissociation). The starting point of the present investigations was the so-called electron beam induced deposition, or EBID, technique, a special case of FEBIP where already adsorbed precursor molecules are locally dissociated with a focused electron beam, leaving a deposit of the nonvolatile dissociation products. To minimize the complications of unintended EBID of residual gases, the team followed a 'surface science approach' where they worked under ultrahigh vacuum (UHV) conditions. This resulted in deposits with high purity.
The cleanliness of the whole process, namely UHV conditions plus a well-defined surface, was identified as the key factor for the purity of the metallic nanostructures. In a previous paper, Marbach and his team have described this technique (\"Electron-beam-induced deposition in ultrahigh vacuum: lithographic fabrication of clean iron nanostructures\"). Marbach explains that, in conventional applications, the high-energy primary electrons of the EM beam are scattered in the sample. Eventually, scattered electrons exit the surface again close to the impact point of the electron beam. \"In EBID, this effectively leads to a widening of the deposit compared to the size of the beam,\" he says. \"This (proximity) effect increases with an increase of the local electron dose. Since our fabrication technique relies on catalytic and autocatalytic effects, the electron dose needed as a 'seed' for the growth of the iron nanostructures can be minimized, thus reducing the mentioned proximity effect. In other words, our", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.618654891231433, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:11.666331"} {"text": "our fabrication technique relies on catalytic and autocatalytic effects, the electron dose needed as a 'seed' for the growth of the iron nanostructures can be minimized, thus reducing the mentioned proximity effect. In other words, our approach might be suitable to produce smaller structures.\" EBID allows almost every combination of deposit material and substrate to be targeted, since there is a large variety of precursor molecules and there are nearly no restrictions in regard to the substrate.
In this specific work, the researchers' aim was to generate clean iron nanostructures with potential applications in the field of data storage, sensor or information processing devices, or as seeds for the localized growth of other nanostructures like carbon nanotubes or silicon wires. With their novel FEBIP process they are now moving on to explore other oxide materials and precursor molecules. \"We propose our technique to pre-structure the surface by a local chemical modification as a general route to fabricate nanostructures, e.g. to locally anchor or activate functional molecules,\" says Marbach. One challenge of the novel process is the rather low writing speed. Marbach points out, though, that there are considerable efforts underway to develop multibeam instruments which would boost the throughput of electron-beam-based techniques, e.g. at the TU Delft (MAPPER lithography) and the European CHARPAN project located in Vienna.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6005147571431126, "token_count": 288, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:11.666875"} {"text": "A plan is typically any diagram or list of steps with timing and resources, used to achieve an objective. See also: strategy. It is commonly understood as a temporal set of intended actions through which one expects to achieve a goal. For spatial or planar topologic or topographic sets, see map. Plans can be formal or informal: - Structured and formal plans, used by multiple people, are more likely to occur in projects, diplomacy, careers, economic development, military campaigns, combat, or in the conduct of other business. In most cases, the absence of a well-laid plan can have adverse effects: for example, a non-robust project plan can cost the organization time and money. - Informal or ad-hoc plans are created by individuals in all of their pursuits.
The most popular ways to describe plans are by their breadth, time frame, and specificity; however, these planning classifications are not independent of one another. For instance, there is a close relationship between the short- and long-term categories and the strategic and operational categories. It is common for less formal plans to be created as abstract ideas and to remain in that form as they are maintained and put to use. More formal plans, as used for business and military purposes, while initially created with and as an abstract thought, are likely to be written down, drawn up, or otherwise stored in a form that is accessible to multiple people across time and space. This allows more reliable collaboration in the execution of the plan. Other articles related to \"formal\": ... formal methods, mathematically-based techniques for the specification, development and verification of software and hardware systems ... formal theory can refer to another name for a theory which is expressed in formal language ... by symbols and its operators; formal theory from political science, the theoretical modeling of social systems based on game theory, dynamical systems theory, among ... students are not required to wear gowns at formal halls, with the exception of at certain college feasts ... in special formal meals such as matriculation dinner or scholars' feast the master usually raises a toast, first to the Queen and then to \u201cSir Winston\" ... in other formal halls this is usually made by a senior student once the fellows have left ... individuals are deemed undesirable in urban space because they do not fit into social norms, which causes unease for many residents of certain neighborhoods ...
This fear has been deepened by the broken windows theory and", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6157344006086927, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:11.879691"} {"text": "Science of Breath, by Yogi Ramacharaka (pseud. William Atkinson), at sacred-texts.com. The Science of Breath, like many other teachings, has its esoteric or inner phase, as well as its exoteric or external. The physiological phase may be termed the outer or exoteric side of the subject, and the phase which we will now consider may be termed its esoteric or inner side. Occultists, in all ages and lands, have always taught, usually secretly to a few followers, that there was to be found in the air a substance or principle from which all activity, vitality and life was derived. They differed in their terms and names for this force, as well as in the details of the theory, but the main principle is to be found in all occult teachings and philosophies, and has for centuries formed a portion of the teachings of the Oriental yogis. In order to avoid misconceptions arising from the various theories regarding this great principle, which theories are usually attached to some name given the principle, we, in this work, will speak of the principle as \"prana,\" this word being the Sanskrit term meaning \"absolute energy.\" Many occult authorities teach that the principle which the Hindus term \"prana\" is the universal principle of energy or force, and that all energy or force is derived from that principle, or, rather, is a particular form of manifestation of that principle. These theories do not concern us in the consideration of the subject matter of this work, and we will therefore confine ourselves to an understanding of prana as the principle of energy exhibited in all living things, which distinguishes them from lifeless things.
we may consider it as the active principle of lifevital force, if you please. it is found in all forms of life, from the amoeba to manfrom the most elementary form of plant life to the highest form of animal life. prana is all pervading. it is found in all things having life, and as the occult philosophy teaches that life is in all thingsin every atomthe apparent lifelessness of some things being only a lesser degree of manifestation, we may understand their teachings that prana is everywhere, in everything. prana must not be confounded with the egothat bit of divine spirit in every soul, around which clusters matter and energy. prana is merely a form of energy used by the ego in its material manifestation. when the ego leaves the body, the pr", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6161075552760249, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:11.945457"} {"text": "june 8, 2007 concerned that current methods for making computer chips might become stymied as components keep shrinking, many engineers are looking for circuit building blocks with improved electrical properties. among the most promising are stringy carbon nanotubes that capably form transistors to switch current on and off. but the nanotubes tend to grow with unpredictable kinks and bends that could cause bad wiring connections. this week at the design automation conference in san diego, a group of stanford engineers will present a way to design circuits that should work even when many of the nanotubes in them are twisted and misaligned. \" the question is what ' s next in chip technologies, \" says subhasish mitra, an assistant professor of electrical engineering and computer science. \" that ' s why nanotechnology is important. but you want to make sure that you are not in a lab making something that chip designers cannot actually use. 
\" to prevent that, he and electrical engineering professor h. - s. philip wong, working with chemistry professor chongwu zhou at the university of southern california, have been looking closely at how nanotubes end up resting on the surfaces of experimental chips. \" it ' s not as bad as a plate of noodles, \" mitra says. \" you want to create transistors out of these things, and hook up these transistors and make them turn on and off independently. but if twisted carbon nanotubes, for example, short out the circuit, you lose the opportunity to do that. \" making messy workable what mitra, wong and graduate students nishant patil and jie deng have realized is that if nanotubes are always going to be somewhat askew, engineers will have to design circuits that can work regardless of where and how the tubes lie. they started by coming up with a single circuit element, a nand gate, that was immune from the vagaries of its underlying nanotube layout. from that single element that could function despite misalignments, they abstracted and generalized the math to come up with an algorithm that can guarantee a working design for any circuit element, mitra says, even when a large number of nanotubes are misaligned. using simulations developed by wong and deng, the group has been able to show that not only do the algorithm ' s designs work, but they also don ' t appear to exact a significant financial, speed or energy price compared to traditional designs, mitra says. 
the key to determining whether a circuit", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6113077401725702, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.035815"} {"text": "group has been able to show that not only do the algorithm ' s designs work, but they also don ' t appear to exact a significant financial, speed or energy price compared to traditional designs, mitra says. the key to determining whether a circuit element is immune to nanotube misalignment is breaking up each circuit element into a fine grid that can be analyzed mathematically. doing this in the abstract with models allows engineers to determine which grid squares nanotubes must pass through and which they shouldn ' t traverse to make a design work correctly. to eliminate unwanted connections, nanotubes in so - called \" illegal \" regions can then be either chemically etched away or rendered electrically irrelevant in other ways. the stanford algorithm takes this all several steps further, applying sophisticated mathematics to automatically determine where the legal and illegal regions should be in the design of a circuit element with a particular function. \" you not only determine whether something is immune or not, but can automatically generate circuit designs that are guaranteed to be immune, \" mitra says. while the algorithm can overcome all the bad connections that errant nanotubes make, it cannot guarantee that a nanotube will always make a desired connection. nanotubes also have other problems that remain unsolved, mitra points out. some, for example, always conduct electricity instead of switching on and off like a semiconductor should. the group ' s next step is to move beyond simulation to build and test real circuit elements according to the algorithm ' s output. 
while more work is necessary to deliver the promise of nanotube technology, solving the misalignment problem would be a significant step. \" carbon nanotube transistors show great promise as extensions to silicon transistors due to their fast speed, small size and lower energy consumption, \" patil says. \" using this technique, we can make larger and more complex circuit blocks with them. \" wong speculates that the advance could eventually spill over from chips to assist engineers facing analogous challenges. \" a similar methodology can be applied to many emerging technologies, \" he says. \" the concept of not having to define everything with high precision is germane to engineering robust systems. \" the microelectronics advanced research corporation supported the research. other social bookmarking and sharing tools : the above story is reprinted from materials provided by stanford university. the original article was written by david orenstein, communications and public relations manager at the stanford school of engineering.. note : materials may be edited for content and length. for", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.6063715501119826, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.037150"} {"text": "june 10, 2008 researchers in sweden and japan report development of a new type of paper that resists breaking when pulled almost as well as cast iron. the new material, called \" cellulose nanopaper, \" is made of sub - microscopic particles of cellulose and may open the way for expanded use of paper as a construction material and in other applications, they suggest. in the new study, lars a. berglund and colleagues note that cellulose - - a tough, widely available substance obtained from plants - - has potential as a strong, lightweight ingredient in composites and other materials in a wide range of products. 
although cellulose - based composites have high strength, existing materials are brittle and snap easily when pulled. the study described a solution to this problem. it involves exposing wood pulp to certain chemicals to produce cellulose nanopaper. their study found that its tensile strength - - a material ' s ability to resist pull before snapping - - exceeded that of cast iron. they also were able to adjust the paper ' s strength by changing its internal structure. other social bookmarking and sharing tools : note : materials may be edited for content and length. for further information, please contact the source cited above. - henriksson et al. cellulose nanopaper structures of high toughness. biomacromolecules, 2008 ; 9 ( 6 ) : 1579 doi : 10. 1021 / bm800038n note : if no author is given, the source is cited instead.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6406649816852075, "token_count": 310, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.038673"} {"text": "s ] this early part of this time frame might be characterized as the pre - pubescence of laminated decorative surface materials. it was a time of great growth and opportunity, as well as some awkwardness and misunderstanding. new techniques were developed to expand the usage of decorative surface materials. some of them improved the performance of plastic laminates, such as the 20 - fold increase of durability that evolved with the introduction of laminate flooring products. a classic example is vertical surfaces. hpl is an extremely high performance product. it is perfect for usage in horizontal work surface and high - traffic areas. \u201c value - engineered \u201d new products were designed and introduced to meet the needs of similar applications with lower manufacturing costs. 
a good example of this is thermally fused melamine ( tfm ), which is essentially the top layers of hpl ( decor paper impregnated with melamine resins ), thermally fused to particleboard or mdf forming a stand - alone decorative panel. as these derivative products emerged they were not always specified based on the value of their performance, but often purely on cost. this had a negative effect on the perception of laminated decorative surfaces and the term \u201c laminate \u201d took on the connotation of a low quality imitation product, an unfortunate misconception. in a sense, engineered products were victims of their own genius, particularly considering how quickly technology advanced in the information age. but all was not lost, and savvy professionals knew that to maximize the use of any material it was important to understand its strengths and limitations. this explains the resurgence of design interest in the potential of laminated decorative surface materials. in addition to specialized performance, decorative surfaces were undergoing a paradigm shift in visual realism during this period. computers and digital scanning technology now allow decor designers to replicate any material with unprecedented fidelity and dimensionality. imaging software has made it possible to bring any design that can be imagined into being. laser engraving of rotogravure cylinders enables sharper contrast and more subtle tonal gradients than was previously possible. it has also expedited the process of decor development and sampling. new digital ink - jet printing technologies are driving decor development to move beyond commodity designs and into experimental boutique fashions and customized surfaces such as logos and murals. advanced surface treatments and overlay technologies also play important roles in the development of decorative surface materials, enhancing both the visual and tactile qualities of the products. 
one technique uses engineered press plates to create embossed texture \u201c in register \u201d with the", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6333939376763207, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.132666"} {"text": "| life depends on an essentially continuous exchange of mass and energy between living organisms and their environment. human impact on this vital exchange has occurred on a global or macroclimate scale. understanding the physical principles involved in heat transfer and absorption in the atmosphere is critical to understanding how these physical factors affect living organisms. the specific objectives of this section are to explain the properties of heat transfer, and to describe laboratory activities that can be used at a variety of academic levels with only slight described below are three series of experiments performed in the laboratory to address questions that emphasize the underlying principles of heat transfer. these hands - on experiments focused on principles that relate to conduction and convection. the object was to identify the method of heat transfer through solids, liquids, gases, and between boundaries. understanding these concepts gave us a better understanding of how heat is transferred between our environment and living organisms. these experiments were used as an integral part of the workshop, which consisted of reflections on redesigning or modifying lab exercises to fit personal needs of workshop teachers. these exercises could be adapted for middle school, high school, and college level courses. the methods utilized for the three experiments involved increasing or decreasing the temperature of a solid or liquid, and where applicable, observing the motion of a dye caused by the changes in temperature and density of the medium. 
| modes of heat transfer : - conduction : heat transfer resulting from direct contact between substances of different temperatures ; heat is transferred from the high - temperature substance to the low by direct molecular - convection : heat transport by a moving fluid ( gas of liquid ). the heat is first transferred to the fluid by conduction, but the fluid motion carries the heat away. - radiative exchange : heat transfer via electromagnetic waves, the amount of radiant energy emitted, transmitted, or ( figure from microsoft encarta ) return to top laboratory apparatus for labs 1 - 3 | lab 1 : heating from below : convection in this experiment, water was heated from below to produce convection. although the atmosphere is composed of air, this experiment was relevant to atmospheric motion as well. the lower atmosphere ( troposphere ) is mostly heated from below because the oceans and continents absorb radiation from the sun and then transfer some of the resulting heat energy to the lower atmosphere. in lab 1, a beaker was heated ( see figure below ). thermometers were placed in 1 / 2 cm below water surface and 1 / 2 cm above the bottom of the beaker. the temperature was recorded at 30 second intervals. drops of dye were added to the bottom", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6217304544480032, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.433963"} {"text": "( see figure below ). thermometers were placed in 1 / 2 cm below water surface and 1 / 2 cm above the bottom of the beaker. the temperature was recorded at 30 second intervals. drops of dye were added to the bottom of the beaker between intervals. after three minutes the beaker was removed from the hot plate and temperature reading recorded for another five minutes. 
convection was visualized by observing the motion of the the motion of the dye was circular from bottom to top and returning to the bottom of the beaker. the energy from heating created a less dense liquid at the bottom, thus causing the upward motion of the dye. upon reaching the surface, the dye was now in the denser medium and therefore returned to the bottom. this motion is an example of convection. this phenomenon is evident in the motion of wind. the difference in densities and kinetic movement of the water molecules driven by temperature change resulted in the movement of air molecules. this lab can be used at lower levels to demonstrate simple properties of heat transfer and convection. at higher levels, this lab illustrates these basic principles, and could be extended to address more complex applications related to convection such as the coriolis 1. explain the process by which the water is heated. 2. describe the motion of the water as made visible by the 3. why does convection occur? 4. did convection cease? when? why? environmental applications of principles of radiative exchange, conduction and convection ( figure from e. zerba, princeton university ; email @ example. com ) return to top | lab 2 : conduction comparison of this experiment with the first illustrated the difference between the rate of heat transfer by conduction and that of convection. it also illustrated the difference in heat capacities between water and the solid materials of the lab 2 was configured similarly to lab 1, but looked at the effect of heating and cooling temperature difference using sand of equal weight as water used in experiment 1. no dye was used in this experiment, as convection was not a the temperature difference between the top and bottom layers of sand indicated that sand heats and cools at a faster rate compared to water. when the beaker was removed from the heat, the temperature continued to increase via conduction from the bottom of the beaker. 
this lab exercise is useful for demonstrating the concept of conduction to lower level students. upper level students can use this lab to make the connections between conduction and heat capacity of various substances related to heat transfer that occurs between the earth ' s surfaces and the surface", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6174699933130889, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.436447"} {"text": "exercise is useful for demonstrating the concept of conduction to lower level students. upper level students can use this lab to make the connections between conduction and heat capacity of various substances related to heat transfer that occurs between the earth ' s surfaces and the surface of living organisms. 1. is there any convection in the sand? explain. 2. why did the temperature recorded by the lower thermometer continue to rise dramatically after the heating ceased? 3. on the basis of heat capacity, explain why the temperature changes for the sand and water were different. 4. using what you have observed in the two experiments, predict whether a cold front will lower temperatures more at inland locations or on the coast. explain your answer. return to top | lab 3 : cooling from above in lakes and oceans, convection is generally the result of cooling from above rather than heating from below. this was demonstrated by adding ice to the water. using an experimental setup that allowed measurement of temperature at the top and the bottom of a beaker of water, ice was added to the top of the beaker. this experiment illustrated the concept that at 4 \u00b0c, water has higher density and sinks. convection was visualized by the movement of dye added to the bottom of the beaker which was displaced by the cooler more dense water. 
this lab demonstrates several physical principles associated with heat transfer, including density, kinetic molecular theory, and convection. on a larger scale, this laboratory exercise demonstrates the process by which seasonal turnovers occur in ponds and lakes. at lower levels, teachers may choose to discuss physical principles of heat transfer only, while at upper levels, teachers may choose to integrate this small - scale investigation with the study of climate processes and lake nutrient stratification and mixing. 1. why does ice float? 2. is there any evidence of convection? why does or does it not occur? 3. draw a diagram to explain how seasonal turnover occurs in a return to top to the passerine birds home", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6309804406509144, "token_count": 395, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.438784"} {"text": "applied mathematics department secretary : tel : 01 69 33 46 01. fax : 01 69 33 46 46. scientific computing is the art of the engineer devoted to producing numerical simulations based on a scientific analysis and with computers. most of problems that can be formalised with mathematical equations lead to problems too complicated to be solved with elementary methods or with methods of formal calculus. the objective of scientific computing is to propose approximate numerical solutions for problems that can be modelised with a mathematical equation. the development of scientific computing is related to the increasing of computer power. it is an applied science in continuous evolution. the industries that use and develop scientific computing are first the main partners of state technical administrations in charge of the conception and development of complex systems : space and aeronautics, nuclear, automotive industry, petroleum industry, civil engineering. 
reduced to amount development and the certification of complex systems only few years ago, the numerical simulation allows the reduction of important development times on conception cycles and the production of more sophisticated products. the option \" scientific computing \" is devoted to students needing training in scientific computing, either for the analysis of an industrial problem, or the initiation to scientific research, whatever can be the future choice in terms of career orientation. for those who wish to enter the master program \" mathematical modeling \" in applied mathematics of the ecole polytechnique ( co - organised with paris 6 university ), the training period can be an important first step. examples of subjects studied in recent years - adaptive and multi - scales methods. assessment and design of optical fiber systems. inverse problem in electromagnetism. requirements : some knowledge of numerical analysis and / or optimization. evaluation mechanism : written report and oral defense last modification : monday 8 april 2013", "subdomain_id": "subdomain_quantum_simulation", "similarity_score": 0.626594362750471, "token_count": 343, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.694679"} {"text": "process sheet and plate materials, accelerating the production and availability of low - cost magnesium a lightweight metal. commercial use of magnesium has been limited because of the high cost associated with its multistep production process. this technology is likely to reduce processing steps, thereby reducing the cost of finished magnesium components and allowing for the replacement of aluminum with magnesium in many commercial goods. the widespread use of magnesium instead of aluminum in cars would reduce vehicle weight and lead to improvements in transportation by improving fuel economy. 
low frequency rf plasma source ( lfrf - 501 ), co - developed with structured materials industries, inc. ( oak ridge national laboratory ) : lfrf - 501 is a low - cost plasma generator for research, development and production of nanometer scale materials at lower temperatures, faster rates and with enhanced properties. these materials are enabling new developments in many technologies, including microelectronics, renewable energy, sensors and leds. advanced manufacturing and geothermal : nanoshield coatings ( oak ridge national laboratory ) : nanoshield is a protective coating that can extend the life of costly cutting and manufacturing tools by more than 20 %, potentially saving millions of dollars over the course of a project. it is created by laser fusing a unique iron - based powder to any type of steel, which forms a strong metallurgical bond that provides wear resistance between two and 10 times greater than conventional coatings. nanoshield was designed to protect high - wear tools used for tunnel boring and construction, but its potential for navy applications and geothermal drilling tools also is being explored. desiccant - enhanced evaporative air - conditioning ( national renewable energy laboratory ) : developed with ail research and synapse product development llc : devap systems cool commercial buildings at a small fraction of the energy use of a traditional cooler, provides superior comfort in any climate, releases far less carbon dioxide, and could cut costly peak electricity demand by 80 %. the sandia cooler ( sandia national laboratories ) : also known as the \" air bearing heat exchanger, \" this technology will significantly reduce the energy needed to cool the processor chips in data centers and large - scale computing environments. the sandia cooler also offers benefits in other applications where thermal management and energy efficiency are important, particularly heating, ventilation and air - conditioning ( hvac ). 
hydrogen and fuel platinum monolayer electrocatalysts for fuel cell cathodes ( brookhaven national laboratory ) : platinum is the most efficient electrocatalyst for fuel cells, but platinum - based catalysts", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6384686075389439, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.842235"} {"text": "air - conditioning ( hvac ). hydrogen and fuel platinum monolayer electrocatalysts for fuel cell cathodes ( brookhaven national laboratory ) : platinum is the most efficient electrocatalyst for fuel cells, but platinum - based catalysts are expensive, unstable, and have low durability. the new electrocatalysts have high activity, stability, and durability, while containing only about one - tenth the platinum of conventional catalysts used in fuel cells, significantly reducing overall costs. sj3 solar cell ( national renewable energy laboratory ) : co - developed with solar junction, the cell achieves a world - record conversion efficiency of 43. 5 % with potential to reach 50 %. like a three - blade safety razor that uses all its blades for a closer shave, the three - layered sj3 cell captures different light frequencies, ensuring the best conversion of photons to electrons. the 43. 5 % efficiency occurs under lens - focused light having 418 times the intensity of the sun. microsystems enabled photovoltaics ( sandia national laboratories ) : tiny, glitter - sized pv cells are created using microdesign and microfabrication techniques, released into a solution and \" printed \" onto a low - cost substrate. the technology has potential applications in buildings, houses, clothing, portable electronics, vehicles and other contoured structures. 
high - energy concentration - gradient cathode material for plug - in hybrids and all - electric vehicles ( argonne national laboratory ) : argonne and several partners have developed a novel high - energy and high - power cathode material for use in lithium ion ( li - ion ) batteries especially suited for plug - in hybrids and all - electric vehicles. it provides much higher energy and longer life than any other li - ion cathode material, and as such is also ideal for batteries in hybrid vehicles and a wide range of consumer electronics applications. graphene nanostructures for lithium batteries, co - developed with vorbeck materials corp. of jessup md. and princeton university ( pacific northwest national laboratory ) : small quantities of graphene \u2014 ultra - thin sheets of carbon atoms \u2014 can dramatically improve the performance and power of lithium - ion batteries. graphene nanostructures could lead to the development of batteries that last longer and recharge quickly, drastically reducing the time it takes to charge a smartphone to as little as ten minutes and charging an electric vehicle in just a few hours. the energy department ' s office of energy efficiency and", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6451434467670039, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:12.843548"} {"text": "in once for each of the two charges ) is the charge of the electron. notice the strength of the force drops with the distance between the charges in a way identical to gravity. also, if we were talking about an electron and an anti - electron ( which has the opposite charge ), then there would be a minus sign indicating the force between opposite charges is attractive. we can compare the strength of the gravitational force to the electromagnetic force on two electrons by taking the ratio between the two forces. 
the distance - squared cancels out and we are left with : f ( gravity ) / f ( em ) = gmm / cee. i intentionally dropped the minus sign ; i will simply remember that the gravitional force between the electrons is attractive and the electromagnetic force between the two electrons is replusive. anyway, when i plug in the values for g, m, c, and e, the ratio is 2. 4x10 ^ ( - 43 ). in words that is pronounced two - point - four times ten to the minus forty - three. that is a very small number. in other words, the gravitational force between two electrons is feeble compared to the electromagnetic force. the reason that you feel the force of gravity, even though it is so weak, is that every atom in the earth is attracting every one of your atoms and there are a lot of atoms in both you and the earth. the reason you aren ' t buffeted around by electromagnetic forces is that you have almost the same number of positive charges as negative ones, so you are ( essentially ) electrically neutral. the weak force is misnamed. it ' s thought to be just as strong as the em force but, unlike the em force, it ' s a short - ranged force. in fact, the range is only about 1 / 100 the size of an atomic nucleus. the weak force is outside the realm of our everyday experience. we study it at fermilab by using the accelerator to produce the particles which transmit the force. these are real particles called the w - boson and the z - boson. because they are very massive, we need a high - energy accelerator to produce them. the large mass of the w - boson and the z - boson is also the reason the force has a short range. incidentally, the particle which carries the em force is called the photon ( yes, light ). because photons are massless, the em force has a long range as i described above. 
the weak force and", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6278690373177416, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:13.049973"} {"text": "the force has a short range. incidentally, the particle which carries the em force is called the photon ( yes, light ). because photons are massless, the em force has a long range as i described above. the weak force and the em force have been found to be linked at high - energy or, equivalently, short range. they both can be described by one set of equations which we call the \" electro - weak \" theory. this was discovered in 1967 - 1971 by steven weinberg, sheldon glashow, and abdus salam. they got the nobel prize in physics for unifying those forces. finally i am ready to talk about the strong force. this is way out of the experience we get in everyday life ( not that it doesn ' t have everyday life consequences ), so i will be a little more long - winded in describing it. remember that a proton or neutron is composed of three quarks? these quarks have strong charge and are bound together by the strong force. unlike the case of the em force, where there is one electric charge and one anti - charge ( plus and minus charges ) there are three strong force charges and three anti - charges. we call the strong force charges \" red \", \" blue \", and \" yellow \" and the anti - charges are called \" anti - red \" and so forth. the particles which transmit the force are called gluons. gluons are massless, like the photon. but unlike the photon, which is electrically neutral, the gluons carry strong charge and a different strong anti - charge. a gluon could be \" red - anti - blue \", for example, and there are eight kinds of gluons. we call the three charges \" colors \" even though they have nothing to do with how we see. 
Because the gluon is massless, at first you might think the range of the strong force is infinite, like the EM force. But if you study the behavior of the strong force, you find that the three quarks in a proton or neutron behave almost as if they were bouncing around freely in a relaxed, elastic spherical container. None of the quarks can escape the container, because when a quark reaches the boundary of the proton or neutron, the force begins to act and gets stronger and stronger the further away that quark gets from the others. That is very different from the other forces, which get weaker at longer distances, and it occurs because the gluons have the color and anti-color charge. The strong force also acts between protons and neutrons in an atomic nucleus, much in the same way that simple chemicals are held together by the electric force. A nucleus such as helium, which has two (positively EM-charged) protons, is stable because the strong force overcomes the electromagnetic forces. The strong force binds the two protons with about 25-35 MeV of energy. The electromagnetic forces try to push the protons apart. The net result is that approximately 1 million electron volts of energy are needed to separate the two protons. In contrast, an electron is bound to a proton in a hydrogen atom by only a few electron volts.
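The MeV-versus-eV contrast above comes largely from the difference in distance scales. A rough sketch: evaluating the same Coulomb energy formula at a nuclear separation and at an atomic one gives energies roughly a million times apart. The femtometre separation and Bohr radius used here are standard reference figures, not from the text.

```python
# Coulomb potential energy U = C*e^2 / r at two length scales,
# converted to electron volts. Illustrates why nuclear-scale energies
# come out in MeV while atomic binding energies are a few eV.

C = 8.988e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19        # elementary charge, C

def coulomb_energy_eV(r):
    return C * e**2 / r / e   # joules -> eV

print(coulomb_energy_eV(2.0e-15))   # at a ~2 fm nuclear separation (~10^5 eV)
print(coulomb_energy_eV(5.29e-11))  # at the Bohr radius (~tens of eV)
```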
By now you know enough to consider the size of the nucleus in comparison to the size of an atom to judge if this is truly a fair comparison! The strong force is, indeed, strong. We think that if we could study the electroweak and strong forces at high enough energy, we would find out they were linked together somehow, like electricity and magnetism are to form EM, and like EM and the weak force are to form the electroweak force. Such a theory would be called a grand unified theory. And we also think that it may be possible to include gravity with the other three. Such a theory would be called a super-grand-unified theory, and there is a candidate for that called "superstrings". So you asked a simple question: "How strong is the strong force?" The answer is that it depends on the range. At short distances it is weak, and at long distances it is strong. That effect is completely different from the other three forces, and it arises because the force's transmitters, called gluons, are massless and carry a strong charge and a different strong anti-charge. If you want to learn more about particle physics and the work we do at Fermilab, the book "The God Particle" by Leon Lederman and Dick Teresi gives a very good and readable explanation. | Last modified 1/11/1999 |

The Higgs would plug a gaping hole in the Standard Model of particle physics, a conceptual framework for understanding the nuts and bolts of the cosmos. One idea is that the Higgs was born when the new universe cooled after the Big Bang some 14 billion years ago. It is believed to act like a fork dipped in honey and held up in dusty air.
Most of the dust particles interact with the honey, acquiring some of its mass to varying degrees, but a few slip through and do not acquire any. With mass comes gravity, and its pulling power brings particles together. Supersymmetry, meanwhile, is the notion that there are novel particles which are the opposite numbers of each of the known particle actors in the Standard Model. This may, in turn, explain the existence of dark matter, a hypothetical construct that can only be perceived indirectly via its gravitational pull, yet is thought to make up around 25 percent of the universe. At a cost of 6.03 billion Swiss francs (4.9 billion euros, or $6.56 billion), the LHC was constructed in a 26.6-kilometre (16.5-mile) circular tunnel originally occupied by its predecessor, the Large Electron-Positron collider (LEP). That had run in cycles of about seven months followed by a five-month shutdown, but the LHC, opened in 2008, has been pushed well beyond. "We've had full operations for three years, 2010, 2011 and 2012," said Bordry. "Initially we thought we'd have the long shutdown in 2012, but in 2011, with some good results and with the prospect of discovering the boson, we pushed the long shutdown back by a year. But we said that in 2013 we must do it." Unlike the LEP, which was used to accelerate electrons or positrons, the LHC crashes together protons, which are part of the hadron family. "The game is about smashing the particles together to transform this energy into mass. With high energy, they are transformed into new particles, and we observe these new particles and try to understand things," Bordry explained. "It's about recreating the first microsecond of the universe, the Big Bang. We are reproducing in a lab the conditions we had at the start of the Big Bang." Over the past three years, CERN has slammed protons together more than six million billion times.
Five billion collisions yielded results deemed worthy of further research, and data from only

relation of wholes to parts, infinity, and eternity. The second half deals with the three kinds of true causes within reality recognized by Proclus: gods (which he calls henads or "unities", see below), intellects, and souls. This elaborate metaphysical framework makes it possible for Proclus to develop a scientific theology, i.e., a demonstration of the procession and properties of the different classes of gods. In what follows we will only discuss some characteristic features of Proclus' metaphysics (see further Steel 2011). On the whole, Proclus' doctrine of first principles is a further development of Plotinus' innovative interpretation of Platonic philosophy. With Plotinus, Proclus recognizes three fundamental levels of reality, called 'hypostases' (or self-subsistent entities): One, Intellect, and Soul. However, following a concern of his predecessor Iamblichus for greater precision in the relationship and distinction between the One and Intellect, Proclus distinguishes between the intelligible being (to noeton, what is the object of intellectual intuition) and the intellective (to noeron, what is intelligizing), and introduces between both, as an intermediary level, the noeton-noeron (what is being intelligized and intelligizing). These three ontological levels thus correspond to the triad of being, life, and intellect, which already played an important role in Plotinus' and Porphyry's speculations about the procession or 'emanation' of the intelligible world from the One, without, however, being hypostasized.
Since Zeller (influenced by Hegel), the application of the triadic structure to reality has been seen as the characteristic feature of Proclus' system; but see Dodds 1963 (2nd ed.), pp. xxii and 220, on possible sources of the doctrine. Although the distinction of aspects of reality as distinct hypostases and the multiplication of triads might suggest a loss of Plotinus' intuition of the unity of reality, it is important to stress that each part of the triad of being, life, and intellect mirrors within itself their triadic relationship. This redoubled triadic structure must be understood as expressing an intrinsic and essential relation between successive levels of being. The intimate relation between being, life, and intellect is the origin of the basic structure uniting all causes to their effects, namely the relation of immanence,

This distinction can also be rephrased in terms of concepts, implying a distinction between factual concepts that allow us to identify or recognize certain objects, and concepts that fulfil an explanatory role. On the whole, Proclus' reading and systematization of Plato's doctrine of learning as recollection makes Platonic recollection concerned not only with higher learning, since already at the level of object recognition we employ concepts that originate from the innate logoi of the soul (Helmig 2011). Proclus argues at length that the human soul has to contain innate knowledge. Therefore, one should not consider it an empty writing tablet, as Aristotle does (Aristotle, De Anima III 4). He is wrong in asserting that the soul contains all things potentially. According to Proclus, the soul contains all things (i.
e., all logoi) in actuality, though due to the 'shock of birth' it may seem as if the soul has fallen to potentiality. At In Crat. § 61, Proclus asserts that the soul does not resemble an empty writing tablet (agraphon grammateion) and does not possess all things in potentiality, but in act. In Eucl. 16.8-13 expresses the same idea: "The soul is not a writing tablet void of logoi, but it is always written upon and always writing itself and being written on by the intellect." As with his philosophy of mathematics, Proclus presents a detailed criticism of the view that universal concepts are derived from sensible objects (by abstraction, induction, or collection). In the fourth book of his commentary on Plato's Parmenides and in the two prologues of the commentary on Euclid we find the most comprehensive criticism of abstractionism in antiquity (see Helmig 2010 and 2011). Proclus devoted three entire books or 'monographs' (monobiblia) to problems of providence, fate, free choice, and evil. The first treatise (Ten Problems Concerning Providence) examines ten different problems on providence that were commonly discussed in the Platonic school. For Proclus, providence (pronoia) is the beneficent activity of the first principle (the 'source of goods') and the gods (henads), who have their existence before intellect (pro-nou). One of the problems discussed is the question of how divine foreknowledge and human free choice can be reconciled. For if God

the Demiurge.
In that respect the physiology seems also to be a sort of theology, since natural things too have somehow a divine existence insofar as they are produced by the gods (In Tim. I 217.18-27). Before offering an explanation of the generation of the world, Timaeus sets out the fundamental principles that will govern his whole explanation of the physical world (Tim. 27d5-28b5). As Proclus observes, it is the task of a scientist to formulate at the start of his project the principles proper to the science in question, and not just to assume some general axioms. The science of nature, too, is based on specific axioms and assumptions, which must be clarified before we can move to the demonstration. In order to make phusiologia a real science, the philosopher must deduce his explanation, as does the geometer, from a set of fundamental propositions or axioms. "If I may say what I think, it seems to me that Plato proceeds here in the manner of the geometers, assuming before the demonstrations the definitions and hypotheses through which he will make his demonstrations, thus laying the foundations of the whole science of nature." (In Tim. I 217.18-27) Starting from these fundamental propositions, Proclus argues, Plato deduces the different types of causality that are required for a truly scientific understanding of nature (efficient, exemplary, and final cause; see Steel 2003 and above, 3.2).

Time and eternity. Proclus discusses eternity and time in his commentary on the Timaeus and in propositions 53-55 of the Elements of Theology (see Steel 2001). Aristotle had defined time as a "measure of movement according to the before and after." Therefore, anything measured by time must have a form of existence or activity in which a past and a future state can be distinguished. In fact, an entity in time is never wholly and simultaneously what it is, but has an existence extended in a process of before and after.
Opposed to it stands the eternal, which exists as a simultaneous whole and admits of no composition or change. "There is no part of it," writes Proclus, "which has already subsisted and another that will subsist later, but as yet is not. All that it is capable of being, it already possesses in entirety, without losing it or accumulating it" (Elem. Theol. § 52).

June 22, 1976. North Atlantic. At 21:13 GMT a pale orange glow behind a bank of towering cumulus to the west was observed. Two minutes later a white disc was observed while the glow from behind the cloud persisted. High probability that this may have been caused by interferometry using 3-dimensional artificial scalar-wave Fourier expansions as the interferers. Marine Observer, 47(256), Apr. 1977, p. 66-68. "Unidentified phenomenon, off Barbados, West Indies." August 22, 1969. West Indies. A luminous area bearing 310 degrees grew in size and rose in altitude, then turned into an arch or crescent. High probability that this may have been caused by interferometry using artificial scalar-wave Fourier expansions. Marine Observer, 40(229), July 1970, p. 107-108. "Optical phenomenon: Caribbean Sea; western North Atlantic." Mar. 20, 1969. Caribbean Sea and western North Atlantic. At 23:15 GMT, a semicircle of bright, milky-white light became visible in the western sky and rapidly expanded upward and outward during the next 10 minutes, dimming as it expanded. High probability that this may have been caused by interferometry using artificial scalar-wave Fourier expansions. Marine Observer, 40(227), Jan. 1970, p. 17-18.
Page last modified on Wednesday 19 May 2010, 05:23:05 MDT.

360, or just over 180, degrees in conventional machines, 220 degrees in EBT. CT is used in medicine as a diagnostic tool and as a guide for interventional procedures. Sometimes contrast materials, such as intravenous iodinated contrast, are used. This is useful to highlight structures such as blood vessels that would otherwise be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from -1024 to +3071 on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. The phenomenon whereby one part of the detector cannot distinguish between different tissues is called the partial volume effect.
That means that a large amount of cartilage and a thin layer of compact bone can cause the same attenuation in a voxel as hyperdense cartilage alone. Water has an attenuation of 0 Hounsfield units (HU), while air is -1000 HU; cancellous bone is typically +400 HU, and cranial bone can reach 2000 HU or more (os temporale) and can cause artefacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually measures about +1000 HU, while iron steel can completely extinguish the X-ray beam and is therefore responsible for the well-known line artefacts in computed tomograms. Windowing is the process of using the calculated Hounsfield units to make an image. The various radiodensity amplitudes are mapped to 256 shades of gray. These shades of gray can be distributed over a wide range of HU values to get an overview of structures that attenuate the beam to widely varying degrees. Alternatively, these shades of gray can be distributed over a narrow range of HU values (called a narrow window) centered over the average HU value of a particular structure to be evaluated. In this way, subtle variations in the internal makeup of the structure can be discerned. This is a commonly used image-processing technique known as contrast compression. For example, to evaluate the abdomen in order to find subtle masses in the liver, one might use liver windows. Choosing 70 HU as an

An algorithm is a finite sequence of instructions for carrying out some task which, given an initial set of input data, will produce a recognizable final result; this is in contrast to a heuristic.
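The CT windowing scheme described above, mapping a chosen range of Hounsfield units onto 256 shades of gray, can be sketched as follows. The center/width parameterization and the specific numbers are illustrative assumptions, not taken from the text.

```python
# Map a Hounsfield value to one of 256 gray levels for a given window.
# Values below the window clamp to black (0), values above to white (255);
# values inside the window are spread linearly over the gray scale.

def window_to_gray(hu, center, width):
    low, high = center - width / 2, center + width / 2
    if hu <= low:
        return 0
    if hu >= high:
        return 255
    return round((hu - low) / (high - low) * 255)

# Wide window: an overview of tissues from air to bone.
print(window_to_gray(0, center=0, width=2000))    # water lands near mid-gray
# Narrow window: small HU differences get visibly distinct grays.
print(window_to_gray(80, center=70, width=40))
```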
The concept of an algorithm is often illustrated by the example of a recipe, although many algorithms are far more complex; algorithms often have steps that repeat (iterate) or require decisions (such as logic or comparisons) until the task is completed. Different algorithms may accomplish the same task with a different set of instructions in more or less time, space, or effort than others. For example, given two different recipes for making potato salad, one may have "peel the potato" before "boil the potato" while the other presents the steps in the reverse order, yet they both call for these steps to be repeated for all potatoes and end when the potato salad is ready to be eaten. Correctly performing an algorithm will not solve a problem if the algorithm is flawed or not appropriate to the problem. For example, performing the potato salad algorithm will fail if there are no potatoes present, even if all the motions of preparing the salad are performed as if the potatoes were there. In some countries, such as the USA, some algorithms can effectively be patented if an embodiment is possible (for example, a multiplication algorithm embodied in the arithmetic unit of a microprocessor).

Formalized algorithms

Algorithms are essential to the way computers process information, because a computer program is essentially an algorithm that tells the computer what specific steps to perform (in what specific order) in order to carry out a specified task, such as calculating employees' paychecks or printing students' report cards.
Thus, an algorithm can be considered to be any sequence of operations that can be performed by a Turing-complete system. Typically, when an algorithm is associated with processing information, data is read from an input source or device, written to an output sink or device, and/or stored for further use. Stored data is regarded as part of the internal state of the entity performing the algorithm. For any such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case; the criteria for each case must be clear (and computable). Because an algorithm is a precise list of precise steps, the order of computation will almost always be critical to its functioning. Instructions are usually assumed to be listed explicitly, and are described as starting 'from the top' and going 'down to the bottom', an idea that is described more formally by flow of control. So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception, and it attempts to describe a task in discrete, 'mechanical' means. Unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from the intuition of 'memory' as a scratchpad.
There is an example below of such an assignment.

Implementing algorithms

Algorithms are sometimes implemented as computer programs but are more often implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic, or an insect relocating food), in electric circuits, or in a mechanical device. The analysis and study of algorithms is one discipline of computer science, and is often practiced abstractly (without the use of a specific programming language or other implementation). In this sense, it resembles other mathematical disciplines in that the analysis focuses on the underlying principles of the algorithm, and not on any particular implementation. One way to embody (or sometimes codify) an algorithm is the writing of pseudocode. Some writers restrict the definition of algorithm to procedures that eventually finish. Others include procedures that could run forever without stopping, arguing that some entity may be required to carry out such permanent tasks. In the latter case, success can no longer be defined in terms of halting with a meaningful output. Instead, criteria of success that allow for unbounded output sequences must be defined. For example, an algorithm that verifies whether there are more zeros than ones in an infinite random binary sequence must run forever to be effective.
If it is implemented correctly, however, the algorithm's output will be useful: for as long as it examines the sequence, the algorithm will give a positive response while the number of examined zeros outnumbers the ones, and a negative response otherwise. Success for this algorithm could then be defined as eventually outputting only positive responses if there are actually more zeros than ones in the sequence, and in any other case outputting any mixture of positive and negative responses.

Here is a simple example of an algorithm. Imagine you have a list of random numbers which is unsorted, and the goal is to find the largest number in the list. The first step is to look at every value in the list; the second is to look at each value only once. Put in terms of a computation, a simple algorithm for the task is as follows:

- Pretend the first number in the list is the largest number.
- Look at the next number, and compare it with this largest number.
- Only if this next number is larger, keep it as the new largest number.
- Repeat steps 2 and 3 until you have gone through the whole list.

Given: a list "list"

    largest = list[1]
    counter = 2
    while counter <= length(list):
        if list[counter] > largest:
            largest = list[counter]
        counter = counter + 1
    print largest

Notes on notation:
- = as used here indicates assignment.
That is, the value on the right-hand side of the expression is assigned to the container (or variable) on the left-hand side of the expression.
- list[counter] as used here indicates the counter-th element of the list. For example, if the value of counter is 5, then list[counter] refers to the 5th element of the list.
- <= as used here indicates 'less than or equal to'.

Note also that the algorithm assumes the list contains at least one number. It will fail when presented with an empty list. Most algorithms have similar assumptions on their inputs, called pre-conditions. As it happens, most people who implement algorithms want to know how much of a particular resource (such as time or storage) a given algorithm requires. Methods have been developed for the analysis of algorithms to obtain such quantitative answers; for example, the algorithm above has a time requirement of O(n), using big O notation with n representing the length of the list. The word "algorithm" comes ultimately from the name of the 9th-century mathematician Abu Abdullah Muhammad bin Musa al-Khwarizmi. The word "algorism" originally referred only to the rules of performing arithmetic using Arabic numerals, but evolved into "algorithm" by the 18th century. The word has now evolved to include all definite procedures for solving problems or performing tasks. The first case of an algorithm written for a computer was Ada Byron's notes on the Analytical Engine, written in 1842, for which she is considered by many to be the world's first programmer. However, since Charles Babbage never completed his Analytical Engine, the algorithm was never implemented on it. The lack of mathematical rigor in the "well-defined procedure" definition of algorithms posed some difficulties for mathematicians and logicians of the 19th and early 20th centuries.
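The find-the-largest pseudocode above translates directly into Python; a guard for the empty-list pre-condition mentioned in the text is included. The single pass over the list is what gives the O(n) running time.

```python
# Find the largest number in an unsorted list, following the pseudocode:
# assume the first element is the largest, then examine each remaining
# element exactly once, keeping any value that is larger.

def find_largest(numbers):
    # Pre-condition from the text: the list must contain at least one number.
    if not numbers:
        raise ValueError("list must contain at least one number")
    largest = numbers[0]
    for value in numbers[1:]:
        if value > largest:
            largest = value
    return largest

print(find_largest([3, 41, 7, 0, 12]))   # -> 41
```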
This problem was largely solved with the description of the Turing machine, an abstract model of a computer formulated by Alan Turing, and the demonstration that every method yet found for describing "well-defined procedures" advanced by other mathematicians could be emulated on a Turing machine (a statement known as the Church-Turing thesis). Nowadays, a formal criterion for an algorithm is that it is a procedure that can be implemented on a completely specified Turing machine or one of the equivalent formalisms. Turing's initial interest was in the halting problem: deciding when an algorithm describes a terminating procedure. In practical terms, computational complexity theory matters more: it includes the puzzling problem of the algorithms called NP-complete, which are generally presumed to take more than polynomial time.

Classes of algorithms

There are several ways to classify algorithms, and the merits of each classification have been the subject of ongoing debate. One way of classifying algorithms is by their design methodology or paradigm. There is a certain number of paradigms, each different from the others, and each of these categories includes many different types of algorithm. Some commonly found paradigms include:

- Divide and conquer.
A divide-and-conquer algorithm reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively), until the instances are small enough to be directly expressible in the programming language employed (what counts as 'direct' is often discretionary).
- Dynamic programming. When a problem shows optimal substructure, i.e., when the optimal solution to a problem consists of optimal solutions to subproblems (for instance, the shortest path between two vertices on a weighted graph consists of the shortest paths between all the vertices in between), you solve such a problem bottom-up by solving the simplest subproblems first and then proceeding to increasingly difficult ones until you have solved the original problem. This is called a dynamic programming algorithm.
- The greedy method. A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that at each stage you don't have to have the solutions to the subproblems; you can make a "greedy" choice of whatever looks best for the moment.
- Linear programming. When you solve a problem using linear programming, you express the problem as a number of linear inequalities and then try to maximize (or minimize) a linear function of the inputs. Many problems (such as maximum flow on directed graphs) can be stated in a linear programming way, and then solved by a 'generic' algorithm such as the simplex algorithm.
- Search and enumeration. Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes the search algorithms and backtracking.
- The probabilistic and heuristic paradigm. Algorithms belonging to this class fit the definition of an algorithm more loosely. Probabilistic algorithms are those that make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proved that the fastest solutions must involve some randomness. Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions"; thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended to algorithms themselves, by regarding the algorithm as a "solution" to a problem. There are also heuristic algorithms, whose general purpose is not to find an exact final solution but an approximate one, for cases where the time or resources needed to find a perfect solution are impractical. An example would be simulated annealing, a class of heuristic probabilistic algorithms that vary the candidate solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgical term for heating and cooling metal to achieve freedom from defects. The purpose of the random variation is to find solutions close to the global optimum rather than merely locally optimal ones, the idea being that the random element is decreased as the algorithm settles down to a solution.
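Two of the paradigms above lend themselves to a compact sketch. The following Python fragment (illustrative only; the function names are ours, not from the text) shows divide and conquer in a merge sort and bottom-up dynamic programming in a Fibonacci routine:

```python
def merge_sort(xs):
    """Divide and conquer: split the instance, solve the halves
    recursively, then combine (merge) the two sorted halves."""
    if len(xs) <= 1:              # instance small enough to solve directly
        return list(xs)
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # smaller instance 1
    right = merge_sort(xs[mid:])  # smaller instance 2
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def fib(n):
    """Dynamic programming, bottom-up: solve the simplest subproblems
    first (fib(0), fib(1)) and build toward the original problem."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Note the contrast: the merge sort works top-down by shrinking the instance, while the Fibonacci routine works bottom-up by reusing already-solved subproblems instead of recomputing them.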
Another way to classify algorithms is by implementation. A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition is met, a method common to functional programming. Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time; such computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms, which take advantage of computer architectures where several processors can work on a problem at the same time. The various heuristic algorithms would probably also fall into this category, as their name (e.g., "genetic algorithm") describes the implementation. A list of algorithms discussed in Wikipedia is available.

See also: bulletproof algorithms, numerical analysis, cryptographic algorithms, sort algorithms, search algorithms, merge algorithms, string algorithms, list of algorithms, timeline of algorithms, data structures, genetic algorithms, randomised algorithms.

Science Fair Project Encyclopedia

Knot theory is a branch of topology that was inspired by observations, as the name suggests, of knots. But progress in the field no longer depends on experiments with twine. Knot theory concerns itself with the abstract properties of theoretical knots: the spatial arrangements that in principle could be assumed by a loop of string. In mathematical jargon, knots are embeddings of the closed circle in three-dimensional space. An ordinary knot is converted to a mathematical knot by splicing its ends together.
The topological theory of knots asks whether two such knots can be rearranged to match, without opening the splice. The question of untying an ordinary knot has to do with unwedging tangles of rope pulled tight. A knot can be untied in the topological theory of knots if and only if it is equivalent to the unknot, a circle in 3-space. Knot theory originated in an idea of Lord Kelvin's (1867) that atoms were knots of swirling vortices in the aether (also spelled "ether"). He believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light only at the discrete wavelengths that they do (i.e., explain what we now understand to depend on quantum energy levels). The Scottish physicist Peter Tait spent many years listing unique knots in the belief that he was creating a table of elements. When the ether was discredited by the Michelson-Morley experiment, vortex theory became completely obsolete, and knot theory fell out of scientific interest. Only in the past 100 years, with the rise of topology, have knots become a popular field of study. Today, knot theory is inextricably linked to particle physics, to DNA replication and recombination, and to areas of statistical mechanics.

An introduction to knot theory

Creating a knot is easy. Begin with a one-dimensional line segment, wrap it around itself arbitrarily, and then fuse its two free ends together to form a closed loop. One of the biggest unresolved problems in knot theory is to describe the different ways in which this may be done, or conversely to decide whether two such embeddings are different or the same.

[Figure: the unknot, and a knot equivalent to it]

Before we can do this, we must decide what it means for embeddings to be "the same".
We consider two embeddings of a loop to be the same if we can get from one to the other by a series of slides and distortions of the string which do not tear it and do not pass one segment of string through another. If no such sequence of moves exists, the embeddings are different knots. A useful way to visualise knots and the allowed moves on them is to project the knot onto a plane: think of the knot casting a shadow on the wall. Now we can draw and manipulate pictures, instead of having to think in 3D. However, there is one more thing we must do: at each crossing we must indicate which section is "over" and which is "under". This is to prevent us from pushing one piece of string through another, which is against the rules. To avoid ambiguity, we must avoid having three arcs cross at the same crossing and also having two arcs meet without actually crossing (we would then say that the knot is in general position with respect to the plane). Fortunately, a small perturbation of either the original knot or the position of the plane is all that is needed to ensure this. In 1927, working with this diagrammatic form of knots, J. W. Alexander and G. B. Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram, shown right. These operations, now called the Reidemeister moves, are:
- Twist and untwist in either direction.
- Move one loop completely over another.
- Move a string completely over or under a crossing.

Knot invariants can be defined by exhibiting a property of a knot diagram that is not changed when we apply any of the Reidemeister moves. Some very important invariants can be defined in this way, including the Jones polynomial. You can unknot any circle in four dimensions. There are two steps to this. First, "push" the circle into a 3-dimensional subspace; this is the hard, technical part, which we will skip. Now imagine temperature to be a fourth dimension added to the 3-dimensional space. Then you could make one section of the string cross through another simply by warming it with your fingers. Two knots can be added by breaking the two circles and connecting the pairs of ends; knots in 3-space form a commutative monoid under this sum.

This tutorial, developed for high school physics students, uses multiple graphs and animations to study the relationship between the motion of an object and its graph of velocity vs. time. Users explore the relationship between position and velocity, positive and negative velocities, the slope and shape of graphs, and acceleration. Interactive self-evaluations are included. See Related Materials for an accompanying lab by the same author. This item is part of The Physics Classroom, a comprehensive set of tutorials and multimedia resources for high school physics. Editor's note: education research indicates that many students have difficulty differentiating velocity and acceleration, and often plot velocity graphs as the path of an object. See Related Materials for a free research-based diagnostic tool to probe misconceptions related to velocity.

6-8: 4F/M3b.
If a force acts towards a single center, the object's path may curve into an orbit around the center.
9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass.
9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.

9. The Mathematical World
9B. Symbolic Relationships
6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in steps, or do something different from any of these.
9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another.
9-12: 9C/H3c. A graph represents all the values that satisfy an equation, and if two equations have to be satisfied at the same time, the values that satisfy them both will be found where the graphs intersect.

Common Core State Standards for Mathematics Alignments
Expressions and Equations (6-8): Represent and analyze quantitative relationships between dependent and independent variables.
(6) 6.EE.9: Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable.

1.
A fatal disease of cattle that affects the central nervous system.
4. The small projection of a mammary gland.
8. The act of slowing down or falling behind.
11. A drug combination found in some over-the-counter headache remedies (aspirin and phenacetin and caffeine).
12. Goddess of the dead and queen of the underworld.
13. A hospital unit staffed and equipped to provide intensive care.
14. A police officer who investigates crimes.
15. Divulge information or secrets.
16. A logarithmic unit of sound intensity equal to 10 decibels.
17. A radioactive element of the actinide series.
19. A range of mountains (usually with jagged peaks and irregular outline).
24. Open-heart surgery in which the rib cage is opened and a section of a blood vessel is grafted from the aorta to the coronary artery to bypass the blocked section of the coronary artery and improve the blood supply to the heart.
25. A piece of furniture that provides a place to sleep.
27. Resinlike substance secreted by certain lac insects.
29. A family of Sino-Tibetan languages spoken in southeastern Asia.
31. A highly unstable radioactive element (the heaviest of the halogen series).
34. An official prosecutor for a judicial district.
35. A soft silvery metallic element of the alkali earth group.
36. (Greek mythology) Goddess of the earth and mother of Cronus and the Titans in ancient mythology.
38. The protoplasm of the germ cells that contains chromosomes and genes.
41. The blood group whose red cells carry both the A and B antigens.
42. A summary that repeats the substance of a longer discussion.
46. Aircraft landing in bad weather in which the pilot is talked down by ground control using precision approach radar.
47. Call upon in supplication.
49. (Irish) Mother of the Tuatha De Danann.
50. United States liquid unit equal to 4 quarts or 3.785 liters.
51. A condition (mostly in boys) characterized by behavioral and learning disorders.
52.
A strategically located monarchy on the southern and eastern coasts of the Arabian Peninsula.
53. A loose sleeveless outer garment made from aba cloth.
1. A Chadic language spoken south of Lake Chad.
2. A detailed description of design criteria for a piece of work.
3. (Computer science) A coding system that incorporates extra parity bits in order to detect errors.
4. A bachelor's degree in theology.
5. The fatty flesh of eel.
6. By bad luck.
7. A sock with a separation for the big toe.
8. A small faint zodiacal constellation in the southern hemisphere.
9. Sour or bitter in taste.
10. A Russian prison camp for political prisoners.
18. A person forced to flee from home or country.
20. Liquid containing proteins and electrolytes, including the liquid in blood plasma and interstitial fluid.
21. Not divisible by two.
22. English essayist (1775-1834).
23. A white metallic element that burns with a brilliant light.
26. (Akkadian) God of wisdom.
28. A compartment in front of a motor vehicle where the driver sits.
30. The sixth month of the civil year.
32. United States abolitionist (1786-1865).
33. A Bantu language spoken by the Chaga people in northern Tanzania.
37. In bed.
38. Being ten more than one hundred ninety.
39. Someone who works (or provides workers) during a strike.
40. The arch of bone beneath the eye that forms the prominence of the cheek.
43. The capital and largest city of Japan.
44. A rotating disk shaped to convert circular into linear motion.
45. (Irish) Mother of the ancient Irish gods.
46. A heavy, brittle, diamagnetic trivalent metallic element (resembles arsenic and antimony chemically).
48. A ductile, silvery-white, ferromagnetic trivalent metallic element of the rare-earth group.

New solar cell gives its "110 percent" in efficiency
December 20, 2011

Like Tebow, these new solar cells are giving their "110 percent" week in and week out. Gains in quantum efficiency could yield around a 35 percent gain in conversion efficiency, the key metric.

Using quantum dots (tiny nanometer-scale semiconductor crystals), researchers at the U.S. National Renewable Energy Laboratory have cracked an important physical barrier and achieved levels of performance long considered impossible for a solar cell.

I. Giving its 110 percent

The special design used by the team utilized quantum dot nanocrystals in the 1-20 nm range. The nanocrystals were composed of lead selenide treated with ethanedithiol and hydrazine. The photon-harvesting, quantum-dot-populated plane was sandwiched between a nanostructured zinc oxide layer and a thin gold electrode. A top layer was formed using a transparent conductor. The overall design is in line with the "thin-film" methodology, which is currently rising in commercial production. Thin-film cells tend to rely on scarce (i.e., expensive on a per-mass basis) resources, such as rare-earth metals. However, they use so little of them, given the low mass of the thin film, that they are not significantly more expensive than existing polycrystalline silicon cells.
Generally, the only major extra cost of thin film is the initial cost of shifting the production technology. The new NREL cell shatters the quantum efficiencies of previous designs, posting a peak external quantum efficiency of 114 ± 1% and a peak internal quantum efficiency of 130%. To understand these numbers, and how any device can be more than "100 percent" efficient, you must understand the meaning of quantum efficiency (QE), which is quite different from, though related to, conversion efficiency (which can never reach or exceed 100 percent in traditional physics).

[Image: The new cell is a thin-film design. Image source: NREL]

Quantum efficiency is a measure of how many electrons come out of a cell for every photon that goes into the cell. Traditional silicon solar cells can achieve near 100 percent quantum efficiency at around 600 nm, but drop to around 80 percent at either end of the 500-1000 nm range (visible light is 380 to 740 nm). What this means is that the perfect "color" of light for silicon cells is orangish, while purple light can have a less than 45 percent conversion rate. As white light (sunlight) is a mixture of different wavelengths, the lower quantum efficiency in certain parts of the spectrum leads to a lower average quantum efficiency. External efficiency directly uses the number of input photons and the number of output electrons from a device.
Internal efficiency, by contrast, uses theory to adjust these numbers to account for losses due to reflection and absorption. We took the liberty of borrowing (fair use, Title 17 > Chapter 1 > § 107) the charts for their 0.72 eV bandgap cell (their best-performing design) and comparing it to a traditional polycrystalline silicon cell, adding a helpful reference that shows roughly which wavelengths the eV values correspond to in the visible-light range.

[Figure: Comparing the external quantum efficiencies of the new NREL design (top) and the polycrystalline silicon design (bottom) over the visible-light range (middle bar), the new cell is slightly less efficient at capturing red-end light, but much more efficient at capturing blue-end light. The black line in the bottom graph and the blue line in the top-right graph are the internal QEs.]

Overall this could grant up to a 35 percent efficiency gain versus today's standard polycrystalline silicon cells, according to the paper's authors.

II. You "cannot change the laws of physics," so pick a better law!

The better blue-range performance comes thanks to multiple exciton generation (MEG), a unique quantum effect which, like other oddball quantum effects, occurs at an extremely small scale. In an MEG scenario, a single photon hits an atom, but rather than simply knocking off one electron via the formation of an "exciton" (an electron/hole pair), it puts multiple electrons into the flow.

[Image: MEG, multiple exciton generation, bends the traditional laws of physics. Image source: Los Alamos Science & Tech Mag. / U.S. Department of Energy's NNSA]

The exact quantum mechanics of this phenomenon are still being debated by physicists.
Currently the three leading hypotheses are:
- The high-energy exciton ("X") becomes a "multi-X", decaying through a dense range of multi-X states.
- A mixed "virtual" state consisting of multi-X and X (think superposition) is triggered by energetic photon absorption.
- Photon absorption creates a standard X, but in the special material the X waffles back and forth, switching identity from X to multi-X and back, slowly dropping in energy in the process.

Without MEG, no solar cell can have more than a 100 percent internal or external QE. Hence no traditional solar cell has had greater than a 100 percent QE, even at its optimal part of the spectrum (e.g., orange light for silicon cells). This means that the overall conversion efficiency (CE) of a traditional cell, even if perfectly optimized, would not exceed 32 percent. Cumulatively, this 100/32 (QE/CE) limit is named the Shockley-Queisser limit after its discoverers (S-Q limit, for short). As Scotty would say, "You cannot change the laws of physics." But sometimes you can have your cake and eat it too, if only you find the right quirk in the complex and poorly understood physics of our universe. That is fundamentally what has been done here. MEG was first theorized by NREL researcher Arthur J. Nozik, Ph.D., back in 2001, and was later confirmed to work in quantum dots, thanks to their special scale. This method is also known as "hot carrier generation".
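The quantities in this section reduce to simple arithmetic. As a minimal sketch (the helper names below are ours, not NREL's), quantum efficiency is just electrons out per photon in, and a bandgap in eV corresponds to a photon wavelength of roughly 1240 nm·eV divided by the energy:

```python
def external_qe(electrons_out, photons_in):
    """External quantum efficiency: electrons collected per incident photon."""
    return electrons_out / photons_in

def bandgap_to_wavelength_nm(e_gap_ev):
    """Approximate photon wavelength (nm) at a given energy, via
    lambda = hc/E with hc ~ 1240 eV*nm."""
    return 1240.0 / e_gap_ev

# A QE above 1.0 (above "100 percent") requires more than one electron
# per photon, which is exactly what MEG provides.
peak_qe = external_qe(114, 100)                 # the reported 114% peak
cutoff = bandgap_to_wavelength_nm(0.72)         # 0.72 eV bandgap cell
```

Note that 0.72 eV corresponds to roughly 1700 nm, i.e., the bandgap of the best-performing cell sits in the infrared, well below the visible range discussed above.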
Using this quantum effect, later proved in the laboratory, the S-Q performance barrier could be shattered. A useful property of quantum dots is that their size determines their band gap, and hence the efficiency. Thus building the "perfect" MEG cell is simply a matter of picking the right size of dots. As the bandgap tends to decrease as quantum dot size and efficiency increase, the trick is to pick a quantum dot that is as big as possible without losing the quantum effects.

[Image: Quantum dots don't just look pretty; they have some handy physics quirks too. Image source: elec-intro]

Quantum dots also generate electron/hole pairs more easily, with room temperature being enough to excite (generate electricity in) some quantum dot materials. The most recent paper was published [abstract] in the peer-reviewed journal, with Matthew C. Beard taking the distinction of senior author and Octavi E. Semonin the distinction of being first author. Professor Nozik was listed second to last, after four additional NREL colleagues.

III. Third-generation solar cells: finally a solar tech worth investing in

"First" and "second" generation solar cells use various bulk semiconductors such as silicon, cadmium telluride, or copper indium gallium (di)selenide, which are then mixed with third-, fourth-, and fifth-column (in the periodic table) elements to improve performance.
Ideally, quantum dot cells could be combined with these traditional thin-film semiconductor cell designs, or applied using a mixture of nanocrystalline quantum dots optimized for different wavelengths. Either methodology could yield an optimized "third generation" (aka next-generation) design. Such a cell would enjoy the best of both worlds: silicon cells' excellent red-range performance, along with quantum dots' excellent performance on the higher-energy (blue) end of the visible light spectrum.

[Image: One approach to making a third-generation ultra-efficient cell is to use a mixture of wavelength-optimized quantum dots. Image source: Los Alamos Science & Tech Mag. / U.S. Department of Energy's NNSA]

While quantum dots are generally thought to be amenable to thin-film "roll-to-roll" printing processes, the precise methods to do this at mass-production scale still have to be ironed out. Furthermore, the quantum dot cells measured in this study exhibited a fairly low 4.5 percent conversion efficiency. While that sounds quite bad, it is largely a result of the low amount of quantum dots used in the absorbing layer. If quantum dot deposition techniques can be refined, the aforementioned "third generation" mixed cell could finally be realized. If somebody is going to do that, it will probably be Professor Nozik's team at the NREL; after all, they first discovered how to play the grand MEG prank on the laws of physics. With these third-generation solar cells, the technology may finally have the legs under it to compete with cheaper power generation methods (e.g.
carbon-based fuels and nuclear energy).

Passwords

Passwords are the combination locks used to protect our computer accounts. It goes without saying that giving out our combination, or leaving the lock unlatched (i.e., walking away from a logged-on computer), compromises our security. However, technology provides ways for people to obtain our combination even if we aren't careless. To thwart such misuse, we must choose complex combinations. There are three elements to a complex combination:

1. It can't be obvious. That is, it can't exist in an attack dictionary. Every word in an English-language dictionary can be tried in minutes. Attack dictionaries also include names, common misspellings, words with numbers, and other commonly used passwords. You also don't want the password to have any personal significance to you (your dog's name, for example). Using a dictionary word for a password is like using a locker number for a combination.

2. It can't be short. A combination lock with a two-number combination wouldn't protect very well, and anything less than an eight-character password is like having such a combination: it simply won't hold up for long on the network.

3. It can't be made up of just a few kinds of characters. A combination lock with only ten numbers on the dial isn't as effective as one with fifty, and using just lowercase letters is like limiting a combination lock to ten numbers. On systems that support them, passwords should contain at least one of each of the following:
- uppercase letters (A-Z)
- lowercase letters (a-z)
- numbers (0-9)
- punctuation marks (! @ # $ % ^ & * ( ) _ + = -), etc.

Different systems have different capabilities.
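The arithmetic behind points 2 and 3 can be sketched in a few lines (an illustration only; the function name is ours). The number of possible passwords is charset_size ** length, so the search space in bits is length * log2(charset_size), and both a longer password and a larger character set grow it:

```python
import math

def search_space_bits(charset_size, length):
    """Bits of entropy for a randomly chosen password:
    length * log2(size of the character set)."""
    return length * math.log2(charset_size)

# Lowercase-only, 8 characters: 26 symbols, about 37.6 bits.
lower_only = search_space_bits(26, 8)
# Mixed case + digits + punctuation (~94 printable symbols): about
# 52.4 bits at the same 8 characters, and still more at 12 characters.
full_set_8 = search_space_bits(94, 8)
full_set_12 = search_space_bits(94, 12)
```

Each extra bit doubles the attacker's work, which is why the text insists on both length and character variety rather than one or the other.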
Some will not let you use all the strength features mentioned here. When you get an account or change your password on a system, you should be given instructions on any limitations. How, you may ask, am I ever going to remember such a complicated password? Pick a sentence that reminds you of the password. For example:
- If my car makes it through 2 semesters, I'll be lucky (imcmit2s,ibl)
- Only Bill Gates could afford this $70.00 textbook (obgcat$7t)
- What time is my accounting class in Showker 240? (wtimacis2?)
If you absolutely have to, record it in

You have been there before. The best part of any professional development activity is the collaboration with other teachers. Hearing the struggles and successes of others helps us articulate our own ideas and sparks new excitement in our own curriculum. We hope to strengthen the community of chemistry educators and provide a place for discussion and collaboration right here at this site.

Summer is one of my favorite times as a teacher! Like most teachers, I like to take a little time away from school, but once I've rested a bit, it's my favorite time to do research as well. I encourage you to take time this summer to explore labs and activities that you think might work for your classroom but just didn't have time to examine during your busy teaching schedule.

Call for symposia and workshops for the 23rd BCCE at Grand Valley State University: "Greener on the Grand: Empowering Chemical Educators for a Greener Tomorrow," August 3-7, 2014.

I'd like to report on one of the end-of-year research projects that two of my general chemistry students completed during class this year.
if you \u2019 d like read more about these end - of year research projects in general, click here. wow! talk about an interesting idea! a very neat experiment, called \u201c hydroglyphics \u201d, has been published by philseok kim, jack alvarenga, joanna aizenberg and raymond sleeper in the journal of chemical education. i came across a simple, yet interesting experiment that was first described by elizabeth sumner walter in 2001. she merely had students pour water into a dish containing some gobstoppers candies. i showed this experiment to some of my college chemistry students while they were workin inquiry is a fluid concept. there are some truly fabulous activities on grand valley state university ' s target inquiry ( ti ) website ( www. gvsu. edu / targetinquiry ). yes, i am biased as i was part of the first ti cohort, but there are several labs now that were written later and they, too, are terrific.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6162738930427081, "token_count": 421, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T21:56:15.294980"}