Datasets:
| {"text": "event - source objects represent command gates in the i / o kit. another name for the os x core operating system, or kernel environment. the darwin kernel environment is equivalent to the os x kernel plus the bsd libraries and commands essential to the bsd commands environment. darwin is open source technology. ( direct memory access ) a capability of some bus architectures that enables a bus controller to transfer data directly between a device ( such as a disk drive ) and a device with physically addressable memory, such as that on a computer ' s motherboard. the microprocessor is freed from involvement with the data transfer, thus speeding up overall computer operation. see also bus master computer hardware, typically excluding the cpu and system memory, which can be controlled and can send and receive data. examples of devices include monitors, disk drives, buses, and keyboards. - device driver a component of an operating system that deals with getting data to and from a device, as well as the control of that device. a driver written with the i / o kit is an object that implements the appropriate i / o kit abstractions for controlling hardware. - device file in bsd, a device file is a special file located in / devthat represents a block or character device such as a terminal, disk drive, or printer. if a program knows the name of a device file, it can use posix functions to access and control the associated device. the program can obtain the device name ( which is not persistent across reboots or device removal ) from the i / o kit. - device interface in the i / o kit, a mechanism that uses a plug - in architecture to allow a program in user space to communicate with a nub in the kernel that is appropriate to the type of device the program wishes to control. through the nub the program gains access to i / o kit services and to the device itself. from the perspective of the kernel, the device interface appears as a driver object called a user client. 
- device matching in the i / o kit, a process by which an application finds an appropriate device interface to load. the application calls a special i / o kit function that uses a \u201c matching dictionary \u201d to search the i / o registry. the function returns one or more matching driver objects that the application can then use to load an appropriate device interface. also referred to as device discovery. see device driver - driver matching in the i / o kit, a process in which a nub, after discovering a specific hardware device,", "subdomain_id": "subdomain_quantum_computing", "similarity_score": 0.6059786968384719, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:623d2e8d-3749-4e06-86a1-86049d5916e7>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.541796"} | |
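The glossary entry above describes device matching: a function takes a "matching dictionary" and searches the I/O Registry for entries satisfying it. The following is a minimal sketch of that idea in Python; the names (`registry`, `find_matching`) and the sample property keys are illustrative assumptions, not real I/O Kit API.

```python
# Hypothetical sketch of dictionary-based matching, loosely modeled on the
# I/O Kit's device-matching step: a registry entry matches when every
# key/value pair in the matching dictionary equals the entry's property.

def find_matching(registry, matching_dict):
    """Return registry entries whose properties satisfy every pair in matching_dict."""
    return [
        entry for entry in registry
        if all(entry.get(key) == value for key, value in matching_dict.items())
    ]

registry = [
    {"IOProviderClass": "IOUSBDevice", "idVendor": 0x05AC, "idProduct": 0x8600},
    {"IOProviderClass": "IOUSBDevice", "idVendor": 0x046D, "idProduct": 0xC52B},
    {"IOProviderClass": "IOPCIDevice", "vendor-id": 0x8086},
]

matches = find_matching(registry, {"IOProviderClass": "IOUSBDevice", "idVendor": 0x05AC})
print(len(matches))  # 1
```

A broader dictionary (fewer keys) matches more entries, which mirrors why matching dictionaries usually combine a class name with vendor/product identifiers to narrow the result.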
| {"text": "driver objects that the application can then use to load an appropriate device interface. also referred to as device discovery. see device driver - driver matching in the i / o kit, a process in which a nub, after discovering a specific hardware device, searches for the driver or drivers most suited to drive that device. matching requires that a driver have one or more personalities that specify whether it is a candidate for a particular device. driver matching is a subtractive process involving three phases : class matching, passive matching, and active matching. see also personality - driver stack in an i / o connection, the series of driver objects ( drivers and nubs ) in client / provider relationships with each other. a driver stack often refers to the entire collection of software between a device and its client application ( or applications ). - event source an i / o object that corresponds to a type of event that a device driver can be expected to handle ; there are currently event sources for hardware interrupts, timer events, and i / o commands. the i / o kit defines a class for each of these event types, respectively iointerrupteventsource, iotimereventsource, and iocommandgate. a collection of software abstractions that are common to all devices of a particular category. families provide functionality and services to drivers. examples of families include protocol families ( such as scsi, usb, and firewire ), storage families ( disk drives ), network families, and families that describe human interface devices ( mouse and keyboard ). in the virtual - memory system, faults are the mechanism for initiating page - in activity. they are interrupts that occur when code tries to access data at a virtual address that is not mapped to physical memory. see also page ; virtual memory a type of bundle that packages a dynamic shared library with the resources that the library requires, including header files and reference documentation. 
note that the kernel framework ( which contains the i / o kit headers ) contains no dynamic shared library. all library - type linking for the kernel framework is done using the mach _ kernel file itself and kernel extensions. this linking is actually static ( with vtable patch - ups ) in implementation - idle sleep a sleep state that occurs when there has been no device or system activity for the period of time the user specifies in the energy saver pane of system preferences. see also system sleep - information property list a property list that contains essential configuration information for bundles such as kernel extensions. a file named info.", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6009480361190038, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:623d2e8d-3749-4e06-86a1-86049d5916e7>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.542941"} | |
| {"text": "partition of another program. although os x has memory protection, mac os 8 and 9 do not. a mutual - exclusion locking object that allows multiple threads to synchronize access to shared resources. a mutex has two states : locked and unlocked. once a mutex has been locked by a thread, other threads attempting to lock it will block. when the locking thread unlocks ( releases ) the mutex, one of the blocked threads ( if any ) acquires ( locks ) it and uses the resource. the thread that locks the mutex must be the one that unlocks it. the work - loop lock ( which is used by a command gate ) is based on a mutex. see also lock ; work loop a programmatic mechanism for alerting interested recipients ( sometimes called observers ) that an event has occurred. an i / o kit object that represents a detected, controllable entity such as a device or logical service. a nub may represent a bus, disk, graphics adaptor, or any number of similar entities. a nub supports dynamic configuration by providing a bridge between two drivers ( and, by extension, between two families ). see also device ; driver ( 1 ) the smallest unit ( in bytes ) of information that the virtual memory system can transfer between physical memory and backing store. in darwin, a page is currently 4 kilobytes. ( 2 ) as a verb, page refers to the transfer of pages between physical memory and backing store. refer to kernel. framework / headers / mach / machine / vm _ params. hfor specifics. see also fault ; virtual memory - passive driver a device driver that performs only basic power - management tasks, such as joining the power plane and changing the device \u2019 s power state. see also active driver a set of properties specifying the kinds of devices a driver can support. this information is stored in an xml matching dictionary defined in the information property list ( info. plist ) file in the driver \u2019 s kext bundle. 
a single driver may present one or more personalities for matching ; each personality specifies a class to instantiate. such instances are passed a reference to the personality dictionary at initialization. - physical memory electronic circuitry contained in random - access memory ( ram ) chips, used to temporarily hold information at execution time. addresses in a process \u2019 s virtual memory are mapped to addresses in physical memory. see also virtual memory ( programmed input / output ) a way to move data between a device and system memory in which each byte", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6141133297269269, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:623d2e8d-3749-4e06-86a1-86049d5916e7>", "chunk_index": 5, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.546214"} | |
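The mutex behavior described in this record (locked/unlocked states, blocked threads, release by the locking thread) can be demonstrated with Python's `threading.Lock`. This is a user-space illustration of the same concept, not the kernel work-loop lock itself.

```python
# Minimal mutex demonstration: threading.Lock plays the role of the
# mutual-exclusion object. Only one thread at a time holds the lock;
# the others block on acquire until the holder releases it.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:       # acquire (lock); released automatically on exit
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: the lock prevents lost updates between threads
```

The `with lock:` form guarantees the acquiring thread is also the one that releases, matching the glossary's rule that the locking thread must be the unlocking thread.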
| {"text": "used to temporarily hold information at execution time. addresses in a process \u2019 s virtual memory are mapped to addresses in physical memory. see also virtual memory ( programmed input / output ) a way to move data between a device and system memory in which each byte is transferred under control of the host processor. see also dma a subset of driver ( or service ) objects in the i / o registry that have a certain type of provider / client relationship connecting them. the most general plane is the service plane, which displays the objects in the same hierarchy in which they are attached during registry construction. there are also the audio, power, device tree, firewire, and usb planes. - platform expert a driver object for a particular motherboard that knows the type of platform the system is running on. the platform expert serves as the root of the i / o registry tree. a module that can be dynamically added to a running system or application. core foundation plug - in services uses the basic code - loading facility of core foundation bundle services to provide a standard plug - in architecture, known as the cfplugin architecture, for mac apps. a kernel extension is a type of kernel plug - in. a heavily overloaded term which in darwin has two particular meanings : ( 1 ) in mach, a secure unidirectional channel for communication between tasks running on a single system ; ( 2 ) in ip transport protocols, an integer identifier used to select a receiver for an incoming packet or to specify the sender of an outgoing packet. the portable operating system interface. an operating - system interface standardization effort supported by iso / iec, ieee, and the open group. - power child - power parent - preemptive multitasking a type of multitasking in which the operating system can interrupt a currently running program in order to run another program, as needed. 
a phase of active matching in which a candidate driver communicates with a device and verifies whether it can drive it. the driver \u2019 s probe member function is invoked to kick off this phase. the driver returns a probe score that reflects its ability to drive the device. see also driver matching a bsd abstraction for a running program. a process \u2019 resources include a virtual address space, threads, and file descriptors. in os x, a process is based on one mach task and one or more mach threads. a driver object that provides services of some kind to its client. in a driver stack, the provider in a provider / client relationship is closer to the", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6044115529061124, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:623d2e8d-3749-4e06-86a1-86049d5916e7>", "chunk_index": 6, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.547214"} | |
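The probe phase described above, where each candidate driver returns a score reflecting its ability to drive a device and the best scorer wins, can be sketched as follows. The class and score values are illustrative assumptions, not kernel interfaces.

```python
# Sketch of probe-score selection during active matching: each candidate
# driver reports a score for the device (None means it cannot drive it),
# and the highest-scoring candidate is chosen.

class Driver:
    def __init__(self, name, score_for):
        self.name = name
        self.score_for = score_for  # maps device id -> probe score

    def probe(self, device_id):
        return self.score_for.get(device_id)  # None: cannot drive this device

candidates = [
    Driver("GenericDisk", {"disk": 1000, "cdrom": 1000}),
    Driver("VendorDisk", {"disk": 2000}),  # vendor-specific, scores higher
]

device = "disk"
viable = [d for d in candidates if d.probe(device) is not None]
best = max(viable, key=lambda d: d.probe(device))
print(best.name)  # VendorDisk
```

The vendor-specific driver outscoring the generic one reflects the usual intent of scoring: more specific matches beat catch-all drivers.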
| {"text": "| | this article needs additional citations for verification. ( march 2011 ) | nuclear meltdown is an informal term for a severe nuclear reactor accident that results in core damage from overheating. the term is not officially defined by the international atomic energy agency or by the u. s. nuclear regulatory commission. however, it has been defined to mean the accidental melting of the core of a nuclear reactor, and is in common usage a reference to the core ' s either complete or partial collapse. \" core melt accident \" and \" partial core melt \" are the analogous technical terms for a meltdown. a core melt accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. this differs from a fuel element failure, which is not caused by high temperatures. a meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate or be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. alternately, in a reactor plant such as the rbmk - 1000, an external fire may endanger the core, leading to a meltdown. once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel ( such as uranium, plutonium, or thorium ) and fission products ( such as cesium - 137, krypton - 88, or iodine - 131 ) within the fuel elements can leach out into the coolant. subsequent failures can permit these radioisotopes to breach further layers of containment. superheated steam and hot metal inside the core can lead to fuel - coolant interactions, hydrogen explosions, or water hammer, any of which could destroy parts of the containment. 
a meltdown is considered very serious because of the potential, however remote, that radioactive materials could breach all containment and escape ( or be released ) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby. nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. if the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. a core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat. a core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. the reason may be one", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6198723861798687, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.683725"} | |
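The record notes that a core damage incident can occur even after shutdown because the fuel continues to produce decay heat. As a rough illustration, one common rule-of-thumb estimate (the Way-Wigner approximation, which is my assumption here and not stated in the source) puts decay power at a few percent of full power shortly after shutdown:

```python
# Way-Wigner rule-of-thumb for decay heat (an approximation, valid only
# roughly, for t seconds after shutdown of a reactor operated at full
# power for T seconds):
#   P(t) / P0 ~= 0.066 * (t**-0.2 - (t + T)**-0.2)

def decay_heat_fraction(t_seconds, operating_seconds):
    """Approximate decay power as a fraction of prior full power."""
    t = float(t_seconds)
    T = float(operating_seconds)
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

one_year = 365 * 24 * 3600.0
# Ten seconds after shutting down a reactor operated for one year,
# decay heat is still on the order of a few percent of full power:
print(round(decay_heat_fraction(10.0, one_year), 4))
```

Even a few percent of a gigawatt-scale thermal output is tens of megawatts, which is why cooling must continue well after shutdown.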
| {"text": "efficiency ( when using an inert gas as a coolant ) and in others may form an insulating \" bubble \" of steam surrounding the fuel assemblies ( for pressurized water reactors ). in the latter case, due to localized heating of the \" steam bubble \" due to decay heat, the pressure required to collapse the \" steam bubble \" may exceed reactor design specifications until the reactor has had time to cool down. ( this event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the emergency core cooling system may be turned on ). in a depressurization fault, a gas - cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel ; however, as long as at least one gas circulator is available, the fuel will be kept cool. - in an uncontrolled power excursion accident, a sudden power spike in the reactor exceeds reactor design specifications due to a sudden increase in reactor reactivity. an uncontrolled power excursion occurs due to significantly altering a parameter that affects the neutron multiplication rate of a chain reaction ( examples include ejecting a control rod or significantly altering the nuclear characteristics of the moderator, such as by rapid cooling ). in extreme cases the reactor may proceed to a condition known as prompt critical. this is especially a problem in reactors that have a positive void coefficient of reactivity, a positive temperature coefficient, are overmoderated, or can trap excess quantities of deleterious fission products within their fuel or moderators. many of these characteristics are present in the rbmk design, and the chernobyl disaster was caused by such deficiencies as well as by severe operator negligence. 
western light water reactors are not subject to very large uncontrolled power excursions because loss of coolant decreases, rather than increases, core reactivity ( a negative void coefficient of reactivity ) ; \" transients, \" as the minor power fluctuations within western light water reactors are called, are limited to momentary increases in reactivity that will rapidly decrease with time ( approximately 200 % - 250 % of maximum neutronic power for a few seconds in the event of a complete rapid shutdown failure combined with a transient ). - core - based fires endanger the core and can cause the fuel assemblies to melt. a fire may be caused by air entering a graphite moderated reactor, or a liquid - sodium cooled reactor. graphite is also subject to accumulation of wigner energy, which can overhea", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6023676748973488, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.685841"} | |
| {"text": "the core and can cause the fuel assemblies to melt. a fire may be caused by air entering a graphite moderated reactor, or a liquid - sodium cooled reactor. graphite is also subject to accumulation of wigner energy, which can overheat the graphite ( as happened at the windscale fire ). light water reactors do not have flammable cores or moderators and are not subject to core fires. gas - cooled civilian reactors, such as the magnox, ungg, and agcr type reactors, keep their cores blanketed with non reactive carbon dioxide gas, which cannot support a fire. modern gas - cooled civilian reactors use helium, which cannot burn, and have fuel that can withstand high temperatures without melting ( such as the high temperature gas cooled reactor and the pebble bed modular reactor ). - byzantine faults and cascading failures within instrumentation and control systems may cause severe problems in reactor operation, potentially leading to core damage if not mitigated. for example, the browns ferry fire damaged control cables and required the plant operators to manually activate cooling systems. the three mile island accident was caused by a stuck - open pilot - operated pressure relief valve combined with a deceptive water level gauge that misled reactor operators, which resulted in core damage. light water reactors ( lwrs ) before the core of a light water nuclear reactor can be damaged, two precursor events must have already occurred : - a limiting fault ( or a set of compounded emergency conditions ) that leads to the failure of heat removal within the core ( the loss of cooling ). low water level uncovers the core, allowing it to heat up. - failure of the emergency core cooling system ( eccs ). the eccs is designed to rapidly cool the core and make it safe in the event of the maximum fault ( the design basis accident ) that nuclear regulators and plant engineers could imagine. there are at least two copies of the eccs built for every reactor. 
each division ( copy ) of the eccs is capable, by itself, of responding to the design basis accident. the latest reactors have as many as four divisions of the eccs. this is the principle of redundancy, or duplication. as long as at least one eccs division functions, no core damage can occur. each of the several divisions of the eccs has several internal \" trains \" of components. thus the eccs divisions themselves have internal redundancy \u2013 and can withstand failures of components within them. the three mile island accident was a compounded", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6039552567837813, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6>", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.686944"} | |
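The redundancy principle above ("as long as at least one ECCS division functions, no core damage can occur") can be quantified under an idealized independence assumption. Real divisions share some failure modes, so this understates the true joint failure probability; the per-division probability used below is purely illustrative.

```python
# Idealized redundancy model: if each of n independent divisions fails
# on demand with probability p, the chance that all n fail together is p**n.

def all_divisions_fail(p_single, n_divisions):
    return p_single ** n_divisions

p = 0.01  # illustrative per-division failure probability, not a real figure
for n in (1, 2, 4):
    print(n, all_divisions_fail(p, n))
```

Going from one division to four drops the idealized joint failure probability from 1 in 100 to 1 in 100 million, which is the motivation for building multiple independent divisions, each with internally redundant trains.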
| {"text": "is not susceptible to meltdown, additional capabilities of heat removal are provided by using regular atmospheric airflow as a means of backup heat removal, by having it pass through a heat exchanger and rising into the atmosphere due to convection, achieving full residual heat removal. the vhtr is scheduled to be prototyped and tested at idaho national laboratory within the next decade ( as of 2009 ) as the design selected for the next generation nuclear plant by the us department of energy. this reactor will use a gas as a coolant, which can then be used for process heat ( such as in hydrogen production ) or for the driving of gas turbines and the generation of electricity. a similar highly advanced gas cooled reactor originally designed by west germany ( the avr reactor ) and now developed by south africa is known as the pebble bed modular reactor. it is an inherently safe design, meaning that core damage is physically impossible, due to the design of the fuel ( spherical graphite \" pebbles \" arranged in a bed within a metal rpv and filled with triso ( or quadriso ) pellets of uranium, thorium, or mixed oxide within ). a prototype of a very similar type of reactor has been built by the chinese, htr - 10, and has worked beyond researchers ' expectations, leading the chinese to announce plans to build a pair of follow - on, full - scale 250 mwe, inherently safe, power production reactors based on the same concept. ( see nuclear power in the people ' s republic of china for more information. ) experimental or conceptual designs some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety. the pius ( process inherent ultimate safety ) designs, originally engineered by the swedes in the late 1970s and early 1980s, are lwrs that by virtue of their design are resistant to core damage. no units have ever been built. 
power reactors, including the deployable electrical energy reactor, a larger - scale mobile version of the triga for power generation in disaster areas and on military missions, and the triga power system, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the triga due to the uranium zirconium hydride fuel used. the hydrogen moderated self - regulating nuclear power module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the triga, also possesses these extreme safety and stability characteristics, and has attracted a good deal", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.600121326222177, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6>", "chunk_index": 12, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.695617"} | |
| {"text": "signal - to - noise ratio ( often abbreviated snr or s / n ) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. it is defined as the ratio of signal power to the noise power. a ratio higher than 1 : 1 indicates more signal than noise. while snr is commonly quoted for electrical signals, it can be applied to any form of signal ( such as isotope levels in an ice core or biochemical signaling between cells ). signal - to - noise ratio is sometimes used informally to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. for example, in online discussion forums and other online communities, off - topic posts and spam are regarded as \" noise \" that interferes with the \" signal \" of appropriate discussion. where p is average power. both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. if the signal and the noise are measured across the same impedance, then the snr can be obtained by calculating the square of the amplitude ratio : where a is root mean square ( rms ) amplitude ( for example, rms voltage ). because many signals have a very wide dynamic range, snrs are often expressed using the logarithmic decibel scale. in decibels, the snr is defined as which may equivalently be written using amplitude ratios as the concepts of signal - to - noise ratio and dynamic range are closely related. dynamic range measures the ratio between the strongest un - distorted signal on a channel and the minimum discernable signal, which for most purposes is the noise level. snr measures the ratio between an arbitrary signal level ( not necessarily the most powerful signal possible ) and noise. measuring signal - to - noise ratios requires the selection of a representative or reference signal. 
in audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 khz at + 4 dbu ( 1. 228 vrms ). snr is usually taken to indicate an average signal - to - noise ratio, as it is possible that ( near ) instantaneous signal - to - noise ratios will be considerably different. the concept can be understood as normalizing the noise level to 1 ( 0 db ) and measuring how far the signal ' stands out '. difference from conventional power in physics power ( physics ) of an ac signal is defined as but in signal processing and communication we usually assume that", "subdomain_id": "subdomain_quantum_metrology", "similarity_score": 0.611553552533917, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:a29a332a-7233-45be-ae61-4e108349e3b2>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.734504"} | |
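The decibel relationships referred to in this record can be written out explicitly: for powers, SNR_dB = 10 log10(P_signal / P_noise), and for RMS amplitudes measured across the same impedance the power ratio is the amplitude ratio squared, so SNR_dB = 20 log10(A_signal / A_noise).

```python
# SNR in decibels, from a power ratio and from an RMS amplitude ratio.
import math

def snr_db_from_power(p_signal, p_noise):
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal, a_noise):
    # Power scales with amplitude squared (same impedance), hence 20 log10.
    return 20 * math.log10(a_signal / a_noise)

# A signal with 100x the noise power is 20 dB above it; the same ratio
# expressed as RMS amplitudes is a factor of 10.
print(snr_db_from_power(100.0, 1.0))     # 20.0
print(snr_db_from_amplitude(10.0, 1.0))  # 20.0
```

The two functions agree whenever the amplitude ratio is the square root of the power ratio, which is exactly the equivalence the text states.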
| {"text": "as normalizing the noise level to 1 ( 0 db ) and measuring how far the signal ' stands out '. difference from conventional power in physics power ( physics ) of an ac signal is defined as but in signal processing and communication we usually assume that so that usually we don ' t include that resistance term while measuring power or energy of a signal. this usually causes some confusions among readers but the resistance term is not significant for operations performed in signal processing. most of cases the power of a signal would be where ' a ' is the amplitude of the ac signal. in some places people just use as the constant term doesn ' t affect much during the calculations. alternative definition where is the signal mean or expected value and is the standard deviation of the noise, or an estimate thereof. [ note 2 ] notice that such an alternative definition is only useful for variables that are always non - negative ( such as photon counts and luminance ). thus it is commonly used in image processing, where the snr of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. sometimes snr is defined as the square of the alternative definition above. the rose criterion ( named after albert rose ) states that an snr of at least 5 is needed to be able to distinguish image features at 100 % certainty. an snr less than 5 means less than 100 % certainty in identifying image details. snr for various modulation systems amplitude modulation channel signal - to - noise ratio is given by where w is the bandwidth and ka is modulation index output signal - to - noise ratio ( of am receiver ) is given by frequency modulation channel signal - to - noise ratio is given by output signal - to - noise ratio is given by improving snr in practice all real measurements are disturbed by noise. 
this includes electronic noise, but can also include external events that affect the measured phenomenon \u2014 wind, vibrations, gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and of the sensitivity of the device. it is often possible to reduce the noise by controlling the environment. otherwise, when the characteristics of the noise are known and are different from the signals, it is possible to filter it or to process the signal. for example, it is sometimes possible to use a lock - in amplifier to modulate and confine the signal within a very narrow bandwidth and then filter the detected signal to the narrow band where it resides, thereby eliminating most of the broadband noise.", "subdomain_id": "subdomain_quantum_metrology", "similarity_score": 0.6246906600941576, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:a29a332a-7233-45be-ae61-4e108349e3b2>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.735618"} | |
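The "alternative definition" in this record (signal mean divided by noise standard deviation, for non-negative quantities) is easy to compute for an image patch, and the Rose criterion can then be checked directly. The sample pixel values below are made up for illustration.

```python
# Image-style SNR: mean pixel value over the standard deviation of pixel
# values in a neighborhood. Only meaningful for non-negative data such as
# photon counts or luminance, as the text notes.
import statistics

def image_snr(pixels):
    return statistics.mean(pixels) / statistics.pstdev(pixels)

patch = [50, 52, 48, 51, 49, 50, 53, 47]  # illustrative pixel values
snr = image_snr(patch)
print(snr > 5)  # True: by the Rose criterion, features here are distinguishable
```

Here the mean is 50 and the spread is small, so the ratio comfortably exceeds the Rose threshold of 5; a noisier patch with the same mean would fall below it.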
| {"text": ". for example, it is sometimes possible to use a lock - in amplifier to modulate and confine the signal within a very narrow bandwidth and then filter the detected signal to the narrow band where it resides, thereby eliminating most of the broadband noise. when the signal is constant or periodic and the noise is random, it is possible to enhance the snr by averaging the measurement. in this case the noise goes down as the square root of the number of averaged samples. digital signals when a measurement is digitised, the number of bits used to represent the measurement determines the maximum possible signal - to - noise ratio. this is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. this noise level is non - linear and signal - dependent ; different calculations exist for different signal models. quantization noise is modeled as an analog error signal summed with the signal before quantization ( \" additive noise \" ). this theoretical maximum snr assumes a perfect input signal. if the input signal is already noisy ( as is usually the case ), the signal ' s noise may be larger than the quantization noise. real analog - to - digital converters also have other sources of noise that further decrease the snr compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. although noise levels in a digital system can be expressed using snr, it is more common to use eb / no, the energy per bit per noise power spectral density. the modulation error ratio ( mer ) is a measure of the snr in a digitally modulated signal. fixed point assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak - to - peak amplitude of one quantization level, making the amplitude ratio 2n / 1. 
the formula is then : this relationship is the origin of statements like \" 16 - bit audio has a dynamic range of 96 db \". each extra quantization bit increases the dynamic range by roughly 6 db. assuming a full - scale sine wave signal ( that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal ), the quantization noise approximates a sawtooth wave with peak - to - peak amplitude of one quantization level and uniform distribution. in this case, the snr is approximately floating point note that the dynamic range is much larger than fixed - point,", "subdomain_id": "subdomain_quantum_metrology", "similarity_score": 0.6345529050947387, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:a29a332a-7233-45be-ae61-4e108349e3b2>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:41.736672"} | |
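The two rules of thumb in this record, roughly 6 dB of dynamic range per quantization bit and the quantization SNR of a full-scale sine wave, follow from the standard formulas 20 log10(2^n) and 6.02 n + 1.76 dB respectively. A short check:

```python
# Quantization figures for an ideal n-bit converter.
import math

def dynamic_range_db(bits):
    # Amplitude ratio of 2**n between full scale and one quantization level.
    return 20 * math.log10(2 ** bits)

def sine_quantization_snr_db(bits):
    # Standard result for a full-scale sine wave into an ideal n-bit quantizer.
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))          # 96.3: the "16-bit audio" figure
print(round(sine_quantization_snr_db(16), 2))  # 98.08
```

Each additional bit doubles the amplitude ratio and therefore adds 20 log10(2), about 6.02 dB, which is the "roughly 6 dB per bit" statement in the text.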
| {"text": "corrosion and easier to weld. Russia used such an aluminium-scandium alloy for its MiG fighter planes during the Cold War era. But from a commercial perspective, the alloy is prohibitively expensive. The scientific challenge that Professor Wu's centre has taken on is to determine how scandium works when added to aluminium alloys, and then to find a cheaper substitute. \"We are working at the atomic level. In metallurgy, just a few atoms in a million added to an alloy can influence engineering at the macro-scale; how we control the homogeneity of metal sheeting when it is rolled, or the integrity of the metal when it is fabricated into a component.\" However, Professor Wu says the key factor with such industrial research is achieving this economically. \"From just a materials research perspective, without worrying about costs, we can make the most wonderful metal and alloy materials. But the goal is not just to develop stronger, lighter, more durable and more stable metals. They must also be produced through more efficient and cheaper manufacturing with lower energy consumption, both during construction and during the aircraft's operational life over 25 or more years. We have to create new materials that not only have the best performance but are also the cheapest.\" This is what makes industrial science exciting. Yes, the fundamental science must be good, but it is the industrial science that has to deliver this material, functionally and cost-effectively, to industry. And it doesn't stop with developing the material; new manufacturing processes have to be designed for each new material developed.\" Professor Wu's approach to science has been strongly influenced by the 20 years she worked with the Rolls-Royce aerospace division – \"a technology-driven company and world leader in materials technology and manufacturing\". It has imbued her with a robust 'can-do' attitude, which is why other major European companies have set up collaborations with her and her centre. \"It is because we deliver on the promise,\" Professor Wu says. Her own special field of interest is titanium metallurgy. Aside from the offer by Monash to replicate her research facilities in Melbourne, the other attraction of moving from the UK was that Australia has 51 per cent of the world's known titanium ore deposits. She was keen to apply her metals science closer to its raw materials. Professor Wu has been involved extensively in developing titanium and titanium aluminide (TiAl) alloys, and in advanced powder processing for titanium and nickel alloy powders. Her most recent research has", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6090548418980923, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:1f51fba7-5bea-4eb4-9013-830e98f817b7>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.311173"} | |
| {"text": ". In this initial test electrodes were placed on the subject's scalp to measure brain responses. Professor Roberts said: \"EEG gives graph-like measurements, and when the brain reads a sentence that does not make semantic sense it registers what we call an N400 effect – a negative wave modulation. When the brain reads a grammatically incorrect sentence it registers a P600 effect – an effect which continues to last after the word that triggered it was first read.\" Researchers also found that when participants read the word producing the functional shift there was no N400 effect, indicating that the meaning was accepted, but a P600 effect was observed, which indicates a positive re-evaluation of the word. The team is now using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to test which areas of the brain are most affected and the kind of impact it could have in maintaining healthy brain activity. Professor Davis added: \"This interdisciplinary work is good for brain science because it offers permanent scripts of the human mind working moment to moment. It is good for literature as it illustrates primary human thinking. Through the two disciplines, we may discover new insights into the very motions of the mind.\" Source: University of Liverpool, \"Reading Shakespeare has dramatic effect on human brain.\" December 18th, 2006. http://phys.org/news85664210.html", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6022633112995239, "token_count": 282, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:5627c492-fb6d-410a-ab45-d71b3f5a2287>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.319249"} | |
| {"text": "analog input channels. Temperature is a measure of the average kinetic energy of the particles in a sample of matter, expressed in units of degrees on a standard scale. You can measure temperature in many different ways that vary in equipment cost and accuracy. The most common types of sensors are thermocouples, RTDs, and thermistors. Figure 1. Thermocouples are inexpensive and can operate over a wide range of temperatures. Thermocouples are the most commonly used temperature sensors because they are relatively inexpensive yet accurate sensors that can operate over a wide range of temperatures. A thermocouple is created when two dissimilar metals touch and the contact point produces a small open-circuit voltage as a function of temperature. You can use this thermoelectric voltage, known as the Seebeck voltage, to calculate temperature. For small changes in temperature, the voltage is approximately linear. You can choose from different types of thermocouples designated by capital letters that indicate their compositions according to American National Standards Institute (ANSI) conventions. The most common types of thermocouples include B, E, K, N, R, S, and T. For more information on thermocouples, read the Engineer's Toolbox for thermocouples. Figure 2. RTDs are made of metal coils and can measure temperatures up to 850 °C. A platinum RTD is a device made of coils or films of metal (usually platinum). When heated, the resistance of the metal increases; when cooled, the resistance decreases. Passing current through an RTD generates a voltage across the RTD. By measuring this voltage, you can determine its resistance and, thus, its temperature. The relationship between resistance and temperature is relatively linear. Typically, RTDs have a resistance of 100 Ω at 0 °C and can measure temperatures up to 850 °C. For more information on RTDs, read the Engineer's Toolbox for RTDs. Figure 3. Passing current through a thermistor generates a voltage proportional to temperature. A thermistor is a piece of semiconductor made from metal oxides that are pressed into a small bead, disk, wafer, or other shape and sintered at high temperatures. Lastly, they are coated with epoxy or glass. As with RTDs, you can pass a current through a thermistor to read the voltage across the thermistor and determine its temperature. However, unlike RTDs, thermistors have a", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6204252451119193, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:e3d9f26b-9215-49bf-a296-3724a4a14b64>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.514091"} | |
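The "relatively linear" RTD behavior described above can be sketched numerically. This is a minimal example under stated assumptions: it uses the linear approximation R(T) = R0·(1 + α·T) with the common IEC 60751 coefficient α ≈ 0.00385 /°C for a PT100 (the coefficient and function names are mine, not from the text; real instruments use the fuller Callendar–Van Dusen polynomial for accuracy over wide ranges).

```python
# Linear PT100 approximation -- a sketch, not production calibration code.
R0 = 100.0       # ohms at 0 °C, as stated in the text for a typical RTD
ALPHA = 0.00385  # per °C; assumed IEC 60751 mean temperature coefficient

def rtd_resistance(temp_c: float) -> float:
    """Resistance of the RTD at a given temperature (linear model)."""
    return R0 * (1.0 + ALPHA * temp_c)

def rtd_temperature(resistance_ohms: float) -> float:
    """Invert the linear model: recover temperature from measured resistance."""
    return (resistance_ohms / R0 - 1.0) / ALPHA

print(round(rtd_temperature(138.5), 1))  # 100.0 -- 138.5 ohms reads as 100 °C
```

In practice you measure the voltage across the RTD at a known excitation current, compute resistance via Ohm's law, then apply a conversion like `rtd_temperature`.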
| {"text": "The glitter of gold may hold more than just beauty, or so says a team of MIT researchers that is working on ways to use tiny gold rods to fight cancer, deliver drugs and more. But before gold nanorods can live up to their potential, scientists must figure out how to overcome one major difficulty: the surfaces of the tiny particles are coated with an uncooperative molecule (a byproduct of the synthesis process) that prevents researchers from creating nanorods with the features they want. \"The surface chemistry is really key to everything,\" said Kimberly Hamad-Schifferli, assistant professor of biological and mechanical engineering at MIT. \"For all of these nifty applications to work, someone's got to sit down and do the dirty work of understanding the surface.\" Hamad-Schifferli and her colleagues published two papers this month describing ways to manipulate the nanorods' surface, which could allow researchers to design nanorods with specific useful functions. As their name implies, gold nanorods are tiny cylinders of gold, about 10 billionths of a meter wide and 40 billionths of a meter long. They differ from traditional, spherical gold nanoparticles in one very important respect – they can absorb infrared light. That means they can theoretically be activated by infrared laser without damaging surrounding cells, which do not absorb infrared light. Before that can happen, scientists must figure out how to deal with an organic molecule known as CTAB that coats the outer surface of gold nanorods and tends to detach from and reattach itself to the surface. The molecule, a byproduct of the synthesis reaction that produces the nanorods, makes it difficult to attach other molecules for delivery, such as drugs or DNA. The team's two recent papers describe how the CTAB influences heat dissipation and how to remove the CTAB and replace it with another organic molecule. In the first paper, published online Aug. 12 in the Journal of Physical Chemistry C, they found that a low concentration of the CTAB in the surrounding solution accelerates heat dissipation after the nanorod is hit with infrared light. When the concentration of CTAB is high, heat is dissipated more slowly. That information could help scientists design nanorods that fight cancer by burning away tumor cells when activated with infrared light. In the second paper, published online Aug", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.619963523295878, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:e7aca56d-5302-4ce0-8b6f-71018d022f59>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.659491"} | |
| {"text": "infrared light. When the concentration of CTAB is high, heat is dissipated more slowly. That information could help scientists design nanorods that fight cancer by burning away tumor cells when activated with infrared light. In the second paper, published online Aug. 22 in the journal Langmuir, the team demonstrated how to replace CTAB with a more useful molecule – a sulfur-containing group known as a thiol. This molecule binds more strongly to the nanorod, so it doesn't detach and reattach like CTAB. In addition, other molecules, such as DNA, can be easily attached to the end of the thiol. These surface chemistry studies are critical to lay the groundwork for development of gold nanorods, according to Hamad-Schifferli. \"People have dreamed up all of these cool applications for nanorods, but one of the biggest bottlenecks to making this a reality is this interface,\" she said. In the future, Hamad-Schifferli and her colleagues hope to build gold nanorods that carry DNA designed for a specific function in the target cell. For example, the DNA could shut down production of a protein that is being overexpressed. Lead author of the Langmuir paper is Andy Wijaya, a graduate student in chemical engineering. Lead authors of the JPCC paper are Aaron Schmidt, a postdoctoral associate in mechanical engineering, and Joshua Alper, a graduate student in mechanical engineering. Other authors are Matteo Chiesa, a visiting scholar in the Technology and Development Program, Gang Chen, the Rohsenow Professor of Mechanical Engineering, and Sarit Das, a visiting professor in mechanical engineering. The work was funded by the Norwegian Research Council, the Ford-MIT Alliance and the National Science Foundation.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6122845203576175, "token_count": 364, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:e7aca56d-5302-4ce0-8b6f-71018d022f59>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.660165"} | |
| {"text": "Key: \"S:\" = show synset (semantic) relations, \"W:\" = show word (lexical) relations. Display options for sense: (gloss) \"an example sentence\" - S: (n) guard (a person who keeps watch over something or someone) - S: (n) guard (the person who plays that position on a football team) \"the left guard was injured on the play\" - S: (n) guard, safety, safety device (a device designed to prevent injury or accidents) - S: (n) guard (a posture of defence in boxing or fencing) \"keep your guard up\" - S: (n) guard (the person who plays the position of guard on a basketball team) - S: (n) guard (a military unit serving to protect some place or person) - S: (n) precaution, safeguard, guard (a precautionary measure warding off impending danger or damage or injury etc.) \"he put an ice pack on the injury as a precaution\"; \"an insurance policy is a good safeguard\"; \"we let our guard down\" - S: (n) guard duty, guard, sentry duty, sentry go (the duty of serving as a sentry) \"he was on guard that night\" - S: (n) guard ((American football) a position on the line of scrimmage) \"guards must be good blockers\" - S: (n) guard (a position on a basketball team) - S: (v) guard (to keep watch over) \"there would be men guarding the horses\" - S: (v) guard, ward (watch over or shield from danger or harm; protect) \"guard my possessions while I'm away\" - S: (v) defend, guard, hold (protect against a challenge or attack) \"hold that position behind the trees!\"; \"hold the bridge against the enemy's attacks\" - S: (v) guard (take precautions in order to avoid some unwanted consequence) \"guard against becoming too friendly with the staff\"; \"guard against infection\"", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.606075795872774, "token_count": 445, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:9ab6b329-0679-4714-af36-66635ea92941>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.685810"} | |
| {"text": "| The surprising appearance of nanotubular fullerene D5h(1)-C90 | The previously undetected fullerene D5h(1)-C90 – with a distinct nanotubular shape – has been isolated as the major C90 isomer produced from Sm2O3-doped graphite rods and structurally identified by single-crystal X-ray diffraction. Fullerenes are well-defined molecules that consist of closed cages of carbon atoms with distinct inside and outside surfaces. They tend to form very small crystals; consequently, high-resolution data was collected using small-molecule crystallography at ALS Beamline 11.3.1. The discovery of nanotubular D5h(1)-C90, which is a fullerene with 90 carbon atoms and D5h symmetry, opens a bridge between molecular fullerenes and carbon nanotubes. In recent years, the well-known solid allotropes diamond and graphite have been joined by new allotropes: fullerenes, carbon nanotubes, and graphene. Diamond consists of four-coordinate carbon atoms with tetrahedral geometry, while the other allotropes involve three-coordinate carbon atoms. In graphite, these carbon atoms are arranged in hexagonal sheets that are stacked upon one another. Graphene is simply a single hexagonal graphitic sheet with a thickness of only one atom. Carbon nanotubes can be conceived as hexagonal graphene sheets rolled into cylindrical shapes. These tubes may consist of a single wall of carbon atoms (single-walled carbon nanotubes) or may consist of multiple layers of tubes nested inside one another (multi-wall carbon nanotubes). Carbon nanotubes are produced as mixtures in which the individual tubes can vary in length, width, precise alignment of the component hexagons, and the chemical nature of the unique carbon atoms at the two ends of the tube. Graphene is likewise produced as sheets of varying size, with generally less well-defined structures for those carbon atoms at the outer edges. Fullerenes of varying sizes (from 60 to more than 500 carbon atoms) have also been observed, and individual molecules such as C60 and C70 have been isolated in pure form. Each fullerene is constructed of 12 pentagonal rings of carbon atoms and a number of hexagonal rings. For example, the prototypical C60, the most readily prepared fullerene, has 20 hexagonal rings in addition to the 12 pentagons", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6501534028918, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:2f9c3991-1432-4449-97f6-f09dea37f2f9>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.694915"} | |
| {"text": "fullerene is constructed of 12 pentagonal rings of carbon atoms and a number of hexagonal rings. For example, the prototypical C60, the most readily prepared fullerene, has 20 hexagonal rings in addition to the 12 pentagons. Isolating higher fullerenes in an isomerically pure form is challenging, especially since the number of isomers increases as the size of the fullerene cage expands, as per the isolated pentagon rule (IPR). The IPR requires that each pentagon be surrounded by five hexagons to avoid strain-inducing pentagon–pentagon contact. There are 46 isomers of C90 that obey the IPR, but none of these isomers had previously been obtained in pure form. Indeed, in the absence of Sm2O3, no D5h(1)-C90 has ever been detected. The oblong fullerene D5h(1)-C90 belongs to a set of nanotube-like fullerenes with the formula C60+10n, which have alternating D5h symmetry (when n is odd and the end caps are eclipsed) or D5d symmetry (when n is even and the end caps are staggered). The structure of D5h(1)-C90 (n = 3) is thus closely related to that of C70 (n = 1). However, within this family only C60, C70, and D5h(1)-C90 have been isolated in pure form and characterized crystallographically. The isolation of D5h(1)-C90 provides a unique molecular model for carbon nanotubes that will allow scientists to explore the chemical and physical properties of a distinctly cylindrical fullerene. The armchair-style belts that are found at the waist of D5h(1)-C90 are a unique feature of this particular fullerene, but are the fundamental building block of carbon nanotubes. Research conducted by H. Yang, A. Jiang, Z. Wang, and Z. Liu (Zhejiang University, P. R. China); H. Jin (Jiliang University, P. R. China); B. Q. Mercado, M. M. Olmstead, and A. L. Balch (University of California, Davis); and C. M. Beavers (Berkeley Lab). Research funding: National Science Foundation and the Natural Science Foundation of China. Operation of the ALS is supported by the U.S.", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6419561882470741, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:2f9c3991-1432-4449-97f6-f09dea37f2f9>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:42.695938"} | |
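The C60+10n family rule stated above (odd n gives eclipsed D5h end caps, even n gives staggered D5d) is simple enough to encode directly. A minimal sketch, with the function name my own; note the rule as given applies to the elongated members (n ≥ 1), since C60 itself (n = 0) has full icosahedral symmetry.

```python
def nanotubular_fullerene(n: int) -> tuple[str, str]:
    """For the nanotube-like family C(60+10n) with n >= 1:
    odd n  -> D5h symmetry (end caps eclipsed),
    even n -> D5d symmetry (end caps staggered)."""
    if n < 1:
        raise ValueError("rule applies to elongated members, n >= 1")
    atoms = 60 + 10 * n
    symmetry = "D5h" if n % 2 == 1 else "D5d"
    return f"C{atoms}", symmetry

print(nanotubular_fullerene(1))  # ('C70', 'D5h')
print(nanotubular_fullerene(3))  # ('C90', 'D5h') -- the isomer isolated here
```

This makes the kinship between C70 (n = 1) and D5h(1)-C90 (n = 3) explicit: both are odd-n, eclipsed-cap members of the same series.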
| {"text": "., fabricated a series of small but complete processor chips based on a reversible technology; Frank continues this work at Florida State University. At the University of Gent in Belgium, Alexis De Vos and his colleagues have built several reversible adders and other circuits. It's important to note that building a computer according to a reversible logic diagram does not guarantee low-power operation. Reversibility removes the thermodynamic floor at kT ln 2, but the circuit must still be designed to attain that level of energy savings. The current state of the art is far above the theoretical floor; even the most efficient chips, reversible or not, dissipate somewhere between 10,000 and 10 million times kT ln 2 for each logical operation. Thus it will be some years before reversible technology can be put to the ultimate test of challenging the three-zeptojoule barrier. In the meantime, however, it turns out that some concepts derived from reversible logic are already useful in low-power circuits. One of these is charge recovery, which attempts to recycle packets of electric charge rather than let them drain to ground. Another is adiabatic switching, which avoids wasteful current surges by closing switches only after voltages have had a chance to equalize.", "subdomain_id": "subdomain_quantum_computing", "similarity_score": 0.6191534025674472, "token_count": 276, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:53d5592b-4089-4273-9d8d-fc7396e4e75d>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.021061"} | |
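The "three-zeptojoule barrier" quoted above is just kT ln 2 evaluated near room temperature. A short sketch (function names mine) that computes the Landauer floor and the 10^4–10^7× range the passage cites for real chips:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit_j(temp_k: float = 300.0) -> float:
    """Minimum dissipation for an irreversible bit operation: kT ln 2."""
    return K_B * temp_k * math.log(2)

floor = landauer_limit_j()          # ~2.87e-21 J, i.e. roughly 3 zeptojoules
low, high = floor * 1e4, floor * 1e7  # the 10,000x .. 10,000,000x range above
print(f"{floor:.2e} J per operation at 300 K")
```

So even the best chips mentioned in the text sit somewhere between ~3×10⁻¹⁷ J and ~3×10⁻¹⁴ J per logical operation, which is why closing the gap is framed as a long-term goal.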
| {"text": "of thermal mass. Thermal mass is characterized by dense materials like concrete and earthen materials, and also by an extremely good material – water. These materials can readily absorb solar radiation, hold its warmth, and easily and evenly give it up to adjacent spaces. Heat capture, storage and distribution follow a natural and predictable behavior. Sunlight heats the surfaces it strikes. The amount of heat held within the material depends on the material composition – straw is a terrible holder, concrete is a better holder. When sunlight is no longer available, the material gives its captured heat to adjacent cooler conditions. Generally there are three passive heating building concepts – direct gain, indirect gain and isolated gain. These concepts have cooling strategies and applications inherent within them as well. Direct gain – simply stated, sunlight comes directly through windows into the space to be heated. The building materials struck by the sunlight are thermal mass materials – concrete or tile floor, masonry walls, or even strategically placed containers of water. Building windows act in exactly the same way as solar panel glazing – they let the sunlight (short-wave radiation) in and inhibit heat (long-wave radiation) from escaping. A direct gain design is always working, letting in not only direct sunlight but also the diffuse light of cloudy days, and the intense light of summer. Like any system, optimization is the goal – so the building eaves and overhangs become a designed-in optimizing element: summertime conditions, when heating is not required, are mitigated by keeping the sunlight off of the windows via the overhang, while in the winter the sun is much lower in the sky and can easily skirt under the building's brow. Heating is quite simple in this approach – sunlight, absorbed by the thermal mass materials, solid and/or liquid, is stored as heat. When the space cools in the evening, the heat migrates to the cooling spaces directly (radiation) or by air movement across the surface of the material (convection). For this approach, careful consideration of the site, solar energy availability, and seasonal conditions is necessary to determine the appropriate amount of windows and thermal mass. Too many windows in an Arizona desert setting will result in a human cooker; too few windows in a rim setting will result in not enough capture. This system has worked effectively in Arizona designs, as well as in that sunniest of places, Liverpool, England. For effective cooling, such as in the desert setting, direct gain avoidance is the rule, but the thermal mass of the building can still", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6003938933830272, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:18749c72-d97f-4add-a437-3b6a218805e9>", "chunk_index": 11, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.126415"} | |
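The claim above that material composition determines how much heat is held can be made concrete with the sensible-heat formula Q = m·c·ΔT. A minimal sketch under stated assumptions: the specific heats are typical handbook values (water ≈ 4186 J/(kg·K), concrete ≈ 880 J/(kg·K)), not figures from this text, and they illustrate why water is singled out as "an extremely good material" for thermal mass.

```python
# Sensible-heat storage sketch: Q = m * c * dT.
# Specific heats below are assumed handbook values for illustration.
SPECIFIC_HEAT = {"water": 4186.0, "concrete": 880.0}  # J/(kg*K)

def stored_heat_joules(mass_kg: float, material: str, delta_t_k: float) -> float:
    """Heat stored when a mass of material warms by delta_t_k kelvin."""
    return mass_kg * SPECIFIC_HEAT[material] * delta_t_k

water_q = stored_heat_joules(100, "water", 10)       # 100 kg warmed 10 K
concrete_q = stored_heat_joules(100, "concrete", 10)
print(water_q, concrete_q)  # water stores several times more heat per kg
```

Per unit mass, water stores roughly 4.8× as much heat as concrete for the same temperature swing, which is why "strategically placed containers of water" work as direct-gain storage.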
| {"text": "Two researchers from the State Key Laboratory of Millimeter Waves at Southeast University in Nanjing, China, have devised and prototyped a device that acts like a black hole for electromagnetic waves in the microwave spectrum. It consists of 60 concentric rings of metamaterials, a class of ordered composites that can distort light and other waves. Qiang Cheng and Tie Jun Cui call their device an \"omnidirectional electromagnetic absorber\". The 60 rings of circuit board are arranged in concentric layers and coated in copper. Each of the layers is printed with alternating patterns, which resonate or don't resonate in electromagnetic waves. What is truly remarkable is that their device can spiral 99% of the radiation coming from all directions into itself and convert it into heat, acting like an \"electromagnetic black body\" (or \"hole\"). The omnidirectional electromagnetic absorber could be used to harvest the energy that exists in the form of electromagnetic waves and turn it into usable heat. Of course, turning the heat back into electricity isn't a 100% efficient process (far from it), but directly harvesting electromagnetic waves in the classic antenna fashion is far less efficient compared to this black hole. \"Since the lossy core can transfer electromagnetic energies into heat energies, we expect that the proposed device could find important applications in thermal emitting and electromagnetic-wave harvesting.\" Possible uses vary from powering your phone with the existing electromagnetic energy that surrounds it, to wireless power transmission and even powering spaceships – it all depends on the wavelength that the device is tuned to. The question that arises is: would this kind of device have other uses than the constructive ones mentioned above? Electromagnetic wave harvesting? Extremely fascinating. When one thinks about it, it makes sense. Electromagnetism is one of the more powerful forces of the universe (next to gravity and the strong/weak nuclear forces). The inner sci-fi geek in me loves the idea and can only imagine what an EM device could do for humanity in the future. But of course the part of me stuck in reality is still skeptical of such technologies and what their applicable use would be. Very, very cool science though! – Consumer Energy Alliance, \"A balanced approach towards America's energy future\"", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6165845870075718, "token_count": 500, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:9223455a-55b0-44bb-b0c4-c351a964fb39>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.668476"} | |
| {"text": "The arrow of time. We are all aware of an intuitive \"flow\" of time from past to future. Not only do we feel this flow of time, but we also see it manifested in the behaviour of objects which change over time. Many objects seem to behave differently in the forward time direction when compared to the backward time direction. For example, we don't see a spilt glass of water jumping up and going back into the glass; we don't see a broken egg reforming itself. These effects all add to the impression that there is some sort of \"forward direction\" in the time dimension. This directionality is called the arrow of time. However, this \"arrow of time\" is something of a mystery to physicists because, at the microscopic level, all fundamental physical processes appear to be time-reversible (we'll consider this later). Also, as shown on the Time and the Block Universe page, our universe appears to have a spacetime structure in which all of time is laid out in a \"block universe\", i.e., there is no actual \"flow\" of time, no movement of a \"now\" point. So on this page we will investigate the cause of this mysterious \"arrow of time\". Entropy can be considered the amount of disorder in a system. For example, a car that has rusted could be said to have a greater entropy value than a new car: bits of the car may have fallen off, the paint may be flaking. Basically, the molecules of the car have become more disordered over time: entropy has increased. As has just been discussed, all microscopic processes appear to be time-reversible. The question of why we see an \"arrow of time\" in macroscopic processes has therefore presented physics with a long-standing conundrum. For this reason, much attention has focused on the fact that the entropy of a closed system increases with time, i.e., a system will gradually become more disordered with time. Eventually the system (gas in a closed container, for example) will reach a state where all its molecules are completely randomly orientated. This state is called thermal equilibrium. The rule that entropy increases with time is called the second law of thermodynamics. The reason for this increase in entropy can be seen from a purely probabilistic argument: a system will have many more possible disordered states than ordered states, so a system which changes state randomly will most likely move to", "subdomain_id": "subdomain_quantum_thermodynamics", "similarity_score": 0.6659422138096727, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.874198"} | |
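The probabilistic argument above can be demonstrated with a tiny simulation. This is a sketch of the classic Ehrenfest urn model (my choice of illustration, not from the text): N gas particles sit in two halves of a box; at each step a random particle hops to the other half. Starting from the maximally ordered state (all particles on the left), random moves overwhelmingly drive the system toward the balanced, disordered state, simply because disordered arrangements vastly outnumber ordered ones.

```python
import random

random.seed(0)  # deterministic run for reproducibility
N = 1000
left = N  # fully ordered start: every particle in the left half

for _ in range(20000):
    # pick one of the N particles uniformly and move it across
    if random.random() < left / N:  # probability the pick is a left particle
        left -= 1
    else:
        left += 1

print(left)  # ends up hovering near N/2 -- thermal equilibrium
```

The count settles near N/2 and then just fluctuates, which is exactly the "thermal equilibrium" state the passage describes; the same dynamics run backward in time look statistically identical, previewing the time-symmetry point made later.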
| {"text": "perfect unbroken eggs in egg cups. And these objects are basically falling apart around us as they inevitably move to higher entropy states: cars rust, eggs fall on the floor and break. Hence, the increase in entropy in our ordered world is one reason why we detect an apparent \"arrow of time\". But change of entropy is fundamentally time-symmetrical! However, this is a good time to clear up a very widely held misconception about the change of entropy: that change of entropy is in some way fundamentally time-asymmetric, that entropy change behaves fundamentally differently in the forward time direction than in the backward time direction. This is absolutely not the case. In the general case, entropy increases in the backward time direction in just the same way as it increases in the forward time direction: change of entropy is symmetrical with time. (However, a very small minority of physicists might still believe change of entropy is time-asymmetric – see my comments at the bottom of this discussion with the notoriously tetchy physicist Lubos Motl here.) The probabilistic basis of the second law of thermodynamics simply says that a system will have many more possible disordered states than ordered states, so a system which changes state randomly will most likely move to a more disordered state. This seems very clear and obvious – such a simple statement is never going to be the cause of something so mysterious as fundamental time-asymmetry. Indeed, this change to a more disordered state is just as applicable in the reverse time direction as in the forward time direction: it's just a change of state, independent of time. But what about the second law of thermodynamics, which states that \"entropy increases with time\"? This seems to imply a fundamental time-asymmetry to entropy. But we have to realise that the second law only applies to special-case systems: objects with low entropy, the sort of objects we generally encounter in everyday life (rusting cars, etc.). In fact, if we consider general-case objects (i.e., objects in thermal equilibrium), objects which have never been arranged into any sort of order, then their entropy is at a maximum already, so their entropy can only decrease with time – completely at odds with the second law! This generally held misconception that change of entropy is fundamentally time-asymmetrical is revealed by the Loschmidt paradox. The Loschmidt paradox considers the apparently fundamental", "subdomain_id": "subdomain_quantum_thermodynamics", "similarity_score": 0.6171244684886794, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.876295"} | |
| {"text": "with time – completely at odds with the second law! This generally held misconception that change of entropy is fundamentally time-asymmetrical is revealed by the Loschmidt paradox. The Loschmidt paradox considers the apparently fundamental time-asymmetry of entropy implied by the second law and states that this is at odds with the known time symmetry of fundamental processes. It is only when we realise that the second law is frequently badly stated, and hence contains unstated assumptions (which have just been considered), that the Loschmidt paradox is resolved. (Wikipedia describes this resolution of the paradox, showing how one of the key assumptions of Boltzmann's version of the second law of thermodynamics was flawed – see here.) But if change of entropy is time-symmetric, why do we see the entropy of the universe as only increasing? Roger Penrose considers this question in his book The Road to Reality. Penrose considers what we might expect to happen if we trace the entropy of the universe back in time from the state it is in now. If change of entropy is really time-symmetrical, then we should expect to see entropy increasing as we trace the universe into the past, just as we will see entropy increasing into the future. But we know, in fact, that the universe had a lower entropy in the past: i.e., the entropy of the universe actually reduces in the past. So where does this asymmetry come from? As Roger Penrose goes on to reveal, the time-asymmetry of change of entropy within the universe is explained by the extraordinarily low entropy of the universe at its origin: basically, the low-entropy past of the universe \"fixes\" the experiment. If we want to get a symmetrical answer then we have to be careful to conduct a symmetrical experiment. Rather than starting with a special-case low-entropy universe, we have to imagine a universe which started in thermal equilibrium and has reached its current state unaided, purely by chance: after that low-entropy point is reached, we then see entropy starting to increase according to the second law. But the key thing is that if we trace the entropy of the universe back in time past the low-entropy point we now see the symmetry that Roger Penrose sought. Hence, change of entropy is fundamentally symmetrical. In fact, throughout this discussion on the arrow of time we will find that the arrow of time is caused by the time-symmetric second law of thermod", "subdomain_id": "subdomain_quantum_thermodynamics", "similarity_score": 0.616548670228202, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.877381"} | |
| {"text": "see that symmetry that roger penrose sought. hence, change of entropy is fundamentally symmetrical. in fact, throughout this discussion on the arrow of time we will find that the arrow of time is caused by the time - symmetric second law of thermodynamics, together with the very special, low - entropy initial conditions of the universe. ( this discussion on time - symmetric entropy change is based on an example by j. richard gott in his book time travel in einstein ' s universe in which the role of the universe is played by an ice cube - see here. the ice cube example is considered in detail in chapter 6 of brian greene ' s book the fabric of the cosmos. ) we all have a very strong feeling of a directionality of time, which has a flow in a forwards direction. as michael lockwood says in his book the labyrinth of time : \" we regard the forward direction in time, in stark contrast to the backward direction, as the direction in which causality is permitted to operate. causes, we assume, can precede their effects, but cannot follow them. \" but we have just seen how physical processes appear to be time - symmetrical, with no distinction between the forward and backward directions. so where does that leave causality? as michael lockwood again says about the passage of time : \" we find no hint of this in the formalism of newtonian physics. not only is there no explicit reference to a passage or flow of time ; there is not even any reference to cause and effect. indeed, there is not even any directionality \". \" but \", you might protest, \" surely causality works in only one direction : forwards in time? i kick a football - the football doesn ' t kick me. \" well, let ' s consider the example immediately below of forward causality. 
we see a snooker cue coming in from the left, hitting the white ball, which then causes the white ball to hit the red ball : however, if you shoot a movie of that sequence, and then play it backwards, it still makes perfect physical sense. as you can see below, we then have the red ball coming in from the right, hitting the white ball, which then causes the white ball to hit the cue backwards. so, because of the symmetry of the laws of physics, this process of causality - which we thought only applied to the forward direction of time - in fact applies equally to the backward direction of time as well : the reason why we don ' t see causality happening in", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6128283694937582, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 4, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.878509"} | |
| {"text": "symmetry of the laws of physics, this process of causality - which we thought only applied to the forward direction of time - in fact applies equally to the backward direction of time as well : the reason why we don ' t see causality happening in the backward direction is purely because of a bias in our psychological systems : something about the complexity of our psychological system ( our brains! ) causes our thought processes to work only in the forward direction of time ( this will be considered below ). the great advantage of recording the sequence on a movie and then playing the movie backwards ( to reveal the time symmetry of causality ) is that a movie camera works in a much more simple fashion than our brains and thus has no such psychological bias in the forward direction : it works in exactly the same way forward as backward. so if causality is time - symmetrical, we could in fact think of our current situations are being caused by time - reversed future events as much as by past events! for example, as i sit here by my desk in work this morning, i could consider my position as being caused by me being in my apartment this evening, and driving my car from there backward in time, backward down the road the work, to put me in work this morning! it ' s a bit brain - bending, but it ' s equally valid as saying \" i got up this morning, and drove forwards to work \". it seems strange, but that ' s only because of our psychological bias. the movie of my complete day at work would tell the correct ( time - reversible ) story. the quantum mechanical arrow of time as has just been explained, almost all known physical principles ( from newtonian mechanics through to einstein ' s relativity ) have a completely symmetric treatment of past and future. nowhere in any of these equations is there anything which distinguishes a forward direction of time from a backward direction of time. the exception to this rule appears to be quantum mechanics. 
on the page on the quantum casino it was explained how, when we make a measurement of a quantum observable, there is a \" collapse of the wavefunction \" in which a probability wave collapses to generate a single observed value from a range of possible values. this process appears to work in the forward time direction only, i. e., it is irreversible. an explanation for this apparent \" collapse of the wavefunction \" is presented in detail on the page on quantum decoherence, so i don ' t want to repeat it here. suffice to", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6113234542361073, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 5, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.880584"} | |
| {"text": "., it is irreversible. an explanation for this apparent \" collapse of the wavefunction \" is presented in detail on the page on quantum decoherence, so i don ' t want to repeat it here. suffice to say that the coherent phase relationships of the interference terms are destroyed when a particle interacts with the environment. the dissipation of these terms into the wider environment can be interpreted in terms of increasing entropy ( again, see the section on \" decoherence and entropy \" on the quantum decoherence page for full details ). quantum decoherence can then be understood as a thermodynamic process : after decoherence, the process is said to be thermodynamically irreversible. so once again the underlying physical principles appear to be time symmetric, with no fundamental preference for either the forward or backward time direction. the apparent arrow of time produced by the \" collapse of the wavefunction \" is once again shown to be a result of increasing entropy. as andreas albrecht explains in his paper cosmic inflation and the arrow of time ( when considering decoherence in the double - slit experiment ) : \" a double - slit electron striking a photographic plate is only a good quantum measurement to the extent that the photographic plate is well constructed, and has a very low probability of re - emitting the electron in the coherent ' double slit ' state. good photographic plates are possible because of the thermodynamic arrow of time : the electron striking the plate puts the internal degrees of freedom of the plate into a higher entropy state, which is essentially impossible to reverse. furthermore, different electron positions on the plate become entangled with different states of the internal degrees of freedom, so there is essentially no interference between positions of the electron. from this point of view, the quantum mechanical arrow of time is none other than the thermodynamic arrow of time. 
\" why can ' t we remember the future? if physical processes all appear to be time - reversible at a fundamental level, we might ask the question \" why can ' t we remember the future? \" after all, we can remember the past, and physics seems to make no distinction between past, present, and future. so why don ' t we already have prior knowledge of what is going to happen in the future? in order to answer this question, we shall consider the reasoning of james hartle which is based around the radiative arrow of time : the radiative arrow of time", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6586231222241061, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 6, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.883960"} | |
| {"text": "composed of billions of photons ( bosons are quite happy to congregate in the same state, and gather together in a cooperative fashion to create light rays ). for this reason, studies of the radiative arrow of time have concentrated on studying the maxwell electromagnetic field equations which treats light as a field with a wave nature ( rather than considering the path of individual particles ). it is often quoted that maxwell ' s electromagnetic field equations are time - reversible and so allow for advanced ( backward - in - time ) waves as well as retarded ( forward - in - time ) waves. however, in practice it is much easier to produce a retarded wave than an advanced wave, and this reveals the limitations of maxwell ' s equations as a full description of the behaviour of light. we need to combine maxwell ' s equations with something else in order to derive a radiative arrow of time. james hartle attempts to use maxwell ' s equations to deduce the radiative arrow of time in appendix a of his aforementioned paper the physics of \" now \" which is called the cosmological origin of time ' s arrow. his approach ( based on principles described in h. dieter zeh ' s book the physical basis for the direction of time ) combines the time - symmetric maxwell ' s equations with the time - asymmetric boundary conditions of the universe as a whole ( he considers the asymmetrical total amount of electromagnetic radiation ). the approach suggests that because there were no free electromagnetic fields at the start of the universe, but there are fields in the future, those fields must all be caused by retarded waves that have their sources in the past. however, i don ' t see how the radiative arrow of time can depend on the total of electromagnetic fields in the universe in this way. there ' s no equivalent of the second law of thermodynamics ( increasing entropy ) for electromagnetic fields. 
the total of electromagnetic field in an isolated system does not tend to increase ( as is the case with entropy ). the radiative arrow of time must surely depend on the increasing sum total of entropy in the universe, not the total of electromagnetic field. surely the radiative arrow of time must have the same cause as the thermodynamic arrow of time. at the beginning of the last century, walter ritz proposed that only retarded ( forward - in - time ) waves were physically possible ( i. e., the process was fundamentally time - asym", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6160681015623888, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3ab5fd70-76a4-4ab3-b9bf-9162bdadc8d9>", "chunk_index": 8, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.887508"} | |
| {"text": "click here for an introduction to some basic concepts and design principles of secret key cryptography. 3 - way is a simple and fast cipher designed by joan daemen. 3 - way features a 96 - bit key length and a 96 - bit block length. 3 - way is an iterated block cipher that repeats some relatively simple operations a specified number of rounds. david wagner, john kelsey, and bruce schneier of counterpane systems have discovered a related key attack on 3 - way that requires one related key query and about 222 chosen plaintexts, described in this paper. 3 - way is unpatented. blowfish is a block cipher designed by bruce schneier, author of applied cryptography. blowfish combines a feistel network, key - dependent s - boxes, and a non - invertible f function to create what is perhaps one of the most secure algorithms available. schneier ' s paper is available is also described in the concepts of cryptography page. the only known attacks against blowfish are based on its weak blowfish is implemented in kremlin. cast, designed by carlisle adams and stafford taveres, is shaping up to be a solid algorithm. its design is very similar to blowfish ' s, with key - dependent s - boxes, a non - invertible f function, and a feistel network - like structure ( called a substitution - permutation network ). david wagner, john kelsey, and bruce schneier have discovered a related - key attack on the 64 - bit version of cast that requires approximately 217 chosen plaintexts, one related query, and 248 offline computations ( described in this paper ). the attack is infeasible at best. cast is patented by entrust technologies, which has generously released it for free use. the cast cipher design process is described in this paper and the 128 - bit version is described in this addendum. carlisle adams has submitted a version of cast ( cast - 256 ) as an aes candidate. cast - 128 is implemented in kremlin. 
cmea is the encryption algorithm developed by the telecommunications industry association to encrypt digital cellular phone data. it uses a 64 - bit key and features a variable block length. cmea is used to encrypt the control channel of cellular phones. it is distinct from oryx, an also insecure stream cipher that is used to encrypt data transmitted over digital cellular phones. it has been", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6441618802373807, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d1c071e5-ada6-42ad-a8ca-ceea2d299789>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.991734"} | |
| {"text": "block length. cmea is used to encrypt the control channel of cellular phones. it is distinct from oryx, an also insecure stream cipher that is used to encrypt data transmitted over digital cellular phones. it has been broken by david wagner, john kelsey, and bruce schneier of counterpane systems. their paper, which also provides an excellent description of the cmea algorithm, is available here. designed at ibm during the 1970s and officially adopted as the nist standard encryption algorithm for unclassified data in 1976, des has become the bastion of the cryptography market. however, des has since become outdated, its long reign as official nist algorithm ending in 1997. though des accepts a 64 - bit key, the key setup routines effectively discard 8 bits, giving des a 56 - bit effective keylength. des remains widely in use. during the design of des, the nsa provided secret s - boxes. after differential cryptanalysis had been discovered outside the closed fortress of the nsa, it was revealed that the des s - boxes were designed to be resistant against differential cryptanalysis. des is becoming weaker and weaker over time ; modern computing power is fast approaching the computational horsepower needed to easily crack des was designed to be implemented only in hardware, and is therefore extremely slow in software. a recent successful effort to crack des took several thousand computers several months. the eff has sponsored the development of a crypto chip named \" deep crack \" that can process 88 billion des keys per second and has successfully cracked 56 bit des in less than 3 days. des is implemented in kremlin ( accessible through kremlin sdk api ). a variant of des, triple - des ( also 3des ) is based on using des three times. this means that the input data is encrypted three times. the triple - des is considered much stronger than des, however, it is rather slow compared to some new block ciphers. 
deal is an interesting aes submission and, like all aes submissions, it uses a 128 bit block and accepts 128 bit, 192 bit, and 256 bit keylengths. it uses des as its inner round function and its authors suggest at least 6, preferably 8 rounds ( there are some attacks against deal ). there is a paper available here that describes some attacks, all of which can be cured by using at least developed by the nippon telephone & telegraph as an improvement to des, the fast data encipherment algorithm ( feal )", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6176174848482557, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d1c071e5-ada6-42ad-a8ca-ceea2d299789>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.992686"} | |
| {"text": "against deal ). there is a paper available here that describes some attacks, all of which can be cured by using at least developed by the nippon telephone & telegraph as an improvement to des, the fast data encipherment algorithm ( feal ) is very insecure. feal - 4, feal - 8, and feal - n are all susceptible to a variety of cryptanalytic attacks, some requiring as little as 12 chosen plaintexts. feal is patented. gost is a cryptographic algorithm from russia that appears to be the russian analog to des both politically and technologically. its designers took no chances, iterating the gost algorithm for 32 rounds and using a 256 bit key. although gost ' s conservative design inspires confidence, john kelsey has discovered a key - relation attack on gost, described in a post to sci. crypt on 10 february 1996. there are also weak keys in gost, but there are too few to be a problem when gost is used with its standard set of s - boxes. you can read the official gost algorithm description ( translated from russian ) here. there is also a description of the gost algorithm here. idea, developed in zurich, switzerland by xuejia lai and james massey, is generally regarded to be one of the best and most secure block algorithm available to the public today. it utilizes a 128 - bit key and is designed to be resistant to differential cryptanalysis. some attacks have been made against reduced round idea. unfortunately, idea is patented ; licensing information can be obtained from ascom. loki was designed as a possible replacement for des. it operates on a 64 - bit block and a 64 - bit key. the first version of loki to be released was broken by differential cryptanalysis and was shown to have an 8 - bit complementation property ( this means that the number of keys that need to be searched in a brute force attack is reduced by 256 ). loki was revised and re - released as loki91. 
loki91 is secure against differential cryptanalysis, but loki easily falls to a chosen - key attack. the designers of loki have proposed loki97 as an aes candidate, but linear and differential attacks on loki97 have already been proposed. lucifer was one of the first modern cryptographic algorithms. it was designed at ibm in the 1960s by horst feistel, of feistel network fame. lucifer is often considered to be a precursor to des. there are several incarnations of lucifer, each with the", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6153134903953511, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d1c071e5-ada6-42ad-a8ca-ceea2d299789>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.993597"} | |
| {"text": "the first modern cryptographic algorithms. it was designed at ibm in the 1960s by horst feistel, of feistel network fame. lucifer is often considered to be a precursor to des. there are several incarnations of lucifer, each with the same name, which creates a good deal of confusion. no version is secure. a paper on the differential cryptanlysis of lucifer was written by ishai ben - aroya & eli macguffin is a cipher developed by matt blaze and bruce schneier as an experiment in cipher design. it uses a feistel network ( see the cryptography overview for details ), but does not split the input evenly, instead dividing the 64 bit block into one 16 bit part and another 48 bit part. this is called a generalized unbalanced feistel network ( gufn ). details are available here. a differential attack on macguffin has been found that requires approximately 251. 5 chosen plaintexts. mars is ibm ' s aes submission. there is a mars web page with a link to the mars paper. mars uses 128 bit blocks and supports variable key sizes ( from 128 to 1248 bits ). mars is unique in that it combines virtually every design technique known to cryptographers in one algorithm. it uses addition and subtractions, s - boxes, fixed and data dependent rotations, misty is a cryptographic algorithm developed by mitsubishi electric after they broke des in 1994. it is designed to withstand linear and differential cryptanalysis, but has not yet been cryptanalysed. as it has not undergone intensive peer review, the usual caution is recommended. it is being considered for inclusion into the set 2. 0 standard. visit web page or read the author ' s paper mmb was designed as an alternative to idea that uses a 128 - bit block instead of idea ' s 64 - bit block. it was designed using the same principles as idea. unfortunately, it is not as secure as idea and several attacks exist against it. 
its author, joan daemen, abandoned it and designed although newdes was developed by robert scott to possibly replace des, newdes has fallen short of expectations. newdes has been proven to be weaker than des, requiring 24 related - key probes and 530 chosen plaintext / ciphertext queries, as described in this newdes is implemented in kremlin rc2, like rc4, was formerly a trade secret, but code purporting to be rc2 was posted to sci. crypt. it is", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6299106778748524, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d1c071e5-ada6-42ad-a8ca-ceea2d299789>", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:43.994515"} | |
| {"text": "workshop on fast software encryption. this iteration of serpent was called serpent 0 and used the original des s - boxes. after comments, the key schedule of sperpent was changed slightly and the s - boxes were changed ; this new iteration of serpent is called serpent 1. serpent 1 resists both linear and differential attacks. the serpent paper is available here. square is an iterated block cipher that uses a 128 - bit key length and a 128 - bit block length. the round function of square is composed of four transformations : a linear transformation, a nonlinear transformation, a byte permutation, and a bitwise round - key addition. square was designed to be resistant to linear and differential cryptanalysis, and succeeds in this respect. the designers of square have developed an attack on square, but it cannot be extended past 6 rounds. a paper on square is and there are links to the paper and source code on the designers ' web in what surely signals the end of the clipper chip project, the nsa released skipjack, its formerly secret encryption algorithm, to the public. skipjack uses an 80 bit key. a fuzzy scan of the official nsa paper is available here at the nist web site, but it has been transcribed by the folks over at jya. com. a reference implementation ( in c ) is available here, and an optimized version is available here. eli biham and adi shamir have published some initial cryptanalytic results ( which are growing more and more interesting as time progresses ). tiny encryption algorithm ( tea ) tea is a cryptographic algorithm designed to minimize memory footprint, and maximize speed. however, the cryptographers from counterpane systems have discovered three related - key attacks on tea, the best of which requires only 223 chosen plaintexts and one related key query. the problems arise from the overly simple key schedule. 
each tea key can be found to have three other equivalent keys, as described in a paper by david wagner, john kelsey, and bruce schneier. this precludes the possibility of using tea as a hash function. roger needham and david wheeler have proposed extensions to tea that counter the above attacks. twofish is counterpane systems ' aes submission. designed by the counterpane team ( bruce schneier, john kelsey, doug whiting, david wagner, chris hall, and niels ferguson ), twofish has undergone extensive analysis by the counterpane team. there is a paper available from the twofish web page and source", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6030029187569143, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d1c071e5-ada6-42ad-a8ca-ceea2d299789>", "chunk_index": 6, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.003086"} | |
| {"text": ". sha1 is similar in design to md4. the original published algorithm, known as sha, was modified by nsa to protect against an unspecified attack ; the updated algorithm is named sha1. it produces a 160 - bit digest - - large enough to protect against \" birthday \" attacks, where two different messages are selected to produce the same signature, for the next decade. the official fips description of sha1 can be found sha1 is implemented in kremlin. snefru is a hash function designed by ralph merkle, the designer of the khufu and khafre encryption algorithms. 2 - round snefru has been broken by eli biham. snefru 2. 5, the latest edition of the hash algorithm, can generate either a 128 - bit or a 256 - bit digest. tiger is a new hash algorithm by ross anderson and eli biham. it is designed to work with 64 - bit processors such as the digital alpha and, unlike md4, does not rely on rotations ( the alpha has no such rotate instruction ). in order to provide drop - in compatibility with other hashes, tiger can generate a 128 - bit, a 160 - bit or a 192 - bit digest. the tiger home page contains more information. want to add to the list of algorithms ( or found a mistake )? please", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6080567305581703, "token_count": 276, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d1c071e5-ada6-42ad-a8ca-ceea2d299789>", "chunk_index": 9, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.007040"} | |
| {"text": "a group of individuals agreeing in common attributes, and designated by a common name ; a conception subordinated to another conception, called a genus, or generic conception, from which it differs in containing or comprehending more attributes, and extending to fewer individuals. thus, man is a species, under animal as a genus ; and man, in its turn, may be regarded as a genus with respect to european, american, or the like, as species. in science, a more or less permanent group of existing things or beings, associated according to attributes, or properties determined by scientific observation. a sort ; a kind ; a variety ; as, a species of low cunning ; a species of generosity ; a species of cloth. an officinal mixture or compound powder of any kind ; esp., one used for making an aromatic tea or tisane ; a tea mixture. a group of living things that appear to have common ancestry so closely related that their characteristics definitely separate them all from any other group ; a further division of a genus. n. ( l. species, particular kind ) a group of interbreeding individuals, not interbreeding with another such group, being a taxonomic unit including two names in binomial nomenclature, the generic name and specific epithet, similar and related species being grouped into a genus. a group of organisms that differ from all other groups of organisms and that are capable of breeding and producing fertile offspring. this is the smallest unit of classification for plants and animals. a group of closely related plants under the same genus. the lowest group of creatures in the tree of life. the hierarchy is as follows : kingdom ; phylum ; class ; order ; family ; genus ; species. the species is the group of creatures which share a great number of similarities and share a common name with other groups. a group of animals or plants of the same kind....... back a group of organisms ( individuals ) that can interbreed and reproduce with each other. 
used to distinguish sexually reproducing organisms into groups. individuals from two different species cannot have offspring. they are said to be reproductively isolated. the biologist ernst mayr formulated this definition of a species advancing our understanding of the mechanism of evolution of higher organisms. for microbes, the species definition does not properly apply, because they do not reproduce sexually, but have an efficient mechanism to exchange genetic material even between evolutionarily distant forms. this exchange of genes is known as horizontal gene transfer. unlike sexual reproduction, it usually involves only a", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6027211120256526, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:df1cef2f-0141-487f-a813-07e77bbd408d>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.132909"} | |
| {"text": "breed naturally only with each other and resemble each other more closely than they resemble members of any similar group. a class of individuals having common attributes and designated by a common name ; a logical division of a genus or more comprehensive class ( merriam - webster 1996 ). most specific level of scientific classification ; below genus the basic unit of classification which usually refers to one or several groups of plants or other living organisms that interbreed and maintain their distinctive identity through successive generations a class of individual plants or animals having some common characteristics or qualities which makes them distinct from other classes of plants or animals. the basic category of biological classification, ranking below the genus ; a species consists of related organisms or populations potentially capable of interbreeding ; a species is designated by a two part name consisting of its genus and a specific epithet ; see also subspecies a group of organisms ( living things ) capable of reproducing to give fertile off - spring. momo organisms that can interbreed and produce fertile offspring. a group of individuals biologically capable of interbreeding and which have a common ancestor. organisms that are genetically related, similar physically, and can reproduce viable offspring. this is the most useful taxonomical name because every living creature is assigned a unique species name, which is composed of two parts. a class of plants or animals having common attributes and designated by a common name. theoretically, plants or animals of different species cannot interbreed. however, occasionally this does not hold true. 
espece familienbezeichnung, f especie a population or series of populations whose individuals have the potential to freely breed with one another and that is discontinuous in variation from other populations or series of populations ; a fundamental category of taxonomic classification, ranking a kind of plant that is distinct from other plants. a group of organisms that are biologically capable of breeding and producing fertile offspring. it is the lowest normal taxonomic unit in use. meagher, 1991 a population of morphologically similar organisms that can reproduce sexually among themselves but that cannot produce fertile offspring when mated with other organisms. a group of organisms that have similar characteristics and can interbreed to produce fertile offspring. this is the defining identification of a living organism. based upon taxonomy it is usually a latinised adjective or noun and is never capitalised and is usually italicised. species may only have varieties after it, although the specific name may also be double - barreled. the basic unit of biological classification. generally defined as an aggregation of individuals similar in appearance", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6058548663199343, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:df1cef2f-0141-487f-a813-07e77bbd408d>", "chunk_index": 11, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.145055"} | |
| {"text": "common case and bracelet materials frequently asked questions is my bracelet scratch - proof or scratch - resistant? how do magnetic fields affect my watch? how do i get my watch appraised for insurance purposes? what is a \" swiss \" watch? do i need to wind my automatic ( self - winding ) watch? a mechanical movement that is wound through the motion of the wearer ' s arm during normal daily arm movement ; sufficient activity is required to build up a power reserve. also known as a \" self - winding \" watch. the part of the watch that secures the watch to the wrist. the two common types of bands are strap ( i. e. leather ) and bracelet. the rim which secures the crystal in place on the watch case and may be set with diamonds or other stones. bezels may also be rings which are graduated to track elapsed time, as in a diver ' s watch. some bezels are rotating and can be turned to perform different types of timekeeping. the ornament, often a dome - shaped or faceted precious stone such as a ruby or emerald used to accent the winding crown. also, the raised dome - shaped markers used to indicate the hours on some watch dials. the windows or subdials on the dial of a watch that display the day, date, month and / or year. also known as graphite fiber, carbon fiber consists of extremely thin fibers, predominantly of carbon atoms, bonded together in microscopic crystals. the vertical alignment of the crystals gives carbon fiber its unique texture, and makes it incredibly strong. often combined with a polymer, carbon fiber watch cases and dials are exceptionally tough. the metal housing that contains the internal workings of the watch ( the movement, dial and hands ). high - tech ceramic, an extremely hard material containing titanium carbide, is valued by watchmakers for its lightweight and exceptional scratch - resistance. high - polished ceramic timepieces are smooth - to - the - touch, ultra lightweight and durable. 
a watch that includes a stop watch feature : a timer that can be started and stopped to time an event. a timepiece that has met very high standards of accuracy, tested and certified by the c. o. s. c. ( an official watch institute in switzerland ). each chronometer comes with an individual certificate of precision. the tiny knob on the winding stem used to move the hands to set the time on the watch, and to wind a watch with a manual movement. the transparent \" glass \" which", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6445440771417381, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3e4aeada-3305-4a22-8b10-a5169ef5bc2d>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.190257"} | |
| {"text": ". pushers are usually found on chronographs and timepieces with minute strikers and alarms. the letters are an abbreviation for physical vapor deposit, a high - tech vacuum - coating procedure that produces wear - resistant finish. a watch movement where time is \" tuned \" to, and measured by, the extremely rapid and consistent vibrations of a quartz crystal. the quartz crystal is powered by a battery. also known as an electronic quartz movement. a device that chimes the time when a button is pushed, or a slide is pulled. see \" minute repeater \". a crown that screws down into the case tube making the watch more water resistant. provides the best underwater shock protection ( against rocks, accidental knocks, scrapes, etc. ) to prevent water leakage. to set the time on a watch with a screw - down crown, the crown must first be unscrewed before it can be pulled out to any hand - setting position. a movement that converts mechanical energy generated by the force of gravity and natural movements of the wearer ' s wrist into electrical energy which is stored in an accumulator which powers a quartz movement. another name for an automatic mechanical movement. see \" automatic \" and \" mechanical \" movements. to be qualified as \" shock resistant \", a watch must have demonstrated the ability to withstand an impact equal to that of being dropped onto a wood floor from a height of three feet during testing. the dial of the watch is \" cut out \" to allow the inner workings of a watch ' s movement to be seen through the transparent crystal and dial on the front side, or a transparent crystal case back. in a watch with a \u201c skeletonized \u201d movement, the rotor, wheels and other moving parts are also painstakingly cut away, creating an elegant transparency all the way through the case. abbreviation stands for \" stock keeping unit \" ; an identifying number used when taking inventory. same as the watch \" model number \". 
a seconds hand that is mounted in the center of the watch dial ( vs. one positioned in a sub - dial ). a \" true \" sweep seconds hand is found only on mechanical watches, and has a motion that is undetectable to the human eye. on a quartz watch, the advance of the seconds hand is discernible in tiny step - by - step jumps. a feature found on chronographs consisting of a calibrated scale, usually found around the perimeter of the dial, that can be used to measure the wearer ' s speed of", "subdomain_id": "subdomain_quantum_metrology", "similarity_score": 0.6496520318127388, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:3e4aeada-3305-4a22-8b10-a5169ef5bc2d>", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.193100"} | |
| {"text": "optical fibers are circular dielectric wave - guides that can transport optical energy and information. they have a central core surrounded by a concentric cladding with slightly lower ( by \u2248 1 % ) refractive index. fibers are typically made of silica with index - modifying dopants such as geo2. a protective coating of one or two layers of cushioning material ( such as acrylate ) is used to reduce cross talk between adjacent fibers and the loss - increasing microbending that occurs when fibers are pressed against rough surfaces. for greater environmental protection, fibers are commonly incorporated into cables. typical cables have a polyethylene sheath that encases the fiber within a strength member such as steel or kevlar strands. the fiber as a dielectric wave - guide : fiber modes since the core has a higher index of refraction than the cladding, light will be confined to the core if the angular condition for total internal reflectance is met. the fiber geometry and composition determine the discrete set of electromagnetic fields, or fiber modes, which can propagate in the fiber. there are two broad classifications of modes : radiation modes and guided modes. radiation modes carry energy out of the core ; the energy is quickly dissipated. guided modes are confined to the core, and propagate energy along the fiber, transporting information and power. if the fiber core is large enough, it can support many simultaneous guided modes. each guided mode has its own distinct velocity and can be further decomposed into orthogonal linearly polarized components. any field distribution within the fiber can be expressed as a combination of the modes. the two lowest - order guided modes of a circularly symmetrical fiber designated lp01 and lp11 are illustrated in figure 1. when light is launched into a fiber, the modes are excited to varying degrees depending on the conditions of the launch input cone angle, spot size, axial centration and the like. 
the distribution of energy among the modes evolves with distance as energy is exchanged between them. in particular, energy can be coupled from guided to radiation modes by perturbations such as microbending and twisting of the fiber increasing the attenuation. bandwidth of an optical fiber determines the data rate. the mechanism that limits a fibers bandwidth is known as dispersion. dispersion is the spreading of the optical pulses as they travel down the fiber. the result is that pulses then begin to spread into one another and the symbols become indistinguishable. there are two main categories of dispersion,", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6465711279929913, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.268810"} | |
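Since each guided mode travels at its own velocity, there is a transit-time spread between the fastest and slowest guided rays. A minimal sketch, assuming the standard step-index ray-optics estimate Δt ≈ (n1/c)·Δ·L (this relation and the index values are illustrative assumptions, not taken from the text):

```python
# Sketch of intermodal (modal) delay spread in a step-index multimode fiber.
# Assumed standard ray-optics result (not stated in the text):
#   delta_t ~= (n1 / c) * Delta * L,  with Delta = (n1 - n2) / n1
C = 299_792_458.0  # speed of light in vacuum, m/s

def modal_delay_spread(n_core: float, n_clad: float, length_m: float) -> float:
    """Transit-time difference (s) between the slowest and fastest guided rays."""
    delta = (n_core - n_clad) / n_core
    return n_core / C * delta * length_m

# Example: typical multimode-fiber-like indices, 1 km link (illustrative values).
dt = modal_delay_spread(n_core=1.48, n_clad=1.46, length_m=1000.0)
print(f"{dt * 1e9:.1f} ns/km")
```

A spread of tens of nanoseconds per kilometer is what limits the bandwidth-distance product of multimode links, as the following record describes.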
| {"text": "dispersion is the spreading of the optical pulses as they travel down the fiber. the result is that pulses then begin to spread into one another and the symbols become indistinguishable. there are two main categories of dispersion, intermodal and intramodal. ( figure 1a : lp01 mode distribution ; figure 1b : lp11 mode distribution ; figure 2 : cross section view of optical fiber and single fiber cable. ) as its name implies, intermodal dispersion is a phenomenon between different modes in an optical fiber. therefore this category of dispersion only applies to multimode fiber. since all the different propagating modes have different group velocities, the time it takes each mode to travel a fixed distance is also different. therefore as an optical pulse travels down a multimode fiber, the pulses begin to spread, until they eventually spread into one another. this effect limits both the bandwidth of multimode fiber as well as the distance it can transport data. intramodal dispersion, sometimes called material dispersion, is a result of material properties of optical fiber and applies to both single - mode and multimode fibers. there are two distinct types of intramodal dispersion : chromatic dispersion and polarization - mode dispersion. the index of refraction varies depending upon wavelength. therefore, different wavelengths will travel down an optical fiber at different velocities. this is known as chromatic dispersion. this principle implies that a pulse from a source with a wider spectral fwhm will spread more than one from a source with a narrower fwhm. dispersion limits both the bandwidth and the distance that information can be supported. this is why for long communications links it is desirable to use a laser with a very narrow line width. distributed feedback ( dfb ) lasers are popular for communications because they have a single longitudinal mode with a very narrow line width. polarization mode dispersion ( pmd ) is actually another form of material dispersion. 
single - mode fiber supports a mode, which consists of two orthogonal polarization modes. ideally, the core of an optical fiber is perfectly circular. however, in reality the core is not perfectly circular, and mechanical stresses such as bending introduce birefringence in the fiber ; this causes one of the orthogonal polarization modes to travel faster than the other, dispersing the optical pulse. light power propagating in a fiber decay", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6422932496863731, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.269765"} | |
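The chromatic-dispersion pulse spreading described above can be put in numbers. A minimal sketch, assuming the common engineering relation Δt ≈ |D|·L·Δλ; the dispersion parameter D, link length, and source linewidths below are illustrative values, not from the text:

```python
# Chromatic-dispersion pulse broadening: delta_t ~= |D| * L * delta_lambda.
# D, L, and the source linewidths below are illustrative, not from the text.

def chromatic_broadening_ps(D_ps_per_nm_km: float, length_km: float,
                            linewidth_nm: float) -> float:
    """Pulse spread in picoseconds for a source of the given spectral width."""
    return abs(D_ps_per_nm_km) * length_km * linewidth_nm

# Broad-linewidth source vs. a narrow-linewidth DFB laser over 80 km at 1550 nm:
wide = chromatic_broadening_ps(17.0, 80.0, 1.0)     # 1 nm linewidth
narrow = chromatic_broadening_ps(17.0, 80.0, 0.01)  # 0.01 nm DFB linewidth
print(wide, narrow)  # the DFB pulse spreads 100x less
```

This is the quantitative reason the text recommends narrow-linewidth DFB lasers for long links.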
| {"text": "mechanical stresses such as bending introduce birefringence in the fiber, causing one of the orthogonal polarization - modes to travel faster than the other and hence dispersing the optical pulse. light power propagating in a fiber decays exponentially with length due to absorption and scattering losses. attenuation is the single most important factor determining the cost of fiber optic telecommunication systems, as it determines the spacing of repeaters needed to maintain acceptable signal levels. in the near infrared and visible regions, the small absorption losses of pure silica are due to tails of absorption bands in the far infrared and ultraviolet. impurities, notably water in the form of hydroxyl ions, are much more dominant causes of absorption in commercial fibers. recent improvements in fiber purity have reduced attenuation losses. state - of - the - art systems can have attenuation on the order of 0. 1 db / km. scattering can couple energy from guided to radiation modes, causing loss of energy from the fiber. there are unavoidable rayleigh scattering losses from small - scale index fluctuations frozen into the fiber when it solidifies. this produces attenuation proportional to 1 / \u03bb4. irregularities in core diameter and geometry or changes in fiber axis direction also cause scattering. any process that imposes dimensional irregularities such as microbending increases scattering and hence attenuation. ( figure 3 : dispersion ; figure 4 : typical spectral attenuation in silica. ) numerical aperture ( na ) the numerical aperture ( na ) of a fiber is defined as the sine of the largest angle an incident ray can have for total internal reflectance in the core. rays launched outside the angle specified by a fiber ' s na will excite radiation modes of the fiber. a higher core index, with respect to the cladding, means larger na. however, increasing na causes higher scattering loss from greater concentrations of dopant. 
a fiber ' s na can be determined by measuring the divergence angle of the light cone it emits when all its modes are excited. ( figure 5 : numerical aperture. ) qualitatively, na is a measure of the light gathering ability of a fiber. it also indicates how easy it is to couple light into a fiber. the normalized frequency parameter of a fiber, also called the v number, is a useful specification. many fiber parameters can be expressed in terms of v, such as : the number of modes at a given wavelength, mode cut off conditions, and propagation constants. for example, the", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6078984916234569, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.270786"} | |
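The NA and V-number relations discussed above can be sketched in a few lines. The sqrt(n_core² − n_clad²) form of NA and the V < 2.405 single-mode condition are standard results assumed here (not spelled out in the text), and the index values are illustrative:

```python
import math

# NA and V-number of a step-index fiber, following the definitions in the text:
#   NA = sin(max acceptance angle); the sqrt(n1^2 - n2^2) form is the standard
#   expression, assumed here. Single-mode operation requires V < 2.405 (the
#   first zero of the Bessel function J0), also a standard result.

def numerical_aperture(n_core: float, n_clad: float) -> float:
    return math.sqrt(n_core**2 - n_clad**2)

def v_number(core_diameter_um: float, wavelength_um: float, na: float) -> float:
    return math.pi * core_diameter_um * na / wavelength_um

na = numerical_aperture(1.4682, 1.4629)   # illustrative SMF-like indices
v = v_number(8.2, 1.55, na)
print(round(na, 3), round(v, 3), v < 2.405)
```

Note how a small index contrast (here ≈0.4%) already gives an NA of about 0.12, and keeps V below the single-mode cutoff at 1550 nm.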
| {"text": "the mode profile of the he11 mode of a step index fiber can be approximated by a gaussian distribution with a 1 / e width w given by : w = d ( 0. 65 + 1. 619 / v^1. 5 + 2. 879 / v^6 ), where : d is the core diameter, and v is the v - number. for our f - sv fiber, for which v = 2, the gaussian width is approximately 28 % larger than the core diameter, so the light should be focused to a spot size 1. 28 times the core diameter at the fiber surface. for a gaussian laser beam, the required beam diameter d incident upon a focusing lens of focal length f to produce a focused spot of diameter w is d = 4\u03bbf / ( \u03c0w ). given the laser beam waist and divergence, it ' s easy to determine the distance needed between the focusing lens and the laser to expand the beam to the required diameter. the mode field diameter is now given to provide easier matching of lens to optical fiber for a gaussian beam. a high numerical aperture lens must collimate the diverging output beam of a laser diode. newport ' s f - l series diode laser focusing lenses are ar - coated for high transmittance at popular laser diode wavelengths and, with numerical apertures up to 0. 5, are useful for collimating or focusing. mode scrambling and filtering many multimode fiber experiments are sensitive to the distribution of power among the fiber ' s modes. this is determined by the launching optics, fiber perturbations, and the fiber ' s length. mode scrambling is a technique that distributes the optical power in a fiber among all the guided modes. mode filtering simulates the effects of kilometer lengths of fiber by attenuating higher - order fiber modes. ( figure 8 : launching conditions in a multimode optical fiber. ) one scrambling technique is to splice a length of graded - index fiber between two pieces of step - index fiber ; this ensures that the downstream fiber ' s core is overfilled regardless of launch conditions. mode filtering can be achieved by wrapping a fiber several times around a finger - sized mandrel ; bending sheds the high - order modes. 
one way to achieve both scrambling and filtering is to introduce microbending to cause rapid coupling between all fiber modes and attenuation of high - order modes. one approach is to place a stripped section of fiber in a box filled with lead shot. a more precise way is to use newports fm - 1 mode scrambler. this specially designed tool uses a calibrated mechanism to introduce microbending for mode scrambling and filtering. ( a )", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6148533107243064, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 6, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.275468"} | |
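The beam-matching relation d = 4λf/(πw) quoted in the record above can be exercised directly. The wavelength, focal length, and core size below are illustrative values, not the text's F-SV setup:

```python
import math

# Matching a laser beam to a fiber: the text's relation d = 4*lambda*f / (pi*w)
# gives the collimated beam diameter d needed at a lens of focal length f to
# focus to a Gaussian spot of diameter w. All numbers below are illustrative.

def required_beam_diameter(wavelength_m, focal_length_m, spot_diameter_m):
    return 4 * wavelength_m * focal_length_m / (math.pi * spot_diameter_m)

# Target spot 1.28x the core diameter (the V = 2 case quoted in the text),
# for a hypothetical 9 um core at 1550 nm with an 11 mm focal-length lens:
w = 1.28 * 9e-6
d = required_beam_diameter(1.55e-6, 11e-3, w)
print(f"{d * 1e3:.2f} mm")  # ~1.88 mm
```

So a beam a couple of millimeters across, focused by a short-focal-length lens, lands the spot at roughly the mode size; a larger beam or shorter focal length gives a tighter spot.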
| {"text": "of fiber in a box filled with lead shot. a more precise way is to use newport ' s fm - 1 mode scrambler. this specially designed tool uses a calibrated mechanism to introduce microbending for mode scrambling and filtering. ( figure 9 : a schematic of coupling of light into an optical fiber, ( a ) overfilled, ( b ) underfilled ; figure 10 : mode scrambler for optical fibers. ) the bends tend to couple out higher - order and radiation modes and to distribute the light into a distribution of modes that will remain stable over long distances. cladding mode removal some light is invariably launched into a fiber ' s cladding. though cladding modes dissipate rapidly with fiber length, they can interfere with measurements. for example, the output of a single - mode fiber will not have a gaussian distribution if light is propagating in the cladding. you can remove cladding modes by stripping a length of fiber coating and immersing the bare fiber in an index matching fluid such as glycerin. common optical parameters the following is a list of common optical parameters associated with fiber optic components. please call or visit newport ' s website for application notes on how to measure these parameters. ( figure 11. ) port configuration : number of input ports x number of output ports, e. g. 2 x 2. coupling ratio : the ratio of the power at an output port to the launched power, expressed in db, e. g. - 10log ( p2 / p1 ). isolation : the ratio of the power at an output port in the transmitted wavelength band to that in the extinguished wavelength band, expressed in db. directivity : the ratio of the power returned to any other input port to the launched power, expressed in db, e. g. - 10log ( p4 / p1 ). bandwidth : the range of operating wavelengths over which performance parameters are specified. excess loss : the ratio of the total power at all output ports to the launched power, expressed in db, e. g. - 10log [ ( p2 + p3 ) / p1 ]. 
uniformity : the difference between maximum and minimum insertion losses. extinction ratio : the ratio of the residual power in an extinguished polarization state to the transmitted power, expressed in db. return loss : the ratio of the power returned to the input port to the launched power, expressed in db. e. g. - 10log ( p5 / p1 ). polarization - dependent loss ( pdl ) : the maximum (", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6169932608227028, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 7, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.276469"} | |
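The dB-ratio definitions in the record above translate directly into code. The port powers below are made-up milliwatt values for a hypothetical 2 x 2 coupler, using the text's port labels (p1 launched, p2/p3 outputs, p4 other input, p5 reflected):

```python
import math

# The dB parameters defined in the text, computed for a hypothetical 2x2
# coupler. The milliwatt power values are made up for illustration.

def db(ratio: float) -> float:
    return -10 * math.log10(ratio)

p1, p2, p3, p4, p5 = 1.0, 0.45, 0.45, 1e-6, 1e-5   # mW

coupling_ratio = db(p2 / p1)         # ~3.5 dB (near-50/50 split)
excess_loss = db((p2 + p3) / p1)     # ~0.46 dB lost inside the coupler
directivity = db(p4 / p1)            # 60 dB
return_loss = db(p5 / p1)            # 50 dB
print(coupling_ratio, excess_loss, directivity, return_loss)
```

Larger dB values mean less power at the port in question, which is why high directivity and high return loss are desirable.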
| {"text": "return loss : the ratio of the power returned to the input port to the launched power, expressed in db. e. g. - 10log ( p5 / p1 ). polarization - dependent loss ( pdl ) : the maximum ( peak - to - peak ) variation in insertion loss as the input polarization varies, expressed in db. fiber optic communications the theoretical bandwidth of optical fiber transmission in the 1550 nm window alone is on the order of terabits. current fiber optic systems have not even begun to utilize the enormous potential bandwidth that is possible. there are two methods that are employed to achieve an increase in bandwidth. the first is known as time division multiplexing or tdm. multiple channels are transmitted on a single carrier by increasing the modulation rate and allotting a time slot to each channel. however, more sophisticated high - speed electronics, at both the transmitting and receiving ends of the communications link, are required when increasing the bit rate of a system. and as the bit rate increases, inherent modulation limiting characteristics of optical fibers become dominant. chromatic and polarization mode dispersion cause pulse spreading, which affects the signal quality over longer transmission distances. an alternate method for increasing the capacity of fiber optic communications systems is known as wavelength division multiplexing, or wdm. by this method, capacity can be increased by using more than one optical carrier ( wavelength ) on a single fiber. therefore, adding a second transmitter and receiver to an optical fiber can double the bandwidth of that communications system. this method of increasing the capacity of an optical system has appeal for a variety of reasons. if a system were to increase in capacity using tdm alone, the existing transmitter and receiver would be replaced with a faster and more expensive transmitter / receiver pair. using wdm, the existing transmitter and receiver do not need to be replaced. 
a second transmitter / receiver pair of a different wavelength is simply added. this is done by coupling, or multiplexing the output of the two lasers into a single fiber. at the receiving end, the two wavelengths are then separated, or demultiplexed, and each optical carrier is routed to its own receiver. for transmission systems using a 1310 nm laser, a second laser at 1550 nm is usually added. the reason for choosing these wavelengths is that they lie in the windows or ranges of least attenuation. this allows the signal to travel a longer distance. the itu ( international telecommunication union ) has proposed a set of closely spaced wavelengths in the 1550 nm window. this method of w", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6525401687612793, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 8, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.277566"} | |
| {"text": "that they lie in the windows or ranges of least attenuation. this allows the signal to travel a longer distance. the itu ( international telecommunication union ) has proposed a set of closely spaced wavelengths in the 1550 nm window. this method of wdm is known as dense wavelength division multiplexing, or dwdm. these different wavelengths or channels, are spaced 100 ghz apart, which is approximately 0. 8 nm. this set of channels is commonly known as the itu - t grid, and is specified in frequency. the reason the 1550 nm window was chosen by the itu is twofold : it is in one of the windows that has the smallest amount of attenuation ; and it also lies in the band in which erbium doped optical amplifiers operate. itu - t dwdm grid the following diagram is a conceptual example of a fiber optic network. the all - optical network the all - optical network will be the next evolution in optical communications. current dwdm systems are point - to - point links meaning that the signals have a single distinct starting and ending point. research is being performed to help these networks evolve into fully configurable networks, which are not limited to fixed point - to - point links. transparency in the optical layer opens many possibilities for the future. digital and analog transmission can occur on the same fiber. different bit rates using different protocols will all travel together. current research is being performed on reconfiguring an optical network in real time. wavelength selective switching allows wavelengths to be routed through the network individually. some of the applications of this are for network restoration and redundancy, which may reduce or entirely eliminate the need for an entire back up system to help the network recover from failures such as equipment malfunctions or fiber breaks. a reconfigurable network may offer bandwidth on demand to configure itself to optimize for traffic bottlenecks. 
the future may also include wavelength translation to convert traffic on one wavelength to another wavelength in the optical domain. all optical switching is still in the research phase ; however, researchers are looking for ways to create reliable, low loss switches with fast switching speeds. investigation into the possibility of optical packet switching and other novel technologies are currently underway. the all - optical network may be just around the corner. photonic crystal fibers ( pcfs ) photonic crystal fiber ( pcf ) is a subset of photonic crystals. the field of pcf was first explored in the latter half of 1990 ' s and quickly", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6001979509604073, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 9, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.278559"} | |
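The "100 GHz apart, which is approximately 0.8 nm" figure quoted above follows from |Δλ| = λ²·Δf/c. A sketch; the 193.1 THz anchor used below is the standard ITU-T grid reference frequency, an assumption not stated in the text:

```python
# ITU-T DWDM grid: 100 GHz channel spacing is ~0.8 nm near 1550 nm, via
# |d_lambda| = lambda**2 * d_f / c. The 193.1 THz anchor frequency is the
# standard ITU-T grid reference (an assumption, not stated in the text).
C = 299_792_458.0  # m/s

def spacing_nm(center_nm: float, spacing_ghz: float) -> float:
    """Wavelength spacing (nm) equivalent to a frequency spacing at center_nm."""
    lam = center_nm * 1e-9
    return lam * lam * spacing_ghz * 1e9 / C * 1e9

def grid_nm(n_channels: int, anchor_thz: float = 193.1, step_ghz: float = 100.0):
    """Wavelengths (nm) of the first n_channels stepping up the 100 GHz grid."""
    freqs_hz = [anchor_thz * 1e12 + k * step_ghz * 1e9 for k in range(n_channels)]
    return [C / f * 1e9 for f in freqs_hz]

print(round(spacing_nm(1550.0, 100.0), 3))   # ~0.801 nm
print([round(wl, 2) for wl in grid_nm(3)])
```

Note that the grid is uniform in frequency, so the wavelength spacing drifts slightly across the band; this is why the ITU-T grid is specified in frequency.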
| {"text": "network may be just around the corner. photonic crystal fibers ( pcfs ) photonic crystal fiber ( pcf ) is a subset of photonic crystals. the field of pcf was first explored in the latter half of the 1990 ' s and quickly evolved into a commercial technology. pcfs are generally divided into two main categories : index guiding fibers that have a solid core, and photonic bandgap fibers that have periodic microstructured elements and a core of low index material ( e. g. hollow core ). they can provide characteristics that ordinary optical fiber cannot, such as : single - mode operation from the uv to ir with large mode - field diameters, exceptionally high nonlinearity, numerical aperture ( na ) ranging from very low to about 0. 9, and optimized dispersion properties. applications of pcfs are found in a wide range of research fields like spectroscopy, metrology, biomedicine, imaging, telecommunication, industrial machining, and military. fabrication and characteristics ( figure 12 : close - up view of pcf preform. ) the typical starting point for manufacturing of an index guided pcf is an array of hollow capillary silica tubes bundled around a pure silica rod replacing the center capillary. for photonic bandgap ( pbg ) fibers, one or more capillary tubes may simply be left out in the center of the preform in order to create a hollow ' defect ' core. a sleeving tube surrounds the entire assembly that forms the preform. in a fiber draw tower, the preform is heated to around 2000\u00b0c, and it is carefully pulled into fiber with the aid of gravity and pressure. typical outer fiber diameter is 125 \u00b5m, but diameters from 80 to around 700 \u00b5m are routinely fabricated. this fiber maintains the structure of the preform, but now on a microscopic scale. standard protective polymer coatings are applied to the fibers in order to improve handling characteristics. ( figure 13. ) most pcf fiber can be connectorized too ; call newport for more information. 
the dispersion characteristics of pcfs can be manipulated to create fibers having zero, low, or anomalous dispersion at visible wavelengths. the dispersion can also be flattened. combining these features with small mode field areas results in outstanding nonlinear fibers. by altering the pattern of air holes or the materials used, it is possible to manipulate other characteristics of pcfs, such as the single - mode cut - off wavelength, the na, and the nonlinear coefficient.", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6368910154336878, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 10, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.280556"} | |
| {"text": "fiber technology, where the light is guided using solid glasses with different refractive indices, several new properties may be realized using pcf technology. for example : - fibers that are single - mode in a very broad spectral range ( in principle all wavelengths ) - very small mode sizes may be obtained ( down to approx. 1 \u00b5m ) - very large mode sizes may be obtained ( up to 25 \u00b5m or larger ) - zero dispersion wavelengths below 1300 nm are possible ( down to approx. 600 nm ) - exceptionally large birefringence close to 10^ - 2 can be realized - very high numerical apertures up to 0. 9 may be obtained hence, pcfs are ideally suited for applications requiring large non - linearities, broadband operation with single - mode guidance, large mode areas, light collection from a large solid angle, etc. ( figure 17 : large mode area fiber ( f - sm series ) ; figure 18 : near field image of f - sm fiber at 1550 nm ; figure 19 : mode structure of f - sm20 fiber. ) formation of broad continuous spectra through propagation of short femto or picosecond - range high power pulses through nonlinear media ( also known as supercontinuum generation, or scg ) was first observed in 1970 and has since then been studied extensively in many different materials. the term supercontinuum does not cover a specific phenomenon but rather a plethora of nonlinear effects leading to considerable spectral broadening of optical pulses and thereby potentially octave - spanning output. the involved nonlinear effects depend on the dispersion in the material and include effects such as self - phase modulation ( spm ), raman scattering, phase matching, and solitons. results on scg in pcfs have previously been presented with pumping in the anomalous dispersion regime or at the zero - dispersion wavelength in both the visible and the infrared wavelength range. most experiments utilize femtosecond pumping as this results in spectacularly broad spectra. 
picosecond pumping yields narrower spectra, but does so with far cheaper laser sources and is therefore commercially interesting. although scg can be observed in a drop of water given enough pumping power, pcfs are ideal media for scg as the dispersion can be designed to facilitate continuum generation in a specific region. in this way, it is possible to convert light to both higher and lower wavelengths, and super wide spectra covering more than an octave are achievable at previously unthinkably low power levels. practical super", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6173311284383163, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6e348240-aa58-47a4-b809-78996b01af78>", "chunk_index": 13, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.285996"} | |
| {"text": "contact : ken kingery, nscl, office : 517 - 908 - 7482, kingery @ nscl. msu. edu published january 4, 2012 for immediate release east lansing, mich. \u2013 the recent measurement of the mass of the short - lived rare isotope manganese - 66 has made it possible for nuclear astrophysicists to pin down the underlying heating elements of one of the universe \u2019 s most fantastic phenomena \u2014 accreting neutron stars. out in the cold depths of space, billions of the densest objects known to man sit quietly while their nuclear decomposition processes play out. but some of them are hungry. some neutron stars sit close enough to a neighboring star for its immense gravity to begin pulling matter from its neighbor into its own mass in an ongoing thermonuclear process. but sooner or later, the fuel for the neutron star is exhausted and it begins to cool rapidly. through observations of this cooling process and measurements taken at nuclear physics laboratories such as the national superconducting cyclotron laboratory ( nscl ), scientists can deduce the inner workings of neutron stars. in the recent experiment at nscl, researchers measured the mass of manganese - 66, which sits right next to iron - 66 on the nuclear chart. based on the newly discovered mass and previous measurements of iron - 66, scientists can determine where in the crust of a neutron star the layer of iron - 66 lies, which is one of two heating elements in neutron stars. \u201c on earth, iron - 66 is a rare short - lived isotope with a half - life of about 400 ms, \u201d said alfredo estrade, postdoctoral researcher with st. mary \u2019 s university in halifax, canada, and gsi in darmstadt, germany, and lead author of the study. \u201c however, it also is part of the crust of accreting neutron stars, where it becomes stable due to its high density and it heats the crust by capturing electrons. 
\u201d scientists at nscl calculated the mass of manganese - 66 by doing a time - of - flight experiment. krypton - 86 was accelerated up to 40 percent of the speed of light and smashed into a thin foil of beryllium. some of the ions shattered after hitting other nuclei in the foil, creating a smorgasbord of new isotopes and particles. the facility then filtered out about 100 desired types of isotopes, some of which they wanted to measure and others that they used for calibrations. the filtered isotopes traveled down", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6176134280380053, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:fe19fb5c-1732-4c4d-89d4-c2032e87fa1c>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.383080"} | |
| {"text": "representative theory of perception, also known as indirect realism, epistemological dualism, and the veil of perception, is a philosophical concept. it states that we do not ( and cannot ) perceive the external world directly ; instead we know only our ideas or interpretations of objects in the world. thus, a barrier or a veil of perception prevents first - hand knowledge of anything beyond it. the \" veil \" exists between the mind and the existing world. the debate then occurs about where our ideas come from, and what this place is like. an indirect realist believes our ideas come from sense data of a real, material, external world ( unlike idealists ). the doctrine states that in any act of perception, the immediate ( direct ) object of perception is only a sense - datum that represents an external object. aristotle was the first to provide an in - depth description of indirect realism. in on the soul he describes how the eye must be affected by changes in an intervening medium rather than by objects themselves. he then speculates on how these sense impressions can form our experience of seeing and reasons that an endless regress would occur unless the sense itself were self aware. he concludes by proposing that the mind is the things it thinks. he calls the images in the mind \" ideas \". the way that indirect realism involves intermediate stages between objects and perceptions immediately raises a question : how well do sense - data represent external objects, properties, and events? indirect realism creates deep epistemological problems, such as solipsism and the problem of the external world. nonetheless, indirect realism has been popular in the history of philosophy and has been developed by many philosophers including bertrand russell, baruch spinoza, rene descartes, and john locke. representationalism is one of the key assumptions of cognitivism in psychology. 
potential results of representative realism a problem with representationalism is that if simple data flow and information processing is assumed then something in the brain, described as a homunculus, must be viewing the perception. this suggests that some physical effect or phenomenon other than simple data flow and information processing might be involved in perception. this was not an issue for the rationalist philosophers such as descartes, since cartesian dualism held that there is a supernatural \" homunculus \" in the form of the soul. for those who doubt dualism, explaining precisely what it is that sees the representation is problematic. but if the transfer of information into a \" mind \" is thought to be the only explanation of how we", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6175021587344752, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:a2edcd54-41fb-41cc-b408-5aebf5860850>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.580196"} | |
| {"text": ", i am referring to the process of interpreting the sense - data i am receiving from the eiffel tower in an act of mental representation. the term \" eiffel tower \" refers to the eiffel tower, and not to the mental representation of the tower, which is the result of the act of \" seeing \". thus, both of us can refer to the same object while making our own unique representations of that object. representative realism does, unlike naive realism, take into account sense data ( the way in which the object is interpreted, not simply the objective, mathematical object ) - this induces the veil of perception wherein we are unsure the table we look at exists due to there being no direct objective proof of its existence. in other words, the table i ' m looking at appears to have a particular shape to me, due to my angle of vision, and to have a particular colour due to the way in which the light bounces off it relative to my position, and that appearance differs from the appearance of the table as seen by the person next to me. each of us sees not the actual table, but an appearance of it which merely represents an actual table out there. the representative theory of perception states that we do not perceive the external world directly ; instead we perceive our personal interpretation of an object by way of sense data. a naive realist assumes she sees the dog upon perceiving a dog, whereas a representative realist assumes she sees a sensory representation of the dog upon perceiving a dog. the external world is real and continues to exist unobserved. but we are only aware of it indirectly. our perception of the external world is mediated by way of sense data such as photons and sound waves. we perceive a representation of reality ( not the reality itself ) ; this has been given many names : ideas, sense data, percept or appearance. 
thus representative realism is the idea that our perceptions are directly caused by the intrinsic qualities of objects, and based on these perceptions we can infer things about these objects. the 17th century philosopher john locke most prominently advocated this theory. the term he used was not \" sense - datum \" but \" idea. \" \" idea \" as used in the theory of perception is a technical term, meaning roughly the same thing as sense - datum, and this article does not discuss any differences in meaning that the two terms might have. john locke thought objects had two classes of qualities : - primary qualities are qualities which are ' explanatorily basic", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.656000190391723, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:a2edcd54-41fb-41cc-b408-5aebf5860850>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.582647"} | |
| {"text": "the same thing as sense - datum, and this article does not discuss any differences in meaning that the two terms might have. john locke thought objects had two classes of qualities : - primary qualities are qualities which are ' explanatorily basic ' - which is to say, they can be referred to as the explanation for other qualities or phenomena without requiring explanation themselves - and they are distinct in that our sensory experience of them resembles them in reality. ( for example, one perceives an object as spherical precisely because of the way the atoms of the sphere are arranged. ) primary qualities cannot be removed by either thought or physical action, and include mass, movement, and, controversially, solidity ( although later proponents of the distinction between primary and secondary qualities usually discount solidity ). - secondary qualities are qualities which one ' s experience does not directly resemble ; for example, when one sees an object as red, the sensation of seeing redness is not produced by some quality of redness in the object, but by the arrangement of atoms on the surface of the object which reflects and absorbs light in a particular way. secondary qualities include colour, smell, and taste. in contemporary philosophy, epistemological dualism has come under sustained attack by philosophers like wittgenstein ( the private language argument ) and wilfrid sellars in his seminal essay \" empiricism and the philosophy of mind. \" indirect realism is argued to be problematical because of ryle ' s regress and the apparent need for a homunculus. these problems have led some philosophers to abandon realism and suggest the existence of dualism and others to propose, or suggest through emergentism, that some form of new physics is operating in the brain such as quantum mind, space - time theories of consciousness - online papers on representationalism, by various authors, compiled by david chalmers - harold i. 
brown, \" direct realism, indirect realism, and epistemology \". philosophy and phenomenological research, vol. 52, no. 2. ( jun., 1992 ), pp. 341 - 363. - what do we perceive and how do we perceive it? ( pdf file ) - neurological explanation for paranormal experiences - the representationalism web site - mccreery, c., \" perception and hallucination : the case for continuity. \u201d oxford : oxford forum ( 2006 ). an analysis of empirical arguments for representationalism. online pdf", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6327485636678647, "token_count": 497, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:a2edcd54-41fb-41cc-b408-5aebf5860850>", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.583638"} | |
| {"text": "materials enable the ideas of scientists to meet the needs of engineers. research into the relationships between the atomic structure of materials and their physical and mechanical properties, both in the united states and elsewhere, is leading to exciting new alloys and compounds that can be designed to exhibit a wide range of useful properties. for this reason a number of federal agencies, including the department of energy, department of defense, department of commerce, and advisory bodies, such as the office of science and technology policy and the national research council, have identified materials as a critical technology vital to our nation ' s national security and economic competitiveness. the integrated materials research laboratory ( imrl ) enables sandia to develop new and superior materials that meet government and industrial needs. this 140, 000 square foot building houses most of the advanced materials research and development functions at sandia. the facility integrates research from the atomic scale, through the development of electronic devices, to full scale mechanical components. the experimental work is augmented by advanced computer modeling and simulation techniques, another area of sandia ' s expertise. a wide variety of types of materials will be investigated : advanced metallic alloys, semiconductors for electronic and photonic applications, high temperature superconductors, ceramics, metals with properties tailored for improved resistance to friction, wear, corrosion and erosion, etc., and laser, optical and dielectric materials. the imrl has been built outside of sandia ' s secure area to facilitate technical cooperation with researchers from industry and universities. the new four story building has permitted sandia to bring together some 250 materials researchers previously scattered about the campus. 
it also includes space for postdoctoral researchers and guests from other organizations, facilitating the collaborative generation of new ideas, and the subsequent transfer of novel pre - competitive technologies to practice. the imrl is strategically located with our microelectronics development, compound semiconductor research and robotics manufacturing science and engineering laboratories. this drives the integration of materials research with advanced microelectronic component development creating a set of leading edge facilities in what may be termed an integrated microsystems technology park. we have developed a variety of solution chemistry routes to ceramic powders in order to control such important powder properties as particle size, agglomerate structure, dopant levels, and impurities. the ability to control powder properties is important because the microstructural and electrical properties of ceramics are strongly influenced by the nature of the powder used in their fabrication. this solution processing approach is illustrated above for the sandia - developed process to prepare high field zno varisto", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.683428268429425, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:b984e025-c67a-4a33-be94-ea001a13f793>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.654282"} | |
| {"text": "problems of philosophy chapter 5 - knowledge by acquaintance and knowledge by description after distinguishing two types of knowledge, knowledge of things and knowledge of truths, russell devotes this fifth chapter to an elucidation of knowledge of things. he further distinguishes two types of knowledge of things, knowledge by acquaintance and knowledge by description. we have knowledge by acquaintance when we are directly aware of a thing, without any inference. we are immediately conscious and acquainted with a color or hardness of a table before us, our sense - data. since acquaintance with things is logically independent from any knowledge of truths, we can be acquainted with something immediately without knowing any truth about it. i can know the color of a table \" perfectly and completely when i see it \" and not know any truth about the color in itself. the other type of knowledge of things is called knowledge by description. when we say we have knowledge of the table itself, a physical object, we refer to a kind of knowledge other than immediate, direct knowledge. \" the physical object which causes such - and - such sense - data \" is a phrase that describes the table by way of sense - data. we only have a description of the table. knowledge by description is predicated on something with which we are acquainted, sense - data, and some knowledge of truths, like knowing that \" such - and - such sense - data are caused by the physical object. \" thus, knowledge by description allows us to infer knowledge about the actual world via the things that can be known to us, things with which we have direct acquaintance ( our subjective sense - data ). according to this outline, knowledge by acquaintance forms the bedrock for all of our other knowledge. sense - data is not the only instance of things with which we can be immediately acquainted. 
for how would we recall the past, russell argues, if we could only know what was immediately present to our senses. beyond sense - data, we also have \" acquaintance by memory. \" remembering what we were immediately aware of makes it so that we are still immediately aware of that past, perceived thing. we may therefore access many past things with the same requisite immediacy. beyond sense - data and memories, we possess \" acquaintance by introspection. \" when we are aware of an awareness, like in the case of hunger, \" my desiring food \" becomes an object of acquaintance. introspective acquaintance is a kind of acquaintance with our own minds that may be understood as self - consciousness. however, this self - consciousness is really more", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6111685435180465, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.884865"} | |
| {"text": "in the case of hunger, \" my desiring food \" becomes an object of acquaintance. introspective acquaintance is a kind of acquaintance with our own minds that may be understood as self - consciousness. however, this self - consciousness is really more like a consciousness of a feeling or a particular thought ; the awareness rarely includes the explicit use of \" i, \" which would identify the self as a subject. russell abandons this strand of knowledge, knowledge of the self, as a probable but unclear dimension of acquaintance. russell summarizes our acquaintance with things as follows : \" we have acquaintance in sensation with the data of the outer senses, and in introspection with the data of what may be called the inner sense \u2014 thoughts, feelings, desires, etc. ; we have acquaintance in memory with things which have been data either of the outer senses or of the inner sense. further, it is probable, though not certain, that we have acquaintance with self, as that which is aware of things or has desires towards things. \" all these objects of acquaintance are particulars, concrete, existing things. russell cautions that we can also have acquaintance with abstract, general ideas called universals. he addresses universals more fully later in chapter 9. russell allocates the rest of the chapter to explaining how the complicated theory of knowledge by description actually works. the most conspicuous things that are known to us by description are physical objects and other people ' s minds. we approach a case of having knowledge by description when we know \" that there is an object answering to a definite description, though we are not acquainted with any such object. \" russell offers several illustrations in the service of understanding knowledge by description. he claims that it is important to understand this kind of knowledge because our use of language depends so heavily on it. 
when we say common words or proper names, we are really relying on the meanings implicit in descriptive knowledge. the thought connoted by the use of a proper name can only really be explicitly expressed through a description or proposition. bismarck, or \" the first chancellor of the german empire, \" is russell ' s most cogent example. imagine that there is a proposition, or statement, made about bismarck. if bismarck is the speaker, admitting that he has a kind of direct acquaintance with his own self, bismarck might have voiced his name in order to make a self - referential judgment, of which his name is a constituent. in this simplest case, the \" proper name has the direct use which it always wishes to have, as simply", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6138748115867971, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.886020"} | |
| {"text": "own self, bismarck might have voiced his name in order to make a self - referential judgment, of which his name is a constituent. in this simplest case, the \" proper name has the direct use which it always wishes to have, as simply standing for a certain object, and not for a description of the object. \" if one of bismarck ' s friends who knew him directly was the speaker of the statement, then we would say that the speaker had knowledge by description. the speaker is acquainted with sense - data which he infers corresponds with bismarck ' s body. the body or physical object representing the mind is \" only known as the body and the mind connected with these sense - data, \" which is the vital description. since the sense - data corresponding to bismarck change from moment to moment and with perspective, the speaker knows which various descriptions are valid. still more removed from direct acquaintance, imagine that someone like you or i comes along and makes a statement about bismarck that is a description based on a \" more or less vague mass of historical knowledge. \" we say that bismarck was the \" first chancellor of the german empire. \" in order to make a valid description applicable to the physical object, bismarck ' s body, we must find a relation between some particular with which we have acquaintance and the physical object, the particular with which we wish to have an indirect acquaintance. we must make such a reference in order to secure a meaningful description. to usefully distinguish particulars from universals, russell posits the example of \" the most long - lived of men, \" a description which wholly consists of universals. we assume that the description must apply to some man, but we have no way of inferring any judgment about him. 
russell remarks, \" all knowledge of truths, as we shall show, demands acquaintance with things which are of an essentially different character from sense - data, the things which are sometimes called ' abstract ideas ', but which we shall call ' universals '. \" the description composed only of universals gives no knowledge by acquaintance with which we might anchor an inference about the longest - lived man. a further statement about bismarck, like \" the first chancellor of the german empire was an astute diplomatist, \" is a statement that contains particulars and asserts a judgment that we can only make in virtue of some acquaintance ( like something heard or read ). statements about things known by description function in our language as statements about the \" actual thing described ; \" that is, we intend to refer to that thing.", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6106778720938946, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.887326"} | |
| {"text": "judgment that we can only make in virtue of some acquaintance ( like something heard or read ). statements about things known by description function in our language as statements about the \" actual thing described ; \" that is, we intend to refer to that thing. we intend to say something with the direct authority that only bismarck himself could have when he makes a statement about himself, something with which he has direct acquaintance. yet, there is a spectrum of removal from acquaintance with the relevant particulars : from bismarck himself, \" there is bismarck to people who knew him ; bismarck to those who only know of him through history \" and at a far end of the spectrum \" the longest lived of men. \" at the latter end, we can only make propositions that are logically deducible from universals, and at the former end, we come as close as possible to direct acquaintance and can make many propositions identifying the actual object. it is now clear how knowledge gained by description is reducible to knowledge by acquaintance. russell calls this observation his fundamental principle in the study of \" propositions containing descriptions \" : \" every proposition which we can understand must be composed wholly of constituents with which we are acquainted. \" indirect knowledge of some particulars seems necessary if we are to expressively attach meanings to the words we commonly use. when we say something referring to julius caesar, we clearly have no direct acquaintance with the man. rather, we are thinking of such descriptions as \" the man who was assassinated on the ides of march \" or \" the founder of the roman empire. \" since we have no way of being directly acquainted with julius caesar, our knowledge by description allows us to gain knowledge of \" things which we have never experienced. \" it allows us to overstep the boundaries of our private, immediate experiences and engage a public knowledge and public language. 
this knowledge by acquaintance and knowledge by description theory was a famous epistemological problem - solver for russell. its innovative character allowed him to shift to his moderate realism, a realism ruled by a more definite categorization of objects. it is a theory of knowledge that considers our practice of language to be meaningful and worthy of detailed analysis. russell contemplates how we construct a sense of meaning about objects remote from our experience. the realm of acquaintance offers the most secure references for our understanding of the world. knowledge by description allows us to draw inferences from our realm of acquaintance but leaves us in a more vulnerable position. since knowledge by description also depends on truths, we are prone to", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6245714905592379, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e>", "chunk_index": 3, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.888514"} | |
| {"text": "of acquaintance offers the most secure references for our understanding of the world. knowledge by description allows us to draw inferences from our realm of acquaintance but leaves us in a more vulnerable position. since knowledge by description also depends on truths, we are prone to error about our descriptive knowledge if we are somehow mistaken about a proposition that we have taken to be true. critics of this theory have held that russell ' s hypothesis of knowledge by description is confusing. his comments when defining sense - data, that the physical world is unknowable to us, contradict his theory of knowledge by descriptions. he implies that \" knowledge by description \" is not really a form of knowledge since we can only know those things with which we are acquainted and we cannot be acquainted with physical objects. russell ' s theory amounts to the proposition that our acquaintance with mental objects appears related in a distant way to physical objects and renders us obliquely acquainted with the physical world. sense - data are our subjective representations of the external world, and they negotiate this indirect contact. while innovative, russell ' s theory of knowledge by description is not an attractive theory of knowledge. it is clearly unappealing because our impressions of the real world, on his view, are commensurate with muddy representations of reality. though we have direct access to these representations, it seems impossible to have any kind of direct experience of reality. reality, rather, consists in unconscious, inferential pieces of reasoning.", "subdomain_id": "subdomain_quantum_mechanics", "similarity_score": 0.6352533965165041, "token_count": 336, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e>", "chunk_index": 4, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.889226"} | |
| {"text": "in a galaxy simultaneously, rather than observing one location at a time. stis can also record a broader span of wavelengths in the spectrum of a star at one time. as a result, stis is much more efficient at obtaining scientific data than the earlier hst spectrographs. a power supply in stis failed in august 2004, rendering it inoperable. during the servicing mission in 2009, astronauts successfully repaired the stis by removing the circuit card containing the failed power supply and replacing it with a new card. since stis was not designed for in - orbit repair of internal electronics, this task was a substantial challenge for the astronaut crew. near infrared camera and multi - object spectrometer the near infrared camera and multi - object spectrometer ( nicmos ) is an hst instrument providing the capability for infrared imaging and spectroscopic observations of astronomical targets. nicmos detects light with wavelengths between 0. 8 and 2. 5 microns - longer than the human - eye limit. the sensitive hgcdte arrays that comprise the infrared detectors in nicmos must operate at very cold temperatures. after its deployment, nicmos kept its detectors cold inside a cryogenic dewar ( a thermally insulated container much like a thermos bottle ) containing frozen nitrogen ice. nicmos is hst ' s first cryogenic instrument. the frozen nitrogen ice cryogen in nicmos was exhausted in early 1999, rendering the instrument inoperable at that time. an alternate means of cooling the nicmos was developed and installed in the march 2002 servicing mission. this device uses a mechanical cooler to cool the detectors to the low temperatures necessary for operations. the technology for this cooler was not available when the instrument was originally designed, but fortunately became available in time to support the reactivation of the instrument. 
since late 2008, the nicmos cooling system ( ncs ) has experienced difficulties maintaining the instrument \u2019 s nominal scientific operating state, in which the detectors are maintained at ~ 77k. repeated restart attempts have demonstrated that it is not possible to restart the ncs in a cold state immediately following safing events. the main culprit for the problems is believed to be water ice in the primary ( circulator ) loop of the ncs. an inefficient approach to this problem would be to put the ncs through a several - month warm - up / cooldown cycle and hope that there is an opportunity for science prior to the next payload safing event. the only feasible path towards satisfactory operation of nic", "subdomain_id": "subdomain_quantum_metrology", "similarity_score": 0.6017914616613027, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:eb22ec4d-4069-49a1-8a5e-6fdb526d2a5a>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:44.923920"} | |
| {"text": "achieving quality the final piece of the puzzle properly calibrated testing equipment ensures quality in order for a business to generate a high - quality product or service, it is essential to obtain a quality measurement system that will be used to study the integrity of its finished product. in the certification industry, testing equipment is essential to measure the different variables that could potentially alter the quality of a raw material, finished product or final status on a certification report. the quality of a product or service is compromised if the test equipment used to measure the final quality is not reading accurate results. this is why a flawless calibration system is the final puzzle piece to achieving a high - quality product or service. what is calibration? in this article, calibration will be defined as the comparison between measurements. during the calibration of a test instrument, a device with a known magnitude or assigned correctness, known as a standard, will be used to check the measuring accuracy of a test instrument. calibration ensures that a measuring instrument is providing results for a sample that fall in an acceptable accurate range. accurate testing results allow manufacturers or certification agencies to eliminate or minimize factors that could cause inaccurate measurements during production or testing. calibration procedures naturally vary depending on the instrument being calibrated. generally, the test instrument is used to test calibrators, which are one or more test samples that have known values. the results are then used to establish a relationship between the measurement instrument and the known values. the calibration processes eliminate or \u201c zero out \u201d the current instrument error at the specified calibration points. this process basically \u201c teaches \u201d the instrument to produce more accurate results. 
after a test instrument is calibrated, it will provide more accurate results for unknown values tested during its everyday normal usage. to keep a successful calibration system, calibrations must be done consistently and on a systematic schedule. when is calibration needed? during the manufacturing or certification process of any product, there may be many different types of test instruments used to determine the quality of a product or service. the question of which test instruments need to be calibrated and which do not is answered by whether or not the test performed and the test instrument used affect the final quality of the product or service. there are situations in which a test instrument does not need to be calibrated. if the readings of the test instrument are for reference only, and the accuracy of the test results have little or no impact on the quality of the product or service", "subdomain_id": "subdomain_quantum_metrology", "similarity_score": 0.6085708139431534, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:6f31546c-b0ec-4f52-9416-e3bd9ec13159>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:45.195845"} | |
| {"text": "i said that there are n ^ 2 interactions and not ( n ^ 2 ) / 2? this is because each form factor is like a diode in that it only handles energy going in one direction : from a \" source surface \" to a \" destination surface. \" in this case, we \u2019 ll say that the source surfaces are index by columns and destination surfaces are indexed by rows. source surfaces will emit their energy to the destination surfaces. now lets solve the matrix. to do this, we simply visit each column ( source ) in the matrix and emit energy to each row ( destination ) in that column. when we do this, we \u2019 ll be placing some of that radiated energy ( from the source ) in the illumination value for the destination. but these surfaces are reflectors, which means they \u2019 re going to reflect some energy back into the scene. based on the surface \u2019 s reflectivity, we \u2019 ll add a little bit of energy to the destination \u2019 s radiative energy. this radiative energy will eventually make its way back into the scene ( i. e. to the other surfaces ) as we progress through the matrix. if the destination is a perfect reflector ( i. e. it reflects every single bit of energy it receives - a mirror ) then there will be no energy stored in the destination \u2019 s illumination, it would all go to its radiative energy. the inverse is also true : a perfectly black surface might not reflect any energy back into the scene, absorbing it all, so every bit of energy it receives is stored in its illumination value. if you \u2019 re starting to think that we \u2019 re making a black surface white, we \u2019 re not. remember, we \u2019 re dealing with light, so the color of a surface is ultimately multiplied by its illumination. in the case of the perfectly black surface, the surface remains visually black. once we \u2019 ve gone through the matrix once, we do it all over again. 
this is necessary because we we \u2019 re storing some energy as illumination, and some as radiative energy. now it \u2019 s time to go through the matrix again and start distributing that reflected radiative energy. we \u2019 ll go through this matrix over and over again until the total amount of radiative energy for all surfaces is relatively small. the next step if you made it this far without getting lost, you \u2019 re in the home stretch. there \u2019 s still a lot we haven \u2019 t covered yet, so let \u2019 s move on. i \u2019", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6237825153010621, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d804860f-0731-4f81-9028-027f53b66241>", "chunk_index": 2, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:45.708009"} | |
| {"text": "element from its source element. ironically, this is called \" gathering. \" the progressive refinement approach reversed this and defined ( the other incredibly ironic term ) \" shooting. \" the basic idea behind progressive refinement starts by finding the surface with the most energy to contribute to the scene ( i. e. has the highest amount of radiative energy. ) this surface would then iterate through all other surfaces, distributing its energy along the way. after this process was completed, the image was then rendered for the user, and the process began again, finding the surface with the most energy to contribute to the scene. each pass would cause another render of the scene, allowing the user to progressively evaluate the progress. if the progress showed a problem along the way ( an illumination surface was in the wrong place or the wrong color ) they could stop the process and make the needed adjustments. during this process, the user would see a completely dark scene progress to a fully lit scene. to accommodate this sharp contrast in visual difference from beginning to end, the progressive refinement technique added something called the \" ambient term \". before i continue, i want to point something out that is pretty important in radiosity. there is no such thing as ambient light in real life. ambient light is something that was invented to accommodate the need for what appears to be a \" global light \" in real life. but in reality, ambient light doesn \u2019 t exist. rather, light is always being reflected from surface to surface, which is how it finds its way into all the nooks and crannies of real - world detail. before the advent of radiosity, ambient light was the best thing available to the typical rendering architectures. it is safe to think of radiosity is a more accurate solution to ambient ( global ) light. this is why radiosity is considered a technique for \" global illumination. 
\" the ambient term starts off as a \" differential area sum \" of the radiative energy for the entire scene. what this means is that it \u2019 s a number that represents the average amount of light that each surface will receive throughout the processing of the entire radiosity solution. we can calculate that average without doing all the work simply because it \u2019 s an average amount of energy, not a specific amount of energy for a single surface. as each progressive pass emits the radiative energy for a surface, the ambient term is slowly decreased. as the total radiative energy of the scene approaches zero, so does the ambient term ( though", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.610219180834279, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:d804860f-0731-4f81-9028-027f53b66241>", "chunk_index": 4, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:45.709964"} | |
| {"text": "science fiction ( abbreviated sf or sci - fi with varying punctuation and capitalization ) is a broad genre of fiction that often involves speculations based on current or future science or technology. fiction is the telling of stories which are not real more specifically fiction is an imaginative form of narrative, one of the four basic rhetorical modes. science ( from the latin scientia, meaning \" knowledge \" or \" knowing \" is the effort to discover, and increase human understanding technology is a broad concept that deals with a species ' usage and knowledge of tools and crafts and how it affects a species ' ability to control and adapt science fiction is found in books, art, television, films, games, theater, and other media. in organizational or marketing contexts, science fiction can be synonymous with the broader definition of speculative fiction, encompassing creative works incorporating imaginative elements not found in contemporary reality ; this includes fantasy, horror, and related genres. speculative fiction is a term used as an inclusive descriptor covering a group of fiction genres that speculate about worlds that are unlike the real world in fantasy is a genre that uses magic and other supernatural forms as a primary element of plot, theme, and / or setting horror fiction is broadly fiction in any medium intended to scare unsettle or horrify the audience science fiction differs from fantasy in that, within the context of the story, its imaginary elements are largely possible within scientifically established or scientifically postulated laws of nature ( though some elements in a story might still be pure imaginative speculation ). science fiction is largely based on writing entertainingly and rationally about alternate possibilities in settings that are contrary to known reality. these include : exploring the consequences of such differences is the traditional purpose of science fiction, making it a \" literature of ideas \". 
outer space, often simply called space, comprises the relatively empty regions of the universe outside the escape velocities of celestial bodies. extraterrestrial life is life originating outside of the earth. this article details time travel itself for other uses see time traveler. psionics is the study and / or practice of using the mind to induce paranormal phenomena nanotechnology, sometimes shortened to nanotech, refers to a field of applied science whose theme is the control of matter on an atomic and molecular a robot is a mechanical or virtual artificial agent in practice it is usually an electro - mechanical system which by its appearance or movements science fiction is difficult to define, as it includes a wide range of subgenres and themes. a genre ( \u02c8\u0292\u0251\u02d0nr\u0259", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.6301648156223105, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:c1ca71b9-82bb-4291-8b61-15273ec28084>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:46.389879"} | |
| {"text": "mechanical or virtual artificial agent in practice it is usually an electro - mechanical system which by its appearance or movements science fiction is difficult to define, as it includes a wide range of subgenres and themes. a genre ( \u02c8\u0292\u0251\u02d0nr\u0259 also / \u02c8d\u0292\u0251\u02d0nr\u0259 / from french \" kind \" or \" sort \" from latin : genus ( stem gener - ) is a loose set author and editor damon knight summed up the difficulty by stating that \" science fiction is what we point to when we say it \". damon francis knight ( september 19, 1922 & ndash april 15, 2002 ) was an american science fiction author, vladimir nabokov argued that were we rigorous with our definitions, shakespeare ' s play the tempest would have to be termed science fiction. this page is about the novelist for his father the politician see vladimir dmitrievich nabokov. william shakespeare ( baptised the tempest is a comedy written by william shakespeare. it is generally dated to 1610 - 11 and accepted as the last play written solely by him although according to science fiction writer robert a. heinlein, \" a handy short definition of almost all science fiction might read : realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the nature and significance of the scientific method. robert anson heinlein ( july 7 1907 \u2013 may 8 1988 was an american novelist and science fiction writer. scientific method refers to bodies of techniques for investigating phenomena \" rod serling ' s stated definition is \" fantasy is the impossible made probable. rodman edward \" rod \" serling ( december 25, 1924 & ndash june 28, 1975 ) was an american screenwriter, best known science fiction is the improbable made possible. 
\" lester del rey wrote, \" even the devoted aficionado \u2013 or fan - has a hard time trying to explain what science fiction is, \" and that the reason for there not being a \" full satisfactory definition \" is that \" there are no easily delineated limits to science fiction. lester del rey ( june 2 1915 & ndash may 10 1993 ) was an american science fiction author and editor. \" author mark c. glassy stated that the definition of science fiction was very much like the definition of porn ; you don ' t know what it is, but you know it when you see it. pornography or porn is the explicit depiction of sexual subject matter with the sole intention of sexually exciting the viewer forrest j. ac", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.6040582884658365, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:c1ca71b9-82bb-4291-8b61-15273ec28084>", "chunk_index": 1, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:46.391020"} | |
| {"text": "physics ( greek physis - \u03c6\u03c5\u03c3\u03b9\u03c2 in everyday terms is the science of matter and its motion. astrophysics is the branch of astronomy that deals with the physics of the universe, including the physical properties ( luminosity, chemistry ( from egyptian keme ( chem meaning \" earth \" ) is the science concerned with the composition structure and properties many accurate predictions of the future come from the hard science fiction subgenre, but numerous inaccurate predictions have emerged as well. hard science fiction is a category of science fiction characterized by an emphasis on scientific or technical detail or on scientific accuracy or on both for example, arthur c. clarke accurately predicted ( and invented the concept of ) geostationary communications satellites, but erred in his prediction of deep layers of moondust in lunar craters. sir arthur charles clarke, cbe ( 16 december 1917 \u2013 19 march 2008 was a british science fiction author, inventor, and a geostationary orbit ( geo is a geosynchronous orbit directly above the earth ' s equator ( 0\u00b0 latitude ) with a period equal to the earth ' s some hard sf authors have distinguished themselves as working scientists, including robert forward, gregory benford, charles sheffield, and geoffrey a. landis, while mathematician authors include rudy rucker and vernor vinge. this is about the physicist and science fiction writer you may be looking for his son robert d gregory benford ( born january 30, 1941 in mobile alabama ) is an american science fiction author and astrophysicist who is on the charles sheffield ( june 25, 1935 & ndash november 2, 2002 ) was an english - born mathematician physicist and science fiction geoffrey a landis works as a scientist and writer of science fiction. 
rudolf von bitter rucker ( born march 22, 1946 in louisville kentucky ) is an american computer scientist and science fiction vernor steffen vinge ( \u02c8v\u026and\u0292i ( born october 2, 1944 in waukesha wisconsin, u other noteworthy hard sf authors include hal clement, joe haldeman, larry niven, jerry pournelle, kim stanley robinson, robert j. sawyer, and stephen baxter. harry clement stubbs ( may 30, 1922 in somerville massachusetts \u2013 october 29, 2003 in milton massachusetts ) better joe william haldeman is an american science fiction author. life and work haldeman was born in 1943 in oklahoma city oklahoma laurence van cott niven ( born april 30, 1938 los angeles california ) is a us science fiction author. jerry eugene pournelle (", "subdomain_id": "subdomain_quantum_field_theory", "similarity_score": 0.678560659414555, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:c1ca71b9-82bb-4291-8b61-15273ec28084>", "chunk_index": 10, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:46.402699"} | |
| {"text": "of an ego, or as an eternal state of existence into which an ego or self enters or with which it merges. hence it is said : \" mere suffering exists, no sufferer the deed is, but no doer of the deed is nibbana is, but not the man that enters the path is, but no traveler on it is literature : for texts on nibbana, see path, 36ff. - see vis. m. xvi. 64ff. - anatta and nibbana, by nyanaponika thera ( wheel 11 ) ; the buddhist doctrine of nibbana, by ven. p. vajiranana & f. story ' rebirth ', is a synonym for patisandhi ( q. v. ). - panna ) : ' morality ( concentration, wisdom ) connected with penetration ' ; s. hana - bhagiya - sila. of aversion ', is one of the 18 chief kinds of insight ; s. vipassana ( 4 ), samatha - vipassana ( 2 ), visuddhi ( vi, 5 ). perception ( or consciousness, or view ) of permanency, is one of the 4 perversions nihilistic view : natthika - ditthi ; exercise ' s. kasina. nimitta : mark, sign ; image ; target, object ; cause, condition. these meanings are used in, and adapted to, many contexts of which only the doctrinal ones are mentioned here. 1. ' mental ( reflex - ) image ', obtained in meditation. in full clarity, it will appear in the mind by successful practice of certain concentration - exercises and will then appear as vividly as if seen by the eye. the object perceived at the very beginning of concentration is called the preparatory image ( parikamma - nimitta ). the still unsteady and unclear image, which arises when the mind has reached a weak degree of concentration, is called the acquired image ( uggaha - nimitta ). an entirely clear and immovable image arising at a higher degree of concentration is the counter - image ( patibhaga - nimitta ). as soon as this image arises, the stage of neighbourhood ( or access ) concentration ( upacara - samadhi ) is reached. for further details, s. kasina, samadhi. 2. 
' sign of ( previous ) kamma ' ( kamma - nimit", "subdomain_id": "subdomain_quantum_optics", "similarity_score": 0.611517879849363, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:1f42ae89-6657-46f0-bd5e-b007d6e09d74>", "chunk_index": 6, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:47.072485"} | |
| {"text": "it is possible to use a hash function to construct a block cipher with a structure similar to des? because a hash function is one way and a block cipher must be reversible ( to decrypt ), how is it possible? migrated from security. stackexchange. com nov 8 ' 12 at 12 : 26 it is possible to build a block cipher out of a great many things. if you want to use a hash function, the classic trick is to follow a feistel structure, which is, incidentally, the same kind of structure than what des uses. the schematics on the wikipedia page are quite clear ; you would use the hash function for the \" f \" part, which combines one ( sub ) key and one half of the current block, to produce a value which is to be xored with the other half of the current block. the beauty of the scheme is that the \" f \" function is always invoked in the same direction, both for encryption and for decryption. therefore, it can be a one - way function, like a hash function. luby and rackoff have demonstrated in 1988 that the feistel scheme offers remarkable security with as little as four rounds, provided that the \" f \" function is \" perfect \" and that the cipher block size is big enough ( to get the standard \" 128 - bit security \" out of the luby - rackoff proof, you need 256 - bit blocks ). of course, any concrete hash function cannot be really \" perfect \" ( see for instance this answer ) and there are a lot of subtle details which can destroy the security of the best thought cipher structure. as usual, you are strongly advised not to build your own crypto ( unless you are quite clear with yourself that you do it for learning and not to actually protect any data of value ). also, if you build such a cipher, you will probably notice that the resulting performance is disappointing. with a secure hash function like sha - 256, you could expect an encryption bandwidth roughly 20 times lower than what aes would get you. 
it is possible to use a hash function like ( sha family, for instance ) in ofb or cfb ( and possibly ctr ), by using the hash function ( with the key as part of the input! ) in the place of the block cipher encryption. that said, thomas is right - - do not build your own crypto. just use a normal block cipher. you ' ll get better performance ( especially if it ' s", "subdomain_id": "subdomain_quantum_cryptography", "similarity_score": 0.6146444799020456, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:f271c8d6-4b16-4cdd-9b0f-086b5ab89383>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:47.662680"} | |
| {"text": "from wikipedia, the free encyclopedia | part of the politics series on | | communism portal | communism is a social structure in which classes are abolished and property is commonly controlled, as well as a political philosophy and social movement that advocates and aims to create such a society. karl marx, the father of communist thought, posited that communism would be the final stage in society, which would be achieved through a proletarian revolution and only possible after a socialist stage develops the productive forces, leading to a superabundance of goods and services. \" pure communism \" in the marxian sense refers to a classless, stateless and oppression - free society where decisions on what to produce and what policies to pursue are made democratically, allowing every member of society to participate in the decision - making process in both the political and economic spheres of life. in modern usage, communism is often used to refer to bolshevism or marxism - leninism and the policies of the various communist states which had government ownership of all the means of production and centrally planned economies. communist regimes, all inspired only by the leninist current, have historically been authoritarian, repressive, and coercive governments concerned primarily with preserving their own power. as a political ideology, communism is usually considered to be a branch of socialism ; a broad group of economic and political philosophies that draw on the various political and intellectual movements with origins in the work oftheorists of the industrial revolution and the french revolution. communism attempts to offer an alternative to the problems with the capitalist market economy and the legacy of imperialism and nationalism. 
marx states that the only way to solve these problems is for the working class ( proletariat ), who according to marx are the main producers of wealth in society and are exploited by the capitalist - class ( bourgeoisie ), to replace the bourgeoisie as the ruling class in order to establish a free society, without class or racial divisions. the dominant forms of communism, such as leninism, stalinism, maoism and trotskyism are based on marxism, as well as others forms of communism ( such as luxemburgism and council communism ), but non - marxist versions of communism ( such as christian communism and anarchist communism ) also exist. karl marx never provided a detailed description as to how communism would function as an economic system, but it is understood that a communist economy would consist of common ownership of the means of production, culminating in the negation of the concept of private ownership of capital, which referred to", "subdomain_id": "subdomain_quantum_materials", "similarity_score": 0.6016966946878595, "token_count": 512, "source_dataset": "HuggingFaceFW/fineweb-edu", "source_id": "<urn:uuid:93baa9c6-f802-404a-b2f0-f7525715ede9>", "chunk_index": 0, "filtering_threshold": 0.6, "created_at": "2025-12-25T20:06:47.743617"} | |