https://en.wikipedia.org/wiki/Mordant
A mordant or dye fixative is a substance used to set (i.e., bind) dyes on fabrics. It does this by forming a coordination complex with the dye, which then attaches to the fabric (or tissue). It may be used for dyeing fabrics or for intensifying stains in cell or tissue preparations. Although mordants are still used, especially by small-batch dyers, they have been largely displaced in industry by direct dyes. The term mordant comes from the Latin mordere, "to bite". In the past, it was thought that a mordant helped the dye bite onto the fiber so that it would hold fast during washing. A mordant is often a polyvalent metal ion, one example being chromium(III). The resulting coordination complex of dye and ion is colloidal and can be either acidic or alkaline.

Common dye mordants Mordants include tannic acid, oxalic acid, alum, chrome alum, sodium chloride, and certain salts of aluminium, chromium, copper, iron, iodine, potassium, sodium, tungsten, and tin. Iodine is often referred to as a mordant in Gram stains, but it is in fact a trapping agent.

Dyeing methods The three methods used for mordanting are: Pre-mordanting (onchrome): The substrate is treated with the mordant and then the dye. The complex between the mordant and dye is formed on the fibre. Meta-mordanting (metachrome): The mordant is added to the dye bath itself. The process is simpler than pre- or post-mordanting, but is applicable to only a few dyes. Mordant red 19 is applied in this manner. Post-mordanting (afterchrome): The dyed material is treated with a mordant. The complex between the mordant and dye is formed on the fibre. The type of mordant used affects the shade obtained after dyeing and also affects the fastness properties of the dye. The choice among the pre-mordant, meta-mordant and post-mordant methods is influenced by: The action of the mordant on the substrate: if the mordant and dye methods are harsh (for example, an acidic mordant with an acidic dye), pre-
https://en.wikipedia.org/wiki/Version%207%20Unix
Version 7 Unix, also called Seventh Edition Unix, Version 7 or just V7, was an important early release of the Unix operating system. V7, released in 1979, was the last Bell Laboratories release to see widespread distribution before the commercialization of Unix by AT&T Corporation in the early 1980s. V7 was originally developed for Digital Equipment Corporation's PDP-11 minicomputers and was later ported to other platforms. Overview Unix versions from Bell Labs were designated by the edition of the user's manual with which they were accompanied. Released in 1979, the Seventh Edition was preceded by Sixth Edition, which was the first version licensed to commercial users. Development of the Research Unix line continued with the Eighth Edition, which incorporated development from 4.1BSD, through the Tenth Edition, after which the Bell Labs researchers concentrated on developing Plan 9. V7 was the first readily portable version of Unix. As this was the era of minicomputers, with their many architectural variations, and also the beginning of the market for 16-bit microprocessors, many ports were completed within the first few years of its release. The first Sun workstations (then based on the Motorola 68000) ran a V7 port by UniSoft; the first version of Xenix for the Intel 8086 was derived from V7 and Onyx Systems soon produced a Zilog Z8000 computer running V7. The VAX port of V7, called UNIX/32V, was the direct ancestor of the popular 4BSD family of Unix systems. The group at the University of Wollongong that had ported V6 to the Interdata 7/32 ported V7 to that machine as well. Interdata sold the port as Edition VII, making it the first commercial UNIX offering. DEC distributed their own PDP-11 version of V7, called V7M (for modified). V7M, developed by DEC's original Unix Engineering Group (UEG), contained many enhancements to the kernel for the PDP-11 line of computers including significantly improved hardware error recovery and many additional device driver
https://en.wikipedia.org/wiki/Computers%20and%20Typesetting
Computers and Typesetting is a five-volume set of books by Donald Knuth, published in 1986, describing the TeX and Metafont systems for digital typography. Knuth's Computers and Typesetting project was the result of his frustration with the lack of decent software for the typesetting of mathematical and technical documents. The results of this project include TeX for typesetting, Metafont for font construction, and the Computer Modern typefaces that are the default fonts used by TeX. In the series of five books Knuth not only describes the TeX and Metafont languages (volumes A and C), he also describes and documents the source code (in the WEB programming language) of the TeX and Metafont interpreters (volumes B and D), and the source code for the Computer Modern fonts used by TeX (volume E). The set stands as a tour de force demonstration of literate programming. The books themselves were typeset in the Computer Modern Roman typeface using TeX; thus, in Knuth's words, they "belong to the class of sets of books that describe precisely their own appearance."

Volumes The five volumes are published by Addison-Wesley. Volume A: The TeXbook. Describes the TeX typesetting language. It is by far the most common and available of the set, as the TeX interpreter is widely used for typesetting. It is available in softcover (blue spiral-bound with a built-in flap for a bookmark) and hardcover. Volume B: TeX: The Program. A documented listing of the source code of the TeX interpreter; first published in hardcover in 1986. Volume C: The METAFONTbook. Describes the METAFONT font description language. Available in hardcover and softcover. Volume D: Metafont: The Program. A documented listing of the source code of the Metafont interpreter. Available in hardcover and paperback. Volume E: Computer Modern Typefaces. A character-by-character listing (in the Metafont language) of the source code for the Computer Modern typefaces (cmr, cmbx, cmti, etc.) used by TeX. Available in hardcover and softcover. The set is also avai
https://en.wikipedia.org/wiki/4000-series%20integrated%20circuits
The 4000 series is a CMOS logic family of integrated circuits (ICs) first introduced in 1968 by RCA. It was slowly migrated into the buffered 4000B series after about 1975. It had a much wider supply voltage range than any contemporary logic family (3 V to 18 V recommended range for the "B" series). Almost all IC manufacturers active during this initial era fabricated models for this series. Its naming convention is still in use today.

History The 4000 series was introduced as the CD4000 COS/MOS series in 1968 by RCA as a lower-power and more versatile alternative to the 7400 series of transistor-transistor logic (TTL) chips. The logic functions were implemented with the newly introduced Complementary Metal-Oxide-Semiconductor (CMOS) technology. While initially marketed with "COS/MOS" labeling by RCA (which stood for Complementary Symmetry Metal-Oxide Semiconductor), the shorter CMOS terminology emerged as the industry preference for referring to the technology. The first chips in the series were designed by a group led by Albert Medwin. Wide adoption was initially hindered by the comparatively lower speeds of the designs compared to TTL-based designs. The speed limitations were eventually overcome with newer fabrication methods (such as self-aligned polysilicon gates instead of metal gates), and these CMOS variants performed on par with contemporary TTL. The series was extended in the late 1970s and 1980s with new models that were given 45xx and 45xxx designations, but these are usually still regarded by engineers as part of the 4000 series. In the 1990s, some manufacturers (e.g. Texas Instruments) ported the 4000 series to newer HCMOS-based designs to provide greater speeds.

Design considerations The 4000 series facilitates simpler circuit design through relatively low power consumption, a wide range of supply voltages, and vastly increased load-driving capability (fan-out) compared to TTL. This makes the series ideal for use in prototyping LSI designs. While TTL ICs are similarly modular
https://en.wikipedia.org/wiki/Basic%20exchange%20telephone%20radio%20service
The basic exchange telephone radio service (BETRS) is a fixed radio service in which a multiplexed digital radio link is used as the last segment of the local loop to provide wireless telephone service to subscribers in remote areas. BETRS technology was developed in the mid-1980s and allows up to four subscribers to use a single radio channel pair simultaneously, without interfering with one another. In the US, this service may operate in the paired 152/158 and 454/459 MHz bands and on 10-channel blocks in the 816-820/861-865 MHz bands. BETRS may be licensed only to state-certified carriers in the area where the service is provided and is considered a part of the public switched telephone network (PSTN) by state regulators. Regulation of this service currently resides in parts 1 and 22 of the Code of Federal Regulations (CFR), Subtitle 47 on Telecommunications, and may be researched or ordered through the Government Printing Office (GPO).

Sources Federal Communications Commission (Wireless Bureau)

External links Federal Communications Commission website; Government Printing Office website
https://en.wikipedia.org/wiki/Risch%20algorithm
In symbolic computation, the Risch algorithm is a method of indefinite integration used in some computer algebra systems to find antiderivatives. It is named after the American mathematician Robert Henry Risch, a specialist in computer algebra who developed it in 1968. The algorithm transforms the problem of integration into a problem in algebra. It is based on the form of the function being integrated and on methods for integrating rational functions, radicals, logarithms, and exponential functions. Risch called it a decision procedure, because it is a method for deciding whether a function has an elementary function as an indefinite integral, and if it does, for determining that indefinite integral. However, the algorithm does not always succeed in identifying whether or not the antiderivative of a given function can in fact be expressed in terms of elementary functions. The complete description of the Risch algorithm takes over 100 pages. The Risch–Norman algorithm is a simpler, faster, but less powerful variant that was developed in 1976 by Arthur Norman. Some significant progress has been made in computing the logarithmic part of a mixed transcendental-algebraic integral by Brian L. Miller.

Description The Risch algorithm is used to integrate elementary functions. These are functions obtained by composing exponentials, logarithms, radicals, trigonometric functions, and the four arithmetic operations. Laplace solved this problem for the case of rational functions, as he showed that the indefinite integral of a rational function is a rational function plus a finite number of constant multiples of logarithms of rational functions. The algorithm suggested by Laplace is usually described in calculus textbooks; as a computer program, it was finally implemented in the 1960s. Liouville formulated the problem that is solved by the Risch algorithm. Liouville proved by analytical means that if there is an elementary solution to the equation then there exist cons
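The decision-procedure behaviour described above can be seen in practice: SymPy ships a partial implementation of the Risch algorithm, enabled via the `risch=True` flag of `integrate` (this sketch assumes SymPy is installed; it is an illustration, not the algorithm the article describes in full).

```python
from sympy import symbols, exp, integrate, diff, simplify
from sympy.integrals.risch import NonElementaryIntegral

x = symbols('x')

# An elementary antiderivative exists; the algorithm constructs it.
F = integrate(x * exp(x), x, risch=True)
assert simplify(diff(F, x) - x * exp(x)) == 0

# exp(x**2) provably has no elementary antiderivative; rather than
# giving up silently, SymPy's Risch code returns a special
# NonElementaryIntegral object, i.e. it *decides* the question.
G = integrate(exp(x**2), x, risch=True)
print(isinstance(G, NonElementaryIntegral))
```

The contrast between the two calls is the point: a complete Risch implementation either produces an elementary antiderivative or proves that none exists.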
https://en.wikipedia.org/wiki/Surface-water%20hydrology
Surface-water hydrology is the sub-field of hydrology concerned with above-earth water (surface water), in contrast to groundwater hydrology that deals with water below the surface of the Earth. Its applications include rainfall and runoff, the routes that surface water takes (for example through rivers or reservoirs), and the occurrence of floods and droughts. Surface-water hydrology is used to predict the effects of water constructions such as dams and canals. It considers the layout of the watershed, geology, soils, vegetation, nutrients, energy and wildlife. Modelled aspects include precipitation, the interception of rain water by vegetation or artificial structures, evaporation, the runoff function and the soil-surface system itself. When surface water seeps into the ground above bedrock, it is categorized as groundwater, and the rate at which this occurs determines baseflow needs for instream flow, as well as subsurface water levels in wells. While groundwater is not part of surface-water hydrology, it must be taken into account for a full understanding of the behaviour of surface water. Glacial hydrology is a part of surface-water hydrology; some of the runoff from glaciers and snow also involves groundwater hydrology concepts.

See also Hydrological transport model; Moisture recycling
https://en.wikipedia.org/wiki/Nilradical%20of%20a%20ring
In algebra, the nilradical of a commutative ring is the ideal consisting of the nilpotent elements of the ring, that is, the set of all elements a such that a^n = 0 for some positive integer n. It is thus the radical of the zero ideal. If the nilradical is the zero ideal, the ring is called a reduced ring. The nilradical of a commutative ring is the intersection of all prime ideals. In the non-commutative case the same definition does not always work. This has resulted in several radicals generalizing the commutative case in distinct ways; see the article Radical of a ring for more on this. The nilradical of a Lie algebra is similarly defined for Lie algebras.

Commutative rings The nilradical of a commutative ring is the set of all nilpotent elements in the ring, or equivalently the radical of the zero ideal. This is an ideal because the sum of any two nilpotent elements is nilpotent (by the binomial formula), and the product of any element with a nilpotent element is nilpotent (by commutativity). It can also be characterized as the intersection of all the prime ideals of the ring (in fact, it is the intersection of all minimal prime ideals). A ring is called reduced if it has no nonzero nilpotent elements. Thus, a ring is reduced if and only if its nilradical is zero. If R is an arbitrary commutative ring, then its quotient by the nilradical is a reduced ring, denoted R_red. Since every maximal ideal is a prime ideal, the Jacobson radical, which is the intersection of maximal ideals, must contain the nilradical. A ring R is called a Jacobson ring if the nilradical and Jacobson radical of R/P coincide for all prime ideals P of R. An Artinian ring is Jacobson, and its nilradical is the maximal nilpotent ideal of the ring. In general, if the nilradical is finitely generated (e.g., the ring is Noetherian), then it is nilpotent.
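The two characterizations above (nilpotent elements, and the intersection of the prime ideals) can be checked by brute force in a small ring such as Z/nZ; the function name below is illustrative, not from the source.

```python
def nilradical_mod(n):
    """Nilpotent elements of Z/nZ: residues a with a^k = 0 (mod n) for some k.

    Exponents up to n suffice, since the powers of a mod n cycle
    within the first n steps.
    """
    return sorted(a for a in range(n)
                  if any(pow(a, k, n) == 0 for k in range(1, n + 1)))

# In Z/12Z the prime ideals are (2) and (3); their intersection is (6),
# and the nilpotent elements are indeed exactly the multiples of 6.
print(nilradical_mod(12))  # [0, 6]
```

For Z/nZ this reproduces the general fact that the nilradical is generated by the product of the distinct primes dividing n (here 2 * 3 = 6).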
https://en.wikipedia.org/wiki/Xine
xine is a multimedia playback engine for Unix-like operating systems released under the GNU General Public License. xine is built around a shared library (xine-lib) that supports different frontend player applications. xine uses libraries from other projects such as liba52, libmpeg2, FFmpeg, libmad, FAAD2, and Ogle. xine can also use binary Windows codecs through a wrapper, bundled as the w32codecs, for playback of some media formats that are not handled natively. History xine was started in 2000 by Günter Bartsch shortly after LinuxTag. At that time playing DVDs in Linux was described as a tortuous process, since one had to manually create audio and video named pipes and start separate decoder processes for each. Günter realized the OMS (Open Media System) or LiViD approach had obvious shortcomings in terms of audio and video synchronization, so xine was born as an experiment trying to get it right. The project evolved into a modern, multi-threaded media player architecture. During xine development, some effort was dedicated to making a clear separation of the player engine (xine-lib) and the front-end (xine-ui). Since the 1.0 release (2004-12-25) the API of xine-lib is considered stable and several applications and players rely on it. Günter left the project in 2003 when he officially announced the new project leaders, Miguel Freitas, Michael Roitzsch, Mike Melanson, and Thibaut Mattern. Supported media formats Physical media: CDs, DVDs, Video CDs Container formats: 3gp, AVI, ASF, FLV, Matroska, MOV (QuickTime), MP4, NUT, Ogg, OGM, RealMedia Audio formats: AAC, AC3, ALAC, AMR, FLAC, MP3, RealAudio, Shorten, Speex, Vorbis, WMA Video formats: Cinepak, DV, H.263, H.264/MPEG-4 AVC, HuffYUV, Indeo, MJPEG, MPEG-1, MPEG-2, MPEG-4 ASP, RealVideo, Sorenson, Theora, WMV (partial, including WMV1, WMV2 and WMV3; via FFmpeg) Video devices: V4L, DVB, PVR Network protocols: HTTP, TCP, UDP, RTP, SMB, MMS, PNM, RTSP DVD issues Since it is not a member of the DVD Forum, the xine proj
https://en.wikipedia.org/wiki/Unit%20%28ring%20theory%29
In algebra, a unit or invertible element of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that vu = uv = 1, where 1 is the multiplicative identity; the element v is unique for this property and is called the multiplicative inverse of u. The set of units of R forms a group under multiplication, called the group of units or unit group of R. Other notations for the unit group are R^x, U(R), and E(R) (from the German term Einheit). Less commonly, the term unit is sometimes used to refer to the element 1 of the ring, in expressions like ring with a unit or unit ring, and also unit matrix. Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of a rng.

Examples The multiplicative identity 1 and its additive inverse -1 are always units. More generally, any root of unity in a ring R is a unit: if r^n = 1, then r^(n-1) is a multiplicative inverse of r. In a nonzero ring, the element 0 is not a unit, so the unit group is not closed under addition. A nonzero ring in which every nonzero element is a unit is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers is the set of all nonzero real numbers.

Integer ring In the ring of integers Z, the only units are 1 and -1. In the ring Z/nZ of integers modulo n, the units are the congruence classes represented by integers coprime to n. They constitute the multiplicative group of integers modulo n.

Ring of integers of a number field In the ring Z[sqrt(5)] obtained by adjoining the quadratic integer sqrt(5) to Z, one has (sqrt(5) + 2)(sqrt(5) - 2) = 1, so sqrt(5) + 2 is a unit, and so are its powers, so Z[sqrt(5)] has infinitely many units. More generally, for the ring of integers R in a number field F, Dirichlet's unit theorem states that R^x is isomorphic to the group Z^n x mu_R, where mu_R is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit g
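The modular-arithmetic example above is easy to compute directly: the units of Z/nZ are exactly the residues coprime to n. A minimal sketch (function name illustrative):

```python
from math import gcd

def units_mod(n):
    """Units of Z/nZ: residues coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

# Each unit has a multiplicative inverse mod n; Python 3.8+ computes
# it as pow(a, -1, n).
print(units_mod(12))  # [1, 5, 7, 11]
print(all(a * pow(a, -1, 12) % 12 == 1 for a in units_mod(12)))  # True
```

Note that units_mod(12) has 4 elements, matching Euler's totient phi(12) = 4, the order of the multiplicative group of integers modulo 12.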
https://en.wikipedia.org/wiki/Tahini
Tahini or tahina is a Middle Eastern condiment made from toasted ground hulled sesame. It is served by itself (as a dip) or as a major ingredient in hummus, baba ghanoush, and halva. Tahini is used in the cuisines of the Levant and Eastern Mediterranean, the South Caucasus, the Balkans, South Asia, Central Asia, and amongst Ashkenazi Jews, as well as in parts of Russia and North Africa. Sesame paste (though not called tahini) is also used in some East Asian cuisines.

Etymology The word tahini is of Arabic origin and comes from a colloquial Levantine Arabic pronunciation of ṭaḥīna, whence also English tahina and Hebrew t'china. It is derived from the root ṭ-ḥ-n, which as a verb means "to grind", and also produces ṭaḥīn, "flour" in some dialects. The word tahini appeared in English by the late 1930s.

History The oldest mention of sesame is in a cuneiform document written 4000 years ago that describes the custom of serving the gods sesame wine. The historian Herodotus writes about the cultivation of sesame 3500 years ago in the region of the Tigris and Euphrates in Mesopotamia. It was mainly used as a source of oil. Tahini is mentioned as an ingredient of hummus kasa, a recipe transcribed in an anonymous 13th-century Arabic cookbook, Kitab Wasf al-Atima al-Mutada. Sesame paste is an ingredient in some Chinese and Japanese dishes; Sichuan cuisine uses it in some recipes for dandan noodles. Sesame paste is also used in Indian cuisine. In North America, sesame tahini, along with other raw nut butters, was available by 1940 in health food stores.

Preparation and storage Tahini is made from sesame seeds that are soaked in water and then crushed to separate the bran from the kernels. The crushed seeds are soaked in salt water, causing the bran to sink. The floating kernels are skimmed off the surface, toasted, and ground to produce an oily paste. It can also be prepared with untoasted seeds and called "raw tahini". Because of tahini's high oil
https://en.wikipedia.org/wiki/DARPA%20Grand%20Challenge
The DARPA Grand Challenge is a prize competition for American autonomous vehicles, funded by the Defense Advanced Research Projects Agency, the most prominent research organization of the United States Department of Defense. Congress has authorized DARPA to award cash prizes to further DARPA's mission to sponsor revolutionary, high-payoff research that bridges the gap between fundamental discoveries and military use. The initial DARPA Grand Challenge in 2004 was created to spur the development of technologies needed to create the first fully autonomous ground vehicles capable of completing a substantial off-road course within a limited time. The third event, the DARPA Urban Challenge in 2007, extended the initial Challenge to autonomous operation in a mock urban environment. The 2012 DARPA Robotics Challenge focused on autonomous emergency-maintenance robots, and new Challenges are still being conceived. The DARPA Subterranean Challenge was tasked with building robotic teams to autonomously map, navigate, and search subterranean environments. Such teams could be useful in exploring hazardous areas and in search and rescue. History and background Fully autonomous vehicles have been an international pursuit for many years, from endeavors in Japan (starting in 1977), Germany (Ernst Dickmanns and VaMP), Italy (the ARGO Project), the European Union (EUREKA Prometheus Project), the United States of America, and other countries. DARPA funded the development of the first fully autonomous robot beginning in 1966 with the Shakey robot project at Stanford Research Institute, now SRI International. The first autonomous ground vehicle capable of driving on and off roads was developed by DARPA as part of the Strategic Computing Initiative beginning in 1984, leading to demonstrations of autonomous navigation by the Autonomous Land Vehicle and the Navlab. The Grand Challenge was the first long distance competition for driverless cars in the world; other research efforts in
https://en.wikipedia.org/wiki/Turbidity
Turbidity is the cloudiness or haziness of a fluid caused by large numbers of individual particles that are generally invisible to the naked eye, similar to smoke in air. The measurement of turbidity is a key test of both water clarity and water quality. Fluids can contain suspended solid matter consisting of particles of many different sizes. While some suspended material will be large enough and heavy enough to settle rapidly to the bottom of the container if a liquid sample is left to stand (the settleable solids), very small particles will settle only very slowly or not at all if the sample is regularly agitated or the particles are colloidal. These small solid particles cause the liquid to appear turbid. The terms turbidity and haze are also applied to transparent solids such as glass or plastic. In plastic production, haze is defined as the percentage of light that is deflected more than 2.5° from the incoming light direction. Causes and effects Turbidity in open water may be caused by growth of phytoplankton. Human activities that disturb land, such as construction, mining and agriculture, can lead to high sediment levels entering water bodies during rain storms due to storm water runoff. Areas prone to high bank erosion rates as well as urbanized areas also contribute large amounts of turbidity to nearby waters, through stormwater pollution from paved surfaces such as roads, bridges, parking lots and airports. Some industries such as quarrying, mining and coal recovery can generate very high levels of turbidity from colloidal rock particles. In drinking water, the higher the turbidity level, the higher the risk that people may develop gastrointestinal diseases. This is especially problematic for immunocompromised people, because contaminants like viruses or bacteria can become attached to the suspended solids. The suspended solids interfere with water disinfection with chlorine because the particles act as shields for the virus and bacteria. Similarly, suspended sol
https://en.wikipedia.org/wiki/Peanut%20butter
Peanut butter is a food paste or spread made from ground, dry-roasted peanuts. It commonly contains additional ingredients that modify the taste or texture, such as salt, sweeteners, or emulsifiers. Consumed in many countries, it is the most commonly used of the nut butters, a group that also includes cashew butter and almond butter (though peanuts are not nuts, peanut butter is culinarily considered a nut butter). Peanut butter is a nutrient-rich food containing high levels of protein, several vitamins, and dietary minerals. It is typically served as a spread on bread, toast, or crackers, and used to make sandwiches (notably the peanut butter and jelly sandwich). It is also used in a number of breakfast dishes and desserts, such as granola, smoothies, crepes, cookies, brownies, or croissants. History The earliest references to peanut butter can be traced to Aztec and Inca civilizations, who ground roasted peanuts into a paste. However, several people can be credited with the invention of modern peanut butter and the processes involved in making it. The US National Peanut Board credits three modern inventors with the earliest patents related to the production of modern peanut butter. Marcellus Gilmore Edson of Montreal, Quebec, Canada, obtained the first patent for a method of producing peanut butter from roasted peanuts using heated surfaces in 1884. Edson's cooled product had "a consistency like that of butter, lard, or ointment" according to his patent application which described a process of milling roasted peanuts until the peanuts reached "a fluid or semi-fluid state". He mixed sugar into the paste to harden its consistency. A businessman from St. Louis named George Bayle produced and sold peanut butter in the form of a snack food in 1894. By 1917, American consumers used peanut products during periods of meat rationing, with government promotions of "meatless Mondays" when peanut butter was a favored choice. John Harvey Kellogg, known for his line of pr
https://en.wikipedia.org/wiki/Alignments%20of%20random%20points
The study of alignments of random points in a plane seeks to discover subsets of points that occupy an approximately straight line within a larger set of points that are randomly placed in a planar region. Studies have shown that such near-alignments occur by chance with greater frequency than one might intuitively expect. This has been put forward as a demonstration that ley lines and other similar mysterious alignments believed by some to be phenomena of deep significance might exist due to chance alone, as opposed to the supernatural or anthropological explanations put forward by their proponents. The topic has also been studied in the fields of computer vision and astronomy. A number of studies have examined the mathematics of alignment of random points on the plane. In all of these, the width of the line, that is, the allowed displacement of the positions of the points from a perfect straight line, is important. It allows for the fact that real-world features are not mathematical points, and that their positions need not line up exactly for them to be considered in alignment. Alfred Watkins, in his classic work on ley lines, The Old Straight Track, used the width of a pencil line on a map as the threshold for the tolerance of what might be regarded as an alignment. For example, using a 1 mm pencil line to draw alignments on a 1:50,000 scale Ordnance Survey map, the corresponding width on the ground would be 50 m. Estimate of probability of chance alignments Contrary to intuition, finding alignments between randomly placed points on a landscape gets progressively easier as the geographic area to be considered increases. One way of understanding this phenomenon is to see that the increase in the number of possible combinations of sets of points in that area overwhelms the decrease in the probability that any given set of points in that area line up. One definition which expresses the generally accepted meaning of "alignment" is: A set of points, chosen from a giv
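The width-based notion of alignment described above is straightforward to simulate. The sketch below counts three-point near-alignments using a perpendicular point-to-line distance test; this is one common operationalization, not the only definition used in the literature, and the names are illustrative.

```python
import random

def count_3point_alignments(points, width):
    """Count unordered triples whose third point lies within `width`
    of the line through the first two points."""
    count = 0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                (x1, y1), (x2, y2), (x3, y3) = points[i], points[j], points[k]
                # perpendicular distance from point 3 to the line through 1 and 2
                num = abs((x2 - x1) * (y1 - y3) - (x1 - x3) * (y2 - y1))
                den = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                if den > 0 and num / den <= width:
                    count += 1
    return count

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
# Even 100 uniformly random points in a unit square typically produce
# a surprising number of "alignments" at a small tolerance, since the
# ~161,700 candidate triples overwhelm the low per-triple probability.
print(count_3point_alignments(pts, 0.005))
```

Increasing the number of points (or the region size at fixed density) makes the triple count grow cubically, which is the combinatorial effect the "Estimate of probability" discussion appeals to.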
https://en.wikipedia.org/wiki/Fan-out
In digital electronics, the fan-out is the number of gate inputs driven by the output of another single logic gate. In most designs, logic gates are connected to form more complex circuits. While no logic gate input can be fed by more than one output at a time without causing contention, it is common for one output to be connected to several inputs. The technology used to implement logic gates usually allows a certain number of gate inputs to be wired directly together without additional interfacing circuitry. The maximum fan-out of an output measures its load-driving capability: it is the greatest number of inputs of gates of the same type to which the output can be safely connected. Logical practice Maximum limits on fan-out are usually stated for a given logic family or device in the manufacturer's datasheets. These limits assume that the driven devices are members of the same family. More complex analysis than fan-in and fan-out is required when two different logic families are interconnected. Fan-out is ultimately determined by the maximum source and sink currents of an output and the maximum source and sink currents of the connected inputs; the driving device must be able to supply or sink at its output the sum of the currents needed or provided (depending on whether the output is a logic high or low voltage level) by all of the connected inputs, while maintaining the output voltage specifications. For each logic family, typically a "standard" input is defined by the manufacturer with maximum input currents at each logic level, and the fan-out for an output is computed as the number of these standard inputs that can be driven in the worst case. (Therefore, it is possible that an output can actually drive more inputs than specified by fan-out, even of devices within the same family, if the particular devices being driven sink and/or source less current, as reported on their data sheets, than a "standard" device of that family.) Ultimately, whether a d
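The current-budget rule described above reduces to a small worst-case calculation: divide the output's sink and source capabilities by a standard input's low-level and high-level currents, and take the smaller ratio. The figures in the example are typical 74LS-series datasheet values, used here only as an illustration; consult the actual datasheet for a given part.

```python
def max_fanout(i_ol, i_oh, i_il, i_ih):
    """Worst-case fan-out from output drive vs. input load currents.

    i_ol: current the output can sink at logic low
    i_oh: current the output can source at logic high
    i_il: current a standard input sources into the output at logic low
    i_ih: current a standard input draws at logic high
    (all magnitudes, in the same unit, e.g. mA)
    """
    return int(min(i_ol / i_il, i_oh / i_ih))

# Typical 74LS TTL figures: outputs sink 8 mA / source 0.4 mA,
# standard inputs load 0.4 mA (low) / 0.02 mA (high).
print(max_fanout(8.0, 0.4, 0.4, 0.02))  # 20
```

Because the minimum of the two ratios is taken, the limiting logic level (high or low) determines the rated fan-out, which is why driving mixed or lighter-than-standard loads can allow more connections than the headline figure.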
https://en.wikipedia.org/wiki/Apple%20GS/OS
GS/OS is an operating system developed by Apple Computer for its Apple IIGS personal computer. It provides facilities for accessing the file system, controlling input/output devices, loading and running program files, and a system allowing programs to handle interrupts and signals. It uses ProDOS as its primary filing system. GS/OS is a component of Apple IIGS System Software versions 4.0 through 6.0.1, and was the first true 16-bit operating system for the IIGS.

Features Speed optimization The advantage of GS/OS over its predecessor, the ProDOS 16 operating system, is that it was written entirely in 16-bit code for the 65816 processor used in the IIGS, rather than primarily in 8-bit 6502 machine code that does not take advantage of the IIGS's unique features. This in turn allowed GS/OS to offer substantial speed improvements (loading time, disk access, screen updates) over the previous OS, and provided room to incorporate many features of other Apple operating systems, including the Apple III's SOS and the Macintosh's System 5, as well as concepts and features that would appear in later Macintosh System Software releases (e.g. proportional scrollbars, thermometer progress bars).

New features and enhancements In addition to continued enhancements to the IIGS Finder and loadable fonts, GS/OS offered plug-in device drivers (modem, printer, etc.), a thermometer progress display, AppleShare support, File System Translators for accessing foreign file formats, disk caching, and support for storage devices up to 4 gigabytes. It also extends the ProDOS file system to provide resource forks on files, similar to those used on the Apple Macintosh, which allows programs to be written in a more flexible way. The newly included Apple Advanced Disk Utilities and Apple IIGS Installer helped facilitate partitioning, formatting, and installing software and drivers with visual ease. A command-line development environment called APW (Apple Programmer's Workshop) is availab
https://en.wikipedia.org/wiki/Society%20of%20Broadcast%20Engineers
The Society of Broadcast Engineers (SBE) is a professional organization for engineers in broadcast radio and television. The SBE also offers certification in various radio frequency and video and audio technology areas for its members. Background The organization was founded in 1964. The society elected its first female president, Andrea Cummis, in 2021. Certifications Operator Level Certifications Certified Radio Operator (CRO) Certified Television Operator (CTO) Broadcast Networking Certifications Certified Broadcast Networking Technologist (CBNT) Certified Broadcast Networking Engineer (CBNE) Engineer Level Certifications Certified Broadcast Technologist (CBT) Certified Audio Engineer (CEA) Certified Video Engineer (CEV) Certified Broadcast Radio Engineer (CBRE) Certified Broadcast Television Engineer (CBTE) Certified Senior Broadcast Radio Engineer (CSRE) Certified Senior Broadcast Television Engineer (CSTE) Certified Professional Broadcast Engineer (CPBE) Specialist Certifications Certified 8-VSB Specialist (8-VSB) Certified AM Directional Specialist (AMD) Certified Digital Radio Broadcast Specialist (DRB) Previous Certifications These certifications remain valid for current holders but are no longer issued. Certified Senior Broadcast Engineer (CSBE) Certified Radio and Television Broadcast Engineer (CBRTE) Certified Senior Radio and Television Broadcast Engineer (CSRTE) See also List of post-nominal letters Broadcast engineering References External links SBE Official Website Society of Broadcast Engineers Hong Kong Chapter (SBE HK Chapter Official Website)
https://en.wikipedia.org/wiki/Scribus
Scribus is free and open-source desktop publishing (DTP) software available for most desktop operating systems. It is designed for layout, typesetting, and preparation of files for professional-quality image-setting equipment. Scribus can also create animated and interactive PDF presentations and forms. Example uses include writing newspapers, brochures, newsletters, posters, and books. The Scribus 1.4 series is the current stable release line, while the 1.5 series is where development takes place in preparation for the next stable release series, version 1.6. Scribus is written in C++ using the Qt toolkit and released under the GNU General Public License. Native versions are available for the Unix, Linux, BSD, macOS, Haiku, Microsoft Windows, and OS/2 (including ArcaOS and eComStation) operating systems. General feature overview Scribus supports most major bitmap formats, including TIFF, JPEG, and PSD. Vector drawings can be imported or directly opened for editing. The long list of supported formats includes Encapsulated PostScript, SVG, Adobe Illustrator, and Xfig. Professional type/image-setting features include CMYK colors and ICC color management. It has a built-in scripting engine using Python. It is available in 60 languages. High-level printing is achieved using its own internal level 3 PostScript driver, including support for font embedding and sub-setting with TrueType, Type 1, and OpenType fonts. The internal driver supports full Level 2 PostScript constructs and a large subset of Level 3 constructs. PDF support includes transparency, encryption, and a large set of the PDF 1.5 specification, including layers (OCG), as well as PDF/X-3, including interactive PDF form fields, annotations, and bookmarks. The current file format, called SLA, is XML-based, as were older versions of the format. Text can be imported from OpenDocument (ODT) text documents (such as from LibreOffice Writer), OpenOffice.org XML (OpenOffice.org Writer's SXW files), Microsoft Word's DOC, PDB, and HTML for
https://en.wikipedia.org/wiki/Chess%20as%20mental%20training
There are efforts to use the game of chess as a tool to aid the intellectual development of young people. Chess is significant in cognitive psychology and artificial intelligence (AI) studies, because it represents the domain in which expert performance has been most intensively studied and measured. New York–based Chess-In-The-Schools, Inc. has been active in the public school system in the city since 1986. It currently reaches more than 30,000 students annually. America's Foundation for Chess has initiated programs in partnership with local school districts in several U.S. cities, including Seattle, San Diego, Philadelphia, and Tampa. The Chess'n Math Association promotes chess at the scholastic level in Canada. Chess for Success is a program for at-risk schools in Oregon. Since 1991, the U.S. Chess Center in Washington, D.C. has taught chess to children, especially those in the inner city, "as a means of improving their academic and social skills." Research Research has shown that chess can have a positive impact on meta-cognitive ability and mathematical problem-solving in children, which is why several local governments, schools, and student organizations all over the world are implementing chess programs. A number of experiments suggest that learning and playing chess aids the mind. The Grandmaster Eugene Torre Chess Institute in the Philippines, the United States Chess Federation's chess research bibliography, and English educational consultant Tony Buzan's Brain Foundation, among others, continuously collect such experimental results. The advent of chess software that automatically records and analyzes the moves of each player in each game and can tirelessly play against human players of various levels has further helped to give new directions to experimental designs on chess as mental training. History As early as 1779 Benjamin Franklin, in his article The Morals of Chess, advocated such a view, saying: Alfred Binet demonstrated in the late 19
https://en.wikipedia.org/wiki/Conch
Conch is a common name for a number of different medium-to-large-sized sea snails. Conch shells typically have a high spire and a noticeable siphonal canal (in other words, the shell comes to a noticeable point on both ends). In North America, a conch is often identified as a queen conch, indigenous to the waters of the Gulf of Mexico and the Caribbean. Queen conches are valued for seafood and are also used as fish bait. In the United States, a rule has been proposed to list the queen conch's conservation status as threatened. The group of conches that are sometimes referred to as "true conches" are marine gastropod molluscs in the family Strombidae, specifically in the genus Strombus and other closely related genera. For example, Lobatus gigas, the queen conch, and Laevistrombus canarium, the dog conch, are true conches. Many other species are also often called "conch", but are not at all closely related to the family Strombidae, including Melongena species (family Melongenidae) and the horse conch Triplofusus papillosus (family Fasciolariidae). Species commonly referred to as conches also include the sacred chank or shankha shell (Turbinella pyrum) and other Turbinella species in the family Turbinellidae. The Triton's trumpet (family Charoniidae) may also be fashioned into a horn and referred to as a conch. Etymology The English word "conch" is attested in Middle English, coming from Latin concha (shellfish, mussel), which in turn comes from Greek konchē (same meaning), ultimately from a Proto-Indo-European root cognate with the Sanskrit word śaṅkha. General description A conch is a sea snail in the phylum Mollusca. A conch shell has superior strength and is used as a musical instrument or decoration. It consists of about 95% calcium carbonate and 5% organic matter. The conch meat is edible. Culinary use The meat of conches is eaten raw in salads or cooked in burgers, chowders, fritters, and gumbos. All parts of the conch meat are edible. Conch is indigenous to the Bahamas and is
https://en.wikipedia.org/wiki/Nephelometer
A nephelometer or aerosol photometer is an instrument for measuring the concentration of suspended particulates in a liquid or gas colloid. A nephelometer measures suspended particulates by employing a light beam (source beam) and a light detector set to one side (often 90°) of the source beam. Particle density is then a function of the light reflected into the detector from the particles. To some extent, how much light reflects for a given density of particles is dependent upon properties of the particles such as their shape, color, and reflectivity. Nephelometers are calibrated to a known particulate, then use environmental factors (k-factors) to compensate for lighter or darker colored dusts accordingly. The k-factor is determined by the user by running the nephelometer next to an air sampling pump and comparing the results. There are a wide variety of research-grade nephelometers on the market, as well as open source varieties. Nephelometer uses The main uses of nephelometers relate to air quality measurement for pollution monitoring, climate monitoring, and visibility. Airborne particles are commonly biological contaminants, particulate contaminants, gaseous contaminants, or dust. The accompanying chart shows the types and sizes of various particulate contaminants. This information helps in understanding the character of particulate pollution inside a building or in the ambient air, as well as the cleanliness level in a controlled environment. Biological contaminants include mold, fungus, bacteria, viruses, animal dander, dust mites, pollen, human skin cells, cockroach parts, or anything that is alive or was once alive. They are a major concern of indoor air quality specialists because they are contaminants that can cause health problems. Levels of biological contamination depend on the humidity and temperature that support the survival of micro-organisms. The presence of pets, plants, rodents, and insects will raise the level of biological contamination. Sheath air Sheath
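The k-factor correction described above is simple arithmetic: the user derives a scaling ratio from a co-located reference sample (obtained via the air sampling pump) and applies it to subsequent readings of the same dust. A minimal sketch; the numbers and function names are illustrative, not from any instrument manual:

```python
# Illustrative k-factor arithmetic; all numbers are made up. The reference
# concentration would come from a co-located gravimetric (pump-and-filter)
# sample, per the calibration procedure described above.

def k_factor(reference_ug_m3, photometer_ug_m3):
    """Ratio of the reference concentration to the uncorrected reading."""
    return reference_ug_m3 / photometer_ug_m3

def corrected_reading(raw_ug_m3, k):
    """Apply the k-factor to later readings of the same dust type."""
    return raw_ug_m3 * k

# A darker dust scatters less light per unit mass, so the photometer reads low:
k = k_factor(120.0, 80.0)            # -> 1.5 for this hypothetical dust
assert abs(corrected_reading(80.0, k) - 120.0) < 1e-9
```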
https://en.wikipedia.org/wiki/Huawei
Huawei Technologies Co., Ltd. ( ; ) is a Chinese multinational technology corporation headquartered in Shenzhen, Guangdong. It designs, develops, manufactures and sells telecommunications equipment, consumer electronics, smart devices and various rooftop solar products. The corporation was founded in 1987 by Ren Zhengfei, a former officer in the People's Liberation Army (PLA). Initially focused on manufacturing phone switches, Huawei has expanded to more than 170 countries to include building telecommunications networks, providing operational and consulting services and equipment, and manufacturing communications devices for the consumer market. It overtook Ericsson in 2012 as the largest telecommunications equipment manufacturer in the world. Huawei surpassed Apple and Samsung, in 2018 and 2020, respectively, to become the largest smartphone manufacturer worldwide. Amidst its rise, Huawei has been accused of intellectual property infringement, for which it has settled with companies like Cisco. Questions regarding the extent of state influence on Huawei have revolved around its national champions role in China, subsidies and financing support from state entities, and reactions of the Chinese government in light of oppositions in certain countries to Huawei's participation in 5G. Its software and equipment have been linked to the mass surveillance of Uyghurs and Xinjiang internment camps, drawing sanctions from the US. The company has faced difficulties in some countries arising from concerns that its equipment may enable surveillance by the Chinese government due to perceived connections with the country's military and intelligence agencies. Huawei has argued that critics such as the US government have not shown evidence of espionage. Experts say that China's 2014 Counter-Espionage Law and 2017 National Intelligence Law can compel Huawei and other companies to cooperate with state intelligence. In 2012, Australian and US intelligence agencies concluded that a ha
https://en.wikipedia.org/wiki/Digital%20Compact%20Cassette
The Digital Compact Cassette (DCC) is a magnetic tape sound recording format introduced by Philips and Matsushita Electric in late 1992 and marketed as the successor to the standard analog Compact Cassette. It was also a direct competitor to Sony's MiniDisc (MD), but neither format toppled the then-ubiquitous analog cassette despite their technical superiority, and DCC was discontinued in 1996. Another competing format, the Digital Audio Tape (DAT), had by then also failed to sell in large quantities to consumers, although it was popular as a professional digital audio storage format. The DCC form factor is similar to that of the analog compact cassette (CC), and DCC recorders and players can play back both types: analog cassettes as well as DCCs. This backward compatibility was intended to allow users to adopt digital recording without rendering their existing tape collections obsolete, but because DCC recorders could not record (only play back) analog cassettes, it effectively forced consumers to either replace their cassette deck with a DCC recorder and give up analog recording, or keep the existing cassette deck and make space to add the DCC recorder to their setup. History DCC signaled the parting of ways of Philips and Sony, who had previously worked together successfully on the audio CD, CD-ROM, and CD-i. The companies had also worked together on the Digital Audio Tape, which was successful in professional environments but was perceived as too expensive and fragile for consumers. Furthermore, the recording industry had been fighting against digital recording in court, resulting in the Audio Home Recording Act and SCMS. Philips had developed the Compact Cassette in 1963 and allowed companies to use the format royalty-free, which made it hugely successful but not a significant money-maker. The company saw a market for a digital version of the cassette, and expected that the product would be popular if it could be made compatible with the analog cassette. Later, Philips participated in
https://en.wikipedia.org/wiki/Self-clocking%20signal
In telecommunications and electronics, a self-clocking signal is one that can be decoded without the need for a separate clock signal or other source of synchronization. This is usually done by including embedded synchronization information within the signal, and adding constraints on the coding of the data payload such that false synchronization can easily be detected. Most line codes are designed to be self-clocking. Isochronicity and anisochronicity If a clock signal is embedded in the data transmission, there are two possibilities: the clock signals are sent at the same time as the data (isochronous), or at a different time (anisochronous). Isochronous self-clocking signals If the embedded clock signal is isochronous, it is sent simultaneously with the data. Below is an example signal, in this case using the Manchester code, a self-clocking signal. The data and clock cycles can be thought of as "adding up" to a combination, where both the clock cycle and the data can be retrieved from the transmitted signal. Anisochronous self-clocking signals Anisochronous self-clocking signals do not combine clock cycles and data transfer into one continuous signal. Instead, the clock information and the data are transmitted at different times. Below is an example signal used in asynchronous serial communication, where it is made clear that the information about the clock speed is transmitted in a different timeframe than the actual data. Implementations Example uses of self-clocking signal protocols include: Isochronous Manchester code, where the clock signals occur at the transition points. Plesiochronous Digital Hierarchy signals Eight-to-Fourteen Modulation 4B5B 8b/10b encoding 64b/66b encoding HDLC Modified Frequency Modulation Anisochronous Morse code Asynchronous start-stop Most of these codes can be seen as a kind of run-length limited code. Those constraints on "runs" of zeros and "runs" of ones ensure that transitions occur often enough to keep the receiver synchron
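The Manchester code mentioned above illustrates how an isochronous self-clocking signal embeds its clock: every bit period contains a mid-bit transition, so a decoder can both recover the data and detect false synchronization. A minimal sketch, assuming the IEEE 802.3 convention (0 = high-to-low, 1 = low-to-high); the function names are illustrative:

```python
# Minimal Manchester encode/decode sketch. Convention assumed (IEEE 802.3):
# a 0 bit is sent as high-to-low, a 1 bit as low-to-high, so every bit
# period carries a mid-bit transition the receiver can lock onto.

def manchester_encode(bits):
    """Map each data bit to a pair of half-bit line levels."""
    levels = []
    for b in bits:
        levels.extend((0, 1) if b else (1, 0))
    return levels

def manchester_decode(levels):
    """Recover bits from half-bit pairs; a missing transition means sync loss."""
    bits = []
    for i in range(0, len(levels), 2):
        pair = (levels[i], levels[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("no mid-bit transition: false synchronization detected")
    return bits

data = [1, 0, 1, 1, 0]
signal = manchester_encode(data)
assert manchester_decode(signal) == data
```

A constant level for a whole bit period (pairs `(0, 0)` or `(1, 1)`) can never occur in a valid stream, which is exactly the constraint that makes false synchronization easy to detect.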
https://en.wikipedia.org/wiki/Smart%20mob
A smart mob is a group whose coordination and communication abilities have been empowered by digital communication technologies. Smart mobs are particularly known for their ability to mobilize quickly. The concept was introduced by Howard Rheingold in his 2002 book Smart Mobs: The Next Social Revolution. Rheingold defined the smart mob as follows: "Smart mobs consist of people who are able to act in concert even if they don’t know each other... because they carry devices that possess both communication and computing capabilities". In December of that year, the "smart mob" concept was highlighted in the New York Times "Year in Ideas". Characteristics The technologies that empower smart mobs include the Internet, computer-mediated communication such as Internet Relay Chat, and wireless devices like mobile phones and personal digital assistants. Methodologies like peer-to-peer networks and ubiquitous computing are also changing the ways in which people organize and share information. Flash mobs are a specific form of smart mob, originally describing a group of people who assemble suddenly in a public place, do something unusual and pointless for a brief period of time, then quickly disperse. The difference between flash and smart mobs lies primarily in their duration: flash mobs disappear quickly, but smart mobs can have a more enduring presence. The term flash mob is claimed to have been inspired by "smart mob". Smart mobs have begun to have an impact on current events, as mobile phones and text messages have empowered everyone from revolutionaries in Malaysia to individuals protesting the second Iraq War. Individuals who have divergent worldviews and methods have been able to coordinate in the short term. A 2009 entry in the Encyclopedia of Computer Science and Technology noted that the term may be "fading from public use". Early instances A forerunner to the idea can be found in the work of the anarchist thinker Peter Kropotkin, "fishermen, hunters, travelling mer
https://en.wikipedia.org/wiki/Pythagoreanism
Pythagoreanism originated in the 6th century BC, based on and around the teachings and beliefs held by Pythagoras and his followers, the Pythagoreans. Pythagoras established the first Pythagorean community in the ancient Greek colony of Kroton, in modern Calabria (Italy). Early Pythagorean communities spread throughout Magna Graecia. Pythagoras' death and disputes about his teachings led to the development of two philosophical traditions within Pythagoreanism. The akousmatikoi were superseded in the 4th century BC as a significant mendicant school of philosophy by the Cynics. The mathēmatikoi philosophers were absorbed into the Platonic school in the 4th century BC. Following political instability in Magna Graecia, some Pythagorean philosophers fled to mainland Greece while others regrouped in Rhegium. By about 400 BC the majority of Pythagorean philosophers had left Italy. Pythagorean ideas exercised a marked influence on Plato and through him, on all of Western philosophy. Many of the surviving sources on Pythagoras originate with Aristotle and the philosophers of the Peripatetic school. As a philosophic tradition, Pythagoreanism was revived in the 1st century BC, giving rise to Neopythagoreanism. The worship of Pythagoras continued in Italy and as a religious community Pythagoreans appear to have survived as part of, or deeply influenced, the Bacchic cults and Orphism. History Pythagoras was already well known in ancient times for the mathematical achievement of the Pythagorean theorem. Pythagoras had been credited with discovering that in a right-angled triangle the square of the hypotenuse is equal to the sum of the squares of the other two sides. In ancient times Pythagoras was also noted for his discovery that music had mathematical foundations. Antique sources that credit Pythagoras as the philosopher who first discovered music intervals also credit him as the inventor of the monochord, a straight rod on which a string and a movable bridge could be use
https://en.wikipedia.org/wiki/Eleusis%20%28card%20game%29
Eleusis is a shedding-type card game in which one player chooses a secret rule to determine which cards can be played on top of others, and the other players attempt to determine the rule using inductive logic. The game was invented by Robert Abbott in 1956, and was first published by Martin Gardner in his Mathematical Games column in Scientific American magazine in June 1959. A revised version appeared in Gardner's July 1977 column. Eleusis is sometimes considered an analogy to the problems of the scientific method. It can be compared with the card game Mao, which also has secret rules that can be learned inductively. The games of Penultima and the commercially produced Zendo also feature players attempting to discover inductively a secret rule or rules thought of by a "Master" or "spectators", who declare plays legal or illegal on the basis of the rules. Rules The game is played by creating a row of cards in sequence. At the start of the game the dealer (known as "God") invents a secret constraint for how these cards must progress: for example, "each card played must be higher than the last, unless the last card was a face card, in which case any numeral card may be played". Two decks of cards are shuffled and 14 cards are dealt to each player except the dealer. One card is dealt face-up to start the row, and a random player is chosen to start. On a player's turn they must add one or more cards from their hand to the row, in sequence. The dealer judges this play: if the entire play fits the dealer's rule, the cards are left in place as part of the row. Otherwise, they are removed from the row and "sidelined", i.e. placed below the card that they attempted to follow, and the player is dealt a number of penalty cards equal to twice the number of cards they attempted to play that turn. If the play had multiple cards and only some were incorrect, the entire play is declared invalid, without the dealer specifying which cards were invalid. One player may elect to be a "prophet". A player
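The dealer's judging procedure described above amounts to evaluating a play against a secret predicate on consecutive cards. A minimal sketch, using the example rule quoted in the text; the rank encoding and function names are illustrative, not part of the official rules:

```python
# Sketch of the dealer's judging procedure in Eleusis. The rank encoding
# (11-13 are face cards) and the example secret rule are illustrative.

FACE = {11, 12, 13}  # jack, queen, king

def secret_rule(prev, card):
    """Example rule from the text: each card must be higher than the last,
    unless the last card was a face card, in which case any numeral card
    may be played."""
    if prev in FACE:
        return card not in FACE
    return card > prev

def judge_play(row, play, rule):
    """A multi-card play is valid only if every card fits in sequence;
    otherwise the whole play is rejected and the cards are sidelined."""
    prev = row[-1]
    for card in play:
        if not rule(prev, card):
            return False
        prev = card
    return True

def penalty(play):
    """Penalty for an invalid play: twice the number of cards attempted."""
    return 2 * len(play)

assert judge_play([5], [7, 9], secret_rule)      # ascending: accepted
assert not judge_play([5], [3], secret_rule)     # lower card: rejected
assert judge_play([12], [4], secret_rule)        # numeral after a queen: accepted
assert penalty([3, 8]) == 4
```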
https://en.wikipedia.org/wiki/P-adic%20analysis
In mathematics, p-adic analysis is a branch of number theory that deals with the mathematical analysis of functions of p-adic numbers. The theory of complex-valued numerical functions on the p-adic numbers is part of the theory of locally compact groups. The usual meaning taken for p-adic analysis is the theory of p-adic-valued functions on spaces of interest. Applications of p-adic analysis have mainly been in number theory, where it has a significant role in diophantine geometry and diophantine approximation. Some applications have required the development of p-adic functional analysis and spectral theory. In many ways p-adic analysis is less subtle than classical analysis, since the ultrametric inequality means, for example, that convergence of infinite series of p-adic numbers is much simpler. Topological vector spaces over p-adic fields show distinctive features; for example, aspects relating to convexity and the Hahn–Banach theorem are different. Important results Ostrowski's theorem Ostrowski's theorem, due to Alexander Ostrowski (1916), states that every non-trivial absolute value on the rational numbers Q is equivalent to either the usual real absolute value or a p-adic absolute value. Mahler's theorem Mahler's theorem, introduced by Kurt Mahler, expresses continuous p-adic functions in terms of polynomials. In any field of characteristic 0, one has the following result. Let Δ be the forward difference operator, defined by (Δf)(x) = f(x + 1) − f(x). Then for polynomial functions f we have the Newton series: f(x) = ∑_{k≥0} (Δ^k f)(0) · C(x, k), where C(x, k) = x(x − 1)⋯(x − k + 1)/k! is the kth binomial coefficient polynomial. Over the field of real numbers, the assumption that the function f is a polynomial can be weakened, but it cannot be weakened all the way down to mere continuity. Mahler proved the following result: Mahler's theorem: If f is a continuous p-adic-valued function on the p-adic integers then the same identity holds. Hensel's lemma Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular a
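The Newton series above can be checked numerically over the integers: for a polynomial of degree d, the first d + 1 forward differences at 0 reproduce the function exactly. A small sketch (illustrative code, not drawn from Mahler's work):

```python
from math import comb

# Numeric check of the Newton forward-difference series for polynomials
# (the characteristic-0 statement quoted above). Function names are
# illustrative.

def forward_diffs(f, n):
    """Return (Δ^k f)(0) for k = 0 .. n-1, where (Δf)(x) = f(x+1) - f(x)."""
    vals = [f(i) for i in range(n)]
    diffs = []
    for _ in range(n):
        diffs.append(vals[0])
        vals = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
    return diffs

def newton_series(f, x, n):
    """Partial Newton series: sum over k of (Δ^k f)(0) * C(x, k)."""
    return sum(d * comb(x, k) for k, d in enumerate(forward_diffs(f, n)))

# For a cubic, four terms reproduce the polynomial exactly at every
# non-negative integer.
f = lambda t: t**3 - 2*t + 5
assert newton_series(f, 7, 4) == f(7)
```

Mahler's theorem extends this identity from polynomials to arbitrary continuous p-adic-valued functions on the p-adic integers, where the series converges p-adically.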
https://en.wikipedia.org/wiki/Renormalization
Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian. For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory a cloud of virtual particles, such as photons, positrons, and others, surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron-system behaves as if it had a different mass and charge than initially postulated. Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. Mathematics and experiments prove that positrons and more massive particles like protons exhibit precisely the same observed charge as the electron – even in the presence of much stronger interactions and more intense clouds of virtual particles. Renormalization specifies relationships between parameters in the theory when parameters describing large distance scales differ from parameters describing small distance scales. Physically, the pileup of contributions from an infinity of scales involved in a problem may then result in further infinities. When describing spacetime as a continuum, certain statistical and quantum mechanical constructions are not well-defined. To define them, or make them unambiguous, a continuum limit must carefully remove "construction scaffolding" of lattices at various scales. Renormalization procedures are based on the requirement that c
https://en.wikipedia.org/wiki/Renormalization%20group
In theoretical physics, the term renormalization group (RG) refers to a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. In particle physics, it reflects the changes in the underlying force laws (codified in a quantum field theory) as the energy scale at which physical processes occur varies, energy/momentum and resolution distance scales being effectively conjugate under the uncertainty principle. A change in scale is called a scale transformation. The renormalization group is intimately related to scale invariance and conformal invariance, symmetries in which a system appears the same at all scales (so-called self-similarity). As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system. In so-called renormalizable theories, the system at one scale will generally consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances. For example, in quantum electrodynamics (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or running, in the value of the electric charge is determined by the renormalization group equation. History The idea of scale transformations and scale invariance is old in physics: Scaling
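A concrete instance of the running described above is the leading-order renormalization group equation for the QED coupling with a single charged fermion, dα/d ln μ = (2/3π)α². The sketch below integrates this flow numerically; the step count and scale values are illustrative, and a realistic calculation would include all charged particle species and higher-order terms:

```python
import math

# Euler integration of the leading-order RG equation for the QED coupling
# with one charged fermion: d(alpha)/d(ln mu) = (2 / (3*pi)) * alpha^2.
# Scales nominally in GeV; inputs are illustrative, not a precision fit.

def run_coupling(alpha0, mu0, mu, steps=10000):
    """Evolve the coupling from scale mu0 to scale mu along the RG flow."""
    a = alpha0
    dt = math.log(mu / mu0) / steps   # step in ln(mu)
    for _ in range(steps):
        a += (2.0 / (3.0 * math.pi)) * a * a * dt
    return a

alpha_low = 1 / 137.035999            # fine-structure constant at low energy
alpha_high = run_coupling(alpha_low, 0.000511, 91.19)
# The effective charge grows at shorter distances (higher energies):
assert alpha_high > alpha_low
```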
https://en.wikipedia.org/wiki/Single-photon%20emission%20computed%20tomography
Single-photon emission computed tomography (SPECT, or less commonly, SPET) is a nuclear medicine tomographic imaging technique using gamma rays. It is very similar to conventional nuclear medicine planar imaging using a gamma camera (that is, scintigraphy), but is able to provide true 3D information. This information is typically presented as cross-sectional slices through the patient, but can be freely reformatted or manipulated as required. The technique requires delivery of a gamma-emitting radioisotope (a radionuclide) into the patient, normally through injection into the bloodstream. On occasion, the radioisotope is a simple soluble dissolved ion, such as an isotope of gallium(III). Most of the time, though, a marker radioisotope is attached to a specific ligand to create a radioligand, whose properties bind it to certain types of tissues. This combination allows the ligand and radiopharmaceutical to be carried and bound to a place of interest in the body, where the ligand concentration is seen by a gamma camera. Principles Instead of just "taking a picture of anatomical structures", a SPECT scan monitors the level of biological activity at each place in the 3-D region analyzed. Emissions from the radionuclide indicate amounts of blood flow in the capillaries of the imaged regions. In the same way that a plain X-ray is a 2-dimensional (2-D) view of a 3-dimensional structure, the image obtained by a gamma camera is a 2-D view of the 3-D distribution of a radionuclide. SPECT imaging is performed by using a gamma camera to acquire multiple 2-D images (also called projections) from multiple angles. A computer is then used to apply a tomographic reconstruction algorithm to the multiple projections, yielding a 3-D data set. This data set may then be manipulated to show thin slices along any chosen axis of the body, similar to those obtained from other tomographic techniques, such as magnetic resonance imaging (MRI), X-ray computed tomography (X-ray CT), and p
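The reconstruction step can be illustrated with a toy example: take 1-D projections of a 2-D activity map at two angles and smear each one back across the grid (simple unfiltered backprojection). Real scanners acquire many angles and use filtered or iterative algorithms; this sketch, with illustrative function names, only shows the principle:

```python
# Toy unfiltered backprojection on a 3x3 grid with two viewing angles
# (row sums and column sums). Real SPECT reconstruction uses many angles
# and filtered or iterative methods; this only illustrates the idea.

def row_proj(grid):
    """Projection seen by a detector looking along the rows."""
    return [sum(row) for row in grid]

def col_proj(grid):
    """Projection seen by a detector looking along the columns."""
    return [sum(row[j] for row in grid) for j in range(len(grid[0]))]

def backproject(rp, cp):
    """Smear each 1-D projection back across the square grid and add."""
    n = len(rp)
    return [[rp[i] + cp[j] for j in range(n)] for i in range(n)]

activity = [[0, 0, 0],
            [0, 9, 0],
            [0, 0, 0]]               # a single "hotspot" of tracer uptake
recon = backproject(row_proj(activity), col_proj(activity))
peak = max((v, i, j) for i, row in enumerate(recon) for j, v in enumerate(row))
assert (peak[1], peak[2]) == (1, 1)  # brightest pixel matches the hotspot
```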
https://en.wikipedia.org/wiki/Sadleirian%20Professor%20of%20Pure%20Mathematics
The Sadleirian Professorship of Pure Mathematics, originally spelled in the statutes and for the first two professors as Sadlerian, is a professorship in pure mathematics within the DPMMS at the University of Cambridge. It was founded on a bequest from Lady Mary Sadleir for lectureships "for the full and clear explication and teaching that part of mathematical knowledge commonly called algebra". She died in 1706 and lectures began in 1710 but eventually these failed to attract undergraduates. In 1860 the foundation was used to establish the professorship. On 10 June 1863 Arthur Cayley was elected with the statutory duty "to explain and teach the principles of pure mathematics, and to apply himself to the advancement of that science." The stipend attached to the professorship was modest although it improved in the course of subsequent legislation. List of Sadlerian Lecturers of Pure Mathematics 1746–1769 William Ludlam 1826–1835 Lawrence Stephenson List of Sadleirian Lecturers of Pure Mathematics 1845–1847 Arthur Scratchley 1847–1857 George Ferns Reyner 1851 Stephen Hanson 1855–1858 William Charles Green 1857–1864 John Robert Lunn List of Sadleirian Professors of Pure Mathematics 1863–1895 Arthur Cayley 1895–1910 Andrew Russell Forsyth 1910–1931 E. W. Hobson 1931–1942 G. H. Hardy 1945–1953 Louis Mordell 1953–1967 Philip Hall 1967–1986 J. W. S. Cassels 1986–2012 John H. Coates 2013–2014 Vladimir Markovic 2017–2021 Emmanuel Breuillard References Sources Obituary Notices of Fellows Deceased. (1895). Proceedings of the Royal Society of London, 58, I-Lx. Retrieved from https://www.jstor.org/stable/115800 (Obituary of Arthur Cayley written by Andrew Forsyth). University of Cambridge DPMMS https://web.archive.org/web/20160624155328/http://www.admin.cam.ac.uk/offices/academic/secretary/professorships/sadleirian.pdf Pure Mathematics, Sadleirian Faculty of Mathematics, University of Cambridge Pure Mathematics, Sadleirian, Cambridge Mathematics education in the United
https://en.wikipedia.org/wiki/In-circuit%20emulation
In-circuit emulation (ICE) is the use of a hardware device, an in-circuit emulator, to debug the software of an embedded system. It operates by using a processor with the additional ability to support debugging operations, as well as to carry out the main function of the system. Particularly for older systems, with limited processors, this usually involved replacing the processor temporarily with a hardware emulator: a more powerful although more expensive version. It historically took the form of a bond-out processor, which has many internal signals brought out for the purpose of debugging. These signals provide information about the state of the processor. More recently the term also covers JTAG-based hardware debuggers, which provide equivalent access using on-chip debugging hardware with standard production chips. Using standard chips instead of custom bond-out versions makes the technology ubiquitous and low cost, and eliminates most differences between the development and runtime environments. In this common case, the in-circuit emulator term is a misnomer, sometimes confusingly so, because emulation is no longer involved. Embedded systems present special problems for programmers because they usually lack keyboards, monitors, disk drives and other user interfaces that are present on computers. These shortcomings make in-circuit software debugging tools essential for many common development tasks. Function An in-circuit emulator (ICE) provides a window into the embedded system. The programmer uses the emulator to load programs into the embedded system, run them, step through them slowly, and view and change data used by the system's software. An emulator gets its name because it emulates (imitates) the central processing unit (CPU) of the embedded system's computer. Traditionally it had a plug that inserts into the socket where the CPU integrated circuit chip would normally be placed. Most modern systems use the target system's CPU directly, with special
https://en.wikipedia.org/wiki/Fatou%27s%20lemma
In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou. Fatou's lemma can be used to prove the Fatou–Lebesgue theorem and Lebesgue's dominated convergence theorem. Standard statement In what follows, 𝔅 denotes the σ-algebra of Borel sets on [0, +∞]. Fatou's lemma remains true if its assumptions hold μ-almost everywhere. In other words, it is enough that there is a μ-null set N such that the values f_n(x) are non-negative for every x outside N. To see this, note that the integrals appearing in Fatou's lemma are unchanged if we change each function on N. Proof Fatou's lemma does not require the monotone convergence theorem, but the latter can be used to provide a quick proof. A proof directly from the definitions of integrals is given further below. In each case, the proof begins by analyzing the properties of the functions g_n = inf_{k ≥ n} f_k. These satisfy: (1) the sequence (g_n(x)) is pointwise non-decreasing at any x, and (2) g_n ≤ f_n for every n. Since f = liminf_n f_n is the pointwise limit of the measurable functions g_n, we immediately see that it is measurable. Via the Monotone Convergence Theorem Moreover, ∫ g_n dμ ≤ ∫ f_n dμ for every n. By the Monotone Convergence Theorem and property (1), the limit and integral may be interchanged: ∫ liminf_n f_n dμ = lim_n ∫ g_n dμ ≤ liminf_n ∫ f_n dμ, where the last step used property (2). From "first principles" To demonstrate that the monotone convergence theorem is not "hidden", the proof below does not use any properties of Lebesgue integral except those established here. Denote by SF(f) the set of simple measurable functions s such that 0 ≤ s ≤ f on X. Now we turn to the main theorem The proof is complete. Examples for strict inequality Equip the space with the Borel σ-algebra and the Lebesgue measure. Example for a probability space: Let S denote the unit interval. For every natural number n define f_n(x) = n for x in (0, 1/n), and f_n(x) = 0 otherwise. Example with uniform convergence: Let S denote the set of all real numbers. Define f_n(x) = 1/n for x in [0, n], and f_n(x) = 0 otherwise. These sequences converge on S pointwise (respectively uniformly) to the zero function (with zero integral),
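In standard notation, for a measure space (Ω, 𝔉, μ) and a sequence f₁, f₂, … of non-negative measurable functions, the lemma states (a standard formulation, supplied here for reference):

```latex
\int_\Omega \liminf_{n\to\infty} f_n \,d\mu
\;\le\;
\liminf_{n\to\infty} \int_\Omega f_n \,d\mu .
```

The inequality can be strict: for $f_n = n\,\mathbf{1}_{(0,1/n)}$ on the unit interval with Lebesgue measure, the pointwise limit inferior is the zero function, so the left side is $0$, while $\int f_n \,d\lambda = 1$ for every $n$, so the right side is $1$.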
https://en.wikipedia.org/wiki/Glossary%20of%20field%20theory
Field theory is the branch of mathematics in which fields are studied. This is a glossary of some terms of the subject. (See field theory (physics) for the unrelated field theories in physics.) Definition of a field A field is a commutative ring in which 1 ≠ 0 and every nonzero element has a multiplicative inverse. In a field we thus can perform the operations addition, subtraction, multiplication, and division. The non-zero elements of a field F form an abelian group under multiplication; this group is typically denoted by F×. The ring of polynomials in the variable x with coefficients in F is denoted by F[x]. Basic definitions Characteristic The characteristic of the field F is the smallest positive integer n such that n · 1 = 0; here n · 1 stands for the sum of n summands 1 + 1 + 1 + ... + 1. If no such n exists, we say the characteristic is zero. Every non-zero characteristic is a prime number. For example, the rational numbers, the real numbers and the p-adic numbers have characteristic 0, while the finite field Zp with p being prime has characteristic p. Subfield A subfield of a field F is a subset of F which is closed under the field operations + and * of F and which, with these operations, forms itself a field. Prime field The prime field of the field F is the unique smallest subfield of F. Extension field If F is a subfield of E then E is an extension field of F. We then also say that E/F is a field extension. Degree of an extension Given an extension E/F, the field E can be considered as a vector space over the field F, and the dimension of this vector space is the degree of the extension, denoted by [E : F]. Finite extension A finite extension is a field extension whose degree is finite. Algebraic extension If an element α of an extension field E over F is the root of a non-zero polynomial in F[x], then α is algebraic over F. If every element of E is algebraic over F, then E/F is an algebraic extension. Generating set Given a field extension E/F and a subset S of E, we write F
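The definition of the characteristic can be checked concretely for the finite field Z_p: adding the multiplicative identity to itself p times yields zero. A small Python sketch (illustrative; the function name is my own):

```python
def characteristic_mod_p(p):
    """Smallest n >= 1 with n * 1 == 0 in Z_p (integers modulo p)."""
    total, n = 0, 0
    while True:
        n += 1
        total = (total + 1) % p  # add one more copy of the identity 1
        if total == 0:
            return n

# The finite field Z_7 has characteristic 7: 1+1+1+1+1+1+1 = 0 (mod 7).
print(characteristic_mod_p(7))   # 7
```

By contrast, no such n exists in the rationals or reals, which is exactly what "characteristic zero" means.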
https://en.wikipedia.org/wiki/XOR%20linked%20list
An XOR linked list is a type of data structure used in computer programming. It takes advantage of the bitwise XOR operation to decrease storage requirements for doubly linked lists. Description An ordinary doubly linked list stores addresses of the previous and next list items in each list node, requiring two address fields: ... A B C D E ... –> next –> next –> next –> <– prev <– prev <– prev <– An XOR linked list compresses the same information into one address field by storing the bitwise XOR (here denoted by ⊕) of the address for previous and the address for next in one field: ... A B C D E ... ⇌ A⊕C ⇌ B⊕D ⇌ C⊕E ⇌ More formally: link(B) = addr(A)⊕addr(C), link(C) = addr(B)⊕addr(D), ... When traversing the list from left to right: supposing the cursor is at C, the previous item, B, may be XORed with the value in the link field (B⊕D). The address for D will then be obtained and list traversal may resume. The same pattern applies in the other direction. i.e. where link(C) = addr(B)⊕addr(D), so addr(D) = link(C) ⊕ addr(B) = (addr(B)⊕addr(D)) ⊕ addr(B) = addr(B)⊕addr(B) ⊕ addr(D) (since XOR is commutative and associative) = 0 ⊕ addr(D) (since X⊕X = 0) = addr(D) (since X⊕0 = X). The XOR operation cancels addr(B), which appears twice in the equation, and all we are left with is addr(D). To start traversing the list in either direction from some point, the address of two consecutive items is required. If the addresses of the two consecutive items are reversed, list traversal will occur in the opposite direction. Theory of operation The key is the first operation, and the properties of XOR: X⊕X = 0 X⊕0 = X X⊕Y = Y⊕X (X⊕Y)⊕Z = X⊕(Y⊕Z) The R2 register always contains the XOR of the address of current item C with the address of the predecessor item P: C⊕P. The Link fields in the records contain the XOR
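Because the technique depends on raw addresses, languages without pointer arithmetic can only simulate it. The following sketch stands in for addresses with indices into a node table, with index 0 playing the role of the null address (all names are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.link = 0  # XOR of the previous and next node "addresses" (indices)

def build_xor_list(values):
    nodes = [None]  # index 0 is reserved as the null "address"
    for v in values:
        nodes.append(Node(v))
    for i in range(1, len(nodes)):
        prev_i = i - 1 if i > 1 else 0
        next_i = i + 1 if i + 1 < len(nodes) else 0
        nodes[i].link = prev_i ^ next_i  # link = addr(prev) XOR addr(next)
    return nodes

def traverse(nodes, start=1):
    """Walk the list from `start`; XOR-ing the link with the previous
    index recovers the next index, exactly as in the derivation above."""
    out, prev, cur = [], 0, start
    while cur != 0:
        out.append(nodes[cur].value)
        prev, cur = cur, nodes[cur].link ^ prev
    return out

nodes = build_xor_list(["A", "B", "C", "D", "E"])
print(traverse(nodes))           # ['A', 'B', 'C', 'D', 'E']
print(traverse(nodes, start=5))  # ['E', 'D', 'C', 'B', 'A'] -- same code, reversed
```

Starting from the other end with the same routine traverses the list backwards, illustrating that only the pair (previous address, current address) determines the direction.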
https://en.wikipedia.org/wiki/Command%E2%80%93query%20separation
Command-query separation (CQS) is a principle of imperative computer programming. It was devised by Bertrand Meyer as part of his pioneering work on the Eiffel programming language. It states that every method should either be a command that performs an action, or a query that returns data to the caller, but not both. In other words, asking a question should not change the answer. More formally, methods should return a value only if they are referentially transparent and hence possess no side effects. Connection with design by contract Command-query separation is particularly well suited to a design by contract (DbC) methodology, in which the design of a program is expressed as assertions embedded in the source code, describing the state of the program at certain critical times. In DbC, assertions are considered design annotations—not program logic—and as such, their execution should not affect the program state. CQS is beneficial to DbC because any value-returning method (any query) can be called by any assertion without fear of modifying program state. In theoretical terms, this establishes a measure of sanity, whereby one can reason about a program's state without simultaneously modifying that state. In practical terms, CQS allows all assertion checks to be bypassed in a working system to improve its performance without inadvertently modifying its behaviour. CQS may also prevent the occurrence of certain kinds of heisenbugs. Broader impact on software engineering Even beyond the connection with design by contract, CQS is considered by its adherents to have a simplifying effect on a program, making its states (via queries) and state changes (via commands) more comprehensible. CQS is well-suited to the object-oriented methodology, but can also be applied outside of object-oriented programming. Since the separation of side effects and return values is not inherently object-oriented, CQS can be profitably applied to any programming paradigm that requires reaso
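The principle can be shown with a minimal sketch (my own example, not from the article): every method is either a command that mutates state and returns nothing, or a query that returns data and leaves state untouched.

```python
class Counter:
    """A counter written in command-query style."""
    def __init__(self):
        self._value = 0

    # Command: changes state, returns nothing.
    def increment(self):
        self._value += 1

    # Query: returns data, never changes state --
    # "asking a question should not change the answer".
    def value(self):
        return self._value

c = Counter()
c.increment()
c.increment()
print(c.value())  # 2 -- and repeated calls to value() keep answering 2
```

By contrast, a method like a stack's `pop()`, which both removes the top element and returns it, mixes command and query in one operation and so violates CQS; an assertion calling it would silently change program state.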
https://en.wikipedia.org/wiki/Bertrand%20Meyer
Bertrand Meyer (born 21 November 1950) is a French academic, author, and consultant in the field of computer languages. He created the Eiffel programming language and the idea of design by contract. Education and academic career Meyer received a master's degree in engineering from the École Polytechnique in Paris, a second master's degree from Stanford University, and a PhD from the Université de Nancy. He had a technical and managerial career for nine years at Électricité de France, and for three years was a member of the faculty of the University of California, Santa Barbara. From 2001 to 2016, he was professor of software engineering at ETH Zürich, the Swiss Federal Institute of Technology, where he pursued research on building trusted components (reusable software elements) with a guaranteed level of quality. He was Chair of the ETH Computer Science department from 2004 to 2006 and for 13 years (2003–2015) taught the Introduction to Programming course taken by all ETH computer science students, resulting in a widely disseminated programming textbook, Touch of Class (Springer). He remains Professor emeritus of Software Engineering at ETH Zurich and is currently Professor of Software Engineering and Provost at Constructor Institute (previously Schaffhausen Institute of Technology (SIT)), a new research university in Schaffhausen, Switzerland. He has held visiting positions at the University of Toulouse (Chair of Excellence, 2015–16), Politecnico di Milano, Innopolis University, Monash University and University of Technology Sydney. He is also active as a consultant (object-oriented system design, architectural reviews, technology assessment), trainer in object technology and other software topics, and conference speaker. For many years Meyer has been active in issues of research and education policy and was the founding president (2006–2011) of Informatics Europe, the association of European computer science departments. Computer languages Meyer pursues t
https://en.wikipedia.org/wiki/SPARK%20%28programming%20language%29
SPARK is a formally defined computer programming language based on the Ada programming language, intended for the development of high integrity software used in systems where predictable and highly reliable operation is essential. It facilitates the development of applications that demand safety, security, or business integrity. Originally, there were three versions of the SPARK language (SPARK83, SPARK95, SPARK2005) based on Ada 83, Ada 95 and Ada 2005 respectively. A fourth version of the SPARK language, SPARK 2014, based on Ada 2012, was released on April 30, 2014. SPARK 2014 is a complete re-design of the language and supporting verification tools. The SPARK language consists of a well-defined subset of the Ada language that uses contracts to describe the specification of components in a form that is suitable for both static and dynamic verification. In SPARK83/95/2005, the contracts are encoded in Ada comments and so are ignored by any standard Ada compiler, but are processed by the SPARK "Examiner" and its associated tools. SPARK 2014, in contrast, uses Ada 2012's built-in "aspect" syntax to express contracts, bringing them into the core of the language. The main tool for SPARK 2014 (GNATprove) is based on the GNAT/GCC infrastructure, and re-uses almost the entirety of the GNAT Ada 2012 front-end. Technical overview SPARK utilises the strengths of Ada while trying to eliminate all its potential ambiguities and insecure constructs. SPARK programs are by design meant to be unambiguous, and their behavior is required to be unaffected by the choice of Ada compiler. These goals are achieved partly by omitting some of Ada's more problematic features (such as unrestricted parallel tasking) and partly by introducing contracts which encode the application designer's intentions and requirements for certain components of a program. The combination of these approaches allows SPARK to meet its design objectives, which are: logical soundness rigorous formal definit
https://en.wikipedia.org/wiki/Introduction%20to%20gauge%20theory
A gauge theory is a type of theory in physics. The word gauge means a measurement, a thickness, an in-between distance (as in railroad tracks), or a resulting number of units per certain parameter (a number of loops in an inch of fabric or a number of lead balls in a pound of ammunition). Modern theories describe physical forces in terms of fields, e.g., the electromagnetic field, the gravitational field, and fields that describe forces between the elementary particles. A general feature of these field theories is that the fundamental fields cannot be directly measured; however, some associated quantities can be measured, such as charges, energies, and velocities. For example, say you cannot measure the diameter of a lead ball, but you can determine how many lead balls, which are equal in every way, are required to make a pound. Using the number of balls, the density of lead, and the formula for calculating the volume of a sphere from its diameter, one could indirectly determine the diameter of a single lead ball. In field theories, different configurations of the unobservable fields can result in identical observable quantities. A transformation from one such field configuration to another is called a gauge transformation; the lack of change in the measurable quantities, despite the field being transformed, is a property called gauge invariance. For example, if you could measure the color of lead balls and discover that when you change the color, you still fit the same number of balls in a pound, the property of "color" would show gauge invariance. Since any kind of invariance under a field transformation is considered a symmetry, gauge invariance is sometimes called gauge symmetry. Generally, any theory that has the property of gauge invariance is considered a gauge theory. For example, in electromagnetism the electric field E and the magnetic field B are observable, while the potentials V ("voltage") and A (the vector potential) are not. Under a gauge transforma
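For the electromagnetic example, the relation between the observable fields and the unobservable potentials, and the transformation that leaves E and B unchanged, can be written explicitly (standard formulas, added here for concreteness):

```latex
% Observable fields from the potentials:
\mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t},
\qquad
\mathbf{B} = \nabla \times \mathbf{A}.
% Gauge transformation, for any smooth scalar function \lambda(\mathbf{x}, t):
V \;\to\; V - \frac{\partial \lambda}{\partial t},
\qquad
\mathbf{A} \;\to\; \mathbf{A} + \nabla \lambda .
```

Substituting the transformed potentials into the first pair of equations shows the $\lambda$ terms cancel, since $\nabla\!\left(\partial\lambda/\partial t\right) = \partial(\nabla\lambda)/\partial t$ and $\nabla \times \nabla\lambda = 0$; hence E and B, and every measurable quantity built from them, are gauge invariant.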
https://en.wikipedia.org/wiki/Seminorm
In mathematics, particularly in functional analysis, a seminorm is a vector space norm that need not be positive definite. Seminorms are intimately connected with convex sets: every seminorm is the Minkowski functional of some absorbing disk and, conversely, the Minkowski functional of any such set is a seminorm. A topological vector space is locally convex if and only if its topology is induced by a family of seminorms. Definition Let X be a vector space over either the real numbers R or the complex numbers C. A real-valued function p : X → R is called a seminorm if it satisfies the following two conditions: Subadditivity/Triangle inequality: p(x + y) ≤ p(x) + p(y) for all x, y in X. Absolute homogeneity: p(sx) = |s| p(x) for all x in X and all scalars s. These two conditions imply that p(0) = 0 and that every seminorm p also has the following property: Nonnegativity: p(x) ≥ 0 for all x in X. Some authors include non-negativity as part of the definition of "seminorm" (and also sometimes of "norm"), although this is not necessary since it follows from the other two properties. By definition, a norm on X is a seminorm that also separates points, meaning that it has the following additional property: Positive definite/Positive/Point-separating: whenever x in X satisfies p(x) = 0, then x = 0. A seminormed space is a pair (X, p) consisting of a vector space X and a seminorm p on X. If the seminorm p is also a norm then the seminormed space (X, p) is called a normed space. Since absolute homogeneity implies positive homogeneity, every seminorm is a type of function called a sublinear function. A map p : X → R is called a sublinear function if it is subadditive and positive homogeneous. Unlike a seminorm, a sublinear function is not necessarily nonnegative. Sublinear functions are often encountered in the context of the Hahn–Banach theorem. A real-valued function p is a seminorm if and only if it is a sublinear and balanced function. Examples The trivial seminorm on X, which refers to the constant map 0 on X, induces the indiscrete topology on X. Let μ be a measure on a space Ω. For an arbitrary constant c ≥ 1, let F be the set of all functions f for which the integral of |f|^c with respect to μ exists and is finite. It can be shown th
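As a concrete illustration (my own example, not from the article): on R², the map p(x, y) = |x| is a seminorm but not a norm, since it vanishes on the nonzero vector (0, 1). A quick numerical check of the two axioms:

```python
import random

def p(v):
    """p(x, y) = |x| -- a seminorm on R^2 that is not a norm."""
    x, y = v
    return abs(x)

random.seed(0)
for _ in range(1000):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    s = random.uniform(-5, 5)
    # Subadditivity: p(u + v) <= p(u) + p(v)
    assert p((u[0] + v[0], u[1] + v[1])) <= p(u) + p(v) + 1e-12
    # Absolute homogeneity: p(s*u) == |s| * p(u)
    assert abs(p((s * u[0], s * u[1])) - abs(s) * p(u)) < 1e-9

print(p((0.0, 1.0)))  # 0.0 -- yet (0, 1) is not the zero vector, so p is not a norm
```

Both seminorm conditions hold everywhere, but positive definiteness fails on the whole y-axis, which is exactly the gap between a seminorm and a norm.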
https://en.wikipedia.org/wiki/Stochastic
Stochastic refers to the property of being well-described by a random probability distribution. Although stochasticity and randomness are distinct in that the former refers to a modeling approach and the latter refers to phenomena themselves, these two terms are often used synonymously. Furthermore, in probability theory, the formal concept of a stochastic process is also referred to as a random process. Stochasticity is used in many different fields, including the natural sciences such as biology, chemistry, ecology, neuroscience, and physics, as well as technology and engineering fields such as image processing, signal processing, information theory, computer science, cryptography, and telecommunications. It is also used in finance, due to seemingly random changes in financial markets as well as in medicine, linguistics, music, media, colour theory, botany, manufacturing, and geomorphology. Etymology The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz, who in 1917 wrote in German the word Stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob. For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, though the German term had been used earlier in 1931 by Andrey Kolmogorov. Mathematics In the early 1930s, Aleksandr Khinchin gave the first mathematical definition of a stochas
https://en.wikipedia.org/wiki/Topologist%27s%20sine%20curve
In the branch of mathematics known as topology, the topologist's sine curve or Warsaw sine curve is a topological space with several interesting properties that make it an important textbook example. It can be defined as the graph of the function sin(1/x) on the half-open interval (0, 1], together with the origin, under the topology induced from the Euclidean plane: T = {(x, sin(1/x)) : x in (0, 1]} ∪ {(0, 0)}. Properties The topologist's sine curve T is connected but neither locally connected nor path connected. This is because it includes the point (0,0) but there is no path within the space joining the origin to any point on the graph. The space T is the continuous image of a locally compact space (namely, let V be the space {−1} ∪ (0, 1], and use the map f from V to T defined by f(−1) = (0,0) and f(x) = (x, sin(1/x)) for x > 0), but T is not locally compact itself. The topological dimension of T is 1. Variants Two variants of the topologist's sine curve have other interesting properties. The closed topologist's sine curve can be defined by taking the topologist's sine curve and adding its set of limit points, {(0, y) : y in [−1, 1]}; some texts define the topologist's sine curve itself as this closed version, as they prefer to use the term 'closed topologist's sine curve' to refer to another curve. This space is closed and bounded and so compact by the Heine–Borel theorem, but has similar properties to the topologist's sine curve—it too is connected but neither locally connected nor path-connected. The extended topologist's sine curve can be defined by taking the closed topologist's sine curve and adding to it the set {(x, 1) : x in [0, 1]}. It is arc connected but not locally connected. See also List of topologies Warsaw circle References Topological spaces
https://en.wikipedia.org/wiki/Combinatorial%20game%20theory
Combinatorial game theory is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Study has been largely confined to two-player games that have a position that the players take turns changing in defined ways or moves to achieve a defined winning condition. Combinatorial game theory has not traditionally studied games of chance or those that use imperfect or incomplete information, favoring games that offer perfect information in which the state of the game and the set of available moves is always known by both players. However, as mathematical techniques advance, the types of game that can be mathematically analyzed expand, and thus the boundaries of the field are ever changing. Scholars will generally define what they mean by a "game" at the beginning of a paper, and these definitions often vary as they are specific to the game being analyzed and are not meant to represent the entire scope of the field. Combinatorial games include well-known games such as chess, checkers, and Go, which are regarded as non-trivial, and tic-tac-toe, which is considered trivial, in the sense of being "easy to solve". Some combinatorial games may also have an unbounded playing area, such as infinite chess. In combinatorial game theory, the moves in these and other games are represented as a game tree. Combinatorial games also include one-player combinatorial puzzles such as Sudoku, and no-player automata, such as Conway's Game of Life (although in the strictest definition, "games" can be said to require more than one participant, thus the designations of "puzzle" and "automata"). Game theory in general includes games of chance, games of imperfect knowledge, and games in which players can move simultaneously, and they tend to represent real-life decision making situations. Combinatorial game theory has a different emphasis than "traditional" or "economic" game theory, which was initially developed to study games wit
https://en.wikipedia.org/wiki/Transitive%20closure
In mathematics, the transitive closure R⁺ of a homogeneous binary relation R on a set X is the smallest relation on X that contains R and is transitive. For finite sets, "smallest" can be taken in its usual sense, of having the fewest related pairs; for infinite sets R⁺ is the unique minimal transitive superset of R. For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R⁺ such that x R⁺ y means "it is possible to fly from x to y in one or more flights". More formally, the transitive closure of a binary relation R on a set X is the smallest (w.r.t. ⊆) transitive relation R⁺ on X such that R ⊆ R⁺. We have R⁺ = R if, and only if, R itself is transitive. Conversely, transitive reduction adduces a minimal relation S from a given relation R such that they have the same closure, that is, S⁺ = R⁺; however, many different S with this property may exist. Both transitive closure and transitive reduction are also used in the closely related area of graph theory. Transitive relations and examples A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z. Examples of transitive relations include the equality relation on any set, the "less than or equal" relation on any linearly ordered set, and the relation "x was born before y" on the set of all people. Symbolically, this can be denoted as: if x < y and y < z then x < z. One example of a non-transitive relation is "city x can be reached via a direct flight from city y" on the set of all cities. Simply because there is a direct flight from one city to a second city, and a direct flight from the second city to the third, does not imply there is a direct flight from the first city to the third. The transitive closure of this relation is a different relation, namely "there is a sequence of direct flights that begins at city x and ends at city y". Every relation can be extended in a similar way to a transitive relation. An example
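The definition suggests a direct fixed-point computation: repeatedly add the pair (x, z) whenever both (x, y) and (y, z) are present, until nothing changes. A short Python sketch of this idea (illustrative; not an optimized algorithm such as Floyd–Warshall):

```python
def transitive_closure(relation):
    """Smallest transitive relation containing `relation`, as a set of pairs."""
    closure = set(relation)
    while True:
        # All pairs (x, z) forced by transitivity but not yet present.
        new_pairs = {(x, z)
                     for (x, y1) in closure
                     for (y2, z) in closure
                     if y1 == y2} - closure
        if not new_pairs:
            return closure  # fixed point reached: the closure is transitive
        closure |= new_pairs

# Direct flights A->B and B->C; the closure adds the multi-leg itinerary A->C.
flights = {("A", "B"), ("B", "C")}
print(sorted(transitive_closure(flights)))
# [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

The loop terminates on finite relations because each pass either adds a pair or stops, and only finitely many pairs exist.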
https://en.wikipedia.org/wiki/Emission%20spectrum
The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to electrons making a transition from a high energy state to a lower energy state. The photon energy of the emitted photons is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference. This collection of different transitions, leading to different radiated wavelengths, makes up an emission spectrum. Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances. Emission In physics, emission is the process by which a higher energy quantum mechanical state of a particle becomes converted to a lower one through the emission of a photon, resulting in the production of light. The frequency of light emitted is a function of the energy of the transition. Since energy must be conserved, the energy difference between the two states equals the energy carried off by the photon. The energy states of the transitions can lead to emissions over a very large range of frequencies. For example, visible light is emitted by the coupling of electronic states in atoms and molecules (then the phenomenon is called fluorescence or phosphorescence). On the other hand, nuclear shell transitions can emit high energy gamma rays, while nuclear spin transitions emit low energy radio waves. The emittance of an object quantifies how much light is emitted by it. This may be related to other properties of the object through the Stefan–Boltzmann law. For most substances, the amount of emission varies with the temperature and the spectroscopic composition of the object, leading to the appearance of color temperature and emission lines. Precise measurements at many wavelengths allow the identifi
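The link between transition energy and emitted wavelength can be made concrete for hydrogen, whose line spectrum follows the Rydberg formula (standard physics; the constants are approximate and the function names are my own):

```python
# A photon carries off the energy gap between the two states:
# E_photon = E_high - E_low = h * c / wavelength.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
R_H = 1.0967758e7    # Rydberg constant for hydrogen, 1/m (approximate)

def hydrogen_line_nm(n_high, n_low):
    """Wavelength (nm) of the hydrogen emission line for n_high -> n_low,
    via the Rydberg formula 1/lambda = R_H * (1/n_low^2 - 1/n_high^2)."""
    inv_wavelength = R_H * (1 / n_low**2 - 1 / n_high**2)
    return 1e9 / inv_wavelength

def photon_energy_joules(wavelength_nm):
    """Energy of a photon of the given wavelength, E = h*c / lambda."""
    return h * c / (wavelength_nm * 1e-9)

# The 3 -> 2 transition gives the red H-alpha line near 656 nm:
print(round(hydrogen_line_nm(3, 2), 1))   # ~656.5
```

Each allowed pair (n_high, n_low) yields one such line, and the collection of all of them is precisely hydrogen's emission spectrum.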
https://en.wikipedia.org/wiki/Cube%20root
In mathematics, a cube root of a number x is a number y such that y³ = x. All nonzero real numbers have exactly one real cube root and a pair of complex conjugate cube roots, and all nonzero complex numbers have three distinct complex cube roots. For example, the real cube root of 8, denoted ∛8, is 2, because 2³ = 8, while the other cube roots of 8 are −1 + i√3 and −1 − i√3. The three cube roots of −27i are 3i, (3√3)/2 − (3/2)i, and −(3√3)/2 − (3/2)i. In some contexts, particularly when the number whose cube root is to be taken is a real number, one of the cube roots (in this particular case the real one) is referred to as the principal cube root, denoted with the radical sign ∛. The cube root is the inverse function of the cube function if considering only real numbers, but not if considering also complex numbers: although one always has (∛x)³ = x, the cube of a nonzero number has more than one complex cube root and its principal cube root may not be the number that was cubed. For example, (−2)³ = −8, but the principal complex cube root of −8 is 1 + i√3, not −2. Formal definition The cube roots of a number x are the numbers y which satisfy the equation y³ = x. Properties Real numbers For any real number x, there is one real number y such that y³ = x. The cube function is increasing, so does not give the same result for two different inputs, and it covers all real numbers. In other words, it is a bijection, or one-to-one. Then we can define an inverse function that is also one-to-one. For real numbers, we can define a unique cube root of all real numbers. If this definition is used, the cube root of a negative number is a negative number. If x and y are allowed to be complex, then there are three solutions (if x is non-zero) and so x has three cube roots. A real number has one real cube root and two further cube roots which form a complex conjugate pair. For instance, the cube roots of 1 are: 1, −1/2 + (√3/2)i, and −1/2 − (√3/2)i. The last two of these roots lead to a relationship between all roots of any real or complex number. If a number is one cube root of a particular real or complex number, the other two cube roots can be found by multiplying that cube root
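The distinction between the real cube root and the principal complex cube root shows up directly in Python, where `**` with a fractional exponent on a negative base yields a complex result (an illustrative sketch; the function names are my own):

```python
import cmath
import math

def real_cbrt(x):
    """Real cube root of a real number; negative inputs give negative roots."""
    return math.copysign(abs(x) ** (1 / 3), x)

def all_cube_roots(x):
    """The three complex cube roots of a nonzero number x, via polar form:
    take the real cube root of the modulus and split the argument three ways."""
    r, phi = cmath.polar(complex(x))
    return [cmath.rect(r ** (1 / 3), (phi + 2 * math.pi * k) / 3)
            for k in range(3)]

print(real_cbrt(-8))    # ~ -2.0 (the real cube root)
print((-8) ** (1 / 3))  # a complex number (the principal complex root), not -2
```

The second line illustrates the caveat in the text: naively applying a general power function returns the principal complex cube root of −8 (close to 1 + 1.732i), so real-number code must take the real branch explicitly.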
https://en.wikipedia.org/wiki/Symplectic%20vector%20space
In mathematics, a symplectic vector space is a vector space V over a field F (for example the real numbers R) equipped with a symplectic bilinear form. A symplectic bilinear form is a mapping ω : V × V → F that is Bilinear: linear in each argument separately; Alternating: ω(v, v) = 0 holds for all v in V; and Non-degenerate: ω(u, v) = 0 for all v in V implies that u = 0. If the underlying field has characteristic not 2, alternation is equivalent to skew-symmetry. If the characteristic is 2, the skew-symmetry is implied by, but does not imply, alternation. In this case every symplectic form is a symmetric form, but not vice versa. Working in a fixed basis, ω can be represented by a matrix. The conditions above are equivalent to this matrix being skew-symmetric, nonsingular, and hollow (all diagonal entries are zero). This should not be confused with a symplectic matrix, which represents a symplectic transformation of the space. If V is finite-dimensional, then its dimension must necessarily be even since every skew-symmetric, hollow matrix of odd size has determinant zero. Notice that the condition that the matrix be hollow is not redundant if the characteristic of the field is 2. A symplectic form behaves quite differently from a symmetric form, for example, the scalar product on Euclidean vector spaces. Standard symplectic space The standard symplectic space is R2n with the symplectic form given by a nonsingular, skew-symmetric matrix. Typically ω is chosen to be the block matrix ω = [[0, In], [−In, 0]] where In is the n × n identity matrix. In terms of basis vectors (x1, ..., xn, y1, ..., yn): ω(xi, yj) = −ω(yj, xi) = δij and ω(xi, xj) = ω(yi, yj) = 0. A modified version of the Gram–Schmidt process shows that any finite-dimensional symplectic vector space has a basis such that ω takes this form, often called a Darboux basis or symplectic basis. Sketch of process: Start with an arbitrary basis v1, ..., vn, and represent the dual of each basis vector by the dual basis: ω(vi, ·) = Σj ω(vi, vj) vj*. This gives us a matrix with entries ω(vi, vj). Solve for its null space. Now for any w in the null space, we have ω(w, v) = 0 for all v, so the null space gives us the degenerate subspace.
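The standard form can be checked concretely. A small Python sketch (illustrative; plain lists rather than a linear-algebra library) builds the block matrix [[0, Iₙ], [−Iₙ, 0]] and verifies that it is skew-symmetric and hollow and that it pairs the basis vectors as described:

```python
def standard_omega(n):
    """2n x 2n matrix of the standard symplectic form: [[0, I_n], [-I_n, 0]]."""
    size = 2 * n
    m = [[0] * size for _ in range(size)]
    for i in range(n):
        m[i][n + i] = 1    # upper-right block:  I_n
        m[n + i][i] = -1   # lower-left block:  -I_n
    return m

def is_skew_symmetric_and_hollow(m):
    # m[i][i] == -m[i][i] forces zero diagonal, so hollowness is included.
    size = len(m)
    return all(m[i][j] == -m[j][i] for i in range(size) for j in range(size))

def omega_form(m, u, v):
    """Evaluate the bilinear form u^T m v."""
    size = len(m)
    return sum(u[i] * m[i][j] * v[j] for i in range(size) for j in range(size))

O = standard_omega(2)                 # the standard form on R^4
x1 = [1, 0, 0, 0]                     # basis vector x_1
y1 = [0, 0, 1, 0]                     # basis vector y_1
print(is_skew_symmetric_and_hollow(O))  # True
print(omega_form(O, x1, y1))            # 1  (omega(x_i, y_i) = 1)
print(omega_form(O, x1, x1))            # 0  (alternating: omega(v, v) = 0)
```

The matrix has even size by construction, matching the fact that a symplectic vector space must be even-dimensional.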
https://en.wikipedia.org/wiki/Biofeedback
Biofeedback is the technique of gaining greater awareness of many physiological functions of one's own body by using electronic or other instruments, with the goal of being able to manipulate the body's systems at will. Humans conduct biofeedback naturally all the time, at varied levels of consciousness and intentionality. Biofeedback and the biofeedback loop can also be thought of as self-regulation. Some of the processes that can be controlled include brainwaves, muscle tone, skin conductance, heart rate and pain perception. Biofeedback may be used to improve health, performance, and the physiological changes that often occur in conjunction with changes to thoughts, emotions, and behavior. Recently, technologies have provided assistance with intentional biofeedback. Eventually, these changes may be maintained without the use of extra equipment, for no equipment is necessarily required to practice biofeedback. Meta-analyses of different biofeedback treatments have shown some benefit in the treatment of headaches and migraines and ADHD, though most of the studies in these meta-analyses did not make comparisons with alternative treatments.

Information coded biofeedback

Information coded biofeedback is an evolving form and methodology in the field of biofeedback. Its uses may be applied in the areas of health, wellness and awareness. Biofeedback has its modern conventional roots in the early 1970s. Over the years, biofeedback as a discipline and a technology has continued to mature and express new versions of the method with novel interpretations in areas utilizing the electromyograph, electrodermograph, electroencephalograph and electrocardiogram among others. The concept of biofeedback is based on the fact that a wide variety of ongoing intrinsic natural functions of the organism occur at a level of awareness generally called the "unconscious". The biofeedback process is designed to interface with select aspects of these "unconscious" processes. The defi
https://en.wikipedia.org/wiki/Name%20your%20own%20price
Name your own price (NYOP) is a pricing strategy under which buyers suggest a price for a product (unlike the traditional arrangement where sellers quote a certain price) and the transaction occurs only if a seller accepts this quoted price. The seller waits for a potential buyer's offer and can then either accept or reject that named price. As the Internet develops and online marketplaces become more popular, consumers have more choices in terms of product pricing. Popularized by the reverse auction pioneer, Priceline.com, this pricing strategy asks consumers to 'name their own price' for various products and services like air tickets, hotels, rental cars, etc. The first bid a consumer places and the subsequent bid increments express the consumer's willingness or unwillingness to haggle. "The economic argument is that the number of bids a consumer submits to win a product in a NYOP auction is determined by the bidder's intention to trade off higher expected savings from haggling against the associated frictional costs". NYOP retailers do not post a price for their products, and the final price of the transaction is only determined via a "reverse auction process"; these are key features that distinguish NYOP retailers from conventional hotels and travel intermediaries. Similarly, LetYouKnow, Inc. pioneered the application of its own patented matching method within the confines of the reverse auction process, whereby consumers name their own price for new vehicles. Name-your-own-price sales were originally considered "opaque" by marketers because buyers "don't know the name of the supplier (airline, hotel or car rental company) or the schedule (with air tickets) until after" they make a nonrefundable purchase. Suppliers benefit because they can sell to the most price-conscious buyers/travelers without publicly disclosing those low rates.

Priceline.com

Priceline.com, an online travel
https://en.wikipedia.org/wiki/Polyethylene%20terephthalate
Polyethylene terephthalate (or poly(ethylene terephthalate), PET, PETE, or the obsolete PETP or PET-P), is the most common thermoplastic polymer resin of the polyester family and is used in fibres for clothing, containers for liquids and foods, and thermoforming for manufacturing, and in combination with glass fibre for engineering resins. In 2016, annual production of PET was 56 million tons. The biggest application is in fibres (in excess of 60%), with bottle production accounting for about 30% of global demand. In the context of textile applications, PET is referred to by its common name, polyester, whereas the acronym PET is generally used in relation to packaging. Polyester makes up about 18% of world polymer production and is the fourth-most-produced polymer after polyethylene (PE), polypropylene (PP) and polyvinyl chloride (PVC). PET consists of repeating (C10H8O4) units. PET is commonly recycled, and has the digit 1 (♳) as its resin identification code (RIC). The National Association for PET Container Resources (NAPCOR) defines PET as: "Polyethylene terephthalate items referenced are derived from terephthalic acid (or dimethyl terephthalate) and mono ethylene glycol, wherein the sum of terephthalic acid (or dimethyl terephthalate) and mono ethylene glycol reacted constitutes at least 90 percent of the mass of monomer reacted to form the polymer, and must exhibit a melting peak temperature between 225 °C and 255 °C, as identified during the second thermal scan in procedure 10.1 in ASTM D3418, when heating the sample at a rate of 10 °C/minute." Depending on its processing and thermal history, polyethylene terephthalate may exist both as an amorphous (transparent) and as a semi-crystalline polymer. The semicrystalline material might appear transparent (particle size less than 500 nm) or opaque and white (particle size up to a few micrometers) depending on its crystal structure and particle size. One process for making PET uses bis(2-hydroxyethyl) terephthal
https://en.wikipedia.org/wiki/Video%20game%20remake
A video game remake is a video game closely adapted from an earlier title, usually for the purpose of modernizing a game with updated graphics for newer hardware and gameplay for contemporary audiences. Typically, a remake shares essentially the same title, fundamental gameplay concepts, and core story elements of the original game, although some aspects of the original game may have been changed for the remake. Remakes are often made by the original developer or copyright holder, and sometimes by the fan community. If created by the community, video game remakes are sometimes also called fangames and can be seen as part of the retro gaming phenomenon.

Definition

A remake offers a newer interpretation of an older work, characterized by updated or changed assets. For example, The Legend of Zelda: Ocarina of Time 3D and The Legend of Zelda: Majora's Mask 3D for the Nintendo 3DS are considered remakes of their original versions for the Nintendo 64, and not remasters or ports, since there are new character models and texture packs. The Legend of Zelda: Wind Waker HD for Wii U would be considered a remaster, since it retains the same, albeit upscaled, aesthetics as the original. A remake typically maintains the same story, genre, and fundamental gameplay ideas of the original work. The intent of a remake is usually to take an older game that has become outdated and update it for a new platform and audience. A remake will not necessarily preserve the original gameplay, especially if it is dated; instead, the gameplay may be remade to conform to the conventions of contemporary games or later titles in the same series, in order to make the game marketable to a new audience. For example, for Sierra's 1991 remake of Space Quest, the developers used the engine, point-and-click interface, and graphical style of Space Quest IV: Roger Wilco and The Time Rippers, replacing the original graphics and text parser interface of the original. However, other elem
https://en.wikipedia.org/wiki/Actuator
An actuator is a component of a machine that produces force, torque, or displacement, usually in a controlled way, when an electrical, pneumatic or hydraulic input is supplied to it in a system (called an actuating system). An actuator converts such an input signal into the required form of mechanical energy. It is a type of transducer. In simple terms, it is a "mover". An actuator requires a control device (controlled by a control signal) and a source of energy. The control signal is relatively low energy and may be electric voltage or current, pneumatic or hydraulic fluid pressure, or even human power. In the electric, hydraulic, and pneumatic sense, it is a form of automation or automatic control. The displacement achieved is commonly linear or rotational, as exemplified by linear motors and rotary motors, respectively. Rotary motion is more natural for small machines making large displacements. By means of a leadscrew, rotary motion can be adapted to function as a linear actuator (producing linear motion, but not a linear motor). Another broad classification of actuators separates them into two types: incremental-drive actuators and continuous-drive actuators. Stepper motors are one type of incremental-drive actuator. Examples of continuous-drive actuators include DC torque motors, induction motors, hydraulic and pneumatic motors, and piston-cylinder drives (rams).

Types of actuators

Soft actuator

A soft actuator is one that changes its shape in response to stimuli including mechanical, thermal, magnetic, and electrical. Soft actuators are used mainly in human-oriented robotics rather than in the industrial settings where most conventional actuators operate. Most conventional actuators are mechanically durable but lack the adaptability of soft actuators. Soft actuators are applied mainly to human safety and healthcare, where their ability to adapt to their environment matters. This is why the driven energy behind soft actuators deal with
https://en.wikipedia.org/wiki/DMX512
DMX512 is a standard for digital communication networks that are commonly used to control lighting and effects. It was originally intended as a standardized method for controlling stage lighting dimmers, which, prior to DMX512, had employed various incompatible proprietary protocols. It quickly became the primary method for linking controllers (such as a lighting console) to dimmers and special effects devices such as fog machines and intelligent lights. DMX512 has also expanded to uses in non-theatrical interior and architectural lighting, at scales ranging from strings of Christmas lights to electronic billboards and stadium or arena concerts. It can now be used to control almost anything, reflecting its popularity in all types of venues. DMX512 uses unidirectional EIA-485 (RS-485) differential signaling at its physical layer, in conjunction with a variable-size, packet-based communication protocol. DMX512 does not include automatic error checking and correction, and therefore is not an appropriate control protocol for hazardous applications, such as pyrotechnics or movement of theatrical rigging. However, it is still used for such applications. False triggering may be caused by electromagnetic interference, static electricity discharges, improper cable termination, excessively long cables, or poor quality cables.

History

Developed by the Engineering Commission of the United States Institute for Theatre Technology (USITT), the DMX512 standard (for Digital Multiplex with 512 pieces of information) was created in 1986, with subsequent revisions in 1990 leading to USITT DMX512/1990.

DMX512-A

In 1998 the Entertainment Services and Technology Association (ESTA) began a revision process to develop the standard as an ANSI standard. The resulting revised standard, known officially as "Entertainment Technology—USITT DMX512-A—Asynchronous Serial Digital Data Transmission Standard for Controlling Lighting Equipment and Accessories", was approved by the American National S
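At the byte level, a DMX512 packet is a start code (slot 0, value 0x00 for ordinary dimmer data) followed by up to 512 single-byte channel levels. The sketch below illustrates only this slot layout; the break and mark-after-break line signalling of the EIA-485 layer is not modelled, and the helper name is hypothetical.

```python
# Byte-level view of a DMX512 packet: slot 0 is the start code,
# slots 1..512 carry channel levels (one byte each, 0-255).

START_CODE_DIMMER = 0x00   # "null" start code used for ordinary level data

def build_dmx_packet(levels):
    """Return slot 0 (start code) plus up to 512 channel bytes."""
    if len(levels) > 512:
        raise ValueError("DMX512 carries at most 512 channel slots")
    if any(not 0 <= v <= 255 for v in levels):
        raise ValueError("channel levels are single bytes (0-255)")
    return bytes([START_CODE_DIMMER]) + bytes(levels)

# Example: channel 1 at full (255), channel 2 at roughly half (127)
packet = build_dmx_packet([255, 127])
assert packet[0] == 0x00 and len(packet) == 3
```

Because the protocol has no error checking, a receiver that acts on such packets has no way to distinguish a corrupted level byte from an intended one, which is the basis of the safety caveat above.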
https://en.wikipedia.org/wiki/Ammonium%20chloride
Ammonium chloride is an inorganic compound with the formula NH4Cl; it is a white crystalline salt that is highly soluble in water. Solutions of ammonium chloride are mildly acidic. In its naturally occurring mineralogic form, it is known as sal ammoniac. The mineral is commonly formed on burning coal dumps from condensation of coal-derived gases. It is also found around some types of volcanic vents. It is mainly used as fertilizer and a flavouring agent in some types of liquorice. It is the product of the reaction of hydrochloric acid and ammonia.

Production

It is a product of the Solvay process used to produce sodium carbonate:

CO2 + 2 NH3 + 2 NaCl + H2O → 2 NH4Cl + Na2CO3

In addition to being the principal method for the manufacture of ammonium chloride, that method is used to minimize ammonia release in some industrial operations. Ammonium chloride is prepared commercially by combining ammonia (NH3) with either hydrogen chloride (gas) or hydrochloric acid (water solution):

NH3 + HCl → NH4Cl

Ammonium chloride occurs naturally in volcanic regions, forming on volcanic rocks near fume-releasing vents (fumaroles). The crystals deposit directly from the gaseous state and tend to be short-lived, as they dissolve easily in water.

Reactions

Ammonium chloride appears to sublime upon heating but actually reversibly decomposes into ammonia and hydrogen chloride gas:

NH4Cl ⇌ NH3 + HCl

Ammonium chloride reacts with a strong base, like sodium hydroxide, to release ammonia gas:

NH4Cl + NaOH → NH3 + NaCl + H2O

Similarly, ammonium chloride also reacts with alkali-metal carbonates at elevated temperatures, giving ammonia and alkali-metal chloride:

2 NH4Cl + Na2CO3 → 2 NaCl + CO2 + H2O + 2 NH3

A solution of 5% by mass of ammonium chloride in water has a pH in the range 4.6 to 6.0. Some reactions of ammonium chloride with other chemicals are endothermic, such as its reaction with barium hydroxide and its dissolving in water.

Applications

The dominant application of ammon
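The quoted pH range for a 5% solution can be sanity-checked by treating NH4+ as a weak acid. The numbers used here are assumptions for illustration: Ka(NH4+) ≈ 5.6 × 10⁻¹⁰ at 25 °C, solution density ≈ 1 g/mL (so 5% w/w ≈ 50 g/L), and the usual weak-acid approximation [H+] ≈ √(Ka·C).

```python
# Back-of-the-envelope estimate of the pH of a 5% (w/w) NH4Cl solution.
# Assumed values: Ka(NH4+) ~ 5.6e-10 at 25 C, density ~ 1 g/mL.
import math

MOLAR_MASS_NH4CL = 53.49          # g/mol
conc = 50.0 / MOLAR_MASS_NH4CL    # mol/L, roughly 0.93 M
KA_NH4 = 5.6e-10                  # assumed literature value for NH4+

# Weak-acid approximation, valid when C >> [H+]:
h_plus = math.sqrt(KA_NH4 * conc)
pH = -math.log10(h_plus)
print(f"estimated pH ~ {pH:.2f}")
```

The estimate lands near the low end of the 4.6 to 6.0 range stated above, consistent with the solution being mildly acidic.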
https://en.wikipedia.org/wiki/Mac%20OS%20X%20Panther
Mac OS X Panther (version 10.3) is the fourth major release of macOS, Apple's desktop and server operating system. It followed Mac OS X Jaguar and preceded Mac OS X Tiger. It was released on October 24, 2003, with the retail price of US$129 for a single user and US$199 for a five user, family license. The main features of Panther included a refined Aqua theme, Exposé, Fast user switching, and a new Finder. Panther also included Safari as its default browser, as a change from Internet Explorer in Jaguar. System requirements Panther's system requirements are: PowerPC G3, G4, or G5 processor (at least 233 MHz) Built-in USB At least 128 MB of RAM (256 MB recommended, minimum of 96 MB supported unofficially) At least 1.5 GB of available hard disk space CD drive Internet access requires a compatible service provider; iDisk requires a .Mac account Video conferencing requires: 333 MHz or faster PowerPC G3, G4, or G5 processor Broadband internet access (100 kbit/s or faster) Compatible FireWire DV camera or web camera Because a New World ROM was required for Mac OS X Panther, certain older computers (such as beige Power Mac G3s and 'Wall Street' PowerBook G3s) were unable to run Panther by default. Third-party software (such as XPostFacto) can, however, override checks made during the install process; otherwise, installation or upgrades from Jaguar fails on these older machines. Panther still fully supported the Classic environment for running older Mac OS 9 applications, but made Classic application windows double-buffered, interfering with some applications written to draw directly to screen. New and changed features End-user features Apple advertised that Mac OS X Panther had over 150 new features, including: Finder: Updated with a brushed-metal interface, a new live search engine, customizable Sidebar, secure deletion, colored labels (resurrected from classic Mac OS) in the filesystem and Zip support built in. The Finder icon was also changed. Fast user switching:
https://en.wikipedia.org/wiki/RSA%20SecurID
RSA SecurID, formerly referred to as SecurID, is a mechanism developed by RSA for performing two-factor authentication for a user to a network resource. Description The RSA SecurID authentication mechanism consists of a "token"—either hardware (e.g. a key fob) or software (a soft token)—which is assigned to a computer user and which creates an authentication code at fixed intervals (usually 60 seconds) using a built-in clock and the card's factory-encoded almost random key (known as the "seed"). The seed is different for each token, and is loaded into the corresponding RSA SecurID server (RSA Authentication Manager, formerly ACE/Server) as the tokens are purchased. On-demand tokens are also available, which provide a tokencode via email or SMS delivery, eliminating the need to provision a token to the user. The token hardware is designed to be tamper-resistant to deter reverse engineering. When software implementations of the same algorithm ("software tokens") appeared on the market, public code had been developed by the security community allowing a user to emulate RSA SecurID in software, but only if they have access to a current RSA SecurID code, and the original 64-bit RSA SecurID seed file introduced to the server. Later, the 128-bit RSA SecurID algorithm was published as part of an open source library. In the RSA SecurID authentication scheme, the seed record is the secret key used to generate one-time passwords. Newer versions also feature a USB connector, which allows the token to be used as a smart card-like device for securely storing certificates. A user authenticating to a network resource—say, a dial-in server or a firewall—needs to enter both a personal identification number and the number being displayed at that moment on their RSA SecurID token. Though increasingly rare, some systems using RSA SecurID disregard PIN implementation altogether, and rely on password/RSA SecurID code combinations. The server, which also has a real-time clock and a d
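The general idea of a seed-plus-clock token can be illustrated with a sketch. This is NOT the proprietary RSA SecurID algorithm; it is an analogous construction in the style of the public HOTP/TOTP schemes (RFC 4226/6238), using HMAC-SHA1, and the seed value is a made-up placeholder. It shows only why a server holding the same seed and a synchronized clock can reproduce the code.

```python
# Analogy for a time-based token: a secret per-token "seed" combined
# with the current time step yields a short code that changes every
# 60 seconds. HOTP-style dynamic truncation; not the SecurID algorithm.
import hmac, hashlib, struct

def time_based_code(seed: bytes, t: float, interval: int = 60,
                    digits: int = 6) -> str:
    counter = int(t // interval)                     # which 60 s window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"per-token-secret-seed"   # placeholder; real seeds are factory-encoded
code = time_based_code(seed, 1_700_000_000.0)
assert len(code) == 6 and code.isdigit()
```

Within one interval the code is stable, so a server evaluating the same function with the same seed accepts it; after the interval boundary the counter, and hence the code, changes.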
https://en.wikipedia.org/wiki/Field%20electron%20emission
Field electron emission, also known as field emission (FE) and electron field emission, is emission of electrons induced by an electrostatic field. The most common context is field emission from a solid surface into a vacuum. However, field emission can take place from solid or liquid surfaces, into a vacuum, a fluid (e.g. air), or any non-conducting or weakly conducting dielectric. The field-induced promotion of electrons from the valence to conduction band of semiconductors (the Zener effect) can also be regarded as a form of field emission. The terminology is historical because related phenomena of surface photoeffect, thermionic emission (or Richardson–Dushman effect) and "cold electronic emission", i.e. the emission of electrons in strong static (or quasi-static) electric fields, were discovered and studied independently from the 1880s to 1930s. When field emission is used without qualifiers it typically means "cold emission". Field emission in pure metals occurs in high electric fields: the gradients are typically higher than 1 gigavolt per metre and strongly dependent upon the work function. While electron sources based on field emission have a number of applications, field emission is most commonly an undesirable primary source of vacuum breakdown and electrical discharge phenomena, which engineers work to prevent. Examples of applications for surface field emission include the construction of bright electron sources for high-resolution electron microscopes or the discharge of induced charges from spacecraft. Devices which eliminate induced charges are termed charge-neutralizers. Field emission was explained by quantum tunneling of electrons in the late 1920s. This was one of the triumphs of the nascent quantum mechanics. The theory of field emission from bulk metals was proposed by Ralph H. Fowler and Lothar Wolfgang Nordheim. A family of approximate equations, Fowler–Nordheim equations, is named after them. Strictly, Fowler–Nordheim equations apply only
https://en.wikipedia.org/wiki/Flat-panel%20display
A flat-panel display (FPD) is an electronic display used to display visual content such as text or images. It is present in consumer, medical, transportation, and industrial equipment. Flat-panel displays are thin, lightweight, provide better linearity and are capable of higher resolution than typical consumer-grade TVs from earlier eras. They are usually less than thick. While the highest resolution for consumer-grade CRT televisions was 1080i, many flat-panel displays in the 2020s are capable of 1080p and 4K resolution. In the 2010s, portable consumer electronics such as laptops, mobile phones, and portable cameras have used flat-panel displays since they consume less power and are lightweight. As of 2016, flat-panel displays have almost completely replaced CRT displays. Most 2010s-era flat-panel displays use LCD or light-emitting diode (LED) technologies, sometimes combined. Most LCD screens are back-lit with color filters used to display colors. In many cases, flat-panel displays are combined with touch screen technology, which allows the user to interact with the display in a natural manner. For example, modern smartphone displays often use OLED panels, with capacitive touch screens. Flat-panel displays can be divided into two display device categories: volatile and static. The former requires that pixels be periodically electronically refreshed to retain their state (e.g. liquid-crystal displays (LCD)), and can only show an image when it has power. On the other hand, static flat-panel displays rely on materials whose color states are bistable, such as displays that make use of e-ink technology, and as such retain content even when power is removed. History The first engineering proposal for a flat-panel TV was by General Electric in 1954 as a result of its work on radar monitors. The publication of their findings gave all the basics of future flat-panel TVs and monitors. But GE did not continue with the R&D required and never built a working flat panel a
https://en.wikipedia.org/wiki/Character%20%28mathematics%29
In mathematics, a character is (most commonly) a special kind of function from a group to a field (such as the complex numbers). There are at least two distinct, but overlapping meanings. Other uses of the word "character" are almost always qualified.

Multiplicative character

A multiplicative character (or linear character, or simply character) on a group G is a group homomorphism from G to the multiplicative group of a field F, usually the field of complex numbers. If G is any group, then the set Ch(G) of these morphisms forms an abelian group under pointwise multiplication. This group is referred to as the character group of G. Sometimes only unitary characters are considered (thus the image is in the unit circle); other such homomorphisms are then called quasi-characters. Dirichlet characters can be seen as a special case of this definition. Multiplicative characters are linearly independent, i.e. if χ1, χ2, …, χn are different characters on a group G then from a1χ1 + a2χ2 + ⋯ + anχn = 0 it follows that a1 = a2 = ⋯ = an = 0.

Character of a representation

The character of a representation ρ : G → GL(V) of a group G on a finite-dimensional vector space V over a field F is the trace of the representation, i.e.

χρ(g) = tr(ρ(g)) for g ∈ G.

In general, the trace is not a group homomorphism, nor does the set of traces form a group. The characters of one-dimensional representations are identical to one-dimensional representations, so the above notion of multiplicative character can be seen as a special case of higher-dimensional characters. The study of representations using characters is called "character theory" and one-dimensional characters are also called "linear characters" within this context.

Alternative definition

If restricted to finite abelian groups with 1 × 1 representations in C (i.e. ρ : G → C×), the following alternative definition would be equivalent to the above (for abelian groups, every matrix representation decomposes into a direct sum of 1 × 1 representations; for non-abelian groups, the original definition would be more general than this one): A c
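For a small concrete case, the characters of the cyclic group Z/n are χk(g) = exp(2πikg/n). The sketch below (illustrative helper names, n = 5 chosen arbitrarily) checks numerically that distinct characters are orthogonal under the averaged inner product, which implies the linear independence stated above, and that the pointwise product of two characters is again a character.

```python
# Characters of Z/n: chi_k(g) = exp(2*pi*i*k*g/n). Check orthogonality
# <chi_j, chi_k> = delta_jk and closure under pointwise multiplication.
import cmath

n = 5

def chi(k, g):
    return cmath.exp(2j * cmath.pi * k * g / n)

def inner(j, k):
    """Averaged inner product <chi_j, chi_k> over the group."""
    return sum(chi(j, g) * chi(k, g).conjugate() for g in range(n)) / n

for j in range(n):
    for k in range(n):
        expected = 1.0 if j == k else 0.0
        assert abs(inner(j, k) - expected) < 1e-9

# Pointwise product of characters is a character: chi_1 * chi_2 = chi_3
assert all(abs(chi(1, g) * chi(2, g) - chi(3, g)) < 1e-9 for g in range(n))
```

Orthogonality gives linear independence directly: pairing a vanishing linear combination a1χ1 + ⋯ + anχn = 0 against each χk isolates ak = 0.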
https://en.wikipedia.org/wiki/%2A-algebra
In mathematics, and more specifically in abstract algebra, a *-algebra (or involutive algebra) is a mathematical structure consisting of two involutive rings R and A, where R is commutative and A has the structure of an associative algebra over R. Involutive algebras generalize the idea of a number system equipped with conjugation, for example the complex numbers and complex conjugation, matrices over the complex numbers and conjugate transpose, and linear operators over a Hilbert space and Hermitian adjoints. However, it may happen that an algebra admits no involution.

Definitions

*-ring

In mathematics, a *-ring is a ring with a map * : A → A that is an antiautomorphism and an involution. More precisely, * is required to satisfy the following properties:

(x + y)* = x* + y*
(x y)* = y* x*
1* = 1
(x*)* = x

for all x, y in A. This is also called an involutive ring, involutory ring, and ring with involution. The third axiom is implied by the second and fourth axioms, making it redundant. Elements such that x* = x are called self-adjoint. Archetypical examples of a *-ring are fields of complex numbers and algebraic numbers with complex conjugation as the involution. One can define a sesquilinear form over any *-ring. Also, one can define *-versions of algebraic objects, such as ideal and subring, with the requirement to be *-invariant: I* ⊆ I, and so on. *-rings are unrelated to star semirings in the theory of computation.

*-algebra

A *-algebra A is a *-ring, with involution *, that is an associative algebra over a commutative *-ring R with involution ′, such that (r x)* = r′ x* for all r ∈ R and x ∈ A. The base *-ring R is often the complex numbers (with ′ acting as complex conjugation). It follows from the axioms that * on A is conjugate-linear in R, meaning (λx + μy)* = λ′x* + μ′y* for λ, μ ∈ R and x, y ∈ A. A *-homomorphism f : A → B is an algebra homomorphism that is compatible with the involutions of A and B, i.e., f(a*) = f(a)* for all a in A.

Philosophy of the *-operation

The *-operation on a *-ring is analogous to complex conjugation on the complex numbers. The *-operation on a *-algebra is analogous to taking adjoints in complex
https://en.wikipedia.org/wiki/Antilinear%20map
In mathematics, a function f : V → W between two complex vector spaces is said to be antilinear or conjugate-linear if

f(x + y) = f(x) + f(y)   (additivity)
f(sx) = s̄ f(x)   (conjugate homogeneity)

hold for all vectors x, y ∈ V and every complex number s, where s̄ denotes the complex conjugate of s. Antilinear maps stand in contrast to linear maps, which are additive maps that are homogeneous rather than conjugate homogeneous. If the vector spaces are real then antilinearity is the same as linearity. Antilinear maps occur in quantum mechanics in the study of time reversal and in spinor calculus, where it is customary to replace the bars over the basis vectors and the components of geometric objects by dots put above the indices. Scalar-valued antilinear maps often arise when dealing with complex inner products and Hilbert spaces.

Definitions and characterizations

A function is called antilinear or conjugate-linear if it is additive and conjugate homogeneous. An antilinear functional on a vector space is a scalar-valued antilinear map. A function f is called additive if f(x + y) = f(x) + f(y) for all vectors x, y, while it is called conjugate homogeneous if f(ax) = ā f(x) for all vectors x and all scalars a. In contrast, a linear map is a function that is additive and homogeneous, where f is called homogeneous if f(ax) = a f(x) for all vectors x and all scalars a. An antilinear map may be equivalently described in terms of the linear map from V to the complex conjugate vector space W̄.

Examples

Anti-linear dual map

Given a complex vector space V of rank 1, we can construct an anti-linear dual map, which is an anti-linear map ℓ : V → C sending an element x1 + iy1, for x1, y1 ∈ R, to a1x1 − ib1y1 for some fixed real numbers a1, b1. We can extend this to any finite dimensional complex vector space, where if we write out the standard basis e1, …, en and each standard basis element as ek = xk + iyk, then an anti-linear complex map to C will be of the form Σk xk + iyk ↦ Σk akxk − ibkyk for ak, bk ∈ R.

Isomorphism of anti-linear dual with real dual

The anti-linear dual of a complex vector space V is a special example because it is isomorphic to the real dual of the underlying real vector space of V. This is given by the map sending an anti-linear map to its real part; in the other direction, there is the inverse map sending a real dual vector λ to ℓ(v) = λ(v) + iλ(iv), giving the desired map.

Properties

The composite of two a
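The model antilinear map is componentwise complex conjugation. A minimal numerical sketch (vector values chosen arbitrarily) checking the two defining properties, and the fact that composing the map with itself yields the identity, which is linear rather than antilinear:

```python
# Check that f(v) = conj(v) (componentwise) is additive and conjugate
# homogeneous, i.e. antilinear, and that f composed with f is linear.

def f(v):
    """Componentwise complex conjugation: the model antilinear map."""
    return [z.conjugate() for z in v]

u = [1 + 2j, -3j]
v = [0.5 - 1j, 4 + 0j]
s = 2 - 5j

# Additivity: f(u + v) = f(u) + f(v)
add = [a + b for a, b in zip(u, v)]
assert all(abs(a - b) < 1e-12
           for a, b in zip(f(add), [a + b for a, b in zip(f(u), f(v))]))

# Conjugate homogeneity: f(s u) = conj(s) f(u)
assert all(abs(a - b) < 1e-12
           for a, b in zip(f([s * z for z in u]),
                           [s.conjugate() * z for z in f(u)]))

# f o f is the identity map, which is linear (not antilinear)
assert f(f(u)) == u
```

The last assertion illustrates the composition rule stated in the (truncated) Properties section: composing two antilinear maps gives a linear map.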
https://en.wikipedia.org/wiki/Involution%20%28mathematics%29
In mathematics, an involution, involutory function, or self-inverse function is a function f that is its own inverse, f(f(x)) = x for all x in the domain of f. Equivalently, applying f twice produces the original value.

General properties

Any involution is a bijection. The identity map is a trivial example of an involution. Examples of nontrivial involutions include negation (x ↦ −x), reciprocation (x ↦ 1/x), and complex conjugation (z ↦ z̄) in arithmetic; reflection, half-turn rotation, and circle inversion in geometry; complementation in set theory; and reciprocal ciphers such as the ROT13 transformation and the Beaufort polyalphabetic cipher. The composition of two involutions f and g is an involution if and only if they commute: g ∘ f = f ∘ g.

Involutions on finite sets

The number of involutions, including the identity involution, on a set with n elements is given by a recurrence relation found by Heinrich August Rothe in 1800: a(0) = a(1) = 1 and a(n) = a(n − 1) + (n − 1) a(n − 2) for n > 1. The first few terms of this sequence are 1, 1, 2, 4, 10, 26, 76, 232; these numbers are called the telephone numbers, and they also count the number of Young tableaux with a given number of cells. The number a(n) can also be expressed by non-recursive formulas, such as the sum

a(n) = Σ_{m=0}^{⌊n/2⌋} n! / (2^m m! (n − 2m)!).

The number of fixed points of an involution on a finite set and its number of elements have the same parity. Thus the number of fixed points of all the involutions on a given finite set have the same parity. In particular, every involution on an odd number of elements has at least one fixed point. This can be used to prove Fermat's two squares theorem.

Involution throughout the fields of mathematics

Real-valued functions

Some basic examples of involutions include the functions f(x) = −x and f(x) = 1/x, their composition f(x) = −1/x, and more generally the function f(x) = (b − x)/(1 + cx), which is an involution for constants b and c that satisfy bc ≠ −1. The graph of an involution (on the real numbers) is symmetric across the line y = x. This is due to the fact that the inverse of any general function will be its reflection over the line y = x. This can be seen b
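Rothe's recurrence has a direct combinatorial reading: the n-th element is either a fixed point (leaving a(n − 1) choices) or is swapped with one of the other n − 1 elements (leaving a(n − 2) choices). A minimal sketch computing the telephone numbers this way:

```python
# Telephone numbers via Rothe's recurrence:
# a(0) = a(1) = 1,  a(n) = a(n-1) + (n-1) * a(n-2) for n > 1.

def involutions(n):
    """Number of involutions on an n-element set."""
    a, b = 1, 1          # a(0), a(1)
    for k in range(2, n + 1):
        # element k is a fixed point (b ways) or swapped with one
        # of the k-1 others (a ways each)
        a, b = b, b + (k - 1) * a
    return b if n >= 1 else a

print([involutions(n) for n in range(8)])   # [1, 1, 2, 4, 10, 26, 76, 232]
```

The printed values match the sequence quoted above, and a brute-force count of permutations p with p ∘ p = id agrees for small n.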
https://en.wikipedia.org/wiki/D%27Alembert%20operator
In special relativity, electromagnetism and wave theory, the d'Alembert operator (denoted by a box: □), also called the d'Alembertian, wave operator, box operator or sometimes quabla operator (cf. nabla symbol) is the Laplace operator of Minkowski space. The operator is named after French mathematician and physicist Jean le Rond d'Alembert. In Minkowski space, in standard coordinates (t, x, y, z), it has the form

□ = ημν ∂μ ∂ν = ∂²/∂t² − ∂²/∂x² − ∂²/∂y² − ∂²/∂z² = ∂²/∂t² − Δ.

Here Δ is the 3-dimensional Laplacian and ημν is the inverse Minkowski metric with η00 = 1, η11 = η22 = η33 = −1, and ημν = 0 for μ ≠ ν. Note that the μ and ν summation indices range from 0 to 3: see Einstein notation. We have assumed units such that the speed of light c = 1. (Some authors alternatively use the negative metric signature of (−, +, +, +), with η00 = −1.) Lorentz transformations leave the Minkowski metric invariant, so the d'Alembertian yields a Lorentz scalar. The above coordinate expressions remain valid for the standard coordinates in every inertial frame.

The box symbol and alternate notations

There are a variety of notations for the d'Alembertian. The most common are the box symbol □, whose four sides represent the four dimensions of space-time, and the box-squared symbol □², which emphasizes the scalar property through the squared term (much like the Laplacian). In keeping with the triangular notation for the Laplacian, sometimes △ is used. Another way to write the d'Alembertian in flat standard coordinates is ∂². This notation is used extensively in quantum field theory, where partial derivatives are usually indexed, so the lack of an index with the squared partial derivative signals the presence of the d'Alembertian. Sometimes the box symbol is used to represent the four-dimensional Levi-Civita covariant derivative. The symbol ∇ is then used to represent the space derivatives, but this is coordinate chart dependent.

Applications

The wave equation for small vibrations is of the form □u = 0, where u is the displacement. The wave equation for the electromagnetic field in vacuum is □Aμ = 0, where Aμ is the electromagn
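In 1+1 dimensions (units with c = 1) the d'Alembertian reduces to ∂²/∂t² − ∂²/∂x², and any travelling wave f(t, x) = g(t − x) is annihilated by it. A minimal numerical sketch of this, approximating the second derivatives by central differences (step size and sample points chosen arbitrarily):

```python
# Numerical check: box f = d^2f/dt^2 - d^2f/dx^2 vanishes for the
# travelling wave f(t, x) = sin(t - x), with c = 1.
import math

def f(t, x):
    return math.sin(t - x)

def box(func, t, x, h=1e-4):
    """Central-difference approximation of the 1+1 d'Alembertian."""
    d2t = (func(t + h, x) - 2 * func(t, x) + func(t - h, x)) / h**2
    d2x = (func(t, x + h) - 2 * func(t, x) + func(t, x - h)) / h**2
    return d2t - d2x

for t, x in [(0.0, 0.3), (1.2, -0.7), (2.5, 1.1)]:
    assert abs(box(f, t, x)) < 1e-5   # wave equation satisfied
```

Here each second derivative separately equals −sin(t − x), so their difference cancels, which is the 1+1 dimensional form of the wave equation □u = 0 mentioned above.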
https://en.wikipedia.org/wiki/Sign%20convention
In physics, a sign convention is a choice of the physical significance of signs (plus or minus) for a set of quantities, in a case where the choice of sign is arbitrary. "Arbitrary" here means that the same physical system can be correctly described using different choices for the signs, as long as one set of definitions is used consistently. The choices made may differ between authors. Disagreement about sign conventions is a frequent source of confusion, frustration, misunderstandings, and even outright errors in scientific work. In general, a sign convention is a special case of a choice of coordinate system for the case of one dimension. Sometimes, the term "sign convention" is used more broadly to include factors of i and 2π, rather than just choices of sign. Relativity Metric signature In relativity, the metric signature can be either (+, −, −, −) or (−, +, +, +). (Note that throughout this article we are displaying the signs of the eigenvalues of the metric in the order that presents the timelike component first, followed by the spacelike components). A similar convention is used in higher-dimensional relativistic theories; that is, (+, −, −, −, ...) or (−, +, +, +, ...). A choice of signature is associated with a variety of names: (+, −, −, −): Timelike convention Particle physics convention West coast convention Mostly minuses Landau–Lifshitz sign convention. (−, +, +, +): Spacelike convention Relativity convention East coast convention Mostly pluses Pauli convention Cataloged below are the choices of various authors of some graduate textbooks: (+, −, −, −): Landau & Lifshitz Gravitation: an introduction to current research (L. Witten) Ray D'Inverno, Introducing Einstein's relativity. (−, +, +, +): Misner, Thorne and Wheeler Spacetime and Geometry: An Introduction to General Relativity (Sean M. Carroll) General Relativity (Wald) (Note that Wald changes signature to the timelike convention for Chapter 13 only.) The signature (+, −, −, −) corresponds to the metric tensor diag(1, −1, −1, −1) and gives m²c² = p^μ p_μ as the relationship between mass and four-momentum, whereas the signature corres
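The practical effect of the two conventions can be seen on the squared four-momentum. A minimal sketch (c = 1, illustrative numbers only): the same four-vector dotted with itself yields +m² under "mostly minuses" and −m² under "mostly pluses".

```python
def minkowski_dot(p, q, signature):
    """Inner product with the diagonal metric diag(signature),
    signature a 4-tuple of +1/-1 entries, timelike component first."""
    return sum(s * a * b for s, a, b in zip(signature, p, q))

# Four-momentum of a massive particle (c = 1): p = (E, px, py, pz), E^2 = m^2 + |p|^2.
m = 2.0
p3 = (3.0, 0.0, 0.0)
E = (m**2 + sum(c * c for c in p3)) ** 0.5
p = (E, *p3)

west = (+1, -1, -1, -1)  # "mostly minuses": p.p = +m^2
east = (-1, +1, +1, +1)  # "mostly pluses":  p.p = -m^2
print(minkowski_dot(p, p, west), minkowski_dot(p, p, east))  # ~= +4.0 and -4.0
```

Either number encodes the same physics; only the bookkeeping of signs differs, which is exactly the article's point about consistency.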
https://en.wikipedia.org/wiki/Graph%20paper
Graph paper, coordinate paper, grid paper, or squared paper is writing paper that is printed with fine lines making up a regular grid. The lines are often used as guides for plotting graphs of functions or experimental data and drawing curves. It is commonly found in mathematics and engineering education settings and in laboratory notebooks. Graph paper is available either as loose leaf paper or bound in notebooks. History The Metropolitan Museum of Art owns a pattern book dated to around 1596 in which each page bears a grid printed with a woodblock. The owner has used these grids to create block pictures in black and white and in colour. The first commercially published "coordinate paper" is usually attributed to a Dr. Buxton of England, who patented paper, printed with a rectangular coordinate grid, in 1794. A century later, E. H. Moore, a distinguished mathematician at the University of Chicago, advocated usage of paper with "squared lines" by students of high schools and universities. The 1906 edition of Algebra for Beginners by H. S. Hall and S. R. Knight included a strong statement that "the squared paper should be of good quality and accurately ruled to inches and tenths of an inch. Experience shows that anything on a smaller scale (such as 'millimeter' paper) is practically worthless in the hands of beginners." The term "graph paper" did not catch on quickly in American usage. A School Arithmetic (1919) by H. S. Hall and F. H. Stevens had a chapter on graphing with "squared paper". Analytic Geometry (1937) by W. A. Wilson and J. A. Tracey used the phrase "coordinate paper". The term "squared paper" remained in British usage for longer; for example it was used in Public School Arithmetic (2023) by W. M. Baker and A. A. Bourne published in London. Formats Quad paper, sometimes referred to as quadrille paper from French quadrillé, 'large square', is a common form of graph paper with a sparse grid printed in light blue or gray and right to the edge of the
https://en.wikipedia.org/wiki/Lift-to-drag%20ratio
In aerodynamics, the lift-to-drag ratio (or L/D ratio) is the lift generated by an aerodynamic body such as an aerofoil or aircraft, divided by the aerodynamic drag caused by moving through air. It describes the aerodynamic efficiency under given flight conditions. The L/D ratio for any given body will vary according to these flight conditions. For an aerofoil wing or powered aircraft, the L/D is specified when in straight and level flight. For a glider it determines the glide ratio, of distance travelled against loss of height. The term is calculated for any particular airspeed by measuring the lift generated, then dividing by the drag at that speed. These vary with speed, so the results are typically plotted on a 2-dimensional graph. In almost all cases the graph forms a U-shape, due to the two main components of drag. The L/D may be calculated using computational fluid dynamics or computer simulation. It is measured empirically by testing in a wind tunnel or in free flight test. The L/D ratio is affected by both the form drag of the body and by the induced drag associated with creating a lifting force. It depends principally on the lift and drag coefficients, angle of attack to the airflow and the wing aspect ratio. The L/D ratio is inversely proportional to the energy required for a given flightpath, so that doubling the L/D ratio will require only half of the energy for the same distance travelled. This results directly in better fuel economy. The L/D ratio can also be used for water craft and land vehicles. The L/D ratios for hydrofoil boats and displacement craft are determined similarly to aircraft. Lift and drag Lift can be created when an aerofoil-shaped body travels through a viscous fluid such as air. The aerofoil is often cambered and/or set at an angle of attack to the airflow. The lift then increases as the square of the airspeed. Whenever an aerodynamic body generates lift, this also creates lift-induced drag or induced drag. At low speeds an
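The ratio and its glide interpretation can be sketched directly. The numbers below are hypothetical (roughly sailplane-like), not taken from the article:

```python
def lift_to_drag(lift_n, drag_n):
    """L/D ratio: lift divided by drag at a given airspeed (both in newtons)."""
    return lift_n / drag_n

def glide_distance(altitude_m, l_over_d):
    """For an unpowered glide, ground distance covered ~= altitude * (L/D)."""
    return altitude_m * l_over_d

ld = lift_to_drag(lift_n=45000.0, drag_n=3000.0)  # hypothetical values
print(ld)                           # 15.0
print(glide_distance(1000.0, ld))   # 15000.0 -> 15 km from 1 km of height
```

This also illustrates the energy statement: doubling L/D doubles the distance achievable from the same height, i.e. halves the energy needed per unit distance.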
https://en.wikipedia.org/wiki/Eddington%E2%80%93Finkelstein%20coordinates
In general relativity, Eddington–Finkelstein coordinates are a pair of coordinate systems for a Schwarzschild geometry (e.g. a spherically symmetric black hole) which are adapted to radial null geodesics. Null geodesics are the worldlines of photons; radial ones are those that are moving directly towards or away from the central mass. They are named for Arthur Stanley Eddington and David Finkelstein. Although they appear to have inspired the idea, neither ever wrote down these coordinates or the metric in these coordinates. Roger Penrose seems to have been the first to write down the null form but credits it to the above paper by Finkelstein, and, in his Adams Prize essay later that year, to Eddington and Finkelstein. Most influentially, Misner, Thorne and Wheeler, in their book Gravitation, refer to the null coordinates by that name. In these coordinate systems, outward (inward) traveling radial light rays (which each follow a null geodesic) define the surfaces of constant "time", while the radial coordinate is the usual area coordinate so that the surfaces of rotation symmetry have an area of 4πr². One advantage of this coordinate system is that it shows that the apparent singularity at the Schwarzschild radius is only a coordinate singularity and is not a true physical singularity. While this fact was recognized by Finkelstein, it was not recognized (or at least not commented on) by Eddington, whose primary purpose was to compare and contrast the spherically symmetric solutions in Whitehead's theory of gravitation and Einstein's version of the theory of relativity. Schwarzschild metric The Schwarzschild coordinates are (t, r, θ, φ), and in these coordinates the Schwarzschild metric is well known: ds² = −(1 − 2GM/r) dt² + (1 − 2GM/r)⁻¹ dr² + r² dΩ², where dΩ² is the standard Riemannian metric of the unit 2-sphere. Note the conventions being used here are the metric signature of (− + + +) and the natural units where c = 1 is the dimensionless speed of light, G the gravitational constant, and M is the characteristic mass of the S
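Eddington–Finkelstein coordinates are conventionally built from the tortoise coordinate r* (a standard construction, though it appears beyond this excerpt). A sketch, assuming G = c = 1:

```python
import math

def tortoise_r(r, M):
    """Tortoise coordinate r* = r + 2M ln|r/(2M) - 1| (G = c = 1), for r > 2M."""
    rs = 2.0 * M
    return r + rs * math.log(abs(r / rs - 1.0))

M = 1.0  # Schwarzschild radius rs = 2M = 2
for r in (2.1, 3.0, 10.0, 100.0):
    print(r, tortoise_r(r, M))
# r* -> -infinity as r -> 2M from above: in these coordinates the horizon sits at
# infinite coordinate distance, which is how radial null rays (dt = +/- dr*) get
# straightened out before passing to the Eddington-Finkelstein form.
```

Ingoing Eddington–Finkelstein time is then v = t + r*, which is precisely what makes the metric regular at r = 2M.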
https://en.wikipedia.org/wiki/Flavr%20Savr
Flavr Savr (also known as CGN-89564-2; pronounced "flavor saver"), a genetically modified tomato, was the first commercially grown genetically engineered food to be granted a license for human consumption. It was developed by the Californian company Calgene in the 1980s. The tomato has an improved shelf-life, increased fungal resistance and a slightly increased viscosity compared to its non-modified counterpart. It was meant to be harvested ripe for increased flavor for long-distance shipping. The Flavr Savr contains two genes added by Calgene: a reversed antisense polygalacturonase gene, which inhibits production of the fruit-softening enzyme polygalacturonase, and a gene responsible for the creation of APH(3')II, which confers resistance to certain aminoglycoside antibiotics including kanamycin and neomycin. On May 18, 1994, the FDA completed its evaluation of the Flavr Savr tomato and the use of APH(3')II, concluding that the tomato "is as safe as tomatoes bred by conventional means" and "that the use of aminoglycoside 3'-phosphotransferase II is safe for use as a processing aid in the development of new varieties of tomato, rapeseed oil, and cotton intended for food use." It was first sold in 1994, and was only available for a few years before production ceased in 1997. Calgene made history, but mounting costs prevented the company from becoming profitable, and it was eventually acquired by Monsanto Company. Characteristics Tomatoes have a short shelf-life in which they remain firm and ripe. This lifetime may be shorter than the time needed for them to reach market when shipped from winter growing areas to markets in the north, and the softening process can also lead to more of the fruit being damaged during transit. If picked while ripe, tomatoes can spoil before reaching far-away consumers due to their short lifetime. To address this, tomatoes intended for shipping are often picked while they are unripe, or "green", and then prompted to ripen just before delivery thr
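An "antisense" construct carries the reverse complement of a coding sequence, so its transcript base-pairs with, and silences, the normal messenger RNA. A toy sketch of that orientation (the sequence is made up, not the actual polygalacturonase gene):

```python
COMPLEMENT = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def antisense(seq):
    """Reverse complement of a DNA sequence -- the orientation used by an
    antisense construct such as the Flavr Savr polygalacturonase gene."""
    return ''.join(COMPLEMENT[base] for base in reversed(seq))

print(antisense('ATGGCC'))  # GGCCAT
```

Applying the function twice returns the original strand, reflecting the complementarity that lets the antisense transcript hybridise with its target.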
https://en.wikipedia.org/wiki/GNUnet
GNUnet is a software framework for decentralized, peer-to-peer networking and an official GNU package. The framework offers link encryption, peer discovery, resource allocation, communication over many transports (such as TCP, UDP, HTTP, HTTPS, WLAN and Bluetooth) and various basic peer-to-peer algorithms for routing, multicast and network size estimation. GNUnet's basic network topology is that of a mesh network. GNUnet includes a distributed hash table (DHT) which is a randomized variant of Kademlia that can still efficiently route in small-world networks. GNUnet offers a "F2F topology" option for restricting connections to only the users' trusted friends. The users' friends' own friends (and so on) can then indirectly exchange files with the users' computer, never using its IP address directly. GNUnet uses Uniform resource identifiers (not approved by IANA, although an application has been made). GNUnet URIs consist of two major parts: the module and the module specific identifier. A GNUnet URI is of form gnunet://module/identifier where module is the module name and identifier is a module specific string. The primary codebase is written in C, but there are bindings in other languages to produce an API for developing extensions in those languages. GNUnet is part of the GNU Project. It has gained interest in the hacker community after the PRISM revelations. GNUnet consists of several subsystems, of which essential ones are Transport and Core subsystems. Transport subsystem provides insecure link-layer communications, while Core provides peer discovery and encryption. On top of the core subsystem various applications are built. GNUnet includes various P2P applications in the main distribution of the framework, including filesharing, chat and VPN; additionally, a few external projects (such as secushare) are also extending the GNUnet infrastructure. GNUnet is unrelated to the older Gnutella P2P protocol. Gnutella is not an official GNU project, while GNUnet i
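The gnunet://module/identifier form described above can be split with the standard library. A minimal sketch — the module name "ecrs" and the identifier are illustrative, and real GNUnet identifiers have module-specific syntax beyond this:

```python
from urllib.parse import urlparse

def parse_gnunet_uri(uri):
    """Split a gnunet://module/identifier URI into (module, identifier).
    Illustrative only; does not validate module-specific identifier syntax."""
    parts = urlparse(uri)
    if parts.scheme != 'gnunet':
        raise ValueError('not a gnunet URI')
    return parts.netloc, parts.path.lstrip('/')

print(parse_gnunet_uri('gnunet://ecrs/some-identifier'))  # ('ecrs', 'some-identifier')
```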
https://en.wikipedia.org/wiki/Closure%20%28mathematics%29
In mathematics, a subset of a given set is closed under an operation of the larger set if performing that operation on members of the subset always produces a member of that subset. For example, the natural numbers are closed under addition, but not under subtraction: 1 − 2 is not a natural number, although both 1 and 2 are. Similarly, a subset is said to be closed under a collection of operations if it is closed under each of the operations individually. The closure of a subset is the result of a closure operator applied to the subset. The closure of a subset under some operations is the smallest superset that is closed under these operations. It is often called the span (for example linear span) or the generated set. Definitions Let S be a set equipped with one or several methods for producing elements of S from other elements of S. A subset X of S is said to be closed under these methods, if, when all input elements are in X, then all possible results are also in X. Sometimes, one may also say that X has the closure property. The main property of closed sets, which results immediately from the definition, is that every intersection of closed sets is a closed set. It follows that for every subset X of S, there is a smallest closed subset Y of S such that X ⊆ Y (it is the intersection of all closed subsets that contain X). Depending on the context, Y is called the closure of X or the set generated or spanned by X. The concepts of closed sets and closure are often extended to any property of subsets that are stable under intersection; that is, every intersection of subsets that have the property has also the property. For example, in ℂⁿ, a Zariski-closed set, also known as an algebraic set, is the set of the common zeros of a family of polynomials, and the Zariski closure of a set V of points is the smallest algebraic set that contains V. In algebraic structures An algebraic structure is a set equipped with operations that satisfy some axioms. These axioms may be identities. Some axioms may contain existe
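For finite (or bounded) cases, the closure can be computed as a fixed point: keep applying the operations until nothing new appears. A sketch, with an artificial bound to keep the subtraction example finite:

```python
def close_under(seed, operations, limit=1000):
    """Smallest superset of `seed` closed under the given binary operations,
    computed as a fixed point. `limit` artificially bounds the universe so the
    iteration terminates (real closures may be infinite)."""
    closed = set(seed)
    changed = True
    while changed:
        changed = False
        for op in operations:
            for a in list(closed):
                for b in list(closed):
                    r = op(a, b)
                    if abs(r) <= limit and r not in closed:
                        closed.add(r)
                        changed = True
    return closed

# {1} is not closed under subtraction; its closure (bounded to |x| <= 3 here)
# pulls in 0 and negative integers -- mirroring why 1 - 2 is not a natural number.
print(sorted(close_under({1}, [lambda a, b: a - b], limit=3)))  # [-3, -2, -1, 0, 1, 2, 3]
```

Without the bound, the closure of {1} under subtraction is all of ℤ, the subgroup of (ℤ, −) generated by 1.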
https://en.wikipedia.org/wiki/Aleph%20number
In mathematics, particularly in set theory, the aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets that can be well-ordered. They were introduced by the mathematician Georg Cantor and are named after the symbol he used to denote them, the Hebrew letter aleph (ℵ). The cardinality of the natural numbers is ℵ₀ (read aleph-nought or aleph-zero; the term aleph-null is also sometimes used), the next larger cardinality of a well-ordered set is aleph-one ℵ₁, then ℵ₂ and so on. Continuing in this manner, it is possible to define a cardinal number ℵ_α for every ordinal number α, as described below. The concept and notation are due to Georg Cantor, who defined the notion of cardinality and realized that infinite sets can have different cardinalities. The aleph numbers differ from the infinity (∞) commonly found in algebra and calculus, in that the alephs measure the sizes of sets, while infinity is commonly defined either as an extreme limit of the real number line (applied to a function or sequence that "diverges to infinity" or "increases without bound"), or as an extreme point of the extended real number line. Aleph-zero ℵ₀ (aleph-zero, also aleph-nought or aleph-null) is the cardinality of the set of all natural numbers, and is an infinite cardinal. The set of all finite ordinals, called ω or ω₀ (where ω is the lowercase Greek letter omega), has cardinality ℵ₀. A set has cardinality ℵ₀ if and only if it is countably infinite, that is, there is a bijection (one-to-one correspondence) between it and the natural numbers. Examples of such sets are the set of all integers, any infinite subset of the integers, such as the set of all square numbers or the set of all prime numbers, the set of all rational numbers, the set of all constructible numbers (in the geometric sense), the set of all algebraic numbers, the set of all computable numbers, the set of all computable functions, the set of all binary strings of finite length, and the
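The claim that the integers have cardinality ℵ₀ is witnessed by an explicit bijection ℕ → ℤ. A minimal sketch of the standard zig-zag enumeration:

```python
def nat_to_int(n):
    """Bijection from the naturals onto the integers, witnessing |Z| = aleph-null:
    0, 1, 2, 3, 4, 5, ... -> 0, -1, 1, -2, 2, -3, ..."""
    return n // 2 if n % 2 == 0 else -(n // 2) - 1

print([nat_to_int(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]
```

Every integer appears exactly once in this enumeration, which is precisely the one-to-one correspondence the definition of countable infinity requires.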
https://en.wikipedia.org/wiki/Extensionality
In logic, extensionality, or extensional equality, refers to principles that judge objects to be equal if they have the same external properties. It stands in contrast to the concept of intensionality, which is concerned with whether the internal definitions of objects are the same. Example Consider the two functions f and g mapping from and to natural numbers, defined as follows: To find f(n), first add 5 to n, then multiply by 2. To find g(n), first multiply n by 2, then add 10. These functions are extensionally equal; given the same input, both functions always produce the same value. But the definitions of the functions are not equal, and in that intensional sense the functions are not the same. Similarly, in natural language there are many predicates (relations) that are intensionally different but are extensionally identical. For example, suppose that a town has one person named Joe, who is also the oldest person in the town. Then, the two predicates "being called Joe", and "being the oldest person in this town" are intensionally distinct, but extensionally equal for the (current) population of this town. In mathematics The extensional definition of function equality, discussed above, is commonly used in mathematics. Sometimes additional information is attached to a function, such as an explicit codomain, in which case two functions must not only agree on all values, but must also have the same codomain, in order to be equal (in contrast, the usual definition of a function in mathematics means that equal functions must have the same domain). A similar extensional definition is usually employed for relations: two relations are said to be equal if they have the same extensions. In set theory, the axiom of extensionality states that two sets are equal if and only if they contain the same elements. In mathematics formalized in set theory, it is common to identify relations—and, most importantly, functions—with their extension as stated above, so that
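The f/g example from the article can be sketched directly: the two definitions differ intensionally, yet the functions agree on every input (checked here on a finite sample; simple algebra, 2(n + 5) = 2n + 10, extends this to all naturals):

```python
def f(n):
    """First add 5 to n, then multiply by 2."""
    return (n + 5) * 2

def g(n):
    """First multiply n by 2, then add 10."""
    return n * 2 + 10

# Extensionally equal: same outputs everywhere, despite different definitions.
print(all(f(n) == g(n) for n in range(1000)))  # True
```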
https://en.wikipedia.org/wiki/Block%20size%20%28cryptography%29
In modern cryptography, symmetric key ciphers are generally divided into stream ciphers and block ciphers. Block ciphers operate on a fixed length string of bits. The length of this bit string is the block size. Both the input (plaintext) and output (ciphertext) are the same length; the output cannot be shorter than the input — this follows logically from the pigeonhole principle and the fact that the cipher must be reversible — and it is undesirable for the output to be longer than the input. Until the announcement of NIST's AES contest, the majority of block ciphers followed the example of the DES in using a block size of 64 bits (8 bytes). However, the birthday paradox tells us that after accumulating a number of blocks equal to the square root of the total number possible, there will be an approximately 50% chance of two or more being the same, which would start to leak information about the message contents. Thus even when used with a proper encryption mode (e.g. CBC or OFB), only 2³² × 8 B = 32 GB of data can be safely sent under one key. In practice a greater margin of security is desired, restricting a single key to the encryption of much less data — say a few hundred megabytes. At one point that seemed like a fair amount of data, but today it is easily exceeded. If the cipher mode does not properly randomise the input, the limit is even lower. Consequently, AES candidates were required to support a block length of 128 bits (16 bytes). This should be acceptable for up to 2⁶⁴ × 16 B = 256 exabytes of data, and should suffice for quite a few years to come. The winner of the AES contest, Rijndael, supports block and key sizes of 128, 192, and 256 bits, but in AES the block size is always 128 bits. The extra block sizes were not adopted by the AES standard. Many block ciphers, such as RC5, support a variable block size. The Luby-Rackoff construction and the Outerbridge construction can both increase the effective block size of a cipher. Joan Daemen's 3-Way and
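The birthday-bound arithmetic above can be sketched as a one-line calculation (a rough 50%-collision threshold, not a recommended operational limit, which should be far lower):

```python
def safe_bytes_per_key(block_bits):
    """Rough birthday-bound limit for an n-bit block cipher: after about
    2**(n/2) blocks, colliding ciphertext blocks become likely, so this is
    the ceiling on data encrypted under one key (in bytes)."""
    blocks = 2 ** (block_bits // 2)       # square root of 2**n possible blocks
    return blocks * (block_bits // 8)     # times bytes per block

print(safe_bytes_per_key(64) == 2**32 * 8)     # True: 32 GB for DES-sized blocks
print(safe_bytes_per_key(128) == 2**64 * 16)   # True: 256 EB for AES-sized blocks
```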
https://en.wikipedia.org/wiki/Connect%20Four
Connect Four (also known as Connect 4, Four Up, Plot Four, Find Four, Captain's Mistress, Four in a Row, Drop Four, and Gravitrips in the Soviet Union) is a game in which the players choose a color and then take turns dropping colored tokens into a six-row, seven-column vertically suspended grid. The pieces fall straight down, occupying the lowest available space within the column. The objective of the game is to be the first to form a horizontal, vertical, or diagonal line of four of one's own tokens. It is therefore a type of m,n,k-game (7, 6, 4) with restricted piece placement. Connect Four is a solved game. The first player can always win by playing the right moves. The game was first sold under the Connect Four trademark by Milton Bradley in February 1974. Gameplay A gameplay example (right) shows the first player starting Connect Four by dropping one of their yellow discs into the center column of an empty game board. The two players then alternate turns dropping one of their discs at a time into an unfilled column, until the second player, with red discs, achieves a diagonal four in a row, and wins the game. If the board fills up before either player achieves four in a row, then the game is a draw. Mathematical solution Connect Four is a two-player game with perfect information for both sides, meaning that nothing is hidden from anyone. Connect Four also belongs to the classification of an adversarial, zero-sum game, since a player's advantage is an opponent's disadvantage. One measure of complexity of the Connect Four game is the number of possible game board positions. For classic Connect Four played on a 7-column-wide, 6-row-high grid, there are 4,531,985,219,092 positions (OEIS sequence A212693)
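The win condition (four in a row horizontally, vertically, or diagonally) can be sketched as a direct scan of the 6×7 grid:

```python
def has_four(board, token):
    """board: 6 rows x 7 cols of single characters ('.' for empty).
    True if `token` has four in a row in any direction."""
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols):
            # right, down, down-right diagonal, down-left diagonal
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == token
                       for rr, cc in cells):
                    return True
    return False

board = [list('.......') for _ in range(6)]
for i in range(4):
    board[5 - i][i] = 'R'  # a rising diagonal for Red from the bottom-left
print(has_four(board, 'R'), has_four(board, 'Y'))  # True False
```

A full engine would also enforce gravity (tokens occupy the lowest free cell of a column); this sketch only checks the terminal condition.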
https://en.wikipedia.org/wiki/Arithmetization%20of%20analysis
The arithmetization of analysis was a research program in the foundations of mathematics carried out in the second half of the 19th century. History Kronecker originally introduced the term arithmetization of analysis, by which he meant its constructivization in the context of the natural numbers (see quotation at bottom of page). The meaning of the term later shifted to signify the set-theoretic construction of the real line. Its main proponent was Weierstrass, who argued the geometric foundations of calculus were not solid enough for rigorous work. Research program The highlights of this research program are: the various (but equivalent) constructions of the real numbers by Dedekind and Cantor resulting in the modern axiomatic definition of the real number field; the epsilon-delta definition of limit; and the naïve set-theoretic definition of function. Legacy An important spinoff of the arithmetization of analysis is set theory. Naive set theory was created by Cantor and others after arithmetization was completed as a way to study the singularities of functions appearing in calculus. The arithmetization of analysis had several important consequences: the widely held belief in the banishment of infinitesimals from mathematics until the creation of non-standard analysis by Abraham Robinson in the 1960s, whereas in reality the work on non-Archimedean systems continued unabated, as documented by P. Ehrlich; the shift of the emphasis from geometric to algebraic reasoning: this has had important consequences in the way mathematics is taught today; it made possible the development of modern measure theory by Lebesgue and the rudiments of functional analysis by Hilbert; it motivated the currently prevalent philosophical position that all of mathematics should be derivable from logic and set theory, ultimately leading to Hilbert's program, Gödel's theorems and non-standard analysis. Quotations "God created the natural numbers, all else is the work of man."
https://en.wikipedia.org/wiki/Fish%20%28cryptography%29
Fish (sometimes capitalised as FISH) was the UK's GC&CS Bletchley Park codename for any of several German teleprinter stream ciphers used during World War II. Enciphered teleprinter traffic was used between German High Command and Army Group commanders in the field, so its intelligence value (Ultra) was of the highest strategic value to the Allies. This traffic normally passed over landlines, but as German forces extended their geographic reach beyond western Europe, they had to resort to wireless transmission. Bletchley Park decrypts of messages enciphered with the Enigma machines revealed that the Germans called one of their wireless teleprinter transmission systems "" ('sawfish') which led British cryptographers to refer to encrypted German radiotelegraphic traffic as "Fish." The code "Tunny" ('tuna') was the name given to the first non-Morse link, and it was subsequently used for the Lorenz SZ machines and the traffic enciphered by them. History In June 1941, the British "Y" wireless intercept stations, as well as receiving Enigma-enciphered Morse code traffic, started to receive non-Morse traffic which was initially called NoMo. NoMo1 was a German army link between Berlin and Athens, and NoMo2 a temporary air force link between Berlin and Königsberg. The parallel Enigma-enciphered link to NoMo2, which was being read by Government Code and Cypher School at Bletchley Park, revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" (sawfish). This led the British to use the code Fish dubbing the machine and its traffic Tunny. The enciphering/deciphering equipment was called a Geheimschreiber (secret writer) which, like Enigma, used a symmetrical substitution alphabet. The teleprinter code used was the International Telegraph Alphabet No. 2 (ITA2)—Murray's modification of the 5-bit Baudot code. When the Germans invaded Russia, during World War II, the Germans began to use a new type of enciphered transmission between central he
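The "symmetrical substitution alphabet" of the Geheimschreiber machines was additive: each 5-bit ITA2 character was combined with a key character by XOR, so the same operation both enciphers and deciphers. A minimal sketch (the codes below are arbitrary 5-bit values, not a real ITA2 table):

```python
def xor_stream(codes, key_codes):
    """Vernam-style enciphering as used by the Lorenz machines: XOR each 5-bit
    teleprinter code with a key code. Because (p ^ k) ^ k == p, applying the
    same key stream again deciphers -- a symmetrical substitution."""
    return [p ^ k for p, k in zip(codes, key_codes)]

plain = [0b10001, 0b00110, 0b01010]   # arbitrary 5-bit codes
key   = [0b11010, 0b00101, 0b11111]
cipher = xor_stream(plain, key)
print(xor_stream(cipher, key) == plain)  # True
```

This involution property is also what made depths (two messages under the same key stream) so valuable to Bletchley Park: XORing two such ciphertexts cancels the key entirely.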
https://en.wikipedia.org/wiki/Molecular%20assembler
A molecular assembler, as defined by K. Eric Drexler, is a "proposed device able to guide chemical reactions by positioning reactive molecules with atomic precision". A molecular assembler is a kind of molecular machine. Some biological molecules such as ribosomes fit this definition. This is because they receive instructions from messenger RNA and then assemble specific sequences of amino acids to construct protein molecules. However, the term "molecular assembler" usually refers to theoretical human-made devices. Beginning in 2007, the British Engineering and Physical Sciences Research Council has funded development of ribosome-like molecular assemblers. Clearly, molecular assemblers are possible in this limited sense. A technology roadmap project, led by the Battelle Memorial Institute and hosted by several U.S. National Laboratories has explored a range of atomically precise fabrication technologies, including both early-generation and longer-term prospects for programmable molecular assembly; the report was released in December, 2007. In 2008, the Engineering and Physical Sciences Research Council provided funding of £1.5 million over six years (£1,942,235.57, $2,693,808.00 in 2021) for research working towards mechanized mechanosynthesis, in partnership with the Institute for Molecular Manufacturing, amongst others. Likewise, the term "molecular assembler" has been used in science fiction and popular culture to refer to a wide range of fantastic atom-manipulating nanomachines. Much of the controversy regarding "molecular assemblers" results from the confusion in the use of the name for both technical concepts and popular fantasies. In 1992, Drexler introduced the related but better-understood term "molecular manufacturing", which he defined as the programmed "chemical synthesis of complex structures by mechanically positioning reactive molecules, not by manipulating individual atoms". This article mostly discusses "molecular assemblers" in the popular s
https://en.wikipedia.org/wiki/DragonFly%20BSD
DragonFly BSD is a free and open-source Unix-like operating system forked from FreeBSD 4.8. Matthew Dillon, an Amiga developer in the late 1980s and early 1990s and FreeBSD developer between 1994 and 2003, began working on DragonFly BSD in June 2003 and announced it on the FreeBSD mailing lists on 16 July 2003. Dillon started DragonFly in the belief that the techniques adopted for threading and symmetric multiprocessing in FreeBSD 5 would lead to poor performance and maintenance problems. He sought to correct these anticipated problems within the FreeBSD project. Due to conflicts with other FreeBSD developers over the implementation of his ideas, his ability to directly change the codebase was eventually revoked. Despite this, the DragonFly BSD and FreeBSD projects still work together, sharing bug fixes, driver updates, and other improvements. Intended as the logical continuation of the FreeBSD 4.x series, DragonFly has diverged significantly from FreeBSD, implementing lightweight kernel threads (LWKT), an in-kernel message passing system, and the HAMMER file system. Many design concepts were influenced by AmigaOS. System design Kernel The kernel messaging subsystem being developed is similar to those found in microkernels such as Mach, though it is less complex by design. DragonFly's messaging subsystem has the ability to act in either a synchronous or asynchronous fashion, and attempts to use this capability to achieve the best performance possible in any given situation. According to developer Matthew Dillon, progress is being made to provide both device input/output (I/O) and virtual file system (VFS) messaging capabilities that will enable the remainder of the project goals to be met. The new infrastructure will allow many parts of the kernel to be migrated out into userspace; here they will be more easily debugged as they will be smaller, isolated programs, instead of being small parts entwined in a larger chunk of code. Additionally, the migration of se
https://en.wikipedia.org/wiki/Observable
In physics, an observable is a physical property or physical quantity that can be measured. Examples include position and momentum. In systems governed by classical mechanics, it is a real-valued "function" on the set of all possible system states. In quantum physics, it is an operator, or gauge, where the property of the quantum state can be determined by some sequence of operations. For example, these operations might involve submitting the system to various electromagnetic fields and eventually reading a value. Physically meaningful observables must also satisfy transformation laws that relate observations performed by different observers in different frames of reference. These transformation laws are automorphisms of the state space, that is bijective transformations that preserve certain mathematical properties of the space in question. Quantum mechanics In quantum physics, observables manifest as linear operators on a Hilbert space representing the state space of quantum states. The eigenvalues of observables are real numbers that correspond to possible values the dynamical variable represented by the observable can be measured as having. That is, observables in quantum mechanics assign real numbers to outcomes of particular measurements, corresponding to the eigenvalue of the operator with respect to the system's measured quantum state. As a consequence, only certain measurements can determine the value of an observable for some state of a quantum system. In classical mechanics, any measurement can be made to determine the value of an observable. The relation between the state of a quantum system and the value of an observable requires some linear algebra for its description. In the mathematical formulation of quantum mechanics, up to a phase constant, pure states are given by non-zero vectors in a Hilbert space V. Two vectors v and w are considered to specify the same state if and only if for some non-zero . Observables are given by self-adjoint oper
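The statement that measured values are the real eigenvalues of a self-adjoint operator can be sketched with a toy 2×2 observable (real symmetric matrices are the simplest self-adjoint case; the closed-form below is the standard 2×2 eigenvalue formula):

```python
import math

def eigenvalues_2x2_symmetric(a, b, c):
    """Eigenvalues of the real symmetric matrix [[a, b], [b, c]].
    Self-adjoint operators always have real eigenvalues, which are the only
    values a measurement of the corresponding observable can return."""
    mean = (a + c) / 2.0
    radius = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean - radius, mean + radius

# A spin-like observable: sigma_x = [[0, 1], [1, 0]] has eigenvalues -1 and +1,
# the two possible outcomes of measuring spin along x (in units of hbar/2).
print(eigenvalues_2x2_symmetric(0.0, 1.0, 0.0))  # (-1.0, 1.0)
```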
https://en.wikipedia.org/wiki/Poisson%20bracket
In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables (below symbolized by and , respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself as one of the new canonical momentum coordinates. In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups. All of these objects are named in honor of Siméon Denis Poisson. Properties Given two functions and that depend on phase space and time, their Poisson bracket is another function that depends on phase space and time. The following rules hold for any three functions of phase space and time: Anticommutativity Bilinearity Leibniz's rule Jacobi identity Also, if a function is constant over phase space (but may depend on time), then for any . Definition in canonical coordinates In canonical coordinates (also known as Darboux coordinates) on the phase space, given two functions and , the Poisson bracket takes the form The Poisson brackets of the canonical coordinates are
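A small symbolic sketch (illustrative, not from the article) of the canonical-coordinate definition {f, g} = ∂f/∂q ∂g/∂p - ∂f/∂p ∂g/∂q for a single degree of freedom, checking the canonical relations and two of the properties listed above:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    """Poisson bracket {f, g} for one pair of canonical coordinates (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Canonical relations: {q, p} = 1, {q, q} = {p, p} = 0.
assert poisson(q, p) == 1
assert poisson(q, q) == 0 and poisson(p, p) == 0

# Anticommutativity and the Jacobi identity for sample phase-space functions.
f, g, h = q**2 * p, sp.sin(q) + p, q * p**3
assert sp.simplify(poisson(f, g) + poisson(g, f)) == 0
jacobi = (poisson(f, poisson(g, h)) + poisson(g, poisson(h, f))
          + poisson(h, poisson(f, g)))
assert sp.simplify(jacobi) == 0
```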
https://en.wikipedia.org/wiki/Jacobi%20identity
In mathematics, the Jacobi identity is a property of a binary operation that describes how the order of evaluation, the placement of parentheses in a multiple product, affects the result of the operation. By contrast, for operations with the associative property, any order of evaluation gives the same result (parentheses in a multiple product are not needed). The identity is named after the German mathematician Carl Gustav Jacob Jacobi. The cross product and the Lie bracket operation both satisfy the Jacobi identity. In analytical mechanics, the Jacobi identity is satisfied by the Poisson brackets. In quantum mechanics, it is satisfied by operator commutators on a Hilbert space and equivalently in the phase space formulation of quantum mechanics by the Moyal bracket. Definition Let × and + be two binary operations, and let 0 be the neutral element for +. The Jacobi identity is x × (y × z) + y × (z × x) + z × (x × y) = 0. Notice the pattern in the variables on the left side of this identity. In each subsequent expression of the form a × (b × c), the variables x, y and z are permuted according to the cycle x → y → z → x. Alternatively, we may observe that the ordered triples (x, y, z), (y, z, x) and (z, x, y) are the even permutations of the ordered triple (x, y, z). Commutator bracket form The simplest informative example of a Lie algebra is constructed from the (associative) ring of matrices, which may be thought of as infinitesimal motions of an n-dimensional vector space. The × operation is the commutator, which measures the failure of commutativity in matrix multiplication. Instead of x × y, the Lie bracket notation [x, y] = xy - yx is used. In that notation, the Jacobi identity is: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0. That is easily checked by computation. More generally, if A is an associative algebra and V is a subspace of A that is closed under the bracket operation: [x, y] belongs to V for all x, y in V, the Jacobi identity continues to hold on V. Thus, if a binary operation satisfies the Jacobi identity, it may be said that it behaves as if it were given by xy - yx in some associative algebra even if it is not actually defined that way. Using the antis
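The commutator-bracket form of the identity can be checked numerically, as the text says, "by computation" (a sketch with random matrices; the sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def comm(A, B):
    """Commutator [A, B] = AB - BA, the Lie bracket on matrices."""
    return A @ B - B @ A

# Three arbitrary 3x3 matrices from the associative ring of matrices.
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# Jacobi identity: [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0.
lhs = comm(X, comm(Y, Z)) + comm(Y, comm(Z, X)) + comm(Z, comm(X, Y))
assert np.allclose(lhs, 0)
```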
https://en.wikipedia.org/wiki/Chern%E2%80%93Simons%20form
In mathematics, the Chern–Simons forms are certain secondary characteristic classes. The theory is named for Shiing-Shen Chern and James Harris Simons, co-authors of a 1974 paper entitled "Characteristic Forms and Geometric Invariants," from which the theory arose. Definition Given a manifold and a Lie-algebra-valued 1-form over it, we can define a family of p-forms: In one dimension, the Chern–Simons 1-form is given by In three dimensions, the Chern–Simons 3-form is given by In five dimensions, the Chern–Simons 5-form is given by where the curvature F is defined as The general Chern–Simons form is defined in such a way that where the wedge product is used to define Fk. The right-hand side of this equation is proportional to the k-th Chern character of the connection . In general, the Chern–Simons p-form is defined for any odd p. Application to physics In 1978, Albert Schwarz formulated Chern–Simons theory, an early topological quantum field theory, using Chern–Simons forms. In gauge theory, the integral of the Chern–Simons form is a global geometric invariant, and is typically gauge invariant modulo addition of an integer. See also Chern–Weil homomorphism Chiral anomaly Topological quantum field theory Jones polynomial References Further reading Homology theory Algebraic topology Differential geometry String theory
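The formulas elided from the definition above can be reconstructed in the usual conventions (a sketch; normalization factors vary by author):

```latex
% Chern–Simons 1-, 3-, and 5-forms for a Lie-algebra-valued connection 1-form A:
\omega_1 = \operatorname{Tr}[A]
\omega_3 = \operatorname{Tr}\!\left[A \wedge dA + \tfrac{2}{3}\, A \wedge A \wedge A\right]
\omega_5 = \operatorname{Tr}\!\left[A \wedge dA \wedge dA
         + \tfrac{3}{2}\, A \wedge A \wedge A \wedge dA
         + \tfrac{3}{5}\, A \wedge A \wedge A \wedge A \wedge A\right]
% with the curvature defined as
F = dA + A \wedge A,
% and the general form chosen so that (up to normalization)
d\omega_{2k-1} = \operatorname{Tr}\!\left[F^{k}\right].
```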
https://en.wikipedia.org/wiki/Airlock
An airlock is a compartment which permits passage between environments of differing atmospheric pressure or composition while minimizing the mixing of environments or change in pressure in the adjoining spaces. "Airlock" is sometimes written as air-lock or air lock, or abbreviated to just lock. An airlock consists of a chamber with two airtight doors or openings, usually arranged in series, which do not open simultaneously. Airlocks can be small-scale mechanisms, such as those used in fermenting, or larger mechanisms, which often take the form of an antechamber. An airlock may also be used underwater to allow passage between the air environment in a pressure vessel, such as a submarine, and the water environment outside. In such cases the airlock can contain air or water. This is called a floodable airlock or underwater airlock, and is used to prevent water from entering a submersible vessel or underwater habitat. Operation The procedure of entering an airlock, sealing it, equalizing the pressure, and passing through the inner door is known as locking in. Conversely, locking out involves equalizing pressure, unsealing the outer door, then exiting the lock compartment to enter the ambient environment. Locking on and off refer to transfer under pressure where the two chambers are physically connected or disconnected prior to equalizing the pressure and locking in or out. Before opening either door, the air pressure of the airlock chamber is equalized with that of the environment beyond the next door. A gradual pressure transition minimizes air temperature fluctuations, which helps reduce fogging and condensation, decreases stresses on air seals, and allows safe verification of pressure and space suit operation. When a person who is not in a pressure suit moves between environments of greatly different pressures, an airlock changes the pressure slowly to help with internal air cavity equalization and to prevent decompression sickness. This is critical in underwate
https://en.wikipedia.org/wiki/Commutative%20property
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says something like or , the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, ); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics started to become formalized. A similar property exists for binary relations; a binary relation is said to be symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric as two equal mathematical objects are equal regardless of their order. Mathematical definitions A binary operation on a set S is called commutative if In other words, an operation is commutative if every two elements commute. An operation that does not satisfy the above property is called noncommutative. One says that commutes with or that and commute under if That is, a specific pair of elements may commute even if the operation is (strictly) noncommutative. Examples Commutative operations Addition and multiplication are commutative in most number systems, and, in particular, between natural numbers, integers, rational numbers, real numbers and complex numbers. This is also true in every field. Addition is commutative in every vector space and in every algebra. Union and intersection are commutative operations on sets. "And" and "or" are commutative logical operations. Noncommutative operations Some noncommutative binary operations: Division, subtraction, and exponentiat
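A short numerical sketch of the distinctions above (the specific numbers and matrices are illustrative): addition commutes, subtraction does not, matrix multiplication does not, yet a specific pair of matrices may still commute under it:

```python
import numpy as np

# Addition of numbers is commutative ...
assert 3 + 4 == 4 + 3

# ... subtraction is not (3 - 5 != 5 - 3) ...
assert 3 - 5 != 5 - 3

# ... and neither is matrix multiplication in general.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
assert not np.array_equal(A @ B, B @ A)

# Still, a specific pair can commute even under a noncommutative operation:
# every matrix commutes with the identity matrix.
I = np.eye(2, dtype=int)
assert np.array_equal(A @ I, I @ A)
```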
https://en.wikipedia.org/wiki/Code%20Co-op
Code Co-op is the peer-to-peer revision control system made by Reliable Software. Distinguishing features Code Co-op is a distributed revision control system of the replicated type. It uses peer-to-peer architecture to share projects among developers and to control changes to files. Instead of using a centralized database (the repository), it replicates its own database on each computer involved in the project. The replicas are synchronized by the exchange of (differential) scripts. The exchange of scripts may proceed using different transports, including e-mail (support for SMTP and POP3, integration with MAPI clients, Gmail) and LAN. Code Co-op has a built-in peer-to-peer wiki system, which can be used to integrate documentation with a software project. It is also possible to create text-based Wiki databases, which can be queried using simplified SQL directly from wiki pages. Standard features Distributed development support through E-mail, LAN, or VPN Change-based model—modifications to multiple files are checked in as one transaction File additions, deletions, renames, and moves are treated on the same level as edits—they can be added in any combination to a check-in changeset File changes can be reviewed before a check-in using a built-in or user-defined differ Synchronization changes can be reviewed in the same manner by the recipients Three-way visual merge Project history is replicated on each machine. Historical version can be reviewed, compared, or restored Integration with Microsoft SCC clients, including Visual Studio History Code Co-op was one of the first distributed version control systems. It debuted at the 7th Workshop on System Configuration Management in May 1997. The development of Code Co-op started in 1996, when Reliable Software, the distributed software company that makes it, was established. Reliable Software needed a collaboration tool that would work between the United States and Poland. The only dependable and affordable means of c
https://en.wikipedia.org/wiki/Lie%20derivative
In differential geometry, the Lie derivative ( ), named after Sophus Lie by Władysław Ślebodziński, evaluates the change of a tensor field (including scalar functions, vector fields and one-forms), along the flow defined by another vector field. This change is coordinate invariant and therefore the Lie derivative is defined on any differentiable manifold. Functions, tensor fields and forms can be differentiated with respect to a vector field. If T is a tensor field and X is a vector field, then the Lie derivative of T with respect to X is denoted . The differential operator is a derivation of the algebra of tensor fields of the underlying manifold. The Lie derivative commutes with contraction and the exterior derivative on differential forms. Although there are many concepts of taking a derivative in differential geometry, they all agree when the expression being differentiated is a function or scalar field. Thus in this case the word "Lie" is dropped, and one simply speaks of the derivative of a function. The Lie derivative of a vector field Y with respect to another vector field X is known as the "Lie bracket" of X and Y, and is often denoted [X,Y] instead of . The space of vector fields forms a Lie algebra with respect to this Lie bracket. The Lie derivative constitutes an infinite-dimensional Lie algebra representation of this Lie algebra, due to the identity valid for any vector fields X and Y and any tensor field T. Considering vector fields as infinitesimal generators of flows (i.e. one-dimensional groups of diffeomorphisms) on M, the Lie derivative is the differential of the representation of the diffeomorphism group on tensor fields, analogous to Lie algebra representations as infinitesimal representations associated to group representation in Lie group theory. Generalisations exist for spinor fields, fibre bundles with a connection and vector-valued differential forms. Motivation A 'naïve' attempt to define the derivative of a tensor field with
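The identity referred to above (elided in the text) is, in standard notation, the statement that the Lie derivative along the bracket of two vector fields is the commutator of the Lie derivatives:

```latex
\mathcal{L}_{[X,Y]} T \;=\; \mathcal{L}_X \mathcal{L}_Y T \;-\; \mathcal{L}_Y \mathcal{L}_X T
```

valid for any vector fields X and Y and any tensor field T; this is what makes the Lie derivative a Lie algebra representation of the space of vector fields.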
https://en.wikipedia.org/wiki/Indentation%20style
In computer programming, an indentation style is a convention governing the indentation of blocks of code to convey program structure. This article largely addresses the free-form languages, such as C and its descendants, but can be (and often is) applied to most other programming languages (especially those in the curly bracket family), where whitespace is otherwise insignificant. Indentation style is only one aspect of programming style. Indentation is not a requirement of most programming languages, where it is used as secondary notation. Rather, indenting helps better convey the structure of a program to human readers. Especially, it is used to clarify the link between control flow constructs such as conditions or loops, and code contained within and outside of them. However, some languages (such as Python and occam) use indentation to determine the structure instead of using braces or keywords; this is termed the off-side rule. In such languages, indentation is meaningful to the compiler or interpreter; it is more than only a clarity or style issue. This article uses the term brackets to refer to parentheses, and the term braces to refer to curly brackets. Brace placement in compound statements The main difference between indentation styles lies in the placing of the braces of the compound statement ({...}) that often follows a control statement (if, while, for...). The table below shows this placement for the style of statements discussed in this article; function declaration style is another case. The style for brace placement in statements may differ from the style for brace placement of a function definition. For consistency, the indentation depth has been kept constant at 4 spaces, regardless of the preferred indentation depth of each style. Tabs, spaces, and size of indentations The displayed width for tabs can be set to arbitrary values in most programming editors, including Notepad++ (MS-Windows), TextEdit (MacOS/X), Emacs (Unix), vi (Unix), and nan
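The off-side rule mentioned above can be illustrated in Python, where indentation itself determines block structure (a contrived sketch, not taken from the article):

```python
# In an off-side-rule language, moving a line into or out of an indented block
# changes the program's meaning; there are no braces saying otherwise.
def classify(n):
    if n % 2 == 0:
        return "even"   # indented: inside the if-block
    return "odd"        # dedented: reached only when the if-branch did not return

print(classify(2))  # → even
print(classify(3))  # → odd
```

Re-indenting the final `return` one level deeper would make it part of the if-block and change the function's behavior, which is exactly why indentation is meaningful to the interpreter in such languages.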
https://en.wikipedia.org/wiki/IBM%20System/38
The System/38 is a discontinued minicomputer and midrange computer manufactured and sold by IBM. The system was announced in 1978. The System/38 has 48-bit addressing, which was unique for the time, and a novel integrated database system. It was oriented toward a multi-user system environment. At the time, the typical system handled from a dozen to several dozen terminals. Although the System/38 failed to displace the systems it was intended to replace, its architecture served as the basis of the much more successful IBM AS/400. History The System/38 was introduced on October 24, 1978 and delivered in 1980. Developed under the code-name "Pacific", it was made commercially available in August 1979. The system offered a number of innovative features, and was designed by a number of engineers including Frank Soltis and Glenn Henry. The architecture shared many similarities with the design of the failed IBM Future Systems project, including the single-level store, the use of microcode to implement operating system functionality, and the Machine Interface abstraction. It had been developed over eight years by IBM's laboratory in Rochester, Minnesota. The president of IBM's General Systems Division (GSD) said at the time: "The System/38 is the largest program we've ever introduced in GSD and it is one of the top three or four largest programs ever introduced in IBM." The system was designed as a follow-on for the System/3, but it is not compatible with those computers. The predecessors to the System/38 include the System/3 (1969), System/32 (1975), and System/34 (1977). In 1983 the System/36 was released as a low-end business computer for users who found the System/38 too expensive for their needs. The System/38 was succeeded by the IBM AS/400 midrange computer family in 1988, which originally used a processor architecture similar to the System/38, before adopting PowerPC-based processors in 1995. Hardware characteristics The IBM 5381 System Unit contains processor, m
https://en.wikipedia.org/wiki/Data%20strobe%20encoding
Data strobe encoding (or D/S encoding) is an encoding scheme for transmitting data in digital circuits. It uses two signal lines (e.g. wires in a cable or traces on a printed circuit board), Data and Strobe. These have the property that either Data or Strobe changes its logical value in one clock cycle, but never both. More precisely, Data is transmitted as-is, and Strobe changes its state if and only if Data stays constant between two data bits. This allows for easy clock recovery with good jitter tolerance by XORing the two signal line values. There is an equivalent way to specify the relationship between Data and Strobe: for even-numbered Data bits, Strobe is the opposite of Data; for odd-numbered Data bits, Strobe is the same as Data. From this definition it is more obvious that the XOR of Data and Strobe will yield a clock signal. It also specifies the simplest means of generating the Strobe signal for a given Data stream. Data strobe encoding originated in the IEEE 1355 standard and is used on the signal lines in SpaceWire and the IEEE 1394 (also known as FireWire 400) system. Gray code is related: it is another code in which exactly one bit changes between successive values. References Line codes
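The encoding and clock-recovery rules can be sketched in Python, modeling the two signal lines as lists of 0/1 integers (an illustrative model; the function names are assumptions, not from the standard):

```python
# Illustrative model of data strobe (D/S) encoding.
def ds_encode(data):
    """Generate the Strobe line for a Data bit stream: Strobe toggles exactly
    when Data repeats, so exactly one of the two lines changes per bit."""
    strobe, s, prev = [], 0, None
    for d in data:
        if prev is not None and d == prev:
            s ^= 1          # Data unchanged -> Strobe changes
        strobe.append(s)
        prev = d
    return strobe

def recover_clock(data, strobe):
    """XOR of the Data and Strobe line values yields the recovered clock."""
    return [d ^ s for d, s in zip(data, strobe)]

data = [0, 1, 1, 0, 0, 0, 1]
strobe = ds_encode(data)
clock = recover_clock(data, strobe)
print(clock)  # → [0, 1, 0, 1, 0, 1, 0], a clean alternating clock
```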
https://en.wikipedia.org/wiki/Red%20Bull%20Flugtag
Red Bull Flugtag (, 'airshow' ) is an event organized by Red Bull in which competitors attempt to fly home-made, human-powered flying machines, size-limited to around and weight-limited to approximately . The flying machines are usually launched off a pier about high into the sea or body of water. Most competitors enter for the entertainment value, and the flying machines rarely fly at all. Background The format was originally invented in Selsey, a small seaside town in the south of England under the name "Birdman Rally" in 1971. The first Red Bull Flugtag competition was held in 1992 in Vienna, Austria. It was such a success that it has been held every year since and in over 35 cities all over the world. Anyone is eligible to compete in the Flugtag event. To participate, each team must submit an application and their contraption must meet the criteria set forth by Red Bull. The criteria vary with location. In the United States each flying machine must have a maximum wingspan of and a maximum weight (including pilot) of . In Australian Flugtags the wingspan is limited to and the weight (not including pilot) to . The craft must be powered by muscle, gravity, and imagination. Because the aircraft will ultimately end up in the water, it must be unsinkable and constructed entirely of environmentally friendly materials. The aircraft may not have any loose parts and advertising space is limited to . World records Distance The record for the longest flight is 258 feet (78.6 m), set on September 21, 2013, at the Flugtag in Long Beach, California, by "The Chicken Whisperers" team in front of a crowd of 110,000. Attendance The largest crowd was in Cape Town, South Africa with 220,000 attending in 2012. Results Teams that enter the competition are judged according to three criteria: distance, creativity, and showmanship. International Johannesburg, South Africa, 2000 Auckland, New Zealand, 2002 Winning team: Greatest American Hero Winning distance: 22 m Air
https://en.wikipedia.org/wiki/Internet%20forum
An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. They differ from chat rooms in that messages are often longer than one line of text, and are at least temporarily archived. Also, depending on the access level of a user or the forum set-up, a posted message might need to be approved by a moderator before it becomes publicly visible. Forums have a specific set of jargon associated with them; for example, a single conversation is called a "thread", or topic. A discussion forum is hierarchical or tree-like in structure; a forum can contain a number of subforums, each of which may have several topics. Within a forum's topic, each new discussion started is called a thread and can be replied to by as many people as they so wish. Depending on the forum's settings, users can be anonymous or have to register with the forum and then subsequently log in to post messages. On most forums, users do not have to log in to read existing messages. History The modern forum originated from bulletin boards and so-called computer conferencing systems, which are a technological evolution of the dial-up bulletin board system. From a technological standpoint, forums or boards are web applications that manage user-generated content. Early Internet forums could be described as a web version of an electronic mailing list or newsgroup (such as those that exist on Usenet), allowing people to post messages and comment on other messages. Later developments emulated the different newsgroups or individual lists, providing more than one forum dedicated to a particular topic. Internet forums are prevalent in several developed countries. Japan posts the most, with over two million per day on their largest forum, 2channel. China also has millions of posts on forums such as Tianya Club. Some of the first forum systems were the Planet-Forum system, developed at the beginning of the 1970s; the EIES system, first ope
https://en.wikipedia.org/wiki/People%27s%20World
People's World, official successor to the Daily Worker, is a Marxist-Leninist and American leftist national daily online news publication. Founded by activists, socialists, communists, and those active in the labor movement in the early 1900s, the current publication is a result of a merger between the Daily World and the West Coast weekly paper People's Daily World in 1987. History People's World traces its lineage to the Daily Worker newspaper, founded by communists, socialists, union members, and other activists in Chicago in 1924. On the front page of its first edition, the paper declared that "big business interests, bankers, merchant princes, landlords, and other profiteers" should fear the Daily Worker. It pledged to "raise the standards of struggle against the few who rob and plunder the many". The current publication is a result of a merger between the Daily World (formerly known as the Daily Worker) and the West Coast weekly paper People's Daily World. The Daily Worker was a national newspaper first published in 1924. It became known as the Daily World in 1968. The People's Daily World was first launched in 1938. Its founder, Harrison George, started People's Daily World in San Francisco after he raised $33,000 from supporters in California. The paper had 20,000 readers and cost 3 cents. The paper circulated throughout the West Coast. It was completely funded through subscribers. After World War II, many of the editors of People's Daily World were convicted under the Smith Act of "conspiring to violently overthrow the U.S. government". During the 1950s, reporters from the paper were not allowed in the press galleries of various California governing bodies. Circulation was also down in the 1950s, with the paper only having a press run of 5,000 in 1955. In 1957, the paper became a weekly publication. In 1986, the Daily World merged with People's Daily World. Its publisher is Long View Publishing Company. The online newspaper is a member of the Internat
https://en.wikipedia.org/wiki/Cross-correlation%20matrix
The cross-correlation matrix of two random vectors is a matrix containing as elements the cross-correlations of all pairs of elements of the random vectors. The cross-correlation matrix is used in various digital signal processing algorithms. Definition For two random vectors X = (X_1, ..., X_m) and Y = (Y_1, ..., Y_n), each containing random elements whose expected value and variance exist, the cross-correlation matrix of X and Y is defined by R_XY = E[X Y^T] and has dimensions m × n. Written component-wise: (R_XY)_ij = E[X_i Y_j]. The random vectors X and Y need not have the same dimension, and either might be a scalar value. Example For example, if X = (X_1, X_2, X_3) and Y = (Y_1, Y_2) are random vectors, then R_XY is a 3 × 2 matrix whose (i, j)-th entry is E[X_i Y_j]. Complex random vectors If Z and W are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of Z and W is defined by R_ZW = E[Z W^H], where W^H denotes the Hermitian transpose of W. Uncorrelatedness Two random vectors X and Y are called uncorrelated if R_XY = E[X] E[Y]^T. They are uncorrelated if and only if their cross-covariance matrix K_XY is zero. In the case of two complex random vectors Z and W, they are called uncorrelated if R_ZW = E[Z] E[W]^H and E[Z W^T] = E[Z] E[W]^T. Properties Relation to the cross-covariance matrix The cross-correlation is related to the cross-covariance matrix as follows: K_XY = R_XY - E[X] E[Y]^T. Respectively for complex random vectors: K_ZW = R_ZW - E[Z] E[W]^H. See also Autocorrelation Correlation does not imply causation Covariance function Pearson product-moment correlation coefficient Correlation function (astronomy) Correlation function (statistical mechanics) Correlation function (quantum field theory) Mutual information Rate distortion theory Radial distribution function References Further reading Hayes, Monson H., Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005. M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the
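A sample-based sketch of the definition R_XY = E[X Y^T] (the data is synthetic and illustrative): the (i, j) entry estimates E[X_i Y_j], and the matrix shape is dim(X) × dim(Y) even when the two vectors have different dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# N joint samples of a 3-dimensional vector X and a 2-dimensional vector Y,
# where Y is built from the first two components of X plus small noise.
N = 100_000
X = rng.standard_normal((N, 3))
Y = X[:, :2] + 0.1 * rng.standard_normal((N, 2))

# Sample estimate of the cross-correlation matrix R_XY = E[X Y^T].
R = X.T @ Y / N
print(R.shape)  # → (3, 2)
```

With this construction R is close to [[1, 0], [0, 1], [0, 0]], since each Y_j tracks the matching X_j and is independent of the others.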
https://en.wikipedia.org/wiki/Petr%20Beckmann
Petr Beckmann (November 13, 1924 – August 3, 1993) was a professor of electrical engineering who became a well-known advocate of libertarianism and nuclear power. Later in his life he disputed Albert Einstein's theory of relativity and other accepted theories in modern physics. Biography In 1939, when Beckmann was 14, his family fled their home in Prague, Czechoslovakia to escape the Nazis. From 1942 to 1945, he served in a Czech squadron of the Royal Air Force. He worked as a radar mechanic on the newly invented radar systems that helped Britain win the Battle of the Atlantic. He received a B.Sc. in 1949, a Ph.D. in 1955, and a D.Sc. in 1962, all from Prague's Czech Academy of Sciences in electrical engineering. He defected to the United States in 1963 and became a professor (later, emeritus) of electrical engineering at the University of Colorado. In the United States, he became acquainted with novelist Ayn Rand, a contributing editor to a publication devoted to her ideas, The Intellectual Activist, and a speaker at The Thomas Jefferson School, an intellectual conference of similar purpose. Beckmann was a prolific author; he wrote several electrical engineering textbooks and non-technical works. By 1968, he had founded Golem Press, which published most of his books. The Golem Press books included The Health Hazards of Not Going Nuclear (1976), which argued in favor of nuclear power during the height of the anti-nuclear movement by making "apples-to-apples" comparisons of the risks of nuclear power with the risks in the same terms (e.g., deaths per terawatt hour) of the alternative power sources. Beckmann also wrote A History of Pi, documenting the history of the calculation of π. The book also expresses opposition to Roman culture, Catholicism (and other religions), Nazism, and Communism. He published his own monthly newsletter, Access to Energy, which since September 1993 has been written by biochemist Arthur B. Robinson. In 1981, he took early retiremen
https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange%20equation
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange. Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero. In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, it is equivalent to Newton's laws of motion; indeed, the Euler-Lagrange equations will produce the same equations as Newton's Laws. This is particularly useful when analyzing systems whose force vectors are particularly complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field. History The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their
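A symbolic sketch of the claim that the Euler–Lagrange equation reproduces Newton's laws, using SymPy's `euler_equations` helper on a harmonic-oscillator Lagrangian (the Lagrangian is an illustrative choice, not from the article):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)

# Lagrangian of a harmonic oscillator: L = (1/2) m x'^2 - (1/2) k x^2.
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - sp.Rational(1, 2) * k * x**2

# The Euler-Lagrange equation for this L reproduces Newton's law m x'' = -k x
# (up to an overall sign convention in how the equation is arranged).
eqs = euler_equations(L, x, t)
newton = m * sp.diff(x, t, 2) + k * x
assert (sp.simplify(eqs[0].lhs - newton) == 0
        or sp.simplify(eqs[0].lhs + newton) == 0)
```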
https://en.wikipedia.org/wiki/Email%20address
An email address identifies an email box to which messages are delivered. While early messaging systems used a variety of formats for addressing, today, email addresses follow a set of specific rules originally standardized by the Internet Engineering Task Force (IETF) in the 1980s, and updated by . The term email address in this article refers to just the addr-spec in Section 3.4 of RFC 5322. The RFC defines address more broadly as either a mailbox or group. A mailbox value can be either a name-addr, which contains a display-name and addr-spec, or the more common addr-spec alone. An email address, such as john.smith@example.com, is made up from a local-part, the symbol @, and a domain, which may be a domain name or an IP address enclosed in brackets. Although the standard requires the local-part to be case-sensitive, it also urges that receiving hosts deliver messages in a case-independent manner, e.g., that the mail system in the domain example.com treat John.Smith as equivalent to john.smith; some mail systems even treat them as equivalent to johnsmith. Mail systems often limit the users' choice of name to a subset of the technically permitted characters. With the introduction of internationalized domain names, efforts are progressing to permit non-ASCII characters in email addresses. Message transport An email address consists of two parts, a local-part (sometimes a user name, but not always) and a domain; if the domain is a domain name rather than an IP address then the SMTP client uses the domain name to look up the mail exchange IP address. The general format of an email address is local-part@domain, e.g. jsmith@[192.168.1.2], jsmith@example.com. The SMTP client transmits the message to the mail exchange, which may forward it to another mail exchange until it eventually arrives at the host of the recipient's mail system. The transmission of electronic mail from the author's computer and between mail hosts in the Internet uses the Simple Mail Transfer Pr
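The local-part@domain structure described above can be sketched with a minimal splitter (this is an illustrative helper, not an RFC 5322 parser; `split_address` and its behavior are assumptions):

```python
# Split an address into local-part and domain at the last '@'; splitting at the
# last occurrence keeps the sketch safe for quoted local-parts that may contain
# '@' themselves. Full RFC 5322 validation is far more involved than this.
def split_address(addr: str):
    local_part, sep, domain = addr.rpartition("@")
    if not sep or not local_part or not domain:
        raise ValueError(f"not an email address: {addr!r}")
    return local_part, domain

print(split_address("john.smith@example.com"))  # → ('john.smith', 'example.com')
print(split_address("jsmith@[192.168.1.2]"))    # → ('jsmith', '[192.168.1.2]')
```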
https://en.wikipedia.org/wiki/Source%20lines%20of%20code
Source lines of code (SLOC), also known as lines of code (LOC), is a software metric used to measure the size of a computer program by counting the number of lines in the text of the program's source code. SLOC is typically used to predict the amount of effort that will be required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced. Measurement methods Many useful comparisons involve only the order of magnitude of lines of code in a project. Using lines of code to compare a 10,000-line project to a 100,000-line project is far more useful than comparing a 20,000-line project with a 21,000-line project. While it is debatable exactly how to measure lines of code, discrepancies of an order of magnitude can be clear indicators of software complexity or man-hours. There are two major types of SLOC measures: physical SLOC (LOC) and logical SLOC (LLOC). Specific definitions of these two measures vary, but the most common definition of physical SLOC is a count of lines in the text of the program's source code excluding comment lines. Logical SLOC attempts to measure the number of executable "statements", but their specific definitions are tied to specific computer languages (one simple logical SLOC measure for C-like programming languages is the number of statement-terminating semicolons). It is much easier to create tools that measure physical SLOC, and physical SLOC definitions are easier to explain. However, physical SLOC measures are more sensitive to logically irrelevant formatting and style conventions than logical SLOC. Moreover, SLOC measures are often stated without giving their definition, and logical SLOC can often be significantly different from physical SLOC. Consider this snippet of C code as an example of the ambiguity encountered when determining SLOC: for (i = 0; i < 100; i++) printf("hello"); /* How many lines of code is this? */ In this example we have: 1 physical line of code (LOC) and 2 logical lines of code (LLOC): the for statement and the printf statement.
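The physical versus logical SLOC distinction can be sketched in a few lines of Python (not from the article). The physical count here skips blank lines and whole-line comments, and the logical count uses the simple statement-terminating-semicolon heuristic named in the text; both are deliberately crude assumptions, and note that the semicolon heuristic overcounts the C example above, since the for-loop header contributes two extra semicolons.

```python
def physical_sloc(source: str) -> int:
    """Count non-blank lines that are not whole-line comments.

    Crude sketch: trailing comments and multi-line /* ... */ blocks
    are not handled.
    """
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(("//", "/*", "*")):
            count += 1
    return count

def logical_sloc(source: str) -> int:
    # One "statement" per semicolon -- the simple C-like measure
    # mentioned in the text. It counts the two semicolons inside a
    # for-loop header as statements, so it overcounts such lines.
    return source.count(";")

snippet = 'for (i = 0; i < 100; i++) printf("hello"); /* comment */'
print(physical_sloc(snippet))  # 1
print(logical_sloc(snippet))   # 3
```

The divergence between the two results (1 versus 3 for a snippet whose "true" logical count is arguably 2) is exactly the definitional ambiguity the article describes.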
https://en.wikipedia.org/wiki/Salammoniac
Salammoniac, also sal ammoniac or salmiac, is a rare naturally occurring mineral composed of ammonium chloride, NH4Cl. It forms colorless, white, or yellow-brown crystals in the isometric-hexoctahedral class. It has very poor cleavage and is brittle, with a conchoidal fracture. It is quite soft, with a Mohs hardness of 1.5 to 2, and it has a low specific gravity of 1.5. It is water-soluble. Sal ammoniac is also the archaic name for the chemical compound ammonium chloride. History Pliny, in Book XXXI of his Natural History, refers to a salt produced in the Roman province of Cyrenaica named hammoniacum, so called because of its proximity to the nearby Temple of Jupiter Amun (Greek Ἄμμων Ammon). However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name. The first attested reference to sal ammoniac as ammonium chloride is in the Pseudo-Geber work De inventione veritatis, where a preparation of sal ammoniac is given in the chapter De Salis armoniaci præparatione, salis armoniaci being a common name in the Middle Ages for sal ammoniac. It typically occurs as encrustations formed by sublimation around volcanic vents and is found around volcanic fumaroles, guano deposits, and burning coal seams. Associated minerals include sodium alum, native sulfur, and other fumarole minerals. Notable occurrences include Tajikistan; Mount Vesuvius, Italy; and Parícutin, Michoacán, Mexico. Uses It is commonly used to clean the soldering iron in the soldering of stained-glass windows. Metal refining In jewellery-making and the refining of precious metals, potassium carbonate is added to gold and silver in a borax-coated crucible to purify iron or steel filings that may have contaminated the scrap. It is then air-cooled.