https://en.wikipedia.org/wiki/Vienna%20Standard%20Mean%20Ocean%20Water
Vienna Standard Mean Ocean Water (VSMOW) is an isotopic standard for water, that is, a particular sample of water whose proportions of different isotopes of hydrogen and oxygen are accurately known. VSMOW is distilled from ocean water and does not contain salt or other impurities. Published and distributed by the Vienna-based International Atomic Energy Agency in 1968, the standard and its essentially identical successor, VSMOW2, continue to be used as a reference material. Water samples made up of different isotopes of hydrogen and oxygen have slightly different physical properties. As an extreme example, heavy water, which contains two deuterium (²H) atoms instead of the usual, lighter hydrogen-1 (¹H), has a melting point of 3.8 °C and a boiling point of 101.4 °C. Different rates of evaporation cause water samples from different places in the water cycle to contain slightly different ratios of isotopes. Ocean water (richer in heavy isotopes) and rain water (poorer in heavy isotopes) roughly represent the two extremes found on Earth. With VSMOW, the IAEA simultaneously published an analogous standard for rain water, Standard Light Antarctic Precipitation (SLAP), and eventually its successor SLAP2. SLAP contains about 5% less oxygen-18 and 42.8% less deuterium than VSMOW. A scale based on VSMOW and SLAP is used to report oxygen-18 and deuterium concentrations. From 2005 until its redefinition in 2019, the kelvin was specified to be 1/273.16 of the temperature of specifically VSMOW at its triple point.

History and background

Abundances of a particular isotope in a substance are usually given relative to some reference material, as a delta in parts per thousand (‰) from the reference. For example, the ratio of deuterium (²H) to hydrogen-1 in a substance x may be given as δ²H = (R_x/R_ref − 1) × 1000 ‰, where R_x denotes the ratio of the absolute concentrations of ²H and ¹H in x. In 1961, pursuing a standard for measuring and reporting deuterium and oxygen-18 concentrations, Harmon Craig of the Scripps Institution of Oceanography in San Diego
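The delta notation described above can be sketched numerically. The absolute VSMOW D/H ratio used here (about 155.76 ppm) is a commonly cited value and is an assumption for illustration, not taken from this article; the SLAP ratio follows from the "42.8% less deuterium" figure:

```python
def delta_permil(r_sample, r_reference):
    """Delta notation: relative deviation of an isotope ratio
    from a reference ratio, in parts per thousand (per mil)."""
    return (r_sample / r_reference - 1.0) * 1000.0

# VSMOW D/H ratio: a commonly cited value, assumed here for illustration.
R_VSMOW = 155.76e-6
# SLAP contains about 42.8% less deuterium than VSMOW.
R_SLAP = R_VSMOW * (1 - 0.428)

print(delta_permil(R_SLAP, R_VSMOW))   # -428.0 (per mil, relative to VSMOW)
print(delta_permil(R_VSMOW, R_VSMOW))  # 0.0 (VSMOW is its own reference)
```

By construction, VSMOW itself sits at 0 ‰ on this scale and SLAP at about −428 ‰, which is how the VSMOW/SLAP pair anchors reported deuterium concentrations.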
https://en.wikipedia.org/wiki/WinCustomize
WinCustomize is a website that provides content for users to customize Microsoft Windows. The site hosts thousands of skins, themes, icons, wallpapers, and other graphical content to modify the Windows graphical user interface. There is some premium (paid) content; however, the vast majority of the content is free for users to download.

Site history

WinCustomize was launched in March 2001 by Brad Wardell and Pat Ford, both of whom work at Stardock. After the dot-com recession had taken down many popular skin sites, WinCustomize quickly grew in popularity due to a combination of a wide variety of content, uptime reliability, and being the preferred content destination for Stardock customers. The site has grown at a far greater pace than its founders anticipated. It has managed to avoid placing many limitations on users or resorting to pop-up advertising because its corporate patron, Stardock, subsidizes its costs. This growth has prompted several site redesigns to offer improved functionality and reliability to users. Since launch, WinCustomize has undergone several iterations:

WinCustomize 2k5 — Launched at the end of 2004, WinCustomize was redesigned for improved stability and added functionality, such as personal pages for subscribers, an articles system, tutorials, etc.
WinCustomize 2k7 — Launched January 15, 2007, WC2k7 was a fundamental rewrite using ASP.NET. The focus was to build a foundation that was easier to maintain and, in the future, expand.
WinCustomize v6 — Planned for late 2008/early 2009, the WC v6 project aims to be a major revision to how users navigate and interact with the site and the community as a whole. Where 2k7 focused on the core codebase, v6 focuses on the user interface and experience. In July 2007 the WinCustomize Wiki was launched.
WinCustomize 2010 — WinCustomize 2010 was launched on April 20, 2010. This revision represents a major change in the site's look and navigation for users. A guided tour o
https://en.wikipedia.org/wiki/Breather
In physics, a breather is a nonlinear wave in which energy concentrates in a localized and oscillatory fashion. This contradicts the expectations derived from the corresponding linear system for infinitesimal amplitudes, which tends towards an even distribution of initially localized energy. A discrete breather is a breather solution on a nonlinear lattice. The term breather originates from the characteristic that most breathers are localized in space and oscillate (breathe) in time. The opposite situation, oscillation in space with localization in time, is also denoted as a breather.

Overview

A breather is a localized periodic solution of either continuous media equations or discrete lattice equations. The exactly solvable sine-Gordon equation and the focusing nonlinear Schrödinger equation are examples of one-dimensional partial differential equations that possess breather solutions. Discrete nonlinear Hamiltonian lattices in many cases support breather solutions. Breathers are solitonic structures. There are two types of breathers: standing and traveling ones. Standing breathers correspond to localized solutions whose amplitude varies in time (they are sometimes called oscillons). A necessary condition for the existence of breathers in discrete lattices is that the breather main frequency and all its multiples are located outside of the phonon spectrum of the lattice.

Example of a breather solution for the sine-Gordon equation

The sine-Gordon equation is the nonlinear dispersive partial differential equation

∂²u/∂t² − ∂²u/∂x² + sin u = 0,

with the field u a function of the spatial coordinate x and time t. An exact solution found by using the inverse scattering transform is

u = 4 arctan( √(1 − ω²) cos(ωt) / (ω cosh(√(1 − ω²) x)) ),

which, for ω < 1, is periodic in time t and decays exponentially when moving away from x = 0.

Example of a breather solution for the nonlinear Schrödinger equation

The focusing nonlinear Schrödinger equation is the dispersive partial differential equation

i ∂u/∂t + ∂²u/∂x² + 2|u|² u = 0,

with u a complex field as a function of x and t. Fu
https://en.wikipedia.org/wiki/Achard%E2%80%93Thiers%20syndrome
Achard–Thiers syndrome (also known as diabetic-bearded woman syndrome) is a rare disorder mainly occurring in postmenopausal women. It is characterized by type II diabetes mellitus and signs related to the overproduction of androgens. The disease is named for Emile Achard and Joseph Thiers.

Presentation

Achard–Thiers syndrome affects mostly postmenopausal women and comprises diabetes mellitus, deep voice, hirsutism, clitoral hypertrophy, and adrenal cortical hyperplasia or adenoma. Patients often also have amenorrhoea, hypertension, and osteoporosis.

Diagnosis

Treatment
https://en.wikipedia.org/wiki/WPSG
WPSG (channel 57), branded on-air as Philly 57, is an independent television station in Philadelphia, Pennsylvania, United States. It is owned by the CBS News and Stations group alongside CBS station KYW-TV (channel 3). Both stations share studios on Hamilton Street north of Center City Philadelphia, while WPSG's transmitter is located in the city's Roxborough section.

Channel 57 was allocated for commercial use in Philadelphia at the start of the 1970s; it was fought over by two groups who sought to broadcast subscription television (STV) programming to paying customers in the metropolitan area. Radio Broadcasting Company prevailed and launched WWSG-TV on June 15, 1981. It offered limited financial news programming, which was abandoned after 18 months, and a subscription service utilizing programming from SelecTV. Two years later, the station switched to broadcasting PRISM, a premium regional sports and movies service seeking to reach potential subscribers in areas beyond cable coverage, such as the city of Philadelphia. The Grant Broadcasting System acquired the station and relaunched it in 1985 as general-entertainment independent WGBS-TV, known on air as "Philly 57". The new owners spent millions of dollars on programming and the rights to Philadelphia Flyers hockey and Villanova Wildcats basketball; the station filled the void for a third independent station left when WKBS-TV (channel 48) folded in 1983, and its entrance into the market cut short multiple separate efforts to establish such a station. However, Grant's strategy of building "full-grown" independents with expensive acquisitions drove the company into bankruptcy in December 1986. Grant's three stations were assumed by a consortium of creditors and bondholders known as Combined Broadcasting; management was controlled from Philadelphia. Combined Broadcasting solicited offers on its stations in 1993; a deal was reached to sell to the Fox network, but an objection caused the sale to be delayed and canceled. In 1995, Pa
https://en.wikipedia.org/wiki/NetWare%20File%20System
In computing, the NetWare File System (NWFS) is a file system based on a heavily modified version of FAT. It was used in the Novell NetWare operating system. It is the default and only file system for all volumes in versions 2.x through 4.x, and the default and only file system for the SYS volume continuing through version 5.x. Novell developed two varieties of NWFS:

16-bit NWFS 286, used in NetWare 2.x
32-bit NWFS 386, used in NetWare 3.x through NetWare 6.x

Novell Storage Services (NSS, released in 1998) superseded the NWFS format. The NWFS on-disk format was never publicly documented by Novell. The published specifications for 32-bit NWFS are:

Maximum file size: 4 GB
Maximum volume size: 1 TB
Maximum files per volume: 2 million when using a single name space
Maximum files per server: 16 million
Maximum directory entries: 16 million
Maximum volumes per server: 64
Maximum volumes per partition: 8
Maximum open files per server: 100,000
Maximum directory tree depth: 100 levels
Characters used: ASCII double-byte
Maximum extended attributes: 512
Maximum data streams: 10
Support for different name spaces: Microsoft Windows long names (a.k.a. the OS/2 name space), Unix, Apple Macintosh
Support for restoring deleted files (salvage)
Support for journaling (Novell Transaction Tracking System, a.k.a. TTS)
Support for block suballocation, starting in NetWare 4.x

For larger files the file system utilized a performance feature named Turbo FAT. Transparent file compression was also supported, although this had a significant impact on the performance of file serving. Every name space requires its own separate directory entry for each file. While the maximum number of directory entries is 16,000,000, two resident name spaces reduce the usable maximum to 8,000,000 files, and three to 5,333,333. 16-bit NWFS could handle volumes of up to 256 MB. However, its only name-space support was a dedicated API to handle Macintosh clients. See a
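The name-space arithmetic above (each resident name space consumes one of the 16 million directory entries per file) can be sketched directly:

```python
MAX_DIRECTORY_ENTRIES = 16_000_000

def usable_files(name_spaces):
    """Each resident name space needs its own directory entry per file,
    so the usable number of files divides by the name-space count."""
    return MAX_DIRECTORY_ENTRIES // name_spaces

print(usable_files(1))  # 16000000
print(usable_files(2))  # 8000000
print(usable_files(3))  # 5333333
```

These match the figures quoted in the article (16,000,000, 8,000,000, and 5,333,333 files).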
https://en.wikipedia.org/wiki/Terminal%20aerodrome%20forecast
In meteorology and aviation, a terminal aerodrome forecast (TAF) is a format for reporting weather forecast information, particularly as it relates to aviation. TAFs are issued at least four times a day, every six hours, for major civil airfields (at 0000, 0600, 1200 and 1800 UTC), and generally apply to a 24- or 30-hour period and an area within approximately five statute miles (a different distance applies in Canada) of the center of an airport runway complex. TAFs are issued every three hours for military airfields and some civil airfields and cover a period ranging from 3 hours to 30 hours. TAFs complement and use similar encoding to METAR reports. They are produced by a human forecaster based on the ground. For this reason, there are considerably fewer TAF locations than there are airports for which METARs are available. TAFs can be more accurate than numerical weather forecasts, since they take into account local, small-scale, geographic effects.

In the United States, the weather forecasters responsible for the TAFs in their respective areas are located within one of the 122 Weather Forecast Offices operated by the National Weather Service. In contrast, a trend type forecast (TTF), which is similar to a TAF, is always produced by a person on-site where the TTF applies. In the United Kingdom, most TAFs for military airfields are produced locally; however, TAFs for civil airfields are produced at the Met Office headquarters in Exeter. The United States Air Force employs active-duty enlisted personnel as TAF writers. Air Force weather personnel are responsible for providing weather support for all Air Force and Army operations. Different countries use different change criteria for their weather groups. In the United Kingdom, TAFs for military airfields use colour states as one of the change criteria. Civil airfields in the UK use slightly different criteria.

Code

This example of a 30-hour TAF was released on November 5, 2008, at 1730 UTC:

TAF KXYZ 051730Z 0518/0624 31008KT 3SM -SHRA
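A minimal sketch of decoding the leading groups of the sample TAF above; the field layout (station, issue day/time, validity period, wind) follows the standard TAF code form, and this parse deliberately covers only the header groups, not the full report:

```python
import re

# Header of the 30-hour sample TAF quoted in the text.
taf = "TAF KXYZ 051730Z 0518/0624 31008KT 3SM -SHRA"

m = re.match(
    r"TAF (?P<station>\w{4}) "          # ICAO station identifier
    r"(?P<day>\d{2})(?P<time>\d{4})Z "  # issue day-of-month and UTC time
    r"(?P<from>\d{4})/(?P<to>\d{4}) "   # validity period (day+hour pairs)
    r"(?P<wdir>\d{3})(?P<wspd>\d{2})KT",  # wind direction and speed
    taf,
)
print(m["station"], "issued day", m["day"], "at", m["time"], "UTC")
print("valid", m["from"], "through", m["to"])
print("wind", m["wdir"], "degrees at", m["wspd"], "knots")
```

For the sample, this yields station KXYZ, issued on the 5th at 1730 UTC, valid from the 5th at 18 UTC through the 6th at 24 UTC, with wind from 310 degrees at 8 knots.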
https://en.wikipedia.org/wiki/Hans%20Reichenbach
Hans Reichenbach (September 26, 1891 – April 9, 1953) was a leading philosopher of science, educator, and proponent of logical empiricism. He founded the Gesellschaft für empirische Philosophie (Society for Empirical Philosophy) in Berlin in 1928, also known as the "Berlin Circle". Carl Gustav Hempel, Richard von Mises, David Hilbert and Kurt Grelling all became members of the Berlin Circle. In 1930, Reichenbach and Rudolf Carnap became editors of the journal Erkenntnis. He also made lasting contributions to the study of empiricism based on a theory of probability; the logic and the philosophy of mathematics; space, time, and relativity theory; the analysis of probabilistic reasoning; and quantum mechanics. In 1951, he authored The Rise of Scientific Philosophy, his most popular book.

Early life

Hans was the second son of a Jewish merchant, Bruno Reichenbach, who had converted to Protestantism. Bruno married Selma Menzel, a school mistress who came from a long line of Protestant professionals going back to the Reformation. Hans's elder brother Bernard played a significant role in the left communist movement. His younger brother, Herman, was a music educator. After completing secondary school in Hamburg, Hans Reichenbach studied civil engineering at the Hochschule für Technik Stuttgart, and physics, mathematics and philosophy at various universities, including Berlin, Erlangen, Göttingen and Munich. Among his teachers were Ernst Cassirer, David Hilbert, Max Planck, Max Born and Arnold Sommerfeld.

Political activism

Reichenbach was active in youth movements and student organizations. He joined the Freistudentenschaft in 1910. He attended the founding conference of the Freideutsche Jugend umbrella group at Hoher Meissner in 1913. He published articles about university reform, the freedom of research, and against anti-Semitic infiltration of student organizations. His older brother Be
https://en.wikipedia.org/wiki/C.mmp
The C.mmp was an early multiple instruction, multiple data (MIMD) multiprocessor system developed at Carnegie Mellon University (CMU) by William Wulf (1971). The notation C.mmp came from the PMS notation of Gordon Bell and Allen Newell, where a central processing unit (CPU) was designated as C, a variant was noted by the dot notation, and mmp stood for Multi-Mini-Processor. The machine is on display at CMU, in Wean Hall, on the ninth floor.

Structure

Sixteen Digital Equipment Corporation PDP-11 minicomputers were used as the processing elements, named Compute Modules (CMs) in the system. Each CM had a local memory of 8K and a local set of peripheral devices. One of the challenges was that a device was only available through its unique connected processor, so the input/output (I/O) system (designed by Roy Levien) hid the connectivity of the devices and routed requests to the hosting processor. If a processor went down, the devices connected to its Unibus became unavailable, which became a problem in overall system reliability. Processor 0 (the boot processor) had the disk drives attached. Each of the Compute Modules shared these communication pathways:

An interprocessor bus, used to distribute system-wide clock, interrupt, and process-control messaging among the CMs
A 16×16 crossbar switch, used to connect the 16 CMs on one side and 16 banks of shared memory on the other. If all 16 processors were accessing different banks of memory, the memory accesses would all be concurrent. If two or more processors were trying to access the same bank of memory, one of them would be granted access on one cycle and the remainder would be negotiated on subsequent memory cycles.

Since the PDP-11 had a logical address space of 16 bits, another address translation unit was added to expand the address space to 25 bits for the shared memory space. The Unibus architecture provided 18 bits of physical address, and the two high-order bits were used to select one of four reloc
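The bank-selection idea described above can be sketched as follows. The exact C.mmp register format is not given in the text, so this is a hypothetical illustration: the two high-order bits of an 18-bit Unibus address index one of four relocation registers, whose contents supply the high-order bits of a 25-bit shared-memory address; the register values below are arbitrary example data:

```python
# Hypothetical relocation-register contents (9-bit values, arbitrary).
RELOCATION_REGISTERS = [0x000, 0x041, 0x082, 0x0C3]

def translate(addr18):
    """Map an 18-bit Unibus address to a 25-bit shared-memory address
    by replacing its two high-order bits with a 9-bit register value."""
    bank = (addr18 >> 16) & 0b11        # two high-order bits select a register
    offset = addr18 & 0xFFFF            # low 16 bits pass through unchanged
    return (RELOCATION_REGISTERS[bank] << 16) | offset

# Bank 2, offset 3: the result carries register 2's bits above the offset.
print(hex(translate(0b10_0000000000000011)))
```

The point of the sketch is only the mechanism: a small register file lets a 16/18-bit processor address a much larger (here 25-bit) shared space one bank at a time.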
https://en.wikipedia.org/wiki/Population%20size
In population genetics and population ecology, population size (usually denoted N) is a countable quantity representing the number of individual organisms in a population. Population size is directly associated with the amount of genetic drift, and is the underlying cause of effects like population bottlenecks and the founder effect. Genetic drift is the major source of decrease of genetic diversity within populations; it drives fixation and can potentially lead to speciation events.

Genetic drift

Of the five conditions required to maintain Hardy–Weinberg equilibrium, infinite population size will always be violated; this means that some degree of genetic drift is always occurring. Smaller population size leads to increased genetic drift; it has been hypothesized that this gives such groups an evolutionary advantage for the acquisition of genome complexity. An alternate hypothesis posits that while genetic drift plays a larger role in small populations developing complexity, selection is the mechanism by which large populations develop complexity.

Population bottlenecks and founder effect

Population bottlenecks occur when population size reduces for a short period of time, decreasing the genetic diversity in the population. The founder effect, originally outlined by Ernst Mayr, occurs when a few individuals from a larger population establish a new population, and it also decreases genetic diversity. The founder effect is a special case of genetic drift, as the smaller founding population has decreased genetic diversity that will move alleles within the population more rapidly towards fixation.

Modeling genetic drift

Genetic drift is typically modeled in lab environments using bacterial populations or digital simulation. In digital organisms, a generated population undergoes evolution based on varying parameters, including differential fitness, variation, and heredity set for individual organisms. Rozen et al. use separate bacterial strains on two different me
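The dependence of drift on population size can be sketched with a minimal Wright-Fisher simulation, a standard neutral-drift model used here as an illustration (all parameter values are arbitrary):

```python
import random

def wright_fisher(n, p=0.5, generations=200, seed=1):
    """Minimal Wright-Fisher sketch of neutral genetic drift: each
    generation, 2n allele copies are resampled from the current
    allele frequency p, until the allele is lost or fixed."""
    rng = random.Random(seed)
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * n))
        p = copies / (2 * n)
        if p in (0.0, 1.0):  # allele lost (0.0) or fixed (1.0)
            break
    return p

# Smaller populations typically reach fixation or loss much sooner,
# while large populations stay near the starting frequency for longer.
print("N=10:  ", wright_fisher(10))
print("N=1000:", wright_fisher(1000))
```

Running many replicates shows the small population absorbed (frequency 0 or 1) far more often within the same number of generations, which is the drift effect the text describes.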
https://en.wikipedia.org/wiki/Conventionally%20grown
Conventionally grown is an agriculture term referring to a method of growing edible plants (such as fruit and vegetables) and other products. It is the opposite of organic growing methods, which attempt to produce without synthetic chemicals (fertilizers, pesticides, antibiotics, hormones) or genetically modified organisms. Conventionally grown products, meanwhile, often use fertilizers and pesticides, which allow for higher yield, out-of-season growth, greater resistance, greater longevity, and generally greater mass.

Conventionally grown fruit: the PLU code consists of 4 digits (e.g. 4012).
Organically grown fruit: the PLU code consists of 5 digits and begins with 9 (e.g. 94012).
Genetically engineered fruit: the PLU code consists of 5 digits and begins with 8 (e.g. 84012).
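The PLU rules above amount to a simple length-and-prefix check, sketched here:

```python
def classify_plu(code):
    """Classify produce by PLU code: 4 digits = conventional,
    5 digits with a leading 9 = organic, leading 8 = genetically
    engineered (per the labeling rules described above)."""
    s = str(code)
    if len(s) == 4:
        return "conventionally grown"
    if len(s) == 5 and s[0] == "9":
        return "organically grown"
    if len(s) == 5 and s[0] == "8":
        return "genetically engineered"
    return "unknown"

print(classify_plu(4012))   # conventionally grown
print(classify_plu(94012))  # organically grown
print(classify_plu(84012))  # genetically engineered
```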
https://en.wikipedia.org/wiki/Subjunctive%20possibility
Subjunctive possibility (also called alethic possibility) is a form of modality studied in modal logic. Subjunctive possibilities are the sorts of possibilities considered when conceiving counterfactual situations; subjunctive modalities are modalities that bear on whether a statement might have been or could be true—such as might, could, must, possibly, necessarily, contingently, essentially, accidentally, and so on. Subjunctive possibilities include logical possibility, metaphysical possibility, nomological possibility, and temporal possibility. Subjunctive possibility and other modalities Subjunctive possibility is contrasted with (among other things) epistemic possibility (which deals with how the world may be, for all we know) and deontic possibility (which deals with how the world ought to be). Epistemic possibility The contrast with epistemic possibility is especially important to draw, since in ordinary language the same phrases ("it's possible," "it can't be", "it must be") are often used to express either sort of possibility. But they are not the same. We do not know whether Goldbach's conjecture is true or not (no-one has come up with a proof yet); so it is (epistemically) possible that it is true and it is (epistemically) possible that it is false. But if it is, in fact, provably true (as it may be, for all we know), then it would have to be (subjunctively) necessarily true; what being provable means is that it would not be (logically) possible for it to be false. Similarly, it might not be at all (epistemically) possible that it is raining outside—we might know beyond a shadow of a doubt that it is not—but that would hardly mean that it is (subjunctively) impossible for it to rain outside. This point is also made by Norman Swartz and Raymond Bradley. Deontic possibility There is some overlap in language between subjunctive possibilities and deontic possibilities: for example, we sometimes use the statement "You can/cannot do that" to express (i) w
https://en.wikipedia.org/wiki/Injective%20sheaf
In mathematics, injective sheaves of abelian groups are used to construct the resolutions needed to define sheaf cohomology (and other derived functors, such as sheaf Ext). There is a further group of related concepts applied to sheaves: flabby (flasque in French), fine, soft (mou in French), acyclic. In the history of the subject they were introduced before the 1957 "Tohoku paper" of Alexander Grothendieck, which showed that the abelian category notion of injective object sufficed to found the theory. The other classes of sheaves are historically older notions. The abstract framework for defining cohomology and derived functors does not need them. However, in most concrete situations, resolutions by acyclic sheaves are often easier to construct. Acyclic sheaves therefore serve for computational purposes, for example in the Leray spectral sequence.

Injective sheaves

An injective sheaf J is a sheaf that is an injective object of the category of abelian sheaves; in other words, homomorphisms from a sheaf A to J can always be extended to any sheaf B containing A. The category of abelian sheaves has enough injective objects: this means that any sheaf is a subsheaf of an injective sheaf. This result of Grothendieck follows from the existence of a generator of the category (it can be written down explicitly, and is related to the subobject classifier). This is enough to show that right derived functors of any left exact functor exist and are unique up to canonical isomorphism. For technical purposes, injective sheaves are usually superior to the other classes of sheaves mentioned above: they can do almost anything the other classes can do, and their theory is simpler and more general. In fact, injective sheaves are flabby (flasque), soft, and acyclic. However, there are situations where the other classes of sheaves occur naturally, and this is especially true in concrete computational situations. The dual concept, projective sheaves, is not used much, because in a general category
https://en.wikipedia.org/wiki/Joseph%20L.%20Doob
Joseph Leo Doob (February 27, 1910 – June 7, 2004) was an American mathematician, specializing in analysis and probability theory. The theory of martingales was developed by Doob. Early life and education Doob was born in Cincinnati, Ohio, February 27, 1910, the son of a Jewish couple, Leo Doob and Mollie Doerfler Doob. The family moved to New York City before he was three years old. The parents felt that he was underachieving in grade school and placed him in the Ethical Culture School, from which he graduated in 1926. He then went on to Harvard where he received a BA in 1930, an MA in 1931, and a PhD (Boundary Values of Analytic Functions, advisor Joseph L. Walsh) in 1932. After postdoctoral research at Columbia and Princeton, he joined the department of mathematics of the University of Illinois in 1935 and served until his retirement in 1978. He was a member of the Urbana campus's Center for Advanced Study from its beginning in 1959. During the Second World War, he worked in Washington, D.C., and Guam as a civilian consultant to the Navy from 1942 to 1945; he was at the Institute for Advanced Study for the academic year 1941–1942 when Oswald Veblen approached him to work on mine warfare for the Navy. Work Doob's thesis was on boundary values of analytic functions. He published two papers based on this thesis, which appeared in 1932 and 1933 in the Transactions of the American Mathematical Society. Doob returned to this subject many years later when he proved a probabilistic version of Fatou's boundary limit theorem for harmonic functions. The Great Depression of 1929 was still going strong in the thirties and Doob could not find a job. B.O. Koopman at Columbia University suggested that statistician Harold Hotelling might have a grant that would permit Doob to work with him. Hotelling did, so the Depression led Doob to probability. In 1933 Kolmogorov provided the first axiomatic foundation for the theory of probability. Thus a subject that had originated fro
https://en.wikipedia.org/wiki/Identity%20by%20type
Alleles have identity by type (IBT) when they have the same phenotypic effect or, if applied to a variation in the composition of DNA such as a single nucleotide polymorphism, when they have the same DNA sequence. Alleles that are identical by type fall into two groups: those that are identical by descent (IBD), because they arose from the same allele in an earlier generation, and those that are non-identical by descent (NIBD), because they arose from separate mutations. NIBD alleles can nevertheless be identical by state (IBS) if they share the same mutational expression but not through a recent common ancestor. Parent-offspring pairs share 50% of their genes IBD, and monozygotic twins share 100% IBD.

See also

Population genetics

External links

https://web.archive.org/web/20060309055031/http://darwin.eeb.uconn.edu/eeb348/lecture-notes/identity.pdf
http://zwets.com/pedkin/thompson.pdf
https://en.wikipedia.org/wiki/Identity%20by%20descent
A DNA segment is identical by state (IBS) in two or more individuals if they have identical nucleotide sequences in this segment. An IBS segment is identical by descent (IBD) in two or more individuals if they have inherited it from a common ancestor without recombination, that is, the segment has the same ancestral origin in these individuals. DNA segments that are IBD are IBS by definition, but segments that are not IBD can still be IBS due to the same mutations in different individuals or recombinations that do not alter the segment.

Theory

All individuals in a finite population are related if traced back long enough and will, therefore, share segments of their genomes IBD. During meiosis, segments of IBD are broken up by recombination. Therefore, the expected length of an IBD segment depends on the number of generations since the most recent common ancestor at the locus of the segment. The length of IBD segments that result from a common ancestor n generations in the past (therefore involving 2n meioses) is exponentially distributed with mean 1/(2n) Morgans (M). The expected number of IBD segments decreases with the number of generations since the common ancestor at this locus. For a specific DNA segment, the probability of being IBD decreases as 2^(−2n), since in each meiosis the probability of transmitting this segment is 1/2.

Applications

Identified IBD segments can be used for a wide range of purposes. As noted above, the amount (length and number) of IBD sharing depends on the familial relationships between the tested individuals. Therefore, one application of IBD segment detection is to quantify relatedness. Measurement of relatedness can be used in forensic genetics, but can also increase information in genetic linkage mapping and help to decrease bias by undocumented relationships in standard association studies. Another application of IBD is genotype imputation and haplotype phase inference. Long shared segments of IBD, which are broken up by short regio
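The exponential length model above (mean 1/(2n) Morgans for a common ancestor n generations back) can be sketched and checked by Monte Carlo; the number of draws and the seed are arbitrary illustration choices:

```python
import random

def expected_ibd_length_cM(n):
    """Mean IBD segment length for a common ancestor n generations
    back: exponential with mean 1/(2n) Morgan = 100/(2n) centimorgan."""
    return 100.0 / (2 * n)

def simulated_mean_length_cM(n, draws=200_000, seed=0):
    # Monte-Carlo check: draw segment lengths from an exponential
    # distribution with rate 2n (in Morgans), report the mean in cM.
    rng = random.Random(seed)
    return sum(rng.expovariate(2 * n) for _ in range(draws)) * 100.0 / draws

print(expected_ibd_length_cM(5))                # 10.0 cM at 5 generations
print(round(simulated_mean_length_cM(5), 1))    # close to 10 cM
```

The simulated mean agrees with the closed form, and doubling n halves the expected segment length, which is why distant relatives share many short segments rather than few long ones.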
https://en.wikipedia.org/wiki/Mother%20Nature
Mother Nature (sometimes known as Mother Earth or the Earth Mother) is a personification of nature that focuses on the life-giving and nurturing aspects of nature by embodying it, in the form of the mother. European tradition history The word "nature" comes from the Latin word, "natura", meaning birth or character [see nature (philosophy)]. In English, its first recorded use (in the sense of the entirety of the phenomena of the world) was in 1266. "Natura" and the personification of Mother Nature were widely popular in the Middle Ages. As a concept, seated between the properly divine and the human, it can be traced to Ancient Greece, though Earth (or "Eorthe" in the Old English period) may have been personified as a goddess. The Norse also had a goddess called Jörð (Jord, or Erth). The earliest written usage is in Mycenaean Greek: Ma-ka (transliterated as ma-ga), "Mother Gaia", written in Linear B syllabic script (13th or 12th century BC). In Greece, the pre-Socratic philosophers had "invented" nature when they abstracted the entirety of phenomena of the world as singular: physis, and this was inherited by Aristotle. Later medieval Christian thinkers did not see nature as inclusive of everything, but thought that she had been created by God; her place lay on earth, below the unchanging heavens and moon. Nature lay somewhere in the center, with agents above her (angels), and below her (demons and hell). For the medieval mind she was only a personification, not a goddess. Greek myth In Greek mythology, Persephone, daughter of Demeter (goddess of the harvest), was abducted by Hades (god of the dead), and taken to the underworld as his queen. Demeter was so distraught that no crops would grow and the "entire human race [would] have perished of cruel, biting hunger if Zeus had not been concerned" (Larousse 152). Zeus forced Hades to return Persephone to her mother, but while in the underworld, Persephone had eaten pomegranate seeds, the food of the dead and thus,
https://en.wikipedia.org/wiki/Menuconfig
make menuconfig is one of five similar tools that can configure Linux source, a necessary early step needed to compile the source code. make menuconfig, with a menu-driven user interface, allows the user to choose the features of Linux (and other options) that will be compiled. It is normally invoked using the command make menuconfig; menuconfig is a target in the Linux Makefile.

History

make menuconfig was not in the first version of Linux. The predecessor tool is a question-and-answer-based utility (make config, make oldconfig). A third tool for Linux configuration is make xconfig, which requires Qt. There is also make gconfig, which uses GTK+, and make nconfig, which is similar to make menuconfig. All these tools use the Kconfig language internally. Kconfig is also used in other projects, such as Das U-Boot, a bootloader for embedded devices; Buildroot, a tool for generating embedded Linux systems; and BusyBox, a single-executable shell utility toolbox for embedded systems.

Advantages over earlier versions

Despite its simple design, make menuconfig offers considerable advantages over the question-and-answer-based configuration tool make oldconfig, the most notable being a basic search system and the ability to load and save files with filenames different from ".config". make menuconfig gives the user the ability to navigate forwards or backwards directly between features, rather than stepping linearly through the prompt for every feature, as make config requires. If the user is satisfied with a previous .config file, using make oldconfig uses this previous file to answer all questions that it can, only interactively presenting the new features. This is intended for a version upgrade, but may be appropriate at other times. make menuconfig is a light load on system resources, unlike make xconfig (which uses Qt as of version 2.6.31.1, formerly Tk) or make gconfig, which utilizes GTK+. It's possible to ignore most of the features with make config,
https://en.wikipedia.org/wiki/Hex%20dump
In computing, a hex dump is a textual hexadecimal view (on screen or paper) of (often, but not necessarily, binary) computer data, from memory or from a computer file or storage device. Looking at a hex dump of data is usually done in the context of either debugging, reverse engineering or digital forensics. In a hex dump, each byte (8 bits) is represented as a two-digit hexadecimal number. Hex dumps are commonly organized into rows of 8 or 16 bytes, sometimes separated by whitespace. Some hex dumps have the hexadecimal memory address at the beginning. Some common names for this program function are hexdump, hd, od, xxd and simply dump or even D.

Samples

A sample text file:

0123456789ABCDEF
/* ********************************************** */
	Table with TABs (09)
	1		2		3
	3.14	6.28	9.42

as displayed by Unix hexdump:

0000000 30 31 32 33 34 35 36 37 38 39 41 42 43 44 45 46
0000010 0a 2f 2a 20 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a
0000020 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a
*
0000040 2a 2a 20 2a 2f 0a 09 54 61 62 6c 65 20 77 69 74
0000050 68 20 54 41 42 73 20 28 30 39 29 0a 09 31 09 09
0000060 32 09 09 33 0a 09 33 2e 31 34 09 36 2e 32 38 09
0000070 39 2e 34 32 0a
0000075

The leftmost column is the hexadecimal displacement (or address) for the values of the following columns. Each row displays 16 bytes, with the exception of the row containing a single *. The * is used to indicate multiple occurrences of the same display were omitted. The last line displays the number of bytes taken from the input. An additional column shows the corresponding ASCII character translation with or :

00000000  30 31 32 33 34 35 36 37 38 39 41 42 43 44 45 46  |0123456789ABCDEF|
00000010  0a 2f 2a 20 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a  |./* ************|
00000020  2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a  |****************|
*
00000040  2a 2a 20 2a 2f 0a 09 54 61 62 6c 65 20 77 69 74  |** */..Table wit|
00000050  68 2
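The row layout described above (offset column, hex byte columns, ASCII translation column) is simple to reproduce; a minimal sketch in Python, mimicking the hexdump -C style, is:

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex columns and an ASCII translation column."""
    rows = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII is shown as-is; everything else becomes '.'
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        rows.append(f"{offset:08x}  {hex_part:<{width * 3 - 1}}  |{text}|")
    return "\n".join(rows)

print(hex_dump(b"0123456789ABCDEF\n/* **"))
```

Unlike the real utilities, this sketch does not collapse repeated rows into a single *, but the per-row format matches the samples above.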
https://en.wikipedia.org/wiki/Metaplasia
Metaplasia () is the transformation of one differentiated cell type to another differentiated cell type. The change from one type of cell to another may be part of a normal maturation process, or caused by some sort of abnormal stimulus. In simplistic terms, it is as if the original cells are not robust enough to withstand their environment, so they transform into another cell type better suited to their environment. If the stimulus causing metaplasia is removed or ceases, tissues return to their normal pattern of differentiation. Metaplasia is not synonymous with dysplasia, and is not considered to be an actual cancer. It is also contrasted with heteroplasia, which is the spontaneous abnormal growth of cytologic and histologic elements. Today, metaplastic changes are usually considered to be an early phase of carcinogenesis, specifically for those with a history of cancers or who are known to be susceptible to carcinogenic changes. Metaplastic change is thus often viewed as a premalignant condition that requires immediate intervention, either surgical or medical, lest it lead to cancer via malignant transformation. Causes When cells are faced with physiological or pathological stresses, they respond by adapting in any of several ways, one of which is metaplasia. It is a benign (i.e. non-cancerous) change that occurs as a response to change of milieu (physiological metaplasia) or chronic physical or chemical irritation. One example of pathological irritation is cigarette smoke, which causes the mucus-secreting ciliated pseudostratified columnar respiratory epithelial cells that line the airways to be replaced by stratified squamous epithelium, or a stone in the bile duct that causes the replacement of the secretory columnar epithelium with stratified squamous epithelium (squamous metaplasia). Metaplasia is an adaptation that replaces one type of epithelium with another that is more likely to be able to withstand the stresses it is faced with. It is also accom
https://en.wikipedia.org/wiki/Fractional-order%20control
Fractional-order control (FOC) is a field of control theory that uses the fractional-order integrator as part of the control system design toolkit. The use of fractional calculus (FC) can improve and generalize well-established control methods and strategies. The fundamental advantage of FOC is that the fractional-order integrator weights history using a function that decays with a power-law tail, so the whole history of the signal contributes to each iteration of the control algorithm. This creates a 'distribution of time constants', the upshot of which is that there is no particular time constant, or resonance frequency, for the system. In fact, the fractional integral operator is different from any integer-order rational transfer function, in the sense that it is a non-local operator that possesses an infinite memory and takes into account the whole history of its input signal. Fractional-order control shows promise in many controlled environments that suffer from the classical problems of overshoot and resonance, as well as time-diffuse applications such as thermal dissipation and chemical mixing. Fractional-order control has also been demonstrated to be capable of suppressing chaotic behaviors in mathematical models of, for example, muscular blood vessels. Initiated in the 1980s by Prof. Oustaloup's group, the CRONE approach is one of the most developed control-system design methodologies that uses fractional-order operator properties. See also Differintegral Fractional calculus Fractional-order system External links Dr. YangQuan Chen's latest homepage for the applied fractional calculus (AFC) Dr. YangQuan Chen's page about fractional calculus on Google Sites
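The power-law memory mentioned above can be made explicit with the standard Riemann–Liouville definition (a textbook form of the fractional integral, not one specific to CRONE):

```latex
% Riemann–Liouville fractional integral of order \alpha > 0:
I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} f(\tau)\, d\tau
% In the Laplace domain this corresponds to the operator 1/s^{\alpha}.
% The kernel (t-\tau)^{\alpha-1} is exactly the power-law weighting of the
% input's history: for non-integer \alpha it never reaches zero, which is
% why the operator has infinite memory.
```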
https://en.wikipedia.org/wiki/WRDC
WRDC (channel 28) is a television station licensed to Durham, North Carolina, United States, serving the Research Triangle area as an affiliate of MyNetworkTV. It is owned by Sinclair Broadcast Group alongside Raleigh-licensed CW affiliate WLFL (channel 22). Both stations share studios in the Highwoods Office Park, just outside downtown Raleigh, while WRDC's transmitter is located in Auburn, North Carolina. Channel 28 is the third-oldest television station in the Triangle and was the market's NBC affiliate for its first 27 years of operation. It was perennially the third-rated station in the market and did not produce local newscasts for significant portions of its tenure with NBC, which contributed to the network moving to another station. Prior use of channel 28 in Raleigh Channel 28 in Raleigh was initially occupied by WNAO-TV, the first television station in the Raleigh–Durham market and North Carolina's first UHF station. Owned by the Sir Walter Television Company, WNAO-TV broadcast from July 12, 1953, to December 31, 1957, primarily as a CBS affiliate with secondary affiliations with other networks. The station was co-owned with WNAO radio (850 AM and 96.1 FM)), which Sir Walter had bought from The News & Observer newspaper after obtaining the television construction permit. After the Raleigh–Durham market received two VHF television stations in 1954 and 1956 (WTVD, channel 11, and WRAL-TV, channel 5, respectively), WNAO-TV found the going increasingly difficult, as did many early UHF stations. The station signed off December 31, 1957, and its owner entered into a joint venture with another dark UHF outlet that was successful in obtaining channel 8 in High Point. History WRDU-TV/Triangle Telecasters In 1966, a major overhaul of the UHF allocation table moved the market's channel 28 allotment from Raleigh to Durham. On November 18 of that year, Triangle Telecasters, Inc., a group led by law professor Robinson O. Everett, applied to the Federal Communicatio
https://en.wikipedia.org/wiki/Euclidean%20tilings%20by%20convex%20regular%20polygons
Euclidean plane tilings by convex regular polygons have been widely used since antiquity. The first systematic mathematical treatment was that of Kepler in his Harmonices Mundi (Latin: The Harmony of the World, 1619). Notation of Euclidean tilings Euclidean tilings are usually named after Cundy & Rollett’s notation. This notation represents (i) the number of vertices, (ii) the number of polygons around each vertex (arranged clockwise) and (iii) the number of sides to each of those polygons. For example: 36; 36; 34.6 tells us there are 3 vertices with 2 different vertex types, so this tiling would be classed as a ‘3-uniform (2-vertex types)’ tiling. Broken down, 36; 36 (both of different transitivity class), or (36)2, tells us that there are 2 vertices (denoted by the superscript 2), each with 6 equilateral 3-sided polygons (triangles). The final vertex, 34.6, has 4 more contiguous equilateral triangles and a single regular hexagon. However, this notation has two main problems, related to ambiguous conformation and uniqueness. First, when it comes to k-uniform tilings, the notation does not explain the relationships between the vertices, which makes it impossible to generate a covered plane from the notation alone. Second, some distinct tessellations share the same nomenclature: they are very similar, but the relative positions of their hexagons, for instance, differ, so the nomenclature is not unique for each tessellation. To solve those problems, GomJau-Hogg’s notation is a slightly modified version of the research and notation presented in 2012 on the generation and nomenclature of tessellations and double-layer grids. Antwerp v3.0, a free online application, allows for the infinite generation of regular polygon tilings through a set of shape placement stages and iterative rotation and reflection operations, obtained directly from GomJau-Hogg’s notation. Regular tilings Following Grünbaum and Sheph
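A necessary condition on any vertex type in these notations is that the interior angles of the regular polygons meeting at a vertex sum to exactly 360°. A small sketch checking the vertex configurations mentioned above:

```python
def interior_angle(n: int) -> float:
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180.0 / n

def is_valid_vertex(polygons: tuple) -> bool:
    """True if the polygons can meet around one vertex with no gap or overlap."""
    return abs(sum(interior_angle(n) for n in polygons) - 360.0) < 1e-9

# 3.3.3.3.3.3 (six triangles) and 3.3.3.3.6 (four triangles and a hexagon)
print(is_valid_vertex((3, 3, 3, 3, 3, 3)))  # True
print(is_valid_vertex((3, 3, 3, 3, 6)))     # True
print(is_valid_vertex((3, 3, 3, 3, 3)))     # False: only 300 degrees
```

The condition is necessary but not sufficient: as the text notes, a list of valid vertex types alone does not determine how the vertices fit together into a plane-covering tiling.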
https://en.wikipedia.org/wiki/Vision%20mixer
A vision mixer is a device used to select between several different live video sources and, in some cases, compositing live video sources together to create visual effects. In most of the world, both the equipment and its operator are called a vision mixer or video mixer; however, in the United States, the equipment is called a video switcher, production switcher or video production switcher, and its operator is known as a technical director (TD). The role of the vision mixer for video is similar to what a mixing console does for audio. Typically a vision mixer would be found in a video production environment such as a production control room of a television studio, production truck or post-production facility. Capabilities and usage Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations (called mattes in this context) and generate color signals. Vision mixers may include digital video effects (DVE) and still store functionality. Most vision mixers are targeted at the professional market, with newer analog models having component video connections and digital ones using serial digital interface (SDI) or SMPTE 2110. They are used in live television, such as outside broadcasting, with video tape recording (VTR) and video servers for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer-based non-linear editing systems. Older professional mixers worked with composite video, analog signal inputs. There were a number of consumer video switchers with composite video or S-Video. These are often used for VJing, presentations, and small multi-camera productions. Operation The most basic part of a vision mixer is a bus, which is a signal path consisting of multiple video inputs that feeds a single output. On the panel, a bus is represented by a
https://en.wikipedia.org/wiki/AES47
AES47 is a standard which describes a method for transporting AES3 professional digital audio streams over Asynchronous Transfer Mode (ATM) networks. The Audio Engineering Society (AES) published AES47 in 2002. The method described by AES47 is also published by the International Electrotechnical Commission as IEC 62365. Introduction Many professional audio systems are now combined with telecommunication and IT technologies to provide new functionality, flexibility and connectivity over both local and wide area networks. AES47 was developed to provide a standardised method of transporting standard digital audio per AES3 over telecommunications networks that provide the quality of service required by many professional low-latency live audio uses. AES47 may be used directly between specialist audio devices or in combination with telecommunication and computer equipment with suitable network interfaces. In both cases, AES47 uses the same physical structured cabling used as standard in telecommunications networks. Common network protocols like Ethernet use large packet sizes, which produce a larger minimum latency. Asynchronous Transfer Mode divides data into cells with a 48-byte payload, which provides lower latency. History The original work was carried out at the British Broadcasting Corporation’s R&D department and published as "White Paper 074", which established that this approach provides the necessary performance for professional media production. AES47 was originally published in 2002 and was republished with minor revisions in February 2006. Amendment 1 to AES47 was published in February 2009, adding code points in the ATM Adaptation Layer Parameters Information Element to signal that the time to which each audio sample relates can be identified as specified in AES53. The change in thinking from traditional ATM network design is not necessarily to use ATM to pass IP traffic (apart from management traffic) but to use AES47 in parallel with standard Ethernet structures to d
https://en.wikipedia.org/wiki/Head%20over%20Heels%20%28video%20game%29
Head Over Heels is an action-adventure video game released by Ocean Software in 1987 for several 8-bit home computers. It uses an isometric engine that is similar to the Filmation technique first developed by Ultimate Play the Game. Head Over Heels is the second isometric game by Jon Ritman and Bernie Drummond, after their earlier Batman computer game was released in 1986. The game received very favourable reviews and was described as an all time classic. In 2003, Retrospec released a remake of Head Over Heels for Microsoft Windows, Mac OS X, BeOS, and Linux. In 2019, Piko Interactive released an Atari ST port of Head Over Heels for Atari Jaguar. A Nintendo Switch port was released on October 28, 2021. Gameplay The player controls two characters instead of just one, each with different abilities. Head can jump higher than Heels, control himself in the air, and fire doughnuts from a hooter to paralyze enemies. Heels can run twice as fast as Head, climb certain staircases that Head cannot, and carry objects around a room in a bag. These abilities become complementary when the player combines them after completing roughly a sixth of the game. Compared to its predecessors, the game offers unique and revolutionary gameplay, complex puzzles, and more than 300 rooms to explore. Drummond contributed some famously surreal touches, including robots (controlled by push switches) that bore a remarkable resemblance to the head of Charles III (then Prince of Wales) on the body of a Dalek. Other surreal touches include enemies with the heads of elephants and staircases made of dogs that teleport themselves away as soon as Head enters the room. Plot Headus Mouthion (Head) and Footus Underium (Heels) are two spies from the planet Freedom. They are sent to Blacktooth to liberate the enslaved planets of Penitentiary, Safari, Book World and Egyptus, and then to defeat the Emperor to prevent further planets from falling under his rule. Captured and separated, the spies are pla
https://en.wikipedia.org/wiki/Cuisenaire%20rods
Cuisenaire rods are mathematics learning aids for students that provide an interactive, hands-on way to explore mathematics and learn mathematical concepts, such as the four basic arithmetical operations, working with fractions and finding divisors. In the early 1950s, Caleb Gattegno popularised this set of coloured number rods created by Georges Cuisenaire (1891–1975), a Belgian primary school teacher, who called the rods réglettes. According to Gattegno, "Georges Cuisenaire showed in the early 1950s that students who had been taught traditionally, and were rated 'weak', took huge strides when they shifted to using the material. They became 'very good' at traditional arithmetic when they were allowed to manipulate the rods." History The educationalists Maria Montessori and Friedrich Fröbel had used rods to represent numbers, but it was Georges Cuisenaire who introduced the rods that were to be used across the world from the 1950s onwards. In 1952 he published Les nombres en couleurs, Numbers in Color, which outlined their use. Cuisenaire, a violin player, taught music as well as arithmetic in the primary school in Thuin. He wondered why children found it easy and enjoyable to pick up a tune and yet found mathematics neither easy nor enjoyable. These comparisons with music and its representation led Cuisenaire to experiment in 1931 with a set of ten rods sawn out of wood, with lengths from 1 cm to 10 cm. He painted each length of rod a different colour and began to use these in his teaching of arithmetic. The invention remained almost unknown outside the village of Thuin for about 23 years until, in April 1953, British mathematician and mathematics education specialist Caleb Gattegno was invited to see students using the rods in Thuin. At this point he had already founded the International Commission for the Study and Improvement of Mathematics Education (CIEAEM) and the Association of Teachers of Mathematics, but this marked a turning point in his understanding:
https://en.wikipedia.org/wiki/Advertising%20network
An online advertising network or ad network is a company that connects advertisers to websites that want to host advertisements. The key function of an ad network is an aggregation of ad supply from publishers and matching it with the advertiser's demand. The phrase "ad network" by itself is media-neutral in the sense that there can be a "Television Ad Network" or a "Print Ad Network", but is increasingly used to mean "online ad network" as the effect of aggregation of publisher ad space and sale to advertisers is most commonly seen in the online space. The fundamental difference between traditional media ad networks and online ad networks is that online ad networks use a central ad server to deliver advertisements to consumers (ad serving), which enables targeting, tracking and reporting of impressions in ways not possible with analog media alternatives. Overview The advertising network market is a large and growing market, with Internet advertising revenues expected to grow from $135.42 bn in 2014 to $239.87 bn in 2019. Digital advertising revenues in the United States alone are set to reach $107.30 bn in 2018 which is an 18.7% increase from 2017 ad spend. This growth will result in many new players in the market and encourage acquisitions of ad networks by larger companies that either enter the market or expand their market presence. Currently, there are hundreds of ad networks worldwide and the landscape changes daily. The inventory of online advertising space comes in many different forms, including space on the desktop and mobile websites, in RSS feeds, blogs, instant messaging applications, mobile apps, adware, e-mails, and other media. The dominant forms of inventory include third-party content websites, which work with advertising networks for either a share of the ad revenues or a fee, as well as search engines, mobile, and online video resources. An advertiser can buy a run of network package, or a run of category package within the network. The adve
https://en.wikipedia.org/wiki/136%20%28number%29
136 (one hundred [and] thirty-six) is the natural number following 135 and preceding 137. In mathematics 136 is itself a factor of the Eddington number. With a total of 8 divisors, 8 among them, 136 is a refactorable number. It is a composite number. 136 is a centered triangular number and a centered nonagonal number. The sum of the ninth row of Lozanić's triangle is 136. 136 is a self-descriptive number in base 4, and a repdigit in base 16. In base 10, the sum of the cubes of its digits is . The sum of the cubes of the digits of 244 is . 136 is a triangular number, because it's the sum of the first 16 positive integers. In the military Force 136 branch of the British organization, the Special Operations Executive (SOE), in the South-East Asian Theatre of World War II USNS Mission Soledad (T-AO-136) was a United States Navy Mission Buenaventura-class fleet oiler during World War II USS Admirable (AM-136) was a United States Navy Admirable class minesweeper USS Ara (AK-136) was a United States Navy during World War II was a United States Navy during World War II USS Botetourt (APA-136) was a United States Navy during World War II and the Korean War was a United States Navy tanker during World War II was a United States Navy during World War II was a United States Navy heavy cruiser during World War II USS Frederick C. Davis (DE-136) was a United States Navy during World War II was a United States Navy General G. O. Squier-class transport ship during World War II Electronic Attack Squadron 136 (VAQ-136) also known as "The Gauntlets" is a United States Navy attack squadron at Naval Air Station Atsugi, Japan Strike Fighter Squadron 136 (VFA-136) is a United States Navy strike fighter squadron based at Naval Air Station Oceana, Virginia In transportation London Buses route 136 is a Transport for London contracted bus route in London In TV and radio 136 kHz band is the lowest frequency band amateur radio operators are allowed to transmit
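The arithmetic properties in the mathematics section above are easy to verify directly:

```python
def digit_cube_sum(n: int) -> int:
    """Sum of the cubes of the base-10 digits of n."""
    return sum(int(d) ** 3 for d in str(n))

# 1^3 + 3^3 + 6^3 = 244, and the digit cubes of 244 return to 136
print(digit_cube_sum(136), digit_cube_sum(244))

# Triangular: 136 is the sum of the first 16 positive integers
print(sum(range(1, 17)))

# Refactorable: 136 is divisible by its number of divisors (8)
divisors = [d for d in range(1, 137) if 136 % d == 0]
print(len(divisors), 136 % len(divisors) == 0)
```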
https://en.wikipedia.org/wiki/173%20%28number%29
173 (one hundred [and] seventy-three) is the natural number following 172 and preceding 174. In mathematics 173 is: an odd number. a deficient number. an odious number. a balanced prime. an Eisenstein prime with no imaginary part. a Sophie Germain prime. an inconsummate number. the sum of 2 squares: 22 + 132. the sum of three consecutive prime numbers: 53 + 59 + 61. Palindromic number in bases 3 (201023) and 9 (2129). In astronomy 173 Ino is a large dark main belt asteroid 173P/Mueller is a periodic comet in the Solar System Arp 173 (VV 296, KPG 439) is a pair of galaxies in the constellation Boötes In the military 173rd Air Refueling Squadron unit of the Nebraska Air National Guard 173rd Airborne Brigade Combat Team of the United States Army based in Vicenza 173rd Battalion unit of the Canadian Expeditionary Force during the World War I 173rd Special Operations Aviation Squadron of the Australian Army K-173 Chelyabinsk Russian was a U.S. Navy Phoenix-class auxiliary ship following World War II was a U.S. Navy during World War II was a U.S. Navy during World War II was a U.S. Navy during World War II was a U.S. Navy yacht during World War I was a U.S. Navy ship during World War II was a U.S. Navy submarine chaser during World War II was a U.S. Navy Porpoise-class submarine during World War II was a U.S. Navy following World War II Vought V-173 (Flying Pancake) was a U.S. Navy experimental test aircraft during World War II In transportation The Georgia Railroad, the world longest railroad in 1845, ran for from Augusta to Marthasville (Atlanta, Georgia) United Airlines Flight 173 en route from Denver to Portland crashed on December 28, 1978 The Velocity 173 was a kit aircraft produced by Velocity Aircraft in the early 1990s. In popular culture The book 173 Hours in Captivity (2000) SCP-173, a fictional statue In other fields 173 is also: The year AD 173 or 173 BC 173 AH is a year in the Islamic calendar that corresponds
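Several of the number-theoretic facts listed above can be checked in a few lines:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, sufficient for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Balanced prime: 173 is the mean of its neighbouring primes, 167 and 179
print(is_prime(167), is_prime(173), is_prime(179), (167 + 179) // 2)

# Sum of two squares, and of three consecutive primes
print(2 ** 2 + 13 ** 2, 53 + 59 + 61)
```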
https://en.wikipedia.org/wiki/Pumpkin%20seed%20oil
Pumpkin seed oil is a culinary oil, used especially in central Europe. Culinary uses This oil is a culinary specialty from what used to be part of the Austro-Hungarian Empire and is now southeastern Austria (Styria), eastern Slovenia (Styria and Prekmurje), Central Transylvania, Orăștie-Cugir region of Romania, north western Croatia (esp. Međimurje), Vojvodina, and adjacent regions of Hungary. It is also used worldwide, including North America, Mexico, India and China. Pumpkin seed oil has an intense nutty taste and is rich in polyunsaturated fatty acids. Browned oil has a bitter taste. Pumpkin seed oil serves as a salad dressing. The typical Styrian dressing consists of pumpkin seed oil and cider vinegar. The oil is also used for desserts, giving ordinary vanilla ice cream a nutty taste. It is considered a delicacy in Austria and Slovenia, and a few drops are added to pumpkin soup and other local dishes. Using it as a cooking oil, however, destroys its essential fatty acids. Production Oil from pumpkin seeds is extracted by solvent extraction or by supercritical carbon dioxide methods. Once the oil is obtained, further specific extractions may be done, such as for carotenoids. Styrian oil – an export commodity of Austria and Slovenia – is made by pressing roasted, hull-less pumpkin seeds from a local variety of pumpkin, the "Styrian oil pumpkin" (Cucurbita pepo subsp. pepo var. 'styriaca', also known as var. oleifera). High-temperature roasting improves the aromatic quality of pumpkin seed oil. Seed types and oil The viscous oil is light to very dark green to dark red in colour depending on the thickness of the observed sample. The oil appears green in thin layers and red in thick layers, an optical phenomenon called dichromatism. Pumpkin seed oil is one of the substances with the strongest dichromatism. Its Kreft's dichromaticity index is −44. When used together with yoghurt, the oil turns bright green and is sometimes referred to as "green-gold". Other ty
https://en.wikipedia.org/wiki/Virama
Virama ( ्) is a Sanskrit phonological concept to suppress the inherent vowel that otherwise occurs with every consonant letter, commonly used as a generic term for a codepoint in Unicode, representing either halanta, hasanta or explicit virāma, a diacritic in many Brahmic scripts, including the Devanagari and Bengali scripts, or saṃyuktākṣara (Sanskrit: संयुक्ताक्षर) or implicit virama, a conjunct consonant or ligature. Unicode schemes of scripts writing Mainland Southeast Asia languages, such as that of Burmese script and of Tibetan script, generally don't group the two functions together. Names The name is Sanskrit for "cessation, termination, end". As a Sanskrit word, it is used in place of several language-specific terms, such as: Usage In Devanagari and many other Indic scripts, a virama is used to cancel the inherent vowel of a consonant letter and represent a consonant without a vowel, a "dead" consonant. For example, in Devanagari, is a consonant letter, ka, ् is a virāma; therefore, (ka + virāma) represents a dead consonant k. If this k is further followed by another consonant letter, for example, ṣa ष, the result might look like , which represents kṣa as ka + (visible) virāma + ṣa. In this case, two elements k क् and ṣa ष are simply placed one by one, side by side. Alternatively, kṣa can be also written as a ligature , which is actually the preferred form. Generally, when a dead consonant letter C1 and another consonant letter C2 are conjoined, the result may be: A fully conjoined ligature of C1+C2; Half-conjoined— C1-conjoining: a modified form (half form) of C1 attached to the original form (full form) of C2 C2-conjoining: a modified form of C2 attached to the full form of C1; or Non-ligated: full forms of C1 and C2 with a visible virama. If the result is fully or half-conjoined, the (conceptual) virama which made C1 dead becomes invisible, logically existing only in a character encoding scheme such as ISCII or Unicode. If the result is not li
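The ka + virama + ṣa sequence described above can be inspected directly in Python. Whether the result is rendered as a half form, a full ligature, or a visible virama is left to the font and shaping engine; the encoding itself just stores the three code points:

```python
import unicodedata

ka, virama, ssa = "\u0915", "\u094D", "\u0937"  # क, ्, ष
kssa = ka + virama + ssa                        # underlying sequence for क्ष

print(unicodedata.name(virama))  # DEVANAGARI SIGN VIRAMA
print(unicodedata.name(ka))      # DEVANAGARI LETTER KA
print(len(kssa))                 # 3 code points, displayed as one conjunct
```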
https://en.wikipedia.org/wiki/Minimum%20viable%20population
Minimum viable population (MVP) is a lower bound on the population of a species, such that it can survive in the wild. This term is commonly used in the fields of biology, ecology, and conservation biology. MVP refers to the smallest possible size at which a biological population can exist without facing extinction from natural disasters or demographic, environmental, or genetic stochasticity. The term "population" is defined as a group of interbreeding individuals in similar geographic area that undergo negligible gene flow with other groups of the species. Typically, MVP is used to refer to a wild population, but can also be used for ex-situ conservation (Zoo populations). Estimation There is no unique definition of what constitutes a sufficient population for the continuation of a species, because whether a species survives will depend to some extent on random events. Thus, any calculation of a minimum viable population (MVP) will depend on the population projection model used. A set of random (stochastic) projections might be used to estimate the initial population size needed (based on the assumptions in the model) for there to be, (for example) a 95% or 99% probability of survival 1,000 years into the future. Some models use generations as a unit of time rather than years in order to maintain consistency between taxa. These projections (population viability analyses, or PVA) use computer simulations to model populations using demographic and environmental information to project future population dynamics. The probability assigned to a PVA is arrived at after repeating the environmental simulation thousands of times. Extinction Small populations are at a greater risk of extinction than larger populations due to small populations having less capacity to recover from adverse stochastic (i.e. random) events. Such events may be divided into four sources: Demographic stochasticity Demographic stochasticity is often only a driving force toward extinction in po
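The repeated stochastic projections described above can be sketched as a toy Monte Carlo model. The growth rate, variability, and quasi-extinction threshold below are illustrative assumptions, not values from any real PVA:

```python
import random

def survival_probability(n0, years=100, runs=2000,
                         growth=1.01, sd=0.15, threshold=2, seed=42):
    """Fraction of simulated populations still above `threshold` after `years`."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(runs):
        n = float(n0)
        for _ in range(years):
            n *= rng.gauss(growth, sd)  # environmental stochasticity
            if n < threshold:           # quasi-extinction
                break
        else:
            survived += 1
    return survived / runs

print(survival_probability(50), survival_probability(5000))
```

An MVP estimate then corresponds to the smallest n0 for which this probability reaches a chosen target (for example 95% over the projection horizon); real PVA software uses far richer demographic and environmental structure than this sketch.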
https://en.wikipedia.org/wiki/API%20gravity
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks. API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). It is used to compare densities of petroleum liquids. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. API gravity is graduated in degrees on a hydrometer instrument. API gravity values of most petroleum liquids fall between 10 and 70 degrees. In 1916, the U.S. National Bureau of Standards accepted the Baumé scale, which had been developed in France in 1768, as the U.S. standard for measuring the specific gravity of liquids less dense than water. Investigation by the U.S. National Academy of Sciences found major errors in salinity and temperature controls that had caused serious variations in published values. Hydrometers in the U.S. had been manufactured and distributed widely with a modulus of 141.5 instead of the Baumé scale modulus of 140. The scale was so firmly established that, by 1921, the remedy implemented by the American Petroleum Institute was to create the API gravity scale, recognizing the scale that was actually being used. API gravity formulas The formula to calculate API gravity from specific gravity (SG) is: Conversely, the specific gravity of petroleum liquids can be derived from their API gravity value as Thus, a heavy oil with a specific gravity of 1.0 (i.e., with the same density as pure water at 60 °F) has an API gravity of: Using API gravity to calculate barrels of crude oil per metric ton In the oil industry, quantities of crude oil are often measured in metric tons. One can calculate the
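The conversions referred to above use the 141.5 modulus mentioned in the history section, with specific gravity taken at 60 °F; the standard formulas are:

```python
def api_gravity(sg: float) -> float:
    """API gravity (degrees) from specific gravity at 60 deg F."""
    return 141.5 / sg - 131.5

def specific_gravity(api: float) -> float:
    """Specific gravity at 60 deg F from API gravity (degrees)."""
    return 141.5 / (api + 131.5)

# A heavy oil as dense as pure water (SG = 1.0) sits exactly at 10 degrees API
print(api_gravity(1.0))        # 10.0
print(specific_gravity(10.0))  # 1.0
```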
https://en.wikipedia.org/wiki/Radial%20stress
Radial stress is stress toward or away from the central axis of a component. Pressure vessels The walls of pressure vessels generally undergo triaxial loading. For cylindrical pressure vessels, the normal loads on a wall element are longitudinal stress, circumferential (hoop) stress and radial stress. The radial stress for a thick-walled cylinder is equal and opposite to the gauge pressure on the inside surface, and zero on the outside surface. The circumferential and longitudinal stresses are usually much larger for pressure vessels, and so for thin-walled instances, radial stress is usually neglected. Formula The radial stress for a thick-walled pipe at a point a distance r from the central axis is given by σ_r = (p_i r_i² − p_o r_o²) / (r_o² − r_i²) + (p_o − p_i) r_i² r_o² / (r² (r_o² − r_i²)), where r_i is the inner radius, r_o is the outer radius, p_i is the inner absolute pressure and p_o is the outer absolute pressure. The radial stress is largest in magnitude when r = r_i (at the inside surface), where σ_r = −p_i; measured relative to the outer pressure, its magnitude equals the gauge pressure, or p_i − p_o.
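A short sketch of the Lamé thick-wall expression above, useful for checking the boundary behaviour the article describes (stress equal to minus the applied pressure at each surface); variable names are illustrative:

```python
def radial_stress(r, r_i, r_o, p_i, p_o):
    """Lame radial stress in a thick-walled cylinder, tension positive.

    r_i, r_o: inner and outer radii; p_i, p_o: inner and outer pressures.
    """
    a = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    b = (p_o - p_i) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
    return a + b / r**2

# Boundary checks for a 50 mm / 80 mm cylinder at 10 MPa internal pressure:
print(radial_stress(0.05, 0.05, 0.08, 10e6, 0.0))  # approx -1.0e7 (= -p_i)
print(radial_stress(0.08, 0.05, 0.08, 10e6, 0.0))  # approx 0.0 (= -p_o)
```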
https://en.wikipedia.org/wiki/Cylinder%20stress
In mechanics, a cylinder stress is a stress distribution with rotational symmetry; that is, one which remains unchanged if the stressed object is rotated about some fixed axis. Cylinder stress patterns include: circumferential stress, or hoop stress, a normal stress in the tangential (azimuth) direction; axial stress, a normal stress parallel to the axis of cylindrical symmetry; radial stress, a normal stress in directions coplanar with but perpendicular to the symmetry axis. These three principal stresses (hoop, longitudinal, and radial) can be calculated analytically using a mutually perpendicular tri-axial stress system. The classical example (and namesake) of hoop stress is the tension applied to the iron bands, or hoops, of a wooden barrel. In a straight, closed pipe, any force applied to the cylindrical pipe wall by a pressure differential will ultimately give rise to hoop stresses. Similarly, if this pipe has flat end caps, any force applied to them by static pressure will induce a perpendicular axial stress on the same pipe wall. Thin sections often have negligibly small radial stress, but accurate models of thicker-walled cylindrical shells require such stresses to be considered. In thick-walled pressure vessels, construction techniques allowing for favorable initial stress patterns can be utilized. These compressive stresses at the inner surface reduce the overall hoop stress in pressurized cylinders. Cylindrical vessels of this nature are generally constructed from concentric cylinders shrunk over (or expanded into) one another, i.e., built-up shrink-fit cylinders, but the same effect can also be achieved in single cylinders through autofrettage of thick cylinders. Definitions Hoop stress The hoop stress is the force over area exerted circumferentially (perpendicular to the axis and the radius of the object) in both directions on every particle in the cylinder wall. It can be described as: σ_θ = F / (t l), where: F is the force exerted circumferentially on an area of the cylind
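For the thin-walled case the article mentions (where radial stress is neglected), the hoop and axial stresses reduce to the familiar P·r/t and P·r/(2t) expressions. A hedged sketch, with illustrative names and values:

```python
def thin_wall_stresses(p, r, t):
    """Thin-walled cylinder under internal gauge pressure p, mean radius r,
    wall thickness t (the approximation is usually quoted for r/t > 10)."""
    hoop = p * r / t           # circumferential (hoop) stress
    axial = p * r / (2 * t)    # longitudinal stress, closed ends
    return hoop, axial

# 2 MPa internal pressure, 0.5 m mean radius, 10 mm wall:
hoop, axial = thin_wall_stresses(2e6, 0.5, 0.01)
print(hoop, axial)  # approx 1.0e8 Pa (100 MPa) hoop, 5.0e7 Pa (50 MPa) axial
```

Note the hoop stress is twice the axial stress, which is why pressurized pipes tend to split along their length rather than around their circumference.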
https://en.wikipedia.org/wiki/Strong%20antichain
In order theory, a subset A of a partially ordered set P is a strong downwards antichain if it is an antichain in which no two distinct elements have a common lower bound in P, that is, ∀x, y ∈ A: x ≠ y → ¬∃z ∈ P (z ≤ x ∧ z ≤ y). In the case where P is ordered by inclusion, and closed under subsets, but does not contain the empty set, this is simply a family of pairwise disjoint sets. A strong upwards antichain B is a subset of P in which no two distinct elements have a common upper bound in P. Authors will often omit the "upwards" and "downwards" term and merely refer to strong antichains. Unfortunately, there is no common convention as to which version is called a strong antichain. In the context of forcing, authors will sometimes also omit the "strong" term and merely refer to antichains. To resolve ambiguities in this case, the weaker type of antichain is called a weak antichain. If (P, ≤) is a partial order and there exist distinct x, y ∈ P such that {x, y} is a strong antichain, then (P, ≤) cannot be a lattice (or even a meet semilattice), since by definition, every two elements in a lattice (or meet semilattice) must have a common lower bound. Thus lattices have only trivial strong antichains (i.e., strong antichains of cardinality at most 1).
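On a finite poset the definition can be checked mechanically. A small illustrative sketch using divisibility on {2, ..., 30}, so that 1 (a universal lower bound) is excluded from the poset:

```python
def is_strong_downwards_antichain(A, P, leq):
    """True iff no two distinct elements of A have a common lower bound in P."""
    return all(not any(leq(z, x) and leq(z, y) for z in P)
               for x in A for y in A if x != y)

P = list(range(2, 31))           # poset ordered by divisibility, 1 excluded
leq = lambda a, b: b % a == 0    # a <= b  iff  a divides b

print(is_strong_downwards_antichain({2, 3}, P, leq))  # True: no common divisor >= 2
print(is_strong_downwards_antichain({4, 6}, P, leq))  # False: 2 lies below both
```

{4, 6} is still an ordinary antichain here (neither divides the other), which illustrates the gap between the plain and the strong notion.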
https://en.wikipedia.org/wiki/Countable%20chain%20condition
In order theory, a partially ordered set X is said to satisfy the countable chain condition, or to be ccc, if every strong antichain in X is countable. Overview There are really two conditions: the upwards and downwards countable chain conditions. These are not equivalent. The countable chain condition means the downwards countable chain condition, in other words no two elements have a common lower bound. This is called the "countable chain condition" rather than the more logical term "countable antichain condition" for historical reasons related to certain chains of open sets in topological spaces and chains in complete Boolean algebras, where chain conditions sometimes happen to be equivalent to antichain conditions. For example, if κ is a cardinal, then in a complete Boolean algebra every antichain has size less than κ if and only if there is no descending κ-sequence of elements, so chain conditions are equivalent to antichain conditions. Partial orders and spaces satisfying the ccc are used in the statement of Martin's axiom. In the theory of forcing, ccc partial orders are used because forcing with any generic set over such an order preserves cardinals and cofinalities. Furthermore, the ccc property is preserved by finite support iterations (see iterated forcing). For more information on ccc in the context of forcing, see . More generally, if κ is a cardinal then a poset is said to satisfy the κ-chain condition if every antichain has size less than κ. The countable chain condition is the ℵ1-chain condition. Examples and properties in topology A topological space is said to satisfy the countable chain condition, or Suslin's Condition, if the partially ordered set of non-empty open subsets of X satisfies the countable chain condition, i.e. every pairwise disjoint collection of non-empty open subsets of X is countable. The name originates from Suslin's Problem. Every separable topological space is ccc. Furthermore, the product space of at most separable
https://en.wikipedia.org/wiki/William%20E.%20Castle
William Ernest Castle (October 25, 1867 – June 3, 1962) was an early American geneticist. Early years William Ernest Castle was born on a farm in Ohio and took an early interest in natural history. He graduated in 1889 from Denison University in Granville, Ohio, a Baptist college that emphasized classics, and went on to become a teacher of Latin at Ottawa University in Ottawa, Kansas, where he published his first paper on the flowering plants of the area. After three years of teaching, botany won out over Latin. Education Castle entered the senior class of Harvard University in 1892 and in 1893 took a second A.B. degree with honors. He was appointed laboratory assistant in zoology, and received an A.M. degree in 1894 and a Ph.D. in 1895. He then taught zoology at the University of Wisconsin–Madison and at Knox College in Galesburg, Illinois, each for a year. Harvard and Drosophila Castle returned to Harvard in 1897. His early work focused on embryology, but after the rediscovery of Mendelian genetics in 1900, he turned to mammalian genetics, especially that of the guinea pig. In 1903 Castle intervened in the debate on mathematical foundations of Mendelian genetics. He corrected some tentative work of Udny Yule on breeding by deliberate selection and genetics. In so doing, he anticipated what has now become known as the Hardy–Weinberg law. Formulated in the terms "as soon as selection is arrested the race remains stable at the degree of purity then attained", it appeared in his paper of November that year. At Harvard, Charles W. Woodworth suggested to Castle that Drosophila might be used for genetical work. Castle was the first to use the fruit fly Drosophila melanogaster, and it was his work that inspired T.H. Morgan to use Drosophila and that formed the basis of Morgan's 1933 Nobel Prize. Bussey Institution In 1908 Castle moved from the Harvard Museum of Comparative Zoology to the Bussey Institution for Applied Biology. There his most famous PhD student was Sewall Wright wh
https://en.wikipedia.org/wiki/Martin%27s%20axiom
In the mathematical field of set theory, Martin's axiom, introduced by Donald A. Martin and Robert M. Solovay, is a statement that is independent of the usual axioms of ZFC set theory. It is implied by the continuum hypothesis, but it is consistent with ZFC and the negation of the continuum hypothesis. Informally, it says that all cardinals less than the cardinality of the continuum, 𝔠, behave roughly like ℵ₀. The intuition behind this can be understood by studying the proof of the Rasiowa–Sikorski lemma. It is a principle that is used to control certain forcing arguments. Statement For any cardinal 𝛋, consider the following statement: MA(𝛋) For any partial order P satisfying the countable chain condition (hereafter ccc) and any family D of dense subsets of P such that |D| ≤ 𝛋, there is a filter F on P such that F ∩ d is non-empty for every d in D. In this case (for application of ccc), an antichain is a subset A of P such that any two distinct members of A are incompatible (two elements are said to be compatible if there exists a common element below both of them in the partial order). This differs from, for example, the notion of antichain in the context of trees. MA(ℵ₀) is simply true — the Rasiowa–Sikorski lemma. MA(2^ℵ₀) is false: [0, 1] is a separable compact Hausdorff space, and so (P, the poset of open subsets under inclusion, is) ccc. But now consider the following two size-2^ℵ₀ families of dense sets in P: no x∈[0, 1] is isolated, and so each x defines the dense subset {S : x∉S}. And each r∈(0, 1] defines the dense subset {S : diam(S)<r}. The two families combined are also of size 2^ℵ₀, and a filter meeting both must simultaneously avoid all points of [0, 1] while containing sets of arbitrarily small diameter. But a filter F containing sets of arbitrarily small diameter must contain a point in ⋂F by compactness. Martin's axiom is then that MA(κ) holds "as long as possible": Martin's axiom (MA) For every 𝛋 < 2^ℵ₀, MA(𝛋) holds.
https://en.wikipedia.org/wiki/Richard%20Ferber
Richard Ferber is a physician and the director of The Center for Pediatric Sleep Disorders at Children's Hospital Boston. He has been researching sleep and sleep disorders in children for over 30 years. He is best known for his methods, popularly called Ferberization, which purport to teach infants to fall asleep on their own and which are described in his book Solve Your Child's Sleep Problems (first edition 1985).
https://en.wikipedia.org/wiki/Comparison%20of%20wiki%20software
The following tables compare general and technical information for a number of wiki software packages. General information Systems listed on a light purple background are no longer in active development. Target audience Features 1 Features 2 Installation See also Comparison of wiki farms notetaking software text editors HTML editors word processors wiki hosting services List of wikis wiki software personal information managers text editors outliners for desktops mobile devices web-based Footnotes Comparison Wiki software Text editor comparisons
https://en.wikipedia.org/wiki/Absorption%20band
In quantum mechanics, an absorption band is a range of wavelengths, frequencies or energies in the electromagnetic spectrum that are characteristic of a particular transition from initial to final state in a substance. According to quantum mechanics, atoms and molecules can only hold certain defined quantities of energy, or exist in specific states. When such quanta of electromagnetic radiation are emitted or absorbed by an atom or molecule, the energy of the radiation changes the state of the atom or molecule from an initial state to a final state. Overview According to quantum mechanics, atoms and molecules can only hold certain defined quantities of energy, or exist in specific states. When electromagnetic radiation is absorbed by an atom or molecule, the energy of the radiation changes the state of the atom or molecule from an initial state to a final state. The number of states in a specific energy range is discrete for gaseous or diluted systems, with discrete energy levels. Condensed systems, like liquids or solids, have a continuous density of states distribution and often possess continuous energy bands. In order for a substance to change its energy it must do so in a series of "steps" by the absorption of a photon. This absorption process can move a particle, like an electron, from an occupied state to an empty or unoccupied state. It can also move a whole vibrating or rotating system, like a molecule, from one vibrational or rotational state to another or it can create a quasiparticle like a phonon or a plasmon in a solid. Electromagnetic transitions When a photon is absorbed, the electromagnetic field of the photon disappears as it initiates a change in the state of the system that absorbs the photon. Energy, momentum, angular momentum, magnetic dipole moment and electric dipole moment are transported from the photon to the system. Because there are conservation laws that have to be satisfied, the transition has to meet a series of constraints. This res
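Each such transition absorbs a photon whose energy matches the level spacing, E = hν = hc/λ. A quick illustrative calculation using the standard constants (the function name is an assumption for this sketch):

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 299_792_458       # speed of light in vacuum, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Photon energy in electronvolts for a given vacuum wavelength."""
    return h * c / wavelength_m / eV

print(round(photon_energy_eV(500e-9), 2))  # green light, ~2.48 eV
```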
https://en.wikipedia.org/wiki/Operative%20temperature
Operative temperature (t_o) is defined as a uniform temperature of an imaginary black enclosure in which an occupant would exchange the same amount of heat by radiation plus convection as in the actual nonuniform environment. Some references also use the terms 'equivalent temperature' or 'effective temperature' to describe combined effects of convective and radiant heat transfer. In design, operative temperature can be defined as the average of the mean radiant and ambient air temperatures, weighted by their respective heat transfer coefficients. The instrument used for assessing environmental thermal comfort in terms of operative temperature is called a eupatheoscope and was invented by A. F. Dufton in 1929. Mathematically, operative temperature can be expressed as: t_o = (h_c t_a + h_r t_r) / (h_c + h_r), where h_c = convective heat transfer coefficient, h_r = linear radiative heat transfer coefficient, t_a = air temperature, t_r = mean radiant temperature. Or t_o = (t_r + t_a √(10v)) / (1 + √(10v)), where v = air velocity, and t_a and t_r have the same meaning as above. It is also acceptable to approximate this relationship as t_o = (t_a + t_r) / 2 for occupants engaged in near sedentary physical activity (with metabolic rates between 1.0 met and 1.3 met), not in direct sunlight, and not exposed to air velocities greater than 0.10 m/s (20 fpm), where t_a and t_r have the same meaning as above. Application Operative temperature is used in heat transfer and thermal comfort analysis in transportation and buildings. Most psychrometric charts used in HVAC design only show the dry bulb temperature on the x-axis (abscissa); however, it is the operative temperature which is specified on the x-axis of the psychrometric chart illustrated in ANSI/ASHRAE Standard 55 – Thermal Environmental Conditions for Human Occupancy. See also HVAC Psychrometrics Underfloor heating ASHRAE
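The coefficient-weighted form and the near-sedentary approximation can be sketched as below; values and names are illustrative only:

```python
def operative_temp(t_air, t_mrt, h_c, h_r):
    """Operative temperature as the coefficient-weighted average of
    air temperature and mean radiant temperature."""
    return (h_c * t_air + h_r * t_mrt) / (h_c + h_r)

def operative_temp_simple(t_air, t_mrt):
    """Near-sedentary, low-air-speed approximation: the plain average."""
    return (t_air + t_mrt) / 2

# Warm radiant surfaces with cooler air, hypothetical coefficients:
print(operative_temp(24.0, 28.0, h_c=3.0, h_r=5.0))  # 26.5 degrees C
print(operative_temp_simple(24.0, 28.0))             # 26.0 degrees C
```

The weighted form shows why a strongly radiant environment (large h_r, hot surfaces) can feel warmer than the air temperature alone suggests.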
https://en.wikipedia.org/wiki/Medical%20algorithm
A medical algorithm is any computation, formula, statistical survey, nomogram, or look-up table, useful in healthcare. Medical algorithms include decision tree approaches to healthcare treatment (e.g., if symptoms A, B, and C are evident, then use treatment X) and also less clear-cut tools aimed at reducing or defining uncertainty. A medical prescription is also a type of medical algorithm. Scope Medical algorithms are part of a broader field which usually falls under the aims of medical informatics and medical decision-making. Medical decisions occur in several areas of medical activity including medical test selection, diagnosis, therapy and prognosis, and automatic control of medical equipment. In relation to logic-based and artificial neural network-based clinical decision support systems, which are also computer applications used in the medical decision-making field, algorithms are less complex in architecture, data structure and user interface. Medical algorithms are not necessarily implemented using digital computers. In fact, many of them can be represented on paper, in the form of diagrams, nomographs, etc. Examples A wealth of medical information exists in the form of published medical algorithms. These algorithms range from simple calculations to complex outcome predictions. Most clinicians use only a small subset routinely. Examples of medical algorithms are: Calculators, e.g. an on-line or stand-alone calculator for body mass index (BMI) when stature and body weight are given; Flowcharts and drakon-charts, e.g. a binary decision tree for deciding the etiology of chest pain; Look-up tables, e.g. for looking up food energy and nutritional contents of foodstuffs; Nomograms, e.g. a moving circular slide to calculate body surface area or drug dosages. A common class of algorithms is embedded in guidelines on the choice of treatments produced by many national, state, financial and local healthcare organisations and provided as knowledge re
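The BMI calculator mentioned above is the simplest kind of medical algorithm, a single formula; a minimal sketch:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 22.9
```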
https://en.wikipedia.org/wiki/Excess%20post-exercise%20oxygen%20consumption
Excess post-exercise oxygen consumption (EPOC, informally called afterburn) is a measurably increased rate of oxygen intake following strenuous activity. In historical contexts the term "oxygen debt" was popularized to explain or perhaps attempt to quantify anaerobic energy expenditure, particularly as regards lactic acid/lactate metabolism; in fact, the term "oxygen debt" is still widely used to this day. However, direct and indirect calorimetry experiments have definitively disproven any association of lactate metabolism as causal to an elevated oxygen uptake. In recovery, oxygen (EPOC) is used in the processes that restore the body to a resting state and adapt it to the exercise just performed. These include: hormone balancing, replenishment of fuel stores, cellular repair, innervation, and anabolism. Post-exercise oxygen consumption replenishes the phosphagen system. New ATP is synthesized and some of this ATP donates phosphate groups to creatine until ATP and creatine levels are back to resting state levels again. Another use of EPOC is to fuel the body’s increased metabolism from the increase in body temperature which occurs during exercise. EPOC is accompanied by an elevated consumption of fuel. In response to exercise, fat stores are broken down and free fatty acids (FFA) are released into the blood stream. In recovery, the direct oxidation of free fatty acids as fuel and the energy consuming re-conversion of FFAs back into fat stores both take place. Duration of the effect The EPOC effect is greatest soon after the exercise is completed and decays to a lower level over time. One experiment, involving exertion above baseline, found EPOC increasing metabolic rate to an excess level that decays to 13% three hours after exercise, and 4% after 16 hours, for the studied exercise dose. Another study, specifically designed to test whether the effect existed for more than 16 hours, conducted tests for 48 hours after the conclusion of the exercise and found measu
https://en.wikipedia.org/wiki/Chamfer
A chamfer is a transitional edge between two faces of an object. Sometimes defined as a form of bevel, it is often created at a 45° angle between two adjoining right-angled faces. Chamfers are frequently used in machining, carpentry, furniture, concrete formwork, mirrors, and to facilitate assembly of many mechanical engineering designs. Terminology In machining the word bevel is not used to refer to a chamfer. Machinists use chamfers to "ease" otherwise sharp edges, both for safety and to prevent damage to the edges. A chamfer may sometimes be regarded as a type of bevel, and the terms are often used interchangeably. In furniture-making, a lark's tongue is a chamfer which ends short of the end of a piece in a gradual outward curve, leaving the remainder of the edge as a right angle. Chamfers may be formed in either inside or outside adjoining faces of an object or room. By comparison, a fillet is the rounding-off of an interior corner, and a round (or radius) the rounding of an outside one. Carpentry and furniture Chamfers are used in furniture such as counters and table tops to ease their edges to keep people from bruising themselves in the otherwise sharp corner. When the edges are rounded instead, they are called bullnosed. Special tools such as chamfer mills and chamfer planes are sometimes used. Architecture Chamfers are commonly used in architecture, both for functional and aesthetic reasons. For example, the base of the Taj Mahal is a cube with chamfered corners, thereby creating an octagonal architectural footprint. Its great gate is formed of chamfered base stones and chamfered corbels for a balcony or equivalent cornice towards the roof. Urban planning Many city blocks in Barcelona, Valencia and various other cities in Spain, and street corners (curbs) in Ponce, Puerto Rico, are chamfered. The chamfering was designed as an embellishment and a modernization of urban space in Barcelona's mid-19th century Eixample or Expansion District, where the bui
https://en.wikipedia.org/wiki/Binary%20Ordered%20Compression%20for%20Unicode
Binary Ordered Compression for Unicode (BOCU) is a MIME compatible Unicode compression scheme. BOCU-1 combines the wide applicability of UTF-8 with the compactness of the Standard Compression Scheme for Unicode (SCSU). This Unicode encoding is designed to be useful for compressing short strings, and maintains code point order. BOCU-1 is specified in a Unicode Technical Note. For comparison, SCSU was adopted as a standard Unicode compression scheme with a byte/code point ratio similar to language-specific code pages. SCSU has not been widely adopted, as it is not suitable for MIME “text” media types. For example, SCSU cannot be used directly in emails and similar protocols. SCSU requires a complicated encoder design for good performance. Usually, the zip, bzip2, and other industry standard algorithms compact larger amounts of Unicode text more efficiently. Both SCSU and BOCU-1 are IANA registered charsets. Details All numbers in this section are hexadecimal, and all ranges are inclusive. Code points from U+0000 to U+0020 are encoded in BOCU-1 as the corresponding byte value. All other code points (that is, U+0021 through U+D7FF and U+E000 through U+10FFFF) are encoded as a difference between the code point and a normalized version of the most recently encoded code point that was not an ASCII space (U+0020). The initial state is U+0040. The normalization mapping is as follows: The difference between the current code point and the normalized previous code point is encoded as follows: Each byte range is lexicographically ordered with the following thirteen byte values excluded: 00 07 08 09 0A 0B 0C 0D 0E 0F 1A 1B 20. For example, the byte sequence FC 06 FF, coding for a difference of 1156B, is immediately followed by the byte sequence FC 10 01, coding for a difference of 1156C. Any ASCII input U+0000 to U+007F excluding space U+0020 resets the encoder to U+0040. Because the above-mentioned values cover line end code points U+000D and U+000A as is (0D 0A), the encoder
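The core state machine described above — encode each code point as a signed difference from the previous one, with an ASCII reset — can be sketched as follows. This is an illustration of the difference-coding idea only: the real BOCU-1 normalization mapping and the multi-byte lead/trail encoding of each difference are deliberately not modeled here.

```python
def bocu_style_deltas(text):
    """Illustrative sketch of BOCU-1's difference coding ONLY.

    Real BOCU-1 also normalizes the previous code point (e.g. to the
    middle of a script block) and packs each difference into a lead
    byte plus trail bytes; neither step is modeled here.
    """
    prev = 0x40          # initial state, per the specification
    deltas = []
    for ch in text:
        cp = ord(ch)
        if cp <= 0x20:
            deltas.append(cp)           # U+0000..U+0020 pass through as-is
        else:
            deltas.append(cp - prev)    # signed difference from the state
        if cp <= 0x7F and cp != 0x20:
            prev = 0x40                 # any non-space ASCII resets the state
        elif cp != 0x20:
            prev = cp                   # real BOCU-1 normalizes this value
    return deltas

# Text within one script stays close to the state, so differences are small:
print(bocu_style_deltas("αβ"))  # [881, 1]
```

The second delta being 1 illustrates why same-script text compresses well: most differences are small enough to fit in a single byte.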
https://en.wikipedia.org/wiki/Cylinder-head-sector
Cylinder-head-sector (CHS) is an early method for giving addresses to each physical block of data on a hard disk drive. It is a 3D-coordinate system made out of a vertical coordinate head, a horizontal (or radial) coordinate cylinder, and an angular coordinate sector. Head selects a circular surface: a platter in the disk (and one of its two sides). Cylinder is a cylindrical intersection through the stack of platters in a disk, centered around the disk's spindle. Combined, cylinder and head intersect in a circular line, or more precisely: a circular strip of physical data blocks called a track. Sector finally selects which data block in this track is to be addressed, as the track is subdivided into several equally-sized portions, each of which is an arc of (360/n) degrees, where n is the number of sectors in the track. CHS addresses were exposed, instead of simple linear addresses (going from 0 to the total block count on the disk minus 1), because early hard drives didn't come with an embedded disk controller that would hide the physical layout. A separate generic controller card was used, so the operating system had to know the exact physical "geometry" of the specific drive attached to the controller to correctly address data blocks. The traditional limits were 512 bytes/sector × 63 sectors/track × 255 heads (tracks/cylinder) × 1024 cylinders, resulting in a limit of 8032.5 MiB for the total capacity of a disk. As the geometry became more complicated (for example, with the introduction of zone bit recording) and drive sizes grew over time, the CHS addressing method became restrictive. Since the late 1980s, hard drives began shipping with an embedded disk controller that had good knowledge of the physical geometry; they would however report a false geometry to the computer, e.g., a larger number of heads than actually present, to gain more addressable space. These logical CHS values would be translated by the controller, thus CHS addressing no longer corresponded
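Translating between a CHS tuple and a linear block address (LBA) is a fixed arithmetic rule; a sketch for a drive reporting a given heads-per-cylinder and sectors-per-track geometry (sector numbers are 1-based by convention):

```python
def chs_to_lba(c, h, s, heads, sectors):
    """LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1)."""
    return (c * heads + h) * sectors + (s - 1)

def lba_to_chs(lba, heads, sectors):
    """Inverse mapping back to a (cylinder, head, sector) tuple."""
    c, rem = divmod(lba, heads * sectors)
    h, s = divmod(rem, sectors)
    return c, h, s + 1

print(chs_to_lba(0, 0, 1, 255, 63))    # 0, the first block on the disk
print(lba_to_chs(16450559, 255, 63))   # (1023, 254, 63), the classic CHS limit
```

With 512-byte blocks, 16450560 addressable blocks is exactly the 8032.5 MiB ceiling quoted above.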
https://en.wikipedia.org/wiki/Exposed%20node%20problem
In wireless networks, the exposed node problem occurs when a node is prevented from sending packets to other nodes because of co-channel interference with a neighboring transmitter. Consider an example of four nodes labeled R1, S1, S2, and R2, where the two receivers (R1, R2) are out of range of each other, yet the two transmitters (S1, S2) in the middle are in range of each other. Here, if a transmission between S1 and R1 is taking place, node S2 is prevented from transmitting to R2 as it concludes after carrier sense that it will interfere with the transmission by its neighbor S1. However note that R2 could still receive the transmission of S2 without interference because it is out of range of S1. IEEE 802.11 RTS/CTS mechanism helps to solve this problem only if the nodes are synchronized and packet sizes and data rates are the same for both the transmitting nodes. When a node hears an RTS from a neighboring node, but not the corresponding CTS, that node can deduce that it is an exposed node and is permitted to transmit to other neighboring nodes. If the nodes are not synchronised (or if the packet sizes are different or the data rates are different) the problem may occur that the sender will not hear the CTS or the ACK during the transmission of data of the second sender. The exposed node problem is not an issue in cellular networks as the power and distance between cells is controlled to avoid it. See also Hidden node problem IEEE 802.11 RTS/CTS Multiple Access with Collision Avoidance for Wireless (MACAW)
https://en.wikipedia.org/wiki/Delphi%20%28online%20service%29
Delphi Forums is a U.S. online service provider and since the mid-1990s has been a community internet forum site. It started as a nationwide dialup service in 1983. Delphi Forums remains active as of 2023. History The company that became Delphi was founded by Wes Kussmaul as Kussmaul Encyclopedia in 1981 and featured an encyclopedia, e-mail, and a primitive chat. Newswires, bulletin boards and better chat were added in early 1982. Kussmaul recalled: Delphi was actually launched in October 1981, at Jerry Milden's Northeast Computer Show, as the Kussmaul Encyclopedia--the world's first commercially available computerized encyclopedia. (Frank Greenagle's Arête Encyclopedia was announced at about the same time, but you couldn't buy it until much later.) The Kussmaul Encyclopedia was actually a complete home computer system (your choice of Tandy Color Computer or Apple II) with a 300-bps modem that dialed up to a VAX computer hosting our online encyclopedia database. We sold the system for about the same price and terms as Britannica. People wandered around in it and were impressed with the ease with which they could find information. We had a wonderful cross-referencing system that turned every occurrence of a word that was the name of an entry in the encyclopedia into a hypertext link—in 1981... In November 1982, Wes hired Glenn McIntyre as a software engineer primarily doing internal systems. Glenn brought in colleagues Kip Bryan and Dan Bruns. Kip wrote the software that became Delphi Conference and Delphi Forums. Dan, upon finishing his MBA at Harvard, became President and subsequently CEO when Wes moved on to form Global Villages. On March 15, 1983, the Delphi name was first used by General Videotex Corporation. Forums were text-based, and accessed via Telenet, Sprintnet, Tymnet, Uninet, and Datapac. In 1984, it had 4 million members. Delphi was extended to Argentina in 1985, through a partnership with the Argentine IT company Siscotel S.A. Delphi partnered
https://en.wikipedia.org/wiki/Optical%20medium
In optics, an optical medium is material through which light and other electromagnetic waves propagate. It is a form of transmission medium. The permittivity and permeability of the medium define how electromagnetic waves propagate in it. Properties The optical medium has an intrinsic impedance, given by η = E_x / H_y, where E_x and H_y are the electric field and magnetic field, respectively. In a region with no electrical conductivity, the expression simplifies to: η = √(μ/ε). For example, in free space the intrinsic impedance is called the characteristic impedance of vacuum, denoted Z0, and Z0 = √(μ0/ε0) ≈ 376.73 Ω. Waves propagate through a medium with velocity v = νλ, where ν is the frequency and λ is the wavelength of the electromagnetic waves. This equation also may be put in the form v = ω/k, where ω is the angular frequency of the wave and k is the wavenumber of the wave. In electrical engineering, the symbol β, called the phase constant, is often used instead of k. The propagation velocity of electromagnetic waves in free space, an idealized standard reference state (like absolute zero for temperature), is conventionally denoted by c0: c0 = 1/√(ε0 μ0), where ε0 is the electric constant and μ0 is the magnetic constant. For a general introduction, see Serway. For a discussion of synthetic media, see Joannopoulos. Types Homogeneous medium vs. heterogeneous medium Transparent medium vs. opaque body Translucent medium See also Čerenkov radiation Electromagnetic spectrum Electromagnetic radiation Optics SI units Free space Metamaterial Photonic crystal Photonic crystal fiber Notes and references Optics Electric and magnetic fields in matter
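The free-space values follow directly from the two constants; a quick numerical check:

```python
from math import sqrt

eps0 = 8.8541878128e-12   # electric constant (vacuum permittivity), F/m
mu0 = 1.25663706212e-6    # magnetic constant (vacuum permeability), H/m

c0 = 1 / sqrt(eps0 * mu0)   # propagation velocity in free space, m/s
Z0 = sqrt(mu0 / eps0)       # characteristic impedance of vacuum, ohms

print(round(c0))       # approx 299792458 m/s
print(round(Z0, 2))    # approx 376.73 ohms
```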
https://en.wikipedia.org/wiki/Rhapsody%20%28operating%20system%29
Rhapsody is an operating system that was developed by Apple Computer after its purchase of NeXT in the late 1990s. It is the fifth major release of the Mach-based operating system that was developed at NeXT in the late 1980s, previously called OPENSTEP and NEXTSTEP. Rhapsody was targeted to developers for a transition period between the Classic Mac OS and Mac OS X. Rhapsody represented a new and exploratory strategy for Apple, more than an operating system, and runs on x86-based PCs and on Power Macintosh. Rhapsody's OPENSTEP based Yellow Box API frameworks were ported to Windows NT for creating cross-platform applications. Eventually, the non-Apple platforms were discontinued, and later versions consist primarily of the OPENSTEP operating system ported to Power Macintosh, merging the Copland-originated GUI of Mac OS 8 with that of OPENSTEP. Several existing classic Mac OS frameworks were ported, including QuickTime and AppleSearch. Rhapsody can run Mac OS 8 and its applications in a paravirtualization layer called Blue Box for backward compatibility during migration to Mac OS X. Background Naming Rhapsody follows Apple's pattern through the 1990s of music-related codenames for operating system releases (see Rhapsody (music)). Apple had canceled its previous next-generation operating system strategy of Copland (named for American composer, Aaron Copland) and its pre-announced successor Gershwin (named for George Gershwin, composer of Rhapsody in Blue). Other musical code names include Harmony (Mac OS 7.6), Tempo (Mac OS 8), Allegro (Mac OS 8.5), and Sonata (Mac OS 9). Previous attempts to develop a successor to the Classic Mac OS In the mid-1990s, Mac OS was falling behind Windows. In 1993, Microsoft had introduced the next-generation Windows NT, which was a processor-independent, multiprocessing and multi-user operating system. At the time, Mac OS was still a single-user OS, and had gained a reputation for being unstable. Apple made several attempts to devel
https://en.wikipedia.org/wiki/Video%20production
Video production is the process of producing video content. It is the equivalent of filmmaking, but with video recorded either as analog signals on videotape, digitally on videotape, or as computer files stored on optical discs, hard drives, SSDs, magnetic tape or memory cards instead of film stock. There are three stages of video production: pre-production, production (also known as principal photography), and post-production. Pre-production involves all of the planning aspects of the video production process before filming begins. This includes scriptwriting, scheduling, logistics, and other administrative duties. Production is the phase of video production which captures the video content (electronic moving images) and involves filming the subject(s) of the video. Post-production is the action of selectively combining those video clips through video editing into a finished product that tells a story or communicates a message in either a live event setting (live production), or after an event has occurred (post-production). Currently, the majority of video content is captured through electronic media like an SD card for consumer grade cameras, or on solid state storage and flash storage for professional grade cameras. Video content that is distributed digitally on the internet often appears in common formats such as the MPEG container format (.mpeg, .mpg, .mp4), QuickTime (.mov), Audio Video Interleave (.avi), Windows Media Video (.wmv), and DivX (.avi, .divx). Types of videos There are many different types of video production. The most common include film and TV production, television commercials, internet commercials, corporate videos, product videos, customer testimonial videos, marketing videos, event videos, and wedding videos. The term "Video Production" is reserved only for content creation that is taken through all phases of production (Pre-production, Production, and Post-production) and created with a specific audience in mind. A person filming
https://en.wikipedia.org/wiki/Brianchon%27s%20theorem
In geometry, Brianchon's theorem is a theorem stating that when a hexagon is circumscribed around a conic section, its principal diagonals (those connecting opposite vertices) meet in a single point. It is named after Charles Julien Brianchon (1783–1864). Formal statement Let P1P2P3P4P5P6 be a hexagon formed by six tangent lines of a conic section. Then the lines P1P4, P2P5, P3P6 (extended diagonals, each connecting opposite vertices) intersect at a single point B, the Brianchon point. Connection to Pascal's theorem The polar reciprocal and projective dual of this theorem give Pascal's theorem. Degenerations As for Pascal's theorem, there exist degenerations for Brianchon's theorem too: let two neighboring tangents coincide. Their point of intersection becomes a point of the conic. In the diagram, three pairs of neighboring tangents coincide. This procedure results in a statement on inellipses of triangles. From a projective point of view, the two resulting triangles lie perspectively with the Brianchon point as center. That means there exists a central collineation which maps the one triangle onto the other. But only in special cases is this collineation an affine scaling; for example, it is for a Steiner inellipse, where the Brianchon point is the centroid. In the affine plane Brianchon's theorem is true in both the affine plane and the real projective plane. However, its statement in the affine plane is in a sense less informative and more complicated than that in the projective plane. Consider, for example, five tangent lines to a parabola. These may be considered sides of a hexagon whose sixth side is the line at infinity, but there is no line at infinity in the affine plane. In two instances, a line from a (non-existent) vertex to the opposite vertex would be a line parallel to one of the five tangent lines. Brianchon's theorem stated only for the affine plane would therefore have to be stated differently in such a situation. The projective dual of Brianchon's theorem has exceptions in the affine plane but not in the
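The concurrency asserted by the theorem can be checked numerically for the special case of a circle. The script below is an illustrative sketch, not from the article: it builds six tangent lines to the unit circle at arbitrary angles, intersects consecutive tangents to obtain the hexagon's vertices, and tests that the three principal diagonals are concurrent via a determinant criterion on their homogeneous coordinates.

```python
import numpy as np

# Tangent to the unit circle at angle t: x*cos(t) + y*sin(t) = 1,
# written in homogeneous line coordinates [cos t, sin t, -1].
angles = [0.3, 1.1, 2.0, 3.2, 4.1, 5.5]   # six arbitrary tangency points
tangents = [np.array([np.cos(t), np.sin(t), -1.0]) for t in angles]

# Hexagon vertices: intersections of consecutive tangents (the cross
# product of two lines' homogeneous coordinates is their common point).
verts = [np.cross(tangents[i], tangents[(i + 1) % 6]) for i in range(6)]

# Principal diagonals join opposite vertices: P1P4, P2P5, P3P6.
diagonals = [np.cross(verts[i], verts[i + 3]) for i in range(3)]

# Three lines are concurrent iff the 3x3 determinant of their
# homogeneous coordinates vanishes.
det = np.linalg.det(np.stack(diagonals))
print(abs(det) < 1e-8)   # True: the diagonals share the Brianchon point
```

Changing the six angles leaves the result unchanged, as the theorem guarantees; for a general conic one would apply a projective transformation to the same setup.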
https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3%20Barab%C3%A1si
Albert-László Barabási (born March 30, 1967) is a Romanian-born Hungarian-American physicist, best known for his discoveries in network science and network medicine. He is a distinguished university professor and Robert Gray Professor of Network Science at Northeastern University, and holds appointments at the department of medicine, Harvard Medical School and the department of network and data science at Central European University. He is the former Emil T. Hofmann Professor of Physics at the University of Notre Dame and former associate member of the Center of Cancer Systems Biology (CCSB) at the Dana–Farber Cancer Institute, Harvard University. He discovered in 1999 the concept of scale-free networks and proposed the Barabási–Albert model to explain their widespread emergence in natural, technological and social systems, from the cellular telephone to the World Wide Web or online communities. He is the founding president of the Network Science Society, which sponsors the flagship NetSci conference held yearly since 2006. Birth and education Barabási was born to an ethnic Hungarian family in Cârța, Harghita County, Romania. His father, László Barabási, was a historian, museum director and writer, while his mother, Katalin Keresztes, taught literature, and later became director of a children's theater. He attended a high school specializing in science and mathematics; in the tenth grade, he won a local physics olympiad. Between 1986 and 1989, he studied physics and engineering at the University of Bucharest; during that time, he began doing research on chaos theory, publishing three papers. In 1989, Barabási emigrated to Hungary, together with his father. In 1991, he received a master's degree at Eötvös Loránd University in Budapest, under Tamás Vicsek, before enrolling in the Physics program at Boston University, where he earned a PhD in 1994. His thesis, written under the direction of H. Eugene Stanley, was published by Cambridge University Press under the ti
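The preferential-attachment rule at the heart of the Barabási–Albert model can be sketched in a few lines. The function below is a minimal illustration (its name and parameters are mine, not from the article): each newly added node attaches m edges to existing nodes chosen with probability proportional to their current degree.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow an n-node graph: each new node attaches m edges to existing
    nodes picked with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small complete graph on m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # A node appears in this list once per incident edge, so uniform
    # sampling from it is degree-proportional sampling of nodes.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges
```

In large graphs grown this way, early nodes accumulate far more links than late arrivals (the "rich get richer" effect), producing the heavy-tailed degree distribution characteristic of scale-free networks.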
https://en.wikipedia.org/wiki/Minimal%20realization
In control theory, given any transfer function, any state-space model that is both controllable and observable and has the same input-output behaviour as the transfer function is said to be a minimal realization of the transfer function. The realization is called "minimal" because it describes the system with the minimum number of states. The minimum number of state variables required to describe a system equals the order of the differential equation; more state variables than the minimum can be defined. For example, a second order system can be defined by two or more state variables, with two being the minimal realization. Gilbert's realization Given a matrix transfer function, it is possible to directly construct a minimal state-space realization by using Gilbert's method (also known as Gilbert's realization).
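Minimality can be tested by checking the ranks of the controllability and observability matrices. The sketch below uses an illustrative example of my own, not from the article: G(s) = (s+1)/((s+1)(s+2)) is realized with two states, and the hidden pole-zero cancellation shows up as a rank-deficient observability matrix, so the realization is not minimal and one state suffices.

```python
import numpy as np

# Controllable canonical form of G(s) = (s + 1) / (s^2 + 3s + 2).
A = np.array([[-3.0, -2.0],
              [ 1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 1.0]])        # numerator coefficients of s + 1

ctrb = np.hstack([B, A @ B])      # controllability matrix [B, AB]
obsv = np.vstack([C, C @ A])      # observability matrix [C; CA]

print(np.linalg.matrix_rank(ctrb))  # 2: the pair (A, B) is controllable
print(np.linalg.matrix_rank(obsv))  # 1: not observable -> not minimal
```

After cancelling the common factor (s+1), only the pole at -2 remains, so a first-order realization such as A = [-2], B = [1], C = [1] is minimal.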
https://en.wikipedia.org/wiki/De%20Finetti%20diagram
A de Finetti diagram is a ternary plot used in population genetics. It is named after the Italian statistician Bruno de Finetti (1906–1985) and is used to graph the genotype frequencies of populations, where there are two alleles and the population is diploid. It is based on an equilateral triangle, and Viviani's theorem: the sum of the perpendicular distances from any interior point to the sides of said triangle is a constant equal to the length of the triangle's altitude. Applications in genetics The de Finetti diagram is used extensively in A.W.F. Edwards' book Foundations of Mathematical Genetics. The sum of the lengths, representing allele frequencies, is set to be 1. In its simplest form the diagram can be used to show the range of genotype frequencies for which Hardy–Weinberg equilibrium is satisfied (the curve within the diagram). A. W. F. Edwards and Chris Cannings extended its use to demonstrate the changes that occur in allele frequencies under natural selection. See also Ternary diagram Wahlund effect
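The mapping from genotype frequencies to a point in the diagram follows directly from Viviani's theorem. The helpers below are an illustrative sketch (the function names and the unit-altitude convention are my own assumptions): the heterozygote frequency gives the point's height, and the Hardy–Weinberg curve is the locus traced out as the allele frequency varies.

```python
import math

def hardy_weinberg(p):
    """Genotype frequencies (AA, Aa, aa) at Hardy-Weinberg equilibrium
    for allele frequency p."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

def ternary_point(aa, het, bb):
    """Cartesian position inside an equilateral triangle of unit altitude.
    By Viviani's theorem the perpendicular distances from any interior
    point to the three sides sum to 1, so they can represent the three
    genotype frequencies."""
    assert abs(aa + het + bb - 1.0) < 1e-9
    side = 2.0 / math.sqrt(3.0)      # side length when the altitude is 1
    x = side * (bb + het / 2.0)      # position along the base
    y = het                          # height = heterozygote frequency
    return x, y

freqs = hardy_weinberg(0.5)          # (0.25, 0.5, 0.25)
print(ternary_point(*freqs))         # apex of the HW curve, at height 0.5
```

At p = 0.5 the heterozygote frequency peaks at 0.5, the highest point on the Hardy–Weinberg parabola inside the triangle.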
https://en.wikipedia.org/wiki/Jazz%20DSP
The Jazz DSP, by Improv Systems, is a VLIW embedded digital signal processor architecture with a 2-stage instruction pipeline, and single-cycle execution units. The baseline DSP includes one arithmetic logic unit (ALU), dual memory interfaces, and the control unit (instruction decoder, branch control, task control). Most aspects of the architecture, such as the number and sizes of Memory Interface Units (MIU) or the types and number of Computation Units (CU), datapath width (16 or 32-bit), the number of interrupts and priority levels, and debugging support may be independently configured using a proprietary graphical user interface (GUI) tool. A key feature of the architecture allows the user to add custom instructions and/or custom execution units to enhance the performance of their application. Typical Jazz DSP performance can exceed 1000 million operations per second (MOPS) at a modest 100 MHz clock frequency. The EEMBC Benchmark site gives more details on Jazz DSP performance as compared to other benchmarked processors.
https://en.wikipedia.org/wiki/Web%202.0
Web 2.0 (also known as participative (or participatory) web and social web) refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatibility with other products, systems, and devices) for end users. The term was coined by Darcy DiNucci in 1999 and later popularized by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference in 2004. Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web, but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web. A Web 2.0 website allows users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts the first generation of Web 1.0-era websites where people were limited to viewing content in a passive manner. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications. Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write". On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines. History Web 1.0 Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode an
https://en.wikipedia.org/wiki/Gunter%27s%20chain
Gunter's chain (also known as Gunter's measurement) is a distance measuring device used for surveying. It was designed and introduced in 1620 by English clergyman and mathematician Edmund Gunter (1581–1626). It enabled plots of land to be accurately surveyed and plotted, for legal and commercial purposes. Gunter developed an actual measuring chain of 100 links. These, the chain and the link, became statutory measures in England and subsequently the British Empire. Description The chain is 66 feet (20.1 m) long and divided into 100 links, usually marked off into groups of 10 by brass rings or tags which simplify intermediate measurement. Each link is thus 7.92 inches (20.1 cm) long. A quarter chain, or 25 links, measures 16.5 feet (5.03 m), which is one rod (or pole). Ten chains measure a furlong and 80 chains measure a statute mile. Gunter's chain reconciled two seemingly incompatible systems: the traditional English land measurements, based on the number four, and decimals based on the number 10. Since an acre measured 10 square chains in Gunter's system, the entire process of land area measurement could be computed using measurements in chains, and then converted to acres by dividing the results by 10. Hence 10 chains by 10 chains (100 square chains) equals 10 acres, and 5 chains by 5 chains (25 square chains) equals 2.5 acres. By the 1670s the chain and the link had become statutory units of measurement in England. Method The method of surveying a field or other parcel of land with Gunter's chain is to first determine corners and other significant locations, and then to measure the distance between them, taking two points at a time. The surveyor is assisted by a chainman. A ranging rod (usually a prominently coloured wooden pole) is placed in the ground at the destination point. Starting at the originating point the chain is laid out towards the ranging rod, and the surveyor then directs the chainman to make the chain perfectly straight and pointing directly at the ranging rod. A pin is put in the ground at the forw
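The decimal shortcut described above, measuring in chains and dividing square chains by 10, is easy to show in code (a minimal sketch; the names are mine):

```python
FEET_PER_CHAIN = 66          # 1 chain = 100 links = 4 rods = 66 feet
SQUARE_CHAINS_PER_ACRE = 10  # the key decimal shortcut of Gunter's system

def acres(length_chains, width_chains):
    """Area of a rectangular plot surveyed in chains, in acres."""
    return length_chains * width_chains / SQUARE_CHAINS_PER_ACRE

print(acres(10, 10))  # 10.0 acres, as in the text
print(acres(5, 5))    # 2.5 acres
```

No awkward conversion through square feet is needed: the surveyor works entirely in chains and shifts the decimal point at the end.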
https://en.wikipedia.org/wiki/Evolutionary%20neuroscience
Evolutionary neuroscience is the scientific study of the evolution of nervous systems. Evolutionary neuroscientists investigate the evolution and natural history of nervous system structure, functions and emergent properties. The field draws on concepts and findings from both neuroscience and evolutionary biology. Historically, most empirical work has been in the area of comparative neuroanatomy, and modern studies often make use of phylogenetic comparative methods. Selective breeding and experimental evolution approaches are also being used more frequently. Conceptually and theoretically, the field is related to fields as diverse as cognitive genomics, neurogenetics, developmental neuroscience, neuroethology, comparative psychology, evo-devo, behavioral neuroscience, cognitive neuroscience, behavioral ecology, biological anthropology and sociobiology. Evolutionary neuroscientists examine changes in genes, anatomy, physiology, and behavior to study the evolution of changes in the brain. They study a multitude of processes including the evolution of vocal, visual, auditory, taste, and learning systems as well as language evolution and development. In addition, evolutionary neuroscientists study the evolution of specific areas or structures in the brain such as the amygdala, forebrain and cerebellum as well as the motor or visual cortex. History Studies of the brain began during ancient Egyptian times but studies in the field of evolutionary neuroscience began after the publication of Darwin's On the Origin of Species in 1859. At that time, brain evolution was largely viewed in relation to the incorrect scala naturae. Phylogeny and the evolution of the brain were still viewed as linear. During the early 20th century, there were several prevailing theories about evolution. Darwinism was based on the principles of natural selection and variation, Lamarckism was based on the passing down of acquired traits, Orthogenesis was based on the assumption that te
https://en.wikipedia.org/wiki/General%20insurance
General insurance or non-life insurance policies, including automobile and homeowners policies, provide payments depending on the loss from a particular financial event. General insurance is typically defined as any insurance that is not determined to be life insurance. It is called property and casualty insurance in the United States and Canada and non-life insurance in Continental Europe. In the United Kingdom, insurance is broadly divided into three areas: personal lines, commercial lines and London market. The London market insures large commercial risks such as supermarkets, football players, corporation risks, and other very specific risks. It consists of a number of insurers, reinsurers, P&I Clubs, brokers and other companies that are typically physically located in the City of London. Lloyd's of London is a major participant in this market. The London market also participates in personal lines and commercial lines, domestic and foreign, through reinsurance. Commercial lines products are usually designed for relatively small legal entities. These would include workers' compensation (employers liability), public liability, product liability, commercial fleet and other general insurance products sold in a relatively standard fashion to many organisations. There are many companies that supply comprehensive commercial insurance packages for a wide range of different industries, including shops, restaurants and hotels. Personal lines products are designed to be sold in large quantities. This would include autos (private car), homeowners (household), pet insurance, creditor insurance and others. ACORD, which is the insurance industry global standards organization, has standards for personal and commercial lines and has been working with the Australian General Insurers to develop those XML standards, standard applications for insurance, and certificates of currency. Types of general insurance General insurance can be categorised into the following: Motor Insurance
https://en.wikipedia.org/wiki/Gynandromorphism
A gynandromorph is an organism that contains both male and female characteristics. The term comes from the Greek γυνή (gynē) 'female', ἀνήρ (anēr) 'male', and μορφή (morphē) 'form', and is used mainly in the field of entomology. Gynandromorphism is most frequently recognized in organisms that have strong sexual dimorphism such as certain butterflies, spiders, and birds, but has been recognized in numerous other types of organisms. Occurrence Gynandromorphism has been noted in Lepidoptera (butterflies and moths) since the 1700s. It has also been observed in crustaceans, such as lobsters and crabs, in spiders, ticks, flies, locusts, crickets, dragonflies, ants, termites, bees, lizards, snakes, rodents, and many species of birds. A notable example in birds is the zebra finch. These birds have lateralised brain structures in the face of a common steroid signal, providing strong evidence for a non-hormonal primary sex mechanism regulating brain differentiation. Pattern of distribution of male and female tissues in a single organism A gynandromorph can have bilateral symmetry—one side female and one side male. Alternatively, the distribution of male and female tissue can be more haphazard. Bilateral gynandromorphy arises very early in development, typically when the organism has between 8 and 64 cells. Later stages produce a more random pattern. Causes The cause of this phenomenon is typically (but not always) an event in mitosis during early development. While the organism contains only a few cells, one of the dividing cells does not separate its sex chromosomes in the typical way. This leads to one of the two cells having sex chromosomes that cause male development and the other cell having chromosomes that cause female development. For example, an XY cell undergoing mitosis duplicates its chromosomes, becoming XXYY. Usually this cell would divide into two XY cells, but on rare occasions the cell may divide into an X cell and an XYY cell. If this happens early in development
https://en.wikipedia.org/wiki/Workprint
A workprint is a rough version of a motion picture, used by the film editor(s) during the editing process. Such copies generally contain original recorded sound that will later be re-dubbed, stock footage as placeholders for missing shots or special effects, and animation tests for in-production animated shots or sequences. For most of the first century of filmmaking, workprints were done using second-generation prints from the original camera negatives. After the editor and director approved the final edit of the workprint, the same edits were made to the negative. With the conversion to digital editing, workprints are now generally created on a non-linear editing system using telecined footage from the original film or video sources (in contrast to a pirate "telecine", which is made with a much higher-generation film print). Occasionally, early digital workprints of films have been bootlegged and made available on the Internet. They sometimes appear months in advance of an official release. There are also director's cut versions of films that are only available on bootleg, such as the workprint version of Richard Williams' The Thief and the Cobbler. Although movie studios generally do not make full-length workprints readily available to the public, there are exceptions. Examples include the "Work-In-Progress" version of Beauty and the Beast (essentially unfinished footage overlaid with the finalized sound mix, included with the DVD release), and the Denver/Dallas pre-release version of Blade Runner. Deleted scenes or bonus footage included on DVD releases are sometimes left in workprint format as well, e.g. the Scrubs DVD extras. A workprint as source for a leaked television show is rather unusual, but it happened with the third season's first episode of Homeland a month before it aired. Notable examples on the internet Hulk – Appeared on the internet two weeks before the film opening. Star Wars Episode III: Revenge of the Sith – Appeared on the web
https://en.wikipedia.org/wiki/Battleship%20%28puzzle%29
The Battleship puzzle (sometimes called Bimaru, Yubotu, Solitaire Battleships or Battleship Solitaire) is a logic puzzle based on the Battleship guessing game. It and its variants have appeared in several puzzle contests, including the World Puzzle Championship, and puzzle magazines, such as Games magazine. Solitaire Battleship was invented in Argentina by Jaime Poniachik and was first featured in 1982 in the Argentine magazine . Battleship gained more widespread popularity after its international debut at the first World Puzzle Championship in New York City in 1992. Battleship appeared in Games magazine the following year and remains a regular feature of the magazine. Variants of Battleship have emerged since the puzzle's inclusion in the first World Puzzle Championship. Battleship is played in a grid of squares that hides ships of different sizes. Numbers alongside the grid indicate how many squares in a row or column are occupied by part of a ship. History The solitaire version of Battleship was invented in Argentina in 1982 under the name Batalla Naval, with the first published puzzles appearing in 1982 in the Spanish magazine . Battleship was created by the magazine's founder, Jaime Poniachik, along with its editors Eduardo Abel Gimenez, Jorge Varlotta, and Daniel Samoilovich. After 1982, no more Battleship puzzles were published until 1987, when they appeared in , a renamed version of . The publishing company of regularly publishes Battleship puzzles in its monthly magazine . Battleship made its international debut at the first World Puzzle Championship in New York in 1992 and met with success. The next World Puzzle Championship in 1993 featured a variant of Battleship that omitted some of the row and column numbers. Battleship was first published in Games magazine in 1993, the year after the first World Puzzle Championship. Other variants later emerged, including Hexagonal Battleship, 3D Battleship, and Diagonal Battleship. Rules In Battleship,
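The row and column clues described in the rules can be validated mechanically. The snippet below is an illustrative sketch (the grid encoding with '#' for ship segments and '.' for water is my own convention, not part of the puzzle's standard notation); it checks only the edge counts, not ship shapes or adjacency constraints.

```python
def clues_match(grid, row_clues, col_clues):
    """Check that the number of ship cells in every row and column
    equals the clue printed alongside the grid."""
    rows_ok = all(r.count('#') == c for r, c in zip(grid, row_clues))
    cols = [''.join(col) for col in zip(*grid)]
    cols_ok = all(col.count('#') == c for col, c in zip(cols, col_clues))
    return rows_ok and cols_ok

grid = ["##..",
        "....",
        "#..#",
        "#..#"]
print(clues_match(grid, [2, 0, 2, 2], [3, 1, 0, 2]))  # True
```

A full solver would add the remaining constraints: the fleet's ship sizes must be matched exactly, and no two ships may touch, even diagonally.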
https://en.wikipedia.org/wiki/Bolt%20cutter
A bolt cutter, sometimes called bolt cropper, is a tool used for cutting bolts, chains, padlocks, rebar and wire mesh. It typically has long handles and short blades, with compound hinges to maximize leverage and cutting force. A typical bolt cutter yields a cutting force many times greater than the force applied to the handles. There are different types of cutting blades for bolt cutters, including angle cut, center cut, shear cut, and clipper cut blades. Bolt cutters are usually available in 12, 14, 18, 24, 30, 36 and 42 inches (30.5, 35.6, 46, 61, 76, 91.4 and 107 cm) in length. The length is measured from the tip of the jaw to the end of the handle. Angle cut has the cutter head angled for easier insertion. Typical angling is 25 to 35 degrees. Center cut has the blades equidistant from the two faces of the blade. Shear cut has the blades inverted to each other (such as normal paper scissor blades). Clipper cut has the blades flush against one face (for cutting against flat surfaces). Bolt cutters with fiberglass handles can be used for cutting live electrical wires and are useful during rescue operations. The fiberglass handles have another advantage of being lighter in weight than the conventional drop forged or solid pipe handles, making it easier to carry to the place of operation. Cultural significance The tools became iconic at the Greenham Common Women's Peace Camp, where protestors used bolt cutters to remove fencing around the RAF airbase. A Greenham banner displaying bolt cutters, together with a hanging of Greenham fence wire, was displayed at the Pine Gap Women's Peace Camp in Australia.
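The effect of the compound hinge can be illustrated with a back-of-the-envelope calculation (the ratios below are hypothetical, chosen only to show how the two lever stages multiply):

```python
# Two lever stages in series: the handles drive a compound hinge,
# and the mechanical advantages of the stages multiply.
handle_advantage = 10.0   # hypothetical handle-to-pivot lever ratio
hinge_advantage = 5.0     # hypothetical compound-hinge lever ratio
hand_force_n = 100.0      # force on the handles, in newtons

blade_force_n = hand_force_n * handle_advantage * hinge_advantage
print(blade_force_n)  # 5000.0 N at the cutting edges
```

This multiplication of stages is why short blades paired with long handles can shear hardened steel that a simple single-pivot cutter of the same size could not.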
https://en.wikipedia.org/wiki/Eastern%20meadow%20vole
The eastern meadow vole (Microtus pennsylvanicus), sometimes called the field mouse or meadow mouse, is a North American vole found in eastern Canada and the United States. Its range extends farther south along the Atlantic coast. The western meadow vole, Florida salt marsh vole, and beach vole were formerly considered regional variants or subspecies of M. pennsylvanicus, but have all since been designated as distinct species. The eastern meadow vole is active year-round, usually at night. It also digs burrows, where it stores food for the winter and females give birth to their young. Although these animals tend to live close together, they are aggressive towards one another. This is particularly evident in males during the breeding season. They can cause damage to fruit trees, garden plants, and commercial grain crops. Taxonomy The species was formerly grouped with the western meadow vole (M. drummondii) and the Florida salt marsh vole (M. dukecampbelli) as a single species with a very large range, but genetic evidence indicates that these are all distinct species. Distribution The eastern meadow vole is found throughout eastern North America. It ranges from Labrador and New Brunswick south to South Carolina and extreme northeastern Georgia; west through Tennessee to Ohio. West of Ohio, it is replaced by the western meadow vole. Several subspecies are found on eastern islands, including the beach vole (M. p. breweri) and the extinct Gull Island vole. Plant communities Eastern meadow voles are most commonly found in grasslands, preferring moister areas, but are also found in wooded areas. In east-central Ohio, eastern meadow voles were captured in reconstructed common cattail (Typha latifolia) wetlands. In Virginia, eastern meadow voles were least abundant in eastern red cedar (Juniperus virginiana) glades and most abundant in fields with dense grass cover. Habits Eastern meadow voles are active year-round and day or night, with no clear 24-hour rhythm in ma
https://en.wikipedia.org/wiki/Fibrodysplasia%20ossificans%20progressiva
Fibrodysplasia ossificans progressiva (; FOP), also called Münchmeyer disease or formerly myositis ossificans progressiva, is an extremely rare connective tissue disease in which fibrous connective tissue such as muscle, tendons, and ligaments turn into bone tissue. It is the only known medical condition where one organ system changes into another. It is a severe, disabling disorder with no cure or treatment. FOP is caused by a mutation of the gene ACVR1. The mutation affects the body's repair mechanism, causing fibrous tissue including muscle, tendons, and ligaments to become ossified, either spontaneously or when damaged as the result of trauma. In many cases, otherwise minor injuries can cause joints to become permanently fused as new bone forms, replacing the damaged muscle tissue. This new bone formation (known as "heterotopic ossification") eventually forms a secondary skeleton and progressively restricts the patient's ability to move. Bone formed as a result of this process is identical to "normal" bone, simply in improper locations. Circumstantial evidence suggests that the disease can cause joint degradation separate from its characteristic bone growth. Surgical removal of the extra bone growth has been shown to cause the body to "repair" the affected area with additional bone. Although the rate of bone growth may differ depending on the patient, the condition ultimately leaves sufferers immobilized as new bone replaces musculature and fuses with the existing skeleton. This has earned FOP the nickname "stone man disease". Signs and symptoms For unknown reasons, children born with FOP often have malformed big toes, sometimes missing a joint or, in other cases, simply presenting with a notable lump at the minor joint. The first "flare-up" that leads to the formation of FOP bone usually occurs before the age of 10. The bone growth generally progresses from the top of the body downward, just as bones grow in fetuses. A child with FOP will typically develop
https://en.wikipedia.org/wiki/Gray%20death
Gray death is a slang term which refers to a potent mixture of synthetic opioids, for example benzimidazole opioids or fentanyl analogues, often sold on the street misleadingly as "heroin". However, other substances such as cocaine have also been laced with opioids that resulted in illness and death. Etymology The first batch of gray death had a characteristic gray color. Detected samples Samples have been found to contain heroin, fentanyl, carfentanil, and the designer drug U-47700. A mixture of drugs misleadingly called 2C-B had been found to contain fentanyl in Argentina. Deaths In February 2022, 24 people in Argentina died after using cocaine laced with carfentanil. Dangers As with other illicit narcotics, gray death carries a higher risk of serious adverse effects than prescribed opioids due to the unknown and inconsistent composition of the product. As such, even experienced opioid users risk serious injury or death when taking this drug mixture. Treatment Reversing a gray death overdose may require multiple doses of naloxone. By contrast, an overdose from morphine or from high-purity heroin would ordinarily need only one dose. This difficulty is regularly encountered when treating overdoses of high-affinity opioids in the fentanyl chemical family or with buprenorphine. The greater affinity of these substances for the μ-opioid receptor impedes the activity of naloxone, which is an antagonist at the receptor. Increasing the dosage of naloxone or its frequency of administration may be required to counteract respiratory depression. History The substance first appeared in America and was thought to be a unique chemical compound before being identified as a mixture of drugs. See also List of opioids List of designer drugs Opioid epidemic in the United States Mickey Finn (drugs) Whoonga
https://en.wikipedia.org/wiki/Phosphatidic%20acid
Phosphatidic acids are anionic phospholipids important to cell signaling and direct activation of lipid-gated ion channels. Hydrolysis of phosphatidic acid gives rise to one molecule each of glycerol and phosphoric acid and two molecules of fatty acids. They constitute about 0.25% of phospholipids in the bilayer. Structure Phosphatidic acid consists of a glycerol backbone, with, in general, a saturated fatty acid bonded to carbon-1, an unsaturated fatty acid bonded to carbon-2, and a phosphate group bonded to carbon-3. Formation and degradation Besides de novo synthesis, PA can be formed in three ways: By phospholipase D (PLD), via the hydrolysis of the P-O bond of phosphatidylcholine (PC) to produce PA and choline. By the phosphorylation of diacylglycerol (DAG) by DAG kinase (DAGK). By the acylation of lysophosphatidic acid by lysoPA-acyltransferase (LPAAT); this is the most common pathway. The glycerol 3-phosphate pathway provides the de novo synthesis of PA. In addition, PA can be converted into DAG by lipid phosphate phosphohydrolases (LPPs) or into lyso-PA by phospholipase A (PLA). Roles in the cell The role of PA in the cell can be divided into three categories: PA is the precursor for the biosynthesis of many other lipids. The physical properties of PA influence membrane curvature. PA acts as a signaling lipid, recruiting cytosolic proteins to appropriate membranes (e.g., sphingosine kinase 1). PA plays a very important role in phototransduction in Drosophila. PA is a lipid ligand that gates ion channels. See also lipid-gated ion channels. The first three roles are not mutually exclusive. For example, PA may be involved in vesicle formation by promoting membrane curvature and by recruiting the proteins to carry out the much more energetically unfavourable task of neck formation and pinching. Roles in biosynthesis PA is a vital cell lipid that acts as a biosynthetic precursor for the formation (directly or indirectly) of all acylglycerol lipi
https://en.wikipedia.org/wiki/Phase%20response%20curve
A phase response curve (PRC) illustrates the transient change (phase response) in the cycle period of an oscillation induced by a perturbation as a function of the phase at which it is received. PRCs are used in various fields; examples of biological oscillations are the heartbeat, circadian rhythms, and the regular, repetitive firing observed in some neurons in the absence of noise. In circadian rhythms In humans and animals, there is a regulatory system that governs the phase relationship of an organism's internal circadian clock to a regular periodicity in the external environment (usually governed by the solar day). In most organisms, a stable phase relationship is desired, though in some cases the desired phase will vary by season, especially among mammals with seasonal mating habits. In circadian rhythm research, a PRC illustrates the relationship between a chronobiotic's time of administration (relative to the internal circadian clock) and the magnitude of the treatment's effect on circadian phase. Specifically, a PRC is a graph showing, by convention, time of the subject's endogenous day along the x-axis and the amount of the phase shift (in hours) along the y-axis. Each curve has one peak and one trough in each 24-hour cycle. Relative circadian time is plotted against phase-shift magnitude. The treatment is usually narrowly specified as a set intensity and colour and duration of light exposure to the retina and skin, or a set dose and formulation of melatonin. These curves are often consulted in the therapeutic setting. Normally, the body's various physiological rhythms will be synchronized within an individual organism (human or animal), usually with respect to a master biological clock. Of particular importance is the sleep–wake cycle. Various sleep disorders and external stresses (such as jet lag) can interfere with this. People with non-24-hour sleep–wake disorder often experience an inability to maintain a consistent internal clock. Extreme chron
https://en.wikipedia.org/wiki/Positive%20and%20negative%20predictive%20values
The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively. The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic. The PPV and NPV are not intrinsic to the test (as true positive rate and true negative rate are); they depend also on the prevalence. Both PPV and NPV can be derived using Bayes' theorem. Although sometimes used synonymously, a positive predictive value generally refers to what is established by control groups, while a post-test probability refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal. In information retrieval, the PPV statistic is often called the precision. Definition Positive predictive value (PPV) The positive predictive value (PPV), or precision, is defined as PPV = TP / (TP + FP), where a "true positive" (TP) is the event that the test makes a positive prediction, and the subject has a positive result under the gold standard, and a "false positive" (FP) is the event that the test makes a positive prediction, and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value would be zero. The PPV can also be computed from sensitivity, specificity, and the prevalence of the condition: PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)); cf. Bayes' theorem The complement of the PPV is the false discovery rate (FDR): FDR = 1 − PPV = FP / (TP + FP) Negative predictive value (NPV) The negative predictive value is defined as: NPV = TN / (TN + FN), where a "true negative" (TN) is the event that the test makes a negative prediction, and the subject has a negative result under the gold standard, and a "false negative" is the eve
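The definitions above can be sketched numerically. A minimal illustration (the confusion-matrix counts below are hypothetical, chosen so that the count-based and rate-based routes to the PPV agree):

```python
def ppv(tp, fp):
    # Positive predictive value: fraction of positive predictions that are correct.
    return tp / (tp + fp)

def npv(tn, fn):
    # Negative predictive value: fraction of negative predictions that are correct.
    return tn / (tn + fn)

def ppv_from_rates(sensitivity, specificity, prevalence):
    # Bayes' theorem form: P(condition present | positive test).
    num = sensitivity * prevalence
    return num / (num + (1 - specificity) * (1 - prevalence))

# Hypothetical screening of 1000 people, prevalence 0.1,
# sensitivity 0.9, specificity 0.95: TP=90, FP=45, TN=855, FN=10.
print(ppv(tp=90, fp=45))   # 2/3: two thirds of positive results are true positives
print(npv(tn=855, fn=10))  # ~0.988
print(ppv_from_rates(0.9, 0.95, 0.1))  # same 2/3, via the rate-based formula
```

Note how strongly the PPV depends on prevalence: rerunning `ppv_from_rates(0.9, 0.95, 0.01)` drops the PPV below 0.16 even though the test itself is unchanged.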
https://en.wikipedia.org/wiki/Arc%20measurement
Arc measurement, sometimes degree measurement (), is the astrogeodetic technique of determining the radius of Earth – more specifically, the local Earth radius of curvature of the figure of the Earth – by relating the latitude difference (sometimes also the longitude difference) and the geographic distance (arc length) surveyed between two locations on Earth's surface. The most common variant involves only astronomical latitudes and the meridian arc length and is called meridian arc measurement; other variants may involve only astronomical longitude (parallel arc measurement) or both geographic coordinates (oblique arc measurement). Arc measurement campaigns in Europe were the precursors to the International Association of Geodesy (IAG). History The first known arc measurement was performed by Eratosthenes (240 BC) between Alexandria and Syene in what is now Egypt, determining the radius of the Earth with remarkable accuracy. In the early 8th century, Yi Xing performed a similar survey. The French physician Jean Fernel measured the arc in 1528. The Dutch geodesist Snellius (~1620) repeated the experiment between Alkmaar and Bergen op Zoom using more modern geodetic instrumentation (Snellius' triangulation). Later arc measurements aimed at determining the flattening of the Earth ellipsoid by measuring at different geographic latitudes. The first of these was the French Geodesic Mission, commissioned by the French Academy of Sciences in 1735–1738, involving measurement expeditions to Lapland (Maupertuis et al.) and Peru (Pierre Bouguer et al.). Struve measured a geodetic control network via triangulation between the Arctic Sea and the Black Sea, the Struve Geodetic Arc. Bessel compiled several meridian arcs to compute the famous Bessel ellipsoid (1841). Nowadays, the method has been replaced by worldwide geodetic networks and by satellite geodesy. List of other instances Al-Ma'mun's arc measurement Posidonius' arc measurement Swedish–Russian Arc-of-Meridian
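Under a simplified spherical model, the principle behind a meridian arc measurement reduces to R = s / Δφ, with the latitude difference Δφ in radians. A rough sketch (the arc figures are illustrative, not taken from any of the surveys above; real campaigns fit an ellipsoid rather than a sphere):

```python
import math

def radius_from_arc(arc_length_km, lat_diff_deg):
    # Spherical model: arc length s = R * central angle (radians), so R = s / delta_phi.
    # The astronomical latitude difference approximates the central angle.
    return arc_length_km / math.radians(lat_diff_deg)

# Illustrative numbers: a surveyed meridian arc of 111.2 km spanning
# 1 degree of astronomical latitude.
R = radius_from_arc(111.2, 1.0)
print(round(R))  # ~6371 km, close to the mean Earth radius
```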
https://en.wikipedia.org/wiki/Somitomere
In the developing vertebrate embryo, the somitomeres (or somatomeres) are collections of cells that are derived from the loose masses of paraxial mesoderm that are found alongside the developing neural tube. In human embryogenesis they appear towards the end of the third gestational week. The approximately 50 pairs of somitomeres in the human embryo begin developing in the cranial (head) region, continuing in a caudal (tail) direction until the end of week four. Development The first seven somitomeres give rise to the striated muscles of the face, jaws, and throat. The remaining somitomeres, likely driven by periodic expression of the hairy gene, begin expressing adhesion proteins such as N-cadherin and fibronectin, compact, and bud off, forming somites. The somites give rise to the vertebral column (sclerotome), associated muscles (myotome), and overlying dermis (dermatome). There are a total of 37 somite pairs at the end of the fifth week of development, after the first occipital somite and 5-7 coccygeal somites disappear from the original 42-44 somites.
https://en.wikipedia.org/wiki/Wolfsangel
(, translation "wolf's hook") or () is a heraldic charge in countries like Germany, the Netherlands and eastern France, which was inspired by medieval European wolf traps that consisted of a Z-shaped metal hook (called the Wolfsangel, or the Crampon in French) that was hung by a chain from a crescent-shaped metal bar (called the , or the in French). The stylized symbol of the Z-shape (also called the , meaning the "double-hook") can include a central horizontal bar to give a Ƶ-symbol, which can be reversed and/or rotated; it is sometimes mistaken for an ancient rune due to its similarity to the "gibor rune" of the pseudo Armanen runes. Early medieval pagans believed that the symbol possessed magical powers, and could ward off wolves. It became an early symbol of German liberty and independence after its adoption as an emblem in various 15th-century peasant revolts, and also in the 17th-century Thirty Years War. In pre-war Germany, interest in the was revived by the popularity of 's 1910 novel , which follows a hero in the Thirty Years War. The Ƶ-symbol was later adopted by the Nazi Party, and was used by various German Wehrmacht and SS units such as the and the Waffen-SS Division . The Anti-Defamation League, and others, list the Ƶ-symbol as a hate symbol and a neo-Nazi symbol. Origins Hunting tool The was a European medieval wolf hunting tool whereby the hook was concealed inside a chunk of meat that would impale the unsuspecting wolf, which would gulp the meat in one movement. The tool was developed by attaching the hook via a chain or rope to a larger bar (often with a double crescent or half-moon shape per photo opposite) that was lodged between the overhanging branches of a tree. This would encourage the wolf to jump up to gulp the hanging chunk of meat (with the hook concealed inside), thus further impaling itself in the manner of a fish caught on a fishing hook. Medieval hunters were known to use "blood trails" to lead the wolf to the trap and also us
https://en.wikipedia.org/wiki/Charles%20Yanofsky
Charles Yanofsky (April 17, 1925 – March 16, 2018) was an American geneticist on the faculty of Stanford University who contributed to the establishment of the one gene-one enzyme hypothesis and discovered attenuation, a riboswitch mechanism in which messenger RNA changes shape in response to a small molecule and thus alters its binding ability for the regulatory region of a gene or operon. Education and early life Charles Yanofsky was born on April 17, 1925, in New York. He was one of the earliest graduates of the Bronx High School of Science, then studied at the City College of New York and completed his degree in biochemistry in spite of having had his education interrupted by military service in World War II, including participation in the Battle of the Bulge. In 1948, having returned and completed college, he took up graduate work towards his master's degree and PhD, both granted by Yale University. He pursued postdoctoral work at Yale for a time, completing work started during his PhD training. Career and research Yanofsky joined the Case Western Reserve Medical School faculty in 1954. He moved to the faculty at Stanford University as an Associate Professor in 1958. In 1964, Yanofsky and colleagues established that gene sequences and protein sequences are colinear in bacteria. Yanofsky showed that changes in DNA sequence can produce changes in protein sequence at corresponding positions. His work is considered the best evidence in favor of the one gene-one enzyme hypothesis. His laboratory also revealed how controlled alterations in RNA shapes allow RNA to serve as a regulatory molecule in both bacterial and animal cells. His graduate students Iwona Stroynowski and Mitzi Kuroda discovered the process of attenuation of expression based on regulated binding ability of the five-prime untranslated region of the messenger RNA for the bacterial tryptophan operon. They had thus discovered the first regulatory riboswitch, although that terminology was not used
https://en.wikipedia.org/wiki/Reginald%20Punnett
Reginald Crundall Punnett FRS (; 20 June 1875 – 3 January 1967) was a British geneticist who co-founded, with William Bateson, the Journal of Genetics in 1910. Punnett is probably best remembered today as the creator of the Punnett square, a tool still used by biologists to predict the probability of possible genotypes of offspring. His Mendelism (1905) is sometimes said to have been the first textbook on genetics; it was probably the first popular science book to introduce genetics to the public. Life and work Reginald Punnett was born in 1875 in the town of Tonbridge in Kent, England. While recovering from a childhood bout of appendicitis, Punnett became acquainted with Jardine's Naturalist's Library and developed an interest in natural history. Punnett was educated at Clifton College. Attending Gonville and Caius College, Cambridge, Punnett earned a bachelor's degree in zoology in 1898 and a master's degree in 1901. Between these degrees he worked as a demonstrator and part-time lecturer at the University of St. Andrews' Natural History Department. In October 1901, Punnett was back at Cambridge when he was elected to a Fellowship at Gonville and Caius College, working in zoology, primarily the study of worms, specifically nemerteans. It was during this time that he and William Bateson began a research collaboration, which lasted several years. A Punnett square When Punnett was an undergraduate, Gregor Mendel's work on inheritance was largely unknown and unappreciated by scientists. However, in 1900, Mendel's work was rediscovered by Carl Correns, Erich Tschermak von Seysenegg and Hugo de Vries. William Bateson became a proponent of Mendelian genetics and had Mendel's work translated into English. It was with Bateson that Reginald Punnett helped establish the new science of genetics at Cambridge. He, Bateson and Saunders co-discovered genetic linkage through experiments with chickens and sweet peas. In 1908, unable to explain how a dominant allele
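The mechanics of a Punnett square are easy to sketch in code. A minimal illustration for a single-gene cross (the function name and allele notation are invented for this example): each parent contributes one of its two alleles, every combination is tabulated, and the cell counts give the genotype probabilities.

```python
from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    # Each parent genotype is a 2-character string of alleles, e.g. "Bb".
    # Cross every gamete of one parent with every gamete of the other;
    # sorting puts the uppercase (dominant) allele first, so "bB" and "Bb"
    # count as the same genotype.
    cells = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(cells.values())
    return {genotype: n / total for genotype, n in cells.items()}

# Monohybrid cross of two heterozygotes: the classic 1:2:1 genotype ratio.
print(punnett("Bb", "Bb"))  # {'BB': 0.25, 'Bb': 0.5, 'bb': 0.25}
```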
https://en.wikipedia.org/wiki/Wake-on-ring
Wake-on-Ring (WOR), sometimes referred to as Wake-on-Modem (WOM), is a specification that allows supported computers and devices to "wake up" or turn on from a sleeping, hibernating or "soft off" state (e.g. ACPI state G1 or G2), and begin operation. The basic premise is that a special signal is sent over phone lines to the computer through its dial-up modem, telling it to fully power-on and begin operation. Common uses were archive databases and BBSes, although hobbyist use was significant. Fax machines use a similar system, in which they are mostly idle until receiving an incoming fax signal, which spurs operation. This style of remote operation has mostly been supplanted by Wake-on-LAN, which is newer but works in much the same way. See also ACPI RS-232 Signals, Ring Indicator Wake-on-LAN Additional resources "Wake on Modem" entry from Smart Computing Encyclopedia Networking standards BIOS Unified Extensible Firmware Interface Remote control
https://en.wikipedia.org/wiki/Propositional%20variable
In mathematical logic, a propositional variable (also called a sentential variable or sentential letter) is an input variable (that can either be true or false) of a truth function. Propositional variables are the basic building-blocks of propositional formulas, used in propositional logic and higher-order logics. Uses Formulas in logic are typically built up recursively from some propositional variables, some number of logical connectives, and some logical quantifiers. Propositional variables are the atomic formulas of propositional logic, and are often denoted using capital Roman letters such as , and . Example In a given propositional logic, a formula can be defined as follows: Every propositional variable is a formula. Given a formula X, the negation ¬X is a formula. Given two formulas X and Y, and a binary connective b (such as the logical conjunction ∧), the expression (X b Y) is a formula. (Note the parentheses.) Through this construction, all of the formulas of propositional logic can be built up from propositional variables as a basic unit. Propositional variables should not be confused with the metavariables, which appear in the typical axioms of propositional calculus; the latter effectively range over well-formed formulae, and are often denoted using lower-case Greek letters such as , and . Predicate logic Propositional variables with no object variables such as x and y attached to predicate letters such as Px and xRy, having instead individual constants a, b, ... attached to predicate letters, are propositional constants Pa, aRb. These propositional constants are atomic propositions, not containing propositional operators. The internal structure of propositional variables contains predicate letters such as P and Q, in association with bound individual variables (e.g., x, y), individual constants such as a and b (singular terms from a domain of discourse D), ultimately taking a form such as Pa, aRb. (or with parenthesis, and ). Propositional l
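The recursive definition in the Example section can be sketched directly in code. A minimal illustration (the class names and choice of connectives are this sketch's, not the article's): variables are the base case, and negations and bracketed binary compounds are built from smaller formulas.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    # Base case: every propositional variable is a formula.
    name: str
    def __str__(self):
        return self.name

@dataclass(frozen=True)
class Not:
    # Given a formula X, the negation ¬X is a formula.
    operand: object
    def __str__(self):
        return f"¬{self.operand}"

@dataclass(frozen=True)
class BinOp:
    # Given formulas X, Y and a binary connective b, (X b Y) is a formula.
    connective: str  # e.g. "∧" or "∨"
    left: object
    right: object
    def __str__(self):
        # The parenthesis rule from the text: every binary compound is bracketed.
        return f"({self.left} {self.connective} {self.right})"

P, Q = Var("P"), Var("Q")
print(BinOp("∧", P, Not(Q)))  # (P ∧ ¬Q)
```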
https://en.wikipedia.org/wiki/Conservation%20genetics
Conservation genetics is an interdisciplinary subfield of population genetics that aims to understand the dynamics of genes in a population for the purpose of natural resource management and extinction prevention. Researchers involved in conservation genetics come from a variety of fields including population genetics, natural resources, molecular ecology, biology, evolutionary biology, and systematics. Genetic diversity is one of the three fundamental measures of biodiversity (along with species diversity and ecosystem diversity), so it is an important consideration in the wider field of conservation biology. Genetic diversity Genetic diversity is the total number of genetic characteristics in a species. It can be measured in several ways: observed heterozygosity, expected heterozygosity, the mean number of alleles per locus, or the percentage of polymorphic loci. Genetic diversity at the population level is a crucial focus for conservation genetics as it influences both the health and long-term survival of populations: decreased genetic diversity has been associated with reduced fitness, such as high juvenile mortality, diminished population growth, reduced immunity, and ultimately, higher extinction risk. Heterozygosity, a fundamental measurement of genetic diversity in population genetics, plays an important role in determining the chance of a population surviving environmental change, novel pathogens not previously encountered, as well as the average fitness of a population over successive generations. Heterozygosity is also deeply connected, in population genetics theory, to population size (which itself clearly has a fundamental importance to conservation). All things being equal, small populations will be less heterozygous, across their whole genomes, than comparable but larger populations. This lower heterozygosity (i.e. low genetic diversity) renders small populations more susceptible to the challenges mentioned above. In a small population, over succ
https://en.wikipedia.org/wiki/Propositional%20formula
In propositional logic, a propositional formula is a type of syntactic formula which is well formed and has a truth value. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula. A propositional formula is constructed from simple propositions, such as "five is greater than three" or propositional variables such as p and q, using connectives or logical operators such as NOT, AND, OR, or IMPLIES; for example: (p AND NOT q) IMPLIES (p OR q). In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under discussion, just like an expression such as "" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance. Propositions For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ...". The linking semicolon ";" and the connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences is considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas). For example: The assertion: "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by "AND"s: ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ) . Simple propositions are declarative in nature, that is, they make assertions about the condition
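Given a truth-value assignment to its variables, a formula such as the article's (p AND NOT q) IMPLIES (p OR q) determines a unique truth value. A minimal sketch of such an evaluation (the tuple encoding and function name are invented for this example):

```python
def evaluate(formula, valuation):
    # formula is a nested tuple: ("var", name), ("not", f), or (op, f, g)
    # where op is "and", "or", or "implies"; valuation maps names to booleans.
    kind = formula[0]
    if kind == "var":
        return valuation[formula[1]]
    if kind == "not":
        return not evaluate(formula[1], valuation)
    left = evaluate(formula[1], valuation)
    right = evaluate(formula[2], valuation)
    if kind == "and":
        return left and right
    if kind == "or":
        return left or right
    if kind == "implies":
        return (not left) or right  # material implication
    raise ValueError(f"unknown connective: {kind}")

# The example from the text: (p AND NOT q) IMPLIES (p OR q), with p=True, q=False.
f = ("implies",
     ("and", ("var", "p"), ("not", ("var", "q"))),
     ("or", ("var", "p"), ("var", "q")))
print(evaluate(f, {"p": True, "q": False}))  # True
```

In fact this particular formula evaluates to True under all four valuations of p and q, i.e. it is a tautology.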
https://en.wikipedia.org/wiki/Rundlet
The rundlet is an archaic cask size once used in the British wine trade, equivalent to about 68 litres. Before the adoption of the imperial system in 1824 it was defined as 18 wine gallons (one of several gallons then in use); afterwards it was defined as 15 imperial gallons, the imperial gallon having become the universal English base unit of volume in the British realm.
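The two historical definitions can be checked against the quoted figure of about 68 litres, using the standard litre equivalents of the old wine gallon (231 cubic inches) and the 1824 imperial gallon:

```python
WINE_GALLON_L = 3.785411784   # the old wine gallon, exactly 231 cubic inches
IMPERIAL_GALLON_L = 4.54609   # the imperial gallon, as defined today in litres

# Pre-1824 and post-1824 rundlet volumes, both roughly 68 litres:
print(round(18 * WINE_GALLON_L, 1))      # 68.1
print(round(15 * IMPERIAL_GALLON_L, 1))  # 68.2
```

The near-agreement is no accident: 15 imperial gallons was the closest round redefinition of the older 18-wine-gallon cask.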
https://en.wikipedia.org/wiki/Kosmos%20%28publisher%29
Franckh-Kosmos Verlags-GmbH & Co. is a media publishing house based in Stuttgart, Germany, founded in 1822 by Johann Friedrich Franckh. In the nineteenth century the company published the fairy tales of Wilhelm Hauff as well as works by Wilhelm Waiblinger and Eduard Mörike. The "Friends of Nature Club" () was set up in 1903 in response to booming public interest in science and technology, and by 1912 100,000 members were receiving its monthly magazine "Cosmos" (Kosmos). The company moved into publishing books on popular science topics under the brands Franckh’sche Verlagshandlung and KOSMOS, including successful non-fiction guidebooks by Hanns Günther and Heinz Richter. Children's fiction and Kosmos-branded science experimentation kits were introduced in the 1920s, first in the field of electronics and then chemistry and other areas. In 1937, this effort led to a gold medal at the world exhibition in Paris. Kosmos's current output includes non-fiction, children's books, science kits and German-style board games. Many of their games are translated into English and published by Thames & Kosmos. Their line of experiment kits and science kits is distributed in North America and the United Kingdom by Thames & Kosmos. Notable games Beowulf: The Legend Exit: The Game Jambo Lord of the Rings Lost Cities The Settlers of Catan TwixT External links Board game publishing companies Children's book publishers Publishing companies of Germany Toy companies of Germany Construction toys Toy companies established in the 19th century Manufacturing companies established in 1822 Publishing companies established in 1822 German companies established in 1822
https://en.wikipedia.org/wiki/Longeron
In engineering, a longeron or stringer is a load-bearing component of a framework. The term is commonly used in connection with aircraft fuselages and automobile chassis. Longerons are used in conjunction with stringers to form structural frameworks. Aircraft In an aircraft fuselage, stringers are attached to formers (also called frames) and run in the longitudinal direction of the aircraft. They are primarily responsible for transferring the aerodynamic loads acting on the skin onto the frames and formers. In the wings or horizontal stabilizer, longerons run spanwise (from wing root to wing tip) and attach between the ribs. The primary function here also is to transfer the bending loads acting on the wings onto the ribs and spar. Sometimes the terms "longeron" and "stringer" are used interchangeably. Historically, though, there is a subtle difference between the two terms. If the longitudinal members in a fuselage are few in number (usually 4 to 8) and run all along the fuselage length, then they are called "longerons". The longeron system also requires that the fuselage frames be closely spaced (about every ). If the longitudinal members are numerous (usually 50 to 100) and are placed just between two formers/frames, then they are called "stringers". In the stringer system the longitudinal members are smaller and the frames are spaced farther apart (about ). Generally, longerons are of larger cross-section when compared to stringers. On large modern aircraft the stringer system is more common because it is more weight-efficient, despite being more complex to construct and analyze. Some aircraft use a combination of both stringers and longerons. Longerons often carry larger loads than stringers and also help to transfer skin loads to internal structure. Longerons nearly always attach to frames or ribs. Stringers often are not attached to anything but the skin, where they carry a portion of the fuselage bending moment through axial loading. It is not unco
https://en.wikipedia.org/wiki/Web%20Services%20Resource%20Framework
Web Services Resource Framework (WSRF) is a family of OASIS-published specifications for web services. Major contributors include the Globus Alliance and IBM. A web service by itself is nominally stateless, i.e., it retains no data between invocations. This limits the things that can be done with web services. Before WSRF, no standard in the Web Services family of specifications explicitly defined how to deal with stateful interactions with remote resources. This does not mean that web services could not be stateful. Where required, a web service could read from a database, or use session state by way of cookies or WS-Session. WSRF provides a set of operations that web services can use to implement stateful interaction; web service clients communicate with resource services which allow data to be stored and retrieved. When clients talk to the web service they include the identifier of the specific resource that should be used inside the request, encapsulated within the WS-Addressing endpoint reference. This may be a simple URI address, or it may be complex XML content that helps identify or even fully describe the specific resource in question. Alongside the notion of an explicit resource reference comes a standardized set of web service operations to get/set resource properties. These can be used to read and perhaps write resource state, in a manner somewhat similar to having member variables of an object alongside its methods. The primary beneficiaries of such a model are management tools, which can enumerate and view resources, even if they have no other knowledge of them. This is the basis for WSDM. Issues with WSRF WSRF is not without controversy. Most fundamental is architectural: are distributed objects with state and operations the best way to represent remote resources? It is almost a port into XML of the distributed objects pattern, of which CORBA and DCOM are examples. A WSRF resource may be a stateful entity to which multiple clients have resource re
https://en.wikipedia.org/wiki/Reciprocal%20innervation
René Descartes (1596–1650) was one of the first to conceive a model of reciprocal innervation (in 1626) as the principle that provides for the control of agonist and antagonist muscles. Reciprocal innervation describes skeletal muscles as existing in antagonistic pairs, with contraction of one muscle producing forces opposite to those generated by contraction of the other. For example, in the human arm, the triceps acts to extend the lower arm outward while the biceps acts to flex the lower arm inward. To reach optimum efficiency, contraction of opposing muscles must be inhibited while muscles with the desired action are excited. This reciprocal innervation occurs so that the contraction of a muscle results in the simultaneous relaxation of its corresponding antagonist. A common example of reciprocal innervation is the effect of the nociceptive (or nocifensive) reflex, or defensive response to pain, otherwise commonly known as the withdrawal reflex; a type of involuntary action of the body to remove the body part from the vicinity of an offending object by contracting the appropriate muscles (usually flexor muscles), while relaxing the extensor muscles, allowing smooth movement. The concept of reciprocal innervation as applicable to the eye is also known as Sherrington's law (after Charles Scott Sherrington), wherein increased innervation to an extraocular muscle is accompanied by a simultaneous decrease in innervation to its specific antagonist, such as the medial rectus and the lateral rectus in the case of an eye looking to one side of the midline. When looking outward or laterally, the lateral rectus of one eye must contract via increased innervation, while its antagonist, the medial rectus of the same eye, must relax. The converse would occur in the other eye, both eyes demonstrating the law of reciprocal innervation. The significance of Descartes’ Law of Reciprocal Innervation has been additionally highlighted by recent research and applications of bioe
https://en.wikipedia.org/wiki/John%20Tierney%20%28journalist%29
John Marion Tierney (born March 25, 1953) is an American journalist and a contributing editor to City Journal, the Manhattan Institute's quarterly publication. Previously he had been a reporter and columnist at The New York Times for three decades, beginning in 1990. A self-described contrarian, Tierney is a critic of aspects of environmentalism, the "science establishment," and big government, but he does support the goal of limiting overall emissions of carbon dioxide. Early and personal life Tierney was born in 1953 outside Chicago, and grew up in "the Midwest, South America and Pittsburgh". He graduated from Yale University in 1976. He was previously married to Dana Tierney, with whom he had one child. They later divorced; Tierney married anthropologist and love expert Helen Fisher in 2020. Career After graduating college, Tierney was a newspaper reporter for four years, first at the Bergen Record in New Jersey and then at the Washington Star. Starting in 1980, he spent ten years in magazine journalism, writing for such magazines as Atlantic Monthly, Discover, Esquire, Health, National Geographic Traveler, New York, Newsweek, Outside, and Rolling Stone. Tierney began working at The New York Times in 1990 as a "general assignment" reporter in the Metro section. Tierney writes a science column, "Findings", for the Times. He previously wrote the TierneyLab blog for the Times. In 2005 Tierney began to write for the Times Op-Ed page, and as of 2015 his writings appeared in both the Times Op-Ed and "Findings" science column. He also writes for the conservative City Journal. In 2009 Tierney wrote about mathematics popularizer Martin Gardner and in that same year started featuring recreational mathematics problems, often curated by Pradeep Mutalik, in his New York Times TierneyLab blog. In 2010, Tierney retired from writing the blog, and Mutalik continued it under a new name (NumberPlay). In time, Gary Antonick took that over until he retired it in October 2016. Views Tierney des
https://en.wikipedia.org/wiki/William%20B.%20Provine
William Ball Provine (February 19, 1942 – September 1, 2015) was an American historian of science and of evolutionary biology and population genetics. He was the Andrew H. and James S. Tisch Distinguished University Professor at Cornell University and was a professor in the Departments of History, Science and Technology Studies, and Ecology and Evolutionary Biology. Biography Provine was born in Tennessee. He held a B.S. in mathematics (1962), and an M.A. (1965) and Ph.D. (1970) in History of Science from the University of Chicago. He joined the Cornell faculty in 1969. He suffered seizures in 1995 due to a brain tumor. Provine died on September 1, 2015, due to complications from the tumor. History of theoretical population genetics Provine's Ph.D. thesis, later published as a book, documented the early origins of theoretical population genetics in the conflicts between the biometric and Mendelian schools of thought. He documented later developments in theoretical population genetics in his biography of Sewall Wright, who was still alive and available for interviews. In this book, Provine criticizes Wright for confounding three different concepts of adaptive landscape: genotype to fitness landscapes, allele frequency to fitness landscapes, and phenotype to fitness landscapes. Provine later grew critical of Wright's views on genetic drift, instead attributing observed effects to the consequences of inbreeding and consequent selection at linked sites. John H. Gillespie credits Provine with stimulating his interest in the topic of hitchhiking or "genetic draft" as an alternative to genetic drift. Provine later published his critique of genetic drift in a book. Provine defended the importance of mathematics' contribution to the modern evolutionary synthesis. Education reform In 1970, Provine was instrumental in the founding of Cornell's Risley Residential College. He was the first faculty member in residence. Philosophy Provine was a philosopher, atheist, an
https://en.wikipedia.org/wiki/Pushforward%20%28differential%29
In differential geometry, the pushforward is a linear approximation of a smooth map on tangent spaces. Suppose that φ : M → N is a smooth map between smooth manifolds; then the differential of φ at a point x, denoted dφ_x, is, in some sense, the best linear approximation of φ near x. It can be viewed as a generalization of the total derivative of ordinary calculus. Explicitly, the differential is a linear map from the tangent space of M at x to the tangent space of N at φ(x), dφ_x : T_xM → T_{φ(x)}N. Hence it can be used to push tangent vectors on M forward to tangent vectors on N. The differential of a map φ is also called, by various authors, the derivative or total derivative of φ. Motivation Let φ : U → V be a smooth map from an open subset U of R^m to an open subset V of R^n. For any point x in U, the Jacobian of φ at x (with respect to the standard coordinates) is the matrix representation of the total derivative of φ at x, which is a linear map dφ_x : T_xR^m → T_{φ(x)}R^n between their tangent spaces. Note the tangent spaces T_xR^m and T_{φ(x)}R^n are isomorphic to R^m and R^n, respectively. The pushforward generalizes this construction to the case that φ is a smooth function between any smooth manifolds M and N. The differential of a smooth map Let φ : M → N be a smooth map of smooth manifolds. Given x ∈ M, the differential of φ at x is a linear map dφ_x : T_xM → T_{φ(x)}N from the tangent space of M at x to the tangent space of N at φ(x). The image dφ_x(X) of a tangent vector X ∈ T_xM under dφ_x is sometimes called the pushforward of X by φ. The exact definition of this pushforward depends on the definition one uses for tangent vectors (for the various definitions see tangent space). If tangent vectors are defined as equivalence classes of the curves γ for which γ(0) = x, then the differential is given by dφ_x(γ′(0)) = (φ ∘ γ)′(0). Here, γ is a curve in M with γ(0) = x, and γ′(0) is the tangent vector to the curve γ at 0. In other words, the pushforward of the tangent vector to the curve γ at 0 is the tangent vector to the curve φ ∘ γ at 0. Alternatively, if tangent vectors are defined as derivations acting on smooth real-valued functions, then the differential is given by dφ_x(X)(f) = X(f ∘ φ) for an arbitrary function f and an arbitrary derivation
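For a map between Euclidean spaces, the pushforward is just multiplication of the tangent vector by the Jacobian matrix, and the curve definition above gives a way to check it numerically. The particular map φ(x, y) = (x², xy), the base point, and the tangent vector below are illustrative choices, not from the article:

```python
def phi(x, y):
    """Illustrative smooth map from R^2 to R^2: (x, y) -> (x^2, x*y)."""
    return (x * x, x * y)

def jacobian_phi(x, y):
    # Analytic Jacobian of phi at (x, y); rows are gradients of the components.
    return [[2.0 * x, 0.0],
            [y, x]]

def pushforward(p, v):
    """dphi_p(v): multiply the tangent vector v by the Jacobian at p."""
    J = jacobian_phi(*p)
    return [sum(J[i][j] * v[j] for j in range(2)) for i in range(2)]

# Cross-check against the curve definition dphi_p(gamma'(0)) = (phi o gamma)'(0)
# with gamma(t) = p + t*v, via a forward finite difference.
p, v, h = (1.0, 2.0), (1.0, 1.0), 1e-7
curve_rate = [(b - a) / h
              for a, b in zip(phi(*p), phi(p[0] + h * v[0], p[1] + h * v[1]))]

print(pushforward(p, v))  # [2.0, 3.0]
print(curve_rate)         # approximately [2.0, 3.0]
```

Both computations agree, illustrating that the two definitions of the differential (via curves and via the Jacobian) describe the same linear map.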
https://en.wikipedia.org/wiki/Oral%20rehydration%20therapy
Oral rehydration therapy (ORT) is a type of fluid replacement used to prevent and treat dehydration, especially due to diarrhea. It involves drinking water with modest amounts of sugar and salts, specifically sodium and potassium. Oral rehydration therapy can also be given by a nasogastric tube. Therapy should routinely include the use of zinc supplements. Use of oral rehydration therapy has been estimated to decrease the risk of death from diarrhea by up to 93%. Side effects may include vomiting, high blood sodium, or high blood potassium. If vomiting occurs, it is recommended that use be paused for 10 minutes and then gradually restarted. The recommended formulation includes sodium chloride, sodium citrate, potassium chloride, and glucose. If these are not available, glucose may be replaced by sucrose and sodium citrate by sodium bicarbonate, although the resulting mixture is not shelf stable in high-humidity environments. It works because glucose increases the uptake of sodium, and thus water, by the intestines, while the potassium chloride and sodium citrate help prevent hypokalemia and acidosis, respectively, which are both common side effects of diarrhea. A number of other formulations are also available, including versions that can be made at home. However, the use of homemade solutions has not been well studied. Oral therapy was developed in the 1940s using electrolyte solutions with or without glucose on an empirical basis, chiefly for mild or convalescent patients, but did not come into common use for rehydration and maintenance therapy until after the discovery that glucose promoted sodium and water absorption during cholera in the 1960s. It is on the World Health Organization's List of Essential Medicines. Globally, oral rehydration therapy is used by 41% of children with diarrhea. This use has played an important role in reducing the number of deaths in children under the age of five. Medical uses ORT is less invasive than the other strategies for fluid
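To make the "modest amounts of sugar and salts" concrete, the sketch below sums the component concentrations of a reduced-osmolarity oral rehydration solution into its total osmolarity. The figures are quoted from memory of the WHO/UNICEF 2002 reduced-osmolarity formulation and should be treated as illustrative, not as a clinical reference:

```python
# Approximate WHO/UNICEF reduced-osmolarity ORS composition, in mmol/L.
# Values are illustrative (from the 2002 formulation as remembered),
# not a clinical reference.
ors_mmol_per_l = {
    "glucose": 75,
    "sodium": 75,
    "chloride": 65,
    "potassium": 20,
    "citrate": 10,
}

# For dilute solutions the total osmolarity is approximately the sum of
# the individual solute concentrations.
total_osmolarity = sum(ors_mmol_per_l.values())
print(total_osmolarity, "mOsm/L")  # 245 mOsm/L
```

The point of the reduced-osmolarity design is that a solution slightly hypotonic to plasma, with glucose and sodium in roughly equimolar amounts, maximizes the co-transport of sodium and water described above.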
https://en.wikipedia.org/wiki/GPS%20drawing
GPS drawing, also known as GPS art, is a method of drawing where an artist uses a Global Positioning System (GPS) device and follows a pre-planned route to create a large-scale picture or pattern. The .GPX data file recorded during the drawing process is then visualised, usually by overlaying it as a line on a map of the area. Artists usually run or cycle the route, while cars, vans, boats and aeroplanes are used to create larger pieces. The first known GPS drawing was made by Reid Stowe in 1999: "Voyage of the Turtle", an ocean-sized drawing with a 5,500-mile circumference in the Atlantic, made using a sailboat. The GPS data was recorded in logbooks and was therefore very low resolution. In 2000, after the US military's GPS satellite signals were opened up to the public, artists Jeremy Wood and Hugh Pryor were able to use a newly available GPS tracker to record their movements. To display their drawings, Hugh Pryor wrote a computer program which converted the GPX data into a single line to be shown on screen or turned into an image file. With these tools in place, GPS drawing was able to develop as a distinct art form. Planning GPS artists can spend many hours finding a certain image or text hidden in a map, or can sometimes simply see an existing image in a map due to pareidolia. In many cities and towns the road layout and landscape restrict the routes available, so artists have to find creative ways to show their pictures or characters. In cities where there is a strong grid pattern, 8-bit-style or pixelated images can be created of almost any object or shape. Many artists will create paper or digital maps of their route to follow on their journey. Artistic style There are many approaches to GPS drawing which an artist can choose depending on their means of travel and the landscape around them. Roads, trails, and paths only One style uses only pre-existing roads, paths, trails, etc. This can make it more challenging to find a route and plan the artwork
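A recorded .GPX file is plain XML, so extracting a track as a sequence of (lat, lon) points, the "single line" that gets drawn over a map, needs only the standard library. The tiny inline document below is a hypothetical example; real files use the same `trk`/`trkseg`/`trkpt` structure:

```python
import xml.etree.ElementTree as ET

# Minimal hypothetical GPX 1.1 track with three points (coordinates invented).
GPX = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="51.5074" lon="-0.1278"/>
    <trkpt lat="51.5080" lon="-0.1260"/>
    <trkpt lat="51.5091" lon="-0.1255"/>
  </trkseg></trk>
</gpx>"""

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def track_points(gpx_text):
    """Return the recorded track as a list of (lat, lon) float pairs."""
    root = ET.fromstring(gpx_text)
    return [(float(pt.get("lat")), float(pt.get("lon")))
            for pt in root.findall(".//gpx:trkpt", NS)]

points = track_points(GPX)
print(points)  # ordered (lat, lon) tuples, ready to plot as a polyline
```

From here, drawing the line is a matter of projecting the coordinates and handing the polyline to any plotting or mapping library.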
https://en.wikipedia.org/wiki/Oxygen%20scavenger
Oxygen scavengers or oxygen absorbers are added to enclosed packaging to help remove or decrease the level of oxygen in the package. They are used to help maintain product safety and extend shelf life. There are many types of oxygen absorbers available to cover a wide array of applications. The components of an oxygen absorber vary according to intended use, the water activity of the product being preserved, and other factors. Often the oxygen absorber or scavenger is enclosed in a porous sachet or packet, but it can also be part of packaging films and structures. Others are part of a polymer structure. Oxygen-absorbing chemicals are also commonly added to boiler feedwater used in boiler systems, to reduce corrosion of components within the system. Mechanism The first patent for an oxygen scavenger used an alkaline solution of pyrogallic acid in an air-tight vessel. Modern scavenger sachets use a mixture of iron powder and sodium chloride. Often activated carbon is also included, as it adsorbs some other gases and many organic molecules, further preserving products and removing odors. When an oxygen absorber is removed from its protective packaging, moisture in the surrounding atmosphere begins to permeate into the iron particles inside the absorber sachet. Moisture activates the iron, and it oxidizes to form iron oxide. Typically, at least 65% relative humidity is required in the surrounding atmosphere before the rusting process can begin. To assist in the process of oxidation, sodium chloride is added to the mixture, acting as a catalyst or activator, allowing the iron powder to oxidize even at relatively low humidity. As oxygen is consumed to form iron oxide, the level of oxygen in the surrounding atmosphere is reduced. Absorber technology of this type may reduce the oxygen level in the surrounding atmosphere to below 0.01%. Complete oxidation of 1 g of iron can remove 300 cm3 of oxygen under standard conditions. Though other techn
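The 300 cm³-per-gram figure follows from the stoichiometry of rusting, 4 Fe + 3 O₂ → 2 Fe₂O₃. The sketch below redoes that arithmetic, assuming ideal-gas behavior at 0 °C and 1 atm (molar volume ≈ 22,414 cm³/mol):

```python
# Oxygen consumed by complete oxidation of 1 g of iron: 4 Fe + 3 O2 -> 2 Fe2O3.
MOLAR_MASS_FE = 55.845       # g/mol
MOLAR_VOLUME_STP = 22_414.0  # cm^3/mol, ideal gas at 0 degC and 1 atm

grams_fe = 1.0
mol_fe = grams_fe / MOLAR_MASS_FE
mol_o2 = mol_fe * 3 / 4      # 3 mol O2 are consumed per 4 mol Fe
volume_o2 = mol_o2 * MOLAR_VOLUME_STP

print(round(volume_o2), "cm^3")  # ~301 cm^3, matching the ~300 cm^3 figure
```

The exact answer depends slightly on which "standard conditions" are assumed, but any reasonable choice lands near 300 cm³.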
https://en.wikipedia.org/wiki/Food%20Standards%20Australia%20New%20Zealand
Food Standards Australia New Zealand (FSANZ) (Māori: Te Mana Kounga Kai – Ahitereiria me Aotearoa), formerly the Australia New Zealand Food Authority (ANZFA), is the statutory authority in the Australian Government Health portfolio that is responsible for developing food standards for Australia and New Zealand. Description FSANZ develops the standards in consultation with experts, other government agencies and stakeholders; the standards are enforced by state and territory departments, agencies and local councils in Australia, the Ministry for Primary Industries in New Zealand, and the Australian Department of Agriculture, Water and the Environment for food imported into Australia. According to legislation, the recommendations made by the body should be open and accountable, and based upon a rigorous scientific assessment of risk to public health and safety, though FSANZ's commitment to this has been disputed by leading public health and consumer representatives across Australia and New Zealand. All decisions made by FSANZ must be approved by the Australia and New Zealand Food Regulation Ministerial Council, which is composed of the Health Minister from each of the Australian states and territories, the Health Minister from New Zealand, and other participating ministers nominated by each jurisdiction. This arrangement can expose decisions to political interference: for example, when the Food Standards scientists recommended that hemp seed be allowed for sale, the ministers vetoed the recommendation because they did not want to appear soft on drugs. Publications from FSANZ include the Australian Total Diet Survey and the Shoppers' Guide to Food Additives and labels. Nomenclature This authority is sometimes cited variously as Australia and New Zealand Food Standards/Safety Authority (ANZFSA), possibly incorrect nomenclature arising due to confusion with the old initialism ANZFA, and with the acronym of the New Zealand authority, New Zealand Food Safety, wh
https://en.wikipedia.org/wiki/Fast%20wavelet%20transform
The fast wavelet transform is a mathematical algorithm designed to turn a waveform or signal in the time domain into a sequence of coefficients based on an orthogonal basis of small finite waves, or wavelets. The transform can be easily extended to multidimensional signals, such as images, where the time domain is replaced with the space domain. This algorithm was introduced in 1989 by Stéphane Mallat. It has as theoretical foundation the device of a finitely generated, orthogonal multiresolution analysis (MRA). In the terms given there, one selects a sampling scale J with sampling rate of 2^J per unit interval, and projects the given signal f onto the space V_J; in theory by computing the scalar products s_n^(J) = 2^(J/2) ∫ f(t) φ(2^J t − n) dt, where φ is the scaling function of the chosen wavelet transform; in practice by any suitable sampling procedure under the condition that the signal is highly oversampled, so that the resulting series is the orthogonal projection or at least some good approximation of the original signal in V_J. The MRA is characterised by its scaling sequence a = (a_n) or, as Z-transform, a(z) = Σ_n a_n z^(−n), and its wavelet sequence b = (b_n) or b(z) (some coefficients might be zero). Those allow one to compute the wavelet coefficients d_n^(k), at least over some range k = M, ..., J−1, without having to approximate the integrals in the corresponding scalar products. Instead, one can directly, with the help of convolution and decimation operators, compute those coefficients from the first approximation s^(J). Forward DWT For the discrete wavelet transform (DWT), one computes recursively, starting with the coefficient sequence s^(J) and counting down from k = J − 1 to some M < J, s_n^(k) = Σ_m a_(m−2n) s_m^(k+1) and d_n^(k) = Σ_m b_(m−2n) s_m^(k+1), for k = J−1, J−2, ..., M and all n ∈ Z. In the Z-transform notation this reads s^(k)(z) = (↓2)(a*(z) s^(k+1)(z)) and d^(k)(z) = (↓2)(b*(z) s^(k+1)(z)). The downsampling operator (↓2) reduces an infinite sequence, given by its Z-transform, which is simply a Laurent series, to the sequence of the coefficients with even indices: (↓2)(c(z)) = Σ_k c_(2k) z^(−k). The starred Laurent polynomial denotes the adjoint filter; it has time-reversed adjoint coefficients, a*(z) = Σ_n ā_(−n) z^(−n). (The adjoint of a real number being t
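One level of the forward recursion is just "filter, then keep every second sample". The sketch below uses the Haar filters, for which convolution plus downsampling collapses to pairwise sums and differences scaled by 1/√2; the input signal is an arbitrary example, and an inverse step is included to show perfect reconstruction:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_dwt_level(s):
    """One forward DWT level with Haar filters: s^(k+1) -> (s^(k), d^(k))."""
    approx = [(s[2 * n] + s[2 * n + 1]) / SQRT2 for n in range(len(s) // 2)]
    detail = [(s[2 * n] - s[2 * n + 1]) / SQRT2 for n in range(len(s) // 2)]
    return approx, detail

def haar_idwt_level(approx, detail):
    """Inverse of one level: interleave (a + d)/sqrt(2) and (a - d)/sqrt(2)."""
    s = []
    for a, d in zip(approx, detail):
        s.append((a + d) / SQRT2)
        s.append((a - d) / SQRT2)
    return s

signal = [4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 2.0, 0.0]  # arbitrary example
a, d = haar_dwt_level(signal)
rec = haar_idwt_level(a, d)
print([round(x, 6) for x in rec])  # perfect reconstruction of the input
```

Iterating `haar_dwt_level` on the approximation coefficients gives the full cascade from scale J down to M; each level costs O(n) operations, which is where the "fast" in fast wavelet transform comes from.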
https://en.wikipedia.org/wiki/Internet%20background%20noise
Internet background noise (IBN, also known as Internet background radiation) consists of data packets on the Internet which are addressed to IP addresses or ports where there is no network device set up to receive them. These packets often contain unsolicited commercial or network control messages, or are the result of port scans and worm activities. Smaller devices such as DSL modems may have a hard-coded IP address to look up the correct time using the Network Time Protocol. If, for some reason, the hard-coded NTP server is no longer available, faulty software might retry failed requests up to every second, which, if many devices are affected, generates a significant amount of unnecessary request traffic. Historical context In the first 10 years of the Internet there was very little background noise, but with its commercialization in the 1990s the noise became a permanent feature. More recently, the Conficker worm was responsible for a large amount of background noise generated by viruses looking for new victims. In addition to malicious activities, misconfigured hardware and leaks from private networks are also sources of background noise. 2000s As of November 2010, it was estimated that 5.5 gigabits (687.5 megabytes) of background noise were generated every second. It was also estimated in the early 2000s that a dial-up modem user lost about 20 bits per second of their bandwidth to unsolicited traffic. Over the preceding decade, the amount of background noise for an IPv4 /8 address block (which contains 16.7 million addresses) increased from 1 to 50 Mbit/s (0.125 to 6.25 MB/s). The newer IPv6 protocol, which has a much larger address space, will make it more difficult for viruses to scan ports and will also limit the impact of misconfigured equipment. Internet background noise has been used to detect significant changes in Internet traffic and connectivity during the 2011 political unrest, from IP address blocks that were geolocated to Libya. Backscatter
https://en.wikipedia.org/wiki/Guido%20Castelnuovo
Guido Castelnuovo (14 August 1865 – 27 April 1952) was an Italian mathematician. He is best known for his contributions to the field of algebraic geometry, though his contributions to the study of statistics and probability theory are also significant. Life Early life Castelnuovo was born in Venice. His father, Enrico Castelnuovo, was a novelist and campaigner for the unification of Italy. His mother Emma Levi was a relative of Cesare Lombroso and David Levi. His wife Elbina Marianna Enriques was the sister of mathematician Federigo Enriques and zoologist Paolo Enriques. After attending a grammar school in Venice, he went to the University of Padua, from where he graduated in 1886. At the University of Padua he was taught by Giuseppe Veronese. After his graduation, he sent one of his papers to Corrado Segre, whose replies he found remarkably helpful. It marked the beginning of a long period of collaboration. Career Castelnuovo spent one year in Rome to research advanced geometry. After that, he was appointed as an assistant of Enrico D'Ovidio at the University of Turin, where he was strongly influenced by Corrado Segre. Here he worked with Alexander von Brill and Max Noether. In 1891 he moved back to Rome to take up the chair of Analytic and Projective Geometry. Here he was a colleague of Luigi Cremona, his former teacher, and took over Cremona's chair when the latter died in 1903. He also founded the University of Rome's School of Statistics and Actuarial Sciences (1927). He influenced a younger generation of Italian mathematicians and statisticians, including Corrado Gini and Francesco Paolo Cantelli. Retirement and World War II Castelnuovo retired from teaching in 1935. It was a period of great political difficulty in Italy. In 1922 Benito Mussolini had risen to power, and in 1938 a large number of anti-semitic laws were declared, which excluded him, like all other Jews, from p
https://en.wikipedia.org/wiki/Intel%208259
The Intel 8259 is a Programmable Interrupt Controller (PIC) designed for the Intel 8085 and Intel 8086 microprocessors. The initial part was the 8259; a later version with an A suffix was upward compatible and usable with the 8086 or 8088 processor. The 8259 combines multiple interrupt input sources into a single interrupt output to the host microprocessor, extending the interrupt levels available in a system beyond the one or two levels found on the processor chip. The 8259A was the interrupt controller for the ISA bus in the original IBM PC and IBM PC AT. The 8259 was introduced as part of Intel's MCS 85 family in 1976. The 8259A was included in the original PC introduced in 1981 and retained by the PC/XT when introduced in 1983. A second 8259A was added with the introduction of the PC/AT. The 8259 has coexisted with the Intel APIC Architecture since the latter's introduction in symmetric multiprocessor PCs. Modern PCs have begun to phase out the 8259A in favor of the Intel APIC Architecture; however, while no longer a separate chip, the 8259A interface is still provided by the Platform Controller Hub or Southbridge chipset on modern x86 motherboards. Functional description The main signal pins on an 8259 are as follows: eight interrupt request input lines named IRQ0 through IRQ7, an interrupt request output line named INTR, an interrupt acknowledgment line named INTA, and D0 through D7 for communicating the interrupt level or vector offset. Other connections include CAS0 through CAS2 for cascading between 8259s. Up to eight slave 8259s may be cascaded to a master 8259 to provide up to 64 IRQs. 8259s are cascaded by connecting the INT line of one slave 8259 to the IRQ line of one master 8259. End of Interrupt (EOI) operations support specific EOI, non-specific EOI, and auto-EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR. A non-specific EOI resets the highest-priority IRQ level in the ISR. Auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowledged
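In the 8259's default fully nested mode, the controller services the lowest-numbered pending, unmasked request first (IRQ0 has the highest priority). The toy model below sketches just that resolution step using the chip's Interrupt Request Register (IRR) and Interrupt Mask Register (IMR); it ignores the ISR, cascading, and the rotating-priority modes, so treat it as an illustration rather than a faithful simulation:

```python
def highest_priority_irq(irr, imr):
    """Pick the IRQ an 8259 would service in fixed-priority (fully nested) mode.

    irr: 8-bit Interrupt Request Register (bit n set = IRQn pending).
    imr: 8-bit Interrupt Mask Register (bit n set = IRQn masked off).
    Returns the IRQ number, or None if nothing serviceable is pending.
    """
    pending = irr & ~imr & 0xFF  # requests that are pending and not masked
    for irq in range(8):         # IRQ0 is checked first: highest priority
        if pending & (1 << irq):
            return irq
    return None

# IRQ3 and IRQ5 pending, IRQ3 masked -> IRQ5 is serviced.
print(highest_priority_irq(0b0010_1000, 0b0000_1000))  # 5
```

In hardware this decision feeds the INTR line and, during the INTA cycle, the vector placed on D0–D7; the real chip additionally suppresses requests at or below the priority of any level already recorded in the ISR.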
https://en.wikipedia.org/wiki/Pericallis%20%C3%97%20hybrida
Pericallis × hybrida, known as cineraria, florist's cineraria or common ragwort is a flowering plant in the family Asteraceae. It originated as a hybrid between Pericallis cruenta and P. lanata, both natives of the Canary Islands. The hybrid was first developed in the British royal gardens in 1777. It was originally known as Cineraria × hybrida, but the genus Cineraria is now restricted to a group of South African species, with the Canary Island species being transferred to the genus Pericallis; some botanists also treat it in a broad view of the large and widespread genus Senecio. Some varieties are sold under the trade name Senetti. Cultivation and uses Florist's cinerarias can be raised freely from seeds. For spring flowering the seeds are sown in mid spring in well-drained pots or pans, in soil of three parts loam to two parts leaf mould, with one-sixth sand; cover the seed thinly with fine soil, and press the surface firm. When the seedlings are large enough to handle, prick them out in pans or pots of similar soil, and when more advanced pot them singly in 10 cm pots, using soil a trifle less sandy. They should be grown in shallow frames facing the north. If so situated that the sun shines upon the plants in the middle of the day, they must be slightly shaded; give plenty of air, and never allow them to get dry. When well established with roots, shift them into 15 cm pots, which should be liberally supplied with manure water as they get filled with roots. In winter remove to a pit or house, where a little heat can be supplied whenever there is a risk of their getting frozen. They should stand on a moist bottom, but must not be subjected to cold draughts. When the flowering stems appear, give manure water at every alternate watering. Seeds sown in early spring, and grown on in this way, will be in flower by Christmas if kept in a temperature of from 5° to 7 °C at night, with a little more warmth in the day. Those sown in April and May will follow them durin
https://en.wikipedia.org/wiki/Discrete%20trial%20training
Discrete trial training (DTT) is a technique used by practitioners of applied behavior analysis (ABA) that was developed by Ivar Lovaas at the University of California, Los Angeles (UCLA). DTT uses direct instruction and reinforcers to create clear contingencies that shape new skills. Often employed as an early intensive behavioral intervention (EIBI) for up to 30–40 hours per week for children with autism, the technique relies on the use of prompts, modeling, and positive reinforcement strategies to facilitate the child's learning. It previously used aversives to punish unwanted behaviors. DTT has also been referred to as the "Lovaas/UCLA model", "rapid motor imitation antecedent", "listener responding", "errorless learning", and "mass trials". Technique Discrete trial training (DTT) is a process whereby an activity is divided into smaller distinct sub-tasks, and each of these is repeated continuously until a person is proficient. The trainer rewards successful completion and uses errorless correction procedures if there is unsuccessful completion by the subject, conditioning them into mastering the process. When proficiency is gained in each sub-task, the sub-tasks are re-combined into the whole activity; in this way proficiency at complex activities can be taught. DTT is carried out with a one-to-one therapist-to-student ratio at a table. Intervention can start when a child is as young as two years old and can last from two to six years. Progression through the goals of the program is determined individually and is not determined by how long the client has been in the program. The first year seeks to reduce self-stimulating ("stimming") behavior; teach listener responding, eye contact, and fine and gross motor imitation; establish playing with toys in their intended manner; and integrate the family into the treatment protocol. The second year teaches early expressive language and abstract linguistic skills. The third year strives to include the individual's c