https://en.wikipedia.org/wiki/Dimetridazole
Dimetridazole is a drug of the nitroimidazole class that combats protozoan infections. It was formerly a common additive to poultry feed, which led to its residues being found in eggs. Because it is suspected of being carcinogenic, its use has been legally restricted, but residues are still found in eggs. It is now banned as a livestock feed additive in many jurisdictions, for example in the European Union, Canada, and the United States. In the US, the Food and Drug Administration bans it for extralabel use. See also Metronidazole Nimorazole
https://en.wikipedia.org/wiki/Eduard%20Helly
Eduard Helly (1 June 1884 in Vienna – 28 November 1943 in Chicago) was a mathematician after whom Helly's theorem, Helly families, Helly's selection theorem, the Helly metric, and the Helly–Bray theorem were named. Life Helly earned his doctorate from the University of Vienna in 1907, with two advisors, Wilhelm Wirtinger and Franz Mertens. He then continued his studies for another year at the University of Göttingen. Richard Courant, also studying there at the same time, tells a story of Helly disrupting one of Courant's talks, which fortunately did not prevent David Hilbert from eventually hiring Courant as an assistant. After returning to Vienna, Helly worked as a tutor, Gymnasium teacher, and textbook editor until World War I, when he enlisted in the Austrian army. He was shot in 1915, and spent the rest of the war as a prisoner of the Russians. In one prison camp in Berezovka, Siberia, he organized a mathematical seminar in which Tibor Radó, then an engineer, began his interest in pure mathematics. While held in another camp at Nikolsk-Ussuriysk, also in Siberia, Helly wrote important contributions on functional analysis. After a complicated return trip, Helly finally came back to Vienna in 1920, married the mathematician Elise Bloch in 1921, and also in 1921 earned his habilitation. Unable to obtain a paid position at the university because he was seen as too old and too Jewish, he worked at a bank until the financial collapse of 1929, and then for an insurance company. After the takeover of Austria by the Nazis in 1938, he lost that job as well, and escaped to America. With the assistance of Albert Einstein, he found teaching positions at Paterson Junior College and Monmouth Junior College in New Jersey, before moving with his wife to Chicago in 1941, to work for the U.S. Army Signal Corps. In Chicago, he suffered two heart attacks, and died from the second one. Contributions In the same 1912 paper in which he introduced Helly's selection theorem concer
https://en.wikipedia.org/wiki/Carbadox
Carbadox is a veterinary drug that combats infection in swine, particularly swine dysentery. Indications Carbadox is indicated for control of swine dysentery (vibrionic dysentery, bloody scours, or hemorrhagic dysentery); control of bacterial swine enteritis (salmonellosis or necrotic enteritis caused by Salmonella enterica); aid in the prevention of migration and establishment of large roundworm (Ascaris suum) infections; and aid in the prevention of establishment of nodular worm (Oesophagostomum) infections. Safety In animal models, carbadox has been shown to be carcinogenic and to induce birth defects. The Food and Drug Administration's Center for Veterinary Medicine has questioned its safety in light of its possible carcinogenicity. Regulation Carbadox is approved in the United States only for use in swine and may not be used within 42 days of slaughter or used in pregnant animals. In 2016, the United States Food and Drug Administration moved to ban its use in pork, citing a potential cancer risk to humans. However, as of August 2018, FDA had indefinitely stayed its withdrawal of approval and carbadox remains available. In 2004, carbadox was banned by the Canadian government as a livestock feed additive and for human consumption. The European Union also forbids the use of carbadox at any level. Australia forbids the use of carbadox in food-producing animals.
https://en.wikipedia.org/wiki/Paromomycin
Paromomycin is an antimicrobial used to treat a number of parasitic infections including amebiasis, giardiasis, leishmaniasis, and tapeworm infection. It is a first-line treatment for amebiasis or giardiasis during pregnancy. Otherwise, it is generally a second-line treatment option. It is taken by mouth, applied to the skin, or given by injection into a muscle. Common side effects when taken by mouth include loss of appetite, vomiting, abdominal pain, and diarrhea. When applied to the skin, side effects include itchiness, redness, and blisters. When given by injection there may be fever, liver problems, or hearing loss. Use during breastfeeding appears to be safe. Paromomycin is in the aminoglycoside family of medications and causes microbe death by stopping the creation of bacterial proteins. Paromomycin was discovered in the 1950s from a species of Streptomyces and came into medical use in 1960. It is on the World Health Organization's List of Essential Medicines. Paromomycin is available as a generic medication. Medical uses It is an antimicrobial used to treat intestinal parasitic infections such as cryptosporidiosis and amoebiasis, and other diseases such as leishmaniasis. Paromomycin was demonstrated to be effective against cutaneous leishmaniasis in clinical studies in the USSR in the 1960s, and in trials with visceral leishmaniasis in the early 1990s. It is administered by intramuscular injection or orally in capsule form. Paromomycin topical cream, with or without gentamicin, is an effective treatment for ulcerative cutaneous leishmaniasis, according to the results of a phase-3, randomized, double-blind, parallel group–controlled trial. Pregnancy and breastfeeding The medication is poorly absorbed. The effect it may have on the baby is still unknown. There are limited data regarding the safety of taking paromomycin while breastfeeding, but because the drug is poorly absorbed, minimal amounts will be secreted in breast milk. HIV/AIDS There is limited evidence t
https://en.wikipedia.org/wiki/Harvard%20Mark%20IV
The Harvard Mark IV was an electronic stored-program computer built by Harvard University under the supervision of Howard Aiken for the United States Air Force. Construction was completed in 1952. The machine remained at Harvard, where the Air Force used it extensively. The Mark IV was all-electronic, using magnetic drum storage and 200 registers of ferrite magnetic-core memory, making it one of the first computers to use core memory. It separated the storage of data and instructions, an arrangement now sometimes referred to as the Harvard architecture, although that term was not coined until the 1970s (in the context of microcontrollers). See also Harvard Mark I Harvard Mark II Harvard Mark III List of vacuum-tube computers Howard Aiken Harvard (World War II advanced trainer aircraft)
https://en.wikipedia.org/wiki/Linwood%20Springs%20Research%20Station
The Linwood Springs Research Station (LSRS) is a raptor research station located in Stevens Point, Wisconsin. Each fall, the station conducts studies on migrant northern saw-whet owls to gain information about their migration routes, molt patterns, mortality rates, and winter and summer ranges. The station also conducts annual nesting ecology studies of red-shouldered and sharp-shinned hawks to determine productivity, reoccupancy, natal dispersal, and nest-site fidelity. Additionally, each summer and fall a five-day raptor workshop is offered for those interested in gaining valuable field experience with many of the techniques used in raptor studies. External links LSRS Official Site Raptor organizations Biological stations Research institutes in Wisconsin Stevens Point, Wisconsin Zoological research institutes Ornithological organizations in the United States
https://en.wikipedia.org/wiki/Merozoite%20surface%20protein
Merozoite surface proteins are both integral and peripheral membrane proteins found on the surface of a merozoite, an early life cycle stage of a protozoan. Merozoite surface proteins, or MSPs, are important in understanding malaria, a disease caused by protozoans of the genus Plasmodium. During the asexual blood stage of its life cycle, the malaria parasite enters red blood cells to replicate itself, causing the classic symptoms of malaria. These surface protein complexes are involved in many interactions of the parasite with red blood cells and are therefore an important topic of study for scientists aiming to combat malaria. Forms The most common forms of MSPs are anchored to the merozoite surface with glycophosphatidylinositol, a short glycolipid often used for protein anchoring. Additional forms include integral membrane proteins and peripherally associated proteins, which are found to a lesser extent than glycophosphatidylinositol-anchored, or GPI-anchored, proteins on the merozoite surface. Merozoite surface proteins 1 and 2 (MSP-1 & MSP-2) are the most abundant GPI-anchored proteins on the surface of Plasmodium merozoites. Function MSP-1 is synthesized at the very beginning of schizogony, or asexual merozoite reproduction. The merozoite first attaches to a red blood cell using its MSP-1 complex. The MSP-1 complex targets spectrin, a complex on the internal surface of the cell membrane of a red blood cell. The majority of the MSP-1 complex is shed upon entry into the red blood cell, but a small portion of the C-terminus, called MSP-1₁₉, is conserved. The exact role of MSP-1₁₉ remains unknown, but it currently serves as a marker for the formation of the food vacuole. The function of the MSP-2 complex has not been definitively established, but current research suggests it has a role in red blood cell invasion, given its degradation shortly after invasion. MSP-3, -6, -7, and -9 are peripheral membrane proteins that have been shown to form a complex with MSP-1, but the
https://en.wikipedia.org/wiki/Geomagnetic%20reversal
A geomagnetic reversal is a change in a planet's magnetic field such that the positions of magnetic north and magnetic south are interchanged (not to be confused with geographic north and geographic south). The Earth's field has alternated between periods of normal polarity, in which the predominant direction of the field was the same as the present direction, and reverse polarity, in which it was the opposite. These periods are called chrons. Reversal occurrences are statistically random. There have been at least 183 reversals over the last 83 million years (on average once every ~450,000 years). The latest, the Brunhes–Matuyama reversal, occurred 780,000 years ago, with widely varying estimates of how quickly it happened. Other sources estimate that the time that it takes for a reversal to complete is on average around 7,000 years for the four most recent reversals. Clement (2004) suggests that this duration is dependent on latitude, with shorter durations at low latitudes, and longer durations at mid and high latitudes. Although variable, the duration of a full reversal is typically between 2,000 and 12,000 years. Although there have been periods in which the field reversed globally (such as the Laschamp excursion) for several hundred years, these events are classified as excursions rather than full geomagnetic reversals. Stable polarity chrons often show large, rapid directional excursions, which occur more often than reversals, and could be seen as failed reversals. During such an excursion, the field reverses in the liquid outer core, but not in the solid inner core. Diffusion in the liquid outer core is on timescales of 500 years or less, while that of the solid inner core is longer, around 3,000 years. History In the early 20th century, geologists such as Bernard Brunhes first noticed that some volcanic rocks were magnetized opposite to the direction of the local Earth's field. The first systematic evidence for and time-scale estimate of the magnetic reve
https://en.wikipedia.org/wiki/Futures%20and%20promises
In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete. The term promise was proposed in 1976 by Daniel P. Friedman and David Wise, and Peter Hibbard called the same idea eventual. The somewhat similar concept of a future was introduced in 1977 in a paper by Henry Baker and Carl Hewitt. The terms future, promise, delay, and deferred are often used interchangeably, although some differences in usage between future and promise are treated below. Specifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single-assignment container which sets the value of the future. Notably, a future may be defined without specifying which specific promise will set its value, and different possible promises may set the value of a given future, though this can be done only once for a given future. In other cases a future and a promise are created together and associated with each other: the future is the value, the promise is the function that sets the value – essentially the return value (future) of an asynchronous function (promise). Setting the value of a future is also called resolving, fulfilling, or binding it. Applications Futures and promises originated in functional programming and related paradigms (such as logic programming) to decouple a value (a future) from how it was computed (a promise), allowing the computation to be done more flexibly, notably by parallelizing it. Later, they found use in distributed computing for reducing the latency from communication round trips. Later still, they gained further use by allowing asynchronous programs to be written in direct style, rather than in continuation-passing style. Implicit vs. explicit Use of futures may be implicit (any use of t
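The read/write split described above can be sketched with Python's standard library, where concurrent.futures.Future combines both roles in one object: set_result() plays the write-once promise side, while result() is the read-only future view. This is a minimal illustration; the worker computation is invented for the example:

```python
from concurrent.futures import Future
import threading

def async_compute() -> Future:
    """Start a computation in the background and return a placeholder."""
    fut = Future()  # promise side: a single-assignment container

    def worker():
        fut.set_result(6 * 7)  # resolving/fulfilling the future, exactly once

    threading.Thread(target=worker).start()
    return fut  # the caller only ever reads from this

f = async_compute()
print(f.result())  # blocks until the worker sets the value, then prints 42
```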
https://en.wikipedia.org/wiki/Helminthology
Helminthology is the study of parasitic worms (helminths). The field studies the taxonomy of helminths and their effects on their hosts. The origin of the first component of the word is the Greek ἕλμινς - helmins, meaning "worm". In the 18th and early 19th century there was a wave of publications on helminthology; this period has been described as the science's "Golden Era". During that period the authors Félix Dujardin, William Blaxland Benham, Peter Simon Pallas, Marcus Elieser Bloch, Otto Friedrich Müller, Johann Goeze, Friedrich Zenker, Charles Wardell Stiles, Carl Asmund Rudolphi, Otto Friedrich Bernhard von Linstow and Johann Gottfried Bremser started systematic scientific studies of the subject. The Japanese parasitologist Satyu Yamaguti was one of the most active helminthologists of the 20th century; he wrote the six-volume Systema Helminthum. See also Nematology
https://en.wikipedia.org/wiki/How%20to%20Design%20Programs
How to Design Programs (HtDP) is a textbook by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi on the systematic design of computer programs. MIT Press published the first edition in 2001 and the second edition in 2018, which is freely available online and in print. The book introduces the concept of a design recipe, a six-step process for creating programs from a problem statement. While the book was originally used along with the education project TeachScheme! (renamed ProgramByDesign), it has been adopted at many colleges and universities for teaching program design principles. According to HtDP, the design process starts with a careful analysis of a problem statement with the goal of extracting a rigorous description of the kinds of data that the desired program consumes and produces. The structure of these data descriptions determines the organization of the program. Then, the book carefully introduces data forms of progressively growing complexity. It starts with data of atomic forms and then progresses to compound forms, including data that can be arbitrarily large. For each kind of data definition, the book explains how to organize the program in principle, thus enabling a programmer who encounters a new form of data to still construct a program systematically. Like Structure and Interpretation of Computer Programs (SICP), HtDP relies on a variant of the programming language Scheme. It includes its own integrated development environment (IDE), named DrRacket, which provides a series of programming languages. The first language supports only functions, atomic data, and simple structures. Each language adds expressive power to the prior one. Except for the largest teaching language, all languages for HtDP are functional programming languages. Pedagogical basis In the 2004 paper The Structure and Interpretation of the Computer Science Curriculum, the same authors compared and contrasted the pedagogical focus
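As a rough illustration of the design recipe's six steps, here is a deliberately tiny example transliterated into Python (HtDP itself works in Scheme-based teaching languages inside DrRacket, and this function and data definition are invented for the purpose):

```python
# 1. Data definition: a Temperature is a float, in degrees Celsius.
# 2. Signature and purpose: celsius_to_fahrenheit : Temperature -> float;
#    convert a Celsius temperature to Fahrenheit.
# 3. Examples: 0 -> 32.0, 100 -> 212.0.
# 4. Template: atomic data, so the body is a single expression in t.
# 5. Definition:
def celsius_to_fahrenheit(t: float) -> float:
    """Convert a Celsius temperature to Fahrenheit."""
    return t * 9 / 5 + 32

# 6. Tests: run the examples from step 3.
assert celsius_to_fahrenheit(0) == 32.0
assert celsius_to_fahrenheit(100) == 212.0
```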
https://en.wikipedia.org/wiki/Egg%20hatch%20assay
Egg hatch assay (EHA), also called an egg hatch test (EHT), is a method used to determine a given parasite's resistance to extant drug therapy. Fresh eggs from the parasite of interest are incubated, and serial dilutions of the drug of interest are applied. The percentage of eggs that hatch or die is determined at each concentration, and a drug response curve may be plotted. The data can then be transformed and analysed to give further statistics such as an effective dose. This technique is labour-intensive, expensive and can take some time; however, an egg hatch assay gives an accurate and reliable result.
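A sketch of the analysis step in Python: fitting a two-parameter log-logistic dose-response model to hatch data and reading off an effective dose (ED50). The concentrations, kill fractions, and model choice below are illustrative assumptions, not data or methodology from any particular assay:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(c, ed50, slope):
    """Two-parameter log-logistic model: fraction of eggs killed at concentration c."""
    return 1.0 / (1.0 + (ed50 / c) ** slope)

# hypothetical serial dilutions (ug/mL) and observed kill fractions
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
killed = np.array([0.08, 0.15, 0.35, 0.62, 0.85, 0.96])

(ed50, slope), _ = curve_fit(dose_response, conc, killed, p0=[0.3, 1.0])
print(f"estimated ED50: {ed50:.3f} ug/mL (slope {slope:.2f})")
```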
https://en.wikipedia.org/wiki/Jaccard%20index
The Jaccard index, also known as the Jaccard similarity coefficient, is a statistic used for gauging the similarity and diversity of sample sets. It was developed by Grove Karl Gilbert in 1884 as his ratio of verification (v) and now is frequently referred to as the Critical Success Index in meteorology. It was later developed independently by Paul Jaccard, originally giving the French name coefficient de communauté, and independently formulated again by T. Tanimoto. Thus, the Tanimoto index or Tanimoto coefficient are also used in some fields. However, they are identical in generally taking the ratio of intersection over union. Overview The Jaccard coefficient measures similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets: $J(A,B) = \frac{|A \cap B|}{|A \cup B|}$. Note that by design, $0 \le J(A,B) \le 1$. If $A \cap B$ is empty, then $J(A,B) = 0$. The Jaccard coefficient is widely used in computer science, ecology, genomics, and other sciences, where binary or binarized data are used. Both the exact solution and approximation methods are available for hypothesis testing with the Jaccard coefficient. Jaccard similarity also applies to bags, i.e., multisets. This has a similar formula, but the symbols mean bag intersection and bag sum (not union). The maximum value is 1/2. The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard coefficient and is obtained by subtracting the Jaccard coefficient from 1, or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union: $d_J(A,B) = 1 - J(A,B) = \frac{|A \cup B| - |A \cap B|}{|A \cup B|}$. An alternative interpretation of the Jaccard distance is as the ratio of the size of the symmetric difference to the union. Jaccard distance is commonly used to calculate an n × n matrix for clustering and multidimensional scaling of n sample sets. This distance is a metric on the collection of all finite sets. There is also a version of the Jaccard dista
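The definitions above translate directly into code. A minimal Python sketch (returning 1.0 for two empty sets is one common convention, not part of the definition in the text):

```python
def jaccard_index(a: set, b: set) -> float:
    """J(A, B) = |A intersect B| / |A union B|."""
    if not a and not b:
        return 1.0  # convention for two empty sets
    return len(a & b) / len(a | b)

def jaccard_distance(a: set, b: set) -> float:
    """d_J(A, B) = 1 - J(A, B)."""
    return 1.0 - jaccard_index(a, b)

print(jaccard_index({1, 2, 3}, {2, 3, 4}))     # |{2,3}| / |{1,2,3,4}| = 0.5
print(jaccard_distance({1, 2, 3}, {2, 3, 4}))  # 0.5
```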
https://en.wikipedia.org/wiki/Amplified%20fragment%20length%20polymorphism
AFLP-PCR, or just AFLP, is a PCR-based tool used in genetics research, DNA fingerprinting, and in the practice of genetic engineering. Developed in the early 1990s by KeyGene, AFLP uses restriction enzymes to digest genomic DNA, followed by ligation of adaptors to the sticky ends of the restriction fragments. A subset of the restriction fragments is then selected to be amplified. This selection is achieved by using primers complementary to the adaptor sequence, the restriction site sequence and a few nucleotides inside the restriction site fragments (as described in detail below). The amplified fragments are separated and visualized by denaturing gel electrophoresis, either through autoradiography or fluorescence methodologies, or via automated capillary sequencing instruments. Although AFLP should not be used as an acronym, it is commonly referred to as "amplified fragment length polymorphism". However, the resulting data are not scored as length polymorphisms, but instead as presence-absence polymorphisms. AFLP-PCR is a highly sensitive method for detecting polymorphisms in DNA. The technique was originally described by Vos and Zabeau in 1993. In detail, the procedure of this technique is divided into three steps:
Digestion of total cellular DNA with one or more restriction enzymes and ligation of restriction half-site specific adaptors to all restriction fragments.
Selective amplification of some of these fragments with two PCR primers that have corresponding adaptor and restriction site specific sequences.
Electrophoretic separation of amplicons on a gel matrix, followed by visualisation of the band pattern.
Applications The AFLP technology has the capability to detect various polymorphisms in different genomic regions simultaneously. It is also highly sensitive and reproducible. As a result, AFLP has become widely used for the identification of genetic variation in strains or closely related species of plants, fungi, animals, and bacteria. The AFL
https://en.wikipedia.org/wiki/Microneme
Micronemes are secretory organelles possessed by parasitic apicomplexans. Micronemes are located on the apical third of the protozoan body. They are surrounded by a typical unit membrane. On electron microscopy they have an electron-dense matrix due to the high protein content. They are specialized secretory organelles important for host-cell invasion and gliding motility. These organelles secrete several proteins, such as the Plasmodium falciparum apical membrane antigen-1 (PfAMA1) and the erythrocyte binding antigen (EBA) family proteins. These proteins specialize in binding to erythrocyte surface receptors and facilitating erythrocyte entry. Only by this initial chemical exchange can the parasite enter the erythrocyte via the actin-myosin motor complex. It has been posited that this organelle works cooperatively with its counterpart organelle, the rhoptry, which is also a secretory organelle. It is possible that, while the microneme initiates erythrocyte binding, the rhoptry secretes proteins to create the parasitophorous vacuole membrane (PVM), in which the parasite can survive and reproduce. See also Dense granule Rhoptry
https://en.wikipedia.org/wiki/GeneXus
GeneXus is a low-code, cross-platform, knowledge representation-based development tool, mainly oriented towards enterprise-class applications for the web, smart devices, and the Microsoft Windows platform. GeneXus uses a mostly declarative language to generate native code for multiple environments. It includes a normalization module, which creates and maintains an optimal database structure based on user views. The languages for which code can be generated include COBOL, Java, Objective-C, RPG, Ruby, Visual Basic, and Visual FoxPro. Some of the DBMSs supported are Microsoft SQL Server, Oracle, IBM Db2, Informix, PostgreSQL, and MySQL. GeneXus was developed by the Uruguayan company ARTech Consultores SRL, which was later renamed Genexus SA. The latest version is GeneXus 17, released on October 20, 2020. See also Comparison of code generation tools List of low-code development platforms
https://en.wikipedia.org/wiki/Spaghetti%20sort
Spaghetti sort is a linear-time, analog algorithm for sorting a sequence of items, introduced by A. K. Dewdney in his Scientific American column. This algorithm sorts a sequence of items requiring O(n) stack space in a stable manner. It requires a parallel processor. Algorithm For simplicity, assume we are sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti: For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in the list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, break a rod in two so that one piece is of length x units; discard the other piece.) Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets with a rod—this one is clearly the longest. Remove this rod and insert it into the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed. Analysis Preparing the n rods of spaghetti takes linear time. Lowering the rods on the table takes constant time, O(1). This is possible because the hand, the spaghetti rods and the table work as a fully parallel computing device. There are then n rods to remove so, assuming each contact-and-removal operation takes constant time, the worst-case time complexity of the algorithm is O(n).
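A sequential simulation of the procedure in Python. The "lower a hand until it touches the tallest rod" step is an O(1) operation for the parallel analog device but costs O(n) on a serial machine, so this simulation runs in O(n²) overall; it models the algorithm's logic, not its claimed running time:

```python
def spaghetti_sort(numbers):
    """Simulate spaghetti sort: repeatedly find the longest remaining rod
    (the rod a descending hand would touch first) and move it to the
    front of the output list."""
    rods = list(numbers)  # each number corresponds to a rod of that length
    output = []
    while rods:
        longest = max(rods)        # the hand meets the tallest rod first
        rods.remove(longest)       # remove that rod from the bundle
        output.insert(0, longest)  # insert at the front of the output list
    return output

print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```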
https://en.wikipedia.org/wiki/Xbox%20Linux
Xbox Linux was a project that ported the Linux operating system to the Xbox video game console. Because the Xbox uses a digital signature system to prevent the public from running unsigned code, one must use either a modchip or a softmod. Originally, modchips were the only option; however, it was later demonstrated that the TSOP chip on which the Xbox's BIOS is held may be reflashed. This way, one may flash on the "Cromwell" BIOS, which was developed legally by the Xbox Linux project. Catalyzed by a large cash prize for the first team to provide the possibility of booting Linux on an Xbox without the need of a hardware hack, numerous software-only hacks were also found. For example, a buffer overflow was found in the game 007: Agent Under Fire that allowed the booting of a Linux loader ("xbeboot") straight from a save game. The Xbox is essentially a PC with a custom 733 MHz Intel Pentium III processor, a 10 GB hard drive (8 GB of which is accessible to the user), 64 MB of RAM (although on all earlier boxes this is upgradable to 128 MB), and 4 USB ports. (The controller ports are actually USB 1.1 ports with a modified connector.) These specifications are enough to run several readily available Linux distributions. From the Xbox-Linux home page: The Xbox is a legacy-free PC by Microsoft that consists of an Intel Celeron 733 MHz CPU, an nVidia GeForce 3MX, 64 MB of RAM, a 8/10 GB hard disk, a DVD drive and 10/100 Ethernet. As on every PC, you can run Linux on it. An Xbox with Linux can be a full desktop computer with mouse and keyboard, a web/email box connected to TV, a server or router or a node in a cluster. You can either dual-boot or use Linux only; in the latter case, you can replace both IDE devices. And yes, you can connect the Xbox to a VGA monitor. Uses An Xbox with Linux installed can act as a full desktop computer with mouse and keyboard, a web/email box connected to a television, a server, router or a node in a cluster. One can either dual-boot or
https://en.wikipedia.org/wiki/The%20Right%20to%20Read
The Right to Read is a short story by Richard Stallman, the founder of the Free Software Foundation, which was first published in 1997 in Communications of the ACM. It is a cautionary tale set in the year 2047, when DRM-like technologies are employed to restrict the readership of books and when the sharing of books and written material is a crime punishable by imprisonment. In particular, the story touches on the impact of such a system on university students, given their need for materials; one of them, Dan Halbert, is forced into a dilemma in which he must decide whether to lend his computer to a fellow student, Lissa Lenz, who would then have the ability to illegally access his purchased documents. It is notable for being written before the use of Digital Rights Management (DRM) technology was widespread (although DVD video discs, which used DRM, had appeared the year before, and various proprietary software since the 1970s had made use of some form of copy protection), and for predicting later hardware-based attempts to restrict how users could use content, such as Trusted Computing. See also The Digital Imprimatur External links The Right to Read Free content Digital rights management Free software 1997 short stories
https://en.wikipedia.org/wiki/Preemption%20%28computing%29
In computing, preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a later time. This interruption is performed by an external scheduler with no assistance or cooperation from the task. This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of a processor are known as context switching. User mode and kernel mode In any given system design, some operations performed by the system may not be preemptable. This usually applies to kernel functions and service interrupts which, if not permitted to run to completion, would tend to produce race conditions resulting in deadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense of system responsiveness. The distinction between user mode and kernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable. Most modern operating systems have preemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems are Solaris 2.0/SunOS 5.0, Windows NT, the Linux kernel (2.5.4 and newer), AIX and some BSD systems (NetBSD, since version 5). Preemptive multitasking The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system, wherein processes or tasks must be explicitly programmed to yield when they do not need system resources. In simple terms: preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time over any given stretch of time. In preemptive multit
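A toy illustration in Python of the preemptive idea: a round-robin scheduler enforces a fixed time quantum, so a task is suspended and requeued without any cooperation on its part. The task names and work amounts are invented for the example:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate preemptive round-robin scheduling over (name, work) pairs."""
    queue = deque(tasks)
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)  # run until done or until preempted
        clock += ran
        remaining -= ran
        state = "finished" if remaining == 0 else "preempted"
        print(f"t={clock:3d}: {name} ran {ran} units ({state})")
        if remaining:
            queue.append((name, remaining))  # requeue the interrupted task

# hypothetical workloads, in abstract work units
round_robin([("A", 7), ("B", 3), ("C", 5)], quantum=2)
```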
https://en.wikipedia.org/wiki/Honda%20E%20series
The E series was a collection of successive humanoid robots created by the Honda Motor Company between 1986 and 1993. These robots were only experimental, but later evolved into the Honda P series, with Honda eventually amassing the knowledge and experience necessary to create its advanced humanoid robot, ASIMO. The fact that Honda had been developing the robots was kept secret from the public until the announcement of the Honda P2 in 1996. E0, developed in 1986, was the first robot. It walked in a straight line on two feet, in a manner resembling human locomotion, taking around 5 seconds to complete a single step. Engineers quickly realised that in order to walk up slopes, the robot would need to travel faster. The model has 6 degrees of freedom: one in each groin, one in each knee and one in each ankle. Models
E0, developed in 1986.
E1, developed in 1987, was larger than the first and walked at 0.25 km/h. This model and subsequent E-series robots have 12 degrees of freedom: 3 in each groin, 1 in each knee and 2 in each ankle.
E2, developed in 1989, could travel at 1.2 km/h, through the development of "dynamic movement".
E3, developed in 1991, travelled at 3 km/h, the average speed of a walking human.
E4, developed in 1991, lengthened the knee to achieve speeds of up to 4.7 km/h.
E5, developed in 1992, was able to walk autonomously, albeit with a very large head.
E6, developed in 1993, was able to autonomously balance, walk over obstacles, and even climb stairs.
See also Honda P series
https://en.wikipedia.org/wiki/Renninger%20negative-result%20experiment
In quantum mechanics, the Renninger negative-result experiment is a thought experiment that illustrates some of the difficulties of understanding the nature of wave function collapse and measurement in quantum mechanics. The statement is that a particle need not be detected in order for a quantum measurement to occur, and that the lack of a particle detection can also constitute a measurement. The thought experiment was first posed in 1953 by Mauritius Renninger. The non-detection of a particle in one arm of an interferometer implies that the particle must be in the other arm. It can be understood to be a refinement of the paradox presented in the Mott problem. The Mott problem The Mott problem concerns the paradox of reconciling the spherical wave function describing the emission of an alpha ray by a radioactive nucleus, with the linear tracks seen in a cloud chamber. Formulated in 1929 by Sir Nevill Francis Mott and Werner Heisenberg, it was resolved by a calculation done by Mott that showed that the correct quantum mechanical system must include the wave functions for the atoms in the cloud chamber as well as that for the alpha ray. The calculation showed that the resulting probability is non-zero only on straight lines raying out from the decayed atom; that is, once the measurement is performed, the wave-function becomes non-vanishing only near the classical trajectory of a particle. Renninger's negative-result experiment In Renninger's 1960 formulation, the cloud chamber is replaced by a pair of hemispherical particle detectors, completely surrounding a radioactive atom at the center that is about to decay by emitting an alpha ray. For the purposes of the thought experiment, the detectors are assumed to be 100% efficient, so that the emitted alpha ray is always detected. By consideration of the normal process of quantum measurement, it is clear that if one detector registers the decay, then the other will not: a single particle cannot be detected by bot
https://en.wikipedia.org/wiki/Intervalence%20charge%20transfer
In chemistry, intervalence charge transfer, often abbreviated IVCT or even IT, is a type of charge-transfer band that is associated with mixed-valence compounds. It is most common for systems with two metal sites differing only in oxidation state. Quite often such electron transfer reverses the oxidation states of the sites. The term is frequently extended to the case of metal-to-metal charge transfer between non-equivalent metal centres. The transition produces a characteristically intense absorption in the electromagnetic spectrum. The band is usually found in the visible or near-infrared region of the spectrum and is broad. The process can be described as follows: LₙM⁺–bridge–M′Lₙ + hν → LₙM–bridge–M′⁺Lₙ where L is a ligand. Mixed valency and the IT band Since the energy states of valence tautomers affect the IVCT band, the strength of electronic interaction between the sites, known as α (the mixing coefficient), can be determined by analysis of the IVCT band. Depending on the value of α, mixed-valence complexes are classified into three groups:
Class I: α ≈ 0, the complex has no interaction between redox sites. No IVCT band is observed. The oxidation states of the two metal sites are distinct and do not readily interconvert.
Class II: 0 < α ≤ 0.707, intermediate interaction between sites. An IVCT band is observed. The oxidation states of the two metal sites are distinct, but they readily interconvert. This is by far the most common class of intervalence complexes.
Class III: α ≥ 0.707, interaction between redox sites is very strong. It is better to consider these sites as one united site, not as two isolated sites. An IVCT band is observed. The oxidation states of the two metal sites are essentially equivalent. In these situations, the two metals are often best described as having the same half-integer oxidation state.
https://en.wikipedia.org/wiki/Allium%20montanum
The scientific name Allium montanum has been used for at least six different species of Allium. Allium montanum Schrank – the only legitimate name, first used in 1785, now considered a synonym of Allium schoenoprasum subsp. schoenoprasum (chives) The following names are synonyms for other species:
Allium montanum F.W.Schmidt, nom. illeg. = Allium lusitanicum
Allium montanum Guss., nom. illeg. = Allium tenuiflorum
Allium montanum Rchb., nom. illeg. = Allium flavum subsp. flavum
Allium montanum Sm., nom. illeg. = Allium sibthorpianum
Allium montanum Ten., nom. illeg. = Allium cupani subsp. cupani
https://en.wikipedia.org/wiki/Christen%20C.%20Raunki%C3%A6r
Christen Christensen Raunkiær (29 March 1860 – 11 March 1938) was a Danish botanist, who was a pioneer of plant ecology. He is mainly remembered for his scheme of plant strategies to survive an unfavourable season ("life forms") and his demonstration that the relative abundance of strategies in floras largely corresponded to the Earth's climatic zones. This scheme, the Raunkiær system, is still widely used today and may be seen as a precursor of modern plant strategy schemes, e.g. J. Philip Grime's CSR system. Life He was born on a small heathland farm, named Raunkiær, in Lyhne parish in western Jutland, Denmark, and later took his surname from it. He succeeded Eugen Warming as professor in botany at the University of Copenhagen and director of the Copenhagen Botanical Garden, a position he held from 1912 to 1923. He was married to the author and artist Ingeborg Raunkiær (1863–1921), who accompanied him on journeys to the West Indies and the Mediterranean and made line drawings for his botanical works. They divorced in 1915, the same year their only son Barclay Raunkiær died. Raunkiær later married the botanist Agnete Seidelin (1874–1956). Raunkiær's research axiom was that everything countable in nature should be subjected to numerical analysis, e.g. the number of male and female catkins in monoecious plants and the number of male and female individuals in dioecious plants. Raunkiær was also an early student of apomixis in flowering plants and of hybrid swarms. In addition, he studied the effect of soil pH on plants and of plants on soil pH, work his apprentice Carsten Olsen continued. After his retirement, Raunkiær made numerical studies of plants and flora in the literature ("The flora and the heathland poets", "The dandelion in Danish poetry", "Plants in the psalms"). In these studies, he applied strict quantitative criteria, as in his ecological studies. For example, he defined a poet as a person who has written 1,000 or more lines of verse. Legacy
https://en.wikipedia.org/wiki/Explicit%20formulae%20for%20L-functions
In mathematics, the explicit formulae for L-functions are relations between sums over the complex number zeroes of an L-function and sums over prime powers, introduced by Riemann for the Riemann zeta function. Such explicit formulae have also been applied to questions on bounding the discriminant of an algebraic number field, and the conductor of a number field. Riemann's explicit formula In his 1859 paper "On the Number of Primes Less Than a Given Magnitude" Riemann sketched an explicit formula (it was not fully proven until 1895 by von Mangoldt, see below) for the normalized prime-counting function $\pi_0(x)$, which is related to the prime-counting function $\pi(x)$ by $\pi_0(x) = \lim_{h \to 0} \tfrac{1}{2}\bigl(\pi(x+h) + \pi(x-h)\bigr)$, which takes the arithmetic mean of the limit from the left and the limit from the right at discontinuities. His formula was given in terms of the related function $f(x) = \pi_0(x) + \tfrac{1}{2}\pi_0(x^{1/2}) + \tfrac{1}{3}\pi_0(x^{1/3}) + \cdots$, in which a prime power $p^n$ counts as $1/n$ of a prime. The normalized prime-counting function can be recovered from this function by $\pi_0(x) = \sum_{n \ge 1} \frac{\mu(n)}{n} f(x^{1/n})$, where $\mu$ is the Möbius function. Riemann's formula is then $f(x) = \operatorname{li}(x) - \sum_{\rho} \operatorname{li}(x^{\rho}) - \log 2 + \int_x^{\infty} \frac{dt}{t(t^2 - 1)\log t}$, involving a sum over the non-trivial zeros $\rho$ of the Riemann zeta function. The sum is not absolutely convergent, but may be evaluated by taking the zeros in order of the absolute value of their imaginary part. The function $\operatorname{li}$ occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral $\operatorname{li}(x) = \int_0^x \frac{dt}{\log t}$. The terms $\operatorname{li}(x^{\rho})$ involving the zeros of the zeta function need some care in their definition, as $\operatorname{li}$ has branch points at 0 and 1, and are defined by analytic continuation in the complex variable $\rho$ in the region $x > 1$ and $\operatorname{Re}(\rho) > 0$. The other terms also correspond to zeros: the dominant term $\operatorname{li}(x)$ comes from the pole at $s = 1$, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. (For graphs of the sums of the first few terms of this series see .) The first rigorous proof of the af
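A quick numerical look at the dominant term, comparing the prime-counting function with the unoffset logarithmic integral (a sketch using sympy and mpmath; the gap between the two columns is what the omitted oscillating zero terms account for):

```python
from sympy import primepi   # exact prime-counting function pi(x)
from mpmath import li       # unoffset logarithmic integral li(x)

for x in [100, 1000, 10000, 100000]:
    print(f"x = {x:>6}: pi(x) = {int(primepi(x)):>5}, li(x) = {float(li(x)):10.1f}")
```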
https://en.wikipedia.org/wiki/Raunki%C3%A6r%20plant%20life-form
The Raunkiær system is a system for categorizing plants using life-form categories, devised by Danish botanist Christen C. Raunkiær and later extended by various authors. History It was first proposed in a talk to the Danish Botanical Society in 1904, as can be inferred from the printed discussion of that talk; neither the talk itself nor its title was published. The journal Botanisk Tidsskrift published brief comments on the talk by M.P. Porsild, with replies by Raunkiær. A fuller account appeared in French the following year. Raunkiær elaborated further on the system and published this in Danish in 1907. The original note and the 1907 paper were much later translated into English and published with Raunkiær's collected works. Modernization Raunkiær's life-form scheme has subsequently been revised and modified by various authors, but the main structure has survived. Raunkiær's life-form system may be useful in researching the transformations of biotas and the genesis of some groups of phytophagous animals. Subdivisions The subdivisions of the Raunkiær system are premised on the location of the bud of a plant during seasons with adverse conditions, i.e. cold seasons and dry seasons: Phanerophytes These plants, normally woody perennials, grow stems into the air, with their resting buds being more than 50 cm above the soil surface, e.g. trees and shrubs, and also epiphytes, which Raunkiær later separated as a distinct class (see below). Raunkiær further divided the phanerophytes according to height as Megaphanerophytes, Mesophanerophytes, Microphanerophytes, and Nanophanerophytes. Further division was premised on the characters of duration of foliage, i.e. evergreen or deciduous, and the presence of covering bracts on buds, for 8 classes. Three further divisions were made to increase the total to 12 classes: Phanerophytic stem succulents, Phanerophytic epiphytes, and Phanerophytic herbs. Epiphytes Epiphytes were originally included in the phanerophytes (see above) but were then sepa
https://en.wikipedia.org/wiki/Dysplasia
Dysplasia is any of various types of abnormal growth or development of cells (microscopic scale) or organs (macroscopic scale), and the abnormal histology or anatomical structure(s) resulting from such growth. Dysplasias on a mainly microscopic scale include epithelial dysplasia and fibrous dysplasia of bone. Dysplasias on a mainly macroscopic scale include hip dysplasia, myelodysplastic syndrome, and multicystic dysplastic kidney. In one of the modern histopathological senses of the term, dysplasia is sometimes differentiated from other categories of tissue change including hyperplasia, metaplasia, and neoplasia, and dysplasias are thus generally not cancerous. An exception is that the myelodysplasias include a range of benign, precancerous, and cancerous forms. Various other dysplasias tend to be precancerous. The word's meanings thus cover a spectrum of histopathological variations. Microscopic scale Epithelial dysplasia Epithelial dysplasia consists of an expansion of immature cells (such as cells of the ectoderm), with a corresponding decrease in the number and location of mature cells. Dysplasia is often indicative of an early neoplastic process. The term dysplasia is typically used when the cellular abnormality is restricted to the originating tissue, as in the case of an early, in-situ neoplasm. Dysplasia, in which cell maturation and differentiation are delayed, can be contrasted with metaplasia, in which cells of one mature, differentiated type are replaced by cells of another mature, differentiated type. Myelodysplastic syndrome Myelodysplastic syndromes (MDS) are a group of cancers in which immature blood cells in the bone marrow do not mature and therefore do not become healthy blood cells. Problems with blood cell formation result in some combination of low red blood cells, low platelets, and low white blood cells. Some types have an increase in immature blood cells, called blasts, in the bone marrow or blood. Fibrous dysplasia of bone Fi
https://en.wikipedia.org/wiki/AUO%20Corporation
AUO Corporation (AUO) is a Taiwanese company that specialises in optoelectronic solutions. It was formed in September 2001 by the merger of Acer Display Technology, Inc. (the predecessor of AUO, established in 1996) and Unipac Optoelectronics Corporation. AUO offers display panel products and solutions, and in recent years has expanded its business to smart retail, smart transportation, general health, solar energy, the circular economy and smart manufacturing services. AUO employs 38,000 people. History
August 1996: Acer Display Technology, Inc. (the predecessor of AUO) was founded
September 2000: Listed on the Taiwan Stock Exchange Corporation
September 2001: Merged with Unipac Optoelectronics Corporation to form AUO
October 2006: Merged with Quanta Display Inc.
December 2008: Entered the solar business
March 2009: G8.5 fab in Taichung recognized as the world's first LEED Gold certified TFT-LCD facility
June 2009: Joint venture with Changhong in Sichuan, China to set up a module plant
April 2010: Joint venture with TCL in China to set up a module plant
July 2010: Acquired AFPD Pte., Ltd. ("AFPD"), a subsidiary of Toshiba Mobile Display Co., Ltd. in Singapore
May 2011: G8.5 fab in Houli recognized as the world's first LEED Platinum certified TFT-LCD facility
June 2011: Acquired the world's first ISO 50001 certification for manufacturing facilities
February 2012: OLED strategic alliance formed with Idemitsu in Japan
July 2012: Acquired the world's first ISO 14045 eco-efficiency assessment of product systems verification
June 2013: G8.5 fab in Houli received Taiwan's first Diamond Certification for Green Factory Building
November 2013: Named to the 2013/2014 Ocean Tomo 300® Patent Index
April 2014: Initiated a new model of solar power plant operation by founding Star River Energy Corporation
May 2014: CSR Report acquired Taiwan's first GRI G4 certificate in the manufacturing industry
May 2015: Ranked among the top 5% of companies in the first Corporate Governance Evaluation released by Taiwan
https://en.wikipedia.org/wiki/Concurrent%20lines
In geometry, lines in a plane or higher-dimensional space are concurrent if they intersect at a single point. They are in contrast to parallel lines. Examples Triangles In a triangle, four basic types of sets of concurrent lines are altitudes, angle bisectors, medians, and perpendicular bisectors: A triangle's altitudes run from each vertex and meet the opposite side at a right angle. The point where the three altitudes meet is the orthocenter. Angle bisectors are rays running from each vertex of the triangle and bisecting the associated angle. They all meet at the incenter. Medians connect each vertex of a triangle to the midpoint of the opposite side. The three medians meet at the centroid. Perpendicular bisectors are lines running out of the midpoints of each side of a triangle at 90 degree angles. The three perpendicular bisectors meet at the circumcenter. Other sets of lines associated with a triangle are concurrent as well. For example: Any median (which is necessarily a bisector of the triangle's area) is concurrent with two other area bisectors each of which is parallel to a side. A cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. The three cleavers concur at the center of the Spieker circle, which is the incircle of the medial triangle. A splitter of a triangle is a line segment having one endpoint at one of the three vertices of the triangle and bisecting the perimeter. The three splitters concur at the Nagel point of the triangle. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter, and each triangle has one, two, or three of these lines. Thus if there are three of them, they concur at the incenter. The Tarry point of a triangle is the point of concurrency of the lines through the vertices of the triangle perpendicular to the corresponding sides of the triangle's first
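Concurrency is easy to check numerically with homogeneous coordinates: the line through two points is the cross product of their homogeneous coordinates, and three lines are concurrent exactly when the determinant of their stacked coefficients vanishes. A small Python sketch (triangle vertices chosen arbitrarily) verifying that the medians meet at a single point, the centroid:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous coordinates of the line through points p and q."""
    return np.cross([*p, 1.0], [*q, 1.0])

def concurrent(l1, l2, l3, tol=1e-9):
    """Three lines are concurrent iff det of their coefficient matrix is 0."""
    return abs(np.linalg.det(np.array([l1, l2, l3]))) < tol

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
medians = [
    line_through(A, (B + C) / 2),  # vertex to midpoint of opposite side
    line_through(B, (A + C) / 2),
    line_through(C, (A + B) / 2),
]
print(concurrent(*medians))  # True: the medians meet at the centroid
```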
https://en.wikipedia.org/wiki/Opisthorchis%20viverrini
Opisthorchis viverrini, common name Southeast Asian liver fluke, is a food-borne trematode parasite from the family Opisthorchiidae that infects the bile duct. People are infected after eating raw or undercooked fish. Infection with the parasite is called opisthorchiasis. O. viverrini infection also increases the risk of cholangiocarcinoma, a cancer of the bile ducts. A small, leaf-like fluke, O. viverrini completes its lifecycle in three different animals. Snails of the genus Bithynia are the first intermediate hosts, fish belonging to the family Cyprinidae are the second intermediate hosts, and the definitive hosts are humans and other mammals such as dogs, cats, rats, and pigs. It was first discovered in the Indian fishing cat (Prionailurus viverrinus) by M.J. Poirier in 1886. The first human case was discovered by Robert Thomson Leiper in 1915. O. viverrini (together with Clonorchis sinensis and Opisthorchis felineus) is one of the three most medically important species in the family Opisthorchiidae. In fact, O. viverrini and C. sinensis are capable of causing cancer in humans, and were classified by the International Agency for Research on Cancer as group 1 biological carcinogens in 2009. O. viverrini is found in Thailand, Laos, Vietnam, and Cambodia. It is most widely distributed in northern Thailand, with high prevalence in humans, while central Thailand has a low rate of prevalence. Discovery O. viverrini was first described by the French parasitologist Jules Poirier in 1886, who discovered the parasite in an Indian fishing cat (Prionailurus viverrinus), originally from Southeast Asia, that died in the Zoological Gardens attached to the National Museum of Natural History in Paris. He named it Distomum viverrini. American parasitologists Charles Wardell Stiles and Albert Hassall redescribed it and assigned it to the existing genus Opisthorchis (created by the French zoologist Raphaël Blanchard) in 1891. The first human specimen was described by a British pa
https://en.wikipedia.org/wiki/Lentiform%20nucleus
The lentiform nucleus (or lentiform complex, lenticular nucleus, or lenticular complex) is the putamen (laterally) and the globus pallidus (medially), considered collectively. Due to their proximity, these two structures were formerly considered one; however, they are separated by a thin layer of white matter, the external medullary lamina, and are functionally and connectionally distinct. The lentiform nucleus is a large, lens-shaped mass of gray matter just lateral to the internal capsule. It forms part of the basal ganglia. With the caudate nucleus, it forms the dorsal striatum. Structure When divided horizontally, it exhibits, to some extent, the appearance of a biconvex lens, while a coronal section of its central part presents a somewhat triangular outline. It is shorter than the caudate nucleus and does not extend as far forward. Relations It is deep/medial to the insular cortex, with which it is coextensive; the two are separated by intervening structures. It is lateral to the caudate nucleus and thalamus, and is seen only in sections of the hemisphere. It is bounded laterally by a lamina of white substance called the external capsule, and lateral to this is a thin layer of gray substance termed the claustrum. Its anterior end is continuous with the lower part of the head of the caudate nucleus and with the anterior perforated substance. Inferiorly, there is a groove upon the surface of the lenticular nucleus that accommodates the anterior commissure. Components In a coronal section through the middle of the lentiform nucleus, two medullary laminae are seen dividing it into three parts. The lateral and largest part is of a reddish color and is known as the putamen, while the medial and intermediate parts are of a yellowish tint and together constitute the globus pallidus; all three are marked by fine radiating white fibers, which are most distinct in the putamen. Pathology Increased volume of the lentiform nuclei has been observed in obsessive–compulsi
https://en.wikipedia.org/wiki/Disk%20Utility
Disk Utility is a system utility for performing disk and disk volume-related tasks on the macOS operating system by Apple Inc. Functions The functions currently supported by Disk Utility include:
Creation, conversion, backup, compression, and encryption of logical volume images from a wide range of formats read by Disk Utility to .dmg or, for CD/DVD images, .cdr
Mounting, unmounting and ejecting disk volumes (including hard disks, removable media, and disk volume images)
Enabling or disabling journaling
Verifying a disk's integrity, and repairing it if the disk is damaged (this works for both Mac-compatible format partitions and for FAT32 partitions with Microsoft Windows installed)
Erasing, formatting, partitioning, and cloning disks
Secure deletion of free space or a whole disk using a "zero out" pass, a 7-pass DOD 5220-22 M standard, or a 35-pass Gutmann algorithm
Adding or changing the partition table between Apple Partition Map, GUID Partition Table, and master boot record (MBR)
Restoring volumes from Apple Software Restore (ASR) images
Checking the S.M.A.R.T. status of a hard disk
Disk Utility functions may also be accessed from the macOS command line with the diskutil and hdiutil commands. It is also possible to create and manage RAM disk images using hdiutil and diskutil in the Terminal. History In the classic Mac OS, similar functionality to the verification features of Disk Utility could be found in the Disk First Aid application. Another application called Drive Setup was used for drive formatting and partitioning, and the application Disk Copy was used for working with disk images. Before Mac OS X Panther, the functionality of Disk Utility was spread across two applications: Disk Copy and Disk Utility. Disk Copy was used for creating and mounting disk image files, whereas Disk Utility was used for formatting, partitioning, verifying, and repairing file structures. The ability to "zero" all data (multi-pass formatting) on a disk was not added until
https://en.wikipedia.org/wiki/Amastigote
An amastigote is a protist cell that does not have visible external flagella or cilia. The term is used mainly to describe an intracellular phase in the life-cycle of trypanosomes that replicates. It is also called the leishmanial stage, since in Leishmania it is the form the parasite takes in the vertebrate host, but occurs in all trypanosome genera.
https://en.wikipedia.org/wiki/Annexin
Annexin is a common name for a group of cellular proteins. They are mostly found in eukaryotic organisms (animals, plants and fungi). In humans, the annexins are found inside the cell. However, some annexins (annexin A1, annexin A2, and annexin A5) can be secreted from the cytoplasm to outside cellular environments, such as blood. Annexin is also known as lipocortin. Lipocortins suppress phospholipase A2. Increased expression of the gene coding for annexin-1 is one of the mechanisms by which glucocorticoids (such as cortisol) inhibit inflammation. Introduction The protein family of annexins has continued to grow since their association with intracellular membranes was first reported in 1977. The recognition that these proteins were members of a broad family first came from protein sequence comparisons and their cross-reactivity with antibodies. One of the researchers involved, Geisow, coined the name annexin shortly afterwards. As of 2002, 160 annexin proteins had been identified in 65 different species. The criteria that a protein has to meet to be classified as an annexin are: it has to be capable of binding negatively charged phospholipids in a calcium-dependent manner and must contain a 70 amino acid repeat sequence called an annexin repeat. Several proteins consist of an annexin domain combined with other domains, such as gelsolin. The basic structure of an annexin is composed of two major domains. The first is located at the COOH terminal and is called the “core” region. The second is located at the NH2 terminal and is called the “head” region. The core region consists of an alpha helical disk. The convex side of this disk has type 2 calcium-binding sites. They are important for allowing interaction with the phospholipids at the plasma membrane. The N-terminal region is located on the concave side of the core region and is important for providing a binding site for cytoplasmic proteins. In some annexins it can become phosphorylated and can cause affinity changes for calcium in the core region o
https://en.wikipedia.org/wiki/DEC%20Multia
The Multia, later re-branded the Universal Desktop Box, was a line of desktop computers introduced by Digital Equipment Corporation on 7 November 1994. The line is notable in that units were offered with either an Alpha AXP or Intel Pentium processor as the CPU, and most hardware other than the backplane and CPU was interchangeable. Both the Alpha and Intel versions were intended to run Windows NT. The Multia had a compact case that left little room for expansion cards and restricted air flow, which can cause premature hardware failure due to overheating if the machine is not properly cared for. Enthusiasts remedy this by placing the Multia vertically instead of horizontally, allowing the heated air to escape via vents at the top, although this still requires preventing the Multia from overheating due to other factors, e.g. environmental ones.

Hardware specifications
The Alpha Multias included either an Alpha 21066 or Alpha 21066A microprocessor running at 166 MHz or 233 MHz respectively, and came with 16 or 24 MB of RAM as standard (officially expandable to 128 MB, but in practice to 256 MB). Because the 21066 was a budget version of the Alpha 21064 processor, it had a narrower (64-bit versus 128-bit) and slower bus, and thus performance was roughly equivalent to a Pentium running at 100 MHz for integer operations, but superior in floating-point; furthermore, the standard RAM capacity was a severe restriction on the performance of these workstations. The Alpha-based Multias came with the TGA (DEC 21030) graphics adapter. Standard peripherals on both Alpha and Intel models included a SCSI host adapter, DEC 21040 Ethernet controller, two PCMCIA slots, two RS-232 ports, a bi-directional parallel port, a 2.5 in or 3.5 in SCSI or ATA hard disk (340 MB to 1.6 GB), PS/2 keyboard and mouse ports, and a PCI slot (on models with 2.5-inch hard disks).

Models
Multia models comprised:
Alpha Multia (codenamed QuickSilver):
VX40: 166 MHz 21066, optional floppy disk drive and external SCS
https://en.wikipedia.org/wiki/History%20of%20fluid%20mechanics
The history of fluid mechanics is a fundamental strand of the history of physics and engineering. The study of the movement of fluids (liquids and gases) and the forces that act upon them dates back to pre-history. The field has undergone a continuous evolution, driven by human dependence on water, meteorological conditions and internal biological processes. The success of early civilizations can be attributed to developments in the understanding of water dynamics, allowing for the construction of canals and aqueducts for water distribution and farm irrigation, as well as maritime transport. Due to its conceptual complexity, most discoveries in this field relied almost entirely on experiments, at least until the development of advanced understanding of differential equations and computational methods. Significant theoretical contributions were made by notable figures like Archimedes, Johann Bernoulli, Leonhard Euler, Claude-Louis Navier and George Gabriel Stokes, who developed the fundamental equations to describe fluid mechanics. Advancements in experimentation and computational methods have further propelled the field, leading to practical applications in more specialized industries ranging from aerospace to environmental engineering. Fluid mechanics has also been important for the study of astronomical bodies and the dynamics of galaxies.

Antiquity
Pre-history
A pragmatic, if not scientific, knowledge of fluid flow was exhibited by ancient civilizations, such as in the design of arrows, spears, boats, and particularly hydraulic engineering projects for flood protection, irrigation, drainage, and water supply. The earliest human civilizations began near the shores of rivers, and consequently coincided with the dawn of hydrology, hydraulics, and hydraulic engineering.

Ancient China
Observations of specific gravity and buoyancy were recorded by ancient Chinese philosophers. In the 4th century BCE, Mencius described how a weight of gold could be equivalent to a weight of feathers. In 3rd
https://en.wikipedia.org/wiki/Direct%20fluorescent%20antibody
A direct fluorescent antibody (DFA or dFA), also known as "direct immunofluorescence", is an antibody that has been tagged in a direct fluorescent antibody test. Its name derives from the fact that it directly tests the presence of an antigen with the tagged antibody, unlike western blotting, which uses an indirect method of detection, where the primary antibody binds the target antigen, with a secondary antibody directed against the primary, and a tag attached to the secondary antibody. Commercial DFA testing kits are available, which contain fluorescently labelled antibodies designed to specifically target unique antigens present in bacteria or viruses but not present in mammals (eukaryotes). This technique can be used to quickly determine whether a subject has a specific viral or bacterial infection. In the case of respiratory viruses, many of which have similar broad symptoms, detection can be carried out using nasal wash samples from the subject with the suspected infection. Although shedding cells from the respiratory tract can be obtained, they are often present in low numbers, so an alternative method can be adopted in which a compatible cell culture is exposed to the infected nasal wash sample; if the virus is present, it can be grown up to a larger quantity, which can then give a clearer positive or negative reading. As with all types of fluorescence microscopy, the correct absorption wavelength needs to be determined in order to excite the fluorophore tag attached to the antibody, and to detect the fluorescence given off, which indicates which cells are positive for the presence of the virus or bacteria being detected. Direct immunofluorescence can be used to detect deposits of immunoglobulins and complement proteins in biopsies of skin, kidney and other organs. Their presence is indicative of an autoimmune disease. When skin not exposed to the sun is tested, a positive direct IF (the so-called lupus band test) is evidence of systemic lupus erythematosus. Direct
https://en.wikipedia.org/wiki/Immunomagnetic%20separation
Immunomagnetic separation (IMS) is a laboratory tool that can efficiently isolate cells out of body fluid or cultured cells. It can also be used as a method of quantifying the pathogenicity of food, blood or feces. DNA analyses have supported the combined use of this technique and the polymerase chain reaction (PCR). Another laboratory separation tool is affinity magnetic separation (AMS), which is more suitable for the isolation of prokaryotic cells. IMS deals with the isolation of cells, proteins, and nucleic acids through the specific capture of biomolecules by small magnetized particles (beads) coated with antibodies or lectins. These beads bind to the targeted biomolecules, are gently separated, and go through multiple cycles of washing; the targeted molecules, bound to these superparamagnetic beads, are then eluted and collected in the supernatant, from which the concentration of the specifically targeted biomolecules can be determined. IMS obtains certain concentrations of specific molecules within targeted bacteria. A mixed cell population is placed in a magnetic field, where cells attached to superparamagnetic beads (a specific example is Dynabeads, 4.5 μm) remain once the excess substrate is removed, the beads binding to the targeted antigen. Dynabeads consist of iron-containing cores covered by a thin polymer shell that allows the adsorption of biomolecules. The beads are coated with primary antibodies, specific secondary antibodies, lectins, enzymes, or streptavidin; the linkage between the coated beads and the captured cells can be a cleavable DNA linker, allowing separation of the cells from the beads when further culturing of the cells is desired. Many of these beads share the same principles of separation; however, the presence and differing strengths of magnetic fields require certain sizes of beads, based on the ramifications of the separation
https://en.wikipedia.org/wiki/Bottleneck%20%28engineering%29
In engineering, a bottleneck is a phenomenon by which the performance or capacity of an entire system is severely limited by a single component. The component is sometimes called a bottleneck point. The term is metaphorically derived from the neck of a bottle, where the flow speed of the liquid is limited by its neck. Formally, a bottleneck lies on a system's critical path and provides the lowest throughput. System designers usually try to avoid bottlenecks, and a great deal of effort is directed at locating and tuning them. A bottleneck may be, for example, a processor, a communication link, or data-processing software.

Bottlenecks in software
In computer programming, tracking down bottlenecks (sometimes known as "hot spots", sections of the code that execute most frequently, i.e. have the highest execution count) is called performance analysis. Reduction is usually achieved with the help of specialized tools, known as performance analyzers or profilers. The objective is to make those particular sections of code perform as fast as possible, to improve overall algorithmic efficiency.

Bottlenecks in max-min fairness
In a communication network, sometimes a max-min fairness of the network is desired, usually opposed to the basic first-come first-served policy. With max-min fairness, data flow between any two nodes is maximized, but only at the cost of equally or more expensive data flows. To put it another way, in case of network congestion any data flow is only impacted by smaller or equal flows. In such a context, a bottleneck link for a given data flow is a link that is fully utilized (is saturated) and of all the flows sharing this link, the given data flow achieves the maximum data rate network-wide. Note that this definition is substantially different from the common meaning of a bottleneck. Also note that this definition does not forbid a single link from being a bottleneck for multiple flows. A data rate allocation is max-min fair if and only if a data flo
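A sketch of the progressive-filling procedure commonly used to compute a max-min fair allocation on a single saturated link (Python; the flow names, demands and capacity are made-up values for illustration):

def max_min_fair(capacity, demands):
    """Allocate capacity so no flow can gain without hurting a smaller flow."""
    allocation, unsatisfied, cap = {}, dict(demands), capacity
    while unsatisfied:
        share = cap / len(unsatisfied)
        # Flows demanding less than the equal share are fully satisfied.
        done = {f: d for f, d in unsatisfied.items() if d <= share}
        if not done:
            # Every remaining flow wants at least the equal share: split evenly.
            for f in unsatisfied:
                allocation[f] = share
            break
        for f, d in done.items():
            allocation[f] = d
            cap -= d
            del unsatisfied[f]
    return allocation

print(max_min_fair(10.0, {"a": 1.0, "b": 4.0, "c": 8.0}))
# {'a': 1.0, 'b': 4.0, 'c': 5.0} -- the link saturates and "c" is bottlenecked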
https://en.wikipedia.org/wiki/Rainbow%20Series
The Rainbow Series (sometimes known as the Rainbow Books) is a series of computer security standards and guidelines published by the United States government in the 1980s and 1990s. They were originally published by the U.S. Department of Defense Computer Security Center, and then by the National Computer Security Center.

Objective
These standards describe a process of evaluation for trusted systems. In some cases, U.S. government entities (as well as private firms) would require formal validation of computer technology using this process as part of their procurement criteria. Many of these standards have influenced, and have been superseded by, the Common Criteria. The books have nicknames based on the color of their covers. For example, the Trusted Computer System Evaluation Criteria was referred to as "The Orange Book." In the book entitled Applied Cryptography, security expert Bruce Schneier states of NCSC-TG-021 that he "can't even begin to describe the color of [the] cover" and that some of the books in this series have "hideously colored covers." He then goes on to describe how to receive a copy of them, saying "Don't tell them I sent you."

Most significant Rainbow Series books
https://en.wikipedia.org/wiki/Platysma%20muscle
The platysma muscle is a superficial muscle of the human neck that overlaps the sternocleidomastoid. It covers the anterior surface of the neck superficially. When it contracts, it produces a slight wrinkling of the neck, and a "bowstring" effect on either side of the neck. Structure The platysma muscle is a broad sheet of muscle arising from the fascia covering the upper parts of the pectoralis major muscle and deltoid muscle. Its fibers cross the clavicle, and proceed obliquely upward and medially along the side of the neck. This leaves the inferior part of the neck in the midline deficient of significant muscle cover. Fibres at the front of the muscle from the left and right sides intermingle together below and behind the mandibular symphysis, the junction where the two lateral halves of the mandible are fused at an early period of life (although not a true symphysis). Fibres at the back of the muscle cross the mandible, some being inserted into the bone below the oblique line, others into the skin and subcutaneous tissue of the lower part of the face. Many of these fibers blend with the muscles about the angle and lower part of the mouth. Sometimes fibers can be traced to the zygomaticus major muscle, or to the margin of the orbicularis oris muscle. Beneath the platysma, the external jugular vein descends from the angle of the mandible to the clavicle. Nerve supply The platysma muscle is supplied by the cervical branch of the facial nerve. Blood supply The platysma muscle is supplied by branches of the submental artery and suprascapular artery. Relations The platysma muscle lies just deep to the subcutaneous fascia and fat. It covers many structures found deeper in the neck, such as the external carotid artery, the external jugular vein, the parotid gland, the lesser occipital nerve, the great auricular nerve, and the marginal mandibular branch of the facial nerve. Variation Variations occur in the extension over the face and over the clavicle and should
https://en.wikipedia.org/wiki/Surface%20power%20density
In physics and engineering, surface power density is power per unit area.

Applications
The intensity of electromagnetic radiation can be expressed in W/m2. An example of such a quantity is the solar constant. Wind turbines are often compared using a specific power measuring watts per square meter of turbine disk area, which is P/(πr²), where P is the turbine's rated power and r is the length of a blade. This measure is also commonly used for solar panels, at least for typical applications. Radiance is surface power density per unit of solid angle (steradians) in a specific direction. Spectral radiance is radiance per unit of frequency (hertz) at a specific frequency.

Surface power densities of energy sources
Surface power density is an important factor in the comparison of industrial energy sources. The concept was popularised by geographer Vaclav Smil. The term is usually shortened to "power density" in the relevant literature, which can lead to confusion with homonymous or related terms. Measured in W/m2, it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density, which means large power can be drawn from power plants occupying a relatively small area. Renewable energy sources have power density at least three orders of magnitude smaller, and for the same energy output they need to occupy a correspondingly larger area, which has already been highlighted as a limiting factor of renewable energy in the German Energiewende. The following table shows the median surface power density of renewable and non-renewable energy sources.

Background
As an electromagnetic wave travels through space, energy is transferred from the source to other objects (receivers). The rate of this energy transfer depends on the strength of the EM field components. Simply put, the rate of energy transfer per unit area (power
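For concreteness, a small computation of the specific power defined above (the turbine's rated power and blade length are hypothetical values):

import math

rated_power_w = 2_000_000   # a hypothetical 2 MW turbine
blade_length_m = 40.0       # r, the blade length
specific_power = rated_power_w / (math.pi * blade_length_m ** 2)
print(f"{specific_power:.0f} W/m^2")   # about 398 W/m^2 of swept disk area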
https://en.wikipedia.org/wiki/Power%20density
Power density is the amount of power (time rate of energy transfer) per unit volume. In energy transformers, including batteries, fuel cells, motors, and power supply units, power density is defined with respect to the volume of the device, in which case it is often called volume power density and expressed as W/m3. In reciprocating internal combustion engines, power density (power per swept volume or brake horsepower per cubic centimeter) is an important metric, based on the internal capacity of the engine, not its external size.

See also
Surface power density, power per unit of area
Energy density, energy per unit volume
Specific energy, energy per unit mass
Power-to-weight ratio/specific power, power per unit mass
Specific absorption rate (SAR)
https://en.wikipedia.org/wiki/Kelvin%20probe%20force%20microscope
Kelvin probe force microscopy (KPFM), also known as surface potential microscopy, is a noncontact variant of atomic force microscopy (AFM). By raster scanning in the x,y plane, the work function of the sample can be locally mapped for correlation with sample features. When there is little or no magnification, this approach can be described as using a scanning Kelvin probe (SKP). These techniques are predominantly used to measure corrosion and coatings. With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity, reconstruction of surfaces, doping and band-bending of semiconductors, charge trapping in dielectrics and corrosion. The map of the work function produced by KPFM gives information about the composition and electronic state of the local structures on the surface of a solid.

History
The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals.

Working principle
In SKP the probe and sample are held parallel to each other and electrically connected to form a parallel plate capacitor. The probe is selected to be of a different material to the sample, therefore each component initially has a distinct Fermi level. When electrical connection is made between the probe and the sample, electron flow can occur between the probe and the sample in the direction of the higher to the lower Fermi level. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (Vc). In SKP the probe is vibrated along a perpendicular to the plane of the sample. This vibration causes a change in probe-to-sample distance, which in turn res
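In the usual notation (a sketch using symbols that are standard in the SKP literature but not defined in the text above; sign conventions vary), the contact potential that develops on connection is set by the difference of the two work functions:

Vc = (φsample − φprobe) / e

where φ denotes a work function and e is the elementary charge. Applying a backing potential that nulls Vc is what allows SKP to recover the work-function difference.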
https://en.wikipedia.org/wiki/Semantic%20differential
The semantic differential (SD) is a measurement scale designed to measure a person's subjective perception of, and affective reactions to, the properties of concepts, objects, and events by making use of a set of bipolar scales. The SD is used to assess one's opinions, attitudes, and values regarding these concepts, objects, and events in a controlled and valid way. Respondents are asked to choose where their position lies on a set of scales with polar adjectives (for example: "sweet - bitter", "fair - unfair", "warm - cold"). Compared to other measurement scaling techniques such as Likert scaling, the SD can be assumed to be relatively reliable, valid, and robust. The SD has been used in both a general and a more specific way. Charles E. Osgood's theory of the semantic differential exemplifies the more general attempt to measure the semantics, or meaning, of words, particularly adjectives, and their referent concepts. In fields such as marketing, psychology, sociology, and information systems, the SD is used to measure the subjective perception of, and affective reactions to, more specific concepts such as marketing communication, political candidates, alcoholic beverages, and websites.

Guidelines for using the SD
Verhagen and colleagues introduced a framework to assist researchers in applying the semantic differential. The framework, which consists of six subsequent steps, advocates particular attention to collecting the set of relevant bipolar scales, linguistic testing of semantic bipolarity, and establishing semantic differential dimensionality. A detailed presentation of the development of the semantic differential is provided in Cross-Cultural Universals of Affective Meaning. David R. Heise's Surveying Cultures provides a contemporary update with special attention to measurement issues when using computerized graphic rating scales. One possible problem with this scale is that its psychometric properties and level of measurement are disputed. The most g
https://en.wikipedia.org/wiki/RAF%20Denge
Denge is a former Royal Air Force site near Dungeness, in Kent, England. It is best known for the early experimental acoustic mirrors which remain there. The acoustic mirrors, known colloquially as 'listening ears', at Denge are located between Greatstone-on-Sea and Lydd airfield, on the banks of a now disused gravel pit. The mirrors were built in the late 1920s and early 1930s as an experimental early warning system for incoming aircraft, developed by William Sansome Tucker. Several were built along the south and east coasts, but the complex at Denge is the best preserved, and its mirrors are protected as scheduled monuments.

Denge complex
There are three acoustic mirrors in the complex, each a large concrete reflector. The 200 foot mirror is a near vertical, curved wall, 200 feet (61 m) long. It is one of only two similar acoustic mirrors in the world, the other being in Magħtab, Malta. The 30 foot mirror is a circular dish, similar to a deeply curved satellite dish, 9 m (30 ft) across, supported on concrete buttresses. This mirror still retains the metal microphone pole at its centre. The 20 foot mirror is similar to the 30 foot mirror, with a smaller, shallower dish 6 m (20 ft) across. The design is close to that of an acoustic mirror in Kilnsea, East Riding of Yorkshire. Acoustic mirrors did work and could effectively be used to detect slow-moving enemy aircraft before they came into sight. They worked by concentrating sound waves towards a central point, where a microphone would have been located. However, their use was limited as aircraft became faster. Operators also found it difficult to distinguish between aircraft and seagoing vessels. In any case, they quickly became obsolete due to the invention of radar in 1932. The experiment was abandoned, and the mirrors were left to decay. The gravel extraction works caused some undermining of at least one of the structures. The striking forms of the sound mirrors have attracted artists and photogr
https://en.wikipedia.org/wiki/Super%20black
Super black is a surface treatment developed at the National Physical Laboratory (NPL) in the United Kingdom. It absorbs approximately 99.6% of visible light at normal incidence, while conventional black paint absorbs about 97.5%. At other angles of incidence, super black is even more effective: at an angle of 45°, it absorbs 99.9% of light. Technology The technology to create super black involves chemically etching a nickel-phosphorus alloy. Applications of super black are in specialist optical instruments for reducing unwanted reflections. The disadvantage of this material is its low optical thickness, as it is a surface treatment. As a result, infrared light of a wavelength longer than a few micrometers penetrates through the dark layer and has much higher reflectivity. The reported spectral dependence increases from about 1% at 3 µm to 50% at 20 µm. In 2009, a competitor to the super black material, Vantablack, was developed based on carbon nanotubes. It has a relatively flat reflectance in a wide spectral range. In 2011, NASA and the US Army began funding research in the use of nanotube-based super black coatings in sensitive optics. Nanotube-based superblack arrays and coatings have recently become commercially available. See also Vantablack Emissivity Black hole Black body
https://en.wikipedia.org/wiki/Intensity%20interferometer
An intensity interferometer is the name given to devices that use the Hanbury Brown and Twiss effect. In astronomy, the most common use of such an astronomical interferometer is to determine the apparent angular diameter of a radio source or star. If the distance to the object can then be determined by parallax or some other method, the physical diameter of the star can then be inferred. An example of an optical intensity interferometer is the Narrabri Stellar Intensity Interferometer. In quantum optics, some devices which take advantage of correlation and anti-correlation effects in beams of photons might be said to be intensity interferometers, although the term is usually reserved for observatories. An intensity interferometer is built from two light detectors, typically either radio antennas or optical telescopes with photomultiplier tubes (PMTs), separated by some distance called the baseline. Both detectors are pointed at the same astronomical source, and intensity measurements are then transmitted to a central correlator facility. A major advantage of intensity interferometers is that only the measured intensity observed by each detector must be sent to the central correlator facility, rather than the amplitude and phase of the signal. An intensity interferometer measures interferometric visibilities like all other astronomical interferometers. These measurements can be used to calculate the diameter and limb darkening coefficients of stars, but with intensity interferometers aperture synthesis images cannot be produced, as the visibility phase information is not preserved by an intensity interferometer.
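A minimal sketch of the quantity an intensity correlator estimates: the normalized correlation of intensity fluctuations recorded by the two detectors, which for thermal light is proportional to the squared modulus of the visibility. The synthetic photocurrents below are illustrative, not a model of any real instrument.

import numpy as np

def intensity_correlation(i1, i2):
    """Return <dI1 * dI2> / (<I1> <I2>) for two intensity time series."""
    d1 = i1 - i1.mean()
    d2 = i2 - i2.mean()
    return np.mean(d1 * d2) / (i1.mean() * i2.mean())

rng = np.random.default_rng(0)
common = rng.exponential(1.0, 100_000)            # shared, correlated component
i1 = common + rng.exponential(1.0, 100_000)       # detector 1
i2 = common + rng.exponential(1.0, 100_000)       # detector 2
print(intensity_correlation(i1, i2))              # positive: correlated intensities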
https://en.wikipedia.org/wiki/Inelastic%20mean%20free%20path
The inelastic mean free path (IMFP) is an index of how far an electron on average travels through a solid before losing energy. If a monochromatic, primary beam of electrons is incident on a solid surface, the majority of incident electrons lose their energy because they interact strongly with matter, leading to plasmon excitation, electron-hole pair formation, and vibrational excitation. The intensity of the primary electrons, I0, is damped as a function of the distance, d, into the solid. The intensity decay can be expressed as follows:

I(d) = I0 exp(−d/λ)

where I(d) is the intensity after the primary electron beam has traveled through the solid to a distance d. The parameter λ, termed the inelastic mean free path (IMFP), is defined as the distance an electron beam can travel before its intensity decays to 1/e of its initial value. (Note that this equation is closely related to the Beer–Lambert law.) The inelastic mean free path of electrons can roughly be described by a universal curve that is the same for all materials. Knowledge of the IMFP is indispensable for several electron spectroscopy and microscopy measurements.

Applications of the IMFP in XPS
In the following, the IMFP is employed to calculate the effective attenuation length (EAL), the mean escape depth (MED) and the information depth (ID). In addition, one can utilize the IMFP to make matrix corrections for the relative sensitivity factor in quantitative surface analysis. Moreover, the IMFP is an important parameter in Monte Carlo simulations of photoelectron transport in matter.

Calculations of the IMFP
Calculations of the IMFP are mostly based on the algorithm (full Penn algorithm, FPA) developed by Penn, experimental optical constants or calculated optical data (for compounds). The FPA considers an inelastic scattering event and the dependence of the energy-loss function (ELF) on momentum transfer, which describes the probability for inelastic scattering as a function of momentum transfer. Experimental measurements of the I
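A small illustration of how the exponential damping above determines surface sensitivity (the numeric IMFP is only an order-of-magnitude assumption; the 3λ rule of thumb for the sampling depth is standard in surface analysis):

import math

def escape_fraction(depth_nm, imfp_nm):
    """Fraction of electrons from a given depth that escape without inelastic scattering."""
    return math.exp(-depth_nm / imfp_nm)

imfp = 2.0  # nm, a typical order of magnitude for ~1 keV electrons
for d in (imfp, 2 * imfp, 3 * imfp):
    print(f"depth {d:.1f} nm: {escape_fraction(d, imfp):.1%} escape")
# Roughly 95% of the detected signal originates within 3*IMFP of the surface.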
https://en.wikipedia.org/wiki/Calcium%20phosphide
Calcium phosphide (CP) is the inorganic compound with the formula Ca3P2. It is one of several phosphides of calcium, being described as the salt-like material composed of Ca2+ and P3−. Other, more exotic calcium phosphides have the formula CaP / Ca2P2, CaP3, and Ca5P8. Ca3P2 has the appearance of red-brown crystalline powder or grey lumps. Its trade name is Photophor for the incendiary use or Polytanol for the use as rodenticide. Preparation and structure It may be formed by reaction of the elements, but it is more commonly prepared by carbothermal reduction of calcium phosphate: Ca3(PO4)2 + 8 C → Ca3P2 + 8 CO The structure of the room temperature form of Ca3P2 has not been confirmed by X-ray crystallography. A high temperature phase has been characterized by Rietveld refinement. Ca2+ centers are octahedral. Uses Metal phosphides are used as a rodenticide. A mixture of food and calcium phosphide is left where the rodents can eat it. The acid in the digestive system of the rodent reacts with the phosphide to generate the toxic gas phosphine. This method of vermin control has possible use in places where rodents immune to many of the common warfarin-type (anticoagulant) poisons have appeared. Other pesticides similar to calcium phosphide are zinc phosphide and aluminium phosphide. Calcium phosphide is also used in fireworks, torpedoes, self-igniting naval pyrotechnic flares, and various water-activated ammunition. During the 1920s and 1930s, Charles Kingsford Smith used separate buoyant canisters of calcium carbide and calcium phosphide as naval flares lasting up to ten minutes. It is speculated that calcium phosphide—made by boiling bones in urine, within a closed vessel—was an ingredient of some ancient Greek fire formulas. Calcium phosphide is a common impurity in calcium carbide, which may cause the resulting phosphine-contaminated acetylene to ignite spontaneously. See also Phosphorus
https://en.wikipedia.org/wiki/Net2Phone
net2phone is a Cloud Communications provider offering cloud based telephony services to businesses worldwide. The company is a subsidiary of IDT Corporation. History net2phone was founded in 1990 by telecom entrepreneur Howard Jonas, the chairman and chief executive officer of net2phone’s parent company, IDT Corporation. The company was an early pioneer in the commercialization of voice-over-Internet protocol (VoIP) technologies leveraging the global carrier business and infrastructure of IDT and focusing on transitioning businesses and consumers from PSTN, traditional telecom interconnects, to Voice over IP. On July 30, 1999, during the dot-com bubble, the company became a public company via an initial public offering, raising $81 million. Shares rose 77% on the first day of trading to $26 per share. After completion of the IPO, IDT owned 57% of the company. Within a few weeks, the shares increased another 100% in value, to $53 per share. In March 2000, in a transaction facilitated by IDT CEO Howard Jonas, a consortium of telecommunications companies led by AT&T announced a $1.4 billion investment for a 32% stake in the company, buying shares for $75 each. The transaction was completed in August 2000. AOL had expressed an interest in buying all or part of the company but was not agreeable to the price. In August 2000, Jonathan Fram, president of the company, left the company to join eVoice. In September 2000, the company formed Adir Technologies, a joint venture with Cisco Systems. In March 2002, the company sued Cisco for breach of contract. In February 2002, the company announced 110 layoffs, or 28% of its workforce. In October 2004, Liore Alroy became chief executive officer of the company. On March 13, 2006, IDT Corporation acquired the shares of the company that it did not already own for $2.05 per share. In 2015, net2phone began providing Unified Communications as a Service (UCaaS) targeted to the SMB market. net2phone’s UCaaS initiative was develope
https://en.wikipedia.org/wiki/TRADIC
The TRADIC (for TRAnsistor DIgital Computer or TRansistorized Airborne DIgital Computer) was the first transistorized computer in the USA, completed in 1954. The computer was built by Jean Howard Felker of Bell Labs for the United States Air Force while L.C. Brown ("Charlie Brown") was a lead engineer on the project, which started in 1951. The project initially examined the feasibility of constructing a transistorized airborne digital computer. A second application was a transistorized digital computer to be used in a Navy track-while-scan shipboard radar system. Several models were completed: TRADIC Phase One computer, Flyable TRADIC, Leprechaun (using germanium alloy junction transistors in 1956) and XMH-3 TRADIC. TRADIC Phase One was developed to explore the feasibility, in the laboratory, of using transistors in a digital computer that could be used to solve aircraft bombing and navigation problems. Flyable TRADIC was used to establish the feasibility of using an airborne solid-state computer as the control element of a bombing and navigation system. Leprechaun was a second-generation laboratory research transistor digital computer designed to explore direct-coupled transistor logic (DCTL). The TRADIC Phase One computer was completed in January 1954. The TRADIC Phase One computer has been claimed to be the world's first fully transistorized computer, ahead of the Mailüfterl in Austria or the Harwell CADET in the UK, which were each completed in 1955. In the UK, the Manchester University Transistor Computer demonstrated a working prototype in 1953 which incorporated transistors before TRADIC was operational, although that was not a fully transistorized computer because it used vacuum tubes to generate the clock signal. The 30 watts of power for the 1 MHz clock in the TRADIC was also supplied by a vacuum tube supply because no transistors were available that could supply that much power at that frequency. If the TRADIC can be called fully transistorized while in
https://en.wikipedia.org/wiki/Finite%20strain%20theory
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue.

Displacement field

Deformation gradient tensor
The deformation gradient tensor F(X,t) is related to both the reference and current configuration, as seen by the unit vectors ej and IK, therefore it is a two-point tensor. Two types of deformation gradient tensor may be defined. Due to the assumption of continuity of χ(X,t), F has the inverse H = F⁻¹, where H is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant J(X,t) must be nonsingular, i.e.

J(X,t) = det F(X,t) ≠ 0

The material deformation gradient tensor F(X,t) is a second-order tensor that represents the gradient of the mapping function or functional relation χ(X,t), which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector X, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function χ(X,t), i.e. a differentiable function of X and time t, which implies that cracks and voids do not open or close during the deformation. Thus we have,

dx = (∂x/∂X) dX = F(X,t) dX

Relative displacement vector
Consider a particle or material point P with position vector X in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle, indicated by p in the new configuration, is given by the vector position x. The coordinate systems for t
https://en.wikipedia.org/wiki/Repetition%20blindness
Repetition blindness (RB) is a phenomenon observed in rapid serial visual presentation. People are sometimes poor at recognizing when things happen twice: repetition blindness is the failure to recognize the second occurrence of a visual display. The two displays are shown sequentially, possibly with other stimulus displays in between. Each display is shown only briefly, usually for about 150 milliseconds (Kanwisher, 1987). If stimuli are shown in between, RB can occur over a time interval of up to 600 milliseconds. Without other stimuli displayed between the two repeated stimuli, RB only lasts about 250 milliseconds (Luo & Caramazza, 1995). Repetition blindness tasks usually use words in lists and in sentences; RB also occurs for phonologically similar items (Bavelier & Potter, 1992). Pictures, and combinations of words and pictures, have been used as well; an example is a picture of the sun paired with the word sun (Bavelier, 1994). The most popular task used to examine repetition blindness is to show words one after another on a screen in rapid succession, after which participants must recall the words that they saw. This task is known as the rapid serial visual presentation (RSVP). Repetition blindness is present if missing the second word creates an inaccurate sentence. For example, for the RSVP sentence "When she spilled the ink there was ink all over", participants will recall seeing "When she spilled the ink there was all over", missing the second occurrence of "ink" (Kanwisher, 1987). This finding supports the claim that people are "blind" to the second occurrence of a repeated item in an RSVP series. For example, a subject's chances of correctly reporting both appearances of the word "cat" in the RSVP stream "dog mouse cat elephant cat snake" are lower than their chances of reporting the third and fifth words in the stream "dog mouse cat elephant pig snake". The precise mechanism underlying RB has been extensively debated. Nancy Kanwisher has argued that it involves failure to t
https://en.wikipedia.org/wiki/Calculus%20%28medicine%29
A calculus (plural: calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body. Formation of calculi is known as lithiasis. Stones can cause a number of medical conditions. Some common principles (below) apply to stones at any location, but for specifics see the particular stone type in question. Calculi are not to be confused with gastroliths.

Types
Calculi in the inner ear are called otoliths
Calculi in the urinary system are called urinary calculi and include kidney stones (also called renal calculi or nephroliths) and bladder stones (also called vesical calculi or cystoliths). They can have any of several compositions, including mixed. Principal compositions include oxalate and urate.
Calculi of the gallbladder and bile ducts are called gallstones and are primarily developed from bile salts and cholesterol derivatives.
Calculi in the nasal passages (rhinoliths) are rare.
Calculi in the gastrointestinal tract (enteroliths) can be enormous. Individual enteroliths weighing many pounds have been reported in horses.
Calculi in the stomach are called gastric calculi (not to be confused with gastroliths, which are exogenous in nature).
Calculi in the salivary glands are called salivary calculi (sialoliths).
Calculi in the tonsils are called tonsillar calculi (tonsilloliths).
Calculi in the veins are called venous calculi (phleboliths).
Calculi in the skin, such as in sweat glands, are not common but occasionally occur.
Calculi in the navel are called omphaloliths.
Calculi are usually asymptomatic, and large calculi may have required many years to grow to their large size.

Cause
From an underlying abnormal excess of the mineral, e.g., with elevated levels of calcium (hypercalcaemia) that may cause kidney stones, dietary factors for gallstones.
Local conditions at the site in question that promote their formation, e.g., local bacteria action (in kidney stones) or slower fluid flow rates, a
https://en.wikipedia.org/wiki/Hot%20atom
In physical chemistry, a hot atom is an atom that has a high kinetic or internal energy. When molecule AB adsorbs on a surface dissociatively, either both A and B adsorb on the surface, or only A adsorbs on the surface and B desorbs from it. In the latter case, B gains a high translational energy from the adsorption energy of A, and a hot atom B is generated. For example, hydrogen, because of its light mass, gains a high translational energy. Such a hot atom does not fly off into vacuum but is trapped on the surface, where it diffuses with high energy. Hot atoms are expected to play important roles in catalytic reactions. For example, a reaction of a hydrogen atom with hydrogen atoms on a silicon surface and a reaction of an oxygen atom with oxygen molecules on Pt(111) have been reported. Hot atoms can also be generated by dissociating molecules on a metal surface with UV light. It has been reported that the reactivity of an oxygen atom generated in such a way on a platinum surface is different from that of chemisorbed oxygen atoms. Elucidating the role of hot atoms on surfaces will lead to a deeper understanding of the mechanism of reactions.
https://en.wikipedia.org/wiki/Beryllium%20oxide
Beryllium oxide (BeO), also known as beryllia, is an inorganic compound with the formula BeO. This colourless solid is a notable electrical insulator with a higher thermal conductivity than any other non-metal except diamond, and exceeds that of most metals. As an amorphous solid, beryllium oxide is white. Its high melting point leads to its use as a refractory material. It occurs in nature as the mineral bromellite. Historically and in materials science, beryllium oxide was called glucina or glucinium oxide, owing to its sweet taste. Preparation and chemical properties Beryllium oxide can be prepared by calcining (roasting) beryllium carbonate, dehydrating beryllium hydroxide, or igniting metallic beryllium: BeCO3 → BeO + CO2 Be(OH)2 → BeO + H2O 2 Be + O2 → 2 BeO Igniting beryllium in air gives a mixture of BeO and the nitride Be3N2. Unlike the oxides formed by the other Group 2 elements (alkaline earth metals), beryllium oxide is amphoteric rather than basic. Beryllium oxide formed at high temperatures (>800 °C) is inert, but dissolves easily in hot aqueous ammonium bifluoride (NH4HF2) or a solution of hot concentrated sulfuric acid (H2SO4) and ammonium sulfate ((NH4)2SO4). Structure BeO crystallizes in the hexagonal wurtzite structure, featuring tetrahedral Be2+ and O2− centres, like lonsdaleite and w-BN (with both of which it is isoelectronic). In contrast, the oxides of the larger group-2 metals, i.e., MgO, CaO, SrO, BaO, crystallize in the cubic rock salt motif with octahedral geometry about the dications and dianions. At high temperature the structure transforms to a tetragonal form. In the vapour phase, beryllium oxide is present as discrete diatomic molecules. In the language of valence bond theory, these molecules can be described as adopting sp orbital hybridisation on both atoms, featuring one σ (between one sp orbital on each atom) and one π bond (between aligned p orbitals on each atom oriented perpendicular to the molecular axis). Molecular orbi
https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Storer%E2%80%93Szymanski
Lempel–Ziv–Storer–Szymanski (LZSS) is a lossless data compression algorithm, a derivative of LZ77, that was created in 1982 by James A. Storer and Thomas Szymanski. LZSS was described in the article "Data compression via textual substitution" published in the Journal of the ACM (1982, pp. 928–951). LZSS is a dictionary coding technique. It attempts to replace a string of symbols with a reference to a dictionary location of the same string. The main difference between LZ77 and LZSS is that in LZ77 the dictionary reference could actually be longer than the string it was replacing. In LZSS, such references are omitted if the length is less than the "break-even" point. Furthermore, LZSS uses one-bit flags to indicate whether the next chunk of data is a literal (byte) or a reference to an offset/length pair.

Example
Here is the beginning of Dr. Seuss's Green Eggs and Ham, with character numbers at the beginning of lines for convenience. Green Eggs and Ham is a good example to illustrate LZSS compression because the book itself only contains 50 unique words, despite having a word count of 170. Thus, words are repeated, though not in succession.

0: I am Sam
9:
10: Sam I am
19:
20: That Sam-I-am!
35: That Sam-I-am!
50: I do not like
64: that Sam-I-am!
79:
80: Do you like green eggs and ham?
112:
113: I do not like them, Sam-I-am.
143: I do not like green eggs and ham.

This text takes 177 bytes in uncompressed form. Assuming a break-even point of 2 bytes (and thus 2-byte pointer/offset pairs), and one-byte newlines, this text compressed with LZSS becomes 95 bytes long:

0: I am Sam
9:
10: (5,3) (0,4)
16:
17: That(4,4)-I-am!(19,15)
32: I do not like
46: t(21,14)
50: Do you(58,5) green eggs and ham?
79: (49,14) them,(24,9).(112,15)(92,18).

Note: this does not include the 12 bytes of flags indicating whether the next chunk of text is a pointer or a literal. Adding them, the text becomes 107 bytes long, which is still shorter than the original 177 bytes.

Implementat
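As a rough sketch of the scheme just described (a greedy longest-match search over a sliding window, in Python; the window size, minimum match length and token representation are simplifications, and a real encoder would pack the one-bit literal/pointer flags instead of using tuples):

def lzss_encode(data, window=255, min_len=3, max_len=15):
    """Encode bytes as a mix of literal bytes and (offset, length) references."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):        # scan the sliding window
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= min_len:                       # pointer past the break-even point
            out.append((best_off, best_len))
            i += best_len
        else:
            out.append(data[i:i + 1])                 # literal byte
            i += 1
    return out

def lzss_decode(tokens):
    buf = bytearray()
    for t in tokens:
        if isinstance(t, tuple):                      # back-reference
            off, length = t
            for _ in range(length):
                buf.append(buf[-off])
        else:                                         # literal
            buf += t
    return bytes(buf)

text = b"I am Sam\n\nSam I am\n"
assert lzss_decode(lzss_encode(text)) == text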
https://en.wikipedia.org/wiki/Closed-loop%20authentication
Closed-loop authentication, as applied to computer network communication, refers to a mechanism whereby one party verifies the purported identity of another party by requiring them to supply a copy of a token transmitted to the canonical or trusted point of contact for that identity. It is also sometimes used to refer to a system of mutual authentication whereby two parties authenticate one another by signing and passing back and forth a cryptographically signed nonce, each party demonstrating to the other that they control the secret key used to certify their identity.

E-mail Authentication
Closed-loop email authentication is useful as a simple way for one party to demonstrate to another that it controls an email address, as a weak form of identity verification. It is not a strong form of authentication in the face of host- or network-based attacks (where an imposter, Chuck, is able to intercept Bob's email, capturing the nonce and thus masquerading as Bob). Another use of closed-loop email authentication is by parties with a shared secret relationship (for example, a website and someone with a password to an account on that website), where one party has lost or forgotten the secret and needs to be reminded. The party still holding the secret sends it to the other party at a trusted point of contact. The most common instance of this usage is the "lost password" feature of many websites, where an untrusted party may request that a copy of an account's password be sent by email, but only to the email address already associated with that account. A problem associated with this variation is the tendency of a naïve or inexperienced user to click on a URL if an email encourages them to do so. Most website authentication systems mitigate this by permitting unauthenticated password reminders or resets only by email to the account holder, but never allowing a user who does not possess a password to log in or specify a new one. In some instances in web authentication, closed-loop authentication is employed before any access is gra
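A sketch of the token round trip described above, in Python (the mail-sending helper is a hypothetical stand-in, and a real deployment would add token expiry and rate limiting):

import hashlib, hmac, secrets

SERVER_KEY = secrets.token_bytes(32)        # secret held by the verifying party

def send_mail(address, body):               # hypothetical stand-in for real mail submission
    print(f"to {address}: {body}")

def issue_token(email):
    """Create a nonce, bind it to the address with a MAC, and mail it."""
    nonce = secrets.token_urlsafe(16)
    mac = hmac.new(SERVER_KEY, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    send_mail(email, f"token={nonce}.{mac}")

def verify_token(email, token):
    """Accept only a token that was actually issued for this address."""
    nonce, mac = token.split(".")
    expected = hmac.new(SERVER_KEY, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)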
https://en.wikipedia.org/wiki/Sampled%20data%20system
In systems science, a sampled-data system is a control system in which a continuous-time plant is controlled with a digital device. Under periodic sampling, the sampled-data system is time-varying but also periodic; thus, it may be modeled by a simplified discrete-time system obtained by discretizing the plant. However, this discrete model does not capture the inter-sample behavior of the real system, which may be critical in a number of applications. The analysis of sampled-data systems incorporating full-time information leads to challenging control problems with a rich mathematical structure. Many of these problems have only been solved recently.
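A minimal sketch of the discretization step described above, assuming SciPy is available (the plant and sampling period are arbitrary choices). As the text points out, the resulting discrete model says nothing about what happens between samples.

import numpy as np
from scipy.signal import cont2discrete

# Continuous-time plant x' = -x + u, y = x (a first-order lag).
A, B, C, D = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]])
Ts = 0.1  # sampling period in seconds

# Zero-order hold: models the D/A converter holding u constant between samples.
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method='zoh')
print(Ad, Bd)  # Ad = exp(A*Ts) ~ 0.905, Bd ~ 0.095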
https://en.wikipedia.org/wiki/Lov%C3%A1sz%20local%20lemma
In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows one to relax the independence condition slightly: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. It is most commonly used in the probabilistic method, in particular to give existence proofs. There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. A weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions. For other versions, see . In 2020, Robin Moser and Gábor Tardos received the Gödel Prize for their algorithmic version of the Lovász Local Lemma, which uses entropy compression to provide an efficient randomized algorithm for finding an outcome in which none of the events occurs.

Statements of the lemma (symmetric version)
Let A1, A2,..., Ak be a sequence of events such that each event occurs with probability at most p and such that each event is independent of all the other events except for at most d of them.

Lemma I (Lovász and Erdős 1973; published 1975)
If 4pd ≤ 1, then there is a nonzero probability that none of the events occurs.

Lemma II (Lovász 1977; published by Joel Spencer)
If ep(d + 1) ≤ 1, where e = 2.718... is the base of natural logarithms, then there is a nonzero probability that none of the events occurs.

Lemma II today is usually referred to as "Lovász local lemma".

Lemma III (Shearer 1985)
If p < (d − 1)^(d−1) / d^d, then there is a nonzero probability that none of the events occurs.

The threshold in Lemma III is optimal and it implies that the bound epd ≤ 1 is also sufficient.

Asymmetric Lovász local lemma
A statement of the asymmetric version (which a
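A sketch of the Moser–Tardos resampling algorithm mentioned above, specialized to a toy satisfiability-style setting (the event and variable structure is illustrative, not from the article): while some bad event occurs, pick one and independently resample just the variables it depends on.

import random

def moser_tardos(init_vars, events, resample):
    """events: list of (var_names, is_bad) pairs; resample(name) draws a fresh value."""
    assignment = dict(init_vars)
    while True:
        bad = [ev for ev in events if ev[1](assignment)]
        if not bad:
            return assignment                 # no bad event occurs: done
        names, _ = random.choice(bad)         # an occurring bad event
        for name in names:                    # resample only its variables
            assignment[name] = resample(name)

# Toy use: avoid any clause of a small positive CNF being all-false.
clauses = [("x", "y"), ("y", "z"), ("x", "z")]
events = [(c, lambda a, c=c: not any(a[v] for v in c)) for c in clauses]
coin = lambda _name: random.random() < 0.5
print(moser_tardos({v: False for v in "xyz"}, events, coin))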
https://en.wikipedia.org/wiki/Lustre%20%28programming%20language%29
Lustre is a formally defined, declarative, and synchronous dataflow programming language for programming reactive systems. It began as a research project in the early 1980s. A formal presentation of the language can be found in the 1991 Proceedings of the IEEE. In 1993 it progressed to practical, industrial use in a commercial product as the core language of the industrial environment SCADE, developed by Esterel Technologies. It is now used for critical control software in aircraft, helicopters, and nuclear power plants.

Structure of Lustre programs
A Lustre program is a series of node definitions, written as:

node foo(a : bool) returns (b : bool);
let
  b = not a;
tel

where foo is the name of the node, a is the name of the single input of this node and b is the name of the single output. In this example the node foo returns the negation of its input a, which is the expected result.

Inner variables
Additional internal variables can be declared as follows:

node Nand(X,Y: bool) returns (Z: bool);
var U: bool;
let
  U = X and Y;
  Z = not U;
tel

Note: the order of the equations doesn't matter; swapping the lines U = X and Y; and Z = not U; doesn't change the result.

Special operators
The example below uses the temporal operators pre, which gives the value of an expression at the previous cycle, and ->, which initializes a flow at the first cycle.

Examples
Edge detection

node Edge (X : bool) returns (E : bool);
let
  E = false -> X and not pre X;
tel

See also
Esterel
SIGNAL (another dataflow-oriented synchronous language)
Synchronous programming language
Dataflow programming
https://en.wikipedia.org/wiki/Averest
Averest is a synchronous programming language and set of tools to specify, verify, and implement reactive systems. It includes a compiler for synchronous programs, a symbolic model checker, and a tool for hardware/software synthesis. It can be used to model and verify finite and infinite state systems, at varied abstraction levels. It is useful for hardware design, modeling communication protocols, concurrent programs, software in embedded systems, and more. Components: compiler to translate synchronous programs to transition systems, symbolic model checker, tool for hardware/software synthesis. These cover large parts of the design flow of reactive systems, from specifying to implementing. Though the tools are part of a common framework, they are mostly independent of each other, and can be used with 3rd-party tools. See also Synchronous programming language Esterel External links Averest Toolbox Official home site Embedded Systems Group Research group that develops the Averest Toolbox Synchronous programming languages Hardware description languages
https://en.wikipedia.org/wiki/Inspissation
Inspissation is the process of increasing the viscosity of a fluid, or even of causing it to solidify, typically by dehydration or otherwise reducing its content of solvents. The term has also been applied to coagulation by heating of some substances such as albumens, or cooling of others such as solutions of gelatin or agar. Some forms of inspissation may be reversed by re-introducing solvent, such as by adding water to molasses or gum arabic; in other forms, the resistance to flow may involve cross-linking or mutual adhesion of the component particles or molecules in ways that prevent their dissolving again, such as in the irreversible setting or gelling of some kinds of rubber latex, egg-white, adhesives, or the coagulation of blood.

Intentional use
Inspissation is the process used when heating high-protein media, for example to enable recovery of bacteria for testing. Once inspissation has occurred, any stained bacteria, such as Mycobacteria, can then be isolated. Serum inspissation, or fractional sterilization, is a process of heating an article on 3 successive days.

Pathologic inspissation
In cystic fibrosis, inspissation of secretions in the respiratory and gastrointestinal tracts is a major mechanism of causing the disease.
https://en.wikipedia.org/wiki/Test%20harness
In software testing, a test harness is a collection of stubs and drivers configured to assist with the testing of an application or component. It acts as imitation infrastructure for test environments or containers where the full infrastructure is either not available or not desired. Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and compare the results to the desired value. The test harness provides a hook for the developed code, which can be tested using an automation framework. A test harness is used to facilitate testing where all or some of an application's production infrastructure is unavailable. This may be due to licensing costs, security concerns (test environments may be air-gapped), resource limitations, or simply a desire to increase the execution speed of tests by providing pre-defined test data and smaller software components instead of calculated data from full applications. These individual objectives may be fulfilled by unit test framework tools, stubs or drivers.

Example
When attempting to build an application that needs to interface with an application on a mainframe computer, but no mainframe is available during development, a test harness may be built to use as a substitute. This can mean that normally complex operations can be handled with a small amount of resources, by providing pre-defined data and responses so that the calculations performed by the mainframe are not needed; a sketch of this idea follows below. A test harness may be part of a project deliverable. It may be kept separate from the application source code and may be reused on multiple projects. A test harness simulates application functionality; it has no knowledge of test suites, test cases or test reports. Those things are provided by a testing framework and associated automated testing tools. A part of its job is to set up suitable test fixtures. The test harness will generally be specific to a development environment such as Java. However, interoper
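As an illustration of the stub-plus-driver idea from the example above (a sketch using Python's unittest; the invoice function and the mainframe lookup are hypothetical):

import unittest

def settle_invoice(amount, mainframe_lookup):
    """Code under test: applies a discount rate fetched from a remote system."""
    rate = mainframe_lookup("DISCOUNT_RATE")
    return round(amount * (1 - rate), 2)

def stub_mainframe_lookup(key):
    """Stub standing in for the unavailable mainframe: returns canned data."""
    return {"DISCOUNT_RATE": 0.10}[key]

class SettleInvoiceTest(unittest.TestCase):     # the driver side of the harness
    def test_discount_applied(self):
        self.assertEqual(settle_invoice(200.0, stub_mainframe_lookup), 180.0)

if __name__ == "__main__":
    unittest.main()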
https://en.wikipedia.org/wiki/Georgi%E2%80%93Jarlskog%20mass%20relation
In grand unified theories of the SU(5) or SO(10) type, there is a mass relation predicted between the electron and the down quark, the muon and the strange quark and the tau lepton and the bottom quark called the Georgi–Jarlskog mass relations. The relations were formulated by Howard Georgi and Cecilia Jarlskog. At the GUT scale, these are sometimes quoted as:

m_b ≈ m_τ,  m_s ≈ m_μ/3,  m_d ≈ 3 m_e

In the same paper it is written that:

m_d m_s m_b ≈ m_e m_μ m_τ

Meaning that the product of the down-type quark masses matches the product of the charged-lepton masses at the unification scale.
https://en.wikipedia.org/wiki/JSON-RPC
JSON-RPC is a remote procedure call protocol encoded in JSON. It is similar to the XML-RPC protocol, defining only a few data types and commands. JSON-RPC allows for notifications (data sent to the server that does not require a response) and for multiple calls to be sent to the server which may be answered asynchronously.

Usage
JSON-RPC works by sending a request to a server implementing this protocol. The client in that case is typically software intending to call a single method of a remote system. Multiple input parameters can be passed to the remote method as an array or object, whereas the method itself can return multiple output values as well. (This depends on the implemented version.) All transfer types are single objects, serialized using JSON. A request is a call to a specific method provided by a remote system. It can contain three members:
method - A String with the name of the method to be invoked. Method names that begin with "rpc." are reserved for rpc-internal methods.
params - An Object or Array of values to be passed as parameters to the defined method. This member may be omitted.
id - A string or non-fractional number used to match the response with the request that it is replying to. This member may be omitted if no response should be returned.
The receiver of the request must reply with a valid response to all received requests. A response can contain the members mentioned below:
result - The data returned by the invoked method. If an error occurred while invoking the method, this member must not exist.
error - An error object if there was an error invoking the method, otherwise this member must not exist. The object must contain members code (integer) and message (string). An optional data member can contain further server-specific data. There are pre-defined error codes which follow those defined for XML-RPC.
id - The id of the request it is responding to. Since there are situations where no response is needed or even desi
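A sketch of a version 2.0 request/response exchange built with Python's standard json module (the "subtract" method and the in-process dispatch are illustrative; JSON-RPC itself is transport-agnostic):

import json

# Client side: name the method, pass params, and attach an id for matching.
request = json.dumps({"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1})

# Server side: a minimal dispatcher that answers that one method.
def handle(raw):
    req = json.loads(raw)
    result = req["params"][0] - req["params"][1]
    return json.dumps({"jsonrpc": "2.0", "result": result, "id": req["id"]})

print(handle(request))  # {"jsonrpc": "2.0", "result": 19, "id": 1}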
https://en.wikipedia.org/wiki/Umklapp%20scattering
In crystalline materials, Umklapp scattering (also U-process or Umklapp process) is a scattering process that results in a wave vector (usually written k) which falls outside the first Brillouin zone. If a material is periodic, it has a Brillouin zone, and any point outside the first Brillouin zone can also be expressed as a point inside the zone. So, the wave vector is then mathematically transformed to a point inside the first Brillouin zone. This transformation allows for scattering processes which would otherwise violate the conservation of momentum: two wave vectors pointing to the right can combine to create a wave vector that points to the left. This non-conservation is why crystal momentum is not a true momentum. Examples include electron-lattice potential scattering or an anharmonic phonon-phonon (or electron-phonon) scattering process, reflecting an electronic state or creating a phonon with a momentum k-vector outside the first Brillouin zone. Umklapp scattering is one process limiting the thermal conductivity in crystalline materials, the others being phonon scattering on crystal defects and at the surface of the sample. The left panel of Figure 1 schematically shows the possible scattering processes of two incoming phonons with wave-vectors (k-vectors) k1 and k2 (red) creating one outgoing phonon with a wave vector k3 (blue). As long as the sum of k1 and k2 stays inside the first Brillouin zone (grey squares), k3 is the sum of the former two, thus conserving phonon momentum. This process is called normal scattering (N-process). With increasing phonon momentum and thus larger wave vectors k1 and k2, their sum might point outside the first Brillouin zone (k'3). As shown in the right panel of Figure 1, k-vectors outside the first Brillouin zone are physically equivalent to vectors inside it and can be mathematically transformed into each other by the addition of a reciprocal lattice vector G. These processes are called Umklapp scattering and change the
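In symbols (a standard statement of the two selection rules, not tied to any particular source):

\[
\mathbf{k}_1 + \mathbf{k}_2 = \mathbf{k}_3 \quad \text{(N-process)}, \qquad
\mathbf{k}_1 + \mathbf{k}_2 = \mathbf{k}_3 + \mathbf{G} \quad \text{(U-process)},
\]

where \(\mathbf{G}\) is a reciprocal lattice vector.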
https://en.wikipedia.org/wiki/Detailed%20balance
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process. History The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility. Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with the reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (pg. 64). Albert Einstein in 1916 used the principle of detailed balance as a background for his quantum theory of emission and absorption of radiation. In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles A1 → A2 → ⋯ → An → A1 are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry. The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state. Now, the principle of detailed balance is a standard part of university courses in statistical mechanics, physical chemistry, chemical and physical kinetics. Microscopic background The microscopic "reversing of time" turns at
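A minimal sketch in Python of how the Metropolis acceptance rule enforces detailed balance for a two-state system; the energy values and temperature are assumed for illustration:

import math, random

# Metropolis sampling of a two-state system; the acceptance rule
# min(1, pi_j / pi_i) enforces detailed balance pi_i P(i->j) = pi_j P(j->i).
energies = {0: 0.0, 1: 1.0}   # assumed energy levels (arbitrary units, kT = 1)

def metropolis_step(state: int) -> int:
    proposal = 1 - state                       # symmetric proposal
    delta_e = energies[proposal] - energies[state]
    if random.random() < min(1.0, math.exp(-delta_e)):
        return proposal                        # accept
    return state                               # reject: stay put

state, counts = 0, [0, 0]
for _ in range(100_000):
    state = metropolis_step(state)
    counts[state] += 1

# The empirical ratio should approach the Boltzmann ratio exp(-1) ~ 0.368.
print(counts[1] / counts[0])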
https://en.wikipedia.org/wiki/Paleopathology
Paleopathology, also spelled palaeopathology, is the study of ancient diseases and injuries in organisms through the examination of fossils, mummified tissue, skeletal remains, and analysis of coprolites. Specific sources in the study of ancient human diseases may include early documents, illustrations from early books, painting and sculpture from the past. Looking at the individual roots of the word "paleopathology" can give a basic definition of what it encompasses. "Paleo-" refers to "ancient, early, prehistoric, primitive, fossil." The suffix "-pathology" comes from the Latin pathologia, meaning "study of disease." Through the analysis of these sources, information on the evolution of diseases, as well as on how past civilizations treated conditions, can be obtained as valuable byproducts. Studies have historically focused on humans, but there is no evidence that humans are more prone to pathologies than any other animal. Paleopathology is an interdisciplinary science, meaning it involves knowledge from many sectors including (but not limited to) "clinical pathology, human osteology, epidemiology, social anthropology, and archaeology". Since it is unlikely that one person can be fluent in all the necessary sciences, specialists trained in each are important and together make up a collective study. Training in anthropology and archaeology is arguably most important, because the analysis of human remains and ancient artifacts is paramount to the discovery of early disease. History Historical evidence shows that deviations from good health have long been an interest to humans. Although the content that makes up this study can be traced through ancient texts, the term "paleopathology" did not have much traction until the 20th century. This time period saw an increase in case studies and "published reports on ancient diseases". Ancient texts that are thousands of years old record instances of diseases such as leprosy. From the Renaissance to the mid-nineteenth century, there was incre
https://en.wikipedia.org/wiki/Harry%20Steenbock
Harry Steenbock (August 16, 1886, Charlestown, Wisconsin – December 25, 1967, Madison, Wisconsin) was a professor of biochemistry at the University of Wisconsin–Madison. Steenbock graduated from Wisconsin in 1916, where he was a member of Phi Sigma Kappa fraternity. Vitamin D Steenbock was born in Charlestown, Wisconsin, and grew up on a model farm outside New Holstein, Wisconsin. His graduate advisor at the University of Wisconsin–Madison was Edwin B. Hart. His first publication reported the results of the single-grain experiment, on which he assisted Hart and Stephen Moulton Babcock. During his graduate career, Steenbock also served as an assistant in the lab of Elmer McCollum. When McCollum and another assistant, Marguerite Davis, published their discovery of what came to be called vitamin A, Steenbock thought he deserved more credit than he received. Steenbock carried on the vitamin A work in Madison after McCollum accepted an offer from Johns Hopkins University. In 1923, Steenbock demonstrated that irradiation by ultraviolet light increased the vitamin D content of foods and other organic materials. After irradiating rodent food, Steenbock discovered that the rodents were cured of rickets. It is now known that vitamin D deficiency is a cause of rickets. Using $300 of his own money, Steenbock patented his invention. Steenbock's irradiation technique was used for foodstuffs, but most memorably for milk. By the expiration of the patent in 1945, rickets had all but been eliminated. WARF After receiving his patent, the Quaker Oats company offered $1 million (approximately $10 million today) for Steenbock's vitamin D technology. Steenbock thought twice about the offer. Instead of quickly selling his rights to a commercial company, Steenbock believed the money should be returned to the university. After soliciting interest from nine other University of Wisconsin–Madison alumni, Steenbock was influential in starting the first university technology transfer
https://en.wikipedia.org/wiki/European%20Ecological%20Federation
The European Ecological Federation (EEF) is a European organisation with the objective "to promote cooperation within the science of ecology in Europe". Members Its members are: British Ecological Society, United Kingdom and Ireland Gesellschaft für Ökologie, Germany, Austria, Switzerland, and Liechtenstein Société Française d'Ecologie, France and Belgium Swedish and Nordic Oikos Societies, Nordic countries Società Italiana di Ecologia, Italy Dutch-Flemish Ecological Society (Necov), Netherlands and Flanders Polish Ecological Society Petecol, Poland Société d'Ecophysiologie, France Greek Ecologists' Association, Greece Turkish Ecological Society, Turkey Slovak Ecological Society, Slovakia Spanish Ecological Society, Spain Portuguese Ecological Society (SPECO), Portugal Romanian Ecological Society, Romania Czech Union of Ecologists, Czech Republic Hungarian Biological Society, Hungary Institute of Ecology, Poland Macedonian Ecological Society, North Macedonia Lithuanian Ecological Society, Lithuania Presidents Presidents include: R. J. Berry 1990–1992 R. Bornkamm 1993–1995 Piet Nienhuis 1995–1999 John Pantis 1999–2002 Pehr H. Enckell 2002–2006 Stefan Klotz 2006–present
https://en.wikipedia.org/wiki/Generic%20filter
In the mathematical field of set theory, a generic filter is a kind of object used in the theory of forcing, a technique used for many purposes, but especially to establish the independence of certain propositions from certain formal theories, such as ZFC. For example, Paul Cohen used forcing to establish that ZFC, if consistent, cannot prove the continuum hypothesis, which states that there are exactly aleph-one real numbers. In the contemporary re-interpretation of Cohen's proof, it proceeds by constructing a generic filter that codes more than aleph-one reals, without changing the value of aleph-one. Formally, let P be a partially ordered set, and let F be a filter on P; that is, F is a subset of P such that: F is nonempty If p, q ∈ P and p ≤ q and p is an element of F, then q is an element of F (F is closed upward) If p and q are elements of F, then there is an element r of F such that r ≤ p and r ≤ q (F is downward directed) Now if D is a collection of dense open subsets of P, in the topology whose basic open sets are all sets of the form {q | q ≤ p} for particular p in P, then F is said to be D-generic if F meets all sets in D; that is, F ∩ E ≠ ∅ for all E ∈ D. Similarly, if M is a transitive model of ZFC (or some sufficient fragment thereof), with P an element of M, then F is said to be M-generic, or sometimes generic over M, if F meets all dense open subsets of P that are elements of M.
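As a standard illustration (Cohen forcing, not part of the article's text): take P to be the set of finite partial functions p : ω ⇀ 2, ordered by reverse inclusion (p ≤ q iff p ⊇ q). For each n, the set

\[
D_n = \{\, p \in P : n \in \operatorname{dom}(p) \,\}
\]

is dense and open, and a filter G meeting every \(D_n\) determines a total function \(\bigcup G : \omega \to 2\), that is, a new real.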
https://en.wikipedia.org/wiki/Physical%20plant
Physical plant, mechanical plant or industrial plant (and where context is given, often just plant) refers to the necessary infrastructure used in operation and maintenance of a given facility. The operation of these facilities, or the department of an organization which does so, is called "plant operations" or facility management. Industrial plant should not be confused with "manufacturing plant" in the sense of "a factory". The term takes a holistic look at the architecture, design, equipment, and other peripheral systems linked with a plant that are required to operate or maintain it. Power plants Nuclear power The design and equipment in a nuclear power plant have for the most part remained stagnant over the last 30 years. There are three types of reactor cooling mechanisms: "Light water reactors, Liquid Metal Reactors and High Temperature Gas-Cooled Reactors". While for the most part equipment remains the same, there have been some minimal modifications to existing reactors improving safety and efficiency. There have also been significant design changes for all these reactors; however, they remain theoretical and unimplemented. Nuclear power plant equipment can be separated into two categories: primary systems and balance-of-plant systems. Primary systems are equipment involved in the production and safety of nuclear power. The reactor specifically has equipment such as the reactor vessel, which usually surrounds the core for protection, and the reactor core, which holds the fuel rods. It also includes reactor cooling equipment consisting of liquid cooling loops circulating coolant. These loops are usually separate systems, each having at least one pump. Other equipment includes steam generators and pressurizers that ensure pressure in the plant is adjusted as needed. Containment equipment is the physical structure built around the reactor to protect the surroundings from reactor failure. Lastly, primary systems also include emergency core cooling equipment and reactor protection equipmen
https://en.wikipedia.org/wiki/Two-vector
A two-vector or bivector is a tensor of type (2,0), and it is the dual of a two-form, meaning that it is a linear functional which maps two-forms to the real numbers (or more generally, to scalars). The tensor product of a pair of vectors is a two-vector. Then, any two-vector can be expressed as a linear combination of tensor products of pairs of vectors, especially a linear combination of tensor products of pairs of basis vectors. If f is a two-vector, then f = f^αβ e_α ⊗ e_β, where the f^αβ are the components of the two-vector. Notice that both indices of the components are contravariant. This is always the case for two-vectors, by definition. A bivector may operate on a one-form ω, yielding a vector with components f^αβ ω_β, although a problem might be which of the upper indices of the bivector to contract with. (This problem does not arise with mixed tensors because only one of such tensor's indices is upper.) However, if the bivector is symmetric then the choice of index to contract with is indifferent. An example of a bivector is the stress–energy tensor. Another one is the orthogonal complement of the metric tensor. Matrix notation If one assumes that vectors may only be represented as column matrices and covectors as row matrices; then, since a square matrix operating on a column vector must yield a column vector, it follows that square matrices can only represent mixed tensors. However, there is nothing in the abstract algebraic definition of a matrix that says that such assumptions must be made. Dropping that assumption, matrices can be used to represent bivectors as well as two-forms: a bivector f is represented by the square matrix of its components (f^αβ). If f is symmetric, i.e. f^αβ = f^βα, then this matrix equals its own transpose. See also Two-point tensor Bivector § Tensors and matrices (but note that the stress–energy tensor is symmetric, not skew-symmetric) Dyadics
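For a concrete illustration (a made-up two-dimensional example, not from any source), contraction of a symmetric bivector with a one-form in matrix form:

\[
f^{\alpha\beta} = \begin{pmatrix} 0 & 2 \\ 2 & 1 \end{pmatrix}, \qquad
\omega_\beta = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
f^{\alpha\beta}\omega_\beta = \begin{pmatrix} -2 \\ 1 \end{pmatrix},
\]

and because f is symmetric, contracting over the other index gives the same vector.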
https://en.wikipedia.org/wiki/Numerical%20taxonomy
Numerical taxonomy is a classification system in biological systematics which deals with the grouping by numerical methods of taxonomic units based on their character states. It aims to create a taxonomy using numeric algorithms like cluster analysis rather than using subjective evaluation of their properties. The concept was first developed by Robert R. Sokal and Peter H. A. Sneath in 1963 and later elaborated by the same authors. They divided the field into phenetics in which classifications are formed based on the patterns of overall similarities and cladistics in which classifications are based on the branching patterns of the estimated evolutionary history of the taxa. Although intended as an objective method, in practice the choice and implicit or explicit weighting of characteristics is influenced by available data and research interests of the investigator. What was made objective was the introduction of explicit steps to be used to create dendrograms and cladograms using numerical methods rather than subjective synthesis of data. See also Computational phylogenetics Taxonomy (biology)
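A sketch of the numerical approach using SciPy's hierarchical clustering; the character-state matrix below is invented for illustration (rows are taxa, columns are coded characters), and average linkage corresponds to the classical UPGMA method of phenetics:

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Invented character-state matrix: 4 operational taxonomic units (rows)
# scored on 5 characters (columns), as in a phenetic analysis.
characters = np.array([
    [1, 0, 1, 1, 0],   # taxon A
    [1, 0, 1, 0, 0],   # taxon B
    [0, 1, 0, 1, 1],   # taxon C
    [0, 1, 0, 1, 0],   # taxon D
])

# Cluster by overall similarity (Euclidean distance, average linkage = UPGMA).
tree = linkage(characters, method="average", metric="euclidean")

# The linkage matrix encodes the dendrogram; dendrogram(tree) would plot it.
print(tree)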
https://en.wikipedia.org/wiki/Property%20of%20Baire
A subset A of a topological space X has the property of Baire (Baire property, named after René-Louis Baire), or is called an almost open set, if it differs from an open set by a meager set; that is, if there is an open set U such that A Δ U is meager (where Δ denotes the symmetric difference). Definitions A subset A of a topological space X is called almost open and is said to have the property of Baire or the Baire property if there is an open set U such that A Δ U is a meager subset, where Δ denotes the symmetric difference. Further, A has the Baire property in the restricted sense if for every subset E of X the intersection A ∩ E has the Baire property relative to E. Properties The family of sets with the property of Baire forms a σ-algebra. That is, the complement of an almost open set is almost open, and any countable union or intersection of almost open sets is again almost open. Since every open set is almost open (the empty set is meager), it follows that every Borel set is almost open. If a subset of a Polish space has the property of Baire, then its corresponding Banach–Mazur game is determined. The converse does not hold; however, if every game in a given adequate pointclass Γ is determined, then every set in Γ has the property of Baire. Therefore, it follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set (in a Polish space) has the property of Baire. It follows from the axiom of choice that there are sets of reals without the property of Baire. In particular, the Vitali set does not have the property of Baire. Already weaker versions of choice are sufficient: the Boolean prime ideal theorem implies that there is a nonprincipal ultrafilter on the set of natural numbers; each such ultrafilter induces, via binary representations of reals, a set of reals without the Baire property.
https://en.wikipedia.org/wiki/Memorex
Memorex Corp. began as a computer tape producer and expanded to become both a consumer media supplier and a major IBM plug-compatible peripheral supplier. It was broken up and ceased to exist after 1996 other than as a consumer electronics brand specializing in disk recordable media for CD and DVD drives, flash memory, computer accessories and other electronics. History and evolution Established in 1961 in Silicon Valley, Memorex started by selling computer tapes, then added other media such as disk packs. The company then expanded into disk drives and other peripheral equipment for IBM mainframes. During the 1970s and into the early 1980s, Memorex was worldwide one of the largest independent suppliers of disk drives and communications controllers to users of IBM-compatible mainframes, as well as media for computer uses and consumers. The company's name is a portmanteau of "memory excellence". Memorex entered the consumer media business in 1971 and started its ad campaign, first with its "shattering glass" advertisements and then with a series of legendary television commercials featuring Ella Fitzgerald. In the commercials, she would sing a note that shattered a glass while being recorded to a Memorex audio cassette. The tape was played back and the recording also broke the glass, asking "Is it live, or is it Memorex?" This would become the company slogan, used in a series of advertisements released through the 1970s and 1980s. In 1982, Memorex was bought by Burroughs for its enterprise businesses; the company's consumer business, a small segment of the company's revenue at that time, was sold to Tandy. Over the next six years, Burroughs and its successor Unisys shut down, sold off or spun out the various remaining parts of Memorex. The computer media, communications and IBM end user sales and service organization were spun out as Memorex International. In 1988, Memorex International acquired the Telex Corporation, becoming Memorex Telex NV, a corporatio
https://en.wikipedia.org/wiki/Anderson%20localization
In condensed matter physics, Anderson localization (also known as strong localization) is the absence of diffusion of waves in a disordered medium. This phenomenon is named after the American physicist P. W. Anderson, who was the first to suggest that electron localization is possible in a lattice potential, provided that the degree of randomness (disorder) in the lattice is sufficiently large, as can be realized for example in a semiconductor with impurities or defects. Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished from weak localization, which is the precursor effect of Anderson localization (see below), and from Mott localization, named after Sir Nevill Mott, where the transition from metallic to insulating behaviour is not due to disorder, but to a strong mutual Coulomb repulsion of electrons. Introduction In the original Anderson tight-binding model, the evolution of the wave function ψ on the d-dimensional lattice Zd is given by the Schrödinger equation iħ ∂ψ/∂t = Hψ, where the Hamiltonian H is given by (Hψ)(j) = Ej ψ(j) + Σk≠j V(|j − k|) ψ(k), with Ej random and independent, and potential V(r) falling off faster than r−3 at infinity. For example, one may take Ej uniformly distributed in [−W, +W], and V(r) = 1 for |r| = 1 and V(r) = 0 otherwise (nearest-neighbour hopping). Starting with ψ0 localised at the origin, one is interested in how fast the probability distribution |ψ(t, j)|2 diffuses. Anderson's analysis shows the following: if d is 1 or 2 and W is arbitrary, or if d ≥ 3 and W/ħ is sufficiently large, then the probability distribution remains localized: the mean square displacement Σj |j|2 |ψ(t, j)|2 remains bounded uniformly in t. This phenomenon is called Anderson localization. if d ≥ 3 and W/ħ is small, the wave packet spreads diffusively, Σj |j|2 |ψ(t, j)|2 ≈ D t, where D is the diffusion constant. Analysis The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in the wave interference between multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the wav
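A minimal numerical sketch (Python, with arbitrarily chosen parameters) of the one-dimensional version of this model: build the random Hamiltonian, diagonalize it, and measure how concentrated the eigenstates are via the inverse participation ratio:

import numpy as np

# 1-d Anderson model: H = diag(E_j) + nearest-neighbour hopping (V = 1).
rng = np.random.default_rng(0)
n, w = 200, 3.0                       # sites and disorder strength (illustrative)
h = np.diag(rng.uniform(-w, w, n))    # random site energies E_j in [-W, +W]
h += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

_, vecs = np.linalg.eigh(h)

# Inverse participation ratio: ~1/n for extended states, O(1) when localized.
ipr = (np.abs(vecs) ** 4).sum(axis=0)
print(f"mean IPR = {ipr.mean():.3f}  vs 1/n = {1 / n:.3f}")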
https://en.wikipedia.org/wiki/Thouless%20energy
The Thouless energy is a characteristic energy scale of diffusive disordered conductors. It was first introduced by the Scottish-American physicist David J. Thouless when studying Anderson localization, as a measure of the sensitivity of energy levels to a change in the boundary conditions of the system. Though being a classical quantity, it has been shown to play an important role in the quantum-mechanical treatment of disordered systems. It is defined by E_T = ħD/L², where D is the diffusion constant and L the size of the system; it is thereby inversely proportional to the diffusion time L²/D through the system.
https://en.wikipedia.org/wiki/Sector%20mass%20spectrometer
A sector instrument is a general term for a class of mass spectrometer that uses a static electric (E) or magnetic (B) sector or some combination of the two (separately in space) as a mass analyzer. Popular combinations of these sectors have been the EB, BE (of so-called reverse geometry), three-sector BEB and four-sector EBEB (electric-magnetic-electric-magnetic) instruments. Most modern sector instruments are double-focusing instruments (first developed by Francis William Aston, Arthur Jeffrey Dempster, Kenneth Bainbridge and Josef Mattauch in 1936) in that they focus the ion beams both in direction and velocity. Theory The behavior of ions in a homogeneous, linear, static electric or magnetic field (separately) as is found in a sector instrument is simple. The physics are described by a single equation called the Lorentz force law. This equation is the fundamental equation of all mass spectrometric techniques and applies in non-linear, non-homogeneous cases too, and is an important equation in the field of electrodynamics in general: F = q(E + v × B), where E is the electric field strength, B is the magnetic field induction, q is the charge of the particle, v is its current velocity (expressed as a vector), and × is the cross product. So the force on an ion in a linear homogeneous electric field (an electric sector) is F = qE, in the direction of the electric field, with positive ions and opposite that with negative ions. The force is only dependent on the charge and electric field strength. The lighter ions will be deflected more and heavier ions less due to the difference in inertia, and the ions will physically separate from each other in space into distinct beams of ions as they exit the electric sector. And the force on an ion in a linear homogeneous magnetic field (a magnetic sector) is F = qv × B, perpendicular to both the magnetic field and the velocity vector of the ion itself, in the direction determined by the right-hand rule of cross products and the sign of the charge. Th
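Balancing the magnetic force against the centripetal force, qvB = mv²/r, gives the radius of the ion's circular path, r = mv/(qB); the following Python sketch (with purely illustrative values) shows how ions of different mass separate in a magnetic sector:

# Radius of curvature in a magnetic sector: qvB = m v^2 / r  =>  r = m v / (q B)
E_CHARGE = 1.602e-19      # elementary charge, C
AMU = 1.661e-27           # atomic mass unit, kg

def sector_radius(mass_amu: float, speed: float, b_field: float, charge: int = 1) -> float:
    """Radius (m) of a singly or multiply charged ion's path in field b_field (T)."""
    return (mass_amu * AMU * speed) / (charge * E_CHARGE * b_field)

# Two singly charged ions at the same speed follow different arcs:
for m in (100.0, 101.0):                       # illustrative masses in u
    print(f"m = {m} u -> r = {sector_radius(m, speed=1.0e5, b_field=0.5):.4f} m")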
https://en.wikipedia.org/wiki/Temporal%20logic%20of%20actions
Temporal logic of actions (TLA) is a logic developed by Leslie Lamport, which combines temporal logic with a logic of actions. It is used to describe behaviours of concurrent and distributed systems. It is the logic underlying the specification language TLA+. Details Statements in the temporal logic of actions are of the form □[A]_t, where A is an action and t contains a subset of the variables appearing in A. An action is an expression containing primed and non-primed variables, such as x + x′·y = y′. The meaning of the non-primed variables is the variable's value in this state. The meaning of primed variables is the variable's value in the next state. The above expression means the value of x today, plus the value of x tomorrow times the value of y today, equals the value of y tomorrow. The meaning of [A]_t is that either A is valid now, or the variables appearing in t do not change; that is, [A]_t ≡ A ∨ (t′ = t). This allows for stuttering steps, in which none of the program variables change their values. See also Temporal logic PlusCal TLA+
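As a small worked example (a standard toy specification, not taken from the article), a counter that either increments x or stutters satisfies

\[
\Box [x' = x + 1]_x \;\equiv\; \Box \bigl( (x' = x + 1) \lor (x' = x) \bigr),
\]

so a behaviour may insert arbitrarily many steps in which x is unchanged and still satisfy the formula.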
https://en.wikipedia.org/wiki/Skype%20Technologies
Skype Technologies S.A.R.L (also known as Skype Software S.A.R.L, Skype Communications S.A.R.L, Skype Inc., and Skype Limited) is a telecommunications company headquartered in Luxembourg City, Luxembourg, whose chief business is the manufacturing and marketing of the video chat and instant messaging computer software program Skype, and various Internet telephony services associated with it. Microsoft purchased the company in 2011, and it has since then operated as its wholly owned subsidiary; as of 2016, it is operating as part of Microsoft's Office Product Group. The company is a Société à responsabilité limitée, or SARL, equivalent to an American limited liability company. Skype, a voice over IP (VoIP) service, was first released in 2003 as a way to make free computer-to-computer calls, or reduced-rate calls from a computer to telephones. Support for paid services such as calling landline/mobile phones from Skype (formerly called SkypeOut), allowing landline/mobile phones to call Skype (formerly called SkypeIn and now Skype Number), and voice messaging generates the majority of Skype's revenue. eBay acquired Skype Technologies S.A. in September 2005 and in April 2009 announced plans to spin it off in a 2010 initial public offering (IPO). In September 2009, Silver Lake, Andreessen Horowitz and the Canada Pension Plan Investment Board announced the acquisition of 65% of Skype for $1.9 billion from eBay, valuing the business at $2.75 billion. Skype was acquired by Microsoft in May 2011 for $8.5 billion. As of 2010, Skype was available in 27 languages, had 660 million worldwide users, an average of over 100 million active each month, and had faced challenges to its intellectual property amid political concerns by governments wishing to control telecommunications systems within their borders. History Skype was founded in 2003 by Janus Friis from Denmark and Niklas Zennström from Sweden, having its headquarters in Luxembourg with offices now in Berli
https://en.wikipedia.org/wiki/Poisson%20kernel
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson. Poisson kernels commonly find applications in control theory and two-dimensional problems in electrostatics. In practice, the definition of Poisson kernels is often extended to n-dimensional problems. Two-dimensional Poisson kernels On the unit disc In the complex plane, the Poisson kernel for the unit disc is given by Pr(θ) = Σn∈Z r|n| einθ = (1 − r2)/(1 − 2r cos θ + r2), 0 ≤ r < 1. This can be thought of in two ways: either as a function of r and θ, or as a family of functions of θ indexed by r. If D is the open unit disc in C, T is the boundary of the disc, and f a function on T that lies in L1(T), then the function u given by u(reiθ) = (1/2π) ∫[−π,π] Pr(θ − t) f(eit) dt is harmonic in D and has a radial limit that agrees with f almost everywhere on the boundary T of the disc. That the boundary value of u is f can be argued using the fact that as r → 1, the functions Pr(θ) form an approximate unit in the convolution algebra L1(T). As linear operators, they tend to the Dirac delta function pointwise on Lp(T). By the maximum principle, u is the only such harmonic function on D. Convolutions with this approximate unit gives an example of a summability kernel for the Fourier series of a function in L1(T). Let f ∈ L1(T) have Fourier series {fk}. After the Fourier transform, convolution with Pr(θ) becomes multiplication by the sequence {r|k|} ∈ ℓ1(Z). Taking the inverse Fourier transform of the resulting product {r|k|fk} gives the Abel means Arf of f: Arf(θ) = Σk∈Z r|k| fk eikθ. Rearranging this absolutely convergent series shows that f is the boundary value of g + h, where g (resp. h) is a holomorphic (resp. antiholomorphic) function on D. When one also asks for the harmonic extension to be holomorphic, then the solutions are elements of a Hardy space. This is true when
https://en.wikipedia.org/wiki/Koha%20%28software%29
Koha is an open-source integrated library system (ILS), used world-wide by public, school and special libraries. The name comes from a Māori term for a gift or donation. Features Koha is a web-based ILS, with a SQL database (MariaDB or MySQL preferred) back end with cataloguing data stored in MARC and accessible via Z39.50 or SRU. The user interface is very configurable and adaptable and has been translated into many languages. Koha has most of the features that would be expected in an ILS, including: Various Web 2.0 facilities like tagging, comment, social sharing and RSS feeds Union catalog facility Customizable search Online circulation Bar code printing Patron card creation Report generation Patron self registration form through OPAC History Koha was created in 1999 by Katipo Communications for the Horowhenua Library Trust in New Zealand, and the first installation went live in January 2000. From 2000, companies started providing commercial support for Koha, building to more than 50 today. In 2001, Paul Poulain (of Marseille, France) began adding many new features to Koha, most significantly support for multiple languages. By 2010, Koha had been translated from its original English into French, Chinese, Arabic and several other languages. Support for the cataloguing and search standards MARC and Z39.50 was added in 2002 and later sponsored by the Athens County Public Libraries. Poulain co-founded BibLibre in 2007. In 2005, an Ohio-based company, Metavore, Inc., trading as LibLime, was established to support Koha and added many new features, including support for Zebra sponsored by the Crawford County Federated Library System. Zebra support increased the speed of searches as well as improving scalability to support tens of millions of bibliographic records. In 2007 a group of libraries in Vermont began testing the use of Koha for Vermont libraries. At first a separate implementation was created for each library. Then the Vermont Organization of K
https://en.wikipedia.org/wiki/Geometry%20processing
Geometry processing, or mesh processing, is an area of research that uses concepts from applied mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. As the name implies, many of the concepts, data structures, and algorithms are directly analogous to signal processing and image processing. For example, where image smoothing might convolve an intensity signal with a blur kernel formed using the Laplace operator, geometric smoothing might be achieved by convolving a surface geometry with a blur kernel formed using the Laplace-Beltrami operator. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing. Geometry processing is a common research topic at SIGGRAPH, the premier computer graphics academic conference, and the main topic of the annual Symposium on Geometry Processing. Geometry processing as a life cycle Geometry processing involves working with a shape, usually in 2D or 3D, although the shape can live in a space of arbitrary dimensions. The processing of a shape involves three stages, which is known as its life cycle. At its "birth," a shape can be instantiated through one of three methods: a model, a mathematical representation, or a scan. After a shape is born, it can be analyzed and edited repeatedly in a cycle. This usually involves acquiring different measurements, such as the distances between the points of the shape, the smoothness of the shape, or its Euler characteristic. Editing may involve denoising, deforming, or performing rigid transformations. At the final stage of the shape's "life," it is consumed. This can mean it is consumed by a viewer as a rendered asset in a game or movie, for instance. The end of a shape's life can also be defined by a de
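A sketch of the signal-processing analogy on the simplest possible "mesh", a closed polyline: each vertex is moved toward the average of its neighbours, a discrete diffusion step (data and parameters invented for illustration; on a real surface mesh the neighbour average is replaced by the Laplace–Beltrami operator):

import numpy as np

# Laplacian smoothing of a noisy closed polyline (a noisy circle).
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
verts = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.05, (100, 2))

def smooth(v: np.ndarray, lam: float = 0.5, iters: int = 20) -> np.ndarray:
    for _ in range(iters):
        # Umbrella operator: neighbour average minus the vertex itself.
        umbrella = 0.5 * (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0)) - v
        v = v + lam * umbrella      # diffusion step: v <- v + lam * L(v)
    return v

before = np.abs(np.linalg.norm(verts, axis=1) - 1).mean()
after = np.abs(np.linalg.norm(smooth(verts), axis=1) - 1).mean()
print(before, after)   # the high-frequency noise is damped, so after is typically smaller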
https://en.wikipedia.org/wiki/Fosmidomycin
Fosmidomycin is an antibiotic that was originally isolated from culture broths of bacteria of the genus Streptomyces. It specifically inhibits DXP reductoisomerase, a key enzyme in the non-mevalonate pathway of isoprenoid biosynthesis. It is a structural analogue of 2-C-methyl-D-erythrose 4-phosphate. It inhibits the E. coli enzyme with a KI value of 38 nM, the Mycobacterium tuberculosis enzyme at 80 nM, and the Francisella enzyme at 99 nM. Several mutations in the E. coli DXP reductoisomerase were found to confer resistance to fosmidomycin. Use in malaria The discovery of the non-mevalonate pathway in malaria parasites has indicated the use of fosmidomycin and other such inhibitors as antimalarial drugs. Indeed, fosmidomycin has been tested in combination treatment with clindamycin for treatment of malaria with favorable results. It has been shown that an increase in copy number of the target enzyme (DXP reductoisomerase) correlates with in vitro fosmidomycin resistance in the lethal malaria parasite, Plasmodium falciparum.
https://en.wikipedia.org/wiki/Noke%20%28worms%29
Noke is a culinary term used by the Māori of New Zealand to refer to earthworms. Some types of native worms (called noke whiti and noke kurekure in Māori) are historically local delicacies reserved for chiefs because of their sweet flavour, which was said to "remain in the mouth for two days". Another notable kind of worm, the noke waiū (possibly Octochaetus multiporus), was prized as eel fishing bait due to its large size and bioluminescence. Noke has more recently become a popular trend at certain New Zealand wild food festivals, where it is often served in modern fusion dishes such as worm sushi and chocolate truffles with crystallized worm. According to Māori mythology, the trickster Māui once transformed himself into a noke worm in order to crawl into the womb of the underworld goddess Hine-nui-te-pō and gain everlasting life. Because the worm has characteristics of both males and females, it was considered divine.
https://en.wikipedia.org/wiki/Procedural%20generation
In computing, procedural generation (sometimes shortened as proc-gen) is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated content and algorithms coupled with computer-generated randomness and processing power. In computer graphics, it is commonly used to create textures and 3D models. In video games, it is used to automatically create large amounts of content in a game. Depending on the implementation, advantages of procedural generation can include smaller file sizes, larger amounts of content, and randomness for less predictable gameplay. Procedural generation is a branch of media synthesis. Overview The term procedural refers to the process that computes a particular function. Fractals are geometric patterns which can often be generated procedurally. Commonplace procedural content includes textures and meshes. Sound is often also procedurally generated, and has applications in both speech synthesis and music. It has been used to create compositions in various genres of electronic music by artists such as Brian Eno, who popularized the term "generative music". While software developers have applied procedural generation techniques for years, few products have employed this approach extensively. Procedurally generated elements have appeared in earlier video games: The Elder Scrolls II: Daggerfall takes place in a mostly procedurally generated world, giving a world roughly two thirds the actual size of the British Isles. Soldier of Fortune from Raven Software uses simple routines to detail enemy models, while its sequel featured a randomly generated level mode. Avalanche Studios employed procedural generation to create a large and varied group of detailed tropical islands for Just Cause. No Man's Sky, a game developed by games studio Hello Games, is based almost entirely upon procedurally generated elements. The modern demoscene uses procedural generation to package a great deal of audiovisual content
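One classic technique can be sketched in a few lines of Python: midpoint displacement, which grows a 1-d terrain profile from a seed value and a roughness parameter, so that the same seed always reproduces the same terrain (all parameter values here are illustrative):

import random

def midpoint_displacement(levels: int = 8, roughness: float = 0.5, seed: int = 42):
    """Procedurally generate a 1-d height profile: repeatedly insert
    displaced midpoints, shrinking the displacement range each level."""
    random.seed(seed)                    # same seed -> same terrain
    heights = [0.0, 0.0]
    spread = 1.0
    for _ in range(levels):
        mids = [(a + b) / 2 + random.uniform(-spread, spread)
                for a, b in zip(heights, heights[1:])]
        out = []
        for h, m in zip(heights, mids):   # interleave points with new midpoints
            out += [h, m]
        heights = out + [heights[-1]]
        spread *= roughness
    return heights

print(len(midpoint_displacement()), "heights generated from one seed")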
https://en.wikipedia.org/wiki/Key%20Tronic
Key Tronic Corporation (branded Keytronic) is a technology company founded in 1969. Its core products initially included keyboards, mice and other input devices. KeyTronic currently specializes in printed circuit board assembly (PCBA) and full product assembly. The company is among the ten largest contract manufacturers providing electronic manufacturing services in the US. The company offers full product design or assembly of a wide variety of household goods and electronic products such as keyboards, printed circuit board assembly, plastic molding, thermometers, toilet bowl cleaners, satellite tracking systems, etc. Keyboards After the introduction of the IBM PC, Keytronic began manufacturing keyboards compatible with those computer system units. Most of their keyboards are based on an 8048-family microcontroller that communicates with the computer. Their early keyboards used an Intel 8048 MCU; as the company evolved, they began to use their own 8048-based and 83C51KB-based MCUs. In 1978, Keytronic Corporation introduced keyboards with capacitive-based switches, one of the first keyboard technologies not to use self-contained switches. There was simply a sponge pad with a conductive-coated Mylar plastic sheet on the switch plunger, and two half-moon trace patterns on the printed circuit board below. As the key was depressed, the capacitance between the plunger pad and the patterns on the PCB below changed, which was detected by integrated circuits (ICs). These keyboards were claimed to have the same reliability as other "solid-state switch" keyboards, such as inductive and Hall-effect designs, while remaining cost-competitive with direct-contact keyboards. ErgoForce Among modern keyboard enthusiasts, Keytronic is known mostly for its "ErgoForce" technology, where different keys have rubber domes with different stiffness. The alphabetic keys intended to be struck with the little finger need only 35 grams of force to actuate, while other alphabetic keys need 45 grams. Other keys can be as stiff as 80 grams. Corpora
https://en.wikipedia.org/wiki/Magnetogravity%20wave
A magnetogravity wave is a type of plasma wave. A magnetogravity wave is an acoustic gravity wave which is associated with fluctuations in the background magnetic field. In this context, gravity wave refers to a classical fluid wave, and is completely unrelated to the relativistic gravitational wave. Examples Magnetogravity waves are found in the corona of the Sun. See also Wave Plasma Magnetosonic wave Helioseismology
https://en.wikipedia.org/wiki/Z2%20%28computer%29
The Z2 was an electromechanical (mechanical and relay-based) digital computer that was completed by Konrad Zuse in 1940. It was an improvement on the Z1 Zuse built in his parents' home, which used the same mechanical memory. In the Z2, he replaced the arithmetic and control logic with 600 electrical relay circuits, weighing over 600 pounds. The Z2 could read 64 words from punch cards. Photographs and plans for the Z2 were destroyed by the Allied bombing during World War II. In contrast to the Z1, the Z2 used 16-bit fixed-point arithmetic instead of 22-bit floating point. Zuse presented the Z2 in 1940 to members of the DVL (today DLR) and member , whose support helped fund the successor model Z3. Specifications See also Z1 Z3 Z4
https://en.wikipedia.org/wiki/Burrow
A burrow is a hole or tunnel excavated into the ground by an animal to construct a space suitable for habitation or temporary refuge, or as a byproduct of locomotion. Burrows provide a form of shelter against predation and exposure to the elements, and can be found in nearly every biome and among various biological interactions. Many animal species are known to form burrows. These species range from small invertebrates, such as Corophium arenarium, to very large vertebrate species such as the polar bear. Burrows can be constructed into a wide variety of substrates and can range in complexity from a simple tube a few centimeters long to a complex network of interconnecting tunnels and chambers hundreds or thousands of meters in total length; an example of the latter level of complexity, a well-developed burrow, would be a rabbit warren. Vertebrate burrows A large variety of vertebrates construct or use burrows in many types of substrate; burrows can range widely in complexity. Some examples of vertebrate burrowing animals include a number of mammals, amphibians, fish (dragonet and lungfish), reptiles, and birds (including small dinosaurs). Mammals are perhaps the best known burrowers. Insectivores such as the mole, and rodents such as the gopher, great gerbil and groundhog, are often found to form burrows. Some other mammals that are known to burrow are the platypus, pangolin, pygmy rabbit, armadillo, rat and weasel. The rabbit, a member of the family Lagomorpha, is a well-known burrower. Some species such as the groundhog can construct burrows that occupy a full cubic metre, displacing about of dirt. There is evidence that rodents may construct the most complex burrows of all vertebrate burrowing species. For example, great gerbils live in family groups in extensive burrows, which can be seen on satellite images. Even the unoccupied burrows can remain visible in the landscape for years. The burrows are distributed regularly, although th
https://en.wikipedia.org/wiki/Nuclear%20reaction%20analysis
Nuclear reaction analysis (NRA) is a method of nuclear spectroscopy used in materials science to obtain concentration vs. depth distributions for certain target chemical elements in a solid thin film. Mechanism of NRA If irradiated with select projectile nuclei at kinetic energies Ekin, target solid thin-film chemical elements can undergo a nuclear reaction under resonance conditions at a sharply defined resonance energy. The reaction product is usually a nucleus in an excited state which immediately decays, emitting ionizing radiation. To obtain depth information, the initial kinetic energy of the projectile nucleus (which has to exceed the resonance energy) and its stopping power (energy loss per distance traveled) in the sample have to be known. To contribute to the nuclear reaction the projectile nuclei have to slow down in the sample to reach the resonance energy. Thus each initial kinetic energy corresponds to a depth in the sample where the reaction occurs (the higher the energy, the deeper the reaction). NRA profiling of hydrogen For example, a commonly used reaction to profile hydrogen with an energetic 15N ion beam is 15N + 1H → 12C + α + γ (4.43 MeV) with a sharp resonance in the reaction cross section, only 1.8 keV wide, at 6.385 MeV. Since the incident 15N ion loses energy along its trajectory in the material it must have an energy higher than the resonance energy to induce the nuclear reaction with hydrogen nuclei deeper in the target. This reaction is usually written 1H(15N,αγ)12C. It is inelastic because the Q-value is not zero (in this case it is 4.965 MeV). Rutherford backscattering (RBS) reactions are elastic (Q = 0), with the interaction (scattering) cross-section σ given by the famous formula derived by Lord Rutherford in 1911. But non-Rutherford cross-sections (so-called EBS, elastic backscattering spectrometry) can also be resonant: for example, the 16O(α,α)16O reaction has a strong and very useful resonance at 3038.1 ± 1.3 keV.
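A back-of-the-envelope sketch in Python of the depth conversion described above: the probed depth is the energy surplus above the resonance divided by the stopping power (the stopping-power value below is assumed purely for illustration and is material dependent):

# Depth probed in 15N resonance profiling: depth = (E_beam - E_res) / (dE/dx)
E_RES_KEV = 6385.0          # resonance energy of 1H(15N, alpha gamma)12C, keV

def probed_depth_nm(e_beam_kev: float, stopping_kev_per_nm: float = 1.5) -> float:
    """Convert beam energy to probed depth; 1.5 keV/nm is an assumed value."""
    return (e_beam_kev - E_RES_KEV) / stopping_kev_per_nm

for e in (6400.0, 6500.0, 6700.0):   # raising the beam energy probes deeper
    print(f"E = {e} keV -> depth ~ {probed_depth_nm(e):.0f} nm")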
https://en.wikipedia.org/wiki/Dirichlet%20beta%20function
In mathematics, the Dirichlet beta function (also known as the Catalan beta function) is a special function, closely related to the Riemann zeta function. It is a particular Dirichlet L-function, the L-function for the alternating character of period four. Definition The Dirichlet beta function is defined as β(s) = Σn≥0 (−1)n/(2n+1)s or, equivalently, β(s) = (1/Γ(s)) ∫0..∞ xs−1e−x/(1 + e−2x) dx. In each case, it is assumed that Re(s) > 0. Alternatively, the following definition, in terms of the Hurwitz zeta function, is valid in the whole complex s-plane: β(s) = 4−s (ζ(s, 1/4) − ζ(s, 3/4)). Another equivalent definition, in terms of the Lerch transcendent, is: β(s) = 2−s Φ(−1, s, 1/2), which is once again valid for all complex values of s. The Dirichlet beta function can also be written in terms of the polylogarithm function: β(s) = (i/2)(Lis(−i) − Lis(i)). Also the series representation of the Dirichlet beta function can be formed in terms of the polygamma function: β(s) = ((−1)s/(4s (s−1)!)) (ψ(s−1)(1/4) − ψ(s−1)(3/4)), but this formula is only valid at positive integer values of s. Euler product formula It is also the simplest example of a series non-directly related to ζ(s) which can also be factorized as an Euler product, thus leading to the idea of Dirichlet character defining the exact set of Dirichlet series having a factorization over the prime numbers. At least for Re(s) ≥ 1: β(s) = Π p≡1 mod 4 (1 − p−s)−1 · Π p≡3 mod 4 (1 + p−s)−1, where the first product runs over the primes of the form 4n+1 (5, 13, 17, ...) and the second over the primes of the form 4n+3 (3, 7, 11, ...). This can be written compactly as β(s) = Π p odd prime (1 − (−1)(p−1)/2 p−s)−1. Functional equation The functional equation extends the beta function to the left side of the complex plane Re(s) ≤ 0. It is given by β(1 − s) = (π/2)−s sin(πs/2) Γ(s) β(s), where Γ(s) is the gamma function. It was conjectured by Euler in 1749 and proved by Malmsten in 1842 (see Blagouchine, 2014). Special values Some special values include: β(0) = 1/2, β(1) = π/4, β(2) = G, where G represents Catalan's constant, β(3) = π3/32, and β(4) = (ψ(3)(1/4) − 8π4)/768, where ψ(3) in the above is an example of the polygamma function. More generally, β(−k) = Ek/2 for non-negative integers k, where Ek is the Euler number and Ek = 0 for odd k; hence, the function vanishes for all odd negative integral values of the argument. For every positive integer k: β(2k+1) = ((−1)k E2k/(2(2k)!)) (π/2)2k+1, where E2k is the Euler number, whose absolute value is the Euler zigzag number. Also it was derived by Malmsten in 1842 (see Blagouchine, 2014) that β′(1) = (π/4)(γ + 2 ln 2 + 3 ln π − 4 ln Γ(1/4)). There are zeros at -1; -3; -5; -7 etc. See
https://en.wikipedia.org/wiki/Phosgene%20oxime
Phosgene oxime, or CX, is an organic compound with the formula Cl2CNOH. It is a potent chemical weapon, specifically a nettle agent, which is a type of blister agent. The compound itself is a colorless solid, but impure samples are often yellowish liquids. It has a strong, disagreeable and irritating odor. It is used as a reagent in organic chemistry. Preparation and reactions Phosgene oxime can be prepared by reduction of chloropicrin using a combination of tin metal and hydrochloric acid as the source of the active hydrogen reducing agent: The observation of a transient violet color in the reaction suggests intermediate formation of trichloronitrosomethane (Cl3CNO). Early preparations, using stannous chloride as the reductant, also started with chloropicrin. The compound is electrophilic and thus sensitive to nucleophiles, including bases, which destroy it: Phosgene oxime has been used to prepare heterocycles that contain N-O bonds, such as isoxazoles. Dehydrohalogenation upon contact with mercuric oxide generates cyanoformyl chloride, a reactive nitrile oxide: Cl2CNOH → ClCNO + HCl Toxicity Phosgene oxime is classified as a vesicant even though it does not produce blisters. It is toxic by inhalation, ingestion, or skin contact. The effects of the poisoning occur almost immediately. No antidote for phosgene oxime poisoning is known. Generally, any treatment is supportive. Typical physical symptoms of CX exposure are as follows: Skin: Blanching surrounded by an erythematous ring can be observed within 30 seconds of exposure. A wheal develops on exposed skin within 30 minutes. The original blanched area acquires a brown pigmentation by 24 hours. An eschar forms in the pigmented area by 1 week and sloughs after approximately 3 weeks. Initially, the effects of CX can easily be misidentified as mustard gas exposure. However, the onset of skin irritation resulting from CX exposure is a great deal faster than mustard gas, which typically takes several hours o
https://en.wikipedia.org/wiki/Labyrinth%3A%20The%20Computer%20Game
Labyrinth: The Computer Game is a 1986 graphic adventure game developed by Lucasfilm Games and published by Activision. Based on the fantasy film Labyrinth, it tasks the player with navigating a maze while solving puzzles and evading dangers. The player's goal is to find and defeat the main antagonist, Jareth, within 13 real-time hours. Unlike other adventure games of the period, Labyrinth does not feature a command-line interface. Instead, the player uses two scrolling "word wheel" menus on the screen to construct basic sentences. Labyrinth was the first adventure game created by Lucasfilm. The project was led by designer David Fox, who invented its word wheels to avoid the text parsers and syntax guessing typical of text-based adventure games. Early in development, the team collaborated with author Douglas Adams in a week-long series of brainstorming sessions, which inspired much of the final product. Labyrinth received positive reviews and, in the United States, was a bigger commercial success than the film upon which it was based. Its design influenced Lucasfilm's subsequent adventure title, the critically acclaimed Maniac Mansion. This game is entirely different from another game based on the same movie entitled "Labyrinth: Maō no Meikyū" ("Maze of the Goblin King"), released exclusively in Japan for the Famicom and MSX in 1987, developed by Atlus and published by Tokuma Shoten. Overview Labyrinth: The Computer Game is a graphic adventure game in which the player maneuvers a character through a maze while solving puzzles and evading dangers. It is an adaptation of the 1986 film Labyrinth, many of whose events and characters are reproduced in the game. However, it does not follow the plot of the film. At the beginning, the player enters their name, sex and favorite color: the last two fields determine the appearance of the player character. Afterward, a short text-based adventure sequence unfolds, wherein the player enters a movie theater to watch the film L
https://en.wikipedia.org/wiki/Wilderness-acquired%20diarrhea
Wilderness-acquired diarrhea is a variety of traveler's diarrhea in which backpackers and other outdoor enthusiasts are affected. Potential sources are contaminated food or water, or "hand-to-mouth", directly from another person who is infected. Cases generally resolve spontaneously, with or without treatment, and the cause is typically unknown. The National Outdoor Leadership School has recorded about one incident per 5,000 person-field days by following strict protocols on hygiene and water treatment. More limited, separate studies have presented highly varied estimated rates of affliction that range from 3 percent to 74 percent of wilderness visitors. One survey found that long-distance Appalachian Trail hikers reported diarrhea as their most common illness. Based on reviews of epidemiologic data and literature, some researchers believe that the risks have been over-stated and are poorly understood by the public. Symptoms and signs The average incubation periods for giardiasis and cryptosporidiosis are each 7 days. Certain other bacterial and viral agents have shorter incubation periods, although hepatitis may take weeks to manifest itself. The onset usually occurs within the first week of return from the field, but may also occur at any time while hiking. Most cases begin abruptly and usually result in increased frequency, volume, and weight of stool. Typically, a hiker experiences at least four to five loose or watery bowel movements each day. Other commonly associated symptoms are nausea, vomiting, abdominal cramping, bloating, low fever, urgency, and malaise, and usually the appetite is affected. The condition is much more serious if there is blood or mucus in stools, abdominal pain, or high fever. Dehydration is a possibility. Life-threatening illness resulting from WAD is extremely rare but can occur in people with weakened immune systems. Some people may be carriers and not exhibit symptoms. Causes Infectious diarrhea acquired in the wilderness is ca
https://en.wikipedia.org/wiki/Term%20algebra
In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra. From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category. A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them. An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand. Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration. Universal algebra A type τ is a set of function symbols, with each having an associated arity (i.e. number of inputs). For any non-negative integer n, let τn denote the function symbols in τ of arity n. A constant is a function symbol of arity 0. Let τ be a type, and let X be a non-empty set of symbols, representing the variable symbols. (For simplicity, assume τ and X are disjoint.) Then the set of terms of type τ ove
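A small Python sketch of these definitions, with an invented signature; it represents terms as nested tuples and enumerates the ground terms (the Herbrand universe) up to a bounded construction depth:

from itertools import product

# Invented signature: each function symbol maps to its arity;
# the arity-0 symbols are the constants.
ARITY = {"zero": 0, "succ": 1, "plus": 2}

def ground_terms(max_depth: int):
    """Enumerate the Herbrand universe (variable-free terms) up to max_depth."""
    terms = {sym for sym, n in ARITY.items() if n == 0}   # depth-1: constants
    for _ in range(max_depth - 1):
        new = set(terms)
        for sym, n in ARITY.items():
            if n > 0:
                for args in product(sorted(terms, key=repr), repeat=n):
                    new.add((sym,) + args)                # free construction
        terms = new
    return terms

for term in sorted(ground_terms(2), key=repr):
    print(term)
# e.g. 'zero', ('succ', 'zero'), ('plus', 'zero', 'zero')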