https://en.wikipedia.org/wiki/Q-matrix
In mathematics, a Q-matrix is a square matrix whose associated linear complementarity problem LCP(M,q) has a solution for every vector q. Properties M is a Q-matrix if there exists d > 0 such that LCP(M,0) and LCP(M,d) have a unique solution. Any P-matrix is a Q-matrix. Conversely, if a matrix is a Z-matrix and a Q-matrix, then it is also a P-matrix. See also P-matrix Z-matrix
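The defining property can be checked directly for small matrices. The sketch below is an illustrative brute-force LCP solver (exponential in n, names our own): for each complementary index set it solves the corresponding subsystem and checks nonnegativity.

```python
# Brute-force solver for the linear complementarity problem LCP(M, q):
# find z >= 0 with w = M z + q >= 0 and z . w = 0.
# Illustrative sketch only (exponential in n); not for production use.
from itertools import combinations
import numpy as np

def solve_lcp(M, q, tol=1e-9):
    n = len(q)
    for k in range(n + 1):
        for alpha in combinations(range(n), k):
            a = list(alpha)
            z = np.zeros(n)
            if a:
                try:
                    # On the active set, w_alpha = 0, so M_aa z_a = -q_a.
                    z[a] = np.linalg.solve(M[np.ix_(a, a)], -q[a])
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if (z >= -tol).all() and (w >= -tol).all():
                return z
    return None  # no solution for this q, so M is not a Q-matrix

# The identity matrix is a P-matrix, hence a Q-matrix:
M = np.eye(2)
print(solve_lcp(M, np.array([-3.0, 5.0])))  # -> [3. 0.]
```

A matrix fails to be a Q-matrix exactly when `solve_lcp` returns `None` for some vector q.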
https://en.wikipedia.org/wiki/Functional%20integration%20%28neurobiology%29
Functional integration is the study of how brain regions work together to process information and effect responses. Though functional integration frequently relies on anatomic knowledge of the connections between brain areas, the emphasis is on how large clusters of neurons – numbering in the thousands or millions – fire together under various stimuli. The large datasets required for such a whole-scale picture of brain function have motivated the development of several novel and general methods for the statistical analysis of interdependence, such as dynamic causal modelling and statistical linear parametric mapping. These datasets are typically gathered in human subjects by non-invasive methods such as EEG/MEG, fMRI, or PET. The results can be of clinical value by helping to identify the regions responsible for psychiatric disorders, as well as to assess how different activities or lifestyles affect the functioning of the brain. Imaging techniques A study's choice of imaging modality depends on the desired spatial and temporal resolution. fMRI and PET offer relatively high spatial resolution, with voxel dimensions on the order of a few millimeters, but their relatively low sampling rate hinders the observation of rapid and transient interactions between distant regions of the brain. These temporal limitations are overcome by MEG, but at the cost of only detecting signals from much larger clusters of neurons. fMRI Functional magnetic resonance imaging (fMRI) is a form of MRI that is most frequently used to take advantage of a difference in magnetism between oxy- and deoxyhemoglobin to assess blood flow to different parts of the brain. Typical sampling rates for fMRI images are in the tenths of seconds. MEG Magnetoencephalography (MEG) is an imaging modality that uses very sensitive magnetometers to measure the magnetic fields resulting from ionic currents flowing through neurons in the brain. High-quality MEG machines allow for sub-millisecond sampling rates.
https://en.wikipedia.org/wiki/Micropropagation
Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods. Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction, or plants for which micropropagation is the cheaper means of propagation (e.g. orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s. Steps In short, the steps of micropropagation can be divided into four stages: Selection of mother plant Multiplication Rooting and acclimatizing Transfer new plant to soil Selection of mother plant Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant under sterile conditions. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins; the tissue used depends on the plant and may include stem tips, anthers, petals, pollen and other plant tissues. The explant material is then surface-sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing macro- and micronutrients, water, sucrose as an energy source, and one or more plant growth regulators (plant hormones). Usually the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth. Some plants are easily grown on simple media, but others require more complicated media f
https://en.wikipedia.org/wiki/Proxy%20re-encryption
Proxy re-encryption (PRE) schemes are cryptosystems which allow third parties (proxies) to alter a ciphertext which has been encrypted for one party, so that it may be decrypted by another. Examples of use Proxy re-encryption is generally used when one party, say Bob, wants to reveal the contents of messages sent to him and encrypted with his public key to a third party, Charlie, without revealing his private key to Charlie. Bob does not want the proxy to be able to read the contents of his messages. Bob could designate a proxy to re-encrypt one of his messages that is to be sent to Charlie. This generates a new key that Charlie can use to decrypt the message. Now if Bob sends Charlie a message that was encrypted under Bob's key, the proxy will alter the message, allowing Charlie to decrypt it. This method allows for a number of applications such as e-mail forwarding, law-enforcement monitoring, and content distribution. A weaker re-encryption scheme is one in which the proxy possesses both parties' keys simultaneously: one key decrypts the ciphertext to recover the plaintext, while the other re-encrypts the plaintext for the second party. Since the goal of many proxy re-encryption schemes is to avoid revealing either of the keys or the underlying plaintext to the proxy, this method is not ideal. Defining functions Proxy re-encryption schemes are similar to traditional symmetric or asymmetric encryption schemes, with the addition of two functions: Delegation – allows a message recipient (keyholder) to generate a re-encryption key based on his secret key and the key of the delegated user. This re-encryption key is used by the proxy as input to the re-encryption function, which is executed by the proxy to translate ciphertexts to the delegated user's key. Asymmetric proxy re-encryption schemes come in bi-directional and uni-directional varieties. In a bi-directional scheme, the re-encryption scheme is reversible—that is, the re-encryption key can be used to translate messages from Bob to Charlie, as well as from Char
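A concrete bidirectional construction is the classic Blaze–Bleumer–Strauss (BBS) scheme over ElGamal, where the re-encryption key is the ratio of the two secret exponents. The sketch below uses tiny toy parameters (deliberately insecure) and our own function names:

```python
# Toy sketch of the bidirectional BBS proxy re-encryption scheme over
# ElGamal. Parameters are tiny demo values, NOT secure.
import secrets

p = 23          # group modulus (p = 2q + 1)
q = 11          # order of the subgroup generated by g
g = 4           # generator of the order-q subgroup mod p

def keygen():
    sk = secrets.randbelow(q - 1) + 1        # secret exponent in [1, q-1]
    return sk, pow(g, sk, p)                 # (sk, pk = g^sk mod p)

def encrypt(pk, m):
    k = secrets.randbelow(q - 1) + 1
    return (m * pow(g, k, p)) % p, pow(pk, k, p)   # (m*g^k, pk^k)

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)          # (g^{a k})^{1/a} = g^k
    return (c1 * pow(gk, -1, p)) % p

def rekey(sk_from, sk_to):
    return (sk_to * pow(sk_from, -1, q)) % q # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                # (g^{a k})^{b/a} = g^{b k}

a, pk_a = keygen()   # Bob
b, pk_b = keygen()   # Charlie
ct = encrypt(pk_a, 9)             # message encrypted under Bob's key
ct2 = reencrypt(rekey(a, b), ct)  # proxy translates the ciphertext
print(decrypt(b, ct2))            # -> 9, recovered with Charlie's own key
```

Note the bidirectionality: from rk = b/a the proxy (with either party's cooperation) could also derive a/b, translating ciphertexts in the reverse direction.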
https://en.wikipedia.org/wiki/Velamen
Velamen or velamen radicum is a spongy, multiple epidermis that covers the roots of some epiphytic or semi-epiphytic plants, such as orchid and Clivia species. The velamen of an orchid is the white or gray covering of aerial roots when dry; it is usually more green when wet, as the underlying photosynthetic structures show through. It is many cell layers thick and capable of absorbing atmospheric moisture and nutrients, but its main function may lie in protecting the underlying cells against damaging UV rays (Chomicki et al., 2015). Often, the roots of orchids are associated with symbiotic fungi or bacteria; the latter may fix nutrients from the air. This functionality allows the orchid to exist in locations that provide a reproductive or vegetative advantage, such as improved exposure or reduced competition from other plant species. The velamen also serves a mechanical function, protecting the vascular tissues in the root cortex, shielding the root from transpirational water loss, and, in many cases, adhering the plant to the substrate. The typical orchid root has a stele of comparatively small diameter. It is surrounded by a cortex which is further enveloped by a highly specialized exodermis, most of whose cells at maturity do not contain protoplasm. A few cells, however, are living and allow the passage of water through them. The exodermis is surrounded by velamen, consisting of one to several layers of cells, which can develop root hairs under proper environmental conditions. The velamen arises from the root tip by division of a special tissue. The dead cells of velamen diffuse light, thus giving it a grey appearance, except at the tips, where the chlorophyll becomes visible. Upon absorbing water, the dead cells become transparent, and the whole velamen tissue then appears green.
https://en.wikipedia.org/wiki/Standard%20cell
In semiconductor design, standard-cell methodology is a method of designing application-specific integrated circuits (ASICs) with mostly digital-logic features. Standard-cell methodology is an example of design abstraction, whereby a low-level very-large-scale integration (VLSI) layout is encapsulated into an abstract logic representation (such as a NAND gate). Cell-based methodology – the general class to which standard cells belong – makes it possible for one designer to focus on the high-level (logical function) aspect of digital design, while another designer focuses on the implementation (physical) aspect. Along with semiconductor manufacturing advances, standard-cell methodology has helped designers scale ASICs from comparatively simple single-function ICs (of several thousand gates) to complex multi-million-gate system-on-a-chip (SoC) devices. Construction of a standard cell A standard cell is a group of transistor and interconnect structures that provides a boolean logic function (e.g., AND, OR, XOR, XNOR, inverters) or a storage function (flipflop or latch). The simplest cells are direct representations of the elemental NAND, NOR, and XOR boolean functions, although cells of much greater complexity are commonly used (such as a 2-bit full-adder, or a muxed D-input flipflop). The cell's boolean logic function is called its logical view: functional behavior is captured in the form of a truth table or Boolean algebra equation (for combinational logic), or a state transition table (for sequential logic). Usually, the initial design of a standard cell is developed at the transistor level, in the form of a transistor netlist or schematic view. The netlist is a nodal description of transistors, of their connections to each other, and of their terminals (ports) to the external environment. A schematic view may be generated with a number of different Computer Aided Design (CAD) or Electronic Design Automation (EDA) programs that provide a Graphical User Interface (
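The "logical view" described above, a cell's behavior captured as a truth table, can be sketched in a few lines. The cell library and names below are illustrative, not a real EDA format:

```python
# A minimal sketch of a standard cell's "logical view": each cell's
# boolean equation, from which a truth table is derived.
# The cell names and library structure are our own, not a real EDA format.
from itertools import product

cells = {
    "NAND2": lambda a, b: not (a and b),
    "NOR2":  lambda a, b: not (a or b),
    "XOR2":  lambda a, b: a != b,
}

def truth_table(cell):
    """Enumerate all input combinations of a 2-input cell."""
    f = cells[cell]
    return {(a, b): int(f(a, b)) for a, b in product([0, 1], repeat=2)}

print(truth_table("NAND2"))
# -> {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

In real flows this view sits alongside the transistor-level netlist and the physical layout, and equivalence between the views is checked by dedicated EDA tools.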
https://en.wikipedia.org/wiki/VideoNow
The VideoNow is a portable video player produced by Hasbro and released by their subsidiary Tiger Electronics in 2003, and was considered the most popular product in Tiger's line of Now consumer products. The systems use discs called PVDs (Personal Video Discs), which can store about 30 minutes of video, the length of an average TV show with commercials (a typical episode runs about 20–23 minutes without them), so each PVD contains only one episode, with trailers at the end filling the leftover time on most PVDs, including Nickelodeon PVDs. Video data is stored on the left audio channel with audio on the right channel, making stereo sound impossible on the system, which plays only in black and white. The video plays at 15 fps. Most of the shows were from Nickelodeon, such as SpongeBob SquarePants and The Fairly OddParents; later, shows from Cartoon Network were released, such as Ed, Edd n Eddy and Dexter's Laboratory, while Disney mostly released episodes of America's Funniest Home Videos and one Hannah Montana music video. A small number of movies were also released for the system, but due to the limited space on a PVD, each had to be spread across at least three discs, depending on the length of the film. Hasbro also produced editing software for creating custom VideoNow Color PVDs called the VideoNow Media Wizard in 2005, which came with blank PVD media. A number of unofficial solutions are available for creating the oddly formatted VideoNow files, including a plug-in for the popular video processing program VirtualDub. The files can then be burned to a CD-R using standard CD burning software, and the disc cut down to the required size. As the VideoNow Color does not accept standard 8 cm mini-CDs, some creative users have resorted to cutting down standard 12 cm CD-R discs, though not without problems. Hasbro made recordable PVDs available without the Media Wizard from their online store. However
https://en.wikipedia.org/wiki/Serotype
A serotype or serovar is a distinct variation within a species of bacteria or virus or among immune cells of different individuals. These microorganisms, viruses, or cells are classified together based on their surface antigens, allowing the epidemiologic classification of organisms to the subspecies level. A group of serovars with common antigens is called a serogroup or sometimes serocomplex. Serotyping often plays an essential role in determining species and subspecies. The Salmonella genus of bacteria, for example, has been determined to have over 2600 serotypes. Vibrio cholerae, the species of bacteria that causes cholera, has over 200 serotypes, based on cell antigens. Only two of them have been observed to produce the potent enterotoxin that results in cholera: O1 and O139. Serotypes were discovered by the American microbiologist Rebecca Lancefield in 1933. Role in organ transplantation The immune system is capable of discerning a cell as being 'self' or 'non-self' according to that cell's serotype. In humans, that serotype is largely determined by human leukocyte antigen (HLA), the human version of the major histocompatibility complex. Cells determined to be non-self are usually recognized by the immune system as foreign, causing an immune response, such as hemagglutination. Serotypes differ widely between individuals; therefore, if cells from one human (or animal) are introduced into another random human, those cells are often determined to be non-self because they do not match the self-serotype. For this reason, transplants between genetically non-identical humans often induce a problematic immune response in the recipient, leading to transplant rejection. In some situations, this effect can be reduced by serotyping both recipient and potential donors to determine the closest HLA match. Human leukocyte antigens Serotyping of Salmonella The Kauffman–White classification scheme is the basis for naming the manifold serovars of Salmonella. To date, more
https://en.wikipedia.org/wiki/Wireless%20Transport%20Layer%20Security
Wireless Transport Layer Security (WTLS) is a security protocol, part of the Wireless Application Protocol (WAP) stack. It sits between the WTP and WDP layers in the WAP communications stack. Overview WTLS is derived from TLS. WTLS uses similar semantics adapted for a low-bandwidth mobile device. The main changes are: Compressed data structures — Where possible, packet sizes are reduced by using bit-fields, discarding redundancy and truncating some cryptographic elements. New certificate format — WTLS defines a compressed certificate format. This broadly follows the X.509 v3 certificate structure, but uses smaller data structures. Packet-based design — TLS is designed for use over a data stream. WTLS adapts that design to be more appropriate on a packet-based network. A significant amount of the design is based on the requirement that it be possible to use a packet network such as SMS as a data transport. WTLS has been superseded in the WAP (Wireless Application Protocol) 2.0 standard by the End-to-end Transport Layer Security Specification. Security WTLS uses cryptographic algorithms and, in common with TLS, allows negotiation of cryptographic suites between client and server. Algorithms An incomplete list: Key Exchange and Signature RSA Elliptic Curve Cryptography (ECC) Symmetric Encryption DES Triple DES RC5 Message Digest MD5 SHA1 Security criticisms Encryption/Decryption at the gateway — in the WAP architecture the content is typically stored on the server as uncompressed WML (an XML DTD). That content is retrieved by the gateway using HTTP and compressed into WBXML. To perform that compression, the gateway must be able to handle the WML in cleartext, so even if there is encryption between the client and the gateway (using WTLS) and between the gateway and the originating server (using HTTPS), the gateway acts as a man-in-the-middle. This gateway architecture serves a number of purposes: transcoding between HTML and WML; content provide
https://en.wikipedia.org/wiki/Glucose-6-phosphate%20isomerase
Glucose-6-phosphate isomerase (GPI), alternatively known as phosphoglucose isomerase/phosphoglucoisomerase (PGI) or phosphohexose isomerase (PHI), is an enzyme (EC that in humans is encoded by the GPI gene on chromosome 19. This gene encodes a member of the glucose phosphate isomerase protein family. The encoded protein has been identified as a moonlighting protein based on its ability to perform mechanistically distinct functions. In the cytoplasm, the gene product functions as a glycolytic enzyme (glucose-6-phosphate isomerase) that interconverts glucose-6-phosphate (G6P) and fructose-6-phosphate (F6P). Extracellularly, the encoded protein (also referred to as neuroleukin) functions as a neurotrophic factor that promotes survival of skeletal motor neurons and sensory neurons, and as a lymphokine that induces immunoglobulin secretion. The encoded protein is also referred to as autocrine motility factor (AMF) based on an additional function as a tumor-secreted cytokine and angiogenic factor. Defects in this gene are the cause of nonspherocytic hemolytic anemia, and a severe enzyme deficiency can be associated with hydrops fetalis, immediate neonatal death and neurological impairment. Alternative splicing results in multiple transcript variants. [provided by RefSeq, Jan 2014] Structure Functional GPI is a 64-kDa dimer composed of two identical monomers. The two monomers interact notably through the two protrusions in a hugging embrace. The active site of each monomer is formed by a cleft between the two domains and the dimer interface. GPI monomers are made of two domains, one made of two separate segments called the large domain and the other made of the segment in between called the small domain. The two domains are each αβα sandwiches, with the small domain containing a five-stranded β-sheet surrounded by α-helices while the large domain has a six-stranded β-sheet. The large domain, located at the N-terminal, and the C-terminal of each monomer also contain "arm
https://en.wikipedia.org/wiki/Dual%20superconductor%20model
In the theory of quantum chromodynamics, dual superconductor models attempt to explain confinement of quarks in terms of an electromagnetic dual theory of superconductivity. Overview In an electromagnetic dual theory the roles of electric and magnetic fields are interchanged. The BCS theory of superconductivity explains superconductivity as the result of the condensation of electric charges to Cooper pairs. In a dual superconductor an analogous effect occurs through the condensation of magnetic charges (also called magnetic monopoles). In ordinary electromagnetic theory, no monopoles have been shown to exist. However, in quantum chromodynamics — the theory of color charge which explains the strong interaction between quarks — the color charges can be viewed as (non-abelian) analogues of electric charges and corresponding magnetic monopoles are known to exist. Dual superconductor models posit that condensation of these magnetic monopoles in a superconductive state explains color confinement — the phenomenon that only neutrally colored bound states are observed at low energies. Qualitatively, confinement in dual superconductor models can be understood as a result of the dual to the Meissner effect. The Meissner effect says that a superconducting metal will try to expel magnetic field lines from its interior. If a magnetic field is forced to run through the superconductor, the field lines are compressed in magnetic flux "tubes" known as fluxons. In a dual superconductor the roles of magnetic and electric fields are exchanged and the Meissner effect tries to expel electric field lines. Quarks and antiquarks carry opposite color charges, and for a quark–antiquark pair 'electric' field lines run from the quark to the antiquark. If the quark–antiquark pair are immersed in a dual superconductor, then the electric field lines get compressed to a flux tube. The energy associated to the tube is proportional to its length, and the potential energy of the quark–antiquark is p
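The linear growth of the flux tube's energy with separation is conventionally summarized as a confining potential. In standard notation (σ here denotes the string tension of the flux tube, a symbol not introduced in the excerpt above):

```latex
V(r) \approx \sigma r \qquad (r \to \infty)
```

Because separating the quark and antiquark by a distance r costs energy growing without bound, isolated color charges never appear at low energies, which is the confinement statement of the model.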
https://en.wikipedia.org/wiki/Keystone%20Kapers
Keystone Kapers is a platform game developed by Garry Kitchen for Activision and published for the Atari 2600 in April 1983. It was ported to the Atari 5200, Atari 8-bit family, ColecoVision, and, in 1984, MSX. Inspired by Mack Sennett's slapstick Keystone Cops series of silent films, the object of the game is for Officer Keystone Kelly to catch Harry Hooligan before he can escape from a department store. Gameplay The game takes place in a side view of a three-story department store and its roof. The store is eight times wider than the screen; reaching the edge of the screen flips to the next section of the store. A mini-map at the bottom of the screen provides an overall view of the store and characters. Officer Kelly begins the game in the lower right on the first floor. The joystick moves Kelly left and right. Vertical movement between floors is accomplished by escalators at the ends of the map and a central elevator. Harry Hooligan starts the game in the center of the second floor. He immediately begins running to the right to reach the elevator to the third floor, and continues moving up through the floors. If he succeeds, then he escapes. This trip takes 50 seconds, and a timer at the top of the screen counts down the remaining time. Kelly runs significantly faster than Hooligan and can normally catch him in that time in a straight run with no penalties. Kelly can use the elevator to get ahead of Hooligan, causing Hooligan to reverse direction and start back down through the store. Hooligan can jump down between levels at either end of the map, something Kelly cannot do because the escalators only go up for Kelly. Kelly has to use the elevator carefully, or risk being stuck on a higher floor while the timer runs out. Slowing Kelly's progress are obstacles like radios, beach balls, and shopping carts, which can be jumped over or ducked under by pushing up or down on the joystick. Hitting any of these objects causes a nine-second penalty. In later levels, flying toy bi
https://en.wikipedia.org/wiki/K-complex
A K-complex is a waveform that may be seen on an electroencephalogram (EEG). It occurs during stage 2 NREM sleep. It is the "largest event in healthy human EEG". K-complexes are more frequent in the first sleep cycles. They have two proposed functions: first, suppressing cortical arousal in response to stimuli that the sleeping brain judges not to signal danger, and second, aiding sleep-based memory consolidation. The K-complex was discovered in 1937 in the private laboratories of Alfred Lee Loomis. Neurophysiology A K-complex consists of a brief negative high-voltage peak, usually greater than 100 µV, followed by a slower positive complex at around 350–550 ms and a final negative peak at around 900 ms. K-complexes occur roughly every 1.0–1.7 minutes and are often followed by bursts of sleep spindles. They occur spontaneously but also in response to external stimuli, such as sounds or touches on the skin, and internal ones, such as inspiratory interruptions. They are generated in widespread cortical locations, though they tend to predominate over the frontal parts of the brain. Both K-complex and delta wave activity in stage 2 sleep create slow-wave (0.8 Hz) and delta (1.6–4.0 Hz) oscillations. However, their topographical distribution is different, and the delta power of K-complexes is higher. They are created by the occurrence in widespread cortical areas of outward dendritic currents from the middle (III) to the upper (I) layers of the cerebral cortex. This is accompanied by a decrease in broadband EEG power, including gamma wave activity. This produces "down-states" of neuronal silence in which neural network activity is reduced. The activity of K-complexes is transferred to the thalamus, where it synchronizes the thalamocortical network during sleep, producing sleep oscillations such as spindles and delta waves. It has been observed that they are indeed identical in the "laminar distributions of transmembrane currents" to the slow waves of slow-wave sleep. K-com
https://en.wikipedia.org/wiki/Sleep%20spindle
Sleep spindles are bursts of neural oscillatory activity that are generated by interplay of the thalamic reticular nucleus (TRN) and other thalamic nuclei during stage 2 NREM sleep in a frequency range of ~11 to 16 Hz (usually 12–14 Hz) with a duration of 0.5 seconds or greater (usually 0.5–1.5 seconds). After generation as an interaction of the TRN neurons and thalamocortical cells, spindles are sustained and relayed to the cortex by thalamo-thalamic and thalamo-cortical feedback loops regulated by both GABAergic and NMDA-receptor-mediated glutamatergic neurotransmission. Sleep spindles have been reported (at face value) for all tested mammalian species. Considering animals in which sleep spindles were studied extensively (and thus excluding results misled by pseudo-spindles), they appear to have a conserved (across species) main frequency of roughly 9–16 Hz. Only in humans, rats and dogs, however, has a difference in the intrinsic frequency of frontal and posterior spindles been confirmed (spindles recorded over the posterior part of the scalp are of higher frequency, on average above 13 Hz). Research supports that spindles (sometimes referred to as "sigma bands" or "sigma waves") play an essential role in both sensory processing and long-term memory consolidation. Until recently, it was believed that each sleep spindle oscillation peaked at the same time throughout the neocortex. It was later determined that oscillations sweep across the neocortex in circular patterns, peaking in one area and then, a few milliseconds later, in an adjacent area. It has been suggested that this spindle organization allows neurons to communicate across cortices. The waves travel on the same time scale at which neurons communicate with each other. Although the function of sleep spindles is unclear, it is believed that they actively participate in the consolidation of overnight declarative memory through the reconsolidation process. The dens
https://en.wikipedia.org/wiki/Doujin%20soft
Doujin soft is software created by Japanese hobbyists or hobbyist groups (referred to as "circles"), more for fun than for profit. The term includes digital doujin games, which are essentially the Japanese equivalent of independent video games or fangames (the term "doujin game" also includes things like doujin-made board games and card games). Doujin soft is considered part of doujin katsudou, accounting for about 5% of all doujin works (as of 2015). Doujin soft began with microcomputers in Japan, and spread to platforms such as the MSX and X68000. Since the 1990s, however, it has primarily been made for Microsoft Windows. Most doujin soft sales occur at doujin conventions such as Comiket, with several conventions, such as Freedom Game (which further only allows games distributed for free) and Digital Games Expo, dealing exclusively with doujin soft or doujin games. There is also a growing number of specialized internet sites that sell doujin soft. Additionally, more doujin games have been sold in recent years as downloads on consoles and PC stores such as Steam, through publishers such as Mediascape picking them up. Digital doujin games Doujin video games, like doujin soft, began with microcomputers in Japan, such as the PC-98 and PC-88, and spread to platforms such as the MSX, FM Towns and X68000. From the 1990s to the 2000s, however, they were primarily exclusive to Microsoft Windows. In recent years, more doujin games have been released on mobile platforms and home consoles, as well as other operating systems like macOS and Linux. Though doujin games used to be primarily for home computers, more doujin games have been made available on gaming consoles in recent years. There are also doujin groups that develop software for retro consoles such as the Game Boy and Game Gear. Like fangames, doujin games frequently use characters from existing games, anime, or manga ("niji sousaku"). These unauthorized uses of characters are generally ignored and accepted by the copyright holders, an
https://en.wikipedia.org/wiki/Robustness%20principle
In computing, the robustness principle is a design guideline for software that states: "be conservative in what you do, be liberal in what you accept from others". It is often reworded as: "be conservative in what you send, be liberal in what you accept". The principle is also known as Postel's law, after Jon Postel, who used the wording in an early specification of TCP. In other words, programs that send messages to other machines (or to other programs on the same machine) should conform completely to the specifications, but programs that receive messages should accept non-conformant input as long as the meaning is clear. Among programmers, the principle is also known in the form: to produce compatible functions, be contravariant in the input type and covariant in the output type. Interpretation RFC 1122 (1989) expanded on Postel's principle by recommending that programmers "assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect". Protocols should allow for the addition of new codes for existing fields in future versions of protocols by accepting messages with unknown codes (possibly logging them). Programmers should avoid sending messages with "legal but obscure protocol features" that might expose deficiencies in receivers, and design their code "not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility". Criticism In 2001, Marshall Rose characterized several deployment problems when applying Postel's principle in the design of a new application protocol. For example, a defective implementation that sends non-conforming messages might be used only with implementations that tolerate those deviations from the specification until, possibly several years later, it is connected with a less tolerant application that rejects its messages. In such a situation, identifying the problem is often
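The send/accept asymmetry can be made concrete with a toy "key: value" header format (our own format, not a real protocol): the reader tolerates odd whitespace, case, and unknown keys, while the writer always emits one strict canonical form.

```python
# Postel's law in miniature: a liberal reader and a conservative writer
# for a toy "key: value" header format (illustrative, not a real protocol).
def parse_headers(text):
    """Liberal reader: accept messy but unambiguous input."""
    headers = {}
    for line in text.splitlines():
        if not line.strip():
            continue                      # tolerate blank lines
        key, sep, value = line.partition(":")
        if not sep:
            continue                      # skip junk lines we cannot parse
        headers[key.strip().lower()] = value.strip()
    return headers

def emit_headers(headers):
    """Conservative writer: always produce the canonical form."""
    return "".join(f"{k.lower()}: {v}\r\n" for k, v in sorted(headers.items()))

msg = "Content-Length :  42\n\nX-Unknown:ignored-but-kept\n"
h = parse_headers(msg)
print(emit_headers(h))
# prints "content-length: 42" then "x-unknown: ignored-but-kept"
```

Note that the reader keeps unknown keys rather than rejecting them, matching the RFC 1122 advice above on accepting messages with unknown codes, while the writer never relies on the reader's tolerance.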
https://en.wikipedia.org/wiki/S%20wave
In seismology and other areas involving elastic waves, S waves, secondary waves, or shear waves (sometimes called elastic S waves) are a type of elastic wave and are one of the two main types of elastic body waves, so named because they move through the body of an object, unlike surface waves. S waves are transverse waves, meaning that the direction of particle movement of an S wave is perpendicular to the direction of wave propagation, and the main restoring force comes from shear stress. Therefore, S waves cannot propagate in liquids with zero (or very low) viscosity; however, they may propagate in liquids with high viscosity. The name secondary wave comes from the fact that they are the second type of wave to be detected by an earthquake seismograph, after the compressional primary wave, or P wave, because S waves travel more slowly in solids. Unlike P waves, S waves cannot travel through the molten outer core of the Earth, and this causes a shadow zone for S waves opposite to their origin. They can still propagate through the solid inner core: when a P wave strikes the boundary of molten and solid cores at an oblique angle, S waves will form and propagate in the solid medium. When these S waves hit the boundary again at an oblique angle, they will in turn create P waves that propagate through the liquid medium. This property allows seismologists to determine some physical properties of the Earth's inner core. History In 1830, the mathematician Siméon Denis Poisson presented to the French Academy of Sciences an essay ("memoir") with a theory of the propagation of elastic waves in solids. In his memoir, he states that an earthquake would produce two different waves: one having a certain speed a and the other having a speed a/√3. At a sufficient distance from the source, when they can be considered plane waves in the region of interest, the first kind consists of expansions and compressions in the direction perpendicular to the wavefront (that is, parallel to the
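The claims above, that S waves arrive second and cannot cross an inviscid fluid, follow from the standard body-wave speeds for a homogeneous isotropic elastic medium. With λ and μ the Lamé parameters and ρ the density (symbols not defined in the excerpt above):

```latex
v_p = \sqrt{\frac{\lambda + 2\mu}{\rho}}, \qquad v_s = \sqrt{\frac{\mu}{\rho}}
```

Because λ + 2μ > μ in any stable solid, v_s < v_p, so the shear wave is always the second arrival; and since an inviscid fluid has shear modulus μ = 0, the shear-wave speed vanishes there, which is why S waves cannot propagate through the Earth's molten outer core.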
https://en.wikipedia.org/wiki/Hitting%20the%20wall
In endurance sports such as road cycling and long-distance running, hitting the wall or the bonk is a condition of sudden fatigue and loss of energy which is caused by the depletion of glycogen stores in the liver and muscles. Milder instances can be remedied by brief rest and the ingestion of food or drinks containing carbohydrates. Otherwise, it can be remedied by attaining second wind, either by resting for approximately 10 minutes or by slowing down considerably and then increasing speed slowly over a period of 10 minutes. Ten minutes is approximately the time that it takes for free fatty acids to sufficiently produce ATP in response to increased demand. During a marathon, for instance, runners typically hit the wall around kilometer 30 (mile 20). The condition can usually be avoided by ensuring that glycogen levels are high when the exercise begins, maintaining glucose levels during exercise by eating or drinking carbohydrate-rich substances, or by reducing exercise intensity. Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. The lack of glycogen causes a low ATP reservoir within the exercising muscle cells. Until second wind is achieved (increased ATP production primarily from free fatty acids), the symptoms of a low ATP reservoir in exercising muscle due to depleted glycogen include: muscle fatigue, muscle cramping, muscle pain (myalgia), an inappropriately rapid heart rate response to exercise (tachycardia), breathlessness (dyspnea) or rapid breathing (tachypnea), and an exaggerated cardiorespiratory response to exercise (tachycardia and dyspnea/tachypnea). The heart tries to compensate for the energy shortage by increasing heart rate to maximize delivery of oxygen and blood-borne fuels to the muscle cells for oxidative phosphorylation. Without muscle glycogen, it is important to get into second wind without going too fast, to
https://en.wikipedia.org/wiki/Society%20for%20Neuroscience
The Society for Neuroscience (SfN) is a professional society, headquartered in Washington, D.C., for basic scientists and physicians around the world whose research is focused on the study of the brain and nervous system. It is especially well known for its annual meeting, consistently one of the largest scientific conferences in the world. History SfN was founded in 1969 by Ralph W. Gerard and, at nearly 37,000 members, has grown to be the largest neuroscience society in the world. The stated mission of the society is to: Advance the understanding of the brain and the nervous system. Provide professional development activities, information, and educational resources. Promote public information and general education about science and neuroscience. Inform legislators and other policy makers about the implications of research for public policy, societal benefit, and continued scientific progress. Annual meeting The society holds an annual meeting that is attended by scientists and physicians from all around the world. The first annual meeting of the society was held in Washington, DC in 1971, and it was attended by 1,396 scientists. Subsequent meetings have been held annually in a variety of cities throughout the US, with the exception of the 1988 meeting, which was held in Canada. The 2022 meeting was held in San Diego, California. Publishing The Journal of Neuroscience was launched in 1981 and has consistently been a multidisciplinary journal publishing papers on a broad range of topics of general interest to those working on the nervous system. In addition, SfN publications offer breadth and depth into the rapidly developing field of neuroscience. eNeuro, SfN's open-access journal launched in 2014, publishes high-quality papers in all areas of neuroscience that increase understanding of the nervous system, including replication studies and negative results. SfN's digital member magazine, Neuroscience Quarterly, covers SfN news, programs, science, an
https://en.wikipedia.org/wiki/Time%20reversibility
A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed. A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process; in other words, the equations are invariant or symmetrical under a change in the sign of time. A stochastic process is reversible if the statistical properties of the process are the same as the statistical properties for time-reversed data from the same process. Mathematics In mathematics, a dynamical system is time-reversible if the forward evolution is one-to-one, so that for every state there exists a transformation (an involution) π which gives a one-to-one mapping between the time-reversed evolution of any one state and the forward-time evolution of another corresponding state, given by the operator equation: Any time-independent structures (e.g. critical points or attractors) which the dynamics give rise to must therefore either be self-symmetrical or have symmetrical images under the involution π. Physics In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. (T-symmetry). In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry. Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process. Note, however, that the fundamental laws that underlie the thermodynamic processes are all time-reversible (classical laws of motion and laws of electrodynamics), which means that
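The reversibility of classical dynamics under momentum reversal can be illustrated with a short numerical sketch. The example below uses a leapfrog integrator for a harmonic oscillator (an assumed example, not from the article); leapfrog is a time-symmetric scheme, so applying the involution pi (flipping the momentum) and integrating forward retraces the trajectory back to its start.

```python
def leapfrog(q, p, n, dt=0.01):
    """n leapfrog (velocity Verlet) steps for a unit-mass harmonic
    oscillator, H = p^2/2 + q^2/2, so the force is F(q) = -q."""
    for _ in range(n):
        p -= 0.5 * dt * q      # half kick
        q += dt * p            # drift
        p -= 0.5 * dt * q      # half kick
    return q, p

# Run forward, apply the involution pi (reverse the momentum),
# run forward again, and apply pi once more:
q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, 1000)
q2, p2 = leapfrog(q1, -p1, 1000)
q2, p2 = q2, -p2
# (q2, p2) recovers (q0, p0) up to floating-point rounding
```

The same round trip fails for a dissipative system (e.g. with a friction term), which is the numerical counterpart of thermodynamic irreversibility.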
https://en.wikipedia.org/wiki/Puzzle%20jewelry
A piece of puzzle jewelry is a puzzle which can be worn by a person as jewelry. These puzzles can be both fully mechanically functional and aesthetically pleasing as pieces of wearable jewelry. Examples of available puzzle jewelry The following list implies that a small version of the cited puzzle is available with suitable design and finish to be worn as jewelry. Puzzle ring, often made in Turkey having four interconnected rings See also Puzzle box
https://en.wikipedia.org/wiki/Puzzle%20ring
A puzzle ring is a jewelry ring made up of multiple interconnected bands, which is a type of mechanical puzzle most likely developed as an elaboration of the European gimmal ring. The puzzle ring is also sometimes called a "Turkish wedding ring" or "harem ring." According to popular legend, the ring would be given by the husband as a wedding ring, because if the wife removed it (presumably to commit adultery), the bands of the ring would fall apart, and she would be unable to reassemble it before its absence would be noticed. However, a puzzle ring can be easily removed without the bands falling apart. In Sweden, Norway and Finland, puzzle rings are often worn by military veterans (in Norway the rings are often called the "Lebanon ring" after service in the United Nations Interim Force in Lebanon, UNIFIL), where the number of rings corresponds to the number of tours made, starting at 4 rings for 1 tour (mostly for Sweden, not usually for Norwegian veterans). In Finland, one may wear the ring after serving for more than 6 months.
https://en.wikipedia.org/wiki/Substitution%20model
In biology, substitution models, also called models of DNA sequence evolution, are Markov models that describe changes over evolutionary time. These models describe evolutionary changes in macromolecules (e.g., DNA sequences) represented as sequences of symbols (A, C, G, and T in the case of DNA). Substitution models are used to calculate the likelihood of phylogenetic trees using multiple sequence alignment data. Thus, substitution models are central to maximum likelihood estimation of phylogeny as well as Bayesian inference in phylogeny. Estimates of evolutionary distances (numbers of substitutions that have occurred since a pair of sequences diverged from a common ancestor) are typically calculated using substitution models (evolutionary distances are used as input for distance methods such as neighbor joining). Substitution models are also central to phylogenetic invariants because they are necessary to predict site pattern frequencies given a tree topology. Substitution models are also necessary to simulate sequence data for a group of organisms related by a specific tree. Phylogenetic tree topologies and other parameters Phylogenetic tree topologies are often the parameter of interest; thus, branch lengths and any other parameters describing the substitution process are often viewed as nuisance parameters. However, biologists are sometimes interested in the other aspects of the model. For example, branch lengths are of interest, especially when those branch lengths are combined with information from the fossil record and a model to estimate the timeframe for evolution. Other model parameters have been used to gain insights into various aspects of the process of evolution. The Ka/Ks ratio (also called ω in codon substitution models) is a parameter of interest in many studies. The Ka/Ks ratio can be used to examine the action of natural selection on protein-coding regions; it provides information about the relative rates of nucleotide substitutions that change amino acids (non-
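As a concrete sketch of how a substitution model converts observed differences into an evolutionary distance, the snippet below implements the Jukes-Cantor (JC69) correction, the simplest substitution model; the example sequences are made up for illustration.

```python
from math import log

def p_distance(seq1, seq2):
    """Observed proportion of differing sites between two aligned,
    equal-length sequences."""
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

def jc69_distance(p):
    """Jukes-Cantor (JC69) evolutionary distance from the observed
    proportion p of differing sites. Valid for p < 0.75, the
    saturation limit of the model."""
    return -0.75 * log(1 - 4 * p / 3)

p = p_distance("GATTACA", "GACTACA")   # 1 difference out of 7 sites
d = jc69_distance(p)                   # corrected distance, always > p
```

The corrected distance exceeds the raw proportion of differences because the Markov model accounts for multiple substitutions hitting the same site.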
https://en.wikipedia.org/wiki/Bone%20marrow%20examination
Bone marrow examination refers to the pathologic analysis of samples of bone marrow obtained by bone marrow biopsy (often called trephine biopsy) and bone marrow aspiration. Bone marrow examination is used in the diagnosis of a number of conditions, including leukemia, multiple myeloma, lymphoma, anemia, and pancytopenia. The bone marrow produces the cellular elements of the blood, including platelets, red blood cells and white blood cells. While much information can be gleaned by testing the blood itself (drawn from a vein by phlebotomy), it is sometimes necessary to examine the source of the blood cells in the bone marrow to obtain more information on hematopoiesis; this is the role of bone marrow aspiration and biopsy. Components of the procedure Bone marrow samples can be obtained by aspiration and trephine biopsy. Sometimes, a bone marrow examination will include both an aspirate and a biopsy. The aspirate yields semi-liquid bone marrow, which can be examined by a pathologist under a light microscope and analyzed by flow cytometry, chromosome analysis, or polymerase chain reaction (PCR). Frequently, a trephine biopsy is also obtained, which yields a narrow, cylindrically shaped solid piece of bone marrow, 2 mm wide and 2 cm long (80 μL), which is examined microscopically (sometimes with the aid of immunohistochemistry) for cellularity and infiltrative processes. An aspiration, using a 20 mL syringe, yields approximately 300 μL of bone marrow. A volume greater than 300 μL is not recommended, since it may dilute the sample with peripheral blood. Aspiration does not always represent all cells since some such as lymphoma stick to the trabecula, and would thus be missed by a simple aspiration. Site of procedure Bone marrow aspiration and trephine biopsy are usually performed on the back of the hipbone, or posterior iliac crest. An aspirate can also be obtained from the sternum (breastbone). For the sternal aspirate, the patient lies on their back, with a p
https://en.wikipedia.org/wiki/Axial%20ratio
Axial ratio, for any structure or shape with two or more axes, is the ratio of the length (or magnitude) of those axes to each other - the longer axis divided by the shorter. In chemistry or materials science, the axial ratio (symbol P) is used to describe rigid rod-like molecules. It is defined as the length of the rod divided by the rod diameter. In physics, the axial ratio describes electromagnetic radiation with elliptical, or circular, polarization. The axial ratio is the ratio of the magnitudes of the major and minor axis defined by the electric field vector. See also Aspect ratio Degree of polarization
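A minimal sketch of the definition, including the decibel form commonly quoted for antenna polarization (the 20·log10 convention is an assumption of standard antenna practice, not stated in the article):

```python
from math import log10

def axial_ratio(a, b):
    """Axial ratio: the longer axis divided by the shorter (>= 1)."""
    long_axis, short_axis = max(a, b), min(a, b)
    return long_axis / short_axis

def axial_ratio_db(a, b):
    """Axial ratio in decibels, as commonly quoted for elliptically
    polarized waves; 0 dB corresponds to circular polarization."""
    return 20 * log10(axial_ratio(a, b))

ar = axial_ratio(3.0, 1.5)         # 2.0
ar_db = axial_ratio_db(1.0, 1.0)   # 0.0 dB for circular polarization
```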
https://en.wikipedia.org/wiki/Hyperstructure
Hyperstructures are algebraic structures equipped with at least one multi-valued operation, called a hyperoperation. The largest classes of hyperstructures are the ones called Hv-structures. A hyperoperation (∗) on a nonempty set H is a mapping from H × H to the nonempty power set P*(H), meaning the set of all nonempty subsets of H, i.e. ∗ : H × H → P*(H). For nonempty subsets A, B ⊆ H we define A ∗ B = ⋃{a ∗ b : a ∈ A, b ∈ B}, A ∗ b = A ∗ {b} and a ∗ B = {a} ∗ B. (H, ∗) is a semihypergroup if ∗ is an associative hyperoperation, i.e. (a ∗ b) ∗ c = a ∗ (b ∗ c) for all a, b, c ∈ H. Furthermore, a hypergroup is a semihypergroup (H, ∗) where the reproduction axiom is valid, i.e. a ∗ H = H ∗ a = H for all a ∈ H.
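These axioms can be checked mechanically on a small example. The sketch below uses the hyperoperation a ∗ b = {a, b} on a three-element set, a textbook-style example assumed here (not from the article); it satisfies both associativity and the reproduction axiom, so it is a hypergroup.

```python
from itertools import product

H = frozenset({0, 1, 2})

def hyperop(a, b):
    """Sample hyperoperation on H: a * b = {a, b} (assumed example)."""
    return frozenset({a, b})

def extend(A, B):
    """Extend * to nonempty subsets: A * B = union of a * b over a in A, b in B."""
    return frozenset().union(*(hyperop(a, b) for a in A for b in B))

# Semihypergroup axiom: (a*b)*c == a*(b*c) for all a, b, c in H
associative = all(
    extend(hyperop(a, b), {c}) == extend({a}, hyperop(b, c))
    for a, b, c in product(H, repeat=3)
)

# Hypergroup axiom (reproduction): a*H == H*a == H for all a in H
reproduction = all(
    extend({a}, H) == H and extend(H, {a}) == H for a in H
)
# Both checks pass for this hyperoperation
```

Note that a ∗ H is the union of the sets {a, b} over all b in H, which is all of H, so reproduction holds trivially for this example.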
https://en.wikipedia.org/wiki/Friis%20formulas%20for%20noise
Friis formula or Friis's formula (sometimes Friis' formula), named after Danish-American electrical engineer Harald T. Friis, is either of two formulas used in telecommunications engineering to calculate the signal-to-noise ratio of a multistage amplifier. One relates to noise factor while the other relates to noise temperature. The Friis formula for noise factor Friis's formula is used to calculate the total noise factor of a cascade of stages, each with its own noise factor and power gain (assuming that the impedances are matched at each stage). The total noise factor can then be used to calculate the total noise figure. The total noise factor is given as F_total = F_1 + (F_2 - 1)/G_1 + (F_3 - 1)/(G_1 G_2) + ... + (F_n - 1)/(G_1 G_2 ... G_(n-1)), where F_i and G_i are the noise factor and available power gain, respectively, of the i-th stage, and n is the number of stages. Both quantities are expressed as ratios, not in decibels. Consequences An important consequence of this formula is that the overall noise figure of a radio receiver is primarily established by the noise figure of its first amplifying stage. Subsequent stages have a diminishing effect on signal-to-noise ratio. For this reason, the first stage amplifier in a receiver is often called the low-noise amplifier (LNA). The overall receiver noise factor is then F_receiver = F_LNA + (F_rest - 1)/G_LNA, where F_rest is the overall noise factor of the subsequent stages and G_LNA is the gain of the LNA. According to the equation, the overall noise factor, F_receiver, is dominated by the noise factor of the LNA, F_LNA, if the gain is sufficiently high. The resulting noise figure expressed in dB is NF = 10 log10(F_receiver). Derivation For a derivation of Friis' formula for the case of three cascaded amplifiers (n = 3) consider the image below. A source outputs a signal of power S_i and noise of power N_i. Therefore the SNR at the input of the receiver chain is SNR_i = S_i/N_i. The signal gets amplified by all three amplifiers. Thus the signal power at the output of the third amplifier is S_o = S_i G_1 G_2 G_3. The noise power at the output of the amplifier chain consists of four parts: The amplified noise of the source (N_i G_1 G_2 G_3) The output referred noise o
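A minimal sketch of the cascade formula (the two-stage values below are illustrative assumptions, chosen to show the first stage dominating):

```python
from math import log10

def friis_total_noise_factor(stages):
    """Total noise factor of a cascade via Friis's formula.

    stages: list of (F_i, G_i) pairs, the noise factor and available
    power gain of each stage, both as linear ratios (not dB).
    """
    total = 0.0
    gain_product = 1.0
    for i, (F, G) in enumerate(stages):
        if i == 0:
            total = F                       # first stage enters directly
        else:
            total += (F - 1) / gain_product  # later stages divided by preceding gain
        gain_product *= G
    return total

def to_db(ratio):
    """Convert a linear noise factor to a noise figure in dB."""
    return 10 * log10(ratio)

# Assumed example: an LNA with F = 2 (3 dB), G = 10, followed by a
# much noisier stage with F = 10, G = 1.
F_total = friis_total_noise_factor([(2.0, 10.0), (10.0, 1.0)])
# F_total = 2 + (10 - 1)/10 = 2.9 : dominated by the first stage
```

Even though the second stage is five times noisier than the first, its contribution is divided by the LNA gain, which is the point made in the Consequences section.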
https://en.wikipedia.org/wiki/HERG
hERG (the human Ether-à-go-go-Related Gene) is a gene () that codes for a protein known as Kv11.1, the alpha subunit of a potassium ion channel. This ion channel (sometimes simply denoted as 'hERG') is best known for its contribution to the electrical activity of the heart: the hERG channel mediates the repolarizing IKr current in the cardiac action potential, which helps coordinate the heart's beating. When this channel's ability to conduct electrical current across the cell membrane is inhibited or compromised, either by application of drugs or by rare mutations in some families, it can result in a potentially fatal disorder called long QT syndrome. Conversely, genetic mutations that increase the current through these channels can lead to the related inherited heart rhythm disorder Short QT syndrome. A number of clinically successful drugs in the market have had the tendency to inhibit hERG, lengthening the QT and potentially leading to a fatal irregularity of the heartbeat (a ventricular tachyarrhythmia called torsades de pointes). This has made hERG inhibition an important antitarget that must be avoided during drug development. hERG has also been associated with modulating the functions of some cells of the nervous system and with establishing and maintaining cancer-like features in leukemic cells. Function hERG forms the major portion of one of the ion channel proteins (the 'rapid' delayed rectifier current (IKr)) that conducts potassium (K+) ions out of the muscle cells of the heart (cardiac myocytes), and this current is critical in correctly timing the return to the resting state (repolarization) of the cell membrane during the cardiac action potential. Sometimes, when referring to the pharmacological effects of drugs, the terms "hERG channels" and IKr are used interchangeably, but, in the technical sense, "hERG channels" can be made only by scientists in the laboratory; in formal terms, the naturally occurring channels in the body that include hERG a
https://en.wikipedia.org/wiki/Sum-free%20sequence
In mathematics, a sum-free sequence is an increasing sequence of positive integers, such that no term can be represented as a sum of any subset of the preceding elements of the sequence. This differs from a sum-free set, where only pairs of sums must be avoided, but where those sums may come from the whole set rather than just the preceding terms. Example The powers of two, 1, 2, 4, 8, 16, ... form a sum-free sequence: each term in the sequence is one more than the sum of all preceding terms, and so cannot be represented as a sum of preceding terms. Sums of reciprocals A set of integers is said to be small if the sum of its reciprocals converges to a finite value. For instance, by the prime number theorem, the prime numbers are not small. proved that every sum-free sequence is small, and asked how large the sum of reciprocals could be. For instance, the sum of the reciprocals of the powers of two (a geometric series) is two. If denotes the maximum sum of reciprocals of a sum-free sequence, then through subsequent research it is known that . Density It follows from the fact that sum-free sequences are small that they have zero Schnirelmann density; that is, if is defined to be the number of sequence elements that are less than or equal to , then . showed that for every sum-free sequence there exists an unbounded sequence of numbers for which where is the golden ratio, and he exhibited a sum-free sequence for which, for all values of , , subsequently improved to by Deshouillers, Erdős and Melfi in 1999 and to by Luczak and Schoen in 2000, who also proved that the exponent 1/2 cannot be further improved. Notes
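The defining property can be checked by brute force for short sequences; the sketch below is an illustration of the definition, not an efficient algorithm.

```python
from itertools import combinations

def is_sum_free_sequence(seq):
    """Check that no term equals the sum of any nonempty subset of
    the preceding terms (exponential-time brute force, fine for
    short sequences)."""
    for i, term in enumerate(seq):
        prefix = seq[:i]
        for r in range(1, len(prefix) + 1):
            if any(sum(c) == term for c in combinations(prefix, r)):
                return False
    return True

powers_of_two = [1, 2, 4, 8, 16, 32]
# Each power of two exceeds the sum of all earlier terms by one,
# so no subset of earlier terms can reach it: the check passes.
# By contrast, [1, 2, 3] fails because 3 = 1 + 2.
```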
https://en.wikipedia.org/wiki/Relocation%20%28computing%29
Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses. Prior to the advent of multiprocess systems, and still in many embedded systems, the addresses for objects were absolute starting at a known location, often zero. Since multiprocessing systems dynamically link and switch between programs it became necessary to be able to relocate objects using position-independent code. A linker usually performs relocation in conjunction with symbol resolution, the process of searching files and libraries to replace symbolic references or names of libraries with actual usable addresses in memory before running a program. Relocation is typically done by the linker at link time, but it can also be done at load time by a relocating loader, or at run time by the running program itself. Some architectures avoid relocation entirely by deferring address assignment to run time; as, for example, in stack machines with zero address arithmetic or in some segmented architectures where every compilation unit is loaded into a separate segment. Segmentation Object files are segmented into various memory segment types. Example segments include code segment (.text), initialized data segment (.data), uninitialized data segment (.bss), or others. Relocation table The relocation table is a list of pointers created by the translator (a compiler or assembler) and stored in the object or executable file. Each entry in the table, or "fixup", is a pointer to an absolute address in the object code that must be changed when the loader relocates the program so that it will refer to the correct location. Fixups are designed to support relocation of the program as a complete unit. In some cases, each fixup in the table is itself relative to a base address of zero, so the fixups themselves must be changed as the loader moves through the table. In some architectures a fixup that crosses ce
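A toy sketch of how a loader might apply a relocation table of absolute-address fixups. The 32-bit little-endian image layout here is an assumption for illustration, not any real object-file format:

```python
import struct

def relocate(image, fixups, old_base, new_base):
    """Apply a relocation table to a raw image.

    image:    bytearray containing absolute 32-bit little-endian addresses
    fixups:   offsets within the image where an absolute address is stored
    old_base: base address the image was linked for
    new_base: base address the loader actually placed it at
    """
    delta = new_base - old_base
    for off in fixups:
        (addr,) = struct.unpack_from("<I", image, off)
        # Patch each absolute address by the load delta
        struct.pack_into("<I", image, off, (addr + delta) & 0xFFFFFFFF)
    return image

# Toy image: two 32-bit absolute pointers, linked for base 0x1000
img = bytearray(struct.pack("<II", 0x1010, 0x1020))
relocate(img, fixups=[0, 4], old_base=0x1000, new_base=0x4000)
# The pointers now read 0x4010 and 0x4020
```

This mirrors the relocation-table description above: each fixup names a location holding an absolute address, and the loader adjusts every such location by the difference between the linked and actual load addresses.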
https://en.wikipedia.org/wiki/Harry%20C.%20Oberholser
Harry Church Oberholser (June 25, 1870 – December 25, 1963) was an American ornithologist. Biography Harry Oberholser was born to Jacob and Lavera S. Oberholser on June 25, 1870, in Brooklyn, New York. He attended Columbia University, but did not graduate. Later, Oberholser was awarded degrees (B.A., M.S., and PhD.) from the George Washington University. He married Mary Forrest Smith on June 30, 1914. From 1895 to 1941, he was employed by the United States Bureau of Biological Survey (later the United States Fish and Wildlife Service) as an ornithologist, biologist, and editor. During his career, he collected bird specimens while on trips with Vernon Bailey and Louis Agassiz Fuertes. In 1928, Oberholser helped organize the Winter Waterfowl Survey, which continues to this day. In 1941, at the age of 70, he became curator of ornithology at the Cleveland Museum of Natural History. Oberholser was the author of a number of books and articles. A complete manuscript of his work is available at the Dolph Briscoe Center for American History. He died on December 25, 1963. Memory Empidonax oberholseri (dusky flycatcher) was named in his honor. Books The Bird Life of Texas (1974) Birds of Mt. Kilimanjaro (1905) Birds of the Anamba Islands (1917) Birds of the Natuna Islands (1932) The Bird Life of Louisiana (1938) When Passenger Pigeons Flew in the Killbuck Valley (1999) Critical notes on the subspecies of the spotted owl (1915) The birds of the Tambelan Islands, South China Sea (1919) The great plains waterfowl breeding grounds and their protection (1918)
https://en.wikipedia.org/wiki/DIMES
DIMES (Distributed Internet Measurements & Simulations) was a subproject of the EVERGROW Integrated Project in the EU Information Society Technologies, Future and Emerging Technologies programme. It studied the structure and topology of the Internet, aiming to obtain a map of it and annotate the map with delay, loss and link-capacity information. DIMES used measurements by software agents downloaded by volunteers and installed on their privately owned machines. Once installed, the agent operates at a very low rate so as to have minimal impact on the machine's performance and on its network connection. DIMES intended to explore relationships between the data gathered on the Internet's growth and geographical and socio-economic data, in particular for fast-developing countries, to see if they can provide a measure of economic development and societal openness. The project published periodic maps at several aggregation levels on the web. Over 12,500 agents were installed by over 5,500 users residing in about 95 nations, and in several hundred ASes. The project collected over 2.2 billion measurements. See also ETOMIC List of volunteer computing projects Network mapping
https://en.wikipedia.org/wiki/Nokia%20770%20Internet%20Tablet
The Nokia 770 Internet Tablet is a wireless Internet appliance from Nokia, originally announced at the LinuxWorld Summit in New York City on 25 May 2005. It is designed for wireless Internet browsing and email functions and includes software such as Internet radio, an RSS news reader, ebook reader, image viewer and media players for selected types of media. The device went on sale in Europe on 3 November 2005, at a suggested retail price of €349 to €369 (£245 in the United Kingdom). In the United States, the device became available for purchase through Nokia USA's web site on 14 November 2005 for $359.99. On 8 January 2007, Nokia announced the Nokia N800, the successor to the 770. In July 2007, the price for the Nokia 770 fell to under US$150 / 150 EUR / 100 GBP. Specifications Dimensions: 141×79×19 mm (5.5×3.1×0.7 in) Weight: 230 g (8.1 oz) with protective cover or 185 g (6.5 oz) without. Processor: Texas Instruments OMAP 1710 CPU running at 252 MHz. It combines the ARM architecture of the ARM926TEJ core subsystem with a Texas Instruments TMS320C55x digital signal processor. Memory: 64 MB (64 × 2^20 bytes) of DDR RAM, and 128 MB of internal flash memory, of which about 64 MB should be available to the user. Option for extended virtual memory (RS-MMC up to 1 GB (2 GB after flash upgrade)). Display and resolution: 4.1 inches, 800×480 pixels at 225 pixels per inch with up to 65,536 colors Connectivity: WLAN (IEEE 802.11b/g), Bluetooth 1.2, dial-up access, USB (both user-mode, and non-powered host-mode) Expansion: RS-MMC (both RS-MMC and DV-RS-MMC cards are supported). Audio: speaker and a microphone The device was manufactured in Estonia and Germany. Maemo The 770, like all Nokia Internet Tablets, runs Maemo, which is similar to many handheld operating systems and provides a "Home" screen—the central point from which all applications and settings are accessed. The home screen is divided into areas for launching applications, a menu bar, and a large custo
https://en.wikipedia.org/wiki/Dale%27s%20principle
In neuroscience, Dale's principle (or Dale's law) is a rule attributed to the English neuroscientist Henry Hallett Dale. The principle basically states that a neuron performs the same chemical action at all of its synaptic connections to other cells, regardless of the identity of the target cell. However, there has been disagreement about the precise wording. Because of an ambiguity in the original statement, there are actually two versions of the principle, one that has been shown definitively to be false, and another that remains a valuable rule of thumb. The term "Dale's principle" was first used by Sir John Eccles in 1954, in a passage reading, "In conformity with Dale's principle (1934, 1952) that the same chemical transmitter is released from all the synaptic terminals of a neurone…" Some modern writers have understood the principle to state that neurons release one and only one transmitter at all of their synapses, which is false. Others, including Eccles himself in later publications, have taken it to mean that neurons release the same set of transmitters at all of their synapses. Dale himself never stated his "principle" in an explicit form. The source that Eccles referred to was a lecture published by Dale in 1934, called Pharmacology and nerve endings, describing some of the early research into the physiology of neurotransmission. At that time, only two chemical transmitters were known, acetylcholine and noradrenaline (then thought to be adrenaline). In the peripheral nervous system, cholinergic and adrenergic transmission were known to arise from different groups of nerve fibers. Dale was interested in the possibility that a neuron releasing one of these chemicals in the periphery might also release the same chemical at central synapses. He wrote: It is to be noted, further, that in the cases for which direct evidence is already available, the phenomena of regeneration appear to indicate that the nature of the chemical function, whether choli
https://en.wikipedia.org/wiki/Adaptogen
Adaptogens or adaptogenic substances are used in herbal medicine for the purported stabilization of physiological processes and promotion of homeostasis. History The term "adaptogens" was coined in 1947 by Soviet toxicologist Nikolai Lazarev to describe substances that may increase resistance to stress. The term "adaptogenesis" was later applied in the Soviet Union to describe remedies thought to increase the resistance of organisms to biological stress. Most of the studies conducted on adaptogens were performed in the Soviet Union, Korea, and China before the 1980s. The term was not accepted in pharmacological, physiological, or mainstream clinical practices in the European Union. Sources Compounds studied for putative adaptogenic properties are often derived from the following plants: Eleutherococcus senticosus Oplopanax elatus Panax ginseng Rhaponticum cartamoides Rhodiola rosea Schisandra chinensis
https://en.wikipedia.org/wiki/Superdiamagnetism
Superdiamagnetism (or perfect diamagnetism) is a phenomenon occurring in certain materials at low temperatures, characterised by the complete absence of magnetic permeability (i.e. a volume magnetic susceptibility of −1) and the exclusion of the interior magnetic field. Superdiamagnetism established that the superconductivity of a material is a distinct thermodynamic phase, entered through a phase transition. Superconducting magnetic levitation is due to superdiamagnetism, which repels a permanent magnet approaching the superconductor, and to flux pinning, which prevents the magnet from floating away. Superdiamagnetism is a feature of superconductivity. It was identified in 1933 by Walther Meissner and Robert Ochsenfeld, but it is considered distinct from the Meissner effect, which occurs when superconductivity first forms and involves the expulsion of magnetic fields that already penetrate the object. Theory Fritz London and Heinz London developed the theory that the exclusion of magnetic flux is brought about by electrical screening currents that flow at the surface of the superconducting material and which generate a magnetic field that exactly cancels the externally applied field inside the superconductor. These screening currents are generated whenever a superconducting material is brought inside a magnetic field. This can be understood by the fact that a superconductor has zero electrical resistance, so that eddy currents, induced by the motion of the material inside a magnetic field, will not decay. Fritz London, at the Royal Society in 1935, stated that the thermodynamic state would be described by a single wave function. "Screening currents" also appear in a situation wherein an initially normal, conducting metal is placed inside a magnetic field. As soon as the metal is cooled below the appropriate transition temperature, it becomes superconducting. This expulsion of magnetic field upon the cooling of the metal cannot be explained any longer by merely assuming zero resistance and is calle
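The London theory sketched above implies that an applied field decays exponentially inside the superconductor, B(x) = B0 exp(−x/λ_L), where λ_L = sqrt(m/(μ0 n_s e²)) is the London penetration depth. The snippet below evaluates these standard formulas; the superconducting carrier density is an assumed order-of-magnitude value, not a figure from the article.

```python
from math import sqrt, exp, pi

def london_penetration_depth(n_s, m=9.109e-31, e=1.602e-19):
    """London penetration depth lambda_L = sqrt(m / (mu_0 * n_s * e^2)),
    with n_s the superconducting carrier density in m^-3."""
    mu_0 = 4e-7 * pi  # vacuum permeability (SI)
    return sqrt(m / (mu_0 * n_s * e * e))

def field_inside(B0, x, lam):
    """One-dimensional London solution: the field a depth x inside
    the superconductor decays as B(x) = B0 * exp(-x / lambda_L)."""
    return B0 * exp(-x / lam)

# Assumed carrier density ~ 1e28 m^-3 gives lambda_L of order 50 nm,
# so the screening currents confine the field to a thin surface layer.
lam = london_penetration_depth(1e28)
B_surface = field_inside(1.0, 0.0, lam)     # full field at the surface
B_deep = field_inside(1.0, 10 * lam, lam)   # negligible ten depths in
```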
https://en.wikipedia.org/wiki/Nature%20Reviews%20Molecular%20Cell%20Biology
Nature Reviews Molecular Cell Biology is a monthly peer-reviewed review journal published by Nature Portfolio. It was established in October 2000 and covers all aspects of molecular and cell biology. The editor-in-chief is Kim Baumann. According to the Journal Citation Reports, the journal has a 2021 impact factor of 113.915, ranking it 1st out of 194 journals in the category "Cell Biology".
https://en.wikipedia.org/wiki/Mendsaikhany%20Enkhsaikhan
Mendsaikhany Enkhsaikhan (; born 4 June 1955) was the prime minister of Mongolia from July 7, 1996 to April 23, 1998, the first in 80 years not belonging to the Mongolian People's Revolutionary Party. Life Early years Enkhsaikhan was born in 1955 in Ulaanbaatar. He participated in the International Mathematical Olympiad in 1973 and took a bronze medal. He earned a PhD in economic sciences from Kiev State University in the former USSR in the 1970s. From 1978 to 1988 he worked as an economist, researcher and deputy head of a department at the Foreign Ministry of Mongolia. From 1988 to 1990 he worked as a specialist at the Ministry of Foreign Economic Relations and Trade, and in the meantime he was a director of a market research institute. From 1990 to 1992 he held a seat in the State Small Khural (parliament) for the first time, representing the Mongolian Democratic Party (1990), and chaired a standing committee in the parliament. From 1992 to 1993 he held a seat in the State Great Khural (parliament) representing the Mongolian Democratic Party. He was appointed as president of the Academy for Political Education in 1992. After that he served as the chairman of the common elections commission of the Mongolian National Democratic Party and the Mongolian Social Democratic Party for the presidential elections of June 6, 1993. From 1993 to 1996 he worked as the chairman of the Presidential office for Punsalmaagiin Ochirbat, first official president of Mongolia. Prime Minister Enkhsaikhan became Prime Minister of Mongolia on July 7, 1996 after the Mongolian Democratic Union Coalition won the parliamentary elections. Enkhsaikhan was the elections campaign manager of the Mongolian Democratic Union while Tsakhiagiin Elbegdorj was the coalition chairman. This made Enkhsaikhan the first prime minister since the 1920s who was not a member of the Mongolian People's Revolutionary Party (MPRP). He presided over a period of aggressive economic reforms, including housing privatization, acceleration
https://en.wikipedia.org/wiki/Decoding%20methods
In coding theory, decoding is the process of translating received messages into codewords of a given code. There are many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.

Notation
$C \subset \mathbb{F}_2^n$ is considered a binary code with length $n$; $x$ and $y$ shall be elements of $\mathbb{F}_2^n$; and $d(x,y)$ is the Hamming distance between those elements.

Ideal observer decoding
Given the received message $x \in \mathbb{F}_2^n$, ideal observer decoding generates the codeword $y \in C$ that maximizes $\mathbb{P}(y \text{ sent} \mid x \text{ received})$; that is, one chooses the codeword that is most likely to have produced the message $x$ after transmission.

Decoding conventions
The decoded codeword need not be unique: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include:
Request that the codeword be resent (automatic repeat-request).
Choose any codeword from the set of most likely codewords.
If another code follows, mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.

Maximum likelihood decoding
Given a received vector $x$, maximum likelihood decoding picks a codeword $y \in C$ that maximizes $\mathbb{P}(x \text{ received} \mid y \text{ sent})$, that is, the codeword that maximizes the probability that $x$ was received, given that $y$ was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding. In fact, by Bayes' theorem,

$$\mathbb{P}(x \text{ received} \mid y \text{ sent}) = \frac{\mathbb{P}(y \text{ sent} \mid x \text{ received})\,\mathbb{P}(x \text{ received})}{\mathbb{P}(y \text{ sent})}.$$

Upon fixing $x$, $\mathbb{P}(x \text{ received})$ is constant, and $\mathbb{P}(y \text{ sent})$ is constant because all codewords are equally likely to be sent. Therefore, $\mathbb{P}(x \text{ received} \mid y \text{ sent})$ is maximised as a function of the variable $y$ precisely when $\mathbb{P}(y \text{ sent} \mid x \text{ received})$ is maximised, and the claim follows. As with ideal observer decoding, a convention must be agreed to for non-unique decoding. The maximum likelihood decoding problem can also be modeled as an integer programming problem. The
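The decoding rules above can be illustrated with a hedged sketch of minimum-distance (nearest-codeword) decoding, which coincides with maximum likelihood decoding over a binary symmetric channel when the crossover probability is below 1/2 and all codewords are equally likely; the five-bit, four-codeword code used here is invented for illustration.

```python
# Nearest-codeword decoding sketch: pick the codeword(s) at minimal
# Hamming distance from the received word. Ties correspond to the
# "non-unique decoding" case that requires a convention.

def hamming(a, b):
    # Hamming distance: number of positions where the words differ.
    return sum(x != y for x, y in zip(a, b))

def md_decode(received, code):
    # Return ALL codewords at minimal distance, so ties are visible.
    best = min(hamming(received, c) for c in code)
    return [c for c in code if hamming(received, c) == best]

code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(md_decode((1, 1, 1, 1, 0), code))  # unique nearest codeword
print(md_decode((0, 1, 1, 0, 1), code))  # two codewords tie at distance 2
```

When the second call returns more than one codeword, one of the listed conventions (resend, pick one, or mark erasures) must break the tie.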
https://en.wikipedia.org/wiki/FK-AK%20space
In functional analysis and related areas of mathematics an FK-AK space, or FK-space with the AK property, is an FK-space which contains the space of finite sequences and has a Schauder basis.

Examples and non-examples
$c_0$, the space of null sequences, with the supremum norm has the AK property.
$\ell^p$ ($1 \le p < \infty$), the absolutely p-summable sequences with the norm $\|\cdot\|_p$, have the AK property.
$\ell^\infty$ with the supremum norm does not have the AK property.

Properties
An FK-AK space $E$ has the property that the continuous dual of $E$ is linearly isomorphic to the beta dual of $E$.
FK-AK spaces are separable spaces.

See also
https://en.wikipedia.org/wiki/The%20Science%20of%20Life
The Science of Life is a book written by H. G. Wells, Julian Huxley and G. P. Wells, published in three volumes by The Waverley Publishing Company Ltd in 1929–30, giving a popular account of all major aspects of biology as known in the 1920s. It has been called "the first modern textbook of biology" and "the best popular introduction to the biological sciences". Wells's most recent biographer notes that The Science of Life "is not quite as dated as one might suppose". In undertaking The Science of Life, H. G. Wells, who had published The Outline of History a decade earlier, selling over two million copies, desired the same sort of treatment for biology. He thought of his readership as "the intelligent lower middle classes ... [not] idiots, half-wits ... greenhorns, religious fanatics ... smart women or men who know all that there is to be known". Julian Huxley, the grandson of T. H. Huxley under whom Wells had studied biology, and Wells' son "Gip", a zoologist, divided the initial writing between them; H. G. Wells revised, dealt (with the help of his literary agent, A. P. Watt) with publishers, and acted as a strict taskmaster, often obliging his collaborators to sit down and work together and keeping them on a tight schedule. (H. G. Wells had begun the book during his wife's final illness and is said to have used work on the book as a way to keep his mind off his loss.) The text as published is presented as the common work of a "triplex author". H. G. Wells took 40% of the royalties; the remainder was split between Huxley and Wells's son. In his will, H. G. Wells left his rights in the book to G. P. Wells. In 1927, Huxley gave up his chair of Zoology at King's College, London to concentrate on the work. Thanks to the success of the book, Huxley was able to give up teaching and devote himself to administration and experimental science. The book was originally serialised in 31 fortnightly parts, published in 3 volumes in 1929–30 and in a single volume in 1931. T
https://en.wikipedia.org/wiki/6bone
The 6bone was a testbed for Internet Protocol version 6; it was an outgrowth of the IETF IPng project that created the IPv6 protocols intended to eventually replace the current Internet network layer protocols known as IPv4. The 6bone was started outside the official IETF process at the March 1996 IETF meetings, and became a worldwide informal collaborative project, with eventual oversight from the "NGtrans" (IPv6 Transition) Working Group of the IETF. The original mission of the 6bone was to establish a network to foster the development, testing, and deployment of IPv6 using a model to be based upon the experiences from the Mbone, hence the name "6bone". The 6bone started as a virtual network (using IPv6 over IPv4 tunneling/encapsulation) operating over the IPv4-based Internet to support IPv6 transport, and slowly added native links specifically for IPv6 transport. Although the initial 6bone focus was on testing of standards and implementations, the eventual focus became more on testing of transition and operational procedures, as well as actual IPv6 network usage. The 6bone operated under the IPv6 Testing Address Allocation (see ), which specified the 3FFE::/16 IPv6 prefix for 6bone testing purposes. At its peak in mid-2003, over 150 6bone top level 3FFE::/16 network prefixes were routed, interconnecting over 1000 sites in more than 50 countries. When it became obvious that the availability of IPv6 top level production prefixes was assured, and that commercial and private IPv6 networks were being operated outside the 6bone using these prefixes, a plan was developed to phase out the 6bone (see ). The phaseout plan called for a halt to new 6bone prefix allocations on 1 January 2004 and the complete cessation of 6bone operation and routing over the 6bone testing prefixes on 6 June 2006. Addresses within the 6bone testing prefix have now reverted to the IANA. Related RFCs IPv6 Testing Address Allocation 6bone (IPv6 Testing Address Allocation) Phaseout Ext
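For a concrete sense of the addressing scheme, the sketch below uses Python's standard `ipaddress` module to test whether an address falls inside the retired 3FFE::/16 testing prefix; the example addresses are invented for illustration (2001:db8::/32 is the IPv6 documentation prefix, not a 6bone address).

```python
import ipaddress

# The former 6bone IPv6 Testing Address Allocation prefix.
SIX_BONE = ipaddress.ip_network("3ffe::/16")

def was_6bone(addr: str) -> bool:
    # Membership test: does the address fall within 3FFE::/16?
    return ipaddress.ip_address(addr) in SIX_BONE

print(was_6bone("3ffe:b00::1"))   # a 6bone-style testing address
print(was_6bone("2001:db8::1"))   # documentation prefix, outside 6bone
```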
https://en.wikipedia.org/wiki/Synteny
In genetics, the term synteny refers to two related concepts:

In classical genetics, synteny describes the physical co-localization of genetic loci on the same chromosome within an individual or species.
In current biology, synteny more commonly refers to colinearity, i.e. conservation of blocks of order within two sets of chromosomes that are being compared with each other. These blocks are referred to as syntenic blocks.

The Encyclopædia Britannica gives the following description of synteny, using the modern definition:

Etymology
Synteny is a neologism meaning "on the same ribbon"; Greek: σύν, syn "along with" + ταινία, tainiā "band". This can be interpreted classically as "on the same chromosome", or in the modern sense of having the same order of genes on two (homologous) strings of DNA (or chromosomes).

Synteny: co-localization on a chromosome
The classical concept is related to genetic linkage: linkage between two loci is established by the observation of lower-than-expected recombination frequencies between them. In contrast, any loci on the same chromosome are by definition syntenic, even if their recombination frequency cannot be distinguished from that of unlinked loci by practical experiments. Thus, in theory, all linked loci are syntenic, but not all syntenic loci are necessarily linked. Similarly, in genomics, the genetic loci on a chromosome are syntenic regardless of whether this relationship can be established by experimental methods such as DNA sequencing/assembly, genome walking, physical localization or hap-mapping. Students of (classical) genetics employ the term synteny to describe the situation in which two genetic loci have been assigned to the same chromosome but still may be separated by a large enough distance in map units that genetic linkage has not been demonstrated.

Across evolution
Shared synteny (also known as conserved synteny) describes preserved co-localization of genes on chromosomes of different species. During evolution, rearrangements to t
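The modern notion of syntenic blocks (conserved gene order) can be illustrated with a toy sketch. The gene names and the minimum block length below are invented, and real synteny-detection tools additionally handle inversions, duplications, and gaps that this sketch ignores.

```python
# Toy synteny search: find maximal runs of genes that appear in the
# same consecutive order on two chromosomes (represented as gene lists).

def syntenic_blocks(chrom_a, chrom_b, min_len=2):
    pos_b = {g: i for i, g in enumerate(chrom_b)}  # gene -> index on B
    blocks, i = [], 0
    while i < len(chrom_a):
        if chrom_a[i] in pos_b:
            j, start = pos_b[chrom_a[i]], i
            # Extend the run while the next gene on A is also the
            # next gene on B (colinearity).
            while (i + 1 < len(chrom_a)
                   and chrom_a[i + 1] in pos_b
                   and pos_b[chrom_a[i + 1]] == j + 1):
                i += 1
                j += 1
            block = chrom_a[start:i + 1]
            if len(block) >= min_len:
                blocks.append(block)
        i += 1
    return blocks

a = ["g1", "g2", "g3", "x1", "g7", "g8"]
b = ["g1", "g2", "g3", "y5", "g7", "g8"]
print(syntenic_blocks(a, b))  # two conserved blocks survive the x1/y5 swap
```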
https://en.wikipedia.org/wiki/Distributed%20object
In distributed computing, distributed objects are objects (in the sense of object-oriented programming) that are distributed across different address spaces, either in different processes on the same computer, or even in multiple computers connected via a network, but which work together by sharing data and invoking methods. This often involves location transparency, where remote objects appear the same as local objects. The main method of distributed object communication is with remote method invocation, generally by message-passing: one object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object. Distributed objects were popular in the late 1990s and early 2000s, but have since fallen out of favor. The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects. Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior. Live distributed objects (or simply live objects) generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have a distinct identity, and that can encapsulate distributed state and behavior. See also Internet protocol suite. Local vs. distributed objects Local and distributed objects differ in many respects. Here are s
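Remote method invocation by message passing can be sketched in a few lines. This is a hedged, in-process toy in which the "network hop" is a direct function call, and the `Counter` class, `dispatch` function, and JSON message format are all invented for illustration; real systems (e.g. CORBA, Java RMI) add transport, naming, and error handling.

```python
import json

class Counter:
    # The "remote" object's real implementation, living on the server side.
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
        return self.value

def dispatch(obj, message):
    # Server side: decode the message and invoke the named method.
    call = json.loads(message)
    return getattr(obj, call["method"])(*call["args"])

class Proxy:
    # Client side: looks like a local object, but every method call is
    # serialized into a message and sent to the dispatcher.
    def __init__(self, remote):
        self._remote = remote
    def __getattr__(self, name):
        def stub(*args):
            msg = json.dumps({"method": name, "args": list(args)})
            return dispatch(self._remote, msg)  # stand-in for the network hop
        return stub

counter = Proxy(Counter())
print(counter.add(5))   # invoked "remotely" via a message
print(counter.add(2))
```

The proxy/dispatcher split mirrors the stub/skeleton pattern: the caller never touches the remote object's state directly, only the messages and their results cross the boundary.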
https://en.wikipedia.org/wiki/Local%20Management%20Interface
Local Management Interface (LMI) is a term for some signaling standards used in networks, namely Frame Relay and Carrier Ethernet. Frame Relay LMI is a set of signalling standards between routers and Frame Relay switches. Communication takes place between a router and the first Frame Relay switch to which it is connected. Information about keepalives, global addressing, IP multicast and the status of virtual circuits is commonly exchanged using LMI. There are three standards for LMI: Using DLCI 0: ANSI's T1.617 Annex D standard ITU-T's Q.933 Annex A standard Using DLCI 1023: The "Gang of Four" standard, developed by Cisco, DEC, StrataCom and Nortel Carrier Ethernet Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto configuration of customer edge (CE) devices.
https://en.wikipedia.org/wiki/Vacuum%20polarization
In quantum field theory, and specifically quantum electrodynamics, vacuum polarization describes a process in which a background electromagnetic field produces virtual electron–positron pairs that change the distribution of charges and currents that generated the original electromagnetic field. It is also sometimes referred to as the self-energy of the gauge boson (photon). After developments in radar equipment for World War II resulted in higher accuracy for measuring the energy levels of the hydrogen atom, I. I. Rabi made measurements of the Lamb shift and the anomalous magnetic dipole moment of the electron. These effects corresponded to the deviation from the value −2 for the spectroscopic electron g-factor that is predicted by the Dirac equation. Later, Hans Bethe theoretically calculated those shifts in the hydrogen energy levels due to vacuum polarization on his return train ride from the Shelter Island Conference to Cornell. The effects of vacuum polarization have been routinely observed experimentally since then as very well-understood background effects. Vacuum polarization, referred to below as the one-loop contribution, occurs with leptons (electron–positron pairs) or quarks. The former (leptons) was first observed in the 1940s and more recently observed in 1997 using the TRISTAN particle accelerator in Japan; the latter (quarks) was observed along with multiple quark–gluon loop contributions from the early 1970s to mid-1990s using the VEPP-2M particle accelerator at the Budker Institute of Nuclear Physics in Siberia, Russia and many other accelerator laboratories worldwide.

History
Vacuum polarization was first discussed in papers by P. A. M. Dirac and W. Heisenberg in 1934. Effects of vacuum polarization were calculated to first order in the coupling constant by R. Serber and E. A. Uehling in 1935.

Explanation
According to quantum field theory, the vacuum between interacting particles is not simply empty space. Rather, it contains short-lived v
https://en.wikipedia.org/wiki/Table%20wine
Table wine (rarely abbreviated TW) is a wine term with two different meanings: a style of wine and a quality level within wine classification. In the United States, the term primarily designates a wine style: an ordinary wine which is not fortified or expensive and is not usually sparkling. In the European Union wine regulations, the term is the lower of two overall quality categories, the higher of which is quality wines produced in specified regions (QWPSR). All levels of national wine classification systems within the EU correspond to either TW or QWPSR, although the terms that actually appear on wine labels are defined by national wine laws with the EU regulations as a framework. Most EU countries have a national classification called table wine in the country's official language. Examples include vin de table in France, vino da tavola in Italy, vino de mesa in Spain, vinho de mesa in Portugal, Tafelwein in Germany, and επιτραπέζιος οίνος (epitrapézios oínos) in Greece. These classifications generally represent the lowest level of classification in their country. United States The Alcohol and Tobacco Tax and Trade Bureau and Code of Federal Regulations define table wine as grape wine having a maximum alcoholic content of 14 percent alcohol by volume. Wines between 14% and 24% ABV are known as dessert wines. Table wine may also be designated using terms such as light wine, light white wine, red table wine, sweet table wine, etc. European Union European Union guidelines stipulate that all wine produced must fall into one of two categories: table wine or the superior quality wines produced in specified regions (often referred to as quality wine psr). Within the category of table wines, a difference is made between "plain" table wines, which are only allowed to display the country of origin, and table wines with geographical indication, which may indicate a region of origin and are a form of protected geographical indication (PGI) applied to wine. For the lowe
https://en.wikipedia.org/wiki/In-memory%20database
An in-memory database (IMDB; also main memory database system, MMDB, or memory-resident database) is a database management system that primarily relies on main memory for computer data storage. It is contrasted with database management systems that employ a disk storage mechanism. In-memory databases are faster than disk-optimized databases because disk access is slower than memory access and the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory eliminates seek time when querying the data, which provides faster and more predictable performance than disk. Applications where response time is critical, such as those running telecommunications network equipment and mobile advertising networks, often use main-memory databases. IMDBs have gained much traction, especially in the data analytics space, starting in the mid-2000s – mainly due to multi-core processors that can address large memory and due to less expensive RAM. A potential technical hurdle with in-memory data storage is the volatility of RAM. Specifically, in the event of a power loss, intentional or otherwise, data stored in volatile RAM is lost. With the introduction of non-volatile random-access memory technology, in-memory databases will be able to run at full speed and maintain data in the event of power failure.

ACID support
In its simplest form, main memory databases store data on volatile memory devices. These devices lose all stored information when the device loses power or is reset. In this case, IMDBs can be said to lack support for the "durability" portion of the ACID (atomicity, consistency, isolation, durability) properties. Volatile memory-based IMDBs can, and often do, support the other three ACID properties of atomicity, consistency and isolation. Many IMDBs have added durability via the following mechanisms:
Snapshot files, or checkpoint images, which record the state of the database at a given moment in time. The system typically
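The snapshot/checkpoint mechanism can be sketched in miniature. This is a hedged illustration, not any particular product's implementation; the class, method, and file names are invented. A dict-backed store checkpoints its full state to disk and reloads it after a simulated restart.

```python
import json
import os
import tempfile

class MemStore:
    # Minimal in-memory key-value store with snapshot durability.
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)
    def snapshot(self, path):
        # Write atomically: dump to a temp file, then rename over the
        # target, so a crash mid-write never leaves a truncated checkpoint.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, path)
    def restore(self, path):
        with open(path) as f:
            self.data = json.load(f)

store = MemStore()
store.put("user:1", "alice")
store.snapshot("checkpoint.json")

fresh = MemStore()               # simulate a restart with empty RAM
fresh.restore("checkpoint.json")
print(fresh.get("user:1"))       # state recovered from the checkpoint
```

Real IMDBs pair snapshots with a transaction log so that writes made after the last checkpoint also survive a failure; this sketch shows only the checkpoint half.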
https://en.wikipedia.org/wiki/Vibronic%20coupling
Vibronic coupling (also called nonadiabatic coupling or derivative coupling) in a molecule involves the interaction between electronic and nuclear vibrational motion. The term "vibronic" originates from the combination of the terms "vibrational" and "electronic", denoting the idea that in a molecule, vibrational and electronic interactions are interrelated and influence each other. The magnitude of vibronic coupling reflects the degree of such interrelation. In theoretical chemistry, the vibronic coupling is neglected within the Born–Oppenheimer approximation. Vibronic couplings are crucial to the understanding of nonadiabatic processes, especially near points of conical intersections. The direct calculation of vibronic couplings used to be uncommon due to difficulties associated with its evaluation, but has recently gained popularity due to increased interest in the quantitative prediction of internal conversion rates, as well as the development of cheap but rigorous ways to analytically calculate the vibronic couplings, especially at the TDDFT level.

Definition
Vibronic coupling describes the mixing of different electronic states as a result of small vibrations.

Evaluation
The evaluation of vibronic coupling often involves complex mathematical treatment.

Numerical gradients
The form of vibronic coupling is essentially the derivative of the wave function. Each component of the vibronic coupling vector can be calculated with numerical differentiation methods using wave functions at displaced geometries. This is the procedure used in MOLPRO. First-order accuracy can be achieved with the forward difference formula:

$$\mathbf{f}_{ij}\cdot\hat{\mathbf{e}}_\alpha \approx \frac{1}{h}\,\langle \psi_i(\mathbf{R}) \mid \psi_j(\mathbf{R}+h\hat{\mathbf{e}}_\alpha)\rangle, \qquad i \neq j$$

Second-order accuracy can be achieved with the central difference formula:

$$\mathbf{f}_{ij}\cdot\hat{\mathbf{e}}_\alpha \approx \frac{1}{2h}\left(\langle \psi_i(\mathbf{R}) \mid \psi_j(\mathbf{R}+h\hat{\mathbf{e}}_\alpha)\rangle - \langle \psi_i(\mathbf{R}) \mid \psi_j(\mathbf{R}-h\hat{\mathbf{e}}_\alpha)\rangle\right)$$

Here, $\hat{\mathbf{e}}_\alpha$ is a unit vector along direction $\alpha$, and the bracketed quantity is the overlap between the wave functions of the two electronic states, which is related to the transition density between them. Evaluation of electronic wave functions for both electronic states is required at N displacement geometries for first-order accuracy, and at 2N displacements to achieve second-order accuracy.
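The first- versus second-order accuracy claim can be checked numerically with a toy sketch, substituting an ordinary scalar function for the wave-function overlap: halving the step roughly halves the forward-difference error but quarters the central-difference error.

```python
import math

# Differentiate f(x) = sin(x) at x = 1; the exact derivative is cos(1).
f, x, exact = math.sin, 1.0, math.cos(1.0)

def forward(h):
    # Forward difference: first-order accurate, error ~ O(h).
    return (f(x + h) - f(x)) / h

def central(h):
    # Central difference: second-order accurate, error ~ O(h^2),
    # at the cost of twice as many displaced evaluations.
    return (f(x + h) - f(x - h)) / (2 * h)

for h in (1e-2, 5e-3):
    print(f"h={h:g}  fwd err={abs(forward(h) - exact):.2e}  "
          f"ctr err={abs(central(h) - exact):.2e}")
```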
https://en.wikipedia.org/wiki/Architecture%20of%20Windows%20NT
The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant multitasking operating system, which has been designed to work with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows XP, Microsoft began making 64-bit versions of Windows available; before this, there were only 32-bit versions of these operating systems. Programs and subsystems in user mode are limited in terms of what system resources they have access to, while the kernel mode has unrestricted access to the system memory and external devices. Kernel mode in Windows NT has full access to the hardware and system resources of the computer. The Windows NT kernel is a hybrid kernel; the architecture comprises a simple kernel, hardware abstraction layer (HAL), drivers, and a range of services (collectively named Executive), which all exist in kernel mode. User mode in Windows NT is made of subsystems capable of passing I/O requests to the appropriate kernel mode device drivers by using the I/O manager. The user mode layer of Windows NT is made up of the "Environment subsystems", which run applications written for many different types of operating systems, and the "Integral subsystem", which operates system-specific functions on behalf of environment subsystems. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to. The Executive interfaces with all the user mode subsystems and deals with I/O, object management, security and process management. The kernel sits between the hardware abstraction layer and the Executive to provide multiprocessor synchronization, thread and interrupt scheduling and dispatching, an
https://en.wikipedia.org/wiki/WeatherBug
WeatherBug is a brand based in New York City that provides location-based advertising to businesses. WeatherBug's mobile app reports live and forecast hyperlocal weather data to consumers.

History
Originally owned by Automated Weather Source, the WeatherBug brand was founded by Bob Marshall and other partners in 1993. It started in the education market by selling weather tracking stations and educational software to public and private schools and then used the data from the stations on its website. Later, the company began partnering with TV stations so that broadcasters could use WeatherBug's local data and camera shots in their weather reports. In 2000, the WeatherBug desktop application and website were launched. Later, the company launched WeatherBug and WeatherBug Elite as smartphone apps for iOS and Android, which won an APPY app design award in 2013. The company also sells a lightning tracking safety system that is used by schools and parks in southern Florida and elsewhere. The company used lightning detection sensors throughout Guinea in Africa to track storms as they develop and has more than 50 lightning detection sensors in Brazil. Earth Networks received The Award for Outstanding Services to Meteorology by a Corporation in 2014 from the American Meteorological Society for "developing innovative lightning detection data products that improve severe-storm monitoring and warnings." WeatherBug announced in 2004 it had been certified to display the TRUSTe privacy seal on its website. In 2005, Microsoft AntiSpyware flagged the application as a low-risk spyware threat. According to the company, the desktop application is not spyware because it is incapable of tracking users' overall Web use or deciphering anything on their hard drive. In early 2011, AWS Convergence Technologies, Inc. (formerly Automated Weather Source) changed its name to Earth Networks, Inc. In April 2013, WeatherBug was the second most popular weather informat
https://en.wikipedia.org/wiki/Doublet%20state
In quantum mechanics, a doublet is a composite quantum state of a system with an effective spin of 1/2, such that there are two allowed values of the spin component, −1/2 and +1/2. Quantum systems with two possible states are sometimes called two-level systems. Essentially all occurrences of doublets in nature arise from rotational symmetry; spin 1/2 is associated with the fundamental representation of the Lie group SU(2). History and applications The term "doublet" dates back to the 19th century, when it was observed that certain spectral lines of an ionized, excited gas would split into two under the influence of a strong magnetic field, in an effect known as the anomalous Zeeman effect. Such spectral lines were observed not only in the laboratory, but also in astronomical spectroscopy observations, allowing astronomers to deduce the existence of, and measure the strength of magnetic fields around the Sun, stars and galaxies. Conversely, it was the observation of doublets in spectroscopy that allowed physicists to deduce that the electron had a spin, and that furthermore, the magnitude of the spin had to be 1/2. See the history section of the article on Spin (physics) for greater detail. Doublets continue to play an important role in physics. For example, the healthcare technology of magnetic resonance imaging is based on nuclear magnetic resonance. In this technology, a spectroscopic doublet occurs in a spin-1/2 atomic nucleus, whose doublet splitting is in the radio-frequency range. By applying both a magnetic field and carefully tuning a radio-frequency transmitter, the nuclear spins will flip and re-emit radiation, in an effect known as the Rabi cycle. The strength and frequency of the emitted radio waves allows the concentration of such nuclei to be measured. Another potential application is the use of doublets as the emitting layer in light emitting diodes (LEDs). These materials have the advantage of having 100% theoretical quantum efficiency based on spi
https://en.wikipedia.org/wiki/Vernon%20Estes
Vernon Estes (usually referred to as Vern; born January 4, 1930) is the founder and namesake of Estes Industries, the highly recognized model rocket production company headquartered in Penrose, Colorado. In 1957, G. Harry Stine and Orville Carlisle founded the first model rocket company, Model Missiles Incorporated, in Denver, Colorado. By 1959, the demand for rocket engines was too great for their production capabilities, so they sought out an external supplier. The Estes family business was the first fireworks company listed in the Denver phone book. Their son, Vern, took it upon himself to find a way to mechanize the production of rocket engines. He assembled a machine, which he named "Mabel," capable of producing a rocket engine every 5.5 seconds. The machine was powered by compressed air and hydraulics, a much safer choice than electricity. Model Missiles, Inc. was forced to fold due to a number of unwise business decisions. Although a model rocketry supplier had disappeared, the market still existed, and Estes formed his own company, Estes Industries, to fill this market. His first kit was the Astron Scout, a simple design that was so small it fit inside the cardboard tubes used for shipping rocket engines. In 1961, Estes moved his company to a facility near Penrose, Colorado. Although he sold his interest in Estes Industries in 1969, he remains active in model rocketry and occasionally attends launch events. He also helped found the National Association of Rocketry and helped establish the Model Rocket Safety Code.
https://en.wikipedia.org/wiki/Henderson%20limit
The Henderson limit is the X-ray dose (energy per unit mass) a cryo-cooled crystal can absorb before the diffraction pattern decays to half of its original intensity. Its value is defined as 2 × 10⁷ Gy (J/kg).

Decay of diffraction patterns with increasing X-ray dose
Although the process is still not fully understood, diffraction patterns of crystals typically decay with X-ray exposure due to a number of processes which non-uniformly and irreversibly modify the molecules that compose the crystal. These modifications induce disorder and thus decrease the intensity of Bragg diffraction. The processes behind these modifications include primary damage via the photoelectric effect, covalent modification by free radicals, oxidation (methionine residues), reduction (disulfide bonds) and decarboxylation (glutamate and aspartate residues).

Practical significance
Although generalizable, the limit is defined in the context of biomolecular X-ray crystallography, where a typical experiment consists of exposing a single frozen crystal of a macromolecule (generally protein, DNA or RNA) to an intense X-ray beam. The diffracted beams are then analyzed to obtain an atomically resolved model of the crystal. Such decay presents a problem for crystallographers, who require that the diffraction intensities decay as little as possible, to maximize the signal-to-noise ratio and determine accurate atomic models that describe the crystal.
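Since the limit is a dose (energy per unit mass), a back-of-envelope sketch can convert it into a maximum exposure time for an assumed absorbed dose rate; the dose rate used below is an invented, illustrative value, not a property of any particular beamline.

```python
# Henderson limit as defined above: 2e7 Gy (J/kg).
HENDERSON_LIMIT_GY = 2e7

def exposure_time_s(dose_rate_gy_per_s):
    # Time until the crystal has absorbed the Henderson limit,
    # assuming a constant absorbed dose rate.
    return HENDERSON_LIMIT_GY / dose_rate_gy_per_s

# With an assumed dose rate of 5e4 Gy/s, the budget is 400 seconds:
print(exposure_time_s(5e4))
```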
https://en.wikipedia.org/wiki/Puzzle%20box
A puzzle box (also called a secret box or trick box) is a box that can be opened only by solving a puzzle. Some require only a simple move and others a series of discoveries. Modern puzzle boxes developed from furniture and jewelry boxes with secret compartments and hidden openings, known since the Renaissance. Puzzle boxes produced for entertainment first appeared in Victorian England in the 19th century and as tourist souvenirs in the Interlaken region in Switzerland and in the Hakone region of Japan at the end of the 19th and the beginning of the 20th century. Boxes with secret openings appeared as souvenirs at other tourist destinations during the early 20th century, including the Amalfi Coast, Madeira, and Sri Lanka, though these were mostly 'one-trick' traditions. Chinese cricket boxes represent another example of intricate boxes with secret openings. Interest in puzzle boxes subsided during and after the two World Wars. The art was revived in the 1980s by three pioneers of this genre: Akio Kamei in Japan, Trevor Wood in England, and Frank Chambers in Ireland. There are currently a number of artists producing puzzle boxes, including the Karakuri group in Japan set up by Akio Kamei, US puzzle box specialists Robert Yarger and Kagen Sound, as well as a number of other designers and puzzle makers who produce puzzle boxes across the globe. Clive Barker's horror novella The Hellbound Heart (later adapted into a film, Hellraiser, followed by numerous original sequels) centers on the fictional Lemarchand's box, a puzzle box which opens the gates to another dimension when manipulated. See also Mechanical puzzle
https://en.wikipedia.org/wiki/Single-molecule%20magnet
A single-molecule magnet (SMM) is a metal-organic compound that has superparamagnetic behavior below a certain blocking temperature at the molecular scale. In this temperature range, an SMM exhibits magnetic hysteresis of purely molecular origin. In contrast to conventional bulk magnets and molecule-based magnets, collective long-range magnetic ordering of magnetic moments is not necessary. Although the term "single-molecule magnet" was first employed in 1996, the first single-molecule magnet, [Mn12O12(OAc)16(H2O)4] (nicknamed "Mn12"), was reported in 1991. This manganese oxide compound features a central Mn(IV)4O4 cube surrounded by a ring of 8 Mn(III) units connected through bridging oxo ligands, and displays slow magnetic relaxation behavior up to temperatures of ca. 4 K. Efforts in this field primarily focus on raising the operating temperatures of single-molecule magnets to liquid nitrogen temperature or room temperature in order to enable applications in magnetic memory. Along with raising the blocking temperature, efforts are being made to develop SMMs with high energy barriers to prevent fast spin reorientation. Recent acceleration in this field of research has resulted in significant enhancements of single-molecule magnet operating temperatures to above 70 K.

Measurement

Arrhenius behavior of magnetic relaxation
Because of single-molecule magnets' magnetic anisotropy, the magnetic moment has usually only two stable orientations antiparallel to each other, separated by an energy barrier. The stable orientations define the molecule's so-called "easy axis". At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction. Identical to a superparamagnet, the mean time between two flips is called the Néel relaxation time and is given by the following Néel–Arrhenius equation:

$$\tau = \tau_0 \exp\!\left(\frac{U_{\mathrm{eff}}}{k_{\mathrm{B}}T}\right)$$

where: τ is the magnetic relaxation time, or the average amount of time that it takes for the molecule's magnetization to randomly flip as a
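The Néel–Arrhenius relation τ = τ₀·exp(U_eff/(k_B·T)) can be evaluated numerically to show how sharply the relaxation time grows as the temperature drops below the barrier. In this hedged sketch the attempt time τ₀ and the effective barrier U_eff are invented, order-of-magnitude inputs, not measured values for any real compound.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def relaxation_time(tau0_s, ueff_joule, temp_k):
    # Néel-Arrhenius law: tau = tau0 * exp(Ueff / (kB * T)).
    return tau0_s * math.exp(ueff_joule / (KB * temp_k))

# Assumed barrier of 60 K (expressed in joules) and attempt time of 1 ns:
ueff = 60 * KB
for t in (2.0, 4.0, 8.0):
    print(f"T = {t} K  tau = {relaxation_time(1e-9, ueff, t):.3e} s")
```

Halving the barrier-to-temperature ratio shortens the relaxation time by many orders of magnitude, which is why blocking temperatures are so sensitive to U_eff.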
https://en.wikipedia.org/wiki/Greek%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. In these contexts, the capital letters and the small letters represent distinct and unrelated entities. Those Greek letters which have the same form as Latin letters are rarely used: capital A, B, E, Z, H, I, K, M, N, O, P, T, Y, X. Small ι, ο and υ are also rarely used, since they closely resemble the Latin letters i, o and u. Sometimes, font variants of Greek letters are used as distinct symbols in mathematics, in particular for ε/ϵ and π/ϖ. The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used. The Bayer designation naming scheme for stars typically uses the first Greek letter, α, for the brightest star in each constellation, and runs through the alphabet before switching to Latin letters. In mathematical finance, the Greeks are the variables denoted by Greek letters used to describe the risk of certain investments. Typography The Greek letter forms used in mathematics are often different from those used in Greek-language text: they are designed to be used in isolation, not connected to other letters, and some use variant forms which are not normally used in current Greek typography. The OpenType font format has the feature tag "mgrk" ("Mathematical Greek") to identify a glyph as representing a Greek letter to be used in mathematical (as opposed to Greek language) contexts. The table below shows a comparison of Greek letters rendered in TeX and HTML. The font used in the TeX rendering is an italic style. This is in line with the convention that variables should be italicized. As Greek letters are more often than not used as variables in mathematical formulas, a Greek letter appearing similar to the TeX rendering is more likely to be encountered in works involving mathematics. Concepts represented by a Greek letter Αα (alpha) repr
https://en.wikipedia.org/wiki/Liang%E2%80%93Barsky%20algorithm
In computer graphics, the Liang–Barsky algorithm (named after You-Dong Liang and Brian A. Barsky) is a line clipping algorithm. The Liang–Barsky algorithm uses the parametric equation of a line and inequalities describing the range of the clipping window to determine the intersections between the line and the clip window. With these intersections it knows which portion of the line should be drawn, which makes the algorithm significantly more efficient than Cohen–Sutherland. The idea of the Liang–Barsky clipping algorithm is to do as much testing as possible before computing line intersections. The algorithm uses the parametric form of a straight line: x = x0 + t·Δx, y = y0 + t·Δy, where Δx = x1 − x0, Δy = y1 − y0 and 0 ≤ t ≤ 1. A point is in the clip window if xmin ≤ x0 + t·Δx ≤ xmax and ymin ≤ y0 + t·Δy ≤ ymax, which can be expressed as the 4 inequalities t·pk ≤ qk for k = 1, 2, 3, 4, where p1 = −Δx, q1 = x0 − xmin (left), p2 = Δx, q2 = xmax − x0 (right), p3 = −Δy, q3 = y0 − ymin (bottom), p4 = Δy, q4 = ymax − y0 (top). To compute the final line segment: A line parallel to a clipping window edge has pk = 0 for that boundary. If for that k, qk < 0, then the line is completely outside and can be eliminated. When pk < 0, the line proceeds outside to inside the clip window, and when pk > 0, the line proceeds inside to outside. For nonzero pk, t = qk/pk gives the intersection point of the line and the window edge (possibly projected). The two actual intersections of the line with the window edges, if they exist, are described by t1 and t2, calculated as follows. For t1, look at boundaries for which pk < 0 (i.e. outside to inside); take t1 to be the largest among {0, qk/pk}. For t2, look at boundaries for which pk > 0 (i.e. inside to outside); take t2 to be the minimum of {1, qk/pk}. If t1 > t2, the line is entirely outside the clip window; if t1 = 0 and t2 = 1, it is entirely inside it.

// Liang–Barsky line-clipping algorithm
#include <iostream>
using namespace std;

// this function gives the maximum
float maxi(float arr[], int n) {
    float m = 0;
    for (int i = 0; i < n; ++i)
        if (m < arr[i]) m = arr[i];
    return m;
}

// this function gives the minimum
float mini(float arr[], int n) {
    float m = 1;
    for (int i = 0; i < n; ++i)
        if (m > arr[i]) m = arr[i];
    return m;
}

void liang_barsky_clipper(float xmin, float ymin, float xmax, float ymax,
                          float x1, float y1, float x2, float y2) {
    float p[4] = { -(x2 - x1), x2 - x1, -(y2 - y1), y2 - y1 };
    float q[4] = { x1 - xmin, xmax - x1, y1 - ymin, ymax - y1 };
    float neg[5] = { 0 }, pos[5] = { 1 };   // seed values clamp t to [0, 1]
    int nneg = 1, npos = 1;

    for (int i = 0; i < 4; ++i) {
        if (p[i] == 0) {                    // line parallel to this edge
            if (q[i] < 0) { cout << "Line is outside the clipping window!\n"; return; }
        } else if (p[i] < 0) {
            neg[nneg++] = q[i] / p[i];      // entering (outside-to-inside) t
        } else {
            pos[npos++] = q[i] / p[i];      // leaving (inside-to-outside) t
        }
    }

    float t1 = maxi(neg, nneg);             // largest entering t
    float t2 = mini(pos, npos);             // smallest leaving t
    if (t1 > t2) { cout << "Line is outside the clipping window!\n"; return; }

    cout << "Clipped segment: (" << x1 + p[1] * t1 << ", " << y1 + p[3] * t1
         << ") to (" << x1 + p[1] * t2 << ", " << y1 + p[3] * t2 << ")\n";
}

int main() {
    // window (0,0)-(100,100); clips the line to (0,50)-(100,50)
    liang_barsky_clipper(0, 0, 100, 100, -50, 50, 150, 50);
    return 0;
}
https://en.wikipedia.org/wiki/Sexuality%20in%20older%20age
Sexuality in older age concerns the sexual drive, sexual activity, interests, orientation, intimacy, self-esteem, behaviors, and overall sexuality of people in middle age and old age, and the social perceptions concerning sexuality in older age. Older people engage in a variety of sexual acts from time to time for a variety of reasons. Desire for intimacy does not disappear with age, yet there are many restrictions placed on the elderly preventing sexual expressions and discouraging the fulfillment of sexual needs. Sexuality in older age is often considered a taboo, yet it is considered to be quite a healthy practice; however, this stigma can affect how older individuals experience their sexuality. While the human body has some limits on the maximum age for reproduction, sexual activity can be performed or experienced well into the later years of life. Physical changes Both male and female libidos tend to decline with increasing age, and women tend to lose their libido faster than men. However, desire for sexual activity is not lost completely. Neither does it decrease for everyone. Menopause, a female biological process, has been linked to a loss of interest in sexual activity and to a desensitization of the genital area. In some cases, vaginal penetration can be painful for older women (see, for example, vaginismus). However, with the advent of hormone replacement therapy (HRT) treatments, the effects of menopause have lessened and women have more opportunities to continue experiencing an active sex life. Similarly, treatments for erectile dysfunction can make it possible for men to enjoy sexual activity again. Health benefits It has been suggested that an active sex life can increase longevity among the elderly. Positive sexual health in older age is slowly becoming more of a commonplace idea with the steady increase in the percentage of the older population. This population percentage increase requires placing more attention on the needs of this age group, inc
https://en.wikipedia.org/wiki/Menaechmus
Menaechmus (, 380–320 BC) was an ancient Greek mathematician, geometer and philosopher born in Alopeconnesus or Prokonnesos in the Thracian Chersonese, who was known for his friendship with the renowned philosopher Plato and for his apparent discovery of conic sections and his solution to the then-long-standing problem of doubling the cube using the parabola and hyperbola. Life and work Menaechmus is remembered by mathematicians for his discovery of the conic sections and his solution to the problem of doubling the cube. Menaechmus likely discovered the conic sections, that is, the ellipse, the parabola, and the hyperbola, as a by-product of his search for the solution to the Delian problem. Menaechmus knew that in a parabola y2 = Lx, where L is a constant called the latus rectum, although he was not aware of the fact that any equation in two unknowns determines a curve. He apparently derived these properties of conic sections and others as well. Using this information it was now possible to find a solution to the problem of the duplication of the cube by solving for the points at which two parabolas intersect, a solution equivalent to solving a cubic equation. There are few direct sources for Menaechmus's work; his work on conic sections is known primarily from an epigram by Eratosthenes, and the accomplishment of his brother (of devising a method to create a square equal in area to a given circle using the quadratrix), Dinostratus, is known solely from the writings of Proclus. Proclus also mentions that Menaechmus was taught by Eudoxus. There is a curious statement by Plutarch to the effect that Plato disapproved of Menaechmus achieving his doubled cube solution with the use of mechanical devices; the proof currently known appears to be purely algebraic. Menaechmus was said to have been the tutor of Alexander the Great; this belief derives from the following anecdote: supposedly, once, when Alexander asked him for a shortcut to understanding geometry, he repli
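Menaechmus's parabola-based duplication of the cube is easy to restate in modern notation. The following is a modern sketch; the labels a and x are ours, not his:

```latex
% Double a cube of side a: intersect the parabolas
x^{2} = a\,y \qquad \text{and} \qquad y^{2} = 2a\,x
% Substituting y = x^{2}/a into the second equation gives
\frac{x^{4}}{a^{2}} = 2a\,x
\;\Longrightarrow\;
x^{3} = 2a^{3}
\;\Longrightarrow\;
x = a\sqrt[3]{2},
% the side of a cube of exactly twice the original volume.
```

Note how the second parabola has the latus-rectum form y² = Lx mentioned above, with L = 2a.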
https://en.wikipedia.org/wiki/Indel
Indel (insertion-deletion) is a molecular biology term for an insertion or deletion of bases in the genome of an organism. Indels ≥ 50 bases in length are classified as structural variants. In coding regions of the genome, unless the length of an indel is a multiple of 3, it will produce a frameshift mutation. For example, a common microindel which results in a frameshift causes Bloom syndrome in the Jewish or Japanese population. Indels can be contrasted with a point mutation. An indel inserts or deletes nucleotides from a sequence, while a point mutation is a form of substitution that replaces one of the nucleotides without changing the overall number in the DNA. Indels can also be contrasted with Tandem Base Mutations (TBM), which may result from fundamentally different mechanisms. A TBM is defined as a substitution at adjacent nucleotides (primarily substitutions at two adjacent nucleotides, but substitutions at three adjacent nucleotides have been observed). Indels, being either insertions, or deletions, can be used as genetic markers in natural populations, especially in phylogenetic studies. It has been shown that genomic regions with multiple indels can also be used for species-identification procedures. An indel change of a single base pair in the coding part of an mRNA results in a frameshift during mRNA translation that could lead to an inappropriate (premature) stop codon in a different frame. Indels that are not multiples of 3 are particularly uncommon in coding regions but relatively common in non-coding regions. There are approximately 192-280 frameshifting indels in each person. Indels are likely to represent between 16% and 25% of all sequence polymorphisms in humans. In fact, in most known genomes, including humans, indel frequency tends to be markedly lower than that of single nucleotide polymorphisms (SNP), except near highly repetitive regions, including homopolymers and microsatellites. The term "indel" has been co-opted in recent years by
https://en.wikipedia.org/wiki/Lists%20of%20microcomputers
For an overview of microcomputers of different kinds, see the following lists of microcomputers: List of early microcomputers List of home computers List of home computers by video hardware Lists of computer hardware
https://en.wikipedia.org/wiki/Ettercap%20%28software%29
Ettercap is a free and open source network security tool for man-in-the-middle attacks on a LAN. It can be used for computer network protocol analysis and security auditing. It runs on various Unix-like operating systems including Linux, Mac OS X, BSD and Solaris, and on Microsoft Windows. It is capable of intercepting traffic on a network segment, capturing passwords, and conducting active eavesdropping against a number of common protocols. Its original developers later founded Hacking Team. Functionality Ettercap works by putting the network interface into promiscuous mode and by ARP poisoning the target machines. Thereby it can act as a 'man in the middle' and unleash various attacks on the victims. Ettercap has plugin support so that the features can be extended by adding new plugins. Features Ettercap supports active and passive dissection of many protocols (including ciphered ones) and provides many features for network and host analysis. Ettercap offers four modes of operation: IP-based: packets are filtered based on IP source and destination. MAC-based: packets are filtered based on MAC address, useful for sniffing connections through a gateway. ARP-based: uses ARP poisoning to sniff on a switched LAN between two hosts (full-duplex). PublicARP-based: uses ARP poisoning to sniff on a switched LAN from a victim host to all other hosts (half-duplex). In addition, the software also offers the following features: Character injection into an established connection: characters can be injected into a server (emulating commands) or to a client (emulating replies) while maintaining a live connection. SSH1 support: the sniffing of a username and password, and even the data of an SSH1 connection. Ettercap is the first software capable of sniffing an SSH connection in full duplex. HTTPS support: the sniffing of HTTP SSL secured data—even when the connection is made through a proxy. Remote traffic through a GRE tunnel: the sniffing of remote traffic through a
https://en.wikipedia.org/wiki/Kaimingjie%20germ%20weapon%20attack
The Kaimingjie germ weapon attack was a Japanese biological warfare strike against Kaimingjie, an area of the port of Ningbo in the Chinese province of Zhejiang, in October 1940, during the Second Sino-Japanese War. The attack was a joint Unit 731 and Unit 1644 endeavour. Bubonic plague was the area of greatest interest to the doctors of these units. Six different plague attacks were conducted in China during the war. Using airdropped wheat, corn, scraps of cotton cloth and sand infested with plague-infected fleas, an outbreak was started that resulted in around a hundred deaths, as well as further infections and mild to severe bodily harm. The area was evacuated and a wall was built around it to enforce a quarantine; it was then burned to the ground to eradicate the disease. A later attack in 1942 on the same area by the two units led to the development of their final delivery system: airdropped ceramic bombs. Some work was conducted during the war with liquid forms of the pathogen agents, but the results were unsatisfactory for the researchers. The attacks were based on the research of Units 731 and 1644. These units researched the effects of various chemicals and pathogens that had the potential to be used as biological weapons on soldiers and civilians. The aim of developing these biological weapons was to further the expansion of the Japanese empire across Asia. These units were arms of the Japanese top-secret biological weapons program, headed by General Shirō Ishii. He devised the plans for the Kaimingjie germ weapon attack and played a major role in the Japanese biological weapons program, specifically through fundraising. Background The Kaimingjie germ weapon attack occurred during the Second Sino-Japanese War (1937–45), a military conflict between the Republic of China and the Empire of Japan. The war broke out as resistance to Japanese expansion and influence on Chinese territory. In 1931, Japan'
https://en.wikipedia.org/wiki/External%20fertilization
External fertilization is a mode of reproduction in which a male organism's sperm fertilizes a female organism's egg outside of the female's body. It is contrasted with internal fertilization, in which sperm are introduced via insemination and then combine with an egg inside the body of a female organism. External fertilization typically occurs in water or a moist area to facilitate the movement of sperm to the egg. The release of eggs and sperm into the water is known as spawning. In motile species, spawning females often travel to a suitable location to release their eggs. However, sessile species are less able to move to spawning locations and must release gametes locally. Among vertebrates, external fertilization is most common in amphibians and fish. Invertebrates utilizing external fertilization are mostly benthic, sessile, or both, including animals such as coral, sea anemones, and tube-dwelling polychaetes. Benthic marine plants also use external fertilization to reproduce. Environmental factors and timing are key challenges to the success of external fertilization. While in the water, the male and female must both release gametes at similar times in order to fertilize the egg. Gametes spawned into the water may also be washed away, eaten, or damaged by external factors. Sexual selection Sexual selection may not seem to occur during external fertilization, but there are ways it actually can. The two types of external fertilizers are nest builders and broadcast spawners. For female nest builders, the main choice is the location of where to lay her eggs. A female can choose a nest close to the male she wants to fertilize her eggs, but there is no guarantee that the preferred male will fertilize any of the eggs. Broadcast spawners have a very weak selection, due to the randomness of releasing gametes. To look into the effect of female choice on external fertilization, an in vitro sperm competition experiment was performed. The results concluded that ther
https://en.wikipedia.org/wiki/Whorl
A whorl ( or ) is an individual circle, oval, volution or equivalent in a whorled pattern, which consists of a spiral or multiple concentric objects (including circles, ovals and arcs). In nature For mollusc whorls, the body whorl in a mollusc shell is the most recently formed whorl of a spiral shell, terminating in the aperture. Artificial objects See also Whirl (disambiguation)
https://en.wikipedia.org/wiki/Wigner%20effect
The Wigner effect (named for its discoverer, Eugene Wigner), also known as the discomposition effect or Wigner's disease, is the displacement of atoms in a solid caused by neutron radiation. Any solid can display the Wigner effect. The effect is of most concern in neutron moderators, such as graphite, intended to reduce the speed of fast neutrons, thereby turning them into thermal neutrons capable of sustaining a nuclear chain reaction involving uranium-235. Cause To cause the Wigner effect, neutrons that collide with the atoms in a crystal structure must have enough energy to displace them from the lattice. This amount (threshold displacement energy) is approximately 25 eV. A neutron's energy can vary widely, but it is not uncommon to have energies up to and exceeding 10 MeV (10,000,000 eV) in the centre of a nuclear reactor. A neutron with a significant amount of energy will create a displacement cascade in a matrix via elastic collisions. For example, a 1 MeV neutron striking graphite will create 900 displacements. Not all displacements will create defects, because some of the struck atoms will find and fill the vacancies that were either small pre-existing voids or vacancies newly formed by the other struck atoms. Frenkel defect The atoms that do not find a vacancy come to rest in non-ideal locations; that is, not along the symmetrical lines of the lattice. These interstitial atoms (or simply "interstitials") and their associated vacancies are a Frenkel defect. Because these atoms are not in the ideal location, they have a Wigner energy associated with them, much as a ball at the top of a hill has gravitational potential energy. When a large number of interstitials have accumulated, they risk releasing all of their energy suddenly, creating a rapid, great increase in temperature. Sudden, unplanned increases in temperature can present a large risk for certain types of nuclear reactors with low operating temperatures. One such release was the indirect caus
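The displacement arithmetic behind the cascade can be made explicit with the standard elastic-collision formula. Treating graphite's carbon atoms as having mass M ≈ 12m is our simplifying assumption:

```latex
% Maximum kinetic energy an elastic collision can transfer from a
% neutron of mass m to a lattice atom of mass M:
E_{\max} = \frac{4mM}{(m+M)^{2}}\,E_{n}
% For carbon in graphite, M \approx 12m:
E_{\max} \approx \frac{48}{169}\,E_{n} \approx 0.28\,E_{n},
% so a 1 MeV neutron can deposit up to roughly 280 keV in a single
% primary knock-on, vastly exceeding the ~25 eV displacement threshold.
```

A single primary knock-on atom therefore carries enough energy to displace many further atoms in turn, which is why one neutron produces an entire cascade rather than a single defect.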
https://en.wikipedia.org/wiki/Roman%20dodecahedron
A Roman dodecahedron or Gallo-Roman dodecahedron is a small hollow object made of copper alloy which has been cast into a regular dodecahedral shape: twelve flat pentagonal faces, each face having a circular hole of varying diameter in the middle, the holes connecting to the hollow center. Roman dodecahedra date from the 2nd to 4th centuries AD and their purpose remains unknown. They rarely show signs of wear, and do not have any inscribed numbers or letters. History The first dodecahedron was found in 1739. Since then, at least 116 similar objects have been found in Italy, Austria, Belgium, France, Germany, Hungary, Luxembourg, the Netherlands, Switzerland and the United Kingdom. Instances range in size from . A Roman icosahedron has also been discovered after having long been misclassified as a dodecahedron. This icosahedron was excavated near Arloff in Germany and is currently on display in the Rheinisches Landesmuseum in Bonn. Purpose No mention of dodecahedra has been found in contemporary accounts or pictures of the time. Speculative uses include as a survey instrument for estimating distances to (or sizes of) distant objects, though this is questioned as there are no markings to indicate that they would be a mathematical instrument; as spool knitting devices for making gloves, though the earliest known reference to spool knitting is from 1535; as part of a child's toy, or for decorative purposes. Contemporary craftspeople and YouTubers have successfully created knit gloves using 3-D printed reconstructions of the dodecahedra, demonstrating their functionality for this purpose, and suggesting this as the historic use case. Several dodecahedra were found in coin hoards, providing evidence that their owners either considered them valuable objects, or believed their only use was connected with coins. It has been suggested that they might have been religious artifacts, or even fortune-telling devices. This latter speculation is based on the fact that mo
https://en.wikipedia.org/wiki/Hammerhead%20Networks
Hammerhead Networks was a computer networking company based in Billerica, Massachusetts. It produced software solutions for the delivery of Internet Protocol service features. History It was founded in April 2000 by Eddie Sullivan, who also served as its CEO. It was acquired by Cisco Systems on May 1, 2002, in a stock transaction worth up to US$173M. Cisco had previously owned a minority interest. It had 85 employees at the time of the acquisition.
https://en.wikipedia.org/wiki/Eisenstein%27s%20theorem
In mathematics, Eisenstein's theorem, named after the German mathematician Gotthold Eisenstein, applies to the coefficients of any power series which is an algebraic function with rational number coefficients. Through the theorem, it is readily demonstrable, for example, that the exponential function must be a transcendental function. Theorem Suppose that f(t) = a₀ + a₁t + a₂t² + ⋯ is a formal power series with rational coefficients aₙ, which has a non-zero radius of convergence in the complex plane, and within it represents an analytic function that is in fact an algebraic function. Then Eisenstein's theorem states that there exists a non-zero integer A, such that the numbers Aⁿaₙ are all integers. This has an interpretation in terms of p-adic numbers: with an appropriate extension of the idea, the p-adic radius of convergence of the series is at least 1, for almost all p (i.e., the primes outside a finite set S). In fact that statement is a little weaker, in that it disregards any initial partial sum of the series, in a way that may vary according to p. For the other primes the radius is non-zero. History Eisenstein's original paper is the short communication Über eine allgemeine Eigenschaft der Reihen-Entwicklungen aller algebraischen Functionen (1852), reproduced in Mathematische Gesammelte Werke, Band II, Chelsea Publishing Co., New York, 1975, p. 765–767. More recently, many authors have investigated precise and effective bounds quantifying the above "almost all". See, e.g., Sections 11.4 and 11.55 of the book by E. Bombieri & W. Gubler.
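The transcendence of the exponential function mentioned above follows in one step; a sketch:

```latex
% The exponential series has a_{n} = 1/n!:
e^{t} = \sum_{n=0}^{\infty} \frac{t^{n}}{n!}
% If e^{t} were algebraic, the theorem would supply a non-zero integer A
% with A^{n}a_{n} = A^{n}/n! an integer for every n.  But n! eventually
% outgrows |A|^{n}, so for all sufficiently large n
0 < \frac{|A|^{n}}{n!} < 1,
% leaving no room for a non-zero integer; hence e^{t} is transcendental.
```

The same denominator-growth argument rules out any series whose coefficients have denominators growing faster than geometrically.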
https://en.wikipedia.org/wiki/Zacusc%C4%83
Zacuscă () is a vegetable spread popular in Romania and Moldova. Similar spreads are found in other countries in the Balkan region, and bordering regions. Ingredients The main ingredients are roasted eggplant, sauteed onions, tomato paste, and roasted Paprika Pepper (Romanian pepper called gogoșari). Some add mushrooms, carrots, or celery. Bay leaves are added as spice, as well as other ingredients (oil, salt, and pepper). Traditionally, a family will cook a large quantity of it after the fall harvest and preserve it through canning. Use Zacuscă can be eaten as a relish or spread, typically on bread. It is said to improve in taste after some months of maturing but must be used within days of opening. Although traditionally prepared at home, it is also commercially available. Some Bulgarian and Middle Eastern brands are available in the United States. In the Orthodox Christian majority countries, it is sometimes eaten during fasting seasons due to the absence of meat, eggs or dairy products. Etymology The word zacuscă is of Slavic origin which means simply "appetizer", "breakfast" or "snack" (see zakuska) See also Ajvar, pindjur and ljutenica, similar spreads in Balkan cuisine Kyopolou, a similar Bulgarian dish Biber salçası, a Turkish paste made from red peppers alone List of eggplant dishes List of spreads
https://en.wikipedia.org/wiki/Logos%20Dictionary
Logos Dictionary is a large multilingual online dictionary provided by Logos Group, a European translation company. It was started in 1995 and, as of 2005, contains over 7 million terms in over 200 languages, some of them minority languages such as Breton, Leonese, Scots or Venetian. The dictionary offers a variety of search options, and requires free registration in order to add or update translations. There is also a Logos Dictionary for Children, which mainly contains vocabulary for children, as well as quotations in several languages. See also Endangered languages External links Logos dictionary Online dictionaries Leonese language Breton language Scots language Venetian language
https://en.wikipedia.org/wiki/HyperWRT
HyperWRT is a GPL firmware project for the Linksys WRT54G and WRT54GS wireless routers based on the stock Linksys firmware. The original goal of the HyperWRT project was to add a set of features—such as power boost—to the latest Linux-based Linksys firmware, extending its possibilities but staying close to the official firmware. Over time, it continued to be updated with newer Linksys firmware, and added many more features typically found in enterprise routing equipment. HyperWRT is no longer maintained and has been succeeded by Tomato. History The original HyperWRT project was started in 2004 by Timothy Jans (a.k.a. Avenger 2.0), with continued development into early 2005. Another programmer called Rupan then continued HyperWRT development by integrating newer Linksys code as it was released. Later in 2005, two developers called tofu and Thibor picked up HyperWRT development with HyperWRT +tofu for the WRT54G and HyperWRT Thibor for the WRT54GS. Both developers frequently collaborated and added features from each other's releases, and both developed WRTSL54GS versions of their firmware. After February 2006, tofu discontinued development and his code was incorporated into HyperWRT Thibor. HyperWRT Thibor15c (July 2006) was the last version of HyperWRT and was compatible with the WRT54G (v1-v4), WRT54GL (v1-v1.1), WRT54GS (v1-v4), and WRTSL54GS (later unfinished beta 17rc3 released February 2008). See also List of wireless router firmware projects
https://en.wikipedia.org/wiki/Environmental%20consulting
Environmental consulting is often a form of compliance consulting, in which the consultant ensures that the client maintains an appropriate measure of compliance with environmental regulations. Sustainability consulting is a specialized field that offers guidance and solutions for businesses seeking to operate in an environmentally responsible and sustainable way. The goal of sustainability consulting is to help organizations reduce their environmental impact while maintaining profitability and social responsibility. There are many types of environmental consultants, but the two main groups are those who enter the field from the industry side and those who enter it from the environmentalist side. Environmental consultants work in a very wide variety of fields, from providing construction services such as asbestos hazard assessments or lead hazard assessments to conducting due diligence reports that help customers avoid possible sanctions. Consultancies may generalize across a wide range of disciplines or specialize in certain areas of environmental consultancy, such as waste management. Environmental consultants usually have an undergraduate degree, and sometimes a master's degree, in Environmental Engineering, Environmental Science, Environmental Studies, Geology, or some other science discipline. They should have deep knowledge of environmental regulations, with which they can advise clients in private industry or public government institutions and help them steer clear of possible fines, legal action or misguided transactions. Environmental consulting spans a wide spectrum of industry. The most basic industry in which environmental consulting remains prominent is the commercial real estate market. Many commercial lenders rely on both small and large environmental firms. Many commercial lenders will not lend money to borrowers if the value of the property or the borrower's capital does not exceed the worth of the land. If an environmental problem is discovered p
https://en.wikipedia.org/wiki/CosmicOS
CosmicOS is a self-contained message designed to be understood primarily by treating it as a computer program and executing it. It is inspired by Hans Freudenthal's Lincos and resembles the programming language Scheme in many ways. The message is written with only four basic symbols representing the binary digits one and zero and open and close brackets. Numbers are represented as a string of binary digits between a pair of brackets and expressions are represented as a string of numbers between brackets. Identifiers for operations are arbitrarily assigned numbers and their functions can be defined within the message itself. Self-contained messages are of interest for CETI research, but there is much difference of opinion over the most appropriate encoding and broadcast medium to use. CosmicOS is released in modular form, so that the basic message can be adapted to a particular concrete instantiation. The message is released under the GPL licence. See also Hello world
https://en.wikipedia.org/wiki/Moyal%20product
In mathematics, the Moyal product (after José Enrique Moyal; also called the star product or Weyl–Groenewold product, after Hermann Weyl and Hilbrand J. Groenewold) is an example of a phase-space star product. It is an associative, non-commutative product, ★, on the functions on ℝ²ⁿ, equipped with its Poisson bracket (with a generalization to symplectic manifolds, described below). It is a special case of the ★-product of the "algebra of symbols" of a universal enveloping algebra. Historical comments The Moyal product is named after José Enrique Moyal, but is also sometimes called the Weyl–Groenewold product as it was introduced by H. J. Groenewold in his 1946 doctoral dissertation, in a trenchant appreciation of the Weyl correspondence. Moyal actually appears not to know about the product in his celebrated article and was crucially lacking it in his legendary correspondence with Dirac, as illustrated in his biography. The popular naming after Moyal appears to have emerged only in the 1970s, in homage to his flat phase-space quantization picture. Definition The product for smooth functions f and g on ℝ²ⁿ takes the form f ★ g = Σₙ ħⁿ Cₙ(f, g), where each Cₙ is a certain bidifferential operator of order 2n characterized by the following properties (see below for an explicit formula): Deformation of the pointwise product, C₀(f, g) = f·g (implicit in the formula above). Deformation of the Poisson bracket, called the Moyal bracket: (f ★ g − g ★ f)/(iħ) → {f, g} as ħ → 0. The 1 of the undeformed algebra is also the identity in the new algebra: f ★ 1 = 1 ★ f = f. The complex conjugate is an antilinear antiautomorphism: (f ★ g)* = g* ★ f*. Note that, if one wishes to take functions valued in the real numbers, then an alternative version eliminates the i in the second condition and eliminates the fourth condition. If one restricts to polynomial functions, the above algebra is isomorphic to the Weyl algebra Aₙ, and the two offer alternative realizations of the Weyl map of the space of polynomials in n variables (or the symmetric algebra of a vector space of dimension 2n). To provide an explicit formu
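As a minimal concrete check of the deformation properties, take one degree of freedom and the standard first-order term (iħ/2){f, g} of the expansion:

```latex
% First-order expansion of the star product:
f \star g = fg + \frac{i\hbar}{2}\{f, g\} + O(\hbar^{2})
% For the linear functions x and p all higher-order terms vanish:
x \star p = xp + \frac{i\hbar}{2}, \qquad p \star x = xp - \frac{i\hbar}{2}
% so the star commutator reproduces the canonical commutation relation:
x \star p - p \star x = i\hbar
```

This is the sense in which the star commutator deforms the Poisson bracket {x, p} = 1 into the quantum commutator.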
https://en.wikipedia.org/wiki/Alveolus
Alveolus (; pl. alveoli, adj. alveolar) is a general anatomical term for a concave cavity or pit. Uses in anatomy and zoology Pulmonary alveolus, an air sac in the lungs Alveolar cell or pneumocyte Alveolar duct Alveolar macrophage Mammary alveolus, a milk sac in the mammary glands Alveolar gland Dental alveolus, also known as "tooth socket", a socket in the jaw that holds the roots of teeth Alveolar ridge, the jaw structure that contains the dental alveoli Alveolar canals Alveolar process Arteries: Superior alveolar artery (disambiguation) Anterior superior alveolar arteries Posterior superior alveolar artery Inferior alveolar artery Nerves: Anterior superior alveolar nerve Middle superior alveolar nerve Inferior alveolar nerve Uses in botany, microbiology and related disciplines Surface cavities or pits, such as on the stem of Myrmecodia species Pits on honeycombed surfaces such as receptacles of many angiosperms Pits on the fruiting bodies of fungi such as Boletus or the ascocarps of fungi such as typical ascomycetes Pits on the valves of the tests of many diatoms Membrane supporting vesicles of the alveolates Uses in linguistics Alveolar consonant, a linguistic vocalization depending upon touching tongue to alveolar ridge Alveolar stop See also Alveolar soft part sarcoma, a very rare type of soft-tissue sarcoma, Acinus, considered by some (but not all) sources to be synonymous with Alveolus Human anatomy Animal anatomy
https://en.wikipedia.org/wiki/Critical%20period
In developmental psychology and developmental biology, a critical period is a maturational stage in the lifespan of an organism during which the nervous system is especially sensitive to certain environmental stimuli. If, for some reason, the organism does not receive the appropriate stimulus during this "critical period" to learn a given skill or trait, it may be difficult, ultimately less successful, or even impossible, to develop certain associated functions later in life. Functions that are indispensable to an organism's survival, such as vision, are particularly likely to develop during critical periods. "Critical period" also relates to the ability to acquire one's first language. Researchers found that people who passed the "critical period" would not acquire their first language fluently. Some researchers differentiate between 'strong critical periods' and 'weak critical periods' (a.k.a. 'sensitive' periods) — defining 'weak critical periods' / 'sensitive periods' as more extended periods, after which learning is still possible. Other researchers consider these the same phenomenon. For example, the critical period for the development of a human child's binocular vision is thought to be between three and eight months, with sensitivity to damage extending up to at least three years of age. Further critical periods have been identified for the development of hearing and the vestibular system. Strong versus weak critical periods Examples of strong critical periods include monocular deprivation, filial imprinting, monaural occlusion, and Prefrontal Synthesis acquisition. These traits cannot be acquired after the end of the critical period. Examples of weak critical periods include phoneme tuning, grammar processing, articulation control, vocabulary acquisition, music training, auditory processing, sport training, and many other traits that can be significantly improved by training at any age. Critical period mechanisms Critical period opening Critical per
https://en.wikipedia.org/wiki/Rishon%20model
The Harari–Shupe preon model (also known as the rishon model, RM) is the earliest effort to develop a preon model to explain the phenomena appearing in the Standard Model (SM) of particle physics. It was first developed independently by Haim Harari and by Michael A. Shupe, and later expanded by Harari and his then-student Nathan Seiberg. The model The model has two kinds of fundamental particles called rishons (which means "primary" in Hebrew). They are T ("Third", since it has an electric charge of +1/3 e, or Tohu, which means "unformed" in Hebrew) and V ("Vanishes", since it is electrically neutral, or Vohu, which means "void" in Hebrew). All leptons and all flavours of quarks are three-rishon ordered triplets. These groups of three rishons have spin-1/2. They are as follows: TTT = positron (anti-electron); VVV = electron neutrino; TTV, TVT and VTT = three colours of up quarks; VVT, VTV and TVV = three colours of down antiquarks. Each rishon has a corresponding antiparticle, denoted here with an overbar. Hence: T̄T̄T̄ = electron; V̄V̄V̄ = electron antineutrino; T̄T̄V̄, T̄V̄T̄, V̄T̄T̄ = three colours of up antiquarks; V̄V̄T̄, V̄T̄V̄, T̄V̄V̄ = three colours of down quarks. The W+ boson = TTTVVV; the W− boson = T̄T̄T̄V̄V̄V̄. Note that: Matter and antimatter are equally abundant in nature in the RM. This still leaves open the question of why T̄T̄T̄, VVV, and TTV etc. are common whereas TTT, TVV, and V̄V̄V̄ etc. are rare. Higher generation leptons and quarks are presumed to be excited states of first generation leptons and quarks, but those states are not specified. The simple RM does not provide an explanation of the mass-spectrum of the leptons and quarks. Baryon number (B) and lepton number (L) are not conserved, but the quantity B − L is conserved. A baryon number violating process (such as proton decay) in the model would proceed by rearranging rishons and antirishons among the composite states. In the expanded Harari–Seiberg version the rishons possess color and hypercolor, explaining why the only composites are the observed quarks and leptons. Under certain assumptions, it is possible to show that the model allows exactly for three gener
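The composition rules above can be checked mechanically: a composite's electric charge is simply the sum of its rishons' charges (T = +1/3 e, V = 0, with antirishons negated). The small script below is illustrative only and not part of the model's formalism; lower-case letters stand here for antirishons.

```python
from fractions import Fraction

# Rishon charges in units of e; lower case denotes the antirishon.
CHARGE = {"T": Fraction(1, 3), "V": Fraction(0),
          "t": Fraction(-1, 3), "v": Fraction(0)}

def charge(composite: str) -> Fraction:
    """Electric charge of an ordered string of rishons/antirishons."""
    return sum((CHARGE[r] for r in composite), Fraction(0))

assert charge("TTT") == 1                # positron
assert charge("VVV") == 0                # electron neutrino
assert charge("TTV") == Fraction(2, 3)   # an up-quark colour state
assert charge("vvt") == Fraction(-1, 3)  # a down-quark colour state
assert charge("TTTVVV") == 1             # the W+ boson
```

Exact rational arithmetic (`fractions.Fraction`) avoids floating-point noise in the thirds.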
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare
Windows Live OneCare (previously Windows OneCare Live, codenamed A1) was a computer security and performance enhancement service developed by Microsoft for Windows. A core technology of OneCare was the multi-platform RAV (Reliable Anti-virus), which Microsoft purchased from GeCAD Software Srl in 2003, but subsequently discontinued. The software was available as an annual paid subscription, which could be used on up to three computers. On 18 November 2008, Microsoft announced that Windows Live OneCare would be discontinued on 30 June 2009 and that it would instead offer users a new free anti-malware suite called Microsoft Security Essentials, to be made available before then. However, virus definitions and support for OneCare would continue until a user's subscription expired. In the end-of-life announcement, Microsoft noted that Windows Live OneCare would not be upgraded to work with Windows 7 and would also not work in Windows XP Mode. History Windows Live OneCare entered a beta state in the summer of 2005. The managed beta program was launched before the public beta, and was located on BetaPlace, Microsoft's former beta delivery system. On 31 May 2006, Windows Live OneCare made its official debut in retail stores in the United States. The beta version of Windows Live OneCare 1.5 was released in early October 2006 by Microsoft. Version 1.5 was released to manufacturing on 3 January 2007 and was made available to the public on 30 January 2007. On 4 July 2007, beta testing started for version 2.0, and the final version was released on 16 November 2007. Microsoft acquired Komoku on 20 March 2008 and merged its computer security software into Windows Live OneCare. Windows Live OneCare 2.5 (build 2.5.2900.28) final was released on 3 July 2008. On the same day, Microsoft also released Windows Live OneCare for Server 2.5. Features Windows Live OneCare featured integrated anti-virus, personal firewall, and backup utilities, and a tune-up utility with the integrated functionality o
https://en.wikipedia.org/wiki/Plasma%20acceleration
Plasma acceleration is a technique for accelerating charged particles, such as electrons, positrons, and ions, using the electric field associated with an electron plasma wave or other high-gradient plasma structures (such as shock and sheath fields). The plasma acceleration structures are created either using ultra-short laser pulses or energetic particle beams that are matched to the plasma parameters. These techniques offer a way to build high-performance particle accelerators of much smaller size than conventional devices. The basic concepts of plasma acceleration and its possibilities were originally conceived by Toshiki Tajima and John M. Dawson of UCLA in 1979. The initial experimental designs for a "wakefield" accelerator were conceived at UCLA by Chandrashekhar J. Joshi et al. Current experimental devices show accelerating gradients several orders of magnitude better than conventional particle accelerators over very short distances, and about one order of magnitude better (1 GeV/m vs 0.1 GeV/m for an RF accelerator) at the one meter scale. Plasma accelerators hold immense promise for the development of affordable and compact accelerators for various applications ranging from high energy physics to medical and industrial applications. Medical applications include betatron and free-electron light sources for diagnostics or radiation therapy, and proton sources for hadron therapy. Plasma accelerators generally use wakefields generated by plasma density waves. However, plasma accelerators can operate in many different regimes depending upon the characteristics of the plasmas used. For example, an experimental laser plasma accelerator at Lawrence Berkeley National Laboratory accelerates electrons to 1 GeV over about 3.3 cm (an average acceleration of about 5.4×10^20 g_n), whereas the conventional accelerator at SLAC (the highest electron energy accelerator) requires 64 m to reach the same energy. Similarly, using plasmas an energy gain of more than 40 GeV was achieved using the SLAC SLC beam (42 GeV) in just 85 cm usi
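The gradient gap follows directly from the figures quoted above; a quick back-of-the-envelope check, using only those numbers:

```python
# Back-of-the-envelope accelerating gradients from the figures in the text.
energy_eV = 1e9          # 1 GeV energy gain
plasma_len_m = 0.033     # laser-plasma stage at LBNL, ~3.3 cm
rf_len_m = 64.0          # length quoted for the conventional SLAC machine

plasma_grad = energy_eV / plasma_len_m   # eV per metre
rf_grad = energy_eV / rf_len_m

print(f"plasma stage: {plasma_grad / 1e9:.1f} GeV/m")   # ~30 GeV/m
print(f"RF linac:     {rf_grad / 1e9:.3f} GeV/m")
print(f"ratio:        ~{plasma_grad / rf_grad:.0f}x")
```

The roughly three-orders-of-magnitude ratio over short distances is consistent with the "several orders of magnitude" claim in the text.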
https://en.wikipedia.org/wiki/Account%20aggregation
Account aggregation, sometimes also known as financial data aggregation, is a method of compiling information from different accounts, which may include bank accounts, credit card accounts, investment accounts, and other consumer or business accounts, into a single place. This may be provided through connecting via an API to the financial institution, or through "screen scraping", where a user provides the requisite account-access information for an automated system to gather and compile the information into a single page. The security of the account-access details as well as the financial information is key to users having confidence in the service. The database either resides in a web-based application or in client-side software. While such services are primarily designed to aggregate financial information, they sometimes also display other things such as the contents of e-mail boxes and news headlines. Account Aggregator System An account aggregator system is a data-sharing system which helps lenders to conduct an easy and speedy assessment of the creditworthiness of a borrower. Components of the Account Aggregator system The Account Aggregator system essentially has three important components: Financial Information Provider (FIP) Financial Information User (FIU) Account Aggregators A Financial Information Provider holds the necessary data about the customer, which it provides to the Financial Information Users. The Financial Information Provider can be a bank, a Non-Banking Financial Company (NBFC), a mutual fund, an insurance repository, a pension fund repository, or even a wealth manager. The account aggregators act as intermediaries by collecting data from the FIPs that hold the customer's financial data and sharing it with FIUs such as lending banks/agencies that provide financial services. History The ideas around account aggregation first emerged in the mid-1990s when banks started releasing Internet banking applications. In the late
https://en.wikipedia.org/wiki/Hjalmar%20Mellin
Robert Hjalmar Mellin (19 June 1854 – 5 April 1933) was a Finnish mathematician and function theorist. Biography Mellin studied at the University of Helsinki and later in Berlin under Karl Weierstrass. He is chiefly remembered as the developer of the integral transform known as the Mellin transform. He studied the related gamma functions, hypergeometric functions, Dirichlet series, and the Riemann ζ function. He was appointed professor at the Polytechnic Institute in Helsinki, which later became Helsinki University of Technology, with Mellin as its first rector. Later in his career Mellin also became known for his critical opposition to the theory of relativity; he published several papers in which he argued against the theory from a chiefly philosophical standpoint. In his private life he was known as an outspoken Fennoman: a proponent of adopting Finnish as the language of state and culture in the Grand Duchy of Finland, in preference to Swedish, which had predominantly been used hitherto. See also Mellin inversion theorem Mellin–Barnes integral Poisson–Mellin–Newton cycle External links Hjalmar Mellin's obituary, written by Ernst Lindelöf 1854 births 1933 deaths People from Liminka 20th-century Finnish mathematicians Relativity critics Academic staff of the Helsinki University of Technology 19th-century Finnish mathematicians Mathematicians from the Russian Empire People from the Grand Duchy of Finland
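For reference, the transform bearing Mellin's name and its inversion are (a standard definition, stated here as background):

```latex
\mathcal{M}\{f\}(s) = \int_0^{\infty} x^{s-1} f(x)\,\mathrm{d}x,
\qquad
f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} x^{-s}\,\mathcal{M}\{f\}(s)\,\mathrm{d}s,
```

valid for suitable f, with c taken inside the strip where the transform is analytic. The classical identity ∫₀^∞ x^{s−1}/(e^x − 1) dx = Γ(s) ζ(s) ties the transform to the gamma and ζ functions mentioned among Mellin's interests.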
https://en.wikipedia.org/wiki/Nest%20box
A nest box, also spelled nestbox, is a man-made enclosure provided for animals to nest in. Nest boxes are most frequently utilized for birds, in which case they are also called birdhouses or a birdbox/bird box, but some mammals such as bats may also use them. Placing nestboxes or roosting boxes may also be used to help maintain populations of particular species in an area. Nest boxes have been used since Roman times to capture birds for meat. The use of nest boxes for other purposes began in the mid-18th century, and the naturalist August von Berlepsch was the first to produce nest boxes on a commercial scale. Nest boxes are receiving more attention because industrialization, urban growth, modern construction methods, deforestation and other human activities since the mid-20th century have caused severe declines in birds' natural habitats, introducing hurdles to breeding. Nest boxes can help prevent bird extinction, as was shown in the case of scarlet macaws in the Peruvian Amazon. Construction General construction Nest boxes are usually wooden, although the purple martin will nest in metal. Some boxes are made from a mixture of wood and concrete, called woodcrete. Ceramic and plastic nestboxes are not suitable. Nest boxes should be made from untreated wood with an overhanging, sloped roof, a recessed floor, drainage and ventilation holes, a way to access the interior for monitoring and cleaning, and no outside perches which could assist predators. Boxes may either have an entrance hole or be open-fronted. Some nest boxes can be highly decorated and complex, sometimes mimicking human houses or other structures. They may also contain nest box cameras so that use of, and activity within, the box can be monitored. Bird nest box construction The diameter of the opening in a nest box has a very strong influence on the species of birds that will use the box. Many small birds select boxes with a hole only just large enough for an adult bird to pass through
https://en.wikipedia.org/wiki/Neuroplasticity
Neuroplasticity, also known as neural plasticity, or brain plasticity, is the ability of neural networks in the brain to change through growth and reorganization. It is when the brain is rewired to function in some way that differs from how it previously functioned. These changes range from individual neuron pathways making new connections, to systematic adjustments like cortical remapping or neural oscillation. Other forms of neuroplasticity include homologous area adaptation, cross modal reassignment, map expansion, and compensatory masquerade. Examples of neuroplasticity include circuit and network changes that result from learning a new ability, information acquisition, environmental influences, practice, and psychological stress. Neuroplasticity was once thought by neuroscientists to manifest only during childhood, but research in the latter half of the 20th century showed that many aspects of the brain can be altered (or are "plastic") even through adulthood. However, the developing brain exhibits a higher degree of plasticity than the adult brain. Activity-dependent plasticity can have significant implications for healthy development, learning, memory, and recovery from brain damage. History Origin The term plasticity was first applied to behavior in 1890 by William James in The Principles of Psychology where the term was used to describe "a structure weak enough to yield to an influence, but strong enough not to yield all at once". The first person to use the term neural plasticity appears to have been the Polish neuroscientist Jerzy Konorski. One of the first experiments providing evidence for the neuroplasticity phenomenon was conducted in 1793, by Italian anatomist Michele Vicenzo Malacarne, who described experiments in which he paired animals, trained one of the pair extensively for years, and then dissected both. Malacarne discovered that the cerebellums of the trained animals were substantially larger than the cerebellum of the untrained animals.
https://en.wikipedia.org/wiki/Ally%20Sloper
Alexander "Ally" Sloper is the eponymous fictional character of the British comic strip Ally Sloper. First appearing in 1867, he is considered one of the earliest comic strip characters and he is regarded as the first recurring character in comics. Red-nosed and blustery, an archetypal lazy schemer often found "sloping" through alleys to avoid his landlord and other creditors, he was created for the British magazine Judy by writer and fledgling artist Charles H. Ross, and inked and later fully illustrated by his French wife Émilie de Tessier under the pseudonym "Marie Duval" (or "Marie Du Val"; sources differ). The strips, which used text narrative beneath unbordered panels, premiered in the 14 August 1867 issue of Judy, a humour-magazine rival of the famous Punch. The highly popular character was spun off into his own comic, Ally Sloper's Half Holiday, in 1884. Artists The first illustrations were by Ross, then Tessier took over. When publisher Gilbert Dalziel re-launched the cartoon as Ally Sloper's Half Holiday, in 1884, Sloper was illustrated by William Baxter until his death in 1888. He was succeeded by W. Fletcher Thomas, who continued the illustrations until approximately 1899, when the publisher invited C. H. Chapman to illustrate the series until it ended in 1916. Films The highly popular character was spun off into his own comic, Ally Sloper's Half Holiday in 1884. Sloper appeared in three feature films and a wide array of merchandising from pocket watches to door stops. His popularity and influence led to his being used on occasion as a propaganda tool for the British government's policies. Sloper has also been cited as an influence on W. C. Fields and Charlie Chaplin's "little tramp" character and its imitators. The arrival of the First World War in 1914 saw severe paper rationing, and in 1916 the Half Holiday comic ceased production. Attempts after the war to revive Sloper proved short-lived, as Sloper was a somewhat stereotypical Victorian and Ed
https://en.wikipedia.org/wiki/Heat%20capacity%20ratio
In thermal physics and thermodynamics, the heat capacity ratio, also known as the adiabatic index, the ratio of specific heats, or Laplace's coefficient, is the ratio of the heat capacity at constant pressure (C_P) to the heat capacity at constant volume (C_V). It is sometimes also known as the isentropic expansion factor and is denoted by γ (gamma) for an ideal gas or κ (kappa), the isentropic exponent, for a real gas. The symbol γ is used by aerospace and chemical engineers. γ = C_P/C_V = c_P/c_V, where C is the heat capacity, C_m the molar heat capacity (heat capacity per mole), and c the specific heat capacity (heat capacity per unit mass) of a gas. The suffixes P and V refer to constant-pressure and constant-volume conditions respectively. The heat capacity ratio is important for its applications in thermodynamical reversible processes, especially involving ideal gases; the speed of sound depends on this factor. Thought experiment To understand this relation, consider the following thought experiment. A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped. The amount of energy added equals C_V ΔT, with ΔT representing the change in temperature. The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat (adiabatic expansion). Doing this work, the air inside the cylinder will cool to below the target temperature. To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked
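The "about 40% more" figure follows from γ for air, which is well modelled as a diatomic ideal gas. A minimal numerical sketch, assuming only the ideal-gas relations:

```python
# For a diatomic ideal gas (a good model for air): Cv = (5/2) R.
R = 8.314            # molar gas constant, J/(mol*K)
Cv = 2.5 * R         # molar heat capacity at constant volume
Cp = Cv + R          # Mayer's relation for an ideal gas: Cp = Cv + R
gamma = Cp / Cv      # heat capacity ratio

extra = Cp / Cv - 1  # extra heat at constant pressure vs constant volume
print(f"gamma = {gamma:.2f}")        # 1.40
print(f"extra heat = {extra:.0%}")   # 40%, the figure in the thought experiment
```

The extra heat at constant pressure goes into the expansion work done by the piston, which is why C_P exceeds C_V.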
https://en.wikipedia.org/wiki/Local%20Futures
Local Futures (formerly the International Society for Ecology and Culture) is a non-profit organization whose purpose is to raise awareness about what it identifies as the root causes of contemporary social, environmental, and economic crises. The group argues that focusing on single issues – saving whales, blocking nuclear power plants, feeding the hungry, etc. – only overwhelms people and ultimately fails as a strategy. Instead, Local Futures believes that the focus must be on changing the fundamental forces that create or exacerbate all of these problems. Among those forces are economic globalization, corporate power, and conventional notions of technological and economic "progress". As a solution, Local Futures promotes economic localization and other locally based alternatives to the global consumer culture, as a means to protect both biological and cultural diversity. The group is also associated with the concept of Counter-development. Local Futures is the parent organization of a program in Ladakh, or "Little Tibet", begun in 1975. The Ladakh Project includes a wide range of hands-on activities, including a renewable-energy program, and has won international recognition for countering the negative effects of conventional development in that region. Local Futures' founder and Director, Helena Norberg-Hodge, shared the 1986 Right Livelihood Award. Local Futures has produced two award-winning documentary films: Ancient Futures: Learning from Ladakh (1993) and The Economics of Happiness (2011). An early advocate for local food (and one of the few organizations to look at local food from a global perspective), Local Futures also produced the book Bringing the Food Economy Home: Local Alternatives to Global Agribusiness (Kumarian Press, 2002), as well as Local Is Our Future (2019), and a range of other resources for local food activists. The group has also organized a series of Economics of Happiness conferences, including recent events in Berkeley, Californi
https://en.wikipedia.org/wiki/Yerevan%20TV%20Tower
Yerevan TV Tower (Yerevani herustaashtarak) is a lattice tower built in 1977 on Nork Hill near downtown Yerevan, Armenia. It is the tallest structure in the Caucasus, the fourth-tallest tower in Western Asia (the Milad Tower in Tehran being the tallest), the sixth-tallest free-standing lattice tower, and the thirty-eighth-tallest tower in the world. Construction In the late 1960s it was decided to replace the existing TV tower in Yerevan due to its insufficient capacity. The preparatory work on the Yerevan TV tower and the Tbilisi Tower, which also needed replacement, started simultaneously at the Ukrainian Institute for Steel Structures. The project leaders were Isaak Zatulovsky, Anatoli Perelmuter, Mark Grinberg, Yuri Shevernitsky, and Boris But. Tbilisi and Yerevan were the first among the Soviet capitals where such towers were built. The same group later worked on the project of the Kyiv TV Tower (though it was finished earlier than the one in Yerevan). The Tbilisi Tower is lower, lighter and slightly tilted compared to the Yerevan Tower. Construction started in 1974 and finished 3 years later. The steel was shipped from the Rustavi Metallurgical Plant in Georgia. The old tower was moved to Leninakan, present-day Gyumri, where it is still functional today. Structure The structure of the TV tower in Yerevan can be divided into three parts: base, body, and antenna. The base is a truss-steel tetrahedron that at the height of 71 metres becomes a closed-platform observation deck and technical offices. On the roof of this lower-tower basket are radio antennas. The triangular truss-steel structure continues to the height of 137 metres, where a two-storey 18-metre structure in the shape of an inverted truncated cone is situated. The lattice grid structure continues another 30 metres. In the centre of this structure there is a concrete vertical pipe structure with a diameter of 4.2 metres, in which, among other things, the lift-shaft is hidden. The pipe projecting from the basement continues as
https://en.wikipedia.org/wiki/Motion%20control
Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. Motion control systems are extensively used in a variety of fields for automation purposes, including precision engineering, micromanufacturing, biotechnology, and nanotechnology. The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open-loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know whether the desired motion was actually achieved. Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, the system becomes closed loop. Typically the position or velocity of a machine is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries. Motion control encompasses every technology related to the movement of objects. It covers every motion system from micro-sized systems such as silicon-type micro induction actuators to large-scale systems such as a space platform. But these days, the focus of motion control is the special control technology of motion systems with electric actuators such as DC/AC servo motors. Control of robotic manipulators is also included in the field of
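The open- vs closed-loop distinction can be made concrete in a few lines of code. The sketch below (illustrative names and gain values, not any particular controller's API) implements the simplest closed-loop scheme, a proportional controller: measure the error between setpoint and actual position, then command a correction proportional to it.

```python
# Minimal closed-loop (proportional) position-control sketch; illustrative only.
def control_step(position: float, setpoint: float, kp: float = 0.5) -> float:
    """One control cycle: measure the error, apply a proportional correction."""
    error = setpoint - position      # feedback: measurement vs. commanded value
    return position + kp * error     # actuator moves to reduce the error

pos = 0.0
for _ in range(20):                  # iterate control cycles
    pos = control_step(pos, setpoint=1.0)
print(round(pos, 4))                 # converges toward the 1.0 setpoint
```

An open-loop system, by contrast, would simply issue the command and assume the motion happened, with no error term to correct.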
https://en.wikipedia.org/wiki/Dependent%20type
In computer science and logic, a dependent type is a type whose definition depends on a value. It is an overlapping feature of type theory and type systems. In intuitionistic type theory, dependent types are used to encode logic's quantifiers like "for all" and "there exists". In functional programming languages like Agda, ATS, Coq, F*, Epigram, and Idris, dependent types help reduce bugs by enabling the programmer to assign types that further restrain the set of possible implementations. Two common examples of dependent types are dependent functions and dependent pairs. The return type of a dependent function may depend on the value (not just the type) of one of its arguments. For instance, a function that takes a positive integer n may return an array of length n, where the array length is part of the type of the array. (Note that this is different from polymorphism and generic programming, both of which include the type as an argument.) A dependent pair may have a second value whose type depends on the first value. Sticking with the array example, a dependent pair may be used to pair an array with its length in a type-safe way. Dependent types add complexity to a type system. Deciding the equality of dependent types in a program may require computations. If arbitrary values are allowed in dependent types, then deciding type equality may involve deciding whether two arbitrary programs produce the same result; hence the decidability of type checking may depend on the given type theory's semantics of equality, that is, whether the type theory is intensional or extensional. History In 1934, Haskell Curry noticed that the types used in typed lambda calculus, and in its combinatory logic counterpart, followed the same pattern as axioms in propositional logic. Going further, for every proof in the logic, there was a matching function (term) in the programming language. One of Curry's examples was the correspondence between simply typed lambda calculus and intuition
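Mainstream Python cannot express dependent types statically, so the following is only an analogy: the invariant that a dependent pair (a length n together with an array of exactly that length) would enforce at compile time in Agda or Idris is demoted here to a runtime check. The class and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SizedArray:
    """Runtime analogue of a dependent pair (n, array of length n)."""
    length: int
    items: tuple

    def __post_init__(self):
        # In a dependently typed language, a mismatch here would be a
        # type error caught before the program ever runs.
        if len(self.items) != self.length:
            raise ValueError("declared length does not match the data")

ok = SizedArray(3, (1, 2, 3))    # consistent pair: accepted
try:
    SizedArray(2, (1, 2, 3))     # inconsistent pair: rejected, but only at runtime
except ValueError:
    print("rejected")
```

The point of real dependent types is precisely that the second construction would never compile, removing this class of bug statically.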
https://en.wikipedia.org/wiki/Aquificaceae
The Aquificaceae family are bacteria that live in harsh environmental settings such as hot springs, sulfur pools, and hydrothermal vents. Although they are true bacteria, as opposed to the other inhabitants of extreme environments, the Archaea, the Aquificaceae genera form an early phylogenetic branch. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). Unassigned species: "Aquifex aeolicus" Huber and Stetter 2001 See also List of bacterial orders List of bacteria genera
https://en.wikipedia.org/wiki/Hydrogenothermaceae
The Hydrogenothermaceae family are bacteria that live in harsh environmental settings. They have been found in hot springs, sulfur pools, and thermal ocean vents. They are true bacteria, as opposed to the other inhabitants of extreme environments, the Archaea. An example occurrence of extremophiles in this family is the genus Sulfurihydrogenibium, whose members are capable of surviving in extremely hot environments such as Hveragerði, Iceland. Obtaining energy The Hydrogenothermaceae family consists of aerobic or microaerophilic bacteria, which generally obtain energy by oxidation of hydrogen or of reduced sulfur compounds by molecular oxygen. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
https://en.wikipedia.org/wiki/Elan%20Graphics
Elan Graphics is a computer graphics architecture for Silicon Graphics computer workstations. Elan Graphics was developed in 1991 and was available as a high-end graphics option on workstations released during the mid-1990s as part of the Express Graphics architecture family. Elan Graphics gives the workstation real-time 2D and 3D graphics rendering capability similar to that of even high-end PCs made over ten years after Elan's introduction, with the exception of texture mapping, which had to be performed in software. The Silicon Graphics Indigo Elan graphics option consists of four GE7 Geometry Engines capable of a combined 128 MFLOPS and one RE3 Raster Engine. Together, they are capable of rendering 180K Z-buffered, lit, Gouraud-shaded triangles per second. The framebuffer has 56 bits per pixel, of which 12 bits per pixel (dithered RGB 4/4/4) are used for a double-buffered, depth-buffered RGB layout. When double-buffering isn't required, it is possible to run in full 24-bit color. Similarly, when Z-buffering is not required, a double-buffered 24-bit RGB framebuffer configuration is possible. The Elan Graphics system also implemented hardware stencil buffering by allocating 4 bits from the Z-buffer to produce a combined 20-bit Z, 4-bit stencil buffer. Elan Graphics consists of five graphics subsystems: the HQ2 Command Engine, GE7 Geometry Subsystem, RE3 Raster Engine, VM2 framebuffer and VC1 Display Subsystem. Elan Graphics can produce resolutions up to 1280 × 1024 pixels with 24-bit color and can also process unencoded NTSC and PAL analog television signals. The Elan Graphics system is made up of five daughterboards that plug into the main workstation motherboard. The Elan Graphics architecture was superseded by SGI's Extreme Graphics architecture on Indigo2 models and eventually by the IMPACT graphics architecture in 1995. Features Subpixel positioning Advanced lighting models: Multiple colored light sources (up to 8) Ambient, diffuse, and spec
https://en.wikipedia.org/wiki/Navarro%20Networks
Navarro Networks, Inc., was a developer of Ethernet-based ASIC components based in Plano, Texas, in the United States. They produced a network processor for Ethernet and other applications. Navarro Networks was founded in 2000. Their CEO was Mark Bluhm, who was formerly a vice president at Cyrix. A group of nine employees left the Cyrix division of Via on March 21, 2000 to staff the company. The employee walkout occurred just a day after Via announced that they would be spinning off the Cyrix division as a separate company. Cisco Systems announced their intent to acquire Navarro Networks on May 1, 2002; on the same day, Cisco also announced their bid to acquire Hammerhead Networks. The acquisition was completed in June that year, with Cisco paying for Navarro in a stock swap worth $85 million. Most of the 25 employees of Navarro joined the Internet Systems Business Unit to enhance Cisco's internal ASIC capability in Ethernet switching platforms.
https://en.wikipedia.org/wiki/List%20of%20reporting%20software
The following is a list of notable report generator software. Reporting software is used to generate human-readable reports from various data sources. Commercial software ActiveReports Actuate Corporation BOARD Business Objects Cognos BI Crystal Reports CyberQuery GoodData icCube I-net Crystal-Clear InetSoft Information Builders' FOCUS and WebFOCUS Jaspersoft Jedox List & Label Logi Analytics m-Power MATLAB MicroStrategy Navicat OBIEE Oracle Discoverer Oracle Reports Hyperion Oracle XML Publisher Parasoft DTP PolyAnalyst Power BI Plotly Proclarity QlikView RapidMiner Roambi RW3 Technologies SiSense Splunk SQL Server Reporting Services Stimulsoft Reports Style Report Tableau Targit Telerik Reporting TIBCO Text Control Windward Reports XLCubed Zoomdata Zoho Analytics (as part of the Zoho Office Suite) Free software BIRT Project D3.js JasperReports KNIME LibreOffice Base OpenOffice Base Pentaho See also Business intelligence software List of information graphics software
https://en.wikipedia.org/wiki/Graph%20isomorphism%20problem
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. The problem is not known to be solvable in polynomial time nor to be NP-complete, and therefore may be in the computational complexity class NP-intermediate. It is known that the graph isomorphism problem is in the low hierarchy of class NP, which implies that it is not NP-complete unless the polynomial time hierarchy collapses to its second level. At the same time, isomorphism for many special classes of graphs can be solved in polynomial time, and in practice graph isomorphism can often be solved efficiently. This problem is a special case of the subgraph isomorphism problem, which asks whether a given graph G contains a subgraph that is isomorphic to another given graph H; this problem is known to be NP-complete. It is also known to be a special case of the non-abelian hidden subgroup problem over the symmetric group. In the area of image recognition it is known as exact graph matching. State of the art In November 2015, László Babai announced a quasipolynomial time algorithm for all graphs, that is, one with running time 2^{O((log n)^c)} for some fixed c > 0. On January 4, 2017, Babai retracted the quasi-polynomial claim and stated a sub-exponential time bound instead after Harald Helfgott discovered a flaw in the proof. On January 9, 2017, Babai announced a correction (published in full on January 19) and restored the quasi-polynomial claim, with Helfgott confirming the fix. Helfgott further claims that one can take c = 3, so the running time is 2^{O((log n)^3)}. Prior to this, the best accepted theoretical algorithm was due to Babai and Luks (1983), and was based on the earlier work by Luks (1982) combined with a subfactorial algorithm of V. N. Zemlyachenko. The algorithm has run time 2^{O(√(n log n))} for graphs with n vertices and relies on the classification of finite simple groups. Without this classification theorem, a slightly weaker bound was obtained first for strongly regular graphs, and then extended t
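The problem statement can be illustrated with a brute-force checker. This is a hypothetical sketch, not Babai's algorithm: it tries all n! vertex relabellings, so it is only feasible for very small graphs, which is exactly why the algorithms above matter.

```python
# Brute-force graph isomorphism test over all vertex permutations.
# Factorial time; illustrative only, not a practical algorithm.
from itertools import permutations

def are_isomorphic(edges_g, edges_h, n):
    """Return True if the n-vertex graphs G and H, given as edge lists
    over vertices 0..n-1, are isomorphic."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):
        return False  # different edge counts: cannot be isomorphic
    for perm in permutations(range(n)):
        # Relabel G's vertices by perm and compare edge sets.
        if {frozenset((perm[u], perm[v])) for u, v in g} == h:
            return True
    return False

# A 4-cycle is isomorphic to any relabelling of itself, but not to a path.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_relabelled = [(2, 0), (0, 3), (3, 1), (1, 2)]
path = [(0, 1), (1, 2), (2, 3)]
print(are_isomorphic(c4, c4_relabelled, 4))  # True
print(are_isomorphic(c4, path, 4))           # False
```

The gap between this n! search and Babai's quasipolynomial bound is the substance of the "state of the art" discussion above.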
https://en.wikipedia.org/wiki/TrueOS
TrueOS (formerly PC-BSD or PCBSD) is a discontinued Unix-like, server-oriented operating system built upon the most recent releases of FreeBSD-CURRENT. Up to 2018 it aimed to be easy to install by using a graphical installation program, and ready to use immediately by providing KDE SC, Lumina, LXDE, MATE, or Xfce as the desktop environment. In June 2018 the developers announced that since TrueOS had become the core OS to provide a basis for other projects, the graphical installer had been removed. Graphical end-user-oriented OSes formerly based on TrueOS were GhostBSD and Trident. TrueOS provided official binary Nvidia and Intel drivers for hardware acceleration and an optional 3D desktop interface through KWin, and Wine was ready to use for running Microsoft Windows software. TrueOS was also able to run Linux software in addition to software from the FreeBSD Ports collection, and it had its own .txz package manager. TrueOS supported OpenZFS, and the installer offered disk encryption with geli. Development of TrueOS ended in 2020. History TrueOS was founded by FreeBSD professional Kris Moore in early 2005 as PC-BSD. In August 2006 it was voted the most beginner-friendly operating system by OSWeekly.com. The first beta of PC-BSD consisted of only a GUI installer to get the user up and running with a FreeBSD 6 system with KDE3 pre-configured. This was a major innovation for the time, as anyone wishing to install FreeBSD would have to manually tweak and run through a text installer. Kris Moore's goal was to make FreeBSD easy for everyone to use on the desktop, and the project has since diverged even more in the direction of usability by including additional GUI administration tools and .pbi application installers. PC-BSD's application installer management involved a different approach to installing software than many other Unix-like operating systems, up to and including version 8.2, by means of the pbiDIR website. Instead of using the FreeBSD Ports tree directly (although it remain
https://en.wikipedia.org/wiki/Crowbar
A crowbar, also called a wrecking bar, pry bar or prybar, pinch-bar, or occasionally a prise bar or prisebar, colloquially gooseneck, or pig bar, or in Britain and Australia a jemmy or jimmy (also called a jemmy bar), is a lever consisting of a metal bar with a single curved end and flattened points, used to force two objects apart or gain mechanical advantage in lifting; often the curved end has a notch for removing nails. The design can be used as any of the three lever classes. The curved end is usually used as a first-class lever, and the flat end as a second-class lever. Designs made from thick flat steel bar are often referred to as utility bars. Materials and construction A common hand tool, the crowbar is typically made of medium-carbon steel, possibly hardened on its ends. Commonly, crowbars are forged from long steel stock, either hexagonal or sometimes cylindrical. Alternative designs may be forged with a rounded I-shaped cross-section shaft. Versions using relatively wide flat steel bar are often referred to as "utility" or "flat bars". Etymology and usage The accepted etymology identifies the first component of the word crowbar with the bird-name "crow", perhaps due to the crowbar's resemblance to the feet or beak of a crow. The first use of the term dates back to circa 1400. It was also called simply a crow, or iron crow; William Shakespeare used the latter, as in Romeo and Juliet, Act 5, Scene 2: "Get me an iron crow and bring it straight unto my cell." In Daniel Defoe's 1719 novel Robinson Crusoe, the protagonist lacks a pickaxe so uses a crowbar instead: "As for the pickaxe, I made use of the iron crows, which were proper enough, though heavy." Types Types of crowbar include: Alignment pry bar, also referred to as Sleeve bar Cat’s claw pry bar, more simply known as a cat's paw Digging pry bar Flat pry bar Gooseneck pry bar Heavy-duty pry bar Molding pry bar Rolling head pry bar See also Halligan bar Kinetic energy penetrator Tire iron
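The lever behaviour described above reduces to the ideal mechanical-advantage formula, effort-arm length divided by load-arm length. The arm lengths below (in centimetres) are illustrative assumptions, not measurements of any particular bar.

```python
# Ideal mechanical advantage of a lever; arm lengths are assumed values.

def mechanical_advantage(effort_arm, load_arm):
    """Ratio of effort-arm length to load-arm length (ideal, frictionless)."""
    return effort_arm / load_arm

# Curved end used as a first-class lever: fulcrum at the bend,
# a 60 cm handle working against a 4 cm claw.
print(mechanical_advantage(60, 4))   # 15.0
# Flat end used as a second-class lever: fulcrum at the tip,
# with the load (e.g. a nail head) 10 cm in from it.
print(mechanical_advantage(60, 10))  # 6.0
```

This is why the long handle matters: the same hand force at the end of a 60 cm bar is multiplied roughly fifteenfold at a 4 cm claw.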
https://en.wikipedia.org/wiki/Crowbar%20%28circuit%29
A crowbar circuit is an electrical circuit used for preventing an overvoltage or surge condition of a power supply unit from damaging the circuits attached to the power supply. It operates by putting a short circuit or low-resistance path across the voltage output (Vo), like dropping a crowbar across the output terminals of the power supply. Crowbar circuits are frequently implemented using a thyristor, TRIAC, trisil or thyratron as the shorting device. Once triggered, they depend on the current-limiting circuitry of the power supply or, if that fails, on the blowing of the line fuse or the tripping of the circuit breaker. The name is derived from having the same effect as throwing a crowbar over exposed power supply terminals to short the output. An example crowbar circuit is shown to the right. This particular circuit uses an LM431 adjustable zener regulator to control the gate of the TRIAC. The resistor divider of R1 and R2 provides the reference voltage for the LM431. The divider is set so that during normal operating conditions, the voltage across R2 is slightly lower than VREF of the LM431. Since this voltage is below the minimum reference voltage of the LM431, it remains off and very little current is conducted through the LM431. If the cathode resistor is sized accordingly, very little voltage will be dropped across it and the TRIAC gate terminal will be essentially at the same potential as MT1, keeping the TRIAC off. If the supply voltage increases, the voltage across R2 will exceed VREF and the LM431 cathode will begin to draw current. The voltage at the gate terminal will be pulled down, exceeding the gate trigger voltage of the TRIAC and latching it on. Overview A crowbar circuit is distinct from a clamp in that, once triggered, it pulls the voltage below the trigger level, usually close to ground voltage. A clamp prevents the voltage from exceeding a preset level. Thus, a crowbar will not automatically return to normal operation when the overvoltage condition is remo
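The R1/R2 divider sizing described above can be sketched numerically. The supply trips when the voltage across R2 reaches the LM431's reference, so the trip point is VREF · (R1 + R2) / R2. The reference value (~2.495 V, typical for LM431-class shunt regulators) and the resistor values here are assumptions for illustration, not values from the article's schematic.

```python
# Sizing the crowbar's R1/R2 divider; VREF and resistor values assumed.

V_REF = 2.495  # typical LM431 reference voltage, volts (assumed)

def trip_voltage(r1, r2, v_ref=V_REF):
    """Supply voltage at which the drop across R2 reaches v_ref,
    turning on the LM431 and hence triggering the TRIAC."""
    return v_ref * (r1 + r2) / r2

def divider_output(v_supply, r1, r2):
    """Voltage across R2 for a given supply voltage."""
    return v_supply * r2 / (r1 + r2)

# Example: protect a nominal 5 V rail, tripping near 6 V.
r1, r2 = 14e3, 10e3
print(f"Crowbar trips at {trip_voltage(r1, r2):.2f} V")
print(f"At 5.0 V supply, R2 sees {divider_output(5.0, r1, r2):.2f} V")
```

With these values the divider output at the nominal 5 V rail (~2.08 V) sits safely below VREF, matching the article's "slightly lower than VREF" condition, while a surge past roughly 6 V fires the crowbar.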