https://en.wikipedia.org/wiki/Spit-take
A spit-take is a comedic technique or reaction in which someone spits a drink, or sometimes food, out of their mouth as a reaction to a surprising or funny statement. An essential part of the spit-take is comedic timing. The person performing the spit-take usually starts drinking or eating right before the punchline is delivered. When the joke hits, the person accentuates the effect by pretending that the alleged humor/shock is so overwhelming and irresistible that they cannot even control the urge to laugh or scream before swallowing, and therefore have to reflexively spit out the mouthful of content to prevent choking. In performance, a spit-take represents a reaction of shock, while in real life it is typically one of mirth. "Spit take" was included in the Oxford Dictionaries (not to be confused with the Oxford English Dictionary) in a 2014 update. It was also added to the Merriam-Webster dictionary in an April 2019 update. Etymology The spit-take, as a comedic technique, is a noun, but it shows up in media in some different forms. It can be seen used figuratively in a description of a satire work, Popstar: Never Stop Never Stopping: "(it) is a spit-take on the world of contemporary pop music and celebrity..." It can also be used as a verb: "On the morning of May 12, LinkedIn ... emailed scores of my contacts and told them I'm a professional racist. It was one of those updates that LinkedIn regularly sends its users, algorithmically assembled missives about their connections' appearances in the media.... This surely caused a few of my professional acquaintances to spit-take. — Will Johnson, Slate, 24 May 2016." The conjugation of spit-take as a verb is not clearly defined. There is evidence of both "spit-taked" and "spit took." Constructions like "made me spit-take" are convenient for avoiding this issue altogether. Beyond that, leaving the phrase as a noun, like "do a spit-take," continues to be the most common usage. Origin Originally called a spit g
https://en.wikipedia.org/wiki/Benchmark%20%28surveying%29
The term benchmark, bench mark, or survey benchmark originates from the chiseled horizontal marks that surveyors made in stone structures, into which an angle iron could be placed to form a "bench" for a leveling rod, thus ensuring that a leveling rod could be accurately repositioned in the same place in the future. These marks were usually indicated with a chiseled arrow below the horizontal line. A benchmark is a type of survey marker. The term is generally applied to any item used to mark a point as an elevation reference. Frequently, bronze or aluminum disks are set in stone or concrete, or on rods driven deeply into the earth to provide a stable elevation point. If an elevation is marked on a map, but there is no physical mark on the ground, it is a spot height. Purpose The height of a benchmark is calculated relative to the heights of nearby benchmarks in a network extending from a fundamental benchmark. A fundamental benchmark is a point with a precisely known relationship to the vertical datum of the area, typically mean sea level. The position and height of each benchmark are shown on large-scale maps. The terms "height" and "elevation" are often used interchangeably, but in many jurisdictions, they have specific meanings; "height" commonly refers to a local or relative difference in the vertical (such as the height of a building), whereas "elevation" refers to the difference from a nominated reference surface (such as sea-level, or a mathematical/geodetic model that approximates the sea level known as the geoid). Elevation may be specified as normal height (above a reference ellipsoid), orthometric height, or dynamic height which have slightly different definitions. Other types of survey marks Triangulation points, also known as trig points, are marks with a precisely established horizontal position. These points may be marked by disks similar to benchmark disks, but set horizontally, and are also sometimes used as elevation benchmarks. Prominent fe
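To make the relative-height bookkeeping concrete, here is a minimal sketch of differential leveling between marks; the function name and rod readings are hypothetical, and real survey reductions apply corrections this toy version omits.

```python
# Differential leveling: carry an elevation from a known benchmark to a new point.
# A backsight (BS) on the known mark gives the height of instrument (HI);
# a foresight (FS) on the next mark gives its elevation below that HI.

def run_level_loop(start_elevation: float, readings: list[tuple[float, float]]) -> float:
    """Each (backsight, foresight) pair is one instrument setup; readings in metres."""
    elevation = start_elevation
    for backsight, foresight in readings:
        height_of_instrument = elevation + backsight
        elevation = height_of_instrument - foresight
    return elevation

# Example: two setups between a fundamental benchmark at 100.000 m and a new mark.
print(run_level_loop(100.000, [(1.215, 0.892), (1.734, 1.105)]))  # 100.952 m
```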
https://en.wikipedia.org/wiki/Benchmark%20%28computing%29
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly applied to the elaborately designed benchmarking programs themselves. Benchmarking is usually associated with assessing performance characteristics of computer hardware, for example, the floating point operation performance of a CPU, but there are circumstances when the technique is also applicable to software. Software benchmarks are, for example, run against compilers or database management systems (DBMS). Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures. Purpose As computer architecture advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, Pentium 4 processors generally operated at a higher clock frequency than Athlon XP or PowerPC processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See BogoMips and the megahertz myth. Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this by specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a hard disk or networking device. Benchmarks are particularly important in CPU design, giving processor architects the ability to measur
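As a minimal illustration of the synthetic-benchmark idea, the sketch below uses Python's standard timeit module to impose a small, repeatable floating-point workload and report a best-of-N timing; the workload and repetition counts are arbitrary choices, not part of any standard suite.

```python
import timeit

def fp_workload(n: int = 100_000) -> float:
    """A tiny synthetic floating-point workload: repeated multiply-adds."""
    acc = 0.0
    x = 1.000001
    for _ in range(n):
        acc = acc * x + 1.0
    return acc

# Best-of-5 timing smooths out interference from other processes.
best = min(timeit.repeat(fp_workload, number=10, repeat=5))
print(f"best of 5 runs: {best:.4f} s for 10 calls")
```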
https://en.wikipedia.org/wiki/Rocky%20Mountain%20Arsenal%20National%20Wildlife%20Refuge
The Rocky Mountain Arsenal National Wildlife Refuge is a National Wildlife Refuge located adjacent to Denver and Commerce City, Colorado, in the United States. It is northeast of downtown Denver. The refuge is on the grounds of the former Rocky Mountain Arsenal, a United States Army chemical weapons manufacturing facility. The site was designated a national wildlife refuge in 1992 by the United States Congress, and underwent a costly environmental cleanup in order to remove pollutants. The refuge is managed by the United States Fish and Wildlife Service. More than 330 species of wildlife inhabit the refuge, including raptors, deer, raccoons, coyotes, white pelicans, black-footed ferrets, black-tailed prairie dogs, and bison. Previous uses The Rocky Mountain Arsenal (RMA) was built in 1942 to manufacture chemical weapons. A portion of the site was leased to private industry in 1946 for petroleum production and agricultural and industrial chemical manufacturing. When the American chemical weapons program was shut down after the Vietnam War, the RMA served as a site for dismantling and disposing of these weapons. The Shell Oil Company also used a portion of the site in the 1980s to produce pesticides. The RMA was closed in 1985, and in 1987 environmental testing revealed that the site was extremely polluted. The RMA was listed on the National Priorities List, a list of hazardous waste sites in the United States eligible for long-term remedial action (cleanup) financed under the federal Superfund program run by the Environmental Protection Agency. In 1986, while environmental testing was continuing, a winter communal roost of bald eagles, then an endangered species, was discovered at the Rocky Mountain Arsenal. Additional investigation by the U.S. Fish and Wildlife Service (USFWS) discovered that the RMA was home to more than 330 species of wildlife. With the arsenal not fit for human habitation, pressure quickly built to have it turned into a wildlife refuge.
https://en.wikipedia.org/wiki/XrML
XrML is the eXtensible Rights Markup Language which has also been standardized as the Rights Expression Language (REL) for MPEG-21. XrML is owned by ContentGuard. XrML is based on XML and describes rights, fees and conditions together with message integrity and entity authentication information. History and development Xerox PARC and DPRL Mark Stefik, a researcher at Xerox PARC, is known as the originator of the concepts that became the XrML language. Stefik was engaged in research on the topic of trusted systems for secure digital commerce, of which one part was a language to express the rights that the system would allow users to perform on digital resources. The first version of the rights expression language that became XrML was developed at Xerox PARC, and called the Digital Property Rights Language (DPRL). DPRL appears in a patent filed by Xerox in November 1994 (and was granted in February 1998) entitled: "System for Controlling the Distribution and Use of Digital Work Having Attached Usage Rights Where the Usage Rights are Defined by a Usage Rights Grammar" (US Patent 5,715,403, issued to Xerox Corporation). Between 1994 and 1998, Xerox formed its Rights Management Group to continue the work represented in the patent. In November 1998, Xerox issued the first XML version of the Digital Property Rights Language (DPRL), labelled Version 2.0. Prior to that time, DPRL had been written in the LISP programming language. The DPRL 2.0 documentation makes it clear that DPRL was designed for machine-to-machine interaction, with rights expressed as machine actionable functions. It also states clearly that in interpreting a DPRL-based expression of rights, only those rights that are explicitly granted can be acted upon. Any areas where a rights expression is silent must be interpreted as rights not granted, and therefore must be denied by the software enforcing the rights. XrML 1.0 In 1999, version 2 of DPRL was licensed to a new company founded by Microsoft and
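The deny-by-default interpretation rule described for DPRL (only explicitly granted rights may be exercised; silence means denial) can be sketched in a few lines; the class and right names here are illustrative, not part of the DPRL or XrML specifications.

```python
# Deny-by-default rights check: anything not explicitly granted is refused.
class RightsGrant:
    def __init__(self, granted_rights: set[str]):
        self.granted_rights = granted_rights

    def is_permitted(self, requested: str) -> bool:
        # Silence in the rights expression means the right was NOT granted.
        return requested in self.granted_rights

grant = RightsGrant({"view", "print"})
print(grant.is_permitted("print"))  # True: explicitly granted
print(grant.is_permitted("copy"))   # False: the expression is silent, so deny
```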
https://en.wikipedia.org/wiki/Bundle%20gerbe
In mathematics, a bundle gerbe is a geometrical model of certain 1-gerbes with connection, or equivalently of a 2-class in Deligne cohomology. Topology U(1)-principal bundles over a space M (see circle bundle) are geometrical realizations of 1-classes in Deligne cohomology, which consist of 1-form connections and 2-form curvatures. The topology of a U(1) bundle is classified by its Chern class, which is an element of H²(M, ℤ), the second integral cohomology of M. Gerbes, or more precisely 1-gerbes, are abstract descriptions of Deligne 2-classes, which each define an element of H³(M, ℤ), the third integral cohomology of M. As a cohomology class in Deligne cohomology Recall for a smooth manifold M the p-th Deligne cohomology groups are defined by the hypercohomology of the complex ℤ → Ω⁰ → Ω¹ → ⋯ → Ω^(q−1), called the weight q Deligne complex, where Ω^k is the sheaf of germs of smooth differential k-forms tensored with ℂ. So, we write H^p(M, ℤ(q)) for the Deligne-cohomology groups of weight q. In the case q = 3 the Deligne complex is then ℤ → Ω⁰ → Ω¹ → Ω². We can understand the Deligne cohomology groups by looking at the Čech resolution giving a double complex. There is also an associated short exact sequence 0 → Ω²(M)/Ω²_ℤ(M) → H³(M, ℤ(3)) → H³(M, ℤ) → 0, where Ω²(M) are the closed germs of complex-valued 2-forms on M and Ω²_ℤ(M) is the subspace of such forms whose period integrals are integral. This can be used to show that H³(M, ℤ(3)) gives the isomorphism classes of bundle gerbes on a smooth manifold M, or equivalently, the isomorphism classes of BU(1)-bundles on M. History Historically the most popular construction of a gerbe is a category-theoretic model featured in Giraud's theory of gerbes, which are roughly sheaves of groupoids over M. In 1994 Murray introduced bundle gerbes, which are geometric realizations of 1-gerbes. For many purposes these are more suitable for calculations than Giraud's realization, because their construction is entirely within the framework of classical geometry. In fact, as their name suggests, they are fiber bundles. This notion was extended to higher gerbes the following year. Relationship wi
https://en.wikipedia.org/wiki/Domestic%20rabbit
The domestic or domesticated rabbit—more commonly known as a pet rabbit, bunny, bun, or bunny rabbit—is the domesticated form of the European rabbit, a member of the lagomorph order. A male rabbit is known as a buck, a female is a doe, and a young rabbit is a kit, or kitten. Rabbits were first used for their food and fur by the Romans, and have been kept as pets in Western nations since the 19th century. Rabbits can be housed in exercise pens, but free roaming without any boundaries in a rabbit-proofed space has been popularized on social media in recent years. Beginning in the 1980s, the idea of the domestic rabbit as a house companion, a so-called house rabbit similar to a house cat, was promoted. Rabbits can be litter box-trained and taught to come when called, but they require exercise and can damage a house that has not been "rabbit proofed" based on their innate need to chew. Interactions between pet rabbits and wild rabbits, while seemingly harmless, are strongly discouraged due to the species' different temperaments as well as wild rabbits potentially carrying diseases. Unwanted pet rabbits end up in animal shelters, especially after the Easter season (see Easter Bunny). In 2017, they were the United States' third most abandoned pet. Some of them go on to be adopted and become family pets in various forms. Because their wild counterparts have become invasive in Australia, pet rabbits are banned in the state of Queensland. Pet rabbits, being domesticated animals that lack survival instincts, do not fare well in the wild if they are abandoned or escape from captivity. History Phoenician sailors visiting the coast of Spain c. 12th century BC, mistaking the European rabbit for a species from their homeland (the rock hyrax Procavia capensis), gave it the name i-shepan-ham (land or island of hyraxes). The captivity of rabbits as a food source is recorded as early as the 1st century AD, when the Roman writer Pliny the Elder described the u
https://en.wikipedia.org/wiki/The%20Tower%20of%20Druaga
The Tower of Druaga is a 1984 arcade action role-playing maze game developed and published in Japan by Namco. Controlling the golden-armored knight Gilgamesh, the player is tasked with scaling 60 floors of the titular tower in an effort to rescue the maiden Ki from Druaga, a demon with eight arms and four legs, who plans to use an artifact known as the Blue Crystal Rod to enslave all of mankind. It ran on the Namco Super Pac-Man arcade hardware, modified with a horizontal-scrolling video system used in Mappy. Druaga was designed by Masanobu Endo, best known for creating Xevious (1983). It was conceived as a "fantasy Pac-Man" with combat and puzzle solving, taking inspiration from games such as Wizardry and Dungeons & Dragons, along with Mesopotamian, Sumerian and Babylonian mythology. It began as a prototype game called Quest with interlocking mazes, revised to run on an arcade system; the original concept was scrapped because Endo disliked the heavy use of role-playing elements, and the project instead became a more action-oriented game. In Japan, The Tower of Druaga was widely successful, attracting millions of fans for its use of secrets and hidden items. It is cited as an important game of its genre for laying down the foundation for future games, as well as inspiring the idea of sharing tips with friends and guidebooks. Druaga is noted as being influential for many games to follow, including Ys, Hydlide, Dragon Slayer and The Legend of Zelda. The success of the game in Japan inspired several ports for multiple platforms, as well as spawning a massive franchise known as the Babylonian Castle Saga, including multiple sequels, spin-offs, literature and an anime series produced by Gonzo. However, the 2009 Wii Virtual Console release in North America was met with a largely negative reception for its obtuse design, which many said made it near-impossible to finish without a guidebook, as well as for its high difficulty and controls. Gameplay The Tower of Druaga is an action role-playing maze video game. C
https://en.wikipedia.org/wiki/SpaceWire
SpaceWire is a spacecraft communication network based in part on the IEEE 1355 standard of communications. It is coordinated by the European Space Agency (ESA) in collaboration with international space agencies including NASA, JAXA, and RKA. Within a SpaceWire network the nodes are connected through low-cost, low-latency, full-duplex, point-to-point serial links, and packet switching wormhole routing routers. SpaceWire covers two (physical and data-link) of the seven layers of the OSI model for communications. Architecture Physical layer SpaceWire's modulation and data formats generally follow the data-strobe encoding, differential-ended signaling (DS-DE) part of the IEEE Std 1355-1995. SpaceWire utilizes asynchronous communication and allows speeds between 2 Mbit/s and 200 Mbit/s, with an initial signalling rate of 10 Mbit/s. DS-DE is well-favored because it describes modulation, bit formats, routing, flow control, and error detection in hardware, with little need for software. SpaceWire also has very low error rates, deterministic system behavior, and relatively simple digital electronics. SpaceWire replaced the old PECL differential drivers in the physical layer of IEEE 1355 DS-DE with low-voltage differential signaling (LVDS). SpaceWire also proposes the use of space-qualified 9-pin connectors. SpaceWire and IEEE 1355 DS-DE allow for a wider set of speeds for data transmission, and some new features for automatic failover. The fail-over features let data find alternate routes, so a spacecraft can have multiple data buses, and be made fault-tolerant. SpaceWire also allows the propagation of time interrupts over SpaceWire links, eliminating the need for separate time discretes. Link layer Each transferred character starts with a parity bit and a data-control flag bit. If the data-control flag is a 0-bit, an 8-bit data character follows, least significant bit first. Otherwise one of the control codes follows, including end of packet (EOP). Network layer The network data frames look as follows: One or
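A simplified sketch of the link-layer character format described above (parity bit, data-control flag, then payload). Note the simplification: real SpaceWire computes the parity bit over bits of the preceding character as well, which this toy version ignores, and the function name is illustrative.

```python
def encode_data_char(byte: int) -> list[int]:
    """Toy SpaceWire-style data character: parity bit, flag bit 0, 8 data bits LSB-first.

    Real SpaceWire parity also covers bits of the preceding character; here the
    odd-parity bit is computed over this character alone, for clarity only.
    """
    data_bits = [(byte >> i) & 1 for i in range(8)]  # least significant bit first
    flag = 0  # 0 = data character, 1 = control code
    parity = (sum(data_bits) + flag + 1) % 2  # odd parity over flag + data bits
    return [parity, flag] + data_bits

print(encode_data_char(0x4F))  # [0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
```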
https://en.wikipedia.org/wiki/Effectiveness
Effectiveness or effectivity is the capability of producing a desired result or the ability to produce desired output. When something is deemed effective, it means it has an intended or expected outcome, or produces a deep, vivid impression. Etymology The origin of the word effective stems from the Latin word effectīvus, which means "creative, productive, or effective". It surfaced in Middle English between 1300 and 1400 AD. Usage In mathematics and logic, effective is used to describe metalogical methods that fit the criteria of an effective procedure. In group theory, a group element acts effectively (or faithfully) on a point if that point is not fixed by the action. In physics, an effective theory is, similar to a phenomenological theory, a framework intended to explain certain (observed) effects without the claim that the theory correctly models the underlying (unobserved) processes. In heat transfer, effectiveness is a measure of the performance of a heat exchanger when using the NTU method. In medicine, effectiveness relates to how well a treatment works in practice, especially as shown in pragmatic clinical trials, as opposed to efficacy, which measures how well it works in explanatory clinical trials or research laboratory studies. In management, effectiveness relates to getting the right things done. Peter Drucker reminds us that "effectiveness can and must be learned". In human–computer interaction, effectiveness is defined as "the accuracy and completeness of users' tasks while using a system". In military science, effectiveness is a criterion used to assess changes determined in the target system, in its behavior, capability, or assets, tied to the attainment of an end state, achievement of an objective, or creation of an effect, while combat effectiveness is: "...the readiness of a military unit to engage in combat based on behavioral, operational, and leadership considerations. Combat effectiveness measures the ability of a military force to accomplis
https://en.wikipedia.org/wiki/Nuclear%20force
The nuclear force (or nucleon–nucleon interaction, residual strong force, or, historically, strong nuclear force) is a force that acts between hadrons, most commonly observed between protons and neutrons of atoms. Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e, they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei. The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 0.8×10⁻¹⁵ metre), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (The size of an atom, measured in angstroms (Å, or 10⁻¹⁰ m), is five orders of magnitude larger). The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons. The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons. Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect, which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together. A quantitative description of the nuclear force relies
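To make the mass-defect statement concrete, the sketch below computes the binding energy of helium-4 from standard particle masses (values in unified atomic mass units, rounded; 1 u ≈ 931.494 MeV/c²).

```python
# Binding energy of helium-4 from its mass defect (E = Δm·c²).
U_TO_MEV = 931.494          # MeV per unified atomic mass unit (c² included)
m_proton = 1.007276         # u
m_neutron = 1.008665        # u
m_he4_nucleus = 4.001506    # u

mass_defect = 2 * m_proton + 2 * m_neutron - m_he4_nucleus   # ≈ 0.030377 u
binding_energy = mass_defect * U_TO_MEV                      # ≈ 28.3 MeV
print(f"{mass_defect:.6f} u -> {binding_energy:.1f} MeV "
      f"({binding_energy / 4:.2f} MeV per nucleon)")
```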
https://en.wikipedia.org/wiki/Weather%20Underground%20%28weather%20service%29
Weather Underground is a commercial weather service providing real-time weather information over the Internet. Weather Underground provides weather reports for most major cities around the world on its Web site, as well as local weather reports for newspapers and third-party sites. Its information comes from the National Weather Service (NWS), and over 250,000 personal weather stations (PWS). The site is available in many languages, and customers can access an ad-free version of the site with additional features for an annual fee. Weather Underground is owned by The Weather Company, a subsidiary of IBM. History The company is based in San Francisco, California and was founded in 1995 as an offshoot of the University of Michigan internet weather database. The name is a reference to the 1960s radical left-wing militant organization the Weather Underground, which also originated at the University of Michigan. Jeff Masters, a doctoral candidate in meteorology at the University of Michigan working under the direction of Professor Perry Samson, wrote a menu-based Telnet interface in 1991 that displayed real-time weather information around the world. In 1993, they recruited Alan Steremberg and initiated a project to bring Internet weather into K–12 classrooms. Steremberg, later Weather Underground's president, wrote "Blue Skies", a graphical Mac Gopher client, for the project; it won several awards. When the Mosaic Web browser appeared, it provided a natural transition from "Blue Skies" to the Web. In 1995 Weather Underground Inc. became a commercial entity separate from the university. It has grown to provide weather for print sources, in addition to its online presence. In 2005, Weather Underground became the weather provider for the Associated Press; Weather Underground also provides weather reports for some newspapers, including the San Francisco Chronicle and the Google search engine. Alan Steremberg also worked on the early development of the Google search engine
https://en.wikipedia.org/wiki/Lanczos%20approximation
In mathematics, the Lanczos approximation is a method for computing the gamma function numerically, published by Cornelius Lanczos in 1964. It is a practical alternative to the more popular Stirling's approximation for calculating the gamma function with fixed precision. Introduction The Lanczos approximation consists of the formula Γ(z+1) = √(2π) (z + g + 1/2)^(z+1/2) e^(−(z+g+1/2)) A_g(z) for the gamma function, with A_g(z) = (1/2)p₀(g) + p₁(g) z/(z+1) + p₂(g) z(z−1)/((z+1)(z+2)) + ⋯ Here g is a real constant that may be chosen arbitrarily subject to the restriction that Re(z + g + 1/2) > 0. The coefficients p_k(g), which depend on g, are slightly more difficult to calculate (see below). Although the formula as stated here is only valid for arguments in the right complex half-plane, it can be extended to the entire complex plane by the reflection formula Γ(z) Γ(1 − z) = π / sin(πz). The series A_g(z) is convergent, and may be truncated to obtain an approximation with the desired precision. By choosing an appropriate g (typically a small integer), only some 5–10 terms of the series are needed to compute the gamma function with typical single or double floating-point precision. If a fixed g is chosen, the coefficients can be calculated in advance and, thanks to partial fraction decomposition, the sum is recast into the following form: A_g(z) = c₀ + c₁/(z+1) + c₂/(z+2) + ⋯ + c_N/(z+N). Thus computing the gamma function becomes a matter of evaluating only a small number of elementary functions and multiplying by stored constants. The Lanczos approximation was popularized by Numerical Recipes, according to which computing the gamma function becomes "not much more difficult than other built-in functions that we take for granted, such as sin x or e^x." The method is also implemented in the GNU Scientific Library, Boost, CPython and musl. Coefficients The coefficients p_k(g) are given by a sum involving C(n, m), the (n, m)th element of the matrix of coefficients for the Chebyshev polynomials, which can be calculated recursively. Godfrey (2001) describes how to obtain the coefficients and also the value of the truncated series A as a matrix product. Derivation Lanczos derive
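The partial-fraction form lends itself to a direct implementation. Below is a minimal sketch using the widely circulated g = 7, N = 8 coefficient set (nine stored constants), quoted from the commonly published table rather than derived here; it is accurate to roughly double precision.

```python
import cmath

# Partial-fraction Lanczos coefficients for g = 7 (widely circulated set).
G = 7
COEFFS = [
    0.99999999999980993, 676.5203681218851, -1259.1392167224028,
    771.32342877765313, -176.61502916214059, 12.507343278686905,
    -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
]

def gamma(z: complex) -> complex:
    if z.real < 0.5:
        # Reflection formula extends the approximation to the left half-plane.
        return cmath.pi / (cmath.sin(cmath.pi * z) * gamma(1 - z))
    z -= 1
    x = COEFFS[0] + sum(c / (z + k) for k, c in enumerate(COEFFS[1:], start=1))
    t = z + G + 0.5
    return cmath.sqrt(2 * cmath.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

print(gamma(5))    # ≈ 24 (= 4!)
print(gamma(0.5))  # ≈ 1.7724538509 (= √π)
```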
https://en.wikipedia.org/wiki/Push%20technology
Push technology or server push is a style of Internet-based communication where the request for a given transaction is initiated by the publisher or central server. It is contrasted with pull, or get, where the request for the transmission of information is initiated by the receiver or client. Push services are often based on information and data preferences expressed in advance, called the publish-subscribe model. A client "subscribes" to various information channels provided by a server; whenever new content is available on one of those channels, the server "pushes" or "publishes" that information out to the client. Push is sometimes emulated with a polling technique, particularly under circumstances where a real push is not possible, such as sites with security policies that reject incoming HTTP requests. General use Synchronous conferencing and instant messaging are examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs (such as WASTE) and centralized programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient. Email may also be a push system: SMTP is a push protocol (see Push e-mail). However, the last step—from mail server to desktop computer—typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive. The original BlackBerry was the first popular example of push-email in a wireless context. Another example is the PointCast Network, which was widely covered in the 1990s. It delivered news and stock market data as a screensaver. Both Netscape and Microsoft integrated push technology through the Channel Definition Format (CDF) into the
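The publish-subscribe model described above can be sketched in a few lines; the class and channel names are illustrative, and a real push service would deliver over a network transport rather than in-process callbacks.

```python
from collections import defaultdict
from typing import Callable

class PushServer:
    """Toy publish-subscribe hub: new content is pushed to subscribers immediately."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[str], None]) -> None:
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: str) -> None:
        for callback in self._subscribers[channel]:
            callback(message)  # push: the server initiates delivery

server = PushServer()
server.subscribe("news", lambda m: print("client got:", m))
server.publish("news", "new article available")
```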
https://en.wikipedia.org/wiki/Five%20color%20theorem
The five color theorem is a result from graph theory that given a plane separated into regions, such as a political map of the countries of the world, the regions may be colored using no more than five colors in such a way that no two adjacent regions receive the same color. The five color theorem is implied by the stronger four color theorem, but is considerably easier to prove. It was based on a failed attempt at the four color proof by Alfred Kempe in 1879. Percy John Heawood found an error 11 years later, and proved the five color theorem based on Kempe's work. Outline of the proof by contradiction First of all, one associates a simple planar graph G to the given map, namely one puts a vertex in each region of the map, then connects two vertices with an edge if and only if the corresponding regions share a common border. The problem is then translated into a graph coloring problem: one has to paint the vertices of the graph so that no edge has endpoints of the same color. Because G is simple and planar, i.e. it may be embedded in the plane without intersecting edges, and it does not have two vertices sharing more than one edge, and it does not have loops, it can be shown (using the Euler characteristic of the plane) that it must have a vertex shared by at most five edges: the Euler characteristic implies e ≤ 3n − 6 for a simple planar graph with n vertices and e edges, while a minimum degree of six would force 2e ≥ 6n. (Note: This is the only place where the five-color condition is used in the proof. If this technique is used to prove the four-color theorem, it will fail on this step. In fact, an icosahedral graph is 5-regular and planar, and thus does not have a vertex shared by at most four edges.) Find such a vertex, and call it v. Now remove v from G. The graph G′ obtained this way has one fewer vertex than G, so we can assume by induction that it can be colored with only five colors. If the coloring did not use all five colors on the five neighboring vertices of v, then v can be colored in G with a color not used by the neighbors. So now look at those five vertices v₁, v₂, v₃, v₄, v₅ that were adjacent to v in cyclic order
https://en.wikipedia.org/wiki/Peptide%20mass%20fingerprinting
Peptide mass fingerprinting (PMF) (also known as protein fingerprinting) is an analytical technique for protein identification in which the unknown protein of interest is first cleaved into smaller peptides, whose absolute masses can be accurately measured with a mass spectrometer such as MALDI-TOF or ESI-TOF. The method was developed in 1993 by several groups independently. The peptide masses are compared to either a database containing known protein sequences or even the genome. This is achieved by using computer programs that translate the known genome of the organism into proteins, then theoretically cut the proteins into peptides, and calculate the absolute masses of the peptides from each protein. They then compare the masses of the peptides of the unknown protein to the theoretical peptide masses of each protein encoded in the genome. The results are statistically analyzed to find the best match. The advantage of this method is that only the masses of the peptides have to be known. A disadvantage is that the protein sequence has to be present in the database of interest. Additionally, most PMF algorithms assume that the peptides come from a single protein. The presence of a mixture can significantly complicate the analysis and potentially compromise the results. PMF-based protein identification typically requires an isolated protein. Mixtures of more than 2–3 proteins typically require the additional use of MS/MS-based protein identification to achieve sufficient specificity of identification. Therefore, typical PMF samples are isolated proteins from two-dimensional gel electrophoresis (2D gels) or isolated SDS-PAGE bands. Additional analyses by MS/MS can either be direct, e.g., MALDI-TOF/TOF analysis, or downstream nanoLC-ESI-MS/MS analysis of gel spot eluates. Origins Peptide mass fingerprinting was developed because the existing process of analyzing proteins was long and tedious. Edman degradation was used in protein analysis, and it required
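As a sketch of the matching step described above, the code below performs a naive in-silico tryptic digest (cleaving after K or R, except before P) and counts how many measured masses fall within a tolerance of the theoretical peptide masses. Residue masses are rounded monoisotopic values (a subset of residues, for brevity), and the candidate sequence and measured masses are made up.

```python
import re

# Rounded monoisotopic residue masses (Da); water adds 18.011 Da per peptide.
RESIDUE = {"G": 57.021, "A": 71.037, "S": 87.032, "V": 99.068, "L": 113.084,
           "D": 115.027, "K": 128.095, "F": 147.068, "R": 156.101, "Y": 163.063}
WATER = 18.011

def tryptic_digest(sequence: str) -> list[str]:
    """Cleave after K/R, but not when the next residue is P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", sequence) if p]

def peptide_mass(peptide: str) -> float:
    return sum(RESIDUE[aa] for aa in peptide) + WATER

def count_matches(measured: list[float], candidate: str, tol: float = 0.1) -> int:
    theoretical = [peptide_mass(p) for p in tryptic_digest(candidate)]
    return sum(any(abs(m - t) <= tol for t in theoretical) for m in measured)

candidate = "AGSKFLRDYVK"                     # hypothetical database sequence
print(tryptic_digest(candidate))              # ['AGSK', 'FLR', 'DYVK']
print(count_matches([434.3, 523.3], candidate))  # 2: both masses match a peptide
```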
https://en.wikipedia.org/wiki/Difference%20gel%20electrophoresis
Difference gel electrophoresis (DIGE) is a form of gel electrophoresis where up to three different protein samples can be labeled with size-matched, charge-matched, spectrally resolvable fluorescent dyes (for example Cy3, Cy5, Cy2) prior to two-dimensional gel electrophoresis. Procedure The three samples are mixed and loaded onto an IEF (isoelectric focusing) strip for the first dimension, and the strip is then transferred to an SDS-PAGE gel. After the gel electrophoresis, the gel is scanned at the excitation wavelength of each dye in turn, so each sample can be seen separately (if the gel is scanned at the excitation wavelength of the Cy3 dye, only the sample labeled with that dye is visible). This technique is used to see changes in protein abundance (for example, between a sample of a healthy person and a sample of a person with disease), post-translational modifications, truncations and any modification that might change the size or isoelectric point of proteins. The binary shifts might be horizontal (change in isoelectric point), vertical (change in size) or diagonal (change in both size and isoelectric point). Reciprocal labeling is done to make sure the changes seen are not due to dye-dependent interactions. Advantages It overcomes limitations in traditional 2D electrophoresis that are due to inter-gel variation, which can be considerable even with identical samples. Since the proteins from the different sample types (e.g. healthy/diseased, virulent/non-virulent) are run on the same gel, they can be directly compared. To do this with traditional 2D electrophoresis requires large numbers of time-consuming repeats. Standards In experiments comprising several gels, a common technique is to include an internal standard in each gel. The internal standard is prepared by mixing together several or all of the samples in the experiment. This allows the measurement of the abundance of a protein in each sample relative to the internal standard.
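A sketch of the internal-standard normalization just described: each sample's spot volume is expressed relative to the pooled-standard spot on the same gel, which cancels gel-to-gel variation before samples from different gels are compared. The numbers and dye assignments are illustrative.

```python
def standardized_abundance(sample_volume: float, standard_volume: float) -> float:
    """Spot abundance relative to the pooled internal standard on the same gel."""
    return sample_volume / standard_volume

# Same protein spot on two gels (Cy5 = sample, Cy2 = pooled internal standard).
gel1_ratio = standardized_abundance(12000.0, 8000.0)   # healthy sample, gel 1
gel2_ratio = standardized_abundance(30000.0, 10000.0)  # diseased sample, gel 2
print(gel2_ratio / gel1_ratio)  # 2.0: a twofold increase, comparable across gels
```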
https://en.wikipedia.org/wiki/Partition%20%28database%29
A partition is a division of a logical database or its constituent elements into distinct independent parts. Database partitioning is normally done for manageability, performance or availability reasons, or for load balancing. It is popular in distributed database management systems, where each partition may be spread over multiple nodes, with users at each node performing local transactions on the partition. This increases performance for sites that have regular transactions involving certain views of data, whilst maintaining availability and security. Partitioning criteria Current high-end relational database management systems provide for different criteria to split the database. They take a partitioning key and assign a partition based on certain criteria. Some common criteria include: Range partitioning: selects a partition by determining if the partitioning key is within a certain range. An example could be a partition for all rows where the "zipcode" column has a value between 70000 and 79999. It distributes tuples based on the value intervals (ranges) of some attribute. In addition to supporting exact-match queries (as in hashing), it is well-suited for range queries. For instance, a query with a predicate "A between A1 and A2" may be processed by only the node(s) whose partitions contain matching tuples. List partitioning: a partition is assigned a list of values. If the partitioning key has one of these values, the partition is chosen. For example, all rows where the column Country is either Iceland, Norway, Sweden, Finland or Denmark could build a partition for the Nordic countries. Composite partitioning: allows for certain combinations of the above partitioning schemes, by for example first applying a range partitioning and then a hash partitioning. Consistent hashing could be considered a composite of hash and list partitioning, where the hash reduces the key space to a size that can be listed. Round-robin partitioning: the simplest strategy, it ensures uniform data distribution.
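A small sketch of three of these criteria (range, list, hash) as partition-selection functions; the zip-code and country examples mirror the ones above, and the partition count is an arbitrary choice.

```python
def range_partition(zipcode: int) -> int:
    """Range partitioning: one partition per 10000-wide zipcode band."""
    return zipcode // 10000

NORDIC = {"Iceland", "Norway", "Sweden", "Finland", "Denmark"}

def list_partition(country: str) -> str:
    """List partitioning: countries in the list share one partition."""
    return "nordic" if country in NORDIC else "other"

def hash_partition(key: str, n_partitions: int = 8) -> int:
    """Hash partitioning: spread keys across n partitions.

    A real system would use a stable hash; Python's hash() is salted per process.
    """
    return hash(key) % n_partitions

print(range_partition(75123))    # 7: the 70000-79999 band
print(list_partition("Sweden"))  # nordic
print(hash_partition("customer42"))
```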
https://en.wikipedia.org/wiki/Siegbahn%20notation
The Siegbahn notation is used in X-ray spectroscopy to name the spectral lines that are characteristic to elements. It was introduced by Manne Siegbahn. The characteristic lines in X-ray emission spectra correspond to atomic electronic transitions where an electron jumps down to a vacancy in one of the inner shells of an atom. Such a hole in an inner shell may have been produced by bombardment with electrons in an X-ray tube, by other particles as in PIXE, by other X-rays in X-ray fluorescence or by radioactive decay of the atom's nucleus. Although still widely used in spectroscopy, this notation is unsystematic and often confusing. For these reasons, the International Union of Pure and Applied Chemistry (IUPAC) recommends another nomenclature. History The use of the letters K and L to denote X-rays originates in a 1911 paper by Charles Glover Barkla, titled The Spectra of the Fluorescent Röntgen Radiations ("Röntgen radiation" is an archaic name for "X-rays"). By 1913, Henry Moseley had clearly differentiated two types of X-ray lines for each element, naming them α and β. In 1914, as part of his thesis, Ivar Malmer, a student of Manne Siegbahn, discovered that the α and β lines were not single lines, but doublets. In 1916, Siegbahn published this result in the journal Nature, using what would come to be known as the Siegbahn notation. Correspondence between the Siegbahn and IUPAC notations The table below shows a few transitions and their initial and final levels. See also Characteristic X-ray Moseley's law X-ray notation
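For the correspondence mentioned above, a representative subset of the standard Siegbahn lines with their IUPAC equivalents (the full IUPAC recommendation covers many more lines):

Siegbahn  IUPAC   Transition
Kα1       K-L3    L3 → K
Kα2       K-L2    L2 → K
Kβ1       K-M3    M3 → K
Lα1       L3-M5   M5 → L3
Lβ1       L2-M4   M4 → L2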
https://en.wikipedia.org/wiki/Moseley%27s%20law
Moseley's law is an empirical law concerning the characteristic X-rays emitted by atoms. The law had been discovered and published by the English physicist Henry Moseley in 1913–1914. Until Moseley's work, "atomic number" was merely an element's place in the periodic table and was not known to be associated with any measurable physical quantity. In brief, the law states that the square root of the frequency of the emitted X-ray is approximately proportional to the atomic number: √ν ∝ Z. History The historic periodic table was roughly ordered by increasing atomic weight, but in a few famous cases the physical properties of two elements suggested that the heavier ought to precede the lighter. An example is cobalt having the atomic weight of 58.9 and nickel having the atomic weight of 58.7. Henry Moseley and other physicists used X-ray diffraction to study the elements, and the results of their experiments led to organizing the periodic table by proton count. Apparatus Since the spectral emissions for the lighter elements would be in the soft X-ray range (absorbed by air), the spectrometry apparatus had to be enclosed inside a vacuum. Details of the experimental setup are documented in the journal articles "The High-Frequency Spectra of the Elements" Part I and Part II. Results Moseley found that the Kα lines (in Siegbahn notation) were indeed related to the atomic number, Z. Following Bohr's lead, Moseley found that for the spectral lines, this relationship could be approximated by a simple formula, later called Moseley's law: ν = A·(Z − b)², where: ν is the frequency of the observed X-ray emission line, and A and b are constants that depend on the type of line (that is, K, L, etc. in X-ray notation); A = (3/4)·(Rydberg frequency) and b = 1 for Kα lines, and A = (5/36)·(Rydberg frequency) and b = 7.4 for Lα lines. Derivation Moseley derived his formula empirically by fitting the square root of the X-ray frequency plotted against the atomic number. This formula can be explained based on the Bohr model of the atom, namely,
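A quick numerical check of the Kα form of the law, restated in energy terms via E = hν with the Rydberg energy of 13.6 eV; the predicted value for copper lands close to the measured Cu Kα line near 8.05 keV.

```python
# Moseley's law for Kα lines, energy form: E ≈ (3/4) · 13.6 eV · (Z − 1)².
RYDBERG_EV = 13.6

def k_alpha_energy_ev(z: int) -> float:
    return 0.75 * RYDBERG_EV * (z - 1) ** 2

print(k_alpha_energy_ev(29) / 1000)  # ≈ 8.0 keV for copper (measured ≈ 8.05 keV)
```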
https://en.wikipedia.org/wiki/Flat-field%20correction
Flat-field correction (FFC) is a digital imaging technique to mitigate the image detector pixel-to-pixel sensitivity and distortions in the optical path. It is a standard calibration procedure in everything from personal digital cameras to large telescopes. Overview Flat fielding refers to the process of compensating for different gains and dark currents in a detector. Once a detector has been appropriately flat-fielded, a uniform signal will create a uniform output (hence flat-field). This then means any further signal is due to the phenomenon being detected and not a systematic error. A flat-field image is acquired by imaging a uniformly-illuminated screen, thus producing an image of uniform color and brightness across the frame. For handheld cameras, the screen could be a piece of paper at arm's length, but a telescope will frequently image a clear patch of sky at twilight, when the illumination is uniform and there are few, if any, stars visible. Once the images are acquired, processing can begin. A flat-field consists of two numbers for each pixel, the pixel's gain and its dark current (or dark frame). The pixel's gain is how the amount of signal given by the detector varies as a function of the amount of light (or equivalent). The gain is almost always a linear variable; as such, the gain is given simply as the ratio of the output to the input signal. The dark current is the amount of signal given out by the detector when there is no incident light (hence dark frame). In many detectors this can also be a function of time; for example, in astronomical telescopes it is common to take a dark-frame of the same exposure time as the planned light exposure. The gain and dark-frame for optical systems can also be established by using a series of neutral density filters to give input/output signal information and applying a least squares fit to obtain the values for the dark current and gain. The corrected image is then C = (R − D)·m / (F − D), where: C = corrected image, R = raw image, F = flat-field image, D = dark-field or dark-frame image, and m = the image-averaged value of (F − D).
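A sketch of the correction in NumPy, following the formula above; the tiny arrays stand in for a raw frame, a flat-field frame, and a dark frame of matching geometry.

```python
import numpy as np

def flat_field_correct(raw: np.ndarray, flat: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """C = (R - D) * m / (F - D), with m the image-averaged value of (F - D)."""
    gain = flat - dark
    m = gain.mean()
    return (raw - dark) * m / gain

# Tiny synthetic example: pixel 1 is twice as sensitive as pixel 0.
raw = np.array([[100.0, 190.0]])
flat = np.array([[60.0, 110.0]])
dark = np.array([[10.0, 10.0]])
print(flat_field_correct(raw, flat, dark))  # both pixels -> 135.0: uniform output
```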
https://en.wikipedia.org/wiki/Air%20interface
The air interface, or access mode, is the communication link between the two stations in mobile or wireless communication. The air interface involves both the physical and data link layers (layers 1 and 2) of the OSI model for a connection. Physical Layer The physical connection of an air interface is generally radio-based. This is usually a point-to-point link between an active base station and a mobile station. Technologies like Opportunity-Driven Multiple Access (ODMA) may have flexibility regarding which devices serve in which roles. Some types of wireless connections possess the ability to broadcast or multicast. Multiple links can be created in limited spectrum through FDMA, TDMA, or SDMA. Some advanced forms of transmission multiplexing combine frequency- and time-division approaches, like OFDM or CDMA. In cellular telephone communications, the air interface is the radio-frequency portion of the circuit between the cellular phone set or wireless modem (usually portable or mobile) and the active base station. As a subscriber moves from one cell to another in the system, the active base station changes periodically. Each changeover is known as a handoff. In radio and electronics, an antenna (plural antennae or antennas), or aerial, is an electrical device which converts electric power into radio waves, and vice versa. It is usually used with a radio transmitter or radio receiver. In transmission, a radio transmitter supplies an electric current oscillating at radio frequency to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). An antenna focuses the radio waves in a certain direction, usually called the main direction; correspondingly, less energy is emitted in other directions. The gain of an antenna, in a given direction, is usually referenced to a (hypothetical) isotropic antenna, which radiates equally strongly in all directions. The antenna gain is the power in the
https://en.wikipedia.org/wiki/148%20%28number%29
148 (one hundred [and] forty-eight) is the natural number following 147 and before 149. In mathematics 148 is the second number to be both a heptagonal number and a centered heptagonal number (the first is 1). It is the twelfth member of the Mian–Chowla sequence, the lexicographically smallest sequence of distinct positive integers with distinct pairwise sums. There are 148 perfect graphs with six vertices, and 148 ways of partitioning four people into subsets, ordering the subsets, and selecting a leader for each subset. In other fields In the Book of Nehemiah 7:44 there are 148 singers, sons of Asaph, at the census of men of Israel upon return from exile. This differs from Ezra 2:41, where the number is given as 128. Dunbar's number is a theoretical cognitive limit to the number of people with whom one can maintain stable interpersonal relationships. Dunbar predicted a "mean group size" of 148, but this is commonly rounded to 150. See also The year AD 148 or 148 BC List of highways numbered 148
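The two heptagonal claims are easy to verify from the standard closed forms, n(5n − 3)/2 for heptagonal numbers and (7n² − 7n + 2)/2 for centered heptagonal numbers:

```python
def heptagonal(n: int) -> int:
    return n * (5 * n - 3) // 2

def centered_heptagonal(n: int) -> int:
    return (7 * n * n - 7 * n + 2) // 2

print(heptagonal(8))            # 148: the 8th heptagonal number
print(centered_heptagonal(7))   # 148: the 7th centered heptagonal number
print(heptagonal(1), centered_heptagonal(1))  # 1 1: the shared first member
```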
https://en.wikipedia.org/wiki/Nukernel
NuKernel is a microkernel which was developed at Apple Computer during the early 1990s. Written from scratch and designed using concepts from the Mach 3.0 microkernel, with extensive additions for soft real-time scheduling to improve multimedia performance, it was the basis for the Copland operating system. Only one NuKernel version was released, with a Copland alpha release. Development ended in 1996 with the cancellation of Copland. The External Reference Specification (ERS) for NuKernel is contained in its entirety in its patent. The one-time technical lead for NuKernel, Jeff Robbin, was one of the leaders of iTunes and the iPod. Apple's NuKernel should not be confused with nukernel, the microkernel in BeOS. See also XNU, the kernel in Mac OS X
https://en.wikipedia.org/wiki/F-coalgebra
In mathematics, specifically in category theory, an F-coalgebra is a structure defined according to a functor F, with specific properties as defined below. For both algebras and coalgebras, a functor is a convenient and general way of organizing a signature. This has applications in computer science: examples of coalgebras include lazy evaluation, infinite data structures, such as streams, and also transition systems. F-coalgebras are dual to F-algebras. Just as the class of all algebras for a given signature and equational theory form a variety, so does the class of all F-coalgebras satisfying a given equational theory form a covariety, where the signature is given by F. Definition Let F be an endofunctor on a category C. An F-coalgebra is an object A of C together with a morphism α : A → F(A) of C, usually written as (A, α). An F-coalgebra homomorphism from (A, α) to another F-coalgebra (B, β) is a morphism f : A → B in C such that F(f) ∘ α = β ∘ f. Thus the F-coalgebras for a given functor F constitute a category. Examples Consider the endofunctor that sends a set X to its disjoint union X + 1 with the singleton set 1 = {*}. A coalgebra of this endofunctor is given by (N̄, pred), where N̄ is the so-called conatural numbers, consisting of the nonnegative integers and also infinity, and the function pred : N̄ → N̄ + 1 is given by pred(0) = *, pred(n) = n − 1 for n ≥ 1, and pred(∞) = ∞. In fact, (N̄, pred) is the terminal coalgebra of this endofunctor. More generally, fix some set A, and consider the functor that sends X to A × X + 1. Then an F-coalgebra (S, σ) is a finite or infinite stream over the alphabet A, where S is the set of states and σ : S → A × S + 1 is the state-transition function. Applying the state-transition function to a state may yield two possible results: either an element of A together with the next state of the stream, or the element of the singleton set as a separate "final state" indicating that there are no more values in the stream. In many practical applications, the state-transition function of such a coalgebraic object may be of the form X → A₁ × ⋯ × Aₙ × X, which readily factorizes into a collection of "selectors", "observers", "methods"
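A sketch of the stream coalgebra σ : X → A × X + 1 described above, in Python: the transition function either returns None (the "final state": end of stream) or a (value, next_state) pair, and unfolding it from an initial state reproduces the stream. All names are illustrative.

```python
from typing import Callable, Iterator, Optional, Tuple

State = int
Transition = Callable[[State], Optional[Tuple[str, State]]]  # X -> A×X + 1

def countdown(state: State) -> Optional[Tuple[str, State]]:
    """A coalgebra: emit 'tick' and decrement, stopping at zero (the 1 summand)."""
    if state == 0:
        return None               # the element of the singleton set: stream ends
    return ("tick", state - 1)    # an element of A together with the next state

def unfold(sigma: Transition, state: State) -> Iterator[str]:
    """Run the coalgebra: repeatedly apply the state-transition function."""
    step = sigma(state)
    while step is not None:
        value, state = step
        yield value
        step = sigma(state)

print(list(unfold(countdown, 3)))  # ['tick', 'tick', 'tick']
```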
https://en.wikipedia.org/wiki/Institutional%20review%20board
An institutional review board (IRB), also known as an independent ethics committee (IEC), ethical review board (ERB), or research ethics board (REB), is a committee at an institution that applies research ethics by reviewing the methods proposed for research done at that institution to ensure that the projects are ethical. Such boards are formally designated to approve (or reject), monitor, and review biomedical and behavioral research involving humans, and they are legally required in some countries under certain specified circumstances. Most countries use some form of IRB to safeguard ethical conduct of research so that it complies with national and international norms, regulations or codes. The purpose of the IRB is to assure that appropriate steps are taken to protect the rights and welfare of people participating in a research study. A key goal of IRBs is to protect human subjects from physical or psychological harm, which they attempt to do by reviewing research protocols and related materials. The protocol review assesses the ethics of the research and its methods, promotes fully informed and voluntary participation by prospective subjects, and seeks to maximize the safety of subjects. They often conduct some form of risk-benefit analysis in an attempt to determine whether or not research should be conducted. IRBs are most commonly used for studies in the fields of health and the social sciences, including anthropology, sociology, and psychology. Such studies may be clinical trials of new drugs or medical devices, studies of personal or social behavior, opinions or attitudes, or studies of how health care is delivered and might be improved. Many types of research, such as research into which teaching methods are appropriate, unstructured research such as oral histories, journalistic research, research conducted by private individuals, and research that does not involve human subjects, are not typically required to have IRB approval.
https://en.wikipedia.org/wiki/Frequency%20changer
A frequency changer or frequency converter is an electronic or electromechanical device that converts alternating current (AC) of one frequency to alternating current of another frequency. The device may also change the voltage, but if it does, that is incidental to its principal purpose, since voltage conversion of alternating current is much easier to achieve than frequency conversion. Traditionally, these devices were electromechanical machines called motor-generator sets; devices with mercury-arc rectifiers or vacuum tubes were also in use. With the advent of solid state electronics, it has become possible to build completely electronic frequency changers. These devices usually consist of a rectifier stage (producing direct current) which is then inverted to produce AC of the desired frequency. The inverter may use thyristors, IGCTs or IGBTs. If voltage conversion is desired, a transformer will usually be included in either the AC input or output circuitry, and this transformer may also provide galvanic isolation between the input and output AC circuits. A battery may also be added to the DC circuitry to improve the converter's ride-through of brief outages in the input power. Frequency changers vary in power-handling capability from a few watts to megawatts. Applications Frequency changers are used for converting bulk AC power from one frequency to another, when two adjacent power grids operate at different utility frequency. A variable-frequency drive (VFD) is a type of frequency changer used for speed control of AC motors such as those used for pumps and fans. The speed of a synchronous AC motor is dependent on the frequency of the AC power supply, so changing the frequency allows the motor speed to be changed. This allows fan or pump output to be varied to match process conditions, which can provide energy savings. A cycloconverter is also a type of frequency changer. Unlike a VFD, which is an indirect frequency changer since it uses an AC-DC stage and then a DC-AC stage
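The synchronous-speed relationship behind VFD speed control is simple enough to show directly: N = 120·f / P, with N in rpm, f in Hz, and P the number of poles.

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous AC motor speed: N = 120 * f / P."""
    return 120.0 * frequency_hz / poles

print(synchronous_speed_rpm(60, 4))  # 1800 rpm at 60 Hz
print(synchronous_speed_rpm(30, 4))  # 900 rpm: halving frequency halves speed
```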
https://en.wikipedia.org/wiki/Gigantothermy
Gigantothermy (sometimes called ectothermic homeothermy or inertial homeothermy) is a phenomenon with significance in biology and paleontology, whereby large, bulky ectothermic animals are more easily able to maintain a constant, relatively high body temperature than smaller animals by virtue of their smaller surface-area-to-volume ratio. A bigger animal has proportionately less of its body close to the outside environment than a smaller animal of otherwise similar shape, and so it gains heat from, or loses heat to, the environment much more slowly. The phenomenon is important in the biology of ectothermic megafauna, such as large turtles, and aquatic reptiles like ichthyosaurs and mosasaurs. Gigantotherms, though almost always ectothermic, generally have a body temperature similar to that of endotherms. It has been suggested that the larger dinosaurs would have been gigantothermic, rendering them virtually homeothermic. Disadvantages Gigantothermy allows animals to maintain body temperature, but is most likely detrimental to endurance and muscle power as compared with endotherms, due to decreased anaerobic efficiency. Mammals' bodies have roughly four times as much surface area occupied by mitochondria as reptiles', necessitating larger energy demands, and consequently producing more heat to use in thermoregulation. An ectotherm the same size as an endotherm would not be able to remain as active as the endotherm, as its heat is modulated behaviorally rather than biochemically: more time is dedicated to basking than eating. Advantages Large ectotherms displaying the same body size as large endotherms have the advantage of a slow metabolic rate, meaning that it takes reptiles longer to digest their food. Consequently, gigantothermic ectotherms would not have to eat as often as large endotherms, which need to maintain a constant influx of food to meet energy demands. Although lions are much smaller than crocodiles, the lions must eat more often than crocodiles because o
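The surface-area-to-volume argument can be made concrete with spheres: scaling the radius by a factor of k multiplies surface area by k² but volume by k³, so the ratio falls as 1/k.

```python
import math

def surface_to_volume_ratio(radius: float) -> float:
    """SA/V for a sphere: (4πr²) / ((4/3)πr³) = 3/r."""
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume

print(surface_to_volume_ratio(0.1))  # ≈ 30: small animal, much surface per volume
print(surface_to_volume_ratio(1.0))  # ≈ 3: tenfold larger, tenfold less relative surface
```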
https://en.wikipedia.org/wiki/Biomineralization
Biomineralization, also written biomineralisation, is the process by which living organisms produce minerals, often resulting in hardened or stiffened mineralized tissues. It is an extremely widespread phenomenon: all six taxonomic kingdoms contain members that are able to form minerals, and over 60 different minerals have been identified in organisms. Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. These minerals often form structural features such as sea shells and the bone in mammals and birds. Organisms have been producing mineralized skeletons for the past 550 million years. Calcium carbonates and calcium phosphates are usually crystalline, but the silica minerals formed by organisms such as sponges and diatoms are always non-crystalline (amorphous). Other examples include copper, iron, and gold deposits involving bacteria. Biologically formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity-sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin). In terms of taxonomic distribution, the most common biominerals are the phosphate and carbonate salts of calcium that are used in conjunction with organic polymers such as collagen and chitin to give structural support to bones and shells. The structures of these biocomposite materials are highly controlled from the nanometer to the macroscopic level, resulting in complex architectures that provide multifunctional properties. Because this range of control over mineral growth is desirable for materials engineering applications, there is interest in understanding and elucidating the mechanisms of biologically-controlled biomineralization. Types Mineralization can be subdivided into different categories depending on the following: the organisms or processes that create chemical conditions necessary for mineral formation, the origin of the substrate at the site of m
https://en.wikipedia.org/wiki/Aleph%20kernel
Aleph is a discontinued operating system kernel developed at the University of Rochester as part of their Rochester's Intelligent Gateway (RIG) project in 1975. Aleph was an early step on the road to the creation of the first practical microkernel operating system, Mach. Aleph used inter-process communications to move data between programs and the kernel, so applications could transparently access resources on any machine on the local area network (which at the time was a 3-Mbit/s experimental Xerox Ethernet). The project eventually petered out after several years due to rapid changes in the computer hardware market, but the ideas led to the creation of Accent at Carnegie Mellon University, leading in turn to Mach. Applications written for the RIG system communicated via ports. Ports were essentially message queues that were maintained by the Aleph kernel, identified by a machine-unique (as opposed to globally unique) ID consisting of a (process id, port id) pair. Processes were automatically assigned a process number, or pid, on startup, and could then ask the kernel to open ports. Processes could open several ports and then "read" them, automatically blocking and allowing other programs to run until data arrived. Processes could also "shadow" another, receiving a copy of every message sent to the one it was shadowing. Similarly, programs could "interpose" on another, receiving messages and essentially cutting the original message out of the conversation. RIG was implemented on a number of Data General Eclipse minicomputers. The ports were implemented using memory buffers, limited to 2 kB in size. This produced significant overhead when copying large amounts of data. Another problem, realized only in retrospect, was that the use of global IDs allowed malicious software to "guess" at ports and thereby gain access to resources it should not have had. And since those IDs were based on the program ID, the port IDs changed if the program was restarted, making it dif
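A toy sketch of RIG-style ports as kernel-maintained message queues addressed by (process id, port id) pairs; the blocking read and the shadow/interpose mechanisms described above are omitted for brevity, and all names are illustrative.

```python
from collections import deque

class ToyPortRegistry:
    """Kernel-side table of message queues, keyed by (pid, port_id)."""
    def __init__(self) -> None:
        self.ports: dict[tuple[int, int], deque[bytes]] = {}

    def open_port(self, pid: int, port_id: int) -> None:
        self.ports[(pid, port_id)] = deque()

    def send(self, pid: int, port_id: int, message: bytes) -> None:
        # Machine-unique addressing: any process that can name the pair can send,
        # which is exactly the guessing weakness noted above.
        self.ports[(pid, port_id)].append(message)

    def receive(self, pid: int, port_id: int) -> bytes:
        return self.ports[(pid, port_id)].popleft()  # real RIG blocked when empty

kernel = ToyPortRegistry()
kernel.open_port(pid=7, port_id=1)
kernel.send(7, 1, b"hello")
print(kernel.receive(7, 1))  # b'hello'
```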
https://en.wikipedia.org/wiki/User%20operation%20prohibition
The user operation prohibition (abbreviated UOP) is a form of use restriction used on video DVD discs and Blu-ray discs. Most DVD players and Blu-ray players prohibit the viewer from performing a large majority of actions during sections of a DVD that are protected or restricted by this feature, and will display the no symbol or a message to that effect if any of these actions are attempted. It is used mainly for copyright notices or warnings, such as an FBI warning in the United States, and "protected" (i.e., unskippable) commercials. Countermeasures Some DVD players ignore the UOP flag, allowing the user full control over DVD playback. Virtually all players that are not purpose-built DVD player hardware (for example, a player program running on a general purpose computer) ignore the flag. There are also modchips available for some standard DVD players for the same purpose. The UOP flag can be removed in DVD ripper software such as: DVD Decrypter, DVD Shrink, AnyDVD, AVS Video Converter, Digiarty WinX DVD Ripper Platinum, MacTheRipper, HandBrake and K9Copy. On many DVD players, pressing stop-stop-play will cause the DVD player to play the movie immediately, ignoring any UOP flags that would otherwise make advertisements, piracy warnings or trailers unskippable. Nevertheless, removing UOP does not always provide navigation function in the restricted parts of the DVD. This is because those parts are sometimes lacking the navigation commands which allow skipping to the menu or other parts of the DVD. This has become more common in recent titles, in order to circumvent the UOP disabling that many applications or DVD players offer. Newer DVD players (i.e. post-c. late 2010) have, however, been designed to override the aforementioned counter-countermeasures. The DVD reader software inside the DVD player automatically generates chapters for parts of the DVD lacking navigation commands, allowing them to be fast-forwarded or skipped; pressing the menu button, even in the
https://en.wikipedia.org/wiki/Contract%20research%20organization
In the life sciences, a contract research organization (CRO) is a company that provides support to the pharmaceutical, biotechnology, and medical device industries in the form of research services outsourced on a contract basis. A CRO may provide such services as biopharmaceutical development, biological assay development, commercialization, clinical development, clinical trials management, pharmacovigilance, outcomes research, and real world evidence. CROs are designed to reduce costs for companies developing new medicines and drugs in niche markets. They aim to simplify entry into drug markets and simplify development, as large pharmaceutical companies no longer need to do everything ‘in house’. CROs also support foundations, research institutions, and universities, in addition to governmental organizations (such as the NIH, EMA, etc.). Many CROs specifically provide clinical-study and clinical-trial support for drugs and/or medical devices. However, the sponsor of the trial retains responsibility for the quality of the CRO's work. CROs range from large, international full-service organizations to small, niche specialty groups. CROs that specialize in clinical-trials services can offer their clients the expertise of moving a new drug or device from its conception to FDA/EMA marketing approval, without the drug sponsor having to maintain a staff for these services. Organizations that have had success in working with a particular CRO in a particular context (e.g. therapeutic area) might be tempted or encouraged to expand their engagement with that CRO into other, unrelated areas; however, caution is required, as CROs are always seeking to expand their experience, and success in one area cannot reliably predict success in unrelated areas that might be new to the organization. Definition, regulatory aspects The International Council on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, a 2015 Swiss NGO of ph
https://en.wikipedia.org/wiki/R%C5%ABpa
Rūpa () means "form". As it relates to any kind of basic object, it has more specific meanings in the context of Indic religions. Definition According to the Monier-Williams Dictionary (2006), rūpa is defined as: ... any outward appearance or phenomenon or colour (often pl.), form, shape, figure RV. &c &c ... to assume a form ; often ifc. = " having the form or appearance or colour of ", " formed or composed of ", " consisting of ", " like to " .... Hinduism In Hinduism, many compound words are made using rūpa to describe subtle and spiritual realities such as the svarupa, meaning the form of the self. It may be used to express matter or material phenomena, especially that linked to the power of vision in samkhya. In the Bhagavad Gita, the Vishvarupa form, an esoteric conception of the Absolute, is described. Buddhism Overall, rūpa is the Buddhist concept of material form, including both the body and external matter. More specifically, in the Pali Canon, rūpa is contextualized in three significant frameworks: rūpa-khandha – "material forms," one of the five aggregates (khandha) by which all phenomena can be categorized (see Fig. 1). rūpa-āyatana – "visible objects," the external sense objects of the eye, one of the six external sense bases (āyatana) by which the world is known (see Fig. 2). nāma-rūpa – "name and form" or "mind and body," which in the causal chain of dependent origination (paticca-samuppāda) arises from consciousness and leads to the arising of the sense bases. In addition, more generally, rūpa is used to describe a statue, in which case it is sometimes called a Buddharupa. In Buddhism, rūpa is one of the skandhas; it is perceived by colors and images. Rūpa-khandha According to the Yogacara school, rūpa is not matter as in the metaphysical substance of materialism. Instead it means both materiality and sensibility—signifying, for example, a tactile object both insofar as that object is made of matter and that the object can be tactilely sensed. In
https://en.wikipedia.org/wiki/Voussoir
A voussoir () is a wedge-shaped element, typically a stone, which is used in building an arch or vault. Although each unit in an arch or vault is a voussoir, two units are of distinct functional importance: the keystone and the springer. The keystone is the centre stone or masonry unit at the apex of an arch. The springer is the lowest voussoir on each side, located where the curve of the arch springs from the vertical support or abutment of the wall or pier. The keystone is often decorated or enlarged. An enlarged and sometimes slightly dropped keystone is often found in Mannerist arches of the 16th century, beginning with the works of Giulio Romano, who also began the fashion for using voussoirs above rectangular openings, rather than a lintel (Palazzo Stati Maccarani, Rome, circa 1522). The word is a stonemason's term borrowed in Middle English from French verbs connoting a "turn" (OED). Each wedge-shaped voussoir turns aside the thrust of the mass above, transferring it from stone to stone to the springer's bottom face (impost), which is horizontal and passes the thrust on to the supports. Voussoir arches distribute weight efficiently, and take maximum advantage of the compressive strength of stone, as in an arch bridge. The outer boundary of a voussoir is an extrados. In Visigothic and Moorish architectural traditions, the voussoirs are often in alternating colours (ablaq), usually red and white. This is also found sometimes in Romanesque architecture. During the 18th and 19th centuries, British bricklayers became aware that, by thickening the vertical mortar joint between regularly shaped bricks from bottom to top, they could construct an elliptical arch of useful strength over either a standard "former", or over specially constructed timber falsework (temporary structure to be removed once the construction is complete). The bricks used in such an arch are often referred to as "voussoirs". See also Glossary of architecture
https://en.wikipedia.org/wiki/Adiabatic%20shear%20band
In physics, mechanics and engineering, an adiabatic shear band is one of the many mechanisms of failure that occur in metals and other materials that are deformed at a high rate in processes such as metal forming, machining and ballistic impact. Adiabatic shear bands are usually very narrow bands, typically 5-500 μm wide and consisting of highly sheared material. Adiabatic is a thermodynamic term meaning an absence of heat transfer – the heat produced is retained in the zone where it is created. (The opposite extreme, where all heat that is produced is conducted away, is isothermal.) Deformation It is necessary to include some basics of plastic deformation to understand the link between heat produced and the plastic work done. If we carry out a compression test on a cylindrical specimen to, say, 50% of its original height, the stress needed to deform the work material will usually increase significantly as the reduction proceeds. This is called ‘work hardening’. During work hardening, distortion of the grain structure and the generation and glide of dislocations alter the micro-structure, storing a small fraction of the work done. The remainder of the plastic work done, which can be as much as 90% of the total, is dissipated as heat. If the plastic deformation is carried out under dynamic conditions, such as by drop forging, then the plastic deformation is localized more as the forging hammer speed is increased. This also means that the deformed material becomes hotter the higher the speed of the drop hammer. Now as metals become warmer, their resistance to further plastic deformation decreases. From this point we can see that there is a type of cascade effect: as more plastic deformation is absorbed by the metal, more heat is produced, making it easier for the metal to deform further. This is a catastrophic effect which almost inevitably leads to failure. History The first person to carry out any reported experimental programme to investigate the heat produced as a result of plastic deformation was Henri Tresca in June 1878 T
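The cascade described above is usually quantified with a standard continuum-mechanics estimate of the adiabatic temperature rise (a textbook estimate, not taken from this article); the Taylor–Quinney coefficient $\beta$, commonly taken as roughly 0.9, is the fraction of plastic work converted to heat:

```latex
% Adiabatic temperature rise from plastic work (standard estimate):
%   beta    : Taylor-Quinney coefficient (fraction of plastic work dissipated as heat)
%   rho     : density,  c_p : specific heat capacity
%   sigma   : flow stress,  epsilon_p : plastic strain
\Delta T = \frac{\beta}{\rho\, c_p} \int_{0}^{\varepsilon_p} \sigma \,\mathrm{d}\varepsilon
```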
https://en.wikipedia.org/wiki/Gelfond%27s%20constant
In mathematics, Gelfond's constant, named after Aleksandr Gelfond, is $e^\pi$, that is, $e$ raised to the power $\pi$. Like both $e$ and $\pi$, this constant is a transcendental number. This was first established by Gelfond and may now be considered as an application of the Gelfond–Schneider theorem, noting that $e^\pi = (e^{i\pi})^{-i} = (-1)^{-i},$ where $i$ is the imaginary unit. Since $-i$ is algebraic but not rational, $e^\pi$ is transcendental. The constant was mentioned in Hilbert's seventh problem. A related constant is $2^{\sqrt{2}}$, known as the Gelfond–Schneider constant. The related value $\pi + e^\pi$ is also irrational. Numerical value The decimal expansion of Gelfond's constant begins $e^\pi = 23.14069\,26327\,79269\ldots$ Construction If one defines $k_0 = \tfrac{1}{\sqrt{2}}$ and $k_{n+1} = \frac{1-\sqrt{1-k_n^2}}{1+\sqrt{1-k_n^2}}$ for $n > 0$, then the sequence $(4/k_{n+1})^{2^{-n}}$ converges rapidly to $e^\pi$. Continued fraction expansion The simple continued fraction expansion of Gelfond's constant is given by the integer sequence A058287. Geometric property The volume of the n-dimensional ball (or n-ball) is given by $V_n = \frac{\pi^{n/2} R^n}{\Gamma\!\left(\frac{n}{2}+1\right)},$ where $R$ is its radius and $\Gamma$ is the gamma function. Any even-dimensional ball has volume $V_{2n} = \frac{\pi^n}{n!} R^{2n},$ and summing up all the unit-ball ($R = 1$) volumes of even dimension gives $\sum_{n=0}^{\infty} V_{2n}(R{=}1) = e^\pi.$ Similar or related constants Ramanujan's constant $e^{\pi\sqrt{163}}$ is known as Ramanujan's constant. It is an application of Heegner numbers, where 163 is the Heegner number in question. Similar to $e^\pi$, $e^{\pi\sqrt{163}}$ is very close to an integer: $262\,537\,412\,640\,768\,743.999\,999\,999\,999\,25\ldots$ This number was discovered in 1859 by the mathematician Charles Hermite. In a 1975 April Fool article in Scientific American magazine, "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it—hence its name. The coincidental closeness, to within 0.000 000 000 000 75 of the number $640320^3 + 744$, is explained by complex multiplication and the q-expansion of the j-invariant, specifically: $e^{\pi\sqrt{163}} = 640320^3 + 744 - \varepsilon$ and $640320^3 + 744 = 262\,537\,412\,640\,768\,744,$ where $\varepsilon \approx \frac{196884}{e^{\pi\sqrt{163}}}$ is the error term, which explains why $e^{\pi\sqrt{163}}$ is 0.000 000 000 000 75 below $640320^3 + 744$. (For more detail on this proof, consult the article on Heegner numbers.) The number $e^\pi - \pi$ The decimal
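A quick numerical check of the even-dimensional ball-volume identity above, using the third-party mpmath library (the identity is simply the Taylor series of $e^x$ evaluated at $x = \pi$):

```python
from mpmath import mp, pi, exp, factorial

mp.dps = 30   # work with 30 significant digits

# Sum of unit-ball volumes over even dimensions 2n: each term is pi^n / n!
total = sum(pi**n / factorial(n) for n in range(100))
print(total)      # 23.1406926327792690...
print(exp(pi))    # Gelfond's constant e^pi, matching the sum
```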
https://en.wikipedia.org/wiki/Acetogen
An acetogen is a microorganism that generates acetate (CH3COO−) as an end product of anaerobic respiration or fermentation. However, this term is usually applied in a narrower sense only to those bacteria and archaea that perform anaerobic respiration and carbon fixation simultaneously through the reductive acetyl coenzyme A (acetyl-CoA) pathway (also known as the Wood-Ljungdahl pathway). These genuine acetogens are also known as "homoacetogens" and they can produce acetyl-CoA (and from that, in most cases, acetate as the end product) from two molecules of carbon dioxide (CO2) and four molecules of molecular hydrogen (H2). This process is known as acetogenesis, and is different from acetate fermentation, although both occur in the absence of molecular oxygen (O2) and produce acetate. Although it was previously thought that only bacteria are acetogens, some archaea can be considered to be acetogens. Acetogens are found in a variety of habitats, generally those that are anaerobic (lack oxygen). Acetogens can use a variety of compounds as sources of energy and carbon; the best studied form of acetogenic metabolism involves the use of carbon dioxide as a carbon source and hydrogen as an energy source. Carbon dioxide reduction is carried out by the key enzyme acetyl-CoA synthase. Together with methane-forming archaea, acetogens constitute the last limbs in the anaerobic food web that leads to the production of methane from polymers in the absence of oxygen. Acetogens may represent ancestors of the first bioenergetically active cells in evolution. Metabolic roles Acetogens have diverse metabolic roles, which help them thrive in different environments. One of their metabolic products is acetate, which is an important nutrient for the host and its inhabiting microbial community, most notably in termite guts. Acetogens also serve as "hydrogen sinks" in the termite GI tract. Hydrogen gas inhibits biodegradation, and acetogens use up this hydrogen gas in the anaerobic environmen
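The stoichiometry implied by "two molecules of carbon dioxide and four molecules of molecular hydrogen" can be written out explicitly; this is the standard textbook summary reaction for hydrogenotrophic acetogenesis, not a quotation from the article:

```latex
% Overall reaction of acetogenesis via the reductive acetyl-CoA
% (Wood-Ljungdahl) pathway:
2\,\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
```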
https://en.wikipedia.org/wiki/Expedition%20%28book%29
Expedition: Being an Account in Words and Artwork of the 2358 A.D. Voyage to Darwin IV is a 1990 speculative evolution and science fiction book written and illustrated by the American artist and writer Wayne Barlowe. Written as a first-person account of a 24th-century crewed expedition to the fictional exoplanet of Darwin IV, Expedition describes and discusses an imaginary extraterrestrial ecosystem as if it were real. The extraterrestrial or alien organisms of Darwin IV were designed to be "truly alien", with Barlowe having grown dissatisfied with the common science fiction trope of alien life being similar to life on Earth, especially the notion of intelligent alien humanoids. None of Darwin IV’s wildlife have eyes, external ears, hair, or jaws, and they bear little resemblance to Earthlings. Various sources of inspiration were used for the creature designs, including dinosaurs, modern beasts and different types of vehicles. Expedition garnered very favorable reviews, being praised particularly for its many illustrations and for the level of detail in the text, which serves to maintain the illusion of realism. Several reviewers also criticized the life forms, finding some of them to be implausible or doubting that Darwin IV could actually function as an ecosystem. In 2005, Expedition was adapted into a TV special for the Discovery Channel titled Alien Planet. Barlowe served as the design consultant and one of the executive producers of the adaptation. Premise Expedition is written as though it were published in the year 2366, five years after Barlowe took part in a crewed expedition to the planet Darwin IV. In the 24th century, the exploitation of the Earth's ecosystem has created an environment so toxic that mass extinctions have wiped out nearly all of its nonhuman animal population. Most of the remaining fauna, with the exception of humans themselves, have suffered horrible mutations and can only be found in zoos. Aided by a benevolent and technologically supe
https://en.wikipedia.org/wiki/Utility%20computing
Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented. This repackaging of computing services became the foundation of the shift to "on demand" computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service. There was some initial skepticism about such a significant shift. However, the new model of computing caught on and eventually became mainstream. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing which has the characteristic of very large computations or sudden peaks in demand which are supported via a large number of computers. "Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer.
https://en.wikipedia.org/wiki/Simply%20typed%20lambda%20calculus
The simply typed lambda calculus ($\lambda^\to$), a form of type theory, is a typed interpretation of the lambda calculus with only one type constructor ($\to$) that builds function types. It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical use of the untyped lambda calculus. The term simple type is also used to refer to extensions of the simply typed lambda calculus such as products, coproducts or natural numbers (System T) or even full recursion (like PCF). In contrast, systems which introduce polymorphic types (like System F) or dependent types (like the Logical Framework) are not considered simply typed. The simple types, except for full recursion, are still considered simple because the Church encodings of such structures can be done using only $\to$ and suitable type variables, while polymorphism and dependency cannot. Syntax In this article, the symbols $\sigma$ and $\tau$ are used to range over types. Informally, the function type $\sigma \to \tau$ refers to the type of functions that, given an input of type $\sigma$, produce an output of type $\tau$. By convention, $\to$ associates to the right: $\sigma \to \tau \to \rho$ is read as $\sigma \to (\tau \to \rho)$. To define the types, a set of base types, $B$, must first be defined. These are sometimes called atomic types or type constants. With this fixed, the syntax of types is: $\tau ::= \tau \to \tau \mid T \quad \text{where } T \in B$. For example, $B = \{a, b\}$ generates an infinite set of types starting with $a,\ b,\ a \to a,\ a \to b,\ b \to a,\ b \to b,\ a \to (a \to a),\ \ldots$ A set of term constants is also fixed for the base types. For example, it might be assumed that one of the base types is $\mathsf{nat}$, and the term constants could be the natural numbers. In the original presentation, Church used only two base types: $o$ for "the type of propositions" and $\iota$ for "the type of individuals". The type $o$ has no term constants, whereas $\iota$ has one term constant. Frequently the calculus with only one base type, usually $o$, is considered. The syntax of the simply typed lambda calculus is essentially that of the lambda calculus itself. The term $x{:}\tau$ denotes that the
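A minimal sketch of a checker for these typing rules, in Python; the term representation (nested tuples) and function names are invented for illustration, not taken from the article:

```python
# Simply typed lambda calculus: a type is a base-type string or ("->", dom, cod).
# Terms: ("var", x) | ("lam", x, type, body) | ("app", f, a)

def type_of(term, ctx):
    kind = term[0]
    if kind == "var":
        return ctx[term[1]]                 # look the variable up in the context
    if kind == "lam":                       # \x:t. body  has type  t -> typeof(body)
        _, x, t, body = term
        cod = type_of(body, {**ctx, x: t})
        return ("->", t, cod)
    if kind == "app":                       # f a: f must have type dom -> cod, a : dom
        _, f, a = term
        ft, at = type_of(f, ctx), type_of(a, ctx)
        if ft[0] == "->" and ft[1] == at:
            return ft[2]
        raise TypeError(f"cannot apply {ft} to {at}")
    raise ValueError(f"unknown term {term!r}")

# identity on base type o:  \x:o. x  :  o -> o
print(type_of(("lam", "x", "o", ("var", "x")), {}))   # ('->', 'o', 'o')
```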
https://en.wikipedia.org/wiki/Symbolic%20dynamics
In mathematics, symbolic dynamics is the practice of modeling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator. Formally, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another. History The idea goes back to Jacques Hadamard's 1898 paper on the geodesics on surfaces of negative curvature. It was applied by Marston Morse in 1921 to the construction of a nonperiodic recurrent geodesic. Related work was done by Emil Artin in 1924 (for the system now called Artin billiard), Pekka Myrberg, Paul Koebe, Jakob Nielsen, G. A. Hedlund. The first formal treatment was developed by Morse and Hedlund in their 1938 paper. George Birkhoff, Norman Levinson and the pair Mary Cartwright and J. E. Littlewood have applied similar methods to qualitative analysis of nonautonomous second order differential equations. Claude Shannon used symbolic sequences and shifts of finite type in his 1948 paper A mathematical theory of communication that gave birth to information theory. During the late 1960s the method of symbolic dynamics was extended to hyperbolic toral automorphisms by Roy Adler and Benjamin Weiss, and to Anosov diffeomorphisms by Yakov Sinai, who used the symbolic model to construct Gibbs measures. In the early 1970s the theory was extended to Anosov flows by Marina Ratner, and to Axiom A diffeomorphisms and flows by Rufus Bowen. A spectacular application of the methods of symbolic dynamics is Sharkovskii's theorem about periodic orbits of a continuous map of an interval into itself (1964). Examples Concepts such as heteroclinic orbits and homoclinic orbits have a particularly simple representation in symbolic dynamics.
https://en.wikipedia.org/wiki/Subshift%20of%20finite%20type
In mathematics, subshifts of finite type are used to model dynamical systems, and in particular are the objects of study in symbolic dynamics and ergodic theory. They also describe the set of all possible sequences executed by a finite state machine. The most widely studied shift spaces are the subshifts of finite type. Definition Let $V$ be a finite set of $n$ symbols (alphabet). Let $X = V^{\mathbb{Z}}$ denote the set of all bi-infinite sequences of elements of $V$ together with the shift operator $T$. We endow $V$ with the discrete topology and $X$ with the product topology. A symbolic flow or subshift is a closed $T$-invariant subset $Y$ of $X$, and the associated language is the set of finite subsequences of $Y$. Now let $A$ be an $n \times n$ adjacency matrix with entries in $\{0, 1\}$. Using these elements we construct a directed graph $G = (V, E)$ with $V$ the set of vertices and $E$ the set of edges, containing the directed edge $x \to y$ in $E$ if and only if $A_{x,y} = 1$. Let $Y$ be the set of all infinite admissible sequences of edges, where by admissible it is meant that the sequence is a walk of the graph, and the sequence can be either one-sided or two-sided infinite. Let $T$ be the left shift operator on such sequences; it plays the role of the time-evolution operator of the dynamical system. A subshift of finite type is then defined as a pair $(Y, T)$ obtained in this way. If the sequence extends to infinity in only one direction, it is called a one-sided subshift of finite type, and if it is bilateral, it is called a two-sided subshift of finite type. Formally, one may define the sequences of edges as $\Sigma_A^{+} = \{(x_0, x_1, \ldots) : A_{x_j, x_{j+1}} = 1,\ j \in \mathbb{N}\}.$ This is the space of all sequences of symbols such that the symbol $p$ can be followed by the symbol $q$ only if the $(p, q)$-th entry of the matrix $A$ is 1. The space of all bi-infinite sequences is defined analogously: $\Sigma_A = \{(\ldots, x_{-1}, x_0, x_1, \ldots) : A_{x_j, x_{j+1}} = 1,\ j \in \mathbb{Z}\}.$ The shift operator $T$ maps a sequence in the one- or two-sided shift to another by shifting all symbols to the left, i.e. $(Tx)_j = x_{j+1}.$ Clearly this map is only invertible in the case of the two-sided shift. A subshift of finite type is called transitive if $G$ is strongly connected
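A small sketch of the admissibility condition, in Python, using the adjacency matrix of the well-known golden mean shift (where the word 11 is forbidden) as the example:

```python
import itertools

# Adjacency matrix for the golden mean shift: symbol 1 may not follow 1.
A = [[1, 1],
     [1, 0]]

def admissible(word, A):
    """A finite word is admissible if every adjacent pair is allowed by A."""
    return all(A[p][q] == 1 for p, q in zip(word, word[1:]))

def language(n, A):
    """All admissible words of length n (the language of the subshift)."""
    symbols = range(len(A))
    return [w for w in itertools.product(symbols, repeat=n) if admissible(w, A)]

# Word counts 2, 3, 5, 8, 13, ...: Fibonacci numbers, hence "golden mean".
for n in range(1, 6):
    print(n, len(language(n, A)))
```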
https://en.wikipedia.org/wiki/Ferranti%20Pegasus
Pegasus was an early British vacuum-tube (valve) computer built by Ferranti, Ltd that pioneered design features to make life easier for both engineers and programmers. Originally it was named the Ferranti Package Computer as its hardware design followed that of the Elliott 401 with modular plug-in packages. Much of the development was the product of three men: W. S. (Bill) Elliott (hardware); Christopher Strachey (software) and Bernard Swann (marketing and customer support). It was Ferranti's most popular valve computer with 38 being sold. The first Pegasus was delivered in 1956 and the last was delivered in 1959. Ferranti received funding for the development from the National Research Development Corporation (NRDC). At least two Pegasus machines survive, one in The Science Museum, London and one which was displayed in the Science and Industry Museum, Manchester but which has now been moved to storage in the Science Museum archives at Wroughton. The Pegasus in The Science Museum, London ran its first program in December 1959 and was regularly demonstrated until 2009 when it developed a severe electrical fault. In early 2014, the Science Museum decided to retire it permanently, effectively ending the life of one of the world's oldest working computers. The Pegasus officially held the title of the world's oldest working computer until 2012, when the restoration of the Harwell computer was completed at the National Museum of Computing. Design In those days it was often unclear whether a failure was due to the hardware or the program. As a consequence, Christopher Strachey of NRDC, who was himself a brilliant programmer, recommended the following design objectives: The necessity for optimum programming (favoured by Alan Turing) was to be minimised, "because it tended to become a time-wasting intellectual hobby of the programmers". The needs of the programmer were to be a governing factor in selecting the instruction set. It was to be cheap and reliable.
https://en.wikipedia.org/wiki/Magnetic%20deviation
Magnetic deviation is the error induced in a compass by local magnetic fields, which must be allowed for, along with magnetic declination, if accurate bearings are to be calculated. (More loosely, "magnetic deviation" is used by some to mean the same as "magnetic declination". This article is about the former meaning.) Compass readings Compasses are used to determine the direction of true North. However, the compass reading must be corrected for two effects. The first is magnetic declination or variation—the angular difference between magnetic North (the local direction of the Earth's magnetic field) and true North. The second is magnetic deviation—the angular difference between magnetic North and the compass needle due to nearby sources of interference such as magnetically permeable bodies, or other magnetic fields within the field of influence. Sources In navigation manuals, magnetic deviation refers specifically to compass error caused by magnetized iron within a ship or aircraft. This iron has a mixture of permanent magnetization and an induced (temporary) magnetization that is induced by the Earth's magnetic field. Because the latter depends on the orientation of the craft relative to the Earth's field, it can be difficult to analyze and correct for. The deviation errors caused by magnetism in the ship's structure are minimised by precisely positioning small magnets and iron compensators close to the compass. To compensate for the induced magnetization, two magnetically soft iron spheres are placed on side arms. However, because the magnetic "signature" of every ship changes slowly with location, and with time, it is necessary to adjust the compensating magnets periodically to keep the deviation errors to a practical minimum. Magnetic compass adjustment and correction is one of the subjects in the examination curriculum for a shipmaster's certificate of competency. The sources of magnetic deviation vary from compass to compass or vehicle to vehicle. H
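A worked example of applying the two corrections in sequence; the sign convention (easterly errors positive) is an assumption made for illustration, not something mandated by the article:

```python
def compass_to_true(compass_bearing, deviation, variation):
    """True bearing = compass bearing + deviation + variation (east positive).

    deviation: error from magnetized iron aboard the craft (this article's topic)
    variation: magnetic declination between magnetic North and true North
    """
    return (compass_bearing + deviation + variation) % 360

# Compass reads 100 degrees; deviation 2 degrees E, variation 7 degrees W:
print(compass_to_true(100.0, +2.0, -7.0))   # 95.0 degrees true
```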
https://en.wikipedia.org/wiki/Anticaking%20agent
An anticaking agent is an additive placed in powdered or granulated materials, such as table salt or confectioneries, to prevent the formation of lumps (caking) and for easing packaging, transport, flowability, and consumption. Caking mechanisms depend on the nature of the material. Crystalline solids often cake by formation of liquid bridge and subsequent fusion of microcrystals. Amorphous materials can cake by glass transitions and changes in viscosity. Polymorphic phase transitions can also induce caking. Some anticaking agents function by absorbing excess moisture or by coating particles and making them water-repellent. Calcium silicate (CaSiO3), a commonly used anti-caking agent, added to, for example, table salt, absorbs both water and oil. Anticaking agents are also used in non-food items such as road salt, fertilisers, cosmetics, and detergents. Some studies suggest that anticaking agents may have a negative effect on the nutritional content of food; one such study indicated that most anti-caking agents result in the additional degradation of vitamin C added to food. Examples An anticaking agent in salt is denoted in the ingredients, for example, as "anti-caking agent (554)", which is sodium aluminosilicate. This product is present in many commercial table salts as well as dried milk, egg mixes, sugar products, flours and spices. In Europe, sodium ferrocyanide (535) and potassium ferrocyanide (536) are more common anticaking agents in table salt. "Natural" anticaking agents used in more expensive table salt include calcium carbonate and magnesium carbonate. Diatomaceous earth, mostly consisting of silicon dioxide (SiO2), may also be used as an anticaking agent in animal foods, typically mixed in at 2% of the product's dry weight. List of anticaking agents The most widely used anticaking agents include the stearates of calcium and magnesium, silica and various silicates, talc, as well as flour and starch. Ferrocyanides are used for table salt. The following a
https://en.wikipedia.org/wiki/Black%20hole%20electron
In physics, there is a speculative hypothesis that, if there were a black hole with the same mass, charge and angular momentum as an electron, it would share other properties of the electron. Most notably, Brandon Carter showed in 1968 that the magnetic moment of such an object would match that of an electron. This is interesting because calculations ignoring special relativity and treating the electron as a small rotating sphere of charge give a magnetic moment roughly half the experimental value (see Gyromagnetic ratio). However, Carter's calculations also show that a would-be black hole with these parameters would be "super-extremal". Thus, unlike a true black hole, this object would display a naked singularity, meaning a singularity in spacetime not hidden behind an event horizon. It would also give rise to closed timelike curves. Standard quantum electrodynamics (QED), currently the most comprehensive theory of particles, treats the electron as a point particle. There is no evidence either way as to whether the electron is a black hole (or naked singularity). Furthermore, since the electron is quantum-mechanical in nature, any description purely in terms of general relativity is paradoxical until a better model, based on an understanding of the quantum nature of black holes and the gravitational behaviour of quantum particles, is developed. Hence, the idea of a black hole electron remains strictly hypothetical. Details An article published in 1938 by Albert Einstein, Leopold Infeld, and Banesh Hoffmann showed that if elementary particles are treated as singularities in spacetime it is unnecessary to postulate geodesic motion as part of general relativity. The electron may be treated as such a singularity. If one ignores the electron's angular momentum and charge, as well as the effects of quantum mechanics, one can treat the electron as a black hole and attempt to compute its radius. The Schwarzschild radius of a mass is the radius of the event horizon for a non-rotating, uncharged black hole of that mass.
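Carrying out the computation suggested above: a sketch evaluating the Schwarzschild radius $r_s = 2Gm/c^2$ for the electron mass (constants rounded):

```python
# Schwarzschild radius r_s = 2 G m / c^2 evaluated for the electron mass.
G   = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c   = 2.998e8         # speed of light, m/s
m_e = 9.109e-31       # electron mass, kg

r_s = 2 * G * m_e / c**2
print(f"{r_s:.3e} m")   # ~1.35e-57 m, far below the Planck length (~1.6e-35 m)
```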
https://en.wikipedia.org/wiki/Contact%20angle
The contact angle (symbol θC) is the angle between a liquid surface and a solid surface where they meet. More specifically, it is the angle between the surface tangent on the liquid–vapor interface and the tangent on the solid–liquid interface at their intersection. It quantifies the wettability of a solid surface by a liquid via the Young equation. A given system of solid, liquid, and vapor at a given temperature and pressure has a unique equilibrium contact angle. However, in practice a dynamic phenomenon of contact angle hysteresis is often observed, ranging from the advancing (maximal) contact angle to the receding (minimal) contact angle. The equilibrium contact angle lies within those values, and can be calculated from them. The equilibrium contact angle reflects the relative strength of the liquid, solid, and vapour molecular interaction. The contact angle depends upon the medium above the free surface of the liquid, and the nature of the liquid and solid in contact. It is independent of the inclination of the solid to the liquid surface. It changes with surface tension and hence with the temperature and purity of the liquid. Thermodynamics The theoretical description of contact arises from the consideration of a thermodynamic equilibrium between the three phases: the liquid phase (L), the solid phase (S), and the gas or vapor phase (G) (which could be a mixture of ambient atmosphere and an equilibrium concentration of the liquid vapor). (The "gaseous" phase could be replaced by another immiscible liquid phase.) If the solid–vapor interfacial energy is denoted by $\gamma_{SG}$, the solid–liquid interfacial energy by $\gamma_{SL}$, and the liquid–vapor interfacial energy (i.e. the surface tension) by $\gamma_{LG}$, then the equilibrium contact angle $\theta_C$ is determined from these quantities by the Young equation: $\gamma_{SG} = \gamma_{SL} + \gamma_{LG}\cos\theta_C.$ The contact angle can also be related to the work of adhesion via the Young–Dupré equation: $W = \gamma_{LG}(1 + \cos\theta_C),$ where $W$ is the solid–liquid adhesion energy per unit area when in the medium G. Modified Young’s equation
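Rearranging the Young equation gives the contact angle directly; a small sketch in Python, with illustrative surface energies (mN/m) invented for the example rather than measured data:

```python
import math

def young_contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Equilibrium contact angle (degrees) from the Young equation:
    gamma_sg = gamma_sl + gamma_lg * cos(theta_C)."""
    cos_theta = (gamma_sg - gamma_sl) / gamma_lg
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("no equilibrium angle: complete wetting or dewetting")
    return math.degrees(math.acos(cos_theta))

# Illustrative values: gamma_SG = 40, gamma_SL = 20, gamma_LG = 72.8 mN/m
print(young_contact_angle(40.0, 20.0, 72.8))   # about 74 degrees
```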
https://en.wikipedia.org/wiki/Hutch%20%28animal%20cage%29
A hutch is a type of cage used typically for housing domestic rabbits. Other small animals can also be housed in hutches such as guinea pigs, ferrets, and hamsters. Most hutches have a frame constructed of wood, including legs to keep the unit off the ground. The floor may be wood, wire mesh, or some combination of the two. Wire mesh is very bad for rabbits' feet and can cause sore hocks. One or more walls of the hutch are also wire mesh to allow for ventilation. Some hutches have built-in nest boxes and shingled roofs—these are generally intended to be placed directly outside rather than inside another shelter such as a barn. Some hutches have a felt roof. In any case it is important that the hutch is draft-free and provides a shelter in case the animal is scared and wants to retreat to a safe haven. This helps protect the animal not only from harsh weather conditions but also from predator attacks. The generally accepted minimum hutch size is 10 square feet for a 4 kg medium-sized breed. If the animal is very protective or even aggressive, this is generally a sign that the hutch is too small. Over the past decade, however, people more knowledgeable about rabbits' needs have come to regard a hutch of this size, or any small cage for that matter, as unacceptable housing; rabbits love to run and jump and need space. Many animal rescues now require that a predator-safe run be attached to, or contain, the hutch; the run must be at least 10 ft x 6 ft with a run height of 3 ft, or in metric, 3 m x 2 m with a run height of 1 m. (Rabbit Welfare Association and Trust, 2018)
https://en.wikipedia.org/wiki/OpenURL
An OpenURL is similar to a web address, but instead of referring to a physical website, it refers to an article, book, patent, or other resource within a website. OpenURLs are similar to permalinks because they are permanently connected to a resource, regardless of which website the resource is connected to. Libraries and other resource centers are the most common place to find OpenURLs because an OpenURL can help Internet users find a copy of a resource that they may otherwise have limited access to. The source that generates an OpenURL is often a bibliographic citation or bibliographic record in a database. Examples of these databases include Ovid Technologies, Web of Science, Chemical Abstracts Service, Modern Language Association and Google Scholar. The National Information Standards Organization (NISO) has developed standards for OpenURL and its data container as American National Standards Institute (ANSI) standard ANSI/NISO Z39.88-2004. OpenURL standards create a clear structure for links that go from information resource databases (sources) to library services (targets). A target is a resource or service that helps satisfy a user's information needs. Examples of targets include full-text repositories, online journals, online library catalogs and other Web resources and services. OpenURL knowledge bases provide links to the appropriate targets available. History OpenURL was created by Herbert Van de Sompel, a librarian at the University of Ghent, in the late 1990s. His link-server software, SFX, was purchased by the library automation company Ex Libris Group which popularized OpenURL in the information industry. In 2005, a revised version of OpenURL (version 1.0) became ANSI/NISO standard Z39.88-2004, with Van de Sompel's version designated as version 0.1. The new standard provided a framework for describing new formats, as well as defining XML versions of the various formats. In 2006 a research report found some problems affecting the efficiency o
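As an illustration, an OpenURL (version 1.0, key/encoded-value format) pointing at a journal article might look like the following; the resolver host and citation values are hypothetical, while the key names follow the Z39.88-2004 journal format:

```
https://resolver.example.edu/openurl?url_ver=Z39.88-2004
    &rft_val_fmt=info:ofi/fmt:kev:mtx:journal
    &rft.genre=article
    &rft.jtitle=Journal+of+Examples
    &rft.atitle=An+Example+Article
    &rft.volume=12&rft.spage=34&rft.date=2004
```

A link server at the resolver host parses the citation keys and offers whatever targets (full text, catalog holdings, interlibrary loan) its knowledge base lists for that item.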
https://en.wikipedia.org/wiki/Nape
The nape is the back of the neck. In technical anatomical/medical terminology, the nape is also called the nucha (from the Medieval Latin rendering of an Arabic word). The corresponding adjective is nuchal, as in the term nuchal rigidity for neck stiffness. In many mammals the nape bears a loose, non-sensitive area of skin, known as the scruff, by which a mother carries her young by her teeth, temporarily immobilizing it during transport. In the mating of cats the male will grip the female's scruff with his teeth to help immobilize her during the act, a form of pinch-induced behavioral inhibition. Cultural connotations In traditional Japanese culture, the nape was one of the few areas of the body (other than face and hands) left uncovered by women's attire. The nape of a woman's neck held a strong attraction for many Japanese men. In Egyptian and Lebanese culture, slapping the nape is considered a gesture of utter humiliation.
https://en.wikipedia.org/wiki/LogMeIn%20Hamachi
LogMeIn Hamachi is a virtual private network (VPN) application developed and released in 2004 by Alex Pankratov. It is capable of establishing direct links between computers that are behind network address translation (NAT) firewalls without requiring reconfiguration (when the user's PC can be accessed directly without relays from the Internet/WAN side). Like other VPNs, it establishes a connection over the Internet that emulates the connection that would exist if the computers were connected over a local area network (LAN). Hamachi became a LogMeIn product after the acquisition of Applied Networking Inc. in 2006. It is currently available as a production version for Microsoft Windows and macOS, as a beta version for Linux, and as a system-VPN-based client compatible with Android and iOS. For paid subscribers Hamachi runs in the background on idle computers. The feature was previously available to all users but became restricted to paid subscribers only as of November 19, 2012. Operational summary Hamachi is a proprietary centrally-managed VPN system, consisting of the server cluster managed by the vendor of the system and the client software, which is installed on end-user devices. Client software adds a virtual network interface to a computer, and it is used for intercepting outbound as well as injecting inbound VPN traffic. Outbound traffic sent by the operating system to this interface is delivered to the client software, which encrypts and authenticates it and then sends it to the destination VPN peer over a specially initiated UDP connection. Hamachi currently handles tunneling of IP traffic including broadcasts and multicast. The Windows version also recognizes and tunnels IPX traffic. Each client establishes and maintains a control connection to the server cluster. When the connection is established, the client goes through a login sequence, followed by the discovery process and state synchronization. The login step authenticates the client to the serve
https://en.wikipedia.org/wiki/Reciprocal%20inhibition
Reciprocal inhibition describes the relaxation of muscles on one side of a joint to accommodate contraction on the other side. In some allied health disciplines, this is known as reflexive antagonism. The central nervous system sends a message to the agonist muscle to contract, while the motor neurons that supply the antagonist muscle are inhibited, causing it to relax. Mechanics Joints are controlled by two opposing sets of muscles called extensors and flexors, that work in synchrony for smooth movement. When a muscle spindle is stretched, the stretch reflex is activated, and the opposing muscle group must be inhibited to prevent it from working against the contraction of the homonymous muscle. This inhibition is accomplished by the actions of an inhibitory interneuron in the spinal cord. The afferent of the muscle spindle bifurcates in the spinal cord. One branch innervates the alpha motor neuron that causes the homonymous muscle to contract, producing the reflex. The other branch innervates the inhibitory interneuron, which then innervates the alpha motor neuron that synapses onto the opposing muscle. Because the interneuron is inhibitory, it prevents the opposing alpha motor neuron from firing, thereby reducing the contraction of the opposing muscle. Without this reciprocal inhibition, both groups of muscles might contract simultaneously and work against each other. If opposing muscles were to contract at the same time, a muscle tear can occur. This may occur during physical activities such as running, during which opposing muscles engage and disengage sequentially to produce coordinated movement. Reciprocal inhibition facilitates ease of movement and is a safeguard against injury. However, if a "misfiring" of motor neurons occurs, causing simultaneous contraction of opposing muscles, a tear can occur. For example, if the quadriceps femoris and hamstring contract simultaneously at a high intensity, the stronger muscle (traditionally the quadriceps)
https://en.wikipedia.org/wiki/Supercluster%20%28genetic%29
Most usage of supercluster in population genetics research articles applies to proposed large groups of human mtDNA haplotype lineages, found by cluster analysis, that are thought to stem from a single distant most recent common ancestor, on a time scale of tens of thousands of years. Other usage Usage of supercluster for geographically defined human populations instead of mtDNA strains is rarely seen. However, it does appear in the seminal Cavalli-Sforza paper Reconstruction of human evolution: bringing together genetic, archaeological, and linguistic data. (1988) to describe "Northeurasian" and "Southeast Asian" collections of sampled populations, which are also more frequently referred to in the paper as "major cluster" or simply "cluster". Therefore use of "supercluster" as a euphemism for "race" might be considered a neologism or, more likely, an idiosyncratic usage according to the Google test. Usage of supercluster for populations as well as haplotypes makes the term ambiguous and may require clarification when the word is used. External links Examples of usage to describe haplogroups, not races: The sub-Saharan African mtDNAs belong largely to an mtDNA supercluster L More than 90% of European mtDNAs belong to nine haplogroups (Fig. 1), which are highly specific for Western Eurasia (4, 6). These clusters are all thought to originate from one supercluster, L3n (N). The main determinants of this PC analysis are therefore the HV supercluster members, haplogroups H, HV, and pre-HV. Actually, whereas in the European populations haplogroup H reaches its highest frequencies, HV and pre-HV mtDNAs (when present) have a very low incidence. We have denoted it as lineage M-I as it is obviously different from other members of M supercluster occurring in Siberia/Asia. A high frequency of mtDNA types belonging to Asian supercluster M was peculiar for Yakuts Ethnicity Popula
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay%20filter
A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal, (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures, was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964. Some errors in the tables have been corrected. The method has been extended for the treatment of 2- and 3-dimensional data. Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry and is classed by that journal as one of its "10 seminal papers" saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article". Applications The data consists of a set of points {xj, yj}, j = 1, ..., n, where xj is an independent variable and yj is an observed value. They are treated with a set of m convolution coefficients, Ci, according to the expression Selected convolution coefficients are shown in the tables, below. For example, for smoothing by a 5-point quadratic polynomial, m = 5, i = −2, −1, 0, 1, 2 and the jth smoothed data point, Yj, is given by , where, C−2 = −3/35, C−1 = 12 / 35, etc. There are numerous applications of smoothing, which is performed primarily to make the data appear to be less noisy than it really is. The following are applic
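The 5-point quadratic example in the text can be reproduced, and the quoted coefficients re-derived by least squares; a sketch using NumPy (the full coefficient set for this case is (−3, 12, 17, 12, −3)/35, consistent with the C−2 and C−1 values quoted above):

```python
import numpy as np

# 5-point quadratic Savitzky-Golay smoothing coefficients, as tabulated:
C = np.array([-3, 12, 17, 12, -3]) / 35.0

def sg_smooth(y):
    """Smoothed values at the central points (end points left untreated here)."""
    return np.convolve(y, C[::-1], mode="valid")   # C is symmetric anyway

print(sg_smooth(np.array([1.0, 2.0, 4.0, 8.0, 16.0])))   # [3.9142857...]

# Re-derive C by least-squares fitting a quadratic over i = -2..2:
# the smoothed value at the centre is the fitted constant term a0.
A = np.vander(np.arange(-2, 3), 3, increasing=True)   # rows [1, i, i**2]
C_derived = np.linalg.pinv(A)[0]                      # weights giving a0
print(np.allclose(C, C_derived))                      # True
```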
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Kalm%C3%A1r
László Kalmár (27 March 1905, Edde – 2 August 1976, Mátraháza) was a Hungarian mathematician and Professor at the University of Szeged. Kalmár is considered the founder of mathematical logic and theoretical computer science in Hungary. Biography Kalmár was of Jewish ancestry. His early life mixed promise and tragedy. His father died when he was young, and his mother died when he was 17, the year he entered the University of Budapest, making him essentially an orphan. Kalmár's brilliance manifested itself while in Budapest schools. At the University of Budapest, his teachers included Kürschák and Fejér. His fellow students included the future logician Rózsa Péter. Kalmár graduated in 1927. He discovered mathematical logic, his chosen field, while visiting Göttingen in 1929. Upon completing his doctorate at Budapest, he took up a position at the University of Szeged. That university was mostly made up of staff from the former University of Kolozsvár, a major Hungarian university before World War I that found itself after the War in Romania. Kolozsvár was renamed Cluj. The Hungarian university moved to Szeged in 1920, where there had previously been no university. The appointment of Haar and Riesz turned Szeged into a major research center for mathematics. Kalmár began his career as a research assistant to Haar and Riesz. Kalmár was appointed a full professor at Szeged in 1947. He was the inaugural holder of Szeged's chair for the Foundations of Mathematics and Computer Science. He also founded Szeged's Cybernetic Laboratory and the Research Group for Mathematical Logic and Automata Theory. In mathematical logic, Kalmár proved that certain classes of formulas of the first-order predicate calculus were decidable. In 1936, he proved that the predicate calculus could be formulated using a single binary predicate, if the recursive definition of a term was sufficiently rich. (This result is commonly attributed to a 1954 paper of Quine's.) He discovered an alternative fo
https://en.wikipedia.org/wiki/Western%20Digital%20FD1771
The FD1771, sometimes WD1771, is the first in a line of floppy disk controllers produced by Western Digital. It uses single density FM encoding introduced in the IBM 3740. Later models in the series added support for MFM encoding and increasingly added onboard circuitry that formerly had to be implemented in external components. Originally packaged as 40-pin dual in-line package (DIP) format, later models moved to a 28-pin format that further lowered implementation costs. Derivatives The FD1771 was succeeded by many derivatives that were mostly software-compatible: The FD1781 was designed for double density, but required external modulation and demodulation circuitry, so it could support MFM, M2FM, GCR or other double-density encodings. The FD1791-FD1797 series added internal support for double density (MFM) modulation, compatible with the IBM System/34 disk format. They required an external data separator. The WD1761-WD1767 series were versions of the FD179x series rated for a maximum clock frequency of 1 MHz, resulting in a data rate limit of 125 kbit/s for single density and 250 kbit/s for double density, thus preventing them from being used for 8-in (200 mm) floppy drives or the later "high-density" or 90 mm floppy drives. These were sold at a lower price point and widely used in home computer floppy drives. The WD2791-WD2797 series added an internal data separator using an analog phase-locked loop, with some external passive components required for the VCO. They took a 1 MHz or 2 MHz clock and were intended for and drives. The WD1770, WD1772, and WD1773 added an internal digital data separator and write precompensator, eliminating the need for external passive components but raising the clock rate requirement to 8 MHz. They supported double density, despite the apparent regression of the part number, and were packaged in 28-pin DIP packages. The WD1772PH02-02 was a version of the chip that Atari fitted to the Atari STE which supported high density (500
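The difference between the single-density (FM) and double-density (MFM) encodings these controllers handle can be sketched in a few lines; this is the generic, textbook description of the two schemes, not WD-specific firmware behaviour:

```python
def fm_encode(bits):
    """FM: every bit cell carries a clock pulse, plus a data pulse for a 1."""
    out = []
    for b in bits:
        out += [1, b]                 # clock transition, then data transition
    return out

def mfm_encode(bits):
    """MFM: clock pulse only between two consecutive 0s, halving transitions."""
    out, prev = [], 0
    for b in bits:
        out += [1 if (prev == 0 and b == 0) else 0, b]
        prev = b
    return out

data = [1, 0, 1, 1, 0, 0, 1]
print(fm_encode(data))    # many transitions: half the flux budget goes to clocks
print(mfm_encode(data))   # far fewer transitions: double the data rate at the
                          # same maximum flux-transition density
```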
https://en.wikipedia.org/wiki/Computational%20cognition
Computational cognition (sometimes referred to as computational cognitive science or computational psychology or cognitive simulation) is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach which develops computational models based on experimental results. It seeks to understand the basis behind the human method of processing of information. Early on computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology. Artificial intelligence There are two main purposes for the productions of artificial intelligence: to produce intelligent behaviors regardless of the quality of the results, and to model after intelligent behaviors found in nature. In the beginning of its existence, there was no need for artificial intelligence to emulate the same behavior as human cognition. In the 1950s and 1960s, the economist Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implement the same problem-solving techniques as people would. Their work laid the foundation for symbolic AI and computational cognition, and even some advancements for cognitive science and cognitive psychology. The field of symbolic AI is based on the physical symbol systems hypothesis by Simon and Newell, which states that expressing aspects of cognitive intelligence can be achieved through the manipulation of symbols. However, John McCarthy focused more on the initial purpose of artificial intelligence, which is to break down the essence of logical and abstract reasoning regardless of whether or not humans employ the same mechanism. Over the next decades, the progress made in artificial intelligence started to be focused more on developing logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers started
https://en.wikipedia.org/wiki/Tatung%20Company
Tatung Company () (Tatung; ) is a multinational corporation established in 1918 and headquartered in Zhongshan, Taipei, Taiwan. Description Established in 1918 and headquartered in Taipei, Tatung Company comprises three business groups, which include eight business units: Industrial Appliance BU, Motor BU, Wire & Cable BU, Solar BU, Smart Meter BU, System Integration BU, Appliance BU, and Advanced Electronics BU. As a conglomerate, Tatung's investees are involved in major industries such as optoelectronics, energy, system integration, industrial systems, branding retail channels, and asset development. History Xie Zhi Business Enterprise, the forerunner of Tatung Company, was established in 1918 by Shang-Zhi Lin. It was involved in high-profile construction projects, including the Tamsui River embankment project and the Executive Yuan building. In 1939, Tatung Iron Works was established as the company ventured into iron and steel manufacturing. Following the arrival of the ROC administration in 1945, Tatung Iron Works was renamed Tatung Steel and Machinery Manufacturing Company. The company began mass production of electrical motors and appliances 10 years later in 1949. In 1962 the company became publicly listed on the Taiwan Stock Exchange, and was renamed Tatung Company in 1968. A year later, Tatung began production of color TVs, and adopted the "Tatung Boy" mascot, which became a Taiwanese cultural symbol. Timeline 1970 Revenues exceeded NT$2.2 billion, making Tatung Taiwan's foremost private company. 1972 W. S. Lin, the grandson of Shang-Zhi Lin, was appointed as president of Tatung. Shortly thereafter he was implicated in a case of embezzlement at Tatung which would take more than ten years to litigate. 1977 Participated in the Ten Major Construction Projects with the construction of a slag treatment facility for China Steel and provision for Chiang Kai-shek International Airport's power control station 2000 Chunghwa Picture Tubes was listed on the OTC marke
https://en.wikipedia.org/wiki/Narrow-gap%20semiconductor
Narrow-gap semiconductors are semiconducting materials with a magnitude of bandgap that is smaller than 0.5 eV, which corresponds to an infrared absorption cut-off wavelength over 2.5 micron. A more extended definition includes all semiconductors with bandgaps smaller than silicon (1.1 eV). Modern terahertz, infrared, and thermographic technologies are all based on this class of semiconductors. Narrow-gap materials made it possible to realize satellite remote sensing, photonic integrated circuits for telecommunications, and unmanned vehicle Li-Fi systems, in the regime of infrared detection and infrared vision. They are also the materials basis for terahertz technology, including security surveillance for uncovering concealed weapons, safe medical and industrial imaging with terahertz tomography, as well as dielectric wakefield accelerators. In addition, thermophotovoltaics embedded with narrow-gap semiconductors can potentially use the traditionally wasted portion of solar energy that makes up ~49% of the sunlight spectrum. Spacecraft, deep-ocean instruments, and vacuum physics setups use narrow-gap semiconductors to achieve cryogenic cooling. List of narrow-gap semiconductors {| class="wikitable" |- !Name !Chemical formula !Groups !Band gap (300 K) |- | Mercury cadmium telluride | Hg1−xCdxTe | II-VI | 0 to 1.5 eV |- | Mercury zinc telluride | Hg1−xZnxTe | II-VI | 0.15 to 2.25 eV |- | Lead selenide | PbSe | IV-VI | 0.27 eV |- | Lead(II) sulfide | PbS | IV-VI | 0.37 eV |- | Lead telluride | PbTe | IV-VI | 0.32 eV |- | Indium arsenide | InAs | III-V | 0.354 eV |- | Indium antimonide | InSb | III-V | 0.17 eV |- |Gallium antimonide | GaSb | III-V | 0.67 eV |- | Cadmium arsenide | Cd3As2 | II-V | 0.5 to 0.6 eV |- | Bismuth telluride | Bi2Te3 | | 0.21 eV |- | Tin telluride | SnTe | IV-VI | 0.18 eV |- | Tin selenide | SnSe | IV-VI | 0.9 eV |- | Silver(I) selenide | Ag2Se | | 0.07 eV |- |Magnesium silicide |Mg2Si |II-IV |0.79 eV |} See also List of semiconductor materia
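The bandgap-to-wavelength conversion behind the opening sentence follows from $E_g = hc/\lambda$; in convenient units this is the standard rule of thumb $\lambda_c[\mu\mathrm{m}] \approx 1.24 / E_g[\mathrm{eV}]$, sketched below:

```python
def cutoff_wavelength_um(bandgap_eV):
    """Absorption cut-off wavelength in microns: lambda = h*c / E_g."""
    return 1.2398 / bandgap_eV   # 1.2398 um.eV is h*c in these units

print(cutoff_wavelength_um(0.5))    # ~2.48 um: the narrow-gap threshold above
print(cutoff_wavelength_um(0.354))  # ~3.50 um for InAs (from the table)
print(cutoff_wavelength_um(1.1))    # ~1.13 um for silicon
```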
https://en.wikipedia.org/wiki/8x8
8x8 Inc. is an American provider of Voice over IP products. Its products include cloud-based voice, contact center, video, mobile and unified communications for businesses. Since 2018, 8x8 has managed Jitsi. History The company was founded in 1987 by Chi-Shin Wang and Y. W. Sing, formerly of Weitek, as Integrated Information Technology, Inc., or IIT. The name was changed in the mid-1990s. According to the company, IIT began as an integrated circuit designer. The company produced math coprocessors for x86 microprocessors, as well as graphics accelerator cards for the personal computer market, during the late 1980s. The company later changed its name to 8x8 and began producing products for the videoconferencing market. 8x8 went public on the NASDAQ market in 1997. The company moved its trading to the NYSE in 2017, under the ticker symbol EGHT. In 1999, 8x8 acquired two companies, Odisei and U|Force, to obtain network and server VoIP technologies. In March 2000, 8x8 relaunched itself as a VoIP service provider under the name Netergy Networks. The company changed its name back to 8x8 in July 2001. 8x8 began trading on the Nasdaq SmallCap Market on 26 July 2002. The company's stock was listed on the New York Stock Exchange for a time before switching back to Nasdaq in November 2022. In 2003, the company launched a videophone service. In July 2007, after startup SunRocket was liquidated, 8x8 entered an agreement to accept 200,000 of its customers. Gartner has listed 8x8 several times as a Leader for UCaaS (Unified Communications as a Service) within its Gartner Magic Quadrant, a series of technology market reports. 8x8 has been awarded 128 patents related to semiconductors, computer architecture, video processing algorithms, videophones, and communications technologies and security. In 2018, the company acquired Jitsi and Jitsi Meet. Acquisitions In May 2010, 8x8 acquired Central Host, a California-based managed hosting company. In June 2011, the company announced the acq
https://en.wikipedia.org/wiki/Indeterminate%20growth
In biology and botany, indeterminate growth is growth that is not terminated, in contrast to determinate growth, which stops once a genetically pre-determined structure has completely formed. Thus, a plant that grows and produces flowers and fruit until killed by frost or some other external factor is called indeterminate. For example, the term is applied to tomato varieties that grow in a rather gangly fashion, producing fruit throughout the growing season. In contrast, a determinate tomato plant grows in a more bushy shape, is most productive for a single, larger harvest, and then either tapers off with minimal new growth or fruit or dies. Inflorescences In reference to an inflorescence (a shoot specialised for bearing flowers, and bearing no leaves other than bracts), an indeterminate type (such as a raceme) is one in which the first flowers to develop and open are from the buds at the base, followed progressively by buds nearer to the growing tip. The growth of the shoot is not impeded by the opening of the early flowers or the development of fruits, so it appears to go on growing, producing, and maturing flowers and fruit indefinitely. In practice the continued growth of the terminal end necessarily peters out sooner or later, though without producing any definite terminal flower, and in some species it may stop growing before any of the buds have opened. Not all plants produce indeterminate inflorescences, however; some produce a definite terminal flower that terminates the development of new buds towards the tip of that inflorescence. In most species that produce a determinate inflorescence in this way, all of the flower buds are formed before the first ones begin to open, and all open more or less at the same time. In some species with determinate inflorescences, however, the terminal flower blooms first, which stops the elongation of the main axis, but side buds develop lower down. One type of example is Dianthus; another type is exemplified by Allium; and yet ot
https://en.wikipedia.org/wiki/Aura%20Battler%20Dunbine
Aura Battler Dunbine is an anime television series created by Yoshiyuki Tomino and produced by Sotsu and Sunrise. Forty-nine episodes aired on Nagoya TV from February 5, 1983, to January 21, 1984. A three-episode anime OVA sequel entitled New Story of Aura Battler Dunbine (also known as The Tale of Neo Byston Well) was released in 1988. The series was later dubbed by ADV Films and released to DVD in North America, along with the original Japanese version, in 2003. It soon went out of print and, until 2018, was only available as a digital purchase from the now-defunct Daisuki site; Sentai Filmworks then licensed the series. Premise The story is set in Byston Well, a parallel world that resembles the countryside of medieval Europe, with kingdoms ruled by monarchs in castles, armies of unicorn-riding cavalry armed with swords and crossbows, and little winged creatures called Ferario flying about, offering help or hindrance depending on their mood. The main draw of the series was the insect-like Aura Battlers, used by the population of Byston Well to fight their wars. These fighting suits are powered by a powerful energy called "aura" or "life energy." Certain people are strong enough in aura energy to act as a power supply for these mecha, making them Aura Warriors. Plot The series follows Shō Zama, who finds himself pulled into the world of Byston Well during a vehicular incident with one of his rivals. Byston Well is another dimension, located between the sea and the land, and is populated with dragons, castles, knights, and powerful robots known as Aura Battlers. Once Shō is discovered to possess a formidable "aura", he is drafted into the Byston Well conflict as the pilot of the lavender-colored Dunbine. As in Tomino's other works, a young man is caught in the midst of an ongoing war that threatens to destabilize the world. There are romances that cut across battle lines, and non-stop battles between elaborate fighting craft on land and in the air. Most
https://en.wikipedia.org/wiki/Black%20tar%20heroin
Black tar heroin, also known as black dragon, is a form of heroin that is sticky like tar or hard like coal. Its dark color is the result of crude processing methods that leave behind impurities. Despite its name, black tar heroin can also be dark orange or dark brown in appearance. Black tar heroin is impure diacetylmorphine. Other forms of heroin require additional purification steps after acetylation; with black tar, processing stops immediately after acetylation. Its distinctive consistency, however, is due to acetylation without a reflux apparatus. As with homebake heroin in Australia and New Zealand, the crude acetylation results in a gooey mass. Black tar as a type holds a variable admixture of morphine derivatives—predominantly 6-MAM (6-monoacetylmorphine), which is another result of crude acetylation. The lack of proper reflux during acetylation fails to remove much of the moisture retained in the acetylating agent, acetic anhydride. The acetic anhydride reacts with this moisture to produce the milder acetylating agent acetic acid, which is unable to acetylate the 3-position of the morphine molecule. Black tar heroin is often produced in Latin America and is most commonly found in the western and southern parts of the United States, while also being occasionally found in Western Africa. It has a varying consistency depending on manufacturing methods, cutting agents, and moisture levels, from tarry goo in the unrefined form to a uniform, light-brown powder when further processed and cut with a variety of agents. One of the more notable compounds added to heroin is lactose. Composition Pure morphine and heroin are both fine powders. Black tar heroin's unique appearance and texture are due to its acetylation without the benefit of the usual reflux apparatus. The assumption that tar has fewer adulterants and diluents is a misconception. The most common adulterant is lactose, which is added to tar by dissolving both substances in a liquid
https://en.wikipedia.org/wiki/Game%20demo
A game demo is a trial version of a video game that is limited to a certain time period or a point in progress. Game demos come in forms such as shareware, demo discs, downloadable software, and tech demos. Distribution In the early 1990s, shareware distribution was a popular method for publishing games for smaller developers, including then-fledgling companies such as Apogee Software (now 3D Realms), Epic MegaGames (now Epic Games), and id Software. It gave consumers the chance to try a trial portion of the game, usually restricted to the game's complete first section or "episode", before purchasing the rest of the adventure. Racks of games on single 5" and later 3.5" floppy disks were common in many stores and often sold very cheaply; since the shareware versions were essentially free, the price only needed to cover the disk and minimal packaging. Sometimes, the demo disks were packaged within the box of another game by the same company. As the increasing size of games in the mid-1990s made them impractical to fit on floppy disks, and as retail publishers and developers began to earnestly mimic the practice, shareware games were replaced by shorter demos that were either distributed free on CDs with gaming magazines or offered as free downloads over the Internet, in some cases becoming exclusive content for specific websites. Shareware was also the distribution method of choice for early modern first-person shooters (FPS). There is a technical difference between shareware and demos. Up to the early 1990s, shareware could easily be upgraded to the full version by adding the "other episodes" or the full portion of the game; this would leave the existing shareware files intact. Demos are different in that they are "self-contained" programs that cannot be upgraded to the full version. An example is the Descent shareware versus the Descent II demo; players were able to retain their saved games with the former but not the latter. Magazines that include the demos on a CD or DVD and likewise
https://en.wikipedia.org/wiki/Phagosome
In cell biology, a phagosome is a vesicle formed around a particle engulfed by a phagocyte via phagocytosis. Professional phagocytes include macrophages, neutrophils, and dendritic cells (DCs). A phagosome is formed by the fusion of the cell membrane around a microorganism, a senescent cell, or an apoptotic cell. Phagosomes have membrane-bound proteins that recruit and fuse with lysosomes to form mature phagolysosomes. The lysosomes contain hydrolytic enzymes and reactive oxygen species (ROS), which kill and digest the pathogens. Phagosomes can also form in non-professional phagocytes, but these can only engulf a smaller range of particles and do not contain ROS. The useful materials (e.g. amino acids) from the digested particles are moved into the cytosol, and waste is removed by exocytosis. Phagosome formation is crucial for tissue homeostasis and for both innate and adaptive host defense against pathogens. However, some bacteria can exploit phagocytosis as an invasion strategy. They either reproduce inside of the phagolysosome (e.g. Coxiella spp.) or escape into the cytoplasm before the phagosome fuses with the lysosome (e.g. Rickettsia spp.). Many mycobacteria, including Mycobacterium tuberculosis and Mycobacterium avium paratuberculosis, can manipulate the host macrophage to prevent lysosomes from fusing with phagosomes and creating mature phagolysosomes. Such incomplete maturation of the phagosome maintains an environment favorable to the pathogens inside it. Formation Phagosomes are large enough to degrade whole bacteria, or apoptotic and senescent cells, which are usually >0.5 μm in diameter. This means a phagosome is several orders of magnitude bigger than an endosome, which is measured in nanometres. Phagosomes are formed when pathogens or opsonins bind to transmembrane receptors, which are randomly distributed on the phagocyte cell surface. Upon binding, "outside-in" signalling triggers actin polymerisation and pseudopodia formation, which surrounds and fus
https://en.wikipedia.org/wiki/Digital%20Data%20Communications%20Message%20Protocol
Digital Data Communications Message Protocol (DDCMP) is a byte-oriented communications protocol devised by Digital Equipment Corporation in 1974 to allow communication over point-to-point network links for the company's DECnet Phase I network protocol suite. The protocol runs over full- or half-duplex synchronous and asynchronous links and allows errors introduced in transmission to be detected and corrected. It was retained and extended for later versions of the DECnet protocol suite. DDCMP has been described as the "most popular and pervasive of the commercial byte-count data link protocols". See also DEX
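To make "byte-count data link protocol" concrete, here is a hedged sketch of byte-count framing with a checksum trailer. It illustrates the general technique only, not DDCMP's actual frame layout; DDCMP itself protects header and payload with 16-bit CRCs, while CRC-32 is used below only because Python ships it:

```python
# Illustrative byte-count framing in the spirit of byte-count data link
# protocols such as DDCMP. NOT the real DDCMP header: fields are simplified.
import struct
import zlib

def frame(payload: bytes) -> bytes:
    # header = 2-byte big-endian byte count; trailer = CRC-32 of the payload
    return struct.pack(">H", len(payload)) + payload + struct.pack(">I", zlib.crc32(payload))

def deframe(data: bytes) -> bytes:
    (count,) = struct.unpack(">H", data[:2])
    payload = data[2:2 + count]
    (crc,) = struct.unpack(">I", data[2 + count:2 + count + 4])
    if zlib.crc32(payload) != crc:
        # a real link layer would request retransmission here
        raise ValueError("transmission error detected")
    return payload

assert deframe(frame(b"hello DECnet")) == b"hello DECnet"
```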
https://en.wikipedia.org/wiki/Breit%20equation
The Breit equation is a relativistic wave equation derived by Gregory Breit in 1929 based on the Dirac equation, which formally describes two or more massive spin-1/2 particles (electrons, for example) interacting electromagnetically to the first order in perturbation theory. It accounts for magnetic interactions and retardation effects to the order of $1/c^2$. When other quantum electrodynamic effects are negligible, this equation has been shown to give results in good agreement with experiment. It was originally derived from the Darwin Lagrangian but later vindicated by the Wheeler–Feynman absorber theory and eventually quantum electrodynamics. Introduction The Breit equation is not only an approximation in terms of quantum mechanics, but also in terms of relativity theory, as it is not completely invariant with respect to the Lorentz transformation. Just as does the Dirac equation, it treats nuclei as point sources of an external field for the particles it describes. For $N$ particles, the Breit equation has the form ($r_{ij}$ is the distance between particle $i$ and $j$): $i\hbar\frac{\partial\Psi}{\partial t}=\left(\sum_{i=1}^{N}\hat{H}_{D}(i)+\sum_{i>j}^{N}\frac{q_i q_j}{r_{ij}}-\sum_{i>j}^{N}\hat{B}_{ij}\right)\Psi$, where $\hat{H}_{D}(i)$ is the Dirac Hamiltonian (see Dirac equation) for particle $i$ at position $\mathbf{r}_i$ and $\phi(\mathbf{r}_i)$ is the scalar potential at that position; $q_i$ is the charge of the particle, thus for electrons $q_i=-e$. The one-electron Dirac Hamiltonians of the particles, along with their instantaneous Coulomb interactions $q_i q_j/r_{ij}$, form the Dirac–Coulomb operator. To this, Breit added the operator (now known as the (frequency-independent) Breit operator): $\hat{B}_{ij}=-\frac{1}{2r_{ij}}\left[\boldsymbol{\alpha}_i\cdot\boldsymbol{\alpha}_j+\frac{(\boldsymbol{\alpha}_i\cdot\mathbf{r}_{ij})(\boldsymbol{\alpha}_j\cdot\mathbf{r}_{ij})}{r_{ij}^{2}}\right]$, where the Dirac matrices for electron $i$ are $\boldsymbol{\alpha}_i=[\alpha_x(i),\alpha_y(i),\alpha_z(i)]$. The two terms in the Breit operator account for retardation effects to the first order. The wave function $\Psi$ in the Breit equation is a spinor with $4^N$ elements, since each electron is described by a Dirac bispinor with 4 elements as in the Dirac equation, and the total wave function is the tensor product of these. Breit Hamiltonians The total Hamiltonian of the Breit equation, sometimes called the Dirac–Coulomb–Breit Hamiltonian ($\hat{H}_{DCB}$), can be decomposed into the follo
https://en.wikipedia.org/wiki/Euclidean%20plane%20isometry
In geometry, a Euclidean plane isometry is an isometry of the Euclidean plane, or more informally, a way of transforming the plane that preserves geometrical properties such as length. There are four types: translations, rotations, reflections, and glide reflections (see below under Classification). The set of Euclidean plane isometries forms a group under composition: the Euclidean group in two dimensions. It is generated by reflections in lines, and every element of the Euclidean group is the composite of at most three distinct reflections. Informal discussion Informally, a Euclidean plane isometry is any way of transforming the plane without "deforming" it. For example, suppose that the Euclidean plane is represented by a sheet of transparent plastic sitting on a desk. Examples of isometries include: Shifting the sheet one inch to the right. Rotating the sheet by ten degrees around some marked point (which remains motionless). Turning the sheet over to look at it from behind. Notice that if a picture is drawn on one side of the sheet, then after turning the sheet over, we see the mirror image of the picture. These are examples of translations, rotations, and reflections respectively. There is one further type of isometry, called a glide reflection (see below under classification of Euclidean plane isometries). However, folding, cutting, or melting the sheet are not considered isometries. Neither are less drastic alterations like bending, stretching, or twisting. Formal definition An isometry of the Euclidean plane is a distance-preserving transformation of the plane. That is, it is a map $T:\mathbb{R}^2\to\mathbb{R}^2$ such that for any points p and q in the plane, $d(T(p),T(q))=d(p,q)$, where d(p, q) is the usual Euclidean distance between p and q. Classification It can be shown that there are four types of Euclidean plane isometries. (Note: the notations for the types of isometries listed below are not completely standardised.) Reflections Reflections, or mirror isometries, denoted by Fc,v, where c is a point
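The classification can be made concrete by writing an isometry as x ↦ Ax + b with A an orthogonal 2×2 matrix: det A = +1 gives translations and rotations, det A = −1 gives reflections and glide reflections. A small numerical sketch (the function names are mine):

```python
# A plane isometry as x -> Ax + b with A orthogonal (A^T A = I).
# det A = +1: translation/rotation; det A = -1: reflection/glide reflection.
import numpy as np

def isometry(angle: float, b, reflect: bool = False):
    c, s = np.cos(angle), np.sin(angle)
    A = np.array([[c, -s], [s, c]])        # rotation part
    if reflect:
        A = A @ np.diag([1.0, -1.0])       # flip one axis -> det A = -1
    return lambda x: A @ np.asarray(x) + np.asarray(b)

T = isometry(np.pi / 6, b=[2.0, -1.0], reflect=True)   # a glide-type map
p, q = np.array([0.3, 0.7]), np.array([-1.2, 2.5])
# distance preservation: d(T(p), T(q)) == d(p, q)
assert np.isclose(np.linalg.norm(T(p) - T(q)), np.linalg.norm(p - q))
```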
https://en.wikipedia.org/wiki/William%20Nierenberg
William Aaron Nierenberg (February 13, 1919 – September 10, 2000) was an American physicist who worked on the Manhattan Project and was director of the Scripps Institution of Oceanography from 1965 through 1986. He was a co-founder of the George C. Marshall Institute in 1984. Background Nierenberg was born on February 13, 1919, at 213 E. 13th Street on the Lower East Side of New York, the son of very poor Jewish immigrants from Austria-Hungary. He went to Townsend Harris High School and then the City College of New York (CCNY), where he won a scholarship to spend his junior year abroad in France at the University of Paris. In 1939, he became the first recipient of a William Lowell Putnam fellowship from the City College. Also in 1939, he participated in research at Columbia University, where he took a course in statistical mechanics from his future mentor, I. I. Rabi. He went on to graduate work at Columbia, but from 1941 spent the war years seconded to the Manhattan Project, working on isotope separation, before returning to Columbia to complete his PhD. Career In 1948, Nierenberg took up his first academic staff position, as Assistant Professor of Physics at the University of Michigan. From 1950 to 1965, he was Associate Professor and then Professor of Physics at the University of California, Berkeley, where he ran a very large and productive low-energy nuclear physics laboratory, graduating 40 PhDs during this time and publishing about 100 papers. He was responsible for the determination of more nuclear moments than any other single individual; this work was cited when he was elected to the National Academy of Sciences in 1971. During this period, in 1953, Nierenberg took a one-year leave to serve as the director of the Columbia University Hudson Laboratories, working on naval warfare problems. Later, he oversaw the design and construction of the “new” physics building at Berkeley. Much later (1960–1962) he took leave once again as Assistant Secretary General of the
https://en.wikipedia.org/wiki/Database%20audit
Database auditing involves observing a database to keep track of the actions of its users. Database administrators and consultants often set up auditing for security purposes, for example, to ensure that those without permission to access information do not access it.
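As a concrete illustration, many relational databases can record user actions with triggers that copy changes into an audit table. A minimal sketch using SQLite through Python's standard library (the schema and names are invented for this example; production auditing features vary by product, and SQLite itself has no user accounts to record):

```python
# Minimal change-auditing sketch: a trigger writes each UPDATE to audit_log.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL);
CREATE TABLE audit_log (
    ts     TEXT DEFAULT CURRENT_TIMESTAMP,
    action TEXT,
    row_id INTEGER,
    detail TEXT
);
CREATE TRIGGER audit_update AFTER UPDATE ON accounts
BEGIN
    INSERT INTO audit_log (action, row_id, detail)
    VALUES ('UPDATE', NEW.id,
            'balance ' || OLD.balance || ' -> ' || NEW.balance);
END;
""")
db.execute("INSERT INTO accounts (owner, balance) VALUES ('alice', 100.0)")
db.execute("UPDATE accounts SET balance = 250.0 WHERE owner = 'alice'")
for row in db.execute("SELECT ts, action, row_id, detail FROM audit_log"):
    print(row)   # the audit trail of what changed, and when
```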
https://en.wikipedia.org/wiki/Millennium%20Run
The Millennium Run, or Millennium Simulation (referring to its size), is a computer N-body simulation used to investigate how the distribution of matter in the Universe has evolved over time, in particular how the observed population of galaxies was formed. It is used by scientists working in physical cosmology to compare observations with theoretical predictions. Overview A basic scientific method for testing theories in cosmology is to evaluate their consequences for the observable parts of the universe. One piece of observational evidence is the distribution of matter, including galaxies and intergalactic gas, observed today. Light emitted from more distant matter must travel longer to reach Earth, so looking at distant objects is like looking further back in time. This means the evolution in time of the universe's matter distribution can also be observed directly. The Millennium Simulation was run in 2005 by the Virgo Consortium, an international group of astrophysicists from Germany, the United Kingdom, Canada, Japan and the United States. It starts at the epoch when the cosmic background radiation was emitted, about 379,000 years after the universe began. The cosmic background radiation has been studied by satellite experiments, and the observed inhomogeneities in it serve as the starting point for following the evolution of the corresponding matter distribution. Using the physical laws expected to hold in the currently known cosmologies and simplified representations of the astrophysical processes observed to affect real galaxies, the initial distribution of matter is allowed to evolve, and the simulation's predictions for the formation of galaxies and black holes are recorded. Since the completion of the Millennium Run simulation in 2005, a series of ever more sophisticated and higher-fidelity simulations of the formation of the galaxy population has been built on its stored output and made public
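At its core, such a simulation repeatedly computes gravitational accelerations and advances particles in time. A toy direct-summation sketch of that inner loop follows; the Millennium Run itself used billions of particles with tree/mesh force solvers, and all parameters here are illustrative:

```python
# Toy direct-summation N-body integrator (leapfrog, kick-drift-kick).
# Units, softening and particle count are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
n, G, soft, dt = 64, 1.0, 0.05, 0.01
pos = rng.standard_normal((n, 3))      # initial positions
vel = np.zeros((n, 3))                 # start from rest
mass = np.full(n, 1.0 / n)

def accel(pos):
    d = pos[None, :, :] - pos[:, None, :]     # pairwise separation vectors
    r2 = (d ** 2).sum(-1) + soft ** 2         # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)             # no self-force
    return G * (d * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

a = accel(pos)
for _ in range(100):                           # leapfrog time steps
    vel += 0.5 * dt * a
    pos += dt * vel
    a = accel(pos)
    vel += 0.5 * dt * a
```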
https://en.wikipedia.org/wiki/GunZ%3A%20The%20Duel
GunZ: The Duel, or simply GunZ, was an online third-person shooting game created by the South Korea-based MAIET Entertainment. It was free-to-play, with a microtransaction business model for purchasing premium in-game items. The game allowed players to perform exaggerated, gravity-defying action moves, including wall running, stunning, tumbling, and blocking bullets with swords, in the style of action films and anime. Gameplay In Quest mode, players, in groups of up to 4 members, went through parts of a map for a certain number of stages, determined by the quest level. In each stage, players were required to kill 18 to 44 creatures, and the game ended when every member of the player team died or all of the stages were completed. Quests could take place in the Prison, Mansion, or Dungeon map. Players could make quests tougher and more profitable by using special quest items, bought from the in-game store or obtained during a quest, to increase the quest level. Quest items in-game were stored in glowing chests that spawned where the monster that dropped them died; players ran through these chests to obtain an item randomly selected from the possibilities for that monster, so the items obtained depended on the monster that the chest came from. By sacrificing certain items in combination, players could enter a boss quest. Boss items were obtained through pages and other boss quests, and pages were obtained through the in-game shop. The quest system was designed to reduce the amount of time needed to prepare for the boss raids that are typical in many other online games. A significant and unique part of the gameplay was the movement system. Players could run on walls, perform flips off of them, and do quick mid-air dodges in any horizontal direction. Advanced movement and combat techniques were commonly referred to as "K-Style" or Korean style; a variety of techniques fell under this cat
https://en.wikipedia.org/wiki/Regulator%20%28automatic%20control%29
In automatic control, a regulator is a device which has the function of maintaining a designated characteristic. It performs the activity of managing or maintaining a range of values in a machine. The measurable property of a device is managed closely, either against specified conditions or a pre-set value, or it can be varied according to a predetermined arrangement scheme. The term can be used generally to connote any set of various controls or devices for regulating or controlling items or objects. Examples are a voltage regulator (which can be a transformer whose voltage ratio of transformation can be adjusted, or an electronic circuit that produces a defined voltage), a pressure regulator, such as a diving regulator, which maintains its output at a fixed pressure lower than its input, and a fuel regulator (which controls the supply of fuel). Regulators can be designed to control anything from gases or fluids to light or electricity. Speed can be regulated by electronic, mechanical, or electro-mechanical means; a basic feedback loop of this kind is sketched below. Such instances include: electronic regulators, as used in modern railway sets, where the voltage is raised or lowered to control the speed of the engine; mechanical systems such as valves, as used in fluid control systems (purely mechanical pre-automotive systems included such designs as the Watt centrifugal governor, whereas modern systems may have electronic fluid-speed-sensing components directing solenoids to set the valve to the desired rate); complex electro-mechanical speed control systems used to maintain speeds in modern cars (cruise control), often including hydraulic components; and an aircraft engine's constant speed unit, which changes the propeller pitch to maintain engine speed. See also Controller (control theory) Governor (device) Process control Control engineering
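The common thread in these examples is a feedback loop that measures the controlled property and corrects toward a target. A minimal proportional-control sketch (the gain and the idealized plant response are illustrative assumptions, not a model of any specific regulator):

```python
# Regulator as a feedback loop: a proportional controller nudging a
# measured value toward a set point. P-only control on an idealized plant.
set_point = 100.0   # the "designated characteristic" to maintain
gain = 0.4          # proportional gain (assumed)
value = 60.0        # initial measured value of the process

for step in range(20):
    error = set_point - value       # deviation from the set point
    correction = gain * error       # actuator output
    value += correction             # idealized plant: responds directly
    print(f"step {step:2d}: value = {value:6.2f}")
# value converges geometrically to set_point because |1 - gain| < 1
```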
https://en.wikipedia.org/wiki/Chest%20photofluorography
Chest photofluorography, or abreugraphy (also called mass miniature radiography), is a photofluorography technique for mass screening for tuberculosis, using a miniature (50 to 100 mm) photograph of the screen of an X-ray fluoroscopy of the thorax; it was first developed in 1936. History Abreugraphy receives its name from its inventor, Dr. Manuel Dias de Abreu, a Brazilian physician and pulmonologist. It has received several different names, according to the country where it was adopted: mass radiography, miniature chest radiograph (United Kingdom and United States), roentgenfluorography (Germany), radiophotography (France), schermografia (Italy), radioscopy (Spain) and photofluorography (Sweden). In many countries, miniature mass radiography (MMR) was quickly adopted and extensively utilized in the 1950s. For example, in Brazil and in Japan, tuberculosis prevention laws went into effect, obligating ca. 60% of the population to undergo MMR screening. A model of a mass radiograph used for screening for tuberculosis from 1936 to the mid-1950s can be seen in the Medical Gallery of the Science Museum. As a mass screening program for low-risk populations, the procedure was largely discontinued in the 1970s, following a recommendation of the World Health Organization, for three main reasons: the dramatic decrease in the general incidence of tuberculosis in developed countries (from 150 cases per 100,000 inhabitants in 1900 to 70/100,000 in 1940 and 5/100,000 in 1950); a decreased benefit/cost ratio (a recent Canadian study has shown a cost of CD$236,496 per case in groups of immigrants with a low risk for tuberculosis, versus CD$3,943 per case in high-risk groups); and the risk of exposure to ionizing radiation doses, particularly among children, in the presence of extremely low yield rates of detection. Current use MMR is still an easy and useful way to prevent transmission of the disease in certain situations, such as in prisons and for immigration applicants and foreign workers
https://en.wikipedia.org/wiki/Surface%20plasmon%20resonance
Surface plasmon resonance (SPR) is a phenomenon that occurs when electrons in a thin metal sheet become excited by light that is directed at the sheet at a particular angle of incidence and then travel parallel to the sheet. Assuming a constant light-source wavelength and a thin metal sheet, the angle of incidence that triggers SPR is related to the refractive index of the material, and even a small change in the refractive index will prevent SPR from being observed. This makes SPR a possible technique for detecting particular substances (analytes), and SPR biosensors have been developed to detect various important biomarkers. Explanation The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative-permittivity/dielectric material interface. Since the wave is on the boundary of the conductor and the external medium (air, water or vacuum, for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules onto the conducting surface. To describe the existence and properties of surface plasmon polaritons, one can choose from various models (quantum theory, Drude model, etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity between the external medium and the surface. This quantity, hereafter referred to as the material's "dielectric function", is the complex permittivity. In order for the terms that describe the electronic surface plasmon to exist, the real part of the dielectric constant of the conductor must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the infrared-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive). LSPRs (localized surface plasmon resonances) are co
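In the common prism-coupled (Kretschmann) arrangement, the resonance angle follows from matching the in-plane wavevector of the light to that of the surface plasmon, n_prism·sin(θ) = Re(√(εm·εd/(εm+εd))). A rough numerical sketch; the permittivity values are approximate textbook numbers for gold at 633 nm in water, used purely for illustration:

```python
# SPR coupling angle from wavevector matching in a prism arrangement.
import numpy as np

def spr_angle_deg(eps_metal: complex, eps_dielectric: float, n_prism: float) -> float:
    # effective index of the surface plasmon mode at the metal/dielectric interface
    n_eff = np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    return np.degrees(np.arcsin(n_eff.real / n_prism))

theta0 = spr_angle_deg(-11.6 + 1.2j, 1.77, 1.515)   # roughly 72 degrees
theta1 = spr_angle_deg(-11.6 + 1.2j, 1.78, 1.515)   # analyte shifts eps slightly
print(f"resonance angle: {theta0:.2f} deg -> {theta1:.2f} deg")
# The measurable shift of the resonance angle is the basis of SPR biosensing.
```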
https://en.wikipedia.org/wiki/National%20Institute%20for%20Environmental%20eScience
The National Institute for Environmental eScience (NIEeS) was a collaboration between Natural Environment Research Council (NERC) and the University of Cambridge. It was established in July 2002 and in addition to its main role of promoting and supporting the use of e-Science and grid technologies within the field of environmental research, its purpose was to: Train scientists in environmental eScience Demonstrate environmental eScience Help develop the environmental eScience community Aid collaborations between scientists and industries It was intended as a national resource to be "owned by the whole community". The website remains available; however, the contract for the project ended in August 2008.
https://en.wikipedia.org/wiki/Harem%20%28zoology%29
A harem is an animal group consisting of one or two males, a number of females, and their offspring. The dominant male drives off other males and maintains the unity of the group. If present, the second male is subservient to the dominant male. As juvenile males grow, they leave the group and roam as solitary individuals or join bachelor herds. Females in the group may be interrelated. The dominant male mates with the females as they become sexually active and drives off competitors, until he is displaced by another male. In some species, incoming males that achieve dominant status may commit infanticide. For the male, the primary benefit of the harem system is obtaining exclusive access to a group of mature females. The females benefit from being in a stable social group and from the associated benefits of grooming, predator avoidance and cooperative defense of territory. The disadvantages for the male are the energetic costs of gaining or defending a harem, which may leave him with reduced reproductive success. The females are disadvantaged if their offspring are killed during dominance battles or by incoming males. Overview The term harem is used in zoology to describe a social organization consisting of a group of females, their offspring, and one or two males. The single male, called the dominant male, may be accompanied by another young male, called a "follower" male. Females that closely associate with the dominant male are called "central females," while females who associate less frequently with the dominant male are called "peripheral females." Juvenile male offspring leave the harem and live either solitarily or with other young males in groups known as bachelor herds. Sexually mature female offspring may stay within their natal harem or may join another harem. The females in a harem may be, but are not exclusively, genetically related. For instance, the females in hamadryas baboon harems are not usually genetically related, because their harems are for
https://en.wikipedia.org/wiki/Double%20beta%20decay
In nuclear physics, double beta decay is a type of radioactive decay in which two neutrons are simultaneously transformed into two protons, or vice versa, inside an atomic nucleus. As in single beta decay, this process allows the atom to move closer to the optimal ratio of protons and neutrons. As a result of this transformation, the nucleus emits two detectable beta particles, which are electrons or positrons. The literature distinguishes between two types of double beta decay: ordinary double beta decay and neutrinoless double beta decay. In ordinary double beta decay, which has been observed in several isotopes, two electrons and two electron antineutrinos are emitted from the decaying nucleus. In neutrinoless double beta decay, a hypothesized process that has never been observed, only electrons would be emitted. History The idea of double beta decay was first proposed by Maria Goeppert Mayer in 1935. In 1937, Ettore Majorana demonstrated that all results of beta decay theory remain unchanged if the neutrino were its own antiparticle, now known as a Majorana particle. In 1939, Wendell H. Furry proposed that if neutrinos are Majorana particles, then double beta decay can proceed without the emission of any neutrinos, via the process now called neutrinoless double beta decay. It is not yet known whether the neutrino is a Majorana particle, and, relatedly, whether neutrinoless double beta decay exists in nature. In the 1930s and 1940s, parity violation in weak interactions was not known, and consequently calculations showed that neutrinoless double beta decay should be much more likely to occur than ordinary double beta decay, if neutrinos were Majorana particles. The predicted half-lives were on the order of ~ years. Efforts to observe the process in the laboratory date back to at least 1948, when E.L. Fireman made the first attempt to directly measure the half-life of the isotope with a Geiger counter. Radiometric experiments through about 1960 produced negative results or
https://en.wikipedia.org/wiki/Neutrinoless%20double%20beta%20decay
Neutrinoless double beta decay (0νββ) is a hypothesized radioactive decay process, widely proposed and experimentally pursued, whose observation would establish the Majorana nature of the neutrino. To date, it has not been found. The discovery of neutrinoless double beta decay could shed light on the absolute neutrino masses and on their mass hierarchy (see Neutrino mass). It would be the first ever signal of the violation of total lepton number conservation. A Majorana nature of neutrinos would confirm that the neutrino is its own antiparticle. To search for neutrinoless double beta decay, a number of experiments are currently underway, with several future experiments proposed for increased sensitivity as well. Historical development of the theoretical discussion In 1939, Wendell H. Furry proposed the idea of the Majorana nature of the neutrino in association with beta decays. Furry calculated the transition probability to be even higher for neutrinoless double beta decay than for the ordinary mode. It was the first idea proposed to search for the violation of lepton number conservation, and it has since drawn attention for its usefulness in studying the nature of neutrinos. The Italian physicist Ettore Majorana first introduced the concept of a particle being its own antiparticle; such particles were subsequently named Majorana particles after him. Neutrinoless double beta decay is one method to search for the possible Majorana nature of neutrinos. Physical relevance Conventional double beta decay Neutrinos are conventionally produced in weak decays. Weak beta decays normally produce one electron (or positron), emit an antineutrino (or neutrino) and increase the nucleus' proton number by one. The nucleus' mass (i.e. binding energy) is then lower and thus more favorable. There exist a number of elements that can decay into a nucleus of lower mass, but they cannot emit one electron only because the resulting nucleus is kinematically (that is, in
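The kinematic argument being made here reduces to simple mass bookkeeping among isobars: single beta decay is forbidden when the intermediate (Z+1) atom is heavier than the parent, while double beta decay is allowed when the (Z+2) atom is lighter. A sketch with explicitly hypothetical masses (placeholder numbers shaped like an even-A mass parabola, not measured values):

```python
# Kinematic bookkeeping for double beta decay. All masses below are
# HYPOTHETICAL placeholders in arbitrary units, not real nuclear data.
masses = {  # atomic masses of isobars with fixed mass number A
    "Z":   100.000,   # candidate double-beta emitter (even-even)
    "Z+1": 100.002,   # odd-odd neighbour, heavier due to pairing energy
    "Z+2":  99.997,   # even-even grand-daughter, lighter
}

single_beta_forbidden = masses["Z+1"] > masses["Z"]
double_beta_allowed   = masses["Z+2"] < masses["Z"]
if single_beta_forbidden and double_beta_allowed:
    q_value = masses["Z"] - masses["Z+2"]   # energy shared by the leptons
    print(f"double beta decay allowed, Q = {q_value:.3f}")
```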
https://en.wikipedia.org/wiki/Vlasov%20equation
The Vlasov equation is a differential equation describing time evolution of the distribution function of plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 and later discussed by him in detail in a monograph. Difficulties of the standard kinetic approach First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction. He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics: Theory of pair collisions disagrees with the discovery by Rayleigh, Irving Langmuir and Lewi Tonks of natural vibrations in electron plasma. Theory of pair collisions is formally not applicable to Coulomb interaction due to the divergence of the kinetic terms. Theory of pair collisions cannot explain experiments by Harrison Merrill and Harold Webb on anomalous electron scattering in gaseous plasma. Vlasov suggests that these difficulties originate from the long-range character of Coulomb interaction. He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context), in generalized coordinates: $\frac{\partial f}{\partial t}+\frac{d\mathbf{r}}{dt}\cdot\frac{\partial f}{\partial\mathbf{r}}+\frac{d\mathbf{p}}{dt}\cdot\frac{\partial f}{\partial\mathbf{p}}=0$, explicitly a PDE: $\frac{\partial f}{\partial t}+\mathbf{v}\cdot\nabla_{\mathbf{r}}f+\mathbf{F}\cdot\nabla_{\mathbf{p}}f=0$, and adapted it to the case of a plasma, leading to the systems of equations shown below. Here $f(\mathbf{r},\mathbf{p},t)$ is a general distribution function of particles with momentum $\mathbf{p}$ at coordinates $\mathbf{r}$ and given time $t$. Note that the term $\mathbf{F}=\frac{d\mathbf{p}}{dt}$ is the force acting on the particle. The Vlasov–Maxwell system of equations (Gaussian units) Instead of collision-based kinetic description for interaction of charged particles in plasma, Vlasov utilizes a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions $f_e(\mathbf{r},\mathbf{p},t)$ and $f_i(\mathbf{r},\mathbf{p},t)$ for electrons and (positive) plasma ions. The distribution functi
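A common way to make the collisionless equation concrete numerically is operator splitting on a phase-space grid. The following is a minimal, illustrative 1D-1V Vlasov–Poisson sketch, not any production plasma code; grid sizes, the perturbation, and the normalized units (q = m = ε0 = 1, so electron acceleration is dv/dt = −E) are all assumptions of the example:

```python
import numpy as np

# Minimal 1D-1V Vlasov-Poisson sketch: electrons on a fixed neutralizing
# ion background, advanced by splitting (advect in x, then in v).
nx, nv = 64, 128
L, vmax = 4 * np.pi, 6.0
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dx, dv, dt = L / nx, v[1] - v[0], 0.05

# initial f(x, v): Maxwellian with a small density ripple
f = (1 + 0.01 * np.cos(2 * np.pi * x / L))[:, None] \
    * np.exp(-v**2 / 2)[None, :] / np.sqrt(2 * np.pi)

def advect_x(f, dt):
    # free streaming: f(x, v, t+dt) = f(x - v*dt, v, t), periodic in x
    out = np.empty_like(f)
    for j, vj in enumerate(v):
        out[:, j] = np.interp((x - vj * dt) % L, x, f[:, j], period=L)
    return out

def efield(f):
    # self-consistent field: dE/dx = rho = 1 - n_e, solved spectrally
    rho = 1.0 - f.sum(axis=1) * dv
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros(nx, dtype=complex)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    return np.fft.ifft(E_k).real

def advect_v(f, E, dt):
    # acceleration: f(x, v, t+dt) = f(x, v + E*dt, t) for electrons (a = -E)
    out = np.empty_like(f)
    for i in range(nx):
        out[i, :] = np.interp(v + E[i] * dt, v, f[i, :], left=0.0, right=0.0)
    return out

for _ in range(200):   # Strang splitting: half x-step, v-step, half x-step
    f = advect_x(f, dt / 2)
    f = advect_v(f, efield(f), dt)
    f = advect_x(f, dt / 2)
```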
https://en.wikipedia.org/wiki/Craterellus
Craterellus is a genus of generally edible fungi similar to the closely related chanterelles, with some species recently moved from the latter genus to the former. Both groups lack true gills on the underside of their caps, though they often have gill-like wrinkles and ridges. General The three most common species, C. cornucopioides, C. lutescens and C. tubaeformis, are gathered commercially and, unlike Cantharellus, can be easily preserved by drying. Molecular phylogenetics has been applied to the problem of discriminating between the genera Craterellus and Cantharellus. Results indicate that the presence of a hollow stipe may be a synapomorphy (a shared derived trait reflecting evolutionary relationship) which reliably identifies Craterellus species. C. cornucopioides appears to be a single polymorphic species, while C. tubaeformis may be two separate genetic groups separated by geography. Definition of the genus The genera Craterellus and Cantharellus have always been recognized as closely related. The whole group may be recognized by their lack of division into cap and stipe, and by their rudimentary or missing gills ("false gills"). Originally, Cantharellus was defined by Fries in 1821 to encompass all these species together, and then in 1825 Persoon separated some species off to create the Craterellus group, with Cr. cornucopioides as the type species. Since then some authorities have tried to merge the two genera again, but DNA studies now indicate that (with recent changes) each genus is monophyletic, and so they are likely to remain separate. In the past Craterellus was distinguished on the basis that the fruiting body had a hollow stipe, generally being funnel-shaped, and that there were no clamp connections. But phylogenetic DNA work starting with the 2000 paper of Dahlman et al. has shown that some species traditionally placed in Cantharellus (C. tubaeformis, C. ignicolor and C. lutescens) really belong in Craterellus, and this means that the second distinguishing rule
https://en.wikipedia.org/wiki/Allium%20ampeloprasum
Allium ampeloprasum is a member of the onion genus Allium. The wild plant is commonly known as wild leek or broadleaf wild leek. Its native range is southern Europe to southwestern Asia and North Africa, but it is cultivated in many other places and has become naturalized in many countries. Allium ampeloprasum is regarded as native to all the countries bordering the Black, Adriatic, and Mediterranean Seas, from Portugal to Egypt to Romania. In Russia and Ukraine, it is considered invasive, except in Crimea, where it is native. It is also native to Ethiopia, Uzbekistan, Iran and Iraq. It is considered naturalized in the United Kingdom, Ireland, the Czech Republic, the Baltic States, Belarus, the Azores, Madeira, the Canary Islands, Armenia, Azerbaijan, Afghanistan, China, Australia (all states except Queensland and Tasmania), Mexico, the Dominican Republic, Puerto Rico, Haiti, the United States (the southeastern region plus California, New York State, Ohio and Illinois), the Galápagos, and Argentina. In tidewater Virginia, where it is commonly known as the "Yorktown onion", it is protected by law in York County. The species may have been introduced to Britain by prehistoric people; there its habitat consists of rocky places near the coast in south-west England and Wales. Allium ampeloprasum has been differentiated into five cultivated vegetables, namely leek, elephant garlic, pearl onion, kurrat, and Persian leek. Wild populations produce bulbs up to 3 cm across. Scapes are round in cross-section, each up to 180 cm tall, bearing an umbel of as many as 500 flowers. Flowers are urn-shaped, up to 6 mm across; tepals white, pink or red; anthers yellow or purple; pollen yellow. Vegetables Allium ampeloprasum comprises several vegetables, of which the most notable ones are: leek; elephant garlic or great-headed garlic; pearl onion; kurrat, Egyptian leek or salad leek – this variety has small bulbs, and primarily the leaves are eaten; and Persian leek (Allium ampeloprasum ssp
https://en.wikipedia.org/wiki/Precancerous%20condition
A precancerous condition is a condition, tumor or lesion involving abnormal cells which are associated with an increased risk of developing into cancer. Clinically, precancerous conditions encompass a variety of abnormal tissues with an increased risk of developing into cancer. Some of the most common precancerous conditions include certain colon polyps, which can progress into colon cancer; monoclonal gammopathy of undetermined significance, which can progress into multiple myeloma or myelodysplastic syndrome; and cervical dysplasia, which can progress into cervical cancer. Bronchial premalignant lesions can progress to squamous cell carcinoma of the lung. Pathologically, precancerous tissue can range from benign neoplasias, which are tumors that do not invade neighboring normal tissues or spread to distant organs, to dysplasia, a collection of highly abnormal cells which, in some cases, has an increased risk of progressing to anaplasia and invasive cancer, which is life-threatening. Sometimes, the term "precancer" is also used for carcinoma in situ, which is a noninvasive cancer that has not grown into and spread to nearby tissue, unlike the invasive stage. As with other precancerous conditions, not all carcinoma in situ will become an invasive disease, but it is at risk of doing so. Classification The term precancerous or premalignant condition may refer to certain conditions, such as monoclonal gammopathy of undetermined significance, or to certain lesions, such as colorectal adenoma (colon polyps), which have the potential to progress into cancer (see: Malignant transformation). Premalignant lesions are morphologically atypical tissues which appear abnormal when viewed under the microscope, and which are more likely to progress to cancer than normal tissue. Precancerous conditions and lesions affect a variety of organ systems, including the skin, oral cavity, stomach, colon, lung, and hematological system. Some authorities also refer to hereditary genetic conditions which p
https://en.wikipedia.org/wiki/Craterellus%20cornucopioides
Craterellus cornucopioides, or horn of plenty, is an edible mushroom. It is also known as the black chanterelle, black trumpet, trompette de la mort (French), trombetta dei morti (Italian), trumpet of the dead, or djondjon (Haitian). The cornucopia, in Greek mythology, referred to the magnificent horn of the nymph Amalthea's goat (or of herself in goat form), which filled itself with whatever meat or drink its owner requested; it has become the symbol of plenty. A possible origin for the name "trumpet of the dead" is that the growing mushrooms were seen as being played as trumpets by dead people under the ground. Description The fruiting body does not have a separation into stalk and cap, but is shaped like a funnel expanded at the top, normally up to about tall and in diameter, but said to grow exceptionally to tall. The upper and inner surface is black or dark grey, and rarely yellow. The lower and outer fertile surface is a much lighter shade of grey. The fertile surface is more or less smooth, but may be somewhat wrinkled. The size of the elliptical spores is in the range 10–17 µm × 6–11 µm. The basidia are two-spored. Distribution and habitat This fungus is found in woods in Europe, North America, and East Asia. It mainly grows under beech, oak or other broad-leaved trees, especially in moss in moist spots on heavy calcareous soil. In Europe it is generally common but seems to be rare in some countries, such as the Netherlands. It appears from June to November, and in the UK from August to November. The mushroom is usually almost black, and it is hard to find because its dark colour easily blends in with the leaf litter on the forest floor. Hunters of this mushroom say it is like looking for black holes in the ground. Related species Craterellus cornucopioides has a smooth spore-bearing surface, but the rare, distantly related Cantharellus cinereus has rudimentary gills. The colour and smooth undersurface make C. cornucopioides very distinctive. The
https://en.wikipedia.org/wiki/The%20Laws%20of%20Thought
An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, by George Boole, published in 1854, is the second of Boole's two monographs on algebraic logic. Boole was a professor of mathematics at what was then Queen's College, Cork (now University College Cork), in Ireland. Review of the contents The historian of logic John Corcoran wrote an accessible introduction to Laws of Thought and a point-by-point comparison of Prior Analytics and Laws of Thought. According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were “to go under, over, and beyond” Aristotle's logic by: Providing it with mathematical foundations involving equations; Extending the class of problems it could treat from assessing validity to solving equations; and Expanding the range of applications it could handle — e.g. from propositions having only two terms to those having arbitrarily many. More specifically, Boole agreed with what Aristotle said; Boole's ‘disagreements’, if they might be called that, concern what Aristotle did not say. First, in the realm of foundations, Boole reduced the four propositional forms of Aristotle's logic to formulas in the form of equations—by itself a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic—another revolutionary idea—involved Boole's doctrine that Aristotle's rules of inference (the “perfect syllogisms”) must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce “No quadrangle that is a square is a rectangle that is a rhombus” from “No square that is a quadrangle is a rhombus that is a rectangle” or from “No rhombus that is a rectangle is a square that is a quadrangle”.
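Corcoran's quadrangle example can be checked mechanically: each of the three propositions asserts that nothing is simultaneously a quadrangle, a square, a rectangle and a rhombus, so all three are logically equivalent. A brute-force sketch (the boolean encoding is mine, not Boole's own equational notation):

```python
# Check that the three "No ... is ..." propositions are equivalent by
# enumerating all 16 combinations of the four class memberships.
from itertools import product

def holds(statement, world):
    # world assigns each object its membership bits (q, s, re, rh)
    return all(statement(q, s, re, rh) for (q, s, re, rh) in world)

s1 = lambda q, s, re, rh: not ((q and s) and (re and rh))  # No quadrangle-square is rectangle-rhombus
s2 = lambda q, s, re, rh: not ((s and q) and (rh and re))  # No square-quadrangle is rhombus-rectangle
s3 = lambda q, s, re, rh: not ((rh and re) and (s and q))  # No rhombus-rectangle is square-quadrangle

for bits in product([False, True], repeat=4):
    world = [bits]   # a one-object world with this membership pattern
    assert holds(s1, world) == holds(s2, world) == holds(s3, world)
print("the three propositions are logically equivalent")
```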
https://en.wikipedia.org/wiki/Factiva
Factiva is a business information and research tool owned by Dow Jones & Company. Factiva aggregates content from both licensed and free sources, providing organizations with search, alerting, dissemination, and other information management capabilities. Factiva products claim to provide access to more than 32,000 sources, such as newspapers, journals, magazines, television and radio transcripts, and photos. These are sourced from nearly every country in the world, in 28 languages, and include more than 600 continuously updated newswires. History The company was founded as a joint venture between Reuters and Dow Jones & Company in May 1999 under the Dow Jones Reuters Business Interactive name, and was renamed Factiva six months later. Timothy M. Andrews, a longtime Dow Jones executive, was the founding president and chief executive of the venture. Andrews was succeeded in January 2000 by Clare Hart, another longtime Dow Jones executive, who had been serving as Factiva's vice president and director of global sales. It developed modules with Microsoft, Oracle Corp., IBM and Yahoo!. Factiva has also partnered with EuroSpider, Comintelli, PeopleSoft, Media Map, Biz360, Choice Point, BTRadianz, AtHoc, and Reuters. Factiva has also been included, as of March 2003, in Microsoft's Office 2003 program as one of the News research options within the Research Pane. In 2005, Factiva acquired two private companies: London-based 2B Reputation Intelligence Ltd. and the Denver, Colorado-based taxonomy services and software firm Synapse, the Knowledge Link Corporation. 2B was a technology and consulting business specializing in media monitoring and reputation management. Synapse provided taxonomy management software, pre-built taxonomies, and taxonomy-building and indexing services. This acquisition brought with it Synaptica, the taxonomy management software tool developed by Synapse, and Taxonomy Warehouse, a website developed by Synapse. Both Synaptica and Taxonomy Warehouse were developed b
https://en.wikipedia.org/wiki/Swedish%20grid
The Swedish grid (in Swedish Rikets Nät, RT 90) is a coordinate system that was previously used for government maps in Sweden. RT 90 is a slightly modified version of RT 38 from 1938. RT 90 has been replaced by SWEREF 99 as the official Swedish spatial reference system. While the system could be used with negative numbers to represent all four "quarters" of the earth (the NE, NW, SE, and SW hemispheres), the standard application of RT 90 is only useful for the northern half of the eastern hemisphere, where numbers are positive. The coordinate system is based on metric measures originating at the crossing of the Prime Meridian and the Equator at 0,0. The Central Meridian used to be based on a meridian located at the old observatory in Stockholm, but today it is based on the Prime Meridian at Greenwich. The numbering system's first digit represents the largest distance, followed by what can be seen as fractional decimal digits (though without an explicit decimal point); therefore, X 65 is located halfway between X 6 and X 7. The coordinate grid is specified using two numbers, named X and Y, X being the south–north axis and Y the west–east axis. Two seven-digit numbers are sufficient to specify a location with a one-metre resolution. Example: X=6620000, Y=1317000 (X is the northing and Y is the easting) denotes a position 6620 km north of the Equator and 183 km west of the Central Meridian (1317 km − 1500 km = −183 km, measured against the 1,500 km false easting), which happens to be somewhere near the town center of Arvika. RT90 Map Projection Parameters
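The worked example amounts to simple bookkeeping against the 1,500 km false easting on the Y axis. A small sketch of that interpretation (the function names are mine; no actual geodetic projection is performed):

```python
# Interpreting an RT 90 coordinate pair: X is metres north of the Equator,
# Y is metres measured against a 1,500 km false easting on the Central
# Meridian, so the meridian offset is Y - 1,500,000 m.
FALSE_EASTING_M = 1_500_000

def describe_rt90(x: int, y: int) -> str:
    north_km = x / 1000
    offset_km = (y - FALSE_EASTING_M) / 1000
    side = "east" if offset_km >= 0 else "west"
    return (f"{north_km:.0f} km north of the Equator, "
            f"{abs(offset_km):.0f} km {side} of the Central Meridian")

print(describe_rt90(6_620_000, 1_317_000))
# -> "6620 km north of the Equator, 183 km west of the Central Meridian"
#    (near Arvika, matching the worked example above)
```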
https://en.wikipedia.org/wiki/Granulation%20tissue
Granulation tissue is new connective tissue and microscopic blood vessels that form on the surfaces of a wound during the healing process. Granulation tissue typically grows from the base of a wound and is able to fill wounds of almost any size. Examples of granulation tissue can be seen in pyogenic granulomas and pulp polyps. Its histological appearance is characterized by proliferation of fibroblasts and thin-walled, delicate capillaries (angiogenesis), and infiltrated inflammatory cells in a loose extracellular matrix. Appearance During the migratory phase of wound healing, granulation tissue is: light red or dark pink, being perfused with new capillary loops or "buds"; soft to the touch; moist; bumpy (granular) in appearance, due to punctate hemorrhages; pulsatile on palpation; painless when healthy. Structure Granulation tissue is composed of tissue matrix supporting a variety of cell types, most of which can be associated with one of the following functions: formation of extracellular matrix; operation of the immune system; vascularisation. An excess of granulation tissue (caro luxurians) is informally referred to as "proud flesh". Extracellular matrix The extracellular matrix of granulation tissue is created and modified by fibroblasts. Initially, it consists of a network of type-III collagen, a weaker form of the structural protein that can be produced rapidly. This is later replaced by the stronger, long-stranded type-I collagen, as evidenced in scar tissue. Immunity The main immune cells active in the tissue are macrophages and neutrophils, although other leukocytes are also present. These work to phagocytize old or damaged tissue, and protect the healing tissue from pathogenic infection. This is necessary both to aid the healing process and to protect against invading pathogens, as the wound often does not have an effective skin barrier to act as a first line of defense. Vascularization It is necessary for a network of blood vessels
https://en.wikipedia.org/wiki/Imus%20in%20the%20Morning
Imus in the Morning was a long-running radio show hosted by Don Imus. The show originated on June 2, 1968, on various stations in the Western United States and in Cleveland, Ohio, before settling at WNBC radio in New York City in 1971. In October 1988, the show moved to WFAN when that station took over WNBC's dial position following an ownership change. It was later syndicated to 60 other stations across the country by Westwood One, a division of CBS Radio, airing weekdays from 5:30 to 10 am Eastern time. Beginning September 3, 1996, the 6 to 9 am portion was simulcast on the cable television network MSNBC. The show had been broadcast almost every weekday morning for 36 years on radio and 11 years on MSNBC until it was canceled on April 12, 2007, due to controversial comments made on the April 4, 2007, broadcast. The Imus in the Morning program returned to morning drive on New York radio station WABC on December 3, 2007. WABC is the flagship station of ABC Radio Networks (which itself was eventually subsumed into Westwood One in 2012), which syndicated the show nationally. From 2007 to August 2009, the show was simulcast on television nationwide on RFD-TV and rebroadcast each evening on RFD HD in high definition. After Imus and RFD reached a mutual agreement to prematurely terminate their five-year deal, Fox Business Network began simulcasting the program on October 5, 2009, an arrangement which ended on May 29, 2015. In March 2018, Cumulus Media, in the middle of a bankruptcy process, told Imus they were going to stop paying him, and as a result Imus ended the show. The final broadcast of Imus in the Morning was March 29, 2018. History Following a successful run as an on-air personality in Cleveland, Don Imus was hired by WNBC to host Imus in the Morning in late 1971. Imus is credited with introducing New York, and the larger Top 40 radio community, to the shock jock style of hosting. His initial run in New York ended in August 1977, when NBC management ordered a
https://en.wikipedia.org/wiki/Spectral%20resolution
The spectral resolution of a spectrograph, or, more generally, of a frequency spectrum, is a measure of its ability to resolve features in the electromagnetic spectrum. It is usually denoted by $\Delta\lambda$, and is closely related to the resolving power of the spectrograph, defined as $R=\frac{\lambda}{\Delta\lambda}$, where $\Delta\lambda$ is the smallest difference in wavelengths that can be distinguished at a wavelength of $\lambda$. For example, the Space Telescope Imaging Spectrograph (STIS) can distinguish features 0.17 nm apart at a wavelength of 1000 nm, giving it a resolution of 0.17 nm and a resolving power of about 5,900. An example of a high-resolution spectrograph is the Cryogenic High-Resolution IR Echelle Spectrograph (CRIRES+) installed at ESO's Very Large Telescope, which has a spectral resolving power of up to 100,000. Doppler effect The spectral resolution can also be expressed in terms of physical quantities, such as velocity; then it describes the difference between velocities $\Delta v$ that can be distinguished through the Doppler effect. The resolution is then $\Delta v=c\frac{\Delta\lambda}{\lambda}$ and the resolving power is $R=\frac{c}{\Delta v}$, where $c$ is the speed of light. The STIS example above then has a spectral resolution of 51 km/s. IUPAC definition IUPAC defines resolution in optical spectroscopy as the minimum wavenumber, wavelength or frequency difference between two lines in a spectrum that can be distinguished. Resolving power, R, is given by the transition wavenumber, wavelength or frequency, divided by the resolution. See also Angular resolution Resolution (mass spectrometry)
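Both definitions can be reproduced directly from the STIS numbers quoted above; a short sketch (the function names are mine):

```python
# Resolving power R = lambda / d_lambda and the equivalent velocity
# resolution dv = c * d_lambda / lambda, reproducing the STIS figures.
C_KM_S = 299_792.458   # speed of light in km/s

def resolving_power(wavelength_nm: float, d_lambda_nm: float) -> float:
    return wavelength_nm / d_lambda_nm

def velocity_resolution_km_s(wavelength_nm: float, d_lambda_nm: float) -> float:
    return C_KM_S * d_lambda_nm / wavelength_nm

print(resolving_power(1000.0, 0.17))           # ~5882, i.e. about 5,900
print(velocity_resolution_km_s(1000.0, 0.17))  # ~51 km/s
```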
https://en.wikipedia.org/wiki/International%20Chemical%20Identifier
The International Chemical Identifier (InChI or ) is a textual identifier for chemical substances, designed to provide a standard way to encode molecular information and to facilitate the search for such information in databases and on the web. Initially developed by the International Union of Pure and Applied Chemistry (IUPAC) and National Institute of Standards and Technology (NIST) from 2000 to 2005, the format and algorithms are non-proprietary. Since May 2009, it has been developed by the InChI Trust, a nonprofit charity from the United Kingdom which works to implement and promote the use of InChI. The identifiers describe chemical substances in terms of layers of information — the atoms and their bond connectivity, tautomeric information, isotope information, stereochemistry, and electronic charge information. Not all layers have to be provided; for instance, the tautomer layer can be omitted if that type of information is not relevant to the particular application. The InChI algorithm converts input structural information into a unique InChI identifier in a three-step process: normalization (to remove redundant information), canonicalization (to generate a unique number label for each atom), and serialization (to give a string of characters). InChIs differ from the widely used CAS registry numbers in three respects: firstly, they are freely usable and non-proprietary; secondly, they can be computed from structural information and do not have to be assigned by some organization; and thirdly, most of the information in an InChI is human readable (with practice). InChIs can thus be seen as akin to a general and extremely formalized version of IUPAC names. They can express more information than the simpler SMILES notation and, in contrast to SMILES strings, every structure has a unique InChI string, which is important in database applications. Information about the 3-dimensional coordinates of atoms is not represented in InChI; for this purpose a format such
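To make the layered structure concrete, here is a minimal sketch that splits a standard InChI into its layers with plain string handling. This is illustrative parsing only, not a validator; real applications would use the official InChI software or a toolkit such as RDKit.

```python
# Split a standard InChI string into its layers. Each layer after the formula
# starts with a one-letter prefix (c = connectivity, h = hydrogens, q = charge,
# t/m/s = stereochemistry, i = isotopes).

def inchi_layers(inchi: str) -> dict:
    body = inchi.split("=", 1)[1]   # drop the "InChI=" prefix
    parts = body.split("/")
    layers = {"version": parts[0], "formula": parts[1]}
    for part in parts[2:]:
        layers[part[0]] = part[1:]  # first character names the layer
    return layers

# Standard InChI for ethanol:
print(inchi_layers("InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3"))
# {'version': '1S', 'formula': 'C2H6O', 'c': '1-2-3', 'h': '3H,2H2,1H3'}
```

Note how the tautomer or stereochemistry layers simply never appear in the output when they are omitted from the identifier, which is the "not all layers have to be provided" behaviour described above.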
https://en.wikipedia.org/wiki/Business.com
Business.com is a digital media company and B2B web destination that offers various performance-based advertising products, including lead generation on a pay-per-lead and pay-per-click basis, directory listings, and display advertising. The site covers business industry news and trends for growth companies and the B2B community, and hosted more than 15,000 pieces of content as of November 2014. Business.com has operated as a subsidiary of the Purch Group since being acquired in 2016. After Purch sold its consumer brands to Future, its remaining B2B assets were reorganised under Business.com. History Business.com, Inc. was founded in 1999 by Jake Winebaum, previously chairman of the Walt Disney Internet Group; and Sky Dayton, founder of Earthlink, Boingo Wireless, and Helio, among others. Around that time, the Business.com domain name was purchased from Marc Ostrofsky by Winebaum's eCompanies Ventures for $7.5 million. In addition to investment by eCompanies, early funding in the amount of $61 million was provided in 2000 by Pearson PLC, Reed Business Information, McGraw Hill, and others. In its initial form, Business.com aimed to be the Internet's leading search engine for small business and corporate information. Business.com struggled through the dot-com bubble years. The company retooled beginning in 2002, after massive layoffs, with a new focus on developing a pay-for-performance ad network model. In April 2003, the company achieved profitability, and on November 8, 2004, the company secured an additional $10 million in venture capital funding from Benchmark Capital. On October 9, 2006, Business.com launched Work.com, a site with business how-to guides contributed by the small business community. Work.com was sold in March 2012 and is now owned by Salesforce.com. Then on July 26, 2007, after beating out Dow Jones & Company, the New York Times Company, IAC/InterActiveCorp, and News Corp, print and interactive marketing company R.H. Donnelley Corpor
https://en.wikipedia.org/wiki/Niter%20kibbeh
Niter kibbeh, or niter qibe ( ), also called (in Tigrinya), is a seasoned, clarified butter used in Ethiopian and Eritrean cuisine. Its preparation is similar to that of ghee, but niter kibbeh is simmered with spices such as besobela (known as Ethiopian sacred basil), koseret, fenugreek, cumin, coriander, turmeric, Ethiopian cardamom (korarima), cinnamon, or nutmeg before straining, imparting a distinct, spicy aroma. The version using vegetable oil instead of butter is called yeqimem zeyet. See also List of Ethiopian dishes and foods Kibbeh
https://en.wikipedia.org/wiki/Thermalisation
In physics, thermalisation (or thermalization) is the process of physical bodies reaching thermal equilibrium through mutual interaction. In general, the natural tendency of a system is towards a state of equipartition of energy and uniform temperature that maximizes the system's entropy. Thermalisation, thermal equilibrium, and temperature are therefore important fundamental concepts within statistical physics, statistical mechanics, and thermodynamics, all of which are a basis for many other specific fields of scientific understanding and engineering application. Examples of thermalisation include: the achievement of equilibrium in a plasma; the process undergone by high-energy neutrons as they lose energy by collision with a moderator; and the process of heat or phonon emission by charge carriers in a solar cell, after a photon that exceeds the semiconductor band gap energy is absorbed. The assumption that systems go to thermal equilibrium (thermalisation) is foundational to most introductory textbooks treating quantum statistical mechanics. The process of thermalisation erases local memory of the initial conditions. The eigenstate thermalisation hypothesis is a hypothesis about when quantum states will undergo thermalisation and why. Not all quantum states undergo thermalisation: some states have been discovered which do not (see below), and their reasons for not reaching thermal equilibrium remain unclear. Theoretical description The process of equilibration can be described using the H-theorem or the relaxation theorem; see also entropy production. Systems resisting thermalisation Some phenomena that resist the tendency to thermalise include (see, e.g., quantum scars): conventional quantum scars, which refer to eigenstates with enhanced probability density along unstable periodic orbits, much higher than one would intuitively predict from classical mechanics; perturbation-induced quantum scarring: despite the similarity in appearance to conven
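As a cartoon of how mutual interaction drives a system toward its maximum-entropy distribution, the toy model below (not from the article; a standard kinetic-exchange sketch) lets randomly chosen pairs "collide" and redistribute their combined energy. The stationary distribution of such a rule is the exponential (Boltzmann-like) form, regardless of the initial conditions.

```python
# Toy thermalisation: random energy-conserving pair exchanges.
import random

N, STEPS = 10_000, 200_000
energies = [1.0] * N  # everyone starts with identical energy (zero-entropy start)

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    total = energies[i] + energies[j]
    share = random.random()  # redistribute the pair's energy at random
    energies[i], energies[j] = share * total, (1 - share) * total

mean = sum(energies) / N
frac_below_mean = sum(e < mean for e in energies) / N
print(f"mean energy {mean:.3f} (conserved), fraction below mean {frac_below_mean:.2f}")
# For an exponential distribution the fraction below the mean is 1 - 1/e ~ 0.63;
# the initial "all equal" memory has been erased.
```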
https://en.wikipedia.org/wiki/Grain%20boundary
In materials science, a grain boundary is the interface between two grains, or crystallites, in a polycrystalline material. Grain boundaries are two-dimensional defects in the crystal structure, and tend to decrease the electrical and thermal conductivity of the material. Most grain boundaries are preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep. On the other hand, grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve mechanical strength, as described by the Hall–Petch relationship. High and low angle boundaries It is convenient to categorize grain boundaries according to the extent of misorientation between the two grains. Low-angle grain boundaries (LAGB) or subgrain boundaries are those with a misorientation less than about 15 degrees. Generally speaking, they are composed of an array of dislocations, and their properties and structure are a function of the misorientation. In contrast, the properties of high-angle grain boundaries, whose misorientation is greater than about 15 degrees (the transition angle varies from 10–15 degrees depending on the material), are normally found to be independent of the misorientation. However, there are 'special boundaries' at particular orientations whose interfacial energies are markedly lower than those of general high-angle grain boundaries. The simplest boundary is that of a tilt boundary, where the rotation axis is parallel to the boundary plane. This boundary can be conceived as forming from a single, contiguous crystallite or grain which is gradually bent by some external force. The energy associated with the elastic bending of the lattice can be reduced by inserting a dislocation, essentially a half-plane of atoms that acts like a wedge, which creates a permanent misorientation between the two sides. As the grain is bent further, more
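The dislocation picture of a low-angle tilt boundary has a simple quantitative side. A minimal sketch, using the standard Frank relation for the spacing of the boundary dislocations (not stated in the article itself); the Burgers vector value is illustrative:

```python
# Dislocation spacing in a low-angle symmetric tilt boundary:
# d = b / (2 sin(theta/2)), approximately b / theta for small misorientations.
import math

def dislocation_spacing_nm(burgers_nm: float, misorientation_deg: float) -> float:
    theta = math.radians(misorientation_deg)
    return burgers_nm / (2 * math.sin(theta / 2))

b = 0.25  # Burgers vector magnitude in nm, typical of many metals (illustrative)
for angle in (1, 5, 15):
    print(f"{angle:2d} deg -> dislocation spacing ~ {dislocation_spacing_nm(b, angle):.2f} nm")
# 1 deg -> ~14 nm, 5 deg -> ~2.9 nm, 15 deg -> ~1 nm.
```

Near 15 degrees the computed spacing shrinks to only a few atomic distances, so the discrete-dislocation description stops making sense; this is one way to see why the low-angle/high-angle transition quoted above sits around that misorientation.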
https://en.wikipedia.org/wiki/Nucleation
In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) below 0°C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0°C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay. Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately. Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface. Characteristics Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts th
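The "fluctuation large enough to grow rather than shrink" picture is quantified by classical nucleation theory. The sketch below uses the standard textbook expressions, which the article does not give explicitly; parameter values are illustrative only, not measured data.

```python
# Classical nucleation theory for a spherical nucleus of radius r:
#   dG(r) = (4/3) * pi * r^3 * dGv + 4 * pi * r^2 * gamma
# where dGv < 0 is the bulk free-energy gain per volume and gamma > 0 is the
# interfacial energy penalty. Nuclei smaller than the critical radius tend to
# shrink back to nothing; larger ones grow.
import math

def critical_radius(gamma: float, dGv: float) -> float:
    return -2.0 * gamma / dGv                     # r* = 2*gamma / |dGv|

def barrier_height(gamma: float, dGv: float) -> float:
    return 16.0 * math.pi * gamma**3 / (3.0 * dGv**2)  # dG* at r = r*

gamma = 0.03   # J/m^2, illustrative interfacial energy
dGv = -1.0e7   # J/m^3, illustrative driving force at modest supercooling

print(f"critical radius ~ {critical_radius(gamma, dGv) * 1e9:.1f} nm")
print(f"nucleation barrier ~ {barrier_height(gamma, dGv):.2e} J")
# A deeper supercooling makes |dGv| larger, shrinking both r* and the barrier,
# which is why nucleation becomes fast at lower temperatures.
```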
https://en.wikipedia.org/wiki/Microtubule-associated%20protein
In cell biology, microtubule-associated proteins (MAPs) are proteins that interact with the microtubules of the cellular cytoskeleton. MAPs are integral to the stability of the cell and its internal structures and the transport of components within the cell. Function MAPs bind to the tubulin subunits that make up microtubules to regulate their stability. A large variety of MAPs have been identified in many different cell types, and they have been found to carry out a wide range of functions. These include both stabilizing and destabilizing microtubules, guiding microtubules towards specific cellular locations, cross-linking microtubules and mediating the interactions of microtubules with other proteins in the cell. Within the cell, MAPs bind directly to the tubulin dimers of microtubules. This binding can occur with either polymerized or depolymerized tubulin, and in most cases leads to the stabilization of microtubule structure, further encouraging polymerization. Usually, it is the C-terminal domain of the MAP that interacts with tubulin, while the N-terminal domain can bind with cellular vesicles, intermediate filaments or other microtubules. MAP-microtubule binding is regulated through MAP phosphorylation. This is accomplished through the function of microtubule affinity-regulating kinase (MARK). Phosphorylation of a MAP by MARK causes the MAP to detach from any bound microtubules. This detachment is usually associated with a destabilization of the microtubule, causing it to fall apart. In this way the stabilization of microtubules by MAPs is regulated within the cell through phosphorylation. Types MAPs have been divided into several different categories and sub-categories. There are "structural" MAPs, which bind along the microtubules, and "+TIP" MAPs, which bind to the growing end of the microtubules. Structural MAPs have been divided into the MAP1, MAP2, MAP4, and Tau families. +TIP MAPs are motor proteins such as kinesins, dyneins, and other MAPs

https://en.wikipedia.org/wiki/Information%20security%20audit
An information security audit is an audit of the level of information security in an organization. It is an independent review and examination of system records, activities, and related documents. These audits are intended to improve the level of information security, avoid improper information security designs, and optimize the efficiency of the security safeguards and security processes. Within the broad scope of auditing information security there are multiple types of audits and multiple objectives for different audits. Most commonly, the controls being audited can be categorized as technical, physical and administrative. Auditing information security covers topics from auditing the physical security of data centers to auditing the logical security of databases, and highlights key components to look for and different methods for auditing these areas. When centered on the information technology (IT) aspects of information security, it can be seen as a part of an information technology audit. It is often then referred to as an information technology security audit or a computer security audit. However, information security encompasses much more than IT. The audit process Step 1: Preliminary audit assessment The auditor is responsible for assessing the current technological maturity level of a company during the first stage of the audit. This stage is used to assess the current status of the company and helps identify the required time, cost and scope of an audit. First, the auditor needs to identify the minimum security requirements: security policy and standards; organizational and personnel security; communication, operation and asset management; physical and environmental security; access control and compliance; IT systems development and maintenance; IT security incident management; disaster recovery and business continuity management; and risk management. Step 2: Planning & preparation The auditor should plan a company's audit based on the information found in the p
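As a sketch only, the Step 1 minimum-requirement areas can be tracked as simple data to flag gaps before planning the audit. The structure and field values here are hypothetical, not from any standard or tool:

```python
# Hypothetical Step 1 checklist: which minimum-requirement areas already have
# documented controls (True) and which need attention (False).
MINIMUM_REQUIREMENTS = {
    "Security policy and standards": True,
    "Organizational and personnel security": True,
    "Communication, operation and asset management": False,
    "Physical and environmental security": True,
    "Access control and compliance": False,
    "IT systems development and maintenance": True,
    "IT security incident management": True,
    "Disaster recovery and business continuity management": True,
    "Risk management": False,
}

gaps = [area for area, documented in MINIMUM_REQUIREMENTS.items() if not documented]
coverage = 1 - len(gaps) / len(MINIMUM_REQUIREMENTS)
print(f"documented coverage: {coverage:.0%}")
print("areas needing attention:", "; ".join(gaps))
```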
https://en.wikipedia.org/wiki/Crystal%20twinning
Crystal twinning occurs when two or more adjacent crystals of the same mineral are oriented so that they share some of the same crystal lattice points in a symmetrical manner. The result is an intergrowth of two separate crystals that are tightly bonded to each other. The surface along which the lattice points are shared in twinned crystals is called a composition surface or twin plane. Crystallographers classify twinned crystals by a number of twin laws. These twin laws are specific to the crystal structure. The type of twinning can be a diagnostic tool in mineral identification. Deformation twinning, in which twinning develops in a crystal in response to a shear stress, is an important mechanism for permanent shape changes in a crystal. Definition Twinning is a form of symmetrical intergrowth between two or more adjacent crystals of the same mineral. It differs from the ordinary random intergrowth of mineral grains in a mineral deposit, because the relative orientations of the two crystal segments show a fixed relationship that is characteristic of the mineral structure. The relationship is defined by a symmetry operation called a twin operation. The twin operation is not one of the normal symmetry operations of the untwinned crystal structure. For example, the twin operation may be reflection across a plane that is not a symmetry plane of the single crystal. On the microscopic level, the twin boundary is characterized by a set of atomic positions in the crystal lattice that are shared between the two orientations. These shared lattice points give the junction between the crystal segments much greater strength than that between randomly oriented grains, so that the twinned crystals do not easily break apart. Twin laws Twin laws are symmetry operations that define the orientation between twin crystal segments. These are as characteristic of the mineral as are its crystal face angles. For example, crystals of staurolite show twinning at angles of almost prec
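To make the idea of a twin operation concrete, the sketch below applies a reflection that is not a symmetry operation of the lattice, exactly as described above. It is a schematic 2D illustration, not any specific mineral: a rectangular lattice with unequal cell edges, mirrored across its diagonal.

```python
# Twin operation as a reflection that is NOT a lattice symmetry.
import numpy as np

def reflection_matrix(normal):
    """Householder reflection across the plane (line, in 2D) with this normal."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.eye(len(n)) - 2.0 * np.outer(n, n)

# Rectangular lattice with a != b, so the diagonal mirror y = x is not one of
# its symmetry operations, which is the defining situation of a twin operation.
a, b = 1.0, 2.0
points = np.array([[i * a, j * b] for i in range(-2, 3) for j in range(-2, 3)])

M = reflection_matrix([1.0, -1.0])   # reflection across the line y = x
twinned = points @ M.T               # orientation of the twinned segment

# Lattice points on the mirror line are shared by both orientations,
# a 2D stand-in for the composition surface.
shared = points[np.isclose(points[:, 0], points[:, 1])]
print(f"{len(shared)} lattice points lie on the composition plane")

# Most reflected points do not coincide with original lattice sites,
# confirming the operation is not a symmetry of the untwinned crystal.
lattice = {tuple(p) for p in points.round(9).tolist()}
on_lattice = sum(tuple(p) in lattice for p in twinned.round(9).tolist())
print(f"{on_lattice} of {len(points)} reflected points land on original sites")
```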
https://en.wikipedia.org/wiki/Paleogenetics
Paleogenetics is the study of the past through the examination of preserved genetic material from the remains of ancient organisms. Emile Zuckerkandl and Linus Pauling introduced the term in 1963, long before the sequencing of DNA, in reference to the possible reconstruction of the corresponding polypeptide sequences of past organisms. The first sequence of ancient DNA, isolated from a museum specimen of the extinct quagga, was published in 1984 by a team led by Allan Wilson. Paleogeneticists do not recreate actual organisms, but piece together ancient DNA sequences using various analytical methods. Fossils are "the only direct witnesses of extinct species and of evolutionary events", and finding DNA within those fossils reveals far more information about these species, potentially including their entire physiology and anatomy. The most ancient DNA sequence to date was reported in February 2021, from the tooth of a Siberian mammoth frozen for over a million years. Applications Evolution Similar sequences are often found along DNA (and the derived protein polypeptide chains) in different species. This similarity is directly linked to the sequence of the DNA (the genetic material of the organism). Because such similarity is unlikely to arise by random chance, and the sequences are too long and consistent for it to be attributed to convergence by natural selection, these similarities can plausibly be linked to the existence of a common ancestor with common genes. This allows DNA sequences to be compared between species. Comparing an ancient genetic sequence to later or modern ones can be used to determine ancestral relations, while comparing two modern genetic sequences can determine, within error, the time since their last common ancestor. Human evolution Using the thigh bone of a Neanderthal female, researchers recovered 63% of the Neanderthal genome and decoded 3.7 billion bases of DNA. It showed that Homo neanderthalensis was the closest living relative of Homo sapiens, until the former lineage di
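The "time since last common ancestor" estimate mentioned above is, in its simplest molecular-clock form, straightforward arithmetic: the per-site difference between two aligned sequences divided by twice an assumed substitution rate. The sketch below uses made-up sequences and a hypothetical rate; real analyses correct for multiple substitutions at the same site (e.g. Jukes-Cantor) and use calibrated, lineage-specific rates.

```python
# Toy molecular-clock divergence estimate (illustrative only).
ASSUMED_RATE = 1e-9  # substitutions per site per year (hypothetical value)

def divergence_time(seq_a: str, seq_b: str, rate: float = ASSUMED_RATE) -> float:
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    p = diffs / len(seq_a)   # per-site difference
    return p / (2.0 * rate)  # divide by 2: both lineages accumulate changes

seq1 = "ACGTACGTACGTACGTACGT"
seq2 = "ACGTACCTACGTACGAACGT"  # 2 differences over 20 sites -> p = 0.10
print(f"estimated divergence: {divergence_time(seq1, seq2):,.0f} years")
# 0.10 / (2 * 1e-9) = 50,000,000 years under the assumed rate.
```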