https://en.wikipedia.org/wiki/Non-functional%20requirements%20framework
Non-functional requirements (NFRs) need a framework for structuring. The analysis begins with softgoals that represent the NFRs the stakeholders agree upon. Softgoals are goals that are hard to express precisely, but that tend to be global qualities of a software system, such as usability, performance, security and flexibility. When a team starts collecting them, it often finds a great many; in order to reduce the number to a manageable quantity, structuring is a valuable approach, and several frameworks are available to serve as structure. Structuring Non-functional requirements The following frameworks are useful to serve as structure for NFRs: 1. Goal Modelling The finalised softgoals are usually decomposed and refined to uncover a tree structure of goals and subgoals, e.g. for the flexibility softgoal. Once the tree structures are uncovered, one is bound to find interfering softgoals across different trees; e.g. security goals generally interfere with usability. These softgoal trees then form a softgoal graph structure. The final step in this analysis is to pick particular leaf softgoals so that all the root softgoals are satisfied.[1] 2. IVENA - Integrated Approach to Acquisition of NFR The method integrates a requirement tree.[2] 3. Context of an Organization There are several models that describe the context of an organization, such as the Business Model Canvas, OrgManle [3], or others [4]. Those models are also a good framework against which to assign NFRs. Measuring the Non-functional requirements SNAP is the Software Non-functional Assessment Process. While Function Points measure the functional requirements by sizing the data flow through a software application, IFPUG's SNAP measures the non-functional requirements. The SNAP model consists of four categories and fourteen sub-categories to measure the non-functional requirements. Non-functional requirements are mapped to the relevant sub-categories. Each sub-category is sized, and the size of
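The goal-modelling step described above (decomposing softgoals into a graph, then selecting leaf softgoals so that the roots are satisfied) can be sketched as a toy propagation. The class names and the pure AND-decomposition rule are illustrative assumptions; the real NFR framework also has OR-decompositions and weighted contribution links.

```python
# Toy softgoal graph: AND-decomposition only (an illustrative simplification,
# not the full NFR framework, which also has OR and contribution links).
class Softgoal:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.chosen = False          # only meaningful for leaf softgoals

    def satisfied(self):
        if not self.children:        # leaf: satisfied if picked by the analyst
            return self.chosen
        return all(c.satisfied() for c in self.children)

# Hypothetical decomposition of a "flexibility" softgoal.
ext = Softgoal("extensible data model")
cfg = Softgoal("runtime configuration")
flexibility = Softgoal("flexibility", [ext, cfg])

ext.chosen = True
print(flexibility.satisfied())   # → False (one leaf still unchosen)
cfg.chosen = True
print(flexibility.satisfied())   # → True (all roots satisfied via their leaves)
```

In a real analysis the selection of leaves is a design decision, since interfering softgoals (e.g. security vs. usability) mean not every leaf can be chosen at once.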
https://en.wikipedia.org/wiki/Goal-oriented%20Requirements%20Language
Goal-oriented Requirements Language (GRL), an i*-based modeling language used in systems development, is designed to support goal-oriented modeling and reasoning about requirements, especially the non-functional requirements. GRL topics Concepts Goal-oriented Requirements Language (GRL) allows conflicts between goals to be expressed and helps to make decisions that resolve those conflicts. There are three main categories of concepts in GRL: intentional elements, intentional relationships and actors. They are called intentional because they are used in models that are primarily concerned with answering the "why" questions of requirements (for example, why certain choices for behavior or structure were made, what alternatives exist, and what the reasons are for choosing a certain alternative). Intentional elements The intentional elements are: goal, softgoal, task, belief and resource. A goal is a condition or situation that can be achieved or not, and is used to define the functional requirements of the system. In GRL notation a goal is represented by a rounded rectangle with the goal name inside. A task is used to represent the different ways of accomplishing a goal. In GRL notation a task is represented by a hexagon with the task name inside. A softgoal is used to define non-functional requirements; it is usually a quality attribute of one of the intentional elements. In GRL notation a softgoal is represented by an irregular curvilinear shape with the softgoal name inside. A resource is a physical or informational object that is available for use in a task; a resource is represented in GRL as a rectangle. A belief is used to represent assumptions and relevant conditions; this construct is represented as an ellipse in GRL notation. Relationships The intentional relationships are: means-ends, decomposition, contribution, correlation and dependency. A means-ends relationship shows how a goal can be achieved; for example, it can be used to connect a task to a goal. A decomposition relationship is used to show the sub
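The intentional elements and the means-ends relationship just described can be encoded minimally as data types. The class and attribute names here are assumptions for illustration, not part of the GRL standard or any GRL tool's API.

```python
# Minimal illustrative encoding of GRL intentional elements and a means-ends
# link; names are assumptions, not the GRL standard's own vocabulary.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str   # "goal" | "softgoal" | "task" | "resource" | "belief"

@dataclass
class MeansEnds:
    means: Element   # typically a task
    end: Element     # typically a goal

authenticate = Element("Authenticate user", "goal")
password = Element("Password login", "task")
usability = Element("Usable login", "softgoal")   # a quality attribute

link = MeansEnds(means=password, end=authenticate)
print(f"{link.means.name} is a means to achieve {link.end.name}")
```

A fuller model would add the remaining relationship kinds (decomposition, contribution, correlation, dependency) and actors as containers of intentional elements.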
https://en.wikipedia.org/wiki/Salt%20bridge%20%28protein%20and%20supramolecular%29
In chemistry, a salt bridge is a combination of two non-covalent interactions: hydrogen bonding and ionic bonding (Figure 1). Ion pairing is one of the most important noncovalent forces in chemistry, in biological systems, in different materials and in many applications such as ion pair chromatography. It is one of the most commonly observed contributions to the stability of the entropically unfavorable folded conformation of proteins. Although non-covalent interactions are known to be relatively weak, small stabilizing interactions can add up to make an important contribution to the overall stability of a conformer. Salt bridges are found not only in proteins, but also in supramolecular chemistry. The thermodynamics of each are explored through experimental procedures to assess the free energy contribution of the salt bridge to the overall free energy of the state. Salt bridges in chemical bonding In water, the formation of salt bridges or ion pairs is mostly driven by entropy, usually accompanied by unfavorable ΔH contributions on account of the desolvation of the interacting ions upon association. Hydrogen bonds contribute to the stability of ion pairs with, e.g., protonated ammonium ions, and with anions formed by deprotonation, as in the case of carboxylate, phosphate, etc.; the association constants then depend on the pH. Entropic driving forces for ion pairing (in the absence of significant H-bonding contributions) are also found in methanol as solvent. In nonpolar solvents, contact ion pairs with very high association constants are formed; in the gas phase the association energies of, e.g., alkali halides reach up to 200 kJ/mol. The Bjerrum and Fuoss equations describe ion pair association as a function of the ion charges zA and zB and the dielectric constant ε of the medium; a corresponding plot of the stability ΔG vs. zAzB shows the expected linear correlation for over 200 ion pairs comprising a large variety of ions. Inorganic as well as organic ions
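As a hedged sketch of the relations just mentioned (sign and unit conventions vary between sources, so treat this as an orientation rather than the authoritative form), the Fuoss equation gives the association constant of a contact ion pair with centre-to-centre distance a, and the free energy then follows from K, which makes the linear ΔG vs. zAzB plot described in the text plausible:

```latex
% Fuoss association constant (Gaussian units; K_A in L/mol), a sketch:
K_A \;=\; \frac{4\pi N_A a^{3}}{3000}\,
          \exp\!\left(-\,\frac{z_A z_B\, e^{2}}{\varepsilon\, a\, k_B T}\right)
% For oppositely charged ions z_A z_B < 0, so the exponential enhances K_A.
% The corresponding free energy, linear in z_A z_B at fixed a and T:
\Delta G \;=\; -RT \ln K_A
```

The Bjerrum treatment works with the same Coulombic exponent but defines association via a critical interionic distance; both predict ΔG proportional to zAzB for a given medium.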
https://en.wikipedia.org/wiki/Capillary%20refill
Capillary refill time (CRT) is defined as the time taken for color to return to an external capillary bed after pressure is applied to cause blanching. It can be measured by holding a hand higher than heart level and pressing the soft pad of a finger or fingernail until it turns white, then taking note of the time needed for the color to return once pressure is released. In humans, a CRT of more than three seconds indicates decreased peripheral perfusion and may indicate cardiovascular or respiratory dysfunction. The most reliable and applicable site for CRT testing is the finger pulp (not the fingernail), and the cut-off value for a normal CRT should be 3 seconds, not 2 seconds. Assessment In adults CRT can be measured by applying pressure to the pad of a finger or toe for 5–10 seconds. It became popularized in the 1980s when Champion et al. proposed that a CRT of less than 2 seconds be deemed normal and included CRT in the Trauma Score. The value of 2 seconds for a normal CRT that was proposed by Dr Champion had been arbitrarily chosen by his nurse, and no evidence supporting that value has subsequently been found. CRT has been shown to be influenced by ambient temperature, age, sex, the anatomical testing site, and lighting conditions. To assess shock, central CRT, which is done by assessing capillary refill time at the sternum rather than the finger, is more useful. In infants In newborn infants, capillary refill time can be measured by pressing on the sternum for five seconds with a finger or thumb, and noting the time needed for the color to return once the pressure is released (central CRT). The upper normal limit for capillary refill in newborns is 3 seconds. Capillary refill time can also be assessed in animals by pressing on their gums as opposed to the sternum, which is generally
https://en.wikipedia.org/wiki/TRIX%20%28operating%20system%29
TRIX is a network-oriented research operating system developed in the late 1970s at MIT's Laboratory for Computer Science (LCS) by Professor Steve Ward and his research group. It ran on the NuMachine and had remote procedure call functionality built into its kernel, but was otherwise a Version 7 Unix workalike. Design and implementation On startup, the NuMachine would load the same program on each CPU in the system, passing each instance the numeric ID of the CPU it was running on. TRIX relied on this design to have the first CPU set up global data structures and then set a flag to signal that initialization was complete. After that, each instance of the kernel was able to access global data. The system also supported data private to each CPU. Access to the filesystem was provided by a program in user space. The kernel supported unnamed threads running in domains. A domain was the equivalent of a Unix process without a stack pointer (each thread in a domain had a stack pointer). A thread could change domains, and the system scheduler would migrate threads between CPUs in order to keep all processors busy. Threads had access to a single kind of mutual exclusion primitive, and one of seven priorities. The scheduler was designed to avoid priority inversion. User space programs could create threads through a spawn system call. A garbage collector would periodically identify and free unused domains. The shared memory model used to coordinate work between the various CPUs caused memory bus contention and was known to be a source of inefficiency. The designers were aware of designs that would have alleviated the contention. Indeed, TRIX's original design used a nonblocking message passing mechanism, but "this implementation was found to have deficiencies often overlooked in the literature," including poor performance. Although the TRIX operating system was first implemented on the NuMachine, this was due to the availability of the NuMachine at MIT, not becau
https://en.wikipedia.org/wiki/Ecological%20Metadata%20Language
Ecological Metadata Language (EML) is a metadata standard developed by and for the ecology discipline. It is based on prior work done by the Ecological Society of America and others, including the Knowledge Network for Biocomplexity. EML is a set of XML schema documents that allow for the structural expression of metadata. It was developed specifically to allow researchers to document a typical data set in the ecological sciences. EML is largely designed to describe digital resources; however, it may also be used to describe non-digital resources such as paper maps and other non-digital media. The Knowledge Network for Biocomplexity project has developed a software client, Morpho, specifically to address the need to document data sets: Morpho is data management software intended for generating metadata in EML format. Morpho is part of the DataONE Investigator Toolkit, and is therefore intended to facilitate data sharing and reuse among ecologists and environmental scientists.
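Since EML is a set of XML schemas, a metadata record is ultimately just structured XML. The following standard-library sketch builds a record whose element names follow the general shape of EML (dataset, title, creator), but the structure is deliberately simplified and the content invented; a real document must carry the proper EML namespaces and validate against the EML schemas.

```python
# Sketch of generating a minimal EML-like metadata record with the Python
# standard library. Element names and content are simplified illustrations;
# a real EML document must validate against the official XML schemas.
import xml.etree.ElementTree as ET

eml = ET.Element("eml")
dataset = ET.SubElement(eml, "dataset")
ET.SubElement(dataset, "title").text = "Stream invertebrate counts, 2019-2021"
creator = ET.SubElement(dataset, "creator")
ET.SubElement(creator, "organizationName").text = "Example Field Station"

# Serialize the record; tools like Morpho produce (much richer) output of
# this general kind.
print(ET.tostring(eml, encoding="unicode"))
```

The point of the schema-based approach is exactly this machine readability: another tool can parse the record back and extract the title or creator without human intervention.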
https://en.wikipedia.org/wiki/Floquet%20theory
Floquet theory is a branch of the theory of ordinary differential equations relating to the class of solutions to periodic linear differential equations of the form ẋ = A(t)x, with A(t) a piecewise continuous periodic function with period T, and defines the state of the stability of solutions. The main theorem of Floquet theory, Floquet's theorem, due to Gaston Floquet (1883), gives a canonical form for each fundamental matrix solution of this common linear system. It gives a coordinate change y = Q⁻¹(t)x, with Q(t + 2T) = Q(t), that transforms the periodic system to a traditional linear system with constant, real coefficients. When applied to physical systems with periodic potentials, such as crystals in condensed matter physics, the result is known as Bloch's theorem. Note that the solutions of the linear differential equation form a vector space. A matrix φ(t) is called a fundamental matrix solution if all of its columns are linearly independent solutions. A matrix Φ(t) is called a principal fundamental matrix solution if all of its columns are linearly independent solutions and there exists t₀ such that Φ(t₀) is the identity. A principal fundamental matrix can be constructed from a fundamental matrix using Φ(t) = φ(t)φ⁻¹(t₀). The solution of the linear differential equation with the initial condition x(0) = x₀ is x(t) = φ(t)φ⁻¹(0)x₀, where φ(t) is any fundamental matrix solution. Floquet's theorem Let ẋ = A(t)x be a linear first-order differential equation, where x(t) is a column vector of length n and A(t) an n × n periodic matrix with period T (that is, A(t + T) = A(t) for all real values of t). Let φ(t) be a fundamental matrix solution of this differential equation. Then, for all t, φ(t + T) = φ(t)φ⁻¹(0)φ(T). Here φ⁻¹(0)φ(T) is known as the monodromy matrix. In addition, for each matrix B (possibly complex) such that e^{TB} = φ⁻¹(0)φ(T), there is a periodic (period-T) matrix function P(t) such that φ(t) = P(t)e^{tB} for all t. Also, there is a real matrix R and a real periodic (period-2T) matrix function Q(t) such that φ(t) = Q(t)e^{tR} for all t. In the above, B, P, Q and R are n × n matrices. Consequences and applications The mapping φ(t) = Q(t)e^{tR} gives rise to a time-dependent change of coordinates (y = Q⁻¹(t)x), under which our original system becomes a linear system with real constant coe
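The monodromy matrix φ⁻¹(0)φ(T) can be computed numerically by integrating the columns of φ over one period starting from the identity. The sketch below does this with a hand-rolled RK4 integrator for the Mathieu equation x″ + (a − 2q cos 2t)x = 0 (a standard example of a period-π system); the parameter values are arbitrary. For this traceless system, Liouville's formula forces the monodromy matrix to have determinant 1, which serves as a sanity check.

```python
# Numerically compute the monodromy matrix M = phi(0)^{-1} phi(T) for the
# Mathieu equation x'' + (a - 2 q cos 2t) x = 0, written as a 2x2 periodic
# system x' = v, v' = -(a - 2q cos 2t) x with period T = pi.
import math

def monodromy(a=1.0, q=0.1, steps=4000):
    T = math.pi
    h = T / steps

    def deriv(t, col):
        x, v = col
        return (v, -(a - 2 * q * math.cos(2 * t)) * x)

    # Integrate the two columns of phi, starting from the identity matrix.
    out = []
    for col in [(1.0, 0.0), (0.0, 1.0)]:
        t, y = 0.0, col
        for _ in range(steps):   # classical RK4 step
            k1 = deriv(t, y)
            k2 = deriv(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
            k3 = deriv(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
            k4 = deriv(t + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
            y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
                 y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
            t += h
        out.append(y)
    # Columns of M = phi(T), since phi(0) = I.
    return [[out[0][0], out[1][0]], [out[0][1], out[1][1]]]

M = monodromy()
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
print(det)   # traceless A(t) implies det M = 1 (Liouville's formula)
```

The eigenvalues of M are the Floquet multipliers; the zero solution is stable when they lie on or inside the unit circle, which is how the "state of the stability of solutions" mentioned above is read off in practice.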
https://en.wikipedia.org/wiki/Global%20Biodiversity%20Information%20Facility
The Global Biodiversity Information Facility (GBIF) is an international organisation that focuses on making scientific data on biodiversity available via the Internet using web services. The data are provided by many institutions from around the world; GBIF's information architecture makes these data accessible and searchable through a single portal. Data available through the GBIF portal are primarily distribution data on plants, animals, fungi, and microbes for the world, and scientific names data. The mission of the GBIF is to facilitate free and open access to biodiversity data worldwide to underpin sustainable development. Priorities, with an emphasis on promoting participation and working through partners, include mobilising biodiversity data, developing protocols and standards to ensure scientific integrity and interoperability, building an informatics architecture to allow the interlinking of diverse data types from disparate sources, promoting capacity building and catalysing development of analytical tools for improved decision-making. GBIF strives to form informatics linkages among digital data resources from across the spectrum of biological organisation, from genes to ecosystems, and to connect these to issues important to science, society and sustainability by using georeferencing and GIS tools. It works in partnership with other international organisations such as the Catalogue of Life partnership, Biodiversity Information Standards, the Consortium for the Barcode of Life (CBOL), the Encyclopedia of Life (EOL), and GEOSS. The biodiversity data available through GBIF has increased by more than 1,150% in the past decade, partially due to the participation of citizen scientists. From 2002 to 2014, GBIF awarded a prestigious annual global award in the area of biodiversity informatics, the Ebbe Nielsen Prize, valued at €30,000. Since then, the GBIF Secretariat has presented two annual prizes: the GBIF Ebbe Nielsen Challenge and the Young Researchers Award. See al
https://en.wikipedia.org/wiki/Stem%20%28music%29
In musical notation, stems are the "thin, vertical lines that are directly connected to the [note] head." Stems may point up or down. Different-pointing stems indicate the voices for polyphonic music written on the same staff. Within one voice, stems usually point down for notes on the middle line or higher, and up for those below. If the stem points up from a notehead, the stem originates from the right-hand side of the note; if it points down, it originates from the left. If multiple notes are beamed together, the stem's direction is determined by the average of the lowest and highest notes in the beam. There is an exception to this rule: if a chord contains a second, the stem runs between the two notes, with the higher note placed on the right of the stem and the lower on the left. If the chord contains an odd-numbered cluster of notes a second apart (such as C, D, E), the outer two will be on the correct side of the stem, while the middle note will be on the wrong side. The length of a stem should span an octave on the staff, reaching an octave above or below the notehead, depending on which way the stem points. If a notehead is on a ledger line more than an octave away from the middle line of a staff, the stem is elongated to touch the middle line. In any polyphonic music in which two parts are written on the same staff, stems are typically shortened to keep the music visually centered upon the staff. Stems may be altered in various ways to indicate rhythm or another aspect of performance. For example, a note with diagonal slashes through its stem is played tremolo. See also Beam (music) Notehead
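The stem-direction rules just stated (down for the middle line or higher, up below it, and the highest/lowest average for beamed groups) are mechanical enough to encode directly. The staff-position convention below (steps from the middle line, positive upward) is an illustrative assumption, and ties at exactly the middle line are resolved downward, which matches the single-note rule.

```python
# Encode the stem-direction rule for a single voice. Staff positions are
# counted in steps from the middle line (0 = middle line, positive = higher),
# an illustrative convention, not standard notation-software API.
def stem_direction(staff_positions):
    """Direction for one note or a beamed group (a single note is a group of one)."""
    highest, lowest = max(staff_positions), min(staff_positions)
    # Beamed groups use the average of the highest and lowest notes;
    # middle line or higher gets a downward stem.
    if (highest + lowest) / 2 >= 0:
        return "down"
    return "up"

print(stem_direction([0]))        # note on the middle line → down
print(stem_direction([-3, -1]))   # beamed notes below the middle line → up
```

Engraving software adds further exceptions on top of this (seconds within chords, two-voice staves), as the surrounding text describes.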
https://en.wikipedia.org/wiki/Duffing%20equation
The Duffing equation (or Duffing oscillator), named after Georg Duffing (1861–1944), is a non-linear second-order differential equation used to model certain damped and driven oscillators. The equation is given by ẍ + δẋ + αx + βx³ = γ cos(ωt), where the (unknown) function x = x(t) is the displacement at time t, ẋ is the first derivative of x with respect to time, i.e. velocity, and ẍ is the second time-derivative of x, i.e. acceleration. The numbers δ, α, β, γ and ω are given constants. The equation describes the motion of a damped oscillator with a more complex potential than in simple harmonic motion (which corresponds to the case β = 0); in physical terms, it models, for example, an elastic pendulum whose spring's stiffness does not exactly obey Hooke's law. The Duffing equation is an example of a dynamical system that exhibits chaotic behavior. Moreover, the Duffing system presents in the frequency response the jump resonance phenomenon, a sort of frequency hysteresis behaviour. Parameters The parameters in the above equation are: δ controls the amount of damping; α controls the linear stiffness; β controls the amount of non-linearity in the restoring force (if β = 0, the Duffing equation describes a damped and driven simple harmonic oscillator); γ is the amplitude of the periodic driving force (if γ = 0, the system is without a driving force); and ω is the angular frequency of the periodic driving force. The Duffing equation can be seen as describing the oscillations of a mass attached to a nonlinear spring and a linear damper. The restoring force provided by the nonlinear spring is then F = −αx − βx³. When α > 0 and β > 0 the spring is called a hardening spring. Conversely, for β < 0 it is a softening spring (still with α > 0). Consequently, the adjectives hardening and softening are used with respect to the Duffing equation in general, depending on the values of β (and α). The number of parameters in the Duffing equation can be reduced by two through scaling (in accord with the Buckingham π theorem), e.g. the excursion and time can be scaled as τ = t√α and y = xα/γ (assuming α > 0), and a
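The equation above is straightforward to integrate numerically; the sketch below uses a hand-rolled RK4 step with arbitrary example parameters. For the undamped, undriven case (δ = γ = 0) the energy v²/2 + αx²/2 + βx⁴/4 is conserved, which gives a convenient correctness check on the integrator.

```python
# Integrate the Duffing equation x'' + d x' + a x + b x**3 = g cos(w t)
# with one classical RK4 step per call; parameter values below are arbitrary.
import math

def duffing_step(t, x, v, h, d, a, b, g, w):
    def acc(t, x, v):
        return g * math.cos(w * t) - d * v - a * x - b * x**3
    k1x, k1v = v, acc(t, x, v)
    k2x, k2v = v + h/2*k1v, acc(t + h/2, x + h/2*k1x, v + h/2*k1v)
    k3x, k3v = v + h/2*k2v, acc(t + h/2, x + h/2*k2x, v + h/2*k2v)
    k4x, k4v = v + h*k3v, acc(t + h, x + h*k3x, v + h*k3v)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

# Undamped, undriven hardening spring (d = g = 0, a > 0, b > 0): the energy
# E = v^2/2 + a x^2/2 + b x^4/4 should be conserved by the dynamics.
d, a, b, g, w = 0.0, 1.0, 5.0, 0.0, 1.0
x, v, t, h = 1.0, 0.0, 0.0, 1e-3
E0 = v*v/2 + a*x*x/2 + b*x**4/4
for _ in range(10_000):
    x, v = duffing_step(t, x, v, h, d, a, b, g, w)
    t += h
E = v*v/2 + a*x*x/2 + b*x**4/4
print(abs(E - E0))   # small drift, limited by RK4 truncation error
```

Turning the damping and driving back on (δ, γ ≠ 0) and sweeping ω is how the jump-resonance and chaotic regimes mentioned above are usually explored numerically.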
https://en.wikipedia.org/wiki/List%20of%20Apple%20II%20application%20software
Following is a List of Apple II applications including utilities and development tools. 0–9 3D Art Graphics - 3D computer graphics software, a set of 3D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 A A2Command - Norton Commander style file manager ADTPro - telecom Apple Writer - word processor AppleWorks - integrated word processor, spreadsheet, and database suite (II & GS) ASCII Express - telecom B Bank Street Writer - word processor C CatFur - file transfer / chat software for the APPLE-CAT modem Cattlecar Galactica - Super Hi-Res Chess in its later, expanded version Contiki - 8-bit text web browser Copy II+ - copy and disk utilities Crossword Magic - Given clues and answers, software automatically arranges the answers into a crossword grid. D Dalton Disk Desintegrator - disk archiver Davex - Unix type shell Dazzle Draw - bitmap graphics editor Design Your Own Home - home design (GS) Disk Muncher - disk copy Diversi Copy - disk copy (GS) DOS.MASTER - DOS 3.3 -> ProDOS utility E Edisoft - text editor EasyMailer EasyWriter F Fantavision - vector graphics animation package G GEOS - integrated office suite GNO/ME - Unix type shell (GS) GraphicEdge - business graphics for AppleWorks spreadsheets (II & GS & Mac) Great American Probability Machine - first full-screen Apple II animations L Lock Smith - copy and disk utilities Logo - easy educational graphic programming language M Magic Window - one of the most popular Apple II word processors by Artsci Merlin 8 & 16 - assembler (II & GS) Micro-DYNAMO - simulation software to build system dynamics models MouseWrite and MouseWrite II - first mouse based word processor for Apple II (II & GS) O Omnis I,II, and III - database/file manager (II & GS) ORCA - program language suite (II & GS) P Point2Point - computer to computer communications program for chat and file transmission (II) PrintShop - sign, banner, and card maker (II & GS) ProSel - disk and file utilities (II & GS) Pro
https://en.wikipedia.org/wiki/Kodaira%20vanishing%20theorem
In mathematics, the Kodaira vanishing theorem is a basic result of complex manifold theory and complex algebraic geometry, describing general conditions under which sheaf cohomology groups with indices q > 0 are automatically zero. The implication for the group with index q = 0 is usually that its dimension — the number of independent global sections — coincides with a holomorphic Euler characteristic that can be computed using the Hirzebruch–Riemann–Roch theorem. The complex analytic case The statement of Kunihiko Kodaira's result is that if M is a compact Kähler manifold of complex dimension n, L any holomorphic line bundle on M that is positive, and K_M is the canonical line bundle, then H^q(M, K_M ⊗ L) = 0 for q > 0. Here ⊗ stands for the tensor product of line bundles. By means of Serre duality, one also obtains the vanishing of H^q(M, L⁻¹) for q < n. There is a generalisation, the Kodaira–Nakano vanishing theorem, in which K_M ⊗ L ≅ Ωⁿ(L), where Ωⁿ(L) denotes the sheaf of holomorphic (n,0)-forms on M with values in L, is replaced by Ω^r(L), the sheaf of holomorphic (r,0)-forms with values in L. Then the cohomology group H^q(M, Ω^r(L)) vanishes whenever q + r > n. The algebraic case The Kodaira vanishing theorem can be formulated within the language of algebraic geometry without any reference to transcendental methods such as Kähler metrics. Positivity of the line bundle L translates into the corresponding invertible sheaf being ample (i.e., some tensor power gives a projective embedding). The algebraic Kodaira–Akizuki–Nakano vanishing theorem is the following statement: If k is a field of characteristic zero, X is a smooth and projective k-scheme of dimension d, and L is an ample invertible sheaf on X, then H^q(X, L ⊗ Ω^p_{X/k}) = 0 for p + q > d, where the Ω^p denote the sheaves of relative (algebraic) differential forms (see Kähler differential). Raynaud (1978) showed that this result does not always hold over fields of characteristic p > 0, and in particular that it fails for Raynaud surfaces. Counterexamples were later given for singular varieties with non-log
https://en.wikipedia.org/wiki/Philopatry
Philopatry is the tendency of an organism to stay in or habitually return to a particular area. The causes of philopatry are numerous, but natal philopatry, where animals return to their birthplace to breed, may be the most common. The term derives from the Greek roots philo, "liking, loving" and patra, "fatherland", although in recent years the term has been applied to more than just the animal's birthplace. Recent usage refers to animals returning to the same area to breed despite not being born there, and to migratory species that demonstrate site fidelity: reusing stopovers, staging points, and wintering grounds. Some of the known reasons for organisms to be philopatric include mating (reproduction), survival, migration, parental care, and access to resources. In most animal species, individuals benefit from living in groups, because, depending on the species, solitary individuals are more vulnerable to predation and more likely to have difficulty finding resources and food. Living in groups therefore increases a species' chances of survival, which correlates with finding resources and reproducing. Depending on the species, returning to the birthplace, where members of that species already occupy the territory, is thus the more favorable option; the birthplace serves as a territory to which the animals return for feeding and refuge, as with fish on a coral reef. In an animal behavior study conducted by Paul Greenwood, female mammals overall are more likely to be philopatric, while male mammals are more likely to disperse; male birds are more likely to be philopatric, while females are more likely to disperse. Philopatry will favor the evolution of cooperative traits, because the direction of sex-biased dispersal has consequences for the particular mating system. Breeding-site philopatry One type of philopatry is breeding philopatry, or breeding-site fidelity, and involves an individual, pair, or colony returning to the same location to breed, year after year. The animal can liv
https://en.wikipedia.org/wiki/Flag%20patch
A flag patch is a piece of fabric displaying the national flag of a country. The image of the flag is usually produced by embroidery, using different colored threads. It can also be produced by printing directly on the fabric, although this is less common. Many countries have patches made to resemble their flag for use in their militaries, although it is not uncommon for them to also be used by personnel in civil jobs (police officers, civilian pilots, bus drivers, astronauts, etc.), as well as by sports teams, who include the flag patch of the country they represent in their uniform. Some countries, for instance the United States, have versions of their flag patch made in different color schemes in order to better blend in with their military camouflage. The three most common alternate color schemes are urban (black/silver), desert (tan/brown), and woodland (black/olive drab). Flag patches are usually sewn onto bags or clothes. There is a fashion among backpackers to buy flag patches in the places they visit and keep them as souvenirs, or to show their origin. Backpackers from Canada and the United States have also sewn Canadian flag patches on their luggage when travelling, in order to avoid hostility while abroad.
https://en.wikipedia.org/wiki/Intelligent%20enterprise
Intelligent Enterprise is a management approach that applies technology and new service paradigms to the challenge of improving business performance. The concept, as articulated in James Brian Quinn's seminal book Intelligent Enterprise, posits that intellect is the core resource in producing and delivering services. This approach is referred to in business jargon as Knowledge Management. Paradigm This paradigm holds that in today's world highly intelligent systems are the key to expanding industry, but that the correct organizational set-up is required to exploit them. Organizational set-up includes "autonomous problem solving, intelligently seeking solutions and taking whatever action is required". In the Intelligent Enterprise, managers should provide a rewarding work environment, lower friction, and energize people within the company. By outsourcing less-core functions to superior vendors, firms become more focused on their core components. However, the degree to which the Intelligent Enterprise can be successful depends on the "competencies of the people and its operational capabilities", such as structure, policies and systems. In order to achieve exceptional success, combining intelligence with competitive information about the environment is essential. Real life examples Honda When Honda began, it competed with Toyota and other Japanese producers; outsourcing many of its components to achieve high economies of scale and focusing solely on the development and production of its manufacturing operations made it successful, in addition to developing an organized leadership team. Apple When Apple entered the highly competitive computer market, its machines retailed for about $2000 but cost less than $500 to produce, as over 70% of the components were outsourced. Instead, Apple focused on the design, logistics, software and product assembly. Due to its concentration on only a few knowledge-adding services, Apple was able to rise to
https://en.wikipedia.org/wiki/Triacetin
Triacetin is the organic compound with the formula C3H5(OCOCH3)3. It is classified as a triglyceride, i.e., the triester of glycerol with acetic acid. It is a colorless, viscous, and odorless liquid with a high boiling point and a low melting point. It has a mild, sweet taste in concentrations lower than 500 ppm, but may appear bitter at higher concentrations. It is one of the glycerine acetate compounds. Uses Triacetin is a common food additive, for instance as a solvent in flavourings, and for its humectant function, with E number E1518 and Australian approval code A1518. It is used as an excipient in pharmaceutical products, where it serves as a humectant, a plasticizer, and a solvent. Potential uses The plasticizing capabilities of triacetin have been utilized in the synthesis of a biodegradable phospholipid gel system for the dissemination of the cancer drug paclitaxel (PTX). In the study, triacetin was combined with PTX, ethanol, a phospholipid and a medium chain triglyceride to form a gel-drug complex. This complex was then injected directly into the cancer cells of glioma-bearing mice. The gel slowly degraded and facilitated sustained release of PTX into the targeted glioma cells. Triacetin can also be used as a fuel additive: as an antiknock agent it can reduce engine knocking in gasoline, and it can improve the cold and viscosity properties of biodiesel. It has been considered as a possible source of food energy in artificial food regeneration systems on long space missions. It is believed to be safe to get over half of one's dietary energy from triacetin. Synthesis Triacetin was first prepared in 1854 by the French chemist Marcellin Berthelot. Triacetin was prepared in the 19th century from glycerol and acetic acid. Its synthesis from acetic anhydride and glycerol is simple and inexpensive. 3 (CH3CO)2O + 1 C3H5(OH)3 → 1 C3H5(OCOCH3)3 + 3 CH3COOH This synthesis has been conducted with catalytic sodium hydroxide and microwave irradiation to give a 99% yield of triacetin. It has also been conducted w
https://en.wikipedia.org/wiki/Bus%20analyzer
A bus analyzer is a type of protocol analysis tool used for capturing and analyzing communication data across a specific interface bus, usually embedded in a hardware system. The bus analyzer functionality helps design, test and validation engineers to check, test, debug and validate their designs throughout the design cycles of a hardware-based product. It also helps in later phases of a product life cycle, in examining communication interoperability between systems and between components, and in clarifying hardware support concerns. A bus analyzer is designed for use with specific parallel or serial bus architectures. Though the term bus analyzer implies that a physical communication interface is being analyzed, it is sometimes used interchangeably with the terms protocol analyzer or packet analyzer, and may also be applied to analysis tools for wireless interfaces such as wireless LAN (e.g. Wi-Fi) and PAN (e.g. Bluetooth, Wireless USB), even though these technologies do not have a "wired" bus. The bus analyzer monitors and captures the bus communication data, decodes and analyses it, and displays the data and analysis reports to the user. It is essentially a logic analyzer with some additional knowledge of the underlying bus traffic characteristics. One of the key differences between a bus analyzer and a logic analyzer is the bus analyzer's ability to filter and extract only the relevant traffic that occurs on the analyzed bus. Some advanced logic analyzers offer data storage qualification options that also allow bus traffic to be filtered, enabling bus analyzer-like features. Some key differentiators between bus and logic analyzers are: 1. Cost: Logic analyzers usually carry higher prices than bus analyzers. On the other hand, a logic analyzer can be used with a variety of bus architectures, whereas a bus analyzer is only good for one architecture. 2. Targeted capabilities and preformatting of data: A bus analyzer can be designed to provide very specific
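The "decode, then filter and extract only the relevant traffic" step described above can be illustrated in miniature. The frame layout here (first byte address, second byte command, rest payload) is entirely hypothetical; real bus protocols each define their own framing, which is exactly the protocol knowledge a bus analyzer adds on top of a raw logic analyzer.

```python
# Toy illustration of a bus analyzer's decode-and-filter stage: parse captured
# raw frames and keep only those addressed to a device of interest.
# The frame format (address byte, command byte, payload) is hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    address: int
    command: int
    payload: bytes

def decode(raw: bytes) -> Frame:
    # Assumed layout: byte 0 = address, byte 1 = command, rest = payload.
    return Frame(raw[0], raw[1], raw[2:])

capture = [bytes([0x10, 0x01, 0xAA]),   # to device 0x10
           bytes([0x22, 0x02]),         # to another device: filtered out
           bytes([0x10, 0x03])]         # to device 0x10

relevant = [decode(raw) for raw in capture if raw[0] == 0x10]
print(len(relevant))   # only traffic addressed to 0x10 is kept
```

In a real analyzer this filtering happens in hardware at line rate, before the capture buffer, which is what makes it practical on busy buses.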
https://en.wikipedia.org/wiki/Calcium%20acetate
Calcium acetate is a chemical compound, the calcium salt of acetic acid, with the formula Ca(C2H3O2)2. Its standard name is calcium acetate; calcium ethanoate is the systematic name. An older name is acetate of lime. The anhydrous form is very hygroscopic; therefore the monohydrate (Ca(CH3COO)2•H2O) is the common form. Production Calcium acetate can be prepared by soaking calcium carbonate (found in eggshells, or in common carbonate rocks such as limestone or marble) or hydrated lime in vinegar: CaCO3(s) + 2CH3COOH(aq) → Ca(CH3COO)2(aq) + H2O(l) + CO2(g) Ca(OH)2(s) + 2CH3COOH(aq) → Ca(CH3COO)2(aq) + 2H2O(l) Since both reagents would have been available prehistorically, the chemical would have been observable as crystals then. Uses In kidney disease, blood levels of phosphate may rise (a condition called hyperphosphatemia), leading to bone problems. Calcium acetate binds phosphate in the diet to lower blood phosphate levels. Calcium acetate is used as a food additive, as a stabilizer, buffer and sequestrant, mainly in candy products, under the number E263. Tofu is traditionally obtained by coagulating soy milk with calcium sulfate. Calcium acetate has been found to be a better alternative: being soluble, it requires less skill and a smaller amount. Because it is inexpensive, calcium acetate was once a common starting material for the synthesis of acetone before the development of the cumene process: Ca(CH3COO)2 → CaCO3(s) + (CH3)2CO A saturated solution of calcium acetate in alcohol forms a semisolid, flammable gel much like "canned heat" products such as Sterno. Chemistry teachers often prepare "California Snowballs", a mixture of calcium acetate solution and ethanol. The resulting gel is whitish in color and can be formed to resemble a snowball. Natural occurrence Pure calcium acetate is as yet unknown among minerals. Calcium acetate chloride is listed as a known mineral, but its genesis is likely anthropogenic (human-generated, as opposed to na
https://en.wikipedia.org/wiki/Northbound%20interface
In computer networking and computer architecture, a northbound interface of a component is an interface that allows the component to communicate with a higher-level component, using the latter component's southbound interface. The northbound interface conceptualizes the lower-level details (e.g., data or functions) used by, or in, the component, allowing the component to interface with higher-level layers. In architectural overviews, the northbound interface is normally drawn at the top of the component it is defined in; hence the name northbound interface. A southbound interface decomposes concepts into the technical details, mostly specific to a single component of the architecture. Southbound interfaces are drawn at the bottom of an architectural overview. Typical use A northbound interface is typically an output-only interface (as opposed to one that accepts user input) found in carrier-grade network and telecommunications network elements. The languages or protocols commonly used include SNMP and TL1. For example, a device that is capable of sending out syslog messages but that is not configurable by the user is said to implement a northbound interface. Other examples include SMASH, IPMI, WSMAN, and SOAP. The term is also important for software-defined networking (SDN), where it facilitates communication between the physical devices, the SDN software and applications running on the network.
https://en.wikipedia.org/wiki/Butyl%20acetate
n-Butyl acetate is an organic compound with the formula . A colorless, flammable liquid, it is the ester derived from n-butanol and acetic acid. It is found in many types of fruit, where it imparts characteristic flavors and has a sweet smell of banana or apple. It is used as an industrial solvent. The other three isomers (four, including stereoisomers) of butyl acetate are isobutyl acetate, tert-butyl acetate, and sec-butyl acetate (two enantiomers). Production and use Butyl acetate is commonly manufactured by the Fischer esterification of butanol (or its isomer, to make an isomer of butyl acetate) and acetic acid in the presence of sulfuric acid: Butyl acetate is mainly used as a solvent for coatings and inks. It is a component of fingernail polish. Occurrence in nature Apples, especially of the 'Red Delicious' variety, are flavored in part by this chemical. The alarm pheromones emitted by the Koschevnikov gland of honey bees contain butyl acetate.
https://en.wikipedia.org/wiki/DBc
dBc (decibels relative to the carrier) is the power ratio of a signal to a carrier signal, expressed in decibels. For example, phase noise is expressed in dBc/Hz at a given frequency offset from the carrier. dBc can also be used as a measurement of spurious-free dynamic range (SFDR) between the desired signal and unwanted spurious outputs resulting from the use of signal converters such as a digital-to-analog converter or a frequency mixer. If the dBc figure is positive, the relative signal strength is greater than the carrier signal strength. If the dBc figure is negative, the relative signal strength is less than the carrier signal strength. Although the decibel (dB) is permitted for use alongside SI units, the dBc is not. Example Suppose a carrier (reference signal) has a power of and a noise signal has a power of . The power of the reference signal expressed in decibels is: The power of the noise signal expressed in decibels is: The dBc difference between the noise signal and the reference signal is then the difference of these two decibel values. It is also possible to compute the dBc power of the noise signal with respect to the reference signal directly as the logarithm of their power ratio: dBc = 10·log10(P_noise / P_carrier).
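The ratio form of the definition can be computed directly; a minimal sketch (the 1 µW spur and 1 mW carrier values are hypothetical, chosen only for illustration):

```python
import math

def dbc(p_signal, p_carrier):
    """Power of a signal relative to the carrier, in dBc.

    A negative result means the signal is weaker than the carrier.
    """
    return 10 * math.log10(p_signal / p_carrier)

# Hypothetical example: a 1 microwatt spur next to a 1 milliwatt carrier
# sits 30 dB below it, i.e. at -30 dBc.
print(dbc(1e-6, 1e-3))  # -30.0
```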
https://en.wikipedia.org/wiki/Church%20encoding
In mathematics, Church encoding is a means of representing data and operators in the lambda calculus. The Church numerals are a representation of the natural numbers using lambda notation. The method is named for Alonzo Church, who first encoded data in the lambda calculus this way. Terms that are usually considered primitive in other notations (such as integers, booleans, pairs, lists, and tagged unions) are mapped to higher-order functions under Church encoding. The Church–Turing thesis asserts that any computable operator (and its operands) can be represented under Church encoding. In the untyped lambda calculus the only primitive data type is the function. Use A straightforward implementation of Church encoding slows some access operations from O(1) to O(n), where n is the size of the data structure, making Church encoding impractical. Research has shown that this can be addressed by targeted optimizations, but most functional programming languages instead expand their intermediate representations to contain algebraic data types. Nonetheless Church encoding is often used in theoretical arguments, as it is a natural representation for partial evaluation and theorem proving. Operations can be typed using higher-ranked types, and primitive recursion is easily accessible. The assumption that functions are the only primitive data types streamlines many proofs. Church encoding is complete but only representationally. Additional functions are needed to translate the representation into common data types, for display to people. It is not possible in general to decide whether two functions are extensionally equal, due to the undecidability of equivalence from Church's theorem. The translation may apply the function in some way to retrieve the value it represents, or look up its value as a literal lambda term. Lambda calculus is usually interpreted as using intensional equality. There are potential problems with the interpretation of results because of the difference between t
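The Church numerals and their arithmetic can be sketched directly in Python, whose lambdas mirror the lambda-calculus terms (a minimal illustration of the representation, not an efficient one; `to_int` is the kind of translation function mentioned above, recovering an ordinary integer by applying the encoded numeral):

```python
# Church numerals: the number n is represented as the higher-order function
# that composes its argument f with itself n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))          # apply f one more time
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))             # compose n(f), m times

def to_int(n):
    # Translate the representation into a common data type by applying it
    # to the ordinary successor function starting from 0.
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = plus(one)(two)
six = mult(two)(three)
print(to_int(six))  # 6
```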
https://en.wikipedia.org/wiki/Key%20whitening
In cryptography, key whitening is a technique intended to increase the security of an iterated block cipher. It consists of steps that combine the data with portions of the key. Details The most common form of key whitening is xor-encrypt-xor: a simple XOR of key material with the data before the first round and after the last round of encryption. The first block cipher to use a form of key whitening was DES-X, which simply uses two extra 64-bit keys for whitening, beyond the normal 56-bit key of DES. This is intended to increase the complexity of a brute-force attack, increasing the effective size of the key without major changes in the algorithm. DES-X's inventor, Ron Rivest, named the technique whitening. The cipher FEAL (followed by Khufu and Khafre) introduced the practice of key whitening using portions of the same key used in the rest of the cipher. This offers no additional protection from brute-force attacks, but it can make other attacks more difficult. In a Feistel cipher or similar algorithm, key whitening can increase security by concealing the specific inputs to the first and last round functions. In particular, it is not susceptible to a meet-in-the-middle attack. This form of key whitening has been adopted as a feature of many later block ciphers, including AES, MARS, RC6, and Twofish. See also Whitening transformation
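The xor-encrypt-xor construction can be sketched as follows. Note the "core cipher" here is a toy byte-wise XOR stand-in, not a real cipher; it is used only so the two whitening layers stay visible, in the style of DES-X's E'(x) = k2 XOR E_key(k1 XOR x):

```python
import os

def toy_encrypt(key, block):
    # Toy stand-in for the core cipher rounds (NOT a real cipher): a
    # byte-wise XOR, which is conveniently its own inverse.
    return bytes(b ^ k for b, k in zip(block, key))

def xex_encrypt(k1, key, k2, block):
    """xor-encrypt-xor: XOR with k1 before the rounds, with k2 after."""
    whitened_in = bytes(b ^ w for b, w in zip(block, k1))
    return bytes(b ^ w for b, w in zip(toy_encrypt(key, whitened_in), k2))

def xex_decrypt(k1, key, k2, block):
    # Undo each layer in reverse order: strip k2, invert the core, strip k1.
    inner = toy_encrypt(key, bytes(b ^ w for b, w in zip(block, k2)))
    return bytes(b ^ w for b, w in zip(inner, k1))

k1, key, k2 = os.urandom(8), os.urandom(8), os.urandom(8)
ct = xex_encrypt(k1, key, k2, b"8bytemsg")
assert xex_decrypt(k1, key, k2, ct) == b"8bytemsg"
```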
https://en.wikipedia.org/wiki/Nested%20Context%20Language
In the field of digital and interactive television, Nested Context Language (NCL) is a declarative authoring language for hypermedia documents. NCL documents do not contain multimedia elements such as audio or video content; rather they function as a "glue" language that specifies how multimedia components are related. In particular, NCL documents specify how these components are synchronized relative to each other and how the components are composed together into a unified document. Among its main facilities, it treats hypermedia relations as first-class entities through the definition of hypermedia connectors, and it can specify arbitrary semantics for a hypermedia composition using the concept of composite templates. NCL is an XML application language that is an extension of XHTML, with XML elements and attributes specified by a modular approach. NCL modules can be added to standard web languages, such as XLink and SMIL. NCL was initially designed for the Web environment, but a major application of NCL is use as the declarative language of the Japanese-Brazilian ISDB-Tb (International Standard for Digital Broadcasting) terrestrial digital television (DTV) middleware (named Ginga). It is also the first standardized technology of the ITU-T multimedia application framework series of specifications for IPTV (internet protocol television) services. In both cases it is used to develop interactive applications for digital television. Structure of an NCL document NCL was designed to be modular to allow for use of subsets of modules according to the needs of the particular application. The 3.1 version of the standard is split into 14 areas with each module assigned to an area. Each module in turn defines one or more XML elements. The areas and associated modules are:
Structure: Structure Module
Components: Media Module, Context Module
Interfaces: MediaContentAnchor Module, CompositeNodeInterface Module, PropertyAnchor Module, SwitchInterface Module
Layout: Layout Module
Present
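A minimal NCL document skeleton might look like the following. This is an illustrative sketch only: the ids and the media file are hypothetical, and the structure loosely follows the NCL 3.0 DTV profile rather than being a normative example:

```xml
<!-- Hypothetical skeleton: region/descriptor layout in the head,
     one media node and its entry port in the body. -->
<ncl id="example" xmlns="http://www.ncl.org.br/NCL3.0/EDTVProfile">
  <head>
    <regionBase>
      <region id="videoRegion" width="100%" height="100%"/>
    </regionBase>
    <descriptorBase>
      <descriptor id="videoDesc" region="videoRegion"/>
    </descriptorBase>
  </head>
  <body>
    <port id="entry" component="video"/>
    <media id="video" src="media/movie.mp4" descriptor="videoDesc"/>
  </body>
</ncl>
```

The head declares where and how content is presented; the body declares the media nodes themselves, which is how NCL keeps the "glue" separate from the audiovisual content it arranges.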
https://en.wikipedia.org/wiki/Rain%20%28Madonna%20song%29
"Rain" is a song by American singer Madonna from her fifth studio album Erotica (1992). The song was released on July 19, 1993, by Maverick Records as the album's fifth single internationally and the fourth single in North America. It was later included on her ballad compilation album Something to Remember (1995). The song was written and produced by Madonna and Shep Pettibone. A pop and R&B ballad, "Rain" features a more "friendly" composition than the other singles released from the album. Lyrically, the song likens rain to the empowering effect of love and to water's ability to cleanse and wash away pain. Like the other songs on Erotica, sexual contact is also a possible interpretation of the song. "Rain" received a positive response from music critics, who noted it as an exceptional ballad amongst the overtly sexual content on Erotica. It peaked at number 14 on the Billboard Hot 100 in the United States, while becoming a top-10 hit in Australia, Canada, Ireland, Italy, Japan, and the United Kingdom. The accompanying music video, shot by director Mark Romanek, features Madonna singing the song against various backdrops on a set. The music video was praised by many critics for its innovation and cinematography. Madonna performed the song during The Girlie Show World Tour in 1993 and The Celebration Tour in 2023, while a remixed version of the song was used as a video interlude during her 2008–09 Sticky & Sweet Tour. Background and release After the completion of filming A League of Their Own, Madonna began working on her fifth studio album Erotica with Shep Pettibone. "Rain" was one of the first songs developed for the album, alongside "Deeper and Deeper", "Erotica", and "Thief of Hearts", during the writing sessions in October and November 1991. According to Pettibone, these songs were essentially Madonna's stories and the things she wanted to say. "Rain" was written and produced together by Madonna and Pettibone. She had initially written the song for a
https://en.wikipedia.org/wiki/Enharmonic%20keyboard
An enharmonic keyboard is a musical keyboard where enharmonically equivalent notes do not have identical pitches. A conventional keyboard has, for instance, only one key and pitch for C♯ and D♭, but an enharmonic keyboard would have two different keys and pitches for these notes. Traditionally, such keyboards use black split keys to express both notes, but diatonic white keys may also be split. As an important device for composing, playing and studying enharmonic music, enharmonic keyboards are capable of producing microtones and have separate keys for at least some pairs of unequal pitches that must be treated as enharmonically equal on conventional keyboard instruments. The term (divergence of scholarly opinions) "Enharmonic keyboard" is a term used by scholars in their studies of enharmonic keyboard instruments (organ, harpsichord, piano, harmonium and synthesizer) with reference to a keyboard with more than 12 keys per octave. Scholarly consensus about the term's precise definition has not yet been established. In the New Grove Dictionary (2001) Nicolas Meeùs defines an "enharmonic keyboard" as "a keyboard with more than 12 keys and sounding more than 12 different pitches in the octave". He does not, however, specify the origin of the term in his article. Rudolph Rasch (2002) suggested applying the term "enharmonic keyboard" more precisely, to keyboards with 29–31 keys per octave. Patrizio Barbieri (2007), in turn, raised the objection that this usage is not supported by early theoretical works. As for historical evidence, confusion has often reigned over the terminology of split-keyed instruments, which were sometimes called 'chromatic', sometimes 'enharmonic'. The builders (or persons who only described the construction) of such keyboard instruments often gave them names without any reference to genus, like 'archicembalo' (Nicola Vicentino), 'cembalo pentarmonico' (Giovanni Battista Doni), 'clavicymbalum universale' (Michael Praetorius) or even simply 'clauocembalo' (
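The pitch gap between two enharmonically "equivalent" notes can be illustrated with 31-tone equal temperament, a tuning associated with keyboards of the 29–31-keys-per-octave kind mentioned above. This is a sketch under the usual 31-EDO convention that a chromatic semitone (C to C♯) spans 2 steps and a diatonic semitone (C♯ to D, or C to D♭) spans 3:

```python
STEPS_PER_OCTAVE = 31          # one historical enharmonic-keyboard tuning
CENTS_PER_STEP = 1200 / STEPS_PER_OCTAVE

# C-sharp and D-flat fall on different keys: 2 and 3 steps above C,
# respectively, unlike on a conventional 12-key octave.
c_sharp = 2 * CENTS_PER_STEP   # ~77.4 cents above C
d_flat = 3 * CENTS_PER_STEP    # ~116.1 cents above C
print(round(d_flat - c_sharp, 1))  # the enharmonic gap of one step, ~38.7 cents
```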
https://en.wikipedia.org/wiki/Ectopic%20expression
Ectopic derives from the prefix ecto-, meaning "out of place." Ectopic expression is the abnormal expression of a gene in a cell type, tissue type, or developmental stage in which the gene is not usually expressed. The term is predominantly used in studies of metazoans, especially Drosophila melanogaster, for research purposes. How is it used Although ectopic expression can be caused by a natural condition, it is uncommonly seen in nature because it is a product of defects in gene regulation. Instead, ectopic expression is more commonly induced for research purposes. Artificially induced gene expression helps to determine the function of a gene of interest. Common techniques include overexpressing or misexpressing genes with the UAS-Gal4 system in D. melanogaster. In model organisms, such techniques are used to perform genetic screens to identify the function of a gene involved in specific cellular or developmental processes. Ectopic expression using these techniques is a useful tool because phenotypes induced in a tissue or cell type where the gene is not normally expressed are easily distinguishable, compared with a tissue or cell type where the gene is normally expressed. By comparison with its basal expression, the function of a gene of interest can be identified. Although ectopic expression as usually understood deals with endogenous genes in an organism, it can be extended to a similar concept, transgenesis, in which an exogenous gene is introduced into a cell or tissue type in which the gene is not usually expressed. The practice of ectopic expression in biological science is not limited to identifying the function of a gene in a known cell or tissue type; it is also used to discover unknown or additional functions of a gene. Research examples Paired box protein Paired box protein Pax-6 in humans is a transcription factor, which is a main regulatory gene of eye and brain development. Ectopic expression o
https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley%20model
The Hodgkin–Huxley model, or conductance-based model, is a mathematical model that describes how action potentials in neurons are initiated and propagated. It is a set of nonlinear differential equations that approximates the electrical characteristics of excitable cells such as neurons and muscle cells. It is a continuous-time dynamical system. Alan Hodgkin and Andrew Huxley described the model in 1952 to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon. They received the 1963 Nobel Prize in Physiology or Medicine for this work. Basic components The typical Hodgkin–Huxley model treats each component of an excitable cell as an electrical element (as shown in the figure). The lipid bilayer is represented as a capacitance (Cm). Voltage-gated ion channels are represented by electrical conductances (gn, where n is the specific ion channel) that depend on both voltage and time. Leak channels are represented by linear conductances (gL). The electrochemical gradients driving the flow of ions are represented by voltage sources (En) whose voltages are determined by the ratio of the intra- and extracellular concentrations of the ionic species of interest. Finally, ion pumps are represented by current sources (Ip). The membrane potential is denoted by Vm. Mathematically, the current flowing through the lipid bilayer is written as Ic = Cm dVm/dt, and the current through a given ion channel is the product of that channel's conductance and the driving potential for the specific ion, Ii = gi (Vm − Vi), where Vi is the reversal potential of the specific ion channel. Thus, for a cell with sodium and potassium channels, the total current through the membrane is given by: I = Cm dVm/dt + gK (Vm − VK) + gNa (Vm − VNa) + gL (Vm − VL), where I is the total membrane current per unit area, Cm is the membrane capacitance per unit area, gK and gNa are the potassium and sodium conductances per unit area, respectively, VK and VNa are the potassium and sodium reversal potentials, respectively,
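The membrane equation can be integrated numerically; below is a minimal forward-Euler sketch with the classic squid-axon parameter values. The 10 µA/cm² stimulus, time step and duration are illustrative choices, not part of the original model statement:

```python
import math

# Classic squid-axon parameter set: conductances in mS/cm^2,
# potentials in mV, membrane capacitance in uF/cm^2.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening (a_) and closing (b_) rates of the gates.
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration; returns the membrane-potential trace."""
    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))   # start each gate at its steady state
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(t_max / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print(max(trace) > 0)  # a sustained 10 uA/cm^2 stimulus elicits spikes: True
```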
https://en.wikipedia.org/wiki/Parenthesome
Within the cells of some basidiomycete fungi are found microscopic structures called parenthesomes or septal pore caps. They are shaped like parentheses and found on either side of pores in the dolipore septum, which separates cells within a hypha. Their function has not been established, and their composition has not been fully elucidated. The variations in their appearance are useful in distinguishing individual species. Generally, they are barrel-shaped, with an endoplasmic reticulum covering. See also Pit connection
https://en.wikipedia.org/wiki/Synapsis
Synapsis is the pairing of two homologous chromosomes that occurs during meiosis. It allows the matching-up of homologous pairs prior to their segregation, and possible chromosomal crossover between them. Synapsis takes place during prophase I of meiosis. When homologous chromosomes synapse, their ends are first attached to the nuclear envelope. These end-membrane complexes then migrate, assisted by the extranuclear cytoskeleton, until matching ends have been paired. Then the intervening regions of the chromosomes are brought together, and may be connected by a protein-RNA complex called the synaptonemal complex. During synapsis, autosomes are held together by the synaptonemal complex along their whole length, whereas for sex chromosomes this only takes place at one end of each chromosome. This is not to be confused with mitosis: mitosis also has a prophase, but does not ordinarily involve pairing of homologous chromosomes. When the non-sister chromatids intertwine, segments of chromatids with similar sequence may break apart and be exchanged in a process known as genetic recombination or "crossing-over". This exchange produces a chiasma, a region shaped like an X, where the two chromosomes are physically joined. At least one chiasma per chromosome often appears to be necessary to stabilise bivalents along the metaphase plate during separation. The crossover of genetic material also provides a possible defence against 'chromosome killer' mechanisms, by removing the distinction between 'self' and 'non-self' through which such a mechanism could operate. A further consequence of recombinant synapsis is to increase genetic variability within the offspring. Repeated recombination also has the general effect of allowing genes to move independently of each other through the generations, allowing for the independent concentration of beneficial genes and the purging of the detrimental. Following synapsis, a type of recombination referred to as synthesis-dependent strand annealing
https://en.wikipedia.org/wiki/%CE%93-convergence
In the field of mathematical analysis for the calculus of variations, Γ-convergence (Gamma-convergence) is a notion of convergence for functionals. It was introduced by Ennio De Giorgi. Definition Let X be a topological space and N(x) denote the set of all neighbourhoods of the point x ∈ X. Let further F_n : X → ℝ ∪ {±∞} be a sequence of functionals on X. The Γ-lower limit and the Γ-upper limit are defined as follows: Γ-liminf_{n→∞} F_n(x) = sup_{U ∈ N(x)} liminf_{n→∞} inf_{y ∈ U} F_n(y), Γ-limsup_{n→∞} F_n(x) = sup_{U ∈ N(x)} limsup_{n→∞} inf_{y ∈ U} F_n(y). F_n are said to Γ-converge to F if there exists a functional F such that Γ-liminf_{n→∞} F_n = Γ-limsup_{n→∞} F_n = F. Definition in first-countable spaces In first-countable spaces, the above definition can be characterized in terms of sequential Γ-convergence in the following way. Let X be a first-countable space and F_n a sequence of functionals on X. Then F_n are said to Γ-converge to the Γ-limit F if the following two conditions hold: Lower bound inequality: For every sequence x_n ∈ X such that x_n → x as n → +∞, F(x) ≤ liminf_{n→∞} F_n(x_n). Upper bound inequality: For every x ∈ X, there is a sequence x_n converging to x such that F(x) ≥ limsup_{n→∞} F_n(x_n). The first condition means that F provides an asymptotic common lower bound for the F_n. The second condition means that this lower bound is optimal. Relation to Kuratowski convergence Γ-convergence is connected to the notion of Kuratowski convergence of sets. Let epi(F) denote the epigraph of a function F and let F_n be a sequence of functionals on X. Then epi(Γ-liminf_{n→∞} F_n) = K-limsup_{n→∞} epi(F_n) and epi(Γ-limsup_{n→∞} F_n) = K-liminf_{n→∞} epi(F_n), where K-liminf denotes the Kuratowski limes inferior and K-limsup the Kuratowski limes superior in the product topology of X × ℝ. In particular, F_n Γ-converges to F in X if and only if epi(F_n) Kuratowski-converges to epi(F) in X × ℝ. This is the reason why Γ-convergence is sometimes called epi-convergence. Properties Minimizers converge to minimizers: If F_n Γ-converge to F, and x_n is a minimizer for F_n, then every cluster point of the sequence x_n is a minimizer of F. Γ-limits are always lower semicontinuous. Γ-convergence is stable under continuous perturbations: If F_n Γ-converges to F and G is continuous, then F_n + G will Γ-converge to F + G. A constant sequence of functionals F_n = F does not necessarily Γ-converge to F, but to the relaxation of F, the largest lower semicontinuous functional below F. Applications An impor
https://en.wikipedia.org/wiki/Area%20compatibility%20factor
In survival analysis, the area compatibility factor, F, is used in the indirect standardisation of population mortality rates: F = (Σx sEc(x,t) · sm(x,t) / Σx sEc(x,t)) ÷ (Σx Ec(x,t) · sm(x,t) / Σx Ec(x,t)), where: sEc(x,t) is the central exposed-to-risk from age x to x + t for the standard population, Ec(x,t) is the central exposed-to-risk from age x to x + t for the population under study and sm(x,t) is the mortality rate in the standard population for ages x to x + t. The expression can be thought of as the crude mortality rate for the standard population divided by what the crude mortality rate would be for the region being studied if its mortality rates were the same as those of the standard population. F is then multiplied by the crude mortality rate of the region under study to arrive at the indirectly standardised mortality rate.
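The ratio described above can be sketched numerically. The two age bands, the exposures and the standard rates below are hypothetical values chosen only to make the arithmetic visible:

```python
def area_compatibility_factor(std_exposures, study_exposures, std_rates):
    """F = crude rate of the standard population, divided by the crude rate
    the study population would have if it experienced the standard rates.
    All three sequences are indexed by the same age bands."""
    std_crude = (sum(E * m for E, m in zip(std_exposures, std_rates))
                 / sum(std_exposures))
    expected_crude = (sum(E * m for E, m in zip(study_exposures, std_rates))
                      / sum(study_exposures))
    return std_crude / expected_crude

# Hypothetical two-band example: the study population is weighted toward the
# younger (lower-mortality) band, so F comes out above 1.
F = area_compatibility_factor([500, 500], [800, 200], [0.002, 0.010])
crude_study_rate = 0.004                    # observed crude rate, hypothetical
indirect_std_rate = F * crude_study_rate    # indirectly standardised rate
print(round(F, 3))  # 1.667
```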
https://en.wikipedia.org/wiki/263%20%28number%29
263 is the natural number between 262 and 264. It is also a prime number. In mathematics 263 is a balanced prime, an irregular prime, a Ramanujan prime, a Chen prime, and a safe prime. It is also a strictly non-palindromic number and a happy number.
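A brute-force check of the primality, balanced-prime and happy-number claims (illustrative only; "balanced prime" means the number is the mean of its neighbouring primes):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_happy(n):
    # Repeatedly sum squared digits; happy numbers reach 1.
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d)**2 for d in str(n))
    return n == 1

prev_p = next(p for p in range(262, 1, -1) if is_prime(p))   # 257
next_p = next(p for p in range(264, 300) if is_prime(p))     # 269
print(is_prime(263), (prev_p + next_p) // 2 == 263, is_happy(263))  # True True True
```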
https://en.wikipedia.org/wiki/269%20%28number%29
269 (two hundred [and] sixty-nine) is the natural number between 268 and 270. It is also a prime number. In mathematics 269 is a twin prime, and a Ramanujan prime. It is the largest prime factor of 9! + 1 = 362881, and the smallest natural number that cannot be represented as the determinant of a 10 × 10 (0,1)-matrix.
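The claim about 9! + 1 can be checked by trial division (a short illustrative sketch):

```python
def prime_factors(n):
    """Return the prime factorization of n as a sorted list with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

nine_factorial = 362880  # 9!
print(prime_factors(nine_factorial + 1))  # [19, 71, 269], so 269 is the largest
```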
https://en.wikipedia.org/wiki/Difference%20set
In combinatorics, a (v, k, λ) difference set is a subset D of size k of a group G of order v such that every non-identity element of G can be expressed as a product d1d2−1 of elements of D in exactly λ ways. A difference set is said to be cyclic, abelian, non-abelian, etc., if the group G has the corresponding property. A difference set with λ = 1 is sometimes called planar or simple. If G is an abelian group written in additive notation, the defining condition is that every non-zero element of G can be written as a difference of elements of D in exactly λ ways. The term "difference set" arises in this way. Basic facts A simple counting argument shows that there are exactly k(k − 1) pairs of elements from D that will yield non-identity elements, so every difference set must satisfy the equation k(k − 1) = λ(v − 1). If D is a difference set and g ∈ G, then gD = {gd : d ∈ D} is also a difference set, and is called a translate of D (D + g in additive notation). The complement of a (v, k, λ)-difference set is a (v, v − k, v − 2k + λ)-difference set. The set of all translates of a difference set D forms a symmetric block design, called the development of D and denoted by dev(D). In such a design there are v elements (usually called points) and v blocks (subsets). Each block of the design consists of k points, each point is contained in k blocks. Any two blocks have exactly λ elements in common and any two points are simultaneously contained in exactly λ blocks. The group G acts as an automorphism group of the design. It is sharply transitive on both points and blocks. In particular, if λ = 1, then the difference set gives rise to a projective plane. An example of a (7, 3, 1) difference set in the group Z/7Z is the subset {1, 2, 4}. The translates of this difference set form the Fano plane. Since every difference set gives a symmetric design, the parameter set must satisfy the Bruck–Ryser–Chowla theorem. Not every symmetric design gives a difference set. Equivalent and isomorphic difference sets Two difference sets D1 in group G1 and D2 in group G2 are equivalent if there is a group isomorphism ψ between G1 and G2 such that ψ(D1) = gD2 for
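The (7, 3, 1) example can be verified directly. The subset {1, 2, 4} of Z/7Z is the standard instance; the brute-force sketch below checks both the defining condition and the counting identity k(k − 1) = λ(v − 1):

```python
from collections import Counter

def difference_counts(D, v):
    """Count how often each element of Z_v arises as d1 - d2 (mod v), d1 != d2."""
    return Counter((d1 - d2) % v for d1 in D for d2 in D if d1 != d2)

D, v, lam = {1, 2, 4}, 7, 1
counts = difference_counts(D, v)
# Every non-zero residue appears exactly lambda = 1 time, and k(k-1) = lambda(v-1).
print(all(counts[g] == lam for g in range(1, v)),
      len(D) * (len(D) - 1) == lam * (v - 1))  # True True
```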
https://en.wikipedia.org/wiki/Antibody%20opsonization
Antibody opsonization is the process by which a pathogen is marked for phagocytosis. Under normal inflammatory circumstances, microbial pathogen-associated molecular patterns (PAMPs) bind the endocytic pattern recognition receptors (PRRs) of phagocytes, which mediates phagocytosis by neutrophils or macrophages. As well as endocytic PRRs, phagocytes also express opsonin receptors such as the Fc receptor and complement receptor 1 (CR1). Should the microbe be coated with opsonising antibodies or C3b complement, the co-stimulation of the endocytic PRR and the opsonin receptor increases the efficacy of the phagocytic process, enhancing the lysosomal elimination of the infective agent. This mechanism of antibody-mediated increase in phagocytic efficacy is named opsonization. Opsonization involves the binding of an opsonin (e.g., antibody) to an epitope on a pathogen. After the opsonin binds to the membrane, phagocytes are attracted to the pathogen. The Fab portion of the antibody binds to the antigen, whereas the Fc portion of the antibody binds to an Fc receptor on the phagocyte, facilitating phagocytosis. The receptor-opsonin complex also creates byproducts like C3b and C4b, which are important components for the efficient function of the complement system: they can mediate inflammation, as when C3b binds to C3-convertase, or they can facilitate the formation of the complement membrane attack complex (MAC) by being deposited on the cell surface of the pathogen. In antibody-dependent cell-mediated cytotoxicity the pathogen does not need to be phagocytized to be destroyed. During this process, the pathogen is opsonized and bound by the antibody IgG via its Fab domain. This allows the antibody to bind an immune effector cell via its Fc domain. Antibody-dependent cell-mediated cytotoxicity then triggers a release of lysis products from the bound immune effector cell (monocytes, neutrophils, eosinophils and NK cells). Lack of mediation can c
https://en.wikipedia.org/wiki/Rhodamine%20B
Rhodamine B is a chemical compound and a dye. It is often used as a tracer dye in water to determine the rate and direction of flow and transport. Rhodamine dyes fluoresce and can thus be detected easily and inexpensively with fluorometers. Rhodamine B is used in biology as a staining fluorescent dye, sometimes in combination with auramine O, as the auramine-rhodamine stain, to demonstrate acid-fast organisms, notably Mycobacterium. Rhodamine dyes are also used extensively in biotechnology applications such as fluorescence microscopy, flow cytometry, fluorescence correlation spectroscopy and ELISA. Other uses Rhodamine B is often mixed with herbicides to show where they have been used. It is also being tested for use as a biomarker in oral rabies vaccines for wildlife, such as raccoons, to identify animals that have eaten a vaccine bait. The rhodamine is incorporated into the animal's whiskers and teeth. Rhodamine B is an important hydrophilic xanthene dye well known for its stability, and is widely used in the textile industry and in the leather, paper printing, paint, coloured glass and plastic industries. Rhodamine B (BV10) is mixed with quinacridone magenta (PR122) to make the bright pink watercolor known as Opera Rose. Properties Rhodamine B can exist in equilibrium between two forms: an "open"/fluorescent form and a "closed"/nonfluorescent spirolactone form. The "open" form dominates under acidic conditions, while the colorless "closed" form dominates under basic conditions. The fluorescence intensity of rhodamine B decreases as temperature increases. The solubility of rhodamine B in water varies by manufacturer, and has been reported as 8 g/L and ~15 g/L, while solubility in alcohol (presumably ethanol) has been reported as 15 g/L. Chlorinated tap water decomposes rhodamine B. Rhodamine B solutions adsorb to plastics and should be kept in glass. Rhodamine B is tunable around 610 nm when used as a laser dye. Its luminescence quantum yield is 0.65 in basic ethanol, 0.49
https://en.wikipedia.org/wiki/SigmaTel
SigmaTel, Inc., was an American system-on-a-chip (SoC), electronics and software company headquartered in Austin, Texas. It designed AV media player/recorder SoCs, reference circuit boards, SoC software development kits built around a custom cooperative kernel together with all SoC device drivers (including USB mass storage and the AV decoder DSP), media player/recorder applications, and controller chips for multifunction peripherals. SigmaTel became Austin's largest IPO as of 2003 when it became publicly traded on NASDAQ. The company was driven by a talented mix of electrical and computer engineers plus other professionals with semiconductor industry experience in Silicon Hills, the number-two IC design region in the United States after Silicon Valley. SigmaTel (trading symbol SGTL) was acquired by Freescale Semiconductor in 2008 and delisted from NASDAQ. History In the 1990s and early 2000s SigmaTel produced audio codecs which went into the majority of PC sound cards. Creative's Sound Blaster used mainly SigmaTel and ADI codecs. This expanded to onboard audio for computer motherboards and MP3 players. In 2004, SigmaTel SoCs were found in over 70% of all flash-memory-based MP3 devices sold in the global market. However, SigmaTel lost its last iPod socket in 2006 when its chip was not included in the next-generation iPod Shuffle. PortalPlayer was the largest competitor, but was bought by Nvidia after PortalPlayer's chips lost their socket in the iPod. SigmaTel was voted "Best Place to Work in Austin 2005" by the Austin Chronicle. In July 2005, SigmaTel acquired the rights to different software technologies sold by Digital Networks North America (a subsidiary of D&M Holdings, and owner of Rio Audio). On July 25, 2006, Integrated Device Technology, Inc. (IDT) announced its acquisition of SigmaTel, Inc.'s AC'97 and High Definition Audio (HD-Audio) PC and notebook audio codec product lines for approximately $72 million in cash, and the acquisition of SigmaTel's intellectual property and e
https://en.wikipedia.org/wiki/Ethmoid%20sinus
The ethmoid sinuses or ethmoid air cells of the ethmoid bone are one of the four paired paranasal sinuses. Unlike the other three pairs of paranasal sinuses, which consist of one or two large cavities, the ethmoidal sinuses comprise a number of small air-filled cavities ("air cells"). The cells are located within the lateral mass (labyrinth) of each ethmoid bone and are variable in both size and number. The cells are grouped into anterior, middle, and posterior groups; the groups differ in their drainage modalities, though all ultimately drain into either the superior or the middle nasal meatus of the lateral wall of the nasal cavity. Structure The ethmoid air cells consist of numerous thin-walled cavities in the ethmoidal labyrinth that represent invaginations of the mucous membrane of the nasal wall into the ethmoid bone. They are situated between the superior parts of the nasal cavities and the orbits, and are separated from these cavities by thin bony lamellae. There are 5–15 air cells in either ethmoid bone in the adult, with a combined volume of 2–3 mL. Drainage The anterior ethmoidal cells drain (directly or indirectly) into the middle nasal meatus by way of the ethmoidal infundibulum. The middle ethmoidal cells drain directly into the middle nasal meatus. The posterior ethmoidal cells drain directly into the superior nasal meatus at the sphenoethmoidal recess; sometimes, one or more opens into the sphenoidal sinus. Haller cells Haller cells are infraorbital ethmoidal air cells lateral to the lamina papyracea. These may arise from the anterior or posterior ethmoidal sinuses. Lamellae The ethmoidal labyrinth is divided by multiple obliquely oriented, parallel lamellae. The first lamella corresponds to the uncinate process of the ethmoid bone, the second to the ethmoid bulla, the third is the basal lamella, and the fourth corresponds to the superior nasal concha. The anterior and posterior ethmoid cells are separated by the basal lamella
https://en.wikipedia.org/wiki/Globar
A Globar is used as a thermal light source for infrared spectroscopy. The preferred material for making Globars is silicon carbide, shaped into rods or arches of various sizes. When inserted into a circuit that provides it with electric current, it emits radiation from ~ 2 to 50 micrometres wavelength via the Joule heating phenomenon. Globars are used as infrared sources for spectroscopy because their spectral behavior corresponds approximately to that of a Planck radiator (i.e. a black body). Alternative infrared sources are Nernst lamps, coils of chrome–nickel alloy or high-pressure mercury lamps. The technical term Globar is an English portmanteau word consisting of glow and bar. The term glowbar is sometimes used synonymously in English, although in the strict sense it is an incorrect spelling. The American Resistor Company in Milwaukee, Wisconsin, had the word and lettering Globar registered as a trademark (in a special decorative script font) with the United States Patent and Trademark Office on June 30, 1925 (registration number 0200201) and on October 18, 1927 (registration number 0234147). This registration was renewed for the third time in 1987, having been held by various companies over those 60 years. See also Nernst lamp List of light sources External links Viewgraphs about infrared beamlines and IR spectroscopy Advanced Light Source, Berkeley, CA, USA Introduction to the optical principles of IR spectroscopy, light sources Ralf Arnold (in German) Lighting Spectroscopy
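The claim that a Globar's spectral behavior approximates a Planck radiator can be illustrated numerically. The sketch below is my own illustration, not from the article: it evaluates Planck's law across the 2–50 µm band mentioned above, using an assumed operating temperature of 1300 K (a typical order of magnitude for SiC emitters; the article gives no figure).

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants: Planck, light speed, Boltzmann

def planck(wavelength_m: float, temp_k: float) -> float:
    """Planck's law: black-body spectral radiance per unit wavelength, W·sr^-1·m^-3."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C ** 2 / wavelength_m ** 5) / math.expm1(x)

T = 1300.0  # assumed operating temperature in kelvin (illustrative only)
band = [w / 10 * 1e-6 for w in range(20, 501)]  # 2.0 to 50.0 um in 0.1 um steps
peak = max(band, key=lambda w: planck(w, T))
print(round(peak * 1e6, 1))  # 2.2 -- matching Wien's displacement law, 2898 um*K / 1300 K
```

The peak falling near 2.2 µm, at the short end of the 2–50 µm emission band, is consistent with Wien's displacement law for a black body at this assumed temperature.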
https://en.wikipedia.org/wiki/Immunostimulant
Immunostimulants, also known as immunostimulators, are substances (drugs and nutrients) that stimulate the immune system, usually in a non-specific manner, by inducing activation or increasing activity of any of its components. One notable example is the granulocyte macrophage colony-stimulating factor. The goal of this stimulated immune response is usually to help the body mount a stronger immune response in order to improve outcomes in the case of an infection or cancer malignancy. There is also some evidence that immunostimulants may be useful to help decrease severe acute illness related to chronic obstructive pulmonary disease or acute infections in the lungs. Classification There are two main categories of immunostimulants: Specific immunostimulants provide antigenic specificity in immune response, such as vaccines or any antigen. Non-specific immunostimulants act irrespective of antigenic specificity, augmenting the immune response to other antigens or stimulating components of the immune system without antigenic specificity; examples are adjuvants and non-specific immunostimulators. Non-specific Many endogenous substances are non-specific immunostimulators. For example, female sex hormones are known to stimulate both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. Some publications point towards the effect of deoxycholic acid (DCA) as an immunostimulant of the non-specific immune system, activating its main actors, the macrophages. According to these publications, a sufficient amount of DCA in the human body corresponds to a good immune reaction of the non-specific immune system. Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists to be able to stimulate
https://en.wikipedia.org/wiki/Thyrse
A thyrse is a type of inflorescence in which the main axis grows indeterminately, and the subaxes (branches) have determinate growth. Gallery
https://en.wikipedia.org/wiki/Kloosterman%20sum
In mathematics, a Kloosterman sum is a particular kind of exponential sum. They are named for the Dutch mathematician Hendrik Kloosterman, who introduced them in 1926 when he adapted the Hardy–Littlewood circle method to tackle a problem involving positive definite diagonal quadratic forms in four as opposed to five or more variables, which he had dealt with in his dissertation in 1924. Let a, b, m be natural numbers. Then K(a, b; m) = Σ e^(2πi(ax + bx*)/m), the sum running over all x with 0 ≤ x ≤ m − 1 and gcd(x, m) = 1. Here x* is the inverse of x modulo m. Context The Kloosterman sums are a finite ring analogue of Bessel functions. They occur (for example) in the Fourier expansion of modular forms. There are applications to mean values involving the Riemann zeta function, primes in short intervals, primes in arithmetic progressions, the spectral theory of automorphic functions and related topics. Properties of the Kloosterman sums If a = 0 or b = 0 then the Kloosterman sum reduces to the Ramanujan sum. K(a, b; m) depends only on the residue classes of a and b modulo m. Furthermore K(a, b; m) = K(b, a; m), and K(ac, b; m) = K(a, bc; m) if gcd(c, m) = 1. Let m = m1 m2 with m1 and m2 coprime. Choose n1 and n2 such that n1 m1 ≡ 1 (mod m2) and n2 m2 ≡ 1 (mod m1). Then K(a, b; m) = K(n2 a, n2 b; m1) K(n1 a, n1 b; m2). This reduces the evaluation of Kloosterman sums to the case where m = p^k for a prime number p and an integer k ≥ 1. The value of K(a, b; m) is always an algebraic real number; indeed it is a real algebraic integer in the cyclotomic field generated by the m-th roots of unity. The Selberg identity K(a, b; m) = Σ_{d | gcd(a, b, m)} d · K(ab/d², 1; m/d) was stated by Atle Selberg and first proved by Kuznetsov using the spectral theory of modular forms. Nowadays elementary proofs of this identity are known. For p an odd prime, there is no known simple formula for K(a, b; p), and the Sato–Tate conjecture suggests that none exists. The lifting formulas below, however, are often as good as an explicit evaluation. There is also an important transformation formula involving the Jacobi symbol. Let m = p^k with p prime and assume k ≥ 2. Then K(a, b; m) can be evaluated in closed form. This formula was first found by Hans Salié, and many simple proofs of it are known.
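The defining sum is easy to evaluate numerically for small moduli. The sketch below is my own illustration (the function name is not from the article): it computes K(a, b; m) directly from the definition and checks two of the stated properties, the reduction to a Ramanujan sum when b = 0 and the symmetry K(a, b; m) = K(b, a; m).

```python
import cmath
from math import gcd

def kloosterman(a: int, b: int, m: int) -> complex:
    """Brute-force K(a, b; m): sum of e^(2*pi*i*(a*x + b*x*)/m) over units x mod m."""
    total = 0j
    for x in range(1, m):
        if gcd(x, m) == 1:
            x_inv = pow(x, -1, m)  # modular inverse x* (Python 3.8+)
            total += cmath.exp(2j * cmath.pi * (a * x + b * x_inv) / m)
    return total

# b = 0: reduces to the Ramanujan sum; K(1, 0; 5) = mu(5) = -1
print(round(kloosterman(1, 0, 5).real))  # -1
# Symmetry K(a, b; m) = K(b, a; m)
print(abs(kloosterman(2, 3, 11) - kloosterman(3, 2, 11)) < 1e-9)  # True
```

The sums come out real (up to floating-point noise), in line with the algebraicity property stated above.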
https://en.wikipedia.org/wiki/Kuiper%27s%20theorem
In mathematics, Kuiper's theorem (after Nicolaas Kuiper) is a result on the topology of operators on an infinite-dimensional, complex Hilbert space H. It states that the space GL(H) of invertible bounded endomorphisms of H is such that all maps from any finite complex Y to GL(H) are homotopic to a constant, for the norm topology on operators. A significant corollary, also referred to as Kuiper's theorem, is that this group is weakly contractible, i.e. all its homotopy groups are trivial. This result has important uses in topological K-theory. General topology of the general linear group For finite dimensional H, this group would be a complex general linear group and not at all contractible. In fact it is homotopy equivalent to its maximal compact subgroup, the unitary group U of H. The proof that the complex general linear group and unitary group have the same homotopy type is by the Gram–Schmidt process, or through the matrix polar decomposition, and carries over to the infinite-dimensional case of separable Hilbert space, basically because the space of upper triangular matrices is contractible as can be seen quite explicitly. The underlying phenomenon is that passing to infinitely many dimensions causes much of the topological complexity of the unitary groups to vanish; but see the section on Bott's unitary group, where the passage to infinity is more constrained, and the resulting group has non-trivial homotopy groups. Historical context and topology of spheres It is a surprising fact that the unit sphere, sometimes denoted S∞, in infinite-dimensional Hilbert space H is a contractible space, while no finite-dimensional spheres are contractible. This result, certainly known decades before Kuiper's, may have the status of mathematical folklore, but it is quite often cited. In fact more is true: S∞ is diffeomorphic to H, which is certainly contractible by its convexity. One consequence is that there are smooth counterexamples to an extension of the Brouwer fixed-
https://en.wikipedia.org/wiki/Bugonia
In the ancient Mediterranean region, bugonia or bougonia was a ritual based on the belief that bees were spontaneously (equivocally) generated from a cow's carcass, although it is possible that the ritual had more currency as a poetic and learned trope than as an actual practice. Description A detailed description of the bugonia process can be found in Byzantine Geoponica: Build a house, ten cubits high, with all the sides of equal dimensions, with one door, and four windows, one on each side; put an ox into it, thirty months old, very fat and fleshy; let a number of young men kill him by beating him violently with clubs, so as to mangle both flesh and bones, but taking care not to shed any blood; let all the orifices, mouth, eyes, nose etc. be stopped up with clean and fine linen, impregnated with pitch; let a quantity of thyme be strewed under the reclining animal, and then let windows and doors be closed and covered with a thick coating of clay, to prevent the access of air or wind. After three weeks have passed, let the house be opened, and let light and fresh air get access to it, except from the side from which the wind blows strongest. Eleven days afterwards, you will find the house full of bees, hanging together in clusters, and nothing left of the ox but horns, bones and hair. The story of Aristaeus was an archetype of this ritual, serving to instruct beekeepers on how to recover from the loss of their bees. By extension, it was thought that fumigation with cow dung was beneficial to the health of the hive. Variations The idea that wasps are born of the corpses of horses was often described alongside bugonia. And given that European wasps bear a passing resemblance to European bees, it may be possible that the myth arose out of a misreported or misunderstood observation of a natural event. Different variations are attested, such as simply burying the cow, or covering the corpse with mud or dung. Another variation states that use of the rumen alone is
https://en.wikipedia.org/wiki/Tensor%20product%20of%20modules
In mathematics, the tensor product of modules is a construction that allows arguments about bilinear maps (e.g. multiplication) to be carried out in terms of linear maps. The module construction is analogous to the construction of the tensor product of vector spaces, but can be carried out for a pair of modules over a commutative ring resulting in a third module, and also for a pair of a right-module and a left-module over any ring, with the result being an abelian group. Tensor products are important in areas of abstract algebra, homological algebra, algebraic topology, algebraic geometry, operator algebras and noncommutative geometry. The universal property of the tensor product of vector spaces extends to more general situations in abstract algebra. The tensor product of an algebra and a module can be used for extension of scalars. For a commutative ring, the tensor product of modules can be iterated to form the tensor algebra of a module, allowing one to define multiplication in the module in a universal way. Balanced product For a ring R, a right R-module M, a left R-module N, and an abelian group G, a map φ : M × N → G is said to be R-balanced, R-middle-linear or an R-balanced product if for all m, m′ in M, n, n′ in N, and r in R the following hold: φ(m + m′, n) = φ(m, n) + φ(m′, n) (Dl), φ(m, n + n′) = φ(m, n) + φ(m, n′) (Dr), and φ(m · r, n) = φ(m, r · n) (A). The set of all such balanced products over R from M × N to G is denoted by LR(M, N; G). If φ, ψ are balanced products, then each of the operations φ + ψ and −φ defined pointwise is a balanced product. This turns the set LR(M, N; G) into an abelian group. For M and N fixed, the map G ↦ LR(M, N; G) is a functor from the category of abelian groups to itself. The morphism part is given by mapping a group homomorphism g : G → G′ to the function φ ↦ g ∘ φ, which goes from LR(M, N; G) to LR(M, N; G′). Remarks Properties (Dl) and (Dr) express biadditivity of φ, which may be regarded as distributivity of φ over addition. Property (A) resembles some associative property of φ. Every ring R is an R-bimodule. So the ring multiplication (r, r′) ↦ r · r′ in R is an R-balanced product R × R → R. Definition For a ring R, a right R-module M, a left R-module N,
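The balanced-product axioms are concrete enough to check by machine. As a sketch (my own illustration, not from the article), the following verifies exhaustively that ordinary multiplication on Z/6Z, viewed as a map of Z-modules, satisfies (Dl), (Dr) and (A) for a range of integer scalars r:

```python
N = 6  # work in Z/6Z, viewed as a module over the ring of integers R = Z

def phi(m: int, n: int) -> int:
    """Candidate balanced product: ordinary multiplication mod 6."""
    return (m * n) % N

# Exhaustively verify (Dl), (Dr) and (A) over all module elements and sample scalars
ok = all(
    phi((m1 + m2) % N, n1) == (phi(m1, n1) + phi(m2, n1)) % N       # (Dl)
    and phi(m1, (n1 + n2) % N) == (phi(m1, n1) + phi(m1, n2)) % N   # (Dr)
    and phi((m1 * r) % N, n1) == phi(m1, (r * n1) % N)              # (A)
    for m1 in range(N) for m2 in range(N)
    for n1 in range(N) for n2 in range(N)
    for r in range(-6, 7)
)
print(ok)  # True
```

Of course a finite check over sample scalars is only an illustration of the axioms, not a proof; here the identities hold for all integers r by elementary algebra.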
https://en.wikipedia.org/wiki/Marek%20Karpinski
Marek Karpinski is a computer scientist and mathematician known for his research in the theory of algorithms and their applications, combinatorial optimization, computational complexity, and mathematical foundations. He is a recipient of several research prizes in the above areas. He is currently a Professor of Computer Science, and the Head of the Algorithms Group at the University of Bonn. He is also a member of Bonn International Graduate School in Mathematics BIGS and the Hausdorff Center for Mathematics. See also List of computer scientists List of mathematicians
https://en.wikipedia.org/wiki/Thue%27s%20lemma
In modular arithmetic, Thue's lemma roughly states that every modular integer may be represented by a "modular fraction" such that the numerator and the denominator have absolute values not greater than the square root of the modulus. More precisely, for every pair of integers (a, m) with m > 1, given two positive integers X and Y such that XY > m, there are two integers x and y such that x ≡ ay (mod m), |x| < X and 0 < y < Y. Usually, one takes X and Y equal to the smallest integer greater than the square root of m, but the general form is sometimes useful, and makes the uniqueness theorem (below) easier to state. The first known proof used a pigeonhole argument. The lemma can be used to prove Fermat's theorem on sums of two squares by taking m to be a prime p that is congruent to 1 modulo 4 and taking a to satisfy a² + 1 ≡ 0 (mod p). (Such an a is guaranteed for such p by Wilson's theorem.) Uniqueness In general, the solution whose existence is asserted by Thue's lemma is not unique: there are usually several solutions (x, y), provided that X and Y are not too small. Therefore, one may only hope for uniqueness for the rational number x/y, to which a is congruent modulo m if y and m are coprime. Nevertheless, this rational number need not be unique: for suitable choices of m, a, X and Y there are two distinct solutions. However, for X and Y small enough, if a solution exists, it is unique. More precisely, with the above notation, if 2XY ≤ m and (x1, y1), (x2, y2) are two solutions, then x1 y2 = x2 y1, so both solutions determine the same rational number x/y. This result is the basis for rational reconstruction, which allows using modular arithmetic for computing rational numbers for which one knows bounds for numerators and denominators. The proof is rather easy: by multiplying each congruence by the other solution's y and subtracting, one gets x1 y2 − x2 y1 ≡ 0 (mod m). The hypotheses imply that each term xi yj has an absolute value lower than XY ≤ m/2, and thus that the absolute value of their difference is lower than m. This implies that x1 y2 − x2 y1 = 0, hence the result. Computing solutions The original proof of Thue's lemma is not efficient, in the sense that it does not provi
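The pigeonhole proof suggests a direct search, which is enough for small moduli. Below is a sketch (the function name and search strategy are my own, not from the article) that finds a Thue pair with X = Y taken just above the square root of m, and then uses it to derive the two-square representation described in the Fermat application above:

```python
from math import isqrt

def thue(a: int, m: int):
    """Search for x, y with x = a*y (mod m), |x| < B and 0 < y < B, B just above sqrt(m)."""
    B = isqrt(m) + 1  # smallest integer greater than sqrt(m) when m is not a perfect square
    for y in range(1, B):
        r = (a * y) % m
        for x in (r, r - m):  # the two representatives of a*y mod m nearest zero
            if abs(x) < B:
                return x, y
    return None

# Fermat's two-square theorem: p = 13 = 1 (mod 4), and a = 5 satisfies a^2 = -1 (mod 13).
# Then x^2 + y^2 = (a^2 + 1) y^2 = 0 (mod p) and 0 < x^2 + y^2 < 2p, forcing x^2 + y^2 = p.
x, y = thue(5, 13)
print(x, y, x * x + y * y)  # -3 2 13
```

The brute-force loop is quadratic in the worst case; as the article notes, efficient methods (essentially the extended Euclidean algorithm) exist for computing solutions.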
https://en.wikipedia.org/wiki/Egress%20filtering
In computer networking, egress filtering is the practice of monitoring and potentially restricting the flow of information outbound from one network to another. Typically, it is information from a private TCP/IP computer network to the Internet that is controlled. TCP/IP packets that are being sent out of the internal network are examined via a router, firewall, or similar edge device. Packets that do not meet security policies are not allowed to leave – they are denied "egress". Egress filtering helps ensure that unauthorized or malicious traffic never leaves the internal network. In a corporate network, a typical recommendation is that all traffic except that emerging from a select set of servers be denied egress. Restrictions can further be made such that only select protocols such as HTTP, email, and DNS are allowed. User workstations would then need to be configured either manually or via proxy auto-config to use one of the allowed servers as a proxy. Corporate networks also typically have a limited number of internal address blocks in use. An edge device at the boundary between the internal corporate network and external networks (such as the Internet) is used to perform egress checks against packets leaving the internal network, verifying that the source IP address in all outbound packets is within the range of allocated internal address blocks. Egress filtering may require policy changes and administrative work whenever a new application requires external network access. For this reason, egress filtering is an uncommon feature on consumer and very small business networks. See also Content-control software Ingress filtering Web Proxy Autodiscovery Protocol
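The source-address check described above can be sketched in a few lines. This is a simplified illustration (the address blocks are assumed example values, and a real edge device performs this test per packet in the forwarding path, not in application code):

```python
import ipaddress

# Internal address blocks allocated to the corporate network (illustrative values)
INTERNAL_BLOCKS = [ipaddress.ip_network("10.0.0.0/8"),
                   ipaddress.ip_network("192.168.0.0/16")]

def may_egress(src_ip: str) -> bool:
    """Egress check: permit an outbound packet only if its source address is internal.
    Packets failing the check are likely spoofed or misrouted and are denied egress."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in block for block in INTERNAL_BLOCKS)

print(may_egress("10.1.2.3"))     # True  - legitimate internal source
print(may_egress("203.0.113.7"))  # False - external/spoofed source, deny egress
```

In practice the same policy would be expressed as firewall rules on the edge device; the point of the sketch is only the membership test against the allocated internal blocks.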
https://en.wikipedia.org/wiki/Transfer%20factor
Transfer factors are small immune messenger molecules that are produced by all higher organisms. Transfer factors were originally described as immune molecules derived from blood or spleen cells that cause antigen-specific cell-mediated immunity, primarily delayed hypersensitivity and the production of lymphokines, as well as binding to the antigens themselves. They have a molecular weight of approximately 5000 daltons and are composed entirely of amino acids. Transfer factors were discovered by Henry Sherwood Lawrence in 1954. A second use of the term transfer factor applies to a likely different entity derived from cow colostrum or chicken egg yolk which is marketed as an oral dietary supplement under the same name, citing claims of benefit to the immune system. History In 1942, Merrill Chase discovered that cells taken from the peritoneum of guinea pigs that had been immunized against an antigen could transfer immunity when injected into guinea pigs that had never been exposed to the antigen; this phenomenon was the discovery of cell-mediated immunity. Subsequent research attempted to uncover how the cells imparted their effects. Henry Sherwood Lawrence, in 1955, discovered that partial immunity could be transferred even when the immune cells had undergone lysis, indicating that cells did not need to be fully intact in order to produce immune effects. Lawrence went on to discover that only the factors smaller than 8000 daltons were required to transfer this immunity; he termed these "transfer factors". The history of cellular derived transfer factor as a treatment effectively ended in the early 1980s. While the research world was initially excited by the discovery of Dr. Lawrence and the possibility that a small molecule could affect the immune system, the concept that small molecules could have such a profound biologic effect had not been proven. Despite several successes in using transfer factor to treat human disease and uncover immun
https://en.wikipedia.org/wiki/Propylparaben
Propylparaben, the n-propyl ester of p-hydroxybenzoic acid, occurs as a natural substance found in many plants and some insects, although it is manufactured synthetically for use in cosmetics, pharmaceuticals, and foods. It is a member of the class of parabens. It is a preservative typically found in many water-based cosmetics, such as creams, lotions, shampoos, and bath products. As a food additive, it has the E number E216. Sodium propyl p-hydroxybenzoate, the sodium salt of propylparaben, a compound with formula Na(C3H7(C6H4COO)O), is also used similarly as a food additive and as an anti-fungal preservation agent. Its E number is E217. In 2010, the European Union Scientific Committee on Consumer Safety stated that it considered the use of butylparaben and propylparaben as preservatives in finished cosmetic products as safe to the consumer, as long as the sum of their concentrations does not exceed 0.19%. Uses Propylparaben has antifungal and antimicrobial properties and is typically used in a variety of water-based cosmetics and personal-care products. It is also used as a food additive, designated with the E number E216. Propylparaben is commonly used as a preservative in packaged baked goods, particularly pastries and tortillas. Propylparaben is also a standardized chemical allergen and is used in allergenic testing. As of May 2023, New York began considering banning the use of propylparaben because studies in humans and animals indicate that it acts as an endocrine disrupter and affects male and female reproductive health. In September 2023, California passed legislation outlawing the use of propylparaben in foods by 2027. The new law bans the manufacture, sale and distribution of propylparaben and three other additives. This is the first law in the U.S. to ban it and will possibly have nationwide effects.
https://en.wikipedia.org/wiki/Glucocorticoid%20receptor
The glucocorticoid receptor (GR or GCR), also known as NR3C1 (nuclear receptor subfamily 3, group C, member 1), is the receptor to which cortisol and other glucocorticoids bind. The GR is expressed in almost every cell in the body and regulates genes controlling development, metabolism, and the immune response. Because the receptor gene is expressed in several forms, it has many different (pleiotropic) effects in different parts of the body. When glucocorticoids bind to GR, its primary mechanism of action is the regulation of gene transcription. The unbound receptor resides in the cytosol of the cell. After the receptor is bound to glucocorticoid, the receptor-glucocorticoid complex can take either of two paths. The activated GR complex up-regulates the expression of anti-inflammatory proteins in the nucleus or represses the expression of pro-inflammatory proteins in the cytosol (by preventing the translocation of other transcription factors from the cytosol into the nucleus). In humans, the GR protein is encoded by the NR3C1 gene, which is located on chromosome 5 (5q31). Structure Like the other steroid receptors, the glucocorticoid receptor is modular in structure and contains the following domains (labeled A–F): A/B - N-terminal regulatory domain C - DNA-binding domain (DBD) D - hinge region E - ligand-binding domain (LBD) F - C-terminal domain Ligand binding and response In the absence of hormone, the glucocorticoid receptor (GR) resides in the cytosol complexed with a variety of proteins including heat shock protein 90 (hsp90), the heat shock protein 70 (hsp70) and the protein FKBP4 (FK506-binding protein 4). The endogenous glucocorticoid hormone cortisol diffuses through the cell membrane into the cytoplasm and binds to the glucocorticoid receptor (GR) resulting in release of the heat shock proteins. The resulting activated form of GR has two principal mechanisms of action, transactivation and transrepression, described below. Transactivation A direct
https://en.wikipedia.org/wiki/Crosstalk
In electronics, crosstalk is any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel. Crosstalk is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit or channel to another. Crosstalk is a significant issue in structured cabling, audio electronics, integrated circuit design, wireless communication and other communications systems. Mechanisms Every electrical signal is associated with a varying field, whether electric, magnetic or traveling. Where these fields overlap, they interfere with each other's signals. This electromagnetic interference creates crosstalk. For example, if two wires next to each other carry different signals, the currents in them will create magnetic fields that will induce a smaller signal in the neighboring wire. In electrical circuits sharing a common signal return path, electrical impedance in the return path creates common impedance coupling between the signals, resulting in crosstalk. In cabling In structured cabling, crosstalk refers to electromagnetic interference from one unshielded twisted pair to another twisted pair, normally running in parallel. Signals traveling through adjacent pairs of wire create magnetic fields that interact with each other, inducing interference in the neighboring pair. The pair causing the interference is called the disturbing pair, while the pair experiencing the interference is the disturbed pair. Near-end crosstalk (NEXT) NEXT is a measure of the ability of a cable to reject crosstalk, so the higher the NEXT value, the greater the rejection of crosstalk at the local connection. It is referred to as near end because the interference between the two signals in the cable is measured at the same end of the cable as the interfering transmitter. The NEXT value for a given cable type is generally expressed in decibels per foot or decibels per 1000 feet and varies with the frequency of transmission. General specif
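Since NEXT is a logarithmic ratio, a small helper makes the decibel arithmetic concrete. This is an illustration with assumed example voltages, not values from the article:

```python
import math

def next_db(transmit_v: float, coupled_v: float) -> float:
    """Near-end crosstalk in decibels: voltage ratio of the transmitted signal
    on the disturbing pair to the signal it induces on the disturbed pair."""
    return 20 * math.log10(transmit_v / coupled_v)

# A 1 V test signal inducing 1 mV on the neighboring pair gives 60 dB of rejection;
# a larger NEXT value means better rejection of crosstalk.
print(next_db(1.0, 0.001))  # 60.0
```

In real cable qualification the measurement is repeated across the frequency band of interest, since, as noted above, NEXT varies with the frequency of transmission.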
https://en.wikipedia.org/wiki/Crosstalk%20%28biology%29
Biological crosstalk refers to instances in which one or more components of one signal transduction pathway affects another. This can be achieved through a number of ways with the most common form being crosstalk between proteins of signaling cascades. In these signal transduction pathways, there are often shared components that can interact with either pathway. A more complex instance of crosstalk can be observed with transmembrane crosstalk between the extracellular matrix (ECM) and the cytoskeleton. Crosstalk between signalling pathways One example of crosstalk between proteins in a signalling pathway can be seen with cyclic adenosine monophosphate's (cAMP) role in regulating cell proliferation by interacting with the mitogen-activated protein (MAP) kinase pathway. cAMP is a compound synthesized in cells by adenylate cyclase in response to a variety of extracellular signals. cAMP primarily acts as an intracellular second messenger whose major intracellular receptor is the cAMP-dependent protein kinase (PKA) that acts through the phosphorylation of target proteins. The signal transduction pathway begins with ligand-receptor interactions extracellularly. This signal is then transduced through the membrane, stimulating adenylyl cyclase on the inner membrane surface to catalyze the conversion of ATP to cAMP. ERK, a participating protein in the MAPK signaling pathway, can be activated or inhibited by cAMP. cAMP can inhibit ERKs in a variety of ways, most of which involve the cAMP-dependent protein kinase (PKA) and the inhibition of Ras-dependent signals to Raf-1. However, cAMP can also stimulate cell proliferation by stimulating ERKs. This occurs through the induction of specific genes via phosphorylation of the transcription factor CREB by PKA. Though ERKs do not appear to be a requirement for this phosphorylation of CREB, the MAPK pathway does play into crosstalk again, as ERKs are required to phosphorylate proteins downstream of CREB. Other known examples of the
https://en.wikipedia.org/wiki/Sony%20HDVS
Sony HDVS is a range of high-definition video equipment developed in the 1980s to support an early analog high-definition television system (used in multiple sub-Nyquist sampling encoding (MUSE) broadcasts) that was, at the time, expected to become the broadcast television system in use today. The line included professional video cameras, video monitors and linear video editing systems. History Sony first demonstrated a wideband analog video HDTV capable video camera, monitor and video tape recorder (VTR) in April 1981 at an international meeting of television engineers in Algiers, Algeria. The HDVS range was launched in April 1984, with the HDC-100 camera, which was the world's first commercially available HDTV camera, and the HDV-1000 video recorder, with its companion HDT-1000 processor/TBC, and HDS-1000 video switcher, all working in the 1125-line component video format with interlaced video and a 5:3 aspect ratio. The first system consisting of a monitor, camera and VTR was sold by Sony in 1985 for $1.5 million, and the first HDTV production studio, Captain Video, was opened in Paris. The helical scan VTR (the HDV-100) used magnetic tape similar to 1" type C videotape for analog recording. Sony in 1988 unveiled a new HDVS digital line, including a reel-to-reel digital recording VTR (the HDD-1000) that used digital signals between the machines for dubbing, but the primary I/O remained analog signals. The Sony HDVS HDC-300 camera was also introduced. The large HDD-1000 unit was housed in a 1-inch reel-to-reel transport, and because of the high tape speed needed, had a limit of one hour per reel. By this time, the aspect ratio of the system had been changed to 16:9. Sony, owner of Columbia Pictures/Tri-Star, would start to archive feature films on this format, requiring an average of two reels per movie. There was also a portable videocassette recorder (the HDV-10) for the HDVS system, using the "UniHi" format of videocassette using 1/2" wide tape. The tape housing is similar in a
https://en.wikipedia.org/wiki/Zolotarev%27s%20lemma
In number theory, Zolotarev's lemma states that the Legendre symbol (a|p) for an integer a modulo an odd prime number p, where p does not divide a, can be computed as the sign of a permutation: (a|p) = ε(πa), where ε denotes the signature of a permutation and πa is the permutation of the nonzero residue classes mod p induced by multiplication by a. For example, take a = 2 and p = 7. The nonzero squares mod 7 are 1, 2, and 4, so (2|7) = 1 and (6|7) = −1. Multiplication by 2 on the nonzero numbers mod 7 has the cycle decomposition (1,2,4)(3,6,5), so the sign of this permutation is 1, which is (2|7). Multiplication by 6 on the nonzero numbers mod 7 has cycle decomposition (1,6)(2,5)(3,4), whose sign is −1, which is (6|7). Proof In general, for any finite group G of order n, it is straightforward to determine the signature of the permutation πg made by left-multiplication by the element g of G. The permutation πg will be even, unless there are an odd number of orbits of even size. Assuming n even, therefore, the condition for πg to be an odd permutation, when g has order k, is that n/k should be odd, or that the subgroup <g> generated by g should have odd index. We will apply this to the group of nonzero numbers mod p, which is a cyclic group of order p − 1. The jth power of a primitive root modulo p will have index i = gcd(j, p − 1). The condition for a nonzero number mod p to be a quadratic non-residue is to be an odd power of a primitive root. The lemma therefore comes down to saying that i is odd when j is odd, which is true a fortiori, and j is odd when i is odd, which is true because p − 1 is even (p is odd). Another proof Zolotarev's lemma can be deduced easily from Gauss's lemma and vice versa. The example (3|11), i.e. the Legendre symbol with a = 3 and p = 11, will illustrate how the proof goes. Start with the set {1, 2, . . . , p − 1} arranged as a matrix of two rows such that the sum of the two elements in any column is zero mod p, say: Apply the
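The lemma is easy to check computationally: compute the signature of πa from its cycle decomposition, as in the worked example above, and compare with the Legendre symbol obtained from Euler's criterion. This sketch is my own illustration (the function names are not from the article):

```python
def mult_sign(a: int, p: int) -> int:
    """Signature of the permutation x -> a*x mod p on {1, ..., p-1}."""
    seen = [False] * p
    sign = 1
    for start in range(1, p):
        if not seen[start]:
            x, length = start, 0
            while not seen[x]:       # walk one cycle of the permutation
                seen[x] = True
                x = a * x % p
                length += 1
            if length % 2 == 0:      # each even-length cycle flips the sign
                sign = -sign
    return sign

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a|p) via Euler's criterion a^((p-1)/2) mod p."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

print(mult_sign(2, 7), legendre(2, 7))  # 1 1
print(mult_sign(6, 7), legendre(6, 7))  # -1 -1
```

Running the comparison over all nonzero residues for several small primes confirms the lemma in every case.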
https://en.wikipedia.org/wiki/Theta%20correspondence
In mathematics, the theta correspondence or Howe correspondence is a mathematical relation between representations of two groups of a reductive dual pair. The local theta correspondence relates irreducible admissible representations over a local field, while the global theta correspondence relates irreducible automorphic representations over a global field. The theta correspondence was introduced by Roger Howe in . Its name arose due to its origin in André Weil's representation theoretical formulation of the theory of theta series in . The Shimura correspondence as constructed by Jean-Loup Waldspurger in and may be viewed as an instance of the theta correspondence. Statement Setup Let F be a local or a global field, not of characteristic 2. Let W be a symplectic vector space over F, and Sp(W) the symplectic group. Fix a reductive dual pair (G, H) in Sp(W). There is a classification of reductive dual pairs. Local theta correspondence F is now a local field. Fix a non-trivial additive character ψ of F. There exists a Weil representation of the metaplectic group Mp(W) associated to ψ, which we write as ω. Given the reductive dual pair (G, H) in Sp(W), one obtains a pair of commuting subgroups (G̃, H̃) in Mp(W) by pulling back the projection map from Mp(W) to Sp(W). The local theta correspondence is a 1-1 correspondence between certain irreducible admissible representations of G̃ and certain irreducible admissible representations of H̃, obtained by restricting the Weil representation ω of Mp(W) to the subgroup G̃·H̃. The correspondence was defined by Roger Howe in . The assertion that this is a 1-1 correspondence is called the Howe duality conjecture. Key properties of local theta correspondence include its compatibility with Bernstein-Zelevinsky induction and conservation relations concerning the first occurrence indices along Witt towers . Global theta correspondence Stephen Rallis showed a version of the global Howe duality conjecture for cuspidal automorphic representations over a global field, assuming the validity of the Howe dua
https://en.wikipedia.org/wiki/Multimedia%20Broadcast%20Multicast%20Service
Multimedia Broadcast Multicast Services (MBMS) is a point-to-multipoint interface specification for existing 3GPP cellular networks, which is designed to provide efficient delivery of broadcast and multicast services, both within a cell as well as within the core network. For broadcast transmission across multiple cells, it defines transmission via single-frequency network configurations. The specification is referred to as Evolved Multimedia Broadcast Multicast Services (eMBMS) when transmissions are delivered through an LTE (Long Term Evolution) network. eMBMS is also known as LTE Broadcast. Target applications include mobile TV and radio broadcasting, live streaming video services, as well as file delivery and emergency alerts. Questions remain whether the technology is an optimization tool for the operator or whether an operator can generate new revenues with it. Several studies have been published on the domain, identifying both cost savings and new revenues. Deployments In 2013, Verizon announced that it would launch eMBMS services in 2014, over its nationwide (United States) LTE networks. AT&T subsequently announced plans to use the 700 MHz Lower D and E Block licenses it acquired in 2011 from Qualcomm for an LTE Broadcast service. Several major operators worldwide have been lining up to deploy and test the technology, the frontrunners being Verizon in the United States, Kt and Reliance in Asia, and more recently EE and Vodafone in Europe. In January 2014, Korea’s Kt launched the first commercial LTE Broadcast service. The solution includes Kt’s internally developed eMBMS Bearer Service, and Samsung mobile devices fitted with the Expway Middleware as the eMBMS User Service. In February 2014, Verizon demonstrated the potential of LTE Broadcast during Super Bowl XLVIII, using Samsung Galaxy Note 3s fitted with Expway's eMBMS User Service. In July 2014, Nokia demonstrated the use of LTE Broadcast to replace traditional digital TV. This use case remains controversi
https://en.wikipedia.org/wiki/Dysphania%20botrys
Dysphania botrys (syn. Chenopodium botrys), the Jerusalem oak goosefoot, sticky goosefoot or feathered geranium, is a flowering plant in the genus Dysphania (the glandular goosefoots). It is native to the Mediterranean region. Jerusalem oak goosefoot was formerly classed in the genus Ambrosia, with the binomial name Ambrosia mexicana. It is naturalised in the United States and Mexico, the old species synonym deriving from the latter. Cultivation The plant has a strong scent, reminiscent of stock cubes, and can be used as a flavouring in cooking. It is cultivated as a hardy annual by gardeners.
https://en.wikipedia.org/wiki/Interstitium
The interstitium is a contiguous fluid-filled space existing between a structural barrier, such as a cell membrane or the skin, and internal structures, such as organs, including muscles and the circulatory system. The fluid in this space is called interstitial fluid, comprises water and solutes, and drains into the lymph system. The interstitial compartment is composed of connective and supporting tissues within the body – called the extracellular matrix – that are situated outside the blood and lymphatic vessels and the parenchyma of organs. Structure The non-fluid parts of the interstitium are predominantly collagen types I, III, and V, elastin, and glycosaminoglycans, such as hyaluronan and proteoglycans that are cross-linked to form a honeycomb-like reticulum. Such structural components exist both for the general interstitium of the body, and within individual organs, such as the myocardial interstitium of the heart, the renal interstitium of the kidney, and the pulmonary interstitium of the lung. The interstitium in the submucosae of visceral organs, the dermis, superficial fascia, and perivascular adventitia are fluid-filled spaces supported by a collagen bundle lattice. The fluid spaces communicate with draining lymph nodes though they do not have lining cells or structures of lymphatic channels. Functions The interstitial fluid is a reservoir and transportation system for nutrients and solutes distributing among organs, cells, and capillaries, for signaling molecules communicating between cells, and for antigens and cytokines participating in immune regulation. The composition and chemical properties of the interstitial fluid vary among organs and undergo changes in chemical composition during normal function, as well as during body growth, conditions of inflammation, and development of diseases, as in heart failure and chronic kidney disease. The total fluid volume of the interstitium during health is about 20% of body weight, but this space is dynamic
https://en.wikipedia.org/wiki/Potential%20isomorphism
In mathematical logic and in particular in model theory, a potential isomorphism is a collection of finite partial isomorphisms between two models which satisfies certain closure conditions. Existence of a potential isomorphism entails elementary equivalence; the converse is not generally true, but it holds for ω-saturated models. Definition A potential isomorphism between two models M and N is a non-empty collection F of finite partial isomorphisms between M and N which satisfies the following two properties: for all finite partial isomorphisms Z ∈ F and for all x ∈ M there is a y ∈ N such that Z ∪ {(x,y)} ∈ F for all finite partial isomorphisms Z ∈ F and for all y ∈ N there is an x ∈ M such that Z ∪ {(x,y)} ∈ F The Ehrenfeucht–Fraïssé game gives an exact characterisation of elementary equivalence, and potential isomorphism can be seen as an approximation of it. Another notion that is similar to potential isomorphism is that of local isomorphism.
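The two closure properties are back-and-forth extension conditions, and for finite structures they can be tested directly. The sketch below is a finite-depth approximation for structures with a single binary relation; all names and the depth-bounded search are my own illustrative choices (for genuinely infinite models this is only an approximation, not the full definition).

```python
from itertools import product

def is_partial_iso(MR, NR, pairs):
    """pairs: finite set of (m, n). Check that it is a finite partial
    isomorphism for one binary relation: injective in both coordinates and
    relation-preserving in both directions."""
    if len({m for m, _ in pairs}) != len(pairs) or len({n for _, n in pairs}) != len(pairs):
        return False
    return all(((m1, m2) in MR) == ((n1, n2) in NR)
               for (m1, n1), (m2, n2) in product(pairs, repeat=2))

def has_potential_isomorphism(M, MR, N, NR, depth):
    """Finite-depth approximation of the two closure (back-and-forth)
    properties, starting from the empty partial isomorphism."""
    def extend(pairs, d):
        if d == 0:
            return True
        forth = all(any(is_partial_iso(MR, NR, pairs | {(x, y)})
                        and extend(pairs | {(x, y)}, d - 1) for y in N) for x in M)
        back = all(any(is_partial_iso(MR, NR, pairs | {(x, y)})
                       and extend(pairs | {(x, y)}, d - 1) for x in M) for y in N)
        return forth and back
    return extend(frozenset(), depth)
```

For example, two directed 3-cycles pass the check at any depth, while a 3-cycle against a 3-element successor relation fails at depth 3, because an element with an outgoing edge must eventually map onto the endpoint that has none.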
https://en.wikipedia.org/wiki/Cerebral%20perfusion%20pressure
Cerebral perfusion pressure, or CPP, is the net pressure gradient causing cerebral blood flow to the brain (brain perfusion). It must be maintained within narrow limits because too little pressure could cause brain tissue to become ischemic (having inadequate blood flow), and too much could raise intracranial pressure (ICP). Definitions The cranium encloses a fixed-volume space that holds three components: blood, cerebrospinal fluid (CSF), and very soft tissue (the brain). While both the blood and CSF have poor compression capacity, the brain is easily compressible. Every increase of ICP can cause a change in tissue perfusion and an increase in stroke events. From resistance CPP can be defined as the pressure gradient causing cerebral blood flow (CBF), such that CPP = CBF × CVR, where: CVR is cerebrovascular resistance By intracranial pressure An alternative definition of CPP is: CPP = MAP − ICP, where: MAP is mean arterial pressure ICP is intracranial pressure JVP is jugular venous pressure This definition may be more appropriate if considering the circulatory system in the brain as a Starling resistor, where an external pressure (in this case, the intracranial pressure) causes decreased blood flow through the vessels. In this sense, more specifically, the cerebral perfusion pressure can be defined as either: CPP = MAP − ICP (if ICP is higher than JVP) or CPP = MAP − JVP (if JVP is higher than ICP). Physiologically, increased intracranial pressure (ICP) causes decreased blood perfusion of brain cells by mainly two mechanisms: Increased ICP constitutes an increased interstitial hydrostatic pressure that, in turn, causes a decreased driving force for capillary filtration from intracerebral blood vessels. Increased ICP compresses cerebral arteries, causing increased cerebrovascular resistance (CVR). Flow Cerebral blood flow ranges from a lower value in white matter to a higher value in grey matter. Autoregulation Under normal circumstances, with a MAP between 60 and 160 mmHg and an ICP of about 10 mmHg (CPP of 50-150 mmHg), sufficient blood flow can be maintained with a
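The Starling-resistor definition (subtract whichever downstream pressure is higher) can be sketched in a few lines; the function name and units convention are my own:

```python
def cerebral_perfusion_pressure(map_mmhg, icp_mmhg, jvp_mmhg):
    """CPP in mmHg under the Starling-resistor view:
    CPP = MAP - ICP if ICP > JVP, else CPP = MAP - JVP,
    i.e. the effective downstream pressure is max(ICP, JVP)."""
    return map_mmhg - max(icp_mmhg, jvp_mmhg)
```

For example, with a MAP of 90 mmHg and a normal ICP of 10 mmHg (JVP 5 mmHg), CPP is 80 mmHg, well inside the 50-150 mmHg autoregulation range quoted above.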
https://en.wikipedia.org/wiki/Weierstrass%E2%80%93Enneper%20parameterization
In mathematics, the Weierstrass–Enneper parameterization of minimal surfaces is a classical piece of differential geometry. Alfred Enneper and Karl Weierstrass studied minimal surfaces as far back as 1863. Let f and g be functions on either the entire complex plane or the unit disk, where g is meromorphic and f is analytic, such that wherever g has a pole of order m, f has a zero of order 2m (or equivalently, such that the product f g² is holomorphic), and let c₁, c₂, c₃ be constants. Then the surface with coordinates (x₁, x₂, x₃) is minimal, where the x_k are defined using the real part of a complex integral, as follows: x_k(ζ) = Re ∫₀^ζ φ_k(z) dz + c_k, with φ₁ = f(1 − g²)/2, φ₂ = if(1 + g²)/2, φ₃ = fg. The converse is also true: every nonplanar minimal surface defined over a simply connected domain can be given a parametrization of this type. For example, Enneper's surface has f(z) = 1, g(z) = z. Parametric surface of complex variables The Weierstrass–Enneper model defines a minimal surface on a complex domain. Treating the complex plane as the parameter space, the Jacobian matrix of the surface can be written as a column of complex entries, whose components are holomorphic functions of the complex parameter. The Jacobian represents the two orthogonal tangent vectors of the surface, and the surface normal is given by their normalized cross product. The Jacobian leads to a number of important properties; the proofs can be found in Sharma's essay: The Weierstrass representation always gives a minimal surface. The derivatives can be used to construct the first fundamental form matrix and the second fundamental form matrix. Finally, a point on the complex plane maps to a point on the minimal surface in ℝ³ by taking the real parts of the integrals above, with the same convention for all the minimal surfaces discussed in that essay except for Costa's minimal surface. Embedded minimal surfaces and examples The classical examples of embedded complete minimal surfaces in ℝ³ with finite topology include the plane, the catenoid, the helicoid, and Costa's minimal surface. Costa's surface involves Weierstrass's elliptic function ℘ together with a constant. Helicatenoid Choosing the functions f and g, a one parame
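The representation can be checked numerically. The sketch below integrates the Weierstrass data φ₁ = f(1 − g²)/2, φ₂ = if(1 + g²)/2, φ₃ = fg along a straight path and, for Enneper's data f = 1, g = z, the result agrees with the closed-form antiderivatives; the function name and the midpoint-rule integration are my own choices, and the constants c_k are taken to be 0.

```python
def weierstrass_enneper(f, g, zeta, n=2000):
    """Evaluate x_k(zeta) = Re integral_0^zeta phi_k(z) dz (taking c_k = 0)
    by midpoint-rule integration along the straight segment from 0 to zeta,
    with phi_1 = f(1 - g^2)/2, phi_2 = i f(1 + g^2)/2, phi_3 = f g."""
    acc = [0j, 0j, 0j]
    for i in range(n):
        z = (i + 0.5) / n * zeta          # midpoint of the i-th subsegment
        fz, gz = f(z), g(z)
        phi = (fz * (1 - gz**2) / 2, 1j * fz * (1 + gz**2) / 2, fz * gz)
        for k in range(3):
            acc[k] += phi[k] * zeta / n
    return tuple(a.real for a in acc)
```

With f = 1 and g = z the antiderivatives are (z − z³/3)/2, i(z + z³/3)/2, and z²/2, so the third coordinate of Enneper's surface is Re(ζ²)/2 = (u² − v²)/2 for ζ = u + iv.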
https://en.wikipedia.org/wiki/Alfred%20Enneper
Alfred Enneper (June 14, 1830, Barmen – March 24, 1885, Hanover) was a German mathematician. Enneper earned his PhD from the Georg-August-Universität Göttingen in 1856, under the supervision of Peter Gustav Lejeune Dirichlet, for his dissertation about functions with complex arguments. After his habilitation in 1859 in Göttingen, he was from 1870 on Professor (Extraordinarius) at Göttingen. He studied minimal surfaces and parametrized Enneper's minimal surfaces in 1864. Together with his contemporary Karl Weierstrass, he created a whole class of parameterizations, the Enneper–Weierstrass parameterization.
https://en.wikipedia.org/wiki/Hiroshi%20Ishii%20%28computer%20scientist%29
Hiroshi Ishii is a Japanese computer scientist. He is a professor at the Massachusetts Institute of Technology. Ishii pioneered the tangible user interface in the field of human–computer interaction with the paper "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", co-authored with his then PhD student Brygg Ullmer. Biography Ishii was born in Tokyo and raised in Sapporo. He received a B.E. in electronic engineering, and an M.E. and Ph.D. in computer engineering, from Hokkaido University in Sapporo, Japan. Ishii founded the Tangible Media Group and started its ongoing Tangible Bits project in 1995, when he joined the MIT Media Laboratory as a professor of Media Arts and Sciences. He relocated from Japan's NTT Human Interface Laboratories in Yokosuka, where he had made his mark in human–computer interaction (HCI) and computer-supported cooperative work (CSCW) in the early 1990s. Ishii was elected to the CHI Academy in 2006. He was named to the 2022 class of ACM Fellows, "for contributions to tangible user interfaces and to human-computer interaction". He currently teaches the class MAS.834 Tangible Interfaces at the Media Lab.
https://en.wikipedia.org/wiki/Advances%20in%20Physics
Advances in Physics is a bimonthly scientific journal published by Taylor & Francis that was established in 1952. The journal is also issued as a supplement to the Philosophical Magazine. Peer review is determined on a case-by-case basis. The editors-in-chief are Paolo Radaelli and Joerg Schmalian. The frequency of this publication varied from 1952 until 2007, when it became a bimonthly journal. Aims and scope The focus of the journal is critical reviews that are relevant to condensed matter physicists. Each review is intended to present the author's perspective. Readers are expected to have a fundamental knowledge of the subject. These reviews are sometimes complemented by a "Perspectives" section which publishes shorter articles that may be controversial, with the intention of stimulating debate. The intended audience consists of physicists, materials scientists, and physical chemists working in universities, industry, and research institutes. Broad, interdisciplinary coverage includes topics ranging from condensed matter physics, statistical mechanics, quantum information, cold atoms, and soft matter physics to biophysics. Impact factor and ranking The impact factor for Advances in Physics was 23.750 in 2021. Abstracting and indexing This journal is indexed in several databases. See also List of physics journals
https://en.wikipedia.org/wiki/Code%20page%20932%20%28IBM%29
IBM code page 932 (abbreviated as IBM-932 or ambiguously as CP932) is one of IBM's extensions of Shift JIS. The coded character sets are JIS X 0201:1976, JIS X 0208:1983, IBM extensions and IBM extensions for IBM 1880 UDC. It is the combination of the single-byte Code page 897 and the double-byte Code page 301. Code page 301 is designed to encode the same repertoire as IBM Japanese DBCS-Host. IBM-932 resembles IBM-943. One difference is that IBM-932 encodes the JIS X 0208:1983 characters but preserves the 1978 ordering, whereas IBM-943 uses the 1983 ordering (i.e. the character variant swaps made in JIS X 0208:1983). Another difference is that IBM-932 does not incorporate the NEC selected extensions, which IBM-943 includes for Microsoft compatibility. IBM-942 includes the same double-byte codes as IBM-932 (those from Code page 301) but includes additional single-byte extensions. International Components for Unicode treats "ibm-932" and "ibm-942" as aliases for the same decoder. IBM-932 contains 7-bit ISO 646 codes, and Japanese characters are indicated by the high bit of the first byte being set to 1. Some code points in this page require a second byte, so characters use either 8 or 16 bits for encoding. Layout See also LMBCS-16 Code page 942 Code page 943
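The "high bit of the first byte" rule above can be sketched as a byte-stream splitter; this is a deliberate simplification for illustration (the function name is mine, and real Shift JIS-family code pages have narrower lead/trail byte ranges, with the half-width katakana range 0xA1-0xDF being single-byte despite the high bit):

```python
def split_units(data: bytes):
    """Split a byte string into 1- and 2-byte units using the rule stated
    above: a byte with its high bit set starts a double-byte code.
    Simplified sketch only -- real Shift JIS-family lead and trail byte
    ranges are narrower, and half-width katakana (0xA1-0xDF) are
    single-byte despite having the high bit set."""
    units, i = [], 0
    while i < len(data):
        if data[i] & 0x80 and i + 1 < len(data):
            units.append(data[i:i + 2])   # double-byte (16-bit) code
            i += 2
        else:
            units.append(data[i:i + 1])   # single-byte ISO 646 code
            i += 1
    return units
```

ASCII bytes pass through one at a time, while high-bit lead bytes consume the following trail byte, so characters come out as either 8- or 16-bit units, as the text describes.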
https://en.wikipedia.org/wiki/Fix%20and%20Foxi
Fix und Foxi was a weekly German comics magazine created by Rolf Kauka, which ran uninterrupted from 1953 until 1994. Re-christened Fix & Foxi, it was relaunched as a monthly magazine in 2000, 2005 and 2010 respectively. Since the end of 2010, publication has once again ceased. During its heyday it was one of the most successful German comics magazines. History of the comic In 1947, Rolf Kauka founded Kauka Publishing. In May 1953, his first comic book, Till Eulenspiegel, appeared. The characters in the comic book were loosely based on German folklore. Apart from the legendary jokester as the main character, it featured an adaptation of Baron Munchausen as well as animal characters such as Reineke Fuchs (Reynard the Fox) and Isegrim Wolf (Isengrin). In issue number 6, the fox characters Fix and Foxi first appeared in a short comic story. They soon became the favorites of the readers and from issue number 29 the comic book was retitled Fix und Foxi. Apart from Germany, Kauka found experienced illustrators in Yugoslavia, Italy and Spain. Over the years, they created more than 80 different comic characters under his supervision (for the most important ones see list below). Kauka also published series from other countries, mainly France/Belgium, giving them their first big break in Germany and popularizing them. Fix und Foxi was published weekly in Germany with a circulation up to 400,000 per week in the magazine's heyday. Allegedly, a total of over 750 million comic books were sold internationally. Kauka's foxes also appeared in magazine spin-offs, pocket books and albums. In 1973, Kauka sold his publishing house and Fix und Foxi was published by others, though he retained creative control. Bauer/VPM eventually became the publisher. In 1994 the comic series was retooled by the publishers into an adolescent tabloid magazine and the publication was changed from weekly to monthly. At this point, Rolf Kauka withdrew the publishing rights and stopped all publication of t
https://en.wikipedia.org/wiki/Bauer%E2%80%93Fike%20theorem
In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalues of a complex-valued diagonalizable matrix. In its substance, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, what it says is that the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors. The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960. The setup In what follows we assume that: A is a diagonalizable matrix; V is the non-singular eigenvector matrix such that A = VΛV⁻¹, where Λ is a diagonal matrix. If X is invertible, its condition number in p-norm is denoted by κ_p(X) and defined by: κ_p(X) = ‖X‖_p ‖X⁻¹‖_p. The Bauer–Fike Theorem Bauer–Fike Theorem. Let μ be an eigenvalue of A + δA. Then there exists λ ∈ σ(A) such that: |λ − μ| ≤ κ_p(V) ‖δA‖_p. Proof. We can suppose μ ∉ σ(A), otherwise take λ = μ and the result is trivially true since κ_p(V) ≥ 1. Since μ is an eigenvalue of A + δA, we have det(A + δA − μI) = 0 and so 0 = det(V⁻¹) det(A + δA − μI) det(V) = det(Λ + V⁻¹δAV − μI). However our assumption, μ ∉ σ(A), implies that Λ − μI is invertible, and therefore we can write: 0 = det(Λ − μI) det((Λ − μI)⁻¹V⁻¹δAV + I). This reveals −1 to be an eigenvalue of (Λ − μI)⁻¹V⁻¹δAV. Since all p-norms are consistent matrix norms we have |α| ≤ ‖M‖_p where α is an eigenvalue of M. In this instance this gives us: 1 ≤ ‖(Λ − μI)⁻¹V⁻¹δAV‖_p ≤ ‖(Λ − μI)⁻¹‖_p κ_p(V) ‖δA‖_p. But (Λ − μI)⁻¹ is a diagonal matrix, the p-norm of which is easily computed: ‖(Λ − μI)⁻¹‖_p = max_{λ ∈ σ(A)} 1/|λ − μ| = 1/min_{λ ∈ σ(A)} |λ − μ|, whence: min_{λ ∈ σ(A)} |λ − μ| ≤ κ_p(V) ‖δA‖_p. An Alternate Formulation The theorem can also be reformulated to better suit numerical methods. In fact, dealing with real eigensystem problems, one often has an exact matrix A, but knows only an approximate eigenvalue-eigenvector couple, and needs to bound the error. The following version comes in help. Bauer–Fike Theorem (Alternate Formulation). Let (μ, v) be an approximate eigenvalue-eigenvector couple, and r = Av − μv. Then there exists λ ∈ σ(A) such that: |λ − μ| ≤ κ_p(V) ‖r‖_p / ‖v‖_p. Proof. We can suppose μ ∉ σ(A), otherwise take λ = μ and the result is trivially true since κ_p(V) ≥ 1. So (A − μI)⁻¹ exists, so we can write: v = (A − μI)⁻¹r = V(Λ − μI)⁻¹V⁻¹r since A is diagonalizable; taking the p-norm of both sides, we obtain: ‖v‖_p ≤ κ_p(V) ‖(Λ − μI)⁻¹‖_p ‖r‖_p. However (Λ − μI)⁻¹ is a diagonal matrix and its p-norm is easily computed: ‖(Λ − μI)⁻¹‖_p = 1/min_{λ ∈ σ(A)} |λ − μ|, whence: min_{λ ∈ σ(A)} |λ − μ| ≤ κ_p(V) ‖r‖_p / ‖v‖_p. A Relativ
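The bound can be illustrated numerically. The sketch below uses a hand-picked 2×2 diagonalizable matrix (so eigenvalues come from the quadratic formula, with no linear-algebra library needed) and the ∞-norm; the helper names and the specific matrices are my own choices.

```python
import cmath

def eig2(M):
    """Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the quadratic formula."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(tr * tr - 4 * det)
    return (tr + s) / 2, (tr - s) / 2

def norm_inf(M):
    """Infinity-norm (maximum absolute row sum) of a matrix."""
    return max(sum(abs(x) for x in row) for row in M)

# A = V diag(1, 2) V^-1 with V = [[1, 1], [0, 1]], so A = [[1, 1], [0, 2]]
V, V_inv = [[1, 1], [0, 1]], [[1, -1], [0, 1]]
A = [[1, 1], [0, 2]]
cond = norm_inf(V) * norm_inf(V_inv)   # kappa_inf(V) = 2 * 2 = 4

dA = [[0.01, -0.02], [0.03, 0.01]]
A_pert = [[A[i][j] + dA[i][j] for j in range(2)] for i in range(2)]

bound = cond * norm_inf(dA)            # Bauer-Fike bound: 4 * 0.04 = 0.16
for mu in eig2(A_pert):
    # each perturbed eigenvalue lies within `bound` of some exact eigenvalue (1 or 2)
    assert min(abs(mu - 1), abs(mu - 2)) <= bound
```

The exact eigenvalues are 1 and 2; the perturbed ones move by a few hundredths, comfortably inside the κ∞(V)‖δA‖∞ = 0.16 bound.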
https://en.wikipedia.org/wiki/Logical%20Methods%20in%20Computer%20Science
Logical Methods in Computer Science (LMCS) is a peer-reviewed open access scientific journal covering theoretical computer science and applied logic. It opened to submissions on September 1, 2004. The editor-in-chief is Stefan Milius (Friedrich-Alexander Universität Erlangen-Nürnberg). History The journal was initially published by the International Federation for Computational Logic, and then by a dedicated non-profit. It moved to a new publishing platform in 2017. The first editor-in-chief was Dana Scott. In its first year, the journal received 75 submissions. Abstracting and indexing The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Mathematical Reviews, Science Citation Index Expanded, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2016 impact factor of 0.661.
https://en.wikipedia.org/wiki/Isobutyl%20acetate
The chemical compound isobutyl acetate, also known as 2-methylpropyl ethanoate (IUPAC name) or β-methylpropyl acetate, is a common solvent. It is produced from the esterification of isobutanol with acetic acid. It is used as a solvent for lacquer and nitrocellulose. Like many esters it has a fruity or floral smell at low concentrations and occurs naturally in raspberries, pears and other plants. At higher concentrations the odor can be unpleasant and may cause symptoms of central nervous system depression such as nausea, dizziness and headache. A common method for preparing isobutyl acetate is Fischer esterification, where precursors isobutyl alcohol and acetic acid are heated in the presence of a strong acid. Isobutyl acetate has three isomers: n-butyl acetate, tert-butyl acetate, and sec-butyl acetate, which are also common solvents.
https://en.wikipedia.org/wiki/Sleep%20temple
Sleep temples (also known as dream temples or Egyptian sleep temples) are regarded by some as an early instance of hypnosis over 4000 years ago, under the influence of Imhotep. Imhotep served as Chancellor and as High Priest of the sun god Ra at Heliopolis. He was said to be a son of the ancient Egyptian demiurge Ptah, his mother being a mortal named Khredu-ankh. Sleep temples were hospitals of sorts, healing a variety of ailments, perhaps many of them psychological in nature. Patients were taken to an unlit chamber to sleep and be treated for their specific ailment. The treatment involved chanting, placing the patient into a trance-like or hypnotic state, and analysing their dreams in order to determine treatment. Meditation, fasting, baths, and sacrifices to the patron deity or other spirits were often involved as well. Sleep temples also existed in the Middle East and Ancient Greece. In Greece, they were built in honor of Asclepios, the Greek god of medicine, and were called Asclepieions. The Greek treatment was referred to as incubation and focused on prayers to Asclepios for healing. These sleep chambers were filled with snakes, a symbol of Asclepios. A similar Hebrew treatment was referred to as Kavanah and involved focusing on letters of the Hebrew alphabet spelling the name of their God. In 1928, Mortimer Wheeler unearthed a Roman sleep temple at Lydney Park, Gloucestershire, with the assistance of a young J.R.R. Tolkien.
https://en.wikipedia.org/wiki/Chai%20%28symbol%29
Chai or Hai ( "living" ) is a symbol that figures prominently in modern Jewish culture; the Hebrew letters of the word are often used as a visual symbol. History According to The Jewish Daily Forward, its use as an amulet originates in 18th century Eastern Europe. Chai as a symbol goes back to medieval Spain. Letters as symbols in Jewish culture go back to the earliest Jewish roots, the Talmud states that the world was created from Hebrew letters which form verses of the Torah. In medieval Kabbalah, Chai is the lowest (closest to the physical plane) emanation of God. According to 16th century Greek rabbi Shlomo Hacohen Soloniki, in his commentary on the Zohar, Chai as a symbol has its linkage in the Kabbalah texts to God's attribute of 'Ratzon', or motivation, will, muse. The Jewish commentaries give an especially long treatment to certain verses in the Torah with the word as their central theme. Three examples are Leviticus 18:5 'Chai Bahem', 'and you shall live by [this faith]' (as opposed to just doing it), this is part of the section dealing with the legacy of Moses after his death. Deuteronomy 30:15 "Verily, I have set before thee this day life and good, and death and evil, in that I command thee this day to love the thy God, to walk in His ways, and to keep His commandments and His statutes and His ordinances; then thou shalt live." There is nary an ancient Jewish commentator who does not comment on that verse. The Shema prayer as well speaks of the importance of Chai, to live and walk in the Jewish cultural lifestyle. Two common Jewish names used since Talmudic times, are based on this symbol, Chaya feminine, Chayim masculine. The Jewish toast (on alcoholic beverages such as wine) is l'chaim, 'to life'. Linguistics The word is made up of two letters of the Hebrew alphabet – Chet () and Yod (), forming the word "chai", meaning "alive", or "living". The most common spelling in Latin script is "Chai", but the word is occasionally also spelled "Hai". The u
https://en.wikipedia.org/wiki/ICFP%20Programming%20Contest
The ICFP Programming Contest is an international programming competition held annually around June or July since 1998, with results announced at the International Conference on Functional Programming. Teams may be of any size and any programming language(s) may be used. There is also no entry fee. Participants have 72 hours to complete and submit their entry over the Internet. There is often also a 24-hour lightning division. The winners reserve "bragging rights" to claim that their language is "the programming tool of choice for discriminating hackers". As such, one of the competition's goals is to showcase the capabilities of the contestants' favorite programming languages and tools. Previous first prize winners have used Haskell, OCaml, C++, Cilk, Java, F#, and Rust. The contests usually have around 300 submitted entries. Past tasks Prizes Prizes have a modest cash value, primarily aimed at helping the winners to attend the conference, where the prizes are awarded and the judges make the following declarations: First prize [Language 1] is the programming tool of choice for discriminating hackers. Second prize [Language 2] is a fine programming tool for many applications. Third prize [Language 3] is also not too shabby. Winner of the lightning division [Language L] is very suitable for rapid prototyping. Judges' prize [Team X] are an extremely cool bunch of hackers. Where a winning entry involves several languages, the winners are asked to nominate one or two. The languages named in the judges' declarations have been: See also International Collegiate Programming Contest (ICPC) Online judge
https://en.wikipedia.org/wiki/Architecture%20Analysis%20%26%20Design%20Language
The Architecture Analysis & Design Language (AADL) is an architecture description language standardized by SAE. AADL was first developed in the field of avionics, and was known formerly as the Avionics Architecture Description Language. The Architecture Analysis & Design Language is derived from MetaH, an architecture description language made by the Advanced Technology Center of Honeywell. AADL is used to model the software and hardware architecture of an embedded, real-time system. Due to its emphasis on the embedded domain, AADL contains constructs for modeling both software and hardware components (with the hardware components named "execution platform" components within the standard). This architecture model can then be used as design documentation, for analyses (such as schedulability and flow control), or for code generation (of the software portion), much like UML. AADL ecosystem AADL is defined by a core language that defines a single notation for both system and software aspects. Having a single model eases analysis by having only one representation of the system. The language specifies system-specific characteristics using properties. The language can be extended with the following methods: user-defined properties: users can extend the set of applicable properties and add their own to specify their own requirements language annexes: the core language is enhanced by annex languages that enrich the architecture description. For now, the following annexes have been defined. Behavior annex: adds component behavior with state machines Error-model annex: specifies fault and propagation concerns ARINC653 annex: defines modelling patterns for modelling avionics systems Data-Model annex: describes the modelling of specific data constraints with AADL AADL tools AADL is supported by a wide range of tools: MASIW - an open source Eclipse-based IDE for development and analysis of AADL models, developed by ISP RAS. OSATE includes
https://en.wikipedia.org/wiki/Google%20Base
Google Base was a database provided by Google into which any user can add almost any type of content, such as text, images, and structured information in formats such as XML, PDF, Excel, RTF, or WordPerfect. As of September 2010, the product has been downgraded to Google Merchant Center. If Google found user-added content relevant, submitted content appeared on its shopping search engine, Google Maps or even the web search. The piece of content could then be labeled with attributes like the ingredients for a recipe or the camera model for stock photography. Because information about the service was leaked before public release, it generated much interest in the information technology community prior to release. Google subsequently responded on their blog with an official statement: "You may have seen stories today reporting on a new product that we're testing, and speculating about our plans. Here's what's really going on. We are testing a new way for content owners to submit their content to Google, which we hope will complement existing methods such as our web crawl and Google Sitemaps. We think it's an exciting product, and we'll let you know when there's more news." Files could be uploaded to the Google Base servers by browsing your computer or the web, by various FTP methods, or by API coding. Online tools were provided to view the number of downloads of the user's files, and other performance measures. On December 17, 2010, it was announced that Google Base's API is deprecated in favor of a set of new APIs known as Google Shopping APIs. See also List of Google services and tools Resources of a Resource – ROR Base Feeder – Software to create bulk submission Google Base Feeds External links Google Base About Google Base Official Google Base Blog Official Google Blog Press Release Google Base API Mashups
https://en.wikipedia.org/wiki/Propyl%20acetate
Propyl acetate, also known as propyl ethanoate, is an organic compound. Nearly 20,000 tons are produced annually for use as a solvent. This colorless liquid is known for its characteristic odor of pears; because of this, it is commonly used in fragrances and as a flavor additive. It is formed by the esterification of acetic acid and propan-1-ol, often via Fischer–Speier esterification, with sulfuric acid as a catalyst and water produced as a byproduct.
https://en.wikipedia.org/wiki/Disease%20mongering
Disease mongering is a pejorative term for the practice of widening the diagnostic boundaries of illnesses and aggressively promoting their public awareness in order to expand the markets for treatment. Among the entities benefiting from selling and delivering treatments are pharmaceutical companies, physicians, alternative practitioners and other professional or consumer organizations. It is distinct from the promulgation of bogus or unrecognised diagnoses.

Term

The term "monger" has ancient roots, providing the basis for many common compound forms such as cheesemonger, fishmonger, and fleshmonger, for those who peddle such wares respectively. "Disease mongering" as a label for the "invention" or promotion of diseases in order to capitalize on their treatment was first used in 1992 by health writer Lynn Payer, who applied it to the Listerine mouthwash campaign against halitosis (bad breath). Payer defined disease mongering as a set of practices which include the following:

Stating that normal human experiences are abnormal and in need of treatment
Claiming to recognize suffering which is not present
Defining a disease such that a large number of people have it
Defining a disease's cause as some ambiguous deficiency or hormonal imbalance
Associating a disease with a public relations spin campaign
Directing the framing of public discussion of a disease
Intentionally misusing statistics to exaggerate treatment benefits
Setting a dubious clinical endpoint in research
Advertising a treatment as being free of side effects
Advertising a common symptom as a serious disease

The incidence of conditions not previously defined as illness being medicalised as "diseases" is difficult to assess scientifically, owing to the inherently social and political nature of the definition of what constitutes a disease, and of what aspects of the human condition should be managed according to a medical model. For example, halitosis, the condition which prompted Payer to coin the phrase "d
https://en.wikipedia.org/wiki/Facilitated%20variation
The theory of facilitated variation demonstrates how seemingly complex biological systems can arise through a limited number of regulatory genetic changes, through the differential re-use of pre-existing developmental components. The theory was presented in 2005 by Marc W. Kirschner (a professor and chair at the Department of Systems Biology, Harvard Medical School) and John C. Gerhart (a professor at the Graduate School, University of California, Berkeley). The theory of facilitated variation addresses the nature and function of phenotypic variation in evolution. Recent advances in cellular and evolutionary developmental biology shed light on a number of mechanisms for generating novelty. Most anatomical and physiological traits that have evolved since the Cambrian are, according to Kirschner and Gerhart, the result of regulatory changes in the usage of various conserved core components that function in development and physiology. Novel traits arise as novel packages of modular core components, which requires modest genetic change in regulatory elements. The modularity and adaptability of developmental systems reduces the number of regulatory changes needed to generate adaptive phenotypic variation, increases the probability that genetic mutation will be viable, and allows organisms to respond flexibly to novel environments. In this manner, the conserved core processes facilitate the generation of adaptive phenotypic variation, which natural selection subsequently propagates. Description of the theory The theory of facilitated variation consists of several elements. Organisms are built from a set of highly conserved modules called "core processes" that function in development and physiology, and have remained largely unchanged for millions (in some instances billions) of years. Genetic mutation leads to regulatory changes in the package of core components (i.e. new combinations, amounts, and functional states of those components) exhibited by an organism. Finall
https://en.wikipedia.org/wiki/Patent%20Blue%20V
Patent Blue V, also called Food Blue 5, Sulphan Blue, Acid Blue 3, L-Blau 3, C-Blau 20, Patentblau V, Sky Blue, or C.I. 42051, is a sky blue synthetic triphenylmethane dye used as a food coloring. As a food additive, it has E number E131. It is a sodium or calcium salt of [4-(α-(4-diethylaminophenyl)-5-hydroxy-2,4-disulfophenylmethylidene)-2,5-cyclohexadien-1-ylidene] diethylammonium hydroxide inner salt. Use as dye It is not widely used, but in Europe it can be found in Scotch eggs, certain jelly sweets, blue Curaçao, and certain gelatin desserts, among other products. An important advantage is the very deep color it produces even at low concentration; a disadvantage is that it fades fairly quickly when exposed to light. In medicine, Patent Blue V is used in lymphangiography and sentinel node biopsy as a dye to color lymph vessels. It is also used in dental disclosing tablets as a stain to show dental plaque on teeth. The color of the dye is pH-dependent. In aqueous solution, its color varies from a deep blue in alkaline or weakly acidic medium to a yellow–orange in stronger acidic conditions. It is useful as a pH indicator for the range 0.8–3.0. The structure is also redox-sensitive, changing from a reduced yellow form to an oxidized red form in solution. The reduction potential of around 0.77 volts is similar to that of other triphenylmethane dyes. It is usable as a reversible redox indicator in some analytical methods. Because of its pH-dependent color, Patent Blue V was included in chemistry sets from Salter Science in the 1970s and 80s under the name Sky Blue. Regulation Patent Blue V is banned as a food dye in Australia and the US, because health officials in these countries suspect that it may cause allergic reactions, with symptoms ranging from itching and nettle rash to nausea, hypotension, and in rare cases anaphylactic shock; it is therefore not recommended in those countries for children.
https://en.wikipedia.org/wiki/Direct%20numerical%20control
Direct numerical control (DNC), also known as distributed numerical control (also DNC), is a common manufacturing term for networking CNC machine tools. On some CNC machine controllers, the available memory is too small to contain the machining program (for example machining complex surfaces), so in this case the program is stored in a separate computer and sent directly to the machine, one block at a time. If the computer is connected to a number of machines it can distribute programs to different machines as required. Usually, the manufacturer of the control provides suitable DNC software. However, if this provision is not possible, some software companies provide DNC applications that fulfill the purpose. DNC networking or DNC communication is always required when CAM programs are to run on some CNC machine control. Wireless DNC is also used in place of hard-wired versions. Controls of this type are very widely used in industries with significant sheet metal fabrication, such as the automotive, appliance, and aerospace industries. History 1950s-1970s Programs had to be walked to NC controls, generally on paper tape. NC controls had paper tape readers precisely for this purpose. Many companies were still punching programs on paper tape well into the 1980s, more than twenty-five years after its elimination in the computer industry. 1980s The focus in the 1980s was mainly on reliably transferring NC programs between a host computer and the control. The Host computers would frequently be Sun Microsystems, HP, Prime, DEC or IBM type computers running a variety of CAD/CAM software. DNC companies offered machine tool links using rugged proprietary terminals and networks. For example, DLog offered an x86 based terminal, and NCPC had one based on the 6809. The host software would be responsible for tracking and authorising NC program modifications. Depending on program size, for the first time operators had the opportunity to modify programs at the DNC terminal. No
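The block-at-a-time behaviour described above can be sketched in a few lines. The following is an illustration only — the buffer size, the G-code lines, and the chunking policy are assumptions made for the example, and a real DNC link would add handshaking over the serial or network transport.

```python
# Sketch of the drip-feed idea behind DNC: a part program too large for the
# control's memory is streamed one block at a time, never exceeding an
# assumed buffer size. The transport layer is abstracted away; the buffer
# size and the generated G-code lines are illustrative assumptions.

def drip_feed(program_lines, buffer_bytes=256):
    """Yield chunks of whole G-code lines, each chunk at most buffer_bytes."""
    chunk, size = [], 0
    for line in program_lines:
        encoded = line + "\n"
        if size + len(encoded) > buffer_bytes and chunk:
            yield "".join(chunk)
            chunk, size = [], 0
        chunk.append(encoded)
        size += len(encoded)
    if chunk:
        yield "".join(chunk)

# A synthetic 200-line part program (illustrative G-code, not a real part).
program = [f"N{i:04d} G01 X{i % 50}.0 Y{(i * 7) % 50}.0 F500" for i in range(200)]
chunks = list(drip_feed(program))
```

In practice each chunk would only be sent once the control signals that its buffer has room, which is the handshaking a DNC package supplies.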
https://en.wikipedia.org/wiki/Psychology%2C%20philosophy%20and%20physiology
Psychology, philosophy and physiology (PPP) was a degree at the University of Oxford. It was Oxford's first psychology degree, beginning in 1947, but admitted its last students in October 2010. It has been, in part, replaced by psychology, philosophy, and linguistics (PPL, in which students usually study two of three subjects). PPP covered the study of thought and behaviour from the differing points of view of psychology, physiology and philosophy. Psychology includes social interaction, learning, child development, mental illness and information processing. Physiology considers the organization of the brain and body of mammals and humans, from the molecular level to the organism as a whole. Philosophy is concerned with ethics, knowledge, the mind, etc. External links Academic courses at the University of Oxford Philosophy education Physiology Psychology education
https://en.wikipedia.org/wiki/Cometabolism
Cometabolism is defined as the simultaneous degradation of two compounds, in which the degradation of the second compound (the secondary substrate) depends on the presence of the first compound (the primary substrate). This is in contrast to simultaneous catabolism, where each substrate is catabolized concomitantly by different enzymes. Cometabolism occurs when an enzyme produced by an organism to catalyze the degradation of its growth-substrate to derive energy and carbon from it is also capable of degrading additional compounds. The fortuitous degradation of these additional compounds does not support the growth of the bacteria, and some of these compounds can even be toxic in certain concentrations to the bacteria. The first report of this phenomenon was the degradation of ethane by the species Pseudomonas methanica. These bacteria degrade their growth-substrate methane with the enzyme methane monooxygenase (MMO). MMO was discovered to be capable of degrading ethane and propane, although the bacteria were unable to use these compounds as energy and carbon sources to grow. Another example is Mycobacterium vaccae, which uses an alkane monooxygenase enzyme to oxidize propane. Accidentally, this enzyme also oxidizes, at no additional cost for M. vaccae, cyclohexane into cyclohexanol. Thus, cyclohexane is co-metabolized in the presence of propane. This allows for the commensal growth of Pseudomonas on cyclohexane. The latter can metabolize cyclohexanol, but not cyclohexane. Cometabolism in Bioremediation Some of the molecules that are cometabolically degraded by bacteria are xenobiotic, persistent compounds, such as PCE, TCE, and MTBE, that have harmful effects on several types of environments. Co-metabolism is thus used as an approach to biologically degrade hazardous solvents. Cometabolism can be used for the biodegradation of methyl-tert-butyl ether (MTBE): an aquatic environment pollutant. Some Pseudomonas members were found to be able to fully degrade MTBE c
https://en.wikipedia.org/wiki/Light%20fixture
A light fixture (US English), light fitting (UK English), lamp, or luminaire is an electrical device containing an electrical component called a lamp that provides illumination. All light fixtures have a fixture body and one or more lamps. The lamps may be in sockets for easy replacement—or, in the case of some LED fixtures, hard-wired in place. Fixtures may also have a switch to control the light, either attached to the lamp body or attached to the power cable. Permanent light fixtures, such as dining room chandeliers, may have no switch on the fixture itself, but rely on a wall switch. Fixtures require an electrical connection to a power source, typically AC mains power, but some run on battery power for camping or emergency lights. Permanent lighting fixtures are directly wired. Movable lamps have a plug and cord that plugs into a wall socket. Light fixtures may also have other features, such as reflectors for directing the light, an aperture (with or without a lens), an outer shell or housing for lamp alignment and protection, an electrical ballast or power supply, and a shade to diffuse the light or direct it towards a workspace (e.g., a desk lamp). A wide variety of special light fixtures are created for use in the automotive lighting industry, aerospace, marine and medicine sectors. Portable light fixtures are often called lamps, as in table lamp or desk lamp. In technical terminology, the lamp is the light source, which, in casual terminology, is called the light bulb. Both the International Electrotechnical Commission (IEC) and the Illuminating Engineering Society (IES) recommend the term luminaire for technical use. History Fixture manufacturing began soon after production of the incandescent light bulb. When practical uses of fluorescent lighting were realized after 1924, the three leading companies to produce various fixtures were Lightolier, Artcraft Fluorescent Lighting Corporation, and Globe Lighting in the United States. Fixture types Light f
https://en.wikipedia.org/wiki/Kinetic%20bombardment
A kinetic bombardment or a kinetic orbital strike is the hypothetical act of attacking a planetary surface with an inert kinetic projectile from orbit (orbital bombardment), where the destructive power comes from the kinetic energy of the projectile impacting at very high speeds. The concept originated during the Cold War. Typical depictions of the tactic are of a satellite containing a magazine of tungsten rods and a directional thrust system. When a strike is ordered, the launch vehicle brakes one of the rods out of its orbit and into a suborbital trajectory that intersects the target. The rods would typically be shaped to minimize air resistance and maximize velocity upon impact. Kinetic bombardment has the advantage of being able to deliver projectiles from a very high angle at a very high speed, making them extremely difficult to defend against. In addition, projectiles would not require explosive warheads, and—in the simplest designs—would consist entirely of solid metal rods, giving rise to the common nickname "rods from God". Disadvantages include the technical difficulties of ensuring accuracy and the high costs of positioning ammunition in orbit. Real life concepts and theories Predecessors and early concepts During the Korean and Vietnam Wars, there was limited use of the Lazy Dog bomb, a kinetic projectile shaped like a conventional bomb but far smaller. A piece of sheet metal was folded to make the fins and welded to the rear of the projectile. These were dumped from aircraft onto enemy troops and had the same effect as a machine gun fired vertically. Similar flechette projectiles have been used since the First World War. In the 1980s, another kinetic swarm system was conceptualized as a potential part of the Strategic Defense Initiative, there codenamed Brilliant Pebbles. Project Thor was an idea for a weapons system that launches telephone pole-sized kinetic projectiles made from tungsten from Earth's orbit to damage targets on the
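The "destructive power comes from kinetic energy" claim is easy to put numbers on. The sketch below uses E = ½mv² for a solid tungsten cylinder; the rod dimensions and impact speed are assumptions chosen for illustration, not figures from any actual program.

```python
# Back-of-the-envelope kinetic energy of a hypothetical orbital tungsten rod.
# The dimensions (6 m x 0.3 m) and impact speed (3 km/s) below are assumed
# for the sake of the calculation, not specifications of any real system.
import math

TUNGSTEN_DENSITY = 19_300      # kg/m^3
TNT_JOULES_PER_TON = 4.184e9   # joules per ton of TNT equivalent

def rod_kinetic_energy(length_m, diameter_m, speed_m_s):
    """Kinetic energy (J) of a solid cylindrical rod: E = 1/2 * m * v^2."""
    volume = math.pi * (diameter_m / 2) ** 2 * length_m
    mass = TUNGSTEN_DENSITY * volume
    return 0.5 * mass * speed_m_s ** 2

energy = rod_kinetic_energy(6.0, 0.3, 3_000)
print(f"Energy: {energy:.3e} J (~{energy / TNT_JOULES_PER_TON:.1f} tons of TNT)")
```

Under these assumptions the rod carries on the order of tens of gigajoules — comparable to several tons of TNT — with no explosive payload at all, which is the appeal of the concept.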
https://en.wikipedia.org/wiki/Gut-associated%20lymphoid%20tissue
Gut-associated lymphoid tissue (GALT) is a component of the mucosa-associated lymphoid tissue (MALT) which works in the immune system to protect the body from invasion in the gut. Owing to its physiological function in food absorption, the mucosal surface is thin and acts as a permeable barrier to the interior of the body. Equally, its fragility and permeability creates vulnerability to infection and, in fact, the vast majority of the infectious agents invading the human body use this route. The functional importance of GALT in the body's defense relies on its large population of plasma cells, which are antibody producers, whose number exceeds the number of plasma cells in spleen, lymph nodes and bone marrow combined. GALT makes up about 70% of the immune system by weight; compromised GALT may significantly affect the strength of the immune system as a whole. Structure The gut-associated lymphoid tissue lies throughout the intestine, covering an area of approximately 260–300 m2. In order to increase the surface area for absorption, the intestinal mucosa is made up of finger-like projections (villi), covered by a monolayer of epithelial cells, which separates the GALT from the intestinal lumen and its contents. These epithelial cells are covered by a layer of glycocalyx on their luminal surface so as to protect cells from the acidic pH. New epithelial cells derived from stem cells are constantly produced on the bottom of the intestinal glands, regenerating the epithelium (epithelial cell turnover time is less than one week). Although in these crypts conventional enterocytes are the dominant type of cells, Paneth cells can also be found. These are located at the bottom of the crypts and release a number of antibacterial substances, among them lysozyme, and are thought to be involved in the control of infections. Underneath them, there is an underlying layer of loose connective tissue called lamina propria. There is also lymphatic circulation through the tissue connecte
https://en.wikipedia.org/wiki/Certance
Certance, LLC, was a privately held company engaged in design and manufacture of computer tape drives. Based in Costa Mesa, California, Certance designed and manufactured drives using a variety of tape formats, including Travan, DDS, and Linear Tape-Open computer tape drives. Certance was one of the three original technology partners, (Certance, IBM, and Hewlett-Packard), that created the Linear Tape-Open technology. In 2005, Certance was acquired by Quantum Corporation. History The company began as the removable storage systems division of Seagate Technology. The division was formed in 1996 from storage companies Archive Corporation, Irwin Magnetic Systems, Cipher Data Products, and Maynard Electronics. In a restructuring involving Seagate Technology and Veritas Software, the division was spun off in 2000 into the independent company Seagate Removable Storage Systems. The company was the worldwide unit volume shipment leader in 2001, 2002, and 2003. The company name was changed to "Certance" in 2003. In 2004, Quantum Corporation announced plans to acquire Certance. The acquisition was completed in 2005, whereupon Certance ceased to exist as an independent company.
https://en.wikipedia.org/wiki/Proportional-fair%20scheduling
Proportional-fair scheduling is a compromise-based scheduling algorithm. It is based upon maintaining a balance between two competing interests: trying to maximize the total throughput of the network (wired or not) while at the same time allowing all users at least a minimal level of service. This is done by assigning each data flow a data rate or a scheduling priority (depending on the implementation) that is inversely proportional to its anticipated resource consumption.

Weighted fair queuing

Proportionally fair scheduling can be achieved by means of weighted fair queuing (WFQ), by setting the scheduling weight for data flow i to wi = 1/ci, where the cost ci is the amount of consumed resources per data bit. For instance: In CDMA spread spectrum cellular networks, the cost may be the required energy per bit in the transmit power control (the increased interference level). In wireless communication with link adaptation, the cost may be the time required to transmit a certain number of bits using the modulation and error coding scheme that this requires. An example of this is EVDO networks, where reported SNR is used as the primary costing factor. In wireless networks with fast Dynamic Channel Allocation, the cost may be the number of nearby base station sites that cannot use the same frequency channel simultaneously, in order to avoid co-channel interference.

User prioritization

Another way to schedule data transfer that leads to similar results is through the use of prioritization coefficients. Here we schedule the channel for the station that has the maximum of the priority function P = T^α / R^β, where T denotes the data rate potentially achievable for the station in the present time slot, R is the historical average data rate of this station, and the exponents α and β tune the "fairness" of the scheduler. By adjusting α and β in the formula above, we are able to adjust the balance between serving the best mobiles (the ones in the best channel conditions) more often and serving the costly mobiles often enou
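The priority rule T^α / R^β is straightforward to simulate. The sketch below is an illustration under assumed parameters — α = β = 1 and an exponential moving average with window tc for the historical rate R — and is not tied to any particular air interface or standard.

```python
# Sketch of a proportional-fair scheduler. Each time slot, the user that
# maximises T**alpha / R**beta is served, and every user's historical
# average rate R is updated by an exponential moving average of window tc.
# alpha, beta and tc are assumed example parameters.
import random

def pf_schedule(rates_per_slot, alpha=1.0, beta=1.0, tc=100.0):
    """rates_per_slot: list of slots, each a list of achievable rates T,
    one per user. Returns the index of the user served in each slot."""
    n = len(rates_per_slot[0])
    avg = [1e-9] * n  # tiny initial averages avoid division by zero
    chosen = []
    for slot in rates_per_slot:
        user = max(range(n), key=lambda i: slot[i] ** alpha / avg[i] ** beta)
        chosen.append(user)
        for i in range(n):
            served = slot[i] if i == user else 0.0
            avg[i] = (1 - 1 / tc) * avg[i] + (1 / tc) * served
    return chosen

# Two users with i.i.d. fading: user 1 enjoys better channel conditions.
random.seed(0)
slots = [[random.uniform(0.1, 1.0), random.uniform(0.1, 2.0)] for _ in range(1000)]
picks = pf_schedule(slots)
```

Even though user 1's achievable rates are systematically higher, the division by the historical average R keeps user 0 from starving: whenever a user goes unserved, its average decays and its priority rises.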
https://en.wikipedia.org/wiki/Unique%20games%20conjecture
In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines. The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not. Formulations The unique games conjecture can be stated in a number of equivalent ways. Unique label cover The following formulation of the unique games conjecture is often used in hardness of approximation. The conjecture postulates the NP-hardness of the following promise problem known as label cover with unique constraints. For each edge, the colors on the two vertices are restricted to some particular ordered pairs. Unique constraints means that for each edge none of the ordered pairs have the same color for the same node. This means that an instance of label cover with unique constraints over an alphabet of size k can be represented as a directed graph together with a collection of permutations πe: [k] → [k], one for each edge e of the graph. An assignment to a label cover instance gives to each vertex of G a value in the set [k] = {1, 2, ... k}, often called “colours.” Such instances are strongly constrained in the sense that the colour of a vertex uniquely defines the colours of its neighbours, and hence for its entire connected component. Thus, if the input insta
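As a concrete toy illustration of such an instance (the function and variable names below are invented for the example, not a standard formulation): three vertices on a directed cycle, a two-label alphabet, and a "flip" permutation on every edge give an instance in which no colouring satisfies all constraints, so the optimum value is 2/3.

```python
# Toy unique-games (unique label cover) instance: a directed graph whose
# edges carry permutations over the label set, plus a vertex colouring.
# The value of the instance is the fraction of edges e = (u, v) whose
# constraint perm_e(colour(u)) == colour(v) is satisfied.

def game_value(edges, colouring):
    """edges: list of (u, v, perm) where perm is a dict label -> label.
    colouring: dict vertex -> label. Returns the satisfied fraction."""
    satisfied = sum(1 for u, v, perm in edges if perm[colouring[u]] == colouring[v])
    return satisfied / len(edges)

# k = 2 labels; a 3-cycle whose "flip" constraints cannot all hold at once:
# b must differ from a, c from b, and a from c -- impossible with 2 labels.
flip = {0: 1, 1: 0}
edges = [("a", "b", flip), ("b", "c", flip), ("c", "a", flip)]
best = max(game_value(edges, {"a": x, "b": y, "c": z})
           for x in (0, 1) for y in (0, 1) for z in (0, 1))
```

Note how the unique (permutation) constraints make propagation trivial — fixing one vertex's colour forces its whole connected component — which is exactly why the hard question is distinguishing almost-satisfiable instances from almost-unsatisfiable ones, not finding a perfect assignment.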
https://en.wikipedia.org/wiki/CVSNT
CVSNT is a version control system compatible with and originally based on Concurrent Versions System (CVS), but whereas that was popular in the open-source world, CVSNT included features designed for developers working on commercial software including support for Windows, Active Directory authentication, reserved branches/locking, per-file access control lists and Unicode filenames. Also included in CVSNT were various RCS tools updated to work with more recent compilers and compatible with CVSNT. CVSNT was initially developed by users unhappy with the limitations of CVS 1.10.8, addressing limitations related to running CVS server on Windows and handling filenames for case-insensitive platforms. March Hare Software began sponsorship of the project in July 2004 to guarantee the project's future and to employ the original project manager on CVSNT development and commercial support. CVSNT was commercially popular, with a number of commercial IDEs directly including support for it including Oracle JDeveloper, IBM Rational Application Developer, and IBM WebSphere Business Modeler. The CVSNT variation of RCS tools were also widely used, including by Apple, Inc. CVSNT was so ubiquitous in commercial programming that it was often referred to simply as CVS, even though the open-source CVS developers had publicly stated that CVSNT was significantly different and should be kept as a separate project. Several books were written about CVSNT including CVSNT (CVS for NT) and All About CVS. Features CVSNT keeps track of the version history of a project (or set of files). CVSNT is based on the same client–server architecture as the Concurrent Versions System: a server stores the current version(s) of the project and its history, and clients connect to the server in order to check-out a complete copy of the project, work on this copy and then later check-in their changes. A server may be a caching or proxy server (a read only server that passes on write requests to another se
https://en.wikipedia.org/wiki/PCP%20theorem
In computational complexity theory, the PCP theorem (also known as the PCP characterization theorem) states that every decision problem in the NP complexity class has probabilistically checkable proofs (proofs that can be checked by a randomized algorithm) of constant query complexity and logarithmic randomness complexity (uses a logarithmic number of random bits). The PCP theorem says that for some universal constant K, for every n, any mathematical proof for a statement of length n can be rewritten as a different proof of length poly(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof. The PCP theorem is the cornerstone of the theory of computational hardness of approximation, which investigates the inherent difficulty in designing efficient approximation algorithms for various optimization problems. It has been described by Ingo Wegener as "the most important result in complexity theory since Cook's theorem" and by Oded Goldreich as "a culmination of a sequence of impressive works […] rich in innovative ideas". Formal statement The PCP theorem states that NP = PCP[O(log n), O(1)], where PCP[r(n), q(n)] is the class of problems for which a probabilistically checkable proof of a solution can be given, such that the proof can be checked in polynomial time using r(n) bits of randomness and by reading q(n) bits of the proof, correct proofs are always accepted, and incorrect proofs are rejected with probability at least 1/2. n is the length in bits of the description of a problem instance. Note further that the verification algorithm is non-adaptive: the choice of bits of the proof to check depends only on the random bits and the description of the problem instance, not the actual bits of the proof. PCP and hardness of approximation An alternative formulation of the PCP theorem states that the maximum fraction of satisfiable constraints of a constraint satisfaction problem is NP-hard to approximat
https://en.wikipedia.org/wiki/Asymptotic%20distribution
In mathematics and statistics, an asymptotic distribution is a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. One of the main uses of the idea of an asymptotic distribution is in providing approximations to the cumulative distribution functions of statistical estimators.

Definition

A sequence of distributions corresponds to a sequence of random variables Zi for i = 1, 2, ... In the simplest case, an asymptotic distribution exists if the probability distribution of Zi converges to a probability distribution (the asymptotic distribution) as i increases: see convergence in distribution. A special case of an asymptotic distribution is when the sequence of random variables is always zero, that is, Zi = 0 as i approaches infinity. Here the asymptotic distribution is a degenerate distribution, corresponding to the value zero. However, the most usual sense in which the term asymptotic distribution is used arises where the random variables Zi are modified by two sequences of non-random values. Thus if (Zi − ai)/bi converges in distribution to a non-degenerate distribution for two sequences {ai} and {bi}, then Zi is said to have that distribution as its asymptotic distribution. If the distribution function of the asymptotic distribution is F then, for large n, the approximation P((Zn − an)/bn ≤ x) ≈ F(x) holds. If an asymptotic distribution exists, it is not necessarily true that any one outcome of the sequence of random variables is a convergent sequence of numbers. It is the sequence of probability distributions that converges.

Central limit theorem

Perhaps the most common distribution to arise as an asymptotic distribution is the normal distribution. In particular, the central limit theorem provides an example where the asymptotic distribution is the normal distribution. Suppose {X1, X2, ...} is a sequence of i.i.d. random variables with E[Xi] = µ and Var(Xi) = σ² < ∞. Let X̄n be the average of {X1, ..., Xn}. Then as n approaches infinity, the random variables √n(X̄n − µ) converge in d
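The central limit theorem statement above can be checked empirically. The sketch below (standard library only; the sample sizes are arbitrary choices for the demonstration) rescales means of uniform(0, 1) draws, whose variance is 1/12, and confirms that the rescaled means have variance close to 1/12 ≈ 0.083, as the theorem predicts with σ² = 1/12.

```python
# Empirical check of the central limit theorem: sqrt(n) * (sample mean - mu)
# for i.i.d. uniform(0, 1) draws (mu = 0.5, sigma^2 = 1/12) should be
# approximately N(0, 1/12). Sample sizes here are arbitrary choices.
import math
import random
import statistics

random.seed(42)

def rescaled_mean(n):
    """Draw n i.i.d. uniforms and return sqrt(n) * (sample mean - mu)."""
    mu = 0.5
    xbar = sum(random.random() for _ in range(n)) / n
    return math.sqrt(n) * (xbar - mu)

samples = [rescaled_mean(200) for _ in range(2000)]
sample_var = statistics.pvariance(samples)   # expected to be near 1/12
sample_mean = statistics.mean(samples)       # expected to be near 0
```

Plotting a histogram of `samples` would show the familiar bell shape; here the point is just that the variance of the rescaled means stays at σ² = 1/12 regardless of n, matching the asymptotic N(0, σ²) distribution.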
https://en.wikipedia.org/wiki/Crash%20reporter
A crash reporter is usually system software whose function is to identify and report crash details and to raise alerts when crashes occur, whether in production or in development and testing environments. Crash reports often include data such as stack traces, the type of crash, trends and the version of the software. These reports help the developers of web, SaaS, mobile and other applications to diagnose and fix the underlying problems causing the crashes. Crash reports may contain sensitive information such as passwords, email addresses, and contact information, and so have become objects of interest for researchers in the field of computer security. Implementing crash reporting tools as part of the development cycle has become standard practice, and crash reporting tools have become a commodity; many of them are offered for free, like Crashlytics. Many giant industry players that are part of the software development ecosystem have entered the game. Companies such as Twitter, Google and others are putting a lot of effort into encouraging software developers to use their APIs, knowing this will increase their revenues down the road (through advertisements and other mechanisms). As they realize that they must offer elegant solutions for as many development issues as possible, lest their competitors act first, they keep adding advanced features. Crash reporting tools constitute an important piece of development functionality that giant companies include in their portfolios of solutions. Many crash reporting tools are specialized in mobile apps, and many of them are SDKs.

macOS

In macOS there is a standard crash reporter in . Crash Reporter.app sends the Unix crash logs to Apple for their engineers to look at. The top text field of the window has the crash log, while the bottom field is for user comments. Users may also copy and paste the log into their email client to send to the application vendor for them to use. Crash Reporter.app has 3 main modes: display nothing on crash, display "Application has cras
https://en.wikipedia.org/wiki/Environmental%20Change%20Network
The Environmental Change Network (ECN) was established in 1992 by the Natural Environment Research Council (NERC) to monitor long-term environmental change and its effects on ecosystems at a series of sites throughout Great Britain and Northern Ireland. Measurements made include a wide range of physical, chemical and biological variables. See also Climate change External links Environmental Change Network website Environment of the United Kingdom Natural Environment Research Council
https://en.wikipedia.org/wiki/Language%20binding
In programming and software design, a binding is an application programming interface (API) that provides glue code specifically made to allow a programming language to use a foreign library or operating system service (one that is not native to that language).

Characteristics

Binding generally refers to a mapping of one thing to another. In the context of software libraries, bindings are wrapper libraries that bridge two programming languages, so that a library written for one language can be used in another language. Many software libraries are written in system programming languages such as C or C++. To use such libraries from another language, usually a higher-level one such as Java, Common Lisp, Scheme, Python, or Lua, a binding to the library must be created in that language, possibly requiring recompilation of the language's code, depending on the amount of modification needed. Alternatively, many languages offer a foreign function interface, such as Python's and OCaml's ctypes, and Embeddable Common Lisp's cffi and uffi. For example, Python bindings are used when an existing C library, written for some purpose, is to be used from Python. Another example is libsvn, which is written in C to provide an API to access the Subversion software repository. To access Subversion from within Java code, libsvnjavahl can be used, which depends on libsvn being installed and acts as a bridge between the Java language and libsvn, thus providing an API that invokes functions from libsvn to do the work. Major motives for creating library bindings include software reuse (avoiding reimplementing a library in several languages) and the difficulty of implementing some algorithms efficiently in some high-level languages.

Runtime environment

Object models
Common Object Request Broker Architecture (CORBA) – cross-platform-language model
Component Object Model (COM) – Microsoft Windows only cross-language model
Distributed Component Object Model (DCOM) – extension enabling COM to work over net
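As a minimal concrete example of the foreign-function-interface route mentioned above, Python's ctypes module can call into a C library with no compiled glue code at all. The sketch below calls the C standard library's strlen; it assumes a Unix-like system, where loading the running process's own symbols exposes libc.

```python
# Minimal language-binding sketch using Python's ctypes FFI: call the C
# standard library's strlen directly, with no hand-written C glue code.
# Assumes a Unix-like system, where CDLL(None) loads the symbols of the
# running process (which include libc); on Windows one would load msvcrt.
import ctypes

libc = ctypes.CDLL(None)

# Declare the C signature so ctypes converts arguments and results safely:
#   size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"hello, world")
```

Declaring `argtypes` and `restype` is the binding proper: it is the mapping from the C types in the library's interface to the calling language's types, which is exactly the role hand-written wrapper libraries play for larger APIs.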
https://en.wikipedia.org/wiki/Network%20of%20practice
Network of practice (often abbreviated as NoP) is a concept originated by John Seely Brown and Paul Duguid. This concept, related to the work on communities of practice by Jean Lave and Etienne Wenger, refers to the overall set of various types of informal, emergent social networks that facilitate information exchange between individuals with practice-related goals. In other words, networks of practice range from communities of practice, where learning occurs, to electronic networks of practice (often referred to as virtual or electronic communities). Basic concepts To further define the concept, firstly the term network implies a set of individuals who are connected through social relationships, whether they be strong or weak. Terms such as community tend to denote a stronger form of relationship, but networks refer to all networks of social relationships, be they weak or strong. Second, the term practice represents the substrate that connects individuals in their networks. The principal ideas are that practice implies the actions of individuals and groups when conducting their work, e.g., the practice of software engineers, journalists, educators, etc., and that practice involves interaction among individuals. What distinguishes a network of practice from other networks is that the primary reason for the emergence of relationships within a network of practice is that individuals interact through information exchange in order to perform their work, asking for and sharing knowledge with each other. A network of practice can be distinguished from other networks that emerge due to other factors, such as interests in common hobbies or discussing sports while taking the same bus to work, etc. Finally, practice need not necessarily be restricted to include those within one occupation or functional discipline. Rather it may include individuals from a variety of occupations; thus, the term, practice, is more appropriate than others such as occupation. As indicated above
https://en.wikipedia.org/wiki/Allium%20neapolitanum
Allium neapolitanum is a bulbous herbaceous perennial plant in the onion subfamily within the Amaryllis family. Common names include Neapolitan garlic, Naples garlic, daffodil garlic, false garlic, flowering onion, Naples onion, Guernsey star-of-Bethlehem, star, white garlic, and wood garlic. Its native range extends across the Mediterranean Region from Portugal to the Levant. The species is cultivated as an ornamental and has become naturalized in many areas, including Pakistan, Australia, New Zealand, and in southern and western parts of the United States. It is classed as an invasive species in parts of the U.S., and is found primarily in the U.S. states of California, Texas, Louisiana, and Florida. Allium neapolitanum produces round bulbs up to across. The scape is up to tall, round in cross-section but sometimes with wings toward the bottom. The inflorescence is an umbel of up to 25 white flowers with yellow anthers. Allium neapolitanum seems to have beta-adrenergic antagonist properties.
https://en.wikipedia.org/wiki/Torelli%20theorem
In mathematics, the Torelli theorem, named after Ruggiero Torelli, is a classical result of algebraic geometry over the complex number field, stating that a non-singular projective algebraic curve (compact Riemann surface) C is determined by its Jacobian variety J(C), when the latter is given in the form of a principally polarized abelian variety. In other words, the complex torus J(C), with certain 'markings', is enough to recover C. The same statement holds over any algebraically closed field. From more precise information on the constructed isomorphism of the curves it follows that if the canonically principally polarized Jacobian varieties of curves of genus ≥ 2 are k-isomorphic for k any perfect field, so are the curves. This result has had many important extensions. It can be recast to read that a certain natural morphism, the period mapping, from the moduli space of curves of a fixed genus to a moduli space of abelian varieties, is injective (on geometric points). Generalizations go in two directions. Firstly, to geometric questions about that morphism, for example the local Torelli theorem. Secondly, to other period mappings. A case that has been investigated deeply is that of K3 surfaces (by Viktor S. Kulikov, Ilya Pyatetskii-Shapiro, Igor Shafarevich and Fedor Bogomolov) and hyperkähler manifolds (by Misha Verbitsky, Eyal Markman and Daniel Huybrechts).
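The period-mapping restatement can be written out explicitly. The notation below ($\mathcal{M}_g$ for the moduli space of smooth genus-$g$ curves, $\mathcal{A}_g$ for the moduli space of principally polarized abelian varieties of dimension $g$, $\Theta_C$ for the theta divisor) is the standard one, not taken from this text:

```latex
% Torelli theorem as injectivity of the period map:
\[
  \tau \colon \mathcal{M}_g \longrightarrow \mathcal{A}_g,
  \qquad
  C \longmapsto \bigl(J(C), \Theta_C\bigr),
\]
% the theorem asserts that on geometric points, for g >= 2,
\[
  \tau(C) \cong \tau(C') \implies C \cong C'.
\]
```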
https://en.wikipedia.org/wiki/Cue%20validity
Cue validity is the conditional probability that an object falls in a particular category given a particular feature or cue. The term was popularized especially by Eleanor Rosch in her investigations of the acquisition of so-called basic categories. Definition of cue validity Formally, the cue validity of a feature f with respect to a category c has been defined in the following ways: As the conditional probability P(c|f). As the deviation of the conditional probability from the category base rate, P(c|f) − P(c). As a function of the linear correlation between the feature and the category. Other definitions have also been used. For the definitions based on probability, a high cue validity for a given feature means that the feature or attribute is more diagnostic of class membership than a feature with low cue validity. Thus, a high-cue-validity feature is one which conveys more information about the category or class variable, and may thus be considered more useful for identifying objects as belonging to that category. Thus, high cue validity expresses high feature informativeness. For the definitions based on linear correlation, the expression of "informativeness" captured by the cue validity measure is not the full expression of the feature's informativeness (as in mutual information, for example), but only that portion of its informativeness that is expressed in a linear relationship. For some purposes, a bilateral measure such as the mutual information or category utility is more appropriate than the cue validity. Examples As an example, consider the domain of "numbers" and allow that every number has an attribute (i.e., a cue) named "is_positive_integer", which we call f, and which adopts the value 1 if the number is actually a positive integer. Then we can inquire what the validity of this cue is with regard to the following classes: {rational number, irrational number, even integer}: If we know that a number is a positive integer we know that it is a rational number. Thus
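The probability-based definition, P(c|f), can be estimated directly from co-occurrence counts. The sketch below uses a small made-up data set of (feature, category) observations; both the data and the function name are illustrative, not from the literature:

```python
# Hypothetical co-occurrence data: each tuple is one observed object,
# recorded as (feature present, category it belongs to).
observations = [
    ("feathers", "bird"), ("feathers", "bird"), ("feathers", "bird"),
    ("feathers", "mammal"),
    ("fur", "mammal"), ("fur", "mammal"),
]

def cue_validity(feature, category, data):
    """Estimate P(category | feature) from raw (feature, category) counts."""
    with_feature = [c for f, c in data if f == feature]
    return with_feature.count(category) / len(with_feature)

# "feathers" occurs 4 times, 3 of them with "bird":
print(cue_validity("feathers", "bird", observations))  # -> 0.75
```

The deviation-based definition would simply subtract the base rate, e.g. P(bird|feathers) − P(bird), using the same counts.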
https://en.wikipedia.org/wiki/Superdense%20coding
In quantum information theory, superdense coding (also referred to as dense coding) is a quantum communication protocol to communicate a number of classical bits of information by only transmitting a smaller number of qubits, under the assumption of sender and receiver pre-sharing an entangled resource. In its simplest form, the protocol involves two parties, often referred to as Alice and Bob in this context, who share a pair of maximally entangled qubits, and allows Alice to transmit two bits (i.e., one of 00, 01, 10 or 11) to Bob by sending only one qubit. This protocol was first proposed by Charles H. Bennett and Stephen Wiesner in 1970 (though not published by them until 1992) and experimentally realized in 1996 by Klaus Mattle, Harald Weinfurter, Paul G. Kwiat and Anton Zeilinger using entangled photon pairs. Superdense coding can be thought of as the opposite of quantum teleportation, in which one transfers one qubit from Alice to Bob by communicating two classical bits, as long as Alice and Bob have a pre-shared Bell pair. The transmission of two bits via a single qubit is made possible by the fact that Alice can choose among four quantum gate operations to perform on her share of the entangled state. Alice determines which operation to perform according to the pair of bits she wants to transmit. She then sends Bob the qubit state evolved through the chosen gate. Said qubit thus encodes information about the two bits Alice used to select the operation, and this information can be retrieved by Bob thanks to the pre-shared entanglement between them. After receiving Alice's qubit, operating on the pair and measuring both, Bob obtains two classical bits of information. It is worth stressing that if Alice and Bob do not pre-share entanglement, then the superdense protocol is impossible, as this would violate Holevo's theorem. Superdense coding is the underlying principle of secure quantum secret coding. The necessity of having both qubits to decode the inform
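The four-gate encoding and Bob's decoding described above can be simulated with plain linear algebra. The sketch below (using NumPy, with the standard CNOT-then-Hadamard circuit for Bob's Bell-basis measurement) is an illustration of the protocol's state evolution, not the experimental realization:

```python
import numpy as np

# Single-qubit gates Alice can apply to her half of the Bell pair.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ops = {"00": I, "01": X, "10": Z, "11": X @ Z}

def superdense(bits):
    # Pre-shared Bell state |Phi+> = (|00> + |11>) / sqrt(2).
    state = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    # Alice encodes her two bits by acting on the first qubit only,
    # then sends that qubit to Bob.
    state = np.kron(ops[bits], I) @ state
    # Bob decodes with CNOT (first qubit controls second), then
    # Hadamard on the first qubit, mapping Bell states to basis states.
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = np.kron(H, I) @ (CNOT @ state)
    # The result is a computational-basis state (up to global phase),
    # so measurement deterministically recovers Alice's two bits.
    return format(int(np.argmax(np.abs(state))), "02b")

for b in ["00", "01", "10", "11"]:
    print(b, "->", superdense(b))  # each pair of bits is recovered exactly
```

Applying I, X, Z, or XZ maps |Phi+> to one of the four orthogonal Bell states, which is why a single joint measurement on Bob's side can distinguish all four two-bit messages.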